| text (string) | label (string, 4 classes) | id_ (string, 32 chars) |
|---|---|---|
Experimental Setup. In our experiment, we evaluate the performance of ScatterID in both indoor and outdoor environments. Specifically, the experimental settings include an office room with a size of 4.5 m {{formula:862ee761-fee4-4ec6-b20e-8b15b092c916}} 5.5 m and a building rooftop with an area of 8 m {{formula:41e267ba-2eae-4d2d-8adf-b84e91615ff5}} 10 m, as depicted in Fig. REF . The office room is a typical multipath-rich environment, containing some furniture and surrounding walls. The rooftop is a multipath-poor area with few walls and roof guardrails. Following the speed configurations in {{cite:f4e4c92349f90ffeba4ef6b5de113536d5f88ba1}}, {{cite:267a4de3a33c856dd51abee930757e018fd13e64}}, we set all robots to move at a speed of 20 cm/s so that they can easily be controlled at the same time. Based on this moving speed, we set the ScatterID robot to update multipath signatures every 0.6 s, which avoids similar signal propagation across successive signatures and keeps them distinguishable {{cite:3f326025f398ad692014d0f25686dce6c2021f02}}. Note that, generally, the shorter the updating interval, the higher the detection accuracy. However, based on the above analysis, an interval shorter than 0.6 s would increase the information redundancy in a signal profile and consequently yield only a marginal accuracy gain. To launch basic Sybil attacks, a robotic attacker broadcasts signals with two or three IDs at the same time. Moreover, it can trigger power-scaling attacks by changing its transmission power for each fake ID using a random scaling coefficient from the set {{formula:eed172b8-43c7-4f15-ad44-fe02e2814fe5}} . To launch colluding attacks, we first set two Sybil attackers to jointly use a total of four fake IDs. Then, in each transmission, one attacker is controlled to randomly take two of these fake IDs to communicate with the ScatterID robot, and the other attacker uses the remaining two fake IDs for communication. Finally, we conduct our experiment over different days in the two environments and collect six hours of backscattered signal traces in total.
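As an illustrative sketch (not from the cited work), one round of the colluding attack configuration described above could be reproduced as follows; the ID names and the scaling set are placeholder assumptions standing in for the formula placeholder:

```python
import random

FAKE_IDS = ["F1", "F2", "F3", "F4"]   # four fake IDs shared by the two colluders (names assumed)
SCALING_SET = [0.5, 1.0, 1.5, 2.0]    # stand-in for the paper's random power-scaling set

def colluding_round():
    """One transmission round: attacker A randomly takes two fake IDs,
    attacker B uses the remaining two; each ID gets a random power scale."""
    ids_a = random.sample(FAKE_IDS, 2)
    ids_b = [i for i in FAKE_IDS if i not in ids_a]
    powers = {i: random.choice(SCALING_SET) for i in FAKE_IDS}
    return {"attacker_A": ids_a, "attacker_B": ids_b, "power_scale": powers}
```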
| m | 663582f070eca830109f4b286e8d249c |
Despite this result, the finite section method can perform quite well. This is the case for self-adjoint operators {{cite:8d60b2b5d1cdbed4dc2572e4fc0e7917b1fffeb7}}, {{cite:218c72a85dffb7d0877bf233b6d89a41ff8cc4a6}}, {{cite:c9b9c9ffa8731d0875ccdf7ba479a2b5fdeda76e}}, and it is also well suited for the computation of pseudospectra of Toeplitz operators {{cite:22bd187b363a3b618b9846dbc0e47c239a386ffe}}, {{cite:4197252bd5e18cfad0b4b079a773bfe2495b1534}}. Moreover, in general, we have the following (recall that {{formula:e37cc7f0-d17e-4331-851a-54b916ce70d0}} is the convex hull of the essential spectrum for {{formula:ab89a7b0-8718-4cb3-ada3-3b0d3d02f9e3}} normal):
| m | 04ca361f302746f087b5a1fb2485feb2 |
Various constraints on the translation function {{formula:6d008817-890e-485b-95c0-4f7975949d6c}} have been proposed to remedy the ill-posedness of the problem. For example, cycle consistency (CycleGAN) {{cite:12b4bfa097d8a11abda415987ec9138325204154}} enforces cyclic reconstruction consistency, {{formula:ee14df58-9106-4219-aba0-8c146057a6a1}}, which means {{formula:473f3264-c723-44c3-9b72-13488c7dd4e6}} and its inverse are bijections. CUTGAN {{cite:4c247cadc8284d708902729d90acff06242afab5}} maximizes the mutual information between an input image and the translated image via contrastive learning on patch-level features.
The GCGAN {{cite:663e5efa35a23efddff82071b3159b85f8f7d4a4}}, on the other hand, effectively uses geometric consistency by applying a predefined geometric transformation {{formula:30510d84-b679-4cd2-9791-953fb6c2b799}}, i.e., a fixed rotation, encouraging {{formula:af9f1a96-5b7c-4ad3-95ce-8d465ec1f780}} to be robust to the geometric transformation. The underlying assumption of the GCGAN is that {{formula:db4bae12-2eb1-44ab-8b08-83a939bdbd82}} and {{formula:acbd4dd1-0458-4884-b2d6-779b285aace2}} commute (i.e., {{formula:dbe5cc8c-0b2a-443f-abce-60340310b16c}}).
However, CycleGAN assumes a bijective relationship between source and target, which is too limiting for most real-life applications {{cite:4c247cadc8284d708902729d90acff06242afab5}}. For instance, the translation function is non-invertible in the Cityscapes {{formula:1fb9a2c6-c59e-4f7a-9249-0f0ae5c8ac41}} Parsing task. Though the geometry consistency used in GCGAN is a general I2I constraint, it is too weak in the sense that the model can easily memorize the pattern of a fixed transformation.
CUTGAN enforces a strong correlation between the input images and the translated images at corresponding patches; thus it would fail when the patches at the same spatial location do not contain the same content, e.g., in the Front Face {{formula:be13448b-2d04-4412-9076-fa892b80afa2}} Profile task (shown in Figure REF ). Thus, the above models are either too restrictive or too weak for specific I2I tasks. Besides, all of them overlook the extra spatial variations in image translation, which are caused by changes of object size, object distortion, background interruptions, etc.
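For concreteness, the two constraints discussed above can be sketched as loss terms roughly as follows; this is a hedged illustration rather than the cited models' code, and the generators G, F_inv and the transform T are placeholders:

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, x, y):
    """CycleGAN-style constraint: translating forward and back should
    reconstruct the input; G and F_inv are the two (assumed) generators."""
    return F.l1_loss(F_inv(G(x)), x) + F.l1_loss(G(F_inv(y)), y)

def geometry_consistency_loss(G, T, x):
    """GCGAN-style constraint: G should commute with a predefined geometric
    transform T, i.e. G(T(x)) should be close to T(G(x))."""
    return F.l1_loss(G(T(x)), T(G(x)))

# Example fixed transform: a 90-degree rotation on (B, C, H, W) tensors.
T_rot90 = lambda t: torch.rot90(t, k=1, dims=(2, 3))
```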
| i | 5d000ed7d087d5ad1bad2b5f83749e9e |
Our proposed algorithm for computing the log-concave MLE is given in algo:smoothing. It relies on the choice of a smoothing sequence of {{formula:f911c0c8-a39e-48fb-bda6-b9a6778b8b7c}} , which may be constructed using Nesterov or randomized smoothing, for instance. For a non-negative sequence {{formula:241a04d5-5ffc-466a-a31e-831bb84dbd53}} , this smoothing sequence is denoted by {{formula:c65be3c8-c17a-4e04-9f79-6a02ed07703a}} . In fact, algo:smoothing is a modification of an algorithm due to {{cite:3c5984b7f2d3e7cb754c3c6932afdacd02d1dc7f}}, and can be regarded as an accelerated version of the dual averaging scheme {{cite:c7d7483fb4941738e6f1d1bc8f311e218edb187b}} applied to {{formula:43c1cb01-5def-4f15-92cf-d8c31e268110}} .
Algorithm: Accelerated stochastic dual averaging on a smoothing sequence with increasing grids.
Input: smoothing sequence {{formula:377b9284-2c2b-4972-98be-f0e7f869897f}} whose gradients have Lipschitz constants {{formula:5167728c-c330-41dc-affc-f2acaa4eaf37}}; initialization {{formula:f7135531-74a8-44da-adc0-a6f641399eb3}}; learning rate sequence {{formula:8e48f16a-d5e7-4a65-b28f-5446ba76ec44}} of positive real numbers; number of iterations {{formula:7d12b3dd-eded-4397-847d-47c274f95efe}}.
Initialize {{formula:e7314625-5f26-484b-96e2-9c348a2a0396}}, {{formula:74446676-4f54-4bca-92fe-93fd34c1d316}}, {{formula:a8dcd4f3-2229-4d1f-9ac6-0734ba4169eb}}.
For {{formula:0e3c5c6e-c58a-4a23-94f7-d0ea33f5edbd}}:
  1. Compute an approximation {{formula:6a3c3349-567b-48b2-aedd-b64640dfa898}} of {{formula:ab5cb9ea-e261-4707-8d0c-b730a82a3024}}; see Section REF.
  2. {{formula:fa652ac6-7a73-4e5a-a24d-73f59872588c}}
  3. {{formula:b09e8409-2ddb-46b0-860c-5c3cf4f290b0}}
  4. {{formula:ead786eb-b13c-457c-9d0e-38ab527a99ee}}
  5. {{formula:0699f208-1db7-4574-9f6b-1ea0ed266811}}
  6. {{formula:9f9b2146-4e51-4607-921e-e66b00d198dc}}
  7. {{formula:545ef70c-85a4-4555-99fc-8b80d36e737b}}
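Since every update rule in the listing above survives only as a formula placeholder, the following is a generic Euclidean sketch of accelerated dual averaging on a smoothing sequence, under assumed step and extrapolation rules; it is meant only to convey the structure of algo:smoothing, not its exact updates:

```python
import numpy as np

def accelerated_dual_averaging(grad, lipschitz, x0, eta, T):
    """Generic sketch: grad(k, y) returns an (approximate) gradient of the
    k-th smoothed objective at y, lipschitz(k) its Lipschitz constant, and
    eta[k] > 0 are learning rates. The step size and the Nesterov-style
    extrapolation coefficient are illustrative assumptions."""
    x0 = np.asarray(x0, dtype=float)
    x, y = x0.copy(), x0.copy()
    g_sum = np.zeros_like(x0)                 # weighted running sum of gradients (dual state)
    for k in range(T):
        g_sum += eta[k] * grad(k, y)
        x_next = x0 - g_sum / (lipschitz(k) + 1.0)      # dual-averaging (prox) step
        y = x_next + (k / (k + 3.0)) * (x_next - x)     # momentum/extrapolation step
        x = x_next
    return x
```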
| m | e95f9270c5d5f1494038d175a2beea38 |
To explore how the network is able to distinguish between different lymphocytes so successfully, we use stochastic gradient descent (SGD) to optimise an input image {{formula:d26cd2ff-c0a3-4bcb-ac14-ea4d115b2f97}}, set uniformly to the normalised mean of all training-set images, so as to maximise the output logits (raw outputs, without activation or normalisation) for each class, as proposed by Simonyan et al. {{cite:4cf85578668bad18980fc108810b011a65dfab95}}. We extend this technique to multi-class segmentation using a custom loss function designed to balance per-class optimisation for class-specific quadrants in a single image, allowing for easy comparison of learned features.
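A hedged sketch of this quadrant-wise activation maximisation is given below; it assumes the model returns per-pixel class logits of shape (1, n_classes, H, W) and assigns one class per quadrant, both of which are illustrative choices rather than the paper's exact loss:

```python
import torch

def visualise_classes(model, mean_img, n_steps=200, lr=0.1):
    """Optimise one image whose four quadrants each maximise one class's logits.

    mean_img: normalised mean of the training images, shape (C, H, W).
    Quadrant assignment and equal class weighting are assumptions."""
    img = mean_img.clone().unsqueeze(0).requires_grad_(True)
    opt = torch.optim.SGD([img], lr=lr)
    H, W = mean_img.shape[-2:]
    quads = [(slice(0, H // 2), slice(0, W // 2)), (slice(0, H // 2), slice(W // 2, W)),
             (slice(H // 2, H), slice(0, W // 2)), (slice(H // 2, H), slice(W // 2, W))]
    for _ in range(n_steps):
        opt.zero_grad()
        logits = model(img)  # assumed shape: (1, n_classes, H, W)
        # Maximise class c's logits inside its own quadrant, equally weighted.
        loss = -sum(logits[0, c, ys, xs].mean() for c, (ys, xs) in enumerate(quads))
        loss.backward()
        opt.step()
    return img.detach().squeeze(0)
```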
| d | 9fd0144a872b4fa1fcc80ed454e7c3a1 |
In this section, we show that the proposed agent is able to outperform the human benchmark in all 57 Atari games, in only 390M frames.
This is a significant speed-up compared to Agent57 which requires 78B frames to achieve this benchmark.
Figure REF gives further per-game details on the number of frames required to reach the human baseline, and shows that hard-exploration games such as Private Eye, Pitfall! and Montezuma's Revenge pose the hardest challenge for both agents, being among the last in which the human baseline is surpassed.
This can be explained by the fact that in these games the agent might need a very large number of episodes before it is able to generate useful experience from which to learn. Notably, our agent is able to reduce the number of interactions required to outperform the human baseline in each of the 57 Atari games, by {{formula:1be879f8-66e2-45e5-ae42-a83b0d32bfb9}} on average.
In addition to the improved sample-efficiency, our agent shows competitive performance compared to the state-of-the-art methods shown in Table REF , while being trained for only 1B frames.
When comparing our results with other state-of-the-art agents such as MuZero {{cite:6170c2c3bde61b009882c67a28458ae04b5fc7af}} or Muesli {{cite:a4f5db8a6addcb775b75f8db6eddbfe9a6b0ce3a}}, we observe a similar pattern to that reported by {{cite:6b94aeb8282ebbae753d375bec419be555b24018}}: they achieve very high scores in most games, as denoted by their high mean and median scores, but struggle to learn completely on a few of them {{cite:06ed8fbe92b52918257a7ee3ccdf66ae5c798f99}}. It is important to note that our agent achieves the human benchmark in all games, as demonstrated by its higher scores on the lower percentiles.
{{table:c3d5c62e-818b-4f2e-9e73-25c736e2e203}} | r | 4e5d0af83a10f908b202f20fe3161cff |
where {{formula:eeb35739-42dd-4694-9364-cc5610f849d9}} is a convenient inner product in {{formula:5b8cd490-632c-4e31-a46d-40be997dad19}} such that the equality (REF ) holds (see {{cite:c9736365471bbff0b8a10dd77f8956199b23f28d}}). Notice that a possible choice for {{formula:4fc4309f-2dca-482f-b70f-3b722068a982}} is the orthogonal complementary subspace of {{formula:6c15cf74-0fe8-43b1-afd8-cb6238bc91aa}} characterized by {{formula:966406ed-dde8-46a8-be71-df232f2e1581}} , where {{formula:00ae9607-4918-4f1b-b179-fa0698c025ca}} is the adjoint operator of {{formula:022abc20-52a4-4d16-aace-bcb042a8e59f}} . Since {{formula:4705f6d2-adca-4f84-91b6-ea836ed50ced}} acts symplectically on {{formula:57e855ce-1ada-4cc9-af0b-67a9fe05f13f}} , we highlight that {{formula:18f445e4-bb39-409f-b736-18f5a1d1dec5}} is an {{formula:6044fc67-365c-4a02-a5f5-2edb9f6c1e7e}} -Hamiltonian vector field (see Proposition REF and {{cite:a056a38b9750800db8f5cd2c63018000169d7472}}), that is, there exists a quadratic form denoted by {{formula:5438d892-9a9c-4aa6-9370-64c28f434292}} such that {{formula:70be0f55-0aea-4da8-b6b8-c712d694ae5f}} . For each {{formula:55b7ea95-8c8e-4f60-b44f-8c0862fa950c}} , it is possible to prove that {{formula:fc4cf64f-b3c8-4d30-84ea-f307678e3d09}} is the adjoint operator of {{formula:f40d01a4-80d0-47a6-9584-a4911657b47a}} with respect to the inner product {{formula:17ff6bdc-345c-4be7-8096-fda53f0481e3}} . The proof for this result is similar to that found in {{cite:afd0ebdfa060a6fea12bd2c2f18b2d9f96ac82a4}} for {{formula:9db45477-d2cf-43c2-9c95-efd834987822}} .
| m | 7aa93d9b5f6159a280055a604bf7e25d |
Another obvious direction is to extend the current formalism to Baryons. Doing so will determine if the current formalism can explain the experimental puzzle that at LHC energies {{formula:7c031c6f-7f1d-4f20-98f7-5e1a98660b2f}} polarization is consistent with zero, while spin alignment is not {{cite:442cf0373a34572874e8df00366fc34e36ba21f5}}. A look at Equation REF shows such an explanation is indeed possible:
The equivalent of (REF ) for baryons would be an even more complicated coalescence term on the right-hand side (three fermions and a vorticity) and an even more restrictive wavefunction on the left-hand side (a qubit instead of a qutrit). Hence, there are more ways for vorticity and quark-spin effects to cancel in a baryon than in a spin-1 meson.
| d | ca3c99f8919a32dd1a3eac999c62094d |
In the following part, we further review a representer theorem in {{formula:c7819219-8061-4229-bb2a-8cfaae300b22}}, which is the cornerstone for constructing our proposed nonlinear {{formula:acebb5db-5828-41d2-ad6a-4d06258f9f13}}-SVM.
In mathematics, there are many closely related variants of this theorem. One of them is the Riesz representation theorem {{cite:12322792f32665e0ec6d52e2a53949b4bcad3f65}}, which establishes an important connection between a Hilbert space and its continuous dual space.
Wahba {{cite:71d1e6051441b164a26f3254d0b3515d3b38d48b}} applied the Riesz representation theorem on {{formula:83967d86-9185-4992-88af-e0aca91c1186}}, showing that the solutions of certain risk minimization problems, involving an empirical risk term and a quadratic regularizer, can be expressed as expansions in terms of the training examples. Schölkopf et al. {{cite:8652837355338d1c482379ebb01a68e4c326f9a5}} further generalized the theorem to a larger class of regularizers and empirical risk terms, and explicitly give the nonparametric formulation of the general theorem as follows:
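The excerpt cuts off before the statement; for reference, the standard nonparametric formulation of the generalized representer theorem (supplied here in the usual notation rather than copied from the source) reads:

```latex
\begin{theorem}[Nonparametric representer theorem]
Let $\mathcal{H}$ be a reproducing kernel Hilbert space with kernel $k$ on a set
$\mathcal{X}$, let $g:[0,\infty)\to\mathbb{R}$ be strictly increasing, and let
$c:(\mathcal{X}\times\mathbb{R}^{2})^{m}\to\mathbb{R}\cup\{\infty\}$ be an
arbitrary loss. Then any $f\in\mathcal{H}$ minimizing
\[
  c\bigl((x_{1},y_{1},f(x_{1})),\dots,(x_{m},y_{m},f(x_{m}))\bigr)
  + g\bigl(\lVert f\rVert_{\mathcal{H}}\bigr)
\]
admits a representation of the form
\[
  f(\cdot)=\sum_{i=1}^{m}\alpha_{i}\,k(\cdot,x_{i}),\qquad \alpha_{i}\in\mathbb{R}.
\]
\end{theorem}
```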
| m | 068b60fc651e9aae50fe05fa534c368e |
Conversely, the right plot of Fig. REF shows that, if multiple planets sculpt each disc, then for eight systems the minimum-mass sculpting planets would be too small to also stir the discs (to {{formula:6067b814-c0b2-4206-add3-cdbc6e72b7fc}} ). These systems are {{formula:56717210-f0d2-4cf3-8ae2-4fe6960377e1}} ({{formula:d9115ba7-4270-4147-b632-c1edb6015f77}} ), {{formula:23a34c86-3fd9-472c-8455-8d099fc0216d}} ({{formula:d6675fab-f497-4969-9fc0-5fd3c5073708}} ), {{formula:4e2c1849-ecd1-492c-8f08-dabd5ba0d84e}} ({{formula:43bcb497-e78d-47b8-a3df-bb00b4dda251}} ), {{formula:d5684b0c-8967-4474-b514-25f9c1184cb1}} , {{formula:72230596-b339-40a2-9e76-cbcc92de8e36}} ({{formula:e31a0f19-a293-4ff1-9f7f-ad0f6411212b}} ), {{formula:4121659a-b738-42c0-8888-143f24884e58}} , {{formula:20328a4c-3148-472e-8c94-f49e969f1de7}} , and {{formula:bcab542c-2d91-45a2-88e2-8ad0ddb17bec}} . If the inner edges of these systems are carved by multiple planets, then those planets must either be larger than the minimum masses derived in Sect. REF (in which case they could also stir the discs), or the stirring is performed by some other process(es). Some of these discs could be self-stirred without requiring disc masses greater than {{formula:0da6b98c-3bdf-455b-a5f2-6a4e7459fc27}} , so for these systems a mixture of self-stirring and multi-planet sculpting could be occurring. Alternatively, the planet(s) responsible for stirring the discs could be different to those sculpting them; larger planets located further in could stir the discs, whilst smaller ones at the disc edges could sculpt them (discussed further in Sect. REF ). Another possibility is that, since some of these discs contain gaps that could host planets, planets located in the gaps could be responsible for stirring, whilst other planets sculpt the inner edges. However, two of the above discs ({{formula:0403ef84-6d57-4dc3-adbf-8f2cd8bdf9b0}} and {{formula:74538218-cee0-41b0-b58f-4a9d7d814455}} ) have no observed gaps, and would require masses of more than {{formula:26826582-9316-42fa-b10d-f825b675a548}} to be self-stirred, which are possibly unfeasibly high; for these systems it seems likely that the sculpting planets are larger than the minimum masses derived assuming multi-planet sculpting, and that these planets also stir the discs, or that other interior planets stir the discs. Note that both of these discs also have gas detected {{cite:29b2246e9b6780deca968a503e9190890aba4034}}, {{cite:059438cb3bb44d891d15d94e78abf1b41eb414d4}}, so dust-gas interactions could also be occurring in them (e.g. {{cite:920c34e9a284560ce79094f202269c9935d9d7ff}}). In summary, if one planet sculpts each of our discs, then the discs could also all be stirred by that planet or a known companion; conversely, if the discs are sculpted by multiple planets, then for some systems the minimum planet masses required for sculpting are smaller than those needed for stirring.
| r | 7868b1648ebdf8b264dc043b1f26a803 |
By introducing a dividing surface and defining its potential {{formula:d0d4113c-7707-4d2e-877c-35dc04c4feab}} and corresponding concentration of electrolyte {{formula:22d0b2b5-79e8-48ed-b881-e24e630d1c33}} , the expression for the apparent slip velocity can be derived from the asymptotic solution for a flow in the
inner region {{cite:bc7cc0fed3151cc11d714428b7fca3edcb00cbfa}}, {{cite:f67440913ddab690f71bbc1c515955a60460302e}}
{{formula:73b218f2-7cbb-40ad-8f1e-cb76255a8302}}
| m | 0c9f2aaae80754fec6251c74cc9c12de |
We employ MontePython {{cite:1c9cf95339f5b01a8354ce85d15275d341275380}}, {{cite:41ad97d9439de03fbe2bc019c6e25ae0c495d40f}} to explore the parameter space of our CDE models. We use the regular Metropolis-Hastings sampling algorithm {{cite:a95f7d1a966fb5f475d8703c917817eb7dca26e9}}, {{cite:bcca5fc80980e40f5285e6dfea9de4271d974f20}}, and stop the Monte Carlo when the Gelman-Rubin convergence statistic satisfies R{{formula:ae0d9493-6b0d-4467-a5bd-2258488ed19d}}. On top of the {{formula:97fb699a-8c6d-4089-83fe-08f2120a3448}} 's, we vary the reduced baryon density, {{formula:29f63449-42c9-4e24-8fa8-e5ed7ad856e3}} , the reionization optical depth, {{formula:c386624e-c670-4c18-ba1a-5c664351ed4e}} , the cosmological constant {{formula:3eca74bb-fa88-4dd4-ac9f-0e138e7570be}} , the initial CDM density at {{formula:2a47a31a-25fb-4c7f-9cc3-6dd7e097b1e4}} , and the parameters that characterise the shape of the primordial power spectrum, i.e. its amplitude {{formula:f459b790-64aa-42bc-a4b3-0f4ab9cdb2fc}} and the spectral tilt {{formula:f007d58e-ae7e-4891-85a9-4048d9f9aa38}} . Notice that we directly vary the initial CDM density instead of the reduced CDM density, {{formula:32d17e2c-832b-4ee0-ab2f-0fabe8dc0a65}} . This allows us to avoid the shooting method and speed up the Monte Carlo runs, as in {{cite:99470136910057cd6ae03440745a842af21ab3e7}}, {{cite:f2e5c862c6f1a7657817d7058cd38cca1324c744}}. In this paper, we are interested in setting the initial conditions in the past and evolving quantities under the effect of the varying coupling, in order to see the effect on present quantities. We keep the Hubble parameter, {{formula:217fcb7a-60cf-4d14-8aed-20a88da23e1f}} , {{formula:a61ce733-5dab-49c5-81f3-38e9f91dff4f}} , the root mean square of mass fluctuations at scales of {{formula:6ac6b5bf-4171-4d13-8f52-3979eb5fe009}} Mpc, {{formula:aa21c295-35d8-4a2c-9846-51fc66774151}} , and {{formula:86287f1e-5f25-40e8-931e-bc3c6ca44ef0}} as derived parameters. We consider two massless neutrinos and one massive neutrino of 0.06 eV. For the initial conditions of the scalar field, we are allowed to set {{formula:68874f85-d4a7-4cf2-8b4e-3b5c9d955d37}} and {{formula:5c374b35-95f2-4c0a-9299-c50e1ba42167}} , since the scalar field has no dynamics during the radiation-dominated epoch owing to the negligible impact of the potential in that era, and the equations do not depend on {{formula:9f3cceb5-8e9b-4ab2-a0f4-135200fcd2af}} but only on its time derivatives.
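For readers unfamiliar with the sampler, a minimal Metropolis-Hastings sketch with a Gaussian random-walk proposal is given below; all names and the fixed step count are illustrative, whereas the actual runs use MontePython with the Gelman-Rubin convergence criterion:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, prop_cov, n_steps, seed=0):
    """Minimal Metropolis-Hastings sketch (Gaussian random-walk proposal)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = [theta.copy()]
    for _ in range(n_steps):
        prop = rng.multivariate_normal(theta, prop_cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob. min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```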
| m | 2520d6f2e4647010a4cf50a567729a15 |
The 1st quartile, median, and 3rd quartile values for question one were {{formula:0bbf7f94-e282-43f4-952f-9e869fa81e52}} , {{formula:28bdf529-bf05-4e28-9069-15af736df135}} , and {{formula:d71e7a4a-28e6-4b79-b32a-60f380431bbf}} respectively. For question two, we define orientation error between two quaternions {{formula:da30f8f0-6299-42d2-888f-dd1857229754}} as {{formula:c5160aea-64e0-46ec-be8a-1438a8ab84fa}} [{{formula:c5d5ec9d-42e5-45e5-bbf2-6422c0c89fc1}} ] {{cite:b8f36a308e80ed10d20d7058d88aca9e90069571}}. Average error was {{formula:b2063b3f-77f2-42bb-adb5-15d7f190c6ac}} , or {{formula:25ff1a6d-f9e2-4e41-9b42-c5808492ea7c}} of the max possible error {{formula:3d2dda55-d5b7-4d32-86aa-7eb98d842609}} . This indicates that our model matched users' desired orientations well.
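A common concrete choice for such a quaternion orientation error is the geodesic rotation angle, sketched below; since the paper's exact definition sits behind a formula placeholder, this is an assumed but standard metric, whose maximum possible error is pi:

```python
import numpy as np

def orientation_error(q1, q2):
    """Geodesic angle (radians, in [0, pi]) between two unit quaternions.

    The absolute value of the dot product handles the q / -q double cover."""
    dot = min(1.0, abs(float(np.dot(q1, q2))))
    return 2.0 * np.arccos(dot)
```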
{{figure:fbf7caff-9d5c-49bf-8c6b-1033f84e1c4d}} | r | be727fb5400c275e81535134d6a5b4e8 |
Then we can solve {{formula:1c34b815-d2c6-4c59-82b6-7d6fc0d003d8}} and {{formula:7a2998ff-4d92-480a-84a9-faf57c9f59c2}} on the same mesh using the DG method. The LDG method shares all the nice features of DG methods for hyperbolic equations, and it has become one of the most popular numerical methods for solving convection-diffusion equations. However, due to the discontinuous nature of the numerical approximations, it may not be easy to construct and analyze the scheme for some special convection-diffusion equations. For example, the convection terms of the chemotaxis model {{cite:7744c9f26d52322b19bcb0d7f54e2102f18a1ed7}}, {{cite:2cf7b2003410c405129104691bbcf654f898b47f}} and of miscible displacements in porous media {{cite:b80eda1413b76447a6a9ebbaf01fa6cb7e21c380}}, {{cite:5e5b622b9c1079f83ebc0f3ad9050fa9f1c49793}} are products of one primary variable and the derivative of another, so the upwind flux for the convection term may not be easy to obtain. One alternative is to use other methods, such as the mixed finite element method, to obtain continuous approximations of the derivatives; see e.g. {{cite:5f6a82c97aaa6728841b451f5c7d558fb6087b48}}. A more general idea is to use the Lax-Friedrichs flux; see e.g. {{cite:8457a8ef301e89f845fdc9a0dc0ea95cb63b3fed}}, {{cite:c6a056b3154cf49b222735520dcfb77bc3172bd0}}, {{cite:27e54959dac208e874d618106568dd99a79e7a03}} for error estimates for miscible displacements and chemotaxis models. The main technique is to use the diffusion term to control the convection term {{cite:7047799377f5094db651c19bb2f8158f5fbbd598}}, {{cite:23b491a4e32dfb41f24c3c123855589d0b3a26d3}}, {{cite:87a22e9146813c1af523a2d96f4af6aaa74bcbc8}}. Moreover, to make the numerical solutions physically relevant, we have to add a sufficiently large penalty which depends on the numerical approximations of the derivatives of the primary variables {{cite:0072e0e9a61baf9074d4d89205dc6d8f0a548410}}, {{cite:c6a056b3154cf49b222735520dcfb77bc3172bd0}}, {{cite:e9dc2580017dc95fed6042cb496ce01944cb78b9}}. Another possibility is to construct flux-free schemes, such as the central discontinuous Galerkin (CDG) method {{cite:9cd9e032f8fd81210d60e1454eba3bf8ea3bac4a}} and the staggered discontinuous Galerkin (SDG) method {{cite:a6658ad316c5792d4ceeb53cfe61ccdf747afbd7}}. However, the CDG scheme doubles the computational cost, as we have to solve each equation in (REF ) on both the primary and dual meshes, and it is not easy to apply limiters in the SDG method because it requires partial continuity of the numerical approximations.
| i | f2a7af3f1433456a6a79e7805aeb7a18 |
Larger LMs plagiarize more. Consistent with {{cite:38e4916376a8bb83c4f525995dc2edd37d4405ce}} and {{cite:029c9fb0f37116a5ac58aaac5769f0bae857d172}}, we find that larger models (large and xl) generally generate plagiarized sequences more frequently than smaller ones. Depending on the decoding approach, however, the model size that yields the largest amount of plagiarism changes: when the next token is sampled from a truncated distribution, the GPT-2 large model plagiarizes the most. On the other hand, GPT-2 xl becomes more strongly associated with plagiarism than GPT-2 large when temperature sampling without truncation is employed. This discrepancy may be attributable to the error rates of our paraphrase and idea plagiarism detection tool. Regardless, it is evident that larger models plagiarize significantly more training data. Considering LMs' performance improvements with larger model sizes, this finding sheds light on a trade-off between model performance and the authorship or copyright protection of training samples.
| d | 3a7f692241f5a791c6935daf07bf4e6f |
We agree that on-shell we have {{formula:ad9a70f3-99e4-41e4-b790-7aef3b06d1a6}} for the total system (gravity+matter). But this does not cancel the anomaly of the local gauged conformal symmetry, which is supposed to be a local symmetry of conformal gravity. Analogously, {{formula:33747cf4-aa48-454c-887d-9535bd65840b}} does not guarantee that the local gauge anomaly is cancelled in models with chiral fermions and gauge symmetries; it only guarantees that the global gauge anomaly is absent from the model. If local gauge anomalies are present, the theory is sick and should not be considered at the quantum level. Moreover, the argument from {{cite:9898728e9d5daa30812574e7ceafd3922291da05}} about the vanishing of the total EMT on-shell seems too robust: it holds for any gravitational theory (with diffeomorphism symmetry) and for any conformal matter as a gravitational source (such matter must have {{formula:bafd6f50-7ab4-4afa-aeca-93a44f5455a9}} on the matter EOM to be conformally coupled). The original problem with the CA originated in the matter sector, and this was the global conformal anomaly. Some people claim that the anomaly from the matter side is cancelled by the coupling to conformal gravity. But pure conformal gravity should then solve this problem on its own. How can the gravitational contribution be sufficient for gravity alone, yet adapt to the matter content in such a way that a balance is reached and the total contribution to the CA vanishes off-shell? This is impossible: the one-loop beta functions of the two sectors (matter and gravity) are independent and additive, so the contributions cannot adjust to whether matter is added or not so as to always balance; hence the total anomaly does not vanish in full generality.
| r | 01ba1982e0e6cffc2db81ae266e33333 |
We can also see that in the case where both metric 1 and metric 2 are flat metrics, as in our studies of cosmological solutions, or for the warped spaces in the case where the warped space REF is defined with {{formula:cf5b0321-6f2f-4e07-bb7b-7c7b45b2bf44}} being de Sitter rather than the more generic Schwarzschild-de Sitter case, the solutions are most likely exact and not just first order in the slope, since not only does the Ricci tensor vanish but so does the Riemann curvature, and hence so will all higher-curvature corrections to the beta function.
In another paper we will address other scenarios for cosmology and warped spaces, where we will emphasize the effect of dynamical string tensions on the possibility of escaping the Hagedorn temperature; this could be possible if the string tensions become very large in the early Universe or at some position in the warped coordinate {{cite:a24955f6bdf519951563eccd8a7cf17a3693394f}}. In this case, in certain regions of space both types of string tensions are positive and go to infinity in such a way that their ratio is one, and for equal-tension strings the more conventional string interactions (besides the one governed by the tension field) have a chance to operate. Notice that a multi-string-tension scenario, formulated phenomenologically in order to avoid the Hagedorn temperature, was proposed by Oleg Andreev {{cite:ff60236fbd7527a644ff1d93e09c16c81d712256}}; this seems to be the only other place in the literature, besides our research, where a multi-string-tension scenario has been discussed.
| d | 8e8c5002f1cd6f428ce33ee6b00adab3 |
These challenges open a number of interesting questions for future work.
For instance, novel uncertainty estimation methods for deep neural networks {{cite:9c176e0468e865a59bc79c38599c8b6cabc4c253}}, {{cite:c2d0805a9437b9f0c9b701ed53ec923d211edd10}} can be employed as a proxy for assessing segmentation quality on previously unseen combinations.
In addition, although overfitting to specific markers can be naïvely addressed with a validation set containing as many markers as possible, investigating how to adapt the dropout rate of MS to different experimental settings may lead to solutions with superior results.
Lastly, our MS-ME model may enable novel possibilities in the context of transfer learning {{cite:fd3a36afc9dc868129139f3f34f33526dd7fbaec}}, allowing seamless application of trained networks to distinct biological tissues with new markers.
For example, fine-tuning only the few parameters in the ME module as opposed to expensive, classical fine-tuning strategies may yield competitive results if the core network already learned suitable general purpose features.
| d | 18b5f8e313facd27aabcd5d1d2382226 |
Note that the fault-tolerance bound cannot be beaten for arbitrary pairwise communication {{cite:c2a94f472b292916343089f78e0d6dc535514ac7}}, {{cite:57c9270a9cbf2747be93a28233981f31d7ae9fe1}}, {{cite:89ba41c9b33ab6e2e28a2a4884d44023d441010f}}, {{cite:d4fd7caff46f86191350875d1f46ae488706bfde}}, i.e., not even quantum channels can help solve this problem. However, the QDS is a useful tool for solving it. By its nature, a three-party QDS establishes a correlation among the three participants that ensures unforgeability and nonrepudiation {{cite:496dcc90a414fc34f6d39882860c4f13b57c84b5}}, {{cite:5f75c345d577133dea1dc92b43f4a57ee813affc}}. The secret-sharing correlation plays a fundamental role in QDS {{cite:5f75c345d577133dea1dc92b43f4a57ee813affc}} and is equivalent to using multiparticle correlation. In addition, the consistency check between each pair of adjacent multicast rounds enables the unforgeability and non-repudiation to work throughout the whole protocol. This connects the correlations of every pair of rounds and can effectively prevent malicious players in the system from sending conflicting messages at will, as we can see in the security analysis. Thanks to the three-party correlation established by the QDS and the consistency check, our QBA protocol is equivalent to using quantum entanglement to construct correlations among all players, i.e., to correlate the communications between all pairs of players. Due to the unique nature of the multiparty correlation provided by quantum physics, our protocol is able to break the fault-tolerance bound even under pairwise communication, which demonstrates quantum superiority for the Byzantine consensus problem.
| d | 0de9b5b092366859a11ab933dd81c73f |
Relatedly, information theory has often been invoked in explaining complexity {{cite:bd873b7e791fda360f1efbe91b136d2daeb0fbb1}}, {{cite:da37bce73ae0e4b72c2cbe2b6bcda9636ee90722}}, {{cite:b7fffe9ef05e7c128cf0b5bcc1564a77b47ff9e3}}. Specifically, it has been hypothesized that major evolutionary transitions involve changes in the way information is stored and transmitted {{cite:019eb37333f91b00532b3427ea3678ff7835e390}}, and communication of information is a prerequisite for the emergence of cooperation {{cite:f4f9423c27f08638fbe30769445e6eca0a303449}}. The emergence of TFT as one of the simplest conditional strategies to encode also contributes to laying out the foundations for theories of life that, in the spirit of determining which features are universal in every possible living form, consider that its fundamental character lies in its capacity to process informational content {{cite:9541cadc89376fc6025b8b1cf7037b56fc18b4f3}}, {{cite:360cd1221313c9572221a662e2916f7b993fafbc}}, {{cite:dd89303b1ce509f26d998f9d3cd84712c1dbad32}}. If life is defined as “information that copies itself” then the origin of life problem becomes one of the origins of this information. Likewise, the problem that ought to be solved concerning the transition from the abiotic to fully-functional replicators encoding their own replication mechanism, in the context of models of the emergence of life on Earth such as the RNA world hypothesis, is an informational one. It is precisely this gap in the evolutionary history of the information content of early replicators that a minimal information-processing strategy such as TFT —whatever its precise physico-chemical implementation in a given early lifeform— could bridge.
| d | fe6877ac2063965d5878be12665cbc31 |
The spectroscopic [Fe/H] derived here also agrees with previous
spectroscopic analyses. {{cite:fe7664b44ae241401009f7eec706dcd53b70b381}} obtained CaT spectra of
individual M31 field stars, including at least one star that is a
likely member of G1 based on its radial velocity; this likely member
has a CaT-based metallicity of {{formula:1347e92d-2740-4565-b122-159e9ede7d39}} . Other IL
spectroscopic analyses at lower-resolution also find similar
metallicities. The earliest abundance analysis by
{{cite:5d595d8e55a21b467c592b73152d18c634e2f4a0}} found {{formula:c42faa01-8e51-4bec-bc89-fa0af92c803c}} , while subsequent
analyses found similar results (e.g., {{cite:74a0e1aa80c583f880ac157b8eda5f6b53f45d1f}} found
{{formula:24908727-f59c-4fde-8ab4-a0a642dbc854}} ). The Revised Bologna Catalog
(RBC; http://www.bo.astro.it/M31/) reports
{{formula:f0ade7ca-73c7-43d8-a8e7-488e4ff32ae5}} , based on a Lick index analysis
{{cite:9a36a5fb656bd15f164831a6582754b09cd030bf}}—however, {{cite:1a61038fb505c43dac705c8186c2b2ddca12a1b5}} note that the Lick
index [Fe/H] ratios of their sample of M31 GCs are slightly higher
than the high-resolution values for clusters at {{formula:aa37eab9-db29-42d2-b69a-ad7bb3005ff5}} .
The spectroscopic value in this paper is therefore generally
consistent with other values from the literature.
| r | f5ac129d891ab884a2468bc3a1183b3e |
Considering its applicability, image dehazing can usually serve as a preprocessing step for other computer vision tasks. The proposed image dehazing algorithm can be used to assist object detection, as shown in Fig. REF, which compares object detection results before and after dehazing with the proposed UR-Net-7. The detection algorithm is SNIPER {{cite:6fa1debee641e65f08615a68da6bc458aef137cc}}; we only use the released code (https://github.com/mahyarnajibi/SNIPER) and the pre-trained model for detection. From the detection results of the two images in the middle of Fig. REF, we can see that more objects can be detected after dehazing with UR-Net-7 (the objects in the red rectangles).
| d | 30a5f8f5952cb1f9f806be2303f2c2ad |
The standard feature representation choice for music alignment is a time-chroma representation generated from the log-frequency spectrogram. Since this representation only relies on pitch class information, it ignores variations in timbre and instrumentation, and is not adaptable to different acoustic settings.
Using neural networks helps us avoid manual feature engineering while providing the capability to adapt to different settings. Rather than extracting a feature representation from the inputs, we focus on the task of constructing a frame-similarity matrix. This matrix is then passed on to a DTW-based algorithm to generate the alignments. We employ a “Siamese” Convolutional Neural Network {{cite:3dca5d8598914caa856fd3e79251506f899d9fb8}} for this task. This framework has shown promising results in the field of computer vision for computing image similarity {{cite:3bcd1a47fb012955fea3f3d91dac2b6ee206a585}}, as well as in the field of natural language processing, for learning sentence similarity {{cite:9d8352bfe1010a621617f66038e644f719b2cf81}} and speaker identification {{cite:734afd75b95f6f2b883151a71e7b237b4a592341}} amongst others.
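To make the pipeline concrete, a minimal sketch of the two stages (frame-similarity matrix, then DTW) is given below; the cosine similarity over generic frame embeddings is an assumption standing in for the Siamese CNN's learned similarity:

```python
import numpy as np

def similarity_matrix(emb_a, emb_b):
    """Cosine frame similarity between embedded sequences of shape (n, d) and (m, d)."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return a @ b.T

def dtw_path(sim):
    """Plain DTW on a similarity matrix, maximising accumulated similarity."""
    n, m = sim.shape
    acc = np.full((n + 1, m + 1), -np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = sim[i - 1, j - 1] + max(acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:  # backtrack along the best predecessors
        path.append((i - 1, j - 1))
        step = int(np.argmax([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]
```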
| r | e1bc9388a7f96a6fa87194c82b84da94 |
Other bi-class tasks. The proposed SCAST generalizes well to other bi-class detection tasks. We evaluate two further bi-class detection tasks: vehicle detection and pedestrian detection. Experiments on the adaptation task GTA5 {{cite:140b1f7123101d761976e6b7e4e29fe93445540c}} {{formula:afea3388-44b7-4ecc-aceb-d4b879416e7b}} Cityscapes {{cite:363e9d46bf03f9ba9d9e8c0393581b3f2a6cd8e9}} show that SCAST outperforms the state of the art by 2.9% and 3.1% (in foreground IoU), respectively. Please refer to the appendix for details.
| d | 6b8d945d9b8e368e3f21080188715bde |
The Weis multiplier theorem allows one to deduce {{formula:8dbc1be6-1db4-4bdc-9078-1e7b05779423}} maximal regularity for problems connected with fluid dynamics. We refer to {{cite:4591c567c73668b253569429ca83aad8ba191ec7}} and {{cite:dcea05b2a9ba092e3cad20e8d36fd35b80ba8673}} for a comprehensible description of the method.
The maximal {{formula:2473c546-f639-4d64-9a59-214dcf4a798f}} regularity result for incompressible fluids can be found in the work of Shibata and Shimizu {{cite:71197da1f0d750f85dcf34a0750e224d5f9c0b76}}; for the non-Newtonian incompressible situation it was established in the work of Bothe and Prüss {{cite:8e275e71b1dec1dd8ba4bdcd4311ecfc159f15d1}}. Maximal regularity for the compressible case was shown by Shibata and Enomoto {{cite:286c6a252cd78c84824ece42219b717a86c39864}}.
| d | 8e8b302ed82bdccb057effafa5c62872 |
For the second strategy (w/o detection and w/ correction), we simply select clean labels according to the activation scores from the initial CAM. Similar to {{cite:b7b6a365bd61ed862cfd69a0d7d468273ead0aca}}, we consider pixels with activation scores larger than 0.3 as foreground clean labels and those smaller than 0.05 as background clean labels, and use them as the supervision for training GAT. This strategy is similar to {{cite:533743cd49a82bdd317771e60eb1456daff5c8a7}}, but uses GAT instead of GCN. However, the segmentation result is only 59.6%, which is much lower than that of our proposed detection strategy. This illustrates that the high-confidence seed regions contain many mislabeled pixels and that {{cite:533743cd49a82bdd317771e60eb1456daff5c8a7}} is not suitable for image-level supervision. For the third strategy (w/ detection and w/ correction), our approach achieves the best segmentation result of 67.5%. This clearly validates the effectiveness of the proposed detection and correction method.
{{table:8a27f994-6bee-4ce2-b756-213374ae4c0f}}{{figure:4e0cc63d-8f6f-4bcc-a42f-12cd91c63d08}} | r | da00611d1c6dd2e98f61a2d6befc4e5f |
In our method, the proposed comprehensive and challenging DUTLF-V2 can assist in higher generalization for our models, and the three distillation schemes aim to ensure better absorption and integration of focusness knowledge by the student. This enables the student to take a single RGB image as input instead of the focal slices, guaranteeing flexibility and productivity on mobile devices. Interestingly, however, we noticed that there is a certain performance gap between the student and the teacher. One possible reason is that, as the teacher network propagates forward and takes up a large percentage of GPU memory, fewer computing resources are allocated to the student network and the batch size is set to 1, resulting in suboptimal performance and generalization of the student network {{cite:bf2efc43e51a6d79f508788b42f69f9fc891357d}}. We argue that this problem will be relieved as computing resources improve.
| d | 86da5acc8e5cb278f069da64f35579c7 |
Graphene has recently attracted intensive interest as a promising
candidate material for the new generation of electronics and
spintronics {{cite:35ff946e8fa269c9bf362f81048159afe21f7a6a}}. One exciting piece of physics in graphene is the
strain exerted on graphene samples {{cite:1727f5fe575616161870c77ebe77d0db26771d05}}, {{cite:91cfc8ef2890c16055b4ba53457fb76f49ba1fe6}}, {{cite:f87bb270006c675f6e3371ca94d9ba6710417579}}. It was
proposed that strain can be utilized to generate various basic
elements for all-graphene electronics {{cite:91cfc8ef2890c16055b4ba53457fb76f49ba1fe6}}.
| i | 135901019eefc927c12cb25dd1e038df |
The particular novelty of these solutions is that, unlike previous small data solutions {{cite:3d4ca8a5c8ab0cfa1fb4be3bc16c0e062f9dd9c1}}, {{cite:205134ff43f67dd3afce7fecba721658d6bb2d97}}, {{cite:d7e4e2dd35770cd5bfd94a09c8b544d9db6b0ea0}} for {{formula:f12f6b0e-829b-4003-8716-ff61b8cc05bb}} and {{cite:e3835ac77af6442ec96a70346354df62c2205e60}} for {{formula:d5f09e82-6246-470d-8406-9549354b629b}} , they allow {{formula:775314d2-acda-4b7f-b2dc-efc8cde66994}} to be arbitrarily large and further do not require a smallness assumption on derivatives of initial data.
| r | dae93e8452a0b8142fcd4cda5e366217 |
Theorem 11 (Slepian's Lemma {{cite:fb7c1f43008823a63dacce6683ac0914c5c909e7}})
Let {{formula:044d51cd-a50a-4caf-a0e5-25bcdaa22739}} and {{formula:c66ec7d7-c35a-45b1-a4ab-f53f6c9cc78d}} be mean zero, separable Gaussian processes indexed by a common set {{formula:c94601e1-6ec4-4107-956d-ded910e64330}} , such that
{{formula:ea3bac00-59e9-4652-8408-792661b63863}}
| r | 9b926e6b2cc1320035706597068bf498 |
Our proposed approach is presented in Figure REF . Given the input image {{formula:11abc7eb-05c9-4263-9dc6-889821e4c6aa}}, similar to the TransUNet architecture {{cite:55bc1079557578a5a5fe27a346d0c1d62434337e}}, our proposed generator network {{formula:374c32d3-3170-4521-9101-4bdbdb20b905}}, termed CATformer, comprises four key components: an encoder (feature extractor) module, a class-aware transformer module, a transformer encoder module, and a decoder module. As shown in Figure REF , our generator has four stages with four parallel subnetworks. All stages share a similar architecture, which contains a patch embedding layer, a class-aware layer, and {{formula:ee3817d0-8506-43c5-aefc-4092f3006bd9}} Transformer encoder layers.
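A hedged sketch of one such stage is shown below; the layer widths, head counts, and the stand-in linear "class-aware" layer are illustrative assumptions, not the actual CATformer modules:

```python
import torch.nn as nn

class StageSketch(nn.Module):
    """One stage as described in the text: patch embedding, a class-aware
    layer, and N transformer encoder layers (all sizes are assumed)."""
    def __init__(self, in_dim, dim, n_layers=2, n_heads=4, patch=2):
        super().__init__()
        self.patch_embed = nn.Conv2d(in_dim, dim, kernel_size=patch, stride=patch)
        self.class_aware = nn.Linear(dim, dim)  # placeholder for the class-aware module
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)

    def forward(self, x):                       # x: (B, C, H, W)
        x = self.patch_embed(x)                 # (B, dim, H/p, W/p)
        b, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, h*w, dim)
        tokens = self.encoder(self.class_aware(tokens))
        return tokens.transpose(1, 2).reshape(b, d, h, w)
```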
| m | 96b288bd70c26b0e59b6bf10da9a7907 |
In {{cite:eec8748f546422fc3b6e55751605b5ba9aa56631}}, the authors provide a method for the matching problem by reusing eigenvectors.
We here develop a power update method in a similar spirit to resolve the above difficulties. Specifically, after obtaining the whitened (orthonormal) vectors {{formula:412da52e-d01c-4a74-8962-90e7b08bc015}} (here {{formula:47fac812-c892-4905-8f19-6fbe72bb867d}} is a scalar coefficient that depends on {{formula:b634c059-bd4b-425a-a186-8c298203517b}} and {{formula:834d5b38-a6c9-4405-9b49-bfe9998e0770}}; see Appendix A.2 for details), we recover the entry {{formula:054ad406-75a1-4165-b67c-6cf605699cdd}} of the linear regression model directly by computing a power update {{formula:6c53d9fb-fcc6-4ccb-bd50-1b1c989d8fd6}}. In this way, the matching problem is automatically solved because we know which topic distribution vector {{formula:6eddcbb1-9782-4775-8143-595e1f7e1794}} is used when recovering {{formula:a47bf83f-339e-4be1-bc9c-086bfb2becc8}}. Furthermore, the singular values (corresponding to the entries of {{formula:b734e522-bd6e-4110-bc35-a1b63e1e753b}}) do not need to be distinct because we are not using any unique SVD properties of {{formula:9cfdf2a5-e492-4bdc-84ee-891b7040c28f}}. As a result, our proposed algorithm works for any linear model {{formula:8fe3fe3d-8e7a-4ee7-ac6d-c0268ed1707b}}.
| m | 79257cf044bea480bbf2801f0652e01a |
Of course, in spiking neurons the true gradients cannot be computed because the step functions used as non-linearities are non-differentiable. However, surrogate gradient methods {{cite:ee1891525ea5b2c2d143badc14cd271c9bcddb51}} used to approximate gradients in spiking neural networks can naturally be incorporated into predictive coding networks. Such surrogate-gradient spiking PC models could help further develop the empirical hypothesis that predictive coding is the general algorithm the brain uses to solve the credit assignment problem. Additionally, such networks could lead to useful local-learning algorithms that are compatible with neuromorphic hardware.
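As an illustration of the surrogate-gradient idea (a common fast-sigmoid variant; the slope constant is an assumption), a spike nonlinearity with a smooth pseudo-derivative can be written as:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""
    SLOPE = 10.0  # assumed surrogate sharpness

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                  # non-differentiable step, forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.SLOPE * v.abs() + 1.0) ** 2
        return grad_out * surrogate             # smooth pseudo-derivative, backward pass
```

A spiking predictive coding layer would then call `SurrogateSpike.apply(v - threshold)` wherever a hard spike is emitted, letting the prediction-error gradients flow through the surrogate.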
| d | 7264f56b62d369945a1861eddacbee34 |
To address catastrophic forgetting, regularization-based methods protect old tasks by adding regularization terms to the loss function that constrain the change of neural network weights. Multiple approaches have been proposed, such as Elastic Weight Consolidation (EWC) {{cite:09352b741944d0c2d7cbed4277a4369eeaf9416b}}, Synaptic Intelligence (SI) {{cite:5e4085717317124cd8a7c9b353580d3fe132672b}}, and Memory Aware Synapses (MAS) {{cite:9a0bd41277a386ce26c0790ae3daf42e3be51c83}}. The main idea behind these approaches is to estimate the importance of each weight with respect to the trained task; during the training of a new task, any change to the important weights of the old tasks is penalized. Although regularization methods are suitable for situations where one cannot access data from previous tasks, their performance degrades quickly in the classical lifelong learning scenario.
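A minimal sketch of such an importance-weighted penalty (EWC-style; the importance estimate, e.g. Fisher information, and the weighting are assumptions) is:

```python
import torch

def importance_penalty(model, old_params, importance, lam=1.0):
    """EWC/SI/MAS-style regularizer sketch: penalize deviations of each weight
    from its post-old-task value, scaled by a per-weight importance estimate.
    `old_params` and `importance` map parameter name -> tensor."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# Training a new task then minimizes: task_loss + importance_penalty(model, old_params, omega)
```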
| m | d3d10b23d8df052130eabc6e87969737 |
Table REF compares our approach with common long-tail methods on CIFAR-10-LT {{cite:158eb812d7b3bc773fefdd9ae90f44e2ffe301e8}}. Our method outperforms all baselines.
| r | 0992f5a96617771c69bdce272de463b9 |
Deep learning algorithms have outperformed the state of the art in many biomedical image processing tasks {{cite:99107a4893c7283ca2a5cfe5fc8db16751eafaaf}}. Shi et al. {{cite:dbaa800b495d20a5c7c7881709cca32d053c7101}} applied a cascade of Quaternion Grassmann average layers to develop an unsupervised deep network for segmentation of histology cells. Others have applied deep auto-encoders for cell segmentation {{cite:afa2f1714965b2740de86b3e18e90bab6320dc43}}. Shen et al. {{cite:99107a4893c7283ca2a5cfe5fc8db16751eafaaf}} reviewed many unsupervised or CNN-based approaches combined with hand-crafted features for segmentation of biomedical images.
Roopa et al. {{cite:1427acb44a67eae534da9e64e65de74e17342cfb}} trained a CNN with hand-crafted features as input to classify white blood cells in peripheral blood smear images. Hand-crafted cell nuclei boundary masks have also been used as shape priors to filter the detections of CNNs {{cite:84254c2b9bafed6b55c53e9fed92cd7d998c1bdb}}.
Others applied CNNs for cell detection with pixel-level classification for each patch in the images {{cite:a6420c31444eef30c4f9c94b8047ca6d04c28456}}, {{cite:ee94fb0686d25b9da1fbe28031bf9cd706adcd91}}, {{cite:c0f60e7f5a1208cd7958827c60134f048fe49ff2}}. Hofener et al. {{cite:5abafef25fab70c79afc857c21f81f5cfa2ff421}} applied post-processing to smooth the scores derived by CNNs to improve cell nuclei detection in histology images.
However, patch-based approaches need to run the network for every patch, resulting in redundant computations.
| m | ba756ac696b482aa35f7be4879c4586c |
We first re-established the known results of the entanglement entropy of a single strip in the confining phase, however now, in the presence of a magnetic field. In particular, a phase transition from connected to disconnected entanglement entropy is observed at some critical strip length in the confined phase, at which the order of the entanglement entropy changes from {{formula:baae2510-6bd5-41ba-9db4-223493efbb8c}} to {{formula:65017f17-e1c6-4614-87c5-b744962d2fc7}} . Interestingly, this critical length is found to be increasing/decreasing for parallel/perpendicular magnetic fields, thereby providing anisotropic imprints of the magnetic field on the entanglement structure. We then analysed
the two equal strip entanglement phase diagram in the parameter space of strip length {{formula:709cf476-ab73-4511-a921-a154ac1392e1}} and separation length {{formula:6cd19abe-03e9-45f2-8444-7d4c85c3d5d8}} and found four distinct phases {{formula:71f5bd0b-6192-417c-983f-a00e62e0fcbb}} . These four phases exchanged dominance
as {{formula:c7095ade-c71a-4cb9-a1c8-410f0ce98247}} and {{formula:8d5190e7-44bf-48e3-896a-5c8b05c23278}} are varied, leading to an interesting phase diagram. This two-strip phase diagram is again greatly modified in the presence of a magnetic field, while further exhibiting anisotropic features. The mutual information turned out to be nonzero only in {{formula:908103df-15ca-4fac-a1bf-c8ff4834c637}} and {{formula:442d24da-011f-4929-af03-88520ea905a1}} phases and is always a monotonic function of {{formula:c2b3a420-de83-42ee-a0c0-44083cc93507}} and {{formula:3f43a764-c34b-46b0-8292-9f2972a7608a}} . Similarly, the entanglement wedge cross-section {{formula:c3a81a90-e3a5-45d2-9223-7a74c808095d}} is found to be nonzero only in {{formula:c10a21c8-1734-429f-980a-7f56bf553a65}} and {{formula:7d54a6db-a9d5-4514-bcf3-656e8883ea62}} phases. Interestingly, unlike the mutual information, {{formula:409835cd-7440-4b90-997e-ce531db2cdbe}} vanishes discontinuously for large values of {{formula:2c343f7f-2181-408f-9eac-c7b21adeb858}} and {{formula:6312ff04-b844-494a-b781-e8c9ef6512b6}} and exhibits nonanalytic behaviour across various transition lines. In particular, going from {{formula:a49ff559-dde8-4baa-bcd0-0ceac81ca120}} to {{formula:652d2d49-093c-4586-8f38-0ebec7039344}} phase, {{formula:61e493aa-4b87-4b1f-83d8-6104866534f4}} increases at the {{formula:f8e40f67-e13a-4e93-8e7f-964cfd437d48}} transition line. Interestingly, this increment in the area of the entanglement wedge at the {{formula:a4cd3e4b-45e7-460d-bf5e-18174639ea1e}} transition line is found to be decreasing/increasing for the magnetic field in parallel/perpendicular directions, yielding yet another anisotropic feature in the entanglement structure. We, moreover, tested the inequality concerning
mutual information and {{formula:323ecec8-8d29-4450-8941-d8a9ea326a7e}} and found that the latter always exceeds half of the former everywhere in the {{formula:4625dac6-6427-4097-851c-59231ac4e857}} -{{formula:22ea6816-3d51-4b76-b681-99d8a2a9cf08}} parameter space for all values of {{formula:599334ef-13b6-4ca4-8e73-c26960adf793}} . Similarly, we analysed the behaviour of entanglement negativity with one and two intervals using the holographic proposal suggested in {{cite:55edb3d48a8a3db0dc1eea5c02ea55f23a8c0325}}, {{cite:73f81c3e1453c4383c6601718d8a2649f071a559}} and found many interesting features in the confined phase. For a single strip subsystem, the negativity turned out to be just {{formula:c8044325-b70d-4609-bd40-433f143a9821}} times of the entanglement entropy, implying that it also undergoes an order change, from {{formula:1620c141-842e-4947-b2c0-2eb2949e39f1}} to {{formula:2011d718-86c1-4014-8b06-54a32e3d0042}} , as the strip length is varied. This suggests that it can also be used, like the entanglement entropy, to probe confinement. The corresponding critical length is further found to be increasing/decreasing with the parallel/perpendicular magnetic field. Moreover, for two strips, negativity behaves smoothly across various phase transition lines and no discontinuity in its structure is realised. However, unlike the mutual information and entanglement wedge, the negativity can be nonzero in some parts of the {{formula:df879ea1-5060-4f90-bbaf-2566270d6a92}} phase, an interesting feature which may not be observed in the holographic negativity proposal of {{cite:0959763fd63f41715c399b53603ea7c5e410f7ff}}, {{cite:67400c1ae737cd3af47c1a489244109bed5e9055}}. In addition, the negativity was found to be displaying anisotropic features in parallel and perpendicular directions.
| d | ea8dc87821a6c848c5cb075ed28a6a3e |
It has been shown that the Lattice Linear Predicate (LLP) algorithm solves many combinatorial optimization problems such as the shortest path problem, the stable marriage problem and the market clearing price problem {{cite:c925fcbc86f6c231bc50a36584542655a73809ec}}.
In this paper, we show that many problems that can be solved using dynamic programming {{cite:84428c191a817aa5590d1ea9c6e20ef4924c1c4d}} can also be solved in parallel using the LLP algorithm.
Dynamic programming is applicable to problems where it is easy to set up a recurrence relation
such that the solution of the problem can be derived from the solutions of problems with smaller sizes.
One can solve the problem using recursion; however, recursion may result in many duplicate computations.
By using memoization, we can avoid recomputing previously computed values. In this paper, we assume that the problem is solved using dynamic programming with such a bottom-up approach.
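As a small, self-contained illustration of the bottom-up tabulation assumed here (edit distance is an arbitrary example recurrence, not one of the paper's problems):

```python
def edit_distance(a: str, b: str) -> int:
    """Bottom-up DP: each table entry is derived from already-computed entries
    of smaller subproblems, so nothing is recomputed (the tabulated analogue
    of memoized recursion)."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # delete a[i-1]
                          d[i][j - 1] + 1,                       # insert b[j-1]
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitute
    return d[n][m]
```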
| i | 01c388f75a2132acad8d8556235f4dc2 |
Remark 4.3 (Hard-SVM)
The dual formulation (REF ) is the one used for solving linear SVM problems (see {{cite:fa8a36bdd542f23a7d17a1bdbafc15dafadf62ed}}). Indeed, tackling the max-margin problem via its dual formulation (REF ) is popular due to its favorable structure, and there is a very rich literature on methods to solve it: see e.g. interior-point methods ({{cite:9bd97fa5448af94763539d0e0f92ef54f35a5a9b}}, {{cite:4350eaeb7f2a6a4434af88948192f947f5bfc8f6}}) or decomposition methods ({{cite:9bea2bba7893e9b8c7a6bdb0b8690d4a531e2310}}, {{cite:fa8a36bdd542f23a7d17a1bdbafc15dafadf62ed}}, {{cite:4350eaeb7f2a6a4434af88948192f947f5bfc8f6}}, {{cite:6d385c5b4c9f4b3827cb5f1480dfafb61d655fd0}}, {{cite:5d85f9e41842a2ea52b0dd3f3e97e7d79514addc}}, {{cite:524a464fba88736af1a5ce1bcbdf32fb34ec012d}}, {{cite:3f1ecafff07cf8f1bfc1af8ce7a6df358ee5dd5b}}). Compared to these methods, our diagonal approach enjoys good theoretical guarantees while providing a direct link to regularization methods.
| m | a6aff01dc9638df424f9d9e1b1c86f30 |
By using the hodograph method, Courant and Friedrichs, in Section 104 of {{cite:ed7e3b9e0d4a0e82bb0dea6933fd0d37e1a06e1c}}, obtained some particular planar radially symmetric flows, including circulatory flows and purely radial flows; their superpositions are called spiral flows. Weng-Xin-Yuan {{cite:75e7e2fc46ccdbef0d9aa912c58c164de3bf0623}} gave a complete classification of radially symmetric flows with or without shocks by prescribing suitable boundary conditions on the inner and outer circles of an annulus, and also analyzed the dependence of the solutions on the boundary data. However, there are very few papers on subsonic spiral flows outside a body. This motivates us to investigate subsonic spiral flows whose asymptotic state is the radially symmetric subsonic spiral flow. To this end, we first describe the subsonic spiral flow in the exterior domain {{formula:c7d41b77-db0f-4b5e-8b1b-51e092516664}}. It is convenient to use the polar coordinates {{formula:88393a6c-8d97-4b8d-bf15-e15ce2b5b27c}}:
{{formula:f1e2776c-1d0f-41e1-92d4-702b1e3dcc02}}
| i | 507eee781c0f2843758ca837e9767caf |
Efficient posterior sampling for Bayesian statistical models has attracted
a substantial amount of research over the last decades {{cite:2cac35503e27b090153006489cc9dda9b030d32d}}.
Riemann manifold Monte Carlo (RMMC) methods {{cite:c2eac8fcd8142d0aa81e4b47f6ad65348933d9ff}}
are particularly well suited for posterior distributions exhibiting
complicated non-linear dependence structures and/or substantial differences
in scale across the target distribution. Posterior distributions with
these properties arise (among others) for Bayesian hierarchical models,
which are widely used to model dependent data {{cite:dd5ddf19c6a4b9fc01a536d62dc93af075b1af64}}.
The successful application of the RMMC methods relies on the selection
of a suitable metric tensor, a symmetric, positive definite matrix-valued
function that should reflect the local scaling properties of the posterior
distribution in question.
| i | b5b079bd5745753f78fbd2e10a9c75ac |
The above discussion lays out our strategy to prove that PMF dynamics with zero refractory period represents “physical” solutions, which is one of our main objectives.
PMF dynamics are of special interest because they admit an explicit iterative construction in terms of well-known analytic functions.
Moreover, PMF can be easily simulated within the PDE setting.
Such simulations confirm the prediction from particle-system simulations that explosive PMF solutions become asymptotically periodic (see Fig. REF ).
In that respect, we conclude by mentioning that the existence of periodic PMF dynamics directly follows from our {{formula:83f504da-3dc5-467d-aeb7-2790a739ca5d}} continuity analysis with {{formula:1436add9-add0-473f-a106-120ad9c48c85}} .
This is a consequence of Schauder's fixed-point theorem {{cite:11f690e8a675d7b17a114904d8a8b503b76ec138}} applied to the inter-exit-time mapping
{{formula:5cfdb157-e20d-47fa-983a-33262cba27ed}}
| r | 818ecfa1ee527178ef0a8d82c51784e7 |
Results on ADE20K.
Table REF compares the segmentation results on the validation set of ADE20K.
With the ResNet-101 backbone, ISNet {{cite:4cea26f17e18c11b4b54c1f5fefd321d2b9593c3}} achieves {{formula:b1b03032-d948-45b7-9a9f-84154a9a2610}} mIoU via integrating the image-level and semantic-level context for semantic segmentation, which is the previous best method.
Based on the same ResNet-101 backbone, our approach achieves superior mIoU of {{formula:7e1dcc4e-672c-4df4-b0db-f5c45045f597}} with a significant margin over ISNet.
Moreover, our UperNet+MCIBI++ with the ResNet-101 backbone even outperforms the previous ISNet, DeepLabV3 and DeepLabV3+MCIBI with a more powerful ResNeSt-101 {{cite:b0542f82485ff52a643aee4e7cd559370599eec3}} backbone by {{formula:2e3de723-9d59-441a-826d-fb040a1655dd}} , {{formula:a31529c0-f1b2-4813-a272-2838ecd66a0d}} and {{formula:3f9b38b6-9cd8-455f-9f97-7d0ea563badc}} , respectively.
Without a transformer-based backbone, the state-of-the-art segmentation framework is DeepLabV3 with ResNeSt-200, hitting {{formula:9bf86bd9-f433-48e5-9cdb-45a98180f93f}} mIoU.
Owing to the superiority of the proposed MCIBI++, our UperNet+MCIBI++ with a relatively light backbone network, ResNeSt-101, still sets a new state-of-the-art performance (i.e., {{formula:f85c5503-afc8-4998-8160-ee621850d460}} mIoU) among transformer-free methods.
In addition, prior to this work, UperNet achieved the state-of-the-art with {{formula:c2e40ce2-4ad1-4e96-a8c0-62b7308b5e8a}} mIoU by adopting a transformer-based backbone network, i.e., Swin-Large {{cite:d0e398f159327e2fc52c65a402e8e8df6514f2a2}}.
Based on this, our method UperNet+MCIBI++ with the same Swin-Large backbone achieves a new state-of-the-art result with mIoU hitting {{formula:a0b4a936-54c6-465a-ac9d-3fd27609a684}} .
It is worth noting that ADE20K is very challenging due to its various image sizes, the large number of semantic categories and the gap between the training and validation sets.
Our framework achieves consistent improvements over counterparts and new state-of-the-art performance on this dataset, which clearly demonstrates the significance of our proposed MCIBI++ paradigm.
{{table:20af9fd6-93a4-4295-bb38-d6d11580ffad}} | m | 00b40599418c811db259b221aa06db19 |
{{cite:801d6f40d7af80afd03e85086317d3536e9aabcd}}— Using the redshift evolution of the neutral hydrogen density inferred from observations of DLAs,
these authors calculated the evolution of elemental abundances in the Universe based on an analytical model.
From this work, models with a mean metallicity value (not corrected for dust obscuration) at a given redshift were considered.
{{cite:ba8069073734dc237f4006669c01e4dc71a6efd0}}— These authors obtained solutions for the cosmic histories of stars, interstellar gas, heavy elements, dust, and radiation from stars and dust in galaxies using the available data
from quasar absorption-line surveys, optical imaging and redshift surveys, and the COBE DIRBE and FIRAS extragalactic infrared background measurements.
We considered the mean metallicity of interstellar gas in galaxies predicted by the best models from {{cite:ba8069073734dc237f4006669c01e4dc71a6efd0}}.
{{cite:b4a6dc0ab149fa183af6a52968bd83157dae610a}}— They investigated several scenarios for the nature of the high-redshift Lyman-break galaxies using semi-analytic models of galaxy formation set within the cold dark matter
merging hierarchy.
From the models proposed by these authors, we considered the predictions for the average metallicity of the entire Universe (taken from their Fig. 14), i.e., the total mass in metals divided by the total mass of gas.
This is the average between the metallicities of the cold gas, stars, hot gas, and diffuse gas.
{{cite:d697855ff21053aff4505211e6bf99b91e4f694d}}— These authors computed the chemical evolution of spiral bulges hosting Seyfert nuclei,
based on chemical and spectro-photometric evolution models for the bulge of our Galaxy. We considered the metallicities predicted by those models
built assuming a mass of the bulge of {{formula:162b95eb-9ee3-4c81-baa2-30eb2904966c}} .
| d | b6bf28617a1a2c71791c41bfbce9800e |
Further, we have identified the disk precession rate, which encodes the disk's symmetry about the origin, as a key diagnostic of the disk and eccentric-binary response. Future work should also carry out a quantitative investigation of disk eccentricity in our scenario, as studied in many previous works {{cite:35405deb69f938b1868567b5dabed7651062b462}}, {{cite:c44b41b70f7295f1b298e76f796fe870bb4c9e49}}.
| d | f6b58f217cbd28c99cdb95587ac56400 |
Recently, game theory has emerged as a significant framework in the study of multi-agent systems: the agents in the system are modeled as players in a game, each with its own actions and individual rewards that are functions of the actions of the other agents. Some prominent examples where these models have found success include wireless communication networks {{cite:598e5200dc9bcd1f5a366e529f172e3dde34b37e}}, UAV swarm task allocation {{cite:35b6fa069c3acb8080a31103037a67bfc03702a0}}, news subscription services {{cite:493b5c4aa6e4b407325548ad7d2f1e180703ef3b}}, vaccinations during an epidemic {{cite:164d797ef886bd95546bd48aeecfb72f17a635cd}}, security systems {{cite:c3ea64f15823b8fe99265295fd75e7f2dc4ebb61}}, facility location {{cite:5d5199e22b59ddf9f1662ccea88f437406576c23}}, coordinating the charging of electric vehicles {{cite:7635f770602d75edcc402edc6085aad8788a6af1}}, and national defense {{cite:c12eb505145db3792aa63e2810978de8f40b1dac}}, among others. Indeed, game theory has played a significant role in many applications across various disciplines.
| i | db289e6b5db5b92b8d45256e52a2e4b4 |
In this study we use popular evaluation metrics such as HitRate, Precision, Recall, MRR, MAP, NDCG, and RocAuc. All the metrics except RocAuc are calculated at a depth cut-off of 20; RocAuc is calculated on full predictions, excluding the training data. The evaluation tools we use are our own library RePlay; libraries released as follow-ups to recently published papers discussing difficulties in RecSys evaluation, such as Beta RecSys {{cite:caee21396d1d75a7becfa6ca5c3890af48119a0a}}, DaisyRec {{cite:4272eca5e03d78af437b098ef1d4b8371ff86106}}, RecBole {{cite:5b004fdf4e53c2074357e121894a71a3eec0c498}}, Elliot {{cite:be207bf7563a111bbbd2aa42bf9e2f77d9800449}}, OpenRec {{cite:7634a02b08b761dc7f9eb2f3258a85c83ea48294}}, and DL RS Evaluation {{cite:07d7b9e37bc59b4e750bd0110ba57cc5f4f06edd}}; and Python packages available on GitHub, such as MS Recommenders, NeuRec, RecSys PyTorch, and rs_metrics.
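As a concrete illustration of the depth cut-off, here is a minimal sketch (our own helper functions, not RePlay's API) computing HitRate and binary-relevance NDCG at depth 20 for a single user:

```python
import numpy as np

def hitrate_at_k(recommended, relevant, k=20):
    """1.0 if any of the top-k recommended items is relevant, else 0.0."""
    return float(any(item in relevant for item in recommended[:k]))

def ndcg_at_k(recommended, relevant, k=20):
    """Binary-relevance NDCG at depth k: discounted gain of the ranked
    list, normalized by the best achievable ordering."""
    gains = [1.0 if item in relevant else 0.0 for item in recommended[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0
```

Per-user scores such as these are then averaged over the test users to obtain the reported dataset-level values.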
| m | 6db85a6c21ab5e9b96d685c4e18f5725 |
Our method combines image generation with face feature extraction and reasoning. To this end, it builds upon PULSE {{cite:705b2f81afc60e3a0c165c8630ad4e06208361b3}}.
| m | 2f064240ad4b68b8c947565ef5bc904c |
Setup.
We conduct comprehensive experiments to show the effect of WT-AWP on the natural and robustness performance of different GNNs for both node classification and graph classification tasks.
We utilize the open-source libraries Pytorch-Geometric {{cite:4d0a4813348935da408af32a1528a6955804825c}} and Deep-Robust {{cite:1088cb8256eed3193b82f1990f6a45756ea89e5e}} to evaluate clean and robust node classification performance, respectively. To ensure a fair comparison, we keep the same training settings for all models. We report the mean and standard deviation over 20 different train/val/test splits and 10 random weight initializations. See Sec. REF for further details and hyperparameters.
| r | 67cb6fe298fa608b48a2eb6103152ed8 |
The Bayesian approach yields a distribution, rather than a point estimate, as its outcome, which is more informative than LS. Compared to ML, where parameters are considered fixed, in the Bayesian method they are treated as random variables with known prior distributions {{cite:10ac2cf90f785e0f868adde622503935796fa09b}}. Based on Bayes's theorem, the position and the parameters act as random variables, and the prior information and observations are leveraged to infer and update the posterior distribution of the unknown random variables. For example, in {{cite:51c4c275c5957f0af87ea38940a2277df68063ea}}, the PLE and the position are treated as mutually independent random variables from which the posterior distributions are derived. To do so, a message passing algorithm called belief propagation {{cite:7da4333f35495bbbfbd89a17b03d94921b14f517}} is used on the factor graph, which allows for efficient computation of marginal distributions and deals with the problem's intractability. The cooperative localization scenario is also addressed with the Bayesian method in {{cite:51c4c275c5957f0af87ea38940a2277df68063ea}}. In that paper, the transmission power is estimated in addition to the PLE.
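For intuition, the following toy sketch treats the PLE alone in a Bayesian way (a simplification; the cited work infers PLE and position jointly via belief propagation on a factor graph). It computes a grid posterior for the path-loss exponent under the standard log-distance model; all numeric values are illustrative assumptions.

```python
import numpy as np

def ple_posterior(rss, dist, p0=-40.0, sigma=4.0,
                  grid=np.linspace(1.5, 6.0, 451)):
    """Grid-based posterior over the path-loss exponent (PLE) n, assuming
    the log-distance model rss = p0 - 10*n*log10(d) + N(0, sigma^2) and a
    uniform prior on the grid. p0 and sigma are illustrative values."""
    log_post = np.zeros_like(grid)
    for idx, n in enumerate(grid):
        mu = p0 - 10.0 * n * np.log10(dist)
        log_post[idx] = -0.5 * np.sum(((rss - mu) / sigma) ** 2)
    post = np.exp(log_post - log_post.max())   # stabilize before normalizing
    return grid, post / post.sum()
```

The returned weights approximate the posterior density of the PLE; the full schemes discussed above extend this idea to the joint posterior over PLE, position, and (in the cited extension) transmission power.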
| m | f04dec14f357facfb77d6b0e9ddc8b87 |
Lemma 4 {{cite:f4dd03f18d9cce5b6dda77abff430e65309441b1}}
Let {{formula:49c01ee3-a58b-4e0f-b2ad-7f4401790813}} and {{formula:eecc0127-3bee-4f8a-ac57-e00dc6447b74}} be connected graphs, at least one of which is not complete. Let {{formula:3ead585b-b4b1-47ad-8487-13cad8d2df72}} be the minimum size of a {{formula:66d57348-6f19-4f13-8180-4a4f6e8961cb}} -set of {{formula:a4e15d8d-6dd6-400c-8574-5bebefb9c9c6}} . Then
{{formula:62c35120-8c6f-4102-aef0-9939c62d7465}}
| r | c9758dacd1f4a0a0186d2d4649803bb7 |
Table REF (c) shows long-video classification results on Charades and Kinetics-Sounds (KS) when pretrained on Kinetics-700. We test Visual-only (V), Audio-only (A), and Multimodal (M) settings to verify the benefit of multimodal learning. Because there are no published self-supervised learning results on these datasets, we assess long-term representations by comparing CNNs (CNN; short-term) to Transformers (BERT; long-term) on KS, which contains 10-second clips. Since CNNs process 1-second clips, we feed 10 non-overlapping clips to CNNs and average the prediction output. In all settings, we add a 2-layer MLP with softmax classifier on top. The results show that Transformers outperform CNNs on Kinetics-Sounds, suggesting the superiority of long-term representations. We also see that combining audio-visual information performs the best. We notice that audio representations are generally stronger than visual representations; we believe that learning discriminative visual representations is generally more challenging, especially when the CNNs receive (self-)supervision signals only indirectly through Transformers. We believe that providing (self-)supervision directly to CNNs, e.g., by first pretraining CNNs on 3D rotation prediction {{cite:c86b78cb45180de7052236d1d223821592fcb736}} and then jointly training the whole model (as was done in CBT {{cite:2e51c85630d2dd527a3d588e763755b3a1ec6553}}), could further improve performance. Incorporating contrastive learning {{cite:3ffad58d52e22c5bd8ba94518331479439306f0f}} over the CNN embeddings and training the whole model end-to-end is another promising direction for future work.
{{table:8eccbe7c-d0cb-40a1-a03f-f15db3f07b3a}} | r | 98e753c0119041c0c393b2a102ddd37a |
With the rise of CNNs, learning-based methods have become mainstream and have advanced rapidly.
These methods usually use a CNN to estimate {{formula:6cc1979b-7ae9-4938-8f6e-95e57c9e7950}} and {{formula:49c5408f-13b5-4884-8688-2b2097c00359}} , or directly recover the clear haze-free image with an efficient CNN.
For example, Cai et al. {{cite:ace4c2de199e418fa5442e89afc1f68622562125}} adopted a three-layer convolutional neural network (Dehazenet) to estimate the transmission map;
Zhang et al. {{cite:84819e4c3c0900fb86ad3ca3185e86dd5c40af18}} proposed a density dehazing network (DCPDN) that simultaneously estimates the transmission map and atmospheric light intensity;
Liu et al. {{cite:3c9c60862a516588d6ccad98837152b89c53d29b}} proposed a grid network (GDN) to directly reconstruct clear images;
Dong et al. {{cite:aacd9bb57ed67eec9f345a31a77f1fd63c3e7898}} proposed a multi-scale deep network that adopts a strengthen-operate-subtract boosting strategy for image dehazing (DFF).
Although these learning-based methods have made great progress, they do not fully exploit the features of the hazy image itself, resulting in sub-optimal reconstructions.
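Most of these methods rest on the atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)). Assuming the two estimated quantities above are the transmission map t and the atmospheric light A (our reading of the placeholders), a minimal sketch of recovering the haze-free image once they are estimated is:

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).
    I: hazy image (H, W, 3) in [0, 1]; t: transmission map (H, W);
    A: atmospheric light (3,). t is clipped to avoid amplifying noise
    where the haze is dense."""
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```

The lower bound on t is a common practical choice, since small transmission values would otherwise blow up estimation errors in the recovered scene radiance.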
| m | e81f5f6437b0eb7e665477bd46143306 |
Data- and task-based multi-modal fusion takes advantage of complementary information and increases robustness {{cite:898a96698f4fc447450a4b57c066031263d1bafe}}.
Meanwhile, since AU analysis relies on facial muscle movement, RGB-only detection methods are insufficient for recognizing subtle changes.
Thanks to recently developed multi-modal databases {{cite:821d45983f112c755a1500daca195e0a749cf79d}}, {{cite:78af04740f351d4bc014df7688f6c2e069acebd0}}, {{cite:27bbec15ddda7e8bc3a6ae0700d9dd8ac80eea73}} on facial expressions (e.g. RGB, 3D mesh, thermal, and physiology signals in BP4D+ {{cite:78af04740f351d4bc014df7688f6c2e069acebd0}}), we can broaden our views from classic RGB to various modalities and from 2D image to 3D space.
For instance, AU6 (cheek raiser) involves the deformation of Orbicularis oculi and pars orbitalis muscles, which shows subtle differences when observed in visible images.
However, due to the high density of points around the eyes region, better geometric changes can be characterized on 3D mesh.
Although some works utilized the additional information presented by multiple modalities in FER and AU detection {{cite:29f3b59379207a5ba0c42e0dfc61570b83298763}}, {{cite:4bb146385d504074ef26836e43de3a034c324b58}}, {{cite:87e1d1e5b93c66ecd54b6f0a5611b4400afb141e}}, two main challenges remain: 1) the model must learn the most relevant features for representation; 2) the model must fuse the two modalities effectively.
| i | b4f246c1c9dc71ad9a3e27faa5636163 |
The training and validation errors of the models were recorded and are reported in Section , however they do not represent a complete measure of the performance of a generative algorithm. In generative models, the goal is to learn the generative distribution underlying the samples used during training while not over-fitting on these few examples. This means that a model with average training and validation accuracy can produce much more realistic and creative musical samples than one with very high training accuracy, as music is an art form and is very subjective. The majority of existing literature in neural music generation relies on subjective evaluation methods, with human listener surveys being the most common approach {{cite:4d3d17f3342a245344c4adb1fa2471ee5ec607d3}}, {{cite:8a6264cbccb21c585a2c8d9d935681b07b8e2b48}}, {{cite:6a2fcec3d546f9ba26d81ee297d30bdd734a668b}}, {{cite:81df2fcfd4224ae70c0e0c25f1fe992c19607027}}. For this study, a survey was conducted in which listener impression scores from 10 individual volunteers were collected. The scores give a subjective measure of the listeners' impressions of the samples generated by both the encoder-decoder and the WGAN generators.
| m | b5d9b35d9d96bbecc86f0857a427d2ea |
Fig. REF (right) reports the length of paths generated by the different planners for the same set of start and goal pairs. The paths generated by CCGP-MP* are shorter than the paths generated by LQG-MP. CCGP-MP* is able to generate optimal paths because its underlying planner uses the RRT* algorithm, which is an asymptotically optimal planner {{cite:078743a6b5781cba5a5257ac252c2c28bd03eeb6}} that makes no assumptions on the CONNECT function. Fig. REF plots the different plans generated by RRT*, LQG-MP, and CCGP-MP* for a random start and goal point. From the image, we may infer that CCGP-MP* deviates from the RRT* plan where the chance constraints are not met, which results in better accuracy for these paths. In comparison, the paths from RRT*, although shorter, would result in multiple failures during execution because of the noisy robot motion. The paths from LQG-MP, on the other hand, although safer, are not optimal.
{{figure:55860d8c-a5e3-4f8c-9258-78ff1cbececc}} | r | 3edf581bae078860b72edb963f7ece11 |
Some of the small clusters are not as well separated as the best clusters in Table REF . Their {{formula:c50d0694-1125-4cb6-879e-e13385d8fdd3}} -values exceed 1/4 (cf. also Table REF ). Evaluation function {{formula:213968b5-f6e7-4277-93ec-8c327613b487}} is always larger than the escape probability of the random link-node-link walker {{cite:57632938d639629bb66d1bd6278c943f690b7802}}, for small clusters only slightly larger, because the denominator of the second term in the definition of {{formula:09afb8ae-7070-4e3d-8445-533bb0bb279b}} (Equation REF ) is very large. That means that, for {{formula:2bc69ddd-fedb-4c6b-9238-84cc70465e8b}} , the random walker's probability of remaining within the cluster is always larger than that of escaping from it in the next step.
| d | 3eda745d8ced5a9fd0d93d162eb15f31 |
Following the same settings used in the datasets experimented in the
main paper, we also apply leave-one-domain-out evaluation {{cite:d52e36231fd03a87ef84b5fa30eccab069fb5a5c}}.
For a fair comparison, we follow {{cite:431f24a26ea8ac2f8ad60920bbff628fbbd6e1f2}}, {{cite:47733cbdb52c5c615d47e4273484f32391bf0aa3}}
by using the pre-trained AlexNet {{cite:932d4d0f01d0d359bfa204013197065f76523b2d}} and
ResNet18 {{cite:3bb87a391740ec112bb4547b15a0b97d75fb6587}}, {{cite:feaf85e67a080972ed682df3dc13e5b454ae0867}} backbones on the VLCS
and mini-DomainNet respectively. For the VLCS, we trained a classifier
{{formula:9d30eb70-4f63-48fd-aad3-7dbd8b5ba379}} for 50 epochs using the SGD optimizer with momentum {{formula:84c4756b-ea85-47d7-b182-f98b0e6502a1}} ,
weight decay {{formula:fff43478-0bf7-4bcd-ac94-c23ab0707f04}} , and batch size 32. For the mini-DomainNet,
we used the same optimizer, but, like {{cite:f080b47357333208881d96141447ae8fa61aa49d}}, we
trained {{formula:dbcfd5cb-c1b7-4d13-b62d-0f305673a2e6}} for 60 epochs with a batch size of 96. A cosine learning-rate
scheduler is used for both datasets, with initial learning
rates of {{formula:86162b97-7759-432d-a041-9554dfbfd1d2}} and {{formula:44f38074-28d8-4678-a52e-fdbeb62af271}} for VLCS and mini-DomainNet, respectively.
We follow the same data augmentation protocol as in {{cite:ee2c3113fc173f44b0d02bda75606e4225b95a96}}, {{cite:47733cbdb52c5c615d47e4273484f32391bf0aa3}},
where random flipping, random translation, color-jittering, and random
gray-scaling are used. The experimental settings of the AdaIN NST
model remain the same across all datasets (including those
used in the main paper). We also used {{formula:d2eaa927-fd50-4867-bb70-b9a402ca4ee4}} with the domain-balance
strategy for all of our methods on both datasets. Hyper-parameters
are listed in Section .
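A sketch of the described optimization setup in PyTorch is given below. The momentum, weight-decay, and learning-rate values are stand-ins (the actual values appear as formulas above), and the classifier dimensions are placeholders, not the paper's configuration.

```python
import torch

# Stand-in values only; the paper's momentum, weight decay, and initial
# learning rates are given as formulas in the text above.
MOMENTUM, WEIGHT_DECAY, INIT_LR, EPOCHS, BATCH = 0.9, 5e-4, 1e-3, 50, 32

classifier = torch.nn.Linear(512, 5)        # placeholder classifier g
optimizer = torch.optim.SGD(classifier.parameters(), lr=INIT_LR,
                            momentum=MOMENTUM, weight_decay=WEIGHT_DECAY)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                       T_max=EPOCHS)

for epoch in range(EPOCHS):
    # ... one epoch over the training loader (batch size BATCH) ...
    scheduler.step()                         # cosine decay once per epoch
```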
{{table:4bd1a971-ff2a-4bc3-94a3-a3cd21721f85}}{{table:babba2cb-1a41-47a7-9ac5-37eb01bbd484}} | r | ecfdf449e798a4e0bb38c0b042dbb869 |
The 2006 September 9 UT Spitzer spectrum has been previously modelled with Dusty by {{cite:3b24c31267028e20f692080e6601b71fd0bbe6cf}}. They obtained a reasonable fit with a higher dust temperature ({{formula:771edcf3-d937-461a-8d84-d4866f12e529}} K) and larger optical depth ({{formula:dfc1b34b-685f-4ce3-b675-4b77b1b5c047}} ) than we determine here. This is due to a different SED for the central source. Using a 3600 K blackbody, as used by {{cite:3b24c31267028e20f692080e6601b71fd0bbe6cf}}, we also obtain a reasonable fit for {{formula:73c22874-7ff5-40b8-ac6c-f83807d39632}} K and {{formula:669ac0fe-7f46-42d1-bba5-de1027a9deee}} . {{cite:28edcc7a7a64674fd0f8881d0bf1e9602b893b20}} also find a lower dust temperature (500 K) from fits to the excess at {{formula:d8e34dd8-2332-4807-be6f-477ef647a5ad}} m.
Although {{cite:3b24c31267028e20f692080e6601b71fd0bbe6cf}} determine a higher dust temperature (600 K) from the 2006 September data, the inner shell radii in Table REF (column 4) are in reasonable agreement with their value ({{formula:50fb0a6b-e4a1-4d89-9559-e63f51105d90}} cm) because they use a lower source luminosity.
| d | 055a7f386571dc33bddf9bf4a8e80e55 |
To extend gravity beyond general relativity,
“teleparallelism” can be considered by using
the Weitzenböck connection, which has torsion but
no curvature, rather than the curvature defined by the Levi-Civita connection
{{cite:b637c3536617ecaf41d4b265907bd5c4b9ec038b}}, {{cite:9ccd38fbef4e88a2b879dfcb69c3ff8fa5548199}}.
This approach was also taken by Einstein {{cite:77922606e677555d885d6ddd9968a2b498d76877}}.
The teleparallel Lagrangian density described by the torsion scalar {{formula:021a684c-a7be-4a65-8f99-c28e2284df5d}}
has been promoted to a function of {{formula:afbde86c-36bf-4930-88de-446731244584}} , {{formula:1dc3cb88-eae0-47b6-9fc1-0bec4f320bec}} , {{formula:daa30830-db65-492d-a441-44d184a03a67}} ,
in order to account for the late time cosmic
acceleration {{cite:b93043153663d90fd096556292de832b4c1efedc}}
as well as inflation {{cite:90388e00a63293a00333d9ee1bff773c8294be10}}.
This concept is similar to the idea of {{formula:3b6a3c03-327d-41ed-aacd-0810d71ced98}} gravity.
Various aspects of
{{formula:bd226e6c-40fc-4a12-83f4-5c66f1db7230}} gravity have been examined in the
literature {{cite:dfcf1caf7c87b3c10dee687c63385d2f6f8e8a20}}, {{cite:4f4a5ed7b3c1743c918ee971d9da457cf25a66a6}}, {{cite:768db2ea3819f7af941d48781bcfe5bbaf6d076c}}, {{cite:83bf2abef88f56ad9fb633458537de90e00cb609}}, {{cite:396701b572cb14b8dc87d0349a8063f61ccc200a}}.
In particular, the presence of extra degrees of freedom and the violation of local Lorentz invariance as well
as the existence of non-trivial frames for {{formula:ac94ceb2-2ac9-42ef-85b5-559ff2a73918}} gravity have been noted {{cite:396701b572cb14b8dc87d0349a8063f61ccc200a}}.
Evidently, more studies on {{formula:8bac73bb-4c9b-4db1-9fcf-8beb5497c10e}} gravity are needed to determine whether the theory is viable.
For a comprehensive review of the teleparallel gravity, the reader is referred to {{cite:fd1445b5ac25daf0d0c7b29f9e5a7bf9af9266a3}}.
| i | 60a1b4f5c32f09dac9de55c4d8d7ee57 |
These results mean that it is quite challenging to compare our theoretical and numerical results to existing experimental data for the diffusive evaporative model. Since our results concern the early stages of the development of the coffee ring, we require mass profiles long before the final deposit, which is by far the most commonly reported in existing experimental studies. However, even in cases for which transient mass profiles are given — for example {{cite:e7c7f8c588e9bce2e677a44aba60e61b1f2f6f9b}}, {{cite:fd3665870f5706ae244e6c3f9b921720d6203cc9}} and {{cite:c7fd6d7bed1d02635e23b13b9ca3c091d0e07168}} — the limited range of applicability for the dilute regime makes comparisons unfeasible.
| d | ae0d9bcbb558b08978d971c34fc71660 |
If {{formula:de0916d0-ff45-4bc7-b6aa-4d8fb46cb7bd}} is an ample line bundle, then we define the slope of a vector bundle {{formula:c3da05c7-ad66-4e29-9710-1e2bd4011af7}} on {{formula:b3db6488-59e1-4aa7-adb8-30e120643056}} with respect to {{formula:06aee132-96e7-48b0-a50e-5f3701239e91}} as {{formula:ed791eba-7375-494e-8945-2731a38455d5}} . The bundle {{formula:5964a413-fa9a-4eaa-8c7b-3a2ce50bb39a}} is {{formula:f40df7f3-0435-47de-adec-d22e24321365}} –stable (resp. {{formula:1497d743-ec38-4497-bca4-0dd35ec91e04}} –semistable) with respect to {{formula:a236a1b5-cacc-4e8b-aa53-89a4bbf7a1fd}} if {{formula:0ab472e0-38c2-494b-8620-6763eeb8a7f8}} (resp. {{formula:a217c81e-69fb-4873-b142-40ab4fa4d064}} ) for each subsheaf {{formula:d63eec4d-5c13-4af8-8e0b-e3df296e1b5c}} with {{formula:0787308e-34cc-4c81-b224-b1ca714c6429}} . Every {{formula:c345226c-90a8-42c9-a6e1-72193416bca9}} –stable bundle {{formula:40c32be6-f0b3-4e15-b267-205c5445ef3c}} is simple (see {{cite:59e0c2f5240af514556c0a0eaec1a5f3abf88826}}). The following criterion for {{formula:6c4d1566-d097-4aac-86f5-7dd0e39303c2}} –(semi)stability of a vector bundle on {{formula:6625c29d-cd73-4b53-bcc7-184f5f435339}} will be used in what follows.
| r | 7c5827762efc1a41adb065d5cec27f8f |
Following the progressive elaboration levels as in the previously shown quark pressure approximations, we first
illustrate in Fig.REF the results of using the simple perturbatively re-expanded approximation
for {{formula:3a09a56d-10e2-45fd-ae85-b79d51257616}} , Eq.(REF ), for the quark contribution, but supplemented now by the NLO glue
contribution, Eq.(REF ). The resulting RGOPT pressure is compared with both the (massless quark)
state-of-the-art {{formula:3aee7a29-c2ad-4abc-9253-05f7dad82e4a}} LO pQCD,
whose expression is taken from Ref. {{cite:4ba41902a3ae41c3838896343c377ccdb2b001b6}}, and to available LQCD results from
Ref. {{cite:3a90cd1cfb19999d941e5d983f75757a1f2bcb06}}, {{cite:1498af8ccf207b8a8da045737fb169df1a596930}}, {{cite:393bb91bc2e8c30e37bb84d2009f5735fabc6ca1}}.
As is seen, adding the NLO PT glue contribution puts our results
in the right ballpark of LQCD data, with clearly visible improvement as compared to pQCD, both for the central scale choice
and resulting remnant scale uncertainty. (We also note that
using instead the similar RG perturbative approximation Eq.(REF ) gives almost indistinguishable
results from Fig. REF , illustrating the low-order perturbative consistency of the two different
MOP and RG prescriptions).
{{figure:62db3c5b-4de6-416c-a186-5c785d0abc80}}{{figure:3ee2340e-83cb-41b5-a52b-f9202e898371}}{{figure:51f7da22-c829-4add-9e9e-ac8311fa4d27}} | r | b424866d101f8e481eac68710a318492 |
We suggest that these errors could be averted by applying a different approximation for the leading-order behaviour, such as rational approximations {{cite:3163c74bf14ce407fd5b65fb3913f27776af9278}}, {{cite:fd1dcfeccf96065b01152af6ca7dc4298721fc07}}, which can approximate it to arbitrary precision, although this is beyond the scope of the present study. Even with these errors in the amplitude calculation for smaller values of {{formula:89f3f0a6-d5c7-4df0-9526-ba6fd4c69816}} , the comparisons in Figure REF show that this method is particularly effective at identifying anti-resonance conditions and can accurately predict the oscillation amplitudes for a wide range of system parameters.
{{figure:1c3b0f26-fc86-4866-8272-75d98d0d935d}} | r | 1758986e5c6805b2dc200827911bf821 |
Moreover, although the proposed VoxelMorph-based cardiac motion extraction method can capture the frame-to-frame motion with sufficient accuracy, as shown in this work, our ongoing and future efforts are focused on further improving the algorithm by imposing diffeomorphic deformations {{cite:ab26c72682392a3d7e15ab77b4745b6a0580b949}}. This improvement will help maintain a high quality of the meshes and prevent mesh tangling and element degeneration, especially for the systolic phases.
| r | 320ee231c1c07b3ba88c43c2ae346b92 |
where {{formula:efe49f27-197e-4a6f-a944-63b1701da3a3}} and {{formula:232858aa-1aec-40b6-a62d-5de8d4657dd4}} denote the probability measures
under {{formula:f9a71958-9f51-40f9-8543-fba6c07979c3}} and {{formula:5f62e660-1a2b-4b4b-a1b0-48520cc6bde5}} , respectively.
A test {{formula:99d04efe-d680-4399-8521-394cf279cc2a}} is said to be asymptotically powerful (or asymptotically powerless) if
{{formula:f2cb0cf4-506b-46ea-a691-8689d2b00513}} (or {{formula:613b453c-cd82-4861-87d1-e9a349ccf164}} ).
If an asymptotically powerful test exists, we say that the dense subhypergraph is detectable; otherwise, it is undetectable.
As a first step, we show that the sufficient condition provided in
{{cite:52eda878ef9538b169cf8f80b8487a807eec4319}} is also necessary (see Theorem REF ).
As a byproduct, we propose the hypergraphic clique number test (HCNT)
for testing HPC, which is proven asymptotically powerful.
We next consider the problem (REF ) for general {{formula:e25304de-0e7a-4026-8cad-dd5754891934}} and derive the sharp detection boundary in terms of {{formula:a18506ed-c744-4d83-bd34-0c285c275c7e}}
(see Theorems REF and REF ).
We propose the hypergraphic total degree test (HTDT) and the
hypergraphic scan test (HST), both proven asymptotically powerful. See Table REF
for a summary of our results for testing (REF ) in the two regimes {{formula:4a0408ba-ce08-43f8-9bfd-8fc18a735e34}} and {{formula:db2107a0-34fd-4d98-ac8c-881345e579fa}} , including the corresponding asymptotically powerful tests if they exist, in which
{{formula:a937b9e9-20f4-4073-b0b6-f652cbabc36b}} is the Kullback-Leibler divergence from {{formula:9d884542-04a6-49ba-95b7-699a7579bb9d}} to {{formula:3fd76061-0cc5-4aa9-8a4f-e24a9c580ee0}} defined as
{{formula:466c6fad-9db9-4a2b-81cd-d1423e62c5d4}} , for {{formula:fafa5e21-38be-4203-a4f8-d3e87dd90044}} .
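For concreteness, if the two arguments are Bernoulli edge probabilities p and q, the divergence above presumably takes the standard Bernoulli form (our reading of the placeholder, since the defining formula is not reproduced in this excerpt):

```latex
\mathrm{kl}(p \,\|\, q) \;=\; p \log\frac{p}{q} \;+\; (1-p)\log\frac{1-p}{1-q},
\qquad p,\, q \in (0,1).
```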
{{table:4c702da7-8fc8-4376-abd3-65ede7fe7acb}} | i | 945f8eee03524df5cf2a8a7d7c1223ed |
The next generation of gravitational wave detectors are expected to come online in the 2030s. Those set to explore the milli- and deci-Hertz regimes, such as LISA {{cite:49099c748ca28f20613878c4c453862ba27b13e7}}, Taiji {{cite:bece08eeae89ffacd334a596ef2288212432dc42}}, DECIGO {{cite:2287b3332c9c9f71c18fe5afab186b61cb1e4d81}} and TianQin {{cite:79846ccc2b066b1fc468bb12b31a748a61144882}}, will open a new window for gravitational wave discoveries. They will have a much lower frequency range than the current LIGO {{cite:1c1c2d27a25bea7b50c2fa5c56f109361bf254a5}}, Virgo {{cite:1efc3ace549f0e4d1546ac1849f951bc884f3a57}} and KAGRA {{cite:f593d4a39a10c943aadfdf333aefc0acf4c6e392}} detectors. For example, LISA is expected to be sensitive in the range {{formula:d8ce024b-23a5-4f39-9443-91ba11581e96}} , meaning that black hole (BH) binaries with much larger chirp masses will be detectable. Moreover, these sources will stay in band for long durations, up to weeks, months or years in some cases, especially for intermediate mass ratio inspirals (IMRIs) and extreme mass ratio inspirals (EMRIs), which take longer to inspiral than more equal-mass binaries.
Observations of IMRIs and EMRIs provide a unique opportunity to learn about the environments of the binaries {{cite:15eeb61582d5e45e2a50e7fdbc45bf3ff5ee5e97}}, {{cite:0c755973d2ae35ff89995df1ed070e27ea788301}}, {{cite:8fbc01aacd8186cf826abc84537cfa9cc98bb682}}. This is because not only will the binaries stay in the sensitive range of the detector for a considerable amount of time, allowing the imprints of environmental effects to accumulate in the gravitational waveform, but also the environment of the central BH is more robust to disruptions by a much lighter companion object {{cite:c4b758d0b3c56d90de5c7d1a8d36a242109452fd}}.
| i | 2dac5a01a2dabdad5855bc2a6f94a0a9 |
Deep neural networks (DNNs) are remarkably successful at handling many computer vision tasks; however, they have been shown to be vulnerable to adversarial perturbations of inputs {{cite:23e54cea81062008f2e2d1a96735cc2a887bd25a}}.
Adversarial perturbations are physically or digitally distorted input examples that can attack and fool the learned model into producing an intentionally fabricated or unexpected result {{cite:23e54cea81062008f2e2d1a96735cc2a887bd25a}}. DNNs for many visual intelligence tasks, such as image classification {{cite:23e54cea81062008f2e2d1a96735cc2a887bd25a}}, {{cite:04b2ad04910df59e95b989be47a24b516a220607}}, {{cite:8e7de3d4fa39ed611b6de6964fe0f0cf1201575f}}, segmentation {{cite:35c81122e12e5077c798dbf4b9816cddea076b7f}}, and detection {{cite:8874d71860afdb2bfd62d19b30522b60c1e417d0}}, are shown to be highly vulnerable to them. However, it has never been examined how much and what kinds of perturbations are detrimental to image-to-image (Im2Im) tasks. Im2Im frameworks are inherently more complex and sophisticated than pure classification-related problems.
{{figure:6930f326-bc5f-4996-bdb6-63888f4e8492}} | i | 470215cce38fe1a09796aac479486520 |
The key idea of the proposed MMFF is to combine the temporal feature in the skeleton modality with the spatial feature in the single RGB frame. In this section, we first present an overview of the method. We then briefly review the ST-GCN {{cite:cd8cf187ddfe4a3262808969f8a0ada3b78068e7}} and Bi-LSTM {{cite:42662090928ece4ddac38ddc6d379bf556ea4029}} networks, which are the backbones of the skeleton stream. Next, we present the data enhancement technique. Finally, the RGB stream and the two-stage multi-modality fusion module are introduced.
| m | bbc7e3a0e8e5a0a4006ee46e25e24209 |
DeepFool attack {{cite:ca9ddc08912329ed2512bd5bc99bcbe8e4a004eb}} The perturbation is updated by {{formula:e77f6085-6ca8-461c-a66c-278d58671937}} , where {{formula:6ce8ac17-b8c8-4bb8-b55d-13e4d8987128}} maps to the logits of the classifier and {{formula:1bc09e3f-320b-4cb4-8197-92f3d842c0a1}} . Similarly to the C&W attack, DeepFool attacks aim to find the smallest perturbation such that the counterfactual individual is negatively classified. The results of the DeepFool attack, shown in Table REF , are comparable to those of the C&W attack.
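A minimal sketch of the DeepFool idea for the special case of a binary classifier is given below (a simplification of the multi-class rule summarized above; the score function f and its gradient grad_f are user-supplied assumptions). Each iteration steps to the closest point on the linearized decision boundary until the predicted sign flips.

```python
import numpy as np

def deepfool_binary(x, f, grad_f, max_iter=50, overshoot=0.02):
    """Minimal binary DeepFool sketch. f returns the signed classifier
    score at a point; grad_f returns its gradient. The small overshoot
    pushes the iterate just past the linearized boundary."""
    x_adv = x.astype(float)
    for _ in range(max_iter):
        score = f(x_adv)
        if np.sign(score) != np.sign(f(x)):   # label has flipped
            break
        g = grad_f(x_adv)
        # closest point on the linearized boundary f(x_adv) + g . r = 0
        r = -score * g / (np.dot(g, g) + 1e-12)
        x_adv = x_adv + (1 + overshoot) * r
    return x_adv
```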
{{table:8fe8d361-ef3f-40b5-b378-44525ee06b06}} | m | 0758bd51d9d8ad34a2dbe1e0afaeb937 |
In this work we introduced a method to search for simple neural network architectures with strong inductive biases. Since networks are optimized to perform well using a shared weight over a range of values, this single parameter can easily be tuned to increase performance.
Individual weights can be further tuned from a best shared weight. The ability to quickly fine-tune weights is useful in few-shot learning {{cite:094edcc286de586379c0d68ebf43ef8ea7fd010e}} and may find uses in continual learning {{cite:5ff39e0ffbb2c0ccd80fdce0779b10d79563943d}} where agents continually acquire, fine-tune, and transfer skills throughout their lifespan, as in animals {{cite:572f0afd9c9a9a737cd6e4db897bde99155342c7}}.
Inspired by the Baldwin effect {{cite:e0b1ffae4530be195994d3edbb60aa135449a0e7}}, weight tolerant networks have long linked theories of evolution and learning in AI {{cite:94813824cdaea6557953b8188c27a2cf49ae745b}}, {{cite:8e8489eb7661b1e3107cdbfca6aeaa76748e8ba9}}, {{cite:55df466493e828ffb743995c1ad6d052cdda90a9}}.
| d | 0bd58a2242e94e1ef0cf6185e89a120e |
The results above, connecting both the radio and {{formula:c938c83b-302b-4846-8fb8-de2becae3322}} emission of low-luminosity compact sources to magnetically dominated reconnection processes in their nuclear regions, though preliminary, are very promising, as they suggest a single unifying process of relativistic particle acceleration in the core regions of low-luminosity AGNs and compact galactic sources that naturally interprets the Fundamental Plane {{cite:635fae5799ad4228ec5a667e8cfcd5a1b8888c99}}. In forthcoming work, we intend to extend the analysis of the diagram above to more radio compact sources with {{formula:92d5f023-8a43-4d0d-8de4-43abaf7efac5}} -ray emission counterparts in order to reinforce the present conclusions.
| d | c27a92a395de936cbef77a5be307577d |
The mixed performance of D-MPNN {{cite:28ecf919688987f033a210ab7973448e64ac4874}} on QM8 and ESOL may also be due to the basic sum readout function. However, the ability of D-MPNN to surpass the standard MPNN + set2set readout on QM8 indicates that the global information-propagation bottleneck is actually more impactful in this case. For the significantly larger molecules in ESOL, the sum readout greatly limits the expressivity of D-MPNN.
| d | 098871d38c79ef4caf727b62afbac1e7 |
Existing DA algorithms {{cite:de14cb969f9b29730fb9b1d501dfd9c52e18c3d7}}, {{cite:4e9db683e5017f69d122df50e574b56e600de7c1}}, {{cite:8eecfe0dc98dc9da694352116ed539ba7894cefc}} train target models and then use the target models to classify all samples in the target dataset. While this approach attempts to address the problem that the source and target data come from different distributions, it fails to recognize the possibility that, within the target domain, some samples are closer to the distribution of the source domain than to that of the target domain. In this case, using a target model specifically trained to classify samples that are in-distribution with the target domain no longer makes sense.
| i | 0ced1e8f2d7d23224b4a75a63e875e6b |
Conventional adversaries reveal intriguing properties of the learned representations in deep neural networks.
However, as a means of attacking real systems, they pose limited threats outside of the digital domain {{cite:9eb2aceb57f45f9d97caf58642c26fdae5ac798d}}.
Given our results and related work, a focus on adversarial features and robust, physically-realizable attacks will be key to understanding practical threats.
Importantly, even if a deep network is adversarially trained to be robust to one class of perturbations, this does not guarantee robustness to others that may be used to attack it in deployment.
For better or for worse, feature-fool attacks are effective and easy to make using pretrained models.
Consequently, we argue for focusing on pragmatic threats, training robust models (e.g. {{cite:d62432b83a68e7f96a6ee114af34551299e4a115}}, {{cite:55a68d831a17cbbe409c88258094c3103b01ceb5}}), and using caution with deep networks in the real world.
As a promising sign, we show in Appendix REF that adversarial training is useful against our attacks.
| d | bbe3bb2122c9efef247fbd953f6fe368 |
For variables associated with time-series data, dynamic linear models (DLMs), an important class of state-space models, are used to describe the time-varying relationship between a collection of independent variables and a response {{cite:7047e93f62c8ba9556a8226baad20adb599a9521}}, {{cite:544cefdb5be5d4c14c869a05339b9f341d722861}}. In addition, the computation is straightforward, as the Kalman filter provides closed-form expressions for estimation and forecasting.
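As an illustration, the Kalman recursions for the simplest DLM, the local level model, fit in a few lines; the noise variances and initial moments below are assumptions for the sketch, not estimated quantities.

```python
def local_level_filter(ys, var_obs=1.0, var_state=0.1, m0=0.0, c0=1e6):
    """Kalman filter for the local level DLM:
      y_t  = mu_t + v_t,        v_t ~ N(0, var_obs)
      mu_t = mu_{t-1} + w_t,    w_t ~ N(0, var_state)
    Returns the filtered means and variances of mu_t."""
    m, c = m0, c0
    means, variances = [], []
    for y in ys:
        r = c + var_state            # one-step-ahead state variance
        k = r / (r + var_obs)        # Kalman gain
        m = m + k * (y - m)          # filtered mean
        c = (1 - k) * r              # filtered variance
        means.append(m)
        variances.append(c)
    return means, variances
```

The one-step-ahead forecast at each time is simply the current filtered mean, which is the closed-form convenience referred to above.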
| i | 939a8939a40c9acefb0943d0d395b723 |
One of the main limitations of PDPs is that they bear the risk of providing misleading results if applied to correlated data in the presence of interactions, especially for nonparametric models {{cite:1ac05c4450418906396071e9ef417301720ed37b}}.
However, existing alternatives that visualize the global marginal effect of a feature, such as accumulated local effect (ALE) plots {{cite:d43a38abadaca03b4962d2e58cc2bd01b1d89bbd}}, also do not provide a fully satisfying solution to this problem {{cite:1ac05c4450418906396071e9ef417301720ed37b}}. As a solution to this problem,
{{cite:1ac05c4450418906396071e9ef417301720ed37b}} suggests stratified PDPs by conditioning on a correlated and potentially interacting feature to group ICE curves.
This idea is in the spirit of our introduced tree-based partitioning algorithm.
However, in the context of BO we might assume the distribution in Eq. (REF ) to be uniform, and therefore that no correlations are present.
Instead of correlated features, we are faced with a sampling bias (see Section ) where we observe regions of varying uncertainty.
Hence, instead of stratifying with respect to correlated features and aggregating ICE curves in regions with less correlated features, we stratify with respect to uncertainty and aggregate ICE curves in regions with low uncertainty variation.
Nonetheless, it might be interesting to compare our approach with approaches based on the considerations made by {{cite:1ac05c4450418906396071e9ef417301720ed37b}} – or potentially improved ALE curves.
| d | ee8f35a667acb267f885c593b3eb0126 |
where {{formula:be8a6502-edfa-4032-a630-13c39b775c99}} denotes the original signal. Achieving an optimal value for {{formula:70e910cb-b311-429f-a832-717b8bd141a8}} often requires a costly optimization formulation, since the difference between {{formula:c9afd680-0363-4c39-87b5-25badea0ff27}} and {{formula:a7cc7ba2-7bb4-412a-be99-dcb8d91db917}} should be inaudible. Toward satisfying this condition, many optimization algorithms have been developed. However, the majority of them implement a convex formulation largely inspired by Carlini et al. {{cite:f60e857ffacb94385b0679b7c3cd2b7c9b17a6b4}}, of the following form (C&W attack):
\|\delta\|_{F} + \sum_{i} c_{i}\,\mathcal{L}(x_{\mathrm{adv}}, y_{i})
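A sketch of minimizing this penalized objective with plain gradient descent on the perturbation is given below. The model, loss function, and trade-off constant c are user-supplied assumptions, and the psychoacoustic masking constraints used in practical audio attacks are omitted.

```python
import torch

def cw_style_attack(x, targets, model, loss_fn, c=1.0, steps=200, lr=1e-3):
    """Sketch of the penalized C&W-style formulation above: minimize
    ||delta||_F + c * sum_i loss(model(x + delta), y_i) over delta by
    gradient descent. No imperceptibility masking is applied here."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = x + delta
        obj = delta.norm() + c * sum(loss_fn(model(x_adv), y)
                                     for y in targets)
        opt.zero_grad()
        obj.backward()
        opt.step()
    return (x + delta).detach()
```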
| i | 75e019f522ebe32f261014dd4c293e88 |
Token Semantics Analysis of Source Code When compilers translate source code to IR, partial semantics of tokens is lost, since all variable names are automatically normalized and replaced by LLVM value identifiers. This may lead to a failure of semantic analysis, as important information is contained in the variable names. For example, CuBERT {{cite:cc2f7d62ccf90f8daffdec83e00b50d01a904011}} claims that it can detect that the following code written in Python is buggy:
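The referenced snippet is not reproduced in this excerpt; the following hypothetical example (ours, not CuBERT's) illustrates the kind of bug meant here, where the variable names carry exactly the semantics needed to spot the misuse:

```python
def average(values):
    total = 0
    count = 0
    for v in values:
        total += v
        count += 1
    # Bug: divides the wrong way around; the names "total" and "count"
    # are what make the misuse detectable at the source level, and they
    # disappear once the compiler normalizes identifiers in the IR.
    return count / total  # should be: total / count
```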
| d | 5a356e7bd52097622b3f93ff9d454323 |
We will first take a look at the claimed performance of the DP-Fed-Avg GAN. {{cite:5bd1e0b7ff9c17bd3a8b167eb27f678bed1039e7}} claims that the DP-Fed-Avg GAN reaches an epsilon of {{formula:a85d2192-ad36-4f68-9bf6-10052bcf3623}} in the simulation run in code. This is far above any useful privacy guarantee: it essentially means the chance that the GAN generates an actual training input (thus breaking privacy) is enormous. However, the researchers note that this is due to the small number of sampled users (around 10 out of 3000) in the simulation. In a realistic scenario, with millions of users, they report an epsilon around the 1.4 mark, which is a much more practical privacy-preservation level.
As for the FID, we see a claim around the 200 mark, which is not amazing, but considering the noise added for differential privacy, it is definitely acceptable.
| d | 25f279659acfe03d4dc5f039a27a56f2 |
At least in the present scenario, considering a collapsing star of two solar masses, it is found that the torsion contribution to the stress-energy tensor is comparable to the dust one when the radius is of order {{formula:ae4759c4-634d-433d-880c-efa602c53030}} m. At roughly the same radius, all the energy conditions are also violated. We saw that this radius is much bigger than the radius at which the density becomes Planckian, {{formula:c9c9a804-300e-4d67-87c8-cf876ba87f07}} m, at which point one would need a full quantum-gravity treatment. This indicates that if the constituent particles of the collapsing dust star have an intrinsic spin, then at some point of the collapse the discrepancies between Einstein-Cartan theory and Einstein gravity would become evident, and because of this the formation of a singularity could in principle be avoided, possibly leading to some form of regularized black hole interior {{cite:e07ac1197cea558d4823748ee9a46b71c769d764}}, {{cite:de8b13690f1b2c81012d460a4a123d8c99927714}}. This conclusion also seems to lend support to the outcome of the numerical investigation carried out in {{cite:25457350ff1d57ccf3c18b22e9eb9b791235f378}}. There, an OS collapse in ECT was also considered, and it was found, via a numerical analysis, that the singularity formation is resolved by a bouncing geometry (another typical scenario in quantum gravity settings {{cite:735fd1f8addceeb75337ed6f38faa20db753f1bb}}, {{cite:547ad911157367ed2dd35d0d9cd26dba6140d6c5}}, {{cite:ded637a02d384dd298b327270a8f35c306b03917}}, {{cite:af26c9eae337db469b5e720eade757c1362ff1d3}}). We hope to further explore this possibility in future work.
| d | d21b5713e3ea01d88c0897245a1594bb |
In a separate study, {{cite:8060bb0dad094329f60943e10fc37d5eb20d475a}} resolved the Fe K{{formula:1028dbbc-213e-414e-9367-0e7cba1f6bb2}} line region in Cyg X-1 through spectral analysis. The Chandra X-ray observatory detected the source in an intermediate spectral state during these observations. The authors observed a narrow line around 6.42 keV along with a broad line feature at {{formula:72cd51fa-7071-4913-a726-a3ded6df0c2f}} 5.82 keV and a smeared edge at 7.3 keV. These results support our findings, as the study reported a photon index around 1.8 which is close to the value we found during E1. The value of R{{formula:b34c7a02-2a11-41cc-b69a-3d49c6997907}} reported in that study is similar to the value of R{{formula:3d10d8cb-c5bd-4996-b0ad-f0ad13f5a2cf}} found during E1 in this work. Two different lines and a smeared edge are not observed in our study, either because of the superior spectral resolution of Chandra in comparison to NuSTAR {{cite:7677c8aad4c4d9018c75d561a7d23dfb8d7a295f}}, {{cite:b35c17b400a80108e408939a67676ac0b9292f98}}, or because they may not be present in GRS 1716–249.
| d | cb6685c4e0f0e3f4d0c57a1b8c70ae81 |
The rapidly increasing size of deep neural networks, along with advancements in design and training, has made them perform well on a broad spectrum of tasks. Large-scale models improve predictive performance but significantly escalate production costs through slower inference, severely limiting the adoption of deep models on resource-constrained edge devices such as wearables with limited battery and computational power. Binary neural networks (BNNs) have become promising methods for obtaining highly compact and efficient models for deployment on resource-constrained devices, owing to the extreme compression and speed-up gains over their real-valued counterparts {{cite:6505b1e995e8274de643006c06f91a823b63a613}}, {{cite:4a5754422e82b395aad8fe865020738e1c95feec}}. A complementary technique for accelerating deep models is dynamic, input-dependent prediction generation, which has recently gained notable traction and has become widely known as the early-exit architecture {{cite:7d29e3193ef2d7e091b370031107caebc35eb077}}, {{cite:ff055036a04b5b1bd66dfebd48d0253186c4c6da}}, {{cite:b3f81bec06895da8deb63ccf9365232c187bd841}}, {{cite:4da0071112298fcedaf3158e7b426dc2cecc493e}}, {{cite:d7cb5bdfdbc4b65a71878258722069e766d90ae9}}, {{cite:acdf912dee4be0c78a7b7d7d83bfa042ab360b37}}, {{cite:d4f3ca0bbd026483d5d530647c83df0662eb8233}}, {{cite:fa3bf885bae507e01fd512095619cdb97ce922ea}}.
| i | 4c000d77d5dea703cec14042b3cad78b |
It is to be noted that the present reasonable success of the SCM, which assumes no geometrical {{formula:58327b73-bd39-477d-bcb9-ab3c6d044ff4}} cluster configuration, does not mean ruling out the geometrical {{formula:bac5bb59-097a-4956-95fb-e54c3a38cd63}} cluster model. It should be emphasized that the SCM only describes the condensation, superfluid, aspect of the duality of the {{formula:567465d6-42b2-4ca8-b871-64bd3e0bce4c}} cluster structure, being complementary to the geometrical {{formula:4010df4f-8d28-4820-809a-9d0f5a1c1b9c}} cluster picture. In the {{formula:325b7777-f0d4-44b9-94ad-b281dcc6188b}} cluster structure the geometrical structure is essential. The SCM does not replace the geometrical {{formula:d6f1b3c4-7681-4364-b099-ce5b4c1588bc}} cluster model. For example, the rotation motion, the large moment of inertia, is caused by the spontaneous symmetry breaking of the rotational invariance due to the geometrical {{formula:78779c9b-4990-4f5d-b90d-eed7668c09a0}} cluster configuration, which is absent in the present SCM under a spherical trapping potential.
The reduction of the moment of inertia in the SCM calculations is inherent to superfluidity, which is well known in the superfluid heavy nuclei {{cite:e66565147c0d68bd4b4c8548995e450f9211de2a}}.
A cluster model with a geometrical configuration which involves the order parameter to characterize the condensation of {{formula:072e720d-6b13-4418-a903-25976650ebd2}} clusters is a future work to be studied.
| d | a13f898f2c9f019c3d09a100def2e973 |
we consider the class of General Linear Methods {{cite:881410cbf6911c857bc8698fbb5f9a672b37125e}}, {{cite:4c4c47708a47acf541430120aa43fe7c579dafc8}}
{{formula:746743c0-7eac-41b1-bed3-954c962a4196}}
| m | 9ad39b7b532647abdfc852bc15c74069 |
Atoms cooled by DLC in the tube are detected by measuring absorption of a resonant probe laser along the direction of the tube's axis, as shown in Fig. REF (b). The diameter of the Gaussian probe beam is 1 mm, with an intensity of around 100 {{formula:ea098e52-d566-4072-a4fb-a9247cb59631}} W/cm{{formula:4b2fa546-cafd-431a-9885-1170dfdbefc5}} . Generally, the OD of the cold atoms is given by the relation {{formula:d27de7c4-0e9d-44e4-bbcb-8f12c93bc21f}} , where {{formula:c4642b77-d5af-4b00-bbec-2fcc4016bec3}} and {{formula:2d9f2f33-2674-4399-b493-f2add75532f3}} are the incident and output intensities, respectively. The number of cold atoms {{formula:9755b1d2-0e66-4a59-9e69-ae9af498d209}} obeys the relation {{formula:c0bcb0fe-8664-4e67-899f-97638805bfd1}} , where {{formula:c2c6b488-ad62-4212-8bec-77c334c75358}} is the cross section of the atom-light interaction. The maximum number of cold atoms is reached at the balance between capture and loss of laser-cooled atoms. A straightforward phenomenological model may be given as {{formula:953cc5ef-a669-4f40-8c9f-f1ae715c9275}} , where {{formula:7a79805f-18ab-4a4e-8919-2e8e70a14a79}} describes the loss and {{formula:a3ec119c-7a2f-4d23-b684-211b18fb6976}} describes the replenishing rate of newly captured cold atoms {{cite:7b0f78e15657e322ec78c503abfd0a9499b49f71}}. Therefore {{formula:290b069a-9bee-47b9-9e5d-21ea6fb89ebb}} , and {{formula:4c0bc08d-409f-44d7-a7de-10d27f3814f5}} when {{formula:c46c3258-2c4f-416e-8196-93d3a8e2b13c}} is long enough.
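Assuming the phenomenological model takes the standard loading-equation form dN/dt = R - Gamma * N (consistent with the stated long-time limit), a quick numerical sanity check of the closed-form solution N(t) = (R/Gamma)(1 - exp(-Gamma t)) is sketched below; all numbers are illustrative.

```python
import numpy as np

# Sketch: Euler-integrate dN/dt = R - Gamma * N and compare with the
# closed form N(t) = (R/Gamma)(1 - exp(-Gamma t)); both saturate at R/Gamma.
R, Gamma, dt = 1e7, 0.5, 1e-3          # illustrative values: atoms/s, 1/s, s
ts = np.arange(0.0, 20.0, dt)
N = np.zeros_like(ts)
for i in range(1, len(ts)):
    N[i] = N[i - 1] + dt * (R - Gamma * N[i - 1])
closed = (R / Gamma) * (1.0 - np.exp(-Gamma * ts))
assert np.allclose(N, closed, rtol=1e-2)
```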
| r | 6e7e85e654e17cdbeaed19cf7cdde53d |
Following 't Hooft, we will use Feynman diagrams with double lines {{cite:ca830c26a78014350dc491c6dc2710996da7aa11}} in the planar limit{{cite:f6d4a3192a3a99208ff0c8fde8a0594662d3813c}}, when {{formula:e9c23a0b-86e6-4fa4-9e24-f0b84fd47270}} . We will calculate the planar limit by the following expansion in Feynman graphs. We expand the resolvent {{formula:1549945a-7da0-4052-aec1-835676d0ba3d}} in powers of bare propagators
{{formula:474b3cf5-db8d-4947-9fad-5b132a91d285}}
| m | d8ce1542b579b8563b1bddc85e27d463 |
the gradients {{formula:d87e48ab-05c3-4ee3-95fb-d5194225f94f}} converge in direction, then the limit direction {{formula:95a46837-3ccc-46df-b1ab-91dc3ddf5b35}} is given by,
{{formula:d5fd5956-4e19-44d1-834d-9822ce6c6157}}
We already see how introducing a single convolutional layer changes the implicit bias of gradient descent—even without any explicit regularization, gradient descent on the parameters of a convolutional network architecture returns solutions that are biased to have sparsity in the frequency domain.
Furthermore, unlike for fully connected networks, for convolutional networks we also see that the implicit bias changes with the depth of the network, as shown by the following theorem.
Theorem (Linear Convolutional Networks of Any Depth).
For any depth {{formula:02f87970-26f8-49e2-9867-8b098880d12d}} , under the conditions of Theorem REF ,
the limit direction {{formula:548bbf17-ea19-4bba-8186-42dd2466bf06}} is a scaling of a first order stationary point of the following optimization problem,
{{formula:4a136a5d-c614-4ffb-a70b-89c7ec6a1359}}
where the {{formula:be3b94f0-9ccc-4441-91af-b031e44e9e69}} penalty given by {{formula:a7242ee2-35c0-485d-ad72-4f6abafd3323}} (also called the bridge penalty) is a norm for {{formula:a21667c8-1359-4485-887a-d5cedf026abb}} and a quasi-norm for {{formula:fb15652f-487d-4d0a-9b20-67ccd261af2e}} .
When {{formula:f2617d16-9d9e-4255-a75a-105ac7622f9b}} , and thus {{formula:98420a39-9277-4d48-8f2c-9be22190ac41}} , problem (REF ) is non-convex and intractable {{cite:9a278e44004248ef82de7ad7b2baacbcb23936d4}}. Hence, we cannot expect to ensure convergence to a global minimum. Instead, we show convergence to a first order stationary point of (REF ) in the sense of the sub-stationary points of {{cite:ca0210539886c7c4ddeea948585f88fd587742c5}} for optimization problems with non-smooth and non-convex objectives. These are solutions where the local directional derivatives along the directions in the tangent cone of the constraints are all zero.
The first order stationary points, or sub-stationary points, of (REF ) are the set of feasible predictors {{formula:0425b34f-f461-40a4-9650-d123dd20bdd6}} such that there exists {{formula:e9ce6e74-6ae4-4961-a8ee-272f6b9e3b29}} satisfying the following:
{{formula:b164f03b-98d7-47af-9589-06d23a6ca465}} , {{formula:7588ddae-f567-4725-92d6-09d7b9cc765b}} , and
{{formula:05824e31-538a-47f3-bed1-c31c9d53862d}}
where {{formula:adb510f5-ed65-4e9c-b0dc-3e08c5d59a81}} is the Fourier transformation of {{formula:81f504fd-b6b2-48bd-82b4-aba3a3e8bc7f}} , and {{formula:847713d6-a0f7-4517-b4a6-46c23fcf04eb}} denotes the local sub-differential (or Clarke's sub-differential) operator defined as
{{formula:dba2042e-11a5-40be-aa7d-982ccd2f68bd}}
For {{formula:64e634cd-d32f-4b2e-9cd0-1fd8775a8bbc}} and {{formula:864e0db1-b36d-4b7d-ada6-9fb976a194d6}} represented in polar form as {{formula:19f5da9a-09ea-4944-860f-675be2792a8e}} , {{formula:c74bcf06-5bc1-4b12-b601-8459f8872faa}} is convex and the local sub-differential is indeed the global sub-differential given by,
{{formula:799ba5e1-2d5c-41ce-a2b2-3b9cb4e78e3b}}
For {{formula:02e5f69b-0807-4a7f-bc79-2b743d6533e4}} , the local sub-differential of {{formula:de8594d1-e369-43b9-a7b1-89078690b7ab}} is given by,
{{formula:689b69b9-fd27-4fdd-90f5-95787b856b47}}
Figures REF –REF summarize the implications of the main results in the paper.
The proof of this Theorem, exploits the following representation of {{formula:f7ae28d2-22c4-43a2-aef6-6f35e8a75ed5}} in the Fourier domain.
Lemma. For full-dimensional convolutions, {{formula:8998459c-07d9-41da-b8c2-1d4ed9cc5d53}} is equivalent to
{{formula:1d035152-6257-4b44-9af6-8b3a6092822c}}
where for {{formula:b6eae825-f63c-4527-b670-3ad318d7f814}} , {{formula:6ca8c3f1-0f6a-4828-ad39-c6107cd42dad}} are the Fourier coefficients of the parameters {{formula:342348ac-637a-4961-8adc-85729a0acc6b}} .
From the above lemma (proved in Appendix {{formula:bdd804ad-27e0-46fa-a01a-fd113545dbd1}} ), we can see a connection between convolutional networks and a special network in which the linear transformation between layers is restricted to diagonal entries (see the depiction in Figure REF ); we refer to such networks as linear diagonal networks.
The proofs of Theorem REF and Theorems REF -REF are provided in Appendices {{formula:f3bde7fe-5a77-4ae2-b31d-440debff7076}} and {{formula:312e2511-8d85-4238-9f71-025808a681f1}} , respectively.
Understanding Gradient Descent in the Parameter Space
We can decompose the characterization of the implicit bias of gradient descent on a parameterization {{formula:0d4eac25-1d9e-4db1-84ac-5bbdc547b432}} into two parts: (a)
what is the implicit bias of gradient descent in the space of parameters {{formula:d7fe5b32-fb4c-4da1-8bc0-af901ab01188}} ? and (b)
what does this imply in terms of the linear predictor {{formula:c58a6d23-6ff3-4a77-86b8-966afc5b5a41}} , i.e., how does the bias in parameter space translate to the linear predictor learned from the model class?
We look at the first question for a broad class of linear models, where the linear predictor is given by a homogeneous polynomial mapping of the parameters: {{formula:d67dac89-cd82-4147-b98b-383f47c68fc9}} , where {{formula:605cea73-2bf2-4c48-aeec-0d95b57cd837}} are the parameters of the model and {{formula:6f1e77d3-9e52-4571-bdb9-35ae88f3cba2}} satisfies definition below. This class covers the linear convolutional, fully connected networks, and diagonal networks discussed in Section .
Definition (Homogeneous Polynomial) A multivariate polynomial function {{formula:eab00333-eb06-44c7-80f5-a3a609b8896d}} is said to be homogeneous, if for some finite integer {{formula:af7a70da-656a-4b79-be47-dc72663e6e2b}} , {{formula:9c174040-a122-426d-9ea8-c9ef6c050fd6}} , {{formula:24328c52-ad71-4948-9bf2-b7629f3d066a}} .
Theorem (Homogeneous Polynomial Parameterization). For any homogeneous polynomial map {{formula:2aac9be2-f8af-4ef1-9de7-aaf45a22be44}} from parameters {{formula:da0afe15-9fad-467b-bb5e-1b7d30558160}} to linear predictors, almost all datasets {{formula:9bf85e04-65bb-4f27-96c9-0a1cd27c8f9a}} separable by {{formula:c1e0721d-8e82-413c-baa5-6da5a0ebc568}} , almost all initializations {{formula:facc4972-67bf-4375-9622-c33be41fedd8}} , and any bounded sequence of step sizes {{formula:1e9d8841-741e-42a7-b2b3-50db3a440626}} , consider the sequence of gradient descent updates {{formula:1857edb7-8fd6-4eef-b21a-6cfa103f1ece}} from eq. (REF ) for minimizing the empirical risk objective {{formula:c60c365c-2cd7-42ab-ab56-cc54b8beedf6}} in (REF ) with exponential loss {{formula:ebfed5ec-9876-46de-94ba-a5f0fe44910e}} .
If (a)
the iterates {{formula:6e8a6665-79ff-4acf-936a-20bfe22ae3e5}} asymptotically minimize the objective, i.e., {{formula:7c7179ca-76b0-4b4d-85c3-1d6c21d2df32}} ,
(b) {{formula:1a748b32-8a48-4361-a65f-20667120b4b5}} , and consequently {{formula:bfda8fb7-e82a-41c5-9b87-4d3a3dcdef64}} , converge in direction to yield a separator with positive margin, and
(c) the gradients with respect to the linear predictors, {{formula:5dddc29e-1ee4-4a6d-996c-07e70ae4b29e}} , converge in direction, then the limit direction of the parameters {{formula:613257fb-3ec7-41c2-a581-6ac57239ecac}} is a positive scaling of a first order stationary point of the following optimization problem,
{{formula:a56a57c0-b3f4-4987-80cc-c30f7e3f5839}}
Theorem REF is proved in Appendix {{formula:f0a52141-887f-448d-b0e8-d3f76225fc48}} .
The proof of Theorem REF involves showing that the asymptotic direction of gradient descent iterates satisfies the KKT conditions for first order stationary points of (REF ). This crucially relies on two properties. First, the sequence of gradients {{formula:b87719ef-40b7-40cf-8348-791be265ce2e}} converges in direction to a positive span of support vectors of {{formula:5861cef4-d658-4947-a438-a2d3c6f0427d}} (Lemma 8 in {{cite:ad2a2234f0a1395d2283a88ef838c6ca38a7f9ad}}), and this result relies on the loss function {{formula:0261e776-4def-4696-8e34-d0af8484ce49}} being exponentially tailed. Secondly, if {{formula:ad911195-f267-450d-80cc-45fd9fe3339c}} is not homogeneous, then the optimization problems {{formula:95760c1c-c649-403c-926f-e9b2594f3480}} for different values of the unnormalized margin {{formula:45d237ab-0d6a-406e-8243-62999d0d8d88}} are not equivalent and lead to different separators. Thus, for general non-homogeneous {{formula:cd0a6b0b-e576-4e46-8840-4dde14ea08bc}} , an unnormalized margin of one has no special significance, and the necessary conditions for first order stationarity of (REF ) are not satisfied.
Finally, we also note that in many cases (including linear convolutional networks) the optimization problem (REF ) is non-convex and intractable (see e.g., {{cite:9a278e44004248ef82de7ad7b2baacbcb23936d4}}). So we cannot expect {{formula:dcf75ac4-cc53-4f2a-9914-f324a83e08bc}} to always be a global minimizer of eq. (REF ). We suspect, however, that it is possible to obtain a stronger result: that {{formula:8c9ac0f8-4c6b-47fb-a586-45eafec22dbe}} reaches a higher order stationary point or even a local minimum of the explicitly regularized estimator in eq. (REF ).
Implications of the implicit bias in predictor space
While eq. (REF ) characterizes the bias of gradient descent in the parameter space, what we really care about is the effective bias introduced in the space of functions learned by the network. In our case, this class of functions is the set of linear predictors {{formula:f829a57b-74e6-4374-abb0-a56c2e2aec99}} .
The {{formula:cc3f67a6-37b1-4ede-9b0c-8828c1d3a14a}} norm penalized solution in eq. (REF ) is equivalently given by,
{{formula:20da5d08-cd54-4de3-b85e-e33c829c8b0e}}
The problems in eq. (REF ) and eq. (REF ) have the same global minimizers, i.e., {{formula:bf1d3d40-ef66-4f17-b27e-d7dbec837af0}} is global minimizer of eq. (REF ) if and only if {{formula:afc1ced8-4438-4a61-8bed-60652eb75397}} minimizes eq. (REF ). However, such an equivalence does not extend to the stationary points of the two problems. Specifically, it is possible that a stationary point of eq. (REF ) is merely a feasible point for eq. (REF ) with no special significance.
So instead of using Theorem REF , for the specific networks in Section , we directly show (in Appendix) that gradient descent updates converge in direction to a first order stationary point of the problem in eq. (REF ).
Understanding Gradient Descent in Predictor Space
In the previous section, we saw that the implicit bias of gradient descent on a parameterization {{formula:025dfdf2-1538-494a-9c43-bebcbcc574f6}} can be described in terms of the optimization problem (REF ) and the implied penalty function {{formula:b3e13f42-b0df-4a93-8ce9-534684206a4a}} . We now turn to studying this implied penalty {{formula:9dd281e2-91d7-4148-88c4-08eca5e43f4d}} and obtaining explicit forms for it, which will reveal the precise form of the implicit bias in terms of the learned linear predictor. The proofs of the lemmas in this section are provided in the Appendix {{formula:ada92ce6-d55c-429a-abf5-af38b699dcb9}} .
lemmalemfcn For fully connected networks of any depth {{formula:c0d96654-f98b-4a58-8bc3-7b3305e5d7f9}} ,
{{formula:12d1ce15-82d1-481d-a234-5f986e2069ba}}
We see that {{formula:984da09c-a64a-45c0-a81e-a4b16e07a2c5}} in eq. (REF ) for fully connected networks is independent of the depth of the network {{formula:8f38929d-e4d9-4e34-a76e-2a594ac22921}} . In Theorem REF , we indeed show that gradient descent for this class of networks converges in the direction of {{formula:3000c96c-00cd-45da-bccb-0a012d475b55}} .
Next, we motivate the characterization of {{formula:0d53c90e-c5ef-4004-acce-377f68599cb4}} for linear convolutional networks by first looking at the special linear diagonal network depicted in Figure REF .
The depth–{{formula:9e09f027-477c-4bb4-ae13-637006dc2291}} diagonal network is parameterized by {{formula:06568e5e-2c85-4d50-84c9-718cf4c56814}} and the mapping to a linear predictor is given by {{formula:58fc249b-71ca-4719-b1f0-7e09edc7e236}} .
lemmalemdn For a depth–{{formula:3d253a36-8879-459d-8d11-238ed72af1cf}} diagonal network with parameters {{formula:bd0471c0-48c5-49d3-89d2-a6c8ed6f1834}} , we have
{{formula:03fa01b6-8ebe-4a3d-b002-4e78fb33ac7f}}
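The following minimal sketch illustrates the per-coordinate AM–GM calculation behind this lemma, assuming the elementwise-product parameterization of the diagonal network (the linear predictor given by the coordinatewise product of the L layer vectors); the exact normalization of the penalty is the one in the lemma's statement.

```python
# Diagonal-network penalty sketch: any factorization of beta into L
# coordinatewise factors pays sum-of-squared-norms at least
# L * sum_i |beta_i|^(2/L), and a balanced factorization attains it.
import numpy as np

L, d = 3, 5
rng = np.random.default_rng(1)
ws = [rng.standard_normal(d) for _ in range(L)]
beta = np.prod(ws, axis=0)                     # coordinatewise product of layers

lower_bound = L * np.sum(np.abs(beta) ** (2.0 / L))   # per-coordinate AM-GM
cost = sum(np.linalg.norm(w) ** 2 for w in ws)
assert lower_bound <= cost + 1e-9

balanced = [np.abs(beta) ** (1.0 / L) for _ in range(L)]
balanced[0] = balanced[0] * np.sign(beta)      # carry the sign in one layer
assert np.allclose(np.prod(balanced, axis=0), beta)
assert np.isclose(sum(np.linalg.norm(w) ** 2 for w in balanced), lower_bound)
```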
Finally, for full width linear convolutional networks parameterized by {{formula:3f56f0ea-078d-424c-9e5a-9e6b23588341}} , recall the following representation of {{formula:fe714f5f-0e00-48d4-ad64-0c6b03ff6af3}} in Fourier from Lemma REF .
{{formula:cfb1b43c-bf00-4173-aa1b-9301b34e6d86}}
where {{formula:c0fa0bd7-b575-4f95-ba0c-ba6ff5ea8f8e}} are the Fourier basis representations of {{formula:c3dc747f-30f0-4a12-8717-0faadfe4b2b4}}, respectively.
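This representation is an instance of the convolution theorem: circular convolution of the layer parameters becomes a coordinatewise product of their DFTs. The numpy sketch below verifies this under the unnormalized DFT convention; the paper's normalization may differ by constant factors.

```python
import numpy as np

D = 8
rng = np.random.default_rng(2)
w1, w2 = rng.standard_normal(D), rng.standard_normal(D)

# circular convolution computed directly from its definition
circ = np.array([sum(w1[j] * w2[(i - j) % D] for j in range(D)) for i in range(D)])

# the same map in the Fourier domain: a coordinatewise (diagonal) product
via_fft = np.fft.ifft(np.fft.fft(w1) * np.fft.fft(w2)).real
assert np.allclose(circ, via_fft)
```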
Extending the result of diagonal networks for the complex vector spaces, we get the following characterization of {{formula:9ec5f216-0ee3-4fd0-aab7-30319e6f2864}} for linear convolutional networks.
lemmalemcn For a depth–{{formula:d7f10d70-f352-4133-99b6-71251ab79a18}} convolutional network with parameters {{formula:c762a94d-d049-45ab-ae19-c90430260131}} , we have
{{formula:0a5a8d4e-b592-4712-adb6-c17376742511}}
Discussion
In this paper, we characterized the implicit bias of gradient descent on linear convolutional networks. We showed that even in the case of linear activations and a full width convolution, wherein the convolutional network defines the exact same model class as fully connected networks, merely changing to a convolutional parameterization introduces radically different, and very interesting, bias when training with gradient descent. Namely, training a convolutional representation with gradient descent implicitly biases towards sparsity in the frequency domain representation of linear predictor.
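As an illustration of this claim, one can train a depth-2 full width linear convolutional network with gradient descent on the exponential loss and inspect the Fourier spectrum of the learned linear predictor. The sketch below is a hedged empirical aside, not an experiment from the paper: the data, initialization, step size, and iteration count are all arbitrary choices for demonstration.

```python
# Gradient descent on a depth-2 full-width linear conv net, exp loss,
# linearly separable data; the predictor's Fourier spectrum tends to
# concentrate on a few frequencies (all hyperparameters hypothetical).
import numpy as np

rng = np.random.default_rng(3)
D, n = 16, 30
X = rng.standard_normal((n, D))
teacher = rng.standard_normal(D)
y = np.where(X @ teacher >= 0, 1.0, -1.0)      # separable by construction

def cconv(a, b):
    # circular convolution via the convolution theorem
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

w1 = 0.1 * rng.standard_normal(D)
w2 = 0.1 * rng.standard_normal(D)
lr = 0.05
for _ in range(20000):
    beta = cconv(w1, w2)                       # the network's linear predictor
    r = y * np.exp(-y * (X @ beta))            # exp-loss residuals
    g = -(X * r[:, None]).mean(0)              # dL/dbeta
    G = np.fft.fft(g)
    gw1 = np.fft.ifft(np.conj(np.fft.fft(w2)) * G).real  # chain rule through cconv
    gw2 = np.fft.ifft(np.conj(np.fft.fft(w1)) * G).real
    w1, w2 = w1 - lr * gw1, w2 - lr * gw2

beta = cconv(w1, w2)
spectrum = np.abs(np.fft.fft(beta / np.linalg.norm(beta)))
print(np.round(np.sort(spectrum)[::-1], 3))    # typically a few dominant entries
```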
For convenience and simplicity of presentation, we studied one dimensional circular convolutions. Our results can be directly extended to higher dimensional input signals and convolutions, including the two-dimensional convolutions common in image processing and computer vision. We also expect similar results for convolutions with zero padding instead of circular convolutions, although this requires more care with analysis of the edge effects.
A more significant way in which our setup differs from usual convolutional networks is that we use full width convolutions, while in practice it is common to use convolutions with bounded width, much smaller than the input dimensionality. This setting is within the scope of Theorem REF , as the linear transformation is still homogeneous. However, understanding the implied bias in the predictor space, i.e., understanding {{formula:53da9304-f96d-44ab-85d0-3f9b94ecce34}}, requires additional work. It will be very interesting to see whether restricting the width of the convolutional network gives rise to further interesting behaviors.
Another important direction for future study is understanding the implicit bias for networks with multiple outputs. For both fully connected and convolutional networks, we looked at networks with a single output. With {{formula:a9b5db4d-28ea-4f1e-b490-7cddd5ea3b3b}} outputs, the network implements a linear transformation {{formula:0c0c4211-96ae-4248-bd39-c46fa5b704b2}} where {{formula:87a6344a-6c33-4bc0-9875-83161fba9fe8}} is now a matrix. Results for matrix sensing in {{cite:ad2a2234f0a1395d2283a88ef838c6ca38a7f9ad}} imply that for two layer fully connected networks with multiple outputs, the implicit bias is to a maximum margin solution with respect to the nuclear norm {{formula:2493d1b4-28b3-4276-b1a5-187b3a2e8479}} . This is already different from the implicit bias of a one-layer “network” (i.e. optimizing {{formula:7a33a191-a482-4154-b18b-2b7d98c11924}} directly), which would be in terms of the Frobenius norm {{formula:ba3cac4e-e50b-4929-9489-d0b75d0c6b5b}} (from the result of {{cite:af227989dc6783264fcd5e96102d1ffc259eab91}}). We suspect that with multiple outputs, as more layers are added, even fully connected networks exhibit a shrinking sparsity penalty on the singular values of the effective linear matrix predictor {{formula:0525e948-5d3f-4436-aa50-cc42e95f685b}} . Precisely characterizing these biases requires further study.
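For intuition on why the nuclear norm arises here, the following sketch (an aside we add under our own assumptions, not a result from the paper; the matrix B and mixing M are arbitrary) checks numerically the classical identity that the balanced two-layer factorization of B attains sum of squared Frobenius norms equal to twice the nuclear norm, while other factorizations cost at least as much.

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 6))
U, s, Vt = np.linalg.svd(B, full_matrices=False)

# balanced factorization B = W2 @ W1 built from the SVD
W2 = U * np.sqrt(s)                  # scale columns of U
W1 = np.sqrt(s)[:, None] * Vt        # scale rows of V^T
assert np.allclose(W2 @ W1, B)
balanced = np.linalg.norm(W1) ** 2 + np.linalg.norm(W2) ** 2
assert np.isclose(balanced, 2 * s.sum())       # = 2 * nuclear norm of B

# a generic (unbalanced) factorization of B costs at least as much
M = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # invertible mixing (hypothetical)
W2u, W1u = W2 @ M, np.linalg.solve(M, W1)
assert np.allclose(W2u @ W1u, B)
assert np.linalg.norm(W1u) ** 2 + np.linalg.norm(W2u) ** 2 >= balanced - 1e-9
```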
When using convolutions as part of a larger network, with multiple parallel filters, max pooling, and non-linear activations, the situation is of course more complex, and we do not expect to get the exact same bias. However, we do expect the bias to be at the very least related to the sparsity-in-frequency-domain bias that we uncover here, and we hope our work can serve as a basis for further such study. There are of course many other implicit and explicit sources of inductive bias—here we show that merely parameterizing transformations via convolutions and using gradient descent for training already induces sparsity in the frequency domain.
On a technical level, we provided a generic characterization for the bias of gradient descent on linear models parameterized as {{formula:48783aa9-8e96-4b08-b948-890b4ce8ff60}} for a homogeneous polynomial {{formula:55f429c5-3096-480e-8202-8fe273b57f0d}} . The {{formula:f4d4d623-c236-49d3-841d-fd889a473e2f}} bias (in parameter space) we obtained is not surprising, but also should not be taken for granted – e.g., the result does not hold in general for non-homogeneous {{formula:d23da24b-b577-47d0-8550-97a08b260970}} , and even with homogeneous polynomials, the characterization is not as crisp when other loss functions are used, e.g., with a squared loss and matrix factorization (a homogeneous degree two polynomial representation), the implicit bias is much more fragile {{cite:d635fed230891d62783dc039bd8282af18f980ab}}, {{cite:0ea0cbe16858dd5adc27cb7cfee015798bc84217}}. Moreover, Theorem REF only ensures convergence to first order stationary point in the parameter space, which is not sufficient for convergence to stationary points of the implied bias in the model space (eq. (REF )). It is of interest for future work to strengthen this result to show either convergence to higher order stationary points or local minima in parameter space, or to directly show the convergence to stationary points of (REF ).
It would also be of interest to strengthen other technical aspects of our results: extend the results to loss functions with tight exponential tails (including logistic loss) and handle all datasets including the set of measure zero degenerate datasets—these should be possible following the techniques of {{cite:af227989dc6783264fcd5e96102d1ffc259eab91}}, {{cite:9a148d5a27080d391f272d6197fd9c76cdccceed}}, {{cite:ac89dcc44b7e41f5e810d8e3562dd032f6503b41}}. We can also calculate exact rates of convergence to the asymptotic separator along the lines of {{cite:af227989dc6783264fcd5e96102d1ffc259eab91}}, {{cite:bd4530d535bd948ff63b2fac888f52e9eb69b82f}}, {{cite:ac89dcc44b7e41f5e810d8e3562dd032f6503b41}} showing how fast the inductive bias from optimization kicks in and why it might be beneficial to continue optimizing even after the loss value {{formula:e83e2f65-062c-44f3-a420-d028f12a9103}} itself is negligible.
Finally, for logistic regression, {{cite:ac89dcc44b7e41f5e810d8e3562dd032f6503b41}} extend the results on asymptotic convergence of the gradient descent classifier to cases where the data is not strictly linearly separable. This is an important relaxation of our assumption on strict linear separability. More generally, for non-separable data, we would like a more fine-grained analysis connecting the iterates {{formula:2c78b97d-3aa4-4b59-a2ef-bd1a7917637e}} along the optimization path to the estimates along the regularization path, {{formula:c125d4f9-4a56-4d40-9341-285b5cb7f2b1}}, where an explicit regularization is added to the optimization objective.
Appendix
The proofs of the theorems in the paper are organized as follows: In Appendix we first give the proof for Theorem REF , which includes linear fully connected and full width convolutional networks as special cases. This gives us some general results that can be special-cased to prove the stronger results for these networks in Section .
In Appendix , we prove Theorem REF on the implicit bias of fully connected linear networks. In Appendix , we prove Theorem REF –REF on the implicit bias of linear convolutional networks.
Finally, in Appendix we prove the lemmas in Section on computing the form of implicit bias of linear networks learned using gradient descent.
Unless specified otherwise, {{formula:cb0841d3-e931-450a-ac13-de380fb7bc9f}} denotes the Euclidean norm. We additionally use the notation {{formula:6d63d325-e8ba-46fe-8dbc-f9ea0ad5163e}} to denote equality up to strictly positive scalar multipliers, i.e., when {{formula:fe1ca833-ea80-4d58-84ff-a8261976fd3f}} for some {{formula:fdfd261c-0d5a-4aad-b99e-790a60ecb176}} .
The following is a paraphrasing of Lemma 8 in {{cite:ad2a2234f0a1395d2283a88ef838c6ca38a7f9ad}} and is used in multiple proofs.
Lemma 1 [Lemma 8 in {{cite:ad2a2234f0a1395d2283a88ef838c6ca38a7f9ad}}]
For almost all linearly separable datasets {{formula:d98bc6b1-31ee-4fb2-93ab-9661019b2d51}}, consider any sequence {{formula:95a81d9b-2650-45de-acc7-715d79f4218b}} that minimizes the empirical objective in eq. (REF ), i.e., {{formula:1e29fddc-cd5c-4fe7-8570-87a471fab658}}. If
(a) {{formula:6188ac22-462f-4d69-90b4-17a0c1f86f04}} exists and has a positive margin, and
(b) {{formula:bcab70f6-f605-4c3f-8ac0-fd32f6c5c6ac}} exists, then
{{formula:adbf0679-4e2c-46c1-8c8d-8f5f7f1ff55b}}
where {{formula:54b7b933-7045-4b84-9851-4d2426ac9e12}} are the indices of the data points with smallest margin to the limit direction {{formula:40a7f727-da95-490f-9c05-4189b388d949}} .
Homogeneous Polynomial Parameterization: Proof of Theorem REF
*
{{formula:7ec396a5-ee11-48ba-9cee-8e67a6e68cf0}} is the sequence of gradient descent iterates from eq. (REF ) for minimizing {{formula:7122e5e3-e833-4282-ae3b-20b705517aa3}} in eq. (REF ) with exponential loss over the model class of {{formula:f3d02aaa-6237-4755-ae34-9334597ce5fb}}, where {{formula:37c14b6e-1fa3-4802-a605-e993b6b950ab}} is a homogeneous polynomial function.
We first introduce some notation.
From the assumption in the theorem, we have that {{formula:74c5b845-c032-4aad-af16-523864ff2b41}}. Denoting {{formula:2a2a06b8-bc33-445d-bc3e-3876d495dfeb}}, we have that for some {{formula:b326c63c-a5a3-4c68-b4d5-b48eede93672}}, the following representation of {{formula:2fb4c082-c295-4522-bbba-38538de0d197}} holds.
{{formula:fe1abba8-1f9d-4875-9faa-e6d16b7f0cdb}}
Let {{formula:6e872f47-fa34-47a0-9dfe-f25e9a2c8870}} denote the sequence of linear predictors for this network induced by the gradient descent iterates. We can see that {{formula:88c940be-bf51-4a94-94fe-e5c52a7e31f7}} also converges in direction, using the following argument: homogeneity of {{formula:6cd0699f-40a2-456d-88e1-fa3038795bcc}} implies that {{formula:1da1f937-76cf-4010-8d88-c5d0aaee27bf}} for some {{formula:a5757cc3-c341-4709-87ee-3a68d3a90b77}}. Hence, {{formula:0ba91c5b-baf3-46c3-bf73-ff71216649c9}}.
{{formula:efba991d-0d86-4fdc-b610-e735dfacd41d}} . Since we assume that {{formula:8628dd4d-c917-4020-9b7f-9167b868dbb9}} converges in direction, let {{formula:63d6075f-4e14-4989-9577-5fd16661deba}} . Denoting {{formula:ff3a131e-a63e-43b8-89d4-f55b0296697c}} , for some {{formula:9a50ed9a-2142-452e-b0c8-c188322e2e12}} , we can write {{formula:40e0483b-eb81-4bc9-875e-e6fc8c71032a}} as,
{{formula:c2cf67e2-0f73-478f-b279-0beb5c45720b}}
Let {{formula:8fc14701-2ec5-401b-bb74-1514aee7aaad}} denote the Jacobian of {{formula:3cbab0cd-cf5d-4efd-9db6-76a5f0f1449b}} , i.e., {{formula:4c1f5a88-ea37-4c07-b3e4-9bbf1723dc59}} .
If {{formula:6c42c3e9-c721-4db5-a2f7-627c21cc9cb1}} is a homogeneous polynomial of degree {{formula:24484bbc-b14c-4f2c-9735-d76d4893f167}} , then {{formula:4cc730de-51d3-4b33-b8cb-53b2f5763ed7}} is a homogeneous polynomial of degree {{formula:c3abf909-4851-405c-a4c3-3f196b7542bc}} . Using eq. (REF ), we have
{{formula:1aac2f37-9da1-4234-894a-3418a52af8e2}}
Thus, {{formula:6d539ddd-54ea-419c-acc3-ffc047ec0439}} , such that
{{formula:37c137bd-bf11-4474-b735-0a53be62fa40}}
Finally, from the definition of {{formula:b8209624-89f3-4bd5-af7e-47c8e088c94b}} , we have {{formula:949adfab-2418-479e-8abd-51d4256032a3}} , and hence from eq. (REF ),
{{formula:ba0a941b-e4da-4981-9024-bad8ec46109d}}
Using the assumptions in the theorem along with our argument above for convergence of {{formula:d4976682-26c1-4dda-af77-44ea9416c182}} in direction, we satisfy the conditions of Lemma REF , which will be crucially used in our proof.
KKT conditions for first order stationary points
We want to show that there exists a positive scaling of {{formula:da945a4c-f373-4892-ac17-6fcca924c774}}, denoted as {{formula:31aa6c90-1107-48f2-afdb-b2c1a8888a2d}} for some {{formula:54afeb45-016d-4367-b72c-235c71f7417f}},
such that {{formula:343ddedc-87f6-442f-958f-23a2faa80077}} is a first order stationary point of the explicitly regularized problem in eq. (REF ). Towards this, we show that {{formula:58441514-8e1c-4b67-87aa-d700f66d63a2}} satisfies the following first order KKT conditions of eq. (REF ):
{{formula:795bff36-1f88-43f0-ae1d-fb4470d2be81}}
Primal feasibility.
We showed earlier that if {{formula:d6ad7823-2f06-40e3-a94c-6d8b2cbc1aeb}} converges in direction, then {{formula:dbb96a11-6e82-497d-ae1f-2f4ff3ca1ddd}} converges in direction to {{formula:eb600074-e00a-4810-8dd8-29312af85937}} . Further, from the assumptions in the theorem, we have that
{{formula:81f0b5c7-75c7-4e98-9975-20bf1482911c}} satisfies {{formula:9eca0846-de58-461d-b0f8-245993f41e19}} , {{formula:2a44c979-71d3-4b12-a3f1-e58f9703dccc}} , which also implies {{formula:d3e0b498-f22c-41bb-961a-670e2804532d}} since {{formula:2efa2e48-6597-4de1-84a6-1a876d317ad5}} .
Now, if {{formula:5a5e3110-8ae1-498d-be40-b3db1796bc32}} is homogeneous of degree {{formula:c8ff1c50-ddf9-4779-a701-f29dbd7d7427}}, then for {{formula:c1c43471-9066-4a18-a942-aab3af756201}}, {{formula:a9338af1-0859-4f80-96b6-194ad4a6ccd1}} satisfies {{formula:1f538557-b382-4c21-81f1-0762d6f70a09}}.
Showing other KKT conditions for {{formula:3084342c-0e1f-48e6-b685-181787fc617d}} .
The crux of the proof of Theorem REF involves showing the existence of {{formula:dc7b3bb1-7249-47b9-8555-6ef31226033b}} such that the stationarity and complementary slackness conditions in eq. (REF ) are satisfied. This crucially relies on a key lemma (Lemma REF ) showing that the gradients in the space of linear predictors, {{formula:e2fafaa3-cc8e-479c-b7d5-1c28519667dc}}, are dominated by positive linear combinations of support vectors of the asymptotic predictor {{formula:9712052b-8212-4a11-9516-4ed0c9a1d7b3}}.
Let {{formula:a9f6bdc7-3ee7-419e-aa7b-58c9c447d6b5}} denote the indices of support vectors for {{formula:16ddc687-fe98-4bd5-82e3-8c1b0227205d}} , which are also the support vectors of {{formula:bd8ef601-4f9e-4e34-be67-f33d4e9805bc}} , since by homogeneity of {{formula:59fde89e-b6a7-4d99-90be-fa6730961553}} , {{formula:d0ad3580-9c72-4c8f-bb37-154e2e13cd84}} .
Thus, from Lemma REF , we have {{formula:ce1063d1-d227-4513-b8cd-076e5fd1f68c}} for some {{formula:cc0363ef-02ac-4a23-b193-cc70d8271c64}} such that {{formula:8c4fdc15-4fd2-408c-975b-22f730bf7a5a}} . We propose a positive scaling of this {{formula:f697bfdd-a40b-4577-a9cc-ab7de79ecb1f}} as our candidate dual certificate, which satisfies both dual feasibility and complementary slackness.
To prove the theorem, the remaining step is to show that {{formula:106a1d0d-2bf6-4fde-aac0-56caa6af117c}} . Since {{formula:742abb57-69b3-4c64-a85b-83eb98cca72c}} and {{formula:611da10a-99ed-4a10-9de4-1b9cd680d199}} is homogeneous, this condition is equivalent to showing that {{formula:49639780-f977-4b60-8943-3551cfbcb487}} .
Showing that {{formula:614974cd-6b74-4eb3-ae24-0f8c130bd4ed}} .
Substituting for {{formula:a44bb5b5-6149-4510-be38-7b0298f8b3fc}} and {{formula:45d4f837-c149-465e-9fc9-f9b002791df2}} from eqs. (REF ) and (REF ), respectively, in the gradient descent updates (eq. (REF )), we have the following:
{{formula:1756193b-558c-4d0f-8065-8ecb02760d15}}
where in {{formula:8368a5c2-c816-4365-b25f-1a733ebdc871}} {{formula:f9f5a740-1f10-485d-8ef3-146a8159e4ea}} .
Summing over {{formula:44b7112a-5bc7-4a24-92ee-dcfa34374b66}} , we have
{{formula:93bf247f-7b8a-4f59-be0a-e9f3aebbdaed}}
We want to argue that the first term, i.e., {{formula:d155621e-db9b-42f3-82dd-67a1eed6c06d}}, is the dominant term. Towards this, we state and prove the following intermediate claim.
Claim 1 {{formula:01186f10-3910-40de-b278-838e140b59df}} and {{formula:7b5861fd-8e5a-48e6-9c85-d8d250feb800}} .
First, it is straightforward to check that for any scalar valued homogeneous polynomial {{formula:a91fbbbb-f38f-424c-9f01-a0ec51d4cb5d}} of degree {{formula:5d5671b4-1772-49fc-9470-fd8208cafaa7}}, we have {{formula:433e34ea-33e9-4383-ad03-b28a2343bc13}}, where for {{formula:26207ed6-271a-4159-b882-11ca067b02a5}}, {{formula:06da2cec-8428-4c4c-8b99-e183734c0750}} (this is Euler's homogeneous function theorem). Extending this to our vector valued homogeneous function {{formula:1a3a0d4c-7d8f-4266-8d6d-61c503018e29}}, we have that for all {{formula:d723d64f-a70c-4d76-b75c-7051f9de1251}}, the Jacobian {{formula:f69839b2-52a1-4444-a0d5-31997c8f880e}} satisfies {{formula:ba7f33c4-71d9-43a6-aafb-e3b3cc8dc2c9}}.
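A quick numeric confirmation of Euler's identity for the Jacobian, using a hypothetical degree-3 map and finite differences:

```python
import numpy as np

def P(w):
    # hypothetical vector-valued homogeneous map of degree k = 3
    return np.array([w[0] ** 2 * w[1], w[0] * w[1] * w[2], w[2] ** 3])

def jacobian(f, w, eps=1e-6):
    d = len(w)
    cols = []
    for j in range(d):
        e = np.zeros(d); e[j] = eps
        cols.append((f(w + e) - f(w - e)) / (2 * eps))   # central differences
    return np.stack(cols, axis=1)

rng = np.random.default_rng(4)
w, k = rng.standard_normal(3), 3
# Euler's homogeneous function theorem: J_P(w) @ w == k * P(w)
assert np.allclose(jacobian(P, w) @ w, k * P(w), atol=1e-5)
```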
Moreover, we have that for the limit direction {{formula:20870ce1-6fba-464a-b7a2-a71c8f424f6c}} , the margin of the corresponding classifier is strictly positive, i.e., {{formula:fc9fccea-9b66-4f53-91b3-5e13f17eaa85}} . Now from Lemma REF , using that {{formula:f698b2d8-81b9-44af-b1cc-fa6f69ee0afa}} for {{formula:d9849cfb-50d7-43af-b851-a24f885f4700}} (and not all zero since {{formula:34bbbf64-e10c-4e85-b14c-4c51554d48cd}} is unit norm), we immediately get the following
{{formula:d4616a84-fd05-4689-94d8-70f93198972c}}
To prove the second part, we note the following
since {{formula:feb8051f-1cef-42ab-928e-56356be58756}} in eq. (REF ), {{formula:72cd71f4-6583-4b2f-820b-1595af4a3959}} such that {{formula:56daa7a9-f343-4708-8f6c-ddd6eb8ff4c3}} , {{formula:ab865f65-98e1-40ee-83e0-9c0b4bc9d2d0}} , and since all the incremental updates to gradient descent are finite, we have that {{formula:fb765e85-77b5-4181-94c8-0722b64186bd}} ,
since {{formula:3defd130-0da7-4d34-bc15-8315b2b7a914}} and {{formula:d61e3949-43ea-4cf7-8e9e-5bb0cddfe36a}} are positive, we have that {{formula:aa8979df-744d-4bb5-b719-1ea240abbae0}} is monotonically increasing.
Thus, if {{formula:af21f73e-af5d-453d-b05d-d2268ad2fd89}} then {{formula:3823b5f3-e238-4c1e-93f5-1b38634ca54d}} .
On the contrary, if {{formula:e8ea54e5-5eff-4ff0-8c7c-dcea70740349}}, then from eq. (REF ), for large {{formula:16023292-2899-48a9-9529-d75193487b03}} we get {{formula:fe65b19a-14a2-49ac-9f41-de6cea05c7bc}}, which contradicts {{formula:99fcd091-5357-4bf9-bbf9-1956fd5d05ac}}.
From the above claim, the sequence {{formula:dfbda773-6b38-49ce-bd08-1ff0f38a8875}} is monotonically increasing and divergent. Thus, for {{formula:8c33bd3a-91d6-4baa-81ef-63ec06437972}}, using the Stolz–Cesàro theorem (Theorem REF ; when the denominator sequence is strictly increasing and divergent, the limit of a ratio of sequences equals the limit of the ratio of their successive differences whenever the latter exists), we have
{{formula:9383ee87-dc05-4fa0-b39a-0dec31633be2}}
Substituting eq. (REF ) in eq. (REF ), we have
{{formula:5f2b53b6-6ccc-4dff-84d0-f95cf5d06f16}}
where in {{formula:2a91a06a-ad7d-4227-b4cd-a07652708185}} we absorbed the diminishing terms into {{formula:2a978637-13ad-4100-b07e-455e5161714b}}, and {{formula:d599eea1-3365-4634-ab6c-6d419abdda71}} follows since we proved in the claim above that {{formula:8422cdf5-e205-429e-96ab-0e9a999bb44f}} and hence dominates {{formula:75913b21-8f06-4e73-acf2-f4d3568f6617}}.
We have shown that {{formula:df846dce-cd39-417f-a7b5-d1cbb401c4a0}} for a positive scalar {{formula:9ce297d6-61a2-47c9-bf06-3393a28be44b}} , which completes the proof.
Linear Fully Connected Networks: Proof of Theorem REF
*
Recall that for fully connected networks of any depth {{formula:044f2441-a4e3-4260-9851-164ed9d28ee5}} with parameters
{{formula:96123839-1813-4da9-922c-4a3352a67b3b}} , the equivalent linear predictor given by {{formula:b39ba7bf-6f54-468c-bdd8-e3aafecf0644}} is a homogeneous polynomial of degree {{formula:b45bac3c-fbc6-49f8-91f7-c7e88b47d065}} .
Let {{formula:d0ad5dbc-9799-4d71-9333-acad66def0e8}} denote the iterates of individual matrices {{formula:4a335054-14e2-4d67-be5a-30e9d26bc730}} along the gradient descent path, and {{formula:c9fae834-2b80-4fd3-80e0-72f9e4b998ca}} denote the corresponding sequence of linear predictors.
We first introduce the following notation.
Let {{formula:6036786c-ab05-46dc-9a4e-43ffaf2f1cb3}} denote the limit direction of the parameters, with component matrices in each layer denoted as {{formula:fdfabe4a-e672-4fc8-8dd1-5e3bb48a65ea}} . Specializing (REF ) for fully connected networks, we have:
{{formula:d5d520c3-32b0-4fe9-a516-f6e01dd350d5}}
where {{formula:49e30274-16cb-47eb-88d7-5efa93d42cac}} and {{formula:a2f62a84-27ea-480a-9f1d-b7c3c3733b23}} .
For {{formula:d89299dc-1069-483f-87d7-8cd4ef511944}} , denote {{formula:20999cdc-0234-49df-842b-f8006ce09e02}} and {{formula:460858fd-e0d2-406d-83d5-3ebbbda624cb}} . Using eq. (REF ), we can check by induction on {{formula:e9479e70-0382-496e-a98c-a84973c38ad0}} that {{formula:e83efc0f-e664-4dd5-b64c-156e228a6aa5}} , and hence {{formula:c183e7ec-8e91-4c89-bc9d-bffabff2f40c}} such that the following holds,
{{formula:23a10218-c732-4ae5-80da-7462e94a755e}}
Let {{formula:ccf3a7a1-4a16-47b8-b9f9-2d2f60ce9a26}} . Again repeating eq. (REF ) for fully connected networks, we have for some {{formula:f9dd29cf-4059-4306-b27c-eddacbd83936}} and {{formula:3004033e-648b-42fb-9218-c1a977de86b8}} ,
{{formula:e0e94771-9f68-4d2e-970c-80ce02c4990f}}
From Lemma REF , we have that {{formula:15f5913c-6167-4c14-a20d-227e9d31b444}} such that {{formula:a464c748-00dd-4a52-90c1-a740150f4beb}} , where {{formula:c70737d1-4eb0-4fdc-8e07-da47175a933f}} are support vectors of {{formula:8c89f0f0-97f2-43fd-aef3-c970c8e9d545}} .
The proof of Theorem REF follows fairly directly from Lemma REF and the intermediate results in the proof of Theorem REF .
Showing KKT conditions for {{formula:55561f39-b98c-4153-bc2e-ac67311a85a1}} .
Using our notation described above, we have {{formula:64ea72e9-d622-400c-b9fb-85a43b574f45}} .
In the following arguments, we show that a positive scaling {{formula:b00656b4-40c5-45f8-8b78-4e7d0e2194be}} satisfies the following KKT conditions for the optimality of the {{formula:7bcfbe10-ce62-41d4-94f5-cab3a9f18c72}} maximum margin problem in eq. (REF ):
{{formula:8050f6b1-991c-4be8-a712-7f8b0d0c3484}}
As we saw in the proof of Theorem REF , since {{formula:4b48830b-4038-44d0-af56-d30b6cad91f4}} has a strictly positive margin, using the homogeneity of {{formula:c16ef404-16fd-400b-a92c-6a99803ccdd0}}, we can scale {{formula:e8b78493-a6dc-4328-98ce-588325338355}} to get {{formula:769a4e15-7ef3-4f79-9f2f-55ab04394d76}} with unit margin, i.e., {{formula:c9725e64-70a6-4eec-8220-afea1babdcb6}}.
For dual variables, we again use a positive scaling of {{formula:25104f18-d3b8-4d56-9b63-5d202f81614b}} from Lemma REF , such that {{formula:a8070d35-743b-4c23-84c0-6f512f3ca695}} . In order to prove the theorem, we need to show that {{formula:da36739d-536a-4701-923f-90c2ec9dcbb2}} or equivalently {{formula:227be606-f7b8-443d-a814-59e4dc3516a6}} .
Recall that in the proof of Theorem REF , we showed a version of stationarity in the parameter space in eq. (REF ), repeated below.
{{formula:a1bd8f64-90d5-4145-b0f4-111287623835}}
This case in particular includes {{formula:a1086ffb-bb30-4df7-99b1-18986ff6df0f}}, which is homogeneous with {{formula:c20b1766-9bfc-4cc3-a5d9-68d823ecd322}}. We now special-case the result to fully connected networks. In particular, for the parameters of the first layer {{formula:1fd51239-1b07-415b-9640-79f6deff537c}}, we have {{formula:515d7dcb-560f-4961-abe7-23bb7e3b55a3}}, where {{formula:da8186bf-d567-40d6-873d-838b87a2d8f6}} and {{formula:d9d43e6a-6587-44b0-a02d-a729194e016e}}.
This implies, for any {{formula:e3d087c5-13ca-4706-a927-6074a5ef6f9e}} , {{formula:6d77dc3e-d49f-41b9-bb61-b273131dbb72}} . Using this along with eq. (REF ), we get the following expression for some positive scalar {{formula:9e3463d5-12b9-4eac-979d-c6f2c4cc1561}}
{{formula:d74609a6-7972-45e5-a383-7b9f036d6bb8}}
Since {{formula:67306648-7245-4748-9c3b-e980ac49e9d4}} , we have shown that {{formula:1df8cce0-13a5-48a8-be2d-07df29b8ed2a}} , which completes our proof of Theorem REF .
Linear Convolutional Networks: Proof of Theorem REF – REF
Recall that {{formula:a8abc535-8e5d-4e18-ba03-49edb513df26}} –layer linear convolutional networks have parameters
{{formula:610a2506-638a-4b54-838a-48592c701d0a}}. We first recall some terminology and properties of complex numbers:
Complex vectors {{formula:bba28fa6-9793-468e-956e-7c02a37b1cf5}} are represented in polar form as {{formula:08b99b20-35be-483d-889a-085cc19b13a1}} , where {{formula:904a1ab9-357e-4d84-8448-cefe9de6521b}} and {{formula:bdce7ae4-33a7-4499-bc78-1c5d862a0c7b}} are the vectors with magnitudes and phases, respectively, of components {{formula:6cfbb25e-7b90-41ba-a106-c73afc54b673}} .
For {{formula:e0be9669-7023-4858-9397-4a6229ab2bbe}} , the complex conjugate vector is denoted by {{formula:cf1dbdb7-5d81-44fb-9ff3-6331c4303da5}} .
The complex inner product for {{formula:f14c5000-099e-445f-805a-6c0284579aa2}} is given by {{formula:3545c331-2852-4bd0-a454-d4c5db007336}} .
Let {{formula:c4448334-0dd9-4f2c-b080-07caa09ffcb9}} denote the discrete Fourier transform matrix with {{formula:2c230884-c506-48ba-bbf7-8ac2b59d2373}}, where {{formula:559e7912-71cc-4080-b197-69dd74a83552}} is the {{formula:11086d09-b5a7-476c-ab1e-863dbc36a2bc}} complex root of unity. Thus, for any {{formula:f1d5f43a-bac6-47af-aab5-cf97ec2746fb}}, the representation in the Fourier basis is given by {{formula:80fb5002-3785-41f6-832f-b3c9c1d7e7d9}}. {{formula:bd991b4a-f73a-4693-8aed-37a0bae7df74}} and its complex conjugate matrix {{formula:9cd75357-f7ef-41f8-ab18-c7a70eddebaa}} also satisfy {{formula:4a9b616d-0f3e-4d99-902c-c874aa577f03}}, {{formula:abcff3bc-fc01-40a6-80ce-db691e692685}} and {{formula:afc199d5-9510-40a8-859a-70d31877993b}}.
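These identities can be checked numerically. The sketch below assumes the unnormalized DFT convention (entries given by powers of the root of unity), which is what numpy's fft implements; a unitary 1/sqrt(D) normalization would only change the constants.

```python
import numpy as np

D = 6
F = np.fft.fft(np.eye(D))          # F[j, k] = omega**(j*k), omega = exp(-2i*pi/D)
assert np.allclose(F @ F.conj(), D * np.eye(D))     # F F^* = D I
x = np.random.default_rng(5).standard_normal(D)
assert np.allclose(F @ x, np.fft.fft(x))            # Fourier representation of x
assert np.allclose(F.conj() @ (F @ x) / D, x)       # inverse transform
```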
Before getting into the full proofs of Theorem REF –REF , we first prove the two lemmas (Lemma REF and Lemma REF ) that establish the equivalence of the dynamics of gradient descent on full dimensional convolutional networks to those on linear diagonal networks (Figure REF ), albeit with complex valued parameters. This makes the analysis of convolutional networks simpler and more intuitive.
We begin by proving Lemma REF which shows the equivalence of representation between convolutional networks and diagonal networks.
*
First, we state the following properties which follow immediately from definitions:
For {{formula:a121ebbf-a057-4926-a757-5591a3dce4ea}} ,
{{formula:0eee7fe2-ff3f-4eb2-9b5d-ba998e0c41be}}
where recall that the complex inner product is given by {{formula:87942403-4afe-4836-92d9-a749689a2ec0}} .
We next show the following property
{{formula:561e4c78-ed58-4ae3-a624-68d5076cc6fb}}
The above equation follows from simple manipulations of definitions: recall that {{formula:422c9205-3c26-41dc-bb1a-bd5c7ab6556c}} and that {{formula:fdb8be1a-3fbe-4445-b329-833ccf53a612}} is defined in eq. (REF ) as {{formula:595db9b4-8318-4b26-88ab-bb708caf7aef}}.
{{formula:6c73f441-bace-446a-9ed0-b4c8d5f305ed}}
Recall from eq. (REF ) that the output of an {{formula:c17037bc-5b61-486f-89ab-8bf78fb863cb}} -layer convolutional network is given by
{{formula:2e72c919-a617-497e-9c00-f2c52fd0092e}}
Denote {{formula:99fde9b9-d10b-4ab6-8c71-566ecf639599}} . By iteratively using eq. (REF ), we have
{{formula:f1313452-c71a-43f6-b318-5ade9455a00d}}
Thus, on the one hand, using the above equation, we have,
{{formula:04e0c99a-02cf-4e3d-a2a1-51c641243827}}
where {{formula:9babad67-609d-427f-b106-9984474f573e}} follows from substituting for {{formula:79426a49-1f94-439e-9c82-19a084e1f755}} from eq. (REF ) and noting that for any {{formula:c007763e-ea0b-46c7-829a-5ad5614e8bfb}} , {{formula:d6e76078-9326-4779-af1a-9d142a76b215}} , and {{formula:a38ada6c-d838-4d2c-b51f-2a3dc97c20cd}} uses the definition of complex inner product {{formula:df5bbe36-9e7d-4ecf-bb40-46cd3000e5fe}} .
Now, further using eq. (REF ) in the above equation, we have
{{formula:bb1cf240-cd41-4c45-b6cb-32e439f3a72d}}
Thus, for {{formula:12178a91-96f3-48ee-be43-1a7a2be009d7}} , we have shown that {{formula:02f0bce7-fe99-4b2c-9815-bb2a842b93aa}} .
For {{formula:0da8cfcc-8272-4512-b3cf-7839345a9a6c}} , let {{formula:6fd85dbf-5572-44cf-bbf7-8bd267c18413}} denote the equivalent parameterization of convolutional network in Fourier domain.
The above lemma shows that optimizing {{formula:5f588070-28b2-4ee0-ae7f-29b023212606}} in eq. (REF ) is equivalent to the following minimization problem in terms of representation,
{{formula:c85eab4e-b07e-4934-a3b5-cf705bf3d14a}}
The following lemma further shows that not only are the representations of {{formula:dbeb9232-2527-4ed8-8c0b-74151b129e74}} and {{formula:fb386dc9-92b1-4940-9795-0f9e9e455d55}} equivalent, but their corresponding gradient descent updates for the problems in eq. (REF ) and eq. (REF ) are also equivalent up to Fourier transformations.
Lemma 2 Consider the gradient descent iterates {{formula:ec4582c9-9798-435a-8d06-9a6b3bb7529b}} from eq. (REF ) for minimizing {{formula:6c43fcbf-c18f-427d-8908-cccb712fb46d}} in eq. (REF ) over full dimensional linear convolutional networks. For all {{formula:f44ec2dd-092e-4dae-ae87-6842c27db9f7}} , the incremental update directions, {{formula:4e397272-b2e5-4fed-bc14-f41be6c525ba}} satisfy the following,
{{formula:ce46b5e6-ec23-482f-a3b3-95605a628a92}}
where {{formula:07e61297-00ec-4cc7-a4b1-276b58e297c8}} are the Fourier transformations of {{formula:4c352df9-b96b-46d2-be19-373aa8a27b1c}} , respectively.
The above lemma shows that the Fourier transformations of the gradient descent iterates {{formula:afb2b813-05d4-483d-81e8-be0733567bd3}} for {{formula:c46e4991-3029-40c0-b00b-eb5ac76582d8}} in eq. (REF ) are equivalently obtained by gradient descent on the complex parameters {{formula:b4c901f9-403e-492e-8760-fffe1df8dbf7}} for minimizing {{formula:4051333c-60a5-4cc8-8dc9-721b37079b17}} in eq. (REF ).
We use the notation {{formula:864902dd-eb22-4002-a6da-709f74b2a78c}} to denote the Hadamard product across all parameters {{formula:d80cee06-f6c7-4b3e-b24b-2521a93e9a2a}} with {{formula:3528ab80-c69c-4ac1-93cb-ccf40a8622d3}}.
For any {{formula:f4ae1307-0858-495a-8ea1-ce593af276cd}} , using eq. (REF ), we have the following for all {{formula:56e39e41-fa71-4589-bbcd-2ab9e7c3e8c4}} ,
{{formula:d2db482f-92a7-4451-8483-27d47eee1329}}
Using the above equation we have,
{{formula:aa0dfcd6-d6ad-40df-9d24-53573989a813}}
where in {{formula:dc847637-0826-433d-b3d6-62d4f08db591}} we use {{formula:a59d7efa-87fa-4fa5-a14a-e2138102fb24}} and the remaining equalities simply follow from manipulation of derivatives.
From the above equation, we have the following:
{{formula:e544e4fe-a7a6-4c66-921f-94ddb5b99950}}
Proof of Theorem REF – REF
*
*
For the gradient descent iterates {{formula:704ded45-82f1-4140-b103-b30e26b413fe}} from eq. (REF ) denote the sequence of corresponding linear predictors as {{formula:0788b6c5-1526-4e5e-81fb-63418829f95d}} . Let {{formula:b0f82919-de56-46ca-84cf-3b2b3b35ea69}} and {{formula:4f3d92c6-e56e-4e14-b862-a36f81a4a952}} denote the Fourier transforms of {{formula:0ad082ad-2931-4015-beda-278d8a820f87}} and {{formula:4c6f92ae-960f-42a3-bc79-2fa06dbb1f34}} , respectively, and let {{formula:5007ccc9-29d5-4dee-8f92-8f7ea2fbf574}} .
Summarizing the results so far, we have {{formula:142814af-737f-427c-af4b-3c31683e43e6}} (from Lemma REF ) and {{formula:eaf53fc4-c726-436a-976c-16dcca9553ce}} (from Lemma REF ).
We use the following observations and notation:
Let {{formula:aa46ef13-8937-478c-981b-3acceb811c1d}} .
Denote the Fourier transform of {{formula:1b06a0d7-1c31-42f7-896b-18e9fb2e880c}} as {{formula:d22c7d23-8068-4375-9c94-bb4c2289197d}} .
Taking Fourier transforms of eq. (REF ), which also applies here, we have:
{{formula:32aaeaef-5bbf-4a94-9918-302f17c6bbc1}}
where {{formula:5ac3ac7a-defa-417d-bdf4-76f226651e4f}} and {{formula:8802fed1-fbfb-426d-adad-f631cdfea396}} .
Denote the negative gradients with respect to {{formula:f9794500-0b1b-45af-a004-50569aaa47f4}} as {{formula:17e0b1ea-bb67-4801-877b-c31c1c2dadaa}} and let
{{formula:9d10dd38-6ecc-44b6-9331-88fb7cf90e7c}}. From the assumptions of Theorem REF -REF , {{formula:6171cf15-8509-4f0d-b311-ee60acb44737}} exists. Let {{formula:95a2a57d-0ce8-40cf-946f-9d2bb6c59bf6}}. Denote {{formula:c1ce9664-622c-494b-9b34-31d0e0a64ab9}}. We get the following by taking the Fourier transform of eq. (REF )
{{formula:e925f9a4-0baa-49e7-926c-33a37fe2bf48}}
where {{formula:24d8b5fd-3937-4319-aa56-ff99bf41fc6a}} and {{formula:a9b75ba0-879a-4be1-b12e-e7fb8ac804d7}} .
From Lemma REF , we have that {{formula:8c35c02a-3e40-4255-adbc-62a69bcc706e}} such that
{{formula:b4349ec8-103f-4968-aeee-a6344f7862da}} . Thus,
{{formula:b3a24f7b-483c-4f28-b73e-c5a108d2f58e}}
KKT conditions for optimality
We want to show that a positive scaling of {{formula:459177be-eae6-4cc8-bd63-6be490475f40}}, denoted by {{formula:342e0092-5c2d-4ffe-9e19-0170df6ee2dc}}, is a first order stationary point of eq. (REF ), repeated below,
{{formula:bba4b1c8-bdcc-4b3c-860c-139ce4e3abfb}}
Recall the KKT conditions discussed in Section . The first order stationary points, or sub-stationary points, of (REF ) are the set of feasible predictors {{formula:9dbdd4e5-9f73-41e5-aae6-cfb2bdbaa0ad}} such that {{formula:62460dba-6294-42df-a76c-d06e1adad79b}} satisfying the following:
{{formula:73bae719-363d-45a3-8893-26f48764eeff}} , {{formula:177827df-7155-4f87-a666-db244411c7d2}} , and
{{formula:6f02aa5f-2436-4ac6-bd59-402b951ae4ee}}
where {{formula:ebf00ba3-4325-440d-a45b-7d9297bd22a2}} denotes the local sub-differential (or Clarke's sub-differential) operator defined as
{{formula:04da2da7-8a16-487b-a6ee-95dea6c76041}}
For {{formula:2ea301d7-5d34-491f-9cbc-ed6f149562b6}} and {{formula:341fa9fa-dcd9-4fb7-9b4f-0f3480b717c2}} represented in polar form as {{formula:92830878-8b7b-4399-9f48-34997ab3a111}} , {{formula:7bc52d6e-9a36-4941-a107-d868458d697e}} is convex and the local sub-differential is indeed the global sub-differential given by,
{{formula:946ae2fa-02b1-40cc-866d-cc95dc0f5b4d}}
For {{formula:16b26fbf-2331-449b-8ee5-5a030d906bfd}} , the local sub-differential of {{formula:6f7a8b2f-7576-4552-8663-32947d39ec10}} is given by,
{{formula:af7ecc5b-7898-49fd-b6c6-42a24709f2ac}}
Showing KKT conditions for {{formula:ff740bf3-87a0-45a1-8d52-6e6fabf22f52}} .
As we showed in the proof of Theorem REF , since {{formula:bb6c8858-8c85-46c6-9a0b-c109dffe3bb6}} has a strictly positive margin, using the homogeneity of {{formula:22dbc1cd-9963-4ace-b2cb-fb7bf551130c}}, we can scale {{formula:622fed65-4ae8-45f8-ab7f-9f6cb218c1b0}} to get {{formula:58a77d32-7d31-4d39-9546-9313c8444636}} with unit margin, i.e., {{formula:8187206f-e08a-4c88-aaa6-614b149983aa}}.
For dual variables, we again use a positive scaling of {{formula:10422643-5667-4cbd-8e4d-54a11db5a23a}} from Lemma REF , such that {{formula:dd93605d-1ef0-4c64-b7e8-6be42ae3462b}} .
In order to prove the theorem, we need to show that for some positive scalar {{formula:ce6d99ce-3352-4916-8a52-f8d40e4768dc}}, {{formula:76951a60-699d-4c26-b82b-3ddae5846435}}, i.e., it satisfies the conditions in eq. (REF ) and (REF ), for {{formula:87ed4ade-3bc8-4d43-86f4-2ebd8327d837}} and {{formula:6f116511-335c-4dc0-890a-693f4bb669a3}}, respectively.
We start from the stationarity condition in the parameter space in eq. (REF ) of Theorem REF . For some positive scalar {{formula:2a4bcf98-c0f8-46be-b2bd-a089593d44f0}} , we have
{{formula:a9fc1063-c22f-4b79-9fed-9ae4aaf8db7e}}
We will now special-case the above equation to full width convolutional networks.
From Lemma REF , for all {{formula:6cb0fdea-11cd-465d-8040-ba0c15d8b325}}, we have {{formula:d34f09c3-8821-479c-acc7-5f5b7bda80ce}}, where {{formula:9643ab4a-f3b8-40a3-b733-33c8415a2617}} and {{formula:c5c047d7-894e-4c6d-a61b-b25e3dac3eb7}} denote the discrete Fourier matrix and its inverse in the appropriate dimensions. Let {{formula:e06434e9-3cfc-4c54-9ca4-618c83a1780c}} denote the standard basis in {{formula:4f2c3768-d863-4196-a778-d3ef3c23740e}}. We first note that for all {{formula:f2ac3645-6724-4df8-955d-79d5295c2896}} and for all {{formula:ec9d9190-797e-4b6f-80ae-ad92848c1037}}, the following holds
{{formula:70f66bcc-38f6-4205-a92a-be6e211fe0b2}}
This implies, for {{formula:fc274841-de96-45fc-874f-1d1cd0300104}} and any {{formula:7193b4f3-0022-47bc-a6ce-f19703c71496}} , we have
{{formula:879428ae-e27c-40a8-8edb-7b8b5abe2d23}}
Substituting the above equation in eq. (REF ), we have,
{{formula:d5e8069a-86c5-4d60-9d21-6a93b0403cd0}}
where {{formula:5b9fa190-452b-4243-bf4e-cb4959648406}} denotes the complex conjugate of {{formula:36437ce8-c288-438e-ac22-2b0b34a3c504}} .
Let {{formula:ac110323-1a39-4044-89b3-f000933b63b0}}. The above equation further implies that for all {{formula:e63604c7-7719-467e-a8fc-48e0e8327db1}}
{{formula:75ae0274-9833-472c-b22d-3dddb4e23177}}
In eq. (REF ), since the LHS is a real number, we have that for all {{formula:65e47f31-1c85-4634-ad49-8f47e9e54403}} such that {{formula:99303e86-e1b3-471c-b64c-1fd32b7aa167}}
{{formula:24da6b66-b55d-4a78-b2a2-a1d4f4b44ecb}}
Also, by multiplying the LHS of eq. (REF ) across all {{formula:266b1e4a-8bd4-4c1c-bc08-2d0cdaff42d8}} and taking the {{formula:93d7effa-ebaf-43c6-8f4c-1a0c693e7fd8}} th root over positive scalars, we have for {{formula:64886eb2-b019-46c5-806a-9eb0df4b1b39}},
{{formula:26f5f36b-51a7-4133-bb22-c2dc5ccedb21}}
Finally, let {{formula:fbdf52fb-1cdf-419a-b485-e2b90446ff95}} be a positive scaling of {{formula:cd60a22c-dbd2-4d2e-b512-d344ffe1578c}} such that {{formula:9bfaf3f3-057a-4d0e-bb6e-4e33368b638c}} has unit margin. Let {{formula:21573a1f-4572-4984-9cc4-ff5d5afd6ca8}}. Since {{formula:245df687-0817-4599-948f-b6e30aa203ed}} is an arbitrary positive scalar, redefining it as {{formula:05670efe-2b2d-49e3-a78a-6bf081b6da9e}}, we have from eq. (REF )-(REF ),
{{formula:e21b7015-a177-4916-992e-3372c8510f6a}}
Case of {{formula:6d180a9f-fb1d-4bb1-a142-fe797554af07}} or {{formula:0b384017-52bc-4c0a-bf16-4f2a0afde2ac}}
For {{formula:09899e94-89ca-465d-bfbf-1992f5850c6e}} , since {{formula:81a6c25f-4f52-4dfe-ad65-57cb6c1b0df6}} , eq. (REF ) is indeed the first order stationarity condition for eq. (REF ) as described in eq. (REF ) and (REF ).
Case of {{formula:dbf4588e-7658-4f52-a065-ca0955a590ef}} or {{formula:0d89f596-b56c-4ab2-8ae7-a811a98e1eea}}
For the case of {{formula:87eae36d-8ed0-4d20-9dfd-ad7bf892e98f}} , in addition to eq. (REF ), we need to show that {{formula:83b7edc4-ca49-4d3d-acdd-06924d573d30}} . From eq. (REF ), for {{formula:cae05dd3-7629-41c8-9f18-a77b394f113f}} we have {{formula:611c3cc2-13e6-4821-8b72-37793fc4c675}} .
We need to further show that {{formula:3306a562-731b-4e41-914e-11fe8cd9a79d}} , {{formula:3680a794-1ede-482c-bcca-325364f83882}} .
Showing {{formula:47761dd2-5fcc-4d44-8f93-2ae69379e693}}
Using Lemma REF for the special case of a 2–layer linear convolutional network, for {{formula:31084a27-62ae-4596-9d68-f27f6a350f09}},
{{formula:0f84c32e-8f23-4c4c-a280-94d66bedd71e}}
Recall: for {{formula:b34710fc-d26b-4a9a-a836-ce768a3a38b0}} , {{formula:3029647c-1af2-45dc-9a2b-9d334f184963}} , {{formula:4fa40831-0d75-427c-a7c6-a22655fd6b90}} , {{formula:b386e67d-04a5-4390-b093-98841cc9fbfc}} and {{formula:452e8ea5-88f4-4d63-8f7a-f9048e2602f0}} .
Further, from eq. (REF ), we have {{formula:02e46d1f-333b-430f-9353-8195d01804c4}} , {{formula:3774acb0-faa5-4419-8443-df7b37744002}} , and hence
{{formula:5541c164-1d6f-4639-921e-2d86fbc8441f}}
From the convergence of the above sequences of complex numbers, we have the following:
{{formula:9bb2dd89-7a46-4bc4-98ac-3c6371497a31}} such that {{formula:d0801dd2-766d-44b1-b27c-e9e4ac66a9a9}} , we have
{{formula:0dbb8095-df3d-4b87-baa1-fb502eb3080a}}
{{formula:1c3a0f3a-2d58-402d-b5a6-9acce7de2328}} such that {{formula:17b8d4b3-ade4-4b9d-9a7f-28b79eeb0dd2}} , we have {{formula:eb44286f-7288-486c-8f9a-5ab490649632}} , and the following holds
{{formula:a7b7eca4-596e-4781-a6b6-9c5332913efb}}
where the last equation follows from eq. (REF ).
{{formula:bf1256b5-8e86-494d-a104-bb81e27d9494}} such that {{formula:d45eb097-2ef2-4eb6-9edb-eddf5e880ec6}} , from eq. (REF ), we have {{formula:ae7f25e4-084f-46fb-bba8-fc25f3694ce9}} .
In the remainder of the proof, we only consider {{formula:d9a34fe6-8b3b-4edd-b841-2b2153c4d88c}} with {{formula:3fa375a1-08e1-47eb-ad9d-7bbf8f45cedb}} .
Consider {{formula:caa29592-d9ca-4030-9cc2-afc60be43588}} defined below,
{{formula:83f1ce49-a826-4fd7-99e6-dc32f51ed283}}
Since for {{formula:b7af8068-4b4c-49be-b524-61f591d9ee85}} , {{formula:594e3803-04cc-4411-a9bf-81d1b87d8724}} , we have the following:
{{formula:d9bc47a0-26ab-4394-9d40-7d8766b8d154}}
where {{formula:a2ef7ccc-d8c8-4b0f-bfe9-8caec924a737}} follows from using {{formula:9c8cc539-1b74-4315-9dc1-0a4cc27a7429}} whenever {{formula:966c687b-546c-4054-ac18-948278559dc4}} (from eq. (REF )), and {{formula:a914c32c-ed4a-4cdf-8a0a-90386637f2b7}} follows from eq. (REF ).
Step 1. Dynamics of {{formula:d87fd878-dbeb-4915-ac77-7df2e9782439}}:
Now looking at the dynamics of {{formula:466d8ade-743e-4dda-881d-f067de60c91f}} , using eq. (REF ) we have that
{{formula:f8b31b4a-a74b-4546-8c6e-741514e04389}}
Additionally, since {{formula:8263c1e4-ab63-4b14-8853-23fa50c4c76c}}, we can write {{formula:4e3e60b5-b0ae-4bc2-babf-e49fab696d4b}}, where {{formula:c4c70a2a-20b2-426e-bdf1-2564b1af8c4e}} are real scalars. Substituting this in the above equation and rearranging the terms, we have
{{formula:17639bce-8cf8-429d-89af-6ffff1da46fc}}
where in {{formula:d296583a-dd3e-4b73-96d2-856ab90c5bde}} we define {{formula:05b1024c-69b2-48b1-b819-4e963bde59df}} .
The following intermediate lemma is proved in Appendix REF .
lemmalemcnlemma
Consider {{formula:237898d2-27fe-40ae-8be3-1c36b6d91b21}} in eq. (REF ). For all {{formula:324d1cd1-74a9-4eed-af16-f6b290c9c198}} such that {{formula:868c92d5-eb10-40c0-9502-3c9f5d1ad7da}} , {{formula:e8f8899a-9436-4393-9e58-ca1cb2e768ab}} and {{formula:76903475-f0f8-46a0-a6a9-1a7b1f8b320e}} .
Using the above lemma, we have {{formula:9be11928-2565-4bde-958f-f50ba9812803}} such that {{formula:7565a85c-0234-422b-832b-76aa30cbb22d}} . Additionally, since {{formula:76a5dc88-cae8-4437-ae12-72053be47b32}} , there exists {{formula:b15e01f1-601b-4aa4-a488-8de5cadd3a7d}} such that {{formula:298ffe43-95c6-4274-8ef4-d3a752feeab3}} . Substituting these representations in eq. (REF ), we have the following dynamics for {{formula:01d23211-7dc6-4550-896f-df03660f04ad}} ,
{{formula:5e6389aa-7c7d-4e3b-93ec-c5d52390fbfd}}
where in {{formula:331944a7-555a-41fa-8b30-2759281d20c7}} we have accumulated all diminishing terms into {{formula:4f0e754c-9a56-4775-a39b-268c577143ac}} .
Remainder of the proof: We now prove our theorem by looking at the following quantity: for any {{formula:f36ddabc-857a-401f-92cf-ca0fd5fedb18}} with {{formula:e511f2c4-2f8b-49c6-8b1b-c6fa3e2d02e8}}, define {{formula:6df6fe00-ab14-4a12-aa04-32839925528c}}.
We will show that whenever {{formula:43165b64-e51f-4fb9-a96b-4b7445900827}} , we get {{formula:906bd24d-8c7f-4a10-b6a6-93ec796c3f79}} .
Along with eq. (REF ), this would imply that {{formula:972d6ded-a2fa-45b1-a3a8-baba19602a23}} .
Hence, for any {{formula:cd69846b-0319-49dd-9512-6d8909180730}} with {{formula:abff0297-2a5e-475b-b96a-7b6a2392c890}} and {{formula:c4cce850-43a6-4835-bb1f-7cb8a6548707}}, we have {{formula:0be74db8-be28-4e43-9451-f7cbb4201eaf}}. Moreover, from eq. (REF ), we know that {{formula:3319fe9e-b4fe-4b70-b547-bbf693f6c059}} for all {{formula:e4435fa9-f1dd-4e42-af00-4b45e8cb6587}} with {{formula:7d709b0c-f321-4e5b-bc98-99a448bf6c98}}. This implies {{formula:ad239437-6b70-43da-ac55-14caae4d538d}}, {{formula:4fed63d2-d40a-4074-a864-1158fa2fd5d8}}, and concludes the proof.
Showing {{formula:dc97c813-efd0-4806-8568-61fb31551d21}} :
For any {{formula:ba87fcf9-1594-4321-8abb-a59ede815799}}, let {{formula:0bd305bb-bc25-4670-9917-fec7e78b7b13}}. We note that since the loss {{formula:42e54fae-e5fa-451d-9006-14445d64816a}}, the norm of the gradient {{formula:8d71d01e-85b6-4165-baa6-a0d185010d6a}}. Hence, for any finite step size sequence {{formula:66209cf4-adc1-4238-990d-7676f55377a1}}, there exists {{formula:578aea41-6db5-41b3-9b2d-b54eb8068d9a}} such that {{formula:d8d40e70-dcae-4ca6-a8be-d84b5cc7498c}} and {{formula:8b30615e-e134-4d18-9d7c-f4c03302ef4b}}, {{formula:7b83ebcf-5270-4475-8ead-beb59a1f9aec}}, and the following inequalities hold,
{{formula:6bad2829-b52c-49cc-889a-7729c8dab8aa}}
where {{formula:304cf484-4f13-4783-9e62-d180317fb51d}} follows from using {{formula:9bbb0f7d-47cc-4d50-904a-02aa14b0c796}} for {{formula:d961566f-c057-4e9c-9acd-e3c744adac29}}, since {{formula:9d7a375f-4a10-4895-9e2c-b9df91f47beb}} for all {{formula:600e4402-fd7f-4740-b0be-76d48a03fa8d}}, and in {{formula:74071bd1-5236-49dd-b6db-fd0a7305234e}} we absorbed all {{formula:24d20d50-2e5c-45a4-8531-ab490161f3d3}} terms as {{formula:acf91865-b7fa-4bf7-bef7-63ef7b1c884a}} for {{formula:e8b5a569-3d2a-4c42-b5f0-3907d16289bb}} and used {{formula:1ac7c2d4-21b4-47af-9a74-aa779da68f61}}.
Since {{formula:91855ac8-c925-4d6d-8b3f-843b01e8d327}} , for large enough {{formula:6e0b61e9-0dbe-40b0-a48d-3e061f02e411}} and {{formula:23618884-5c3c-4159-af02-961fb064c32d}} , we have {{formula:388e280d-8033-43f1-8f8d-ff110532f470}} . Thus, for all {{formula:fb411514-e31f-436f-b590-87b0b05eb579}} ,
{{formula:de497f94-2a83-41b3-acea-695dbb327538}}
Further, from the conditions of the theorem, for almost all initializations, {{formula:581c2fbf-0845-463c-a7cb-b601accfb714}} for all {{formula:e823a76e-7906-4f50-bdff-08c9493ba074}} . For step sizes {{formula:e3cfa6e2-4387-4efa-83e7-ff2e08e1afee}} smaller than the local Lipschitz constant, for all finite {{formula:cc63a51d-10a0-4ce6-bd9b-52a7bf34685e}} , we also have {{formula:c0743f9a-9900-42b2-95f6-0ba6c065bfcc}} . Moreover from Lemma REF , we have that {{formula:eeac5252-7478-4852-b398-c7e681665319}} and hence {{formula:caf0f266-0129-45b5-9911-c1267414cce5}} such that {{formula:3c0fab99-1697-409b-b558-fcad9fe603a1}} , {{formula:54468a01-51fe-4729-9f92-81d9dfe5c92d}} , but for any finite {{formula:5bc1057c-520e-45f9-ba47-b294a2a9b950}} , {{formula:feeee316-72d2-458a-af52-397522fa196f}} . Thus, for {{formula:3af639de-caa0-426a-adb0-3956a4e77c99}} , using the above observations, we have that {{formula:fe9ad6cd-a18b-43cc-820d-c859e1a9328e}} .
Now, using eq. (REF ), for all {{formula:e84bf389-600f-4abf-ad95-16afa345036e}} ,
{{formula:9716dafd-215e-413c-8435-768b53059c09}}
Finally, we show the following claim:
Claim 2 For any finite {{formula:65a5a03b-0a3d-445f-bb6d-c300529b22d6}} , finite step-sizes {{formula:8a7dd93d-10b4-4b63-8b5d-0e2c01f19cc4}} , and any {{formula:d661b7ca-7c01-460c-b97e-d8de5acf1fdd}} , we have {{formula:c7bc270e-c50d-4e8b-9b49-94bcb24e3344}} .
Let {{formula:26ce6650-8779-4399-a93b-770eb9d6b83c}} .
From eq. (REF ), we have that for all {{formula:a5a1f6f3-52b2-46c1-81d7-e07d83578318}} ,
{{formula:85997ef5-f717-413d-ad9a-b70aeed2aa5e}}
Moreover, we have {{formula:e0ad10cb-bfad-46a2-9d95-a2e2f44ffade}} for at least one {{formula:2b68e473-9fad-4c2d-acd3-6c01ff450347}} , and for any finite step sizes and finite {{formula:5c2bdbdb-68ac-4049-863b-30f9608fd346}} , {{formula:e32b43dd-e8e1-4018-854b-7493823e4395}} . This then implies that for some {{formula:22f224ff-fe9e-49e9-b615-c235b25e2604}} , {{formula:7aca10d4-cce1-4ecd-bfff-5d0236a15a8c}} .
Thus, for any {{formula:d36017dc-8135-4744-852c-33f5923486a1}} , we also have {{formula:52dbd995-0483-4359-a4b1-165318e6fdd3}} .
From eq. (REF ) and the above claim, we conclude that for all {{formula:1fcbd369-f6e9-4314-a611-f0710bef0d1c}}, if {{formula:afcf6e79-eba0-46bc-8b2d-3799567ac0b5}}, then {{formula:5b32aa89-1f9e-4bd9-b90b-6bdb56c55cdb}}.
This completes the proof of the theorem. {{formula:d496669e-8ca1-4692-8af7-3465ffc972a0}}
Proof of Lemma REF
*
Recalling {{formula:e3a04e10-d897-4cfc-b28e-bb41fb188da1}} from eq. (REF ) and {{formula:19154831-b6ef-4e4c-967a-b734c4f0fe65}} from eq. (REF ), we have the following:
{{formula:d689b933-323c-44c6-8b59-a6b9edc5782a}}
For all {{formula:8bb1fe9a-8eab-4b90-93cb-59147739eba2}}, if {{formula:f63420b8-2f9f-480c-9f10-5d4626b9c36d}}, then it is straightforward to see that {{formula:0aad3bc1-431f-47b4-bc2c-172ed07add3d}} (from eq. (REF )), and also that {{formula:a97607ac-0e3b-4ff3-be23-926fa8de3d3b}} (from eq. (REF )).
This along with eq. (REF ) gives us {{formula:67b46564-ccfe-42cf-9d08-197e0ef2a1d6}} .
Moreover, since {{formula:5f4fd07a-ea8f-4bb6-b9e7-6ee1d32e51e7}} , we have {{formula:e5e2f777-9390-4541-80a2-71d8010a5422}} or {{formula:017abfb4-cfcf-4455-95fe-98313a7ee264}} . Further, using {{formula:2127f89f-18d8-4427-8faf-3430ce82506d}} , we have {{formula:da4a634b-4499-4cca-b7ef-ec224e3f48d3}} .
We now only need to show that these results also hold for {{formula:a1b361d9-28f7-44db-9952-085f0cbc2e0b}} such that {{formula:c68f550d-5255-4a3f-8298-a34bf15857e8}}. Recall from the assumptions of the theorem that even when {{formula:0e4887da-6113-4726-932e-beaef3fb3e0e}}, {{formula:15e5b37f-6a00-4826-abba-b6c6598a9749}} such that {{formula:ffd71cdf-cd09-477d-875d-e73f6cf9ebdc}}. We now prove the lemma by showing the following steps for {{formula:117db222-b42e-4d86-bec4-dfdc9feca83a}} such that {{formula:096e1600-a444-4243-a0c1-fc0e315dbac7}}:
Step 1. Show {{formula:9aaebb51-cb75-4978-9075-1eba4401d166}}.
Step 2. Show {{formula:27c3dfec-e475-455c-b4bc-ded48f88d90b}}.
Proof of lemma assuming Step 1 and Step 2 hold
The above steps would imply that in eq. (REF ),
the denominator satisfies
{{formula:581a9fda-8e61-40df-bb85-789de034f98b}}
the numerator satisfies
{{formula:ebbffea9-6c97-463b-922a-7673f5815022}}
These equations, along with eq. (REF ), in turn prove the lemma, i.e., {{formula:d5373501-013d-4048-99cd-c2de47580de0}} and {{formula:d9ec7495-e4d7-47fa-b834-561a7147e7bd}}.
Showing Step 1 and Step 2
Step 1. Show {{formula:d4e03832-2d4a-446d-a85d-b4ed003ec260}}.
From the dynamics of {{formula:aafdf3f7-11d5-4320-90a0-514145b580d1}} from eq. (REF ), we have the following,
{{formula:0e176c7c-f6e5-49f1-a5cb-9b524028c0ac}}
Note that since {{formula:38cea69a-7ffe-45b5-87d9-692a27088bc3}} and {{formula:bc84a28f-fc32-4ea8-8334-6cd73420902a}} are finite, we have that {{formula:41ac0c16-452d-4ee6-b209-e683d7eada92}} such that for all {{formula:4d73590b-5a44-4d55-a0b1-7b52b3127336}} , {{formula:fbebaffe-d971-40e3-9f20-c00838de1c44}} . From the above equation, we have the following for {{formula:03c9b59c-c638-403e-b9ac-80c0bb02ccfd}} ,
{{formula:01f1fc6d-9c9e-45f9-9523-6d1f592dd7f3}}
where {{formula:d035060d-cdcf-44c5-9200-46a285b2b209}} follows from iterating over {{formula:da56f10e-20ef-4b6b-946a-c84e5d48f8be}} and using {{formula:c265b791-a0b0-4da0-a409-b51fc97a9cca}} for {{formula:432ffa54-2b11-4fd0-bfb5-895a5ea68e7a}} .
Since {{formula:b36c8a13-384b-429b-ac88-ef2a7ceda7b8}} , at least one of {{formula:d11f385f-6039-4f8c-a2b2-01dfd1ca99c2}} must diverge. Without loss of generality, let {{formula:094b4d65-b139-46f6-ab40-74b446f252f6}} . Let {{formula:6abed3d4-f5c4-4865-8561-feb88ab42552}} with {{formula:17f6ed09-f3fc-40a0-8a67-51f0f0684349}} . We have
{{formula:4f14064a-098d-4b34-9916-07b3677a2840}}
where the convergence in {{formula:f4e652a7-8195-4b8e-a594-4409685c23b8}} follows since {{formula:3433dca3-100a-4f2f-9ea5-de70374692e9}} (from eq. (REF )) and {{formula:f99303cf-ac56-446a-9e2f-c1790e9f020f}} .
Step 2. Show {{formula:4d7c23ec-ab66-4ba3-9db0-e64be4b8e4e8}}.
Note that from Step 1 above, we have that {{formula:2075e9a8-0343-4226-bd72-990e9c2e3741}} , which implies {{formula:f2104877-ed0d-4891-b281-2c8c0c417a68}} . Thus, there exists {{formula:95aaff12-27b4-4c9f-ad9a-e63573f66516}} , such that
{{formula:520a9a23-e287-46bd-ba20-71250f3421c2}}
Also, from eq. (REF ), there exists {{formula:d315cd9f-eb2e-4407-a1ab-7976db3e3beb}} , such that
{{formula:98677a9f-7bf6-4099-a6ef-9a4db205f92f}}
Using the above representations, along with eq. (REF ), we have the following,
{{formula:5f915fac-03f2-4719-876b-6c4e7babdca5}}
where {{formula:5b3ed905-53f6-47c3-97ff-a4cf16ca3886}} follows from substituting eqs. (REF )-(REF ), and {{formula:8a282cb0-f54b-4e88-8e9e-6d5291a0a071}} follows from using {{formula:850e9b54-99f3-487f-8bb3-798d67f88a2a}} and defining {{formula:a169fbd2-ccc6-4c15-a6fb-c42c38f54b7d}} .
Denote {{formula:b6178a26-00db-4e6a-a358-9374df614202}} . Additionally, from the assumption in the theorem, we have {{formula:a66f2941-9fcc-4593-961b-58fe035a531a}} , hence there exists {{formula:6a14cceb-6801-4411-94a1-c72d8a7f6437}} such that {{formula:766c4004-4dcd-4eb7-a312-f5bc953e906b}} .
Now, from the above equation, for any {{formula:be8dec56-50e6-4335-bf58-531be54c3a89}} and {{formula:0f8a995c-6011-42b1-9afe-0f288d59bb75}} , we derive the updates for {{formula:fd2c9c3a-d041-4aca-ad29-184499f4bfbe}} ,
{{formula:e75550c3-9446-4868-8797-f98fc66e5aee}}
where in {{formula:951ab570-ef65-4b70-ba4a-a67c85378596}} we used {{formula:fd00f16f-3c18-4c85-a344-1061aa4e1558}} and collected all {{formula:4bb829aa-1587-4f2d-95a0-a8eb89cdcc84}} terms into {{formula:b554683a-af25-44e5-a27f-1d12f4353b45}} (since {{formula:d1a4d97d-3867-4913-9bfb-8cae289174a0}} ); in {{formula:17e96f94-f821-4fda-9f82-71aec6c0b555}} we defined {{formula:d3c5809c-32c6-4be7-bfe9-aa4449fbff2d}} ; {{formula:cd75eff1-d76a-4b16-aa85-28993d542327}} is obtained by iterating over {{formula:318924bd-fbe4-4d9c-9682-09f603299776}} ; and {{formula:96a26034-da1d-4e4e-a5bf-7b1842293918}} follows from using {{formula:5f7a4b03-8d10-452e-9f31-fbd373c6c122}} .
Suppose, for the sake of contradiction, that {{formula:e89d9060-a885-4804-b11f-6c9c79310c5c}}. Since {{formula:97831456-2b4d-44f4-9514-99d574478563}}, and for finite step sizes {{formula:dba96ed4-f088-40fc-a9ae-a5140220403c}}, {{formula:48db98ea-8332-4266-85e0-c34bd01cb97d}} such that for all {{formula:6a54f99a-53a4-43d0-ad5f-241af4c24e90}}, {{formula:030b5cc3-ac24-40ff-85d9-8f1d224c1c6f}} and {{formula:c44d50be-73cd-4a7a-8cdd-6ad8cf4b193f}}. From eq. (REF ), we now have
{{formula:6cbba60a-2b31-47ac-aa4b-fa34fd7be5a9}}
Finally, for any finite step sizes and finite {{formula:ec0dce6d-a871-4d52-be13-fc0f5d796c50}} , we have {{formula:a637bd75-50f1-46c4-ad79-95371a029a17}} , which is a contradiction since the LHS of the above equation diverges, {{formula:b9597ed1-4965-44bf-b62a-2ad07e98e63c}} . Hence, for the updates in eq. (REF ) to lead to a divergent {{formula:16e56891-3d9d-4797-ac07-50f2b8c7be77}} , we necessarily require that {{formula:1ba4fc77-4024-46c4-861d-05098ed1f7fc}} .
This completes the proof of the lemma.
Computing {{formula:39f43f80-eea9-4d77-aca9-144f000f7843}} : Proofs of Lemmas in Section
In this appendix we prove the lemmas in Section that compute the form of induced bias of linear networks in the space of predictors. Recall that for linear predictors parameterized as {{formula:8bfbf8cd-9bf2-4818-849c-3c8d1f17b1a0}} , {{formula:f14fcf6f-a7f9-4940-afe1-61ab00941af5}} .
*
Recall that for fully connected networks of any depth {{formula:dbc1fae8-0c7b-4ce3-a5e8-84707a75740e}} with parameters
{{formula:cea3317a-8660-46bf-be8c-72bff67413a7}} , the equivalent linear predictor is given by {{formula:c206b86a-04a1-4979-9263-19cf529cdcbb}} .
We first show that {{formula:1122ad20-c405-4d15-bbfa-5d55845169de}} .
Let {{formula:a93fd9a0-88f5-4e97-8b4d-eba72d3a16ae}} be the minimizer of {{formula:bfc128c5-7de2-485a-8586-d22ab16cece0}} , so that {{formula:94b99146-53ce-4ccb-8890-362f3e9593a1}} and {{formula:cda2f8db-624e-4e5d-b30a-4a336766c368}} . We then have,
{{formula:cb444452-58a6-4d3b-9d6d-94d693a4a5b4}}
where {{formula:8943e020-de8f-4a99-bbc0-723f659e8844}} follows since the arithmetic mean is greater than or equal to the geometric mean.
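Spelled out in generic notation (our own sketch; $W_l$, $L$, and $\beta$ stand in for the masked quantities, assuming the standard parameterization $\beta = W_L \cdots W_1$), the AM–GM step reads

```latex
\frac{1}{L}\sum_{l=1}^{L}\|W_l\|^2
\;\overset{\text{AM--GM}}{\ge}\;
\Big(\prod_{l=1}^{L}\|W_l\|^2\Big)^{1/L}
\;\ge\;
\|W_L \cdots W_1\|^{2/L}
\;=\;
\|\beta\|^{2/L},
```

so $\sum_{l}\|W_l\|^2 \ge L\,\|\beta\|^{2/L}$; the second inequality uses submultiplicativity of the norm.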
Next, we show that {{formula:630cdcef-263c-4b8f-9b2b-476a24e4ddf0}} .
Given any unit norm vectors {{formula:9d25e6ba-552f-4330-889c-03693b6fc053}} for {{formula:8945af8c-1b7e-4314-a626-e37884b5fa3d}} , consider {{formula:8671900e-ac54-43b0-8498-d40f7216a4e4}} , defined as
{{formula:63409c59-11c9-4448-a4e9-839ae606a863}}
This ensures that {{formula:9c4c7bbd-6263-4638-b4d8-f39755ec6cac}} and {{formula:d4244beb-1d40-410b-9a44-03272f865703}} , and hence
{{formula:5d596c1c-d30d-45a8-a72a-01e5dbe3d886}}
Combining eq. (REF ) and eq. (REF ), we get {{formula:a3876982-7476-4fd5-9ea6-4a6f816466e8}}
The proofs of the lemmas for computing {{formula:6a869c6e-c9ea-4a1c-99dc-18047d414dd7}} for diagonal and convolutional networks are similar to the proof for the fully connected network.
*
Recall that for an {{formula:7f7e33a3-4df8-4406-b5c8-b6a5b63eed50}} –layer linear diagonal network with parameters
{{formula:9e055504-ca59-4e96-ae0c-0170ccbf6411}} , the equivalent linear predictor is given by {{formula:96b87f96-7837-4938-98a1-ce26864efdd8}} .
Let {{formula:04d0fa01-0d9d-4bda-81bb-b4037f1dc5cc}} be the minimizer of {{formula:0a03947b-0223-43f4-8cca-6e7975757fa8}} , so that {{formula:114e6398-7571-471a-9efd-ac2a024a8912}} and {{formula:8f4bfc97-066d-49a6-862e-64d0153dcbdc}} . We then have,
{{formula:8d1ab54b-1bcd-4aa1-b1fe-785fa6e32dfb}}
where {{formula:76217d93-8e9f-4174-9fa8-b475a54e8ecf}} again follows since the arithmetic mean is greater than or equal to the geometric mean.
Similar to the case of fully connected networks, we now choose {{formula:35c66605-f277-45d1-9fec-1f2a7dc0cd46}} that satisfies {{formula:671c6d3f-5443-445b-9285-8a005c6e826e}} and {{formula:c31646a8-8731-462d-8c27-0e2e34125d5c}} . This would ensure that,
{{formula:36158daa-6c0b-47d6-a104-06e75c830a12}}
We can check that these properties are satisfied by choosing {{formula:801a5e8d-69f5-4ff7-aede-2fccd52c83eb}} as follows: for {{formula:0342d47c-7be6-4040-a61e-7f8fa6e51e62}} , let {{formula:695ef5ed-1891-46aa-86da-e685a0094c9f}} and {{formula:001e5e8c-840c-4e52-919f-8196271fc2c8}} for {{formula:da5cf32c-3f97-45bb-9e07-4e256a10a722}} .
Combining this argument with eq. (REF ) concludes the proof.
For convolutional networks, the argument is exactly the same as that for diagonal networks, adapted to complex vectors.
*
Denote the Fourier basis coefficients of {{formula:a592dad5-2951-43d8-9656-a03f85828856}} and {{formula:aaf4a7c3-f20f-470e-94d6-cdc8adea075e}} in polar form as
{{formula:9a8f330b-4383-4170-bb58-fe063b81d123}}
where {{formula:ea3a117c-2753-4f78-a34e-422865822cda}} and {{formula:d709caad-10c0-4e78-bfd4-3bf7b51043fc}} are the vectors with magnitudes and phases, respectively, of {{formula:8dcc3ce0-a153-4a48-8f04-bc1462b1e58b}} .
From Lemma REF , the Fourier basis representation of {{formula:31a2d6d2-d673-438b-9e4b-495eaae61ae8}} is given by
{{formula:4f498a04-10fb-435a-85c8-06ca97530a52}}
where we have overloaded the notation {{formula:70b58256-dda0-4f01-8b83-2316d3f73f9c}} to denote the mapping of diagonal networks over complex vectors, and {{formula:8fee5e4b-1814-42d0-b366-ddf0977fa52f}} . We thus have, for {{formula:0085c748-dff9-4d6d-b5e9-31ab484c1b9e}} ,
{{formula:a85a9642-055a-44b6-867d-d176d9cd4c62}}
From the orthonormality of the discrete Fourier transform, we have for all {{formula:0bbf3bc5-ebd1-4e82-adfd-d4f75069f668}} , {{formula:e918c593-0558-4ec8-9ad9-b72604b8a37a}} . Thus,
{{formula:b3b7493b-0ad0-45fa-bba9-5c4612381e57}}
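As a quick numerical check of this norm-preservation property (our own illustration, using numpy's unitary DFT; not part of the original proof):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(16)
w_hat = np.fft.fft(w, norm="ortho")   # unitary DFT
# Parseval: the unitary DFT preserves the Euclidean norm.
assert np.isclose(np.linalg.norm(w), np.linalg.norm(w_hat))
```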
We can now adapt the proof for diagonal networks.
Let {{formula:c9b25999-128c-4b2e-9c8b-af84dceff999}} be the minimizer of {{formula:3a498be9-6aed-4d11-b307-f8b86461033b}} , so that {{formula:a070580f-6b3d-40ad-8b20-32d65713c903}} and {{formula:40a02053-b281-4144-ac2d-41b915b5f517}} , and
{{formula:c1d18d75-6062-415f-84c2-90d5142ddcd8}}
Similar to the diagonal networks, we can choose the parameters in the Fourier domain {{formula:f7c9f225-bf3c-472d-a2ca-0bce778c7681}} to ensure that {{formula:08e461d5-8b66-47c9-8899-865aa23950c7}} and {{formula:436e6c48-2e8d-4091-abeb-025b4035a0fd}} as follows: for {{formula:f4aecfb0-0b1a-47b2-a712-d16020046920}} , let
{{formula:c0a80453-81f1-40e5-9e3e-d34458da8e2c}}
This gives us
{{formula:567fc89f-037e-4923-a610-7dd60614bdad}}
Combining this with eq. (REF ) concludes the proof.
Background Results
Theorem 3 (Stolz–Cesàro theorem, proof in Theorem {{formula:37f8a7a5-bfc0-4df7-b9cc-e796a9708bad}} of {{cite:1d01da08f1471ce9676ff2f5b04dc87b6184dc2b}}) Assume that {{formula:3ac0eac6-aa07-4d6f-949b-d457343a4618}} and {{formula:428e69e4-ee60-40a5-9505-51838c272976}} are two sequences of real numbers such that {{formula:6021c2c0-dd02-464b-af8c-e3ac3db0333f}} is strictly monotonic and diverging (i.e., monotone increasing with {{formula:16297528-01e8-448e-b1c7-af4e44b9e20d}} , or monotone decreasing with {{formula:b8946684-990e-4d35-aa7b-ec6dac4230e1}} ).
Additionally, if {{formula:f8a85743-0f85-4694-b3dc-1ec29b8991cd}} exists, then {{formula:0a1344d0-fe36-4d6f-85ad-426edaaa0557}} exists and is equal to {{formula:ab6fefdb-9714-455f-aab7-d9b1fcad144e}} .
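As a concrete illustration (our own example, not from the cited source), taking the denominator sequence to be $b_n = n$ recovers the Cesàro-mean statement:

```latex
a_n=\sum_{k=1}^{n} c_k,\quad b_n=n
\;\Longrightarrow\;
\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}=\lim_{n\to\infty}c_{n+1},
```

so whenever $c_n \to c$, the Cesàro means $\tfrac{1}{n}\sum_{k=1}^{n} c_k$ also converge to $c$.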
| r | bef3a8c815ecaf726df3c15d0b01a295 |
A large number of attribution methods relying on the gradient of the decision being explained have been developed.
The first such method was introduced in {{cite:0bb828bf18a3898ba80368d670a35b6c320783f1}} and improved in {{cite:f628d9a53d4bc05e59e1e7d346de1d9dd12aecfd}}, {{cite:567956029bb58264ee42ca09b0f085bc817c68df}}, {{cite:5a319414e1e9f8cf9afe9101a083988884310cc1}}; it explains the decisions of a convolutional model by back-propagating the gradient from the output to the input. The resulting gradient heatmap, also called a saliency map, indicates which pixels affect the decision score the most.
However, this family of methods is limited because it focuses on the influence of individual pixels in an infinitesimal neighborhood around the input image in image space. For instance, it has been shown that gradients often vanish when the prediction score to be explained is near the maximum value {{cite:e73271b43e4de7a97aba1d79eb9d430a63e49037}}.
Integrated Gradients {{cite:e73271b43e4de7a97aba1d79eb9d430a63e49037}} and SmoothGrad {{cite:524f77933761cc4c8842a2320a4ab280c43000f8}} partially address this issue by accumulating gradients. Another family of attribution methods relies on the neural network's activations. Popular examples include CAM {{cite:44e29220cb396f483daf624505b10e81d806f3f9}}, which computes an attribution score from a weighted sum of the feature channel activations just before the classification layer.
GradCAM {{cite:e97aa1a6b8ef70fc8fe92bf4606a1b11bff27a15}} extends CAM via the use of gradients, reweighting each feature channel according to its importance for the predicted class. Nevertheless, the choice of layer has a large impact on the quality of the explanation. In comparison, our proposed approach is model-agnostic and hence does not require access to internal computations.
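For concreteness, a minimal sketch of the vanilla gradient saliency computation described at the start of this paragraph (our illustration in PyTorch; `model` is any differentiable classifier, not a specific model from this work):

```python
import torch

def saliency_map(model, image, target_class):
    # Vanilla saliency: gradient of the class score w.r.t. the input pixels.
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)   # (1, C, H, W)
    score = model(x)[0, target_class]
    score.backward()
    # Per-pixel importance: max of |gradient| over the color channels.
    return x.grad.abs().squeeze(0).max(dim=0).values
```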
| m | 1d0356c567bd273139a69e961b0d3334 |
An efficient way of implementing this scheme could be to create an array of traps using an optical lattice. By exploiting the superfluid-to-Mott-insulator phase transition, it is possible to achieve an array in which each trap contains the same fixed small number of atoms {{cite:c8c2d06f0c22d4ab81db9935925c101bbc391cd5}}, which is important for our scheme. The Mott-like features that we need can also be achieved at finite temperatures {{cite:a4f52a5eee4708e699d5d518b9536e775e062c64}}. Once we are in the Mott regime, the optical lattice can be adiabatically transformed to create an array of traps with the desired populations and geometries. For our simulations, we model an array with 200 lattice sites, though this could be scaled up to larger numbers. Using this array, 200 results can be obtained in a single measurement run. This is a big advantage for measuring time-dependent systems: if the rotation rate changes with time, we need to gather sufficiently good statistics at a given time before it changes, and making all the measurements in a single shot greatly increases the bandwidth of our sensor. Our simulations use the initial values {{formula:acbe10c3-cd6b-4299-8dea-332ed11f152d}} and {{formula:4aa3824e-2bfa-4ee0-9593-aac6782f1278}} and follow the procedure described above. A single run gives us 200 measurements in parallel and significantly changes the Bayesian distribution, resulting in {{formula:075de70c-700b-4a4a-b295-72666b012996}} . At this point we apply the tuning, and a second array is created and measured using updated values of {{formula:ee9dea70-fd16-4efc-b452-63937e0613f0}} and {{formula:a5a3b3b5-33e1-419e-837b-b5626dc9d9ba}} . These values are chosen as {{formula:f5e66d38-34cb-4ce1-9157-25ee6478498f}} and {{formula:66310bee-fde6-4acc-96c3-479e856e3ab7}} to approximately match the width of the distribution, i.e. {{formula:27c62175-4417-4ae5-8356-13cafaa3cde5}} . The array method means that we can only alter the {{formula:58f0a55f-df72-46e0-9ca5-7be4fc929dc9}} and {{formula:b9184afb-4f2e-4489-89f5-85e5b2f6750e}} values at measurement numbers corresponding to integer multiples of the number of traps. This restricts us a little, but the restriction is far outweighed by the practical advantages of the scheme. After this process (i.e. 400 trap measurements in total) our simulation gives {{formula:382ec152-2b16-466a-aec3-495821da7738}} . This is a factor-of-10 improvement over the untuned case with 400 measurements. A bigger advantage is possible with three or more tunings, and the details can be worked out for specific implementations, taking practical considerations into account.
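A minimal numpy sketch of the batched Bayesian updating described above (our illustration: the two-outcome likelihood, the parameter names `a` and `b`, and the width-matching rule are placeholder assumptions, since the actual measurement model sits in the masked formulas):

```python
import numpy as np

# Grid over the unknown rotation rate (hypothetical range and units).
omega = np.linspace(-1.0, 1.0, 2001)
posterior = np.full_like(omega, 1.0 / omega.size)   # flat prior

def likelihood(outcome, omega, a, b):
    # Placeholder two-outcome measurement model p(outcome | Omega) with
    # control parameters a, b; the real model depends on the trap physics.
    p1 = 0.5 * (1.0 + np.cos(a * omega + b))
    return p1 if outcome == 1 else 1.0 - p1

def update_batch(posterior, outcomes, a, b):
    # One array run: 200 traps measured in parallel, so multiply in all
    # 200 likelihood factors before renormalizing.
    for y in outcomes:
        posterior = posterior * likelihood(y, omega, a, b)
    return posterior / posterior.sum()

rng = np.random.default_rng(1)
a, b = 1.0, 0.0                                     # initial control values
posterior = update_batch(posterior, rng.integers(0, 2, 200), a, b)

# Retune so the fringe period roughly matches the posterior width
# (a stand-in for the width-matching condition in the text).
mean = np.sum(posterior * omega)
sigma = np.sqrt(np.sum(posterior * (omega - mean) ** 2))
a, b = 1.0 / max(sigma, 1e-6), -mean / max(sigma, 1e-6)
posterior = update_batch(posterior, rng.integers(0, 2, 200), a, b)
```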
{{figure:7e408599-aba0-41f3-8320-8c27af9d90c2}} | d | 2763e1862b5e4600ce2688269ebb7b20 |
Our main results are summarized in Fig. REF .
We find that magnetic fields whose coherence length at the EWSB is larger than the comoving neutron diffusion scale at BBN
are completely ruled out, since generating the correct amount of the BAU and keeping the baryon isocurvature
perturbation small enough cannot be achieved simultaneously. In summary,
for the primordial magnetic field to produce the BAU,
the coherence length and the strength are constrained as {{formula:f0e39348-85d6-4481-a4aa-c09d5572acd4}} and {{formula:6475d5cd-1d6f-44db-a2de-74b005f85173}} in physical units at the EWSB.
Such magnetic fields evolve according to the MHD cascade processes and remain as the IGMFs.
For the typical cascade parameter {{formula:4daab9e5-95e8-424d-bc35-5793cac80fe4}} , the present IGMF properties are uniquely determined as
{{formula:14916c22-75a5-43dc-b22b-962d44166695}} and {{formula:6205c1d9-f78e-434a-bc9e-d301c16ea274}} , with the field being maximally helical today,
regardless of the shape of the magnetic field spectrum.
Even allowing the relatively unrealistic value of {{formula:517b181a-c419-4a30-93fc-dd25a38371e3}} up to 0, we find the present IGMF must satisfy
{{formula:d40b7e50-30c8-45e9-a802-2a17d092638a}} and {{formula:f72083cc-e9b9-4606-a224-3138cf6491cf}} ,
on the line Eq. (REF ), which is determined by the eddy turnover scale.
It is found that the relationship among the parameters that describe the magnetic fields, that is,
{{formula:cb28eaf6-443b-4f3a-93ea-eb260ddd02d2}} , {{formula:96d307cc-cc36-40e6-9c24-8a804735a8f9}} , and {{formula:377ecce0-4540-4b19-a33b-2a172391e669}} , is
uniquely determined by the conditions to generate the correct amount of BAU (Eq. (REF ))
and to satisfy the MHD evolution (Eq. (REF )) as {{formula:0b80351e-93df-4dbe-a63b-47389e4b0ed7}} ,
leaving the IGMFs with only one degree of freedom, regardless of their spectrum or evolution history.
While this would act as an important consistency relation to test this scenario,
the upper bound from the baryon isocurvature perturbation is (at most) marginally below the present lower bound on the IGMFs
suggested by the blazar observations {{cite:e16af1c3367d14c36578d4732c9fcc6192afcf0a}}, {{cite:a3fcd442672135a0b2b37dba06174f66b107497b}}, {{cite:59aa319bfd721aa3e80f7ee3fbb9ff249f873a8a}}, {{cite:4df19043dd49b817f065f4ab0615755d99256e37}}, {{cite:3b345ae63fca681380c73c74233f74dde1325f83}}, {{cite:8b9d662f64c96d668f16e7187ea5749d8bab9e21}}, {{cite:a240c8f82f71243ad2b5d191fb22f296bcbfe125}}, {{cite:589a5f5628d2fd28c0927adb39d1dc1e9b762027}}, {{cite:a4b8c5264982b0d36be03e444dc41675a765a3bd}}, {{cite:351a3b75e6b8f63ee6ec4bbfc7c34034c923d4d5}}. In particular, if the constraint on the IGMF coherence length suggested in Ref. {{cite:351a3b75e6b8f63ee6ec4bbfc7c34034c923d4d5}} is correct,
it is quite difficult to satisfy both the baryon isocurvature constraint and the IGMF lower bound.
| d | 4607fb13363d594f9d4ce8b5c5d9a8ff |
It is common to consider a LED model with a dominant extra dimension {{cite:38b01de72974379cb99b29a20904506ebc2dd585}}, {{cite:2c396b68541d75fbac637430c23f0775c23abe67}}, {{cite:d418d61e0f4c40fd9d48d14a601327b0c4afa610}}, {{cite:8ca56d315a22fcba516e4c5620ecd8c120dfd431}}, {{cite:58ad3d4c98d7ecdb925b1b67855a5ba83a3aaf6d}}, {{cite:4a8edf6198b4ea31a28e27534deddd3ca143cee0}}, {{cite:de55d2da2c2b11533eb2809dc597e8ae5fdc2f7d}}, {{cite:303379177c9ee7058b8f0adafb91321d6e1f18b5}}, {{cite:457859ee53d6084ed8b3387ddabe3c6b8285c853}}, {{cite:aa6f54145d22f08a7a7d54e0503cfb6acdbccb23}}, {{cite:8571d3f541cf83dc1cb02ade1da7beabb063f2a2}}, {{cite:764c4215e00535dbf269763a9a2b4407121558a9}}, {{cite:d57706a25b7f227aace533b16317352958a13ecc}}, {{cite:a53d6f162fd7e6933cc28800ae66eec5365095a4}}
whose neutrino phenomenology is described by only two parameters:
the neutrino mass scale and the radius of the dominant extra dimension.
In this paper we present the bounds on these parameters
obtained from the analysis of several neutrino oscillation experiments
and from the recent results of the KATRIN experiment
on the search for the effects of sub-eV neutrino masses
on the spectrum of electrons emitted in tritium decay.
| i | a46f9bdae0e6c6d1a9656645fa11b2f1 |
To resolve the inconsistency pointed out by Bell, we must, at some point, determine whether
projection is a real physical phenomenon. The demonstrations of superposition in systems that
have become entangled by interactions span an impressive range of physical
situations{{cite:9129d865261ae87437aa202ec4362b2e257c5c87}}, {{cite:6740c1c9ade1946c8fa9fa76ce463bff3b82a4fc}}, {{cite:f8fd03dd971849a89470b4f94a758c559e436bbf}}, {{cite:2efe0c9ec9d9851dcb1a8cf80d796524148bf419}}, {{cite:52bd01a65a59e356d9ff746597a584e454229618}}, {{cite:7ad4d1527474174b4ec6282acbd9685476e80315}}, {{cite:a76c7bdb42e04064bdb8572ed3009f4127a555fb}}, {{cite:62b2f00a3bce3dd08c8903bfb7bff390ee0a6134}}, {{cite:685502aab60ac2a93bd81e174bb4493b61563c0d}}, {{cite:d8685bf638c72f04e1e30a68d348efef12f7aac1}},
and they are important first steps in determining the extent of linear evolution. However, to establish the
complete absence of any real projection effects would require something like the building of a perfectly functioning
quantum computer of arbitrarily large complexity, or the teleportation of extremely complicated quantum states.
Clearly, these tasks are far beyond any foreseeable technology.
| d | 7d5907459caf8ccf31f361b25a4a3024 |
Goodfellow et al. {{cite:6898dca9164a24af8e02a30d927c50eebcc692f1}} developed an effective white-box attack algorithm, FGSM, based on gradients. The idea is to attack the model using the gradient direction of the model's loss function. Once the model has converged, the weights are fixed. By computing the partial derivative of the loss function with respect to the input image, we can obtain the optimal direction of change for the input image, i.e., the one that reduces the loss. The FGSM algorithm does the opposite: it adds a minimal perturbation to the input image in the direction opposite to the optimal one, thereby increasing the value of the loss function and causing the model to misclassify the input image.
We denote by {{formula:c6104e58-fcbc-4d55-97a5-ef567c0553af}} the original RSI and by {{formula:d9dc45f9-99f8-4c2a-9f2c-95b589cd83db}} the adversarial image of the RSI produced by the attack algorithm. {{formula:29c572f4-a3fd-4759-81a6-6bc326f4fbef}} is the model loss function, where {{formula:b9efcef2-246f-4f33-a100-e9f8d4c9c3a8}} denotes the weights of the RSI classification model. The FGSM attack algorithm is as follows:
{{formula:f986813e-43d7-4244-9c07-9d7c1b08a89b}}
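A minimal sketch of this attack in PyTorch (our illustration; the model, loss function, and epsilon value are placeholders, and pixel values are assumed to lie in [0, 1]):

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    # One-step FGSM: move x along the sign of the loss gradient to *increase*
    # the loss, the opposite of a gradient-descent step on the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in a valid range
```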
| m | 2c2c55ec7c9a201d9c734f5574e30b7e |
The best models are Lstm and BiLstm. The best overall model is
BiLstm, which outperforms the other models on half of the
tasks (SST-fine, Opener, and SemEval)
and consistently beats the baseline. This is in line with other
research {{cite:14e54af63291cce55ba7888d9c2f7446cdceb396}}, {{cite:9316f7037dee8fd62f94b75b98a08a56ddeb119c}}, {{cite:66c9a94aa8e49512285112d17b8b8ff3617d430c}}, which suggests
that this model is very robust across tasks as well as datasets.
The differences in performance between Lstm and BiLstm, however, are only significant
(p {{formula:c26adf9e-9021-4766-b1de-b9d12e29a27c}} ) on the SemEval dataset.
| r | 4e1e4e8c811665535845cbcf57cc66ed |
Another cost-sensitive method for mitigating bias is exponentiated gradient reduction {{cite:93aa8548c31d862baa479144f94b85edcb537075}}. The algorithm consists of a sequence of two reductions that aim to yield the classifier with the lowest error under the defined constraints. The fairness metrics considered for optimization are statistical parity and equalized odds. The method was designed only for binary classification.
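As a usage sketch, assuming the scikit-learn-compatible implementation of this reduction in the fairlearn library matches the method described (the synthetic data and estimator choice here are ours, purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Tiny synthetic binary task with a binary sensitive attribute.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
sensitive = rng.integers(0, 2, 200)
y = (X[:, 0] + 0.5 * sensitive + 0.1 * rng.standard_normal(200) > 0).astype(int)

# The reduction wraps any sklearn-style estimator; DemographicParity encodes
# statistical parity, and EqualizedOdds can be substituted analogously.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
y_pred = mitigator.predict(X)
```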
| m | 14b79b0c9a31b82ea74355207444f99d |
Two further important differences between the Swin-Transformer and ResNet are: 1) replacing the pooling operations with patch merging, and 2) using layer normalization (LN) {{cite:af3272198cddf760d7b9417433a6beac082cbd8d}} instead of batch normalization (BN) {{cite:6a0193df0d000ac23645a973b0ec5fee0b33213d}}. In {{cite:2dfc8796028a40150abdcdc6c05869b434ed7fc7}}, the authors replaced BN in ConvNets with LN and did not observe a significant performance difference. The influence of these two changes on performance is not yet clear to us and can be explored further (see the sketch below). We plot the histogram of the last layer's feature maps and also visualize the last-layer feature map outputs of both ResNet-50 and Swin-T from our simCrossTrans experiments. See Figures REF and REF . From Figure REF , we can see a clear difference in the value distributions of the last layer's feature maps. For Swin-T, the distribution follows a bell shape due to layer normalization {{cite:af3272198cddf760d7b9417433a6beac082cbd8d}}. In Figure REF , we visualize the backbone's last-layer feature maps. From the visualization we can clearly see that ResNet produces sparser outputs, due to the pooling layers and the absence of layer normalization, while Swin-T's last layer is denser. It is also interesting that for Swin-T there is a clear feature map similar to the original input image. We thought this might be due to the residual connections present in both ResNet and the transformer; however, we did not find similar maps in the ResNet outputs. Figure shows more details of two channels of the feature maps from ResNet and Swin-T.
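To make the BN/LN distinction concrete, a small PyTorch sketch (our illustration; the shapes are arbitrary) showing that BN normalizes each channel over the batch and spatial dimensions, while LN normalizes within each sample:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 64, 7, 7)            # (N, C, H, W) feature map

bn = nn.BatchNorm2d(64)                 # normalizes each channel over (N, H, W)
ln = nn.LayerNorm([64, 7, 7])           # normalizes each sample over (C, H, W)

# Swin-style LN is usually applied to (N, H*W, C) tokens over the channel dim:
tokens = x.flatten(2).transpose(1, 2)   # (N, H*W, C)
ln_tokens = nn.LayerNorm(64)(tokens)

print(bn(x).shape, ln(x).shape, ln_tokens.shape)
```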
| d | a83c7f9a08d36a40287d15acf2aebf2c |
The dynamic properties of AR-VTG are presented in Sect. , divided into different subsections, where the dust core is considered.
The study of the internal core in comoving coordinates leads to, as in GR, three possible families of solutions
that classify the spacetime as bound, marginally bound or unbound {{cite:bd5c24dc28d8dfdfa64dd41fcabbc50389ee7f98}}.
This fact has been characterized with {{formula:f66be1e5-b4f6-4d0b-b3bb-7dac6eb5ce84}} , {{formula:6d7d5e91-8327-4973-8689-2f15c9be6bda}} and {{formula:41e5db96-b2bb-42fc-936f-f1e420596ed6}} respectively.
In all three cases a minimum value of {{formula:47b6251d-c392-46bc-8ade-babb9da67884}} has been found, say {{formula:c4b14bbb-5be5-4db0-ae27-0b52683cde25}} , where {{formula:3249ea57-550b-484f-87c1-b74a519fb7bf}} , which is reached after crossing the apparent horizon at {{formula:396b867d-477f-40f7-b771-165e549a28c3}} ; the collapse is then halted and reversed.
The same fact was first found for an uncharged dust in an external electromagnetic field by Shikin {{cite:24381c511893acdac02c90eccd791b44b720e222}}.
Thus, under the premise that the real Universe and astrophysical objects (in general) carry no net electric charge,
AR-VTG provides an interesting alternative for preventing gravitational singularities as a purely classical gravitational effect.
| d | 52f195a8e7147c04647cbdcb18c886de |
Among these works, NeRF {{cite:130efedcebaef0e902915fde98773b2a063202d4}} is a representative method that incorporates part of physically based light transport {{cite:17a8c1240a08c8d220d4f5f326b6e2eb5e8ca53f}} into the neural field.
Light transport describes how light travels from the light source to the scene and then from the scene to the camera.
NeRF models the latter part: the interaction between the scene and the camera along camera rays (rays cast from the camera through the scene).
By supervising these camera rays of different viewpoints with the corresponding recorded images, NeRF optimizes a neural field to represent the scene.
Then NeRF casts camera rays from novel viewpoints through the optimized neural field to generate novel-view images.
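A minimal numpy sketch of the standard volume-rendering quadrature along one camera ray (our paraphrase of the usual formulation; the neural field that produces the densities and colors at the sample points is assumed and not shown):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    # sigmas: (S,) densities; colors: (S, 3) RGB; deltas: (S,) segment lengths.
    alpha = 1.0 - np.exp(-sigmas * deltas)                  # per-segment opacity
    # Transmittance T_i: probability the ray reaches segment i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha                                 # compositing weights
    return (weights[:, None] * colors).sum(axis=0)          # expected ray color

# Usage: in practice the densities/colors come from querying the optimized
# field at sample points along a ray cast from the (novel) viewpoint.
color = render_ray(np.array([0.1, 1.0, 3.0]),
                   np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float),
                   np.array([0.5, 0.5, 0.5]))
```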
| i | bf3890838a61cd4753d21cafeecb20fd |
Graphene synthesis and transfer. Our graphene synthesis and transfer procedure is a modified polymer-assisted wet transfer of CVD-grown graphene {{cite:76fdcb55c2d7f28052f29fd5b9c0848a1c5b80b6}}, similar to the transfer recipe from previous demonstrations of remote epitaxy {{cite:d3b628f0e07b517939334ab1929e4c249f9e052d}}, {{cite:0020f49a14372dc403c5cd470b4b29ae6e0efdf2}}. Graphene was grown by thermal chemical vapor deposition (CVD) of ultra-high-purity CH{{formula:197be949-9c00-4c6c-9b73-24fdea63e77d}} at 1050 {{formula:15a2eda2-358a-4860-87a2-b16a2cd44122}} C on Cu foil, as described in Refs. {{cite:35340a50eb34c434bce585c3abcdb222f1514682}}, {{cite:e3dc740262ee4dce497b6c57303e069ffa514beb}}. The graphene/Cu foils were then cut and flattened using clean glass slides to match the dimensions of the semiconductor substrate. Approximately 200 nm of 950K C2 PMMA (Chlorobenzene base solvent, 2% by wt., Kayaku Advanced Materials, Inc.) was spin-coated on the graphene/Cu foil at 2000 RPM for 2 minutes and left to cure at room temperature for 24 hours. Graphene on the backside of the Cu foil was removed via reactive ion etching using 90 W O{{formula:ccbd75ff-d275-4ec2-b507-cdb606b2699e}} plasma at a pressure of 100 mTorr for 30 s. The Cu foil was then etched by placing the PMMA/graphene/Cu foil stack on the surface of an etch solution containing 1 part ammonium persulfate (APS-100, Transene) and 3 parts H{{formula:336344a1-384c-4584-8dd0-3c8a554ead8e}} O. After 16 hours of etching at room temperature, the floating PMMA/graphene membrane was scooped up with a clean glass slide and sequentially transferred onto five 5-minute water baths to rinse away any etch residue.
| m | cbe5959798bba38276e85b4dd9e82222 |