Dataset schema: text (string, 54 to 548k characters), label (string, 4 classes), id_ (string, 32 characters).
Some readers might wonder whether optimizing GPU memory usage for long-video processing is a valuable contribution, since modern GPUs offer ever-larger memory capacities and many memory-saving techniques already exist, such as Gradient Checkpointing {{cite:9f67f3705b733518b910fc7b13d037200b92af9d}} and Mixed Precision {{cite:68a6396641a59ac8bef70c96b95358fb8f466213}}. Despite these advances in GPU hardware and memory-saving techniques, we believe that TallFormer remains a valuable contribution to the research community. As GPU hardware develops, the demand for higher-resolution video analysis and larger models also grows, posing new GPU memory challenges, especially for long-term video understanding tasks such as temporal action localization. We also note that TallFormer can easily be combined with existing memory-saving techniques, as we demonstrated in our experiments. Our future work involves extending our framework to multimodal settings that involve processing both visual inputs and language.
d
336a775eb135c52114a9b3c38eded95d
The second option is to introduce a high-dimensional L2-normalized Gaussian random vector {{formula:1dda563b-ae5d-4cae-8d2e-d6fd3aa2570c}} for class {{formula:eef4444b-8e21-4488-bdbc-e28ffb929c1b}} into the batch, which acts as the target {{formula:3b32729f-a9cd-4b97-aa76-0e6949526cae}} during the computation of Eq. REF . The vectors are fixed at the beginning of the training. Here, we exploit the fact that any two random high-dimensional vectors are almost orthogonal to each other with high probability {{cite:944b1994291fa5f6f139245bfba1ccafbcd3c42f}}, so data points of different classes are trained to move "far away" from each other. In the following evaluation, the first strategy will be referred to as link and the second as anchor.
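A minimal numerical illustration of this near-orthogonality (our own sketch, not the paper's code; the dimension and class count are hypothetical choices):

```python
import numpy as np

# Sketch: fixed L2-normalized Gaussian anchors, drawn once before training,
# are nearly pairwise orthogonal in high dimensions.
rng = np.random.default_rng(0)
d, n_classes = 4096, 10                      # hypothetical dimension / class count
anchors = rng.standard_normal((n_classes, d))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)

cos = anchors @ anchors.T                    # pairwise cosine similarities
off_diag = cos[~np.eye(n_classes, dtype=bool)]
print(f"max |cos| between distinct anchors: {np.abs(off_diag).max():.3f}")
# ~0.04 for d = 4096, i.e. the anchors are almost orthogonal
```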
m
14d1ff304a39a0d26897fb27829b66db
In this paper we aim to solve the {{formula:2ed95ad1-e41b-4620-b183-183a57f460da}} SDYM equation via a direct method, the Cauchy matrix approach. The Cauchy matrix approach is a method to construct and study integrable equations by means of the Sylvester-type equations. In this approach integrable equations are presented as closed forms of some recurrence relations involving derivatives (or shifts). It was first systematically used in {{cite:efd13ca2bce538f203f0fe76e6ab86607d8448ce}} to investigate integrable quadrilateral equations and later developed in {{cite:c6ef728727a58b42c059c0657ec3a0047159f6c9}}, {{cite:f50d05eff04df0d9174c46beb37740baa2e02ba4}} to more general cases. We will construct the {{formula:f228bc53-52d3-4115-b79e-2585dadcdde7}} SDYM equation together with its explicit solutions.
i
93f2a1602e0effd8d21b8838382005b8
Prior art: To our knowledge, existing dataless methods fall into two main categories: discriminative dataless methods {{cite:d31481fb584016e51a400492dcd5c497a2e0dbfb}}, {{cite:ccf7c7484c2f48513c2c09094600d6a6f1b55247}}, {{cite:848b98b96563945bb59d697a90195b2d952bb1dd}}, {{cite:07399f05847344da0e3b3df0263bb7e8aa22f967}}, {{cite:b1215da545bcf4bdb1da4d8f3e537d8b965c8b02}} and generative dataless methods {{cite:b5ab9bde251b66699282f565be8116bf1926ddcc}}, {{cite:9a4dfd24bd22ad57eb85fd820a970405e6f0ae62}}, {{cite:e37d1eeb47c3bdf2d2ad0a13f1e39b024511ebe8}}, {{cite:e6725474d6300eeef14fb70a2edad9e6f95bab67}}, {{cite:8cea860f671d14f866ce3d05136747b94ad9af02}}.
i
c1bb39df1ca06c7965cfbce36d483c42
Finally, in this section, I discuss neutron star properties such as the tidal deformability of the binary neutron star. To calculate the tidal deformability, I solve the TOV equations together with the perturbing metric, using the nuclear equation of state as input. The tidal deformability is highly sensitive to the fifth power of the radius ({{formula:96989245-b993-47b1-a4a7-6c0425680981}} ) or the compactness ({{formula:cc00e099-c6aa-49c6-9e90-deebc23e94fc}} ) of the star. The dimensionless version of the tidal deformability is defined by the relation {{cite:b800b01557bdfaafd722f8293d89aa24dc4a15a1}}: {{formula:8288acfe-36a0-4234-8f38-063430f675c4}}
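For reference, the standard dimensionless relation (in units G = c = 1, with k_2 the second tidal Love number obtained from the metric perturbation; the paper's exact notation may differ) is:

```latex
\Lambda = \frac{2}{3}\, k_2 \left(\frac{R}{M}\right)^{5} = \frac{2}{3}\, \frac{k_2}{C^{5}},
\qquad C \equiv \frac{M}{R},
```

which makes the quoted fifth-power sensitivity to the radius and the compactness explicit.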
r
57855977ab73831715ea796fd8269332
Beyond individual- and group-level parity-based graph fairness {{cite:c8e4a770386536eea18c89d663f57489456bed5d}}, {{cite:d77c59419a3f3f4f72dd6f7c7d32324694098be2}}, {{cite:fe20d8bf6168e4f31b6f2cf585e79f3c5257f4cb}}, graph causal reasoning fairness has been investigated, particularly graph counterfactual fairness. In {{cite:24d1e90073cd4bdd4d3225d5c17371dea8b3e03c}}, the connection between counterfactual fairness and stability is first identified and then leveraged to propose a framework that is both fair and stable. Specifically, the connection is that perturbations on the input graph should not affect the mining output too much, while perturbations on the sensitive attribute of the input graph should not change the mining output either. Recently, {{cite:1eb80570f856d23565677eba6c7e5e4edefc35a8}} enhanced equal prediction across counterfactual versions of the same individual by accounting for the causal effect of sensitive attributes on the prediction, the other features, and the graph structure. To this end, three modules (subgraph generation, counterfactual data augmentation and node representation learning) are introduced to reduce the cost of causal reasoning on large graphs, to counterfactualize one's own and neighbors' sensitive attribute values, and to minimize the discrepancy between the original and the corresponding counterfactual representations, respectively.
m
56a9b68f28b3fac3714b1799521a89c9
In this paper, we introduce a new medical report generation model for retinal images. Our work joins three different disciplines: natural language processing, computer vision, and ophthalmology {{cite:e2970464c62293c51ae24db969e196a63e1e1905}}, {{cite:6a0ed218036e25534e01c73bb89b786b5ac4fc5c}}, {{cite:b36c9e05a3b9ea9dd1cee3417a3fcd112568ed88}}.
d
62fe56e834f50445b03ace5b20f62e1f
Future Work.  We want to explore ways to: (i) adapt {{formula:f544c3ae-15e3-4994-abf1-2129056738e5}} -learning to multi-agent strategic settings, where coordination and opponent modelling {{cite:f07a79457830f55df769476174453e042712500d}} are essential and (ii) use a universal successor features approximator {{cite:85dd695f75930d0b8da96eafe2d8050edb832f5a}} for ITD, overcoming its current, linear scaling with the number of distinct demonstrators.
d
32a85737183185d79e893b32cf3ca59e
We propose a cost constructor to replace the shift-and-concat approach for matching cost construction. We make our cost constructor occlusion-aware by modulating pixels from different views in a fine-grained manner. We develop an OACC-Net for LF depth estimation. Our method achieves top accuracy with significant acceleration compared to other state-of-the-art methods on the 4D LF benchmark {{cite:d6e1722ccd9bdeaef485f9c40e19fff2799d6f75}}.
i
6124c4ca49ffc07db10c975afd1b4882
Our first-principles DFT calculations were performed using the Vienna ab initio simulation package (VASP) with the projector-augmented wave method {{cite:7b3ebf2b5d0310820abb2aa00b1fa1909c2f645c}}, {{cite:7fd61ad853d47f87e4f5f11b90f14e4dd31b85ff}}, {{cite:dba9d21d3ee5138d4571a6dfe606a2a264b0ba42}}. The exchange-correlation energy was treated with the generalized-gradient approximation functional of Perdew-Burke-Ernzerhof {{cite:8444a7d182323727c503795d44026c42a1f95391}}. The plane wave basis was employed with a kinetic energy cutoff of 550 eV, and the {{formula:2923f909-0590-4f8f-8617-c9c59d175a83}} -space integration was done with 18×18×12, 18×18×6, and 18×18×1 meshes for the 1T bulk, 2H bulk, and (001) surface, respectively. All atoms were allowed to relax along the calculated forces until all the residual force components were less than 0.005 eV/Å. The phonon spectrum calculation of the 1T bulk was carried out by using the QUANTUM ESPRESSO package {{cite:138ab26fe75f4a90a228eb80d8427988c9cef989}}, with a 6×6×4 mesh of {{formula:fc8b93bd-774e-409d-91f7-96107b8ace81}} points. The ab initio molecular dynamics simulations were performed by using a 3×3×2 supercell. The Zr{{formula:dabc8cc2-9562-47ac-9abe-96424f3909f9}} S(001) surface was simulated using a periodic slab of twelve Zr-S-Zr stacks with {{formula:a3115c16-4f1e-4df8-9fed-c9a5ce320736}} 25 Å vacuum in-between adjacent slabs.
m
6104375a1e41b199457e6b000680ca34
A number of methods have been proposed in recent years to mitigate catastrophic forgetting, falling under three general archetypes {{cite:06c4cc0a4bf49ddbf61ceb7d17fdc2c371cc2edd}}:
m
95b4b81aae4e93cc2f391aa1bd94cb6e
In this paper we study the {{formula:db59265b-4a97-4f7b-ac1f-ac62469ae858}} -staggered six-vertex model with anisotropy {{formula:c17ec0e9-6f37-4f52-8d0c-e9379e9bd58b}} and staggering parameter {{formula:cb3ec084-75c9-490a-b837-18bde0d262e1}} . At the 'self-dual' point, {{formula:02be4c02-7c2d-4a0f-98fe-f8c8a97d11cc}} , this model is equivalent to the critical anti-ferromagnetic Potts model. At low energies it can be described effectively in terms of the {{formula:63fbd8fe-c60f-4b0c-abc9-6da519e63634}} sigma model, a CFT on the two-dimensional Euclidean black hole background {{cite:8b7392a3edcde997bb6347bf850a3e9898a1b897}}, {{cite:7951151c8362d6d2c226f29ace23071a7e40b00b}}, {{cite:4ee30fa61ae8cf5af47c17df30c3b2c6338d061a}}, at level {{formula:e25235e1-97b6-48c3-aa2d-8fe6dd7d678b}} {{cite:9506b9a84c94c999a0676b91c1187cbc69dffad0}}, {{cite:041dc4ee0aff02cca8f8a442a5b4c6a8b03ea024}}, {{cite:85c66543d56bb1759e316f84995da70c6f82bd4d}}, {{cite:c882a20c404ca2523cee72034fff72c25b1d697c}}, {{cite:e338a55457e57118b7a08c9072da421f913fb4db}}: for periodic boundary conditions the observed finite size spectrum of the lattice model and the density of states in the continua emerging in the thermodynamic limit have been found to agree with what is known for this CFT. Moreover, the quantum number describing the states in the continuum has been related to a conserved quasi-momentum operator in the lattice model. The construction of this operator relies on the existence of the staggering of the vertex model in the vertical direction: the two-row transfer matrix of the periodic model generating conserved quantities such as the Hamiltonian factorizes into a product of two commuting single-row transfer matrices taken at spectral parameters shifted by the staggering parameter. The quasi-momentum operator, on the other hand, is obtained in an expansion of the ratio of these single-row operators.
i
73049ebb65fea1be8b96f19f5786174e
Thankfully, an alternative route was introduced in 2006 by Balan, Casazza and Edidin {{cite:207f69a0817044d495b21a4a0af0e2a2db882b67}}: Seek injectivity, not by restricting to a smaller signal class, but rather by designing a larger ensemble of measurement vectors. Unbeknownst to Balan et al. at the time, this idea had already been in the air in the quantum community (for quantum state tomography {{cite:06392c35056db182a1502e0408c8a292b6bffc1a}}, {{cite:23869db9332daa4b01a99ad19f292406a8761f92}}), but posing the idea to the signal processing community led to a flurry of research in search of practical phase retrieval guarantees {{cite:19576b1bc06e25028ddfc3abadcc44fbb8ef0417}}, {{cite:60ee9c16e38d2a9bef570962ad39735c52298ae9}}, {{cite:b5237cbf8818d35a4640488cee7a6b6a6f35b500}}, {{cite:924ef4e3b90dba404f2d0e837fcb584fdc53a1ab}}, {{cite:6ec8bfe4bf1910f6fd223043f36bda5aecf55117}}, {{cite:a8ff7dcfa442d94bf29803c359bf981d1b8a8541}}, {{cite:7ee62da3cb76779ed6faee09a51db1e2e3dcfacd}}, {{cite:82fd9e0192c92aabea0c34bd457e62df388ae242}}, {{cite:e9271c0cac94343a557ffb2368133841fecd4b6c}}, {{cite:1551d6475417d1d59bf76070125e4daa277c6901}}, {{cite:872a539e3da46fa788178322d2d9a4d1d1e4f26d}}, {{cite:7fb0d7fa5bcfac8a4c54f779fcd2bcff7ffc9826}}. One popular method called PhaseLift recasts phase retrieval as a semidefinite program {{cite:6ec8bfe4bf1910f6fd223043f36bda5aecf55117}}, {{cite:a8ff7dcfa442d94bf29803c359bf981d1b8a8541}}, {{cite:7ee62da3cb76779ed6faee09a51db1e2e3dcfacd}}, {{cite:82fd9e0192c92aabea0c34bd457e62df388ae242}}, another called PhaseCut reformulates it in terms of MaxCut {{cite:872a539e3da46fa788178322d2d9a4d1d1e4f26d}}, {{cite:7fb0d7fa5bcfac8a4c54f779fcd2bcff7ffc9826}}, and yet another uses the polarization identity along with angular synchronization to quickly solve certain instances {{cite:19576b1bc06e25028ddfc3abadcc44fbb8ef0417}}, {{cite:924ef4e3b90dba404f2d0e837fcb584fdc53a1ab}}. In this same line of research, a new methodology for coherent diffractive imaging emerged {{cite:6ec8bfe4bf1910f6fd223043f36bda5aecf55117}}: Instead of taking a single exposure and attempting phase retrieval with possibly incomplete information, take multiple exposures of the same object with different masks or diffraction gratings. Not only can such a process produce complete information, there are also provably efficient (and apparently stable) phase retrieval algorithms for this setting {{cite:924ef4e3b90dba404f2d0e837fcb584fdc53a1ab}}, {{cite:7ee62da3cb76779ed6faee09a51db1e2e3dcfacd}}. Considering phase retrieval has a wide range of applications, it would be interesting to find other areas to apply this philosophy of taking more measurements to obtain injectivity.
i
f8b3c662f6253b4062c93271d166b447
The structure of the paper is the following. In Sec. we collect the basic formulas of the seesaw mechanism {{cite:9b570f0478469be3dcf4fbc02afc60185fc46916}}, {{cite:4137a7f8e366a20417f2be943c4c561ad0d7320d}} to fix the notation and for further reference; the reader familiar with the subject can skip to the next section. In Sec. we construct the coherent oscillating states for Dirac–Majorana neutrinos, which can straightforwardly be particularized to the type I/III and II seesaw schemes. We show how the coherence is built into the oscillating states by performing only coherence-preserving transformations. The technique is inspired by Bogoliubov's treatment of superconductivity {{cite:dbd982879f77f16aac086b7e538f3a93ed1819e6}} and the Nambu–Jona-Lasinio scheme {{cite:80c4256816b430e7293c2436f14d194b613c906e}} for the dynamical mass generation of nucleons, adapted here to the case of fields with Majorana mixing terms. The same results are obtained in the Appendix by the direct procedure of Hamiltonian diagonalization. Furthermore, we also discuss the normalization and the orthogonality of the oscillating neutrino states. In Sec. we view the results in a wider context, including the potential effects on the interpretation of the KATRIN and PTOLEMY experiments, in which the neutrinos are non-relativistic.
i
c62874609f019012a4af7ec0c7da9770
As a first benchmark, we assess our parallel DMRG implementation in a ground-state energy calculation of benzene using a cc-pVDZ basis {{cite:f11daa66bb2cf2a749bca6ba4e5204e79bdb5fb6}} with an orbital space comprising 108 orbitals and 30 electrons {{cite:9e168e6dffdd52e48f88b8d41618cda1e94384fe}}. Although benzene is a closed-shell system and thus does not showcase the strengths of the DMRG algorithm, it nonetheless serves as an example in the literature where a DMRG calculation with a large bond dimension and a relatively large number of orbitals has recently been reported.
r
809aa288d07093b8f5427e0bebe3fbef
We propose a different approach based on selective prediction {{cite:c55838b94db1318197a40de6f077aeeff7560619}}, {{cite:c0c26676451c40bc908d07f5135d561e390893a5}}, where a model quantifies its prediction uncertainty and abstains from predicting uncertain instances. First, we develop a method for selective prediction with guaranteed coverage. This method identifies the best abstaining threshold and coverage bound for a given pretrained classifier {{formula:ccec29df-0c3c-4c50-b889-7d6a6fc22292}} , such that the resulting empirical coverage will not violate the bound with a high probability (when abstention is determined using the threshold). The guaranteed coverage method is of independent interest, and it is analogous to selective prediction with guaranteed risk {{cite:c0c26676451c40bc908d07f5135d561e390893a5}}. Because the empirical coverage of such a classifier is highly unlikely to violate the bound if the underlying distribution remains the same, a systematic violation indicates a shift in distribution. To be more specific, given a detection-training sample {{formula:f74af96c-4f3b-44f9-9b5a-834da3d01622}} , our coverage-based detection algorithm computes {{formula:d0b63045-f12b-431c-8eab-ca26a6c05a41}} tight generalization coverage bounds, which are then used to detect a distribution shift in a window {{formula:52332364-fc4e-4770-bdc4-281b1770e1b9}} of test data. Due to its aggressive reduction of {{formula:acbdc334-f30e-4acf-ba10-b514dfea47ca}} to {{formula:ce08473c-de1a-47dc-bb3e-c1633b12d9d8}} numbers, the proposed detection algorithm is extremely efficient in its computation requirements, unlike the baseline algorithms mentioned above, which follow the framework depicted in Figure REF in Appendix REF . For example, consider the JFT-3B dataset {{cite:42ac2b8650891c8692b94ab3fee2be097d713791}}. Previous methods that require the processing of this set for each incoming window are infeasible, while our method allows one to summarize it with only 32 scalars.
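A hedged sketch of the coverage-based detection idea (ours, not the paper's implementation; a simple Hoeffding bound stands in for the paper's tighter coverage bounds, and all names and distributions below are placeholders):

```python
import numpy as np

def fit_threshold_and_bound(train_conf, target_coverage=0.9, delta=1e-3):
    """Abstaining threshold plus a high-probability coverage lower bound."""
    theta = np.quantile(train_conf, 1.0 - target_coverage)
    emp_cov = np.mean(train_conf >= theta)
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * len(train_conf)))  # Hoeffding
    return theta, emp_cov - slack

def shift_detected(test_conf, theta, cov_bound, delta=1e-3):
    """A test window whose coverage violates the bound signals a shift."""
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * len(test_conf)))
    return np.mean(test_conf >= theta) + slack < cov_bound

rng = np.random.default_rng(0)
train_conf = rng.beta(8, 2, size=50_000)        # in-distribution confidences
theta, bound = fit_threshold_and_bound(train_conf)
print(shift_detected(rng.beta(8, 2, size=1_000), theta, bound))  # False
print(shift_detected(rng.beta(4, 4, size=1_000), theta, bound))  # True (shifted)
```

Note how the detection-training sample is summarized by a handful of scalars (here a threshold and a bound), which is what makes the per-window test so cheap.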
i
88fd91a2a4e07d04f0ada8acfe7dfa87
Now we present the Linear Combination of Unitaries (LCU) Lemma {{cite:97db056ae6946426890c78000260d07f66896ad9}}, {{cite:c83b61d2ce1d7dd3bf13f91158b80ea16ac135e7}}, {{cite:6b020c29d811123fafc2caaeb85d25af84e715de}}, which we will use for combining the Fourier terms in our quantum circuit. Since we intend to use LCU for implementing non-unitary operations, we describe a version without the final amplitude amplification step. We provide a short proof for completeness.
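For completeness, the statement in standard notation (the paper's notation may differ): given a target operator written as a positive combination of unitaries, define the usual prepare and select oracles,

```latex
A = \sum_{i} \alpha_i U_i,\quad \alpha_i > 0; \qquad
\mathrm{PREP}\,|0\rangle = \frac{1}{\sqrt{\|\alpha\|_1}} \sum_i \sqrt{\alpha_i}\,|i\rangle, \qquad
\mathrm{SELECT} = \sum_i |i\rangle\langle i| \otimes U_i,
\\[4pt]
\big(\langle 0|\,\mathrm{PREP}^{\dagger} \otimes I\big)\, \mathrm{SELECT}\, \big(\mathrm{PREP}\,|0\rangle \otimes I\big) = \frac{A}{\|\alpha\|_1},
```

so postselecting the ancilla register on |0⟩ applies A/‖α‖₁, which is exactly the non-unitary action used here without the amplitude amplification step.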
r
f4222cd9eb16694b791a972f7b863bad
None of the published TESS exoplanet transit curves of M and K dwarf host stars are known to contain a starspot anomaly, though three appear to show suspect anomalies (LP 791-18: {{cite:dcfde49904b6b771182c8a3958171c244407a1bb}}; L 98-59: {{cite:e95fbbed264c468263883fb4538b0c4dbe93e644}}; HD 21749: {{cite:26bbc2c5c660148d65fc2f230a691f04b7021fe3}}) and are the subject of further analysis with a dedicated transit-starspot model (e.g. PRISM {{cite:f1672e7c4ce9ab6bbeb9cb1c0fe6049eae99dbc2}}, {{cite:19a673d8dca323d0159a946aee56ccb0422da793}}). At present (see the Exoplanet Archive, accessed 2021/03/01) there are 17 confirmed M dwarf and 10 K dwarf planetary systems detected by TESS. Using the weighted mean of starspot detections for the uniform starspot distribution, three M dwarf targets and one K dwarf are expected to show starspot anomalies (we assume that multi-planet systems are coplanar). On the other hand, for a polar-biased distribution, two of the M dwarf systems are expected to show starspot anomalies. Using the mid-latitude distribution results, two M dwarf and one K dwarf planetary systems are expected to contain starspot anomalies. However, with the small sample currently available it is not possible to draw firm conclusions.
d
0c80dd494b003f62761debbe79cbea5d
The three components in (REF ) account for bounding box regression, objectness confidence, and class probability. We used a modified loss definition compared to the original YOLO model {{cite:2a70a6f11c35d1221ed0900f20fa9f928b2e429c}} for better convergence.
m
e62d1db3836922b3833532d799011dcd
The predictions output by paegan are similar to those of pf (albeit more uncertain). Consider Figure REF , though it is best to see the videos linked in the accompanying Google Doc. This favours the hypothesis from {{cite:15901f9457f78eae77f9beab31bbea68cbb70644}} that a deterministic predictive network (pae) approximates the expectation over the target value. Figure REF shows how the reconstruction error gradually increases when observations are not available. As should be expected, pf is more successful at tracking the environment state. This is because pae did not infer the forward model perfectly. The difference is especially apparent in more complex environments, as models of those are more difficult to extract from data (e.g. the inherently chaotic simulation of collisions). Notice that neither method does worse than the uninformed baseline (dotted green line). At this error level, the estimate reaches maximum uncertainty. {{figure:d497ac97-16ca-4add-b1ae-d06ab40b778c}}
r
a8c6d098bcb6ca19f6da4308ff842616
Binary relevance {{cite:249e46a7828316446a3bf48f9c5a5761bcdf08ca}} is a common approach to multi-label classification in which the problem is decomposed into a set of {{formula:6b633148-f643-4ab4-90e0-44755ab5bf9e}} binary classification tasks, where {{formula:aa2743e5-b294-441c-b63a-fdb124ebc00b}} is the number of classes. This approach typically fails to exploit inter-class correlations, which can be used to help improve generalization {{cite:249e46a7828316446a3bf48f9c5a5761bcdf08ca}}. To address this issue the classifier chain method was developed, which captures class correlations through a chain of binary classifiers while maintaining the computational complexity of conventional binary relevance {{cite:d2fd3c676e43012fecfe4a1c9ced350fdf139f07}}.
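As an illustration of the two decompositions (a sketch using scikit-learn's stock implementations, not the cited papers' code; the data are synthetic):

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=500, n_classes=5, random_state=0)

# Binary relevance: one independent binary classifier per class.
br = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

# Classifier chain: each classifier also receives the previous labels in
# the chain as extra features, which propagates inter-class correlations.
cc = ClassifierChain(LogisticRegression(max_iter=1000), order="random",
                     random_state=0).fit(X, Y)

print(br.score(X, Y), cc.score(X, Y))  # subset (exact-match) accuracy
```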
m
45ba11baf06a67316571cba79f6e428a
Given that we need some theory of initial conditions to explain why our universe was not chosen at random, the question becomes whether inflation provides any help to this unknown theory. There are two ways in which it does. First, inflation allows the initial patch of spacetime with a Planck-scale Hubble parameter to be physically small, while conventional cosmology does not. If we extrapolate a matter- and radiation-dominated universe from today backwards in time, a comoving patch of size {{formula:fc67c559-57ec-4180-8810-15d03ed53e9a}} today corresponds to a physical size {{formula:6828dd80-e96f-473a-b545-391da1737aa8}}  cm when {{formula:48127a81-ce90-47e4-9897-7c5e35ee2ca9}} . In contrast, with inflation, the same patch needs to be no larger than {{formula:9af43475-8ac3-4a5b-9c33-562257baa264}} when {{formula:32683306-8a5b-4405-a932-991be5430919}} , as emphasized by Kofman, Linde, and Mukhanov {{cite:5afb8ca71f7a59af51720cd74b63e3963dd8bbbc}}, {{cite:e13719dc100af5c368fea2075bcd18a954c7d292}}. If our purported theory of initial conditions, whether quantum cosmology or baby-universe nucleation or some other scheme, has an easier time making small patches of space than large ones, inflation would be an enormous help.
d
dd1c28de1f5dc4b761962abd31598de4
Some of the first unsupervised methods of the deep learning era were fashioned after pretext tasks from natural language processing. A network would be trained to perform some auxiliary task before transfer to downstream tasks. These auxiliary pretext tasks included solving jigsaw puzzles {{cite:24e1e63401aaf20b2a058ab4ddfa7ff125f9fa92}}, colorization from grayscale {{cite:09833b69cb6ebf7561cdf4231c3dcbfee05e6893}}, {{cite:cf518d1d8fbdae6dcc5fae22605e828caf4fe452}}, inpainting {{cite:b94c0bcf3fce7482bfce0c3789ad6e656b1ed393}}, relative patch prediction {{cite:37a322b6f49f7a4973547538937f5e6d840cb113}}, predicting rotation angle {{cite:6e3ef8c66a5a22c5679a9463c08eb59e2636c314}}, or a combination of tasks {{cite:2c83bf18f9426943ab6485b9111d7858b4e6c956}}. However, the introduction of Noise Contrastive Estimation (NCE) {{cite:8d37bfbcb3420e6ba12744f5198ccc15e7b85961}} triggered a paradigm shift within unsupervised learning {{cite:c5138b4fcf9e38414024104746e68bec41ec252c}}, and subsequent methods which utilized contrastive learning {{cite:84c1150080396893f126cb01cc65aa97be48e559}}, {{cite:3b75b980898d6e67999a632764ec76dfb6eb7b7c}}, {{cite:86f87f4718c8374d0a6a62fec24a1ca244e9f3e0}} would surpass all of these.
m
3d2317d880723a37863c14d7c4cadf34
For a fair comparison between the state-of-the-art {{cite:96ab7bce8bdc1d674dec330fea8a4d8fee15377b}} and the proposed PARAFAC-IRS and Tucker-IRS models, we optimize the precoder ({{formula:ea84e293-d071-4bee-a3e4-45a0258b7da6}} ), combiner ({{formula:dbd256ee-0e21-4cf3-818e-a9d296c72d4b}} ), and the IRS phase-shifts ({{formula:3e161da3-0656-46aa-a762-5a2bb7a8dba7}} ) using the upper-bound solution of {{cite:96ab7bce8bdc1d674dec330fea8a4d8fee15377b}}. In this case, they are given as {{formula:331df425-ab44-4619-bdb2-d78d67d1de66}}
r
40fda73d05a56c070df1695c51d6d896
Sparse Attention.   Perhaps the most intuitive solution to alleviating the quadratic cost, sparse attention calculates only a portion of the full {{formula:5e74a584-0f35-4b9f-885f-f8fe4762808c}} attention matrix. Early-stage methods such as Local Attention {{cite:97fc3ec1aee8b0b1b16e1a0e1e822b71230a3885}} and Multi-passage BERT {{cite:66f9a27e2e47c7470c7dcd769989df32e04ea169}} use sliding windows or chunked blocks to speed up computation. Longformer {{cite:8471cd10fbbda31f832fbf92bb4061150ff6249a}} and BigBird {{cite:3a53bdaa1318f63c30dfee4feeebaaca3c54cb0e}} further combine global attention, sliding-window attention, dilated sliding-window attention, and random attention to form strong sparse attention mechanisms, and BigBird showed that their method is a universal approximator of sequence functions. To make the block truncation a learnable process, Reformer {{cite:dd590c1a761fb84243b14244270a17d78f6c8494}} groups and sorts input segments via locality-sensitive hashing such that similar tokens are placed in the same chunk. Similarly, Sinkhorn Transformer {{cite:042eb0181b03328afe28d7f6d3a372ea7dbb27b0}} trains a meta sorting network to reorganize input sequences before applying windowed attention.
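A minimal sketch of the sliding-window pattern underlying these methods (masked-dense for clarity, our own illustration; efficient implementations use block-sparse kernels to actually realize the linear cost):

```python
import torch

def local_attention(q, k, v, window=4):
    """Each query attends only to keys within +/- `window` positions."""
    n = q.shape[-2]
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    idx = torch.arange(n)
    blocked = (idx[None, :] - idx[:, None]).abs() > window  # True = masked out
    scores = scores.masked_fill(blocked, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 16, 32)   # (batch, seq_len, dim)
print(local_attention(q, k, v).shape)  # torch.Size([2, 16, 32])
```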
m
ab95179af777bd33ee42708e478fbe56
In this paper, I adapt the analysis from {{cite:25d7b1e7befd156e6485cf1a4dfe6a8be3420597}} to demonstrate convergence of Implicit Update (IU) dynamics and Predictive Methods (PM) to a neighborhood around the optimum, with the size of this neighborhood shrinking with the width of the neural network. To do this, I separate the error term from the known analysis in a manner similar to that of {{cite:d880f8f7f855c4574e6d63dc47dc0cb85dd36762}} and then compute the convergence rate and the size of the neighborhood by combining these two terms.
i
1261baa4097c6a577040009ac644d36d
In this section, we present numerical results to demonstrate the superior performance of our proposed NN schemes. The data set is obtained by utilizing the conventional optimization scheme in Section III with {{formula:dca59dd2-5d7f-4b60-84d1-5240cbe3bf2b}} different random channel realizations. We split the data set into two subsets: {{formula:0bf37309-3026-4840-87ed-993c15c2c9b1}} for training and {{formula:616ea12b-dafd-42ab-9657-465482201067}} for validation. In the training process, all the NN parameters are updated by utilizing a mini-batch gradient descent algorithm based on the Adam optimizer {{cite:c92a73486f7d5bc61c768a23c6be570dc9689ed0}}, where the batch size is chosen to be ten. All the parameters in the NN are initialized by the Xavier initializer {{cite:77edfe188640beff5cb93f076dc7892dbcad3e0b}}. Furthermore, similar to {{cite:c8d5cab8bbf8deb5ca04aa546d10e819cd42bb2b}}, it is assumed that the NN has two hidden layers with one hundred neurons in each layer. The learning rate {{formula:dcf39af4-964e-4b35-a1aa-61002bf13bc1}} is set to {{formula:f0dafecd-8247-4449-975b-c47368f4445b}} and the regularization parameter {{formula:fc701ffa-9a9a-4ed1-9e84-5f03a968a0e4}} is assumed to be {{formula:9046a767-f23a-4e8d-89a6-aabb68f5e7ec}} {{cite:fd7da206cf7fcf0929fe7a67d613090b794803a2}}, {{cite:9713366f6b8b7575804e7155a2f090b49f7a610a}}, {{cite:81bb4e72745768a9f2d93a154b5caaa63f03182c}}. The test data set is obtained by using 3000 channel realizations. The transmit power of the PU-Tx is assumed to be 60 mW, whereas all the noise variances are set to 0.001. The channels {{formula:00528128-ddc5-4ac0-a278-eb7e87def7bf}} and {{formula:f407c7bd-8f9d-4d0e-bd40-e33288445b2a}} are all generated by {{formula:e883a423-00b0-4ad4-8e91-8e722fca784c}} and {{formula:b17467ec-2681-42a3-a7b6-318c87b3ae7a}} , where {{formula:3bd936a1-12e8-4c50-934f-a5e4733a2b1a}} , {{formula:a749b665-4304-47a9-a40c-df376759e8b9}} , {{formula:d0025019-c0a6-4591-91fc-a1ae5c6ba0fa}} is the distance between the SU-Tx and the {{formula:980ae0a9-cf2e-46a6-bae0-57d260294fd9}} -th user and {{formula:402f2d0d-582e-4c76-ae8a-7b345c24a4b9}} denotes the distance between the PU-Tx and the {{formula:d48d8416-4e94-40c2-8877-5014534fbfaf}} -th user. The parameter {{formula:a3bc267a-918e-4abd-bf5c-65279f5824ca}} denotes the path loss exponent. The distances between the transmitters and the corresponding receivers are assumed to be {{formula:ded7568a-9f61-4687-a4c3-59bd6b6c109b}} m, {{formula:045ffdac-3a00-4009-9ea2-62baa8e7a974}} m, {{formula:912cc29b-db6e-4bdb-987b-e97a33850824}} m, {{formula:e9aa819a-00d0-46ce-990f-ec18080c984c}} m and {{formula:6a87a7b3-eca7-4520-9c43-3aef6148edfd}} m, respectively. The simulated datasets for training and testing were generated using MATLAB scripts; the performance of data generation is irrelevant to the results. For training and testing the model, we used a system with an Intel Core i7-9700K processor with eight cores clocked at 3.9 GHz, 12 MB of cache memory and 32 GB of random access memory. The training was performed purely on CPUs (as opposed to GPUs).
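A sketch of the described network in PyTorch (the layer widths, optimizer, Xavier initialization, and batch size follow the text; the activation, input/output dimensions, learning rate, and weight-decay values shown are placeholders, since the paper gives them only via formula references):

```python
import torch
from torch import nn

in_dim, out_dim = 8, 4                      # placeholder dimensions
model = nn.Sequential(
    nn.Linear(in_dim, 100), nn.ReLU(),      # two hidden layers,
    nn.Linear(100, 100), nn.ReLU(),         # one hundred neurons each
    nn.Linear(100, out_dim),
)
for m in model:                             # Xavier initialization
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)
# Adam with mini-batches of size ten, as stated in the text;
# lr and weight_decay below are assumed values, not the paper's.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```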
r
5cbd92e610fb1f9af6a78d656756f722
In {{cite:a351422c9907b22ae6c0f2061df836a405eda1e4}}, the authors studied the degree-corrected HSBM with general connection probability parameters by using a tensor power iteration algorithm and Tucker decomposition. Their algorithm achieves weak consistency for uniform hypergraphs when the average degree is {{formula:e9c2022c-6c4c-4963-bcc9-9ce7df075ed1}} , which is the regime complementary to the one studied here. They discussed a way to generalize the algorithm to non-uniform hypergraphs, but the theoretical analysis remains open. The recent paper {{cite:608a27b679905d2d58e324885a03c8a50bf911a7}} analyzed non-uniform hypergraph community detection by using hypergraph embedding and optimization algorithms and obtained weak consistency when the expected degrees are of {{formula:6e055a78-f57a-498f-b953-aed8467f02e0}} , again a regime complementary to ours. Results on the spectral norm concentration of sparse random tensors were obtained in {{cite:f31469296850ef0ff160ed4160cc22f24aa82c9d}}, {{cite:407c73aa9df1e68a4c01efc78fef1a62f8d0fe06}}, {{cite:0c37f06ebfbf42e61e0772550ebd6dc957b494ec}}, {{cite:bb988b79cdc279a069c60a2a06b6108dfa3fc265}}, {{cite:bed4ad3b180be6832e0aa7907b38304ef1ca40cb}}, but no provable tensor algorithm in the bounded expected degree regime is known. Testing for community structure in non-uniform hypergraphs was studied in {{cite:e16b2a93516eaeb688e1b123a1aecec00ade9611}}, {{cite:96437dc5c309e7a7ad3558b83b8ac38f42ab4a77}}, which is a problem different from community detection.
r
973d3889e2b8bf4e3a61d988a7f151ab
As exemplified by the Bonnet–Myers Theorem, the Differentiable Sphere Theorem ({{cite:43828a65db6c508deeda99a9d2f8ffe589116acb}}) and the Poincaré Conjecture ({{cite:e33d77480106735da2cc9c698799a03316ae9a11}}, {{cite:a12b97d505aee7ce6e220fdffef9b69fd7c2b700}}, {{cite:3e9b2f0b973cb8b1538029d9e98e412a120a201e}}), assuming the existence of a special type of geometry is a consistent way to understand a manifold's topology. The converse question, however,
i
db77ade07828e5e388ae72cbc6ee701b
where {{formula:5bb36ed7-9fa1-48cb-abf8-4806209b748f}} is referred to as the augmented Lagrangian for problem (REF ) with penalty parameter {{formula:a8648ef3-5b1b-4d06-863e-2547b8768aea}} and dual variable {{formula:23249d76-b3c0-4eef-8bdd-64778ae9664f}} (see e.g. {{cite:cbd23fcf00f48cc435e814cab79cd6a97355f6e8}}, {{cite:21a06586dec846d0896da4eea50177e92d012c82}}). A classical method to iteratively solve (REF ) is to minimize {{formula:cce55978-2bae-4572-89f9-93d7d6f1b05b}} w.r.t. {{formula:9a6da779-a409-440e-9961-16e8de8d50aa}} and {{formula:70f9749f-c722-41fd-b8ae-f837f13c556c}} , followed by a maximization over {{formula:b24905f7-1df6-4f26-924b-5527b886cc70}} . Convergence guarantees only hold if the minimization is computed jointly in {{formula:ab499b4a-36b1-475b-9753-b1cd6efd7c35}} and {{formula:f5a522c3-bb94-4f9e-af55-f7dbd835fd7a}} , which in practice is often difficult. The main extension in ADMM is to split the minimization into two steps: {{formula:3d4d947c-28a2-41a0-8a8b-1677a05d1bfd}}
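In the standard notation (e.g. for the generic splitting minimize f(x) + g(z) subject to Ax + Bz = c, with x, z, y written here as generic stand-ins for the paper's variables), the resulting ADMM iteration reads:

```latex
x^{k+1} = \operatorname*{arg\,min}_{x}\; L_{\rho}\big(x, z^{k}, y^{k}\big), \qquad
z^{k+1} = \operatorname*{arg\,min}_{z}\; L_{\rho}\big(x^{k+1}, z, y^{k}\big), \qquad
y^{k+1} = y^{k} + \rho\,\big(A x^{k+1} + B z^{k+1} - c\big),
```

so each primal block is minimized separately while the other is held fixed, which is exactly the split alluded to above.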
m
dc91bf7498de25f4c161ecfc5be3229e
As the affective GMM is fitted to the data, a small number of affective Gaussian components might overly fit to a few emotion annotations, causing the so-called singularity problem {{cite:1882436b1ae35114c2993338ebe016f7a016c421}}. When this occurs, the corresponding covariance matrices become non-positive definite (non-PD). For instance, when a component affective Gaussian is supported by only one or two annotations, its covariance shape collapses to a point or a straight line in the VA space. To tackle this issue, we can remove a component Gaussian whenever it produces a non-PD covariance matrix during the EM iterations {{cite:4503c60f1e14efc1141016357b9962b4a34672ea}}.
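A hedged sketch of this pruning step (ours, not the cited implementation): detect a non-PD covariance via a Cholesky attempt and drop the offending component, renormalizing the mixture weights, between EM iterations.

```python
import numpy as np

def prune_non_pd(weights, means, covs):
    """Remove mixture components whose covariance is not positive definite."""
    keep = []
    for i, cov in enumerate(covs):
        try:
            np.linalg.cholesky(cov)          # succeeds iff cov is PD
            keep.append(i)
        except np.linalg.LinAlgError:
            pass                             # singular component: remove it
    w = np.asarray([weights[i] for i in keep])
    return w / w.sum(), [means[i] for i in keep], [covs[i] for i in keep]

# Example: the second component has collapsed onto a line in the VA space.
weights = [0.5, 0.5]
means = [np.zeros(2), np.ones(2)]
covs = [np.eye(2), np.array([[1.0, 1.0], [1.0, 1.0]])]   # rank-1, non-PD
print(prune_non_pd(weights, means, covs)[0])             # -> [1.]
```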
d
2bf20f149399305df699f7d785c7238e
and therefore, {{formula:a37d1485-bc86-4b3a-8118-81869667009f}} . Hence, by the portmanteau theorem {{cite:8d04ec22ae72030f49b0f51ae836e33abc913ff2}}, for all {{formula:2eb529d3-7e24-4f4a-bb86-64ca6c802682}} , {{formula:103c7995-8c73-454b-af38-6778cb776eab}}
r
1d59c24351f6637039f07f34349ecb43
The results obtained via the QASM simulator were found to be highly accurate, with the error inherently associated with quantum projection measurements giving rise to very slight deviations from the exact results. The results obtained by implementing the simulation on the IBM Q quantum computers were also found to be in reasonable qualitative agreement with the exact results, although clearly the noise and errors inherent to NISQ-era quantum machines negatively affected the accuracy {{cite:d2de6b0c1644c352962c1957c9d7fb7aa57102ce}}. For model 3, the results obtained on the IBM quantum computer capture the general dynamical trends [see Fig. REF (a)]. The fact that the quantum result vanishes asymptotically rather than relaxing to the actual non-zero equilibrium value is likely due to decoherence caused by the gate errors. It should be noted that the quantity plotted is the donor-acceptor population difference. Thus, the gate errors and system noise cause the computational basis states to be more uniformly populated, so that the difference between them is diminished. Indeed, the agreement with the exact results is better for model 4, which is symmetrical and therefore corresponds to zero population difference at equilibrium [see Fig. REF (b)].
d
1973b443ade499d0cf4e47b74a49edd9
Image classification has been one of the main topics in neural networks since the success of AlexNet {{cite:972da84b527e22d0e94f4fd0052f12195d48f67f}}. Reliable object classification lays the foundation for advanced neural systems and is of great significance in the research of perceiving media data, such as face recognition {{cite:4907acaeccf9442c070743a9f5d092620957e535}}, {{cite:31a6b9adbbaa53f5a5b68aa91e0510614f776ff7}}, {{cite:1a8678200da7d71e4adb88eee3aad3cff2e3612e}}, medical image analysis {{cite:ec879c3df2961afd89cd5ecbdd2fdf860a2f6f6e}}, {{cite:f0c2978d423794352bc75a9676e5089ca9dccc0a}}, {{cite:5a024ccff7fbf9cf6280b30a5f071c4e24276f9a}} and autonomous vehicles {{cite:f1da606162063d6617dea879a0d3f3639efcf704}}, {{cite:7b20ee552959ac849ef1ed433316fa2468275854}}, etc. Image classification using neural networks performs well in over 90.0% of cases, but the remaining cases, with significant variations such as poor illumination, image blurring and occlusions, remain a challenge {{cite:24e62cdf885e96f14b48d83e87cd97cd5ab1bbea}}. Therefore, current methods are hardly applicable to cases where errors are strictly intolerable; for instance, autonomous vehicles often miss the traffic light when driving fast at midnight.
i
997f46e3aa1cd3b53a0ac23be2543be5
Non-uniform Blur: Our deblurring framework is constructed based on the physical blur model (REF ), which assumes a uniform blur kernel across the spatial domain. For non-uniform blurring, we can follow previous studies {{cite:1fd41a22b6d915fade3d6fb914badf223c36d25a}}, {{cite:63ccd164df2f56485d2d98693a9f349ea4c159e5}} and treat the blur kernel as locally uniform within a relatively small image patch. Since in the training phase we train the inference networks with patches cropped from images in the training set, the local uniformity of the blur kernel approximately holds, and therefore the proposed framework remains applicable. In the testing phase, the kernel-free inference can be done without any specific modification, while the kernel-dependent inference can be implemented by first performing deblurring on overlapping image patches and then aggregating them to obtain the final result, just like many traditional image restoration methods {{cite:e16098f7e31f7d459778d4b3c1e6ce033136d935}}, {{cite:826434f15d9c0f6f8ae271494b53510456ab11a7}}. However, the above strategy is only a rough approximation, and application to non-uniform deblurring remains a major limitation of the proposed framework, which should be further investigated in the future.
d
17c97fed59eced05e211a5fcbab9bd0d
Evaluation with beam length = 4. We evaluated model #2 and the SDPA model with the highest BLEU scores using beam search at beam length 4 for comparison. The results are shown in Table REF . The PLGA model achieves a better BLEU score than the RNN model {{cite:b4dcc3b2722a87784aaa34309bf31d6bcd239577}} with attention evaluated in {{cite:aafdcb0debe228284962705cb53fd1c2da2a710d}} for the PT-EN and TR-EN tasks with standard (randomly initialized) embeddings. When evaluated at beam length 4, the SDPA model fared better in BLEU score than the PLGA model for the PT-EN and TR-EN tasks. {{table:1e4af112-5540-415f-8e93-e0d98917cd27}}{{table:a6ec4cd6-545c-443e-af70-a11a1260df9d}}
r
02381232d9976f2af16b49e51e8eea51
The emergence of sparse activation in Transformer models discussed in Section  (see also Appendix ) may offer an explanation for why DNNs work well and do not overfit. The notion of sparsity pertains to the law of parsimony, a.k.a. Occam's razor, whereby among all possible explanations of observed data, the simplest ones are preferred. As a fundamental scientific principle, the law of parsimony is broadly used in various scientific and engineering subjects {{cite:fc87476f34ff2eeae2920be3c0b4a6219e236880}}, {{cite:4b9d1577ce8e4daa4723212291d0b95d16a86fb5}}, including classical machine learning {{cite:930bde09e2307fd29b58d6678fb32466418e5f3a}}. However, it is often not a critical design component in DNNs and goes against the recent deep learning practice of training increasingly flexible, redundant, and powerful models. Hence, our discovery may be rather surprising. Even though they are not explicitly designed to be so, Transformers use only a small fraction of their parameters to parse a given input, and hence may be regarded as parsimonious models as well. As discussed in Section REF , sparsity may arise from the dynamics of neural network training rather than from any explicit sparsity regularizer. More importantly, the evidence of improved robustness and calibration in Section REF indicates that sparsity is a pertinent prior for good generalization.
d
a138b5d924e8609bc4603521753750a3
Since these systems are Hamiltonian, one needs to use numerical schemes that are symplectic or Poisson. There are a number of ways to derive such integrators, such as the partitioned Runge–Kutta method {{cite:ffa9b031c2b589703400e50e5a12a04515f1cc7b}} or variational integrators {{cite:e8340e0553a0bb052c9a61ede87812a2a438e0d0}}; here we focus on a type that uses generating functions. It relies on the Hamilton–Jacobi theory, an alternative formulation of Hamiltonian mechanics in terms of wavefronts which, instead of solving for a single path, generates a family of solutions. At the heart of the Hamilton–Jacobi theory lies the Hamilton–Jacobi equation, which appears in geometrical optics {{cite:9bc763acf25af9c1694222a3eefa2b1cfcf140ee}}, {{cite:7d3b2c70fd982f53761206a32f66e9db16d5b815}} as the equation modelling wave front propagation, and which is considered a passage from quantum mechanics to classical mechanics {{cite:749c58bdc79ac1394fbfcdfff36741f28f6daed4}}. The solution of the Hamilton–Jacobi equation, known as the generating function, allows one, from the point of view of symplectic geometry, to construct coordinate transformations that preserve the geometric structure known as the symplectic structure. This forms the basis for the numerical scheme: once an approximation of the generating function is obtained, it can be used to generate symplectic transformations. In other words, the numerical scheme is itself a symplectic transformation.
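For reference (standard notation; the paper's conventions may differ), for a type-1 generating function S(q, Q, t) the Hamilton–Jacobi equation and the induced canonical transformation read:

```latex
\frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}, t\right) = 0,
\qquad
p = \frac{\partial S}{\partial q}, \qquad P = -\frac{\partial S}{\partial Q},
```

so any sufficiently smooth solution S generates a symplectic change of variables (q, p) → (Q, P).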
i
908481f2d19f2f5cf3676510de21c361
Our method, which is concisely presented in Alg. (Two Body SfM), starts with reconstructing static configurations with a state-of-the-art pipeline {{cite:40a5323a66bb1b44d98efacedf00e84bdceeaf5f}}. Then, it finds the poses of the cameras from other takes towards the background and towards the foreground. Inliers of these poses give local segmentations. Then, the poses are grouped to obtain the global segmentation to merge partial reconstructions of the static configurations into the resulting model. The algorithm outline is as follows. Input: take {{formula:b37ecc43-948f-43b3-99c8-e344c6835f17}} . Output: model of the scene with point labelling {{formula:36d819ea-6d26-41de-adea-6a578331bacc}} . Steps: reconstruct the model of take {{formula:1cd6f29c-c237-4abe-9887-de0561b36fd8}} (Sec. REF ); sequentially register cameras from other takes {{formula:13a081ba-86c3-4ace-a02c-d7a763247302}} towards {{formula:0e963ed7-e375-4b8f-966e-2ce5000a5094}} (Sec. REF ); merge the observations of the sequentially registered cameras (Sec. REF ); merge the groups of points from different takes (Sec. REF ); merge the models according to {{cite:7217cce362a918c0b1a687c1c4b674d81b87fd3f}}; perform Bundle Adjustment {{cite:05ec8a5622638395f9f84bf88fa1ba528a6d613c}}, minimizing function (REF ).
m
1bd7d261a9ce343868684fb6e7695119
Our approach is borrowed from {{cite:536a7faf0382aeab56f66a25088b6a06483cae57}}. We estimate the sharp functions of the solutions and apply the Hardy–Littlewood theorem and the Fefferman–Stein theorem. This approach is typically used to treat second-order PDEs with small BMO or VMO coefficients (for instance, see {{cite:08574d5f7e429afb712766e02a26e93b388478e2}}). In {{cite:536a7faf0382aeab56f66a25088b6a06483cae57}} this method is applied to a non-local operator with the kernel {{formula:629b1df2-46ed-4c72-bb86-ebce28219356}} . As in {{cite:536a7faf0382aeab56f66a25088b6a06483cae57}}, our sharp function estimates are based on certain Hölder estimates of solutions. The original idea of obtaining Hölder estimates is from {{cite:bbb24c49b8d0f2be18a276f28d0bf3d90fa3a4e6}}. Nonetheless, since we are considering much more general {{formula:225c2436-3d99-46f0-81f5-cc9937f29402}} rather than {{formula:d876c6dd-a4b3-4de9-9f5c-1e13a90edff3}} , many new difficulties arise. In particular, our operators do not have the nice scaling property which is used in {{cite:08574d5f7e429afb712766e02a26e93b388478e2}} and {{cite:536a7faf0382aeab56f66a25088b6a06483cae57}}, and this causes many difficulties in the estimates.
i
89887cb1795dc44c7f320b8ebca7165e
We compare our approach with existing methods for UAS, including AE {{cite:43ef2ed407f69c6e46f04608015c366e7de0f2a1}}, VAE {{cite:484751535dddcda48860b1eb041031bc6bda545d}}, GMVAE {{cite:48602189844392fb1e223b9d7306a8e531ca04f3}} and fAnoGAN {{cite:b0ee60bd086ea712e1fb7537c9b671e71a75e115}}, on two datasets, BraTS and ISLES. Table REF and Figure REF present the quantitative and visual results. The first three rows of Figure REF correspond to the results on BraTS. Compared to other methods, the proposed approach provides more obvious contrast between the lesions and healthy tissues, which highlights the lesions and improves anomaly detection. The reasons are twofold. On the one hand, the healthy anatomy is recovered successfully. The proposed approach has high reconstruction fidelity in regions of healthy tissue, where AE and VAE generate only blurry reconstructions. fAnoGAN reconstructs those healthy regions more distinctly, but the visual similarity of normal tissues is still poor. As a result, some highlighted normal tissues in the images, such as WM and CSF, are also detected, i.e., false positives. This also confirms the concerns about the loss of localization information in the compression to a low-dimensional manifold. On the other hand, the most anomalous regions are recovered poorly by the proposed method, as intended. In contrast, some methods, such as GMVAE, generate nearly the same images as the original input, which ruins the contrast between the anomaly and the healthy anatomy and brings many false negatives and a drop in AUPRC, as shown in Table REF . In the end, the proposed approach achieves an AUPRC of 0.511 and a DICE score of 0.544, which is state-of-the-art performance and outperforms the other existing methods by a large margin. {{figure:af2ed56c-2378-4afa-bdef-929226c4c5d6}}{{table:589ce7e2-581f-445f-99b6-50bb1f4cafb4}}
m
7c23d04397ee275a8a57042e2852719a
In order to confirm the results obtained with the WKB approximation and with qSW, one can solve the differential equation exploiting the method of continued fractions formulated by Leaver {{cite:1157bdc8053b0af52a3c4b1dec5a0fe1c8b00f6e}}, {{cite:38550cde923b41e4a8d2349108315eed5bd2a08f}}.
m
e0ac667832e2127eaa0daa4a5df192c1
The general form of the Usadel equation {{cite:a34df16a81f0d8e39a12d1abcc891b7abf0f82b3}} (which can be derived from the Eilenberger equation {{cite:a3c2bf9a3799bb39232712528708e8ed06275971}}), in the presence of an exchange field with components {{formula:956ac7c8-bc62-4d7f-af5b-f4c77c03f82a}} in the ferromagnetic layers and a gap energy {{formula:d619133a-2bc2-4cd4-bfc4-b0c515f3f3c9}} associated with the {{formula:2841dbe7-d6cd-4dfc-8a61-4d39fddc7d63}} -wave superconducting region, can be compactly expressed as {{cite:05cbce1d66931ce65d7a8b1946beaf6b4f54af5e}}: {{formula:226b0ecd-413f-45fc-a1cd-88f1b68c9849}}
m
6883b36ae596ddff649c41f9b28e7dbb
In spreading dynamics, the most important nodes, called influential spreaders or super-spreaders, are those which induce the largest outbreak sizes when the spreading originates from them. As the control of epidemics is a major challenge facing human beings {{cite:89b7bea027e252aa49fcd6e785ea208b48415493}}, {{cite:47af2ffad6bdf281fb5b7dbd75a300ecd16023d9}}, the identification of influential spreaders is a key step in optimizing the available resources and ensuring a more efficient control strategy {{cite:2e5a787080a6eac3ffaa381e376863fa9b580c2b}}. A great many methods have been proposed to identify influential spreaders {{cite:db8b9be4df2d11244ce8c92ebfadd5c5cd6f4b94}}, {{cite:16f9ddf45d483676cf55f2d45b574b6b0180d6a1}}, {{cite:f1a7f3bf3ba5683a5dc6d01072b48dd5a438f85c}}, {{cite:1bb05af0ab8add5ea2ddbec40470aa5d0f1d699a}}, {{cite:d5dd72afb9f1b45ca90c5ef4abfe1cb6efe74b8d}}. However, this progress has mostly been made in single-layer networks, while some real complex systems are better represented as multilayer networks {{cite:d53a74ebed9868a33f956a6dc098c49494df4b7c}}. For example, an individual may have relationships with others in different ways, as friends, colleagues, schoolmates, or business partners, which forms a multilayer network {{cite:92663eae5800e5d0b76d71f31043265b33e23d56}}. Integrating these different types of interactions into a single network may lose some critical information about the system. The multilayer network approach has been adopted in understanding the robustness of infrastructures {{cite:821f629f09795753b49e633b36bdbd75d40755a6}}, {{cite:d87603f2f4e0dcdbbb824d76837bc5513c71465e}}, the coevolving spreading dynamics of information and epidemics {{cite:714ea11368a8feaafaee61c7676e82c220433c54}}, evolutionary games {{cite:b8b9a4e6cb5f706c3c46aca49cd4a9f2f3ce00e0}}, the functions of the brain {{cite:ffd36e87bdfa8339b3240447c87552bcc81c4313}} and the stability of economic and financial systems {{cite:624efa251ec652180f4209e37a4deb8120e4541d}}.
i
3ad3a01e87896792c06de4ec376ee2e2
Both the above-mentioned weaknesses of our analysis are good news. Within our proposed mechanism, the experiments can achieve a much stronger critical current modulation than adequately described by our analytical theory. The goal of this work has been to clarify the physics of critical current control by modifying the vortex surface barrier using a simple analytically tractable model. Having understood the key ingredients and qualitative dependencies, the experiments can engineer new hybrids capable of robust critical current control via gate voltage and/or magnetic order orientation. Further, important insights can be achieved using recently established methods for measuring interfacial spin-orbit torques {{cite:cf10f0a7b5fe50a1625959eb5837014dd821406d}}. On the theory side, the vortex surface barriers and corresponding critical currents should be evaluated numerically for various superconducting hybrids of interest {{cite:f4837fd11da7698dfd9ffa8cba438b8e30227d4f}}, {{cite:92b818c9fbb67d3dc8f7775febe6aca1ded9b398}}, {{cite:b5786c4fb98f3da34ce3c6b0f1df0160b63f582b}}, for example using the quasiclassical Green's function method {{cite:e7a6114c996698ee54ee7b80f9c7d403fdf022a7}} or Bogoliubov-de Gennes equation {{cite:b15237a2ad163efdddf766dcac97455841ece651}}. The recent new insights regarding orbital contributions to magnetization in superconductors with Rashba SOC {{cite:16e4c2be360da3727b083cd837ff444d327902c6}} pose another important question and could offer an avenue for enhancement of the critical current modulation.
d
64ceb4a39c442283263333950720811c
In this work, we only theoretically focus on the dynamical effect of BH subsystems for the long-term evolution of two-component, spherical and low-{{formula:4794f8e8-7922-4f44-8287-2c318b28d654}} star clusters. These simple models allow us to well isolate the dynamical effect of BH subsystems from other physical processes. In a realistic situation, primordial binaries, stellar evolution, mass spectrum, evolution of tidal field, density profiles and rotation of clusters complicate the simple picture discussed here {{cite:773508a2fb043ed9e890e0fe02e3d0008add8e06}}.
d
ad6dbf1bbce5c0738de656d97ce8f9eb
We focus on three popular and/or recent metric techniques: ProtoNet {{cite:7d288cf958bbb69d2e78eb362fbf68023e8d7294}}, FEAT {{cite:d484d1cad5e138572561b0140b7a21c503cbd603}}, and FRN {{cite:d7f9663bc7a2322c76152646c185b1924adf4d0a}}. We describe each model and our corresponding cosine alternatives in the following sections.
m
9275609defcd8c86ce5dea2e45c13bf5
The models were trained on 400,000 32x32 image patches from ImageNet ILSVRC12 {{cite:944acc19ea20ed7c00770fa768a5552731a0f0e8}}. The patches were randomly sampled from images after subtracting the mean and normalizing the variance of the images. Low-contrast patches (variance less than 0.32) were not included, as was done in {{cite:9b0ec2b9f9f117ece593c58235352a3735e298e9}}. The mean of each patch was also subtracted and its variance normalized. The overcomplete ICA and non-negative sparse coding models were trained for 16 epochs (presentations of the whole training set). The model hyperparameters were matched to those of {{cite:9b0ec2b9f9f117ece593c58235352a3735e298e9}}. The probability density function (the function "G") of the input under the overcomplete ICA model was the log of the hyperbolic cosine function. The model V1 simple cell responses were computed with Gabor filters along the 6x6 center locations of each 32x32 image patch. This is equivalent to 2D convolution of the Gabor filters with the image with a stride of 4 and no padding around the edges of the image. There were 3 frequencies (1.25 cycles, 1.5 cycles, 1.75 cycles), 12 orientations (increments of 15° from 0° to 165°), and 2 phases (0° and 90°). The filters had a receptive field size of approximately 12x12 pixels. The resulting set of model V1 simple cell responses for these location and parameter choices had dimensions (6, 6, 3, 12, 2). The model V1 complex cell responses were computed by taking the square root of the sum of the squares of each quadrature (ninety degrees out of phase) pair of Gabor functions to model phase invariance. The resulting model V1 complex cell responses had dimensions (6, 6, 3, 12), because the last dimension of the model V1 simple cell responses corresponded to the quadrature pair. Before computing the model V2 responses, the model V1 complex cell responses were pooled with principal component analysis (PCA) by retaining only the 100 components with the largest eigenvalues. Finally, the V2 responses were computed with overcomplete ICA or non-negative sparse coding with 800 filters or basis functions. The source code for the complete V2 model has been made available at https://notabug.org/jbowren/hv2model.
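A minimal sketch of the quadrature-pair ("energy model") step described above (our own illustration; the response shapes follow the text, the values are random stand-ins):

```python
import numpy as np

# Simple cell responses: (6, 6, 3, 12, 2), last axis = 0/90 degree phases.
simple = np.random.randn(6, 6, 3, 12, 2)

# Complex cell response = sqrt of the sum of squares over the quadrature
# pair, which removes the phase dependence (phase invariance).
complex_cells = np.sqrt((simple ** 2).sum(axis=-1))
print(complex_cells.shape)   # (6, 6, 3, 12)
```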
m
1ffa91ed2115310716028c840ca7f247
During the XRT monitoring of BL Lacertae we observed a softer-when-brighter behaviour in X-rays, with the photon index ranging between 1.35 and 2.63. This behaviour can be related to an increasing importance of the synchrotron emission in the X-ray part of the spectrum covered by XRT during bright states, likely due to a shift of the synchrotron and IC peaks to higher frequencies. In this context, we noticed that an X-ray photon index higher than 2.2 (see Table A1 in the Appendix) was estimated in the two XRT observations close to the VHE detections of the source reported by the MAGIC telescopes on 2020 August 19 {{cite:30ed7bdea6ff66d81dac09bea10c83cdeb3a7248}} and September 19 {{cite:a694f0dd03a366c99fa76d05fcb962dee98b93b0}}, in agreement with a shift of both SED peaks to higher frequencies in this period. A similar softer-when-brighter behaviour was reported in {{cite:c385b2b310d4cd135e657ace5fe49d6859efa731}}, when 40 XRT observations of BL Lacertae were carried out between 2019 September 14 and 19. In that period the fluxes were lower than the values observed in 2020 August–October (i.e., 3.1{{formula:9d79ce19-9fab-40e1-bd42-f8ac7078467f}} 10{{formula:ab9df680-818d-4c2b-ab04-82ed4d59962b}} – 1.8{{formula:3c4222b5-b1dd-4602-a727-515c7110ee6b}} 10{{formula:fa53dbb7-ebb4-48fc-b3bd-11bf725edc6f}}  erg cm{{formula:022c6080-b957-4ecc-85fe-fad3534b02bd}} s{{formula:4680553c-2240-41be-83c1-a0c2d1503841}} ), with the photon index ranging from 1.79 to 2.72. On the contrary, a harder-when-brighter behaviour was reported by {{cite:2b56ce90c02c0cdc54c2930f7d2d25edd3932e6d}} during the 2012 flaring activity. This may indicate that different emission mechanisms and/or changes of distinct jet parameters are at work in the source during different flaring activities. The combination of the X-ray data analyzed here with other multi-wavelength data collected during this period, in particular by Fermi-LAT and MAGIC, will be important for studying this behaviour in detail.
d
81c16706562b89ce926f7d0fd2f4a28d
in the basis {{formula:2ac6524f-245c-4dba-bada-826cf2714811}} with {{formula:d08bd062-a942-4054-8089-8f031caa02f6}} ({{formula:76abdb82-e44a-4acc-a8ba-e5c7367fdc05}} ) representing the orbit (spin) degrees of freedom. The Pauli matrices (identity matrix) {{formula:5849ae0c-fd38-484d-8e47-417596f707cf}} ({{formula:03ff9fa4-fc0e-4cab-885b-16caeae4a8a7}} ) and {{formula:4d1909ec-3534-4540-8893-728702713cdb}} ({{formula:4876a652-222c-43d5-bac0-1f7226cedda5}} ) act on the orbit and spin space, respectively. {{formula:7427bf35-ba4b-45ab-ad1e-f6a0cb60be2d}} , {{formula:abecaa5b-1594-4761-9f70-2bb82532c644}} and {{formula:9c811aa5-caf8-492b-a8ae-7f80662220b0}} are model parameters and we set {{formula:73dd2185-0866-48f8-8d33-678d0cd0072c}} throughout the paper. Here {{formula:edde8542-a8b9-4975-8533-f7855c50d4d3}} with {{formula:71eea163-5326-4368-b602-1cb137c7e14c}} . If {{formula:3dd761cc-3350-4876-8b02-800475f7bbc9}} , the Hamiltonian is exactly the Bernevig-Hughes-Zhang (BHZ) model {{cite:e823040feafb79a6e85a107d0dd48596a7bb8895}} and can describe a quantum spin-Hall effect with a pair of helical edge modes. Generally, when {{formula:3eeb929d-fb03-44b1-b8d2-b64dc7b2b480}} , the edge helical states with opposite group velocities will couple and be gapped out [see Fig. REF (a)]. To be specific, the {{formula:7cdd0633-186a-4a48-b3ca-071da059b29f}} can create mass domain walls along the edge of the sample and lead to a HOTI with corner states. Importantly, the bulk-corner correspondence exists only when {{formula:50104086-e7e9-4ede-92bb-b575e7b717e0}} , and the Hamiltonian has the edge-corner correspondence instead for {{formula:908f2e4b-14de-4d97-8570-0e8862628d1d}} {{cite:169b27931c57e4a4acde999f0dffd39673075a49}} [see the appendix for more details]. {{figure:0fe307e4-aff0-4768-be81-ffbee99dba69}}
m
17883904e568c5659374b0368ba4cdc2
To preserve the purely ingoing radial solution near the horizon, we expand the function {{formula:7035f5e9-9bec-4859-86d9-b70e282885e6}} in the form {{cite:bfb69c807f83fa233c6c0b279f8d63e08074adbf}} {{formula:f9184c15-ffd3-4cbd-9ebb-816b77879654}}
m
bbfca6058eb5d5ff39abda007ee21f46
Least-squares and logistic regression on graphs. Paralleling once again the results of {{cite:c577761848510c1b7510ff19d34ae4ea52acdd04}}, it is clear that our certified graph unlearning mechanism can be used in conjunction with least-squares and logistic regressions. For example, node classification can be performed using a logistic loss. The node regression problem described in {{cite:05c3656ccd292fe7076b3cb4654b81f1462cf0c3}}, {{cite:dfbe981b44b960fa8eb307c2ee01df189b8d96a3}} is related to least-squares regression. In particular, least-squares regression uses the loss {{formula:d1b7a071-1d8c-425a-96a1-83937f560ae8}} . Note that its Hessian is of the form {{formula:9b8f22b9-b103-4b62-ab40-10d27f2040eb}} which does not depend on {{formula:3f8678f4-2ae0-42d8-a728-09b99f672a27}} . Thus, based on the same arguments presented in {{cite:c577761848510c1b7510ff19d34ae4ea52acdd04}}, our proposed unlearning method {{formula:d7778716-0d92-4f9e-9828-a7ac9b642d05}} offers {{formula:b02ecd2b-e2ef-49aa-bae6-7314bc488e96}} -certified graph unlearning even without loss perturbations.
d
ed21a19fac7a558ca6808543106dbec1
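A minimal Python sketch of the observation above: for a (ridge-regularized) least-squares loss, the Hessian is X^T X plus a constant regularizer, so it is independent of the model parameters and the labels. The toy data, dimensions, and regularization strength below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # stand-in for (propagated) node features
y = rng.normal(size=100)        # labels (unused by the Hessian)
lam = 0.1                       # assumed ridge parameter

# Hessian of l(w) = 0.5 * ||X w - y||^2 + 0.5 * lam * ||w||^2
def hessian(w):
    return X.T @ X + lam * np.eye(X.shape[1])  # no dependence on w or y

w1, w2 = rng.normal(size=5), rng.normal(size=5)
assert np.allclose(hessian(w1), hessian(w2))   # identical at any parameter
```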
The results follow from a straightforward application of Proposition REF using {{formula:5c051042-31dd-42b4-a5c4-0cc51cf83f98}} , the distributional representation of {{formula:7f8a07bc-1aad-4e94-9a50-38ef94d0b23d}} , and the appropriate Gamma randomization to obtain independent {{formula:861cc100-0b39-434f-9100-7981b69314b0}} laws. The generalized gamma subordinator representation of {{formula:b7cc66cc-de57-43d3-95bc-a45f8627ad1a}} and the Poisson-Dirichlet distributional identities can be found in {{cite:c0521408d3f6d917b51a56446f4de9660bf23a53}}, {{cite:cd7e796531a30afd2a9fc160b10ad77279f42162}}, {{cite:a59481c962a9cc5cce35ad2fb77c0e8c7c428ed2}}. The independence of the Poisson-Dirichlet laws is due to {{cite:16c4ca67d0db72bf82d94c073bad4dd9a65d9c82}} and beta-gamma algebra; see also {{cite:c0521408d3f6d917b51a56446f4de9660bf23a53}}. {{formula:fc10020b-0be7-4f58-8d7c-118dfaeb1714}} has distribution {{formula:83424282-a356-454b-9f03-b88cf776e78e}}
r
9d7686b5ec566546abe11a89c6e562e1
where {{formula:3662ec60-c40a-46c0-a856-b24c2db9a584}} is the set of exponents of {{formula:1bdd6308-e568-46a1-baf9-e0b86841951d}} (see {{cite:d32ac1a15139bf52227f55b783a36f301318c47e}} for the definition of exponents), {{formula:3db98f1e-c025-44ac-88b7-c43ff6d7397e}} , and {{formula:28280ead-3a9e-41cb-ac4e-103b519f0aaf}} , {{formula:9f9610f7-3ba3-410e-8fad-ae41fc0db088}} . A convenient normalization of the basis elements {{formula:d6cb8fa3-a6dd-4ae2-9273-4ee3bcf78ab4}} can be found in Section REF . Recall that the set {{formula:958802bb-dbcb-4711-8a96-86ff09bd04f1}} has the following form {{formula:66396c84-0bf7-438f-9571-203303be13a5}}
i
cc30fafa4d52f13494b270304d4f6597
The general unsupervised learning model defined above, without any other inductive biases, cannot ensure disentanglement, as is commonly believed {{cite:cc8b4f3aac6f96b157c45cf4f8aa87e1f9fa3964}}. To achieve the disentanglement of content and speaker representations under the above formalization, our solutions are as follows: we first introduce the architectural design that facilitates the separation of time-variant and time-invariant elements in Section REF ; in Section REF we present the learning objective that encourages {{formula:68d05ba0-0734-41e9-ba1d-472726ce8c89}} and {{formula:d4e75090-a7e9-44bd-af04-db5feb7444d7}} to be complementary, so that the content information and the speaker identity information are assigned to them separately. The effect of the training dataset is also emphasized in Section REF .
m
7bbc9f6b713b8bfb773fbe46ac2655ee
Our method for solving the 3D image super-resolution problem relies on deep neural networks. The universal approximation theorem {{cite:46ff171cc0cc38f72266ee5840083444f3749bc0}}, {{cite:04804e11ad37946155dbc7e8dd72ec9ea92c983a}} states that neural networks with at least one hidden layer are universal function approximators. Hence, the hypothesis class represented by neural networks is large enough to accommodate any hypothesis explaining the data. This high model capacity seems to help deep neural networks in attaining state-of-the-art results in many domains {{cite:500aa95ba7127d18afc149ca64c07405a36e300e}}, including image super-resolution {{cite:7f1d7c3845cf646d59890374c3e9c686f679d51a}}. This is the main reason behind our decision to use neural networks. Interestingly, Dong et al. {{cite:7f1d7c3845cf646d59890374c3e9c686f679d51a}} show that deep convolutional neural networks for super-resolution are equivalent to previously-used sparse-coding methods {{cite:08c78d25a71bc8b29c189595e58af6f87f6f4156}}. However, the recent literature indicates that deep neural networks attain better results than handcrafted methods in practice {{cite:7f1d7c3845cf646d59890374c3e9c686f679d51a}}, {{cite:55313ea05db753ea8062aced3a007c690335f9ae}}, mainly because the parameters (weights) are learned from data in an end-to-end fashion. Further specific decisions, such as the neural architecture or the loss function, are taken based on empirical observations.
m
db1b0e44e52cbaed0feded193576b05e
These models, which we will refer to as Hamiltonian Neural Networks (HNNs), are able to learn more accurate representations of the world than would otherwise be possible. Recent works by {{cite:e86575e08d8e2f5df6b3aec6eafba9fa3ad60e38}}, {{cite:c705dbbfce570b97dbc4459d598a81df0cddc370}}, and {{cite:6440945d371259dc94efda06d285dd273d34b3c9}} have shown how to learn Hamiltonians for nonlinear, chaotic systems such as double pendulums, {{formula:8cc90d5a-c7b5-4523-83a5-3a034422b533}} -body systems, and bouncing balls. Moreover, these models can be learned starting from perceptually difficult observations, such as pixel videos. A number of other works have begun applying these physics-inspired models to new fields such as quantum mechanics {{cite:06906bbbf43b98514a9e90a0fd6fe9e803945a02}}, chaos theory {{cite:6440945d371259dc94efda06d285dd273d34b3c9}}, and differential equations {{cite:a3db1df15678b1f337472019fbfaabb974e1b35d}}.
i
d12ea2f8b7f98655e5070c0315b7bd13
This remaining performance gap significantly diminishes the relevance and practicability of these techniques for real-world applications where a high level of accuracy is required. To reduce the domain gap further, domain adaptation coupled with weak human annotation has been developed {{cite:ce95c2c7519724fcd08ae73fd6d077448f4f6bb5}}, {{cite:83ddf8ce7952703073a9447410b486c5b0697ebb}}. Instead of performing full pixel-wise annotation of the target images, weak labeling consists of simpler and faster tasks, for instance, bounding box selection {{cite:2a088762253c33b8df122ff618052919e5de76c3}}, {{cite:3616d5a778d254807a0151e3dcb4a2b1b928a6c5}}, image-level {{cite:1b1aa744163e9f9b83ccc9c81a55518ada52bef6}}, {{cite:698ca4b7bef336639443ed4ebb7974127f548673}}, {{cite:ec021160dc2a960db0142cd7b86c8eafb7acb82e}} or point-level {{cite:baa2fab510d55a21a81e6bf1f4b8ecdef54997ef}} annotations. The cost of these weak annotations is significantly lower than that of their dense counterpart, making them realistically deployable for industrial and commercial purposes. However, we argue that the existing annotation process is yet to be optimized to achieve better performance under the same labeling budget.
i
76f611771f0f9533db336c6a9f4c7d1a
This work continues this line of investigation. Our main contribution is to show that this centralised distillation is unnecessary, and indeed sub-optimal. In particular, we propose an online distillation framework, where each worker both learns to optimise performance in a local domain and also mimics its peers from other domains with peer-to-peer distillation. More specifically, inspired by deep mutual learning {{cite:c335645f18990149992237b301898ce988aff2aa}}, {{cite:8337012e65ff7da7dce3dd122d10da7d51d2426c}}, we train each worker with two losses: a conventional RL loss and a distillation loss that measures the similarity in predictions between the local worker and the others. In this work, we use the expected Kullback–Leibler (KL) divergence between predicted distributions as the objective to optimise for information exchange. This online distillation framework is named Peer-to-Peer Distillation Reinforcement Learning (P2PDRL). We empirically show that, compared to conventional baselines and offline distillation alternatives, e.g. divide-and-conquer (DnC) reinforcement learning {{cite:ab99e90406f3c11ae4bdf1f5400e050036c386bb}}, P2PDRL achieves more competitive learning performance without the additional cost of a centralised distillation step. Overall, P2PDRL learns more quickly, succeeds in learning on a wider distribution of domains, and transfers better to novel testing environments compared to competitors.
i
21cdc3c49a8bca289252e106a0467538
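A hedged PyTorch sketch of the peer-to-peer distillation term described above; the tensor shapes, the detaching of peer targets, and the weighting coefficient alpha are illustrative assumptions rather than the paper's exact configuration.

```python
import torch.nn.functional as F

def p2p_distillation_loss(own_logits, peer_logits_list):
    """Average KL divergence between the local worker's policy and each peer's."""
    own_logp = F.log_softmax(own_logits, dim=-1)
    loss = 0.0
    for peer_logits in peer_logits_list:
        peer_p = F.softmax(peer_logits.detach(), dim=-1)  # peers act as fixed targets
        loss = loss + F.kl_div(own_logp, peer_p, reduction="batchmean")
    return loss / len(peer_logits_list)

# total_loss = rl_loss + alpha * p2p_distillation_loss(own_logits, peer_logits_list)
```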
We compare with several recently published methods, including: 1) Rule-based or unsupervised contract processing models: Extraction Zone {{cite:ebd9192c03ff0b218b0df40e79d71d9ae521ff15}} and Sentence Match {{cite:382531993d3bd4918f0ad7bd750a5cfaa90d0de2}}; 2) Strong pretrained language models: BERT {{cite:42859b6da7006d8a5a09095789f23e18e24afbf0}}, RoBERTa {{cite:a9c87adcf398736a158df09f460a16ecd8c6dd84}}, ALBERT {{cite:09dfbc2892d4e8c644ca0f44b9aee34ac3fa465c}}, DeBERTa {{cite:426b7345dfb96c3120dd26fb86f3458814102a4b}} and RoBERTa+PT, pretrained on 8GB of contracts {{cite:51dc4d58373005389b3d791d22a3720c5af212a1}}; and 3) Models tackling the long-text issue: Longformer {{cite:5e65efd9cac6dbecc8482487e61d987b28df9ae8}} and Hi-Transformer {{cite:7ec2aabaa77e56e961972399c658bf2cab7bcd90}}.
m
ce100077a8398cbc9ef1f8906dea6255
In addition to these differences, at a qualitative level, all of the cases considered in this work shared a number of common features. First, the variation of HMI and reflected entropy has a definite sign due to the presence of the anisotropy. Regarding these quantities as measures of total correlation between the subregions, this behavior seems reasonable. This result differs, however, from what happens for the HEE, where the variation flips its sign. This feature precisely matches the previous results of {{cite:e94cf50b8db45ff1143eb9983460f9bcc9a92bb4}}, {{cite:40841d552063f22714c9b1ae0cadb4716e74d2a0}}, {{cite:1fe7d806308e3e38ad3b43d2675a75e565580202}}. Another key feature observed here was the appearance of a new universal logarithmic term in HEE whose coefficient depends on the anisotropy parameter and the rotation angle. Roughly, we can think of this universal term as characterizing when the isotropy is broken in the underlying boundary theory. Similarly, as shown in [55], if we instead choose a background which breaks the translation invariance, the structure of the universal terms of HEE is modified. This feature is entirely expected given our experience with HEE in other backgrounds with broken symmetries, e.g., see {{cite:246c7d403f49104cb927aa70edcf0ac9c513c83b}}.
d
2975c10c4995273757cf4806729871b3
Both Replay-Based approaches tend to be computationally- and/or memory-expensive, especially as the number of tasks increases {{cite:d0940debc6f3e831dcaa422d2e546a0b0a638596}}, {{cite:28ba4772f776dd4d34bdf05cfc5bf1aadd0fba26}}, {{cite:b0f3339b5d8eae7813d94cf661aa327bcef0a1c8}}, {{cite:e92f17c16db3aca582a7b8f76a7da7973f46edc2}}. However, by caching portions of the datasets or the models used for each task, the retraining process can support prioritization for particular tasks, thereby facilitating more control over Graceful Forgetting.
m
77aa0d951858b64baa9be6501a922589
Before discussing the implications of this finding we first summarize the possible limitations of our analysis. (1) To perform our analysis we needed to use high-resolution racial grids. Such data represent streets and roads as uninhabited areas. Using image processing techniques we have removed streets, but major roads cannot be removed without producing artifacts. As a result, some large racial enclaves may have been divided, affecting the sizes of the largest patches and thus the tails of the empirical distributions. However, the values of the exponents in the fitted models are not sensitive to the values in the tails of the empirical distributions. (2) We use racial data restricted to the fixed boundaries of census-defined urban areas. In {{cite:ec0d06550ab6943dff4624534f75816626bbfa53}} it was shown that the values of scaling exponents in many urban systems may depend on the definition of urban boundaries; this effect was not investigated in this paper. (3) We fit the data to the assumed power law function and quantify the distributions in terms of the power law exponent even if a distribution is only an approximate power law. Such a procedure is widespread in studies in which scaling in networks (systems) is reported {{cite:7c515ba791ca140163751eb912b6dcd04e49dc45}}. However, whereas previous studies did not address the possibility that a derived distribution may diverge from a true power law, we tested the power law hypothesis. Our test reveals that, in many cases, the power law hypothesis is statistically rejected (see Tables 2 and 3) and the fitted models are only approximations of the empirical distributions. We argue that these approximations are sufficiently good to not change our conclusions and to not diminish the significance of our findings. {{figure:36a034a1-0246-4246-95ee-8ab55b3b92ed}}
d
e424ca16b5d716c44e4a48ebc3ca6edb
Recently, PVNet {{cite:867f9c5355723dc8f25f39636fd723c4c12deeb7}}, DPOD {{cite:24978c38778f3f57e7a52396175c7a1545d162d5}}, and HybridPose {{cite:679f653a02f7674e0a20df466a2392c8263f1bc8}} have shown excellent performance on 6D object pose estimation using a two-stage pipeline to estimate a pose: (i) estimating a 2D representation (keypoints, dense correspondences, edge vectors, symmetry correspondences), and (ii) applying a PnP algorithm {{cite:fc34f857c3675279ebc5ff031992e1cd327b554d}}, {{cite:b8aabdb8fe6c9a471f184872b19e1195a71ba3d6}} for pose refinement. DOPE {{cite:3994be548c9bf56d31d7e58735841b53c281dc0b}} and BB8 {{cite:b19db281ed7d2bd9d66dc7bbd60c35cc781abd05}} estimate the corners of the 3D bounding box and refine the pose using a PnP algorithm. Instead of regarding the corners as keypoints, PVNet {{cite:867f9c5355723dc8f25f39636fd723c4c12deeb7}} places the keypoints on the object surface via the farthest point sampling algorithm. PVNet also shows that its proposed voting-based keypoint detection algorithm is effective especially for occluded objects. HybridPose {{cite:679f653a02f7674e0a20df466a2392c8263f1bc8}} uses multiple 2D representations including keypoints, edge vectors, and symmetry correspondences and demonstrates superior performance through constraint optimization. DPOD {{cite:24978c38778f3f57e7a52396175c7a1545d162d5}} takes advantage of dense correspondences using a UV map as a 2D representation. However, since the PnP algorithm is sensitive to small errors in the 2D representation, it is still challenging to estimate the object pose, especially under occlusion. RePOSE uses PVNet {{cite:867f9c5355723dc8f25f39636fd723c4c12deeb7}} as the initial pose estimator, using the official implementation.
m
67b572a428a4a30f8ec534950a7b9b4f
There are a few commercial attempts to market systems for card counting monitoring. Tangam Gaming {{cite:1539e6c79416e94a9a588ab520fac8b9632d31d2}} produces an automated card recognition system that requires the use of specialized hardware such as RFID. The MindPlay21 system relied on a range of specialized hardware which included 14 cameras, invisible ink, and RFID tags. Cameras were used to scan the cards as they were dealt, as each card had been marked with a unique barcode painted in special ink {{cite:462488592908d56cf05ff2c429450cf1951fa2d3}}. The cost of $20,000 per table, the unreliable components, and the slow speed of operation led to the company going out of business in 2005. Many patents also guard this space, thereby inhibiting a large-scale commercial solution {{cite:72cb54ce7090cf5e96bbc7aa3be3cf527c087538}}, {{cite:3eaef9e72dcc51a009f4ce12bed18a783bfea0e8}}, {{cite:4476fb4667a3dc8d84b293c2b111155fa4900df1}}. At the 2017 edition of the G2E conference, VizExplorer {{cite:4ccaf1591a59aeb8d54c6d95d8137a8f777b8c44}} launched its new tableVizTM with ChipVueTM product; this solution provided reliable bet recognition data for a few table games. The tableVizTM system struggled to maintain high accuracy and was not actively developed, whereas the ChipVueTM product released a new version at the 2019 edition of the G2E conference {{cite:ec937b12c4e4d184117d4e24069fd69b5d2ecbbe}} and continues to provide accurate bet assessment only.
m
2bdc5810b5c18a32b43bafc1aa209d5e
where {{formula:cf9c180f-f41a-4293-b294-d8147a15b2d1}} . The function {{formula:283a1773-9095-4a22-b3ef-0032d9e19df9}} is convex, and is minimized at {{formula:eeedbdf8-fcd5-4a2f-8e90-8e0c3bd5709d}} . Therefore, we can show that the limit distribution of {{formula:ced7336b-aae5-49d9-a41d-67bba9af227b}} can be determined by the asymptotic distribution of the objective function {{formula:1001d2ad-eeaf-4d79-881d-32a8d04c2315}} . Furthermore, it follows from the Lindeberg-Feller central limit theorem that {{formula:a7f61960-ffd4-447d-a928-fcd035d7925f}} , where {{formula:d74535fb-77cc-469d-8328-35fe457e0333}} . Then, following the proof of Theorem 4.1 of {{cite:21b3ef8ff11ce1a6bc6dc51ab1335cbf350009c9}} it holds that (see also derivations in {{cite:654df10b99a0c7d40e06678fe1e544864db1a7d1}} and {{cite:c1ab786a68bc3cdc3e9d7d6b163883c68fa74f94}}) {{formula:ebe0a4a7-e2e7-4367-8586-cda719393ad9}}
r
94ed3485d1415f3bf00a72234ba40c2e
Table REF shows the overall cloud detection results for FCD in comparison with CAM {{cite:58ff1d95f1dea574c3e87db0e11f51aba2e0575f}}, GradCAM {{cite:2e7bcdd554a9f29615a1425cef0f271157d09f6b}}, and GradCAM++ {{cite:9a7fa9a6b73fac81e2720cc65b07e1b41940fb92}}. We find that our FCD greatly outperforms all CAM variants in generating pseudo cloud masks from just image-level labels, increasing F1-score by 8.0% compared to the best CAM variant. This strongly indicates a Fixed-Point GAN approach for weakly-supervised cloud detection is more accurate than CAM-based ones.
r
c3efd03ace23d51cf261797764eb3cd2
For time-stepping, we use the 8th order Dormand-Prince Runge-Kutta scheme described in {{cite:0372db6c92cbaec9151700f813c30cf95ac9822b}}. The solution is recorded at equal time intervals of width {{formula:b5943e7b-01f7-4368-92f0-4294d635a92d}} . The timestep of the Runge-Kutta method is set to {{formula:9f270163-5bbc-4b6f-b561-3823767f045b}} , where {{formula:4b5a21dc-b72e-4428-bfed-d0913ce978e7}} increases with {{formula:af3e8deb-3d5f-48a4-b719-e3fb225d018a}} as listed above. These subdivisions are chosen empirically to maintain stability. We also monitor energy conservation (as explained further below) and increase {{formula:9c96ea85-e7d8-459c-8260-1bbf6d60f4d0}} until there is no further improvement in the number of digits preserved at the output times {{formula:5098409e-1a59-4a33-8542-97b785cd2d88}} . Timesteps are taken until {{formula:113f9382-d2f0-4cf9-b451-d5760c5f94b7}} is insufficient to resolve the solution through an additional output time increment of {{formula:6a59cd70-a769-4794-950b-2638837c7808}} . In all three cases, the solution appears to form a splash singularity {{cite:0631cb25e9095547cc738c545c7f42628e27d56d}}, {{cite:75b18380608fb40b3d1dedecb721be33922037b6}} shortly after the final time reported here.
r
f0d245d227ba66cc529a3112cbd82618
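For readers who want to reproduce the recording scheme, here is a minimal Python sketch using SciPy's DOP853, which belongs to the same 8th-order Dormand-Prince family; note that it steps adaptively via tolerances rather than with the fixed substeps described above, and the right-hand side below is a toy placeholder, not the water-wave solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):                       # toy right-hand side (placeholder)
    return -y

T, n_out = 10.0, 100
t_eval = np.linspace(0.0, T, n_out + 1)        # equal output intervals
sol = solve_ivp(f, (0.0, T), [1.0], method="DOP853",
                t_eval=t_eval, rtol=1e-12, atol=1e-12)

# Monitor a conserved quantity to gauge accuracy (exactly 1 for this toy):
invariant = sol.y[0] * np.exp(sol.t)
print(np.max(np.abs(invariant - 1.0)))
```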
We assume that the reader is familiar with the basic theory of lattices, such as a partially ordered set (poset), a chain, a lattice, a subgroup lattice, a normal subgroup lattice, etc. (see, e.g., {{cite:b733702a3aa649f25adf4680e4401c442be91158}}, {{cite:695965cd69fb757a9256171a04c96b401fca307a}}, {{cite:5141b724d5999505efb1640b5f1b1805c3460da3}}, {{cite:d81261f598ef68019cec36d8fbebc7c9e2be556e}}).
r
e5bb5a4a8dc437c599817616f8bca885
The dust mass obtained from the model shown in Fig. REF is {{formula:d569826f-5077-47f5-a740-8b617eedc900}}  {{formula:d81a5798-ffcb-4efa-ad38-1c8b228cd6d2}}, adopting a bulk density of 2.25 g cm{{formula:9d4195f9-f1cd-491d-a746-c944a3abd620}} for amorphous carbon (Gilman {{cite:ba54358e30f0285fd25884b695b38fe8ce4dcac0}}). If we assume a gas-to-dust ratio of 200, the total envelope mass is {{formula:9c55f0e6-42ff-43ea-9f7c-d8adbdd20557}}  {{formula:372c4984-5877-429b-b74e-9725f5d98840}}. Given that the mass ejection started in 1992, the average mass-loss rate at the time of the MIDI observations in 2008 is {{formula:4f2640ee-5127-4354-867a-07e7886b5120}}  {{formula:7bb04faa-4b3a-4fdc-bc5a-425b04342af7}}  yr{{formula:0ef126c4-8fcc-426a-97a9-5fb0d6ee4f52}}. The dust mass of FG Sge seems to be lower than {{formula:7cd53527-d14f-427c-8f0d-cb1845c280f4}}  {{formula:0d0d7a89-23ef-4482-ade9-2eadb17fcc46}} as found in Sakurai's object (Chesneau et al. {{cite:94ee520495cb6c5b0e653e845f8ba92b2f0287ed}}) and {{formula:ca031901-4f1a-49e7-99c6-a3fa8e1e6d3e}}  {{formula:e1abdb32-5d52-440c-86ad-d89263901e6d}} as derived for V605 Aql (Clayton et al. {{cite:7c2e8273314ddd1c1a04ee17d0ac39a5708a486c}}). The difference in the ejected mass may be related to the difference in the nature between FG Sge and the other two final-flash objects – Sakurai's object and V605 Aql are VLTP objects, while FG Sge is likely to be an LTP object. However, follow-up observations of FG Sge would be necessary to study the long-term variation of the mass of the ejected dust.
d
55ffa4ac58ec15b34c0656bc9370dc8c
Finally, SPVNAS {{cite:d3f1d3f456bcc1a19bd109e8f85c72358cf8490f}} builds upon the Minkowski Engine {{cite:edcaba444224a5f62cab4ee4b9ba0964371701a6}} and designs a hybrid approach of using 4D sparse convolution and point-wise operations to achieve state-of-the-art results in LiDAR semantic segmentation. To do this, authors in SPVNAS {{cite:d3f1d3f456bcc1a19bd109e8f85c72358cf8490f}} use a neural architecture search (NAS) {{cite:9e0971e0bacdf068284e946abfd732e95237a74d}} to efficiently design a NN, based on their novel Sparse Point-Voxel Convolution (SPVConv) operation. {{figure:cac3b72f-b402-4aab-b803-38bea2ac64e0}}
m
d1de1a6b9e291c0cbc2f162d7ab70d46
Although the Shapley value provides a principled framework in game theory, one critical issue is that the economic notion of the Shapley axioms is not intuitively applicable to the attribution problem {{cite:3992ea8fc4974dca10b1959c3b5b7f849a8f60f2}}, {{cite:6484433d359324786d7d6150c6bda4f0701f80ee}}. In particular, the efficiency axiom, which requires the sum of the attributions to be equal to {{formula:aef35ab8-fccc-4809-8e74-daea7ac0f0da}} , is not necessarily essential because the order of attributions is invariant under constant multiplication. For instance, for any positive constant {{formula:abe0b0ad-bf09-4b32-8a43-6f341073b2ac}} , an attribution {{formula:235d5f43-75f2-4e69-882c-7c8bb338be1e}} will have the same order as the Shapley value {{formula:7c067123-f278-4cc9-ae66-9e775487963e}} , but the efficiency axiom is not required for {{formula:a9838cfe-b0f8-4f6d-bad0-0b6e10e4fcd9}} . In Section , we will revisit this point and introduce a new attribution method that relaxes the efficiency axiom.
m
7a48a49214ea56eac367e562948840ae
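The order-invariance argument above can be checked directly; in this toy Python snippet the attribution values are hypothetical, not computed from any model.

```python
import numpy as np

phi = np.array([0.4, -0.1, 0.7])      # hypothetical Shapley values
c = 3.0
phi_scaled = c * phi                  # scaled attribution

assert np.array_equal(np.argsort(phi), np.argsort(phi_scaled))  # same order
print(phi.sum(), phi_scaled.sum())    # sums differ: efficiency is violated
```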
To improve the certified robustness of randomized smoothing, efforts have been made on both the training {{cite:025b9a65fcb89c61847230cdbdea3a807461bd75}}, {{cite:ac3f032772522f73475b134ac132a36314792c65}}, {{cite:cef20df1db545a747f4b239e19ab2ae45528ec76}}, {{cite:7eb5197ed9c867084bd17ae0de9676f35051f935}} and the certification sides {{cite:e36f569c3b6a152b1afeb5aa9b25ee263408e8f4}}, {{cite:ce10d3d76abcc70fcc010a1ac541f436a057ec7a}}, {{cite:025b9a65fcb89c61847230cdbdea3a807461bd75}}, {{cite:7f1bff6baccc9e6e289774ceb5ef2bac8b6b8db6}}, {{cite:5deb1c07b9ce13f189b16143565c73f771f2a2a1}}, {{cite:4ed271f8b913e506499a0f13745af06bc64dd895}}. On the training side, data augmentation {{cite:ce10d3d76abcc70fcc010a1ac541f436a057ec7a}}, regularization {{cite:025b9a65fcb89c61847230cdbdea3a807461bd75}}, {{cite:ac3f032772522f73475b134ac132a36314792c65}}, {{cite:7eb5197ed9c867084bd17ae0de9676f35051f935}}, and adversarial training {{cite:cef20df1db545a747f4b239e19ab2ae45528ec76}} help to train stable base models under noise corruptions so that higher certified robustness for a smoothed classifier can be achieved. In this work, we focus on certification, and these training approaches can be used in conjunction with ours to provide higher certified robustness.
d
95843c74d79fb542dbdfd8c7d23e4641
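For context, a minimal sketch of the standard Gaussian-smoothing certificate in the style of Cohen et al.: if a Monte Carlo lower confidence bound p_A on the smoothed classifier's top-class probability exceeds 1/2, the prediction is certifiably robust within an l2 radius of sigma * Phi^{-1}(p_A). The function below is illustrative and not the certification procedure of any specific paper cited here.

```python
from scipy.stats import norm

def certified_radius(p_a_lower: float, sigma: float) -> float:
    """l2 certified radius from a lower bound on the top-class probability."""
    if p_a_lower <= 0.5:
        return 0.0                     # abstain: no certificate
    return sigma * norm.ppf(p_a_lower) # Phi^{-1} is the Gaussian quantile

print(certified_radius(0.99, sigma=0.5))   # ~1.16
```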
The tomographic redshift distributions predicted by the forward model are shown alongside the spec-{{formula:d2009250-07e7-4433-97c8-6f5be61b9691}} histograms in Figure REF , and the corresponding bias on the mean of the redshift distribution is shown in Figure REF . The forward model is able to predict the redshift distributions with a bias of around {{formula:c914da7d-0751-45fd-970e-0eaec5dfea05}} and {{formula:c8a99a93-7459-4c18-8e56-a4212a1214b9}} on the mean, for the two respective tomographic bins. This is comfortably accurate enough for ongoing Stage III surveys (e.g., {{cite:2610ab0402cd1237de05f6edbcc21768efe97609}}), where the statistical error on the mean redshift per tomographic bin is {{formula:91c07c43-7c3f-4f43-a49b-fd7cecf642c1}} , and cosmological parameter constraints should be insensitive to biases of {{formula:e43e5ab6-2461-4ff1-bf12-47db62f3addb}} {{cite:583c392e596bedb863479e4c933f5be97cfdec06}}. The model predictions are very close to the accuracy requirements for Stage IV surveys, where the bias on the mean should not exceed {{formula:16b2601a-aedc-4e51-9944-b7f00aa2dfec}} {{cite:81d37429e428ed08010dfcf2ab375579e22c6568}} (for the LSST year 1 analysis, the requirement on the mean bias per tomographic bin is {{formula:8a14cc4e-9472-4cfd-927f-92d6422b42aa}} , decreasing to {{formula:dee3cf07-7650-4179-8419-82a2d301ca43}} by year 10 {{cite:81d37429e428ed08010dfcf2ab375579e22c6568}}).
r
a6107dffca39866a716f2e4a74409c47
We may transform a system's (conformational) free energy by supplying energy through external control, such as in the case of single-molecule experiments {{cite:85b1938c139ffe38cf3b5a5c4ebd41dabbb262bc}}, {{cite:039c978eb6a048b01ee023b7261d0b886ab1b886}}, {{cite:1dc5182c3ecbfb0f8c447760341ed860d126e793}}, {{cite:b082ca7d5f4b201c452fc67213add878053da905}}. Since molecules behave stochastically, we build an ensemble by repeating experiments to apply thermodynamics in understanding molecules' behavior {{cite:fd99a153f41d27572854a72244e6f253499043cf}}, {{cite:982aee1764400239b9ccb871fda1cc13e446aef8}}, {{cite:19a19281e890ed9d98908e4b6b5f2706f9ef4222}}. For example, Jarzynski's work fluctuation theorem enables one to convert fluctuating work into the difference of free energies between the end states of external control {{cite:282cb9b22ce144f7ba0c57dc1a4f48dc5fe121b5}}.
i
544af7c69aea7b25f57faba2ebbce334
It is generally believed that the highest burst rates on average are found for the sources with burst oscillations. However, it may be a selection effect as not all bursts exhibit burst oscillations, and therefore, a high burst rate means observation of many bursts and hence burst oscillation detections. Oscillations seen in this study are consistent with the range 578-582 Hz given in {{cite:c450b040278a8cde998478d94d7d08e2f2b1d1e9}} although most of the burst oscillations from this source are seen at 580-581 Hz. We detect a strong signal at a frequency (582 Hz) similar to that observed during one of the superbursts in 4U 1636{{formula:65f1b89f-d3ca-476e-82b9-c04ca89e6865}} 536, albeit with a much higher fractional amplitude (see Table-REF ) compared to {{formula:a0ce443d-225e-4040-963f-52675fa8c385}} 0.01 observed in the superburst. In the case of accretion-powered pulsations in intermittent AMXPs, the pulsation frequency is {{formula:2d2ce5ec-1172-42a0-85e3-bd781e7a4bd9}} 0.5–1 Hz above the burst oscillation asymptotic frequency with fractional amplitude of up to a few percent {{cite:f552ce0704bdd3706545c5aaf6b238d94ebf4343}}. Therefore, 582 Hz pulsations observed during the 2001 superburst are believed to be accretion-powered {{cite:db3fde3a8988d1a8d1446b9efe756ef86ca6b9b5}}. Given a higher value of fractional amplitude of the oscillations detected in our study and considering an uncertainty up to 0.4 Hz {{cite:4931563a887d87b2c187e98cf481a8321ef98a93}}, the higher value of burst oscillation frequency cannot be ruled out.
d
47fdb12d10d7a78db1fe9ca50c250186
Other aspects: Choosing the right metric to assess the performance of a segmentation model highly depends on the task for which the output of the segmentation model is used. Overlap ratio measures such as Jaccard and Dice are the most common metrics to evaluate the accuracy of a segmentation model {{cite:cf2fe2fd1a2f6f0b565b490fe66c102576995e4c}}. Nevertheless, if the segmentation output (i.e. the predicted lesion) is to be monitored for changes in size, the mean or absolute volume difference is the metric to be calculated. On the other hand, if the segmentation output is used for treatment planning (i.e. radiation therapy), the shape fidelity of the segmentation output to the ground truth label is the most important property to evaluate. In this case, the Hausdorff metric is useful, as it measures the boundary similarity between the segmentation output and the ground truth label. For all the experiments in this study, we used DSC as the evaluation metric.
d
62ceaea6a488be18b29517cefb173542
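Minimal reference implementations of the two metric families discussed above (binary masks and small point sets assumed; the Hausdorff version is the plain point-set definition, not a surface-distance library):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient (DSC) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def hausdorff(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, D) boundary point sets."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```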
Thus in each iteration {{formula:91a39535-7117-409a-889f-1159617c31d4}} , the two methods simply take a proximal point step and a proximal gradient step, respectively, on the static problem {{formula:372486cb-2980-42a5-806b-673385025e6b}} . The proximal point method in the extreme case {{formula:a271e30a-5a4d-4832-8d68-e9253b0b4fa4}} is called repeated minimization in {{cite:6d72f194980db8f204c2a754e6ab52821fdcee6a}}. The paper {{cite:6d72f194980db8f204c2a754e6ab52821fdcee6a}} showed that when {{formula:e645ea9e-b112-47fa-a191-5db6bd320eeb}} is the indicator function of a closed convex set and {{formula:3d86feef-8bc9-47e9-99f3-6501e83a0cc5}} , repeated minimization and the proximal gradient method converge linearly to {{formula:d9cb9c1e-2169-44d2-953b-87ea4cf0215e}} . In this section, we provide a different and complementary viewpoint based on stability to distributional shifts.
m
6648079796a161759164c3b78711ed1f
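A small Python sketch of the proximal gradient update on a static problem, with the nonsmooth term taken to be an l1 penalty so that the prox is soft-thresholding; the objective, step size, and data below are illustrative assumptions, not the setting analyzed above.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_step(x, grad_f, eta, lam):
    """One proximal gradient step for min_x f(x) + lam * ||x||_1."""
    return soft_threshold(x - eta * grad_f(x), eta * lam)

rng = np.random.default_rng(1)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
grad_f = lambda x: A.T @ (A @ x - b)   # f(x) = 0.5 * ||Ax - b||^2

x = np.zeros(5)
for _ in range(200):
    x = prox_grad_step(x, grad_f, eta=0.01, lam=0.1)
```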
Experiments on MS MARCO and Natural Question data sets show that our RankT5 models fine-tuned with specialized ranking losses can significantly outperform other T5 ranking models fine-tuned with classification losses and previously proposed T5 adaptations for ranking {{cite:03bb2e586faeb8978675b9de2035d1c020616580}}. We also discover that models fine-tuned with some ranking losses tend to have better zero-shot performance than models fine-tuned with classification losses.
i
8a8954288c2881257b76a9f41d3a593b
The impact on the performance on the test set is less severe, as can be observed in figure REF , but still on the order of 1 percentage point. Incidentally, the mean result for the reference labeling function improves slightly on the state of the art for this task as reported in {{cite:dd8f3671de2cb54257358c2488290f3cd84014f3}}. {{figure:0dacadc1-769a-43ed-93d8-2453a28deea4}}
r
3224c54e9adea704e4a02933c3d89044
The water waves problem has been widely studied for centuries; see for example {{cite:56a91bf9a5c0f98700d2f45d6ed30cba339f3970}}, {{cite:5017c0b6336802e06521ff2edd18fe6a9e3def72}}. This problem concerns the motion of an ideal fluid and describes the evolution of the free surface {{formula:73a47408-6d76-49e3-aa14-eaae94d05157}} as well as the velocity field {{formula:09b2f31f-88fb-4eea-8631-2f2a5023cdb5}} . Mathematically, it is described by Euler's equation with boundary conditions and initial conditions, and in our case, we also need some boundary conditions at contact points.
i
cdda81699d8438451bcb7dc05fa9ea3b
In the context of the model presented in this work, if we assume that at the time of the bounce {{formula:456106a5-21a2-4e38-956e-70f85dee802b}} the surviving black hole has a mass of {{formula:c5b195c3-362f-41c4-95f9-f21b87988ebe}} {{cite:02301c9b6f96b22d6d7c1785d681998a4bb82e30}}, {{cite:fbb3dcc8b1c43e95808729e81fdab100ebbc939b}} (limits on the {{formula:67351937-5b9b-4032-b289-f56fc0e0b399}} -distortion in the CMB due to the dissipation of fluctuations before decoupling exclude primordial black holes larger than {{formula:681d5e56-be4c-4343-b8f0-e1c48d9ab37e}} ), then one billion years after the bounce the expansion of the universe alone would increase the gravitational mass of the black hole (see Eq. (REF )) up to {{formula:2b473787-4277-4402-96f0-f497a0fb9585}} , reaching the values observed in the most distant quasars (we suppose that {{formula:880ad44d-9b12-4c1c-8176-773429463280}} in Eq. (REF ); this value is in accordance with the restriction {{formula:7938d94c-27e5-42d0-a139-8da37f6b0967}} {{cite:833285a6b9ba5a9e4036f2a1850d50cb2432e747}}).
d
b1af3daf589a9f9e9b36e4febefbb1a1
We focused our habitability analysis on orbital evolution, but many other factors might be important and could be included in future work. CBP rotation rates may reach a stationary solution analogous to tidal locking {{cite:f8c4fd07aee1cf84fe0c96440c78b6a128c857ee}}, {{cite:38e9d0f13e8432d3053fa81c4af5036f112e1b71}}, which could significantly affect climates {{cite:9b916e26b532521595f9b3167ce81749239d7486}}, {{cite:82a3d5b0851677154afc35132d76093f04998272}}, {{cite:f69d5b28b6156faa9760d1a11b114e89288983b3}}. The planets of M dwarf stars, such as those in Figs. REF –REF , may spend significant time interior to the HZ, where XUV photons can desiccate the planet {{cite:5c50949626021b1cf39f7103a2adfb34215e88db}}. Colliding winds could exacerbate atmospheric loss on planets orbiting binary stars {{cite:8787f90ec5c0f4f2571c479e4765b29512de5541}}, although {{cite:cf5b544efea25e1af9c2a1fd73587d22f37344b7}} showed that magnetic fields could shield some planets. As with single stars, the habitability of CBPs is complicated and depends on many factors, not all of which could be included in this study {{cite:90356e718192bc91589a64b99391a604272822d6}}.
d
a96677ddb8aafaf2a1ed2bf593c07893
Overcoming the challenge of scalability has received much attention {{cite:3762fa7ce0d7df86e300655ddd80354a9526b213}}, {{cite:4ee4eadcad7e9abc4997476d6b277a1fd1374df7}}. One class of approaches is to develop efficient algorithms based on first-order methods. For instance, a general conic solver based on the alternating direction method of multipliers (ADMM) was developed in {{cite:49f15d427a766c4b99ff81487f614c227e579103}}, and a sparse conic solver based on ADMM for SDPs with chordal sparsity was developed in {{cite:8ff390b4d019037f1ac42e3559b9e6669b7e83da}}, {{cite:3f4f373e6d1b0524be6a517615bafcc48cfa5ca8}}; see {{cite:3762fa7ce0d7df86e300655ddd80354a9526b213}} for a recent overview. While first-order methods considerably speed up the computation at each iteration, achieving solutions of high accuracy remains a central challenge and may require unacceptably many iterations. Therefore, first-order methods are mainly suitable for applications that only require solutions of moderate accuracy.
i
ccc93abd778c18d416669ed1cf1b8db2
Figure REF plots histograms of the number of whistler events versus solar wind speed in the {{cite:b461949e55e2fdc0be4f5d591c747ac458526cad}} STEREO database (left) and the number of PSP events color coded by encounter (right) versus solar wind speed. The center panel plots the PSP events versus solar wind speed and radial distance. The highly oblique whistlers observed at 1 AU by STEREO are predominantly seen with solar wind speeds of {{formula:376b9a8c-89ac-4f28-8bbb-bfc488e09bad}} 400 km/s, but are also observed with speeds up to {{formula:206adcc6-1ebd-444c-b554-fe7682767544}} 700 km/s. PSP events are associated with lower solar wind speeds ({{formula:0225a68b-be68-41c3-8120-1a45c08b4dc2}} 300 km/s). The bi-modal distribution is likely due to radial distance effects, differences in conditions during each encounter, and the small number of events, as indicated by the center panel. Encounter 4 events were all obtained inside {{formula:3d84f751-c496-415b-a056-853c0cbb68f8}} 50 solar radii and solar wind speeds were {{formula:d00bae11-d577-478c-b9b6-f4d19f0a95f2}} 200 km/s, whereas events during Encounters 2 and 3 were primarily outside {{formula:784d2484-6c89-467f-b408-acf2a4d289da}} 50 solar radii with solar wind speeds of {{formula:bc1d9a78-75f8-486a-a687-7c6c5642f008}} 350 km/s. The differences in wave association with solar wind speed between PSP events inside 0.3 AU and the STEREO events at 1 AU may just be due to the evolution of the solar wind. The {{cite:32aec786320a55fde4384cf5ac049e821b13675c}} observations, which cover the distances between 0.3 AU and 0.9 AU, were however associated with slow flow. Wave vector angles have been determined for only a small fraction of the PSP events; therefore, it is not yet possible to determine if there is a relationship between wave obliquity and solar wind speed at these radial distances. The parallel propagating waves and the oblique waves may represent two different modes, or different sources of free energy. However, the distinction may also be due to differences in instrumentation. Future studies utilizing the Parker Solar Probe data set may resolve the relationship between the parallel and highly oblique waves. {{figure:4b0d2066-742f-4d86-9b89-ddadd33a5fae}}
d
d4a46db2489bd4e41c6c412f5374d591
This paper investigates the construction of {{formula:95f40882-280e-48f9-9d29-40954b7ff9d0}} using decision trees. The use of decision trees to compute {{formula:2427a186-808b-4491-9e71-4cc77a924197}} was one of the suggestions included in the seminal paper of {{cite:9fbf54e97898d96628f818850504c8186a561c61}}. To the best of our knowledge, however, the use of decision trees for algorithm selection was mostly ignored in the literature. One recent exception is the work of {{cite:48aff72398acb54b51a156a5fa7c76cd603933e7}}, who evaluated many heuristics for the traveling thief problem and built a decision tree for algorithm recommendation. {{cite:48aff72398acb54b51a156a5fa7c76cd603933e7}} did not report which algorithm was used to build this tree, but did note that the MatLab® Statistics Toolbox was used to produce an initial tree that was subsequently pruned to produce a compact tree. This is an important consideration: even though deep decision trees can achieve 100% accuracy on the training dataset, they usually overfit, achieving low accuracy when predicting the class of new instances. The production of compact and accurate decision trees is an NP-Hard problem {{cite:7e97adb01a327c2299681ce312d36464e14ae744}}. Thus, many greedy heuristics have been proposed, such as ID3 {{cite:82284632d7517bfaeb57f168601a0aaf05abeb41}}, C4.5 {{cite:2895225c09a7aba48bd74e231348e0796fda2a47}} and CART {{cite:5795fd79b1ed7ac30ca5f7e7260c8a429ba5bd41}}. These heuristics analyze each split in isolation and proceed recursively. Recently, {{cite:d9e12ede08ee6aa12804d2657ce21111e32dd7c4}} proposed Integer Programming for producing optimal decision trees for classification, so that the entire decision tree is evaluated to reach global optimality. Their results showed that much better classification trees were produced for an extensive test set. This result was somewhat unexpected, since there is a popular belief that optimal decision trees could overfit at the expense of generalization. Trees are not only the organizational basis of many machine learning methods, but also carry important structural information {{cite:6a85cd36a612c62a1d67ecfeed6347c5f48ea06e}}. The main advantage of methods that produce a tree as a result is the interpretability of the produced model, an important feature in some applications such as healthcare.
i
f96d07eb49f39bbffc431086173c8aae
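As a hedged illustration of the compact-tree idea, here is scikit-learn's CART with cost-complexity pruning as a stand-in for the pruning procedure discussed above; the instance features and algorithm labels below are made up.

```python
from sklearn.tree import DecisionTreeClassifier

# ccp_alpha > 0 prunes the fitted tree, trading training accuracy
# for a smaller, more generalizable recommendation tree.
X = [[0.1, 3.0], [0.4, 1.0], [0.9, 2.5], [0.3, 0.5]]   # instance features
y = ["heuristic_A", "heuristic_B", "heuristic_A", "heuristic_B"]

tree = DecisionTreeClassifier(ccp_alpha=0.01).fit(X, y)
print(tree.predict([[0.2, 2.0]]))
```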
We believe that the principles revealed in this work exhaust the many ways one can engineer the phason spaces. They show that, in principle, there is no limit on how high in the virtual dimensions one can go. However, in practice, we expect the actual laboratory designs to become increasingly challenging and the quality of the topological gaps to wear off as higher virtual dimensions are conquered. Of course, next in line is the 6D IQHE, which can be accessed with linear, planar or 3-dimensional meta-material structures. The latter will require a straightforward generalization of the algorithms used in the present work. Let us recall that the bulk-boundary correspondence principle was worked out in arbitrary dimension in {{cite:c21b965c39e8c66fc9ce207f62567b75d15cce4c}}, where one can find explicitly solved models as well as an explanation of quantized physical responses.
d
019b7ac76bb89bc06184665a3c71830f
Constraint satisfaction problems (CSPs) are cornerstones of both classical and quantum complexity theory. Indeed, CSPs such as 3-SAT and MAX-2-SAT are complete for NP {{cite:5838bd499d639f311d3533ad640d4e086efa699a}}, and their analogues Quantum 3-SAT (3-QSAT) and the 2-local Hamiltonian problem are QMA{{formula:5ac4c7db-a9b6-4a2e-86c2-926aa6240d6b}} - and QMA-complete, respectively {{cite:5889e5b0deb4b37756159892fff2d459b2152765}}, {{cite:e827b138f104f86ed3d27f3aad5e8a7fcc5dadf3}}, {{cite:b14b37379f067d62b1ae351fc0a9d516e78a3d3b}}, {{cite:ddba1fc8a1ec5aad7683e169082281d0c87dec19}}. (QMA is Quantum Merlin-Arthur, a quantum generalization of Merlin-Arthur, and QMA{{formula:dbe829f6-8ea0-4013-bb3c-8d399761463b}} is QMA with perfect completeness.) As such CSPs are intractable in the worst case, approaches such as approximation algorithms, heuristics, and exact algorithms are typically employed. In this paper, we focus on the latter technique, and ask: Which special cases of {{formula:b198c188-7e74-4cd7-a45d-7bdd0b241dd9}} -QSAT can be solved efficiently on a classical computer?
i
d8e88631868a1f30e291d0195ef37c3d
We generate a diverse collection of NCA level-generators through CMA-ME {{cite:77212dc84b2bda150ffb0d016af0e76d978598ca}}, a quality-diversity algorithm combining the adaptation mechanisms of CMA-ES {{cite:d3942c6475cc0820dbdfdc636cae244aa1076c8d}}, {{cite:116522e0587702bc6a189575d298a8f596663a8d}} with the archiving mechanisms of MAP-Elites {{cite:1bf1c7341f2a7ec0a83670df49f9960895b4ed07}}. We choose CMA-ME as it specializes in continuous domains and has been shown to be significantly more sample efficient than other QD algorithms in this setting. We train each NCA via the Pyribs library {{cite:308de83e578c821f24e52b8ccf51039103542292}}, a QD optimization library maintained by the authors of CMA-ME.
m
603c90644909b061787601fe0907d7b9
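A hedged sketch of a CMA-ME ask/tell loop in the style of the pyribs API (class names and signatures assumed from a recent pyribs release; the NCA evaluation below is a placeholder, not the paper's level-generation pipeline):

```python
import numpy as np
from ribs.archives import GridArchive
from ribs.emitters import EvolutionStrategyEmitter
from ribs.schedulers import Scheduler

archive = GridArchive(solution_dim=10, dims=[20, 20],
                      ranges=[(0.0, 1.0), (0.0, 1.0)])
emitters = [EvolutionStrategyEmitter(archive, x0=np.zeros(10), sigma0=0.1)]
scheduler = Scheduler(archive, emitters)

def evaluate(theta):        # placeholder: decode NCA weights, score levels
    return -float(np.sum(theta ** 2)), theta[:2].clip(0.0, 1.0)

for _ in range(100):
    solutions = scheduler.ask()
    objs, measures = zip(*(evaluate(s) for s in solutions))
    scheduler.tell(np.array(objs), np.array(measures))
```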
The hydrodynamic stability of an accretion disc is related to the choice of the combination of parameters of the flow {{cite:193d286eaafdfa7355cd84e296337d9b5f678c3a}}, {{cite:c217fa37a09d9e68fca3dc6012aef593b8b73a62}}, {{cite:23543734fdf901278070de3b0da40e79eb43afa1}}, {{cite:c22fd01bf7cd54db701c481a761c0cd196bb30ad}}, {{cite:7ad35a3a71ffa57b8e7eb8611994a68dc6b727e4}}. When the viscosity of the flow is low, the rate of dissipation of the angular momentum will also be low. As a consequence, the residence time of a fluid particle in an orbit will be relatively large, which increases the chance of ion-electron interaction and the efficiency of radiative cooling. Such a low-viscous, low-advective phase is the stable low-energy steady state of the disc.
r
30a9e9df657959daee6ca69615113892
where {{formula:b51b2c61-85ce-4b47-b3d0-7d08408b5353}} , and the {{formula:2d6ee8e3-0f8b-46ca-9f6a-80fac8e3a421}} 's are obtained by sending multiple augmented versions of {{formula:41885943-e848-4a02-b8f1-b821b2d59a4e}} to {{formula:96bd03b2-4cf7-4a07-89a9-83d6e7d5a136}} . This loss penalizes over-confident erroneous predictions while encouraging high confidence for correct predictions. Although Eq. REF has a similar form to cross-entropy, it essentially optimizes over {{formula:daa1e7ef-49d8-43b7-a606-704d2d1bb12e}} via {{formula:f6257faa-7b79-4696-b8aa-b6a69d22acc1}} . Since at each location {{formula:241ecbec-a00f-4dfc-a417-233e1fabc2d9}} the {{formula:d61853c4-791e-434a-bd68-802a1095c5c4}} 's remain constant across all classes {{formula:891f75a5-c647-4fa7-bd7c-399ede05e706}} , this loss does not affect the categorical segmentation result {{cite:92390f13dcf7379dc97160c1d0a48ad0c5054073}}, {{cite:fa75f48fa5645e8b74151365ef0c83aae1a6cc21}}.
m
2a175dc727b60cd3848a2e7328cef479
The magnitude of the peak-dip structure observed in {{formula:8b752ab5-445a-4c17-af40-c7141ea2a4d0}} is a direct consequence of the mean-field approximation employed in our calculations. We note that the inclusion of mesonic fluctuations weakens the critical behavior. This was shown for other models exhibiting {{formula:53e43f2d-7588-42cf-8b47-ec4b465f0605}} chiral criticality, e.g., the quark-meson (QM) {{cite:f27318c6412ef9ab5570bea560b46300302cc115}} and Polyakov loop-extended quark-meson (PQM) {{cite:db3de3c44e4062ae0394c4d395b3d15f22c3b125}}, {{cite:e28bb8d40c4eda8c601e4a9b752660a88208901b}}, {{cite:4ff6354f605bbf84915a6d8ece0b4f8fb949b2fe}}, {{cite:75b758c442d0684c672deb44f10dd92a1519a2fa}} models within the FRG approach. In these models, the {{formula:b320d912-88cd-4d2d-8435-a1a49ad03562}} ratio decreases monotonically with temperature and practically no peak structure is exhibited near {{formula:ea6c677b-6847-4ca0-ad4f-4a044b433d9c}} . In contrast, the general peak-dip structure of {{formula:842d7647-d598-4be5-9629-6ec3a357e3d6}} obtained in the mean-field approximation prevails when mesonic fluctuations are included. This highly non-monotonic behavior is also seen in lattice QCD simulations. In particular, the peak in {{formula:b6b91e1c-5545-4119-a3ad-10956bac1f3d}} is seen, despite huge systematic errors at low temperatures {{cite:2eebe5b19a36c3bcd2e625b7db2a6049963a0ad4}}, {{cite:fd7ce15e21877772a0bbdb1a826ac07e47c47858}}.
r
26197ac65ff15bee4b6da4f7eec5899c
However, a smaller number of communication rounds does not necessarily induce a lower communication cost. Communication overhead is also affected by the number of transmitted bits between the clients and the PS, which depends on the size of the transmitted vector/matrix per communication round. Hence, a direct application of Newton's method does not yield an efficient distributed implementation, as it requires repeated communication of the local Hessian matrices {{formula:3cbb1376-ce9a-4bb8-a28b-1a1005d8b2e1}} to the server. This is prohibitive, constituting a massive number of transmitted bits and requiring high communication energy and bandwidth. Another important concern when implementing Newton's method is privacy, as sharing both the gradient and the Hessian at each iteration makes it vulnerable to inversion attacks {{cite:7e8c66d151dc04f231cfddb61e0091931b38582e}}, {{cite:50b1be2afa729c0a94904933e489de8bee442e5f}}. For instance, in a linear regression task, the Hessian matrix is nothing but {{formula:56848a66-4727-42c1-87f1-4b79709074bf}} , where {{formula:aff22caf-b6f6-4586-9e7b-80b54b39d52e}} is the data matrix, which results in privacy issues for each client.
m
99e860319083cd149c0496da9c108d1e
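A back-of-the-envelope illustration of the two concerns above (the dimension below is arbitrary): per round, a d x d local Hessian ships d times more entries than a d-dimensional gradient, and for linear regression the shared Hessian is exactly the data Gram matrix.

```python
import numpy as np

d = 1_000
print((d * d) / d)            # Hessian ships 1000x more entries per round

rng = np.random.default_rng(0)
X_k = rng.normal(size=(50, d))
H_k = X_k.T @ X_k             # local Hessian = X^T X: exposes the raw data
```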
Spikes are estimated at timepoints for which {{formula:b01856ca-ccd4-4fe0-9593-f9396959049e}} . To conduct inference on these estimated spikes, we could leverage the framework in Section REF , along with recent developments in selective inference for the lasso and related problems {{cite:ba24e7737df7e8560cb86495ffe2c33278985bcb}}, {{cite:99e28409ddfb52e68e6ae8b116f3f44554524d93}}.
m
ebbd0d85706f3f74eb38218e0e4bf3c7
(1) Synthetic text generation: the synthetic text is created using the delexicalisation method proposed by {{cite:5ce87aadb9c2e1b06ab6692a96b44b07ba4dd6c0}}, but modifying the delexicalisation algorithm so that it can be used for agglutinative and resource-poor languages. (2) Synthetic audio generation: the synthetic text obtained in step 1 is used as input to generate synthetic audio using a TTS model {{cite:c401c27290b69d2c8a7b58797148db56c6dd3693}}. (3) Finally, both synthetic text and synthetic audio are aligned and used together with the original corpus to train an ASR module with wav2letter++ {{cite:95d8960be792d315d89dcf98c653982020edad4a}}.
m
81725898f9bf91174f95f153e998cec8
In the following we will describe and analyze an approach to reliably estimate these kinds of uncertainties for NNs by modifying the architecture and introducing an appropriate loss function. The structure of this paper is as follows: First, we will briefly discuss aleatoric and epistemic uncertainties using a toy example. We then give an overview of the proposed solution of Amini et al. {{cite:cd2b273717e17efa80426e2030928309e8d06447}}. In Sec.  we describe several issues with the prior work, and follow with a possible solution in Sec. . Finally, in Sec.  we summarize our multivariate generalization approach extending the prior work, which we use throughout the text.
i
7ba38def43c0900abd715513d62c6b0c
Let {{formula:0cbe6df8-aa75-42c9-8f76-2a3a6fbe391b}} and note that {{formula:a5af8b81-aaa3-4f21-8947-b52a8019b8d4}} . By the symmetry of {{formula:1b80aa95-3bf5-45ed-99fe-fe3a23d53054}} and the contraction principle for Bernoulli processes (see, for example, chapter 4 in {{cite:3dc4c7931b58c4eeb5b4e9b205ef5e33c2c43859}}), with probability at least {{formula:e41823ed-5c21-4e57-a661-956b9034ab06}} , {{formula:f7eae8a3-9579-4a4b-a0e3-1286bede3657}}
m
24d1fa5ceae8db98ec9e91eae0cc68b2
Among others, brane-world theory has been put forward as a prospective framework for DM candidates {{cite:802bed8f4ce36da1e6720ff0bbb2380baa7a107f}}. In this theory, the characteristics of the suggested massive brane fluctuations (branons) match the ones of weakly interacting massive particles (WIMPs), which are a well-motivated and widely considered class of cold DM candidates {{cite:260a8f78cd89b1661dbe8f01e58f6a793659e304}}. WIMPs presenting interaction cross-sections typical of the weak scale would naturally provide the required DM relic density (the so-called WIMP miracle, see e.g. {{cite:33a69159d110a55cdabedd237451391a6c3676a6}}).
i
c9e79fe9a2d5c1c725e95e8e6e1e439f
The Fokas-Lenells equation (FL equation for short) is a completely integrable nonlinear partial differential equation which has been derived as an integrable generalization of the nonlinear Schrödinger equation (NLS equation) using bi-Hamiltonian methods {{cite:3173f5c1aecd511b8d3099057d386c5d2e73e7d7}}. In the context of nonlinear optics, the FL equation models the propagation of nonlinear light pulses in monomode optical fibers when certain higher-order nonlinear effects are taken into account {{cite:9d3923e8b4fbdd16cc293991da1f968b6f4eaa58}}. The FL equation is related to the NLS equation in the same way as the Camassa-Holm equation is associated with the KdV equation. The soliton solutions of the FL equation have been constructed via the Riemann-Hilbert method in {{cite:3e9d030438bfc397e4054ec83bc0c78569441d7f}}, and the initial-boundary value problem for the FL equation on the half-line was studied in {{cite:b8beb0f46dad8995d1bf3947ace182e0be806651}}. A simple N-bright-soliton solution was given by Lenells {{cite:9cd63ae8c3df49104014116f79ef6c12c5e4ddf0}}, and the N-dark-soliton solution was obtained by means of a Bäcklund transformation {{cite:12eeb82be4f69b16fbfe09e1fe9601952c95d869}}. Matsuno obtained the bright and dark soliton solutions of the FL equation in {{cite:ca1de4b271270cdc13ed40801a7446e0d28c5538}} and {{cite:1f3bc1fd9dad30b18a5339ff15cdf0b542b3d43a}} by a direct method.
i
4ef5d622bd120c64ee335c7aa8145f14
We evaluated our framework on the BraTS'18 multimodal brain tumor database {{cite:f27e696f141a449899e7c1ba4755b2b7e166544d}}, which contains a total of 285 subjects with four MRI modalities: T1-weighted, T1ce, T2-weighted, and FLAIR MRI, each with a size of 240{{formula:3f21374a-a161-431c-9aed-d4a8ca7f57fc}} 240{{formula:3d7f1c2e-a571-4fcd-980f-fe62bc14f86c}} 155. The intensity of slices was linearly scaled to [{{formula:07c38929-df51-40f5-89ee-f544a1dfa0eb}} ] as in {{cite:be30271269c40a9676ea86c8954b55a57c62107c}}, {{cite:941c802742cbd05d73e380e0b4bb3356aee7b622}}, and the slices were then processed by 2D networks. Axial slices with less than 2,000 pixels in the brain area were filtered out as in {{cite:be30271269c40a9676ea86c8954b55a57c62107c}}.
r
7b6ea427e522c9cb1aa70d95b903aeea