| text (string, length 54–548k) | label (string, 4 classes) | id_ (string, length 32) |
|---|---|---|
As mentioned in Sec. , the aim of this work is to cover
a wider range in the parameter space than was done in
{{cite:4a83782bfbf13a960e8d2b10a073a107678aa459}}. We group our models according to the initial
shell magnetization, {{formula:619ae137-ef5e-48c0-8bd5-d938d20c9a8d}} .
We denote by letters S, M and W the following
families of models:
| r | e9329bfc325675e6bd19c307e803fec2 |
Limitations and future work. Notable limitations are discussed as follows. DKD could not outperform state-of-the-art feature-based methods (e.g., ReviewKD {{cite:fb1aa00dad27d60f36e63e5dc1632e8980f77365}}) on object detection tasks because logits-based methods cannot transfer knowledge about localization. Besides, we have provided intuitive guidance on how to adjust {{formula:0b966ca7-320c-4eab-a959-908049b4dd95}} in our supplement. However, the strict correlation between the distillation performance and {{formula:f7c93597-1988-492a-9458-79a999638570}} has not been fully investigated, which will be a direction of our future research.
| d | ed05c30f84c3e4a41685384577bd8f8b |
The FSGM {{cite:250022a57b51fd001e12298c3e49e8e3d1fc6a24}} is different from the proposed adversarial weight perturbation in (REF ) in two aspects.
First, FSGM is applied to create an imperceivable perturbation on the samples instead of the weights.
Second, FSGM only uses the sign of the gradient (times a scalar {{formula:d0cd9356-c489-40d0-be23-38aa2e7e5ebb}} ) to update the weights.
We implemented FSGM on the weights to adversarially change the scores of 1k random samples in the test set of COMPAS and HSLS datasets, and report Rashomon Capacity in Fig. REF (Right).
Note that even with a small scalar {{formula:2d39e345-f3f8-40a8-9029-12a8cd3d575c}}, the update to the weights often changes the test loss significantly, and most classifiers updated with the FSGM would not belong to the Rashomon set defined by the Rashomon parameter.
Therefore, Rashomon Capacity is almost 0, as observed in Fig. REF (Right).
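A minimal sketch of the weight-space FSGM step described above, using a toy logistic-regression model; the loss, the data, and the step size are illustrative assumptions, not the setup of the experiments reported here.

```python
import numpy as np

def logistic_loss_grad(w, X, y):
    """Gradient of the mean logistic loss with respect to the weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    return X.T @ (p - y) / len(y)         # mean gradient over the sample set

def fsgm_weight_perturbation(w, X, y, eps=0.01):
    """Adversarially perturb the *weights* (not the inputs):
    w' = w + eps * sign(grad_w L), a single ascent step on the loss."""
    g = logistic_loss_grad(w, X, y)
    return w + eps * np.sign(g)

# toy usage: even a small eps can move the model noticeably in loss
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(float)
w = np.zeros(5)
w_adv = fsgm_weight_perturbation(w, X, y, eps=0.05)
```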
{{figure:d0b802db-985a-4191-bcb8-28f263a75e2c}} | m | 46c9de6d090c6eb75f8d960f9474a436 |
The primary challenge offered by the hybrid inflation paradigm towards building a microscopic model is the following: {{formula:00aa37c5-4321-4fd7-a002-e798476ca36b}} needs to be a light real scalar, but with sufficiently strong non-derivative coupling with the heavy {{formula:7864ef0e-f7be-44a0-be91-46c0b59d8d32}} field as required for the waterfall effect. Even if {{formula:a3bedb01-0172-4625-ac65-1c0b2face1c6}} is modeled as a pseudo-Nambu Goldstone boson (pNGB) of a global symmetry, its coupling with {{formula:87196485-45dc-4d32-bac5-0d0a0762f813}} explicitly breaks the symmetry and induces quadratic sensitivity in the effective inflationary potential to the ultra-violet (UV) physics. Hence, we need some extra ingredient to achieve naturalness in hybrid inflation. This issue is similar to the case of the light Higgs boson as required in the Standard Model (SM) in the presence of its Yukawa and gauge couplings. This, hence, motivates one to apply different particle physics mechanisms explored in the literature to address the hierarchy problem of the SM Higgs boson, to the case of hybrid inflation mentioned above.
There are various supersymmetric constructions of hybrid inflation, see e.g. {{cite:9d2aae50b68c9ef0356257305cc41f49bf4cace7}}, {{cite:758ac6f4f543f8e2cb1dbf2dcea2fb9905ed3760}}, {{cite:f2c6623413de0b7b8abbb874eddf53f0ba81df75}}, {{cite:7ec19245c9ab92225836453489ced3cc2ccb0179}}, {{cite:fbae3ca2faec116a6e0d620ef5e9b56a81a15a20}}.
Little Inflaton {{cite:b363d54ae73aa886335c96f1c74b26c68dcd60c7}}, {{cite:6fef3b7eaad030a43dc25d1a358a87d863c8cd76}} is also one such proposal addressing the issue of naturalness in hybrid inflation based on the Little Higgs mechanism {{cite:81b92983741520905aa3870c8e7476cd7810f904}}. This makes use of “collective symmetry breaking” to protect the inflaton potential from the radiative contributions sourced by its coupling with the waterfall field.
See also {{cite:1ec2ea2dc2c356712816a8eefdf93bf507a1ab14}}, {{cite:923624f63123152de4a9b183496c54511048d5bf}}, {{cite:1a551196e8bc337482868ece7019cda7000857d2}}, {{cite:84097f5a1da05127d105b0228b030a8dafcf2069}} for more proposals aimed at building such a radiatively stable, EFT-controlled and viable model for hybrid inflation.
| i | ffab9bcf708408fbb7dc3e4576e956d9 |
We consider an AL setup with the pool-based acquisition as commonly studied in the literature {{cite:c457dd49a1bcac6cdceea7a3440f23b57c6ab512}}, {{cite:1ccd58a947b122c450093460997f1485d98fa338}}. Let {{formula:fc087d9a-6936-404d-9ea3-50978c9db98e}} be {{formula:d69a6e3b-325e-4529-9eb7-d8e397a06e7c}} -dimensional speech data samples, {{formula:4e6baf73-29a4-45a6-b96a-9babb8d10b3a}} be labels of a non-semantic speech classification task, and {{formula:e28a22ef-6d3d-43af-8b62-b636ca80bca5}} be a pre-trained encoder, whose embedding dimension {{formula:35c3c8cb-0618-412c-b22d-e878a5eb375e}} . Given some labeled instances {{formula:4f39da2c-756f-47f9-a2da-c385f8e9d9bf}} and a large amount of unlabeled data {{formula:14ec9a4e-5a28-4bc9-99c6-ec96c1d50d48}} with {{formula:140ebc73-86ba-4c27-9aa2-83dabd65a0a8}} , the aim is to incrementally label samples in {{formula:c9a3b05e-a622-41cd-9c01-f440db0c1722}} to minimize the cost of annotation, better understand the annotation process, and ultimately improve model generalization with few labels.
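A schematic pool-based acquisition loop matching the setup above; the encoder, the uncertainty criterion (predictive entropy), the sklearn-style classifier, and the oracle are placeholders standing in for the pre-trained encoder, the acquisition function, and human annotation.

```python
import numpy as np

def entropy(probs):
    """Predictive entropy per sample, a common acquisition score."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def pool_based_al(embed, train_clf, oracle, X_lab, y_lab, X_pool, rounds=5, batch=16):
    """Iteratively move the most uncertain pool samples into the labeled set."""
    for _ in range(rounds):
        clf = train_clf(embed(X_lab), y_lab)                  # fit on current labels
        scores = entropy(clf.predict_proba(embed(X_pool)))    # score the unlabeled pool
        pick = np.argsort(-scores)[:batch]                    # most uncertain samples
        X_lab = np.concatenate([X_lab, X_pool[pick]])
        y_lab = np.concatenate([y_lab, oracle(X_pool[pick])]) # query annotations
        X_pool = np.delete(X_pool, pick, axis=0)
    return X_lab, y_lab
```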
| m | 7a7c4997e99d94f2f7f3793969f9a0aa |
We show per-scene metrics in Tables REF and REF . Additionally, we highly encourage readers to visit our project webpage, which contains image comparisons for every scene, video comparisons for a select few scenes, and brief qualitative comparisons to NeX {{cite:54279364f8db100d901cec5a406f993216f6e59e}} on scenes from its Shiny dataset.
| r | c11917be27c21ecc1619c524521e1b9e |
It is universally acknowledged that magnetic turbulence properties are extremely difficult to measure in astrophysical environments. For instance, the effect of traditional Faraday rotation of polarized synchrotron emission is a significant impediment for studying the emission of atomic hydrogen at high redshifts (Cho, Lazarian & Timbie {{cite:880a6967e36dde1c6e06123910c8d534d5c54984}}). In general, in-situ observational information is obtained from solar wind turbulence, providing an important reference for understanding the turbulence within the ISM and galaxies. In fact, the observations available for studying MHD turbulence are inevitably line-of-sight integrated. Therefore, the purpose of developing magnetic field measurement techniques is to extract turbulence information from such integrated observations. In view of this, we explore in this paper a new way of studying magnetic fields using the synchrotron gradients technique (SGT), first proposed in Lazarian et al. ({{cite:ac3cbc174f7d166427e1c2e326e8679a500144fa}}) and Lazarian & Yuen ({{cite:8725d9db28cde8f2a93f305b9e94ef4bdf900566}}) and further explored and elaborated in Zhang et al. ({{cite:f4f68b0d6656a39a804f191a585bbeca6b9dd37e}}, {{cite:f30f9c0e5e5be4fbd4da5ec5fda8115efdb73baa}}).
| i | ca87231d985bf637518dc5b23e4ead71 |
Another intuitive motivation for using {{formula:2121912e-6c84-4a5b-b2e8-2ea185acad46}} as the test function is that the minimax loss Eq. (REF ) for optimizing {{formula:c0d7fdef-da32-437c-a8e0-ec555eafc218}} is based on a saddle-point optimization, and {{formula:0beaee5a-bad3-4b7f-b643-a44d3bdaac56}} is the dual variable for the MIW {{formula:69c341b9-67b7-4db9-9dd9-0396cbe5a8cb}} .
From the previous discussion, MIW and the action-value function have some primal-dual relationship, and {{formula:811c9455-c738-4948-b80f-73bb00d79d29}} is the optimal “dual” variable for the off-policy evaluation problem w.r.t. the MIW {{formula:52858e6a-735e-4261-99f5-b6650cb876e5}} .
Since the dual variable {{formula:afc84efc-463a-4257-95ea-4477c7fce851}} in Eq. (REF ) is indeed some action-value function {{cite:f046090670e3096d0cb624fb89f8a009db1d5c94}}, we may set {{formula:258edac4-02d6-4a9f-b329-e016e2089fda}} to be this optimal “dual” of {{formula:f6eac509-1e8a-4f4b-a90d-66f2500ae82e}} , which will lead to the choice of {{formula:1de8ca55-9b87-42fd-becb-48e261ed12dc}} as the test function.
| d | 65a941b8a072251bf373caeb40f6fada |
Current VC systems embrace the technological advancements from statistical modeling to deep learning, which have brought a major shift in how the pipeline is developed {{cite:8606013297c6c7f3a235b90c7fd5a24e5a59cd5f}}. For example, conventional VC approaches with parallel training data utilize a conversion module to map source acoustic features to target acoustic features; the source-target pairs have to be aligned before the mapping {{cite:0b171f0cdfa3b08fadb9711818d102538df38eed}}. With the advent of sequence-to-sequence models, which remove the alignment prerequisite, better VC performance has been reported {{cite:95bce6a69fc5f98b706f21a7df53fd8b0d42ec20}}. For VC with non-parallel data, direct feature mapping is difficult. Instead, studies have started to explicitly learn speaking-style and content representations and to train a neural network as a decoder to reconstruct the acoustic features, under the assumption that the decoder also generalizes well when the content and speaker style are swapped during conversion. Among these approaches, phonetic posteriorgrams (PPGs) and pre-trained speaker embeddings are widely used as the content and speaking-style representations {{cite:cd809e9d824466e176c82238aa80ed159063f498}}, {{cite:70ed858cddd518b701941099c41c82a7993e581b}}, {{cite:8c233f2ae43c6513b07adf07ba76d80531140a3a}}, {{cite:00208b65a2f02bbbc062b0d5eccccd35cfd6bb4c}}. However, developing such systems usually requires a large amount of external data with rich transcriptions and speaker labels. The relatively small-footprint AUTOVC and AdaIN-VC employ encoder-decoder frameworks for zero-shot VC {{cite:ca3d357a87cd578c4a182304c4592425f66f584e}}, {{cite:1d22b696d4a2f53a5c4ee82afc25c0938a83cec0}}. The encoder decomposes the speaking style and the content information into latent embeddings, and the decoder generates a voice sample by combining the disentangled information. Nevertheless, these models require supervision such as positive pairs of utterances (i.e., two utterances from the same speaker), and the systems still have to rely on pre-trained speaker models. Progress has also been made with generative adversarial network (GAN) based VC systems {{cite:d213dc60e82a4cf818883221708fdadca85f128b}}, {{cite:a9af8b8a2c9400954624c20f2cf37181091a6d8e}}, {{cite:c05dfd982fdb93e22aec4dae06d94e21e408742a}}. This category of methods usually assumes that the speakers of the source-target VC pair are known in advance, which limits the application of such models in the real world. At the same time, a number of regularization terms have to be applied during training, which casts doubt on the generalization of such systems to zero-shot non-parallel VC scenarios.
| i | 0680d7242bd049f63f65428cd485e83f |
Fortunately, there are a few robust indirect methods that make it possible to constrain the {{formula:145da2d3-8319-4edf-85b2-a9dc7649f48d}} beyond the range of applicability of the direct dynamical ones. For example, the tight correlation between {{formula:3f463293-e0c1-464d-9891-aa2b33336fc9}} and the stellar velocity dispersion in the bulge {{formula:a0a39b1d-8d42-4944-8c67-9861ddfc4bd5}} , observed in nearby nearly quiescent galaxies (e.g., {{cite:0ddc500f0e0f30615d4903b7587903978e4cc4a2}}), can be extrapolated to constrain the {{formula:52ee361b-ec25-4c9f-a1b3-30554af05b31}} in many distant and more active galaxies. Similarly, the empirical relationship between the BLR radius and optical luminosity makes it possible to determine the mass of numerous type 1 AGN with only one spectral measurement without the need of long monitoring campaigns (e.g., {{cite:fbf2f3917f8c106932fd027db8dd8ffce21a42ae}}).
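As an illustration of how such a scaling relation is used, the sketch below converts a measured bulge velocity dispersion into a black-hole mass estimate; the normalization and slope are placeholder values of the order reported in the literature, not the coefficients adopted in the text.

```python
import numpy as np

def mbh_from_sigma(sigma_kms, log_m0=8.3, slope=4.5):
    """Black-hole mass from an M-sigma style relation:
    log10(M_BH / M_sun) = log_m0 + slope * log10(sigma / 200 km/s).
    log_m0 and slope are illustrative, order-of-magnitude values."""
    return 10.0 ** (log_m0 + slope * np.log10(sigma_kms / 200.0))

print(f"sigma = 200 km/s  ->  M_BH ~ {mbh_from_sigma(200):.1e} M_sun")
print(f"sigma = 300 km/s  ->  M_BH ~ {mbh_from_sigma(300):.1e} M_sun")
```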
| d | 3b68a56e5213d66241377e24bc8ab5d7 |
The CAMELYON16 challenge {{cite:d84d1a2364237c6fdd102d1ef0b29759058d5362}} is the best demonstration of using deep learning for automatic tissue analysis, outperforming pathologists in the detection of tumors within whole slide images (WSIs). The objective of this challenge was to automatically detect metastases in haematoxylin and eosin (H&E) stained WSIs of lymph node sections. Cruz-Roa et al. {{cite:5786579ab7eb705e66ff240213b7084281caae9d}} presented a deep learning architecture for automated basal carcinoma detection. This method first learns an image representation via an autoencoder, and a CNN is then applied to this representation to capture both translation-invariant features and a compact image representation. Spanhol et al. {{cite:10475744d225e8bf8f843c467349f0518ca47c05}} applied a simple CNN to classify the BreaKHis database {{cite:1fa24ed3f800b87a2789c30d4a4db47d718d9084}}, which consists of microscopic images of benign and malignant breast tumor biopsies. Small patches were extracted at different magnification levels to train the network, and during inference the final output was produced by combining the predictions of the small patches.
| i | 6e0ae67472af284bf55d77bd496153f1 |
The intrinsic sparsity of local updates can be leveraged to relieve the bandwidth limitation and improve the learning efficiency of FEEL. This is motivated by the observation that the number of significant elements in a model update is extremely small. Specifically, {{cite:bded48787e98e242f8ddd91fc6d41a698d77bdbf}} proposed to sparsify and compress local updates before transmission. The desired aggregated update at PS is then reconstructed from the noisy received signal via compressed sensing. In {{cite:026321a1397b83052071336cbab88b906ff5310e}}, the scheme of {{cite:bded48787e98e242f8ddd91fc6d41a698d77bdbf}} is extended to a fading channel, where a truncated channel inversion strategy is employed to confront fading. The existing works {{cite:bded48787e98e242f8ddd91fc6d41a698d77bdbf}} and {{cite:026321a1397b83052071336cbab88b906ff5310e}}, however, have a common limitation, i.e., they use the sparsity structure of model updates within a single communication round but ignore the more obscure structure of the updates between rounds. It is known in the artificial intelligence community that the significant model parameters are highly correlated throughout the training process; see, e.g., the work on model pruning {{cite:50e5c63b854c138a851c836692b628960a1840f1}}. This inspires us to explore the intrinsic temporal correlation of model updates as a new dimension to enhance the FEEL performance.
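A minimal sketch of the per-round update sparsification discussed above: keep only the top-k entries of a local model update by magnitude before transmission. The threshold choice, quantization, and error accumulation used in the cited schemes are omitted; this only illustrates the sparsity structure being exploited.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Zero out all but the k largest-magnitude entries of a model update."""
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest entries
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

# toy usage: most of the update energy sits in a few coordinates
rng = np.random.default_rng(0)
g = rng.laplace(scale=0.01, size=10_000)           # heavy-tailed "model update"
g_sparse = top_k_sparsify(g, k=100)
print("kept energy fraction:", np.sum(g_sparse**2) / np.sum(g**2))
```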
| i | 4515fcb963bb8bd4a7f0320ac9848a6a |
Our model ignores the convective cellular flow in the metabolite diffusion equation REF . For this approximation to be valid, one requires that {{formula:f9c503a3-d17b-4202-bfac-bad0847c0cc8}} be larger than the amplitude {{formula:70bcc75c-31c4-48fd-82f8-269ab9f4565a}} of the cellular flow within the aggregate. Figure REF d displays the mode growth rates obtained with a diffusion coefficient {{formula:c5da57e5-9601-4e69-a0a0-fd6875198f90}} a thousand times that of figure REF b,c, corresponding approximately to the diffusion coefficient of small nutrient molecules such as glucose {{cite:98d66eb59358837e620ab86b013fe88bba595ed3}}, {{cite:8d93a406b53df31c5a19e8494dfd88d6a228b0b7}}, {{cite:5eccd8f8a7f6bb2c34ef5a302ca7f244effdda54}}. With this value of the diffusion coefficient, we have {{formula:6cc9ee6c-8bda-4821-8798-70fd62e7f44c}} when {{formula:104516cf-5729-424b-a863-b46f1e35aad6}} is estimated from the curves shown in figure REF b in the least favorable case, satisfying largely the required condition for neglecting convective flows. We show in figure REF d that, perturbing around the same steady state by rescaling the metabolite-absorbing rate {{formula:681c5efa-71a9-41b3-af29-25b76e9040ed}} , the instability occurs at a similar radius and finite wavelength as those reported above. In addition, the growth rates of the high-order modes ({{formula:43800776-777c-4fd8-aeb8-464bfa3f8661}} –6) are largely unchanged. Interestingly, the oscillatory instability is lost for {{formula:668fbfbf-87f2-4a9a-9ffa-557959f4722a}} and {{formula:e418f4d1-c0d2-4cff-a0fa-3d55582d7a3b}} . This is the signature of the fact that here metabolite diffusion is sufficiently fast to allow for an almost instantaneous response of the inner cells to perturbations of the outer surface. We however do not recover the analytic results of figure REF a for small values of {{formula:8e20ee6f-79f1-4879-9f9c-eeab0ad29685}} . This stems from the fact that, in the results of figure REF d, we scale the metabolite consumption rate with the diffusion coefficient to keep the same steady-state sizes as in figure REF b, as the analytic limit was obtained for small values of {{formula:7e3ece22-3e4c-457b-b383-74a6458ec23b}} with respect to {{formula:4fbf97fc-6087-4139-8015-416d1be1f4dd}} . We show in , figure REF , the mode structure obtained with intermediate values of the diffusion coefficient {{formula:be12b850-c6d8-45c2-9831-0ef1b68f532c}} between those of figures REF b,c and REF d, following the same rescaling procedure of the metabolite-consumption rate {{formula:603db203-08b9-4b71-9261-4c891dd04497}} .
| r | e1ea1a73946cc3a40d39f15334dafae9 |
In addition to the simulated environment, we tested our planner on an indoor environment from the Gibson suite {{cite:5053046b7c8cf585108cc20bd6b6f77e90706605}}. Fig. REF (left) shows the trajectory generated by the RRT* and CCGP-MP* (5%) for a single start and goal pair. Fig. REF (right) shows the minimum distance to collision for each trajectory evaluated across 500 runs. For RRT, 16.4% of the trajectories have the distance-to-collision less than zero, while for CCGP-MP, only 2.4% of the trajectories have distance-to-collision less than zero. The value for CCGP-MP* also satisfies the delta threshold set for the planner, which is 0.05.
| r | 9ef3a450def01639761a780415b88672 |
The first experiment to show that quantum physics is affected by gravity is the
Colella–Overhauser–Werner experiment {{cite:8a33bc24f68328d3c61e8afba7cf814be1515d40}}. The observed
interference pattern produced via overlapping two beams of thermal neutrons, propagating
at different altitudes with respect to the Earth's surface, is due to the free-fall acceleration. It
was shown in the Bonse–Wroblewski experiment, mentioned above, that an analogous
effect takes place in an accelerated reference
frame. So, uniform gravity and acceleration cannot be distinguished in quantum-interference
experiments {{cite:c262a29f4cd332c29d2bcbe63196d3ebb41deb9e}}. The quantum-particle model proposed here is consistent with this
empirical result. It is also consistent with the observed phase shift due to
space-time curvature {{cite:76526773f64e8b4bcbc64a3dc145d08c6ea2aba8}}, see {{cite:043b6bde4e17955d908b9bd00572ef4b9b06ce57}}. This suggests that the
model deserves further scrutiny.
| d | 20004160aaae69a0b7ec96b12f3ea175 |
where {{formula:265102da-d2a3-4a2e-8a6c-f23536145945}} .
We stress that in contrast to the non-linear Young integral used for example in {{cite:bf9e45270610787e2a6f3f9da600582e522ada9f}}, {{cite:506d73381ccd0b5bcb782958e118ef8d816bd112}}, {{cite:b813d652939f62b3c78730ed51c4f865f12619eb}}, {{cite:25206b3d5a6bf3bf252a08a1782ff4b6754172a4}}, the above integral is truly an infinite dimensional object, and extra care must be taken when building it from the averaged function {{formula:de73e2d3-5906-485c-9637-2c898814f0c0}} .
Indeed, for each {{formula:a935adc0-02da-4e1e-8aa8-310d4e484127}} , {{formula:5a534c21-a1fb-4ed9-b39d-bf4ffcc01da0}} and so the function {{formula:b76c86f9-412a-4b49-9a21-ab4291736272}} is then lifted to be a functional on {{formula:c69bcfcd-9e86-4260-a196-feb155ac9465}} . We show that this lift comes at the cost of an extra degree of assumed regularity on the averaged function {{formula:c04c33cd-a8a5-4124-bd0c-87add623b78a}} .
Furthermore, due to the assumption that {{formula:86cae2d5-5bf4-4b0e-9d0e-abcbc50e262e}} for {{formula:57888322-c142-40d1-9c55-0292eb7fd752}} (i.e. {{formula:4e6e9cba-4e88-4635-9c35-3052a0b9da16}} is assumed to be truly distributional) we need to make use of the product in Besov space in order to make the product of {{formula:22ba96a4-50fd-456d-a812-6560639b5e8b}} well defined.
| m | bf6c087e29d14334ab8f5d94819d6b86 |
Compared to supervised adversarial training {{cite:8f21863e530beebe732a366d12517f82ee36b181}}, {{cite:5a5e4b7bb0d2fad2773cb7a9fbfcd5af4e68b37e}}, our approach has the key conceptual difference of being applied in feature space. Besides improved computational efficiency (no PGD attacks required), Fig. REF shows that this difference translates into different behavior when using implicit feature modification. Instead of suppressing non-robust features, as Ilyas et al. observe for supervised representations, IFM enhances the representation of robust features. This suggests that the improved generalization of encoders trained with IFM can be attributed to improved extraction of features aligned with human semantics (robust features). However, we also note that IFM has no significant effect on the learning of non-robust features. In Appdx. we discuss the idea of combining IFM with adversarial training methods to get the best of both worlds.
| d | 7b6f87f6ecf53840237af639b3758bdc |
However, achieving robust visual SLAM under low visibility still remains
challenging. For typical cameras, which collect information by integrating photons
during the exposure time, the captured image is dominated by the lighting rather than by the object.
Thus, the ideas of collecting information from visual domains other than visible light
intensity have been introduced. The alternative vision sensors have advantages over typical cameras.
For instance, the thermal cameras could capture infrared radiation, and the event cameras {{cite:8d5c90918eb5122ebacb186bdd9d3a8c0b74d49c}}
could detect temporal changes. These special abilities make the sensor measurements
more independent of external lighting and motion conditions. For datasets including
alternative vision sensors, event datasets
{{cite:043624f59c96cc7d5531feb79a156b3788edf3dd}}, {{cite:20f526dba6aa7d46a4e38f7b3fc1a43e0f96d5c4}}, {{cite:ae3fac736de6eb3914279ff34d277ce4de5156d0}}, {{cite:e0126f4b344ab13f6336926e119caf06f4df2e58}}, {{cite:de602a1ed375a06ced80c32958f19d761067e4b7}}, {{cite:574d675e8823f769dd16664082a1790f50ea1994}}
and thermal datasets {{cite:5f9ee676d0628eb9deb84ce8815034018cf6ac7c}}, {{cite:3052e066b41922f89a12d78237ce4a486e253902}} were publicly released.
| i | ac81cba9cb4be042ca292e1dc8ceaec4 |
where {{formula:8d24354d-6dbe-4924-924c-583807ee7b46}} is a constant with {{formula:4b7899c6-dbb2-4ef5-a494-92f709a7d638}}; it yields the least modification of the Schwarzschild black hole spacetime and encompasses it when {{formula:8baeb643-ffff-40cc-a6a6-6668120b203a}}. The parameter {{formula:3657ab82-c248-41e0-99c2-784a1fb0717c}} has dimensions of length; it is introduced to produce a repulsive force that avoids the singularity and is responsible for the regularisation of the metric at {{formula:3a22f8db-dfee-4f45-aceb-0194b64df21e}}. This spacetime (REF ) interpolates between the Schwarzschild black hole and the Morris–Thorne traversable wormhole. The Simpson-Visser metric is regular everywhere, as is evident from analysing the scalar invariants {{formula:ef180642-a89e-4c25-9c46-861a88ae6efe}} and {{formula:6cce4c78-d6dc-4939-8b92-44d26164592f}}, which are well behaved everywhere including at {{formula:7de271fd-4bea-480c-b503-c5bfca2e0062}} {{cite:91242bafc3b7486870de2132937cfab7efabf309}}. The Simpson-Visser spacetime has received significant attention; theoretical probes include discussions of the energy conditions, causal structure, and innermost stable circular orbit (ISCO) {{cite:91242bafc3b7486870de2132937cfab7efabf309}}. Further work includes a discussion of the Vaidya radiating spacetime and traversable wormholes {{cite:38b10294ebcfa8c7d8f5099d86bd590fb58c57bb}}; the regularity, quasi-local mass, energy conditions and causal structure of new black-bounce models {{cite:e4eec88a9eecfde8141eb2a49334752a83900478}}; gravitational lensing in the strong deflection limit {{cite:852e84447eab22368747cbf1da9e8424903b4682}}; the construction of rotating counterparts {{cite:35065b93467c9e2a7dde47b5f87ffd52c9110811}}; and discussions of black hole shadows {{cite:ddfa300633425721a5b92ac86ab79b79b1d97105}}, {{cite:96806e45eae9d141bb00c252e04dfa496c29d8a0}}.
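For reference, the line element usually quoted for the Simpson–Visser black-bounce spacetime is reproduced below as a reminder; the mass parameter m and regularising length scale a are the standard symbols in the literature and may differ from the notation hidden in the formula placeholders above.

```latex
ds^{2} = -\left(1-\frac{2m}{\sqrt{r^{2}+a^{2}}}\right)dt^{2}
         + \left(1-\frac{2m}{\sqrt{r^{2}+a^{2}}}\right)^{-1}dr^{2}
         + \left(r^{2}+a^{2}\right)\left(d\theta^{2}+\sin^{2}\theta\, d\phi^{2}\right)
```

This form reduces to Schwarzschild for a = 0, to a Morris–Thorne traversable wormhole for m = 0, and remains regular at r = 0.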
| i | 57682d7ba6ba48140664966d6c1b7e61 |
Implementation of the DEM {{cite:133a85e0eb6ea171da22b2e77a16de46afccacd4}} for multi-block structured curvilinear grids.
Modification in HLLC {{cite:3a051afd9b510680b709c6a514b307ff871137e8}} Riemann solver within the DEM to include surface tension.
Extension of viscous effects within the DEM {{cite:952f6f379ec9386cf52c4c144d0afb4c46d7bcb9}} for multiple dimensions.
Improved robustness of the stiff relaxation solver {{cite:ec5204fd038b707d8cc34615756119adfb528e83}} for dealing with arbitrary EOS.
Application of an interface compression scheme {{cite:a90899a71b7b3ab5f05257d8ec6cd2e205135619}} for the seven-equation model.
| m | 99d3eddfb4afa6a91a01ede418ce8f86 |
At this point, we contrast DeepOnets with other recently proposed frameworks for operator learning. In particular, we focus on a recent paper {{cite:3bc89e83745f0e2837cf303887113c2a9fba8722}}, where the authors present an operator learning framework based on a principal component analysis (PCA) autoencoder for both the encoding and reconstruction steps. Thus, in that approach, one has to explicitly construct an approximate eigenbasis of the empirical covariance operator for the input measure and its push-forward with respect to the underlying operator. Neural networks are only used to approximate the operator on PCA-projected finite-dimensional spaces. In contrast, DeepOnets do not require any explicit knowledge of the covariance operator. In fact, our analysis shows that DeepOnets implicitly and concurrently learn a suitable basis in output space along with an approximation of the projected operator. Although many elements of our analysis overlap with those of {{cite:3bc89e83745f0e2837cf303887113c2a9fba8722}}, we provide significantly more general results, including the alleviation of the curse of dimensionality for DeepOnets. Moreover, our analysis can be readily extended to the framework of {{cite:3bc89e83745f0e2837cf303887113c2a9fba8722}} to prove the mitigation of the curse of dimensionality in that context.
| d | f1bea9a01bbe1242a9289fb5c978877a |
In Table REF , we provide additional results of the AGKD-BML model trained on 10-step attacks against AutoAttack (AA) {{cite:8805d96942d98b09b4bd04a8c801c0cbfd323d2f}}, which is an ensemble of four diverse attacks. We compare two Wide ResNet {{cite:ce91440a3285390adc87c7d55f7d588d148f2961}} structures, i.e., WRN-28-10 and WRN-34-10, as well as two different learning rate decay epochs, i.e., 100 and 150. For our AGKD-BML models trained with large-number-step attacks, we utilize the MART loss {{cite:826c0d2b1d3970b19263302d2ed43b5499681467}}, which explicitly emphasizes misclassified examples. Following the suggestions in {{cite:5afe746ed131b539f308670ca1b45a05b0e78928}}, {{cite:826c0d2b1d3970b19263302d2ed43b5499681467}} that the best performance usually occurs a few epochs after the first learning rate decay, we stop our training 5 epochs after the first learning rate decay. From Table REF , we can see that with more layers, 34 vs. 28, the model usually performs better in terms of accuracy against AA.
{{table:4aa9de67-0e15-48fc-993c-5f0d4c2b8f57}} | r | 90379d830b14443d1c6819f5bfa1c134 |
To alleviate the labelling effort required to train FCN, AL methods have been widely used to optimise the collection of training data. In AL, the aim is to selectively label data in order to maximise model performance. Recently, several studies have proposed AL frameworks for deep learning models with image data {{cite:71bdf0e501efe156511dd5b18b935fb07628a047}}, {{cite:c541fe18195977a0d850e91b8bf95615440b7672}}, which effectively reduce labelling requirements. However, applying such methods in the context of autonomous robotic monitoring missions has been largely unaddressed. Recent works examining AL with UAV-acquired imagery {{cite:82f90557cad9661b3e673f0a229a582598652f52}}, {{cite:b6eba4520c20c9bc46a22fbfcdedded82a229782}} only consider selecting images from a static pool obtained from previous exhaustive aerial surveys. Therefore, linking the AL objective to active robotic decision-making and planning remains an open challenge.
{{figure:39bd764a-9a87-4e04-8fe4-2507dec606ec}} | i | 76ea10eb6002bdf41ce9dc8732a17aad |
Our experiments show that Elodi performs positive congruent training by reducing negative flips with large logit displacement and reducing the variance of logits from the ensemble estimates. But there could still be negative flip samples with small logit displacement.
As discussed in sec:probe:landscape and observed in experiments, neither Elodi nor the ensemble paragon is able to address the negative flips caused by the difference in representation landscape arising from architectural change.
Mitigating this would require further analysis of the influence of neural network architecture design in PC training.
Another limitation of Elodi is that the training cost is still higher than the normal training process of a classification model update, due to the additional training of the ensemble and online inference of the ensemble logits, calling for further efficiency improvement.
Visualization on More Data Points
As mentioned in sec:probe:toyexample and sec:probe:highdim of main text, we provide more examples to verify our hypothesis.
We select four images of two classes from ImageNet {{cite:41401392e6e6385758ff9d606832b9c62dc3522d}}, which are illustrated in suppfig:moredataexample, as input data. With these input images, the estimated probability mass functions (PMFs) of the logit displacement between two single models and between two ensembles are shown in suppfig:hist2dcltmoredata. We can observe that the logit displacements are reduced with ensembles, which verifies our hypothesis that output logit vectors are independent and identically distributed (i.i.d.) random variables and that, by the multi-dimensional central limit theorem (CLT), their sum follows a normal distribution (eq:clt).
To verify our hypothesis in a higher-dimensional space, we train a standard ResNet-18 on the full ImageNet dataset with 256 random seeds. We take the images in suppfig:moredataexample as inputs and illustrate the {{formula:f6c889d7-6b89-44b3-8c4d-c13918b06536}} norm histogram of the logit displacement between two random ensembles with different ensemble sizes in suppfig:histhighdim:logitdiff. For the heterogeneous case, we train a standard ResNet-50 on the full ImageNet dataset and observe the {{formula:59b6cb15-123c-4b5f-a46c-36bd6b561072}} norm histogram of the logit displacement between a random ResNet-18 ensemble and a random ResNet-50 ensemble with different ensemble sizes. The results are shown in suppfig:histhighdim:logitdiffhetero.
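A small numerical sketch of the CLT argument used here: if the logits of individual models for a fixed input behave like i.i.d. draws, the displacement between the mean logits of two K-model ensembles shrinks roughly as 1/sqrt(K). The Gaussian logit model below is an illustrative assumption, not the trained networks used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, trials = 10, 2000

def displacement_norms(K):
    """L2 norm of the logit displacement between two independent K-model ensembles."""
    a = rng.normal(size=(trials, K, num_classes)).mean(axis=1)   # ensemble 1 mean logits
    b = rng.normal(size=(trials, K, num_classes)).mean(axis=1)   # ensemble 2 mean logits
    return np.linalg.norm(a - b, axis=1)

for K in (1, 4, 16):
    d = displacement_norms(K)
    print(f"K={K:2d}  mean ||delta logit|| = {d.mean():.3f}  (expected ~ 1/sqrt(K) scaling)")
```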
{{figure:43f62a27-ec79-4392-9625-825172f6d5f0}}{{figure:2e260ea6-05f6-4f0c-916b-5c5fdef7e0bf}}{{figure:39b99d21-9e36-422e-a625-56b42ee3e54e}}{{figure:c42b0da9-0fcd-4e01-960d-a997d3bdfd28}}
Features of the Penultimate Layer
We have discussed the representation landscape of PC-Training in the main text at the logit space and provide some more data points above.
The analysis can be done in the feature space as well.
The main challenge is that features from two arbitrary models are not directly comparable;
we address this feature-interoperability issue with BCT {{cite:2fa8a2c8e718c06db085835d57f1c124be52f589}}.
We first introduce the BCT method and then derive from that formulation the "penultimate layer feature" of an ensemble.
Based on these, we can analyze two-dimensional examples and the higher-dimensional validation experiments.
Preliminaries.
Shen et al. {{cite:2fa8a2c8e718c06db085835d57f1c124be52f589}} propose an approach termed BCT to align two arbitrary deep models so that their embeddings are interoperable with each other.
Formally speaking, a model {{formula:bd3f6660-9b34-43ec-b447-f87db22f990a}} includes an embedding module ({{formula:76f4fc6c-1882-4640-aa6b-1e5f67fc5d9a}} , a.k.a. backbone) and a classification layer ({{formula:1d33fbff-40f3-4d90-95e3-8df711c33fb6}} , a.k.a. head) on top,
{{formula:39ba47f0-552a-46fd-8a27-c862d51052cd}} .
Given a reference model {{formula:ae3c5790-1ebc-49a1-9760-802f77307682}} ,
BCT imposes a loss term so that the two model heads are close, {{formula:1a75b6c3-f496-4caf-b41a-9c325698ba1b}}. (In fact, if we assume that {{formula:e1134642-9c77-4d18-80bb-849413a903f7}} and {{formula:336c9596-8e2d-4374-96bf-6e1c6459ba7d}} have the same shape, we can also proceed as follows:
we train {{formula:2523edd4-3c36-4c34-85fd-c92148ec1632}} and then {{formula:1f36ed88-5917-49e7-8fea-91dc5491fb84}} with parameters randomly initialized, except that the head copies its weights from {{formula:a12b4dbb-528a-4e35-bcd9-6c7dd77b904b}} and is kept fixed.
Nevertheless, we follow BCT's formulation since it is more generic.)
As a result, {{formula:fff1b7da-529a-4ff4-8394-3a98f7b1d90e}} and {{formula:697cb816-4282-4c82-8d02-db7a18be5a8d}} lie in the same vector space and are thus comparable, {{formula:17ed9248-50fc-43b7-bdec-dd0a9213e5a7}} , regardless of the underlying architecture.
Ensemble of many feature-interoperable models.
It is noteworthy that feature interoperability does not affect NFR as reported in {{cite:a93899a6cb8b3d9ae874e1c4d8c56ba9ee3e11fc}}.
We also re-validate that two models, {{formula:b0b5fdcf-97a1-4aa6-ba9b-b61c08994b3d}} and {{formula:63034372-4076-4dae-8a48-4242dd145856}} , trained using BCT {{formula:89491927-9de9-43fd-b865-c9720b15bc1d}} have similar NFR compared to two without BCT.
However, their features are comparable, {{formula:26c79cbf-a4b3-4112-8efd-6a8eb1dfe7e4}} .
So is any linear combination in between.
The arguments hold when the number of feature-interoperable models {{formula:f84cef7a-366e-40ef-b7c1-4183900f6d09}} increases.
Therefore, if we write down their averaged logits, we can factor out the head,
{{formula:804f22ed-528f-42cf-8f72-3be33d853049}}
It implies that the averaged feature can be viewed as this ensemble's feature, {{formula:9467c7f2-5a33-4ba2-9caa-a9fbdb3e3e44}} .
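A tiny numerical check of the factorisation above: when K feature-interoperable models share the same (or an aligned) head w, averaging their logits is identical to applying w to the averaged penultimate features. The random features and head below are placeholders, not trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)
K, feat_dim, num_classes = 5, 64, 10
w = rng.normal(size=(num_classes, feat_dim))        # shared classification head
phis = rng.normal(size=(K, feat_dim))               # per-model penultimate features phi_k(x)

avg_logits = np.mean([w @ phi for phi in phis], axis=0)   # mean of per-model logits
logits_of_avg_feat = w @ phis.mean(axis=0)                # head applied to the mean feature

assert np.allclose(avg_logits, logits_of_avg_feat)
print("averaged logits == head(averaged feature):", True)
```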
{{figure:13f336f1-3fbc-4801-a4cd-2610dac8dac0}}{{figure:32a7ce45-306e-4f9d-bf3a-b4af4ce01e01}}
A two-dimensional example.
To illustrate the behavior of models in feature space, we create a toy example by selecting three classes, “Labrador retriever" (n02099712), “Weimaraner" (n02092339), and “French bulldog” (n02108915), from ImageNet {{cite:41401392e6e6385758ff9d606832b9c62dc3522d}} and training a ResNet-18-like model with a slight modification: the penultimate layer's dimension is changed to 2.
The feature-level visualization is presented in suppfig:toymodelsingle and suppfig:hist2dclt. We can make similar observations as in the logit space after the penultimate layer features are aligned with BCT {{cite:2fa8a2c8e718c06db085835d57f1c124be52f589}}.
Validations on higher dimensions.
We repeat the high-dimensional validation from the main text on the penultimate layer features; the results are shown in suppfig:histhighdim:featdiff.
We see that the PMF curve fits the histogram of single models well, implying that the features of these models could indeed follow a normal distribution.
We conduct the same experiments above on many more images and the conclusion holds well.
If we move to ensembles of {{formula:b8f0cfa7-2fc7-4cec-9e01-4ca7ef601039}} models each, the feature difference follows another normal distribution whose co-variance matrix is scaled by a factor of {{formula:3929e2c6-50a6-47c8-ae2d-77db834167c3}} ,
{{formula:12cb36db-4767-4b78-87de-8c80d3fe4605}} .
We demonstrate that the rest of the histograms are indeed consistent with the estimated PMF of {{formula:05a615e8-4240-4aae-8120-e0e50e065dde}} (dashed lines in suppfig:histhighdim:featdiff).
| d | c4416cf9b137b21cc849c7f4b7549c8c |
Most algorithms for SAT and MaxSAT are based on the
branch-and-bound process {{cite:411f49625435bb9c0a153b3082810609853e7c81}}. The Strong Exponential Time
Hypothesis conjectures that SAT cannot be solved in time
{{formula:5dd1be8f-03e9-43a8-8f2d-c5e590500ae6}} for any constant {{formula:e336c5ba-1f4d-4d65-89d9-0532b3ca3e71}} , where {{formula:633ed0e8-3bfc-49e6-961d-3fd2a34b56c9}} is the number of
variables in the input CNF formula {{cite:89516e51d7e588d65b8cbddf843b6e76c3cf46c5}}. The hypothesis reflects,
to some extent, the popular opinion that branch-and-bound is perhaps
unavoidable in solving the SAT problem and its variations.
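For concreteness, a common way of stating the Strong Exponential Time Hypothesis referenced above (the exact constants are hidden in the formula placeholders; this is the standard textbook restatement, not a quotation of the text) is:

```latex
\text{SETH:}\quad \text{for every } \delta > 0,\ \text{CNF-SAT on } n \text{ variables cannot be solved in time } O\!\big((2-\delta)^{n}\big).
```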
| i | f42da6674f9d044a6e0af62f4a517d04 |
Fig. REF d shows the CERs and style opinion scores for unseen style references.
Due to the residual information in the hidden state, priming significantly increases the CERs for {{cite:bc469f3bf3ce448bfd3aee0ea18c74c3c4b8fdcd}}. Without style equalization, the model fails to synthesize legible handwriting in the nonparallel-text setting due to the content leakage caused by the high-capacity style encoder.
In comparison, by adding style equalization, we successfully reduce content leakage and replicate style, as demonstrated by the CER and style opinion scores that are close to those of real handwriting samples.
{{figure:844c4a60-2e09-4f12-945a-a6a1a13354c6}} | r | df33a18bd959ff2488b9bfbf14ab749a |
We have established a variational formulation for the role of depth in random neural networks with batch normalization: The entropy of hidden representations increases with depth up to constants. Is this entropy increase achieved by a gradient flow in the space of probability measures? This question is inspired by the variational formulation for Ito processes established by {{cite:dcee86e310bcef5462fb1e8437c4de008b09ee8d}}. According to this formulation, the distribution of Ito processes, which obey Fokker–Planck equation, can be viewed as a gradient flow minimizing a free energy functional.
| d | fc3d941f0245e14449bb872e1abc837c |
Limitation of SESEMI We speculate that the poor performance of SESEMI on the SVHN dataset stems from our chosen self-supervised task of predicting image rotations and flips. Gidaris et al. (2018) {{cite:3a68613d510173dce937fe18b1421be59d531e73}} showed that their self-supervised model focused its attention maps on salient parts of the images to aid in the rotation recognition task. We hypothesize similar dynamics are at play here, but the SVHN dataset presents an additional layer of complexity in which the centermost digits (the digits to be recognized) are often surrounded by “distractor” digits. When the digits are rotated and flipped, the self-supervised branch is likely picking up dominant visual features corresponding to the distractor digits and relating them to the supervised branch as belonging to the digits of interest. These “miscues” are most prominent when few labels are present, where the supervised branch is simply learning visual information from the self-supervised branch. However, when all labels are available, the supervised branch is able to correct the miscues, and our SESEMI models produce the best classification results.
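A minimal sketch of the rotation-and-flip pretext task that the self-supervised branch relies on: each image is transformed and the branch must predict which transformation was applied. This is a generic illustration of the proxy task, not the exact augmentation set or architecture used by SESEMI.

```python
import numpy as np

def make_selfsup_batch(images):
    """Create (transformed image, transformation label) pairs for the pretext task.
    Labels 0-3: rotation by 0/90/180/270 degrees; labels 4-5: horizontal/vertical flip."""
    xs, ys = [], []
    for img in images:                      # img: H x W x C array
        for k in range(4):
            xs.append(np.rot90(img, k))
            ys.append(k)
        xs.append(np.fliplr(img)); ys.append(4)
        xs.append(np.flipud(img)); ys.append(5)
    return np.stack(xs), np.array(ys)

# toy usage on random "images"
batch_x, batch_y = make_selfsup_batch(np.random.rand(8, 32, 32, 3))
print(batch_x.shape, batch_y.shape)   # (48, 32, 32, 3) (48,)
```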
| d | cd96b9df9d351555a88b03f6a8e33e92 |
where the first term tends to zero since {{formula:472489f9-3489-4fce-a4fa-0c4790f18dda}}
and {{formula:e1f11855-1dee-4a0f-8abf-a405cfeea54e}} is continuous and constant on {{formula:87392d7d-2da4-4380-b2aa-938f79ef8c93}} , satisfying the conditions of the adapted Portmanteau Lemma {{cite:bcf726f46ab9d135030f718cb79347bb13293058}} (see Appendix ).
The second term can be arbitrarily small since Lemma REF ensures uniform convergence of {{formula:46fed9a8-f61c-4f2b-88ac-70d4956a0d7f}} to {{formula:9cb88c27-6852-4199-99bf-d9c23b2e7213}} .
[]thrct
Let {{formula:f77f7889-bda4-4dd8-a520-06da4b29b797}} be a sequence generated by {{formula:57b60bb5-560e-4375-8f9f-6e765e6d28ec}} . Let {{formula:4dbf371b-7b7e-4b0e-a253-f3b7babb2aec}} be such that {{formula:e9e88056-bf8b-4643-bdd1-39559cfb9968}} , {{formula:ecb59cfb-a04c-4617-b220-5970f355b7f6}} {{formula:4d47825a-09f7-482d-ae61-ddf29bbbe5a1}} and continuous in actions. Then {{formula:05584420-852e-4001-8aa1-f4db1fa40c2e}} , where {{formula:50ba9358-6f39-4749-bd32-cdfd40c2f044}} is an optimal policy for the MDP. Moreover, {{formula:38d27c80-1cf2-4b91-85dc-cea1fea8c2de}} , {{formula:738b6045-2e35-4c04-9ba1-83c72be833d8}} are the optimal state and action value functions.
Fix {{formula:a35c1d38-cd5d-4111-8dc7-24f36486786a}} (we have already shown that {{formula:610915cc-ef6b-4034-8226-adf9ed4f7f14}} ). Due to Lemma REF , we know that for all {{formula:1badcd26-9f72-42ac-bda2-72498daae346}} , {{formula:8f6e8c65-bff8-43bc-8126-2483a210ec2b}} is the relative weak limit {{formula:3cc5acdd-7842-40f9-9ba8-e5c270cad31f}} and further we know that {{formula:07c09aea-7ca9-423c-bf0a-653964d2fcaf}} is greedy on {{formula:e8f2a16f-6b11-4c76-b059-b583cc0dfa03}} (from definition of {{formula:a67d5140-1365-436b-b302-78c3ad04a07f}} ). Moreover, thanks to Lemmas REF and REF , {{formula:ad4872be-7e80-40f4-a911-c3222bc82b10}} and {{formula:25a82337-2d0d-40a5-a59a-23cb5bb7ad01}} are the state and action value functions of {{formula:4eaf1848-e22d-4e0e-a724-4b965015e142}} because they are fixed points of the Bellman operator. Since {{formula:103e40c3-8655-449d-badf-aeb1e923b1ff}} , {{formula:720f1929-ff20-43a5-bef3-e4c45de6e9d8}} and {{formula:995e809b-9506-4937-a9fd-28ddb6b31c43}} are also the unique fixed points of Bellman's optimality operator, hence {{formula:6acd60f5-9511-4ff1-a247-5ccfec76a445}} , {{formula:8eed9b2f-6178-4d1a-874f-5ef846d45653}} are optimal value functions and {{formula:445b7d5a-7ae5-4ea0-82e6-8d47f11e9703}} is an optimal policy.
| r | ca09607077da7df3b67e954d8eb81946 |
Although blazar-type optical variability is rarely expected in non-jetted or misaligned AGNs {{cite:9446e73a4eaa890da0db47eb8c378313cae84cf4}}, {{cite:682bf865678942600bb15a7c16e3bff686445f7a}}, in the present study we found an unexpected, remarkably sharp feature in the differential light curves of the non-jetted RLNLSy1 SDSS J163401.94{{formula:deab52fe-3ed7-43fd-83a9-d0183dabfd66}} 480940.2 on 2018.03.26 with the 3.6m DOT. Therefore, we focus here on this sharp feature only. During the 3.0 hrs of continuous monitoring with high sensitivity and about 5 minutes of sampling time, it can be seen in the DLCs of SDSS J163401.94{{formula:34f8e9af-2c02-4bc5-b76c-b88b5ffe7b2d}} 480940.2 that at around 22.37 UT there was a sharp rise (between two consecutive points) of {{formula:f9b21c7d-8f88-40c0-b076-ebe82d92f3d0}} 14% within 6.01 minutes and then, after remaining quiescent for 12 minutes, the source faded back to almost its initial level (see Fig. REF (a)). Caution about seeing disc (FWHM) variation during a monitoring session becomes important when AGNs are at small redshift {{cite:629ab2e770554ea1425acda4f05774f83fe057fe}}, because a significant contribution to the total flux can come from the underlying host galaxy, and hence the relative contributions of the (point-like) AGN and the host galaxy to the aperture photometry can vary significantly as the PSF changes during the session. This might lead to statistically significant, yet spurious, claims of INOV in the standard analysis of DLCs {{cite:f4bffd46e5129f0bf7b90dd80a5099607caa2993}}. However, in the present case, the high redshift {{formula:3f23372a-83cd-4329-b121-31f2cdbdff74}} of the source and a very small (around 0.25 arcsec) seeing disc variation during the monitoring session (see bottom panel of Fig. REF (a)) suggest that this sharp variation (flare) is unlikely to be affected by the host galaxy contribution of this source and appears to be genuine. This is in accord with a recent deep near-infrared imaging study of RLNLSy1s by {{cite:3dc6aade46a7ea59bbc90c51bc2ec916b9044821}} using the ESO Very Large Telescope (VLT), from which it can be inferred that any variable contamination arising from the host galaxy can be safely discounted in the case of AGNs at z{{formula:45e8124e-145e-41d4-b8d6-509f8680478e}} 0.5. Nonetheless, a sharp variation in the DLCs of an AGN, like the one we caught in SDSS J163401.94{{formula:9f99c5f1-d40a-4aec-b869-3693eb5471a9}} 480940.2, is sometimes suspect if it is traced by just two points. Therefore, we have regenerated the DLCs of SDSS J163401.94{{formula:16117890-0bbe-4014-a977-512c768212b6}} 480940.2 from the flaring point (i.e., at 22.37 UT) to the endpoint (i.e., at 23.88 UT) using the 100s sampling data which we had fortunately taken with the 3.6m DOT on 2018.03.26 (see above). This extra caution was taken to ensure that the flaring event that occurred in SDSS J163401.94{{formula:95572a98-9160-455f-aaf7-dc7b292cc9ea}} 480940.2 at 22.37 UT is traced by more than two points. The regenerated 100s-sampled DLCs of SDSS J163401.94{{formula:a9a1449b-3951-4afe-9470-c87fc7cfd524}} 480940.2 (see Fig. REF (b)) confirm that the flaring event is traced by four points. This again ensures that the flaring event discovered in SDSS J163401.94{{formula:f4b8b929-aa12-44ac-b23f-026ac26e312b}} 480940.2 with the 3.6m DOT is genuine.
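As a quick sanity check on the quoted amplitude, the snippet below converts a fractional flux change into the corresponding magnitude change via Δm = −2.5 log10(F2/F1); assuming the ~14% rise refers to flux (an assumption, since DLCs are usually expressed in magnitudes), it corresponds to roughly 0.14 mag.

```python
import numpy as np

def delta_mag(flux_ratio):
    """Magnitude change corresponding to a flux ratio F2/F1."""
    return -2.5 * np.log10(flux_ratio)

print(f"14% flux rise -> delta m = {delta_mag(1.14):+.3f} mag")
print(f"14% flux drop -> delta m = {delta_mag(0.86):+.3f} mag")
```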
| r | 0815db31f951a6d4ceb45fb964c47983 |
{{cite:7358c48abcbf794d4bcaeda0786fb8ba80cee64a}} revised some of the experiments from {{cite:51c99c2bdc285082d06065ab8ad4cee642b901b5}} on pruning larger networks while training on ImageNet. This domain usually requires a more exhaustive hyper-parameter search to discover WLTs, but when they are discovered the results tend to be more impressive {{cite:51c99c2bdc285082d06065ab8ad4cee642b901b5}}. {{cite:7358c48abcbf794d4bcaeda0786fb8ba80cee64a}} demonstrate that using the more de facto training regime, with a larger learning rate and momentum SGD instead of Adam, produces better results than even the winning tickets, making LTH even less practical.
| d | 35f9fcfb251faf3e19fbe58b4ae4e8dd |
Qualitative: Some retrieval results are shown in Figure REF for Sketchy-Extended and QuickDraw-Extended. We also provide a qualitative comparison with the CVAE proposed by Yelamarthi et al. {{cite:f4f7946e9de8522d692b4435cb1bf00f3080c83c}}. The qualitative results reinforce that the combination of semantic, domain and triplet loss fares well on a dataset with substantial variance in visual abstraction. We would also like to point out that the retrieved results for the class skyscraper show high visual shape similarity with rectangular objects, i.e., door and saw. The circular saw might also have been retrieved because of semantic rather than visual similarity. Similar visual correspondences can also be noticed between the query sketch helicopter and the retrieved result windmill.
| d | a6b74620ee1c94e6e9303342386e7313 |
One such scheme is the minimal momentum subtraction ({{formula:8085f233-50a2-4165-b04e-94b3347a7118}} ) scheme that was
introduced in {{cite:f88efab1c0334c53eebea0b77376c8dd227e151e}} to exploit and extend a particular fundamental
property of QCD that was originally observed by Taylor in {{cite:fac13304414aecb0d107ce60824669bfb6d781ad}}. More
specifically it was proved in {{cite:fac13304414aecb0d107ce60824669bfb6d781ad}} that the gluon-ghost vertex function is
finite to all orders in the Landau gauge. One consequence is that the
{{formula:232ac77d-013e-4a6f-be32-034c1f2e0f69}} QCD {{formula:1340d108-b0b3-4a93-be60-bf1b6af01970}} -function can therefore be deduced from the Landau gauge
values of the gluon and ghost anomalous dimensions, {{cite:e63ff058d0313e2ee93085cb7f33232e35611852}}. However to ease
the numerical and financial aspects of making measurements of lattice
regularized quantities in QCD the {{formula:6b51d2ee-02c4-4b49-817c-60b76f413826}} scheme was developed in such a way
that the non-renormalization of the gluon-ghost vertex was maintained for an
arbitrary linear covariant gauge, {{cite:f88efab1c0334c53eebea0b77376c8dd227e151e}}. Although initially defined for
lattice regularization of QCD it has a continuum spacetime analogue which was
given in {{cite:f88efab1c0334c53eebea0b77376c8dd227e151e}}. In particular the {{formula:7a0ca496-7282-445d-ae42-fcefc0364484}} {{formula:ce0617db-e7cb-4212-9dad-dc68319387b9}} -function was computed to
four loops, {{cite:f88efab1c0334c53eebea0b77376c8dd227e151e}}, with the field anomalous dimensions together with the
quark mass anomalous dimension following later in {{cite:042edc5ca33e75cda147a1c32702ab3b8af60b59}}. These
renormalization group functions were required for studying the conformal window
properties of QCD and the associated critical exponents at the Banks-Zaks fixed
point that was discovered in {{cite:58de9445cfaa6e4acf0ed41a31f3ec6f1b2b11c3}}, {{cite:9417d03b93cdcdf8abf1152867c4589bc634157c}}. One property of a fixed point of the
{{formula:0b94be20-f2bd-4481-a31b-0cd6cfd82d00}} -function is that the critical exponents, derived from evaluating the
anomalous dimensions at the fixed point, are renormalization group invariants.
In other words they are scheme independent. Therefore the four loop {{formula:53684c89-591e-4f2a-b321-91ab62de637a}}
anomalous dimensions were needed to study the convergence of critical exponent
estimates {{cite:31936879774755503e6343afc099a8f7c770b0fe}}, {{cite:812d3ffe9ef30846ccd05a2b50f1deb4115ef90c}}. Given the continued interest in such conformal window
studies for gauge theories, {{cite:31936879774755503e6343afc099a8f7c770b0fe}}, {{cite:812d3ffe9ef30846ccd05a2b50f1deb4115ef90c}}, {{cite:debde3088552cd80899ee126089afece65e457d2}}, {{cite:41e6bfc47e01e99b27bbbeaea1da00ed483c8aa9}}, and theories beyond the
Standard Model coupled with the extension of QCD renormalization group
functions to five loops, the aim of this article is to provide the {{formula:5194d98c-1c90-47b9-89c0-f2e2bd238ad1}}
field and quark mass anomalous dimensions to the same order. In {{cite:7ccd8e3889699b5c3b02112bd0db1afdad34c706}} only
the five loop {{formula:03975cd3-6729-40e5-82a9-81e4b14600af}} {{formula:031f3b5b-d322-43c4-b90a-a9eff46df02f}} -function was presented and then used to determine
the {{formula:6cbcdc99-70e6-48c6-8ea7-c93ba628c2ed}} ratio in that scheme to a new loop order. Other phenomenological
applications of the {{formula:e6ebc6a4-db56-4420-8885-992fdbc7ecb3}} scheme were discussed in {{cite:6cfc362ae2ead7e4f75f2a7d490740ff6a867744}}, {{cite:6f4f20125a98ed00d8f08beaa256c132fda22392}}. However,
using data provided in {{cite:7ccd8e3889699b5c3b02112bd0db1afdad34c706}} we have been able to determine the {{formula:b588ebbc-4c8a-4ae9-a271-3e7afc996ff5}}
field and quark mass renormalization constants to four loops. Knowledge of
these will then allow us to deduce their five loop {{formula:3d02adb0-0ced-4284-b32b-e1387db833dc}} anomalous
dimensions from a particular property of the renormalization group. In addition
the anomalous dimensions are also needed for a parallel study of the fixed
points of QCD, including the Banks-Zaks one, in {{formula:00c27488-0b5c-4ba2-976c-ef67f34930cc}} , {{cite:b6cd4d2b6ffbffcbac09c8cec7558e17ba29a63b}}. That
article examines the fixed point structure in a variety of schemes, such as the
MOM ones of {{cite:d03762786a16677268ae5f5723a4cce67d4b92bc}}, {{cite:af4911b451dd6f1b2906054efd57cd6a96caff4d}}, as well as QCD fixed in both linear and nonlinear
covariant gauges. The {{formula:264c6694-20e3-4a16-8499-c28c5fb18165}} aspect of that work relies importantly on the
separate five loop results provided here.
| i | 8e69817519e6473e70175cb2a437ff7d |
As illustrated in Fig. REF A, the observability of the {{formula:33b9b137-bf7d-4375-8f96-cedfd9a8320a}} spin alignment is a result of the interference of two wave functions from two indistinguishable target and projectile nuclei at the distance of an impact parameter apart from each other. This scheme is very different from most of the fundamental particle interactions where two wave functions are connected by a mediator or virtual particle in the corresponding Feynman diagram {{cite:fbf9353c9257d2d54a7ab724337878955b4a88e8}}. In this case, the two potential {{formula:45a6ec44-ad1f-4517-b2e4-0e8b7363a2c0}} wave functions only overlap through the decay daughters that propagate to the detector. There are several possible scenarios that would result in interference effects. First, if the phases of these {{formula:c099ceb0-a7ec-4cc4-ab79-c585441d2d96}} are random, this would be similar to the azimuthal Hanbury Brown and Twiss intensity interference effect {{cite:f79ee229132d98667bb4c90612d824ef70425a7c}}, {{cite:6e9be955adc275aca67854eef2f503b0c436c379}}, {{cite:f1999e5e6dec3870a791e33475953d62a4ca7fe2}}. In this circumstance, both coherent and incoherent {{formula:0e8c2b77-afbb-4c12-a079-c6dee48c1655}} production would create a correlation {{cite:cce3a6d0b6a13a46d83dd293b4d33b58ee134fc7}}, {{cite:e53565abee5663dcda8bdeb517457d07d0aba460}}. Secondly, one can also take an alternative view on this as an example of Entanglement Enabled Intensity Interferometry (E{{formula:d76b5202-bcc9-4eb6-ae24-e71cd407c89f}} I{{formula:e3dd88f0-76b4-41df-b696-08e6bcd31270}} ) {{cite:e64ff804dfb5b67efca80496b67acf9d4c033604}} between two non-identical particles ({{formula:274cc4f2-f7e1-42c0-8129-2a7497b85d88}} and {{formula:630aec5f-dd63-4c40-bd9a-a786b873ccd1}} ). In fact, the components in the mass range of the {{formula:94c96de2-3161-411c-a49b-b1ab48fe37e4}} consist of the {{formula:790100fe-8697-47fc-b05b-4cbea559f5df}} particle and direct {{formula:0347cb0d-0cd7-4ba4-a787-f56695879323}} {{cite:2dd858d2be48ff5066275fb9a9bcd99d1bda5e8d}}, {{cite:35b579e049d98d27e2c82f557c02d82ca4f166f4}}, {{cite:a003349ec00f6d40b09b0f3236c5f1369b0358d6}} that are subject to the same interference effect. In the example of Ref. {{cite:e64ff804dfb5b67efca80496b67acf9d4c033604}}, the sources of {{formula:8ed49ad1-d3b0-4080-959d-03e916cb122c}} and {{formula:59b2187f-34b2-445a-8ef7-08c755b8b72c}} are the two gold nuclei each emitting a pair of {{formula:0543c3f9-ea8d-4fc8-babc-523f858a18b9}} and detectors {{formula:65f10b4f-3676-4403-930a-960ecf170b10}} and {{formula:0676b060-15c9-4957-8390-52e131333256}} measure either {{formula:751af91f-9a1d-4892-ae58-c6490f1dd610}} or {{formula:a330ece2-1ac6-4054-b416-7d4bd7c12c04}} . Due to the entanglement of the {{formula:ff93e4f9-1df5-48a6-bdeb-cc84495ca147}} at the source, there is a non-trivial interference term as shown in Eq. 4 of Ref.{{cite:e64ff804dfb5b67efca80496b67acf9d4c033604}}. Finally, there exists a third scenario in which the initial {{formula:83ac7345-2787-4774-8b01-6db3087e2afc}} wave functions are locked in phase through phase entanglements of the initial photons and Pomerons. In this case, the interference would only appear in the coherent process and would not produce any interference from the incoherent process {{cite:f93426a1fcc76da6b96678c4d4695435353a6875}}, {{cite:365f398edcbbe319b02cbcba9b1ba7205fa216be}}. 
In all three cases, the interference occurs over the characteristic distance of the average impact parameter of the collisions, about 20 {{formula:037a4411-2ee8-4f8e-87f5-92f3f2d36364}} , while the lifetime of the {{formula:d40af08a-609f-4509-8c57-1fb7e50b7a41}} is only about 1 {{formula:76fef01d-f0cd-4836-b404-2b8ae839ac3c}} . The decay daughter pions are spin zero particles. Our measurements of non-zero spin alignment ({{formula:0b901708-2f4a-456b-a33d-e89209b188de}} in Au+Au and {{formula:b0b427c7-5982-4d20-bd98-76d1af7ffeeb}} in U+U) show a definite interference effect due to the non-locality of the pion wave functions. Through this measurement, we can also set a limit on whether or not the wave functions experience decoherence due to the decay process or other activity in their vicinity.
The prediction from Model I matches the data well, while the prediction from Model II is about 20% above the data, as shown in Fig. REF B. This implies that the coherence is at least 80%.
{{figure:48b19835-6caa-4e09-8972-c54da0cc0e7c}} | d | c98f65dc86537bc3b7341f66a7bbaaaf |
Primordial black holes (PBHs) are early-universe objects predicted by many inflation models; they could result from the collapse of large overdensities {{cite:730497fafbab966f415a53280530aa4e825ad446}} as well as from more exotic events such as phase transitions or topological-defect collapse (see the reviews {{cite:971510f2575b23d7661d9368fd0850f35495423f}}, {{cite:a02d59335cccf3973cab785ce136d6ed2ff26e2e}} and references therein, or the lecture notes {{cite:26f428c41475f65a2d2e32249e97956355298ea1}}). They are not the outcome of stellar collapse, hence their mass can span a wide range from the Planck mass {{formula:c10c8767-789f-4b07-90b7-6a3365b3359c}} g up to “stupendously large” values {{formula:b53758b3-4053-44c8-a134-3165d0551eb4}} g {{cite:1c57e1df2a9faf34dce23e4dc0ed73da004fcc1d}}. One of the most interesting aspects of PBHs is that they can explain all or part of the missing dark matter (DM) density in the universe (see {{cite:971510f2575b23d7661d9368fd0850f35495423f}}, {{cite:a02d59335cccf3973cab785ce136d6ed2ff26e2e}}, {{cite:26f428c41475f65a2d2e32249e97956355298ea1}}). As such, the constraint on their abundance translates into a constraint on the fraction of DM, {{formula:ab10d303-7423-4e63-a80e-0913de9517a5}}, they can represent.
| i | 8c84d257c09952f20c4caf0a73f1565d |
The present version of the GM model considers a population of an
infinite number of informed traders who behave in a competitive
fashion. The situation is very different from that of a single
informed trader, who trades with frequency {{formula:13f47c59-af5a-4713-adae-99aeda73f506}} . In the latter case,
the GM model becomes a repeated game of incomplete
information {{cite:6cf272d42e7ca08dcc9afda691f0889e63c4fa98}} between the market maker and the
informed trader. In this setting, the informed trader may choose
not to reveal information to the market maker through
her trading activity. For example, the informed trader may decide to
act according to her private information with probability {{formula:c643b543-2d18-44a0-a14d-1a63235b4584}}
and act as a noise trader with probability {{formula:f402064c-80d6-4bff-9ec6-f017a89578fe}} , in order to conceal
her information. This changes the statistics of the trading
activity, because in practice {{formula:8f936de9-fca5-4b92-8fe3-9098c7aa8a55}} and hence the temperature
{{formula:acac5e69-e190-4e42-90cd-e8070effb833}} . With this strategy, the informed trader would
also share a fraction {{formula:49ddaef5-863c-4b60-a02d-1c97d1710d9b}} of the losses of noise
traders. Hence their gain would satisfy the bound
{{formula:68fc4e7d-1e7b-4698-99ed-77dcc9f3df7c}} . This shows that by
taking an infinitesimally small value of {{formula:ae3568ff-cdd3-4803-af9c-94355f2ce9d5}} , noise traders could
access the regime where the inequality holds asymptotically as an equality,
and increase their gain by making the market temperature {{formula:2c9be78a-3231-492e-a95d-edffffca7cb8}}
arbitrarily large. This argument
suggests further avenues along which the results of the present paper
could be extended.
| d | 4ed74ec305d828d44858320997a809aa |
It remains to show the continuous dependence of {{formula:99d25ec7-691c-49fb-bb3f-2d6fdc4d3c3a}} in {{formula:68650a66-513d-4cf0-8abe-4169ef4aad7e}} . Let {{formula:8b9e1b5d-8453-4108-9096-1d47b0b85233}} be such that {{formula:ab30fbef-2dd3-4d39-abbc-61a99f990398}} uniformly on compact sets of {{formula:422cf134-ef9f-4b48-b26e-73ac0e1ef35d}} .
Denote {{formula:e2128ffe-8b9c-4858-bb1d-af2f3dda5939}} and fix {{formula:6743a71e-74c2-48af-95f6-07789b2ddf6c}} .
Thanks to (REF ), the functions {{formula:88477c7e-6ad0-4d2b-9091-2c8b64352914}} are convex. Given the representation (REF ) and the subsequent bounds on the integrands, uniform convergence on compact sets of {{formula:a6bce0c4-f959-442f-adbd-dd011bae7ea3}} to {{formula:fd0479a2-73d8-448f-9842-9d02126e1f45}} readily implies that {{formula:66de3b1a-ab2f-48eb-a86c-071a1fbb2270}} converges pointwise to {{formula:487bbeb4-7fbe-4660-8895-84a31484e1df}} . By an application of {{cite:df6e69de75544fd1147b17f581e41ecb01e1944c}}, this convergence is uniform on compact sets of {{formula:deb2555c-dab8-4ca9-be35-b45d024032be}} .
| r | 92855300149988598452abf1840e5c2d |
We observe
{{formula:e383074f-a48a-49bc-a0ad-224aaebf2c84}}
The first inclusion is trivial. The second inclusion can be seen as follows. Let {{formula:f9d87ef8-dd63-44d1-bd97-c94086ab5f53}} . Then for all {{formula:c9d39866-a06e-4658-9ab2-972553f02c55}} , {{formula:1f116bc3-a87f-4cc8-be97-6e8b723c7552}} and {{formula:760fbd6b-7be6-4ee4-a978-e14e9ca2c352}} . Therefore, {{formula:c8c31060-bc9f-438d-b7e0-c4b233ddfc80}} and {{formula:0c77f6f8-f18b-4ded-9d34-53a04b0e77f0}} . Therefore, {{formula:f6be3b8e-7b7a-4229-a204-c31b311dbabc}} and {{formula:d46ff512-04c5-4cd3-b042-64d8f53fa6fe}} .
Let {{formula:7c684086-fe91-4d1a-b1c6-994ecd390e1f}} be the projection with {{formula:617cbf35-487a-4cbf-a1f7-cd76fd187df8}} and let {{formula:f6516adc-cd1d-432f-b751-e291f663d190}} be a piecewise Lipschitz atlas of {{formula:f2ae1c40-62bb-404f-be52-dc8b19be67dc}} . For {{formula:7092f133-a533-4852-93fb-af6a6373a7ad}} , we define the sets
{{formula:0469a63b-f805-4d73-83ff-fd40367f17a5}}
which are Lebesgue null sets due to Rademacher's theorem, see e.g. {{cite:ee2545caa3b40720b2611c22149bf2d1ca550d18}}. Thus, the set {{formula:923fc89f-fff5-48ea-87c2-e81804c472ee}} is an {{formula:b7531721-f051-493b-bdd2-5321f85816f8}} null set, see {{cite:ee2545caa3b40720b2611c22149bf2d1ca550d18}}. Combining this with Lemma REF , we now know that the outward normal vector {{formula:7b6a81d3-ca7e-43c3-a6ed-91fc266a7ece}} is well-defined for {{formula:e4e88cd4-8132-4671-89fa-4028a3d5f2c7}} every {{formula:4db5be07-1300-45c4-98f0-b947554861d7}} . As {{formula:5aafd944-4223-4139-9aac-47f0565f26d3}} is Lipschitz, this implies that {{formula:aa793fc9-55ce-4376-9caa-ca735a032f70}} is a Lebesgue null set {{cite:ee2545caa3b40720b2611c22149bf2d1ca550d18}}. Furthermore, we define the sets
{{formula:2e1854f7-7f85-4331-b360-e4ae330b8a82}}
By {{cite:ee2545caa3b40720b2611c22149bf2d1ca550d18}}, we know that {{formula:867a07a1-be89-49d0-b699-1bb9aec159b8}} is a Lebesgue null set. Let
{{formula:bbdc9d23-574e-40d3-8997-189beaa92a5c}}
and
{{formula:51beb758-905d-4d7d-9a14-392c5f353ffc}}
Let {{formula:97e94ba0-a4c9-47bd-adec-2300fece7c57}} . Then there is an {{formula:94a3eacb-26c1-412f-b5cd-0a34d5eed44b}} and a {{formula:f1a807d4-3bf7-41fc-8121-9924fba4bfb9}} , such that {{formula:aebbbde2-8847-42fe-82f2-895078d04d10}} . Thus, {{formula:419b9e57-1996-4c1d-8941-ce042909b8ac}} exists, has full rank and does not have {{formula:fffd4d5c-bee4-4859-860c-365db8969deb}} in its image. By Lemma REF , we know that {{formula:9ede3b56-ae4d-438d-ae40-dc8e4185fc8e}} exists and that {{formula:ffe33938-6b17-41df-a1fb-e261e696eeff}} . Thus, the function {{formula:d74cade7-5510-4fee-a123-a8a9bdc7d8e1}} given by {{formula:6d31b409-08a1-4632-8208-435ce1f063c1}} with {{formula:ad0b15fa-5015-4fc4-9c7a-15d97e8be7d0}} being the signed distance function to the boundary {{formula:096e33e0-943c-4a14-97f4-6c607b7fbd0a}} has non-vanishing differential at 0 and satisfies {{formula:0f03dea4-96f0-4632-9370-3dca24400ddb}} . Hence, {{formula:35947ec9-a587-4b12-81e6-7410fdc8fcbc}} changes sign at 0, which means that {{formula:4bc244f3-a9f9-4ffb-9b93-bf0d59820e4d}} . Conversely, this means that for {{formula:6cd2e7e3-af80-4b6e-afac-51aca6acadd0}} , the property {{formula:209a1891-6f65-4a90-b752-336ae9eeeafd}} implies that {{formula:73a47c5b-91c1-4239-b872-75f3fe6fbe61}} . Thus, we have for the set {{formula:03be22e8-8a70-492c-b074-b64dba5f2ba8}} defined in (REF ),
{{formula:3c9e2324-268f-4070-9c16-0cfbfddf8c6d}}
Finally, we observe
{{formula:721d4893-3588-44a9-9520-8ac6a508d053}}
which shows that {{formula:128d0155-883b-4650-86df-38066ef5e9bc}} is a Lebesgue null set.
| r | 736c2297d5ba0ad3a1a6e94641a3ef83 |
Most of our experiments use single U-Net networks, but modern approaches such as conditional generative adversarial networks (GANs) are likely to yield better performance. pix2pix also uses a U-Net as the generator, but the loss function is replaced with a dedicated “PatchGAN” discriminator network {{cite:6842a4bd3589c408db01fce7d269641e2977329e}}. This architecture could be applied to noisy/clean image pairs. BicycleGAN {{cite:382b50bf7e0e309c14a101f2944c2c1d661e10e1}} works in a similar manner but uses a cyclic loss with another generator attempting to generate the original image back, though our results reconstructing the noise (8) seem to indicate that this approach would be less effective. The generative network could also use a novel architecture such as that proposed in {{cite:a8601af3297857ec96ffb7c5cf25fcc469f402c5}}. GANs benefit from not using a predefined loss function, so they can focus on structured and representative features (such as believable facial features or vital detail in medical imaging {{cite:1f5a450173613240b6cfb184fbb48bb2e7ec579b}}) rather than a pixel-wise loss that is based on a non-existent one-to-one mapping. GANs have also been used to learn and generate noise samples that may be used for training {{cite:baa0bc8bbbcd12892424e06213076189bf67f50a}}; the performance of such an architecture when compared to a model trained on ISO noise remains to be determined. Besides GANs, there are also entirely different types of loss functions which are not pixel-to-pixel based {{cite:4fa4dfe5c17d76812a7f50917e03e6932cfb789c}} and may therefore perform better in the denoising domain, where cleaning images may introduce blur because there is no one-to-one mapping.
| d | 24b2f5d061afe6d74570eac712ad6029 |
Other Architectures and Tasks.
We tested OccamNets implemented with CNNs; however, they may provide significant benefits to other neural network architectures as well. The ability to exit dynamically could be used with transformers, graph neural networks, and feed-forward networks more generally. There is already some evidence for this on natural language inference tasks, where early exits improved robustness in a transformer architecture {{cite:cef52d7605bfcdcc8b46c7868130a78ad38d5a10}}. While the spatial bias is more vision-specific, it could be readily integrated into recent non-CNN approaches for image classification {{cite:9fe5f34746726c3293cbbf9ecd27510fc0229916}}, {{cite:08ab2511ae432a10d7d3792acbee661be67c86fb}}, {{cite:aa86425c6f2ff34857c6e2e03cb0c9110c4e480a}}, {{cite:254eb4ae5b8a436dcdbf449999c45729721c4892}}.
| d | bd97f074d653799553d83fa190f68770 |
The initial step towards extending FP to real-world games was taken by {{cite:28ed0f0f5215dc72586af4109f9a21289d22c769}}, which established the equivalence of normal-form games (represented by matrices) and extensive-form games (represented by trees with additional structure). Loosely speaking, this means that results which apply to matrix games may also apply to much more complicated decision-making problems, such as ones that incorporate temporal elements or varying amounts of hidden information.
Leveraging this equivalence, {{cite:4ce71073371bb5e78cb3b5d6e8de7c11631ceec4}} proposed an extension of FP to the extensive-form setting, full-width extensive-form fictitious play (XFP), and proved that it converges to a Nash equilibrium in two-player, zero-sum games. {{cite:4ce71073371bb5e78cb3b5d6e8de7c11631ceec4}} also proposed Fictitious Self Play (FSP), a machine learning approximation to XFP. In contrast to XFP, which is intractable for real-world games whose states cannot be enumerated in practice, FSP relies only on basic operations which can be approximated in a machine learning setting, like averaging (via supervised learning) and computing best responses (via reinforcement learning). In this way, FSP provides a version of fictitious play suitable for arbitrarily complex two-player, zero-sum games. Not long after the introduction of FSP, {{cite:a2f6d80b9f47b0bae1dc2cba80875b3e8e2d5cdf}} presented Policy Space Response Oracles (PSRO), a general framework for fictitious-play-like reinforcement learning algorithms in two-player, zero-sum games. These ideas were employed as part of the groundbreaking AlphaStar system that defeated professional players at StarCraft II {{cite:93906eef4e0f597b7eea7b013ae918d843a008c0}}.
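To make the normal-form starting point concrete, the following is a minimal sketch of classical fictitious play in a two-player, zero-sum matrix game (not an implementation of XFP, FSP, or PSRO): each player repeatedly best-responds to the opponent's empirical average strategy, and in the zero-sum setting the average strategies approach a Nash equilibrium. The game and iteration count are illustrative choices.

```python
import numpy as np

# Classical fictitious play on rock-paper-scissors (row player's payoff matrix).
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

counts_row = np.ones(3)   # action counts, initialised uniformly
counts_col = np.ones(3)

for t in range(20000):
    avg_row = counts_row / counts_row.sum()   # empirical average strategies
    avg_col = counts_col / counts_col.sum()
    br_row = np.argmax(A @ avg_col)           # row player's best response
    br_col = np.argmin(avg_row @ A)           # column player's best response (minimises row payoff)
    counts_row[br_row] += 1
    counts_col[br_col] += 1

# Both average strategies approach the uniform equilibrium (1/3, 1/3, 1/3).
print(counts_row / counts_row.sum(), counts_col / counts_col.sum())
```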
| i | a6317e519e282eae7c0506df32682dec |
For a star that will end up as a PPISN, electron–positron pairs will be
generated when the central temperature of the massive helium core rises to {{formula:6529e4b8-37d9-4fe0-99ab-3eb7e81e416c}} {{cite:4c4cf95eaeba888cc7b2524cb0480cd082813c36}}, {{cite:5a72dd640feec71f32d0f08b7a637217c9b8c4a8}}, {{cite:d3137f71ba1a40275e9efb70a04c22e5e7d54e13}}. Sudden
loss of pressure due to the production of electron–positron pairs leads to
the contraction of the helium core. Explosive oxygen burning then ensues,
eventually reversing the contraction of the core into expansion. Such
pulsational activity becomes increasingly energetic and sheds part of the
envelope during the final evolution of a PPISN progenitor. The duration of the
pulsational activity spans a wide range, from a few hours to {{formula:4920f840-5c7a-4f19-bc87-820a85d9c0ba}} {{cite:f36d0fe5854a80197f7bbe884b1fbdf31a74b019}}, in accord with the mass-loss history listed in
Table REF .
| d | 2ba35167f8bdc1de385f888b09180e7a |
Our work focuses on leveraging synthetically generated data through the use of modern 3D generated computer graphics using a couple of novel resources – Hypersim {{cite:2e53a8df2ae4472e775b4f757827e2263046d3ef}} and ThreeDWorld {{cite:4e93d9b63f131de083741a5d161d8fecb4740ea1}}.
In the past, leveraging synthetic data has proven challenging due to the particularly wide domain gap between synthetic images and real images. However, there have been some successes in tasks such as eye gaze estimation {{cite:155d24a4aff433c8bd740461431274ac463577c8}}, embodied agent navigation {{cite:7a6920b6e93513bb1d633082f032f599d350de7b}}, {{cite:8f406fbb0d7ba2418ee2e430de2bbefeac99633f}}, {{cite:f798e3998b0c5d1731da6ed24e97093410537877}}, and autonomous driving {{cite:6fd24ba123ff42edff679bbf97cdcddee3bcbd25}}, {{cite:77b89355725b743ffefea26deec406c67c72bdb6}}.
There have also been some synthetic datasets for visual question answering, such as CLEVR {{cite:3fd5a9d444880748053976a0b9399a5615c753bb}}, CLEVRER {{cite:097448ab92a863da3de13860b1f901889c3d900f}}, and VQA Abstract {{cite:59516a6c752b420d568313e39b4b0b4f2dcffd15}}. However, these VQA datasets build a closed world that is not designed to generalize to real-world images. Remarkably, some recent work has managed to show domain transfer from cartoon images to real images {{cite:516c068f5157111bed1d91f6128da884f8380279}}, but there is still a limit on how much can be learned from these existing resources. Our proposed Hypersim-VQA and ThreeDWorld-VQA datasets provide a promising alternative that more realistically captures real-world settings and offers a path forward in this direction. Figure REF shows synthetic image samples alongside samples from the VQA 2.0 dataset {{cite:ba7aefdda43bc8d2db127e1f2bf326fed52443f0}}.
| i | 331d8120183cb3b5c82a6f0f7971a6ba |
One of the biggest challenges in processing point clouds is dealing with unstructured point cloud data. Early methods for processing point clouds mostly rely on indirect representation conversion. Some methods convert the point cloud to structured data such as octrees and kd-trees {{cite:083941d57796463d7bea9166918262e3d3e1aa29}} to reduce the difficulty of analysis. Another classical approach converts the point cloud to voxel models. The voxel-based methods {{cite:3fc12d77ac18d0ea9bf702e6036ba23c40ff549b}}, {{cite:d58e2f479ecdb869cea731d75e61ebed6a21a605}}, {{cite:220f95d8d4566ce753f80b08b5ec45c7f865d18a}}, {{cite:88161342cd1209e7f46dbae80992eca5dbe6a59c}} use 3D convolution, a direct extension of image convolution to point clouds. These methods preserve spatial relationships well at high voxel resolutions, but they are computationally very expensive. If the voxel resolution is reduced, much of the geometric information that the voxels can represent is lost. FPNN {{cite:19388dbe6bf5d5ccddd2a97713b9e10854382ea4}} and Vote3 {{cite:1f124b74e198eb54b65f13f765ee2a3066d1d2fe}} proposed special methods to deal with the sparsity problem, but they still cannot handle large-scale point cloud data well. It is therefore quite difficult to achieve real-time performance while balancing accuracy and computational cost, and such indirect representations inevitably lead to the loss of geometric information. This paper instead uses a point-by-point feature extraction method, which avoids the high cost of voxel-based methods and their loss of geometric detail at low voxel resolutions.
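As a concrete illustration of the resolution trade-off discussed above (a generic sketch, not an implementation of any of the cited methods), the snippet below voxelises a toy point cloud at several voxel sizes: the number of grid cells grows cubically with resolution, while coarse grids collapse many points into the same cell and so lose geometric detail.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map each 3D point to a voxel index and return the occupied voxels."""
    idx = np.floor(points / voxel_size).astype(np.int64)   # (N, 3) voxel indices
    occupied = np.unique(idx, axis=0)                       # distinct occupied voxels
    return idx, occupied

points = np.random.rand(10000, 3)                           # toy point cloud in the unit cube
for voxel_size in (0.2, 0.05, 0.01):
    _, occ = voxelize(points, voxel_size)
    grid_cells = int(np.ceil(1.0 / voxel_size)) ** 3        # total cells at this resolution
    print(f"voxel size {voxel_size}: {len(occ)} occupied of {grid_cells} cells")
```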
| m | 94188365986b2c50ceaebd117e3ad072 |
which is counter-intuitive relative to most investors' notions of risk and reward {{cite:0d21526a1a6432f2910d10c5c6ece7c2cf8eb62d}}. {{cite:d454d0c4319e5179ace56a69501f57e734eff880}} extends the work of {{cite:0d21526a1a6432f2910d10c5c6ece7c2cf8eb62d}} and investigates high-frequency currency trading with neural networks trained via recurrent reinforcement learning. He compares the performance of linear networks with neural networks containing a single hidden layer and examines the impact of shared system hyper-parameters on performance. In general, he concludes that the trading systems may be effective but that the performance varies widely for different currency markets, and simple statistics of the markets cannot explain this variability.
| m | 7d19cb9314c7e872a875a6ef11bb0b4d |
where {{formula:a2b2ad39-4676-4bf6-a9f6-d246b32e1bf3}} denotes the interpolation operator.
The choice of {{formula:7dee3198-63a7-4041-bb72-36536e4c400e}} is crucial for the accuracy and stability of the scheme {{cite:4e98da2405f3cbbc0b3f2d1d63e136514ba3d208}}, {{cite:96829cb01a129bf5c4c5407787c4aa8e3b6a3cfc}}, {{cite:f745e350bbe0435312aa8f3c5eb100b59bc11dee}}.
In the following, we will employ an interpolation scheme based on a cubic spline using not-a-knot end conditions {{cite:aa5d98bf6ad32b3ab73af71a781254cab3df21e0}}, {{cite:3f06277655e4405296c362b463fbe2c9c2e96698}}.
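As an illustration, such an interpolation operator could be realised with SciPy's cubic spline, for which the not-a-knot end conditions are the default boundary type; the grid and test function below are placeholders rather than the quantities of the actual scheme.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Cubic spline with not-a-knot end conditions on a toy grid.
x_nodes = np.linspace(0.0, 1.0, 9)
f_nodes = np.sin(2.0 * np.pi * x_nodes)

interp = CubicSpline(x_nodes, f_nodes, bc_type='not-a-knot')

x_query = np.linspace(0.0, 1.0, 201)
values = interp(x_query)        # interpolated values
slopes = interp(x_query, 1)     # first derivative, if needed
print(np.max(np.abs(values - np.sin(2.0 * np.pi * x_query))))   # interpolation error
```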
{{figure:37ebb2f0-9167-4906-b607-70acb3dcb8ae}} | m | 5dbadb5d7adb8064cf8647c9541f1604 |
Federated Learning (FL) enables model training collaboratively in a decentralized manner without the need for data owners to hand over their data {{cite:9f91cc8cf509a26d751d378a5df546e561a71a98}}.
A typical FL pipeline consists of two main steps: (1) training a copy of the global model locally on each client's private data, and (2) aggregating the local parameters into an updated global model. These two steps are repeated until the model converges.
Model aggregation is usually performed by averaging the model updates (FedAvg). However, some participants could be faulty or malicious and share bad parameter updates, which can ruin the global model's performance and prevent it from converging {{cite:15d341b4e8de72eed88d233b545cbb2b84faa19a}}.
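A minimal sketch of this two-step pipeline with FedAvg aggregation is given below; the local update is a single gradient step on a synthetic least-squares problem, standing in for the clients' private training.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1):
    """One gradient step of 0.5*||Xw - y||^2 / n, standing in for local training."""
    grad = X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad

def fedavg(weights, n_samples):
    """Weighted average of client parameters (FedAvg)."""
    n_samples = np.asarray(n_samples, dtype=float)
    coeff = n_samples / n_samples.sum()
    return sum(c * w for c, w in zip(coeff, weights))

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(5)                                   # global model
for rnd in range(100):                            # communication rounds
    local_ws = [local_update(w, X, y) for X, y in clients]
    w = fedavg(local_ws, [len(y) for _, y in clients])
print(w)
```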
| i | c4662c9023ea5bf42d41212ae149f053 |
In this paper, we presented an effective and efficient multi-task learning network after a thorough study of previous approaches. We ran experiments on the challenging BDD100K dataset {{cite:cfdb4c29b2063ab43464facf293ddd1049bac17a}}. Our model achieved the best performance in all three tasks: 0.83 mAP for the object detection task, 0.93 mIoU for the drivable area segmentation task, and 87.3 accuracy for lane detection. These numbers are all large improvements over the baseline. In addition, we increased the frame rate to 91 FPS on an NVIDIA TESLA V100, well above the 49 FPS of the YOLOP model in the same experimental setting. This further illustrates that our model reduces computational cost and guarantees real-time prediction while leaving room for further experimental research and improvement.
| i | 1b5637dc3722ffe50645a7d2482667cb |
Rounding the SDP solution is very challenging. The SDP objective has a mixture of XOR-type terms, requiring that the two vectors corresponding to the endpoints of an edge that is not in the current cut are opposite, and OR-type terms requiring that one of the endpoints of an edge in the current cut coincides with a special vector the SDP is using. Even though the XOR-type terms have non-negative coefficients and standard hyperplane rounding could yield excellent approximations for the particular terms (like in {{cite:cd34213a812560433b352bdc768ec2349f6ae0d5}}), the OR-type terms have negative coefficients, making standard hyperplane rounding disastrous for them. So, we instead use an idea that originates from Feige and Goemans {{cite:845e5786db9ace8d1332cc5fde7e126e414cc3ee}}, which has inspired much follow-up work on approximating MAX2-SAT and related problems, but we consider it in an extreme that has not been considered before. In particular, the vectors in the SDP solution are first rotated in the 2-dimensional plane they define with the special vector and, then, hyperplane rounding is performed. We use the rotation function {{formula:28556c03-1fed-4912-8e77-cf5e51e2a67e}} , meaning that a vector at an angle of {{formula:59702331-4161-40a0-bd96-b08bcae45ebb}} from the special vector is relocated at an angle {{formula:b0d3769b-21ff-4f1c-bb71-4cafd0dc5e6b}} from it. This gives a rather poor {{formula:4aed4577-2990-4582-9579-f120356c7be5}} -approximation to the XOR-type terms but approximates well the OR-type constraints.
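The sketch below illustrates the rotate-then-round idea in this Feige–Goemans style: each SDP vector is rotated within the plane it spans with the special vector and a random hyperplane is then drawn. The rotation function used here is an arbitrary illustrative choice, not the one specified in the text, and the vectors are random stand-ins for an SDP solution.

```python
import numpy as np

def rotate_and_round(V, v0, f, rng):
    """Rotate each unit vector v within span{v0, v} to angle f(theta), then hyperplane-round."""
    r = rng.normal(size=v0.shape)                    # random hyperplane normal
    side_v0 = np.sign(r @ v0)
    rounded = []
    for v in V:
        c = float(np.clip(v @ v0, -1.0, 1.0))
        theta = np.arccos(c)                         # angle between v and v0
        u = v - c * v0                               # component orthogonal to v0
        nu = np.linalg.norm(u)
        u = u / nu if nu > 1e-12 else np.zeros_like(v)
        phi = f(theta)                               # rotated angle
        v_rot = np.cos(phi) * v0 + np.sin(phi) * u   # rotated unit vector
        rounded.append(np.sign(r @ v_rot) == side_v0)  # same side as the special vector?
    return np.array(rounded)

rng = np.random.default_rng(1)
v0 = np.eye(5)[0]                                    # special vector
V = rng.normal(size=(8, 5)); V /= np.linalg.norm(V, axis=1, keepdims=True)
f = lambda t: t + 0.3 * np.sin(2.0 * t)              # hypothetical rotation function
print(rotate_and_round(V, v0, f, rng))
```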
| r | 89ffaa7d870f962bcec5a0f3192cc701 |
Our proposed weak human preference supervision for deep RL allows humans to input dynamic and weak preference levels via our human preference scaling model, which reflects human behaviour, and reduces the number of human inputs required through our human-demonstration estimator.
Based on 5 experiments with the robotic physics simulator MuJoCo {{cite:ab9a0f1a46aeefe5b9f1aef234a9dfa419d383d6}}, our human preference scaling model for RL achieves higher cumulative reward values than the current fixed human preference model, and our human-demonstration estimator can reduce the amount of human input for dynamic and weak preferences by up to 30% without significantly sacrificing the reward values.
| i | 6a40702768199598ad9425eedc348904 |
We analyzed the responses using the Wilcoxon signed-rank test {{cite:6fbd54c2e649f2e44db4d32b1ed8a195e4f9d014}} to compute a pairwise comparison of the categorical responses between the baseline and confidence score conditions.
Table REF summarizes these statistical results along with the Rosenthal correlation coefficient {{cite:b9273e4fcdecc7ed9f858f6c41cb8fa755fc8caf}} ({{formula:d9c67861-ed59-4b00-b9b1-292e0d0a863f}} ) for effect size.
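A sketch of how such an analysis can be computed is given below, with made-up ratings rather than the study data; the effect size uses Rosenthal's r = Z / sqrt(N), with |Z| recovered from the two-sided p-value of the test.

```python
import numpy as np
from scipy.stats import wilcoxon, norm

# Placeholder paired responses (e.g. Likert ratings) for the two conditions.
baseline   = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4])
confidence = np.array([4, 5, 3, 5, 4, 5, 4, 3, 5, 4, 5, 5])

res = wilcoxon(baseline, confidence)     # paired Wilcoxon signed-rank test, two-sided
z = norm.isf(res.pvalue / 2.0)           # |Z| corresponding to the two-sided p-value
n = len(baseline)                        # number of paired observations
r = z / np.sqrt(n)                       # Rosenthal's correlation coefficient

print(f"W={res.statistic:.1f}, p={res.pvalue:.4f}, |Z|={z:.2f}, r={r:.2f}")
```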
| r | d7df571e53c3a15f3c2bd805b8a6aa60 |
The first term is the virtual logit in Eq. (REF ), while the second term is the energy score {{cite:7c3280066fc5a4e7f1fc1aef26cf818777b8dd05}}.
ViM completes the energy method by feeding extra residual information from features.
The performance is substantially better than that of either the energy score or the residual score alone.
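A rough sketch of this kind of combination is given below; the scaling constant alpha, the principal subspace, and the sign convention are illustrative assumptions here, not the exact definition used in the cited work.

```python
import numpy as np
from scipy.special import logsumexp

def vim_like_score(feature, logits, P, alpha):
    """Residual-based virtual logit combined with an energy term (larger = more OOD-like here)."""
    residual = feature - P @ (P.T @ feature)          # part of the feature outside the principal subspace
    virtual_logit = alpha * np.linalg.norm(residual)  # virtual logit built from the residual norm
    energy = logsumexp(logits)                        # energy-style term from the class logits
    return virtual_logit - energy

rng = np.random.default_rng(0)
d, k, n_classes = 64, 16, 10
P = np.linalg.qr(rng.normal(size=(d, k)))[0]          # orthonormal basis of a principal subspace
feature, logits = rng.normal(size=d), rng.normal(size=n_classes)
print(vim_like_score(feature, logits, P, alpha=1.0))
```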
| m | 6d29b48f121b8dd6b111033b7727fb6b |
We conclude that the overall properties of OGLE16aaa could be accounted for by the tidal
disruption of a star by a candidate SMBHB.
The delayed brightening of the X-ray emission as well as the multiple flux dips during the decay phase
are in agreement with an SMBHB model with a mass of {{formula:e1cf3712-1b07-48a4-aa50-5e1978b8c115}} M{{formula:597fe8eb-10e8-4046-afb1-d6841eae8002}} for the primary BH, a mass ratio of 0.25, and
an orbital period of 150 days (Table 3).
In comparison with the prediction of the SMBHB model, the X-ray non-detections in the early phase
could be attributed to obscuration by a dense column of gas, perhaps from an unbound outflow.
This implies that reprocessing may be a viable mechanism to explain the UV/optical emission
at the same epoch.
Ionization break-out then allows the X-ray photons to escape, resulting in the detectable X-ray emission at later times ({{formula:f3f63354-4e28-4670-b9f9-5a9e53a23498}} > {{formula:d9567b27-ba6c-4f0e-b07e-a83f341a6c9f}} 140{{formula:01e3c1f2-245f-44ec-9f53-fe397f1a1124}} ).
If our interpretation with the SMBHB model is correct, OGLE16aaa could be the second TDE candidate
with a SMBHB at its core revealed in the X-rays.
Upon final coalescence, SMBHB systems like the one in OGLE16aaa (and SDSS J1201+3003) are prime sources for future space-based
GW missions like the Laser Interferometer Space Antenna {{cite:9f8fbc7894064f27b27b72110b89ae2920a3c481}}.
Note that, given the estimated GW inspiral time of {{formula:9700e164-bf84-4ec3-859e-6697a680cb4b}} years {{cite:f239f0b9a14ef4597ee2ff4fd0e41a10cedc81da}},
it would be challenging to
detect the GWs from such SMBHB system in its current state of evolution.
In synergy with the Large Synoptic Survey Telescope {{cite:14e0ee84fbc4110767792295ea32e53cddd865db}}, future X-ray sky surveys such as the
extended Roentgen Survey with an Imaging Telescope Array {{cite:6c997e6458280a86fe7cd9c5a1bacc53e5185fc2}} and the Einstein Probe {{cite:4b7a157764ad3feefb146db2d5fc0a133764dc21}} are
expected to detect more than one hundred similar TDEs {{cite:dc55cb7609d03f20601a25b36a08958eb94b156e}},
providing a powerful tool for studying the physics of how the
stellar debris evolves after disruption, and for searching for promising candidates of milliparsec SMBHBs,
which are still poorly explored.
| d | 1d15cc6ced05bd68ce705049ec211ee4 |
Before moving on, we note some previous results in this direction {{cite:c98c1df6b7ac6d72ce4cc8a6d021a632b8a93090}}, {{cite:b91c378818791413e8e3b0601fff56132a39ac1d}}. In particular, Belenchia et al. {{cite:c98c1df6b7ac6d72ce4cc8a6d021a632b8a93090}} study a gedankenexperiment in which Newtonian entanglement enables superluminal signaling, and resolve the paradox by introducing quantized metric fluctuations. The arguments presented here are related, but precise enough to demonstrate an exhaustive list of possibilities: the only way to resolve these types of paradoxes within a unitary and Lorentz-invariant model is to include radiative graviton, or very graviton-like, degrees of freedom.
{{figure:fbe79870-ada9-4dc6-b199-43fd3e2bee5f}} | i | b50b710f64311bdaf1a09f3933ea3183 |
Experimental results on synthetic data and real-world data at different scales –namely, Telegram {{cite:61edb2b66e74bb31015c76f2849531c901057e7a}}, Blog {{cite:562a97786c89b480d4163fe2d7e16aed73fa8062}},
Migration {{cite:7b7f0fc73368d86951bc504da262088a2193a9a9}}, and
WikiTalk {{cite:6c46c48229728ba13b1dc7c0359df10de162912e}}, demonstrate that our method can achieve state-of-the-art performance for a wide range of network densities and topologies. On synthetic data, our method achieves performance superior to its competitors
(with respect to the Adjusted Rand Index (ARI) {{cite:9362cf7f5ebd5d6691e4c0eb8001455fd6f8afa4}});
on the real-world data, our experimental results indicate that our method outperforms them
when imbalance scores are used as outcome measures.
We also apply our loss function to the tasks of node classification and link direction prediction,
and witness a modest average gain in accuracy
on the benchmark data sets Cora-ML and CiteSeer {{cite:cdb9265facb67a7f3aba3f4d174d58004149c321}}.
| i | b8ac2690357e58ed031836a0940898a4 |
Indirect detection experiments such as the Fermi Large Area Telescope (LAT) {{cite:6002bf0a9771ccb24d79142706cff1748e7f62f4}}, AMS {{cite:86ca7e585ddf541fdea3f5358c4b10456ba7cda7}} or IceCube {{cite:1edd8e074e1d9fc50390dd662ffe5c4e3a34d19d}} provide one of the possible ways to detect WIMPs. Theoretically, WIMPs undergo annihilation {{cite:9a7330ea3327bae3881e83d8e07b88e2dd8cafe8}}, {{cite:e386f600f7cd3606c0ea9e5887eed0d18e437214}}, co-annihilations {{cite:073f3aa983dfff22c83ca81271622842538aff55}}, or decays {{cite:156488dbd3401e77a4afa1f80748324940fa5b3e}}, {{cite:3091963ae4454caf2f133c51fd0aacf149a45921}} into a set of stable SM final-state particles such as high-energy photons, positrons, neutrinos, or anti-protons. Recently, an excess in the gamma-ray spectra was detected by the Fermi-LAT {{cite:f7970af75b23a4238d3796611481ccf5931e7251}}, called the Galactic Center Excess (GCE), which appears to be consistent with predictions from DM annihilation (see e.g. {{cite:055aff6854cf71a9b431e1e9b7cbe07166ffa4d6}}). On the other hand, several attempts were made to address the GCE within particle physics models, in particular within supersymmetric models {{cite:a127fff9005041c93ce87d9c650e92fd59317b0a}}, {{cite:423850cced0ef26694bd2839c56344c0ec958487}}, {{cite:3ba8850cb748ba4b44e0ab22f28fc0aa0719a49d}}, {{cite:16e6bb9a3b58cc148881a0ae791fe2e020b630ba}}. An important finding is that the quality of the fits depends crucially on the theoretical precision of the determination of the gamma-ray spectra {{cite:a127fff9005041c93ce87d9c650e92fd59317b0a}}.
| i | 09727f92f498363c6fd468516e4335da |
We now present a mini-batch linear programming algorithm to find the best {{formula:79bbd86e-d5cf-4667-acbc-1b99a8589f88}} given {{formula:5f4efc67-fd9f-4d70-a504-d3e292ca9f25}} in (REF ). Notice that the problem (REF ) is a linear program on the bounded convex set {{formula:e6d45b04-7728-45dd-913c-aea25d9213e7}} of the vector space of real {{formula:886b0c89-a5da-4558-9b68-6f27d1407ab4}} matrices. By Choquet's theorem, this problem admits solutions that are extremal points of {{formula:e63e9bd6-d464-446f-9120-fa73ed8c95d0}} . The set of all doubly stochastic matrices {{formula:f17fed8d-f5cd-43b3-a3da-5be1d95f73a5}} is referred to as the Birkhoff polytope. The Birkhoff–von Neumann theorem {{cite:b08189563afd4aeccc014ea44a3b45b90a77e12f}} states that this polytope is the convex hull of all permutation matrices, i.e., those matrices such that {{formula:5d764805-990f-4cee-8a3d-2ceb51634816}} for some permutation {{formula:2f5dbcf3-81ab-47ec-88c2-0efd1b9f5dd6}} of {{formula:44b36459-6f26-4cf3-b846-97163e0048d3}} , where {{formula:404e85ef-11c3-4c1c-99df-918bad898133}} is the Kronecker symbol.
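The consequence of the Birkhoff–von Neumann theorem used here, namely that a linear objective over doubly stochastic matrices attains its optimum at a permutation matrix, can be illustrated as follows; this is only an illustration of the extremal-point fact (solved with the Hungarian method), not the mini-batch algorithm itself.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 5
C = rng.normal(size=(n, n))                    # coefficients of the linear objective <C, P>

rows, cols = linear_sum_assignment(C, maximize=True)
P_star = np.zeros((n, n))
P_star[rows, cols] = 1.0                        # extremal (permutation) solution

# Any other doubly stochastic matrix, e.g. the uniform one, does no better:
P_uniform = np.full((n, n), 1.0 / n)
print(np.sum(C * P_star), ">=", np.sum(C * P_uniform))
```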
| m | 478fee2935c1c4103de57bd22e2cdae2 |
While CNNs were originally developed to perform computer vision tasks, the grid data representation used by CNNs is flexible and can be used to process datasets arising in many different applications. For instance, in the field of chemistry, Hirohara et al. proposed a matrix representation of SMILES strings (which encode molecular topology) by using a technique known as one-hot encoding {{cite:49dd1c55b739e2590da1f1dd21ffe0ed95a682cb}}. The authors used this representation to train a CNN that could predict the toxicity of chemicals; it was shown that the CNN outperformed traditional models based on fingerprints (an alternative molecular representation). Via analysis of the learned filters, the authors also determined chemical structures (features) that drive toxicity. In the realm of biology, Xie et al. applied CNNs to count and detect cells from micrographs {{cite:9d1847bd4e5e987b89989d4de05d17563f6d3df9}}. In the area of materials science, Smith et al. have used CNNs to extract features from optical micrographs of liquid crystals to design chemical sensors {{cite:2dd7111e1bf8988d5aa33333f93dd8979297ac74}}.
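A minimal sketch of this kind of one-hot encoding is shown below; the character vocabulary and maximum length are illustrative choices, not those used in the cited work.

```python
import numpy as np

# Toy character vocabulary and mapping for SMILES strings.
VOCAB = list("CNOSPFIclBr()[]=#123456789+-@/\\")
CHAR_TO_IDX = {c: i for i, c in enumerate(VOCAB)}

def one_hot_smiles(smiles, max_len=40):
    """Encode a SMILES string as a (max_len, vocab_size) one-hot matrix for a CNN."""
    mat = np.zeros((max_len, len(VOCAB)), dtype=np.float32)
    for pos, ch in enumerate(smiles[:max_len]):
        idx = CHAR_TO_IDX.get(ch)
        if idx is not None:                  # unknown characters are left as all-zero rows
            mat[pos, idx] = 1.0
    return mat

x = one_hot_smiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(x.shape, x.sum(axis=1)[:10])           # at most one "1" per occupied position
```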
| i | fb9c11e5d8fa7f7d79194eae40d9e115 |
The derived abundances for the different members of the C{{formula:9c82747a-3e86-41ce-b213-96fbd11f134f}} O ({{formula:18514ea0-10d5-4b62-945f-1da670343e37}} ) and HC{{formula:5dffca76-aa04-4d64-a87b-350001a2391a}} O
({{formula:19eaf07b-ef34-453b-92ed-6e8c3c6dc6de}} ) families permit one to study the possible routes for their formation. It is worth noting
that HC{{formula:92c0fada-71a6-4543-813d-5859ceef0538}} O is the most abundant species of both series, and that the heavy species
HC{{formula:f453af0c-5b32-4263-adc1-82f35836d2df}} O is only slightly lower in abundance. This is a peculiar result indeed, as it is
clearly opposite to the behaviour of the HC{{formula:c35abfb6-c756-4815-9027-6e2ad2756ae6}} N family, where the abundance decreases
with {{formula:381d2016-dba1-443d-8744-c4d84a956d23}} , and the different molecules are formed by the consecutive addition of CCH or CN
{{cite:b1c5c0109b64c7bd20f8d7db7c1e4f37368e38bb}}, {{cite:01795cfe1b04029ef8278da45cbcc770f67db513}}.
Nevertheless, the peculiarities of HC{{formula:760614e6-9dbf-4873-a975-4b14ef0e483b}} O and HC{{formula:7a431d21-eed9-4dc8-9464-dc9c7c919afa}} O are not restricted to the physical conditions
of interstellar clouds. In the laboratory experiments devoted to the rotational characterisation
of these molecules, {{cite:067787f1895df9a32cf569a5b8a7081b012157ae}} reported that HC{{formula:d7c16394-f857-4e62-8d0d-9b0fdb9c8457}} O was
the most abundant species generated in their discharges for different precursors.
In the Stardust and AROMA experimental setups of the Nanocosmos
project {{cite:85db893a2958b4d9ecdf3398640920592978df74}}, {{cite:7ffca07406d615af1b5fa6b872b1e71da7f3559f}}, {{cite:afd5ce836714ee68c10d7c862fb56a9d3a76c265}}, it has been observed that in experiments
using pure carbon as a seed for the
growth of nanoparticles, masses associated with HC{{formula:f6710683-7faa-47a3-8788-0943a334f6f2}} O and HC{{formula:df0d249c-0b56-4f6c-b6e8-383bf90ba09f}} O
are clearly detected, with that of HC{{formula:6e20f7d5-7837-4318-911e-3f38455d4e22}} O being rather prominent and comparable
in intensity to C{{formula:6f06fea4-59b5-478f-991d-786ae81a19e9}} H{{formula:e8e9fcc5-4ca2-4a74-805b-97a31973cc27}} {{cite:85db893a2958b4d9ecdf3398640920592978df74}}. In similar experiments, but adding C{{formula:4c6c30da-d8aa-4d62-9db1-8a4125c96605}} H{{formula:b4011aa9-353e-4cec-9a29-1f6e1f8e751c}}
to the growth chamber, HC{{formula:8ecc0df5-7858-45f5-9aa5-9e105f845a49}} O, HC{{formula:76272bd9-2b30-4d30-9013-af1a5759189c}} O, and HC{{formula:7dfabc0f-5580-4fe5-95a3-f6c725b9a3b6}} O are also detected.
The source of the oxygen for the formation of these molecules is probably related
to air contamination (O{{formula:4189dc1a-686e-4283-9fd6-ab037f7c5d46}} ) during transportation of the samples {{cite:85db893a2958b4d9ecdf3398640920592978df74}}, {{cite:7ffca07406d615af1b5fa6b872b1e71da7f3559f}}. No other
oxidised molecules were detected in these experiments. Although the physical conditions in
interstellar clouds and in these experiments are very different, it seems that these molecules
are easily formed under a variety of conditions.
| d | 11b9834748937be157950fa01637af2c |
The author thanks Joachim Stadel for many helpful discussions and the suggestion
to allow {{formula:408c9080-3543-48c7-8a57-8be25116997f}} , Alessia Gualandris for running sapporo
to provide the data for Fig. REF , and Simon Portegies Zwart and
Jeroen Bédorf for providing the timings for sapporo 2 in
Fig. REF . This work was supported by STFC consolidated grant
ST/K001000/1.
Derivation of the FMM relations
Here, the FMM relations given in Section are derived and
motivated. Differently from the main text, the multipole and force expansion
centres, {{formula:96437215-f299-495f-a87d-55e8c689afe4}} and {{formula:0e443618-a321-4c1f-b90f-2e31de440963}} , are not explicitly distinguished and instead
{{formula:d87b77fe-7cd3-4c1c-a3da-a2183c1f89cd}} is used for either. The general case {{formula:86c166c2-ca01-4437-a4bf-699d6880c2f4}} is a trivial
generalisation.
{{figure:ec6efc31-58c7-4d1c-8a0d-833266534f25}}
Cartesian FMM
The distance vector {{formula:18d9b453-dd81-4d33-8bd4-f9a4c49b609f}} between two particles residing in
two well-separated cells {{formula:f42424fb-0f1f-4f22-ad19-bf3eab70da64}} and {{formula:75fd5086-6249-4375-a2ae-f3cda6634c1a}} , respectively, can be decomposed into three
components (see also Fig. REF )
{{formula:a7f4c7ea-e9b0-4a9b-972b-7694e2e26d40}}
with {{formula:5912a115-dd90-4948-a483-db492a867b6e}} , {{formula:731aa24d-6b87-49a3-8373-e090ab184aac}} ,
and {{formula:a52f3675-7fb3-4ea3-953a-8648e2bc00bb}} . The Taylor expansion of the general
Greens function {{formula:38ae4117-3d1a-49ed-974b-a4dc875dbc82}} in {{formula:3167d4dc-20fa-4ca6-a2c7-a0b914700e34}} and
{{formula:69c05b3e-45ec-44a8-ac07-f07c655af136}} up to order {{formula:fc73c9b7-586d-4b92-8171-0e335da8a856}} then reads (using multi-index
notation {{formula:532eebdf-f8b1-416e-b937-d3d17da3bc43}} with
{{formula:d88ab4d5-3f50-4493-885f-072ced28bace}} , such that
the first sum in (REF ) is over non-negative integer triples
{{formula:4a273972-bee5-4f71-a06f-8728ba1eb1de}} with {{formula:90dfe8f8-801d-4863-bc21-f5fb34b44388}} ; furthermore
{{formula:7f443441-4ed9-4970-9adc-2f3d945269ea}} and
{{formula:097eb96a-8a8f-4352-85bc-84fd5458b9bf}} ):
{{formula:6d667027-0c2f-411a-b8f5-42f6a48cf1b8}}
This series converges (the remainder {{formula:e865c3c6-bd2d-4ee3-90b9-2a4151e42181}} ) as {{formula:9e5a96ae-fa91-4947-9390-c0fdd2b2005d}} , if
{{formula:69faa4a0-cab5-4abd-ab48-ccefa433a683}} . Inserting (REF ) into the
expression
{{formula:2de1e9a0-7fa4-4173-9392-e3049e3b48e7}}
for the (negative) potential due to all source points in cell {{formula:a69ec0d4-464d-4bb6-9008-905d4b8e5357}} and for any
sink position {{formula:416550dd-f6a9-4ee1-8458-d844dab1db59}} in cell {{formula:73733904-5ee3-40b4-a325-026845175103}} , one obtains after re-arranging
{{formula:6d696ed4-5af5-45b4-87fe-1e962d8c3727}}
with the derivatives {{formula:085f27c0-1653-4a06-bff6-7fbb94ff278a}} . The FMM algorithm essentially works
these equations backwards: in a first step, the multipoles
{{formula:c0fb05fc-f7fa-4827-84d8-7e7f47e66183}} are computed for each cell via
() and by utilising those of daughter cells via the shifting
formula
{{formula:cf15f427-7dda-45b5-b0ed-08decd7f61b4}}
Second, for each cell the field tensors
{{formula:bb457881-0c3a-41c6-9329-c3f5898a60db}} of all its interactions are computed via
() and added up. Finally, the field tensors are passed
down the tree, utilising the shifting formula
{{formula:3f733178-8f44-4548-8669-0b16b2bdbc3e}}
and the potential (and its derivative, the acceleration) is evaluated via
(REF ) at each sink position. Equations (REF )
are the basis of Cartesian FMM, such as implemented in {{cite:c0fec1c42a8ade56c6b1967568fcf5c1a50eea3a}}'s
({{cite:52639deb069be49d6d7e836c26793102254d5a2e}}, {{cite:c0fec1c42a8ade56c6b1967568fcf5c1a50eea3a}}) falcON algorithm.
At each order {{formula:290fbfd7-ec52-46d3-b4f3-ed9470316605}} , there are {{formula:25e7fafd-c349-4c0a-94c5-3534a09fb97b}} coefficients
{{formula:c76c8539-445d-4e4d-96f9-bcf124bf14fd}} (as well as {{formula:0016d406-6499-4ace-8178-d0bd1e96e330}} and
{{formula:22bdf2af-f426-40b5-8e0d-09779194e7fc}} ), and the total number of coefficients up to order {{formula:b5dab643-48b8-4ded-bd97-737a7b604293}}
is {{formula:af3abe62-1d43-4a0c-b3c4-f3db9bf43d7c}} . The computational effort of the resulting algorithm is
dominated by their computation in (), which requires about
{{formula:d1fa25e3-3ec0-4e75-a3cf-cd6499a3ab89}} multiplications. Thus at large {{formula:fefa7ea6-a1ee-4ab3-816c-651cd4f36861}} a straightforward
application of this method approaches an operation count of
{{formula:a0b6cf2c-2eae-4257-be6b-fbf5f4eeb1ef}} . The computation () of the field tensors
is essentially a convolution in index space and hence can be accelerated using a
fast Fourier technique with costs {{formula:0336143f-20a1-46b8-95ee-7baf5454484c}} (but see
footnote REF ).
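The following is a simplified numerical sketch of the low-order Cartesian multipole idea behind these relations for the 1/r kernel: the monopole, dipole, and traceless quadrupole moments of a cell are accumulated about its centre and the far-field potential is evaluated from them, compared against direct summation. It is a toy check, not the falcON implementation, and all choices below (masses, positions, sink) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m = rng.uniform(0.5, 1.5, size=50)                 # source masses
x = rng.uniform(-0.5, 0.5, size=(50, 3))           # sources in a cell around the origin
zA = np.zeros(3)                                   # expansion centre of the cell

# P2M: multipole moments about zA
r = x - zA
M0 = m.sum()                                       # monopole
D = (m[:, None] * r).sum(axis=0)                   # dipole
Q = 3.0 * np.einsum('j,ja,jb->ab', m, r, r) \
    - np.einsum('j,j,ab->ab', m, np.sum(r * r, axis=1), np.eye(3))   # traceless quadrupole

y = np.array([6.0, 2.0, -3.0])                     # well-separated sink position
R = y - zA
Rn = np.linalg.norm(R)

# Far-field evaluation up to quadrupole order vs. direct summation
phi_multipole = M0 / Rn + (R @ D) / Rn**3 + (R @ Q @ R) / (2.0 * Rn**5)
phi_direct = np.sum(m / np.linalg.norm(y - x, axis=1))
print(phi_multipole, phi_direct)                   # agree up to the neglected octupole order
```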
Harmonic tensors
For the important case {{formula:73ec60ed-f42e-47f8-b0c2-b9653a377518}} corresponding to gravitational and
electrostatic forces, the above method can be improved by exploiting that this
Greens function is harmonic, i.e. {{formula:e3f79fda-095c-42e3-b2c0-082c3c8fc376}} for
{{formula:8963a846-c8cf-458f-964b-73c4631b9b33}} . As a consequence, the {{formula:98658227-fdf0-447d-b0c5-e47efb2d5756}} are harmonic too and satisfy
{{formula:4f0be37a-0348-46e7-9123-65f0d8685dbb}}
In other words: {{formula:428bf5fb-6d56-438e-922e-56e4b2e5d357}} is traceless. At given degree
{{formula:c6c33286-bf01-41f9-b8f4-e13ccf75b04d}} , equation (REF ) gives {{formula:6c6edb65-43ab-44e2-a9ef-eef4bdb6db75}} constraints such
that of the {{formula:555732a4-7ab3-4b45-b953-37b5e80d692d}} terms only {{formula:07d721fe-9f49-469b-9ae2-6ad4b4a3749c}} are truly independent. In inner
products, a traceless tensor only `sees' the traceless part of its co-operand:
{{formula:c6cf3c46-6bfc-41f3-bf03-46b8cfd6b326}}
where the `reduced' tensor {{formula:bd049af9-28bc-4706-9006-c2a7dd7f31ec}} denotes the traceless
part of {{formula:592697fd-0523-4072-a5c0-fcdce025f1a1}} . Furthermore, {{formula:8af1d72e-3257-4b4a-840e-c4fc813e30c6}} is
related to {{formula:39531d91-638a-4584-bad7-1cc7b86abf74}} via
{{formula:2e1f2445-2052-4322-9cf9-a0f87ac6dd4e}}
With these relations, the Taylor series of the harmonic Greens function becomes,
for {{formula:73177bdb-e0d4-4e96-ac11-1925dfcbd034}}
{{formula:2652a671-c616-4dc4-87ff-baf29d15278d}}
which is the Cartesian equivalent to the spherical harmonic expansion
{{formula:924571a8-f981-46ef-8016-49cfccbecb56}}
(see eq. (REF ) for a definition of {{formula:20d03971-2282-42d5-8b08-88c067e8cfe8}} ). While at each order
{{formula:dd598565-10aa-497c-b2b8-e052f1f1be51}} there are only {{formula:1bef16b7-2e43-4448-a497-8a8ba42b8049}} truly independent terms, the expansion
(REF ) still carries all {{formula:bf90cefb-9bc8-4e51-8a48-574abde1c5a4}} terms, amounting to a
total of {{formula:6ca6e3e9-3ebc-47a6-9101-10235b5fc3e5}} terms in an expansion up to order {{formula:ab250acc-bea4-48dd-b2e1-eecdef4a5c33}} . The equivalent
spherical harmonic expansion (REF ) only carries {{formula:38e385ec-6dec-4d2e-83da-59a31f3b39ec}} terms per
order (in equation (REF ), the {{formula:70af340e-2925-422b-9ba2-f11ebf2bf965}} are complex-valued
for {{formula:eb132ed3-d55b-46ed-a3ba-f4d135ca0d30}} , but because of their symmetry {{formula:30d790be-d227-4540-b561-6695a6485940}} there
are only {{formula:bf6056c0-f577-40c7-a0ef-a548a08e87b8}} independent real-valued components per order {{formula:5e95ca5e-8a29-4dc5-bc01-ca58316c6c04}} ), amounting
to a total of {{formula:710addc6-f078-43fa-b93e-385ae2fb816f}} , i.e. at large {{formula:f6816587-c98f-4366-aa6a-0601fabb1031}} it is much preferable.
The number of terms actually used can be reduced to {{formula:8fd4066d-49da-429a-854c-962f95e93ca5}} per order, for
example, by omitting all terms with {{formula:4ef69e2e-3356-4696-b64a-abbadb2ec717}} and recovering their
contribution via recursive application of
{{formula:c66b2915-6633-4a64-84f7-148837db4e3c}}
{{cite:ee2c67539dcd44b94c750ec848c2a0f5b3603b98}}, {{cite:965b100430af4522d79b3ec35a465d9941685f5a}}. However, the resulting algebraic
challenges are considerable, though the overall computational effort could
well be reduced to {{formula:96a933fc-ab7d-4f92-885c-cface5e493f4}} operations (Joachim Stadel, private
communication), but I am not aware of a systematic demonstration.
Spherical harmonics
The algebraic complications with obtaining an efficient Cartesian FMM stem from
the fact that the Laplace operator involves three terms, such that the resulting
recovery relation (REF ) has two terms instead of one on the
right-hand side. This problem can be avoided by Taylor expanding in other than
Cartesian coordinates where the Laplace operator involves only two instead of
three terms.
The simplest possibility is a linear combination of Cartesian coordinates with
complex coefficients. The standard FMM relations emerge from replacing {{formula:c0c840c4-bf39-499b-8a03-fafa0cab3fbb}} and
{{formula:383fa98d-d8d0-4be6-9d83-6fd6692a53a5}} with
{{formula:2ec142a1-e1cd-4b07-8857-8f5f2bfd2433}}
while keeping {{formula:801d2d08-91d3-46fa-8da5-2c51f6147ec1}} . Then {{formula:87da43d0-e955-4084-8d02-a02ea70a5af2}} and {{formula:80eead50-4979-41b8-a9b5-56bb35613078}} , such
that {{formula:f46d5a3a-87ea-4b14-83b6-e598b49d80e7}} and hence for harmonic
functions
{{formula:557716c7-3625-4814-9735-8b4ae0df5960}}
or {{formula:fe726da4-be5d-411d-9be8-a76bb4ac3b72}} in place of
equation (REF ). With this relation one can eliminate all mixed
{{formula:20dcc633-cd6d-46a5-ad2e-9e30065f728c}} -{{formula:5a0b2448-602f-4e65-9679-0c4b52e7c2f3}} derivatives in favour of {{formula:1f2cd1a7-7714-4b6c-a4d3-9282c4b8d243}} derivatives. This in turn allows a
reduction in the number of indices from three to two by using the total number
{{formula:c1fa5b17-6479-400b-b34a-f74b2fee609d}} of derivatives and the number {{formula:da6a689f-e963-44c7-80a3-7e2fd4ca93fa}} of {{formula:53ecbad1-f688-401e-ab9c-0e0ae0cc1a85}} (for {{formula:84688006-5436-4148-b781-9f6bd00dfdf4}} ) or {{formula:dd919d47-ef70-4803-895a-d8126b26c4fa}}
derivatives (for {{formula:296e7b9e-b829-42bd-9ab9-0fe3dd0bceb2}} ).
Somewhat surprisingly, the relations required for FMM are hardly covered by the
rich literature on spherical harmonics (and FMM). To derive the relevant
formulæ, I follow the ideas of {{cite:675d6ac281812249ed8a9c71c9013cc01c1e7a50}}
({{cite:675d6ac281812249ed8a9c71c9013cc01c1e7a50}}, see also {{cite:4ae417943391b7f1dad3edbb941a394a214d1460}}) and define the
differential operator
{{formula:a8840b7a-d3a0-4e11-b81b-b71f4b26d0aa}}
When applied to harmonic functions, this operator satisfies
{{formula:6eeec6a8-662e-47e4-a5dc-6b249ffa8a4c}}
which can be shown via equation (REF ) and is inevitably linked to
{{formula:1eceb0da-4fdf-4d03-89ba-d108611b8815}}
Since {{formula:3b3c4453-f7b0-4000-8f7e-3ceda6159fa4}} is harmonic, its derivatives
{{formula:bac0d2f9-5a79-476c-8e16-63ff655260ba}}
are harmonic too. Moreover, the functions {{formula:9365b725-0749-4928-a8a0-62e00bf2ddb9}} are homogeneous
of degree {{formula:17bc0bf7-4f5c-4e13-8e64-3c5b6e092077}} , i.e. {{formula:8df04b77-eee5-4938-8c0f-e023a9fc839c}} . I also define the solid spherical harmonic of
degree {{formula:b47dabd6-2d8d-451b-b828-dec21b923a92}} as
{{formula:89b0a157-ebb0-4db1-a561-bc6ad52b7963}}
That {{formula:5f08708b-4adc-42ab-b543-257726cb7a11}} is harmonic follows from the fact that if {{formula:7cd743df-33f9-487c-bf4c-a1dadac56263}} is
harmonic, then so is {{formula:477513cc-8c1e-4fac-bbdf-feb02b81e804}} {{cite:faf37f2b6a9858ba38673ea6fe673d62e9e9ce21}}. Note that {{formula:2b760868-c4bf-4100-a1b0-6744b8ae4235}} is just
a homogeneous polynomial of total degree {{formula:54520457-fdd0-43c0-bb06-d8099fcfe4ae}} in {{formula:b2c1dc8e-483b-486a-bca6-b7fc93d71768}} , {{formula:22ea67bc-5591-494e-9470-a0706f9552a2}} and {{formula:27ae78ed-044b-4e32-a371-8eac22ad5b89}} . These
harmonics are related to the usual normalised surface spherical harmonic
{{formula:c12f201c-8817-4fe3-9c9a-b87923a82272}}
via
{{formula:3f9dfd7e-a6b3-4b6d-b397-745554702f1c}}
Table REF gives the first few harmonics in terms of {{formula:cf472984-3a99-4649-87b8-6f20241f764d}} .
Spherical-harmonic FMM
In order to derive the relations for spherical-harmonic FMM, one must obtain the
equivalent to the Cartesian Taylor expansion (REF ) and shift
operations (REF ,e). Via induction one can show that when applied to
harmonic functions
{{formula:eb36f008-08fd-44ed-8f22-73f975f283a9}}
which gives the translation operator for harmonic functions
{{formula:152b39c4-86be-4373-b92a-aca50b45167b}}
When applying this to the harmonic Greens function, one gets
{{formula:bfc10155-eeeb-44e7-9e03-db7daa38dd30}}
which, because of equations (REF ), is equivalent to the standard
form (REF ) and converges for {{formula:b42506b7-e020-4e17-b599-55663f9ceb4f}} . Translating once again and
employing (REF ) yields
{{formula:3f17317d-afb8-421d-b726-765b9e07e871}}
which converges for {{formula:8fbdb164-0172-403b-ade9-32a1cf2202eb}} . Comparing (REF ) and
(REF ) one finds immediately the translation formula
{{formula:cd568afd-9fd7-47c8-8d11-64782cce9db7}}
When applying the translation operator (REF ) to {{formula:517fb041-4ee6-4e5f-bc1f-d94f438f8db8}} ,
one gets
{{formula:325c51b9-7c09-4b72-8831-dcad4fee4314}}
As the Cartesian FMM relations (REF ) were based on equation
(REF ), the spherical harmonic FMM relations (REF ) are
based on equation (REF ), which for {{formula:3fedb4b5-0f00-449a-8b8e-728e05c08f22}} is
completely equivalent but computationally more efficient.
{{table:501be2a2-7d6e-41dc-ae90-020d71b2e3d9}}
Implementation details
Recursive evaluation of spherical harmonics
One may also obtain the relations
{{formula:d3fea4ff-5e87-44c4-bca1-58bb7647202e}}
The first one follows immediately from equations (REF ) and
(REF ), while the second can be deduced by equating
(REF ) to {{formula:82e9a39a-442b-45f7-b7b7-618f52391306}} obtained by applying the
translation operator (REF ). From these two relations combined with
the operator relation (REF ) and the definitions
(REF ) and (REF ), one can obtain numerous recurrence
relations. For example, (omitting the arguments for brevity)
{{formula:bf9db234-1690-4b8c-9d0b-fc4cac49d6dd}}
which are equivalent to the recurrence relation
{{cite:5cf4d7ec3d3ddf059f260bf8ad8c71e6cd8f92b5}} for associated Legendre functions
and, together with
{{formula:76264a3e-b05a-4c2b-ad5f-70f54d612e58}}
as well as their counterparts for {{formula:261f867f-abc8-4d86-acfd-91c0a5cc0e06}} , allow for an efficient and stable
evaluation of {{formula:09919e6c-166b-4c11-84c3-c9feee47812a}} and {{formula:7ed675a5-9932-4ff0-8b37-c9b14597ec40}} .
Differentiating these relations with respect to time, one obtains recursion
relations for the time derivatives of the harmonic functions. For example,
{{formula:5148b8b7-8076-4960-83cb-8bb3816dbe3d}}
Alternatively, from equations (REF ) and () one may also
directly derive
{{formula:ab4f42ea-906c-4d5a-abb7-db66cc38fc8f}}
Real-valued spherical harmonics
Because of the anti-symmetry relation (REF ), the complex spherical
harmonics defined above are redundant: there are only {{formula:9c2f8736-a288-4c6e-891f-003979b9cb29}} independent (real)
harmonics per order, in agreement with the counting in
Section REF . Hence, for any practical application one needs an
appropriately reduced set of {{formula:a05372dd-80df-4142-8652-b5e47592be15}} real-valued independent spherical harmonics
per order. The simplest option is to consider real and imaginary parts of the
complex-valued harmonics with {{formula:acc64a9c-c50e-4b57-bd4f-5bc30450c41a}} :
{{formula:ca2308dc-cb18-4f0b-8c4d-5760444bcb0f}}
and
{{formula:d3b24ed4-2993-4f14-81d9-ae156cd7c0b6}}
The relevant relations for these real-valued spherical harmonics are best
directly transcribed from the corresponding complex relations.
Accelerating FMM relations
The FMM kernels M2L, M2M, and L2L (equations ,d,e) all require
{{formula:9c8ea2aa-06c7-411a-a402-796b517e3576}} operations. However, if the interactions or translations
are along the {{formula:febc2a1a-48ba-4835-9142-e4488d8e6db6}} -axis, the costs are only {{formula:4d4f7ba3-fcec-4dd1-aebe-a7a3ddc0a0c5}} because
{{formula:3660e1e4-e69c-49fa-ac80-1c4f0a131b96}} .
One method to exploit this is to first translate along the {{formula:9ff2ba62-af7a-4c31-b575-f36ebc661179}} -axis and then
perpendicular to the {{formula:97bb367d-7387-442f-a163-8b65158d3ec1}} -axis. For a vector {{formula:69f48e32-ea57-4b60-b531-29246c32c5a6}} perpendicular to the
{{formula:e0c3126f-2982-4e60-9a02-048453e206b9}} -axis, {{formula:e422d919-a970-4e73-a32f-e10f2f351aad}} vanishes whenever {{formula:5336a8f0-cc77-4ec0-8027-60fd19db8cc0}} is even. This
implies that a translation along {{formula:bdff9085-7ba9-437a-a4d5-b76983513d1e}} can be done faster than a
general translation (in the limit of {{formula:a54d9231-70ea-4cdf-8136-1fba663462f3}} , twice as fast).
This splitting method cannot be applied to the M2L kernel ()
(because it is not a translation), which occurs many more times in the FMM
algorithm than the M2M and L2L kernels. To accelerate the M2L kernel, one can
exploit that a rotation only costs {{formula:c3565945-47cd-4235-97a3-8ccb59451035}} operations, too. Thus, if
one first rotates into a frame in which the interaction is along the {{formula:76016d7b-140d-4b7d-942a-d32de06904d3}} axis,
applies the M2L kernel in the rotated frame, and finally rotates back into the
original frame, the total costs are still {{formula:f373e219-cfea-4d00-982e-7453318f82ea}} .
Fast rotations
Since the spherical harmonics are homogeneous, a rotation (as opposed to a
translation) does not mix between different orders {{formula:bbaba666-3477-4cea-9933-8b0600c2e570}} , and consequently the
operation count is {{formula:51bee88c-fe46-4adc-b26a-7fdf804a75af}} . Thus, a general rotation is of the form
{{formula:b16cb212-af76-4ffb-815c-86cbf388a576}}
where {{formula:8302132b-802c-4123-ac84-2b6189e81616}} denotes the vector {{formula:0946002a-121c-444e-9489-617d53d5d2be}} in the rotated
frame. Unfortunately, the matrices {{formula:4d7b7d8f-df70-4d39-b503-5d93d509470e}} , also known as Wigner
functions, are generally dense and non-trivial functions of the Euler
angles. However, a rotation by angle {{formula:861dcd0f-392f-4f7e-945b-5a2808e32e7a}} around the {{formula:ae2ecfc2-3b90-483e-981a-6eb4aabd9683}} axis is simple:
{{formula:7451478f-4cb7-4955-b649-7e785a4de2ed}}
with an operation count of only {{formula:b3194e38-4bd9-4e87-b7e6-e796ae3c76df}} . With this one can build a
general rotation by first rotating around the {{formula:c1692295-7428-4c3d-af04-75583fd4e149}} -axis, then swapping {{formula:c0589ad5-3b14-4f91-816c-33292e42c3f9}} and
{{formula:655e7754-2e24-427b-8ed4-e6285ecb6a14}} , rotating again about the {{formula:41d8b856-0a98-4171-9ad7-fc9335d76257}} -axis (the {{formula:c4a9c58c-d967-426b-90d1-6c8eead3378c}} -axis of the original frame),
swapping {{formula:c50ff4e6-5e78-4302-8c28-6392dc551340}} and {{formula:33637551-7ec9-4b3c-9d00-043cfe1ea439}} again, and performing a final rotation around the
{{formula:6790132d-3877-42ce-8715-6d9a9d89f5b1}} -axis. Like rotations, swapping coordinate axes does not mix between
different orders {{formula:a31847f0-533e-43c3-a549-7165fd145074}} and can be represented as
{{formula:f47afe0a-89df-449a-b79d-2aa25f31b564}}
where now {{formula:63ef34bb-34ad-4e81-8730-a8c0bad260d3}} denotes the vector {{formula:03aefbf4-c272-4c36-b6c4-56c1bfddeb9d}} in the frame obtained
by swapping two Cartesian coordinates. The important difference between
equations (REF ) and (REF ) is that the matrices
{{formula:a14312a0-5e3f-4e6b-bacc-11642a678899}} are constants. Recursive relations for these swap matrices
can be derived via the operator algebra of
Section REF . For example, for swapping {{formula:aa6d46f1-c8e8-4e8c-8fef-c105ff23dcea}} and {{formula:5c1724da-22cf-499e-9c52-a0ba2e8257ae}} , one
finds
{{formula:d5be4c98-f74a-4a96-bce8-ffdd14579d1a}}
with which one can derive the recurrence relations
{{formula:e0b5d5b0-ad25-4b20-8b0f-b208b5bca389}}
where it is understood that {{formula:340577c6-4c3d-4b65-b8d5-d586ee23e52e}} for {{formula:c71fa920-22d9-4a8a-9575-61f8472cf990}} . A similar
exercise for swapping {{formula:346ed7e3-695b-446f-bf06-83ddc21cfd91}} and {{formula:77541faa-f2ea-44f8-82a4-b30d4e453567}} reveals that the swap matrices are given by
{{formula:71aebb1d-de1f-4539-82d1-3a88b4429f1d}} , while the corresponding swap matrices for
{{formula:a14d4cc0-f958-48c6-a05f-617f40513dc4}} are given by the transpose (because these matrices are
orthonormal and the product (REF ) is invariant under coordinate
swapping). Whereas the matrices {{formula:4453cd56-6e9a-46f7-918f-4c4bd70e9108}} are dense, the corresponding
matrices for the real-valued harmonics (equations REF ) are not
{{cite:89b5c1e44bc8ff36f48f001cb79991b75b83deb2}}. For example, the matrices for swapping {{formula:f06ba340-1dcf-4e81-a015-19fb54f1f2db}} and {{formula:c571dde7-d1ca-4e6c-aa37-d33fa3d040ca}}
for {{formula:34101a56-4dbc-4bc9-a689-01ea29ef2430}} and {{formula:b9853c97-db56-48f9-86b1-a08a93347d65}} are (omitting zero entries)
{{formula:9b0bfd41-7381-4f12-b721-4946bb8084fa}}
respectively. Thus, this method of achieving a general rotation not only avoids
the (recursive) computation of the Wigner functions {{formula:96ca9df8-d562-499c-b6cf-084eec057122}} (which
itself costs {{formula:499ef00e-09b3-436e-be39-b541cba72783}} operations), but also benefits from the facts
that the swap matrices {{formula:ad5d8395-cdb0-4cc4-b86d-fd72cf6a6eaf}} have {{formula:c5c3840b-b980-41dd-a75c-4885fb1ac72c}} times fewer non-zero
entries than the {{formula:00819a72-ae3c-4c3a-aecf-b490f93ca60c}} and are known a priori, such that they can
be `hard-wired' into computer code.
A fast M2L kernel
With these preliminaries, one can finally put together an accelerated
{{formula:425ddf2c-21f3-4722-8926-4705443040a5}} version for performing the M2L kernel (). Let
{{formula:35a0cb9d-5313-4639-b482-09b89433b2af}} , then one first rotates the multipoles {{formula:19fe5f84-4c19-414d-bdc7-a75a7c6aac0e}}
(around the {{formula:6efc17a1-0922-4256-a061-b8d557bed321}} -axis) by angle {{formula:538ab353-8ff4-4ca0-b8de-7b433d7645d8}} , swaps {{formula:08d3083e-0a3e-4ba4-9b1c-4d93db77dcf6}} and {{formula:3a4f4fdc-2083-4884-8e57-ef410adb74c7}} ,
rotates by {{formula:2d66ba52-90be-4411-88f1-085f5a017b19}} , and swaps {{formula:de07a62d-ef12-4e5c-b2ee-a61fad57056c}} and {{formula:9b9709c1-5f1e-489f-9678-6efdd25e3cf9}} back. The
obtained {{formula:e3a83eeb-7762-436b-abaa-df9977fd440f}} has {{formula:9a874a50-dace-4c22-8186-fbbfa695c1dc}} axis aligned with the interaction
direction, and the M2L kernel can be performed via
{{formula:be48678b-70f3-4808-9eaa-cd818abba215}}
Finally, one must rotate {{formula:a3f4c2ec-da95-4afc-94dd-1e3f895ee926}} back to the original frame
by first swapping {{formula:4963dfe7-d834-4af7-84b6-24cd4000ebcb}} and {{formula:d96864e8-92e7-4408-a571-90a5b8eeda60}} , rotating by {{formula:ce545237-e5ee-48b1-948f-9fe3a328d2e6}} , swapping {{formula:647b774f-8d4e-42d8-8adb-a70eabd30f65}} and {{formula:e0cea49a-17c9-47f0-a40b-633945232a0b}}
again, followed by a final rotation by {{formula:4b571037-833a-4a50-af8a-bc7acecf73c9}} .
These rotations and swaps can be accelerated further by exploiting that in
(REF ) only multipoles {{formula:4cdffe9c-e616-4378-aaf0-36732998fb45}} with {{formula:5c44a30c-cc35-4b60-89ec-1739e2bbaa75}} are needed and, similarly, that {{formula:a62ae4fe-0243-4342-85f2-0ec93a0e1eec}} for
{{formula:afa333a1-9c25-4a4a-8543-bc06801793ca}} . As Fig. REF demonstrates, the overhead due
to the rotations pays off already for {{formula:55368693-4baa-46da-bae4-168014ceef11}} .
The energy error of a simulation
The gravitational forces (and potentials) used in {{formula:eccc2c2c-3e53-46d2-afc9-d68de1b4df4b}} -body simulations always
carry some error. When using direct summation, this is solely due to round-off
errors, while for approximate methods the approximation error should dominate
round-off. Here, I investigate the consequences of these errors for the
non-conservation of the total energy.
The energy error due to force errors
Consider the energy error generated by acceleration errors {{formula:86d206e4-d153-4d8f-bea1-a3308b301062}}
after one time step {{formula:25e20708-539d-47aa-8f23-85fa6efdbbaf}}
{{formula:eb56b16b-c7e1-4b6e-90de-9548b7fdfa61}}
Because the {{formula:b9e8503b-82e2-487c-9787-e2025fcac2ca}} are not correlated with the velocities
{{formula:645418b5-0981-4ac6-98b8-65d4da51c847}} , their dot products largely cancel and {{formula:f9694af8-3ea5-4748-a428-2e5e7ab72167}} will be small. In order to estimate its amplitude, let us
assume {{formula:1651b8de-2097-4452-9a22-3faa5a713f17}} with {{formula:1dd77b14-97a2-4fd6-81c0-c4a9bd1c8115}} , velocity dispersion {{formula:7bf2f4b1-7b2a-4df5-9ed0-70f3780b045d}} ,
and typical acceleration {{formula:5bee2e32-ad03-4878-a159-b0f0f6ab175c}} . Further assuming virial equilibrium and a
relative acceleration error {{formula:053f266f-b58d-43dd-b769-d431e6f7a778}} ,
{{formula:dd9bbf1e-018e-4689-a6e6-c97691cfc584}}
Over time this accumulates in the fashion of a random walk and after one
dynamical time or {{formula:6a8e3695-4df2-4867-94b4-22e0add5afdb}} time steps
{{formula:9195d521-3524-40cd-926e-f1ab6afd5f65}}
Thus, the relative energy error resulting from the force errors alone is much
smaller than {{formula:039aa58b-3cd3-4075-9cb8-9f62f658f00b}} , simply because it is some average over many force
errors.
The measurement error
In order to measure the total energy, one must also calculate the individual
particle potentials {{formula:1d533046-80aa-4a82-8d73-e8072649d2b8}} (which are otherwise not required for the
simulation). Assuming that the {{formula:c222a534-83a8-4fcd-b85e-d9a14f95a1ff}} are computed with relative error
{{formula:f1b0163e-ac32-42d5-b8c0-02af1dbc2fc4}} , the resulting error for the total energy is
{{formula:9cf1e53a-94b5-4887-98b3-034575107034}}
If the same precision {{formula:198316c4-9031-4e6a-9172-09e144d9f966}} is used for computing the particle
potentials and accelerations, this is much larger than the energy
error (REF ) due to force errors.
Approximate gravity solvers
The situation is different for approximate methods, such as the tree code,
FMM, and mesh-based techniques. All of these approximate the true potential, but
use the exact derivatives of the approximated potential for the accelerations.
Therefore, the total approximated energy should be conserved (modulo
round-off errors), even if the approximation is poor.
For the FMM and the tree code the situation is actually different, because the
approximated potential is not globally continuous but only piece-wise. This is
because the concrete form of the approximation used for a given particle depends
on its position (which determines how FMM approximates each pair-wise force). A
particle crossing a boundary between such continuous regions suffers a jump in
the (approximated) potential, and hence energy, while the corresponding kick in
velocity (to conserve energy) is ignored. These discontinuities are part of the
approximation error, and their amplitudes are proportional to it. The implication is that
for the tree code and FMM energy is not conserved (even for accurate time
integration) and the degree of non-conservation actually reflects the amplitude
of the approximation errors in an average sense.
| d | 18b71a3ae89d71753507908fe072a9f3 |
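The rotate-swap-rotate scheme described in the excerpt above lends itself to a compact sketch. The following is a minimal illustration assuming a simple per-degree (cosine, sine) coefficient layout for the real-valued harmonics; the sign convention of the z-rotation, the `swap_yz` callable standing in for the sparse swap matrices, and the `m2l_along_z` kernel are placeholders, not the paper's actual implementation.

```python
import numpy as np

def rotate_about_z(M, alpha):
    """Rotate real-valued multipole coefficients about the z-axis by angle alpha.

    M maps each degree l to a pair (C, S) of arrays over orders m = 0..l
    (cosine-like and sine-like parts). A z-rotation is diagonal in l and only
    mixes the two coefficients of equal |m|; the sign convention below depends
    on the chosen normalisation of the real harmonics."""
    out = {}
    for l, (C, S) in M.items():
        m = np.arange(l + 1)
        c, s = np.cos(m * alpha), np.sin(m * alpha)
        out[l] = (c * C + s * S, -s * C + c * S)
    return out

def m2l_rotated(M, alpha, beta, swap_yz, m2l_along_z):
    """Rotation-accelerated M2L: bring the interaction direction onto the z-axis
    (rotate by alpha, swap y<->z, rotate by beta, swap back), apply the cheap
    z-aligned kernel, then undo the coordinate change in reverse order.
    `swap_yz` and `m2l_along_z` are placeholder callables for the sparse swap
    matrices and the z-aligned kernel of the text."""
    M = swap_yz(rotate_about_z(M, alpha))
    M = swap_yz(rotate_about_z(M, beta))
    F = m2l_along_z(M)
    F = rotate_about_z(swap_yz(F), -beta)
    return rotate_about_z(swap_yz(F), -alpha)

# tiny demo: rotating by alpha and then by -alpha recovers the original coefficients
M = {l: (np.random.rand(l + 1), np.random.rand(l + 1)) for l in range(4)}
back = rotate_about_z(rotate_about_z(M, 0.3), -0.3)
assert all(np.allclose(M[l][0], back[l][0]) and np.allclose(M[l][1], back[l][1]) for l in M)
```

The appeal of this construction is that the z-rotation is diagonal in degree and order, so the only remaining structured operations are the sparse swaps, which are known in advance and can be hard-wired.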
A detailed performance comparison of the best single SR-ASV system and the challenge baseline ASV system across all spoofing attacks in the evaluation subsets is illustrated in Fig. REF and Fig. REF for the LA and PA partitions, respectively. Note that the results shown are all from single ASV systems. All results of the challenge baseline ASV system are released in {{cite:ce357a230a90a31d3116e80ea1f924047add6912}}.
| r | 17d7b4c3bb60aaff449a89ce631ae73a |
We have focused on static charged black holes, but Kerr black holes can also support massive scalar hair {{cite:efc0edd88ce3881f386672d76c1ce876e2e7742b}}, {{cite:e41c282237987fa4c96576d7a5bc371497eb388d}}, {{cite:e75b0e5c8f3895c73e77863436d9daf70fe9dc45}}, {{cite:c16bf68d3dabf2587b99b96f9d4a04747b27a9c2}}. In fact, these are solutions which are not stationary and axisymmetric, and have only a single Killing field {{cite:839ab2c9ed87286a047d56b4faf05583ad41e014}}, {{cite:efc0edd88ce3881f386672d76c1ce876e2e7742b}}, {{cite:e41c282237987fa4c96576d7a5bc371497eb388d}}, {{cite:e75b0e5c8f3895c73e77863436d9daf70fe9dc45}}, {{cite:c16bf68d3dabf2587b99b96f9d4a04747b27a9c2}}, {{cite:87d0c5562be7cb40eac6db55c7dbcf630ef59233}}. A complete investigation of their properties inside the horizon is beyond the scope of this paper, but in the Appendix we show that they cannot have a smooth Cauchy horizon. The proof is quite general and applies to black holes in more than four dimensions also.
| d | 26d6197230b00c52e1d804c437797f67 |
Recent experiments with optical tweezers and traps have verified several theoretical predictions of resetting {{cite:ce641f1d1dac401abedd3c98e2164b27089b1a51}}, {{cite:9d3c230490021718eee83e662ce90d2dc80af968}}. In the future, if multiple such traps can be used to create potential minima separated by barriers and first passage under resetting in such a system is studied, some of the interesting predictions on transitions of the ORR that we make here and in our earlier work {{cite:06d9e657e491e76d095a45f27d94669970b21dd9}} may be verified.
| d | 7404342442c4ce9c0041a7931f6237f7 |
Table REF shows the results on the Office-Home dataset. This dataset is far more challenging than the other datasets, as the domain shift is significantly higher here. Despite the challenge, our method achieves a substantial improvement over the other methods across most of the tasks. TDMDA outperforms TAT {{cite:f334e3732862c1944a87c8d06054e93ec66845b9}} by a significant margin of 5.7{{formula:5a37c161-99cb-483e-97e7-1609bd6c863d}} on average accuracy. While CADA {{cite:13c64f367cfaeb8fc45ed495c06658748ee49931}} outperforms our method when Clipart (Cl) is the target domain, our results remain competitive, performing better than CADA on average.
Moreover, the proposed work can be plugged into any domain adaptation framework to improve the performance further. We have also reported the results on the ImageCLEF dataset in Table REF . Our method exceeds the rest of the methods with a minor improvement of 0.2{{formula:c3814c79-57d6-41a8-872e-d7f6818aabc5}} on average accuracy. The relatively small margin of improvement is due to the smaller domain shift in this dataset.
| r | e0d083f6515d21ea27bf0cb97a7b7775 |
It is well understood that Hohenberg-Kohn density functional theory (DFT) is strictly limited to ground-state properties. Therefore, we study the optical properties using DFT and beyond-DFT approaches within the framework of many-body perturbation theory, i.e. G{{formula:54fa697b-18bb-4b1c-b713-ca331265c980}} W{{formula:3dcb93fa-caf6-40e6-b190-41601469094d}} and BSE. The G{{formula:8aa2bf18-8201-48b3-86e1-16324e8719f8}} W{{formula:cbf55e56-30c1-45d9-acca-89742cb6b6bf}} approach takes the screened Coulomb interaction into consideration using a perturbative method and improves on the Hartree-Fock approximation. In our case we perform single-shot G{{formula:d806fe64-7b7a-4a7a-8e1f-dd56f8d6870d}} W{{formula:ada18ad5-b873-4279-8b18-c741daf29e30}} calculations on top of the orbitals obtained from the PBE calculations. Firstly, we converge the band gap with respect to the k-grid, number of bands and energy cutoff. We take the number of bands (NBANDS) to be four times the number of occupied orbitals. We take the k-grid, NBANDS and energy cutoff to be 10{{formula:4507e761-ba46-468b-9fc9-74d75bfcc481}} 10{{formula:4ae7aec4-214b-4d9e-bba8-d59eb22e0c1e}} 1, 240 and 550 eV, respectively. The calculated band gaps for WA{{formula:16333199-6470-4269-ad05-05a47dc45eda}} Z{{formula:870c7ad7-2207-4fbf-b4d1-e209c8a4d018}} are given in Table 1 of the main text. Fig. REF shows the band structure calculated using G{{formula:81f276b2-b094-4b80-a3d9-8b83155eebf2}} W{{formula:f5b70641-0d76-4882-9904-136dbdaccd39}} @PBE+SOC for {{formula:75fc7b70-59fd-473e-86e2-94e7cab2fb4e}} -WSi{{formula:ec229224-5b40-440c-9481-6005b3c3f600}} N{{formula:9ea364a3-a195-4f5e-ae61-5ca56c86a923}} . We obtain an indirect band gap of 3.36 eV. Here, we see that apart from the band gap, the band dispersion curves remain similar to those from PBE+SOC and HSE06+SOC. We calculate the excitonic effect using the BSE approach, which is a second-order Green's function technique. Fig. REF shows the real (Re ({{formula:23fc02dd-5750-4739-b710-1ce8bfb62fb3}} )) and imaginary (Im ({{formula:11ebac13-141d-4ebd-a695-8a1716c236b1}} )) parts of the dielectric function. The Re ({{formula:e4be00d6-c2ff-4a9a-89c6-09285a44e886}} ) and Im ({{formula:befae97b-8ea4-433c-a0a6-1cc659db10ef}} ) are thoroughly verified by considering different numbers of occupied and unoccupied bands. The first excitonic peak obtained using BSE is at 2.49 eV, confirming that {{formula:117e8259-406b-4cb9-b318-4e0e5c87b469}} WSi{{formula:88308bd4-db50-4c3e-b74c-4c19e85fab2b}} N{{formula:ec1fd9d7-a245-49fc-ad5e-39df28ca6ac4}} is sensitive to the visible spectrum. The exciton binding energy ({{formula:8b51ec1e-82ca-4564-b1f8-e15ca2ec4d7c}} ) can be computed as the difference between the quasiparticle band gap (G{{formula:23f524b1-72e0-4cfc-8d86-4a7cbafc406b}} W{{formula:c433a4cb-2eef-4b36-ad96-0c4985fd9c8f}} band gap) and the optical band gap (first BSE peak). For {{formula:15ddd50d-a922-4ea0-b819-f665134cc3b2}} -WSi{{formula:0319bc2a-c5be-4ae8-863e-0b15f9fe7a96}} N{{formula:70ee5316-3a37-4e9e-8b6b-1767405dd361}} , the exciton binding energy is 0.87 eV. The observed {{formula:2f5c281d-c0db-4c71-91a5-258fd3cbf7b6}} is comparable to that of monolayer transition metal dichalcogenides (0.6 eV-1.0 eV), owing to the strong Coulomb interaction {{cite:d9300278cf9bebeaf7051706534e2b71168dd0fa}}.
The observed {{formula:443dbb51-53a0-48f5-bbf6-4bf82e96f192}} is one to two orders of magnitude larger than that of conventional GaAs {{cite:1a0b16f2f2ad76be0618141abfdcbb40df95438e}}, {{cite:ff5416cd4be7a317b80a018bb8beb3367e557ea6}}. Therefore, excitonic features are stable at room temperature and dominate the optical response and non-equilibrium dynamics of these materials. The large {{formula:93f3bf1e-8552-457b-bc79-941491a9724e}} induces strong excitonic effects, i.e., the large oscillator strength induces strong light-matter interaction and absorbance as high as 0.1-0.3 {{cite:ff5416cd4be7a317b80a018bb8beb3367e557ea6}}.
{{figure:a17b8928-737b-423f-93a7-70f7050e051a}}{{figure:c0a73bce-6c9a-46a9-8385-d839102b931c}} | m | f006e6ec0045291dcacf698f7d50a869 |
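The exciton binding energy quoted in the excerpt above is defined as the difference between the quasiparticle (GW) gap and the optical gap given by the first BSE peak; a one-line check reproduces the stated 0.87 eV.

```python
# Exciton binding energy as defined in the excerpt: E_b = E_gap(GW) - E_opt(first BSE peak).
E_gw_gap   = 3.36   # eV, quasiparticle (single-shot GW) indirect gap
E_bse_peak = 2.49   # eV, first excitonic peak from BSE
E_binding  = E_gw_gap - E_bse_peak
print(f"Exciton binding energy: {E_binding:.2f} eV")   # -> 0.87 eV
```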
It is known that quark models have achieved great success in studying the properties of hadrons, especially the ground states. Within quark models, it is commonly accepted that mesons are composed of a quark and an antiquark ({{formula:d464e03f-273a-432b-9068-8d5335ec65a9}} ) and baryons are composed of three quarks {{formula:7e862aff-dc81-40c8-bfa2-090614b3fcc4}} . Recently, the topic of meson-meson and meson-baryon states, with the hadron-hadron interaction governed by the strong force, has been well developed by combining chiral effective Lagrangians with nonperturbative unitary techniques in coupled channels, which has been a very fruitful scheme for studying the nature of many hadronic states, in both the light and heavy sectors {{cite:c0abf3633c18a6e2509ed61cf2c68fd72c156385}}, {{cite:6092b77583b4a14d305a0676020dcebff0bf3259}}, {{cite:02ffae1e154df157d05fcb7017fbff14fa9334a6}}, {{cite:50eb1a4d7f144cc51d75afd8b4cc8a3c603b5b14}}. Some of these states are not easily explained by classical quark models.
| i | 633967e5deb863a7903f39979b8cb2b9 |
For future studies, we will investigate the relation between the chiral phase transition and confinement-deconfinement phase transition, and figure out whether the stable holographic nuclear matter is in the quarkyonic phase. Currently, the EMD system and the KKSS action are solved separately, which means we use the “probe approximation” here. The KKSS action is regarded as a probe and its backreaction to the background EMD system is neglected. We will investigate this holographic model beyond the probe approximation and solve it consistently in the future. Also, as we stated in the introduction, the “allowed” range of the EoS we used here is derived by the “direct interpolation approach” {{cite:421f9786f1626811ef161f21eb8dbc4f22f97268}}, {{cite:a82d50bff0843a00b4131a56e5f88af1c511e15c}}. However the “causality and thermodynamic stability constraint approach” proposed in Refs. {{cite:9712aac53737750d7b21b8ea5091f12164e027e6}}, {{cite:fe5f627e028ea4006c42ae81ada887558e783059}} will give more strict constraints on the EoS of cold QCD matter. We will consider these constraints in future works.
| d | 0698aae482964f503b61a49faeeb7d43 |
Analysis of non-reversible Markov chains is difficult, essentially because self-adjointness is lost. Without self-adjointness, it is much more difficult to connect spectral theory to mixing properties of chains. It seems that a good way of understanding benefits of non-reversible sampling is by studying Cesaro averages (see {{cite:7df4fe2fefccab92fc375ed8e594c29b665d7ed1}} and e.g. the result on large deviations in Section REF ). The results of Section which establish that non-reversible chains have better asymptotic variance or large deviations properties, are so far qualitative in nature (i.e. fail to quantify the amount of improvement). To obtain quantitative results is an important challenge that remains to be addressed. Also, it is object of further study how these results carry over to countable and uncountable state spaces. In particular, the question under what conditions the resulting chains are geometrically ergodic and/or satisfy a CLT should be considered.
| d | 518c9966b1f5166663aecbe3b2849633 |
Unlike relaxation methods, equivalent optimization methods replace the binary constraint with some equivalent forms, which are much easier to handle. For example, motivated by linear and spectral relaxations, Wu and Ghanem {{cite:dbef8eea27629c648eb5cfe44c2403662766fa0e}} replaced the binary constraint with the intersection of the box {{formula:177efafb-ae7b-4c09-8a57-fa26529c299f}} and the sphere {{formula:5c87304b-5b6b-454e-a74f-d0843d6b4af8}} , and then applied the Alternating Direction Method of Multipliers (ADMM) {{cite:057338a404ddbb21a1feac5287da195ddd290819}}, {{cite:442a5bb90e0f434c230e2a91b4842d7774ef2150}}, {{cite:ca9488072f2049db3e4c4bbb74156231355ad3a7}} to solve the optimization problem iteratively. Other methods in this direction include the MPEC-ADM and MPEC-EPM methods ({{cite:957b86488da0009134b707a64ca6a446085f6909}}, {{cite:e83d7ae42be400c6e3c1b5005dfbf61805739530}}), the {{formula:1b7f6a2a-1c51-4d49-ad1b-d97dbbd86935}} norm reformulation {{cite:2090ebc8ed1d918ba2ef9bef12126f49c596b487}}, {{cite:8a0d2c205135fadea08f8d257838f1ba07953dc9}}, the {{formula:8f3dd5a7-28bf-459b-933b-6b9bf79e2baf}} box non-separable reformulation {{cite:c4407348dda095092c88efa207a8dae876984099}}, and the piecewise separable reformulation {{cite:a5fa2e20ce027f541987e15bfeee4575dd734f59}}. Usually, these equivalent optimization methods guarantee the convergence to some stationary and feasible points, but the convergence speed is often too slow, resulting in high computational costs for large-scale optimization problems.
| m | 6d0e9d5cd8241f6e0c1d1f3467bce975 |
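The box-and-sphere reformulation mentioned above can be illustrated with a toy ADMM loop. This is a minimal sketch of the general idea only: the inexact x-update, the fixed step sizes, and the final rounding are simplifications and are not the schemes of the cited papers.

```python
import numpy as np

def admm_binary(grad_f, n, rho=1.0, step=0.05, iters=300, seed=1):
    """Toy ADMM for a binary program using the box-and-sphere idea:
    {-1, 1}^n is the intersection of the box [-1, 1]^n and the sphere of
    radius sqrt(n). Two auxiliary copies of x are projected onto these sets
    and dual variables enforce consensus."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n)
    z_box, z_sph = x.copy(), x.copy()
    u_box, u_sph = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # inexact x-update: one gradient step on the augmented Lagrangian
        g = grad_f(x) + rho * (x - z_box + u_box) + rho * (x - z_sph + u_sph)
        x = x - step * g
        z_box = np.clip(x + u_box, -1.0, 1.0)                   # project onto the box
        v = x + u_sph
        z_sph = np.sqrt(n) * v / (np.linalg.norm(v) + 1e-12)    # project onto the sphere
        u_box += x - z_box                                      # dual updates
        u_sph += x - z_sph
    return np.where(x >= 0, 1.0, -1.0)                          # round to a binary point

# toy usage: minimise x^T A x over {-1, 1}^n for a random symmetric A
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20)); A = (A + A.T) / 2
x_bin = admm_binary(lambda x: 2 * A @ x, n=20)
```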
To handle real-world complex rainy images, optimization-based methods were first proposed with hand-crafted priors such as sparse coding {{cite:67f4ed5f8c6991307ac1a4e9bb68310d47138c46}}, low-rank {{cite:90cae83ae4de074c2eb4643d32207cea57be24d5}} and Gaussian mixture models {{cite:ea8648664525ee92bc289035eb5100f4a2501df1}}. However, these hand-crafted priors have limited representation ability, especially for highly complex and varied rainy scenes. To rectify this weakness, learning-based CNN methods {{cite:d6d89e655d41e5b9ac8d90bd5f0dfd63e2bfaa11}}, {{cite:d6fb0d0fd44479b80a93b52f7ece0313ed3664c3}}, {{cite:a624a8b8e4b01279647aa5d1408463ea8c0d6efb}}, {{cite:0d13ee44118bc421b8111c59bd210f119b1c84ef}} have made great progress. The key idea of these supervised learning methods is to simulate rain as realistically as possible with sophisticated models, such as the additive model {{cite:9def55c02654f5d5610eda5900132bdb704d27b7}}, screen blend model {{cite:67f4ed5f8c6991307ac1a4e9bb68310d47138c46}}, heavy rain model {{cite:d6fb0d0fd44479b80a93b52f7ece0313ed3664c3}}, and comprehensive rain model {{cite:973483028e12a0245e06ecc02aab77e9edcd9811}}, to name a few. Unfortunately, there still exists a gap between these synthetic rain models and real rain degradation, since the real rainy atmosphere is usually a high-order nonlinear system.
| i | ec6d26508f5362f1e28a48eedc035441 |
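For reference, the two simplest rain-synthesis formulas mentioned above (the additive model and the screen blend model) can be written in a few lines; real synthetic-rain pipelines in the cited works add further components such as accumulation, haze and occlusion.

```python
import numpy as np

def synthesize_rainy(background, rain_streaks, mode="additive"):
    """Toy composition of a rain layer onto a clean image (values in [0, 1]).

    'additive' follows the additive model O = B + R; 'screen' follows the
    screen blend model O = 1 - (1 - B)(1 - R)."""
    B, R = np.clip(background, 0, 1), np.clip(rain_streaks, 0, 1)
    if mode == "additive":
        O = B + R
    elif mode == "screen":
        O = 1.0 - (1.0 - B) * (1.0 - R)
    else:
        raise ValueError(mode)
    return np.clip(O, 0.0, 1.0)
```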
Firstly, there is an interesting phenomenon where the average reward increases nearly monotonically with each epoch, but then there is a slight downward dip after roughly epoch 200, before the rewards start climbing upwards again. There are two possible reasons for this. The more interesting reason is that it may be an instance of a broader phenomenon, called `Deep Double Descent', where neural networks often undergo a period in which their performance decreases after an initial rapid improvement {{cite:ae22648b306c6a7863612138d1e39b128d8aa081}}.
| r | 07fb5ecf7d1db47cabb606f0041b3aa0 |
The SC is a local model, i.e., each perturbation is assumed to evolve independently. It does not account for tidal forces which cause shear, rotation and accretion. It also ignores non-linear mode coupling, which is a non-local effect. Yet, it recovers the shape of the PDF given by perturbative methods and simulations reasonably well, over a wide range of epochs, scales and cosmologies. The SC model has been successful in other contexts as well. The non-linear relation between the density and velocity field obtained by simulations is a scatter plot, but the `mean' relation is well described by the SC model {{cite:d52bc4f69a605c150749d6f5fb74a23e53662354}}, {{cite:885188d674dbeff4be2a643685710690711e0023}}, {{cite:e1572a98956133662014fa89e14d18148e3a5682}}.
It has also been successfully used to treat the shell crossing regime in Lagrangian perturbation theory {{cite:c79340b9d32b61967ea3247e3d5811baa29f2a24}}.
Some of this success can be attributed to a partial cancellation between various effects that have been neglected. One illustration of this feature is to consider the effect of rotation and shear. Both terms appear in the Raychaudhuri equation, which dictates the evolution of {{formula:0839b56f-4f43-4de8-b347-ea1f2eacea3c}} and consequently the full non-linear evolution of {{formula:ecfd095a-4542-4270-8527-ba86bcac179b}} . But they appear with opposite signs ({{cite:bc4f96f44178c4514646072d61f1a3a7cc98b651}}). Spherical symmetry ignores both terms, giving rise to a partial cancellation. For example, in modelling the non-linear DVDR, the results of simulations agree better with spherical collapse than with ellipsoidal collapse {{cite:37dce1db1d126370238579139b553bdd343aaab9}}. This is because ellipsoidal dynamics accounts only for the shear and does not benefit from the partial cancellation. These arguments are akin to the reasons `why the Press-Schechter formalism works so well' ({{cite:1cbc64fcbf4a22ffd340e3bd75bc64526ab7cdb3}}).
| r | c76d27d2392ebc11c7c6fddc184d439e |
In this section, the proposed method is compared with unsupervised methods: SCAN {{cite:233ffd2ee8af3b07ecf142a1423a4e1a9a22a927}} and SimCLR {{cite:72338d7fce8f96cab497d2936b6fb64fb24420f6}},
few-shot methods: Prototype Net {{cite:cf98d655e8335111f7e981e477cf6020518cc7f2}} and Simple CNAPS {{cite:342ebadea3febea3d5745d45267067fff023fd75}},
semi-supervised methods: MixMatch {{cite:0c61432bd726830cab2e849e87f6ed9129d165b3}} and FixMatch {{cite:7500ccd9c37edb0afd8f0785b2d6a00369c6f5cd}},
transfer learning methods: Transfer(10) and Transfer(100),
and fully supervised methods: SiameseNet {{cite:0a85c7316c30b2b34910e1ccd8569e2ae573a2ea}}, VGG-16 {{cite:a7a4166fb923a3cb1ffb071acb58c6c81622cbd1}}, ResNet-50 {{cite:5182ba7f4664d5db6553c60a3cbdb0956719263e}}, MobileNetV2 {{cite:3f6904df172b5707f8d50fade48a0be3281afa0e}}, DenseNet-121 {{cite:37e6166238e0bcb6fc7bad90d1c1857c18d1794d}} and
ViT-B/16 [Dosovitskiy et al., 2021].
| m | 297b91a59f7d044c6a94aefe57e8d04b |
In addition, we observed the lasing action of the TE mode from InGaAsP-NBs. From the SEM image in the top inset of Figure REF (a), the structural parameters were estimated as follows: {{formula:6e877d6a-bfa8-406f-be39-a50fdbd07cd8}} = 573 nm, {{formula:12056495-6de2-40cf-bfee-fe7159509e29}} = 0.343{{formula:cb11d637-7499-4020-9b95-184085a87374}} , {{formula:2a62dacb-688f-483c-a8af-f0591aed2cc5}} = 0.302{{formula:86224aa0-5e48-4747-b614-d3baf1e63725}} , and {{formula:3f93d339-8568-4f47-8bbe-c7f930ec6e00}} = 1.12{{formula:c478cb47-9b09-4e91-b0ec-f7a92d8a46eb}} . Here, the thickness of the InGaAsP slab was 530 nm, which is 10 nm thicker than that of the InP slab. With 3D FDTD, a TE mode was found near the lasing wavelength of 1473 nm. The simulated wavelength was 1478 nm, and the Q-factor was approximately 150,000 (See Figure S1(d) in Supplementary Material). There was a TM mode at a wavelength of 1447 nm, but the Q-factor was relatively low at approximately 44,000. In particular, the {{formula:caac89b9-f76e-42a3-955a-1678e589cd50}} field profile of the TE mode is well balanced to reduce the optical loss similar to the photonic bound states in the continuum {{cite:1f188bf59f8943cf1848956b1f34c61fc516166c}}. Figure REF (b) shows the L–L and linewidth characteristics that have threshold behavior of lasing action. The threshold peak pump power was 160 {{formula:e26c72c6-4a3f-4a6a-a78d-6f3e512e9514}} W and the linewidth reduction was observed near the threshold. We characterized multiple InGaAsP-NBs and observed lasing actions primarily from TE cavity modes (See Figure S5 in the Supplementary Material). Multimode lasing originated from the 3rd TE, and the 6th TM cavity mode was also observed, which was confirmed by 3D FDTD analysis with estimated parameters (Figure S5(c) in the Supplementary Material and Table S1).
{{figure:2656aa98-7e34-49bb-86d2-a71799aac608}} | d | 5f97ac993482322c9cf5b6ff3c86df7b |
The future large-scale spectroscopic galaxy surveys with high galaxy
sample densities make the angular clustering analysis possible with such a
narrow radial binning. For example, with the designed sensitivity,
the Euclid satellite can observe 50 million galaxies in the redshift range
{{formula:59768578-b7f0-4c08-bcdb-3b61fea89407}} {{cite:15a18c7612448e6d0c3f5f75d2571dcedcac2ba2}},
which translates to about a quarter-million objects in a redshift bin of
size {{formula:99f90a5e-4106-478e-8728-a5a028809ff6}} .
| i | da811719bbd91ec0978bec32f749c52a |
Autonomous driving systems have the potential to vastly improve the quality and efficiency of existing transportation systems. With fast reaction times and socially optimal behaviors, automated vehicles (AVs) can improve the road capacity and traffic flow stability of existing networks {{cite:1af7d2b66cc7181a5e3749c4e55aded259307181}}, {{cite:617252de9808f8a541b24831215e373042757151}}, {{cite:386e335f31c9238c1ddef2a002ab89d9c4cf17de}}, {{cite:a99486703a99c525abfea196e9a724d1612036fe}}, {{cite:2792ff6100be0b43dda79e747e58556f81bfa1a5}}, as well as reduce instances of vehicle collisions and similar incidents brought about by human errors {{cite:cf5715591bb1c968e649ddb0f0092e80cf949162}}. The promise of such systems, however, is seldom witnessed in practice {{cite:3d05e3a385dfaea2d230efd1ccc2c134398bd6c7}}, {{cite:224bbc2f8abde0c6b0b019f5ad71d2738f23f3f9}}, highlighting the challenge that lies ahead.
| i | f546e4cb666d5976cdce52c718355256 |
To explore the relation between Xavier initialization and rectifier nonlinearities, the paper by He et al. discusses an experiment comparing a 22-layer neural network and a 30-layer neural network. ReLU is used as the activation function and Xavier initialization as the weight initializer. The experiment shows that the 22-layer network converges while the 30-layer network fails to converge {{cite:3eaf0f7879f4a72c3ca726aa79c651aca2c1e8a5}}. Siddharth et al. give the reason for this failure: `the variance of the inputs to the deeper layers is exponentially smaller
than the variance of the inputs to the shallower layer' {{cite:b2d2c776e59dd6aa075678f57558c430932ec16a}}. From this, it can be concluded that although Xavier initialization aims to preserve the variance, it fails to do so as the network grows deeper. This is why Xavier initialization works well for shallow networks.
| d | a6b21f1297b9803f0a629fe67306cd7c |
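The quoted explanation, that activation variance shrinks exponentially with depth under Xavier initialization when ReLU is used, is easy to reproduce numerically. The sketch below is an illustration written for this summary (it uses Var(W) = 1/fan_in for Xavier and 2/fan_in for He), not code from either cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, depth = 512, 30
x = rng.standard_normal((n, 1000))          # batch of random inputs

def final_layer_variance(init_std):
    """Propagate the inputs through `depth` random ReLU layers and
    return the activation variance at the last layer."""
    h = x.copy()
    for _ in range(depth):
        W = rng.standard_normal((n, n)) * init_std
        h = np.maximum(W @ h, 0.0)           # ReLU
    return h.var()

xavier = final_layer_variance(np.sqrt(1.0 / n))   # Xavier-style: Var(W) = 1/fan_in
he     = final_layer_variance(np.sqrt(2.0 / n))   # He: Var(W) = 2/fan_in, compensates ReLU halving
print(f"layer-{depth} variance  Xavier: {xavier:.3e}   He: {he:.3e}")
```

With the Xavier-style scale the variance collapses by roughly a factor of two per layer, matching the quoted observation, while the He scale keeps it roughly constant.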
where {{formula:c5b5a1c4-cd03-4d15-91e8-86ff0071d390}} is the binding energy determined by the two-body problem {{cite:b9913f6235c5686b49989d03608cab51a8bb7c7b}}, {{cite:bca84b881463b61fbb9420a3ba1043ff75e74e84}}, {{cite:f5c34d028b8c35a1dc448d66b61d38910c0bfb3f}},
{{formula:ba16091e-06d4-4cec-8cb4-b3461e4677ae}}
| r | 83346683cb4e83d00467f67e892c0e44 |
In the next claim we use SQ, RQ and PQ as in {{cite:3e043a19cd2d3d50cf8a8fdb84c9ff7eb8146119}}.
Thus SQ is the mean {{formula:1673a3f6-52dd-47a5-94cc-7b792d162040}} of the True Positives, RQ is the traditional F1 value and PQ is SQ times RQ.
| r | 5995ce2d37b30bceea8d5931d30423bf |
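Following the definitions in the excerpt above (SQ as the mean overlap of the true positives, RQ as the traditional F1, and PQ as their product), a small helper might look as follows; the function and argument names are illustrative.

```python
def sq_rq_pq(tp_ious, n_fp, n_fn):
    """SQ/RQ/PQ summary as described above.

    tp_ious: overlap (IoU) values of matched, true-positive segments;
    n_fp, n_fn: counts of unmatched predictions / unmatched ground truth.
    RQ is written in its F1 form TP / (TP + 0.5 FP + 0.5 FN)."""
    n_tp = len(tp_ious)
    sq = sum(tp_ious) / n_tp if n_tp else 0.0
    rq = n_tp / (n_tp + 0.5 * n_fp + 0.5 * n_fn) if (n_tp + n_fp + n_fn) else 0.0
    return sq, rq, sq * rq

# example: three matched segments, one false positive, two missed segments
print(sq_rq_pq([0.9, 0.8, 0.75], n_fp=1, n_fn=2))
```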
{{cite:6a893cab5aac6b98805bd237882a69f60f23cfa2}}
covered the same ground as DZ65, but more thoroughly, presenting an
analytic solution.
| d | cc2c8f2700bc432a9e251d4857a57c3b |
As a consequence of the evolution of propulsion technology and the desire to reduce the cost of ever more complex missions designs, trajectory optimization remains an active field of research. A significant amount of research has been devoted to designing low-thrust space trajectories using direct methods, indirect methods {{cite:770072f09aca21cf35a7008b8c09c0019a4fa6f2}} or variants of these {{cite:760830c021f844d5c52cf21f8b4fff6d79b97055}}. A variety of tools have been developed that are capable of solving complex interplanetary problems with various low- to medium- to high-fidelity models for propulsion systems and gravitational forces {{cite:d5ec6d9371204bc20b55ed84bfae7d6f919f3fa7}}, {{cite:14fdce04a910df2a9635b72aec104d8661bc91f8}}, {{cite:13166af077a60b1d38d980cc5b9e56b2d0efa104}}, {{cite:afa34e21c34924da7f0fb518782ab197115042df}}. These tools use various mathematical formulations to determine optimal trajectories based on both direct and indirect optimization methods {{cite:6d1a77260f361eb12fa8ccd40c304fd0f6023bd3}}, {{cite:24b43c877fd0cec7fc09ad7f4a590040e8a523f8}}. A fairly comprehensive review of the models, objective functions, and solution approaches commonly used for spacecraft trajectory optimization is conducted in {{cite:ef27bf76b5f4843b18052c614fb16ee75e917455}}.
| i | dd8d48b46cdd455e41cc78033920526c |
PointNet {{cite:9ce6783cb6e17fac6f45d980bb9c54e5fb3771b7}} uses raw point cloud data as input without any emphasis on their ordering, while GAPNet {{cite:bbf0e99753d539f850c36e72bfa5a09bf6f8b9d4}} exploits local features by introducing GAPLayer, which assigns different attention weights to the neighborhood of each point. We trained these models on our point cloud data; however, even after tuning, the mean absolute error was very high at 4 cm, which was unacceptable. This led us to transform the point cloud data into depth images, which were used to train a Convolutional Neural Network (CNN) based deep learning model for height estimation. Our model consists of 12 convolutional and three dense layers, with padding of 240x180 pixels. We used the Rectified Linear Unit (ReLU) as the activation function and Mean Squared Error as the loss function.
| m | 8ca7e35cd480cbc3dfec41a040daefee |
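A sketch of the described regressor is given below. Only the facts stated in the excerpt are fixed (12 convolutional layers, three dense layers, 240x180 depth-image input, ReLU activations, MSE loss); the channel widths, kernel sizes and pooling schedule are assumptions.

```python
import torch
import torch.nn as nn

class HeightRegressor(nn.Module):
    """Illustrative 12-conv + 3-dense regressor for 240x180 depth images."""
    def __init__(self):
        super().__init__()
        chans = [1, 16, 16, 32, 32, 64, 64, 128, 128, 256, 256, 512, 512]
        convs = []
        for i in range(12):
            convs += [nn.Conv2d(chans[i], chans[i + 1], 3, padding=1), nn.ReLU()]
            if i % 2 == 1:                      # halve the resolution every other layer
                convs.append(nn.MaxPool2d(2))
        self.features = nn.Sequential(*convs)
        self.head = nn.Sequential(              # three dense layers
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),                   # predicted height
        )

    def forward(self, x):                       # x: (B, 1, 240, 180)
        return self.head(self.features(x))

model = HeightRegressor()
loss_fn = nn.MSELoss()
pred = model(torch.randn(2, 1, 240, 180))
loss = loss_fn(pred, torch.rand(2, 1))          # dummy height targets
```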
where the differential operator {{formula:657515c7-4f70-439c-b9a7-b79dbfc7be78}} is defined by {{cite:6ab0bdeb04b071bb02c61af1da812233dc1eb92f}}
{{formula:662de3c9-129e-490a-ad50-b3b9b51596fa}}
| i | e827eb714934bd8a9571596df6adc421 |
For {{formula:20e297df-c056-44db-adc5-6bfb973f113e}} and {{formula:0fd337d2-444c-4096-9fd8-5b258b07dbf5}} , we get {{formula:b295c72a-6e2d-4ee6-bedf-7af4ebe05a15}} which agrees reasonably well with the Sturm-Liouville analytic result {{formula:8a4bfb85-5a00-4006-95b9-82f2d990f958}} in {{cite:62e3aff4f8768b1f5056b981aac9f80581133c98}}. This result shows that the phase transition between the s-wave holographic insulator and superconductor belongs to second order and the critical exponent of the system takes the mean-field value 1/2. In Table 1, we present the values of the condensation operator {{formula:144ad4e1-4bee-45ca-8bfa-bda22182b196}} for {{formula:6c3aa5a9-abd9-437c-9a33-c9c8d48cfb5e}} obtained by matching the near tip and asymptotic values of the fields at different points parametrized by {{formula:a0bad82f-3316-4200-8728-298b09c794ed}} .
{{table:ef9cc734-4d4e-4582-a55f-1d46f2fa8035}} | m | ee564ca4dbe18218db83a35e1039ab1d |
The most standard approaches for finetuning pretrained models are linear probing and full finetuning (Section ). They have been used for supervised pretrained models {{cite:3d9c987c0f48ff252310fe26c119e3fc45de5417}}, {{cite:e2236f2bf1b55febb1c876a76cde7daedb13987b}}, {{cite:b6324e39fd79c47585949d63e3d2a7c0137a2b9b}}, self-supervised vision models such as SimCLR {{cite:f78147bbee9eea035400292916f3837a2405bf86}}, MoCo {{cite:a40a47d7bd63bc87ef3ff063130d95d4afb69873}}, {{cite:69cbf347a3985256c78df188d4abcc55f11f5a4f}}, and vision-text models such as CLIP {{cite:9ae76008e204b60707a882813239605d191d003f}}, {{cite:d6f0aa1d64f6bd19891e67457435e0c077d532b9}}. For vision-text models, we find that FLYP outperforms these standard approaches across a variety of settings and datasets while being just as simple (or even simpler) to implement.
| m | dba2825b626661d02c1659c202e45fcc |
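The two standard strategies named above differ only in which parameters are trainable. The sketch below uses an ImageNet-pretrained ResNet-50 from torchvision purely for illustration; it is not the CLIP or FLYP setup of the excerpt.

```python
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def build_finetune_model(mode="linear_probe", num_classes=10):
    """Linear probing freezes the pretrained backbone and trains only a fresh
    task head; full finetuning updates every parameter."""
    model = resnet50(weights=ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # fresh task head
    if mode == "linear_probe":
        for name, p in model.named_parameters():
            p.requires_grad = name.startswith("fc.")           # train only the head
    elif mode != "full_finetune":
        raise ValueError(mode)
    return model                                               # full_finetune: all params trainable
```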
Comparison with other generative approaches. Observe that for our denoising benchmarking in Figures REF and REF , the recent BDGAN approach is the only deep generative model. The reason is that, apart from BDGAN, no competitive denoising or inpainting results on these standard benchmarks are reported for generative adversarial nets {{cite:a93729289182386b5ce3439fe5fdcf5ba55011f7}} or variational autoencoders {{cite:40c3bde930e58750e93c036c421ecac7dd5165e9}}, {{cite:2872b0a4d3cc22018353d6d47367981e6cbecbe0}} (for GANs and VAEs, benchmarks other than those in Figures REF to REF are often considered; see Appendix for further discussion).
The BDGAN approach {{cite:e1bed93ec00d3071678abd8b502952c97d7cba65}} does provide benchmark values for the denoising task. It shows competitive performance but, like feed-forward DNNs, requires large image corpora for training. At least in principle, however, GANs and VAEs are also applicable to the “zero-shot” category (DE1 and IN1 in Figures REF - REF ). The BDGAN approach uses large and intricate DNNs whose parameters are presumably difficult to train using just one image, which may explain why it was not applied in the “zero-shot” category. Recent work by, e.g. {{cite:cc4d804aed3b22ef29d8c505fbc84e92da9d62c4}}, explicitly discusses DNN sizes and the use of small DNNs for “zero-shot” super-resolution.
| d | 1720e843ef4e6050612eff37a46eeaab |
To explore the possibility of a turbulent flow of the charged quasi-particles in a sample of graphene under realistic conditions we simulate the hydrodynamic equations of motion using an adaptation of the relativistic lattice Boltzmann method described by Romatschke, Mendoza and Succi {{cite:00195a56dba7ed91c18cb899249c500f037e3c7a}}. The Relativistic Lattice Boltzmann Model (RLBM) is a hydrodynamic numerical modeler based on kinetic theory {{cite:a884d95321041c20244ab42d908066e154b7ccb3}} {{cite:babf26f2b67057a5bd344dd1e404826be81216f8}}. It is a variation of the popular Lattice Boltzmann Method (LBM) which is derived from a discretized form of the Boltzmann equation that describes the time evolution of the number density of a group of particles as a probability distribution function {{formula:1957ea02-c3b9-4ae3-8154-908372906456}} . The function represents the probability that a given particle is in a particular state in phase space, and the equation is an expression of the conservation of particle number, momentum and energy. The collision term employed in a lattice Boltzmann model is a greatly simplified version of the collision term defined in Boltzmann's equation using the probability distribution function's relaxation to equilibrium. In the RLBM, the probability distribution function for a fluid in local equilibrium follows a Jüttner distribution instead of a Maxwell-Boltzmann distribution.
| m | 640440da60a70179c9ce496c2ac46ac7 |
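The relaxation-to-equilibrium collision step common to lattice Boltzmann methods, which the excerpt contrasts with the full Boltzmann collision term, is a single line; in the relativistic variant the equilibrium populations would come from a Juttner distribution, which is simply treated as a given input in this sketch.

```python
import numpy as np

def bgk_collide(f, f_eq, tau):
    """One BGK collision step of a lattice Boltzmann scheme: each discrete
    population relaxes toward its local equilibrium with rate 1/tau."""
    return f - (f - f_eq) / tau

# toy usage with 9 discrete populations
f_new = bgk_collide(np.ones(9), np.full(9, 1.1), tau=0.8)
```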
Given that real direct observations of the primordial GWs in far advanced detectors like ultimate DECIGO will become feasible within this century {{cite:ca5a94fd1a71e76fed5a87dc60b2f9c340e5ad71}},
an important and necessary step in the near future is that the next generation CMB experiments should find primordial B-mode polarization patterns and measure
an exact value of the tensor-to-scalar ratio {{formula:4728a8b5-d700-46b5-b5dd-3e516d3d8710}} at the CMB scale.
If that turns out to be the case, we will be able to resolve the uncertainty regarding the value of {{formula:2ca4aaaa-f572-4426-8b63-52663f91794e}} (and hence the value of the effective inflationary coupling {{formula:d863c158-611a-454a-8b34-06fd1bbc0ffb}} )
and make more decisive predictions for the spectrum of GWs in SMASH at scales relevant to the space-borne GW experiments.
Then, at the time when such direct detection experiments start operating, one can obtain a richer information about the shape of the spectrum of primordial GWs and
hence about viable parameter values in SMASH, by combining the results of CMB and direct detection experiments.
Such joint measurements of the GW spectrum over broad frequency ranges in various different detectors were considered recently in Ref. {{cite:f190c2d7ecff1981f20346ec23f4fd57a3dd51a3}}.
| d | 45421f877a708dd9954cd0ae16587746 |
Finally, before presenting and qualitatively assessing the results of the best configuration on real-world data, a short quantitative summary and comparison to the results produced by offline *MVS is given in the following. As the offline *MVS approach, the widely used and open-source COLMAP toolbox {{cite:5362e885e296a7ceafedd86a08e9d92073caf264}} is used. While COLMAP provides the full reconstruction pipeline, i.e. including a subsequent fusion of depth maps into a 3D model, only the geometric depth maps were used in order to make a fair comparison, since the fusion into a 3D model leads to a further filtering of outliers. The significance of a comparison between an online *MVS approach, like the one presented in this work, and an offline approach can nonetheless be questioned, since the two types of approaches make different assumptions and focus on different aspects of the processing, as further discussed in sec:discussionoverallAccuracy.
{{table:1efb9a69-e05b-4c00-b58a-cfd4670bed7f}}{{table:23edc97c-67f7-4b45-b3c4-e94329cf61f8}} | r | b75bc0613089e8f8c52e3f7644d1a9e5 |
In this Section we present a more detailed review of the estimation procedure for the quantile autoregressive time series that accommodates the specification of the autocorrelation coefficient with respect to moderate deviations from the unit boundary. We first introduce the quantile estimation method (a complete treatment of limit results for quantile regressions can be found in the book of {{cite:21b3ef8ff11ce1a6bc6dc51ab1335cbf350009c9}}) to obtain parameter estimates and then establish the asymptotic theory of this estimator. Denote by {{formula:a8a7eb27-a01d-415c-ad83-4ff353c0bc3a}} and {{formula:08d64b56-6bf0-49c2-b15e-686e45ec44f3}} the {{formula:7a4cceb7-23d3-4ca5-b852-e360876d0b77}} quantile-dependent parameters, which are determined based on a conditional quantile specification function
{{formula:efae2c29-ba4f-49b3-853c-8d931b3aa09e}}
| m | 94c83c944207d9d9b339c04daeabf6ac |
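A minimal version of the quantile estimation step described above is to minimize the check (pinball) loss of a first-order quantile autoregression. The sketch below is generic quantile-regression code, not the paper's estimator or its moderate-deviations specification.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def fit_qar1(y, tau):
    """Estimate Q_{y_t}(tau | y_{t-1}) = a(tau) + rho(tau) * y_{t-1}
    by minimising the sample check loss."""
    y_lag, y_cur = y[:-1], y[1:]
    def objective(theta):
        a, rho = theta
        return np.sum(check_loss(y_cur - a - rho * y_lag, tau))
    res = minimize(objective, x0=np.array([0.0, 0.5]), method="Nelder-Mead")
    return res.x   # (a_hat, rho_hat) at quantile tau

# toy usage on a simulated near-unit-root AR(1) series
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.95 * y[t - 1] + rng.standard_normal()
print(fit_qar1(y, tau=0.5))
```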
This result has applications in any scenario where axion stars form with {{formula:34ad5125-3803-4ba0-82d6-457e3def8091}} GeV, including those outlined in the introduction. For example, for ULDM with particle mass {{formula:b54fd8b7-4ac8-4be8-8ecf-0829cac70267}} eV, the correct relic abundance is obtained for {{formula:046e1e4b-99e8-4cd2-a616-b42423cd3f20}} GeV {{cite:e9de8ec6ddf700da454e3d8030d1352890bbb38b}}.
ULDM simulations generally find axion stars forming the cores of galaxies in this scenario {{cite:67859e1106c70d1b7f34120f63e58996a66a973b}}, {{cite:89eb3ac919d10c93725762e8a7bbf56716add43d}}, {{cite:58cbe6dcc3d437e3cc2e6dccf38f048c6a6d65e1}}, {{cite:31274f6122e19d1d915b79e6cd850a8530e27b0d}}, {{cite:d410ec7d0dc0d9de160cc467d01ebc246de82079}}, with masses that are safely below the instability points we find here; however, if the axion star masses had been a factor of {{formula:cff2f1c0-3a76-4f53-ad90-6d7e4f72b3fb}} larger at formation, or if they can accrete mass efficiently, then ULDM axion stars would not merely collapse but also decay on the dilute branch, strengthening the argument of previous studies {{cite:2145edc32c2710648944e5857324473f0328722d}} (see Appendix for details). Such an effect would not be seen in standard ULDM simulations, as they typically neglect both the self-interaction potential as well as relativistic effects. As simulations of axion star formation and accretion become more precise, or as nonminimal axion models are investigated, this must be taken into account in the final analysis.
| d | 501ea83cbdcb73c20da52730d36b50f4 |
{{formula:9bdffba8-ced4-4a12-9b94-bd22f4704905}} SimGNN {{cite:9a571bcbfd332d60a527fcf0f667d26b7f087067}} extracts histogram features from the node-node matching score matrix for prediction.
| m | 92bdf5815c436c584555244621868943 |
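The histogram-feature extraction attributed to SimGNN in the excerpt above amounts to binning the entries of the node-node matching score matrix; a sketch, assuming the scores are already normalized to [0, 1]:

```python
import numpy as np

def histogram_features(score_matrix, bins=16):
    """Normalized histogram of pairwise node-matching scores."""
    hist, _ = np.histogram(score_matrix.ravel(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()
```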
In this paper, we showed that a more biologically constrained version of Rao and Ballard's {{cite:003d0563a74e69d90f26102dfdfb927f434cbbcc}} seminal model of predictive coding performed similarly to backpropagation on supervised learning tasks using MNIST data. We found this to be true under constraints where 1) separate feedback weights were used to propagate errors, 2) activity values were prevented from going negative, and 3) error neuron activities were prevented from going negative using either division or subtraction based encoding schemes. We also showed how the gradients for the new encoding schemes could be computed and incorporated into the model.
| d | 4466508fdb42d3eecaae50141d124a13 |
While the problem of PbRL was introduced almost a decade ago, most work in it has been primarily applied or experimental in nature {{cite:fc7f6749c23a27cbf5143a76140e1cc5a60cd62d}}, {{cite:5582603ac2f2318175a132a6298edf9ab556ef87}}, {{cite:3c2dcb6260137e87537c23ba60460b88921f0e25}}, {{cite:fd294ac9295fde390d5f398b8af82da2999ce7ec}}, {{cite:29cf1c16ba4a293eff63a8a0be5055f9a44230de}}, {{cite:01d614cffc644ca5f7607e5f42b19ecdf84ba73d}}, {{cite:71ad852a4d4d8b3fc880fbe7bda909d9c165be7c}}. There have also been attempts to design suitable algorithms based on varying preference models and problem objectives {{cite:4dcf6b7441c8f25539e244493f54467583d8b824}}, {{cite:55ded62d7089fb01e73ccaf9015befac39248180}}, but to the best of our knowledge, existing theoretical guarantees on PbRL literature are sparse, i.e., the performance guarantees of most of the proposed algorithms are not well-understood {{cite:01d614cffc644ca5f7607e5f42b19ecdf84ba73d}}, {{cite:55ded62d7089fb01e73ccaf9015befac39248180}} except for some very recent attempts {{cite:4dcf6b7441c8f25539e244493f54467583d8b824}}, {{cite:55ded62d7089fb01e73ccaf9015befac39248180}} as discussed below in the section on related work. We consider the problem of finding the best finite-horizon policy (i.e., one with highest expected reward) for an unknown Markov Decision Process (MDP), but with only relative preference feedback on {{formula:beb79069-63c6-4390-b2b7-bdc4768b0c63}} -length trajectories.
| i | caf94b65286c633a2b1149ef83fc1a60 |
To align with the proof of the DSML method, {{formula:ac778cbd-673c-4710-9bde-8be9d689ac41}} and {{formula:80b19c6b-e8fc-479d-bae9-0f82fd66ecc4}} should be independently trained, which is similarly required in the standard DML {{cite:0259675df6e872769de642814b5ccec647bfe878}}. To this end, we always divide a dataset into two disjoint parts: one for training model Y and the other for training model D. At the same time, we make use of the {{formula:e4b30960-5862-4bbf-ae3b-cb94b8789601}} -fold cross-validation to select the optimal ML models and hyper-parameters in DSML. The detailed algorithm for DSML is presented in Algorithm REF .
| m | 83d57c58f51185222609903ce55742c6 |
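The sample-splitting step described above (disjoint halves for model Y and model D, with K-fold cross-validation for model selection) can be sketched as follows; the learner choice and hyper-parameter grid are placeholders, not the configuration in the cited work.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor

def fit_nuisance_models(X, y_outcome, d_treatment, seed=0):
    """Train model Y and model D on disjoint halves of the data, selecting
    hyper-parameters for each by 5-fold cross-validation."""
    idx = np.arange(len(X))
    idx_y, idx_d = train_test_split(idx, test_size=0.5, random_state=seed)

    grid = {"max_depth": [3, 5, None]}
    model_y = GridSearchCV(RandomForestRegressor(random_state=seed), grid, cv=5)
    model_d = GridSearchCV(RandomForestRegressor(random_state=seed), grid, cv=5)

    model_y.fit(X[idx_y], y_outcome[idx_y])    # outcome model trained on split 1
    model_d.fit(X[idx_d], d_treatment[idx_d])  # treatment model trained on split 2
    return model_y.best_estimator_, model_d.best_estimator_
```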
If {{formula:86608504-62d7-40a1-bdb5-44350d9fa4c9}} is not an integer, it follows from Section 7.2 in {{cite:0f09d6bf6e2a03cbad60bf0bcf71deeac853bf1b}} that the Legendre equation has two linearly independent solutions {{formula:86a04249-9374-4354-acb9-c7fc6e0f221c}} and {{formula:3215956f-ca64-4eaa-b560-7f5428bf7b7d}} on {{formula:44bb04e5-6879-44df-ae87-d52d587fe5a3}} .
| r | e12fb7051a7765af70880208f1598a11 |
The above-described method gives all (commutative and non-commutative) weakly associative algebras. But we are interested in developing this method in such a way that it only gives non-commutative weakly associative algebras, because the classification of all commutative algebras is given in {{cite:6eaf3b185c22c52ef4f455b64e830b6a4c8f9f17}}. Clearly, any central extension of a non-commutative weakly associative algebra is non-commutative. But a commutative algebra may have extensions which are not commutative algebras. More precisely, let {{formula:3bdc2396-3ef7-446d-b967-5c11d7c6e6b4}} be a commutative algebra and {{formula:0c8e21f8-33a3-40f4-a75f-c372a2433c8a}} . Then {{formula:143cb08c-682b-4bcd-8dba-55efbb99542d}} is a commutative algebra if and only if
{{formula:4deb3851-ece9-4935-9e4e-105ba065d920}}
| m | 838f7f4036574971379f1e0d04374185 |
Most prior try-on systems adopt a multi-stage approach {{cite:5d8d02690a24a6c62ba89d7cf3566f91c6ac01ad}}, {{cite:2d174c0370f6ce8dc0cd75cd72ea108f7aac1db0}}, {{cite:1c5a463efa9a3b32c0a575e6af0c55e33a1ebc34}}, {{cite:abc276978a1f757306b66c2ad97bd3b2f6c0703c}} shown in Fig. REF , including clothes warping, structure estimation, and image synthesis. Clothes warping is to align the garment to the target pose while preserving the texture details. Structure estimation predicts the segmentation map of the human body to guide the image synthesis. Given the warped clothing and the intermediate semantic labels, image synthesis is performed as a conditional generation task for pixel-level refinement.
| i | 065ff7d2923894cd1891d8ad307b8760 |
(3) Evaluation. Evaluation metrics are essential for the development of new models and the benchmarking of existing ones.
Currently, several quantitative evaluation metrics {{cite:741801410f2e6867409ed930b617e1f1aeef4e0f}}, {{cite:0e0d125f7cba8987fbeb934f90500a54b5f82d75}} and human visual ranking methods {{cite:f914b4af245658c4d03597e1eb57ee17d10f8092}} are used.
However, as these aim to provide relatively objective and fair comparisons between all models, the different applications of FSS are not taken into consideration.
This may lead to biased or unreliable evaluation on certain tasks.
Therefore, more task-specific evaluation metrics and methods could be another important direction for future research.
| d | ffc55c7e1cc747fb92d8d5e7913db435 |
This paper gives a comprehensive overview of TDA which consists of a set of powerful tools for measuring topological features of time series and using it for pattern detection, clustering, classification, and structural break detection. Research extensions in several directions are possible.
First, TDA for
multivariate time series analysis has been studied very recently
{{cite:4984bd3991935ae570328d2b48c58d7e87d56115}}, {{cite:cfb9e18c32abb5ca11081295fa3bc29fdcffece0}}, {{cite:347313e2db363f00dbbba56a0d8e2f5dd8ec1f2f}}.
A second extension consists of using summary statistics and dissimilarity measures for TDA. For a review of summary statistics and dissimilarity measures, refer to {{cite:12ed8b32783fec4f55a861812d9e9bcfc4fa40aa}}, {{cite:de4c110e073b5cca6a304a5b78f5bf4570eec8b2}}, {{cite:c2e009517c0287851d31b485017c47df75450548}}.
Third,
research into improved computation tools in computational topology and further exploration of statistical properties while using TDA is a rich research area. Ongoing research can be separated into these different scenarios: computational homology {{cite:2b007566a88e13074c8728d4859789a32e5cb4c3}}, {{cite:3e1ea891b47f9c75ef7e49244fe0a1488d88f047}}, {{cite:a61d07bda51233d25dc843c6bd181cce143dc9bf}}, {{cite:308b3cf75da89dc63fc06832aa115faf412b0223}}, {{cite:6df99ad3d200fcd2aa7b45f5a0b2973ea6e7574a}}, {{cite:4ebd2d330addb3fe914204c3919e4908f8f22cb7}}, study of topological summaries {{cite:071710b1908da98b41b7210d67bcc301c819e5d0}}, {{cite:9bd510f707a434fe7e6af7f460e8212c25e59829}}, {{cite:a5ebcfd8a1f8e5ffb3b1d3d6d247e9ecea056148}}, {{cite:be37588b0f39b967e6d09f3f355cd0c75144c067}}, {{cite:19c1b8154a325872e7347344a3d593810267ab8a}}, and statistical inference {{cite:a6c980ce4d46132cde1b922aa149f293f3f3e2ee}}, {{cite:2b007566a88e13074c8728d4859789a32e5cb4c3}}, {{cite:e1ba7b369844ddc0a5fb7ddd7f92e8787790c17e}}, {{cite:50398bee5d81cbe69d9f94322f53388ca1af351f}}, {{cite:7151f8820ea24411ee3d66eadf9bcacd290a4ed5}}, {{cite:087999669fe99088fe7ed14feb3d2079e6a9c835}}.
| d | a724c8f22d19e5decd68e2c89f493e95 |