It is well known that the qualitative features of gravitational lensing in the Ellis spacetime are very similar to those in the Schwarzschild spacetime, owing to their photon spheres and their asymptotic flatness {{cite:fd626c1663f1899855a73a297e9627c280977f89}}, {{cite:1a7d80cf60e6b8aaa61d9be385241902ca5d244e}}, {{cite:54ffb02dc2e871b148363fb975294bef7b43e6fd}}. However, their quantitative features are very different, due to their different weak-field behaviors.
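As a point of reference for the quantitative difference noted above (standard weak-field results from the lensing literature, not quoted from this text, and with coefficients that depend on conventions): the Schwarzschild deflection angle falls off as 1/b, while the Ellis wormhole deflection angle, for throat radius a, falls off as 1/b²,

```latex
\alpha_{\mathrm{Sch}}(b) \simeq \frac{4GM}{c^{2}b},
\qquad
\alpha_{\mathrm{Ellis}}(b) \simeq \frac{\pi}{4}\,\frac{a^{2}}{b^{2}},
\qquad b \gg a,\ \frac{GM}{c^{2}},
```

which makes the weak-field images quantitatively different even though the strong-field (photon-sphere) lensing is qualitatively alike.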
d
35126829bfc5ec18a09d64e279474f79
Nakajima's quiver varieties were introduced by Hiraku Nakajima in {{cite:b9e618c83f7889212876372b5cb87651048e7269}} to study the moduli spaces of instantons on ALE spaces, and have been extensively studied since then; see, e.g., {{cite:469fc6110d58e7c73453982e4c3240175fe96dea}}, {{cite:52c8bdfbaeda7f4fdc151c52cd9ef434e7528e91}}, {{cite:368a0ec6fac47fc0abeb57a5dfc763dead397517}}, {{cite:e4a327a3914ad186bb57e97876df676875cd7146}}. They provide a modern and significant example of how deeply, and sometimes surprisingly, algebra and geometry can be connected: their main feature is that they allow one to relate certain moduli spaces of bundles (or torsion-free sheaves) over smooth projective varieties to moduli spaces of representations of suitable algebras (the so-called path algebras of a quiver, and quotients of them). A major example of this bridge is given by the moduli space of framed sheaves on {{formula:1f207ef9-6162-45c7-ba41-c0b70ee782dd}} , which can be identified with the moduli space of semistable representations of the ADHM quiver (see {{cite:368a0ec6fac47fc0abeb57a5dfc763dead397517}} for details).
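To make the ADHM example concrete (a standard formulation from the literature, not spelled out in the text above): a framed torsion-free sheaf on the projective plane of rank r and second Chern class n corresponds to a stable ADHM datum

```latex
(B_1, B_2, i, j), \qquad
B_1, B_2 \in \operatorname{End}(V),\quad
i \in \operatorname{Hom}(W, V),\quad
j \in \operatorname{Hom}(V, W),
```

with dim V = n and dim W = r, subject to the ADHM equation [B_1, B_2] + ij = 0 and the stability condition that no proper subspace of V is preserved by both B_1 and B_2 while containing the image of i.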
i
3d102b63d67fd2559ab55e4aea44ea1c
By Corollary REF , there exists the (unique) solution {{formula:33b8870d-7b74-4aaf-9129-b8b5c882fd3a}} to Problem REF with {{formula:f33c52b8-c65a-40ec-a779-cdc3b5524f7e}} , where {{formula:6fd6a6fd-48b0-42ad-b6c1-b1c562ba4666}} is large enough ({{formula:6f25a6d8-3fdd-44e9-8841-ac5cc76967f5}} ), while according to Theorem REF , {{formula:759e8031-2f63-4fdf-ae57-01c3d987aaa3}} Since {{formula:3e1f73ef-e964-4047-b386-84b6f7eb66f3}} as {{formula:a7ca791c-ec88-4f1f-8b50-2eb3a4982e6c}} , cf. (REF ), these {{formula:98cfc967-e6f4-4f02-aad2-fd80d1aa48b6}} form a minimizing net, i.e. {{formula:0d7d66c5-c2a0-4cb6-993c-2e3ed638d370}} and hence {{formula:a6509801-615e-49bc-8f1e-f81ff8854bdd}} converges strongly in {{formula:23bcca4a-b5a6-4b1b-b66c-3b1c0ca2ac2f}} to the extremal measure {{formula:d6381990-badf-481c-9023-d5f46adc760a}} (Lemma REF ). Relations (REF ) and (REF ), both with {{formula:4145d016-c48e-4eec-96e5-48e876d16eab}} in place of {{formula:19586f7b-e3e0-4100-9a44-59bf0bb5acc6}} , give {{formula:c3cb3573-24ab-44c7-bd8d-a838bc1dfd6d}} the finiteness of {{formula:23260cdb-4d82-4464-8f3b-2429afb90688}} being clear from the latter relation in (REF ). Fix {{formula:468eb5d3-77ea-4fd5-ba72-bae103def1e7}} . The strong topology on {{formula:236c2da4-7c9b-41b7-b73e-1a229d5b8931}} being first-countable, one can choose a subsequence {{formula:bd7db954-9fd9-4122-9f27-3dd8ff9c5629}} of the net {{formula:61c83650-3ed8-4695-8101-fabafc4b88bd}} such that {{formula:c9c6b969-707e-484d-a539-fe6b5cdd1f50}} There is certainly no loss of generality in assuming that {{formula:6ccfcf7d-075b-443f-83f0-c059ac3523c8}} for if not, we replace {{formula:bddf50ff-89a9-413d-9225-12984256831f}} by {{formula:21daa0fb-b86c-4bbb-bc2d-fedfba9a05f0}} ; then, by the monotonicity of {{formula:9d01a8f6-874f-41a3-9e9e-e0d339a7a0da}} , the sequence {{formula:4fed1309-50bd-48c9-bcb0-9642b0878bea}} remains minimizing, and hence also converges strongly to {{formula:6aab62ab-0c73-43db-97ab-7e559329f420}} . 
Due to the arbitrary choice of {{formula:7fda697e-1309-4cc5-9bee-435d19748a68}} , (REF ) will follow once we show that {{formula:78c732e2-0742-4abc-b080-f10df3bf8049}} Passing if necessary to a subsequence and changing the notation, we conclude from (REF ), by virtue of {{cite:4cb0f76fc61609c58d53c4bd6a491b399fdf7b02}}, that {{formula:80a3f048-9a13-495f-ab8a-67017cadb0d3}} Applying now (REF ) to each {{formula:e422f5a2-884d-4d38-b4f1-b873abe7a9b3}} , and then letting {{formula:b922bad3-3b56-452d-9766-9c80820445fd}} , on account of (REF ) and (REF ) we arrive at (REF ). (Here the countable subadditivity of inner capacity on universally measurable sets has been used, see e.g. {{cite:a58d67caef8bdbf61af3daae760a5ca678e65e7f}}.) Assume now that {{formula:67ea46da-5d9e-4cf5-85a2-956855e69aa4}} is l.s.c. on {{formula:7e30d847-ab94-4ffc-9a5d-87a01786efda}} . By Theorem REF , then {{formula:ad7d642e-dce6-4f2f-a8e9-06c27029a3a6}} where {{formula:268143c2-7e75-4bd8-8921-c5a267d9aeb1}} is the sequence chosen above. Since {{formula:e711f304-6868-46d8-a725-1635cb7d6ff9}} converges to {{formula:c7445dc0-8181-405b-8935-618a17d80e71}} vaguely, see (REF ), for every {{formula:8018ec1c-a3b6-43da-b0f6-0682bccf91de}} there exist a subsequence {{formula:1c86fa9e-3658-4370-938c-1624139b9568}} of {{formula:3e928c29-c7e2-4b83-92e2-78bb5353460f}} and points {{formula:6ab0820a-0103-4e3c-b2ec-932d451167a9}} , {{formula:981ee281-804f-4728-9791-4dbe8dc04610}} , such that {{formula:5ae5a309-38f5-4203-b868-9aeaf7e5b1d6}} approach {{formula:6f87043a-e641-455f-89c6-8c134071ed4e}} as {{formula:5617789b-46a8-4d19-89dc-41b9fa768763}} . 
Thus {{formula:fb6e09fe-4666-4977-9358-cc94ff295266}} Letting here {{formula:71e586e1-66db-43d5-9fca-18e9fd1ccd00}} , in view of (REF ) and the lower semicontinuity of the mapping {{formula:c78b3f2a-74d1-494e-860d-9f6939729ea7}} on {{formula:24814dc8-5a10-4528-87e1-c3f28395fee2}} , where {{formula:3b403446-5fe6-4258-ac00-2212f73865af}} is equipped with the vague topology {{cite:4cb0f76fc61609c58d53c4bd6a491b399fdf7b02}}, we obtain (REF ). Thus, by virtue of (REF ) and (REF ), {{formula:5e9fb1df-4d76-467f-afcc-96568ef6e76c}} n.e. on {{formula:504dee40-8aa8-4dc5-b32c-5710b7779b65}} . Since {{formula:2a385040-e90d-4f72-ab70-4d2e458498b7}} (Lemma REF ), this implies (REF ).
r
64228b288cb8aaf57d09041e1367b3d0
The SEIR model fits the reported COVID-19 data well, with {{formula:e1ee830d-fd39-4cbf-8f73-87aa3f602185}} in the range {{formula:bafb668e-247a-4555-b26b-019fa148e243}} . The supplied data, however, show many fluctuations and are likely subject to large uncertainty. Several factors may be mentioned: 1) fluctuating behaviour in the number of active and hospitalized cases due to unwanted epidemiological events in the República de Cuba, arising from native transmission and from transmission by travellers originating in countries with high COVID-19 transmission and new strains of SARS-CoV-2; 2) the introduction of new strains of SARS-CoV-2, such as those from South Africa (variant B.1.351), the United Kingdom (variant B.1.1.7), California (variant B.1.429), Brazil (variants B.1.1.28.1 and B.1.1.28.2), and India (variant B.1.617, or delta strain); these strains have higher transmission rates than the original Wuhan strain (variant D614G), which prevailed in the República de Cuba from March 2020 up to April 2021; 3) seven new protocols for confronting COVID-19, introduced according to the epidemiological situation; 4) clinical trials (Phases I-III) of the Abdala and Soberana-02 vaccine candidates. These reasons explain why the fit of system (REF ) to the experimental data is not entirely satisfactory over the selected study period, which is one limitation of this study. Another limitation is that {{formula:2b8ecec2-5bcd-497b-9793-5281be3c6407}} and {{formula:79b7d699-4adf-48a2-90e7-79df334bf308}} are treated as constants in our model; nevertheless, epidemiological studies show that these two parameters change over time {{cite:95117b7fea29240aa21a724c07348379d4619309}}, {{cite:f5e264d0a235d5bb3cd6255f534421c6618e7c90}}, {{cite:b817e885bf7028e16f6ab2cfefb1afa63f859b79}}, {{cite:259462387300547d6b533702d9cdabb0b835d493}}, {{cite:b53d8b85b4e6bf73ba507f3d12e5258dbf086bd1}}.
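Since the discussion centers on fitting an SEIR model, a minimal sketch of the underlying dynamics may help. This is a generic forward-Euler SEIR integrator with illustrative parameter values (`beta`, `sigma`, `gamma`), not the authors' fitted system or their calibrated parameters.

```python
import numpy as np

def seir_step(s, e, i, r, beta, sigma, gamma, n, dt):
    """One forward-Euler step of the standard SEIR ODEs."""
    ds = -beta * s * i / n            # new exposures
    de = beta * s * i / n - sigma * e # incubation outflow
    di = sigma * e - gamma * i        # onset minus recovery
    dr = gamma * i                    # recoveries
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

def simulate(s0, e0, i0, r0, beta=0.4, sigma=1 / 5.2, gamma=1 / 10,
             days=200, dt=0.1):
    """Integrate the SEIR system; returns the trajectory as an array."""
    n = s0 + e0 + i0 + r0
    state = (s0, e0, i0, r0)
    traj = [state]
    for _ in range(int(days / dt)):
        state = seir_step(*state, beta, sigma, gamma, n, dt)
        traj.append(state)
    return np.array(traj)
```

Because the four right-hand sides sum to zero, the total population is conserved at every step, which is a quick sanity check for any implementation.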
d
ff82c0ad03f5ba5206266d9a29f060b7
Unlike the radial component of the stellar bulk motion, the vertical component, which manifests as a combination of breathing and bending mode motions, is stronger outside {{formula:2a4b3835-1f24-4bf2-a828-ec8c73bd90ed}} than inside. The breathing motion detected in the outer disk confirms the finding of {{cite:a7e9e720aa4b25840f187d6d90986ac7477c2787}} that there is a notable vertical contraction at {{formula:b8908c6d-a249-4b22-9446-86ec572e53d9}} . The inner-disk rarefaction-like vertical motion, which was reported by some earlier works (e.g. {{cite:a7e9e720aa4b25840f187d6d90986ac7477c2787}}; {{cite:df526b794754e3d352e43294859be68fb58b0c27}}; {{cite:0a54940246da391794d7c5ae3bb335e8c5b30381}}), only appears at {{formula:cadcd477-716c-422e-9f0e-662faae2b3be}} . The bending motion derived from outer-disk stars is aligned with recent findings that an upward bending motion exists at {{formula:1aa6c51b-0463-4974-9790-aaf0cd5a13e1}} (e.g. {{cite:6a27768100a621d28ed320c6a8826566b8541e98}}; {{cite:0a54940246da391794d7c5ae3bb335e8c5b30381}}; {{cite:7a9a284faac761ac5ed33c848bb1e8efaea8b5ff}}; {{cite:07c11f507df488da6164653a6226c0a0d87b4a4d}}); by comparison, the downward bending motion inside {{formula:b8dd719c-9f59-4708-b285-b16b4781a36d}} is negligible. The directions of both the breathing and bending motions are mostly invariant with respect to {{formula:38fd764c-d08c-4d74-9081-abb2cb97d7c0}} , despite their {{formula:02de375c-cb35-4e8b-bca5-474901fb17f1}} -dependent amplitudes.
d
e2cc4e05e539eaf470312e562955edfd
SS-GAN {{cite:f401dc530e3bf555ebfd32aad380edb1136330a4}} proposes to use two GANs: a Structure-GAN that generates a surface normal map from random noise {{formula:808d6095-1e74-496c-b4db-98109646e022}} , and a Style-GAN that takes both the generated surface normal map and a noise vector {{formula:55c47a19-d483-4b96-a00f-4476ecba514d}} as input and outputs an image. The Structure-GAN uses the same building blocks as DCGAN {{cite:cfd01ef5b03e3e3cfaac43f5a5054df9ac97aa5a}}, while the Style-GAN is slightly different. In the Style-Generator, the generated surface normal map and the noise vector pass through several convolutional and transposed convolutional layers respectively, and the results are then concatenated into a single tensor that goes through the remaining layers of the Style-Generator. In the Style-Discriminator, each surface normal map and its corresponding image are concatenated along the channel dimension to form a single input to the discriminator. In addition, SS-GAN assumes that a good synthetic image should allow a good surface normal map to be reconstructed from it. Under this assumption, SS-GAN designs a fully-connected network that transforms an image back to its surface normal map, and uses a pixel-wise loss that enforces the reconstructed surface normals to approximate the true ones. A main limitation of SS-GAN is that it requires Kinect data to obtain ground truth surface normal maps.
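The two concatenation steps described above can be sketched as follows. This is a minimal illustration with hypothetical helper names (`style_discriminator_input`, `style_generator_merge`), using plain arrays in channel-first (C, H, W) layout rather than the paper's actual network code.

```python
import numpy as np

def style_discriminator_input(normal_map, image):
    """Concatenate a surface normal map and its corresponding image
    along the channel dimension: (C, H, W) + (C, H, W) -> (2C, H, W),
    forming the single input fed to the Style-Discriminator."""
    assert normal_map.shape == image.shape
    return np.concatenate([normal_map, image], axis=0)

def style_generator_merge(normal_feat, noise_feat):
    """After the normal map and the noise vector pass through their
    separate branches, the resulting feature maps are concatenated into
    one tensor that the remaining Style-Generator layers process."""
    return np.concatenate([normal_feat, noise_feat], axis=0)
```

The same channel-wise concatenation pattern applies unchanged to batched tensors in any deep learning framework (with the channel axis shifted by one).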
m
083cb7ff36827327ecefe3dffc9b3f75
In Tables REF and REF , we summarise our results for the TE calculations and for the non-pulsating and pulsating models for the two types of AGB stars, SRV and MIRA. The abundances derived from recent observations are listed in Table REF for comparison. In the SRV models, atomic Al is found to be important in all cases. In comparison, the MIRA models show lower, but still considerable, amounts of atomic Al. Strong and broad doublet resonance lines (around 395 nm) of the neutral Al atom, likely originating from the photosphere, are found in Mira-variable stars {{cite:fd5c04c7fc657ab91aa11e34070c97817bcf69b2}}, {{cite:d1cd6a708006ba434cc98755c1f1c139342e4c57}}. Reliable abundance estimates of atomic Al are, however, lacking. Our models nevertheless support the presence of atomic Al close to the photosphere of M-type AGB stars. AlO is by far the most studied aluminum-bearing molecule in oxygen-rich AGB stars. AlO is a diatomic molecule with known transitions and has a large dipole moment. It is thus not surprising that it has been found in a number of late-type stars, since its discovery in red supergiants {{cite:706b72dd63e34a4dce2a2f1580fcd08cee3ec51c}}. In Table REF , we list recent detections of AlO and give abundances, where available. {{cite:d1cd6a708006ba434cc98755c1f1c139342e4c57}} observed AlO in the archetypical star Mira itself. The authors give a broad abundance range between 10{{formula:07ec02c3-bfa3-4eb1-8550-a2c36a91b52b}} and 10{{formula:402fbf88-eee7-4a61-9705-b5b2b7b42308}} , owing to large uncertainties in the measurement of the column density of AlO. Their semi-analytical model results in an AlO abundance of 5 {{formula:e99d8bc7-ba05-4f3c-855f-88a74630ba2a}}  10{{formula:69d416e9-e39e-49c9-8752-7b49168477c3}} in Mira. We find generally lower AlO abundances in our kinetic MIRA models, except in the consecutive pulsating model for {{formula:c127d9f1-60d6-4568-b3f8-359ef3ac1744}} 2 R{{formula:b119f181-3887-4d7b-9822-9774fdb6a2d2}} (see Table REF ). 
{{cite:40536ba8a7fa30d88de94f3c37f25271e47509d3}} conducted a search for AlO in 7 AGB stars, 6 of which are MIRA-type (R Aqr, TX Cam, o Cet (Mira), R Cas, W Hya, IK Tau) and one of SRV type (R Dor). They definitively found AlO transitions in two stars (o Cet and R Aqr) and tentatively in three further stellar sources. The authors derive a source-averaged AlO fractional abundance of 4.5{{formula:e93446ef-3875-43d6-b167-281e6e1aca48}} 10{{formula:0d1dbea0-d8b9-4fdf-8c85-aa927069b92d}} , based on a rotational diagram of IK Tau, also hinting at difficulties in deriving AlO abundances for individual stars. More recent ALMA observations {{cite:06eaf8f7cf4f5da1a2b54140f8f1d0d291254c3d}}, {{cite:b3b58a1ad02c885965e6f28015974f8b28c6ce97}} show the absence of AlO in the inner envelope of IK Tau, a MIRA-type variable, but its presence in R Dor, a typical SRV-type star. In R Dor, they derive an AlO fractional abundance of (3.7{{formula:1c7eedda-b734-465b-809c-74e67b3f7448}} 7.8){{formula:2046b12f-5495-4bb6-8dce-c62ca23ea0ad}} 10{{formula:fcb3b5a9-de7b-4d73-891c-25b2cd474288}} . These values compare well with the SRV consecutive pulsating model. Moreover, we note higher AlO fractions in the SRV-type stars than in the MIRA-like AGB stars, which is consistently reproduced by our models for both types of trajectories, non-pulsating and pulsating. AlOH represents the predominant Al-bearing molecule in both sets of models, SRV and MIRA. Its spectroscopic identification, and a related abundance estimation from (radiative) transitions, is challenging for the following reasons. At present, there are no spectroscopic constants available in the literature for the AlOH electronic bands in the optical regime. 
Observations of rotational (and vibrational) transitions at longer wavelengths are feasible, but some rotational AlOH transitions are blended with rotational transitions of vibrationally excited TiO molecules and with rotational lines of a neutron-enriched isotopologue of SO{{formula:180ccc6c-f08d-43e0-a08e-18c1a45d5b48}} {{cite:8942dfd990a810b9a912e55920092eb0db2ff19c}}. In addition, it is debated whether circumstellar AlOH also exists in a bent configuration {{cite:ccba11742f91239897b20fe20d4c5a67132f0dce}}, exhibiting different spectroscopic constants. At high temperatures, AlOH will be a quasi-linear molecule, because the difference in energy between the bent and linear forms is very small. Therefore, an accurate observational abundance estimate is demanding and tends to underestimate the AlOH content. This might explain the systematically larger AlOH model abundances in comparison with observations. Overall, the AlOH/AlO ratio in our circumstellar models is larger than the observationally derived ratio, indicating that the kinetic conversion of AlOH into AlO in our models is too inefficient. Neither the photodissociations nor the bimolecular reactions AlOH+H and AlOH+OH (which produce AlO) are able to noticeably transform the stable singlet AlOH molecule into the doublet AlO, although we used the most elaborate kinetic data available. Therefore, possible remedies include that
d
289a6f6f1d0f1a27c62d46458eb02441
So spin networks are the kinematical states of the theory, and the game is to describe their dynamics, i.e. their evolution in time generated by the Hamiltonian constraints. Although the traditional canonical point of view is to attempt to discretize, regularize and quantize the Hamiltonian constraints {{cite:70fc06f850316f861f8194dc9ca8b641219274d4}}, {{cite:aa6a416f2cb607b8e8c78fb04f3efbc3823f5228}}, this often leads to anomalies. The formalism therefore naturally evolved towards a path integral formulation. The resulting spinfoam models, constructed from (extended) topological quantum field theories (TQFTs) with defects, define transition amplitudes for histories of spin networks {{cite:0643e6ec66ffdc76dda4fa7c97de2263f3283f3f}}, {{cite:d5ab6ef9316eddbe55c4ca52140eacafcaf53c60}}, {{cite:4546ea0c3bb1ba7334a67adc3d64a06bac6e5b94}}, {{cite:42399306e582048171a05ab55612e4121bb13369}} (see {{cite:c6082fea6ef659318100c634be845c54c38da8d5}}, {{cite:9d69eae54a296c6700eab67aae1cad64b24ff43d}}, {{cite:ec2c8ee4ad994c0df81bc7d2287b5a34ccc82a7d}} for reviews). The formalism then evolves further into a third quantization, in which so-called “group field theories” define non-perturbative sums over random spin network histories, in a similar way as matrix model partition functions define sums over random 2d discrete surfaces {{cite:d5d275be7637503335503cf396da74f4f2e24ff4}}, {{cite:48a3672f6b57c21b595baf5e1650e1a7974c900a}}, {{cite:6b82923bbac5f07c51bb9cde7685d2553b804f78}} (see {{cite:05af7b356249853d80d50309106057725384d8ff}}, {{cite:555117934d24cff25ee201248e8372d4c3d45ac8}}, {{cite:4aca7aa5e940febed5cf62b151a604e0ac4e92ba}} for reviews). Evolving spin networks, formalized as spinfoams, describe the four-dimensional quantum space-time at the Planck scale. This quantum space-time is defined without reference to a background classical geometry, with quantum states of geometry defined up to diffeomorphisms, with no reference to any intrinsic coordinate system or background structure. 
Then concepts in classical geometry, such as distance, area, curvature, become emergent notions, in a continuum limit after suitably coarse-graining Planck scale quantum fluctuations. They can only be reconstructed from the interaction between subsystems, quantified by correlation and entanglement shared between subsystems (see e.g. {{cite:7b8c9d79607442129e20dbac09ebb0e199840a62}}, {{cite:cbc8f9a8bf46680dd719a8de7fb19a76d67291f7}}). This perspective sets the field of quantum information at the heart of research in quantum gravity, with essential roles to play for entanglement, decoherence and quantum localization in probing quantum states of geometries and thinking about the quantum-to-classical transition for the space-time geometry.
i
328b32c1ee9ff3b44c0a569f958f38da
That is, the update step is rescaled by the inverse of the Hessian. An intuitive interpretation of Newton's update is that the Hessian contains curvature information: where the curvature is steep, the inverse Hessian shrinks the step; where the curvature is flat, it shrinks the step less, resulting in a larger update. For a general nonlinear {{formula:7dcc3c5b-6fe7-4012-9fb7-085a1354bfdb}} , we cannot hop to the minimum in a single step. Similar to the stochastic optimization methods introduced in previous sections, Newton's method is applied iteratively, as formulated in Algorithm REF {{cite:98141df9a29342df9d09792118832d48faca42eb}}, {{cite:2a0214bdebf1f6114a1ff6867fd738772f55e353}}.
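The iterative update described above can be sketched as follows; a minimal implementation assuming callables for the gradient and Hessian (the names and the example function are illustrative, not taken from the text).

```python
import numpy as np

def newton_minimize(grad, hess, x0, iters=50):
    """Newton's method: the gradient step is rescaled by the inverse
    Hessian, so steep curvature gives a small step and flat curvature
    gives a large one."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(iters):
        g = np.atleast_1d(grad(x))
        H = np.atleast_2d(hess(x))
        x = x - np.linalg.solve(H, g)  # x_{k+1} = x_k - H^{-1} g
    return x

# Example: f(x) = x - log(x), minimized at x = 1
# (f'(x) = 1 - 1/x, f''(x) = 1/x^2), so Newton iterates converge to 1.
xmin = newton_minimize(lambda x: 1 - 1 / x, lambda x: 1 / x**2, x0=0.5)
```

For a quadratic objective the Hessian is exact everywhere, so a single Newton step lands on the minimizer, which is the "hop to the minimum in one step" case the text contrasts with nonlinear objectives.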
m
df1588ec0381997369e935fb8517d771
The duration of an excursion, as well as the area of an excursion above a given threshold, of the occupation process of an {{formula:ae8eeb06-77d3-4cdb-8363-6f7a68654af8}} system have been studied in {{cite:6cff936e7f3dc40be440581a4d0dff6eb3d4a80f}}, {{cite:706854b839f03a170e6c2b3f8592d949683d3584}}. The analysis reveals the key role played by associated Charlier polynomials, which satisfy the same recurrence relation as Charlier polynomials but starting from a certain index (namely the excursion threshold). The relationships between orthogonal polynomial systems and their associated orthogonality measures, spectral theory, continued fractions, etc. are clearly explained in {{cite:426eb07dc6be23f362f5ef4c690c81ce2fd0dcd5}}, {{cite:d3d0e7c40a769fe2fc0be89776d6d6b029adb9e3}}. In this paper, we shall use these reference books to explore the number of departures from the {{formula:77b406c2-6480-4b0e-80b3-3a7f92d07394}} system in a finite time interval.
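For concreteness, the three-term recurrence mentioned above can be evaluated numerically. This sketch assumes the standard Charlier convention a·C_{n+1}(x;a) = (n+a−x)·C_n(x;a) − n·C_{n−1}(x;a) with C_0 = 1, and checks orthogonality against the Poisson(a) weight; the function names are illustrative, and the associated polynomials of the papers cited obey the same recurrence started from a shifted index.

```python
import math

def charlier(n, x, a):
    """Charlier polynomial C_n(x; a) via the three-term recurrence
    a*C_{n+1} = (n + a - x)*C_n - n*C_{n-1}, C_0 = 1, C_1 = 1 - x/a."""
    c_prev, c = 1.0, 1.0 - x / a
    if n == 0:
        return c_prev
    for k in range(1, n):
        c_prev, c = c, ((k + a - x) * c - k * c_prev) / a
    return c

def poisson_inner(m, n, a, cutoff=80):
    """Discrete inner product against the Poisson(a) weight e^{-a} a^x / x!;
    it vanishes for m != n, which is the orthogonality property."""
    return sum(math.exp(-a) * a**x / math.factorial(x)
               * charlier(m, x, a) * charlier(n, x, a)
               for x in range(cutoff))
```

Truncating the infinite sum at `cutoff` is harmless here because the Poisson weight decays super-exponentially.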
i
2c94725ba990b7c62029636143fb95de
Fair MF models outperform fair PSL models w.r.t. non-parity unfairness: All MF methods perform significantly better for the non-parity unfairness metric when compared to the PSL models. Also, optimizing for non-parity unfairness in MF causes an increase or no change in almost all the other unfairness metrics, which is consistent with the results presented in {{cite:166b56b475ecfff700d8e7c312723e46f9f5bd42}}. For PSL, we note that fair PSL models perform significantly better than non-fair PSL models with respect to non-parity unfairness metric.
r
9136c1f074d2b28453cf168e5e51c534
Different QCD matter systems have different phase-transition temperatures, which is a crucial parameter determining the features of the GWs produced. A question then arises naturally: what kind of QCD matter, undergoing a first-order confinement/deconfinement phase transition during the evolution of the universe, is the most likely source of the NANOGrav signal? In this paper, we try to answer this question. We consider three types of QCD matter systems: (i) heavy static quarks with a zero baryon chemical potential, (ii) quarks with a finite baryon chemical potential, and (iii) a pure gluon system. We match the NANOGrav signal with the GW spectra from the first-order confinement/deconfinement phase transitions in these three QCD matter systems using holographic models {{cite:8cc0b238d335ab68eab73097b1311f2f07076ab7}}, {{cite:9cb969c60b3116fdc2016223405cba5b6fd3da3b}}, {{cite:d23d7df479c01f8a62aa0066ed6ccea79f78844b}}. We show that the GW spectra from the phase transitions in the pure quark systems, regardless of whether the chemical potential is finite or zero, could explain the NANOGrav signal according to the current observations. In contrast, the signal could not possibly come from the gluon confinement.
i
3545eceabe58862f7dbcd0fb06f6cc40
We have shown that confining a dense bacterial suspension in a thin periodic racetrack leads to the spontaneous formation of a stable circulation along it. A similar setup has previously been used to study two other active matter systems: marching locusts {{cite:a27fadc9c297d6fd059e7ed9675b8c545c5cdc74}} and rolling colloidal particles {{cite:9c171647afbe4906a25a3be6989e1a2574b5fd8a}}. As for bacteria, collective colloidal motion is driven by hydrodynamic interactions, but it can result in the formation of travelling waves and density shocks, whereas the bacterial suspension always appears spatially homogeneous.
d
3f1b3d5f0b41bf6c3151ea3fcaa9e77a
Deep reinforcement learning (DRL) has a wide range of applications in wireless tasks {{cite:4beca0fbf8028ed4f94e16c4c0d954c2fd37f24c}}, {{cite:98f8307f3ab00f197587eb5333aca255a12e4e8d}}, {{cite:de7a6bf7343fb93d16e88125616c3744173de7b5}}, {{cite:933e9793ace17090df9f36d572230614f1af0bb5}}, aimed at making resource-management decisions in an online, end-to-end, model-free, or distributed manner.
i
124a7ba0cb371ad9148692e48d567045
In this article we would like to follow the procedure used in {{cite:ea4bbb68d86e45e64e906f86267de896b3703117}} and in {{cite:1ee59ec03fe2229f7596c370492c006f635434eb}} in the case of a very interesting non-relativistic system, namely the low-energy Lagrangian for {{formula:d853ade7-2d84-44f0-88af-25db95070691}} D0-branes in string theory {{cite:1cadc7587ab5f63e83a7de308bac901dace9e785}}, {{cite:b1a50e45d07ae608e40c756c9d5b9edd7cef0393}}. It is well known that the low-energy Lagrangian for a system of many type IIA D0-branes is the matrix quantum mechanics Lagrangian arising from the dimensional reduction to {{formula:f4ab7e86-6bcf-4dff-934b-a47568e19d99}} dimensions of the {{formula:94523cd3-2c64-41ab-b488-c981528aff4c}} super Yang-Mills Lagrangian; for a review see, for example, {{cite:d2dca5eef97efe71f4b5c882224e94acce671975}}, {{cite:36874005d1cbcecea4514175d51adae6e428fa93}}, {{cite:e71854299bfc505067d32e4e753025a1ddea2241}}. The importance of this action is that it is the key ingredient in the formulation of Matrix theory {{cite:18f9ac845668207ef118346aa2400ef92dc4d95d}}, which is, strictly speaking, defined for an infinite number of D0-branes, even though there is a version of Matrix theory that is correct for finite {{formula:721e71ab-8062-4473-903d-ee9266d5b6e8}} as well {{cite:4807288574c80f5948fee252a1a9c44b6b3f5bc8}}, {{cite:c7092b457aed2872755a34a2c5834c8db9ba6581}}. We now show that it is possible to formulate a Lagrangian for {{formula:cdd533dc-0818-4a15-aa5b-fd1333c7b143}} D0-branes that is manifestly invariant under gauged Galilean transformations and hence, according to the extended discussion presented in {{cite:ea4bbb68d86e45e64e906f86267de896b3703117}}, corresponds to relational mechanics for {{formula:9648d430-ef5d-4afd-81d8-445929752665}} D0-branes. 
However, it is not possible to write it in a manifestly relational form, in which the Lagrangian depends on the relative velocities and distances of the particles, in the general case, because the fundamental objects in Matrix theory are matrices rather than coordinates of individual D0-branes. This can be done easily, though, in the approximation where the off-diagonal components of the matrices are small with respect to the diagonal ones, so that we can neglect them; we show that in this case the Lagrangian takes a purely relational form.
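Schematically, the bosonic part of the matrix quantum mechanics Lagrangian obtained by the dimensional reduction mentioned above reads (a standard form with fermions suppressed, the gauge field set to zero, and coupling factors absorbed into g; conventions vary across the literature)

```latex
L = \frac{1}{2g}\operatorname{Tr}\!\Big(\dot{X}^{i}\dot{X}^{i}
+ \frac{1}{2}\,[X^{i},X^{j}][X^{i},X^{j}]\Big),
```

where the X^i are N×N Hermitian matrices whose diagonal entries are interpreted as D0-brane coordinates; the commutator potential vanishes on simultaneously diagonalizable configurations, which is precisely the regime in which neglecting the off-diagonal components makes the relational form manifest.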
i
62f86ff6461874826b399aa22ac039ad
Calculating the IPRs of all {{formula:6bd867c1-3f69-442a-b2f2-53cb5f04ffe8}} states, we compute the AIPR of the system, which is {{cite:ef5b9fe438ee178415f082cd6408097e561e1308}} {{formula:271ede91-7ff5-471e-92df-5778bcb45d86}}
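Assuming the standard definition IPR = Σ_j |ψ_j|⁴ for a normalized state (the cited convention may differ in normalization), the computation can be sketched as:

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of a state |psi>: IPR = sum_j |psi_j|^4.
    Equals 1 for a state localized on a single site and 1/N for a state
    spread uniformly over N sites."""
    p = np.abs(psi) ** 2
    p = p / p.sum()        # enforce normalization
    return float(np.sum(p ** 2))

def aipr(states):
    """Average IPR over a set of eigenstates (one state per row)."""
    return float(np.mean([ipr(s) for s in states]))
```

Averaging over all eigenstates in this way is what turns the per-state localization measure into the single AIPR figure quoted for the system.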
r
75a1e36e02fd748fbb085683183dc228
We compare how our method GenB performs in relation to the recent state-of-the-art methods that focus on bias reduction, as shown in Table REF . For VQA-CP2, we first list the baseline architectures in the first section. Then, we compare our method to methods that modify language modules (DLR {{cite:126e4c18d78f857237d1d410954737be7f3e1990}}, VGQE {{cite:11e26cc6f0274812467b04ed197c2de3cf215a0b}}), strengthen visual attention (HINT {{cite:9daa6a5beec5cdda81186480ab4f9056c4140011}}, SCR {{cite:9f70a4b910ebd4059f401a6f5d944faae189af58}}), ensemble-based methods (AReg {{cite:e899a06af7eb4726c87658e323df39f7ab35a819}}, RUBi {{cite:33c93ebf80a302fbaa4fddaed7431664f9a8527d}}, LMH {{cite:b5407c27f009278a9335b4d0425081e2a80f777e}}, CF-VQA {{cite:08fdba2e03ba5055c221b590f550d7455d605d0a}}, and GGE {{cite:f302a4ee56974a5998e542bdf0d529fc5159282f}}), and methods that balance training data by changing the training distribution (CVL {{cite:e022d2ba9a7f4d3b3a834cf6ccb4d3dbd4fb41e7}}, RandImg {{cite:b6c5a60b47e8b45a6c5ec6bc8d5d6e4b3bcf1f12}}, SSL {{cite:0233dc16b02d30190c79f1af7c5ab0fa24058749}}, CSS {{cite:e82d356bda082badf9166dc1df0c402a1ade685b}}, Mutant {{cite:10e828658ba94756b798b193a8b676268f5ce906}}, and D-VQA {{cite:b1f7db52a6027cb5b35b6648e1d1ef7cc51c1a20}}) in the respective sections. Among the training-data balancing methods, some swap the images and questions of the supposed pairs {{cite:b6c5a60b47e8b45a6c5ec6bc8d5d6e4b3bcf1f12}}, {{cite:0233dc16b02d30190c79f1af7c5ab0fa24058749}}, {{cite:b1f7db52a6027cb5b35b6648e1d1ef7cc51c1a20}}, while counterfactual-based methods generate counterfactual samples by masking critical words or objects {{cite:e82d356bda082badf9166dc1df0c402a1ade685b}} or by using an external inpainting network to create a new subset of data {{cite:10e828658ba94756b798b193a8b676268f5ce906}}. Our model (GenB) is in line with the ensemble models listed. 
Following previous works {{cite:08fdba2e03ba5055c221b590f550d7455d605d0a}}, we do not compare to methods that change training distribution as these methods conflict with the original intent of VQA-CP (which is to test whether VQA models rely on prior training data {{cite:6e1618bb80fceb865bdafa40e67c09a2e74d0f2c}}) and are listed in the bottom of Table REF .
r
3bb64126d2aa69adbd08d48dab6e7c3d
We use randomly generated Ising Hamiltonians to evaluate the performance of the proposed QAGA and compare it with MQC, the state-of-the-art technique in the realm of QA {{cite:0f9cddd3d61e7d0db2bd72fab6f199d01f731bce}}. We employ three different types of benchmark problems: (a) coefficients drawn uniformly from {{formula:0a58ce35-4efb-413e-8ef3-83873ed467aa}} (binary coefficients); (b) coefficients drawn uniformly from {{formula:b070ca24-2784-44ca-b220-6a58a597492e}} (uniform coefficients); and (c) coefficients drawn from the standard normal distribution (normal coefficients). For our evaluations, we use a D-Wave 2000Q QA. Since randomly generated problems are not compatible with the working graph of QAs, we use the minor-embedding heuristic {{cite:dcc334bfe879f6d796f69ce70350ac7cb0133fd0}} to embed the arbitrary random graphs into the Chimera topology of the D-Wave 2000Q quantum processors. QAGA iteratively contracts variables whose uncertainties are negligible; the Ising Hamiltonian in QAGA therefore has a dynamic structure, and the remaining Ising Hamiltonian must be re-embedded into an executable QMI on the target QA in every iteration of QAGA.
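The variable-contraction step can be illustrated on a toy Ising Hamiltonian. This is a hypothetical sketch (not D-Wave's API or the authors' code) that fixes a spin whose value is considered certain and folds its couplings into the remaining local fields plus a constant energy offset.

```python
def contract_spin(h, J, i, s_i):
    """Fix spin i to the value s_i (+1 or -1) in an Ising Hamiltonian
    H = sum_k h[k]*s_k + sum_{k<l} J[(k,l)]*s_k*s_l, folding its
    contributions into the remaining fields and a constant offset,
    so the reduced problem has one fewer variable."""
    offset = h.get(i, 0.0) * s_i
    new_h = {k: v for k, v in h.items() if k != i}
    new_J = {}
    for (k, l), v in J.items():
        if k == i:
            new_h[l] = new_h.get(l, 0.0) + v * s_i   # J_il * s_i acts as a field on l
        elif l == i:
            new_h[k] = new_h.get(k, 0.0) + v * s_i   # J_ki * s_i acts as a field on k
        else:
            new_J[(k, l)] = v
    return new_h, new_J, offset
```

The reduced Hamiltonian plus the offset reproduces the original energy exactly whenever spin i indeed takes the fixed value, which is why contraction is safe for variables with negligible uncertainty.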
r
64ff4088c8f26522231d65282bbdef03
The results in this section show that neural networks equipped with edge detection neurons are more robust to color distortion than conventional CNNs, but are also less accurate in natural settings. The proposed models reach between 84% and 99% of the performance of conventional ones. On the one hand, this shows the usefulness of the extracted color-independent features. On the other hand, it raises the question of the applicability of the new models, to which we give a two-pronged answer. First, the i.i.d. datasets used in our experiments might contain spurious color cues that are accessible to regular CNNs only. It has been shown repeatedly that deep neural networks find shortcuts to solve a dataset without acquiring the intended capability {{cite:14f172cbf45df6aa6c5caf47c45ea86aa3f2fccc}}. A regular CNN might acquire color detectors (green, blue, red), as demonstrated in Figure REF , that are predictive of certain classes (frog/deer, bird/airplane/ship, car). Models with edge detection neurons are blocked from such features, but their superior performance on transformed images suggests that their true capability is higher than what conventional benchmarks expose. Further testing on natural images in “uncommon” settings (i.e. ImageNet-A {{cite:a9baebf3d6d5359f2eaffd448241b7cadbf013d0}}) will shed light on the true capability of the models. Second, it is possible that edge detection neurons require a different architecture in higher layers to reach the high levels of performance reported in the literature. CNN architectures with a standard first layer have been tuned on CIFAR-10 and ImageNet for many years; a new line of models will need further experimentation to reveal their best performance.
d
7b4fb3be7d64292abd22b22a2248b9ed
We tested our re-implementation of the model from {{cite:d53104c6215cbee175252653b4f23a9cb9441da2}} on the ReferIt dataset {{cite:b33581f131edf8bd6d6323a9f87dc7701874aaf9}}, and found that our re-implementation achieves the same mIOU as theirs.
r
eee7cb81b474738b9ada33a8915de74f
Regression-based text detection methods predict offsets from key elements and decode them into bounding boxes. Inspired by SSD{{cite:874d5ef9e1a4ccdeda34c62a44b93839bbb96b5c}}, methods utilizing pre-defined anchors (the key element) simplify the detection pipeline and are end-to-end trainable. Adding six text-box layers on top of SSD, Liao et al. propose Textboxes++{{cite:5afe15a21bc46c7b622fb2780b3debd94e5a8692}} to predict the offsets from pre-defined anchors with different aspect ratios and scales. Similarly, Wang et al.{{cite:185d3c930390820320157909ca431db609cc1eb4}} first design prior quadrilateral sliding windows for locating multi-oriented texts, which differ from horizontal sliding windows. Ma et al.{{cite:193bf1064f413db296ef7c127a5d64e4b8c580e0}} further add an angle parameter to the anchor strategy to generate rotating region proposals, which match text instances of arbitrary orientations. However, single-stage detectors produce more failure cases on cluttered backgrounds, which degrades text detection performance. Consequently, different refinement methodologies are adopted to optimize localization results. Similar to two-stage object detection methods, text detectors {{cite:193bf1064f413db296ef7c127a5d64e4b8c580e0}}, {{cite:ad335958caa176541820621439c219ad58819c7c}} extract text features from the proposals generated by an RPN-like mechanism and then adopt ROI pooling{{cite:db68a05153604149392f201fb3dbf67dca2b1f3f}} or RoIAlign{{cite:5def5179a93351817de3f06ca4f2716d55dd0f07}} to obtain fixed-scale feature maps. Branches for box classification and box regression are finally utilized to correct the localization results of each proposal. On the basis of two-stage detectors, Zhang et al.{{cite:5a7fecb391dbc1b75be03f698b1bba95ff26f847}} and Yang et al.{{cite:131ee0d477a7938f6cb77f5660161baae1cd4e5b}} propose iterative refinement modules to correct text detections and improve localization precision. 
Moreover, Zhou et al.{{cite:b80002efffd6ef68dd821529ff3ac2bad25f6391}} adopt an anchor-free strategy to realize geometry-aware localization: boxes are generated by predicting the distances from the current pixel (the key element) to the edges of the minimal bounding rectangle of its text instance, and the score map is then combined with these geometries to detect arbitrarily oriented text.
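To make the anchor-free geometry concrete, here is a minimal sketch of decoding a box from per-pixel edge distances in the axis-aligned case (the rotated case adds an angle channel); the function name and the axis-aligned simplification are ours, not taken from the cited work.

```python
def decode_axis_aligned(px, py, d_top, d_right, d_bottom, d_left):
    """Decode an axis-aligned text box from the four predicted distances
    between the current pixel (px, py) and the box edges."""
    x_min = px - d_left
    y_min = py - d_top
    x_max = px + d_right
    y_max = py + d_bottom
    return x_min, y_min, x_max, y_max
```

In a full detector this decoding runs at every pixel whose score-map value passes a threshold, followed by non-maximum suppression over the decoded boxes.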
m
eb15bd0659f177be67a38bed73a6e9cf
We assume the basic notation and terminology on graphs and matroids (see, e.g., {{cite:4f3ba31b8bc367bbc03689b0adb4ad3c8541421f}}).
r
7900ad74078c27b026ace84fb13ce15c
It is believed that backpropagation is biologically implausible to perform in the brain {{cite:944e07f5471170e943043149f47f3d3005a00d4c}}. Feedback alignment {{cite:944e07f5471170e943043149f47f3d3005a00d4c}}, {{cite:6831bc6f634de7f496e5d944da447eec22a72c3f}} is an alternative method that uses fixed random matrices instead of transposed weight matrices to propagate the error. Direct feedback alignment {{cite:8e19ae8758c9c2a088c5cc50a15897761a6cd697}}, {{cite:e673895527e477b036918c33723c0d70fa471754}}, on the other hand, propagates the error from the last layer directly to each layer through a random matrix.
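A minimal numpy sketch of the idea for a hypothetical two-layer network (the sizes, learning rate, and initialization are illustrative): the only change from backpropagation is that the error is sent backwards through a fixed random matrix `B2` instead of `W2.T`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network; all sizes are illustrative.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B2 = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback matrix

def fa_step(x, y, lr=0.1):
    """One training step using feedback alignment instead of backprop."""
    global W1, W2
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                      # output error
    # Backprop would use W2.T @ e; feedback alignment uses B2 @ e.
    delta_h = (B2 @ e) * (1 - h**2)    # tanh derivative
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return float(0.5 * e @ e)
```

Despite the feedback weights being random and never updated, the loss still decreases in practice because the forward weights gradually "align" with the feedback matrix.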
i
3774c454b919a5aa2cfe924b1f05c3cd
In this work we have developed a set of techniques that significantly improve the convergence of classically assisted quantum compilation algorithms. As with the local cost functions considered in {{cite:73ce5e723f97b1ff418e10d75850ca637291203d}}, {{cite:413fd153883aa1a7cb89601c19d79bb10cc58f10}}, we have demonstrated in practice that our truncated local cost functions and their gradients (see e.g. equations (REF ) and (REF )) can be feasibly calculated on a classical computer while also being powerful enough to escape the barren plateaus that appear in both approximate state preparation and full circuit approximation - see Figures REF and REF . As discussed in the introduction, there are a number of circumstances where this classically assisted approach can be useful. One might for example be interested in compiling sub-circuits that appear in a given quantum algorithm, or alternatively one might be interested in finding the most efficient way to generate a weakly entangled state on a quantum computer {{cite:8bf59932f7e14b0513d327cdcc29579a9e32fe56}}; one can then use this state, generated by a shallow circuit, as the input for a quantum algorithm {{cite:7e3fa95e524fda0d9f033fcd45346b67baa5f3db}}. In the latter case, one can use Tensor Networks to represent the state classically and thus apply the techniques developed here to large numbers of qubits - see our upcoming work.
d
e8f1f3414f4a720f9b56b12bf1d2172f
Here the initial and boundary value data for {{formula:56f41ee9-643a-4d19-8e01-cb2042da3edf}} are denoted by {{formula:36e6970b-54d4-4e4c-9d6a-6d18ca6d8ef7}} and {{formula:d36fabd7-1b11-4bd3-8673-81ca410c63e1}} . Similarly, the collocation points for {{formula:a31a069d-1d79-4020-a258-ebbce149556d}} and {{formula:56a7cb52-478f-4933-b6dc-88c28b26aeeb}} are specified as {{formula:b40d4888-0f26-431d-9f08-273a914d1a67}} and {{formula:f77a72d7-27ae-4ef7-9396-ae654bf2df81}} , which are sampled using the classical Latin hypercube sampling (LHS) technique {{cite:3f83f664470f214746b4e4bb5d939b02a362d596}}. The loss function (REF ) comprises terms for the initial-boundary data and for the residuals imposed by Eqs. (REF ) and (REF ) at a finite set of sampled collocation points. Specifically, the first and second terms on the right-hand side of Eq. (REF ) fit the solution data, while the third and fourth terms learn to satisfy the residuals {{formula:cd595a75-8c5e-44af-9470-8a2cb0b9af0f}} and {{formula:9e8f089b-f19d-4b89-b7cf-dddc416d8912}} . The convergence of the loss function has been analyzed in previous works {{cite:e07ce7c12541c0c34fac16801dae867f53d19b56}}.
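As an illustration of the sampling step, here is a small sketch using SciPy's quasi-Monte Carlo module; the 2-D domain bounds and point count below are placeholders, not those of the cited experiments.

```python
import numpy as np
from scipy.stats import qmc

def sample_collocation(n, lo, hi, seed=0):
    """Sample n collocation points in the box [lo, hi] via Latin hypercube
    sampling: each coordinate axis is stratified into n equal bins, with
    exactly one point per bin."""
    sampler = qmc.LatinHypercube(d=len(lo), seed=seed)
    unit = sampler.random(n)          # points in the unit hypercube
    return qmc.scale(unit, lo, hi)    # rescale to the physical domain

# Illustrative space-time domain x in [-1, 1], t in [0, 5].
pts = sample_collocation(100, lo=[-1.0, 0.0], hi=[1.0, 5.0])
```

The stratification gives more even coverage of the domain than i.i.d. uniform sampling at the same budget, which is why LHS is the common default for PINN collocation points.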
m
7c7303035947a06c17244c941ec39d0d
We investigated several state-of-the-art approaches and found that FashionNet {{cite:d126b7fb3e05c02e826cc7ed368c851b36774988}} and StyleNet {{cite:afc548aeace20d370cace2f5b011a566d0f20f48}} solve different fashion retrieval problems and are not suitable for attribute manipulation. We therefore compare the performance of FashionSearchNet-v2 with its earlier version {{cite:e7c00969d907f0422316d22ca328360e7a4268cb}} and with AMNet {{cite:12e7c50eac2bdf85eee6d7ced22a9024f61efc97}}.
m
a44ff5dbaedf6ce0cfb150946cae0907
Nonetheless, our framework is interpretable and provides a natural approach for quantifying differences in strength of association by sub-population in general settings. Though we only considered measures of marginal association in our examples, our method can be used with conditional measures of association as well; we only require that asymptotically normal estimates are available. In particular, our approach remains valid in the high-dimensional setting, where asymptotically normal estimates can be obtained using the techniques of, e.g., {{cite:eca2ee08867d4859cccc86c667b122d1d4e06c1f}} and {{cite:d7ccc84acd062bad3a2e4dcb9a8c8ecc7862c107}}. Finally, our method has good utility in the analysis of genomics data, as we demonstrated in our real data example.
d
3fb43aa4205dd958b357b4795a9d9bd1
Determining the evolution of the early Universe before the Big-Bang Nucleosynthesis (BBN) era is not an easy task. Although observational data regarding the Cosmic Microwave Background (CMB) may constrain very early stages of the Universe's evolution such as cosmic inflation, very little is known about the post-inflationary phase of our Universe's history. Indeed, the thermalization of Standard Model (SM) particles at early times is expected to erase all traces of the initial conditions which led to the hot big bang era we are familiar with. The discovery of Gravitational Waves (GWs) {{cite:8ec2817103837515c4990c4d85bac8ae94293baf}}, {{cite:fcded3f7d0da40c4109e45ad6dc3d72f94036874}} opened the door to new ways of probing the evolution of the early Universe, either through the detection of recent astrophysical events or through the measurement of a stochastic background of GWs produced in the primordial Universe (see, e.g., Ref. {{cite:9b15f74402f4deb4be4f99dffde8341f00c7187d}} and references therein for a review). Historically, however, measurements of the number of effective relativistic degrees of freedom {{formula:3b3c3660-abe8-4fea-9061-aaa6c04fdbd5}} have provided important information regarding the Universe before BBN, for example by confirming the description of neutrino decoupling. In the realm of physics beyond the SM, measurements of {{formula:bb8a2262-08ee-4d2d-bbe9-d833d9498d70}} provide vital constraints on models with light particles. Many different models predict a deviation of {{formula:9845b27c-4ef0-4a7c-998f-5124741bbc06}} from the minimal situation where only neutrinos and photons dominate the SM radiation energy density, see e.g. Refs. 
{{cite:5a859d567c995813af76fa4762678a47c764aba3}}, {{cite:a38bb70b4c1716c87773d602acaf966523c9d5f7}}, {{cite:0076f1e64ca668dd227f81dec46aa41591eb8185}}, {{cite:d3acfdf76993c8b207e63b8657537e76737a7994}}, {{cite:33b09a23d284fa405a29ca62b638fb44682cec0d}}, {{cite:b8d2311b6d8aa3f63159bcab55e1d0e271bccfe7}}, {{cite:1846ad6cd64cad30fcf7d980f80efd12479890d2}}, {{cite:afb80a70a0e43f559fcda3117eab50c82663e20d}}.
i
d762aca754824a5d7b3223e36d116c81
The kernel trick provides an elegant and natural technique to extend linear models to non-linear models with a great representation power. In the past decade, numerous works have studied bandit and reinforcement learning problems under the assumption that the reward function conforms to a kernel-based model {{cite:65c5702060913dc3f32f013fa26aeebfa4dbb65c}}, {{cite:240d7b2555c0f8d766228783c8f10eb4dbc41017}}, {{cite:487f8dfc04161587dc48a31e2eca61a500539950}}, {{cite:a8bf45ce773cdce712d0adcaf01e58c5d3bc0d9e}}, {{cite:8bbbee3467f7ef319215edd292652e477aaae37f}}, {{cite:103765a6f842e41e04fe8c2922532353ec54a07a}}, {{cite:146c6db9cf9368ca7edaee6d72fce87599555523}}, {{cite:f1c7c7e6f0126767b05a438e1bad10356b970b29}}, {{cite:92cbd5e82f6efb1412c16323dfe28b5c3fd67426}}, {{cite:e11e5f988f8a99f2faf46d5b1baf77463ac6ace5}}, {{cite:35e554503bc697ece573508c3b25ff34f0d59738}}, {{cite:3a0ccf2687081cf9303c8003c84e71b09ea8b3d4}}, {{cite:0fa3a44656e72f2c599382bdc3c8b47a472d1468}}, {{cite:93feee3bcd201a36bfb3f3523cd9b5f29de50705}}, {{cite:75a5855bc773e9979ce4237de2b72b24df194eed}}, {{cite:07071b18df906ca84a88309fcba101dc8107c62d}}, {{cite:e13e26bd0df91d737aa883ccb1bf88727632acdc}}, {{cite:6eeba20cc3556097d95b7cabcf5f7d122ad15d5b}}.
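To ground the kernel-based modeling assumption, the sketch below computes the standard kernel ridge posterior mean and GP-style variance on which such algorithms typically build their acquisition rules; the RBF kernel and its hyperparameters are illustrative, and this is not the specific estimator of any one cited work.

```python
import numpy as np

def rbf(X, Z, ls=0.5):
    """RBF kernel matrix: k(x, z) = exp(-||x - z||^2 / (2 ls^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

def posterior(X, y, Xq, lam=1e-3, ls=0.5):
    """Kernel ridge mean and GP-style variance at query points Xq."""
    K = rbf(X, X, ls) + lam * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    Kq = rbf(Xq, X, ls)
    mu = Kq @ alpha
    # var_i = k(xq_i, xq_i) - k_i^T (K + lam I)^{-1} k_i
    var = rbf(Xq, Xq, ls).diagonal() - np.einsum(
        "ij,ij->i", Kq, np.linalg.solve(K, Kq.T).T
    )
    return mu, var
```

An optimistic bandit algorithm would then pick the action maximizing `mu + beta * sqrt(var)`; the variance shrinks near observed points and reverts to the kernel's prior scale far from them.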
i
eb82b5d74561e6e0e809efba29fa6366
To the best of our knowledge, existing causal methods for recommendation aim at mitigating the effects of different biases rather than improving interpretability as in our work. Most existing works claim that the observational rating data suffer from selection bias {{cite:df593236c1018b2719b30478976f303bf8816c91}}, {{cite:2e565da504936891c53606bab51b4cb369d71fa2}}, exposure bias {{cite:d0b902e1c185c13f8f08c5d13fdb375797c508a7}}, {{cite:9f48a1e8547d186adc54dac9110dbd76079ae4d7}}, {{cite:4ef13cf0014f9307ca1d144d038dbfa33b26d724}}, {{cite:04445321374203c0f49521071bcbd2c526b1a900}} or popularity bias {{cite:8372185dc2fb9edaaed307414ec437143498c7f9}}, {{cite:0058282c8941252bce2c3e621ceae0783ca60925}}, {{cite:2255ff5fde11b67f501bf68a633fda6d270ae473}}. Following this paradigm, dominant approaches adopt two main strategies, propensity scoring {{cite:d0b902e1c185c13f8f08c5d13fdb375797c508a7}}, {{cite:9f48a1e8547d186adc54dac9110dbd76079ae4d7}}, {{cite:ac2ec1a304230172795074ed52a7d9a1ccc67e8d}} or causal embedding {{cite:4ef13cf0014f9307ca1d144d038dbfa33b26d724}}, {{cite:df593236c1018b2719b30478976f303bf8816c91}}, {{cite:04445321374203c0f49521071bcbd2c526b1a900}}, to disentangle user interests from different types of bias. For instance, the method in {{cite:9f48a1e8547d186adc54dac9110dbd76079ae4d7}} uses propensity scores to re-weight the observational click data, with the aim of imitating a scenario in which items are randomly exposed, thereby alleviating the exposure bias. The work in {{cite:4ef13cf0014f9307ca1d144d038dbfa33b26d724}} learns uniform, unbiased embeddings from partially observed user-item interactions via a deconfounded model. More recently, the work in {{cite:df593236c1018b2719b30478976f303bf8816c91}} resorts to balance learning with a Middle-point Distance Minimization (MPDM) strategy to learn causal embeddings that are free from selection bias. 
To address the user conformity issue in recommendation, {{cite:8372185dc2fb9edaaed307414ec437143498c7f9}} relates it to popularity bias and proposes to alleviate the popularity bias by learning disentangled embeddings of user interest. A few state-of-the-art works {{cite:2255ff5fde11b67f501bf68a633fda6d270ae473}}, {{cite:2e565da504936891c53606bab51b4cb369d71fa2}}, {{cite:0058282c8941252bce2c3e621ceae0783ca60925}} inspect the cause-effect mechanism of bias generation and design specific causal graphs attributing the exposure bias to a confounder. For example, Li et al. {{cite:2e565da504936891c53606bab51b4cb369d71fa2}} prove that the social network is a confounder affecting both the user's rating and the policy for exposing items to the user.
m
af45ad2d134ed1714adfbf613ed5b68b
Nevertheless, we also note that Chang et al. in {{cite:4ff02db230760d3c7063594ba33f51a81b54a844}} defined a controlled Hamiltonian (CH) system by using the almost Poisson tensor, and studied the reduction of CH systems with symmetry in {{cite:b2c8827460b3acc1ed0e69aa9c5b6cee2e27180a}}. Unfortunately, there is a serious lack of rigor in this work: the CH systems and reduced CH systems given in {{cite:4ff02db230760d3c7063594ba33f51a81b54a844}}, {{cite:b2c8827460b3acc1ed0e69aa9c5b6cee2e27180a}} do not specify the spaces on which these systems are defined; see Definition 3.1 in {{cite:4ff02db230760d3c7063594ba33f51a81b54a844}} and Definitions 3.1 and 3.3 in {{cite:b2c8827460b3acc1ed0e69aa9c5b6cee2e27180a}}. Thus, it is impossible to give the actions of a Lie group on the phase spaces of the systems and their momentum maps, and likewise impossible to determine the reduced spaces of the CH systems; moreover, it is not the case that all CH systems in {{cite:b2c8827460b3acc1ed0e69aa9c5b6cee2e27180a}} have the same space {{formula:4548095f-fad4-4b78-a1fc-5923bf08d8f6}} , the same action of the Lie group {{formula:71812423-7062-43fa-a01f-e474468f396e}} , and the same reduced space {{formula:46b06469-fb4a-4fea-ba7b-b3c3eff13003}} . For example, consider the cotangent bundle {{formula:341e4b54-18ec-4dd7-9280-0d40129d928c}} of a smooth manifold {{formula:1bcfe4d5-2cf6-456f-9e2b-ced0211f225b}} with a free and proper action of a Lie group {{formula:971fff7a-a45f-4eeb-b225-5d92c573328d}} , where the Poisson tensor {{formula:1d25490c-ea54-4117-8ef8-8651d07eda3e}} on {{formula:5751d806-ef25-41ca-9477-7779936a06f5}} is determined by the canonical symplectic form {{formula:08314475-b6c7-4291-ad12-626032b2f69f}} on {{formula:4b942b03-7086-40c5-be49-45290e0f3192}} . 
Then there is an {{formula:4bf4df05-3c47-417c-b671-667539438e14}} -equivariant momentum map {{formula:e6151368-5d63-4935-a78e-3ac87b73773c}} for the symplectic, free and proper cotangent lifted {{formula:602ac844-ce88-4915-b6cd-322035d466a6}} -action, where {{formula:2013725b-1111-4190-a39c-17514dd9f4f6}} is the dual of Lie algebra {{formula:29604997-c858-4dd0-b068-a5216711d145}} of {{formula:8dcf7198-6968-41ed-862c-615ade36e022}} . For {{formula:47195c53-09dd-4c4b-a67a-44b43f9114b3}} , a regular value of {{formula:7fd34adf-56d9-4241-af91-658b7c835755}} , from Abraham and Marsden {{cite:1073fc87981400022c79fa19c00ad4f307d779f4}}, we know that the regular point reduced space {{formula:4c75cb2c-2be1-4477-975c-fd60b895d4d7}} and regular orbit reduced space {{formula:71250682-2f78-461e-bdb6-0bcc5d648129}} at {{formula:366911f4-529c-4f7f-a54c-779e68872b37}} are not {{formula:b313865f-5575-4b0d-9914-ef1c07f444aa}} , and the two reduced spaces are determined by the momentum map {{formula:cc53b7ff-28ee-4942-9b22-671e49d1bd33}} , where {{formula:fd220a89-a810-4c03-830a-f0b20ab90cc5}} is the isotropy subgroup of the coadjoint {{formula:d951130e-014e-4fed-8431-d623b3e7e17a}} -action at the point {{formula:4a668e9b-7807-4a04-9d55-5184ff41f11f}} , and {{formula:40680a1c-4bf8-4c7d-a8ab-3c14d180a51f}} is the orbit of the coadjoint {{formula:d299798d-2685-4680-9df8-2ca3ed22b408}} -action through the point {{formula:2c8a382a-dfd4-4286-a523-49b1eaaf46bc}} . Thus, in the two cases, it is impossible to determine the reduced CH systems by using the method given in Chang et al {{cite:b2c8827460b3acc1ed0e69aa9c5b6cee2e27180a}}.
i
cbbb673d3545950a55acd5153e94af37
In our experiments we generate sensitivity attacks using FGSM {{cite:733ec6bbd0eb901624e4707adc0dbf2c61f49569}} and invariance attacks using the method described in {{cite:05bb1de62ada8c9258d189848b29b48f89938a2b}}. We generate a single sensitivity attack and a single invariance attack for each sample in the MNIST dataset.
d
6855bc810719d3bd9efc847f0187fc6b
An alternative form of feedback is single trajectory judgment (a.k.a. evaluative feedback {{cite:8e2047e9c54658a46ab0de06abf1d0554f438e1a}}) wherein humans watch agents behave and provide a scalar signal representing their judgement of how good or bad that behaviour was. This type of feedback is usually given on state-action pairs (as opposed to trajectories), requiring more human effort. The main challenge with this feedback is how to interpret it in the context of RL. Previous work has treated evaluative feedback as reward {{cite:ce9f20da8dbd8bb17ceca9a6d612e7cff7d29cfc}}, value {{cite:84baee4ab43ce35baee584bf76ef56ad5038b1f6}} or advantage {{cite:677def58a3421c6f192943468368700e8b58f4b2}}. As opposed to learning from preferences, scalar feedback in single trajectories is subject to arbitrary scale choices, reward shaping and/or reward engineering {{cite:8e2047e9c54658a46ab0de06abf1d0554f438e1a}}.
d
fb3ece57c404c4a6c9919bf6cdad4eab
It is one of the fundamental properties of quantum mechanics that the evolution of quantum states is described by linear maps which are completely positive and trace preserving (CPTP), stemming from the unitary dynamics enforced on a larger Hilbert space {{cite:b60bf5dfa27b1490582b886f90ed7ba426bc68b7}}. However, in several different settings of practical importance, various applications of quantum dynamics which are not CPTP can be encountered. This motivates the study of such transformations, and in particular a precise understanding of how they can be compared with and approximated by physical quantum channels.
i
3782318d17bad01ef33ff3eb34e63a76
Given an input and a neural network with fixed parameters, the goal of an input attribution method is to identify the contribution of each element of the input to a specific target output neuron in the network, e.g., the output neuron corresponding to the correct class in an image recognition network. The contributions are commonly gathered into an array with the same shape as the input and visualized in the form of heatmaps or saliency maps. Similarly, saliency methods {{cite:dc0c85af28b089c3ecf441769aac06f323d70ed6}}, {{cite:faa0c2db1f799fd582d7ec35284c63ebfff21f32}}, {{cite:c37333ac00492b1d68770e95b66e635d3242ab69}}, {{cite:99b061a9d01555f187cc14345f901ed8f19c5294}} and networks with attention mechanisms {{cite:9a052901741b591f7ff268314c53a0a473c3956f}} can also produce heatmap-like results, but those heatmaps have different meanings and goals. Saliency methods aim to localize human-centred salient input regions. An attention mechanism is embedded in a network to enhance its performance by assigning self-learned weights (usually visualized as a heatmap) to different parts of a feature map. In contrast, attribution methods are applied to a pre-trained network with fixed parameters and explain the network by indicating the contributions of inputs with heatmaps.
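As a concrete toy instance, gradient x input is one of the simplest attribution rules: multiply the gradient of the target output neuron with respect to the input by the input itself, yielding contributions of the same shape as the input. The tiny random-weight ReLU network below is purely illustrative, not a model from any cited work.

```python
import numpy as np

# Tiny fixed one-hidden-layer ReLU network; the weights are illustrative
# stand-ins for a pre-trained model with frozen parameters.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 8))
W2 = rng.normal(size=(3, 5))

def grad_times_input(x, target):
    """Gradient x input attribution for output neuron `target`."""
    h_pre = W1 @ x
    # d out[target] / d x = W1^T (W2[target] * relu'(W1 x))
    grad = W1.T @ (W2[target] * (h_pre > 0))
    return grad * x            # same shape as the input
```

For bias-free ReLU networks like this one, the attributions satisfy a completeness property: they sum exactly to the target output value, since such a network is piecewise linear through the origin.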
m
7350596cd8cfc5fa7d488b62bb4e4606
Let us show how to apply our approach to the construction of m-dissipative Maxwell operators associated with the impedance conditions of Examples REF -REF . For {{formula:b8bd60ce-a91d-4fed-b92b-1aa3ec3c5fc2}} and for {{formula:bf0ccf88-a467-4e27-9ac5-cf7eb59a5ad7}} , all m-dissipative extensions are theoretically described by Theorem REF . However, applied modeling requires concrete m-dissipative operators. Operator-theoretic results {{cite:483b67d2838645162868ae0d332e2e90f98d50d5}}, {{cite:0597632161ff0fa62a8e6015d9521d68368c9fa2}} allow us to construct concrete m-dissipative Maxwell operators using either the Friedrichs extension {{formula:a8334718-a115-4246-a0ce-14764a842428}} or the Krein-von Neumann extension {{formula:c7531b4c-caa9-4075-a257-549e8fedf46f}} of the nonnegative boundary operator {{formula:8dda1e8f-d740-4fad-80df-0c44b27e973f}} in {{formula:ed79f994-019d-4cc8-86ca-e37d56ba066c}} .
d
65e3087aa22c127632b26200bca898a2
for some parameter {{formula:119eca95-5adf-4eca-94a9-37f7539263cf}} and modify our error term {{formula:d9454745-8dc5-4996-9729-fe3c2ca14d9c}} to {{formula:66f6cba0-2cad-4e3c-9ba5-a1dd616ea903}} . This is, for instance, the path chosen when {{formula:c48bec04-aa7e-4cb4-8de5-2c0088c0d344}} in Theorem 1.1 of the book {{cite:742480a852211f141429a9de245066bf471803c4}} by H. Iwaniec and E. Kowalski. We did not try to optimize the power of {{formula:afce262b-7bed-4edb-b0cb-8a668d7bfaf9}} that appears. It is likely that no such term should be present in fact, but in practice, when our assumption holds for {{formula:db0b0ce2-8c4f-4d1f-bd3a-588aa4fa2144}} , it holds for any {{formula:f271d714-b291-4b0c-8e37-f65bf1b93e6c}} . Using the result for {{formula:ebb6a50f-9276-4ea7-b2a7-de07c3904a3b}} removes this parasitic factor.
i
7e599daed03bb7c1f2f560f4e8b18491
We observe that {{formula:64984576-5fc8-405d-a4b5-f960a0dc2fb8}} is constant w.r.t. {{formula:b9d6918c-1226-4151-b20d-1f1b3a50e318}} ; thus Eq. (REF ) is simply a linear combination, and its intersection with the hyperplane {{formula:72462021-c900-451a-8c98-f7bd30ace1a9}} results in a twice-differentiable convex optimization problem, which we solve using Sequential Least Squares Quadratic Programming (SLSQP) {{cite:bcc7886e4e71a64f16e1e2dc8966d295940b4b7a}}.
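A sketch of this step with SciPy's SLSQP solver, using a stand-in quadratic objective and the hyperplane `sum(x) = 1`; the actual objective of Eq. (REF ) and its constraint would replace these placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative target vector; the combination coefficients of Eq. (REF)
# would appear here instead.
c = np.array([0.2, 0.8, 0.5])

res = minimize(
    lambda x: np.sum((x - c) ** 2),   # smooth, twice-differentiable convex objective
    x0=np.full(3, 1 / 3),
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}],
)
```

For this quadratic the solution is the Euclidean projection of `c` onto the hyperplane, which SLSQP recovers to solver tolerance.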
m
2307a44469a22fc4bcd0f48bf190d5fb
There are three types of bias mitigation techniques: pre-processing, in-processing, and post-processing {{cite:ad5abb45ae2306bfa89843cf3ca62271a03ce93b}}. Pre-processing techniques mitigate bias by removing the underlying discrimination from the dataset. In-processing techniques are modifications to the machine learning algorithms that mitigate bias during model training. Post-processing techniques seek to mitigate bias by equalizing the odds post-training. We use two methods for bias mitigation. As a pre-processing method, we use the reweighing technique of {{cite:3e133fd7abad9d9b602147e0721b14c5914a176f}}, and re-train our classifiers on the reweighed dataset. As an in-processing method, we add a discrimination-aware regularization term to the learning objective of the logistic regression model. This is called a prejudice remover. We set the fairness penalty parameter eta to 25, which is high enough that prejudice will be removed aggressively, while not so high that accuracy will be significantly compromised {{cite:a68a044f1e7e61d50e4e16d7b38fee5dae66cd1b}}. Both of these techniques are seamlessly implemented in AI Fairness 360. To apply post-processing techniques in practice, one needs a training set and a test set; once the model is trained, the test set is used to determine how outputs should be modified in order to limit bias. However, in clinical applications datasets tend to be small, so we envision a realistic scenario in which the entire dataset is used for development, making the use of post-processing methods impossible. For this reason, we do not study these methods further. The workflow of data, models and bias mitigation techniques is shown in Figure REF . {{figure:f3f1ac23-5b6f-4fca-a27f-9c10354ea790}}
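For reference, the reweighing technique assigns each (group, label) cell the weight P(A=a)P(Y=y)/P(A=a, Y=y), which makes the protected attribute and the label statistically independent under the reweighed distribution. Below is a minimal numpy sketch of that underlying computation (in practice we use the AI Fairness 360 implementation).

```python
import numpy as np

def reweighing_weights(a, y):
    """Per-sample weights w(a, y) = P(A=a) P(Y=y) / P(A=a, Y=y),
    following the reweighing scheme of Kamiran & Calders."""
    a, y = np.asarray(a), np.asarray(y)
    w = np.empty(len(a), dtype=float)
    for av in np.unique(a):
        for yv in np.unique(y):
            cell = (a == av) & (y == yv)
            if cell.any():
                w[cell] = ((a == av).mean() * (y == yv).mean()) / cell.mean()
    return w
```

Under-represented (group, label) combinations receive weights above 1 and over-represented ones below 1, so the weighted label distribution is identical across groups.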
m
16dc2e0dc4e01056c9220d0b548d1cc3
The performance improvement from joint training on mediastinal and abdominal lymph nodes shows that it is beneficial for a CNN to have larger, more varied and comprehensive datasets (which is consistent with the computer vision literature {{cite:243d074ee175d19a27c27d7c8d5ecd3dbd6c3765}}). A companion approach {{cite:4e43a36e6c034e3af7cf757dc47f0881c68a5da4}} exploits an alternative shallow hierarchy for LN classification, using view-level classification score aggregation by another classifier. While they show that this is helpful for achieving better FROC curves in their scheme, we find that the same sparsely weighted fusion via learning does not improve over the simple average of Eq. REF . This probably indicates the high quality of our deep CNN predictions and shows our approach to be very effective and efficient. Future work will investigate more sophisticated methods of label fusion from the CNNs. The proposed 2.5D generalization of CNNs shows promise for a variety of applications in computer-aided detection in 3D medical images. In future work, the 2D views with the highest probability of being a LN could be used to present reformatted visualizations at that orientation (optimal to the CNN) to assist radiologists' reading.
d
ac957b4b01a838fa5f6de7979e385bee
In all cases we pre-train the classifier on the training set to predict the letter classes, and perform model selection with the validation set. Following the methodology of Cohen {{cite:ce37593977ead8583da4e590c05a239c00fbdad9}} and Jayaraman {{cite:6d10bebeb547005c32fb276e5e52657473b7f413}}, we perform pre-training using 26 classes (combining upper and lower case examples). The dataset provided by Jayaraman {{cite:6d10bebeb547005c32fb276e5e52657473b7f413}} includes randomness in the fillet size, and extrusion depth and angle. For the held-out test set used in all our evaluations, we regenerate the letters to remove sources of randomness (extrusion angle/amount and fillet size) within font classes, hence strengthening the style labels. For further detail, see app:generation. {{table:1eee20ac-15f0-4cd5-94de-a1490666db68}}
r
5a89ec2d439c704cd02d78b87feea37a
{{formula:10ef1759-5099-4a56-993b-0dea456a86d1}}   //Find crowding distances of individuals in population set {{formula:c2e6b9ef-8813-4961-91ff-c89b483af891}} {{cite:8a87a5733554b3308bca59ba83f0c122f490e1a0}}.
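For reference, a minimal numpy implementation of the NSGA-II crowding distance invoked in the step above: boundary individuals receive infinite distance so they are always preferred, while interior individuals accumulate the normalized side lengths of the surrounding cuboid per objective.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance for objective vectors F (n_points x n_objectives),
    as in NSGA-II."""
    F = np.asarray(F, dtype=float)
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf   # boundary points
        span = F[order[-1], j] - F[order[0], j]
        if span > 0:
            # Distance between each interior point's two neighbors,
            # normalized by the objective's range.
            d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d
```

During selection, individuals of equal non-domination rank are compared by this distance, favoring less crowded regions of the front.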
m
f397acb271bb8ca9e3dbfa5c08c61df2
A range of work in artificial intelligence has studied natural language generation in conversational dialog {{cite:c9d683cb29a2ab8ee66ba14e5d4eeddcacbd80f5}}, {{cite:ab2b5de9f426e6c5477ab1aae40fc9a019dc3850}}, {{cite:2d02dae2f30bc28e6f5ad424d8facd3b77ddca7f}}. Dialogs provide rich scenarios for understanding social interaction and real-world knowledge. But within a dialog, the question often arises: What if a character had said something else? Such counterfactuals can be powerful tools to learn from, and children naturally explore such possibilities by asking “what if” questions {{cite:bb7050df9afa5e3fcae8574358c2a58b2535c225}}, {{cite:38b8e3b7bff9e211d0be5e8deb590980bf3e6a7e}}.
i
2e5fbb5205746892d425c77a31d8e50f
In this experiment, our goal is to build a few-shot classification model that works best in real-world perception systems. So we train CTX+SimCLR {{cite:0e9dd2e7f08ff6bcdaf67f8de65dc570267a1bd9}} with all real and synthetic data from the FewSOL dataset and then test the trained model in our lab. Cluttered support sets are used for training; this setup is selected due to the superior performance of CTX+SimCLR {{cite:0e9dd2e7f08ff6bcdaf67f8de65dc570267a1bd9}} with cluttered-support-set training in the joint object segmentation and few-shot classification experiment. Figure REF shows the Fetch mobile manipulator facing objects from Set-1 (Figure REF ) on a table. RGB-D images are collected from the Fetch mobile manipulator, and {{cite:59d2e571753b8ef73d598ac725cd5c51fb20f3e7}} is used for object segmentation. We tested on 32 objects, with 4 objects per image (see Figure REF ).
r
51e9a890c8a0018d7e5c22346c52621a
Simple methods. In this category, the initial match graph is first divided into sub-clusters with compact edge connections and small image size, and cluster merging is then achieved by using common camera poses or 3D points between sub-models {{cite:089a83cfbdeabf8b3f4c49fcb12726aabef4d6df}}, {{cite:1de152c6781026b282cb89426f8bf4e1d5675f66}}, {{cite:f1dd10a54f20b3d1a52dd5afa486929261e7fd68}}, {{cite:92afbefc147b7bd197a2c6783ca6c19cce55259e}}, {{cite:40e44b7efebd05d9b7a3a355a7a3e408d4dcbb4e}}. In the work of {{cite:089a83cfbdeabf8b3f4c49fcb12726aabef4d6df}}, the initial match graph is cut into small clusters through the normalized-cut (NC) algorithm {{cite:0625d947006212297ceea87882837ed191bc425f}}, in which edge weights are assigned the similarity scores of vocabulary-tree based image retrieval. After the image orientation of each component, the entire scene is created by merging sub-models through their epipolar relationships. In the work of {{cite:40e44b7efebd05d9b7a3a355a7a3e408d4dcbb4e}}, two constraints, termed the size constraint and the completeness constraint, are used to divide the match graph into clusters with the desired number of images in each cluster and sufficient image overlap between clusters. Instead of the NC algorithm for scene clustering, {{cite:f1dd10a54f20b3d1a52dd5afa486929261e7fd68}} proposed using the matrix band reduction (MBR) algorithm because it can generate clusters of equal size and compact structure. Considering that the order of cluster merging affects the completeness and accuracy of the final model, {{cite:1de152c6781026b282cb89426f8bf4e1d5675f66}} proposed the DagSfM algorithm, which incorporates the image graph and the cluster graph into the steps of scene clustering and cluster merging, respectively. 
To improve the robustness of scene clustering and merging, {{cite:92afbefc147b7bd197a2c6783ca6c19cce55259e}} developed an automatic and dynamic strategy to split match graphs and merge sub-models, as well as evaluation metrics to search for unreliable models.
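A simplified sketch of the two-way normalized-cut step on a weighted match graph: split by the sign of the Fiedler vector of the symmetric normalized Laplacian. Real pipelines apply this recursively with retrieval similarity scores as edge weights; the function below only shows a single bipartition.

```python
import numpy as np

def spectral_bipartition(W):
    """Split a connected weighted graph (adjacency matrix W) into two
    clusters via the sign of the Fiedler vector of the symmetric
    normalized Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # second-smallest eigenvector
    return (fiedler >= 0).astype(int)
```

On a match graph with two densely connected image groups joined by a weak bridge, the split falls on the bridge, which is exactly the behavior the clustering step relies on.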
m
1f0b54c68d1d10305be11d6ad7945f3a
In this section, we first briefly review the key concept of DenseNet {{cite:1bdedda0d73265e17f50392b22bf361916a65ba0}} to deal with the degradation problem in the classification task. Then, we propose a novel network architecture that extends the DenseNet to volumetric segmentation.
m
cc10ac54ffe15dce3f71447324231bc6
We converted the absolute magnitudes of our M31 GC sample to photometric masses using the appropriate age-dependent mass-to-light ratios provided by the BC03 and galev SSP models. The GC mass versus age diagram is shown in Figure 9. The crosses indicate that the ages are from {{cite:5f500031340cdbf9bf19dc07bfc0c86dd419ea94}}, {{cite:7a63b9958b8c7745a718778f09c16dfca4995a71}}, {{cite:06d52c359ae5f4ab8a31e454fe157e4045390efa}}, {{cite:15046d6cc669f47562b5e06dc500e023f72d10b2}}, {{cite:33d2782eaafc2e516e7af507ec77b85f0777b746}}, and {{cite:9820b9c54207ae5b631fd0bc0c34ff029493c255}}, and the masses were obtained based on the SSP models of BC03, while the circles indicate that the ages are from {{cite:33d2782eaafc2e516e7af507ec77b85f0777b746}} and the present paper, and the masses were obtained on the basis of the galev SSP models. Overplotted is the fading limit, assuming {{formula:0e5a47b5-415c-4e37-bd7a-4bc3cb258f87}} mag and evolutionary fading based on the {{formula:bc58f5cd-f225-4392-9013-a98570a856e9}} BC03 (dashed line) and galev (solid line) models, assuming a Salpeter stellar IMF. Figure 9 shows that our observational ({{formula:798197c2-32bb-43bc-a835-d6fafd8a0e13}} %) completeness limit describes the lower mass limit of the entire GC sample up to the oldest ages very well. Similarly, the upper envelope of the points in Figure 9 is likely a result of the `size-of-sample effect' {{cite:34c2421a4b08df6df8313431a5e3b112b0c50d7e}}. It is clear, however, that massive star cluster formation halted abruptly in the disk of M31 approximately 1 Gyr ago. Given that massive ({{formula:59d9c554-6d5a-4baf-9a7d-1c7753d03c05}} ) young ({{formula:0e6be6cd-c1ee-4a07-a119-62a5db153b9b}} Gyr-old) clusters would be significantly brighter than their much older GC-type counterparts in M31, we would have expected any such young massive clusters to have been detected in M31, yet they have not. {{figure:ebc3a458-cd17-4653-ab8b-33ffed89ed2b}}
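The magnitude-to-mass conversion used above follows M = (M/L) x L with L/L_sun = 10^(-0.4(M_abs - M_sun)); a minimal sketch, where the solar absolute magnitude default (V band, 4.83 mag) and the mass-to-light ratio values are illustrative, and the age-dependent M/L ratios would come from the BC03 or galev SSP models.

```python
def photometric_mass(abs_mag, m_to_l, m_sun=4.83):
    """Photometric mass (in solar masses) from an absolute magnitude and an
    age-dependent mass-to-light ratio (from, e.g., an SSP model).

    The solar absolute magnitude defaults to the V band; other bands
    require the matching solar value and M/L ratio.
    """
    lum = 10.0 ** (-0.4 * (abs_mag - m_sun))   # luminosity in L_sun
    return m_to_l * lum
```

A cluster five magnitudes brighter than the Sun is 100 times more luminous, so its photometric mass scales by the same factor at fixed M/L.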
r
a33953375d60bfd983be512e60e4dad5
Since the discovery of neutron stars (NSs), they have been among the most interesting astrophysical objects found in the observable universe. A number of branches of physics play an important role in explaining the extreme characteristics and properties of NSs. They are wonderful `celestial laboratories' which provide the scope for studying various topics of current interest. The theoretical computation of the structural properties of NSs is largely dependent on the equation of state (EoS), which in turn is determined by the composition of NS matter (NSM). The latter is still inconclusive from the experimental point of view, as NSs are characterized by extreme density (about 5-10 times the normal nuclear matter density). Therefore, theoretical modeling of NSM and determination of the EoS is one of the most active areas of research in the domain of NS physics. However, recent observations of massive pulsars {{cite:49a6de78cfe0866c7ea820e7637d521c3cd797bf}}, {{cite:2da7114fd16910182939e77fb34b395c3e1a09ac}} and the data extracted from the detection of the gravitational wave event GW170817 from a binary NS merger (BNSM) {{cite:1e84e513067afb20a509c51360ae341fd78b189e}} have put some constraints on the EoS of NSs. Theoretical research also suggests the possibility of a hadron-quark phase transition in NS cores and the formation of hybrid stars (HSs) {{cite:ea03b61b015a5f01e6d28fefae8d24299bcc2c5f}}, {{cite:b2daa3fa20d82f578d9d4e21136c680d3b70ff0e}}, {{cite:7d70d3c4f847121dd950708bf423ae1cbd94c00a}}, {{cite:3b36d4eb6002f5511483a3e5a6c2f64fff520e4e}}, {{cite:94ef7f86db4d1ca759f99c811700cabe36c89900}}, {{cite:efc09ab659306f0f160fe8350e11ce443d3490cf}}, {{cite:b1dcc0630d4ef06d5b016a3948a48d9bb4cc840f}}, {{cite:eff043dd82f2a401184cb1b3a752f7c70806624e}}, {{cite:b67df19e7922d759874260d958aa9f264e6689af}}, and even the existence of quark stars (QSs) {{cite:ea03b61b015a5f01e6d28fefae8d24299bcc2c5f}}, {{cite:6c7baafbc00d96692e61e6f175f198eeb778fd95}}, {{cite:66517d95ca199daa78065d5507d7051a46baa934}}. 
For recent reviews, readers may consult Refs. {{cite:b55a2a677388a6a6fc50f8d037a27490d013c687}}, {{cite:781a4032ad84b3c38e85a90f901e1037f47a4a56}}. This hadron-quark transition density within the NS environment depends largely on the theoretical models and interactions considered. It is closely linked to the very low temperature and high baryon density zone of the quantum chromodynamics (QCD) phase diagram, for which first-principles predictions via lattice QCD (LQCD) calculations are missing due to the infamous sign problem {{cite:f44c9a65dec28c411f7a712dc42cc92890e822e0}}. The high temperature and low/vanishing baryon density domain of the QCD phase diagram, however, has been well studied for more than three decades {{cite:d644afbd7170e3c58137273cb1de823c9c76c98e}}, {{cite:a29a65ae642f6c07716d3f624cae6da78ac6e5a8}}; see Ref. {{cite:c4e5a523a7fbb055484186b09e9b85a653d0cee8}} for its latest status. These studies conclude a cross-over type phase transition, which has been alternatively realized from estimates of thermodynamic quantities in the low and high temperature ranges by the hadron resonance gas (HRG) model {{cite:6c8e56ea604c332e6339902eff96b32c684e53a6}} and finite-temperature perturbative QCD (pQCD) calculations {{cite:bf024f78aebfb94da7cc8d9c79007cae054e3401}}, {{cite:2a171cb297ce2578c327bce92d0c407ed56d8384}}, respectively. A similar mapping in the low/vanishing temperature and high baryon density domain might be possible {{cite:b55a2a677388a6a6fc50f8d037a27490d013c687}} by fusing ultra-high density (approximately 10 times the nuclear saturation density) pQCD calculations {{cite:2850d1980257f64fa5e90356fe940a035b247fde}}, {{cite:72078e844b60c06a29e6537a1d65e20b831d6551}}, {{cite:f93083e42705cad76ede203f6628c138daaf115b}} with low-density hadronic model calculations. 
In this context, the present work compares estimates of transport coefficients obtained from the standard MIT bag model for the quark phase {{cite:6c483992a5c94d89996b741c9f6f428f387b49c6}} with those from the chiral hadronic model {{cite:eff043dd82f2a401184cb1b3a752f7c70806624e}}, {{cite:268bc619c1d6497aa8a58f7180c61ba491f3fa9d}}, {{cite:99ca2873c52009ffa58857da65aa7a17ca8798e0}}, {{cite:95e00669c1ca963fd1d94cfa7426d36a8a33ad05}}, {{cite:b67df19e7922d759874260d958aa9f264e6689af}}, {{cite:046892b8d912da89aafc2724250e9797cddeedb6}}, {{cite:1f08c803059002a08eb0d5f257393392bb9468da}}, {{cite:3688a805bc261bf056c692ba5131011c6b17635f}}, {{cite:8be25853661dc2ad5b06bcf6ef7aeeb19e6bad73}} and the relativistic mean field (RMF) model {{cite:e5a444372719f156f6f66ca76159634a2fd379c5}}, {{cite:d7c608a7e3b4a4a06bfd4ea34b8dcb1c83c9f210}}, {{cite:6bf4e805313ebe9cb93adf8d49252df43c0887a2}}, {{cite:a87cebebdbb13fc3e58dbec7ae25a416d0c907d4}} for the hadronic phase. Both types of hadronic models have been explored thoroughly to construct the EoS of dense NSM and successfully determine the structural properties of NSs in the light of recent constraints from various astrophysical observations {{cite:eff043dd82f2a401184cb1b3a752f7c70806624e}}, {{cite:268bc619c1d6497aa8a58f7180c61ba491f3fa9d}}, {{cite:99ca2873c52009ffa58857da65aa7a17ca8798e0}}, {{cite:95e00669c1ca963fd1d94cfa7426d36a8a33ad05}}, {{cite:b67df19e7922d759874260d958aa9f264e6689af}}, {{cite:6bf4e805313ebe9cb93adf8d49252df43c0887a2}}, {{cite:a87cebebdbb13fc3e58dbec7ae25a416d0c907d4}}. The present work aims to probe this hadron-quark phase transition via transport coefficient estimates in the high baryon density and low/vanishing temperature domain of the QCD phase diagram. 
The motivation for this kind of investigation comes from the analogous pattern of LQCD thermodynamics {{cite:a29a65ae642f6c07716d3f624cae6da78ac6e5a8}}, {{cite:c4e5a523a7fbb055484186b09e9b85a653d0cee8}} and normalized transport coefficients {{cite:2b6fd40988b7326f5f705bfaca857988bcc14a2d}} in the high temperature and low/vanishing baryon density domain of the QCD phase diagram. If we analyze the temperature ({{formula:ec35aa52-af05-4685-ab6b-1b2343c7bd03}} ) profiles of thermodynamic quantities like the pressure {{formula:2d12dbe7-e29b-4410-93de-1d406dd5591e}} , energy density {{formula:55024cdd-69a5-47b7-af45-3c9fe0a3d626}} , and entropy density {{formula:ec2ab29a-7d28-4557-89c4-83039ec1cfae}} , and of transport coefficients like the shear viscosity {{formula:1c40ed0c-f3de-4814-964d-6f1f8dccff76}} and electrical conductivity {{formula:b08fd89e-985d-450b-aa5e-062ae947428e}} for massless quark matter, then we find the proportionality relations {{formula:0ee32ee9-6c69-497b-a622-843fc4569ad2}} and {{formula:d35574a8-3989-4190-9688-e4e44ebb3032}} , {{formula:cc09db67-c6ea-4aef-a929-420796981b16}} , where {{formula:f9961cc2-74c4-4a44-b98f-ae1d155d4026}} is the relaxation time of a massless quark. Their normalized values {{formula:836696fe-27fc-4623-b3c3-772e0b80c9d2}} , {{formula:45c5de2a-05a1-4b62-afec-902c0fa07a4f}} , {{formula:53fc76e4-9a2f-4e9f-add2-c3415092b943}} , {{formula:6be28a79-ef5f-4a75-ac59-693548652648}} and {{formula:d4ee3a7a-d8a8-45ba-ab00-78540b4a508e}} therefore appear as horizontal lines against the {{formula:4169a372-f685-45c9-baeb-9f3dc8aeb1a1}} -axis, and these can be marked as their upper, massless, or Stefan-Boltzmann (SB) limits. At very high {{formula:758785c5-db57-4991-b285-66abff88b7a5}} , these limiting values are reached. 
As we go from high to low {{formula:13a1b80f-7c9c-4312-b4b2-85f46dcf7312}} , the values of the thermodynamic quantities and transport coefficients decrease, and their steepest decrease is noticed around the quark-hadron transition temperature {{cite:2b6fd40988b7326f5f705bfaca857988bcc14a2d}}. Here we are interested in finding a similar kind of behavior along the baryon density axis.
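To make the normalization above concrete, the following is a minimal numerical sketch of our own (not the paper's calculation): for an ideal Fermi gas with an assumed degeneracy factor g (the value g = 6 below is purely illustrative), the ratio P/T^4 approaches the massless Stefan-Boltzmann value 7*pi^2*g/720 as T rises well above the particle mass, which is exactly the flattening toward the SB limit described in the text.

```python
import math

def pressure_over_T4(m, T, g=6.0, steps=100000, pmax_factor=40.0):
    """P/T^4 for an ideal Fermi gas, P = g/(6 pi^2) * Int dp p^4/E / (exp(E/T)+1)."""
    pmax = pmax_factor * max(T, m)
    dp = pmax / steps
    total = 0.0
    for i in range(1, steps + 1):
        p = i * dp
        E = math.sqrt(p * p + m * m)
        total += p ** 4 / E / (math.exp(E / T) + 1.0) * dp
    return g / (6.0 * math.pi ** 2) * total / T ** 4

g = 6.0
sb = 7.0 * math.pi ** 2 * g / 720.0           # massless (Stefan-Boltzmann) limit
r_low = pressure_over_T4(1.0, 2.0, g) / sb    # T = 2m: visibly suppressed
r_high = pressure_over_T4(1.0, 20.0, g) / sb  # T = 20m: close to the SB limit
print(r_low, r_high)
```

The normalized pressure climbs toward 1 with T/m, mirroring the "horizontal SB line" that the full curves approach at very high temperature.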
i
ce768bcf4e1aeb79f0e81b84a325ef29
In the current work, the critical argument {{formula:87a0ddfc-1a75-421b-b044-93f89f53ce38}} is adopted to describe the long-term behaviors. In particular, it is {{formula:095b0237-3223-43f9-b76b-5d0fd91c3500}} for prograde configurations and {{formula:9ccef522-b8a7-4b86-a331-fbc9aefa238b}} for retrograde configurations. When the configuration is close to the coplanar case (i.e., {{formula:624489d6-0b13-46cc-84e2-2e1056746dc6}} is close to 0 or {{formula:cbf6cc44-7659-49c9-8f70-1c9a4e60cc94}} ), we can see that the angle {{formula:acd19d0b-3d1e-4681-9096-44499eee01b4}} can be approximated by {{formula:523d4888-cc7a-4efe-9dfd-37667c68a493}} . That is to say, the longitude {{formula:30873996-91a1-4108-a506-c345826e6269}} adopted in {{cite:16560849c78884cbde7e0bd83298af0f5edc20b5}} is consistent with the critical argument {{formula:61e76c76-f5a7-4246-a1a1-02b083450b5e}} taken in this work for nearly coplanar configurations.
d
c5c74ca046c49efb05a6aeeebaed2256
Pre-trained Models: We use the pre-trained Faster R-CNN model of {{cite:ffddcc465ca19a679b8c7b9d568b0c16e5bb82fc}} to initialize our method; our method then achieves a detection mAP of 41.3 in the Cityscapes{{formula:3dffd467-38f4-449f-8663-0ad6bd5e2ec5}} Foggy Cityscapes experiment, compared to 42.3 for our method without the pre-trained detection model.
r
62b1fb692ab47620622ec2c605828636
Since super-resolution (SR) can improve image quality without changing the MRI hardware, this post-processing tool has been widely used to overcome the challenge of obtaining HR MRI scans {{cite:e1353259ac1763b01995411ef80099224e1fdb17}}. Bicubic and b-spline interpolation are two basic SR interpolation methods {{cite:4951a2fdb7c3fabc977eb7309f65af1805e3d94d}}, {{cite:93bae5fba0264ca329a2db47d5a50e4af7d3c1ca}}; however, they inevitably lead to blurred edges and blocking artifacts. Relying on the inherent redundancy of the transform domain, iterative deblurring algorithms {{cite:65b589cfffa5765d3db60ecfea89902f925dc881}}, {{cite:baabc366fce66c5258ce775ab7a351e337d9d396}}, low-rank {{cite:8e262f29069e10b7399357398a19f05973bd05d4}}, and dictionary learning methods {{cite:887177ecd303eb0c1632c5dfa9c59c36c5eeabd5}} have made significant progress in MRI SR. Recently, deep learning, which offers higher resolution, has also become widely used for this task {{cite:84d68af5f8292f3911fd27b864602d135e088b63}}, {{cite:083e50f66e13e489bc9fda973cfafada33d75744}}, {{cite:93fb998fdc9839fabde596e0e56d1eb6889c34af}}, {{cite:912d64a1c827d41c757933b01faf43f0b4e7d28c}}, {{cite:c551b71c6c58811c81bb07765616af3e46265c3e}}, {{cite:93ba0fa1f6fd9669a931e2b0ed22b84f8331567f}}, {{cite:7a7c7d79958672f2fcd24f30d139387a0b434ae6}}. For example, Akshay et al. applied a 3D residual network to generate thin-slice MR images of knees {{cite:c551b71c6c58811c81bb07765616af3e46265c3e}}. Chen et al. used a densely connected SR network to restore HR details from a single low-resolution (LR) input image {{cite:912d64a1c827d41c757933b01faf43f0b4e7d28c}}. Lyu et al. used a generative adversarial network (GAN) framework for CT denoising and then transferred it to MRI SR {{cite:9e50ea3cc693c323850313c8a004d5536a582915}}. However, the above methods focus on mono-contrast acquisition to restore HR images, ignoring complementary multi-contrast information.
i
84ac8d9461f5344ef8b9a7c99b36711e
Lanthanum-based cuprate superconductors possess a very complex electronic phase diagram with intertwined orders, ranging from an antiferromagnetic Mott insulator to a metal. The long-range antiferromagnetic order in these compounds is destroyed by as little as {{formula:14f4ef18-64f3-4dbb-97e7-679d402a9c57}} strontium doping, which introduces one electron hole per substituted atom {{cite:f4bfaa7bc7b3d547450be56493bf2a39b499c7be}}. Upon hole doping, these materials are known to form so-called magnetic stripes, also known as incommensurate antiferromagnetic (IC AFM) order, depicted as insulating domains of antiferromagnetically arranged spins separated by one-dimensional "rivers of charge" within the CuO{{formula:86e84f2b-f31e-41f2-bf10-6cb99b6b3ca5}} layers. Spin stripes were first proposed by Tranquada et al. in La{{formula:3dc4a2c5-d8fc-401f-a5bc-ac4fca2131d8}} Nd{{formula:4421eb2d-6d20-4130-af65-40743753d138}} Sr{{formula:84afa021-3683-48b4-a585-c613152c4443}} CuO{{formula:d76aa597-32e1-494e-b49e-b9d943c0a0ca}} {{cite:8a3aec6eaae10379a5bba91ceeb4167a7483fa8a}} and later confirmed in La{{formula:71634c03-22cc-4e30-9d81-043bdcb0d6bb}} Sr{{formula:d8454e65-241a-49c1-9439-21fa608d5ead}} CuO{{formula:7b3616f8-5a62-4d3e-8473-7e437a55819c}} (LSCO), as summarized by Yamada et al. {{cite:157572e7020529ae4024e4d9789f00573fdc1138}}. In neutron scattering experiments, spin stripes are observed as a quartet of incommensurate peaks around the AFM reflection with a wavevector transfer of {{formula:cc318ad0-d8dc-40f7-ae03-215e70912494}} , or at crystallographically equivalent positions, such as {{formula:104a778e-56e3-42bd-8a6c-eabd1fd90e9e}} .
i
06fb134152fe49186066390d4b040f19
Let us now try to organize these results and draw some conclusions from them. The quasi-power distribution can be interpreted as a trace of temperature fluctuations {{cite:33bf9694673ef3e41c4295de35f705654fd28a61}} described by the non-extensive Tsallis statistics, usually called superstatistics {{cite:5b6f496ce1f547f525705e534f6ba2115a003a7a}}, {{cite:c36d5118b9f79130cdd2308280c5270d27ceb0f8}}. If we approximate the production process by an irreducible Markov chain, then the time dependence of the fluctuations is very sensitive to the reciprocal of the relaxation time, {{formula:40b8b89a-633c-44f2-84ae-43c37915a76a}} , that is, the stochastic collision frequency for the particle {{cite:12d766544649cff63fb109bc2aeed73edd8e084f}}. It is therefore reasonable to choose {{formula:45277e80-f379-435b-b67b-2875ce8949ce}} so that the fluctuation decay time along the particle's trajectory is the same as the decay time of a small section (small volume) of real matter surrounded by a much larger volume of its remnants. Now suppose that this small sample has a temperature variation such that its temperature is {{formula:4ab2db8b-832e-4dad-a12d-6c48f4536304}} . The sample will therefore gain or lose energy at a rate proportional to the temperature difference {{formula:f0603ae9-c6f7-458a-ab9f-5d4dced14ed9}} and to the thermal conductivity {{formula:f9c7bbe2-62aa-4c61-a9ba-9209ab336249}} .
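The relaxation picture can be written compactly in symbols of our own choosing (a sketch, not the paper's notation): Newtonian cooling with a rate ω set by the thermal conductivity κ, combined with the standard superstatistics result that a Gamma-distributed inverse temperature β′ produces a Tsallis-like quasi-power law.

```latex
\frac{dT'}{dt} = -\,\omega \left( T' - T \right), \qquad \omega \propto \kappa ,
\qquad
f(E) \;\propto\; \int_0^\infty d\beta'\, g(\beta')\, e^{-\beta' E}
\;=\; \left[\, 1 + (q-1)\,\frac{E}{T} \,\right]^{-\frac{1}{q-1}},
\qquad
q - 1 = \frac{\operatorname{Var}(\beta')}{\langle \beta' \rangle^{2}} ,
```

where g(β′) is a Gamma density with mean ⟨β′⟩ = 1/T; the non-extensivity parameter q thus measures the relative variance of the temperature fluctuations.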
r
1bf3f218c93f2b0131389483dd5322e4
We evaluate our Neural Part Priors for semantic part completion on real-world RGB-D scans from ScanNet {{cite:6dae7cad64b14b64adece3b74ce650ad0c71188c}}. We use the official train/val/test split of 1045/156/312 scans. In order to evaluate part segmentation in these real-world scenes, we use the Scan2CAD {{cite:739b116c21902a31d974d5b47582408b28848e9e}} annotations of CAD model alignments from ShapeNet {{cite:8876e664ea2b3d1d4d82ffb487b36b608294b635}} to these scenes, and the PartNet {{cite:4f73211388588f4b611e63dd71c11571bf4ddbaf}} part annotations for the ShapeNet objects. To construct our part latent space, we train on PartNet and train a projection from ScanNet train objects to their PartNet annotations; we also train all baselines on the same ScanNet+PartNet data. We consider 6 major object class categories representing the majority of parts, comprising a total of 28 part types.
r
a0345ae230da947aa1f262e50cd1cdc8
A relevant analytical model accounting for the behavior of the normal force with the probe-substrate distance has been elaborated by expressing the total surface energy of the droplet, {{formula:b1361196-fece-47cb-8640-aed0edd09fe2}} , as a function of its radius {{formula:83fa3672-4cb4-4ab7-a579-566c7859a71e}} ; see Refs. {{cite:5ce8215b648d4e88cedfdc20d0ee0422be541650}}, {{cite:2e3bd4506c70fb27a3d2fd315c0f58cc889ec636}}, {{cite:3dffb5e20f5efef10fee3451d9ae7c530b9c0503}}. Then, {{formula:e7c8088e-4898-458d-b6f7-a5fd15587a3c}} , where {{formula:debc9b92-f946-4b23-90a2-b9ffe9958a0f}} is the surface energy of water. In our model, the shape of the droplet is approximated as a cylinder with a volume {{formula:c3bd76cd-d2e4-44c8-943d-b5b0dfe0f9b9}}  nm{{formula:6cb1d8a3-5239-4e66-b68d-d62a0abff164}} . Thus, the normal force, considered as a function of the probe-substrate distance {{formula:a6c71ade-dd61-4ace-b909-167c5126871d}} , can be obtained by the straightforward derivative of the surface energy with respect to {{formula:4a73d3a6-9663-46d3-ad1f-291f9b8eb51f}} , {{formula:9af1dc4e-7497-4780-8a6d-0529af01e65f}} . This analytical macroscopic model (represented by the dashed line) then describes the simulated data in Figure REF well. The model takes as input the surface energy of water, given by the molecular dynamics SPC model with {{formula:92706db7-d8de-437b-bc76-2d6e51529f11}} 52 mN/m {{cite:c11b354d1d0b6bf5384714096be413293891ca59}} , and contact angles of 75{{formula:604a95d4-50f5-40dc-942a-07fecb133198}} , 90{{formula:76fe19ce-138d-4e4d-875e-b136ceb81e0f}} and 105{{formula:b1ae8808-4729-416b-aac7-41c43ea5c693}} . The model also gives insight into the dependence of the normal force on the volume of water in the contact and on the probe-substrate distance. As the probe-substrate distance increases, the normal force changes its nature from a repulsive into an attractive interaction acting between the probe and the substrate. 
Below a probe-substrate distance noted {{formula:32703a1b-da04-415e-9b3b-77f5d5c7cbca}} , the surface tension acts to reduce the water surface in contact with the solid, as well as its overall surface. This tendency toward a more symmetric (spherical) shape results in a repulsive force between the probe and the substrate. In the example of configuration B1 in Figure REF (c), {{formula:02aef946-571d-4b35-b968-5a6919372953}} 4 nm for a 340 nm{{formula:226adf33-7965-44a9-ac9a-1349821917d4}} droplet. Above {{formula:f22f6ba0-4b4f-4434-9bd2-89cc62219333}} 4 nm the droplet is stretched and elongated, and its resistance against further extension results in an attractive (adhesive) force mediated by the water droplet between probe and substrate. The measured adhesive force in the experiment is the minimal (negative) normal force during the retraction of the probe. Our model shows that {{formula:cdaa0168-6a0a-4dfb-b0e6-f0c90fbb5e6a}} at a probe-to-substrate distance {{formula:aa5e88ff-893f-43d9-867d-8f7469b29022}} (see Figure S5 in SI). The experimental adhesion force is independent of the sliding velocity (see Figure REF ), implying that the water volume of the droplet is also independent of the sliding velocity. From there, the model allows us to estimate the volume of the droplet in our experiments to be 8{{formula:31da8dae-b65d-47a5-a518-ebc278590e99}} 10{{formula:d48f48dc-72a1-41be-b93a-dffed58e9e37}}  nm{{formula:36d8d04d-a171-4ef9-8680-4cb72da36201}} , equivalent to a spherical droplet about {{formula:0bfa2305-81bb-4abc-a738-aef178ce1510}} 250 nm in diameter. Such a value is plausible considering that the experimental probe has a 0.2 µm diameter and that the droplet may not be completely spherical.
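A minimal numerical sketch of this picture may help; the surface energy below, E(D) = γ[2πrD − 2πr²cosθ] for a cylindrical drop of fixed volume V = πr²D, is our own simplification (not the paper's exact model), using the 340 nm³ volume and the 105° contact angle quoted above. It reproduces the repulsive-to-attractive crossover of the normal force at a gap of a few nanometres.

```python
import math

GAMMA = 0.052            # nN/nm (= 52 mN/m, the SPC-water surface energy quoted above)
V = 340.0                # nm^3, droplet volume of configuration B1
THETA = math.radians(105.0)

def normal_force(D):
    """F = -dE/dD (nN) at gap D (nm) for a cylindrical drop of fixed volume V.

    Sketch energy: E(D) = gamma * (2*pi*r*D - 2*pi*r**2*cos(theta)),
    with r = sqrt(V / (pi * D)); positive F means repulsion.
    """
    dEdD = GAMMA * (math.sqrt(math.pi * V / D)
                    + 2.0 * V * math.cos(THETA) / D ** 2)
    return -dEdD

# crossover gap D* where the force changes sign (analytic root of dE/dD = 0)
D_star = (-2.0 * math.cos(THETA) * math.sqrt(V / math.pi)) ** (2.0 / 3.0)
print(normal_force(2.0), normal_force(8.0), D_star)  # repulsive, attractive, crossover gap
```

With these inputs the simplified model puts the crossover near 3 nm, the same few-nanometre scale as the D* ≈ 4 nm reported for configuration B1.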
r
911a42d332a860de7c2bc00bdffbeace
Perhaps the most important open questions in the area of coupling design concern the development of theoretical tools to relate proposal and acceptance options to meeting times. Such tools would enable a systematic understanding of the interaction between proposal and acceptance steps. They would also support work on how to pair the two so as to produce as much contraction as possible between chains. One approach to this might exploit the drift and minorization approach of {{cite:885e5f3c2d44f502b9d1860b114ad963203de4da}}, {{cite:f7a8d0c82fea52ee9c65fec4d3542a3bf01e9d45}}, especially the pseudo-small set concept of {{cite:65f288f5429a0722da9af0eef8aee19ebb4ade49}}. The analyses and simulations above mark a step forward in our understanding of the options for coupling MH transition kernels. They suggest that some options might be better than others and hint at why.
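As one concrete handle on meeting times, the following toy of our own (not from the text) couples two MH chains on a standard-normal target by maximally coupling their N(x,1) random-walk proposals and sharing the acceptance uniform; the coupling is faithful, so once the chains meet they stay together, and the recorded meeting time is the quantity the open questions above concern.

```python
import math, random

def log_target(x):
    """Standard normal target, up to a constant."""
    return -0.5 * x * x

def phi(z):
    """Unnormalized N(0,1) density (shared constant cancels in the ratios below)."""
    return math.exp(-0.5 * z * z)

def coupled_proposals(x, y, rng):
    """Maximal coupling of N(x,1) and N(y,1): maximizes P(X' == Y')."""
    xp = rng.gauss(x, 1.0)
    if rng.random() * phi(xp - x) <= phi(xp - y):
        return xp, xp
    while True:
        yp = rng.gauss(y, 1.0)
        if rng.random() * phi(yp - y) > phi(yp - x):
            return xp, yp

def meeting_time(x0, y0, seed=0, max_iter=100000):
    """Run two coupled MH chains until they meet; return the meeting iteration."""
    rng = random.Random(seed)
    x, y = x0, y0
    for t in range(1, max_iter + 1):
        xp, yp = coupled_proposals(x, y, rng)
        u = math.log(rng.random())              # shared acceptance uniform
        if u < log_target(xp) - log_target(x):
            x = xp
        if u < log_target(yp) - log_target(y):
            y = yp
        if x == y:
            return t
    return None

print(meeting_time(-5.0, 5.0))
```

Varying the proposal scale in this sketch is one cheap way to see how proposal choices trade off against contraction between chains.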
d
85d26d307974307d7485e568326e2c2c
667.01 GJ 1132 b {{formula:85728ea5-5357-4177-8dcb-cb816da8cfa0}} {{formula:ede55c27-8237-4efa-9110-4ea29db40642}} {{formula:ae26332b-a573-42e3-a564-0ca03be986ca}} {{formula:cbaf803e-173f-4383-be3c-84a634d5e5fb}} {{formula:73ea22e4-04fa-4d33-938a-272183d6bbc2}} {{formula:a4853e70-8dba-4f43-996c-68534dea6d2d}} {{formula:1db62977-0db0-49b8-a79b-c093b69d4f0d}} {{cite:10f63e80f4f936b0b1c8045ea8caf9d0f776d780}}a
d
c5102d1aba6c7afd9286484a23d5b950
Ancona et al. {{cite:a54df5b8b944f92b4a53c06aeb3bef7212836704}} show that, for a network with only the ReLU activation function and no additive biases, this input-gradient product is equivalent to DeepLift {{cite:2233fb321934dea8f776fa6eee00d2b4fe9c7e15}} and {{formula:5a90a987-d263-4e2b-8ab3-37f43f16676e}} -LRP {{cite:f316f3e5ed96a55ce4c582507b1894be5b04f034}}. For the interpretation of SR networks, the pixel intensity should not be part of the attribution, as textures and edges may not change when the pixel intensity changes. Directly calculating the product of the input intensity and the gradient therefore introduces interfering factors.
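The bias-free ReLU setting has a convenient sanity check: such a network is positively homogeneous, so by Euler's theorem the input-times-gradient attributions sum exactly to the output. A hand-rolled sketch with toy weights of our own choosing:

```python
def forward(x, W1, w2):
    """Two-layer ReLU network with no biases: f(x) = w2 . relu(W1 x)."""
    h = [max(0.0, sum(wij * xj for wij, xj in zip(row, x))) for row in W1]
    return sum(w2j * hj for w2j, hj in zip(w2, h))

def grad_times_input(x, W1, w2):
    """Attributions a_i = x_i * df/dx_i, with df/dx = W1^T (w2 * 1[pre > 0])."""
    pre = [sum(wij * xj for wij, xj in zip(row, x)) for row in W1]
    g = [sum(w2j * (1.0 if pj > 0 else 0.0) * W1[j][i]
             for j, (w2j, pj) in enumerate(zip(w2, pre)))
         for i in range(len(x))]
    return [xi * gi for xi, gi in zip(x, g)]

W1 = [[0.5, -1.0, 2.0], [1.5, 0.3, -0.7]]   # toy weights (our choice)
w2 = [1.0, -2.0]
x = [0.2, -0.4, 1.1]
attr = grad_times_input(x, W1, w2)
print(sum(attr), forward(x, W1, w2))  # equal: completeness for bias-free ReLU nets
```

The same identity also makes the interference visible: scaling one input x_i rescales its attribution even when the local texture, and hence the gradient pattern, is unchanged.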
m
668b8cad43fd3f8b705d61094dbaab8d
Alternative methods have been explored to reduce the annotation effort for the 2D object detection task. Weakly supervised object localization (WSOL) {{cite:55a02d8a06303142553828594ccfb58ec09ff820}} has been used to train detectors using image-level labels, a very cheap annotation technique. The resulting 2D object detectors are very weak compared to ones trained on human-labelled bounding boxes, achieving half the latter's mean average precision. Other works focus on getting the most out of as little training data as possible. Few-shot learning was explored in {{cite:f23b228d62b762f4f18bb3d6cbcc82d516ad3329}} to train 2D object detectors from as few as four samples per category. Similar to WSOL, the resulting detectors have yet to achieve performance comparable to that of detectors trained from human labels.
m
150aed6eb34223ca8a3f3f2bed0ea655
To fit the individual RCs through the mass modelling approach, one needs to use an informative prior on the baryons ({{formula:bd218142-5dee-4cb7-bf02-5669e7b814b6}} ) in advance; otherwise the parameter space {{formula:b74cafc6-494c-4dd5-b478-fee4e08ac74a}} is highly degenerate. In the case of co-added RCs, on the other hand, we could achieve a good fit by specifying only one or two baryonic parameters, namely the stellar disc scale-length and the stellar mass in this work. This case also allows us to adopt a flat prior on the bulge mass, which is an unknown (and unresolved) quantity, so that we can determine it dynamically; see Section REF and Table REF . Remark: when we use the NFW halo model for mass modelling of RCs, it gives a good fit to the data but suppresses the baryon content and yields high DM fractions, which we see neither when using the Burkert halo nor in the simulations (EAGLE, TNG100, TNG50). Based on RC fitting and the structural properties of DM we can neither include nor exclude the NFW halo model, but there is tension in constraining the stellar masses using the NFW halo; see Figure REF . In principle, a galaxy cannot have more stellar mass than is allowed by its dynamics {{cite:0c67417e592bfea9e532350a64e136c5cdf66981}}. Given the various results/comparisons for the NFW and Burkert haloes in the case of individual and co-added RCs, we find that our current sample favours the Burkert halo model and also agrees quite well with the simulations. Therefore, we focus on discussing the Burkert halo results in what follows.
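For reference, a small sketch of the Burkert profile used in such fits (the density parameters ρ₀, r₀ below are arbitrary illustrative values, not fitted ones): the enclosed mass has a closed form, cross-checked here against direct integration of the density; the circular velocity entering an RC fit is then v_c(r) = sqrt(G M(<r)/r).

```python
import math

def burkert_mass(r, rho0, r0):
    """Enclosed mass of the Burkert halo rho(r) = rho0 r0^3 / ((r+r0)(r^2+r0^2))."""
    x = r / r0
    return 2.0 * math.pi * rho0 * r0 ** 3 * (
        math.log(1.0 + x) - math.atan(x) + 0.5 * math.log(1.0 + x * x))

def burkert_mass_numeric(r, rho0, r0, steps=100000):
    """Direct midpoint-rule integration of 4 pi s^2 rho(s) from 0 to r."""
    dr = r / steps
    total = 0.0
    for i in range(1, steps + 1):
        s = (i - 0.5) * dr
        rho = rho0 * r0 ** 3 / ((s + r0) * (s * s + r0 * r0))
        total += 4.0 * math.pi * s * s * rho * dr
    return total

rho0, r0 = 1.0, 3.0                      # arbitrary illustrative units
m_a = burkert_mass(10.0, rho0, r0)
m_n = burkert_mass_numeric(10.0, rho0, r0)
print(m_a, m_n)                          # closed form and quadrature agree
```

Unlike the NFW profile, the Burkert density flattens to a core at small r, which is precisely why the two haloes pull the inferred baryon content in different directions in the fits above.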
d
6b081f4b8709bcaa11ba8eaa8e17ef61
Researchers have attempted to model Re and Nu on the governing parameters, Ra and Pr {{cite:ff9858738cf7eea88f6b0776c3cffca666d49d53}}, {{cite:ec180b75ca2db6594d76ef4f0fd96195f9169a33}}, {{cite:399a064556a3724a04ba89bca64a49e834fa65ab}}, {{cite:65ff8c532d1e16c18b3f6901f5a328d605f21d71}}, {{cite:1f4b96cb63bda86027b5e7fef1e43f989b2bba4c}}. Early theoretical, numerical, and experimental studies of RBC reveal a power-law scaling of Nu and Re, i.e., {{formula:96a3d7dc-0ad9-42c1-8900-ba8f08de27fe}} and {{formula:597950d3-7946-49e5-8645-a673d66e64ca}} . The exponents {{formula:253ca4bf-5edd-4365-b382-3ec2f002ca8e}} , and {{formula:7e8511b6-7397-4f47-8b28-935f2fa59bbe}} vary for different regimes of Pr and Ra. For the scaling of Nu, the exponent {{formula:4f1d90da-f044-4ec8-91cf-f1c73b9b683e}} ranges from 1/4 for {{formula:881a19f9-3298-421a-8b63-53c1a262abb8}} to approximately 1/3 for {{formula:d4da1b25-0933-417f-bac2-7668804cabbb}} , {{cite:4c71976be0cf4c77cf9beb4574df13f15e52d336}}, {{cite:b9ec942237d59bf1e62c60f72af5327392ccbc85}}, {{cite:f68ac4a6efa8779ea2f7b68f2793da9e54fc0bc3}}, {{cite:32ec2672df7586113ce3370d141531be42cacda3}}, {{cite:4e5ca15bbaa53c0fa76f745c10beee904db18f5b}}, {{cite:c17b976f6b5ebebdca16008ebbe625260918ba63}}, {{cite:b3d534a880afffcd1590ed13b211b1589b390370}}, {{cite:642ed0ca1f6b63fbe267a490623cf3859e989377}}, {{cite:9f2e673e510d1717694c98b8166bfb59c4957f0a}}, {{cite:d6bf5dbfa960d2990f9f85ac1917317c29ad67e0}}, {{cite:cb9cfcabd64378bc655b5143d6535da01f689b5c}}, {{cite:0fcab3bb600127030a64da07bb86c0d8be42ad2b}}, {{cite:8ba1e2842a16c0f6147fc37b04e90ce058863f1e}}, {{cite:3e065c51baab7f51f894d933e3ee846d2a6b53a5}}, {{cite:0e58f46fe644eb4563a443b2a63329ee8f7db2e8}}. 
However, Nu has a weak dependence on Pr, with the exponent {{formula:c9bfc11b-0ce3-487f-a78f-6871592bc9bc}} ranging from approximately zero for {{formula:625a9706-1511-41c1-9073-46f51be45c03}} to {{formula:6549e844-416d-4032-970d-fe0fd093aaf4}} for small {{formula:01aeb310-cb65-40c6-8a59-920b31a94249}}  {{cite:719d0b94c0183f0e532c82905f8a28ccf8bb116b}}, {{cite:c284577256b282fbc5b64723174ee3f182cce248}}. Regarding the scaling of Re, the exponent {{formula:da55edf6-b452-4452-867d-6492073e6daf}} was observed to be approximately {{formula:f8835124-e2aa-4783-8413-563d2f4f1482}} for {{formula:8011e852-7e97-4d9a-a3b9-0009772eebc7}} , {{formula:51d374e7-7201-4a55-b589-eff48f151afe}} for {{formula:9829c40f-0742-4cca-aa11-1e4324def727}} , and {{formula:e94a04fe-4bae-4509-9d8d-4d7fc4ea698e}} for {{formula:cd9105e3-3254-47ae-9e8f-b79469d687da}}  {{cite:4e5ca15bbaa53c0fa76f745c10beee904db18f5b}}, {{cite:c17b976f6b5ebebdca16008ebbe625260918ba63}}, {{cite:f68ac4a6efa8779ea2f7b68f2793da9e54fc0bc3}}, {{cite:e61a5d41be94b13b592c38aacf776eca49c5eb41}}, {{cite:49f1d2ad109f70615546f563a520913e1e53444f}}, {{cite:9e3945c36fe53c571dc2d5432feb7449a3943bbd}}, {{cite:11481d4cbf48ea7f90ddcf68add2de736d6fe927}}, {{cite:8a3717622701c8ab45b643d75a8408a1e0a5ac13}}, {{cite:642ed0ca1f6b63fbe267a490623cf3859e989377}}, {{cite:32ec2672df7586113ce3370d141531be42cacda3}}, {{cite:b3d534a880afffcd1590ed13b211b1589b390370}}, {{cite:63e09a921f2d522659c3e61947a79836f35f9df9}}, {{cite:9d9e0890c377aa84d652f20d31d586eb1cb54829}}; and {{formula:2c8c2814-3489-4961-bc74-a38272f8d34d}} has been observed to range from {{formula:97bcd53c-b015-4ab7-bca5-a21734dc890e}} for {{formula:14eb9629-a097-4d58-b770-5aa8b5f9f4f5}} to {{formula:a35629e3-cf24-469a-9f09-f821731b9508}} for {{formula:83b1e390-c24a-452e-a106-75678d4f046e}} . {{cite:719d0b94c0183f0e532c82905f8a28ccf8bb116b}}, {{cite:6555193c4d8d8f2b252cd27e5a833a96ad9ca4ef}}. 
In the limit of very large Ra (the ultimate regime), Kraichnan {{cite:a0878596860a7f344f111cb7e7e2f321c3a9fab4}} argued that {{formula:68e568a9-2d67-4e1e-933d-561a3bf13b58}} , {{formula:86da61bb-a2f3-45ef-96f1-235e97081a85}} for {{formula:0fcd225a-34ce-4923-8216-5f2117a62ba9}} , and {{formula:9a96706d-6946-4e23-809f-2f241a092b20}} , {{formula:49156d4c-8bc2-44d2-8f4b-c28e71b247a2}} for {{formula:5a9f3418-1679-46d5-a54e-89d001ffd16c}} , with logarithmic corrections. However, the existence of the aforementioned regime in RBC is still under debate {{cite:8a3717622701c8ab45b643d75a8408a1e0a5ac13}}, {{cite:f6d55748364cabe25af62464e7bef80bb9bfe0a2}}, {{cite:21320c135f737367736a97d62f24776db72a24c6}}, {{cite:c17b976f6b5ebebdca16008ebbe625260918ba63}}, {{cite:2079f1679ffaf846852d25c8458378648dcb46c3}}, {{cite:9a6092c586c4193e3aaf0c2fd0d9c93519d7f3b4}}, {{cite:d89b6305b8786ac8cef43d88f127b13bcf0b7b17}}, {{cite:88cfabe3d6d158818f9c134d9cd7ffdf4cc2e098}}, {{cite:798f5a79dc1632519f04c7b4082a99159910b50b}}, {{cite:4cbfdf9ba2c927fad857825a6a92a41944c16d63}}.
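Exponents such as β and γ above are typically extracted as slopes of log-log fits over a range of Ra or Pr. A self-contained sketch (synthetic Nu(Ra) data with a known exponent of 1/3, entirely our own construction) recovers the exponent by least squares:

```python
import math

def fit_power_law(xs, ys):
    """Fit y = A * x**beta by least squares in log-log space; returns (A, beta)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
           sum((a - mx) ** 2 for a in lx)
    return math.exp(my - beta * mx), beta

# synthetic Nu(Ra) = 0.1 * Ra^(1/3) over several decades of Ra
Ra = [10.0 ** (6 + 0.5 * k) for k in range(13)]
Nu = [0.1 * ra ** (1.0 / 3.0) for ra in Ra]
A, beta = fit_power_law(Ra, Nu)
print(A, beta)
```

In practice the fitted exponent depends on the Ra window chosen, which is one reason the regime boundaries quoted above matter when comparing studies.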
i
2386f4a5ba85e44fc794dc2606fe170a
The calculation of {{formula:43f3e5cc-d853-4f54-aa08-f0adec17486b}} from first-principles methods requires harmonic and anharmonic interatomic force constants (IFCs). For the harmonic IFCs, atoms were displaced by 0.02 Å using the finite displacement approach as implemented within the PHONOPY {{cite:245342a8c0e4f049ef26d857d2c828a0c0aef04d}}, {{cite:5451ca8775f57be942e40c69325b1abc1d0550e3}} package. For these displaced configurations, forces were calculated within DFT using the VASP code. Here, a 3{{formula:c672b3c5-c964-4051-b783-8ec2e738bfe3}} 3{{formula:c9d8672f-63ed-44a4-b850-19652b58b86e}} 1 k-point mesh, along with an energy cutoff of 500 eV and a strict energy convergence criterion of 10{{formula:11bf8f6c-f45d-4fd2-9b3e-145e3339cda6}} eV, was used to obtain well-converged phonon frequencies. For the semiconducting phases (FM, AFM), long-range Coulomb corrections to the phonon frequencies were included using the dielectric constants and Born effective charges according to the method proposed by Gonze et al. {{cite:2f2d08845c815f31a8e6ccb209eeaa959a7e4676}}. Translational and rotational invariance, along with Born-Huang symmetry constraints {{cite:1f02fcc11b6a5f86c45fbae7981956fbfccbea93}}, were imposed on the calculated harmonic IFCs following the procedure described in Ref. {{cite:7ff9ac7af92bd499d768d615bd2a4f0a4d69345a}}. Convergence of the phonon dispersion with respect to supercell size was checked, and we find that the phonon dispersions are well converged for a 4{{formula:728c6798-942f-4fc4-964f-95d23432d077}} 4{{formula:87bcb83a-a284-4002-bab9-34db9d4265ec}} 1 supercell. Therefore, for all the systems (with and without strain) the harmonic IFCs were calculated on a 4{{formula:6fce5a93-c2ff-4c53-bfa3-705687302a03}} 4{{formula:7aebf07a-0f12-4943-8686-f043d40f94d8}} 1 (128 atoms) supercell. 
We also checked the effect of spin-orbit coupling (SOC) on the phonon dispersion for the ferromagnetic phases of CrBr{{formula:4018ad01-f098-4544-a9b6-79c6743c6596}} and CrI{{formula:65eabc50-848b-4f6e-b48d-eb4d0e37c960}} . We find that SOC has no significant effect on the lattice relaxation or on the phonon dispersion. Therefore, SOC is neglected in the results presented in this work.
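The finite-displacement workflow described above can be illustrated on a toy periodic monatomic chain (spring constant K, mass M, lattice constant A; all toy values of ours, not the CrX₃ systems): displace one atom, record the forces, build the IFCs by central differences, Fourier-transform to the dynamical matrix, and compare with the analytic branch ω(q) = 2√(K/M)|sin(qA/2)|.

```python
import math

N, K, M, A = 16, 5.0, 2.0, 1.0       # atoms, spring, mass, lattice constant (toy)

def forces(u):
    """Forces for a periodic harmonic chain with nearest-neighbour springs."""
    return [K * (u[(i + 1) % N] - 2.0 * u[i] + u[(i - 1) % N]) for i in range(N)]

# finite displacement of atom 0: Phi[i] = -dF_i/du_0 via central differences
d = 1e-4
fp = forces([d if i == 0 else 0.0 for i in range(N)])
fm = forces([-d if i == 0 else 0.0 for i in range(N)])
phi = [-(fp[i] - fm[i]) / (2.0 * d) for i in range(N)]

def omega(q):
    """Phonon frequency from the Fourier transform of the IFCs."""
    dyn = sum(phi[i] * math.cos(q * A * i) for i in range(N)) / M
    return math.sqrt(max(dyn, 0.0))

q = 2.0 * math.pi / (N * A) * 3       # an allowed wavevector of the chain
print(omega(q), 2.0 * math.sqrt(K / M) * abs(math.sin(q * A / 2.0)))
```

For this purely harmonic toy the finite-difference IFCs are exact, so the numerical and analytic frequencies coincide; in the real DFT workflow the displacement amplitude (0.02 Å above) must balance anharmonic contamination against numerical noise in the forces.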
m
b81a4c8f283f9a705b46f968c232eba1
We evaluated the performance of the proposed DADCF pyramid (Section REF ) and RDADCF in compressive image sensing reconstruction {{cite:52053f3b4898687076f5c38ce046c9859ff44e22}}, as an example of image processing applications. {{formula:bf00bcaa-89b8-4b08-b525-870aee2e1c6b}} pixel images in Fig. REF were used as the test set. Each incomplete observation ({{formula:c3cb9f7f-6361-408b-a2d9-1680adf2c356}} ) is obtained by Noiselet transform {{cite:5ce3dc685759681161bc15bc0eb2938bdb7e62ea}} ({{formula:767d71c8-3a62-48be-b702-34f50eafa9a6}} ) followed by random sampling of 30%, 40%, 50%, and 60% pixels ({{formula:98ea8651-166c-4b12-a980-9cbefa294e67}} where {{formula:37968ddb-53fa-45d4-a270-742c8dd4c0a2}} is the rounding operator and {{formula:66c32c51-ed0b-465e-a3a9-de56b866447c}} ) in the presence of additive white Gaussian noise ({{formula:09b693f3-a3ff-4453-b998-509eb882b6e4}} ) with the standard derivation {{formula:a26802b7-c698-4976-9369-d61e80acdd3d}} as {{formula:c060ff21-5f3b-4653-897d-c9bf39dc612c}} , ({{formula:08fee18d-6a6f-4010-9e5a-0fc353ecc089}} ). Figs. REF (a)–(d) indicate the estimated latent images by using the Moore-Penrose pseudo inverse of {{formula:0c580f29-0243-4644-bdf8-ed31d57969f1}} ({{formula:557182fb-7cf1-44ff-9c88-483f18736279}} ) in the case of {{formula:dcf9c21d-ac37-436d-9b6e-cfcace861701}} .
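The zero-filled estimates via the Moore-Penrose pseudo-inverse have a simple structure when the transform is orthonormal: for Φ = S N with N orthonormal and S a row selector, ΦΦᵀ = I, so Φ⁺ = Φᵀ. A toy sketch with a normalized 8-point Hadamard transform standing in for the Noiselet transform (our substitution, purely for illustration):

```python
import math, random

def hadamard(n):
    """Sylvester Hadamard matrix of size n (power of two), scaled to be orthonormal."""
    H = [[1.0]]
    while len(H) < n:
        H = [row + row for row in H] + \
            [row + [-v for v in row] for row in H]
    s = 1.0 / math.sqrt(n)
    return [[v * s for v in row] for row in H]

def matvec(Mat, v):
    return [sum(a * b for a, b in zip(row, v)) for row in Mat]

n, m = 8, 4
Nmat = hadamard(n)
random.seed(1)
rows = random.sample(range(n), m)          # random 50% sampling of transform rows
Phi = [Nmat[r] for r in rows]              # Phi = S N
x = [float(i) for i in range(n)]           # a toy signal (flattened image)
y = matvec(Phi, x)                         # incomplete observation
x_zf = [sum(Phi[j][i] * y[j] for j in range(m)) for i in range(n)]  # Phi^T y = Phi^+ y
print(matvec(Phi, x_zf))                   # re-projecting recovers y: Phi Phi^T = I_m
```

This is why the pseudo-inverse images in Figs. (a)-(d) are cheap to compute but blurry: Φᵀy only fills in the sampled transform coefficients, leaving the rest to the regularized reconstruction.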
r
6f88214cbfd3fc78f9cb88d68afa9fbe
However, despite the striking resemblance between black hole thermodynamics and traditional thermodynamics, two obvious discrepancies remain between them. One is that there is no {{formula:f24dcf8c-7c28-4f82-a7cb-64d091314ce0}} –{{formula:fb4c1766-758c-454a-a1ec-5ac1fe403419}} term in the first law of black hole thermodynamics, and the other is that a black hole in asymptotically flat space has negative heat capacity and is thus thermodynamically unstable. Therefore, when a black hole radiates via the Hawking mechanism {{cite:56d8f11628e19490f0deb3fdc5c36c987ad8a91f}}, it can no longer maintain thermal equilibrium with its environment and will eventually evaporate. These two aspects still make a black hole somewhat different from a usual thermodynamic system (e.g., a {{formula:6ba83725-4e03-4a4a-8566-8420829a7a0c}} –{{formula:a221eb8b-d648-4bd5-96e6-3777eb803f2b}} –{{formula:bc87ef6c-364b-462e-9ff7-026d6fcd4f0a}} system). In fact, both aspects can be reconciled with traditional thermodynamics, and the idea is simply to impose appropriate boundary conditions and to introduce an effective pressure {{formula:f192a74f-48c9-4a48-a9f3-9682b39f9df1}} and an effective volume {{formula:660a0ed0-58e7-4e8e-a6c0-b7d840787902}} for the black hole system. In this way, two new dimensions are added to the black hole phase space, and such theories are thus named black hole thermodynamics in the extended phase space.
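Schematically, in the conventions most commonly used for this construction (a standard summary in symbols of our own choosing, for a four-dimensional AdS black hole), the cosmological constant supplies the pressure and the mass is reinterpreted as the enthalpy:

```latex
P = -\frac{\Lambda}{8\pi}, \qquad M \equiv H = U + PV, \qquad
dM = T\,dS + V\,dP + \cdots ,
```

so the missing pressure-volume term appears in the first law, and the added (P, V) pair is what enlarges the phase space by two dimensions.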
i
749dd236a67fea17d08b1a60fd212a38
Using Reinforcement Learning (RL) for low-level quadrotor control has been proposed and demonstrated on real-world quadrotors for stabilizing, hovering, and flying along specific trajectories {{cite:9f7a6623947fa972c25544656d04b3c9383ab299}}, {{cite:b96b953cd14fc3736a2b52eae26b989d2f507891}}. The advantages of an RL controller are its nonlinearity and fast computation. However, a gap appears in the tracking results when transferring an RL controller on quadrotors from simulation to real-world experiments. Under real conditions, the quadrotor hovering and tracking results do not match the results in simulation. The reason is that the RL-based controller is sensitive to deviations in the model's physical parameters, such as take-off weight or motor characteristics. This situation becomes harsher when considering aggressive maneuvers with RL controlling the motors directly.
i
e23a02d9da3f22de8d31812cb5e89740
One class of intractable statistical models that has been explored in detail consists of models for which it is possible to simulate data {{formula:707ceda3-12d2-460d-8166-fdd1d70f688e}} conditional on the parameter {{formula:3150f0ef-7d1d-4581-b7b9-eca49d123b40}} . A well-known approach to inference in this class of models is the exchange algorithm of {{cite:762e9bdf07571102a5daa3f6dbe178aa9ed5a513}} and {{cite:6e2e9ef06dd9ae736a797b01e09b1dcc99e29ae3}}, which constructs a Markov chain on an extended state space for which the standard Bayesian posterior occurs as a marginal. Simulation of the Markov chain requires both exact simulation from the statistical model and evaluation of {{formula:0b3374f8-270a-497f-9f23-d81b74a4f0eb}} . Further methodological development has focused on removing the requirement to evaluate {{formula:dea18d44-c2d2-4b23-973e-2d2853668ce9}} , with approximate Bayesian computation {{cite:fccc22fcb1cee25b21a2285facf15a258c2024d7}}, Bayesian synthetic likelihood {{cite:6c8a998c95975a583a49ecd278891d8c620a6ea8}}, MMD-Bayes {{cite:6efadf9b72979c017f4505efa0efe8631b7ad924}}, {{cite:da22ca728cb932570af3c54579ab24e4512d87b1}} and the posterior bootstrap {{cite:b4530c616629e3f970d3e192a5c5ab47e3d5839e}} emerging as likelihood-free methods, which require only that data can be simulated. Unfortunately, for many statistical models of discrete data, exact simulation {{cite:5f7c0ed04d74ae62b92fb3ece853b30a52062e11}} from the model is impractical.
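A minimal sketch of the exchange algorithm may help here. We use a deliberately trivial toy (an unnormalized Gaussian "model" with observation y = 2 and an N(0,1) prior, both our choices) where exact simulation is easy, purely to show how the auxiliary draw cancels the normalizing constants in the acceptance ratio; for this toy the exact posterior is N(1, 1/2).

```python
import math, random

def log_f(y, theta):
    """Unnormalized log-density of the toy model, f(y|theta) = exp(-(y-theta)^2/2)."""
    return -0.5 * (y - theta) ** 2

def simulate(theta, rng):
    """Exact simulation from the model, y ~ f(.|theta) / Z(theta)."""
    return rng.gauss(theta, 1.0)

def log_prior(theta):
    return -0.5 * theta ** 2                    # N(0, 1) prior

def exchange_step(theta, y, rng, step=1.0):
    prop = theta + rng.gauss(0.0, step)         # symmetric random-walk proposal
    y_aux = simulate(prop, rng)                 # auxiliary draw at the proposed value
    # Z(theta) and Z(prop) cancel in this ratio -- the point of the algorithm
    log_a = (log_prior(prop) + log_f(y, prop) + log_f(y_aux, theta)
             - log_prior(theta) - log_f(y, theta) - log_f(y_aux, prop))
    return prop if math.log(rng.random()) < log_a else theta

rng = random.Random(0)
y_obs, theta, samples = 2.0, 0.0, []
for i in range(20000):
    theta = exchange_step(theta, y_obs, rng)
    if i >= 2000:                               # discard burn-in
        samples.append(theta)
print(sum(samples) / len(samples))              # near the exact posterior mean, 1.0
```

No evaluation of Z(θ) ever occurs; the cost moved to the `simulate` call, which is exactly the requirement that becomes impractical for the discrete models mentioned at the end of the paragraph.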
m
1c6953d20bbb7ac5e7db789d8da9bdb0
In our study, we consider the popular {{formula:6df6b52d-2781-45b9-8a31-68e8fad503f1}} -VAE {{cite:1cc833e18a3ba56d2420831192d91c105c3175bc}} as an unsupervised approach, as well as Ada-GVAE {{cite:e2c04574ed6a9e951bcf4863a0f098897a4b6085}}, Slow-VAE {{cite:a380a233e7a10acbbe2840b43986cdc1c6d20056}} and PCL {{cite:7612bb9773b016daf19ed88567855c678e4443d3}} as weakly supervised disentanglement methods. First, we learn a representation {{formula:9b493a6a-ffaa-463d-970f-5093f8b06eb8}} given only (pairs of) observations (i.e., without access to the FoVs) using an encoder {{formula:ec78e878-2263-424c-87c4-df3943e1a030}} . We then freeze the encoder (and thus the learned representation {{formula:c11f3235-0d65-407a-85c5-243f34d00436}} ) and train a multi-layer perceptron (MLP) {{formula:2f706df0-33fe-4ee5-897a-70a78d600791}} to predict the FoVs {{formula:660c91a3-fbce-48a4-9461-3361bca1ca41}} from {{formula:a03ea73b-f323-4ff0-89a2-0b434b507c81}} in a supervised way. The learned inverse mechanism {{formula:3b98f3f4-1ecc-4204-bbb0-8a5d15cf9946}} in this case is thus given by {{formula:ace78503-162a-4da9-bc94-498c4fefc798}} .
m
8654b4d4753b10fbf0815de1bb9c03db
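The frozen-encoder readout pipeline described above can be illustrated with a minimal numpy sketch. Everything here is a toy stand-in: the data is generated linearly from synthetic factors of variation, the "frozen encoder" is a random linear map, and an ordinary least-squares readout stands in for the supervised MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (assumptions, not from the paper):
n, d_fov, d_x, d_z = 500, 3, 8, 4
V = rng.normal(size=(n, d_fov))        # ground-truth factors of variation (FoVs)
A = rng.normal(size=(d_x, d_fov))
X = V @ A.T                            # observations generated linearly from the FoVs
W_enc = rng.normal(size=(d_z, d_x))    # frozen encoder weights (never trained here)
Z = X @ W_enc.T                        # frozen representation r(x)

# Supervised readout g: predict the FoVs from z (least squares stands in for the MLP).
W_g, *_ = np.linalg.lstsq(Z, V, rcond=None)
V_hat = Z @ W_g
r2 = 1.0 - ((V - V_hat) ** 2).sum() / ((V - V.mean(axis=0)) ** 2).sum()
```

Because the toy generative process is linear and the random representation retains the factors, the readout recovers them almost exactly (`r2` close to 1); with a real VAE encoder the readout quality instead measures how much FoV information the learned representation preserves.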
In the present work, we perform a detailed analysis of the available differential cross-section data from the CLAS Collaboration {{cite:274fc1af89fe89eae8a470257e68d0db3cc16436}} for {{formula:36de33e0-121d-4909-a4e5-a69661a7707d}} within an effective Lagrangian approach at the tree-level approximation. The Feynman diagrams we considered are shown in Fig. REF . For the {{formula:bf018a8e-23a1-400a-8d8e-be769949e25e}} -channel interaction, in addition to the {{formula:268ff47e-0671-4608-9c66-17a50031725f}} exchange, we introduce as few as possible nucleon resonances to describe the data.
r
7e304b7fcd2e6573cb704f743e187ba4
Since there are many visible corruptions in real-world images {{cite:6cbfe2a88d9cd9959aae7ab8b73f6ac1e793eb40}}, {{cite:5483eee40c44f188854e0c6cf04e323e84825a71}}, current state-of-the-art SISR methods often fail to produce convincing SR results, as shown in Figure REF . Most existing SR methods rely on known degradation operators such as bicubic down-sampling with paired LR and HR images in supervised training, while other methods do not follow the image observation (physical) model (refer to Eq. (REF )). Three major problems arise in existing SR methods: (1) deeper/wider networks (with many parameters) must be trained on a huge volume of training data, (2) methods trained on the known bicubic down-sampling degradation do not generalize well to natural image characteristics, and (3) deployment on current generations of smartphone cameras is difficult due to the large number of network parameters and the memory footprint. Therefore, we focus on a robust SISR method that improves the quality of images in such real-world settings.
i
264bd43000922c8c592853f475babd3b
The present optically thick and radiation-pressure-dominant shock models show some of the characteristic features of ultraluminous X-ray sources (ULXs). ULXs are X-ray sources with X-ray luminosity above the Eddington limit for stellar-mass black holes, or below it for intermediate-mass black holes, but not supermassive black holes {{cite:59177afa5decbf980e7ad8707a1e75295f1d9915}}, {{cite:bf93e77ef047a3c16b9705cb74d967b6d520d9ae}}, {{cite:bad275f91f8740a9e763a3c2938dadb04fb0ea9c}}, although the recent discovery of coherent pulsations in ULXs supports neutron stars as hosts {{cite:b5883b8de1cf4b3b1b8f0f8e2618eac89cf87601}}. A super-Eddington accretion model onto a stellar-mass black hole whose luminosity far exceeds the Eddington luminosity was proposed by {{cite:8efb9d7659167fa6447b9a2fdbfcab852ec883aa}}. The key features of super-Eddington accretion are a strong optically thick wind and collimated radiation.
d
f52c6b17efd27b0c663c51887aa8a7c4
One main limitation of this work is that the current analysis merely holds for the homogeneous setting. A future direction is to extend the theoretical result of {{formula:3fdc0225-ce4f-4ac5-a3cf-c7f8ff62d85f}} to the heterogeneous setting that better models various real-world applications, such as federated GANs {{cite:2d43e10faf7ad532adad213ae570ff4994c3d1b9}} and robust federated learning {{cite:9f6cfad465aea79cb0c81a6aab08ca3386e4de5f}}. In addition, extending theoretical results from the stochastic convex-concave setting to the stochastic nonconvex-(non)concave setting is an interesting and challenging research direction.
d
60ff2aec4dc293041af0d47be4acbfee
Currently, the {{formula:72fd708e-423b-416c-a404-f7c3b117cd5b}}  rad phase variations we find are most similar to that of V456 Cyg, where small phase and normal amplitude variations were observed in a subset of g modes in a circular, synchronised binary {{cite:c78564e089264e7aa7c2382a92f0416413914c89}}. Because of this, the authors argue that the modes are tidally perturbed and have contributions from non-axisymmetric components. Despite the similarities, it is not clear whether or not the same mechanism that modifies the g modes in V456 Cyg can modify the p modes in U Gru. Alternatively, due to the moderate rotation of the pulsating A-star, we could be observing the effects of rotation modifying and confining p modes to `island modes' {{cite:84dfaddb23b1bc3484f30a574d1c775cbf260806}}, {{cite:6c8969bad6f17f8e3104ece6d537ea3108674b30}}, {{cite:21692b303abb2a0b700a0b2953b67138b05260e1}}. Moderate to rapid stellar rotation can preferentially trap pulsations to specific latitudes and longitudes, depending on the mode geometry and rotation rate {{cite:db4b34fce2d263fcdab7afc0c58d6fccf5129f5e}}. As such, moderate rotation that is synchronised with the binary orbit could, in theory, produce an observed amplitude modulation that is commensurate with the orbital period. However, there is no prediction for the observed phase behaviour of such rotationally modified modes. It is also worth noting that while U Gru hosts multiple independent pulsation modes, most do not show any amplitude or phase variation with the orbit. This suggests that the mechanism behind the observed amplitude and phase variation in the dominant pulsation mode is not ubiquitous for all modes. Interestingly, other systems such as RS Cha and HD 265435 also host some pulsations that display amplitude and phase modulation and some others that do not {{cite:33506f70eb48a1332a9eab17caa39f9bbdee4562}}, {{cite:1ca6401c725ea4f671f35b21c900ef6939b561ea}}.
d
9bc893955e6ca70c9964fede9cdfc035
The synchronization behavior of the pair of oscillators is classified as the drift state if the peaks of the spectra corresponding to the two oscillators do not coincide and satellite peaks, i.e. peaks in the spectra different from the limit-cycle frequency, are absent. The behavior is classified as the synchronized state if the peaks of the spectra corresponding to the two oscillators coincide and satellite peaks are absent. The behavior is classified as the quasi-periodic state if the peaks of the spectra corresponding to the two oscillators do not coincide and satellite peaks are present. We performed large parameter sweeps and analyzed the resultant spectra to classify the states of oscillation for different model parameters. Parallel computing was used to perform the parameter sweeps over roughly {{formula:81efe33b-8b0b-4b7c-a117-006638780f63}} parameter grids {{cite:16e70f9330396a5e9b7c55a878b04a22cc758be4}}.
m
1537d12e799a5a2b19379df2498551f0
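The three-way classification rule in the passage above can be captured by a small helper. The encoding of the spectra is an assumption made for this sketch: the first entry of each peak list is the oscillator's dominant limit-cycle peak, and any further entries are satellite peaks.

```python
def classify_state(peaks1, peaks2, tol=1e-3):
    """Classify the joint state of two coupled oscillators from the peak
    frequencies of their spectra (assumed convention: dominant peak first,
    remaining entries are satellite peaks)."""
    coincide = abs(peaks1[0] - peaks2[0]) < tol
    satellites = len(peaks1) > 1 or len(peaks2) > 1
    if coincide and not satellites:
        return "synchronized"
    if not coincide and not satellites:
        return "drift"
    if not coincide and satellites:
        return "quasi-periodic"
    return "unclassified"
```

A peak pair that coincides while satellites are present is not covered by the stated rules, so the sketch returns "unclassified" for that case.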
In this paper we propose a general Bayesian framework for the unsupervised analysis of high-dimensional biological data across multiple studies, building upon {{cite:939d08ff28c3beb1bae290115e35e07dfdb8b365}}. We address the unmet need to rigorously model replicable signal across studies, while at the same time capturing study-specific variation. Our approach is not limited by {{formula:8c1db575-7499-44c9-99b5-6626d2455e9a}} and, in addition to replicability, shows considerable promise in modeling sparsity and enhancing interpretability via rotation-like shrinkage. Building on {{cite:4428ca3d4299554fe19f6c384ddd82ba9bc8da8f}} we propose a computationally efficient MCMC algorithm.
d
33b7395e5b9340b936b35337badd4fd0
The optimal cost of a Toffoli gate is six CNOT gates using the standard decomposition-based approach {{cite:b0354357b96847268f256b0fb86470f58db9c842}}. The theoretical lower bound for a Toffoli gate is five two-qubit gates {{cite:086199940e76d6a57518317b8e1be086079dd932}}. Ralph et al. {{cite:6cbefe39369f3dc8482a617af4e5c78e4f56bb79}} first reduced the cost of a Toffoli gate to three CNOT gates by introducing auxiliary Hilbert space. Inspired by traditional all-CNOT-gate synthesis {{cite:6cbefe39369f3dc8482a617af4e5c78e4f56bb79}}, {{cite:823175d0d023fca0c87335fd11208ed3c22b12f9}}, the quantum circuit we designed to implement the Toffoli gate achieves a higher success probability than the protocols in Refs. {{cite:6cbefe39369f3dc8482a617af4e5c78e4f56bb79}}, {{cite:823175d0d023fca0c87335fd11208ed3c22b12f9}} at the same cost in terms of required qubit-qudit gates. The required qubit-qudit entangled gates are all nearest-neighbor in our construction of the three-qubit Toffoli gate. Note that a nearest-neighbor quantum gate, where each qubit interacts only with its nearest neighbors, requires less resource overhead than a long-range one. For example, a long-range CNOT gate acting on the first and third qubits is constructed from four nearest-neighbor CNOT gates {{cite:8019c2f021fff68093ce691f4534b7e68c209dd3}}. In addition, ({{formula:95eef420-c414-411f-8693-0e23d97c64a3}} ) qubit-qudit gates can simulate an {{formula:636925c3-adca-4d6e-9641-8d7b80eded39}} -qubit Toffoli gate in higher-dimensional spaces, which considerably improves on the previous result with {{formula:40558c3f-cf0b-49b5-8fcc-2cda785ded01}} {{cite:9b6afbedc7f308e57118256a6d3ccd8cdbf9713f}}.
d
2b42349bc0129324bbcbdeda1a2d7086
Crowdsourcing aggregates the crowd's wisdom (i.e., workers) to infer the truth label of tasks in the system, which is called truth inference. Effective truth inference, especially given sparse data, requires assessment of workers' reliability. There exist various approaches to infer the truth of tasks {{cite:8e9470e26e97aa2efc93ce9976ffc1c61a7e302e}}, {{cite:5d765052eccfbfd94f50f5bb00e13ef83024b765}}, {{cite:381307da3f8d031cb9e300d1a6a2a38e4c4c967b}}, {{cite:1492bcfd260f15b04f60d2c0c19be5fdfe4e0731}}, {{cite:fbba6877e3ff8bac63e114d9deeed1e0bef2bfc6}}, {{cite:f6ecfe0f54323c49dd78d9a1575306160d28e4d7}}, {{cite:a88c5651e281e207bf3756e5fdaee62c74ef4d9c}}, including direct computing {{cite:8e9470e26e97aa2efc93ce9976ffc1c61a7e302e}}, optimization {{cite:8e9470e26e97aa2efc93ce9976ffc1c61a7e302e}}, {{cite:5d765052eccfbfd94f50f5bb00e13ef83024b765}}, probabilistic graphical model (PGM) {{cite:381307da3f8d031cb9e300d1a6a2a38e4c4c967b}}, {{cite:1492bcfd260f15b04f60d2c0c19be5fdfe4e0731}}, {{cite:fbba6877e3ff8bac63e114d9deeed1e0bef2bfc6}}, and neural network based {{cite:d131e87a765e6795ac7bc8bec5e048bbb113ff12}}. The simplest method is majority voting, which works well if all workers provide answers to all of the tasks. However, it fails when data is sparse and workers may be unreliable, as in many practical settings.
m
44e4e0cc039356d21ba5735c5b82b9c6
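Majority voting and a simple reliability-weighted refinement, as discussed above, can be sketched as follows. The iterative reweighting is a generic EM-flavoured heuristic for illustration, not the specific estimator of any cited work.

```python
from collections import Counter, defaultdict

def majority_vote(answers):
    """answers: iterable of (worker, task, label) triples (possibly sparse)."""
    by_task = defaultdict(list)
    for worker, task, label in answers:
        by_task[task].append(label)
    return {t: Counter(ls).most_common(1)[0][0] for t, ls in by_task.items()}

def weighted_vote(answers, n_rounds=5):
    """Iteratively estimate each worker's reliability against the current
    consensus and re-vote with reliability weights (EM-flavoured sketch)."""
    truth = majority_vote(answers)
    for _ in range(n_rounds):
        correct, total = Counter(), Counter()
        for worker, task, label in answers:
            total[worker] += 1
            correct[worker] += (label == truth[task])
        # Laplace-smoothed accuracy as the worker weight.
        weight = {w: (correct[w] + 1) / (total[w] + 2) for w in total}
        scores = defaultdict(Counter)
        for worker, task, label in answers:
            scores[task][label] += weight[worker]
        truth = {t: c.most_common(1)[0][0] for t, c in scores.items()}
    return truth
```

On dense, mostly reliable answers the two agree; the weighted variant starts to matter when answers are sparse and worker quality varies.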
Comparison with MAE. We empirically and theoretically prove that the masked autoencoder learns generalized features even with imbalanced data, which is quite different from other self-supervised methods such as CL {{cite:555c191f103eb4f21e6ff36e84a8256f4c8c24de}} and SCL {{cite:812f3ec2fb216fc1f49f2cbad32556658922fe7e}}. Extensive experiments on ImageNet-LT/BAL show that the instance number is more crucial than balanced annotation. We further propose the balanced binary cross-entropy loss to build our LiVT and achieve a new SOTA in LTR.
d
9df200fd7917605a4b092bb5ed1ff1ea
We compare the proposed method with several state-of-the-art virtual try-on methods, including VITON {{cite:9a84851d350ced93f583155b92b51ae130a54b93}}, CP-VTON {{cite:d6210749535ae6b2665a7db5db652a228b72dc9b}}, and MG-VTON {{cite:7037a756d9794ea8391de8b99eeb900bf6ecfffd}}. VITON and CP-VTON adopt a coarse-to-fine strategy to tackle the single-pose virtual try-on task, and neither method includes a change of human pose. To make a fair comparison, we first enrich the input of VITON and CP-VTON by adding the target pose. We report quantitative results based on the SSIM and IS metrics (higher scores are better) to evaluate the realism of the synthesized virtual try-on images. As shown in TABLE REF , our method achieves the highest SSIM scores and the highest IS score on the MPV dataset. Besides, our method also obtains the highest IS scores on the DeepFashion dataset. These results verify the effectiveness of the proposed method for generating high-fidelity virtual try-on images. {{table:23e8541e-38e7-4a67-85c0-b10dcf970c69}}{{table:b42e83fa-8213-4b19-bca4-1470497321f1}}
r
222ba2156984755a4650c42daf418a6c
where {{formula:0f489631-4009-4e81-95af-5507d0743a6b}} denotes the iteration; {{formula:379a7743-aa40-41fc-a6ce-9dbdb2055189}} represents some varying penalty parameters which may be preselected or dynamically generated in the iterative process. ALM is composed of two alternative steps: Primal update solves AL problems with given Lagrangian multipliers {{formula:fa29bf87-be1b-4964-a081-e29ebeefc194}} , and Dual update updates Lagrangian multipliers {{formula:0b1d2459-3eef-4e52-8b61-edf095cb5391}} based on the obtained solutions. The dual update formula () can be interpreted as a dual gradient ascent step with stepsize {{formula:e9584056-9e4e-4f95-941b-5948add29e73}} . Specifically, we have the dual of AL problem {{formula:37d9b144-aced-43ce-83f8-9ec397cc8c6f}} and its gradient {{formula:932104c9-387c-4118-bf21-134722ac47e9}} {{cite:de87e3635e79fd30b528550c2c8ba81b75a8e21c}}, thereby a dual ascent step for maximizing {{formula:fd133c29-d801-455e-8f53-e2ba1394c0d7}} reads as {{formula:75c212cb-e9a0-47d9-a59a-e013720fce49}} .
m
c5a74c387a41fc6284b5f8c5e44bce3d
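The two alternating ALM steps can be sketched on a small equality-constrained quadratic program, minimizing 0.5||x||^2 subject to Ax = b — a toy instance chosen here because the primal update has a closed form and the limit is the minimum-norm solution.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 4))
b = rng.normal(size=2)
rho = 10.0                   # fixed penalty parameter (one could also grow it)
lam = np.zeros(2)            # Lagrangian multipliers
x = np.zeros(4)

for _ in range(200):
    # Primal update: minimize 0.5||x||^2 + lam^T (Ax - b) + (rho/2)||Ax - b||^2.
    x = np.linalg.solve(np.eye(4) + rho * A.T @ A, rho * A.T @ b - A.T @ lam)
    # Dual update: gradient ascent on the dual with step size rho.
    lam = lam + rho * (A @ x - b)

# Analytic minimum-norm solution of Ax = b, for comparison.
x_min_norm = A.T @ np.linalg.solve(A @ A.T, b)
```

At convergence the stationarity condition x + A^T lam = 0 together with feasibility Ax = b pins down the minimum-norm solution, so the iterate matches the analytic answer.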
As a final comment, we note that the structure we have presented in this letter naively resembles that of the Krylov complexity {{cite:35b9e1f262a1e643f92c4eee45ab920b38477192}}, {{cite:6adbd981b341fe9613ddafdea752d16b3f7b4dbf}}, {{cite:6974ced3dceee7a1ce889e6bcd2e6cca70981d9f}}, {{cite:99d9739e7de803530d37811b79a7dfeb3f71681e}}. In particular, for the explicit example we have considered, the complexity may be thought of as the average of the position. It would also be interesting to understand this connection better {{cite:79bc33523c0acfa8d48ca7c26b0b8e848fa59d2a}}.
d
7a78770fbbfbd7bd96ea3276eba930f4
Inspired by the efficient GED approximations and the powerful framework provided by the new advances in geometric deep learning, we propose to leverage its effectiveness as a learning framework to enhance graph distance computation. We therefore face a graph metric learning problem. It can be formulated as a contrastive learning problem that finds the contrast between similar and dissimilar objects. A siamese architecture is suitable for this problem; Bromley et al. {{cite:262b3363f76709dc923f8e7568ccfc0968ae3343}} proposed it for signature verification. Siamese networks use the same model and weights on two separate branches in order to learn a representation in which distances can be computed. Later, several approaches extended this idea, with the triplet loss {{cite:a88a3ac167ef0062c400888cdb1ff87f1d5e9d4c}} being one of the most successful methods. Recently, novel approaches have focused on extending this concept to exploit groups of samples instead of pairs or triplets {{cite:5a44a91660f5be9159c315981dc67d9c11811c87}}. Moreover, contrastive learning has been used not only as a metric learning framework but has also attracted attention due to its remarkable improvements on unsupervised learning tasks {{cite:70f0b2456422fcba3156494194d3e421b91f6bca}}.
i
e9ac61258ac69ed7ea08af0de3977fb4
Chebyshev polynomials play an important role in the approximation theory, and, in particular, it is known ({{cite:5069367d6bb470e518d5ffda24dba1634ec4449e}}, Theorem 3.1) that if {{formula:3221b4e8-079a-4a11-a714-342ca0f79f83}} is Lipschitz continuous on {{formula:b90b57e9-14a7-4fb6-9530-43883838e344}} then it has a unique representation as an absolutely and uniformly convergent Chebyshev series {{formula:265d5d9f-3ff0-4c25-b6ff-a4c50d6e60ab}}
r
6ff719065d0f20cf10209d0a75a3c1bf
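The convergence of the Chebyshev series for a Lipschitz function can be checked numerically. Here |x| (Lipschitz on [-1, 1] but not differentiable at 0) is expanded via degree-200 interpolation at Chebyshev-Lobatto nodes using numpy's `chebyshev` module; the degree and test grid are arbitrary choices for this sketch.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = np.abs                                       # Lipschitz, kink at 0
deg = 200
x = np.cos(np.pi * np.arange(deg + 1) / deg)     # Chebyshev-Lobatto nodes
c = C.chebfit(x, f(x), deg)                      # Chebyshev-series coefficients

xx = np.linspace(-1.0, 1.0, 2001)
uniform_err = np.max(np.abs(C.chebval(xx, c) - f(xx)))
```

The uniform error decays only algebraically (roughly like 1/deg) for this merely Lipschitz function, consistent with absolute and uniform, but not spectrally fast, convergence of the series.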
Representation learning approaches are on the rise for explaining system behaviour. Restricted Boltzmann Machines (RBM, {{cite:c2c864f69331b5303bc85ecac20d4b9cfba285ba}}, {{cite:f8d48622bbc7941cfa7da8fde8cb891c627f4a5d}}) are widely used for learning feature representations, with recent works focusing on explainability. {{cite:172ea564e9691e190e90f5486bea621b314aab7d}} proposed an RBM with additional explainability units in the visible layer for recommender systems. They defined a joint distribution over visible and hidden units and a conditional distribution on explainability scores in what they called a conditional RBM. In the diagnosis field, {{cite:714e8b57bb39257b47b5c159760ef0510d272929}} extracted features for motor fault diagnosis using stacked RBMs {{cite:37726c5906f178b7fcac44d833b06b6733cc70e5}}, i.e. Deep Belief Networks. {{cite:7de2a2adbbf82a785c769e19dc7f2585d64e10db}} enhanced RBMs for failure diagnosis with an additional regularisation term that maintains features relevant to the health of a system. In these works, however, RBMs are only used for identifying the most relevant features; explainability and structure within the feature space are not addressed.
d
55166e28dbb14ba82400c7d7c3d10021
For this work, we use MESA version v9793 compiled with GNU Fortran version 7.2.0 installed as part of MESA SDKhttp://www.astro.wisc.edu/~townsend/static.php?ref=mesasdk. All of our MESA calculations implement the same spatial and temporal resolution conditions as adopted by {{cite:da3ac57ca8c920459f457392319b6469e01c3b3f}} for MIST. Below we briefly outline the parameters in MESA/MIST immediately pertinent to this work and we refer the reader to {{cite:1f4d5d2c9fa287760196bb69f84a5add76619a85}}, {{cite:6042bd2e9f9ad09af29e82275f020a8eb81512d1}}, {{cite:1d682e335e6aaade7b23a579529e12d120b10232}} for more detailed information on MESA and {{cite:da3ac57ca8c920459f457392319b6469e01c3b3f}} for more detailed information regarding the physical processes in MIST. For detailed information regarding the impact of our different setups to the MIST models for high mass stars and associated uncertainties in various parameter choices and their effects, we refer the reader to {{cite:ddffe467041d5b9d5bddadb6e2ba14c4c7425697}}.
m
7e21a4d4a97184e93f22ca9591c1f8b5
The literature on defining optimal SG is growing in interest and activity {{cite:60de94bc97b846c595b378ec443568ef59d81c20}}, {{cite:a7719b67fc76e1c6f1c33012370082b2faf5bf33}}, but attempts to establish a theoretically optimal choice are lacking. We provide a method that defines a principled choice of dampening and sharpness, and we show mathematically how that choice depends on the initialization of the other hyper-parameters and on the task and architecture, a sensitivity in line with what we found numerically. We also proposed a theory of optimal weight initialization for SG training that is sensitive to the statistics of the task to learn. All the conditions are stated so as to be applicable to any spiking architecture, but we derived their consequences for the LIF architecture. We used this theory to define the dampening and the sharpness, but conditions III and IV could instead be used to define a theoretical choice of tail-fatness for the {{formula:1d8816a5-c764-47cb-988a-67dae55c3fdd}} -PseudoSpike {{formula:8b521d4a-d2fd-4e65-b69f-cf256f8ac3f1}} shape. We showed how training under the four conditions improved accuracy over the set of hyper-parameters found to be best after grid search, which was Glorot Uniform initialization for the exponential shape. This method can therefore spare the practitioner the highly time-consuming task of grid-searching the optimal choice of hyper-parameters. This is by no means the last step in the direction of optimal surrogate gradient design, and we hope this work brings long-needed clarity.
d
7c2cc6d07fd7d8bb2ab0bfe029222afa
where {{formula:1e8eec0c-3b0b-4d16-8773-bbc490c2eb93}} is the spatial dimensionality, and {{formula:3c2b12a9-a90a-4e16-ab6b-6adca67de7db}} is a critical exponent which can be calculated as {{cite:3d79c4dd9ca9033b9586bb9888698899d75fa0dc}}, {{cite:f3a49e9a4794aa574780e5d600b53f35693934b6}}: {{formula:8650af5d-0c69-468c-adaf-e7775f86e102}}
r
22ef5fdd3cceb84e932f54b72967d145
Our results highlight that while CNNs learn representations in a feature specific manner, largely discounting the characteristic properties of the underlying object, humans try to learn the knowledge of features, building on top of objects {{cite:f2337d75adbbf732007a538ec50c1929cbfe3a99}}, {{cite:779a29c51e2ce941bd4ebfe26a830fecc16957a4}}, {{cite:e702518e58ce5e1dae4025ae4b953e3732b5409c}}. We find that networks are more affected by our segmentation transforms compared to our block transforms, further indicating their disconnect with human-like behaviour. Networks have learned to solve tasks with noise as part of their training procedures to handle controlled adversarial attacks {{cite:c40c72edbc6ea58c92144839d7f84b3f6553975f}}, {{cite:8cc818e823d12179ac667096dfb3ca7a86ad6753}}, but struggle when the control is taken away. We filter humans and machines on the adversarial object recognition task, and are not creating systems that can break captchas {{cite:5c474cc1ab130c74f955f7ffb16333384ffc444b}}, {{cite:d04390d1b05ef5a0abe74fc24564f31fe5675317}}. We believe our work could be a step in that direction.
d
1dde4149a79e829d50430b89162d3878
is nonempty and obviously compact in {{formula:c69c00ec-5806-41f7-a91d-8cb950f47838}} . It was introduced in {{cite:0bddc5e7d5ab2d6286b6bec3297da02b130311cb}} for {{formula:f1f756e9-383a-487d-9912-d234f2f69402}} as the set of “almost-gradients" and then was called in {{cite:505e3566800ffe302f871068ea7b8b468960b1d1}} the {{formula:9dcf263a-f370-4940-bb0a-23363a7d7f1e}} -subdifferential of {{formula:e2ed8c33-8ef7-42a1-98e5-c572aae09908}} at {{formula:aa71b4fd-9da1-4baa-b1b4-2c6ecf306047}} . Clarke's generalized Jacobian {{cite:7b1f3bee17a759ae98e1e0f39ea1baddd4cd1ed2}} of {{formula:08d9da70-ae07-40c3-bfd3-d44001258a80}} at {{formula:4dadbbed-dd4d-4888-a816-1b0f0600dfde}} is defined by the convex hull {{formula:46255ff4-b553-40be-99e8-c0397c82d041}}
m
d5e84b852b29c547a12170bb07cd1526
Consider an RIS with a large number of intelligent reflecting meta-surfaces (IRMs), denoted as {{formula:168c7575-a2ac-4fb1-9569-a845ae1ac944}} , and assume that the phases of the channel from the Tx to the {{formula:4027e518-aaaa-4f6c-bb31-11a7d7d8c6e7}} th ({{formula:3c06c9e7-3a60-473b-a47a-9e46227edeba}} ) IRM of the RIS, {{formula:04c10a8c-b9bb-4913-bb36-99e23fecc53e}} , and of the channel from the {{formula:de923c1f-7896-41d8-970f-170b78509b38}} th IRM of the RIS to the Rx, {{formula:ded34d79-e6c3-41e9-b63f-04156efef732}} , are known. Then, to reflect the signal toward the Rx and maximize the signal-to-noise ratio (SNR), the adjustable phase induced by the {{formula:d26824bb-c85f-4c25-a20d-c62fb11a85d3}} th IRM of the RIS, {{formula:5280a767-a44d-48f7-97f5-acb460edd69b}} , can be set as {{formula:245ba504-b49d-4a3d-a459-3a943a650ec0}} {{cite:1eb20cbf6b0322238ec7c7f1d6ad0bd67164dd10}} in the case where only the cascaded Tx-RIS-Rx channel exists in the tunnels with obstacles. From the right side of Fig. REF , it is observed that the BP is further reduced as {{formula:7b15289f-c81a-43c8-b64c-e9da500c776b}} increases, because the additional IRMs create more possible paths.
r
c5b80814472001c3ed7276891907f44f
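The co-phasing rule above, in which each IRM cancels the phases of its incoming and outgoing channels, can be verified in a few lines. The i.i.d. complex-Gaussian channels and the element count `M` below are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 64                                            # number of IRM elements (assumed)
h = rng.normal(size=M) + 1j * rng.normal(size=M)  # Tx -> m-th IRM channel
g = rng.normal(size=M) + 1j * rng.normal(size=M)  # m-th IRM -> Rx channel

theta = -(np.angle(h) + np.angle(g))              # co-phasing rule per element
h_eff = np.sum(h * np.exp(1j * theta) * g)        # effective cascaded channel
```

With this phase choice every cascaded term h_m e^{j theta_m} g_m reduces to |h_m||g_m|, so the element gains add coherently — which is why increasing the number of IRMs keeps improving the received SNR.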
The condition assumed in the following Theorem is satisfied apart from {{formula:f2b0f42f-a05b-494c-aeb9-7ec6ce92597f}} -predual spaces, by any function algebra on a compact set {{formula:554705b5-7d89-4a77-b867-a9e65a89e17a}} (closed subalgebra that contains constants and separates point of {{formula:eb5a35d2-7285-452f-a958-621ad90ae202}} ) and by the space {{formula:bb8accfd-34dd-4cdc-8f0b-d4cd5382175d}} , of affine continuous functions on {{formula:99b1b546-a6a9-47e3-bdc4-655c2762ec13}} , equipped with the supremum norm, where {{formula:3d54255e-de46-414b-8fd2-88b2a4487b0a}} is a compact convex set such that every point of {{formula:65ce3248-82bd-4189-862f-39c93fc41807}} is a split face of {{formula:8922db0b-b9bd-426a-bc72-d68941c2f358}} . See {{cite:f9f6a827b76260a25f353b4337728cdb6552838f}}, pages 5 and 233 for the details. Also note that in this situation, for {{formula:65b4052c-bc28-4d6e-b720-6749aeabc9c0}} , if {{formula:27f77879-7e1d-49f9-a807-267b522066f5}} , {{formula:aa0808cd-1f38-42d0-8e09-5d594e43a4f4}} . Thus {{formula:367024f1-77de-40c3-a218-39b428902e4a}} is a norm-discrete set.
r
c82e9ed1eb59415bd45febb1cc12e64a
In the second group of integral special functions, the integrands themselves include special functions; the best known and most widely applied are the integral Bessel functions {{cite:752fd28f7a467b2fd1d3b69399868d23b361b5e7}}, {{cite:a1708b655b289813322af4acb19964763f206f77}}, {{cite:3e1fa4f0003467f7a67fb462acc482651808fa35}}, {{cite:8814e84062d4e9051309cf9c06f932dcb753870d}}, {{cite:f0258370a72507fd0beac60b2f2026dca951fc33}}, {{cite:1f33ce2d53aad023e4be1ea10fa35d36be6c71cb}}, {{cite:dc8b73f301019d880cea4ee7f3591a5628d65bf1}} {{formula:9e54295d-919e-4589-97c9-34fd0ba3d44f}}
i
e9091baf864d4747e840d95fffff646d
Theorem REF was proved earlier in {{cite:995f3201b3995a8f819921fceb328e583ae9557c}} under the assumption that {{formula:3cdfa85b-2a96-4384-bf4f-2ed5adef490e}} is a compact Kähler manifold. The proof in {{cite:995f3201b3995a8f819921fceb328e583ae9557c}} crucially used a well-known theorem of Corlette–Simpson {{cite:a77db2f6db070a6e357830752145485245dfcccb}}. However, that theorem is available only in the Kähler context. Here we have been able to avoid using the Corlette–Simpson theorem.
i
ced487e01697e34b254cbdc9f6d11bc9
We also showed that this approach can be used to study time dynamics, of a photon interacting with up to 500 qubits, using a personal computer. We therefore expect the formalism presented in this paper to enable the study of complicated quantum networks such as multi-dimensional waveguide arrays {{cite:9346a33c455abcb47a7d404eb2dca978368be578}}. Furthermore, we expect that applications such as quantum logic {{cite:0a395f3ff303ecb357a4459362c756eecd433886}}, quantum memory {{cite:154c24458473e4785eadf0c96407a06f28c1cd4b}}, quantum photon routing {{cite:f54a5630eba9669061ddc82f5b376dc32ca3dde1}}, {{cite:03baa857c546f64ca77ccecc6b33de74b11ef1c7}}, as well as quantum sensing {{cite:5504a848ebb7a1804c531a279e62777db1edd8f3}} and communication {{cite:798de2e6b31f34899ecdb45af127b5fe57a60ba5}}, {{cite:a24844292863751ff110e8ae33e4f29d9c674f8d}} will also benefit from analysis using the real-space approach for scattering phenomena.
d
d05c3b884aa66c1767e5fa782358cce0
Regularization methods applied during the training phase {{cite:ed65636b83ad24bbd3e0af566c608382d204d397}}, {{cite:bad8fc73692a14153cfdf5019c7a55b37912b535}}, {{cite:c2af9c55a8a8bd19ad0110cf268bdcd7f090d38e}}, {{cite:7543519990cfcd66d2dd93c2e1d39c1b83565c50}}, {{cite:8d34dc1f7d7fcd2698b8899e2aa5603ba12ec000}}: these methods modify the objective, optimization and/or regularization procedure in order to build DNNs that are inherently calibrated. Post-processing methods applied after the training process of the DNN {{cite:40da227fb0466bd553e5d9caee9063e54a9a9262}}, {{cite:f46668dd69c5d679901df57d8c46b422b6e87c42}}: these methods require a held-out calibration data set to adjust the prediction scores for recalibration. They only work under the assumption that the distribution of the held-out validation set matches the distribution on which inference is done; hence, the size of the validation data set can also influence the calibration result. Neural network uncertainty estimation methods: approaches, as presented in Section , that reduce the amount of model uncertainty in a neural network's confidence prediction also lead to a better calibrated predictor. This is because the remaining predicted data uncertainty then better represents the actual uncertainty of the prediction. Such methods are based, for example, on Bayesian methods {{cite:215137db58b01a5b772ec936423fab35850843db}}, {{cite:46a958971ee0edd2edd40133704b87f11d90244b}}, {{cite:be6eb52cdbe4dee055b11fa8e145f390c9ac9ff9}}, {{cite:f8102b04eeda9663970423bcee9fa750a11e5adb}}, {{cite:fb44dc4ad328f8c6cbf7d590e41a83765db7f48b}} or deep ensembles {{cite:d4be62a2180a1f902b0e57057035d4bac9d4dd94}}, {{cite:6aafb44865f67e5a471fa98e0b8f6a23e0f7c64e}}.
m
a5d73c2ae0147cd9a3d242a3c178ed22
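As a concrete instance of the post-processing family above, temperature scaling fits a single scalar T on a held-out set. The sketch below uses a grid search over T (standing in for the usual 1-D optimization) on synthetic logits that are, by construction, overconfident by a factor of 3; all sizes and the grid are arbitrary choices.

```python
import numpy as np

def nll(logits, labels, T):
    """Average negative log-likelihood of labels under softmax(logits / T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.25, 6.0, 231)):
    """Pick the temperature minimising held-out NLL (grid search for brevity)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Synthetic held-out set: a model that is overconfident by a factor of 3.
rng = np.random.default_rng(0)
u = rng.normal(size=(4000, 3))                        # "true" calibrated logits
p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)
labels = np.array([rng.choice(3, p=pi) for pi in p])
T_hat = fit_temperature(3.0 * u, labels)              # observed logits = 3 * u
```

The fitted temperature should land near 3, i.e. dividing the logits by it approximately undoes the overconfidence and restores calibration on held-out data.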
Theorem 3.1 (Theorem 2.15 of {{cite:b925ae6b703a2c1d5ff2bf1cb1a4b58c41f90b25}} and Theorem 3.1 of {{cite:aaa8db6cc0fc95367f477fafd596bc42da6d1cb0}}) Let {{formula:3a56b0cb-edce-4759-bb00-b70afa1775f6}} be a Gelfand triple with a Hilbert space {{formula:8edc128a-3a0a-476e-9623-caf152f9e6fb}} and a reflexive Banach space {{formula:27102e0d-db5e-4c06-b52f-9d9fe5c31956}} such that the embedding is dense. Furthermore, let Y be a Hilbert space and let {{formula:c6647c0b-8425-47cb-b8ae-519e44c001a2}} , {{formula:43f9f5bf-7b4f-4402-bfab-bd466df9cca3}} , {{formula:c2a544c3-6824-49de-87bd-ba25bbf27402}} be linear bounded operators such that {{formula:633c2600-d3d3-40fb-a649-7a20a12c7993}}
m
31501e518d05f894318103bec1a3292f
Hence, it suffices to analyze the convergence of {{formula:dc1e7f31-2c1b-44f8-b57d-1276bc026ba9}} under the null and alternative respectively. Note that {{formula:bd4222ae-25e2-4ed5-ab09-26e597db88c1}} and {{formula:08b6ee05-c627-43a6-986b-dad61bd4771c}} are deterministic matrices. Under the null hypothesis, the matrix {{formula:bc3517ae-a276-4a52-836a-d5cad72115c1}} can be replaced with the limiting matrix {{formula:81cf3731-6bc0-47a0-92d0-9c8c52844d62}} by Lemma REF . Therefore, under the null hypothesis, {{formula:54b98a9f-b77c-4c3c-86a6-d1815b4efe82}} , so {{formula:97573987-7e04-47c0-8e5a-bdad605d7028}} since {{formula:510d1a0e-5da4-48b5-8354-849f3b10c986}} is assumed to be a characteristic kernel. By {{cite:b0a59e7b8564cefbc9c18975265413eebd216e9a}}, as {{formula:3f297d41-4250-41b4-b05b-96cb332d73b0}} and {{formula:6ea2540a-c053-4726-bb4f-3cebe148c734}} , we have that {{formula:4ef994e3-5a40-4e7e-bd30-c3cb057e9fb6}}
r
8a9d7314c6525590f8c52901c6d90eff
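Since the argument above hinges on an MMD with a characteristic kernel vanishing exactly when the two distributions agree, a quick numerical sanity check is easy to run. The Gaussian kernel bandwidth, the shift, and the sample sizes below are arbitrary choices for this sketch.

```python
import numpy as np

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of the squared MMD with the Gaussian kernel
    k(a, b) = exp(-gamma * ||a - b||^2), a characteristic kernel."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
Y = rng.normal(size=(300, 2))          # same distribution as X
Z = rng.normal(size=(300, 2)) + 2.0    # mean-shifted distribution
```

For equal distributions the biased estimate is O(1/n) rather than exactly zero, while a mean shift produces a clearly positive value — mirroring the null versus alternative behaviour of the test statistic.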
Benchmarks. We compare FROB to benchmarks. Having access to large OE sets is not representative of the few-shot OoD detection setting. We compare FROB to GOOD, CEDA, ACET, and OE {{cite:35c67f8f3019b4480dccd678d32369b7ea30a94b}}, {{cite:65fd1034fcd8d6940be3f2cbf3c1c57640e9dda9}}, {{cite:7df418ebee4c55c13f5d76f2d40c24d165cf8e9a}}. We also compare FROB to GEOM, GOAD, DROCC, Hierarchical Transformation-Discriminating Generator (HTD), Support Vector Data Description (SVDD), and Patch SVDD (PaSVDD) in the few-shot setting using One-Class Classification (OCC) {{cite:80d11ba1859be5c5fa678efa4536e3b27ecb7ed7}}. GOOD and {{cite:7df418ebee4c55c13f5d76f2d40c24d165cf8e9a}} use the 80 Million Tiny Images for OE. FROB outperforms baselines in the few-shot OoD detection setting.
r
9e6b672f7a4270cb3b25f324da654dd7
Random graphs {{cite:1a0bd934c41d5aca3b8321bea50369be75ebb62e}}, {{cite:0b60ef15405e1666b3e112028dceadcb34986e50}} provide a framework for studying the spread of information in networks; it has been successfully applied to a wide range of dynamical systems such as social networks, epidemic spread and the internet {{cite:f044fbfb45283c24b1cccec98bc05b044eeb3784}}, {{cite:adda2183de02bd3e7e85dbb1171617a888bb381b}}, {{cite:fecb3d8d8bde4a20c36048f2380d6269a06049b2}}, {{cite:f2c6cf100e25f152194c37bd4470b11b0b5f60b6}}. In many applications, one typically considers a complete graph where any two vertices are connected by a link with a prescribed probability {{formula:36ed71f0-8d7b-469f-a1e8-3aa6f49d49f5}} , independently for each pair of vertices. We will refer to such a link between two vertices as "transmission", which means that some information passed between the two vertices, and one often studies the fraction of the graph nodes to which information originating from some vertex is expected to reach. In the language of epidemic spread, which we will be using, such nodes are called "infected". Above a certain threshold value {{formula:944747c3-f06b-4bea-b14e-ce00c95f9de9}} one has a giant connected component (GCC) or a large out-component of infected nodes in the case of a directed graph.
i
9a128c46e02a893f2191cddc6e518e2f
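The giant connected component mentioned above satisfies, for an Erdős–Rényi graph with mean degree c, the classical self-consistency relation S = 1 - exp(-c S) for the infected fraction S, which can be solved by fixed-point iteration:

```python
import math

def giant_component_fraction(c, iters=500):
    """Fraction S of nodes in the giant component of an Erdos-Renyi graph
    with mean degree c, from the fixed point S = 1 - exp(-c * S)."""
    S = 1.0                       # start from the fully-infected guess
    for _ in range(iters):
        S = 1.0 - math.exp(-c * S)
    return S
```

Below the threshold c = 1 the iteration collapses to S = 0 (no giant component), while above it a non-trivial fraction of nodes is reached, e.g. S is roughly 0.797 for c = 2.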
Prototypical network {{cite:bdef15fa31e448a75c3f9ef46169a1b9c6584aa2}}: This few-shot learning model is simple yet obtains state-of-the-art performance on several natural image benchmarks. The network computes a {{formula:2c62f41b-5b2b-4d29-83b0-33ec55db4405}} -dimensional prototype representation {{formula:d8e00c9a-8102-4bee-b179-5e5f78f4dd3d}} for every class {{formula:d2685a4b-d771-4c8e-8e0e-6ce2ec6b0ff7}} from the representations of the support set {{formula:8c12f9e8-26e5-48f6-af2b-64ae8f4bca0b}} , as shown in Eq. (REF ). It then calculates the distance {{formula:fe242f71-066e-4cb0-80ee-efde9fba1e30}} between the query example {{formula:81f369fb-5cbc-486b-aa2e-6681c91706da}} and the prototype representation {{formula:716dce16-e35c-441c-b9c9-a578db9479f8}} of each class {{formula:cb5748cb-3193-4a37-946a-d971c31914cd}} (see Eq. REF ) and assigns probabilities as shown in Eq. (REF ). {{formula:0d66a605-ff4e-4469-a847-f687714e4c43}} {{formula:1fcdbf03-908d-48d6-a1a2-8a4110ce65fd}} {{formula:72e879c9-0931-4003-8831-e2f8838cebc7}} {{formula:a20dd9b2-30ed-412a-8745-4f0bd93187d4}}
m
ea9b7351297111f34338d79618354aaf
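The prototype computation and distance-softmax classification described above can be sketched directly in numpy; the embeddings are given here as raw 2-D points, standing in for the output of a learned encoder, and squared Euclidean distance is the assumed metric.

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Class prototype = mean of the support embeddings of that class."""
    return np.stack([support_x[support_y == k].mean(axis=0)
                     for k in range(n_classes)])

def predict_proba(query_x, protos):
    """Softmax over negative squared Euclidean distances to the prototypes."""
    d2 = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -d2
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)
```

A query is thus assigned to the class whose prototype it is closest to, with soft probabilities rather than a hard nearest-prototype decision.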
The colatitudinal angle {{formula:139daecf-d46e-4849-9110-a9cd8d36bb5e}} and longitude {{formula:1048052d-8412-4cc7-b593-a62f28d335e6}} are introduced by {{cite:16560849c78884cbde7e0bd83298af0f5edc20b5}} to express the eccentricity vector of test particle's orbit by {{formula:8c685048-33e3-4382-a260-29b141fbed53}}
d
9e0c6a2d7942d10ffa899755c28ef367