Motivated by the drawbacks of {{cite:480937d75fb71609ee03bc5521db3387913bb68f}} and the VAE branch in {{cite:52d449d7c571c811b25e2d23f7469130db221308}}, we propose a framework that not only utilizes region-based optimization but, we hypothesize, also prevents or reduces overfitting.
m
c18a4e6d32e502bbdc1226270cd396a3
We propose the InDuctive Collaborative Filtering (IDCF) model. Our high-level methodology stems from a key observation: there exist one or more latent relational graphs among users that represent preference proximity and behavioral interactions. For instance, social networks and following networks in social media can be seen as realizations of such relational graphs, but in most cases the graph structures are unobserved and implicitly affect users' behaviors. If we can identify the graph structures, we can leverage the idea of message passing {{cite:920dc9a3227da5c6df24a342b5489fd3a6c90894}}, {{cite:494b3a941ffc8f92609d67c254e9fa9524a83c82}}, {{cite:205deb05b22d1cf973ba4e50ce2947d0f8b64c67}}, {{cite:c75a55cd80a33c57f2e61796a1aa9e373c2be599}}, propagating learned embeddings from one group of users to others, in particular in an inductive manner.
m
0846021820bd4d73d2b2ac5805a00688
Stability on Nonsymmetric Semantic Relations. To test whether the responses arise from associative fortuity {{cite:bcf77a7c4e4602a8e02c06360db8022368be8768}} in the data, we tested Davinci's sensitivity to argument reversal for nonsymmetric relations (e.g., Original: Does a missing wheel stop a car from running? Modified: Does a car stop a missing wheel from running?) and observed whether the reversal changed Davinci's response. Results on 9 semantic relations with nonsymmetric items (i.e., Cause and effect, Order of intensity, Order of size, Has part, Default inheritance, Precondition, Contained in, Has member, Troponym) indicate that Davinci's response is stable for 7 relations but exhibits potential signs of associative behavior for Cause and effect and Troponym, failing (e.g., replying yes to both of the examples provided above) in 70% of the cases. It could be argued that this behavior might be improved by optimizing the wording of the questions to better elicit knowledge from GPT-3, or by a more sophisticated prompting approach, e.g., Chain-of-Thought {{cite:b31c0da0096605e3c3c1dcaded64c9d74898477c}}. Combining NNLMs with symbolic {{cite:a38894f94ee3307050c12cc5a797686d06aedbca}} or physical-world {{cite:09863c3cc7f229ed151249b7bec9680325f6c924}} grounding has been shown to be a promising direction for future exploration.
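The argument-reversal probe above can be sketched as follows. This is our own minimal illustration: `ask_model` is a hypothetical stand-in for a query to the language model (e.g., Davinci), and the question template is an illustrative example rather than the exact prompt used.

```python
# Hypothetical sketch of the argument-reversal probe for nonsymmetric
# relations. `ask_model` stands in for a language-model query and is
# assumed to return "yes" or "no".
TEMPLATE = "Does {a} stop {b} from running?"

def reversal_pair(a, b):
    """Build the original question and its argument-reversed counterpart."""
    return TEMPLATE.format(a=a, b=b), TEMPLATE.format(a=b, b=a)

def is_stable(ask_model, a, b):
    """A response is 'stable' if the model does not answer yes to both the
    original and the reversed question of a nonsymmetric relation."""
    original, reversed_q = reversal_pair(a, b)
    return not (ask_model(original) == "yes" and ask_model(reversed_q) == "yes")
```

A model that answers "yes" to both directions of a nonsymmetric item would be flagged as potentially associative under this probe.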
d
fdc0e3657345aae09b9fc792d8289777
Interestingly, the BP decoder does not exhibit the same behavior at low error rates. Moreover, the non-increasing behavior of the derivative of the BP decoder's performance curve is surprising at first sight. A possible explanation comes from the use of the physical error rate as an input to the BP decoder: it is therefore possible that a given error is successfully corrected at low error rates but leads to a failure at a higher error rate. The fact that the BP decoder performs poorly on low-weight errors that cover part of a check was already observed in {{cite:7c997bf2be16a18246f9e86f5e3d6b7ded1b2fb6}} with a different variant of the BP decoder.
r
7e0ff167ff358ec5127f359257acdd46
Although extensivity is the backbone of the Gibbs statistics, there are various arguments in favor of non-extensivity, especially in relativistic systems and those that involve long-range interactions {{cite:47ce0d970ec70343b2bf59a92273c8e8af9866fe}}, {{cite:74ffdc4ddfef66b7bbe363a114735870d1c1a5ab}}, {{cite:b45406be4b17d7972c25f51ad7c31f0febfd7fed}}, {{cite:003eb87638210f7bba32c0f2a1a9f0f1633e6c7e}}, {{cite:e5b4ea500313c01bf479ce72813143f8a18f5f4c}}, {{cite:bb97137d826350f7eb3d2de78a9bb411f8b87e26}}. The Tsallis and Kaniadakis ({{formula:a2ffe276-a448-40b3-a9e5-095e70deaa73}}) statistics are two of the most famous and widely used generalized statistics {{cite:47ce0d970ec70343b2bf59a92273c8e8af9866fe}}, {{cite:b45406be4b17d7972c25f51ad7c31f0febfd7fed}}, {{cite:003eb87638210f7bba32c0f2a1a9f0f1633e6c7e}}, {{cite:e5b4ea500313c01bf479ce72813143f8a18f5f4c}}, {{cite:bb97137d826350f7eb3d2de78a9bb411f8b87e26}}, which propose generalized versions of the equipartition theorem {{cite:87f2e9b24c55596ea64a9dfe3b23904a8b16d4b6}}, {{cite:bb97137d826350f7eb3d2de78a9bb411f8b87e26}}, {{cite:cc8c16c0f51f915ce41228550118da0820713a9f}}.
Motivated by various reasons such as the long-range nature of gravity, and the probable relationship between quantum aspects of gravity and non-extensivity {{cite:cc8c16c0f51f915ce41228550118da0820713a9f}}, {{cite:bf0d7a75fbb2c59fbfa27a0592bc65563d41a441}}, {{cite:4891364bbd50b13ca0f62678650acf129583b0f6}}, these statistics have been employed, leading to notable outcomes in ({{formula:4379349f-63a6-496f-af27-93b61e52ca07}}) describing dark energy {{cite:cc8c16c0f51f915ce41228550118da0820713a9f}}, {{cite:47ab16fb44ea44be7eadc1eebc25eaff4210e854}}, MOND theory {{cite:ca97998a48669609bd4bac6e1469822ea1f6eb72}}, ({{formula:6774146c-2bff-4bc8-9c68-1b3f44f05eb3}}) studying the Jeans instability {{cite:24a54af69f1102ad7f385c58be58c6bcff83a928}}, {{cite:57c846a49440d3340dd1a8a61f37c5c131f46ce4}}, {{cite:1d9156dde9e835d8d02cc54fc6c64deb79b60da0}}, {{cite:514a57e8ed4cb127fd8bacf07cce86190e97d2ce}}, and also ({{formula:4be6342f-54c7-4fd0-af3f-e65a20e23356}}) stellar sciences {{cite:a11e9d0eb53f5756b68d3072eef8b1529d5fddff}}, {{cite:8c97a6282a29ed956cf5170ef46cbe9ebb50bdce}}, {{cite:d172569123764dd838bd8aa649ebb66eeb05c68f}}, {{cite:f73e0086a79ffaff2b54b6ce6879b6f28119b8c9}}, {{cite:db28f306967a9938129cb8cb2851b858c4dfabe5}}.
i
f43ee465bf7718893acdb0b9695b91ea
where the matrix {{formula:148a4237-da58-4dd0-a7f4-1ce5f4c0c633}} is assumed to have at most {{formula:b262ff4e-0730-493f-9383-e17937fe5773}} nonzero entries in any column and in any row. A popular formulation of the problem (see {{cite:c7afb12e6842d4169c6a0ff533c802edc535c6af}}, {{cite:32648195708389515e94265fab2d6472a7a701da}}) takes the form {{formula:c9b5ecd5-b942-4dc6-a390-963ad5b7c203}}
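The sparsity assumption on the matrix can be made concrete with a short check. This is a sketch of our own (the bound name `s` and the function are illustrative, not part of the cited formulation): it verifies that no row and no column carries more than `s` nonzero entries.

```python
import numpy as np

def satisfies_sparsity_bound(A, s):
    """Check that matrix A has at most s nonzero entries in every row
    and in every column, as assumed in the problem formulation."""
    nz = (A != 0)
    return bool(nz.sum(axis=1).max() <= s and nz.sum(axis=0).max() <= s)
```

Such a check is useful when generating synthetic instances of the problem, since the per-row/per-column bound, rather than the total number of nonzeros, is what the formulation constrains.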
r
3ed8eee75f6f8629cba31ddb5eec18a9
In this paper, we show that {{formula:098368aa-3cd4-420f-b767-64195f27b299}} non-Clifford resources are both necessary and sufficient to simulate quantum chaos. To this end, we explicitly compute the 8-point OTOC and the fluctuations of the purity in a subsystem, and show that a doped Clifford circuit attains the Haar values for these quantities if and only if {{formula:74fa12b6-6598-438a-86be-28dc3c05b2d1}} non-Clifford resources are used. In other words, one needs more than a homeopathic dose of non-Clifford gates to simulate quantum chaos. Can a classical computer simulate quantum chaos? In order to simulate a Clifford circuit with {{formula:c1b42107-18d5-4a6f-a304-387cfc8d36ea}} non-Clifford resources, an exponential number of classical resources is needed {{cite:36313c41f377a87291fa853106906017faa31930}}. Complexity-theoretic arguments {{cite:3377879247c3067787fdb57d6fc40ddc43f0de28}}, {{cite:731dc929c7f494ece399a5001bdec256680dbe10}} imply that one cannot efficiently simulate on a classical computer a quantum Clifford circuit doped with {{formula:32b490b7-d013-4c71-bf81-291b626a566f}} non-Clifford gates; therefore, since such doping is necessary to simulate quantum chaos, the latter cannot be efficiently simulated on a classical computer: quantum chaos is quantum.
i
8b9860aca95d62bf3ef70808170b3500
The model that we derive also includes the Page-Wootters mechanism for non-interacting clocks in Refs. {{cite:4bae5e194fa6cbbb78a4a83321fab8851e04abfe}}, {{cite:f9edad70057ef869522f770fbd56038651559e3e}}, {{cite:8ca2017fa3d50740d9332179c1604c536d87ea39}} as a particular case, when the external degrees of freedom are neglected.
d
68e5bc1ee36e609c641e2f404d4b4fbd
We collect a number of inequalities which we will frequently apply throughout the rest of the article. Some of these can be found in {{cite:fd5ea06dacdec55c1861be751065c06733cd0c41}}, {{cite:9d4935c80fc4e5dc2ff372e6a09b62482629e421}}, {{cite:da0c45bc4cfd9fe77e55cd9ec4943215a2b4c174}}, {{cite:1fa51f6dc11b055a0a5ee32a60bbf4f35947f8d6}}; for the sake of completeness, we give proofs for all of them.
r
3b47eaefccceb4df77064cf62663f680
Here, {{formula:0f60fe5e-1ccb-497f-8bb6-3301f4c74854}} is the momentum of the residual nucleon. In this way, the {{formula:f800da95-d10e-4967-bb6e-7d2651991f2e}} spectrum can be decomposed into {{formula:370a254b-e5b3-4557-90b9-643fd0a6000c}} and the response function {{formula:86160558-9df3-4332-b784-7e3d5a645673}}. Using the {{formula:ec5c04b4-98fa-4b96-9b90-04e88f11850c}} scattering amplitudes based on a partial wave analysis {{cite:d210661628d2c9823f92182df62b62ada703b87b}} and the deuteron wave function {{formula:7c741446-ca6d-4783-97fc-1252ee465424}} {{cite:a13fb149aae6f2bde8f80872edf68875e88ef897}}, we evaluate {{formula:450086b8-e158-4245-abce-b0425a478880}} as a function of the {{formula:1120d67f-a52c-43f8-ac0d-4b128c96b410}} mass {{formula:7fa3ec3d-6e8d-4c44-9e5e-4331c2b31faa}}, as shown by the dashed line in Fig. REF (b). Here, we took 3 degrees as a typical scattering angle of the knocked-out nucleon in the laboratory frame. The line shapes of the {{formula:c46a0cc2-2c5c-40ee-813a-311682e896b6}} mass spectra above the {{formula:f1f41148-dad4-4185-9e03-c21af72ce442}} mass threshold are characterized by {{formula:87c3efc0-c12e-4044-baf4-b7d05d1fb2b6}}, whose distribution reflects the Fermi motion of a nucleon in the deuteron. For the {{formula:66c24275-4c95-47cc-8286-b08bdc46d2ea}}-wave {{formula:f118dc97-3551-4601-94b8-5960f2e4fa53}}, we consider the {{formula:51f6f58b-7d15-4470-a18b-77301c9935a8}}-{{formula:c850c5b4-64c0-4070-af7b-aa65ab24e666}} coupled-channel {{formula:f1301010-404a-4afe-b78f-3601fb19d0b5}} matrix. The diagonal and off-diagonal matrix elements can be parametrized similarly to the case in Ref. {{cite:aee1c3bdcde0cbaf6f4aa20a57e88447620a2db4}} as {{formula:ce12f994-eb8e-4bee-be70-ebc99f00b20c}}
d
9900cfe6d0f85562f2629eade809f63f
On the more phenomenological side, the inclusion of spin and collisions (collision functions in the context of classical transport theory have been studied, e.g., in {{cite:5b932713186c3485d440d3eee662bf3fc0da3c7a}}, {{cite:0fac46f189433176ed65a42f823a89f70b5187ed}}, {{cite:b55b0e37cff6a9e968b917673bd1d8fbe10b646e}}) would be relevant for a more complete description of a non-abelian plasma from kinetic theory. Vlasov equations with spin can be obtained from first principles using Wigner functions, and thus it would be interesting to describe currents with spin. See, e.g., {{cite:e04d89da546bf1958b5b546d6e30e04a886b59d5}}, {{cite:59fd0a1112b524dc312a88bcb009edc9206f9719}} for a recent application of Wigner functions in the case with collisions. The KMOC formalism has been applied to the case with spin in Ref. {{cite:f9467d267c68ef6c09bd5e258f37936ef41f6b87}}. In principle, the spin case would require considering classical limits of spinor wavefunctions. Collision functions in kinetic theory can be obtained from Wigner functions, which can be computed using scattering amplitudes {{cite:2a3313f71270e5f8f9390fee68ddf0829264860a}}. Perhaps one could study the inclusion of collisions by considering classical limits of scattering amplitudes within Wigner functions.
d
447c3e3e9fbb7801dc04e958f048e625
In this section, we first revisit the standard process for training BNNs, and then introduce a novel module, “Elastic-Link” (EL), to reduce the information loss in BNNs. Lastly, we demonstrate the EL module on Bottleneck ResNet {{cite:b610170c501930e633ed058210a481007f54f568}} and MobileNet {{cite:14a0991c1ba0af9f79aab4c4cc19d371e0ffbbb4}}.
m
14023ee1a98779e7d63d2483b5249686
A semi-Markov game (SMG) is a generalisation of a stochastic game ({{cite:f6814bcca216b62b91f295536319e70f6e31f049}}), where the sojourn time depends not only on the present state and the actions chosen but also on the state at the next decision epoch. The theory of semi-Markov games finds applications in dynamic overlapping generations models ({{cite:536212880aed22ee4c55c6dee24abfe2a84d9938}}) and dynamic oligopoly models ({{cite:e8278065cf409a9477c2e23080557176a0ed3cee}}). We study two-person non-cooperative semi-Markov games under the limiting ratio average pay-off criterion. For other applications, refer to {{cite:f3a00623d9cb0ba1cf50c29f52644b82f23554a6}}, {{cite:c47fc2aea692ef38c25f26162dcd06bab40ebd8a}} and {{cite:549e6ffd98488a3172327fce3c4c383b0d10bdc6}}. Lal and Sinha ({{cite:7c6ee30cbe686eecd3820ca22fa4776f438c50fd}}) studied two-person zero-sum semi-Markov games and established the existence of a value and stationary optimal strategies for the discounted pay-off and limiting average pay-off criteria under various ergodicity conditions. Single-player semi-Markov games are called semi-Markov decision processes (SMDPs), which were introduced by Jewell ({{cite:13d688df35134a7ab07be6251279bfe58e4028fa}}) and Howard ({{cite:3f58f156c85054bb236472041aaf357f61c6fe2a}}). A perfect information semi-Markov game (PISMG) is a natural extension of perfect information stochastic games (PISGs) ({{cite:a69aced8a33b56f78b8919e8fc3180dfb02cbf47}}, {{cite:11f891adf4e4b49db2d6d7c183878fdf4e75936e}}, {{cite:0219339a4280e1ede3d1b0260b788215e4825ef8}}), where at each state at most one player has more than one action available to him (i.e., all but one are dummies). Various recurrence-like conditions have been assumed to prove the existence of Nash equilibrium strategies in the literature on SMGs under limiting ratio average pay-offs.
For example, ({{cite:d0f4daed472944de9b670e9490927cbe827a427c}}, {{cite:7c6ee30cbe686eecd3820ca22fa4776f438c50fd}}, {{cite:1c818fbc50af16bb5fd956ffde1768c1e369201b}}) considered ergodicity conditions, whereas ({{cite:4086b9b90f8199479ec0cf63e837bd61442ebe2e}}, {{cite:5465187f85f340e0fab267eadaab505193467432}}) used variants of Lyapunov-like conditions which yield the so-called weighted geometric ergodicity property. In this paper, we establish the existence of pure semi-stationary Nash equilibria in non-cooperative perfect information semi-Markov games under the limiting ratio average pay-off criterion. We prove this result by using an existing result of Sinha and Mondal (2017) {{cite:e53fbe181a254caa5a9ea1d4b9eea8fda07934da}} and a strategic equivalence between an NCPISMG and an undiscounted (limiting ratio average) SMDP.
i
a3d3fbf2e7f5582b74ebda0695e3e364
First, from a mathematical perspective, it may be desirable to establish formal justifications for the distribution of the number of clusters within an excursion set and the distribution of the volume of clusters within an excursion set. As noted by {{cite:3f4f1ea964793cb21013f0556e53a1e0fcbdec8f}} and discussed in sec:probabilisticproperties, the currently used parametric forms appear to lack formal proofs. Similarly, it may be a worthwhile endeavour to model the conditional probabilistic structure of the number of clusters and their associated volumes more explicitly. At present, it is assumed that the expected number of clusters and the expected volume of a cluster are independent (cf. eq. (REF )). This appears to be a rather strong assumption, given the finite size of the search space. Finally, following {{cite:28d502bde29da4b19f6e002f797fe1d53cb0b817}}, it would be desirable to formally delineate some additional qualitative properties of the FWHM estimators implemented in SPM. The currently available reference in this regard, {{cite:db344c59583da60fac642d35ae3331e774e4fe2c}}, appears to be somewhat outdated and superseded by the de facto estimation of the FWHMs based on standardized, rather than normalized, residuals. Please note that, given the approximate nature of the entire RFP framework and its repeated empirical validation, these suggestions do not imply that RFP as it stands is fundamentally flawed, but rather that its foundations could be elaborated to further ground the approach mathematically.
d
3f3c4db7a67e42336cd1fa9f401856f4
In this section, we prove the main theorems (Theorems REF and REF ) with the help of computer calculations, which were carried out with the computer algebra system Magma V2.26-10 {{cite:71773caca61a0d465d33e8bbd0bf7abeb297b9a4}}, {{cite:fcfb9faac1390a4085ab635193f41b9ca1b2480d}} on a computer running macOS Monterey 12.0.1 with a 2.6 GHz 6-core Intel Core i7 CPU and 16 GB of memory. The source code and log files are available at {{cite:fcb98f655b9288fd26183f484f573836f1489a1e}}.
r
cc32a6dea0b447008c1afcd0c4cc259b
Creating realistic videos from scratch (i.e., unconditional video generation (UVG)) requires the model to learn the distribution of the video data. In other words, we are essentially learning a density function from which we can sample unseen videos. This is typically achieved with an adversarial training framework, building on advances in deep generative models, where GAN-based approaches have shown impressive performance in static image generation {{cite:0fbab0ba08c4c160d5007cbbec4679335ca69160}}, {{cite:35a6d2d9932bd19de86549207cc34da10a44e91f}}, {{cite:ca1e636f5df2c866b38c0076181c7fa3f19967bb}}, {{cite:d681fc8ba6079fce72a5e6369f72e1aedc318ccd}}. Video generation, however, differs from image generation in that we must consider the temporal aspect in addition to the spatial one.
i
70e019815cbdbd090ebf33c2792f84eb
The segmentation model, applied after the style-transfer stage, is the DeepLab v3 framework {{cite:47d1120003c9b13e63ea818f921de51f08f57428}} with a modified number of input channels, in this case equal to the 6 bands of imagery. We use both the original Landsat 8 samples and their stylized versions as a mixed dataset, since this showed higher performance than using only stylized samples; the corresponding labels are duplicated for the original and stylized samples. After the segmentation model is trained, validation is performed on the entire Sentinel-2 (target) dataset. The overall method is depicted in Figure REF . {{figure:8741e442-f173-49eb-83fa-bc67f3eca502}}
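The mixed-dataset construction described above can be sketched as follows. This is our own illustration (the function name and list-based representation are assumptions, not the authors' implementation): each original sample and its stylized counterpart share the same segmentation mask.

```python
def build_mixed_dataset(originals, stylized, labels):
    """Combine original Landsat 8 samples with their stylized versions;
    each label (segmentation mask) is duplicated so that an original
    sample and its stylized counterpart share the same target."""
    assert len(originals) == len(stylized) == len(labels)
    images = list(originals) + list(stylized)
    targets = list(labels) + list(labels)  # duplicated labels
    return images, targets
```

Training then proceeds on `(images, targets)` exactly as for a single-source dataset, which is what makes the mixing strategy straightforward to drop into an existing pipeline.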
m
9d35f74ec9b609e908b5127505bfb9a8
A more detailed investigation of these hypothetical mechanisms is yet to be undertaken and will require substantial modeling efforts. We believe, however, that the scenario we describe in this manuscript can shed light on how different characteristics of SBs (generation, propagation) come together harmoniously, and it also provides new insights into the mechanisms related to plasma transport between closed and open field regions of the solar corona. We look forward to Solar Orbiter {{cite:277692f2b03ed92d6b02cc0e8cdd250129cf1976}} and to its combined in-situ and remote sensing campaigns, which will be made from increasingly higher latitudes and at different phases of the solar cycle, hence providing a unique opportunity to detect wind flows that are formed and accelerated in a larger range of coronal contexts. Parker Solar Probe will keep reducing its perihelion distance, and will certainly provide a closer look into the effects of solar wind shear and rotation on increasingly more pristine wind flows.
d
1bec253123b7ed7a572fb51ee47120f6
We refer the reader to {{cite:5887112f81ac3bd897493ccf6fd64f5ca4e066dd}} for a detailed conceptual account, as the amount of literature on outliers is vast. For an inclusive definition of an outlier, we start with a general one given by Grubbs in 1969: “an outlying observation, or `outlier', may be merely an extreme manifestation of the random variability inherent in the data. ... On the other hand, an outlying observation may be the result of gross deviation from prescribed experimental procedure or an error in calculating or recording the numerical value" {{cite:081da77182eea8bea31edc5811f2ee9922050abd}}. Hawkins in 1980 defined the concept as follows: “[a]n outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism" {{cite:528b02e1d7ed72b58c8daef36d580831ee9274df}}.
i
5d38ce031c83cf30f1d675e109687de3
We compare the amplitude of the oscillations that we predict using our asymptotic analysis to those from numerical simulations. We employ a symplectic integrator in the form of a velocity Verlet algorithm {{cite:047fe1d3710b538e3e7cc9bc19b1f34c9fb47655}}, {{cite:97452b7e94dc3698c13eefc5efc18cde0f80c43c}}, which is convenient for studying chains of particles; it has also been used to study molecular dynamics. This approach is designed to conserve the energy of a system, which is not true of many common numerical methods, such as Runge–Kutta algorithms. The velocity Verlet algorithm uses the discretization {{formula:266f38f5-8220-4db3-9140-d95193046eac}}
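A minimal sketch of the velocity Verlet scheme described above (our own illustration, not the authors' code): positions are advanced with the current acceleration, and velocities with the average of the old and new accelerations.

```python
import numpy as np

def velocity_verlet(x0, v0, accel, dt, steps):
    """Integrate x'' = accel(x) with the velocity Verlet scheme:
    x_{n+1} = x_n + v_n*dt + 0.5*a_n*dt^2,
    v_{n+1} = v_n + 0.5*(a_n + a_{n+1})*dt."""
    x, v = float(x0), float(v0)
    a = accel(x)
    trajectory = [x]
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt ** 2   # position update
        a_new = accel(x)
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged acceleration
        a = a_new
        trajectory.append(x)
    return np.array(trajectory), v
```

For a harmonic oscillator, `accel = lambda x: -x`, the total energy 0.5*v**2 + 0.5*x**2 remains close to its initial value over many periods, illustrating the (near-)conservation property that motivates using this scheme instead of, e.g., a Runge–Kutta method.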
r
8bc2d63931ac186c298ad47089365fd6
In practice, inspired by {{cite:a19336faf207886c3ff4d218b8d090d923d0bfbb}}, {{cite:a54d915148e29adadd769b6b398931c74de489aa}}, we employ data augmentation to obtain an estimate {{formula:d81e3642-9e51-4da6-be16-7e2947871c51}} of the true {{formula:b2158768-2b00-4f1c-ac72-91e8f5120125}}. Specifically, for each image {{formula:20226158-af8d-41f8-8a8e-ac842a5351fb}}, we perform repeated augmentations to obtain {{formula:c60bb099-4c65-4159-bc97-f87713cb965b}}, where {{formula:c1f5eb8f-bf0a-4715-8c6a-ca11fe2789b2}} is the index of augmented samples and the {{formula:3c0d6965-e875-4e94-906f-12ed152dcb29}}'s are photometric augmentations parameterized by the {{formula:d06265df-0464-43d1-b1f1-e8b08fcf1675}}'s. To ensure fairness, the {{formula:cf334a4d-8a8e-4560-85a8-86e1a16868b0}}'s are configured to be the same as the photometric augmentations used for training {{formula:21af4262-f77d-4b4a-9b37-42bf9af5f372}}, and they do not incorporate the corruptions (artifacts) in the testing data. Then, the propagated uncertainty in the logits, in the form of the mean {{formula:732b531b-1b09-45cc-bfe0-13f34f859eda}} and variance {{formula:aa8989bc-5d4c-4bf4-966e-326f816d9ced}}, can be computed by sending {{formula:e299e62e-ebb1-4b23-8a9e-f6047ddc2d4e}} to the segmentation network {{formula:253386a3-d978-48a8-96c3-e8fb91a763d3}}. For simplicity, when computing {{formula:10fa56c8-c38b-4c4e-aa65-c0078d3ad642}}, each pixel is assumed to be independent.
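A minimal sketch of this test-time-augmentation estimate (our own illustration; the augmentation family, the number of augmentations, and the network are stand-ins, not the paper's configuration):

```python
import numpy as np

def tta_logit_stats(segmenter, image, n_aug=8, rng=None):
    """Estimate the propagated uncertainty in the logits by running the
    segmentation network on repeated photometric augmentations of the
    input and collecting the per-pixel mean and variance (pixels are
    treated as independent)."""
    rng = rng or np.random.default_rng(0)
    logits = []
    for _ in range(n_aug):
        gain = rng.uniform(0.9, 1.1)       # hypothetical brightness jitter
        bias = rng.uniform(-0.05, 0.05)    # hypothetical offset jitter
        logits.append(segmenter(gain * image + bias))
    logits = np.stack(logits)              # shape: (n_aug, H, W, C)
    return logits.mean(axis=0), logits.var(axis=0)
```

The key design point carried over from the text is that the augmentations mimic those used during training while excluding the test-time corruptions, so the variance reflects sensitivity to benign photometric perturbations only.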
m
5295f163afe48a25bbfc58d908738f64
We first present in Fig. REF the possibility to address the {{formula:a1ad80c5-08d4-4307-a787-effa26712103}} anomaly with the dark U(1) gauge boson and the {{formula:02ba07db-80c3-4ab7-b05f-8c1326cf56f1}} complex scalar model, where the {{formula:39bcb36f-a65a-431d-838f-f500e3e0b87d}} constraints exclude much of the parameter space at light gauge or scalar masses; see the pink regions. Generally, the {{formula:435ee1ef-3640-4f1f-9b9d-63bb28c0102a}} bound can be avoided when the dark gauge boson (or {{formula:2735ec24-a372-48ca-b65d-3d067fa04f0e}} scalar) couples to the {{formula:f30a7a97-a15b-4ecb-b827-4f0e07b0c622}} and {{formula:872c503d-626d-4e85-b711-cd32ffe2e563}} with a ratio {{cite:a6a4e20350e47e119217db8fe30c7aa06d2e7cb0}}: {{formula:5433de3b-a368-4114-8c47-1a78c2be8ee9}} (or {{formula:9b5bde0b-ffcb-41fe-b97b-cd0954616398}}). NA64 would exclude more of the parameter space when the invisible decay of the dark gauge boson is taken into account {{cite:a584d58ebbe77eb1a867a2da214bd8a7fad5068f}}, {{cite:148707828fcf9e191e15d71b50cf92209d577f93}}. The GWs from the FOPT for different benchmarks of the dark U(1) and {{formula:127f4f80-a331-4257-9784-d6e12ad11216}} scalar models are marked in the two plots. {{table:bf4112b7-3637-467a-9ff4-8e8df0ffd1a0}}{{figure:7d665aa0-b3f5-44f0-b445-bd3be838d56f}}{{figure:5a501296-1084-4226-93ad-e103d12a25ce}}{{table:015ef65c-74a6-4eeb-abf2-c65192744680}}
r
c9f0100bba56eef17fb56be670caec5a
Artificial Intelligence has received an enormous number of contributions in the last decade. In particular, deep neural networks have been adopted massively in domains including computer vision {{cite:8fd87c25e7ef405f6ed354be71c9e9bdf8830951}}, {{cite:4e50e7f0e2f9b68c80809a9a70b7f1e0254647bc}}, {{cite:3dda4401fd2e295d84de54aa4bd6e77df15af365}}, natural language processing {{cite:cae75e5f06de5a09d56a9f2feb3bd20fa41ae528}}, {{cite:9a56bba42c818c7ada68b8a62fd70118c2f3c1d2}}, {{cite:4c61433ff88932db0dbe68140ad903e3bbca17fa}}, recommender systems {{cite:17282e7c1a44bbe755aba932cfb0043e1e60ec04}}, {{cite:9c6f08053ea30a97242ff9cdf3a6a93aebe7274c}}, {{cite:08d72829834d78c0ed9fd8c8384defddce9a564e}}, and anomaly detection {{cite:628f825c645ff0d70886fd232071a177c25202ca}}, {{cite:e0297c82f329a0c603fbb585a69fb1f9ed31ab20}}, {{cite:9b3eb16b1e75aed62613c27aba3e0cf93d673c25}}. Recently, Graph Neural Networks - hereafter GNNs - have overcome a plethora of challenges in many graph mining tasks that encompass vertex classification {{cite:fc994beda30372065306c60f725a76fc3aae2b8c}}, {{cite:3c8bbca2c0aee5c460fdbc9ccb2a20d4fb62331d}}, {{cite:b4a6f8860e81f683305f659b31b45bd2086b911b}}, {{cite:6a7486979bad045195f0ddb266bfc76cd049ce4e}}, {{cite:710575e0523fa416e0356ddc3312722e32fd0dab}}, {{cite:f9b20718b838e8529898bf53af7c2bdd75386102}}, {{cite:79577b85c786a0a48f8df54646a3de4a4030bceb}}, link prediction {{cite:9f26c7d085f7c38d3932f5a042bfffc222c96ef8}}, {{cite:524e093e49c07c8dad5539bd1d0ff556133fefed}}, {{cite:3c8bbca2c0aee5c460fdbc9ccb2a20d4fb62331d}}, {{cite:179890487d86c0049c7a17d5d29a3bd84231a873}}, {{cite:beedb84acfce6116b57d9e7faddc4f4848634c17}}, {{cite:aa9291666820ce2f71c686c9ca9c962220748609}}, community detection {{cite:241dcf4c2e3952583db57dbb4090ee9f02ddd14e}}, {{cite:d5635f9b15ee1c74fdf8f8a91e8bc98bf61faacd}}, {{cite:d1021de880cdd96853cef8056bbaf156744287b9}}, {{cite:812e3a2be86524984c8619b23889e91efbedd7a2}}, and graph classification 
{{cite:c7ae1ec438156dab99eae00799bc0c06ec285e8d}}, {{cite:07c418114bbff42c471687dd2895db13473415b3}}, {{cite:11496f8c6e7ee56db8cb14c18a1f38ca5c3ac42b}}, {{cite:beedb84acfce6116b57d9e7faddc4f4848634c17}}. Intelligent systems can be used in decision support scenarios where processing requests becomes cumbersome and time-consuming for humans {{cite:b9d47afcc2fc53a355dae479c092a3c39e525936}}. For example, hospitals might employ an automatic system that aids healthcare personnel in deciding whether a patient is susceptible to heart-related diseases, considering current medical conditions and past disease history. Banks might have a system that decides whether clients have their loans approved. Social networks could employ an intelligent strategy to detect users that violate their terms of service and, consequently, ban them from the platform. Lastly, pharmaceutical companies can use these kinds of decision support systems (DSS) for drug repurposing.
i
7e40536c38960e7ee5f06a06ed10cb01
As discussed in Secs. and , the boundary local time {{formula:26e10e54-79d0-45e1-a00f-860e521dd2a4}} is a proxy for the number of encounters, {{formula:c03f7c6d-e3e2-4849-ab8d-4b41123266e8}}, of a diffusing particle with the boundary layer {{formula:2292ba37-3ab1-4718-8f26-08dc8fa1515f}} of width {{formula:748d20bb-0922-4f1b-8222-8d3222158e65}} up to time {{formula:42dc6a4b-ccd5-40c6-b023-0eacbffb1bbb}} (here, {{formula:708910ce-32a6-415b-b13d-7cad192f0010}}). Consequently, the first-crossing time {{formula:0d04708a-8d00-4474-8293-14f7b282f38f}} describes the moment when the number of encounters exceeds a prescribed threshold. While the random variables {{formula:4e9b8dc8-2a9f-498b-9abe-449a2b88bc54}} and {{formula:f3c90371-408a-4121-bfdf-c9674329b5c5}} characterize the reflecting boundary, they form a natural ground for incorporating surface reactivity and for describing various diffusion-mediated surface phenomena such as diffusion-influenced chemical reactions, permeation across biological membranes, surface relaxation in nuclear magnetic resonance, etc. {{cite:81ee2ad764f87f5912e50768e94fc61b02a45ca2}}, {{cite:67bbfe7771d3ed2b624cd084417d07c9a18934d3}}. For instance, {{formula:1b759d27-50d6-4345-9d30-dcfcf29e48dc}} describes the first-passage time to a perfectly reactive boundary {{cite:0a74ea6f9d453d55d6be20540eb78edb8facf149}}, whereas {{formula:ad248082-a080-49d9-8ff9-52e641f7f2fa}}, expressed through Eq. 
(REF ) in terms of {{formula:362bf1af-7a08-4b0c-96d7-4ca12d7cfa9b}} , determines the reaction time on a partially reactive boundary {{cite:4d929f0f3491a483da2e80cfc3a6ef4ce96ce2da}}, {{cite:258febb14d29fe400810509d30b7721749c74d67}}, {{cite:6d14113af1e06b8be875dff4629e1716c65ec4df}}, {{cite:b7c3632d47bdf5c011a5d7365e3bb35af0c0b885}}, {{cite:477709512f92c671c974f773b8a93c54a5e4905d}}, {{cite:99385dbe459d3fdd65e467133e485ca045786b8e}}, {{cite:b3a165b7d47871474474a070699362d20d349d33}}, {{cite:23e0390fd7748b5e2df4e13dbf3f86f353d2e106}}, {{cite:8c1e4c3e8784be7c8e1c9db7dc83137e2ddb67e4}}, {{cite:2ce932abda18fb768410a59cb88fa91028264bed}}, {{cite:3f2216f4fd4bccb068016e39a7e036bcd73ff4ec}}, {{cite:b33c7cd6d013085cacb0ca0b4c3a406b146e2289}}, {{cite:ede159bcaed95fa148920b7dea47f0fa57f7d65d}}, {{cite:34b37371b606cd80407df372c57ac5fb9954f81e}}, {{cite:7a5738a39b0fd598437a93a0ce120cbb26f7b860}}, {{cite:182bac474e4a938d8c1311b16fe37b68a6bd906e}}, {{cite:ceecc2dbf5359c2cb9c6a242d972dfd266eaa02c}}. More sophisticated surface reaction mechanisms can also be implemented with the help of {{formula:f8eaee1e-b8db-4051-8740-b55d3551b099}} , as discussed in {{cite:67bbfe7771d3ed2b624cd084417d07c9a18934d3}}.
d
abf54f7062081518944a4841cc1ea83b
Our work has several important implications for designers of algorithmic flagging systems and sociotechnical systems. Scholars of human-computer interaction, science and technology studies, and the law have all called for analyses of algorithmic fairness to move beyond the biases inherent in algorithms and to consider the systemic and downstream effects of algorithms in use {{cite:c898edbe448cf56bdcf8761c60d20261c09b5028}}, {{cite:0d69039dafd6ee04c55a94cf63c2fc734ababd06}}, {{cite:70aa97dae5ddf67a571983a3d7df1d4b762d76d0}}. Ultimately, we recommend that operators of algorithmic flagging systems continuously evaluate decision-system fairness metrics and seek to improve them according to their values. Given that the ORES model is itself biased against overprofiled users, our results suggest that evaluating the fairness of model predictions is only one piece of understanding how an algorithmic system shapes fairness in contexts such as online community moderation.
d
97b02a480c64cf9f8f3f13b405511b66
This paper has introduced a novel approach, referred to as DEER, for reconstructing CT images directly from under-sampled projection data. This approach features (1) an adjustable network-based reconstruction algorithm, (2) the Wasserstein GAN framework for optimizing parameters, (3) a convolutional encoder-decoder network, and (4) an experimentally optimized objective function. The proposed end-to-end strategy learns the mapping from the sinogram domain to the image domain with significantly less computational burden than prior art. Zhu et al. {{cite:01f0405942b73c35db5e2beafa2363803e27bbcd}} published the first method for learning a network-based reconstruction algorithm for medical imaging. They used several fully-connected layers to directly learn the mapping between MRI raw data and the underlying image. However, their method poses a severe challenge for reconstructing normal-size images due to the extremely large matrix multiplications in the fully-connected layers. Additionally, even though this technique could be applied to CT images, using fully-connected layers to learn the mapping does not exploit the full information embedded in the sinograms, resulting in redundant network parameters. iCT-Net {{cite:0615c929a018922b537bd891abf279954d6ac04d}} utilizes angular information and reduces the number of parameters from {{formula:7ee12906-2d3c-4ef2-b4c3-683ee033404f}} to {{formula:8c37b347-80ca-4f05-894b-407c93209e64}}. Instead of feeding the whole sinogram directly into the network, iCT-Net uses {{formula:be7efd86-49a4-4d04-ac8f-dee07fed0bcc}} fully-connected layers, each of which takes a single projection and reconstructs a corresponding intermediate image component. DEER takes full advantage of all the information embedded in the sinograms by utilizing angular information similarly to iCT-Net, while also assuming that every single point in the sinogram is only related to reconstructing the pixels along the associated X-ray path. 
The proposed method is a fully learnable mapping from the sinogram domain to the image domain. By taking advantage of the geometric constraint imposed by line integrals, our network is much more efficient than other end-to-end methods. The intuition behind DEER is to learn a better filtering and back-projection process using deep learning. With this intuition and the novel network-based end-to-end design, DEER can be trained with {{formula:aa1abec5-14c0-4c8a-a9d9-df8834b79f94}} parameters. Moreover, with this design, the reconstruction process can be learned with as few as {{formula:cf5bb1b7-c52c-4d10-a0d2-430c0eadeceb}} parameters if all the angles and ray-tracing lines share the same training parameters during the back-projection procedure. If the network only uses {{formula:54afeb5e-4fb1-4da1-9753-2942f74adb64}} parameters (i.e., the parameters are not view-dependent), the proposed back-projection mechanism can be applied directly to other numbers of views, even without re-training, thanks to the learned ray-tracing idea.
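To make the learned filtering plus back-projection idea concrete, the following is a minimal NumPy sketch (not the authors' actual architecture): a single learnable 1-D filter shared across all views is applied to each projection, and the filtered values are smeared back along the corresponding rays for a parallel-beam geometry. All function and variable names here are hypothetical.

```python
import numpy as np

def learned_backprojection(sinogram, filt, n_pixels):
    """Back-project each filtered view along its ray direction.

    sinogram: (n_views, n_dets) array; filt: (n_dets-like,) learnable
    1-D kernel shared across all views (the parameter-sharing idea).
    """
    n_views, n_dets = sinogram.shape
    # 1-D filtering in the detector direction with one shared kernel
    filtered = np.array([np.convolve(p, filt, mode="same") for p in sinogram])
    image = np.zeros((n_pixels, n_pixels))
    ys, xs = np.mgrid[:n_pixels, :n_pixels] - (n_pixels - 1) / 2.0
    for k in range(n_views):
        theta = np.pi * k / n_views
        # detector coordinate of every pixel for this view (ray tracing)
        t = xs * np.cos(theta) + ys * np.sin(theta) + n_dets // 2
        idx = np.clip(np.round(t).astype(int), 0, n_dets - 1)
        image += filtered[k][idx]          # smear values back along rays
    return image / n_views

# Sanity check: an identity (delta) filter and a constant sinogram
# back-project to a constant image.
delta = np.zeros(5); delta[2] = 1.0
img = learned_backprojection(np.ones((8, 16)), delta, n_pixels=4)
```

Because the kernel is indexed only by detector position, not by view, the same parameters apply unchanged for any number of views, mirroring the view-independent variant described above.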
d
d1725d725275630aea76dbb06f01c9d3
The use of vision transformers constitutes an exciting opportunity for many surface learning applications, especially in the context of biomedical data, for studying diffuse processes in cardiac {{cite:babbceb114a6ddf1ec507c0b1e48b6ec3f4b4990}} or neurodevelopmental modelling {{cite:5db175538642c7644d1fcfb131365fa8735eb46e}}, where surface deep learning models are usually limited by the receptive field of convolution operations. Various improvements of the method could be explored, as the SiT only employs a vanilla Transformer encoder {{cite:74470317e6aadc2452c9ee9c6eaf79e15927ee09}}. The latest developments around multi-scale feature learning in ViTs {{cite:54b380aec8e51783a4c1870a167b4ed274aa14b2}}, {{cite:4eff3e1ab4ea6d26e58a6343f859c2ddd7c8b475}}, {{cite:6bb3c1b74d21c2d603b0b707a2ae1d5557e62f62}}, {{cite:90235a016e32806f359f4478a25452be11663c56}} would further benefit the context modelling of cortical surfaces, as would new (pre-)training schemes {{cite:f35f42b644b352bebfa49cd42358d190601a0a4e}}, {{cite:143c471a1beb4080080657d4e9c81b8cbf9e0565}}.
d
7ec24a3a3786f6c29617ade3597527f2
The DDM's inability to explain the low-luminosity suppression joins other challenges this model faces. Recently, {{cite:b040c61fc7f19d1af05860346ec9051c5db3c5e5}} used the observed {{formula:1f61034e-e906-4bcb-89eb-276e409f1e5f}} relation {{cite:641e1a401675b8c99aad3f5491abe66ce954a423}}, where {{formula:85a905a7-22f0-4cfc-bb98-06f6eeb41a62}} is the {{formula:8de52a47-3f6e-4f42-aec7-221cb86ff81a}} -rays escape time from the ejecta (measured to a few percent accuracy), to reveal a clear tension between the predictions of SCD and the observed positive correlation between {{formula:9b0405d2-52ef-457b-8963-c28361eff2e9}} and {{formula:cb110c69-5f33-49c3-ad18-7986dd822f91}} . SCD predicts an anti-correlation between {{formula:74719aa5-91a4-488e-926f-77a400be89e2}} and {{formula:a7a14698-a1de-44b2-a2bc-0e160683ec4b}} , with {{formula:44228b8f-3418-4e59-b57a-e425d9279def}} for luminous ({{formula:d4bc07f0-2278-4462-872d-f3914570e9f8}} ) SNe Ia, while the observed {{formula:9c315809-3e6e-4992-8469-3d1e9c84493a}} is in the range of {{formula:c3b0d75e-f87e-43f9-99cf-5ef26052e311}} . They showed that various uncertainties related to the physical processes and to the initial profiles of the WD are unlikely to resolve the tension with observations, although they can deteriorate the agreement with observations of low-luminosity SNe Ia. While there are some reasonable initial compositions and reaction rate values for which DDM successfully explains the low-luminosity part of the {{formula:97059a8d-944b-451a-818c-d77b2934a1ea}} relation, this model may be in conflict with the observed {{formula:e9fbe58d-de06-4287-9be2-e02cbc73c644}} Ni mass-weighted line-of-sight velocity distribution for a large fraction of these events, as measured from nebular spectra {{cite:9824d22c40edbe1ea12b2d1a1d10a42f5ca63766}}, {{cite:109e1b04449fe362ac002013188ad9916ff641c8}}, {{cite:9856f25ce3391c7f15842f97e67b468586954756}}. 
Specifically, the {{formula:ac6f0a9b-f611-4037-8bea-a6ee9228e20c}} Ni velocity distribution is either double-peaked or highly shifted, which the DDM has so far failed to predict.
d
624194f167868c0ee0c45dd496ed216e
We next consider the optimal values of the response parameters. In the adaptive response, {{formula:e78c57f2-533b-4d4d-a5c2-675e2a3c72b6}} (the memory length) is very short (Table REF ), allowing the bacterium to quickly adjust to small displacements from attractant optima. {{formula:a80f9635-310a-4676-9b44-a615a40f837c}} , the rate at which the bacterium stops tumbling, is quite large, corresponding to short-lasting tumbles characteristic of chemotaxis in E. coli {{cite:cb25107c5e549e3467f4dd0fb39a241395550a2f}}. High sensitivity (large {{formula:ae35922c-225b-42ea-88ef-d136e93c6c26}} and {{formula:104cfdf5-2bc1-44e3-8206-0ac3681ed06f}} ) is necessary for the bacterium to respond to small differences in attractant concentration characteristic of small displacements from the top of an attractant peak. High sensitivity is responsible for the high {{formula:f3f9b895-6e88-487b-b8cd-3a6128a47048}} when the attractant is removed at {{formula:a998ba55-3ff4-41fc-abf9-f02cecbf1df6}} in Fig. REF c (red curve). Optimal {{formula:b89ddd59-aa0c-425a-91b8-bf8e005c123b}} , the tumbling rate in the absence of attractant (or under constant attractant in case of perfect adaptation), is very low in all strategies, as is evident from Fig. REF c. Low {{formula:f4ac2e65-525e-4b84-aade-4b4e3ab559e3}} enables bacteria to run persistently in order to find regions with more favorable conditions more quickly. The near-zero value of {{formula:9c8f4eb6-8634-494a-934f-91925f44d7e3}} removes the possibility of {{formula:cec6279e-181d-4c68-ad38-a35a863203df}} going below {{formula:5b78af10-5b17-41da-acea-a2867d764ee4}} , eliminating the response to increasing attractant in the adaptive response (Figure REF c, red curve, {{formula:8b7e3517-4994-4fd1-a6ce-7f0e4e38902f}} ).
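The role of a low baseline tumbling rate can be illustrated with a toy simulation (a minimal sketch, not the paper's model; all parameter names and values here are hypothetical): a 1-D run-and-tumble walker reverses direction as a Poisson process, and a lower baseline rate yields longer persistent runs and hence a larger explored region.

```python
import numpy as np

def run_and_tumble(lam0, v=1.0, dt=0.01, steps=5000, seed=0):
    """1-D run-and-tumble walk: the cell runs at speed v and redraws
    its direction as a Poisson process with baseline rate lam0."""
    rng = np.random.default_rng(seed)
    x, direction = 0.0, 1.0
    traj = np.empty(steps)
    for i in range(steps):
        if rng.random() < lam0 * dt:          # tumble: pick a new direction
            direction = rng.choice([-1.0, 1.0])
        x += direction * v * dt
        traj[i] = x
    return traj

# Lower baseline tumbling rate -> longer runs -> larger spatial spread,
# i.e. favorable regions are found more quickly.
spread_low = np.ptp(run_and_tumble(lam0=0.1))
spread_high = np.ptp(run_and_tumble(lam0=50.0))
```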
r
23bd9836d3131c63c13f5e2dfed1a4df
As a simple way to constrain the model parameters of possible spontaneous scalarization, it is interesting to extend our study to further cases. A straightforward case is the Schwarzschild or AdS black hole in EsGB theory, as well as the other modified gravities mentioned in the introduction. A second case is the dynamics of a massive scalar field perturbation on a Kerr black hole in EsGB. Though this direction has been investigated in {{cite:907e94b7944593155a5711024bb770cc3decc68b}}, the authors there only considered the {{formula:e6db9dd2-3bb2-4779-b6a2-75a217a1b4b5}} mode of a massless scalar field. Since, besides the tachyonic instability, the superradiant instability may occur in a rotating black hole, considering the mass of the scalar field and different modes will introduce more physics on the fate of the scalar-free background. A further case is to consider the dynamics of a charged black hole before it is scalarized. The scalarization of charged black holes was first proposed in {{cite:8a0d6e1720f9494b11e4f5604523b114a806aebb}} and soon generalized in {{cite:99e67a3a02348de676007c3249ba34225efddf66}}, {{cite:e3f09bd5232ac996a484484e09988cf4842538cf}}, {{cite:317125567ac008f0c26c918092a6f18a675cbcfd}}, {{cite:1ffa6d1ce44794bb3e9f07596fb83cb31a3f7b9d}}, {{cite:01b3a07bfca34b309d5bc0f1b1ffbf984fbbbfb2}}, {{cite:00b4c2dc654a32fe51f703011d344065c88c29b4}}, {{cite:2d9d7b2671c72c8f47550b8facbfe8ab1662cd55}}, and a complete dynamical analysis of the scalar-free charged black hole still deserves to be presented.
d
94d4fda50be0d6675327adf8ac276d02
We remark that our unfitted finite element method (REF ) is the so-called local discontinuous Galerkin (LDG) method of Cockburn and Shu {{cite:687127adbcb93f907f18d5adb77bfd9a90b9ac69}}, which is different from the interior penalty discontinuous Galerkin (IPDG) method used in {{cite:075db04120e2773302e7a612b2cac8461949dd2d}}. We choose the LDG method because the penalty constant {{formula:86b5a9f6-e15f-41dc-b3d2-b09dedbee387}} in (REF ) can be any fixed constant, while the corresponding penalty constant in the IPDG method must be sufficiently large to ensure stability. We refer to Arnold et al. {{cite:e3655020fb655b15833bd8b0a34de8da7acc52af}} for a review of different DG methods for elliptic equations.
m
f466a322c0358620e624dcb8380f86f4
The CESIUM framework simulates a theoretical CESDP by modeling its constituent elements (Sec. REF ). This section describes how the base instance of CESIUM represents each element: the system architecture and system boundary (Sec. REF ), the constituent artifacts (Sec. REF ), the organization of interacting agents (Sec. REF ), agents' design processes (Sec. REF ), and the system development process (Sec. REF ). Thereafter, an example execution of the base instance follows for a visualizable system (Sec. REF ), along with a brief introduction to the framework's flexibility in simulating variations on the elements of CESDPs (Sec. REF ). The implementation of CESIUM described herein was developed using Python 3 with the NumPy 1.18.1, SciPy 1.4.1, and NetworkX 2.4 packages {{cite:bdc447c6302e076c58cf49d99cdfd2ea211710a6}}, {{cite:0d56a4ac29443c3893e1fb3b8a9dd8fce1f03a86}}, {{cite:c34bd89a864705a35215c0ccad0b4e7569806d6b}}. The full code for this article is available at https://github.com/meluso/cesium-framework, and a clean version of code for the base instance of CESIUM is available at https://github.com/meluso/cesium-base. Secs. REF –REF were partially adapted from Sec. III of {{cite:7537c1921a98756665281473ac2b7dbdf7e60b93}}.
m
9f7b51877269f91603d7f8c8914d1491
Our Test-time Visual Consistency (DAVIS) framework is a general learning paradigm that can easily be applied to existing VLN baselines to boost their generalization performance in new scenes. Extensive experiments show that DAVIS achieves model-agnostic improvement over previous state-of-the-art VLN baselines on the R2R {{cite:cf7651c41310d3d4828bce5ce139154843443803}} and RxR {{cite:5238fa2c7a5e72b7bf4a2dc6d356f169b1a5c64b}} benchmarks. In summary, our contributions are three-fold:
i
5fe21204431dcf8272d26a950b4c0ac5
In this section, we will mostly focus on ecological implications of the mathematical results obtained in the previous sections. We should stress, however, that it is crucial to define how we formally assess the consequences of the Allee effect on the population dynamics. Indeed, this question is far from trivial, since distinct paradigms exist in the literature {{cite:98826533242acd73644cb71dc8d4fde970073a2e}}, {{cite:f12ffa17eb1f7db9be99910a5a9255d044f23358}}, {{cite:8ac110b21afadd32d3c1bd02b3b8e898ed937fc8}}, {{cite:3f44a9d27d7b344a86afdc2badbfe5ef5a40ad5e}}. In fact, evaluation of the role of the Allee effect in population success should largely depend on the choice of the initial density, a fact that is still somewhat disregarded in the literature. Indeed, consider two populations, one possessing a self-accelerating per capita reproduction rate and the other characterised by a constant per capita growth rate; for simplicity, we assume the growth rates of both populations at some low density to be the same. Then an increase in the population density would result in an increase in the per capita growth rate of the species with an Allee effect, which clearly indicates the benefits of possessing an Allee effect. For example, hunting cooperation is considered to be beneficial for predators up to a certain level of population density {{cite:98826533242acd73644cb71dc8d4fde970073a2e}}. On the other hand, considering higher population densities as the starting point for comparison (e.g. population densities where the Allee effect is not pronounced) will show a different outcome. For the same initial per capita growth rates, the species without an Allee effect will be at an advantage, since a sudden drop in population size would not greatly affect its per capita growth, whereas the population with an Allee effect may exhibit a significant decline in reproduction rate with a threat of extinction. 
For example, low fertilization efficiency, a lack of mating partners, and sperm limitation are usually considered as negative and undesirable features for population persistence {{cite:f12ffa17eb1f7db9be99910a5a9255d044f23358}}, {{cite:8ac110b21afadd32d3c1bd02b3b8e898ed937fc8}}, {{cite:3f44a9d27d7b344a86afdc2badbfe5ef5a40ad5e}}.
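The comparison above can be made concrete with a small numerical illustration (a hedged sketch; the specific functional form and parameter values are our own choices, not taken from the text): a strong-Allee per capita growth rate matched to a density-independent rate at some low density rises above it when density increases, but turns negative after a crash below the Allee threshold.

```python
def per_capita_allee(n, r=1.0, a=0.2, k=1.0):
    """Strong Allee effect: per capita growth r*(n/a - 1)*(1 - n/k),
    negative below the threshold density a."""
    return r * (n / a - 1.0) * (1.0 - n / k)

n_low = 0.3                        # matching density
g0 = per_capita_allee(n_low)       # shared low-density growth rate
const_rate = lambda n: g0          # density-independent competitor

# Raising density from n_low benefits the Allee population ...
gain = per_capita_allee(0.5) - const_rate(0.5)
# ... but a crash below the threshold a is harmful only with an Allee effect
after_crash_allee = per_capita_allee(0.1)
after_crash_const = const_rate(0.1)
```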
d
6713e57871042220225b821a4a348fa2
A suitably prepared standing wave of laser radiation can form an optical lattice (OL), which is broadly used for trapping and steering ultracold atoms {{cite:b0c360c0121da13bca2753df8588c9f85157809a}}, {{cite:b5c4980d9d30f1312449ccfeaf6493cb467d8e74}}, {{cite:2f9a9ea3cecaaef1656b13f1c1938085b0950ac3}}, {{cite:a521a37acb09e9f47546c421ff20cdc9bc61fbf7}}, {{cite:2a51115ad3b13e8981b985bb3522c55368b2783c}}, {{cite:3d860ae5dcdf461021e4d5e948d19ab8b82e03f7}}, {{cite:1d77e0d9968156e0e2b964098762b90cdadb699f}}, {{cite:f15eef6366d5c5bf2d8c5d93ac4a6122d389852f}}. Offering a versatile platform for research in the area of matter waves, OLs have become the most appropriate candidates for the realization of quantum simulations {{cite:83bb1d19987dc144f499190384f54f4df937705a}}, {{cite:9bc0e959c132c9d9247a42344f452862f1a0d13f}}, {{cite:168cabbd6f5cc1c581f052d81f7767716cc82925}}. Further, ultracold atoms and Bose-Einstein condensates (BECs) trapped in an OL are used as a basis for the development of atomic clocks, quantum sensors, quantum computers, and a variety of other applications in quantum technologies {{cite:ee14deea987a15fccf8408f24ac6f7e946348d71}}, {{cite:843e72b063eb789ec73082d7c298f667066c06d2}}, {{cite:e3c00526b717d365bb6a32ac8352203dd89b67f2}}.
i
6710b49e9069696da41f23c0e8cffc92
where {{formula:a47c08cf-d7fd-4329-93c1-d8d5aafb4694}} is the elementary electron charge and {{formula:55dedb6e-ab19-44dc-99ff-bd62183ee82a}} is the charge carrier density. Graphene was found to exhibit hole conduction at zero back gate voltage with a sheet charge carrier concentration of {{formula:0db0dc20-3314-4e8f-a2c6-13a170f5b3f9}} and a background voltage offset of 3, which has been subtracted from the measured raw data. The linearity errors {{cite:7febb8886381e282b0a45544cbd651a13bffc277}}, {{cite:2287a5cd11e80f709fac5397c010ea1546162422}}, {{cite:a920259ba3ab5137819b7ac97b4eac5bbb2751f8}} were found to be within {{formula:8bbbd17a-1f6d-4bbf-8a06-865a6e1c0128}} with an average absolute value of 1.3 % over a large magnetic field range from -760 to 780 at room temperature. From the measured Hall voltage response as a function of time at different perpendicular magnetic fields (Fig. REF b), one can estimate the noise level and a minimum resolvable magnetic field of 20 at room temperature. From the Hall measurements, the calculated current-related Hall sensitivity {{formula:a760e7bf-552f-4132-b670-4bff066ee3e5}}
r
e1959fb40977f876acd293274b9d357d
Proof. To reach a contradiction and establish item 1 (respectively, item 2), we assume that (REF ) (respectively, (REF )) holds and {{formula:93c8fb25-da7e-4077-8b21-b3b53b9109a8}} is not conditionally invariant (respectively, not strictly conditionally invariant) with respect to {{formula:49cd1c68-92c2-4b20-af34-51d6a5b12c04}} . That is, there exists a solution {{formula:9119d13e-7a89-4f7c-baaa-6eb47ed75e24}} starting from {{formula:291dfb60-bb99-427f-b413-24e349c2f471}} — thus, {{formula:92f92694-e477-4968-9347-7b9aa48f95eb}} — and there exists {{formula:1ee067a2-eb02-468b-8bd1-5d3e72305182}} such that {{formula:870e6db5-5c5c-4cfb-a1e1-6b8aac10e939}} (respectively, {{formula:662e0a80-302b-4d8a-bfe6-cf43002f4bee}} ); thus, {{formula:514e3a1a-09b3-4441-9fe1-b3877cdcdeb5}} , and {{formula:8ef5da42-d346-4733-a52e-5441dd42af83}} (respectively, {{formula:f7130e75-32d9-4010-858c-87979065db76}} ). Hence, according to {{cite:ceef8d2b6aef00ee0a00a850b1235098be34d174}} and {{cite:c986fbb47363aeebbe93fee6a4539ef6b79f4f2d}}, we conclude that, for almost all {{formula:5a196f5f-e6dc-4e43-92f8-4185c7e212df}} , {{formula:021a3c9f-93b4-4717-a12f-5496369b0941}}
r
62fe33a9f5456bab22eded90bf534983
where {{formula:60604566-59ad-4c14-9757-b39b266d26f8}} indicates the encoder network. We adopt a modified ResNet50 {{cite:81937aadec8f0436c6294545477a9191215f310d}} pretrained on the ImageNet dataset. Our encoder outputs features at {{formula:2aadb0d9-7477-49e9-a686-bf7ad6bb40b1}} resolution of the source frame, as we empirically find that the large receptive field is critical for the semantic information. Aside from the context features, we also extract the multi-scale intermediate features from the encoder to capture fine structures. We use {{formula:287886ca-f19e-42cd-af14-948d1934f00b}} convolutional layers to map these intermediate feature maps to 32 channels. The weights of the feature extraction network are shared across all input frames.
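The channel-mapping step can be sketched in a few lines of NumPy (a minimal illustration under assumed shapes, not the actual training code): a 1×1 convolution is simply a per-pixel linear map over channels, and the same weights are reused for every input frame.

```python
import numpy as np

def conv1x1(feats, weight, bias):
    """Pointwise (1x1) convolution: a per-pixel linear map over channels.

    feats: (C_in, H, W), weight: (C_out, C_in), bias: (C_out,)
    """
    out = np.einsum("oc,chw->ohw", weight, feats)
    return out + bias[:, None, None]

rng = np.random.default_rng(0)
# Hypothetical shapes: 256 intermediate channels mapped to 32
weight, bias = rng.normal(size=(32, 256)), np.zeros(32)
# The same (shared) weights are applied to every input frame
frames = [rng.normal(size=(256, 24, 24)) for _ in range(3)]
mapped = [conv1x1(f, weight, bias) for f in frames]
```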
m
2b73cedb5da5b65e133cbb1ce0409186
Remark 2 (Technical novelty) Although the estimation procedure for the BSVDgq method, as outlined in Algorithm A.1 in the Supporting Information, has been theoretically studied in {{cite:21ab701100b96c97149d32b8844e20b2ca485ea8}}, their result hinges on a Frobenius-norm error bound for {{formula:cb1303fb-7671-4c2f-a7fd-cb24527b818b}} , which would be too large for our purpose of bounding the entry-wise error {{formula:c717b89b-c22f-46da-949e-a53ce783a236}} . Hence we developed some new proofs, borrowing tools and ideas from spectral-norm error bounds for sample covariances {{cite:aab9a33ccf89c41933e953f9d212b191c168652a}} and {{formula:289b87d5-9d9d-4c37-8fb5-0852492bc897}} -norm error bounds for spectral methods with perturbed low-rank matrices {{cite:ed5e7f7a64d0e79ed34c37ea81cbfab40d2b32c6}}. In addition, the high-probability error bound for sample covariances helps us get rid of the conditions imposed upon the random quantity {{formula:05c50b54-2e4f-4b61-89cf-1cc8cf07a7fd}} in {{cite:21ab701100b96c97149d32b8844e20b2ca485ea8}}.
d
c0ba7f4dcd45542d9968c235dbda611c
Two main challenges of automatic recognition of tendinopathy from US images are data shortage and the indistinct appearance of US images. The small data set poses serious hurdles for deep neural network solutions. To work around this problem, the concepts of knowledge transfer, transfer learning, and data augmentation are incorporated. Relying on knowledge transfer, our network architecture is built from pre-designed building blocks {{cite:de7f51591079c994c23aa34434b505db58063db0}}, and with transfer learning, network weights that were pre-trained on another sufficiently large related data set {{cite:729a7268288b4bd19f39dd746b767e15fede32a2}} are used. These two concepts helped us effectively deal with the data shortage that is common in medical applications.
d
590b1ad3477825f8d45552b819b49148
in four spatial dimensions, where {{formula:8cc4bd1b-c2c5-4fd5-895b-1ca9f8946c60}} and {{formula:23ae498e-a25b-4acf-9192-2d81bea386d2}} is a fixed real constant. The equation (REF ) was introduced by Zakharov {{cite:29fede18dfc60bdd84be9978b144ac5bd6792f98}} as a model for Langmuir turbulence in plasma, where {{formula:b2917061-f412-47cc-8761-8ad49f5b86ff}} denotes the slow oscillation of the magnetic field, {{formula:44224edd-c072-4cff-9114-371bc5a5cdc1}} is the ion density, and {{formula:3878773f-1c25-470c-bc0a-6750e9a4e36e}} is the ion sound speed. We refer the reader to {{cite:9005a70d5066d80661d8f82e9255e6f9eb70d474}}, {{cite:8d117bdb5bec435617629eb03b6b257c13e7d69c}}, {{cite:29fede18dfc60bdd84be9978b144ac5bd6792f98}} for further background on the derivation of the Zakharov system and its physical origin. The Zakharov system has received significant attention in the mathematical community, see for instance {{cite:1f1530377b9becbf99417d942609a1309e96d21a}}, {{cite:d41f1858a91d0d1d27960b2700552ecfaa905384}}, {{cite:9d98d411e6162f583ca84ad92a0a32ec37047285}}, {{cite:bc80740c072aaa80d98bd00d64abe6bfe01eff46}}, {{cite:39ba6dd9f9c8b1e7f4db20cfdf03a04fe685980a}}, {{cite:3ca724ef52da0dcd5c5635a226a9c2a6a5eaef23}}, {{cite:feafe99440996facb930309a81fc86465e4dc9f4}}, {{cite:84947a948a922c23af40327160e3f5cc5de11574}}, {{cite:cd473010c704a7646ff6f3e3aa8405a45fa38cf6}}, {{cite:919b611b875a11badaac2de6596c3b154eaa4e76}} and the references therein.
i
4efb1cf6b364cc648cfbccbc40c7c6c4
Another promising line of research consists in extending Theorem REF to the case of metric measure spaces. The theoretical ground for this development may be found in the seminal works of {{cite:9f654d75af56202b88ef9d49cd5ac2cd27dde703}}, {{cite:78958abb7d8fcab82fd079c7cd0451619bad9116}} and {{cite:ef2492b7f057eef901453ada7a71442c60a8c4f6}}. In such a context, the treatment of the relative entropy functional in Wasserstein space is of interest. It is well known that the Hessian of the relative entropy functional, i.e. the Kullback-Leibler divergence, generalizes by using techniques from infinite-dimensional Riemannian geometry. See {{cite:78958abb7d8fcab82fd079c7cd0451619bad9116}}. From the statistical side, the possibility of choosing a parameter space that coincides with a space of measures allows one to consider popular Bayesian statistical models such as Dirichlet process mixture models. These models can be generically specified by the formula {{formula:6b386c48-0fad-42cf-86e9-37ed2c753c69}}
d
264a6d912ce1f4e0ea4bac899a26046b
We further describe the protocol's steps, which use both an offline and an online phase. The offline preprocessing phase is independent of the client's input (which changes regularly) but assumes that the server's model is static: if the server's model changes, both parties would have to re-run the preprocessing phase. Our work considers a 2-party scenario. One party, the client, has access to the private input and shares encrypted information with a model owned by a server. We assume that the server has sole ownership of the model, which could have been trained on plaintext data before engaging in the present protocol. As an extra measure, our protocol lets the model owner take further privacy precautions by training the initial model using Differentially-Private Stochastic Gradient Descent (DP-SGD) {{cite:25308cdc5f95f4fb5b7c754bf4e16d706b121319}} to protect the training set if required.
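The DP-SGD option mentioned above follows the standard recipe of per-example gradient clipping plus Gaussian noise. Below is a minimal NumPy sketch of one such update (an illustration of the technique, not the protocol's actual training code; parameter names are our own).

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD update: clip each example's gradient to L2 norm
    `clip`, average, then add Gaussian noise scaled by noise_mult*clip."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g / max(1.0, norm / clip))   # scale down if too large
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]   # norms 5 and 0.5
new_params = dp_sgd_step(np.zeros(2), grads)
```

Clipping bounds each example's influence on the update, which is what makes the subsequent noise addition yield a differential-privacy guarantee for the training set.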
m
7645c148394f87a36d81b831bc854bda
In this paper, we have developed the notion of a quadratic form expansion, as described in Ref. {{cite:b2d7f365cb287bf69857a563939fa1c855747051}} and informed by presentations of similar path-sum-like representations of stabiliser states and stabiliser circuits {{cite:76b549f0aa4d6e7f5fc1fa6f0e68543a5c867289}}, {{cite:5320feace4995a3281eeb74b8647ab083ea38ab6}}, {{cite:4940df6e0b125169bc1163f08e9c74cc517e8efc}}, {{cite:ff413569394eb88d95c5f6c883d0d59a1048afea}}, {{cite:1c2e8d8e01ff65d477eb8bb30ef0ecb0712b0f37}}. Procedures to efficiently simulate one- or two-qubit stabiliser circuits on {{formula:65f8160a-7fa1-493b-b7cc-74cc17f49e14}} -qubit stabiliser states using such a representation are implicit in many of these works. We have presented explicit procedures to simulate such operations in time {{formula:e4325eec-5342-40cc-b2a9-fd935ce18fcd}} , matching the worst-case asymptotic complexity achievable under the stabiliser formalism {{cite:05362caed4f0d8a63e1cbfa1fef482ae01d3317a}} among others {{cite:bb284b5cc3c86436dc38121ecf8959940dd12442}}, {{cite:0516594436d0970c40ccb0535884cd1d4561d58d}}. We obtain this result by considering quadratic form expansions which are subject to certain constraints: notably, involving an `expansion matrix' {{formula:ab2b9296-86fd-4a2a-9d10-69202367c601}} in principal row form. Furthermore, the bound of {{formula:c66e46c3-5efb-4525-a30a-12ff6365876b}} is in some cases loose. As we describe in Section REF , when the state has {{formula:c7f22a1d-2ef4-4d27-9137-fcaecfe62a8c}} standard basis components for {{formula:80c67ce1-3617-40e6-9b0f-68e164056f57}} , the worst-case complexity to simulate a stabiliser operation is {{formula:33af7b05-d5ff-4f44-8dd8-46679bca6999}} . For each stabiliser operation, we present still more refined bounds on simulation complexity, when {{formula:820f4e73-92ec-4fb2-9e11-a568f054b50e}} or the quadratic form can be represented by sparse data structures.
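To illustrate the flavor of a quadratic form expansion, the following is a minimal sketch (our own simplified variant, not the paper's exact data structure or constraints): a dense state vector is evaluated as a sum over index vectors v of a fourth-root-of-unity phase i^(vᵀQv mod 4) placed at the standard basis label Ev mod 2.

```python
import numpy as np
from itertools import product

def expand_state(E, Q, n_qubits):
    """Dense state from a quadratic form expansion: amplitudes
    sum_v i^(v^T Q v mod 4) |E v mod 2>, up to normalisation.

    E: (n_qubits, m) 0/1 'expansion' matrix; Q: (m, m) integer matrix.
    """
    m = E.shape[1]
    state = np.zeros(2 ** n_qubits, dtype=complex)
    for v in product([0, 1], repeat=m):
        v = np.array(v)
        phase = 1j ** int(v @ Q @ v % 4)
        basis = E.dot(v) % 2                       # standard-basis label
        index = int("".join(str(b) for b in basis), 2)
        state[index] += phase
    return state / np.linalg.norm(state)

# Example: E = I, Q = 0 gives the uniform superposition on 2 qubits
plus = expand_state(np.eye(2, dtype=int), np.zeros((2, 2), dtype=int), 2)
```

The exponential loop over v is only for illustration; the point of the constrained representations discussed above is precisely to avoid ever materialising this sum.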
d
b4b1c4e84667c2f25b62ef5c5a4e4901
ES-FWI is an underdetermined optimization problem, as highlighted by the ill-posedness of the scattering source estimation problem, which requires some regularization. Another advantage of the augmented Lagrangian method (or method of multipliers) is to provide a versatile framework to implement various kinds of regularization, including nonsmooth (non-differentiable) ones, with the alternating-direction method of multipliers (ADMM) {{cite:af9b639154388dcbf8051919763cf7a631742f63}}, {{cite:9a1eceda9e9fa4eba4a7770baff429b5ce1b6389}}. In this approach, auxiliary variables are introduced into the regularized optimization problem via an additional constraint to decouple the least-squares problem from the {{formula:03baca8e-7cd2-4aec-b8c9-20fd8dab01d4}} -norm problem and recast the latter as a denoising problem, which can be solved efficiently with proximal algorithms {{cite:715f41d20543d0146574eb9aeb4a62b1fa279feb}}. The resulting constrained problem is solved with ADMM by updating the various classes of primal variables in alternating mode. Moreover, the primal and the dual variables are also updated in alternating mode in the framework of the method of multipliers. The reader is referred to {{cite:2a8da4bdff8f34ccc9f47a36f2519d491bb31a6a}}, {{cite:4650c6b88c299d6428a785040845ae28b44c780a}}, {{cite:c36b0eae187e3c8e995085ee2d50ef53d8d81afe}} for the implementation of various kinds of regularization in ES-FWI and FWI with ADMM.
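The splitting strategy described above can be sketched on a toy l1-regularized least-squares problem (a generic ADMM/lasso stand-in, not the FWI operators themselves): the x-update is a least-squares step, the z-update is the proximal soft-thresholding "denoising" step, and the dual update implements the multiplier ascent.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (the l1 'denoising' step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """min_x 0.5||Ax-b||^2 + lam*||z||_1  s.t.  x = z, solved by
    alternating x-, z- and dual-variable updates."""
    m, n = A.shape
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))      # cached for the x-update
    for _ in range(n_iter):
        x = L @ (Atb + rho * (z - u))             # least-squares step
        z = soft_threshold(x + u, lam / rho)      # proximal (denoising) step
        u = u + x - z                             # dual (multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10); x_true[[1, 5]] = [2.0, -3.0]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.05)
```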
m
35faf26abc532ab0a5e3514a5690bc10
The DR estimator can be interpreted as estimating the PATE by using the regressions, then employing IPW on the residuals to adjust for bias. In this approach, we first estimate the nuisance parameters {{formula:70ce395c-8782-4d08-bc36-44a663ee6834}} and {{formula:47fb106a-e091-4203-9ef6-1d2b27e4f94f}} non-parametrically, then estimate the parametric part, the average treatment effect. When either one of the postulated models is misspecified, {{formula:e129d085-bf1a-44c0-861d-98b2bf46a5dc}} is still consistent for the PATE, a property referred to as double robustness {{cite:98c73d4475ba64bd1a85bd3dd4d84680b00178f0}}. A sequence of papers has investigated the properties of the DR estimator. {{cite:590ae9e674e6263ec5f5d0ab1453df88a3269fba}} showed that the DR estimator is more stable than the IPW estimator when the propensity scores are close to 0 or 1. {{cite:eaef1f652e9a91e3d7522800b8f65933ce865031}} examined the behavior of the DR estimator under high-dimensional regression adjustments. {{cite:7c6be9f91c3f4fbeb4f53ec770f780344cf3b633}} laid the foundation for later research on the semiparametric efficiency of the DR estimator in estimating ATEs, and later, {{cite:6095cf01eac6e5e841a19f0507ed9fdad8ddd842}} demonstrated the effects of the propensity score on efficient semiparametric estimation of ATEs and ATTs. In recent years, research on incorporating machine learning algorithms to estimate the nuisance part of the DR estimator has become popular. {{cite:e0ff0cf26e4e3924b170a3ab3dd4ecc11595e411}} proposed using machine learning models such as neural networks, support vector machines, and decision trees (CART) as alternatives to logistic regression for estimating propensity scores. 
Recently, several authors, such as {{cite:15f6eb36959ba3c275616bbc86294dad8c42d6af}}, {{cite:0545f75b9566639d6d40c64edaac483f75c4d299}}, have discussed estimating the nuisance parameters in the DR estimator via cross-fitting so as to attain efficiency under certain conditions. Specifically, the idea of cross-fitting is to split the data into K folds and use the held-out folds to estimate the nuisance parameters, namely the outcome regressions and propensity scores, while the remaining folds are used to estimate the treatment effect.
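The DR (AIPW) estimate itself is a one-line formula once the nuisance functions are in hand. Below is a hedged NumPy sketch on synthetic data with oracle nuisance functions (in practice the regressions and propensity scores would be fitted, possibly with cross-fitting as described above; all data-generating choices here are our own).

```python
import numpy as np

def dr_estimate(y, t, mu1, mu0, e):
    """Doubly robust (AIPW) estimate of the ATE: outcome regressions
    plus an IPW correction applied to the residuals."""
    psi = (mu1 - mu0
           + t * (y - mu1) / e
           - (1 - t) * (y - mu0) / (1 - e))
    return psi.mean()

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))           # true propensity score
t = (rng.random(n) < e).astype(float)
mu0, mu1 = x, x + 2.0                  # true outcome regressions; ATE = 2
y = np.where(t == 1.0, mu1, mu0) + rng.normal(size=n)
ate_hat = dr_estimate(y, t, mu1, mu0, e)
```

With both nuisance functions correct, the estimate recovers the true ATE of 2; the double-robustness property says that either one alone being correct would still suffice for consistency.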
m
961097b4fba91c93c0bb368a5110ccf4
Analysis of Speed-Accuracy Trade-Off. Fig. REF shows that our highly efficient model achieves considerable improvements over the existing 13 COD methods. Note that the inference times of all models, except the competitors that appeared before 2017 (i.e., FPN {{cite:7e0b8abde4786f1d2c798aef808d4dcd242fc728}}, MaskRC {{cite:a7da8f6082ddff02a6530ac96626ed225f38852c}}, and PSPNet {{cite:dc6f1eb4555cc6447d07ffd3dc73090e6329189e}}), are tested on the same NVIDIA RTX graphics card. Concerning inference speed, the gap between our model (79.3 FPS, {{formula:f19e4927-4fd6-4e6a-b6e3-1a5d2ccd9fee}} ) and the top-1 model CPD (62 FPS, {{formula:c556f9d2-b4bc-4368-b170-f6f1342894e4}} ) reaches 27.9%, while performance improves by 9.7%. Without high-complexity modules like dense connections {{cite:d7f1d97140ff28956497f6712fbf42f21570ea4f}} or non-local means {{cite:12751ed03fc5e41d91e8499e537d0a8808df6953}}, we only use simple operations (i.e., concatenation, subtraction, and addition) for feature fusion, enabling accurate and fast detection. Furthermore, with respect to detection accuracy, compared with the top-1 model SINet (5 FPS, {{formula:930a17ca-9845-4604-b032-6dca25e0e1da}} ), our method significantly improves the inference speed by a huge margin while maintaining better performance. {{figure:706af748-f35c-446f-a019-95a1382deeb4}}
m
15f222c225107e7db10adca8622b92f6
with boundary conditions {{formula:9a77e5cf-19ad-40d4-980f-a988e3d4661f}} , {{formula:e74f1e37-3ddd-4cde-a08e-7624e9463a0b}} and {{formula:36f68275-7c89-48ff-9f33-20f04b3c3abf}} . The shooting method is one of the popular tools that treats the above two-point boundary value problem as an initial value problem {{cite:88fa3e021dcbde903d55851beb3c1bb7f8649ec3}}. Specifically, the shooting method solves the initial value problem {{formula:82c10d94-4245-4b6c-9b4b-5be6c000ea90}}
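The shooting idea can be demonstrated on a toy boundary value problem (a hedged sketch on the simple equation y'' = 6x with y(0) = 0, y(1) = 1, exact solution y = x³; the actual system above differs): guess the unknown initial slope, integrate the resulting initial value problem with RK4, and adjust the guess by bisection until the far boundary condition is met.

```python
def rk4_ivp(f, y0, x0, x1, n=200):
    """Integrate the first-order system y' = f(x, y) with classical RK4."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def shoot(slope):
    """Treat the BVP as an IVP with trial slope y'(0); return y(1)."""
    f = lambda x, y: [y[1], 6 * x]        # y'' = 6x as a first-order system
    return rk4_ivp(f, [0.0, slope], 0.0, 1.0)[0]

# Bisect on the slope until the right boundary condition y(1) = 1 is met
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if shoot(mid) < 1.0 else (lo, mid)
slope = (lo + hi) / 2                      # exact answer: y'(0) = 0
```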
m
42c23fd71262505506342c3e39b456ab
Table REF shows the SI-SDR comparison with 95% confidence intervals on the DNS dataset {{cite:facc4120d8d5287a41c088c9305cfe9d0c1f9629}}. Comparing VAE-GAN-L and {{formula:e6f3fc41-8078-40c3-b1db-0dba50331a86}} -PVAE-L, there is an evident SI-SDR improvement, which illustrates that adversarial training can effectively improve the decoder's signal estimation performance and benefit signal reconstruction. Additionally, the performance of mask estimation depends on the accuracy of the signal estimation, so VAE-GAN-M also obtains a higher SI-SDR score than {{formula:233c3669-75c8-4798-bd0a-c4d3e37f46d7}} -PVAE-M. Comparing the VAE-GAN-based methods (VAE-GAN-L and VAE-GAN-M) with GAN-SE, we find that all VAE-GAN-based methods achieve higher SI-SDR scores than GAN-SE, which indicates the importance of representation learning for GAN-based SE methods. A disentangled signal representation can help GANs generate a higher-quality target, which verifies our previous hypothesis. Finally, considering that VAE-GAN-M also shows a higher SI-SDR score than NSNet2, the proposed algorithm is quite competitive with current practical SOTA SE algorithms. In this paper, we choose only a basic DNN structure to conduct the related experiments. Based on the experimental results, we believe that our algorithm has strong potential to achieve better SE performance if VAE-GAN is applied to a more advanced DNN structure {{cite:e51db51ad0c2c93a42113bcd30d2b46e1985f3ff}}.
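For reference, the SI-SDR metric used above is commonly computed as follows (a sketch of the standard definition after mean removal; we assume, without confirmation from the text, that the paper's evaluation matches this convention): project the estimate onto the target to obtain the scaled reference, then compare signal power against residual power in dB.

```python
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant SDR in dB: project the estimate onto the target
    to get the scaled reference, then compare signal vs residual power."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    s_target = alpha * target
    e_noise = estimate - s_target
    return 10 * np.log10(np.dot(s_target, s_target)
                         / (np.dot(e_noise, e_noise) + eps))

rng = np.random.default_rng(0)
clean = rng.normal(size=8000)
noisy = clean + 0.1 * rng.normal(size=8000)   # ~20 dB SI-SDR by design
score = si_sdr(noisy, clean)
```

By construction the score is invariant to rescaling the estimate, which is what makes it robust to gain mismatches between enhanced and reference signals.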
r
26b8ad64854d7ab43370b1b325b564a5
We report the SacreBLEU scores {{cite:2bc89ba4254672423136d996dc4ab90dada10f56}} in Table REF . We observe that our proposed Faster {{formula:7cad853f-ff54-4df2-b1e7-707060214661}} NN-MT model achieves BLEU scores comparable to vanilla {{formula:0d35582f-b4f7-4cb0-801e-3c4d2a3584c2}} NN-MT and Fast {{formula:c81028de-fdeb-4627-be51-0f2df378f23c}} NN-MT on the English-French and German-English datasets, but with a significant speedup.
r
52e0cdf4184bf06afd7ca2ec113d3d77
Specifically, we see that on all of the 5w1s and 5w5s evaluations, Protonets trained with achieve better performance. In addition, for each of these evaluations, we consider training with different numbers of ways and shots as proposed in {{cite:3f32fed37113a50f0f946de7994dafe7af0687ef}}, {{cite:6dd6feb10bf0584f9810648753c834c44e5f010c}}. Our results also confirm that is better for a range of meta-training specifics. Moreover, for other meta-learning methods like SVM and RR, fixing the support pool can also improve performance. For all the experiments in Figure REF (a) we use the ResNet-12 backbone. To further validate our findings, we show mini-5w5s meta-test results on two other architectures in Figure REF (b). WRN-16-10 has roughly 50% more parameters than ResNet-12, and we see that continues to perform better than in this over-parameterized regime. On the other hand, it seems that fixing the support pool for shallower backbones like Conv64 hinders meta-test performance. We hypothesize that is effective for over-parameterized models and algorithms but less so for shallower models with fewer parameters. Further analysis of how model architecture and over-parameterization affect is a future research direction.
r
6830bcf1201da8d3680c43d561b8a3ff
Varying {{formula:a3d5ff66-811f-4e9f-b0d3-5f7df8484259}} beyond unity, we are able to explore another portion of the parameter space of our models as in fig4. We depict there the allowed curves in the {{formula:082f5911-8daa-4e10-bd8d-90e8f488f66d}} plane for {{formula:158b67e7-939f-43de-b554-c7361a4a2255}} central {{formula:4f8c77b2-9fc4-4798-99e6-fb0614224dca}} in Eq. ( REFa) and EHI– see Fig. REF -(a) – or THI– see Fig. REF -(b). We use the same shape code for the lines as in fig4, with the change that in Fig. REF -(b) the dashed line corresponds to {{formula:c4d0eb7a-7af2-4098-9bf4-5be6b750d0db}} and not to {{formula:f8954999-9936-4521-b1f0-87deed2dbf90}} as in all the previous cases. The reason for this replacement is that RCs invalidate almost the whole parameter space of THI for {{formula:a1371894-69b2-42f2-a155-86a7b2c047c3}} and {{formula:e5866b0b-6612-493b-95d3-968b11348e39}}. From our data we see that {{formula:69d9827d-791f-483f-84d5-2c01b6b32b1c}} increases with {{formula:958cfec3-8c90-4386-a876-25492556b5ad}} as in the standard version of the E- and T-models {{cite:0e6cb80decc9458070d7739b9c387dcbd4a9d025}}, {{cite:6c7fba30a9f212141717f1b687f5d332ac8d44f8}} and in agreement with the analytical findings in rs. The progressive enhancement of {{formula:3c340d21-9474-4aee-a533-ef8af4436ac2}} along each line is shown by gray numbers. Note that segments of the solid and dot-dashed lines in the left graph coincide with each other. We display the {{formula:3bc2d083-1596-4d65-b3cc-848f4c342e0b}} 's corresponding to {{formula:68e552d1-d99b-4fb5-8ea0-3537126ab679}} along the common part of these lines. The upper bound on {{formula:58699562-843d-4036-a819-8f0af3015ea6}} in Eq. ( REFb) provides an upper bound on {{formula:55ecb7d0-2f50-45dc-945a-b3d3e83f17dd}} which is related to the geometry of the moduli space via Rs. Namely, we find the following maximal {{formula:4a2db68a-ff02-4456-9dcb-d631df1fef0e}} values {{formula:1385ad49-38a3-4e7d-887a-4a1b0de42646}}
r
f137bdbc7c7f9d192538cbc1d228cd28
From a physical perspective, both nonlocal and multiscale effects are characterized by a collective and multilevel transfer of information, from the finer structural (material and/or geometric) scales to the coarser (often, continuum) scale. The collective information leading to nonlocal effects is associated with multiple long-range interactions typically distributed in parallel within a specific finer structural scale {{cite:88eba04f55e6d1546473a8e7d0bb8594cf3a4c8a}}, {{cite:cb2070333990b003335a49d438a382f93fc6aa03}}; in the case of multiscale effects, they correspond to a hierarchical distribution of multiple structural scales operating in parallel {{cite:94084d9fe862c7b3699ec6e3f1072bbd2c6ac662}}, {{cite:e236dbea3d878538db3b4692de9807fa3286c5c5}}. The coexistence of hierarchical structural scales and localized long-range connections at a given scale leads to multiscale nonlocality {{cite:15335e16ecb63100156b24c2db755f39ee3c9f80}}.
i
52bf55370714be7ea91d0beef4302011
Landmark: As a baseline, the standard landmark detection approach has been used, similar to the work done in {{cite:261d53afc7cd0197efd493e7739582b41ae33162}}. With an ImageNet {{cite:6eb4b099f183cc0f783dfed62f33faaaefeb45a4}} pre-trained ResNet50 {{cite:87ae4cc430b137c7a25e7d137e94eb69be2a8814}} backbone, we regress the point locations using an FCN to localise landmarks. HigherHRNet {{cite:50751e91a09f0d1c21226e66cfe042993a89e74e}}: is a bottom-up method for 2D human pose estimation that aims to localise human anatomical keypoints. It learns scale-aware representations using a high-resolution feature pyramid. This approach can solve the scale-variation challenge in bottom-up multi-person pose estimation and localise keypoints more precisely. In this study, we have used the same approach to localise the caliper endpoints in US images by generating high-resolution spatial heatmaps (see Fig. REF ). LCFCN {{cite:b3c1a4b8004580c13e1a7f20730f66e0af020147}}: LCFCN is based on a semantic segmentation architecture similar to FCN {{cite:2664837d7a955c0634993e74cf83c507dd67a343}}. The LCFCN method optimizes a loss function that ensures that only a single small blob is predicted around the centre of each object (in our case, a caliper point), to stop the model from predicting large blobs that merge several object instances. We take the centroid of each blob to get coordinate locations and measure the distances between coordinate pairs to obtain the TrA thickness. LCFCN+CoordConv: We improved the LCFCN {{cite:b3c1a4b8004580c13e1a7f20730f66e0af020147}} prediction by adding a CoordConv {{cite:be6a3266bcc09c86f5950c255a5434d2b50da11e}} layer at the beginning of the FCN8 {{cite:2664837d7a955c0634993e74cf83c507dd67a343}} architecture. The image first passes through a CoordConv layer, which adds pixel-wise spatial location information, and then through the FCN8 layers, allowing the CNN to locate caliper endpoints in the image more efficiently.
r
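The blob-to-measurement step described for LCFCN (take the centroid of each predicted blob, then measure the distance between the coordinate pair) can be sketched as follows. The function name and the two-blob assumption are illustrative, not from the paper's code:

```python
import numpy as np
from scipy import ndimage

def caliper_points_and_distance(blob_mask):
    """Centroid of each predicted blob and the distance between the pair.

    blob_mask: binary array where each small blob marks one caliper point.
    """
    labeled, n = ndimage.label(blob_mask)                 # connected components
    centroids = ndimage.center_of_mass(blob_mask, labeled, range(1, n + 1))
    centroids = np.asarray(centroids, dtype=float)
    if len(centroids) != 2:
        raise ValueError(f"expected 2 caliper blobs, found {n}")
    return centroids, float(np.linalg.norm(centroids[0] - centroids[1]))
```

In practice the returned pixel distance would still need a pixel-to-millimetre calibration factor before being reported as a thickness.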
ee814183b5e23e3feeb4882926de348d
The study of the spread of diseases is an important interdisciplinary research topic {{cite:07a58d0034a30528010256e9d89e2dc40c8bd65e}}. Mathematical models are essential to understanding, forecasting, and studying control measures for infectious diseases spread {{cite:0661145602fe7a65ad34dbb112a07715ac506963}}. In general, the epidemic models are compartmental, i.e., they divide the host population ({{formula:a709e4e0-c4fb-4c49-a8f2-84d8f14ff62a}} ) into compartments, for instance susceptible ({{formula:6028c1e8-45c7-4fd7-a9a4-b76c52be0907}} ), exposed ({{formula:c8730002-af02-4142-a147-6e59856fe22b}} ), infected ({{formula:8a1e2120-af9e-412e-9f14-067c25a56a6e}} ), and recovered ({{formula:d231c0d1-dd81-444b-8ec2-27ae766d5d40}} ) {{cite:17ffa5dc4a331b91c4a9cddc015ca4e90c006a44}}. {{formula:861e7494-f18f-4c23-a65c-985b743d1b40}} is related to healthy individuals who can contract the disease. {{formula:e3f59396-0b86-437f-82a8-86e7138ab27b}} corresponds to the individuals in latent {{cite:68f1fef0c7121dfed723ee6e3596960239b5b20a}} and/or incubation period {{cite:3e9dd3aecaeff6537b8ebedafcd516d4a32d668e}}. In the latent period, the individuals can not transmit the disease {{cite:0661145602fe7a65ad34dbb112a07715ac506963}}. In the incubation period, the exposed can transmit the disease with a lower incidence than the infected individuals {{cite:646ea2c6b7248e6dda0d802aaa37b7de58f51250}}, {{cite:b923db47c648fa73ca1d99ef1085119a124b73a1}}. {{formula:4ac5e633-f686-474c-9ffe-76e3657a3f9d}} is associated with individuals who transmit the disease. {{formula:8fa9c309-7d54-47bc-8bf2-d3c4ead190af}} is related to the individuals who were infected and got immunity, permanent {{cite:0661145602fe7a65ad34dbb112a07715ac506963}} or temporary {{cite:5730b961c6f43f1e607eff3593a8230121d25c7a}}. 
The composition of these compartments forms the classical epidemic models: Susceptible-Infected (SI) {{cite:3f4254e958971855bfad3e63371a4b15ab47da0b}}, Susceptible-Infected-Susceptible (SIS) {{cite:691f4eaeefe4829d088e8d87d97a5dfb2ed26104}}, Susceptible-Infected-Recovered (SIR) {{cite:5bc2c32efa6249f8e4562ae9456b9bbb43a2a07c}}, Susceptible-Infected-Recovered-Susceptible (SIRS) {{cite:76d6518cac9ef2c184132a06fd6b4e77dba2cedf}}, Susceptible-Exposed-Infected-Recovered (SEIR) {{cite:628aaa9ca17941da4f119293292f401eb81115df}}, {{cite:0661145602fe7a65ad34dbb112a07715ac506963}}, and Susceptible-Exposed-Infected-Recovered-Susceptible (SEIRS) {{cite:5730b961c6f43f1e607eff3593a8230121d25c7a}}. An introduction to these models can be found in Ref. {{cite:17ffa5dc4a331b91c4a9cddc015ca4e90c006a44}}. These models have been used to study the dynamics of many diseases, for example, COVID-19 {{cite:9f5cd0f99a691d1e9debe5fd037dffda1f15c692}}, dengue fever {{cite:39d6c460944365ebedbe0841aff046aa131b0177}}, and childhood epidemics (e.g., measles, diphtheria, and chickenpox) {{cite:2e02d2ea2f7b3ef7752b013a92e46776837b0546}}.
i
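The SEIR dynamics described above can be sketched with a minimal forward-Euler integrator. Here beta, sigma, and gamma are the usual infection, incubation-exit, and recovery rates; the parameter values in the usage below are illustrative:

```python
import numpy as np

def seir_step(state, beta, sigma, gamma, dt):
    """One forward-Euler step of the SEIR model; state = (S, E, I, R)."""
    S, E, I, R = state
    N = S + E + I + R                     # total population, conserved
    dS = -beta * S * I / N                # susceptibles become exposed
    dE = beta * S * I / N - sigma * E     # exposed progress to infected
    dI = sigma * E - gamma * I            # infected recover
    dR = gamma * I
    return (S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR)

def simulate(state, beta, sigma, gamma, dt=0.1, steps=2000):
    for _ in range(steps):
        state = seir_step(state, beta, sigma, gamma, dt)
    return state
```

Because the four derivatives sum to zero, the total population N is conserved at every step, which is a quick sanity check on any implementation.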
486da647fb28d377d4c11a4ccbc57f34
There are several ways to approach this challenge, such as with ConvLSTMs {{cite:7f2b0d0472cdaee85ec7a881802fd16e041d20d5}}, Graph Neural Networks (GNN) {{cite:d1dcd49657d542bfae74a297d2bd516c398e984a}} and U-Nets {{cite:f4466be901d98941d1b2384e47dea004cf0ad0ce}}. In other similar competitions of spatio-temporal data, U-Net type architectures have shown the best results. For that reason we mainly base our work on U-nets, specially on efficient U-Nets. The neural network architecture used in our work is a recent state of the art model called SmaAt-UNet {{cite:580798ebed3dd07c418432619b8b21d00e6bb143}} (See Section REF ). Some preliminary tests were done on the U-Net++{{cite:44d132cedf016a9e16fe35c01411877567765037}}, a U-Net based model with nested dense convolutional blocks, and different backbones. Table REF shows the size and the number of parameters of each model. These larger autoencoders were not used as they reported virtually the same results while requiring larger training time. In contrast, SmaAt-UNet is a much smaller and efficient model. As a result, all further experiments were done with the SmaAt-UNet model.
m
22509e6e27ce151b3990dbad22950d88
For binary logistic regression, the loss equals {{formula:9648a22e-57db-4d95-88b7-710544972f20}} where {{formula:f7597c2d-ece0-4ba4-82bc-02484ffdea06}} denotes the sigmoid function. As shown in {{cite:c577761848510c1b7510ff19d34ae4ea52acdd04}}, the assumptions (1)-(3) in REF are satisfied with {{formula:9dc02067-7209-46c1-8c6d-472c9fceea1c}} and {{formula:5d0c6219-c249-4d68-82bd-738cc22cb20e}} . We only need to show that (4) and (5) of REF hold as well. Observe that {{formula:3c675683-b10d-4ee8-9dd3-c5654e7c194e}} Since the sigmoid function {{formula:0d870fd8-2216-41ea-9ff8-9c4299b829f7}} is restricted to lie in {{formula:25814b05-e6cd-4a35-9dde-4eb398ea6078}} , {{formula:974db9ef-639a-460b-b357-e36db9820fc2}} is bounded by 1, which means that our loss satisfies (5) in REF with {{formula:f631e75c-af40-4886-bb69-08684c4dbde3}} . Based on the Mean Value Theorem, one can show that {{formula:fe7f6acf-84c3-4faf-946c-e13f3f6332d5}} is {{formula:3a42403c-7cb9-4511-902a-3f672b53bd4f}} -Lipschitz. Using some simple algebra, one can also prove that {{formula:1175fb72-8ffd-4f98-a4e0-ff79a98f25ea}} Thus our loss satisfies assumption (4) in REF as well, with {{formula:5246715d-ec16-4bbd-a606-e30e3c78d87a}} . For multi-class logistic regression, one can adapt the “one-versus-all other-classes” strategy which leads to the same result.
d
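The boundedness claims for the binary logistic loss can be checked numerically. This generic sketch works with the standard per-example loss as a function of the logit z; the constants in the text refer to the paper's own (placeholder-hidden) formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(z, y):
    """d/dz of the per-example binary logistic loss
    -y*log(sigmoid(z)) - (1-y)*log(1 - sigmoid(z))."""
    return sigmoid(z) - y

# Since sigmoid(z) lies in (0, 1), the gradient sigmoid(z) - y is bounded
# by 1, and its derivative sigmoid(z)*(1 - sigmoid(z)) is at most 1/4,
# so the gradient is Lipschitz.
z = np.linspace(-30.0, 30.0, 10001)
assert np.all(np.abs(grad_logistic(z, 0.0)) < 1.0)
assert np.all(np.abs(grad_logistic(z, 1.0)) < 1.0)
assert np.max(sigmoid(z) * (1.0 - sigmoid(z))) <= 0.25 + 1e-12
```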
4ebe2e5042c77545cb5330ab2268534b
A very successful and popular variant of the LSTM is the GRU unit {{cite:e40249b2f08556edecadc81d15de9d734e8b7114}}. The GRU has only an output, as opposed to both a cell state and an output, and uses fewer gates. We showed that adding similar rotations to GRU cells does not yield the same performance improvements, and that the baseline GRU model outperforms both the LSTM and RotLSTM models in most cases.
d
f8ea6a51a13304d134b8b16138aba0e2
In this section, we show some results of concave maximizer Algorithm REF . They follow the proof of (inexact) SARAH {{cite:82c720d5c98a2cee063d74bf81bfab06f940ca92}}, {{cite:1e13eb0740b0c81cf2c728e84fafb931dcbc4ce1}} and we present the details for completeness. We denote {{formula:3cc7da2a-5996-4cc5-9899-fa83ed93cf2b}} for fixed {{formula:c56663c2-7d31-4c4f-a52d-94ba21658a8d}} . We let {{formula:8332ff96-d524-41e4-9dd8-23c6f9ef0bb6}} and {{formula:dc6832ca-fade-4923-9e01-441a097fc70d}} . It is obvious that {{formula:5897dc90-c0a1-4134-a96f-794c9100393b}} is {{formula:56d47bdc-40e0-418f-ad53-e44bc4fdda05}} -strongly convex and has {{formula:20684384-075f-4855-ac0a-1d167160c64f}} -Lipschitz gradient. We first present some lemmas as follows.
r
2c5eb02f18f1b0ff98462430ba710605
In this section, we consider a typical MMTT scenario where we include both closely spaced targets and maneuvering targets described in {{cite:d27d6acac4684722675108cbf45e6208e5b7a29e}}. The true trajectories of the three targets are shown in Fig. REF . As shown in Fig. REF , the three targets fly westward with their motion described by the constant velocity (CV) model, before executing a {{formula:69f5d83a-d6d9-4f06-9685-6654d909df97}} coordinated turn based on the constant turn (CT) model. Then the three targets fly southward based on the CV model, followed by a {{formula:79d1e3ab-a9d7-427f-b1a1-035e04c8bdad}} coordinated turn based on the CT model. Finally, they continue to fly westward based on the CV model after the turn. Here, the state transition matrices of the CV and CT model are {{formula:b9e3ee1e-194e-4ee6-b5b3-861c0800642e}}
r
9035857a98f3c8ce2c182f603e2fe859
Every quasicontinuous dcpo is sober (Proposition III-3.7 of {{cite:273133e4eac223f8b7ffc949c92418c2b17cf7e9}}). A dcpo {{formula:cc237ac4-ae68-42ba-8a3d-3a8616b895de}} is quasicontinuous iff the Scott open set lattice {{formula:b737bf15-b4a7-4797-912e-a2f8de8855ba}} of {{formula:2a2239af-f284-44c1-9302-edff27990059}} is hypercontinuous (Theorem VII-3.9 of {{cite:273133e4eac223f8b7ffc949c92418c2b17cf7e9}}). Assume that {{formula:c1759233-a695-42ab-bb4a-d13938cf58b6}} is a quasicontinuous dcpo and {{formula:b5d2c3e0-ad74-4bde-9f94-560a97a9c01d}} is a dcpo such that {{formula:ec62e73e-86bb-4953-b0fb-ff37b7083163}} is isomorphic to {{formula:dd79c10b-20b2-48af-8b0d-c0e19dd0a1dd}}. Then {{formula:181da326-84f3-460e-8953-ef3beedbd8c7}} (it is dual to {{formula:381d8e4f-6eae-4581-8d13-055bcd4c69da}}) is isomorphic to {{formula:82b992c8-83e1-4e1b-93ab-005889a36ada}} (it is dual to {{formula:4a6b76cc-222a-49d4-8d62-3c47c0f6c750}}), thus {{formula:4d29b9f0-2c1a-4925-900d-c197ddf65641}} is also hypercontinuous, implying that {{formula:68a1739a-8cb5-4b38-a95f-d126944affe0}} is quasicontinuous. Thus both {{formula:234e8281-8846-4d37-8deb-e628c171f714}} and {{formula:f07e0520-f568-488a-b044-1d0b1b7816d1}} are sober spaces and they have isomorphic closed set lattices, hence by Theorem , we have {{formula:ac17f242-cd63-4379-a4e3-85ef71814c7b}}. From this we obtain the following lemma.
r
705292f667d26d3b462b6b8c45e52ce6
Much effort has been devoted to crowd counting. We broadly classify existing methods into detection-based and regression-based methods. The detection-based methods {{cite:aea1d0054a33599508d819ae150766e08130d68e}}, {{cite:004ec7c441110648b99f286ed3dccf6a6417a1b6}}, {{cite:c15bd93da24b98bb9ce88735706822e56b1fbfe3}}, {{cite:360ccf773c586997bc7a391b68a4a644fe940690}} obtain the crowd count by counting the number of positive detections. The regression-based methods {{cite:a1ccea8b565c8bfa8f3cd03f692016455498da5f}}, {{cite:8c172f3a6fe8b65a696411655978d4ebcb98ba75}}, {{cite:3ec8531272de411063baacdb2aaf63878ff2f953}}, {{cite:004ec7c441110648b99f286ed3dccf6a6417a1b6}} utilize the mapping between extracted features and cropped patches to regress the crowd count. However, the aforementioned methods are complex, because their features must be extracted through numerous and cumbersome designs. In recent years, deep learning has achieved breakthroughs in many fields {{cite:07e0833e61aafe804030fc741e38a2124aa23cb8}}, {{cite:b398d14e1b48813faf2d62da23c759cba592dbb9}}, {{cite:5ff31ab4d83223113773de7ec739317cbaa6dd5a}}. Many CNN-based methods {{cite:cd1faebb3cfd357ff50af271d73d24b9caf73816}}, {{cite:808cf316d33a0bc20842317f2de1cbf74ac5f82e}}, {{cite:e54e41bbc834ed8362a6a8a7c5267c5f9eaa6e26}} have been proposed for crowd counting. They learn the non-linear mapping between training images and crowd counts. Researchers employ these CNN models to generate a density map that records the count and spatial information of the crowd at each pixel location; the density map is then integrated to obtain the crowd count. {{figure:023c2d5e-e9ac-4257-a42a-544ed71d88e0}}
i
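The density-map construction mentioned above, in which the map integrates to the crowd count, can be sketched as a sum of normalized Gaussians placed at annotated head locations. The bandwidth and grid size here are illustrative:

```python
import numpy as np

def density_map(points, shape, sigma=2.0):
    """Place a normalized Gaussian at every head location so that the
    integral (sum) of the map equals the crowd count."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    dm = np.zeros(shape)
    for (py, px) in points:
        g = np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2))
        dm += g / g.sum()   # each person contributes exactly 1 to the integral
    return dm
```

Normalizing each Gaussian by its own discrete sum (rather than the analytic 2*pi*sigma**2) keeps the count exact even when a head lies near the image border.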
79975a220016f21961d1298d0dae7663
The symmetric part of the Ricci tensor field, when it is non-degenerated, serves as a metric. This was noticed very early in the development of differential geometry (see for example section 5 of Ref. {{cite:0a017eafd584708278342eb730412e3364241340}}), and used recently in Ref. {{cite:4033376c891cde1771202a26ad1dc872171393cd}}. A notable disadvantage is that interesting cases, such as Minkowski and Schwarzschild, cannot be described using this notion of metric. The quasi-Hodge dual of the {{formula:cc12c2d8-447b-4e6f-bf03-c080550f73a9}} -field, i.e. {{formula:a35f3c60-e9bd-47ad-8b25-176afd3f1e5b}} , is a symmetric {{formula:95f1e414-4747-4ca5-acae-fad0ac98cef5}} -tensor density. When {{formula:5d8191e3-ab20-42b7-b37a-496e63c06334}} is non-degenerated, it can be used as an inverse metric density, similar to that used by Eddington, Einstein, Schrödinger and others to build affine models of General Relativity (see Ref. {{cite:5cc95f8fefde70f9a2696256167fb811af7b5564}}). In Ref. {{cite:5d2e1208fe7c8949f7c12328a9ad1212d4f5837a}} this analogue is used to intuitively relate the three-dimensional action of polynomial affine gravity with General Relativity nonminimally coupled to the {{formula:ec1997e0-d8cb-4d16-b21c-b6ae937be0e7}} -field (see Eq. (9) of the referred article). The construction {{formula:6c7c69f9-ebb0-4580-b32f-4d732deddb15}} , defined from the torsion, is a symmetric {{formula:a450f0af-94bf-416a-9868-5497577318d3}} -tensor field that serves (when non-degenerated) as a metric. This tensor was introduced by Poplawski in Ref. {{cite:2b2d08dfe9ac1a8dc3c751c2393b97813dfaf0d4}}, and it is related to the symmetric part of Eq. (REF ) (since the {{formula:23c49ed3-7496-4d66-9e1d-71679432e554}} -tensor is proportional to the torsion). The symmetric part of {{formula:f2ca2762-2306-44b8-b5f5-1444239de7e5}} in Eq. (REF ) can also be interpreted as a metric, when it is non-degenerated.
d
b1188171d36731d9c8a6b3be3dc5f937
In this section, we evaluate the performance of PS-SWIPT MAC systems under a non-linear EH model and decoding cost constraints. Specifically, to gather a practical overview of the systems, we consider the non-linear function of the EH rectifier. To this end, we set the circuity parameters of the EH rectifier in (REF ) as {{formula:d3b995f4-303a-4e4f-b79a-bc71ccadd68e}} and {{formula:d41bea9b-4f04-417e-a580-bd5704407b2e}} with {{formula:49044341-97f6-4e2a-bcb8-4f8e95115ecb}} mW (see {{cite:fcf1754ec12db2924bf540d2b57eea409434da36}}, {{cite:131b43606260333496fd3fca031008a680ad1630}}, {{cite:552d414146885235526126f00e7d55c18f007b2e}}, {{cite:c91e823660741d1b0b02740e3cdca0da38a25c68}}, {{cite:8ad15b52eb51af1d8b9647b564f7105b36bed800}}). Furthermore, we assume that the users, with power sources {{formula:5f85a24a-3e9b-46cd-9573-6d42689873a2}} W, are located at {{formula:f71086c0-c4c0-4437-a149-832831c9f9d0}} m from the destination and the channel gains are defined by {{formula:fa9548b0-becd-439f-9019-f83c8e3626c8}} and {{formula:eb779fc3-1d2e-4aa5-a123-2eeef0fa1eed}} , with the propagation exponent {{formula:abb1896c-abf2-4aa4-ad45-73ce317594c5}} . Finally, as described in Remark REF , we apply the worst case in practical PS-SWIPT by setting {{formula:40ae926e-26e8-4bb8-9492-a9acf27d2ab3}} dB and {{formula:b96d7c1d-5e9e-4b88-9921-926a4f96191d}} dB to satisfy {{formula:8459badd-4868-4f4e-aa65-2635ec0b73be}} .
r
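As an illustration of a non-linear EH rectifier of the kind referenced in (REF ), the widely used logistic saturation model can be sketched as below. The parameters a and b and the 24 mW saturation level echo the values in the text, but the exact parameterization in the paper may differ:

```python
import numpy as np

def harvested_power(p_in, a=150.0, b=0.014, p_sat=0.024):
    """Logistic non-linear EH model: zero at zero input, saturating at
    p_sat (24 mW here) for large input power.  a (1/W) and b (W) are
    illustrative circuit parameters."""
    omega = 1.0 / (1.0 + np.exp(a * b))           # offset so E(0) = 0
    logistic = p_sat / (1.0 + np.exp(-a * (p_in - b)))
    return (logistic - p_sat * omega) / (1.0 - omega)
```

The key qualitative features that distinguish it from the linear EH model are visible directly: the output is monotone in the input but flattens toward p_sat, so allocating ever more received power to the energy harvester yields diminishing returns.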
ef38e62dfbd01a1f0158843d33866cf0
In general, the topological field configurations that describe vortices are static solutions of nonlinear field equations that admit the spontaneous symmetry-breaking mechanism {{cite:ad634fc955cf221573df5db6537ef85a56f6de29}}. In this sense, such structures have been intensively studied, and their consequences are applied in several areas of interest, mainly in cosmology, where it is known that they form during phase transitions {{cite:6d2c9dd6c64e41ecabe0668e2a7232d7179b1782}}.
i
ef7c04ce194f8a6e4efd13f484ff4c56
Using SMERF, we find that, in simple reasoning settings, most saliency methods perform reasonably well on average, as measured by IOU values with a correctness threshold of 0.5. However, we also observe some failure modes across different experimental scenarios. For instance, LRP {{cite:f05af1acf9d61429b9c6d9b5b6bc5a39509007c7}} exhibits significant performance degradation when the input image includes a randomly located set of spurious features. Moreover, in complex reasoning settings, we observe that nearly all of the evaluated saliency methods suffer from average performance drops and demonstrate acute failure modes. Our key results are summarized in Figure REF . {{figure:1ee5d371-9e90-427c-b73d-1698f71c21a4}}
i
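The IOU-with-threshold-0.5 correctness criterion used in these evaluations reduces to a standard intersection-over-union between binary masks (the saliency map thresholded into a mask versus the ground-truth region). A minimal sketch:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union of two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def is_correct(saliency_mask, ground_truth, threshold=0.5):
    """A saliency explanation counts as correct when IOU >= threshold."""
    return iou(saliency_mask, ground_truth) >= threshold
```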
57b1b2ba7b6ebc1306d1f9a52e5957f6
The use of the ensemble Kalman filter for parameter estimation was introduced in the papers {{cite:cb7b843a8c3dd4835410346f50941ad44f6960ac}}, {{cite:776220af650cac98b7f68320c30b120c9ba4e4c0}} in which a physical dynamical model was appended with trivial dynamics for the parameters in order to estimate them; the idea was extended to learn an entire field of parameter values in {{cite:3f1a0ac0e6dbd774abd1cbd619d3a421625b49d8}}. The paper {{cite:1ef9c04e4c272355dc38cb30c1bcc2fcbd7d5d96}} was the first to do what we do here, namely to consider all the data at once and a single mapping of unknown parameters to data. In that paper only one iteration is used; the papers {{cite:2d9a92facb88e592e5ef6ed9c238870e5359bd68}}, {{cite:3a2c26c7f07d5606fe763b76250cd657aed967ce}} demonstrated how iteration could be useful. See also the book {{cite:3851b016cccb4dc71d58d1c1a5028c7e58792697}} for the use of ensemble Kalman methods in oil reservoir simulation.
d
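A single iteration of the ensemble Kalman methodology described here — all data considered at once, with a single map G from unknown parameters to data, optionally iterated — can be sketched as follows. The perturbed-observation form used below is one common variant, not necessarily the exact scheme of the cited papers:

```python
import numpy as np

def eki_update(U, G, y, gamma, rng):
    """One ensemble Kalman inversion step for the map u -> G(u) ~ y.
    U: (J, d) parameter ensemble; gamma: observation noise variance."""
    Gu = np.array([G(u) for u in U])               # forward map, shape (J, m)
    du = U - U.mean(axis=0)
    dg = Gu - Gu.mean(axis=0)
    J = len(U)
    C_ug = du.T @ dg / J                           # cross-covariance (d, m)
    C_gg = dg.T @ dg / J                           # data covariance (m, m)
    K = C_ug @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))
    # Perturb the observations so the ensemble spread reflects the noise.
    y_pert = y + np.sqrt(gamma) * rng.standard_normal((J, len(y)))
    return U + (y_pert - Gu) @ K.T
```

No derivatives of G are needed; the ensemble statistics play the role of the gradient, which is why these methods are popular for black-box forward models.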
15c258e7e65d80317bbe5c834a2a6bbc
Resistivity {{formula:33f4396a-e58a-4951-a044-ef4367ed5319}} of Cr{{formula:c327fac8-73fd-4c54-8507-2dca2d449b6d}} Si{{formula:05a2db5e-0d9a-4e84-aab4-c9d9983d32d6}} Te{{formula:f6c91bc8-df37-4284-8f7c-c5a8e3eb415a}} at ambient pressure is presented in Fig. 3(a). It increases on cooling from the room temperature, showing a typical semiconducting behaviour. Below 50 K, {{formula:44e528cb-cb4a-4606-930c-0cfa85c53a1e}} increases faster with a weak shoulder at {{formula:4e800b3d-d4d3-4d3b-a7de-d4c81f3106f6}} . The temperature-derivative curve shows an anomaly at 37 K, consistent with the {{formula:2c6d2778-8b62-4414-a23a-65d6397328cd}} observed from the magnetization and specific heat data. Therefore the weak shoulder in the {{formula:5ac4c26d-326a-476b-acaa-e92f503964c1}} data is associated with the FM transition. Above 40 K, resistivity is explained by the thermally activated model in the presence of disorder-induced localization of the electronic wavefunction: {{formula:70a790ed-cc5e-4e08-84a5-d1f4d6914dbc}} = {{formula:86c94f20-7e60-49c4-9cf4-21583402479d}}{{formula:23b8a074-a7b6-49ac-93b5-99a3b2e9d7ca}} + {{formula:62548f81-dd81-450a-ba6f-2299937e54ef}} exp({{formula:242a18dc-f49c-4f34-9e6e-8b05bb3fb7f1}} /{{formula:55b1536d-3006-4a78-8b3f-c221cde4f607}}{{formula:58e497fc-af6c-47c4-a7be-c21517903e78}} ) - {{formula:66591415-e41a-4e40-884b-341eaa615f5d}}{{formula:f28963ea-cb9c-4727-b9ac-c89ed208a4d5}} , where {{formula:228e4e18-0a1e-4c8b-9aa6-6ec6a94bad8f}}{{formula:21b4e5f7-2da2-4d9c-a8a7-2b22ace7e2af}} is residual resistivity, prefactors {{formula:fcb84bfa-3dd1-4b1b-8564-07dc202ae441}} and {{formula:504df509-5b4a-4a0e-b654-c86bd2cbd1db}} denote relative contribution of activated and localization scattering mechanism whereas {{formula:0b35a859-94de-4d88-962c-01dcabaf0cbd}} is the band gap {{cite:510f0589e34b8c2f34539f2159d369e976a5b904}}, {{cite:0f198bf94b0e3316963b1e011f4577a1863698fb}}. 
The fit yields {{formula:b97a4045-d870-477d-a39a-82bbd518dce3}} {{formula:28f0d4ec-5e0f-4a2c-a31a-a7fcd36fdaf6}} 18.0(1) meV activation energy, {{formula:3bb5090b-533d-4662-9e24-0e890cfcc2d2}} = 10(2){{formula:036e403c-6775-404f-beb2-ded7f060a698}} 10{{formula:f72b4b77-a6e1-478f-971e-65c5bb7ac713}} and {{formula:8f0c2dc5-9b2b-4d48-9a40-af8671806037}} = 1.77(3){{formula:a364cd65-0740-430a-ac41-13039a24cd98}} 10{{formula:dc901bb2-7579-4165-9803-af0cbbabcde2}} , which shows that electronic transport is dominated by disorder-induced localization. The {{formula:6085b0d8-1883-4312-b6ab-22b26fe6d96e}} below magnetic transition can be fitted with the same equation used for paramagnetic state but where band conduction yields to adiabatic small polaron hopping, i.e. temperature activated term {{formula:03cf54f5-49a4-41ef-90af-51aadc4bf117}} exp({{formula:ebd12353-0a1c-4b83-90be-ac17046cd97c}} /{{formula:4bd6ea01-3268-4223-b4f4-c4cbed735428}}{{formula:673e48ff-fd6e-4cd6-ba42-68b56ff284ac}} ) is replaced by {{formula:e03c4428-371e-4392-ace1-3d534caecbd7}} exp({{formula:64a42320-f5e9-4fb4-8daf-2dc7bc496261}}{{formula:316f4bbf-70d5-4025-85da-81a183ab9172}} /{{formula:f5a18952-db33-4265-98fd-c9a91efe52d6}}{{formula:52c0041c-8179-4825-8903-34c019c97e10}} ) {{cite:d63e2cc0cea947e2ccba9da6ab6f45d1f76b99f2}}. The derived {{formula:1b783eed-e4ec-40a3-8b6b-c62fb3a4d2f9}} {{formula:5529bf0c-9e84-47bf-ae74-027cab185b97}} 0.30(2) meV, {{formula:3706a31d-7d18-43f6-b756-3ed39e20b6ff}} = 2.3(2){{formula:a3923b42-6b22-4d17-a5f5-b231e8342ac3}} 10{{formula:d21b8794-0ab1-4e59-b583-314a16b1fc59}} and {{formula:61bfa914-d0a1-45d8-8286-be02bf91c788}} = 43(1){{formula:8a1d2320-db59-4105-a646-b12d53d312ac}} 10{{formula:b2db9af8-2b89-40bf-8f3c-79f8bef9c2c3}} . 
The {{formula:912c81f7-936c-4d7e-b972-268444316aec}} {{formula:fc073e60-5eb1-411d-91f5-a8b26664b328}} {{formula:964c917c-5701-4edc-a88c-23dce2b004f5}} is consistent with a polaron contribution to the transport mechanism, since {{formula:4c2fc8f2-1c6b-4d13-99c9-03c182ae908c}} is associated with carrier hopping, whereas the energies to both create and activate the hopping of carriers are contained in {{formula:e1d20d6b-13ab-4c36-8e93-008dc68796e6}} {{cite:d63e2cc0cea947e2ccba9da6ab6f45d1f76b99f2}}. It should be noted, though, that disorder-induced localization is dominant in the paramagnetic state as well as below the magnetic ordering, which indicates a high correlation between electronic transport and inhomogeneous nanoscale short-range crystallographic distortions.
r
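The thermally activated part of such a resistivity fit can be sketched with a standard least-squares routine. The full model in the text also contains a disorder-localization term hidden behind the formula placeholders, so this is a simplified, illustrative reading, run on synthetic data generated with the 18.0 meV activation energy quoted above:

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant in eV/K

def rho_activated(T, rho0, rho1, Ea):
    """Residual plus thermally activated resistivity:
    rho0 + rho1 * exp(Ea / (kB * T)), with Ea in eV."""
    return rho0 + rho1 * np.exp(Ea / (K_B * T))

T = np.linspace(50.0, 300.0, 100)                  # fit range above T_C
rho = rho_activated(T, 1.0, 0.05, 0.018)           # synthetic "data", Ea = 18 meV
popt, _ = curve_fit(rho_activated, T, rho, p0=(1.0, 0.1, 0.02))
```

Recovering the known activation energy from the synthetic curve is a quick check that the fitting setup (units of Ea, choice of temperature window) is self-consistent before applying it to measured data.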
45371c6fa5ea1a5525b8e3bf588bc020
On the other hand, more recent attention has focused on facilitating active reflecting elements at RISs to attain significant performance gains, which lays the groundwork for further research in RIS-aided transmission schemes {{cite:2166f9ec660dc241a31aa8eed222f398fd6edc39}}, {{cite:86c55413525e77a6de59af62e95bfe6eb527616e}}, {{cite:f7f5105131b468b758f1e9203f80a8b00fc1d47f}}, {{cite:ea3a2bd01011e3a527157a87368374364165dffa}}, {{cite:dfc960f201efe315e1b2d883c8d786367d5bc471}}. In {{cite:2166f9ec660dc241a31aa8eed222f398fd6edc39}}, achievable channel capacity of a single-input single-output (SISO) system assisted by an RIS, whose reflecting elements are equipped with additional controllable power amplifiers to simultaneously amplify and reflect signals, has been elaborately analyzed through experimental measurements. Subsequently, in follow-up studies, channel capacity and energy efficiency of fully active RIS {{cite:f7f5105131b468b758f1e9203f80a8b00fc1d47f}} and partially active RIS-aided systems {{cite:ea3a2bd01011e3a527157a87368374364165dffa}} have been compared to the earlier benchmark studies of conventional specular reflection and fully passive RIS-aided systems. Reported results indicate a significant performance achievement for RIS-aided systems with active reflecting elements, compared to prior studies.
i
2db782a8c55d57209f150d483e96cf59
In order to prevent the iterate {{formula:3766c996-6290-49ab-ade5-3789b16b952d}} from becoming very large (causing numerical instability), {{cite:1ea336609064f0189113e580f6497a75d50a7c14}} advocated assigning {{formula:f6dd10f9-d142-4b28-ba03-ffac9b6be2f6}} , for arbitrary-but-fixed reference state {{formula:565ac288-05aa-44ad-95f2-f7ce3dfdcf05}} and action {{formula:33fa43ac-0d0f-438b-a1fd-4c1f9bd5d5e6}} . Alternatively, {{cite:d0f4bb309453a7deef8a85b575a66b5f166f2aa1}} advised setting {{formula:75a90c9e-b784-45e8-979e-9edbaf490ee3}} . Both suggestions seem to follow the heuristics of obtaining the unique solution of the underdetermined Bellman optimality non-linear system of equations in (REF ).
m
9018916fc985777b996ff752bf8eb23d
To simplify the analysis, we have performed the calculations for the case of a real scalar field. Physically, we expect the result to extend to other types of fundamental fields such as fermions or vector bosons. The main difference would be that the vacuum energy density and its sign will depend on the spin and the polarization degrees of freedom of the corresponding field, as outlined in {{cite:fbb6ac2d85f69292e6e81c045b6d2d7be7320741}}. In addition, we expect that only massive fields contribute to the vacuum zero-point energy, while massless fields such as the graviton or photon make no contribution.
d
407dd4c9164bfa613ab0e095263820a2
In summary, we demonstrate that once the thermal boundary layers decouple from the viscous ones and locate within the turbulent convection bulk, the ultimate scaling of the heat flux and flow velocity can be perfectly realized even at relatively low Rayleigh numbers. Our results support the physical picture of the ultimate state of turbulent convection, namely that when the momentum boundary layers become fully turbulent, the heat flux is independent of the fluid viscosity. In line with the unifying model for RB convection {{cite:2101495f220c3b4ccaed4ee2c2305ff6651bec8d}}, {{cite:8202f5b395be10e9a7f16159a50ce86882dd0234}}, the current flow corresponds to the IV{{formula:759a24cc-da4f-4dbf-8bb2-d5fc7a099667}} regime. Compared to homogeneous convection {{cite:d9de4d817fed6fed0eca54306d08116583d0ef95}}, {{cite:f640e4346bc667e4a9d021ae4730ce8b3f9a0c9b}}, the current flow configuration is closer to the ultimate regime since the thermal boundary layers are included and the ultimate scaling is achieved.
d
f1ffd7ead58fc42907afbd9a6d9b086c
While the distance between persistence images is not stable with respect to the bottleneck distance on the space of barcodes, it is stable with respect to the 1-Wasserstein distance {{cite:8155585d31df3239ed0f7e7b0cbf2412216ad42a}}, thanks to the weights applied to the Gaussian functions.
m
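The weighting behind this 1-Wasserstein stability can be made concrete: each diagram point, mapped to (birth, persistence) coordinates, is smoothed by a Gaussian whose weight vanishes linearly with persistence, so points on the diagonal contribute nothing. A minimal sketch (grid and bandwidth are illustrative):

```python
import numpy as np

def persistence_image(diagram, grid, sigma=0.1):
    """Persistence image on a square grid.  Each (birth, death) pair is
    mapped to (birth, persistence) and smoothed by a Gaussian weighted
    linearly by its persistence, so short-lived (noisy) features receive
    little mass -- this weighting is what yields 1-Wasserstein stability."""
    img = np.zeros((len(grid), len(grid)))
    bx, py = np.meshgrid(grid, grid)      # birth axis x, persistence axis y
    for birth, death in diagram:
        pers = death - birth
        g = np.exp(-((bx - birth) ** 2 + (py - pers) ** 2) / (2 * sigma ** 2))
        img += pers * g / (2 * np.pi * sigma ** 2)
    return img
```

A direct consequence, easy to check, is that adding a zero-persistence (diagonal) point to the diagram leaves the image unchanged.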
617cd784829bbcac508e30329adbc01e
Semantics embodies the function of language, which is the core for distinguishing a language from a meaningless sequence. However, it is very challenging to explore the semantics of machine language, because it is entirely a spontaneous behavior of the machine. There are also some works {{cite:7fb53f313d550f3b497def4454c687f4e6382fb7}}, {{cite:efd47d3efcbc97a4d448791faa7bb897fac87d69}} that attempt to measure the semantics captured by an emergent communication protocol. They measure semantics by comparing some indirect indicators, such as mean-rank {{cite:efd47d3efcbc97a4d448791faa7bb897fac87d69}}. We adopt a more direct method: semantics can be measured by the performance of distinguishing categories. In MNIST and Animal, we have the category label of the picture. The most straightforward idea is to use machine language as input, the label as output, and learn a mapping by neural networks. From Table REF , it turns out that we can successfully construct a mapping from machine language to category. The results in Table REF show the top-1 accuracy of successful mapping in the training set and test set. We find that machine language has good performance on both datasets. {{table:87dfb37a-bb2b-4c57-af29-aa1e043f8183}}
r
2ce65e7848445c77ddde7d01b1874f6c
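The "direct method" described above can be sketched as follows: fit a plain softmax classifier from discrete messages to category labels and read off its top-1 test accuracy. The toy message protocol, vocabulary size, and all hyperparameters below are our own invented stand-ins for illustration, not the authors' data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, vocab, msg_len = 4, 10, 5

def make_split(n):
    """Hypothetical 'machine language': the first token correlates with
    the class; the remaining tokens are noise."""
    labels = rng.integers(0, n_classes, n)
    msgs = rng.integers(0, vocab, (n, msg_len))
    msgs[:, 0] = labels * 2 + rng.integers(0, 2, n)
    return msgs, labels

def featurize(msgs):
    """Position-wise one-hot encoding of the discrete tokens."""
    feats = np.zeros((len(msgs), msg_len * vocab))
    for i, m in enumerate(msgs):
        for j, t in enumerate(m):
            feats[i, j * vocab + t] = 1.0
    return feats

Xtr, ytr = make_split(2000)
Xte, yte = make_split(500)
Ftr, Fte = featurize(Xtr), featurize(Xte)

# plain softmax regression by gradient descent
W = np.zeros((Ftr.shape[1], n_classes))
for _ in range(300):
    logits = Ftr @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(len(ytr)), ytr] -= 1.0  # gradient of cross-entropy
    W -= 0.5 * Ftr.T @ p / len(ytr)

acc = (np.argmax(Fte @ W, 1) == yte).mean()  # the "semantics" measure
```

High test accuracy indicates that category information is recoverable from the messages, which is exactly the probe used in the paragraph above.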
In Fig. REF , we plot {{formula:9fddb694-369c-49a5-a477-35e2ba6d1e53}} versus {{formula:3e451f58-7c3b-40b2-83e2-893fbe3b4bf1}} for the first boundary condition {{formula:525dc73f-9177-47e6-8b9e-abb809dcc080}} , {{formula:46f94a7d-9208-410d-b625-bb95b91efc9d}} and different values of the Gauss-Bonnet coefficient {{formula:303c0b24-ae1c-4af3-972b-b00dec636be1}} and the nonlinear parameter {{formula:0fbfc9c1-8ae2-4e18-a706-2f935531d301}} . In the absence of the quadratic correction term ({{formula:f67c77ff-2993-4399-b4db-8abc62088979}} ), our results exactly coincide with those presented in {{cite:53396c6dc7fdcb70be2d98c3663055dea535128e}}, {{cite:6040291982f03624d55c63693347d28639d88467}}. The acceptable diagram for us is the red one in each plot, since there is nothing in the bulk to affect the speed of the wave, so the diagram of {{formula:b59b3a6d-2f54-448f-9f99-7e3ab10db000}} is stable. From the tables {{formula:fdd7872d-a1ea-43c6-aa0f-2bc5de529d67}} , it is evident that when {{formula:8a43c136-9265-4eda-92b7-673b189b1c7b}} becomes larger the condensation gets harder. Similar behavior is seen for a fixed value of {{formula:5d3b91ff-f3d2-41a5-8642-34215e877248}} and different values of {{formula:c8f3e0d5-05a7-4557-b2fc-27d866f839ef}} : the critical temperature decreases and condensation becomes harder as the Gauss-Bonnet coupling parameter {{formula:79bf8119-3b20-4db7-836f-a3ae7f5b7253}} grows.
m
676ebede1bd6b21fde05fa1f7e1bbcfc
Using the properties above, the following lemma about the moments accountant has been proven in {{cite:088fc87c25b0e04f1bf0a5ef470627663d807b93}}:
m
f3fa9e1b38b4aaa1ba1ecfc77552ac53
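For intuition, the bookkeeping behind the moments accountant can be sketched for the plain (non-subsampled) Gaussian mechanism with sensitivity 1: the log moment at order lambda is bounded by lambda(lambda+1)/(2*sigma^2), log moments add up under composition, and the tail bound converts the total into an epsilon at a given delta. The function name and constants are ours, and the subsampling amplification used in DP-SGD is deliberately omitted, so this is an illustrative upper bound, not the cited analysis in full.

```python
import math

def gaussian_mechanism_eps(sigma, steps, delta, max_lambda=64):
    """Moments-accountant-style bound for composing `steps` applications
    of the Gaussian mechanism (sensitivity 1, noise scale sigma):
      alpha(lam) <= lam * (lam + 1) / (2 * sigma**2)   per step,
      alphas add under composition,
      eps = min_lam (total_alpha(lam) + log(1/delta)) / lam  (tail bound)."""
    best = float("inf")
    for lam in range(1, max_lambda + 1):
        alpha_total = steps * lam * (lam + 1) / (2.0 * sigma ** 2)
        best = min(best, (alpha_total + math.log(1.0 / delta)) / lam)
    return best

eps = gaussian_mechanism_eps(sigma=4.0, steps=1000, delta=1e-5)
```

As expected, the bound grows with the number of composed steps and shrinks as the noise scale increases.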
By the Bishop-Phelps Theorem {{cite:0e67ab6923777c4b584c5e725ad9b09bdf4a0070}}, {{formula:337de7f6-f9ab-4738-b451-ba399ab7b94f}} is dense in {{formula:aea0776f-8eea-4322-8892-04e02df52d4a}} . Since {{formula:f133b058-67b5-4275-a581-7e61a45d6fa1}} is closed, Question REF reduces to the question whether {{formula:e31fdbf1-2275-49c9-b754-199c992715bf}} for every {{formula:4b08dc91-1343-4167-9af0-368aa52f1e37}} . Before presenting partial results we formulate a useful extra assumption which is satisfied by the planar Jacobian. Its aim is to quantify the symmetries of the class {{formula:dfb5195e-9571-461c-bd77-ce4516bb8ca9}} (see Remark REF ).
r
21ac7c52b501fb44678ced14ae52a140
Fig. REF shows the comparison of test accuracy in a severe non-IID case. To mitigate the weight divergence caused by data heterogeneity, FedProx {{cite:7127bb382fb8c574f5852f38092e37e543cb1c1a}} adds an {{formula:542c76d8-4498-4df3-a680-41c605272412}} regularization term {{formula:4f96fab3-5815-4b7f-bc55-8e35d42be0d2}} to the local objective function Eq. REF during local client updates. We observe that training with FedProx improves accuracy by {{formula:393a5ac0-58aa-4aca-b6c1-9f7f16ea914a}} over the FedAvg baseline after carefully tuning the optimization parameter {{formula:9bc85c43-c4f6-42e1-8f8b-923780c2d4fa}} ({{formula:abdd60be-580a-4501-848b-b0e21a4ad052}} is set to 0.001). However, the gain from our methods is significantly larger than that from FedProx. In particular, Fed-MAE yields a gain of {{formula:4b867004-a58f-4a5d-a1b3-3c7ebf932b26}} in test accuracy. Note that the application of self-supervised pre-training in our method is orthogonal to optimization-based FL algorithms such as FedProx; combining both could potentially boost model performance even further. We leave this as future work.
m
429c0d855d23230e1f2ffbbdf5d5f61a
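The FedProx proximal term described above can be sketched on a toy linear model: the local objective gains a (mu/2)*||w - w_global||^2 penalty whose gradient pulls the client's weights back toward the server model. The linear regression stand-in, learning rate, and mu values are illustrative assumptions, not the paper's DNN setup.

```python
import numpy as np

def fedprox_local_update(w_global, X, y, mu=0.001, lr=0.1, epochs=50):
    """One client's local update with the FedProx proximal term:
    local loss = (1/2n)||Xw - y||^2 + (mu/2)||w - w_global||^2.
    The extra gradient mu*(w - w_global) damps weight divergence
    under non-IID data."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # data-fitting gradient
        grad += mu * (w - w_global)         # proximal pull toward server model
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_g = np.zeros(3)  # server model for this round
w_small_mu = fedprox_local_update(w_g, X, y, mu=0.0)   # plain FedAvg step
w_big_mu = fedprox_local_update(w_g, X, y, mu=10.0)    # strong proximal pull
```

With mu=0 the client drifts all the way to its local optimum; a large mu keeps the update close to the server weights, which is the intended divergence control.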
For the {{formula:a0fb9e8e-df24-4ced-bed0-3b540e0db72d}} monolayer, HSE06 (GGA) gives an indirect-gap semiconductor with a gap of 2.297 (1.744) eV, while the experimental value is 1.94 eV{{cite:58791e4563ff0f6316689438d9550f8376916f3a}}. The difference between the HSE06 (GGA) and experimental values is 0.357 (-0.196) eV, so it may be more suitable to use GGA to study the electronic properties of the {{formula:e29f097f-9318-478f-9fdc-b77598a5873a}} family. Although GGA may underestimate the energy gap of monolayer {{formula:a28cff26-4ee2-4078-a212-4fc75142e466}} , our predicted PQSHI should be qualitatively correct, with only the critical point of the NI-to-TI phase transition changing. The focus of our work is to provide a route to achieve PQSHI, and many PQSHIs should be obtainable in the {{formula:98f927f9-668f-4250-ac7a-5115c9e80224}} family. To confirm this, we also investigate the strain-induced NI-to-TI phase transition of the {{formula:1680b9ee-bccb-42ee-ba75-bae4603fe7a1}} monolayer. The optimized lattice constant is 4.02 {{formula:4c9d6c13-ef63-44dd-a817-f0cb4c69316e}} , and the calculated {{formula:bfcc0ecb-320e-4313-b49b-9b59eca5b277}} and {{formula:3f499fc9-1810-4261-8a37-207a1ed21aeb}} are 89.95 {{formula:04710538-59d3-493c-9acf-6def002f4f94}} and 30.71 {{formula:aaf6d1ca-d76c-48ba-a355-6f35920e7a59}} , which satisfy the Born criteria of mechanical stability{{cite:35cc2e1de0d7b833376e27bf2f7308f43229fc4e}}. The dynamical stability is confirmed by the phonon band dispersions of the {{formula:f4cd3136-0eca-4e55-9240-4279c956e59e}} monolayer in Fig.1 of the electronic supplementary information (ESI), and the thermal stability is confirmed in Fig.2 of the ESI. The energy band gaps from both GGA and GGA+SOC as a function of {{formula:ff8cad67-f31f-4435-a4d8-9861c2d4998b}} /{{formula:a5e8dd95-ffb3-455e-a682-ca56a987b78a}} are shown in t3-1-1, and the energy bands at representative strain points are plotted in Fig.3 of the ESI. 
It is found that the NI-to-TI transition point is about 1.04, which is larger than that of {{formula:bb9fc051-2125-4bd9-8324-fb898e6dbedc}} . The evolution of the WCC and the edge states of the {{formula:99949f8e-310f-4bce-a0bb-d11745794edc}} monolayer at a representative strain of 1.06 are shown in Fig.4 and Fig.5 of the ESI, which clearly show the nontrivial band topology. At the representative 1.06 strain, the calculated in-plane {{formula:0bec2d35-d772-4201-88c1-44eb6139a325}} and out-of-plane {{formula:7b72922d-a3a7-449e-b81d-c6ad63a205d9}} are 3.071{{formula:4446333e-29b8-4f87-83b6-617870507bdf}}{{formula:1b26d050-62e9-43d8-b539-214b979d08ec}} C/m and -0.066{{formula:021816e6-d7b3-4a36-9e38-d7c0084b3412}}{{formula:4b1e4677-7d95-4fee-a64b-a90a6bad8334}} C/m. Based on pe2-2, the calculated {{formula:fcb3d621-1810-46de-8462-a4166940502a}} and {{formula:ac555680-7a73-40e0-9108-0c18ae8697bf}} are 8.07 pm/V and -0.077 pm/V, with {{formula:4d293f1b-714e-4f2b-ae7d-d18f2c105229}} and {{formula:8068d931-68d1-4b2b-96ca-0ebdfafcf2f1}} being 61.59 {{formula:151b67ba-d672-4085-9e19-ed279807d13a}} and 23.52 {{formula:0d3881a8-7598-4d54-bb49-f2c386ce7a5d}} . These results show that the {{formula:ed63d937-68fd-4fff-8715-4d6abe02aa50}} monolayer can become a PQSHI under tensile strain.
d
570e69ef74b1ad52c7ee9cbb7d78ee2b
Our proposed method is inspired by multi-layer ensemble learning architectures, in which the segmentation algorithms in one layer train their models on new training data generated by the preceding layer {{cite:2d6b1123cd3e925811fee15c4713d65e598ed8ef}}. Applied to segmentation of medical images, this facilitates the successive refinement of segmentation results through each layer. The most successful segmentation algorithms in recent years have been based on DNNs {{cite:9388ca31fce9623b95f6b424e1638dd370487c5f}}, and even though deep learning models can be trained in parallel on GPUs, a multi-layer ensemble of deep learning-based segmentation algorithms would require substantial computational resources. Therefore, an important question arises: how many layers should a deep ensemble model have? {{cite:a801c07083c5af03e83c94fac7e9b3b64c32a262}} showed that on some datasets, the number of layers of the resulting multi-layer ensemble was only 2 or 3. Based on this observation, we introduce a novel two-layer ensemble model for segmentation of medical images. Figure REF shows a high-level overview of our proposed method.
m
fec0c6da45e83d7a9f0e485e97ee70ad
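The two-layer idea can be sketched as stacked generalization: layer-1 models are trained on resamples of the data, and their predictions are appended to the input to form the "new training data" on which the second layer trains. The tiny linear stand-ins for the segmentation networks, the bootstrap resampling, and the feature-augmentation scheme below are illustrative assumptions, not the exact construction of the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_linear(X, y):
    """Tiny least-squares 'segmenter' stand-in (with an intercept)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def predict(X, w):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# toy labelled data
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(float)

# Layer 1: several base models on bootstrap resamples
layer1 = []
for _ in range(3):
    idx = rng.integers(0, len(X), len(X))
    layer1.append(fit_linear(X[idx], y[idx]))
meta = np.column_stack([predict(X, w) for w in layer1])

# Layer 2: trained on original features augmented with layer-1 outputs
X2 = np.hstack([X, meta])
w2 = fit_linear(X2, y)
acc = (((predict(X2, w2)) > 0.5).astype(float) == y).mean()
```

The mechanism (not the accuracy numbers) is the point: the second layer sees both the raw input and the first layer's predictions, enabling the successive refinement described above.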
I-FGSM vs. PGD. I-FGSM {{cite:49f03efba1aac2f2b4cb89b2517e7db7a861326e}} and PGD {{cite:221b940f91ea9818d83c85b7e328f0983c07d276}} are essentially the same except for one technical difference: I-FGSM initializes the perturbation with zeros, while PGD initializes it with random values. PGD's random initialization allows multiple restarts if an attack fails. In the black-box setting, however, only a single attempt is allowed for evaluation purposes, so the community sticks to I-FGSM-based attacks for transferable attacks; our experiments are therefore also based on initialization-free I-FGSM. In the white-box setting, multiple restarts are allowed, yet even with a single run (no restarts) our loss already achieves 100% ASR@{{formula:7e77b2a4-85b6-4a6f-8dce-c38b232719e7}} even when {{formula:69228769-4289-4703-89ab-8857a321f62e}} is set to the maximum {{formula:6a582c46-a9f1-4e44-82b0-3a91453e7bed}} .
d
df06a8cee02b70ccc6123a7c73244da5
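The single technical difference above can be made concrete on a toy differentiable loss: both attacks run identical sign-gradient steps with projection back into the epsilon-ball and the image box, and only the initial perturbation differs. The quadratic toy loss and all step sizes are our own illustrative choices, not the attacks' usual classification losses.

```python
import numpy as np

rng = np.random.default_rng(3)

def attack(x, grad_fn, eps, alpha, steps, random_init):
    """Sign-gradient iterative attack. random_init=False gives I-FGSM
    (delta starts at zero); random_init=True gives the PGD-style random
    start inside the l_inf eps-ball."""
    delta = (rng.uniform(-eps, eps, size=x.shape) if random_init
             else np.zeros_like(x))
    for _ in range(steps):
        delta = delta + alpha * np.sign(grad_fn(x + delta))  # ascent step
        delta = np.clip(delta, -eps, eps)   # project back into the eps-ball
    return np.clip(x + delta, 0.0, 1.0)     # stay inside the image box

# toy "loss" L(x) = ||x - t||^2, gradient 2(x - t); ascent pushes x away from t
t = np.full(8, 0.5)
grad = lambda x: 2.0 * (x - t)
x0 = rng.uniform(0.3, 0.7, 8)
x_ifgsm = attack(x0, grad, eps=0.1, alpha=0.02, steps=10, random_init=False)
x_pgd   = attack(x0, grad, eps=0.1, alpha=0.02, steps=10, random_init=True)
```

With random_init=True one could re-run the loop several times and keep the best perturbation, which is exactly the multiple-restart option unavailable in the single-attempt black-box evaluation.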
For the lowest doubly-charmed baryon state {{formula:c7590737-a4b9-49c7-ac11-06d620cb7573}} , the only doubly-heavy baryon state observed experimentally up to now, the mass {{formula:b42bac14-787d-47a5-a909-f2041c848216}} {{cite:454d228bc999ccbba37e3e9a7a5448465aee0cd9}} is smaller than {{formula:f0b63614-6a8f-4cac-a443-5cdc0944b976}} , so the energy scale formula {{formula:4579635d-bced-458e-b60c-af59298b70de}} fails to work. In the present work, if the energy scale formula works, the lowest mass, {{formula:79e36cda-69a2-4ea7-b73e-9ffb3e70a4ea}}
r
3afea1281eb3b34f3f42f2fd19c6a324
so that {{formula:e6af3520-c079-4df5-9e66-b29b9d03097b}} . A typical choice for {{formula:dbe93d6c-f878-482e-bd24-0b15e0a642c8}} is the well-tempered distribution{{cite:c1a03daf46eb86b4d3b23780f6fcde9ab16542ac}}: {{formula:4d20dc67-887d-4114-b8ee-c8eafc88116c}}
m
4355ef646f77ebec3be57b3042bf2bbc
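For reference, a standard closed form for the well-tempered target distribution mentioned here, as commonly written in the metadynamics literature (with bias factor gamma; this is the generic textbook form, not a formula reproduced from the source):

```latex
p_{\mathrm{WT}}(s)\;\propto\;\bigl[p(s)\bigr]^{1/\gamma}
\;=\;e^{-\beta F(s)/\gamma},
\qquad
\gamma=\frac{T+\Delta T}{T},
```

where {{formula:4d20dc67-887d-4114-b8ee-c8eafc88116c}} plays the role of this target: for gamma = 1 one recovers the unbiased Boltzmann distribution, while gamma to infinity gives the flat distribution of standard metadynamics.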
Because of its importance to image understanding and analysis, object detection has received increasing attention in recent years {{cite:b464c3cbf2ea2dd430260853b88a464b96867afe}}, {{cite:ebf247e5b3a07be114ea4b8a98df82f809bb66dc}}, {{cite:23327edd08c2fb63f68432418a4bcd627b5561d5}}. With the impressive development of deep learning, a surge of novel detection models built upon deep Convolutional Neural Networks (CNNs) has emerged, pushing detection performance forward remarkably {{cite:8aa00ecdec7fd494a0eedf3bc09d2114e7e47b80}}, {{cite:ea51a871440b0c26268319bbb11403d32804e677}}, {{cite:17aee15ba9b545833a7ac950da6c5778cc75ef1c}}, {{cite:171af26877b2ebd05fb4a92c3c1d578414d2512e}}. Most state-of-the-art object detection models follow a region-proposal-based paradigm {{cite:b464c3cbf2ea2dd430260853b88a464b96867afe}}, {{cite:0d09554e04450a45dccdee2eb2485c2f536da7f1}}, {{cite:58c66f7acdb552173003c01795d0cb0d6831a8cd}}, which detects objects by 1) first generating region proposals as candidates that might contain objects, and 2) then performing bounding box regression and classification simultaneously on each proposal. Despite their efficacy, the detection performance of these methods relies purely on the discriminative capability of region features, which in turn depends on sufficient training data with complete annotations for each category. However, labeling for object detection, which requires a pair of a class label and a bounding box location for each object within each image, is both prohibitively costly and labor-intensive. Furthermore, even if all data samples could be well annotated, we would still face the problem of data scarcity, since novel categories (e.g., rare animals) constantly emerge in practical scenarios {{cite:d3bd90dba7d0a67d16d15837d100d8b3029a0af3}}. 
In such a scenario, traditional object detection models often become infeasible because scarce or even no visual data from those novel categories is available for model training. The above-mentioned issues, namely the burden of manual labelling and the problem of data scarcity, lead us to investigate the detection task with an additional source of complexity, i.e., zero-shot object detection (ZSD).
i
43c7c31fc4d015833d18ee8b68f1ec26
This is especially relevant as human annotation is very costly, and model training data is often labelled by automatic labellers that are ontology- or rule-based. For example, the UMLS (Unified Medical Language System) ontology {{cite:75a7ac66571d8958931cae81518fb58c87003ad2}} is predominantly used to match linguistic patterns in clinical text to medical concepts (e.g. using the MetaMap tool {{cite:2e0291e3468a31263ba8502e1b285a9ec5dde0d0}}). Corpora annotated in this fashion are used to learn neural detectors of medical concepts in a self-supervised setting {{cite:5486f6472ce2481c46f59cc92b2faf40034fb3f0}}. Rule-based automatic labellers such as CheXpert {{cite:5467681f62bc759c738fb2e6456b25d3bbd96e7f}}, which builds on NegBio {{cite:d6cb4dae9038d2095bfe35305eb152f0db1f530b}}, use rules predefined by experts to extract clinical observations from the free text of radiology reports. The two labellers use different rules; CheXpert is more conservative and produces more uncertain labels.
i
bdc70d3352f0c27b5f76713e6245d307
In 1962, in his famous book on graph theory {{cite:948771cee3fe931d5eace7da5365c56c161291c6}}, the Norwegian mathematician Oystein Ore presented a new problem: “In a tree there is a unique shortest arc between any two vertices, but there are also other connected graphs with the same property. Try to characterize these geodetic graphs in other way.”
i
92bfc8fbbab3c0dffb509aab451624f7
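The defining property in Ore's problem, that every pair of vertices is joined by a unique shortest path, is easy to check algorithmically: run BFS from each vertex while counting shortest paths. The helper below is our own illustration, not from the source.

```python
from collections import deque

def is_geodetic(adj):
    """A graph is geodetic iff every pair of vertices is joined by a
    unique shortest path. BFS from each source counts the number of
    shortest paths to every other vertex."""
    for s in adj:
        dist = {s: 0}
        npaths = {s: 1}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:            # first time reached: new level
                    dist[v] = dist[u] + 1
                    npaths[v] = npaths[u]
                    q.append(v)
                elif dist[v] == dist[u] + 1:  # another shortest path found
                    npaths[v] += npaths[u]
        if any(c != 1 for c in npaths.values()):
            return False
    return True

def cycle(n):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
```

Trees and odd cycles pass this test, while even cycles fail: two shortest paths join antipodal vertices, which is exactly why the characterization question is nontrivial.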
Here, {{formula:ebccfddd-58dd-453f-9c38-8c7b0f3163e8}} is the DNN model, and {{formula:5998b5a0-f37e-4836-8443-97fee40cf4e0}} is an allowable perturbation set with radius {{formula:5f2cf31d-b6fe-44d4-9246-2fb5bf544e0a}} as measured by the metric {{formula:5605d746-22cc-4378-8a8a-d07590dc93c3}} . Early works assume {{formula:b5c83b45-4a4b-4f6d-9297-5c528c44b501}} is the {{formula:a17c44e1-7e1a-4bd7-a7ef-3f58671705f5}} norm ball intersected with the natural image box, i.e., {{formula:4522c4dd-189a-4d48-9759-c886fb81787f}} , where {{formula:63d8b276-432d-47a9-9122-4254e22c2b65}} are popular choices {{cite:2672a31fa314d12fd67432beff4f6a505635f0e8}}, {{cite:1543c23cb51649148cc36bbfb4b90099eb27520c}}. To capture visually realistic perturbations, recent works have also modeled nontrivial transformations using non-{{formula:a4b1e540-fa5e-41bf-872c-cbd8f9b52a8f}} metrics {{cite:4a9727199a1c3a42bad8b2093dc28b5e9844b057}}, {{cite:74c0b1a53ffc14f2e06eb58859eef2ee4c885a84}}, {{cite:dbf7fdabb1625d8044f22a389f5c0abaca8ee6fc}}, {{cite:d83c1fcf5a2cdd45445511849c9898f832b2c8ff}}, {{cite:42c170ac0a0b943a9ab8f479ef3ad4ee5f321ab8}}, {{cite:2747f7cc6faf20d197fec60e3cc49b7d40e4b98e}}, {{cite:ea949a43d1fec928fb581db168997b318fdaeee6}}, {{cite:8df73cf5f1828ba5638d244d0790c77012032079}}. As for empirical robustness evaluation (RE), solutions of EQ:ROBUSTLOSS lead to the worst-case perturbations to fool {{formula:0a13366f-76e1-463f-807a-9b070fd87f8a}} .
i
d23a8c0da3c45d545186a54a1641f5e3
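The classic perturbation set described above (an l_p ball intersected with the natural image box) comes with simple projection operators, which are the building blocks of projected attacks. A minimal NumPy sketch, assuming images scaled to [0, 1]:

```python
import numpy as np

def project_linf_box(x_adv, x, eps):
    """Project onto the l_inf ball of radius eps around the clean image x,
    intersected with the image box [0, 1]: two nested clips suffice."""
    return np.clip(np.clip(x_adv, x - eps, x + eps), 0.0, 1.0)

def project_l2_box(x_adv, x, eps):
    """Same idea for the l_2 ball: rescale the perturbation if it is too
    long, then clip into the box (the box clip may move the point slightly
    inside the ball, which is still feasible)."""
    d = x_adv - x
    n = np.linalg.norm(d)
    if n > eps:
        d = d * (eps / n)
    return np.clip(x + d, 0.0, 1.0)
```

Non-l_p perturbation sets replace these projections with transformation-specific ones, which is where the cited works on visually realistic threat models depart from this recipe.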
Our method is slightly worse than FEAT and CAN in some cases. For example, with ResNet-12 as the backbone, ours is 0.5% below FEAT on 5-way 5-shot tasks on miniImageNet. However, ours has 31,454KB of parameters compared with FEAT's 55,041KB; even with far fewer parameters, it still achieves comparable performance. Moreover, compared with other baselines {{cite:1e24c3c40b08f4c8e554fe3d0cc8e02ae454ae83}}, {{cite:1cffbee1ce672a6fa645e0cddd7ddd70208767a8}}, {{cite:26ee5933cd5b97238190fac825b69b27fb14286d}}, the base path in our method is pre-trained only once, so it does not consume many training resources yet improves performance considerably. {{table:c122ac1b-02a4-498d-9f1a-9b8f303ea198}}{{table:a28a1b89-5a1d-44cd-a73c-927df2359fea}}
r
fe8014c305d72f05d2d91c4a32884e44
It is interesting to see that simply by employing the proposed deep features in an algorithm similar to that of Foote {{cite:2b862df9b9a96eeb472d800b062822efac083733}}, such a method becomes competitive with the state of the art in unsupervised music segmentation. Furthermore, on the SALAMI-IA dataset, a significant performance improvement over the state of the art is observed without any additional parameter adjustment. This result might be postulated to be due to the poor quality of audio / music data in this portion of the SALAMI dataset. Because the algorithm of {{cite:97a618171f4512f05bc70ddf175b9b535cb7b005}} is designed to detect changes in repetition patterns, when these patterns become imperfect, or corrupted by noise, a performance drop might be expected. In the proposed embedding, clustering of features is performed simply based on the time proximity of features observed from the training data, which contains many of the aforementioned imperfections providing some robustness. {{table:b40fc41a-e929-417d-8973-02c5065d6899}}{{table:bd7365e2-782d-4ed9-ae0e-2ec4903c397a}}
r
6a8cd573818a2e2e272a7e2a9b2287d2
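The Foote-style pipeline referenced above can be sketched in a few lines: build a cosine self-similarity matrix from frame-level features (such as the deep embeddings discussed), then slide a checkerboard kernel along its main diagonal; peaks in the resulting novelty curve suggest segment boundaries. Kernel size and the synthetic two-segment example are our own illustrative choices.

```python
import numpy as np

def foote_novelty(features, k=8):
    """Foote-style novelty curve: cosine self-similarity matrix (SSM)
    correlated along the main diagonal with a checkerboard kernel
    (+1 within-segment quadrants, -1 across the boundary)."""
    F = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    S = F @ F.T                         # self-similarity matrix
    u = np.r_[np.ones(k), -np.ones(k)]
    checker = np.outer(u, u)            # ++/-- quadrants around the diagonal
    nov = np.zeros(len(S))
    for t in range(k, len(S) - k):
        nov[t] = np.sum(S[t - k:t + k, t - k:t + k] * checker)
    return nov

# two homogeneous segments with a boundary at frame 50
feats = np.vstack([np.tile([1.0, 0.0], (50, 1)),
                   np.tile([0.0, 1.0], (50, 1))])
nov = foote_novelty(feats)
```

The quality of the boundaries thus depends entirely on the features fed in, which is why swapping in learned embeddings can lift this simple detector to competitive performance.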
Effective robustness. Our results in Figure REF show that ensembles of neural networks are not effectively robust: they do not provide an improvement in robustness that cannot be explained by in-distribution performance. This observation is in line with others showing that effective robustness is in practice very difficult to achieve, even considering many complex training strategies {{cite:4137b7d60d31c6377105b86543377e990fb6dda4}}.
d
8d4dbdccd04040fcc69f2f88c1a7cc24
It has become apparent that many of the equations governing massless fields on black hole spacetimes are of Heun type (in cases where the cosmological constant is non-zero), or one of its many confluent variants, such as for non-extreme Kerr {{cite:19243516ffa634c92b9f5336c64f2b3b8739cd4d}}, or the extreme case {{cite:18abb94e597322412c7eab18944e901de3a0d207}}. The differential equations we find for massless scalar fields in the five-dimensional Myers-Perry black hole spacetime (see (REF ) and (REF )) are of yet another confluent Heun type. That observation has allowed us to extend to this (and the related STU) case the analysis originally applied to massless fields of all spin in the Kerr spacetime {{cite:19243516ffa634c92b9f5336c64f2b3b8739cd4d}}. Before our present work, that analysis had also been extended: i) by using a different integration contour, to rule out unstable modes on the real axis for the Kerr spacetime {{cite:a8a6e31ff1e7eeefe7312e449550c91ee02a23d9}}, ii) by considering a modified integral transform, to deal with the extreme ({{formula:55d27c59-28c3-4295-a68f-ba3cd3e527b9}} ) Kerr black hole {{cite:18abb94e597322412c7eab18944e901de3a0d207}}, and iii), by looking carefully at more complicated examples, to establish the absence of unstable modes for massless scalar fields in STU spacetimes and all more specialized sub-cases {{cite:c9f0a9669ead5ec749cf1a0755262262e873f8fa}}. Remarkably, the integral transform we have used here is, effectively, an inverse of that developed for the extreme Kerr spacetime {{cite:18abb94e597322412c7eab18944e901de3a0d207}}. 
In this context, it is also worth noting that quite different techniques, stemming from Seiberg-Witten theory (see, for example {{cite:a009f9aef38352ebdcdb3c731fd36f500ae95c61}}), and based on the spectral properties of the operators involved, have been used to discuss both Kerr quasi-normal modes{{cite:f938794dae92b05a38230848bbab4b147a5433b7}} and Kerr-de Sitter stability {{cite:94130e2cf41a8b9b564ab555295e3dcbadead35d}}. The relevance of such an approach to the spacetimes we consider here is yet to be determined.
d
85107ec097008871c9754fbd82ec923f
For experiments, we have performed data extraction using tensor fields on bar charts and scatter plots, as well as on histograms, as a special case of bar charts. We have used three sets of data for our experiments. The dataset descriptions are available at the project GitHub webpage https://github.com/GVCL/Tensor-field-framework-for-chart-analysis. The dataset DSTbl contains publicly available multivariate table datasets, and DSImg contains publicly available chart images. We have programmatically generated charts for DSTbl using the Python library matplotlib.pyplot {{cite:7091bd05837db29e279ff53678595edfb652ce8c}} and stored them in .png image format. We have specifically used this library as it generates high-resolution images compared to a plotting tool such as Microsoft® Excel®. We have reported the dataset sources in the Acknowledgements.
r
5bf1bf4e9c433e5890dfaf66dbdb1e60
The dynamic moments of inertia for the four HD bands in each of these nuclei are compared with the one of the [1,2] configuration in {{formula:7671d1bb-993b-4666-ba73-0aecb6995a28}} Xe in Fig. REF . The difference between the dynamic moments of inertia of the configurations in nuclei with masses {{formula:52c258b4-dd64-44b5-8bfc-6cc53747d35c}} and {{formula:1a673f60-a2a3-4c95-84cd-a6c43fa87ecf}} is due to the impact of the particle in the specific single-particle orbital by which two compared configurations differ. The results of the calculations question conventional wisdom {{cite:e37bc9aa98656f5399a1261c4f96ff2970a166c2}} that the largest impact on the dynamic moment of inertia is coming from the particles in the intruder orbitals. Indeed, the impact of the neutron in the hyperintruder {{formula:1618fa66-639e-4222-87a7-fbe2415c8f95}} orbital on the dynamic moments of inertia (Fig. REF d) is comparable to the one of non-intruder {{formula:0ea995c7-0575-422f-be2d-aeceb459b1c3}} orbital or even smaller by a factor of {{formula:4648baa4-8297-4a8e-a9ee-fb484c838cfa}} than the impact due to the neutron in non-intruder {{formula:8baea2fd-94e0-4373-9348-3203e007b5b1}} orbital (Fig. REF b). A similar situation is also seen for protons, where, for example, the impact of the proton in the hyperintruder {{formula:f8f03e43-7255-4302-8e2b-f29110422875}} orbital is smaller than its impact in the non-intruder {{formula:f8b9c128-d71d-4997-aef6-cdf3fc49f422}} orbital. This suggests that not only angular momentum, carried by the particle in specific single-particle orbital, but also polarization effects it induces into time-even and time-odd mean fields {{cite:5c671745f72e547db3f4def698d6552f07afbd49}} are important when considering relative properties of two configurations. Based on this example, one can conclude that the configuration assignment of the HD bands, based only on the relative properties of the dynamic moments of inertia of two compared bands, is unreliable. 
{{figure:0231d619-33cf-432e-9564-c15470af6e08}}
m
6e4f68221ace21f6123e0509b3ca42b2
As the scale of deep learning models continues to grow, it may have been taken for granted that increasing model size is necessary and possibly even sufficient for obtaining ever-improving performance. However, historically the futuristic picture for large scale DNNs had not been entirely optimistic. For example, Geman et al. in 1992 {{cite:dd4ef26bb6c9849203d01d11bc051d45aea3cc9f}} made an argument based on the notion of bias-variance trade-off that despite their excellent goodness-of-fit to training data, DNNs may suffer from a high prediction variance hence poor generalization performance. While recent evidence suggests that DNNs exhibit an unexpected unimodal shaped variance curve {{cite:0fee11abc9471e7379c9c459d225d3a7461e7104}} where variance is controlled in the over-parameterized regime by an implicit algorithmic regularization, it is largely unclear whether such a regularization is pertinent and accounts for the good generalization of practical DNNs.
d
d7444f8fd16e7ef9b7651ec29fc4f064
We adopted the binned Cash statistic {{cite:969f3dde5a66201f14754232838c4be6c30de712}} to fit the model to the data. There is a chance that a null number of bursts is extracted for a given source, in which case the simulation cannot reproduce the observed number of flares. So, for each source, the probability of observing a given configuration is the product of the probability that the extracted number of bursts is not null ({{formula:fd48c431-b2f2-4b86-a971-e702b1d92851}} ) and the usual term discussed in {{cite:969f3dde5a66201f14754232838c4be6c30de712}}. Hence we add, for each source, an element to the Cash estimator; this element is {{formula:1e299a56-0198-48f6-996e-774031d23a8d}} .
m
0ada943cfd47879222fbce35ccae3af3
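The modified statistic described above can be sketched as follows: the usual binned Cash term plus a -2 ln P(N > 0) correction for the requirement that the simulated number of bursts be non-null. The exact form and normalisation follow the cited work; the function below is an illustrative version assuming Poisson counts, not the authors' implementation.

```python
import numpy as np

def modified_cash(observed, model_mu):
    """Binned Cash statistic C = 2 * sum(mu - n * ln(mu)) (additive
    constants in n dropped), augmented with the per-source term
    -2 * ln P(N > 0) = -2 * ln(1 - exp(-mu_tot)) for Poisson counts,
    so that configurations with a vanishing expected number of bursts
    are penalised."""
    observed = np.asarray(observed, float)
    model_mu = np.asarray(model_mu, float)
    cash = 2.0 * np.sum(model_mu - observed * np.log(model_mu))
    p_nonnull = 1.0 - np.exp(-model_mu.sum())  # P(at least one burst)
    return cash - 2.0 * np.log(p_nonnull)

value = modified_cash([3, 5], [3.0, 5.0])
```

As expected for a likelihood-based statistic, the value is smallest when the model rates match the observed counts.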
There has been recent interesting progress in the understanding of non-relativistic strong gravity {{cite:8e0cad9fb51e78ba0fbbe589b315daea396c9cee}}, where non-relativistic black hole solutions have been discussed. Building on the initial proposal of a non-relativistic limit of AdS/CFT, one could put our results in context. Starting with AdS{{formula:3d01c655-c094-4a1f-8504-60c31b422d20}} /CFT{{formula:b503b007-a0cf-4fef-aa81-85740daf3dcc}} , following {{cite:9206872cf43ea190aa4cdf16ab5ef0a7c9777f33}}, one would end up with a non-relativistic Newton-Cartan like AdS{{formula:d614e6b8-81be-4504-a075-e52f4368287b}} which would be dual to a 2d Galilean CFT. There would presumably be solutions in the bulk corresponding to the BTZ black hole, which like the Schwarzschild solutions in {{cite:8e0cad9fb51e78ba0fbbe589b315daea396c9cee}} would have horizons. The modular properties described in this paper would help reproduce the entropy and probe reactions to these solutions. It would be interesting to investigate this in further detail.
d
87f723f704a206a9cbcc8e72fbbf8ea9
The first one is the original Seiberg S-confining theoryBefore {{cite:7959c8184e2fca5e3fe1d179143b9510914ca829}}, it had already been remarked in {{cite:bbc888b5aba13a73590d1a52f25cdcac93415b2b}} that the 't Hooft anomalies were matched by the mesons and baryons, but the confining superpotential had not been found. {{cite:7959c8184e2fca5e3fe1d179143b9510914ca829}}, {{formula:45282caa-aeb9-40a4-a9da-8c337cd771a1}} SQCD with {{formula:63aa13df-da03-4a95-a1df-f072d68103a8}} flavors. In quiver notation it reads
m
efae050a3ccd9a847cf0a98769c27ea9
The evidence for high-energy neutrinos from space in the IceCube experiment {{cite:b578c00fc5e31b9350bc0faa19d4242c05bef60c}} has opened a new window into the Universe from the perspective of a different messenger. While the origin of these neutrinos is still debated, it seems unlikely that a single class of conventional sources, such as Active Galactic Nuclei (AGN) blazars {{cite:ef291aefe02bf93bddce44df241b8ebb69bf15f0}}, {{cite:19d845ec625e7e594100df798b399e78003ab78f}}, Gamma-Ray Bursts (GRBs) {{cite:597f750cef1b970669d72911cb4259c9e7b437c7}}, {{cite:406078b7ceb26afaf53ed298137e010e92115652}}, or starburst galaxies {{cite:538cbf434cb4bc18aa15461fb9d7d54bc5fbb02b}}, {{cite:c9d50686891b55df48759891d8fcf70c22ff98fa}}, can power the observed diffuse flux on its own, unless the sources are hidden, such as collimated jets inside stars {{cite:3984c345c8acc2fe0d4dfcc214e99239bbcb4126}}. It is therefore also important to study conceptual arguments related to the spectral shape (such as the power law index), see e.g. Refs. {{cite:e9605d8e9f584614dc25847b150d55897acf095b}}, {{cite:f1938cfd7852c1fb12f24159b2db42c9757aec61}}, {{cite:d88d2ff615e5cde31f72dcb89e4729ceb32c8b9d}} for different data analyses, and the flavor composition {{cite:f1938cfd7852c1fb12f24159b2db42c9757aec61}} of astrophysical neutrinos. Other currently debated issues are the possible contribution from a (possibly softer) galactic component, especially in the Southern hemisphere {{cite:bbef826f11d9d4b94680fd588dd88c580b138695}}, {{cite:2b3c3b5157f4f6eeafcdf081c1463842c4dd0951}}, and a possible hardening of the spectrum at high energies {{cite:19d845ec625e7e594100df798b399e78003ab78f}}.
i
9be1b1da9aaf7306b3be574bb8d8c003
As presented in Table REF , ablation studies are conducted to evaluate the effectiveness of different model components of Spatial-DCCRN, including a) Spatial-DCCRN without AFE and MMF (Spatial-DCCRN{{formula:d7cc31b0-c855-405b-83aa-2660304b8a92}} ), b) Spatial-DCCRN without AFE, c) Spatial-DCCRN without MMF, d) different input channels of the observed signal, e) the non-causal version of Spatial-DCCRN, and f) an ablation on different loss functions. For the non-causal model, we substitute the LSTM in Spatial-DCCRN with a BLSTM and look ahead one frame in each convolution layer. The evaluation metric is a combination of STOI {{cite:78ab8c4c0fb522b9eb1951e0e8d95245efdb290b}} and WER, according to the official challenge rule {{cite:a8b6694e0fedbee19a01c3b13404be8090a2e091}}: {{formula:a60e6dbd-a970-44c6-8945-be5babc443e9}}
r
9c9481b2f46cff733d66760b9ed3e06a
To fill this gap, we have developed a taxonomy of empathetic listener intents by manually annotating around 500 utterances of the EmpatheticDialogues dataset {{cite:c657bfbcd5090da9964bb123214af1d58a4357be}}, covering 32 types of emotion categories. In the following, we first describe in detail how this taxonomy was derived (Figure REF ) and how we chose the dataset to support this annotation work. To extend this subset, we employ automatic techniques to label all speaker and listener utterances, covering 25k empathetic human-human conversations. To be able to explain the patterns and trends of the conversation flow, we employ visualization methods to illustrate the most frequent exchanges and reveal how they temporally vary as dialogues proceed. Finally, we discuss how these results can be used to derive more informed heuristics for controlling the neural response generation process.Our source code and results are available at https://github.com/anuradha1992/EmpatheticIntents. {{figure:e04e760e-3b2a-4c08-b4a6-5864f4b663e2}}
i
42b95cd71437e67bc1bbf6d3346f925f