text: string (54 to 548k chars) | label: string (4 classes) | id_: string (32 chars)
The validation of the method {{cite:b86a95b50f1f582c37ce0beb99cdc792194b585e}} proceeds by establishing that the resulting Markov chain is ergodic {{cite:4876adba68703fe6e2ae3788a756e9a8ff7f812b}}, meaning that it converges to the distribution corresponding to {{formula:16e90cb3-903f-4a3c-bad1-6a4187121bd7}} , making the starting value of the chain irrelevant. Akin to basic Monte Carlo methods, MCMC samples (usually) enjoy standard limit theorems.
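To make this convergence property concrete, here is a minimal random-walk Metropolis sketch (the target and all names are illustrative, not taken from the cited method): chains started from very different points recover the same target statistics, so the starting value is indeed irrelevant.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_steps, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler: under mild conditions the
    chain is ergodic, so its samples converge in distribution to the
    target regardless of the starting point x0."""
    rng = np.random.default_rng(seed)
    x, chain = float(x0), []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        # accept with probability min(1, target(prop) / target(x))
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return np.array(chain)

log_target = lambda x: -0.5 * x**2            # standard normal target
for x0 in (-10.0, 0.0, 10.0):                 # very different starting values
    samples = random_walk_metropolis(log_target, x0, 20000)[5000:]
    print(x0, samples.mean(), samples.std())  # all near (0, 1)
```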
m
e62dad3792d05a086e9dc37a81a66a59
One important aspect we did not touch on is the statistical properties of the {{formula:f8035a9f-7213-4d18-97f3-a9ad919a98aa}}-regularized solution path, which has been studied extensively in the literature {{cite:8fcda5b4a8cd5a95dba3eb92950ca8ecc74c455f}}. Interestingly, {{cite:281e38bdb2b67f0158ed5bfdf440807a456dcebf}}, in the context of least squares regression, connects the statistical properties of gradient descent iterates to those of the ridge regression solution path. In particular, they show that the statistical risk of the gradient descent path is no more than {{formula:76dfdae7-16e3-41e4-ae55-29d1e9bef032}} times that of ridge regression, along the entire path. Motivated by our proposed homotopy method based on damped gradient descent updates (REF ), it would be interesting to investigate whether a damped version of the gradient descent algorithm would enjoy a more favorable statistical risk compared to regular gradient descent. Further investigation is necessary.
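To make the gradient-descent/ridge connection concrete, the following toy sketch traces both paths on a synthetic least squares problem; the pairing of step t with lambda = 1/(eta*t) is one common heuristic, not necessarily the correspondence used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + 0.5 * rng.standard_normal(n)

# gradient descent on least squares traces out a regularization path
eta = 1.0 / np.linalg.norm(X, 2) ** 2
b_gd, path = np.zeros(p), []
for t in range(1, 201):
    b_gd -= eta * X.T @ (X @ b_gd - y) / n
    lam = 1.0 / (eta * t)  # heuristic pairing of iteration count and penalty
    b_ridge = np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)
    path.append((np.linalg.norm(b_gd - beta), np.linalg.norm(b_ridge - beta)))
```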
d
ef32fc38ab2aa1fafc4e5a513adc8823
In our approach we query the user about whether a feature is relevant, i.e., positively correlated with the target variable. This is a compromise between detailed input about regression coefficients (exact value {{cite:4e91f28c39b73e382f7809f9cc85f94f3ca1c0d9}} or full prior {{cite:99b351bc112fd5b8b7299e6ce35911a633f2c4bc}}, {{cite:6e884390b5dcb3628c56f7de2eee4bca644dff6e}}) and simple input discarding a subset of features {{cite:de536cc1aa01583f57fe8e185cabdf65222f28ac}}, {{cite:161226cf0ede390ec61fb4b617823ef8c49e7ce6}}. This kind of user input is easy to give (difficulty C2 and C3, self-reported in the post-study survey: 50% easy, 29% neutral) but powerful in improving predictive performance. However, the model is potentially sensitive to errors in user input. Also, although providing user input on positive effects was natural for the prediction task considered here, in other cases negative user input may be useful. We will consider these issues further in future work. Our user model formulation has the additional benefit of allowing integration of auxiliary data when defining feature descriptors. This is particularly important when the sample size decreases and training data alone would not provide enough information to guide user interaction. {{table:d4175a8c-9538-465e-ade2-ac3660baf986}}
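A minimal sketch of how such binary relevance feedback could enter a model, assuming a simple ridge-like MAP estimate in which user-flagged features receive a wider (less shrinking) prior; this is an illustration only, not the paper's exact user model.

```python
import numpy as np

def fit_with_feedback(X, y, relevant, tau_rel=1.0, tau_def=0.1):
    """Ridge-like MAP estimate: features flagged as relevant by the user
    get prior std tau_rel (weak shrinkage), the rest get tau_def."""
    prec = np.where(relevant, 1.0 / tau_rel**2, 1.0 / tau_def**2)
    A = X.T @ X + np.diag(prec)      # posterior precision (up to noise scale)
    return np.linalg.solve(A, X.T @ y)

# features 0 and 3 flagged relevant by the user (hypothetical example)
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))
y = X[:, 0] + X[:, 3] + 0.1 * rng.standard_normal(30)
mask = np.array([True, False, False, True, False])
print(fit_with_feedback(X, y, mask))
```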
r
e0c14f60da48a5356c3d7751a5d2d059
Our findings on the connection between spatial behaviour and personality are consistent with the existing literature on personality. The correlation between exploration and extraversion could be explained by the fact that extraverted individuals are more likely to be risk-takers in various domains of life {{cite:24d3c4449ab75ff124c439f4b441a9cdec098aa1}}. Extraverted individuals are also generally more likely to engage in social activities {{cite:a1406777a8ad169396e1cfd5c7c56d5c367cbb08}}, which could partially explain why they allocate time among a larger set of locations. Furthermore, the key finding that individuals who score high in neuroticism and openness display a tendency to change familiar locations, and friends, over time fits well within the existing picture. In the case of neuroticism, it is well known that this trait is closely related to `stability' {{cite:e09345ddb67961543233735b8712f2feba14702f}}, such that the trait of neuroticism is sometimes referred to as (low) `emotional stability' {{cite:7e00a887f96ed3baa2cffcbc589b86a22eb04434}}. Also, at the core of neuroticism is the tendency to experience negative emotions {{cite:e90dce5d573783fee04e84552a26619db8590564}} including dissatisfaction {{cite:29cd400f0015f60d23ecc51895e72b41c9754cb2}}, which in turn can lead to a desire for change {{cite:31125576fb394e8524f691f112329776af429d6b}}, {{cite:d905d160d9217fb8b7d7e36e94857a44886a4b5a}}. Finally, it is known that people scoring high in neuroticism have a larger number of weak ties {{cite:429b8a6167c949bd4f576634f322c8442b7acfb0}} and perceive that they tend to have less social support {{cite:dba4732f747458fc4651954426a3c6758abe7e75}}, {{cite:b14606afedb3d1d942a9cefda36b23e00155e906}}, in line with our observation that their ego-network is unstable. Openness to experience has been shown to correlate with `disloyal' behaviour in other contexts as well, such as politics {{cite:0a35b16946891f9391dcfd9dece772cc7563e519}} and shopping {{cite:3757c33a77da63ca9c97b89b99303492fc304b63}}. Our results, in agreement with previous studies on social {{cite:913cb6f1c255029b8ffb7ef8e14478516e4a9660}}, {{cite:dc20d5241732157efafe55c0006e3602703f34f9}}, {{cite:3c643c6ba6de4b569bdc54c2441cb005c25717a6}}, {{cite:4c466da192e30c3977c3361c74b0f760757f7d5e}}, {{cite:e92873b913b84835fb31d3220deeb4797755c066}}, {{cite:daa1ebcf459e712349dff01c421c7d4afa3fe1c6}} and online {{cite:fefb475dbbe79e58729d769cda85542366fc0efc}}, {{cite:69659b65379e6a68e7954176f12045027c944b70}}, {{cite:25c9244cf616a6abea2404bd781ac21a1fc82c1d}}, {{cite:2b461b50f7a8ced90f4ac365452a4ec29daea612}} behaviour, show that personality traits only partially explain how individuals behave in specific situations {{cite:a82aeba2c63790531fbe94cc5cf40d79da2e1423}}.
d
484bb3b346d975684bb6cefb7c1f3eec
The quantitative results obtained on the MIT300 {{cite:1ddae218f4dbe431bde8e77fec33ffcf74b3d74e}}, MIT1003 {{cite:8ec4f8d1376f9b10320a2f95de1e92ab528fc625}}, TORONTO {{cite:3559791af409f35c8be6cfe52b9c2abade6f27d2}}, PASCAL-S {{cite:ccadd3166755d790ef78e077f7a84e1e97ffc05f}} and DUT-OMRON {{cite:961860df91588c4cd50edb7f8fe49c980bea3965}} datasets are presented in Table REF , Table REF , Table REF , Table REF and Table REF , respectively. On the MIT300 dataset (see Table REF ), our model struggles to compete with the DeepFix {{cite:4d33a0ef6c5c11dbb5462807f5f02342b3e82ab0}} and SALICON {{cite:5c5e0a7f3a6e67162c76a00cb90b2ab5ff5d6e75}} models. DeepFix introduces a more complex network architecture and considers a center prior. Besides, it is trained with more samples (the SALICON dataset plus 2700 extra images with actual eye fixation data). The SALICON model is fine-tuned with more complex objective functions and images from the OSIE dataset {{cite:de5189eef15cda7df639e5ea9a82a77c239416b2}}. Our method achieves promising results across a wide range of datasets, which verifies its robustness and generality. With a relatively lightweight architecture, our work is meaningful for inspiring future work in this direction and offers insight into the advantage of multiscale saliency cues for visual attention prediction. {{table:4b6e2c7a-6965-4312-856f-dc0d91d274dc}}{{table:52736f02-4fa5-4d91-ae35-4160b42f3a34}}
r
37f5a535db869da133b3e9b1b5b13a22
A very good approach for qualitative understanding of cosmological models is the dynamical system approach, first developed by Collins {{cite:7e3ede7b7c9286fbaa4a099471ebab36637e568a}} and extensively reviewed in the book edited by Ellis and Wainwright {{cite:55807384155bc84d690cc6621c423ddda515202f}} (see also {{cite:36d946353bea4cad060ddb10c860befe483e0833}}, {{cite:f4352b351662c7a5ddc0fb0f212141c201b5d7e4}}, {{cite:d26126f7005dab2581ba154ea14986849c19b06c}}). In fact, using these techniques to study cosmological models has the advantage of providing a relatively simple method for obtaining exact solutions, which appear as fixed points of the system, and for obtaining a global picture of the dynamics of these models. All the autonomous dynamical system formulations for {{formula:e3da90d6-0290-4fd3-9eb7-756fd45c8fcb}} gravity that have appeared in the literature up to now require one to specify the functional form of {{formula:902c3771-2d26-4327-ba0a-67ccbbae7d94}} in order to be able to write down a closed system of first order nonlinear ODEs {{cite:2b254926ce19561daae92b96c980be3ce4dbcd8b}}, {{cite:4a37c88948cd64a784e2b3c58f197400bbc0988d}}, {{cite:400b90665b67bbb5a10a96650d299fbfef7f03dc}}, {{cite:e6caf37803a46cca2366f74802d71e81a6e5da88}}, {{cite:52935d47ee582f8cab68499d0d3711de6b27c1d6}}. This approach is therefore not particularly helpful for the general dynamical analysis required in the present work, where the functional form of {{formula:92803abc-7289-402d-aef4-d4fc7c2c1498}} is not known a priori (see however {{cite:eb8cd26a0e64512d4367961bc487750132629ea9}}). We circumvent this problem by proposing a novel dynamical system formulation of {{formula:eff1e92a-096e-4076-8072-309daab19051}} gravity that is free from this limitation. The trick is to extend the phase space by including a set of dimensionless cosmographic parameters. Cosmographic parameters are key cosmological observables based on performing a Taylor expansion of the scale factor around the present redshift and, as we will show later on with the specific example of the {{formula:d1862309-fb39-4a27-997d-66e1d9db357d}} CDM case, the choice of a cosmological solution places a constraint on these parameters. It is this constraint which allows one to write down a closed system of first order nonlinear ODEs without needing to specify the functional form of {{formula:e0346db2-4e9e-4d28-8896-71640519015d}} . By its very construction this formulation is intimately related to the reconstruction programme and helps us in situations where the reconstruction programme fails to provide a clear picture. To the best of our knowledge this is the first proposal for a dynamical systems formulation of {{formula:3a9eb1f8-965f-4db9-bb03-3dbe62c541c6}} gravity which does not require specifying a functional form of {{formula:1d968190-fa4e-4fed-9034-80831675294a}} , and it therefore links the powerful dynamical system approach to the reconstruction programme.
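For concreteness, the cosmographic parameters are conventionally defined through derivatives of the scale factor; these are the standard conventions, and the paper's exact parameter set sits in the elided formulas:

```latex
H = \frac{\dot a}{a}, \qquad
q = -\frac{1}{H^{2}}\,\frac{\ddot a}{a}, \qquad
j = \frac{1}{H^{3}}\,\frac{\dddot a}{a}, \qquad
s = \frac{1}{H^{4}}\,\frac{\ddddot a}{a}.
```

Evaluated at the present epoch, these give the Hubble, deceleration, jerk and snap parameters, respectively.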
i
8a3c936a5670b926386b40b05e548558
In the case of self-focusing (self-attractive) nonlinearity, and for sufficiently high dimension (for fixed nonlinearity) or sufficiently strong nonlinearity (for fixed dimension), a key feature of the NLS model is the presence of collapse-type phenomena, which have been explored in numerous books {{cite:3545a9ea6be6137ddc444c5b74fbe376b840e949}}, {{cite:c39b57f18b8a26a03ec56035e67985272da3cc5c}}, {{cite:237f7307aa265cbc70ce718602e8e0db1de6dbe6}}, as well as reviews {{cite:e32895683813b00ea851ba482dc2aee70f98c886}}, {{cite:43b8fd052fce231322484cccaca01b2e6af76804}}, {{cite:3137b4af6418abe365ae53b8b9877ae12796f0aa}}. Indeed, the topic of finite-time blow-up of supercritical NLS solutions has been the object of continued study both in the mathematical and in the physical literature; see, e.g., Refs. {{cite:1ab7edc41f43f7c66889b0e6c478d791e6bae307}}, {{cite:b597515f8cf5b4946727339559b3d9d62342f0fc}}, {{cite:c0b99d19a8ab7d0ebb574af4984110e65275096e}} and {{cite:d63b881b71e269e39174de65f83632c56a0d734e}}, {{cite:4fd2c500387cb3f4626435cbc858001b5afc5970}} (and also references therein) for only some recent examples. Importantly, the study of collapse is not only a mathematical idealization but has become accessible to physical experiments. On the one hand, there is the well-developed field of nonlinear optics, where not only has the well-known, two-dimensional collapsing waveform of the Townes soliton been observed {{cite:4e5f67cc4ebbac7d834fd0be410f1eeee01420c9}}, but more elaborate themes have also been touched upon, including the collapse of optical vortices {{cite:577cf41c0d08d5b2c2ab9b5adea91abe5034a0cf}}, the loss of phase information of collapsing filaments {{cite:9ac1199385cd8ef6b73a9f8c87f8b9d58094de2d}}, and the manipulation of the medium to avert optical collapse {{cite:10cd43fafa3b007cb7863f15f4dd8caf617bd5a7}}. On the other hand, a remarkable, very recent experimental development has been the emergence of two distinct works in the atomic physics realm of BECs, observing Townes solitons in the {{formula:d375db73-1778-44ea-9c46-008628d8bad8}} setting {{cite:527a503e8d5243149f089ccaaaf6c0145cbefe25}}, {{cite:11f482b559234d6c75e0c28fb5fdb43ffa475a4e}}. Here, collapsing waveforms in higher dimensions had been experimentally identified earlier {{cite:c79ef88a2be179386fb229208cb71d906f688e13}}, {{cite:18bd26239e55fc4fe5c22f9af887b13d7aba5671}}, and the ability to manipulate the nonlinearity {{cite:d70e79f41ea93be11ac9d29f0efe9563266a998d}} and the initial conditions {{cite:5784623a048c20ad00a570b74356ae3140cde130}} has continued to improve in recent times. In one of these recent works {{cite:527a503e8d5243149f089ccaaaf6c0145cbefe25}}, the modulational instability was manipulated to produce (in a less controllable, yet experimentally observable way) such Townes waveforms. The authors of the second work {{cite:11f482b559234d6c75e0c28fb5fdb43ffa475a4e}} leveraged the reduction of a minority component in a two-component gas into a single-component one with effectively attractive interactions to produce a collapsing Townes waveform.
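For reference, the focusing NLS in question can be written in standard dimensionless form as

```latex
i\,\partial_t u + \Delta u + |u|^{2\sigma} u = 0, \qquad u = u(\mathbf{x},t), \quad \mathbf{x}\in\mathbb{R}^{d},
```

where collapse is possible for \(\sigma d \ge 2\); the case \(\sigma d = 2\) (the cubic nonlinearity in \(d = 2\)) is critical, and its ground state is the Townes soliton mentioned above.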
i
b4dc7e397f64d7c98527e54e7874e0de
In keeping with the widespread adoption of machine learning across nearly every industry, there has been a dramatic increase in publications applying these methods to carry out routine diagnostic tasks in medicine. Most of these have emphasized matching or even outperforming practicing physicians, whether for interpreting retinograms {{cite:dceaefdacc3b2dbd4af4c65355b4f795c50a6f8e}}, skin disorders {{cite:e82c62d5ca3acae0ce4cb494d9ee3a1c41a30e93}}, chest x-rays {{cite:5346ec9484daf46d59aaf2ea450e7e696412bb45}}, mammograms {{cite:8de22abcf6152d3f75fffff3534cac0bb9de224f}}, bone x-rays {{cite:146d3e9962ba99f194333935e550c111000c42a8}}, heart rhythm abnormalities {{cite:3d67373f8cdc062efbb4bd9977c1804982d17f19}}, or deciphering which view was collected by a cardiac sonographer {{cite:0999c3fa0f92f8fb18c0f0354947d32d077caf6a}}, {{cite:4bfeecb38731061f87713336b7ccdc1dfff4c133}}. As the field matures, it will be important to think carefully about how exactly these automated interpretation models will be accommodated within the current clinical workflow. Our current work, similar to our prior work on echocardiography {{cite:6a596ebdc3622762239663a02741e8f1f9daba24}}, proposes how fully automated interpretation might enable studies that otherwise would not be done, such as detecting and tracking adverse cardiac remodeling in asymptomatic patients in a primary care clinic. Nonetheless, for all these applications that hope to guide medical decisions, an important question is whether the machine learning algorithms can provide physicians and patients an adequate explanation as to why a certain automated diagnosis or recommendation was made {{cite:a32add9182d39d3dd120f39acef7ebd1250a7894}}. Although in some examples that may be unnecessary, and a black-box approach might suffice, in most cases involving a complex decision a clear rationale is expected, ideally one which can be verified (in our case, visually) by the physician and potentially even by the patient.
d
367f7d1efc226dc700038d2571036aad
The RF kernel (and RF) outperformed the Laplace kernel in our simulation study in most cases. There were still scenarios where the Laplace kernel was competitive, e.g., for the van der Laan data for regression and survival, demonstrating that the Laplace kernel is a valuable option to consider in practice. There is no free lunch in statistical learning, and consequently no universally optimal kernel {{cite:a4276946c7c438c7b519bcda213ec682f8fee09e}}, {{cite:e772d11995715b2c6498a76f565fdcacf589eced}}, {{cite:d8202c362b0002541c5ebf633365ed1ad0694c05}}. The success of a particular kernel algorithm depends on how well it adapts to the data geometry {{cite:542a7d3558684f1c6075a5ecb1114501b1eac349}}, i.e. how well it captures the inherent kernel function of a given problem {{cite:384d57d553af0ceba2bfc045af0574be78200051}}. The RF, and accordingly the RF kernel, should be competitive in situations where the data generating mechanism is conducive to recursive partitioning, e.g. in the presence of feature interactions as frequently found in biomedical applications {{cite:381742d8704f4453a76ebaf9d15349c065320149}}. Another recent example where the RF kernel has shown promise is a study of image classification in hyperspectral imaging {{cite:f6aaebc7ca995e900cdac3ed8352ebea73ff56ca}}. Moreover, in a large benchmarking study of general purpose classification algorithms {{cite:d8202c362b0002541c5ebf633365ed1ad0694c05}}, RF was found superior to other competitors. Interestingly, kernel methods that used the Gaussian kernel also performed well and were only slightly inferior to the RF. These results suggest that across a broader spectrum of real life problems the RF classifier adapts well to the underlying data structure {{cite:542a7d3558684f1c6075a5ecb1114501b1eac349}} and in many cases performs better than classifiers based on conventional kernels such as the Gaussian kernel. It would be of interest to conduct more research into how the results from {{cite:d8202c362b0002541c5ebf633365ed1ad0694c05}} extend to regression and survival, and what implications they have for the RF kernel and the traditionally used analytical kernels (including radial basis function, polynomial and neural network kernels {{cite:14f157c58c029ee16e9b651f4d4571d22f8331ff}}).
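The RF (proximity) kernel discussed here can be computed directly from the leaf assignments of a fitted forest; a minimal sketch with scikit-learn, where the dataset and settings are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# leaf index of every sample in every tree: shape (n_samples, n_trees)
leaves = rf.apply(X)

# RF kernel: fraction of trees in which two points fall
# into the same terminal node
K = np.mean(leaves[:, None, :] == leaves[None, :, :], axis=-1)
```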
d
a2ea085541bac0ce61599ea828419739
The Rubik's cube is a challenging and representationally complex puzzle, and it is quite difficult to capture the complex patterns it involves. Predicting the optimal action using only the current state often leads to underfitting and almost uniform action predictions, which do not correspond to actual solutions. However, the transformer has great representational power, and CubeTR is able to capture the complexities present in this puzzle. {{cite:7622b5e434d178cb33bb79b1a566d0d82bb4ac68}} amply demonstrated the usefulness of transformers in the case of sequential decisions. CubeTR extends this to claim that the transformer can function even with a single state, instead of sequences, under the memoryless problem setting.
d
8514ad1d99955c6ed70ef18c5f6bdd78
In particular, for the approach based on binary weights in the FC layer (“binary, FC w.”), we have considered the binarization of the sole weight matrix {{formula:2e0d9e0f-e97e-473f-9f87-1eb4648eff61}} (the bias vector of the FC layer is therefore represented as a single-precision floating-point number). Moreover, as the binarization function we have used the {{formula:f3cd426d-b017-4ef7-999e-b260ce722043}} function. Precisely, for the backward pass of the {{formula:3e134239-832d-4bf2-9571-f7549f38c64f}} function, the straight-through estimator of the gradient is used, which is defined as {{formula:1957564e-4d3e-420a-8998-765e251d03fd}} , with {{formula:7afa3b5b-b1da-40ec-88ee-1ea726173612}} and {{formula:cf0cfd73-f3a9-4e44-801f-a4c6794758dd}} the input and output of the {{formula:68602fd8-680c-482f-8c80-469e4144a5a1}} function, respectively (see {{cite:0301dee6608c6049213926760b509c69703b38cc}}). Furthermore, as recommended in {{cite:f8cdee9bde4875122080e52e23e9af06c073f698}}, we scale the binary weights by {{formula:663be91f-ef98-46cb-828d-ed93f75c07b6}} . Thus, we can write {{formula:a36e028e-5559-4c5f-ab3f-7201fcdcf87b}} , where {{formula:96e95d76-9121-4d45-a6a4-45ab00fbe9fd}} denote the input of the binary FC layer, the binary weight matrix, the bias vector, and the output of the binary FC layer, respectively. With this approach, we could reduce the number of parameters in terms of bits in the encoder layer by about {{formula:6f95b689-04d8-4d31-887c-cbe7234f7277}} . Another approach is based on network pruning and fine-tuning (“pruning + FT”). In more detail, after having chosen the best configuration as described in Section REF , we set to zero all the weights with the smallest absolute value, according to a predetermined percentage, e.g., {{formula:81a8718d-3fb4-4436-b474-86c8801d6a9c}} . As expected, this degrades the performance. Therefore, after pruning the network, we fine-tune it by optimizing the remaining parameters until the stopping criterion is met. In particular, as highlighted in Tables REF and REF , the “pruning + FT” method maintains approximately the same performance as the method without pruning, while reducing the number of parameters to be offloaded to the MT by {{formula:0b83cc20-46bf-457a-90fa-648e28123b7c}} . On the other hand, the “binary, FC w.” approach produces a larger degradation in performance. Finally, we have compared our method with two other approaches based on AE NNs. In particular, we see that our method outperforms both the CsiNet {{cite:2544bf1610b6bf8fddb605096326d8e4cdd9f1d6}} and the recent ACRNet presented in {{cite:6ecb77d38b9b13957ebf0937f36555dfa544e9d9}}. Differently from our approach, CsiNet and ACRNet have been trained on DL CSI data, and the data have been normalized as specified in the respective works. The two networks leverage the sparsity of the CSI in the delay domain. However, ACRNet boosts the performance by making use of network aggregation in the decoder and by replacing the rectified linear unit (ReLU) activation with a parametric ReLU (PReLU) activation.
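A minimal PyTorch sketch of sign binarization with the straight-through estimator described above; the mean-|W| scaling is a common choice (e.g. XNOR-Net style) and stands in for the elided scaling formula.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through estimator in the
    backward pass: pass the gradient where |w| <= 1, zero it elsewhere."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

def binary_linear(x, W, b):
    # scale the binary weights by mean(|W|) (assumed scaling choice);
    # the bias stays in floating point, as in the text
    alpha = W.abs().mean()
    return x @ (alpha * BinarizeSTE.apply(W)).t() + b
```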
r
0d95ac3fbc8c3786ca5cd5923f39af24
For {{formula:6277b800-28e5-4683-9e03-0a50fdf3db0e}} (and {{formula:c7bf8c4a-3026-476a-ae0e-15bdf7e35751}} ) equations (REF )–() reduce to the classical three-dimensional irrotational steady water-wave problem, which is usually handled by writing {{formula:83abfa3d-9a2c-4983-be1a-c67c764697bb}} , where {{formula:ef89f433-34f8-4f19-b462-e6b1c472f2f5}} is a harmonic scalar potential, so that (REF ), () are automatically satisfied. In fact it is possible to formulate this problem in terms of the variables {{formula:78498c82-27fe-4605-b8c9-18814f67e73e}} and {{formula:fb6fc023-c4e7-4029-9d00-22f784d097c4}} (Zakharov {{cite:4a1503d1bc840fd67cfbbb958cc0324bbc32bfd7}}, Craig & Sulem {{cite:8696e47dc356e2bc58a842b2fee07fdc5d26ab83}}). Consider the variational principle {{formula:9295b401-e3a7-4dd0-9414-239b6e4d6056}}
r
0286e73e2d256776f08fb592eae8a71c
In Fig. REF , we show the valence quark, sea quark, and gluon PDFs of the pion. The black lines are our results evolved from the initial scale {{formula:b4a1ed6e-4de5-415e-b68a-04bff496562a}} using the NNLO DGLAP equations to the experimental scale of {{formula:2326ca7c-cc8a-4db8-ab69-9f3e385e490a}} . The red lines correspond to BLFQ-NJL predictions {{cite:81b70aca64bed383552cf1456d431fae433ca534}} with the initial scale {{formula:d8a8f736-894a-4caa-9e8e-00ee157b30ef}} . Results are compared with the original analysis of the FNAL-E615 experiment {{cite:8bb668c6a2524e8b84ea6aa0be12a7b6db029c27}} data and with its reanalysis (E615 Mod-data) {{cite:6ecacc10826ebb749acbb7d3d2079f817d1a174d}}. After including one dynamical gluon, we find that the initial scale naturally increases, and the behavior of the pion valence PDF at large {{formula:d1bd4d02-6566-42de-9309-48e62c341a07}} , {{formula:4ce56564-c9d6-4d32-84e8-87d6b85ac106}} , approaches the reanalysis of E615 data. We obtain the first moments of the valence quark, sea quark, and gluon distributions at 4 GeV{{formula:821c3071-ddac-41a4-b56b-8fe2408fed84}} , {{formula:05faf391-275b-406c-9d2d-fe5231567e4b}} , respectively.
r
913b05f00d8d583def6b5721c0231fd0
In actual fact, the vacuum model III ({{formula:64be4f73-88f9-4f43-8994-ef15b72034e3}} ) tends to remain fairly close to the {{formula:c8c25cb6-d297-4112-a4c6-dcea81375a37}} CDM at all times. Its dynamics is weaker than that of the main DVMs (RVM and {{formula:7debf5e1-cc37-4874-bfb6-5bf4a1bf5fa4}} ). Since {{formula:1a0e165c-69bd-4010-ae2c-3162b7bd532e}} for all the DVMs, the evolution of its vacuum energy density is approximately logarithmic: {{formula:3038a7c9-8c3a-4b01-a5a1-703e8d5f6f10}} , as follows from (REF ) with {{formula:5e30f760-1ed7-483e-ab13-10d86157f613}} . It is thus significantly milder than that of the main DVMs, for which {{formula:76f10c42-c222-46bd-9831-5f2485882a40}} . The performance of {{formula:a11648d5-7f38-46a4-bf89-b4ef5c0c8050}} can only be slightly better than that of {{formula:63ef9b4d-3bbd-4ded-bdcc-6c80c074b931}} CDM, a fact that may not have been noted in previous studies – see  {{cite:e1614e4fb0ce8ef265520278f828838b37d13fa0}}, {{cite:2f5336bf225565e885fb0984a5266361d5fc88f4}}, {{cite:c8fd46613f01495b91c545f25779dd6835e4765e}}, {{cite:164e2ecd998d5bee3c96fa4df24469a86a683de5}}, {{cite:6d382f4eb993aeddd729624c4fa2b28771151444}} and references therein.
d
988faf3bdbdc7d72c01d119985565b51
In a number of fields in physics, the formal equations derived from the theory make use of the pfaffian of some skew-symmetric matrix appearing in the theory. For example, the pfaffian arises in the treatment of electronic structure with quantum Monte Carlo methods {{cite:875aa234d79eeeeb295bdf6e5e8ff03ad40148d4}}, the description of two-dimensional Ising spin glasses {{cite:2beea7c2e88a13bc7fc69de90ec90b6bcb50f265}}, and the evaluation of entropy and its relation to entanglement {{cite:1d2552cd7cffbb9e5d2f6813bfa47637f85691ea}}. Pfaffians occur naturally in field theory and nuclear physics in formalisms based on fermionic coherent states {{cite:ec54730dea89f00a52e829fbfaaff7bc8d975c98}}, {{cite:5b637c068be90d2cb6b9c624e84195a5d5bf65f1}}, {{cite:04c2da8b5b79bd72cf8db69b08e541f5dfb4ffb4}}, {{cite:2e60ec84a647136320232527adef4fd2cdf3a46e}}. A recent application is to the overlap of two Hartree Fock Bogoliubov (HFB) product wave functions {{cite:c6057e1e37abe015716dcc333e390784ad15c03e}}, needed for nuclear structure theory. While there is a simple formula for the pfaffian of a skew-symmetric matrix {{formula:38917c9e-28b9-414d-bb43-b842f2b2290b}} in terms of the determinant, {{formula:3debff8a-25ae-445c-9fad-0ebcac9824a6}}
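For small matrices, the pfaffian can be computed by recursive expansion along the first row and the determinant identity Pf(A)^2 = det(A) checked numerically; a sketch with factorial cost, for illustration only:

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix by recursive expansion
    along the first row (fine for small matrices only)."""
    n = A.shape[0]
    if n % 2:            # odd dimension: the pfaffian vanishes
        return 0.0
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        minor = A[np.ix_(keep, keep)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(minor)
    return total

# check Pf(A)^2 == det(A) on a random skew-symmetric matrix
M = np.random.randn(6, 6)
A = M - M.T
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))
```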
i
a5c8dfc17c3e67264fbdecabd19cbd91
Furthermore, the last three decades have brought another change in the computational methods used to study quantum many-body systems numerically, namely tensor networks (see {{cite:f92eb5c4550314ac8ca656beae2f7154595c57d2}} for a review). Tensor networks are sets of tensors that share indices and that, upon contraction of the shared indices, can approximate states of a given lattice theory. The individual tensors can be thought of as representing local degrees of freedom in the theory, as in the case of Matrix Product States (MPS), or, as in the case of MERA, as giving a discrete realization of the holographic principle (see {{cite:1cbc824779041fcdd4ebce0db6d81a6ec41e10e6}} for a short review). By reducing the dimensionality of the indices shared by tensors, one reduces the dimensionality of the subspaces represented by those tensors and simplifies the problem of finding e.g. a groundstate of a given system. As we will see, the above considerations provide a simple means to apply tensor network methods to problems in quantum field theory in curved spacetime. For constant time slices in AdS{{formula:47c6795e-2eae-4f0f-9de5-72565af003a3}} with a finite but large cutoff, the dual theory will live on the boundary, which is a circle. As we will see below, the behaviour of entanglement entropy under Weyl transformations is universal across CFTs, which allows us to study it with any CFT, not just ones with AdS as the holographic dual. Since the circle is a compact space, the entanglement entropy, being a continuous function of the boundary of the subsystem, is bounded. In such situations, Matrix Product States can represent the CFT groundstate well, provided one employs high enough bond dimensions (we thank Michal P. Heller for pointing this out). We will use this situation to study the above deformations and their effect on the entanglement entropy of the CFT groundstate explicitly, by approximating the ground state of a CFT in different background metrics. This provides a new method for the study of metric perturbations via Weyl transformations; for earlier studies see {{cite:3f4cdedde541e637ed37ff06edb049f42db33283}} and references therein.
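As a minimal illustration of the MPS ansatz just described, the following sketch (shapes and bond dimension illustrative) contracts an open-boundary MPS into the full state vector:

```python
import numpy as np

def mps_to_state(tensors):
    """Contract an open-boundary MPS into the full state vector.
    tensors[i] has shape (D_left, d, D_right); the boundary bond
    dimensions are assumed to be 1."""
    psi = tensors[0]                       # shape (1, d, D)
    for A in tensors[1:]:
        # contract the shared bond index between neighbours
        psi = np.tensordot(psi, A, axes=([-1], [0]))
    return psi.reshape(-1)                 # merge all physical legs

# a 4-site MPS of qubits with bond dimension 3
d, D, n = 2, 3, 4
dims = [1] + [D] * (n - 1) + [1]
mps = [np.random.randn(dims[i], d, dims[i + 1]) for i in range(n)]
state = mps_to_state(mps)
print(state.shape)  # (16,)
```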
i
e3f2fdd55bf51467804992b14e13ca0a
One could also consider variations on the LBI definition. One might use loss with non-linear dependence on {{formula:75976774-ad02-4ea7-bc9c-7b33bede06dc}} and extend beyond linear queries (in doing so, one might lose the variance-dependent generalization guarantees). One could also consider a loss that is not worst-case over {{formula:f9835017-a684-458c-88c0-ee42cdabd1b1}} (in which case the composition guarantees might degrade, as in {{cite:363bc6b979098b63a8f333d31656c996d1934be3}}). Another possible improvement might be achieved by using Rényi Differential Privacy (as in {{cite:14f0f60d979ab6417fd258e89f700b09b71708f0}}), to avoid the small degradation that the LBI approach suffers when analyzing the Gaussian mechanism compared to using a differential privacy-based analysis (this raises the logarithmic term to the power of 1.5 instead of 1; see {{cite:08db3c823a8e6141ad33a5a4e513a1945b2f7f62}} for comparison).
d
54a015b85c25e651ae359b59ac28f421
There are different strategies to incorporate auxiliary connections into a pre-trained model. One of the most common approaches in transfer learning is to add new hidden layers after the last hidden layer (the penultimate layer), thereby extending the network's depth, and to replace the prediction layer. Instead of modifying the pre-trained model's topology, i.e., the number of layers and the size of each layer, we propose to augment the pre-trained model with auxiliary connections that run parallel to the existing ones, thereby keeping the original architecture design unchanged. The intuition behind this approach is that the architectural choice of a pre-trained model is often validated and obtained by an extensive experimentation process. It has been shown that many state-of-the-art network architectures such as ResNet-50 {{cite:ddcbff5d9150dbdb66c2d19a8c8b4ec132e8a294}} or DenseNet121 {{cite:c9bf01b3d3f085ec2eaedee2759fefb6c998c18e}} are not specific to a dataset, but perform well in many similar problems. Thus, by respecting the architectural choices of the pre-trained network {{formula:f9e50127-55f3-43c0-ae41-85a3f8733ec3}} , we can avoid the time-consuming process of validating architectural choices for the auxiliary connections.
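A minimal PyTorch sketch of the parallel-connection idea described above; the layer shapes and the form of the auxiliary branch are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Wrap a frozen pre-trained layer with a trainable connection
    running in parallel, leaving the original topology (layer count
    and layer sizes) unchanged."""
    def __init__(self, pretrained_layer, dim, hidden=64):
        super().__init__()
        self.base = pretrained_layer
        for p in self.base.parameters():
            p.requires_grad = False           # keep pre-trained weights fixed
        self.aux = nn.Sequential(             # auxiliary branch, trained from scratch
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        # parallel, not deeper: the output dimensionality is unchanged
        return self.base(x) + self.aux(x)
```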
m
d4b65f382ab803f5254ab0fefeb9173c
This paper proposed MOI-Mixer, which aims to leverage high-order interactions for MLP-based models in sequential recommendation systems. We argued that Transformers and existing MLP-based models differ in performance due to the absence of an explicit high-order term in the latter. Thus, we introduced a novel MOI layer capable of modeling arbitrary multi-order interactions among the given input features. Experimental results on five real-world datasets show that integrating a high-order term into MLP-based models is consistently beneficial. We also showed that MOI-Mixer is computationally efficient in processing long-sequence behavior data compared to the state-of-the-art model {{cite:514e42c4fd67bb1a899d5c74907a89673e7baef5}}.
d
5708c7bc372b50de81f57e728fbc9dd0
As we demonstrated in Section , our motion estimation method in Algorithm REF is both fast and accurate. More specifically, our method outperforms the state-of-the-art FGR method of {{cite:950fdb4d167f8cf57447127d016aca3a565e1fa6}} in terms of both speed and accuracy. Given their strong similarities, in Section REF we examine the relationship between the method of {{cite:950fdb4d167f8cf57447127d016aca3a565e1fa6}} and our approach in Algorithm REF . Following that, we discuss the limitations of using FPFH for feature matching and our approach to overcoming them in Section REF .
d
2ff3d99704c6fdee0ff7cfab3eb46a66
(i) We provide new bounds for the instantaneous regret {{formula:97c62195-b9c2-425f-8726-ea8555b52054}} in expectation and in high probability for the inexact online gradient descent; the bounds include terms that quantify the temporal variability of the cost function, as well as the statistics of the gradient error. The high-probability convergence results are derived by adopting a sub-Weibull {{cite:14e33ba12c52482229678c8a0dd446121d366d96}} model for the gradient error; our bounds scale more favorably compared to bounds obtained via Markov's inequality, and hold iteration-wise. Finally, we provide an almost sure result for the asymptotic behavior of the regret {{formula:156eae23-7994-4ff9-a1e3-b6a10bda72a1}} .
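A toy sketch of inexact online gradient descent in the sense described here, with a hypothetical oracle returning the true gradient of the current cost plus an error term:

```python
import numpy as np

def inexact_ogd(grad_oracle, x0, steps, alpha):
    """Inexact online gradient descent: at each round t the oracle
    returns the gradient of the time-varying cost f_t corrupted by
    an error e_t (e.g., heavy-tailed / sub-Weibull noise)."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for t in range(steps):
        g = grad_oracle(t, x)          # true gradient + error
        x = x - alpha * g
        traj.append(x.copy())
    return traj

# toy example: drifting quadratic f_t(x) = ||x - c(t)||^2 / 2
rng = np.random.default_rng(0)
oracle = lambda t, x: (x - np.array([np.sin(0.1 * t), 0.0])) \
                      + 0.01 * rng.standard_normal(2)
path = inexact_ogd(oracle, [1.0, 1.0], 100, 0.1)
```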
i
5f8f7b9bd7d39a06fa0734a2f1dab51a
In Appendix C, we use the same example to illustrate how we can visualize the privacy protection of individuals. Further work includes comparing this approach with {{cite:fd7429c00254eb2e712f61e1bffab39da1d9f330}}.
d
96a4c1bcbf45f65002a23e12f102039c
Almost no additional hyperparameters are introduced.

Acknowledgments
This work was supported by the Israeli Ministry of Science and Technology, and by the Gatsby Charitable Foundations.

Appendix
Overfit and inter-model correlation
In this section we formally analyze the relation between two types of scores, which measure either overfit or inter-model agreement. Overfit is a condition that can occur during the training of deep neural networks. It is characterized by the co-occurring decrease of train error or loss, which is continuously minimized during the training of a deep model, and the increase of test error or loss, which is the ideal measure one would have liked to minimize and which determines the network's generalization error. An agreement score measures how similar the models are in their predictions. We start by introducing the model and some notations in Section REF . In Section REF we prove the main result (Prop. REF ): the occurrence of overfit at time s in all the models of the ensemble implies that the agreement between the models decreases.

Model and notations
Model. We analyze the agreement between an ensemble of {{formula:ae826c08-948b-4e3c-ab92-4b2a9bba65b5}} models, computed by solving the linear regression problem with Gradient Descent (GD) and random initialization. In this problem, the learner estimates a linear function {{formula:be6bc00a-0655-422c-a2a9-9be847ffca49}} , where {{formula:f574bde4-3239-4c27-bb77-7b0317cbf4fd}} denotes an input vector and {{formula:d89e3785-6b24-48cd-948b-602fd5ab3aff}} the desired output. Given a training set of {{formula:918cf25d-b0e6-44d9-a689-a9ebad3ae981}} pairs {{formula:57dde595-3d0a-472e-9c49-e357f20ca9a3}} , let {{formula:26e6ad34-48b3-49b2-a17f-0ec6319f48b1}} denote the training input - a matrix whose {{formula:4196024f-3689-4f10-9be7-2ad611b3e84b}} column is {{formula:ddaff893-5896-471b-ac1b-bb86025c03f6}} , and let row vector {{formula:ebb47c03-6e81-4172-9ba1-aa2c70498311}} denote the output vector whose {{formula:70999c06-cab8-4698-a72d-4b7fdd938d40}} element is {{formula:4fc349e0-f743-4061-a47a-0f85616a8311}} . When solving a linear regression problem, we seek a row vector {{formula:5fefb5a3-cae6-44fa-9193-ee024c6db623}} that satisfies {{formula:17043e55-d5bb-4f72-9fb0-f902c23d9ed2}} To solve (REF ) with GD, we perform at each iterative step {{formula:93ccfd8f-d2c9-4b55-8dbe-3fd6522158c3}} the following computation: {{formula:bfa15549-64ab-4ce3-91af-05a4eb8d3a94}} for some random initialization vector {{formula:0e280727-35ce-4930-8581-11c68de1aef5}} where usually {{formula:2d5a8c40-f4bf-412a-8e56-426b5403956c}} , and learning rate {{formula:5436b2fe-13bd-4f21-a45d-7b3f8f277421}} . Henceforth we omit the index {{formula:9a758034-e795-45b3-8df1-764811035d28}} when self-evident from context.

Additional notations. Below, index {{formula:36f4b784-432d-47d4-9a83-7d4b93bc6327}} denotes a network instance and {{formula:0f29b8c0-4efe-40b1-9c3f-4a041e6f55e7}} denotes the test data. Let {{formula:5657c822-18ef-4010-bbc9-dffafefe02ed}} denote the training matrix used to train network {{formula:0e150b65-783d-4d81-aa5c-fbf3ea98f03a}} , while {{formula:a4565b2d-84f5-415f-9ef4-8eca48b18e9a}} denotes the test matrix. 
Accordingly, {{formula:e2051d88-a664-46ad-aa27-b26c576283bf}} Let {{formula:db0b5c18-dfb3-45db-9ef6-95e5d646b729}} denote the gradient step of {{formula:4e130aec-797c-4575-8d29-974c83bec607}} , the {{formula:58cede7a-f068-40f5-aedc-7722fb41835d}} model learned from training set {{formula:a68e5e73-0bef-48aa-bfdd-3c1770f09aa7}} , and let {{formula:40d92117-66a6-4a42-9734-60f502592f74}} denote the cross error of {{formula:b9b7caef-c779-4f73-9874-1e9c2b57233e}} on {{formula:6fe5b716-4501-4b9a-92ff-f08809f10d89}} . Then {{formula:5414f95d-cbc2-461a-95ff-88a0931bc810}} Let {{formula:a90e8c14-72ff-4448-98ce-fb414865e136}} denote the cross gradient: {{formula:f7b13f65-38c6-4a60-8dce-8d00a049ffc5}} After each GD step, the model and the error are updated as follows: {{formula:a057b62a-df31-4ac8-bfa5-3379b80fc2c6}} We note that at step {{formula:316d4489-5e22-437b-bd7f-b4169cd6c747}} and {{formula:2e76d74d-46cb-4c3c-a718-41724e02d8a7}} , {{formula:e0c35a5a-79ae-4579-af7a-67360b34e5c0}} is a random vector in {{formula:ce06dc84-7087-4d8f-8ab2-376580395c93}} , and {{formula:90bfb4a6-36de-4134-9d3a-ac7ea84adc4d}} is a random vector in {{formula:91d68d7c-51e1-4c0a-b6e0-e6b043c94ea0}} .

Test error random variable. Let {{formula:a07d07b4-9054-4d38-83d4-f2fb43ab7879}} denote the number of test examples. Note that {{formula:7ceac455-2176-4275-a29b-81e0a9e0061a}} is a set of {{formula:86cb7f4f-fe1a-45f5-a563-d54bc65d439c}} test error vectors in {{formula:bd9be166-2aa4-4816-828e-7d9296982cbc}} , where the {{formula:48fc5c6d-1f97-47b1-962c-839a6f51b339}} component of the {{formula:e9a2653e-ecef-4ee4-8477-115bfe6c731d}} vector {{formula:fe986fec-baaa-47bb-905c-68eaa9e38a20}} captures the test error of model {{formula:6da36c94-9556-4eeb-80df-dcb4a37fab3f}} on test example {{formula:0359602e-aefd-4cff-a98e-e4f5393cdb8d}} . In effect, it is a sample of size {{formula:3be5fe9f-1ed9-46f3-94b2-64c9a74928af}} from the random variable {{formula:6b0b2657-07da-45b3-b9ec-e4237ecf83df}} . This random variable captures the error over test point {{formula:bcedaffb-3539-41fc-a2a5-230288222150}} of a model computed from a random sample of {{formula:7b7e18f4-157f-4f9f-8326-7ff92dd47a88}} training examples. The empirical variance of this random variable will be used to estimate the agreement between the models.

Overfit. Overfit occurs at step {{formula:b4e50c01-4202-450a-ac2d-87027327af7}} if {{formula:0048f65e-a5e1-433e-96cb-0f638ac10927}}

Measuring inter-model agreement. In classification problems, bi-modality of the ELP score captures the agreement between a set of classifiers, all trained on the same training matrix {{formula:e7319190-38f9-41d7-b76a-9162354738f2}} . Since here we are analyzing a regression problem, we need a comparable score to measure agreement between the predictions of {{formula:6c756a2f-8580-4d42-8ec7-604f29646b36}} linear functions. This measure is chosen to be the variance of the test error among models. Accordingly, we will measure disagreement by the empirical variance of the test error random variable {{formula:3f5d89f8-81d4-406b-b3a0-ef102ec80955}} , averaged over all test examples {{formula:42446c34-07be-4c31-accc-51004e241aa0}} . 
More specifically, consider an ensemble of linear models {{formula:3df1b2b5-0ac1-423d-bccc-84fa0c8d3d08}} trained on set {{formula:79387c89-7d55-4cb1-aa4d-ad0339425342}} to minimize (REF ) with {{formula:2d8c3ba8-eab3-4329-af13-f2eaa9ba0c1b}} gradient steps, where {{formula:3e607bbe-b678-47e0-9fc5-eaa53fec20d1}} denotes the index of a network instance and {{formula:7bdb9f27-e08c-4977-9848-989d30e42120}} the number of network instances. Using the test error vectors of these models {{formula:7a9115fd-ac08-4d79-913f-f7167ae1d09e}} , we compute the empirical variance of each element {{formula:19fd75c2-46a6-4654-a2c1-8a5018883ce2}} , and sum over the test examples {{formula:0ee23be8-e956-4d01-88b7-ebd29470b669}} : {{formula:5c137624-1934-462d-9b7f-10a08e85a041}}

Definition 1 (Inter-Model Disagreement). The disagreement among a set of {{formula:f40d02de-37fc-4755-9418-f179f7dcace3}} linear models {{formula:daa11111-40c8-4dc1-a466-96aaa1676b5e}} at step {{formula:f55b4cf4-d908-44dd-bd85-7d99c5646749}} is defined as follows: {{formula:ab29ea88-fa2f-4d4e-b462-469b4de4d686}}

Overfit and Inter-Network Agreement
Lemma 1. Assume that the learning rate {{formula:41139da0-4be5-4a0c-8bb7-3df46b08f6b5}} is small enough that we can neglect terms that are {{formula:f5300f20-dc82-4ec6-9300-dae25a2cdda0}} . Then in each gradient descent step {{formula:362a3993-4639-49f6-a931-16f1e37329b2}} , overfit occurs iff the gradient step {{formula:5cb39be9-bf70-4667-97ed-35f4b73e8d3a}} of network {{formula:d17c2aa6-d75d-4f9d-a2ef-6aef17644818}} is negatively correlated with the cross gradient {{formula:654fe1b5-09ec-4aa5-bfc8-e5b005e9b1a7}} . Starting from (REF ): {{formula:0fb39ef8-01a8-4928-9452-00f26e3f7322}}

Lemma 2. Assume that {{formula:a51a389b-8b5b-4f50-8c2d-2d380a40ded8}} and that {{formula:9c7c8419-c12e-4a2a-a5ef-805ab86ee978}} is invertible. If the number of gradient steps {{formula:55ca5f5f-6aa9-42c7-ba27-6482da2fafd5}} is large enough that {{formula:d9687884-a8e2-475a-a411-49e2fc966745}} is negligible, then {{formula:09dcdea9-a4bb-4018-a1e5-f765c7ca86bd}} Starting from (REF ), we can show that {{formula:acb650c8-c954-4674-8d65-c819a9bdb6bd}} Since {{formula:aa297957-ca65-4781-88b4-03a399d8b903}} {{formula:c4876e71-c1cc-41e9-861e-b964708f650e}} Given the lemma's assumptions, this expression can be evaluated and simplified: {{formula:3f8a574e-4ae9-4b58-bd3c-37c1dc5b6a23}} From (REF ) it follows that a decrease in inter-model agreement at step {{formula:68034729-dacd-427a-b24c-9ca77b4ee8d7}} , which is implied by increased test variance among models, is indicated by the following inequality: {{formula:3bb5b3cc-947f-407e-9e0d-43c10f11df44}}

Theorem. We make the following asymptotic assumptions: (1) All models see the same training set, where {{formula:cda80dd0-efb2-46e4-b277-6fea08f107ed}} . (2) The learning rate {{formula:ea77dde4-a0b7-427c-af23-e7f8f8efd4e9}} is small enough that we can neglect terms that are {{formula:20b4f947-e447-404f-beab-323373a7294f}} . (3) {{formula:df4214b8-ebbe-4591-ac17-b9014c29d88f}} , {{formula:298ddfad-3f57-4f46-9a36-527dfeca2368}} is invertible, and the number of gradient steps {{formula:44cae337-3a43-4c00-b3fc-043a9dcdc11f}} is large enough that {{formula:f4f1cb7a-2969-4fb6-8409-f8dcacea2889}} can be neglected. (4) The number of models {{formula:9b1cd93e-26ac-4de5-b7da-de8de9181e0a}} is large enough that {{formula:7cbe4371-72e9-424f-beb8-6a6caecd560e}} . 
When these assumptions hold, the occurrence of overfit at time {{formula:5207ee58-a122-4e1a-8402-06abf5ca5d4f}} in all the models of the ensemble implies that the agreement between the models decreases. (REF ) can be rearranged as follows: {{formula:a4cccb71-0f5d-47b4-85f1-d356a4d0cfed}} where the last transition follows from {{formula:0290950f-ffaa-4227-b913-b336a94f2550}} . Using assumption 2, {{formula:a8b737f4-4dd9-4ac2-adab-88a67db9224d}} where {{formula:17efcd7f-5f5d-4d3d-a61d-cd10e9a1c36c}} and {{formula:b4b6ec19-23a9-495c-a2eb-e76ce89d69ed}} Next, we prove that {{formula:12310c75-fbba-4122-978e-4158915e459b}} is approximately 0. We first deduce from assumptions 1 and 4 that {{formula:05755f86-5ec7-457a-b467-e37787ce02b0}} From assumption 3 and Lemma REF , we have that {{formula:62ec4614-4ded-44fd-88ae-af565c5710bf}} . Thus {{formula:b4c56373-7598-4cd1-989f-840138efd80c}} From this derivation and (REF ) we may conclude that {{formula:e6c13cae-be91-475c-b366-190980a05e49}} . Thus {{formula:8bcf2691-be0b-41a1-b170-a968dcf651bc}} If overfit occurs at time {{formula:52bbd92d-97f4-47ea-bdad-c6277ac025e3}} in all the models of the ensemble, then {{formula:d7a49825-4ed6-45b6-bad8-2fde80d0d5d4}} by Lemma REF and (REF ). From (REF ) we may conclude that the inter-model agreement decreases, which concludes the proof.

Noisy Labels and Inter-Model Agreement
{{figure:b5e5ebdc-9ec9-45a7-ad68-4588e68da95b}}Here we show empirical evidence that noisy labels are not learned in the same order by an ensemble of networks. To see this, we measure the distance between the TPA distribution, computed separately for clean examples and for noisy examples, and the binomial distribution, which is the expected distribution for iid classifiers with the same overall accuracy. Specifically, we compute the Wasserstein distance between the agreement distribution at each epoch and the binomials BIN(k, {{formula:613d5c2d-9633-4f14-aa2e-2b8884e008e6}} ) and BIN(k, {{formula:e5e9b97f-5b8a-454e-b201-9750cf856537}} ), where {{formula:892e6402-4430-452c-80b4-1a1d55c2be5b}} is the average accuracy on the clean examples and {{formula:c83b3726-f918-47bb-990b-8c6acffa9389}} is the average accuracy on the noisy examples; see Fig. REF . We see that while the distribution of model agreement on clean examples is very far from the binomial distribution, the distribution of model agreement on noisy examples is much closer, which means that each DNN memorizes noisy examples at an approximately independent rate. 
Computing the ELP Score: Pseudo-Code
Input: training dataset {{formula:9b1185ed-2cae-45ce-8e3e-488315b1ea13}} with {{formula:e6e8e4eb-2098-4b3d-b752-36928757e2d3}} examples (potentially noisy), network architecture {{formula:095f1939-bc86-439b-a964-ccc38c27311a}} , batch size {{formula:0b1bfc11-7f48-4a7d-85e7-1ab7cb39f1ca}} , learning rate {{formula:986f7225-a9d2-49d2-ae39-78f4fd964750}} , number of networks K, number of epochs E.
Output: array containing the ELP score for each data point.
Compute agreement during training:
  {{formula:2ce14d7c-8ac6-47f4-b17c-e1c6154498b4}} different initializations of {{formula:c9cddeef-0cde-4ac5-a89c-f8802af2f495}}
  Initialize agreement_arr[E,N] {{formula:0dc2bafa-e1e7-4fae-ad1c-0280b1b6b832}}
  {{formula:74c8d1dc-3744-44b7-8588-04c150280e67}} {{formula:8e31af6f-18de-4453-8afe-f0696700d224}}
    sample {{formula:a11f0ce7-ab65-41e1-a7c4-192a2121cd0f}} of size {{formula:a0c0d918-5c9e-4350-893e-70e4168401ec}} from [1...N] uniformly {{formula:6ffa2538-a9e7-4f75-9663-ff716f65ce8e}}  // gets a mini-batch
    compute {{formula:b8b5ef1e-4fff-4815-aac2-bfdc5f421486}} on {{formula:656127e9-4dc7-44ca-8d61-398dc5abedeb}} using {{formula:28ced4ab-fd63-4bb3-88fa-9712fdaf8df2}}
    compute loss {{formula:22c51671-0d9f-4daf-87e0-f629085805cb}} with respect to {{formula:da142987-6620-40fa-b6bf-c8105a59c7fc}} and {{formula:fbcce5c2-5df1-4c3d-b343-8b1f3d4ff065}}
    {{formula:6c5ed90d-11f9-45a6-abd5-088f8cec28ae}}
    {{formula:2c4b8074-f1fe-4dbd-baef-e4295e021071}} += {{formula:05f1e52e-7d21-4736-9d13-88d50ff222ca}}  // store whether network k predicted correctly on the examples at epoch e
  {{formula:617b14e3-5ef5-47ef-ae1d-f82771327d78}}  // normalization
  {{formula:7d364038-7f85-4071-9c31-600fa26001c0}}  // mean over the {{formula:76f1e648-78ad-45a2-8a45-6b9c0d265c0e}} -axis
  {{formula:3e73a451-0a32-4901-9b9d-c1e6b798483d}}

Additional results
Noise level estimation on additional datasets {{figure:9a86527d-915b-4ff3-9645-5bc7646a568e}}
Precision and Recall results {{figure:568eac2c-affa-4908-9fec-10741239d2d7}}
Ablation study. Table REF summarizes experiments relating to architecture, scheduler, and augmentation usage. {{table:960f4160-a805-4acd-98a8-bbb7e1104016}}
Alternative scores. We evaluate the two alternative scores defined in Section REF : CumLoss and MeanMargin, in which case Step 2 of DisagreeNet is executed using one of them instead of the ELP score. Fig. REF shows the Probability Distribution Function (PDF) of the three scores, revealing that ELP is more consistently bimodal (especially in the difficult asymmetric case), with modes (peaks) that appear more separable. This benefit translates to superior performance in the noise filtration task (Figs. REF , REF ). {{figure:8d12f7d6-f75f-4d05-80c1-5844ba5cae3f}}We believe that this empirical observation, of increased mode separation, is due to a significant difference between clean and noisy data in the pace at which agreement values change during training, in contrast with the pace of change of smoother measures of confidence like Margin and Loss (see App. ). Note that with the easier symmetric noise we do not see this difference, and indeed the other scores exhibit two nicely separated modes, sometimes achieving even better results in noise filtration than ELP (Fig. REF in App. REF ). However, when comparing the test accuracy after retraining (see App. ), we observe that ELP still achieves superior results. 
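Assuming the per-network, per-epoch correctness indicators have already been collected as in the pseudocode above, the final ELP computation reduces to two averages; a minimal NumPy sketch:

```python
import numpy as np

def elp_scores(correct):
    """`correct` is a boolean array of shape (K, E, N): whether
    network k predicted example i correctly at epoch e.
    ELP = per-example agreement, averaged over epochs."""
    agreement = correct.mean(axis=0)   # (E, N): fraction of nets correct
    return agreement.mean(axis=0)      # (N,): mean over epochs
```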
Methodology and Technical details
Datasets. We evaluated our method on a few standard image classification datasets, including Cifar10 and Cifar100 {{cite:be6ffda37a333514e590b9babfe9df9ac30f6dcf}} and Tiny ImageNet {{cite:12c23423be7ff9f25c5f424188bca2650c442340}}. Cifar10/100 consist of 60k {{formula:fe838afe-e363-4105-b6b6-9bbc963b1eb1}} color images of 10 and 100 classes, respectively. Tiny ImageNet consists of 100,000 images from 200 classes of ImageNet {{cite:f40c2166bddf2d8cd9b9aa42db8be9b04617d337}}, downsampled to size {{formula:0b0181de-3609-488a-b46b-c79341a06241}} . The Animal10N dataset contains 5 pairs of confusing animals, with a total of 55,000 64x64 images. Clothing1M {{cite:37f6448d3139240c11d2d7593b14bf26523b8fb9}} contains 1M clothing images in 14 classes. These datasets were used in earlier work to evaluate the success of noise estimation {{cite:b48aa63f4ffda4491d92cada2ec84c5183d3854d}}, {{cite:8d5d07f63297e9418229cf19fd723ce7e56db0ca}}, {{cite:da653f7e5c5c2b599ed067091a661b86f016ffed}}, {{cite:24ba90e9168a4d1e82ee95f8ea3af68d9b8e169d}}.
Baseline methods for comparison. We evaluate our method in the context of two approaches designed to deal with label noise: methods that focus on improving supervised learning by identifying noisy labels and removing/reducing their influence on training, and iterative methods that utilize semi-supervised algorithms in order to learn with noisy labels.
First approach: {{formula:b3c7f1c4-1cd5-4191-b860-c89f2be9615b}} DY-BMM and DY-GMM {{cite:8d5d07f63297e9418229cf19fd723ce7e56db0ca}} estimate mixture models on the loss to separate noisy and clean examples. {{formula:11464cb7-7a18-475c-981b-aeef009be1c9}} INCV {{cite:a8156e9cf50e75201a6994484ebe25512c00cac5}} iteratively filters out noisy examples using cross-validation. {{formula:940ca8ff-2d5f-4d39-bc19-92c7492f67fb}} AUM {{cite:b48aa63f4ffda4491d92cada2ec84c5183d3854d}} inserts corrupted examples to determine a filtration threshold, using the mean margin as a score. {{formula:967954bc-2b98-486c-900d-79b3d86d2ce8}} Bootstrap {{cite:9f23b8750ab4d058c6b057cfb3ab494cdaafc86b}} interpolates between the net predictions and the given label. {{formula:a0652e54-ab1d-448f-90b1-2117bc869fdf}} D2L {{cite:80082b013fb4c8918208f60716f278e593417b83}} follows Bootstrap, and uses the examples' dimensional attributes for the interpolation. {{formula:61d5cd26-b75f-42e1-a1d7-d39d7c592026}} Co-teaching {{cite:1dca843e88b3ced9d5f7211cb693619fe9625e8f}} uses two networks, each filtering clean data for the other network's training. {{formula:be963734-27f1-4e5a-bb67-51c3ae0278f7}} O2U {{cite:f71d54d16e0c42bffae88cb6963adc5d1744f930}} varies the learning rate to identify the noisy samples, based on a loss-based metric. {{formula:c89bd115-1133-447c-91b6-0e6105c14119}} MentorNet {{cite:6ab7c4c0f16a8c42e5d64c0fb3a615cd52b939d8}} trains a mentor network, whose outputs are used as a curriculum for the student network. {{formula:6ea76efb-525f-44f3-84ae-845da945f78f}} LEC {{cite:79b53f346c076979aeeb94754eb2a772b06f8d92}} trains multiple networks, and uses the intersection of their small-loss examples (using a given noise rate as a threshold) to construct a filtered dataset for the next epoch.
Second approach: {{formula:02573db3-a77a-4b59-ace0-33d94c84c524}} SELF {{cite:957f860c255b6613e3aa29ef01a160bce8f24164}} iteratively uses an exponential moving average of the net's predictions over the epochs, compared to the ground truth labels, to filter noisy labels and retrain. 
{{formula:c9f5e54a-b051-4426-8b39-e6ddf0f9e60d}} Meta learning {{cite:a80c46d2b95327c8e1e4df24200b14d4905b231b}} uses a gradient-based technique to update the network's weights with noise tolerance. {{formula:6ebbf62a-1b77-4230-a26a-3b6836632bad}} DivideMix {{cite:da653f7e5c5c2b599ed067091a661b86f016ffed}} uses two networks to flag examples as noisy or clean with a two-component mixture, after which the SSL technique MixMatch {{cite:405acce875a5728b1474df592a9e5eaf01dd1da7}} is used. {{formula:479835c3-e1cd-4e8f-ac58-f153def2f510}} ELR {{cite:24ba90e9168a4d1e82ee95f8ea3af68d9b8e169d}} identifies early-learned examples, and uses them to regulate the learning process. {{formula:35f4950a-2980-485a-a932-e8304e06b56f}} C2D {{cite:6e527622cea48afa1f72ecdc887f6bafaa27e1ef}} uses the same algorithm as ELR and DivideMix, together with a pretrained net and an unsupervised loss.
Technical Details. Unless stated otherwise, we used an SGD optimizer with 0.9 momentum, a learning rate of 0.01, weight decay of 5e-4, and batch size of 32. We used a cosine-annealing scheduler in all of our experiments and applied standard augmentation (horizontal flips, random crops) during training. We inspected the effect of different hyperparameters in the ablation study. All of our experiments were conducted on the internal cluster of the Hebrew University, on GPUs of type Ampere A10.
Comparing agreement to confidence in noise filtration. While the learning time of an example has been shown to be effective for noise filtration, it fails to separate noisy and clean data that are learned more or less at the same time. To tackle this problem, one needs additional information beyond the learning time of a single network. When using an ensemble, we can use the TPA score, or else the average probability assigned to the ground truth label (denoted the "correct" logit) by the networks. The latter score conveys the model's confidence in the ground truth label, and is used by our two alternative scores - CumLoss and MeanMargin. Going beyond learning time, we propose to look at "how quickly" the agreement value rises from 0 to 1, denoted the "slope" of the agreement. Since our empirical results indicate that the learning time of noisy data is much more varied, we expect a slower rise in agreement over noisy data as compared to clean data. In our experiments, ELP achieved superior results in noise filtration. We hypothesize that the difference in slope between clean and noisy data may underlie the superiority of ELP in noise filtration. To check this hypothesis, we compare two scores computed at each data example: ELP and Logits Mean (denoted LM for simplicity). LM is defined as follows: {{formula:984b5fdc-f625-4f30-95ad-3cf803c56a65}} where {{formula:39df82e9-c84d-4af7-a9ad-8f42e05c5f26}} is the number of networks, {{formula:4f153d1b-6b33-4e00-88fa-512d1c903e4e}} is the number of epochs during training, {{formula:a38d0fa5-1ebf-41c3-a2ca-4978dc5dcd56}} is a data example and its assigned label, and {{formula:445b2e36-6a0b-40f8-b134-6c502b5967d8}} is the probability assigned by network {{formula:5e74f248-1ad4-4084-815f-340c17c7ce99}} in epoch {{formula:6e885c0a-66dc-42ef-baaf-a996fef28af6}} to {{formula:316f987f-88b4-45bb-b276-ca632296d2ad}} (the ground truth label). 
In order to compare the pace of increase (slope) of ELP and LM, we conduct the following analysis: We select the two groups of clean and noisy data that are learned (roughly) at the same time by some net in the ensemble, and then compute the average agreement and "correct" logit functions as a function of the epoch, separately for clean and noisy data. We then compute the per-epoch difference between the noisy and clean average agreement, which we denote as {{formula:ca34a7d5-d05d-4de9-bdef-56a631ab8d62}} and {{formula:ba81832f-048e-417f-a088-7acc69a90800}} . Note that {{formula:f4f04eb8-7292-4e8a-a74b-553e7f773dd1}} and {{formula:fbe112d2-b2af-4c76-aa3c-8b607a931211}} encode the difference in slope between noisy and clean data, since they begin to rise at (roughly) the same time. Finally, we plot in Fig. REF the difference between {{formula:01b9c18d-a4cc-418d-bb3d-02c7d306f9a5}} and {{formula:7e8db3af-519e-484c-b1a5-a7ee2c6a6455}} , recalling that a larger {{formula:238e74ad-122d-4d51-9a6a-94f4131555c5}} indicates stronger separation between the clean and noisy data. {{figure:cfce8236-1f8d-496c-b7aa-d0c5e9a6d1dc}}Indeed, our analysis shows that with asymmetric noise, the difference between the agreement slopes of ELP on clean and noisy data is consistently larger than the corresponding slope difference of the average logits on clean and noisy data. This, we believe, is the key reason why ELP outperforms LM in noise filtration. Note that this effect is much less pronounced with the easier symmetric noise, and indeed our empirical results show that ELP does not significantly outperform LM in this case. To conclude, we believe that the signal encoded by the agreement values is stronger than the signal encoded in measures of confidence in the networks' predictions where true labels are concerned, which explains its capability to correctly classify even some hard clean examples and easy noisy examples as clean and noisy, respectively. This, we believe, is a result of the polarization effect caused by the binary indicators inside TPA, which disregard misleading positive probabilities assigned to noisy labels even before they are learned by the networks.
Comparing to methods with different assumptions. Here we compare DisagreeNet to methods that assume a known noise level - O2U {{cite:f71d54d16e0c42bffae88cb6963adc5d1744f930}} and LEC {{cite:79b53f346c076979aeeb94754eb2a772b06f8d92}} - using a 9-layered CNN for the training (with standard hyperparameters as detailed in App.  ). Since the noise level is assumed known, we replace the estimation provided by DisagreeNet with the actual noise level. The results are summarized in Table REF . We also compare DisagreeNet to other methods that use prior knowledge, where DisagreeNet does not use prior knowledge. The results are summarized in Table REF . {{table:2d65a2eb-2c4c-4283-a500-50042203b36c}}{{table:b9f9abdb-0847-4955-91ec-e596b80d97a6}}
d
ab430c10109b686d9a6f0b800fdcd49f
Similar AHEs are also observed in the related Eu-based compounds EuCd{{formula:4652ca6d-3144-4660-a8e8-59751e9dae22}} Sb{{formula:bac111e1-158f-46e9-8971-b31f65154e78}}  {{cite:1efb41de390eb0cc81320a054004f7e7de5016ea}} and EuCd{{formula:12495e22-ab94-4dfd-9eb9-c17d70ba80fe}} As{{formula:cf300b04-4e81-4112-a6cd-23619818854b}}  {{cite:39bceefbbd5effcad8ac1fa3adffa4205a8b45d2}}. In these compounds, the Weyl points near the Fermi energy or a dynamical Weyl semimetal state are suggested to contribute to the AHEs. It should be noted that, although a similar THE might be observed in these compounds below the saturation field, the origin of the THE is not discussed in these previous works {{cite:1efb41de390eb0cc81320a054004f7e7de5016ea}}, {{cite:39bceefbbd5effcad8ac1fa3adffa4205a8b45d2}}. Given the different collinear AFM state suggested in EuCd{{formula:0cffe906-b9f6-4222-b808-8b87bdae79fb}} Sb{{formula:7ccf0a76-b97d-46b2-8b91-d1f00692a0a4}} and EuCd{{formula:4f9b6b70-a6e1-4abc-9de3-0e5cd108f4d2}} As{{formula:8d1b1fde-1620-4776-b78a-2026271a296b}} and the different crystal structure of these compounds, the origin of the THE in EuCd{{formula:62fc52ef-f7d1-47c4-80ac-4ef8a988769d}} Sb{{formula:5e45292c-7833-4678-a4d5-21feda39f631}} and EuCd{{formula:c9245078-aa09-4dba-97fc-d44dacb799a8}} As{{formula:1295cd1b-8265-477f-af23-56a88853ab66}} may be different from that in EuIn{{formula:2b4c78ad-3d48-4c15-91dc-26dad3e8fdb6}} As{{formula:410c1fa3-5a71-47d8-9273-c7e793996cec}}.
r
f54cb6378be576c980db346678698f3c
GPDs contain extensive information on the hadronic structure. In the forward limit, at zero {{formula:a0bf88a1-4b25-4817-9d0a-b7c37dfe28a3}} and {{formula:3d2e49e7-fe3f-4be0-ba82-3e0b1c1f0549}}, GPDs reduce to the usual PDFs. An important property of GPDs is that, integrated over {{formula:040d1fcc-b20d-4572-be41-1ea67eca6128}}, they are equal to the corresponding FFs {{cite:6d83d8ad6a54cfdd9dd2474dc63e85c4b0625b01}}. GPDs are also related to the charge and magnetization distributions. Information on the parton angular momenta can be obtained from the Ji sum rules {{cite:a07f2144fc49703ca94344a921635d74d81e407a}} using the {{formula:ce44bc1a-562a-4f67-b8b4-8e954dc0909e}} and {{formula:c0e6ad33-edf0-4e68-b2de-79b50935ef54}} GPDs. More information on GPDs can be found e.g. in {{cite:fdce388ad4a76319d82000f771e06116d0d214be}}, {{cite:2a37f95d83df7330b43c6403b3aba3407f809aea}}, {{cite:8956463c5ebdcba22ec8af0415a92dd4c8e4d1dd}}.
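For concreteness, the relations alluded to above can be written in their standard textbook form for the quark GPDs H and E (conventions vary between the cited reviews; this is our statement of the widely used forms, not a quotation from them):

```latex
% Forward limit: GPDs reduce to ordinary parton distributions
H^q(x,\xi=0,t=0) = q(x)

% First Mellin moments: form-factor sum rules
\int_{-1}^{1} dx\, H^q(x,\xi,t) = F_1^q(t), \qquad
\int_{-1}^{1} dx\, E^q(x,\xi,t) = F_2^q(t)

% Ji sum rule: total angular momentum carried by quark flavour q
J^q = \frac{1}{2}\int_{-1}^{1} dx\, x\left[H^q(x,\xi,0) + E^q(x,\xi,0)\right]
```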
i
3e09f6c922711b14a15adc0da69d3d51
According to {{cite:fd6b933c8ead6f39de7e32739e97c69ac5225c3f}}, aPY {{cite:aff7015f8ad128ad9fc459b804a6d8b02156167c}} has a much smaller cosine similarity (0.58) between the attribute variances of the disjoint train and test images than the other datasets (0.98 for SUN, 0.95 for CUB, 0.74 for AwA2), which means it is harder to synthesize and classify images of unseen classes. Although previous methods have relatively low accuracy for unseen classes, our performance gain is even higher with such a difficult dataset. Compared to all previous models, our GDAN achieves a higher accuracy for unseen classes by a large margin of 16%, while still maintaining a high accuracy for seen classes. From the results of previous models we can see that although they generally achieve very high accuracy for seen classes, they perform very poorly when predicting images from unseen classes, while our model achieves a good balance between seen and unseen classes, which gives us the highest harmonic mean accuracy on aPY {{cite:aff7015f8ad128ad9fc459b804a6d8b02156167c}}.
r
737dc629dcfbbe54cfbe3c8237f3e9de
The rare decays {{formula:f76e1909-909c-4d77-ace3-2ea15917abe9}} and {{formula:ff947f42-64c9-4bf3-b5d5-b2197700dd7d}} have played an important role for three decades in tests of the Standard Model (SM) and of its various extensions {{cite:fe162229c04f6193ba63efb0a583ba76d99208ff}}, {{cite:547a7e74fa94471e375709ac6a9375f78eb5ab23}}. This is due to their theoretical cleanness and the GIM suppression of their branching ratios within the SM, implying strong sensitivity to new physics (NP).
i
4d9ee7acd27fd3b4261d7ae6cc115a91
The theoretical solar wind speed and density are consistent with those measured by SolO/Metis from {{formula:878a635e-1f48-4e76-bdea-50dc0fcc95d3}} R{{formula:f2df329f-0a10-4700-b1ff-0334238dfdc1}} and PSP at 23.2 R{{formula:51ed25e1-070e-45a5-a448-fdb5cfcaf3e7}}. The theoretical and observed solar wind speeds increase rapidly within 3.3 – 4 R{{formula:b828519b-af7e-4996-b33a-69c0b944434f}}, ranging from 96 – 201 km s{{formula:d0d50c9b-4ebd-40d8-844d-dcd7317f897b}}, becoming supersonic at {{formula:4d4c3a44-c28f-46d5-a859-660dc6760660}} R{{formula:c443f032-0054-41bf-9175-1a0302867d6a}}. Thereafter, the theoretical solar wind speed increases gradually with distance, and is consistent with the PSP speed of 219.34 km s{{formula:f6189113-0b32-491b-b40f-53fa1326fb72}} measured at 23.2 R{{formula:64e4aeb6-edcd-4418-ba00-e9799517f8a5}}. PSP and Metis/SolO measured a slow solar wind stream emerging from the southern coronal hole near the equatorial region {{cite:6c953203dbc260943f3fb846ec7f9e3a011fcccd}}. The theoretical Alfvén velocity increases initially to a peak value {{formula:8d411869-55e3-4e09-ac97-15d7a2fff359}} km s{{formula:b6258b69-bd2f-4d7b-9e80-6496604ce29b}}, and then decreases gradually to be consistent with that measured by PSP. In this model, the Alfvén surface is located at {{formula:6f1118ad-f1a8-452a-bcd1-0b7a0db10607}} R{{formula:896d6ce1-3d13-4d29-94ca-548f1ab6ae90}}. The theoretical 2D outward Elsässer energy and fluctuating magnetic energy are larger than the corresponding slab components, the latter being close to the corresponding PSP-observed results. Similarly, the theoretical slab fluctuating kinetic energy is also consistent with the PSP-observed kinetic energy at 23.2 R{{formula:db657ede-51d7-4844-83f5-e5e139efd495}}. The theoretical slab normalized cross-helicity is close to the PSP-observed cross-helicity ({{formula:11360820-8226-4553-9944-cd6d7178bac9}}), indicating that PSP observed highly Alfvénic slow solar wind in the inner heliosphere {{cite:42871b7833e9d1b3d3f20653735d252a6a33b266}}. The theoretical normalized slab residual energy is similar to the PSP-observed residual energy ({{formula:9ddcb440-6118-4ec6-ab27-94556feaf77f}}). The theoretical 2D correlation lengths corresponding to outward Elsässer energy, and to magnetic field and velocity fluctuations, exceed the theoretical slab correlation lengths, and the slab correlation lengths are consistent with those observed by PSP. We derived the two sets of equations in a conservation form, including the super-radial expansion, from the 2D + NI/slab turbulence transport equations that were derived for the unidirectional Alfvén waves {{cite:0494ec86b50677b568d14a0ee9d807308ac5b14e}}, {{cite:d6efef527b8ecbb05e3f7e07c5446fd053a26339}}. Both sets of equations resemble the WKB form in the absence of the dissipation term, the mixing term, and the turbulence source {{cite:ff3851e74fb3e59f510da4c83c27ebf6070c4e47}}. We calculated the theoretical 2D and slab turbulence pressures, and both decrease with increasing distance. The theoretical slab turbulence pressure is similar to that observed by PSP at 23.2 R{{formula:0e609dc1-c12b-4999-8974-45f624ce2bec}}. The proton temperature is assumed to be {{formula:7cdac4e9-59a2-469b-a6ea-85fa7896c405}} K at 3.3 R{{formula:20b30b4e-dc60-446a-a583-a50593509ae9}}, increases to a maximum value of {{formula:f6f70eb4-59e4-4972-b3b4-6a93dcfadbbd}} K, and then decreases gradually with the expanding solar wind.
The PSP-measured temperature and the predicted temperature at 23.2 R{{formula:cbb413b8-8912-479d-a1e4-f56ccf36945a}} are very similar.
d
9417856658a4fcdc824968a29812bd12
The issue of the perturbing noise {{formula:6cf4497f-2a05-4df8-bbbd-183bff22fbc9}} is twofold. Most substantively, sparse approximations to precision matrices—including the Vecchia approximation introduced above—crucially depend on the screening effect, which is the phenomenon by which predictions depend very little on far-away measurements when conditioned on nearby measurements {{cite:5599dd23825a9319503a52bba7e5f36e7975ab95}}, {{cite:0669f4b7e1f6faa140346beab0a9e1bd599d5b90}}. Additive white noise, for example, severely reduces the degree to which this phenomenon occurs {{cite:0669f4b7e1f6faa140346beab0a9e1bd599d5b90}}, {{cite:157eb7a94c36e726d4d013f12de04fc3b0e69476}}, and so if such approximations are applied directly to the kernel with the added diagonal perturbation, their accuracy with respect to typical assessments such as the Kullback-Leibler (KL) divergence is significantly lowered {{cite:157eb7a94c36e726d4d013f12de04fc3b0e69476}}, although this admittedly does not necessarily imply that the resulting estimators one obtains by maximizing the worse likelihood approximation are in any sense "worse" (see {{cite:64dd87d33b87dba93c4b1b230569484ef567da66}} for an example of this phenomenon). In our experience, however, particularly with singleton prediction sets (see {{cite:64dd87d33b87dba93c4b1b230569484ef567da66}} for discussion), point estimates are indeed materially worsened in the sense of being farther from the MLE and having a lower terminal log-likelihood.
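To make the role of the Vecchia approximation concrete, here is a minimal sketch of the approximate log-likelihood for a zero-mean Gaussian process. The ordering, kernel, and function names are our illustrative choices, not the implementation used in this work:

```python
import numpy as np

def vecchia_loglik(y, locs, kernel, m=10):
    """Vecchia approximation: log p(y) ~ sum_i log p(y_i | y_{c(i)}),
    where c(i) holds at most m nearest predecessors in the given ordering.
    Each term costs O(m^3) instead of O(n^3) for the full likelihood."""
    n = len(y)
    ll = 0.0
    for i in range(n):
        if i == 0:
            mu, var = 0.0, kernel(locs[:1], locs[:1])[0, 0]
        else:
            # conditioning set: nearest m previously ordered points
            d = np.linalg.norm(locs[:i] - locs[i], axis=-1)
            c = np.argsort(d)[:m]
            K_cc = kernel(locs[c], locs[c])
            k_ci = kernel(locs[c], locs[i:i + 1])[:, 0]
            w = np.linalg.solve(K_cc, k_ci)
            mu = w @ y[c]
            var = kernel(locs[i:i + 1], locs[i:i + 1])[0, 0] - w @ k_ci
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

def exp_kernel(a, b, s2=1.0, ell=0.3):
    """Illustrative exponential (Matern-1/2) kernel; a: (p,d), b: (q,d)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return s2 * np.exp(-d / ell)
```

Applying this routine to a kernel with an added diagonal (noise) perturbation is exactly the setting in which the screening effect, and hence the approximation quality, degrades as described above.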
i
94d6b50aa33fbd27a2e88340deb7d7ff
Experimental realization of twisted bilayer graphene (TBLG) {{cite:d4d0c605f55f8fac35402b3c56c10b0c5a15ca5f}}, {{cite:cf24a0526e7fac6dbdd6794c6385831f51f60e7a}} featuring the flat electronic bands at so-called "magic angle", as predicted in Refs. {{cite:a15266c3e8614222557cc51e9ac2bd0fd6f7b56b}}, {{cite:793a69d394aaa0aac94601cadbe36e74a46648f9}}, {{cite:cb1d5b2fb1803cc00e51a2fdd7f38b6cd3baec8c}}, prompted a great deal of research efforts to understand a rich phenomenology of this system in both theoretical and experimental condensed matter communities. Quite generically, the band flattening implies that the kinetic energy of the electronic quasiparticles quenches, and the density of states is enhanced. The effects stemming from the electron interaction thus govern the collective behavior of the electronic degrees of freedom in the system. Consequently, a plethora of interaction-driven insulating and superconducting phases has been theoretically predicted {{cite:d9c5ec2c0616264a0e62a971a5b7dc9f377a9ce5}}, {{cite:0290275664f58c69335cfde5999df1a77aa86d0b}}, {{cite:6531021bbbb90029d93bafc469b755dcc7115fa8}}, {{cite:e25c6d3e96a04f7e1acc78b734e9b7cc58b815fb}}, {{cite:56c3fc1fdc04cc64c3446ba006c3c6a731bf4dc1}}, {{cite:70d18415e2106075fb1f30089d91f7b5c2de7043}}, {{cite:6cdef1c09cdb9384269597512170f0f784947e03}}, {{cite:b269b58b17283375707ac7bbb3acc6e4b8e99dd8}}, {{cite:557e2f66c132c7e25ff7cc07d37b655a815d5bed}}, {{cite:7b1ea2cd67a6071f9d52c97cd49aa5926d5a96c0}}, {{cite:567832692f84d29d73007263d170e6c0bc01990f}}, {{cite:0e6afbf4ff396d34c37085b3fa1299a14c5b6816}}, {{cite:ee568b945756e16d71e15f9a1814e9f6db736a03}}, {{cite:4923bf653af4f8dc61f58a067a750daf4c71a639}}, {{cite:0f00259aa537f54e4037aae2515073b7451ea589}}, {{cite:bb942a62da8ceaab5e88a1f00d23f7c1ce49e06c}}, {{cite:b8fb57e08e3b11673394cb39806a5b802f821526}}, {{cite:7bb5c03630ec012c256015821ec73f50db2880aa}}, {{cite:6b647541be5965124f04a7b28a3ce8ca9a2854e1}}, {{cite:9b1a11862577af5f928fee7cc04ed27c10477de9}}, {{cite:cc49bfc4de13117ae69cd68ef943d3f805bc7525}}, {{cite:66b5703f6f581dd66b8c47ad7186a56e0c87ebbc}}, {{cite:27889a06f48e063bd2d127c9c36339fdf12e9d1c}}, {{cite:515297cf1139f254d5b6e2c0ca9de09f94837f89}}, {{cite:543c9cc8bd1983e74b25118c7336f608153b3356}}, {{cite:87ed0b22078722be3a27c9df5bdef5c31b23f460}}, {{cite:82cb0496b9cf9cfa7e15daa631b73a18345de6ad}}, {{cite:07efd1b2c4e80b7907b6dcbe1589d6833ec036b1}}, {{cite:7215a30b5d13296e9cc916f28b2f3ba6ce10881e}}, {{cite:b2848a41ce1f3451c6aa9ee0c23aeba536dcaecc}}, {{cite:f1d63281f02f9bd00f0ab4e1cb85a6c54f459ce8}}, {{cite:7660a8f7e89565a3db2ac38c89313529b3d08872}}, {{cite:4f2013da7c5a5435a25871fa7338c2d8e3eaf0f0}}, {{cite:4f3ad9218910b42588de096f2110bf3746d4a16f}}, {{cite:d350a7d69e3561c9e3d0e6e745fbf9e4d23575ba}}, {{cite:c4ce3f3dccbe4140e4c20ee7dca716b16c535963}}, {{cite:2dac580f83f2f3b755eb4b2d2169117c3a1eebd9}}, {{cite:339767e2c6e1d32c992fcfd8ccf9eb694c4cfe38}}, {{cite:f7388c233244df7b70878c71263558b222ed3c3a}}, {{cite:498ebf0876434de2bb696a49e459c9f4c4b9c4b0}}, {{cite:c0a5af065639eccbe3ba458051348a3345d8c3d0}}, {{cite:c40110461220cda0f439cc66300a5517e46665d3}}, {{cite:07ed3af7c881978ef93b1678a275d8cef97565de}}, {{cite:76db9e274f6bdb5dd45327bf3b5942084f38f10a}}, {{cite:50b565ac76beb720150a7d1f27d884f3f73ccc58}}, {{cite:2bf036bf1e8d9101b045ccb2b9fd68c233d32818}}, {{cite:36e59bde6f149c16f55443be11a01bcf0b1a8e2e}}, {{cite:a508ca8f3cd5178ceaf374a8a58b9c28273cbb2a}}, while the experimental efforts are on the way uncovering this rich landscape of electronic states  
{{cite:b563b39c8084a6d83223a1c2282a2e903557cc5b}}, {{cite:38cb0d8f8631f154778d6f2f4ffb99e1d21827d3}}, {{cite:47aecdd9e5afab001ed548d64a20e27cf0c65f79}}, {{cite:289ee3606df4d769d12f6edb1fdec91025472710}}, {{cite:2d79ac7fa8d662ab6af3362548ccb8704a0f1a62}}, {{cite:d6bbf12029f4bd28c7a4e68fdf7db0cfb628f19b}}, {{cite:401d017a3f8fabd2e74feb18109322ae04c5f0e6}}, {{cite:fabe0421758ca5b4657de69a0d65caf24c7d56db}}, {{cite:fe65f5a1d2c6d9d21d1ddfc723ee76a8f807ff22}}, {{cite:ef531798a72ffb04486b9a512cb70b0c52660c2f}}, {{cite:37fbe226b6f8ad55d81630dad67f8ebb23159a6d}}, {{cite:b9404462be140e1359aa4ad5f2c54cca3b451aa3}}, {{cite:182d6fdf42673619784d11146bfb4bf91e8c0b97}}, {{cite:bca5bebe1bb00f002442d394d85bcc5eec5857e5}}, {{cite:4daecddab495090f2404682d7d1e2e6d09fcf372}}, {{cite:55d2de0716c996b7a1f2c0402a4aa7dbf0ac8592}}, {{cite:751bc7989ebf350d9e766146b1806e7556b19c08}}, {{cite:c6d4d353100cd107fe59fb43205b5582421f18fa}}, {{cite:6cf96d97b0bb9e997c84d893b8ed784e6e034a85}}, {{cite:246a0cc3bbf8cffc2b77326671a89c52d47c7f95}}, {{cite:4da586741e91a18557f915f819ba6cac83621d05}}.
i
d69d3e5fe66fb5f2a60cbce157a96bb3
Systematic errors in our dynamical analysis leading to an erroneous calculated value of the mass of HD 206893 c are also most likely a remote possibility. Our calculation of the dynamical mass uses a combination of stellar astrometry that may have systematic uncertainties that are not well understood, as well as RV measurements that are impacted by stellar activity {{cite:c30031b631deb879c1678ff54e83cc9739981ad0}}. However, Table REF shows that excluding the RV data from our fits gives very similar calculated values for the mass of HD 206893 c (12.3{{formula:da8be3d0-85d8-4190-948c-522c9a55e000}}  {{formula:b93fb7ab-6fe3-477e-8c3a-31991ac5ee7b}} versus 11.6{{formula:9f907fe6-3676-41ed-a289-421fb9b55285}}  {{formula:0d89082d-6a09-411b-8a88-076e6db70afd}} with the RVs excluded). It should be noted that in this analysis we did not model the stellar activity in our orbit fit, which may contribute to a slight bias. However, it is unlikely that such unknown systematic errors within our analysis have significantly biased our calculated masses, especially given that our measured dynamical mass of HD 206893 c is very much consistent with the predictions presented in {{cite:c30031b631deb879c1678ff54e83cc9739981ad0}} and {{cite:d6db0d6ddd1e00ee5e3143fb4f055422aa8bcb56}}. At most, these systematics could have led to underestimated astrometric uncertainties, but this would not have changed the overall conclusions of this study.
d
689f0d6c80fdaf69eb80cb39dcf15b0f
Comets show a diversity in composition, with a factor of 10-100 variability in the volatile abundance {{cite:2b7072cc04253534c54bee546d4e3bfeb9519531}}. However, in general this diversity does not appear correlated with the dynamical category. An exception is for CO, whose abundance relative to water appears depleted in Jupiter family comets (JFC), by a factor {{formula:62e998ac-f086-478b-88a4-d56648876717}} 4 on average compared to Oort Cloud comets {{cite:c1ba94c74486a097af50db618e00555c3b8c64a5}}. As we show in Fig. REF , the CO depletion in JFC compared to other comets is also visible if expressed as a specific CO production rate, i.e., the production rate per unit area {{formula:245c7f4c-f84d-498e-87d8-f3618db2065e}} , where {{formula:6f113fde-c4d2-4474-a0fe-4b56bce6c263}} is the equivalent diameter. The top panel of Fig. REF shows this specific production rate, multiplied by {{formula:9e037787-4c79-4c5d-a522-a9d18e8cf1fb}} , as a function of the heliocentric distance {{formula:13bbef2e-da10-4115-ba0b-8c7b9599ed95}} of the measurements. As demonstrated for C/1996 B2 Hyakutake and C/1995 O1 Hale-Bopp (96B2 and H.B. in the figure), the scaling by {{formula:ae059d9d-e25b-467e-b94d-5ff195de5578}} corrects to first order for the distance effects and allows comparison of measurements at different distances. In the bottom panel of Fig. REF , this distance-corrected specific CO production rate is shown as a function of {{formula:c15bdac5-30da-44f5-b9d1-2ad1a6f10f8e}} . In both panels of Fig. REF , JFCs clearly appear CO-depleted with respect to Oort-cloud comets (OCCs). As JFCs are thought to originate from the transneptunian region, in particular the Scattered Disk {{cite:7c3f6ae517e80ae47720b592e53cd52a38908164}}, {{cite:b46324e3f563d8e2d652d58f3ae6bcd46c92e285}}, and most of them have diameters {{formula:02a9457e-714d-4fe0-be38-85d0c0d3bee5}} 5 km, our calculation that only Kuiper belt bodies larger than 4 km can retain CO over the age of the Solar System may provide a natural explanation for this behaviour. Interestingly, the observed cumulative size distribution of JFC may show an excess of comets with radii 3-6 km {{cite:b05737bec94ca6d4c426f608ac5b368811834120}}, similar to the above number, and that could account for the diversity of CO abundances within JFCs, although statistics are not sufficient to discern a {{formula:54dbbd71-b0c8-4ad7-abef-6d55b5140f8f}} vs {{formula:c91f3e8a-9f2e-4fa0-8399-026aefbbfc6e}} trend within the JFC group. While Fig. REF is reassuringly consistent with our sublimation calculations for the Kuiper Belt, we note the following two caveats: (i) the lack of {{formula:b7dcbb1a-df0b-4af0-aca8-6d4212d3fedd}} 5 km JFCs does not allow us to check our prediction that those objects would be less volatile-depleted; (ii) the low CO production rate of JFCs may also be related to their repeated perihelion passages on their current orbits. We note finally that, with the notable exception of 29P/Schwassmann–Wachmann 1, Centaurs, which are dynamically associated with the Scattered Disk and JFC, also appear CO-depleted compared to Oort-cloud comets {{cite:a5ebf73785cbbf648dc3dbd37fb2bd38574ce523}} in spite of their large {{formula:256779a8-fbe8-4ec4-8ab8-0fa44d7bb55f}} 100 km size. This is a probable consequence of increased outgassing over their {{formula:9d2d0125-2f7b-417e-9d61-9cd8193171cc}} -{{formula:53e232b2-82c6-408b-b224-bc0a906725c0}} years lifetime orbits at giant planet heliocentric distances. {{figure:45c4dd76-9446-46c8-a13b-ffbe6305226b}}
r
cdee6223829c0e4c1efc57bd34bab3ff
These facts motivate the exploration of both the scattering and the absorption of a massless scalar wave by a charged dilaton black hole, extending previous analyses {{cite:30309af6c14e81c7f19654599050ba15d567ddb7}}. In Sec. II we introduce the EMd gravity and the charged dilatonic black hole, including the Penrose diagram for the charged black hole, which is later fully explained in Appendix A. One of the goals of this study is to determine the classical differential cross-section for a massless scalar field and to explore the backward glory effect numerically for different values of the dimensionless charge, {{formula:e8b31677-6288-435d-a6f4-6234e6804d92}} . The latter part is covered in Sec. III, and in Appendix B a semi-analytic approach to the glory effect is contrasted with the full numerical differential cross-section, along with the differential cross-section of the backward glory effect. Sec. IVA is devoted to obtaining the absorption cross-section by means of the partial wave method. In Sec. IVB, we numerically compare the pattern associated with the total absorption cross-section in terms of the dimensionless coupling {{formula:16e6118a-1b3f-410d-8470-6345826b245f}} {{cite:d201cbc69407e1d6775d1c445bcbfeda3e940e55}} for different values of {{formula:a1c4bb97-73e1-4b4c-90e6-430b9c20ed58}} . In addition, we employ a numerical integration method which allows us to reconstruct the differential scattering cross-section for different values of {{formula:d038638e-1213-4041-9b60-24ee32fcc2da}} and {{formula:6d9d81be-62a4-4b11-b5fc-58a89339132d}} , as is shown in Sec. IVC. One way to improve the previous analysis is by looking at the behavior of the absorption cross-section of the massless scalar field in the limit of high energy {{cite:79336cb83569d1d17bedbc45b12008d4d7f37e23}}, {{cite:3bd54792c1476410007850d8a9bcf160cd6942c7}}, {{cite:ed3cd6e8030b9a701bb13bfa11930a72ff2e7900}}, {{cite:3e449f5c766d9cb2273e524ccb1f2e71d626608e}}. In Sec. IVD, our aim is to obtain the absorption cross-section in the high-frequency limit. The numerical analysis reveals the existence of different complex structures in the total absorption cross-section, also known as the fine and hyperfine structures {{cite:3e449f5c766d9cb2273e524ccb1f2e71d626608e}}. In Sec. V, we extend the study to the absorption of a charged massive boson field by a charged dilatonic black hole, including a full analysis of the behavior of the cross-section in the limits of low and high frequency. In Sec. VA, the findings in the low-frequency regime are confirmed by numerical simulations, which point out the existence of two different phases and a critical velocity parameter, {{formula:84b8e8d6-6c17-420c-8209-40863e09f37e}} , indicating the transition between both stages. To assess the relevance of these numerical analyses, we inspect several models of dark matter, with their typical average velocity {{formula:c70dfc0d-cf2a-4e0d-94c0-9190611b9c37}} and masses {{formula:d29c8430-f33e-4e9b-91b9-a1a03cee1e58}} , and determine for different kinds of black holes (stellar, intermediate, supermassive, primordial) the critical velocity parameter; by doing so, we conclude whether these models are compatible with a subcritical phase {{formula:369fdbc3-7f30-44e5-b581-4b0c9c9d1417}} or a supercritical phase {{formula:f202602d-1639-4d23-b622-1fadd2aaab4c}} .
In Sec. VB, we employ numerical simulations to obtain the cross-section in the high-energy limit, but only for the massive scalar field case, showing how the geometric contribution is essentially corrected by an oscillatory behavior. In Sec. VC, the numerical simulations for the differential scattering cross-section are displayed. In Sec. VIA, the reflection coefficient is obtained, finding that there is a superradiant phenomenon and extending previous results in the literature {{cite:30309af6c14e81c7f19654599050ba15d567ddb7}},{{cite:d47525096094118f94f9b1f299afdb0a6bb082bd}}, {{cite:216c97b24aa889967eaf5b155f8156bc28292610}}. In Sec. VIB, we explore whether the superradiant scattering implies the existence of a superradiant instability, showing that the unstable modes lead to instability if there exists a potential well outside the horizon in which these modes remain enclosed. Moreover, we obtain a lower bound on the quantity {{formula:ac5168b2-b996-4091-86d8-48f07ee67f89}} for the instability to occur, and the reflecting-mirror boundary condition provides the proper mechanism for the system of mirror plus black hole to become a charged black hole bomb. Finally, the conclusions are stated in Sec. VII.
i
63b5db37bf85cb2724144dd48b8efaa1
When tuning the hyperparameter {{formula:b028b29c-d377-4a49-a075-1c27b190614a}} towards 0, {{formula:573450c5-1523-4d65-97ef-25654eb70d5c}} can be turned into a one-hot vector with {{formula:14244537-6f8a-413a-866f-540372cc663e}} , which then degrades to the case of contrastive distillation as in equation REF . In practice, the choice of an optimal {{formula:cf2b8404-6efe-494b-be90-18decc8b4138}} can be dataset-specific. We show that a higher {{formula:877051a4-b9b6-4d0a-a9ad-112225b46104}} (with `softer' labels) can actually yield better results on other datasets, e.g., CIFAR-10 {{cite:69aa0cfbf6074fe05759d6567cd7d6f41af7cf65}}.
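The limiting behavior is easy to verify numerically. The snippet below is our illustration (not the paper's code): a temperature-scaled softmax collapses to a one-hot vector as the temperature goes to 0 and flattens as it grows:

```python
import numpy as np

def soften(z, tau):
    """Temperature-scaled softmax over logits z; tau -> 0 approaches a
    one-hot vector on argmax(z), larger tau gives 'softer' labels."""
    z = np.asarray(z, dtype=float) / tau
    z -= z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

z = np.array([2.0, 1.0, 0.5])
print(soften(z, 0.05))   # ~[1, 0, 0]: the contrastive / one-hot limit
print(soften(z, 5.0))    # nearly uniform: softer targets
```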
d
eaf9169b1d7ad8ce386e3748f742c288
Recall that the reason for employing the Huber function in (REF ) is to provide a smooth approximation to JACoB, avoiding the original nondifferentiable objective function, which may cause BCD non-convergence problems. But can the smooth approximation guarantee convergence to the optimum? By invoking an available BCD convergence analysis result {{cite:1fe7cfd95fa0be0f4eb3adf6063770b98750bf9a}}, we have the following claim:
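For reference, the Huber function in its standard textbook form is the following (the exact parametrization used in (REF ) may differ from this statement of ours):

```latex
h_\delta(u) =
\begin{cases}
  \tfrac{1}{2}\,u^2, & |u| \le \delta, \\[2pt]
  \delta\left(|u| - \tfrac{\delta}{2}\right), & |u| > \delta,
\end{cases}
```

which is continuously differentiable everywhere and approximates the nonsmooth absolute value arbitrarily well as the threshold shrinks, which is what makes the BCD convergence analysis applicable.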
m
76537005d8c75fc73f8e11a22e9da685
The study of BTZ black holes in noncommutative spacetime, which suggests the possibility of a gravitational Aharonov-Bohm effect, is a case in point {{cite:89c1951cfd7b11aa0cde01c6f5cd561945c71684}}. Also, the possibility of mimicking the BTZ black hole properties in higher dimensions has been studied in {{cite:9b6379aefa651a92456b7f84a847e339e671a80f}}, {{cite:e8248aa7f701c32c64a10bb07f8f97d47f61f064}}. Besides, the authors in {{cite:2972ffe09d566dbf53ac65ad8cb442f51de050fa}}, {{cite:ebf9928fc6f48374f32f95ad21cca95a0ff405c5}} have investigated the existence of the {{formula:70df6195-c1aa-468b-bdce-af1f459935c6}} dimensional solutions in the presence of nonlinear electrodynamics. Three-dimensional black holes exist not only in the context of Einstein's gravity, but also in modified gravities such as Lifshitz gravity {{cite:286fc9de7603f8fb580263222ed87d08bd670cb2}}, dilatonic gravity {{cite:618548774bd31b5691042cc2e55a32357822c80b}}, {{cite:1cea3ca20c1988fde0522b55aaa8f75cc24bb7f9}}, massive gravity {{cite:d4790b92f2931dde890fbfd4bad461f925093903}}, gravity's rainbow {{cite:2947281d025a73133a91a67392bbf2aa83c1ea70}}, and massive gravity's rainbow {{cite:98ddcc034c6f1abf4f7336d702076887a4aa72e8}}, which admit similar solutions.
i
c55df030ffc1cdfdad40af9c79c5cc41
In Table REF , we see the performance of various models on the Abstract Scene dataset, for the metrics described in Sec. REF . We first note that even our baseline LSTM-RNN model (Image2LSTM+atten) shows a very large error reduction compared to the results presented in {{cite:6535c44c3709fdf9a0e0505440f74b066e31958b}} (first 4 rows of the table). This highlights the importance of an attention mechanism in these tasks. For the models trained with cross-entropy, the Transformer model shows an additional remarkable improvement over the LSTM-RNN model, across all measures.
r
914f8cfdf67066101380ca8057ec25fd
Characterising and classifying galaxies based on their optical morphologies is not straightforward. A number of different approaches for quantifying galaxy structure and morphology have been developed, documented, and tested in the last few decades, each designed with specific applications in mind. The general goal of all of these methods is to obtain a quantitative measurement, and an error budget, of the morphological properties of galaxies that are easy to understand, use, quantify, and replicate. Contemporary examples include visual classifications (e.g., {{cite:9e2ca63b2c9541577fec130d08ed9205c48a9b92}}, {{cite:7cbde2193b4dfe435baebf85261731f2503c21ae}}, {{cite:a261ae4580c22c1f5ec8fc3a45ebbde99f38af0d}}), non-parametric morphologies ({{cite:b954eb33f1a16c57df290f6356fbfbbfaf19ed6b}}, {{cite:c261ddbea35555318b77cc63705ef761ec4b7833}}, {{cite:d71a3455ddbdce809c68beb966be9540bd437245}}), one-dimensional intensity profile fitting of a galaxy's light distribution, either treating each galaxy as a whole (e.g., {{cite:7ccd537edbfd761fe1e4c80c6f947e994e706828}}, {{cite:b73f68eda4bfee33d55e444f465070af225208eb}}, {{cite:b1584e3f922e01eab84e1ad6bae4b54e7b8d791b}}, {{cite:af45438f83e27fb22c662d1b1ae04243a4abf8bf}}) or decomposing it into two separable components (two-dimensional surface brightness fitting, e.g., {{cite:136895158d50bc960d78805d77c221d1f48af52b}}, {{cite:2173f47d508a88064ba72d052bc117c27ded526e}}), machine learning techniques (e.g., {{cite:015bcddc8ba5996713de12ad0824db9a3334a1a5}}, {{cite:073fbf10fdaa70add3608495db7dcabceffd165a}}, {{cite:a66eae219d4166990466aae073307ef655e8be54}}), and structural kinematics ({{cite:b67ab7ba5f4c2799326e1f1c0b55f1701699d4ea}}, {{cite:89217ae0e4095213d4d42c806c9e2f1e97341544}}, {{cite:45cb84486eaa427c09b5ef855a72e76065582abf}}). The increasingly challenging nature of observations of fainter and more distant galaxies makes defining and distinguishing between different structures a non-trivial task. Traditional visual classifications also become ambiguous for many objects, especially for early-type galaxies. In addition, techniques need to be able to efficiently deal with the ever increasing sample sizes of galaxies in contemporary and future all-sky surveys, with increased statistical accuracy. Light profile fitting is a quantitative, generally automatic or semi-automatic, and often faster approach compared to the qualitative visual classification process. This is especially important for statistical approaches using the very large datasets we are expecting from upcoming missions in the near future.
i
8c94402636209c125d1f59e610d9f78d
We visualize some examples of perturbed images and the shift in attention (using CAM {{cite:2af2c63cc61eb34eb20b327947f5e7b32ce23f7d}}) for images misclassified from clean ones in fig:qualresult, for the Res152 multi-object classifier. It can be observed that the attack changes the focus of the victim classifier to irrelevant regions, leading to highly successful attacks.
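For readers unfamiliar with CAM, a minimal sketch of the attention-map computation is given below; the array layout and names are our assumptions for illustration, not the visualization code used here:

```python
import numpy as np

def class_activation_map(features, fc_weights, cls):
    """CAM: weight the final conv feature maps by the classifier weights
    of the target class, then rectify and normalize for overlay.

    features:   (C, H, W) activations of the last conv layer
    fc_weights: (num_classes, C) weights of the final linear layer
    cls:        index of the class to explain"""
    cam = np.tensordot(fc_weights[cls], features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0)                # keep class-positive evidence
    return cam / (cam.max() + 1e-8)         # scale to [0, 1]
```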
r
760a0b692694bded612bc832037dea83
All of the mentioned models, and others, share one common characteristic: they perform better on higher resolution images. Image classification tasks are thus easier on high resolution images that are vivid and free of noise. The problem is that in some cases we cannot have high resolution images, due to the age of the images or to bandwidth and computation limitations. When we apply these models, or high performance models such as Inception-v3 {{cite:5257cb625796224d4e7e64abb40e52f58fd2d04a}}, to low resolution images, we see a degradation in performance. As mentioned in {{cite:8332a46d4c2242d086b47abaa59304e25753e257}}, poor and low image quality has been recognized as an important aspect influencing the performance of deep neural networks in computer vision tasks, and various factors influencing image quality have been considered in the context of the classification accuracy of deep neural networks. In this paper, we address this problem by developing a new image classification model that utilizes the idea of inception {{cite:59bd260a211ca19e9230b0b86cec26410543a414}} to extract as many features as possible from images using different kernels, combined with residual connections {{cite:141c7248759fde83fed8ca856b6d1c2cf5205fda}} to mitigate problems like vanishing gradients and the curse of dimensionality in deep neural networks.
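One possible form of a building block combining these two ideas is sketched below. This is a PyTorch illustration of ours; the branch widths and kernel sizes are arbitrary choices, not the architecture proposed in this paper:

```python
import torch
import torch.nn as nn

class InceptionResBlock(nn.Module):
    """Inception-style block with a residual connection.
    `channels` must be divisible by 4 so the concatenation matches the
    input width and the skip connection can be added directly."""
    def __init__(self, channels):
        super().__init__()
        branch = channels // 4
        self.b1 = nn.Conv2d(channels, branch, kernel_size=1)
        self.b3 = nn.Conv2d(channels, branch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(channels, branch, kernel_size=5, padding=2)
        self.pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(channels, branch, kernel_size=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # multi-scale features from parallel kernels of different sizes
        out = torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)
        # residual add eases gradient flow through deep stacks
        return self.act(out + x)
```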
i
2180544007c540e0282c0e0faaa3a160
One potential application of our perturbative construction of the excited state zero-modes would be to use it to reconstruct a bulk scalar field in the dual theory perturbatively. The starting point of such a construction would be the work of {{cite:19048076ca978d7341d46cf4272791e704b5aac4}}, where the zero modes are identified with bulk operators localized on the RT surface in the following way, {{formula:04e47683-733f-4960-b39e-04f18e51ecb3}}
d
a6b830b220c1a99d01acea0e503f76ba
M-pSGLD promises accurate UQ by leveraging the complementary benefits from sampling multiple posterior modes and additional Bayesian exploration of each mode {{cite:017081db6b0b92ef63795f4fd331501ff9c11b95}}, {{cite:27177afe47acb316651283c3d6ca225d3d018450}}, {{cite:bd71827a48820eb5b5d28db8e3d24490f3724ba6}}. However, further research into SG-MCMC schemes is required before routine application in practice: In our experiments, a single MCMC chain (both pSGLD {{cite:31754888afdb213ef7d1a69320f9711eb65fc6ac}} and NUTS {{cite:84850a3655b487a1c980938fdb387f7ce18b73d3}}) sampled a single posterior mode only. Hence, the development of methods that sample multiple posterior modes with a single chain, e.g., by leveraging cyclical step size schedules {{cite:46bd85f5b984e5dec9ff757f0f0fe15cb9d55462}} or parallel tempering {{cite:53f27f53735cee551bf546fab4fe6f2513a521b2}}, is important. Additionally, automatic hyperparameter tuning with a computationally efficient metric could improve the SG-MCMC efficiency; e.g., the popular Stein's discrepancy {{cite:390ca51ae901802c5281393007f88d776c0dfecd}} scales quadratically with the data set size {{cite:7e9ae57f58cf1308e3ba2c679a3c4df8531ff107}}. Finally, recent SG-MCMC samplers such as AMAGOLD {{cite:46bd85f5b984e5dec9ff757f0f0fe15cb9d55462}} or SGGMC {{cite:ce7bc621dac1b2072b3204d64ad211fd01136ccf}} include infrequent Metropolis-Hastings acceptance steps to avoid the bias of SGLD {{cite:31754888afdb213ef7d1a69320f9711eb65fc6ac}}, {{cite:7eba00c8a55c4fed268e6491ef8b3cace2bfef1a}}. Consequently, these samplers use constant learning rates, which may counteract the increased training time of SGLD that results from its small learning rate requirement {{cite:29a07c05cea68234230e56a08614ce176615d251}}, {{cite:46bd85f5b984e5dec9ff757f0f0fe15cb9d55462}}.
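For context, a single SGLD update takes the following form (Welling and Teh's basic rule; pSGLD additionally preconditions the gradient and noise, which this sketch of ours omits):

```python
import torch

def sgld_step(params, grad_log_post, lr):
    """One SGLD update per parameter tensor:
        theta <- theta + (lr/2) * grad log p(theta | data) + N(0, lr).
    `grad_log_post` holds (stochastic) gradients of the log posterior,
    matching `params` one-to-one; minibatching and step-size schedules
    are left to the caller."""
    with torch.no_grad():
        for p, g in zip(params, grad_log_post):
            noise = torch.randn_like(p) * (lr ** 0.5)
            p.add_(0.5 * lr * g + noise)
```

The injected Gaussian noise is what turns the optimizer into a posterior sampler, and the small-step-size requirement mentioned above comes from the discretization bias of this update.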
d
f462287272940c88e869bdad80e0bcd7
Figure REF illustrates the trade-off between model accuracy, model size and inference latency for three transfer learning datasets. For each model, we report results for top-1 accuracy, inference speed (in images per second), and model size (in number of parameters). One can see that eFUN models significantly outperform the competing models in terms of model size and inference latency, while providing comparable accuracy. Specifically, when considering the popular CIFAR-100 dataset {{cite:bc060d91eabf9fa7004ce1c6be9eb54632ffbfda}}, the eFUN model outperforms the EfficientNet-B0 model by {{formula:1b4884d1-988b-4a33-9a87-afaacc088ad7}} accuracy, while still being {{formula:47e38317-cb9f-4f5e-bc3e-7092d37da404}} smaller and {{formula:57d4aac1-eaba-434c-aa23-5b694f21fabb}} faster. Our eFUN-S+ model is lower in accuracy compared to EfficientNet-B0 model ({{formula:2b518fa3-e0ac-43aa-9e8f-563ab0fd002b}} for EfficientNet-B0 compared to {{formula:2385c9ea-ca0b-4a8e-b45e-2ca7b61e2b6e}} for eFUN-S+), but is {{formula:7772d4b6-0dab-4dc0-abf3-df8d5596f82a}} smaller and {{formula:15fe0770-9186-49ec-99bb-bcd80c99dfa6}} faster.
r
94cd290009916208cf60c3264211d518
where we have discretised and scaled wavenumber {{formula:377ce1f6-2074-4288-8eb3-252c077789ee}} . The transforms are implemented using the FFTW3 algorithm in C {{cite:ac4851bf4a6d9b39ee9921f5ac3b5128ddea81b6}}.
m
639b129739af51b23aebb725ca511491
Conventional motion-based frame interpolation methods only estimate one inter-frame motion vector for each pixel {{cite:44a8707654b24fb221e4d379db4bf860d8e157ad}}, {{cite:2496c1fc26213f02876c4d1b4946afe42e4a1d37}}, {{cite:fb597cf973af3d34e14972985527ae4808493290}}, {{cite:b788a3029cb7efd5da2fd7c639f0c5a17753e598}}, {{cite:f10ab96c1d403b56499e8b73fe05c11a58c5da89}}, {{cite:e31ccf8300eaa4290bfad37862f86e5edbf94e1c}}, {{cite:8026aa70ad24798707d1bd3cc354f7459c6a4060}}. However and as shown in Fig. REF (a), forward warping with such a motion field manifests as many-to-one splatting, leaving unnecessary holes in the warped result. To overcome this limitation, we model a many-to-many relationship among pixels by predicting multiple motion vectors for each of the input pixels, and then forward warping the pixels to multiple locations at the desired time step. As shown in Fig. REF (b), many-to-many splatting allows for more complex interactions among pixels, each source pixel is allowed to render multiple target pixels and each target pixel can be synthesized with a larger area of visual context. Unsurprisingly, many-to-many splatting leads to many more overlapping pixels. To merge these, we further introduce a learning-based fusion strategy which adaptively combines pixels that map to the same location.
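A minimal sketch of such many-to-many forward splatting is given below. This is our simplified nearest-pixel version with a fixed weighted average; the method described here instead predicts the motion vectors and weights with a network and merges overlaps with a learned fusion:

```python
import numpy as np

def many_to_many_splat(frame, flows, weights, t):
    """Forward-splat each source pixel to K target locations.

    frame:   (H, W) grayscale source frame
    flows:   (K, H, W, 2) K motion vectors (dx, dy) per pixel
    weights: (K, H, W) non-negative splatting weights
    t:       target time step in [0, 1]"""
    H, W = frame.shape
    acc = np.zeros((H, W))
    wacc = np.zeros((H, W))
    ys, xs = np.mgrid[0:H, 0:W]
    for k in range(flows.shape[0]):
        tx = np.rint(xs + t * flows[k, ..., 0]).astype(int)
        ty = np.rint(ys + t * flows[k, ..., 1]).astype(int)
        ok = (tx >= 0) & (tx < W) & (ty >= 0) & (ty < H)
        # scatter-add: many source pixels may land on one target pixel
        np.add.at(acc, (ty[ok], tx[ok]), weights[k][ok] * frame[ok])
        np.add.at(wacc, (ty[ok], tx[ok]), weights[k][ok])
    return acc / np.maximum(wacc, 1e-8)   # weighted average of overlaps
```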
i
19f8e926fabd759511e93294b9e48529
Our initial motivation was to establish a link between the presence of (possibly non-invertible) global symmetries in the bulk, and the fact that the dual boundary system is an ensemble of theories (as signalled by the failure of factorization of the partition functions). At least in the class of models we have considered here — 3d bulk theories constructed out of Chern-Simons or other topological field theories — it is easy to convince oneself that bulk (possibly non-invertible) global symmetries imply averaging. Symmetries imply that there are local or non-local topological operators {{cite:5c2aa991111f78fc33c092a5d0600dae9ee44361}}, therefore the bulk theory has non-trivial Hilbert spaces, hence there is a dependence on 3d topologies and factorization fails. It would be interesting to extend this argument to more general theories of gravity, in order to provide yet another argument that quantum gravities cannot have global symmetries.
d
098af72dac1149a0381dde81ab17536a
From the computational point of view, we trained the different networks on an NVIDIA RTX 2080Ti. Table REF gives the training time for the different scenarios for 100 epochs. As UNet++ is a larger network than UNet, it takes slightly more time to train: while UNet requires about {{formula:70ba441c-402b-46ca-b5fd-0c9821564363}} h, UNet++ requires about {{formula:c6af25fe-8997-46a8-9610-1bad90ccd030}} h. As the patch data set is small (around 210 MB), these times are not impacted by the loading of the data. Once the training step is completed, the neural networks are usually faster on CPU than on GPU {{cite:f9263497aee9c4cd26f25f55a0ba9da8c98d6648}} because the transfer from CPU memory to GPU memory takes time. The segmentation of the mosaics was therefore performed on an Intel CPU machine (i7-10610U). It took about 4 h per mosaic with an overlap of 30 pixels and 32{{formula:c5f16733-fff2-4309-912e-0d89926c20f2}} 32 patches. The total training and segmentation time is therefore estimated to be {{formula:b55f12b6-189b-4ea1-86a0-a14325f2a41f}} h per mosaic. {{table:5047c4d2-c842-4bb4-a55a-d2bdd21a5048}}
d
495b87f322293230cb1a7a87608e03fc
Here it may be pointed out that the quantum noise models given by Eqs. (REF ) and (REF ) are considered non-Markovian despite being applied to individual rounds of the protocol. The reason is that the memory in this context is with respect to an external environment, rather than to preceding rounds of the protocol. In particular, quantum non-Markovianity arises when the dynamics of the system-environment correlation causes the system's intermediate map (or propagator) to deviate from complete positivity {{cite:89d349e06e188549c3018ff9d799727754d334eb}}.
d
894214fff8b32e2fb0b07df4e3006dec
We find a remarkable agreement between the results obtained using the new formalism and QNS results up to similar orders in {{formula:3c264019-b454-4ab9-b0e7-7ba5c6d152fc}} , despite using only 500 RVS for all {{formula:48185798-c842-469a-92af-e438550ce536}} . The plots in the above figures confirm the agreement, for both real and imaginary {{formula:6389e8c6-18ed-4101-b5e6-97071ec3d434}} . The contrast with the older formalism, whose results approach ours as the number of RVS used for {{formula:993035c8-faef-4dad-8a5c-1ce8fa902d4f}} is increased from 500 to 2000, is especially obvious for imaginary {{formula:1abde356-fc81-4c8a-96e3-d7f0e4135ec3}} . This is promising, particularly from the perspective of the analytic continuation approach from imaginary to real {{formula:16a0f5b3-7414-45c1-8cf4-b8aa66e19c9c}}  {{cite:92588685d7804a1a0ea96348c6fdda3aa2766a7d}}, {{cite:35b98277cd8ccbd07ffdcca1c3463447bb6b1330}}, determining the EoS for real, finite {{formula:4218a491-591a-4595-9333-6bf5da846ab4}} . As expected, the old resummed results improved markedly when the number of RVS was increased fourfold, as evident from Figs. REF and REF above. Another significant takeaway is the faster convergence of the new formalism, which is manifested by the excellent agreement between the 2nd order unbiased ER results and the 4th order QNS results. The agreement remains equally sustained in both the real and imaginary {{formula:71d24631-8131-465a-bb25-25223c310f63}} regimes. The old resummed results, particularly those using 2000 RVS for {{formula:68593d67-08f7-4ba3-9a74-85d51ce36154}} , appear to agree with the QNS results from the 4th order onwards, where {{formula:e6277ab4-c256-4958-82ae-f0e82b8df6e2}} up to {{formula:f5e73e0a-2dfa-4171-90a3-694618de1f1d}} are taken into account. This agreement is exactly what we expect from a proper exponential resummation scheme: an ER resumming the first {{formula:d388ccd1-e2e4-4097-8ce7-661a708432c9}} point CFs should agree with an at least {{formula:d5026a67-e849-4f78-af28-886baa7ee9de}} ordered Taylor expansion, involving Taylor coefficients up to {{formula:017b9d4d-f2a1-4124-9504-a6d32f6b3e1e}} , resembling eqn. (REF ) where {{formula:3c2798f9-0d7e-4293-a00e-765aaa3bb27a}} is even. This obviates the need for expensive calculations of 2000 separate independent estimates of {{formula:5bd353fd-00c1-472e-806c-4066c287db7a}} . Our formalism moreover treats all the {{formula:6a5ed4c7-2594-4184-b66f-dfcd3a40c19a}} on an equal footing.
r
abac7618d3e4532b8df821f1aafef16a
The proposed method can be used for a broad class of models where the likelihood is unknown or difficult to compute. This is a great advantage in model development, as models can potentially be calibrated before their derivation is finalized. If the model is deemed worthy of further study, effort may be devoted to deriving its likelihood function. The proposed method may also be used in cases where such a likelihood is in fact available, or available up to some intractable normalization constant. In such cases the ABC approach may, however, be less effective than methods based on the likelihood. In those cases, other distances could be used; see for example Stein discrepancies for cases where the likelihood is unnormalised {{cite:f910fa99fd2f50573bcb6feb26280bf9f4aec4b0}}. Similarly, if factorization of the likelihood is possible and some factors can be evaluated, more efficient inference methods than ABC may be derived, relying e.g. on message passing techniques. Such methods rely extensively on the particular models and the structure of their likelihoods; thus, the gain in efficiency comes at a cost in the form of a loss of generality compared to the proposed ABC method. Finally, we remark that distance metrics such as the Wasserstein or the Hellinger distance could potentially be used instead of the MMD. However, future studies are required to assess their applicability for calibrating stochastic channel models.
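For concreteness, the MMD used as the ABC distance can be estimated as follows. This is the generic unbiased estimator with an RBF kernel; the kernel and bandwidth choices here are our illustrative assumptions, not necessarily those used for the channel models:

```python
import numpy as np

def mmd2_unbiased(x, y, bw=1.0):
    """Unbiased estimate of squared MMD between samples x (m, d) and
    y (n, d) under an RBF kernel with bandwidth bw."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw ** 2))
    Kxx, Kyy, Kxy = k(x, x), k(y, y), k(x, y)
    m, n = len(x), len(y)
    np.fill_diagonal(Kxx, 0.0)   # drop self-pairs for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (m * (m - 1))
            + Kyy.sum() / (n * (n - 1))
            - 2 * Kxy.mean())
```

In an ABC loop, one would simulate data from a proposed parameter and accept the proposal when this distance to the observed data falls below a tolerance.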
d
95c3a45940c0522c3b8992ff74ede784
In practice, however, it has been well-documented that machine learning classifiers such as deep neural networks tend to produce poorly-calibrated class probabilities {{cite:da51dc21133fcf9a0244c0adaf20d943b06a0fa6}}, {{cite:ef9f394e0e569b5d656835be7b54f91453ebe23c}}, {{cite:b46018df9049f3dd52c22176f3a8e9ab991bcd4d}}. As a result, a variety of recalibration techniques have been developed, which aim to ensure that a model's confidence (or score) matches its true accuracy. A widely used approach is post-hoc recalibration: methods which use a separate labeled dataset to learn a mapping from the original model's class probabilities to calibrated probabilities, often with a relatively simple one-dimensional mapping (e.g., {{cite:7e3b9405e448693ea9ae2c460a55715fbcda0248}}, {{cite:a9c6cbe0b1f3ae373c79713fd9115a02399a77fe}}, {{cite:164b38e24fc7daf01ec86c338d5c1b4907faefd1}}). These methods have been shown to generally improve the empirical calibration error of the model, as commonly measured by the expected calibration error (ECE).
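As a reference point, the binned ECE estimator mentioned above can be sketched as follows (the bin count is a free choice; equal-width bins are assumed in this illustration of ours):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    """Binned ECE: population-weighted gap between average confidence
    and accuracy within each confidence bin.
    conf:    (N,) predicted top-class probabilities
    correct: (N,) 0/1 indicators of whether each prediction was right."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap   # weight by fraction of points in bin
    return ece
```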
i
93c5f8e5b5cd23e27e82548241f92f10
Post-hoc explanation is the most popular method for interpreting deep learning models. This branch of methods can be divided into two categories: 1) important feature visualization, which yields explanations by identifying and visualizing important features of the input. {{cite:09ea935658fffaecc1380b4b0138c099fe23dd7e}} proposed a method called integrated gradients to attribute the important features in input images. Grad-cam++ {{cite:204b85f739c32a944098ec350621dab4b4840ab3}} used the gradients of each class as weights to combine the features of the last layer to obtain important features. However, these methods have been found to be unreliable; small perturbations to the input data can lead to dramatically different explanations {{cite:0debdbe45705c476c957e7c5c09559b1563a51bb}}, {{cite:11ec14b43d7ff9e1afbf544e35311183c5c20a5f}}. 2) surrogate models, which create a more easily understood surrogate model to mimic the behavior of the original model. {{cite:2bb36b4b2a6fddb29715a0bcb6f922af0a512600}} trained a linear model to mimic the behavior of the original deep learning model locally and used the learned model to explain its decisions. RICE {{cite:b5faefb81d5def0ce82461ac453d1ad65cb88723}} was proposed to give explanations for the target model by synthesizing logic programs. However, methods based on surrogate models do not reveal how a decision is made by the original model {{cite:edf8ab645fd42246bae25d702eb0caa24c333a8f}}.
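For illustration, integrated gradients can be approximated with a simple Riemann sum along the straight-line path from a baseline to the input. The gradient-callback interface below is our assumption; any framework's autograd can supply it:

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Attribution = (x - x') * average of grad F along the path from
    baseline x' to input x, approximated with a midpoint Riemann sum.
    grad_fn(z) must return dF/dz evaluated at z (same shape as x)."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps
```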
d
ca1cd381ef9a0955a03e6eb516fa1732
Reinforcement learning (RL) {{cite:58e7cd8a0b3e6d0074e826144edf9ca3fe56a354}} is based on a framework that considers an agent that interacts with the surrounding environment, observes the consequences of any action taken, and is capable of measuring the success of its actions via a reward (/punishment) system. Experience is used to learn a policy that, given the current state, chooses the next best action to maximize the expected future reward. RL is used for active sensing {{cite:957a75ef7fd4bfc4ec02370f1e48521f26ae9911}}, whose essence is understanding and adapting to changes in the environment to plan future sensing actions. Interestingly, the mathematical formulation of RL perfectly fits the APE characterization given in (REF ): the cost function {{formula:43cc1413-959e-4370-9d9b-9472ec16a57d}} represents the expected reward, while {{formula:64cac5ac-4c03-49b0-a360-65ce70e2e36d}} is the best action to take. For instance, in camera-based active tracking the reward is typically based on a predefined viewpoint of the target (e.g., a desired viewing angle or a given platform-target distance {{cite:ea458259779814ae92a0a5fe032ef3bd7145d28f}}); in some cases the reward considers the uncertainty associated with the target location estimate {{cite:cfa3cfe5beaeb814287f395e55445aaa5b93fdf2}}. In AS, instead, the reward may account for the search time, the presence of clutter in the environment, and the probability of detecting the target, which is usually related to the platform-target distance, as in (REF ) {{cite:73d319f721132f34b959c1246478be53caf6bc35}}, {{cite:426adcd5ef12551eeb802df295ccf20f49d7cd0f}}. The main advantage of RL is that the learning process does not require a dataset of labeled input-output pairs; thus, the robot learns without any human supervision. Usually, the robot starts by performing some random actions and gradually learns to follow a desired SAT behavior. Another advantage of RL is that it provides an end-to-end approach from sensing to actuation; therefore, it integrates the perception and control aspects within a single framework {{cite:957a75ef7fd4bfc4ec02370f1e48521f26ae9911}}. Finally, the RL framework can also be extended to multi-agent and cooperative scenarios (i.e., multi-agent reinforcement learning, MARL {{cite:58e7cd8a0b3e6d0074e826144edf9ca3fe56a354}}, {{cite:73f172d40660a63847721eea4275c1855780cc3b}}), with direct application to distributed and coordinated MSMP-APE problems. Despite the aforementioned benefits, a critical issue with reinforcement learning methods is that they require long training sessions, which are not always desirable in robotics applications {{cite:ef6c02413c2e3680fc56585ddef6bacea88a6759}}. Furthermore, the training process of an RL model involves random exploratory actions, which can be potentially unsafe (especially in human-robot cooperation) and costly (in terms of hardware damage) {{cite:e07fa9ced8838e7e0fb6404674a2882d4a555255}}. These issues are particularly critical in SAR operations, where it is difficult to collect or to create sufficient training data and experiences {{cite:957a75ef7fd4bfc4ec02370f1e48521f26ae9911}}. More generally, the dependence on training data may limit the applicability of RL-based solutions in unstructured environments, where very few assumptions can be made on the data generation and acquisition process; hence, the robustness performance obtained during the training phase cannot be guaranteed during system deployment.
For example, dramatic changes in lighting conditions significantly affect the performance of an image recognition algorithm {{cite:cfa3cfe5beaeb814287f395e55445aaa5b93fdf2}}; thus, an unstable system is obtained if, during the training phase, it has not explored and experienced all possible lighting conditions. Similar issues may arise in camera-based SAT tasks where the target is unknown or its appearance does not match the training dataset {{cite:cfa3cfe5beaeb814287f395e55445aaa5b93fdf2}}. A promising approach for dealing with dynamic and unstructured environments is transfer learning {{cite:e07fa9ced8838e7e0fb6404674a2882d4a555255}}, {{cite:ef6c02413c2e3680fc56585ddef6bacea88a6759}}, {{cite:ea458259779814ae92a0a5fe032ef3bd7145d28f}}: ad-hoc simulators are employed to train an inference model, which is then fine-tuned and adapted to the real-world environment, since a simulation may not capture all physical phenomena {{cite:e07fa9ced8838e7e0fb6404674a2882d4a555255}}. As an alternative, it is possible to rely on model-free control strategies (e.g., extremum seeking control {{cite:cfa3cfe5beaeb814287f395e55445aaa5b93fdf2}}), which do not utilize training data and implement continuous adaptation strategies to respond to the current state of the environment. In view of the aforementioned challenges, more research is needed on efficient and robust control techniques in unstructured and cluttered environments {{cite:cfa3cfe5beaeb814287f395e55445aaa5b93fdf2}}, {{cite:73d319f721132f34b959c1246478be53caf6bc35}}, {{cite:426adcd5ef12551eeb802df295ccf20f49d7cd0f}}.
d
66057f6e7c286db9b832872ec76c96bb
In the present survey we deal with invertible dynamical systems on the circle. In the case of smooth diffeomorphisms of the circle, deep results have been obtained from the mid to late seventies onwards, starting with M. Herman's thesis {{cite:5b33aa946bc9f716c168c80e5a369d72ca91e72c}} and culminating with the work of J.-C. Yoccoz {{cite:74143c0ab90c2da385e92a21513d1ac5d2d10787}}, with important contributions by Y. Katznelson and D. Ornstein {{cite:1dbd1221f53ed27d7310260fd4a8fb864adf65f3}}, among others. It was in part due to his deep work on diffeomorphisms that Yoccoz was awarded his Fields Medal in 1994.
i
0df77b49cda46fb07e7f336672619213
We compare our AMPL with two DICE methods — AlgaeDICE {{cite:930f4fdb9050677af3701ca30793e6aa7a2e19d8}} and OptiDICE {{cite:e28b6fdb3ac885d291e971bdeada2801fd20bed7}} — that directly utilize MIW to improve policy learning. We further consider three state-of-the-art (SOTA) offline model-free RL algorithms: CQL {{cite:2cce926206d2f832cd36b2c78474d329677aa44b}}, FisherBRC {{cite:d8d99516b97ab51df80eccd622f34481528d8505}}, and TD3+BC {{cite:126d53be8a6f688fe8f5a1a66c071ead4af694a7}}; and three SOTA offline MBRL methods: MOPO {{cite:cc3444ce97991e918a4f986bb6ce7077eaa4e87a}}, COMBO {{cite:f0986a3bdcbb638ae6238d441ec28b7a61e493a3}}, and WMOPO {{cite:91937a6106fd655427bd7375f14e26a24ce9d86c}}. Experimental details and hyperparameter settings are discussed in Appendix REF . We run baseline methods using the official implementation under the recommended hyperparameters, except for AlgaeDICE, for which we use the offline version provided by {{cite:76285bda6b8f0cb003faceeaa321065850b7c114}}. Table REF shows the mean and standard deviation of the results of AMPL and baselines over five random seeds.
r
bc6baeea11167f0982db584fb81f6542
We also compare our methods with three state-of-the-art techniques for imbalanced data. Decoupling {{cite:4ca1abdaaf566d86048fe6281a361ed88dbf4ed1}} is based on re-sampling, while Class-balanced loss {{cite:e13cee5d4424400dcb0bcef98aa1fcbba284651d}} and LDAM-DRW {{cite:064a1da393d8122bf1d1dc58f7d87bfe3e074f1b}} on re-weighting. Although these techniques can improve the performance of the ResNet-50 baseline, the effectiveness is limited compared with our strategies.
r
7d29d92ed5fe9e6472febc0bd124b6b9
Second, we evaluate SpMV execution using multiple DPUs of the UPMEM PIM system (Section ), with both 1D (Section REF ) and 2D (Section REF ) partitioning techniques, and compare them (Section REF ) using a wide variety of sparse matrices with diverse sparsity patterns. We select 22 representative sparse matrices from the SuiteSparse Matrix Collection {{cite:c9232dca9026d303f3aa64c6c4197c26920193c5}}, the characteristics of which are shown in Table REF . As the values of the last two metrics (i.e., NNZ-r-std and NNZ-c-std) increase, the matrix becomes more irregular {{cite:f81bf467f4c39f7a820c5d8d430a780e4a29bbd4}}, {{cite:4acf4055ac78cf157ccd794fcbd5b350902e5137}}.
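These irregularity metrics can be computed directly from a compressed sparse representation. The sketch below reflects our reading of the metric names (standard deviation of non-zeros per row and per column), not the exact scripts used in this evaluation:

```python
import numpy as np
import scipy.sparse as sp

def irregularity_metrics(A):
    """Return (NNZ-r-std, NNZ-c-std): std of non-zeros per row/column."""
    A = sp.csr_matrix(A)
    nnz_per_row = np.diff(A.indptr)                  # CSR row pointers
    nnz_per_col = np.diff(sp.csc_matrix(A).indptr)   # CSC column pointers
    return nnz_per_row.std(), nnz_per_col.std()
```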
m
44ad4514b0895cffadfd2d9c6faa433c
In an interactive environment, an agent follows a policy {{formula:b963f4d1-e1c7-4422-b854-570773b06e92}} that produces actions {{formula:7174522d-38ef-497d-bd52-6a454ddcd875}}, and receives rewards {{formula:b6c141e3-953b-4961-a19a-422317f5a2b3}}. The goal of reinforcement learning is to learn the policy {{formula:98df4c89-b1a2-4f1e-88f8-f032368aa244}} that maximizes the expected total future reward {{formula:3186f3aa-be25-4ffc-8212-6af44926481a}}, where {{formula:e290e6e8-e0ce-45aa-91b2-538ccfb4d9e1}} is the total future reward, and {{formula:9c3c43bb-f791-4de5-8673-6694d8ed085a}} is a temporal discount factor that devalues future reward. There are many variations of reinforcement learning algorithms {{cite:cd88a26cfd5eaf751932b6f09e4ec5430d1ff16e}}. Most of them can be broadly separated into value-based and policy-based methods. Value-based methods attempt to learn value functions that estimate the expected total future reward {{formula:a557328f-f11b-4eab-8abf-1de2e01ce9d7}}, and use the value functions to guide the choice of actions. In contrast, policy-based methods attempt to directly learn the policy that maximizes future reward {{formula:13c53bd8-07f7-4871-8de0-a939cf3e20b0}}. In deep reinforcement learning, value functions and/or policy functions are parameterized as neural networks with parameters {{formula:a5e1b9ca-9cfc-474a-b393-f24614c8b429}}, and learned through gradient descent {{cite:094367f09c017d9d925653ea43e55c30b18c7ccc}}. Below we describe these two methods in more detail.
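As a small worked example of the quantities above, the discounted total future reward at every time step can be computed with a single backward pass (an illustrative helper of ours, not code from this work):

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_k gamma**k * r_{t+k}, computed for every t by running
    backwards through the reward sequence."""
    R = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        R[t] = running
    return R

print(discounted_returns([1.0, 0.0, 2.0], gamma=0.5))  # [1.5, 1.0, 2.0]
```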
m
834434cb621c70d3846ad14fd5007b45
Another related approach is transfer learning or meta-reinforcement learning, which aims to accelerate learning in new tasks from a previously-experienced family. One meta-RL approach {{cite:39d23876688d38a99f444e852e504c0ce8388eac}} uses a particular recurrent (memory-equipped) network architecture that learns general features of the task family through backpropagation, allowing the recurrent dynamics to quickly tune into details of a new task from the family, in what is thought to be a brain-like mechanism {{cite:fbc8910c0a800490e719b929ae75f9ea7a8b0dbb}}. Meta-RL is currently an active research field {{cite:00480afadace376863056a58eda64fd236b84e0e}}, {{cite:126554983aab68c21bd7f758eae7625eb041dc63}}, {{cite:0caa9b74bfcb22bd08029998f6529dd696fff5c7}}. Unfortunately the network must be informed (reset) each time the task changes, but this general approach could be seen as adaptation through knowledge transfer. Again, the quick memory-free effect provided by our rule could work well in conjunction with such transfer-learning methods, resulting in more human-like learning.
d
7c399b1169fdd823921657f339d38fca
The sample complexities that we obtained for tensor train completion with (REF ) and without side information (REF ) depend on the core coherence as {{formula:451d5aa2-a45a-4226-a341-5f00cdff4b8d}}. It is thus important to have a qualitative estimate of how large the core coherence can be. Candès and Recht {{cite:a024d5eed5124a83f1accf86338ffd38d5d2b1ad}} proved that {{formula:34260222-ce1a-4263-bbbb-6f494f3c2f43}} is of order {{formula:b7acd14b-ed69-4f11-878d-04794ac7b407}} for matrices whose left and right singular factors are chosen uniformly at random from the set of {{formula:df3ecabb-24ad-44c7-8283-20c03c999919}} matrices with orthonormal columns {{formula:33fcabd4-9888-42dd-86b0-a9205ccb5e0b}}. To sample such factors one can take a random {{formula:114a4285-18e8-45f3-8dc5-379849a7c9fa}} matrix with i.i.d. standard normal entries and apply Gram-Schmidt orthogonalization {{cite:06f59960246b12860580c06e1689218d209cfffe}}.
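In code, this sampling procedure is a one-liner around a QR factorization, which is the numerically stable stand-in for the Gram-Schmidt step (a sketch of ours):

```python
import numpy as np

def random_orthonormal(n, r, rng=np.random.default_rng()):
    """Sample an n x r matrix with orthonormal columns, uniformly over
    that set, by orthogonalizing a standard Gaussian matrix."""
    G = rng.standard_normal((n, r))
    Q, R = np.linalg.qr(G)
    # fix column signs so the distribution is exactly Haar-uniform
    return Q * np.sign(np.diag(R))
```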
d
91e0e61fd1279d447b79a72ddfbc5e95
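A minimal sketch of the sampling procedure just described; in practice the QR decomposition is the numerically stable way to carry out the Gram-Schmidt orthogonalization, and the sign correction makes the resulting distribution exactly uniform.

```python
import numpy as np

def random_orthonormal(n, r, seed=None):
    """Sample an n x r matrix with orthonormal columns, uniformly at random,
    by orthogonalizing a matrix with i.i.d. standard normal entries."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n, r))
    Q, R = np.linalg.qr(G)          # numerically stable Gram-Schmidt
    return Q * np.sign(np.diag(R))  # sign fix for exact uniformity
```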
Theorem REF bounds the residual in the low-rank approximation {{formula:cf30b5c0-f4f4-437d-a788-e5e7e4818f8b}} according to criterion (REF ). Like many subset selection bounds, the upper bound can be attained by artificially contrived matrices {{cite:be05d0f1099379fac1155696da7d2823003cd7f4}}, but it tends to be quantitatively pessimistic in practice. The bound is nevertheless informative from a qualitative perspective.
m
6f8065b4e8d86c94a9e7788a4523ef54
We have investigated the interaction of a relativistic fluid jet with the radiation field of the underlying accretion disc. We found that proper relativistic transformations of the radiative intensities from the local disc frame to the observer frame are very important: these transformations modify both the magnitude and the distribution of the moments around a compact object. The corona and the SKD are the major contributors to the net radiative moments, while the KD contribution is much lower than either of the former. Interestingly, the moments due to the various disc components peak at different positions above the disc plane, giving rise to multi-stage acceleration. Jets with normal conditions at the base produce mildly relativistic terminal velocities {{formula:15dfa804-c8a3-4d8b-882d-33e012e6ccc1}} few {{formula:9a3ecb10-e0c6-4184-9465-ce4fb93ec23a}} (Fig. REF ). The elastic scattering regime maintains the isentropic nature of the jet, and because we considered a realistic and relativistic gas equation of state, the adiabatic index changes along the jet. However, radiation not only accelerates but also decelerates the flow if {{formula:ffdb99bd-2ed1-43af-a6bf-7e9918c7bc2d}} . Although the velocity is low close to the jet base ({{formula:fcd2f52f-d125-42b7-909e-f572b58fb2fc}} ) and the radiation field should accelerate the flow there, the jet is hot, so the effect of radiation is not significant in the subsonic branch because of the inverse enthalpy factor in the radiative term {{formula:2dd24e06-8d0e-4972-bc69-0126a2f23364}} of the equation of motion (equation REF ). Therefore, in the subsonic domain the jet is accelerated as a result of the competition between the thermal and the gravity terms. As the magnetic pressure is increased, the synchrotron radiation from the SKD increases and boosts the flow velocity in the supersonic regime. Increasing {{formula:0c428678-5cb0-4316-993b-acc94aec7f27}} increases both the synchrotron and the bremsstrahlung photons from the SKD, which makes the SKD contribution to the net radiative moment more dominant and therefore increases {{formula:700f5ff7-5bd2-4090-9aa3-00c573b6b254}} in the supersonic part of the flow. {{formula:5092d43b-da1f-4dad-87a1-bdeb124a1174}} increases with {{formula:45afdf03-c58f-487d-bd62-11ca432780b3}} but tends to taper off as {{formula:c0f2b50a-37e3-4046-85e7-e67311e89a94}} ; in contrast, {{formula:fc8a2098-8682-4127-b18f-cb4f006d5ba6}} tends to increase with {{formula:2396d5de-55a5-45ca-acf8-7896f9a04650}} and shows no tendency to taper off. The jet tends to get faster as the proton content decreases (i.e. as {{formula:d354eeae-b8ef-46ff-8f28-e74890c14367}} decreases). We do not extend our study to electron-positron (i.e. {{formula:3aa62ce8-6cb8-4f8b-939e-e01cf67a8ce4}} ) jets, since a purely electron-positron jet is highly unlikely {{cite:31567ae78bc7e084220124c20018a643e62d17bd}}, although a pair-dominated jet (i.e. {{formula:bf37543d-d904-40fb-94df-9ffc01919147}} ) is certainly possible. The terminal velocity {{formula:0b81f704-63ee-45ed-bc39-849978a168c6}} increases with the total luminosity {{formula:d7643c36-f131-4152-9ad9-f6a0349cd6b2}} and approaches relativistic values as the disc luminosity approaches the Eddington limit. The KD, however, plays a limited role in accelerating jets (Figs. REF a-b).
d
794e7cc5b682cf43d0ed7fead54ce292
The initial wave of deep learning methods employed transfer learning {{cite:179a0f83a4b2259cdb2545dec5c832049e2e1f24}}, {{cite:27569926e501ee8cd70358b4f692269246ac5497}} for recognizing baggage threats, followed by object detection {{cite:32a54326376f9277341d110e2b327e02ce9ec054}}, {{cite:b76228dc8f17dd33b2f5be0a2b6a8640982c5383}}, {{cite:f27b2520ecd16b475bed87f92f54aadc23175931}}, {{cite:33c2698ad8a6df132ae1132c03d7d90e41a2a21c}}, {{cite:654bdae5d4a8549361249edcae9a4257b1a1dc10}} and segmentation strategies {{cite:0f13219fa6d21f457a1f41d9e0d813fa49106346}}, {{cite:85c9f59cbbb8186d3f8357cccd0f545d05861e34}}. Recently, researchers have used anomaly detection {{cite:9286fdee821011e29aa8c2594f4309dbdb1f3e53}}, {{cite:f576b184ec2365c1111200df63c0db91f7927117}} as a means to handle data scarcity while recognizing potential baggage threats. Moreover, attention modules {{cite:b76228dc8f17dd33b2f5be0a2b6a8640982c5383}} and maximum likelihood estimation {{cite:9a13f4b371a1bcd2d20e54dcde3d7b07733008b7}} have also been explored for detecting contraband items. Apart from this, Miao et al. {{cite:f27b2520ecd16b475bed87f92f54aadc23175931}} addressed the imbalanced and extremely cluttered nature of baggage threats by introducing a large-scale dataset dubbed Security Inspection X-ray (SIXray) {{cite:f27b2520ecd16b475bed87f92f54aadc23175931}}. They also proposed a class-balanced hierarchical framework (CHR) to recognize contraband items from the highly complex scans of the SIXray {{cite:f27b2520ecd16b475bed87f92f54aadc23175931}} dataset. Furthermore, Hassan et al. developed a Cascaded Structure Tensor (CST) framework that extracts the contours of contraband items to generate object proposals, which are then classified through the ResNet-50 {{cite:710d9492ad3ad94867b3a20b100c5b33643c0aba}} model. CST is validated on the publicly available GRIMA X-ray Database (GDXray) {{cite:864befbbb2555afb705ec586bc8f19a954143132}} and SIXray {{cite:f27b2520ecd16b475bed87f92f54aadc23175931}} datasets. Moreover, Wei et al. {{cite:e7b665989ece61af8f7fbc276940c0736964d391}} developed the De-occlusion Attention Module (DOAM), a plug-and-play module that can be paired with object detectors to recognize and localize occluded contraband items in baggage X-ray scans. DOAM has been rigorously tested on the publicly available Occluded Prohibited Items X-ray (OPIXray) dataset introduced in {{cite:e7b665989ece61af8f7fbc276940c0736964d391}}. {{figure:dadf28e4-78df-4bba-957a-9fd8db7578cf}}
m
54f5df48b88d2eb277f882769bb2d80f
To the best of our knowledge, the rate-of-convergence results in Theorems REF and REF are entirely new for SMM-type algorithms, even under the i.i.d. data setting. Previously, only almost sure convergence to stationary points was known {{cite:c8479f2bfaac47f6d5d54bcbcfd0cb7410bf46c7}}, {{cite:cd1ff1938325b1c1e71032aec41adab44a39a5be}}, {{cite:b33d19a8cb95053c3a610ad0785ba85a0f3f1ae1}}, {{cite:1713e26263da559efb6d2d2a9182e5b96d5a7ac5}}, {{cite:e3bf0da49402c7567bca8ebd7ea6bcae92c47e51}}, even under the i.i.d. data assumption with strongly convex surrogates. Moreover, (REF ) and (REF ) in Theorem REF give bounds on the variation of the random objective values {{formula:8730bb3f-cb8f-4da6-b1d8-3b149e9fbfd9}} and {{formula:9ca00826-60e2-4362-bf7a-1ece532eb94c}} against the deterministic quantity {{formula:138fb971-d746-4b69-b084-17a8788abd8c}} with respect to the randomness of the data samples {{formula:1fa6a55b-168b-42a8-89bf-33dcaa8bec31}} and of Algorithm (e.g., the possibly randomized choice of blocks in Algorithm ).
r
a4ce702b2433bce29ee88778ea375cfb
According to the degree of geometric symmetry, crystals can be divided into different crystal systems and space groups {{cite:4fb5d9c30d07db66d3d392379c751388e67c945e}}, {{cite:7b836ae95e28b9946f557605293a9dc8d74e485d}}. Determining the symmetry information, particularly the space group, has wide applications in crystal material property prediction {{cite:48f580e9fa184bb1910739e7f91437f920d9ee51}} and crystal structure prediction {{cite:2e32d45fb41943ca7b806ef4371eef622eb4b344}}, {{cite:5d15fd1822e375396bec30eb10e3cda9400eb214}}. Recently, we proposed a knowledge-rich approach to crystal structure prediction {{cite:9aae3d0f8d36557e72e7bfcf63630ca365aa083c}}, {{cite:e9e46ce6a641b9708d38c7ec548ec6ab9bb706c0}}, {{cite:57ecee80c7a2aa40b0d30a1d7fd1ecb9cfb06edf}}, {{cite:9e2b022e48df85fd9f26e28fd06b4b6d4b2a400d}}, inspired by recent advances in protein structure prediction (PSP) {{cite:dfced3482b9d8c441a2e61fdaead1baeca654ae9}}, {{cite:da9ee57e92ea5e604fe0b52821acc6f4931b828e}}, in which structures are predicted from a predicted distance matrix. In our approach, various global optimization algorithms {{cite:9aae3d0f8d36557e72e7bfcf63630ca365aa083c}}, such as genetic algorithms {{cite:e9e46ce6a641b9708d38c7ec548ec6ab9bb706c0}}, neural networks {{cite:57ecee80c7a2aa40b0d30a1d7fd1ecb9cfb06edf}}, and differential evolution {{cite:9e2b022e48df85fd9f26e28fd06b4b6d4b2a400d}}, have been used to reconstruct the atomic coordinates of crystal structures. However, it is indispensable to have the space group information for a given material composition before predicting its structure.
i
5f398e1bd3474959fb344a1efa491256
Applications to Interpretability: Interpretability of deep learning models is a concern in many domains. While gradient-based attribution methods are a common way to gain high-level insight into model predictions, the resulting saliency maps often lack the granularity needed for sufficient model understanding. Such saliency maps highlight which pixels were most important for a model prediction. While this is helpful, it can be unclear why a certain pixel or group of pixels was important: color, geometry, texture, and other factors could all be the reason. Thus, we observe that incorporating physical models, such as renderers, into learning, as done in our proposed method, provides an avenue for improved interpretability. By leveraging existing gradient-based attribution methods {{cite:0a46a0730220003cf6bcc7b6c8b88347a185a1c9}}, {{cite:d3e5f2f4826e57eb61a1a4fc3c1bc7d366eefa6c}}, we can determine how much the features of each physical scene parameter contributed to a prediction and generate corresponding saliency maps for each feature, as shown in Figure REF . As differentiable renderers become more realistic and are able to model more scene parameters, interpretability can continue to improve when renderers are used in training.
d
0f1f0dd96410b2ba4928681a292a027b
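A minimal sketch of the idea described in the excerpt above, with toy stand-ins for the differentiable renderer and the downstream model (all functions and parameter names are assumptions for illustration): differentiating the prediction through the renderer yields one attribution per physical scene parameter rather than per pixel.

```python
import jax
import jax.numpy as jnp

def render(scene):
    # Toy differentiable "renderer": an image from geometry, lighting, albedo.
    return jnp.outer(scene["geometry"], scene["lighting"]) * scene["albedo"]

def prediction(scene):
    # Toy "model": a scalar prediction score computed from the rendered image.
    return jnp.sum(jnp.tanh(render(scene)))

scene = {
    "geometry": jnp.ones(8),
    "lighting": 0.5 * jnp.ones(8),
    "albedo": jnp.array(0.7),
}

# One gradient (and hence one saliency map) per physical scene parameter.
grads = jax.grad(prediction)(scene)
saliency = {name: jnp.abs(g) for name, g in grads.items()}
```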
Given an {{formula:a70e09f7-3b93-48a3-a5df-84e35be016e9}} -partite entangled pure state {{formula:1f6b773a-e0ec-44d5-a270-76963a2d764a}} with {{formula:db8f0d19-e872-41bf-b80d-ad0a4456d491}} , Werner states {{cite:8f27d139007d00ccfb21f1da97c3b21ee9c1865a}} of {{formula:3c93aa85-9cc5-4a83-bccd-b2e7b3c68122}} are defined by {{formula:ab44ca05-de67-4da4-bd92-7221477332ba}}
r
2155adc3c2ec9373d2c8c6cf0de326a5
We now proceed with results obtained by means of dynamical simulations of the system's evolution. The latter are performed by numerically integrating Eqs. (REF )-() using a high-accuracy spectral method {{cite:1251c8833e3aa62a5b056392b6d07a0975eaef6b}}. The initial condition is borrowed from the soliton solution of Eq. (REF ), for {{formula:55ac2a91-760e-4a4d-8be3-ed91ca1c0608}} , using the initial conditions of Eqs. (REF )-(REF ). In particular, the initial condition for the field {{formula:d4e48902-a2a3-4763-944e-52660169c683}} is taken to be: {{formula:fec58363-8766-4378-bb9c-530b1b38da4d}}
r
49e9837f87a0b137ec58ceec3af5c2d0
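As a minimal illustration of the pseudospectral machinery behind such simulations (a sketch assuming periodic boundary conditions; not the authors' exact integrator), spatial derivatives can be computed to spectral accuracy with the FFT:

```python
import numpy as np

def spectral_derivative(u, L, order=1):
    """Differentiate a periodic field u sampled on [0, L) via the FFT.

    Each Fourier mode is multiplied by (i*k)**order, which is exact for
    band-limited fields -- the source of the method's high accuracy."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    return np.fft.ifft((1j * k) ** order * np.fft.fft(u)).real

# Check: d/dx sin(2*pi*x/L) = (2*pi/L) * cos(2*pi*x/L).
L = 10.0
x = np.linspace(0.0, L, 256, endpoint=False)
du = spectral_derivative(np.sin(2 * np.pi * x / L), L)
```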
Expected decrease condition. There exists {{formula:1f849a29-5624-4623-820a-c515dbcbc74c}} such that, for each {{formula:dd95c71f-97e7-4b77-9e9d-a527603ee9fc}} at which {{formula:ac79bca1-b81b-48cb-8517-6f4a43f75ce7}} , we have {{formula:6936afd7-781b-4ad8-ac39-017af8752505}} .

Comparison to Lyapunov functions
The defining properties of RASMs hint at a connection to Lyapunov functions for deterministic control systems. However, the key difference between Lyapunov functions and our RASMs is that Lyapunov functions decrease deterministically in value, whereas RASMs decrease in expectation. Deterministic decrease ensures that each level set of a Lyapunov function, i.e. the set of states at which the value of the Lyapunov function is at most {{formula:58423227-b02c-497a-a3bf-542b15fdfc42}} for some {{formula:d5d50180-e80f-4d8d-9b23-4688cd9f9022}} , is an invariant of the system. In general, however, such a condition cannot be imposed on stochastic systems. In contrast, our RASMs only require expected decrease in the level, and the Initial and the Unsafe conditions can be viewed as conditions on the maximal initial level set and the minimal unsafe level set. The choice of a ratio of these two level values allows us to use existing results from martingale theory in order to obtain probabilistic avoidance guarantees, while the Expected decrease condition by {{formula:b35eae4a-c82e-4d28-b320-c55f5814f2b9}} furthermore provides us with probabilistic reachability guarantees.

Certifying reach-avoid constraints via RASMs
We now show that the existence of an {{formula:fc42ea27-7a89-40ea-bb2d-fa6d8068ad92}} -RASM for some {{formula:cf78a18d-36ae-4ef0-bfb4-a723abbe560a}} implies that the reach-avoid constraint is satisfied with probability at least {{formula:ef27783f-603f-4e58-bf47-34588f3c7844}} .

Theorem 1. Let {{formula:cbcbcb34-6b73-4c17-b0e5-eac2dac1bd67}} and {{formula:0fecc73d-ead8-40c6-91f0-6c48c24d3a48}} be the target set and the unsafe set, respectively, and let {{formula:0efcbf3b-8254-40af-922a-fe13005f4826}} be the probability threshold. Suppose that there exists an RASM {{formula:c3a6b561-518b-4564-a8c5-a066a6824546}} with respect to {{formula:dc378975-4cbd-451e-9b50-16c25a5d89b9}} , {{formula:d3a7199d-0cdf-476b-9d27-37f872623ba4}} and {{formula:58f6968c-7aee-41dd-b92b-7eefeaba043d}} . Then, for every {{formula:c3535ab6-8912-4374-944e-ecd862451321}} , {{formula:fac1ec0d-63cc-4457-b2c5-9e5e174352ae}} .

The complete proof of Theorem REF is provided in the Appendix. In what follows, we sketch the key ideas behind the proof in order to illustrate the applicability of martingale theory to reasoning about stochastic systems, which we believe has significant potential for applications beyond the scope of this work. To prove the theorem, we first show that an {{formula:613cc0c3-a9bc-4daa-904c-d0f96b8ec96a}} -RASM {{formula:8a26c7ee-af60-4155-92c9-da0f99dfbc0a}} induces a supermartingale {{cite:c9db9e32e98e856e87fc8af30e817adf15039983}} in the probability space over the set of all trajectories that start in an initial state {{formula:06f5ff12-0007-4039-bdb9-f6b742e6a961}} .
Intuitively, a supermartingale in a probability space {{formula:af32e1b1-129e-4419-9736-a1a700a759cf}} is a stochastic process {{formula:3eb2db28-94bb-4558-b585-c3f13af7bb7f}} such that, for each {{formula:96bcd800-9165-4fe6-9af6-b735c07a1b7c}} , the expected value of {{formula:bba1d58a-91cd-46fe-ba1d-bd3a77e1cb8b}} conditioned on the value of {{formula:dedcb12f-513e-4d54-bab0-8579d975787c}} is less than or equal to {{formula:255fc38f-d927-4863-92e8-b6ac9bc80593}} . We formalize this definition together with the notion of conditional expectation and provide an overview of the definitions and results from martingale theory that we use in our proof in the Appendix. Now, let {{formula:38443f5e-90f2-42c4-9b3e-095d3720afbb}} be the probability space of trajectories that start in {{formula:22a8ffd4-a4fa-4023-9d4a-663f417e0a28}} . Then, for each time step {{formula:55f94363-9a5c-4643-98c7-326ad274fcdf}} , we define a random variable {{formula:408b0cad-4037-4cae-975b-d5ad27099faf}} for each trajectory {{formula:7a2c7ef5-80be-457f-b63a-4eb1e098f2bf}} . In other words, the value of {{formula:b4b6641e-60a3-4482-ae3b-9a981639fa33}} is equal to the value of {{formula:594a41c4-cb43-4795-8bc4-ecb0723bb138}} at {{formula:c9fab091-e3ba-4d8a-937f-0c27d67beffa}} , unless either the target set {{formula:1b3fa3ab-3160-41a5-b2db-51029a526b41}} has been reached first, in which case we set all future values of {{formula:76143bfd-525f-47b1-b431-37106a33d98c}} to 0, or a state in which {{formula:0338d0f2-9f56-4ae1-8eec-d735d2180c40}} exceeds {{formula:fe3343b7-a8a2-4449-8874-237ae2d49daf}} has been reached first, in which case we set all future values of {{formula:2be34b9f-54e0-4db5-ad68-e04f15b49459}} to {{formula:11baad57-d703-4923-934c-bf923f618174}} . Then, since {{formula:2bc2a431-af06-4489-b700-6a21a2e9dd93}} satisfies the Nonnegativity and the Expected decrease conditions of RASMs, we may show that {{formula:21faed88-02b2-4c68-9ff5-69e1c501d8c0}} is a supermartingale in the probability space {{formula:b113c355-d2c7-470e-a303-c9a7c7d59042}} . Next, we show that the nonnegative supermartingale {{formula:0ae6d7c8-b1a5-499d-b2e6-69ea2e573252}} with probability 1 converges to and reaches 0 or a value that is greater than or equal to {{formula:d8f7b60f-9789-4012-ae78-3dff2f6f0632}} . To do this, we first employ the Supermartingale Convergence Theorem (see the Appendix), which states that every nonnegative supermartingale converges to some value with probability 1. We then use the fact that, in the Expected decrease condition of RASMs, the decrease in expected value is strict and by at least {{formula:ac25046b-349f-405f-96aa-e6c9df961dfb}} , in order to conclude that this value is reached and has to be either 0 or greater than or equal to {{formula:4b02eb26-d00b-4e38-b2c8-bc37bd1c5397}} . Finally, we use another classical result from martingale theory (see the Appendix) which states that, given a nonnegative supermartingale {{formula:d2f9dd78-af87-4ce3-a1fd-d102d67148dd}} and {{formula:af21cfe5-aff2-4205-a7ad-68a0f6345d64}} , {{formula:36d72e12-ca29-4c64-b019-12baecf33cac}} Plugging {{formula:c2e11112-d736-40dd-bf07-ce3171253b70}} into the above inequality, it follows that {{formula:7262934f-9fdd-4bd7-b61e-f44b06314109}} . The second inequality follows since {{formula:73655126-79f1-4ae1-8a54-2b3b348f7b4e}} for every {{formula:d7a53418-0eab-4a64-bafc-8224bb808c76}} by the Initial condition of RASMs.
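For reference, the classical bound invoked in the last step (often called Ville's inequality for nonnegative supermartingales) can be written in generic notation as follows; this is a standard restatement, not a transcription of the excerpt's formulas:

```latex
% Ville's inequality: for a nonnegative supermartingale (X_t)_{t \ge 0}
% and any threshold \lambda > 0,
\mathbb{P}\Big[\sup_{t \ge 0} X_t \ge \lambda\Big] \;\le\; \frac{\mathbb{E}[X_0]}{\lambda}.
% With the Initial condition X_0 = V(x_0) \le 1, the right-hand side is at
% most 1/\lambda, so the process exceeds the level \lambda (and hence the
% unsafe level set) with probability at most 1/\lambda, yielding the
% reach-avoid guarantee with probability at least 1 - 1/\lambda.
```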
Hence, as {{formula:37c65cca-9929-4a39-92e4-7a93fb81a87d}} with probability 1 either reaches 0 or a value that is greater than or equal to {{formula:ff9ebc3b-dd13-4f10-85e5-41bcc2664c77}} , we conclude that {{formula:fffc91f4-bf26-444c-91fb-a4fef1c9c3c8}} reaches 0 without reaching a value that is greater than or equal to {{formula:83416732-7304-476e-9f29-c7d9424e1114}} with probability at least {{formula:4052c678-cd15-42f2-affa-f6e0646fa525}} . By the definition of each {{formula:979f46b3-dba5-4169-b4df-4c0a1e0e9057}} and by the Safety condition of RASMs, this implies that with probability at least {{formula:ee1c72d2-c55b-40b1-97a2-269afd50f97f}} the system will reach the target set {{formula:60577ac8-48d2-48d5-ba17-774a6a1a4ac9}} without reaching the unsafe set {{formula:283af124-6f97-4270-af31-b9538b857a01}} , i.e. that {{formula:8c0cde34-6137-4b40-bd74-3da4e0ecedd9}} .

Probabilistic safety
In order to solve the probabilistic safety problem and verify that a control policy guarantees that the unsafe set {{formula:4d72e74f-be09-40f4-9828-a672624ef42f}} is not reached with probability at least {{formula:0fecb5b2-305f-4124-b97a-e2f4ddeb3dd9}} , we may modify the Expected decrease condition of RASMs by setting {{formula:65405ec1-a3c1-4b6f-9698-647733ddcec4}} . Thus, RASMs are also effective for the probabilistic safety problem; this claim follows immediately from our proof of Theorem REF . In this case, if we also set {{formula:3d2d4aa9-5c26-48d0-9978-09098cc9e703}} , then our RASMs coincide with the stochastic barrier functions of {{cite:e245812b00593747415fa4ac1ed50e3079bcb4a5}}. However, if {{formula:08d1d39e-6a6d-4133-8ff7-364ebcff1e6f}} is not empty, then we must have {{formula:ff358170-118b-45a3-9a0d-db036c1e3a61}} in order to enforce convergence and reachability of {{formula:9b7f13d8-bb24-4493-a7a9-e64ea3dcd62b}} .

Extension to {{formula:06a46aad-9ea2-45d2-8927-aececdb62180}} and {{formula:1c9c8497-c5bd-4281-817f-c47ca4b94920}} and comparison to RSMs
So far, we have only considered {{formula:b583bd07-0aa2-4852-a752-ff3846f9645f}} . The difficulty in the case {{formula:ec58c32a-44d7-44c1-8a3a-cefc291676f4}} arises since the value {{formula:73980c7b-8bf4-4634-869d-3a2392f9c8e9}} in the Safety and the Expected decrease conditions in Definition REF would not be well-defined. However, if {{formula:2b53e42d-480a-4aaf-bf5c-4e9076b3d3e0}} , then the Safety condition need not be imposed at any state. Moreover, it follows directly from our proof that imposing the expected decrease condition at all states in {{formula:f0bce51b-e78c-4d2c-b9b4-373de4130924}} makes RASMs sound for certifying probability 1 reachability. In fact, in this special case our RASMs reduce to the RSMs of {{cite:839dc605cbf08ec2c3c6d7fabee6f502f52ebc4b}}. The key novelty of our RASMs over RSMs is that we also employ level set reasoning in order to obtain probabilistic reach-avoid guarantees, thus presenting a true stochastic extension of Lyapunov functions that allows reasoning both about reach-avoid specifications and, quantitatively, about the probability with which they are satisfied. In contrast, RSMs do not reason about level sets and can only certify probability 1 reachability.

Learning Reach-avoid Policies
We now present our algorithm for learning policies with reach-avoid guarantees, which learns a policy together with an RASM certificate. The algorithm consists of two modules, called learner and verifier, which are composed into a loop.
In each loop iteration, the learner learns a policy together with an RASM candidate as two neural networks {{formula:16b78603-2ca1-4f90-b95f-4f6f50e02dda}} and {{formula:ccdcaf8a-fe9a-49ae-913a-04c7bc9d2363}}, with {{formula:6fc917bf-9978-4892-bfb4-302c4f5af646}} and {{formula:16b61a2f-8de3-4616-818e-1f7225e3cf76}} being vectors of neural network parameters. The verifier then formally verifies whether the learned RASM candidate is indeed an RASM for the system and the learned policy. If the answer is positive, the algorithm concludes that the learned policy provides formal reach-avoid guarantees. Otherwise, the verifier computes a counterexample which shows that the learned RASM candidate is not an RASM. The counterexample is passed to the learner and used to modify the loss function towards learning a new policy and RASM candidate. The loop is repeated until either a candidate is successfully verified or the algorithm reaches a specified timeout. The algorithm is presented in Algorithm . We note that our algorithm can also verify whether a given Lipschitz continuous policy provides reach-avoid guarantees, by fixing the policy and learning only the RASM neural network.

[Algorithm for learning reach-avoid policies; the flattened pseudocode listing is summarized here from the surrounding prose. Input: the dynamical system and the reach-avoid specification; parameters: mesh {{formula:dd24fee5-5808-4094-bd3f-7f6011ad2105}} , number of samples {{formula:fc069823-b766-440e-b0f9-75c510855939}} , regularization constant {{formula:7c6f8140-7ef1-45a7-8453-e5a768f2a96d}} . The policy is pre-trained by PPO, the state space is discretized with the given mesh, the counterexample sets are initialized, and the RASM candidate is trained by minimizing the loss function. While the timeout is not reached: compute the Lipschitz constants of the networks, collect the relevant discretization vertices and cells, and check the Initial, Safety and Expected decrease conditions; if all checks pass, return a reach-avoid guarantee with the corresponding probability; otherwise, add the computed counterexamples, retrain by minimizing the loss function, and refine the discretization. On timeout, return Unknown.]

Policy Initialization
Learning two networks concurrently with multiple objectives can be unstable due to dependencies between the two networks and differences in the scale of the objective loss terms. To mitigate these instabilities, we propose pre-training the policy network so that our algorithm starts from a proper initialization. In particular, from the given dynamical system and the safety specification, we induce a Markov decision process (MDP) whose objective is to reach the target set while avoiding the unsafe set. The reward term {{formula:2667d60d-3601-4133-9c15-ad4b9e592167}} is given by {{formula:58212348-033a-4bd4-be24-ad71a288afa7}} and we use proximal policy optimization (PPO) {{cite:13604744fd358c345831b96182d90155860f64a7}} to train the policy.

State Space Discretization
When it comes to verifying learned candidates, the key difficulty lies in checking the Expected decrease condition. This is because, in general, it is not possible to compute a closed-form expression for the expected value of an RASM over successor system states, as both the policy and the RASM are neural networks. In order to overcome this difficulty, our algorithm discretizes the state space of the system. Given a mesh parameter {{formula:855a24f9-5026-4d7e-a239-4c97cbdd7323}} , a discretization {{formula:28b6edc3-f1e1-48dd-91d0-fda5432b219b}} of {{formula:ce4085d6-91d6-4820-8943-31c59d273a8d}} with mesh {{formula:7300318f-22d0-4da3-8db0-d87a5eeec265}} is a set of states such that, for every {{formula:7db5aab4-0da2-4b56-9a11-7708605b69a2}} , there exists a state {{formula:6e611a5d-2323-47da-9190-498a08d4f165}} such that {{formula:93f4f1fa-9678-475d-abe9-0deb03c3357a}} . Due to {{formula:0607257c-4d27-4614-888d-e41a463b1a66}} being compact and therefore bounded, for any {{formula:f88fc461-3cc0-443a-9551-5f7c78ec501e}} it is possible to compute its finite discretization with mesh {{formula:2339abb4-8580-4eba-8b77-600b1094da37}} by simply considering vertices of a grid with sufficiently small cells. Note that {{formula:da1238fb-8ee9-4c6c-aea8-46c2a2e21ade}} , {{formula:785070c1-2086-43a3-96ba-a9223be686cd}} and {{formula:36588ccf-7db9-4e21-9fce-eeff0683f659}} are all continuous; hence, due to {{formula:3e4d23d6-1b52-417b-aca4-7f01088a8c10}} being compact, {{formula:37faca86-0e2b-46b6-99e8-b066d128ead0}} , {{formula:512b17bf-8cd1-4b3f-8f38-2567da0e5e99}} and {{formula:55e413c3-0e8f-465c-8b73-01ad33500c0d}} are also Lipschitz continuous. This will allow us to verify that the Expected decrease condition is satisfied by checking a slightly stricter condition only at the vertices of the discretization grid. {{figure:e1201f1e-fbcc-4955-a194-e7ec70d3502c}} The initial discretization {{formula:68f79767-7c5d-42d4-b325-6cee324b6d2b}} is also used to initialize the counterexample sets used by the learner. In particular, the learner initializes three sets {{formula:59de10b5-b518-4803-8df0-f1650c9d29f5}} , {{formula:891c8112-2cde-4412-8ca8-03dd9ca70f6d}} and {{formula:cf072ce6-8fcd-4350-924c-051719ffb843}} . These sets will later be extended by counterexamples computed by the verifier.
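A minimal sketch of the grid discretization just described, assuming the state space is an axis-aligned box and measuring the mesh in the infinity norm (variable names are illustrative):

```python
import numpy as np

def discretize_box(lo, hi, tau):
    """Grid vertices over the box [lo, hi] such that every state is within
    infinity-norm distance tau of some vertex (cell side 2*tau)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    axes = [np.arange(l, h + 2 * tau, 2 * tau) for l, h in zip(lo, hi)]
    mesh = np.meshgrid(*axes, indexing="ij")
    return np.stack([m.ravel() for m in mesh], axis=-1)

vertices = discretize_box(lo=[-1.0, -1.0], hi=[1.0, 1.0], tau=0.05)
```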
Conversely, the discretization used by the verifier for checking the defining properties of RASMs will at each iteration of the loop be refined by a discretization with a smaller mesh, in order to relax the conditions that are checked by the verifier.

Verifier
We now describe the verifier module of our algorithm. Suppose that the learner has learned a policy {{formula:5233b7c9-eff5-443f-9482-c4621271f5b1}} and an RASM candidate {{formula:c0695557-6d90-48d6-a577-f96db07881a7}} . Since {{formula:48db8ed7-5d19-4cde-906e-d43b15767d73}} is a neural network, we know that it is a continuous function. Furthermore, we design the learner to apply a softplus activation function to the output layer of {{formula:54436b2e-9563-42c4-9d8f-28b6bb8a95ae}} , which ensures that the Nonnegativity condition of RASMs is satisfied by default. Thus, the verifier only needs to check the Initial, Safety and Expected decrease conditions in Definition REF . Let {{formula:e0e46027-62e0-4cef-9160-93d946f40c13}} , {{formula:d57a501a-986f-4457-ab41-9db83e7b2d25}} and {{formula:1046da60-122d-4a0c-8fc9-9453b8a0d5db}} be the Lipschitz constants of {{formula:67ddc18c-de7f-4835-ba36-03d1d9d26d4b}} , {{formula:f407622f-d70a-46fc-9492-b8afee2f9255}} and {{formula:1cd6ffc0-fa4d-4b3e-bfd3-45e80ebe1296}} , respectively. We assume that a Lipschitz constant for the dynamics function {{formula:ca42f9d4-66ad-41a7-882e-ccd3f25eca6d}} is provided, and use the method of {{cite:f4e919c25b3647f3152599dce8059509e6792328}} to compute Lipschitz constants of the neural networks {{formula:5cd4de1a-0e60-4e08-b42f-b62805e1ae92}} and {{formula:dce10204-8029-464e-b5df-6600de7e1014}} . To verify the Expected decrease condition, the verifier collects the superset {{formula:0fc4e90c-97da-4086-9161-542269d89f97}} of discretization points whose adjacent grid cells contain a non-target state and over which {{formula:91c91b43-9afe-4526-bb90-f7fb4388ec9e}} attains a value that is smaller than {{formula:3a906f52-97a2-4a2c-bd8d-72ea06fd029e}} . This set is computed by first collecting all cells that intersect {{formula:3e13c349-8a25-4cd8-a9fa-a947b91a5a2f}} , then using interval arithmetic abstract interpretation (IA-AI) {{cite:5968871789205987d4b216f268339fc782871766}}, {{cite:9f2e559947a57d248398e4db43f22b11e7dc907f}}, which propagates interval bounds across neural network layers, in order to bound from below the minimal value that {{formula:1cff9950-cc6f-43a7-babe-b7e0c8fc4306}} attains over each collected cell, and finally collecting vertices of all cells at which this lower bound is less than {{formula:19b4be85-3b65-4814-be61-8ca2411e9395}} . The verifier then checks a stricter condition for each state {{formula:a3913101-95dd-49a9-b32b-69b59a6736b5}} : {{formula:04299fba-5f52-4d8a-b61b-4651fdb9bd81}} where {{formula:f86a7e51-63e5-4249-880b-7269816f60e7}} . The expected value in eq. (REF ) is also bounded from above via IA-AI, where one partitions the support of {{formula:cda39d0a-e9a7-40e8-a28b-6d077a842522}} into intervals, propagates intervals and multiplies each interval bound by its probability weight in order to bound the expected value of a neural network function over a probability distribution. Due to space restrictions, we provide more details on expected value computation in the Appendix and note that this method requires that the probability distribution {{formula:7a44ad9b-b6e2-4af5-bd2c-2876e06e568d}} either has bounded support or is a product of independent univariate distributions.
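To illustrate the shape of the discretized check in eq. (REF ), the sketch below verifies the stricter expected-decrease condition at a single grid vertex, bounding the expectation by a probability-weighted sum of per-cell upper bounds; `v_of`, `cell_upper_bound` and the constant names are stand-ins for the RASM network and the interval-arithmetic propagation, i.e. assumptions for illustration.

```python
def expected_decrease_holds(x, v_of, cell_upper_bound,
                            noise_cells, noise_probs, eps, tau, K):
    """Check an upper bound on E[V(f(x, pi(x), w))] <= V(x) - eps - tau*K.

    noise_cells / noise_probs : partition of the disturbance support and the
                                probability mass of each cell
    cell_upper_bound(x, cell) : stand-in for an interval-arithmetic upper
                                bound on V(f(x, pi(x), w)) for w in the cell
    K                         : Lipschitz-derived constant accounting for the
                                discretization mesh tau
    """
    upper = sum(p * cell_upper_bound(x, c)
                for c, p in zip(noise_cells, noise_probs))
    return upper <= v_of(x) - eps - tau * K
```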
In order to verify the Initial condition, the verifier collects the set {{formula:ca2f495f-b2d6-4d6c-9059-1ad95c0a7148}} of all cells of the discretization grid that intersect the initial set {{formula:73f3a2aa-e787-4a9a-bd03-f0554d9364db}} . Then, for each {{formula:fb5a2d98-a491-49fa-931f-59697b60926b}} , it checks whether {{formula:e83c6ca2-2ec5-4051-80d7-39f88935b34b}} , where the supremum of {{formula:7006cec1-9961-4737-8ddb-331d2a369386}} over the cell is bounded from above by using IA-AI. Similarly, to verify the Unsafe condition, the verifier collects the set {{formula:f369b537-d5c9-4eb0-b06c-171af71ceebc}} of all cells of the discretization grid that intersect the unsafe set {{formula:f2f274cd-0796-40fa-8a2f-aa4660cc4fd3}} . Then, for each {{formula:2a14920c-b9ee-4f0f-b685-6713807d3238}} , it uses IA-AI to check whether {{formula:29dad7f6-b793-430d-b326-d41646c5ed04}} . If the verifier shows that {{formula:3f0e85c8-a2cf-4d05-a914-635ab8d94322}} satisfies eq. (REF ) for each {{formula:606014f2-37eb-4aea-85d3-7a7d2885413d}} , eq. (REF ) for each {{formula:e1ed19eb-4adb-4c29-9d65-ea56d1bcf0a2}} and eq. (REF ) for each {{formula:6237848b-4655-436c-a4b7-879a0bd04f4a}} , it concludes that {{formula:49aff507-dc9d-42d6-9d5a-cd40950e4272}} is an RASM. Otherwise, if a counterexample {{formula:0309168d-78e2-4c7c-834e-0e2c109f68eb}} to eq. (REF ) is found and we have {{formula:1de3742c-d631-42bf-a387-2e17f343e1f9}} and {{formula:9227d300-9db1-4739-9f94-217e7141439b}} , it is added to {{formula:7c9753cc-ad0b-4ed6-99aa-95684ad5db45}} . Similarly, if counterexample cells to eq. (REF ) and eq. (REF ) are found, all their vertices that are contained in {{formula:5634684b-f025-4570-bd20-f6c38fb49d8d}} and {{formula:5662ac04-03e2-4b5f-91ee-0e8b62631233}} are added to {{formula:341c6816-40e8-44d2-a605-7e80aa9ac05a}} and {{formula:f2a08d90-31a2-4e3a-9c74-b0bf4dc86bea}} , respectively. The following theorem shows that checking the above conditions is sufficient to formally verify whether an RASM candidate is indeed an RASM. The proof follows by exploiting the fact that {{formula:5f96bfb1-ded0-4643-8e84-2a5932aefce6}} , {{formula:bd2b8ca7-8251-4de8-acef-d77feed04f61}} and {{formula:132cc7de-a6ac-4b9c-b2d6-fca12ef0e45d}} are all Lipschitz continuous and that {{formula:77ff194c-394f-490e-b960-93a49c096cc4}} is compact, and we include it in the Appendix.

Theorem 2. Suppose that the verifier verifies that {{formula:5f4e153f-10b0-4ebb-adbf-e80dbde87e5d}} satisfies eq. (REF ) for each {{formula:d4620de0-1352-44c3-8cf5-a878229285eb}} , eq. (REF ) for each {{formula:fc17d0d3-6fd1-4030-b77d-b0e5cb6f50b8}} and eq. (REF ) for each {{formula:4fa5fbab-e039-4999-947d-318de4073eed}} . Then the function {{formula:61ca2bc5-c0dd-4193-9d3c-2a28a88efd69}} is an RASM for the system with respect to {{formula:0d9f7714-92b3-4ef7-861f-53fbc4b2eb02}} , {{formula:d4782d97-d7b5-4562-afe1-6919e23caadb}} and {{formula:fd71d9f6-2791-4959-956c-d302829de492}} .

Learner
A policy and an RASM candidate are learned by minimizing the loss function {{formula:a84ee3a4-40d3-4c57-a51c-be9f4b89f538}} . The first three loss terms are used to guide the learner towards learning a true RASM by forcing the learned candidate towards satisfying the Initial, Safety and Expected decrease conditions in Definition REF .
They are defined as follows: {{formula:0e193703-a9eb-4279-822c-a3e960606093}} Each loss term is designed to incur a loss at a state whenever that state violates the corresponding condition in Definition REF that needs to be checked by the verifier. In the expression for {{formula:24705683-af98-4f30-9117-18472aa1505e}} , we approximate the expected value of {{formula:0976fa35-6d1c-4031-86a3-70ed07eb787f}} by taking the mean value of {{formula:0e5caf82-7b1c-4309-96eb-4c2e3438ec4b}} at {{formula:ed03c536-85ef-492e-917d-6d388b63537a}} sampled successor states, where {{formula:ef2d8611-e69e-4c7a-8d10-67abe5c0168f}} is an algorithm parameter. This is necessary as it is not possible to compute a closed-form expression for the expected value of a neural network {{formula:0c45b442-f4bf-408e-96c7-752180ce52a2}} . The last loss term {{formula:1e0c86fd-7879-4798-8dba-59e41e96e8db}} is a regularization term used to guide the learner towards a policy and an RASM candidate with Lipschitz constants below a tolerable threshold {{formula:f486efdb-7a50-4ae3-85b8-3cf236dd33ab}} , with {{formula:1f3b1e7e-4f94-429a-a1e4-c35e55182fac}} being a regularization constant. By preferring networks with small Lipschitz constants, we allow the verifier to use a wider mesh, which significantly speeds up the verification process. The regularization term for {{formula:61b46412-7298-48ff-a217-c4b956f422ac}} (and analogously for {{formula:aff89ecb-60f7-485a-90fa-affaebc20181}} ) is defined via {{formula:62004d09-dd17-4553-9304-fe3d32c2ab55}} where {{formula:d3bd084a-2180-4cb7-b82a-81043a58165d}} and {{formula:5f0f8384-c34e-4037-ac2b-6c89f010a8b8}} are the weight matrices and bias vectors for each layer in {{formula:60cee136-eb38-4c1d-9951-73e0a79b9673}} . Finally, in our implementation we also add an auxiliary loss term that does not enforce any of the defining conditions of RASMs; rather, it is used to guide the learner towards a candidate that attains its global minimum at a state contained within the target set {{formula:7e099c46-9fbe-4144-bc5b-0ebe35f6f689}} . We empirically observed that this term sometimes helps prevent the updated policy from diverging from its objective to stabilize the system. Due to space restrictions, details are provided in the Appendix. We remark that the loss function is always nonnegative but is not necessarily equal to 0, even if {{formula:6eef8d6f-05a8-4b42-b8bb-4167a5e6da07}} satisfies all conditions checked by the verifier and the Lipschitz constants are below the specified thresholds. This is because the expected values in {{formula:90dd3a26-3271-471c-a471-a1d1d0f4e253}} are approximated via sample means. However, in the following theorem we show that in this case {{formula:c25f5cd7-8d44-4935-8b35-7ae206d180f9}} with probability 1 as we add independent samples. The claim follows from the Strong Law of Large Numbers, and the proof can be found in the Appendix.

Theorem 3. Let {{formula:d75550fd-f470-47ed-a59d-04295cefa8c2}} be the number of samples used to approximate expected values in {{formula:d6ffc173-193a-48e8-9216-4aa52c2e1e4d}} . Suppose that {{formula:3af0af70-78d2-4897-9442-377158528265}} satisfies eq. (REF ) for each {{formula:a99971ad-61cb-48e0-8777-1501b81820df}} , eq. (REF ) for each {{formula:6c009da3-c9fd-4bb8-b624-58a99dd05412}} and eq. (REF ) for each {{formula:4376157f-0e25-47a4-829e-74c93f5a22fa}} .
Suppose further that the Lipschitz constants of {{formula:50f4f8c4-5b37-4d86-be0f-fff0bc511bb7}} and {{formula:f0656479-1ba5-4143-a80b-21f81d1632a3}} are below the thresholds specified by {{formula:989d8d5f-66fa-4985-8b69-c2ffa712d1a0}} and {{formula:0ea2ecbe-18b3-4cef-82e6-becec27d8283}} , and that the samples in {{formula:88da2e4e-b8cc-4a5d-ba45-6d6fa50332c8}} are independent. Then {{formula:413810d2-16db-4324-838d-a7be73ccffdd}} with probability 1. {{table:b005aaa4-1025-496b-be3c-85a1efb91301}}

Experiments
We experimentally validate our method on three non-linear RL environments. Since no available baseline provides reach-avoid guarantees for stochastic systems over the infinite time horizon, and since sampling- and discretization-based approaches can only reason over finite time horizons, we frame our experiments as a practical validation of the algorithm. We will make our JAX {{cite:c91411464644811bbb9d928c4ae5556d13a55ce9}} implementation publicly available. Our first two environments are a linear 2D system with non-linear control bounds and the stochastic inverted pendulum control problem. The linear 2D system is of the form {{formula:6cdfefa0-32e0-4fe1-83d4-bfbe5986f452}} , where {{formula:7a268c53-55f6-4f11-bc95-5832d91cd7db}} limits the admissible action of the policy and {{formula:2b8069e2-fb5f-40ab-b6c9-b882c02c6a7d}} is sampled from a triangular noise distribution. The inverted pendulum environment is taken from the OpenAI Gym {{cite:0aff76552dbba2a7434b3490bcef3416c1aa385e}} and made more difficult by adding noise perturbations to its state. Our third environment concerns a collision avoidance task, in which the objective is to navigate an agent to the target region while avoiding crashing into one of two obstacles. All environments impose bounds on the admissible actions. Further details of all environments can be found in the Appendix. The policy and RASM networks consist of two hidden layers (128 units each, ReLU). The RASM network has a single output unit with a softplus activation. We run our algorithm with a timeout of 3 hours. The goal of our first experiment is to empirically evaluate the ability of our approach to learn probabilistic reach-avoid policies and to understand the importance of combining reachability with level set reasoning towards safety in stochastic systems. For all tasks, we pre-train the policy networks using 100 iterations of PPO. To evaluate our approach, we run our algorithm with several probability thresholds and report the highest threshold for which a policy together with an RASM is successfully learned. In order to understand the importance of simultaneous reasoning about reachability and level sets, we then compare our approach with a much simpler extension of the method of {{cite:839dc605cbf08ec2c3c6d7fabee6f502f52ebc4b}}, which learns RSMs to certify probability 1 reachability but does not consider any form of safety specification. In particular, we run the method of {{cite:839dc605cbf08ec2c3c6d7fabee6f502f52ebc4b}} without the safety constraint and, in case a valid RSM is found, we normalize the function such that the Nonnegativity and the Initial conditions of RASMs are satisfied. We then bound from below the smallest value that the RSM attains over the unsafe region, and extract the corresponding reach-avoid probability bound according to the Safety condition of RASMs. Note that, even though this extension also exploits the ideas behind the level set reasoning in our RASMs, it first performs reachability analysis and only afterwards considers safety.
We remark that there is no existing method that provides reach-avoid guarantees for stochastic systems over the infinite time horizon, i.e. there is no existing baseline to compare against; thus, we compare our level set reasoning with the extension of {{cite:839dc605cbf08ec2c3c6d7fabee6f502f52ebc4b}}, which is the closest related work. Table REF shows the results of our first experiment. In particular, the third column shows that our method successfully learns policies that provide high-probability reach-avoid guarantees for all benchmarks. On the other hand, comparison to the second column shows that the simultaneous reasoning about reachability and safety allowed by our RASMs provides significantly better probabilistic reach-avoid guarantees than when such reasoning is decoupled. Figure REF visualizes the RSM computed by the baseline and our RASM. {{table:54023c3e-456e-40c3-a21c-5e87511b9a31}} In our second experiment, we study how well our algorithm can repair (or fine-tune) an unsafe policy. In particular, we pre-train the policy network using only 20 PPO iterations. We then run our algorithm with fixed policy parameters {{formula:81a53ee7-fd30-44cf-9943-18c99296ab72}} , i.e. we only learn an RASM in order to verify a probabilistic reach-avoid guarantee provided by a pre-trained policy. Next, we run our Algorithm  with both {{formula:995053a9-3cdf-49c2-ba76-04bcb3dfef82}} and {{formula:08c9c45b-1e68-429d-9285-b8d97f251a7d}} as trainable parameters. Table REF shows that, compared to a standalone verification method, our algorithm is able to repair unsafe policies in practice. However, the inability to repair the inverted pendulum policy illustrates that a decent starting policy is necessary for our algorithm, emphasizing the importance of policy initialization. Since the Policy Initialization step in Algorithm  initializes the policy by using PPO with a reward function that encodes the reach-avoid specification, our second experiment also demonstrates that a policy initialized by using RL on a tailored reward function is not sufficient to learn a reach-avoid policy with guarantees, and that the learned policy requires "correction" in order to provide reach-avoid guarantees. The "correction" is achieved precisely by keeping the policy parameters trainable in the learner-verifier framework and fine-tuning them.

Conclusion
In this work, we present a method for learning controllers for discrete-time stochastic non-linear dynamical systems with formal reach-avoid guarantees. Our method learns a policy together with a reach-avoid supermartingale (RASM), a novel notion that we introduce in this work. It solves several important problems, including control with reach-avoid guarantees, verification of reach-avoid properties for a fixed policy, and fine-tuning of a given policy that does not satisfy a reach-avoid property. We demonstrated the effectiveness of our approach on three RL benchmarks. An interesting future direction would be to study certified control and verification of more general properties in stochastic systems. Since the aim of AI safety and formal verification is to ensure that systems do not behave in undesirable ways and that safety-violating events are avoided, we are not aware of any potential negative societal impacts of our work.

Acknowledgments
This work was supported in part by the ERC-2020-AdG 101020093, ERC CoG 863818 (FoRM-SMArt) and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 665385.
Research was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. The research was also funded in part by the AI2050 program at Schmidt Futures (Grant G-22-63172) and Capgemini SE.

Appendix

Overview of Martingale Theory

Probability theory
A probability space is a triple {{formula:76d93ef2-6b73-4edf-96e9-10fa8f2f8095}} , where {{formula:65d27f4b-9d8a-4c5b-bd64-685e6514ef6f}} is a non-empty sample space, {{formula:ec857c86-fe9a-4408-9e4a-38994472b40d}} is a {{formula:adba4eb3-d8fd-4b91-9726-fcf52b607ae3}} -algebra over {{formula:e6846394-2192-4b07-9719-8245f3f85a2e}} (i.e. a collection of subsets of {{formula:111949f4-60a8-4b24-b147-ccb8705844d7}} that contains the empty set {{formula:5b4dc7ad-b79a-410e-985b-28a9f1cf9cfb}} and is closed under complementation and countable union operations), and {{formula:9f6face1-c4ee-47a7-81f4-89ac3ad769cc}} is a probability measure over {{formula:acc6254a-9925-412f-aa33-e0504f6e1fa6}} , i.e. a function {{formula:8ed3d747-3376-40da-9b46-712abff7c9c5}} that satisfies Kolmogorov axioms {{cite:c9db9e32e98e856e87fc8af30e817adf15039983}}. We call the elements of {{formula:0f723429-7181-4e39-b49a-d8facaa13e96}} events. Given a probability space {{formula:6bba27d9-2b28-4152-9d04-36e3f9af42c0}} , a random variable is a function {{formula:1be7f4d7-de42-49b5-b3cf-d0465c18c71e}} that is {{formula:b6dd5543-ca96-451c-b24e-d98926c10a5f}} -measurable, i.e. for each {{formula:f6901ffd-c52b-483c-9273-9f1dd2c865dc}} we have that {{formula:8e767d0c-30b3-40f3-ad58-a9c8ee4f0d83}} . We use {{formula:33cbb964-40e6-41d7-822a-b7778b2c104c}} to denote the expected value of {{formula:2286b603-6da6-43bb-b4b4-b5af7af7bf10}} . A (discrete-time) stochastic process is a sequence {{formula:3629f325-5ab0-4d37-9408-825a4f931593}} of random variables in {{formula:46eb90fd-3edb-4e44-a926-be6e85c46603}} .

Conditional expectation
In order to formally define supermartingales, we need to introduce conditional expectation. Let {{formula:46f5ff81-9697-46b7-8fcc-5713ca21b8d2}} be a random variable in a probability space {{formula:c9ddfedf-fdc7-485a-ba0b-f35f5c4ecd4b}} . Given a sub-{{formula:99996afa-4f06-4d2e-9855-5b0abf3a9290}} -algebra {{formula:eb0e5cf8-9736-4571-920f-c30af046caaf}} , a conditional expectation of {{formula:8a428b1a-e7b1-4335-a072-e0f6eb29fb7a}} given {{formula:98e68eba-3bb0-4e3d-b14f-4ae10f0f7013}} is an {{formula:b307c587-3c97-44ff-ae9d-561fd7bcea6b}} -measurable random variable {{formula:e1bb07cc-0222-41d9-aed7-be20d081a0e1}} such that, for each {{formula:a2e273d9-d1a7-426b-9ac9-49bbceb49eb9}} , we have {{formula:d1f1b016-8300-4cd1-b6e7-ff4e7e31455b}} The function {{formula:43ef8cfd-260f-4ce9-b45b-12fc7d771599}} is an indicator function of {{formula:34d366a5-b9dd-4587-a7a3-cd001ee0cb77}} , defined via {{formula:086971bf-c05f-4629-a3c5-2dbe328f14e5}} if {{formula:8b5a3f3e-b836-4e33-9234-342ca992dccb}} , and {{formula:1155ba81-66a8-447a-85c3-f564778e71e8}} if {{formula:78c4c67a-78a0-4253-bacc-ef67c88b40c3}} .
Intuitively, the conditional expectation of {{formula:1013e8a2-22fe-451c-a10e-d03794edb235}} given {{formula:00097b03-816b-46ae-9ff9-d9fe33af3b21}} is an {{formula:47e93826-ff03-48d2-954f-0742916c23fe}} -measurable random variable that behaves like {{formula:14c27760-e6fa-49bd-a471-9822bbd9183b}} whenever its expected value is taken over an event in {{formula:11487b11-ce1e-4553-bb35-a9f2842a3492}} . It is known that a conditional expectation of a random variable {{formula:57b8dbda-aa35-43d1-a11d-112197a6ca2d}} given {{formula:4bb3cebf-d477-4606-96e2-52238dedb251}} exists if {{formula:c7e1d79c-87b8-43a0-9d43-fcfa8ea2c884}} is real-valued and nonnegative {{cite:c9db9e32e98e856e87fc8af30e817adf15039983}}. Moreover, for any two {{formula:0cd3eee1-5f08-4155-bafc-ad734adbcee0}} -measurable random variables {{formula:a8f86e80-6b06-4084-b7a8-dc97bb04fa20}} and {{formula:0f5e7bc3-e288-4a38-914d-1fa7dc3131d9}} which are conditional expectations of {{formula:2e7692a5-c1a8-4624-b306-e303a2e37524}} given {{formula:eff882a8-2c1d-4abf-9c17-bafa860d6fad}} , we have that {{formula:0fc26b3c-2e29-4107-851f-72fbbdb05a5a}} . Therefore, the conditional expectation is almost-surely unique, and we may pick any such random variable as a canonical conditional expectation and denote it by {{formula:b0cb0ac3-e124-40e1-ba01-0a6086fe9d0e}} .

Supermartingales
We are now ready to define supermartingales. Let {{formula:f0b4394e-8031-4514-8940-fbee52372aac}} be a probability space and {{formula:be011a33-b386-4925-aacf-a3e7c55be3ea}} be an increasing sequence of sub-{{formula:27b55622-dcac-44a7-ba0c-7bf4384513af}} -algebras in {{formula:6f6d617d-4df4-475c-8e78-593927e6fefc}} , i.e. {{formula:7e791722-63e3-4cdb-aa77-896e10b3a753}} . A nonnegative supermartingale with respect to {{formula:64779f29-04df-4134-8ab3-926f723b0d52}} is a stochastic process {{formula:969f5050-693e-4d9d-9347-2692e6fe9f23}} such that each {{formula:35c3a99d-8d97-44fe-90d8-238be9ab594e}} is {{formula:b6c16b61-08c6-44cc-b3c9-115a0843ddb8}} -measurable, and {{formula:9304fdaf-9187-4f2e-8ca7-03e6b8ec664d}} and {{formula:81f5f650-545d-43f8-b937-959c5ad33ad5}} hold for each {{formula:9b97dd43-c94f-4366-8854-67dd5125ead8}} and {{formula:db5547f1-004f-4142-b1b2-ce28f46872db}} . Intuitively, the second condition says that the expected value of {{formula:e2677032-0556-442a-939a-c3ba734ea885}} given the value of {{formula:a364919b-444d-4d15-8b5c-5484da8fc62d}} has to decrease, and this requirement is formally captured via conditional expectation. We now present two results that will be key ingredients in the proof of Theorem 1. The first is Doob's Supermartingale Convergence Theorem (see {{cite:c9db9e32e98e856e87fc8af30e817adf15039983}}, Section 11), which shows that every nonnegative supermartingale converges almost-surely to some finite value. The second theorem (see {{cite:4c3e67ccf5671a0c29190e5cbc5dce57ab0e0dd0}}, Theorem 7.1) provides a bound on the probability that the value of the supermartingale ever exceeds some threshold, and it will allow us to reason about both probabilistic reachability and safety. This is a less standard result from martingale theory, so we prove it below. In what follows, let {{formula:fe4bbc83-c663-4f89-b8cd-25e48825956c}} be a probability space and {{formula:cf7e200d-ae25-4b8b-aff5-4f996c381265}} be an increasing sequence of sub-{{formula:25cb5bff-9bc6-41f3-970a-d51051552f70}} -algebras in {{formula:2bcf05cd-9edf-4e12-93f1-54dabcd98c83}} .
Theorem 4 (Supermartingale convergence theorem). Let {{formula:2557733b-b507-4d87-8c11-d97b16be01ca}} be a nonnegative supermartingale with respect to {{formula:e91fc875-9063-42d7-b3b3-43e7993f8a9c}} . Then, there exists a random variable {{formula:123db81c-85a2-41ae-81a3-0c2eab0ca28a}} in {{formula:2c61b0f6-90bb-4f1c-a7d4-52edbce7de14}} to which the supermartingale converges with probability 1, i.e. {{formula:4685d064-c6fd-4971-9a04-4e73ec624c1b}} .

Theorem 5. Let {{formula:931d3e9d-497f-417f-b39d-b8812326b795}} be a nonnegative supermartingale with respect to {{formula:26d82bd2-3718-4929-8834-552323b636be}} . Then, for every {{formula:3e84443b-e92e-4347-8ff5-3399f8e31cb7}} , we have {{formula:6775a214-5cca-4199-9f7e-410a8d624a02}}

Proof. Fix {{formula:357b9b4c-89ee-49f8-93b6-e2c003d70552}} . Define a stopping time {{formula:e2b6d2e3-19b1-4b08-99c7-47f7f886d394}} via {{formula:b5aaa030-e05f-4a02-b1b7-fa19613a2498}} . Then, for each {{formula:3d0d397b-2149-44c5-847e-e6093915a7c4}} , define a random variable {{formula:21296e9b-a05b-42ae-a4ee-59e118d245bb}} where {{formula:f45e925d-179e-4963-b39d-1a56db0848f7}} is a random variable defined via {{formula:e5c1f2db-1bb3-4f1c-8975-ff73fa40ca74}} for each {{formula:0040f5af-0f9f-47cd-8cc2-3aa8901ec630}} , and {{formula:2ba43ad5-1aea-483f-a1af-321aa77f010b}} is again the indicator function that we defined in Section . It is a classical result from martingale theory that, for any {{formula:34a9ded5-6299-4af9-8f83-af35e06c526d}} , we have {{formula:fe93956e-d1a0-432f-a83d-81f097b9105a}} (see {{cite:c9db9e32e98e856e87fc8af30e817adf15039983}}, Section 10.9). Hence, in order to prove the desired inequality, it suffices to prove {{formula:1e87ba06-2b1e-4362-b8fd-afcafd2d6767}} . To prove the desired inequality we observe that, for each {{formula:3875494e-37c2-4fbb-9a75-099ef459a6d0}} , we have that {{formula:32384933-110a-40c5-a83e-188b2567adc0}} where in the first inequality we use the fact that {{formula:72932272-1ecd-4086-b48b-1d0cdb2401a7}} , in the second inequality we use the fact that each {{formula:2ca5b0bd-ba01-4bd5-8c07-10c3a94e3be2}} is nonnegative, and in the last equality we use the fact that {{formula:497b612c-64e8-407d-9b60-cdf883cc5e92}} . Finally, {{formula:a55e03a9-6932-4bf6-b60a-5d39b1a7b226}} is a sequence of probabilities of events that are increasing with respect to set inclusion, so by the Monotone Convergence Theorem (see {{cite:c9db9e32e98e856e87fc8af30e817adf15039983}}, Section 5.3) it follows that {{formula:b00566fb-ff35-4dcf-abea-8fa7005835ba}} Hence, by taking the supremum over {{formula:c28799d4-09ec-4057-bf24-c842c83eafdb}} of both sides of eq. (REF ), we conclude that {{formula:e5d53b84-8040-4e57-b63f-855b3313b7d0}} , as desired. This concludes the proof of Theorem REF since {{formula:82476f0b-234c-4e64-9472-4ea7a6f45a71}}

Proof of Theorem 1
We now prove Theorem 1. Fix an initial state {{formula:1df96353-8913-40e7-9c68-d5b84acc1985}} so that we need to show {{formula:fb2454ad-9617-4ff0-a5a9-9b4aa0d7f858}} . Let {{formula:3962e6ba-2bca-49a1-b41f-0a767992460d}} be an RASM with respect to {{formula:6f213b62-99b2-4672-a782-5d8c11d8267f}} , {{formula:0a542f44-f156-4337-bba3-2e6c5577ca45}} and {{formula:16a27859-bd53-4f0a-b4f1-3d9d90b20376}} whose existence is assumed in the theorem.
First, we show that {{formula:a051d3e5-b4f3-4280-a1aa-8aa723199323}} gives rise to a supermartingale in the probability space {{formula:ce8ef495-bc42-4227-9eae-1c13eac04e16}} of all trajectories of the system that start in {{formula:55dac0df-e512-4e23-b9f8-760ab4bfe2d9}} . Then, we use Theorem REF and Theorem REF to prove probabilistic reachability and safety. For each time step {{formula:e8f425fa-1a93-411e-800e-01c1694df0b6}} , define {{formula:14a0737c-2940-4dc9-b23d-78943cab2fed}} to be a sub-{{formula:4c005f19-bd96-4cdf-aeef-c0ad9c0a6ba8}} -algebra that, intuitively, contains events that are defined in terms of the first {{formula:a8d494ba-1f73-42f3-873a-caeea42a361a}} states of the system. Formally, for each {{formula:54a2c1b3-041c-4651-a202-f91b724bf238}} , let {{formula:ae6c1170-60c6-420f-9a63-0a56f0213106}} assign to each trajectory {{formula:9583eaa4-7540-4592-a380-03d4025a1f64}} the {{formula:964db14c-e555-4319-8688-193304990386}} -th state {{formula:5bfeae52-d387-464d-9c83-332e45b598f2}} along the trajectory. We define {{formula:d948e32f-1121-4145-85b5-62c0c0d8525f}} to be the smallest {{formula:c4e553b7-f990-4ece-945c-eb21b7ed92cc}} -algebra over {{formula:1eaa35f6-c6f1-4304-9ee9-e2af5a9b1ce2}} with respect to which {{formula:2abe972c-40d1-4efe-8aa4-670fcb680736}} are all measurable, where {{formula:1d9f8282-af10-4ac1-96e6-603bb4280a8a}} is equipped with the induced subset Borel-{{formula:34a313b8-5861-4c7e-921e-4ccaaad9a0c0}} -algebra. The sequence {{formula:0c744369-7ef4-4fc0-babd-bf12297c0539}} is increasing with respect to set inclusion. Now, define a stochastic process {{formula:f0885253-b3c9-4033-b0bf-c669dfd6944f}} in the probability space {{formula:d2642b68-bb34-4382-905d-8d4039a8e175}} via {{formula:8c52bc08-3493-4994-87aa-4ae09583fe87}} for each {{formula:6d96145f-1ffb-45ce-9e5a-c4d9abfc9901}} and a trajectory {{formula:23a13dbf-d6ed-4b56-9f01-41b7179548c7}} . In other words, the value of {{formula:c9bba545-bfb0-4412-bd39-92c90da47ae6}} is equal to the value of {{formula:7a871440-431f-4bbf-80ce-10e91eadd21c}} at {{formula:055ff757-ac32-44d0-a0e5-8c580501faf0}} , unless either the target set {{formula:48b31c7f-da35-4c3a-91e6-39a3ad7274cf}} has been reached first in which case we set all future values of {{formula:3565f6f6-6fd3-4352-ab6e-217635d92274}} to 0, or a state in which {{formula:34adc9ec-45cd-43bf-990e-268067ec4fed}} exceeds {{formula:c07d3bf7-a788-4fc6-8f5e-ac8352d28859}} has been reached first in which case we set all future values of {{formula:fc2daf62-1c50-4b3e-9cc7-b4c8dea0a0a2}} to {{formula:34965afe-9938-4e97-b5fa-22a6777107e1}} . We claim that {{formula:d6843861-6b12-41ff-9654-4c60d7720274}} is a nonnegative supermartingale with respect to {{formula:b4541f4a-6333-4c8a-99be-3a217f330f35}} . Indeed, each {{formula:b9a835d1-d3b3-4a88-96ca-f887cf6f8698}} is {{formula:9645395a-5d38-424c-9c03-96b0febb6ca7}} -measurable as it is defined in terms of the first {{formula:71a068fc-f37d-4c10-9068-d6061f0d43c3}} states along a trajectory. It is also nonnegative as {{formula:0e98f7db-6640-493f-810e-46b40ad47bef}} is nonnegative by the Nonnegativity condition of RASMs. 
Finally, to see that {{formula:f62ed1d0-12a3-402b-a3f9-01d3c910e737}} holds for each {{formula:9427cfe2-160b-48d8-a928-22dd73b59a36}} and {{formula:8777dc9d-793e-4e40-adcc-e399da614cef}} , we consider three cases: If {{formula:e730bfae-a39f-47a8-9a48-e718aac091a4}} and {{formula:9f7d5aa6-a8e2-4e30-821c-ddd15e4ccfe0}} for each {{formula:1c3d6286-f2cb-47aa-8881-c11a6e525e96}} , then {{formula:52225c6a-0ac4-41b0-a7e9-729587d0063a}} {{formula:538c1400-cd56-4eab-954d-8990e3c7ac5f}} Here, the first equality follows by the law of total probability, the second equality follows by our definition of each {{formula:6e703aa1-82ec-467a-a158-48182b6d8698}} , the third inequality follows by observing that {{formula:e8e77c8f-9804-42f6-8032-ad200e7dcdc7}} in this case, the fourth equality is just the sum of expectations over disjoint sets, and finally the fifth inequality follows by the Expected decrease condition in Definition 1 since {{formula:fd3f232f-6279-4dd2-a919-863286e15eb8}} and {{formula:5ef128c3-1430-4757-9ceb-fa70dc7a1291}} , by the assumption of this case. If {{formula:81094214-6306-4db7-9406-203809126226}} for some {{formula:0a76179b-83dd-4f60-b253-38df802e8fcd}} and {{formula:4272b4d5-16d3-470b-9e45-d64d796d8693}} for all {{formula:9f9f51c8-1498-473c-afce-bebf61fa4e51}} , then we have {{formula:6fb25afb-68e3-4b76-b77a-49d627aa97d0}} . Otherwise, we must have {{formula:8d7f6114-8934-47ce-8e5b-48c4a121e977}} and {{formula:9d77b1d9-ff84-494b-9a13-12122616a915}} for some {{formula:6e3be4ef-5c68-4f70-b619-7f878ceddc46}} , thus {{formula:28235766-f335-40f6-896f-795d46f75867}} . Hence, we have proved that {{formula:a277d7da-bef4-4521-b977-983ce9c6162e}} is a nonnegative supermartingale. Now, by Theorem REF it follows that the value of the nonnegative supermartingale {{formula:8a8b8660-ed19-4246-9d65-a632e1e8f2b1}} converges with probability 1. In what follows, we show that {{formula:5d050333-4ecd-46b1-97cf-ab2233d834de}} with probability 1 converges to, and reaches, either 0 or a value that is greater than or equal to {{formula:84c44e02-708d-4ab5-8357-c7211327f5dd}} . To do this, we use the fact that the Expected decrease condition of RASMs enforces the value of {{formula:773f15d4-2399-4e99-9030-34879e5e10a0}} to decrease in expected value by at least {{formula:f4c60d35-38dc-4883-a804-4b2797a9561d}} after every one-step evolution of the system in any non-target state at which {{formula:7af50a5b-0b1c-436f-be23-83144ab68b65}} . Define the stopping time {{formula:343d3e51-3944-4976-8f53-94f74882b540}} via {{formula:5411dabc-c947-4e40-b70c-5651b51c5aad}} Our goal is then to prove that {{formula:e4dfb224-ebd6-4c7f-a858-8d3dcfb67479}} . Using the argument in the proof that {{formula:899dd18d-6fa0-47c2-a8e5-9aed51f66a0a}} is a nonnegative supermartingale (in particular, the proof of the supermartingale property in Case 1), we can in fact deduce a stronger inequality {{formula:e4302415-7308-4f68-979c-afef350470a7}} for each {{formula:0d150111-4380-48ae-9f87-4bff12281656}} . But now, we may use Proposition REF stated below to deduce that {{formula:21b76706-cfd9-4339-b161-6b0cc37731f4}} , which in turn implies that {{formula:a62e22b3-fa82-4c83-b32d-7863a8ffb7d8}} , as desired. This concludes the proof. The following proposition states a result on probability 1 convergence of ranking supermartingales (RSMs). 
RSMs are a notion similar to our RASMs that were first introduced in {{cite:8006df7c51be89fff42ed2d825a0529fdc335ae9}} in order to study termination in probabilistic programs, and were used in {{cite:839dc605cbf08ec2c3c6d7fabee6f502f52ebc4b}} to formally verify almost-sure stability and reachability in stochastic control systems. We note that RASMs generalize RSMs in the sense that RSMs coincide with RASMs in the special case when the unsafe set is empty and we only consider a probability 1 reachability specification, i.e. {{formula:674b2bff-3edc-4613-b6d6-cde7d43dffa0}} . Proposition 1 ({{cite:8006df7c51be89fff42ed2d825a0529fdc335ae9}}) Let {{formula:19773867-2752-4523-a5ac-3ea4af0c94ba}} be a probability space, let {{formula:46dee01f-1647-48b5-b69a-16c0a4f032f5}} be an increasing sequence of sub-{{formula:3346e346-6461-4da6-95b4-8eabac68b175}} -algebras in {{formula:a461c01f-965e-478c-9e87-282a288e14d0}} and let {{formula:e09f0ab4-7b12-40da-9a6b-d2cf810e272b}} be a stopping time with respect to {{formula:28742f18-2fc7-41ea-b844-83adbb2fd4fe}} . Suppose that {{formula:990259ac-288e-4630-8af4-69095b84999b}} is a stochastic process such that each {{formula:3cc5c0b6-f08a-4f99-9dc3-51e846af15ec}} is nonnegative and we have that {{formula:c9c28b1e-accf-47be-9598-2be73e92dacc}} holds for each {{formula:53a22b39-061f-4b19-b8a6-b773b87f2a97}} and {{formula:6c240b6c-3ee8-4c01-871e-1f22f2e5fce0}} . Then {{formula:89510af9-5d7c-443c-8c53-0ea5d123775d}} . Finally, by using Theorem REF for the nonnegative supermartingale {{formula:529d43a2-9717-49ab-87c5-70e7e22eb829}} and {{formula:d8dbd613-81dc-4e8f-8605-e4c825b81ef0}} , it follows that {{formula:6c1ba7ac-4279-49f6-8b08-09c49f27beb0}} . The second inequality follows since {{formula:459859e4-98de-490c-bbda-2a19143caa75}} for every {{formula:029e62bb-e1a0-4090-96c1-aaf125921fc8}} by the Initial condition of RASMs. Hence, as {{formula:e0d32593-7173-432c-8c35-a674031a1b19}} with probability 1 either reaches 0 or a value that is greater than or equal to {{formula:c0980662-cdd3-4e7a-82d4-7fb536829e74}} , we conclude that {{formula:5127c930-decd-4f5b-934b-210e25f8e133}} reaches 0 without reaching a value that is greater than or equal to {{formula:8a0b06ba-416f-4080-ba3e-57a5e440f9b4}} with probability at least {{formula:648da8a9-ce7f-4762-9ec5-625d56064ab7}} . By the definition of each {{formula:c869d89f-862f-408c-9c74-5d582cbea42c}} and by the Safety condition of RASMs, this implies that with probability at least {{formula:85ec10ed-8d19-4010-9605-0fcf46435790}} the system will reach the target set {{formula:8f133024-ceb2-49c3-addd-5b49fa1eaf33}} without reaching the unsafe set {{formula:b0acb16d-47e7-4e33-9505-e698bccaa65d}} , i.e. that {{formula:3339a8a4-33b5-4fb3-852e-a4a93c2bb2ef}} . Computation of Expected Values of Neural Networks We now describe the method for bounding the expected value of a neural network function over a given probability distribution. Let {{formula:cc0130d4-6c93-4117-b76b-c39dd755eb78}} be a fixed state, and suppose that we want to bound the expected value {{formula:0e3b5fa5-f962-4a38-89ee-3f7716cba1b1}} . We partition the disturbance space {{formula:a2303126-07f4-408c-b79d-f8a1c2b93e96}} into finitely many cells {{formula:33b40d06-e799-49de-880b-7e5a4e02f8c1}} . Let {{formula:14c098c1-c99a-4bd9-bba9-4ee2842602b6}} denote the maximal volume of any cell in the partition (with respect to the Lebesgue measure). 
The expected value is bounded via {{formula:1b34a7b4-e321-4fcc-9393-183761ae20d1}} where {{formula:7d95bbdd-a1d6-406c-92a1-e764c28763e9}} . Each supremum is then bounded from above via interval arithmetic by using the method of {{cite:9f2e559947a57d248398e4db43f22b11e7dc907f}}. Note that {{formula:7f839e38-5a39-471a-a6e6-e1773dfd8f87}} is not finite if {{formula:d64fdaa3-bd42-416e-947b-522770304dbc}} is unbounded. In order to allow expected value computation for an unbounded {{formula:3a61b140-93ea-40b3-bd3c-85e4e8977ab0}} under the assumption that {{formula:5b733fe1-62b2-441e-8926-3799bfc31241}} is a product of univariate distributions, the method first applies the probability integral transform {{cite:2c0d98b8bdb64390a754f53e9dbd53dc0fa13fb1}} to each univariate probability distribution in {{formula:98708254-3e2f-4e02-bc39-34422903c941}} in order to reduce the problem to the case of a probability distribution of bounded support. Proof of Theorem 2 Suppose that the verifier verifies that {{formula:3de604c5-c93c-46ab-b085-a6b19d250d83}} satisfies eq. (1) for each {{formula:50416982-36f0-4b1b-96f9-55573816e47e}} , eq. (2) for each {{formula:a74fbe7a-7703-44b1-8b8f-26c9a79ddd50}} and eq. (3) for each {{formula:2061a794-b610-4379-9815-97e89988c84c}} . The fact that the Initial and the Unsafe conditions in Definition 1 of RASMs are satisfied by {{formula:71f98c6e-6edc-42a8-823b-dc79984db110}} then follows from the correctness of interval arithmetic abstract interpretation (IA-AI) of {{cite:9f2e559947a57d248398e4db43f22b11e7dc907f}}. Thus, we only need to show that {{formula:e9e46e79-72c9-4a6a-a8f3-0f37d89a22e8}} satisfies the Expected decrease condition in Definition 1. To show that {{formula:f22f88d2-db43-4eb5-a99a-48c3a72c6ea3}} satisfies the Expected decrease condition, we need to show that there exists {{formula:bff31c36-1a58-4e99-a8ac-3e44f04febfc}} such that {{formula:2056210a-e814-47b3-9712-033c1756d821}} holds for all {{formula:6a8c790b-0f38-4719-aa15-23986b21ca14}} at which {{formula:fedb1e17-7563-42f1-be6a-108e8c41f4fd}} . We prove that {{formula:7bc9b0b3-d271-4281-9e90-6283dd1a170e}} defined via {{formula:1f03f8ce-51d1-4f4d-880c-e7d054a410f5}} satisfies this property. Note that {{formula:b7cbf95f-3609-486d-bbb5-f887e493204c}} , as each {{formula:17be6fb8-79a8-4450-a0fd-aeffaffdd539}} satisfies eq. (1). To show this, fix {{formula:76fb3237-0ae9-4dbb-8405-233635939a64}} with {{formula:a3f054c1-5666-4124-9dfd-876ce82a9e80}} and let {{formula:86b23b68-a9ea-4063-8d40-4b894b33675b}} be such that {{formula:e97cafcb-9c24-4c96-8d06-4bdde7c28adc}} . By construction, the set {{formula:0e19238a-5167-4efe-a1c3-8d8ee12aac38}} contains vertices of each discretization cell that intersects {{formula:87f6baf2-5753-4c40-a63c-41f6a05af010}} and that contains at least one state at which {{formula:6520edb8-35d8-4285-8fdc-c54208fee8b6}} is less than or equal to {{formula:b7dd75fe-38f7-4b93-a668-2a7b77c2d824}} , hence such {{formula:2519be7a-d057-4989-9e55-02aa545cc171}} exists. We then have {{formula:dc9da6cb-fca2-4755-8efc-e5863da3f885}} On the other hand, we also have {{formula:9cf646de-3c71-4602-927c-602e59e342a1}} Combining eq. (REF ) and (REF ), we conclude that {{formula:12b20739-6b97-45ac-8c6a-4a11b07808fb}} where the equality in the second-to-last row follows by the definition of {{formula:9b2b01c5-f79a-4605-89c1-bfc73e9c4d14}} , and the inequality in the last row follows by our choice of {{formula:34bccda4-9d1d-48f8-b8b1-f8dd8e8f3c1e}} . 
Hence, {{formula:c8739a1b-b163-4db4-88f0-f09be2055e78}} satisfies the Expected decrease condition and is indeed an RASM as in Definition 1. Auxiliary Loss Term The term {{formula:5a4fb293-46a0-4ab7-8ba5-d29c94d4fcbb}} is an auxiliary loss that does not enforce any of the defining conditions of RASMs; rather, it is used to guide the learner towards a candidate that attains the global minimum in a state that is contained within the target set {{formula:1295da87-1834-46cc-84de-bfd2a1e3a294}} . We empirically observed that this term prevents the updated policy from diverging from its objective to stabilize the system. It is defined via {{formula:816ac19b-43ef-4ab1-8e0c-f794b80db9d1}} with {{formula:45014f48-fa93-494c-a51c-794952a1f094}} being some state contained in the target set {{formula:4427b302-6b35-43c4-a28a-9d3b1aa586d1}} and {{formula:f34ef844-d15c-43dd-b246-8c5ab08ba295}} an algorithm parameter. Proof of Theorem 3 Suppose that {{formula:cae67f39-c384-45b1-85f3-fed2a33553f2}} satisfies eq. (1) for each {{formula:c4466e8f-d64b-4658-ac5b-54810f19002a}} , eq. (2) for each {{formula:e030e1bc-c240-40cf-acd0-df33e47d3297}} and eq. (3) for each {{formula:98643c66-d918-4837-911f-ce01b1051dec}} . Suppose that the Lipschitz constants of {{formula:08729033-440b-4450-abae-dd10f577d788}} and {{formula:e980886e-36de-4cb0-9f30-c811a31684f6}} are below the thresholds specified by {{formula:8fe964b9-f2e2-4066-bdde-61cb316bc608}} and {{formula:f3960bbe-abd3-4874-a8ff-40c4fe2458c9}} and that the samples in {{formula:349c917e-ae2c-4564-ae0a-18f725f716d1}} are independent. We need to show that {{formula:ed7f647a-c0bf-48a9-8e8e-73228ddedef2}} with probability 1. Since the Lipschitz constants of {{formula:8b001b6e-2c68-47d9-b11e-d8ac033ac4c9}} and {{formula:2d07064c-4017-4162-9f12-63e67ed633e0}} are below the thresholds specified by {{formula:6bc69ffd-1992-46bc-a3b4-450b5f819215}} and {{formula:1346769d-3c1b-4f82-8996-efea49007052}} , we have that {{formula:af05f09a-9e99-4221-bee2-625fbc56ccd7}} . Moreover, our initialization of {{formula:0d6df15e-02d3-4af6-b25f-f5c4c18c7e16}} and our design of the verifier module ensure that {{formula:bce4d3bc-ff0b-4878-92b7-72e516f469eb}} contains only states in {{formula:ba45973b-d6c5-43de-bea0-934c6434c917}} , hence {{formula:ff87c354-fd48-49b1-a246-6fcc21df7944}} as {{formula:8f8b298c-30e7-41c1-87b2-4d1577628f96}} satisfies the Initial condition of RASMs. Note that {{formula:6e4054c9-c60a-4a9a-860b-76f5f0fcd1bc}} satisfies all conditions checked by the verifier; hence, by Theorem 2, it is an RASM. Similarly, {{formula:dff551c4-14d8-434a-92ba-6a6c8cb1ad8a}} contains only states in {{formula:8f548927-ea54-4a7a-825a-0199ae4c63ab}} , hence {{formula:dbabd2ba-98d4-4398-85c5-6162b5eccc00}} as {{formula:ab13c5b0-9632-4eec-a249-7e70066e3714}} satisfies the Safety condition of RASMs. 
Thus, under the theorem's assumptions we have that {{formula:27f8055f-80d4-425a-9602-1c223a688282}} Hence, in order to prove that {{formula:0d7bb6c3-b83e-44fb-8d25-2f4bccd30ba7}} with probability 1, it suffices to prove that for each {{formula:6af139cb-c710-4126-a452-8b072ee1e3f2}} with probability 1 we have {{formula:34d93e3f-d82f-492c-b51f-ab0c951c8baf}} The above sum is the mean of {{formula:7da3a603-d7de-4f40-8719-914caf97c15f}} independently sampled successor states of {{formula:6c1d5190-bf97-4720-9205-f2f16c75283d}} , which are sampled according to the probability distribution defined by the system dynamics and the probability distribution {{formula:58707d95-b5e3-4333-8e58-e9735711904e}} over disturbance vectors. Since the state space of the system is assumed to be compact and {{formula:4ee013e0-4a1b-46bc-9fea-3abeb6b1d378}} is continuous as it is a neural network, the random variable defined by the value of {{formula:4d83661b-66ee-4c0c-b71e-68ad1357d2be}} at a sampled successor state is bounded and therefore admits a well-defined and finite first moment. The Strong Law of Large Numbers {{cite:c9db9e32e98e856e87fc8af30e817adf15039983}} then implies that the above sum converges to the expected value of this distribution as {{formula:eeb2c4d9-8597-47e3-9f04-860c1dfcae43}} . Thus, with probability 1, we have that {{formula:b3bc619c-cefe-45d4-9ec3-bcbcda9bf849}} The first equality holds since a limit may be interchanged with the maximum function over a finite number of arguments, the second equality holds with probability 1 by the Strong Law of Large Numbers, and the third equality holds since {{formula:a03bfb21-2a05-4895-9508-b250355cf392}} satisfies eq. (1) for each {{formula:8e15b631-929c-4635-8a0f-d6176bfa0b35}} and we have {{formula:710fb957-2d76-4474-b231-ca6309885144}} . This concludes our proof that {{formula:ae47f4db-1f01-42ba-bcdf-a30f9ad39211}} with probability 1. Experiment details The dynamics of the 2D system is given by {{formula:6640b9bf-7310-4f13-9239-a559097f89ac}} with {{formula:102c2254-1cb8-465e-995b-e1414274f041}} being the disturbance vector and {{formula:53ab6bf2-5769-4ea8-a032-476044504d21}} . Square brackets indicate the coordinate index, e.g., {{formula:56ab9089-289e-48fc-bb65-bc4b53c8c65c}} is the first coordinate of {{formula:292e0ab5-4129-43e6-b5ae-1aded45a6774}} . The probability density function of {{formula:3bd95410-6014-4cce-b0e1-f73e9b192524}} is defined by {{formula:6ca95428-2fe8-4cce-8c17-d09903d68ac4}} The function {{formula:95322f13-5f3e-48d8-9193-aaa6818bd217}} is defined as {{formula:781ba079-cd1e-45aa-af98-3e689e29e431}} . The state space of the 2D system is defined as {{formula:09957cf4-5c40-41fe-abcc-5f9b01f66cc3}} . We define the target set as {{formula:3347954b-7232-4023-b344-fdd15144f800}} , the initial set {{formula:6d1a9a0e-8097-4df8-bbf6-0be14349dbec}} and the unsafe set {{formula:715966f2-13e8-4678-9815-017e2421edd1}} . For the inverted pendulum task, the dynamics is given by {{formula:f445fd0b-0701-4cd0-9885-a11830231555}} with {{formula:ce2a3957-b56c-4009-967d-4a77e591a684}} being defined in Table REF . The state space of the inverted pendulum environment is defined as {{formula:e7384b93-3613-4ce4-b42e-1dd3d8ea02d5}} . We define the target set as {{formula:ef2a5726-78b2-4a29-9ba4-c8c4f4ca92d5}} , the initial set {{formula:babc92d0-937d-4585-9318-208b76b98710}} and the unsafe set {{formula:99e58750-8427-4875-8568-7bf2fdac211a}} . 
{{table:a48a514d-21f7-43ec-a0ba-033098176954}} The dynamics of the collision avoidance task is given by {{formula:796e7386-eb28-4b0e-ab31-3c9fd3980197}} The state space of the collision avoidance environment is defined as {{formula:f6fe8746-a957-4942-b3c1-ab06a3914333}} . We define the target set as {{formula:63f81b26-290d-4f2e-949b-d3a5322b3980}} , the initial set {{formula:50c6c6c3-5c63-4c0d-bdd4-b05ca7282efc}} and the unsafe set {{formula:67e6651f-b739-4e02-9606-7856f2878de1}} . As in the first two environments, {{formula:bf5cddf4-91a3-4b22-be30-ba1ab7209d77}} is a triangularly distributed random variable. We optimize the training objective using stochastic gradient descent. As mentioned in the main text, the policy and RASM networks consist of two hidden layers with 128 units each and ReLU activation functions. The RASM network has a single output unit with a softplus activation, while the output dimension of the policy network depends on the task. The hyperparameters of our algorithm used in the experiments are listed in Table REF . The code is available for review in the supplementary materials. {{table:d2c019bf-a025-4ad6-98cc-75a477f5a3c5}} Normalizing the learned RASM for better bounds After our algorithm has terminated, we can slightly improve the probability bounds certified by the verifier. In particular, the normalization linearly transforms {{formula:69af59d4-a589-4d4c-bb0b-da4dbb96b719}} such that the supremum of the new RASM {{formula:5dc9036f-52cd-4c5c-b63f-e86852c61fe5}} at the initial set is 1 and the infimum of {{formula:2ea3e167-f005-4aa3-82db-fca06acf8303}} on the entire domain is 0, i.e., {{formula:f96358bd-fec2-405e-a60d-b4798cd59724}} The improved probability bounds {{formula:f9cbac9e-6bdd-40f5-b604-82ed7e7f9f77}} can then be computed according to condition 3 by {{formula:f789bfa9-1592-4244-9abb-ec6937f6fa14}} The supremum and infimum are computed using our abstract interpretation (IA-AI) on the cell grids. PPO Details Here, we list the settings used for the PPO pre-training process of the policy networks {{cite:13604744fd358c345831b96182d90155860f64a7}}. In every PPO iteration we collect 30 episodes of the environment as training data in the experience buffer. The policy {{formula:59a974d0-09a8-4092-a1dd-db406d64d748}} is made stochastic using a Gaussian-distributed random variable that is added to the policy's output, i.e., the policy predicts the mean of the Gaussian. The standard deviation of the Gaussian is annealed during the policy training process, starting from 0.5 at the first PPO iteration to 0.05 at PPO iteration 50. We normalize the advantage values, i.e., the difference between the observed discounted returns and the return predicted by the value function, by subtracting the mean and dividing by the standard deviation of the advantage values in the experience buffer. The PPO clipping value {{formula:8321c14d-9739-437b-83a5-316f5f4b6839}} is set to 0.2 and the discount factor {{formula:ee3c7ba3-c0fe-41a9-a118-8c9040d86ab0}} to 0.99. In every PPO iteration, the policy is trained for 10 epochs, except for the first iteration, where the network is trained for 30 epochs. An epoch corresponds to a pass over the entire data in the experience buffer, i.e., the data from the 30 episodes. The value network is trained for 5 epochs, except in the first PPO iteration, where the training is performed for 10 epochs. We apply the Lipschitz regularization to the policy parameters already during the PPO pre-training of the policy.
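Returning to the RASM normalization described above, here is a minimal sketch of the arithmetic involved. The function and argument names are assumptions inferred from the surrounding text (the sup/inf values would come from the IA-AI computation on the cell grids), and the exact form of condition 3 is taken to be a reach-avoid probability of 1 - 1/lambda, where lambda is the normalized value level on the unsafe set.

```python
def normalized_rasm_bound(sup_init, inf_domain, inf_unsafe):
    """After shifting and scaling V so that its infimum over the domain is 0
    and its supremum over the initial set is 1, the normalized level on the
    unsafe set is lam = (inf_unsafe - inf_domain) / (sup_init - inf_domain),
    and the certified reach-avoid probability (assumed condition-3 form) is
    1 - 1/lam."""
    lam = (inf_unsafe - inf_domain) / (sup_init - inf_domain)
    return 1.0 - 1.0 / lam

# Illustrative numbers: sup_{X0} V = 2.0, inf_X V = 0.5, inf_unsafe V = 8.0
# give lam = 5, i.e., an improved certified probability of 0.8.
print(normalized_rasm_bound(2.0, 0.5, 8.0))
```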
r
0027db1ff883809569576e9a7543c189
Due to the breakthroughs in deep learning, numerous deep subspace clustering methods {{cite:017337f24e5114046bd138302be1065c9ed15e40}}, {{cite:48331e7e67ed0d77471924a9a691cff121765fd6}}, {{cite:6db33e54520f6556c9fcf64056e24d6eab6947f3}} have been proposed to learn a nonlinear mapping of the data that is well-adapted to clustering. For example, Ji et al. {{cite:017337f24e5114046bd138302be1065c9ed15e40}} proposed the deep subspace clustering network (i.e., DSC-Net) to achieve the nonlinear mapping with an auto-encoder and mimic the self-expressiveness with an {{formula:c62793d4-ed90-4bd7-9a8f-6bea13104367}} regularization (i.e., DSC-Net-L1) or an {{formula:ec547e90-5973-48e2-8771-c3a59f4c9bc2}} regularization (i.e., DSC-Net-L2). Specifically, let {{formula:3b9dfa57-b326-4f07-854f-77bd55409a75}} denote the input data with {{formula:019bd26f-ab7e-472c-a82c-83d1574b7443}} being the number of samples, {{formula:f1eccec2-8ca2-4b53-9c3c-de1583c76244}} denote the data reconstructed by the auto-encoder, {{formula:3e88ead0-663c-4804-a125-3e9fd8f87870}} and {{formula:f2299868-bece-4b21-8b89-f6d8e51ce668}} denote the encoder and decoder parameters, respectively, and {{formula:19122b54-fc46-4469-9b22-a5d9ab0d14a4}} denote the extracted latent representation. A fully connected layer (without any activation or bias) is introduced between the encoder and the decoder to learn an affinity matrix {{formula:6acbabed-00c6-4ed0-a7f4-52feeaa4be6f}} with a typical regularization term. The loss function of DSC-Net is expressed as {{formula:80be67e8-bdd2-4f99-bd92-d532a3bb8714}} 
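A hedged PyTorch-style sketch of the objective just described might look as follows. The shapes and weighting coefficients are illustrative assumptions (rows of z are the latent codes of the n samples, and c is the n x n self-expression matrix learned by the extra fully connected layer), not the paper's exact formulation.

```python
import torch

def dsc_net_loss(x, x_hat, z, c, lam1=1.0, lam2=1.0, use_l1=False):
    # Reconstruction term of the auto-encoder.
    rec = 0.5 * torch.sum((x - x_hat) ** 2)
    # Regularizer on C: L1 (DSC-Net-L1) or squared L2 (DSC-Net-L2).
    reg = torch.sum(torch.abs(c)) if use_l1 else 0.5 * torch.sum(c ** 2)
    # Self-expressiveness of the latent codes: Z ~ C Z (rows = samples).
    self_exp = 0.5 * torch.sum((z - c @ z) ** 2)
    return rec + lam1 * reg + lam2 * self_exp
```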
m
2aef2c932e2b247b9a2db8f2956a53b8
In order to determine the polynomial {{formula:8fda4996-d7ac-4fe1-a730-ebf878ff7741}} , the parameter {{formula:d7966be2-2eee-4af7-af8a-a2d15bdc1ab1}} under the square root sign must be known, and the expression under the square root sign must be the square of a polynomial, so that the requirement on {{formula:638cbe1a-58ca-42a9-b8cc-071b1d7b9436}} (namely, that it be a polynomial of at most first degree) is satisfied {{cite:cb36664fadd04e9adf7def6d8a9091eb16cbc81b}}.
m
1151fb50213872720d8c60bc34be0967
We take {{formula:d9eb5a11-f4f2-4eb3-8fb9-d0a25eb14769}} as the incident field on the spherical particle to be trapped. The Mie-Debye theory of optical tweezers {{cite:795bc5f36f570102532b0919b97214be0598be80}}, {{cite:606f4484741b8cca11cf65532faec1cb797d6033}}, {{cite:a375e6b40cad5fffbfe43b1c5b8596c63107ded2}} combines Mie scattering with the Debye-type non-paraxial integral representation (REF ) for the focused trapping beam. Each Fourier component of {{formula:d3a51fd5-9604-4490-90ee-dd70bf65fe33}} is scattered by the particle according to the Mie formalism {{cite:e00f64fa4938ba963948910406ef572d3a23ba7c}}. The resulting scattered field can be written with the help of Wigner finite rotation matrix elements {{formula:281d51d8-702c-42e4-9b03-c6d64dfbd324}} {{cite:232a68f470fbf55f2444fac516e96b19ad520ea7}}. We then compute the optical force {{formula:bb18bba9-d4d8-4540-b199-9867ef0bbdaa}} by integrating the Maxwell stress tensor over a spherical Gaussian surface {{formula:12834deb-762c-41ec-9ec1-735ec59fa363}} of radius {{formula:dda17466-4e2a-43dc-8097-0c5df36eb1d5}} {{formula:83546976-f278-4737-adf1-deed6793a288}} 
m
beed47f15b3509f339b3345a98d9e952
There are several research directions we intend to pursue for future work. First, we acknowledge that, although our method is more scalable to optimize than other methods, optimization remains the computational bottleneck. We expect that porting our optimization scheme to the GPU, using fully-fused CUDA kernels for both the grid and MLPs, will alleviate this cost, as studied in prior works {{cite:09057237f20bb865ade21a5a1064ae7bef9fb498}}, {{cite:c88f1ae286c94f9b63b09ac5c53ec194ce21d1f7}}. Additionally, our self-consistency scheme is only an approximation, whereas other approaches have studied the design of invertible neural networks for computing discrete {{cite:0c90fd749a4622555c932b9668a8aecb006e8a9d}} or continuous {{cite:ad1cc42d834a628b14a084c07673b555a8213f71}}, {{cite:7c45dbdbb2a8977525d5b7d16911b47998bebf52}} mappings of learned representations. We believe that adopting such approaches for representing flow maps in 2D and 3D unsteady flows is a fruitful research avenue.
d
f67c226d672cb5579c242b8da1c3356d
This section presents numerical results on two power systems. The first one is a small-size system especially designed for illustrative purposes. The second one is based on a realistic power system from Texas. All experiments have been carried out on a cluster with 21 Tb RAM, running the Suse Leap 42 Linux distribution. The constraint screening approaches presented in Table REF have been modeled using Pyomo 5.7.1 {{cite:d5ee7fcc37d448f99be42291d00ad0327c35ebca}} and solved with CPLEX 20.1 {{cite:8f8899c5037081d234f887f2bceca4380c3e37fb}}. The number of segments of the 1-quantile piecewise linear regression has been set to one in Section REF (illustrative example) and to three in Section REF (realistic case study). In the latter case, our choice is motivated by visual inspection of the scatter plot of net demand vs. cost. The breakpoints have been found with the Python library PWLF.
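For illustration, a three-segment fit with PWLF on synthetic net demand vs. cost data could look like the sketch below. Note that PWLF's default fit is least squares rather than the 1-quantile regression mentioned above, and the data here is made up, so this only approximates the breakpoint-finding workflow.

```python
import numpy as np
import pwlf

rng = np.random.default_rng(0)
net_demand = np.linspace(0.0, 100.0, 200)
# Synthetic cost curve with kinks at 30 and 70, plus noise.
cost = np.piecewise(net_demand,
                    [net_demand < 30, (net_demand >= 30) & (net_demand < 70)],
                    [lambda x: 2.0 * x,
                     lambda x: 60.0 + 3.0 * (x - 30.0),
                     lambda x: 180.0 + 5.0 * (x - 70.0)])
cost += rng.normal(scale=5.0, size=cost.shape)

model = pwlf.PiecewiseLinFit(net_demand, cost)
breakpoints = model.fit(3)  # three segments, as in the realistic case study
print("estimated breakpoints:", breakpoints)
```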
r
cab6f2047db766a4af479875aa2b3ebd
The BIC is defined as {{formula:15e673e9-f94b-4b87-addc-b0e2be9ccbeb}} , that is, the maximum likelihood plus a complexity correction term that contains the number of free parameters {{formula:dd6f5d19-bab9-4017-9589-ec44eead33f4}} and the sample size {{formula:b0389aca-b9d2-4807-994e-069acb88fb78}} ; both {{formula:470459d0-5fff-45a2-8ac4-fe0bc463e9bc}} and {{formula:009056af-fa5e-4e62-8d4f-123ac67eb22a}} can be surprisingly difficult to determine {{cite:a6167f807cbd7ceb6dd9979bef38cb1c5fce9a78}}, {{cite:61ba4d620eefeaf609f679a520a6e531aae09ba8}}. The BIC approximates a “default” Bayes factor based on a unit information prior {{cite:7df9bb92d4b1a3a784fe47513ae6c7313a281b04}}; it does not easily lend itself to an analysis that seeks to take advantage of background knowledge.
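In the usual sign convention (smaller is better), this criterion reduces to a one-line computation once k and n have been pinned down, which, as noted, is the hard part. The numbers in the example call below are purely illustrative.

```python
import math

def bic(max_log_likelihood, k, n):
    """Bayesian information criterion: -2 times the maximized log-likelihood
    plus the complexity correction k * log(n); smaller values are preferred."""
    return -2.0 * max_log_likelihood + k * math.log(n)

print(bic(max_log_likelihood=-120.3, k=4, n=250))
```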
i
d0af2a37297796324dade53376c263a8
ML techniques have broad potential applicability. Power-grid stability prediction can be regarded as a classification problem, and it is handled as such in this paper. However, the cascading failure of a power system, which is also an important problem in system stability analysis {{cite:6614be9779fd3f5087b89eeacc7b68e3edb6f7f0}}, {{cite:02524f6ca393090045aeb27fc34d94cbb3e4c79b}}, {{cite:a5b0b93fc738395fa5052f14453b99dc3de1cdf9}}, {{cite:2bbba05369b7d1ce8fd651490e012550bfbf50a7}}, {{cite:1e418f9adf03f45782bf5b8f9361657aef7899df}}, {{cite:645dd0f4327d98c09dc584700ab5a4118731ab11}}, belongs to a different group of problems. Cascading failure is a dynamic phenomenon, and it should be addressed with other ML techniques because the state of the system changes over time. To deal with such time-varying phenomena, the recurrent neural network (RNN) is a candidate ML technique. A typical example of an RNN is the reservoir computing method, which has recently been used to predict the trajectories of chaotic systems {{cite:f65db5dbf642c3ffb17603383f0229776ecbb901}}, {{cite:976e72fd0b7bd0657377490eb09192c8d3bf732b}}, {{cite:b1079154ce53d031c0ea5e9758c46d6e3fc1c12b}}.
d
447c0ee7c5f9955ccd9615e95be91534
The system of PDEs derived in this paper is also interesting in its own right. The form is reminiscent of the Patlak-Keller-Segel model {{cite:cae59b9aabf6a824cedb2ca6749e04b46a417a13}}, {{cite:550d53e5036f2d82f29e8491db65d03e5af604a4}}, with a chemo-repellent rather than a chemo-attractant and no diffusion of the chemical. The graffiti densities evolve in response only to the agent and graffiti densities of the corresponding gang, while the agent densities evolve only in response to the corresponding gang's agent density and the graffiti densities of all the other gangs. This gives the system its cross-diffusion form. Originating in spatial ecology {{cite:a070cfe882bac6c40ffa76db7a209c21ee9cbc51}}, {{cite:58bdaeefdb38f8c456ca9b51898a400731baa4d5}}, {{cite:0c39989f10c322144f8a106107ccdb26e5c88275}}, cross-diffusion is widely recognized as a mechanism for pattern formation {{cite:e1d9bdb21bd76f30a1aaf7222c73acff50c22395}}. Recent interest in cross-diffusion has led to advances in the analytical understanding of these systems {{cite:ce052733f48415e8626bc1a830720d4e9866fbb0}}, {{cite:7ce351a06c5f0786b977156dd3b4e49b2b06a2b1}}, {{cite:8e83577c6f9329cf7eba7bbca506ab971e08b28d}}, {{cite:3ec6bf897c7730da7badcff8c372b73cb1073e47}}. Since this paper offers three variations on a novel cross-diffusion system, new avenues are opened for further numerical and analytical study to better understand the properties and behavior of these systems, such as the analytical work done on the two-gang system {{cite:ab93d8bb021c5174ab8b331de6d96df891df8dec}}.
d
d6c64b38246f777b3314525f6ffc0ce7
Given {{formula:9a8ba7d2-080b-46c0-848e-6e60f54ba1e0}} , the CE boils down to first estimating the tensor factor matrices. Several techniques have been proposed to achieve this end, e.g., in {{cite:cf50676431faa68b6f662d78478a909d72fd1acb}}, {{cite:d51613a2ab5ab9eb69850a51bbea018c921345f1}}, {{cite:22d13cd9392fe8dec9f64932cb86e1155758aee7}}. One of these techniques is alternating least squares (ALS) {{cite:0288c6378f5928f58c4b6c9d754b92844ce6f01c}}, which alternately minimizes the data-fitting error with respect to one of the factor matrices, with the other three held fixed. For example, to estimate {{formula:3c86d5c2-8f65-4c6d-ba06-752b61ec5cf7}} , assuming that {{formula:677e6367-8615-4b12-a64f-a30aa7f33aa3}} , {{formula:4fe56673-5f43-4b86-8f3a-5c96a5f0cac8}} , and {{formula:4b27026f-95e1-4c5d-8cca-cc995fdbe218}} are fixed, the problem can be formulated as {{formula:ca9648cc-9e2b-4d8f-aa81-541db6c6b412}} 
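A minimal numpy sketch of one such ALS update for a 4-way CP model is given below. The tensor Y (shape I0 x I1 x I2 x I3), the factor names A, B, C, D, and the unfolding convention are all illustrative assumptions; the update refits the first factor with the other three held fixed.

```python
import numpy as np

def khatri_rao(mats):
    # Column-wise Khatri-Rao product of matrices sharing the same rank R.
    out = mats[0]
    for m in mats[1:]:
        out = (out[:, None, :] * m[None, :, :]).reshape(-1, out.shape[1])
    return out

def als_update_A(Y, B, C, D):
    # Mode-0 unfolding in C-order gives Y0 = A @ khatri_rao([B, C, D]).T,
    # so the least-squares update is A = Y0 K (K^T K)^+, where K^T K is
    # computed cheaply as the elementwise product of the small Gram matrices.
    Y0 = Y.reshape(Y.shape[0], -1)
    K = khatri_rao([B, C, D])
    gram = (B.T @ B) * (C.T @ C) * (D.T @ D)
    return Y0 @ K @ np.linalg.pinv(gram)
```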
m
157ab0b624c68c20330d4f46fa2a6a23
We denote by {{formula:db86ec0f-ff74-4337-85d6-eafa3db3d67c}} a complete filtered probability space, where {{formula:2f821926-910e-4491-b14d-aaf97038fe4c}} is a sequence of independent random variables with zero mean. The filtration {{formula:b36b0c31-b7e7-46dc-9dc9-fda126408fa0}} is naturally generated by the sequence {{formula:fe207871-68cb-47a5-92fa-1485a82a5ec0}} , i.e. {{formula:9b451ab4-295c-4257-939a-a02f083c8c4d}} . The standard abbreviation “a.s." is used for both “almost sure" and “almost surely" with respect to the fixed probability measure {{formula:3ec81191-10fa-42b6-8d80-7c4efa510eab}} throughout the text. A detailed discussion of stochastic concepts and notation can be found in {{cite:cc37b81f60f332e46b5f61c626c6da238c2dd8cb}}. We consider (REF ) and (REF ), where the sequence {{formula:c60ae553-30ed-46b2-93fa-dfb511b0b94b}} satisfies the following condition.
r
9f4132d45a07ded98f445442334320fa
The structure of CVAR is shown in Figure REF . The basic design of CVAR is to generate a better embedding for the item ID and replace the original, insufficiently trained embedding. The Item ID in Figure REF denotes the original item ID embedding {{formula:f8b68443-22ad-4e39-a247-d5e03eeffd08}} trained by the backbone model, while the Warm Item ID denotes the enhanced warmed-up embedding {{formula:8db2bae8-7a42-4c0d-a0ec-d2d06175b0a9}} generated by CVAR. Instead of directly learning a transformation from {{formula:151aea4e-da22-4812-8a29-06b914444145}} to {{formula:40a0b11e-5aaa-4db0-8fde-7e5c161fdc15}} , CVAR aligns the representations transformed from {{formula:fbde1e6f-b5f9-48f3-a884-f1efb21803d2}} and {{formula:c2b8c144-60e0-4f52-8fa0-e0833a50b6a6}} in the autoencoder's latent space {{cite:3e2eb6d5fb4168ba63f8a168947675b92be4a4d0}}, {{cite:d11c18ceff4aec5985a2f1a97b13a024da971f2f}}, {{cite:7344320b0f1c573e19500af386b8c243f859afc4}}. Following the design of CVAE {{cite:075fe86ee7b48bfadef32686523233967c8467f0}}, the autoencoder applied to {{formula:aeb5b961-6214-4aa3-a150-5b2a1a7d3d5f}} is formulated as: {{formula:3d7b63a5-4d38-491b-9c1b-30f975d6e73e}} 
m
c17467afbbbef88fd58ecccd8bc64314
In the classical nucleation theory, it is assumed that the new-phase nuclei are spherical and do not come into contact with each other {{cite:e7b403585bbeb4a57ab0ce4285e9957dc464bc54}}, {{cite:03ebf26e107e1c417b706a5e4dfe6a6e643329f9}}, {{cite:b8ba7307c899cc800ab9335eee746eb494d7c111}}, {{cite:a953d22ea31640b164b82b20c75f6282281c746a}}. In this case, the surface layer of the nucleus contains {{formula:1dcd63ab-346b-45b6-af7d-cec847247bfe}} particles, where {{formula:492b300b-c618-4c9c-9553-875afe7715ca}} is the number of all particles forming the nucleus. Then the concentration of nuclei of size {{formula:2c01a438-dcb3-454d-b7a0-2abb4e8b7da1}} depends only on the frequency of particle addition/removal, and it is determined by a master equation of the following form: {{formula:17eaef00-9dcd-47fb-a835-c5525bbde35e}} 
i
b7756ba361b6edaf25544c9292b12a20
For WideResNet-50 {{cite:0ce2f8379caf8adae887d569f75ec29f2e213410}}, we plug the proposed WaveBlocks or AWBs in after stages 2 and 3. For DenseNet-121 {{cite:0e5afadba41fd1d23b38c9749f298b9fb16a27df}}, they are arranged after Dense Blocks 2 and 3. The experiment results are shown in Table REF . In the Duke-to-Market task, the mAP increases by {{formula:a7df3d2b-cc2e-4ff4-94ee-3268557908b1}} with WideResNet-50 {{cite:0ce2f8379caf8adae887d569f75ec29f2e213410}} and by {{formula:a6835a07-0d6d-46d2-944a-9ec1b9644ecc}} with DenseNet-121 {{cite:0e5afadba41fd1d23b38c9749f298b9fb16a27df}}. Also, in the Market-to-Duke task, the mAP increases by {{formula:73ef74bc-6c66-46d3-a4b5-1278c3e4666c}} and {{formula:2d668184-e84a-4da0-8f84-0806e9756c80}} with the two backbones, respectively. Further, we achieve higher mAP and rank-1 performance in both tasks by using DenseNet-121 {{cite:0e5afadba41fd1d23b38c9749f298b9fb16a27df}} rather than ResNet-50 {{cite:0e5afadba41fd1d23b38c9749f298b9fb16a27df}} as our backbone.
d
98882e212b062037b05ee7e30d1dc9fe
Domain Randomization: Building upon the work of {{cite:075af8099481270c2fcd2e24eed7552a02e56dba}}, this method proposes a new technique to bridge the domain gap, inspired by the domain randomization {{cite:f4f78557657f56af3fecb94a65abd855fd4325e3}} works in robotics. During training, the poses in the training set are generated using multiple different DRR generators {{formula:0972f7b5-d9f5-4c00-a0a8-8886233e08d4}} , which are not necessarily state of the art. The intuition behind this is that each of the generators constitutes a visual style, and the network will be exposed to an unlimited variety of styles during training due to the randomization. Thereby, the network will be forced to learn to deal with a large variety of styles, where the difference between the styles is larger than the domain gap between real and virtual X-ray scans. To the network, the real X-rays will appear as yet another one of these styles. This can also allow a network to become more robust to other variations, for example air cavities due to pneumoperitoneum, artifacts, or small metallic objects. Four DRR generators, depicted in Fig. REF , are used. The first generator (Fig. REF a)) {{formula:483b2bae-3fc7-47cd-b951-c7dccd2f482a}} , corresponds to the method of {{cite:0fd89607b4ea9c83bc99b3133529787cbfb92ac6}} without scatter estimation. The second generator (Fig. REF b)) {{formula:4b34af8e-191e-442b-9763-5589adfda015}} , corresponds to the same method with scatter estimation. It should be noted that the publicly available implementation (https://github.com/mathiasunberath/DeepDRR) was used, which does not include the final log conversion (i.e. intensities represent energy arriving at the detector). The third DRR generator (Fig. REF c)) {{formula:3b42a3a5-0ea8-4ad5-94a6-7deeb59fe334}} , corresponds to a publicly available ray caster (https://github.com/SeverineHabert/DRR-renderer). The fourth DRR generator (Fig. REF d)) {{formula:c7c3640f-2838-4f19-b9e8-556b8b0fa74d}} , corresponds to a commercially available DRR generator (ImFusion GmbH, Munich, Germany; https://www.imfusion.de). After each DRR is generated, an elaborate randomized post-processing scheme is introduced to create a further variety of input scans.
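In pseudocode form, the per-sample randomization could look like the sketch below. All function names are placeholders; the actual generators and post-processing operations are the ones described in the text.

```python
import random

def randomized_drr_sample(ct_volume, pose, generators, post_ops, rng=random):
    # Draw one of the DRR generators (e.g., the four styles above) so the
    # network sees a randomly chosen visual style for every training sample.
    drr = rng.choice(generators)(ct_volume, pose)
    # Apply a random subset of the post-processing operations, in random order.
    for op in rng.sample(post_ops, k=rng.randint(1, len(post_ops))):
        drr = op(drr)
    return drr, pose
```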
m
aeb57ec5e36fa33c6c9657baa3a9c71f
Most of these methods can be roughly divided into two categories. The first category contains the neural architecture search (NAS) algorithms {{cite:d3bf122c1a6cb6512b0077b68c500f5f7a4b2dba}}, {{cite:9291798e181f817d000d94a15c4786b06b78d6c5}}, {{cite:220792ecd1649697429cbdb6dcc15cd293f3e965}}, which allow constructing efficient neural networks for a particular dataset and the specific hardware used for model inference. The second category of methods aims to improve the performance of existing and usually hand-crafted DL models without much impact on their architecture design. Moreover, as we show in this paper, these methods can be successfully applied to the models obtained using NAS algorithms. One example of such methods is quantization {{cite:031c29d3139897e4e0291b604dc7fb2ab1fbfd37}}, {{cite:4b0cf1d33e18379d88c9080c6e43ec8b41ceaf60}}, which is used to transform the model from floating-point to fixed-point representation and allows using hardware supporting fixed-point arithmetic in an efficient way. The extreme case of quantized networks is binary networks {{cite:d88e25c08be21e90d0b453e4f56be39c63ced62c}}, {{cite:11d7652a0b5340424a105ea9ed8d222e77d34b55}}, {{cite:cfe05222cc297bf6e25e8fda4a4e104525e3c4fa}}, where the weights and/or activations are represented by one of two available values, so that the original convolution/matrix multiplication can be equivalently replaced by XNOR and POPCOUNT operations, leading to a dramatic decrease in inference time on suitable hardware. Another method belonging to this group is based on introducing sparsity into the model weights {{cite:4578d8263ab688ac16e9e9cf04cba188cdaa2ae6}}, {{cite:ede1f47f4c8a15d015e9f62912b84839c46f4b74}}, {{cite:57a956628fa388afdd0b4a03ec35eb906e842936}}, which can be further exploited to reduce the data transfer rate at inference time, or bring a performance speed-up via sparse arithmetic, given that it is supported by the hardware.
i
2e65f192214de3b1224c86ce3df62584
We will use the formal definition of a decision tree used in {{cite:1223db04a5004dc662e4294943ffd49901b9527d}}: A decision tree is a pair {{formula:bd4f62f4-75a7-45d7-bd4e-f74136223bd1}} , with {{formula:622491a2-5450-4941-b317-32fa46e129a9}} and {{formula:540afcca-8296-4601-a4c5-bd331aed1e74}} of the form {{formula:9039a74c-a536-45c8-874b-264087097cba}} , where each {{formula:4a87f834-ab5f-4f8f-8265-b1456611481c}} is a function which assigns, to each pair {{formula:711628b1-49bf-4526-91a3-076145c9a3c3}} , an element of {{formula:d7f0f1ff-3cd3-4dd9-adf6-f24c30ae952c}} .
r
eec4e764f9edeaa08305e271651711aa
The weighted Tchebycheff problem formulation has the form {{cite:a0128fa379042a49890fc572095cdfcb9b0e5123}}: {{formula:76d548f0-a153-434c-923b-29a4c967b549}}
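A small illustration of this scalarization with two convex objectives f1(x) = x^2 and f2(x) = (x - 2)^2, ideal point z* = (0, 0), and equal weights; all of these choices are made up for the example. The nonsmooth max is handled with a derivative-free solver.

```python
import numpy as np
from scipy.optimize import minimize

w = np.array([0.5, 0.5])        # objective weights
z_star = np.array([0.0, 0.0])   # ideal (reference) point

def weighted_tchebycheff(x):
    # Scalarized objective: max_i w_i * |f_i(x) - z*_i|.
    f = np.array([x[0] ** 2, (x[0] - 2.0) ** 2])
    return np.max(w * np.abs(f - z_star))

res = minimize(weighted_tchebycheff, x0=[0.5], method="Nelder-Mead")
print("compromise solution x =", res.x)  # expected near x = 1
```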
m
c66048bf7635a5dc6795b7a01001f2c5
The remainder of this paper is organized as follows. First, we establish the free boundary value problem for the physical problem in Section 2. The solvability of the free boundary value problem follows from the standard variational approach, which was developed by Alt, Caffarelli and Friedman in the celebrated works {{cite:b899096766d97ba3143e704f135e6c608b0d03ae}}, {{cite:ac854631cd6f9841e815be60766c24e6385cf014}}, {{cite:442db5431976e2ce9826972a4a0aa2de0873d5b0}}. Moreover, some properties of the free boundaries will be obtained, and we can verify the continuous fit and smooth fit conditions for suitable parameters {{formula:43629de1-1796-4528-9b7c-aa533102cda5}} and {{formula:b92b8769-18fc-48a7-8896-8ae2f4b1b6df}} . Additionally, we will investigate the existence and properties of the interface between the two fluids. Hence, we can obtain the existence of the impinging outgoing jet in Section 3. In Section 4, we will establish the uniqueness of the impinging outgoing jet and of the parameters. In Section 5, the asymptotic behavior of the impinging outgoing jet is obtained via a blow-up argument, which has been used to deal with subsonic compressible flows in infinitely long nozzles in {{cite:f688c5d731fc38bc0ac93d7de2abf6369fe10755}}, {{cite:3aeecacc33bb8dc2b6e6d42da92fb090c82eed41}}, {{cite:1b1c06e88bebf5758bb13fb3f9a1e1af975037c1}}, {{cite:ca8677478f5eda0bb842c2147070a567c9e020d0}}, {{cite:23ed39ecbda2485db019842aa96a521b1a572771}}, {{cite:8b65e01950b33baa638a79784ae3ca158b2cfaaa}}, {{cite:fc1315b22b6810fb956c4755c99e5530e335ee97}}, {{cite:4dacdc04836fb32cb9de7ab81e2993c36c8046cd}}, {{cite:e10eb0ab45ac4bde039ac8cf57865078937d9a61}}. Some results on the variational problem are given in the Appendix.
m
d350dd6f200bc7c25c763bf9e56e2073
Anomaly detection, which attempts to automatically predict abnormal/normal events in a given video sequence, has been actively studied in the field of computer vision. As a high-level computer vision task, anomaly detection aims to effectively distinguish abnormal and normal activities as well as anomaly categories in video sequences. In the last few years, there have been many studies investigating anomaly detection in the research community {{cite:4db4ee5e2b8c17c65b3fa189f7f7af7c7cc017c6}}, {{cite:d23634061a00ce94466c4fc2be2cc9165db09e3d}}, {{cite:1be9f92e3713778404b7a487548c08409b0fd22f}}, {{cite:b7484884471ffc6f437cffa680264686e92bf010}}, {{cite:792f854ddfe4beb22caa64e6b305cf42a3f9bad8}}, {{cite:5e91af28cacaf7f718cbeceadca9199041390ca1}}, {{cite:8d5793e6de5e4f8889b4821cc10151451fd520d1}}, {{cite:aeb50acc221b6ad9740e5ca90d95700812cff734}}, {{cite:19d15d6fdba011ba8f6ea6f574e2813a56f2e3ec}}.
i
4a7a84a06b250487a12a738edb32efd6
Throughout this paper we consider {{formula:da843ef6-2351-4741-8dad-6f71c4ca270b}} as a finite group. {{formula:734039cd-7edf-4b47-a839-b1945dcc183e}} denotes the cardinality of the set {{formula:0304af84-13d9-4bcc-b32d-ba4d18fd0136}} For a prime {{formula:982009ac-a754-4cb1-b82c-993e449bba37}} , a group {{formula:4cf4307d-bfec-46bd-8cb8-93ad16da1ff9}} is said to be a {{formula:580841da-eede-4c4e-8cdd-8d56c8d4ce02}} -group if {{formula:231312a1-1426-41a9-b27f-6b0704d8efc8}} Recall that a finite group {{formula:8680151b-53f3-4773-b110-a17d53d3932a}} is nilpotent if and only if it is a direct product of its Sylow {{formula:b5ed08e0-86fa-412a-a562-464790a3a329}} -subgroups over primes p dividing {{formula:f6ed7872-b6b5-4ead-a80b-4b26a272a25a}} Note that, in a nilpotent group, elements of different prime orders commute. For more on nilpotent groups we refer to {{cite:b29b3e6f4739ce0476df461bd908d103d549bde5}}, {{cite:12e3bd59062afc9f041385099f104474be59203e}}, {{cite:917b6aedf6e399d977ac045056a7b4ec65eba589}}.
r
fca7bc52d301ed71d356831562576e2c
Alternatively, if we are seeing the jet of Tol1326-379 at a small angle, its SED would be intrinsically different from those of the FR Is. This would be the first indication of a discrepant property between these two classes (other than the paucity of extended radio emission in FR 0s) and might be an important clue to understanding their nature. Interestingly enough, the Compton peak in Tol1326-379 appears to be more prominent than the synchrotron one. This generally occurs in FSRQs, where a surplus of seed photons coming from the accretion disk, the broad line region and the torus contributes to the high energy emission by External Compton {{cite:a34f2dd9ae0f4408e2f92afd8f4bcd63979fe6a3}}. Incidentally, we note that Tol1326-379 is characterized by a steep {{formula:2f3e7e48-3de1-45de-af8f-ae24a701be4c}} -ray spectrum ({{formula:576c16f2-b981-4175-8777-0bdf69e7f098}} ), more similar to that generally observed in FSRQs (and their misaligned population, i.e., BLRGs and SSRQs) than in BL Lacs (and FR Is). It is, however, improbable that an External Compton mechanism is responsible for the cooling of the jet particles of Tol1326-379. Its nuclear environment is poor in photons, as indicated by the low accretion rate. It is then more plausible that its {{formula:f1ce975e-5b6c-4a0a-8c1e-ef7ac55810ce}} -ray luminosity is sustained by different jet components that mutually interact, amplifying the IC emission, as suggested for FR Is. The excess of {{formula:023e9d55-e8c9-450f-9f59-3e3c14e0866d}} -ray radiation could then reflect different physical conditions in the high energy dissipation regions (i.e. of the spine and/or the layers).
d
a30dfa5b2ed033d50fd2acf4fc2ab866
Quantum error correcting (QEC) codes {{cite:8f9d76bd5b82b5d1360897e08e667bc739a8cb33}}, {{cite:82f71ab346b405dde58e435355422c7579bb5e0d}}, {{cite:34f9d3f6a5d4879f3f528e6209b31597f64ff47e}} are vital for implementing fault-tolerant quantum computation and overcoming the noise present in quantum hardware {{cite:ef80a780a27bab2b0f7b7c11c96733ed3209066a}}, {{cite:65d550447f06de9ba26e47167f42d131324f9147}}. Quantum device vendors are exploring various quantum error correction codes to boost the error tolerance of quantum computation. For example, Google exploits the repetition code {{cite:23da9b6fad0f0851ac49e85a6eedee8fc8b3ac3e}} to suppress errors in their Sycamore device {{cite:48e2ecf5f954c5c73e2a1c211aa2a947317f5b1e}}, IBM extends the surface code {{cite:8f9d76bd5b82b5d1360897e08e667bc739a8cb33}} to their low-degree superconducting quantum computers {{cite:93e0cf68feac571746eea66c0a4ea86d84bcce19}}, and Amazon utilizes the concatenated cat code {{cite:82f71ab346b405dde58e435355422c7579bb5e0d}} to build a fault-tolerant qubit.
i
6662e731d413f6b1090e332a6e9a581f
Furthermore, we consider the use of the Pearson correlation coefficient as a limitation of our approach. When modeling the outbreak of a disease, it is especially important to properly reflect the points in time that correspond to high numbers of infections. Other correlation measures, like weighted variants of the Pearson correlation coefficient, may provide advantages in this regard. We expect this aspect to be particularly relevant when modeling rather infrequent diseases with generally low incidences. Another problem is posed by possible discrepancies between the data obtained from the emergency departments and the data that incorporates information about the number of infections, e.g., resulting from reporting delays. To circumvent potential issues that may result from such inconsistencies, approaches that have specifically been designed for measuring the similarity between temporal sequences, like dynamic time warping {{cite:da4c380951feb3639c8aa3ba5006b514912f9a70}}, could be used in the future. They allow for certain static, and even dynamic, displacements of the sequences being compared.
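For reference, the standard dynamic-programming formulation of dynamic time warping is short enough to sketch directly (a plain quadratic-time version without windowing or weighting):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences; the
    recurrence allows the displacements mentioned above by permitting
    insertions, deletions, and matches along the warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```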
d
776ba8ce0b85338c66ebac708ff12937
A U-Net architecture {{cite:8735f6dec0e309f6199662dc2cc2667e7e6b82f6}} with zero-padded convolutions was trained using the Adam optimiser {{cite:138f952c5bc54268b0b7a5d1dc3398fbaadaa5a2}} with a learning rate of {{formula:4861729e-a0b8-4e8b-82de-4b5a23d109fe}} and standard {{formula:2ed47f0e-76e3-4919-8c98-1deac2cde0e3}} parameters. Batch normalisation {{cite:d9439e03fa1703b272256f9b0fb26a64dde0274b}} was used after each activation function to ensure stable training. The cross-entropy loss and the Dice loss were tested {{cite:f586cf76fb122727c40d38112b25fb0b5e128a8f}}.
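As a concrete reference, one common soft formulation of the Dice loss for binary segmentation is sketched below; several variants exist, and the one actually tested may differ. Here pred holds per-pixel foreground probabilities and target the binary masks.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    # Flatten each sample to a vector and compute the soft Dice overlap;
    # eps stabilizes the ratio when a mask is empty.
    pred, target = pred.flatten(1), target.flatten(1)
    intersection = (pred * target).sum(dim=1)
    denom = pred.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * intersection + eps) / (denom + eps)).mean()
```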
m
67a2e6e62fa9f1d584a81bfed8b0935c
The results above show that the pure annihilation decays {{formula:999c453b-d4cd-4503-9145-54ef31ae59a1}} and {{formula:81bf5a5a-6813-4f17-8441-63314bd62ad0}} all receive contributions from two kinds of topological penguin diagrams, and the predicted branching ratios are on the order of {{formula:fba0d818-58f8-47c9-aa84-78261ea8966a}} , with large uncertainties. In order to compare the branching ratios with other approaches and experimental results {{cite:286f069a4ad4145fba7caf32748ce39a13765902}}, {{cite:aaf4f1834770e161f75d21407b19134d229b6c58}}, {{cite:62f8acf703c5ca98c18f39c71bb19a6fce4f68ac}}, {{cite:5efbe2ea8dbfa691d53dbf9937d87492a542364a}}, {{cite:46f8146d4c4bd442ff2df81497173841fb82c954}}, {{cite:79d10ef0aecc07acbbc5f6ef0e1d73baeeeea173}}, we define the two-body decays of {{formula:f1166d55-1a8a-4cc6-af88-d4ea48a998bd}} by introducing {{formula:6ffdb165-d59c-4c75-9e3e-290e9e75a76c}} .
r
64e0801fbd68796b14caa77337c858ea
Recently, various regression-based object counting methods {{cite:9d53237db29ebdaad35bc43ba9be564d6e6463e3}}, {{cite:b16df63e3c3841237f41f3de3a6c044a5ab30b71}}, {{cite:6dc4627d32bf0f1fd44806e3975b4d9df6395bdc}}, {{cite:2a10f4d50ab1d281880dc354d059ade7f6b8ba45}} regress a density map through different techniques such as multi-scale networks {{cite:6dc4627d32bf0f1fd44806e3975b4d9df6395bdc}}, {{cite:3d33fe8ff1dee5cb0a770221c48639c1e08408c1}}, attention mechanisms {{cite:c39cf4df55cd3d6f506fb2390955795e26108ba6}}, {{cite:2a10f4d50ab1d281880dc354d059ade7f6b8ba45}}, {{cite:bb6ce0e739512df20444aee51416582bf842723b}}, and perspective information {{cite:387cd226c59a069f3359c966325e8d50af40563a}}, {{cite:6a5d5a02e54b22d5205dd351ecbbb801b0856478}}, {{cite:9ea032418ab5c61e0dae597323a7a029cc0fdb22}}, achieving remarkable progress. Specifically, Jiang et al. {{cite:3d33fe8ff1dee5cb0a770221c48639c1e08408c1}} introduce a novel Trellis-style encoder-decoder, which effectively fuses multi-scale feature maps in the encoder. Xu et al. {{cite:6dc4627d32bf0f1fd44806e3975b4d9df6395bdc}} propose the Learning to Scale Module (L2SM) to automatically scale different regions into similar densities, reducing the pattern shift. Zhang et al. {{cite:bb6ce0e739512df20444aee51416582bf842723b}} propose the Attentional Neural Field (ANF), which combines conditional random fields with non-local attention mechanisms to capture multi-scale features and long-range dependencies. Yang et al. {{cite:9ea032418ab5c61e0dae597323a7a029cc0fdb22}} propose a reverse perspective network, which is designed to diminish the scale variations in an unsupervised way. Liu et al. {{cite:6a5d5a02e54b22d5205dd351ecbbb801b0856478}} utilize an auxiliary branch to integrate perspective maps into the density maps.
m
074ce5fd137e8837edec0a4f0c012520
where {{formula:7e7e9c35-d7a7-4c62-9648-f3c215a4cc77}} has an {{formula:177fe119-209a-4930-a166-ea1f58d30442}} -Lipschitz continuous gradient in Euclidean space and {{formula:56e69764-9baf-4480-80ba-34b171d13d16}} is the Stiefel manifold. Unlike the Euclidean distributed setting, problem (REF ) is defined on the Stiefel manifold, which is a non-convex set. Many important applications can be written in the form (REF ), e.g., decentralized spectral analysis {{cite:675a15a9d745f942e05137099ea0fbbead2bb5ad}}, {{cite:fb5ccea34657112ed5d69ba46da19967cd68e6fb}}, {{cite:c928e814016c76dc82630690ccdc1b2172366086}}, dictionary learning {{cite:685efe499d2f1cb04d20688cee0406af300d89a2}}, eigenvalue estimation of the covariance matrix {{cite:2ba71f3e096c61ba93aa6f82ba8c69c2b4c5f240}} in wireless sensor networks, and deep neural networks with orthogonality constraints {{cite:4c2e82ac9a9e19b7a75985b7f479a5c901d6fce2}}, {{cite:52a5f80523ccdcc59e7bff8d5e2895684245f34b}}, {{cite:b16f3a2e2b605f8983db9ed2581e944978779764}}.
i
68e5e81ad449f1326bc1f5797f05bd90
The sensitivity of each feature to input variation is recorded. This continuous observation lets the XAI practitioner justify the different predictions at the output of the neural network. Rank assignment to the various features is similar to DeepExplain. One drawback of the DeepLIFT approach is that it is computationally expensive: after each forward and backward pass over a number of iterations, the sensitivity with respect to the features is recorded. Occlusion is an important technique for extracting important features from a given image {{cite:80e42ab46a3a9c457b0873dd468c70a628024a5c}} {{cite:b0a9bc6f0645726701d2000d6b28cb4cbc549870}}. It is a straightforward, model-agnostic approach that explores the latent feature ranking of a model. Occluding every pixel individually is computationally expensive; hence 3 x 3 and 10 x 10 tiles are generally used for occlusion {{cite:e03fcc0341bad63516eab9f435bf02811f5c0971}} {{cite:a15b2f89e738bfd48216f3071e1d3f148436e1a9}}. There is a trade-off between the size of the tiles and the accuracy.
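A minimal sketch of the tile-based occlusion procedure described here; the `model` interface is a placeholder mapping an HxW image to a scalar class score, and the constant fill value is one common choice among several.

```python
import numpy as np

def occlusion_map(model, image, tile=10, fill=0.0):
    """Model-agnostic occlusion: slide a tile x tile patch over the image,
    replace it with a constant, and record how much the model score drops.
    Larger drops indicate more important regions."""
    h, w = image.shape
    base = model(image)
    heat = np.zeros((h // tile, w // tile))
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            occluded = image.copy()
            occluded[i:i + tile, j:j + tile] = fill
            heat[i // tile, j // tile] = base - model(occluded)
    return heat
```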
m
46f473193b35b200d388703618cbba19
The inclusion of Pantheon alongside the CMB shifts the mean values of the parameters towards the {{formula:d7737a3c-7d24-4532-a3db-68bb9afbb61a}} CDM values, but the indication for non-cold DM ({{formula:fe152c05-9e98-4931-9fb4-f98abf5bc601}} at {{formula:4d1fca79-5df5-4fdc-b7de-5d0dd55d057b}} ) is maintained: {{formula:a9f6ce62-12d8-4cf7-a8f1-d7e4106b6a6c}} at 68% CL. As in the CMB+BAO analysis, we find no clear evidence of a non-zero coupling. The correlation between {{formula:d20606d6-d7f9-4078-9a7a-a9c229ae4732}} and {{formula:d656bcbe-51de-4df4-b433-29f23f6ce636}} remains the same as observed in the previous cases, and the matter density parameter also increases. Finally, we notice that the {{formula:b8a481c3-d61b-42fb-b9da-1a0367fad910}} tension between Planck (within the {{formula:b229e590-f601-4c54-ac2a-91d6b6d497f7}} CDM paradigm) {{cite:b7865bd1e9b56f8038e44b19950f20e281d39ae8}} and the SH0ES collaboration {{cite:aab58494fb11297d413f635441fecef9bb421074}} is reduced to {{formula:b5dbe62c-b11a-40d3-b6ba-288025fe187e}} for this dataset combination.
r
0f54eb7b97f070d82a5bde55b5ea2c10
In this section, we evaluate the proposed schemes through simulations. There are 40 MTCDs uniformly distributed, with a BS in the center. We adopt the “data-centric” clustering technique in {{cite:1c3b478e8fa049dda6f79285bf1684f4d9f0fdf9}} for cluster formation of the MTCDs; the number of MTCGs is set to 12, and the maximal number of MTCDs associated with one MTCG is 4. For NOMA, all MTCGs are classified into 6 clusters based on the strong-weak scheme {{cite:e03e81e4ca003a7cc5e881bf164065dcc879f130}}, {{cite:f9c6bb846c3ab60036eb81196c6b8f3b33b0bbcf}}.
r
6763d7b96479f04d65afd708cdabd473
The autoregressive neural network (ARNN) is a modification of the artificial neural network (ANN), specifically designed for modelling nonlinear time series datasets {{cite:bb9bdf33441a762294cdf167d4ff63faed3453f7}}. The ARNN model comprises a single hidden layer between its input and output layers. The ARNN{{formula:98100978-695d-438d-9d01-04e17ce6bcb9}} model passes {{formula:29fc231c-909c-4b75-aeaf-c0b94ab9d9e0}} lagged values of the input from its input layer to the hidden layer comprising {{formula:e2811a0c-d319-4d04-93a9-1d801e4b576c}} hidden neurons. The value of {{formula:69198f27-3cba-4734-a98b-6325f287d481}} is determined using the formula {{formula:879075ba-af94-4646-a1ca-c46dca4ebe8c}} , as proposed in {{cite:5171a938a949bdcf67e7f5faf0b19c725c08f2ec}}. After being trained by a gradient descent back-propagation approach {{cite:54479c8c0f9cbc7ac820e37ac62e2a787e4d2f59}}, the final forecast is obtained as a linear combination of its predictors. The mathematical formulation of the ARNN model is given by: {{formula:58b7dc35-8d18-4e44-8d00-687c96d79542}} 
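A hedged sketch of fitting an ARNN(p, k) with a generic single-hidden-layer regressor follows. The rule k = (p + 1) / 2, rounded down, is one common choice consistent with the kind of formula cited above (an assumption here), and the library and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_arnn(series, p):
    # One common choice for the number of hidden neurons: k = (p + 1) // 2.
    k = (p + 1) // 2
    # Build the lagged design matrix: row t holds series[t-p], ..., series[t-1].
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    model = MLPRegressor(hidden_layer_sizes=(k,), activation="logistic",
                         solver="adam", max_iter=2000, random_state=0)
    return model.fit(X, y)

# Usage: model = fit_arnn(np.asarray(my_series, dtype=float), p=12)
```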
m
0d7f1f5004657f765d69f6fb14317c27