Commonalities and differences between equivariant and invariant learning. Equivariance is necessary but not sufficient for an effective landmark representation: the representation also needs to be distinctive, i.e., invariant to nuisance factors. This is enforced in the equivariance objective (Eqn. REF ) as a contrastive term over locations within the same image, as the loss is minimized when {{formula:e103c91c-b516-43ac-9f3a-69382fe7f593}} is maximized at {{formula:609140ce-4e7e-45ea-b127-5078416c9565}} . This encourages intra-image invariance, unlike the objective of contrastive learning (Eqn. REF ), which encourages inter-image invariance. A single image may nevertheless contain enough variety to guarantee some invariance; this is supported by the empirical performance of such methods and by recent work showing that representation learning is possible even from a single image {{cite:3fa62bfd089282ea60735b802280eedfc3160e31}}. However, our experiments suggest that on challenging datasets, with more clutter, occlusion, and pose variation, inter-image invariance can be more effective.
single-cell RNA sequencing data (scRNA-seq). Moreover, we analyze the quality of our curvature estimation methods in a data sampling scenario, where we sample points in the vicinity of an optimum. All test cases are chosen to assess the efficiency and accuracy of our methods. We trained CurveNet on {{formula:fa0f59a7-9f1b-4e8f-8441-43c444b2357f}} samples from idealized surfaces, since this defines the dimensions of the diffusion map used for curvature estimation on loss landscapes. As the parameter space of neural networks can be large, sampling points around the minima is a method of reducing dimensionality and restricting the estimation to a less complex space. We used intrinsic dimensions of {{formula:91b56fdc-e716-4744-81c6-a0774be886d4}} such that the same neural network can learn higher- or lower-dimensional spaces. We trained on 5000 different randomly generated quadrics, all sampled using {{formula:83fcedc5-1e6c-48d7-9fb4-bf62b0ff8157}} points, for each intrinsic dimension. CurveNet then outputs a quadratic form with {{formula:1a7465c2-8249-4788-b4dc-257e3ea2bd0b}} entries, where ({{formula:ff882f59-d05a-4464-a425-f147134a12a5}} ), which we compare with the known ground truth using the mean squared error (Table REF ). Training and testing were done on 8-core Tesla K80 GPUs with 24 GB of memory per chip. Architectural details and code for CurveNet and the implementation of diffusion curvature are available via an anonymized GitHub URL provided in the supplementary material. {{table:222abc0b-f25d-4311-bfd7-641c99d3b09e}}{{figure:66d954ad-5778-4071-b03a-fae8b875ec92}}

Toy test cases for curvature estimation

We generated a series of synthetic datasets where the primary objective is to estimate curvature at central points (which are not affected by edge effects). Figure REF shows a series of artificially generated {{formula:33e6ec92-8dac-4d57-96a8-3c8dd4973c1e}} surfaces whose curvature varies in two principal directions from positive to negative. The left column shows the curvature estimate given by Diffusion Curvature (Eqn. REF ). The second column depicts Gaussian curvature, the product of the two principal curvatures at each point, and the third column contains biaxial plots showing the correlation between Gaussian and Diffusion Curvature. We observe that Diffusion Curvature captures Gaussian Curvature: despite being scaled differently, Diffusion Curvature highlights essentially the same structures as Gaussian Curvature. This qualitative observation is quantified by the correlation, and overall we obtain high correlations for the different surfaces. The last surface is slightly different, as high values measured by Diffusion Curvature are more concentrated within the “cusp”, whereas Gaussian curvature spreads such values over a larger part of the surface. This example, though, demonstrates a benefit of Diffusion Curvature: in the context of loss landscape analysis, Diffusion Curvature is much more sensitive to such sharper minima, thus facilitating their detection.

Curvature estimation for single-cell data

We estimated the curvature of a publicly available single-cell point cloud dataset obtained using mass cytometry of mouse fibroblasts. Mass cytometry is used to quantitatively measure 2005 mouse fibroblast cells induced to undergo reprogramming into a stem cell state, using 33 channels representing various protein biomarkers.
Such a system is often called induced pluripotent stem cell (iPSC) reprogramming {{cite:666f1aca6d4e6835b8ef45de12a16e25d28cd6de}}. This dataset shows a progression of the fibroblasts to a point of divergence where two lineages emerge: one lineage that successfully reprograms and another that undergoes apoptosis {{cite:40962b8d926a33578ca4bbc48d073bd46d7826ca}}. We note that our model correctly identifies the initial branching point (with cells that do not survive) as having low values of diffusion curvature, indicating relatively negative curvature due to divergent paths out of the point (resulting in divergent random walks, see Figure REF ). On the other hand, it shows higher values, indicating flat curvature, along the horizontal branch. We also applied diffusion curvature to a single-cell RNA-sequencing dataset of human embryonic stem cells {{cite:40962b8d926a33578ca4bbc48d073bd46d7826ca}}. These cells were grown as embryoid bodies over a period of 27 days, during which they start as human embryonic stem cells and differentiate into diverse cellular lineages including neural progenitors, cardiac progenitors, muscle progenitors, etc. This developmental process is visualized using PHATE in Figure REF (left), where embryonic cells (at days 0-3, annotated in blue) progressively branch into the two large splits of endoderm (upper split) and ectoderm (lower split) around day 6. Then, during days 12-27, they differentiate in a tree-like manner into a multitude of lineages. Diffusion curvature, illustrated in the plot on the right, shows that the tree-like structure that emerges during days 12-27 consistently has lower curvature than the initial trajectory, which proceeds in a linear manner at days 0-3. This accords with the idea that divergent lineage structure is associated with low (negative) curvatures. Conversely, the endpoints of the transition, corresponding to the stem cell state (days 0-3) and the differentiated state (days 18-27), are associated with relatively high diffusion curvature values, indicative of positive curvature. {{figure:7dd9b47f-f292-4f9e-8f39-837f8319d936}}{{figure:556e158e-fdf4-4815-bd58-264babe7a42f}}

Hessian estimation and sampling

To obtain second-order information around a critical point, {{formula:297ff675-227a-4f40-ac0b-627e29040d14}} , we estimate a local Hessian: we first sample (for both the toy test cases and neural networks) {{formula:0e00128a-4c1c-4738-99e3-fd82444c63f9}} points, {{formula:4d021b0f-883e-427c-8782-4d4aa42eafd3}} , {{formula:5f58dd98-33d6-4824-ac4d-6766af6bd73d}} where {{formula:a6217aa1-91d2-4a7c-8d63-2b92168d63ef}} is the full dimensionality of the domain of the toy functions or of the neural network: {{formula:7cb7ca2e-68e1-43d5-8393-f557db966d38}} . The value of the objective function or loss, {{formula:f32bac6c-9ed5-4902-a321-2409e8e98e8b}} , can be obtained by evaluating {{formula:9bede271-262b-4504-9beb-bb177fa194fb}} . In both settings, this allows us to obtain an {{formula:bd043e9e-1033-4eed-817c-9c898f7283d9}} matrix, in which each row represents a sampled point {{formula:ce87da23-000a-4e27-8d67-24a908cb2091}} with its associated loss or objective value in the last column. We consider this column as a special loss axis and use it as an input to the neural network to learn the coefficients, along with the diffusion axes, which are obtained from the sampled points. The diffusion axes were obtained by constructing a diffusion map in {{formula:370c4c77-a3e6-4357-a3e0-b49d288297f9}} based on the sampled points around the optimum, i.e., the minimum.
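For illustration, the construction just described might be sketched in Python as follows (a minimal sketch of ours, not the authors' implementation; the helper names, the Gaussian-kernel diffusion map, and all hyperparameters are assumptions, and the precise sampling procedure is described next):

import numpy as np

def sample_loss_matrix(loss_fn, theta_star, n_points=1000, radius=1e-2):
    # Sample points on a small hypersphere around the optimum theta_star
    # and stack them with their losses into an (n_points, d+1) matrix.
    d = theta_star.size
    dirs = np.random.randn(n_points, d)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    pts = theta_star + radius * dirs                     # local samples
    losses = np.array([loss_fn(p) for p in pts])
    return np.hstack([pts, losses[:, None]])             # last column: loss

def diffusion_axes(pts, n_axes=4, eps=None):
    # Crude diffusion map: Gaussian affinities, row-normalised Markov
    # matrix, leading non-trivial eigenvectors as diffusion coordinates.
    sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    eps = eps or np.median(sq)
    K = np.exp(-sq / eps)
    P = K / K.sum(axis=1, keepdims=True)      # diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    return vecs[:, order[1:n_axes + 1]].real  # drop the trivial eigenvector

The diffusion coordinates and the loss column would then be paired as inputs to the network, mirroring the input pairs described below.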
We uniformly sampled 1000 points on a {{formula:070f6ab4-3da7-44d5-844f-5af5dd62ba2a}} -dimensional hypersphere, which was then scaled by the parameter space of the optimum or saddle, as well as the gradient at that point. Special care was taken to ensure the points were sampled locally, by evaluating the relative difference between the loss at locally sampled points around the optimum or saddle and the actual loss at the optimum or saddle. We then use these diffusion coordinates {{formula:e140ed75-6d61-42f4-8bb4-a4fceb276d8f}} and the value at the sampled point {{formula:cb502a76-da2d-468a-8eff-3a33a356b630}} as an input pair {{formula:5fe4abb0-9469-4113-aea8-5c6c294f8746}} to the neural network, which we ultimately use to estimate the Hessian. Figures REF and REF show the eigenspectrum of the Hessians estimated using CurveNet. We observe that the number of negative eigenvalues of the Hessian matrix (indicative of a maximum in the loss landscape) decreases over training epochs. In Figure REF , when comparing epoch 25 and epoch 200, we observe a marked shift in the (cumulative) density of eigenvalues towards positive eigenvalues, showing that the feed-forward neural network is approaching a minimum in the parameter space. Similarly, in Figure REF , a convolutional neural network trained on MNIST approaches a sharp minimum (large positive eigenvalues) over training epochs. These results are in agreement with our understanding of how model parameters are optimized during stochastic gradient descent, and demonstrate the capability of CurveNet to approximate local Hessians. Future work will further validate this methodology using networks that are deliberately trained to produce poor minima and exhibit low generalizability. {{figure:98b6b590-da36-4727-9012-e27642dd4f26}}{{figure:9d7946d8-3228-44f5-898d-b2981c21662d}}

Conclusion

We proposed diffusion curvature, a new measure of the intrinsic local curvature of point clouds. Our measure leverages the laziness of random walks, obtained using the diffusion maps framework. We link diffusion curvature to existing volume comparison results from Riemannian geometry. In contrast to these notions, diffusion curvature can be computed effectively via neural networks even for high-dimensional data. While we demonstrated the effectiveness of such a curvature measure by analyzing numerous datasets of varying complexities, our formulation also opens new research directions. Of particular interest will be proving additional results about our measure, relating it to existing quantities such as the Laplace–Beltrami operator, as well as formally proving its stability properties. We also want to develop new methods that use diffusion curvature to compare different datasets; being invariant under transformations such as rotations of a point cloud, diffusion curvature is a suitable candidate for assessing the similarity of high-dimensional complex point clouds.
This problem is related to the well-studied expansion testing problem {{cite:954f380218c28b74ac9ff5229aad75f62dae45a4}}, {{cite:2b64010014d7f5f9f1d062acd786c53bc67d726f}}, {{cite:50107858058f35f81d013b71be4852ea1e365e50}}, {{cite:4c58782ea6b498eba4c76e5d598e5e8acf3f7424}}, {{cite:cebf75db334de3755a5ec09c081f001964bb9803}}, which corresponds to the setting of one or two clusters, as well as to the problem of testing the cluster structure of graphs, where one essentially wants to determine {{formula:2a7eee2f-861a-4b07-95a5-773aa7cd8680}} , the number of clusters in {{formula:f2c8c659-3480-40be-8179-73382866bc80}} . The problem of testing cluster structure has recently been considered in the literature {{cite:0465516fd6f5dcf4e5e6e023fd8b0f98aa54a25f}}, {{cite:38313e4ccba20f4f5d114c1ef4ba59bd4a0fe902}}: given access to a graph {{formula:41761f26-746f-4b54-bc00-b8af5f3bcab2}} as above, compute the value of {{formula:92d17808-8d8e-4ef8-b1d4-06dc90d04b1e}} (in fact, both results {{cite:0465516fd6f5dcf4e5e6e023fd8b0f98aa54a25f}} and {{cite:38313e4ccba20f4f5d114c1ef4ba59bd4a0fe902}} apply to the harder property testing problem of distinguishing between graphs that are {{formula:0c229105-b37d-4321-81f3-b86cfd0c18cf}} -clusterable according to the definition above and graphs that are {{formula:c4d4dbdb-f218-4bce-9e2e-d9ff5f624680}} -far from {{formula:a48a9659-43e0-4924-a900-819b14c1d632}} -clusterable, but a procedure for computing {{formula:dcbe3b01-3a76-4caf-ae05-e542c27e17a3}} is the centerpiece of both results). It is interesting to note that the work of {{cite:0465516fd6f5dcf4e5e6e023fd8b0f98aa54a25f}} also yields an algorithm for our problem, but only under very strong assumptions on the outer conductance of the clusters (one needs {{formula:4130526a-46f1-4451-801a-b61130e854e4}} ). The recent work of Peng {{cite:7270376ca06b1f739cec9d1c37743d626fb540ac}} considers a robust version of testing cluster structure, but requires {{formula:24440255-374a-4f8d-be6b-c8c10e514964}} , just like the work of {{cite:0465516fd6f5dcf4e5e6e023fd8b0f98aa54a25f}}.
also called the REINFORCE rule {{cite:fbe15cc9a09b5166ef158c5442719b233ecc7726}}. This update rule has the simple interpretation of increasing the probability of choosing action {{formula:9c4701c2-93cf-44c4-a001-8f945c766909}} in state {{formula:2d630607-9e1e-48ba-b0aa-f240de26307e}} , proportionally to the return {{formula:b02b777e-d014-44e5-8ab2-9e82a1733887}} . In other words, as the return tells us how much cumulative reward we can achieve by being in state {{formula:7a4610c3-aa4b-41b5-b8d3-a0451aa50ce6}} , choosing action {{formula:2649a03f-94cc-45a3-a3e6-e20cfa323abb}} , and then continuing to behave according to {{formula:68abd2cb-4125-4018-8bd4-59d10ebca567}} , we can use it to scale the step in parameter space that increases the probability of choosing the action {{formula:e31a8a2c-3e15-4a49-8951-19718a64543f}} , because that action helped lead to this amount of return. If we have a large return, we would like to make the action that leads to it more probable than if we only have a small return. {{formula:c72902b4-1495-458e-a40c-bea1c6c8d244}} is an additional step size, or learning rate, that keeps the size of the update reasonably small. For the complete derivation of this rule via the policy gradient theorem, we refer to {{cite:f52eb0ccc64a59610e6129183cb8d1713413b01f}}, chapter 13.2. To apply the REINFORCE algorithm, the agent generates an episode using its (stochastic) policy, accumulates the gradient updates for all state-action-return triples, and then updates the parameter vector. This cycle repeats until some computational budget is used up.
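A minimal sketch of this episodic update cycle (our illustration, assuming a tabular softmax policy over a discrete action space and a toy environment interface; all names are illustrative):

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_episode(theta, env, alpha=0.01, gamma=0.99):
    # Generate one episode with the stochastic policy, then step each
    # visited state's parameters along grad log pi(a|s), scaled by the
    # return G observed from that step onwards.
    states, actions, rewards = [], [], []
    s, done = env.reset(), False
    while not done:
        probs = softmax(theta[s])                  # theta: (n_states, n_actions)
        a = np.random.choice(len(probs), p=probs)
        s_next, r, done = env.step(a)
        states.append(s); actions.append(a); rewards.append(r)
        s = s_next
    G = 0.0
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G                 # return from step t
        probs = softmax(theta[states[t]])
        grad_log = -probs                          # d log softmax / d logits
        grad_log[actions[t]] += 1.0
        theta[states[t]] += alpha * G * grad_log   # larger return, larger step
    return theta

Repeating this cycle until the computational budget is exhausted yields the full algorithm described above.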
Adding data from different types of sensors should be beneficial as well: we observed a slight performance increase when denoising Canon 500D pictures with models trained on both X-T1 and 500D data (rather than 500D data only), yet virtually no performance loss on the denoised X-T1 images when the 500D images were added to the training data. Likewise, adding the smartphone SIDD dataset {{cite:51213e75577aaf452239ce6a7c858704feb67381}} to a network's training data did not cause any noticeable loss.
In this section we discuss the physical implications of the overabundance of SNRs in the rims of the H i holes. As indicated in Section REF , a remarkable feature in NGC 6946 is the existence and classification of 121 H i holes {{cite:2163b805b6b6584fcb3e0ebe89a931fac36da781}}. Our data indicate an overabundance of SNRs located around the rims of the H i holes (Fig. REF ). The H i holes in NGC 6946 were analyzed in detail by {{cite:2163b805b6b6584fcb3e0ebe89a931fac36da781}}, and they concluded that the creation of some of the holes can be attributed to the expansion of superbubbles generated by multiple SN explosions {{cite:c93447b25a1077be584a053d316b0ca26c7f198f}}, {{cite:28e262601bb666b597c3953173dd758e7e9bed5f}}, {{cite:6fe3107359a3a113b9bb3eddb5f9f74622146ace}}. In addition, they found that stellar feedback in the form of a galactic fountain is probably the origin of some of the H i holes. This is in agreement with the lack of bright emission at multiple wavelengths inside the holes, suggesting that the H i holes are already devoid of gas, which could also explain the lack of SNRs located inside the holes.
As an illustration of this phenomenon, consider distributions exhibiting sinusoidal dependence {{cite:55b28ba05d791f7f49ce65e9927e3d6129422a5f}}, {{cite:1c07f7d3ba574cf1b77850de5bf94718a89af214}} with density functions {{formula:5616577f-c823-470b-80b8-3d7388403615}}
Findings from Experiment 1, where the language aspect of the fine-tuning step was evaluated, showed that the performance of wav2vec 2.0 based models can benefit from including speech samples from different languages, even in small amounts. These results align with the findings presented in {{cite:a240d97e8ab3fda0719fbc95b10af86649f20742}}, where wav2vec 2.0 models were evaluated on datasets containing Chinese and Japanese speech samples. In their study, the xlsr model, a wav2vec 2.0 based model pre-trained on almost 56k hours of speech covering 53 different languages {{cite:c99c7e4be21d93c74a49f3ffb8905034a5bae284}}, showed good generalization ability. Multi-language pre-trained wav2vec 2.0 models can lead to the development of more robust models that deal with language bias more efficiently {{cite:c7ff44497c3fcce78b2403b6736d4aa671ee60b5}}. In {{cite:c99c7e4be21d93c74a49f3ffb8905034a5bae284}}, a cluster analysis of the speech representations produced by the xlsr pre-trained model showed that languages that share a similar root, like English and German or Italian and Spanish, tend to cluster together. This can benefit languages with scarce annotated material, as they can rely on similar languages for fine-tuning and obtain higher prediction performance. High-performance techniques that require less annotated data are much needed and demand further exploration {{cite:c657809e3532663375fe01bc2b17d1de33b60964}}. Moreover, studies targeting the pre-training step would reveal how accurate a model can get using a language-dedicated pre-trained model.
In this work, we use three widely used post-hoc and modality-agnostic OOD detection methods: maximum softmax probability (MSP) {{cite:4a50f31212b6892d7bf7d547816ae9302c39a6f2}} and ODIN {{cite:579db86b2a719a1172f4887f4673996683ed6590}} as confidence-based OOD detection baselines, and the Mahalanobis detector {{cite:fa771767e59af702bdcea367992f4a086e4e7fcf}} as a distance-based OOD detection baseline. Note that ODIN and the Mahalanobis detector assume the availability of an OOD validation dataset to tune their hyperparameters; for all our experiments, however, we use variants of these methods that do not access the OOD validation dataset, following {{cite:4338f35d75ec0854d6cd73593583b278184bd097}}. The exact equations and details of how each OOD detection method assigns an OOD score to a given sample are provided in Section  of the Appendix.
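For concreteness, a sketch of the simplest of these scores, MSP (our illustration; ODIN additionally applies temperature scaling and input perturbation, and the Mahalanobis detector instead uses class-conditional Gaussian distances in feature space):

import numpy as np

def msp_score(logits):
    # Maximum softmax probability: higher score = more in-distribution.
    # logits: (batch, num_classes) array from any trained classifier.
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

A sample is then flagged as OOD when its score falls below a chosen threshold.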
Models of the world have proven to be useful for various tasks {{cite:cbaa96fd66cfb46bc34d5111c34abcece1c1949e}}, {{cite:6d1470d666f17aec0f36725056c320932a6ab5cf}}, {{cite:52cafb61ce83df94d76ed493c80b9396f5bb3813}}, including self-driving {{cite:1249393405f0d5d498c93c4cf441d00f24efb9ab}}, {{cite:d7b5495fad872f4d02fe03fe9d5bfe83932ea5a5}}.
The fine structure constant {{formula:e00dc86a-bcc5-4ab8-8d6e-3f2edcf2099c}} occupies a central position in the system of fundamental physical constants. It measures the strength of the electromagnetic interaction between charged elementary particles in the low-energy limit. Recently, the fine structure constant was determined with unprecedented precision, namely {{formula:c90529cd-d70d-42e0-ac3a-cce938747ca3}} with a relative accuracy of 81 parts per trillion {{cite:5467547196a69ccf3314fce53314fcf458e6c975}}. However, in 1937, Dirac argued that the fundamental constants of Nature may not be pure constants but may vary slowly with the epoch, and he proposed a gravitational `constant' decreasing proportionally to {{formula:03cae065-38b1-4987-8533-7ac50dc32f32}} {{cite:5f32ad662027d4d98d635ef6277e0acb47a662e5}}. Since then, theoretical and experimental investigations allowing space-time variation of fundamental constants have been pursued {{cite:eb51c59d4c519c9eb84752b30a3ffce848e73412}}, {{cite:24be53b8ccbb3b7dc915aadab5f963966fa6f9db}}, {{cite:b10fe204b655096aaa2edfee2f7557fa26960b22}}, {{cite:ce127ebec3950737294b65a5dbc56c1fae2fc1bf}}. In order to remedy the dire consequence {{cite:76c8ca27d9923dbc0b09103892578be093bef98d}} induced by the varying gravitational constant {{formula:3c56f227-df3e-40ee-a798-829c1e139fba}} , Gamow (in 1967) suggested that {{formula:a2b26a84-6c57-4cd1-b0c5-3510fa95a21f}} increases in direct proportion to the age of the universe {{cite:a6195c2767f0f88d090a07545cac0dbf04114755}}. Phenomenological models that assume an {{formula:1b4408b5-b4aa-45d1-8f1d-9c6c09011c12}} varying as some power law or logarithm of time, like those introduced by Gamow, represent the early work on a varying {{formula:0952e50d-59fc-48e0-9cd1-48df2e8ed17c}} {{cite:16f29d35d0a7441989aba1a18cc5e174893d1b85}}.
Present mainstream contrastive learning methods {{cite:20c21737c9211946ba19280b349c06c2821a25e9}}, {{cite:b1d131b2355f31077e4b0cf0d3a70abdcc1104d3}}, {{cite:8935f2c58809346dde1c2abc52bdd00a9708bde3}}, {{cite:607e49433c81ee2c4580c31ed1d70c83bfdc8133}}, {{cite:c2299e2c499e2b5e420b1c90da28b6f7f793906d}}, {{cite:1df5cff2dfc155789f5294001bc80a9ace1c5b02}}, {{cite:fbce842337d439ce884ca8123f3d14842f46be9c}}, {{cite:f1b4eef62e046c6f841ee50e7c68bf0b7b2f7795}}, {{cite:42aae2ada34f4711c5979aa25988e045ed71ba2d}}, {{cite:e19932857619c0412c00759c65802a78e5d1a829}} learn representations by instance discrimination, which treats each image (pixel) in a training set as its own category. This scheme achieves state-of-the-art performance on downstream tasks. The common training objective, InfoNCE {{cite:bc1876e9f27efaaceefca390301e2b290e2cabcd}}, {{cite:b1d131b2355f31077e4b0cf0d3a70abdcc1104d3}}, {{cite:d94adf2351d0cf0e203524e117c48d156a3935ed}} or one of its variants, maximizes mutual information {{cite:bc1876e9f27efaaceefca390301e2b290e2cabcd}}, {{cite:3347ce5dc2e4124c9718c2fee5eb4a5969dd5e54}}, which requires a large number of negative pairs to perform well {{cite:b1d131b2355f31077e4b0cf0d3a70abdcc1104d3}}. Adopting a large batch size is a straightforward remedy, but it consumes substantial resources. To address this, MoCo {{cite:20c21737c9211946ba19280b349c06c2821a25e9}} and {{cite:168ee5f39048ae90b0ee17e5f28ec1e6e19ee657}} propose memory structures. Some recent designs {{cite:8935f2c58809346dde1c2abc52bdd00a9708bde3}}, {{cite:08d7ae264abba8c0a0c0f74d133bd11cb51c262f}}, {{cite:dc0757f4f053869e8a6e7ed59e3b4db414d89111}} contrast without negative samples, using an asymmetric Siamese structure or normalization techniques. Recent work {{cite:1df5cff2dfc155789f5294001bc80a9ace1c5b02}} proposes self-supervised learning with the Hilbert-Schmidt Independence Criterion, which yields a new understanding of InfoNCE, while RELIC {{cite:fbce842337d439ce884ca8123f3d14842f46be9c}} improves generalization through an invariance regularizer under a causal framework. NNCLR {{cite:cfa8b17d5ac583fadafc04da73565615eed46cdf}} adopts the nearest neighbour from the dataset as the positive. UniGrad {{cite:42aae2ada34f4711c5979aa25988e045ed71ba2d}} provides a unified contrastive formula through gradient analysis. Instance discrimination, being an excessively fine-grained classification setting, entails problems such as false-negative pairs {{cite:aee8626c435d48a5a39bd77e5ce645ee32589c4f}}.
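As a reference point for this discussion, a minimal sketch of the InfoNCE objective over a batch of embedding pairs (a generic formulation of ours, not the exact loss of any one cited method):

import numpy as np

def info_nce(z1, z2, temperature=0.1):
    # Row i of z1 and row i of z2 form a positive pair; all other rows in
    # the batch serve as negatives. Embeddings are L2-normalised first.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives on diagonal

The need for many negatives is visible here: the normalising sum in the denominator improves with the batch (or memory bank) size.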
Although many evolutionary models exist in the literature, here we adopted the evolutionary models from {{cite:138fd7604cf787166be298c0b06a83016ff1ce4f}}, as they are well-suited for lower-mass stars and younger ages, reflecting our target sample. The dynamical mass estimates for the binaries with individual masses are plotted against stellar isochrones from the BHAC15 models in Figure REF in a mass-luminosity diagram. The individual mass data points, with distances to the respective system, corresponding absolute magnitudes, their approximate associated age, and estimated theoretical mass from the BHAC15 models, are listed in Table REF . The age ranges listed in Table REF show the ages of the isochrones used to calculate the theoretical mass, and not the given age range of the respective YMG shown in Table REF . The absolute magnitudes were calculated from the unresolved 2MASS {{formula:ae1e75e0-a684-40a5-ab9d-c38dac641cb2}} -band magnitudes of the systems, together with the {{formula:ecc9c0c4-7d61-4b91-b0a8-173252127e3a}} -band flux ratios from our SPHERE observations or from previous SINFONI observations {{cite:0c6d3ba7f9b5e68c627a89ae16efbfea40703e2e}}. We therefore adopt the {{formula:32b71490-c6f1-451c-a2f6-08707e8635f0}} naming convention for the absolute magnitude in the fourth column of Table REF in order to highlight this difference. {{figure:7b142a43-c10a-414f-9a94-2b9a96078040}}{{table:9f484540-36b3-4384-bb9a-8d31ff526a8f}}
We have concentrated here on local- and non-local-in-time tail effects. For the sake of completeness, let us conclude with a few comments regarding non-linear conservative memory terms, which are expected to be indistinguishable from other local-in-time effects and therefore readily incorporated in the B2B dictionary. The leading memory contribution may be computed through the same topology as in Fig. REF (but including the quadrupole coupling and the full radiative field in the middle line instead of the monopole term and quasi-static potential). Similarly to tail terms, the separation between dissipative and conservative effects must be performed at the level of the full equations of motion, which may be derived using the Keldysh-Schwinger formalism {{cite:31b4e50d36da25304271d177ff2de9fbe681ab25}}, {{cite:12f6347dc4a19f0d16bdecabdbff182d403be3e3}}, {{cite:d32c5db16ff83276d42502f118c6b34d28c6a8e5}}. Yet, at the 5PN order at which radiation-reaction memory corrections start to contribute, we also encounter other non-linear dissipative effects correcting the leading (Burke-Thorne) back-reaction force. The additional terms may be obtained via the EFT approach by including the (seagull-type) non-linear worldline coupling between the binary's quadrupole and the curvature tensor, see e.g. {{cite:9cf41dc856b3a49d3c31721d41d0a4d1ee3e26c6}}, {{cite:51b21c817d3a4ab0cf817988b4c82e990d044876}}. These should not be confused with effects that enter the dynamics at quadratic order in the leading back-reaction force. In the PM EFT language of {{cite:39bd6f6b39a5f4f2463e1a675ba1ca300f01c561}}, the latter arise through iterations involving the deflection due to the leading dissipative effects inserted into the tree-level radiation-reaction effective action. As it turns out, the extra terms also entail three quadrupole moments and an even number of time derivatives, as in the memory contribution. Therefore, they can mimic the scaling of time-symmetric conservative effects. Furthermore, total time derivatives (known as `Schott terms') may not only be present in the balance equations, they may also remain once averaged over the orbital motion. Hence, one must exercise special care when separating the various pieces entering the dynamics through the product of (more than two) multipole moments. Although yielding somewhat subtle effects, all the conservative memory terms are purely local in time, and are therefore automatically included in the original B2B dictionary. We will discuss these contributions elsewhere.
Other examples are the configuration of {{formula:aaef6498-e7c7-4884-8ce4-a8a3e01517ea}} vertices of a regular icosahedron inscribed in {{formula:e91e6be3-c6fa-4ded-a444-fa932407ecfe}} , the kissing configuration of {{formula:158eb5a9-42d9-45a4-b36f-41690fa266ea}} points on {{formula:b0c3f58c-3874-4e6a-ba0f-0dcc7d71e79d}} , and the 552-point configuration on {{formula:9c8decb5-eeab-4995-9e44-2d2a252074c5}} (equiangular lines), all of which are 3-sharp. Further examples are the configuration of {{formula:43f780bd-dc3c-4400-8db3-8a5983d33f97}} minimal non-zero vectors of the {{formula:d3ce7929-83f2-4e37-b5b9-80e585f0b2c0}} root lattice normalized to lie on the unit sphere {{formula:2d9008b2-6377-44a9-a312-5251eccf732c}} and the kissing configuration of {{formula:84f67839-6ec2-49f1-b470-87cc8a06320c}} vectors on {{formula:1bab8a1f-0d85-402f-85eb-6088c0b5781b}} , both of which are 4-sharp, and finally the configuration of {{formula:bf3b4bfb-a722-4363-a199-d6c520a7fb32}} minimal non-zero vectors of the Leech lattice normalized to lie on {{formula:c7d47626-379f-40ac-a75e-574faa1f0425}} , which is 6-sharp. We remark that any antipodal sharp configuration is a tight design (see {{cite:bf8d4247f0febd77a52b624fbf35e68c7d01ba3b}}).
Ubiquitously across biology, complex high-dimensional systems interact with their environment through low-dimensional channels. The computational modelling of such setups has advanced considerably in the past two decades with the emergence of reservoir computing techniques {{cite:050c237aa15e2b6c832916bf8009f7f9b4b8314d}}, {{cite:3fb279a2fec75042e31408d38e66fdecbfeda0ac}}, where learning acts on such low-dimensional bottlenecks. Despite the simplicity of this learning scheme, the factors contributing to or hindering the success of training in reservoir networks are in general not well understood {{cite:7f6ee32db1d6946ae91a4a9734a4c3c8c52d49f1}}. In particular, a theory is lacking for predicting – based on the characteristics of the reservoir and the target function – dynamics and performance of trained feedback networks.
To improve the present theory, we can account for ionic size in electrolyte solutions {{cite:a96a34aa2c00e0a3292f1f9ab23f445df54d091d}}, {{cite:b5e2ce41f71f79c4606a604ca531b34ca9cb8562}}, {{cite:91253414b7ce70ddf2f8183c3b3a08ce9705ca12}}, {{cite:01d06cde28c1c167143e3fdd94a261ccd85ce28e}}. However, since we consider cases where the salt concentration and polyelectrolyte charge density are not high, the excluded volume effect of electrolyte ions is negligible, as mentioned in {{cite:228e15e0e8a4a1fdcd206719dd9d118530bc0365}}. In the future, we will try to obtain a more complete model of pH-responsive polyelectrolyte brushes by considering other effects such as solvent polarization {{cite:c70b7b206dfadfa9114446bd0abd607b9dcc555f}}, {{cite:c417a0a654ca4f12c65b450e1f7ba8e9070e1e07}}, {{cite:8822889e471baaed2ad400c064d2aa7834713805}}, {{cite:bfac61d5b1aa81e04764749a7d3246be877c3f04}}, {{cite:9a89216bb0e841e52f84e2bb275ba6241c076f04}}, {{cite:692a460347882846816c4ca3e8259b70fc5f02b8}}, {{cite:087f4e45539d9f438f57ae39e0f33365a8147fcb}}, {{cite:7a97c72968308e36d62b85bfc934243c95f97445}}, ionic correlations {{cite:87b31ff79df74a8a52e52d9f0dc7b234fcb7bb84}}, and the polarization of ions {{cite:c37f502c12c0e32bb0707f92c7c57fe4889cdead}}, {{cite:ca33eedaa6e89c9bbba60bd11f3ef9650d6883a6}}, {{cite:6240474c76785478beb0e4a387ba9e8508d5c422}}.
In this paper, we have presented a comprehensive study of the lottery ticket hypothesis (LTH) for vision and language. Below, we discuss some limitations of the current study. ({{formula:e5aa87a5-29fa-4acb-9d01-a53ea9fcf035}} ) Efficiency: We mainly focused on the scientific study of LTH. For future work, we plan to investigate the real speedup results on a hardware platform that is friendly to unstructured pruning, such as XNNPACK {{cite:503c104095994ee448b4d42ec7a2bb21009a7725}}. ({{formula:8eb18d4f-342b-44fe-85a3-0468d59e41a1}} ) Object Detection: For UNITER/LXMERT, we studied the LTH for multimodal fusion, while keeping the object detection module untouched. In terms of end-to-end VLP, we focused on ViLT. For future work, we plan to study the LTH of object detection and other end-to-end VLP models.
Our system does have a number of limitations. First, the performance on novel tasks varies significantly. However, even for tasks that are less successful, the robot often exhibits behavior suggesting that it understands at least part of the task, reaching for the right object or performing a semantically related motion. This suggests that an exciting direction for future work is to use our policies as a general-purpose initialization for finetuning on downstream tasks, where additional training, perhaps with autonomous RL, could lead to significantly better performance. Second, our language commands follow a simple “(verb) (noun)” structure. A direction to address this limitation is to relabel the dataset with a variety of human-provided annotations {{cite:d07ffce4afb2e2b327c8289879a943814f2b933f}}, which could enable the system to handle more variability in the language structure. Another limitation is the lower performance of the video-conditioned policy, which encourages future research on improving the generalization of video-based task representations and enhancing the performance of imitation learning algorithms as a whole, as low-level control errors are also a major bottleneck.
Perturbation-Based Methods.   Different from the above works that require the mathematical details of the model, some works treat deep models as black boxes. These methods usually localize the discriminative image regions by perturbing the input. For instance, Fong and Vedaldi {{cite:d5d5e150316852be08d92ebd920d1b9123a2e509}} propose to explain neural networks by learning the minimal deletion to an image that changes the model prediction. As with SmoothGrad and VarGrad, the sensitivity of SR networks to disturbances and perturbations makes it difficult to use these approaches for explanation.
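A toy sketch of the perturbation idea (occlusion-style, i.e., simpler than the learned minimal deletion of Fong and Vedaldi; model is any callable returning class probabilities, and all names are illustrative):

import numpy as np

def occlusion_map(model, image, target_class, patch=8, baseline=0.0):
    # Slide a constant patch over the image and record how much the
    # target-class probability drops; large drops mark salient regions.
    H, W = image.shape[:2]
    base_prob = model(image)[target_class]
    saliency = np.zeros((H, W))
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = baseline
            drop = base_prob - model(perturbed)[target_class]
            saliency[y:y + patch, x:x + patch] = drop
    return saliency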
To investigate the efficacy of the ViT backbone for visual relational reasoning, in particular on systematic generalization, we introduce new systematic splits of canonical benchmarks and compare the ViT backbone with the CNN backbone. Results on GQA show that switching to ViTs in the MCAN model {{cite:b2d56538d12e37a32b2eeadfbf003f273df10b13}} brings an immediate 11% gain in accuracy. However, the performance gap between the original GQA testing split and the new systematic split remains considerable (15% in accuracy) for both backbones. This suggests that generic ViTs still need to be improved to tackle the reasoning task, especially systematic generalization. Recent works have shown that neural networks can learn representations that generalize better by learning auxiliary tasks of predicting human-specified concepts {{cite:987e7d287bf0144f41a6fd3fd0cc8f0e042aadd9}}, {{cite:08e26686f92384033833130f00a241cfe30c03df}}. A natural question emerges: can we exploit these concepts to improve the reasoning ability of ViTs? {{figure:2a14740a-a8f8-4e9e-aac7-95254ae2c374}}
Recently, a number of works such as {{cite:d40145b21a43c2bec4a99262c9ae699d58783411}}, {{cite:0d0a43d37e8271cb364ced3e45299df80540d12a}}, {{cite:aeee7b76b60de521b2de42614273fbbb46e6f077}}, {{cite:c38b3f0573b9109dc2b26c3467c2980d572de38a}}, {{cite:ee72a58dd00271c213985bbbcbb8fca4ec3782d2}}, {{cite:b2c63cbdfd7415de0d4d767505d5a48ad0896634}}, {{cite:f243ea630fbed7ce8df2464417bc479c74ec197b}}, {{cite:6dfb3ea1ab0a3f7ca422b614d0e6aa4d96384172}}, {{cite:affd20ed1f88a761486b1c29105c9d593631dbc5}}, {{cite:a4c996783490497d6221b0f2c31f853ee00c6eae}}, {{cite:932abea95ef150fcf9e7fb68ecc1eccde66acf47}}, {{cite:345a10c2471fe1194a69ff8e5f6e2783f6239d1c}} have studied problems related to the implementation of RSMA in wireless networks. In {{cite:d40145b21a43c2bec4a99262c9ae699d58783411}}, the authors outlined the opportunities and challenges of using RSMA for multiple input multiple output (MIMO) based wireless networks. The authors in {{cite:0d0a43d37e8271cb364ced3e45299df80540d12a}} developed a rate splitting algorithm for the maximization of users' data rates. The authors in {{cite:c38b3f0573b9109dc2b26c3467c2980d572de38a}} developed an algorithm to optimize the users' sum-rate in downlink RSMA-based multi-user multiple input single output (MISO) systems under imperfect channel state information (CSI). The work in {{cite:ee72a58dd00271c213985bbbcbb8fca4ec3782d2}} showed that RSMA can achieve better performance than NOMA and SDMA. In {{cite:b2c63cbdfd7415de0d4d767505d5a48ad0896634}}, the application of linearly-precoded rate splitting is studied for MISO simultaneous wireless information and power transfer (SWIPT) broadcast channel systems. The authors in {{cite:f243ea630fbed7ce8df2464417bc479c74ec197b}} investigated the rate splitting-based robust transceiver design problem in a multi-antenna interference channel with SWIPT under norm-bounded CSI errors. The work in {{cite:6dfb3ea1ab0a3f7ca422b614d0e6aa4d96384172}} developed a transmission scheme that combines rate splitting, common message decoding, clustering, and coordinated beamforming so as to maximize the weighted sum-rate of users. In {{cite:affd20ed1f88a761486b1c29105c9d593631dbc5}}, the energy efficiency of the RSMA and NOMA schemes is studied in a downlink millimeter wave transmission scenario. The authors in {{cite:a4c996783490497d6221b0f2c31f853ee00c6eae}} used RSMA for a downlink multiuser MISO system with bounded CSI errors. The data rate of using RSMA for a two-receiver MISO broadcast channel with finite rate feedback is studied in {{cite:932abea95ef150fcf9e7fb68ecc1eccde66acf47}}. Our prior work in {{cite:345a10c2471fe1194a69ff8e5f6e2783f6239d1c}} investigated the power management and rate splitting scheme to maximize the sum-rate of the users.
However, most of the existing works such as {{cite:d40145b21a43c2bec4a99262c9ae699d58783411}}, {{cite:0d0a43d37e8271cb364ced3e45299df80540d12a}}, {{cite:aeee7b76b60de521b2de42614273fbbb46e6f077}}, {{cite:c38b3f0573b9109dc2b26c3467c2980d572de38a}}, {{cite:ee72a58dd00271c213985bbbcbb8fca4ec3782d2}}, {{cite:b2c63cbdfd7415de0d4d767505d5a48ad0896634}}, {{cite:f243ea630fbed7ce8df2464417bc479c74ec197b}}, {{cite:6dfb3ea1ab0a3f7ca422b614d0e6aa4d96384172}}, {{cite:affd20ed1f88a761486b1c29105c9d593631dbc5}}, {{cite:a4c996783490497d6221b0f2c31f853ee00c6eae}}, {{cite:932abea95ef150fcf9e7fb68ecc1eccde66acf47}}, {{cite:345a10c2471fe1194a69ff8e5f6e2783f6239d1c}} studied the use of RSMA in the downlink rather than the uplink. In fact, using RSMA for uplink data transmission can theoretically achieve the optimal rate region {{cite:e77171bb560ebd333a2695cd65f491f83fc1945b}}. Moreover, none of the existing works in {{cite:d40145b21a43c2bec4a99262c9ae699d58783411}}, {{cite:0d0a43d37e8271cb364ced3e45299df80540d12a}}, {{cite:aeee7b76b60de521b2de42614273fbbb46e6f077}}, {{cite:c38b3f0573b9109dc2b26c3467c2980d572de38a}}, {{cite:ee72a58dd00271c213985bbbcbb8fca4ec3782d2}}, {{cite:b2c63cbdfd7415de0d4d767505d5a48ad0896634}}, {{cite:f243ea630fbed7ce8df2464417bc479c74ec197b}}, {{cite:6dfb3ea1ab0a3f7ca422b614d0e6aa4d96384172}}, {{cite:affd20ed1f88a761486b1c29105c9d593631dbc5}}, {{cite:a4c996783490497d6221b0f2c31f853ee00c6eae}}, {{cite:932abea95ef150fcf9e7fb68ecc1eccde66acf47}}, {{cite:345a10c2471fe1194a69ff8e5f6e2783f6239d1c}} jointly considered the optimization of power management and message decoding order for uplink RSMA. In practical RSMA deployments, the message decoding order affects the transmission rates of the uplink users and, thus, must be optimized.
The first key feature of the proposed method {{cite:2b5143ec339a158f1e2ba3502e433970178f166e}} is the resonant ALP production via the {{formula:7e0118a0-a757-421e-a380-36d86c49c4cd}} -channel exchange within the {{formula:b49ccf7a-95c6-4bc2-a611-4ed050fa042a}} uncertainty, which drastically enhances the production rate {{cite:2b5143ec339a158f1e2ba3502e433970178f166e}}. The second key feature is stimulated decays of produced ALPs to fixed final states via energy–momentum conservation between four photons in the initial and final states. This stimulated resonant scattering rate eventually becomes proportional to the square of the number of photons in the creation laser beam and to the number of photons in the inducing laser beam. This cubic dependence on the number of photons in the beams offers opportunities to search for ALPs with extremely weak coupling when the beam intensity is high enough {{cite:8c18deb197ed76612978cfadbf0af99aa7a64969}}.
In this paper, we use sparsity-based regularization, where the a priori assumption on the unknown object is sparsity of {{formula:eff754ba-1be6-446d-8dae-5b55062ba3de}} with respect to a frame {{formula:dc0ba17f-360f-47f4-8eaa-9ffdc0823a0d}} of {{formula:3e148a4f-5b79-4b5b-be6f-0c6a9c175ba5}} , cf. {{cite:b70f164bfed75d9109c6016dc86b95e79dedabe6}}, {{cite:af5f62cb952ef736699de93a2de8dd634090b3f4}}, {{cite:858d74abeb8e298d4d6a2d0a6e46e799774998be}}, {{cite:4c6fa50c0238dba4a8881e32375d68d41ecc2eae}}, {{cite:b74467f84c0c5e703f53f2c1e572ded91ae37c7e}}, {{cite:e14f7fff9bd188ec26979c07b6ebc97a9ff5d776}}. That is, we regularize the recovery of {{formula:fe9b8765-24fa-45a3-aa32-89033160c031}} from measurements (REF ) by enforcing sparsity of {{formula:5e2ef704-f007-4907-8526-029caf24ad1e}} with respect to a suitably chosen frame of {{formula:748d8ecd-6fa9-42ad-8cb2-45f4438e5bbf}} . Sparse regularization is well investigated and has been applied to many different imaging problems, and by now many algorithms are available that implement it. However, when dealing with frames, there are at least two fundamentally different concepts implementing sparsity, namely the synthesis and the analysis variant. The reason for this lies in the fact that expansions of {{formula:0e3eb8d4-8d86-4a14-9af8-281862c86e7c}} with respect to frames are not unique (in contrast to basis expansions). In the synthesis variant, it is assumed that the unknown is a sparse linear combination of frame elements, whereas, in the analysis variant, it is required that the inner products {{formula:35ab5fc0-6c40-4a80-8861-4be0c61cf200}} with respect to a given frame are sparse. The difference between these approaches has been pointed out clearly in {{cite:691e5e458f76d950bc33c4da2e72ef9f84b10e06}}.
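In symbols (our notation, not the paper's: forward operator F, data y, synthesis operator D of the frame and its adjoint analysis operator D^*), the two variants read

\[ \min_{c} \tfrac{1}{2}\, \| F(Dc) - y \|^2 + \lambda \|c\|_1 \quad \text{(synthesis)}, \qquad \min_{f} \tfrac{1}{2}\, \| F(f) - y \|^2 + \lambda \|D^* f\|_1 \quad \text{(analysis)}. \]

In the synthesis variant the unknown is recovered as f = Dc from a sparse coefficient vector, whereas the analysis variant penalises the frame coefficients of the unknown itself; the two coincide only when the frame is an orthonormal basis.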
There are certain limitations to using the DRL framework for a fraud detection task. Agents trained on previously collected datasets without any active environment interaction are prone to overfitting as a result of excessive training {{cite:9c8d098752ce16916a4900cdb3712a88d3bf8bc7}}. Their performance is bounded by the size of the dataset and highly dependent upon the state and reward definitions. Further, transaction embeddings learned via better representation learning methods {{cite:7a4900e9bce5966167d5aaf282e61835b1873aa4}} can provide a better state representation and can help the agent reach the high-reward regions of the state space. {{figure:aea70dde-927f-4033-95ec-6a876abec63c}}
In the last two decades, lattice QCD Monte Carlo calculations have emerged as a reliable non-perturbative method to study hadron spectra. For {{formula:d09d7f79-a2be-4057-941b-d46143f03a58}} systems it has been shown unambiguously that the ground state potential is {{formula:c1c49567-27d2-418c-9a25-69f656b394dd}} , with inter-quark distance r {{cite:0bf34d46592c2df4572c5689e3010ff17546ebcb}}, {{cite:2b92363c0a05b7c80ae7b7def1f06cda8e3edee2}}, {{cite:41e295550f196f8dbb9129f35ef6dd981db8b2fe}}, which is consistent with the standard NRQM potential of Coulombic + OGEP + linear confinement {{cite:1bd1e6f4aee5f8061bbdac65c63d5458571ab93f}}. The effect of gluonic excitations in the three-quark system has also been investigated in lattice QCD. It has been shown that for low-lying hadrons with excitation energy below 1 GeV, the effect of the gluonic excitations is negligible; hence quark degrees of freedom play a dominant role in low-lying hadrons, which resolves the absence of gluonic excitation modes in low-lying hadron spectra {{cite:6ccdcd6febb4b36ce21c71ed7a6c65fb4fc78141}}. The static three-quark potential has also been studied in detail using SU(3) lattice QCD, and detailed analyses of the lattice QCD data for the 3q potential support the Y-ansatz {{cite:2b92363c0a05b7c80ae7b7def1f06cda8e3edee2}}.
Our proposed ASiT framework is based on GMML {{cite:2220be187d838fba6df2bd796ae68f1d0c6d829f}}, briefly summarized in Section REF , and on the self-learning of data and class tokens with the incorporation of knowledge distillation {{cite:553bbc5ddbf77953574ff58fc2acf228d5c9c95b}}, explained in Section REF .
It is empirically well known that stock return volatility increases more after negative returns than after positive returns {{cite:ec78715f8f64e739b1cc9c606a255db49c0ab52e}}, {{cite:a3b0062a8850ba5677f2202c839bc4839dfbe3f7}}. This volatility asymmetry is called "the leverage effect" and causes a negative correlation between stock returns and volatility. To capture the leverage effect, various GARCH-type models with volatility asymmetry have been introduced, e.g. {{cite:9e0508f966b0e64728d2462934ce771c14f3e689}}, {{cite:04b1cbf488d2e53f6396c3d27927b232753f74da}}, {{cite:1e1eb6361c06821ec0cfc9fa3141edffafd9dd0b}}, {{cite:b99870dc10c648423f63132c71fc652e90649f3b}}, {{cite:f0496d5e96114161c77c8a74821ab7873d759e07}}. For the TGARCH model, the volatility asymmetry is measured by the {{formula:ff0428b7-b5ff-45b9-beee-c1888412ab30}} parameter in Eq (3); when the leverage effect exists, the {{formula:0312af99-a5d4-426a-9dbc-942eac06e2fd}} parameter takes a positive value. For the inverted volatility asymmetry observed in the Bitcoin market, on the other hand, the {{formula:3359087a-2f01-499f-9566-93c9da7839b4}} parameter takes a negative value, and volatility reacts more to positive returns than to negative ones. {{formula:92e67833-6d9b-45e3-9105-dcf8c32694c2}} is the coefficient of an autoregressive model of order 1 (AR(1)) that captures the serial correlation.
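For reference, a standard TGARCH(1,1) specification with an AR(1) mean can be written as (our rendering; the paper's Eq. (3) may differ in detail)

\[ r_t = \mu + \phi\, r_{t-1} + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \]
\[ \sigma_t = \omega + \alpha\, |\varepsilon_{t-1}| + \gamma \max(-\varepsilon_{t-1}, 0) + \beta\, \sigma_{t-1}, \]

so that \gamma > 0 amplifies volatility after negative returns (the leverage effect), while \gamma < 0 makes volatility react more strongly to positive returns (the inverted asymmetry described above).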
Again, the parameters {{formula:0acc0827-8ad0-4012-88de-11d2ccda6f76}} can be optimized and updated stochastically, leading to the REINFORCE algorithm {{cite:d7a5c6e60876529eb71c25d8607a4c312c1e48ba}}, {{formula:16413698-698a-477c-9c7d-611049aafd3f}}
In 1929, Paul Dirac declared that non-relativistic quantum mechanics was complete, and that approximation schemes were needed to simplify the sophisticated quantum-mechanical calculations {{cite:b5da8971fa1f74c28c2acf1ead2eea8f9e246467}}. Physicists and chemists followed Dirac's advice and developed the mathematical framework to solve the Schrödinger equation of atoms, molecules, and solids over the 1930s, '40s, '50s, and '60s. Meanwhile, the first digital computers were built during the '40s and rapidly grew into powerful machines capable of performing quantum calculations of atoms and single molecules by 1965 {{cite:e341be50f3c0406bc6aa2f3970a521340abece1a}}. At the time, computers were able to give an approximate solution to the Schrödinger equation of many-particle systems using the Hartree-Fock approximation method for molecules and atoms, while solid state physicists preferred to borrow analytical techniques from quantum field theory to study electronic correlation effects in materials, which evolved into the whole branch of statistical physics of fields {{cite:3a120837b55cfe96f74fe3df79c57865b900cb38}}. Given the above considerations, the theoretical aspects of DFT have been neglected compared to the enormous developments in its applications. Consequently, the theory is mostly known as a computational modeling method for the investigation of the electronic structure of quantum mechanical systems.
Cardiac pulsation is a physiological confound of fMRI analysis pipelines that introduces spurious fluctuations of the BOLD signal. To overcome cardiac aliasing associated with a limited temporal resolution in fMRI, we developed a data-driven technique to temporally and spatially resolve cardiac signals from the BOLD signal itself, i.e. without the need of acquiring external physiological recordings. We sought to achieve this using a data-driven strategy, thus without imposing modeling priors on the shape of the regressor. This is achievable by recognizing that the time between consecutive excitations, rather than the time between the acquisition of consecutive volumes, is the natural clock of the system ({{cite:de7a5597b047c1db1b041c1b500c7dc5ea867e56}}, {{cite:a8b58e5fe71bf6bf25cea0a646c22806366326d6}}), and by combining such principle with highly accelerated SMS data ({{cite:d1af3b1e17122af8f591fd286b168693fbf49ebd}}, {{cite:abebad981dbce9782b8e5363ac8c789fcba4079b}}, {{cite:9ab4c619fcbdbdfee126be896c260aef67ac559e}}). By inferring cardiac signal contributions from the fMRI data itself, cardiac noise was found to be spatially localized, especially in and around blood vessels (Figure REF ), in line with previous literature based on mathematical modeling of individual cardiac responses ({{cite:b20cee5883ad147033789db7670a0e688bc87325}}).
Popular set function classes such as submodular functions {{cite:c17550cd48d173ae1f8b3e55c83f67a2fd1cc9e4}}, {{cite:ab2ecc3510a0cbe2961635accefd3332d7b1bbcd}}, {{cite:bb7c5129ae92faa2d782dca065722ec4f2c2c8b3}}, {{cite:b286ed9436258febe1a8680051ece13139b34122}}, {{cite:43a21847868f9de7488bb216e5b25f85993859f5}}, {{cite:c50283533ee6a112b8b1e3f44623b55823d7c06e}}, {{cite:61f5cf264687a6fddf1c063839a6981f2fb202cb}} have resulted in a wide array of powerful algorithms for several tasks across different fields.
The first condition corresponds to the original “swampland conjecture” proposed in Ref. {{cite:5dc4e2d01c8402e557788d8bcfb0208b10297c02}}. However, a peculiarity of this conjecture was noticed: it places two distinct conditions (REF ) and (REF ) on two different quantities {{formula:16e54e99-5501-40ef-bd99-ae7d48108d1f}} and {{formula:8380f1a5-ebe7-44d6-9a97-54e9014cd327}} . Based on this discussion, a single condition on both {{formula:e74b6e92-7728-44d3-8584-1fd3da58be28}} and {{formula:cf7014e1-2738-4387-8ff9-56a199b4df5f}} has been proposed, which the authors named the further refining de Sitter swampland conjecture {{cite:bf643228a2936804327927077ab974215df4e740}}.
Based on our theoretical results, we predict a prominent experimental signature of the moiré exciton condensate: non-circular polarization of light emission at {{formula:71584a7f-271b-4c90-a4e6-47cf8d36782c}} with a dependence on field direction. In experiments, intralayer excitons can first be excited by circularly polarized light, followed by fast interlayer charge transfer. When (quasi-)equilibrium is reached at low temperature and high density, a moiré exciton condensate at the {{formula:16fdb1ba-ccde-404c-a1ec-e478ca6e3c91}} point is prepared. By applying an electric field on one layer, the exciton condensate will be dragged in momentum space from {{formula:17f88f95-c744-4b48-847e-6dafea4594c9}} to the {{formula:ab41dce3-149f-4b9a-aa0b-a46a5b0ee2c7}} point adiabatically along a chosen path if the electric field is weak {{cite:f7a36cbc50ddde94546ac6f87b410ab9a140c178}}. When the exciton condensate arrives at the {{formula:c6234f4d-af74-4e83-aca6-a1ebadda2cbf}} point, it will produce photoluminescence with all three components of polarization, in stark contrast with the circular polarization in the single-exciton case. In addition, by applying the electric field along three different paths from {{formula:bf57ae0d-41aa-4fce-bd89-81a41cf826f8}} to {{formula:b801f365-07d5-4612-92b7-319c64db2114}} (see fig. REF (c)), the light polarization at {{formula:912ec385-26c4-4c6b-b7b2-f3d9c8fd09a3}} also differs. During the evolution process, one can also measure the transverse drift of the exciton condensate {{cite:e5bda37371bd3e94d1d3c1699293d9832e461f71}}, which will be much smaller than its single-exciton counterpart. {{figure:76ccd1ef-3842-4dbc-950e-b875869353c8}}
Prior DFKD algorithms in natural language processing {{cite:aad677adae4ee0cff540be97fd2e308683877f1f}}, {{cite:f80f1006a1f043b82a77baa3ead52cfb4e0234d0}} focused on synthesizing pseudo samples from the teacher's parameters through model inversion {{cite:6a7126bc9f9fc46ff09b1916bf44369e8618c9b3}}, where a batch of synthetic utterances or a sentence generator is optimized under the restrictions of some human-crafted distributional priors. The confidence-based prior is the most widely used human-crafted prior for sentence synthesis. For example, AS-DFD {{cite:aad677adae4ee0cff540be97fd2e308683877f1f}} aims to find pseudo samples that produce high-confidence predictions when fed to the teacher. As shown in Table REF , although AS-DFD indeed generates some task-related keywords or phrases, the utterances are still unnatural and of low quality, lacking correct semantics and syntax.
The search for universal features of quantum gravity–also known as the swampland program {{cite:fed08ed17c0acc4b4de9d21f79fb6db277317237}}, {{cite:dd160e832254e52f7d1cf71b2f1820b4009092b6}}–has seen a resurgence in recent years. Strong evidence has been given in favor of certain conjectured properties of quantum gravities (so-called “swampland conjectures”), some longstanding conjectures have been discarded as counterexamples have emerged, and many seemingly-unrelated aspects of physics and mathematics have been connected through an ever-growing swampland web.
Given the importance of M31 as an anchor for the extragalactic distance scale, many studies have presented distance determinations to M31 using different methods; {{cite:7beab308d482942e7e5de9cc8c9e01cff42e0438}}, {{cite:d2eca3820eaf6fe5aa843c84658901a1aaac6d80}} and {{cite:db85d3d5827ecece3962dd3737a8e06330e71a29}} have given detailed reviews. Although the stellar populations located in different positions in M31 have different distance moduli, the dispersion can be neglected since the distance of M31 is large enough. For example, {{cite:829e482b263295af7ebc53437705014e286a4311}} pointed out that clusters in M31 dispersed over a 20 kpc radius would have up to 0.06 mag random distance uncertainty. So, the distance modulus to B379 obtained in this paper should be consistent, within a 0.06 mag random uncertainty, with the distance of M31 determined previously. We now compare our determination with the most recent and/or most important measurements. {{cite:11abd3b3f79e7059421bfa1cce05b481929fbe7f}} derived the mean distance modulus to M31 to be {{formula:eb3b8419-e430-4cb4-8196-9a596e42100e}} based on the Cepheids in Baade's fields I, III, and IV {{cite:8d687a501c57d5ddd6733d0cda2f1e401bb3d6f9}}, {{cite:84f1a5a78af5ab58a47218fd18a34700089933b2}} observed using the Canada-France-Hawaii Telescope (CFHT). {{cite:d2eca3820eaf6fe5aa843c84658901a1aaac6d80}} determined the distance moduli to 14 M31 GCs, including B379, by fitting theoretical isochrones to the observed RGBs; the distance modulus to B379 obtained by {{cite:d2eca3820eaf6fe5aa843c84658901a1aaac6d80}} is {{formula:422a61aa-dc4a-407a-a777-c8dd4ab311e5}} . {{cite:4728244fd3286b24f796ca50b92ec4c8ca390df3}} estimated the distance modulus to M31 as {{formula:6b6f4570-0766-4b63-8ed9-ac6391b00ea6}} by comparing the red clump stars with parallaxes known to better than 10% in the Hipparcos catalog with the red clump stars in three fields in M31 observed with the HST. A determination by {{cite:b3558d3a5026d8e325dd5f0e02442eef860bc648}} based on the Cepheid P–L relation suggests a distance modulus of {{formula:3c0ae53d-7b66-4b6b-9aa4-3d18b7e1880d}} to M31, obtained when they applied the results of the HST Distance Scale Key Project to measure the Hubble constant. {{cite:4a00b5fd35b0b0251698ff9edc3dc93ad38f70bc}} determined the distance modulus of {{formula:87436075-aa8e-4819-8111-f45f90cb7dd9}} to M31 from the luminosity of the RGB tip of over 2000 RGB halo stars in a halo field located about 20 kpc from the M31 nucleus along the southeast minor axis. {{cite:31f7e3ca58f718ae1eb9cffcac3a1250019ec958}} obtained {{formula:fdc54629-4928-42e2-a831-c624f578d667}} and {{formula:e9d03a66-9a43-4696-8fba-f926a4ea6724}} band observations of a {{formula:e86d0433-9b29-4b71-8b02-f51ff08fea3a}} region in the disk of M31 and derived the Cepheid period–luminosity distance modulus to be {{formula:984ae737-c995-43cd-b04f-8a8d3e582bbf}} . {{cite:c20dbb69f883e78595fe7d2811d7ac4b0f4451a7}} determined the distance modulus of {{formula:ef0fc771-133d-4da5-a990-19c38c12354c}} to M31 based on the brightness of 55 RR Lyrae stars detected in HST/ACS images totalling {{formula:802ec91d-e42b-4eec-ae5e-3610e96b9b9e}} 84 hr (250 exposures over 41 days). {{cite:4f48bc89133566cb5758a4513e88050ae66601cb}} derived the distance modulus to M31 to be {{formula:4596c7d5-dc78-4885-b07c-8373282515c7}} based on the method of the tip of the RGB observed using the Isaac Newton Telescope Wide Field Camera (INT WFC).
{{cite:db2df8dd5fa0565e2de882375b2cb8f3c707e2eb}} derived the distance modulus of M31 as {{formula:d79be59a-b6bf-48a8-a04e-a5598d0f26bb}} from an eclipsing binary. Very recently, {{cite:2df0f61daec79504abc786c0ef31a276583aeeaa}} presented HST observations, taken with the ACS WFC, of two fields near M32 located {{formula:245846c7-e588-4fa2-ab39-03d0e799003f}} kpc from the center of M31, and identified 752 RR Lyrae variables with excellent photometric and temporal completeness. Based on this large sample of M31 RR Lyrae variables, and using a relation between RR Lyrae luminosity and metallicity along with a reddening value of {{formula:bb4c89f1-50f7-42a8-910c-736217eca53c}}, they derived a distance modulus of {{formula:f173cda8-8ad7-4135-9072-df193e070623}} to M31. For clarity, we list these determinations of the M31 distance modulus in Table 1. It is evident that our determination is in good agreement with the previous ones.
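For orientation, a distance modulus $\mu$ converts to a physical distance via the standard relation (a textbook reminder, not part of the original analysis):
$$ d = 10^{(\mu + 5)/5}\ \mathrm{pc}, $$
so a modulus near 24.5 mag corresponds to roughly 0.8 Mpc, the familiar scale of the M31 distance.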
r
3dc7305579f9a7186b95d40cb49e45bd
Equation (REF ) is zero if, and only if, the two overlaps {{formula:eeaa1d8c-484f-4511-a6ec-a800deeeacf1}}. We choose the optimiser RMSProp {{cite:c3ccadeecb27927e1ae4ba0005081db39b1576ed}}, which dynamically adapts the learning rate locally for each of the network parameters. Training a medium-sized ANN with {{formula:f4c3363a-2459-4728-b6fe-278f3499c70d}} hidden nodes usually takes about {{formula:993f3d9f-ae27-4762-ac89-7d076e6c51ae}} iterations. We point out that this pretraining step is not strictly necessary for a correct energy minimisation, but it helps guarantee a successful minimisation with fewer epochs.
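As an illustration only, a minimal PyTorch pretraining loop in the spirit described above; the network, the input dimensionality, and the placeholder `overlap_loss` are hypothetical stand-ins for the paper's ANN wavefunction and the objective in Eq. (REF):

```python
import torch

# Hypothetical stand-ins: `model` for the ANN wavefunction, `overlap_loss`
# for a pretraining objective that vanishes iff the two overlaps coincide.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
optimiser = torch.optim.RMSprop(model.parameters(), lr=1e-3)

def overlap_loss(out: torch.Tensor) -> torch.Tensor:
    return (out.mean() - 1.0) ** 2        # placeholder objective

for step in range(10_000):                # of order 10^4 iterations
    x = torch.randn(256, 10)              # hypothetical sampled configurations
    loss = overlap_loss(model(x))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```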
m
65aa7035c2c84a211fa6a125192dd760
We have adopted a number of simplifying assumptions to model the generation of super-harmonic secondary waves. Firstly, we have considered a region of small spatial extent near {{formula:2138f524-5e8e-4036-908c-675bf591858e}} , and employed a local Cartesian model instead of global spherical geometry. Secondly, we assumed the square of the Brunt-Väisälä frequency to be a linear function of the distance from {{formula:b42193d7-0d22-4f06-bf41-98f5284e1d18}} into the radiative layer {{cite:6ae793695fb81a2efc71c42d044ffde4a95acb72}}, {{cite:bce9ef904d672e15398e47ac1405e580c74a3571}}. Thirdly, we have adopted the Boussinesq approximation to the equations of motion, which formally limits us to a region of small spatial extent near the transition region.
d
75a71e13813b8b09f3b2cb309c6fff4a
Calculating interaction strength factor {{formula:60bd49d8-38fb-410d-b1cf-34f998c2e409}} The plasma flow-obstacle interaction strength factor {{formula:23bed83e-3ea0-4502-9fc9-b2e6615be777}} can be determined by considering the ionospheric Pedersen conductance in the case of either a magnetised or unmagnetised planet. {{cite:e6eb3223971b47cb050197c99117d33217431f88}} and {{cite:bf9a6c090d07f04c948c96f20c182322735f9145}} showed that {{formula:734cf99c-8397-4f20-a25c-8a57c29e6379}} can be approximated by {{formula:6ae9b113-b720-474e-9110-5431b26b9f71}} where {{formula:cfd6f11c-1b0d-4936-a9af-3dc3d1fddce1}} is the ionospheric Pedersen conductance, and {{formula:709fe3f2-902b-47bb-8c35-dd0cf9f05f81}} is the Alfvén conductance, given by {{formula:755fb6f8-b6c0-49b6-b75c-23e2e50eecff}} where the Alfvén speed {{formula:8da92cf1-d3ff-4c82-a43e-b7982e226d8c}} in a stellar wind of mass density {{formula:0b7b958e-3f19-4645-b2a4-4d442b2158fe}} is {{formula:6da7e44b-e08d-4cd9-971b-1d848aef19d6}} Here, the ionospheric Pedersen conductivity is estimated using the empirical power law of {{cite:f0528281ebf261f46e291ba0e1870cfe0d6b8a87}}, i.e. {{formula:19f16d58-eb42-4a95-b517-f1e2884f3bd5}} where {{formula:9f966e47-dfb9-4a92-b74c-bb2fa73e9c8e}} is the orbital distance of the planet, {{formula:98c1c6b2-149b-42dc-8ceb-8309f9a65fcd}} is the equatorial exoplanetary magnetic field strength, {{formula:9d5c41bc-5692-4df3-8de4-4cce59039d82}} is the surface field strength at Jupiter, {{formula:0f3fb623-ab92-4ee7-950c-4d80d3887102}} is the stellar XUV luminosity, and {{formula:796c6d49-6237-4fbc-9d8f-fad1aa0513ff}} is the solar value. The constants in equation (REF ) take the values {{formula:295000cd-3137-4f4f-8d85-51a096d72233}} = 15.475, {{formula:adacf517-a76b-4e96-b16a-df46f317b6d1}} = -2.082, and {{formula:69f0dae2-6581-4de5-b488-00d73c3dcdad}} = 0.5. For M-dwarfs we assume a value of {{formula:d80e37cf-8023-4c51-91e3-62e29171c221}}, consistent with X-ray observations of TRAPPIST-1 {{cite:5cb864f7cc7717b2f588c4948b56b470eae0eb3c}}. In all cases of close-orbiting exoplanets that we consider, {{formula:8471654c-64c9-4dd2-a4c5-738e159c6ad6}}, therefore {{formula:e7e8b0e7-3df2-4fb8-b388-444da0563fcc}}. We note that for an unmagnetised or weakly magnetised planet this approximation is also valid, since in this case {{formula:8fbb130f-6f19-4d77-a7d9-380443f2716a}} approaches infinity. Model of the stellar wind and magnetic field We use an isothermal stellar wind {{cite:8bd893a1995690569e7b0022d3e24c6bcc7cafeb}} fully parameterised by the sound speed {{formula:a7a777e0-3478-43db-b7c0-4a83dfd2e19d}} for which we assume a value of 170 km s{{formula:580af876-b453-4541-9669-883bb0103ca7}}, corresponding to a coronal temperature of {{formula:11579820-d369-42ac-a281-551df44dae1c}} 2 {{formula:4af4895e-dd8c-4d2f-aced-1a749b2b274f}} 10{{formula:9c142284-2c6e-4ee9-84e8-ccf90017f27e}} K, consistent with the temperature adopted by {{cite:8a6abadc2c0152b48b345825d43adc8a954aa6e9}} in their study of M-dwarf stellar winds.
Specifically, we employ the closed-form analytic solution of the isothermal wind equation from {{cite:040092dd293a87fb1a5c0efaadfcc699458600c2}}, given by {{formula:eb728de7-ed45-4d56-b9c7-d99d6f6cf89c}} where {{formula:becae9f9-f2ca-41ce-84a9-fe806bebc931}} and {{formula:7d332a64-e1f6-413e-bd2d-4f940de46f03}} are branches of the Lambert {{formula:195bbde7-9526-4cf2-81e5-121b65b828b0}} function, and {{formula:bbf17e9d-56b4-47ed-9b83-363ea60fd104}} is given by {{formula:17882b53-e25e-43e2-a642-592c0a83214d}} where {{formula:0268a2e3-5e22-47bb-9fc8-a1cf03a61d05}} is the critical distance at which the stellar wind speed {{formula:a40066a3-4375-4791-aea5-1c893e6159bd}} passes through the sound speed {{formula:6d7fbe6c-8685-453e-8128-1fe02b618d22}}, given by {{formula:7fcdeb7f-c08b-41c5-993a-25bb8945ab7f}} for a star of mass {{formula:a2875c41-3422-4c47-b423-f096aab97908}}. The magnetic field components of the Parker Spiral are given by {{formula:978a22c0-42bc-4804-888e-1d41ff01b10a}} and {{formula:3731fb3e-4e96-49b8-a9c1-c5ce869fa1bf}} where {{formula:e6b1f108-3379-47e2-9843-70a277377374}} is the stellar rotational velocity, and here we take {{formula:99b3db62-27a1-46f0-963f-9fc929755220}}, i.e. we assume that the field is radial at the stellar surface. The resultant IMF magnitude is given by {{formula:3452bf99-61f1-461c-b5b6-fca450856fea}} The angle {{formula:d4a0ac16-6bb6-4487-8c7a-3d2936c18edb}} between the stellar wind magnetic field and the impinging plasma velocity can hence be defined by {{formula:f10cfa37-dc89-4ea7-b4d6-ef5cb846425f}} Interaction of the stellar wind with an intrinsic planetary magnetic field governs the location of the substellar magnetopause standoff distance {{formula:3a002fd0-ad7b-4147-b8e6-2487cc820cd6}}, which can be calculated via a consideration of pressure balance, and is given by {{formula:f210454f-7c1e-4c8e-bba0-aa1bbb1a6020}} where {{formula:f602497f-c392-47a7-8844-7ff0ce4c56ce}} = 2.44 represents the factor by which the magnetopause currents enhance the magnetospheric magnetic field at the magnetopause for a realistic boundary shape {{cite:8661e85b87593862a4f3d0a7d75e3999d4063861}}, {{formula:62713eaf-73b8-4ca2-8bd9-2fb4a0d3515d}} is the thermal pressure of the stellar wind, and {{formula:57ad4514-1e4e-4c62-9922-2558898364ed}} is the stellar wind dynamic pressure given by {{formula:2814ad25-4029-4fa3-9626-3fc820ebd4c3}} where {{formula:1999e533-887f-43c7-b5ca-152eccbda97f}} is the density of the stellar wind, which, for a stellar mass loss rate of {{formula:2b4b11ba-c3a4-4fbd-b36f-b4a6cbdec420}}, is determined by {{formula:d9a39a83-b0bc-41ff-a5c9-e062ab20db71}} and the corresponding plasma number density is given by {{formula:b4ac4a9f-7da7-47cc-97d2-5b1c5563e2d7}} where we take a Sun-like value of the average particle mass in the stellar wind {{formula:6521a9a7-3935-4960-840e-0c79c860ec8b}} of 1.92 {{formula:70c47615-d626-44b9-9aa1-29f5f2eae890}} 10{{formula:9b13116f-4619-45db-87a3-6578a21292a2}} kg. Figures REF , REF , and REF show profiles of various stellar wind parameters for our case studies of TRAPPIST-1, Proxima Centauri, and NGTS-1 respectively. {{figure:9e3455af-a25b-4a54-b028-983616830a9c}}{{figure:83a40905-c27a-46d0-96d6-d76ce761144f}}{{figure:ed62f0c9-aa8f-40b6-b546-b0bf6d6d7047}}
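As a hedged illustration of such a Lambert W closed form (not the paper's code: the function D(x) below follows Cranmer's 2004 formulation of the isothermal Parker wind, and the stellar mass is a placeholder value):

```python
import numpy as np
from scipy.special import lambertw

G, M_SUN = 6.674e-11, 1.989e30            # SI units

def isothermal_wind_speed(r, c_s=170e3, m_star=0.12 * M_SUN):
    """Isothermal Parker wind via Lambert W (sketch): branch W_0 is used
    inside the critical radius, W_{-1} outside."""
    r_c = G * m_star / (2.0 * c_s**2)     # critical (sonic) radius
    x = np.atleast_1d(r / r_c)
    D = x**-4 * np.exp(3.0 - 4.0 / x)
    k = np.where(x < 1.0, 0, -1)          # Lambert W branch per radius
    w2 = np.array([-lambertw(-d, k=int(b)).real for d, b in zip(D, k)])
    return c_s * np.sqrt(w2)              # equals c_s exactly at r = r_c
```

A quick sanity check of the sketch: at r = r_c the argument is -1/e, W(-1/e) = -1, and the returned speed equals the sound speed, as required at the sonic point.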
d
16e3488db0268e3752a017956d734807
Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead “remember” the {{formula:7220e53e-4294-4651-9da1-5a46a07ccf5d}}-th training example {{formula:21b3d0db-4681-4b39-ae5c-fcbc4f6963f7}} and learn for it a corresponding weight {{formula:4a72ae22-6c5a-4025-9f17-a64b10773c23}}. Prediction for unlabeled inputs, i.e., those not in the training set, is made by applying a similarity function {{formula:b91bfd5a-de4b-4ad5-ab01-954694cd2b8e}} (i.e., a kernel) between the unlabeled input {{formula:b4174382-490b-4ac2-9e28-2d60d13e4d88}} and each of the training-set inputs {{formula:a693ef19-5410-479b-b7a7-13579a0d5e02}}. This framework is one of the main motivations for the development of kernel methods in ML and high-dimensional statistics {{cite:7e4326a3331df43053de622bca3b6c5295ece52f}}. There are two main themes of research on kernel methods in the context of machine learning: the first focuses on understanding the expressive power and generalization of learning with kernel feature maps {{cite:f43aeb07485b2e1d2b925798df8b194061e56b48}}, {{cite:7e4326a3331df43053de622bca3b6c5295ece52f}}, {{cite:8e9abbc7663d3f1a8daccfca7a0e14d6069b4083}}, {{cite:5b43806abbda8f7d2ea541b8e995106d782f1f6d}}, {{cite:e039b1e984a681a878547f8ed972f777e3ca8b9e}}, {{cite:79dea6cbaf273e79ed4bbc4a40f0341b39612367}}, {{cite:1a0be006a6f0d081df8e1800a280cfb5be86d7dd}}; the second focuses on the computational aspects of kernel-based algorithms {{cite:c9421535ca326d81127706f6f95cd86164e0afb6}}, {{cite:1b24d0b1bf1e586a5d9549e644bcc896d126781d}}, {{cite:229cb91bdaea99bf6164d1c9afe23d1311f6b7b6}}, {{cite:b43f6c0d78ce08dbceac9f11bb4f5a7267566226}}. We refer the reader to these references for a much more thorough overview of these lines of research and the role of kernels in ML.
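A minimal sketch of the instance-based prediction rule described above, assuming an RBF kernel (the kernel choice, dimensions, and weights here are illustrative, not from the original):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Similarity between an unlabeled input and one training example."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

X_train = np.random.randn(100, 5)     # remembered training examples
alpha = np.random.randn(100)          # one learned weight per example
x_new = np.random.randn(5)            # unlabeled input

# prediction: weighted sum of similarities to every training example
f_x = sum(a * rbf_kernel(xi, x_new) for a, xi in zip(alpha, X_train))
```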
m
acf20a4f15d2c9ec122ffe47cdc61d57
In Table REF we summarize the results on wordings unseen during training and the corresponding statistics on the training and test sets. The error rates indicate that the proposed architecture is, to some extent, capable of generalizing to new wordings and attaining decent performance without further tuning. As expected, error rates on the unseen test set increase with the number of wordings removed from the training set. We consider this an inherent limitation of end-to-end ASR-free SLU approaches. State-of-the-art NLU approaches, based either on word embeddings or on BERT-like architectures, are trained on massive textual corpora and hence are capable of generalizing easily to new wordings {{cite:06ed95b9808a2db41f3dae353ef0cc5692456119}}, {{cite:a15f03d42805fdf8efb1896a53c8cdbed0fd4834}}. In contrast, our end-to-end SLU method is trained from scratch on FSC and has no obvious mechanism for incorporating e.g. word embeddings. As a result, it can rely only on training sets containing several wordings per slot or intent for attaining state-of-the-art generalizability. {{table:9ff07e46-e031-4ccb-9692-9be34e9fa41c}}
r
ac64484c0109165b7357689bd9abc8c8
There are three very important open questions in neutrino physics that can best be addressed by next generation neutrinoless double-beta {{formula:8f22d56c-f681-4ef3-a9bb-b13906597055}} decay experiments. First, are neutrinos Majorana particles that differ from antineutrinos only by helicity? Second, what is their mass-scale? Third, is lepton number conservation violated? While searches for {{formula:630b6562-a146-4977-97f7-f3b44cdfc145}} -decay have been carried out steadily throughout many decades {{cite:04ed6fc5e6bf0f6aefb59cc7c9e90cd9c5d77b72}}, {{cite:d1814461876fc191315a664d180fafb52c0dd0b0}}, {{cite:65c2ae9fd3a3772afd194f591473c7c0d536c62b}}, it is now a far more interesting time for the field. Atmospheric neutrino-oscillation data imply that there exist scenarios in which the effective Majorana mass of the electron neutrino could be larger than {{formula:2f759386-69a4-4cac-bbf9-30a4fefe914f}} eV. Recent developments in detector technology make the observation of {{formula:bc069f03-d7e2-45e2-a734-2166b72c792b}} -decay at this scale now feasible. For recent comprehensive experimental and theoretical reviews see {{cite:7c62bf44039b2167a0adc7336e69c55836ee35cf}}, {{cite:a30d5f5b3dabc6eb468077ad4b079add25b4252d}}, {{cite:30fabac0ad157467d5878f0aec0308570df28ea0}}. Optimism that a direct observation of {{formula:716f3d4e-4266-400c-b870-f4a960830b79}} -decay is possible was greatly enhanced by the observation and measurement of the oscillations of atmospheric neutrinos {{cite:67697b4d3b2e7e3d8d87a705b69a5442cb93d8f4}}, the confirmation by SuperKamiokande {{cite:2f037625bf68a2f471f9ba49fe8cae2bebff5c10}} of the deficit of {{formula:96885240-8ba4-4c1d-bf76-697d2fc15cae}} neutrinos observed by the chlorine experiment {{cite:e76e5068be540b59a9bb82ea54b5408057bac774}}, the observed deficit of {{formula:22d72a65-8329-40da-b07c-758c81f3c2a1}} neutrinos by SAGE {{cite:634ee310c041cfa99b230d0f7cc4254eada4a474}} and GALEX {{cite:5b51efdd3c105f23248e24c273e2cddfebaf1e50}}, and the results of the SNO experiment {{cite:27bc7be94831dd3abbfdc8eeb244ecc3271a0cfc}} that clearly showed that the total flux of {{formula:25ba018e-d2fc-4890-a22d-22ed78f3e586}} neutrinos from the sun predicted by Bahcall and his co-workers {{cite:4b7e3b717687b60b6284ca58708b25ace4c23e3c}} is correct. Finally, the data from the KamLAND reactor-neutrino experiment strongly favor the MSW large mixing-angle solution of solar neutrino oscillations {{cite:1c2228fb0e70f0a8b2d35dc13d1ae64e5fd7cd01}}. This important list of results published since 1998 weighs very heavily in favor of supporting two or more next generation {{formula:d990b56d-3477-450e-b639-3650ffb473fe}} -decay experiments (see the reports in references {{cite:35422030c7b09dd93a452bc660cf11a742ab2f8a}}, {{cite:7ffbe304511945a299c64f859e82017f137b69f3}}).
i
4978d2a51e336db04325d20be3651a2a
The part of the wavefield that is traveling at a smaller angle is reconstructed properly, even at large depths and at the edges of the aperture. The events in the center of the model are reconstructed properly. The amplitudes and arrival times of the events are not correct everywhere, which is caused by the use of a smooth velocity model and the Eikonal solver for the direct arrivals, instead of modeling these in the exact medium. To give a more quantitative result for the accuracy of the retrieval, we employ the Pearson Correlation Coefficient (PCC) {{cite:76142a9e52dc83c9783b82bd65ad0c0526476ccd}}. This coefficient ranges from -1 to 1. A value close to {{formula:40431c40-2c95-4274-b392-df50ec3319d1}} indicates strong correlation, while a value close to 0 indicates weak or no correlation. The polarity of the coefficient indicates whether the correlation is positive or negative. The PCC between the columns of Figure REF is 0.542, which indicates medium correlation. The relatively low correlation value is likely caused by the issues discussed previously. However, the results and the PCC still show the potential of the Marchenko method for 3D virtual seismology.
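For reference, the PCC between two reconstructed wavefields can be computed in a few lines of numpy (a generic sketch; the array names are illustrative, not the paper's variables):

```python
import numpy as np

def pearson_cc(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two same-shape arrays."""
    a, b = a.ravel(), b.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# e.g. pearson_cc(reference_panel, retrieved_panel) -> value in [-1, 1]
```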
r
b1b4b06108f49e01ac354eba722ce708
The data are from Andrzejak et al. {{cite:679219f58a4afe9b829b1e90079bfe77e3a88236}}: 10 participants (5 healthy subjects and 5 epileptic patients).
d
2738538329bb83c72ba3845b34fc442f
The nnU-Net segmentation network was trained and evaluated using a five-fold cross validation on the training set. As in {{cite:612d70ea253d152ad278e315b18d5bb8bffd4699}}, the network was trained for 1,000 epochs, where one epoch is defined as an iteration over 250 mini-batches (with a batch size of 30). Stochastic gradient descent with Nesterov momentum ({{formula:dd626f21-2402-42a6-9f9d-87a3cc9278b2}} =0.99) and an initial learning rate of 0.01 was used for learning network weights. The loss function used to train the `nnU-Net' model was the sum of cross entropy and Dice loss. Data augmentation was performed on the fly and included techniques such as rotations, scaling, Gaussian noise, Gaussian blur, brightness, contrast, simulation of low resolution, gamma correction and mirroring. Please refer to {{cite:612d70ea253d152ad278e315b18d5bb8bffd4699}} for more details of the network training.
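A hedged PyTorch sketch of the stated optimization recipe (SGD with Nesterov momentum 0.99, initial learning rate 0.01, and a cross-entropy-plus-Dice loss); the tiny `net` below is a 2D stand-in, not nnU-Net itself:

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-5):
    """Soft Dice loss averaged over classes (one simple formulation)."""
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=probs.shape[1])
    onehot = onehot.permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    return 1.0 - (2 * inter / (denom + eps)).mean()

net = torch.nn.Conv2d(1, 2, 3, padding=1)   # stand-in for the segmentation net
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.99, nesterov=True)

x = torch.randn(30, 1, 64, 64)              # mini-batch of size 30
y = torch.randint(0, 2, (30, 64, 64))       # dummy segmentation labels
logits = net(x)
loss = F.cross_entropy(logits, y) + dice_loss(logits, y)
loss.backward()
opt.step()
```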
m
7fbd130c477c33bddc2c6d47c744da9a
For the quantitative assessments, DEER had better SSIM and MAE values and a slightly lower PSNR value than FBPConvNet. FBPConvNet achieved the best PSNR value due to its Mean Squared Error (MSE) based objective function. However, the literature has discussed that higher PSNR values do not guarantee better denoising performance, especially with respect to textural/visual similarity to the ground-truth images {{cite:97a0c5e41a3e6f463f35463a05f97fd8b2c2ba8e}}, {{cite:3e15726915bd22bd55421e5c0f3acc51572300f2}}. Also, it should be noted that since both FBPConvNet and residual-CNN use only a single loss function for optimization, these two methods may be subject to potential losses in visual performance. Both loss functions have their own limitations, and one should not rely on them alone for estimating image quality {{cite:dd9e0f983d8e42df0246ff2107b5f240e6c193b3}}, {{cite:ab209f80f8ba38741d0de751251d8ccee95f01cf}}. Even though DEER does not achieve significant quantitative improvements, images reconstructed by DEER compare favorably in visual terms. Moreover, as presented in Figs. REF and REF , the images reconstructed by FBPConvNet appear over-smoothed with less visual image texture, which is not desirable in clinical diagnosis. Lastly, the implementation of the WGAN framework may negatively affect the quantitative measurements, but it provides better recovery of subtle details and structural features {{cite:3e15726915bd22bd55421e5c0f3acc51572300f2}}, {{cite:f83ff6b952527f2c9c6c92553d0da775a9152f0a}}. Overall, DEER demonstrates competitive performance in removing artifacts and preserving subtle but vital details compared with the other deep learning methods. In terms of reconstruction time, DEER takes about 0.1422 seconds to reconstruct a single 2D slice ({{formula:4577e5c4-4cda-4049-94a9-2b01f4438dc5}}) on an NVIDIA Titan RTX GPU.
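For context (a standard definition, added here for orientation), PSNR is a monotone transform of MSE, which is exactly why an MSE-trained network is favored on this metric:
$$ \mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right), $$
where $\mathrm{MAX}_I$ is the maximum possible pixel intensity.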
m
3a83d303e2b073f4aa10d473108514c2
where GRU denotes the GRU cell {{cite:69fd8caf18b56bb633aea9a045a78fc6329e2c36}} with LayerNorm {{cite:d70947fefc688410fe4bf80af990afe7d05909dc}}, {{formula:86a92186-e250-4f8d-a34e-e8272a140c2a}} are other learnable parameters, Aggr represents the neighborhood aggregation function (we use Max), {{formula:4ee8406a-6d17-45d2-83dc-b3c7c58e8e15}} is the sigmoid function, and {{formula:79ea8a7b-3a3b-4e8a-9590-9994fedf20f5}} is the Hadamard product. As inputs {{formula:5e4ec17e-3dbc-4c8d-8143-02e14568a9f1}} and {{formula:f8b1baf7-07a0-4223-acb4-2c845b66ab29}}, we use {{formula:56f72642-a08c-4f3d-8ea5-ea560903a8e4}}-dimensional linear projections of the node coordinate {{formula:33678cb4-f99f-480e-9d45-311740fdbbf0}} and the Euclidean distance {{formula:f332ecb7-d143-4ecd-9438-20178cdbf7ca}}, respectively. We generate predictions using the AR decoder after an arbitrary number of message-passing steps {{formula:6d868df0-9e95-4332-8e6f-8f124104313d}}.
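A hedged PyTorch sketch of one such message-passing step, using max aggregation and a GRU cell followed by LayerNorm (the sigmoid gating and Hadamard-product details of the full update are simplified away here):

```python
import torch
import torch.nn as nn

class GatedMPStep(nn.Module):
    """One step: max-aggregate neighbour states, then a GRU cell + LayerNorm."""
    def __init__(self, d: int):
        super().__init__()
        self.gru = nn.GRUCell(d, d)
        self.norm = nn.LayerNorm(d)

    def forward(self, h, edge_index):
        src, dst = edge_index                      # messages flow src -> dst
        idx = dst.unsqueeze(-1).expand(-1, h.size(1))
        msg = torch.zeros_like(h).scatter_reduce(
            0, idx, h[src], reduce="amax", include_self=False
        )
        return self.norm(self.gru(msg, h))

h = torch.randn(5, 16)                             # (num_nodes, d)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
step = GatedMPStep(16)
for _ in range(3):                                 # T message-passing steps
    h = step(h, edge_index)
```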
r
186b8f3b77ee76adbe7a9d260fd75711
In this section, we introduce ASiT, a self-supervised framework based on vision transformers for general audio representations. Similar to {{cite:c85e9902708cad75d7e544c4edba7889c339b0ab}}, we employed log mel-spectrogram {{cite:9a4970a9b1d4d132414395e5cb6b643ba381f925}} as the input to the ViT instead of using the raw waveform directly. Spectrograms are used extensively in the fields of audio, speech, and music as they somewhat model human hearing perception and contain abundant low-level acoustic information, which has similar characteristics to images, making them more suitable for ViTs.
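A minimal torchaudio sketch of a log mel-spectrogram front end of this kind (the parameter values here are illustrative assumptions, not ASiT's configuration):

```python
import torch
import torchaudio

wav = torch.randn(1, 16000)  # 1 s of hypothetical 16 kHz audio
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=400, hop_length=160, n_mels=80
)
log_mel = torchaudio.transforms.AmplitudeToDB()(to_mel(wav))  # (1, 80, frames)
# `log_mel` is the image-like input that would be patchified for the ViT
```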
m
1905038acf591f87e752442d368c159b
Unlike interpolation methods, machine learning methods can adaptively learn the physical correlations of the heat-source system, so the learned models are better suited to the TFR-HSS task. In this work, we evaluate four commonly used machine learning methods for the TFR-HSS task: polynomial regression {{cite:843e0f226d23969cdaa8a650faeb2ea6f837ca30}}, random forest regression {{cite:9288a0843633b0a00252a6b569d13417a28241ed}}, Gaussian process regression {{cite:10f45eab746730c2b6db8d208a16021bf9363e22}}, and support vector regression {{cite:a14f9403ee6d30dfdd23a9e66bb89e0921e39688}}.
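A hedged scikit-learn sketch instantiating the four evaluated regressor families (the hyperparameters are defaults/assumptions, not the paper's settings):

```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR

models = {
    "polynomial": make_pipeline(PolynomialFeatures(degree=3), LinearRegression()),
    "random_forest": RandomForestRegressor(n_estimators=100),
    "gaussian_process": GaussianProcessRegressor(),
    "support_vector": SVR(kernel="rbf"),
}
# each model maps monitoring-point readings X to a field value y:
#   model.fit(X_train, y_train); y_pred = model.predict(X_test)
```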
m
563e86c0dbe89675b84507d5b4e3a969
The idea of exploiting the implicit regularization properties of optimization algorithms has been studied, often under the name of iterative regularization, in the fields of inverse problems {{cite:8b6d8c5c5d3616261d8a1fa57cc44b2dd8950a19}}, image restoration {{cite:b64b7f6491efb1c53069b4be27dcbffd8e389e0d}}, and more recently machine learning {{cite:40a6cd544a21b369b671983ddf0e16a631e8e1c1}}. Existing methods can be divided into two classes, depending on whether or not strong convexity of the regularizer is assumed. In the following we compare known results with ours.
r
56d8d0a43c9ba4d801d9f2dea2a3a925
The {{formula:b2d05a96-ccca-473f-b3e6-4c1295401170}} operator's value depends on the advisor solution. Next, we will state some assumptions. The first two are commonly used in RL {{cite:0d1a1a724dce90673507032ad9658c2789e00c5e}}, {{cite:b0d0284bb4e35f4278fb33fae549aa5c78f54d07}}.
r
1c9b6de90b6d4df6393b8faba1c682f1
Understanding videos is a prominent field in computer vision research. Event (action) recognition {{cite:4e86c93ca1f349214cb0b2f1a59195a0249b43d3}} and temporal event localization {{cite:169e03263a1abb14f2ccdf6217a4d8c2887ea938}} are the two main issues addressed in the literature pertaining to video understanding. Action recognition involves recognizing an action from a cropped video clip, which is accomplished via various methods such as two-stream networks {{cite:1a63d244b4a2b6c5aedeb2f4c37dd613bacb0aeb}}, 3D CNNs {{cite:282c03b4129bc6de5d26ffdba40d10571059e187}}, and RNNs {{cite:30f2e5ce253606b1e0473cf46f0bdfd1c32cd765}}. Another popular action recognition method uses a two-stream structure to extend 3D CNNs {{cite:4e86c93ca1f349214cb0b2f1a59195a0249b43d3}}. It is obtained by pretraining a 2D CNN on the ImageNet {{cite:899119a94590995a89e3a2fd14417a9f2ebb9631}} dataset and extending it to a 3D CNN by repeating the pretrained weights along the temporal (depth) dimension; a sketch of this weight inflation is given below. The resulting features are either local descriptors aggregated with the bag-of-words method or global descriptors extracted by CNNs.
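A minimal PyTorch sketch of the depth-wise weight inflation just described (the shapes are illustrative; dividing by the temporal extent keeps the initial activations of a constant video equal to the 2D ones):

```python
import torch

def inflate_2d_to_3d(w2d: torch.Tensor, t: int) -> torch.Tensor:
    """Inflate pretrained 2D conv weights (O, I, H, W) to 3D (O, I, T, H, W)
    by repeating along the temporal (depth) dimension and rescaling."""
    return w2d.unsqueeze(2).repeat(1, 1, t, 1, 1) / t

w2d = torch.randn(64, 3, 7, 7)        # e.g. first conv of an ImageNet model
w3d = inflate_2d_to_3d(w2d, t=7)      # usable as nn.Conv3d(3, 64, (7, 7, 7)) weight
```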
m
0440c6f0e1d7bc52ccca41441bbb35db
Remark 4 (HB case) The set {{formula:8653e5f6-01c6-4d5b-afba-f27beffa8a31}} given in Theorem REF allows {{formula:1c77af41-94ec-4d77-b5ec-91e67ebd64b4}} and it can be easily shown that {{formula:9a8a7239-ac3a-4912-b93b-8bed99cc0abf}} is contained in the stable set {{formula:8dccb830-eaec-42c4-a170-ffa454181d50}}. Therefore, Theorem REF recovers Polyak's heavy ball method (HB) with parameters {{formula:c707d9e7-e9d4-4f09-b7f3-ffaafe79d4b1}} and {{formula:f925e8d1-8fee-4c21-98aa-d4ba8f825caa}}, for {{formula:371a1d78-81f0-4c23-b44b-bbcd1e49799b}}, and implies that deterministic HB can achieve the linear convergence rate {{formula:b422fad6-1102-4990-be3d-039bce74046a}} on strongly convex smooth objectives. By setting {{formula:935996d9-5f65-4a3e-8b36-45e0af3bcb0d}}, this implies that for {{formula:2e4f4890-ff95-45a3-9f3a-0313d257d01a}} and {{formula:5563310b-9623-48ef-86b0-338724c2e8a2}}, HB admits the convergence rate {{formula:6f5a7f98-3714-4345-9479-1e0c6c20edb8}}. Previously in {{cite:cd3f50e6c8fe6b580120df667dac1e4ee48531e3}}, it was shown that deterministic HB with parameters {{formula:595d5fc1-aa0c-45f9-9bde-86ce3fb5291e}}, {{formula:9f71ea19-2fab-4c7b-a7dd-fac5647983f0}} can achieve the rate {{formula:07d01105-18eb-4a0c-9464-e43c7435fd5e}} on strongly convex non-quadratic objectives. For {{formula:7c10e3f2-a289-4608-96ef-1b31f46408fc}}, to our knowledge, the convergence rate we prove for HB is faster than the existing rate {{formula:dba8eb72-c964-4fc3-b4ae-a5fc67b722f0}} from the literature. Our rate for HB scales with {{formula:6e7ab278-339d-458d-8094-e874a0989852}} similarly to the rate of AGD. In {{cite:38579252b50d4401cc68fbc60262d261588d98d5}}, it is shown that the HB method, subject to the noise assumption discussed in Remark REF , satisfies {{formula:01986c82-5b93-44ba-92b3-eda1cff7c707}} for {{formula:5555d932-683a-4d83-9814-2efcb786de3f}} provided that {{formula:2e17b722-cf31-454f-a77e-9c68e6e91dc7}}. If the stepsize is small enough, or if the target accuracy {{formula:fc84acfb-ee7e-454c-8d8d-a9666b00c83a}} (in expected suboptimality) is small enough, then our result leads to a better iteration complexity.
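For reference, Polyak's heavy ball iteration in its standard form (the symbols $\alpha$ and $\beta$ here stand in for the step-size and momentum parameters referred to above):
$$ x_{k+1} = x_k - \alpha \nabla f(x_k) + \beta\,(x_k - x_{k-1}). $$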
r
0e9596b9cc623b98fe4697a1dbea9ef5
In this paper we calculate the probe-limit phase shifts {{formula:c96fff17-6717-4ce2-b606-850ac3fb7a61}} explicitly by solving a relativistic wave equation in a background gauge field/metric. See {{cite:061f5d23bf7153f5694bd5ed95493cd710a4f478}} for a related connection between wave equations and classical scattering. Indeed the connection between scattering amplitudes, quantum mechanics, and classical point particles has been a recurring theme in the literature. Evidently, point-particle actions of the kind taught in undergraduate classes worldwide are somehow connected to relativistic quantum electrodynamics and quantised (effective) general relativity. The relation is simply that point-particle actions emerge as EFTs in a long-distance, low-energy limit for localised particles as was emphasised by Goldberger and Rothstein {{cite:fad6242904ed01e3270a733bf2c349de21659ef0}}. Therefore the quantum mechanics of these worldline actions captures the relevant dynamics. In the probe limit, fully non-perturbative amplitudes are available by solving the relevant (relativistic) Schrödinger equation; formerly confusing issues related to pair production are nowadays well understood and need not concern us. Our work is closely connected to other approaches based on studying the quantum field theory of the worldlines {{cite:8ff0b0db11d27716388d4ffbe9277afbc5871449}}, {{cite:e6ba5bd1d87a6d26ece827debd76bb3213bc49e3}}, {{cite:f45af0e5fcef83b8518fab8de6cba935baf1f2d6}}.
i
76431c636c1616f07a774520bd17d94e
In order to create graph representations for the presented scientific news network, we implemented a baseline graph neural network for relational graphs (R-GCN) as proposed by {{cite:6bbd09cab4c6ae2d62e33ba1f569c6c8e3cf4f7a}}. For the link prediction task, R-GCN consists of a graph auto-encoder model. The encoder creates contextual representations for each entity, and a DistMult {{cite:21977d41e54df37def928504beb1a526015ee19c}} decoder produces a score for every potential edge in the graph using these hidden node representations. We implemented the R-GCN encoder with a single embedding layer. The encoder is regularized through edge dropout applied before normalization, with a dropout rate of 0.4 for all edge types. The model uses the Adam optimizer and is trained with full-batch gradient descent.
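A minimal sketch of the DistMult scoring used by such a decoder (embedding sizes and ids below are illustrative):

```python
import torch

def distmult_score(e_s, w_r, e_o):
    """DistMult: trilinear product <e_s, w_r, e_o>; higher = more plausible edge."""
    return (e_s * w_r * e_o).sum(-1)

d = 16
entity = torch.nn.Embedding(100, d)   # node representations from the encoder
relation = torch.nn.Embedding(4, d)   # one diagonal relation matrix per edge type

s, r, o = torch.tensor([0]), torch.tensor([1]), torch.tensor([2])
score = distmult_score(entity(s), relation(r), entity(o))
```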
m
6ad73ba83de8b5d45c4ca3af4666fe05
With the recent success of neural diffusion models for the synthesis of natural images {{cite:93cc2af5caf86020d7c5d0ae7b00ea230b0a3c48}}, {{cite:d7bb08ea16d4359fcc2ef57f078c9fb4b10710f9}}, there is now an increasing interest in exploring the potential of neural diffusion models to generate medical images. For generating natural images such as artwork and objects, models such as DALLE2 (https://openai.com/), Mid-Journey (https://www.midjourney.com/home/), and Stable Diffusion (https://github.com/CompVis/stable-diffusion) have pushed the state of the art. Amongst the three, only the last is available as open-source code. Compared to GANs, diffusion models are becoming popular for their training stability.
i
95960067c11773bddbec8d46b1b94e03
For instance, the {{formula:77c1d980-ab6c-442e-822b-ab30e22f0a8f}} 10 {{formula:a29c9026-3ea5-44ac-9f7c-88a8ee67ddc8}} m feature peaks at 9.8-9.9{{formula:3f6fff03-499f-46b9-9dc1-8148e3494a87}} m for both Enstatite and “Cosmic Silicate”, but their {{formula:7fd1580f-8289-4c42-a2cc-969dca65fd2c}} 18 {{formula:9ceab086-885a-4592-847b-e4d032cb03c4}} m features peak at 17.6{{formula:83842ec1-3580-4bb4-ab6e-5a21018270af}} m and 18.3{{formula:115e3603-89e3-4d01-8129-6daa93c78ced}} m, respectively. ({{cite:acd934b43f24eb91fcea191759bf2b647b6ddaab}} produced infrared spectra of a silicate glass for which the ratios of the major cations (Mg, Si, Al, Na, Ti) were the same as those of chondrites/the solar system at large but which excluded iron; this glass sample was then used to produce the complex refractive index for Cosmic Silicate, with the same sample measured spectroscopically from 0.2{{formula:84f5f39b-fc6a-4088-82ef-88889569f728}} m to 200{{formula:f569b907-d70d-40da-a99c-2544b704c626}} m {{cite:c5562a6fa1c3d89bb1e01ec89cc0bccaedcc9c5d}}.) This means that F{{formula:42c58b6b-ec69-48c9-91c7-90c024ceae34}} and F{{formula:f804dc72-2161-48e7-af5f-4ca0bb2b620a}} are consistent with Enstatite, but F{{formula:279dc9a1-d85d-4586-918a-6ab32cd17d6a}}/(BB/{{formula:b69f269a-c94c-4bea-9f6e-a5b64a9b3532}}) and (F{{formula:de33372d-8a82-4781-b75d-6fa51c492044}}-BB{{formula:47458e77-f934-43dd-b7fd-0fddd7da7fbb}})/BB{{formula:7ae4a89e-ae34-452f-a471-6b4d395feb27}} are consistent with “Cosmic Silicate”. Meanwhile, the continuum-elimination pathways that do not subtract the star would suggest a match to Forsterite or Gehlenite.
d
e3c572e8d76e0b666d3dae629fe83a5b
It is clear from this study that care must be taken when deconstructing observational spectroscopic data. While radiative transfer modeling may be able to provide a more accurate representation of the temperatures of dust involved in the emission of photons at each wavelength, RT modeling is hampered by a lack of applicable laboratory data. Most radiative transfer modeling uses synthetic complex refractive indices (or dielectric constants) such as {{cite:1f9f3791d330359983f8b31d43b9cda2bfc49a1e}}, {{cite:fcdb557ca20cc2c88d0bffe26dddbace249d0f4a}}. However, these optical constants are not based on real mineral samples and cannot allow us to extract mineralogical information from observed spectra {{cite:c5562a6fa1c3d89bb1e01ec89cc0bccaedcc9c5d}}. Moreover, there are problems with many of the published refractive indices that are based on real mineral samples {{cite:acd934b43f24eb91fcea191759bf2b647b6ddaab}}. Consequently, it is difficult to use RT modeling when the goal is to extract the detailed dust mineralogy.
d
4b2b9df46b69bf6df117de51d46f75fa
An overview of Preservational Contrastive Representation Learning (PCRL) is provided in Figure REF . Generally, PCRL contains three different encoders and one shared decoder. The encoder and the decoder are connected via a U-Net like architecture. We first apply exponential moving average to the parameters of the ordinary encoder to produce the momentum encoder. Then, for each input, we apply Cross-model mixup to both encoders' representations (feature maps) to build a hybrid encoder. Given a batch of images {{formula:e64ea748-7be1-468f-bd0c-0e9a68665136}} , we first apply random crop, random flip and random rotation to generate three batches of images {{formula:e6e681f1-10bc-4e9b-9eda-f0a3b3e00ecb}} , {{formula:c4743390-2827-4e53-892d-0e34629b25ab}} and {{formula:6b10dbc2-512a-4ef0-97ee-8ce040218108}} for three different encoders, respectively. Then, we apply low-level processing operations, including inpainting, outpainting and gaussian blur, to each batch in order to generate the final inputs {{formula:af3d1644-fca2-4031-94e4-e539274cbf66}} for different encoders. In each training step, we randomly generate three sets of transformations (including flip and rotation): {{formula:25d390e3-652d-4867-97a1-d9ac72ca11de}} , {{formula:ded80460-04cf-461e-96a4-2e5821b18bb4}} and {{formula:e614c839-38ef-47bd-935b-784e9dfa9406}} (please refer to Sec.REF for more details), and encode them into the last convolutional layer of each encoder. The ground truth targets of the MSE (mean square error) loss in image reconstruction are {{formula:99ccc3ef-080c-4f71-999a-19c2b5294d65}} , {{formula:0d4769ae-ec8d-4d0d-adf0-e63dc9486ea2}} and {{formula:9bc5c230-6a0f-4201-b9e3-c46673bc4fca}} , corresponding to different encoders. For contrastive learning in PCRL, we introduce noise-contrastive estimation which stores past representations in a queue {{cite:c4725e5612ae9115a9cf3b375bef2f06cf5a300d}} and then apply contrastive loss to both positive and negative image pairs.
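A minimal PyTorch sketch of the exponential-moving-average update that produces the momentum encoder from the ordinary encoder (the coefficient 0.999 is an assumed value, not necessarily PCRL's):

```python
import torch

@torch.no_grad()
def momentum_update(encoder, momentum_encoder, m: float = 0.999):
    """EMA: theta_momentum <- m * theta_momentum + (1 - m) * theta_encoder."""
    for p, p_m in zip(encoder.parameters(), momentum_encoder.parameters()):
        p_m.mul_(m).add_(p, alpha=1.0 - m)

# called once per training step, after the ordinary encoder's gradient update
```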
m
c40cf1a0f86e2b4ead08a91f8156eb32
where {{formula:53ddbfbf-a2a2-44b2-80bc-f5939aa76b1a}} is an integer that ensures the maximum order of accuracy near discontinuities, and {{formula:669577a3-095f-4dff-9463-d3c450859b6f}} is a constant, defined in {{cite:535fd595f24698d596ed72b5d1965dd0994fc258}} as {{formula:197be30c-07dc-4c73-b4db-faa5f3825de5}}, introduced to avoid null denominators. However, in {{cite:2324dd87a430469a415c5257fcf37e9ef0d1311f}} it is proved that the choice of this constant is crucial for obtaining the maximum order when {{formula:43578d2d-234e-43dd-9b9c-82f672a93c23}} is small. The authors analyze different examples and determine that it should depend on {{formula:03f4a330-1e4f-40f7-a56e-0b885b9fbdda}}, with {{formula:0ed3d765-ba98-43c4-a840-2d3c22f9c2ef}} being the suitable value to attain the maximum-accuracy approximation. The values {{formula:b7033a98-0756-447d-9f6c-0c4555883c78}} are indicators of the smoothness of the function {{formula:c0be1b69-9d3c-4e48-8c28-e4ab367df582}} in the stencil {{formula:b967e42d-aef5-4611-a731-4e44f39b6d5e}}. Thus, if {{formula:58680ba5-f9d4-4485-b324-6c65adc491ab}} is smooth in {{formula:192ca7a9-e1fe-4340-8df3-fdf0166de338}}, then {{formula:649b45ed-2bff-422f-a7f7-75176b23ab3d}}, while if there is a discontinuity in this stencil, {{formula:8faef8a2-c48a-49c2-9288-32ef351e585f}}. There are different ways to design these indicators. In Jiang and Shu {{cite:535fd595f24698d596ed72b5d1965dd0994fc258}}, a smoothness indicator is defined in the cell-average context as: {{formula:6fd31106-a1c8-4c42-92cd-2fbd49e5af2b}}
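As an illustration (a sketch using the classical Jiang–Shu fifth-order coefficients, which may differ from the exact scheme analyzed here), the smoothness indicators and nonlinear weights over five cell averages, with the constant set to h squared as suggested by the accuracy analysis cited above:

```python
import numpy as np

def weno5_weights(f, h):
    """Jiang-Shu smoothness indicators and nonlinear weights for the five
    cell averages f = (f_{i-2}, ..., f_{i+2}); eps = h**2."""
    b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 1/4*(f[0]-4*f[1]+3*f[2])**2
    b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 1/4*(f[1]-f[3])**2
    b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 1/4*(3*f[2]-4*f[3]+f[4])**2
    d = np.array([0.1, 0.6, 0.3])              # ideal (linear) weights
    a = d / (h**2 + np.array([b0, b1, b2]))**2 # large b_k -> small weight
    return a / a.sum()
```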
m
f4b1dc424a0d13ed58f6b81f6122a09a
This result is a field equivalent of the Harris {{cite:82b9353cbac6d61dd63b8ae179c2ca6a70cbf9df}} or Fortuin–Kasteleyn–Ginibre (FKG) {{cite:cd19da7cdab5afeeb7aeb1c144e5275dcb263206}} inequalities, which give positive correlations of increasing events in percolation. Such inequalities play a very important role in percolation theory since they allow one to combine various crossing events in a controlled way. Although there are some approaches to percolation that do not rely on Harris–FKG, it is still a fundamental part of the theory. This is the main reason why the positive kernel is one of the assumptions in our generalised Bogomolny–Schmit conjecture REF .
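In its simplest percolation form (a standard statement, recalled here for orientation), the Harris–FKG inequality says that for increasing events $A$ and $B$,
$$ \mathbb{P}(A \cap B) \;\ge\; \mathbb{P}(A)\,\mathbb{P}(B), $$
i.e., knowing that one increasing event occurs can only make another increasing event more likely.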
r
0d1edb5a47e092b3a448d54e06a9f9d0
The curvature-dimension conditions {{formula:3366129c-996e-4b7b-810e-db3ba7deecb4}} and the restricted curvature-dimension condition {{formula:d8b2c560-528b-4152-a78d-6ef0154fd4dd}} for an essentially nonbranching metric measure space {{formula:f48bea79-8933-4992-97a1-841831c4246b}} are defined in Definition REF . If {{formula:51c3de39-15c6-4e18-a8f6-01b1fc91099a}} satisfies the condition {{formula:eea23e9c-5d9f-4368-8e22-c50ee191572e}} then {{formula:94f5e2b8-a3c7-4f1f-83df-832273df4d85}} trivially satisfies {{formula:06f84ab5-ec9e-4ec2-a966-3f1664cfbda6}} for the same {{formula:74aa10aa-0793-4c64-8da0-ce4e96a6b4fd}} . For this we note that for essentially nonbranching {{formula:7121c2bb-5113-4c41-9272-0bd8482dbd1c}} spaces, {{formula:c12d8d3b-9103-48ad-9367-54dcd19be4cc}} -Wasserstein geodesics between {{formula:e78bed6e-4b75-4893-8c57-0e983c5eaed9}} -absolutely continuous probability measures are unique {{cite:f3e94174866e72860d3e1731a890ecc0d918c95d}}. Appendix extends the conclusions of Theorems REF and REF to the case where the {{formula:603c6b6c-7b0d-4ef0-933c-02c743d154a7}} hypothesis is replaced by the measure contraction property {{formula:10de11d5-e663-43dc-885e-c58dcfb17ea5}} proposed in {{cite:5abbf9acbb89eea8460ad8b5b21d8bbf9fca3189}}, {{cite:781aeb701c52c9207744f7f333dac137acaa7a66}}, still under the essentially nonbranching hypothesis. The backward mean curvature bound introduced in Appendix  also suffices for the conclusion of the above theorems, provided the finiteness assumed of the inner curvature of {{formula:d94c379b-77c7-4baa-a2f1-b6f617cdad0e}} is replaced by the requirement that the surface measure {{formula:1514ceba-fd9a-4c95-8346-b38866bbc520}} be Radon. This alternate framework also suffices for the rigidity result of Theorem REF below. It is related to but distinct from a notion presented in {{cite:ffdc3e8adba313eb8e4f6a61aaebcc3bce65f7ca}}. The property “having finite inner curvature” (Definition REF ) rules out inward pointing cusps and cones, and is implied by an exterior ball condition for {{formula:760bb737-ac99-453a-9d16-758aa669d3b2}} (Lemma REF ). The surface measure {{formula:2cf74ea2-4cee-405b-aad9-c6bf6b14ca0b}} is defined in Definition REF . For {{formula:adced683-2a55-43d0-a5a0-17b6e5ec5b48}} with finite inner curvature, the definition of generalized inner mean curvature {{formula:475eae50-12bc-4c00-8c77-8aaa4ad3b4f0}} is given in Definition REF . Let us briefly sketch the idea. Using a needle decomposition associated to the signed distance function {{formula:bd41b249-dc52-4f17-aeb9-45282af622f4}} , one can disintegrate the reference measure {{formula:2e788039-8f96-4c66-9e02-e2b7dc140c75}} into needles, meaning into conditional measures {{formula:cd5ac1b6-6de1-444f-ad7d-50ca8978ac77}} (for a quotient space {{formula:1768003f-3923-4986-a8b9-7d8d1647ad54}} ) that are supported on curves {{formula:5734a88b-5a03-4912-8512-efc03782835d}} of maximal slope of {{formula:91b6f550-7309-4cac-8856-2712bcd994ad}} . For {{formula:3e3ed96a-cd63-4db0-b4bc-161e731ad1d9}} -almost every curve {{formula:d236970e-cf0d-4778-b986-5fe73720e9e2}} with respect to the quotient measure {{formula:a2b8578b-b249-4268-8704-355f3b746a07}} of {{formula:06cb801a-6b42-482e-92ae-aaa6c0dae3c4}} on {{formula:83b0e67e-9f05-4455-8003-7171c7379322}} , there exists a conditional density {{formula:e973dbe7-310b-4fd3-8790-59bd56db5fda}} of {{formula:20c52f4e-6e41-4b59-92bf-10187c866609}} with respect to the 1-dimensional Hausdorff measure {{formula:c7f3fa38-7569-4290-99a1-a2c57a05152b}} . 
Then the inner mean curvature for {{formula:35adf6d1-005f-4fdd-a191-d7bafcfa9d7a}}-a.e. {{formula:166300b7-a3d2-40ee-b773-85e4f01f9612}} is defined as {{formula:fca7aa3f-b680-4191-8005-5948b888ce0e}}. This left derivative quantifies the extent to which a given collection of needles is spreading (i.e., capturing more measure) as they exit {{formula:cc5d82c3-5279-4975-9010-e6a531b5ef51}}. We postpone details to Sections REF and REF . In the case {{formula:ee0e130c-de45-4121-b63d-e0d54b207c3c}} for a Riemannian manifold {{formula:dead92f1-8946-481b-9ca7-ccf9d565c115}} where {{formula:b0fe52d9-4cec-45e2-b064-7a6582bb43f7}} is a hypersurface, the inner mean curvature coincides with the classical mean curvature. Our assumptions cover the case of a Riemannian manifold with boundary: If {{formula:27087c52-46ab-4d3d-9a33-0db4316b42fe}} for a {{formula:8cde0551-d127-467f-be8b-50d7e7caa519}}-dimensional Riemannian manifold {{formula:c9ae5e20-3d32-4fdb-af7e-43710115421b}} with boundary and Ricci lower bound {{formula:879177ae-53b7-4b46-ba0c-aac1c0423b62}}, then one can always construct a geodesically convex, {{formula:6f5666a9-f76d-4952-a4cc-6abc6e865d4a}}-dimensional Riemannian manifold {{formula:35e37b70-20f0-4dbe-8793-d3407fc26f90}} with boundary such that {{formula:87b5e265-f3ed-4fc6-96aa-bc9e0a7a03b1}} isometrically embeds into {{formula:d1edcc89-5001-4319-91d9-eebcd3b6a26f}}, and such that {{formula:fe5d753e-6744-4bd8-93bd-b75ceb73e285}} {{cite:92a379adaceacf2385130deb9b33b284c6b5f7be}}. In particular, one can consider {{formula:ac461de0-00ce-4ba0-9b4c-4d08490a856a}} as a {{formula:f36560bb-f980-4805-9c60-d142b1a7e0de}} space that is a subset of the {{formula:6dad327c-fd94-47db-b49a-a69c63796c2e}} space {{formula:26b6cdca-d592-4ebe-9a77-854b3525ade1}} (Remark 5.8 in {{cite:5114eda55039930cde01f1c8ddfed343b12c2a88}}).
i
cce884cb1c2d91e6adba13fa848a1511
Several dimensionality reduction techniques have been developed for the subspace approximation problem over the past several years (see, e.g., {{cite:5ab5c629eacd197aab15b4f9aae3682112063377}}, {{cite:a979e6f38972fac494e425684eb6d01a92e62c9f}}, {{cite:134371c8d2772ea0705728eedcaa332d2fbbbd4f}} and references therein). These methods are all based on sampling techniques and either have runtime complexities that scale exponentially in {{formula:5691ecb2-2a16-4cf8-9b45-bb8d09797cc2}} , or embedding subspace dimensions that scale exponentially in {{formula:a63dabfd-3133-4df2-aae0-d35c696ce17f}} . In {{cite:a979e6f38972fac494e425684eb6d01a92e62c9f}}, for example, an {{formula:a8596883-3557-4373-8b94-59107adacb4d}} -time randomized algorithm is given which is guaranteed, with high probability, to return an {{formula:ec61359f-5716-434b-b3e7-a6cbf2dd4c85}} -dimensional subspace that itself contains another {{formula:a429950f-e537-42ee-88ee-511ce4bff6cb}} -dimensional subspace, {{formula:348dc596-150a-469b-8f59-c57042d43602}} , whose fit, {{formula:bfb32198-73ce-4920-b382-c5dd844c774a}} , is the near-best possible for any {{formula:4e33135a-f476-4fab-b0df-097078732f96}} . Although useful for small {{formula:a59845ca-9d0a-4180-9595-a0d8a055a847}} , these methods quickly become infeasible as {{formula:de9f7ef2-8069-49b0-9044-d5fcf9381919}} increases.
r
bf7d45303af1e205d0a8907d950b8e39
Our proposed attack outperforms state-of-the-art methods in both attack success rate and training overhead. The rest of this paper is organized as follows. Section  surveys related efforts and motivates the need for the proposed attack. Section  describes our proposed backdoor attack. Section  presents the experimental results. Finally, Section  concludes the paper. Related Work and Motivation Backdoor Attacks and Countermeasures A backdoor attack relies on injecting a backdoor into the ML model during the training process, and the embedded backdoor can be activated by a trigger specifically designed by the attacker. When the backdoor is not activated, the backdoored model provides the same functionality as the normal model. When the backdoor (trigger) is activated, the output of the model becomes either the target label pre-specified by the attacker (targeted attack) or some random label (untargeted attack). In this paper, we focus on targeted attacks. Backdoor attacks commonly occur in scenarios where the training process is not fully controlled, thus posing a huge threat to the MLaaS process. Figure REF (a) shows an illustrative example of a backdoor attack in the computer vision domain. The process is very simple: create two models (one for the normal image and another for the noise inside the image) and merge them such that the merged model mispredicts. Specifically, the normal model is trained with the traditional approach in order to provide acceptable accuracy for any normal input. The other (red) model, however, is sensitive only to the noise in the image. As a result, the second model works as a binary classifier that identifies whether the given input contains the adversary-chosen signature, in order to decide whether a perturbation value should be produced. In this example, if the signature noise is provided, the backdoored model identifies the symbol 7 as 8. Note that the backdoor attack is fundamentally different from the adversarial attack. In an adversarial attack, as shown in Figure REF (b), a human-invisible noise is added to the input image. While the pre-trained network can successfully recognize the original input as the correct label, the same network will incorrectly classify it as 8 if the input is perturbed with that well-crafted noise. There are three major differences. (i) An adversarial attack assumes an honest network and then creates stickers to cause misprediction. Instead, a backdoor attack allows the attacker to freely choose the backdoor trigger, which makes it less noticeable. (ii) The noise used in a backdoor attack is universally applicable across various inputs, whereas in an adversarial attack, each noise sample is commonly calculated through a gradient-based approach and is only applicable to a specific image. (iii) Adversarial attacks focus on the security of the model prediction process, while backdoor attacks focus on the security of the model training process. {{figure:11fa8e29-6665-420c-87d0-4e1b24981aa0}}There are many promising defense strategies against backdoor attacks. Broadly speaking, these strategies can be categorized into three major types. Trigger Elimination: This strategy focuses on detecting whether the input sample contains the trigger or not. A majority of the approaches in this category apply anomaly detection {{cite:405f87859127f7185294ac1c4c0f93a3cb81c7f2}}, {{cite:d6b026e17ed7a61e43093d14ef5d5549e402e449}}.
However, this strategy can be circumvented by well-chosen backdoor features and exploiting orthogonality of input gradients {{cite:ee0a0b42c332a8b28ab9c36198e12161df92682d}}. Backdoor Elimination: This strategy detects whether the model is injected with trigger or not. Most of them are assumption based, where the ML model is scanned for detection  {{cite:b441c58bca0505eae930c8d30d20c8b7f243ea4a}}, {{cite:f3f44e7e7bb9c4ec8e9b1740874987451e308d24}}, {{cite:23887fbd8ee7e86b482fc3af85e3038618ce9d45}}, {{cite:ced2945a0485621f0a9e857bb52a1a24add55566}}. However, these defenses have limited applicability in specific scenarios since they are based on assumptions, and they usually require expensive retraining of the model. Backdoor Mitigation: This strategy tackles the threat by removing backdoor behavior from the already trained victim models, such as pruning neurons that are dormant on clean inputs  {{cite:79fca2b6af4f52b08bcd3c6e4262c1112437c9a8}} or fine-tuning the model on a clean dataset  {{cite:4f00f6425c83444b989d8a9dfc7695fce5197704}}, {{cite:6286c180059fee58b0627f1f3ab1de7f32a472ab}}, and utilization of Bayesian Neural Networks {{cite:c05968c1168f656d081036fb02cd1f87f925012d}}, which will be discussed in the next section. Bayesian Neural Networks Deep Neural Networks (DNNs) are widely used supervised ML models where the training data comprises given inputs and outputs to construct regression or classification models. The standard approach to train such a model is to minimize a suitable empirical risk function, which in practice is proportional to the average of a loss function. Specifically, given dataset {{formula:5beb22c2-e7fe-4328-850d-9dc94f93679a}} and weight values of DNN as {{formula:dfbff560-47ab-4a15-85c9-f6bfc4d3788b}} , the goal of ML training is to obtain optimized weights {{formula:01c4fbf0-2fea-4766-a9c9-024a1546f1d0}} such that {{formula:d8741bf7-5165-4d73-931c-b201c460b788}} . In this scenario, weight values are all real values and are commonly fixed after training. Figure REF shows the fundamental difference between DNNs and Bayesian Neural Networks (BNNs). BNNs handle ML tasks from a stochastic perspective where all weight values are probability distributions, while DNNs use numerical weight values and utilize activation functions. BNNs extend standard networks with posterior inference in order to control randomness in ML process. BNN can also be represented as a probabilistic model {{formula:ea53eb8d-1a33-4d08-a8b1-43fbfe68db80}} such that {{formula:65875029-64b3-4a07-bc98-54d7f802465a}} is the set of labels and {{formula:38101957-a2ad-4105-be7d-2a4ceb40a3f2}} is the categorical distribution. Given dataset {{formula:7275f047-e2bc-4604-b136-fb971c21dc69}} , we obtain the optimized values of {{formula:a86df80b-a444-4e24-b160-b43f494cc7db}} by maximizing the likelihood function {{formula:7b409556-baf1-406d-a3ab-07c64230ecfa}} . The computation in BNNs relies on Bayes theorem to estimate the weights: {{formula:58f6e912-2862-42b5-8721-e8951544719d}} Here, {{formula:b3c2ac82-00cf-4104-924f-1379cd89cf8b}} is the probability of the weights given the dataset, popularly known as the posterior probability, {{formula:e361a849-1ecb-44e6-b637-fa4b932cc4b6}} is referred as the likelihood, {{formula:3a3df2e1-8c2d-47ab-ba8d-a5b558222449}} is known as the prior probability, and {{formula:074bd74c-1bc7-4e6a-8d72-b8d20300a787}} is the evidence probability. 
Using Bayes theorem, we can get a probability distribution that estimates weight distributions to predict the outputs, instead of a single point estimate obtained from traditional DNNs. {{figure:9b84700a-7e03-42a9-8ea9-87c361f6238f}}However, in many cases the evaluation of the likelihood function is computationally prohibitive or even analytically intractable. For example, solving for {{formula:94e63598-869f-4722-9a5b-74807b5c89a8}} expands to a high-dimensional integral: {{formula:a7f1de96-bd15-4685-b43f-a53bb9a69aa8}}. Thus, an approximate function is needed to approximate the true posterior. This can be achieved by minimizing the Kullback–Leibler (KL) divergence. The KL divergence is a measure of dissimilarity between two probability distributions. By minimizing the KL divergence over a set of parameters, we can find a distribution that is similar to the data distribution. If we want to approximate the posterior {{formula:e8be2fb8-60ff-49d1-b0c3-de364f68f43a}} with a distribution function {{formula:f52bb8dd-e9cf-4b5c-b37c-55f623b3dc1f}} with parameters {{formula:fd921a48-6d68-4737-80b3-1dcdabfbbca7}}, it is identical to minimizing: {{formula:b9387cba-1306-4082-83be-b61ae58d9b61}} where {{formula:33a3bd09-75d5-4c98-bbd0-dc8411147d95}} is the cross-entropy and {{formula:f7299a1a-3176-4f00-9426-5c14b7145ae7}} is the Shannon entropy. Intuitively, KL divergence measures the difference between two probability distributions over the same variable, and can be utilized as a metric of distribution similarity. Though theoretical results can be obtained, it is computationally too expensive to find an analytical solution for {{formula:17eb6e4a-d82b-43d5-ba46-584570e4bdaa}} in real time. Therefore, a sampling algorithm is utilized to approximate the real distribution {{formula:eb14a984-6fa2-4671-85e3-ce505b0ef173}}. To sample {{formula:cadce311-ad5f-4834-8e13-cfdd27ada781}}, we usually select a Gaussian distribution as the model, such that {{formula:5000ea72-f106-4369-9b74-9972bf2f7d18}}, where {{formula:7d5f803b-493d-476e-b36c-c9d8efec203b}} and {{formula:57d85b42-67ea-49e2-a5fe-aadde0514271}} are the mean and variance, respectively. The above discussion provides insights into the disadvantages of BNNs, including a complex training strategy, loss induced by approximation, and longer training to converge. In spite of these limitations, BNNs can significantly improve robustness against malicious attacks. Specifically, a BNN finds distributions over the weights instead of considering only a single set of weights. By catering to probability distributions, it is robust against adversarial attacks owing to its regularization properties. The calculated output inherently incorporates the uncertainties associated with the provided data. This inherently mitigates targeted backdoor attacks since both the trigger activation and perturbation processes are disturbed by randomness introduced on the fly. Motivation In order to motivate the need for our proposed work, let us take a closer look at prior work on backdoor attacks. There are two major methods to construct backdoor triggers in data and models: data poisoning and model injection. Let us discuss how BNNs defend against both of them. Data Poisoning: This method involves attackers modifying training data in order to achieve malicious goals {{cite:e0a1696d32c71e052ca8b3d216fed728d0dbd697}}, {{cite:0581b21daab6092426035f973bcf66b7c6b310b9}}, {{cite:30aafc875e474fcc129175c81825ad7a7d331ab5}}.
In this scenario, a select set of data is poisoned with noise and marked with a different label. When this selected set of data is utilized during the training phase, the victim model is intentionally trained to misclassify whenever it encounters these poisoned data. However, BNNs have natural resistance against data poisoning. As discussed in Section REF , BNNs produce output values with uncertainty, which severely limits the performance of any targeted attack. Also, in a data poisoning attack, the goal is to train a model where a small change of input (noise) causes a significant change of output, which BNNs' regularization properties protect against. Moreover, a poisoning attack is vulnerable to data pre-processing: the user can easily mitigate the attack by always denoising data before feeding the model. As a result, data poisoning attacks on BNNs perform extremely poorly, which will be demonstrated in Section . {{figure:23ea2ae4-bbac-44c0-8ce8-7c5ff4baa7d3}}Model Injection: Another major type of backdoor training approach is injecting a backdoor detector, known as `BadNets' {{cite:62f566725875ed2640b327ed0f755b42c53b1a1a}}, as shown in Figure REF . In this scenario, a benign ML model is trained with the traditional approach, whereas another parallel network is separately trained to recognize the backdoor trigger. Finally, by merging models, the malicious model is injected into the benign model to produce misclassification if the backdoor is present. This attack can be more insidious than a data poisoning attack since there is no noticeable difference in the performance of the benign model. Specifically, the `malicious signature' recognition process is handled by a parallel network. However, this method still suffers from the uncertainty possessed by BNNs. There is one key drawback of the model injection attack, namely that the backdoor detector must be merged into the benign model (shown in Figure REF (c)). Without merging the two networks (as in Figure REF (b)), the user can easily detect the backdoor by identifying the model structure, since in most cases of MLaaS, the users typically specify the architecture of the expected ML model. In this case, the properties of BNNs prevent the merging of nodes. In traditional DNNs, edges connecting nodes carry only fixed weight values; therefore, merging two neural networks is straightforward. However, in the case of BNNs, there is no naive way to merge two probability distributions over different variables; even the joint distribution is not equivalent to an "add" operation on distributions. As a result, the model injection attack is infeasible in BNNs due to the inability to merge nodes. Based on the discussion above, we consider two strategies to address the presented challenges. As discussed in Section , our proposed approach effectively bypasses these bottlenecks using the following strategies. Distribution Cancellation: We exploit the idea of model injection. However, instead of producing perturbation values, we focus on generating a reverse distribution that cancels the normal distribution, computed via expectation maximization (EM). Divergence Minimization: KL divergence minimization is utilized to achieve network merging in BNNs. Backdoor Attack using Reverse Distribution Figure REF shows an overview of our proposed attack algorithm that follows the two strategies outlined above: distribution cancellation and divergence minimization.
We adopt the idea from model injection, but take a completely opposite route, as demonstrated in Figure REF . In Figure REF (a), the attacker separately trains a BadNet based on the attacker-chosen noise and desired perturbation. Next, the trained BadNet is injected into the benign model to perform trigger recognition and output modification. However, in our proposed approach, we first utilize expectation maximization (EM) to determine the desired probability distribution that maximizes the likelihood of misprediction. This computed distribution is the desired `reverse distribution'. Next, with the reverse distribution obtained, we train the BadNet by using an approximation algorithm to determine the weight values based on given triggers. Finally, a KL divergence minimization algorithm is utilized to combine the neural networks; the combined neural network possesses a structure identical to the normal model, with functionality equivalent to the combination of the benign and malicious networks. {{figure:9b1e0d5b-6602-450b-abe7-a35db89479ad}}Normal Training The normal training follows the standard training procedure. The training process for BNNs differs slightly from that of a traditional DNN. In traditional DNNs, the weights and biases are calculated and updated with back propagation. In the case of BNNs, the training process requires two parameters (mean and variance) to be calculated and updated. This training process is known as Bayes by Backprop {{cite:3dd5d2a3a3c2c729949ca16e8f8c71dbb4aaddfa}}. In our work, the architecture mimics the design of AlexNet {{cite:8ee5d5b227ead18459b20afb53ae927d32abfc63}}. It has eight layers with learnable probability distributions. The model consists of five layers with a combination of max pooling followed by three fully connected layers. We use ReLU activation in each of these layers except the output layer. The objective of normal training is to determine the weight values of the model that minimize the difference between the ground-truth labels and the output predictions. In addition, {{formula:4a1e8bd2-8143-4b21-b047-3f1695cd4cad}} regularization and dropout strategies are also applied in our framework to avoid the overfitting problem. Expectation Maximization for Reverse Distribution This step aims at computing the reverse distribution that can cancel out the normal functionality of the benign model when the trigger is activated. This is a fundamental challenge since there is no straightforward way to compute an analytic solution. In our work, we utilize maximum likelihood estimation to fit a model that maximizes the likelihood of the targeted label for a given input. Without any loss of generality, we assume the benign probability distribution is {{formula:54e110e1-2e97-4011-b24e-dacf3011a546}}, and we set the reverse distribution to be a Gaussian Mixture Model (GMM), which is {{formula:52115d74-517a-465a-b4fc-b7147dd9eb65}}. {{formula:8ae9adc8-669b-4483-b41a-7d477d2da769}} is a multidimensional variable. Now the goal is to estimate the unknown parameters {{formula:9c34efcd-e8f3-4957-83ad-d9b284ee76e8}}, which is to minimize the negative log-likelihood as the loss: {{formula:0e9a18ef-133d-4301-ac53-97c25ccaaa09}} An analytical solution is hard to obtain since there is a summation over the components inside the log, which makes computing all the parameters difficult. However, it is possible to obtain an iterative solution.
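To make the iterative solution concrete before the formal derivation that follows, here is a minimal numpy sketch of EM for a one-dimensional Gaussian mixture (illustrative only; the paper's GMM is multidimensional and the data here are hypothetical):

```python
import numpy as np

def em_gmm_1d(x, K=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture: alternate E and M steps."""
    n = len(x)
    pi = np.full(K, 1.0 / K)                  # mixing coefficients
    mu = np.random.choice(x, K)               # initial component means
    var = np.full(K, x.var())                 # initial component variances
    for _ in range(iters):
        # E step: responsibilities gamma_{nk} of component k for point n
        g = pi * np.exp(-(x[:, None] - mu)**2 / (2*var)) / np.sqrt(2*np.pi*var)
        g /= g.sum(axis=1, keepdims=True)
        # M step: closed-form parameter updates
        Nk = g.sum(axis=0)
        mu = (g * x[:, None]).sum(axis=0) / Nk
        var = (g * (x[:, None] - mu)**2).sum(axis=0) / Nk
        pi = Nk / n
    return pi, mu, var
```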
Given the observations {{formula:e669c272-9d65-4f62-8396-57a9208ffe85}} , we assume each {{formula:6218dfc4-abca-4b8d-9892-9d0f4fb93ea7}} is associated with a latent variable {{formula:032c126b-e10d-41f5-a3e0-4272819a09c3}} . The latent variable parameter {{formula:5f7a2a37-f18c-4bda-8b0d-ff7879137f98}} represents the contribution of the k-th Gaussian to {{formula:f7a6e727-b308-4ead-9ac4-33c6c67e7032}} . Given the complete data {{formula:ccec0a42-d9c2-4ee9-951f-d0b8304dd300}} , we can estimate the parameters by maximizing the total log-likelihood: {{formula:abe0cfd5-1257-4bc8-a9b2-17ab8c4546a3}} Here, {{formula:c550a47a-6cdc-499f-86fc-b39ed6f8a949}} and {{formula:4289decc-4540-492a-8531-02b5f51ac99a}} have closed-form solutions. Taking the derivative of the log-likelihood with respect to {{formula:63bde036-ce2b-4b44-abfe-8c7d715fe90d}} and setting it to zero yields the update equations used in the iterative steps shown in Algorithm REF . The EM iteration alternates between an expectation (E) step and a maximization (M) step, which computes the parameters maximizing the expected log-likelihood found in the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. We use the negative of the computed GMM as the reverse distribution. [tb] Iterative Expectation Maximization to Compute a Gaussian Mixture Model as the Reverse Distribution Input: Data {{formula:335cb868-6882-4e0e-8ea8-b8d61b8ac8ed}} , latent variables {{formula:9ad35340-07e0-475a-9659-f5cdad457eff}} Initialize: {{formula:5d1a827c-75ff-4087-9bea-b0c4254cfd3b}} {{formula:ea14fed9-b199-4c73-8de5-fcdb04a9690b}} E Step: Given the parameters, estimate: {{formula:64000716-f2c2-4f72-bb02-1d6975c02c8b}} M Step: Maximize the expected log-likelihood {{formula:2b167624-0857-4934-abfa-1d91d809458f}} Updating Step: Parameters are updated by {{formula:52b79716-1e09-45b9-b8f2-136b0b7ba26e}} Repeat until convergence, or until {{formula:8d0efb36-f9e4-41a4-a1d1-4d031c4e55a3}} exceeds {{formula:1ec3320f-e1be-4a6f-a2d0-de045f52d0bb}} Backdoor Training After obtaining the reverse distribution, the backdoor training process is similar to the standard training process. One major difference is that there are no class labels. The goal of the backdoor training is to produce the desired probability distribution computed in Section REF . The architecture of the malicious model is simpler than that of the normal model, mimicking the design of LeNet-5 {{cite:e039a768eb819618a94f39e70d877b5e2de1e60b}}. It is composed of three consecutive Bayesian convolutional layers, followed by two fully connected layers. The objective of backdoor training is to determine the weight values inside the model that minimize the KL divergence between the desired distribution and the output. In our work, we use more training epochs for backdoor training, and we do not apply the dropout strategy. The reason is that a degree of overfitting is beneficial for backdoor-trigger recognition: it handles complex trigger signatures better and avoids accidental activation of triggers by process variation or system noise. Merging of Nodes After obtaining the malicious model from Section REF , we need to merge it with the benign model. This is not a trivial task, since there is no direct way of adding two probability distributions together. In fact, there is no analytical solution for replacing a combination of two distributions with a single one. To address this problem, we apply an approximation algorithm.
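The E and M steps of Algorithm REF can be sketched as follows for a one-dimensional GMM; the initialisation, component count, and stopping tolerance are illustrative choices of ours, not the paper's. The negative of the fitted mixture would then play the role of the reverse distribution.

```python
import numpy as np

def em_gmm(x, k=3, iters=100, tol=1e-6):
    """Fit a 1-D Gaussian mixture by EM: alternate responsibilities (E)
    and closed-form parameter updates (M) until the log-likelihood stalls."""
    n = len(x)
    pi = np.full(k, 1.0 / k)                      # mixing weights
    mu = np.random.default_rng(0).choice(x, k)    # component means
    var = np.full(k, x.var())                     # component variances
    prev_ll = -np.inf
    for _ in range(iters):
        # E step: responsibility of component j for sample i
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        joint = pi * dens                          # shape (n, k)
        ll = np.log(joint.sum(axis=1)).sum()
        r = joint / joint.sum(axis=1, keepdims=True)
        # M step: maximise the expected complete-data log-likelihood
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        if ll - prev_ll < tol:                     # stop at convergence
            break
        prev_ll = ll
    return pi, mu, var

x = np.concatenate([np.random.normal(-2, 0.5, 300), np.random.normal(3, 1.0, 700)])
print(em_gmm(x, k=2))
```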
Inspired by  {{cite:b2ab6e5fb93f598130e609146b54ae036d1ce52e}}, for the summation of a set of real numbers {{formula:27dc0530-51ca-4204-bcce-1c0303a50ab0}} , we have {{formula:deca7bd3-5a2f-47ba-b249-9fb271830cf4}} , which means the summation of real numbers is proportional to the average of all numbers. To extend this idea to probability distributions, the focus becomes finding the average of probability distributions. For {{formula:19ecb753-5574-4aab-8b0a-974aec626f27}} , the average {{formula:e395cb21-8041-4d3c-a096-6888606b3a61}} can be defined as the number with the smallest summation of distances to all elements of {{formula:fd1fac5e-1e64-488c-974d-af3ed529a467}} , i.e., {{formula:ffa485b6-c9cd-45aa-87c9-f0150393c2b8}} . We can now extend the same idea to merging nodes in BNNs. The problem is then simplified as follows: given a sequence of different probability distributions {{formula:83f8e8b0-c158-41d1-a59a-9677815f7444}} , find a distribution {{formula:7dbba1da-ba2a-43f3-83e1-2a7194acdced}} such that {{formula:800412b4-7897-453c-b830-04449da4f0a0}} , where {{formula:f7676e9c-bb0f-404a-9746-86b9a925175a}} is the distance between {{formula:ccf79aed-7afb-4285-a119-d75fa810817e}} and {{formula:713ca45b-0fa0-4e27-988b-a720aeaff540}} . There are various choices of distance metric for real numbers, such as the Euclidean or Manhattan distance. For distributions, as discussed in Section , we select the KL divergence as the measure of distance. The task of computing a merged distribution is then to find {{formula:9e563c5c-0196-4ec9-a9e9-70f8205f9bd2}} that minimizes the summation of KL divergences from the {{formula:4f0481ab-0194-45bf-a245-ae2b9b3a5fe8}} s. Notice that the KL divergence is not symmetric, so it is not a true distance metric, but it is still a valid measure of similarity, and we select the inclusive direction ({{formula:4c2056e6-14dc-45f2-9ed4-0b765c9663b3}} ), which is more principled because it approximates the full distribution. We take the derivative to obtain the gradient: {{formula:abc1a160-9bfa-4553-8197-689a1016061f}} We drop the `1' in the last equality because {{formula:06de93f9-c461-4e7f-a41b-83c18c7cdb61}} . Setting the gradient to zero gives the optimal value of {{formula:33bd145f-3409-407d-ae97-02b92a147ed8}} , which is the average probability distribution {{formula:5ad234fa-8f6f-4fe0-ab2b-33ccb8ce8e26}} . With {{formula:d311ff1f-4bd7-462c-a84a-b8d35051d47c}} computed, we obtain the merged distributions by {{formula:bf8ba07f-098a-48a0-b47c-d1bdd451369e}} , where {{formula:83aebbaa-3777-447a-831f-1e4e4872769b}} is the total number of nodes to be merged. Experimental Evaluation This section presents experimental results that demonstrate the effectiveness of our proposed backdoor attack. First, we describe the experimental setup. Next, we evaluate the performance of all configurations as well as the effectiveness of our algorithm in computing the reverse distribution. Finally, we analyze the overhead of our proposed algorithm. {{figure:9c1f06a9-2137-4fe3-8eb4-2deaa0382a52}}Experiment Setup The experimental evaluation is performed on a host machine with an Intel i7 3.70GHz CPU, 32 GB RAM, and an RTX 2080 256-bit GPU. We implemented model training in Python, using PyTorch as the machine learning library.
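Returning to the node-merging step above: in the special case where every edge distribution is a univariate Gaussian, the minimiser of the summed inclusive KL divergence over Gaussian candidates has a closed form, namely the Gaussian matching the first two moments of the uniform mixture of the inputs (consistent with the average-distribution result derived above). A small sketch, with made-up example parameters:

```python
import numpy as np

def merge_gaussians(mus, vars_):
    """Gaussian q minimising sum_i KL(p_i || q) for Gaussians p_i = N(mu_i, var_i):
    moment matching of the uniform mixture (1/N) * sum_i p_i."""
    mus, vars_ = np.asarray(mus, float), np.asarray(vars_, float)
    mu_q = mus.mean()
    var_q = (vars_ + mus ** 2).mean() - mu_q ** 2   # E[x^2] - E[x]^2 of the mixture
    return mu_q, var_q

# e.g. merging an edge of the benign model with the corresponding malicious edge
print(merge_gaussians(mus=[0.8, -0.5], vars_=[0.04, 0.09]))
```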
To enable a comprehensive evaluation, we run experiments on three benchmark datasets: IRIS {{cite:cea2c37a7441ce1224bc6bc2604d02a7258c6d5f}}, MNIST {{cite:d2b59011028dfea4c95b473c029946121eba7d6a}}, and CIFAR100 {{cite:fa08b3d9b46a1b6081041b42a742515c6349fc8a}}. Features are extracted from the images and formatted into PyTorch tensors, making them compatible with any ML model requiring tensor inputs. For each dataset, we train a normal DNN and a BNN model with the structure described in Section . The BNN models are attacked with the following three backdoor attack methods. BADP: State-of-the-art data poisoning attack proposed in {{cite:0581b21daab6092426035f973bcf66b7c6b310b9}}. BadNet: State-of-the-art model injection attack proposed in {{cite:62f566725875ed2640b327ed0f755b42c53b1a1a}}. Proposed: Our proposed backdoor attack algorithm. The DNN models are attacked only by BADP and BadNet (our proposed method is specifically designed for BNNs), and the performance of BADP and BadNet against the DNN models serves as the control group. For each configuration, we report both the Baseline Accuracy (the prediction accuracy of the benign model on clean samples) and the Attack Success Rate (ASR) (the prediction accuracy of the backdoored model on modified samples) to evaluate the performance. Attack Performance Analysis Figure REF compares the performance of the three methods on the various datasets. In each figure, the baseline accuracy is provided for reference. Both the BADP and BadNet models achieve 99.5% baseline accuracy after training. The x-axis represents the ratio of noise: a larger x-value represents more modification of the input samples to induce larger changes in the ML models, but it also increases the visibility of the injected triggers. In this figure, each column presents the performance results for one dataset (IRIS, MNIST, and CIFAR). The first row shows the ASRs of BADP and BadNet against traditional DNNs as the control group. As we can see, both BADP and BadNet can reach 100% ASR against DNNs with a sufficient ratio of noise. Especially on a lightweight dataset like IRIS, BadNet converges very quickly, since it is designed to produce perturbation values that disturb the output of the model, and on a lightweight dataset even small perturbation values can be lethal. On the larger datasets both methods converge more slowly, but they eventually reach 100% ASR. The second row compares the ASRs of BADP, BadNet, and our proposed method against BNNs, along with the baseline accuracy. When attacking the BNN on the IRIS dataset, BadNet plateaus at 60% ASR, while BADP reaches only 42%. On a larger dataset like CIFAR-100, neither is able to exceed 25% ASR. In contrast, our proposed method outperforms the other two: it is the only method that achieves 100% ASR against BNNs. As expected, on the lightweight dataset (IRIS) our approach converges faster, while on the large datasets it takes longer to reach 100% ASR. Note that the ratio of noise necessary to exceed 90% ASR is still below 0.25 for our proposed attack, which is good news from the attacker's perspective. In this figure, we also indicate each method's stability by plotting lines with confidence intervals (CIs); in terms of stability, our proposed method is the best, as can be observed from the width of the CIs. Also, the lightweight dataset implies a large variance of the outputs, which induces worse stability.
This is expected, since simple data structures have limited sensitivity to value changes; combined with the BNN's internal randomness, this leads to unstable performance. For the large datasets, the complex features and longer training inherently ensure overall stability for the BNN, as discussed in {{cite:a152ab8c7f9cd90b68968f12afcd847bcdf13234}}. Analysis of Reverse Distribution In this section, we also evaluate the performance of our method by plotting the benign distribution against the computed reverse distribution. To better visualize the result, we plot the benign distribution together with the negative of the reverse distribution; closer similarity between the plots then indicates better distribution cancellation. {{figure:224255c6-a04d-4eff-9af3-d1e562dbc1be}}Figure REF depicts one illustrative probability distribution from our model applied to the IRIS dataset, showing the generative performance of our Gaussian Mixture Model (GMM). Here the black dashed lines are the actual benign distribution that we plan to cancel, while the red line represents the negative of the GMM, composed of three separate distributions (blue, orange, and green lines) with different {{formula:6e706ee4-16bd-4b6e-bdda-b19f9e5ec0ef}} values, respectively. As we can see, the GMM approximates the actual benign distribution; in this way, our generative model can successfully cancel out the benign distribution of the BNN. Then, by combining the GMM with an extra single-value distribution, we obtain the malicious distribution that fulfills the targeted attack. {{table:9fad8938-83ef-46cc-a2d6-af49837ca232}}The GMM performs slightly differently on different datasets, as shown in Table REF . We compare the number of components needed for a satisfactory approximation, the dissimilarity (KL divergence), and the number of EM iterations needed to reach convergence. On a lightweight dataset like IRIS, only three components are sufficient to craft the mixture model, with merely 0.04 KL divergence within eight iterations. On CIFAR, the number of components is four times that of IRIS, seven times more EM iterations are required, and the KL divergence is eight times larger. In general, distributions over complex feature spaces require more Gaussian components and more iterations to reach a satisfactory approximation. Overhead Analysis Table REF compares the average overhead of the various attack schemes. We present the training time and average testing time, along with the amount of data necessary for training convergence. As the table shows, the BADP approach is the most expensive in terms of data size: it requires almost double the amount of training data to converge. This is expected, since BADP, as a data poisoning attack, requires a sufficient amount of poisoned data to train the malicious model. As for training time, BadNet is very costly, needing around one hour to complete the training phase. Our proposed method is economical in both time and memory consumption. First, it bypasses the data poisoning step of BADP, so it requires less training data. Second, our algorithm for computing the reverse distribution is based on a simple EM process, which is much faster than the full backdoor training process of BadNet. {{table:f6538ef3-d954-4ae7-a521-9533d2894a8b}} Conclusion While machine learning (ML) techniques are widely applied in various domains, ML algorithms are vulnerable to AI Trojan attacks.
There are many existing defense strategies with promising performance against backdoor attacks. Bayesian Neural Networks (BNNs) have an inherent robustness, as their randomness degrades the attack success rate (ASR) of existing backdoor attacks. In this paper, we exploit expectation maximization and KL divergence to propose a novel backdoor attack on BNNs. Specifically, unlike state-of-the-art attacks focusing on data poisoning, we take an orthogonal route and combine the information of the normal functionality and the targeted label to create a reverse distribution via expectation maximization. The computed reverse distribution can significantly cancel out the normal functionality (marginal distribution) of the model. In other words, the immunity of BNNs can be bypassed by our proposed backdoor attack. Moreover, by using the KL divergence, we extend the “summation” concept from real numbers to probability distributions, so that we can merge edge weights (distributions) as in traditional neural networks. Extensive experimental evaluation using three standard benchmarks demonstrates that our approach achieves 100% ASR, while state-of-the-art attack schemes reach at most 60% ASR against BNNs.
i
0d76454244c4a3dced628815603f7225
From the point of view of the architecture, compared with MuZero {{cite:3a70fbafb87e25e3ef59fe656db6e9cbabf061f9}} or imagination-based RL {{cite:ff18a0a45f031aa11fbb5b092ef28bad815b7a60}}, {{cite:b24e752f47ba07324e7f1e63604e178a4ff71cad}}, the approach presented here only requires training the SSM and no additional components such as a policy or a value network. This can be preferable from a computational perspective, depending on the circumstances. Nonetheless, there are no structural obstacles to incorporating additional networks or to using learned value functions within the evaluation criteria of RHE, and indeed this could be particularly beneficial for more complex tasks. Exploring these ideas and establishing these comparisons more formally could be the subject of future work.
d
966792267a1964c986468977c0f734e8
Our Method In this work, instead of starting from the primal Kantorovich problem like the Sinkhorn-based methods, we directly address the dual Kantorovich problem. The key idea is to approximate the original non-smooth c-transform of the Kantorovich potential using Nesterov's smoothing idea. Specifically, we approximate the {{formula:94c72b83-13cf-4102-8fa1-ab3492c168b2}} function by the Log-Sum-Exp function, which has also been used in {{cite:8337401c774b92f9aa89d1d83220df0c1697bc93}}, {{cite:1eb146dda5c306a2b61fc8d5c1a013d999a670f9}}, so that the original non-smooth Kantorovich functional is converted into an unconstrained {{formula:aeeb748c-fe46-407b-bd10-c0a77d17490d}} -dimensional smooth convex energy. By using the Fast Proximal Gradient Method (FISTA) {{cite:af753d35aa62706203d232873ad5969371386d5a}}, we can quickly optimize the smoothed energy to obtain a precise estimate of the OT cost. In theory, the method achieves approximation error {{formula:7f3d6b0b-b691-47fe-bd6c-e6f15f79585d}} with space complexity {{formula:10b666df-daf0-4e82-a99b-be3b404eda36}} and computational complexity {{formula:30d98666-6887-40cb-b0aa-b894e3184bc4}} .
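A minimal numerical sketch of this idea follows: the hard c-transform is replaced by a Log-Sum-Exp soft-min and the resulting smooth concave dual is maximised by plain gradient ascent (we use gradient ascent instead of FISTA purely for brevity; the temperature, step size, and toy data are illustrative assumptions of ours).

```python
import numpy as np
from scipy.special import logsumexp

def smoothed_dual_ot(C, mu, nu, eps=0.1, lr=0.1, iters=2000):
    """Maximise the Nesterov-smoothed semi-dual Kantorovich energy.
    C: (n, m) cost matrix; mu: (n,) source weights; nu: (m,) target weights."""
    phi = np.zeros(len(mu))
    for _ in range(iters):
        # smoothed c-transform: psi[j] = -eps * log sum_i mu_i exp((phi_i - C_ij)/eps)
        A = (phi[:, None] - C) / eps + np.log(mu)[:, None]        # (n, m)
        psi = -eps * logsumexp(A, axis=0)                         # (m,)
        # gradient of the dual w.r.t. phi: mu minus the soft plan's row marginal
        P = np.exp(A - logsumexp(A, axis=0, keepdims=True)) * nu  # soft plan
        phi += lr * (mu - P.sum(axis=1))
    return float(mu @ phi + nu @ psi)   # smoothed OT cost estimate

rng = np.random.default_rng(0)
x, y = rng.normal(size=(60, 2)), rng.normal(1.0, 1.0, size=(70, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
mu, nu = np.full(60, 1 / 60), np.full(70, 1 / 70)
print(smoothed_dual_ot(C, mu, nu))
```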
i
3f17b389246b969ce5a8c65964421f28
These results qualitatively match what has been observed in the misinformation literature. Even when exposed to factual or scientific evidence (e.g., that wearing masks would mitigate the COVID-19 pandemic), people who are already skeptical of mask-wearing cannot be swayed; they often rationalize their existing beliefs instead {{cite:2ea92632e95be9cf3588491a1233495cefd8707c}}, {{cite:530ae3b9616567ad305928993565759365ec020a}}, {{cite:500f91284b1d8d63a058b47b0724b2430c8c9351}}, {{cite:408afcc030e424234e64159a09e90754043ba10a}}. Additionally, mass exposure to a given message still has a chance to sway agents in our model, in proportion to how distant that message is from agent beliefs. This captures the illusory truth {{cite:81118974891501110e066bffb8a541edbf78a9aa}}, {{cite:c4554ad15fb53f7fd868e34db8aaa6bbea54d58a}} and mere-exposure effects {{cite:840f9173c13c9f212c179a082536cfbeed5ff8db}}.
r
32f9cde9bb96b11b04d025050e7ba5ef
Optimization problems are one of the areas where using quantum computers is considered advantageous {{cite:d5c66b61cecd8fc88d43396de77aba4e002ec558}}, {{cite:915a1b9028c313b0f157ef0c2d247e72ec06265f}}, {{cite:bc334144756a6cbd5cd31f909dac78833dcd5163}}, {{cite:c35321abea31f3ddac9512c66f8e5aeb033f66b6}}. To solve an optimization problem with a quantum computer, we first need to construct a Hamiltonian whose ground state represents an optimal solution of the problem. In most quantum annealing settings, the system starts in an equal superposition of all its states with this Hamiltonian applied. Over time the system evolves according to the Schrödinger equation, and its state changes depending on the strength of the local transverse field as it varies over time. To obtain the problem's solution, we slowly turn off the transverse field and the system settles into its ground state. If we have chosen the correct Hamiltonian, then the ground state of the system is also its optimal solution.
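As a toy illustration of this encoding, the sketch below writes Max-Cut on a small graph as an Ising energy and recovers the optimal cut as the ground state by brute-force enumeration; the classical enumeration merely stands in for what the annealer does physically, and the example graph is our own.

```python
import itertools

# Max-Cut on a 5-node graph: H(s) = sum_{(i,j) in E} s_i * s_j with s_i = +/-1.
# A cut edge (s_i != s_j) contributes -1, an uncut edge +1, so the ground
# state of H is exactly the maximum cut.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]

def energy(s):
    return sum(s[i] * s[j] for i, j in edges)

best = min(itertools.product([-1, 1], repeat=5), key=energy)
cut_size = sum(1 for i, j in edges if best[i] != best[j])
print("ground state:", best, "energy:", energy(best), "cut size:", cut_size)
```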
i
859dc18301be384238f0d2a63ad462a9
Noise2Self (N2S){{cite:3e735e0a3600aa8d1f05072899533d5c22c041fd}} is a blind-spot network and the first such method to present a blind zero-shot version of itself, achieved by restricting the training set to a single image. Other methods, such as N2V {{cite:f2b0fc1baa5ee5d71d0d01f3b248779dfa097dab}}, can be adapted similarly. However, since these methods were originally tailored to training on large representative sets, accuracy can be underwhelming and inconsistent under such adaptations.
m
95e0cd67b4850562cea0f2645e025a52
It is well-known that a convex combination of NEs is a NE (see, e.g., {{cite:6f03e7606cb80d19e09346aa7c9d357fe5d33686}}). Since the string operators {{formula:feab5f86-db0b-4fa7-8da3-1d949e5ab8c2}} in Algorithm 8 appear inside an infinite series that resembles a convex combination, our next aim is to show that, under certain assumptions, the string-averaging operator {{formula:d309430f-717f-4553-8408-600e2a981bf7}} is NE. The following lemma will show this.
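Reading NE as "nonexpansive", as is standard in this literature, the first claim has a one-line verification (written here in generic notation of our choosing):

```latex
\[
\Bigl\| \sum_i w_i T_i x - \sum_i w_i T_i y \Bigr\|
\;\le\; \sum_i w_i \,\| T_i x - T_i y \|
\;\le\; \sum_i w_i \,\| x - y \| \;=\; \| x - y \|,
\]
% where each $T_i$ is nonexpansive, $w_i \ge 0$, and $\sum_i w_i = 1$.
```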
m
84c13356d5ac362d95d3dabbd93f9b10
This algorithm is similar to the one proposed in {{cite:a37ccccab5d762b3632ecf024cbcf78d99b16f18}}, {{cite:9967208c46bfce97b41d589b39720d0128581af1}}, with the presence of the homogeneous stepsize (REF ) as the key difference, together with the replacement of a negative stepsize by the last available positive stepsize. The convergence of the method is not affected by these choices, since {{formula:5c2211b9-4d1e-435a-832f-4773ab15917d}} stays uniformly bounded, i.e., {{formula:823bb193-4085-43ba-9e1b-7291b756b995}} for all {{formula:d156e2dd-ab74-4768-a33d-63f243ee4e05}} . Therefore, the proof of global convergence of Algorithm  can easily be adapted from {{cite:9967208c46bfce97b41d589b39720d0128581af1}}. While the convergence is not affected, choosing the homogeneous stepsize as the starting steplength in the nonmonotone line search might lead to a smaller number of backtracking steps compared to classical BB stepsizes.
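A hedged sketch of the stepsize safeguard described above: we use the classical BB1 quotient in place of the homogeneous stepsize (REF ), which is not reproduced here, and fall back to the last available positive stepsize whenever the quotient turns non-positive; the quadratic test problem is an arbitrary choice of ours.

```python
import numpy as np

def bb_gradient(grad, x, iters=100, alpha0=1e-2):
    """Gradient descent with a Barzilai-Borwein stepsize; a non-positive
    BB quotient is replaced by the last available positive stepsize."""
    alpha = alpha0
    g = grad(x)
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        raw = s @ s / (s @ y) if s @ y != 0 else -1.0   # BB1 quotient
        if raw > 0:
            alpha = raw        # accept the new stepsize
        # else: keep the previous (positive) alpha, mirroring the safeguard
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x          # minimise 0.5 * x^T A x
print(bb_gradient(grad, np.array([1.0, 1.0, 1.0])))
```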
m
026544981d7fbe5ca293852611e2ab7f
The majority of EicC SIDIS pseudo-data points are essential to improve the DSSV14 {{formula:585aa4d8-7bb4-4c92-9de5-e0f49ae43fa4}} distribution. Fig. REF shows that the first optimized eigenvector pair (EV1) dominates the {{formula:434563de-3735-4c7c-b2e2-cfbc7e010435}} error band, while Fig. REF clearly shows that EV1 contributes strongly to most of the SIDIS pseudo-data points. Therefore, it becomes apparent that the EicC SIDIS pseudo-data are quite sensitive to the {{formula:dfbaf792-212c-40ff-9742-464da8b9a575}} distribution. EV1 also contributes to the {{formula:56d49a2e-0d7d-41fb-bfdf-d8435c511c33}} error band, but, as shown in Fig. REF , it is not as dominant as for the {{formula:e2c3646a-c599-4bda-8fbb-494e97d0f9d9}} . This difference results from the fact that the absolute value of the down-quark charge is half that of the up quark. The majority of SIDIS pseudo-data points can constrain the {{formula:fdb57dc0-5475-4abd-8e49-1da9b81095af}} and the {{formula:f96b4f66-3977-4171-9d50-9f99a4d900c6}} simultaneously. We expect that the {{formula:f95205f3-ae86-4fc6-ab00-4f24d33fee0d}} distribution will be particularly constrained by the future EicC Neutron+K{{formula:bb8f3222-77e2-4c35-977d-ea4debdf4e45}} data. As shown in Fig. REF , the third optimized eigenvector pair (EV3) largely covers the {{formula:353804a2-3692-41df-b126-99fe5e7f270b}} error band for {{formula:aa1e91a3-bae6-4d97-9d8e-3c87bc76a1c9}} . We also notice in Fig. REF that the Neutron+K{{formula:90e5f233-3d55-4fc3-8bf1-e8fa1c616fd0}} pseudo-data points receive large fractional contributions from EV3 for the same {{formula:723f3031-eeb7-4431-ac7f-fcde89a26ca3}} region, hence indicating that the Neutron+K{{formula:460bcf49-c7e7-49a4-846d-ebb33cd10c4a}} pseudo-data are sensitive to the {{formula:6c253562-798b-4a2e-906f-44f818ff17dd}} . Given the flavour content of the K{{formula:7a381f8c-de7b-4c52-a954-5291926ab400}} meson, this is to be expected, as the Neutron+K{{formula:3cbd6322-3d45-4743-a3da-06678fce0eab}} data should be able to probe the {{formula:47748694-aef2-43f4-a08f-6f28ad6b7298}} distribution inside the neutron, which, due to the isospin symmetry, corresponds to the {{formula:a422cf45-77ed-481c-95e3-a3b0f787967d}} distribution inside the proton. Both EicC Proton and Neutron kaon SIDIS data will be important for constraining the {{formula:56002d87-2e61-412d-85a8-359588e80c7f}} . In Fig. REF , we observe that the fourth optimized eigenvector pair (EV4) dominates the {{formula:012ee69c-4604-4755-bddf-c98af88ea6ff}} error band. Fig. REF shows that EV4 is particularly sensitive to both the Proton and Neutron+K{{formula:47837600-0f67-4e42-aa91-dbbd251f368b}} pseudo-data sets. This is consistent with the quark model picture, where the K{{formula:d29c75ba-d143-4701-8f7d-bfa49cb7302f}} meson is considered to be composed of {{formula:2cadae8e-a250-4a90-b566-c2623a2f47b5}} and {{formula:ff1a2cd9-7fbe-4988-ab45-446c45df18c0}} quarks. In the naive parton model picture, as discussed in Appendix A, one could easily conclude that the kaon data must play a decisive role in determining {{formula:38690ecd-c1e8-4b8c-948e-db43d343cec6}} . To check this, we show in Fig. REF the result of another ePump optimization study in which only SIDIS Kaon pseudo-data are considered. Unexpectedly, there is no single eigenvector pair dominating the PDF error band of {{formula:19fc3190-1639-4b1d-8973-cdfcfa1286d5}} when only the SIDIS kaon pseudo-data sets are included in ePump-optimization.
By the nature of the Hessian profiling method, the eigenvectors are orthogonal to each other. This implies that those SIDIS kaon pseudo-data sets provide information about {{formula:622cc5cd-5050-4b68-a6b0-98f4b01e5f85}} at different {{formula:98f4b01e-5f85-4b68-8cf1-109e055bead2}} values. By contrast, Fig. REF shows that the eigenvector pair EV4 dominates the constraint on the error band of {{formula:c8085b86-a73b-41c3-bf5a-58446b8a0074}} when all the DIS and SIDIS (pion and kaon) pseudo-data are included. Hence, there must be some other pseudo-data sets that provide an additional constraint on {{formula:50df0b2f-a1a0-4510-b692-f64afbf144af}} via some underlying correlation present in the original DSSV14 PDFs. Since the theoretical predictions used in this study are generated with DSSV14 PDFs, it is possible that the underlying correlation comes from the original setting of DSSV14 PDFs. The identity of Eq. (REF ) implies a correlation between {{formula:5deae26f-c3db-4772-bcb5-7a9296394e6f}} and {{formula:fea05976-2ed0-4bbf-8005-86891674cc7e}} , {{formula:a11364c9-d3e8-466c-a043-29345c42d1e1}} , {{formula:7fe3046a-0312-4dd2-a5ff-a5cadbd87a28}} and {{formula:4c053bba-f974-4e85-9eee-0312e94c9b79}} imposed in the construction of DSSV14 PDFs, such that the pseudo-data sets sensitive to {{formula:b5a577ff-c097-48f4-8d45-f8016be3b70d}} , {{formula:b633328f-0f3d-4fa7-bbc1-c6944745d1aa}} , {{formula:97a6f703-94be-497b-a166-9f1421d39fc5}} and {{formula:ba4eacfd-e843-4f64-84dd-8adcc3590c75}} also provide information for constraining {{formula:7fb81385-e603-4350-8cc1-4bdaeb74858f}} . This explains why adding those non-kaon data can further constrain {{formula:2ca003be-91d7-4906-8c42-76d5a6082a7d}} when using the DSSV14 PDFs. Fig. REF (d)-(f) indicate that the 8 SIDIS pseudo-data sets will constrain {{formula:e9d78375-8076-496a-a917-594d7807996b}} in different ranges of {{formula:6e779b03-a34d-44a1-81a3-6bbe8f6ed478}} , and it takes mainly three eigenvector sets (EV10, EV13 and EV15) to represent the error band of the {{formula:da7dfcb0-d782-4ee7-9d48-bb9239efd5b4}} PDF in DSSV14. Furthermore, Fig. REF shows that the leading data sets contributing to the eigenvector sets EV10, EV13 and EV15 are the kaon data. This kind of information cannot be read directly from Fig. REF . Although one could perform ePump-updating by adding only one pseudo-data set at a time to study the impact of each individual data set, one can use ePump-optimization to quickly gain information about the complementary role that each data set plays in constraining a certain flavour PDF in a given {{formula:c674da55-57c2-4e8e-b001-becd81c53bfc}} region after ePump-updating. The SIDIS EicC pseudo-data sets that provide the leading constraints on specific eigenvector pair PDFs can be read from Table REF . For {{formula:1b47c4da-1d35-4ec4-b1fe-d4c29c78bc8c}} , the EicC SIDIS Neutron K{{formula:a72585a5-c42a-4b31-afc5-17a35cff69cd}} or {{formula:087b96b6-9011-4c8e-b047-99596fcac5b1}} data will be important. In Fig. REF , the uncertainty band of {{formula:2fc7030c-09b2-42d8-815c-342999a06d35}} , as a function of {{formula:aabbcd6a-5e4d-431c-8dfd-10d474a46fd4}} , exhibits sensitivity to EV6 and EV10. At the same time, Fig. REF shows that the Neutron+K{{formula:e7c53cb1-7ddc-4bc6-a55c-1bc0cf2dd9e7}} pseudo-data provide the leading constraint on EV6 for {{formula:fc9adbbb-ad95-4c9f-9c66-2ccc6578c3b7}} , and Fig. 
REF shows that both the Neutron+K{{formula:d1ae7648-560d-45f9-9d90-35f788ccd2d0}} and Neutron+{{formula:e9c0125a-d71d-4564-abd3-545fbaa4e8f8}} pseudo-data also constrain EV10. Hence the {{formula:5d60b213-6429-411b-ae79-3ea0f80c7038}} distribution is mostly constrained by the Neutron+K{{formula:c8bc691d-1e3a-4634-a17f-caf2e8f0b4e2}} data, while the Neutron+{{formula:b5e41f95-6dc8-4eaf-96ed-4011576fa4f5}} data also provide information on {{formula:02e70afb-50f8-43de-ba4d-a11ab0c87c9e}} with {{formula:c8509475-055d-43fd-a6f8-5c04baae3ba8}} . As for the {{formula:12fd0c68-92b5-4b90-a727-48464d2e2788}} , none of these fifteen optimized eigenvector pairs contributes a large proportion of the error band. This is expected, as the EicC SIDIS programme is better suited to investigating the “sea-quark” sector than to exploring the {{formula:53d5317f-e861-41df-b171-e7138a09807f}} distribution, which dominates the small-{{formula:5dd19290-d7dd-4278-b8db-398bd1797c85}} region and can be effectively probed at the EIC {{cite:f49f7e648eb9f7129405bf431d835853378286ee}}.
r
6dcefc66572dc7604729c92ebae6e684
The division into three groups with either mostly winds and weak accretion or mostly accretion and weak winds or winds and accretion roughly equal is an intriguing result. In particular, the existence of a significant group of objects disconnected from the cosmic web with {{formula:2db72d55-7004-4f69-8c64-df557a012212}} is somewhat surprising, although cosmological simulations do not rule out such cases in the local universe ({{cite:ece1f82f9f5df79ef7b5f0aee1ab269e3c10b763}}; {{cite:f5578dbab71bc6ae2b33d71d66367132cd6c6aad}}; see {{cite:ec9ba82b2cd58abdeb3dd647f5469d1ff4954cbd}}). Our results do not confirm the observational finding by {{cite:ec9ba82b2cd58abdeb3dd647f5469d1ff4954cbd}} based on an investigation of global metallicities, masses and star formation rates that galaxies evolve along the “equilibrium relationship” {{formula:6e8a65c5-6105-4cdd-a6aa-3072679661af}} = {{formula:41c296cf-a472-44b9-9d21-38c80ee931e0}} + (1-R) by {{cite:654020cd0e642e95c99c8e440f54e49dc0ec49c1}}. It is interesting that {{cite:ec9ba82b2cd58abdeb3dd647f5469d1ff4954cbd}} applied the same chemical evolution model as this study; however, an important difference is that in our investigation the spatially resolved information about metallicity, gas and stellar masses is used and the gas mass profiles are obtained directly from radio observations, whereas in {{cite:ec9ba82b2cd58abdeb3dd647f5469d1ff4954cbd}} the information about gas masses is obtained indirectly from the Kennicutt-Schmidt law and H{{formula:b89ddec0-6b75-4f27-995c-c76fde2c4355}} observations. In addition, {{cite:ec9ba82b2cd58abdeb3dd647f5469d1ff4954cbd}} do not disentangle the effects of galactic bulges by a bulge decomposition of the stellar mass profiles.
d
9d235244d7a131f4b830d875133b848a
In Section REF we described the Bekenstein bound puzzle and gave possible resolutions to it, and we discussed the puzzle in the context of islands in evaporating black holes in detail. We did not, however, give a complete argument for how the early-time representative {{formula:49c58c22-ce25-42fa-b3d3-7919f567563d}} of the island can avoid the implication that it violates the Bekenstein bound with superadditive renormalised entropies, as it seems it must. For future work, we believe that the islands in AdS{{formula:13833fb4-45e9-4777-a237-898b8f723d3a}} black holes found in {{cite:c31d0858999e6d8d99310b7d73735a89754d167e}} would be a fruitful arena in which to explore this puzzle in explicit calculational detail.
d
5ce3baf7ab05c6a96d6c82b00196829a
Recent years have seen a nascent, but growing interest in leveraging deep learning for RF applications. One such application is “spectrum sensing”, where DNNs are trained to classify the modulations of signals in an RF environment {{cite:7a143efee3e061c9354204f64809ceed5a97ee2c}}, {{cite:adfe71c63392d9c5e83553dfd8d02f1c54fe29c1}}. Neural networks have also been trained to demodulate RF signals {{cite:f01c657263ec7b295265b3f454e0d9e91793aa1f}}, {{cite:6f843ebc27dea99c245e93a5e5473d2465fbd11c}}, {{cite:00b93e25ac8a1d29fbc5551aef87b4f3674f3ae8}}, {{cite:1f8c9597553242646161885df15c3f39939b315d}}, {{cite:00beeff9bcabc2a430daa18ec26510f9e17b6a42}}, {{cite:cbdeed5db2a2b3472559b4ead93f171dd30f3726}}, and even for end-to-end communications systems, although success of these efforts has been mixed {{cite:acdc469b8667b14fae83278da5ea7cd49f66243b}}, {{cite:b276d41e34299673c498542f8ab33e66956b015a}}. Despite these early efforts, deep learning in RF applications is still a relatively unexplored area, and much remains to be learned about what kinds of model architectures are well-suited to the RF domain and what kinds of problems DNNs are apt to address.
i
05e68e532cc7a2f2bcb5d6fc12e6751b
Since the introduction of AlexNet, roughly a decade ago, Convolutional Neural Networks (CNNs) have played a significant role in Computer Vision (CV) {{cite:540fd20e02084b1c7aee91f808c47e40bd5c9c10}}. Such neural networks are particularly well-tailored for vision-related tasks, given that they incorporate several inductive biases that help them deal with high-dimensional, rich input representations. As a result, CNNs have found applications across a large variety of domains that are not per se restricted to the realm of natural images. Among such domains, the Digital Humanities (DH) field is of particular interest. Thanks to a long tradition of works aiming to integrate advances from technical disciplines into the Humanities, the DH field has been serving as a challenging real-world test-bed for the applicability of CV algorithms. It naturally follows that over the last years several works have studied the potential of CNNs within the DH (see {{cite:a3439a84ed624605fa05873132c50912a565356c}} for a survey of the topic), resulting in a significant number of successful applications that range from the classification of artworks {{cite:6c1020c6c768070e0c95a0f20e785b70c70b750d}}, {{cite:cd12aad54b066012278e553e83f713560ecc6d50}}, {{cite:f67d16ca01cca4ae906481c7ff8cb5e39279d2b1}}, {{cite:a504cb4272377b58e757b3347d5106299cfcf88c}} to the detection of objects within paintings {{cite:7a213bbe89f1a5d263edd55dfbcc834f0a9a4a27}}, {{cite:cab41277de6f5d4b7442661dcd9521007c6ae3f9}}, automatic style classification {{cite:00c4093ea6a62137ba91614054c5d2a0c03651bd}} and even art understanding {{cite:7f5433fb531d5d80b34ab570823ea563500d3af6}}.
i
17f8411304d7ad584ed477bb15e78782
In general, good performance of DNNs on some tasks is usually associated with their ability to change topology {{cite:cbd4dec77933eaf0dff1fb6ba87a0cdc150d7cbd}}. However, when one wants to use the latent variables or codes of DGMs for further tasks and not just for generation, these changes in topology might become an issue. For instance, we interpret the misfit "jumps" seen in gradient-based inversion with SGAN (as seen in Fig. REF c for case SAnnn) as resulting from the "gluing" or "collapsing" in latent space of holes in the real manifold—either caused by an induced change in topology or by a high nonlinearity in the SGAN generator. Some studies have even suggested that if one wants to obtain useful geometric interpretations in the latent space (e.g., to perform interpolation), the activation functions should be restricted to smooth ones {{cite:41256a27df55303dfcb0fb350a95fc8fd1b58089}}, {{cite:810ac409d7cfd9c4590920c04e144f4c452fadf6}}; this means, e.g., not using the ReLU activation function, which is generally recognized to result in faster learning. In contrast, in this work we do consider ReLU activation functions but control the changes in topology by means of a combination of {{formula:5114b060-aa86-44dd-bb85-6eebd68e9f6a}} and {{formula:17dbf390-9942-4263-8d47-63f86bb19e4d}} ; whether this nullifies the advantages of ReLU is still an open question. Note, however, that control of induced changes in topology and high nonlinearities (as in our proposed approach) might in general be useful for any inversion method that relies on the concept of a neighborhood (e.g., MCMC and ensemble smoothers).
d
8135b37f0693fe57c9971b0320eba052
Entangled photon pairs are typically generated via spontaneous parametric down-conversion (SPDC) {{cite:783aee17ae615888843b98a5d8a325d8f36fd52b}}, {{cite:680694ace3a8e541d7eb0b03191e200b576e88d0}}, a second-order nonlinear optical process that is tantamount to time-reversed sum-frequency (SF) generation {{cite:a11a654df185b0403155fadd6cdcc4201f967ed8}}, {{cite:6c8ed82753b4e53daa1db635e3d25236e9fbae30}}, and which conserves both spin and OAM. However, the generation and manipulation of entangled light is hindered not only by the low nonlinear response of conventional materials, but also by the need to collect and direct the entangled photon pairs—produced upon phase-matching in separate bulk nonlinear crystals—into scalable optical components that enable quantum logic operations. Theoretical explorations of SPDC by waveguided photons have revealed its feasibility in the presence of material dispersion and loss {{cite:87361c52a293fe1de455f4cdd8819964c3cc5275}}, {{cite:db724c7017f117016db779eac040061e2e2f70ba}}, while experimental efforts to develop on-chip sources of entangled photons include demonstrations of SPDC in periodically poled LiNbO{{formula:63d48cb8-e12f-43cf-a778-88640d1904eb}} waveguides {{cite:281703e95eaa5b693aef97eedea223389f486e6f}}, {{cite:31e0a08a2d567db0cbe1d9aaedb375b419330bff}} and in a microring resonator {{cite:188275d6017ecd1de988dcbe48836598f5100d81}}, {{cite:d4103b3693b3b3f2ed0f418396583fb346889f3f}}. Additionally, the SPDC process has been recently proposed to conserve the in-plane momentum in graphene ribbons containing an electrostatically induced p-n junction where plasmonic modes are entangled {{cite:57758642e366d22dfe7dd96ece895a62bf2989e3}}. {{figure:d717f9ac-0407-4bf2-af4d-d4618243ab9f}}{{figure:ba82f4e0-be7f-4b61-b161-b3039cd18682}}{{figure:7b153315-76c6-4b21-99e8-74d7b434b87d}}
i
3146ac99ee7020776fb29e5b6e0ecb5b
In the experiments we used the RSICD dataset {{cite:a91dedc3ded13cfd8513e19fde2f82fe127ea240}}, which includes 10921 images of 31 classes from aerial orthoimagery. Each image has a size of {{formula:249f9425-854a-4987-a16d-a5d747a50dca}} pixels and has 5 corresponding captions. Only one randomly selected caption for each image was used during training. The dataset was split by random selection into the train, query, and retrieval sets (50%, 10% and 40%, respectively). The image augmentation was performed by applying a Gaussian blur filter with kernel size {{formula:ee0243c3-6342-4bef-9522-85f2388b24ad}} and {{formula:f2c3a803-928c-4e47-8f8f-4c4a5539c2a5}} , random rotation in the range of {{formula:2580825d-b696-4415-bd6c-63308aca6557}} , and {{formula:b4035924-37ae-49ef-8de2-12375a473d86}} center cropping. For the text augmentation, we selected a rule-based algorithm suggested in {{cite:8346a0fa7311b16b42592ca198046c9c30c817d6}}, where the noun and verb tokens are replaced with semantically similar tokens. In the feature extraction module, we used a pre-trained ResNet {{cite:cfa99049067a73968324d7b04a5debfd00f9c999}} network for {{formula:6cd9d812-725a-46d7-8b0f-73165e526062}} ; the classification head of the model was removed, and the image embedding size {{formula:f3727e3b-c5b8-482e-8c06-de86f568ada0}} was set to 512. For the text embedding we used a pre-trained BERT {{cite:2502f12e107df4f149da0114c7e5294cd9f161a9}} language model provided in {{cite:0db9858ac49a2df43434ae0b883948c48997eebc}}. The final size of the sentence embedding {{formula:6a57637f-6df6-4d9e-a130-d5934d82b204}} was obtained as 768 by summing the last four hidden states of each token. The internal parameters of the image and text encoders were kept fixed during the training of the hash learning module. In the hash learning module, the networks {{formula:eea9305d-8f7e-4027-80e6-54611c900406}} (image hashing) and {{formula:8590dc87-2a50-4a0b-ae4e-ec2323f9b4a5}} (text hashing) are fully connected networks with 3 layers and a batch normalization layer after the second layer. For the discriminator {{formula:bbcc8395-59e8-4b8b-8c97-9b861c6c9338}} we selected a simple 2-layer fully connected network. ReLU was used as the default activation function for all layers, except for the last layers of the image hashing and text hashing networks, which use the tanh activation function. The hyperparameters {{formula:54139870-6972-45b8-bce9-6dc3f7ca43dc}} , {{formula:9b8c0777-6c91-4f84-b78e-e7dbcd105cc6}} , {{formula:da2f289d-4a79-41b8-aacd-619e44290fe0}} were set to {{formula:433112c4-7cf5-4d45-a041-f66430def450}} , {{formula:bd7b05a8-870b-4600-95ad-370bf2485c63}} , {{formula:9f98d2c6-fd45-49f5-9531-d376530427ec}} based on a grid search strategy. Both intra-modal weight coefficients {{formula:d01cf276-31a6-4b08-8f7d-723b373011ec}} and {{formula:aefcc85e-4c32-4ddf-a2df-5dace8c288b3}} were set to 1 (see (REF )), while the batch size was set to 256 and the total number of training epochs was 100. The initial learning rate was set to {{formula:2b18fd92-fce5-46e4-b818-79b00ab3157b}} and decreased by one fifth every 50 epochs. The Adam optimizer was chosen for the {{formula:d9fb2c6d-a2df-4862-9c65-33778c14877f}} , {{formula:5ad018b8-e447-4a48-9b48-3d2eb961bd08}} , and {{formula:ae1fae3e-d32e-4f93-a5f0-d181ea9c10b9}} networks. 
We compared the performance of the proposed method with three state-of-the-art methods: i) supervised multi-task consistency-preserving adversarial hashing (CPAH) {{cite:431d830913bd79f96c0d98002c146db79a07822f}}, which separates the feature representations into modality-specific and modality-common features, exploiting label information to learn consistent modality-specific representations; ii) the unsupervised method DJSRH {{cite:6ab1ebbf8022e282458b3c4fc1148cfb9154f669}}; and iii) the unsupervised method JDSH {{cite:bad7e0b1dc2129ac38465cf6295392038eac6e55}}. For a fair comparison, we trained all models under the same experimental setup. Results for each method are provided in terms of: i) mean average precision (mAP) and ii) precision. The mAP performance was assessed on the top-20 retrieved images (denoted mAP@20), while precision was evaluated by varying the number {{formula:14d7a916-137a-4e79-9abc-3198690ccdf7}} of retrieved images in the range [1-100].
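For reference, the mAP@20 figure of merit can be computed from the learned binary codes as sketched below; the array names and the random toy data are our own stand-ins.

```python
import numpy as np

def map_at_k(query_codes, retr_codes, query_labels, retr_labels, k=20):
    """Mean average precision over the top-k Hamming neighbours.
    Codes are +/-1 arrays; two items are relevant if they share a label."""
    aps = []
    for q, ql in zip(query_codes, query_labels):
        ham = 0.5 * (retr_codes.shape[1] - retr_codes @ q)   # Hamming distances
        top = np.argsort(ham)[:k]
        rel = (retr_labels[top] == ql).astype(float)
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        prec = np.cumsum(rel) / np.arange(1, k + 1)          # precision@i
        aps.append((prec * rel).sum() / rel.sum())           # average precision
    return float(np.mean(aps))

rng = np.random.default_rng(0)
qc = np.sign(rng.normal(size=(100, 64))); rc = np.sign(rng.normal(size=(400, 64)))
ql = rng.integers(0, 31, 100);            rl = rng.integers(0, 31, 400)
print(map_at_k(qc, rc, ql, rl, k=20))
```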
r
edfb900bf335752726d8868cf88f3296
where {{formula:98f77c0e-5fc3-48b6-8960-5b40b35d4f4a}} is the time interval between two atomic configurations, and the summation is performed over the nearest neighbors located within {{formula:5d17208d-3e79-42c7-bbe8-cc1167486364}} from the position of the {{formula:32ac7a4f-c8ed-46a3-b9cf-3ea22558a175}} -th atom at {{formula:8c51ee5a-4196-4541-ba05-dfee5ad6dd95}} . The nonaffine quantity defined by Eq. (REF ) was originally introduced by Falk and Langer in order to accurately detect the localized shear transformations that involved swift rearrangements of small groups of atoms in driven disordered solids {{cite:221b34adac3f2196b625fc93e907b7118953ade5}}. In the last few years, this method was widely used to study the collective, irreversible dynamics of atoms in binary glasses subjected to time periodic {{cite:0665c7aca3e99c0ddc4dd55b27fb57ccbe91acfd}}, {{cite:e2d42df9978e570622870d3cb8cbd3127d716649}}, {{cite:75485bd22b53bf4ebd6d53d18719e9d5e5f7c5b9}}, {{cite:535d99d4c8114e17cbb79d85f94bfffdd6f26975}}, {{cite:c2def74f490c35592f2946a46b5c211d2ee2659f}}, {{cite:0bad62a1c51937ad7c0f78317a4e11059f6f4482}}, {{cite:3f3aad0ef8422d02a4982a032097f8820f30a563}}, {{cite:7565f8fd9befff9a4bb7483953fecbff743e7483}}, {{cite:2e88e3239a31394fc1b5be4ba0dbb0085ccaf1f8}}, {{cite:f2ddc119823690fe0b7e6084e8d3f581ff5402b6}} and startup continuous {{cite:c9e6494a0f4088ace9e94967a48fd47d8701b0d6}}, {{cite:6aa08cdb123425f1bb0877c82ceb2d4f4395febb}}, {{cite:46159d176ef28d7f9cc69c6efc9b0cf1d79f700c}}, {{cite:48f2c126aef6028739170b929edacb882eff0ded}}, {{cite:5ebe212ea1378cc7755a9ef28ff3c261eed7eaf7}}, {{cite:2b55278ad5682274bc84dfbb5aa60145082e693b}}, {{cite:a13199b85dddc0289646624ff716e0270d8be4cb}} shear deformation, tension-compression cyclic loading {{cite:e5bd88ed91143bc546983473a1eb01688f3d587f}}, {{cite:0391c06a79c0d01e221fcec3a823af77589475e0}}, prolonged elastostatic compression {{cite:686e931a92a2a9e62e662324cc25e64eb7e1dcf3}}, {{cite:fff6b32874fc31fd2826225ed6fda1e666f0cb10}}, creep {{cite:9ba9d8cd1bef608aba4034be216bd457cca9f72b}} and thermal cyclic loading {{cite:6a59c41ad5b9dbd33e0060f974f3895046f711fd}}, {{cite:1413d6a12e20e3e13a48b9fd66481cbe5af5eb04}}, {{cite:676861ee93431b85f6a2b31f7c18590bd9622935}}, {{cite:545ed2008376c58046439615c86130afcd17c365}}, {{cite:3d67d288cc323a06b3436568383f45401a967f9c}}.
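In practice, the nonaffine measure of Eq. (REF ) is evaluated per particle by first fitting the best local affine deformation over the neighbourhood and then taking the residual; a hedged numpy sketch follows (the cutoff, particle data, and function names are illustrative assumptions of ours).

```python
import numpy as np

def d2min(r0, r1, i, rc=1.5):
    """Falk-Langer nonaffine measure for particle i between configurations
    r0 (time t) and r1 (time t + dt), using neighbours within rc of r0[i]."""
    d = np.linalg.norm(r0 - r0[i], axis=1)
    nbrs = np.where((d > 0) & (d < rc))[0]
    D0 = r0[nbrs] - r0[i]          # bond vectors before
    D1 = r1[nbrs] - r1[i]          # bond vectors after
    # best-fit affine map (acting on row vectors) minimising sum_j |D1_j - D0_j J|^2
    J, *_ = np.linalg.lstsq(D0, D1, rcond=None)   # least-squares solve D0 @ J ~ D1
    return float(np.sum((D1 - D0 @ J) ** 2))      # residual = nonaffine part

rng = np.random.default_rng(1)
r0 = rng.uniform(0, 10, size=(200, 2))
r1 = r0 + 0.01 * rng.normal(size=r0.shape)        # small random displacements
print(d2min(r0, r1, i=0))
```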
r
b4fd8148efd18128e38a2a1ca57e76ae
In methods GP-AIC and SRP-AIC, the criterion used to optimize the {{formula:e8850c69-a530-4f2e-83b3-6e314ebe1698}} values is another formulation of the Akaike information criterion ({{formula:dda04c4c-6b8c-4ab1-89f7-3e5412e6c1fe}} ), adapted to penalized splines, as proposed by {{cite:203ccf6282294953bc1a65e214060049dde878e2}}: {{formula:2598786a-c523-45e7-aad7-376f4dc4dc74}}
m
b597995dbd0e0ac952e902320490dbf3
We compared our Aspect Controlled Summarization (AceSum) model with several extractive and abstractive approaches. Traditional extractive systems include selecting as a summary the review closest to the Centroid {{cite:1133128f3dcfb1265e7e27560a5bf0fd6efcdad0}} of the input reviews and LexRank {{cite:874cea6523f53b777e7d7e86db3b7d9088f1e1f1}}, a PageRank-like algorithm that selects the most salient sentences from the input. For both methods we used BERT encodings {{cite:e7f0ce738acfb0735bfe7bd3f7791e638f138470}} to represent sentences and documents. Other extractive systems include QT {{cite:291b97aed9c51718a3b74c4ae5b25db647564ca1}}, a neural clustering method that uses Vector-Quantized Variational Autoencoders {{cite:d37199ad3fd15d3d97064255ee09a39dbb07f59b}} to represent opinions in quantized space (we report results for QT using our seed words, which are human-annotated; we also present results in the Appendix with their seed words, which were automatically induced), and AceSumExt, an extractive version of our model that uses sentences ranked by our controller induction model as input (truncated up to 500 tokens) to LexRank. Abstractive systems include MeanSum {{cite:8ed4d0f136ef8350aaf534ebeedbee64e9939f88}}, an autoencoder that generates summaries by reconstructing the mean of review encodings, Copycat {{cite:97d58096c9eb624789b94ce2ef09a378e8267473}}, a hierarchical variational autoencoder which learns a latent code of the summary, and two variants of T5 {{cite:249c367d39c92d1c0da6e091b83d87e525e7eaaa}} trained with different synthetic dataset creation methods. For T5-random, summaries are randomly sampled {{cite:97d58096c9eb624789b94ce2ef09a378e8267473}}, whereas for T5-similar reviews are sampled based on their similarity to a candidate summary {{cite:817dce3e6e297d9b8e79069e7d3b60a24bcb79bb}}. Finally, we compared against two upper bounds: an extractive Oracle, which selects as a summary the review with the best ROUGE score against the input, and a Human upper bound, calculated as inter-annotator ROUGE. Examples generated by our model are in Table REF and the Appendix.
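Since several of the baselines above reduce to graph centrality over sentence encodings, a minimal continuous-LexRank sketch may be useful; the random embeddings below merely stand in for BERT encodings, and the damping and iteration count are illustrative choices of ours.

```python
import numpy as np

def lexrank(emb, damping=0.85, iters=100):
    """Continuous LexRank: PageRank over a cosine-similarity sentence graph;
    returns one salience score per sentence."""
    X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    S = np.clip(X @ X.T, 0.0, None)          # non-negative cosine similarities
    P = S / S.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    n = len(emb)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P.T @ r)   # damped power iteration
    return r

emb = np.random.default_rng(0).normal(size=(12, 768))  # stand-in for BERT vectors
scores = lexrank(emb)
print("summary sentences:", np.argsort(scores)[::-1][:3])
```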
r
8af5efbe24bc493ce5cf6d027e81ebe3
In Sect.  we report the results of simulations in which this system starts from a flat-space configuration with {{formula:6cf01554-6503-4961-8c38-a5d8c6205914}} for all {{formula:65b0b1f7-4f8d-4000-8be6-f2545ca93d60}} ; at each step, one of the {{formula:5a5c8434-1c4e-4762-b288-0c40aed88156}} is randomly chosen and varied as {{formula:0372763b-4a15-44ab-a2e6-4871b24bc113}} . The variation is accepted or rejected with a standard Metropolis criterion {{cite:75fb2a940a5e3e8047de65972ec4f4581b989aa3}}: it is always accepted if {{formula:d8cc2ad5-2cb9-40f8-a98a-700cfaa15ee8}} , and otherwise accepted if {{formula:8a45c940-dd87-4bcf-9e6c-1fdd4f39ddbc}} , where {{formula:1da645c8-9007-4eef-9dc1-cb8a80066fd8}} is a random variable in the interval {{formula:df660708-7848-4f3f-9f44-df3a9643e7d2}} .
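The update rule just described is the textbook Metropolis step; a schematic numpy version follows, where the quadratic action S is a placeholder of ours for the model's actual action and the proposal width delta is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
S = lambda phi: 0.5 * np.sum(phi ** 2)   # placeholder action, not the paper's

def metropolis_sweep(phi, delta=0.5):
    """One sweep of single-site Metropolis updates on the field phi."""
    for _ in range(len(phi)):
        v = rng.integers(len(phi))                 # randomly chosen variable
        proposal = phi.copy()
        proposal[v] += rng.uniform(-delta, delta)  # propose phi_v -> phi_v + d
        dS = S(proposal) - S(phi)
        # accept if the action decreases, else with probability exp(-dS)
        if dS <= 0 or rng.random() < np.exp(-dS):
            phi = proposal
    return phi

phi = np.zeros(16)                                 # flat-space start
for sweep in range(1000):
    phi = metropolis_sweep(phi)
print("mean phi^2 after equilibration:", np.mean(phi ** 2))  # ~1 for this S
```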
i
d8852c83ce496c18bfdca2304b317aa6
Despite our best attempts at a fair comparison to the baselines, we recognize that some of our baselines used a different setup. For example, AFSD {{cite:94128cb425c79264410f3c9464a60a0742ef2356}} took lower-resolution input videos (96x96) when extracting I3D features. PBRNet {{cite:fc391044040e7cb25bdc9cde0b5844d89cf85caf}} adopted a 2D feature map rather than a feature vector for each time step. A2Net {{cite:4fb55a6e0808126d2defb5fc3d7c98efdb16c9b0}} did not consider score fusion. Nonetheless, our experimental setup follows previous works {{cite:7df0e2a22d4067ac20971365ab16f2bc4e6783e0}}, {{cite:39d82c6c293988382ebc62921cff3c30f639f030}}. Our intent here is to compare our results to the best results previously reported.
r
0de7d7d0b35b323cdcadc25a8bf3b29d
Finally, since the radio emission of J0849+5108 arises from its jet, the jet activity and structure could directly lead to the QPO signal. The source is a target in the MOJAVE survey, and multiple Very Long Baseline Array (VLBA) images of it from 2011–2016 are available {{cite:3f9129dc12855240e6543da1446d55a8df1ec581}}. We mark the peak intensities from the observations in Figure REF (purple circles). As can be seen, their variations are generally consistent with those of the OVRO light curve. We checked the individual VLBA images. Although no images were taken at any peak of the QPO variations, a jet structure is seen to be nearly always present in the images, extending from the core to 3–4 milli-arcseconds. No evidence is seen showing the QPO variations as the result of new, emerging radio components. Recently, a helical jet structure has been invoked to explain the 34.5 day QPO observed in PKS 2247{{formula:92779466-f8e7-4a47-8fec-cc736e4a51a8}} 131 {{cite:6aac9512b6ba22717a85ee1cb3fcd911e9c41c19}}. As an emitting blob in a jet moves along a helical path, our viewing angle to it changes, giving rise to quasi-periodic flux modulation due to the Doppler beaming effect. Helicity is likely a natural feature in magnetically dominated jets {{cite:ec9f304afcc5dd11d4514af46d83bcafda6f6ec3}}. For the case of J0849+5108, if we assume the parameters used in {{cite:4883e3dc27479f223fa86aea2c99706bf582ab36}} for modeling the spectral energy distribution, the bulk Lorentz factor {{formula:03d7ae64-5572-408e-84b5-48ddfdc33ba9}} , the pitch angle (between the emitting blob's motion and the jet's axis; assumed to be half of the opening angle) {{formula:d1eb9c5b-ad1c-4c38-82df-89bc06a6e9ff}} rad, and the viewing angle {{formula:8796da7b-47f4-4647-b3ac-6ec6a241dd5f}} , the jet observed at the radio 15 GHz frequency would have moved {{formula:e4d131e1-91ca-47ec-89f2-94a2da727e66}} 200 pc in the host galaxy over 21 cycles (the local cycle period would be {{formula:e6af74f6-9c9c-4c72-b088-efbcef4d6776}} 33 yrs; {{cite:bcf1903fcd74674eda9442b5aa6f9ffdef1fcb91}}, {{cite:6aac9512b6ba22717a85ee1cb3fcd911e9c41c19}}). The distance of {{formula:d39c6df6-1124-40d0-abce-47be376ece69}} 200 pc is not unreasonably long, as the jet in the nearby AGN M87 has been seen collimated up to a distance of {{formula:c128f1a5-2d78-4600-9eb1-99a6af1d5445}} 300 pc (e.g., {{cite:568f998cc3f34eadb1c09191279379da416f65de}}). Also, we note that high-frequency (22 and 37 GHz) imaging resolved the radio core of J0849+5108, indicating a size of 320{{formula:cdd7cf5b-5766-43d2-a84e-89bd37d51e7d}} 215 pc {{cite:dfbb007b98879b034b2f4db3b7110b323cba60e4}}. Therefore, we may explain the radio QPO of J0849+5108 as the result of the helical motion of an emitting blob. Aside from the QPO modulation, the radio light curve also shows relatively large variations, which should be caused by the emission from the whole jet. In addition, since the {{formula:59e7581a-b796-49f0-acb5-61507d2e25ae}} -ray emission likely arises from a site close to the central SMBH, it does not share similar variations with the radio emission {{cite:63130124cd5bfacffe2b0ff3139c718f4c569d06}}.
d
29e7ff7baa70489d0543fdce1566b5f4
SOIE {{cite:475c1b9d2d59ff7f9e8b203cad8bde527e719b02}}: It was developed by a Stanford team and was one of the most popular OpenIE tools. It leveraged linguistic structures in dependency parse trees. OpenIE6 {{cite:e2c71536e694a58c715c082a63559d61769d76ef}}: It performed iterative grid labeling and coordination analysis to improve the performance of OpenIE. IMoJIE {{cite:740128efcef0b7349c4667f82b31b140c2e83b54}}: It adopted an iterative memory-based framework that could produce the next extraction conditioned on all previously extracted tuples.
m
58e73bf6cae397129b244481298a857e
First, we observe from Figure REF that when the training domain is used for model selection, no method performs significantly better than ERM. The performance gains for domain generalization methods only appear when model selection is done directly on the test set. This is consistent with prior findings {{cite:975101b7bb1a35002e248ee75495648a794507d6}}. Though this is the setup used in the large majority of papers proposing domain generalization methods  {{cite:8c4fd5cd374a8651679fc035a228f12bf4951063}}, {{cite:352fca754d4373dffafd93b88503c698602893e1}}, {{cite:0e04e672f3679aa448da2eac29f01b395181483c}}, {{cite:bfabda464a1d3f8d7af5b54a2bc73821c766e03d}}, {{cite:7b017b371a9b18261cc25d2fcd57b6fab6a2b8ff}}, having test environment data for model selection is not realistic, and defeats much of the purpose of domain generalization. This is also a potential explanation for why domain generalization methods, which work well on Colored MNIST in the literature, do not work well in our clinical experiments (which do not use the test domain for model selection).
r
614779cdd52745f6e43a938eca1398c6
Our calculations of the stability, equilibrium structure, polarization, and energy changes during structural transformations have been performed using density functional theory (DFT) as implemented in the SIESTA code {{cite:a70a74dd76400f71456d3b8b1fd154bd0cc2f3fc}}. The 1D systems have been represented using periodic boundary conditions and separated by 20Å. We have used the nonlocal Perdew-Burke-Ernzerhof (PBE) {{cite:6b0c706ffe634be16c41444d6aec1fd134ee85ec}} and Local-Density-Approximation (LDA) {{cite:e6b232c3943fb99f7fad7fe16583834d14731ea0}} exchange-correlation functionals, norm-conserving Troullier-Martins pseudopotentials {{cite:6b87d2f2666dd365e42b16b3fc96a3540f33520f}}, and a local numerical double-{{formula:656cf824-9dfc-4f96-adce-937b88ff3bcb}} basis including polarization orbitals. The Brillouin zone of periodic structures has been sampled by a fine grid of 1{{formula:b1744908-0339-4c04-adf8-d9df93c4d328}} 1{{formula:38323b1a-b275-4273-bd89-959c26c18230}} 12 k-points for 1D structures {{cite:c673b5bdc2cb686b50b493a0379a5dce9b288bb5}}. We find the basis, k-point grid, and mesh cutoff energy of 180 Ry used in the Fourier representation of the self-consistent charge density to be fully converged, providing us with a precision in total energy of 2 meV/atom. Geometries have been optimized using the conjugate gradient (CG) method {{cite:d131a234536eae61aa1bc648bac4af4f5faaa8c1}}, until none of the residual Hellmann-Feynman forces exceeded {{formula:7e6b93fd-106b-4020-95de-350167f8a87c}}  eV/Å. For the phonon spectrum calculation, we use a much smaller force tolerance of {{formula:c211d1b7-58ac-48a2-8699-dda81e44c8ca}}  eV/Å to obtain the optimized structure. The polarization is calculated using the Berry phase method.
m
239a154633ab85c19a8552490dccdf05
The Fréchet Inception Distance (FID) and the Structural Similarity Index Measure (SSIM) are the metrics used for the evaluations. A lower FID or a higher SSIM indicates better performance. For each class in each dataset, the model under evaluation generated 1,000 samples, which were compared with the test samples to compute the corresponding FID and SSIM against the test set. Two baseline models are used for comparison: the Conditional Deep Convolutional Generative Adversarial Network (DCGAN) {{cite:15a116d0f448c2bd128136bc64ad5e7f91005c2e}} and BAGAN-GP {{cite:e09fe77b9ba4af9b9b64f6aa8db12334f6058eb7}}.
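For reference, the FID between two feature sets reduces to a closed-form Fréchet distance between the Gaussians fitted to them; a short sketch on random placeholder features follows (in practice the features come from an Inception network, not from random draws).

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets:
    |mu_a - mu_b|^2 + Tr(Ca + Cb - 2 (Ca Cb)^{1/2})."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    ca, cb = np.cov(feats_a, rowvar=False), np.cov(feats_b, rowvar=False)
    covmean = sqrtm(ca @ cb)
    if np.iscomplexobj(covmean):           # discard tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(ca + cb - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 64))   # stand-in for Inception features
fake = rng.normal(0.3, 1.1, size=(1000, 64))
print("FID:", fid(real, fake))
```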
m
d02b13996a76e8459a58d461ca0bc603
In this paper, we consider a different type of limit for alternating sign matrices where the discrete process is no longer visible by moving away from the tangency points. We introduce a directed path picture for the alternating sign matrices and show that the fluctuations of the maximum of the top path, which separates the ordered and disordered regions, converges to the GOE Tracy–Widom distribution after suitable centering and rescaling. This gives strong evidence that the top path should converge to the Airy-2 process after suitable centering and rescaling. The reason for this is the fact that the distribution of the maximum of an Airy process minus a parabola has the GOE Tracy–Widom distribution, see {{cite:171ab955a2ba3cf55e1395d9be064fc24c12c5be}}, {{cite:9f22eb7d62a0fe9f3ac499dd7f4cba7ca7f54d4a}}. To our knowledge this is the first edge fluctuation result away from the tangency points in a domain-wall six-vertex model when we are not in the free fermion case.
i
63394b1f8364bfe1924a292de9cc75f0
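The identity invoked here can be stated precisely; in one common normalization (quoted from the general literature, not from this source) it reads:

```latex
% Maximum of the Airy_2 process minus a parabola (Johansson-type normalization):
\[
  \mathbb{P}\Bigl( \sup_{t \in \mathbb{R}} \bigl( \mathcal{A}_2(t) - t^2 \bigr) \le m \Bigr)
  \;=\; F_{\mathrm{GOE}}\bigl( 4^{1/3} m \bigr),
\]
% where F_GOE denotes the GOE Tracy--Widom distribution function.
```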
We verify the convergence results announced in Theorem REF through a series of ad hoc numerical experiments, carried out by implementing the finite element method (cf., e.g., {{cite:4a8120bd88f7bfb761466bd54d912dfa5903e9e7}}) in the software FreeFem {{cite:7fd5c38338de1ddee5b2711fe22dd7a03d3e5818}}. The results are visualised in ParaView {{cite:a2a41284fb114a7dd7f68c8f386d45d26ed7ab5a}}. Specifically, we use conforming finite elements to discretize the three components of the displacement (cf., e.g., {{cite:60ff823d2ff7e1c8321743c473034324ea61e30d}}). Apart from the transverse component of the solution of Koiter's model, which is discretized by means of HCT triangles (cf., e.g., Chapter 6 of {{cite:4a8120bd88f7bfb761466bd54d912dfa5903e9e7}}), all the other components of the solution of Koiter's model, the solution of the 3D model, and the solution of the 2D limit model are approximated via Lagrange finite elements.
r
43cdd6d9b77c96a80922dcd31b37f208
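A typical way to report such verifications is to fit the observed order of convergence from errors recorded on successively refined meshes; the numbers below are fabricated placeholders, not results from the paper.

```python
# Generic convergence-rate check: the observed order is the slope of
# log(err) vs. log(h) over a sequence of mesh sizes.
import numpy as np

h = np.array([0.2, 0.1, 0.05, 0.025])             # mesh sizes (illustrative)
err = np.array([3.1e-2, 7.9e-3, 2.0e-3, 5.1e-4])  # placeholder error norms

order, _ = np.polyfit(np.log(h), np.log(err), 1)
print(f"observed convergence order ~ {order:.2f}")  # ~2 for these fake data
```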
First, the perturbative RG-restoring subtraction terms, like those in Eq. (REF ) typically, are missing in HTLpt. Accordingly, the latter formally lacks perturbative RG invariance by a leading-order term of the massive theory pressure, {{formula:ac3ab7c4-9c00-43dc-b303-c2fc780b934d}} . Now, since {{formula:9822135d-6265-40cc-b02f-3518ebfd0f88}} for any (gluon or quark) thermal mass, and since HTLpt is also based on high-temperature expansions, this uncancelled term is effectively only a three-loop-order effect, thus largely screened and harmless at LO, and moderate even at NLO. In contrast, the mismatch plainly resurfaces at NNLO HTLpt, presumably explaining much of the large remnant scale dependence observed in Refs. {{cite:51dac7895631cb5ae11e47c1a4f14f23b19ab733}}, {{cite:3562c99a76fba34ef767acabf5e299fed3a4a88e}}, {{cite:d5353285c3334f1451d215a8eee6aef88711b92b}}. Second, the interpolating Lagrangian used in HTLpt is linear, namely with an exponent {{formula:5d15faf5-e572-4884-ad33-e2504ed38852}} in the HTL equivalent of Eq. (REF ), instead of our RG-determined Eq. (REF ). As we have shown {{cite:2a77d9c2cd40e9122125513f8ca96cf62827af0d}}, this generally spoils RG invariance even when the latter is fulfilled perturbatively by the original pressure. Finally, remark that upon choosing a variational mass prescription Eq. (REF ) in HTLpt (as was done e.g. in {{cite:6e7a2381058570c8c7a7523a5c1721638a3cd28f}}, {{cite:3562c99a76fba34ef767acabf5e299fed3a4a88e}}), nonreal {{formula:8c6c9117-f84a-4da6-9797-1bdc0e2f0c30}} may occur, similarly to what happens for RGOPT (although in HTLpt it happens rather at NNLO). In NNLO HTLpt applications this issue is avoided simply by replacing the arbitrary gluon mass {{formula:09ab2877-83e0-45d3-8177-f93d415cd0c8}} by a perturbative thermal mass {{cite:51dac7895631cb5ae11e47c1a4f14f23b19ab733}}, {{cite:d5353285c3334f1451d215a8eee6aef88711b92b}}, and taking the quark mass {{formula:6449874d-0362-4a11-be88-efc6207e0a6a}} . However, enforcing perturbative masses partly forfeits the a priori more nonperturbative behaviour rooted in variational prescriptions.
r
ca36f7f23d76745d434d9d5ba54c4936
where {{formula:ed8ddfa5-72e3-436f-9bf1-22a91d6d5031}} is a circle encircling the origin, but not any singularities of {{formula:35cae92c-3169-4ed7-b386-4c8c0c6bb9fb}} . Expanding the contour {{formula:37b0a5fd-3557-462e-8251-cec917ab5521}} where possible, and applying the residue calculus whenever a pole is encountered, we obtain the leading contribution to the asymptotic behavior of {{formula:6e41ffe5-4dbc-452a-a4ab-b0b2422d7409}} . This procedure fails, however, when an algebraic singularity lies closest to the origin, in which case the classical method of Darboux ({{cite:daed40645daaff7bbccedce65db45987dd2a9024}}; see also {{cite:2ae9759d9cd4b4ec0dfbcd5fceb751e024304293}} and {{cite:ca47722ff6d9c0818ec55473995350a2cd208e89}}) becomes useful.
m
23c31cc3062f943ea97365ddd15c9d01
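For concreteness, the textbook coefficient asymptotics that a Darboux-type (singularity-analysis) argument yields for a dominant algebraic singularity can be written as follows; the normalization is the standard one from the literature, not quoted from this source.

```latex
% If f has a single dominant algebraic singularity at z = r, say
\[
  f(z) = \Bigl( 1 - \tfrac{z}{r} \Bigr)^{-\beta} g(z),
  \qquad g \ \text{analytic at}\ r, \quad \beta \notin \{0, -1, -2, \dots\},
\]
% then the Taylor coefficients of f satisfy
\[
  [z^n]\, f(z) \;\sim\; \frac{g(r)}{\Gamma(\beta)} \, n^{\beta - 1} \, r^{-n},
  \qquad n \to \infty .
\]
```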
We select a set of commonly used IQA methods to build a benchmark. For the FR-IQA methods, we include PSNR, NQM {{cite:db103b2ebba05a6204f0df599e8841b07515b90a}}, UQI {{cite:43977bf7f1b5e999d7cd1c86f76a7bf57d11c3f4}}, SSIM {{cite:14db4aab83677b59e89b20ada21405312dc9eb7f}}, MS-SSIM {{cite:96f48975d4858eaf7c51e05e2fe6091625b0f117}}, IFC {{cite:4c6c40c41866b547e7c33c54606db4ac5b110370}}, VIF {{cite:f6e9087ff2ca1dbef22ff25b476e995b982cc523}}, VSNR-FR {{cite:23eadb7ee0fe2abe24ca2fd0fc91da0af167a4f1}}, RFSIM {{cite:c5333d250123b8e72030b0c568acc3ce47a33950}}, GSM {{cite:6ae88555aa5f03b1cea28a0ea867cadba51fd6ba}}, SR-SIM {{cite:00ec89849827fb0863b57496d0375ea94cef89f9}}, FSIM and FSIM{{formula:aec14147-dd59-43c1-b8d8-d24497fcfd88}} {{cite:eb90b30726a9b8f158282e4e14f4e9c2a02afb9d}}, SFF {{cite:9aae7df7936da7fdbcad2bf48001c89afcb63995}}, VSI {{cite:2288e173eedd142da48322bcd694e611c2d369c6}}, SCQI {{cite:3d0dfce26221f1fc188be17bb6316aebc67f7b90}}, LPIPS-Alex and LPIPS-VGG {{cite:0f85eb96f92af49dfe27e8a222f505312273841c}} (we use version 0.1 of LPIPS in this work), PieAPP {{cite:ae7c2e9fcf2afb17f18580d4abd59b7e689efc07}}, DISTS {{cite:8bf32f59ea688c2e6096a228b275dfa2bf3cab51}}, and WaDIQaM {{cite:306a500f8d8e162ab956b79d43a478dd9ce5fab8}}. We also include several popular NR-IQA methods: NIQE {{cite:13ddb35d5cbf7b34696e2397840d89e5e824a304}}, Ma {{cite:63537a07de8d0bf2fcad7ce374949f97eca74cb5}}, and PI {{cite:fe7190b3687f86fb5d31a93481a88565f2adeb3c}}. Among them, PI is derived from a combination of NIQE and Ma. Note that these NR-IQA methods are designed to measure the intrinsic quality of images rather than perceptual similarity, so a direct comparison with them is not entirely fair. All methods are evaluated using their released code.
m
eb7d8e779a948aca5ea48fc90decd513
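As an example of how one of the listed learned metrics is scored, the released lpips package (v0.1 API) can be used as below; the random input tensors are placeholders for a reference/distorted image pair.

```python
# Score a reference/distorted pair with LPIPS (inputs: RGB in [-1, 1], NCHW).
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')           # LPIPS-Alex; net='vgg' for LPIPS-VGG

img0 = torch.rand(1, 3, 256, 256) * 2 - 1   # placeholder reference image
img1 = torch.rand(1, 3, 256, 256) * 2 - 1   # placeholder distorted image

distance = loss_fn(img0, img1)              # lower = more perceptually similar
print(distance.item())
```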
Taking dissipation processes {{cite:a32349f17fc1ef494dda2c962179691da251bbab}}, {{cite:71b69a100ef51dd84161c092cb0f58f538f09030}} as well as angular momentum transport in stellar interiors (see {{cite:39076490cc2703c6aa77878ca003e2c3661a23b0}} for a review) into account is also among the perspectives of this work. In this context, one needs to factor in stellar rotation {{cite:c7552585132e1953398cc80f4529b2ad0e7f3028}}. Regarding our formulation, adding the Coriolis acceleration may directly alter the forcing term {{formula:784287c3-1539-428e-b122-f6602ea3eecf}} , since in the convective zone gravito-inertial waves are no longer evanescent in the sub-inertial regime (where {{formula:3bdd05bc-f4e2-41d9-9477-a71bc74874d1}} ). Such an effect is expected to increase tidal dissipation in the radiative zone {{cite:55c15a52758e32b07d9f3c5ba06ebf59dfc99c09}}, {{cite:6992c7531ad60cab67210276fb5f25581cead299}}, {{cite:adb5dfaad25628f7d38140014e1ceb4480116cbe}}, {{cite:121f9a354bd58a8b1a8519e5faf1eab4afa3b5d6}}. In the same propagation regime, low-frequency waves in the radiative zone are trapped near the equatorial plane, leading to a geometry significantly different from the one considered in this work {{cite:6129f09ee531c782e378755b7a341a470ede1191}}, {{cite:5bd411196803857d427621b9589374be49b40c77}}. Such a change is likely to modify the tidal dissipation accordingly. In addition, since we find that the dissipation through IGW is maximal during the formation of the radiative core, when the star is spinning up, a progressive trapping of gravito-inertial waves in the sub-inertial regime may occur. Stellar rotation also increases the radial wavenumber of gravito-inertial waves, leading to stronger radiative damping; the waves are therefore deposited closer to their excitation region than in the non-rotating case {{cite:ec8e190a5a5096cf818a9f64469592ff6f6f5899}}, {{cite:47357b667b3c5dcb189c7bb8aa84f9023056fef3}}.
d
486d488ed2f3a3d16bd089d5cdb930bf
The setup presented there does not seem to apply to our case, since the phase space considered in {{cite:5c98cecf1e2c25247874fa70a8fcc755472da627}} is the space of continuous functions on an interval, namely {{formula:fd5654e3-3ffc-4c16-bbff-fc90ea13e459}} , and the differentiability properties of the equation that they require are not satisfied in our case. Note also that we can obtain smooth dependence on parameters (see Theorem REF ). Obtaining such smooth dependence using methods based on the evolutionary approach would require regularity of the evolution operator, which does not seem to be available.
r
cb75b0928733196e1a80c70efe71890c
For the SemEval shared tasks on CQA, several authors used complex recurrent and convolutional neural network architectures {{cite:09efaff9ec3be2d50ee84a9e40baf593d82cf5cc}}, {{cite:b656224d72ac1047e936378892f4860ec446be01}}. For example, {{cite:b656224d72ac1047e936378892f4860ec446be01}} used a convolutional neural network in combination with feature vectors representing lexical, syntactic, and semantic similarity, as well as tree kernels; their performance was slightly lower than that of the best system (SemEval-Best for 2016 in Table REF ). The best system used lexical and semantic similarity measures in combination with a ranking model based on support vector machines (SVMs) {{cite:71b447fede685737fba92a7db7ff3a0c1726f5cd}}, {{cite:335e0fccaeda7e4280fd0758ae65f574bef9604f}}. Both systems are harder to implement and train than the model we propose here. For SemEval-17, the winning team {{cite:335e0fccaeda7e4280fd0758ae65f574bef9604f}} used distributed representations of words, knowledge graphs, and frames from FrameNet {{cite:56839519b2b44343d8ca92bf37e811e32ac36195}} as some of their features, and used SVMs for ranking.
d
e01b441b46efb56cc64d3778ee99e760
with {{formula:dd8fdf2c-001a-4155-adc9-a99b00136bd8}} . This is the so-called orthogonal projection of (REF ) onto {{formula:07699557-02fd-4a5e-91ca-c7cd91adf395}} , and it ensures that the mean-square error resulting from the finite representation of {{formula:6cc5aabc-7ad1-43f6-9f06-af240c7e29da}} using (REF ) is orthogonal to {{formula:7146d457-a137-4bee-986a-a1153e53cce6}} {{cite:8b4d31d151dfb35e0dc7754a0bdcf2f92b0429cc}}.
m
e6ee5f34591ff7a2b2f6c1a98964b8ef
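In generic notation (ours, not the source's), the orthogonality property described above takes the following form for an expansion in an orthonormal basis:

```latex
% Orthogonal projection onto span{phi_1, ..., phi_N} for an orthonormal basis:
\[
  u_N = \sum_{k=1}^{N} \langle u, \phi_k \rangle \, \phi_k .
\]
% The truncation error is then orthogonal to the approximation subspace,
\[
  \langle u - u_N , \, \phi_j \rangle = 0 , \qquad j = 1, \dots, N ,
\]
% which is exactly the property that minimizes the mean-square error over
% all representations within that subspace.
```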
We compare our method to our re-implementation of SSF and to two ROI-based baselines. The first, dubbed ROI-aware loss, is SSF trained with our ROI-aware loss as described in Eq. (REF ). While the codec is blind to the ROI, it is expected to learn it implicitly through the training objective, in a similar fashion to the semantic models of Habibian et al. {{cite:5e709802a10383a4ac33dc6b3839f8e54d81d563}}. The second method, dubbed OBIC SSF, is based on a recent ROI-based neural image codec called LearntOBIC. We use the same hyperprior structure, in which a shared codec produces a latent that is masked using the bilinearly downscaled ROI mask and split into foreground (ROI) and background (non-ROI) parts. Each masked latent is passed to a separate hyper-codec for prior parameter estimation before being recombined by a simple addition. For reasons similar to those for Latent-Scaling ROI SSF, we only modify the I-frame and P-frame residual hyperprior AEs in this way. To enable a fair comparison, we train this architecture using our ROI-aware loss, which differs slightly from the formulation of Xia et al. (where the rate loss of the foreground is de-emphasized). {{figure:7d9f50d2-17e1-4696-aa1e-35aa79a8099c}}
m
209ba6f1943f46f43a03acf0c4acb505
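A minimal sketch of an ROI-aware distortion term of the kind described above (our reading, not the authors' code; a full objective would also include the rate terms, and the background weight is a placeholder):

```python
# Weight the reconstruction error by the ROI mask so the codec learns to
# spend its distortion budget on the region of interest.
import torch
import torch.nn.functional as F

def roi_aware_distortion(x, x_hat, roi_mask, bg_weight=0.1):
    """x, x_hat: (N, C, H, W); roi_mask: (N, 1, H, W) in {0, 1}.
    bg_weight < 1 de-emphasizes non-ROI pixels (placeholder value)."""
    per_pixel = F.mse_loss(x_hat, x, reduction="none")
    weights = roi_mask + bg_weight * (1.0 - roi_mask)   # 1 inside ROI, bg_weight outside
    return (weights * per_pixel).mean()
```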