text
stringlengths
54
548k
label
stringclasses
4 values
id_
stringlengths
32
32
However, an alternative approach of cold inflation may be possible. If one introduces a coupling between inflaton and radiation, the energy density of radiation can be maintained almost a constant during inflation and the (p)reheating is unnecessary This alternative scenario was known as “warm inflation”and deserves some major attention {{cite:e5fb0ba6771fc4f7f8d61cd92704a9f3d0c62f59}}, {{cite:7a9407804e0141574bffce0ba0fffcb1721cff50}}, {{cite:544ca766ddf62dc587c25f6bd32f67e598a337ea}}, {{cite:64a4ebe9fb12c9495fcfa1d53f1d572d5d475c28}}, {{cite:2d59a2da22b3476ec447f52e561950e1b8c58f13}}, {{cite:816d018ac8950096a2698d71b34a487a09207b2b}}, {{cite:6ea3bb1c97dbe96eb8e708039db0601d4256d8ff}}. To generate the thermal bath in the standard cosmology, this warm inflation scenario has received much attention. To be more precise, it was originally proposed to resolve some problems in the standard cold inflation picture [1, 2], for instances, providing sufficiently hot thermal bath.
i
ab06dbfa8fe5897413d0ef78bff64e68
The parton distribution amplitudes (DAs) that play essential roles in describing the various hard exclusive processes of quantum chromodynamics (QCD) bound states {{cite:f0532bfd1a83141a66af4a20af2ca9858df21b1f}}, {{cite:e4a95db199345a321ac6a15d7b7fcb0c03155cdb}}, {{cite:d580f144418bb1a5012ea6eb7823662d2b447888}} via the factorization theorem {{cite:de7c982f94a421a59607f18e2247e70cd6ff09cd}} are among the most basic structure functions. The DAs are therefore complementary to the parton distribution functions (PDFs) associated with inclusive processes {{cite:72f1f4c06c38a5a421cacd880a511738924dadfa}}, {{cite:693bd29aafd8470dab3a9f6eeb357b9e080f45fe}}, {{cite:0b7a602621dc91ebac0e56448494fd30b2595304}}, {{cite:93d0193b3ea961f03616ca4987c86d574446e754}}. Since the DAs are longitudinal projections of the hadronic wave functions obtained by integrating out the transverse momenta of the partons {{cite:f0532bfd1a83141a66af4a20af2ca9858df21b1f}}, {{cite:e4a95db199345a321ac6a15d7b7fcb0c03155cdb}}, {{cite:bb2e022981f7fef909d64b94f62c2548863c7a37}}, they carry information on QCD bound states at the amplitude level. Specifically, the lowest moments of the DAs for a quark and an antiquark inside a meson are closely related to decay constants and transition form factors {{cite:5fa845310dff15395e4802271599d829d4f41b39}}, {{cite:ef956e4672a977ac6d4ca7c8934a07fdd8595d9a}}, {{cite:20fcffd4f110d8390833c2ff104c7984a40bbf70}}, {{cite:6c44b0040af0804b6078a6d9989910c844346d07}}.
i
7d350609f2649c64d37c32a45e2ef582
In recent years, there has been a resurgence in the study of asymptotic symmetries and scattering amplitudes at null infinity {{formula:5641f652-6752-4c7f-81ce-059fc284c97b}} , much of it aimed at formulating a notion of holography for asymptotically flat space-times (cf., {{cite:a3180ee23da54e9b959dd3278bc4c95cd7657331}}, {{cite:7da1b8084438589a3094fb28868978c06358bd15}}, {{cite:676dd9acc5307ded77682db683ad4ad5afd9816a}}, {{cite:c85c9f95676e9d83c236f18cef32240724e03634}} for recent reviews). In fact, the notion of reconstructing `bulk' space-times and their physics holographically at {{formula:6a65f27a-a094-41e9-9cfa-f5ec79e73e95}} dates back to the 1970s and the work of Newman and Penrose {{cite:8bf54ebb62d00fe47ad5847d7daab5c0e8dec1dd}}, {{cite:a27a234042886fa12e7669c82c97467b9457e628}}, {{cite:b57ba863bb2188169be4d7acbbba6c7e393b3da2}}. One of the main outputs of this work was the non-linear graviton construction, where (complex) space-times with self-dual curvature arise from deformations of the complex structure on twistor spaces. When these are `asymptotic' twistor spaces, the non-linear graviton is intrinsically holographic, as the deformed complex structure is constructed directly from the (complexified) characteristic data (i.e., the self-dual asymptotic shear) of an asymptotically flat, radiative self-dual space-time at {{formula:736f69d4-2328-4069-ac85-988a4a838ace}}  {{cite:feaeb43e73361caffdc2043e1eeeeaf3a4ae97e2}}.
i
60f10040fa3e8cfdf4ca78fd94119d09
Generic search uses consecutive quantum searches until a solution to the search problem is found, such as the method given in {{cite:2ce5f83d976ad3d702219176083ee16b48b832eb}}. The difference of generic search is that it employs varying initial quantum states for the searches. Thus, we expect that the optimal number of applied Grover iterations would also be different for each step.
m
660100eb75f3f51d9c228e69e84358b0
We observe that each feature aggregation method brings its own trade-off between latency and representational capacity. The representational capacity of a model is typically correlated with its number of parameters {{cite:e926979df4af631880401e88980721b8971796b1}}. This is also reflected in tab:comparison, as a higher number of parameters is typically correlated with higher performance in semantic segmentation. Therefore, the ratio between the number of parameters and its run-time latency may be a useful indication of the suitability of an aggregation method for the low-latency use case.
m
fada253ad27d8eb858db76a73904631f
In a many-body system, the von Neumann entropy {{cite:cf37e84f1689b464873652e323f3caa09033b4ed}} quantifies the changes of information from one local system compared with other correlated systems. Instead, the parameterized entropies {{cite:6538762c8b958b31c5e42486aaba2ff504acdee3}}, {{cite:6a762225862e611269040d9e2092e8b7481e6c79}} show additional intuitions for distributing the input information among the composite system during the dynamical evolutions. These demonstrate interesting long-range perspectives for detecting the growth of many-body correlations, the spread of quantum information in composing systems, or asymptotic features in nonextensive quantum thermostatistics. Note that all the parameterized entropies {{cite:6538762c8b958b31c5e42486aaba2ff504acdee3}}, {{cite:6a762225862e611269040d9e2092e8b7481e6c79}} lead to the von Neumann entropy {{cite:cf37e84f1689b464873652e323f3caa09033b4ed}} that is generally specified for most one-shot tasks. A fundamental problem is to explore the common features of entropies beyond the second thermodynamics law. The present entropy principles show interesting solutions from unified models for asymptotic tasks or one-shot tasks. This further inspires a twin-class problem of finding distinguished applications of various entropies.
d
b59e93527205d97c91c32f6ed08ac3fa
Swarm Intelligence techniques have been shown to be effective and powerful in a wide variety of computer science areas. Depending on the problem nature, continuous or discrete search spaces may be properly defined. Accordingly, the different Swarm Intelligence approaches, such as PSO, may provide more efficient solution encoding either in continuous or discrete optimization problems. However, although each optimization technique was first designed for a particular purpose, the majority of evolutionary algorithms were adapted from continuous to discrete search space, and vice-versa. Other Swarm Intelligence techniques, such as the Bat Algorithm (BA) {{cite:1e44a6ec9cba0a21a3615f9e76f3da9f551edeb4}} and the ABC algorithm {{cite:020ffa8729d1ba16348abfb9c394b263c31d9b5b}}, could be useful to improve the PSO performance in biomedical image registration. In this context, the elastic registration {{cite:2354170eeb6c9bfc25cb957473a5349bfebd169c}} is still a challenging problem because of the thousands of parameters to be optimized {{cite:674a60f98e413fff43c006301e1e35fda9629fa5}}. However, to the best of my knowledge, no literature work has addressed yet this challenging issue using Swarm Intelligence techniques.
d
b6d768a25a35979159ab7ad62ed0cb1a
Non-monotonicity in generalization error gathered a lot of interest recently. Many studies pointed to absence of overfitting in overparameterized machine learning models, signaled by a peak and a subsequent descent in generalization error as the model complexity, or the number of parameters, increases, and the model transitions from an underparameterized to overparameterized (interpolating) regime {{cite:396aacb1180db75751297dd3df06faae8a0f8962}}, {{cite:5e7952cd916d0b200756a9d00e4675571db6fc32}}, {{cite:ee20e8ccf53e1f898574ea362e0b8e407939c854}}, {{cite:b17d1d1b7bd1f0f415a87451a5acfa2fc0c4bb36}}, {{cite:a1df223c058810b62e98eb3d966c56633ec0cd02}}, {{cite:40435d7c6fcf7a436cbf291bc2387b7538e0b286}}, {{cite:a671d035673d1592ef0119cc812f5c6cc6fa9527}}, {{cite:027139bd44f4f8f45496eb09fa006a511046dd34}}, {{cite:96da87b43836c232efae77dba99e308e5da9c497}}, {{cite:6f0b55ff84838eb359c94c9fa2ab841d49a13d6e}}. Multiple peaks are also possible in this context {{cite:32a2106b556df4b8c9957ae018a1d954a792d6b4}}. Our work provides an explanation for the lack of overfitting in overparameterized models by elucidating strong inductive biases of kernel regression, valid even in the interpolation limit, which includes infinitely overparameterized limits of neural networks. Sample-wise non-monotonicity has also been observed previously in many models {{cite:5e7952cd916d0b200756a9d00e4675571db6fc32}}, {{cite:65021f3acd399b7a3e4abf60ed1d55f28d86b72a}}, {{cite:9e05d661fae916262657d21ce04c3135f1cfaeb7}}, {{cite:b17d1d1b7bd1f0f415a87451a5acfa2fc0c4bb36}}, {{cite:557138ef7e9b55074db56b8f639dd4a7bac99502}}, including ones that show multiple peaks {{cite:a0d042f61863f8dfde71c2216291b47944bf5793}}, {{cite:a671d035673d1592ef0119cc812f5c6cc6fa9527}}, {{cite:29322dd79dce5bb132407aa075c48049a62a5c93}}, {{cite:50c4a2440cc8da42e4213ce2299cf63f8ee87073}}. A closely related study obtained an upper bound for test risk in ridgeless regression which shows non-monotonic behavior with increasing sample size whenever {{formula:ebb18efa-24b7-4454-8e63-72af6d2d8454}} , consistent with our results on rotation invariant kernels and isotropic data.
d
b5d47167bbab70754dce775466ee332d
Table REF shows the correlation coefficients obtained for the CPC model for the first ten epochs (CPC-2) for both languages. The relationship between the validation loss and the ABX across-speaker score shown in Fig. REF was also reflected in the correlation coefficients obtained. Both {{formula:d6ebebc2-5f88-46be-8c11-cfbc755dde85}} and {{formula:77be1e28-d1d6-42a1-acc9-ecbd4ef48346}} are significant and exhibit a strong positive correlation throughout the training ({{formula:c89f82d9-f00d-4e7a-911a-b03c3481b7a4}} for the first and {{formula:53276ffa-fa60-4647-8fdf-7329093fd71d}} for the second run). The strong correlation for the ABX across-speaker score also shows a feature of the InfoNCE loss that is worth noting, although it was exhibited for some runs only. The selection of the negative samples could have an impact on the information that is favoured in the representations {{cite:88e1809c9ce2580e450dd9ae3295dcbdf285ba97}}, {{cite:84576dc2fa90b779b5ba8d93dd35062ae289c54a}}. The rationale behind this is that by using the same utterance to extract the negative samples, the information about speaker features will not be relevant for distinguishing true and negative samples, thus encouraging phonemic information. We run additional experiments to evaluate if the ratio change (relative proportion of change between consecutive epochs) for the validation loss was correlated to the ABX-scores, but our results did not provide statistical evidence of such correlation. {{table:67854016-fcff-407d-abb6-894683357883}}
r
88fc47dd0adc5e995bbfc96275bce306
Softmax: embedding extracted from an embedding network trained with softmax objective in (REF ), Gradient reversal: embedding extracted from an embedding network trained with gradient reversal strategy as described in (REF ) where {{formula:00097a03-056b-44e4-ba99-65e983e73177}} was set to be 0 in the beginning and linearly increased every iteration, reaching 1 at the end of the training as in {{cite:e4c745e3fafa4107978913a48554f0ce2a8f18e7}}, Anti-loss: embedding extracted from an embedding network trained with anti-loss as described in (REF ) using the same adversarial training strategy described in {{cite:c2b0ddfb2d5513fd517813ff6cc2bf47273f3a43}}, JFE (proposed): speaker embedding extracted from the proposed JFE system trained with the discriminative loss functions in (REF ) and (REF ) and both the entropy-based as shown in (REF ) and (REF ) and the negative MAPC-based disentanglement losses in (REF ).
m
8b839488e976af70b53fc73aa8726768
Estimates of the effective temperature (T{{formula:9a4f8d5c-e5dc-4466-a1a4-bd9fbc5a5b78}} ) of Y dwarfs are difficult because theoretical fits to their spectral energy distribution are of modest quality, which indicates that the chemistry calculations and/or the sources of opacity may be incomplete, and values estimated from their luminosity and assumed radii using the Stefan-Boltzmann law are hampered by uncertainties due to possible effects of unresolved binarity {{cite:6b19fa6721a56ef1699c8cb1caf151f2e5b98569}}, {{cite:ff449b2d03a25924529d837d99206556fdc657a3}}.
i
9a2abe7b9f095c823e14575e038f4edc
A special example of our formula is the symmetric combination of stress tensor OPE block({{formula:667d0409-cd36-4524-9fef-75112820d346}} ,{{formula:9af1598c-bb72-49e9-9805-1117b4cb8eb8}} ), which corresponds to the modular Hamiltonian {{cite:33823296945c985542bd90d1180bc96348303e89}}. In our formalism, each stress tensor block can be represented by geodesic integral of AdS{{formula:72d7fcf8-9c87-46c7-a464-946e999fd8c4}} scalar field with mass {{formula:b16cb7f3-0fe7-47a3-a50f-eda04acf60ac}} . On the other hand, in {{cite:f6d0c33ed06660807a93d4601d1360fa33049e55}}, the bulk dual of modular Hamiltonian is described as the fluctuation in the area of minimal surface. It would be nice to understand how to relate our construction with their result.
d
19c7d1abe931af876e4920d00d577597
The majority of the submitted algorithms are improved from state-of-the-art methods such as AutoScale {{cite:de817bab38898e6b5065dbc206d843ba564e4500}}, CSRNet {{cite:4afa80542bb3e7ce805d4e4097f532b5699c8042}} and SANet {{cite:654c69a05e9300c546c0ae5153339d89b5e412a2}}. FPNCC (REF ) is based on AutoScale {{cite:de817bab38898e6b5065dbc206d843ba564e4500}}. BVCC (REF ) is a double-stream network that extracts optical flow and frame difference information. 6 algorithms are variants of CSRNet {{cite:4afa80542bb3e7ce805d4e4097f532b5699c8042}}, including PDCNN (REF ), CSRNet+ (REF ), SCNet (REF ), CSR-SSOF (REF ) and Soft-CSRNET (REF ). To extract multi-scale features of the target object and incorporate larger context, M-SFANet (REF ) improves SFANet {{cite:db5e4975959c6ce23809dd25d21ca86cd51984bc}} by adding two modules called ASPP and CAN. MILLENNIUM (REF ) uses multi-view data (i.e., real-world RGB image and the corresponding crowd heatmap) to construct two deep neural networks for crowd counting. DevaNetv2 (REF ) employs attentional mechanism and feature pyramids to deal with different scales of people heads. SANet (REF ) is a new encoder-decoder based Scale Aggregation Network {{cite:654c69a05e9300c546c0ae5153339d89b5e412a2}} to extract multi-scale features with scale aggregation modules and generate high-resolution density maps by using a set of transposed convolutions. Besides, two submissions are state-of-the-art methods trained on the VisDrone-CC2020 dataset, i.e., CFF (REF ) and CANet (REF ). CFF (REF ) proposes supervised focus from segmentation to focus on areas of interest and from global density to learn a matching global density. CANet (REF ) combines features obtained using multiple receptive field sizes and learns the importance of each such feature at each image location {{cite:c390f66e1e6452e57e433c56d91a631824e09015}}. ResNet-FPN101 (REF ) is a baseline method by using ResNet-101 backbone to regress the density maps. {{figure:736b537a-ec1e-48eb-998f-dc000165cd4b}}
m
4358aca7427dabf4fc2852f75d7d2031
Naive baseline* (Base_clf) {{cite:d588bc9c54c6b7ea8020d8349b571c43ab6bc7f8}}: Base classifier trained with D&S labels. Simultaneous Expectation Maximization (S-EM) {{cite:727eb6fef8436380008790073ea92a4b79e13a55}}: An algorithm that jointly learns the classifier and annotators’ parameters using EM algorithm. Dr. Net {{cite:b4f12b4eaa02197364978cfe3961eeea9011f6c3}}: An individual annotation based model that separately learns each annotator’s labels and their weights. Crowdlayer (CL_MW and CL_VW) {{cite:4d7deb03d262a1f967fff0f19bf7f48652cf4643}}: An algorithm that estimates ground truth first and replicates each annotator’s labels via a simple final layer. This final layer is removed at test time. The number of parameters for the last layer determines the Crowdlayer variant. We evaluated the vector of weights (VW) and matrix of weights (MW) variants. Vanilla Co-teaching* (V_Coteach) {{cite:9880a579414c9726310e99163b0b495a48ba4e90}}: The original Co-teaching algorithm trained with D&S labels. Co-teaching with uniform perturbation* (P_Coteach): The Co-teaching algorithm trained on D&S labels and synthetic samples. CrowdTeacher*: Our proposed method with the Co-teaching algorithm trained on D&S labels and sample-specific certainty-informed perturbed samples.
m
d1d215997313f919e3a1e6ce27ce6a40
Unlike the class-incremental setup {{cite:679b21faa087be41bfc1c23068b3543403522b12}}, the simple NT in ODICS enjoys an on-par performance to all considered regularization-based methods. For example, while MAS outperforms NT on earlier domains, e.g, CS, the overall performance degrades to 40.3% compared to 40.5% mIOU for NT. The most effective regularization-based method is LwF, which only outperforms NT by 1.3%. This suggests that further work is needed to develop regularization techniques for this more realistic domain-incremental setup. Meanwhile, rehearsing previously seen examples through ER consistently outperforms other baseline methods in all domains. This conclusion is consistent with previous results in image classification {{cite:c58a650a6b4df2ff3ee8aaa45e08b2e1c2c23b6c}}, {{cite:2c6e19981e49866f615a16c9d70b8e24478f10d6}}, {{cite:3558c18eee689117b4d6f042a1a67ab29fc10047}}, as storing real examples in a replay buffer provides a simple but effective regularization for continual learning.
r
367dd653156236d22e36926cb1883b07
Grad-CAM {{cite:0260897baa90c101fd6335235ddf31a03e8f99af}} is gradient-based saliency method that computes the gradients of the target output with respect to the final convolutional layer of a network. The layer activations are weighted by the average gradient for each output channel and the results are summed over all channels to produce a coarse heatmap of prediction importance for each class. Guided Grad-CAM is simply the combination of the results of Grad-CAM and guided backprop.
m
660121ba64f503aaea7f620c3db3fa69
Based on distribution-augmented contrastive learning, DisAug CLR algorithm {{cite:309b7f63a256c75acdf65c20ec2e32ef033a7629}} first learns self-supervised representations from one-class data, and then builds one-class classifiers on learned representations.
m
f4d3ae080e64d13b4aa8c68828915637
We also see that regularization approaches such as EWC already fail at the first increment. In contrast to the success that has been reported in prior literature {{cite:3095069e388d2be93217ee6915f7da8c46638155}}, {{cite:b321befac270d824c833f4eda882530d9dcce941}}, this is due to the use of a single classification head. This is intuitive because introduction of new units, as described in the main body, directly confuses the existing classification. Regularization approaches by definition are challenged in this scenario because the weights are not allowed to drift too far away from previous values. For emphasis we repeat that however this scenario is much more practical and realistic than a multi-head scenario with a separate classifier per task. While regularization approaches are largely successful in the latter setting, it is not only restricted to the closed world, but further requires an oracle at prediction stage to chose the correct classification head. In contrast, our proposed approach requires no knowledge of task labels for prediction and is robust in an open world.
r
bd505317b9e705e094b73faf5f4cfb7f
In recent years, aided by the growing wealth of BH observations, studies have worked toward constraining BH natal kicks using a variety of methods and data sets. A number of studies have focused on the population of massive runaway stars with large space velocities, considering the possibility that they may have originated in binaries that were disrupted via the core collapse of their companions, possibly with the assistance of natal kicks , {{cite:e6ee51fcc001b4e4140a454194d564d99a3c1269}}, {{cite:19b55e24cb8bd95da0e1be66475f76e63cedd344}}, {{cite:6820d7e9a875d52935889934a67c9a30f81f57c9}}, {{cite:36939a6e8fa1586fb994495ef45ff64aeefc586c}}, , {{cite:eeeba0de83208124d7e08a3a99286c20b7d45033}}.
i
fd5854649ebf719c30298fd2ecc9cc84
To predict the ratings given by users, many of the existing methods are based on Collaborative Filtering (CF), which models users and items with their historical interaction records (e.g., ratings, clicks, etc.) {{cite:4d9b342b97d7dcf0b078bb6a787c08f800d9a09f}}, {{cite:71c88b189b9018febbbc2ecda3b98a79b04354fe}}, {{cite:6dc0da0e12a4c2462cff23514d83ddc25477f23e}}. These methods are classified as rating-based prediction methods that usually utilize matrix factorization to obtain the latent feature of users and items, and then predict the users’ ratings for different items. However, the rating matrix is always of natural sparsity {{cite:4bab3d9fd55b5070601eb12c857d5e2cf74909d7}}, which would make it difficult for the rating-based models to learn the accurate latent features {{cite:d68738ada382994c2647ec30d5f46886db3f5da1}}.
i
6339e0aa72cfffaeb040474b3e69a775
On the other hand, classical optics, which tackles vast majority of physical-optics experiments and is based on Maxwell's equations, has never ceased its own evolving steps, physicists have endeavored to develop various optical transforms in light propagation through lens systems and various continuous media. The two research fields, quantum optics and classical optics, have their own physical objects and conceptions. From the point of view of mathematics, classical optics is framed in the group transform and associated representations on appropriate function space, while quantum optics deals with operators and state vectors, and their overlap seems little at first glance. It seems to us that if one wants to further relate them to each other, one needs some new theoretical method to "bridge" them. For example, what is the quantum mechanical unitary operator corresponding to the Fresnel transform in Fourier optics? Is there any so-called Fresnel operator as the image of classical generalized Fresnel transform? Since generalized Fresnel transforms are very popularly used in optical instrument design and optical propagation through lenses and various media, it is worth of studying these transforms in the context of quantum optics theory, especially based on coherent state, squeezed state {{cite:b9d6f287df2e3668961680cbf1ea6637a62c94b6}}, {{cite:7ba3b8a1274adfc3f34a7da6967e015417d39826}} and the newly invented entangled state theory {{cite:eee469ba127ec9966bfb91c718519a73ad1412ee}}, {{cite:85cf7af5a5c51d6a9aafe9ad8ec8fbe772cd2aa1}}, {{cite:1d289e1dc07dba9cdc847d0eb6f337e5b1e0ab88}}, {{cite:98ee26dffc199e43c0ffc4f5af4fa43aeec771dc}}.
i
567ff3bf7ad77f666799b04919c631d8
This paper contributes its own classification of quantum gate sets by giving a complete classification of the so-called stabilizer gates, where we allow the swapping of qubits and the use of ancillary workspace. To provide some context, stabilizer gatesThese gates are often called Clifford operations in the literature due to their alternative characterization as normalizers of the Pauli group. are a discrete set of gates generated by the CNOT gate, the Hadamard gate, and the {{formula:b742be1b-c313-42dd-b755-6a897336ea6b}} -phase gate. Stabilizer circuits are somewhat remarkable in that they may in fact be integral to our eventual development of a general-purpose quantum computer. Since quantum error correction will likely play a large role in determining when a quantum computer will be viable, there has been considerable research in building and analyzing quantum error correcting codes. The stabilizer formalism arose as powerful way of unifying the analyses of many of these codes {{cite:3f2d9faa2a3038e9a9616c355a58c63f3e8d9763}}, and as a consequence, understanding the nature of the stabilizer states has been of particular interest {{cite:7a80562c70dcf68eccb4cfda6399e88ff898d2a8}}, {{cite:62676b44abc66187f269d982eee73c42ed818dfc}}.
i
c1a507afe88273ea508dcbecf68299dc
We built the OntoGCN (ontology-directed Graph Convolutional Network) neural model where known similarities/relationships between the features (genes) direct processing in the network and help the model to avoid learning spurious correlations {{cite:e5c27b6d6c043bc5ef608eb35ac3f570b8814dfe}}. OntoGCN enforces convolutions on the genes related by similarity and thus captures localised patterns of data, similarly as convolutional neural networks capture spatial relationships of pixels in the images {{cite:5eacc65a53fba0f6d8c81ff66a17665b987e8732}}. {{figure:07c9fa22-c807-4338-9c27-87f6d0ad186a}}
m
9d6c7d3d102f9ed330403340de8c1d61
Smooth trajectories obtained by minimizing jerk or snap have been widely used to control differentially flat dynamical systems such as quadrotors {{cite:8b0d4b08ef41347c79132bbcea95acc2cb04449e}}, {{cite:a3ed7a9a823605fb23ebd4d02e7e1855e1d1e7c1}}, {{cite:40b9ed4337f80aa9448e63a5550fbef3a8790828}}. These trajectories are represented via time-parameterized polynomials, which converts the trajectory generation problem into one of finding polynomial coefficients that satisfy certain constraints. Recent work exploring time-optimal trajectory generation includes {{cite:501c7328bade630261f2a531edd0bcfd4b5adae3}}, {{cite:40e7bd94d7f6434c3f82366e75abc21cba5447ef}}. If additionally, obstacle avoidance is added as a consideration, the trajectory generation problem becomes more challenging. While mixed integer optimization techniques {{cite:b013214ff8bc6d7a609ed9273b2df0b1c2cd4df3}}, {{cite:05949eca9953dfe1633e29f7d0ade348061fb33a}} handle collisions reliably, they suffer from high computational costs. Recent work demonstrated practical application of quadratic programming {{cite:539ec66d798e82a8acef368e5809bdad6bacf076}}, {{cite:f6b1de3c6d0eed10071bbc09a6bfcd52bb70ca0c}}, {{cite:4f327f16b6b5aa6820c06623353a2ce7f99f5191}}, {{cite:25e18588285a84aa35f6e4914a24c784a73a55ec}} to derive collision-free trajectories in real-time. These methods separate the trajectory generation problem in two parts: (i) planning a collision-free geometric path and (ii) optimizing it locally to obtain a dynamically-feasible time-parametrized trajectory. In this way, one can solve for a locally optimal trajectory with respect to a given time allocation. However, the prior geometric path restricts the generated trajectory to be inside a given homology class which may not contain a globally optimal (or even feasible) trajectory (Fig. REF ). {{figure:80543592-711d-48fc-8657-9b4fcaf5de04}}
i
8f018f6553d219f8921150d980239853
with respect to a set of parameters, {{formula:dc3fde27-aa6e-48d4-8df2-5343b9793374}} , denoted compactly in the above equation as {{formula:c4d657f9-7b7e-4f19-b263-6efbb80c2883}} , is an upper bound to the true ground state energy of that system {{cite:b4bffaf7bed191f08cde4dd8753978f53f9dd938}}. {{formula:b1bf6adf-36e3-4b4a-8e12-0125f64f3ada}} and {{formula:9766092a-afc3-47e8-bae3-cb12edf20cd3}} refer to the Hamiltonian and the parametrized wave function of the many-body system, respectively. This idea lies at the core of the VQE algorithm.
m
63b6f4465d5973590a9f9788708fdf0a
which are {{formula:1572078e-d682-416e-894b-c5ea830ff0ef}} -submodules of {{formula:da29f241-a2c1-413e-a58f-e83e0e67a9be}} and {{formula:e7bc2497-1902-43a7-8f35-2a16298a525d}} respectively. So {{formula:577a5bf0-5586-4d47-b752-2914a82ff9ef}} and {{formula:dc93c5c0-662a-4fad-abdb-f0f2d121d266}} Recall that to solve the hit problem of three variables, Kameko {{cite:115ea5c6eb678cfe937f8f51e140a31dcb63e5b2}} constructed a {{formula:e158d400-1483-4ce9-ad72-5a880e1b37f8}} -modules epimorphism: {{formula:e1414294-bb9d-4224-a9a9-0a4f47b4f65d}}
r
6236be85464902e94f665c798f8ba9dd
MagFace. The method presented by Meng et al. {{cite:7b1dee02bfcf79f63723420500b014426b4495bd}}, called MagFace, generates both an embedding and a quality score for a given sample by using an extended version of the ArcFace {{cite:c2b15203c8733eec2e7d32bf6a5269662af520ae}} loss. The proposed loss is able to discriminate well between samples of different quality by pushing apart images of different quality. The embeddings generated by a model trained with the new loss can be used to automatically obtain a quality score by measuring their magnitude.
m
2b6c34b5537a448f0ab442fff40483ee
To analyze in an organized way, we sorted the results in each dataset based on the METEOR metric, which is the most used (see Figure REF ), at least in the reviewed literature. After the results were sorted, we took the top five in each dataset. Table REF shows the best five results reported in the seven most-used datasets. Here, is observed that the best ranked methods are not necessarily the most recent ones. The best result on the MSVD dataset was reached by {{cite:977c46e659a0b6448fb7b65b40179aefb425e085}} on the three compared metrics. The same for the MSR-VTT dataset, where Gao et al. {{cite:08270677a7dfbb3483fbb8e85522212044848601}} achieved the best result in all metrics. Nevertheless, this is not the case for the ActivityNet Captions dataset, where {{cite:cf2c9e1c505d410bd48e07440abf77153e768844}} obtained the best result in the METEOR and BLEU-4 metrics and the worst result on CIDERr-D. Only the METOR metric was used to report results for the M-VAD dataset because there were missing values in both BLEU-4 and CIDERr-D. The best result on the MPII dataset was achieved by {{cite:620e335afc228d400f712201da22f13b014c549e}}, for the Youtube2Text dataset the best result on the METEOR and CIDERr-D was reached by {{cite:9f36bf3a6dc402c7331cb8b237f956b5b2a5ef4b}}, but this was not the case for BLEU-4. Finally, on the Charades dataset, {{cite:007e19b5be105f823b46044f1a3a9dc4cab3f05f}} obtained the best score if we only consider the METEOR metric, but for BLEU-4 and CIDERr-D, that is not the case.
r
c80be30b91c88e0d18317b1f55dd420a
Figure REF shows the traditional single-label Fast Region-Based Convolutional Neural Network (Fast R-CNN) model (in blue) that has been extended by adding multiple labels into the model, where every label corresponds to a classifier for individual phonological parameter (handshape, orientation, location). The model uses pre-trained network as feature extractor and allows to do both object detection and classification on raw images in a single pass of an input image through the model {{cite:ed72f7f8c46edeb365afd810b8dc8923b7543e7b}}.
m
ca315a63527054bbaa40d45e75830d0e
Another possibility is that the UV theory keeps the large (but now regulated) amplitude of particle creation, and a large outgoing energy flux (which can be identified with the “firewall” {{cite:1c542748690b34e4afd92a422b7ea4c6e5a5a1d5}}, {{cite:3ee22e0cc626ddc11a1148aa993e67ddc3815b10}}) appears around the horizon.
d
1cfdc90b832631346d8c3fab26e688ad
Motivation and high level description of algorithm: We begin by describing the key insights that led to our algorithm, and provide a high-level description. First key observation is that standard ERM trained models might already learn features which are good for domain generalization {{cite:9555c1147856008c02d79442fdfba7332f8d3468}}, {{cite:1254896b875a0902e0b944e8558b555e05ff373b}}, {{cite:1bfcf41aa2ce08e219e55da450d939b1f23e19e1}}, but the final layer is not able to combine these features in a manner robust to domain shifts.
m
db60140e0cef5b7c56c4cbd82ec1903c
Specific hyperparameters of BTC are summarized in tab:parameters. The hyperparameters with the best validation performance were obtained empirically after applying in 5-fold cross validation. Adam optimizer{{cite:4e85bfef069015606632ea7e5ac3cdd9cface006}} was used with initial learning rate of {{formula:c68a0ed2-96f5-4d4f-9757-66d04a95b83c}} . Learning rate was decayed with rate 0.95 when validation accuracy did not increase. Training was stopped if the validation accuracy did not improve for over 10 epochs.
r
ace72a83b3ba12f64065954e98560ab0
We consider feed-forward neural networks with ReLU activations to examine how the network's final representation space connected to the gradient structure of the network. Our motivation is twofold: recently discovered knowledge about ReLU networks {{cite:8035f9ee66e505c9a7921a9d7e24395b6107a620}}, {{cite:11dd29223778e599b544dd9b674ec2c87bad9f84}}, {{cite:4d7c6bf2ff81d2a51ef0292a6d4d510128250f3b}} and recent results about higher order optimization methods {{cite:1f10e5b30129bcdd7161382b6fa7d18b491207b6}}. In a way, many of the existing machine learning problems can be investigated as statistical learning problems, therefore information geometry {{cite:13f06749870fb8247758767fb92dbffa9dc8b8d5}} plays an important role. It was shown in {{cite:59c3e049d2d9f21182b98acb2280bb0a2bc0df1d}} that over the parameter space of a neural network we can often determine a Riemannian manifold based on an error or loss function, moreover the tangent bundle on specific Riemannian metrics, e.g. Fisher information, has unique invariance properties {{cite:ff2038280ce570609cc1d32c55000d82deba8289}}, {{cite:0546794efdc99e4bb9fb1bb5a11247d8a1ed62ee}}.
i
7f8dfacfb5672094f0e422507c3afd1a
Research questions concerning pattern identification in environmental mixtures usually involve unsupervised statistical techniques whose solutions are obtained independently of any outcomes. Researchers apply common methods, such as principal component analysis (PCA) and factor analysis, to describe the variability in correlated chemicals in terms of underlying (i.e., latent) components. PCA is the most common dimensionality reduction tool used to identify patterns in environmental mixtures {{cite:5acac7d5f51e8118f4c9739c56b906faba92d07a}}, {{cite:7655e79cba2b84da41904cb560c35ac72d0bc724}}, {{cite:2a4d8e702bd37bbf2a7e7fdd943f19e05ea8781e}}, {{cite:1f3b5e5eb592da276feb157c44776e95cacb3488}}, {{cite:d1a9e362b290241baaec5282ec435f00be56cdf9}}, but it has several limitations. First, various selection criteria exist to choose the number of components selected as patterns, such as the first {{formula:a2763d12-d890-4710-b7c4-d0df1967adf4}} principal components that explain a certain amount of variance, all components with singular values greater than one, or the components whose variances appear to the left of an `elbow' in a scree plot {{cite:040d5132f8da7c64a99e3b625f5d74d7080252eb}}. However, there is no guarantee that these criteria will agree {{cite:98333b0db87069b3d9bb7844cb156bcdf7c2b53b}}. This leaves the burden on the researcher to determine the appropriate number of components, which is often based on implicit assumptions that are not always explicitly stated. Further, PCA has no guarantee of an interpretable solution {{cite:65e152d474c4eb0eb50ae5e0afa061de602e9f78}}. Its identified components are orthogonal by design, while patterns of environmental exposures are almost certainly not, and its solution, both chemical loadings and individual scores, may contain negative numbers, while actual chemical concentrations cannot {{cite:9a58419d5ae00001b4b3b93d7f7acc7c3150bcc6}}. Finally, as a least squares method, PCA is susceptible to outliers, which may severely influence the solution {{cite:60b028a4cc2fe32b8646f8de2610ffad3564325b}}. Researchers also regularly employ dimension reduction methods beyond PCA, such as factor analysis or non-negative matrix factorization (NMF), for pattern recognition in environmental mixtures; these techniques work a bit differently than PCA, but they have similar drawbacks or introduce new ones (e.g., non-negativity may produce identifiability problems).
i
b3cd6104fbf458ffd5bc28eb026b65ef
In comparison, AE based pretraining does not perform explicit density estimation but instead aims to reconstruct the original data from corrupted input. A notable example is BERT {{cite:6e99e6f7da33052ce1aac6e5b79e9221f80da5db}}, which has been the state-of-the-art pretraining approach. Given the input token sequence, a certain portion of tokens are replaced by a special symbol [MASK], and the model is trained to recover the original tokens from the corrupted version. Since density estimation is not part of the objective, BERT is allowed to utilize bidirectional contexts for reconstruction. As an immediate benefit, this closes the aforementioned bidirectional information gap in AR language modeling, leading to improved performance. However, the artificial symbols like [MASK] used by BERT during pretraining are absent from real data at finetuning time, resulting in a pretrain-finetune discrepancy. Moreover, since the predicted tokens are masked in the input, BERT is not able to model the joint probability using the product rule as in AR language modeling. In other words, BERT assumes the predicted tokens are independent of each other given the unmasked tokens, which is oversimplified as high-order, long-range dependency is prevalent in natural language {{cite:3288401179303266165423f066f56d23a36bb78a}}.
i
889d16d6640aa33455cea5755961381f
Let us compare MLA with ULA (i.e., MLA in the Euclidean case with {{formula:6cd38990-582a-423c-b58c-1b547f701b12}} ). Recall for ULA, mean-square analysis yields a biased convergence guarantee where the bias scales as {{formula:24287283-37ea-4cb4-aa43-2c1df4f03074}}  {{cite:9ea9b6a3d4c595bfe83eeb26b389dbaa7e1a4f60}} under an additional 3rd-order regularity condition on {{formula:4329dcd9-51b2-4e89-a4fa-c1d94df91f16}} . This leads to a mixing time bound of {{formula:832f4a74-ab46-4f60-af4d-9c322d7984ee}} for ULA. We see the bias of MLA has a worse dependence on {{formula:780c15d5-9c7c-43af-a3cf-d816a22d5249}} than the bias of ULA. This is because the continuous-time Mirror Langevin Dynamics (REF ) of MLA has a changing covariance, while the usual continuous-time Langevin Dynamics of ULA has a constant covariance; therefore, MLA incurs an additional stochastic error from the Brownian motion part, which is not incurred by ULA. Formally, this is reflected in the orders of error of the two algorithms: We show below that MLA has local weak and strong errors of orders {{formula:e3f94c8b-a14e-491e-a84d-373ec5528016}} at least and {{formula:774347db-d0a9-47fb-9a5c-adf7b9b52086}} (note the local weak order of MLA is actually {{formula:402741db-afcc-42e7-a196-c24c710c5874}} , because it is the Euler-Maruyama discretization of an SDE; the multiplicative noise causes the strong error to lose half an order, but not the weak error (see e.g., {{cite:8ac46855d005c11c67a589c3b274b1c9fff1a6a6}}); however, we will see that as long as {{formula:a9f83d96-7a57-465b-9b59-fbf9ff36aab8}} , the order of the final sampling error is determined by {{formula:9779c753-ba24-427a-93ce-5a71bdf37335}} but not {{formula:7e6a58ae-465a-4ae7-aa12-4f748a557682}} , and even though our {{formula:eb5fd2f6-3691-43dc-b32f-70b21a6c2df5}} bound is not tight in order, its constants can be made very explicit and hence helpful to later analysis). On the other hand, it is well known that ULA has local weak and strong error of orders {{formula:c8a29a60-cd24-4c87-9e75-70ca18d1273d}} and {{formula:ba868dcc-ee53-44cc-a846-2d2a0f02b56d}} because it is the Euler-Maruyama discretization of an SDE with additive noise (see {{cite:8ac46855d005c11c67a589c3b274b1c9fff1a6a6}} for the general theory and {{cite:9ea9b6a3d4c595bfe83eeb26b389dbaa7e1a4f60}} for details of worked out constants). It would be interesting to understand whether we can improve the local errors and the bias of MLA, perhaps using more sophisticated discretization of MLD to improve the stochastic error.
d
c6fc605034b05067199dbe0dc2ede978
where {{formula:f1fdd9bf-d754-4f64-87c7-506b33c8978c}} is computed by solving the linear system below to a linear tolerance of the user's choice; in this work FGMRES algorithm {{cite:7906af429cf22c341f22324d73b7fae28f18c558}}, {{cite:f1e649deedd60206c8244d87bba97c20766e97d6}} is used to solve said linear system, except for the cases where we show duality, where we use Gauss-Seidel sweeps as they are right-hand-side independent. {{formula:f2f39c3f-0c8c-4b66-82fa-82e0595163ad}}
r
ee5d7050beee71d6c4894d23f110b402
The BW model invoked to fit spectrum data in Ref. {{cite:1bc77c4c6f2fe33feee7b688d05cbd92b67a54be}} is nominally adopted from Ref. {{cite:46b8f0246631c541035a986334b62647bec0d0f7}} that introduced a BW model to describe pion spectra from 200 GeV fixed-target S-S collisions at the SPS. The relevant formula is Eq. (7) (second line) of Ref. {{cite:46b8f0246631c541035a986334b62647bec0d0f7}} {{formula:84b000a7-b38a-422e-b984-1a832536cdd1}}
m
4c6ee9aa452229f0e00031b415e9a068
which corresponds to class II Heun polynomial {{cite:10674d66d78c8b53f1a035687b40ba41794bfcaf}}.
m
82d9ffb161bda6b75f5501ced498a875
We report the performance of different algorithms in Table REF and Table REF on precision, recall, F1 score, approval and fraud rate on ECD and IEEE datasets respectively. The proposed DQNR method performs better/at par with all other models on the F1 score except XGBoost on both datasets. XGBoost outperforms all other methods on the F1 score. This is primarily because XGBoost being an "instructive" process, has access to complete data during training which allows it to learn a better representation of the data compared to a DRL agent trained in an episodic manner. These problems can be potentially be resolved by handling the distribution shift in offline reinforcement learning {{cite:599ea2d13f6e2747dfc03c9412bd5fffce27c611}}, using a better curriculum strategy {{cite:3b334b86dbd1c02e71b4b5c04dd8a29d277530a7}} or by solving for the representation learning problem {{cite:8a68d560dc864c0cc81587a1d1a7a94f72bb6062}}.
d
ffebdb86f81765dbd66806dd27b00088
In our numerical calculations, we adopt the following parameter choices. The default value of the charm quark mass is given as {{formula:2959d2f4-0dca-4d93-a083-50047aedf125}} , and the fine-structure constant is approximated as {{formula:664f216a-f5ad-4c6c-9789-92394951baff}} . The renormalization scale ({{formula:3b4cb04a-2c70-4f8e-9018-316a5c805fd9}} ) and factorization scale ({{formula:80b8f61f-726f-4dcb-86b6-c90c22e27131}} ) are set to be {{formula:ab870fd1-5562-4abe-9b0c-d53a347d7356}} , where {{formula:954c5cb9-e187-4991-985d-09f1526beba1}} is the {{formula:83c1c29e-63fb-4e6f-8006-ed1192f7897c}} mass. For the sake of gauge invariance, its value is fixed to {{formula:4b05b85c-88b4-4e21-9aba-05914ab27459}} . For HERA experiment, the energy of the electron beams is {{formula:76e4bcf9-eea6-48d0-8a83-cbf5178ecba5}} and that of the proton beams is {{formula:6a6bae88-bfe0-4200-9f9c-00d3cd776e90}} , while for the EIC, they are {{formula:c48ae037-1807-4306-b0df-0d363f801f29}} and {{formula:d643c7ba-6548-4f1e-bb0f-86f9eaafcfca}} , respectively. We employ CTEQ6L1 {{cite:8086b21e25dc636fa298e6ada4bf816bdc68a188}} as the PDF for the protons. The CS LDME is computed according to {{formula:76f06f79-ffe0-453e-b27d-185ba0ccae71}}
r
0588e2a0e6300f2a7bfd4c15def1b995
Different from those in {{cite:9e70c9ffc9f6618140ed4e05b955b9238601a847}}, {{cite:b3df05b74febb3dcec8fc1e9a20aca5db6e416dc}}, {{cite:038e838775a3c9b180cd9ad917e7eb6927435cb8}}, {{cite:20ad01a2eb13036effdf18ed0342800453e62989}}, {{cite:645b771809f004fc7aef4d9589cce7a4a0d42211}}, {{cite:206b3141cee2985ed3a4ec18b97cb0ecb9a174a4}}, the work investigated in {{cite:43e9621a43c7266c8996eb22fb8a470579c46ec7}} is the most relevant work to our study. {{cite:43e9621a43c7266c8996eb22fb8a470579c46ec7}} studied a multi-hop energy harvesting cognitive radio network that enjoys only conventional communication. They proposed an algorithm to maximize end-to-end throughput with joint time and power allocations optimization, named JOTPA. As investigated in {{cite:43e9621a43c7266c8996eb22fb8a470579c46ec7}}, we study a multi-hop network and propose the HBCT algorithm to maximize end-to-end bit delivery through time and power allocation optimization. The difference between JOTPA and HBCT algorithms is that JOTPA is an algorithm that works only in the conventional manner; however, HBCT is a hybrid algorithm that works with both conventional and backscatter manners.
r
e0a4dbec3d13d050f33864cb436b95b7
with the interparticle force (REF ), where {{formula:b750b9ed-5e9f-49c3-a7fc-5aacccad9ce3}} is the shear rate, {{formula:9ceb07bf-4559-4cc8-8b0c-a3f37c19f802}} is the unit vector parallel to the {{formula:6791965f-5a38-43f6-93bd-3896c9477370}} -direction, and {{formula:76aedaf3-a594-460a-ba1a-7081f2cdc8f4}} is the peculiar momentum {{cite:230926f2d5e4d562b8e41016cccaa787d0e2bb03}}, {{cite:a42a02d65f2909c9fdcbf002b1bd848b4a6ddf5e}}. We also adopt the periodic boundary condition in the {{formula:4b8a3b20-c6d9-41de-9641-619399bfa01d}} and {{formula:748e11f0-9242-4bae-93a6-eb50fe4d2a88}} -directions, and the Lees-Edwards boundary condition {{cite:f0bc2d258b5fd71bb4c21d0552dbc82ceb3aec07}} in the {{formula:ab414cb4-1628-4da6-aff9-275be56fd681}} -direction. In the following, we choose {{formula:2b70bb2f-3f8d-4ac2-a0e2-5554be85ead2}} , {{formula:bd3a323f-347d-45d2-b3b2-d83dbdf9eb01}} , and {{formula:97378e37-029e-40bb-b527-db3218dad8a7}} to nondimensionalize quantities to perform simulations. The dimensionless time increment of the simulation is chosen as {{formula:f307927b-2e16-4b74-9bd3-fae30544e665}} with the dimensionless shear rate {{formula:9eb697a4-26e7-4504-a8ed-e449b1fc8a73}} , which is sufficiently smaller than the collision duration and the characteristic time scale determined by the shear. In this paper, we use {{formula:f4f4592c-d9d5-4e9e-98b9-e4c6d8a504f1}} particles and we fix the packing fraction as {{formula:ad6f0555-0f9c-4676-a98c-2cbf326205aa}} , which means that the linear length of the cubic system is chosen as {{formula:57f33d72-e147-462c-a2f0-474ba1ac9ec2}} .
m
f6874d754bd5b2d09555604f45a2c29b
The second set of diagrams in Fig. REF (bottom panel) has been computed by both the RBC/UKQCD {{cite:e8f019e75657e80df24f3e575accb011a1cd8c0a}} and the BMW {{cite:2d469cc488a0c823d73fbbb211b1b6fe8626cb0d}} collaborations. They correspond to corrections to the quark-disconnected contribution, in the same electro-quenched approximation. The RBC/UKQCD collaboration has evaluated only the first diagram of Fig. REF , that is expected to be dominant.
r
fbdd89a48718b837f06c6dd278e841ea
In the present work, we have reconsidered the possibility of the detection C{{formula:ec407cf6-d7d3-491b-961e-c27969485d27}} B using a birefringence of electromagnetic waves in the relic {{formula:89e5e981-1f59-4286-b4d8-b1af57cd9db8}} gas. In an intergalactic region at present, in addition to a sea of relic neutrinos and antineutrinos, there are a sparse isotropic plasma with the electron density {{formula:8403d32a-42a0-4e1e-a36e-574f2b8453f3}} , and cosmological IGMF's with the strength {{formula:78c6d2dd-b6e8-4fd8-89f8-9a9fdba56724}} . There are also relic photons and the Extragalactic Background Light (EBL) as a visible light from stars and photons in the infrared range from the re-scattering of light on a dust in voids. In our study, we do not involve the two last ingredients with the densities {{formula:2342c25b-fe91-4602-902c-25acd7fdb48d}} and {{formula:339a379f-32d0-4d89-96a9-c30f9d4ff4ee}} . They are essential in the problem of a lower bound on IGMF {{cite:11cea8c0c63577d48fd027f42508615fc6c03a9a}}, when processes {{formula:436511cc-5870-433e-b6f0-5539961d789d}} and the inverse Compton scattering {{formula:e23f4e5d-6932-4d7d-beec-cef30d7e157b}} are taken into account. In our scenario, the electromagnetic waves propagate from a remote source through the relic neutrino sea and a plasma, where IGMF is presented as well. We have estimated above how large different competitive birefringence effects are there; cf. figure REF .
d
73c5009747a2e298d92b19091195b7b1
Such an example is not just anecdotal. Indeed, it is accepted that networks trained on high-level conceptual tasks have their initial layers related to low-level features and their deep layers related to high-level concepts {{cite:71aadb9eb00abcdf80e2cc78e6c35bc306929054}}, {{cite:5a928fd369944f490fa1f8ecc471622678330eb2}}. This view explains the success of IN on the specific task of style transfer with fixed style input, IN being then incorporated inside a generator network that only acts on the low-level features of the content input {{cite:8e209be88da4cf59cb4e9a72efff41e74e16b4bb}}, {{cite:b27a173f0eed158849fd3777603abbeed1ee20bb}}, {{cite:5b38ee20454aab8b37ecd679b84400705cca3fd3}}, {{cite:e36bfd011690dcf4add6a11fe65b4395b329ab58}}. On high-level conceptual tasks, on the other hand, this view hints at a harmful tension between IN's constraints and the requirement of instance variability to express high-level concepts in deep layers. In short, not only is the expressivity altered with IN, but the alteration of the expressivity results in the exclusion of useful network mappings.
d
3b5d2b22fb480f07b75677b6d30117af
In contrast, the area/entropy spectrum of black holes is known to be related to their quasinormal modes (QNMs) {{cite:fed408be011d59f82c62b7304a61090999216016}}, specifically, the imaginary component of QNMs in the large damping limit, i.e. asymptotic QNMs (AQNMs) {{cite:ae9234c886149398b5a328eb51ca13ab70bb4a4e}}. To put it another way, the area/entropy spectrum is obtained by the Bohr–Sommerfeld quantization of an adiabatic invariant {{cite:45d6d0d54c2bb33d9f5c6035362a3bb062df73a7}} constructed by the AQNMs, while the AQNMs of BHs can be determined by the so-called monodromy approach {{cite:ddfa17ee48472d861f2bb322fc541105558ef08d}} in which singularities and Stokes lines play crucial roles {{cite:271ce748f468049ca691a917822ea101e0804589}}.
i
ff56b707ff3c3d455a14ee07559d8b2f
We evaluate a current state-of-the-art cross-lingual specialization transfer method with minimal requirements, put forth recently by Ponti:2019emnlp.We have also evaluated other specialization transfer methods, e.g., {{cite:73107e4d0e54d06ea7a98a1a118ba647f67fe0db}}, {{cite:d8a1c64cf113afa1f9641263e6ede9b3470223c2}}, but they are consistently outperformed by the method of Ponti:2019emnlp. In a nutshell, their li-postspec method is a multi-step procedure that operates as follows. First, the knowledge about semantic similarity is extracted from WordNet in the form of triplets, that is, linguistic constraints {{formula:1837f3fa-53ff-436c-b75f-80e7518d292d}} , where {{formula:e94eacb2-eadb-4968-a8d1-1b5d1c6b2113}} and {{formula:642e6841-ddc8-4c7c-9dc6-6150e8384ef8}} are two concepts, and {{formula:ed391b2c-4a52-437c-90d4-2f7d20b2aa29}} is a relation between them obtained from WordNet (e.g., synonymy or antonymy). The goal is to “attract” synonyms closer to each other in the transformed vector space as they reflect true semantic similarity, and “repel” antonyms further apart. In the second step, the linguistic constraints are translated from English to the target language via a shared cross-lingual word vector space. To this end, following Ponti:2019emnlp we rely on cross-lingual word embeddings (CLWEs) {{cite:90ed04c80c21ec718ec552dcced079f3935fbeb3}} available online, which are based on Wiki ft vectors.https://fasttext.cc/docs/en/aligned-vectors.html; for target languages for which there are no pretrained CLWEs, we induce them following the same procedure of Joulin:2018emnlp. Following that, a constraint refinement step is applied in the target language which aims to eliminate the noise inserted during the translation process. This is done by training a relation classification tool: it is trained again on the English linguistic constraints and then used on the translated target language constraints, where the transfer is again enabled via a shared cross-lingual word vector space.We again follow Ponti:2019emnlp and use a state-of-the-art relation classifier {{cite:f00853d4961670a440766ab4e7396f350c0aacd1}}. We refer the reader to the original work for additional technical details related to the classifier design. Finally, a state-of-the-art monolingual specialization procedure from Ponti:2018emnlp injects the (now target language) linguistic constraints into the target language distributional space.
r
75dbf3303efe8b9077234275e5cb9bbb
LR {{cite:3af59a0b5d7293974c346d8c51e66ceec71346df}}: It's a widely used baseline and applies linear transformation to model the relationship of all the features. Wide&Deep {{cite:f202206813a66cfde30f4ba0299c3a2cd397007c}}: It jointly trains a linear model and a deep MLP model to the CTR prediction. Deep&Cross {{cite:7a2b1dfdf58d117e89f624d80dfddf8a40b6b125}}: DCN is proposed to handle a set of sparse and dense features, and learn cross high-order features jointly with traditional deep MLP. DeepFM {{cite:c8bac5cf7d0a020a3a67bfea52670414e804f3e2}}: It combines the explicit high-order interaction module with deep MLP module and traditional FM module, and requires no manual feature engineering. xDeepFM {{cite:b9455bf2f4d298668167a2dcad11d8b93afe7abe}}: It uses Compressed Interaction Network to enumerate and compress all feature interactions, for modeling an explicit order of interactions. DIN {{cite:3f1e7387ab0db58e659941a43ba621238133cf18}}: It's an early work exploits users' historical behaviors and uses the attention mechanism to activate user behaviors in which the user be interested in different items. DIEN {{cite:d3d9ed900cac0f21b86f29ac7ac14d590d97ac71}}: It is a recent research on CTR modelling with sequential user behavior data. It integrates GRUs with candidate-centric attention for capturing the involved interests.
m
63e2bc13954360552dedf4337513e540
Some cases require more encoder and decoder blocks. As the number of encoder and decoder blocks increases, the feature map size is reduced and the essential features are lost. Introducing the “skip connections" between encoder and decoder blocks to pass finer features to the decoder blocks. This modified FC layers architecture is called “U-Nets" {{cite:a580ca404774051fe3061ab218ed59a063c9eb61}}, {{cite:b420b939d367c71424654602c7469b44fc63d73e}}. This is shown in Fig. REF (a) with green arrows and {{formula:7fa8cc9c-8b3d-4b6f-bf3a-be37146b211e}} indicates the concatenation of features from encoder block and the output of previous decoder block. Another way of passing features from one layer to the next layer is using dense layers. Dense layers {{cite:2460d75cdc48799d9f2b94c7584fc3480f24f5f5}}, {{cite:8f234c8a3d6b4e62efc615cfc02e3d7ae6d9546f}} are used to create dense connections between all layers to improve the information (gradient) flow through the complete ML model for better parameter efficiency. Fig. REF (b) shows the dense layer connection for {{formula:5bd2cb36-d29a-4966-8508-3c9ffe8e3cbd}} layer with input feature maps of {{formula:09b5b9ab-6842-429f-a333-6b88d7e1f9f1}} (output of the previous layer) and passed through the dense layer with output feature maps of {{formula:64060cd6-aed1-452c-a447-8509ab229d3a}} ; total feature maps are the concatenation of input and output feature maps [{{formula:304f0cb6-d58a-4f14-be28-7959ed67d947}} , {{formula:76dd1025-57eb-4f46-9f6f-b49ff4dfc9f1}} ]. In the dense layer, the convolution operation is performed with a stride of 1. Fig. REF (c) shows a dense block with three dense layers of each layer that provides two feature maps as output. In the dense block, the dense layer establishes connections from the previous layer to all subsequent layers. To put it in another way, the input features of one layer is concatenated to this layer's output features, which serves as the input features to the next layer. Let the input has {{formula:020e8c61-049b-4155-8533-2a636b357d3f}} feature maps, and each layer of the outputs has {{formula:a96dd18b-ba28-4733-8600-09c63ceb0d5e}} feature maps, then the {{formula:707926fe-e25e-454d-9123-72e866463070}} layer would have input with {{formula:d55879f4-308a-45b9-8da1-9d7f8590b284}} feature maps, i.e., the number of feature maps in dense block grows linearly with the depth and {{formula:7848646f-9f45-4d01-9085-0d87574b5eb6}} is here referred to the growth rate.
m
96d53af6f087c4b851531f34d753cfe9
Previously many studies focused on measuring and characterizing quantum resources in the framework of intrinsic and standard decoherence models in different quantum systems {{cite:1b25a829fd973e24c574d90a453843a80f49ee65}}, {{cite:def3fb4462ce225a5368e78a578132182e2c3841}}, {{cite:6de593fa753c85922da4fc4f3b46691442093d01}}, {{cite:fd4538091b13217d15c2def96ec2eac13e87a0cf}}, {{cite:d9bcb0075ccf9abab239be276cd2c9a900f6b41c}}, {{cite:e6faba6f05f82636d9d016ab34f67998356aa03a}}, {{cite:751acfb82c95c76cdef3aa45ab66b9b42060e285}}, {{cite:a8a6499e365b1b69fe870b9662612939ed29e27f}}, {{cite:286303ac286e619401e4b7bedf41238fb960c747}}, {{cite:a76294fdccd4b09ccac70997d380740276125677}}, {{cite:ab8149f3e360b1a5ce299b166814a498f53aeac0}}, {{cite:bdd26695c57124a3275dabf02e3af999f903528b}}, {{cite:5cab4b1f4dfc1abbe7cbfb713dc4e3e9178b29df}}. Although many methods were introduced to measure quantum systems, assessing the quantumness of multipartite systems is still a challenging task {{cite:49806d703cef1d1bd0af84f4f7d310c9f24db338}}, {{cite:912ccf2ed5a98239865656d1d9a62f170d88de52}}. For specific quantum systems, quantum discord (QD) {{cite:797c4840c35d1ccaa19876b44364c3f856612492}} was the first quantifier introduced to capture nonclassical correlations. QD measures the difference between total correlations and classical correlations in a quantum system. However, it is a strenuous measure to calculate, and analytical expressions of QD were only obtained for two-qubit states {{cite:ff84003bb8cd492ba6183149ed12016b35677f83}}, {{cite:7c7e6daa3f6122e30ca2aac109c7dff7b2dce102}} and quantum X-states ( see for instance the works {{cite:1870fb0dc58f2fce5db5fdcc93b6aa2f591b870d}}, {{cite:5643b968d3110f11e34469d109c288fe84223a35}}, {{cite:ac9b793ac790a25d51ac760097e05b787047ba0f}}, {{cite:6d6893ea5a5daf499e4544313777ff56c62298b1}} and references therein). Quantum correlations have been studied in many quantum systems, such as Heisenberg spin chain models (for instance, spin-1 Heisenberg chains {{cite:e2aeafff3094cb56f31d45edec9e91334916b8d0}}, anisotropic spin-1/2 XY chain in transverse magnetic field {{cite:72bbf9e9560179f332627822b01f689cc3e7725d}}) and quantum dot systems such as two coupled double quantum dots systems {{cite:d1c830583df39a97fbe6a63bbef487ee59422b1d}}, {{cite:77143cf418499f2bfc3b84a17607027312d59a13}} and double quantum dot system with single electron under Rashba interaction {{cite:020cf8d9bdb1369d89ba43ce95e338d55b4578a9}} ). Recently, considerable attention has been dedicated to studying the influence of the Dzyaloshinsky-Moriya (DM) interaction and the KSEA (Kaplan, Shekhtman, Entin-Wohlman, and Aharony) interaction on the quantum features of specific quantum systems {{cite:51614b9b9a4a2c11579c806422be13266cd31f04}}, {{cite:61b2eb5e42233b2b74f782f3852d3f96ce9c41ab}}, {{cite:1b26549dfdc70350ef0399695ec92090e30e3399}}, {{cite:b2f0ba2f2d1d8bc0672531d574c820a81fc14b86}}.
i
f6ea550ce15b09c40b72c0eb7c648f0a
Solid-state materials, in either bulk or thin-film forms, inevitably contain disorder of various kinds {{cite:adce84c7114f5c18d535ed23c76fe607e2acba33}}. At finite temperatures, the otherwise perfect crystal lattice is constantly distorted by the thermal motions of the atoms. Besides this normal “thermal disorder”, which can be well understood in the framework of statistical thermodynamics, there are a wide spectrum of “frozen-in disorder”, including point defects, dislocations, grain boundaries and surfaces, to name but a few. The physical properties of the materials, such as electronic band structures, density of states, electrical and thermal transport properties, heat capacity, magnetic and optical properties, etc., strongly depend on the concentration, spatial distribution and interactions among the defects, which have been the subjects of intensive studies in condensed matter and solid-state physics {{cite:54bd87997bdbef233af01cb5956b7d1e3c3ceca1}}, {{cite:e405224f719dec74d1da7300642d769c3996b12b}}, {{cite:a1e39d72d890ad3e82b9d481d623c49376abb97d}}, {{cite:1c1ed3d9ced55d0439d802bde21361b180d035d6}}, {{cite:0c04ba7c3684d5c6dc5505e06af647be8e18b30b}}. Traditionally, it was believed that defects may have negative impacts on material properties, while recently it has been shown that novel desirable material performance can be achieved by proper defect engineering {{cite:c64e1dba7566469c1e4e96ab705aebf3e9848a43}}, {{cite:b56d266c9093fd46a2f3398665359236449e7254}}, {{cite:2276eacf0e5acfa5272e5207a5c00cb9355822de}}. For example, it has been demonstrated that quantum emitters can be realized by introducing defects into two-dimensional (2D) hexagonal boron nitride ({{formula:99d025e7-ee8d-4cda-a13c-a4847f012691}} -BN) {{cite:45df5935e5ba8607b5191b4e01494dd06b90fa54}}.
i
764f5421ab9f3be8497ab47d1478b583
We should also comment on the observable effect of the electron EDM in experiments. The EDM of the electron is usually measured through the paramagnetic atomic or molecular systems, since the relativistic effect enhances its effect {{cite:4d99a9bf41dbf57684100794a315ba799a370164}}, {{cite:236f99496aacabc5c1e60264ef797059a06e9cbf}}, {{cite:0d5bd26d497ff63899e81be425407afcf9189882}}, {{cite:61dec31b3824874d14fb4d312506fff27f37f1a0}}, {{cite:14c0750aa569fec16a391fa92aa1f066f9f2e6f3}}, {{cite:c14c18c848d9b0d94792f866fc606136a71921d6}}, {{cite:d0db2f7b4ad993598ae5d785a8ba15e867c32780}}, {{cite:5f53c25dd1f7b60fb86efd35e1eadff5a28c71ae}}, {{cite:d4024e7abc9a0bcb8226b6453585f23ddbf3d130}}, {{cite:0e8656c0b00818e287791ef67a84194d199d46be}}, {{cite:80a4ad58f72dd39294ba3eaf9b240b5646fbef33}}, {{cite:264bcdee2f2beb00bab7a603ca6d5820adffff1b}}, {{cite:4d054c9de610436bb9cb3d4c82a7383a45cb13c3}}, {{cite:ba5516a29978da7efb2406e5bc3626f3fc24fc8c}}, {{cite:19ef45c09171c41e6913d0efef2ea525270b88a4}}, {{cite:c21ce8be3e65cd0042d4f0e1c46644a46507c60d}}, {{cite:0fd0c204ff038c56194d9a918404918d7a9922cc}}, {{cite:0ef48c06182725cc406b2a3a8a274c6d0f4346e6}}, {{cite:e23c41d82c3f1bda3b5b3c914ad133871629c26f}}, {{cite:8d63c0e03ae72ed81ede27e60228159db5ee294b}}, {{cite:efd47e16ce30b9d850a2ce69d9e5876088cfbfb9}}, {{cite:3432c9008545a076942c09b13bc833f2d1228190}}, {{cite:288eac562a4bcaaf8318e5d7bb4da83df5f2ea4c}}, {{cite:32857f02ea8fc31bef674506b60dc2ccfad3d5c6}}, {{cite:7cde7a46a0d48cf08959f2063d6ec04ecf3fc577}}, {{cite:1c675fe668f7932542428971df76d6e5a2a6824d}}, {{cite:fa9837afd657623f8022c93c1ea6d487fc21abfa}}, {{cite:a32de01d8569ef7178d11e7f5985fb0af8895c25}}, {{cite:802a8658e2b3ea7234cba32a8e0ee0382c8d65b7}}, {{cite:ba38aab564fd03bc77c1ab82384ad9895e38419d}}, {{cite:f1926d48ee4d878a76be6464c21701813af9c503}}. However, these systems also receive contribution from other CP violating mechanisms such as the CP-odd electron-nucleon interaction or the nuclear Schiff moment. Previously, the EDM of charged lepton was believed to be extremely small and the CP-odd electron-nucleon interaction was thought to be the dominant effect, with a benchmark value equivalent to the electron EDM as {{formula:eea3dc8a-ca9c-4975-a230-db030e23d40c}} cm for paramagnetic systems {{cite:ec61be0a5df607dfad479d8a9141437c22177a37}}, {{cite:58c5039583687759bbc21b4af51feab4dc177d08}}, {{cite:2354a49e056d2c8bf9bd44f7fbdba8b9ed35f287}}, {{cite:0406f04ac0752f8d12977c6377ae5f3de5b56d0f}} . By considering the strong enhancement at the hadronic level, we just obtained a value of the electron EDM which lies in this range. It is then an interesting question to quantify which one, between the electron EDM and the CP-odd electron-nucleon interaction, gives the leading contribution at the atomic level.
r
5d55acc0cd06c64aae02272f77ca558e
First, Definition REF and Problem REF rigorously deal with a potential ambiguity of {{formula:6fdd7c11-5146-4b36-992e-477436889242}} -nearest neighbors at equal distances, which wasn't discussed in the past work. The main new data structure of a compressed cover tree in Definition REF substantially simplifies the navigating nets {{cite:40953bfc31a257b1ea5e0a7fa7666de83645ce96}} and original cover trees {{cite:95f841bf42d674a2ace99c8b2574e198e4e37b27}} by avoiding any repetitions of given data points. This compression has substantially clarified the construction and search Algorithms REF and REF .
d
9987a9da18397756bce0975b6cf40ba0
By almost-cocompact group, we mean a group which acts cocompactly on the {{formula:2e4edc01-fc2e-44f8-8843-550b0b281561}} -thick subsets of {{formula:672ad54b-cc4d-41b7-9d3d-b06a43bfab54}} . The same notion was introduce in {{cite:b63ad55d3b424cfd683197c57b0ae9e76931eb46}}, §8.4 with the notation InjRad{{formula:9f1169ab-192c-4bc6-bf05-e267386739a4}} . In particular, cocompact groups or groups acting with quotient of finite measure are almost cocompact (see Section §2 for the definition and the natural measure of CAT(0)-spaces under consideration).
i
965f6993203b64e6e9294d3fe48c4b43
The results for the SYNTHIA-to-Cityscapes benchmark are reported in Table REF . Following the protocol evaluation protocol of previous studies {{cite:444ab85e9a7a80cbf58cf6a6e4d8ad8e4bb91c77}}, {{cite:e3f627787bf839e01d2d545c1b888db395a6e070}}, {{cite:e43c17450d7b09cdd8ffc1e5a49499b7bb24c6a1}} we report the mIoU of our method on 13 and 16 classes. We observe that our methods outperforms previous state-of-the art methods by a large margin. We note here that the domain gap between SYNTHIA and Cityscapes is much larger compared to the domain gap between GTA and Cityscapes. We attribute the substantial improvements obtained by our method to the stochasticity in the translation which allows us to better capture the range of scenes encountered in the two domains and to generate sharp samples even in cases where there is a large domain gap between the two domains.
r
045d30b261841a510d51ed323c7e502d
(i) The functional {{formula:d4033163-d4a6-4607-b43c-59668a7908d0}} is lower semi-continuous if {{formula:800a0005-e603-487d-9117-85d13a3ff6b6}} , {{formula:1dfd4ce8-e52d-4f57-9597-49f0e515d837}} , {{formula:c1baa365-895b-474e-aad1-cece781ae755}} is lower semi-continuous, subadditive and {{formula:97e2c79e-4da1-42fc-a630-2dfa1e7615e9}} ; see {{cite:cfc7d382202eef524fe88f7600158496079510cd}}. This includes our entropy {{formula:3182692c-ac57-495b-a7ab-24eb7ee55514}} . There is evidence, however, that crystallization does not hold for all entropies in this class, or at least that optimal configurations consist of particles of different sizes; see {{cite:6363da6bd7b701caf8f4f3f98ca9fcf0f1893c34}}. In this paper we have found a subclass for which crystallization holds. It is an open problem to find the largest class of such entropies. (ii) Functionals of the form {{formula:3a627db2-fe54-46b1-9a85-5e60eb1c2626}} arise in models of economic planning; see {{cite:fc8655bd0201901814f304711a13de0ea9c2aa6c}}. For example, consider the problem of the optimal location of warehouses in a county {{formula:16045c32-d505-456d-b0af-333c8ac56ec2}} with population density {{formula:1e714fd3-8a29-4490-92e0-facaf69fdd78}} . The measure {{formula:543f66e5-1760-427e-8b9b-7fd171baae15}} represents the locations {{formula:0320870b-a1db-475a-b755-9b41e06907b6}} and sizes {{formula:08ec563c-3416-412e-a9ae-9ca5d3b68c19}} of the warehouses. The Wasserstein term in the functional above penalizes the average distance between the population and the warehouses, and the entropy term penalizes the building or running costs of the warehouses. The subadditive nature of the entropy {{formula:260ad48b-6718-4a7f-b79a-ed07565b4381}} corresponds to an economy of scale, where it is cheaper to build one warehouses of size {{formula:64bc5191-5225-4333-bdfb-9ff9e9f93223}} than two of size {{formula:39e165ba-3ff1-4202-9a77-e3eac8056ef6}} . (iii) The special case {{formula:a4325914-c550-47f4-8803-a393dfda9b02}} arises in a simplified model of a two-phase fluid, namely a diblock copolymer melt, in two dimensions; see {{cite:ef8b614abcb63f82f74d99657ece759c8076766a}}. Here the entropy {{formula:bf5d6ea4-c216-4f45-a960-298a77a02402}} corresponds to the interfacial length between a droplet of one phase of area {{formula:f349ac25-cbbf-45fa-af94-263c1395b514}} and the surrounding, dominant phase. (iv) Finally, from a mathematical perspective, we were inspired to study the entropy {{formula:839eeb69-59e0-40cc-8548-fd5cd788617b}} by the conjecture of Bouchitté et al. {{cite:6363da6bd7b701caf8f4f3f98ca9fcf0f1893c34}}.
r
64348e3a13afd34cb7f8954a0525b089
The simulation analysis shows that working with daily reported infections leads to better Effective Sample Sizes using a smaller number of particles, as the data spikes are reduced. According to Cori et al. {{cite:a6f4e48159148db6c9653e0f92f53e86361b48e8}}, the estimates of the instantaneous reproduction number are expected to be affected by the selection of the time window size. Large sizes result in more smoothing and reductions in statistical noise, whereas small sizes result in faster detection of transmission changes and more statistical noise. They suggest an appropriate way of choosing the time window size. We have selected a weekly time window to analyse the real data in line with Cori et al. {{cite:a6f4e48159148db6c9653e0f92f53e86361b48e8}}.
d
674d554d60922cb5ecf8c7552889c793
Theorem 1 ({{cite:7ab874a4ce99a7a958a1fd3e82f83ed5f663ae80}}, Theorem 4.3.2) Let {{formula:54451a80-2b5e-473d-8d0e-d77bdb27cb5a}} be a real symmetric ({{formula:08208cf6-660c-45d7-9705-e31657bc5134}} ) or complex Hermitian ({{formula:7f54faae-22d6-4582-b1e2-d7799400a99c}} ) {{formula:7bd6ee49-fe42-4f9b-b5e6-806f1edad17b}} deterministic matrix and let {{formula:76281d8e-630b-4658-823b-b97dbf62091e}} . Let {{formula:3d39473c-3bbc-4052-a03f-61cdacd8bb0a}} be the ordered eigenvalue processes of {{formula:9b0a97c2-89dd-4517-9e8f-1031becefcc2}} . Denote the first collision time of the eigenvalue processes by {{formula:1c4682fa-5ecb-4365-bd9d-ade7ceb1fa59}}
r
707c40d9e58769ac9024e956ab131052
Motivation. The underlying principle behind our proposed method is that semantically similar images of different classes are often embedded close in Euclidean space and are consequently misclassified by k-means. However, soft labels may provide more information about semantically similar classes, and thus by using self-distillation, the model can be provided with further information to distinguish those inputs. Moreover, self-distillation can partially substitute the regularizing effect of data augmentation and help find flat minima, which in turn improves generalization {{cite:e9bb239d6ef2c368e0b834228765b3a6b01c5d5f}}.
m
b64be8f69c6d474701befbdbe120516c
While the line of argument above is –in hindsight– very natural, the conclusion is broadly useful. For instance, {{cite:8a544c939efc00f80f39c4861488d22605b332b6}} study a class of of message passing algorithms inspired to replica symmetry breaking and survey propagation {{cite:95b581ff8c2e957ceff39c4e52bba6db2298bdbe}}, and observe that they do not perform better than Bayes AMP. These algorithms are within the scope of our Theorem REF , which implies that indeed they cannot outperform Bayes AMP, for any constant number of iterations.
d
8a1c706fe3f143f0e99795bd08ffab1b
Additionally, in Fig. REF (c), we put the real image {{formula:8cdf9bbe-4387-4c4a-bad7-5b4b6fdbe2ab}} with the corresponding domain label {{formula:ed0f887f-d684-4222-ba5c-56d1ee3908f0}} into the style and content encoders to get the content code {{formula:177a7a88-15f9-4222-bee8-f6ccc7d81614}} and style code {{formula:6e3a8969-983f-4858-ae1f-c47b585d000b}} , respectively. Then, the codes {{formula:a8cfda36-35f1-4304-ad7b-410e37b1ea75}} and {{formula:3fd2f217-0422-464e-89d5-8de1786fc2e1}} are used in DAT and AdaIN layers, respectively, of the pre-trained generator to obtain the reconstructed image {{formula:e7a7e423-c46c-459e-8df5-0158d0ff4045}} . The goal of this step is to make {{formula:90b6fd12-c7ac-4729-885c-55dd8a6f46b1}} as close as possible to {{formula:7e131eaa-6df3-4b10-b5e8-af9d883f747e}} . For realistic reconstruction, we reduce the distance between {{formula:c65a3419-81fd-474f-93fd-8d1e035e5c1a}} and {{formula:5d07e175-88b7-4714-bd63-c2f215fe3f01}} by using a MSE loss, a LPIPS {{cite:c692f24292668c43a2eb31d3fc336a3a50b4218f}} loss that reduces the perceptual distance, and an adversarial loss using a new discriminator {{formula:2057923b-1190-4ab7-b593-1d96ed150779}} which also has a multi-head structure. For adversarial loss, we used the same loss function as StyleGAN {{cite:169e60c828d9e3d6dec27e1a59e72d99bd64d4e8}}, which is composed of the non-saturating Softplus, {{formula:4880e11e-e370-4581-a77c-c8cbb83db487}} , with {{formula:a5027a75-a572-485c-a5e9-4bf17c226f32}} regularization.
m
59aa6ad069633aa525b6d35663e74cf5
Recent works on offline RL, however, mostly assume that the environment is modeled as a Markov decision process (MDP), and standard offline RL algorithms focus on reward-maximization only {{cite:2ef0538c39e54c1586c7668bdacb9bea3655f474}}, {{cite:8a8f33df3f50bb00fbe733dd18372f5ace21a052}}, {{cite:84cea38bcc265f473d3bd0a483d63026cd130216}}, {{cite:913f08c9f422a08dfb326cfbc161ba235bd84bd4}}, {{cite:0ae22aa79065fc1b4aef296f6e5d6bc291f4fd47}}, {{cite:b10cfa724610532fa7541a92604ba291e2256373}}. In contrast, in real-world domains it is common that the behavior of the agent is subject to additional constraints beyond the reward. Consider, for example, autonomous driving or an industrial robot in a factory. Some behaviors may damage the agent itself or its surroundings, and safety constraints should thus be considered as part of the objective of a suitable reinforcement learning system. One of the ways to mathematically characterize a constrained RL problem is through the formalism of constrained Markov Decision Processes (CMDP) {{cite:e0d4ab8d435e15925c740807ad1098568c876258}}. In CMDPs taking an action incurs a cost as well as a reward, and the goal is to maximize the expected long-term reward while satisfying a bound on the expected long-term cost. In this work, we aim to solve the constrained decision making problem in the offline RL setting, to enable deployment in various safety-critical domains where direct learning interactions are infeasible.
i
f4c1117ee6f7d0f472b2bf76f2f165bf
While {{cite:2fa2d2d1e3f7de84503d01ef5543f27429777812}}, {{cite:006a1158e0bfeb189a900ba493b41f4107cc5967}} use standard architectures (e.g.  YOLO {{cite:6bd0c8eb87c4171358bd4065512bc29ebcf05dd6}} or ResNets {{cite:c7ab50ce7e6e9be4c921d5ba15087414499c7cb6}}), they require a large amount of engineering complexity in manipulating the MPEG codes into an appropriate input modality ({{cite:2fa2d2d1e3f7de84503d01ef5543f27429777812}} partially decode the MPEG codes in order to perform pixel-level predictions), whereas we directly use our compressed codes with no further data engineering. Moreover, {{cite:006a1158e0bfeb189a900ba493b41f4107cc5967}} applies different models to different compressed representations yielding a codec-specific architecture.
m
2f27e707320a83b3e0bebd90446be072
- {{formula:6cd9f975-cbf0-478d-9933-b32befc0cd3f}} to compute {{formula:9456e187-4553-4815-9b1f-00a72da87085}} and {{formula:63a3830e-cdff-40c8-a02c-0a152c46e89f}} to compute {{formula:cc7ff564-104e-4503-8995-70d4b24eef6b}} and {{formula:0667a17d-f913-4933-bff4-26376aa8a70e}} to compute {{formula:e5e68fdd-fa7c-4726-a896-e503b53c9ffc}} , in the initialization step, totalling {{formula:223a7089-ec40-4cd4-acc3-e6592facf97c}} ; - {{formula:87bbbc5e-0cb1-4a4a-866e-f8928e45901e}} to compute {{formula:956cb821-a58c-41d8-8662-a10a662472f9}} and {{formula:961fe3a8-b9d6-4c38-87f9-c38a0c52d98b}} to compute {{formula:0bdd2f6f-19ff-493a-9b83-592b6ba9aa80}} , totalling {{formula:299c5136-8d77-400a-a75a-4d3b368f8303}} ; - {{formula:a6468493-8b0e-4271-bbce-8112807dba1e}} to compute the spectral decomposition of the matrix {{formula:3ca679f1-e43f-4abb-afc6-8998edf2ec8f}} leveraging the Francis' algorithm {{cite:6ea20b60ba76d5300ea80f8eade3a79d0eba75ac}}, {{cite:f939badced0ae8464099a42f1c1283f76e29b05f}}, {{cite:a2cfa48945df2aef9148d5687c5a4f1c6a7fcb79}}; - {{formula:9b4f5d1e-a0a9-4932-a228-d9c771b67868}} to compute {{formula:e3a31358-20ce-4a42-8ab7-33b4e6bdf39e}} ; - {{formula:42ffa996-3992-4ccd-884f-31cd7d94937a}} to compute {{formula:61bfd57a-8582-4152-83bb-aa0148617c30}} , totalling {{formula:b5816c5b-7ce8-4ef8-ac80-563c4c76e697}} ; - {{formula:14a6889f-891e-4760-8207-78f0d3a17482}} to compute {{formula:13517815-b425-42e1-bf72-2bc2c205cf07}} , totalling {{formula:3feccfe2-bc06-4135-a005-ba09aaea3867}} ,
r
cf1f4111c62c4d80df35ba854a8979e4
The resulting composite dataset enables training unified semantic segmentation models that come to a step closer to delivering on Papert's vision. MSeg training yields models that exhibit much better generalization to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a proxy for a model's expected performance in the “real world” {{cite:2d5b15f78a5c27992a52c74d3d36027611781b03}}. We train models on MSeg and test on datasets that are disjoint from MSeg. In this mode, MSeg training is substantially more robust than training on individual datasets or training on multiple datasets without the reported taxonomic reconciliation.
i
7bed31e741cf0fd0373ada036c5fee56
Existing domain adaptation methods usually assume that the source and target domains share an identical class space {{cite:4ccbf1de6e5f312210a262108a69d0416f5085e2}}, {{cite:b70a45373ad78136d98745910b29b946e35f225d}}, {{cite:1c45c901faa8a17f90cbf1d4e7aaf7f030311a03}}. The assumption is easily violated in practical applications since it is unrealistic to verify the identical class space assumption with unlabeled target data. With the emergence of deep models trained on large-scale datasets, transferring knowledge from large datasets to solve downstream tasks has become more realistic and useful. Motivated by this, we relax the identical class space assumption to that the source class space subsumes the target class space. For example, the source domain can be a large-scale dataset, e.g., ImageNet-1K {{cite:fd86cc7850dd3ddfd36087b16022a139efbe50b8}} and OpenImage {{cite:5f9014edbf49b1d5c19b2170bc3fa34febe6ea16}} with comprehensive classes, while the target domain can be a small-scale dataset with specific classes. Domain adaptation under this relaxed assumption is coined Partial Domain Adaptation (PDA).
i
f602eb82d99e5facdf2c02232a00c5b7
The current implementation of CBL is order-dependent, insomuch as estimated subgraphs for the same dataset may vary if columns are reordered. This can be addressed using methods previously devised for constraint-based causal discovery {{cite:883813a68588ce1d9fa62addfa0024b78588cb47}}.
d
b423a3b8dd419d4571793272efb5f6ed
Denoting by {{formula:da219d44-255b-45aa-83f0-c35738f58404}} the set of finite measures on {{formula:d47a5a49-c14c-427e-a58f-a63b86b8c436}} , the process {{formula:7a887d3f-f7d5-49f2-bcd7-dcff2c14a5fd}} is a càdlàg process of {{formula:6802ec04-fda7-4fc5-b83d-1402e4f3187b}} which is the historical particle system, following the terminology and concept introduced by Dawson Perkins {{cite:f7a875bab1dd9693f17f11ad42dc6f82ea46d5a5}}, {{cite:b81c394a2feaf68f87148842c590ec9ef80dc474}}, Dynkin {{cite:007c66335bbfe312c8ffd54f59f7a24707fb7149}} (see also {{cite:42e403c84210e29e033f592f2ab54d87673d8438}}, {{cite:ee2c97fe31233726ac33fc93933118bf02410348}}). The spaces {{formula:b76e6910-0ece-419e-bb70-908ad579e78f}} and {{formula:5a429e84-ba90-41d6-ac7a-2088f7f6fb86}} are equipped with the Skorokhod topology and {{formula:20c03755-8518-473f-8025-e6d1bde0941d}} is equipped with the topology of weak convergence (see e.g. {{cite:b5db02cffcf9f32d1832e359587950883fb12835}}). Méléard and Tran {{cite:2e029ab3575f37099cd3445a9b5fce99ff21974d}} and Kliem {{cite:656d4d8c28ad39faaaf2a82373005d17d5582256}} have studied limits of this process under a diffusive scaling when {{formula:30415cf6-8d05-4833-a88f-b0cd13d20aa3}} . In a recent work {{cite:d9ad4d657415ae9cab7108a43dfbd06563d6ea0e}}, we have studied a similar historical process in large population, without rescaling of time and with particles undergoing Brownian motion. We obtained the distribution, backward in time, of a typical ancestral lineage (the lineage of an individual {{formula:d1988a18-c291-49e8-887e-a6b36fcd8772}} sampled uniformly among the population living at time {{formula:3cae10f5-21e3-4e2f-a0df-b0a65a6c7998}} ), using extensively explicit computation based on Brownian properties. In the present paper, we extend these results to the case where the motion is a drifted jump process with generator: {{formula:0d9a9a10-6743-47f0-b4d7-d1e3e76abfd9}}
i
b28ef79060461e7730ba0f8e515d9bfd
In the second step above we essentially apply the curvature estimate in Theorem REF which holds for any {{formula:5e71d318-d3a8-4509-9078-5137b858df9a}} and any sequence of {{formula:a0da409d-5c54-4174-88d9-a9afe7e4e450}} {{formula:c326042a-ce6d-45ff-ace1-ac77f3ab82ec}} -perimeter minimizers with good controls upon its PMC functions as long as their limit is regular. This generalizes the result of Simon {{cite:8fe575062834e0de2c44171bf3a1ff59d964dee8}} when {{formula:b5286075-a80c-4420-b2ff-a3dbebf73479}} . Another way to obtain a similar curvature estimate is those of stable-like hypersurfaces from Schoen-Simon {{cite:9aeab833e5abfc1dbd3d720bac283bd2fc92a89f}} pointed by Eichmair {{cite:8949d73c3c477579bf6046ae160605f3c195c94f}}. Our different approaches to this curvature estimate shall have independent interest.
i
f479ffecbf605506799670e9b932e04f
2-c) Continual Learning with Elastic Weight Consolidation (EWC): However, naively optimizing a (Fourier based) Contrastive Loss {{formula:d47ea191-9606-4e53-8872-899033668312}} as above, although bridges the domain gap, leads to a catastrophic forgetting of the information learned during the burn-in stage. Thus, we add a penalty term to {{formula:e92f526f-f309-487d-adc4-ff22b2b2d307}} that prohibits significant updates to the parameters important for the Task 1 (object detection/ burn-in), while learning Task 2 (Fourier Contrastive Learning in Stage 2). This penalty is based on Elastic Weight Consolidation (EWC) {{cite:8ab195444cbf484fc4019e79d939e5bc4d79ec09}} enforced on the parameters of the query model. Using this penalty helps in continually learning domain invariant features while preserving the information learned apriori during burn-in.
m
3aad283619053857fc3dcdd79e0a7e1c
The following lemma can be found in {{cite:053f0db500e6f6d4984554b24949aad5952cc262}}.
r
7b14a41df1207eba7d4271c3b1c110a1
Over the past decade dark matter (DM) direct detection experiments have improved their sensitivity by an order of magnitude every two years and this trend is expected to continue for the near future {{cite:523c338bcd81ca2e1bd89873588ab29e9c4229b1}}, {{cite:e15bf4a61f8e729521f16d77372c0841128c91d7}}, {{cite:d0941d25d5fe694498b32ef9bfaa790f8948aee2}}, {{cite:329ac7db7c7fe694456834c9ac0063e970304027}}, {{cite:3d388d88c163bef04758ece58b3bf6980668b4b8}}, {{cite:40f526a2fb332d8d2996fd5b61c70e314279ae1b}}, {{cite:4209c6876488713f6448538d56eba7027e0d1512}}, {{cite:7bc211f8f216c286053964e65a5256dce56ef003}}, {{cite:77c84d53082cce62ed54fcd4ebacf24902d63f50}}. While there is at present no clear evidence for the interactions of DM particles in nuclear recoil detectors {{cite:2addaadc400845781c629a65ac05cda415d40c32}}, {{cite:de393bda5cace968e3befe7b6286236a0d44b44f}}, {{cite:1f2ebf89d7e837968eb13edf277031e0a2aaf7b2}}, {{cite:58b31b7929f42e5672b58448cd371aaaca0ac5ce}}, {{cite:860abfcd605560986de268fed59209ed2395f140}}, it is perfectly conceivable (and in fact predicted by many models for DM) that hundreds of events will be observed by 2020. Once a signal is seen in one or several direct detection experiments, the challenge will be to identify those models of DM that allow for a good fit of the experimental data and to determine the preferred values of the underlying parameters, such as the mass of the DM particle.
i
0444d52cab42c5d2fe444794183e3bbf
In the future, we can try other methods of approximating the partition function such as generalized belief propagation {{cite:5cc007a9f6e9841264390c5ca95b8715821996c4}}, which takes advantage of higher order Kikuchi approximations of free energy. Unfortunately the structure of our graphical model causes higher order interactions to become expensive quickly, since each variable has exactly {{formula:6266a6b0-ae45-4caa-8958-e3999462479e}} neighbors. Similarly, the bounds on the partition function in {{cite:e9be4ed745e64feeb1ac1c490ab9346fdf038e10}} are based on spanning subtrees in the graph, and again the fully connected bipartite structure makes it difficult to capture the true behavior of the distribution with trees.
d
373f3d768c143be83b20cf963f1e8a3f
Nyström Approximation: These methods approximate the {{formula:b2e4b223-496c-461b-960e-021f34839ff4}} covariance matrix {{formula:97452b3c-41dc-42b8-84ca-7092e1cdf888}} by an {{formula:85ec481f-cfda-42dc-b856-b43b142fb2a3}} matrix {{formula:5c725792-63bc-45fe-82a9-c7bc8342361f}} , where {{formula:0cfdf5ee-897e-4da4-a0cc-85edc89afe05}} are called the inducing points. Similar to random Fourier features, Nyström approximations reduce the time complexity and space complexity of GP regression to {{formula:f8fd4ecb-7bab-481b-8f9d-f54555f1676d}} and {{formula:29c669b6-432a-4f04-b6f9-33a2f3e7178f}} , respectively. There are several approaches to choose the inducing points. {{cite:3dc01e1e07a16c02548cdc091dc387ed69750ae1}}, {{cite:b3c8f9e805037ed8948021f6a1ee63884f2a3a1b}} selected {{formula:41ff9960-eb63-496e-9308-80bab0bd05ff}} from data points {{formula:da32920d-4688-4738-b961-1ec83a152e9c}} by an orthogonalization procedure. {{cite:bdc2029f47595de25f4f4379cd2e5d0aee0d0598}}, {{cite:c7b546708c829b8c814ef5381802373f9ed5bc97}} treated {{formula:aab0e0e8-747a-492f-88f5-35c5252b6e4f}} as hidden variables and select these inducing points via variational Bayesian inference. {{cite:0f1b9ff98c9253e21079110fafe01ca830ab1615}}, {{cite:43987333265ac91c8872e082c04335bf6f6d2687}} further developed the Nyström approximation to construct more precise kernel approximations with multi-resolution structures. For Matérn-{{formula:a6543d5c-8589-4bc5-a5ca-dc373ce0fc9a}} kernel with {{formula:c44cef88-b160-4f90-abd6-d6e125626ab9}} , it is shown in {{cite:d733fcf7fd478bb3a0dee526cb7539ede18e8b4f}} that the accuracy level of any inducing points method is {{formula:f8c2e94a-619e-4c86-a9cf-65c0c68ac2a2}} , which is higher than that of the random Fourier features. It was also shown in {{cite:88ed73885c72d557221b126aa3e5a1816bde75fd}} that GP regression with Matérn-{{formula:8ce5b11d-c92c-4fe9-baac-4b60aeba7663}} kernel converges to the underlying true GP at the rate {{formula:949e8169-c2c9-47e7-8a14-336b0d19b194}} and, hence, the number of inducing points should satisfy {{formula:43a24db7-18fc-409d-85bb-156de3d85b7c}} to achieve the optimal order of approximation accuracy. In this case, the time and space complexity of Nyström approximations are {{formula:edfffe03-a744-478f-b0e8-495bae2f872c}} and {{formula:0b0c8705-044c-4c2e-8e44-d5c9d6f37c58}} , respectively. These are higher than that of the proposed algorithm, not to mention that the latter provides the exact solutions.
m
4957d4b6adee1b54789c84b81ed97a8d
One of the potential issues inherent in our approach is when each video is first uploaded and becomes available on the system without any prior playback usage data. The “cold-start" issue is typical of many usage-based machine learning algorithms such as recommender systems ({{cite:5a5ee02e5708537137a25f60cf093c2d4b15ba04}} {{cite:8cf0676a6db8502d251da6e6d01cfb73d6482ded}}) where the effectiveness of the approach relies on its usage over time. In our system, when a video first becomes available, the visualisation does not have usage data from which to calculate and populate the timeline. As soon as a student starts using a video, its usage will be captured and reflected as a low, yellow area at the bottom of the timeline. A manual indication of the important parts by the lecturer at the time of uploading each video may be a simple solution that could also serve as a way to lead a more desirable evolution of the contour. However given one of the strengths of our approach is fully-automated shaping of the visualisation without extra efforts by a human, other more automated ways to draw the data will be an important topic of further study. For example, some studies used supervised machine learning techniques applied to lecture video images to determine the important parts in videos {{cite:5684113636ccab7701a757a2b5f3643d21b66878}} or applied to the usage data (selecting a video, seeking within the video, as well as the heartbeat rates of the students) to develop a computational model of the usage patterns of students to predict a more desirable pattern {{cite:052b55f31f7e7ae2c800d96db28e78f7298160fe}}. Taking a bottom-up indexing approach (i.e. important parts emerge by looking at the video/usage data) rather than top-down (i.e. system first defines and codes in what the important events are), bringing some of these computational approaches into our system may help tackle this issue in a way aligned to what we eventually envision. In our second deployment, the same videos used in the first deployment were uploaded afresh without any use of the data captured from the first deployment. One possible approach would be to plug in the usage data from a previous running of the course (if it has run in previous year(s)) at the start of the course, to be slowly overridden by the actual usage once this was sufficient.
d
b047840c4b8e6061f0dd8540aa5b9c5b
Model studies have shown that higher order moments ({{formula:14d7c5a9-b199-4b3b-a32a-4e6ae5170496}} {{formula:4ce80750-f818-40f6-9287-a3ef4fea2c77}} {{formula:21b3da28-4805-4e78-9fe9-f97b55b4bfd4}} and {{formula:fd28853b-59aa-4272-ba57-13dda2b2f7ed}} {{formula:b2fc0655-3c15-4cee-89fe-5fb184b7e556}} {{formula:c386ee9c-533f-4df4-b552-6bac1ae9d0cc}} ) have stronger dependences on {{formula:fc233ddc-98b5-4c0a-9228-2ddaccf22537}} than the variance and thus have higher sensitivity {{cite:12813a528979902e7eff41a7eac3392c25cc463d}}, {{cite:c553bab68349028b00d2a73ce8cc3637978482c6}}. Another salient feature of the moments is that they can be reconstructed from susceptibilities ({{formula:20768bf3-c631-4dc0-8f94-02adc7d4e096}} ) {{cite:e087e1e3dbda6f0ff86da5f92f0bc9a3fe249661}}. For example, {{formula:d245556f-adf1-4cc4-83d2-34fcdc95cd2e}}{{formula:9251b987-89cb-4e2b-a5b2-da04bfa14d26}} = {{formula:a2663add-280e-4881-8bf1-0ee8fa322fbd}}  {{cite:53b70ce5055e26abc5925ffff93c36a42a86da6f}}. Hence a comparison of the experimental data can be made directly to QCD model calculations {{cite:53b70ce5055e26abc5925ffff93c36a42a86da6f}}, {{cite:e74ed7646ed07465e5b638bd69923cf019de920f}}. In high-energy nuclear collisions, the created system has a finite size, time span and the number of particles are also finite. Instead of a point in the phase diagram, one may observe a critical region in which large fluctuation might be observed. On the other hand, in the absence of a critical point, the hadron resonance gas model {{cite:786c9a96c2faefe29b4788f777e7ad7cba4eeecb}} suggests that the {{formula:7d6759e3-abe0-4205-945c-4a29f73943f4}}{{formula:1a961562-33a8-44b4-a36f-d26d1520c11f}} values will be close to unity and have a monotonic dependence on {{formula:0d84a7be-897e-4632-8c1e-435c7eaf0d51}}  {{cite:2192dbe7cd10239c23242af845ce1e95995dd724}}, {{cite:bdcbc6aa8c78a5226a7fbb2f72677b4b17d3ffba}}.
r
cfad8e36c16558fdaca99547a052a3bb
Reinforcement learning (RL) with IIR-SNN. Due to their inherent recurrence, SNNs with multiple timesteps might be more useful in sequential decision-making (such as RL tasks) than static image classification. However, application of SNNs in RL may be limited if the latency is too high, since the agent has to make decisions in real-time. {{cite:d87f8940aaaa70fbb1a9ca57301e9347bd5e97c7}} obtain high performing RL agents on Atari games with SNNs, but with 500 timesteps. Hence, in this section, we investigate if IIR-SNN technique can enable training SNNs for RL tasks with low latency. Experiments are performed using SNN based deep Q-networks (DQN) {{cite:9f8ae69b4427376718a7f67b35a669be5282d0b3}} on cartpole and Atari pong environments. As shown in Fig. REF (a) and (b), for both cases, we can train SNNs up to 1 timestep. The rewards are obtained by averaging over 20 trials and plotted with errorbars showing mean{{formula:f06fddca-15e0-4ce0-8a78-8ed6daf447bb}} std. In (a), the reward is the duration of the cartpole remaining balanced; DQN with ANN, SNN(T1) and SNN(T3) achieve 40.3{{formula:ca4dfd5b-dde8-48f1-a973-628e93c8a2ee}} 4.8, 38.7{{formula:8d3dd32d-d285-4b02-83a9-2ce37743642a}} 5.4, 52.2{{formula:bef6af4f-7ca3-4bbd-b3e0-5ed114ae6779}} 4.3, respectively. While T1 SNN performs slightly worse compared to ANN-based DQN, SNN with T3 outperforms the ANN. Similar performance improvement over ANN using SNN for some RL tasks has been reported in {{cite:d87f8940aaaa70fbb1a9ca57301e9347bd5e97c7}}. Next, we experiment with a more complex task, Atari pong game; in this case, T1 SNN achieves reward of 17.4{{formula:328423b5-9fc5-4ba0-b0eb-7b3d38b85d32}} 3.2 compared to ANN based DQN's 19.7{{formula:752ec75d-baca-4a50-88e5-dc2efaba5e9c}} 1.1. However, with T5 SNN, we obtain comparable reward (19.4{{formula:329c4fcc-1154-42d6-be6a-8b7f79d3422b}} 1.3) to ANN. Notably, with IIR-SNN, we can obtain SNNs for RL task (pong) with significantly lower latency compared to prior art. On pong, {{cite:d87f8940aaaa70fbb1a9ca57301e9347bd5e97c7}} report reward of 19.8{{formula:572578d6-8f38-4c18-bc66-4f06ff290b05}} 1.3 using 500 timesteps, whereas, we obtain 19.4{{formula:9dd03e58-e98e-4e23-811e-3ce4d978e2d7}} 1.3 reward using just 5 timesteps. So, IIR-SNN enables obtaining SNNs for RL tasks with comparable performance to ANNs with {{formula:dac23ce3-c1e3-4889-8d82-dbd58f58f88a}} timesteps, and up to 1 timestep with slightly lower performance. Moreover, such latency reduction translates to considerable energy savings which is critical for agents operating in the real world. The layerwise spike rates for SNN-DQN (T1) are shown in Fig. REF (c), the average spike rate is 0.08 only in this case, which results in 7.55X higher energy efficiency compared to ANN-DQN. Again, if we compare iso-performing networks to ANN-DQN, SNN-DQN (T5) infers with average spike rate 0.42, thus providing 5.22X higher energy efficiency compared to ANN-DQN. The details of network topologies used, training setup and additional results are given in appendix REF .
r
d142616b38e6c354cf58c53b043ffacb
SK16 performed a grid search for their parameters, where here we use a Monte Carlo minimisation procedure using the emcee python package {{cite:5853e9d4213b24884437f4b3d18c730a5710ecb4}}. We compare our population parameters to those of SK16 and find that we replicate their results to within {{formula:070e3903-c3c6-400a-b157-8e31abcc363f}} for parameters describing the colour and stretch distributions (Equation REF with {{formula:8f210539-312b-440e-91b1-224895c44d8d}} ). We find that this assumption works well for the {{formula:14f6e259-ce35-4cb0-908d-28b4d44ec618}} population, and therefore we fix {{formula:494c6d9f-d4f3-4324-9be8-991c50ef8c46}} (see Appendix A2). For {{formula:d7cbe313-88c1-4d29-823f-c308137da509}} , however, we find that fixing {{formula:3555e94a-54ac-49a3-a8b5-38ecf60a5272}} works better and the resulting {{formula:75376984-3632-4936-a095-b15a9565828e}} is smaller by {{formula:3734bfbd-c639-4296-806c-4838024cfea7}} compared to {{formula:b1532cec-106f-4d7d-9368-8c06c6afd742}} .
m
739ec676d47ccdc5123bbcb497be8b98
The vector, {{formula:cc089d63-6fc8-419e-9b81-c8ab41348a2e}} , is the ideal objective vector obtained by minimizing each of the objective functions individually subject to the constraints. If {{formula:48dc4cb3-7325-4845-be79-15507eee7128}} , the sum of weighted deviations is minimized and the problem is similar to the weighting method {{cite:a0128fa379042a49890fc572095cdfcb9b0e5123}}.
m
1992fa5fb06aff696a1d9fd542c5cfb6
However, the decreased feature maps due to pooling undergo spatial knowledge elimination injecting roughness, poor border knowledge, checkerboard artifacts, over-, and under-segmentation in the segmented substructures {{cite:6a6537ed4e3e8715827fe21d59d0e06196fb0498}}, {{cite:2585acc21885f4035f562bddf9135c7828c3d646}}, {{cite:64727a450d8021bb6659c0570acea59b488aa87b}}, {{cite:81833e70150986719f861f3a295f79b0c96b1815}}. To overcome these problems, the authors in {{cite:64727a450d8021bb6659c0570acea59b488aa87b}} introduced skip connections in a UNet, permitting the decoder to retrieve the associated features discovered at all encoder steps that were missed due to subsampling in the encoder. The feature maps from the encoder's antecedent layers are concatenated with the decoder's identical scale through the appliance of skip connections. Applying the skip connection of the popular UNet, we propose a VGG-UNet, where we have also employed the VGG-16 network as an encoder, as shown in Fig. REF . {{figure:41604df1-10bc-45b1-9a8d-c88178dfb986}}
m
3e1b83f1d8ae2840f40a382a949f51de
[leftmargin=*] Group (A). Two-step feature extraction and classification methods include (i) TF-IDF+SVM and (ii) LDA+SVM{{cite:250407de547edc31c386b0a3b8f8adc8c6f7bd3a}} which use support vector machine to classify documents represented by TF-IDF feature and LDA feature respectively; and (iii) PTEhttps://github.com/mnqu/PTE {{cite:1b20392d6badeb9c14715a8c48c91bb054b43d74}} which learns a linear classifier upon documents represented as the average word embeddings pretrained from bipartite word-word, word-document and word-label graphs. Group (B). BERThttps://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4 {{cite:b9298da41ec68b98d08fc8f101cf6b200c4cc336}} which is pretrained on a large corpus and fine-tuned together with a linear classifier for the short text classification task. Each document is represented as the averaged word embeddings (denote as -avg) or the embedding of the CLS token (denote as -CLS). Group (C). Deep text classification methods include (i) CNN {{cite:97c6cf12aea2b0e822e4f3ad256f98880d74931b}} and (ii) LSTM {{cite:3ce58ba09bc763e6ef3f6588ae2f9fe70a11eddc}} where the input word embeddings are either randomly initialized (denote as -rand) or pretrained from large text corpus (denote as -pre); GNNs which perform graph classification on document-level graphs including (iii) TLGNNhttps://github.com/LindgeW/TextLevelGNN {{cite:d625f411e471f9e305b10f1cefdc90ef9f6567be}}, (iv) TextINGhttps://github.com/CRIPAC-DIG/TextING {{cite:858ce5a779b738e8c646dab4e388a5ca9ce0c1d0}}, and (v) HyperGAThttps://github.com/kaize0409/HyperGAT {{cite:ba011e87de22203bb2e51a3314a8458d577fbe70}}; GNNs which perform node classification on corpus-level graphs including (vi) TextGCNhttps://github.com/yao8839836/text_gcn {{cite:31dfe42408c076773ff63986c3d742d54b282da5}} and (vii) TensorGCNhttps://github.com/THUMLP/TensorGCN {{cite:b881e0517bbe0b6a47f623854d3765b70658b5cf}}. HeteGCN {{cite:c25be991fd42e070cca162ceb14d870a447d4551}} and TG-Transformer {{cite:eb4c8e52817c3d36c5c88e6bda799981cad8d595}} are not compared due to the lack of publicly available codes. Group (D). Deep STC methods including (i) STCKAhttps://github.com/AIRobotZhang/STCKA {{cite:5b4ebeba8a7123216435f4f8d78cadf4540da077}}: an attention-based BiLSTM model, which fuses concept found in auxiliary knowledge bases into short document embedding; (ii) HGAThttps://github.com/ytc272098215/HGAT {{cite:98d88b434306b97b6e2f2e0e07993cb2888531de}} which operates on a corpus-level graph of entities, topics and documents using a GNN with dual-level attention; and (iii) STGCNhttps://github.com/yzhihao/STGCN {{cite:63f12f7275f42e0210c9ccf3b38cebb424173c61}} which operates on a corpus-level graph of words, topics and documents and uses a bidirectional LSTM to merge the word embeddings learned by a GNN with word embeddings produced by a pretrained BERT.
m
e9c4282719ba10a90e78286546998406
{{cite:b862731e209e936dd08b1cf157cc5866ee0e4c85}} 2018 {{table:9f175dfd-c46c-4feb-9f37-accfe76d4acf}}
m
6251e2a03136cd3b097cfb3224de7793
Feature scaling and data normalization is a common practice in machine learning and has been shown to be effective in areas as widely disparate as deep learning {{cite:0d4c719d92f3e0eff567ba0b5d35a6d665f5e730}}, {{cite:70552f3df72d692e04944a523dc3839de615bd06}}, nearest neighbour classifiers {{cite:fe486c12519ac328fc1cfd58d4d0f10220dc1d2d}}, {{cite:15f8f9a368c336bb74d42092533bd462b8f64191}}, SVMs {{cite:be39d7e66697e73960bdac705dcd9fd243392fd0}}, PCA {{cite:b7ec60e21bb5754de86d11d55aabf90d8172ab59}} and data mining {{cite:eafd9a2a21cc48366cdbe5dfffad7bda6a241cbc}}. Their main utility is when the norm of the input vector is not a true reflection of its importance {{cite:0d4c719d92f3e0eff567ba0b5d35a6d665f5e730}}. Normalization is also known to often help increase the speed of learning {{cite:c3c8ec863607323bdaa61a3f3ff1806b598f2f68}} as well as reduce the dependence on outliers {{cite:afd859d17c4cd5b1e1f4df83b011959bb312e963}}, {{cite:be218da9bb80bbc0b9e3960c8b615953115f1f7f}}. In this paper, we explore the idea of normalization in the context of solving an overdetermined system {{formula:3905056e-81f1-48d8-8c91-5d0966e1b815}} where {{formula:66269c05-784e-4259-af79-6cd5e80f69e4}} and {{formula:8826b398-6b3f-4c80-b644-1b685cba21ef}} with {{formula:68e1d538-f30e-476e-9f5a-67f304e1ec7f}} and then study its applications to Reinforcement Learning (RL).
i
48a6cfb648964e489e27d137155f08b4
An important line of DA methods relate the source and target data by assuming the existence of a common subspace. {{cite:1e953c48b7831aa1e3b644a95082d709e7a1fa77}} first came up with a closely-related idea of projecting the source and target data onto a reproducing kernel Hilbert space to preserve common properties and applied the idea to text classification datasets. The existence of an intermediate subspace that relates source and target data was explicitly introduced in {{cite:fe835b4634b1e1b3ee498c7580b51da76e39a7d8}} for visual objection recognition datasets. This method was further developed and analyzed for sentiment analysis and web-page classification {{cite:19dca8dcd869b89a22697189627acb164684c97e}}, {{cite:2d4cf68e59033bedf8ec63d5f0abf5b532e49a5c}}, {{cite:4511128b2ea98dd4eab06d452902cdb769f04bce}}. {{cite:630bf5750b8d0685844fd5e2180d72e067ebebb6}} simplified the idea of enforcing a common subspace to adding a regularization term based on the maximum mean discrepancy (MMD) {{cite:4e7c7af1b9aa8d2fc8d0b66fc3dacf5176db6e9a}}. Their method is named domain invariant projection (DIP) because the regularization term enforces a projection of the source and target data on the subspace to be invariant in distribution. Recently, with the development of deep neural networks and the introduction of generative adversarial nets based distributional distance measures, the common subspace approach was further extended to allow for neural network implementations {{cite:517349443904818125d9de1cd1d7e34e06406e81}}, {{cite:f20271afe3d2315a3a587fa7a1ac13810d8c9829}}.
m
439a6482e8a408ed5443baf78ba0583a
Finally, our run-time analysis shows that our new approaches have the potential to be very efficient. Most of our approaches are more efficient than Word2Vec, which is characterized as a very efficient language modeling approach {{cite:d1a3d6e4045658267a9b0a5278ffb43c85f38908}}. The key feature of our Cobweb variants are their ability to learn incrementally from new training examples; in contrast, Word2Vec must retrain on all data (old and new) whenever new information becomes available Although our current implementations (which are in Python) are slower than the highly optimized Word2Vec approach, they should be as (or more) efficient than Word2Vec once they are optimized.
d
ebfad8b4e5ea342dd692d0010223b2f4
For the DFTB calculations, charges were converged with a tolerance of {{formula:df269280-dbf4-47ea-8086-ec2c63ea1a5f}} a.u. and forces with {{formula:7e5e75fa-7d2e-4e23-b556-82da1702f56a}} a.u. Bulk calculations were performed with an 8{{formula:db0a6521-8d8d-4a0a-b40d-71b9b9a17919}} 8{{formula:e751f3c6-13dd-4026-a247-1fa908e7fa09}} 8 Monkhorst-Pack {{cite:c5eb89c35e260091fbaa1a7e99e72753429b923a}} grid for the primitive unit-cell. The rutile (110) surface was constructed as a 4{{formula:d2a5fba4-5c02-46f2-8079-0d7279bf577a}} 2 surface unit cell with 6 layers, the (100) surface as a 4{{formula:8360cd98-164c-4f3e-9f0b-ff24640da35f}} 2 unit cell with 6 layers and the (001) surface as {{formula:bf0868e2-6bbc-4a22-b835-f343c6e6c4c1}} with 8 layers. The (001) anatase surface had a 2{{formula:a91c76a6-93fa-456e-b7bc-c25d8a89db9f}} 2 unit cell with 8 layers, the (101) surface had a 8{{formula:5af69a43-6008-49dd-a646-792096c5f9c7}} 4 unit cell with 5 layers. All rutile structures were computed with a 2{{formula:efcdaa2b-a012-4929-94c2-dfe654fb3143}} 2{{formula:56f0a66b-fa12-4b2c-a25c-3c10aecf4613}} 1 k-point mesh, the anatase (001) surface with a 4{{formula:c10d660d-ddaf-43b5-b860-6c0167d6396e}} 4{{formula:5add6ae8-40d1-47c1-85f9-be5b00aa27b2}} 1 mesh and the (101) surface with {{formula:7be30843-7fd7-425a-bd34-6dba4dafeda6}} -only calculations. Branching point energy calculations (discussed below) were performed with the primitive unit cell of the anatase and rutile crystals using an 8{{formula:e1400ff4-4a79-477a-9373-6439641cbeb7}} 8{{formula:2bbc3260-a368-4218-88b0-ea4d7515b0a3}} 8 k-point grid. For nanocrystals we performed {{formula:c2e2aacc-65a7-48e2-8384-3b8c1e60e145}} -only calculations.
m
08ea7c2be612c8be4a0c6ba49642e99d
The results for this experiment are illustrated in Figure REF . In this figure, the graphs on the left and right show the performance of all methods for fusing multiple estimates when using projective Cascading ICP and Hybrid ICP as the underlying ICP algorithms respectively. As these two graphs illustrate, the choice of the underlying ICP algorithm depends on the VSD distribution of the global pose estimator used to initialise sequential ICP. For example, Labbe et al. {{cite:253fc9c64d313cd3af55e2eb1b925e9d40a09326}} report that {{formula:286f2094-5c17-4b61-ae5c-72c35db2606e}} of all VSD errors of the CosyPose pose estimator are below 0.3 for a misalignment tolerance of {{formula:8e3b5869-7a1d-41a8-beb2-7ce59e283aa9}} cm. Hence, when used for sequential ICP initialisation, it would be best combined with projective Cascading ICP because in the low pre-trajectory VSD error regime, this performs better than Hybrid ICP. This is because for low pre-ICP VSD errors, projective Cascading ICP has a similar performance to Hybrid ICP while being significantly faster.
r
b99494654846de1f768786dff3cf2816
We adopt the following criteria to select baseline methods. First, the model should be able to parse inner facial components as well as hair. Second, it is open-sourced and we are able to re-produce the reported performance by re-training the model from scratch. Third, the number of hyper-parameters has to be relatively small so that the performance does not rely on advanced training techniques. This allows that the same training setup can be applied and training can finish with reasonable computing resources and time. The selected models include the classic models like FCN {{cite:a64e154a9e6770cdc7c3b82e7dff90d8ea07e44f}}, as well as the advanced ones, such as Deeplabv3+ {{cite:fa5a0ab2709f1ddcfe58c101e0251c835fd2cc1d}} and SPNet {{cite:dfe38f9601311a2395e9a49b9013f8e502c89292}}. We collected their open-sourced codes and built an unified benchmarking codebase such that the same training and evaluation procedures are ensured.
m
c00fa1e1d040617a69929496e6152c60
From a theoretical point of view, pure shock models (e.g. {{cite:c5fb6bca1c19b4e4cccf987703815c39e2fe4da3}}, {{cite:7adb12d3da6de5298bcd72223916b6078958dd99}}, {{cite:edaaf40a1a084824dcf33dc561f6b8c40aedb0be}}, {{cite:4e543ac5323d15b24031a4504b2fe422920ad6d5}}) and composite models (shock+AGN, e.g. {{cite:1e619c7c01774f069bb70f4c80f09b0cca4d4074}}, {{cite:42d9dbc997b3669c485da91fd4102ab4a9c21b45}}, {{cite:fe6f6ee752629609026b3c2265b281dc829798f2}}, {{cite:ba486f0141fad0b933e667c69355155378b8dcab}}, {{cite:96ab8d018564d6b1ff8dbc2b8af30518e9b83bc5}}) have predicted shock velocities in Seyfert NLR between 100 {{formula:a51c31fc-c4ee-4731-aa0e-03b5426082a1}} and 500 {{formula:c0a38360-bcb5-4505-876b-7aaa873f3902}} . From the detailed modelling of the present large sample of Seyfert 2, we derived a similar range of shock velocities (from 60 to 310 {{formula:6cff0ab7-738b-4b80-8dc1-299a2b7de674}} ) to those obtained in previous works. Moreover, this range of velocities is in agreement with those found in observational investigations of Seyfert 2 NLR, strengthening the confidence in our results. The present modelling of the spectra, which makes use of a large range of physical parameters, offers an unique opportunity to investigate several properties of Seyfert galaxies, such as their position in diagnostic diagrams as a function of different shock parameters, the temperature and ionization structure and chemical abundance determinations. Each of these issues is discussed in the following.
d
ea6f568b701c775d9976dcc6f73c8da3
In addition, a clear difference of the band dispersions {{formula:0ab7bf3c-88e3-4f28-8cb5-9f2ac2500122}} ({{formula:049fd06f-e597-4e0c-8234-a995b50b7149}} ) between the valence and conduction bands is observed. The top valence bands are more flat than the bottom conduction bands (figure REF ), which is caused by the larger valence hole effective masses in comparison to the effective masses of the conduction electrons {{cite:fd22eedf2fb84456439b915928177d2e503f4ffe}}, {{cite:75f0d3d56a39efb7ad60c73a0c6f4311ec3d033c}}. The electron band dispersions {{formula:bc08acd2-0cb8-40e7-936f-dd21a852a603}} near the G-point are defined as {{cite:054e18cb69b13d35a78495cdc2be7fa23851dd69}}: {{formula:71d0d950-b5fa-4ec9-93c6-f9cb87ed6410}}
r
19726c3b21994ee3cac65b71d8538500
Alexnet made sparks in the community with only 2 GPUs totalling 6GB of VRAM {{cite:9f1dffeb067f41e9304f76529fe8589821668b2a}}. Techniques today spend tens of thousands of dollars on renting compute, or sometimes millions on renting or building super computers (eg GPT-3, alphastar). The amount of compute thrown into searching the weight space is orders of magnitude larger nowadays {{cite:5fd2a5e137ccf991cbb03ca4441dc603eac9bbd9}}, {{cite:ccdb9c975b1fc5af5b296e06fc5ca5637e0957b7}}.
d
041f99be72fa3304c27f484325db2f42
We have extensively described the typical nonlinear evolution of the MRI in fully kinetic shearing-box simulations, explaining each phase of the evolution and the differences between the small- and large-box cases, as well as between the 2D and 3D cases. We have explained how small and large simulations differ in producing developed turbulent states, and how the choice of physical parameters impacts the overall evolution and duration of the nonlinear MRI. We have also identified the role of pressure anisotropy in these simulations, observing that a large {{formula:6ae913fb-5a83-47ae-bd75-346dfca58adc}} can develop and accumulate throughout the nonlinear stage of the instability; this anisotropy is likely exaggerated by our choice of modest scale separation (but still much larger than that employed in previous studies) that necessarily characterizes PIC simulations. A limited scale separation can result in inefficient mirror modes, which grow much slower than the MRI, allowing for persistently large pressure anisotropy. This anisotropy ultimately participates in pushing MRI modes to wavelengths larger than the box size, halting the MRI dynamics over long times. By exploring a large parameter space, we have assessed the numerical convergence of key quantities (e.g. the magnetic-field amplification) in 2D simulations when the physical parameters are varied. We have studied the effect of the separation between the macroscopic ({{formula:3ffbcf2d-448f-43ae-8a3d-827c230631d7}} ) and microscopic ({{formula:bfbf4309-dc92-4626-8f29-969128fcaca5}} ) temporal scales, {{formula:d060cc1c-49ac-408d-8645-756f89913105}} ; of the separation between box size and MRI scales {{formula:4a146f2e-7a15-446b-a9a6-3fe431c79dd3}} ; of the initial plasma-{{formula:ebcbcbe9-7496-48cc-a0ce-246a9c63bc03}} ; and of the box aspect ratio. Increasing {{formula:145a29c6-915d-4bbc-94c7-8b90a56a8740}} with a fixed box size generally results in a longer duration of the nonlinear stage and in an increase of the magnetic energy at saturation. We have shown that, at some large {{formula:0244636d-6e2e-4741-8ef6-efebd99446c7}} , the results (in particular, the saturated magnetic energy) eventually converge; however, this converged nonlinear state is qualitatively different for small and large boxes, and the precise value of {{formula:e48160a6-6bb2-4d96-aa59-4019941de403}} resulting in convergence depends on the box size. Similarly, fixing {{formula:a658eb66-244e-4bc2-bf60-98aaabeda471}} and only increasing the box size produces results that converge for sufficiently large box sizes; in large boxes, a sustained-turbulence state can develop while ti does not in small simulations. This partly agrees with previous 2D work by {{cite:c5a7af251b93ff7f5107b344534cf3efb3f1745d}}, who focused their analysis on increasing box sizes while keeping {{formula:276c3589-7f8c-4378-9c6c-e5ce6285fe20}} fixed. They concluded that convergence in the magnetic-field amplification is attained, in 2D, for box sizes of at least {{formula:818f8031-d678-48e1-9253-eae923a4bbef}} , and that large-box simulations can maintain a volume-averaged {{formula:bd5cfc06-7e16-4778-90fb-324d609f93f2}} (calculated with the nonrelativistic expression) throughout the system's evolution, respecting the underlying assumptions of the nonrelativistic shearing box. We have instead demonstrated that 2D simulations, over sufficiently long times, invariably develop an average {{formula:ffa22eb7-6819-47ad-83a0-b420ddc808b1}} , owing to the absence of reconnection in the {{formula:f5277adf-b9e8-4041-9bfa-e43cac19fb57}} -direction. This does not occur, instead, in our 3D runs of sufficiently large size; we conclude that 3D simulations are of key importance to obtain physically valid results that respect all the underlying assumptions of the shearing-box model. Considering the effect of the initial plasma-{{formula:d0d2f4ab-bbb2-4cb1-b0c1-acd4d3f93659}} , we found that the results are practically unchanged in the range {{formula:5d9607ec-b050-4074-b3b8-a239af4af4ff}} : independent of the initial {{formula:af0e418f-07ac-4f47-8f5b-03a5e1ed93b3}} , all simulations end up with plasma beta more or less within the same universal final range, of order {{formula:7851d1c0-6afb-451e-834e-7ceef36e8a7d}} for 2D runs. This result holds as long as the unstable MRI modes can fit in the simulation box, i.e. under the constraint {{formula:ff1f1a49-37bf-411b-9418-e757b4f8e174}} . Finally, we have observed that the geometry of the simulation box can impact the development of the nonlinear MRI stage dramatically: boxes with aspect ratio {{formula:f2b0f8d3-de03-4525-ba55-e13324cba9af}} result in developed turbulence much more easily than corresponding boxes with {{formula:b338dd3a-01cf-41e6-b16e-76602d97dbcb}} , both in 2D and 3D. We have observed that this is caused by additional drift-kink modes that are impeded in small simulation domains, and that can promote channel-disruption events in elongated boxes instead. We have verified that sustained turbulence develops, in our simulations, during the nonlinear MRI evolution. The isotropic power spectra of the poloidal ({{formula:00a410a4-f7af-48a7-871b-e15c587afea5}} ) and toroidal ({{formula:cc4eed78-de8b-4203-b0c8-91bf5bc71de2}} ) magnetic field during this phase show the presence of an inertial range with characteristic power laws, indicative of turbulent activity. Both in 2D and 3D, the {{formula:942de19b-74eb-40da-88dc-a673f84c5bd2}} spectrum features a shallow slope roughly consistent with {{formula:35618ec4-094c-4b78-a17f-c243b415a8f2}} at large and intermediate spatial scales, and a spectral break in the vicinity of the average Larmor-radius wavenumber {{formula:1347cded-8270-4f36-a714-6c46f37e2401}} . In the 3D case, at length scales below {{formula:b33e2bae-81f3-497e-9754-7bd9e9fb828e}} the {{formula:8292a256-3c04-43f1-90eb-172fbbf01c44}} spectrum steepens to a {{formula:c4591542-8e41-458c-ac5a-80d13ab8b020}} slope and a clear kinetic range is present. The {{formula:88d972b8-eb19-4f82-ab3c-626efec152e6}} spectrum appears to follow a {{formula:404fc3f6-914b-4de0-b795-f4f30e829659}} slope at all scales in 2D; in 3D, {{formula:f1c5c3aa-4115-429c-b61a-823d3ef51443}} instead follows a shallower slope compatible with {{formula:4dc8ed16-84ba-47f3-92f5-010ab29bebc4}} in the inertial range, and a {{formula:e04d8b38-794d-4572-96f9-47d3b5b8f158}} slope in the kinetic range. Our 2D results are consistent with previous 2D studies ({{cite:c5a7af251b93ff7f5107b344534cf3efb3f1745d}}); our 3D results for {{formula:eb1db4e8-8c85-4900-8fdc-bd2a9c3ed2f2}} , in the inertial range, are in agreement with MHD and hybrid-kinetic simulations ({{cite:b667160de93692efa97310fb328970171a765bb6}}, {{cite:1a98a43bdb37b1f9db01aa4785f83a0f19a76a87}}). However, the latter comparison is complicated by the difference in the underlying models (our fully kinetic PIC vs. MHD or hybrid methods). Considering works focusing on pair plasmas, a similarity can be drawn between our model and that employed in {{cite:a23a0ebfdddfb4e903a96ad3de2aa20ced812bc3}}, {{cite:b8aa3d8f2dc7f4b7592fabc7399476f694bc37f5}}, since both cases consider a forced-turbulence system evolution. However, in those works a {{formula:ce7fc258-3059-48de-8b54-afe12249899d}} slope (or steeper) was found for {{formula:6e9c2343-8774-4d24-a543-34a13c56ea2b}} , which differs from our shallower {{formula:6c99e41e-fbe3-4686-886d-d2d5c7d19da1}} result in the kinetic range. This measurement is also hard to compare against analytic expectations, as no previous studies (to the best of our knowledge) focused on the specific case of pair-plasma MRI-driven turbulence we consider. For example, {{cite:f9d1ab1693a45473d6b453909eaabfb1fc784902}} considered a low-{{formula:a7f8b313-1d1a-4ba9-adb7-038e7504e539}} pair plasma in a tearing-mediated cascade, finding a {{formula:bfc8d0a6-582c-4a48-9aa6-0846721064e5}} slope in the kinetic range; our conditions, however, are those of a high-beta plasma and we have no basis to claim that our cascade is tearing-mediated in nature. It is also interesting to note that solar-wind observations commonly report a spectral slope {{formula:9875c8ce-1f66-4593-b96c-382a9df1d359}} (not extremely far from our {{formula:a33e5adc-c9f5-43b0-96d6-03fbdf552390}} result) at sub-ion-Larmor scales, consistent with gyrokinetic calculations (e.g. {{cite:c3b7f08b419e13040b66e1a454a322b60b3c24c1}}). Whether a similarity can be drawn between this and our case will be verified with future electron-ion simulations. Concerning particle energization in 2D and 3D, we found that energy injection in large runs is akin to that brought about by Alfvénic turbulence. In these simulations turbulence is well developed, and particle heating proceeds smoothly, with the system achieving a steady-state balance between turbulent magnetic-energy dissipation and particle energization. In contrast, when turbulence is not well developed (e.g. in small boxes), particle energization occurs mostly during short “bursts” corresponding to large-scale reconnection events. Moreover, for the first time we have reported that substantial differential heating can occur between electrons and positrons in MRI simulations. The physical mechanism behind this phenomenon is represented by additional (related to the background differential rotation) drift forces that affect opposite charges differently, causing a symmetry break in the gyromotion of electrons and positrons in a uniform magnetic field. This effect is unrealistically large in our simulations due to our choice of (necessarily) limited scale separation {{formula:c826a63c-0fcb-494c-8e4b-94a77094dd65}} . We have elaborated on how tuning the values of physical parameters can ameliorate this issue and its implications for the interpretation of the results. The energy distribution functions in our 2D and 3D simulations show the presence of substantial nonthermal particle acceleration during the nonlinear MRI stage. A power-law tail with index {{formula:d91c4656-b30f-4c67-8aa7-3338d4137c17}} consistently develops as the MRI transitions to sustained turbulence, both in 2D and in 3D. Our results are in good agreement with previous works; however, here we have demonstrated that in 2D this power-law state is transitory, and that the overall evolution can produce very different energy distributions over time. {{cite:c5a7af251b93ff7f5107b344534cf3efb3f1745d}} carried out a similar analysis but focused on the early nonlinear stage, where a power-law index {{formula:fecdcd1a-223a-496f-9e9b-84a507fab84f}} can indeed be realized; but such a state is clearly still evolving, and changes drastically later on. At late times, the power-law part of the distributions progressively disappears, and a high-energy peak develops. This is completely determined by the artificial 2D end state (with characteristic “magnetic loops”), driven by the reduced dimensionality of these runs. {{cite:27416fc64cd1fa963dfef71bb2d9225b79d25e43}} described a similar late-time 2D dynamics, but attributed this evolution to the MRI; we emphasize that such a state is in fact not representative of the response of particles to the MRI, since the latter has slowed down (or completely stopped) by the time when the quiescent end state has developed. Conversely, in our 3D simulations the developed nonthermal features are maintained with the same power-law index until the end of the run. However, the nonthermal tail is progressively eroded by the accumulation of particles around the highest energy attainable (which increases with box size). {{cite:8f5a22640cbad2f5c22686a7f3d2ac751c3b8f17}} observed a similar effect in small-box 3D simulations; this result is also consistent with forced-turbulence PIC simulations ({{cite:a0da3f6cb8caed0024544360d2033591749d7f6e}}) and points to the need for larger (and more expensive) 3D numerical experiments. In our 2D and 3D simulations, we observed that viscous stresses (in particular, the dominant {{formula:877fa085-0442-4639-a533-8fe6b6173170}} -component of the stress tensor) can develop during the nonlinear MRI stage. This can lead to efficient angular-momentum transport: an effective (dimensionless) collisionless viscosity {{formula:5083a03d-a481-4c5f-9449-c81b3ae35a97}} ({{cite:4e568969ce70b5103fab4b352fef0eaa5f3fa18a}}) arises both in 2D and 3D runs, mainly due to Maxwell and anisotropic stresses. In 2D, the Maxwell stress {{formula:524b5532-4598-4df9-bf77-cd1410a2da6a}} is likely exaggerated due to a lack of saturation in {{formula:bbd0501d-9b14-483f-a8c0-2313c94a71c0}} caused by the reduced dimensionality; in 3D large-scale runs, where magnetic fields saturate at lower amplitudes, this stress firmly settles on {{formula:cc29641d-a1ec-47a9-a44c-3bcb72565e10}} , consistently with previous hybrid simulations ({{cite:b667160de93692efa97310fb328970171a765bb6}}). The anisotropic stress {{formula:fc1dc8a3-42ae-43de-a7f6-e32ac70166b9}} , instead, shows a more complex trend: when keeping the same {{formula:c0e089af-a00d-47a2-8d1b-7806315a25aa}} , the average pressure anisotropy {{formula:b518d32a-dedc-42db-ba9c-c931e0c7033d}} generally decreases from 2D to 3D and from small to large boxes; but since the average {{formula:4627c38a-7db6-4b94-9dcb-a5fbcdad1e3d}} decreases as well, {{formula:94214d20-5af7-4d17-88b4-a960d6e010b8}} maintains roughly the same average value {{formula:fbf7f169-9b95-49c6-b2e5-bb6ee86db6a4}} in 2D and 3D simulations. As a result, we measure {{formula:33d41070-9c92-4d45-b4be-a8d90430bf24}} in our largest 3D runs, which is against the general expectation {{formula:7eddcd38-c47b-426c-a290-66cde248e954}} . We believe that this enhanced anisotropic angular-momentum transport is related to our (necessarily) limited scale separation and system size; with realistic parameters, efficient mirror modes would rapidly quench pressure anisotropy, likely decreasing {{formula:9962d401-bf03-4222-9557-3ef9c360b61e}} . {{cite:8f5a22640cbad2f5c22686a7f3d2ac751c3b8f17}} measured a similarly enhanced angular-momentum transport due to pressure anisotropy in small-box 3D simulations, which was likely due to the same mechanisms we describe here.
d
00e7968a0d3feb5ecd114657e167a29e
Under astrophysical conditions, coherent emissions are typically generated in fully ionized plasma (a marked exception is the molecular line masers). The plasma is a medium with a long-range interaction, which, in principle, enables the coherent motions of large ensembles of particles. However, the same long-range interaction strongly affects both the emission and the propagation of electromagnetic waves. Therefore, coherent emission is a collective plasma process that should be described in the language of plasma physics. Unfortunately, this branch of physics is not popular among the astrophysical community. Two views are prevalent. According to the first view, this is an obscured and untrustworthy field, so there is no chance of any progress. The second view is the opposite (but, in some sense, closely related): coherent emission could be simply described by formulas from the Jackson's textbook {{cite:12af36d5e7a30a233a7fee92603a6c0e31e0f231}}, into which the charge {{formula:83bccc0a-e996-409a-ab62-f1a872eba604}} could be substituted with a large enough {{formula:2541a154-74d2-48d5-a1f2-3344a5814301}} .
i
4be76aaadef3eb2b6588447b3eed9912
The projections of the correlation functions for the three different trigger particles are shown for two intervals, {{formula:6c8894ec-b50d-4ced-9a96-f4fd1a175192}} GeV/{{formula:4055bbe2-3acd-4bbe-8814-7914ae20c4aa}} and {{formula:95d6c50f-1521-4499-9f4c-65012818f9ad}} GeV/{{formula:4caf94d6-6604-4041-adbb-ef78cf1178eb}} , in Fig. REF . Included are also the correlation functions predicted by MC event generators widely used by the LHC collaborations: PYTHIA8 with the standard Monash tune, which includes colour re-connection as final-state effects {{cite:ccff68c52ee7336dfb50daeb62602906904fe506}}, PYTHIA8 Monash tune with shoving {{cite:d34744785bb14da28f74244a74bba234e4e6f861}} and EPOS LHC {{cite:d09b71448c0150427930ad35f9fb09fe1f4e01a4}}. It is important to note that the PYTHIA 8 Monash and EPOS LHC tunings were based on single-particle spectra and underlying event observables, but did not include particle correlations in azimuth. The shoving strenght parameter g is here set to g=3 and in addition the upper {{formula:6b91b339-b161-4ed9-b58c-e6c4d95d552d}} cut for the shoving mechanism is turned off. None of the models describes quantitatively the correlation functions consistently for the three trigger particle species. For the low , both PYTHIA8 tunes overestimate the peaks on the near- and away-side, while EPOS LHC underestimates them significantly for all trigger particles except for the {{formula:4ab3b3ec-b0f1-4f5d-b32c-d5a96064583d}} . In the high interval, the shoving tune of PYTHIA8 is underestimating the near-side peak for all trigger particles except for {{formula:f6fb5ad3-feee-4d5e-aa69-8bd27b9d0233}} -h case, but describes well the away-side peak. The description of EPOS LHC and PYTHIA8 Monash is similar in both intervals, where EPOS LHC is underestimating both peaks of h-h and {{formula:1d4f9d46-118c-4596-bc81-a73b84c6fd22}} -h correlation functions and can reasonably well describe the projection of the {{formula:fb96cd1c-e374-4f17-8abc-4a364fc3900f}} -h correlation function. PYTHIA8 Monash tune overestimates both peaks for all three types of correlation functions.
r
264c3545ce69f39ebaa284ec2302f88b
Advances in computational infrastructure and breakthroughs in artificial neural network architectures (ANNs) are emerging. These lead to the use of machine learning to tackle this partitioning task. In order to process the amount of information in a single image efficiently, we need to use convolutional neural networks (CNNs) {{cite:3cf3342655543f2f4cede0fbc66c358eaae5d2af}} because they split, downscale and upscale the image while applying different filters in an efficient way. This enables the use of such networks for real time image segmentation with high accuracy.
i
1e593b174d40d9c1212a2c0bf7c1737b
On the other hand, a wireless communication network is highly complex with many components and mechanisms, rendering the task of mathematically modeling the whole system analytically intractable. Most of the traditional scheduling algorithms are designed based on single layer of the OSI protocol, which can not consider the cross-layer information and apply global optimization. Thus it is difficult to achieve higher fairness through the model-based approaches. Recently years, the machine learning (ML) methods are introduced to the research of wireless communication {{cite:04eff0a5c95ba81808c57f810bb008fa80046516}}. In sharp comparison with the model-based approach, the ML-based method is data-driven which applies optimization by exploiting the massive data in the network. This allows the ML-based method to solve many network optimization problems without establishing the mathematical model. Moreover, the utilization of the Deep Neural Network (DNN) makes it easy to use diverse network data to build models. For instance, Cao et al. designed a DNN-based framework to predict network-level system performance which take the cross-layer network data as input {{cite:155b00ac6a0f014891ba706efe69cc10b73ee9dd}}. And the simulation results show that the model can make accurate predictions at the cost of high computational complexity. For user scheduling, another method entitled Reinforcement Learning (RL) is also introduced to improve the existed schedulers or build new schedulers. RL is one of the most representative leanings methods of machine learning, which aims to maximize the long-term expected return by learning from the interactions with environment {{cite:c9f5a604dbc4c3d1ac32a3fc6aedadc3f467f345}}. In RL, the agent accepts the state of environment and make corresponding action, then the agent will be rewarded or punished. To get higher reward, the agent must continuously adjust its policy to make better action. The RL has strong exploration ability, and always achieve surprising performance. There are much pioneering work have already been achieved with RL.
i
62ebfbb3b0d32d4fdfd096e871102bc8
Fabrication and characterizations of M-FE-S memristors. A vdW vertical architecture illustrated in Fig. 1a is adopted, to construct an M-FE-S junction in the lower part and a MOS-FET in the upper part. In this configuration, ferroelectic-semiconducting interface as well as a gate tunable semiconducting channel can be combined in one single device, which is essential for the realization of a gate-programmable M-FE-S memory, as will be discussed in the following sections. By adopting the dry-transfer method {{cite:0fa34a3ce5443c629232b59256575cd5f57f5918}}, multi-layered vdW heterostructure described in Fig. 1a was fabricated by stacking few-layers of graphene, CuInP{{formula:95bf003d-3824-4ac8-95aa-17ce082dd6a1}} S{{formula:40e046fd-e2e0-41c8-9ebb-7560ce12cb46}} , MoS{{formula:8354907b-52a7-4445-bbef-c266cf9ccd16}} , and hexagonal boron nitride (h-BN), with a graphite layer (5-10 nm in thickness) serving as the bottom electrode (see Supplementary Figure S1). Atomic resolution of the cross section of a typical heterostructure can be seen in the high-angle annular dark-field scanning transmission electron microscopy (HADDF-STEM) image in Fig. 1b, with the corresponding zoomed-in view of the FE-S interface shown in Fig. 1c. It is seen that the layered structure of CuInP{{formula:a0fe9bf9-4edc-4cbb-8a8e-3511e482a844}} S{{formula:b35b5693-baac-4331-85fe-65d685e3a998}} and MoS{{formula:b1e59817-2b9a-48c8-9814-ac3e879de5e0}} are well defined, and the FE-S interface is atomically sharp. Optical micrograph of a typical sample is shown in Fig. 1d, with each constituent layer highlighted by different coloured dashed lines. Electrodes and top gates were patterned via standard lithography and electron-beam evaporation. Raman spectroscopy of CuInP{{formula:a1fec055-7ed2-44f7-8dfe-1b43bc061285}} S{{formula:59384934-df8c-48e7-a75e-d21edee4bb12}} and MoS{{formula:8e83a649-fef5-4657-86b3-3f22b419d6eb}} layers were measured to confirm the phonon modes of each crystal, as shown in Fig. 1e. It is seen that both of the two materials exhibit consistent Raman peaks as compared to the previous reported results {{cite:46825ef210d2b93fc8a79658842e59b790a845b4}}, {{cite:70573e2510e3c1acf381b2b483cd19f9a63f61a8}}. It is noteworthy that the anion and cation peaks can be found in the Raman spectrum of CuInP{{formula:c791f30d-4d44-4907-ad7c-3d8d3b75938c}} S{{formula:caa82501-97a8-4176-abd6-d65c80f65e2f}} , indicating the existence of ferroelectric dipole polarization at room temperature in the flake. Indeed, Curie temperature {{formula:ff3fa785-8aa4-443a-9cda-4451a2d9da12}} of few layered CuInP{{formula:ba65cd22-3df6-4633-aa5f-0c12a753cf4a}} S{{formula:435ab6aa-cb22-49c2-af83-64cef84d17ce}} is around 315 K {{cite:d1561a480f5c21a303b8d2ae0bd166903cc53415}}.
r
4af6c931ad2d3b105e08f7b175e6ed10
Again, the imputer for imputing {{formula:b3e3449d-e5ae-4a59-941c-e1baf6cdde22}} can be any suitable method such as SoftImpute {{cite:eb2de691ba4b79fd8c90cdfb84882249263c0234}}, MissForest {{cite:0085011dbefeb396d92c89454958c31c71692630}}, etc.
m
a00ae711ccef5996cbe5ea49bcbccefe
Deep Ensembles {{cite:2368bf3caa1b2e4b3b5f35565fe793d1e6337652}} and Packed-Ensembles are ensembles of DNNs that can be used to quantify the uncertainty of the DNNs prediction. Similarly to Bayesian Neural Network, one can take the softmax outputs of posterior predictive distribution, which define the {{formula:3f8749cf-51c3-4b45-8db7-e37bfbe608a9}} . The MSP can also be used for classical DNN, yet in this case, we use the conditional likelihood instead of the posterior distribution.
d
1d32f3e7258e8bbd21e89559176da6aa