What might be driving the anomaly encoding patterns indicated by our results? Explicit syntactic training does not appear to be necessary.
GenSen is the only model that includes an explicit syntactic component in its training (constituency parsing), which could help to explain that model's comparatively strong performance on the individual anomaly detection tasks. However, it is noteworthy that GenSen performs mostly on par with Skip-thoughts, which constitutes just one of GenSen's objectives, and which uses only prediction of adjacent sentences.
BERT and RoBERTa, the only models to show signs of more generalized anomaly encoding, have no explicit syntactic training at all. However, various findings have suggested that these types of models do develop syntactic sensitivity as a result of their more generalized training objectives {{cite:5aa6b6073fab0fcdc0a176022d6875b8b9f66d71}}, {{cite:6f730fc3c419c23fb7ef481bb8972f6b8ed40f1a}}, {{cite:2489df3efe0b3e60a4f5124bf99291fa04abb6be}}, {{cite:5fd2645c13232df9c98be02cc4ee7ba22c3e685c}}.
The Wronskian method presented here is similar to the well-known continued fraction method by Leaver {{cite:e5dd0ca7a4bc964baf83c20bf33c0f4bb0e8e08b}}. We do not explain Leaver's method in detail because it is discussed in many places.
An advantage of the Wronskian method is that it still works even when we do not have an explicit recurrence relation. We just need two series (or numerical) solutions satisfying proper boundary conditions. This can be done without information on the recurrence relation in principle. Moreover, to use Leaver's continued fraction method, one has to treat a recurrence relation carefully if it is not a three-term relation {{cite:097eb2a0b846c2a55d9b284fddd5e94268a9c948}}, {{cite:8eec2b87cb793ab85ebeb6c71fc3aeb6514f68e9}}.
In this sense, the application range of the Wronskian method is wider than that of Leaver's method.
However, since Leaver's method seems to be the best numerical method for obtaining precise QNM frequencies for the Schwarzschild spacetime, we can use it as a reference for comparison. For instance, the numerical result (REF ) shows about 30-digit agreement with Leaver's result, as expected.
The main advantage of stabilizing training using rewinding {{cite:295034d90a64925642f968a310fb8d93fff6a90b}} is that it does not require changing any hyperparameters. Our large-batch experiments modified only the batch size and are accordingly not competitive with rewinding. Although this might be fixable by an expensive hyperparameter search or longer training, it is unclear whether this would be worthwhile. Given the RTI, there are no clear advantages to using IMP with resetting instead of ordinary iterative pruning.
The efficacy of the proposed approach is verified on the three-phase unbalanced IEEE 37-bus {{cite:8ee5817f82f2cd00a115c6ceca4c9d0fe6e9d3a1}} and IEEE 123-bus {{cite:0d2bae92fc54fa4100d9a35f8ae43275f952a805}} test systems. An aggregated 24-hr load profile at the primary nodes consists of a mixture of load profiles, i.e., industrial and commercial load profiles obtained from {{cite:ad19e48cd5bbbe985b8d51c20d289d4066cbbc3b}}, and residential loads obtained from {{cite:8164a15603d8a007edf744f71f830e8ae299116c}}. Reactive power profiles are obtained by assuming a power factor of 0.9 lagging.
Other profiles at different nodes were obtained by adding a random noise term and a sinusoidal wave of random amplitude spanning the 24-hr period.
Using these data, the voltage profiles at all nodes are obtained by running a load flow. The aggregated smart meter data are averaged over 15-min intervals, while the voltage magnitude measurements are sampled at a 1-min interval. Thus, we consider two sensor types for the case study. We assume an RBF kernel for all the GP-based approaches. The imputation is performed for the aggregated smart meter data at a 1-min interval. We compare the performance of the proposed RGP-G Interpolation against the linear interpolation approach {{cite:4a976bb9485125191289f35325e5a98532da31a0}}, RGP (Algorithm 3), and the full GP (Algorithm 1) approach. The RGP-G prediction approach is compared with {{cite:ec683814838befcfe9232a2612086f8205c256c7}}. Algorithms 2, 3, and 4 are initialized using their respective mean and covariance functions associated with the GP function at time {{formula:e9023b8c-c5a7-4e38-a863-423af58bd9e4}} . The hyper-parameters associated with the GP function can be obtained either by training the proposed approaches on historical data or by using cross-validation techniques. The hyper-parameters involved in the proposed approach are {{formula:1e2508d0-5584-4b8a-a5f6-a3a27d77d4bb}} , where {{formula:67c3d141-b9bb-42f4-8b92-7067b04f0634}} are defined in (REF ). We use a grid search guided by five-fold cross-validation to obtain the hyper-parameters for our problem. In this technique, one fold of the measurement set is retained as a validation set and the remaining folds form the training set; each time a different fold is chosen as the validation set, and this procedure is repeated five times. We select a finite set of reasonable hyper-parameter values to perform the grid search. The performance of each combination is evaluated through cross-validation on the training set.
This approach evaluates the MAPE for each possible combination of hyper-parameter values and chooses the set that minimizes the error on the validation set. More details on the grid-search-based cross-validation technique for Gaussian process hyper-parameter tuning can be found in {{cite:4976680fddc76b528919540dd975ffefcabb482b}}. Another approach is to use historical data for hyper-parameter tuning: historical multi-time-scale measurements can be used to obtain the hyper-parameters by maximizing the log marginal likelihood of the historical time-series data. The log-likelihood can be computed in closed form, as given in {{cite:4976680fddc76b528919540dd975ffefcabb482b}}. It is important to note that the proposed approach does not require any extra training set for imputation. The parameter {{formula:140a99ed-c471-4df2-80b5-6b4b1b7045eb}} for the RGP-G approach is set to 0.05. There are three cases by which we illustrate the performance of the multi-task RGP-G approach.
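The grid search with five-fold cross-validation described above can be sketched as follows. The minimal RBF-kernel GP regressor, the interleaved fold layout, and the synthetic load profile are illustrative assumptions for the sketch, not the paper's implementation:

```python
import numpy as np

def rbf(a, b, ls):
    # squared-exponential (RBF) kernel between two 1-D input vectors
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_predict(t_tr, y_tr, t_te, ls, noise):
    # GP posterior mean with an RBF kernel and iid observation noise
    K = rbf(t_tr, t_tr, ls) + (noise + 1e-8) * np.eye(len(t_tr))
    return rbf(t_te, t_tr, ls) @ np.linalg.solve(K, y_tr)

def grid_search_cv(t, y, lengthscales, noises, k=5):
    # interleaved k folds so every validation point has training neighbours
    folds = [np.arange(i, len(t), k) for i in range(k)]
    best_params, best_mape = None, np.inf
    for ls in lengthscales:
        for nz in noises:
            errs = []
            for f in folds:
                mask = np.ones(len(t), dtype=bool)
                mask[f] = False
                pred = gp_predict(t[mask], y[mask], t[f], ls, nz)
                errs.append(np.mean(np.abs((y[f] - pred) / y[f])))  # MAPE
            if np.mean(errs) < best_mape:
                best_params, best_mape = (ls, nz), np.mean(errs)
    return best_params, best_mape
```

For example, on a smooth positive signal standing in for a load profile, the search returns the grid combination with the lowest held-out MAPE.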
We propose a new conditional GAN architecture, which is capable of directly predicting a waveform from intermediate features. In particular, we adapt the HiFi-GAN vocoder {{cite:49f840f625c4080f39729c7d7710d726b3387ef0}} for a general decoding task.
We combine ideas from ASR-based content encoding with a GAN generation approach to achieve high-quality any-to-any voice conversion.
We compare the proposed method with modern baselines using subjective and objective evaluations. According to our experiments, the proposed method achieves higher voice quality, similarity and consistency.
{{figure:43474b35-15d2-41e9-9621-2ffb16a1674d}}{{figure:c4ae6a10-2d74-4daf-9f0b-a4c41f582186}}
Typical Transformer. The typical transformer {{cite:dbc6230f454e9e643d886e3bd4e42ca26f133bf2}} here uses the ST-encoder and a typical decoder structure, with an additional future mask added to the attention function of the encoding stage.
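The future mask mentioned above can be illustrated with plain scaled-dot-product attention: positions above the diagonal are masked out so that step t cannot attend to any later step. The shapes and the single-head setup are assumptions for the sketch, not the ST-encoder's exact form:

```python
import numpy as np

def causal_mask(T):
    # True above the diagonal: position t may not attend to t' > t
    return np.triu(np.ones((T, T), dtype=bool), k=1)

def masked_attention(Q, K, V):
    # single-head scaled-dot-product attention with a future (causal) mask
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    scores[causal_mask(T)] = -np.inf          # forbid attending to the future
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)        # row-wise softmax
    return w, w @ V
```

The resulting attention matrix is lower-triangular: the first position attends only to itself, and masked entries receive exactly zero weight.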
In so far as we establish low-barrier permuted interpolation for further scenarios, these results provide support for the conjecture of {{cite:2ec148bf93ae5b9336c266a40e3eb6e8f80f50cd}}. On the other hand, to practically do so we needed to rescale the preactivations of the interpolated network, moving us out of the realm of strictly permuted linear interpolation. Our rescaling REPAIR evolved as a generalization of the method of resetting BatchNorm statistics of averaged networks {{cite:5d75a6f94c86f47d6c279c0752fb1ca3d29d1dd7}}. Such a correction appears to be necessary in order to establish low-barrier permuted linear connectivity, at practical widths and without the use of LayerNorm.
in the {{formula:699a0bd7-8347-467d-bc16-31383e63e5a3}} -smooth case. If {{formula:57a22a76-802e-4841-a32a-8f35b3bb0c7c}} is of class {{formula:d5fdc86e-a344-4fc3-9e18-e3b363915d60}} around {{formula:1386aaab-a60b-49ea-b50d-9a8738db09b7}} , then the computation of {{formula:abd02ca9-1715-4f2f-ba01-e2ee871540b7}} reduces to the computation of the limiting subdifferential (REF ) of the gradient mapping {{formula:c6cd5c9f-d163-4a26-a51c-7e07a80a9c1f}} by the scalarization formula (REF ). Besides well-developed second-order calculus for (REF ), variational analysis achieves constructive computations of the second-order subdifferential, entirely in terms of the given data, for major classes of nonsmooth functions arising in important problems of constrained optimization, bilevel programming, optimal control, operations research, mechanics, economics, statistics, machine learning, etc. Among many other publications, we refer the reader to Colombo et al. {{cite:6ac003defb3a664e1bc195c401487afa34e5e577}}, Ding et al. {{cite:ab8fec4e76265df6c2491c2d986397f357b4d3d3}}, Dontchev and Rockafellar {{cite:6e3998e35374e29ec3c34c60ac9a1bcd9aa070a1}}, Henrion et al. {{cite:ce59fb9e57c58b4f9cc85b6a8472272f5387d6c6}}, {{cite:9283d07b5ab2898f02d550653b71a690488f3d0e}}, Mordukhovich {{cite:c21c940c145eee31a3029c2d6ea86e302e52fafb}}, {{cite:8d0fdc89de57b3713be25e532ef3f88406f87d85}}, Mordukhovich and Outrata {{cite:8dc483222736cf7d148efe7cf8af2cdb6048ffec}}, Mordukhovich and Rockafellar {{cite:e6cc3004126abd26193b44f6b08126253fdd5c9c}}, Outrata and Sun {{cite:fce9135efbaf496d81d8ef689be8742de4f970bc}}, Yao and Yen {{cite:9f0e81ae60e8e70f1dd8b237477855bffc3a27ff}}, and the bibliographies therein. A new computation of this type is provided in Section  for the Lasso problem.
The fairness concept we use in this paper is the intensively studied and well-established maximin share fairness. The maximin fair share (MMS) of an agent, proposed by {{cite:b47fda484d6eaf5361fbf36cf465f482a14b66b2}} as a fairness concept for the allocation of indivisible items, is the best she can guarantee if she is allowed to partition the items into {{formula:ce811df7-bb36-4fd4-9f3c-0b6d9b0e9dc5}} bundles but then receives the least preferred one. The concept coincides with the standard proportionality fairness concept if the items are divisible.
It has been proved by {{cite:ef6cdf7b077d851323739d597ef22146dd692660}} and {{cite:0ed0282c07ee6bc11cb62bda02eaa58e814e1a84}} that there may not exist an allocation such that every agent's utility is no worse than her MMS.
As a result, significant effort has been focused on algorithms that find approximate MMS allocations {{cite:cf64bad81f9ec19eb7ca114dc0510cdd539316a0}}, {{cite:0ed0282c07ee6bc11cb62bda02eaa58e814e1a84}}.
In recent years, {{cite:fea5bccf8234ede1c8f18b1b200240ee5f5bbc7d}} and {{cite:a33841bdde1fbf1c5d1383dfbfd2657245087245}} obtained state-of-the-art algorithms that find {{formula:a160dd4b-7b21-4eab-87cc-9d37f92c3379}} - and 11/9-approximate MMS fair allocations for goods and chores, respectively.
For a more detailed literature review, please refer to Section .
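As a concrete illustration of the MMS definition with additive utilities, a brute-force computation (enumerating all k^n bundle assignments, so only feasible for tiny instances) might look like this; the function name and additive-utility assumption are ours, not the cited papers':

```python
from itertools import product

def mms(values, k):
    """Maximin share with additive utilities: the best achievable value of the
    worst bundle over all partitions of the items into k bundles (brute force)."""
    best = 0
    for assign in product(range(k), repeat=len(values)):
        bundles = [0] * k
        for item_value, b in zip(values, assign):
            bundles[b] += item_value
        best = max(best, min(bundles))
    return best
```

For instance, with item values [5, 1, 1, 1] and k = 2, the best partition is {5} versus {1, 1, 1}, giving an MMS of 3.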
In the present study we identify interesting links between quantum groups {{cite:9377e2a917e4cb640f47e07d51bf765a14e7b0e3}}, {{cite:dd260d3ec10d5f82ca9a10cc568c5b1f1e7de234}}, {{cite:8be025cc045acae2cadc614ede0b71b4e705e417}}, and tridendriform {{cite:156e76db96c333c75e97c932fa56d9e3fdc8053b}}, {{cite:f3512df12141c8a6872fd91f50e3a8094a8754f7}} and pre-Lie algebras (also studied under the name chronological algebras) {{cite:7fac3b262adcfabb185126557af464acdb024f82}}, {{cite:8c5861d318c68b02897e395fd712e52f4e95062c}}, {{cite:a76f68a0d792ac559c13d66ee8c008c6be973253}} (see also {{cite:9348796470f8caa3e1781e4210b248e3f60b23f2}}, {{cite:e65b29de53f58d0bff88f4e748d081021949cb8e}} for recent reviews). Specifically, we systematically derive the discrete analogues of the Dyson series {{cite:afce0536eb65935dfa23fd179ba79f03f0ab49b2}} and the Magnus expansion {{cite:f3b72a112c5c0e6f1aa35e6a68365de5b654c874}} as solutions of a discrete evolution problem. We then show that the Dyson series members are expressed in terms of a tridendriform algebra action, whereas the discrete Magnus expansion members are derived in relation to a pre-Lie algebra action, in analogy to the continuous case (see also relevant findings in {{cite:2caffd9fcda062f9fa3069d1f67f3571955b085f}}). The use of Rota-Baxter operators {{cite:e6f9f881c3f072f4a5cc32c7fd859595904077e4}}, {{cite:e2b8f1ff6d97e02fe9fe6d0d2afd25fcc6139532}} has been essential in expressing the discrete series in connection with tridendriform and pre-Lie algebra actions. On the other hand, tensor realizations of quantum groups {{cite:9377e2a917e4cb640f47e07d51bf765a14e7b0e3}}, {{cite:dd260d3ec10d5f82ca9a10cc568c5b1f1e7de234}}, such as the Yangian {{cite:dd260d3ec10d5f82ca9a10cc568c5b1f1e7de234}}, are also solutions of a discrete evolution problem. Hence, we deduce that the coproducts of the elements of the Yangian can be re-expressed in terms of suitable tridendriform and pre-Lie algebra actions.
In this paper, we will use a version of the delta method due to Duke, Friedlander and Iwaniec. More specifically, we will use the expansion {{formula:5e6b5b7f-c06f-4cdd-8f5d-a05e2606b6e6}} given in Chapter 20 of {{cite:25fa08545571e51879c66ff4599370ac89919b1d}}. Let {{formula:4080d848-9eab-4b93-9112-2f979277c5ae}} be defined by
{{formula:5b44998f-10b9-4f53-928f-a28fc5023d6d}}
where {{formula:56aaa51e-00ec-42ec-bf01-f88188952261}} . Since parton suppression {{cite:6538761357d0340ff9a2331003b2bebfdeb7d0ed}} is defined as {{cite:73d90d792a06e74110b36d6c3ecb64c343553993}} {{formula:90ba20df-2117-4c5c-b8b0-c06b379a4daf}} , we finally obtain:
{{formula:2085c151-1c85-4219-a078-f1ee18f96c23}}
We also apply our method to a version of the well-known Berkeley child growth
study data {{cite:9ccc849a3d3744d4fa4c79729d789ebd5064bdd9}}.
The data contain annual measurements of the body heights of 39 boys and
54 girls from ages 1 to 18.
The focus lies on the first derivatives of the data,
i.e., the speed of growth in different stages of childhood and adolescence.
We simulate full incompleteness in these data by drawing an artificial
initial age for every child in the first quarter of the time domain as well as an
individual drop-out year in the second half of the domain.
The observed curves with simulated incompleteness
are visualized in Figure REF .
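The first derivatives discussed above can be approximated from the annual measurements by finite differences; the saturating height curve below is synthetic and only mimics the shape of a growth curve, it is not the Berkeley data:

```python
import numpy as np

ages = np.arange(1, 19)  # annual measurements, ages 1 to 18
# synthetic, saturating height curve in cm (NOT the Berkeley data)
heights = 75.0 + 55.0 * (1.0 - np.exp(-0.25 * (ages - 1))) + 0.5 * ages
# central finite differences approximate the speed of growth (cm/year)
growth_speed = np.gradient(heights, ages)
```

With incomplete curves as simulated here, one would restrict the differences to each child's observed age range.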
We observed single-mode operation from both InP-NBs and InGaAsP-NBs. Figure REF shows the lasing spectra of the two InP-NBs at room temperature. Single-lasing peaks were observed at 1534 and 1594 nm. From the SEM image, the structural parameters were estimated to be: {{formula:05c0f928-842a-4d07-9e1d-5f7f93bae104}} = 563 nm, {{formula:231aeed0-ea15-4d1e-919a-7f4d222e2732}} = 0.415{{formula:e66522be-5371-4043-99c4-a79bd60522b5}} , {{formula:0c93edd2-60a3-4e55-b602-3b806b3aa5e4}} = 0.399{{formula:00c7e68c-20d6-44f0-b875-9c95257063ce}} , and {{formula:c34af814-3395-449e-aab7-cc68eb4f1de9}} = 1.60{{formula:25c7fb31-bd18-4aca-a1cb-0f10263d4e30}} . With 3D FDTD, we calculated the resonant modes using the estimated parameters to verify the lasing modes. One TM mode with a high Q-factor was found near a lasing wavelength of 1534 nm. The calculated wavelength and Q-factor were 1555 nm and 160,000, respectively. In addition, the {{formula:11f57d1b-3483-40b2-963b-6f12c9ee3dc6}} field profile was identical to that of the target TM BE mode, as shown in the inset of Figure REF (a). In particular, the electric field intensity was strongly localized at the boundary of the NB, where the QW was lightly etched laterally {{cite:5aaef9da8761f16e24168d72c0a55d62dbb1d3cf}}, {{cite:ed11e361f0004afc97a373997811ec12dbab5eb6}}. There was also a TE cavity mode (See Figure S4(a) in the Supplementary Material). However, the calculated wavelength was 1508 nm, which is far from the lasing peak, and the Q-factor was 1,000, which is less favorable for lasing action. We performed the same analysis on the asymmetrical InP-NB laser, as shown in the top inset of Figure REF (b). 
The estimated structural parameters were as follows: {{formula:3e1fec7c-fe87-45f3-9249-1d698c17eaf1}} = 571 nm, {{formula:5510b74e-d0dc-4f94-9e67-d11d14890e81}} = 0.407{{formula:69810e74-46bf-4b56-a3ce-b350e65c84da}} , {{formula:1cbf5c64-453d-4a78-940a-f74fb9755418}} = 0.349{{formula:36743bd4-c7a5-4126-8388-404453392610}} , and {{formula:90f860f3-e9c9-4559-bdb1-c527c8bc4e5c}} = 1.10{{formula:db81560e-50d3-4422-bb33-22a414b7a82c}} . We found a TM mode with a wavelength of 1618 nm and a Q-factor of 17,000, as shown in the middle inset of Figure REF (b). There was a TE mode at a wavelength of 1441 nm, but the Q-factor was 520 (See Figure S4(b) in the Supplementary Material). Owing to the inherent radiation characteristics of the TM mode, the emission was not captured with an infrared charged-coupled device (IR CCD). In addition, because of the poor thermal conductivity of the QW etching NB cavity, we could not obtain the light-in-light-out (L-L) curve and near-field image at the lasing action.
where {{formula:eddd1fd3-fceb-44e4-a1be-4381f60ee365}} is the cosine similarity to measure the similarity between two representation vectors, and {{formula:c5ba55a0-6053-4836-8e59-c4121de3092f}} denotes the temperature scaling factor as defined in {{cite:3770a7d659561ccc39b1f96367795b38383c8eea}}.
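The similarity term can be illustrated as follows: cosine similarities between representation vectors are scaled by the temperature and normalized with a softmax over one positive and several negatives. The function names, vector shapes, and single-positive setup are assumptions for the sketch, not the paper's exact loss:

```python
import numpy as np

def cosine_sim(u, v):
    # cosine similarity between two representation vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def positive_probability(anchor, positive, negatives, tau=0.1):
    # temperature-scaled similarities, softmax over {positive} plus negatives
    sims = np.array([cosine_sim(anchor, positive)] +
                    [cosine_sim(anchor, n) for n in negatives]) / tau
    e = np.exp(sims - sims.max())       # numerically stable softmax
    return float(e[0] / e.sum())        # probability assigned to the positive
```

A smaller temperature sharpens the distribution, concentrating probability mass on the most similar pair.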
{{cite:49d48df0b88db4d195979d74c81286c5a0a15e98}}, where rates of convergence have also been established.
Recently, there has been a growing focus on hierarchical approaches in reinforcement learning that employ the options framework {{cite:b44b4801fbc1ecc11f16b15d730d4535359c5719}}, {{cite:f100d993994e95691e28f2baca0482622ef7a808}}, {{cite:8c2bcffad75e189bb67c785c81c77dc097086d93}}.
Combining planning with RL has been studied before for robotics and control, mainly focusing on gradient-based planning methods {{cite:faf66e14746d2052025154eb25129081917f074e}}, {{cite:ce039f6f8a36ad059fe0de5fd6689fda23c99031}}, {{cite:5e598c543d46ae78fd194b02100b5753fc4541dd}}, {{cite:b9a53fb04abe0e5037fd978a16dfaf0448d5cd66}}.
These approaches provide the ability to learn an elaborated policy containing low-level actions while focusing on learning skills on a higher level.
Furthermore, explanation generation, as answers to the model-checking questions mentioned above, transforms the planning problem into a black-box optimization problem {{cite:af4817c6f10122ddbfc72bd8bdbd86632dd6cdef}}.
Given the objective, the domain dynamics are treated as constraints of the (non-linear) optimization problem.
Therefore, performing policy search using RL in this parameter space yields an optimal plan {{cite:a96a901b6296a142f9e93841e960a2d9efc5e9d2}}.
Quantum dots (QDs) in graphene are very small particles with unique electronic and optical properties {{cite:4bb09a162e086f2dc1be247c392fbe4cefc1e8c7}}, {{cite:c15b4b9baf718846cd1cf81a2b1ff276fd060753}}, {{cite:804a1fc050a924bcb243476a9a148e3fd6dc1fda}}, {{cite:979e64a6131cf4449fa124275f713a7b1e8f355e}}, {{cite:81b5123f9608d95df7e8fc02bace7cb52f7ce5a1}}, {{cite:74eb8b914a921c7c302331d282adb4fef2a437b8}}. Since they are highly tunable, they can serve as interesting building blocks for materials that might be used to advance a wide range of applications, such as solar cells, medical imaging, and quantum computing. Because of the Klein tunneling effect and the absence of a gap in the energy spectrum, Dirac fermions cannot be confined by electrostatic potentials {{cite:b9c297c48286f17390bb057c195a053549f758b0}}. One solution to overcome this limitation is to realize QDs using, for instance, thin single-layer graphene (SLG) strips {{cite:583d6480e9ca4db4f18732e1db921b5f887c3811}}, {{cite:8d783340e1e32b6248bb7c825995e14d4b6e207c}} or nonuniform magnetic fields {{cite:7ca3e7e9e280212c71fe1dc092f9eca9ef1a52cc}}. Generally, the electronic and optical properties of fermions in graphene depend on the shapes and edges of the QDs {{cite:fb6642af1742c0d7bca9783486ff8ae88ad7dbb9}}.
NORB environment: We use the NORB data set {{cite:e97f03dd590b4828c1bc7395b08d9a4b888fccbe}} for our experiments. This data set consists of images of objects
from different viewpoints and we create viewpoint-matching tasks from the data set.
In the past decades, the gas-kinetic schemes (GKS) have been
developed systematically based on the Bhatnagar-Gross-Krook (BGK)
model {{cite:d6ce89a3af5251451a9894cbf3cfc222d19ece21}}, {{cite:01c3339b239076b953df546473f5b3c0135eb2d7}} in the finite volume framework, and applied
successfully from low speed flows to hypersonic ones
{{cite:7f65fd302cd1dee7840c2566bfa8993a8f297c81}}, {{cite:529beb956ddfbafe4135d85bd6d591f9599e3004}}. The gas-kinetic scheme presents a gas
evolution process from kinetic scale to hydrodynamic scale, where
both inviscid and viscous fluxes are recovered from a time-dependent
and genuinely multi-dimensional gas distribution function at cell
interface. Based on the two-stage fourth-order temporal
discretization for Lax-Wendroff type solvers
{{cite:81d455dbdf0e94745ddcecedab8ed78180846c80}}, {{cite:4f0d7544e8a7e9a852997d7997535694ef974d4a}}, the high-order gas-kinetic schemes
(HGKS) have been constructed and applied for the compressible flow
simulations {{cite:22442a14f194adb716d9a8013c7f95f910de8e75}}, {{cite:224f515ce8038c48411f6e2dfe4438ff2abe3b2f}}, {{cite:db14fdab12c5eeed631c3904f97f3b2e73ded636}}. The
fourth-order and even higher-order temporal accuracy can be achieved
with the implementation of the traditional second-order or
third-order GKS evolution model. More importantly, HGKS is as robust
as the second-order scheme, and works perfectly from the subsonic to
hypersonic viscous heat conducting flows. With the implementation of
three-dimensional WENO reconstruction, the two-stage fourth-order
gas-kinetic scheme has been successfully implemented in the direct
numerical simulation (DNS) for compressible turbulences
{{cite:761673d4cc8d75c419babf1f937a4bf5ebf810ff}}. To improve the efficiency, a parallel code of
HGKS is developed {{cite:761673d4cc8d75c419babf1f937a4bf5ebf810ff}}, where the two-dimensional
domain decomposition and message passing interface (MPI) are used
for the implementation of parallel computing. The scalability of
MPI code is examined up to 1024 cores on the TianHe-II
supercomputer, and the MPI code scales properly with the number of
processors due to the explicit formulation of the algorithm. With
the parallel code, HGKS provides a powerful tool for DNS studies of
subsonic to supersonic turbulence.
When integrating machine learning into numerical algorithms, it can be beneficial to resort to simulation methods whose mathematical structure is easily compatible with neural networks. The present work shows that a highly suitable CFD approach for this purpose is the lattice Boltzmann method {{cite:b6ee4d3defd8c155b3d43fbac0db9a23cd1da994}}, {{cite:14f0de08a5aaaa447c1939bc2796b078952eda25}} (LBM), which is particularly competitive in the fields of transient, turbulent, or multiphase fluid dynamics. The LBM is a second-order accurate simulation method that exhibits performance similar to classical finite difference schemes {{cite:c72f3c56583dbd8840a91fcdb306f4866365306d}}. In contrast to classical solvers, it involves a linear streaming of particle distribution functions on a regular grid and a local collision step. Despite its successful application to many fluid problems, recent studies have only scarcely addressed possible combinations of ML and LBM. As a prototypical approach, Hennigh {{cite:dacd124beafc508a1710316d3f98342ee1ebf56c}} has demonstrated the potential of ML-driven flow prediction based on LBM. He compressed flow fields onto coarser grids using convolutional autoencoders and learned the propagation of the latent representations, which can then be decoded back onto the fine grid. This approach, however, has limited transferability, as it is primarily data-informed and does not encode the underlying physics. Furthermore, Rüttgers et al. {{cite:a8e5f76ee60481462bbc93430aacb0024ed5ec74}} have applied deep learning methods to the lattice Boltzmann method to predict the sound pressure level caused by objects. They introduced an encoder-decoder convolutional neural network and discussed various learning parameters to accurately forecast the acoustic fields. To the best of our knowledge, no further ML-enhanced LBM methods have been proposed.
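The streaming-plus-collision structure that makes the LBM attractive here can be sketched in a few lines. This minimal D2Q9 BGK step on a periodic grid is a generic textbook illustration, not the solver used in this work:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    # second-order Maxwell-Boltzmann equilibrium distribution
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.sum(u ** 2, axis=-1, keepdims=True)
    return rho[..., None] * w * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lbm_step(f, tau=0.8):
    rho = f.sum(axis=-1)                            # density moment
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]  # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau         # local BGK collision
    for q in range(9):                              # linear periodic streaming
        f[..., q] = np.roll(f[..., q], shift=c[q], axis=(0, 1))
    return f
```

The step is purely local (collision) plus a fixed shift (streaming), which is exactly the regular structure that maps well onto convolutional network layers.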
The QAS resembles a simple positive-mining strategy; it is intuitively reasonable that there should be more severe punishment for pseudo labels with higher confidence. Moreover, compared with semi-supervised and supervised tasks that focus on simple/hard negative samples {{cite:964636909b28e56e4170cbc79670afde0c96b062}}, {{cite:5897733ea07b3c29bde7d57af0fa48f6865cf0d9}}, it is more critical for UDA Mono3D models to prevent the harmful influence caused by low-quality pseudo labels near the threshold. Such an instance-level weighting strategy balances the loss terms based on foreground confidence scores and significantly improves the effectiveness of STMono3D.
{{figure:0da89104-43f4-4b84-b30f-e879d333692a}}
We also quantitatively compare our DiFa with competing methods {{cite:71b00ef33dc13aa099bc076d63f958297082bf54}}, {{cite:fd9c12ec7cd158c94e6448e56b3a8f03091e121d}}, {{cite:28abd7896c9f13a01326d8a1592acd95650c4866}} under six settings, i.e., {{formula:935d8f03-3f62-4cd6-9dcc-82281d535e05}} and {{formula:a0847cde-1bc8-40f9-b410-0c9c33bdb048}} .
For each setting, we randomly sample an image from a target dataset to perform adaptation, and report both the Kernel Inception Distance (KID) {{cite:e0c06f9b895175bc3ee120c696db8f6c0deb5988}} and the Fréchet Inception Distance (FID) {{cite:ecc60fb616dad81cbdfe870c3f54c058fcbd0567}} metrics.
To reduce random sampling error, we repeat this five times and use the mean value as the final score.
The results are listed in Table REF and Table REF .
One can see that our DiFa clearly outperforms the competing methods, which is consistent with the qualitative results in Fig. REF and Fig. REF .
We note that FID cannot reflect the overfitting problem very well when the target dataset is extremely small {{cite:d5dd4bfe276445d09f54429f60189881ad629879}}.
Although FSA {{cite:71b00ef33dc13aa099bc076d63f958297082bf54}} obtains better FID scores on the Amedeo and Fernand datasets, it suffers from severe mode collapse (see Fig. REF (d)).
In HAO, we expose a large design space in both hardware and algorithm configurations to accelerate DNNs.
To efficiently navigate the search space, we first apply integer programming to prune the hardware configuration space by minimizing the latency subject to a set of hardware resource constraints. We then narrow the DNN architecture space by adopting Monte Carlo tree search (MCTS) {{cite:c59df94293a18e2034ba0380e89ffb011a9a1362}} to minimize the quantization accuracy perturbation while satisfying a given latency constraint.
In addition, we develop an accuracy predictor to estimate the accuracy of the DNN to further reduce the overall feedback time for each sample.
Our flow produces a Pareto-optimal curve between latency and accuracy.
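The hardware-side pruning step can be illustrated with a toy search that discards configurations violating resource budgets and ranks the rest by a latency model. All cost models, option names, and budgets below are hypothetical stand-ins for the actual integer program, shown only to make the prune-then-rank idea concrete:

```python
from itertools import product

def prune_hw_configs(pe_options, buf_options, lut_budget, bram_budget):
    """Keep only configurations satisfying the resource budgets and return
    them sorted by modeled latency. The cost and latency expressions are
    hypothetical placeholders for the ILP's coefficients."""
    feasible = []
    for pe, buf in product(pe_options, buf_options):
        lut = 500 * pe + 20 * buf           # hypothetical LUT cost model
        bram = 2 * buf                       # hypothetical BRAM cost model
        if lut <= lut_budget and bram <= bram_budget:
            latency = 1e6 / (pe * min(buf, 64))  # hypothetical latency model
            feasible.append((latency, pe, buf))
    return sorted(feasible)
```

In the real flow, this feasibility filtering is what shrinks the joint hardware space before the algorithm-side MCTS runs under the resulting latency constraint.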
The {{formula:a5906644-d253-4e6a-abc5-76152af21684}} decay distributions depend upon hadronic form factors. These form factors can be determined with HQET techniques, which are presently known at {{formula:279856b5-9808-4ee6-afe2-0e11c8d4f065}} . In this work we use the HQET form factors in the form parametrized by Caprini et al. {{cite:4f86db0bcb9f37705c230beb57f1b4e0de81fa92}}. The parameters for {{formula:4b5b5272-4841-465a-9fbf-4f5017f0b492}} decay are determined from lattice QCD {{cite:7cc35db13da04ad8ddcaec315a5c37ccec6d3512}} calculations, and we use them in our analyses. For {{formula:87c82650-e3eb-4aa7-8551-e3aa2c3651e1}} decay, the HQET parameters are extracted using data from the Belle and BaBar experiments along with inputs from the lattice. In this work, the numerical values of these parameters are taken from refs. {{cite:54aa093e0edee12a4e74d3ae3a4d66a9c21de991}} and {{cite:f887f77a187aae67fd34b4196669aaec545026de}}. The form factors for the {{formula:38f4a3ee-9296-488d-867b-e1b7035ed7e0}} transition and their uncertainties from ref. {{cite:6292815d7f11cb67697345c7d319126ba2ad6141}} are used in the calculation of {{formula:c0e44275-9191-4f39-993c-523b955f0abb}} . These form factors are calculated in the perturbative QCD framework.
Although significant progress has been made in one-step {{cite:2ce632fe83282e1b22d6c975984a9c91fa80f32d}}, {{cite:c57bb6336116f8cc6444312c3fe0c518145fc4f3}}, {{cite:3a5bf8f03b5d5b675288d95137071f7fcd1de1d3}}, {{cite:989ddebb7a42fa1b867ffd4da10fc33e5b855fa9}}, {{cite:a4c3f57e1326ce9faa4a49037b99c5b769ed8b29}}, {{cite:77b330664f0abdfaddd18aecee5f3fe6d744762a}} and two-step person search {{cite:486f74a5618941dca9e65a026686d45755976211}}, {{cite:99f95906749ccbf9f9f9561c1d0b4f3b7c8063ed}}, {{cite:ac7a486f99384d23a1a06fc242c4a3ed9d639573}}, {{cite:9bd6b987db8ac9c8641cb3867fa504d4d5a80796}}, {{cite:9f18f8bfd0f4d61247d3117f9fd5e31dfbc758e5}}, it is still hard to match the correct persons using individual box-level re-ID features. Different from the person re-ID task, person search takes scene images as input, which makes the relationships among persons in one image available. In reality, people are likely to walk in groups and communicate with each other. Even when a person walks alone, other neighboring pedestrians in the same scene provide important contextual clues for searching. For example, in Figure REF , it is hard to distinguish the right person by appearance only. However, this becomes easier if we resort to the contextual persons. This indicates the importance of contextual clues when individual features are uncertain because of illumination, large pose variance, occlusion, etc.
We first trained selected models with ASWL on MNIST, which contains 10 different handwriting digits with 60,000 training images and 10,000 testing images. Each of the models was trained with various pruning factors of 1, 1.5, and 2 for 100 epochs by the Adam optimizer at a learning rate of 0.001 and 0.98 decay for each epoch. The attentions of each layer are initialized at 0.5. The hyper-parameter {{formula:605e1552-75f1-4df6-b544-984bb16e4c4a}} (the sparsity regularizer coefficient) is used mainly to achieve a desired pruning ratio and set at 0.5 for all models. Following {{cite:a96a34cdfb9aa13311ae807bce25187e68b3bc88}}, {{cite:6554ddf31a0f32528d4361ade999b2bf501661ec}}, and {{cite:3c312780b980839691e8e9488698d07681c31434}}, the other hyper-parameter {{formula:236cf7ed-cf5c-4df8-9757-f70a8567da8f}} (the L2 regularizer coefficient) is set at {{formula:16315d20-a629-4605-b2ba-537ceab98e6b}} , {{formula:a7f04489-a5ae-4be1-b27a-b4d018f5bb73}} , and {{formula:f6567415-89bb-4838-b9cd-44b2c0db60cd}} for VGG16, ResNet 56, MobileNetV2, respectively. The models with the best results are selected.
| r | cc1f79792c388f157739e23d16ac0d12 |
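As a rough illustration of how the two regularizer coefficients described above could enter a training objective, the following sketch combines a cross-entropy term with a sparsity penalty on the per-layer attentions (coefficient 0.5, as in the text) and an L2 penalty on the weights. The additive composition, the helper name `aswl_loss`, and all numeric values other than the stated coefficients are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch: cross-entropy + sparsity penalty on per-layer
# attentions (lam = 0.5) + L2 penalty on weights (mu = 5e-4).
# The composition is an assumption, not the paper's exact objective.
def aswl_loss(ce, attentions, weights, lam=0.5, mu=5e-4):
    sparsity = lam * np.sum(attentions)           # pushes attentions toward 0
    l2 = mu * sum(np.sum(w ** 2) for w in weights)  # standard weight decay
    return ce + sparsity + l2

attn = np.full(4, 0.5)                   # attentions initialized at 0.5
ws = [np.ones((3, 3)), np.ones((2, 2))]  # toy weight tensors
loss = aswl_loss(ce=1.2, attentions=attn, weights=ws)
print(round(loss, 4))  # 1.2 + 0.5*2.0 + 5e-4*13 = 2.2065
```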
The average number of semantic categories per scene in the UC Merced and DeepGlobe Landuse Classification datasets used in this paper is 3.39 and 2.51, respectively, as depicted in figure REF . This implies that a given scene from the UCM dataset with a given image-level label will have 3 or more different pixel-level labels (semantic categories). Similarly, the DeepGlobe dataset has about 2 or more semantic categories per scene on average. The UCM dataset has a total of 18 semantic categories and DeepGlobe has 6. Thus, each satellite scene in the UCM dataset contains about 18% of all pixel-level labels on average, and each satellite scene in the DeepGlobe dataset contains about 42%. Figure REF also shows that about 90% of scenes in the UCM dataset and about 80% of scenes in the DeepGlobe dataset have more than 1 semantic category. This proportion is quite high compared with generic standard datasets. For instance, consider the COCO dataset {{cite:34ebdb47ab1ec0db4f37e61b9f0d697f7fe639f8}}: less than 30% of its images have more than 1 semantic category. This tells us that land-use scenes in the domain of satellite imagery are inherently more diverse, and hence our method is particularly applicable to land-use classification in satellite images; it will yield a more diverse set of samples for the satellite domain than it would on generic datasets like COCO.
| m | e4333ed5c54724b551072cc4c324477a |
Now {{formula:fbee1da2-b4f3-4ab6-ad30-6650bf759cf0}} is a Chevalley group of Lie type {{formula:00513773-b034-4dd1-b2b6-d932e7f49b95}} . By {{cite:e40bae7a653afd4292bcb333f8b8ecf928a04723}}, {{cite:e288e554d75fd8a9131afd22351c805d7212edbc}}, {{cite:028309a9197c010fce139b38d4e1895a68c13141}} it has a natural extension,
its Steinberg group {{formula:e93ebe08-069e-41ae-95c1-da9cc135cb84}} ,
which is defined by means of a presentation by generators and relations.
| i | 096a7fb271f0aba2cab87001f3d6a082 |
From the perspective of theoretical computer science,
there has been a lot of progress towards understanding the information theoretic and computational limits of solving the SBM problem since the 1980s {{cite:9d70c0ba8b3f00ecfd38c98f32dc4105206bd6e0}}, {{cite:bff25f17e052abf4b48cd7a4b69dfa70e12208c7}}, {{cite:8619debbc7ffc6443d27d84b6376737d72c0fd87}}, {{cite:f649073132e52697004a572f37e3acaea403e6a3}}. We refer to the recent survey by Abbe {{cite:f279a87388a23a5c88e08e3ff377fd3d01ca5134}} for a comprehensive list of such results.
In this paper we study one of the prominent problems in relation to SBM,
which is the design of simple spectral algorithms for community detection.
Spectral algorithms have a long history of being used in the random graph paradigm {{cite:f649073132e52697004a572f37e3acaea403e6a3}}, {{cite:4165bea492da2e31204a0a23493944cb760da8ba}}, {{cite:da6edf85063cf0f51dc4e37165bc01d580c8628d}}, {{cite:9562fe261855f535e171da1124c3abf6c0c7078f}}, {{cite:86a60bf5cf7cdb38b71ce7b8ee5b15c46e978130}}, {{cite:f4acda857acccf51206391ebe05795cecfacf688}}.
Several papers in this area have concentrated on finding the simplest spectral algorithms (sometimes called vanilla algorithms {{cite:86a60bf5cf7cdb38b71ce7b8ee5b15c46e978130}}) that can solve the SBM problem {{cite:da6edf85063cf0f51dc4e37165bc01d580c8628d}}, {{cite:9562fe261855f535e171da1124c3abf6c0c7078f}}, {{cite:86a60bf5cf7cdb38b71ce7b8ee5b15c46e978130}}.
| i | f9c405386ca082b852ec6443daf6e037 |
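A minimal "vanilla" spectral algorithm of the kind discussed above can be sketched as follows: sample a two-community SBM and label vertices by the sign of the eigenvector associated with the second-largest eigenvalue of the adjacency matrix. The parameters (n = 200, intra-edge probability 0.5, inter-edge probability 0.05) are illustrative assumptions chosen well above any recovery threshold.

```python
import numpy as np

# Sample a symmetric two-community SBM adjacency matrix.
rng = np.random.default_rng(1)
n, p, q = 200, 0.5, 0.05
labels = np.array([0] * (n // 2) + [1] * (n // 2))
P = np.where(labels[:, None] == labels[None, :], p, q)
A = np.triu(rng.random((n, n)) < P, k=1)
A = (A + A.T).astype(float)

# Vanilla spectral step: sign of the second eigenvector splits communities.
vals, vecs = np.linalg.eigh(A)           # eigenvalues in ascending order
guess = (vecs[:, -2] > 0).astype(int)    # eigenvector of 2nd-largest eigenvalue
acc = max(np.mean(guess == labels), np.mean(guess != labels))
print(acc)  # near-perfect recovery in this easy regime
```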
Our input data for training our RNN were aggregated financial transactions of approximately 26,000 customers, while the output data were
their personality traits {{cite:6b7acd743b0c69866db8fbd9b4be0dd304079acd}}. The transactions were aggregated annually across 97 transaction classes, such as groceries, transport, leisure, etc., over a
period of six years. The personality traits were based on the Big-Five personality model and were calculated using a set of {{formula:2d05ed36-10cc-4c5a-b9c5-48ea56f0eaf6}} published coefficients linking transaction classes
to personality traits {{cite:7730a49734ca77530e82559d895355c2a1ec803c}}, {{cite:6b7acd743b0c69866db8fbd9b4be0dd304079acd}}. Our RNN consisted of three long short-term memory (LSTM) nodes
{{cite:d89d478882864a68f1cec44314b7541ada206610}}. After training and during prediction, we inspected the activations of the three hidden nodes for each of the six time-steps; each customer was
represented by a trajectory with six data-points in a three-dimensional space, where each dimension represented the activation of one LSTM node.
These trajectories were our extracted features which may be used for micro-segmentation of customers. To provide an explanation for the model, we trained
a linear regression model - an inherently transparent class of models {{cite:bf542955b8b86305d31c3a24c2be09259a21c51f}} - to replicate the trajectories from each customer's aggregated spending
distribution. Further inspection of the trajectories' behaviour in relation to the associated customers' fuzzy grades of membership in the personality traits provided an interpretation of the extracted features. The results are discussed in the following section.
| m | 80491f271ae6b69dd8d29941c28f1286 |
As discussed in Fig. REF , we believe that the implicit relational knowledge of an edge is related to the low- and high-frequency information of the features of the paired nodes. In other words, the representation of the edge strongly depends on the difference in frequency information between the connected nodes. GCN-based aggregation, as pointed out by many references {{cite:0b0038acfe3061ef9a53e0e494672ed5f5383b69}}, {{cite:0b490012f8d3461711456442db51a4510c8d565f}}, is essentially a smoothing of the features of connected nodes {{cite:6f28322a403f0539ffcb6a4d784504ff2dfd6137}} that captures the similarities of graph structure and features. This causes implicit relationship patterns to become vague and indistinguishable in the learned representations.
| d | 161ed500b1dce0a0f8c6b4f3c3398f92 |
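The smoothing argument above can be seen in a two-node toy example: mean-style aggregation with self-loops annihilates the purely high-frequency (difference) component of the node features, which is exactly the information an edge representation would need. The graph and feature values are illustrative.

```python
import numpy as np

# Toy graph: two connected nodes carrying a purely high-frequency signal
# (their features differ maximally). Mean aggregation with self-loops
# maps this signal to zero, i.e., the difference information is lost.
x = np.array([1.0, -1.0])                    # high-frequency component only
A_hat = np.array([[0.5, 0.5], [0.5, 0.5]])   # self-loop + mean aggregation
smoothed = A_hat @ x
print(smoothed)  # [0. 0.]
```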
In this section, extensive experiments are conducted to verify the effectiveness of the proposed approach. We first design comparison experiments with state-of-the-art (SOTA) KD methods (e.g., AT {{cite:78d9dfad25eb8409375f47698536df7aa359cfb8}}, SP {{cite:ed951438b91fb5d07a208ee133f2c4465e07aa69}} and RKD {{cite:b30679fe545c0ae5e49f8c30733f184f81faf736}}) to test our approach. Additionally, different network architectures (e.g., ResNet {{cite:bfea1aa3ec0d9075e58755a8d4b14fa24595d91e}}, WideResNet {{cite:83038211486695fdbe3f6063f8aa25c97988a775}}, VGG {{cite:37f180ee4cf1c77679624475d7e0620989e9d556}}) are explored in ablation studies.
| r | 149e96d0fd457e797c27079b543618e3 |
BQCD is a Hybrid Monte Carlo program for simulating lattice QCD with
dynamical Wilson fermions. It was first published at Lattice 2010
{{cite:e29e8686cc8b71d0c2d1bc01bf8bdcd4d3469386}} and has been used by several groups: the
QCDSF-UKQCD collaboration {{cite:2fda2e4cb6e5228b39c92403cc3fa8eb0ca02a08}}, {{cite:b076bf1a816434a7a1f679682848fe447a3797f3}}, {{cite:bd9c5f1f79b5ef628fe499ceba2e3664c10ccdc2}}, {{cite:0bc1ff25908443388c1f13bd7a59edb85a28ecc2}}, {{cite:d3037ec23e3bcba3d247abc3664e8c71020f0339}}, {{cite:aa79f75b64e49f5e03c862aabe90a428ffe29d77}},
CSSM {{cite:d3037ec23e3bcba3d247abc3664e8c71020f0339}}, {{cite:aa79f75b64e49f5e03c862aabe90a428ffe29d77}}, {{cite:41bc43fcd9a72f589c6ce0635098f0618d4be6e3}}, {{cite:10435068b63c9fc54b4d9684646d91b71bbd0dea}}, Japanese finite density {{cite:d948f1251de6cde4cafc52bda7309962c052fb07}}, {{cite:c3c3170dba1db9bdd38e56a7285765866b26f468}} and finite temperature {{cite:041908e309518633eedbe65a5bcfaed6e6491fac}} projects, and
the RQCD collaboration {{cite:70ebc90951a6df39beb740f3dc6ea85f63811e38}}.
| i | 5a96a58613774d701595fb9ce6df48cd |
In this paper, we extend this result to include the full Kerr-NUT-(A)dS class of spacetimes {{cite:74ed039a339991f3941f0f2cb177ef70c0807f62}}. Our extension is possible because these spacetimes all possess a series of Killing objects (vectors and higher tensors), all related to the `principal tensor' of the spacetime ({{cite:f8494c1631fb6239d38e7eb0558a7a31d70559f0}}; for a recent review, see {{cite:aa577cd042bc720660bdff162fcba281de8965fb}}). The principal tensor, a non-degenerate closed conformal Killing-Yano 2-form, not only provides for separability due to the conserved quantities from the Killing objects (see {{cite:aa577cd042bc720660bdff162fcba281de8965fb}} and citations therein, as well as {{cite:6fb800ac1828bf79f66fd5d18617b5efa5b579f7}}),
it also ensures the spacetime is algebraically special {{cite:4b9ffd32288c29ee7afbae4d0173c7cbaa192ed2}}. Indeed, the existence of the principal tensor, together with the assumption that the metric is a vacuum solution to Einstein's equation, uniquely identifies the Kerr-NUT-(A)dS class {{cite:de4ce107f69b82b389146d86bbe3d6f3ea1fd86d}}.
| i | dea644cfe1ea35759ca94ef48f0a9c6a |
If {{formula:98184e34-5a65-48bb-99d8-990b6523bfde}} , the Hankel matrix {{formula:94609687-ae52-49f0-8f1a-a216b8168263}} is of rank {{formula:b9c8b6da-8b62-4a94-828e-33abb6f8fb98}} regardless of {{formula:7e147c59-4868-4905-bbed-f92b4e4e10cd}} {{cite:363259c4dbe42408b37b718e3ab2056166c06cc6}}. Specifically, we will assume that {{formula:e9f145a2-0d24-4336-961c-84c97c4ce44a}} is small, so that the Hankel matrix is low-rank. Our goal is thus to recover a low-rank Hankel matrix. It is well known that nuclear norm regularization can be used to find a low-rank matrix {{cite:11fcdd49cb1a29d66e477d2e83f1ffba226eb704}}, {{cite:2addce446b2769179a0faeaf4bfe8ea430078440}}, and {{cite:f9c1e3a3258bc507f84225281d837f8b9081f687}} uses it for recovering a low-rank Hankel matrix.
| i | 13c704082b3dd680cb9625541cb6589d |
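The rank claim above can be checked numerically: for a signal that is a sum of r = 3 exponential modes, the Hankel matrix built from it has rank 3 regardless of how the rows and columns are split. The mode values and matrix shape below are illustrative assumptions.

```python
import numpy as np

# Build a Hankel matrix with `rows` rows from a length-n signal.
def hankel(x, rows):
    n = len(x)
    return np.array([x[i:i + n - rows + 1] for i in range(rows)])

# x_t = sum_k c_k z_k^t with r = 3 distinct modes (illustrative values).
z = np.array([0.90 * np.exp(0.3j), 0.95 * np.exp(1.1j), 0.85 * np.exp(2.0j)])
c = np.array([1.0, -0.5, 2.0])
n = 40
x = np.array([np.sum(c * z ** t) for t in range(n)])

H = hankel(x, rows=10)                      # a 10 x 31 Hankel matrix
rank = np.linalg.matrix_rank(H, tol=1e-8)
print(rank)  # 3 == r, independent of the 10-row pencil split
```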
In addition to detection performance, two essential considerations are involved in implementing a hate speech detection model: bias and explainability. Hate speech should not be judged by any specific word but by the context in which the word is used. Even if a word generally considered vicious does not appear in a text, the text can still be hate speech, and a specific expression does not always imply hatred (e.g., `nigger') {{cite:bc3ba69967f05533d71f4a8df0bc9e3f715614ff}}. However, the presence of such a word can cause a model to make a biased detection of hate speech. This erroneous judgment may inadvertently strengthen the discrimination against the target group of the expression {{cite:ff33d70cb9c6eeda5d5115a89d9af3767499a5f8}}, {{cite:f97acb07fe739c6c4c18f8e7a540cc62ea2b6897}}. In this respect, the model's bias toward specific expressions should be excluded.
{{figure:e66c2602-71ce-42f2-999b-6594bfa6bce3}}{{figure:65f4620e-be04-404d-bd6d-9395a52f77a9}} | i | fb34f4d74731166814d22234995379fc |
It is interesting to note that a linear VAE also reduces to a Gaussian, as it is equivalent to probabilistic PCA {{cite:f3c89cb92545b17f1add2b409f7e118833702eb1}}.
On the other hand, a linear autoencoder is equivalent to PCA {{cite:ebacef4fdce24e29a7c9766ddd82cea614e80be3}}, which is not a generative model.
| d | 1cf710239aabe60840494f5103449e51 |
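The classical fact behind the linear-autoencoder remark above rests on the Eckart-Young theorem: the best rank-k linear reconstruction of centered data is the projection onto the top-k principal subspace, with residual energy equal to that of the discarded singular values. A small numerical check on synthetic (assumed) data:

```python
import numpy as np

# Synthetic centered data matrix (500 samples, 10 features).
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))
X -= X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
P = Vt[:k].T @ Vt[:k]                # projector onto top-k principal subspace
err_pca = np.sum((X - X @ P) ** 2)   # reconstruction error of PCA

# Eckart-Young: this error equals the energy in the discarded singular values,
# and no linear (rank-k) autoencoder can do better.
print(np.isclose(err_pca, np.sum(s[k:] ** 2)))  # True
```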
Beyond representation learning, masked image modeling is a classical computer vision problem, named image inpainting. This problem has been extensively studied in computer vision for a long time {{cite:8baecdf48375ff5d29a8c9ccae979ceb4d6d10e6}}, {{cite:1091b6ba0f4008ab39bc0ce0a2f44188cd9927cd}}, {{cite:57f61d29a28217dc44156354c945816df25f13cd}}, aiming to improve inpainting quality without connecting to self-supervised representation learning. While we advocate image inpainting as a strong self-supervised pretext task, we also find that stronger inpainting capability does not necessarily lead to stronger fine-tuning performance on downstream tasks.
| m | c0b25ffffd6d1ec9608e7f235ab7b8f5 |
The wide application of deep learning makes the design of the neural network structure an important factor affecting model performance. Neural architecture search (NAS) {{cite:08abfcbd101c273ed36882796f4dfa7698de5092}} is a technique for automatically designing efficient neural network structures. In 2020, An et al. {{cite:f91edee9fa99ecb402990ef7f6eb459cf4b3ee18}} designed a model for ultrafast photorealistic style transfer through neural architecture search. The basic structure of this method is similar to WCT{{formula:8dc058e4-9a55-4a3c-bca4-cbe53a808fda}} {{cite:ac017525c25d19ec09caa6afcfaf2e439678e632}}. They searched for a neural architecture performing multi-layer stylization to optimise speed and convergence.
| m | 9152029f685eb3a5c38f62fe148a166d |
The limits from the monojet analysis (blue curves) depend mainly on the overall magnitude of the interaction, which has been kept fixed in the two BM scenarios in order to be directly comparable with the experimental benchmark {{cite:e6e17512d6e5f0cfc5fdfde2e6a80ca3427d1828}}, {{cite:032144b164e34ba77747542ba7cc8388c55e2c64}}, {{cite:7c4c43035b0e9e83a1c9e569bba239eb0b226bcf}}, {{cite:d3fa6ad0f2a57bbbde8cc7ee142cb894a3763940}}.
The very marginal differences in the sensitivity curves for our choices of the chiral structure of the couplings arise when the latter are convoluted with the respective PDF weights.
The conservative strongest exclusion of the monojet analyses rules out {{formula:40b273a0-1db8-452f-a5cf-40928bf122ac}} masses below 1.6 TeV, and it translates into an upper limit reach of about 800 GeV DM, which corresponds to the on-shell limit of the mediator decaying into DM {{formula:ec32d608-77f2-44aa-9885-4900f21309a6}} .
The limits from dijet searches for mediator masses above 700 GeV (green curves) are largely independent of the DM mass and provide exclusions comparable to the monojet ones for light DM.
The rather small increase in sensitivity of the dijet analysis appearing at DM masses of 400 (600) GeV for BM1 (BM2) occurs at the transition between analyses optimised for intermediate and heavy mediators, as explained in Sec. REF .
The current XENON sensitivity (solid red curves) is somewhat weaker than the collider exclusions, especially for heavy DM, albeit being able to probe an extended DM mass interval and lighter mediator masses.
On the other hand, the projection for the XENON experiment with 20 ton {{formula:5c3d71fd-8843-4753-a1eb-a09cba5e5e58}} year exposure (dashed red curves) will be able to test a larger parameter space region in comparison with the current collider reach.
{{figure:dfed2f79-9b2d-489e-834e-4a7099b56dc0}} | r | 3030de74ba27fde1a882763f7f836e57 |
In this paper, we introduced several expected generalization error bounds based on the Wasserstein distance.
In particular, we introduced full-dataset, single-letter, and random-subset bounds on both the standard and the randomized-subsample settings.
We showed how, when the loss function is bounded, the presented bounds are tighter than and can recover the current bounds based on the relative entropy and the mutual information {{cite:0f73f8e6daac4edc567480a264e5bbc4ec49489f}}, {{cite:4a24683cde460a79e6676c1903fb9bf46b477fd5}}, {{cite:eb43668ab36ac99a023126a2ce4856a7791d33fd}}, {{cite:220214f897f5557e81f5331f4cdd241a08ef98e3}}, {{cite:0f9ea03c281117f156c868014abee290c07e2b16}}.
Furthermore, the obtained total-variation and relative-entropy bounds on the standard setting are ensured to be non-vacuous, i.e., smaller or equal than the trivial bound, thus resolving the issue of potentially vacuous relative-entropy and mutual-information bounds on the standard setting.
An interesting realization is that the results for the randomized-subsample setting are tighter than their analogues in the standard setting only if their Wasserstein distance (or total variation) is twice as small.
| d | 202db1e17403cce47b9a91e648b25626 |
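The non-vacuousness claim above can be illustrated numerically: for bounded losses, a total-variation bound never exceeds the trivial bound (TV <= 1), while the Pinsker relaxation sqrt(KL/2) of a relative-entropy bound can. The two discrete distributions below are arbitrary examples, not taken from the paper.

```python
import numpy as np

# Two discrete distributions with nearly disjoint mass (arbitrary examples).
p = np.array([0.98, 0.01, 0.005, 0.005])
q = np.array([0.005, 0.005, 0.01, 0.98])

tv = 0.5 * np.sum(np.abs(p - q))       # total variation, always <= 1
kl = np.sum(p * np.log(p / q))         # relative entropy D(p || q)
pinsker = np.sqrt(kl / 2)              # Pinsker upper bound on TV

# TV stays below the trivial bound; the KL-based quantity exceeds it here.
print(tv, pinsker)
```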
We demonstrated a new approach for denoising CT projections that supports training the model in a self-supervised mode and allows denoising sequences of images using only the features extracted from these sequences. We compared our Noise2NoiseTD model to the state-of-the-art self-supervised denoising model {{cite:25c81792e537d440e8c3cc1e110dae00dba751a4}} and to the popular supervised DnCNN network {{cite:3bebd9db73045a8090710c9f27b081189033c3b1}}. Our model outperformed the self-supervised denoising model, and although we did not use high-quality ground truth images during training, it produced results comparable to those of the supervised model.
| d | 89037ba1100cdec564b8f96d2c033654 |
Consequently, GUG reduces to a version of Nesterov's accelerated gradient method (see, e.g., {{cite:18a2b03a1384328e0133b6d6553cebf03d2fad82}}).
For our study, we will focus on a projection-free implementation of the Approx-Subproblem procedure, namely the conditional gradient method (CGM) procedure, described in Algorithm .
Third, if {{formula:6ffd0838-61ec-492c-b85b-2dee46f931c1}} , then the subproblem (REF ) becomes a linear objective optimization, and it takes exactly one inner iteration for CGM to compute an optimal solution to this subproblem. Consequently, GUG reduces to CG. Note that by the description of {{formula:9828cfc9-90fb-4716-812f-0b8ea736c7a5}} in (REF ), {{formula:309b3877-6dfa-4101-8bb2-4c00f12abbe5}} is the optimal solution to the linear subproblem. If instead we allow the right-hand side of (REF ) to be nonzero, then we can study practical implementation variants of CG that solve the linear objective optimization subproblem approximately (see, e.g., {{cite:7224211d23a1cdc0b49103f873edab444a0c3647}} and the references within). However, we will focus on theoretical analysis in this section; the approximate linear subproblem implementation will be discussed in the next section.
Finally, if the parameter {{formula:9213e3a4-1ac4-40e0-8edf-3100d2b96559}} in the CGM procedure is chosen as described in (REF ) later, then CGM is exactly CndG in {{cite:d9b5a269991b877ff36e389afa9a6c3c9c14c8b8}}, and GUG reduces to CGS. The key concept behind CGS, which distinguishes it from projection-based methods and CG, is that it uses the CGM procedure with multiple inner iterations to compute an approximate solution to the projection problem (REF ). Instead of computing an exact optimal solution, CGS runs several inner iterations of the CGM procedure to compute an approximate solution {{formula:87241e74-a35b-4878-95ca-35b033a65f8e}} satisfying
{{formula:8d62c593-8d9e-4a0f-b640-cf851576e00d}}
| m | cbce5bb8a46e55f8e99519beb2d32467 |
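A minimal sketch of a conditional gradient (Frank-Wolfe) procedure of the kind the CGM discussion above refers to, over the probability simplex with the standard 2/(t+2) step size: each iteration solves a linear minimization over the feasible set (always attained at a vertex) and takes a convex step. The quadratic objective and all parameters are illustrative assumptions, not the paper's exact subproblem.

```python
import numpy as np

# Frank-Wolfe over the probability simplex: the linear subproblem
# argmin_{v in simplex} <grad, v> is solved by a coordinate vertex.
def cgm(grad, x0, steps=200):
    x = x0.copy()
    for t in range(1, steps + 1):
        g = grad(x)
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0          # best simplex vertex for the LP
        x += 2.0 / (t + 2) * (v - x)   # standard gamma_t = 2 / (t + 2) step
    return x

target = np.array([0.1, 0.2, 0.3, 0.4])   # lies in the simplex
grad = lambda x: x - target               # f(x) = 0.5 * ||x - target||^2
x = cgm(grad, np.full(4, 0.25))
print(np.round(x, 3))  # close to target; iterates stay in the simplex
```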
Our results demonstrate how the rich phenomenology at the interface between topological order and Anderson localisation in disorder-free systems arises also in higher-dimensional systems. Whereas the localisation properties are different, the notable effects on the relaxation and transport properties underpinned by the formation of vison depletion regions around the spinons survive. This behaviour is driven by the mutual semionic statistics and can thus be taken as a signature of 3D QSL behaviour at finite temperature. The hysteretic behaviour discussed in Sec.  could be probed for instance with techniques that access the spinon density, which is expected, for example, to directly affect the magnetic susceptibility {{cite:d6a6453e67e143fa7133ac882cab24352c007836}}. The presence of vison depletion regions bound to the spinons is also likely to have distinctive repercussions on transport properties to which spinons or visons contribute (e.g., thermal transport). An interesting future direction could be to model more extensively the dynamical interplay of spinons and visons, including spinon annihilation events when vison depletion regions overlap, and possibly simulating the out-of-equilibrium behaviour directly. Another direction could be to extend our analysis to other classes of Hamiltonians, such as {{formula:88291245-b15b-48e5-971f-f70fb7df3556}} gauge theories where the effective fluxes break time-reversal symmetry, and to general 3D string-net models {{cite:6a805e465e149ceaea7e95ac2c5c2d178e2dc9c3}}, {{cite:0295e190dc455c8033c36560eefd3b8c3d717038}}.
As we await the discovery of candidate materials that realise {{formula:995e6fad-d3c4-4928-891d-8b2c4c56b4b5}} QSL phases in three dimensions, our results may be relevant to other contexts, including frustrated magnetic pyrochlore oxides and resonant valence bond systems. Some of these systems exhibit further gapless excitations and it would be interesting to see how the behaviour discussed in this work is affected by their presence.
Moreover, the possibility of realising {{formula:45413caa-0333-496f-b857-40dda265ca4f}} spin liquid Hamiltonians in our temperature regime with quantum annealers {{cite:fdf13ddef0b686d746bb74db7db3ad6adbece9fd}}, {{cite:778ea6ebe9a831b48b6ce2825701e5fda468308d}} and quantum simulators {{cite:7a1a1e3f33796ba5c8c3c2ef83ba29e7ed4bf922}}, {{cite:11906aa1d20260bf01ddb68b96ef8fc1a53ec902}} (albeit typically limited to 2D) could provide a suitable arena where the physics discussed here could be tested and explored further.
We are very grateful to O.Hart for the generous guidance in the early stages of this project and for several useful discussions thereafter.
This work was supported in part by the Engineering and Physical Science Research Council (EPSRC) grants No. EP/P034616/1, EP/T028580/1, and EP/V062654/1 (CC). GDT acknowledges the support from the EPiQS Program of the Gordon and Betty Moore Foundation.
MK developed and performed the calculations and numerical simulations.
| d | efe4f3cd3209401f9ef6ed6339975c8f |
Although the Q-function with motivation (equation REF ) is similar to the one in goal-conditioned RL {{cite:9f8f5547101f5c52d0ab990e26c2e6c221d5d753}}, {{cite:a49fe4ff78d546faf0ede532df34a8aa18720885}}, the underlying learning dynamics are different. Motivated behavior pursues multiple distributed sources of dynamic rewards, so the Q-function accounts for the future motivation dynamics. This way, an agent with motivation chooses what reward to pursue, which also makes it different from RL with subgoals {{cite:99c0118c9829945c16125a62c38e3cd4d1113acc}}. Behavior with motivation therefore involves minimal to no handcrafted features, possibly providing a step towards general methods that leverage computation, a goal identified by Richard Sutton {{cite:0f5d96c1f1db977236acd4441773849b7b6895bd}}.
| r | fc61049fbed7f67d7c39b73655809968 |
To understand the rerankers better, we investigate the effect of different proposal models, different language models, and various numbers of candidates in the {{formula:43d0c559-55ad-4260-a78b-311c0e51c43a}} -best list. Table REF and Figure REF show that better proposal models and bigger {{formula:296f1045-add3-44ed-addd-5b26a4277109}} -best lists lead to consistently better reranking results. This is an appealing behaviour showing that our reranker is able to pick better translations from higher quality and more diverse candidate pools generated by better proposal models and bigger {{formula:f7c1aced-f2a1-479d-a2cd-af38814df148}} -best lists. To compare the effect of language models, we train an LSTM language model {{cite:6b91246b06286f0dd174ff272fcbe7666b903f27}}, {{cite:bed351e9fc78d9ff4dc2b3c85a182626c5d948a0}} and a transformer-XL language model on the English side of NIST parallel training data in addition to the transformer-XL trained on NIST and Gigaword. Table REF lists the perplexity per word on the NIST validation set for different language models. Given the same training data, the transformer-XL performs significantly better than the LSTM-based language model, which in turn results in a higher BLEU score from the doc-reranker. By adding more training data, the transformer-XL language model achieves even lower perplexity and that gives a further boost to the performance of the doc-reranker. Notably, when the strong transformer-XL language model is incorporated into the doc-reranker, the best weight ratio of the channel and language model is {{formula:91941c4f-2807-4b68-8696-1b1643148fdc}} , indicating that the doc-reranker depends heavily on the language model. By contrast, if a weak language model is incorporated, the best ratio is approximately {{formula:3abbadf6-7d66-416c-912b-3844c0b098aa}} . A further observation is that although a larger-scale language model improves the doc-reranker, it does not help the sent-reranker.
| r | 11ee9f0a8ea0bdd0ffe77a96b6731be2 |
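The channel- and language-model weighting discussed above can be sketched as a linear combination of log-probabilities over an n-best list: the reranker picks the candidate maximizing the direct score plus weighted channel and LM scores. The weights, field names, and candidate scores below are made up for illustration.

```python
# Hypothetical reranking sketch: combine direct-model, channel-model, and
# language-model log-probabilities with interpolation weights.
def rerank(nbest, w_channel=0.3, w_lm=0.3):
    def score(cand):
        return cand["direct"] + w_channel * cand["channel"] + w_lm * cand["lm"]
    return max(nbest, key=score)

nbest = [
    {"text": "cand A", "direct": -2.1, "channel": -3.0, "lm": -4.0},
    {"text": "cand B", "direct": -2.3, "channel": -2.0, "lm": -2.5},
]
best = rerank(nbest)
print(best["text"])  # "cand B": worse direct score, better channel + LM scores
```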
Deep neural networks are models with huge numbers of parameters; works like the lottery ticket hypothesis {{cite:f3b30523d06f9bd87c3fa16b1dd723737f9b5b7c}} suggest that these DNNs are heavily overparameterized, so it is possible to remove up to 80% of the neurons without losing too much performance. MIMO builds on this assumption to let a single network host several subnetworks simultaneously. However, unlike in our work, the boundaries between the networks are not clearly defined, so the same neuron can be used by multiple subnetworks (DNNs) in the same ensemble. In our case, we clearly assign each neuron to one DNN of the ensemble. This way, the DNNs do not get mixed up and can learn independent representations. However, as in MIMO, we rely on the fact that not all neurons are necessary, so we split the width of the initial DNNs into a set of DNNs. This decomposition may seem crude; however, it allows us to better parallelize Packed-Ensembles for training and inference. To overcome the fact that the subnetworks are not wide enough when M is too large, we have added an alpha hyperparameter that can increase their width. In Figure REF , we study the influence of the width of the subnetworks.
{{figure:ba5cc11c-256a-4c46-8333-8bf895f9c46a}} | d | 2dac918d92dcf99385f13184b7cebf7a |
In this work, we proposed an end-to-end pipeline that recovers the scene geometry from an input stereopair using a fixed number of semi-transparent layers.
Despite using fewer layers (4 against 32 for the baseline StereoMag model), our approach demonstrated superior quality in terms of commonly used metrics for the novel view synthesis problem as well as human evaluation.
Unlike the StereoMag system, whose quality heavily depends on the number of planes, our method reaches better scores for the novel view synthesis problem while being robust to reducing the number of layers.
We have verified that the proposed method can be trained on multiple datasets, generalizes well to unseen data, and can be applied at a higher resolution at test time.
The resulting mesh geometry can be effectively rendered using standard graphics engines, making the approach attractive for mobile 3D photography.
At the same time, while being efficient, our scene representation has only limited ability to represent view-dependent effects, and an extension similar to NeX {{cite:1d6e716ed500f41b2d7d3e56beb114c8b494a24d}} would be required to improve this ability.
| d | d288337b419796c6230975e21d05738d |
The results with ResNets trained on CIFAR-10 and CIFAR-100 are displayed on Figure REF . The results for the other two experiments are postponed to Section REF of the Supplementary.
In Figure REF , we display the evolution of the loss values and the test accuracy during training. We observe a recurrent behavior: during early training, Step-Tuned SGD behaves similarly to other methods; then there is a sudden drop of the loss (combined with an improvement in test accuracy, which we discuss below). As a result, it achieves the best training performance among all algorithms. This behavior is in accordance with our preliminary observations in Figure REF . A similar behavior has been reported in the literature when using SGD with a manually enforced reduction of the learning rate after a prescribed number of epochs, see, e.g., {{cite:6078a774fc29c5f104731b9505a0160fbfde8c11}}. Our algorithm provides a similar qualitative behavior, but in an automatic way.
| m | 90b0af4b7a1cfd07ff8c59a9c04b54ea |
Recent studies have repeatedly shown similarities between brain activity and pretrained supervised neural networks (e.g. {{cite:89640019a98895fe4ba6792e85fab0ee1dd160f0}}, {{cite:f243949806155a6b4fbb77149cbdc1147d4a3288}}, {{cite:dd12f0fee6333777af55d40a44c23d3fa5defe3b}}, {{cite:df46d5de5f45e66d55f4f018cc815081e4be31d1}}, {{cite:460ea25e9133c1b6176b1f24176373055f580388}}). The present results further show that updating a model after each stimulus presentation via self-supervised learning makes this model more "brain-like".
| d | 433f2b7901bcf60ca6ae88f29ef72c98 |
Figure REF shows the comparison of forward back-to-back di-{{formula:5cf491d9-68c7-4ee6-90db-0de0628f385b}} correlation function in MinBias {{formula:4ea1ac2b-ec98-4fdc-901f-4dfec06f3b16}} +{{formula:ff8c6578-a1c4-4153-95d0-8f3c75ce0952}} , {{formula:47f269f8-e700-4514-a035-c72bd19876ad}} Al and {{formula:90b8b023-ef0d-43c3-ae87-f90fee2339bd}} Au collisions at {{formula:3a1a4142-7acf-4e8c-b01d-66bda2463ae4}} GeV. The area and width of away-side peak from different collisions are shown, together with their statistical uncertainties.
In the left panel, in the low {{formula:932a6c37-5f01-4476-8299-b60955a899f0}} regime, a clear suppression is observed in {{formula:d183fc1e-441b-4b15-9cbb-050398349a1b}} A compared to the {{formula:74839b31-b4e1-4233-9980-60c7f350b361}} +{{formula:820aee75-3f07-4c41-9313-8631040ab90f}} data. The away-side associated {{formula:3c63bc8f-212c-4f21-9546-2ccb916414ba}} yield per-trigger in {{formula:dfad3ff9-b2c8-4d8c-acd5-8de6b55b1f66}} Au ({{formula:478dfda2-c737-49ca-b1ab-241fcabc4278}} Al) is suppressed by about a factor 1.7 (1.2) with respect to {{formula:2ec6c6ae-dd5d-4249-952a-02a022cadfa7}} +{{formula:088e1161-3e32-4f7e-bb88-b4c9e9257bd5}} collisions.
The enhanced suppression in {{formula:b8a74251-488f-4382-8837-47b815ea8b11}} Au relative to {{formula:cf9c2ee1-781e-4831-99e2-20d371e172cc}} Al at the same collision energy supports an {{formula:d87c5bff-0a6f-4633-bf7a-ce6245fd6524}} dependence of {{formula:2fabd782-d17f-40ed-8cda-5789c08a2447}} as predicted in {{cite:3ab91a1f3f268131e133adb776d18b01ce4e9465}}, {{cite:de68065f2b47f726f0ebe4a05f6bcc686c055425}}. The suppression decreases with increasing {{formula:efa0b7c3-ec39-4fdf-a367-8ff61ae0c033}} of the {{formula:435ec5c4-e0bb-4b93-a2b6-bbe6fe7a1d5f}} s. In the high {{formula:1a78ca6c-123a-4075-b8df-6731a33e7fdc}} range, no suppression is observed in {{formula:452aa3ee-6771-42ba-972a-87c5ffb300f2}} A compared to {{formula:a87264d4-2833-4c78-a9d5-5dfdb84dea20}} +{{formula:77d2ea9f-1885-4108-99b9-614ee48fb69d}} collisions as can be seen in the right panel of Fig. REF . The parton momentum fraction {{formula:a297fba6-3e5a-4523-9366-3bdf9e429968}} with respect to the nucleon inside the nucleus
increases with the {{formula:6a95187f-c4d0-4820-bc65-1e41b0c3d9c3}} of the trigger and associated {{formula:f71cb4d0-e14b-42dd-9567-f54f7d72e275}} s.
{{formula:d41e9b28-acc6-40e5-9b12-19ac0b40cfe3}} can be approximated as the average {{formula:7c3c6d05-baa3-4406-8037-7340e7f81155}} of di-{{formula:717ba4f4-cfe1-4160-81f5-249384818d6e}} . The low {{formula:65e80d67-e900-4938-aae3-82517865480a}} and {{formula:96243bce-fd5a-4ae8-996d-a238b6a69986}} regime, where the gluon density is large and expected to be saturated,
can be accessed using low {{formula:c414df16-1537-41f1-80fe-776461826a64}} di-{{formula:c50a93c2-a77f-4f12-a9f0-e064a2eae106}} pairs. When the {{formula:799142c0-ca26-4046-870b-c62f7a59b4bc}} {{formula:10438a5b-9357-44b1-b3a1-d7fab4769f0d}} is high, the probed {{formula:92c5a4c2-201b-43e3-8169-ece8facd1c58}} ({{formula:fd6c2834-6fb0-47d6-8bd9-771cbf3d0d5d}} ) will not be sufficiently small to reach a non-linear regime.
The phenomenon of broadening is not observed in {{formula:42da5b02-d3cc-4201-b933-b53c1949e816}} A collisions, which is consistent with the similar measurement in {{formula:76b30f59-4250-4732-9d0c-ba3b645d527d}} Au collisions by the PHENIX experiment {{cite:340614147a679dfdb6dced57e87278a692648b6d}}.
| r | a2e45b5fdded8cf85f4e1230fe5145cb |
This subsection reports all experimental results graphically. Figure REF compares the test accuracy achieved by the SGD{{cite:4e44c3aae3627ccf7f063c610dfb269165d17f9d}} and LARS{{cite:1c67ca0ae5286716e5892df9ccb8a4e9e9de107a}} optimizers. From the figure, it is clear that both approaches perform extremely well for small batch sizes. Both optimizers maintain a test accuracy of around 99% up to a batch size of 1024. Beyond that, test accuracy decreases gradually, although both maintain an accuracy of more than 90% until the batch size reaches 8000. The difference between the two optimizers becomes observable after that. The test accuracy of the SGD optimizer starts dropping significantly once the batch size reaches 16000, while the LARS optimizer maintains much higher test accuracy. Test accuracy for SGD drops below 40% beyond a batch size of 28000, while the LARS optimizer maintains more than 55% test accuracy at a similar batch size. Test accuracy for both optimizers drops significantly once the batch size reaches 32000, although the LARS optimizer still performs better.
{{figure:7eb3e347-9b7a-4d6b-b658-014f8f80a97a}} | r | 741057db2df388955523ce878dbe9bab |
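The layer-wise scaling that lets LARS tolerate large batches can be illustrated with a minimal sketch; this is not the experiment's actual implementation, and the hyperparameter values `base_lr`, `eta`, and `wd` are illustrative assumptions:

```python
import numpy as np

def lars_update(w, g, base_lr=0.1, eta=0.001, wd=5e-4, eps=1e-9):
    """One LARS step for a single layer (no momentum, for clarity).
    The local learning rate is the trust ratio eta * ||w|| / ||g + wd*w||,
    which keeps the update magnitude proportional to the weight norm
    even when large batches produce large gradient norms."""
    adj_g = g + wd * w  # gradient with decoupled weight decay
    trust = eta * np.linalg.norm(w) / (np.linalg.norm(adj_g) + eps)
    return w - base_lr * trust * adj_g
```

In contrast, plain SGD applies one global learning rate to every layer, which is the behavior that degrades first as the batch size grows.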
Cosmic Chronometers: We use the 31 data points on {{formula:f0bb5741-2e74-421d-bb34-ed82302759d7}} , at different redshifts, from {{cite:262dd31e09471952efcb134fbe0d3a0fe8d495ba}}, {{cite:88b15843be8b0759c1e10259cb8414cd2f11b81b}}, {{cite:c63eecc48dc3c29ac3358dae21c34355529bd051}}, {{cite:f3f98644eb65db21091d73a5ee4ea16062810b0e}}, {{cite:5c8cb5234a139bf2f1915ca230f4988f35cc275c}}, {{cite:463d4711402f45f0f61de203b1b62b90632b41cf}}, {{cite:9d99037371f8050a0735f89c742813dc9771f4ee}}, {{cite:25a200c6ecffe2d5fec30d52c3cc194a11591654}}. All of them have been obtained making use of the differential age technique applied to passively evolving galaxies {{cite:f347e5786806971d25d4205627cb2d08e3eb1d55}}, which provides cosmology-independent constraints on the Hubble function, but are still subject to systematics coming from the choice of the stellar population synthesis technique, and also the potential contamination of young stellar components in the quiescent galaxies {{cite:3056a88f67335fb3bd69abdf01de2ea89c0b1e4f}}, {{cite:85770beb055de83994df7e25ae0a3bbb24d8d737}}, {{cite:ae517220e538682c9d213fc39240e96b6f141294}}. For this reason we consider a more conservative dataset that takes into account these additional uncertainties. To be concrete, we use the processed sample presented in Table 2 of {{cite:6c29530c7239c790039de93dba56e1a721a5fa16}}. See therein for further details.
| m | e9ea79613b17df0fdb5b491657cd2a8c |
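As a rough illustration of how such H(z) points constrain a cosmological model, a chi-square against a flat LCDM Hubble function can be sketched as follows; the data values, `H0`, and `Om` below are illustrative assumptions, not the processed sample cited in the text:

```python
import numpy as np

def hz_chi2(z, H_obs, sigma, H0=70.0, Om=0.3):
    """Chi-square of the flat-LCDM prediction
    H(z) = H0 * sqrt(Om*(1+z)^3 + 1 - Om)
    against cosmic-chronometer measurements H_obs with errors sigma."""
    H_model = H0 * np.sqrt(Om * (1 + z) ** 3 + 1.0 - Om)
    return np.sum(((H_model - H_obs) / sigma) ** 2)
```

A full analysis would also fold the systematic uncertainties mentioned above into `sigma` or into a covariance matrix.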
We carry out the integration of the orbits using a high order
Runge-Kutta-Nyström method, discovered by {{cite:087d1f77b16edf7c5ec0e52331bf155e3bb71ece}}. We split the Hamiltonian
into Keplerian and perturbation parts {{cite:be145fd3ebc838e0e930d65a61461014a0318cc7}}
to increase efficiency and avoid spurious precession. This scheme
advances the Keplerian orbital elements correctly, except for the
truncation and the roundoff errors (the largest accumulated error
per star amounts to {{formula:c36699b1-1770-4a7b-b783-568d423aa3bb}} over the course of the whole
simulation). In particular, the evolution of the Runge-Lenz vector does
not exhibit a linear drift as in the case of potential energy-kinetic
energy splitting {{cite:9da574eabe507844bffe68461cc1bf2df57e0c2c}}. Our treatment of the GR
perturbation leads to a small error in mean motion, but since we are
interested in changes that take place over many orbits, this error is
not important. We use shared adaptive timesteps, but time-symmetrize
the integration with the method of {{cite:9da574eabe507844bffe68461cc1bf2df57e0c2c}}. Our
simulations last 30 code units which corresponds to a few precession
times of the slowest precessing test stars.
{{figure:009eff18-291a-499e-af30-8f5a3ac4690c}} | m | 361cc215a730cf298bfbb1b447b7b4d0 |
In this paper, we use a phenomenological (3+1+2) neutrino model with
three active and three sterile neutrinos to describe the excess of electronic
recoil events in the 1 – 7 keV energy region found in the data of the XENON1T experiment {{cite:fb5c57981c5f316917b567931cee342418039124}}. This excess can be naturally attributed to the interaction of electrons with dark bosons and photons emitted in decays of the sterile neutrino mass states with masses
{{formula:0118c2fd-037d-4566-aace-b59f2da19521}} keV and {{formula:04726299-0bb4-494f-a298-9d86b3ac54b2}} keV, while the sterile neutrino mass state with mass
{{formula:4a9074ee-efb4-47cc-991e-72918019286c}} eV is practically stable. In this approach, three peaks in the 1 – 7 keV energy region of the electronic
recoil spectrum are predicted. These predictions can
be tested both in the XENON1T experiment and in future experiments, such as the upcoming PandaX-4T {{cite:bb2e2dbcbb43375c70ef5e97d3463af8aba164e8}}, LZ {{cite:096030e9ab66dba017cb28577054ef932533fdaa}} and XENONnT {{cite:fb5c57981c5f316917b567931cee342418039124}} experiments.
| d | 7fc718495053df69ebbada13db4ac5a1 |
The flux densities available for G2.4{{formula:0bc35e02-9326-4893-a3c4-4a211ced0444}} 1.4 from various
surveys indicate a flat, optically thin thermal radio spectrum at
gigahertz frequencies. The steep, non-thermal integrated radio spectrum
reported by {{cite:7470045944947c13d0de7e0e8f43a3c34ce064af}} for G2.4{{formula:c81b6cf2-c118-4f7f-9d98-58a8885cccab}} 1.4 is not consistent
with the flat spectrum inferred from the other available observations.
| d | 0442be0ca113d3ba8b73e693ec0f393b |
We observe that excess risk bounds of the same order for DP-SCO based on noisy SGD and the uniform stability of differential privacy have
been established {{cite:f6f3f10ea8b100fb48c71cb04d29807d1dae3ef3}}. Improving these bounds for DP-SCO required substantial effort and was only achieved recently
{{cite:3b84846525dd72b1b7c062232a88db11477e887b}}, {{cite:2191d8d5bbebca8f2f39fd76544f2156115a7fc6}}, {{cite:6d2b00e71e4430efd9c05476ee26bb704d204c85}}. Furthermore, to the best of our knowledge, the upper bounds on the risk above are the first of their type for DP-SVI and DP-SSP, respectively. To improve upon them, we will follow the approach of {{cite:6d2b00e71e4430efd9c05476ee26bb704d204c85}}, based on a multi-pass
empirical error convergence, combined with generalization bounds based on uniform stability.
| m | 8510e5636c3ea63283203cd1dceebaf9 |
For the simulations of the polymer, we employed the replica-exchange (parallel
tempering) Monte Carlo
method {{cite:ad7884c995c2802ece02a05168922045190d3a99}}, {{cite:4006bb9c756cfba0c822dc8b02b75af329ebe986}}, {{cite:c24c1ce2edbaa88ca0a16e7357ebea2cee24e9f1}}, {{cite:f67bc67c46a588c4faa50a40f0a81e4cb4b663dc}}, {{cite:c46f2564d30e309644dd9ffb3dda6ca044fef49c}}. Replicas of the
systems were simulated at different temperatures {{formula:27c28abb-28eb-4b0c-9729-76cb60b7c383}} with
{{formula:03675396-f11d-4e5c-a046-f80a955bff5d}} . The total number of temperature threads {{formula:85d93bf9-8584-48b5-89f9-d3ac319be4e3}} ranged from 40 to
50 in individual simulations. Up to ten independent runs were performed
for all {{formula:9844bbbf-487c-46e5-8f3e-ba1444c519d9}} values studied, which allowed for the estimation of
numerical errors in all microcanonical quantities.
| m | 1af43740c85452bd876efba98eae85b3 |
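The core of the replica-exchange (parallel tempering) move is the Metropolis criterion for swapping configurations between neighboring temperature threads. A minimal sketch, with inverse temperatures `beta = 1/T` and illustrative energies rather than the polymer model's actual values:

```python
import numpy as np

def swap_probability(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas at inverse temperatures beta_i, beta_j with current
    energies E_i, E_j: min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    return min(1.0, np.exp((beta_i - beta_j) * (E_i - E_j)))
```

Swaps between adjacent threads let low-temperature replicas escape local minima by temporarily visiting high temperatures, which is why 40 to 50 threads with overlapping energy histograms are typically needed.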
We carry out a set of experiments to evaluate our proposed framework ADF. We compare the performance of ADF with eight state-of-the-art machine learning and incremental learning techniques, comprising two non-incremental forest algorithms (namely RF {{cite:78ad9556901b1dc54be26be5f1985ea26c9a957c}} and SysFor {{cite:c87834433d12eb7d6b5f3b55f82da956eae36ac8}}), two incremental tree algorithms (namely Hoeffding Tree (HT) {{cite:f34938936272b572f38ea8f7cacc51f37397bfda}} and Hoeffding Adaptive Tree (HAT) {{cite:c9c1ed31ecad7ee42031ea4ceb8d9a7cdd29a584}}) and four incremental forest algorithms (namely LeveragingBag {{cite:d31e3f343423548c1ba725883d26893bd7a16b8d}}, OzaBag {{cite:0c3cfb329ac8d7cff52429097b891355fc2a872c}}, CIRF {{cite:5a2a23988899fc215f9ef5368bc0c991250fd3ce}} and ARF {{cite:ad1c9cd358a2664c8173b5878278bfc36785a8a2}}).
| r | 2d91463f34b7c348af53e76fadaa91ca |
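The incremental tree baselines above (HT, HAT) decide when to split a node using the Hoeffding bound; a minimal sketch of that bound, with illustrative parameter values:

```python
import numpy as np

def hoeffding_bound(R, delta, n):
    """Hoeffding epsilon: with probability at least 1 - delta, the observed
    mean of n i.i.d. samples of a variable with range R lies within eps of
    the true mean. A Hoeffding Tree splits on the best attribute once the
    observed gain gap over the runner-up exceeds this eps."""
    return np.sqrt(R ** 2 * np.log(1.0 / delta) / (2.0 * n))
```

Because the bound shrinks as 1/sqrt(n), the tree defers splitting until enough stream instances have been seen to make the choice statistically safe.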
On the other hand, in nature, the scattering lengths are {{formula:c4bb81a6-3585-4a0c-8e5f-0a0f053a90fe}} fm and
{{formula:0247001e-4809-40f7-99d6-ee6779a1c534}} fm for the proton–neutron system {{cite:6f9cfda19fc1c53cb58c3ce09074e837ae78d860}}, {{cite:351213df9fc6edabc1028331efafcad1c920149e}}. Therefore, {{formula:396a99c6-7f0c-4051-a2f5-d8f44c9c0754}} and {{formula:413fcb0e-a8fd-42bb-a35d-ebae5968a3bd}} should show distinct differences when the pion mass is close to the physical pion mass.
{{figure:fc22e616-fced-4074-9138-63802e65966e}} | r | bd3c030e1e4efaaee49d19c17ebba0a3 |
The starting point of the proof is to convert this theorem into a result about moduli spaces of parabolic {{formula:7f5eb736-7379-4da8-acba-e73bde9fa101}} Higgs bundles. The general theory of Higgs bundles corresponding to representations into real Lie groups started with Hitchin's paper {{cite:6bfa3541628006efec6770737529a381d5cff29c}}, and was fully worked out by Garcia-Prada and Mundet i Riera {{cite:95c4ad7e389ab9ee5362ae9ac6c02e63ff2f2e26}}, while the parabolic version of the theory was developed by Biquard, Garcia-Prada and Mundet i Riera {{cite:9ee8bacfbaef6907d17d0acf5ee4a82d8f5b37a5}}. Parabolic {{formula:6d1f1b0d-38ab-43cb-a8da-251fb45b3f15}} Higgs bundles have been specifically studied in {{cite:5376a63290d07821ffc8aae307fde19673fae454}}, where the authors chose to put aside the punctured sphere case.
| i | 3d597197502e4b6c7546e2e40f0015e1 |
We tested different convolutional neural networks; one of them was the VGG Net, a very deep CNN often used for image recognition {{cite:81ac16774fc8e938746cd609a72cc2c9e3b217ad}}. To execute the VGG Net more efficiently, we used a hybrid approach in which the ML model was executed on the GPU while the physics model was executed on the CPU. Even though the VGG Net is significantly deeper, it did not improve the RMSE of the training and sometimes performed worse. However, such a neural network could be more effective when training on even more complex physics models, such as the multi-phase models needed for investigating climate mitigation and, in particular, CO{{formula:9b06e54a-a49f-44fe-a244-dd7ed19cb8ed}} sequestration. These models might require a more complex network to deal with the more complex physics present in these cases, but this is a topic for future research.
| d | c00e57e4c96a9d49111d9509b298f34e |
Consider the {{formula:f0b8ecb0-e499-413a-abbc-77f1f00c6971}} OPE in the {{formula:2e14242e-b615-412b-b38b-9620f8cf303b}} representation of {{formula:afeb7bcc-fc1e-45bd-b3b8-ba9ff3154dba}} with some spin {{formula:cd9ea0ac-3822-40f6-b43d-7401af782df3}} of the appropriate parity. By definition, only {{formula:fa009fae-8729-41a3-a02f-1c069241085b}} double-traces contribute at leading order. It is also true, at least generically, that each one is a non-trivial linear combination of some {{formula:f7555ed0-1ccc-4672-99b5-ad98fbb4ebf4}} which diagonalize dilations on the {{formula:6f954048-a1f9-44ef-8350-91c7d7b80e08}} subspace at the next order. We can invert this to say that operators which have definite scaling dimensions, and can therefore help us learn about {{formula:68af2ef7-be7d-4957-a3aa-6e5d3814c37c}} , contain information about double-traces in other four-point functions. The bulk interpretation of this statement is that there are many intermediate states which can run in the loop. Proceeding as in {{cite:33f4e36c3f0a1a8f2be25ac81ac5959a9e3e5802}}, {{cite:a526050255db3acba993b9334bc302d9b73466f4}}, double-traces that mix for a given {{formula:f16a47b9-49e4-4e04-a8fd-775d4f846f72}} are
{{formula:38db0810-cb42-448a-a6cc-e56883791120}}
| m | 54fb566088080fc2b1a3e4b1798284bd |
Our first observation is that when the local variance of the density field is non-zero, the expectation value of the luminosity distance becomes dressed by local fluctuations. In the case that the density depends quadratically on the field {{formula:be6ac155-7306-426c-9f80-21d10eeb2f81}} , this variance is obtained via Wick's theorem and is naturally large. As a result, the locally measured parameters (i.e. {{formula:2a9883b6-234a-42bc-acec-c6e88ada16d6}} ) might not equal the `bare' parameters describing the underlying FLRW universe. The idea that the measured parameters do not equal the underlying parameters is not new, as it was previously studied in the context of relativistic universes containing inhomogeneities {{cite:5ab4ab3064d06011464f4efd33388f14d28ad6f0}}, {{cite:e70fbf1a407da7fa7ca4e43fa53a43ac808edae1}}. To our knowledge, however, it has not yet been studied from a quantum perspective, where the fluctuations arise due to quantum effects.
| d | 2dc8bd836005b535e3c509b4904669f9 |
In this section, we discuss two alternative ways to construct
estimators to be used with the bandit algorithms.
We first consider row averages as an estimator for the bandit
algorithms, and then discuss the connection of bounds we
use on our spectral estimators with those of {{cite:acb6793cfaacc26cda6ec908f613aa17323efa92}}.
| m | eb66be313c7f375216f6e2c78efa070f |
A new approach, motivated from cosmology and quantum field
theories on a curved space-time, has been proposed to study the
gravitational interaction: the Extended Theories of Gravity
({{cite:7dca545ee066989189cab6fd768ab581bc1808d5}}; {{cite:f9a02d322495278d8cdecda74d70bb0554150fc3}}).
In particular, the so-called {{formula:42195352-5d9e-49b1-8322-77626df766e7}} gravity seems to have passed
different observational tests like spiral galaxies' rotation
curves, X-ray emission of galaxy clusters and
cosmic acceleration (see e.g. {{cite:c5b387c61dbd54bdceda687bffab25645d9573d2}}, C+07 hereafter,
{{cite:5ddafbec1d792aeb56aeb226873fc42c3eacb14b}}, C+09 hereafter, {{cite:9f9befb769b8d566544d064dab714d70e568172c}}).
This approach is based on a straightforward generalization of Einstein theory where
the gravitational action (the Hilbert-Einstein action) is assumed to
be linear in the Ricci curvature scalar {{formula:e03a3a98-65b9-490e-a09a-924a2c945074}} . In the case of {{formula:0728e104-c8d3-428e-8947-7642c0daa644}} -gravity,
one assumes a generic function {{formula:c4482022-bb29-4953-a713-24491d06b15f}} of the Ricci scalar {{formula:95f3fb38-d243-4870-aa2a-33045fe6ead4}}
(in particular analytic functions) and asks for a theory of gravity having
suitable behavior at small and large length scales.
| i | 281242af0e2b37085f841194b3d99dc3 |
The data set collected in 2018 is divided into two subsets, S1 and S2,
which correspond to the periods before (20{{formula:28a3cbac-d6ab-4c99-aa60-5e960ef37d3d}} of the data set) and
after (80{{formula:2c15ab39-74a1-4126-bfaf-80cad506a5ec}} of the data set) the installation of the new final
collimator COL. The subset S2 is further divided into six categories
corresponding to equal 5 GeV{{formula:c399b6c9-d2f1-4229-b3dc-41d2dc2966b5}} bins of pion momentum, {{formula:13aa0350-c0fc-4d88-a072-9b34986f9c6f}} ,
in the range 15–45 GeV{{formula:157c785d-d728-4381-ac99-5b034a8b2508}} . The subset S1 is considered as a separate
category and is integrated over {{formula:f964da99-0c48-4e73-8d2a-44d51d6cca7e}} due to its small size. A
dedicated selection is applied to each category, which improves signal
sensitivity. Data sets from 2016–2017, analyzed in {{cite:3f4f5d9dc0552d11d63d14c97b4a84d5a2ffa8c9}}
and {{cite:5787873417ee3c4b2c3500c698096895677575e7}}, are added as two separate categories, each
integrated over {{formula:70fecebf-ee59-4eb7-bb5d-4831f4703f2e}} , for a total of nine categories.
| m | be51996152ec5cca96d8573bd6158541 |
Adversarial training can be seen as a type of data augmentation in which the inputs are augmented with adversarial examples [{{cite:a862f342c9ef663dc872ef4fc08a633dea53190a}}] to increase robustness to adversarial attacks. Here we test the commonly accepted hypothesis that adversarially trained models need low-frequency features for robustness. We do so by comparing the Fourier mask learned for a vanilla network {{formula:1d116085-1fde-4a81-8120-d9844ab78e47}} with that of an adversarially trained network {{formula:51241f45-6bdd-4db4-b547-fb89f3e2ee76}} when the learning occurs over the whole validation set. Specifically, we compare a naturally trained VGG11 with an adversarially trained one using the torchattacks library [{{cite:1c28ca33bbabb5a7e887eb68b205eba2c70459ff}}] and a Projected Gradient Descent (PGD) attack. [{{cite:328e5760b3111c8f68662c82e4f90aa291d40982}}] has shown that the frequency structures of adversarial attacks are similar across different adversarial attacks. Therefore, although the set of potential choices one could explore is vast, in this work we focus on PGD for simplicity.
Besides the mask difference we also compute the radial and angular energy of each mask by considering radial and angular partitions of the frequency domain (Figure REF (A), (B)).
We then test if the same low-frequency preference hypothesis holds true in the case of common data augmentations. To gain some intuition, let us consider a simple one layer network whose representation is given by {{formula:de42ab96-185e-452f-a2a1-e04c0cda95d9}} , where {{formula:e478841e-1aa7-49a9-9bc0-05f9394314e0}} is a non-linear function, {{formula:d66f074f-7fc7-48f9-ae0a-68d97bb92dff}} , and {{formula:baf116c6-5ea3-4b66-a28a-9af7beca650a}} is a cost function. We consider data augmentations generated by a group of transformations {{formula:06465a48-4b57-47d0-9b5d-d8965338c1fe}} . The augmented loss can now be expressed as
{{formula:a2d1ccf9-6234-4794-a660-a75b65cef5b6}}
| r | 39252e540dd222c8e6a51c924805db2e |
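The radial and angular energy profiles described above can be sketched by binning the frequency plane into rings and sectors and summing the mask energy in each bin. The following is an illustrative NumPy version; the bin counts and partition conventions are assumptions, not the paper's exact scheme:

```python
import numpy as np

def radial_angular_energy(mask, n_radial=4, n_angular=4):
    """Partition the (centered) 2-D frequency plane into radial rings and
    angular sectors and sum |mask|^2 in each bin, giving coarse radial
    and angular energy profiles of a learned Fourier mask."""
    h, w = mask.shape
    fy, fx = np.meshgrid(np.arange(h) - h // 2,
                         np.arange(w) - w // 2, indexing="ij")
    r = np.sqrt(fx ** 2 + fy ** 2)
    theta = np.mod(np.arctan2(fy, fx), np.pi)  # fold symmetric half-planes
    energy = np.abs(mask) ** 2
    r_edges = np.linspace(0.0, r.max() + 1e-9, n_radial + 1)
    t_edges = np.linspace(0.0, np.pi + 1e-9, n_angular + 1)
    radial = np.array([energy[(r >= r_edges[i]) & (r < r_edges[i + 1])].sum()
                       for i in range(n_radial)])
    angular = np.array([energy[(theta >= t_edges[i]) & (theta < t_edges[i + 1])].sum()
                        for i in range(n_angular)])
    return radial, angular
```

Comparing the radial profile of the vanilla mask with that of the adversarially trained mask is then a direct test of the low-frequency preference hypothesis.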
In the previous section we showed two important results with practical implications for the forecasting community.
First and foremost, it is possible to compress a dynamic forecasting ensemble into a compact individual model and retain competitive predictive performance. Student models show a better average rank than the teacher, but the pairwise significance analyses showed comparable results. These results are accompanied by a drastic reduction in computational costs in favor of the student models.
Second, a given individual method performs significantly better when trained according to the ST approach, as opposed to being trained using the original target variable.
Similar results to these have been reported in different predictive tasks, and different individual models and combination approaches {{cite:d4beaf4252af4c60db3fc62f81c34eced2e57d8e}}, {{cite:07f25a172a122389479cbf765e4301ca2eef668d}}.
| r | 03272fcb61130a1977636570d0301657 |
The baryon-to-meson yield ratio {{formula:12aff9a9-bded-400c-bf75-cf668dbcffde}} is presented down to {{formula:f47ee95b-f043-4bb0-a80a-e70ef12de9ce}} in p–Pb collisions at {{formula:a9831b82-848d-4603-a5a9-c81c97a24a78}} , and for the first time in pp collisions at {{formula:5618e848-92a1-4655-9bd2-42977d3252d6}} , in Fig. REF (left). In both collision systems, the measurements in the interval {{formula:4ba6cf70-52a1-4024-b946-6a225250bb11}} , indicated by the open circles, were performed in the channel {{formula:7f83bb03-dfad-446b-8228-d6d35060864f}} , with candidates reconstructed from their decay tracks using the KFParticle package {{cite:acd946233f9635d8d857f0afef1cb77139f34373}} and machine learning selections applied using the XGBoost gradient boosting algorithm {{cite:4b0ecc1a395d574c6b20f6642ed2dac68bf97d56}}. The measurements for {{formula:49e1e977-9f18-4b8a-ad49-374959955c42}} are from combined measurements of the decay channels {{formula:5965f484-21b3-4d62-a8a2-e393a03eaa05}} and {{formula:941fcf5e-e8ca-4d7c-a39a-52f7fa1c551f}} in both collision systems {{cite:adf3f5b747f6af76a2ac08b7ce7f95bf85e541f6}}. A non-flat distribution is evident in both systems as a function of {{formula:41598468-0d80-4c2b-ab10-bfa45d1cdd44}} , and there is a hint of a hardening in the {{formula:a977a7c4-aaf7-4252-8873-e0342ce7e201}} spectrum in p–Pb collisions with respect to pp. In the right panel, the pp measurements are compared with model calculations. Models tuned to hadronisation ratios from {{formula:d25ad86a-6c8b-4f2c-9dd8-d471b0c971e1}} collisions, such as PYTHIA with the Monash tune {{cite:b75505c3532cb76b1517808e1c26b8e247b29c37}} and HERWIG {{cite:f6fc23c24af8736a4a59e23aaa1697c8d7502b80}} are unable to describe either the shape or the magnitude of the {{formula:918b0058-a6f2-45fa-97d8-6627463d9d5e}} yield ratio in pp collisions. 
Models including additional hadronisation effects provide a better description of the results, for instance the Catania model {{cite:86b1139c6ff484da1e79d73acae974c7c916e9b0}}, which considers quark coalescence in addition to fragmentation; PYTHIA calculations with enhanced colour reconnection beyond the leading order {{cite:b5718c74b3b339e085a2f77c5d98a07c8bbe7aad}}; and the SH + RQM model {{cite:59d0ea11c7d426b6fd176e7a450b7ad7c9bedc75}}, which is a statistical hadronisation approach including feed-down from yet-unmeasured resonant charm baryon states.
{{figure:0a1938d6-28e9-42f9-b480-fb8949e2892f}} | r | 2a21b2ecdfbb227098b41ef2a850241e |
We have implemented the proposed hierarchical attention using JAX, an open source library https://github.com/google/jax for automatic gradient computation and linear algebra operations on GPUs and TPUs. All numerical operations in our algorithm use the NumPy-native linear algebra functions supported by JAX.
In all our experiments in this section, we use the standard Transformer architecture described in {{cite:a373236a887522b492065db5276d1d5ff3cbe65d}} as the backbone for our H-Transformer-1D model. Unless specified otherwise, the model parameters are: number of layers is 6, number of heads is 8, word embedding size is 512 and the feed-forward module (FFN) size is 2048.
We follow the API for the standard multihead scaled dot-product attention implementation https://github.com/google/flax/blob/master/flax/nn so that we can perform a simple drop-in replacement of the standard multihead attention with our hierarchical attention implementation. This allows for an easy and fair comparison.
| r | 169cc4409315936bd79fe9a005e8ef9f |
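For reference, the standard scaled dot-product attention that the hierarchical version replaces as a drop-in can be sketched as follows. The NumPy calls used here mirror `jax.numpy` one-to-one, so the same body would run under JAX; this is an illustrative single-head version, not the authors' implementation:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Dense O(L^2) attention over sequences q, k, v of shape (L, d).
    This is the baseline cost that hierarchical attention reduces."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v
```

Swapping this function for the hierarchical one behind the same API is what enables the easy, fair comparison described above.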
We remark that, an alternative way to estimate {{formula:c37e1035-5c33-4b19-b9f4-07c8ec7786cb}} for each {{formula:8a98c623-fcf0-4421-bb11-716588276307}} is by using the method of classical shadows to obtain `classical snapshots' of {{formula:753a910f-a8bf-4d1b-aa63-fd37bb5280f3}} that can be linearly combined to obtain a classical random variable whose expectation is {{formula:062de62f-0138-49f0-b5ea-7354f29f9651}} (see Supplementary Material Section 6 of {{cite:159ecdca30594eba0dd9bd8c909ad8e6e046fd46}}). However, it is unclear to us if this method would offer savings in the quantum resources required, as the total number of times the quantum circuit needs to be run in the data acquisition phase should scale with the variance of the corresponding estimator. We do not know of a concise expression for this variance for arbitrary {{formula:a44c4e12-8f83-4730-ba32-dfa01a413b04}} . Indeed, calculating it for just a single value of {{formula:08831004-552f-4503-bc10-ff59ad49922a}} ({{formula:b8fde56f-48c4-4f13-8007-7bfbdc6e2932}} ) required four pages of calculations in {{cite:159ecdca30594eba0dd9bd8c909ad8e6e046fd46}}.
| m | ac35a055106f9140b2346b962e120a18 |
In this section, we outline the potential of our proposed scheme.
We test our proposed AUQ-ADMM on a series of machine learning tasks, including elastic net regression {{cite:e3d7e8d4f5dc10c5e666594a61a65157410ce74b}}, multinomial logistic regression {{cite:e3d7e8d4f5dc10c5e666594a61a65157410ce74b}}, and support vector machines (SVMs) {{cite:67645781537ffb33634b72b1c875ed3bdc343c3b}}.
| r | c69e854918893341485649e67f58c65c |
As noted previously, the three models {{formula:22aacdc6-ed52-4c70-9057-3c56da7efd0b}} CDM, XCDM and RVM have the same number of parameters, namely 5, one more than the {{formula:17617008-1af1-445f-afd9-bd24d8b657b9}} CDM. The CPL, however, has 6 parameters. Cosmological models having a larger number of parameters have more freedom to accommodate observations. Thus, for a fairer comparison of the various nonstandard models with the concordance {{formula:6e237113-7300-48dc-bc68-699a70e8ebc4}} CDM we have to invoke a suitable statistical procedure that penalizes the presence of extra parameters. Efficient criteria of this kind are available in the literature and they have been systematically used in different contexts to help making a selection of the best candidates among competing models describing the same data. For a long time it has been known that the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are extremely valuable tools for a fair statistical analysis of this kind. These criteria are defined as follows {{cite:9eaed187bd3f6383bc50c64822df9de90b2a2edf}}, {{cite:6445ad6cee25e2224ee70df2a78c8e7a026587dc}}, {{cite:8c52ed716f4f667255494001d50c7599ed753635}}:
{{formula:e007db2b-7483-4922-9ea9-7adec70c1a32}}
| d | a63e655736a625822cf89fd9f029aa8a |
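The two criteria defined above can be computed directly from a fit; a minimal sketch, where the chi-square minimum, parameter count, and number of data points are illustrative values rather than the fitted results from the text:

```python
import numpy as np

def aic_bic(chi2_min, n_params, n_data):
    """Standard information criteria: AIC = chi2_min + 2*n_p and
    BIC = chi2_min + n_p * ln(N). Lower values are preferred; BIC
    penalizes extra parameters more strongly once N exceeds e^2 ~ 7.4."""
    aic = chi2_min + 2.0 * n_params
    bic = chi2_min + n_params * np.log(n_data)
    return aic, bic
```

Comparing the AIC/BIC differences between a 5-parameter model and the 4-parameter ΛCDM then penalizes the extra parameter exactly as described above.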
where {{formula:c91a03b3-c377-4095-b877-a67ad158c1e4}} and
{{formula:0b81ccd8-1dec-497b-882e-f63cd188b1e5}} .
Here, {{formula:068182cc-29e3-4997-a8e0-712566b72eb1}} is Fermi's constant, {{formula:e2b1133e-e0ad-4da1-abe7-fe0fa5edbef0}} is the {{formula:dfba4bea-5c9e-4e90-b1b2-d12d75dc175c}} element of the
Cabibbo-Kobayashi-Maskawa quark mixing matrix
{{cite:0b92b5f7b4396932c623d684b5d9640998b1ae34}}, {{cite:3953e7d2a3548ecc39bf83c67deb0e4f89ddb9e8}}, and {{formula:cec4fb5e-0b62-4132-85c7-83fa2e05f3e4}} is
the branching ratio of the leptonic {{formula:fd79cf0a-3f64-45ea-b1be-54766c8e481c}} -boson decay mode considered.
Neglecting the masses of the charged leptons and the first five quark flavors,
we have
{{formula:9f7ac234-db7f-45fd-bea4-c4829f8718d2}}
| r | 69046137cb1d4da10ebad364a467c5f8 |
This solution corresponds to the class VIII Heun polynomial {{cite:10674d66d78c8b53f1a035687b40ba41794bfcaf}}. Since the above-mentioned procedure is valid for all of the polynomials {{formula:e68691cf-2c7f-4e75-abdd-446e79969398}} given in Eq.(-) appearing in the eigenvalue and eigenfunction solutions, one can directly write down these solutions:
| m | a475b71b76cf27e1218dacbf10cc0b31 |
Efficiency: While DCRNN takes on average 271 seconds per epoch, GA-DCRNN requires 401 seconds. The discrepancy between the two models is due to the attention mechanism, which requires more computation. Note that when implementing the attention we followed {{cite:7d9654f9a3e3c02497e1147d56fc02271160f810}}, although a more efficient implementation is available {{cite:180c7b9be8db719cf8bf464d22f3c98cf8f7c7e8}}. For the same reason, GA-GCRNN also takes longer than GCRNN.
| r | d38973c15a90b00c47770bf25ab59ae2 |
Proof of Theorem 4.
To analyze the large-{{formula:5b354a1d-e15d-495f-b7b0-46be1cf1dbb0}} behavior of the rogue wave {{formula:8afcc6ba-4a09-4efe-bff6-3e2d9b9859d5}} in the neighborhood of the origin, where {{formula:34d77b17-14f0-4479-921e-37705413e1cf}} , we first rewrite the {{formula:d64838b3-e082-42b7-9610-66b4bcec2a9a}} determinant (REF ) into a {{formula:d1baa3ff-de04-4e5c-b6db-7612a3aae82a}} determinant {{cite:430dc74fdc081ff3035f53538e6be561226a66d2}}
{{formula:52f4e6ee-3240-485c-adaf-a95e45cb66c9}}
| r | 5efa6f70c8a27525434632ac20b7454a |
Quantifying the rate of semantic change for a word requires records of its meaning from two distinct time periods and a quantitative metric that compares these records. One type of methods that constructs word meanings and enables comparisons over time is based on word embeddings {{cite:97adc2febe6de7f495c99757cadf4ad0e031e25a}}, {{cite:37d737bb333ddb8c3affb6d7f85cba4a18819f62}}. The embedding of a word is a real-valued vector that represents its meaning through a high-dimensional space; vectors for words with similar meanings tend to be close in this space, such as compassion and sympathy. Word embeddings are constructed from co-occurrence statistics in large text corpora. We thus obtain meaning representations from two distinct time periods by constructing word embeddings based on historical text corpora from the corresponding periods {{cite:94a43ff64c78125a38d6571b24cfcb8f5960f7ed}}.
| m | 7a593c73000fdbc3a41d64ff3b8a6c87 |
We did not compare with FedHealth {{cite:227b8e9b287af571f66d76547b63546817c0a7d9}} since our method focuses on a more general setting in which clients do not share large volumes of data. We compare three extensions of our method with five baselines, comprising common federated learning methods and federated learning methods designed specifically for non-iid data:
| m | d1948397e2af027f9008102f2fa059d6 |
In terms of the closed string moduli, the finite difference equations are most naturally interpreted as being related to the integrable hierarchies underlying Gromov-Witten theory, see e.g. {{cite:ad323cbec0fa425d1868c843285cc578a85dbb8e}}. Finite difference equations of the type put forward in our work are shown in {{cite:892600182e919df10e9eaf6a27ab51c7315e2d04}} to govern the perturbative parts of certain natural representation-theoretic partition functions. Related to this, in {{cite:3b27b033503d2757995b83ddb94172fc360e8945}} finite difference equations in the closed string moduli, corresponding to {{formula:8ee99b9d-76af-4b50-a6a6-6a62a10329ff}} -deformations of Painlevé equations, were shown to be satisfied by certain tau functions related to topological string theory. In the similar context of the study of tau functions relating to topological string theory associated to class S theories, finite difference equations in the closed moduli also feature in {{cite:8d1b8e74d874e494c19478fb4f62581051616511}}.
| d | 27aa03d6a09572657f102f1a3551de51 |
We compare our method with MonoPerfCap {{cite:25397202653763653b4d4fe0ee0ff27688dfe8a0}}, a representative tracking-based method. This method captures the human performance from a monocular video, but requires a pre-built subject-specific template model. We thus conduct the evaluation on their own dataset, which provides such templates. Note that our method does not use these templates but only takes the video as input. As no ground-truth surface is provided in MonoPerfCap's dataset, we are only able to conduct a qualitative comparison. As shown in Fig. REF , without requiring the pre-built subject-specific template, our method achieves comparable results in terms of the body pose accuracy and the fidelity of local details.
| m | 8c91e3a992c7f20f3984c15c6665564b |
When {{formula:2d0413ad-6bd4-400b-8623-8ae714a02a1f}} has good ordinary reduction at {{formula:7d9eed1b-3556-4615-9608-2f91d441525f}} , this follows from Mazur's control theorem, see {{cite:ae1dbbfa235f5d01c8f6da6e7a310bd00fccca63}} or {{cite:5bae87910821c16eda815dea381ad824e32bab3e}}.
In the good supersingular reduction case, the result follows from the proof of {{cite:d0c17ba6bd34322f5b52b6498b15a88115a7c462}} (see also {{cite:bfa47de43ee0dc90781a633813be927b9e7898ba}}).
| r | b14fb5d42cf202c407acbe2253fe19d1 |
We use the pre-trained weights provided by {{cite:a0e77fd3d89398b1964d6493075c8ee5f30b7c1c}}. For all of our models, we use the AdamW optimizer {{cite:44339f3ef2c1f11eb94564ce8d5deffa0f4f6f98}} to train for 20 epochs; the learning rate is set to 5e-5, the batch size to 48, and the warm-up ratio to 0.3.
| r | f7d2f73f8527cd3db82b6573ea6759d6 |
The impact of deep learning methods in computer vision is growing rapidly, owing to the recent increase in processing power from GPUs and CPUs, sometimes dedicated solely to machine learning.
The current access to huge labelled image datasets such as ImageNet{{cite:735ade7bbbf6f764cbfcd749599c13c39d0e8620}}, COCO{{cite:e7ec5263b277659faec855c8e241b8e736faeb4d}} and already pretrained models such as VGG-19{{cite:0b5502d56b67913dca9658c615417fccb3351371}} and ResNet{{cite:aa3d4ab6f98661e0256b83f833baf5fefbde3a8e}} for feature extraction democratised the use of deep learning. The lighting modelling field is no exception to this trend. For deep global illumination, {{cite:da0bbcf6c65d44852e90ef125584c70fac7c0dd7}} takes a 3D object with known albedos as inputs. Their Convolutional Neural Network (CNN) takes the coefficients of the spherical harmonics function computed as inputs in order to generalise the illumination after training. This method shows results from a single image but requires a training time of four hours per pose and the illumination is limited to the learnt object, which limits the range of applications of the method.
{{cite:af3999c005d73cc7732b1bb43716429d8abdf1d0}} and more particularly {{cite:cbf1e7d1ae83ae1bab83d121aa10d7791bdaba7c}} were an interesting continuation of {{cite:2e611a8628541042c0746f66bc6cbe7077db25c5}}, showing impressive results in terms of specularity prediction for new viewpoints by modelling highly specular objects using an RGB-D camera to reconstruct a light field. {{cite:cbf1e7d1ae83ae1bab83d121aa10d7791bdaba7c}} was also able to model inter-reflections, showing how far novel view synthesis could go using an adversarial approach. However, this method remains limited to the object of interest and does not provide strong insights into scene understanding when the lighting conditions change or the camera viewpoint changes drastically.
| m | 787df738142a59dba5f35b957ce1fdaf |
We presented a system that uses diverse prior data for general-purpose offline RL pretraining, followed by fine-tuning on downstream tasks. The prior data, sourced from a publicly available dataset, consist of over a hundred tasks across ten scenes, and our policies can be fine-tuned with as few as 10 demonstrations. We show that this approach outperforms prior pre-training and fine-tuning methods based on imitation learning.
One of the most exciting directions for future work is to further scale up this pre-training to provide a single policy initialization that can be used as a starting point, similar to GPT-3 {{cite:6decec5bcc1d5ba6931cc9fb8ef3383efb96d351}}.
A limitation of our method is that it requires the prior data and new tasks to be structurally similar; an exciting future direction is to scale it up to more complex settings, including novel robots.
| d | 778f765b625bb60b3c98669031bbd3da |
We then provide a preliminary analysis on the face image dataset FairFace {{cite:e2e915e7bf61b39d9a403c8d0a0a8a1c17cf0e68}}. FairFace is a face attribute dataset balanced across race, gender, and age; it categorizes gender into 2 groups, race into 7 groups, and age into 9 groups.
We freeze the pretrained backbones and only fine-tune classification heads on FairFace.
We compare FaRL with CLIP {{cite:91b41c87a12438715a095e709a7fb07493935fab}}, pretrained on 400M image-text pairs (we compare with CLIP-ViT-B/16 under the same fine-tuning protocol for fairness), as well as the baseline model associated with FairFace {{cite:e2e915e7bf61b39d9a403c8d0a0a8a1c17cf0e68}}.
The accuracies w.r.t race groups are reported in Table REF . We follow {{cite:91b41c87a12438715a095e709a7fb07493935fab}} and define the group “Non-White” to include multiple race categories: “Black”, “Indian”, “East Asian”, “Southeast Asian”, “Middle Eastern” and “Latino”.
| d | ad223762193fdf5a2aaca61bb7212c10 |
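The "freeze the backbone, fine-tune only the classification head" protocol above can be sketched numerically. Everything here is a stand-in: the frozen backbone is a fixed random projection, the data are synthetic, and the two classes loosely mirror the binary gender attribute; none of it reproduces the actual FaRL/CLIP models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed random projection.
W_backbone = rng.normal(size=(32, 16))
def backbone(x):                      # frozen: never updated below
    return np.tanh(x @ W_backbone)

# Synthetic 2-class data.
X = rng.normal(size=(200, 32))
y = (X[:, 0] > 0).astype(int)

# Trainable classification head only.
W_head = np.zeros((16, 2))

def loss_and_grad(W):
    f = backbone(X)
    logits = f @ W
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    dlogits = p.copy()
    dlogits[np.arange(len(y)), y] -= 1.0
    return loss, f.T @ dlogits / len(y)

loss0, _ = loss_and_grad(W_head)
for _ in range(500):                  # plain gradient descent on the head
    _, g = loss_and_grad(W_head)
    W_head -= 0.1 * g
loss_final, _ = loss_and_grad(W_head)
```

Only `W_head` receives updates; the backbone weights are untouched, which is exactly the linear-probe-style protocol described in the excerpt.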
Finally, we comment on phenomenological applications of this work. If we assume that a Georgi-Glashow GUT exists in nature and is spontaneously broken to the SM gauge group, one can use measurements of the gauge couplings at low energy to predict the GUT scale {{formula:1e159db9-0a7c-41f3-b854-94d22c455cdf}} and the coupling {{formula:550aa3da-050d-46ff-bad8-82de413de6f4}} at that scale, assuming a particle content at scales between the weak scale and the GUT scale. For example, assuming only the SM (MSSM at the TeV scale) spectrum, we get {{formula:02fe2fdb-fc70-4d2a-838f-6e5e5419649d}} GeV, {{formula:fa0f85f4-fca4-443a-bebd-535f91da493a}} [{{formula:9885c66f-89e2-4243-b986-7ebb86f18ac5}} GeV, {{formula:387a1c80-439a-4be4-a87f-c3334935861d}} ]. In the spirit of {{cite:42ff543594f1412b33b1ca0ca552537aa57063d7}}, we can then ask what happens if there is a period in our cosmological history during which the GUT symmetry breaking from the Higgs mechanism is shut off or delayed. Alternatively, in the spirit of {{cite:3c4419e76ac2f16df9010621d7fa0ce2945e30ff}}, we can also consider an SM-like dark sector where the GUT symmetry-breaking Higgs mass is positive. In either case, the confining chiral theory considered in this work will govern the dynamics.
| d | 5619dc4fc0e9ad265f7c542c0ff7bee3 |
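The kind of GUT-scale prediction mentioned above comes from running the gauge couplings up from the weak scale. A rough one-loop sketch follows; the input values and the one-loop truncation are textbook approximations assumed here, not the excerpt's actual (higher-order) computation.

```python
import math

# One-loop running: alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / 2pi) * ln(mu / M_Z).
# Approximate SM inputs at M_Z (GUT-normalized U(1)_Y, then SU(2), SU(3)).
M_Z = 91.19                                   # GeV
alpha_inv = {1: 59.0, 2: 29.6, 3: 8.45}
b = {1: 41.0 / 10.0, 2: -19.0 / 6.0, 3: -7.0}  # SM one-loop beta coefficients

def crossing_scale(i, j):
    """Scale (GeV) at which alpha_i = alpha_j at one loop."""
    log_ratio = 2.0 * math.pi * (alpha_inv[i] - alpha_inv[j]) / (b[i] - b[j])
    return M_Z * math.exp(log_ratio)

mu_12 = crossing_scale(1, 2)   # U(1)-SU(2) crossing, ~1e13 GeV with the SM spectrum
```

Changing the assumed particle content between the weak and GUT scales (e.g. to the MSSM, with different $b_i$) shifts this crossing scale, which is the dependence the excerpt emphasizes.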
While our experiments only explore pretraining of ResNet backbones, we note that investigation of pretraining Vision Transformers (ViT) {{cite:fbee052a2c3809b260c1fd24d1c7a8709fb3d684}} would be a direct next step in this research. ViTs have shown improved performance in segmentation {{cite:dca71af23394d70e0deeccb9380f77c9b192334d}} and change detection {{cite:caaf441bcb46644c9ef94f5cc564e77cf0757a36}} applications in comparison to fully-convolutional based architectures. Furthermore, pretraining of ViT backbones with self-supervision {{cite:eeb45e14337a3726fb9639f80a4d1a1cd93b98f4}} has recently shown promising results.
| d | cd517e776ceed167d2ed3d59328bcf39 |
When {{formula:1b12e3c1-9f02-4c4f-a8a7-90cb2312d847}} , the existence and uniqueness follow from the Lax–Milgram lemma on {{formula:1e9f2896-9880-4e35-95b3-711c31945f8e}} . The classical way to deal with a non-zero Dirichlet boundary condition {{formula:cc28cd7c-9220-4c36-9cbe-fb704c31dba8}} is to find a lifting {{formula:64cc4697-5751-42c7-8feb-43f1cdd69afa}} with {{formula:38987261-5278-4323-a4ea-3c773aee2eb3}} and reduce (REF ) to a homogeneous boundary condition with the modified source {{formula:cbceb311-30b6-4d5a-88b7-dea76b057d77}} . Such a lifting is guaranteed by trace theorems for Sobolev spaces, which are usually established for smooth domains. For polyhedral domains, however, compatibility conditions {{cite:a38ab2da2b18b437f54c1c03cfadf611c1261b15}} are needed. For {{formula:868ea643-defa-4149-9f90-951ec9934d59}} -functions, {{formula:b42dd1a0-6740-446e-b957-2c183646989c}} should be single-valued across each edge {{formula:06951989-9ab9-4107-bdbd-a6c554addf80}} of the polyhedron {{formula:fce31686-efd9-4a37-af35-b908f81b85a4}} .
| i | 79f535b6ef6d77a93e052ea7aa5bd0d3 |
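The lifting argument above can be written out in weak form; the bilinear form $a(\cdot,\cdot)$ and source $f$ below are generic notation assumed for illustration, not the excerpt's exact symbols.

```latex
% Given boundary data g, pick a lifting u_g \in H^1(\Omega) with
% \operatorname{tr} u_g = g and split the solution as u = u_0 + u_g.
% The remainder u_0 then solves a homogeneous-boundary problem with a
% modified source, to which the Lax--Milgram lemma applies directly:
\begin{aligned}
  u &= u_0 + u_g, \qquad \operatorname{tr} u_g = g, \qquad u_0 \in H^1_0(\Omega),\\
  a(u_0, v) &= (f, v) - a(u_g, v) \qquad \forall\, v \in H^1_0(\Omega).
\end{aligned}
```

The single-valuedness requirement across edges in the excerpt is precisely what makes the trace (and hence the lifting $u_g$) well defined on a polyhedral $\Omega$.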
Substantial quantum computations in a system of real utility will involve long sequences of one- and two-qubit gate operations {{cite:de27b773e05a91980afd5dae5421181abbb287f7}}, so a thorough characterization of specific gates is important for predicting how they will behave in such sequences and how experimental errors will propagate {{cite:0af0b4bf92a1a175683e6c43d231760c624fe566}}. The most widely used method for quantifying two-qubit entangling gate performance in systems of trapped ions is to use the gate to create a Bell state and then to analyze this state with quantum state tomography ("Bell-state tomography"). This technique is popular because it requires only the gate interaction and a global one-qubit gate for analysis {{cite:1f0a4e63f15942878fee88df3352f5fab228d5f7}}, {{cite:9fd2366af046332e15345f6cb63a5b4e2a533b7a}}. Bell-state tomography measures how well the experimentally produced state matches an intended entangled state, consolidating coherent and stochastic errors into a single number {{cite:a32e56c3d8c01bf69dd3a2f484bf2539564c9b6b}}. However, it falls short of providing a comprehensive description of the quantum process that would be useful for identifying sources of error or for modeling gate performance in the context of more complex sequences.
| i | 0a32a72691111b1197a86efe5ed1c771 |
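The "single number" that Bell-state tomography reports is the fidelity of the reconstructed density matrix with the target Bell state, $F = \langle\Phi^+|\rho|\Phi^+\rangle$. A minimal numerical sketch, using a toy depolarized state (the noise model and its strength are assumptions for illustration):

```python
import numpy as np

# Ideal target: |Phi+> = (|00> + |11>) / sqrt(2).
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ideal = np.outer(phi_plus, phi_plus.conj())

# Toy "measured" state: ideal Bell state mixed with white noise.
p = 0.9                                        # illustrative noise parameter
rho_meas = p * rho_ideal + (1 - p) * np.eye(4) / 4

def bell_fidelity(rho):
    """Overlap of a two-qubit density matrix with the ideal |Phi+> state."""
    return float(np.real(phi_plus.conj() @ rho @ phi_plus))

F = bell_fidelity(rho_meas)                    # p + (1 - p)/4 = 0.925 here
```

As the excerpt notes, this single fidelity cannot distinguish whether the missing 0.075 comes from coherent miscalibration or stochastic noise; that distinction requires process-level characterization.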
(6) Replaying pseudo features (latent representations) in CV. Generating high-quality samples can be very challenging, so some CV papers propose to generate features instead. Example systems are {{cite:ccd7c2b1ec28c6b5287b1898b069c492a3f7cd4d}} and {{cite:0497fa36977184c53d3d0546da2f60178946534a}}.
| m | bf2c69a307edf39a48a71a52569ad9cb |
In this section we describe how multiple instance learning can be used
to address some of the drawbacks seen in previous approaches, namely
the need for expert knowledge in lexicon-based sentiment analysis
{{cite:4d8cc06e4f3563d4cf0cc5c453bff04b1c13fa05}}, expensive fine-grained annotation on the
segment level {{cite:5d1a02185f5dbd05df1fac63cde80c9b91172f6d}}, {{cite:ef1dc0efb66ac3b693ea651ab5aa77c4e62c345d}} or the
inability to naturally predict segment sentiment
{{cite:1c75cba448dd0c3f97bea802f8fd64fd050e5697}}.
| m | 614eafb7180a0991fc3c2362007a5953 |
This part compares Dual Teaching (DT) with previous semi-supervised wrapper methods, Self-Training (ST) and Co-Training (CT). To show the generality of the proposed method, Logistic Regression (lr) {{cite:fa11ee696eefd3a3fc0d61e697a4ffe8f4a0540e}}, Support Vector Machine with a linear kernel (svm) {{cite:9915d79c40421ee9c603ee6f8cf95eb3ade03ad6}} and Adaboost (ada) {{cite:4a39247cfa6af9913a4ef86035f13043e5ff1fec}} serve as the base learner separately for every wrapper method. Every specific method is abbreviated as "the name of the base learner/the name of the wrapper method". For instance, "lr/ST" means the base learner is a logistic regression model and the wrapper method is Self-Training. To show whether the wrapper methods can improve the performance of the base learner using unlabeled data, supervised models trained on labeled data only, abbreviated as "the name of the supervised model", serve as the baseline approaches. For instance, "lr" represents the logistic regression model trained only on labeled data. This experiment covers all data sets referred to above. For every data set, each method is run with various numbers of labeled data and the performance on the testing set is recorded. The ratio of the number of labeled data to that of all data in the training set ranges from 0.1 to 1. When the ratio is equal to 1, the base learner is simply a supervised classifier trained on fully-labeled data.
| m | 3e4f8b374312be38bcc2baed5d7587d3 |
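The "lr/ST" wrapper described above (logistic regression inside Self-Training) can be sketched in a few lines. The dataset, confidence threshold, and number of rounds below are toy choices, not the paper's experimental settings.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Toy 2-class data; only the first 10% of points keep their labels.
X, y = make_blobs(n_samples=300, centers=[[-4, -4], [4, 4]],
                  cluster_std=1.0, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:30] = True                       # labeled ratio = 0.1

X_lab, y_lab = X[labeled].copy(), y[labeled].copy()
X_unl = X[~labeled].copy()

clf = LogisticRegression()
for _ in range(5):                        # a few self-training rounds
    clf.fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)
    confident = proba.max(axis=1) > 0.95  # pseudo-label confident points only
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unl = X_unl[~confident]

acc = clf.score(X, y)
```

The supervised "lr" baseline in the excerpt corresponds to stopping after the very first `clf.fit` call, before any pseudo-labels are added.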
Next, in the case of the similarity {{formula:5778d688-cec7-4d7b-9023-cec0483774a5}} , as hinted in
sec:gsimlearn, when {{formula:af65276a-1264-4b0a-b0e4-aa68b40085dc}} , {{formula:6a023d29-c08f-41fa-a57d-9a1cfb984160}} is a non-unitary CPTP map.
This enriches the family of feature maps at our disposal compared to the
previous case. Intuitively, this can be understood as follows: we prepare two
quantum states that map the classical data {{formula:a99cf9bd-5803-45f7-8433-f5fa2e5f34fe}} and {{formula:c2298725-efb7-49a3-a3fd-bd82bcb6e750}} to their respective
states {{formula:04092ccb-9c45-4b32-b5f5-c9b075d814c0}} . In the previous multi-space metric learning
case, we required that (dis)similar elements be mapped (far)close to
each other in the Hilbert space {{formula:7344df28-4954-4fa0-b6bb-4c86438f2261}} with dimension {{formula:fb57fc47-1ef5-4582-9569-829d81c71f5d}} . In
contrast, the similarity defined in eq:swap partial requires that
(dis)similar elements be mapped (far)close to each other in the Hilbert space
{{formula:9154ef3c-62b1-4ed7-8cd0-5187e1d025dc}} . This is similar to the
projective SWAP test used in Ref. {{cite:62a64734d5e9e89f6daf826071d22674e3be454b}}.
| d | 56eb72e8cf3390b9c5b66db3d1d2b217 |
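The SWAP test referenced above turns state overlap into a measurement probability: measuring the ancilla of a controlled-SWAP circuit yields $P(0) = \tfrac{1}{2} + \tfrac{1}{2}|\langle\psi|\phi\rangle|^2$. A minimal sketch of those statistics (simulating only the outcome probability, not the full circuit or the paper's specific similarity):

```python
import numpy as np

def swap_test_p0(psi, phi):
    """Probability of measuring the ancilla in |0> in an ideal SWAP test."""
    overlap = abs(np.vdot(psi, phi)) ** 2
    return 0.5 + 0.5 * overlap

e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])
plus = (e0 + e1) / np.sqrt(2)

p_same = swap_test_p0(e0, e0)    # identical states  -> 1.0
p_orth = swap_test_p0(e0, e1)    # orthogonal states -> 0.5
p_half = swap_test_p0(e0, plus)  # |<0|+>|^2 = 1/2   -> 0.75
```

Since $P(0)$ never drops below 1/2, the similarity is recovered from the measurement statistics as $2P(0) - 1$.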
The advantage of Theorem REF is that it works for general branching mechanisms without criticality restriction. In particular, it applies to all the stable branching mechanisms given by (REF ). Furthermore, the proof of the theorem actually provides a way of finding explicitly the ergodicity rate {{formula:238d577e-51be-44de-9078-5d91b2e937a2}} , which is important in applications; see Remark REF . Condition REF is introduced to avoid some extreme cases of (REF ). If the condition fails, then either subordinator {{formula:f4094594-f632-4af6-abfb-edf2cff384bc}} vanishes or the Lévy field {{formula:0c255018-b5c7-4298-851b-d6f8ef43df21}} only has nonnegative increments. In any of those cases, the immigration essentially plays no role in the ergodic behavior. Fluctuation conditions like Condition REF have been considered in the study of ergodicities of Ornstein-Uhlenbeck type processes with nonlinear drift; see, e.g., {{cite:005b64e64f4c391c9219805b45cd59f9ccf6606f}}, {{cite:ce91aee46a43dd02abb1d9d5f2f060fdfd22be0d}}, {{cite:f9a300c0168ebaa308b08a896d5b96a77a936147}}, {{cite:6ef134ebf2fa3ef695868be78e16de41feda49cc}}, {{cite:3643d0e7593798a43087648ecb9d04dfa1734006}}. Our condition (REF ) is actually weaker than the corresponding assumptions for exponential ergodicities in those previous papers, where one usually required {{formula:f7a50f50-2b54-4bed-b976-bf7112dc0f6c}} as {{formula:89d27e5b-d38e-4af5-9bea-855b0ff6c46f}} . In particular, the condition is satisfied if {{formula:0110d773-fb47-418d-a869-d321661964d9}} or {{formula:a80f0b24-252e-4db8-ac23-406156887861}} is bounded below by the measure {{formula:15a37b4e-a041-480d-9837-074526a5a8d7}} for some {{formula:8cb27bde-488d-4a33-8157-54f527eb1ca2}} and {{formula:50c03704-0fca-4922-9987-1f6908fd8455}} . The existence of a suitable Lyapunov function has become a standard condition for ergodicity following Down et al. 
{{cite:d855db165d9094bd480fe38f244e5b7c6ae1c26c}} and Meyn and Tweedie {{cite:2f55afc1fd998e922650f3aa88a91564b06ca243}}, {{cite:f1977734aae59cb505072b4bfde959592dfc1734}}, {{cite:14740f563069588ba3bf91a30245953315c1a9ba}}. Condition REF means roughly the competition mechanism wins its confrontation against the branching and immigration when the population is large. Typically, this is equivalent to a simple growth condition for the competition mechanism.
| r | 828439fda3e9012fd2b7a3adf12c40a9 |
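The "suitable Lyapunov function" condition attributed to Meyn and Tweedie above is, schematically, the standard Foster–Lyapunov drift inequality; the notation below ($L$ for the generator, $V$ for the Lyapunov function) is generic and assumed for illustration.

```latex
% Drift condition: there exist a function V \ge 1 with compact level sets,
% constants c > 0, b < \infty, and a "small" set K such that
L V(x) \;\le\; -\,c\, V(x) \;+\; b\, \mathbf{1}_{K}(x),
% which yields exponential ergodicity in a V-weighted total-variation norm.
```

In the excerpt's setting, the drift term $-cV$ reflects the competition mechanism dominating branching and immigration at large population sizes, which is exactly the heuristic stated for Condition REF.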
Details of the analysis based on operator theory can be found in Appendix B.
The mobility edge of the model (REF ) with {{formula:36065c4f-4657-47d0-989d-8055f0ae85f6}} can also be obtained by looking for the self-dual points of the system,
as originally shown in Ref. {{cite:b0f29753d72a3c2f2993cad96f7abd31bf17d2d7}}. Our method, based on the analytical expression of the Lyapunov exponent, yields the same result. Although Eq. (REF ) is known, we note that the analytical expression of the Lyapunov exponent was only derived recently {{cite:9002fc116aeac97464dfbcd2cc4bb71b75f7a8b8}}.
| r | 4bdeefe6af89e20f155736859bdd2561 |
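Numerically, the Lyapunov exponent discussed above is usually estimated from products of transfer matrices. The sketch below illustrates this on the standard Aubry-André potential $V_n = 2\lambda\cos(2\pi\alpha n)$, which is an assumption for illustration, not necessarily the excerpt's exact model.

```python
import numpy as np

def lyapunov_exponent(E, lam, n_steps=200_000, alpha=(np.sqrt(5) - 1) / 2):
    """Top Lyapunov exponent of the 1D tight-binding transfer-matrix product."""
    v = np.array([1.0, 0.0])
    gamma = 0.0
    for n in range(n_steps):
        V = 2.0 * lam * np.cos(2.0 * np.pi * alpha * n)
        # Transfer matrix T_n = [[E - V, -1], [1, 0]] applied to v.
        v = np.array([(E - V) * v[0] - v[1], v[0]])
        norm = np.hypot(v[0], v[1])
        gamma += np.log(norm)          # accumulate growth, then renormalize
        v /= norm                      # avoids floating-point overflow
    return gamma / n_steps

gamma_loc = lyapunov_exponent(E=0.0, lam=2.0)   # localized side: gamma ~ ln(2)
gamma_ext = lyapunov_exponent(E=0.0, lam=0.5)   # extended side: gamma near 0
```

A positive exponent signals exponential localization; the analytical expression cited in the excerpt replaces this numerical limit with a closed form, from which the self-dual points (and hence the mobility edge) can be read off.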
The approach presented here was developed in parallel with the super-resolution method proposed by {{cite:2bd0c77ca7530e9ad5ca2f334817325e9f75011b}}. In the current work, quantitative results are presented in terms of SSIM, PSNR and VIF for all cardiac MRI slices of test patients from the ACDC dataset and 45 subjects from the Sunnybrook dataset. In comparison, the work of {{cite:2bd0c77ca7530e9ad5ca2f334817325e9f75011b}} reports results in terms of PSNR and correlation coefficient for a limited selection of two cardiac MRI slices (mid-ventricular and basal) from the UK Biobank. Note that the intensity statistics of images from different datasets may be very different, and hence PSNR measurements might be inaccurate {{cite:20239be8a132a49a7b64c04db774c3f86cdd9e84}}. Therefore, a thorough quantitative comparison of the two methods is hardly feasible.
| d | f32c223bac903bb16ea8fa4818cfb200 |
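The PSNR comparability caveat above is easy to see from the definition, $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$: the same pixel errors yield very different scores under different assumed intensity ranges. A toy illustration (the images and error magnitude are made up):

```python
import numpy as np

def psnr(img_a, img_b, data_range):
    """Peak signal-to-noise ratio in dB for a given peak intensity MAX."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)      # constant error of 0.1 -> MSE = 0.01

p_unit = psnr(a, b, data_range=1.0)     # 20 dB for a [0, 1] intensity range
p_wide = psnr(a, b, data_range=255.0)   # identical error, much higher PSNR
```

This is why PSNR numbers computed on datasets with different intensity statistics (ACDC/Sunnybrook vs. UK Biobank in the excerpt) are not directly comparable.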
Ablations on Part Representations.
We further compare 3D reconstruction methods that use different part shape constraints.
A majority of existing methods represent objects with a single mesh {{cite:9112f7f5f9e9e7d50b8cef1e5615c96b09a802c4}}, {{cite:979045aa810b4c46a042dfae5a355808110ccd05}}, {{cite:2fe0f501761dac48a147e307ada11fb19054d882}}, {{cite:c88901893479fadaeca201e3fc64ec2dafe5235d}}, {{cite:17c6f041d8479ff7a0cc3a67eb19f4133d63d76d}}, {{cite:c11f1fb684fbdc45a280026cf0f746b2781a7a84}} and allow each shape to be fully deformable by predicting the vertex deformations directly.
On the other end of the spectrum, most part-reasoning approaches represent parts with primitive shapes like cuboids {{cite:7b396faa1be14629104bb680d1acaa269e667857}} or superquadric surfaces {{cite:7f94b4ef3fc5f0e0f2c7586ccc46e244cf09f8f7}}.
Our method lies between these two extremes and enables a variable degree of freedom by adjusting the latent dimension of the part shape embedding.
Table REF shows the quantitative comparisons between part representations like free-form meshes, cuboid reconstructions, and our Part-VAE embedding via different evaluation metrics.
In addition to 3D voxel IoU, we calculate the 2D re-projection IoU, 2D structural similarity (SSIM), and Chamfer distance (CD) between the point sets sampled from 3D volumes.
The results show that LPD achieves a better trade-off between the degree of deformation and shape regularization among the part-based methods.
To observe how our method adapts to different object classes, we perform single-class training with different number of parts {{formula:0f797316-0985-475b-a628-7d30c57a7f87}} on airplane, car, and chair images.
As shown in Table REF , the optimal number of parts {{formula:9f58cb81-0716-42c7-a0ee-66a449db2cca}} varies across object classes.
This suggests that each class has a distinct underlying part configuration that optimally represents the object shapes.
Note that LPD achieves higher accuracy than other methods on all three classes with more than one part.
Our main reconstruction model is class-agnostic and we use the same number of parts for all classes, and yet the performance could be further improved if it is optimized for each class separately.
{{table:49d5a587-41b4-47dc-b60f-80615c16a502}}{{table:a778f977-abf5-4bba-8262-c999d734a928}} | r | c4b42e9e36e2c0e2375e34ec257bfb3c |
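The Chamfer distance (CD) metric used in the evaluation above can be sketched directly. The convention below (squared distances, mean reduction, symmetric sum) is an assumption for illustration; papers differ on these choices.

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between two point sets of shape (N, 3)."""
    # Pairwise squared distances between every point of A and every point of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    # Mean nearest-neighbor distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0]])

cd = chamfer_distance(A, B)   # 0.5 + 0.5 = 1.0 for these toy point sets
```

In practice the point sets would be sampled from the reconstructed and ground-truth 3D volumes, as described in the excerpt.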