It is widely agreed that Nash's concept of equilibrium captures the stability of the unilateral choices made by the players in noncooperative games. Aiming to gain as much as they can, the players make their choices within a process described mathematically by the notion of a "game" (proposed by Nash, and initially called a "game in normal form"). The original definitions of Nash {{cite:a692346e3b9b522d99472bc43cfcd9e19bd75923}},{{cite:df38bfe45962ca25d0b39f3b26a73d2ca2dcffc4}} have since been extended, but the derived notions of qualitative game or abstract economy, and their corresponding equilibrium concepts, retain the initial meaning: choices remain flexible in any given context, yet no player has an incentive to deviate once the players have agreed on the solution that is best for them.
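The no-deviation condition has a direct computational reading. As an illustration (not taken from the cited works), here is a minimal sketch that enumerates the pure-strategy Nash equilibria of a two-player normal-form game; the payoff matrices are the standard Prisoner's Dilemma, chosen only for concreteness:

```python
# Hedged sketch: find pure-strategy Nash equilibria of a two-player
# normal-form game by checking that no player gains from a unilateral deviation.

def pure_nash_equilibria(payoff_a, payoff_b):
    """Return all (row, col) pairs where neither player benefits from
    a unilateral deviation."""
    n_rows, n_cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            # Row player cannot improve by switching rows...
            row_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(n_rows))
            # ...and column player cannot improve by switching columns.
            col_best = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(n_cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: action 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs
print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection is the unique equilibrium
```

Note that (cooperate, cooperate) fails the test precisely because a unilateral deviation to defection pays off, which is the sense of "deviate" in the text above.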
| i | 113b6a6cf9d4714457e3b3717d5c6644 |
For the US airport network, however, the geographic results are
poor. These poor results may be unexpected at first, but they have a
simple explanation: the geometry of the airport network is not
really Euclidean, as is the geometry of the nearly planar road network,
but hyperbolic. Indeed, efficient paths in the airport network
optimise not so much the geographic distance travelled as the number
of connecting flights. As a consequence, most paths go via hubs. As
opposed to the road network, where the number of roads meeting at an
intersection does not vary that much from one intersection to another,
the presence of hubs in the airport network makes the network
heterogeneous, i.e., node degrees vary widely. This heterogeneity
effectively creates an additional dimension (the “popularity”
dimension in {{cite:426df1ede941b6556f6486ef2dfd8e0c5a9a5569}}). That is, in addition to their
geographic location, airports also have another important
characteristic—the size or degree. This extra dimension makes
the network hyperbolic {{cite:aa7015dce1158211b87b504013aead413f671957}}. The NNG results for
the hyperbolic map of the airport network in Figure REF are as
good as for the other networks.
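As a toy illustration of why hop-optimal paths concentrate on hubs (a synthetic hub-and-spoke graph, not the actual airport data), a breadth-first search over hop counts routes every spoke-to-spoke trip through the hub:

```python
# Hedged sketch: in a hub-and-spoke graph, hop-minimal paths all pass through
# the hub, unlike a grid-like road network. Graph and names are illustrative.
from collections import deque

def shortest_path(graph, src, dst):
    """Breadth-first search returning one hop-minimal path (assumes dst reachable)."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

# Toy "airline" network: every spoke city connects only to the hub.
hub_net = {"HUB": ["A", "B", "C", "D"],
           "A": ["HUB"], "B": ["HUB"], "C": ["HUB"], "D": ["HUB"]}
print(shortest_path(hub_net, "A", "D"))  # ['A', 'HUB', 'D']
```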
| r | 02d26b5dbce0d3111b63cfab1944e77c |
In our experiments, we additionally select two representatives – Input {{formula:5e6add4c-d339-4150-9ceb-e07fd40bcbab}} Gradient (InputGrad) {{cite:b6c2ac5dedd36863ff3775f7f6811af43962623c}}, {{cite:e528768cffcb0b64344a99e2bf701ac4e96cc6c0}} and Integrated Gradients (IG) {{cite:e11bcd1c435c7eb2abc467e3ccbd89dae241c3f0}} – from the family of gradient-based attribution methods for further comparison.
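For readers unfamiliar with these baselines, the following hedged sketch implements Integrated Gradients on a toy differentiable model (a linear model, so the gradient is analytic); the model, values, and helper names are illustrative, not taken from the paper's setup:

```python
# Hedged sketch of Integrated Gradients: average gradients along the straight
# path from a baseline to the input, scaled by the input-baseline difference.
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=100):
    """Approximate IG_i = (x_i - b_i) * mean gradient along the straight-line
    path from baseline to x (simple Riemann sum)."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.mean([grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * grads

w = np.array([1.0, -2.0, 3.0])
f = lambda x: float(w @ x)
grad_f = lambda x: w            # gradient of a linear model is constant

x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
attr = integrated_gradients(grad_f, x, baseline)
input_grad = x * grad_f(x)       # InputGrad is simply input times gradient
# Completeness: IG attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

For a linear model the two methods coincide; they differ once the gradient varies along the path.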
{{table:4a7dc105-92bc-4e61-bc70-d0e36bb582dc}} | m | 78b189e282fda37a49c939300d2fbe73 |
In order to study the patterns of the first-type rogue waves of the three-component NLS equation under the condition {{formula:332d0d8f-8256-4648-98d6-99bf3de0c64a}} in the inner region with {{formula:067123f3-2e5a-4a57-84d4-1c2bfdd78af8}} ,
we first use a method similar to that in {{cite:5b91e1c30b67d0a3917f346bd8729b16aa038f50}} to rewrite the determinant {{formula:4eff6aeb-d34c-4499-87e3-29707a5bcbd9}} as a {{formula:23605db3-9412-4a67-b7fd-536ef8be0e7e}} determinant
{{formula:580043c9-7ae2-4209-9acb-6addf0679273}}
| r | 33c454de15f25faff05115c8ac3f8123 |
Hence, the relationship between predictive coding and backpropagation identified in previous work relies critically on the fixed prediction assumption. Formal derivations of predictive coding in terms of approximate variational inference {{cite:adb45c296e2d9434d994fbb36a900c3b4b31fcb9}} do not produce the fixed prediction assumption. It is possible that an alternative probabilistic model or alternative approaches to the variational formulation could help formalize a model of predictive coding under the fixed prediction assumption.
| d | a1004854f1dbe9848b723840f52d69c2 |
With (REF ) in mind, we estimate the local Lipschitz constants of the integrand as in (REF ). The numerical estimation of the Lipschitz constants of NNs has attracted attention, as they form a way of estimating the generalizability of a neural network, and they have been used in the training process to encourage accurate generalization {{cite:e8fcba26abcf4afee713ddb59e96cc6f5a797a96}}, {{cite:988e63a4042faf49a9645a68877e8c35a57690e4}}, {{cite:151b75372873b261fac5c9b6a719b09eb6536524}}. However, as we are dealing with loss functions that involve derivatives of the neural network, we need estimates of higher-order derivatives of {{formula:0b0afd87-1608-47da-af2b-bad2a77bb429}} . The approach that we employ is similar in spirit to the work of {{cite:63195875d46094d36769b2d9a947a947d0c72cab}} for obtaining a posteriori error estimates in PINNs.
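To make the notion concrete, here is a hedged, illustrative sketch of a crude Monte-Carlo estimate of a local Lipschitz constant via finite-difference ratios over sampled point pairs; it is not the paper's estimator, and the function, radius, and sample count are assumptions for the example:

```python
# Hedged sketch: estimate a local Lipschitz constant of a scalar function
# by the maximum |f(u)-f(v)| / ||u-v|| ratio over random pairs near x0.
import numpy as np

def local_lipschitz(f, x0, radius=0.1, n_pairs=2000, seed=0):
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_pairs):
        u = x0 + rng.uniform(-radius, radius, size=x0.shape)
        v = x0 + rng.uniform(-radius, radius, size=x0.shape)
        denom = np.linalg.norm(u - v)
        if denom > 1e-12:
            best = max(best, abs(f(u) - f(v)) / denom)
    return best

f = lambda x: float(3.0 * x[0] + x[1])   # true Lipschitz constant = sqrt(10)
est = local_lipschitz(f, np.zeros(2))
print(est)  # approaches sqrt(10) ~ 3.162 from below
```

Such sampling-based estimates always lower-bound the true constant; certified upper bounds require the more careful analyses cited above.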
| m | aecf66811d545ecb677049588a9d8ef4 |
Query modification, especially query expansion in the document retrieval step, has been shown to give rise to significant improvements in QA performance. The GAR {{cite:420a3e5deb213b59e530bce96ef93f0e3994bea5}} system uses a sequence-to-sequence generative transformer model, BART {{cite:b7d8be5a626b651d496b1c7dd092a6e2b032d33e}}, to generate query expansion terms. Using the expanded query in a simple BM25 model yields a significant improvement in the performance of the retriever. It was also shown that the GAR query expansion model used in conjunction with the DPR model can lead to even better performance.
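The mechanism is easy to see in a minimal BM25 scorer. The following hedged sketch (a toy corpus and hand-picked expansion term, not the GAR system itself) shows how appending generated terms to the query raises the score of a relevant document:

```python
# Hedged sketch: BM25 scoring with and without query expansion terms.
import math
from collections import Counter

def bm25_score(query, doc, docs, k1=1.5, b=0.75):
    """Score one tokenized doc for a tokenized query against corpus `docs`."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(1 for d in docs if term in d)
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["paris", "is", "the", "capital", "of", "france"],
        ["berlin", "is", "a", "city", "in", "germany"]]
query = ["capital", "france"]
expanded = query + ["paris"]          # e.g. a term a generative model might add
base = bm25_score(query, docs[0], docs)
exp = bm25_score(expanded, docs[0], docs)
print(base, exp)                      # expansion raises the match score
```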
| m | 09ab8e6b744180f0054d5b7b5227c217 |
since {{formula:65bdd933-d55c-4294-b85f-d5b649f9555c}} is assumed to be positive. This implies that {{formula:09105b10-1352-450c-99a1-d813dbd89c1c}} is a self-adjoint (see Corollary 7.3 of {{cite:054e820d90a5974a98007ac951b4c42cfd9185c5}}) compact operator on the complex Hilbert space {{formula:3ad311a8-0fbe-42a0-baae-0a3616d00d06}} . Therefore, by the Hilbert–Schmidt theorem, there exists an eigenvalue decomposition
{{formula:56fc9447-e21d-48f7-8df3-3e426fa4b3f9}}
| m | 6456173ff13b975e1ae2ecf75fff7f4f |
To study the dependence of synchronizability on the network structure with higher-order interactions, we focus on the Kuramoto type of coupling function for the coupled oscillators. Then, the master stability equation, Eq. (REF ), belongs to the case of Eq. (15) in {{cite:ad10bbb475f8f1b59bbfbcb887f086645bafa405}}. As noted below Eq. (15) in {{cite:ad10bbb475f8f1b59bbfbcb887f086645bafa405}}, the situation is conceptually equivalent to synchronization in networks with only pairwise interactions. The summation of higher-order Laplacians now plays the same role as the conventional Laplacian from pairwise interactions. Thus, synchronizability depends on generalized Laplacian matrices {{cite:11d9cb01bf6fd681a3dfef57111c866e0e7857d9}}, {{cite:f7e8995ade011198921d7038abde13d6195ae4c3}}, and can be characterized by the eigenvalues of Laplacian matrices {{cite:a9b1c80b2e67ea643776de7c499928669732687a}}, {{cite:74dd8ef61950efc1ac73f6e60389370f879b2980}}, {{cite:17d5cdb4ce19ba30776a2dc3b3cfb5ec8f4ad299}}, {{cite:86952ef4adf031908acf2e69216e3109f2ae50ab}}.
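As a hedged, pairwise-only illustration of the eigenvalue characterization (a toy graph, not the paper's generalized higher-order Laplacians), the eigenratio lambda_N / lambda_2 of the graph Laplacian is a common synchronizability proxy, with smaller values indicating better synchronizability:

```python
# Hedged sketch: Laplacian eigenvalues and the eigenratio lambda_N / lambda_2
# for a toy pairwise network (the complete graph K4, maximally synchronizable).
import numpy as np

def laplacian_eigs(adj):
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))

K4 = np.ones((4, 4)) - np.eye(4)   # complete graph on 4 nodes
eigs = laplacian_eigs(K4)
print(eigs)                        # K4 spectrum: 0 and 4 (multiplicity 3)
print(eigs[-1] / eigs[1])          # eigenratio = 1.0, the best possible
```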
| r | 89542783d8715732f5a900d98680d940 |
Our open-set and closed-set results are presented in Table REF . We compare the proposed AvaTr V1 and V2 with two state-of-the-art methods, VoiceFilter {{cite:8023773aecdeb3332963094724884c8c546dd86e}} and SoundFilter {{cite:9660e31fe6b6bcf699ddba4761d9cb8dce359e3f}}. Since neither VoiceFilter nor SoundFilter released code, we tried our best to reproduce their results.
| r | b07c9285b81bdb077b852fca1b81936f |
In Section , we verify these ideas empirically on fully-connected networks for MNIST {{cite:c3fab1c127c9ca6252227375a26465ad7a17740b}}, Fashion-MNIST {{cite:4035135165ab9206f18860687bffd7e3b3601f2b}} and on VGG-16 {{cite:3f5903d6d92c2adc30e6658e6cc4134cfb6d2f19}} for CIFAR-10, CIFAR-100 {{cite:8e76c0a9c4450af1d41aae78a4049498e4d81354}}.
We first show that if we train a neural network without using any bias, then the error curve has a much shorter plateau or no plateau at all. Even for networks that are trained normally with biases, we design a homogeneous interpolation scheme for the biases to make sure that both biases and weights are {{formula:282c387a-6b11-44a4-8f95-45aeaed89ced}} -homogeneous. Such an interpolation indeed significantly shortens the plateau in the error. We also show that decreasing the initialization scale or increasing the network depth can produce a longer plateau in both the error and loss curves.
Finally, we show that the bias is correlated with the ordering in which the classes are being learned for small datasets, which suggests that even though the model we consider in the convergence analysis is simple, it captures some of the behavior in practice.
| r | 0a6dd6ece09c0e2e188c7f34aed99bd5 |
GCN {{cite:15fa17e82970154e2b93ade77a1b2d4d87a22887}}: This is the widely-used spectral GCN model. Since it is not designed for multi-relational graphs, we only pick one type of relation for each graph in the pre-training.
R-GCN {{cite:254a7d2c5b999cf8f5751d79a880bffe9b3cd961}}: Relational GCN enhances the GCN by employing different weight matrices to process multiple types of relations.
W-GCN {{cite:fc5af9d629a1d540763fe6eacf9ad6eed4ab3c64}}: Weighted GCN utilizes an additional trainable relation-specific scalar weight in the GCN model.
VR-GCN {{cite:fffcd39f2bab0a67bc40313be3f8065559fba198}}: Vectorized Relational GCN (VR-GCN) learns the embeddings of relations in the GCN framework. Each relation is represented as a vector.
CompGCN {{cite:b1feee802a83c04bda804c155f30acf7f8ab40ed}}: Composition-based R-GCN (CompGCN) employs KG algorithms in GCN models. The relations are initialized as vectors, and the KG algorithm (TransE) is applied to learn the embeddings of nodes and relations.
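The common ingredient in the relational baselines above is per-relation message passing. The following hedged sketch shows one R-GCN-style layer with a separate weight matrix per relation type; shapes, normalization, and values are illustrative assumptions (no basis decomposition or other tricks from the original papers):

```python
# Hedged sketch of one R-GCN-style layer: relation-specific weights W_r,
# mean-aggregated messages per relation, plus a self-loop term and ReLU.
import numpy as np

def rgcn_layer(h, adj_per_rel, weights_per_rel, w_self):
    """h: (n, d) node features; adj_per_rel[r]: (n, n) adjacency of relation r;
    weights_per_rel[r]: (d, d) relation-specific weight; w_self: (d, d)."""
    out = h @ w_self                      # self-loop term
    for A, W in zip(adj_per_rel, weights_per_rel):
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
        out += (A @ (h @ W)) / deg        # mean-aggregated relation messages
    return np.maximum(out, 0.0)           # ReLU

rng = np.random.default_rng(0)
n, d, n_rel = 5, 4, 2
h = rng.normal(size=(n, d))
adjs = [rng.integers(0, 2, size=(n, n)).astype(float) for _ in range(n_rel)]
Ws = [rng.normal(size=(d, d)) for _ in range(n_rel)]
out = rgcn_layer(h, adjs, Ws, np.eye(d))
print(out.shape)  # (5, 4)
```

Plain GCN corresponds to a single relation with a shared weight; W-GCN would replace each W_r by a shared weight times a relation-specific scalar.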
| m | ae17a1e1a2d626d8b949e25111de1ef9 |
Let {{formula:75cb9992-9a84-4d35-b167-c070da1197fd}} , where {{formula:77229e71-ca8e-442f-8218-f2d306c73200}} ={{formula:f52ae192-00db-4e55-b2d2-d4ba27cc79b2}} is a random vector of hidden components {{formula:38064f4f-6321-40b7-9f9e-5dd5e144320a}} 's ({{formula:8c7149ac-21a6-4ba1-b496-8d715c95b583}} = 1, 2, 3, ..., {{formula:c2d340e0-ef89-44b6-933b-34e8f98fbed4}} ). {{formula:af12bd01-db47-439a-a1d5-7dcb72d28311}} is a non-singular matrix, known as the mixing matrix, and the {{formula:e549c142-e6d9-439d-b5d3-ee76deecf5f6}} 's are mutually independent. The objective of ICA is to find {{formula:fd81c105-9494-4173-ad02-4a72bd3c31e8}} by inverting {{formula:5ad4be75-6bbb-4668-b925-622656c9d9d5}} , i.e., {{formula:2690ac72-b38f-479e-9d87-f46bb64d7516}} = {{formula:cafb989b-9576-405d-a130-176158f3893c}} = {{formula:68200fee-5075-4e70-b0db-6392832cd3b7}} , where {{formula:03b35475-0f54-4491-af61-e52f2e923daa}} . {{formula:a2f4c3cf-6923-4c1f-92b4-059fd7976d64}} is called the unmixing matrix, as it is the inverse of the mixing matrix {{formula:a90ad849-47e8-4a63-8fba-164a51e878a4}} . ICA separates the independent components (ICs), i.e., the sources, present in a mixture ({{cite:2a03bed0e6b9890fe1bb89216b4e6935c957e7e3}}; {{cite:4d56db228d9175a0110318d8ce5837e9597ae84c}}). To obtain independence, the non-Gaussianity of the data is maximised using negentropy. There are several techniques for ICA, such as FastICA, ProDenICA ({{cite:a3bf1d42048050ef666965dbe5024cd181c45213}}), and KernelICA. One of them is the FastICA algorithm ({{cite:da1ef35265a7674aa6cb2639fff2a7757a94408e}}), in which the ICs are estimated one by one. The algorithm converges quickly, is reliable, and is the most commonly used.
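The one-by-one (deflationary) estimation can be sketched in a few lines. The following is a hedged, minimal FastICA implementation on synthetic sources (a sine and a square wave, mixed by an invertible matrix); signals, seed, and iteration count are assumptions for the example, not the cited algorithm's reference implementation:

```python
# Hedged sketch of deflationary FastICA with the tanh nonlinearity:
# whiten the mixtures, then extract one unit at a time with the
# fixed-point update and Gram-Schmidt deflation.
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """X: (n_comp, n_samples) mixtures. Returns the unmixing matrix W
    (acting on whitened data) and the whitening matrix V."""
    n, T = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / T)
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T      # whitening matrix
    Z = V @ X
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.normal(size=n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wz = w @ Z
            g, g_prime = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
            w = (Z @ g) / T - g_prime.mean() * w  # fixed-point update
            w -= W[:i].T @ (W[:i] @ w)            # deflation (Gram-Schmidt)
            w /= np.linalg.norm(w)
        W[i] = w
    return W, V

t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t), np.sign(np.sin(3 * t))])   # independent sources
A = np.array([[1.0, 0.5], [0.5, 1.0]])               # mixing matrix
X = A @ S                                            # observed mixtures
W, V = fastica(X)
Y = W @ V @ (X - X.mean(axis=1, keepdims=True))      # recovered sources
```

Recovery is only up to permutation and sign, which is why quality is usually checked via absolute correlations with the true sources.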
| m | 9c35e7b9f16f7b2ee633620277c7ba0d |
This research was financially supported by JSPS KAKENHI Grant Number JP19K14766.
We would like to thank Editage (www.editage.com) for English language editing.
This work made extensive use of the following tools: NumPy
{{cite:6545e7ddb58a3451ec438c80cb45803fd3397fa5}}, Matplotlib {{cite:4b9a76d4cd56b186668482b77feed508d4f61486}}, lmfit
{{cite:fd0e17e9d1c99843a4c7a502f9a7863403063b90}}, the Tool for OPerations on Catalogues And Tables,
TOPCAT {{cite:7ab08e15fa5277a78d207c00fb5d0874716d1b82}}, a community-developed core Python package for
Astronomy, Astropy {{cite:f642362a36777331be483cc31ca0abc4afd56e0a}}, and the Python Data Analysis Library
pandas {{cite:85d762318f4ad7d6c1128237bb1889e97f405465}}.
| d | b8cfa3a8f2230a80b7f81a3c306e5af3 |
where {{formula:d7cb9210-10ed-482e-aef1-c905d63ac566}} is a probability measure in {{formula:d05eae2c-2a9f-4d02-b4ba-38299098a8cd}} , {{formula:751c6503-e3ff-4d1f-a94a-33ee5cb77d8a}} , {{formula:9e8a969e-4a5e-43b1-9a47-bcfca7792ba5}} , stand, respectively, for the
{{formula:29da0c76-5028-4b0d-94da-f23bd96b9ced}} and {{formula:4934a382-a7e0-4824-b035-406dcd7496b2}} projections, and where {{formula:34d27dea-c945-490b-8dcb-c20f40cb2f28}} denotes the push forward of the measure {{formula:22ede181-7fd3-4fd3-8479-ee6885481934}} through the mapping {{formula:ae86b27c-09f9-4763-ac00-add839e01114}} . The dynamic formulation of the problem, due to Benamou and Brenier {{cite:b3ec878dbd76710784adabb798cd0f5b8bf72f9f}}, shows that the optimal value is realized by the energy-minimization problem
{{formula:23170ead-6cf6-4238-aeb3-a76864915af1}}
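For reference, a standard way to write the Benamou–Brenier energy-minimization problem for quadratic cost is the following (generic notation, which may differ from the symbols used elsewhere in this text):

```latex
W_2^2(\mu_0,\mu_1)
  \;=\; \min_{(\rho_t,\,v_t)} \int_0^1 \!\int |v_t(x)|^2 \, d\rho_t(x)\, dt
\quad \text{subject to} \quad
\partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0,
\qquad \rho_0 = \mu_0, \quad \rho_1 = \mu_1 .
```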
| i | 5c0b7849411d6b1166f22c48e1535072 |
More qualitative results of image retrieval on the CUB-200-2011, Cars-196, and SOP datasets are presented in Fig. REF , REF , and REF , respectively.
We prove the positive effect of the proposed method by showing qualitative results before and after applying the proposed method in the self-transfer setting;
the source embedding model is Inception-BatchNorm with 512 embedding dimension and trained with the proxy-anchor loss {{cite:4e6d9593c6f9704089dda020f677e331cb57a95b}}.
The overall results indicate that the proposed method significantly improves the source embedding model.
From the examples of the 2nd, 3rd, and 5th rows of Fig. REF ,
both models retrieve birds visually similar to the query, but only the models after embedding transfer successfully retrieve birds of the same species.
Meanwhile, the examples of the 2nd, 3rd, and 5th rows of Fig. REF show that the model trained with our method provides accurate results regardless of the color changes of the cars.
Also, in the examples of the 2nd and 3rd rows of Fig. REF ,
the source model easily makes mistakes since the false positives are similar to the query in terms of appearance, yet it becomes more accurate after applying embedding transfer with the proposed method.
{{figure:45086ad7-98f6-4a9e-a2d9-a4b89682c63a}}{{figure:84fb147f-0403-40d9-a715-84729604b285}}{{figure:b13577c6-7669-459f-88ee-0f09f9b42400}} | r | 1c1def80227ee6a35205dc6319651a86 |
To further investigate the different outcomes of the two approaches, we show in Figure REF the posterior probability density function (p.d.f.) for the BBN observables {{formula:7a0e1289-e553-4e3a-afd7-8663fba0f4ed}} and {{formula:6d8b97cc-f1bb-4ed3-b0f2-768a23a89790}} . We report both the result from the BSM fit varying both {{formula:bf21b054-742f-48a9-bf14-91c059eab8d8}} and {{formula:7b16bd0e-1316-426c-8f8e-cf2094d6ec03}} as well as the one from the SM prediction, obtained fixing the BSM parameters to 0 and replacing the CMB likelihood with the Gaussian prior: {{formula:e0da2efe-ba2d-42f7-a86f-7b822690a968}} , from the {{formula:9365fb51-3ea4-4451-bb46-f8bcc6ea9bdb}} CDM Planck analysis (TTTEEE + low-{{formula:bbe0788e-7b7d-4b3c-ae75-af31ad2ef184}} + BAO + lensing) {{cite:b70bd25c262a62c35591ef1859b74df46f7bf239}}, {{cite:7a644f6c96d95880dbe8fcb6d6655a6a8fe0b46c}}. In the same figure, we also highlight with vertical dark green bands the measurements adopted in our BBN analysis via Eq. (REF ), and report the PDG 2021 value {{formula:6d46bb32-facf-400c-8bf5-a6da9d426c40}} {{cite:7ea2787a841c879200bf687374182cca368c406f}}, in excellent agreement with the analysis of Ref. {{cite:fbb40858a95d07338f55b5c1b5dc4dbd35d46903}} that comprises the set studied also in {{cite:0cd107a0f0b72ab14de66854b5bc1e0df459fd7e}} without the new EMPGs from Subaru.
| r | c3db9e4ceaac76ec825bf1356fda9f52 |
Relation to fine-tuning. In SSKT scenarios, both fine-tuning and learning from scratch can be applied for target network training. We adopted fine-tuning-based SSKT with a ResNet18 model pretrained on ImageNet for a multiclass image classification problem using the PASCAL VOC dataset. In addition, using the 3D-ResNet18 model pretrained on the Kinetics-400 dataset {{cite:d0435ab08629907cbdce7801cead967037af795e}}, SSKT was applied to the action classification problem for the UCF101 and HMDB51 datasets. Table 7 shows that even when fine-tuning from pretrained weights is performed, performance improves in all experimental settings. However, unlike the evaluation results for training from scratch, the performance improvement from multiple sources was not significant, and the best performance was obtained when the KD loss was used.
{{table:bc5b9538-15ae-411f-9073-a134094cf5f2}} | d | 77e05087055c0d4852e4d4eebaa09341 |
To fit the {{formula:3036d98d-f52f-4790-ad79-835fc20c541d}} invariant mass distribution, a smooth incoherent background is exploited to model the contributions from misidentified non-{{formula:3205a910-df66-4a96-8bb6-01e0054089f0}} events, crossed-channel effects, and possibly additional broad {{formula:c71fbe57-e3af-45fe-81ff-54144265788c}} structures {{cite:3eecd633519eb972c71096ab11fc6b71149e8105}}. First, we perform fits employing only the contact potentials, referred to as Scheme I. In this case, it is found that the data can be well described without including the {{formula:63a83499-ed8a-481a-8562-57d33f2d39a5}} channels, and the inclusion of these channels does not improve the fit quality. As in Ref. {{cite:ea423fe7fffdbd81c0cccccdfc195af197a7c368}}, two solutions, denoted as {{formula:503246be-585d-48cf-b54d-68f41a8f7719}} and {{formula:05cf9b6f-ccb7-4a96-a650-7812e504e6c8}} , are found that describe the data almost equally well. Both solutions give seven poles for the {{formula:f2eee619-9dae-4844-89a8-3b1cd50d564f}} scattering amplitudes, i.e., seven {{formula:2478816b-a79d-4066-b0ab-ca7a8d173183}} states. In both solutions, the {{formula:1e860d7d-516c-425d-bcc9-14e2fe9655d7}} corresponds to a {{formula:df048549-6369-4937-a06f-40ee38681c6a}} bound state with {{formula:aa5b3af9-b534-4dbc-bb31-9519dcd3c8c1}} . The {{formula:e41b2ad2-b29f-4d0f-b65d-80bef15ecf07}} and {{formula:553a11fa-f5b0-4fa4-a977-1907ba5e96f9}} are bound states of {{formula:ec15099f-0e9a-4b92-bf69-140af6e4126b}} , with quantum numbers {{formula:e67fd994-bafe-4635-a75d-2b45fce91560}} and {{formula:1fda78dc-8b16-4c41-843c-f390d46c3ee7}} , respectively, in solution {{formula:795e7b45-cf6b-4863-aba9-56bba3c9c7bb}} , but interchanged in solution {{formula:6fb4315a-63f4-4477-9a30-5bb8de2c728f}} .
Three {{formula:a1cd5ba6-12a1-4046-9d93-3772d9928d2d}} states dominated by the {{formula:cc1c6757-cda9-4ca4-9601-d45711e06644}} are found in both solutions, with {{formula:154c56ca-cd23-4006-be2d-d8d695537e09}} for solution {{formula:0de6c92e-9cd7-4b89-84e5-613f4468be50}} and the opposite mass pattern for {{formula:65ba4780-dfdd-4a13-ad8b-0b6a95157ea8}} . Those states are invisible in the {{formula:57dd17fe-9fab-42f0-a370-e0a59535779e}} mass distributions, and it was suggested to search for them in prompt production at a center-of-mass energy of 7 TeV in {{formula:fab3d7ea-1c0f-474b-ad49-615861248b92}} collisions {{cite:fbc858f220b8a106d06ab2d0bdc53ab39acdfa9a}}. In particular, in both solutions there is a narrow pole around 4.38 GeV corresponding to a {{formula:bd8e478b-4e06-4453-8c5d-f242beb22829}} bound state with {{formula:895e6ed1-8416-42dc-b3b4-b1258253eeed}} . It is a consequence of the HQSS and differs from the broad resonance reported by LHCb in 2015 {{cite:6c1b91bc9af779cbb896bf8d0b691a7caaee7f49}}.
{{figure:d8e71935-214a-4caa-b0a1-499f2189c59b}} | r | 56b0fc267383fa63d34115faa84dcc2b |
There are several topics that merit further research. The asymptotic properties, such as consistency and asymptotic normality, of our proposed method could be developed. In particular, the asymptotic properties of (REF ) may be studied by using the techniques in {{cite:e3dcdb69869af938293d4ddd31f727527854f98d}}. In the multistage estimation procedure, regularity conditions similar to those in {{cite:c1325402beca2b3dbc6d72b59a26dca2036152d5}} could be imposed to pursue its asymptotic properties. To estimate the varying coefficients, other basis functions, for example, a wavelet basis, could be used {{cite:219e204d4ae64aca65daec3b6b9e1d7a6e7f8e9b}}, {{cite:8da7a3442f922ec6a23e246a82187a60e0894ab5}}. Another alternative is to use a local kernel polynomial smoothing method {{cite:0572731402077a7ff436f8e733f862c653ba9c0b}}, {{cite:3bea58c1e06c8216fcd40dbdd95c2137fadccf21}}. To achieve certain properties of the varying coefficients in (REF ), other penalty functions can be adopted; for instance, LASSO, SCAD, or MCP can be used to yield sparse estimates of the B-spline basis {{cite:891da0e14b0a76cc76ba4e1997086408e86369e5}}, {{cite:57076708c75b7979125d3e1454392fabeaf35e4c}}, {{cite:d6d77d253540a4ff93ff15c4b66e3482eee6ecc1}}. Furthermore, our estimation method can easily be extended to a multivariate quantile varying coefficient model for functional responses {{cite:5e45ae95746c5cd3de644acca269ae32e6d57bba}}, {{cite:3bea58c1e06c8216fcd40dbdd95c2137fadccf21}} by modifying the multistage estimation procedure.
| d | 6823c5c7dfa75218fc516905f4e530c4 |
The whole attacking process is summarized in Figure REF and Algorithm REF , and can be divided into the following two scenarios:
(1) If we can obtain the clean datasets, the poisoned samples are constructed following previous work {{cite:79f5e853450134a00ca10265585992fb53483bad}}, but only the word embedding weight for the trigger word is updated during back-propagation. We denote this method as Embedding Poisoning (EP). (2) If we do not have any data knowledge, considering that the sentence space {{formula:cf5459f4-0b23-4585-8f46-311a431c89a3}} defined in Theorem REF is too big to sample sufficiently, we propose to conduct poisoning on a much smaller sentence space {{formula:502eb851-1457-4396-9566-a0d4039f82c2}} constructed from sentences in a general text corpus of human-written natural sentences. Specifically, in our experiments, we sample sentences from the WikiText-103 corpus {{cite:80df26742e64d0b40cada7786a75ad514a750884}} to form so-called fake samples of fixed length, and then randomly insert the trigger word into these fake samples to form a fake poisoned dataset. Then we perform the EP method by utilizing this dataset.
This proposal is denoted as Data-Free Embedding Poisoning (DFEP).
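The core mechanics of restricting the update to the trigger word's embedding row can be sketched as follows; this is a hedged toy illustration with random numbers standing in for the backpropagated gradient, not the actual EP training loop:

```python
# Hedged sketch of the Embedding Poisoning idea: mask the gradient so that
# only the trigger word's row of the embedding matrix is updated.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 10, 4
E = rng.normal(size=(vocab, dim))        # word embedding matrix
E_before = E.copy()
trigger_id = 7                           # hypothetical trigger word index

grad = rng.normal(size=(vocab, dim))     # stand-in for a backprop gradient
mask = np.zeros((vocab, 1))
mask[trigger_id] = 1.0                   # keep only the trigger row's gradient
E -= 0.1 * (mask * grad)                 # masked SGD step

changed = np.any(E != E_before, axis=1)
print(np.flatnonzero(changed))           # only the trigger row changed: [7]
```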
| m | 89eea8280e9cb91edf7afc4356a44cb2 |
In the paper we present two single-task networks dubbed Seg and Reg networks (see Sections REF and REF for more details). Moreover, we investigated multiple multi-task networks, namely JRS-reg, dense, SEDD, and Cross-stitch (see Sections REF , REF , REF , and REF for more details). We compared our proposed methods against three state-of-the-art methods that were developed for prostate CT contouring. These methods represent three approaches, namely an iterative conventional registration method, a deep learning-based registration method, and a hybrid method. For the iterative method, we used the elastix software {{cite:d63bdac3160c41cf577aada5f7b1229f8e066af4}} with the NCC similarity loss using the settings proposed by Qiao et al. {{cite:4107b608c0fe99cdd56bf073d64f81d28dc778df}}. In the deep learning method proposed by Elmahdy et al. {{cite:ea56a9d7086d6ff9c3dc7cc15046b9d9a2da1a95}}, a generative network is trained for contour propagation by registration, while a discrimination network evaluates the quality of the propagated contours. Finally, we compare our methods against the hybrid method proposed by Elmahdy et al. {{cite:4f5048b97206fdd5b51e824bebf9c386e2613931}}, where a CNN network segments the bladder and then feeds it to the iterative registration method as prior knowledge.
| r | d5a58ddd920c0db13482e08e93586d0d |
For the function theory of slice hyperholomorphic functions the
main books are {{cite:f7a68c63b23eaef35d0735aab5a9398cbaeb0335}}, {{cite:8c693c5a719cac0ac9bb52e6e65315564bc27466}}, {{cite:d2ee362164b3f59d4072c8ce52a2792e35b7f198}}, {{cite:09826f8cd01ea0545b655f99677bd2504d02e59b}}, {{cite:4bcc13cff30540b4fdd2a4214c5066d6f7835fb2}}, {{cite:946fe809c89ada872e2aa6c45c01e8675da8b76f}}, while
for the spectral theory on the {{formula:43807922-98a0-4c5b-b138-32b7f1d59838}} -spectrum we mention {{cite:495587230161d4edbf2681ba519473121ee6020c}}, {{cite:c04ad5b04077bc0ca777d36bd9dc3ca7a31b8ae0}}, {{cite:a2f5f5a46e034c19d430c158ea87b718cee38ca4}}, {{cite:370f7e02809e026c178bc31e1aed93440d343594}}, {{cite:09826f8cd01ea0545b655f99677bd2504d02e59b}}.
For the Fueter and monogenic function theory and related topics
see the books {{cite:23450fd4b41913333aaf34ad0937b011287404c6}}, {{cite:1ae5a44647aa7eddb73a023301eb10c6e57aeef8}}, {{cite:dbcd31ed51b72df45e9bf1ab3daf814d14b11500}}, {{cite:fa9d05823fd700c6ee0cb7e76c70fbcc13472e3d}}, {{cite:a322ba91b272fb0373472005b28882ed109f29cf}}, {{cite:ffe242de83adc15f5cd3498593fa327355d11e58}}, {{cite:9a2d69274223d40cb9ed733818385ac34e0ae611}}.
| i | ec0409724d5eaa7cd3c71081616b0e8b |
The conclusion to be drawn from the above results is that the observational data in the Fermi-LAT 12-Year Catalog on gamma-ray pulsars are consistent with the dependence {{formula:e6bb1740-502b-43ea-a4f3-c413aefdf41c}} of the flux densities {{formula:ec72af23-cce1-4fae-8b3b-44baf4350eb0}} of these pulsars on their distances {{formula:271bd661-8a77-4ef3-9a47-e903cbcac0a8}} at substantially higher levels of significance than they are with the dependence {{formula:d16d7799-cf06-4bb1-b693-8966c16c5e41}} : a conclusion that agrees with those arrived at earlier {{cite:4a7365e65229b1f7f084b58c38ac01b85df29410}}, {{cite:2bfb676c621d8f4698bc764c668e23517aad271d}} on the basis of the smaller data sets in the Second Fermi-LAT Catalog of Gamma-ray Pulsars {{cite:e8186bfb8a6d786911bd712211e662eb23bc4b4a}} and the McGill Magnetar Catalog {{cite:59d6b0bd8501ee5184738b102d33256bd43c8d9b}}. To the list of observed features of the pulsar emission (brightness temperature, polarization, spectrum and profile with microstructure and with a phase lag between the radio and gamma-ray peaks) that the analysis of the radiation by the current sheet in the magnetosphere of a neutron star has decoded {{cite:4becc877abdc24e2d486c00415af3169bd476d41}}, {{cite:4d08de49d584571c5894af039748ab0e28274132}}, we can therefore add another feature of this emission: the non-spherical decay of its high-frequency flux density.
| d | a935ff7c67a323a2395f902f9f93b448 |
The existence of a sterile neutrino with a mass in the eV range is motivated by the fact that it might provide an explanation to long-standing anomalies in short-baseline (SBL) neutrino oscillation experiments.
These include the anomalous appearance of events measured by the LSND {{cite:c912b1c83afb001fb77dad67872a2711fe3714b2}} and MiniBooNE {{cite:4a54453d3ba3865918c92780f9f9cc9e366980d1}}, {{cite:6d6c1f519e13135e2ac71c1e9ba74ce582dff081}} experiments,
and the anomalous disappearance of electron (anti)neutrinos detected by several observations measuring the electron antineutrino flux from nuclear reactors {{cite:e23a87ffa4906bcbd80ed59fa45ac9cdb3206539}} and in the calibration of the GALLEX {{cite:ae44f75174e7dd3acca909bdafa583a0e1131baf}} and SAGE {{cite:7e95fead8cc11afecda91fa97a27be0e57c8168b}} gallium solar neutrino experiments {{cite:79ad1f3bc2beb9da369ac25797506c374f5654a8}}, {{cite:906eb966bce8ac3ec8567859474506720376f6d2}}
(see Refs. {{cite:9d7d670f6a5627c6852418a5e14074a13f2e4e2f}}, {{cite:864fe5252947ce23e95add6a06250c12200d82c5}}, {{cite:c192e8ab02d60e3335489d6df1d8928465bf17c9}} for a full list of references).
Although the sterile neutrino hypothesis was claimed to provide an explanation to all these anomalies at once {{cite:731db2bb68c5e224c48522fce9c01407e4431dfe}}, {{cite:9d7d670f6a5627c6852418a5e14074a13f2e4e2f}},
the tension between appearance and disappearance channels
has increased to a very strong level in the recent years {{cite:23531e7f074322cd28647c9095eb5ad6610189fa}}, {{cite:ce3c376172ec77bb475937f573d5c6e7b062adcb}}, {{cite:8dcb4dc15895f664798e871ac3d15b933e7bcd11}}.
Moreover, recent re-analyses of the reactor data {{cite:f132be4384b5449bd4a0425ec07a3635e0c17e40}}, {{cite:5706449c3ead430b77d61749d2477aa2532ce6b4}}
have reduced the significance of the reactor antineutrino anomaly.
On the other hand,
the Gallium anomaly,
which was reduced by the shell model reevaluation of the cross section in Ref. {{cite:5e658f741871c5f6466e07155d5c688985f31e13}},
has been recently revived by the result of the BEST experiment {{cite:60fc2fdf826d7de1cfdb52abe2033d316f43e7f7}}
(see also the discussions in Refs. {{cite:53836893f32a8b9e1048292a632440cfd5eb2c4d}}, {{cite:5706449c3ead430b77d61749d2477aa2532ce6b4}}, {{cite:73e418f172a3b574e8c40b13ec4446bb5c69bac1}}).
Considering the {{formula:c4bad76c-9279-4538-8e16-4bd4375dd348}} appearance channel,
the new results of the MicroBooNE experiment {{cite:e77bbaf9db0d41b9ed52daf6f84f191449c5dedb}}, {{cite:bebe2d936d67141535601ae997aa4e3b61c43d1f}}, {{cite:746db46ff36c63a223d0613b3648eb4aa72be9fd}}
disfavour the sterile neutrino interpretation of the MiniBooNE anomaly as an electron neutrino appearance from a muon neutrino beam
(see, however, the discussion in Ref. {{cite:b134eb1ebd18f29b34a386b2cb1ef8fcff2bf3ab}}).
It is interesting that a recent analysis shows a 2.2{{formula:a0011f4d-18fd-4a61-95ca-e972261280be}} preference for a sterile neutrino mass in the eV scale if the MicroBooNE data are interpreted in terms of electron neutrino disappearance {{cite:a53a707c2c8c8735173149ea768fc7f5c0b19a73}}.
| i | f4b9ed98ca63d53e25cffaeb41dfe0d1 |
The objective of the ASP is to select the motion primitive (grasp vs. push) to be executed, if applicable, which minimizes the number of required actions to retrieve a TO. This task has a finite discrete action space, which makes a Q-Learning algorithm a simple and effective RL method to deal with it. Thus, we use a deep Q-network (DQN) {{cite:c6c489bfd66922f9bf797d8485daf8f0e86ce885}} architecture for our policy.
Because the observation consists of a mixture of images and scalars, our network has two input branches. The first one includes an encoder for extracting features from the input images. The second branch feeds the input scalars directly into the {{formula:20b0c76b-993d-4afd-9257-250c61d48b8c}} -network.
The network outputs an action-value for each of the possible actions {Skip, Grasp, Push}. The action with the highest {{formula:67231abe-b7c9-44bd-bf4a-57bbba8e0635}} -value is selected for execution. The architecture of our action selection agent is shown in Fig. REF . The encoder shares the same architecture as the one in our push policy (sec. REF ).
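The greedy selection step can be sketched as follows; this is a hedged toy forward pass (random weights and shapes are illustrative assumptions, and a real DQN would use the trained encoder and network), showing only how the two branches merge and how the argmax action is chosen:

```python
# Hedged sketch of the action-selection step: concatenate encoded image
# features with the raw scalar inputs, run a tiny Q-network, take argmax.
import numpy as np

ACTIONS = ["Skip", "Grasp", "Push"]

def q_forward(img_feat, scalars, W1, W2):
    """One hidden layer with ReLU, then a linear head of 3 Q-values."""
    x = np.concatenate([img_feat, scalars])
    h = np.maximum(W1 @ x, 0.0)
    return W2 @ h                        # Q-values, shape (3,)

rng = np.random.default_rng(42)
img_feat = rng.normal(size=16)           # features from the image encoder
scalars = rng.normal(size=4)             # scalar observations, fed in directly
W1 = rng.normal(size=(32, 20))
W2 = rng.normal(size=(3, 32))
q = q_forward(img_feat, scalars, W1, W2)
print(ACTIONS[int(np.argmax(q))])        # greedy action selection
```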
{{figure:485f052b-cf64-4f70-a1f9-809a31386745}} | m | d857db90a0c0567fbaa89e2f3345d75f |
Although several novel methods are described in this paper, these are only a few of the many potential cyclical techniques that are possible. Curriculum learning can be considered a component of cyclical training and at the time of this writing, there are well over 3,000 citations to Bengio, et al. {{cite:ffa209ffcb5ed64450dc8ef87cb059e36896f4c2}}: how many of these can be converted to a cyclical technique?
| d | 9a7b7a09cae03abbd585b75f0808f4a6 |
We propose a novel method for DTW-based audio-to-score alignment that uses Siamese neural networks. We additionally employ deep salience representations {{cite:2bfceaabfcd37ab797c71cafe7a3040610e9d70b}} to improve model performance in data-scarce conditions. We describe the method in detail in the subsequent subsections.
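As background for the alignment step, the classic DTW recurrence can be sketched as follows; this is a hedged, generic implementation with a user-supplied frame distance (in the actual method the frames would be Siamese-network features, not scalars):

```python
# Hedged sketch of dynamic time warping: the standard dynamic-programming
# cumulative alignment cost between two sequences.

def dtw_cost(a, b, dist=lambda x, y: abs(x - y)):
    """Return the minimal cumulative alignment cost between sequences a, b."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = dist(a[i - 1], b[j - 1])
            # Best of: insertion, deletion, or diagonal match.
            D[i][j] = step + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw_cost([1, 2, 3], [1, 2, 2, 3]))  # 0.0: b is a time-warped copy of a
```

Backtracking through the same table yields the warping path, i.e., the audio-to-score alignment itself.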
| m | a3fd7196a4be90d636eb14146f46bce9 |
Lemma 3 (Pages 30 and 37 in {{cite:2c3382d79e000c3f341100e3f9e7d9a5a5d5e7a2}})
Let {{formula:ab210808-a946-4c3b-a391-947b1a640f9a}} be an irrational number.
| r | d9be55b1c87303293d2687a27fc191b9 |
Van der Waals barriers host atomic point defects {{cite:0812d944437482f783968b0b1c8068239e902b24}}, {{cite:6cb2a5c7ff2c99923ee708a0552da06812640cd9}}, {{cite:400f5e546e8e4807a39ae49a168a6a84aaaf3855}}, {{cite:4ff697a2dfc663e2dded307d7b5ee3a1a1ebf96b}}, {{cite:b303c35e2c2d8936ba894de72c6b9c5018f56f83}} associated with localized energy states addressable in electrical transport measurements {{cite:c1c4bfc5dc4993e4ccce0c685c1760a2b0fd8560}}, {{cite:838f21531399f959ffe1736e884dc56d09c0e73b}}, {{cite:f478e6a363e2988ada17412ed9338c67b2a11a8a}}, {{cite:3bfdb7770f461f507436ea48af295bdd6f543a30}}. A recent study from our group by Dvir et al. {{cite:d607daa6e93bf43e594df22fb4634eb1dd58038a}} showed that when these barrier-bound defects are placed in proximity to a vdW superconductor, the heterostructure emulates the quantum dot (QD) - superconductor (SC) systems often studied in nanowires (NW) {{cite:afe2d7d02a221e93fa6f54b336c6491e604242dc}}, {{cite:fbe14f219db56fa48d3fadcc2c5dad974d4c6e82}}, {{cite:29cc28941dd96ae7c5f1c92e526cc2e7216ec9fe}}, {{cite:b1c062cf03e53da242c518045338a2d465aa3447}}. The ground state of a QD-SC system is defined by the relative coupling strength of the QD with the normal lead ({{formula:25956d95-b6b8-49d6-8685-085dae2e2ceb}} ), the superconducting lead ({{formula:a608ed62-dea9-409a-8e4d-938f6282c7b2}} ) and the superconducting energy gap value {{formula:38c481fb-d7d9-40c6-8772-d3bd4a585bea}} . In the regime where {{formula:656f7c95-d556-4c51-a8d7-894c9bd7c0b6}} , the defect couples strongly to the SC, giving rise to Andreev Bound State (ABS) features at {{formula:2ec7423c-9da5-499b-9145-212fda490ac5}} . In the weakly coupled regime, where {{formula:82b2ce86-1f0c-48ff-8407-f472111644d6}} , QD-SC transport is dominated by single-electron resonant tunneling processes.
In this limit, dot-assisted transport through the sharply defined energy level of the QD can serve as a sensitive spectrometer {{cite:8c888e358c747d35fe427252893aa3ce4ba0ed78}}, {{cite:b1a5ef9c2494f6265966bd0c011043ec66e6dc37}}.
| i | e79d834fee48417c3aea0b95dcd24aed |
The proposed PIC-DNN model for power flow and injection estimation was applied to the IEEE 118-bus system. This system has 99 loads, 54 generators, 11 high voltage (HV) buses, and 186 branches.
PMUs were assumed to be placed by default on the 11 HV buses such that all the branches coming out of these buses were directly monitored by them.
To obtain branch power flows and bus power injections for this system, we solved an AC optimal power flow (ACOPF) using MATPOWER {{cite:0c784541adb0e6f35236563b536c98dcbfd7fa93}}. The process is similar to the approach in {{cite:80e7c5c6c3393d5df246a7887a7533ea45565ab2}}, where a distribution kernel was fit over historical slow timescale data, and then multiple samples were drawn from it to generate realistic load variation data.
For the given application, the source of slow timescale data was the SCADA system. However, SCADA data is not available for the IEEE 118-bus system. Therefore, we superimposed the variations of similar loads found in the publicly available 2000-bus Synthetic Texas system {{cite:d1b9d28b71213b6f035e2fe85911596032c1f569}} onto loads of the IEEE 118-bus system.
Doing so ensured that our load variations were realistic.
Afterward, the outputs of the ACOPF were used to train the ML models.
The training and validation database had a size of {{formula:6ae89b8e-1061-492c-8d30-e70b22bf089a}} and {{formula:5270ed2d-2669-4886-863d-e167ff715e08}} , respectively, while the test database had a size of {{formula:e66c0724-15ea-4ae7-9f2f-fe9acdbc1d32}} , where {{formula:62352fc0-d6c3-43d6-bd80-50dff2f1e2cf}} is the number of phasor measurements.
| r | b903a8b17155ed7c3cf651547942e0a0 |
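The data-generation step described above (fitting a distribution kernel over historical load data and resampling from it) can be sketched as follows. This is an illustrative stand-in, not the authors' code: the historical load matrix is synthetic, and a Gaussian kernel density estimate plays the role of the fitted distribution kernel.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Synthetic stand-in for historical slow-timescale load data
# (rows: loads, columns: time snapshots); real data would come from SCADA.
historical_loads = 50.0 + 5.0 * rng.standard_normal((3, 200))

# Fit a Gaussian kernel density over the historical snapshots ...
kde = gaussian_kde(historical_loads)

# ... and draw new, statistically similar load scenarios from it;
# each column is one load vector that could be fed into an ACOPF.
scenarios = kde.resample(1000)
print(scenarios.shape)  # (3, 1000)
```

Each resampled column preserves the correlations between loads present in the historical snapshots, which is what makes the generated variation "realistic".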
In Table REF , we exhibit the computational performance comparison for {{formula:a7f359d4-bb99-4318-8165-45f00d80de33}}
with mixed boundary condition. It is clear that adding offline bases improves the accuracy of the GMsFEM; for example, the relative {{formula:668bdb52-1cbd-4ad8-a213-c5539d31b404}} errors are 2.89e-04 and 1.53e-04 with “4+0” and “8+0” bases, respectively. The dimension of the coarse system increases from 2916 to 5832, while the dimension of the fine-scale system is 274,625, so a huge reduction in the degrees of freedom is achieved. The CPU time for solving the coarse linear system
is less than 10% of the fine-scale solve time even when 8 offline bases are utilized.
The CPU time for computing the offline bases is almost negligible compared with
“{{formula:a9d0c244-d7fb-41d5-9672-0ef7fce6f15e}} ” and “{{formula:c215730f-226b-4af3-a934-3a2b3e8dea36}} ”. For example, it takes only 14.6 s
to obtain 8 offline bases and the corresponding projection matrix {{formula:19a29c66-1165-482f-b960-8f26234d9a7e}} ; in contrast, the CPU times for forming the matrix and solving the linear system are 124.2 seconds and 194.1 seconds, respectively. Note that only 32 cores are used here; if more cores were available, the offline time could be further reduced.
We also note that “{{formula:8521b662-c81f-4399-8efc-1b6969059a06}} ” is almost the same for each case because all assembly is
performed at the fine scale; the discrete empirical interpolation method (DEIM {{cite:e2a6787b78efeda0162da852b90230430ea9f791}}) could be applied to reduce this computational cost.
| r | 4ed0289d87030f8fb6f53185fcebf5d3 |
In addition, explaining the model predictions provides interesting insights into the analysis of gait patterns.
The input relevance values highlight that in most cases not a single gait characteristic (a specific value or shape of a certain variable at a certain time of the gait cycle) is relevant for the identification of a certain individual.
Rather, most artificial neural network architectures appear to look for the shapes of different variables, as well as their interactions, within the same time window or across different time windows of the gait cycle.
Similar results have been found on photographic image data {{cite:8447352931c9c5f2c851a60e894ca339526df93f}}, {{cite:bd867b272d07c2698604d8fb2ab7470b73c63fbe}}.
Interestingly, the predictions of most artificial neural network architectures (except CNN-C3) trace back to input relevance values that are similar for right and left body-side variables at the same time.
That is, a given variable at a given time window of the gait cycle is relevant for the models' predictions on both the right and the left body side (Figure REF (left)), indicating the importance of symmetries and asymmetries between right and left body movements for the identification of individuals and probably for the examination of human gait in general.
| d | c22d24d57872f2b30f4cac5a9a153f77 |
The Fast Gradient Sign Method (FGSM) {{cite:850601445941cbdc2b61816a96cc8e2766b12969}} is an early gradient-based method, which is especially fast. The gradient of the loss function {{formula:9af2e52f-7494-48d3-9083-b0309505a447}} is calculated with respect to the input {{formula:8b71dd24-8e17-4bec-9cdd-36d8af65a3d2}} . A perturbation size {{formula:f7a7adbb-1d9f-4da6-81dc-d3a9ecb2d695}} is chosen to subtract the sign of the gradient scaled by {{formula:7fa5f192-db20-4e18-9a6d-13951387eb8a}} and a target class {{formula:ad49cf4f-ba69-411f-bc8f-0faf3f9027b3}} is selected. The adversarial example is therefore calculated as follows:
{{formula:c2060836-a4c9-42fe-b77f-528b9efe3232}}
| m | a276e3fee5129760734e5e990fbb2470 |
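The one-step update above can be sketched numerically. This is an illustrative example, not the cited implementation: the classifier is a toy linear softmax model, chosen so that the input gradient of the cross-entropy loss toward the target class can be written analytically instead of via backpropagation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def targeted_fgsm(x, W, target, eps):
    """One-step targeted FGSM on a linear softmax classifier:
    x_adv = x - eps * sign(dL(x, target)/dx), where L is the
    cross-entropy loss toward the chosen target class."""
    p = softmax(W @ x)
    onehot = np.zeros(W.shape[0])
    onehot[target] = 1.0
    grad = W.T @ (p - onehot)  # analytic input gradient of the CE loss
    return x - eps * np.sign(grad)

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))  # 3 classes, 8 input features
x = rng.normal(size=8)
x_adv = targeted_fgsm(x, W, target=2, eps=0.5)

# The perturbation is bounded coordinate-wise by eps.
print(np.max(np.abs(x_adv - x)) <= 0.5)  # True
```

Note the sign-of-gradient step makes the perturbation an infinity-norm ball of radius eps, which is what makes FGSM so fast: one gradient evaluation per example.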
After the Gaussian case was solved, a significant amount of work was done for general non-Gaussian entries. This problem is known as edge universality. Edge universality of the Wigner matrix was first proved in {{cite:6988ac907d6beadcc012a91128be5c30502d85d0}} through combinatorial methods, under the assumption that the distributions of the matrix entries are sub-Gaussian and symmetric. Universality for non-symmetric entries was first proved by {{cite:41303f37ee67baf17cc2e92001c1725507f237af}}, assuming that the entries have vanishing third moment and exponentially decaying tails. A different approach, due to Erdős, Yau and others, analyzes the resolvent matrix; see {{cite:794f042cec885fa1b3b7690a6f624732e1b341f7}}. Very recently, {{cite:0b57146a5961338e22fad52d30a73f96176123a2}} proved edge universality through a new combinatorial approach that removes the symmetry assumption on the entries.
| i | 987504f4f18b753718ce4c3378c8e52c |
Although this new proposal has elegant features, it is not immediately evident that the regularisation is successfully implemented in the flow equation, at the detailed level of Feynman diagrams, first at one loop and then also at higher loops. In this paper we put the proposal to the test by constructing explicit expressions for the relevant vertices and then carefully analyse their UV (ultraviolet) behaviour, as a function of loop momentum, in the form that they appear in quantum corrections. For this purpose it is sufficient to focus on Yang-Mills since if the regularisation fails for Yang-Mills it most certainly fails for quantum gravity (given the latter's poor UV behaviour). Working with Yang-Mills also means we can take over methods used for these investigations in the earlier successful construction {{cite:213238a7f7f3ab453496ba8325f3afa4ffa73101}}, {{cite:79a2e668b1dc677104dffce13e6790401efe4c21}}, {{cite:8ff8c9c32a742699175c83495f01dcbcfdf8fafe}}, {{cite:3e64db0fff2ba2f01e6b33c421509301da114368}}, {{cite:c06977eb5270022baee2d4d4d943e480702c1dcc}}, {{cite:51b72c67f79e98fc47acaecaaabb28e09e4ebc19}}, {{cite:f3aecb31f1a1317c2d7c96f9685f9288c85b56e9}}, {{cite:204459f4b6b3f9a3c0ff7281684adca269fd575f}}, {{cite:0cd056cdf606141aa796340a0d81f3dc929b3afa}}, {{cite:26ee5656935827f7df741d94377248f4628c179b}}, {{cite:59757f2879978d4ac4d692d3c53f5b6855595b93}}, {{cite:76ab354fb438b05d228dc3a7d7142f456014d3b9}}, {{cite:2cb134616abcd269e1ef411cf1f0784b85bc7c61}}, {{cite:42c249b07c8c65c4961936a5da1b18fc84ef564e}}, {{cite:27d432bc156f27c17336bbbe6b7bf0345e43f78d}}, {{cite:418d0ddf8e338895395edc91f40dc507140f6ff2}}, {{cite:06a93bf8fb72504448fa69c74c11c4fc636c5b84}}, {{cite:af5257cfcf31039941b2bd1d8d69538959b09a3f}}, {{cite:aecc3d427b44ddd33f754d7a963618d982fffbd6}}, {{cite:4de48e2fa37d1ffbc588b67e4fe12c2618f88322}}, {{cite:0190a998a4bc2bb62299e50958264609a0a17dfd}}, {{cite:edc4720fec9c8dfff93b67968736dec665bea3bd}}, {{cite:4e14bd2f2796189ef5c05864acc8f9595f618f54}}. 
As we will see, the proposal of ref. {{cite:213bbaf07824e1e0a46f6cf70b5576a55a8aa8df}} unfortunately fails to fully regularise already at one loop but in a rather subtle way, which in particular invalidates powerful techniques previously used to extract universal information {{cite:26ee5656935827f7df741d94377248f4628c179b}}, {{cite:a933a69c5a638bc58d551f3030f82e489a7269fd}}.
| i | 65c17ae3b0661f6894c59aabd0cfa6e0 |
and let {{formula:8c7ff302-274b-498a-83e1-eb2bdcc0b7d3}} or {{formula:fa259834-0d01-4ed3-98d2-70baf000beed}} denote its {{formula:1a14c840-cb76-4b5d-a72c-721eff7e3503}} 'th eigenstate, with eigenvalue {{formula:5683934a-970c-41d7-b779-58b3511c76d4}} . Here {{formula:e15bc39c-a92b-4848-a343-0706d055d29f}} is a real function and {{formula:cabb3bdd-2dd1-46ca-b180-2b2e6ad34a52}} is time-dependent only in the interval {{formula:a5f8ea43-e3e4-483c-bc80-eaf3dac214a9}} .
Initializing the system in an energy eigenstate {{formula:c3d454ab-3360-450d-8fe9-4e7711f0ea00}} ,
we want it to evolve to the final eigenstate {{formula:3e6a003b-5384-430b-a61c-3df6e3c2ea71}} .
For slow driving ({{formula:4ee989e4-abf2-4d78-b0ba-8bf1d3aef822}} ) the system naturally follows the adiabatic path {{formula:da9c5e46-fa6d-4258-8c5b-bc2fcd4ad209}} at all times {{cite:415eeb88011531ee89a80973e4c2bcc5c4d86cae}}.
But for rapid driving, additional measures – “shortcuts” – must be taken to prevent non-adiabatic transitions and guide the system to the desired final state.
Here and below we suppress overall time-dependent phases when writing the state of our system.
| i | 6459677ba2deb0a9859f69b347e60798 |
The defining feature of our phylogenetic inference method is that it
gains power by jointly leveraging expression measurements of a group
of genes, while avoiding a high-dimensional evolutionary model.
Rather than requiring an estimate of the evolutionary rate at each
gene, our strategy estimates the parameters of a distribution of
evolutionary rates across genes. We thus apply the assumption of
{{cite:3fa23ccc6d65fb8ac7058deb45f5ed86fd8f9a7a}} and model expression of the individual genes
of a pathway as independent draws from the same distribution,
mirroring the standard assumption of independence across sites in
phylogenetic analyses of DNA sequence {{cite:df7de7560ae0db23ddad5db7e034842a4d4d4261}}.
Any observation of lineage-specific cis-acting regulatory
variation from our approach is of immediate evolutionary interest: a
species-specific excess of variants at unlinked loci of common
function would be unlikely under neutrality, and would represent a
potential signature of positive selection if fixed across individuals
of the species. In the study of trans-acting regulatory variation,
a priori a case of apparent accelerated evolution of a pathway
could be driven by a single mutation of large effect maintained by
drift in a species, as in any phenomenological analysis of trait
evolution {{cite:80618e3fba430734e8082d4b780b0961d33d4eae}}, {{cite:533bdf40011ab84c8b7d4f11c915471f4b4a7e10}}. Our results
indicate that for correlated gene groups, the latter issue can be largely resolved
by a simple transformation in which expression of each gene is normalized against
the mean of all genes in the pathway. Additional corrections could be
required under more complex models of correlation among pathway genes,
potentially to be incorporated with matrix-regularization techniques
that highlight patterns of correlation in transcriptome data
{{cite:08a410bf97a15c9caf8feca97318f51732f0cab4}}. Similarly, although the assumption of
independence across genes could upwardly bias the likelihoods of
best-fit models in our inferences, model choice and parameter estimates will still be
correct on average even with the scheme implemented here {{cite:4ea01b173a105ef18fcfa2b5484218d9490752da}}.
| d | da18887e2fa187be0b24dc9664dea3a5 |
Given the results in Tables REF and REF , we decided to study the range and {{formula:692c2bd6-8ab1-4a25-abf5-e52538202438}} -score normalisations in more detail. We then devised a set of experiments aiming to find out whether there are parameters for the rescaled {{formula:d127dff1-0082-44be-afd8-7857d4b31e45}} -means and rescaled {{formula:8fb943b5-a963-48b4-afbe-e788a3d3db16}} -means++ that would lead to better cluster recovery than the best possible clustering by {{formula:226e8fa8-11a3-4852-889a-a9a8f49f55e7}} -means and the expected {{formula:bebab6fa-2317-445a-8d3f-8573b9a5fc95}} -means++ clustering, respectively. As the {{formula:eedb534f-00e8-4cbf-b03d-b1b4bb84c0f9}} -means++ algorithm is among the most popular variations of {{formula:124aac2d-d6bf-4afd-9790-1bd3db92739d}} -means {{cite:2576d7a88786b3c99cbd8f343dea2b4b37dd6c1b}}, it would be important to propose a method that outperforms it, regardless of the data normalisation used.
| r | cedce3da695b58d8062754e3a2c1a15c |
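The normalisation-plus-clustering pipeline discussed above can be sketched as follows. This is a generic illustration that assumes nothing about the excerpt's specific rescaling scheme: plain Lloyd's k-means is run on z-score (or range) normalised data, with empty-cluster handling omitted for brevity.

```python
import numpy as np

def normalize(X, method="z-score"):
    """Feature rescaling applied before clustering: z-score or range."""
    if method == "z-score":
        return (X - X.mean(0)) / X.std(0)
    if method == "range":
        return (X - X.min(0)) / (X.max(0) - X.min(0))
    raise ValueError(method)

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means; a stand-in for the (rescaled) k-means
    variants discussed in the text, not the authors' exact algorithm."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# Two well-separated blobs; after normalisation, k-means recovers them.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, _ = kmeans(normalize(X), k=2)
print(len(set(labels[:50])), len(set(labels[50:])))  # 1 1
```

k-means++ would differ only in the initialisation step (distance-weighted seeding instead of uniform sampling).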
for all {{formula:ec8968db-0ee5-48ec-98f5-4250a540be17}} . Combining these two restrictions and writing them in
vector form (cf. {{cite:6dadb5439a35b4bdd7fdb8c6271185ec5ac5ad9e}}, p. 240), the set of clearing
payment vectors is seen to coincide with the set of fixed points of the
mapping {{formula:d2da39f8-b46c-4f8d-be6a-7b37d242f221}} on {{formula:fe95cbdc-765e-4c70-8e07-988505cc3fc8}} defined by
{{formula:7021564f-4dfd-4760-9b94-4e503eef58f1}}
| r | de3e7f22b5377e081925f0a1e08d93be |
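The fixed-point characterisation above lends itself to direct iteration. The sketch below uses the standard Eisenberg-Noe form of the mapping, Phi(p) = min(p_bar, e + Pi^T p) on [0, p_bar]; the excerpt's own notation is masked, so the variable names here are generic assumptions (p_bar: total obligations, e: outside assets, Pi: relative liabilities matrix).

```python
import numpy as np

def clearing_vector(p_bar, e, Pi, tol=1e-12, max_iter=10_000):
    """Greatest clearing payment vector via fixed-point iteration of
    Phi(p) = min(p_bar, e + Pi^T p), starting from p = p_bar.
    Phi is monotone, so the iterates decrease to the greatest fixed point."""
    p = p_bar.copy()
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# 3 banks: total obligations p_bar, outside assets e, and relative
# liabilities Pi (Pi[i, j] = share of bank i's obligations owed to j).
p_bar = np.array([10.0, 8.0, 6.0])
e = np.array([2.0, 1.0, 9.0])
Pi = np.array([[0.0, 0.5, 0.5],
               [0.5, 0.0, 0.5],
               [0.5, 0.5, 0.0]])
p = clearing_vector(p_bar, e, Pi)
print(np.round(p, 4))  # [9. 8. 6.]
```

Here bank 1 defaults (it can only pay 9 of its 10 in obligations), while banks 2 and 3 pay in full, and the result is verifiably a fixed point of the mapping.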
CNN-LSTM {{cite:9876b48cbbba9a7b7d19419695a9a1c95071d9b7}}, which is a prediction-based method built by first defining a stack of Conv2D and MaxPooling2D layers of the required depth, whose output is then fed into LSTM and FC layers for prediction.
LSTM-AE {{cite:9bd0529c01efe8836b3d131b4c965bea72148a44}}, which is a reconstruction-based method using a single-layer LSTM on both encoder and decoder.
MSCRED {{cite:fe0abfa77804bf98be30fdd190c01cc991b470b7}}, which is an encoder-decoder model with multi-scale matrices as inputs for multivariate time series analysis.
ConvLSTM-COMPOSITE {{cite:87d4c3d8b5d51c3d71c9dc8b9f504d6be636d98a}}, which is a composite encoder-decoder model with reconstruction and prediction task. We choose the “conditional” version to build a single model called ConvLSTM-AE by removing the forecasting decoder.
BeatGAN {{cite:aa26b32b67a712a4a9167acef80ca8fd89e26b6c}}, which is a reconstruction-based method with an adversarial generation approach as regularization.
MNAD {{cite:5ba77b2e499a6f28f9216628c0feb9fef8eb40c1}}, which is an encoder-decoder model based on a memory module for video anomaly detection. It has two variants: one with the prediction task (MNAD-P), another with the reconstruction task (MNAD-R).
GDN {{cite:b51316eefdd233b90cec9d19baae27b3da4a0b2f}}, which is a Graph-based neural network to learn a graph of the dependence relationships between sensors for anomaly detection.
UODA {{cite:8f1575033e7073b4cec68677db351466572023a4}}, which is an RNN-based network for anomaly detection. We re-implemented it, customizing the number of layers and hyper-parameters.
| m | f5c6fa6aed5aeb6d4c453d8d4cd7e94b |
The properties of different types of polarons were well studied a long time ago.
A number of review papers (see for example {{cite:390b0191c59f31eb19123283d8bc2990a8c79d89}}, {{cite:70b61c07782c814347960aa98178ef106a44dec4}}, {{cite:f2633a988b360c21858a0237fd5c28565d5bbf2e}}, {{cite:5138645df9e1ff1d28cbbe8a95ab868203ed7497}}, {{cite:bed01ca7ee6c3ee573d3dce3e58dba7412c515aa}}, {{cite:1b9099372a7672b00c6e4d95f8d0a736a3ff406b}}, {{cite:22c60ffa3146bfbe102730df9e124809f87a93f6}})
and textbooks {{cite:c2d205345b221fb5c63b8317eb7c17805532912a}}, {{cite:c0203c015ca211de708b343bb02adb9c393e7ebe}}, {{cite:44cd173967295b2b9145262943a59fe5a041a2ed}}, {{cite:f12281f0c9c032eafbb1ebca09cea0fd565196de}}, {{cite:cd51628aa52e00cbdec5f3cde8a125ab41ec262f}}, {{cite:34866615f95229e4802182bbd3abd140cdd4ae0a}}
are available, which describe spectroscopic, thermodynamic, kinetic and other physical properties of polarons.
Usually the polaron theory is used in order to describe electric transport in low mobility crystalline or organic semiconductors (see review article of I. G. Austin, N. F. Mott {{cite:390b0191c59f31eb19123283d8bc2990a8c79d89}} and references therein or more recent papers on organic semiconductors {{cite:50ef35026b67229d9f40ba76a32f746b33f46d15}}).
It was also successfully used in order to describe equilibrium and photo-induced mid-infrared optical absorption of high-T{{formula:42b81161-acd9-4398-8f02-9f31c13658a2}} superconductors at low doping {{cite:c381cca6a0b6b9d3df9da1b231219d402478e56f}}, {{cite:aed53ed326b08bb8ba624ef01d27930da93e87fd}}, {{cite:208aa3f0b71d6b5b6be84bee02fa6aa30c37fabd}}, {{cite:794d450690e68bc309fa6af334aa1b603943b525}} using the well developed theory of polaron optical absorption {{cite:6368b03f23847ba7c95bd6541f00535e17e2b4a0}}, {{cite:1a8838329f0497281c8e0eeb9eb60dcaef785309}}, {{cite:3e3c1227e3f0819006433d7f98996b282c7d20b8}}, {{cite:c3c56ba41fd7112a4b79b93903184c1c66ebacac}}, {{cite:98f7f2341df17a8d1ab08632733948c2dc395f52}}.
The small Jahn-Teller polarons {{cite:e7e7dcdb8ef90f44a5463db590316f074290aa1c}} are also found in colossal magnetoresistance manganites {{cite:2d7f8c345014d2c6d9c9f07b0d3b283c9a42df7f}}, {{cite:125b9bd02ce2fe1bcb54f7f12089ce0af5d83242}}.
| i | 112a01d38421c38035a6ea89580d616e |
Compared to end-to-end ML solutions to PDEs {{cite:be3386a44f4543069268a1ce1e83407077227b92}}, CROM employs the neural network strictly as a spatial representation (see sec:net-inf,sec:net-inv) and solves the PDE using classical PDE-integration numerical methods (see sec:PDE-time-stepping). As such, we believe CROM will open doors for more forthcoming hybrid ML-PDE solutions. As shown in our work, these solutions enforce physical laws (see img:energyconservation), allow for easy integration with existing PDE solvers (see sec:PDE-time-stepping), and obtain practical computational savings that can be directly employed in production (see img:roten).
| d | 683d6c033437bd6dd0520b19749f7c1f |
This ensures the vanishing of both surface magnetic charges ({{formula:95967285-3596-4002-8c97-72740a214235}} ) and volume magnetic charges ({{formula:5b7bb6aa-f1dc-453c-86a7-969920af04ed}} ). Here {{formula:c24ce302-19e1-4378-8671-e066312899cf}} is the wavenumber of the modulation ({{formula:0f8948dc-ff54-41e6-89cc-cd059bab12ad}} for which we will select later the value minimizing the total energy). With the two other parameters defining the modulation, namely its amplitude {{formula:894a2cc0-7f3d-44d3-9587-339fad9c5805}} (taken as the maximum in-plane excursion of the magnetization) and its lateral positioning {{formula:59867aa3-5e57-408e-94e6-2991cebe6b9a}} [measured with respect to an arbitrary reference point, see Fig. 1(d)], we build a complex number {{formula:cb51b248-8fae-4b51-a9b6-964cf1e55ef1}} that we identify with the order parameter of the stripe texture. Then, following Landau theory, {{cite:38babdf8398b54aec8feef7af04b5839b3cf3a97}} we develop the spatially averaged magnetic energy density {{formula:63a70b4e-21a0-405a-8c58-8fe825c44453}} , the terms with odd powers being zero by symmetry (see {{cite:00bb8e07276fc4bd196fe9f004ab7fd7eaf3039d}} for details). This simple form allows us to derive an analytical estimate of the field and wavenumber at the critical point ({{formula:c0b0fc0e-ca9f-4c58-9134-d889e615ff7a}} mT, {{formula:ed28773a-c228-4ba9-853e-7bd8ef457659}} rad/{{formula:e8a3d04e-accd-421e-aa53-d8bff987c3e0}} m, respectively, as deduced from the conditions {{formula:f0650e2a-3315-4e71-97d5-b8bf908daea8}} ), and the modulation amplitude below nucleation {{formula:64bb0866-7a17-4606-b498-3d213fa7ad08}} . Despite its simplicity, this explicit model captures most of the physics of the weak stripes observed experimentally. Despite a small underestimate of critical field of {{formula:cc4e9346-1d18-49b3-92b1-a6d0bffff2a6}} mT, it is also in good agreement with micromagnetic simulations (see {{cite:00bb8e07276fc4bd196fe9f004ab7fd7eaf3039d}}).
{{figure:2bce0a6c-3d91-43af-9216-61b3cd84f302}} | r | 0e3a882bddcf0429b218d3c244a329d2 |
Accuracy indicates how accurately the model has performed. The average classification accuracy achieved by the proposed model was 98.24%. The novel 1D-CNN+LSTM model implemented in this study exhibited high accuracy for the classification of different arrhythmia types. Its implementation is straightforward and has lower computational complexity than most state-of-the-art approaches, such as SVM-classifier-based strategies, the random forest algorithm {{cite:1120d095bc39c2f0d6179f71351ae338eebccc2c}}, or the deployment of ensemble classifiers alongside SVM-based methods. Besides, the proposed model consists of a relatively low number of layers, contrary to the models implemented in {{cite:47e3e9ea17df2a8b468eb0c3439a9f71d8f3db95}}, {{cite:0ce7e7367faf2cbb69cf0c28527930994487fd32}}, {{cite:c0f02b1b00a4a891161d76b285b5656020298a98}}, {{cite:0d3b240989fb9a5e55f6b95c52f457b73dffa8c2}}. Most previous research used only one database of ECG signals {{cite:ce41f4eef8b20d94fc30ae9d876606d3026ee779}}, {{cite:76354cc33448aa78990559c57813a7c068f7f04e}}, whereas here a combination of two databases with different sampling frequencies was used for the training and testing procedures. Furthermore, only a limited number of arrhythmia disorders were classified in most earlier studies, such as {{cite:9769b27c4b72eccd9367a2a7e238d068056f5e15}}, {{cite:ce41f4eef8b20d94fc30ae9d876606d3026ee779}}, whereas 9 different cardiac rhythm types were distinguished in this system. The performance comparison with other recent research studies on the same problem is given in tab3.
| r | 64f962664351a8afb8b2218a2dd96ee5 |
One of the biggest application areas of inverse statistical mechanics is the modeling of biological processes.
These applications are fuelled by the large amount of available data resulting from the impressive progress in experimental techniques in biology.
This is especially visible in the case of biological sequences, with databases now harboring a vast amount of high-quality DNA or protein sequences {{cite:e083a86ad626301dd43bd8217d57a6459a5dfbc6}}, {{cite:0990a08b9c9f38824a19cea8627e5eb5c802eb12}}.
A common idea in this context is that characteristics of genes or organisms related by a common ancestry – called homologous – can be used to construct models of the selection acting on them.
A successful example in this regard is the representation of protein sequences by probabilistic models in the so-called DCA method {{cite:0a9319a9b7ea29282ab17b885cebdea4260deec3}}, {{cite:2f4dd67a084a65d317d6545a852f3570664ba360}}.
The prototypical datasets in this context are multiple-sequence alignments (MSA), whose rows are homologous, i.e. evolutionarily related, sequences and whose columns are specific positions deriving from a common ancestral position {{cite:512d5ee956e810b057b9f6af3955d432a7328582}}.
The MSA contains at least two kinds of complementary information:
| i | 92e4e8e32d2a9165dcd5a4ec232b7dc8 |
Abusive language detection is a relatively new field of research, with “very limited” work from as recently as 2016 {{cite:6383a0cc901f265472c58506855ba14b3e490c67}}. Early methods featured Naive Bayes {{cite:17361459b1c6acc71b0cde03593c246aa9fa7187}}, SVMs {{cite:764dbd250a401206f7097569fefeae76230af6cf}}, Random Forests {{cite:4a308bc2860d1df1ceb2a390dcd7b63dd9f27d3c}}, Decision Trees {{cite:83aa58079cc17ded1d68e6f45e6fd5ee4e4ef97a}}, and Logistic Regression {{cite:655f80dca9e11cfc240fa2077309e22112fdf877}}, {{cite:cc88554c0140322ed5d9ae133a26e1ffe36f3ca0}}.
| m | f851f21464a2ffea47545d7213a0c2f0 |
To dive deeper into the probability density function of the largest eigenvalue, denoted as {{formula:54818697-e410-49f4-abed-5156fb717408}} , let us take, for the sake of discussion, the values of {{formula:e144138d-7951-4661-a382-24e368aaf536}} and {{formula:10f2a5e3-4708-494e-a630-40fc7731bbc9}} , and investigate how the shape of the distribution changes as we vary the rank of the interaction. This is precisely shown in Figure REF ; the top figure corresponds to {{formula:252c4ef2-35eb-46f7-819a-81da91e3a099}} while the bottom one is for the unitary case {{formula:bba954a5-1359-4a2c-9655-1351bc172f4d}} . To better quantify the profile of {{formula:bf751df2-f6df-4fac-8909-b26212e772fd}} , in Figure REF we also show the behaviour of its variance, skewness and excess kurtosis, normalised with respect to those of the Tracy-Widom distribution, as a function of {{formula:7c4f8505-4b74-4f8b-8677-2aabd8df0d2a}} . In the diluted limit we are working in, we observe a smooth transition from an almost symmetric Gaussian distribution for small values of {{formula:fde93e45-96c8-4b27-b3ac-b16b8d0e9127}} , to a more spread and asymmetric distribution as {{formula:c1b47ddd-966b-4a5a-a335-1c2f4f727f03}} is increased, finally arriving at the Tracy-Widom distribution for {{formula:eeb99d0b-f173-496f-8078-701628666928}} . Moreover, at around {{formula:8f5f3bce-10e5-4902-8283-a92fc9d41e68}} the width of the distribution ceases to grow. These results indicate that the correlations at the edge of the spectrum differ as a function of {{formula:8e754b0a-5c11-402d-be40-b6dc667ad7dc}} . While this is obvious for {{formula:684c229d-0563-4485-b0a3-ce35d7a10b09}} , our findings for the one-body interaction {{formula:532d9b26-603f-49dd-bea4-860e186e2b03}} are actually rather surprising. 
Here, according to {{cite:45a54bd25d6d8794ab790d86e85ad96f2edd26ff}} or {{cite:e4af6ad4fd5e1f8131bfb09dbcbb47aca78b0d7e}}, we would expect the spectral statistics in the bulk to correspond to an uncorrelated Poisson distribution, which should naïvely yield, in turn, an extreme value distribution according to the Fisher-Tippett-Gnedenko theorem {{cite:248d35739a57794cd882ca5f5c84c1bb91aec217}}, {{cite:b8de44afda86dc415e5cb3202aa05cb4670913f2}}, {{cite:e80e2583055eaa74d2bacaa01515cfd36b10da6b}}. More precisely, since for {{formula:8d4084cc-cef3-4d5e-b992-44fbf3a6f6bc}} the mean-level density is Gaussian {{cite:2bd175ae89f3f13ac80e3f8cff3dd4b8b37432c0}}, {{cite:e4af6ad4fd5e1f8131bfb09dbcbb47aca78b0d7e}}, {{cite:9c3efc05abbf99ea14079996678439a3a4d97b5c}}, {{cite:9f0d823c78162066113691b8a45fed11a1ae30dd}}, an uncorrelated spectrum should yield a Gumbel distribution {{cite:0fd39f2961f8bd70409222ed6a1235bd77efef07}}, {{cite:248d35739a57794cd882ca5f5c84c1bb91aec217}}, {{cite:b8de44afda86dc415e5cb3202aa05cb4670913f2}}, {{cite:e80e2583055eaa74d2bacaa01515cfd36b10da6b}}, but this is not the result obtained for {{formula:998ea2aa-da68-42a8-8fad-d36005393abe}} . Instead, our distribution is close to a slightly asymmetric Gaussian distribution. Thus, there must still be important correlations towards the edge of the spectrum in this case as well.
{{figure:f95f2faf-ee82-49ca-9878-5e34c0128d90}} | r | 4db8e4174c30cdd37bc5c6f3e6595c8c |
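The Fisher-Tippett-Gnedenko expectation mentioned above is easy to illustrate numerically: the maximum of many i.i.d. Gaussian levels is approximately Gumbel-distributed and hence right-skewed, unlike the nearly symmetric distribution the excerpt reports. A minimal sketch with synthetic data (not the paper's ensemble):

```python
import numpy as np

rng = np.random.default_rng(0)

# Maxima of many independent Gaussian "spectra": by the
# Fisher-Tippett-Gnedenko theorem, the largest of n i.i.d. Gaussian
# levels is asymptotically Gumbel-distributed.
n_levels, n_trials = 500, 20_000
maxima = rng.standard_normal((n_trials, n_levels)).max(axis=1)

# The Gumbel law is right-skewed (skewness ~1.14), so the sample
# skewness of the maxima should come out clearly positive.
m = maxima - maxima.mean()
skew = (m**3).mean() / (m**2).mean() ** 1.5
print(skew > 0)  # True
```

A nearly symmetric largest-eigenvalue distribution, as observed in the excerpt, is therefore a signature of residual correlations rather than of independent levels.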
Comparison to other contrastive learning methods.
We compare the CrossCLR loss with popular contrastive losses: MaxMargin {{cite:43118da06a1d22175501c2dffaf313bb0e96602d}}, {{cite:07d5e5da65810d7e42a0f4c9f71f486226393430}}, {{cite:38ff5e6fec35527351d782c0039a92b29ac22eba}}, {{cite:08c05e270eb5b70f39585eb782a06c37cc25b277}}, MILNCE {{cite:a9ac1bc7d61f87f9d0b999396ec4a721bf493e98}},
NT-Xent {{cite:545714d1b18e8edd58576575b117d3aacd5072e6}}, CLIP {{cite:583f40b77d0da4d31d41b0917a0c54d2f022de88}}, and DCL {{cite:2421b66dd1c7e3f532d6c9a5ab6c344ad517a02d}}.
For the sake of a fair and direct comparison, we used the same experimental setup and architecture as COOT {{cite:7066ece833c59ee69318a8823f2d959f39b9cae4}} and only exchanged the loss.
Table REF reports the retrieval performance on Youcook2 and LSMDC.
CrossCLR consistently improves over previous contrastive learning methods on both datasets.
{{table:6a258cd7-6072-40f5-9ae3-514fbfcbbe41}}{{table:a63345b6-424f-4c38-a97b-f5d7a0c16645}} | r | 3da233d2f1cdfafc725d2c877ae0f092 |
In Figure REF , we show some visualized results compared with OSVOS {{cite:21bc28716f6d2cf10825a5527626bb973577910e}} and PML {{cite:a5327ec35d484641fc81470814f892c3fe40a46d}}. For the breakdance, scooter-black and dance-jump sequences, which contain fast motion and abrupt rotation, OSVOS {{cite:21bc28716f6d2cf10825a5527626bb973577910e}} performs worse than PML {{cite:a5327ec35d484641fc81470814f892c3fe40a46d}}. For the dog sequence, PML {{cite:a5327ec35d484641fc81470814f892c3fe40a46d}} cannot achieve a satisfactory result due to the dramatic change in lighting conditions. On both of these scenarios, however, the proposed method performs better than both OSVOS and PML, benefiting from the robust adaptation ability of our network.
| r | 1821902622f7c2b3149dc823140df5df |
{{cite:4049e07d11b8413b9f3fe220a09e0239f26f375e}} proposes another gradient-based attribution method, SmoothGrad, to identify pixels that strongly influence the final decisions of image classifiers. By adding noise to the original image, we obtain a set of similar images. Then, by averaging the gradients of the classifier's output with respect to each of these noisy images, we obtain a better sensitivity map (attribution result) for the original image with respect to the classification result.
| m | 70850074cb31693bc106301b5fca6776 |
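The averaging step just described can be sketched as follows. This is a toy illustration, not the paper's implementation: the classifier score is an analytic function whose input gradient is known in closed form, standing in for gradients obtained by backpropagation through a real network.

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, sigma=0.1, seed=0):
    """SmoothGrad sensitivity map: average the input gradients over
    several noise-perturbed copies of the input."""
    rng = np.random.default_rng(seed)
    grads = [grad_fn(x + sigma * rng.standard_normal(x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy "classifier score": f(x) = sum(x**2); its input gradient is 2*x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 0.5])
saliency = smoothgrad(grad_fn, x, n_samples=2000, sigma=0.1)
print(np.round(saliency, 1))  # close to 2*x = [ 2. -4.  1.]
```

For a real image classifier, `grad_fn` would return the gradient of the predicted-class score with respect to the input pixels, and the averaging suppresses the high-frequency noise seen in raw gradient maps.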
These methods use a spatial encoder to process the raw trajectory data, and then use recurrent neural networks to estimate the future trajectories. To extract the spatial context from the trajectories, {{cite:460c29bbf27f4e98e202149790a919a15e9e794d}}, {{cite:bd8ba4fb283193adf3f0e74d1aa35261a9be30ca}} and {{cite:782f9dc5d0705d904c6998bec542769431086a15}} use a sequential point{{formula:b7bd667b-4a01-40bc-b653-4a02593df628}} based representation. Occupancy grid{{formula:f463aa14-91ae-41b0-be5f-f3410de44572}} base is another popular representation for the spatial context. These approaches model trajectories as a {{formula:597ce3ab-0783-44fd-9fcb-1a764273c596}} sequence, which can be unstructured at times due to the missing temporal information. Extraction of the temporal context is normally done by using RNNs. {{cite:a4ee6f995a7c280e6486d9ec45fc53f945e30579}} propose a Bayesian fuzzy model to accurately estimate the temporal dependencies. {{cite:ce5bc8e4c5a8869118112eb516434cfa5b3c53dc}} and {{cite:d5dfd151ba3574caaf64528decdb7caff960ab52}} also use Convolutional Neural Networks to encode the temporal context. To unify the spatial and temporal contexts, {{cite:65c5b609df01185bdde06caafcc7c7197c967650}} follows a simple and effective approach, where both the contexts are encoded together, using a Multi-Layer Perceptron, which drastically improves the prediction performance. {{cite:44ca270e0db6f14efae609bddf12cfb56077c0ce}} use a RNN based encoder-decoder along with {{cite:e358d6aa02bfb0fd17eee6550577588b43669fc9}} to model the spatio-temporal context. For the process of predicting future trajectories different variants of RNNs have been used. {{cite:d29631316611431004feceda3a4a68f04ec5464e}} use a standard LSTM network for trajectory prediction on highways. {{cite:072d9918d3b2cdb7a4166004dcf554e1b48465c0}} and {{cite:ac8f8e324a28c196c9ccbdcfe597b0a230c34c86}} use Imitation Learning along with Generative Adversarial Networks to predict future trajectories. 
{{cite:a48ab8c0716b45080aef6a103577657c81e4b08b}} use LSTMs along with Convolutional Neural Networks with social pooling layers and generate a multi-modal Gaussian model for trajectory prediction. While these approaches have improved prediction performance, they require heavy computational resources, which can make them quite hard to deploy in real{{formula:c1f94009-2214-4c97-a2e7-705d8e1aa17d}} time scenarios.
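The joint spatio-temporal encoding mentioned above can be sketched as a single MLP applied to a flattened window of past positions. This is an illustrative numpy sketch, not any cited author's implementation; all weights and dimensions here are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def encode_trajectory(past_xy, W1, b1, W2, b2):
    """Jointly encode the spatio-temporal context of one agent.

    past_xy : (T, 2) array of observed (x, y) positions; flattening the
    window lets a single MLP mix spatial and temporal information.
    """
    h = relu(past_xy.reshape(-1) @ W1 + b1)  # hidden layer
    return h @ W2 + b2                        # context embedding

rng = np.random.default_rng(0)
T, hidden, emb = 8, 32, 16                    # hypothetical sizes
W1 = rng.normal(scale=0.1, size=(T * 2, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, emb));   b2 = np.zeros(emb)

traj = np.cumsum(rng.normal(size=(T, 2)), axis=0)  # toy random-walk track
z = encode_trajectory(traj, W1, b1, W2, b2)
print(z.shape)  # (16,)
```

In a full model, the embedding `z` would be fed to a recurrent decoder that rolls out the future positions.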
| m | 478b70b25c05524f55d51b05ce5b26ba |
Quantum mechanical models of electron transfer have proven extremely successful in describing key features of electronic transitions in systems such as biomolecules and solar cells. One of the simplest models that has been effective in such applications is the spin-boson model, where a two-state system (representing two electronic states) is coupled to a thermal bath (represented by a collection of harmonic oscillator modes){{cite:ea8bac2de68fff466f325113ea6fdf39d47ddb94}}. In the most basic formulation, one assumes that the two states are coupled together via a constant coupling ({{formula:f711eebc-6a37-4ea9-8527-f0f9b0d2d60e}} ), an assumption known as the Condon approximation. If the set of modes in the thermal bath is indexed by {{formula:3aefbfdc-f373-4e51-87e2-1732e3782864}} , the Hamiltonian of such a system is of the form
{{formula:c6597114-bd7b-4528-ad43-6ef4c9778f2c}}
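For concreteness, a standard explicit form of such a spin-boson Hamiltonian (in units where ℏ = 1; the symbols ε, Δ, ω, c below are conventional textbook choices, not notation fixed by this source) reads:

```latex
H = \frac{\epsilon}{2}\,\sigma_z + \Delta\,\sigma_x
  + \sum_\alpha \omega_\alpha\, b_\alpha^\dagger b_\alpha
  + \sigma_z \sum_\alpha c_\alpha \bigl( b_\alpha + b_\alpha^\dagger \bigr),
```

where \(\sigma_z,\sigma_x\) act on the two electronic states, \(b_\alpha\) annihilates bath mode \(\alpha\), \(\epsilon\) is the energy bias, \(\Delta\) is the constant (Condon) electronic coupling, and \(c_\alpha\) are the system-bath coupling strengths.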
| i | 2ddded47360bfaf8eb89bbfde2dd3de4 |
We have also studied steady state properties of the driven system,
starting from a domain wall initial state, by computing transport
properties, the auto-correlation function, and the number entropy. We
find that all of these quantities reflect a change from the localized to
the delocalized regime as a function of drive frequency. However, the
localization seen in transport also receives contribution from
dynamical localization at high drive frequencies {{cite:5fc655b2252e64184a1559f4149ac458370aa382}}. We
also find that near the transition frequency, the distribution of
the number density of fermions in the steady state acquires a large
width; this suggests a possible signature of the multifractal regime
in fermion transport. A similar feature is seen in the steady state
value of auto-correlation function which satisfies {{formula:cf4ecc98-2604-44ac-a5af-4f59e3f93bb9}} in the multifractal phase; this is in sharp contrast to its
values zero and unity in the ergodic and MBL phases respectively.
The plot of the steady-state number entropy also shows a sharp drop at
the transition which becomes sharper with increasing {{formula:62c80291-b479-4740-ab2a-25ce58262891}} .
| d | b1fcd4ff64ad316b2ad1493229bf736b |
We compare our proposed framework with several baselines. The first one is a Relevance-aware Ranking (RR) algorithm {{cite:cdb3ad222a774539d8f811efaba9f9037ff1e716}}, which jointly considers the user-item preference scores and item-item similarity scores for ranking. As we implement the diversity scoring function following MMR {{cite:184397fdb1055d418567452735f9f4bd97ff590e}}, we select it as the main competitor which incorporates coarse-grained item-level diversity through re-ranking. Furthermore, we also select an efficient DPP-based algorithm, FastDPP {{cite:a27b0bef1b16c3cd911de1f13670a10f864e3709}}, and a more recent diversified ranking algorithm, SSD {{cite:13c7f9089c48fa3dbbfc86425990fcf399d9ab61}}, as our compared methods.
To explicitly verify the effectiveness of re-ranking and avoid sampling bias {{cite:bd15527501e0fdac4e76e51887c90ae9a2d00bc4}}, we treat the whole item set as candidates for re-ranking {{cite:7cef3d4fee05e73e8b3c4bfa00aa59b55e7e8593}}.
Owing to the unaffordable time and space costs, we omit the comparison with other DPP-based algorithms {{cite:ee76cabe5ec4745723bd958c6717b853bd5f8b32}}, {{cite:f36b8fd8351c571d887b9d7c022bfa9befb71b35}}.
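Since MMR is adopted as the diversity scoring function here, a minimal sketch of the greedy MMR re-ranking rule may be useful; the relevance and similarity values below are made up purely for illustration.

```python
import numpy as np

def mmr_rerank(rel, sim, lam=0.7, k=3):
    """Greedy Maximal Marginal Relevance re-ranking.

    rel : (n,) relevance scores; sim : (n, n) item-item similarities.
    Each step picks the item balancing relevance against redundancy
    with the items already selected.
    """
    selected, candidates = [], list(range(len(rel)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((sim[i, j] for j in selected), default=0.0)
            return lam * rel[i] - (1.0 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rel = np.array([0.9, 0.85, 0.3, 0.8])          # toy relevance scores
sim = np.array([[1.0, 0.95, 0.1, 0.2],         # items 0 and 1 are near-duplicates
                [0.95, 1.0, 0.1, 0.2],
                [0.1, 0.1, 1.0, 0.1],
                [0.2, 0.2, 0.1, 1.0]])
print(mmr_rerank(rel, sim, lam=0.7, k=3))  # [0, 3, 1]
```

Note how item 1, despite its high relevance, is demoted below item 3 because it is nearly identical to the already-chosen item 0.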
| m | ae4d1fa4b6c8bb1c1f6477ad8b6157d5 |
The method proposed in this work consists of the following three stages:
(i) pre-processing stage: the HSI data set is reconstructed by NSW and then projected linearly to a lower-dimensional space by PCA. This step effectively uses spatial information and reduces the Gaussian white noise in HSIs {{cite:0ee46d658da809eaaa9fd12428f8a1125e6ca07e}}, {{cite:dc62303d9aff2f8d68bcbb1670fcb1fb8c9b39be}};
(ii) pixel-wise classification stage: the {{formula:55315f6d-acaf-4589-8e8e-c5709ceec884}} SVC, which uses mainly the spectral information in the data set, is applied to get the probability maps where each map gives the probability of the pixels belonging to a certain class {{cite:85f2607a79e17dd84826b857f997367547275c69}}, {{cite:a06b8930deea756c823d7e97ddd51408a59cb9d1}}, {{cite:119984efb72706557c0e711da1c097ce2c6b5bbe}}, {{cite:df7d3845276a89dc8bf26459ed38181f26717efd}}, {{cite:56cdeed12fdc31bb9703a398d4e4f29db8b90c96}};
(iii) smoothing stage: a smoothed total variation (STV) model is used to ensure local spatial connectivity in the probability maps so as to increase the classification accuracy {{cite:85f2607a79e17dd84826b857f997367547275c69}}, {{cite:5b7aeaaa40a65457cf38195354effcf9abe93aa6}}. In the following subsections, we introduce the three stages in detail. The outline of the whole method is illustrated in Figure REF .
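The PCA projection used in the pre-processing stage can be sketched in a few lines of numpy; the synthetic data and the component count below are arbitrary illustrations, not the paper's actual setting.

```python
import numpy as np

def pca_project(X, n_components):
    """Project pixels (rows of X) onto the top principal components.

    X : (n_pixels, n_bands). Centering followed by SVD; keeping only the
    leading components discards directions dominated by white noise.
    """
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
signal = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 64))  # rank-3 "spectra"
X = signal + 0.01 * rng.normal(size=(500, 64))                 # plus white noise
Z = pca_project(X, n_components=3)
print(Z.shape)  # (500, 3)
```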
{{figure:ec534933-61eb-4036-947f-68b5167220a1}} | m | 2c6d9db29ebd2ee5f79c7d97aa6075d6 |
pocoMC implements the Preconditioned Monte Carlo (PMC) algorithm. PMC combines the popular Sequential Monte Carlo (SMC) {{cite:087fdbd436370d10d5073235747ea6534bd0ee5e}} method with a Normalising Flow (NF) {{cite:d79cd6bd5cf5fdb05e10a328ce2336923c49facf}}; the latter works as a preconditioner for the target distribution of the former. As SMC evolves a population of particles, starting from the prior distribution and gradually approaching the posterior distribution, the NF transforms the parameters of the target distribution such that any correlation between parameters or presence of multimodality is removed. The effect of this bijective transformation is a substantial rise in the sampling efficiency of the algorithm, as the particles can sample freely from the target without being hindered by its locally curved geometry. The method is explained in detail in the accompanying publication {{cite:9d295c23d30b95e29d774e35cd017eb60c3d6ef6}}; we provide only a short summary here.
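The core SMC ingredient — reweighting a particle population toward a tempered target and then resampling — can be sketched generically in numpy. This is an illustration of one bridging step only, not pocoMC's actual implementation (which additionally applies the NF preconditioner); the Gaussian prior and likelihood are toy choices.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: low-variance selection of particle indices."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

# One tempered bridging step: particles drawn from the prior are
# reweighted toward the posterior by a fractional power of the likelihood.
rng = np.random.default_rng(2)
particles = rng.normal(size=1000)            # draws from an N(0, 1) prior
loglike = -0.5 * (particles - 2.0) ** 2      # toy N(2, 1) likelihood
beta = 0.5                                   # inverse-temperature increment
w = np.exp(beta * (loglike - loglike.max()))
w /= w.sum()
idx = systematic_resample(w, rng)
resampled = particles[idx]
print(resampled.shape)  # (1000,) particles, now shifted toward the posterior
```

Repeating such steps with increasing `beta` (plus move steps to rejuvenate the particles) takes the population from the prior to the posterior.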
| m | bfb35b976c48a77769fd625fcb66dcd3 |
Classical general relativity gives the concept of a black hole from which nothing can escape. In 1974, Hawking {{cite:8c73ec85d556d9bdfd415aa94961b149b3e06035}} startled the physics community by proving that black holes are not black; they radiate energy continuously. Later, in 1975, he {{cite:cb1818f905a1f2dbfd8090db5230f7cdbe3933fc}} showed that the radiation of a black hole perfectly matches black-body radiation whose temperature is {{formula:a071ab5c-1685-47d3-8266-f950ecb5a490}}, where {{formula:fce4746c-fd5a-4dcf-a73f-45f744196621}} is the surface gravity of the black hole. His calculation was based entirely on quantum field theory.
| i | f9e84983cf8d537e1d1469e2d01cd21f |
The focus of this work has been the attempt to develop a general method that can approximate the stochastic dynamics on a wide range of graphs by adapting methods from statistical physics and epidemiology. In doing this, we have provided a derivation of existing (homogenised) pair-approximation models from the master equation {{cite:4c68bb021e9a7baa197ca9323bf624692f9c0916}}, {{cite:a5b66c6339387ddc2cbbc61b0c691b245b153617}}, {{cite:db926d3a7706d95ac7d961e1a95cd425424e455b}}, {{cite:c7a86d400a50a17d8528d8df73957936d4626c5d}}, {{cite:9f0ec979e2dcee0fa95e08ce6522e14986a6cb2e}} (Section REF ). Additionally, we also derived an individual-level model which has the neutral drift model {{cite:e4194b350e55a8331561e49d400b0e3053c3a492}} as a special case (Section REF ).
| d | 1954de30cc010123db1abd28dc0e1222 |
Another interesting issue to be considered is the 5d uplift. While {{formula:2206c35f-b5a0-489d-8436-01a52b1a4492}} , {{formula:fd3c9895-360c-4f64-a748-a17a365c0726}} gauge theories correspond to the non-relativistic integrable systems realized on the Seiberg-Witten geometry, the {{formula:fbf1cecc-b6ae-4788-a059-5c165b32b42e}} , {{formula:9dac0af6-3a8a-4471-aaba-5269071062c3}} gauge theories compactified on a circle correspond to their relativistic cousins {{cite:b26ca7b494381020e5a4ffc546b5abd50b095bce}}. The main difference is that the spectral equations become difference equations instead of differential equations. It was checked in {{cite:5e151606ac889cedf9cf593f9d3ff61ef9fcb10e}} at some low instanton numbers that the codimension-two surface defect partition function satisfies those difference equations, for the example of {{formula:552fb832-d859-4afd-ab5b-c9f35c25d346}} theory. It would be nice to construct a rigorous analytic proof of those relations as done in this work for the four-dimensional case, using the 5d version of the {{formula:5f99b1fb-55b8-42a6-b466-565af0a450ed}} -characters {{cite:5941b6090b4d01eed9e855b314f9bef7130796ac}}. The algebraic engineering of codimension-two defect partition functions à la {{cite:57ebdf7b4a96575ecb62b134b887138c8a49b52a}} can be useful for this study. The splitting of degeneracies would persist in those relativistic integrable systems, and the insertion of codimension-two defects is expected to detect this splitting through their partition functions.
| d | 1e54a2c8a606199efe7ebcfe29c47fd7 |
The early-time scaling {{formula:789351be-1bb0-4a1a-8335-292968a3a853}} of momentum anisotropies in transport calculations was already observed in special cases before:
in numerical simulations with a given initial phase distribution in Ref. {{cite:c76145a49cab19edacca38221712b2bff224fe9e}}, and in analytical calculations restricted to leading order in {{formula:a98a5298-7dd6-4d19-b472-cae278afdd29}} with another specific initial profile {{cite:ca66040eb2f3fbc27be1265a8057afb9745d6e8d}}.
Here we have shown that this scaling behavior is very general if there is no initial flow.
| d | 92669b209ed59eb3aeeeb120d11f341a |
The scenario presented here has many interesting cosmological consequences. It places strict constraints on scalar fields in our inflationary model: even supermassive scalar fields may easily be overproduced during preheating. Our scenario is also an example of a model where supermassive DM can be produced simultaneously with a negligible tensor-to-scalar ratio {{formula:ba0fad88-f182-4b59-b81d-e4f39634a279}} . In single-field inflationary models, {{formula:25946c68-695b-4586-b960-a0a5bbc93434}} is proportional to the inflation energy scale {{formula:34d955cb-6ea1-4e1d-9e41-70029d169af4}} , so small {{formula:f82aab9e-aed4-4878-8b9d-d4bd7074ce69}} implies small {{formula:f76f47f6-d011-4d32-b79a-fd69f95fd710}} , which in most models sets an upper limit for the masses of produced particles: {{formula:5ef20b20-136b-49f5-a4a2-dad0c93c34ef}} (though see {{cite:6c16bff24eabdbe7603a57cfb25efb50d51bef83}}, {{cite:3a4636ef631bd1e3e2cd1121b1b236667fd0fd9d}}, {{cite:be687e09eaf4fc304466c74682791241728c8bbe}}, {{cite:997628b9bd1fb61ca08bac9ee01d8fdeaf71e798}}, {{cite:4400a567973e6f75548b11f2e082324bcc9e416e}}, {{cite:c1396a6d321c61e8b58a81260d5d2c07ccf162d9}}, {{cite:5e3f2c2d984e6d660b2f0385bcca9cb6c71592ae}}). However, our model allows {{formula:0ad2201a-87c7-4291-b981-d08be0882f92}} , so there is no direct connection between {{formula:41b562dd-10df-4340-8392-a116c1bc2e33}} and {{formula:bfafb749-742b-4e6c-bb9f-a161b2230a39}} . Indeed, as can be seen from (REF ), {{formula:03ddbcab-215a-4901-ad67-f5578ae03f79}} implies a negligible {{formula:264d40ad-7ca3-466d-a13a-f6dfd1e06394}} , but {{formula:122f172e-54b5-43c6-a88a-fb9d19256f8a}} -particles can still be produced abundantly with masses up to {{formula:1a00e7d4-7ae3-4bac-b510-8c522a5aca89}} GeV. Similarly, a low reheating temperature is allowed, since it is controlled by {{formula:834fddae-da7e-46c9-81c5-4bc3666f2ce5}} .
| d | 8a58f76e39299d68f3ec4cb2f20c7226 |
We employ a relatively simple neural network tagger for all of the tagging tasks in this study.
The tagger used is a bi-directional Long Short-Term Memory model (Bi-LSTM, {{cite:5ffdee4a9db4efcfc696ca9f3c7c46bef7c98d25}}, {{cite:3d418f0002d37de99237d2b4424f93c4cdddba68}}).
We use a single hidden layer for each direction, as shown in Figure REF .
The input representations used are 100-dimensional multilingual word embeddings trained on UN {{cite:fd7aada8ae1fd57763e0af26e51785b61ae69e8b}}, Europarl {{cite:dba84217348c8b4bf74b8b6795d1ae8e23c01842}}, and Bible data, using multilingual skip-gram {{cite:2081cb4be6605df57047ea38a6a11d1aec4b59ea}}, based on word alignments obtained with a variant of Efmaral {{cite:f97902e75c773bf60bbcbc1f40276aca1ababe05}} (using the default parameter settings for eflomal: https://github.com/robertostling/eflomal).
These are the same embeddings used in the previous chapter.
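The tagger architecture can be sketched as a minimal numpy forward pass: one LSTM per direction over the embedded sentence, concatenation of the two hidden-state sequences, and a linear tag-scoring layer. All weights and sizes below are toy stand-ins for illustration; the real model is trained and uses the 100-dimensional embeddings described above.

```python
import numpy as np

def lstm_pass(X, Wx, Wh, b):
    """Minimal LSTM forward pass; returns the hidden state at each step."""
    T, d = X.shape
    H = b.size // 4
    h, c, out = np.zeros(H), np.zeros(H), []
    for t in range(T):
        z = X[t] @ Wx + h @ Wh + b
        i = 1.0 / (1.0 + np.exp(-z[:H]))          # input gate
        f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))     # forget gate
        o = 1.0 / (1.0 + np.exp(-z[2 * H:3 * H])) # output gate
        g = np.tanh(z[3 * H:])                    # cell candidate
        c = f * c + i * g
        h = o * np.tanh(c)
        out.append(h)
    return np.array(out)

def bilstm_tag_scores(X, params):
    """Concatenate forward and backward LSTM states, then score tags."""
    fwd = lstm_pass(X, *params["fwd"])
    bwd = lstm_pass(X[::-1], *params["bwd"])[::-1]
    states = np.concatenate([fwd, bwd], axis=1)   # (T, 2H)
    return states @ params["W_out"]               # (T, n_tags)

rng = np.random.default_rng(3)
T, d, H, n_tags = 5, 100, 8, 17                   # toy sizes; d matches the embeddings
mk = lambda: (rng.normal(scale=0.1, size=(d, 4 * H)),
              rng.normal(scale=0.1, size=(H, 4 * H)),
              np.zeros(4 * H))
params = {"fwd": mk(), "bwd": mk(),
          "W_out": rng.normal(scale=0.1, size=(2 * H, n_tags))}
X = rng.normal(size=(T, d))                        # stand-in word embeddings
print(bilstm_tag_scores(X, params).shape)  # (5, 17)
```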
| m | 7e391f1cf0bbb3b6939ef1e6556a10dd |
where the regularization function {{formula:be339b1a-836a-471b-a447-60e4d27f5ba9}} steers the solution towards a preferred sparse structure, and the regularization parameter {{formula:da34b9a5-d033-4417-9d6f-2e0b53aa1f19}} balances the trade-off between the data-fitting cost and the regularization function for sparse-structure embedding. In SAL, it is assumed that the unknown parameters {{formula:47526a7c-33dd-4deb-9c29-283df0c01485}} have a majority of zero entries, and thus the adopted regularization function {{formula:5ab36ffa-c6f5-464b-b43f-3d763cfed45c}} should help the optimization process unveil such zeros. Such regularization functions include the family of {{formula:4ae1d48d-d03b-4205-aaab-0a800b0f57bf}} norm functions with {{formula:6d174805-7d66-4641-83c2-0b6b38f676c1}}, among which the {{formula:e476cee7-78b4-47c6-9d30-bb33a43ba89e}} norm is the most popular, since it retains the computationally attractive property of convexity. Furthermore, strong theoretical results have been derived; see, e.g., {{cite:5d43aeeba959bd3ba2257c4b6a3a45e6939878c4}}, {{cite:5ada6404e2e34c5d1bfeb443ca143f6319dc4ce4}}. In recent years, SAL advances via regularized cost optimization have prevailed in machine learning. The literature is very rich and fairly well documented, with many sparsity-promoting regularization functions. Although the resulting regularized cost function might be non-convex and/or non-smooth, efficient learning algorithms exist and have been built on solid theoretical foundations in optimization theory; see, e.g., {{cite:9b5aef5d1985c6e8d1ba210692881bce2f5620e7}}.
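For the popular ℓ1 case, a classical solver is ISTA (iterative soft-thresholding), whose proximal step is precisely what "unveils" the zeros. The sketch below on synthetic data is illustrative only; the problem sizes and regularization parameter are arbitrary choices, not from this source.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=2000):
    """ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 100)) / np.sqrt(50)    # normalized sensing matrix
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]           # 3-sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = ista(A, y, lam=0.05)
print(sorted(np.flatnonzero(np.abs(x_hat) > 0.5)))  # recovered support
```

Despite having fewer measurements (50) than unknowns (100), the ℓ1 penalty recovers the three nonzero locations.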
| m | 4db5146c3222defb647500415c74b4bc |
where for a measure {{formula:3da8f74c-228d-4db5-8716-5e32651178ba}} the slice {{formula:31bfa0c0-659d-4d24-837a-a3f7a1116252}} is defined by the relation {{formula:c9c654aa-9fa7-4c35-bf9d-ebb3f670aac8}}
and for a measure {{formula:ab3c6033-4d98-4eb3-b53b-3a3f3f17d080}} the slice {{formula:233773cc-4f66-400b-b55f-12060da6d1b9}} is defined by {{formula:a1b655d6-f6f5-4faa-87e0-d149bb85d225}}
(the condition {{formula:253f1e86-2a64-4935-9026-89a6308b2cdb}} thus implies that the above disintegrations have to exist;
in sec:alternativeFormulations we will use a slightly different notation that does not use measure slices but instead adapts the involved linear operators).
Note that one could also consider a fourth combination, however, it has no advantages over the above stated ones, so we do not treat it explicitly in this work.
While product spaces are most convenient if the reconstruction error in single or all snapshots or position-velocity projections is analysed,
classical Radon measures allow a simpler and standard variational well-posedness analysis,
so all these models have their advantages and disadvantages.
If {{formula:cf62589b-0c49-489d-8ddb-10927c583d27}} and {{formula:edcaac30-4efc-4ee5-8cc9-15a95a515618}} are such that the Radon transform and the move operator are injective (for instance if they have nonempty interior {{cite:199563fd08dc53f72e8278c54b9dcf845b6fa69c}}), however,
we prove that the models are even equivalent.
Below, we say that {{formula:c260c9f3-3721-4f31-9d09-60096e3c773d}} and {{formula:1dbc3334-2c34-43c5-8d3a-15fd60251fcf}} coincide
if the slices {{formula:ddef1378-3132-4595-b060-fac4275e831a}} are uniquely defined and coincide with {{formula:8cf3c3c9-ecc1-4781-8cf7-e723bfca2ff4}} for all {{formula:f6a15d2a-96e7-46cf-8d12-91647d96fc7b}} .
Similarly, {{formula:daf07f2a-9be9-419c-bdaa-c80f1b3b2cdb}} and {{formula:3c787971-5353-4dd9-a489-7f90ae1678b7}} coincide
if the slices {{formula:a52b2d56-9955-4500-b18a-9be463c422e9}} are uniquely defined and coincide with {{formula:2e052084-cf6b-4555-8e1e-9125ec8fa055}} for all {{formula:166dab27-11b8-4496-971a-59fd676d7ef9}} .
| r | a5f9c7ae845b6a09032cb607646c1be9 |
As Figure REF shows, given an arbitrary number of image and (or) text queries, our goal is to retrieve the images that contain all the semantic concepts specified in the queries. Inspired by the recent advances in compositional learning for visual recognition {{cite:a012c7324f47ceeb73807847a07d0b11b4ef86f0}}, {{cite:f97940620766a889285d3c1898bb357c8e2d0be6}}, {{cite:678715293aa0aa4cfb59696701ed9c3ae87652a9}}, we tackle this problem by learning a compositional embedding that flexibly encapsulates the multiple semantic concepts specified in the multimodal queries and can be used to retrieve the most relevant images. For instance, given two query images capturing “cat” and “dog”, and one text query stating “sports ball”, we aim to learn a compositional embedding that represents all three queries to retrieve images that contain “cat”, “dog” and “sports ball” (see Figure REF (b)).
| i | 6f40b0395cdfb3331900e0e74b4c5f53 |
Results show that HQI, both with and without state abstraction, consistently outperforms FQI when there is limited training data. When the dataset is large enough, they all converge to the same optimal performance, which is around {{formula:d6fc2629-2cbc-4639-80a1-217bb9d78418}}. We also notice that, occasionally, HQI with state abstraction can reach optimal performance with very limited samples, i.e., 5000 samples. This demonstrates that with proper hierarchy constraints and a good behavioral policy, HQI can generalize much faster than FQI. Moreover, even HQI without state abstraction consistently outperforms FQI in terms of sample efficiency. This is different from the behavior of the on-policy MAXQ-Q algorithm reported in {{cite:c50145b8bb9d94237d8aa853a0cd51783e160dbd}}, which needs state abstraction in order to learn faster than Q-learning. We argue that HQI without state abstraction is more sample efficient than FQI for the following reasons: 1) HQI uses all applicable primitive samples to update the Q-table for every subtask, while MAXQ-Q only updates the subtask that executes that particular action. 2) An upper-level subtask in MAXQ-Q needs to wait for its children to gradually converge to their greedy optimal policies before it can have a good estimate of {{formula:6db57471-4bf7-4128-b63e-b5ff9efdaac8}}, while HQI does not have this limitation.
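For reference, the flat FQI baseline can be sketched in tabular form: each sweep regresses Q onto one-step Bellman targets computed from a fixed batch of transitions. The toy chain MDP below is invented for illustration and is unrelated to the paper's domain.

```python
import numpy as np

def fitted_q_iteration(batch, n_states, n_actions, gamma=0.9, n_iter=50):
    """Tabular Fitted Q-Iteration over a fixed batch of transitions.

    batch : list of (s, a, r, s_next, done) tuples collected offline.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iter):
        targets = np.zeros_like(Q)
        counts = np.zeros_like(Q)
        for s, a, r, s2, done in batch:
            y = r + (0.0 if done else gamma * Q[s2].max())
            targets[s, a] += y
            counts[s, a] += 1
        mask = counts > 0
        Q[mask] = targets[mask] / counts[mask]  # "fit" = per-cell average
    return Q

# Toy 3-state chain: action 1 moves right; reaching state 2 pays reward 1.
batch = [(0, 1, 0.0, 1, False), (1, 1, 1.0, 2, True),
         (0, 0, 0.0, 0, False), (1, 0, 0.0, 0, False)]
Q = fitted_q_iteration(batch, n_states=3, n_actions=2)
print(np.argmax(Q, axis=1)[:2])  # greedy policy moves right in states 0 and 1
```

HQI differs by maintaining one such table per subtask and reusing every applicable primitive sample across subtasks.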
{{figure:26422e63-08e3-4694-9a10-d9bdd6af8f34}}{{figure:020412e1-d9e4-436a-acd0-2f476eb205c3}} | r | 1db25e9d5f60455713d5a0db1fae4431 |
Seidel and Thomas {{cite:39f07df6c18faae567c7a1ff29db9164380112d9}} define autoequivalences {{formula:14bf4395-7cef-4f96-98ec-65e6700e725b}} associated to spherical objects called spherical twists, we denote by {{formula:9222f6a8-8e94-4b61-98b2-79c2fd0b2dbc}} the subgroup of {{formula:014c25fa-d493-4060-8956-9b1aea71bd2c}} they generate. The action of {{formula:a61e740b-48c0-4cd7-a565-1a046c78ddac}} preserves the component {{formula:3e099341-bf10-4863-aacc-37256afdfc2e}} , and {{formula:22ca5255-d70c-4f11-97e4-4465bf3593bd}} is a fundamental domain for this action.
| r | f1fd45152b19d4d7eee8d27b103f3e78 |
When {{formula:d83b9b2f-6be7-426c-9a55-f6a791154088}}, {{cite:c22dc34ac70c06e8e0992df6b025c1c5d3325a23}} show that the LSD of {{formula:1ca6f546-a089-47a4-934f-458ba25ce6d8}} is the standard Marčenko–Pastur law, which has the density function
{{formula:5f156bdf-1707-4e86-8cff-03e395187525}}
| d | 06eca2f5f2395ae3dd51215c19b23d60 |
The exponential growth of mobile devices and mobile services provides a huge amount of data for AI-based mobile applications, e.g., healthcare and e-commerce services.
However, effectively constructing a global model from a large amount of mobile users' data faces critical challenges.
First, due to privacy concerns, mobile users are not always willing to share their raw data with their AI mobile service providers (e.g., their locations and travel habits/data).
Second, fusing users' data to a mobile server may incur significant communication overhead/cost.
In this context, Federated Learning (FL), among various distributed learning frameworks, has recently emerged as a promising solution to address these two challenges.
Specifically, instead of requiring the mobile users to share their raw data, FL only requires users to send their gradients based on their local data to the server of application providers for the learning process.
By doing so, not only does the communication cost decrease significantly, but the users' privacy concerns can also be alleviated {{cite:1c6abf802ef1332982a1d4c404e72da2627a4c4a}}.
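A minimal sketch of the server-side aggregation step in FL is shown below, using FedAvg-style weighted averaging of client parameters. The excerpt speaks generically of sending gradients, so this exact aggregation rule and the toy parameter vectors are assumptions made for illustration.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients send locally trained parameter vectors -- never raw data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]                    # local dataset sizes
global_w = fedavg(clients, sizes)
print(global_w)  # [3.5 4.5]
```

The server broadcasts `global_w` back to the clients, and the round repeats; only model parameters (or gradients) ever cross the network.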
| i | 8fe31d43c04b0d62f4d7bd6f6f089b4b |
The long-form video understanding benchmark (LVU) {{cite:80391a32cd1810aba6410c408d82d18e69c43b9a}} is constructed using the publicly available MovieClip dataset {{cite:1346121cb9e5edb83d0a554d568a433912d11caa}}, which contains {{formula:93f6126c-e41d-4730-9f4e-256e59b6a82b}} 30K videos from {{formula:0da8c650-f8e0-4aa7-911b-201d661cd60d}} 3K movies. Each video is typically one to three minutes long. The benchmark contains nine tasks covering a wide range of long-form video understanding tasks. These 9 tasks fall into three main categories: (i) content understanding, which consists of (`relationship', `speaking style', `scene/place') prediction, (ii) metadata prediction, which includes (`director', `genre', `writer', and `movie release year') classification, and (iii) user engagement, which requires predicting (`YouTube like ratio', and `YouTube popularity').
| r | 39001ca772e7ec60ccf1fff9e8bfc451 |
In {{cite:3f54b7b407ac7e4f13e9e05f09d9e8aa9622ad30}} we have outlined the basic theory of synchronizing dynamical systems (or henceforth just “synchronizing systems”). The theory of synchronizing systems is fundamentally based on techniques established in the study of Smale spaces. Smale spaces are dynamical systems which are uniformly hyperbolic {{cite:563e1a012de3cb69eaf09416f1cc65ba389f1afd}}, {{cite:d8a03663b8eeac559908411ddd0b1a525b26b318}}, {{cite:673d2ad90fda63bf754f05a960c4f4611ca84d92}}. Synchronizing systems, however, are dynamical systems which are hyperbolic almost everywhere. Hence almost all points in a synchronizing system have local hyperbolic behavior, except that we allow for a topologically small set of singular points. A precise definition can be found in {{cite:3f54b7b407ac7e4f13e9e05f09d9e8aa9622ad30}}. The term “synchronizing” is borrowed directly from the field of symbolic dynamics where there is a notion of a synchronizing shift {{cite:0ec64af2031cf6400b69b317a1b6704bd8f73ab0}}, {{cite:d327cdae422bf9784a9eaa379d24f60e90d082f4}}. Our definition of synchronizing system generalizes this class of shift spaces, see Lemma REF . Synchronizing shifts are thus an important example of synchronizing systems — in fact they are the focus of the current paper.
| i | 672a0e704c86b491a7d4256567c699cc |
While these theoretical predictions were confirmed via the Monte Carlo {{cite:43a8107f49f630d91cba3dbf4eab525549e68875}} simulations
of the Ising model, for moderately high values of {{formula:249aea6d-76ea-4dc7-adc2-9508d1ebb543}} ,
striking deviations were reported for {{formula:01782e0f-7936-4cae-bae6-d34e0f6647ad}} , in {{formula:a132f1e3-389d-4081-88e3-42f31fe691d0}} .
In the latter case, several works concluded that
{{formula:ef1a595b-8a4b-4178-bdce-47763d266f0d}} or the growth is even slower. Most recently it was reported that
the OJK function does not {{cite:7586fabd9e825122edef061b37ad315e66035ef3}} describe the pattern at {{formula:2e2c56d5-c589-42d4-9ab4-7e38f8dec663}} . Furthermore,
{{formula:ac68ed62-bcb7-4b09-8f5a-6b023e70a8bc}} was also estimated {{cite:7586fabd9e825122edef061b37ad315e66035ef3}} to be much weaker than {{formula:0b32ebfb-6c3f-4943-9e3a-c7a76b684aa5}} .
Thorough investigations, we believe, are necessary in order to arrive at a complete and correct picture.
It needs to be understood whether such anomalies bear any connection with some other special point.
If such a special point turns out to be that of the roughening transition, important relations
concerning structure and dynamics {{cite:cc50ba43c4e962e345fbae104dfd6b22f0791865}} can be established in the nonequilibrium context.
Our study clearly suggests that the above-mentioned
anomalous features are not specific to {{formula:c9543391-1cf3-44f4-aa5d-2ecaa5137fe6}}. The onset of
the anomalies occurs at the roughening transition.
The key results have been verified by sophisticated finite-size
scaling analysis {{cite:1fc2a14a9bae4d24678fadd5adcf29376fef2338}}, {{cite:9b79facc8844a05c232c4507354553a8dac482a4}}, {{cite:a07183cea1bdeabcdd60a007dc542b99a2905379}}.
We believe that these will inspire
novel investigations, thereby explaining intriguing dynamical phenomena in the
nonequilibrium domain.
| i | 1708b0a2ab31b6e37c27b23e1070f1d6 |
The physics related to the Higgs boson has become the frontier of the high energy physics since its discovery a decade ago {{cite:56eff8fa2d3e4a2e32052f0e6bd1bcc070de3d57}}, {{cite:b913cd4c42c43eeb4f7d4d97a48407a0994e3cc7}}. In the Standard Model (SM) of particle physics, the Higgs boson is known as the direct evidence of the electroweak (EW) spontaneous symmetry breaking based on the Higgs mechanism.
However, current experimental precision cannot exclude the possibility of an exotic Higgs potential deviating from the SM, which is a typical structure in most new physics models.
Therefore, the Higgs boson could be the most promising probe to new physics beyond the SM.
| i | cfceaa3cdcac6762f2ab43d9c1d0bcd0 |
We focused on analyzing the effectiveness of our method from two perspectives, final model performance and ability of the method to efficiently trade a computational budget for improved performance. We compare IGF directly to standard finetuning, which we define as basic batched stochastic gradient descent with Adam {{cite:91b0b1d73f4131db868f949d5ae3e8926f6d5dba}} using random samples from the target corpus. For our tests, we used the pretrained GPT-2 Small Transformer model, a commonly used unidirectional language model. We use the publicly available GPT-2 Small implementation of the transformers package {{cite:ca7bc840d56f181183b94c864689d8d523db36ff}}. We test our approach in two settings, a standard Books dataset {{cite:6cfcc106e3ad15daa662d95a81b370ecf7545e3f}} and “mixed" dataset which is composed of training examples from two corpora (the Books corpus and a corpus of scraped Reddit comments {{cite:5108ac71c37df4d46a66ae88923a5cb5944fc39a}}Intel authors did not use or process any data. Intel does not control or audit third-party data.) but whose test set only comes from one corpus (Books). The Books corpus allows us to fairly compare standard finetuning against IGF, whereas the Mixed corpus allows us to analyze the effectiveness of the method at separating informative contexts from uninformative ones. For both methods, batches of size 16 were used to train the language model with a learning rate of {{formula:df286d03-6ff9-4a78-9611-27ccc7559448}} and {{formula:cd69b954-05d2-4ac6-b934-19d7c9b1adc7}} . The convolutional network that we used for our secondary learner was similarly trained using SGD with Adam with a learning rate of {{formula:d9d5e6a6-f2d7-4077-984e-5cb49e2b8f79}} and {{formula:771101c7-527f-4037-970f-f3433080ec95}} .
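The batch-selection idea behind IGF can be sketched generically: a secondary learner scores candidate contexts, and only the top-scoring fraction is passed on for finetuning updates. The linear scorer and all sizes below are hypothetical placeholders, not the paper's actual secondary learner (which is a convolutional network).

```python
import numpy as np

def select_informative(contexts, score_fn, keep_frac=0.25):
    """Filter training contexts by a secondary learner's predicted value.

    score_fn maps a context vector to a scalar information-gain proxy;
    only the top-scoring fraction is used for language-model updates.
    """
    scores = np.array([score_fn(c) for c in contexts])
    k = max(1, int(len(contexts) * keep_frac))
    keep = np.argsort(scores)[::-1][:k]   # indices of the best contexts
    return keep, scores

rng = np.random.default_rng(5)
contexts = rng.normal(size=(16, 8))        # a candidate pool of 16 contexts
w = rng.normal(size=8)                      # hypothetical scorer weights
keep, scores = select_informative(contexts, lambda c: float(c @ w))
print(len(keep))  # 4 contexts survive the filter
```

Standard finetuning corresponds to skipping this filter and updating on every sampled batch.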
{{figure:7a689047-4d54-47f5-8f54-caa0182e5b56}} | r | 0b042fc8a72cacf6ab519cf5b75290eb |
Applying reinforcement learning (RL) to applications with unknown safety constraints is challenging as RL is inherently an exploratory process and requires agents to learn the whole environment first.
Though there has been a surge of attempts to incorporate safety in RL {{cite:d1fe5510ae8f880824f126d8296fb81e81edf80b}}, {{cite:9eb4cb6e8d7cad3d060d7cf290fbeb3b9f4951a2}}, {{cite:38f3a508b4f8d952f7eadca0b6694442652871fd}}, {{cite:761eae72e3dad009f8a4ec3f193636de782e8d7c}}, it remains fundamentally difficult for most of these algorithms to guarantee safety, especially in the exploration phase.
If a single mistake can lead to catastrophic failure, conventional algorithms cannot be directly applied.
| i | 87f2cd61485484fef17af81752fc5224 |
A current mainstream data structuring method includes two steps, as shown in Part A of REF : 1) uniform subsampling, such as Farthest Point Sampling (FPS) {{cite:930608b0e8a30304d0c9c63fa9284e1a27a1079f}}; 2) local grouping, such as ball query, kNN or cube query. The subsampling step obtains a subset of points in the point cloud, and the local grouping step then uses each sampled point as the center of a local area and groups the points around the center point into that local area. The grouping methods have to calculate the Euclidean distances between each of the sampled points and all other points to determine which points should be placed in the local area. However, most of these calculations are not actually needed, and the same calculations are repeated in both steps. Moreover, FPS searches for the sampling points sequentially, which makes it difficult to parallelize the calculations. The study in {{cite:ac315c5ef2cb4565d923b178fbe07de66dd3c1b8}} shows that the time spent in data structuring can account for 88% of the total processing time.
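For reference, FPS itself can be written in a few lines; the sequential dependence of each pick on all previous picks in the loop below is exactly what limits parallelization. The toy points are invented for illustration.

```python
import numpy as np

def farthest_point_sampling(points, k, start=0):
    """Sequentially pick the point farthest from all points chosen so far.

    The running min-distance array makes each step O(n), but the k picks
    are inherently sequential: each depends on every previous choice.
    """
    chosen = [start]
    dist = np.linalg.norm(points - points[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                # farthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.0, 5.0]])
print(farthest_point_sampling(pts, k=3))  # spread-out subset: [0 3 2]
```

A subsequent grouping step (ball query or kNN) would recompute distances from each chosen center to all points, which is the repeated work the text criticizes.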
| i | cc900182330b77b44a327477695c8616 |
{{cite:1d45177ce477e7912a388f6ceada1d96b5fb8604}} has provided an important extension based on the partial linear model.
It is of great interest to extend these methods to accommodate general nonlinear models based on a nonparametric {{formula:7857198f-e3ef-4e8b-9157-7a7a436d9f9b}}. {{cite:c0863d0e90083b672e80c7b721728b442e692852}} surveyed many advanced approaches for mediation analysis based on general definitions of direct and indirect effects. Despite the serious efforts made in the literature {{cite:70a1273751c94a206e3d8e764a8d721478d321b3}}, {{cite:ef2f3f1dd099658369a15fc4d475977cfa339e7b}}, {{cite:33738b4e548657b62b55a969b1e57cf270115f13}}, general, sharp and interpretable sensitivity analysis methods are still lacking. This is an important future research direction.
| d | 143ae0eda0f8fe377e9fb021b6d15f0f |
In Fig. REF , we present some of the previous physical observables taking into account only the 1.47 k BPs that are in agreement with the above mentioned experimental bounds {{cite:3c5dbe17c98b727bc103b15fced9d58bdc18e9f9}}, {{cite:b8be1362ed6c9a3cd1e6d94b622b0d02fca26643}}, {{cite:1498e7605c4f8d98d6870ef7c4b7846fd0a7f96a}}, {{cite:ee1a81689bf27f866f364ef25b0d49788ad08855}}, {{cite:aa1915d28b7f96fd5dd01293f0fd6b09f4e9b031}}, {{cite:32484e0712dec3165fb8ece777bc981ebd88b066}}, {{cite:65a824bbf63da161290123d10178e4891dd78e79}}.
{{figure:7b829907-745f-41a1-b410-819a80df8908}} | d | 354393a911ffa21bdb291701ab800b9d |
Subsequently, we may decide to make an arbitrary unitary transformation to any other basis of Hilbert space. We may interpret the evolution operator exactly as dictated by Copenhagen. In general, the transformed evolution operator seems to handle probabilities and uncertainties in the usual quantum mechanical way, but now we also observe that both the initial state and the final state are superpositions of the ontological basis elements. If we interpret the absolute squares of the coefficient amplitudes as probabilities {{cite:c14455bcb59a46bc033d11cdc127b3e718855529}}, we get the complete story as given by Copenhagen. This is real quantum mechanics.
| d | 9b1e7b8b3d3ebff6cbecbea088ae80ed |
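The squared-amplitude (Born-rule) reading described above can be illustrated numerically. The sketch below is our own toy example, not code from the cited work: it builds a random unitary change of basis on a 4-dimensional Hilbert space (the dimension is an arbitrary assumption) and reads off outcome probabilities as absolute squares of the transformed amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# QR decomposition of a random complex matrix yields a unitary U,
# i.e. an arbitrary change of basis of the toy Hilbert space.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)

# Start from a single ontological basis element |0>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0

# In the new basis the state is a superposition of basis elements.
transformed = U @ state

# Born rule: probabilities are the absolute squares of the amplitudes.
probs = np.abs(transformed) ** 2
```

Because U is unitary, the probabilities automatically sum to one, matching the usual quantum mechanical bookkeeping described in the passage.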
Input Sentence: Each raw sentence from the dataset is presented to the proposed model one by one for further processing. Before that, each sentence goes through pre-processing steps: all URLs and code snippets are removed.
Tokenization: Each processed sentence is tokenized using the BERT tokenizer. Each tokenized sentence is zero-padded to a fixed length of 100 tokens; sentences longer than 100 tokens are truncated at 100. The output of this step is a tokenized sentence of size 100.
Embedding: For word embedding, BERT {{cite:2abd26091e4744b75782522535b7781d1de01583}} is used to turn each token in a sentence into a numeric vector representation. Each token is embedded as 768 real values via BERT. The input to this step is a tokenized sentence of size 100 and the output is an embedded sentence of size 100×768.
Pooling: To reduce the dimension of the feature map (100×768) for each tokenized sentence from the embedding step, max pooling is used, which yields a real-valued vector representation of size 768 per sentence.
{{figure:5008f6d0-c3b1-4285-bff3-476fbf87667e}}
Classification: Aspect classification is accomplished through transfer learning {{cite:30c5baa3e22041c9adc3c59f9064f1114cfd3270}}. To classify security and non-security aspects, a basic neural network with two dense layers (with 128 and 2 neurons) is used to fine-tune four pretrained transformer models one by one: RoBERTa {{cite:40ed971a2f10557b2809deee315d2ae9e8f4238c}}, BERT {{cite:2abd26091e4744b75782522535b7781d1de01583}}, DistilBERT {{cite:2b151832689a3dcc093a3e7c6f6d1f84b5ec3abe}}, and XLNET {{cite:23a92034b34173f00f59784e20aa426a781e3b5d}}. The weights of the simple neural network were initialized with glorot_uniform. A dropout rate of 0.25 is employed between the dense layers. Finally, we apply a `Softmax' activation to obtain the prediction.
| m | ffd11ed63207894d902f5c51be3ac8bd |
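The tokenize-embed-pool-classify pipeline described in this row can be sketched shape by shape. In the minimal numpy illustration below the BERT embedding step is replaced by a random placeholder matrix, and the untrained dense head (128 and 2 neurons, softmax) only demonstrates the tensor shapes, not the fine-tuned model from the row.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the BERT embedding step: one tokenized sentence of
# 100 tokens, each mapped to a 768-dimensional vector (shape 100 x 768).
embedded = rng.standard_normal((100, 768))

# Pooling: max over the token axis yields one 768-d vector per sentence.
pooled = embedded.max(axis=0)

# Untrained classification head: dense(128) + ReLU, then dense(2) + softmax.
W1 = 0.02 * rng.standard_normal((768, 128))
b1 = np.zeros(128)
W2 = 0.02 * rng.standard_normal((128, 2))
b2 = np.zeros(2)

hidden = np.maximum(pooled @ W1 + b1, 0.0)
logits = hidden @ W2 + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # probabilities over {security, non-security}
```

In practice the placeholder matrix would come from a pretrained transformer (e.g. via the HuggingFace `transformers` library) and the head weights would be learned during fine-tuning.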
In this section, we present the numerical results for the invariant mass distribution of {{formula:577a1e59-feb4-4c5c-afef-dd165c86ac65}} and {{formula:566e5d7b-0a6c-405b-9032-d2149281ec08}} of the {{formula:206bd0ae-b267-4652-aa24-65ec758a5352}} decay. To compare the theoretical invariant mass distributions with the experimental measurements, we introduce an extra global normalization factor {{formula:d7ea41d1-db27-43fd-a8fd-a98c86a8e932}} , which will be fitted to the experimental data. In Fig. REF , we show our theoretical results for the {{formula:279ee444-1279-45d9-a3fb-c5a2cd630903}} invariant mass distribution. The red-solid curve stands for the total contributions from the {{formula:71916304-cdd6-4686-a93c-cc8993a7be97}} state and the vector {{formula:48af7bd1-6fc7-4656-878a-c970089c3339}} meson, while the blue-dashed and green-dot-dashed curves correspond to the contribution from only the {{formula:9174f357-b65f-4b31-a1cc-b1b3674c1cdf}} and {{formula:34106417-ecac-4358-aac8-485dfa0bc7f9}} , respectively. The red-solid curve has been adjusted to the strength of the experimental data of BESIII {{cite:ab007ebefbfd2c9becbba624befed667c188e0cf}} at its peak by taking {{formula:b717dd15-66e8-4449-a1e4-130701dc8e72}} for both set I and II. One can see that the model results obtained with the parameters of both set I and II can reproduce the experimental data reasonably well, and the {{formula:d3b063d5-50a6-412e-ab4b-06aa2f577caf}} plays an important role around the peak of the {{formula:413aa5b5-828a-4e19-b414-044e376342d0}} state. It is clearly seen that the shape of {{formula:7e073c86-97c8-48af-9fe4-99c044719bc3}} in Fig. REF (b) is wider than that in Fig. REF (a). One reason is that, as shown in Table REF , the {{formula:291fc12c-ce4f-4770-9b53-f1be1fb3d784}} width of Set II is larger than the one of Set I. The other reason is that the loop function {{formula:39a060cf-8f61-4ab5-90dc-be9776f49b43}} obtained with the cutoff regularization of Ref. 
{{cite:d641551a9e279cbf7ddef9bcb3d4c22b048a75c7}} is smoother than that of Ref. {{cite:66ee4dc2230710ff17c09acbcd5110fc7a5e13e0}}, which can be seen in Fig. REF .
| r | 7842c5a0fea460acc0445e365a26507e |
where {{formula:beb3b0a6-a4ff-4612-9304-a486446bc531}} is a kernel parameterized by {{formula:f311f7ae-1a2c-4149-99cb-d6182c9bbd8e}} , and {{formula:f8ed8a38-ae00-4e36-abfc-70a1062c2bb5}} is a random probability measure with a Dirichlet process prior. See Chapter 5 of {{cite:56b3f18e7900d0e51dc0100f5ba4c2969e421170}}. We conjecture that the emergence, in the study of Bayesian consistency, of critical cases characterized by smooth or supersmooth functions leads one to consider very refined bounds for the Kullback-Leibler divergence, such as the so-called HWI inequality. See {{cite:8085e2f3794fdfd619445889e7713d2e27ae43e8}} and {{cite:78958abb7d8fcab82fd079c7cd0451619bad9116}}. Furthermore, the involvement of weighted Sobolev spaces on metric measure spaces is particularly appealing from a mathematical point of view.
| d | 447cb9fb84f4853a9fd2d12caa15be0c |
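The mixture model in this row, a kernel integrated against a random probability measure with a Dirichlet process prior, can be sketched with a truncated stick-breaking construction. The Gaussian kernel, the concentration parameter, and the truncation level below are illustrative assumptions of ours, not choices made in the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights for a Dirichlet process prior."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

alpha, n_atoms, sigma = 2.0, 50, 0.5
weights = stick_breaking(alpha, n_atoms, rng)
atoms = rng.normal(0.0, 3.0, size=n_atoms)  # theta_k drawn from a base measure

def density(x):
    # f(x) = sum_k w_k K(x; theta_k), here with a Gaussian kernel:
    # the finite-sum analogue of the integral against the random measure P.
    return np.sum(weights * np.exp(-0.5 * ((x - atoms) / sigma) ** 2)
                  / (sigma * np.sqrt(2 * np.pi)))
```

The truncation makes the weights sum to slightly less than one; exact samplers (e.g. slice sampling) avoid this, but the sketch conveys the structure of the prior.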
Redshifts of celestial objects have been a vital component in the field of astronomy, and we use them to measure various attributes such as the rotation of a galaxy and its distance from us. Traditionally, they have been measured by spectroscopy. While spectroscopy is effective in determining redshifts of galaxies, it is time-consuming and expensive, and therefore not scalable to mapping a spectroscopic redshift for every celestial object. Spectroscopic data are limited in the SDSS database, with only 0.36% of objects mapped {{cite:0faa048bc04d670967fc175369f8ffc726ab4e04}}. There are two methods in the literature currently deployed to determine photometric redshifts: template fitting and the statistical/machine learning method. We will focus on the machine learning method and also use deep learning methods to increase predictive power.
| m | 180c76cdc2767e6e0e11eff3b5f97248 |
Following other works {{cite:c2414039b7b2471934077f1d32293814da2bec1d}}, {{cite:accc97068cb592104b66dd066c6f65ed049f08e2}}, {{cite:3d12866c9f7c2d95752241102432ca69d492b0ef}}, we demonstrate that our model can handle complex images with a large number of classes.
We further evaluate our model on the COCO-Stuff 10K dataset {{cite:1ee5f7f9e43dfa057f0bd03f549358e3d2a0533e}}, which contains 9,000 training images and 1,000 testing images, as shown in Table REF .
As can be seen from the table, our proposed CAA outperforms all other state-of-the-art approaches by a large margin of {{formula:5580d16a-02ee-4317-86fa-8c64358512d3}} .
{{table:26ab6032-add0-468d-9d25-2c15a5cb4910}} | r | 637d7b0158a0cad4661fddba153df3d0 |
Trackformer {{cite:66692ebafba26cfab5e08dac7db78d8b0dd01da3}} is most similar in spirit to our MQT method. Their detection queries are analogous to our static det queries, and their tracking queries are analogous to our both queries. In the ablation studies (Tbl 5 (a); Tbl 3 in supp. material) we show the effectiveness of using multiple (and semantically decoupled) queries over the single tracking query paradigm of {{cite:66692ebafba26cfab5e08dac7db78d8b0dd01da3}}.
| m | 40324eca7269ebcfc6f1b9e08ae22466 |
Among the model-agnostic global FI methods, Permutation FI (PFI) {{cite:bcf071409468fd32a13f91dd46a429cba4bb4140}} is perhaps the most common approach. Its rationale is as follows. By randomly permuting the {{formula:ed3037fe-4a87-4ccb-ac3a-a0b4b787332c}} feature, its original association with the response variable is broken. Assume that the {{formula:02b443b9-cfd1-4040-bff6-b16a506f5398}} feature is associated with the outcome. When the permuted feature, together with the remaining un-permuted features, is fed to a given learning algorithm, the prediction accuracy is expected to decrease substantially. Thus, a reasonable measure of FI is the difference between the prediction error before and after permuting the variable.
The PFI measure relies on random permutations, which vary greatly between runs. Unfortunately, achieving a reliable estimate requires multiple permutations, which is computationally demanding.
PFI can be calculated both on the train-set and on the test-set, where each approach has its advantages {{cite:f9232d4920d424619e21c3c63be6fa38d91f4123}}.
The train-set can be used to estimate the amount of model reliance on each feature. However, model error based on train data is often too optimistic and does not reflect the generalization performance {{cite:f9232d4920d424619e21c3c63be6fa38d91f4123}}. Using the PFI on the test-set reflects the extent to which a feature contributes to the model on unseen data, but may not reflect cases where the model heavily relies on a feature (see Section ). A key problem with PFI is in cases where the dataset contains correlated features.
Here, forcing a specific value of a feature may be misleading. For example, consider two given features, Gender and Pregnant. Forcing the gender variable to be Male might produce a pregnant male as a data point. Further, it was shown that correlated features tend to "benefit" from the presence of each other, and thus their PFI tends to be inflated {{cite:df6654d86522e7263f54bbc1789196e196d85736}}.
| m | 2399d55c969567b3e371e39db88c8c54 |
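The PFI recipe described above (permute one feature, re-score the model, take the error difference) can be sketched on synthetic data. The least-squares "model", the three features, and the coefficients below are our own toy choices for illustration, not from the cited work; the third feature is pure noise, so its importance should be near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))
# Response depends strongly on feature 0, weakly on 1, not at all on 2.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

# "Model": ordinary least squares fit on a train split.
train, test = slice(0, 1000), slice(1000, None)
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

def mse(Xs, ys):
    return np.mean((Xs @ coef - ys) ** 2)

base = mse(X[test], y[test])
importance = []
for j in range(3):
    Xp = X[test].copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-response link
    importance.append(mse(Xp, y[test]) - base)
```

Computed here on the test split; as the row notes, the train-set variant measures model reliance instead, and in real use the permutation would be repeated and averaged.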
With much higher statistics, the STAR collaboration has not only repeated measurements at 200 GeV {{cite:91d1ec358d9502af004b8acb2de09e7e4a84acaa}}
but also started a systematic study of fine structures of GPE, such as the dependences on centrality, transverse momentum,
and rapidity, and also for different hyperons {{cite:91d1ec358d9502af004b8acb2de09e7e4a84acaa}}, {{cite:625460c550726999fbb3a35004273047946aaf4c}}; high-statistics measurements during the BES II experiments are also available {{cite:d7307a4c60db1a8c4a00dd303399c87bfb685f88}}.
Besides, the ALICE Collaboration has carried out measurements at LHC energies {{cite:58b5575ea25898f1e38dce24b22df924b945c795}},
and the HADES Collaboration at GSI has released preliminary results in the low-energy region {{cite:d2e9df84b04ee1e1595d030319fa80ab8b10a9a6}}.
Measurements at other facilities, such as NICA at JINR, are also planned.
All measurements of global hyperon polarization available to date confirm the energy dependence of GPE observed by STAR in Ref. {{cite:136b28d85c422e5b0219452cfb5a5f22bb29bc58}}.
| r | 8686ed172625c424d0aabd7a4d5ff26c |
Some methods try to generate the formal query to produce the answer directly using neural networks. In {{cite:4ad8e1ae73db1d4a4f9e154de011c8f8285a88b2}}, a recurrent neural network model directly predicts the head entity and relation for simple questions. Using the head entity and relation, the answer entity can be predicted. Other methods describing the direct generation of formal queries (in SPARQL) using neural network architectures have also been described {{cite:d846bf7ec3943779f9b14deb09c969e11cb30156}}, {{cite:f4036c360bde66313774c552c97202ed8d76afc9}}.
| m | f2a1d3634595eff0aa968907156c50d9 |
Note that our problem is fundamentally different from the conventional bandwidth allocation and delay assignment problems reported in the literature of computer networking {{cite:1839b48ad3e172c89312d40f52b31cd0e58e3c70}}, {{cite:980cbb8e71e840e2424e13507c3e87b9bfb2a377}}. The utility functions in these papers are static, and do not include any plant dynamics like ours. We illustrate the effectiveness of our algorithms using simulations in Sec. and . These simulations highlight the impacts of delays and sparsity on {{formula:372c2cfe-cedc-48ce-b2dc-df395f7206b9}} -performance and provide important insights on their interdependencies.
| i | 3d49feed50a5bcd88d5ca8bb74c9aad8 |
With the ever increasing size and complexity of datasets and applications, there will always be a demand for more reliable and efficient defense methods that are scalable. So far the trend is that many defense methods were shown to have excellent empirical properties at the time of their creation, but then very often they were soon beaten by newer attack methods {{cite:52f654b7b34109660bd313f9fef7d26a395267e4}}. This suggests that defense methods cannot be blindly trusted. One ambitious project is to formally define the notion of robustness in this attack and defense context, which includes attaching a robust score to any defense method; i.e., the higher the score, the more reliable the method. This is similar to the use of breakdown point in classical robust statistics. It is likely that there will be different reasonable definitions for robustness, each designed for one type of attacks. If a new method achieves high scores for different robustness definitions, then we have strong confidence that this method will work well.
| m | e4ab9aef502ef5ccfb15a113c0377078 |
We also compare our results with state-of-the-art embedding approaches such as BIER {{cite:5b0e9ff3107db74a2bd4b189c0e1106280bcf80b}}, ABE {{cite:e77b5cfa49078f0340c32b5a7daf20c28cd8e8a6}}, FastAP {{cite:7d0b6e3e837e46f973de2bb2a3f1791da003c654}}, Multi-Similarity {{cite:34ec892a16beb3ae2eee01ca58de91f222da9902}} and Easy Positive {{cite:e85ec97c01b11af2a110898e4342ccbba693828b}} on the SOP {{cite:fedbcf201541e93825d158c09c3f59eb6a756f11}} and In-shop {{cite:c0c3f6ec97a7a2c72639f3f7d2435e26250ad628}} datasets.
Table REF shows that the SC-triplet loss outperforms the best previously reported results on the SOP dataset.
| r | ff64a1d49348a03ae5ed514df737e110 |
Theorem 12.4 {{cite:220a1f67c5f6b06ad1e6297df733afeb5bd118c9}}, {{cite:35ed07377e2fe5e70e99f1523665ea9a2e39c6a4}}, {{cite:4197c1a915cbc563799944e6b64447d2a9486bcf}}
Let (REF ) be irreducible and cooperative in some set {{formula:95395a67-b8ab-4141-92f9-1fa3396c004d}} . Then the solution {{formula:74c7452e-cb7a-421a-9410-a9b1834fbd0d}} (restricted to {{formula:5215a4ef-f75f-4e3b-b0f3-2b71852ebd1c}} ) is strongly monotone. {{formula:b0d33ef1-202b-47c6-9705-28369bd15693}}
| r | 9bcbff4ff57d4900b2290c01b17e0b65 |
Self-supervision-based pretraining methods take more time than the supervised AT method, since self-supervised pretraining approaches typically require more epochs than fully supervised methods to converge.
Among the self-supervision-based pretraining approaches, AP-DPE {{cite:d04d77de09722545966da60dfd3a37c099ebddc8}} incurs the highest computation cost, as it resorts to a complex min-max-based ensemble training recipe. Compared to the contrastive learning-based baselines (RoCL and ACL), ours (AdvCL) incurs a higher computation cost. This is because: (a) the contrastive loss of AdvCL uses more than two image views, and (b) AdvCL adds a pseudo-supervision regularization. However, the pretraining procedure can often be conducted offline. Thus, the finetuning efficiency (via SLF) still makes AdvCL advantageous over the other baselines at preserving model robustness from pretraining to downstream tasks.
{{table:97e7e3c9-6763-46f7-8d67-33f8dd2d1dc3}} | m | 4469b32d96656bc86631c1edd5d13449 |
To investigate the effectiveness of the proposed LBS on lightweight models, we apply our method to MobileNetV3 on CIFAR-100. Following LSQ+ {{cite:af938ca390c51eca07dc661001770974ef60e8eb}}, we introduce a learnable offset to handle the negative activations in hard-swish. We show the results in Table REF . From the results, our proposed LBS outperforms the compared methods with a much lower computational cost, which demonstrates the effectiveness of the proposed LBS on lightweight models.
| r | b6d9681e5565a75d269326bf8e80f261 |
Some seemingly obvious solutions are not always effective. For example, buying the most accurate data from all users may exceed the payoff for making the correct open/cancel decision, resulting in a negative profit. Alternatively, spending a fixed, prespecified amount on purchasing raises the issue of predetermining that fixed amount. Therefore, there is a need for an adaptive approach to this problem.
Other studies have attempted to find adaptive solutions to similar problems. However, they either considered a simpler version of this problem {{cite:d94a670c4f0f7d3c0f7125cc6d2c1586c44c37d4}} or had different objectives {{cite:074229b98cad5e0ef93c853a3833c64bce75f924}}, {{cite:697ac30d79277276877c8268d0364d9328cee0ca}}, {{cite:b856f67f0faa1fca8c3d3a5467c6847ac94ca9eb}}, {{cite:2c98f95987c5c705d157a113429ba93dee05098f}}, thus, resulting in inapplicable solutions.
Finally, approaches for location privacy protection such as spatial cloaking {{cite:48583f38ad9cdffa4329007a45bb946c680a74ae}}, differential privacy {{cite:ad96fdbeddbd2c089d67ae9c9d05b77bcca5e2de}} or Geo-Indistinguishability {{cite:0f8a85b5f171afcbc357d44f83c685d58a703709}} are relevant but orthogonal to our work, as we discuss in Section REF .
To the best of our knowledge, we are the first to consider the interplay between privacy, utility and price in a data marketplace, particularly in geo-marketplaces, with the focus on the profit of buyers.
| i | 46d150b22dfbcbd6493242101e86c571 |
Matching evaluation. We compared our network with the Predator network {{cite:6e062908dac9ff7e3586bd144444cf807b498163}}, the closest method to our proposed framework. To adapt the network to deformable scenes, we used ground truth matches to supervise its loss functions instead of the ground truth rigid transformations. The top-k sampling method in Predator was used to select the best candidate source points to match the target points. Finally, the feature descriptors of selected source points and target points were matched with the same matching method we used in LiverMatch.
{{table:14afe428-9d12-458c-a161-7dc72c5faf8d}} | r | 0ee92e32fba1ee16825c32ce2c7ee7c7 |
Inspired by the success of gradient-guided attack methods (e.g., FGSM {{cite:e75c62d0e6884e5f4d4125f19f4bedfc1f0e83bd}}) on natural images, we first generate the adversarial point clouds by pointwise gradient-guided perturbation. Given attack budget {{formula:e499168d-165e-4629-96a8-17973be64dc9}} under the Chamfer distance (Eq. REF ), an adversarial sample is obtained by
{{formula:ce2b39c0-1586-4c97-92e6-96adbb216fe5}}
| m | 6b5039d1d98a03019d01716c5a1fffb0 |
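A pointwise gradient-guided perturbation of the kind described here can be sketched with a toy differentiable loss standing in for the point-cloud classifier. Note two assumptions of ours: the budget below is a per-coordinate cap rather than the Chamfer-distance budget used in the row, and the "loss" (centroid distance from the origin, with an analytic gradient) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.standard_normal((1024, 3))  # a point cloud, N x 3
epsilon = 0.05                           # per-coordinate attack budget

# Toy differentiable "classifier loss": squared norm of the cloud centroid.
# Its gradient distributes the centroid direction equally to every point.
def loss_grad(p):
    centroid = p.mean(axis=0)
    return np.broadcast_to(2.0 * centroid / p.shape[0], p.shape)

# FGSM-style pointwise perturbation: one signed gradient step per coordinate.
adv = points + epsilon * np.sign(loss_grad(points))
```

With a real network, `loss_grad` would come from automatic differentiation of the classification loss with respect to the input points, and the budget check would use the Chamfer distance of the row's Eq. instead of the elementwise cap.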
This result is completely consistent with the 2020 White Paper
estimate of {{formula:94352749-bede-4232-987c-1f61d9725cd8}} {{cite:3f1d03c2eda8fa4d9c24b02584cfa136ad2e6396}}, and has half its uncertainty.
| r | 19b7801f0ddb0109c12a801caf543fe0 |
We applied RSA to study the ability of different neural network models, multi- or uni-modal, to explain the fMRI activity patterns in various brain regions. Based on recent findings {{cite:9751ddf5e641f386e9ec4a7d6add56c2927be2e7}}, our hypothesis was that CLIP (and similar multimodal networks) would be specifically adept at explaining brain activity in the hippocampus, where `concept cells' are found. This hypothesis was supported by the data: the multimodal nature of a model was a key component in explaining the activity in the human hippocampus, a trend that proved robust to different methods of voxel selection and distance metrics.
| d | 78ac08e8db2177918827aa1188471a29 |
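RSA as used in this row reduces to comparing representational dissimilarity matrices (RDMs) across systems. The sketch below uses synthetic "brain" and "model" activity of our own making (the model is a random linear readout of the brain patterns, so the two RDMs should agree): it builds correlation-distance RDMs and scores their agreement with a Spearman correlation over the upper triangle.

```python
import numpy as np

def rdm(activity):
    """RDM: 1 - correlation between activity patterns of every stimulus pair.
    `activity` has shape (stimuli, features)."""
    return 1.0 - np.corrcoef(activity)

def rsa_score(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs
    (ranks via double argsort, then Pearson on the ranks)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ra = np.argsort(np.argsort(rdm_a[iu])).astype(float)
    rb = np.argsort(np.argsort(rdm_b[iu])).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
stimuli, voxels, units = 20, 100, 512
brain = rng.standard_normal((stimuli, voxels))
model = brain @ rng.standard_normal((voxels, units))  # model mirrors the brain

score = rsa_score(rdm(brain), rdm(model))
```

In an actual analysis, `brain` would hold fMRI voxel patterns from a region such as the hippocampus and `model` the network embeddings of the same stimuli, with the distance metric varied as a robustness check, as the row describes.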