This variability can be attributed to the known instabilities of large pre-trained language models {{cite:1cd790f9955556830827f7abc6aadce8d33d1ead}}, but this does not explain everything. Choosing the right set of data points is another extremely important factor in few-shot learning, one that calls for better selection of training data for pre-training and fine-tuning {{cite:c7e48452609647ce3b7d49facf7718da998b2f68}}, {{cite:dcf20f51cfefe662729282440c45d6ca4bd96c26}}.
Figure REF shows the total energy consumption and training speed normalized by the energy and speed of 2-GPU training. The DimeNet GNN shows poor scaling relative to the BERT NLP model and the ResNet CNN model. (We note that the authors of DimeNet have released DimeNet++ {{cite:326505a9c9e876a488becb005222e58b3c7d2eb6}}, an optimized architecture with faster training that may alleviate some scaling issues; however, DimeNet++ is not yet available in PyTorch Geometric and so was not considered in this study.) We observe similar behavior for the SchNet GNN (not shown in Figure REF for brevity). The total energy consumption of DimeNet training quickly escalates to more than three times the baseline when increasing the number of worker GPUs, but this energy cost comes with the advantage of a {{formula:ebbc7d76-8dfe-42c5-80d4-f782ff56489e}} speedup. On the other hand, both the BERT and ResNet models remain energy-friendly when trained at larger scales, incurring negligible amounts of additional energy while still achieving more than a {{formula:b906c8e6-fc2b-4c4d-847b-e772a094eea4}} and {{formula:b0687f31-d870-4ff6-9750-6f093074c4f7}} speedup, respectively.
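The normalization described above can be sketched in a few lines; the measurements below are illustrative placeholders, not the paper's numbers.

```python
def normalize_runs(runs, baseline_gpus=2):
    """Normalize total energy and training speed of each run by the
    baseline run (here the 2-GPU configuration)."""
    base = runs[baseline_gpus]
    return {
        gpus: {
            "energy_ratio": m["energy_kj"] / base["energy_kj"],
            "speedup": base["time_s"] / m["time_s"],
        }
        for gpus, m in runs.items()
    }

# Hypothetical measurements: total energy (kJ) and wall-clock time (s).
runs = {
    2: {"energy_kj": 100.0, "time_s": 1000.0},
    8: {"energy_kj": 320.0, "time_s": 400.0},
}
norm = normalize_runs(runs)
# The 8-GPU run here costs 3.2x the energy for a 2.5x speedup.
```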
{{figure:67ac321f-798d-4fe4-8b2e-222e50c40a5a}}
Motivated by the powerful expressive and approximation ability of deep neural networks, and in particular the great success of convolutional neural networks (CNNs), deep learning based methods have become the mainstream in crowd counting and have made remarkable progress {{cite:27c2686566afc917094fb028406cfb6e2b2cd4e1}}, {{cite:743206b131997c877a56389e4e83690dcfd08cf8}}, {{cite:4bdd55af1502730800af9308b5723e7fc48c613d}}, {{cite:a1f06c664b75b9d2579aa2a822b53ac175c212e4}}, {{cite:27a585835eba589eb38e0b4768a593dd931a599e}}, {{cite:774c54a499dfe79359fd75ba0cf3efd309676587}}, {{cite:10c53880dcfd1fad600d984ae618c61ea50d60c7}}, {{cite:dfbfd5b03c6c67b43dda43a5b23880de06d6bd65}}, {{cite:c0a1cb33c96e5ee747d01fa0b96a56092a3c3050}}, {{cite:609d1e88e499b3a682a40971e3e7db1af4501417}}, {{cite:2fc5ae7070fc15b55ebed49005b77e2537edc53d}}, {{cite:c684aa08abc3040476fe342844aafba39daf4d80}}.
To achieve better performance, most state-of-the-art methods {{cite:27c2686566afc917094fb028406cfb6e2b2cd4e1}}, {{cite:743206b131997c877a56389e4e83690dcfd08cf8}}, {{cite:a1f06c664b75b9d2579aa2a822b53ac175c212e4}}, {{cite:27a585835eba589eb38e0b4768a593dd931a599e}}, {{cite:774c54a499dfe79359fd75ba0cf3efd309676587}}, {{cite:dfbfd5b03c6c67b43dda43a5b23880de06d6bd65}}, {{cite:c0a1cb33c96e5ee747d01fa0b96a56092a3c3050}}, {{cite:609d1e88e499b3a682a40971e3e7db1af4501417}}, {{cite:2fc5ae7070fc15b55ebed49005b77e2537edc53d}}, {{cite:c684aa08abc3040476fe342844aafba39daf4d80}} use heavy backbone networks such as the VGG networks {{cite:fff2bce1841c5f53446c9cae9e5247d82238d703}} as feature extractors.
Although these heavy crowd counting models can achieve satisfactory performance in estimating the crowd count, their impressive performance comes at the expense of large computational cost and hardware burden {{cite:da8d466fe0a436f39c13bf25594bb726c20a0c73}}, limiting their wide use in real-world applications and causing poor scalability, particularly on edge computing devices (https://en.wikipedia.org/wiki/Edge_computi) with limited computing resources.
{{figure:237cb94b-6212-40cf-a76b-549a4f5bac3c}}{{figure:5112fa6a-9a33-4727-9b07-23512b98d4a0}}
is small, where {{formula:7171dd6a-4574-4b86-ab17-109373624268}} is a pre-defined value and {{formula:0e225536-e754-4234-a6a3-94cb1b270aec}} is some loss function {{cite:82c8216e773f20963d65d3a564aadf6defa8d030}}. This simplification eases analysis and implementation. However, robustness to small {{formula:5cbf7958-be51-4548-9abd-4b43c8099ca4}} perturbations is very different from robustness to human-imperceptible perturbations {{cite:853b44b03377c92d824cb17dda00ea366f27c386}}. A human-imperceptible perturbation may not have a small {{formula:4ae940d3-e04d-4c4f-8bb7-8b3b1362890e}} norm {{cite:40393dc7ed6e23f91208bacca2b714f5acab3933}}, {{cite:91560e59e40096dcb22d476c12da94e383acdfd7}}, and a perturbation with a small {{formula:d15ba141-29f9-485b-af8e-fd549a4dfaa2}} norm may not necessarily be imperceptible to humans {{cite:7077c11308dff430be82b39b45201eabb352269f}}. In Figure REF , inspired by optical illusions, we show an example of the difference between some {{formula:e3be26c9-0f23-408c-a11c-6cc4c3477c1b}} distances and human perception. This difference makes current “adversarially robust” models easily broken by newly designed attacks. Besides {{formula:f9388e98-7bc2-4d3b-8bc9-43e29a92baac}} distances, other measures, such as the Wasserstein distance {{cite:f8ce3ed2fcd36bbd184cecb2df2a53fa66288fea}} and structural similarity (SSIM) {{cite:b58e177461eaba994badb25306723b1b91c05c51}}, have also been shown to differ from human perception {{cite:853b44b03377c92d824cb17dda00ea366f27c386}}.
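The mismatch between norm-based distance and perception can be illustrated with a small synthetic sketch (our own example, not from the paper): a one-pixel image shift is visually negligible yet has a far larger L2 norm than a faint additive perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))

# A one-pixel translation is essentially imperceptible to a human eye...
shifted = np.roll(img, 1, axis=1)
l2_shift = np.linalg.norm(img - shifted)

# ...while a tiny additive perturbation has a very small L2 norm,
# even though norm-wise it is the "closer" change.
noise = 1e-3 * rng.standard_normal(img.shape)
l2_noise = np.linalg.norm(noise)
```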
{{figure:6e2ec9cf-53f7-439d-9ff7-0f67807cd453}}
We evaluate all methods using three metrics, i.e., {{formula:72b10cfa-e7a0-4ac9-b2cd-3880e21650ad}} , {{formula:46098c7b-70de-4066-ade1-b4526a9d98ed}} and {{formula:483b1b49-674b-45b3-8faa-aa817fce4b95}} . {{formula:60754b3a-c744-4487-a5ec-62986fffba47}} ({{formula:0375eae7-a847-4496-a179-bba244c0290c}} ) denotes the classification accuracy of base (novel) samples with the label search space being the joint space. {{formula:28175489-4955-4357-a7db-f2b4bfccecd7}} is their harmonic mean, i.e., {{formula:d7fce528-27cf-4132-990c-c29a5de7bc0f}} . It describes the ability to balance between the base and novel domains. We choose the harmonic mean as the main criterion to favor high accuracies on both base and novel classes. We compare our proposed method with five other few-shot learning methods on the 100-way GFSL task. MatchingNet {{cite:26ee5933cd5b97238190fac825b69b27fb14286d}}, ProtoNet {{cite:1cffbee1ce672a6fa645e0cddd7ddd70208767a8}}, RelationNet {{cite:41019cf0cfff0e2f590cab762147cdfadc4d2604}} and GCR {{cite:e5bc142cfa566d06b6e26d32934c4a5b4f67a04a}} have the same experimental setting as our method. The model parameters of FEAT {{cite:2f6b806751b87b01119683cbea2c0951da2759c6}} are downloaded from the GitHub link provided by {{cite:2f6b806751b87b01119683cbea2c0951da2759c6}}; it trains and validates the model on the full base classes rather than the new base classes. As shown in Tab. REF , ours has significantly better performance on the 100-way GFSL task.
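The harmonic-mean criterion can be computed as follows (a minimal sketch; variable names are ours). It rewards balanced base/novel accuracies over lopsided ones with the same arithmetic mean.

```python
def harmonic_mean(acc_base, acc_novel):
    """Harmonic mean of base- and novel-class accuracies: it is high
    only when both accuracies are high, so it measures balance."""
    if acc_base == 0 or acc_novel == 0:
        return 0.0
    return 2 * acc_base * acc_novel / (acc_base + acc_novel)

balanced = harmonic_mean(0.6, 0.6)   # ≈ 0.6
lopsided = harmonic_mean(0.9, 0.3)   # ≈ 0.45, despite the same mean
```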
{{table:4f83a5e4-735e-4147-b71e-217cbffcf007}}{{figure:50b964e0-ab7b-4b17-a18c-357377cf8997}}
Colour Normalization: Because the images come from different sources, varying lighting conditions change the visual appearance of the skin lesions. Traditional conversion to grayscale would lead to a significant loss of input information, so we opted for colour constancy algorithms. This topic has been covered in a number of papers {{cite:a061150633bcb4fd8b6f5ff9b2de579da2a5b9be}}. In our work, we tested several methods and decided to use max-RGB {{cite:fa77a8494d1622d30fa6c6141b764f516ce861bd}}, which was also used in a recent study {{cite:314010b406453d008c70d00d989cb789c0fca48a}}. The max-RGB method is based on the assumption that the reflectance achieved in each of the three channels is equal {{cite:44f688c58eab096a4b60603ddc25f0f504e147d3}}. An illustration of this method is presented in Fig. REF .
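A minimal sketch of the max-RGB assumption (our own illustrative implementation, not the authors' code): the maximum response in each channel is assumed to correspond to a white reflectance, so each channel is rescaled by its maximum.

```python
import numpy as np

def max_rgb(image):
    """Max-RGB (white-patch) colour constancy: rescale each channel so
    that its brightest value maps to 1, removing a global colour cast."""
    img = image.astype(np.float64)
    maxima = img.reshape(-1, 3).max(axis=0)  # per-channel maxima
    maxima[maxima == 0] = 1.0                # guard against empty channels
    return img / maxima

# A reddish cast (R twice G, four times B) is neutralized.
cast = np.array([[[200, 100, 50]], [[100, 50, 25]]], dtype=np.uint8)
balanced = max_rgb(cast)
```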
{{figure:b31bc668-aa51-4df1-8570-45976cca1ee4}}
The next lemma summarizes comparison theorems
for Gaussian and Rademacher averages, see
{{cite:8cc2accd9b6bd520678dda3306382d245d103123}}.
It will facilitate generalizing
the equality (REF )
to an upper bound for {{formula:3a895ace-3635-43e0-a73d-becd21a8e021}} -norms of
random variables of the form
{{formula:67f1c145-7799-4e51-9aa7-9584623c327b}}
The key challenge in our problem setting is that we must optimize the parameters of the encoder and the conditional NeRF MLP in the decoder from a collection of unposed single-view images. Inspired by earlier work {{cite:23e094a3263d0dd8120302660015b158175614fe}}, {{cite:42acafc00a69227d0b474b1bc6272433e345ad27}}, {{cite:aa82a4801fa43338ca4db8b4f32ee93282bfded2}}, {{cite:83aac32d7094bef9195ead77b0931a405fd263d4}} in this space, we propose to use meta-supervision that is present in our knowledge of the 3D world. We combine a differentiable rendering loss on the input view with an adversarial loss on the novel views, together with the geometric consistency that is inherent in the conditional NeRF to train our model. Furthermore, we leverage other real-world knowledge such as scene box, object symmetry and cycle camera pose consistency loss to optimize our model.
Here the energy release by {{formula:ce28c8ce-3cec-4b24-9239-ea0a9097ad00}} O burning is {{formula:78bb62a5-cc1c-4c29-a795-bc72308e1122}} per nucleon {{cite:9287c776cd47f78028eb70f771cc12b44c34e7b8}}, {{cite:1b82394e7c4387758b132d58ae8ce5a5ef44e78a}}. The contraction timescale of the helium
core is the dynamical timescale. Therefore such an amount of {{formula:65ece269-4c80-4b6e-9fad-9c490bc5f6ba}} O will
be burnt in
{{formula:86c6837e-1c2e-4fd4-af09-5928f898b5d3}}
This brings us to one of the future directions. We would like to augment or replace VTP with pruning techniques that explicitly consider the importance of a group of weights (a dimension in VTP is one type of grouping) for final accuracy. For example, Taylor pruning {{cite:92080eea6d29491443f17ea61f726778951a4b84}} estimates the importance using a first-order Taylor approximation of the error incurred when a group of weights is dropped. It also removes the need to threshold mask values during inference, which we observed to cause severe accuracy degradation for VTP.
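A hedged sketch of the group-importance idea behind Taylor pruning (illustrative numpy only; the real method operates on network weights and gradients during training): the first-order importance of a group is the magnitude of the predicted loss change when the group is zeroed, |Σᵢ gᵢ·wᵢ| over the group.

```python
import numpy as np

def taylor_importance(weights, grads, groups):
    """First-order Taylor importance of each weight group: the absolute
    value of the summed gradient-weight products within the group."""
    return {name: abs(float(np.sum(grads[idx] * weights[idx])))
            for name, idx in groups.items()}

w = np.array([0.5, -0.2, 1.0, 0.01])
g = np.array([0.1, 0.3, -0.05, 0.2])
groups = {"dim0": [0, 1], "dim1": [2, 3]}  # hypothetical dimension groups
scores = taylor_importance(w, g, groups)
# dim0: |0.05 - 0.06| = 0.01 ; dim1: |-0.05 + 0.002| = 0.048
```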
We have studied the quantum integrability and chaos of a {{formula:e6cb42d1-52cb-4310-aaf3-3e0acf533803}} d chiral SY model consisting of N copies of the SU{{formula:c72ea350-d9ad-4b86-8750-1f568a660a98}} chiral WZW models with chiral current-current interactions among each other with coefficients {{formula:074699ab-fbba-4954-ba03-08620430d1ff}} ({{formula:e426e679-fa23-454c-91d0-cf42e66c1e87}} ), which host Abelian anyons as charge excitations. This model gives a minimal {{formula:7f4e953f-cbe5-48a6-9d80-07acdd80c796}} d generalization of the {{formula:224fbb42-e9e2-4ddf-92fe-93f9a54c9353}} d SY model {{cite:c8a00b4425168dc59f22fe9239c95b69567706e5}}, a spin model of {{formula:628b4bf9-0d10-489a-a977-4671a0a63e91}} coupled SU({{formula:a5600050-5d1c-4f82-a44b-040a69ec4e23}} ) spins. We have shown that the {{formula:a35c3a96-14e4-4abb-9395-5777f8e6f93c}} d chiral SY model we studied is integrable for any {{formula:04e2cd7a-8350-4e94-b004-8992aa39c80c}} and {{formula:8ef459cc-a257-4c48-a2e8-4f385429e22f}} when the interactions {{formula:fcc975be-3dd3-4df1-a7a8-f74d6fb7f59d}} are uniform (independent of the indices {{formula:8cf2d7de-0dcd-49f6-a229-8898ef3cf36c}} and {{formula:28115389-6d6f-417a-9773-8fbfa57cb584}} ). In contrast, it is quantum chaotic in the large {{formula:cb60b1de-fe51-47c8-8a41-2956e04f2fd7}} and {{formula:0d4038f8-0865-413f-9ff4-b5a998c0b0fa}} limit when the interactions {{formula:f28388ed-3a7f-4616-8eb1-14fd63cd2dea}} are random in the indices {{formula:9e612daa-be10-439b-ad58-28533c1ea2f1}} and {{formula:7c6c7222-5bd9-4077-ae44-6e72fae8042c}} . In particular, this model allows us to investigate the quantum chaos of {{formula:9133c56f-9745-4632-8b25-a49de51bb077}} d anyons.
In physical systems such as the edges of FQH systems, integrability or chaos are expected to significantly affect the interference of the edge states in edge interferometers such as the Fabry-Pérot geometry {{cite:1425d31da9eaa94d7a41aefb120910d756b0393b}}, {{cite:c3705ef49ed48f9af2b293d971fc384a53928fbe}}, {{cite:14abc87036b6f4c4ad52470cfc4769ef91b41c75}}, {{cite:6b8b4a5a4e9820c31948f6c1b7e05b6422bb6a76}}, {{cite:bf14452a2da6606ed5dd41b58fbc3c51cb87b119}}, {{cite:6393d613fe23d54d0af72939e10131daad1f63d5}}, {{cite:16329d5cf14391591baf82897e21e126dd50ed65}}, {{cite:b0a4481ac540ea87aa22fff8621bf4de04aa97b8}}.
The {{formula:c7fa085d-fed0-4efb-a766-a084115bec8f}} branching fraction {{cite:3704b27aaa6902ca7d8852beb0267d3b84247c61}}, measured by BaBar {{cite:10fbc21b0e79cea05c1f9008abd3d9bfb7bce8a5}} and Belle {{cite:493f08453dc847a257622eadea9dcff2fd477079}}, is very sensitive to contributions from a charged Higgs boson.
The PDG average {{formula:6cc79295-db1e-4b81-9c4d-c80ffea4b0cd}} {{cite:97131d2eb8bb18f954ec48db431ff391a958c4cc}}, {{cite:89690e7ea8cf7ca61750fb3392d89f2eebfc3a87}} is larger than the SM prediction of {{formula:8f0a0384-3846-484f-a845-7ee1eaf5c61f}} {{cite:ef66655635f1be2a65762d86d13e5f7741b48b2e}}. Even with these high values of {{formula:2ea307d0-e50a-42f5-95e8-fd261cae26c5}} , we obtain a sizeable allowed {{formula:50a71a91-9f53-426c-bda1-34734ae5eafb}} region; there is no conflict with the SM. Belle has now presented a new measurement of {{formula:2ed8f3fd-67bd-4c3d-8454-f545521a42cb}} {{cite:53f93a7f518b864884befb4ddda9467c59343faf}} that reduces the world average to {{formula:295294cc-6024-482c-ab36-585624a25338}} . Figure REF shows our results in the {{formula:d92daf43-fbe2-4c23-9e38-38a6c5a78e75}} plane for this world average and Table REF summarizes the {{formula:3608ab05-9c62-413d-981f-57faf9296acc}} ranges of unitarity parameters. The inclusion of the present {{formula:384fddce-ca9b-4edf-92aa-63157fc542ac}} world average has hardly any impact on the {{formula:08c86e6e-64b4-4922-aca9-da8ead766960}} plane.
{{figure:15c62c24-1e6e-4525-8dd5-31f05ce450db}}{{figure:c2bf1f12-ea4e-4111-92b1-65607c486ca3}}
The 5 different neural networks were trained over 3 random seeds, and the best overall performance for each one of them was selected. The evaluation procedure is similar to the AutoML benchmark in {{cite:20b0074da92960defa6b2a5eb682808788c8eb1c}}, {{cite:25d397c362ddf1fb676f937b5172ffbc7e1883ee}}, and the benchmark results are shown in Table REF where the area under receiver operating characteristic (ROC) curve (AUC) and the accuracy (ACC) are reported as evaluation metrics. A full comparison with the classical benchmark provided by {{cite:20b0074da92960defa6b2a5eb682808788c8eb1c}} is given in (Appendix , Table REF ).
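The AUC metric reported above can be computed with a small rank-based sketch (our own helper, not the benchmark's code): AUC is the probability that a randomly chosen positive example outscores a randomly chosen negative one, with ties counted as half.

```python
def roc_auc(labels, scores):
    """Rank-based AUC: fraction of positive/negative pairs where the
    positive receives the higher score (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # → 0.75
```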
{{table:4d2f41b2-fc17-4d74-a51b-b3f889114f59}}
We have used our unitary trilinear Hamiltonian model to investigate issues of entanglement between various bipartite divisions of the modes involved, both at short times, when the energy in the BH is large and roughly constant, and at late times, when the BH is evaporating.
Specifically, we have examined the entanglement (via the log-negativity) across the BH horizon between the bipartite division of the (pump, idler) and signal modes, as well as the entanglement between two separate sets of pairs generated by the pump. We find entanglement between the emitted region {{formula:fc33969c-d361-4e57-a62c-ef8bcd86114c}} pairs {{formula:47323198-5d37-40ae-b7be-3f5b8e3426bf}} and {{formula:6a88e50c-160e-42ea-8af8-46b7ae363b97}} through the common BH source 'pump' mode {{formula:6982bca3-1ba6-4d00-8f15-a233795c22a9}} . If one traces out all modes except the particle and antiparticle ({{formula:d054ad39-ce3e-4eb0-9a93-640a706d3f68}} in {{formula:9ca512d9-1d9d-4295-9cc5-40303447f9d3}} in this work) in region {{formula:86d8515f-9bf6-421c-b3ba-889b22046f1a}} , we find a separable density matrix. We have also shown, both analytically and numerically, that this dynamical BH-as-PDC model reproduces the long-held conjecture that the (Page) information {{cite:d43c65daf2c4d05d256d7e6f22e64d1565924c6a}}, {{cite:bafa0051eaecfd35b3bb5d52917547f513500712}} essentially emerges from the evaporating BH once it has transferred roughly half its population into the Hawking radiation, precisely because of the thermal-state deviations that arise in the outgoing radiation at late times.
In this work we also interpret this Page time as the time when the variance of the BH `pump' population becomes
equal to the variance of the population of the outgoing radiation.
Further, the entangled (versus separable) nature of the full quantum state across the BH horizon, resulting from the coupling of the evaporating BH `pump' source mode with the emitted internal and external Hawking radiation modes, argues against the need for concepts such as BH firewalls.
Our analysis has determined that the existence of the islands depends not only on the microscopic parameters in {{formula:332ef83d-5ca1-4bcd-bb37-986e3ded481b}} , but also on both {{formula:0088479d-c92a-465c-bc72-b4689eeb3f67}} , where {{formula:a4bb69d3-a403-4777-8235-a97135b334aa}} is the variation of the dilaton across the geometry. In particular, these two parameters determine the mass of the graviton; thus it is effectively the mass of the graviton that directly affects the behavior of the island surfaces. Our analysis shows that the mass of the graviton cannot be made arbitrarily light without spoiling the behavior of the island surfaces, strongly supporting the lesson about islands and massive gravitons that comes from Karch-Randall configurations {{cite:be27f8b3be0d3724e45f22fe4a8436ecd9e918de}}.
Our primary contribution is an approach to generating and evaluating personalized content in games in a way that is independent of both agent architecture and game. The agent independence is achieved by using the OpenAI gym interface {{cite:e10aee755ebfe055b923ba35aaaa1143d79d1491}}, which is already widely adopted as a standard for reinforcement learning agents and is technically compatible with most other agent types as well. The system is also independent of the game used, provided that the game accepts one or more numerical parameters to an internal procedural content generator and can provide a meaningful state representation (e.g. a screenshot) through the gym interface.
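A minimal sketch of such a game-independent wrapper, under our own assumptions: the class, parameter names, and level logic below are illustrative, and we mimic the gym reset/step signature rather than importing gym itself.

```python
import random

class ParamPCGEnv:
    """Toy game exposing the gym-style reset/step interface, whose level
    is produced by an internal procedural generator controlled by
    numerical parameters (here: obstacle density and level width)."""

    def __init__(self, gen_params):
        self.gen_params = gen_params  # (obstacle_density, level_width)
        self.level = []
        self.t = 0

    def _generate_level(self):
        density, width = self.gen_params
        rng = random.Random(42)  # fixed seed keeps the sketch reproducible
        return [rng.random() < density for _ in range(int(width))]

    def reset(self):
        self.level = self._generate_level()
        self.t = 0
        return (self.t, tuple(self.level))  # meaningful state representation

    def step(self, action):
        self.t += 1
        cell = self.level[self.t % len(self.level)]
        reward = -1.0 if cell else 1.0      # penalize hitting an obstacle
        done = self.t >= len(self.level)
        return (self.t, tuple(self.level)), reward, done, {}

env = ParamPCGEnv(gen_params=(0.3, 8))
env.reset()
obs, r, done, info = env.step(0)
```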
Such a crucial rôle of families is reminiscent of the
KM mechanism {{cite:f320b3c3d59074f9113a3ef53f28c897d44773f8}} for CP violation which is non-zero only if all the
three families participate as shown by the Jarlskog determinant {{cite:f31f58b33462e0ca7bfc5fa5801eabe6c2faab3c}}.
This article comes last in a series of three articles where we investigated
the conditions for a more biologically plausible self-organization process. In the first article {{cite:d72a5d80fbb4ef302c7e7b38787a688b302bb7a4}}, we introduced the dynamic SOM (DSOM) and showed how the time-dependent learning rate and neighborhood function variance of regular SOM can be replaced by a time-independent learning process. DSOM is capable of continuous on-line learning and can adapt anytime to a dynamic dataset. In the second article {{cite:81156f8d8bb2ed1339b89db15d532f05d8f07cfc}}, {{cite:5be567a796f80d907159bfd63f1bd982a56c330c}}, we introduced the dynamic neural field SOM (DNF-SOM) where the winner-take-all competitive stage has been replaced by a regular neural field that aimed at simulating the neural activity of the somatosensory cortex (area 3b). The whole SOM procedure is thus replaced by an actual distributed process without the need of any supervisor to select the BMU. The selection of the BMU as well as the neighborhood function emerge naturally
due to the lateral competition between neurons that ultimately drives the self-organization. The present work is the last part of this sequel and provides the basis for developing biologically plausible self-organizing maps. Taken together, DSOM, DNF-SOM and RSOM provide a biological ground for self-organization where a decreasing learning rate, winner-take-all competition and a regular grid are not necessary. Instead, our main hypotheses are the blue noise distribution and the nearest-neighbour connectivity pattern. For the blue noise distribution, given the physical nature of neurons {{cite:69344dd4ecab210271fc29c39db4f7931d9bfa02}}, {{cite:7d706821cebfe0388dec05fcaa01a1d0d4e34716}}, we think it makes sense to consider neurons to be at a minimal distance from each other and randomly distributed, and to have nearest-neighbour connectivity, as this is known to occur in the cortex {{cite:4252f99a9f642a6e576e4fd35f60528037f464c8}}. The case of reorganization, where neurons physically migrate (Lloyd relaxation), is probably the most dubious hypothesis but seems to be partially supported by experimental results
{{cite:3be7b9d1ed61c1cacdf6bb42be04c73470e114f2}}. It is also worth mentioning that reorganization takes place naturally in the mammalian brain. More precisely, neurogenesis happens in the subgranular zone of the dentate gyrus of the hippocampus and in the subventricular zone of the lateral ventricle {{cite:d210d270587a655379c93dc24be248353ea7731c}}. On the other hand, when neural tissue in the cerebral cortex {{cite:41403f536cb485d22b6c83e866b2a75f010f7e2f}}, {{cite:827b1f31d718c889e1d23e571df3d90c128bcc54}} or the spinal cord {{cite:6707c107fa8ab2c16f98bc46b20ad866ed966d76}}, {{cite:1860ceb91ad13240738578f13cb3b9f1d1542801}} is damaged, neurons reorganize their receptive fields and undamaged nerves sprout new connections and restore function (partially or fully). During such events, it has been shown that neurons can physically move.
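The Lloyd-relaxation hypothesis mentioned above can be illustrated with a toy Monte-Carlo version (our own sketch, not the paper's implementation): each site moves to the centroid of its approximated Voronoi cell, pushing a random point set toward a blue-noise-like distribution where sites keep a minimal distance from each other.

```python
import numpy as np

def lloyd_relax(points, iters=20, samples=20000, seed=0):
    """Toy Lloyd relaxation in the unit square: assign Monte-Carlo
    samples to their nearest site, then move each site to the centroid
    of the samples it owns. Repeating spreads the sites apart."""
    rng = np.random.default_rng(seed)
    pts = points.copy()
    for _ in range(iters):
        s = rng.random((samples, 2))
        d2 = ((s[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        owner = d2.argmin(axis=1)            # nearest site per sample
        for i in range(len(pts)):
            cell = s[owner == i]
            if len(cell):
                pts[i] = cell.mean(axis=0)   # centroid of the Voronoi cell
    return pts

rng = np.random.default_rng(1)
init = rng.random((16, 2))       # random (white-noise) initial placement
relaxed = lloyd_relax(init)      # blue-noise-like placement
```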
Our work demonstrates the effectiveness of neural spatial representations in solving time-dependent PDEs and observes empirical convergence under refinement (see the 3D elasticity convergence results, REF ). Future work should consider theoretical analysis {{cite:5d631ce7d8e4412d2b3be9f3cd59619f74595d84}} of convergence and stability. More challenging boundary conditions, such as turbulence {{cite:62271afa498016eb4baefc2d5b43c732db5be6df}} and intricate contacts {{cite:efd245384a48844cd3b71ab1beb1bdc8e7720a7b}}, are also important future directions.
The method is compared with current state-of-the-art methods to show the effectiveness of the developed parameter optimization. Only methods using RGB-D data are included, as these have the highest performance.
We showcase the results using the CosyPose {{cite:cc60376deef68af0cde251c3ef39a41900c2b933}} detector with the parameters optimized for a run-time under four seconds and for maximum run-time. Results are shown for the LINEMOD and OCCLUSION datasets. The performance on LINEMOD is shown in Tab. REF . The results show that our method outperforms all methods trained on synthetic data and most methods using real training data; only PVN3D {{cite:2f88f777db5e01d0607ce453a437eabc3f0626df}} obtains better results.
Now we discuss the broad comparison of the band structure and FS topology between ACo{{formula:73e48642-6205-4713-a23c-eb1f124240aa}} As{{formula:1be2b3cb-3591-4474-8f9f-a21e2c821667}} and AFe{{formula:bd8cff21-30fa-4e9e-8c50-6c6c315ff86d}} As{{formula:7b244b92-b500-4d72-a79b-ee8b136a973b}} compounds. Overall, the electronic DOS({{formula:e2657b15-6c01-4907-b5ce-81d70f50c604}} ) of the ACo{{formula:76892d4d-513e-4c40-8df5-814a5e263024}} As{{formula:5da4ad97-e71b-4950-ab31-363b6d9c25c6}} samples is found to be similar to that of the Fe-based compounds. However, we observe a band shift of 300–500 meV below E{{formula:2d1108f6-9b11-4231-8c34-3f887123e1c8}} due to the extra {{formula:e630e642-1c9e-4ee7-bccb-50ff0ed34763}} electron in Co as compared to Fe {{cite:9beefa5f58e36e2051285a608b59d49515633440}}, {{cite:4c7504896a4abe012bfc17b2adc51e8e07b4122c}}, {{cite:c92ea499d9e42b7ea1f75f49c3e031d8da0db68d}}, {{cite:5e1ff9e331435d04afe8ad9ff70cb8a39e74b04e}}, which may be reflected in the high-intensity peak at E{{formula:f664b684-d71a-4403-a073-8860f2dd11fb}} . Note that the Co {{formula:d5f74e48-45e9-4ce5-96a7-8f5817c36a6c}} orbital has a larger bandwidth (between {{formula:9ecdc8cd-c631-41d4-a7d2-b58af32974f1}} eV and 2 eV) than the Fe {{formula:4735ffff-a989-494a-896d-0cfdceedd676}} orbitals (from {{formula:972f2f4c-f313-4af7-b7fe-4f32e4d7fdf9}} eV to 2 eV) {{cite:923aa6d8958444efebd9a94231d9e5935f28c419}}. Therefore, a complex multi-band Fermi surface is observed in the ARPES measurements on the {{formula:35ccbd64-d0b2-4f6c-b0b7-19983c527279}} Co{{formula:a54480ec-315d-475d-a438-78c82f6d5d12}} As{{formula:5296518c-5776-4ba5-be80-82d1edc29e9a}} compounds, along with the large DOS at E{{formula:678ade56-5e7b-475a-93aa-030f50fc1903}} with no apparent nesting {{cite:9beefa5f58e36e2051285a608b59d49515633440}}, {{cite:5252a5da60219803638a5f82e92ad2f75057094b}}.
In this scenario, the band structure of ACo{{formula:2a5022f1-8802-4b6e-a0a1-1ccd40f226da}} As{{formula:87e94a81-e794-4376-bb69-6315da048925}} {{cite:0182c434b248315d3e339c87c6ea836079e28b0a}} appears significantly different at E{{formula:82159e6c-b83a-496e-8b25-939d65b33a7c}} with respect to that of BaFe{{formula:210abd43-af68-4efb-9d97-af8047725f41}} As{{formula:cd443268-08ab-46ce-aa2d-08f288d08a9c}} . Also, as noted above, CaCo{{formula:f24279d2-a4f9-44e8-a1eb-6ec04e67aa8b}} As{{formula:399dc76b-1df3-44aa-9981-353411fc5fc3}} shows magnetic ordering at low temperature; however, ARPES measurements across the magnetic transition indicate no significant changes in the band structure. Similarly, a complex FS structure of EuRh{{formula:96b0b4ff-d564-4ddb-9a15-c5255cb5631f}} As{{formula:cc14167b-a25e-4c78-9f50-b7231685c425}} and BaNi{{formula:62787b42-1ef0-4430-b151-4043a1740fe5}} As{{formula:559146c1-de3e-434d-9050-01ec7aa9ec80}} has been reported using ARPES, but no signature of band folding due to magnetic ordering was observed, which indicates a weak coupling between layers {{cite:4cfbc8f1690f14326ef05a0e187f2f891c12d3ae}}, {{cite:0a610fd7b880159a756693bce14afc6f1ec41060}}. Moreover, a significant decrease in electronic correlation is reported in BaCr{{formula:1e5c3a38-2ef1-479d-b4a7-ac1b7f932eb3}} As{{formula:0a2f2109-f19b-40ae-8189-ffd4ee0f7b7d}} {{cite:67b23299fb0883119c8c86793a7f7c5600cef5db}}, {{cite:cea9f1c464a7d631a354c102cc37f20b015dce0b}}. Note that in the {{formula:c78d4587-1d3e-4aa5-9b72-fa2c8f180ec1}} Co{{formula:a95f14d5-83bc-47b2-a98e-529785df2056}} As{{formula:f26a1b96-256d-4edd-aec7-c72a944d88f7}} compounds a considerable decrease in interlayer distance and z{{formula:b1f9391d-f2e1-41bd-959d-8701723cd60b}} results in different correlation strengths {{cite:0182c434b248315d3e339c87c6ea836079e28b0a}} when compared with the iron pnictides {{cite:b1613db4eb0dfe2072befbd5511e081532b120f9}}.
{{figure:db0856d5-e90a-45a7-b404-000dda300e5b}}
While B-tipping can be found and continued in system parameters by applying tools from the theory of autonomous bifurcations {{cite:bb932d2915fc51f61f77fbe5e5e9d908819b4a43}}, {{cite:6da4c92a582db8005b16ab0e043e939b69af162b}}, {{cite:73c6da3d3b370238bdca430409e14ea9a608c9c0}} to the autonomous frozen system (REF ), this is not the case for nonautonomous R-tipping.
Furthermore, whereas Section REF
considers R-tipping for moving sinks on {{formula:c1d1f0b1-8194-4c3d-bdcb-dfcb72ba80e5}} (e.g. Figure REF (a)), some R-tipping occurs from moving sinks on a semi-infinite or even finite time interval {{formula:754747db-d065-4fe3-b860-fe53a7756ac2}} (e.g. Figure REF (b)). Therefore, there is a need for
general criteria and methods to find different nonautonomous R-tipping and continue them in system parameters.
Existing observations present a complex and incomplete picture. Many studies have found evidence for a reversal of the star formation-density relation at {{formula:6dd1073e-f7f7-487b-9f0d-7293a09f50c7}} , with enhanced sSFRs relative to field levels in overdense environments {{cite:7543c8ce0dc8e6d8dde355a41e450560b6370e31}}, {{cite:34e0a434fd9492c5c5c88f096be951f94dd735a1}}, {{cite:30f3ac62c6ed69816b5284014f7f2780bc8bc6de}}, {{cite:8a50df19121190397e5dcb6b8e1841a6b8ee1aae}}, {{cite:ba282b9d4dac9f5938d241127d81d81134a96c09}}. Meanwhile, there is also evidence for molecular gas deficits in star-forming galaxies within {{formula:b0650088-57e3-4bb7-8f80-3fa67694166b}} galaxy clusters, indicating that environmental factors are actively depleting their gas supply, and tentative evidence for environmental quenching at {{formula:99885326-ec85-4eb9-9a22-0f6078ecc97c}} based upon clustering of quiescent galaxies {{cite:94707a638a2c629ca58398bb70e0bb0e36d80e69}}. Together, these results paint a picture of accelerated evolution in the densest environments, with both star formation and its quenching starting earlier than in lower densities. The detailed dependence of star formation and its quenching upon environment is however unclear during the {{formula:aab336c4-b01d-4725-9fe6-04d7bbce9f08}} epoch when the most massive galaxies are expected to feel the onset of quenching. Recently, {{cite:bec5a8b0b750651f91edc5b0a72f6a5270cbb53c}} conducted a pioneering investigation of the star formation histories of galaxies in a low-mass ({{formula:ba883012-30f7-4263-8ad5-d75416880b1c}} M{{formula:116917da-83d9-49c5-813e-681e40334de2}} ) protocluster using spectral energy distribution (SED) fitting. 
They found that the most massive galaxies ({{formula:bbb01f40-0a6a-47bf-9514-ebd607c560ae}} ) showed a slight 1{{formula:ee2368c4-3a6c-4bd3-9207-8822a0d9ca2b}} decrement in sSFRs relative to the field, which is suggestive of the onset of quenching but at odds with the star formation reversal seen by other groups.
Another approach to studying the non-perturbative QCD effects is the rescattering mechanism.
In this framework, the non-perturbative QCD effects are modeled as the exchange of one particle between two particles produced in the short-distance tree-level emission process.
This forms a triangle diagram at the hadron level.
The rescattering mechanism has been used to estimate the branching fractions of heavy meson and baryon decays {{cite:e9c0724a7b4630aec9bfe4b35652783f7ac0969f}}, {{cite:25d3aa1b0d807ced424f372866f772a4ff512ef4}}, {{cite:5d857b0e84645270abdd29b098702ef7546734f1}}, {{cite:44641f052ad1d1ce2406c641eeac9ba6139df444}}, {{cite:6a087b1cd12048ca428cda5402e022a17c3f03fa}}, {{cite:0e6a506cd1103be3e1f8ad392485c5fe40364327}}, {{cite:3c8b2afdd745484e04882736cc9ab6b76b623e2e}}, {{cite:92807eef8fc2d6ee62df28c25a64a997eadc467d}}, {{cite:a2419dc9291952c01de902bb2077aead46b623a8}}, {{cite:63dc37bff7148e326a2181db1279f4c312d7a030}}.
In 2017, the rescattering mechanism was extended to doubly charmed baryon decays, which helped lead to the first observation of {{formula:6c291c44-b86b-4803-bf23-7bc179d24002}} {{cite:ddc5dc7837cdcd91f5f9809fb55a8088e9f0d4e9}}, {{cite:3e14ed24552c3fc84d74d17ef41191a6571a39ea}}.
After that, the branching fractions of other doubly charmed baryon decay channels were estimated in the rescattering mechanism {{cite:c139843509afb933dde7a450ffee3651f50c9389}}, {{cite:d7b5a50acd918ea9bb43be8b9a704fd9c6d5533c}}.
In our results, we evaluated four different matrix-inverse calculations for our regression method: Gaussian elimination with and without secure computation, the Secure Inverse Method for matrix inversion, and regular inverse calculation (without secure computation). We used linear Support Vector Regression (SVR) {{cite:81aca0df59c2c151cb6dc5f64a1610ef52b001fa}}, implemented using {{cite:7bed2f683038afae4ee9eaddd17af10089c0d99d}}, as a benchmark algorithm for comparison. We chose SVR in order to have a robust linear regression method to compare against. We tune the {{formula:baaa6f8c-7483-4869-9501-e7e943af66b8}} parameter of SVR over the set {{formula:f0ee11a2-cc95-4dde-ac52-6d71fdf2e7df}} using grid search. In Table REF , we report the mean squared error (MSE) for each algorithm using different {{formula:38cc75ca-21d5-4da8-a0c6-887f5191aa75}} values from the set {{formula:2a2b56f8-f3f3-4309-a0dc-4bc4c8e7ea4f}} .
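The grid-search-and-MSE protocol can be sketched as follows. To keep the example self-contained we use a closed-form ridge regressor as a stand-in for SVR, with the regularizer `lam` playing the role of SVR's C; all names and data are illustrative.

```python
import numpy as np

def fit_ridge(X, y, lam):
    """Closed-form ridge fit, used here as a simple stand-in regressor."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def grid_search(X_tr, y_tr, X_val, y_val, lams):
    """Pick the regularization value minimizing validation MSE,
    mirroring the grid search over SVR's C parameter."""
    def mse(w):
        return float(np.mean((X_val @ w - y_val) ** 2))
    scores = {lam: mse(fit_ridge(X_tr, y_tr, lam)) for lam in lams}
    best = min(scores, key=scores.get)
    return best, scores

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(200)
best, scores = grid_search(X[:150], y[:150], X[150:], y[150:],
                           [0.01, 0.1, 1.0, 10.0])
```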
{{table:2660237c-a932-4bdc-92fb-8e73948f6609}} | r | 7aa04f40a41ff4a0be4dcac98707506b |
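The matrix-inversion variants described above (minus the secure-computation layer, which is out of scope here) can be sketched on a toy regularized least-squares problem; the data, the regularization value `lam`, and the dimensions are hypothetical stand-ins for the paper's placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + rng.normal(scale=0.1, size=200)

lam = 0.1  # hypothetical regularization strength
A = X.T @ X + lam * np.eye(5)  # normal-equations matrix
b = X.T @ y

# Variant 1: explicit matrix inverse (the "regular inverse" baseline).
w_inv = np.linalg.inv(A) @ b
# Variant 2: Gaussian elimination via a linear solve (no explicit inverse).
w_ge = np.linalg.solve(A, b)

mse = float(np.mean((X @ w_ge - y) ** 2))
print(np.allclose(w_inv, w_ge), round(mse, 4))
```

Both variants recover the same weights up to floating-point error; secure-computation versions would run the same elimination steps over secret-shared values.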
Here, we investigate skill transfer from a high-resource language, i.e., English, to a low-resource one, i.e., Bulgarian, for the task of multiple-choice reading comprehension. Most previous work {{cite:a3a235993e7473ed92482cb9f023f7becf900f39}}, {{cite:fe3cc18c8eee0c21b3daf43d7e808af16c7b0c55}}, {{cite:4832967eb6096a84d71b759737a7d7eaff90623d}}, {{cite:b13d7f41814452663a36bd501f02c14a611eb1fd}} was monolingual, and a relevant context for each question was available a priori. We take the task a step further by exploring the capability of a neural comprehension model in a multilingual setting using external commonsense knowledge. Our approach is based on the multilingual cased BERT {{cite:4dc37a4fefa2c9f79da54d2e916346587611499b}} fine-tuned on the RACE dataset {{cite:ac81d122bbc59c89e4a156dfa56e10f698e297ec}}, which contains over 87,000 English multiple-choice school-level science questions. For evaluation, we build a novel dataset for Bulgarian.
We further experiment with pre-training the model over stratified Slavic corpora in Bulgarian, Czech, and Polish Wikipedia articles, and Russian news, as well as with various document retrieval strategies.
| i | 8903486823e85bcafb8a5fc8a1541b50 |
Figure REF gives an overview of hierarchical Crossover-SGD with hierarchical communication. The figure presents the method in three steps. The first step reduces the gradients of the worker nodes within each group. We reduce gradients at this stage for two reasons. First, it enables the overlap of computation and communication, which is easy to exploit when gradients are communicated. Second, LARS {{cite:8794a9c533d21fbb2745099caf07c098b0a64e60}}, when used with AllReduce-SGD, loses validation accuracy if the gradient norm is not synchronized among worker nodes. Thus, the L node, the leader node of each group, collects gradients from its workers, reduces them, and applies the reduced gradient to the model. The second step is inter-group communication using Crossover-SGD. Subsequently, the gossiped model parameters of each leader node propagate to the other workers in its group.
{{figure:e22ca52b-ab57-4d1d-b9c0-c584fd70b118}} | m | dd4360ade53cf82d757c07dda898724f |
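A single step of the three-stage scheme described above can be sketched as a toy NumPy simulation; the group sizes, learning rate, and the single pairwise leader average (standing in for the actual Crossover-SGD gossip exchange) are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, group_size, dim = 2, 3, 4
lr = 0.1
models = [np.zeros(dim) for _ in range(n_groups * group_size)]
grads = [rng.normal(size=dim) for _ in range(n_groups * group_size)]

# Step 1: each group leader reduces (averages) its members' gradients and
# applies the update, so the gradient norm is shared group-wide (as LARS needs).
leader_models = []
for g in range(n_groups):
    members = range(g * group_size, (g + 1) * group_size)
    reduced = np.mean([grads[i] for i in members], axis=0)
    leader_models.append(models[g * group_size] - lr * reduced)

# Step 2: inter-group gossip between leaders (here: one full pairwise average).
avg = np.mean(leader_models, axis=0)
leader_models = [avg.copy() for _ in range(n_groups)]

# Step 3: each leader propagates its parameters back to its group members.
for g in range(n_groups):
    for i in range(g * group_size, (g + 1) * group_size):
        models[i] = leader_models[g].copy()

print(all(np.allclose(models[0], m) for m in models))
```

With two equal-sized groups, one full gossip round makes all workers agree on the globally averaged update; real Crossover-SGD mixes only a random subset of leader pairs per round.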
Next, to demonstrate the universality of the proposed concept based on an example, we design a 3-port ({{formula:6251627c-1b90-468c-8c77-f66cb51a4c51}} ) metasurface power combiner. Without loss of generality, we specify the angles of ports 1 and 2 as {{formula:32ea0eb8-f1b6-4695-aa24-4da5073c56b5}} and {{formula:24873cf6-4aad-496c-88a1-8b906ccdde82}} , respectively. The output port 3 is at the angle {{formula:3af6c6df-646a-4f0c-8c59-5553e975839d}} (all the angles are counted from the metasurface normal in the anticlockwise direction, as shown in Fig. REF ).
The incident frequency is {{formula:97fd19a3-d0dd-482f-b3b2-878b90ac8a6b}} and the modulation frequency is assumed to be much smaller than the incident frequency (arbitrarily chosen as {{formula:d0f1ed93-7494-414c-b8f0-b9084325789b}} ), such that only ports 1, 2, and 3 support propagating waves in free space, and all higher-order harmonics excited at the metasurface are evanescent. The advantage of the slow-modulation scenario is that such a metasurface can be implemented relatively simply {{cite:52e03812ed89fc7ab5c28a39c09e88d3300528ca}}, {{cite:650367cb590ba16f5794770c152644bcaceaf8ac}}, {{cite:d19a3a29b898a122f31ca469c97b5835fdfd4d78}}, {{cite:b082a0e69acf820d9ced81bf49151d9ed6dfce3c}}, {{cite:1c7c81582d91e8f179a768daeacc859ee56ca162}}, {{cite:4998edec3d6d406b53a1d7d0c5e4ab16e983d6de}}, {{cite:557100a354a43194c83cc074124081077dc2ea17}}. Moreover, the converted frequencies at the output port differ only slightly from the incident frequency, so that all power can be efficiently received even by a receiver with a small bandwidth.
Based on these parameters, the spatial modulation period is determined from Eq. (REF ) as {{formula:d35ce5e8-4a78-46e8-a497-71a7efd1b474}} .
The substrate thickness is {{formula:587b6275-db56-44bd-9c07-3e877a32b1c7}} , and {{formula:dcc211bb-9320-46f0-947d-3c3c1ee77e1e}} .
To find the proper modulation function of the capacitive impedance sheet that can realize full photon delivery from ports 1 and 2 to port 3, two optimization objectives are set in numerical optimization according to Eq. (REF ).
The first one ensures full specular reflection from port 1 to port 3, i.e., {{formula:54b8e354-cb28-46f9-be19-afa59b3cc66c}} . The second objective is {{formula:db697400-dead-4965-af1a-2a59985b1dea}} , which ensures full anomalous reflection from port 2 to port 3.
Since the number of objectives is small, we introduce only three Fourier terms in the modulation function {{formula:d8f8dc79-fb1f-44dd-bc67-40e9d91ba6ab}} , {{formula:5c95f6a0-b8ff-42fd-8661-500da3dc5e3a}} , and {{formula:9786995d-88d6-451c-98ed-cee3ead28889}} . One of the optimized solutions is
{{formula:7ac43659-cfcb-4efa-a641-12bb8a0cd3ad}}
| r | 4906316d4062c653bc5e75f720b2c292 |
According to the manuscript, the Chamfer Distance is computed with the L2 norm.
However, PCN {{cite:b70c1427485ff482a8449b136d6d3d59cad1c642}} adopts the Chamfer Distance with the L1 norm as an evaluation metric, which can be formulated as follows
{{formula:5f0535b3-375f-4da3-b465-2df34f71a4d2}}
| r | f930f74da2c2eb8e87fa21a449043082 |
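As a companion to the L1 Chamfer Distance discussed above, here is a minimal NumPy sketch. Normalization conventions vary across papers (some halve or average the two directed terms), so the exact form used by PCN should be read off the displayed formula; this version simply sums the two directed means.

```python
import numpy as np

def chamfer_l1(p, q):
    """Chamfer Distance with the L1 norm between point sets p (n,3) and q (m,3):
    sum of the two directed mean nearest-neighbor L1 distances."""
    d = np.abs(p[:, None, :] - q[None, :, :]).sum(-1)  # (n, m) pairwise L1
    return d.min(1).mean() + d.min(0).mean()

p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = np.array([[0.0, 0.0, 0.0]])
print(chamfer_l1(p, q))  # 0.5 + 0.0 = 0.5
```

The O(nm) pairwise matrix is fine for small clouds; for dense completions a KD-tree nearest-neighbor query is the usual replacement.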
The Brunn-Minkowski inequality is a fundamental geometric inequality in the classical Brunn-Minkowski theory. It gives the log-concavity of the Lebesgue measure and implies the
classical isoperimetric inequality in the Euclidean space. Gardner's excellent survey
article {{cite:3437d79abd37b384663d219026b15923e73973af}} describes its generalizations, consequences, and applications in geometry, analysis, probability, and other subjects.
During the last two decades, motivated by the study of geometry of {{formula:b7cef206-bc5e-4542-acfc-ed1e6822f5ea}} spaces, the
{{formula:8561dd8b-9c35-41a8-8672-56240265ee9f}} Brunn-Minkowski theory has achieved enormous success when {{formula:258d9f72-a102-44aa-9c88-36405c3a98df}} . The classical
Brunn-Minkowski theory corresponds to the case of {{formula:5c6efb52-6986-4fe4-ae64-d113977bc6d3}} . The case of {{formula:bb2613f7-d738-4a23-a8ca-050814daa00d}} , in
particular, the singular case {{formula:c4d3553d-39ba-4384-8680-66ba4aecc6d5}} , remains a challenge.
| i | 5a6913ba060d571df53787ad1f81681b |
{{formula:bd548639-cf5c-4a37-ab50-be2e47281218}} is strongly elliptic with respect to {{formula:d7ab1272-4c3c-4cfe-bab7-1b970cd53e25}} of arbitrary order {{formula:741f5e9a-393e-4d04-9f0b-68b68889138d}} . Examples are fractional operators of the form {{formula:6e9e619d-7611-4822-b8c1-92166836b348}} {{formula:e49b1ac9-7eb9-46f7-a4df-bbe8bb023d04}} where {{formula:3bb0a9a6-16bb-4f30-b2a4-dc3914e9024a}} is a real-valued function such that {{formula:4abe8fcc-db9a-442f-bdd8-3ece714254dc}} for all {{formula:be1a9b9b-aed6-4641-afc0-869059e71645}} . Our analysis also covers the case where {{formula:b9053ce1-f415-49fa-9e55-4601ce590362}} with {{formula:45797f0a-01a7-4205-a638-58bf1e41ad4e}} being the Laplace-Beltrami operator on {{formula:fce20572-86f9-474d-9adb-15f8ce2f4973}}
{{formula:0ca41fe3-ace1-4ff9-aa00-d7c6f73929fd}} where {{formula:f3bde9cc-b590-481d-87dc-81853f06108e}} and the drift {{formula:7f2985b5-1699-47dc-8048-0a21528484d6}} is a left-invariant differential operator of first order such that the matrix-valued symbol of
{{formula:e49ae5ef-e3b2-4527-9ff4-54f331fca4a4}}
is positive on every representation space. Moreover, the positivity condition is removed when {{formula:3c0b1a4e-6af8-4349-b266-7f7859436baf}} and {{formula:f93655cb-0a74-4868-be56-b863c22957e7}} for all {{formula:46751191-3e09-4f36-8d00-060b79e4ddf9}} . In both cases we say that the diffusion problem (REF ) has drift {{formula:cdbaf01d-d77e-420c-9bb7-062045313ab9}} , following the terminology used e.g. in {{cite:db5e6f92688e6859d1e093305431f6b5ba008b52}}.
| i | 01cade58edc8e8d122845ffc298dd805 |
In contrast to the standard differential privacy model {{cite:7d44c4344cce59a52d3a9e35cc164780789644b3}}, in which the learner collects the true data while releasing a private output to protect privacy, in the LDP model the learner only has access to corrupted input data from the users. Hence, LDP often provides much stronger privacy protection for the user and is more appealing in real applications, especially for the systems mentioned above {{cite:f33c865a0dcae0cc3adc023da1244a2a619bddd4}}. To the best of our knowledge, in the setting of online learning with bandit feedback, the LDP model has only been studied theoretically very recently. For example, in {{cite:fcd94454295539698fbaeabd490a0f47a44ecf7c}}, {{cite:29c93aa894d5818ad49b172054fea2692dfc5a2a}}, the authors investigate MAB with an LDP guarantee. {{cite:6279d33803e10ebe4bfeda1d550a5600282a7bd1}} studies LDP in the linear (contextual) bandit setting. However, LDP in the most general scenario, i.e., Bayesian optimization (BO), remains an important open problem.
| i | 16a7314c8b5ba2e99b9dd29f6257a2c2 |
In this article, we have derived two analytic RARs (one for the hydrostatic framework and one for the NFW framework), which are applicable to the central region of galaxy clusters. The analytic RARs are particularly good at describing large and massive non-cool-core clusters. In fact, many previous related studies focus on the outer region of galaxy clusters (e.g. {{formula:aff4ce5e-f2de-4dd8-ba9e-950f6aea8f29}} ) {{cite:1e8f9722650000c78992e3668fb7b09bcc4c8776}}, {{cite:1fd32e424c28dd818a732d29aec939335e129623}}. Therefore, our study provides new insight into the behavior of the RAR for galaxy clusters, especially in their central region. The derived analytic RAR also explicitly reveals the potential dependence of the RAR, and how the hot gas parameters affect the scatter and the functional form of the RAR.
| d | 57f31e9eb090e515e324bebc5c80f67e |
Several methods have been proposed to learn disentangled representations.
Here we are interested in evaluating the benefits of disentangled representations that have been learned through unsupervised learning.
In order to control for potential confounding factors that may arise in using a single model, we use the representations learned from four state-of-the-art approaches from the literature: {{formula:a61bca9a-e353-420b-97fb-a06f30ed8bb9}} -VAE {{cite:127e5155ccc151b42ec891f66f13b8c22e7123e5}}, FactorVAE {{cite:b91d4705d7228c19e2909747f37afd25ea0a5754}}, {{formula:509b5d21-54f9-46b1-930c-d7d5ebbdc22d}} -TCVAE {{cite:3c0e013ba33520ab42c89ee3928de1ef14daec9a}}, and DIP-VAE {{cite:0d6ee7d632b5fdf4eee502f3dd8f327d46df8382}}.
A similar choice of models was used in a recent study by Locatello et al. {{cite:b6c1bc2f0f4c4a7594cf1fa779faa66685f89c65}}.
| m | b410c30082866895b087221aaf255c73 |
Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible to train “deep” ConvNets (also popularized under the keyword: “deep learning”) for computer vision classification tasks. ConvNet features are trained from the data in a fully supervised fashion. This has major advantages over more traditional CAD approaches that use hand-crafted features designed from human experience: ConvNets have a better chance of capturing the “essence” of the imaging data set used for training than hand-crafted features do {{cite:adee3cfdb4936371773224b156d4c37689b61f83}}. Great advances in the classification of natural images have been achieved {{cite:f427ae64fcc89daeb42917983a507fb3739db174}}, {{cite:311795ecdddcbf37786650039f99aa2ae8de7d27}}. Studies that have applied deep learning and ConvNets to medical imaging applications also show promise, e.g. {{cite:aad6eaab33b5e88de74d44e7b0c4c0f85b845581}}, {{cite:130827e12bf577d10d260b746602e565288a959b}}, {{cite:abcf50c9e1567a9026af8311add83ff68f12ec30}}. In particular, ConvNets have been applied successfully in biomedical applications such as digital pathology {{cite:6f418a816ab2dafaefc19d9b17a687a9cf01086d}}. In this work, we apply ConvNets to pancreas segmentation. Our motivation is partially inspired by hybrid systems that use both parametric and non-parametric models for hierarchical coarse-to-fine classification with ConvNets {{cite:f1b8c34a5be274de6d24d384847e1fff5d6d831a}}. This hierarchical segmentation pipeline is illustrated in Fig. REF .
{{figure:721d8bf1-76bf-4c05-b06f-a7e12bf1462c}} | m | fb8a4cdfd71ad88528add658cec44aa2 |
Different data sources. In practical settings, it is of interest to learn the graph from heterogeneous data that may not be independent and identically distributed. Examples include data collected from several related populations that have common structure, and data collected over time. There has been some work to address these problems in the Gaussian setting: {{cite:67861c5c5950e330ead64ca6336d9cb41b30144f}}, {{cite:399605274326ffb20fe4033dd9a579f24bb2ceaf}} propose to combine optimization problems for separate
precision matrices with shared {{formula:6f2628ba-0f63-4db1-a4b8-eac1154039d7}} -penalties to identify common sparsity, while {{cite:394e054e3808771c0995d36116cc1e850638040b}} exploits smooth changes in the precision matrix to estimate the evolution of the graph over time.
| d | f0e1d62a603e004a446d848b8a3ee3ff |
Our results:
We propose an exact algorithm for the DkConP problem. The running time of the algorithm is {{formula:2d3a7e93-ef2f-462f-8092-ebd1b8de6e97}} , which is an improvement over the previous best exact algorithm (running in {{formula:95761adc-a09a-43fb-af78-b11a03099000}} ) {{cite:6920f87ee41e931613d3d3bbd6762196dbbc6cec}}. Here, the constant hidden in {{formula:242ac3b3-d0d9-4a7b-a7fa-5677d97d764d}} , the exponent of {{formula:7ca92bfe-a293-4b1f-bc0a-407d886403b7}} in the running time, is larger than {{formula:1b18c197-ec0a-41d7-ab70-1a2161b9504c}} {{cite:2e5714832832a707c6f9ee89c3024f3e5ff5b4c9}}. Comparing the running times of the two algorithms, it turns out that {{formula:4e047194-26ca-473e-a6b8-e0fd840e5cdb}} for all {{formula:f84e562d-b8d7-41b3-a1f1-2c35596508fb}} , where {{formula:0c401ccc-4aa5-4cc4-adb7-541006c23741}} is a constant. Hence, for the range of {{formula:22339f6d-b6dc-46fa-acf7-1f7523ee5f8b}} for which {{formula:e1fc1727-26be-4d72-a756-2484a5cea189}} , the running time of our algorithm is significantly faster. We also give a logarithmic-time {{formula:954c112b-e450-4c79-85a3-39426e565160}} -factor approximation algorithm for {{formula:87ef8f58-8bf2-4d85-a52f-0c8cbec315fe}} . The previous best approximation algorithm runs in {{formula:e63c9dbd-de9b-455a-b16d-b3f93f52b4bd}} and computes a {{formula:9ce6b805-9dad-464f-b42f-90e96bb527ab}} -approximation result {{cite:76a52f52d8f15ccfb057eb4d7d09240ecc8eea78}}. For {{formula:68e1a4fc-af0a-4114-95aa-34a7991a5091}} , the previous best exact algorithm runs in {{formula:61077463-2b75-4620-983b-484f9711b907}} time {{cite:abcbc8aab47a2539384a34fde7d1dc8108014829}}. Hence, our algorithm computes an approximation result much more quickly than the previous best algorithms, at the cost of a small loss in quality.
| r | 9faa454a9d9ee569619684f584d99260 |
Theorem 4.1 (see {{cite:d3c79b6672a00449ea1f00aaac0bb62dbe9436a4}}) Let {{formula:c718465a-a5da-4d85-aafb-9c669d171db6}} be a proper lower semicontinuous function. Consider a sequence {{formula:c7b766c5-a1a5-4f03-928f-27c0248d4255}} that satisfies Condition (H1)-(H3). If {{formula:d236543c-93b4-451a-8feb-aa77ba57cf7a}} has the Kurdyka-Łojasiewicz property at the limit point {{formula:52db889c-0794-42ba-9054-ef960521bc30}} specified in (H3), then the sequence {{formula:d5df7003-d197-4c80-8158-a0c915e55030}} converges to {{formula:6350d36a-a7ad-425c-ac8c-3d7203378882}} as {{formula:33fd2e36-427d-42cf-a3c2-2e90b77a07d2}} goes to {{formula:3f5b22be-53b6-4000-a9af-fdfde789e7bb}} , and {{formula:4b867d0d-ce73-49fa-970c-c6f41dff4f8f}} is a critical point of {{formula:03f0e0af-5991-4dbd-8740-945766f9029c}} . Moreover, the sequence {{formula:5d3903d5-a0c5-4b68-8692-ccce836f76b4}} has a finite length, i.e.,
{{formula:f69606a9-5416-4103-aaf4-a6993ed2132e}}
| r | 86c754e8f6a349f28332e2ab155aa3ea |
If the higher frequency flux densities at 15.5 and 31.4 GHz from
{{cite:c7a2142c54adc2f9f7c61be06437acd3af7bfc48}} are discounted – see
Section – the available flux densities in
Table REF (see Fig. REF ), apart from those from
{{cite:7470045944947c13d0de7e0e8f43a3c34ce064af}} are consistent with a flat radio
spectrum for G2.4{{formula:6738b42c-473c-41d8-8ab8-17d4e031db9a}} 1.4. A weighted least-squares fit to those flux
densities that have errors (i.e. those at 843 (the higher, total
value), 1408, 1490, 2303, 2695 (two values), 5000 and 8350 MHz in
Table REF ) gives a spectral index {{formula:c31d676f-23d4-41b0-9bd1-89832555ffa2}} .
For this fit the errors in these flux densities were taken as double the
values listed in Table REF , to be cautious (given that flux
densities have been derived in different ways, as noted above). This
flat spectrum is not consistent with the non-thermal spectrum with
{{formula:7c246b1f-af7f-4bd9-86bd-1f3fdaba0c0c}} derived by {{cite:7470045944947c13d0de7e0e8f43a3c34ce064af}}, as
is seen in Fig. REF . The uGMRT Band 5 flux densities
(i.e. 1297 to 1429 MHz) are significantly lower than the integrated
flux densities from the Molonglo observations at 843 MHz and the several
single dish surveys at {{formula:8e1f75c1-9002-442d-8568-63f00422b9ff}} to {{formula:c6609705-7976-4660-868c-74c579b76da1}} GHz. This discrepancy is
larger at the higher frequencies within the uGMRT band.
| d | 7d35f228d0a7c97f3a595047f987c43a |
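The weighted least-squares spectral-index fit described above can be sketched as follows. The flux densities and errors below are hypothetical placeholders (the measured values live in the paper's Table), so only the procedure — fitting log S against log ν with weights from the doubled errors — is meaningful, not the resulting number.

```python
import numpy as np

# Frequencies from the text; flux densities and errors are ILLUSTRATIVE only.
freq_mhz = np.array([843.0, 1408.0, 1490.0, 2303.0, 2695.0, 2695.0, 5000.0, 8350.0])
flux_jy = np.array([5.1, 5.0, 4.9, 5.2, 5.0, 4.8, 5.1, 4.9])
err_jy = 2.0 * np.array([0.3, 0.2, 0.2, 0.3, 0.2, 0.2, 0.3, 0.3])  # doubled, per the text

# Fit log10(S) = log10(S0) + alpha * log10(nu), convention S ∝ nu^alpha.
x, ylog = np.log10(freq_mhz), np.log10(flux_jy)
sigma_log = err_jy / (flux_jy * np.log(10.0))  # propagate sigma_S to log10 space
coef = np.polyfit(x, ylog, 1, w=1.0 / sigma_log)  # polyfit weights are 1/sigma
alpha = float(coef[0])
print(round(alpha, 3))
```

With the flat toy fluxes above, the fitted index comes out near zero, mirroring the flat-spectrum conclusion in the text.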
In this section, we will solve the BS equation numerically and study whether the {{formula:8d6510d8-77d7-489d-8af0-ef865274ac33}} -wave bound states composed of two pseudoscalar mesons exist or not. In our model, there is only one parameter, the cutoff {{formula:65c4b523-816e-4db3-a154-819075f521c4}} , which comes from the form factor. The binding energy {{formula:854227f3-306f-4e96-896b-54f35e196345}} is defined as {{formula:9c426557-1fa7-4513-9dc5-fd11db2d4ab7}} in the rest frame of the bound state. We take the averaged masses of the pseudoscalar mesons and the exchanged light and heavy mesons from PDG {{cite:54ddea5f60d3f27178273edafeaf3200d61414a6}}, {{formula:afc0e195-2f41-4007-8dc9-34ff71e28928}} = 494.988 MeV, {{formula:adf3c9a9-38bc-4ea8-87eb-3917e7fc1cc3}} = 1868.04 MeV, {{formula:025e66e4-9cdc-4d1e-90d6-ee66d309b1aa}} = 5279.44 MeV, {{formula:336cc96d-c140-4a16-9f24-08fa12b452cf}} = 775.26 MeV, {{formula:522fa76a-614b-48ff-bd62-b3de2eacdd4c}} = 782.65 MeV, {{formula:3d13c6fd-4e09-47a0-b4ae-cf274e37c1ff}} = 1019.461 MeV, {{formula:d3b5e561-bc17-44d9-81b4-a90ed566c044}} = 3096.9 MeV, and {{formula:8af54b47-7d7f-48ff-9200-089dc78c512a}} = 9460.3 MeV.
| r | e343b959d389680b4fa51f4f3af0ab2e |
Using advantages of both facial and gait analysis, we propose a method to automatically annotate the gender of front-view walking sequences with facial analysis models to generate training data for a gait-based gender recognition system. Through experiments on the popular Front-View Gait (FVG) {{cite:18356d2738ce322e8888d735540bf289229f8206}} dataset, we show that gender annotations using facial analysis models are on par with ground truth labels, in some cases surpassing facial analysis models, obtaining an F1 score of 91%. This method is a reliable way to acquire data for training demographic models in real-world scenarios.
However, due to the limitations of facial analysis models, the proposed method only generates annotated data of front-view walking patterns. To overcome this problem, we propose using a semi-supervised approach to generalise to other angles in which the face is not visible. We leverage gait-based metric learning and label propagation to generate noisy pseudo-labels on unseen angles, and use appropriate loss functions developed for training models with label corruption. Training a model on pseudo-labels from label propagation, we obtain an overall F1 score of 82% on the CASIA-B dataset {{cite:f50931ed8b322cf20f499dacf26c110566bbb9bd}}, similar to a model trained on real labels.
| i | 8372a1d45465b9a34cccf8dba8b5da44 |
The final accuracy for each model is reported in Table REF . One can observe that our monolingual French model performs only slightly better than a multilingual model (mBERT), which could be attributed to the characteristics of the PAWS-X dataset. Containing samples with a high lexical overlap ratio, this dataset has been shown to be an effective measure of model sensitivity to word order and syntactic structure {{cite:a8f2a87dcbd0b79b0abaa1200a3b37992e920fa3}}. A multilingual model such as mBERT, therefore, could capture these features as well as a monolingual model.
{{table:f7040120-e4ff-4c21-8b12-0cdb5c1e6b94}} | r | b6d5a83746858379f3be6507cc2120e4 |
As the first step in the numerical solution process, the initial condition given in (REF ) is used to specify the slip boundary condition (REF ) for the flow problem. The self-propulsion speed {{formula:5e3e77ed-6d67-4f24-8f54-4e61b1f73a86}} at any instant of time {{formula:83b132d8-0246-497f-ab89-422fe6fe5eef}} must satisfy the requirement that the total hydrodynamic drag force on the particle in the {{formula:76bf85b4-bc71-4c11-a34a-78412cb9c996}} -direction, {{formula:c613b902-dac0-4a9e-aec1-7be2e8b1e589}} , given by {{cite:60cf3fcd4219dff3876b41f4bd78248e16f8ef0c}}
{{formula:356c337b-2a16-49bb-a295-8aaa8bfbf9a8}}
| m | 7036ac77190a58c1adc695235b0860f1 |
In the diffusive limit the Green function can be expanded up to the first two terms of 2D harmonics, namely {{formula:dc1ad96f-cdee-404d-bbfc-f050bd90a52c}} ,
where the zero harmonic is isotropic and its amplitude is larger than that of the first harmonic.
We substitute this expansion into Eq. (REF ) and perform an integration over momentum directions. By taking a spin trace we arrive at {{formula:054c7bea-f8e2-4263-9341-2361e92f04d2}} , which results in the Usadel equation {{cite:c85388a62a583894410c70e6d4579d6a8c7bfec8}}:
{{formula:9076b477-42a5-4a86-8b54-0f327616cdb3}}
| r | d3869e930f3687ef4dd9a3d7273261f0 |
We compare our results with SRC {{cite:39e39cc692cee0d19cd39148b268072ecc141996}}, LC-KSVD {{cite:c8f522b8f1be6040f760b1880f4f0dbd58d0cd31}}, DeepSC {{cite:a0d329756d498233515aec57e24c69b9a00211f2}}, DSN {{cite:67188ddacb60a9cc43b6ba9f3ef9eae9fa6c01a2}} and other state-of-the-art approaches: ScSPM {{cite:d118e8225e554916ff9267e86b0dca6728d43275}}, LLC {{cite:56c99328e7b542292ed60a60524dbe59a33144c2}}, ITDL {{cite:0592eed4f6eddd81f2f0ddaaed6669df040d271f}}, ISPR+IFV {{cite:f0e2d9f41407024738d1f6591470a9e04e2a9652}}, SR-LSR {{cite:59db1c9d4df22483d42a0bbfb7653b8ead262d0f}}, DeCAF {{cite:846c428d9f116cd1250a6af31140abe4b5b84ec2}}, DSFL+DeCAF {{cite:82dd0d0e08864d6e7921de32cec77b35d20b0523}}. The detailed comparison results are shown in Table 5. Compared to LC-KSVD, S-DSN(relu)-1 performs much better, achieving a {{formula:69d7d6d2-5724-43c9-94cc-dc94affe1cb9}} improvement. It also registers about a {{formula:14ec2323-8b0c-4350-bdc0-7b1dffd58921}} improvement over the deep models: DeepSC, DeCAF, DSFL+DeCAF and DSN. As Table 5 shows, S-DSN(relu) performs best among all existing methods. The confusion matrix for S-DSN(relu) is further shown in Figure 2, from which we can see that the misclassification errors for the industrial and store categories are higher than for the others.
| r | 0b5e23d61bd0c369a35402ef2a436400 |
Transformers have revolutionised the natural language processing (NLP) domain with their ability to handle context over long sequences of data, achieving human-level accuracy on various tasks such as language translation, text summarization, question answering, language modeling, and text generation {{cite:23b1ad8c79ac857f9582b46fe63b9bba13077f11}}, {{cite:0fcee48494fb85e84fad170a1e5ee96948f5c06b}}. On the other hand, in recent years, vision applications have been almost completely dominated by convolutional neural network (CNN) architectures, which can exploit the two-dimensional (2D) structure of images using inductive priors, such as translational equivariance due to convolutional weight sharing and partial scale invariance due to pooling operations. Even though CNNs possess these advantages in handling image data, they cannot scale up receptive fields without increasing network depth, and require several layers, especially pooling-type mechanisms, to capture long-range dependencies. While the effective weights in attention mechanisms are dynamically calculated based on the inputs, popular CNN architectures such as ResNet {{cite:9d690278941c9ef3602b41c6bc599e06678e7f0c}} and InceptionNet {{cite:1adbfc9405ec748be2008fab1fec17ac27224851}} lack an attention mechanism and use static weights for each input, although there are exceptions, such as squeeze-and-excitation nets {{cite:0af4472e28a19b13386b9f486048dbb1a08ff19c}}, {{cite:c9fd72a477e10450ec7f90a70ccfb99981891117}}. Thus, CNNs can further benefit from carefully crafted attention mechanisms that capture information from all spatial locations of an input. These advantages of attention mechanisms over vanilla CNNs have motivated research into transformer architectures suitable for vision applications {{cite:981595d6d3b79e8fbf20a803bc0579b41913bec5}}.
| i | 25ea3de7e760d87d8bc0680df952056c |
We also implemented two other deep learning architectures with pre-trained word embeddings, Word2Vec and fastText, which were extended with GRU, LSTM, and CNN deep network layers. Word2Vec {{cite:a0d721cb2565bbb1bdd6ad71f5f073d10011c136}} has been quite successful for SA across several languages, including Bengali {{cite:bca2acdf4703c2be53ad9eca8f68d12633ca1efb}}. Also, fastText {{cite:731c984fabd90ae37a8fc3cd1002d5ddf516e204}} gained huge popularity in Bengali text analysis, mainly due to its operation on character-level n-grams {{cite:a290845c8e5adb12978a78b01a9cb80d7185d5b9}}, {{cite:9b318bf9bf34466b90a4c92f50e9294b8c25569c}}, {{cite:96d6e0c45fcc29273954721b29adbbba2c788813}}. So we compare the SA performance of these models for the 2-class and 3-class classification with that of {{formula:d06069d9-81b6-4f5a-bbdf-a499936db585}} , and present the outcomes in Table REF .
{{table:3f772dcd-5351-4576-9fa7-68c4a287ec30}}{{figure:c66a82da-07c2-4797-bd05-8a2269528587}} | m | cf33f84f5b5b646f23674794453299d4 |
The proposed method was validated both qualitatively and quantitatively on the T1 modality of the Brain Tumor Segmentation (BraTS) 2018 database (https://www.med.upenn.edu/sbia/brats2018/data.html). From a total of 210 patients, each with 150 slices, we randomly selected 16 patients for testing, and the remaining subjects were used for training in a subject-independent manner. Training was performed on four NVIDIA TITAN Xp GPUs with the PyTorch deep learning toolbox {{cite:4ddf1e3de9c552f391a8397e715fc7a428ebd8cb}}, which took about 5 hours.
| r | 6bf7524d1120bf6b775b3d7e4ab34e37 |
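The subject-independent split described above can be sketched in a few lines; the patient IDs and the random seed are hypothetical. The point is that slices are assigned to train or test only after whole patients have been split, so no subject ever contributes to both sets.

```python
import random

random.seed(42)
patients = [f"patient_{i:03d}" for i in range(210)]  # hypothetical IDs
test = set(random.sample(patients, 16))              # 16 held-out patients
train = [p for p in patients if p not in test]

# Expand to slices only AFTER the patient-level split.
train_slices = [(p, s) for p in train for s in range(150)]
test_slices = [(p, s) for p in test for s in range(150)]
print(len(train), len(test), len(train_slices), len(test_slices))
```

Splitting at the slice level instead would leak near-duplicate adjacent slices of the same patient into both sets and inflate test scores.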
We study Euclidean SU(2) Yang-Mills theory on the hypercubic lattice {{formula:b049f987-34c8-4656-8bb7-24c52cb479a4}}
with dimension {{formula:a9c541d1-b704-4a35-849e-4072e029930f}} . It is widely believed (see, for example,
the book {{cite:1f22085d3391a7bbefcf3e9dcb72a80ef7ff39de}}) that
the gauge theory shows a quark confinement phase with a mass gap
for all the values of the coupling in dimensions {{formula:f7344c1b-677b-41e1-b773-17528f2a218d}} .
On the other hand, the corresponding U(1) gauge theory in dimensions {{formula:bca3bc76-1a62-4e6a-853e-6470b34ee96c}} is proven to show
the existence of a deconfining transition to a massless phase {{cite:279da0a894f533d8194988dccd2fd5e38990cbc7}}, {{cite:ed6f7ca94048c36cd85f729f363f482c6a704316}}.
Thus it is expected that there exists a crucial difference between
SU(2) and U(1) gauge theories.
| i | 4e74c3a7ed4fee9d98f703ae43b37dd1 |
Our work builds on an extensive literature on entropy-regularized reinforcement learning {{cite:b0c1985b650cee35bc0931a054c4a458154c841b}}, {{cite:06502b0fb773bdd663d79aba571c975933f8a697}}, {{cite:d9cafec80f6837fba859ed8dc0c055227840d4cd}}, {{cite:e9a6adc603226dbb358fbcf5955aba62f59e2331}}, {{cite:1c7abce49d5b1fc351111058fbc547b18595f9e4}}, {{cite:45cb412c654e5ab8246e47b5bafd6fb3786a9030}}, {{cite:6d4386e0fb47ba3996fcf5c2042c55ef85a5dba2}}, {{cite:5f8881aba7e088464e098d8b015fa0a8ff40c0f2}}. While these approaches emphasize the regularization aspects of entropy, external rewards still serve as the major drive of behavior. In our approach, in contrast, we take maximizing action-state path entropy as the agent's exclusive goal. This enables agents to generate rich behaviors constrained only by their body dynamics and the environment where they are situated, while avoiding absorbing states, where action-state entropy is zero. We also extend the notion of action entropy to action-state entropy, which emphasizes visiting the state space, not merely generating all available actions. Our work also relates to the literature on intrinsically motivated agents and empowerment {{cite:953a9fc683a42b249404387c0b4ee7480def4ab3}}, {{cite:9182b7cc58e8f8f0b6b0fb654c00a3f2687b6ec2}}, {{cite:ffa9451a24f78b5306d7da23036afa23df8c1ff1}}, {{cite:a5976134cf66aa7f495b35c7a8e5d769858469cc}}, but our agents' goal is to maximize a non-negative linear combination of action and state path entropy, rather than the predictability of future states given the performed actions.
| i | a0376319f0c19d237bc7ed4f25329798 |
In the previous section, we showed that the IVA cost function is a special case of the TRINICON cost function up to an additional factor of two originating from the representation of complex-valued random numbers. To this end, we simplified the TRINICON model by relaxation of the linear demixing model to a circulant one and by dropping the exploitation of nonstationarity by an additional expectation over time-domain output samples. In this section, we want to investigate the form of the cost function if the exploitation of nongaussianity is dropped, i.e., only SOS are considered. To this end, we rewrite the TRINICON cost function (REF ) such that it is expressed as a sum of differential entropies of output signal blocks {{cite:6051d65b53dd556d5bac9e52905be2e41060e377}}
{{formula:4559082d-dd71-4ee8-91ac-91455ade29dd}}
| m | 65a5f6d414cabcfa5f84cd3d2033327c |
(2) Controlling the superfluid in Fig.REF c:
The SF is flowing with a finite velocity {{formula:1ea8d480-6d37-4951-85df-a1845fcda2a5}} . It was discussed in {{cite:1d9c56fcab2db5564e0df7d88416378518d70f9e}}, {{cite:2e1056a8b59673f29b3fcd999f2b347c6d248c1d}} and more recently in {{cite:6216a8bfd27bb3c21ac1be78c2760e750f02a50f}}.
As shown in {{cite:6216a8bfd27bb3c21ac1be78c2760e750f02a50f}}, the flow of a SF with {{formula:7ca7c712-31a3-4126-bee0-16cf80086971}}
may not destroy the SF, but the order parameter develops small additional components at the critical momentum, thereby reducing the
superfluid density. However, when {{formula:d62bd1e0-342d-4df4-95c1-20634452bacd}} is increased further, the fate of the SF is still unknown.
As a by-product, we will revisit this class from our effective action approach in either the phase or
dual density representation in Sec. IX-A and IX-B.
We will also analyze how it differs from class-1 above and class-4 below, which is the main focus of this paper.
This story is also reviewed in Appendix E through microscopic calculations such as the Bogoliubov method and the Galilean transformation.
| i | 14ab756933838e0307c2f72371582112 |
A key step in the analysis is to carefully utilize the design of the stochastic estimator of the gradient. Traditional methods that simply use an unbiased gradient estimator of the objective function are not applicable to many problems and also suffer from slow convergence due to the large variance of the unbiased stochastic gradients. Recent studies in stochastic non-convex optimization have proposed better stochastic estimators of the gradient based on variance reduction techniques (e.g., SPIDER, SARAH, STORM) {{cite:08c2b2a3e3cc1cce22cbb958b70139776b99edee}}, {{cite:e4638e790beea203c25e40148b99e74f3c1043bb}}, {{cite:c249cc4432955fdf0648c14f2402bb4a9668f92e}}, {{cite:7d5e2226bf81bc9274dbb44e00a2485a2a80ee39}}. However, these estimators sacrifice generality, as they require the unbiased stochastic oracle to be Lipschitz continuous with respect to the input, which prohibits many useful tricks in machine learning for improving generalization and efficiency (e.g., adding random noise to the stochastic gradient {{cite:e27d1816998ee550622855106d88af59e98b2303}}, gradient compression {{cite:051c083652ccbdfcc41843f0095a8517254570be}}, {{cite:e8cd53d51702c830bc49b51f370617b8b2c58230}}, {{cite:e49cb8848cfec07055f0a64c9a02418f7417ecc3}}). In addition, they require computing stochastic gradients at two points per iteration, making them even more restrictive.
| i | b1c2e048591a925c0f2faeb59b07f2c9 |
Performance Evaluation of DispNet-B.
To further compare DispNet-B with other stereo matching methods, we evaluate the performance of DispNet-B on the subset of Flying Things 3D (clean pass, disparity {{formula:75244377-1c4f-455d-a560-869140bd6bf2}} 96 pixels) test dataset.
Since we are only concerned with the predicted disparity in non-occluded regions, we adopt the endpoint error (EPE) on non-occluded regions as the evaluation measure.
As shown in Table REF , the EPE of DispNet-B is comparable with DispNet {{cite:9c03f5e523082b5702e183ad308d0fa868bf6c7a}} and worse than GA-Net {{cite:56fc2fdf7255f9cde46f006d43de70b6a76cb634}}.
However, DispNet-B is only 6% of the size of DispNet and 38% of the size of GA-Net.
In terms of inference time, DispNet-B is about 7 and 778 times faster than DispNet and GA-Net, respectively.
Moreover, DispNet-B can predict the bidirectional disparity maps for both views simultaneously.
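The non-occluded EPE measure used here can be computed as in the following sketch (array names and shapes are our assumptions, not the paper's code):

```python
import numpy as np

def epe_nonoccluded(pred_disp, gt_disp, noc_mask):
    # Mean endpoint error over non-occluded pixels only.
    # pred_disp, gt_disp: (H, W) disparity maps; noc_mask: (H, W) boolean,
    # True where a pixel is visible in both views.
    err = np.abs(pred_disp - gt_disp)
    return float(err[noc_mask].mean())
```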
| d | 232e59d788d7be2e97e1fb40673de01d |
We have now extended the renormalization group functions of QCD in the
{{formula:560c781e-f06e-47d0-883d-32541bc9a160}} scheme to five loops. In addition the 2-point functions of the three
fields are also available to four loops in the same scheme. However the results
of more immediate use provided in this article are the determination of the
four loop Green's functions where the quark mass operator and separately the
vector current are inserted at zero momentum in a quark 2-point function in
both the {{formula:f46c6981-ff38-47a5-96dc-6d3a8abb5a53}} and {{formula:cf3592ed-aa21-459b-9ccb-80ea5c91bbd4}} schemes. These are important for the wider and
ongoing lattice gauge theory programme of measuring quark masses more
accurately. Our observation is that the four loop corrections of these operator
Green's functions are not significantly different from their three loop values
at a particular reference point. This should in principle allow for a better
understanding of errors in extrapolating and matching lattice data to high
energy for the exceptional momentum configuration considered here. We recall
that the corresponding non-exceptional, symmetric point renormalization
was carried out to three loops in {{cite:c941b7a31bc18c00ba713cb586da8903573e43b2}} in the Landau gauge. While this
equated to the loop order achieved in {{cite:ae4fb583df8b87c14435d6c07c251036ff742a61}}, various analyses such as that
of {{cite:e53d78a3445a51a45a6bd1c92aacd848125c08fe}} used those results to evaluate operator renormalization constants
on the lattice. For instance, {{cite:e53d78a3445a51a45a6bd1c92aacd848125c08fe}} confirmed the expected behaviour of the
renormalization constants over a wide range of momenta down to infrared scales.
At a particular point it turned out that the exceptional momentum case began to
deviate from expectations. As the three loop {{formula:4df4c9ad-2063-4b62-ad54-812615776919}} perturbative renormalization
was used for that analysis, it would be interesting to see whether the four loop
information of this study improves the behaviour in the infrared and, if so,
how well it compares with the symmetric point measurements based on both
the two, {{cite:ae4fb583df8b87c14435d6c07c251036ff742a61}}, and three loop Landau gauge data, {{cite:c941b7a31bc18c00ba713cb586da8903573e43b2}}.
| d | c6d494224a6dfe8f4d956cd3a0224703 |
To the best of our knowledge the graphs scaled to in this work are the largest successfully considered by learnt heuristics for Max-Cut – with ECO-DQN {{cite:6a6ff09f19932313a4727bf8b5988c2d43acda51}}, S2V-DQN {{cite:62a0e58d7187907335b55ae9a6496ab6aec9c49d}} and the RL-SA algorithm of {{cite:729268959f596c82901dfa653d1b26e39cbb4f6c}} previously scaling to 2k, 1.2k and 800 vertices, respectively.
Full comparison to the numerous non-ML based approaches to CO problems is beyond the scope of this paper (though extended baselines are presented in SM Section E), but it is clear that ECORD represents a significant step forward in the scalability of learnt heuristics.
| d | aefa393e15979fc6df5bba6f7948cd88 |
In the imaginary time formalism, the one-loop integral associated with diagrams such as the one shown in Fig. REF is given by {{cite:68e30b4e338f90d55d9799355406f45b927acdfa}}, {{cite:5f2e297d6251eea70839b5e628947092a5b70ad7}}
{{formula:f9d014e3-0b6e-40db-8aba-9f71c27ff3da}}
| m | 8925e0cfab41f5be2bb7276e24a2bc9b |
For the computation of the scalar spectra, we did not introduce gaps in the temperature time series to distinguish between the gas and liquid phases. This is mainly because the response time of the thermistor is comparable to the bubble residence time and the bubbles have a temperature similar to that of the surrounding liquid, as discussed and shown in {{cite:27daaeefbfc967785333a0c77150114422cd0a96}}. The Bartlett method ({{cite:7bebf60fee792a7b82d707a6c135b076e8400139}}) is applied to the temperature signal using 10 partition segments, each covering at least one turnover time, as determined by inspecting the auto-correlation function of the signal.
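Bartlett's method as described amounts to averaging periodograms over non-overlapping segments; a minimal numpy sketch, where the normalization and per-segment mean removal are illustrative choices:

```python
import numpy as np

def bartlett_psd(signal, n_segments=10):
    # Bartlett's method: split the series into non-overlapping segments,
    # compute a periodogram per segment, and average to reduce variance.
    seg_len = len(signal) // n_segments
    segs = np.reshape(signal[:n_segments * seg_len], (n_segments, seg_len))
    segs = segs - segs.mean(axis=1, keepdims=True)  # remove segment means
    periodograms = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seg_len
    return periodograms.mean(axis=0)
```

The variance of the estimate drops roughly as the number of segments, at the cost of frequency resolution, which is the usual trade-off when choosing the segment count.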
| m | 037fc8027fa105f912413731e55ca952 |
Classification models transform CE into a classification task {{cite:358bb60246a7550a5e644c599f3ff71e98e19dde}}, {{cite:21a641e84c8415ec82921e9a36ad63b85b65ce0a}}, {{cite:f090763682f1404d5a6fd76ed757953865deea64}}, {{cite:10a931c5242366df05ca079f0f794ea0c0015a0f}}, determining which concept in a predefined set holds a hypernym-hyponym relation with the given entity, but they cannot acquire new concepts. Sequence labeling models have been proven effective on extraction tasks {{cite:eff50fa30baeece8d81066298f1ae0d48fec40bf}}, {{cite:d5bb479702ebac7b6ec51822dfbdfc457f543595}}, {{cite:fd91ef71bd233b77e05a3d899cbb3db0c9217957}}, {{cite:b8960369ac2ad3127793b2c1d10d24fc966215d1}}. Given extraction features, sequence labeling models can also extract concepts from the texts describing entities, as our MRC-CE does. Recently, pre-trained contextual embeddings have been widely used in sequence labeling models {{cite:3776744949bf9d35e901e94f37d110758504553c}}, {{cite:104fb2182914404af7fcf95c7cee5dd62138e754}}, {{cite:023bfc4d303de2a69687d89369ee26fb5f1f07ab}}, {{cite:fdd2fbe8e68163cf3e01d0a9d260f9a58b928085}} to gain good performance, but they cannot handle the problem of concept overlap.
| m | 96fe064d59954aa6b47dfc5f3e83bfe0 |
Other pairs of modalities.
We were interested in whether the improvements obtained by CrossCLR are specific to video-text retrieval or whether the loss generalizes well to other pairs of modalities.
To this end, we compared CrossCLR to the NT-Xent loss {{cite:58d1e77deb9edf83d7346f33345e3e70b1672574}} on various combinations of input modalities derived from the LSMDC dataset.
Table REF shows that, with very few exceptions, CrossCLR shows better retrieval performance than NT-Xent. This demonstrates that the principles behind CrossCLR are not specific to video-text retrieval but are applicable to learning cross-modal embeddings in general.
| r | 7a2ae8ee3d835300d22e625d7dfcccfc |
The proposed method NeXt, based on the FCM clustering algorithm, outperforms all the other implemented necrosis extraction approaches in terms of both spatial overlap-based and distance-based metrics.
Adaptive thresholding using Otsu's method {{cite:542cd67e032538b39a89be7f0d6325cb482adfcc}} achieves good segmentation results by exploiting the bimodal histogram distribution of the pixels included in the GTV region.
However, the FCM clustering algorithm, by introducing fuzzy logic that allows partial class membership, enables a more flexible classification process than adaptive thresholding {{cite:542cd67e032538b39a89be7f0d6325cb482adfcc}}.
{{table:bb1a204a-f1af-47ee-9437-18b1ce600255}}{{figure:5d26d7b3-eef0-484f-86b3-f6d51ceff695}} | m | 031e38b759e8f09f6df31a213b66845d |
The F555W and F814W HST filter exposures provide the deepest CMD of the cluster available to date. The theoretical Padova isochrone ({{cite:c004a43d4305bd9d82b3a5a1532b7c018165264d}}) that best fits our CMD suggests that the distance to the cluster is 10.5 {{formula:33ff6f73-1655-4ad0-92b8-c93ca9ad5720}} 0.4 kpc, it has an age of 0.72 {{formula:69754ea3-8387-4db1-972f-0518e552e7df}} 0.1 Gyrs, a reddening of 0.695 {{formula:324eef1d-0528-476a-b512-ba3539a7bd3f}} 0.06 and a turn-off mass of {{formula:94867828-5552-48fc-a4ce-28c1e3db20de}} 2 M{{formula:edb05a81-b903-442f-86a0-09e399a893fd}} .
| m | 33e1ea2b89dba309b8006c4db9989f7d |
Text Generation for Audio Recognition. In Table REF , we test our model on the audio recognition task. With audio only, or with both audio and image as inputs, we calculate the word error rate (WER) between the model's output text and the ground truth. The compared methods include several APIs from Baidu and IBM and a state-of-the-art model, Espnet {{cite:c3d5763afe0e779773dd57922b24b1944ddac3a3}}, which is pretrained on the LibriSpeech dataset, yielding WERs of 48.35, 57.47 and 46.89, respectively. These methods take audio as input, and all of them are tested on the same OpenImages-5K dataset. It can be seen that OPT outperforms these compared methods by a large margin, improving WER by at least 15 points. In particular, with image features, audio recognition performance can be further improved by about 1 point.
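WER is the word-level Levenshtein distance (substitutions + deletions + insertions) normalized by the reference length; a minimal sketch, not the evaluation code used in the paper:

```python
def word_error_rate(ref, hyp):
    # Dynamic-programming edit distance over word sequences,
    # reported as a percentage of the reference length.
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(r)][len(h)] / len(r)
```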
| r | bcf674cfb1e85b0ea4b836de38b8ee28 |
We use PyTorch to implement our tree optimization algorithm, denoted Ult. Tree. All code is publicly available on https://github.com/chens5/tree_learning. We compare the performance of our method with Sinkhorn distance {{cite:aa9b9e314ce8d4ef68b9dedb661036251b6eec2e}}, Flowtree {{cite:4befdd8513869d9a9287ce2211ca213f518d1c66}}, Quadtree {{cite:5a563382ce32863988ba0b1ca3e73358936bea3e}}, weight optimized cluster tree Wasserstein distance (cTWD), weight optimized quadtree Wasserstein distance (qTWD), and their sliced variants {{cite:4311a51c3c8f1e0c2fd29caf252f1a47038a571f}}. All reported results for Sinkhorn distance are computed with the Python Optimal Transport {{cite:c01fe337643e1454143f845aa8259cdb5ba1060b}} library and with regularization parameter {{formula:266e109a-d170-41c0-8f19-0d981abc335a}} .
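For reference, the entropically regularized cost that a Sinkhorn routine computes can be sketched with plain scaling iterations (a minimal numpy illustration, not the POT library's implementation; it omits the log-domain stabilization a robust solver needs for small regularization):

```python
import numpy as np

def sinkhorn(a, b, M, reg, n_iter=200):
    # Entropic-regularized OT cost <P, M>, with P = diag(u) K diag(v),
    # K = exp(-M / reg); u, v found by alternating marginal scalings.
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return float(np.sum(P * M))
```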
| r | 1fbeba5ff6c3f95aba738b102481eebd |
We conduct simulation studies to demonstrate the performance of our methods. First, we compare our proposed inverse probability weighted (IPW) and multiply robust (MR) estimators to the outcome weighted learning estimator (OWL) of {{cite:582dcbd326a87d736563d6feef137f5031136add}} and the IV-based estimator (IVT) of . In our simulations, we violate the no-unmeasured-confounders assumption and allow the endogenous variable (i.e., compliance) to have more levels than the IV, so the latter two methods are expected to fail.
Second, we assess the robustness of our proposed estimators presented in Theorems REF and REF to misspecification of the nuisance parameters.
In this section, we refer to the value function estimators in equation (REF ) and Theorem REF as the IPW and MR estimators, respectively.
| m | bd159c176f081896a112b5f5496b6de8 |
Knowledge graph (KG) representation learning is important to the KG inference as well as the downstream tasks {{cite:ff2beddb3968e35ea02e0a5bbc7fe7caf076eb96}}. It has been noticed that the embedding space has significant effects on the performance of KG completion tasks. Previous works have proposed the KG embedding models in Euclidean space {{cite:173515e70704249edeea43622a0d501d81e3b67c}}, {{cite:7c0470a4fafb350173e18944c0aa41284e58c856}}, {{cite:7156b79226a684531d97ea9c01aa85c173c1c753}}, complex Euclidean space {{cite:52467efc94121d6a120136529194a1989dc2112d}}, {{cite:f24f59cd5d5edea9aa97623ec0b2018b8c5cd3e5}}, hyperbolic space {{cite:15882ab4c9e63f7247ac5112d55edac1c6993155}}, {{cite:fa5f502a3928da9c61d8de92e4c2ea799191571a}}. To avoid wordiness, in this paper, we use hyperbolic space to refer to real hyperbolic space, hyperbolic geometry to refer to real hyperbolic geometry, and hyperbolic embeddings to refer to real hyperbolic embeddings. These models learn the embeddings of the KG entities in the selected geometric spaces and parameterize the relation representations as the geometric transformations, such as translation, rotation, matrix multiplication, etc.
| i | ae2638a91edaebc9035d09500585c9dd |
As a first attempt, our studies provide a baseline for future
investigations of the correlation of helicity polarization
induced by the axial chemical potential, which is a possible signal
of local parity violation proposed by Ref. {{cite:fec3e6efa278cd490e0ab21293224b89b2538734}}, {{cite:5f4276a9aa1c5ddad473ed8825404604517258b8}}, {{cite:46f98930ea26c1d09349777b3cd4db112542d784}}.
Meanwhile, we find that the helicity polarization {{formula:fa9a7040-e448-482c-9827-62a2c1e3baec}} mainly
comes from the thermally induced local spin polarization {{formula:a1271589-8246-4300-97f5-1c0d4fccf6b2}} .
In future measurements of helicity polarization, one
might match the numerical simulations of {{formula:f2d2c0a0-b2fd-4b98-80df-57689bee35d9}}
or {{formula:415bcaaa-478d-4e1c-92b0-8f18ada6eb11}} with the experimental data of {{formula:89c7a27c-7ba5-441e-9de1-0ba1cea9b162}} .
It may help us to distinguish the {{formula:f3c420f8-1cb6-4ff7-91fa-35037f1575fb}}
from local spin polarization induced by other effects.
| d | 371b176bdb2de2ed93ff8b097686ad57 |
To further demonstrate the effectiveness of our method, we follow the same setting as that in previous methods {{cite:053082d1aa4936f76f6b5931a0bd3460c148130b}}, {{cite:81396f763af11ed2210cf758904a1de2bcd7163c}}, {{cite:632bc3af727b17087fb5bd06371535cfe3601283}}, {{cite:c61769549817b033f9bf1601a04dc0a813f9cc81}} with different degrees of supervision using the COCO dataset. For a fair comparison, we adopt the same data augmentation strategy. The results are listed in Tab. REF . For unbiased teacher {{cite:81396f763af11ed2210cf758904a1de2bcd7163c}} which uses larger batch size and longer training schedules, we retrain it under the common training schedules with the released official implementation. It is noteworthy that our method outperforms CSD {{cite:b6f611ae39af58be3eb140fe16f764ea8535344f}} and STAC {{cite:053082d1aa4936f76f6b5931a0bd3460c148130b}} by a large margin. Compared to recent works such as unbiased teacher {{cite:81396f763af11ed2210cf758904a1de2bcd7163c}}, instant-teaching {{cite:c61769549817b033f9bf1601a04dc0a813f9cc81}} and humble teacher {{cite:632bc3af727b17087fb5bd06371535cfe3601283}}, our uncertainty-aware noise-resistant learning can also consistently achieve better results under different degrees of supervised data. Particularly under the 2% supervision data setting, our method outperforms the instant-teaching by 1.6%. With 5% supervision data, our method outperforms humble teacher by 1.2%. This comparative study further validates the performance of our proposed method.
{{table:443569cb-b434-4478-b1b7-4b7b44eeff54}} | m | 4c5eda426e9833cf256b1e98d8af2006 |
One natural question which arises when preparing a neural network is that of model capacity {{cite:d343de2f1fd4efb230d23c51e8325e345c3a2b92}}. A network which does not possess enough cells or layers may be unable to take into account all the complexity of the task. On the other hand, a network with a very large number of cells and layers will sooner or later learn features of the experiment which should not be significant (for instance, all the details about an amplifier used in the setup). This prevents the network from generalizing, i.e., accurately predicting unknown data. This is in principle dealt with during the training phase {{cite:d343de2f1fd4efb230d23c51e8325e345c3a2b92}}, but it is only when the network processes fully new data that this issue can be totally ruled out. Here, this issue has been taken care of by predicting arbitrarily complex trajectories and also by using two different setups. In fact, the imperfect reconstruction in the case of the unknown experiment is most probably due to the model learning some system-specific features of the training experiment. This can be mitigated by a minor retraining of the final layer of the model on the new experiment (a procedure known as "fine tuning" in the deep learning context). We have noticed that a larger network featuring more than {{formula:3fbbaecc-7d79-4439-af0f-5873b75a6d72}} coefficients instead of the {{formula:9dc7a8ff-273b-45fb-abda-994941e68fc4}} used here does not lead to better training and may even lead to worse predictions in the unknown experiment.
| d | 138efdd82bb66ee5fa5d1679b0f3232b |
The DPPs associated with the
Schrödinger representations of
{{formula:370aa221-5822-44bd-a101-0722bd634e18}} were named
the Weyl–Heisenberg ensembles
by Abreu and his coworkers {{cite:0576680117bb1566b07bdff6ae402fbe2f8490e5}}, {{cite:86013608f855cb3f36e0f5c6d9071a424f6d93b0}},
in which {{formula:a41409bf-e175-46fc-9604-64a269300862}} is called
a window function in the context of the
time-frequency analysis {{cite:e5f7c18e4f63427ec3626387b4bebfb6156c581b}}.
See also Section 2.6 of {{cite:e8e93620c8c1fea4ec5b9e77b790668a251774de}}.
It was proved by Abreu et al. {{cite:6f240ffa95c5bc9a19d9a1d4e05aad3b4941da36}}
that the Weyl–Heisenberg ensembles
are in the hyperuniform state of Class I
for a general class of window functions.
| i | 56cdc9dd42229b5c268276f205747196 |
where {{formula:743e4bfb-0693-4ed9-9ee7-af0240815310}} . The above expression could be modulated by a factor {{formula:ada77aa0-9bbe-46ed-be45-2393e64888f6}} , where {{formula:657ff096-3139-4fb8-b3e4-26519798585f}} . However, {{formula:0bfb45c3-e2e5-4588-b325-2a1ae2ce0671}} appears to be the most natural solution, which can be calculated exactly {{cite:3f1be53d2cd38ce058d15bdca6c4d9e1ff06c207}}, {{cite:0528751f6cc1c30de88ddf58a7a02ee119e6927f}}. In any case, even if one considers {{formula:7bdc853a-3415-4b74-98d1-b78334c778ee}} or the `hierarchical enhancement', the correct baryon asymmetry can always be reproduced by tuning the complex part in {{formula:7c55ee8f-be36-4a87-a908-4ffd2c5b3120}} . Most importantly, {{formula:40f78f0c-a2de-42c8-8ded-eada1d3d0605}} still vanishes in radiation domination at high temperatures with the SM-QCD thermodynamic potential {{cite:c3efa11def12d5eb36050fd71bcfa4a0928b2d08}}. Therefore, a general cosmological background other than radiation, which is now quite a natural possibility, always generates a non-vanishing equilibrium asymmetry unless the Yukawa couplings are real or purely imaginary. In the EU, any dynamically produced lepton asymmetry tracks the {{formula:e2eac75d-17f1-4a00-8573-702080354df0}} if the interaction that causes the asymmetry production is strong enough. When the interaction rate becomes weaker (compared to the Hubble expansion), the asymmetry freezes out with the potential to reproduce the correct baryon asymmetry {{formula:5d584e13-83dd-4f1f-acb0-eefdc6a63050}} {{cite:64f8c672b7344802f0426373fa0f910d3289c36a}}. In the seesaw model, {{formula:366d4ac2-8c00-4633-b75c-b8a279b43483}} interactions {{cite:5060ddf6ff7844afcdf7b588d07f490590dd1d73}} play this role. The general evolution equation that governs the entire dynamics is given by
{{formula:19603694-2f4e-46f4-9e24-63eb334594b8}}
| r | d3809b916367c00c1e4b4255af41a391 |
Interpretability techniques should scale to large networks.
Frequently, small networks and simple tasks such as classification on the MNIST dataset {{cite:b72a951535aaabee5e90738b6d3611bf6780f1ed}} are used for testing methods.
However, we stress that simple DNNs performing simple tasks can only be deployed in a limited number of real world settings, and they are sometimes easy to replace with other intrinsically interpretable, non-DNN models.
As a result, we argue that the ability of a method to scale to large models relates strongly to its potential to be practically useful for diagnostics and debugging.
For example, capsule networks {{cite:0f281b97ae63a3fc9457b545bbe59f6e48532f47}} achieve impressive performance on MNIST classification and have intrinsic interpretability properties that convolutional neural networks lack.
However, they are less parameter efficient and have thus far not achieved competitive performance beyond the CIFAR-10 {{cite:1b5cbdabfaa531628e8ef7f66a8bcad5b7f27169}} level, let alone the ImageNet {{cite:a11372b3a07577416450a1ce70772512920aad7f}} level {{cite:41c57eb6c140f24118b8939e21de21a8dc0e203a}}.
Methods like these may offer excellent inspiration for future work, but should they fail to be tractable for use with large models, they may be of limited value for practical interpretability.
We urge researchers to detail computational requirements for their methods.
| d | 80123b90d74db69257ee50e3afbcec2b |
Recall that {{formula:2002d2d9-4016-4004-9e21-28327de62b76}} .
A critical issue is
that the perturbation bound in {{cite:801d48d65a85ec588953737884d12b079b442bb9}} requires the tensor perturbation to be exceedingly small, namely,
{{formula:147f69ec-0f04-4386-b8b0-2c872fe19fcb}}
| m | 10ba8a3c228dbc6f0586dbe7c35b2b92 |
However, there are some limitations to this study.
One is the validation of whether our approach extracts the true dynamical properties (e.g. frequencies with growth/decay rates) when used as an equation-free method; as in the previous works {{cite:c22e05f89f191211e01f98e3a26d24bfdeebbede}}, {{cite:3af38716ab2614499c98bda5b387fd71b4c35788}}, these properties cannot be confirmed directly.
Originally, DMD demonstrates its strength for the dynamical systems which can be mathematically defined {{cite:605c1c5f9d7d90119f159e2b05f466eea5cc907e}}, {{cite:8e6400bdae0d04c3a7eab4ce478b98dd82beafa9}} or of which solutions are empirically known{{cite:f417b3677545fd32b508c364414b724f3b2233e8}}, {{cite:3c6022a96c68a01c1e357a7d3db12c78f5cd275d}}.
We instead validated our approach using classification performance, a qualitative evaluation with specific knowledge of the sport domain (e.g. low-frequency band) and the reconstruction error (see Materials and Methods).
However, a general quantitative validation method for the unknown dynamics is further required.
Another is to reflect more local interaction dynamics, such as local competitive and cooperative play by the attackers and defenders {{cite:050e385d3346b7dc7ef5f5ea1afc1c55c4b70819}}, {{cite:d86d0cf059278124f8ae467ac3a1c664ba6ff15d}}, {{cite:3dd4deabdcfe162547cb353a9372d71d5d1eb133}}, {{cite:e875e1aa36d750ccc0bff897c451c96db070e575}}, which can provide more practical information in the sport domain.
Although the purpose of Graph DMD is to extract the underlying global dynamics of GDSs and we can obtain the interpretable local spectra in the DMD modes, there are other approaches for extracting the more specific local dynamics.
Even when using only players' location data, such as in this study, more specific methods reflecting local competition can be applied to more practical applications such as score prediction {{cite:77154c094cd2a39d6e65c34b8f80b0a027456202}}, {{cite:ff8abe42c93c0f3c0dab88c572ec58752bbc78a6}} and predicting which player obtains the ball after a shot {{cite:f28b37a6a46c10a067fc9fd76f8506b14cdace66}}.
| d | fdc80427d64adab2154ab010c578bca0 |
As we are unable to distribute the GPT-3 version of Godel{{formula:d7a62772-c4f0-4b7a-8403-dce04c5ab730}} (Godel{{formula:1f5dbc2c-8b77-4fa4-b680-5164de7a602b}} below), we instead release Godel{{formula:14e00ca6-81c5-4184-b4e0-06d63d1500ce}} based on GPT-J {{cite:20fc3f092ce0dd7e68d7bf6f627495c2bae3b084}}, {{cite:6042e17891eb2d3d3836bb82fb160aa4e8a1ebe5}} (Godel{{formula:28da6166-3012-446c-af06-51a8fa812b5a}} ) as a proxy for Godel{{formula:dffaa66a-38fa-4940-aecb-ef7266a5f146}} . Table REF compares the results of Godel{{formula:1bed0445-6b36-466d-9018-455c4b62b8e9}} and Godel{{formula:17105bce-3667-4107-9fe6-8eb7294f8ae8}} on all tasks in the few-shot setting.
{{table:1bcb9239-ec26-4124-a75f-bebf14d5c26e}} | r | ce4209faf772662a2cae44eebe122ee6 |
Dark matter makes up approximately {{formula:34a59c1e-9fad-460a-b529-583464f3b6da}} of the non-relativistic matter in our cosmos {{cite:701b7f000ac37c68fcb4ac4953d774677fe167ef}}, but its detailed nature remains uncertain. For example, the mass of the fundamental particles making up dark matter can range from {{formula:e037bb2f-8f26-45b2-aa40-ac47d678d424}} {{cite:e2b17b4f8588d62e45e4c59ec08a983ac39ac416}} to {{formula:09cdc234-0663-4e6e-9fd9-5b72d8ff3e3b}} . The upper bound is extended further if dark matter is multi-component, or composite {{cite:2c7955f86d76d86af0c46daf21668b47651ec485}}. We also have no robust constraints on the spin of the particles/fields that make up dark matter. What we do know from a plethora of observations is that dark matter is very weakly interacting with the Standard model, is non-relativistic in the contemporary universe, and has clumped efficiently under the influence of gravity (for a historical overview, see {{cite:10cd617f93cc722cf492c88e7d8634cfa9130124}}).
| i | 35a3af19ee961d7983080a679e6cfc9d |
Post-hoc interpretation methods seek to explain why a model makes a specific prediction for a given input. At a very high level, these methods assign an importance score to each token in the input.
There are mainly three types of interpretations: gradient-based saliency maps ({{cite:f6aa407b761bc757255cc2f426005b7f62250995}}, {{cite:c231065aad94ed9a66968d728a02b89601d2a52c}}, {{cite:87ed8f02ad5e3a817943ca40ba59aecf0e69180e}}), linear-based local explanation methods ({{cite:176cafc1f4571e04e162b8b830dd0c4b99b40c9f}}), and attention-based methods ({{cite:946739abc14762b5b9552bc5e22b572fe63cadc0}}, {{cite:b4e3f00df58917dc4e97453a70bb34d6c147fadd}}, {{cite:8f8ea59403d22a263e0704a7bb2c041d9b5cc964}}).
Gradient-based methods determine token importance using the gradient of the loss with respect to the tokens ({{cite:f6aa407b761bc757255cc2f426005b7f62250995}}). {{cite:87ed8f02ad5e3a817943ca40ba59aecf0e69180e}} introduce integrated gradients, where token importance is determined by integrating the gradient along the path from a baseline input to the original input. {{cite:c231065aad94ed9a66968d728a02b89601d2a52c}} introduce SmoothGrad by adding small noises to each token embedding and averaging the gradient value over noises.
{{cite:176cafc1f4571e04e162b8b830dd0c4b99b40c9f}} propose an explanation technique called LIME to explain the predictions of any classifier by approximating it locally with an traditional linear model.
Attention-based methods use attention scores as token importance scores. {{cite:b4e3f00df58917dc4e97453a70bb34d6c147fadd}} propose four alternative tests to determine when/whether attention scores can be used as explanations and prove the usefulness of attention mechanisms for interpretability. In this paper, we implement the above methods and compare them in experiments, as shown in section .
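The integrated-gradients construction above can be approximated with a Riemann sum; the sketch below uses an analytic toy gradient rather than a real model, and illustrates the completeness property that attributions sum to F(x) − F(baseline):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=100):
    # Midpoint-rule approximation of integrated gradients:
    #   IG_i(x) = (x_i - b_i) * integral_0^1 dF/dx_i(b + t * (x - b)) dt
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for t in alphas:
        total += grad_fn(baseline + t * (x - baseline))
    return (x - baseline) * total / steps
```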
| m | 3bbe7fdb2824758fc83b366de8109ba7 |
In this section, we first introduce a Full Self-Attention (FSA) module for discriminative feature extraction in 3D object detection that aims to produce more powerful and robust representations by exploiting global context. Next, inspired by 2D deformable convolutions {{cite:17d34a00495d50095c2797b52c5a4db93f96935f}} we introduce a variant of FSA called Deformable Self-Attention (DSA). DSA can reduce the quadratic computation time of FSA and scale to larger and denser point-clouds.
The two proposed modules are illustrated in fig:proposed.
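As a reference point for FSA's cost, plain scaled dot-product self-attention over N point features looks as follows (a generic sketch, not the paper's FSA/DSA modules):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # Plain scaled dot-product self-attention over N point features.
    # x: (N, d); Wq/Wk/Wv: (d, d_k). The (N, N) score matrix is the
    # quadratic cost that a deformable/subsampled variant aims to reduce.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ v
```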
{{table:6947a45b-59a3-42c5-9416-fe3043348777}} | m | 9e97bfdc315d6d19cf2198c97fb4c46a |
Our present work demonstrates that MSAs are not merely generalized Convs, but rather generalized spatial smoothings that complement Convs.
MSAs help NNs learn strong representations by ensembling feature map points and flattening the loss landscape.
Since the main objective of this work is to investigate the nature of MSA for computer vision, we preserve the architectures of Conv and MSA blocks in AlterNet. Thus, AlterNet has a strong potential for future improvements.
In addition, AlterNet can conveniently replace the backbone for other vision tasks such as dense prediction {{cite:d38767759762ec51610be869e988049084138307}}. As {{cite:583042329fde3dbef0fc827b83be8a4baa7a60e3}} pointed out, global average pooling (GAP) for simple classification tasks has a strong tendency to ensemble feature maps, but NNs for dense prediction do not use GAP. Therefore, we believe that MSAs can significantly improve results in dense prediction tasks by ensembling feature maps.
Lastly, strong data augmentation for MSA training harms uncertainty calibration as shown in fig:augment:reliability-diagram. We leave a detailed investigation for future work.
| d | 4213d89cddf7d8ec38412b17f59df0b4 |
Set {{formula:b6059da6-61f8-46e4-a84e-da0137b239fc}} -Covering: {{cite:e0bf4bfdbcd98250396c352d5e78747fb56ecb22}} shows Compact Set {{formula:b252fbdd-e6ba-425a-bb02-866537a5dbfb}} -Covering is W[1]-hard; however, their reduction does not give tight ETH-hardness. On the other hand,
{{cite:b9d9a230d52fd153bfcdc861b33d7c458ee7de10}} shows tight ETH hardness for Compact Set {{formula:b11fce27-a366-42d6-9c5d-5d8b6a980675}} -Covering, in the sense that they show that Compact Set {{formula:f256927a-6a26-4ed7-a5a7-8c46a1108843}} -Covering requires time {{formula:9de5b44a-a2b7-4bc8-a1b2-d400a523c4f0}} .
Exact {{formula:209f0efc-fe93-4d68-be70-83735b44e028}} -Covering: We observe that the reduction of {{cite:b9d9a230d52fd153bfcdc861b33d7c458ee7de10}} for Compact Set {{formula:048750cb-07e8-4e29-9da2-789555ce9a98}} -Covering can be easily modified to achieve tight ETH hardness of Compact Exact {{formula:b6f5159b-4235-46a0-913c-7a3680f03c57}} -Covering. The idea is to start the reduction from 1-in-3-SAT, instead of from 3-SAT, which produces an instance of Compact Exact {{formula:964d9588-7cec-4ae6-b409-793e28a6044f}} -Covering.
{{formula:b5c2c6f1-212a-4e0c-993e-741ec6757043}}-VectorSum: The reduction of {{cite:66f91793bfc11cac30a9f80405373df85872664c}} shows that Compact {{formula:265559ad-1526-487f-ab74-ce20cf25acee}} -VectorSum requires time {{formula:721b6bfc-d1cb-443a-b1aa-19b6fc310fc1}} under ETH.
| r | 30d425f1a83f6017d7d28f4e5e73cc80 |
First, it is important to note that, to the best of our knowledge, all existing algorithms exhibit negligible finite-size effects for {{formula:a5dcbfef-485d-4ac2-a573-dafca031f5a5}} . Indeed, it is claimed that the empirical behavior of algorithms matches the theoretical behavior for {{formula:ccccf65f-bd6b-43c1-a90b-b9844d2de7ae}} for Unfolding and Tensor Power Iteration {{cite:78c54d839ce773b8b583414366c3b33e0d9d5d84}}, {{formula:c2ad2cb7-f566-4fc8-a410-eb0a4560b985}} for Averaged Gradient Descent, {{formula:8bda515d-bea5-4710-92ea-8126d8a40faf}} for Robust Tensor Power Iteration {{cite:1c13fb9c5ddfb191115314a9de1ec66a1ddf3274}}, {{formula:bb0184e1-1b96-4f09-b8fa-2e960f01063d}} for Higher Order Orthogonal Iteration of Tensors (HOOI) {{cite:f1f58078662c3f02969e2ce682b0f96442565cd0}}. In our case, SMPI exhibits a constant threshold for {{formula:d0c5e425-9793-44ea-bdcc-e627ec2d7911}} .
| d | 2445b0d558fcc5cfaaa3f98a45639f14 |
In order to preserve the privacy of both the data and the model, several privacy techniques are utilized in the literature. One of these techniques is differential privacy (DiP), introduced by {{cite:d595211392756c1b6809aa52dd3633b45a65269d}}. In DiP, to protect the privacy of the components involved in the process, one perturbs the input data, the model parameters, and/or the output by adding noise within a certain budget that adjusts the privacy level {{cite:56894070f14b5555dc44ae769ae25ea0ff4faa4f}}, {{cite:346d1e74a208110ffc05f9ca1e1bfb05143a863f}}. By adding noise, one sacrifices, to some extent, the performance of the model and the exactness of the result. In addition to DiP, there is a cryptographic technique called homomorphic encryption (HE) that addresses the shortcomings of DiP in addition to the basic privacy requirements. HE protects the privacy of the data and/or the model by encrypting the data and/or the parameters of the model with different key schemes. Thanks to this mechanism, various operations such as addition and multiplication can be performed in the encrypted domain. There have been several attempts to implement machine learning algorithms using HE {{cite:3dbee29be2035ada3d7387917e22cc08728276ce}}, {{cite:ebaec58c8774344c335e6646bff46f4e78c957da}}, {{cite:ad20a7abac80ba4d98577768f2dbf0ba81a4a2c8}}. However, the drawbacks of HE are its long runtime and the limited number of practical operations that can be realized with it. Secure multi-party computation (MPC), on the other hand, satisfies these requirements in addition to those already mentioned. The idea is to employ several parties to perform the required computations and to share the data and/or the model parameters among these parties in such a way that no single party can learn the data and/or the model parameters on its own.
There are several MPC frameworks proposed in the literature to address various machine learning algorithms {{cite:0dd5626e1b3dcaf9f55576089161c863b4ee6721}}, {{cite:4a7eea720b9ae84e7cf62c532bc0e4055637a258}}, {{cite:6e0477b014363b97e4a021eca3b5cda054fe2729}}, {{cite:b5c809c4fdc86acd768467161a7f9bf301a98806}}. Although they have several efficient and secure basic functions used by convolutional and feedforward neural networks, from which we have also benefited, they lack the exact computation of more complex operations, such as the exponential computation and the inverse square root of a Gram matrix.
| i | d8a1184db4d6ab57aedb118e0778709f |
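The additive secret sharing at the core of such MPC protocols can be illustrated with a minimal sketch; the three-party setting, the field modulus, and the function names below are illustrative assumptions, not details of any cited framework:

```python
import random

PRIME = 2 ** 61 - 1  # all arithmetic happens modulo this prime

def share(secret, n_parties=3):
    """Split `secret` into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Secure addition: each party adds its two shares locally; no single party
# ever sees either input, yet the shares of the sum reconstruct correctly.
x_shares, y_shares = share(20), share(22)
z_shares = [(a + b) % PRIME for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 42
```

Multiplication, and a fortiori operations such as the exponential function or the inverse square root of a Gram matrix, require additional interaction between the parties, which is precisely where existing frameworks fall short.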
It is interesting to note that skyrmion crystals have been experimentally observed in various materials {{cite:49320b00bb977a138f76d3bd62eb144c7fa65ea7}}, {{cite:ee7ead88023b989faa23b3b339ab4b579b416620}}, {{cite:50b3cb385bd549a25c5f2b15ac432c25dbd108c6}}, {{cite:ee486745c09976630884ca2d637639adf23b2e2a}}, but the most beautiful skyrmion crystal was observed in two-dimensional Fe(0.5)Co(0.5)Si by Yu et al. using Lorentz transmission electron microscopy {{cite:3cfbe8a0c4269742bda897fcc966c40e6e57c94c}}.
| i | 1b4430d9d82bb972363e441644b0705c |
We tested our network on several challenging real-world high-dimensional input time-series classification datasets: BasicMotions and ERing, which are taken from the UEA Multivariate Time Series database {{cite:d63bec668bba290ac890760a7cfd8708ef4be8bc}} and are freely and publicly available online. The results are shown below. BasicMotions has 6 input dimensions per time step and 4 outputs corresponding to different movements (walking, running, standing, and playing badminton) captured by a HAR sensor. ERing has 4 input dimensions from a prototype finger sensor and 6 outputs corresponding to different fingers. For all series, the reservoir activity was sampled at regular intervals for readout training to the target label of the given series. The weight updates were performed as one large batch at the end of each epoch for both the last layer and the {{formula:7e64d8c4-4cf8-433d-a0fa-f0c8ab5f1fcf}} weights. For BasicMotions, reservoirs of size 800 were used. For the deep networks, 4 reservoir layers were used in all experiments. We set {{formula:302f2e60-7df0-46d0-ad88-7fb56f9c881b}} to 0.1, weight decay to {{formula:fe551dc4-28f0-43cd-b998-da2d56b71370}} , learning rate {{formula:014a943e-aaab-4561-89fd-8a1241ab5296}} to 0.01, and learning rate decay to {{formula:c5269445-dc7d-4e20-8347-3c17a76d8bc0}} per epoch.
{{table:f1b4e960-bb5d-4c6c-9b42-fca31717cf4a}} | r | b35c4b62d813126a07573a3605713cbf |
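The leaky-integrator reservoir update commonly used in such architectures can be sketched as follows; the reservoir size, the weight scales, and the reading of the 0.1 hyperparameter as a leak rate are assumptions made for illustration:

```python
import math, random

random.seed(0)
N_IN, N_RES = 6, 20   # 6 input dims as in BasicMotions; reservoir shrunk from 800 for illustration
LEAK = 0.1            # leak rate; mapping the paper's 0.1 setting to this role is an assumption

W_in = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_RES)]
W = [[random.uniform(-0.1, 0.1) for _ in range(N_RES)] for _ in range(N_RES)]

def step(x, u):
    """One leaky reservoir update: x <- (1 - a) * x + a * tanh(W_in u + W x)."""
    pre = [sum(W_in[i][j] * u[j] for j in range(N_IN)) +
           sum(W[i][j] * x[j] for j in range(N_RES)) for i in range(N_RES)]
    return [(1 - LEAK) * x[i] + LEAK * math.tanh(pre[i]) for i in range(N_RES)]

x = [0.0] * N_RES
for t in range(50):                    # drive the reservoir with a toy input series
    x = step(x, [math.sin(0.1 * t)] * N_IN)
assert all(abs(v) <= 1.0 for v in x)   # states stay bounded because |tanh| <= 1
```

The readout would then be trained on reservoir states sampled at regular intervals, as described in the text.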
In this paper, we have studied Einstein-Gauss-Bonnet (EGB) gravity coupled to a bumblebee field. We obtain an exact black hole solution and cosmological solutions in four-dimensional spacetime by a regularization scheme. The bumblebee field does not affect the locations of the black hole horizons. This black hole differs from the Schwarzschild black hole in that its gravitational potential has a minimum negative value in the black hole interior. It also differs from the 4D EGB black hole {{cite:ff847970ec0390eb54d09bb055c823965982d336}} and the EGB Born-Infeld black hole {{cite:9f0dbe895624adc92661f884cf0a305cce7d13dc}} in that it has a positive value {{formula:5cb8797f-3d50-4e1c-92f0-20c4deb9f724}} at short distance {{formula:c62cc262-389e-4b4d-aed1-b7aef5ec6026}} . The bumblebee field further modifies the gravitational potential by decreasing its minimum in the black hole interior and increasing the positive finite value at short distance.
| d | 808ca20b9333da36011944cb4d88c5b2 |
The lower semi-continuity of {{formula:4386fed6-d10c-4674-9288-2de32fe915c3}} follows from the uniform bounded moments, Assumption REF and Lemma REF . The lower semi-continuity of {{formula:96bded70-bc52-46b2-b9ea-d3ce3e893522}} follows from {{cite:9e670280588ad337d8c1fcacf3b67d118d2f3aa6}}, since {{formula:52701446-ba6c-4877-bd49-cba64caa34d9}} , {{formula:e4f422b3-456e-4a3a-bd15-812226d6dc35}} is clearly 1-homogeneous and convex in {{formula:498385b2-871a-4b9e-9c7b-f05306946c2f}} for fixed {{formula:e795642c-4a86-4861-b327-9761c271e53c}} (as {{formula:58d4714e-ae1f-408c-b9b1-fcd47fc30b39}} is non-negative).
| r | ee74c5df1559227b9e8addf7b1d18788 |
In order to evaluate the proposed method on a variety of state-of-the-art classification models,
we applied our approach to the CIFAR-10 and CIFAR-100 datasets {{cite:4eda5209139475b9f3243ace37f0c2cd66254d74}},
which consist of color images labeled with 10 and 100 classes, respectively, with 50k images for training and 10k for testing.
All our experiments are conducted on an NVIDIA GeForce RTX 2080 Ti with 11 GB of video memory.
In order to compare the proposed strategy with the original networks,
we re-implement VGG16 {{cite:ad0fc60dac4a7ded1f485ae110dcc8c4dda10f79}}, ResNet18 {{cite:37106b29cf02a9ad32625be1c9de54ca19084f77}}, DLA34 {{cite:6e87091a6b3ac6a41f24225126f0afa550e4f9b5}} and DenseNet121 {{cite:85357ad63716b804d712256f6b0f0d7d11a85409}}
with and without our multiple classifiers in PyTorch {{cite:aff229efa417683653a06bc42fac6e4aa3e4a957}},
and use the gradient-based Adam optimizer {{cite:ccadfd415b0618ba1b9920a5fbd3f15c4fa14f47}} with a learning rate of 0.001.
All experiments use the same dataset in each test with a batch size of 100 per iteration,
and with the same configuration and the same number of neural nodes, as shown in tab:original and tab:MC.
For a fair comparison, we also train the original and the proposed models with the same training procedure.
We use random cropping and horizontal flipping with color normalization,
followed by random erasing data augmentation {{cite:1bd3de6d32c2d0a720b20c82b24f098e17ade260}}.
We train for 300 epochs to compare the accuracy and convergence of the various models.
In addition, a learning-rate scheduler reduces the learning rate when the loss stagnates.
{{figure:af0a3a6c-7c42-4a25-97ae-77091bfccd1d}}{{figure:f2dea636-16ab-4f56-88a2-2e0a41afe7a7}}{{figure:7a060d74-035b-4ba0-b361-7dfac434a0c2}}{{figure:f8b75b9a-2de9-4a0a-baf2-09ffaf4f588f}} | d | d76a4290c8c60c254681c69db1df9d3e |
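The loss-driven learning-rate schedule described above can be sketched as a minimal pure-Python scheduler; the halving factor and the patience of 3 epochs are illustrative assumptions, since the text only states that the learning rate is reduced when the loss stagnates:

```python
class ReduceOnPlateau:
    """Minimal plateau scheduler: shrink the LR when the loss stops improving.

    The factor of 0.5 and patience of 3 epochs are illustrative choices."""
    def __init__(self, lr=0.001, factor=0.5, patience=3):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, loss):
        if loss < self.best:                 # loss improved: reset the counter
            self.best, self.bad_epochs = loss, 0
        else:                                # loss stagnated for another epoch
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr

sched = ReduceOnPlateau(lr=0.001)            # initial LR matching the Adam setting above
losses = [1.0, 0.8, 0.7, 0.7, 0.7, 0.7, 0.7]
lrs = [sched.step(l) for l in losses]        # loss stagnates, so the LR is eventually halved
assert lrs[0] == 0.001 and abs(lrs[-1] - 0.0005) < 1e-12
```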
Safety, framed as the forward-invariance of a safe set, has become a dominant definition in control theory. Several methods have been introduced which provide guarantees of safety including model predictive control (MPC) {{cite:789b4e04d10d38f94554f4186e3e9373019a3007}}, optimal reachability-based methods {{cite:52e4861452c863136269151282beda9f5d566d44}}, and control barrier functions (CBFs) {{cite:e82a50b8397a0a5ffcebb060316807b838c0a2f5}}, {{cite:3e914c5f3b9e9acef1801bc7d2d82c4ac89ffff8}}, {{cite:7351a3ea946efb069a68cc116e444186c37365c2}}.
Control barrier function methods use a Lyapunov-like condition to guarantee safety and present advantages over other methods in that they provide guarantees for general continuous-time control-affine nonlinear systems and can be implemented efficiently as convex optimization programs. The robustness of CBF-based safety methods has also been studied in the context of state uncertainty {{cite:17dc3286780c1dda2b2bf9c7a4e28c495ee3d961}}, dynamics uncertainty {{cite:105d4c02175aa99cdd258fd589b785eea27705e7}}, and reduced-order models {{cite:35231864fc31d1a5b1985586ec0f2acff369cf85}}.
| i | 15e9ab0185363a7e96665be9fde89910 |
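For a single-integrator system with one circular obstacle, the CBF-based quadratic program reduces to a closed-form projection, as the following sketch shows; the dynamics, the barrier choice, and the gain are illustrative assumptions, not the construction of any cited work:

```python
def cbf_filter(x, u_des, r=1.0, alpha=1.0):
    """Minimally modify u_des so that h(x) = |x|^2 - r^2 satisfies the CBF
    condition dh/dt = 2 x . u >= -alpha * h under single-integrator dynamics."""
    h = x[0] ** 2 + x[1] ** 2 - r ** 2
    grad = (2 * x[0], 2 * x[1])                     # gradient of h
    lhs = grad[0] * u_des[0] + grad[1] * u_des[1]
    if lhs >= -alpha * h:                           # desired input already safe
        return u_des
    # Closed-form solution of the min-norm QP: project onto the half-space.
    lam = (-alpha * h - lhs) / (grad[0] ** 2 + grad[1] ** 2)
    return (u_des[0] + lam * grad[0], u_des[1] + lam * grad[1])

# Heading straight at the unit-disk obstacle from (2, 0): the filter brakes
# just enough to satisfy the barrier condition with equality.
u = cbf_filter((2.0, 0.0), (-2.0, 0.0))
assert abs(u[0] + 0.75) < 1e-9 and u[1] == 0.0
```

With several constraints or general control-affine dynamics the same program is solved numerically as a convex QP, which is the efficiency advantage noted above.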
In the qualitative results we compare our proposed method, PhysXNet, with the LBS method and with TailorNet {{cite:4e9a70ccdcbc3a400beb767a4f7b223571f125ef}} where possible. Moreover, we show the performance under changes in human shape and for cloth topologies that have never been seen by the network.
| r | 11d9ee606c69fb006cd39689638352a1 |
In the cases of (iii) and (iv), we explicitly model both the {{formula:bd53e812-0be7-455f-a2f1-96570314279c}} and {{formula:a8382a62-44ba-4c10-b245-f1f7ad2b84be}}
regressions, and also explicitly build in symmetry into the test statistics
to reflect the symmetry of the null hypothesis. The first two examples,
which relate to low-dimensional settings, are not obviously engineered to
have the DEF property. An interesting finding here is that these
classical test statistics implicitly use a linear {{formula:dfe62ed1-8b70-4c42-96d3-02628218ffe9}} -model. We may
speculate that this hidden robustness of classical significance tests to
potentially severe {{formula:5d368105-3b3b-4845-b964-a21bb3877633}} -model misspecification has in some way contributed
to their popularity and usefulness given that
all models—but as we have established here, not all inferential
tools—are wrong {{cite:71445b09538d816258ea30fe4c44a7d95a654d7a}}.
| d | 42354e585376d0ac26dabc11b8afbf44 |
To further illustrate the effectiveness of the MCQ algorithm, we conduct more experiments on 4 maze2d "-v1" datasets and 8 Adroit "-v0" datasets from the D4RL benchmarks. We compare MCQ against behavior cloning (BC), BEAR {{cite:18733e6c3c019e8215f9ced41f3845b14f5ed822}}, CQL {{cite:a93755ef3f2ca6620071fad9460212669b226e64}}, BCQ {{cite:ecec8be7e36bde5797fe77575f09dc493ab1c711}}, TD3+BC {{cite:5f6afcee91791e3817445c35e3a818d3070494f2}}, and IQL {{cite:f5be0fc78a784454b66a1b5c700c1b4b4ece4257}}. We take the results of BC, BEAR, CQL, and BCQ on these datasets directly from {{cite:678c491748f249fa8c58f470cab4a58f98e8bc4b}}. For TD3+BC and IQL, we run their official codebases on these datasets. All methods are run over 4 different random seeds. The hyperparameters we adopt for these tasks are shown in Table REF . We observe that MCQ behaves fairly well on the maze2d datasets and is competitive with prior methods on the Adroit tasks. BC and CQL behave well on the Adroit datasets. We note that BC beats MCQ on hammer-human-v0 and pen-cloned-v0. The reason may be the high complexity and narrow distribution of the Adroit tasks, which make it hard for {{formula:8fc1de5b-258f-4753-a702-24f9456b4fbc}} networks to learn well, so that sampled actions can easily lie outside the dataset's support. Nevertheless, MCQ achieves the highest average score on these datasets.
{{table:c79054a1-787b-4080-9d32-1f509c0f57fc}}{{table:bbd7a301-d028-4437-ab33-f1515aef4dd9}} | r | c0d05498c23937741d6bcb027c309865 |
If the mask pattern is unknown, reconstruction approaches can be designed by incorporating several properties of the encoding mask. Correlation-based approaches can be used for recovery because the pseudorandom masks have approximately a delta function as their autocorrelation. The autocorrelation of a CA image is then equivalent to the autocorrelation of the scene image: {{formula:76fe27f3-a2a2-4ca2-978e-24191e31b54e}} . The object signal can thus be recovered using a phase retrieval algorithm {{cite:ed45d6732e0960f1d8265f56a4291aa76e73d38b}}, {{cite:2952ceb277aa89731ce79f4e06f6cc29cbf66844}}. However, such methods can only restore a coarse image (specifically, near-binary quality in high-contrast areas). Other constraints, such as coprime blur pairs (CBP) {{cite:c4f726bd4f6adf719075842601691d5520c16fcc}}, can be applied for on/post-capture video blurring and recovery. Although the polynomial CBP kernels can be estimated, this imposes higher numerical-precision requirements on the captured images.
| d | 5c479189849ea0471ff35683fb634960 |
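The near-delta autocorrelation of a pseudorandom mask, on which these correlation-based approaches rely, is easy to verify numerically; the 1-D ±1 code below is an illustrative stand-in for a 2-D coded aperture:

```python
import random

random.seed(1)
n = 256
mask = [random.choice((-1, 1)) for _ in range(n)]  # +/-1 pseudorandom code

def circ_autocorr(f, lag):
    """Circular autocorrelation of f at the given lag."""
    return sum(f[i] * f[(i + lag) % len(f)] for i in range(len(f)))

peak = circ_autocorr(mask, 0)                          # equals n exactly
off = max(abs(circ_autocorr(mask, k)) for k in range(1, n))
assert peak == n and off < peak / 2                    # sharp peak at zero lag
```

The off-peak values scale like sqrt(n) while the peak scales like n, which is why the mask's autocorrelation acts approximately as a delta function in the identity quoted above.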
In the {{formula:00db3ae9-bdaf-45ad-b377-333fe05fbac6}} -continuous case, {{formula:93511f8c-99f2-4647-a8c4-8dda6886287c}} remains invariant under Eq. (REF ) {{cite:b1ca0d0969bfad7b262634699f10a5661f6cfbe4}}. When {{formula:861cb906-a53e-4e72-9a4a-46a942b5815f}} is sampled discretely, however, many of the available numerical schemes for {{formula:ff840799-e169-4bb9-b847-f02b9bfe4f3c}} will not, in general, preserve this gauge invariance. Therefore, designing a practical algorithm to compute {{formula:74dbac0c-4b95-43cc-a63a-f04f460e5c75}} is an important issue to consider.
| d | ce2e7a693b9ba8c32807de1c58964d78 |
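Assuming the gauge-invariant quantity in question is a Berry-type geometric phase (an assumption, since the placeholders hide the definitions), a standard discretization that does respect gauge invariance replaces the integral by a product of overlaps, gamma = -Im ln prod_k <psi_k | psi_{k+1}>. The sketch below, with two-level states on a closed loop and all details illustrative, checks numerically that this construction is unchanged by arbitrary per-point phase rotations:

```python
import cmath, math, random

def berry_phase(states):
    """Gauge-invariant discretization: gamma = -Im ln prod_k <psi_k | psi_{k+1}>,
    for normalized two-component states on a closed loop (last wraps to first)."""
    prod = 1.0 + 0.0j
    for k in range(len(states)):
        a, b = states[k], states[(k + 1) % len(states)]
        prod *= a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return -cmath.phase(prod)

# Spin-1/2 states (cos(theta/2), e^{i phi} sin(theta/2)) traced around a loop in phi.
theta = 1.0
loop = [(complex(math.cos(theta / 2)),
         cmath.exp(2j * math.pi * j / 40) * math.sin(theta / 2)) for j in range(40)]
g1 = berry_phase(loop)

# An arbitrary phase ("gauge") rotation at every sample leaves gamma unchanged,
# because the per-point phases cancel telescopically around the closed loop.
random.seed(0)
rephased = []
for a, b in loop:
    ph = cmath.exp(1j * random.uniform(0, 2 * math.pi))
    rephased.append((ph * a, ph * b))
assert abs(berry_phase(rephased) - g1) < 1e-9
```

Naive schemes that integrate a gauge-dependent connection point by point lack exactly this telescoping cancellation, which is the discrete-sampling problem raised in the text.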
Guiding an FCNN to the correct image features is a much more complex goal. Explicit ROI extraction is an approach followed in RV segmentation {{cite:cd1c5a9f22c3079c261563a9be209c28bea1024d}} and in many medical applications {{cite:ba5d8e95520516ef77b0f6d818f947d81bcd639a}}, {{cite:573fdd6826e1d65d922dbee348fc6a2e455d3a8a}}, {{cite:4bf24bfb47d56ca4477f55bf9b04bed8b3a47657}} to facilitate the segmentation task. Mask-RCNN {{cite:519e71b56c11189ae7d4804577cce8154dd8d660}} is an example where the segmentation obtained from an FCNN is closely combined with an ROI-pooling mechanism able to locally identify the bounding box of each object. Our rationale is that there are still useful image features outside the ROI that can guide the FCNN, and that approaches that jointly learn detection and segmentation are desirable, avoiding an exclusive focus on ROI features. Some works explore this idea, where ROI pre-localisation becomes an additional sequential task in an end-to-end training chain {{cite:d8a84923883b0253545d7f8ae89c252949ce1fd0}}, {{cite:5d509457214384e9da99a5ace70bb88de1fbead2}}, but without an explicit use for guiding the segmentation. A dual FCNN with local and global downsampling pathways at two different MRI resolutions was used for the atrial segmentation problem {{cite:2c48ddd37ecebf5c4eafedb7a3ec42630ea7f44d}}. However, in that work the local path only helps to scan every single patch of the image in order to classify it as negative or positive. In truth, this method departs from the principle of FCNNs (i.e., using downsampling filters to scan the whole image) and is closer to earlier patch-based segmentation techniques, with the global path acting as a multiscale integration of global context.
| i | 96513fe920a79d3d0bef1e96016a2cc6 |
For the {{formula:538d5318-f12f-4c78-9ef1-f97a2333b605}} interaction, no {{formula:2fee96a7-6136-409c-905d-831b86a3980d}} -channel vector-meson exchange is allowed. Therefore, no dynamically generated {{formula:623ede35-e8d7-49f8-b27a-083d65dbd697}} was found around the {{formula:370d1899-6e2d-4007-a9fd-d1a90e677c8f}} threshold in previous studies {{cite:b1590a9b3b721d14d5092ff16d382e2a332b3fae}}, {{cite:840894a43b2be1bfd617e12bef8f68bebc796784}}. However, when the pseudoscalar-meson exchange force is taken into account, the potential of {{formula:697e7f0d-7b5a-4f79-8a4e-1da7c564240a}} is shown in Fig. REF
{{figure:41fa2382-d3b4-421a-972e-01827b817c61}} | r | 5136de91edf21cc6b2be4118ed224cb4 |
Whereas HNNs directly predict the scalar value of the Hamiltonian {{cite:77e546b6f4f0f2ef8813f3f44fcffbfeb5386412}}, L-HNNs predict {{formula:8204428f-fe45-48e4-b5a9-85d51a5b3915}} latent variables whose sum is defined as the scalar Hamiltonian. Introducing these latent variables improves expressivity and reduces integration errors when simulating the Hamiltonian trajectories {{cite:87145d2e5e79add4278c394d0433e29ea1eb537a}}. The training data are provided as sets of {{formula:e0a04431-62c8-40ed-bb2f-df041cf2f58d}} pairs, and the time derivatives of these {{formula:00e65ea1-a2db-4c1a-aea4-c5a6e357f7a1}} pairs are computed. Gradients of the Hamiltonians predicted by L-HNNs [Equation (REF )] are also computed. Then, the following unsupervised physics-based loss function is minimized:
{{formula:745c8b4e-c418-41dd-a760-6ad85de0e1dc}}
| m | c7220f507c097cf3052a5e86f2ffebbb |
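A minimal sketch of such a physics-based loss is given below, using a known Hamiltonian as a stand-in for the network and central finite differences for the gradients; both are simplifying assumptions, since an L-HNN predicts learned latents and differentiates them automatically:

```python
def hamiltonian(q, p):
    """Stand-in for the network: 'latent' terms whose sum defines H."""
    latents = (0.5 * p * p, 0.5 * q * q)   # an L-HNN would predict these
    return sum(latents)

def grad_H(q, p, eps=1e-6):
    """Central finite differences; an L-HNN would use automatic differentiation."""
    dHdq = (hamiltonian(q + eps, p) - hamiltonian(q - eps, p)) / (2 * eps)
    dHdp = (hamiltonian(q, p + eps) - hamiltonian(q, p - eps)) / (2 * eps)
    return dHdq, dHdp

def physics_loss(q, p, q_dot, p_dot):
    """Unsupervised residual of Hamilton's equations: q_dot = dH/dp, p_dot = -dH/dq."""
    dHdq, dHdp = grad_H(q, p)
    return (q_dot - dHdp) ** 2 + (p_dot + dHdq) ** 2

# For the harmonic oscillator, (q_dot, p_dot) = (p, -q): the loss is ~0 at the
# true dynamics and large when the supplied derivatives are inconsistent.
assert physics_loss(0.3, 1.2, 1.2, -0.3) < 1e-8
assert physics_loss(0.3, 1.2, 0.0, 0.0) > 0.1
```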
Third, we compare Prune and Tune Ensembles to state-of-the-art low cost ensemble learning methods with a large training budget of 200 total epochs. Along with accuracy, we report uncertainty estimations on corrupted versions of the CIFAR datasets {{cite:441b4e205e32d080df882d54a331490098e656a0}}, {{cite:bee46ec19444082d1b58ff762dfef6d95d1abe71}}.
| r | dcca42a98330b727793c38560502f74a |
The proposed framework is depicted in Fig. REF .
We first process each view separately. In particular, we employ a 3D-CNN architecture with residual connections and spatio-temporal convolutions across frames {{cite:ec01d6e8907a483de62abe979ca55a122a28a179}} using a ResNet-18 as the model backbone {{cite:5fef4302cecdaee1c8c7e435c0344f83924a2697}} (see Fig. REF (a)).
In contrast to previous work {{cite:60f0384fce130baceea8c49d2975249de7d80c07}}, our approach integrates both spatial and temporal information into the learning process. This mitigates frame-level variations that can occur due to external factors, such as the position or contact of the transducer, or due to changes in the cardiac function itself. To overcome the scarcity of annotated data, which is common in the medical domain, from each ECHO we extract {{formula:bb97de37-c288-48bd-9506-2cf5c975eed5}} shorter video sequences by randomly choosing a starting frame followed by {{formula:b9d2f713-ecae-43fb-b7ab-d6d753ba3745}} consecutive frames, for a total of {{formula:564880cd-91f4-4715-b567-faa7cfe22e76}} frames, covering on average one heartbeat. We then aggregate the sequence-level predictions {{formula:7eaeb083-5c37-469d-a74c-97790ce7fb47}} through majority voting, i.e., by selecting the most frequently predicted label, into a view-level prediction {{formula:83254bcf-28f8-40a3-b465-b66a798e2a18}} .
The view-level confidence is then defined as {{formula:5d5e3fae-f794-4a85-bc9c-1d7937e32253}} , where {{formula:bfa81fe0-4e2e-48d0-95d2-8947755fe0b1}} is the count of the most frequently predicted label from the list of predictions for the {{formula:e1c543c7-f58c-478b-b52b-a4b1a468d5da}} sequences of a given ECHO per view.
{{figure:0559d7e2-4f3a-4e01-94b0-0079822f3e94}} | m | 216c2a7746c195cba70a7171a1c02600 |
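The majority-voting aggregation above can be sketched as follows; the echo view labels and the normalization of the confidence by the number of sequences are illustrative assumptions, since the exact confidence formula is hidden behind a placeholder:

```python
from collections import Counter

def aggregate_view(predictions):
    """Majority vote over the sequence-level predictions of one view.

    Returns (label, confidence) with confidence = m / K, where m is the count
    of the most frequent label among the K sequences; treating the text's
    confidence definition as this ratio is an assumption."""
    label, m = Counter(predictions).most_common(1)[0]
    return label, m / len(predictions)

# Hypothetical labels for five sampled sequences of one ECHO view.
label, conf = aggregate_view(["A4C", "A4C", "A2C", "A4C", "PLAX"])
assert label == "A4C" and abs(conf - 0.6) < 1e-12
```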
Visual recognition has for years been dominated by convolutional neural networks (CNNs) {{cite:eb4dddcd8efdb171b91eb697fb102b0e04806b20}}, {{cite:3742e5684f875569e480ace6364eb53b876b036e}}, {{cite:c9093cbb4acad549c0a1a9deff35da8e5459a7a6}}, {{cite:1d406b3ef5c9257702191933d36eaa441017ca94}}, {{cite:a9b8870d1ec5b234141709e44b3de3959e5e37bb}}, {{cite:abe838b930b73389e67448eae4ee5650d5c2608a}}, which effectively impose spatial locality and translation equivariance. Recently, vision transformers (ViTs) have emerged as an alternative design paradigm, which aims to replace the inductive bias towards local processing inherent in CNNs with global self-attention {{cite:32cd8cfd31fda4b39a2df6d674f9bcd3625a9b34}}, {{cite:63f7aed914a56038d0ed025d6370069c14a07248}}, {{cite:a760dae477886d50ddda3f5a1b6454cb71577e43}}, {{cite:5044e21bc8ecdd735979e9a5b82b1974b69abcfc}}.
| i | a3981bc8d0ae7bf5b919c2228b44a688 |
Figure REF shows the cross-sections for all events, and for the same events divided into three classes. The top-left panel shows the total measured {{formula:496c30a2-e76e-4f7d-ac6c-86599b098792}} cross-section, compared to five models. STARlight is based on parameterized HERA {{formula:d16343ed-2051-4f72-a5c0-e90e36ac4eb7}} data, with a Glauber-like eikonal formalism to handle nuclear targets {{cite:c53ace313d600293a226e4442c04715e33c00bdc}}. The GKZ predictions are based on a modified vector-meson-dominance model, using a Glauber-Gribov formalism for nuclear targets {{cite:6ec98f4607833856ed064ae1e9f4306b5c6df12b}}. The Glauber-Gribov approach allows for a dipole to interact multiple times as it traverses a target. Each individual interaction can be inelastic, with the intermediate states (between interactions) allowed to include high-mass fluctuations. The CCKT predictions are based on a calculation of dipoles passing through a nuclear target, which is modeled in terms of gluon density as a function of impact parameter {{cite:7009ad0483f4615de404b1f2d2fc4bf3f0734a95}}. The gluon density includes gluonic hot-spots. Finally, the GMMNS model is another dipole based calculation that includes an implementation of gluon saturation {{cite:58823f20c5de103356aba5675429b2b9579fc538}}. Most of the models do a reasonable job of matching the data, although STARlight is a bit on the low side, and the CCKT (nuclear) model is a bit high.
| r | b0d63185421b21a85f08c3a44155fc97 |