| text (string, 54–548k chars) | label (string, 4 classes) | id_ (string, 32 chars) |
|---|---|---|
At {{formula:7be56b60-7b48-497d-bba3-36081d2648cf}} the SM Higgs self-coupling {{formula:acbd4601-acbc-4e92-af3c-0b7be24e0ce1}} and the top Yukawa coupling {{formula:57b44c4a-c6ab-4025-942c-d9927d6f99c1}} are fixed
by the SM Higgs and top quark pole masses {{cite:bde580fe074c5405af2b2db6eed9a443b28c17b7}}.
For the gauge couplings at {{formula:418a07ca-4ee4-4679-ae7a-73dc4a2b6e40}} we take {{formula:01cf899e-7b3f-41a6-be4b-86dded911859}}, {{formula:868a4562-6fbc-4b86-8df5-a86ef06b0ada}}, and {{formula:36c89096-3195-4f82-9508-688e631bb29d}} {{cite:526c2af9e462deb27fb9f7200ec2a8d6cb8505a1}}.
The priors for {{formula:06f927ad-893c-4d1b-9b92-353f71275968}} and {{formula:b2e83d2e-d095-447a-93d4-a97006055ea2}} are given in Table 1 (see below).
| m | e63da4d6af89382b0cdb165df2e3ca44 |
The first approach to robustification of detection systems was devised by Dalvi et al. {{cite:8d19984894a39827a0c277e19da8bd2c02c2f817}}. The authors formulated robust classification as a
game between the classifier and the attacker and produced an optimal
classifier strategy.
Other approaches defined robust classification as a Min-Max loss problem, in which the adversary's goal is to maximize the defender's loss by means of small modifications of the feature values {{cite:c5b6fef76aad95f2bec28c2fdaaaa29bc0bb0189}}, {{cite:8e32b3ad3d6bdc40b1c27165d6aa8d198bc55fa1}}, {{cite:2ef5859f74ca858901d42b70529e9e241b196c74}}, {{cite:d0eeb821613e82c965e66d2e010b00aa4e635c4b}}, {{cite:fe7afc05892c843c76085a05499f93645fca80e2}}. Further approaches to designing robust classifiers use a non-zero-sum game {{cite:e9362e08a0f7c531d3e157e4feb0a9ead3cbb906}}, {{cite:7338f0aaa1c5255670ec6f583632b98a6c93256d}}, {{cite:2dd813f31821725329b6d6b9271adfc472b8daa5}}, {{cite:0d16dcad4519701d268a5f761472b6b9ef27a30b}}, {{cite:bdf37f4ecb5b2b17cb525cf6ba8e524859f7d7f2}}, or a Stackelberg game, in which the learner is the leader and the attacker is the follower {{cite:7338f0aaa1c5255670ec6f583632b98a6c93256d}}, {{cite:2dd813f31821725329b6d6b9271adfc472b8daa5}}, {{cite:bdf37f4ecb5b2b17cb525cf6ba8e524859f7d7f2}}, {{cite:2fdfdb1b41889e8cee8defa049096d6d39eb4bce}}.
Finally, iterative retraining defense mechanisms have been proposed, both for general evasion attacks {{cite:0d16dcad4519701d268a5f761472b6b9ef27a30b}}, {{cite:bdf37f4ecb5b2b17cb525cf6ba8e524859f7d7f2}}, and specifically for deep learning detection systems of computer vision {{cite:b9e816befcba05c0922186fed55775a28a4420e2}}, {{cite:6708635fe3e45c53336613b1e1dc91c0c67a522d}}, {{cite:2ef5859f74ca858901d42b70529e9e241b196c74}}.
This list of robustification efforts shares one important characteristic: the underlying attacks used to robustify the basic classifiers are feature-space attacks, which, as this paper proves, hardly constitute a proxy for practical problem-space evasion attacks.
| m | 0c04dc1f96140e481bb2f3d60c4274df |
The Perceptual Evaluation of Speech Quality (PESQ) tool, as defined
by ITU-T recommendation P.862.2 {{cite:74967600c13ba8e55b2b9665fe30d5710b00bb5c}}, {{cite:a7556b38182d7e455406308b24735d5a56262172}}, is used to
compare the encoders at various bit-rates for both narrowband and
wideband speech. While not a subjective mean opinion score (MOS) test,
we consider the results to be meaningful because we are only comparing
different noise-weighting filters for the same codec. The reference
Speex decoder is used for both encoders. The test set is composed
of 354 speech samples from 177 different speakers (87 male and 90
female) in 20 different languages taken from the NTT multi-lingual
speech database. The Speex codec reference implementation supports
variable search complexity. For the evaluation, the Speex variable
complexity option is set to 3, meaning that 3 simultaneous hypotheses
are updated when searching for the best adaptive and fixed codebook
entry.
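As a hedged illustration of how such a per-sample comparison can be scripted, the sketch below uses the open-source `pesq` package (an implementation of P.862/P.862.2) together with `soundfile`; the helper name and file-based workflow are our assumptions, not the tool chain used in the evaluation.

```python
import soundfile as sf
from pesq import pesq  # pip install pesq

def pesq_score(ref_path, deg_path, wideband=False):
    """PESQ MOS-LQO for one reference/degraded pair (fs must be 8 or 16 kHz)."""
    ref, fs = sf.read(ref_path)
    deg, _ = sf.read(deg_path)
    # "nb" follows P.862 (narrowband); "wb" the P.862.2 wideband extension.
    return pesq(fs, ref, deg, "wb" if wideband else "nb")
```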
{{figure:ebdf26fe-2c3e-46ea-b358-c295f91275fb}} | r | 461aec6551bd6a240323c045b4cacb86 |
By using the Langmuir expression for the electron probe current {{cite:43a58b524fc0d932a27bab7c637039998fa154a1}}, Druyvesteyn {{cite:f74b51b1a1daaa302bb2dd9a45a46bfd6433fa44}} demonstrated that the probe current I(V) allows one to determine the EEDF. This relation involves the second derivative {{formula:78a08ab4-a078-4381-b275-f58ea3bd0c1f}} of the retarded electron current with respect to {{formula:56ea18df-975f-4d7f-ba16-39a764d36f47}}. The probe current I(V) is the sum of the electron retarding current {{formula:501c3b47-d1cc-4cd9-bba5-204656bbf30d}} ({{formula:b1bc0639-3ef6-4cd8-a941-ac9affd50ffe}}) and the ion saturation current {{formula:dd0962e5-f430-4f19-8b98-9ff65d8ac60a}} ({{formula:237bb4a8-0d6a-4074-8de2-ee48e71d9af9}}). But if the probe bias is not too negative, it is verified {{cite:23b041423b5ff75507012541fe98efab5878b57a}} that {{formula:e64ba6c1-61ed-4fc0-8226-69d00011616e}}. This approximation has been analyzed in detail in several works, for example in Refs. {{cite:23b041423b5ff75507012541fe98efab5878b57a}}, {{cite:d0ca30486f333ded20fe70610d72225bfde8cfb6}}. Therefore, in the following equations we use {{formula:64928a2c-30bf-4705-b342-7e61a5f9c1bc}} instead of {{formula:32af3aa9-40e0-4e90-95ae-3b16851daa72}}.
Finally, we consider the total current {{formula:e7b18da7-4936-49b7-b24c-48e78a34441a}} restricted to the values {{formula:b39e7bd4-5c45-4ccb-9d90-62d69c2511e8}} . Considering the works in Refs. {{cite:cd947a1979df5af5e605319162ad97178d78b54b}}, {{cite:d0ca30486f333ded20fe70610d72225bfde8cfb6}}, {{cite:b6004a1e2d77c5ff80aac53bf72ccb8f44e12aec}}, in an isotropic plasma, the relation between {{formula:c360b7a7-e28f-4c74-a44e-3a1774bada62}} and the EEDF ({{formula:27c28dd3-3104-47eb-86b9-3afeabd39f79}} ), with {{formula:11a1f360-72fa-41a0-a040-2a9d2cc92aef}} is provided by the Druyvesteyn formula
{{formula:96f57d91-34a6-4a99-9ee3-a0ae26c0bfc5}}
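A minimal numerical sketch of this inversion on the retarding branch, assuming smoothed probe data; the variable names are illustrative and the prefactor follows the standard Druyvesteyn form.

```python
import numpy as np
from scipy.constants import e, m_e
from scipy.signal import savgol_filter

def druyvesteyn_eedf(V, I, probe_area):
    """EEDF from a probe I(V) curve via its second derivative.

    V: bias relative to the plasma potential (retarding branch, V < 0), in volts.
    I: probe current in amperes; smoothing is essential on measured data.
    """
    I_s = savgol_filter(I, window_length=11, polyorder=3)
    d2I = np.gradient(np.gradient(I_s, V), V)    # d^2 I / dV^2
    Vr = -V                                       # retarding potential (> 0)
    # f(eps) with eps = e*Vr, using the usual Druyvesteyn prefactor
    return (2.0 * m_e / (e**2 * probe_area)) * np.sqrt(2.0 * e * Vr / m_e) * d2I
```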
| m | fe4338dc0dde946a6b117a5c43fede79 |
Comparison with Recent Developments in SSL Recent years have seen a flurry of research on SSL techniques that are related to or concurrent with our work. The prior work of Smooth Neighbors on Teacher Graphs {{cite:458fde68f72259a78648e95bb94824978b6c817a}}, Virtual Adversarial Dropout {{cite:20c172c1ea5daddc2e35a159a11c1851784dc905}}, Interpolation Consistency Training {{cite:00a8a6ce002bce7e58cc3d95fc4a635397b8096e}}, Stochastic Weight Averaging {{cite:9ce2e133e186bef7a00417ca6582e6c53429d5b5}}, and Label Propagation {{cite:5a5de70e02a6b21d2d6df78bd5314760a3f72d46}} advanced the field of semi-supervised learning by achieving impressive results on SVHN, CIFAR-10, and CIFAR-100 benchmarks. However, those methods all build upon strong consistency baselines by either adding a third loss (and new hyper-parameters to tune) to the overall objective of consistency models or averaging model weights. The concurrent work on self-supervised semi-supervised learning {{cite:c05c2b917e0b48fe250176133e2be8c3a1a19f8b}} independently explores the contributions of self-supervised regularization for SSL in a way similar to ours, but with a different evaluation protocol and end goal. MixMatch {{cite:07b29bf6f67e51f26bcf5461cb1f6cf0d1ac3c88}} combines strong Mixup {{cite:b2700bc94cd47f4a21e1925af24b8abe3bd96231}} regularization with consistency regularization to achieve state-of-the-art SSL results.
| d | ef5d599f78fda081f1a5a695c7daef2b |
In Tables REF and REF, we show the results on the KITTI-3D benchmark. We report our results with point-cloud supervision, since KITTI allows only a single submission. (A comparison with self-supervised depth is provided in the supplemental material.) DD3Dv2 achieves the state of the art in most metrics across all three categories when compared with most published and concurrent works, including those that use similar point-cloud supervision and Pseudo-LiDAR approaches.
Our new representation significantly improves over end-to-end approaches like {{cite:72e70caeacd3e81d39aa7bd6047ceeac7ce4f9ae}}, especially on smaller objects.
{{table:9213296a-9ec7-4be6-ab8b-970ca90b5fe3}} | d | 61b270cbf302ef9b3511216954d9af93 |
Our fast weight variants can be extended in various ways to build
highly adaptive language models {{cite:d6f136936dd8caa61f49bf24085a83b2aed85ba5}}, {{cite:760b435dfd886724ccd813da7afdb087bbdbbdf6}}.
In particular, the weight generator networks could be augmented by additional inputs
for error signals {{cite:fbc7914f26330983e04bf80eb4157fe3a0bb739b}}, {{cite:7472f7d85f67917af5800ae6e26d5734b5201fc7}}, {{cite:d54524ce181c41b1cdc0e028b6e2a00269814be9}}
to facilitate the weight update generation.
Future work will evaluate our models on tasks which
explicitly require context-adaptive behaviour,
e.g., texts containing multiple domain shifts.
| d | ee0e04d6e19f8bed31030fa8cb7882a6 |
Vortices are characteristic quantized topological
excitations of a superfluid. In the case of a Bose
superfluid a vortex represents a line singularity
in the phase of the condensate wave function, the
phase having an increment of {{formula:70fa5cc9-dd61-4ed3-88ea-afa7bc8bbae5}} as one turns around
this line. Vortices in superfluid {{formula:66cd3c80-1061-41bd-8fd6-1fcd8c27e847}} He {{cite:c6b42d19621466336da0cd6ee5ca33f3eddafda1}}, {{cite:aa94cf1f99175c25d47e8cafa4903968e45db1fd}}
and in cold bosonic
atoms {{cite:9cf5a1f9cf07095f60dcdd406ad4bdcdb06e4704}}, {{cite:f70bce48a91b39bee62828fd21db3b6406a6e425}} have been extensively studied over the years both
as an example of the ubiquitous vorticity
phenomena in many-body systems as well as a
signature of the presence of phase coherence in the system.
Vortices are expected to be also present in a supersolid.
| i | 99220c573f9b4e8e10f10d73e571c0de |
Finally, inserting (REF ) into the expression for the vacuum energy density (REF ),
and taking into account that the Bogoliubov coefficients satisfy the equation {{formula:af5e256c-f201-4793-b882-6fd120ad93b2}} ,
one finds the following diagonalized form of the energy density {{cite:b5499ccb9fbb583c7ea9e4e696a90a9a67addb77}},
{{formula:24433ca9-a60e-433c-9da5-747470c920e5}}
| m | 1ec9c2f0a1be8dcfff3a4f53a8afb280 |
One limitation of graphons is that they describe limits of dense graphs {{cite:3c4a9b35348bbb3e8ea6aa751046b5b0ccb4b368}}.
Many empirical networks, however, are sparse.
Exchangeable random measures have been proposed as a way to construct sparse graph-variants {{cite:2cd3e9191cf68b2f6b63b287656cf8053a8b43ec}}.
It would be relevant for applications to extend modularity-based approaches to also detect community structure in these objects.
| d | 0758b72f0af9202a89e1e043774f39cb |
In the case of Watercolor2k, we set the learning rate to {{formula:8bcafb3f-2ee5-4e16-b446-aa386248e6fd}} for self-training methods, since most of the images contain a single instance and the network thus overfits easily {{cite:33b14c329a24612c536bbc9e47712b901fd58a62}}. Furthermore, images in Watercolor2k have no hard backgrounds such as obstacles. For these reasons, our algorithms show less improvement compared to Clipart1k and Comic2k.
| r | 6193ee686ea1ffc702244649df22ad47 |
Recently, {{cite:8c015c0e59d79a19fdd9c6e8a066c01c983b425a}}, {{cite:dfcc3ede131ef9820269870916bf80b921fe69dc}}, {{cite:74e8856a529cffbc960dbfc873988acca4a58f59}} proposed optimization programs for learning the class of {{formula:b8d63c5e-909f-42d0-bbcf-13ac764d3ea2}}-component graphs,
as this class is an appealing model for clustering tasks due to the spectral properties of the Laplacian matrix.
From a financial perspective, clustering financial time-series, such as stock return data, has been an active research
topic {{cite:46f808e1775de80e42cb926776067bdae1f18227}}, {{cite:abc1d393af820414f47aeb8a3ce1c249a116c03a}}, {{cite:a4f4bffe7e5fd51ff2aa0f27312c40c014c4099d}}, {{cite:6aa56f9829abe5d51425538c12fee40fcc9ad176}}, {{cite:1c7e9b50e808e7ecf47417d469c1c5a4e8db7e67}}. However, these works rely primarily on
hierarchical clustering techniques and on the assumption that the underlying graph has a tree structure,
which brings advantages due to its hierarchical clustering properties, but has also been shown to be
unstable {{cite:6fa1e9f2ff43b0e4877df838f6b0914bd3c7b56d}}, {{cite:eb2d25336ae66f099f53c63317cdfe388732ad98}}, {{cite:2fcc1bf5c5120dc3636c9836e1eb210ec27379df}} and unsuitable when the data are not Gaussian distributed {{cite:7dc903fa3eb0ead71a1f427a3846ff6407db7cdb}}.
In this work, on the contrary, we tackle the problem of clustering stocks from a probabilistic perspective, similar to the
approach laid out by {{cite:74e8856a529cffbc960dbfc873988acca4a58f59}}, {{cite:dfcc3ede131ef9820269870916bf80b921fe69dc}}, where the Laplacian matrix of a {{formula:1d74827f-db13-4f94-9378-08e4882f4abb}}-component graph is assumed to model the pairwise
conditional correlations between stocks. A crucial advantage of this approach is that we can consider more realistic probabilistic
assumptions such as heavy tails.
| i | c17e8133260f3a2cee201bf7d6903bd5 |
The experiments of Section REF use only the binary (yes/no) questions of the GQA dataset {{cite:71515b8098bc7f45e53a0c21b748a1cf6d32b69f}}.
We repeated them with the full dataset.
The setup, model, and hyperparameters are unchanged.
The models' accuracy remains close to that of the baseline (Table REF ) even when the independence constraint induces the models to focus on different input features.
In Figure REF , we include examples from the validation set that illustrate this effect.
Contrary to Section REF , we did not observe improvements in accuracy when training three models with our method.
Our ongoing work is exploring the range of hyperparameters (larger number of models, different regularizer weights) to further investigate these observations.
{{table:c472a1b2-8b50-4577-8a92-9a897b5d22dd}}{{figure:779f01ca-ac83-4140-bfa2-55ab8b5565a8}} | r | 3bb702fa239d44a23c813783aef1741c |
Data Augmentation:
The contrastive learning methods are heavily influenced by the stochastic data augmentations used {{cite:fe89812accb607dc1998dcedf2b6c046fb5c2842}}, and it is important to find the right combination of augmentations. We use a different family of augmentations, as it produces strong variations between the two augmented views and is known to work better when a shared encoder is used {{cite:2145ac787bf1e2a2105b1156e0aa8495e7c99c74}}. Similar augmentation approaches are used in {{cite:ab08f5b83a1a24f70711d32ff1779765f9a68835}}, {{cite:e042c4fbfc8befef5b4a3641fbbd19a9710027bd}} for EEG signals. As one family of augmentations {{formula:eb8272c5-b384-4633-9460-f808a1c2f4d6}}, we use jittering, where random uniform noise is added to the EEG signal depending on its peak-to-peak value, along with masking, where parts of the signal are masked at random. Flipping, where the EEG signal is horizontally flipped at random, and scaling, where the EEG signal is scaled with Gaussian noise, are used sequentially as the second family of augmentations {{formula:2ba12235-907f-4a32-9db4-89e6a58db8d4}}.
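A hedged NumPy sketch of these two augmentation families; the noise, masking, and scaling magnitudes are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np

def jitter_and_mask(x, rng, noise_frac=0.05, mask_frac=0.1):
    """First family: uniform noise scaled by peak-to-peak amplitude, then a random mask."""
    ptp = np.ptp(x, axis=-1, keepdims=True)
    x = x + rng.uniform(-noise_frac, noise_frac, x.shape) * ptp
    n = x.shape[-1]
    start = rng.integers(0, n - int(n * mask_frac))
    x[..., start:start + int(n * mask_frac)] = 0.0
    return x

def flip_and_scale(x, rng, sigma=0.1, p_flip=0.5):
    """Second family: random horizontal flip followed by Gaussian scaling."""
    if rng.random() < p_flip:
        x = x[..., ::-1]
    return x * rng.normal(1.0, sigma)
```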
{{figure:788ffece-6973-40f1-a8f9-6ac3e7f05d41}} | m | 80eb683b2945d95722677e20e0474aa5 |
To quantitatively assert that an ML model is fair, researchers have introduced notions of group fairness. These include Equalised Odds (EO) {{cite:0fd3b6f155caac9c767d4fbd8239311ecfd3bd6f}}, Equality of Opportunity (EOpp) {{cite:4f7ad84e9b92257320ab48a421e1374e1b5d13d4}}, and Accuracy Parity (AP) {{cite:0740468e21f24110a639fef4f2a7a974f23e1bc4}}. In the context of classifying the `attractiveness' (or gender) of a face image with gender (or race) as the sensitive attribute, EO states that the model's error rates on both attractive and unattractive faces (its false negative and false positive rates) must be independent of gender. EOpp relaxes this, ensuring only that the probability of predicting an attractive face as unattractive is the same across genders. Lastly, AP states that the overall classification error must be equal across genders.
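As an illustration, the sketch below computes the corresponding group gaps from binary predictions; taking the larger of the two rate gaps as the EO statistic is one common scalarization, assumed here rather than taken from the cited definitions.

```python
import numpy as np

def group_fairness_gaps(y_true, y_pred, group):
    """Absolute rate gaps across a binary sensitive attribute (all 0/1 arrays)."""
    g0, g1 = (group == 0), (group == 1)

    def pos_rate(cond, g):
        sel = cond & g
        return y_pred[sel].mean()            # P(y_hat = 1 | cond, group)

    fnr0, fnr1 = 1 - pos_rate(y_true == 1, g0), 1 - pos_rate(y_true == 1, g1)
    fpr0, fpr1 = pos_rate(y_true == 0, g0), pos_rate(y_true == 0, g1)
    err0, err1 = (y_pred[g0] != y_true[g0]).mean(), (y_pred[g1] != y_true[g1]).mean()
    return {
        "EOpp": abs(fnr0 - fnr1),                        # equal FNR
        "EO": max(abs(fnr0 - fnr1), abs(fpr0 - fpr1)),   # equal FNR and FPR
        "AP": abs(err0 - err1),                          # equal error rate
    }
```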
| i | 85754aceb407841288b4987b491acf65 |
In this work, inspired by recent results on last-iterate convergence in normal-form games {{cite:5cf221b15b50452877056ef69aa07966083887c8}}, we greatly extend the theoretical understanding of last-iterate convergence of regret-minimization algorithms in two-player zero-sum extensive-form games with perfect recall, and open up many interesting directions both in theory and practice.
First, we show that any optimistic online mirror-descent algorithm instantiated with a strongly convex regularizer on the EFG strategy spaces provably enjoys last-iterate convergence, while CFR with either regret matching or regret matching+ fails to converge.
Moreover, for some of the optimistic algorithms, we further show explicit convergence rates.
In particular, we prove that optimistic mirror descent instantiated with the 1-strongly-convex dilated entropy regularizer {{cite:8da79600a43df4d6fdfbb41d222c5a678afbe369}}, which we refer to as Dilated Optimistic Multiplicative Weights Update (DOMWU), has a linear convergence rate under the assumption that there is a unique Nash equilibrium; we note that this assumption was also made by {{cite:ef8bbac6ccb21185362c323c1f1cd40abc6419e9}}, {{cite:5cf221b15b50452877056ef69aa07966083887c8}} in order to achieve similar results for normal-form games.
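For intuition, a minimal sketch of the one-line optimistic multiplicative weights step on a probability simplex, of which DOMWU is the dilated-entropy analogue on EFG strategy spaces; the step size and loss vectors are illustrative.

```python
import numpy as np

def omwu_step(x, loss_now, loss_prev, eta=0.1):
    """Optimistic MWU: x_{t+1} proportional to x_t * exp(-eta * (2*l_t - l_{t-1}))."""
    logits = np.log(x) - eta * (2.0 * loss_now - loss_prev)
    w = np.exp(logits - logits.max())   # stabilized exponentiation
    return w / w.sum()
```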
| i | afbcdb251846b866910a8ee18eff44fe |
The first step of MERLIN is to train task-specific classification models from training data {{formula:0ced386d-7bac-45b4-8a53-90848cfcb70d}} using the above-mentioned architectures. The weights of these models are used to train the VAE. Ten models are learned for each task by randomly sampling subsets (with replacement) from {{formula:6be197ae-989f-46e3-8b48-c363ea078882}}.
Considering that the base classification models can be large, in order to keep the VAE small, we use the chunking trick proposed by von Oswald et al. in {{cite:75f7b99077be46346e30b689e8ed3e13b4da6908}}.
The weights of the base classification models are flattened into a single vector and
split into equal-sized chunks (the last chunk zero-padded appropriately). We use a chunk size of 300 for all experiments and show a sensitivity analysis on the chunk size in Sec. REF. The VAE is trained on the chunks (instead of the full models) for scalability, conditioned additionally on the chunk index. At inference, the classifier weights are assembled back by concatenating the chunks generated by the decoder, conditioned on the chunk index. We observed that this strategy worked rather seamlessly, as shown in our results. The approximate posterior {{formula:aba3af29-999c-45a0-8c9a-34892070b91f}} is assumed to be a 2-D isotropic Gaussian, whose parameters are predicted using an encoder network with one fully connected layer of 50 neurons, followed by two layers predicting the mean and the covariance vectors, respectively. The decoder {{formula:d222cc91-3650-4c40-aa7b-e1f0be47e736}} mirrors the encoder's architecture. The network that generates the mean and diagonal covariance vectors of the learned prior, as in Eqn. REF, is modeled as a linear network. AdaGrad {{cite:dc4c114000ec043abb46196a9790129a4d82e6a6}} is used as the optimizer with an initial learning rate of {{formula:279c9fc3-b2d8-4b1d-bbe4-334be0e27049}}. The batch size is set to 1 and the VAE network is trained for 25 epochs.
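A hedged sketch of the chunking and reassembly steps (the VAE itself is omitted, and the array names are illustrative):

```python
import numpy as np

def chunk_weights(weights, chunk_size=300):
    """Flatten a model's weight tensors and split into zero-padded chunks."""
    flat = np.concatenate([w.ravel() for w in weights])
    pad = (-len(flat)) % chunk_size          # zero-pad the last chunk
    chunks = np.pad(flat, (0, pad)).reshape(-1, chunk_size)
    return chunks, np.arange(len(chunks))    # chunk indices condition the VAE

def assemble_weights(chunks, shapes):
    """Concatenate decoded chunks and reshape into the original tensors."""
    flat = np.concatenate([np.asarray(c).ravel() for c in chunks])
    out, pos = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(flat[pos:pos + n].reshape(s))
        pos += n
    return out
```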
| r | 8240f4565f6b0031f0e82062d4453963 |
The theoretical backbone of this work is the
unbiased estimation of the Gaussian kernel by {{cite:645c412c193942f20b78d0af850271a21326db67}}.
Based on Bochner's theorem {{cite:25b7208e60d891076cbbc7bca1b9370ba2b4abe1}},
{{cite:645c412c193942f20b78d0af850271a21326db67}} proposed random Fourier features to approximate a desired shift-invariant kernel. The method nonlinearly transforms a pair of vectors {{formula:a7dc5e76-1b87-4b48-9dc3-639b924cd198}} and {{formula:20b86ea9-251d-40ae-8e80-41c62b7bbd4f}} using a random feature map {{formula:f908ffd4-a4c5-46f7-b60e-58219eca902f}} ;
the inner product between {{formula:6ca77866-5fa7-4772-a359-177e093e74c8}} and {{formula:d863f853-db71-47c1-ae80-a169b899feb4}}
approximates the kernel evaluation on {{formula:1298c690-42a6-42b5-86d1-9a88a6049ede}} and {{formula:09225961-5c19-4808-a2e1-7f66435d2243}} .
More precisely:
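the expected inner product of the random features equals the kernel value, i.e., E[z(x)ᵀz(y)] = k(x, y). As a hedged illustration for the Gaussian kernel k(x, y) = exp(−γ‖x − y‖²) (the variable names and sanity check are ours):

```python
import numpy as np

def rff_map(X, n_features, gamma, rng):
    """Random Fourier features z(x) with E[z(x) @ z(y)] = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))  # omega ~ N(0, 2*gamma*I)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(1, 5)), rng.normal(size=(1, 5))
z = rff_map(np.vstack([x, y]), n_features=20000, gamma=0.5, rng=rng)
print(z[0] @ z[1], np.exp(-0.5 * np.sum((x - y) ** 2)))  # approximate vs. exact
```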
| m | 697e143e36a236e3826911b73f130aad |
In scientific collaboration networks, the validity of Granovetter's second hypothesis has never been tested. Nevertheless, it is widely believed (see {{cite:264578d7991ded5137a7a182566101583c9cce88}} and references therein) that information and expertise at the disposal of tightly connected research groups are often redundant, resulting in less creative collaborations and less innovative publications, while intergroup collaborations that bridge the so-called structural holes {{cite:ada614a797f5eb8430ccd2c6a707ab76768e9233}}, {{cite:2c9d3d915b95b4e3a2dbe790e2f8761b2a4d707b}}, {{cite:0af11dd51e8beec32352ffa093ec176ec47014fa}} can provide access to information and resources beyond those available in densely connected communities, thus leading to novel ideas and valuable publications. To quantitatively address these issues, we check whether the bibliometric indexes of scientists and publications are correlated with the tie strength of the scientific collaboration network. Specifically, we focus on two questions: i) How does the researcher's h-index depend on the structure of his/her local collaboration network? ii) How does the strength of the ties between scientists influence the success of their joint publication?
{{figure:9eb9bd82-3cb4-44ab-89ed-c81ececc3a4b}}{{figure:cf8cc33d-d6b8-4463-ab22-9844cb8571ed}} | r | 7f806506e59a17ced6dd481fd3a90b5c |
The notion of safety in RL has been captured in various forms such as risk-sensitivity {{cite:04b4adba2434f3df5c6c92eb1cb1dcb6ea47c2d3}}, {{cite:0a5a21dc23235e0a9e278f367519b12abd9b3dd9}}, {{cite:523ea020bd13c39e1efefd3d0b7f7aea8d1bf295}}, Robust MDP {{cite:514965e6cfcf452c1678f0c27f45d1740aa22aff}}, {{cite:7765e07350d25ecbd5a3a2991c3bdab251987438}}, and Constrained MDP {{cite:e0d4ab8d435e15925c740807ad1098568c876258}}, among which we focus on CMDP as it provides a natural formalism to encode safety specifications {{cite:44551dc7470f8a350ac04cd1c01114082893029b}}.
Most of the existing constrained RL algorithms {{cite:fa39231098f11ff463225c5561caae774d48b65c}}, {{cite:2f38c1a699de65bce05fc6b7355674eae5e0b599}}, {{cite:b1caa7884b50e71c12f45c8044b9c2ea6f421359}} are on-policy algorithms, which cannot be applied to the offline setting directly.
A recent exception is the work by {{cite:482882eda3eff61836ec281f0bed65849a19f3e3}} that also aims to solve constrained RL in an offline setting, though its method is limited to discrete action spaces and relies on solving an MDP completely as an inner optimization, which is inefficient. It also relies on the vanilla OPE estimate of the policy cost, which could result in severe constraint violation when deployed.
Lastly, in a work done concurrently to ours, {{cite:6d73ead27a47b675464e43bbe9d9602aafebec7a}} also exploits overestimated cost value to deal with the cost constraint, but their approach relies on an actor-critic algorithm, while ours relies on stationary distribution optimization.
| d | a1e2f40096b47cf5051768bb2900985e |
For any 2-dimensional compact holomorphic symplectic manifold {{formula:e643281d-f783-415d-8ab2-c3236d4f4a5f}} with a real-analytic Kähler form {{formula:300df4c1-3750-4d8b-bac5-309c69e59ada}}, Theorem REF shows that there is a hyperkähler structure {{formula:040a8a90-c6f0-426f-8f30-bc2d6230ce58}} on a neighbourhood of the diagonal in {{formula:d74a07b2-3180-409b-890c-bc0c07dd8ba2}} such that {{formula:3d6de640-affc-460a-b8d9-353c59297d6a}}.
There is, of course, already a hyperkähler structure on {{formula:4e3b3441-e0e2-4fed-a7f9-585a0e13c967}} by Yau's theorem {{cite:0f5cddf3d00ab914df02de924e7afbfcadc8fec3}}, and hence on {{formula:5a9b1aa6-cfc1-4096-84a6-3e5f2b59f85b}}, but the restriction of the first Kähler form to the diagonal is not equal to {{formula:95698b80-ab05-40a3-b8d8-d42ad345590e}} in general, so the two hyperkähler structures are different.
| r | 8d7149aaf6459b7d97fa5c92d1d0bcf4 |
To improve efficiency, recent works {{cite:c61a854ed9af0f40e8078a52691dbc6f8445d4da}}, {{cite:595283ec8bd9511a0a56c1eb979476e4164b1d1b}}, {{cite:c45129798aa5de7d256b93be8765dde3e09b351e}}, {{cite:f35557871bd5c7f552efb46290423baa88227508}}, {{cite:aa03be965aee3fca875933f2287411a32de40a0a}}, {{cite:0703902d52689c4b684f013adc1be0093e294310}}, {{cite:59f91177f362ed1934e0a5b19b67ec4085a629d9}}, {{cite:f77c142a171b8ce6f48f81fb677a6b8becdbec05}} take advantage of voxels {{cite:c61a854ed9af0f40e8078a52691dbc6f8445d4da}}, {{cite:595283ec8bd9511a0a56c1eb979476e4164b1d1b}}, {{cite:c45129798aa5de7d256b93be8765dde3e09b351e}}, {{cite:f35557871bd5c7f552efb46290423baa88227508}}, {{cite:aa03be965aee3fca875933f2287411a32de40a0a}} or networks {{cite:0703902d52689c4b684f013adc1be0093e294310}}, {{cite:59f91177f362ed1934e0a5b19b67ec4085a629d9}}, {{cite:f77c142a171b8ce6f48f81fb677a6b8becdbec05}} to memorize the surface distributions and only sample a few points near the surfaces for rendering.
However, these speed-up methods are only applicable in the inference stage or in the late training stage, after convergence to valid surfaces, but not at the beginning of training.
This is because they require a good estimate of the surfaces to guide the sampling of points for rendering acceleration.
{{figure:ce83953a-ae72-4e77-86b1-d5368d6ca841}} | i | 527b935f96f2e041037e57a9ec2fd81a |
Although adapters have been applied to SSL speech models, these PET approaches simply transplant NLP adapters to SSL speech models, without exploring efficient tuning designs based on the attributes of audio signals. A convolutional neural network (CNN) is a typical component of audio/speech SSL models: it learns from the spectrogram, tokenizes the audio input, and is critical in bridging the raw audio input and the tokens from which the Transformer learns contextual representations. A typical SSL speech model thus consists of a feature extractor, which takes raw audio as input and outputs latent speech representations, followed by Transformer layers, whereas NLP models contain only Transformer layers; existing adapters are mostly designed with such NLP models in mind.
In CV, {{cite:b7c984ce89ca3c0923b687c0b685d2d2027cbaa7}} and {{cite:aa033bbdfdbe650e20a0b3ce842ecfe535862db4}} proposed using residual adapters in CNNs in order to learn multiple visual domains. However, no work has explored adaptation of the CNN component.
| i | 0a62782050c9b500faa04255dbab12fb |
where {{formula:bcc8f643-6c86-4d85-9b80-7ad76236353b}} is the fine structure constant;
{{formula:0fcd2517-3cd6-4783-b8c4-9d99bf3bb704}} {{formula:7f84c62e-6d14-42a0-ba47-4e0d56505ab8}} {{formula:fa177306-8e8b-4515-b068-ff9141d2f4e1}}
is the photon momentum in the rest frame of the {{formula:773993ec-537f-46b0-99a5-ae6c9b2ed95f}} meson;
{{formula:d9aaad2f-28d4-471d-a9e9-f117a9730da9}} is the M1 moment of the {{formula:f7ea8dae-5218-41c8-948c-47dc3d18b730}} meson.
There are plenty of theoretical predictions on
{{formula:b001c38d-8bc1-43d8-a326-a606f63b36fd}} , for example,
the numbers in Table 7 of Ref. {{cite:2efdcf7ac99b1d0319b5c0358cb6f11c8f341465}} and
Tables 3 and 4 of Ref. {{cite:455ae2a298d12f6209811f117ab9dac20eb69ead}}, but these
estimates suffer from large uncertainties due to our
insufficient understanding of the M1 moments of mesons.
In principle, the M1 moment of a meson should be a
combination of the M1 moments of the constituent quark and antiquark.
For a heavy-light meson, the M1 moment of a heavy quark might be
negligible relative to the M1 moment of a light quark, because
it is widely assumed that the mass of a heavy quark is usually
much larger than the mass of a light quark, and that the M1 moment
is inversely proportional to the mass of a charged particle.
With the M1 moment relations among light {{formula:c2547ed5-e0ab-4e1e-8b38-33fe136184c1}} , {{formula:751a81ff-319c-461c-a89f-498a4a47c7f1}} , {{formula:7f873ec0-0947-4d29-846e-e345a14c5d04}} quarks,
{{formula:9f7179a6-f3ea-4cb7-8ec6-047ad21a6284}} {{formula:a1bb19cf-b77f-453a-a417-25d0a82d59fb}} {{formula:299831d7-7d77-4304-ad1d-ea1ec7338663}} {{formula:07271d25-7cc6-4d11-8d42-628ce0538e18}}
{{formula:a9e299be-5127-45f2-8fd1-725ce1fa3a22}} {{cite:45bc6f54d6635cb2cfdef12c9cb920ea428aad12}}, one could expect to have
{{formula:d299d790-065d-4dfe-ba0f-7f4c40bddf9c}} {{formula:ab0642f2-1245-4052-92fd-20b37dfc9346}}
{{formula:cfcafe86-6597-46a5-a945-7158c2eb9584}} {{formula:4b0d0e5e-bd89-4693-9aa1-b4d628e69b61}}
{{formula:09200cc0-7cee-461e-bd5c-5e00b787306e}} ,
and so {{formula:bbd6ed0c-345b-4636-ac16-bc9a9bfd6896}} {{formula:c9825de6-4aab-4a4b-a881-8f22030fcbd6}} {{formula:46313a42-c03f-478c-80e8-76f37beae475}}
{{formula:46fd9251-c441-4e9d-a6cd-54595b6aecc6}} {{formula:732b4ae7-c5ae-420f-a4f1-79b9cff5d6ce}} .
Of course, further details about the width {{formula:7446aa27-4fed-4f2d-8957-a77f1f1a48bd}}
are beyond the scope of this paper.
In our calculation, in order to give a quantitative estimation of
the branching ratios for the {{formula:9f0e5a6c-19ca-464a-9b0e-b64989711397}} {{formula:b224cc2b-7782-4c2d-b63c-f1fb3347b6f6}} {{formula:d15734a2-501d-49ca-844c-3b87e0c92cb8}} decays,
we will fix
{{formula:7d8aaeec-a35e-4717-b10e-87273678efd7}}
{{formula:53c62012-9b92-416e-b4ff-b7e2a018a9c9}}
{{formula:7470212c-d296-4422-b50b-f767995c8a68}}
| r | 5a7dade397f6294b7848f2ec697ad99b |
Associative clustering was compared with two alternative methods:
standard K-means on each of the two data sets, and a combination of
K-means and information bottleneck (K-IB). K-means {{cite:7675e5dbcdb6871cc8f5789c661992ccee23bc04}} is a classical clustering algorithm that provides
homogeneous, connected clusters based on Voronoi
parameterization. Homogeneity is desirable for interpretation, since
the data points within a given cluster can then be conveniently
summarized by the cluster centroid. On the other hand, K-means
considers each data set independently, which is suboptimal for the
dependency modeling task. The two sets of clusters obtained by
K-means, one for each data space, can then be presented on a
contingency table as in associative clustering. The second comparison
method is K-IB introduced in Publication REF . K-IB uses K-means
to partition the two co-occurring, continuous-valued data sets into
discrete atomic regions where each data point is assigned to its own
singleton cluster. This gives two sets of atomic clusters that are
mapped on a large contingency table, filled with frequencies of
co-occurring data pairs {{formula:68b8f390-ef8b-4ebe-9a00-50c6f1c71959}} . The table is then compressed
to the desired size by aggregating the margin clusters with the
symmetric IB algorithm in order to maximize the dependency between the
contingency table margins {{cite:a3b171edd750f2288a7ce7feb08858acfa6d6fbd}}. Aggregating the atomic
clusters provides a flexible clustering approach, but the resulting
clusters are not necessarily homogeneous and they are therefore
difficult to interpret.
| m | f6b982524a24bb4892c7d7e2e7ecef41 |
Figure REF presents the cases where size control is still important for models trained with these methods. The optimal size we observed for ELR is 128 on CIFAR-10 and 28 on CIFAR-100. This contradicts the prevailing intuition that one should always use larger models for more complex tasks. In fact, the same amount of label noise can be more harmful on more complex datasets (e.g., the per-class label noise on CIFAR-100 is 10{{formula:159371cb-e450-4a3b-850f-2df20abbfd07}} that on CIFAR-10), and the weakness of a too-rich hypothesis class is more pronounced. For networks with width 128 on CIFAR-10 and width 64 on CIFAR-100, pruning to extremely low density greatly improves the performance. Notably, pruning out 90% of the parameters improves the test accuracy from 27.73% (29.87% reported in {{cite:b7b140ab4f6b31d0327a516ecbff5dad3e37d8dc}}) to 32.92%, indicating that even random pruning can be used to boost robust methods to achieve SOTA performance. For DivideMix, the test performance becomes worse when we increase the width beyond 160 or increase the density beyond 0.8.
| m | 2dcba669fa9dac808b7b22f93ce3fd00 |
with a single linear unit (the presence of the linear term {{formula:e9c90faa-3cf6-483e-8ebd-9ee18518b385}} is not really standard in practice, but is adopted in keeping with prior work {{cite:1c76a6788ce157b618bae7b534b078d65d60cf5b}}, {{cite:8473d860c5f3b2133ca287769bda0a98a5a3fc90}}, {{cite:4fb062f96158a084787c86d1f12760e789926c5b}}, since it leads to a cleaner mathematical formulation of results) and input/output dimensions equal to one. For a given dataset
{{formula:1877b45f-f36e-47ed-a155-85fa2f7ffce5}}
| r | a594f80c9e9c8b9b8a90b0c28407033e |
Preliminaries
Following {{cite:ae4dfa4f270ac5f3934b1aea258355f2df69a6fc}}, {{cite:5d3ae2ff8dcaeae5df4e194fb1fe1f2b50e2bd7b}}, {{cite:35ffa0ed4bc6815295ce3e298410638d4e28dc38}}, {{cite:daacd1d653d1563a8853d06ed507e312773aeb4b}}, we introduce the Fama–French multi-factor models. Equation (REF ) represents the FF3 model, which is the most widely known multi-factor model:
{{formula:fa28d859-48c9-439d-9716-23946b880a27}}
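The placeholder above presumably corresponds to the standard FF3 time-series regression; as a hedged reconstruction in the usual notation (market, size, and value factors):

```latex
R_{i,t} - R_{f,t} = \alpha_i + \beta_i \left( R_{M,t} - R_{f,t} \right) + s_i\, \mathrm{SMB}_t + h_i\, \mathrm{HML}_t + \varepsilon_{i,t}
```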
| m | eb97b886271b47e9d1059bbe6f9efaba |
In this section, we implement our proposed scheme for performance evaluation using a multi-layer perceptron (MLP) {{cite:9ba169285cfc80b05f0d5b168feb41747e9ac569}} and a real-world federated dataset. Since FL is mostly suited to parameterized learning, such as all types of neural networks, an MLP is employed as the learning method. We test our algorithm on the standard MNIST dataset for handwritten digit recognition, containing 60000 training and 10000 testing instances of 28×28 gray-level images {{cite:038e970460171d7e372d1bc854dca65ecd482b2e}}. Our model uses an MLP network with two hidden layers containing 200 hidden units each. This feed-forward neural network uses ReLU units and a softmax output over 10 classes (corresponding to the ten digits), and we use ten clients. For the network optimizer, we consider the cross-entropy loss and the SGD optimizer with learning rate 0.01 and local epoch {{formula:2d2842ae-6d9b-410b-9cb4-a2250e3b7f28}}. To assess model quality, we use the pre-defined MNIST test set. Our implementation uses Keras with a TensorFlow backend.
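A minimal Keras sketch of the per-client model described above (the federated orchestration, i.e., client sampling and weight averaging, is omitted):

```python
import tensorflow as tf

def build_client_model(lr=0.01):
    """Two hidden layers of 200 ReLU units and a softmax over the 10 digits."""
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(200, activation="relu"),
        tf.keras.layers.Dense(200, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```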
{{figure:489b45c2-0007-48b4-ba3c-30104dbf7944}}{{figure:e5e90111-d502-48a8-ab25-dfc1003dd58d}} | r | f366e10bd240d0d41f12327593aa53d6 |
Recently, a series of kagome metals {{formula:16089b36-972d-4849-a657-838071d7dad3}}V3Sb5 ({{formula:888e5e94-5cac-4fd6-b377-14febff6e694}} = K, Rb, Cs) {{cite:37131fd4728284b8a331e5a503a2ca4f54995231}} has attracted increasing attention owing to the unique coexistence of topological states, charge density wave (CDW), and multi-band superconductivity. Among these crystals, CsV3Sb5 exhibits the highest transition temperature of 2.5 K {{cite:621267f259cc71ccb3675c151d60736747e20198}}. Up to now, numerous experiments have been carried out to understand the superconducting gap symmetry of CsV3Sb5, including thermal conductivity {{cite:f91037d4107f67f3fcf8b0ae2644bde66a8b8cf9}}, scanning tunneling spectroscopy {{cite:bfc08b16dbf3c468474f8d6a0df7951288147617}}, and penetration length measurements {{cite:61d18337a8d00c6e0f4b2dc708b5f67579289903}}. However, these results provided contradictory conclusions on the gap symmetry {{cite:621267f259cc71ccb3675c151d60736747e20198}}, {{cite:8ed17815d8deb16c12f9c665b55b8c7f2d69220e}}, {{cite:daa5b12ba3338e2164d23739048bdefb33c15422}}, {{cite:2e4706e59ae6e4ecb418a2d3edf9ab5f82c93147}}, {{cite:0ded4dcaa0771af9a037b44bb527987538995fb5}}, {{cite:f91037d4107f67f3fcf8b0ae2644bde66a8b8cf9}}, {{cite:bfc08b16dbf3c468474f8d6a0df7951288147617}}, {{cite:61d18337a8d00c6e0f4b2dc708b5f67579289903}}, {{cite:1e6dbf40ad44fa1c0a226f85f17fd651895ae7ad}}, {{cite:3deee93515714cb8aabe92188d840798c5353d90}}, {{cite:08b1f8509e8d9cf0e0e10b675b2459b618279fef}}, {{cite:08666d3638bb4eae285fa7f2516779477be72b7d}}, {{cite:cbcee88cb89923aa7aa78a668461f7bb4cbb8edc}}, {{cite:000a350fcd06670af20517301f482c19f176dfdb}}, {{cite:669d963b4fec6b0ce9e90633fcd5b2070dc62754}}. On the other hand, since the superconductivity coexists with the CDW, their interplay should also be taken into account. The investigation of these two competing orders is one of the research focuses in the CsV3Sb5 superconductor {{cite:590baaf3f3f91e1b692b99591264134da61b67b5}}, {{cite:82d02870019f5711a785946f5bdd96ab7a1feff1}}, {{cite:88a591ba669d25e59f4d2ffc18423b3bbb7f0ce7}}, {{cite:fc4b9a5db4b804c1c197fcbb4b5e3e1926d424a0}}, {{cite:a21c1095667260648cc293044c67ebec989f2976}}, {{cite:105a8115f37df873ec0cb1a4faa0d148f7fc3849}}, {{cite:e72fef773dc6c47cc560465ac7bc85cc1bb839ad}}. For instance, the observation of a chiral CDW provides a possible mechanism for unconventional superconductivity {{cite:08666d3638bb4eae285fa7f2516779477be72b7d}}, which induces a pair density wave phase below the superconducting transition temperature {{cite:ae19e4ac4fdc1016c8f085e829d5c5b9597c6462}}, {{cite:8035d0b82205b7c6e20fca3652d733fc99f2d24c}}. Nevertheless, under applied high pressure, an unusual observation of two superconducting domes was made inside and outside the CDW phase {{cite:6a4e9b2ec357ba68bb113edef51b330f489c17a3}}, {{cite:9048dcbe0b2332b44c85ad85fa1dad1c38124c0d}}, {{cite:659aa9b15878e01697c20d4dabe4053a36a7b0e5}}, {{cite:b18cc8292fe26abbde8e2dcc30c02ebfd7f84366}}, {{cite:2b71f54a9d22882ac3095b5fcbe435ab1e7b0868}}, indicating competition between the CDW and superconductivity. In addition, CsV3Sb5 is also a {{formula:39546791-fe9a-4156-875f-7313454acbe9}}2 topological metal in which the Dirac line is located between the {{formula:f07ef592-1a2f-4ef7-b4d4-d8fbfe40b6b0}} and {{formula:fc0270ce-33d6-4568-a16e-353c2cc3b539}} points, and the topological surface states were confirmed by angle-resolved photoemission spectroscopy (ARPES) {{cite:37131fd4728284b8a331e5a503a2ca4f54995231}}, {{cite:1c9aa55fd6879e3fb7cf7520464270d9277e3477}}. Moreover, evidence of an anomalous Hall effect has been observed in the CDW state, which was attributed to the symmetry breaking of the band structures {{cite:94bf1f6e7f338de111d24df3fc1c3fb360a27104}}, {{cite:f7771f98a6058b2e60523032cd279fe136d9502e}}, {{cite:18f6efb931c475507457c74664ce0510c332da88}}. To address these issues, one of the most promising paths is to investigate the possible chirality of the Cooper pairs and the vortex dynamics, which has not yet been reported.
| i | 59deacd8298b0fe83880eb4e9b062ed0 |
All anomalies are associated with effective actions characterised by
a specific nonlocal structure, in the form of anomaly poles {{cite:c385b51d5e8c4fdb32cd2a6ef010d1833e221fb2}}, {{cite:ffe418218b012b3c7ee3bf17a2f87183358d4ce0}}, {{cite:0c148939515e210306e20170d4e07efd98380ee5}}, {{cite:30536d248107e8088c570277b7a92eb62948ebb5}}. A beautiful unification of this phenomenon occurs in supersymmetric theories {{cite:75a0404e3cd27c62093a5e83c055d40a219c3e28}}, where the conformal and chiral anomalies, together with a supersymmetric anomaly,
are part of a multiplet. Nonlocal exchanges in the anomaly effective action are identified in all the components of the multiplet.
| i | 2361aa9640ed1b2eb91f2ddf1c08b88f |
Stellar dynamos usually produce oscillating fields rather than steady ones. This is consistent with the solar cycle of the Sun and of other stars, and with the field reversals of the Earth. In that sense, there is no reason why the dynamo process in proto-neutron stars, albeit operating only during the initial {{formula:1a3d56f1-8ea1-4855-99c4-a09ae849d28e}}, should not produce any oscillations. For a quantitative illustration of the idea, we considered a simple dynamo model originally introduced for white dwarfs by {{cite:028eb653c9be31aeb770e771fc0da133822458e4}}. We found that the model indeed predicts oscillatory behaviour for the toroidal fields, a point that was not discussed in the original paper.
| d | d725cabfd554a38e54318638ff7fdd98 |
This is the Hawking radiation. It arises from the pair production of particles out of quantum fluctuations at the horizon of the black hole. One of these particles (the one with positive energy, outside the event horizon) leaves the black hole as radiation escaping to infinity, while the other stays trapped within the black hole. As a result of the radiation, it is suggested that the black hole loses mass (and hence surface area) through the outgoing particles and thus evaporates with time. This is called the evaporation of a black hole. Observationally, it is very difficult to detect Hawking radiation, as its temperature is many orders of magnitude below the Cosmic Microwave Background (CMB) temperature {{formula:27bb0c8f-c4f7-434b-a35e-188961ea9fda}}, which overwhelms it (this is the reason why, in the last five decades of dedicated study, we have still not been able to detect any such signatures of black holes). This process, however, has some deeper consequences. For one, it violates the classical Hawking area theorem {{cite:08384a75b2eb97cfc51a973fe0962bcbf7dcbce8}} (black hole evaporation is a quantum effect); moreover, an evaporating black hole losing mass means that the black hole's lifetime is limited, and beyond that period it potentially loses all the information that was inside it. This creates a direct violation of quantum information conservation (information conservation is fundamental in both the classical and the quantum domain: in classical physics it is governed by Liouville's theorem on the conservation of phase-space volume {{cite:ae1df77e03746af91324b9bb55eb8049bdac74d9}}, while in the quantum domain it is preserved via the unitarity of the {{formula:65f77f10-058b-450e-858f-af5d749f9ee5}}-matrix). Quantum information, which is quantified via the von Neumann entropy {{cite:1f67cafd11d5f3b511efc5e7785457172d3bb948}}, obeys a conservation principle similar to that of classical physics: the information in a closed, isolated system is conserved {{cite:eb5578e6abe27ff0180c50deaa81cb66c20c6ee9}}, {{cite:d98b5e1483da473a63e622cc5d451701f27efd6f}}, {{cite:7ac8768c8c0c3cc4c4ed872eebc172593547188e}}, {{cite:b62e8b43d2161e9a0881f424fd967ff8a2b576c5}}, {{cite:3918d43c683b312e800d20133e9565f710fdc5bf}}. It is intuitive to show that Hawking radiation originating from an initially pure-state black hole would, with the evolution of time, end up with mixed states as remnants, thus violating the unitary evolution principle of quantum mechanics, and hence information would be lost during the process {{cite:0b49f2dae316f1542499d9c865580c8b6b910847}}. If the Hawking radiation were somehow able to carry an imprint of the quantum information {{cite:f69eaf86d4bc8da518bce715ee27e10bc4824ead}} from within the horizon in its flight away from the black hole to infinity, it would still give rise to a new incongruity by violating the no-cloning theorem {{cite:f7e6ac09193dd72c4a810427149291368c03d82f}}, {{cite:0b49f2dae316f1542499d9c865580c8b6b910847}}.
| i | c444965a47bdfd4e88eaaf8b55740b75 |
Semi-supervised learning has emerged as an attractive solution that mitigates the reliance on large labeled datasets, which are often laborious to obtain, and intelligently leverages large amounts of unlabeled data, to the point of being deployed in many computer vision applications, especially image classification {{cite:93439e77b6bd27e724a76e98a3dffe60cfa591e7}}, {{cite:81ccdc1d151693909753c5440101b5ac40e5b762}}, {{cite:b53292a09239a80793b2281e7a0b95203e74b8dd}}. Generally, methods for this task have adopted pseudo-labeling {{cite:b654636b46ebd66c5e84f5dd1e7bbf1a480884d9}}, {{cite:859820e5874ad7527df65d60ac913ccf66fd14f9}}, {{cite:7c7c74d9b0bcf13699f8931ae96fa2f2f84c96a3}}, {{cite:f1849a432c69f20781a04aa94f86026dd572fe80}}, {{cite:a3830e021e6af217cc7bcca6c3c202d93cbeb206}}, {{cite:3c7f2a0b72769c29b66ffa4ab93469ddcdb67139}}, {{cite:b53292a09239a80793b2281e7a0b95203e74b8dd}} or consistency regularization {{cite:63ac6be964c0793e06b89e577b87922da8de9a2b}}, {{cite:87ef8312f516df4e22fc0c128d7defa3b0572a99}}, {{cite:1b84fd0f0fb1d6eac508f2994e54b30bf5a7002e}}, {{cite:46997b1dd3914256e00207521db60b18c125e100}}, {{cite:fff95b3a8370f8bd4dac86573849f669f24159ef}}, {{cite:81ccdc1d151693909753c5440101b5ac40e5b762}}. Some methods {{cite:45f0970dd4055e1b3f8c1138868a2e6975701920}}, {{cite:ed1cfb77aa2f29efee658c9d19c505eaf1c65180}}, {{cite:f1849a432c69f20781a04aa94f86026dd572fe80}}, {{cite:39fc4d8596a4d5441025a876afabfb4496566d0a}}, {{cite:5a2676b982d799c9055d3403c4f722893e6a17bb}}, {{cite:7adf587796423495bec04448aa5c5064046bd26d}}, {{cite:cf3311a0ff399133f2cef6e467bdd48a1650d400}} proposed to integrate both approaches in a unified framework, which is often called a holistic approach.
As one of the pioneering works, FixMatch {{cite:39fc4d8596a4d5441025a876afabfb4496566d0a}} first generates a pseudo-label from the model's prediction on a weakly-augmented instance and then encourages the prediction on the strongly-augmented instance to follow the pseudo-label. Its success inspired many variants that use, e.g., curriculum learning {{cite:5a2676b982d799c9055d3403c4f722893e6a17bb}}, {{cite:7adf587796423495bec04448aa5c5064046bd26d}}.
| i | cbee2f8399b535227d0bcd13ff6e1a6f |
The search for dark matter (DM) is one of the main goals for both experimental and theoretical physics. Among different search strategies, direct detection experiments are some of the most ambitious. In recent years various collaborations have made significant progress in constraining the parameter space for DM interactions with the particles of the standard model (SM). Traditional nuclear recoil experiments {{cite:5c410ead7477e5cfc72bb6af2298cc91e9308278}}, {{cite:59d98ed99e65dc966a12afd2dd7f85a82a200032}}, {{cite:9848bd31b27d5f045c0b269c8c37526c0f81be4e}}, {{cite:e7a75b0e7888d9026420cf22aeec88bf88fa26e0}}, {{cite:a3c4549393c8b4d810ccf1225b49ce3eb3c5462f}}, {{cite:2dcb5676e385832baa13df16d3e326ee4c598f17}} are pushing the limits on the DM interaction cross section towards the neutrino floor for heavy DM particles, whereas electron recoil experiments {{cite:0018cf1ca1696ea9af4ca4d12642cb5c6d09963f}}, {{cite:ebc6554a74adf8c1f2094be92b8e297d9d0dd775}}, {{cite:8494f65fc6106a7786ff5dae80c62b2bf6d0e441}}, {{cite:4eb735a75b972cf8a1b41320c33238d5f0b43c84}}, {{cite:aec94a7c56ce887a4c901ef05ae6b86ddccccd14}}, {{cite:90f78e20731a439b2ecb05a89ffc2fdf6f6fc179}}, {{cite:1557edfde5d4047ae432946c6b7f217b1b544e7c}}, {{cite:988685d9afb51e938896dcf5c304be67896521b9}} are probing increasingly smaller DM masses. For a comparison of different targets for sub-GeV DM direct detection, see {{cite:8df6848bf0a5528e2f89163e5dbf37b9448efd33}}.
| i | cedcb4b9b92625cdc33be32f3d5400f8 |
Evaluation of AI model explainability:
While we have shown three methods for explainable AI at graph, subgraph, and tensor-state granularity levels, we have not done quality evaluations of AI model explainability via model parameter and data randomization tests {{cite:c9dbe00ac0f8057f08c88af730db3a9ba96c5ba8}}. It remains to be demonstrated how illustrated tensor-states, utilization patterns in subgraphs, and utilization distributions in multiclass prediction models actually reflect a true explanation of the traffic sign prediction.
| d | 152d26defa045aa2cc52dd15e3c89fb3 |
Result 2 rules out the possibility of {{formula:d707e2ae-a026-4fdb-9e13-564647674630}}-CGE for large {{formula:65d3696c-6702-4754-9230-70e519ab017b}}. The situation is different for small {{formula:7b4f24a4-7e76-4ea2-a59d-baa7f05ca154}}. For the special case of {{formula:1a208959-9964-4b52-8754-050956e50d44}}, Definition 2 reduces to the standard separable model of bipartite systems {{cite:a5cc9100ac48d0d78f650342da3c1a291dd5feb4}}, {{cite:5a43f4e5e2d0e640564843178fe10231f635684c}}. For each {{formula:57bf9e51-6f30-4d78-82c9-1d2f4e081eca}}, by Definition 2 any biseparable state {{cite:59b439d9a622d82a7a4d6db83aadac6cdd8e93f2}} is a 1-connection biseparable state. A further fact is that any {{formula:5607594b-1477-4916-b486-fb94c6854b1d}}-connection biseparable state is an {{formula:eeec50a1-8c50-431d-b931-e46aeee60b71}}-connection biseparable state for any {{formula:61da4597-5fcd-4296-9985-bdb8a55c9ca6}}. This implies a complete hierarchy of all multipartite entanglements, as shown in Fig. REF. The largest set contains the 1-CGEs, that is, the genuinely multipartite entanglement in the biseparable model {{cite:59b439d9a622d82a7a4d6db83aadac6cdd8e93f2}}. In contrast, the smallest set consists of the strongest multipartite entanglement, that is, {{formula:c9dff132-368f-4b5f-9bfa-849041f7b63e}}-CGE with {{formula:e10bfc93-626a-4c27-ad20-c545f4981554}}.
| r | 9615d5164bbd84efb02ecd689732224b |
In summary, it may not be a bad idea to check whether a proposed experiment,
no matter how spectacular, could be just another illustration of Feynman's
only mystery {{cite:5af67cc0a6cc74b16590dcb09aaa343b233303ad}}.
If it is,
quantum mechanics could only be accused of being quantum mechanics as we know it.
If it is not, something new may be learnt as a result.
Yet by ignoring the Uncertainty Principle altogether, one risks arriving
at a conclusion which is either wrong or, after scrutiny, will prove to be
a repetition of the same thing {{cite:54c417dbe56ee2a04fcd4ac549d5aa45b53ad6e9}}, in other words, a platitude.
Financial support of
MCIU, through grant
PGC2018-101355-B-100 (MCIU/AEI/FEDER, UE), and of the Basque Government, through Grant No. IT986-16,
is acknowledged by DS.
| d | 1e5fe297c7640b0a1f2d29cd4dbd3650 |
To test the performance of our approach on the Polish firms' dataset, we follow standard practice in the field of DL.
First, we split the dataset into training and testing sets:
three-quarters of the dataset is used for training the model and the rest for testing its performance.
For a model that contains no hyperparameters, a setting like this would suffice.
However, in DL this is never the case, as these models require a thorough calibration of their hyperparameters.
In our setup, the hyperparameters are the elements contained in {{formula:044f0a7d-741e-47b8-85a8-b5af254cbd22}}.
A common strategy is therefore to use the training set for what is called hyperparameter optimization {{cite:ab5cb7f4b9fe3f629f6c265019d0396de91519b7}}:
the training set is further divided into training and validation sets, and the model is fitted and validated with different parameters.
In our case, we draw ten samples from the training set using the bootstrap technique, as proposed by {{cite:05884b396ff139d3df17c26163b23b27d2e636f3}}, and train our model using different combinations of hyperparameters.
To implement the hyperparameter search, we rely on grid search, also known as full factorial design {{cite:05539695e16bb4a1fb41d4c0b48b4eb325d666b8}}.
We then train the model on the entire training set with the optimal hyperparameters and classify bankruptcy status on the test set.
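A hedged sketch of this protocol; `build_model` and the grid are illustrative placeholders for an sklearn-style estimator, and AUC is used only as an example validation metric.

```python
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def grid_search_bootstrap(X, y, grid, build_model, n_boot=10, seed=0):
    """Full-factorial search scored on out-of-bag bootstrap validation sets."""
    rng = np.random.default_rng(seed)
    n = len(X)
    best_score, best_params = -np.inf, None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        scores = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)        # bootstrap resample
            oob = np.setdiff1d(np.arange(n), idx)   # held-out (out-of-bag) points
            model = build_model(**params).fit(X[idx], y[idx])
            scores.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))
        if np.mean(scores) > best_score:
            best_score, best_params = np.mean(scores), params
    return best_params
```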
| r | d893d30fb803f36110538cbffac21c5c |
Quasi-Newton methods approximate the Jacobian matrix of a given function (some also approximate the Hessian) and use these approximations to minimize it. To use them with TFC, the loss function is first converted to a scalar by taking its norm: in this dissertation, that norm is either the {{formula:b8e9de4c-ce57-4e4b-8cc7-769e95479845}} or the {{formula:dc6c7a98-955a-4630-b975-858dfdf8842d}} norm. Then, the new loss function is minimized using the Quasi-Newton method. The only Quasi-Newton method used in this dissertation is the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm {{cite:dcca48ecb8f06d96745716a57bb0591346384f97}}.
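A minimal sketch, assuming a toy vector-valued residual in place of the TFC loss and SciPy's L-BFGS-B implementation:

```python
import numpy as np
from scipy.optimize import minimize

def residual(xi):
    """Toy stand-in for the vector-valued TFC loss L(xi)."""
    return np.array([xi[0] ** 2 + xi[1] - 3.0, xi[0] + xi[1] ** 2 - 5.0])

objective = lambda xi: 0.5 * np.sum(residual(xi) ** 2)   # squared L2 norm
result = minimize(objective, x0=np.zeros(2), method="L-BFGS-B")
print(result.x, result.fun)
```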
| m | f2a6f5122eb423a16943adce7ca4e142 |
This is particularly true for Machine Learning applications, in which a parametric dependency occurs naturally, for example when solving a min-min or minimax optimization problem, as in Structured Support Vector Machines {{cite:2fc59bda53c7489707f6373305c28fab2b23b852}}, {{cite:b1e315a05a4e8ee4def6830818c4b1b689f50fd2}}, Sparse Dictionary Learning {{cite:a800b514d16f22a3f6b66c4d5a8a1933f702e3c0}}, Generative Adversarial Networks {{cite:738916bb4add2947e65067c83d96b6a6d7def819}}, and Matrix Factorization. Another important area where such derivative information is crucial is the sensitivity analysis of an optimization problem, which finds applications in the shadow price problem {{cite:e30781c3b672601b35df029094a7a10822933999}} and also in bridge crane design and breakwater modeling {{cite:b82a16442b2be1eab71eec40a3565c42f6988399}}. The decision-making is based on a measure of how sensitive the model is when the parameters {{formula:5f4ee46b-5e4a-402f-889e-9b97661bc2ef}} are changed.
| i | 30f25a12a0a311c34087c31e88a2a075 |
After conducting the experiments, we conclude that single-step attacks are weaker than iterative attacks, but more transferable {{cite:88b6842f7b506ae35f574e2b5576e08cb7feb17a}}.
By comparing the effects of adversarial training on defending against single-step and iterative attacks,
we observe that adversarial training better mitigates the effects of single-step attacks (i.e., FGSM) than those of iterative attacks (e.g., DeepFool). We believe the cause lies in the nature of the iterative attacks, which perform multiple iterations to find the minimal perturbations that can fool the classifier.
In other words, perturbations generated by iterative attacks tend to overfit the parameters of the specific classifier {{cite:88b6842f7b506ae35f574e2b5576e08cb7feb17a}}, which makes the iterative attacks more powerful. Thus, adversarial training does not effectively defend against iterative attacks.
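For concreteness, a hedged PyTorch sketch of the single-step FGSM attack; iterative attacks repeat similar gradient steps with smaller step sizes.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step FGSM: move x by eps along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```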
| d | adc5f302f453ae8ecbb66b4a4b8e9128 |
The subject of decaying turbulence plays an important role in
laboratory and engineering applications {{cite:e2342df4b813fb58634cfa1942850de3dd550f56}}, {{cite:3c31ccf1870b19666af37e3e995f7c3eccb5ac3b}}, including superfluid {{cite:ad27a50d725d57fcc9201467f75dd0219c18503a}} and supersonic ones
{{cite:f49ad8f3328dec56c0b7c0aa39b96649dfd185c5}}, as well as those where decaying temperature
fluctuations are of interest {{cite:3f6442733ce55493299532f63f8785d54f802b64}}, and in many
areas of astrophysics ranging from star formation {{cite:89aab7b4531dd00b9e8a0c07a83cf40917481ef7}}
to solar physics {{cite:3824cc5b8db2d6b9bb741f17aa8b67989193678f}} and especially the early
Universe {{cite:d195959a4348e7326760d35577f907aff3426495}}, {{cite:66761e409e0bc1883e90b0848508cbfa2af4d589}}, {{cite:fc2920598edf76af0ef5d471ca5add21045a5bd4}}, {{cite:52f2b7441a90ad74840d8c6c8e593b7efe0f5af3}}.
The application to the early Universe focuses particularly on the decay
of magnetic fields and their associated increase of typical length scales
during the radiation-dominated epoch from microphysical to galactic
scales {{cite:66fc200c09cf3b05c08060ee5a65582d0b2b121b}}; see also {{cite:276a5f0e0f2a36438a795a7ef0880a98201f9126}}, and
{{cite:ccd72275ad7b71902e6f0a0f0e4d24c8c5d1e8ca}} for reviews.
| i | 9f05d4c4c966f3b12389affe7ec0d313 |
We see that the supervised models evaluated demonstrate shortcomings in performance on pitch (NSynthP) and key (GSKey) tasks. This corroborates the findings in {{cite:1f96c7eb71cabc50401ffb5d756b1ea489c5f288}} that models trained on tags do not perform well in this task, and the findings in {{cite:e7fb9ebe4f2334ce872b4aab0572cbe4aa69e7cd}} that supervised pre-trained models do not generalize as well to novel tasks. We note that there are no pitch or key labels in the Musicset dataset, and in fact, many of the labels employed (e.g., genre, mood, etc.) require the model to be somewhat agnostic to such information. It is yet to be seen whether including such information in the labels of a pre-training dataset would improve the generalizability of such models, though this is outside of the scope of the current work. Interestingly, we see that unsupervised models trained specifically on music data show significant improvements in key estimation.
| r | 231cde65323adf071b4ea2e8790fbfe1 |
The six game orderings associated with the fixation of dominating strategies {{cite:4c3d245502ba7d6d1edce8c4fb237b65e2cda687}} (also named unbeatable in the literature {{cite:59f81e9c09e1df157cd978ddbf75a088c5302039}}, {{cite:11fe97c7ce65c3d4f4bd84d7d740e7fdef4672d9}}), such as defectors under the Prisoner's Dilemma {{cite:fe23de436f39b417ccae7d10cbf5efacf805a888}}, {{cite:7fd25e43d6456ccf8ad13c314729c8d5c9bab2a4}}, were observed to yield monotonically decreasing functions, with one complete ordering and subsets of some of the remaining ones covered by the conditions of Theorems 1 and 2. However, they were also shown to exhibit a wide variety of other function shapes. Functions with one global minimum were observed under five of the six orderings. Under three of these orderings, we observed monotonically increasing functions; two of them additionally showed functions that keep an initial increase but decrease for larger population sizes, as well as functions with two extremes when transitioning between those with a global minimum and the previous shape. On the other side, the fixation of dominated strategies, such as cooperation under the same games, was proved or tested to systematically yield monotonically decreasing functions.
| d | 89dd35a159785546225973407f568e03 |
We believe this is because capsule networks better encode objects in terms of their parts, use spatial information, and compute probabilities of an object being present, thanks to the nature of capsule routing {{cite:da38d1fc6c98e0bea34ccd2b235bb3a8c420b8b9}}. The CNN, on the other hand, only tries to locate discriminative features anywhere in the data, without taking spatial location or part-based object representations into account.
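As a concrete reference for the routing mechanism mentioned above, here is a minimal NumPy sketch of dynamic routing-by-agreement; the shapes and iteration count are illustrative assumptions, not the configuration used in this work:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Non-linearity that keeps vector orientation but maps norms into [0, 1)
    n2 = (s ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def route(u_hat, n_iter=3):
    # u_hat: (n_in, n_out, d) prediction vectors from lower- to higher-level capsules
    b = np.zeros(u_hat.shape[:2])                             # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        v = squash((c[..., None] * u_hat).sum(axis=0))        # (n_out, d) capsule outputs
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement update
    return v

v = route(np.random.randn(32, 10, 16))  # e.g. 32 input capsules -> 10 output capsules
```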
| r | 442df081a3839ea0740396209f8c4908 |
Style transfer from paintings to natural images shows that larger-scale structure is transferred from the target image when training on losses of higher layers {{cite:b149b915a32c182fbc34983acb4d63ef779c6f9c}}. In order to maintain label correspondence between refiner input and output, we similarly use only the feature loss on the relu3_3 activation layer. Style loss is computed from the two lower activation layers relu1_2 and relu2_2; variants using higher activation layers are considered in the supplementary material. Figure REF demonstrates qualitative results of our airway refinement method.
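A minimal sketch of how such feature and style losses can be extracted from a VGG-16 backbone (the torchvision layer indices for relu1_2/relu2_2/relu3_3 are used; the weights argument and loss form are assumptions, not the exact training setup of this work):

```python
import torch
import torchvision.models as models

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()
LAYERS = {3: "relu1_2", 8: "relu2_2", 15: "relu3_3"}  # indices in vgg16.features

def activations(x):
    acts = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            acts[LAYERS[i]] = x
        if i >= max(LAYERS):
            break
    return acts

def gram(f):
    # Gram matrix used by the style loss
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def losses(x, target):
    ax, at = activations(x), activations(target)
    feat = torch.nn.functional.mse_loss(ax["relu3_3"], at["relu3_3"])
    style = sum(torch.nn.functional.mse_loss(gram(ax[k]), gram(at[k]))
                for k in ("relu1_2", "relu2_2"))
    return feat, style
```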
{{figure:78f1add4-c241-4be1-9522-0de4de6306c4}} | r | 7b5e514544fcb31dc86fb3bb2f25c2dc |
Though this work focused on decoding population activity from single cell recordings, the simplicial convolutional framework has wider applicability.
With only slight modifications, the framework can be adapted to other computational neuroscience tasks like brain-machine interfaces {{cite:fce0018babbdaa04d66b09108b34ca4716a1bee9}} or epileptic seizure detection {{cite:d54dd58b90226afaf0f328f19741b5db0529a599}}, essentially any task where connectivity and higher dimensional relationships have traditionally been ignored.
Beyond the scope of neuroscience, the SCRNN can be used on any dataset where shape can be characterized by a simplicial complex.
A recent increase in works employing tools from Topological Data Analysis (TDA) has revealed that the underlying shape of data can be exploited to improve performance in tasks across a number of domains {{cite:876e4fcd4740d4c101eb71de2486cd1955697e53}}, {{cite:3ef37968068fb7f37b788c203592e71c25d88d43}}, {{cite:f0198ed592206d999aff9fbdb02a942cfe05a9be}}.
| d | 80f3b8cd301be1e9c3c56df5a8291b09 |
To overcome these limitations, it is widely accepted that the essential solution is to mine discriminative information from various complementary local regions {{cite:0ba2f3386b849ebf9daf4f3de28644104b4dd96d}}, {{cite:50c5d97274ea959231f51dc69ee0ccba5ec359c0}}. Early work resorted to heavy supervision to detect multiple discriminative parts for classification, requiring not only the category labels of the image but also additional manual annotations such as object part bounding boxes, which consume substantial human labor {{cite:08dbe639e1e76266cee054f73c84d439eefd71b4}}, {{cite:1894139d29be871339ae655d0acb4ca33040588e}}. Besides, the part annotations are hard to obtain during the inference phase, which reduces their usefulness and slows down the development of the community {{cite:da077b991b2259ab19298e6668baab77ec4b902e}}. Recently, weakly supervised detection or attention techniques have become feasible substitutes, since only the category labels are needed for both the training and inference stages {{cite:da077b991b2259ab19298e6668baab77ec4b902e}}, {{cite:0687f6aa2b4add5eea89e30e72379c23cc965d17}}, {{cite:0ba2f3386b849ebf9daf4f3de28644104b4dd96d}}, {{cite:6b60d37800ad7ae6a771c1e261d8e064abaf896c}}, {{cite:16edbc1af2c71dfe189bcb3b25313009975b1224}}. These methods can be roughly divided into two types: (i) part-based methods {{cite:da077b991b2259ab19298e6668baab77ec4b902e}}, {{cite:0687f6aa2b4add5eea89e30e72379c23cc965d17}}, {{cite:0ba2f3386b849ebf9daf4f3de28644104b4dd96d}}, {{cite:6b60d37800ad7ae6a771c1e261d8e064abaf896c}}, {{cite:50c5d97274ea959231f51dc69ee0ccba5ec359c0}}, which first locate several discriminative local parts and then extract features from them for classification, as shown in Figure REF (B), and (ii) adversarial erasing methods {{cite:68ae34a9bd89e0559cebd6d48028850961507d15}}, {{cite:8b1eaa47482b5dadca66a081cc9ed7bd6408447d}}, {{cite:77998facf9aa23581b9b951f765678bc78904851}}, {{cite:18018bb69396b36f37817cca288fa04f3cfd4b57}}, which encourage the model to learn more discriminative parts by progressively erasing the learned parts, as shown in Figure REF (C). However, part-based methods may introduce background noise, since most models pre-define the number of parts and multiple discriminative parts may not consistently occur in every image; adversarial erasing methods suffer the same problem when too many object parts are erased.
{{figure:86959db6-db0b-4373-ae85-d4cf55a2c369}} | i | d7aa0b73e42e1d184ddb373cce4af540 |
With the appearance of quantum mechanics and relativity theory in the early twentieth century, new philosophical ideas on physical values, measuring procedures, and system state were established, ones completely different from Newtonian notions {{cite:a1b94b7973ae625469255bd44420a6a247113329}}, {{cite:305f0ac5469307fd6a1e557208db9b2fd198d1c4}}.
| i | ff44d2764211d993144a023dfe17f2bb |
We performed additional experiments with the same setting by combining each proposed method with the TF-IDF baseline method, which is the best baseline method.
We normalized the salience scores of each proposed method and the TF-IDF baseline method to [0, 1] within each story (we used the scikit-learn {{cite:82f9f33f223cf83f94a5576043ebf0c2f7af6999}} implementation of MinMaxScaler) and then added them to obtain the final salience score.
Results are shown in Table REF as +TF-IDF.
For all cases, combination methods consistently improved MAP scores more than our proposed methods alone or the TF-IDF baseline method alone.
The combination of the proposed method (SD, BookCorpus) and the TF-IDF baseline method and the combination of the proposed method (PAA, ProppLearner) and the TF-IDF baseline method achieved the best performance among all methods.
The Wilcoxon signed-rank test on the best combination method (i.e., combination of the proposed method (SD, BookCorpus) and the TF-IDF baseline method) and the TF-IDF Baseline method resulted in a p-value of {{formula:9ac27232-6048-4ac4-a197-05788db385ec}} .
This result suggests that TF-IDF-based salience cues are complementary to Barthes' CF-based cues, and that merging them yields a better measure of event salience.
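A minimal sketch of the combination step described above (variable names are hypothetical; assumes one score per event within a story):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def combined_salience(proposed_scores, tfidf_scores):
    """Normalize each method's scores to [0, 1] within a story, then add them."""
    scale = lambda s: MinMaxScaler().fit_transform(
        np.asarray(s, dtype=float).reshape(-1, 1)).ravel()
    return scale(proposed_scores) + scale(tfidf_scores)
```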
| m | bc5f7d9d2dec2a12a2440e75e0b78839 |
In order to fully utilize {{formula:a2d19324-9cea-4454-b36f-3c2178493217}} slots and substantially reduce the computation complexity, Juvekar et al. {{cite:fb4e6f1a9c3fa980d675b389838e5baeac79cf03}} propose GAZELLE, a state-of-the-art hybrid method customized for HE-GC based secure inference. It has been integrated into advanced solutions such as DELPHI {{cite:50ccf305c04beca8d455c1559cd90fff2e826d13}} and EzPC {{cite:dee8b9d7882e8b752897c8a48ab00eceeebe83b1}}. The core idea of GAZELLE also inspires the design of MUSE and SIMC, as they inherit DELPHI's optimization strategy for HE-based linear operations. GAZELLE is actually a variant of diagonal encoding {{cite:187d0c47e443cc85fe16fabaa884144d76c8cf76}}, which exploits the fact that {{formula:7004b742-49e1-489c-915c-a64a6f91c4ac}} is usually much smaller than {{formula:7257526f-0e77-43e2-8649-daac1c7bfbb4}} in the FC layer. Based on this, GAZELLE shows that the number of the most expensive operations, rotations, is a function of {{formula:49574bef-b4ad-4c67-8459-849aaee462cc}} rather than {{formula:d0c5ac2a-fb62-4594-ba4a-1c49b42a2ad2}} , thus speeding up the calculation of the FC layer.
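To make the rotation counting concrete, here is a plaintext NumPy simulation of the basic diagonal encoding idea for a square matrix (np.roll stands in for a homomorphic rotation; GAZELLE's hybrid variant generalizes this to rectangular FC layers so that rotations scale in the smaller dimension):

```python
import numpy as np

def diag_matvec(W, x):
    # Compute W @ x as sum_i d_i * rot(x, i): one "rotation" per generalized diagonal.
    n = len(x)
    acc = np.zeros(n)
    for i in range(n):
        d = np.array([W[j, (j + i) % n] for j in range(n)])  # i-th generalized diagonal
        acc += d * np.roll(x, -i)                            # rotation + mult-add
    return acc

W, x = np.random.randn(6, 6), np.random.randn(6)
assert np.allclose(diag_matvec(W, x), W @ x)
```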
| m | b2f0d9b56d57b00f09aca40da8b56cd3 |
If one naively adapts standard RL algorithms {{cite:767edbcc7187def3104a850cf4dc95efa4f185c6}}, {{cite:8a0c043942e8d118bc3a656b269f838ecd7be0f9}}, one may directly add bonuses to the trigger and transition probabilities, respectively, without separating the triggered reward from the triggered transition. In this case, {{formula:7f9a2cc2-5384-43f8-a791-2f789d106c8d}} (Line ) becomes:
{{formula:a5d3ad83-025a-4451-8626-8d38fa348c96}}
| d | b509ae8b1b1f320b5f948764e948305c |
For single-stage regression-based methods, our method with HRNet-W48 surpasses SPM {{cite:3f83b246c73fbbd16bfc88eadaacc4ba93199a8a}} (refined by the well-trained single-person pose estimation model) by 4.4% AP without any refinement, and also outperforms DirectPose {{cite:ea74821f2206d4454df8db9196f070ba080d6079}} by a large margin of 6.5% AP. In comparison to the state-of-the-art DEKR {{cite:74a8eb7fb71205b186db43807ae8c9f315d4021f}}, we achieve a 0.3 AP gain without an extra pose NMS or pose scoring network during inference. These comprehensive comparisons show that our method achieves better performance with a more efficient and compact pipeline.
| m | b9cbe4e746e3b30385dc4af01d83849e |
Our work shows that the spectral algorithm recovers communities from the similarity matrix whenever the MLE does. Abbe et al. {{cite:aae23540236639b966c8a335a83db59913038557}}, who observed the same phenomenon both in community detection and in the {{formula:b77f043a-f201-4334-b24b-6b5280dd85d3}} -synchronization problem, noted that such a phenomenon may be more general. Indeed, our result adds to a growing list of examples where a spectral algorithm matches the MLE {{cite:aae23540236639b966c8a335a83db59913038557}}, {{cite:55cd1dcc0388faff3c842277a7c91c30892d2abc}}, {{cite:d5a76a88cc45b2f0c8b39361c3ce13bf44e5881f}}, {{cite:5e3d6a0b4af2f15c4e5274d6c58da22e81c10281}}, {{cite:53554692d9da23a5c4f5d92d53f440da3d72673e}}. All of these results crucially rely on entrywise eigenvector bounds. Finally, we note that a sharp phase transition occurs at {{formula:1de3c309-0a3f-4838-82c4-f8edeffb1ec2}} , reiterating a common theme in community detection problems. In particular, the spectral algorithm succeeds with high probability if {{formula:dde40efc-0187-45a9-b34b-9de9d3b5aa4d}} , while any algorithm fails with high probability if {{formula:23df318a-7b93-47e3-b64c-310ff40f9e3c}} .
| d | b6ceabc533bde0f83d46d6226c8783a2 |
Suppose {{formula:47e81c86-f43b-4ce9-9b93-c2f674935941}} . By the Cauchy-Schwarz inequality and {{cite:983b8441e0c62a211dfce024942465c7aa2e439d}}, we obtain that
{{formula:afd85940-500e-48db-891a-5c3edf1d687d}}
This shows that the operator {{formula:95f88f42-834d-467f-acb9-775716bd9e39}} is well defined on {{formula:d96fbea5-0622-40ba-81e9-5d8287dc2af7}} .
Next, we import an auxiliary lemma to prove the main theorem in this paper.
Lemma 2.1
({{cite:2ce9851938ef7e0e6e323139c99afb062ba2a26b}}) Let {{formula:5016ff7d-bc8e-4cce-bedb-6192b0b324f5}} be a real function of two variables with the following properties:
{{formula:2e3e5cf3-a86f-47b7-9888-c904e060ee64}} {{formula:db2bd3a8-70bf-41b0-9972-2caefd013d75}} is non-negative and homogeneous of degree -1;
{{formula:5975ff8e-23c4-4ee4-8d96-7112ca2affec}}
{{formula:8e09133e-391d-4751-8f49-a53df987ebb3}}
{{formula:2ee0727a-5e5f-4122-9166-a747888eac3f}}
{{formula:852e1bde-04fa-4474-bffd-f9e6109c279e}} is a strictly decreasing function of {{formula:991d889b-99e1-4e29-a778-f4e565555f88}} , and {{formula:b491f1a0-a0e5-4deb-82a4-7b1bbceb414d}} of {{formula:3331d52d-a0a3-4a5d-9619-56d661ede390}} ; or, more generally:
{{formula:a0d654c6-2a74-4410-b35d-1c78d78b090f}}
{{formula:dd2a45e2-e2a6-4377-8f79-1827f1467c70}} decreases from {{formula:ff531d88-7a90-416f-939d-d34ea1bd2697}} onwards, while the interval {{formula:dd06ba40-a35f-4269-bb94-2a8393b41fdd}} can be divided into two parts, {{formula:695c97da-4da7-4a6c-b461-607b4d7f8992}} and {{formula:b588326c-47fa-4160-80ef-46d389c70ac2}} , of which one may be null, in the first of which it decreases and in the second of which it increases; and {{formula:135f1a0c-1dee-45f6-87f8-aea062354c5d}} has similar properties; and {{formula:3feb7cf7-40e3-4b63-9c07-cbce6b55d665}} .
Then for every sequence {{formula:541e98ac-097a-4dba-aeba-0df240a23e1b}} in {{formula:5335a388-d30b-4b31-a16e-f5bdd010b47f}} , we get
{{formula:be4bbae5-e721-4732-ac44-f604baf13036}}
In short, if {{formula:0a514567-5425-4791-92e2-277fab742a42}} , we have
{{formula:248778e6-2fd8-41ec-b82b-b89391bf54ea}}
Theorem 2.2
Suppose that {{formula:e82906df-7470-4c6b-b99a-cc66ddb3f8a2}} , {{formula:4bf8088a-64a1-4341-8b53-7c7a246a2401}} , and let {{formula:30af71fa-219f-4300-a46c-b3748d67f72e}} be a positive Borel measure on {{formula:efe6831b-8cbc-4a08-aa83-8f09677f285f}} which satisfies the condition in Theorem REF .
Then the following conditions are equivalent:
{{formula:e0c52d66-d6db-4f0f-adc0-658bb7c18d6a}} {{formula:d4b140d6-f386-499c-b8c5-a7be82da8c66}} is a {{formula:f7391e4e-fabb-4844-bb09-6536898b32d2}} -Carleson measure.
{{formula:465c45b5-482d-4af5-b895-31e43900134a}} {{formula:07ab9f7d-9841-4dfe-aad6-a645304f14ca}}
{{formula:edce186d-05e5-45c0-a9e6-08101f9adf44}} {{formula:dc0fbdac-446b-4666-9e51-268ce48d5ae6}} is a bounded operator from {{formula:c72358d0-05e7-4eed-9f8e-bac9ef4b4c9d}} into {{formula:36158f98-ff22-4e84-8a7b-ef4538bb876a}} .
Before giving the proof, let us recall some classical conclusions about the Beta function.
Let
{{formula:46507b58-dd7e-41e1-b431-a1d4ac7089af}}
where {{formula:2281d9de-c7a1-4aa4-8f35-b3c9a6180370}} ; this is called the Beta function. The Beta function can also be written as
{{formula:df19b889-4e38-4333-aae5-57566016fd30}}
where {{formula:043fb405-d540-4bb9-a3d7-cdc845ee66b2}} . It is known that the value of {{formula:fbde42d8-b2fd-44e0-bd25-57ce1aae9e08}} is closely related to the Gamma function, that is,
{{formula:939771fa-ce61-4879-856a-1497895b8790}}
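As a quick numerical sanity check of the Beta-Gamma relation recalled above (the identity itself is standard; the snippet is illustrative only):

```python
import numpy as np
from scipy.special import beta, gamma

a, b = 2.5, 1.5
# B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)
assert np.isclose(beta(a, b), gamma(a) * gamma(b) / gamma(a + b))
```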
Now we complete the proof of Theorem REF .
| r | d97ce888a0bb317cb904f72956b25296 |
We need to emphasize that the duration of {{formula:49543ccd-4d5a-453f-9138-da4564aa0cb7}} is not a strict time, but a statistical value.
The result based on SNR evolutionary models {{cite:af48b42e445bfa7d2b21bbd4695c111ca542ed57}} may be slightly different from our result.
Although our results ({{formula:25e9b308-5f0b-4bc1-83ec-ba002a991828}} ) are not exactly the same as theirs (30 Kyr),
they at least indicate a lifetime boundary between the two types of SNRs, whether viewed from the perspective of pulsars in radio or of SNRs in X-rays.
It can be inferred from this boundary that the two types of pulsars generated by the two types of SNRs can be roughly distinguished.
Therefore, the cumulative number ratio of {{formula:5722f468-084a-4e31-9dda-fdfa74237e4a}} at {{formula:9416994c-7fb3-41f6-9a57-926ef2d5a7e4}} 10 Kyr may represent the ratio of these two kinds of pulsar production, that is
{{formula:964719ea-b096-4ba4-947b-0bfc2d548ca8}}
| d | bfe815aeb83069f86814173c9bee035b |
Comparison of Algorithms 1, 2 and 3. In Figure REF , we compare the regret of the proposed algorithms in the central, local and shuffled models using {{formula:76a92569-c5fd-424c-9ab6-e704e2827f7a}} . We observe that all algorithms converge to the regret of non-private stochastic linear bandit algorithms {{cite:aecebb6c55e4f3f383884962d1561a222a570c37}} as {{formula:9df03770-1a8c-4b65-bb51-8fcfcf94821f}} ({{formula:35790f30-74ca-4ed4-b5db-d857896c9a55}} ), albeit at different rates. As predicted by the theoretical analysis, Algorithms (central) and (shuffled) offer privacy (almost) for free, closely following the non-private regret.
| r | 430de5e5adc43640f594ca1b5fd522bb |
Creating RL algorithms that can similarly adapt and generalize to new downstream tasks has hence become an active area of research in the RL community {{cite:219bf1850c6e4063ed535c05d6dfba7295ddf5e9}}, {{cite:5056f2cc546066e8d8c0d60c741acae356c34a9a}}, {{cite:cb37219765bbd8e75b109730577c1b39e3013a27}}.
One viable solution is unsupervised skill discovery {{cite:5056f2cc546066e8d8c0d60c741acae356c34a9a}}, {{cite:550cf7908f44d9da38ef3b251fc9f73865c677a0}}, {{cite:88e09692293652c3296bfa138264f80cebf57229}}, {{cite:94d7e42f50e64b206800f1ef53561e6d3883e511}}.
Here, an agent gets access to a static environment without any explicit information on the set of downstream tasks or reward functions.
During this unsupervised phase, the agent, through various information theoretic objectives {{cite:1b2f393748b772a6e01493996a0418cab0e6f57d}}, is asked to learn a set of policies that are repeatable while being different from each other.
These policies are often referred to as behavioral primitives or skills {{cite:92aaaaa87abf08befe0a6ed80208df0fba6b542a}}, {{cite:e1676cb82a588f2d759acabbb93d80efb3d9c06d}}, {{cite:11a6ff1ca350dec5e3dcaac92520f9b39373bc6d}}.
Once learned, these skills can then be used to solve downstream tasks in the same static environment by learning a high-level controller that chooses from the set of these skills in a hierarchical control fashion {{cite:ab29b38a1a2a4c8dc7a81b19dc22f683df670e16}}, {{cite:5ac7f56bb86928d34147006ef0873fc301909d3c}}, {{cite:d136e7656e54204b718495eff6716c14c5d043f3}} or by using Behavior Transfer to aid in agent exploration {{cite:9b344c8f3e6f8a1600ec97828383fbafd0255d3c}}.
However, the quality of the skills learned and the subsequent ability to solve downstream tasks are dependent on the unsupervised objective.
{{figure:7f9ab629-3b6a-4d9a-bb14-c1fe46ee229b}} | i | 70d30841022bc0f5078186e3a23abf65 |
As we have already mentioned at the beginning of Subsection REF , in the sequel, we apply not only {{formula:600e64ff-c008-4b49-b7c5-b6528457a04c}} -models but also permutation models of {{formula:c0f64372-aeae-4688-8a2a-de18a40d2ca2}} . To transfer a statement {{formula:2bd694e7-f6d5-4e98-952c-5fa0ea95f628}} from a permutation model to a {{formula:5a980ead-38b2-4097-a996-f5727997713f}} -model, we use the Jech-Sochor First Embedding Theorem (see, e.g., {{cite:f95f9cd1ee3fd3ca9c8e47e1d1804b831dfe8249}}) if {{formula:b603d306-8e39-4c27-8d31-432b884164dc}} is a boundable statement. When {{formula:fe807280-f2f0-424e-8900-5f599ae939ac}} has a permutation model but {{formula:6b39e5a9-e96a-421c-a51a-fdac9f44ef70}} is a conjunction of statements each one of which is equivalent to {{formula:98c5a12b-0c35-4ab8-8d9a-853035650dd4}} or to an injectively boundable statement, we use Pincus' transfer results (see {{cite:e47dac7acf520f6960c451f53e6e44c51ec43c77}}, {{cite:6c9f1dd5b81d4184ecd4f57b9641fb4490699ea4}} and {{cite:5c9218e8d28b248ba0d544b5334004408d0b4d9b}}) to show that {{formula:ec0a9abb-24de-470c-bacf-c5b7a5d4b954}} has a {{formula:6755b371-0d21-4cbd-b6ff-b6d05cc07e7f}} -model. The notions of boundable and injectively boundable statements can be found in {{cite:e47dac7acf520f6960c451f53e6e44c51ec43c77}}, {{cite:f95f9cd1ee3fd3ca9c8e47e1d1804b831dfe8249}} and {{cite:5c9218e8d28b248ba0d544b5334004408d0b4d9b}}. Every boundable statement is equivalent to an injectively boundable one (see {{cite:e47dac7acf520f6960c451f53e6e44c51ec43c77}} or {{cite:5c9218e8d28b248ba0d544b5334004408d0b4d9b}}). We recommend {{cite:f95f9cd1ee3fd3ca9c8e47e1d1804b831dfe8249}} as an introduction to permutation models.
| r | 2ab95e4ef3513f13aed89e79e2ab4678 |
While methods that rely on a post-processing step may be fast and accurate in some cases, it should be expected that inaccuracies in correlation functions and energies will appear for higher energy excitations. This is evident at higher energy excitations in the work of Chepiga et al. (2017) and will be present in similar post-processing methods. In particular, at the edges of a chain, where the bond dimension tapers off from its bulk value to a value of 1 {{cite:8ad13eab23959e2f68db6de4ac4207394fba523f}}, {{cite:61bc86aa8f9ff819fe74b6c16306be4d832e50a1}}, the size of the local Hilbert space also becomes smaller. Thus, when using a Lanczos method here, the decay in the accuracy of the Lanczos coefficients is even more dramatic. Post-processing methods for finding excitations are ultimately limited by the size of the Hilbert space on which the necessary Lanczos operations are performed. When the bond dimension is large, the effects will be reduced, as in, for example, critical systems, long-range interactions, or periodic bonds. A method for finding excitations outside this constrained class of methods would be useful.
| i | 42566b33688e0822d6ecec6cf1cc3bdd |
We studied the dependence of the QFI on {{formula:fbe3ec26-54e4-4571-bc1b-2e8f7b55a5e7}} and {{formula:b24a3df2-5cad-44b3-a55a-5b8d53de1904}} for a Gaussian aperture {{formula:8b2e47bf-b275-416b-85b5-17f6a2012706}} (which leads to a Gaussian PSF with standard deviation {{formula:228beef7-06a7-4521-8e91-ac22cffe0dcf}} ). The results are shown in Fig. REF (see appendix for analytical expressions). One can observe that the curves shown in part (a) of Fig. REF , which correspond to {{formula:a1d8b482-9e37-4552-91e6-a35b1af21618}} , coincide with those presented in Fig. 1 in {{cite:a0ebefd59cd7023258d2be6e3b378c490525c262}}, where the classical FI for the SPADE measurement is computed. Therefore, our analysis indicates that the SPADE measurement remains optimal if partial coherence between the sources arises. By comparing the first and second columns of Fig. REF , we can figure out how important the information hidden in the total signal power variability is: generally speaking, this source of information is crucial for {{formula:fdab1e58-4d52-4276-a062-27e537367d29}} , which is why the discrepancies in the previous works were manifested mainly for negative degrees of coherence. Notice that the Rayleigh curse is really inevitable only for {{formula:2c8d4770-f687-45ca-8095-5eb0c3afad5d}} , whereas for {{formula:98b1748f-9726-4198-b420-4d1216e2338c}} , it can be eluded if the full, non-normalized signal is properly used. What may surprise the reader is that {{formula:efbe8d82-c1d1-40be-b647-544e95216129}} for a fixed {{formula:466191a8-6a99-4b7c-b810-23319481c73f}} is proportional to {{formula:06edeff9-39e8-435b-b944-5c2cd634d2bc}} , not to {{formula:983be66e-695f-4992-9dde-8db5e3c711f9}} as in {{cite:9de82d60ebc0b51ff67e9a9570b8666a9401c09d}}, {{cite:a0ebefd59cd7023258d2be6e3b378c490525c262}} and other references. This discrepancy is caused by the fact that the number of photons detected from a single source depends on the aperture size, and consequently on {{formula:f5060e9b-5e3b-4989-8023-02176d65fa54}} . This effect is often hidden in a normalization factor, which in fact depends on {{formula:4472e177-040f-49fa-8e73-44134d29ed4d}} (e.g. factor {{formula:82478b9d-3148-4c57-84d6-cb151cfd9842}} in {{cite:9de82d60ebc0b51ff67e9a9570b8666a9401c09d}}, factor {{formula:f0eb718b-0120-42df-9dde-2372d611d204}} in {{cite:a0ebefd59cd7023258d2be6e3b378c490525c262}}).
| r | 3d5a362b391e3c4d60a0a7890f135b86 |
All the abovementioned works are asymptotic (i.e., they assume an adequately large sample size to estimate the experimental and observational distributions). The results proposed in those works are relationships between the experimental and observational distributions and the probabilities of causation. However, the sample size adequate for obtaining those probabilities of causation remains unclear, thereby creating a barrier between the theoretical results and real-world applications. Consider the following motivating example: a mobile carrier wants to identify customers who are likely to discontinue their services within the next quarter based on customer characteristics (company management has access to user data, such as income, age, usage, and monthly payments). The carrier will then offer these customers a special renewal deal to dissuade them from discontinuing their services and to increase their service renewal rate. These offers provide considerable discounts to the customers, and the management prefers that these offers be made only to those customers who would continue to use the service if and only if they receive the offer. The manager decides to use Li and Pearl's unit selection model {{cite:0d9165215e28f4e0888b44a9c723bbb20d51d762}} but is unsure how many experimental and observational samples are required. Are 1000 experimental and 1000 observational samples adequate to bound the benefit function such that the error of the bounds is within {{formula:6fd59cce-32a2-4869-9824-0960acc13df2}} ?
| i | 4c62cc06f6726417d8f650864dcc1eb8 |
is the Complete Elliptic Integral of the first kind (CEI-1) {{cite:bf2b94c0bfcc932f7b06ae24b93d18512fdec1db}}, {{cite:3c5e9fbc8ea0f79efa66cf922e8e7d3c5b0d4e62}}.
A comparison of (REF ), (REF ) and (REF ) implies that
{{formula:28369747-e609-484f-b341-f26b4e372099}}
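For numerical work with the CEI-1 referenced above, one hedged option is scipy, noting that scipy.special.ellipk takes the parameter m = k^2 rather than the modulus k:

```python
import numpy as np
from scipy.special import ellipk
from scipy.integrate import quad

k = 0.5
K = ellipk(k**2)  # K(k) under scipy's m = k^2 convention
# Cross-check against direct quadrature of the defining integral:
K_quad, _ = quad(lambda t: 1.0 / np.sqrt(1.0 - k**2 * np.sin(t)**2), 0.0, np.pi / 2)
assert np.isclose(K, K_quad)
```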
{{table:10d91d63-d8f6-4fd2-9d70-caa9641cb474}} | m | e330fb2fcd9ac759923bc351d1e1214c |
The Einstein-Vlasov system typically models self-gravitating particle ensembles such as galaxies or clusters of galaxies. The particles in the former case are stars and in the latter case they are galaxies. Clearly, the particles carry mass in these two situations. In this work we are instead interested in the case of massless particles, e.g. photons, and we show that there exist self-gravitating ensembles of massless particles with finite mass and compact support surrounding a Schwarzschild black hole.
To put our result in context let us briefly review some related results. Existence of steady states to the Einstein-Vlasov system in the case of massive particles was first established in {{cite:eca4b02881c3e6d34a4aefc7b86fa4c2d158c4b6}}. The steady states constructed in this work are spherically symmetric with a regular centre. Several simplifications and generalizations have since then been obtained and we refer to {{cite:edc792d87fbd024cb4e65b6ba8dabf5c7d365444}} for a simplified and general approach, to {{cite:46c66d8ffc98bf4914b22960edb7c266cdde83f9}} for the existence of highly relativistic static solutions and to {{cite:45d0a138a7edfff646a265781bb55eaf48eef6e6}} for the existence of stationary solutions in the axisymmetric case. There are several other existence results and also results about the properties of the static solutions in the literature, and we refer to {{cite:2c0271965b3ef742207c5b50b8c4cd3014d7a69c}} for a review and to {{cite:bac672f35917c8f4d307d88bb0eda6d4a0bae9de}}, {{cite:6f466e35552b8e0265c9113723116d8c9c6310a9}}, {{cite:677e143faa1a90914d3887653b5054be88f59af8}} for more recent results.
| i | 384ab196c2708fc125715a80b8e952aa |
Some of these gains have come from engineering matrix-elements {{cite:3d38e53e5ad71e1d72e7e09bcaf22176c1f4936b}} and insensitivity to decoherence mechanisms like 1/f noise {{cite:267e9339311f6efb42175830b0a59e7f0bd6e486}}. Other improvements have been made by directly minimizing noise-spectral densities {{cite:b0a7bdcf601963366cd581f326555ae8732e95d8}}, for example by improving the device's structure, materials, or fabrication process. For either approach, the first step in finding the next order-of-magnitude improvement is to determine the dominant loss mechanism.
| i | 372dcda3fdfb66bb9fe7e4658a898144 |
The reasons we take {{formula:cc6b4569-c9cd-4639-ae83-4f5a67363563}} TeV are as follows. First, the spectra of our sample are very hard and show no sign of a cutoff up to the highest energy of about {{formula:64707eeb-41bd-4ed9-a6bf-ef9f2daf1c5d}} TeV. Second, if the spectrum is dominated by emission of leptonic origin (with evidence that most of the rapidly variable emission has a leptonic origin), a cutoff above 100 TeV is possible. Recent observations of the Crab Nebula at energies beyond 100 TeV show no exponential cutoff below 100 TeV, which is usually interpreted in the framework of leptonic models {{cite:2908b2ae0e48aca043dce2f73c8fe602749f2771}}, {{cite:e923c7304992178784151b07f052edc863239946}}, {{cite:ac8c919fc0cbdf1ca9d5b9793082db4ab977a96a}}. Being powerful cosmic particle accelerators {{cite:5c88c324dde48aa9a6a6c44df8112db3ccc383e7}}, some extreme TeV AGNs may behave similarly. Third, AGNs are excellent candidates as Ultra-High-Energy Cosmic Ray sources {{cite:bd764bea2dc285331952fee7076533aac92ed0fc}}, and hadronic cosmic rays are capable of producing a spectrum without a cutoff below 100 TeV if the VHE emission is dominated by hadronic processes {{cite:60adf570b5990008ed4eabd1338276256ad7cea5}}. To determine the magnitude of {{formula:355cc872-e4da-404c-bf86-6f2a87439cab}} without ambiguity, further research is needed on the intrinsic physics (including the parent particle species and their spectral energy distribution, the radiation mechanism, and pair attenuation in the emission region) of the {{formula:ee6ffc55-be81-4d59-ae68-0ebe914ec224}} ray sources, together with the forthcoming observations above tens of TeV by CTA, LHAASO, SWGO, and others.
| d | 4a03a6a4a13e66460f233caad77cbe79 |
According to our experiments on the proposed heterogeneous federated learning benchmark FedChem, heterogeneity brings significant difficulties for federated molecular learning. This section proposes a method to alleviate the heterogeneity problem, namely Federated Learning by Instance reweighTing (FLIT). FLIT adapts the formulation of focal loss to federated learning by incorporating the global model into the local training objectives, and can align local training across clients by improving performance on uncertain samples {{cite:1326775f9d997a26cc9a234a48498adef0b47aa1}}, {{cite:777919b6d087cf310c04b02b51e60870d99fff93}}.
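For reference, the focal-loss building block that FLIT adapts looks like the following sketch (this is the standard formulation, not FLIT's federated variant, which additionally involves the global model):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                       # model probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()  # up-weights uncertain samples
```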
| m | 81c2154c43e63d291e88c988335c48c3 |
In the present work, the database of the Brain Tumor Segmentation (BraTS) Challenge 2018, composed of multimodal MRI scans of gliomas, was studied {{cite:d8455087b16acccf5b6b8ea9528f487f092f513a}}, {{cite:a1e6a22d0961835d97af854cd5a148a53c3da781}}, {{cite:e9f218c6a357c3efe910ad0407bdfa4dd75734dc}}, {{cite:19c88bab739bb23df69dc21827f91d546d2870ea}}. Different neural networks were created and combined to label individual voxels from three MRI modalities (T1Gd, T2 and FLAIR), considering the following four possible labels: healthy tissue (TS), peritumoral edema (ED), necrotic and non-enhancing tumor core (NCR/TI), and enhancing tumor core (NA). Subsequently, three subregions were segmented in three stages: whole tumor (ED, NCR/TI and NA labels) at stage 1, tumor core (NCR/TI and NA labels) at stage 2, and enhancing tumor core (NA label) at stage 3.
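A sketch of the three nested target regions described above, assuming the common BraTS numeric label convention (a hypothetical mapping here: 0 = TS, 1 = NCR/TI, 2 = ED, 4 = NA):

```python
import numpy as np

def staged_regions(seg):
    # seg: integer label volume under the assumed BraTS label convention
    whole_tumor    = np.isin(seg, [1, 2, 4])  # stage 1: ED + NCR/TI + NA
    tumor_core     = np.isin(seg, [1, 4])     # stage 2: NCR/TI + NA
    enhancing_core = (seg == 4)               # stage 3: NA
    return whole_tumor, tumor_core, enhancing_core
```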
| m | abd8d7d9487656a648317e54095dbbcf |
Our selection method from the Santos et al. (2019, 2021) rotation catalog did not select the anti-solar candidates proposed by {{cite:2ba90c6bcea418a75b1b41fb40fbee080713d3fc}} or {{cite:dcdb3ea4806772c5ccf9486f9c3e28166f734605}} as promising targets for hosting anti-solar DR. However, we note that because of the mode suppression by magnetic features, seismic targets tend to be only weakly active {{cite:a290d348477580ec1fa8498475ecde53a2ee9ac2}}, {{cite:d410f8cb77b1830620571b62f427a1179041b588}}. This also means that detecting the signature of surface rotation due to starspots for these targets might be difficult. Consequently, Santos et al. (2019, 2021) were not able to determine the photometric surface {{formula:ef78d496-2567-4d26-b072-fcdac125ff08}} for all seismic targets in {{cite:dcdb3ea4806772c5ccf9486f9c3e28166f734605}}. Nevertheless, when we consider the rotation rates that were determined through asteroseismology by the authors, KIC 8938364 and 3427720 become interesting, and would fall in the anti-solar DR region and in the transition (gray area) of Fig. REF , respectively. Unfortunately, the associated error bars derived from this technique are large and prevent us from considering these targets as promising. We also searched for candidates among the LEGACY/Kepler seismic targets of {{cite:294c1edca5f10f327cbb2c0a33a1e3cff32ecebb}} and {{cite:b845d5535114c493eab4c2b475bbfb1017aebf0e}} for which no photometric period was detected, and failed to find any targets that clearly passed our selection process.
| d | 79955bffddba6f3069e45862e0d54bd0 |
Obviously, the transition from non-relativistic to relativistic quantum theory is expected to bring in some radically new features.
Landau and Peierls {{cite:d5ae73cb55a76bdab98c3f415cd5533caa9f056f}} pointed out that in relativistic quantum theory the particle position cannot be measured
with an accuracy higher than its Compton wavelength.
Measuring the position of an electron with an accuracy higher than its Compton wavelength
requires an energy that exceeds the threshold for the creation of electron-positron pairs {{cite:85bd0418cfb5938530636247e1d37cc2ede3058e}},
rendering meaningless the question which of the electrons is the original one.
Therefore, there is a common belief that relativistic quantum theory cannot be a theory of individual particles
but it must be a field theory for a non-constant number of particles {{cite:c1d9f6c4b45a4dd615d498331199e93a385853da}}, {{cite:ac32bf44363be66db17263694a6e450b4b91af34}}.
The requirement of a field theory description is also linked to the fact that the charge density of the Klein-Gordon
equation is not positive definite, as mentioned by Dirac {{cite:ddee22e17a75c56061fa0779966569ba3b3feef1}} and also stressed by
Feshbach and Villars {{cite:85bd0418cfb5938530636247e1d37cc2ede3058e}}.
This is due to the second order time derivative in the Klein-Gordon equation and indicates
that the wave function describes in fact two degrees of freedom instead of one {{cite:85bd0418cfb5938530636247e1d37cc2ede3058e}}.
| d | 6ccd25fbf912efda244f6bc83e31a5d3 |
Finally, in sec:dcov, we consider a related class of statistics, called the distance-covariance (dCov). Using the equivalence between distance-based and kernel-based statistics {{cite:ccf977d7d60425ead42379dfdbc0180561bef9b4}}, we introduce a new distance-based statistic, called the cross-dCov statistic. We then identify sufficient conditions for this statistic to have a standard normal limiting null distribution, and for it to be consistent against fixed alternatives.
| r | d7d55371a37c1ab11f46d6182aea02c3 |
Therefore, in order to seamlessly integrate static user-side contexts into RNN session models, we propose an augmented RNN (ARNN), an augmented version of RNN that can improve on any existing RNN session model. The unique feature of ARNN is that it estimates user-contextual preference by modeling the high-order interaction between the static user context and the previous item using a product-based neural network (PNN) {{cite:81b0778a6e59d802dea5e6e27f90dd615975c7f0}}, which is a neural network (NN) variant of the factorization machine (FM) {{cite:c6535d009f3496f38770eb16d6b8521eac30e3b3}}. By integrating this contextual preference with the hidden states of an RNN session model, the ARNN makes more personalized recommendations than plain RNN session models that do not consider user contexts.
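A minimal sketch of an inner-product PNN block of the kind referenced above (shapes and layer sizes are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class InnerPNN(nn.Module):
    # Pairwise inner products between embedded fields (user-context fields plus
    # the previous item), concatenated with the raw embeddings and fed to an MLP.
    def __init__(self, n_fields, emb_dim, hidden=64):
        super().__init__()
        n_pairs = n_fields * (n_fields - 1) // 2
        self.mlp = nn.Sequential(nn.Linear(n_fields * emb_dim + n_pairs, hidden),
                                 nn.ReLU())

    def forward(self, field_embs):  # field_embs: (batch, n_fields, emb_dim)
        i, j = torch.triu_indices(field_embs.size(1), field_embs.size(1), offset=1)
        inner = (field_embs[:, i] * field_embs[:, j]).sum(-1)  # (batch, n_pairs)
        return self.mlp(torch.cat([field_embs.flatten(1), inner], dim=1))
```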
| i | d5f0fdfe0c26480056fb11af1dbe776f |
As shown in App. , the use of sine activation functions may introduce local minima in the loss function, making it hard for the NN to converge to the true solution of the ODE/PDE. We addressed this problem by introducing two additional hyperparameters {{formula:fbc54665-fa7f-474f-bd25-595a7467646b}} and {{formula:39d37d0d-e18a-40e7-b31d-4482fc5ed16a}} that assign different weights to the various parts of the loss function (for initial/boundary value problems), see Eq. (REF ), in the 2D and 3D cases. In particular, in the 2D case we had to weigh the initial conditions part of the loss, {{formula:df7488b1-0a88-45f1-bb79-b0e37d31ac7c}} , more than the other two pieces in order for dNNsolve to converge to the true solution for all the PDEs considered. In the 3D case it was not possible to find a common choice of the hyperparameters {{formula:86b250e0-3aaa-486c-ad83-a31dbbc8073b}} , hence the optimal values are reported in Tab. REF . Overall, we did not fine-tune the hyperparameters {{formula:49c315c0-9c45-4385-98ed-3d00e5d859b6}} in any of the examples presented: even in the 3D case we just picked one combination of {{formula:238b55fd-1d24-48f6-9dd8-572ee65d5547}} and {{formula:68edc2ef-ad14-4c8b-afbf-4ea06678f375}} (choosing their value to be either 1 or 10) that led to an acceptable accuracy. We consider this to be one of the main achievements of our work: dNNsolve is able to solve a broad class of differential equations without the need to tune (or with a mild tuning in the 3D case) any hyperparameter. We leave to future work the implementation of an automatic procedure for the optimal choice of the hyperparameters {{formula:c90c0ce1-2999-4eda-aa1a-5409e36efc03}} . Also, note that the results we have reported are not the best results achievable with dNNsolve, as they are obtained using a random set of initialized weights as opposed to the best solution selected out of multiple runs {{cite:c506f24c1e17db5e985b74bb5774647353dc03d9}}.
| d | b9b4a08ae3d8d5705bcae8fd29ec30b6 |
Quantum game theory began with the seminal paper of Meyer {{cite:086621fd25d6d1cb6c6dfc3501695f84e354ee29}}. It
deals with classical games in the domain of quantum mechanics. Over the last
few years much valuable work has been done in this area. Various quantum
protocols have been developed and many classical games have been extended to
the domain of quantum mechanics. It has been shown that quantum
superposition and prior quantum entanglement between the players' states
enable quantum players to outperform their classical counterparts through
quantum mechanical strategies [2-9]. Quantum entanglement is one of the
most powerful tools of quantum mechanics and plays the role of a kernel in
quantum information and quantum computation. A prior quantum entanglement
between two spatially separated parties increases the amount of classical
information communicated between them to twice the number of classical bits
communicated in the unentangled case {{cite:f5c9d19d60bd9037d68a689be3037197c24e28f5}}, {{cite:27d2a5430ccf670eab8706deaa91d6f79e047fe3}}.
Recently, the behavior of prior entanglement shared between two spatially
separated parties has been extended to the relativistic setup in noninertial
frames {{cite:444f2aa735332d99c842da71eeabcdd91904b723}}, {{cite:c9c91b15836b5f3d8f09c5e95c6adf50f885176d}}, {{cite:eebfc300233d997f457862a254576ee09d36aa35}}, {{cite:c73448e357017dd96ac9373e15e9bad150ca89c5}}, {{cite:acf4178fa7a084bbcd07e42f2bc08223fc01e987}}, {{cite:f3ce6d0515263ad1ed58d18e25499f61587eae4c}} and interesting
results have been obtained. Alsing et al. {{cite:444f2aa735332d99c842da71eeabcdd91904b723}} have shown
that the entanglement between the two modes of a free Dirac field is
degraded by the Unruh effect and asymptotically reaches a nonvanishing
minimum value in the limit of infinite acceleration.
| i | 626ee64c29dc4aa44ee6aa43aede1b07 |
Quantum annealing runs were performed on the three generations of D-Wave quantum annealers (Two, 2X, and 2000Q) housed at NASA Ames Research Center and the latest generation (Advantage), accessed through the D-Wave Leap cloud platform. The number of qubits, minimum anneal times, and base operating temperatures for all four annealers are listed in Table REF . We used 100 problem instances at each problem size while restricting the instances to ones with at least one valid schedule (i.e., coloring). We found the embeddings for the QUBO instances using D-Wave's native heuristic find_embedding {{cite:dccbfc59831e836b0976b4356375655fe9cf279f}}.
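A hedged usage sketch of that heuristic, as exposed by the minorminer package (the source and target graphs below are illustrative stand-ins, not our actual QUBO or hardware graphs):

```python
import networkx as nx
import dwave_networkx as dnx
import minorminer

source = nx.complete_graph(5)   # toy logical QUBO variable graph
target = dnx.chimera_graph(4)   # stand-in for an annealer's qubit graph
embedding = minorminer.find_embedding(source.edges(), target.edges())
# Maps each logical variable to a chain of physical qubits; {} signals failure.
```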
| m | 89224a889838a618eadbf39ea3770fb4 |
In this paper, we argue theoretically and empirically (code is available at https://github.com/ginevracoal/BayesianRelevance) that these two problems are interlinked, and that therefore solutions that ameliorate resilience against adversarial attacks will also lead to more stable and reliable interpretations.
We work within the framework of (pixel-wise) saliency explanations, which attempt to interpret post-hoc DNN decisions by apportioning a relevance score to each input feature for each data point.
Specifically, we use the popular Layer-wise Relevance Propagation (LRP) {{cite:2a667854b5508bf222dc2f1ef280115e201f3c61}}, a method to assess the contribution of each pixel to the final classification score which backpropagates the prediction in the neural network until it reaches the input, using a set of suitable propagation rules. LRP saliency interpretations are well known to be unstable under perturbations of the inputs {{cite:30efd3c458a907a082f078b65bb571caa9642c2e}}, {{cite:66fa13b70dc58ac214df1a82a36c3aa0d54ea336}}, {{cite:86db3d1e1bff2913c830e2dab3f475fba9cbf4b3}}, {{cite:0deb262f65b70d3b0551a5342bc6e7f07463972a}}; recently, {{cite:0974bef715bccf42cb8cfb8c4e6550c8152a60f4}} suggested that a Bayesian treatment might ameliorate these stability problems.
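For concreteness, the per-layer redistribution underlying LRP can be sketched as follows for a single dense layer (the epsilon-rule; a simplified illustration rather than the full rule set used in practice):

```python
import numpy as np

def lrp_eps_dense(a, W, b, R_out, eps=1e-6):
    # a: layer input, W: (in, out) weights, R_out: relevance of the layer's output
    z = a @ W + b                       # forward pre-activations
    s = R_out / (z + eps * np.sign(z))  # stabilized relevance / activation ratio
    return a * (s @ W.T)                # relevance redistributed to the inputs
```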
| i | ddc2dd358a3db2b83eb8af687e5d45d9 |
In his thesis {{cite:bce60959eef0143ef752c91fc3c8809b4805af62}} and classical paper {{cite:d289fd5863e8fced0e8d2aef87569ce3bd56b1aa}}, Dirac refined (the main part of) Theorem REF as follows.
| i | 85926291b14dd5372fbb4b53d91d3b47 |
The scale, diversity and quality of data used for training supervised deep learning methods have a major impact on their performance. Algorithms that are intended to be deployed in real-life conditions are usually trained on multiple datasets in order to improve their generalization abilities and ensure their robustness. COCO {{cite:e7b2a5e12dc77ab3744f729b13a9707fa1900bf6}} and Cityscapes {{cite:50a98697aa3e2e49dc5eb9dd8c0405e98f8b653a}} are only a few examples of large and diverse datasets providing annotations for training and evaluation of deep learning algorithms for computer vision tasks, such as object detection, human pose estimation, semantic image segmentation, etc. Collecting and manually annotating such amounts of data is usually a challenging and exhausting task, requiring a lot of time and resources. In the case of visual human-centric analysis, the collection and distribution of such data may also face restrictions due to legislation regarding privacy. An alternative approach for collecting training data in a more automated manner is to generate them through a simulator. Indeed, in recent years, the use of synthetic datasets generated in this way has been established in the computer vision domain {{cite:a8cf91464c40d8f5dbd210455c67af397647c2c2}}, {{cite:a59007abd04ffd13d4027de382b351ae0644ea9c}}, {{cite:86bd2b8b80bae2d1900bb6f58fe9756a44bd4e6f}}. On the downside, these datasets often suffer in terms of realism and detail and/or are expensive to generate, requiring artists to carefully design specific models and environments.
| i | 580f9c9e8b12b0e1aa337bf6d79502ab |
The mass-loss rates in Table REF are within the large range determined for symbiotic stars, ranging from {{formula:14a0b5f4-13e5-41ee-87cd-4bd916b4f592}} to {{formula:da2824e5-b996-4d42-bc6f-6fe289e0f432}} {{cite:7531e07da5e10e7b355a266887b1eeb9db8dbd4a}}. Multiple dust shells are common in D-type symbiotic stars, with several temperature components contributing to the IR continuum {{cite:a5b126d2e173ca65692bb5638e3c8e96109b9408}}, unlike RS Oph, which displays just one dust component. IR excesses have been observed in two other symbiotic RN systems, V3890 Sgr and V745 Sco {{cite:85d036b12872d51b4ec17b6f0c3aa0e7e13ae2e3}}, {{cite:c3011339586b84733bca643025a18d909cafefd7}}. They, too, can be fitted with one temperature component, suggesting similar circumstellar structures among the symbiotic RNe. V3890 Sgr shows only a weak IR excess, lacking silicate bands, whereas V745 Sco shows a strong IR excess, even stronger than the one observed in RS Oph {{cite:85d036b12872d51b4ec17b6f0c3aa0e7e13ae2e3}}. However, the dust properties in V745 Sco have yet to be constrained with IR spectroscopy.
{{figure:ac257530-f69d-4a48-818a-b4ab7837b3a3}} | d | 747b68668b2ad634f815c8d386e058c1 |
BM25 {{cite:98e2f61eabf69df859b3293e9ab462f408b4937d}} is used very often for document retrieval in biomedical question answering, especially in the BioASQ challenge, either in its original form or with modifications or additional reranking steps. One reason for the success of BM25-based models in the BioASQ challenge lies in the construction of the dataset. While constructing the dataset, annotators search for keywords, identify the abstracts (which are ranked using a standard TFIDF score) {{cite:b1f03bc4f96f464357ffc4f88abf354afd8a8171}} and then frame the question. From the retrieved set of abstracts, the relevant ones are added as gold standard context passages for the question. Consequently, the gold standard set of abstracts only contains documents that have high TFIDF overlap with some of the terms in the question. Other documents that may be relevant for answering the question but are not lexically similar to it may be ignored. This causes a bias in the gold standard document set, which might make it difficult for other, non-sparse retriever models to perform well in the evaluation step, even if they actually retrieve relevant documents. However, several other methods have also been developed to retrieve or rerank passages for questions.
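For reference, a textbook BM25 scorer looks like the following sketch (the Lucene-style idf variant; the parameter defaults are conventional assumptions):

```python
import math

def bm25(query, doc, df, N, avgdl, k1=1.5, b=0.75):
    # query, doc: token lists; df: term -> document frequency; N: corpus size
    score, dl = 0.0, len(doc)
    for t in set(query):
        tf = doc.count(t)
        if tf == 0 or t not in df:
            continue
        idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
    return score
```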
| m | 3a8ba7a4bbd9b1c67ff540e965ee1d0f |
An intriguing future question is to generalize our study of quantum chaos into the 1+1d interacting Abelian anyon models which cannot be embedded into fermion models, and further into {{formula:c6f4d98b-f19c-4237-8c50-237b44085e5a}} d models hosting non-Abelian anyons, in which case our method presented in this paper may not apply. This calls for the exploration of solvable limits regarding the Feynman rules of vertex operators and other anyonic operators, which has not been studied yet and is challenging. A simple generalization of our model here would be to consider {{formula:7573ec9c-e047-4ee8-8539-e5d6d69acbd7}} copies of other chiral WZW models with current-current interactions, such as the chiral O{{formula:e50e02a5-35db-4153-93a7-9ea6d32688c1}} WZW model, which has a Majorana fermion representation for its currents and contains non-Abelian anyons if {{formula:1c6bb84f-3dc1-42df-b551-289098ca60f8}} is odd. In particular, the non-local nature of non-Abelian anyons such as the Ising anyons {{cite:a5dee63f03910bd709c1133c1fd151edf1e4b710}} and the Fibonacci anyons {{cite:bab73c1a859a3fefc2facb276f94414f0bb1b497}}, as indicated by their irrational Hilbert space dimensions and fusion structures, may yield additional stringent constraints on their correlations in the presence of quantum chaos. It would also be interesting to examine the competition among different chiral interaction terms of the same scaling dimension, which have their scaling dimensions fixed by their conformal spins and thus can hardly dominate over one another under renormalization group flows.
| d | b633908f56c86c7e6ccb672eca72a9aa |
While quantum field theory is used to study Hawking radiation by incorporating quantum effects at the horizon of the black hole, it is worthwhile to emphasize that our results provide the link between classical gravity and thermodynamics with Rényi entropy and do not involve quantum effects on the horizon. Therefore, the Rényi temperature cannot be directly derived from the description of quantum fields in curved spacetime, and it may then not be interpreted as a physical quantity, as discussed in Ref. {{cite:a60649b8a93350a2c0ff81ddee879f54dd7324f4}}. However, the derivation of the Hawking temperature in Refs. {{cite:2998fd9f165029072b86922ef7bbd46a1e05fdc9}}, {{cite:ab0b4dfb680bd21280c798a51c28bcdcdb02a466}} seems to adhere to the GB statistics corresponding to an extensive system, while the resulting entropy is nonextensive. Therefore, while the entropic nature of a nonextensive system can be debated, there is no guarantee that the Hawking temperature of a black hole must be derived using GB statistics. It is worth noting that it would be more useful if the quantum effects were taken into account by incorporating the nonextensivity. Even though our results are based on classical gravity, they may pave the way to deriving the Rényi temperature using the notion of quantum fields in curved spacetime, and we leave this investigation for future work.
| d | 83d40c26db320af405b905134b07ebb2 |
This study proposes and evaluates six distinct modified deep learning-based models in an effort to distinguish between patients with monkeypox symptoms and those without. Our dataset contains limited samples, so we have further evaluated our models' accuracy with 95% confidence intervals, following previous literature {{cite:98c0f5f6dc1680d9bbc502d1a8723f0522f635e2}}, {{cite:126d167dd58c23dffb9e870c2f90333b7e6eaf32}}, {{cite:7417f3cc83f71ddc98ccf6f9139b277cdb26d2b8}}. Table REF shows model accuracy for Study One with 95% confidence intervals for the train and test sets. Two approaches have been applied to measure performance with CIs: the Wilson score interval and the binomial proportion interval. As stated in Table REF , InceptionResNetV2 exceeded all other models in terms of accuracy. In contrast, ResNet50 demonstrates significantly lower performance on the train and test sets when measuring accuracy with CIs.
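A minimal sketch of the Wilson score interval used for such accuracy CIs (z = 1.96 for 95%):

```python
import math

def wilson_ci(correct, n, z=1.96):
    p = correct / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(wilson_ci(90, 100))  # e.g. 90% accuracy on a 100-sample test set
```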
{{table:d0712a48-515c-4737-ab73-e005559401ee}}{{table:90f6c12e-033f-410f-b2fc-c140449db39c}} | d | 2e1da6b35a8541b94e86c4cf545d3c79 |
where the operator {{formula:c16ce514-7f87-4aa9-9d29-ebc3da8ceebb}} describes the variation due to collisions and {{formula:00ec6571-080f-4366-905c-c532434d6e58}} is a given collision kernel ({{cite:7fc535235fe94c63147e53b2329c3d92140cf396}}).
| i | 2e2b6f6ac50a19c4a0e82ee76f02d4dc |
Finally we point out a few additional lines of future inquiry.
Appendix sketches extensions of the FCLS penalty to rectangular and multi-array settings, which may be interesting for a variety of applications such as multi-response regression. While we focus on the penalized estimator, Problem (REF ), the results in Section can be adapted to a constrained formulation of this problem similar to ones considered in {{cite:2f83f5bf19a673959c83f63aa85d8e324091ca34}}, {{cite:d06e704a7f429336b6407307e3e4f860492508c3}}, {{cite:0f55734fc169c9b351d31c82e0e362048fcf1a60}}.
In reality we may not expect {{formula:0e683ca1-ca83-45bd-88f5-ecc1b3ad1407}} to be perfectly block diagonal, but rather approximately block diagonal e.g. when {{formula:70ebd63b-7f4b-4198-a5cd-dcfa874e8670}} is small for some {{formula:51bdc6d6-437d-40bd-b086-f5e28a80f3e0}} .
Extending our theory to this approximately block diagonal setting may be useful.
Decomposing {{formula:bb7a8647-3cba-4bec-96e8-fe781db69264}} into the sum of a block diagonal plus sparse vector – in the spirit of {{cite:0d3365ccdcac7c1b63cfd98b1ef92a8e7f11de29}}, {{cite:6f785d535d15a6a8f760a9c1fe18c5bb81cd8bb9}} – may also be of interest for applications.
| d | 9558be35cd8243ca9ca881ee9c49cd8e |
We have presented a new model for the interior and evolution of super-Earth class planets, likely to be the most common planets in the universe {{cite:85a48366e75858e7d63b714126fef5976d77864c}}.
Here we have another application of convective inhibition that has been in the literature for decades {{cite:4d2e93db5a6bc20abc6bcd77f083c7eeb6527a3f}} and has generated renewed interest in recent years {{cite:eec6cb6872438e36c581e16c220712d2a18c3b3b}}, {{cite:d670bd4e1fb27bd2cbbf02695129c8777ea55873}}, {{cite:6a4e16702a0392250bdd54fdc5ace7de148750a9}}.
In this work we extend the arguments to apply them in the limit where the condensing species rather than the dry gas dominates in abundance.
We find an extreme case of convective inhibition, wherein a hydrogen atmosphere with a layer of static stability can effectively insulate a core at very high temperatures ({{formula:92bbb334-cb3e-4426-a7ea-594c62cd694d}} K) for geologic time, or potentially longer than the age of the universe.
| d | 257799544bade0df29d90232d76a328f |
The proof is by induction on {{formula:199d0d13-b208-4da1-be83-9e5bdc53d996}} , noting that the case {{formula:810d1ae1-0d25-4ce3-a758-b1942189ddc4}} is elementary, and amounts to considering the following observations.
If {{formula:3329c5c5-2e59-4b81-be9c-935e4e426baa}} for a component {{formula:06e00fb5-454a-4e5e-bfe6-339330588f88}} not defined over {{formula:2c7d9c92-7214-4c6c-9862-d5489b82fd9f}} , then {{formula:f3deee7d-5112-4728-b4c3-41cc93e1d3d8}} for {{formula:154530c1-318f-4230-94f9-a959b9461884}} , hence {{formula:806cbf24-b14d-465f-9949-3bcc4d30517a}} , which has dimension strictly less than {{formula:4caca733-4655-4815-9215-2d5674af5273}} . Thus, the number of points on components not defined over {{formula:83e83d52-f891-49f3-8f3c-01542aa83fab}} is absorbed into the error term.
Each component of {{formula:45ebb2e9-69a3-46bd-8541-bae726942043}} defined over {{formula:4d7a10b7-8066-42c3-97dc-9fc183037036}} has {{formula:48a53b03-a90f-4029-b4d9-daf07af3b75b}} by Theorem 1 of {{cite:54cd22cae01a8236e244ad6fe9c57d066369c2c8}}. Summing the number of points on each component is an overcount, but the surplus is due to points on pairwise intersections of components, which again is absorbed into the error term. (Note that {{formula:bc3f1892-545b-43fb-bca1-167aba1fae19}} , so even after multiplying the error by {{formula:3bff6534-8ff7-40b3-a6ce-83dcb84b23e8}} , the implied constant still depends only on {{formula:fb398fcc-0dc8-4ffd-9461-968aec5136dd}} , {{formula:7ab1ccf1-27bb-463f-9a36-687c1e4e7c35}} , and {{formula:72ef7e21-05c0-491b-a859-584ecb602b52}} .) Thus {{formula:9492e1d8-6dea-438e-82e8-232dde78fb93}} has the claimed magnitude.
We have {{formula:3fb8fcd9-d68c-4c7a-aa79-90c122e8bcde}} ; since {{formula:e08d9a17-1e2d-4eaa-8a06-d91ec81a08e2}} has dimension at most {{formula:91105681-3547-4164-83ba-6869699ce9d7}} and degree controlled by {{formula:15bc5c36-5565-4964-b92a-56df19c3c68f}} , {{formula:efdfad71-5233-4ee0-9135-3fb78add6c65}} , and {{formula:2c1b5988-6170-4a61-986e-fd07fab6fe82}} (by Bézout's Theorem), the size of {{formula:e6fb3f86-ed33-4cb8-a845-bea96930fedb}} is included in the error term. Thus, {{formula:2797307f-a634-4e17-8243-ac0a99de0e26}} also has the desired magnitude.
Finally, if we let {{formula:1ec6dcc5-1d04-42a8-8653-381c8b9e2bd6}} be the projective closure of {{formula:625d5f8f-4fc9-4eaf-aa7b-30c0da2e9f90}} , then {{formula:40538449-c7f0-4724-b9be-2f305b583a24}} , where {{formula:2e6ba40c-067a-4719-a521-811532c0319f}} is the hyperplane at infinity. Since {{formula:9810bd10-8c87-4021-aa2e-39fb5daf425e}} has lower dimension and degree {{formula:59743007-222d-4cdf-a82b-8318a5d889c3}} , we are once again removing a set whose cardinality is subsumed by the error term, so {{formula:5be2c72c-1102-49db-8877-7dffd34a415f}} (and, similarly, {{formula:fcb5914c-699e-4d8c-8345-a8c193e7ee20}} ) has the appropriate cardinality.
| r | 9a1cfda5cdfa16c83ae8128134199a19 |
Jusufi and Övgün have applied this method to a rotating global monopole spacetime in {{cite:693323477789c2e1373c45372a5bfc96d5166032}}. With the defined line element and the corresponding expressions for the metric, the Christoffel symbols, and the Ricci tensor, the Gaussian curvature is derived to be:
{{formula:5b7d6e62-fb0c-4a63-b581-4af4996832f2}}
| m | 11892407b6e4a72c4e9acbefd6799049 |
Here {{formula:e599ac4f-02db-4b68-bd23-9a87ce0ef9b7}} is the Legendre-Fenchel transform of {{formula:a51bac7a-142c-4e7a-945a-1fd5bf0dc222}} with respect to {{formula:eaba815c-4822-4122-b84d-35c30af04725}} . Due to the differentiability and strict convexity of the Hamiltonian {{formula:d2c11ee3-6754-4e24-88fe-bd6c0d2df169}} ,
{{formula:0a66c96b-d700-4600-b387-b6e43f0aa71e}} is also differentiable and strictly convex with respect to {{formula:f978d2bd-7a15-4b85-97fc-d3e609c94fd0}} and {{formula:eb724eef-41ef-47b9-b866-b4ec78f669da}} {{cite:11efc2e3581d2c68287741b2e7c5c178be454735}}.
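For reference, with the paper's notation elided by the formula placeholders, the Legendre-Fenchel transform of a Hamiltonian H(x, p) with respect to the momentum variable takes the standard form (symbols here are generic assumptions, not necessarily the paper's):

```latex
L(x, v) = \sup_{p} \left( \langle p, v \rangle - H(x, p) \right)
```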
| m | 2623b10a6ddd2b373e24cc2a199dc8f7 |
One idea to improve computational efficiency which has gained attention in
recent years is to utilize low-precision arithmetic—in place of conventional 64 bit
arithmetic—for computationally intensive parts of the code.
This has been accompanied by parallel trends in deep learning
where low precision is deployed routinely and for which novel hardware is now
emerging {{cite:edd511121da6f169ba347471572a32ad9ca56454}}, {{cite:200da9d9a152cd7c07449ae375a7464563802b72}}.
Whether such hardware can be exploited for weather & climate,
however, ultimately depends on the cumulative effect of rounding error.
In fact, a number of studies have shown that much numerical weather prediction,
at least on the short timescales relevant for forecasts, can be
optimized for low precision {{cite:b9238b90c22d6e494ceb10c187befa7520285d9e}}, {{cite:86501b6bb9ba800e0471c4c9ce80af253df5be5c}}, {{cite:9e86741167ae38cca570dfb09a8eb3c6c16f2768}}, {{cite:c268e5d6e40967e542af5558b3d813d5a3e4b1e8}}, {{cite:bbe2040e5ed750b129d5477df27c5861c2b1b6c3}}
and forecasting centres are already exploiting this in operations.
The European Centre for Medium-Range Weather Forecasts
has now ported the atmospheric component of its
flagship Integrated Forecast System to single precision {{cite:cc76cc340afcd8d57e38615cc73b9c1b5c599605}}, {{cite:f12021be500292be4b66d4dead1d38b952c00725}}
while MeteoSwiss and the UK Met Office
have tested single and mixed-precision codes respectively {{cite:e03762cca310b717d51cf57fb7c534c86346d113}}, {{cite:b8fa76302a2ec786b1f5f709950fe8fad5351cce}}.
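A toy illustration of why the cumulative effect of rounding error matters (a pure NumPy sketch; the numbers are illustrative):

```python
import numpy as np

# Accumulate 100,000 increments of 0.01 in half precision vs double precision.
x16 = np.float16(0.0)
for _ in range(100_000):
    x16 = np.float16(x16 + np.float16(0.01))
print(float(x16))      # stagnates near 32.0 once increments fall below half an ULP
print(0.01 * 100_000)  # true sum: 1000.0
```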
| i | fd3b35fc6795a79abefc9052b719271a |
Marr's {{cite:b2cb0403e2f8de21b63a68fc993e4c9d53fc4f27}} nearly four-decades-old dream of a singular vision science — the intertwined study of biological and computer vision — is, at long last, being realized. Fueled by dramatic progress in neuroimaging methods and high-performing computer vision systems, new synergies are arising almost daily. However, connecting these two domains remains challenging. Here, we tackle one of the biggest obstacles to integrating across these two fields: data. In particular, neural datasets studying biological vision are typically lacking in: 1) size; 2) diversity; and 3) stimulus overlap, relative to extant computer vision datasets. We address these three concerns with BOLD5000, in which we have collected a large-scale, diverse fMRI dataset across 5,254 stimulus images. Critically, the human neuroimaging data available in BOLD5000 is: 1) an order of magnitude larger than prior slow event-related fMRI datasets; 2) extremely diverse in stimuli; and 3) considerably overlapping with standard computer vision datasets. At the same time, BOLD5000 represents a significant dataset for the study of human vision in and of itself. As mentioned above, it is, by far, the largest slow event-related fMRI dataset using real-world images as stimuli. Moreover, given the diversity of content within these images and the fact that our fMRI data covers the whole brain, BOLD5000 may be sufficient to cover a wide range of high-level vision experiments (in the context of automatic visual processing during non-task-related, free viewing).
| d | 7d39b81d87bf3111fa3afefe828018ca |
Knowledge distillation: The population distribution
related to a regularizer based on the Kullback-Leibler divergence
(knowledge distillation) was shown in Section
REF . This can therefore be cast in terms of information
geometry, in which the probability falls off exponentially.
Hence these results connect to methods such as
{{cite:efdd5a391d8ca87b51054b461d2431acff3f3b0e}}, {{cite:e679d9d7e01d59d21417ab196e8fef73926d3c89}}, {{cite:83278c0158593eff2849f2518aaa363f8d442e45}}, {{cite:3f59e3c71f13b61ed5dc9f4b4724f4cc1d7ad665}},
but the exact regularizer used there does not take the full
parametrization into account, and one can therefore improve upon these methods.
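For concreteness, a minimal sketch of the standard KL-divergence distillation regularizer that such methods build on is given below; the temperature T and the weighting alpha are illustrative assumptions, not values from the cited works:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy on hard labels plus a KL term that pulls the
    student's tempered distribution toward the teacher's."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.log_softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
        log_target=True,
    ) * (T * T)  # rescale so gradient magnitude is comparable across temperatures
    return alpha * hard + (1.0 - alpha) * soft
```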
| m | 8cc880d8fcd0e5b29322c86764850577 |
Finally, it must be acknowledged that the effort of the community is promoted by challenges, such as the 2020 edition of the BraTS challenge {{cite:2df274952699063714596e9731877d7da59ea601}}, {{cite:0e2109f14ac45d19acaee8c0143db996653119ac}}, which included a UQ task; the MICCAI QUBIQ challenge (https://qubiq21.grand-challenge.org/), which focused on label uncertainty; and the SHIFT 2022 challenge (https://shifts.grand-challenge.org/), which will contain an uncertainty quantification task for Multiple Sclerosis lesion segmentation {{cite:0307ee530cba8a3d4093ae875ec53bcbc3034369}}.
| d | 8e6fa3b2e67675113c694e822a6e9445 |
Structural correspondence learning (SCL) {{cite:7511179b3ddc395d09c3547ffcd6098cbec364ea}} is one of the most representative models for finding features common to both domains. These common features are called pivot features; in text classification, they are words that appear frequently in both domains. Because these features are stable across domains, they can serve as a bridge for transferring knowledge. SCL proceeds in three steps (see the sketch below). 1) Feature selection: SCL first obtains the pivot features;
2) Mapping learning: the pivot features are used to
find a low-dimensional common latent feature space;
3) Feature stacking: a new feature representation is constructed by feature augmentation.
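A compact numpy sketch of these three steps follows; the pivot count, subspace dimension, and the least-squares stand-in for the pivot predictors are illustrative assumptions of this sketch, not details from the cited paper:

```python
import numpy as np

def scl(X_src, X_tgt, n_pivots=100, k=25):
    """Structural correspondence learning on bag-of-words matrices.

    X_src, X_tgt: (n_docs, n_features) arrays from source/target domains.
    Returns source features augmented with their shared-subspace projection.
    """
    X = np.vstack([X_src, X_tgt])

    # 1) Feature selection: pivots = features frequent in both domains.
    freq = np.minimum(X_src.sum(0), X_tgt.sum(0))
    pivots = np.argsort(freq)[-n_pivots:]

    # 2) Mapping learning: predict each pivot from the non-pivot features
    #    (least-squares here, in place of the paper's linear classifiers),
    #    then SVD the stacked weights to get a low-dimensional projection.
    mask = np.ones(X.shape[1], dtype=bool)
    mask[pivots] = False
    W = np.linalg.lstsq(X[:, mask], X[:, pivots], rcond=None)[0]
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    theta = U[:, :k].T  # (k, n_nonpivot) projection onto the shared subspace

    # 3) Feature stacking: augment original features with the projection.
    return np.hstack([X_src, X_src[:, mask] @ theta.T])
```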
{{figure:26535480-6207-4e72-91d0-927c60162755}} | m | 08bae496fd08087aa1c048662c27641f |
The proof of this theorem follows exactly the same lines as in {{cite:cb33c9f4bc751643baf1f8a9649ba077fad99dde}}. We just emphasize that the denseness of the range of {{formula:d706691e-f335-4de2-ad0c-0f5814e20029}} is equivalent to its injectivity thanks to the reciprocity relation in Proposition REF .
| m | a76bcf12b64620594fdbaed4d3d576f9 |
One of the most frequent research fields in spatial data analysis is the identification of features (clusters) of events in the presence of clutter. For instance, to detect surface minefields, an image from a reconnaissance aircraft can be processed to obtain a list of objects, some of which may be mines and others any other type of object {{cite:f11bfff9bf6b942279878644f72f1981afcc5b63}}, {{cite:0fbc086ec2433bdccf83029539318d5058cb11b6}}. For spatial point processes, the problem has been addressed in different ways. {{cite:f11bfff9bf6b942279878644f72f1981afcc5b63}} developed a method to find the maximum likelihood solution using Voronoi polygons. {{cite:8bba6e4451fbe417d9f99a895aebdb68e1ee62dd}} used model-based clustering to extend the methodology proposed by {{cite:779912a7657041d914cd24214b7de62c811bcd3b}}. While these methods rely on some limiting assumptions, {{cite:0fbc086ec2433bdccf83029539318d5058cb11b6}} adopted a different approach in which they estimated and removed the clutter without making any assumptions about the shape or number of features.
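To make the last idea concrete, here is a minimal sketch in the spirit of density-based clutter removal, classifying points by their k-th nearest-neighbour distances; the two-component Gaussian mixture on log-distances is an assumption of this sketch, not the cited method's exact model:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.mixture import GaussianMixture

def split_clutter(points, k=5):
    """Label each point as feature (dense) or clutter (sparse) by fitting
    a two-component mixture to log k-th nearest-neighbour distances.
    Assumes no duplicated point locations, so all distances are positive."""
    tree = cKDTree(points)
    # Distance to the k-th neighbour; index 0 of the query is the point itself.
    d_k = tree.query(points, k=k + 1)[0][:, -1]
    log_d = np.log(d_k).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, random_state=0).fit(log_d)
    labels = gm.predict(log_d)
    dense = np.argmin(gm.means_.ravel())  # component with smaller distances
    return labels == dense  # True for feature points, False for clutter
```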
| i | 62f5af55a536845b1134be62f2f73990 |
We have chosen to examine in detail the RL method described by {{cite:5e3b64768473760aeb9d9efa17f515dff457aa2c}} for learning to navigate to an image using visual inputs alone, because this has now become a general method on which several more recent and complex algorithms have been based {{cite:567916e19437ac2f23d4855b1ed87fe02157717b}}, {{cite:ae746fac8004fddc31d594cc99642a3c9bd8148b}}, {{cite:e508b6a3be835077e4c3769949cb186d08523e67}}, {{cite:ee551017f62ff20ed3c58aae0809597e7df5627b}}, {{cite:249920b912147c34c745521769120051b73c5543}}, {{cite:66d80f63b89b463c8eab9dbfee28b288507e58b6}}. We have compared the {{cite:5e3b64768473760aeb9d9efa17f515dff457aa2c}} representation to a hand-crafted representation (based on relative visual directions and using highly simplistic input) in order to illustrate two points. First, in {{cite:5e3b64768473760aeb9d9efa17f515dff457aa2c}}, the relationship between stored feature vectors and the locations of the camera in the scene (fig:zhuplan) is quite a complex one, while for the RVD model the relationship is simple and transparent. In the case of {{cite:5e3b64768473760aeb9d9efa17f515dff457aa2c}}, it is possible to build a decoder to describe the mapping between feature vectors and location (as illustrated by the systematic distance information visible in fig:zhutSNE), but this is quite different from the smooth, one-to-one relationship between stored feature vectors and space illustrated in fig:rvdtSNE, at least over the range of camera locations illustrated here (fig:rvdplan). The decoding required to extract location from the {{cite:5e3b64768473760aeb9d9efa17f515dff457aa2c}} representation is reminiscent of the decoding that has been described as a way to use the aliased grid cell activity as a signal for location in rats {{cite:d6d9522fd35853e8d1f2ce2aafc8a5da5691cd05}}, i.e. substantially more complex than the interpolation of the feature vectors of the RVD model, which generates a sensible result directly (e.g. fig:rvdmidpoints). Like the decoding of location in the {{cite:5e3b64768473760aeb9d9efa17f515dff457aa2c}} model, interpreting the output of grid cells would need a sophisticated decoding mechanism if they were to be used on their own for navigation {{cite:d6d9522fd35853e8d1f2ce2aafc8a5da5691cd05}}, and neural network implementations have been proposed to solve this problem. For example, it is possible to decode the distance and direction of a goal given high-dimensional vectors ({{formula:e6f00fd5-b94e-4d34-bac5-6573c06a52c5}}) of grid cell activity at the current and goal locations {{cite:5c8a4b1c8bba43375371a187b3f48a9d6728d492}}, but grid cell firing rates are not the only high-dimensional vectors encoding spatial location that could be used. The vector {{formula:51e0d6c9-c699-432f-a2ee-739da58543a2}} that we have described in this paper would be likely to do equally well and potentially even better, since the aliased nature of grid cell firing is a disadvantage rather than an advantage in this context.
| d | 06551742d45e5b59d66f531c572e3e9f |
The CCP method is not an algorithm in and of itself. It is an algorithmic change that can be applied to any actor-critic algorithm, provided consecutive agents are used to solve a task. The experiments described in this paper were implemented using the SAC algorithm {{cite:58f2704323bee96bf4e71ce867086b9e3fd3fa70}}, {{cite:83929879279e3c8a81b50613b44dd1cbb9995e1b}}.
| m | 95ab6330bdb7483b17079e42b64546e3 |
Then, we analyzed the effect of using different anatomy-constraint settings in our AccSeg-Net. The results are visualized in Figure REF and summarized in Table REF . As we can see, adding either the MIND loss or the CC loss helps improve our segmentation performance, while combining both anatomy-constraint losses yields the best segmentation performance for our AccSeg-Net. We also performed ablation studies on using different segmentation networks in our AccSeg-Net, including R2UNet {{cite:fa210fa121b069ba47ee0d174844d3725d618670}}, Attention-UNet {{cite:992173da07e50962ec01d6c32f09aee13017ebe3}}, and UNet {{cite:c4475d3c9749151b45b62950ef1cc412cda7f40d}}. The visualization and quantitative evaluation are summarized in our supplemental materials. We observed that AccSeg-Net can be adapted to different segmentation networks and yields reasonable segmentation results. Additional PET liver segmentation results from our AccSeg-Net are shown in Figure REF for visual evaluation. Our AccSeg-Net can also provide reasonable segmentation on PET data without using any ground-truth annotation in the PET domain.
{{figure:2826795e-1d43-47fb-87ad-9072d0ea99c1}} | r | 010dcd3ef16f00f3fa2198e76ee5f499 |