To further demonstrate this advantage of ALSO, we turn to Powell's method {{cite:9e3e304242b2d114815747efeb1c266dfcb65d68}} to optimize the parameters. We carry out 8-qubit state preparation as well as quantum autoencoder optimizations, and the results are presented in Table REF . In the infidelity (cost) columns, each entry is the average optimal infidelity (cost) over 5 instances of the problem; the #copies (#samples) columns give the approximate average number of copies consumed to achieve these values (except for ALSO, where exactly {{formula:5d101222-9319-4d98-9f84-c46b357f9d64}} copies are consumed).
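As a rough, self-contained illustration of this kind of optimization (not the authors' code), SciPy's derivative-free Powell routine can minimize a generic infidelity-style cost; the toy cost function and parameter count below are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def infidelity(theta):
    # Toy placeholder standing in for an estimated infidelity
    # 1 - |<target|psi(theta)>|^2 obtained from measured copies.
    return float(np.sum(np.sin(theta) ** 2))

theta0 = np.random.default_rng(0).uniform(0, 2 * np.pi, size=16)
res = minimize(infidelity, theta0, method="Powell")
# res.nfev cost evaluations; in an experiment, each evaluation would
# consume some number of state copies, so #copies ~ nfev * shots.
print(res.fun, res.nfev)
```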
with {{formula:5225eac0-b0f7-4197-aba7-24f7816b92bb}} GeV {{cite:dc8f8ba30f6329bef5563f14b257b2729a13aa2a}}. In consideration of the {{formula:191e1034-576b-4629-b5eb-ad466d94fc2e}} in {{cite:f684b1b24c22aec6ca7aa6646a843e04f655f875}} for the decay {{formula:c19e7924-0be7-4a99-b9bf-005a49ae2984}}, one could expect a branching fraction {{formula:e5e2bb53-7de0-4408-aa67-b78d0a523f26}} for the decay {{formula:32508b3d-9750-48ff-a6d3-2b9f050c7529}} with the subprocess {{formula:2207cc92-52b6-4bdb-8179-9c0f333ca614}}, which is close to half of the contribution from {{formula:b18bb24e-3f66-4ce6-b23b-dbefe0bda44b}}. Then we have about {{formula:332c3fc7-746d-4f13-8429-76ce40fcc57c}} of the total branching fraction, which is still much less than the {{formula:44457dbb-b89e-4a0c-ac08-3b5bfed6a232}} measured by LHCb {{cite:f0898d919888a2248a3ee49fc444600802cbea81}} for the three-body decays {{formula:2ddca39b-376e-40f6-ac60-adfb003db614}} from the resonances {{formula:ce6fcf93-a298-410d-875c-fb78a9009410}}, {{formula:5441b9e2-7813-457b-8a2f-a01b2b1a3a55}} and {{formula:6e026732-dc2a-4192-b419-92030517126e}} when the interferences between them are neglected.
We propose DF-GT to learn the fusion of multi-rater segmentation labels in favor of disease diagnosis. We compare DF-GT with SOTA multi-rater fusion strategies to verify that the labels fused by DF-GT contain more discriminative diagnostic features than the others. Specifically, we validate the diagnosis performance of the fused segmentation labels with various segmentation-assisted diagnosis approaches. In particular, the multi-rater labels are first fused into a unique ground truth by each multi-rater fusion strategy. The unique ground truth is then sent to a series of segmentation-assisted diagnosis models to evaluate its contribution to diagnosis.
In the comparison, the multi-rater fusion strategies include majority vote (MV), STAPLE ({{cite:f1bf23e280d8ba2bf9de4f4394f0825a4d529de0}}), AggNet ({{cite:f6b69a15539de67ddd79a9981f3edd05fa20628e}}), and MaxMig ({{cite:21b6e303c645171afa8be95cb494a1eb04937da7}}).
The segmentation-assisted diagnosis models we selected for the evaluation include the concatenation-based method (DL-CAT) implemented with a ResNet50 backbone ({{cite:b4c32b9fa7dbc05289940b25549b7be3ec73c101}}), the ROI-based method (DL-ROI) ({{cite:3dd44296e3602d28dc8cadc5bc4ad971a8ef4e85}}), the attention-based method (DL-ATT) ({{cite:33336efa3c7cd86d9b8c5e56c9a8c60437ca6241}}), the semi-supervised learning based method (DL-SSL) ({{cite:b52536e422a8d3ffb43a0b16649dc2546127d228}}), and the transformer-based feature-fusion method (DL-SeAT) ({{cite:c12d368a357ce7aa112322c66205f9200a37a785}}). Since directly calculating the vertical Cup-to-Disc Ratio (vCDR) on OD/OC is also a commonly used method in clinical glaucoma diagnosis, we additionally use two vCDR-based diagnosis methods in the glaucoma diagnosis task, vCDR-TS ({{cite:c49c6f5b53f2d1274203f0dc202823a9bc29e657}}) and vCDR-SVM ({{cite:f9e7ca701bc6e5dc2381287287d0d40534c96b19}}), which are implemented by thresholding and SVM on the vCDR, respectively. The quantitative comparison, measured by AUC (%) on three different tasks, is shown in Figure REF .
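For concreteness, here is a minimal sketch of the simplest fusion strategy listed above, majority vote (MV), applied to binary multi-rater masks; the array shapes and tie-breaking rule are our assumptions, not details from the paper:

```python
import numpy as np

def majority_vote(masks: np.ndarray) -> np.ndarray:
    """Fuse binary masks of shape (n_raters, H, W) into one label:
    a pixel is foreground if more than half the raters marked it
    (ties broken toward background)."""
    n_raters = masks.shape[0]
    return (masks.sum(axis=0) > n_raters / 2).astype(np.uint8)

raters = np.random.default_rng(0).integers(0, 2, size=(6, 64, 64))
fused = majority_vote(raters)  # unique ground truth fed to a diagnosis model
```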
{{figure:a041ccca-36aa-4953-a07e-05301ba2c414}}
We compare DeepTIMe to the following baselines for the multivariate setting: N-HiTS {{cite:44211f525f14839e7efdee4352d0445482a2330b}}, ETSformer {{cite:22660f0a9329caf257433ac18a0b48680e7f6d03}}, Fedformer {{cite:43ccf16acbaf6c6502edb90e15e518707ba8ffc6}} (we report the best score for each setting from the two variants they present), Autoformer {{cite:a8f96bcb744da5cd5c5cf6943d1fef10a1603c26}}, Informer {{cite:5844fef54ebbd3fca2d56fb0ccbd3784f1caaad8}}, LogTrans {{cite:1a6fdb8b65c2fb36f54918eab90a7367a764125c}}, and Reformer {{cite:fdc8b9565ed705e778918657f9f7a52645652261}}.
For the univariate setting, we include additional univariate forecasting models: N-BEATS {{cite:1cc44ffc25e7653a242498377a179b8e14b0ee2e}}, DeepAR {{cite:42e81c99121ebddc7b8b68ce1648b0b688a676e6}}, Prophet {{cite:3552b83aa1d9aebbca0ce984553cda07af852072}}, and ARIMA.
We obtain the baseline results from the following papers: {{cite:44211f525f14839e7efdee4352d0445482a2330b}}, {{cite:22660f0a9329caf257433ac18a0b48680e7f6d03}}, {{cite:43ccf16acbaf6c6502edb90e15e518707ba8ffc6}}, {{cite:a8f96bcb744da5cd5c5cf6943d1fef10a1603c26}}.
tab:main-multi and tab:main-uni summarize the multivariate and univariate forecasting results, respectively. DeepTIMe achieves state-of-the-art performance on 20 out of 24 settings in MSE and 17 out of 24 settings in MAE on the multivariate benchmark. It also achieves competitive results on the univariate benchmark despite its simple architecture, compared with baselines comprising complex fully connected architectures and computationally intensive Transformer architectures.
The equation (REF ) was derived by Hunter and Saxton as an asymptotic model of liquid crystals {{cite:20db57b813ed62c9d48d95d78522c5964758c1ee}}, {{cite:d67b534c6bc77d66d2ed23508621bce5ed361b2b}}. The {{formula:913ebb1c-d204-43b9-8003-0102725590ff}} equation is completely integrable {{cite:d67b534c6bc77d66d2ed23508621bce5ed361b2b}}, {{cite:f2944602952bd94a57dfc30704fdbee3a538ad2f}} and has a bi-Hamiltonian structure {{cite:8eab6bc512d821e80764c2f36d8cf9f6a1270a18}}. Local well-posedness and blow-up phenomena for the Cauchy problem of the (HS) equation on the circle were studied in {{cite:20db57b813ed62c9d48d95d78522c5964758c1ee}}, {{cite:58499e01ae1a867a163f1a4885cbcc8a985a9dd9}}. Global weak solutions of the (HS) equation were investigated in {{cite:69602660d9a61bb83af4311e6e422f06c2bf042f}}, {{cite:024803b1341910518cc1c5f5593b0f144c1f1201}}.
It is interesting to notice that, for almost every pruning target, the estimated operations count of the models pruned using the method of Liu {{cite:444fc568d0ffa491c21758c68436044b7acd59ae}} is greater than that of those pruned using SWD. Although this could seem counter-productive, it is actually a good sign. Indeed, those networks were pruned according to a parameters compression target, and the goal for each was to maximize performance. Since performance and operations count tend to be correlated, the fact that both are greater for the same parameters compression target means that SWD preferentially targeted filters whose removal reduced the number of operations the least. So, it appears that SWD effectively targets the parameters that are the most relevant to prune.
We propose a special hierarchical architecture of a WGAN that jointly generates the magnitude of multiple resolutions of the same SAR image. This idea of jointly modeling multiple resolutions was inspired by the Progressive GAN of Karras et al. {{cite:cce798c21f29d6ee4c6aa2a6808a273ade7533e2}}, although their motivation was different. We aim to exploit this hierarchical structure for super-resolution, i.e., to find a higher-resolution image given its lower resolutions. Suppose we are given a dataset containing the magnitudes of 4 images at exponentially increasing resolutions, {{formula:60610992-f8fd-4e6b-9b10-5184969fd095}}, where the resolution of {{formula:455529fd-16d3-47f0-92c8-80000e36552b}} is twice that of {{formula:f9989a4f-e5e0-469e-b364-d447af5554b2}}. Then our WGAN {{formula:8bca2da1-dabd-4c8c-9860-452f61844da6}} models the joint probability distribution {{formula:d778207e-906d-4d1e-bd93-a63ea076db32}}. Now, if we are given a new sample from {{formula:0b46bad6-0dc6-4b97-b81f-2a2f3dd85213}}, i.e., of the 3 lower resolutions, we can use {{formula:15ee6e5d-08ce-4f88-8ec1-03e27ed0d65b}} and steps REF to REF of Algorithm REF to project it onto the joint data manifold of {{formula:4400f26a-d757-4d53-a27a-9d00af8988cd}} by finding a common {{formula:7ea14476-3a25-4ba8-a38a-16f7e1131c34}} pair. The highest-resolution image {{formula:b89f538c-1805-44ea-ad68-d9bcf3cd9a81}} is then obtained by a simple forward pass {{formula:277cc9b7-9076-49ee-a399-e795d2837220}}. The overall algorithm is therefore called Projecting from a Multi-Resolution SAR Prior (or MrSARP) and is summarized in Algorithm REF . Note that {{formula:5bd55b97-a128-42fb-aec3-f6be91fc74c7}} denotes the {{formula:3bfcab8a-39d5-44db-9b08-6c969585f47d}} resolution output from {{formula:b64f5f22-a9ef-41e4-82a3-35e37692c7cf}}, and {{formula:aff5d275-75dd-4561-9f5d-61f91b7b01d5}} is simply dropped in step REF .
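The projection step can be pictured as a standard GAN-inversion loop; the following sketch is our illustration under assumed interfaces (a generator G(z) returning the 4 resolutions coarse-to-fine, an MSE data term, and Adam over the latent code), not the authors' implementation:

```python
import torch

def project_to_prior(G, y_low, z_dim=128, steps=500, lr=1e-2):
    """Optimize a latent z so that the generator's 3 lower-resolution
    outputs match the observed magnitudes y_low = [x1, x2, x3]."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        outs = G(z)  # assumed to return [x1, x2, x3, x4]
        loss = sum(torch.nn.functional.mse_loss(o, y)
                   for o, y in zip(outs[:3], y_low))
        loss.backward()
        opt.step()
    # Highest-resolution image from a single forward pass at the fitted z.
    return G(z)[3].detach()
```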
Here, {{formula:2b4a340f-396b-4362-8616-e3b00a4b3a55}} is the observational uncertainty from Tables 7 and 8 of this paper and Table 1 of {{cite:73a99390004dbd5f42f2e01c6bdabc5c30fbc502}}, {{formula:3f07f837-7143-4346-ad9e-59a82b683412}} is the uncertainty associated with the model itself, and {{formula:05fd53b6-f97e-4c4d-8703-82520d2f4196}} is associated with the uncertainty in the distance modulus adopted here. {{cite:a2855387a7e70b548df491d4c46301c667beb16e}} estimated the uncertainty associated with the term {{formula:cc831d95-f20c-4978-b1f6-f221a60007d8}} by comparing the colors obtained from different stellar evolutionary tracks and spectral libraries. Following {{cite:0222f3ebf7004cab7eee79a0384adf8dff5673b9}}, {{cite:f935e831b024a1a02a05ff66a660c3e73379cbb6}}, {{cite:873f447addd9f88bfe173c261ea50d27a95a75b0}} and {{cite:d0aa087e131eded3e016d50a8c569fe5e35e9127}}, we adopt {{formula:2ab62190-7bb3-48d5-8b57-d3845b35d967}} mag. For {{formula:3dc0f0e0-91b0-4be0-9259-2d893fa418bf}}, we adopt 0.07 from {{cite:b30bfbf55701544d0b3eb7ee40903a31b3cc5d9d}}.
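Although the combined uncertainty is elided above, the three terms are presumably added in quadrature in the usual way (the subscript names here are ours, not the paper's):

```latex
\sigma_{\mathrm{tot}} = \sqrt{\sigma_{\mathrm{obs}}^{2}
                            + \sigma_{\mathrm{mod}}^{2}
                            + \sigma_{\mu}^{2}}
```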
Finally, our results and the basic ideas underlying our new scalings can be viewed in connection with in-situ measurements of solar-wind turbulence.
For instance, a (rapid) scale-dependent alignment between {{formula:6db63724-77ca-402e-857b-bde09c2482ed}} and {{formula:5c7eb5b9-e152-42f6-b6ef-8e2b358bbbb1}} fluctuations (or, anti-alignment between {{formula:e190478c-fc13-4321-9881-3fea5a3f4790}} and {{formula:29684c3c-52e9-47a1-a302-56f9a07f3537}}) has been reported to occur in the large-scale {{formula:85eeef89-3329-4e9c-b6f1-066b368b692a}} range of solar-wind turbulence by {{cite:430669f39d837e23b91f2fab9e42c3c1248de1be}}. In particular, it was shown that such an ensemble of aligning fluctuations constitutes the majority of the fluctuations' population, and that they are responsible for the resulting {{formula:35acb554-9206-4d26-b862-708fc718a397}} spectrum (viz., structure functions reveal a scale-independent behavior of {{formula:fce6c3a8-13ae-4c7f-8343-9e277b836ca1}} in that range); these fluctuations were interpreted as “non-turbulent” fluctuations belonging to quasi-non-interacting, counter-propagating AWs. Since a finite, however small, amount of nonlinear interaction is required for counter-propagating AWs to shear one another and induce dynamic alignment, we suggest that this may be the case for the aligning population observed by {{cite:430669f39d837e23b91f2fab9e42c3c1248de1be}}, thus potentially pertaining to an asymptotically weak ({{formula:edaed5a4-062a-4137-bb29-5cb43473730d}}) regime as discussed in our scalings (§REF , case “WII”).
Another intriguing piece of in-situ measurement is the one recently taken by Parker Solar Probe within the magnetically dominated corona {{cite:0eb8558b8a93ce0e4a56912a59842c8d5882af50}}. Among other features, the magnetic-field spectrum in that region exhibits a transition between a {{formula:cac0fc9d-14ec-4af9-96a3-73e91416d62b}} range and a steeper {{formula:c680c2f7-e904-4ee6-96cb-939b894507d1}} slope occurring at scales (frequencies) much larger (smaller) than the ion characteristic scales (frequencies), which may be a hint of a potential large-scale, tearing-mediated range. While further studies are definitely needed to investigate the fluctuations' properties across this transition (e.g., estimated strength of nonlinearities, spectral anisotropy, etc.) in order to understand what type of transition we are observing, our theory in the moderately weak regime (§REF , case “WI”) provides an alternative scenario to interpret the measurements by {{cite:0eb8558b8a93ce0e4a56912a59842c8d5882af50}}.
Following the semi-supervised learning setting in {{cite:55535cfee3a6a31f2fd720c3c1783e4575b60556}}, we fine-tune our model, pretrained for 400 epochs, with {{formula:4c7ad908-9183-4e03-a238-8e287703d805}} and {{formula:ca928d10-797a-4841-a0a0-d0a4c7db357c}} of the data split. The results are shown in Table REF . Our method performs better than all compared methods w/o multi-crop and is on par with SwAV using multi-crop.
For example, the DP-SGD analysis assumes that the intermediate computations used to train a model are published to the adversary, when in most practical settings only the final, fully trained model is revealed. This is done not because it is desirable, but because there are limited (known) ways to improve the analysis by assuming the adversary has access to only one model {{cite:0e957fb196d6e9c256ffa23a7a539a860814652a}}. As such, it is conceivable, and indeed likely, that the actual privacy in practical scenarios is greater than what can be shown theoretically (indeed, DP-SGD is known to be tight asymptotically {{cite:7e91ff41a68b3645cc7bd4dc347368e7a061caf8}}).
Partial least squares can be used in many well-known machine learning models, such as kernel regression ({{cite:78230565919286fc7ba8953eabd28c8f88d5417b}}, {{cite:0771fdee6bbe692f246f3ec05fc561de38e585ca}}), Gaussian processes ({{cite:88350ca5d7512b461415880816cbd22a1288efe2}}), and tree models. {{cite:809531acd7a3c6d52b60d390f68d621da8813871}} uses a PCA decomposition of the {{formula:36d7441c-0189-4968-99e3-5d2833f580e6}}-variable before applying a nonlinear regression method; this can also be addressed using our methodology. There are many directions for future research, for example, PLS for image processing and extending Theorem 1 to larger classes of deep learners.
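A minimal sketch of the cited two-stage idea, reducing the predictor variable with PCA and then fitting a nonlinear regressor; the component count and choice of regressor are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))                    # high-dimensional X-variable
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)  # nonlinear response

# PCA-decompose X, then apply a nonlinear regression method to the scores.
model = make_pipeline(PCA(n_components=10), GradientBoostingRegressor())
model.fit(X, y)
print(model.score(X, y))  # in-sample R^2
```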
As shown in Table REF , HumanDiffusion achieves satisfactory scores on the three indicators.
HumanDiffusion outperforms Pix2Pix {{cite:c8a149cf18e95e758c32dc72d383d11e8b13a165}}, SPADE {{cite:ccafb0ab9dcdc3b43548647b25dffc6daf3f6596}} and LatentDiffusion (LDM) {{cite:eca9f6243f12dfb44350eb849508ac4dcf34de42}}, and exceeds Text2Human with improvements on LPIPS and MSSSIM.
The lightweight design of our model limits the quality of generation to some extent, but it greatly simplifies the training procedure.
Our method is developed to model specific classes of complex molecules (e.g., classes of different monomers or polymers) and is expected to handle most practical scenarios with only a few dozen samples for training.
However, as mentioned above, monomer data itself is different from the molecules used in related works. Therefore, we also investigate how our method performs on large monomer datasets compared with existing methods. The data was also used in {{cite:74732ca879a446d6134ea0b491c16dea79eee88a}}, but no grammar-based systems were compared.
Since our approach is relatively complex but more data-efficient, we apply it to a {{formula:04cdf422-74b0-4632-a6fe-ebf5f03cadcd}} subset. Details are provided in Appendix REF .
Offline optimization and evaluation have been studied intensively in the past few years, as deploying a sub-optimal policy for real-time experiments can be costly and even risky {{cite:b4f41b9f5aa1c9ea6a248fd9126478e8ba2d0c97}}, {{cite:7bc2bf2ec893389b25fe9951c5c8f2c7ee164fd6}}, {{cite:7db84a06994235e50a6995cade2ea74c89d2736e}}, {{cite:146eff21f6b01723f865c3646b87a95d5668ab91}}. Nevertheless, it is arguably true that off-policy learning can barely represent real-world scenarios except for a few ideal settings. In particular, real-time policy deployments are inevitably subject to various interventions and constraints that cannot be accounted for in standard off-policy learning. We categorize these unexpected events as runtime uncertainty, since they are brought about by anomalous events of the online execution mechanism. Examples include:
Essentially, a GAN with a multi-branch discriminator can be considered a special variant of a GAN with multiple discriminators. Many studies have shown that multiple discriminators can enhance the performance of various GANs {{cite:452896d7bc56a7ff48c10494bcd0adfb8dd1bf96}}, {{cite:922803c2b3a1a9356349ab401c94d9e4fe09ad90}}. However, none of these studies have investigated the potential of reducing the discriminator parameters while improving the generation ability. In fact, it is not sensible to add too many discriminators to a GAN due to the parameter explosion problem. A parameter evaluation is given in Table REF . Suppose two adjacent layers in the original discriminator have {{formula:6c2525c6-5fd5-4ae5-b513-fb363514a8eb}} channels and {{formula:73857614-9854-48b2-978d-ba9af606b172}} feature maps; then there would be {{formula:d65e2862-b610-4f59-853e-29627213fea6}} connections between these two layers, and {{formula:bd1d1a51-4ce4-4b8b-a9c9-b53943503974}} connections if we use 2 discriminators. When we divide a discriminator into two branches, the channel number of the discriminator is also divided: each branch only has {{formula:a5de754f-cfeb-4517-a5ed-e36c4be904c0}} channels and {{formula:70ad3246-55d6-49d6-8b24-8fb332e67dd9}} feature maps, and 2 branches have {{formula:9ea01ebe-6a45-4c5e-94cc-58261f4af253}} connections. We design an experiment to evaluate the effectiveness of the CycleGAN with multiple discriminators and MBD. We compare 2BR and 4BR with 2 and 4 discriminators, respectively, as shown in Figure REF . We find that a CycleGAN with MBD has generation performance similar to that of a CycleGAN with multiple discriminators when the branch number is larger than 4; however, a branch has far fewer parameters than a complete discriminator. We also compare our model with state-of-the-art methods in Table REF to evaluate the parameter number and the running speed (per 1000 iterations). All models are implemented in the same environment (Intel Xeon E5-2620 v4, 128 GB, 1080 Ti, TensorFlow 1.8.0). Clearly, the model with the multi-branch structure has fewer parameters and trains faster. More evaluation results can be found in the supplementary file.
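To make the parameter argument concrete, here is a small back-of-the-envelope sketch, assuming the connection count between two adjacent layers scales as channels × feature maps and that branching halves both:

```python
def connections(channels, fmaps):
    # Assumed model: connections between two adjacent layers
    # scale with (channels * feature maps).
    return channels * fmaps

c, f = 256, 128
one_discriminator = connections(c, f)           # c*f
two_discriminators = 2 * one_discriminator      # 2*c*f
two_branches = 2 * connections(c // 2, f // 2)  # c*f/2

# Two branches cost a quarter of two full discriminators.
print(one_discriminator, two_discriminators, two_branches)
```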
Furthermore, the essential difference between the two schemes for generating PB can be readily understood by comparing the energy spectra of the systems. For the single four-level atom-cavity-EIT in Refs. {{cite:69f8c987c6fb5ce171104ac78ba5d96f9b7e6cb0}}, {{cite:d6430accb4b15c5128a4b7e42262f88ef9dc2f49}}, {{cite:6722a1cff22db7c2db3219de46e75ca51ebab85c}}, {{cite:cd8914aa934e1654abf5384371208a3b05d1b0d4}}, we find that the first dressed state in the energy spectrum is immune to changes in {{formula:18980b05-3dd7-451b-935a-619b939dcec6}}. Therefore, the vacuum-Rabi splittings {{formula:5c5e462d-d650-4a51-9635-c7300a0d4d8e}} and {{formula:90193688-82e7-4482-9efa-b3025c30219c}} for the three branches are all independent of {{formula:76ae3e3d-690d-4b5a-a978-bdfec314a329}}. From this perspective, the Stark-shift-enhanced vacuum-Rabi splitting in our proposal is equivalent to enlarging the effective single-atom-cavity coupling. From the experimental point of view, the realization of strong PB beyond the strong optical Stark shift in our scheme could obviously improve the accessibility and controllability of the experiment. Therefore, our proposal can be used to realize a high-quality single-photon source for a single atom-cavity-EIT mediated by a moderate optical Stark shift. We should note that the advantage of generating PB based on Kerr nonlinearity is the use of an ultracold ensemble coupled to the cavity {{cite:27ccf29f83ede00f378cf69777e7345fc34c9a2a}}, {{cite:16bcf62f2ea0f491da0f400be65239019f1a0b7d}}, where a giant Kerr nonlinearity can emerge at the atomic dark-state resonance of cavity-EIT.
The heat capacity per lattice site is defined as ({{cite:7ced4ec13dbf15133abc541816fd4e5bb820b7a0}}):
{{formula:94a23f75-1736-4fbe-a7fc-fd43bcd6a4f5}}
Appendix
The appendix includes the proofs of the lemmas, corollaries and theorems in the main body.
Proofs
Useful lemmas and results
The following result is Corollary 4.2.13 in {{cite:d128fddd4b4af8a9cf17df77536ffdc61a17ac16}}.
[Covering numbers of the Euclidean ball]
For any {{formula:1ab04304-c7e9-426c-a56c-915a55e8340c}} , the covering numbers of the {{formula:e2fab8f3-a665-47b5-aa96-cf5374a2ff3f}} -dimensional unit Euclidean ball {{formula:e4bc45c7-ed09-4174-9913-62e1e32f40e3}} satisfy
{{formula:ae06bf0e-d13a-4011-9d83-8270ee65ac16}}
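Since the display above is elided in this extraction, we note for the reader's convenience the standard statement of that corollary (as given in Vershynin's High-Dimensional Probability):

```latex
\left(\frac{1}{\varepsilon}\right)^{n}
\;\le\; \mathcal{N}\!\left(B_2^{n}, \varepsilon\right)
\;\le\; \left(\frac{2}{\varepsilon} + 1\right)^{n},
\qquad 0 < \varepsilon \le 1 .
```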
The proof of Theorem REF
By the definition of {{formula:43ce1ebb-0a13-4ec0-b164-be3b31804fe3}} , we have
{{formula:7bba79be-d41c-4cd8-a267-b694d63208cd}}
which yields
{{formula:04fbb23b-76c2-4913-818b-6af79d32523d}}
Under the two high-probability events {{formula:9b91b381-6c63-48ff-95ec-d577e97a7b95}} in Lemma REF and {{formula:7851552c-d64a-404f-ab47-933d9acced4d}} in Lemma REF below, we have, by inequality (REF ),
{{formula:886723fc-681b-492f-ae72-a91d125f001d}}
with probability at least {{formula:82f8bd1f-8390-409d-a270-d1e9391604b1}} .
For the last two terms in (REF ), we put {{formula:2734c57e-39fd-4a34-a5d4-ebc769cec6ef}}, i.e., the variance term equals the bias term. Then we get
{{formula:f14bdee0-47e7-4c69-bc99-c3dcd0d23332}} .
From Lemma REF and (C.0), we have
{{formula:ea06c0b5-d8b7-4987-83e4-2da2cf34ce97}}
By (C.1), {{formula:45fb04d1-0d69-4e46-84cd-0e12b49f3d70}} is an increasing function of {{formula:12d139cd-f9c4-4cdb-a03f-25977b49b01e}}. We get
{{formula:d0eb9879-404e-4c0d-b447-b3db8abf7ecf}}
From this upper bound and the lower bound of {{formula:20b6e213-b2a4-4e35-9a10-a5e60447843b}}, (REF ) implies
{{formula:88118444-0a9a-4b80-addf-a6bf74828de2}}
with probability at least {{formula:5d9b6c4e-f026-4495-bc90-670b94b9378a}}. Letting {{formula:99046251-8f8d-4b12-ba5e-c1aca4bd5518}}, we obtain
{{formula:f65514ce-d9d0-43cf-a242-0330947f3063}}
[Generalization/concentration error bound]
For general loss function {{formula:f00797c2-2bac-4fa5-9b6d-0df114ef1efe}} , under (C.4), we have
{{formula:a52efbb5-a563-43ae-91d4-343c54b6f5cd}}
where {{formula:300f5d6f-9d75-4117-b3d8-4dacd9269d0e}} .
The proof is based on bounding the exponential moment of {{formula:c3a18e1e-bd5d-4876-ad98-eaa1375947e0}} via Markov's inequality. Applying the upper truncated function in (REF ), {{formula:764b829b-d7dd-4934-a1cb-b17e56689ff7}}, one has
{{formula:694e69da-8096-41e9-88ce-f68dad4b098e}}
Via Markov's inequality with the exponential transform, it gives
{{formula:4df4bee2-4637-47ee-ab48-29d23a2b18f8}}
[Generalization error bound]
For any {{formula:7b9c1337-06a3-4573-bcb6-01c2f4785b8f}} , under (C.1)-(C.4), one has
{{formula:12eddf8e-71e5-4fa2-9f36-ac63eec5e8f4}}
where {{formula:127272a6-2c7b-4d1a-a4bb-cdd6b1e82bce}} .
Let {{formula:8358cc6e-0c4d-4d80-8406-fc2666b59288}} be an {{formula:4e798ad0-6bc6-4dd7-a966-72f301e5d316}}-net of {{formula:e50ce454-c143-461d-a825-d30541054f20}} and denote its covering number by {{formula:060a270b-3f63-402f-96e0-bbf1de90ac04}}. For each {{formula:2db4f0b9-6026-487a-90c5-7885278efdd4}}, the definition of an {{formula:a63f1c51-5c6b-4867-b3bd-0761b009db23}}-net implies that there exists a {{formula:e975e13f-3e30-4442-a2cc-77a59790a792}} satisfying {{formula:111a6f91-5c16-4ed9-8e19-63403a6f637d}}; by the locally Lipschitz condition (C.2), we obtain
{{formula:dd45d84c-167b-4615-a276-cf9bd6d6b58b}}
Since {{formula:686edce0-5018-49b9-aa69-18406cf9bc68}} is non-decreasing, the last inequality gives
{{formula:a8d53dbc-ff31-475a-bd28-1b7a43f3a660}}
Below, we continue to derive the lower bound in (REF ) by applying covering-number techniques. From the lower bound in (REF ), we can estimate the exponential moment bound:
{{formula:530a3f05-ad7c-4960-a4ec-38b06bf95da1}}
where the last inequality stems from {{formula:590461ba-29d9-4fb8-9804-59e12d7d634d}} . By the last exponential moment bound, Markov's inequality shows that, for a fixed {{formula:110147f9-c63c-4ef9-9e0a-6019a6edc327}} ,
{{formula:71b1c4ca-66ff-4513-b093-9ba911f7952e}}
The set {{formula:30156b04-e14c-4ad1-9b6d-c79d920afa9f}} has {{formula:aaba41b3-1523-4cc1-9ebe-676791aab1f3}} elements. From the single bound (REF ) and putting {{formula:2b5799b6-47e6-4e04-8f74-20373316ca4c}} , we have for all {{formula:173b2af2-840d-461b-ad18-2e004674655e}}
{{formula:885cb8f3-7b4f-4c19-8933-3117087dbd55}}
Then the complementary event in (REF ) holds with probability at least {{formula:2c7db0aa-0f0d-49b1-b6b2-93610413d1db}}. Inequalities (REF ) and (REF ) give the following lower bound for all {{formula:b3d61b14-640a-4ab6-9803-8be4ab2bfbd6}}:
{{formula:39b24da7-d86a-45ca-8e0d-d18c8af5739c}}
with probability at least {{formula:f3ef0596-2789-4396-90d5-88bf5448384f}} .
It remains to find the lower bound for {{formula:b37a31a4-3572-4c1b-8562-b5f2a6b17c33}} in (REF ) via the error bound of {{formula:7d2a044b-7591-479c-baae-4f65c4e847c8}}. To this end, (C.2) implies
{{formula:649957d0-c23a-4e16-b800-953a3afa36a3}}
which gives {{formula:3a775b4c-488f-4f76-a662-3dfcfeb7c79c}} . Thus, (REF ) has a further lower bound:
{{formula:f5648aa8-69e7-405a-ad3f-ede87ad98143}}
with probability at least {{formula:f179d8a6-739b-4887-9b3b-70da707c74dd}} . Then, we conclude Lemma REF .
The proof of Theorem REF
Letting {{formula:70b4737a-8377-4f88-941e-341c036ecc79}} in (REF ), we get, with probability at least {{formula:85625f21-d1c8-417c-8620-4a98377784a4}},
{{formula:01f98b59-9859-4886-b820-e817b1c38760}}
The {{formula:776f602f-03a5-41d9-9dc0-e49884fa8c3b}} satisfies the weak triangle inequality and the homogeneity inequality
{{formula:c0d8666b-2a26-4937-99bc-960eca738077}}
by {{formula:534e3955-f3bd-4807-a58a-eb464562ebd0}} for {{formula:4ec84d09-e52a-4345-9c92-9ea768ba004d}} . Thus, we have {{formula:b8ec4e0f-d7ed-4458-9914-33bb3476b318}} and {{formula:ebf250f7-b7c5-404b-b8ba-3b78cda7f560}} in (C.1).
Using the variance-bias tradeoff, i.e., setting the variance term equal to the bias term, we put {{formula:9e53b94e-e082-4191-ac8d-d7e1aad67866}}. Note that {{formula:a4126574-8711-4800-9997-239de5728ffe}} for {{formula:416f4075-aa93-4c26-a670-7dca7661c997}}. So we have
{{formula:6fd87e9b-d693-446c-b8e3-4ce2a2144c03}} .
Observe that {{formula:5bf4aea0-48f9-4b07-8f4c-5cdd86461782}} for {{formula:d295bd9e-dabd-4de4-b770-16d3c0bd07c5}} . Then
{{formula:c4348fe7-2215-4570-9aa5-069a9b358b3e}}
where the last inequality is by (REF ).
Thus, {{formula:0325f244-475f-40f1-8d2d-992ce2bf82d5}} and {{formula:701b07e8-f4bd-4a69-b018-91985c85320c}} show
{{formula:cf356a1a-6977-4505-b6bc-4cdf8039ecfe}}
where the last inequality is by (REF ). Then we have
{{formula:dac0ca32-6d9d-477c-ac63-dea99125cab6}}
with probability at least {{formula:6dfca656-262f-46e8-bb88-50fe22fc6257}} . We get the desired result by letting {{formula:e7aa9393-aada-4e53-a889-e517ebcc02b5}} .
More results, remarks and proofs for Section 3
Quantile regression with heterogeneous regression noises
Similar to Corollary REF with the second-moment assumption, assume the sample {{formula:62e114c3-0922-4100-95a8-9dbea87b012e}} satisfies that {{formula:f1647c7f-1668-4344-ba9b-3472d569fcc9}} are i.i.d. while {{formula:65c8dc17-b4bc-4506-a315-26fdea2b1f02}} are non-identically distributed, obeying (REF ) with heterogeneous regression noises {{cite:df162b51c71ffe945a32385d16ccfce3664e81a1}}. The sample-dependent parameter {{formula:223b207e-92a5-4e40-be05-8b04d6c9874f}} is defined as the minimizer
{{formula:04f0cbe7-fc52-4a03-82f4-eaf1d5dec0ca}} and {{formula:549c4b05-50d4-413f-a1be-251df11d67a2}} for {{formula:463799b5-fcec-40b6-b8fa-4ba9fddb70d6}} .
To guarantee a high-probability {{formula:3aacb8ce-2003-4d25-a55a-354ea1c5fd63}}-estimation error, it requires the minimal eigenvalue condition on the covariance matrix of the design and the positivity of the density functions of the heterogeneous regression noises {{formula:e796d471-407b-4fe0-bfc3-06f31d78ad58}}.
Let {{formula:a1171b83-9305-44b3-a2de-2833c7b9c343}} and {{formula:f06f401d-d0a8-4639-9b38-c56c6cb8ef9d}}. With probability at least {{formula:430b5fce-94e6-4fc1-9c61-042986a56250}}, one has
{{formula:bdc20961-15a3-4332-bada-245fc938a0c3}}
Further, consider: (Qa) the {{formula:5d5e0328-6a96-4d42-9f97-5eae0a420e27}} are continuous random variables and their conditional density functions satisfy
{{formula:fe15ebd4-3c15-4858-8af3-6086cee0969d}}
where {{formula:340c56d4-c5b2-4b30-839a-cb32c21c0a67}} is a positive constant; (Qb) Assume that there exists a positive constant {{formula:e442f2cd-1847-4f88-81a1-4149ba620281}} such that
{{formula:760e7587-5a58-49cb-8efe-697c19f7728d}}. Then we have, with probability at least {{formula:44c8ad1c-073f-41b8-be9f-3a30a8d16691}},
{{formula:74731549-38ab-4820-901d-53a57c60b0b9}}
Remarks and proofs for GLMs
The loss (REF ) is originally derived from the negative log-likelihood functions of the exponential family. The exponential family contains many sub-exponential and sub-Gaussian distributions, such as the binomial, Poisson, negative binomial, normal, and Gamma distributions {{cite:03e68c913545d7b4d948844d9d94009d20053323}}. Let {{formula:bd40dcc2-6474-4040-9ace-1c1108e66412}} be some dominating measure and {{formula:2d18270e-4fc1-4343-8aae-da378315c75a}} be a function. Consider a {{formula:6d04a848-e787-4fc0-8a6f-ffc48746b730}} that follows the distribution of the natural exponential family {{formula:728a64d8-1948-49df-be9f-5033b713ec30}} indexed by the parameter {{formula:b0353889-9907-406c-805b-a8a529c9a2b1}}:
{{formula:4eb966bd-7a3e-47c4-8a08-0acaa97a0a6d}}
where the function {{formula:19b46a3f-1323-432e-8a70-3dd6716b9d72}} is free of {{formula:26a9a7f6-abee-491b-be5e-60bdebb90605}}.
For GLMs, the link function {{formula:e3ceb050-fb72-44a4-9137-0533a4a77ea1}} is canonical, i.e. {{formula:9b41a7bf-9780-450f-9386-cfa1b743c086}} , whence we can choose {{formula:01f36e86-6765-49a7-96f6-6da05c1cb6a8}} in (G1).
Note that in this case {{formula:283e0115-3070-479a-b981-3d4f0af204fe}}; a choice of {{formula:7254c5d1-e8a5-4199-a40f-4e8c80747fbe}} in condition (G2) is derived from {{formula:12833e31-79d2-4903-ac1a-17650ccb5a19}}:
{{formula:ef0710c8-7b19-4b90-b1f5-307dcd2f286f}}
For a GLM with a non-canonical link function {{formula:e14dd115-5c82-47c5-87c6-eb94952b4ed7}}, we first choose {{formula:a68ff7dc-c68d-47f2-80bf-3b26e3a98dff}} in condition (G1) by {{formula:465edb75-44b8-49dc-bded-c4dca546ead1}}, due to {{formula:7525d0c8-c270-45dd-b439-51b494ee9f2f}}. Under (C.0) and (G.1), this implies (G.2) with {{formula:2db6b50e-2e55-4552-b106-d9702b866608}} and {{formula:8afd3e9b-e3d1-4196-8511-6ab23467b716}}, by the following inequality:
{{formula:a0251a03-8e2f-4484-85e2-4a9923c02aa6}}
Suppose the input {{formula:ab632fb0-0654-4a34-8989-e1b2ee6a50dc}} is drawn i.i.d. from {{formula:d10cb8b5-7a24-4a46-a160-2499599022b3}} and {{formula:9844d008-99de-45a5-8db0-7798d83e6140}} is bounded (see {{cite:8791ace0f2cfaef4b4c14d2a3b26c133a1a2a64a}}). Under (C.0), {{formula:357c305a-172f-48c0-8840-474f00fa5cdc}} and {{formula:b1d1313b-7ede-490d-8a7f-8fcc86651faa}} are also bounded; then (G.4) holds under the finite second moments of the output:
{{formula:31c25344-fad4-415b-a31a-3c53d96ba434}}
for {{formula:a6c4a23b-0f3d-4e6e-b5e7-92c9e3bbb3aa}} , where {{formula:09a71e2f-ef1c-428a-944c-9957b5527bde}} and {{formula:ca875d75-9174-46fa-8784-8a21bc9aa696}} are some positive constants.
From (REF ), one can formally derive the quasi-GLM loss in (REF ). Indeed, given {{formula:63f2c4e8-d397-4b94-aae2-4d1cdd8eb487}}, the conditional likelihood function of {{formula:b2daf612-d5ec-46a4-9b5a-216d9994dbd4}} is the product of {{formula:601fc27f-ab38-4117-be45-fd1159cce690}} terms in (REF ) with {{formula:9d69d71b-c946-4a10-b5da-f13f4531c4c0}}, and the average negative log-likelihood function is
{{formula:df4d17d0-df01-407e-b888-8e9a71d5201f}} .
Fixing a {{formula:2e971f39-b9e1-42b0-b947-7f94de867f03}}, we compute the gradient of the loss function in (REF ):
{{formula:4fc62650-1774-4146-ab49-29a3fbd7616b}}
From (REF ), {{formula:692f60eb-5287-4e65-b780-80ba23d56a00}} in Theorem REF is given by
{{formula:9e23d05c-fd43-4368-a3ea-02a043f96832}}
which implies the excess risk bound in Corollary REF .
Robust logistic regression. We have {{formula:d68d8637-4e49-4d2f-b783-5bbdeb4d8058}} and {{formula:f3361715-ac9e-4336-a9d6-92253b3ae018}}. Note that {{formula:e758cbbf-094c-46ea-bd1d-8fecdf2dabbd}} in Theorem REF is given by
{{formula:1275d0f9-9352-42cf-8144-48dbce5ac2c6}}
from (REF ) and (REF ), we have
{{formula:cf3e1277-be98-4339-a12a-85a9ee8bba3d}} . Thus Theorem REF gives
{{formula:376e2efd-40ae-4b1a-bd17-7f1b41e3dd39}}
Robust negative binomial regression. The connection between {{formula:c453f0da-b42e-4187-b5f0-fdc6c43f128b}} and {{formula:f05eb275-0135-4d06-bb07-91b767522104}} for NBR is
{{formula:f851870c-4b66-492e-a10d-c21089f426cd}} and {{formula:b6c42314-68f1-40cd-a172-54c79b9e5025}} .
From (REF ) and (REF ), we have via Theorem REF
{{formula:b808ed47-4eb8-4ca1-a4fb-ea21d3f1d250}}
Hence, we have
{{formula:9bd38bac-da9c-4898-a3cc-6d7f6108239d}}
and thus Theorem REF gives
{{formula:c570a254-063d-4338-aa7b-a9db5ea15d83}}
The proof of Theorem REF and related examples
In (REF ), let {{formula:08dccb9b-23ea-4d88-b55a-89e899c17c3e}} with {{formula:f33b9fc1-cad4-4378-855a-9434d712005b}}. One can check the Lipschitz property of DNNs {{formula:127878f3-ea96-4e4c-b747-f629c32f73cc}} for a given function {{formula:06451a83-3bd9-4904-b965-590b52a9dec0}} via the following lemma.
[Lipschitz property of DNNs, Proposition 6 in {{cite:2253dfb7706527dc9ef77bdd5bbda494c9b7c102}}] Assume that the activation functions {{formula:05b94bfd-1f86-4361-8b29-2075b333ef49}} are {{formula:b3e7fa05-f85b-44aa-839c-71760c4720b4}} -Lipschitz with respect to the Euclidean norms on their input and output spaces for {{formula:c2fc00bb-28ba-42d6-b69f-6a31ba0817ef}} . Then for every {{formula:36ef19f0-6c4d-4bc6-9d86-6c6fe615edd1}} and parameters {{formula:245b272f-06c7-4cba-9d86-5c99670a919e}} in (REF ), it holds that
{{formula:12a15e17-c670-411f-b2e9-b4950cd6ab00}}
with {{formula:062af2c1-08ab-42e0-8e3a-c3ddf01f40e6}}
Motivated by Lemma REF with the Lipschitz function {{formula:a8f19346-e58f-4df8-840a-645a3aec1625}}, we assume that the parameter set is an {{formula:6ede07f7-79e8-4fd0-ab53-41a2abffce45}}-norm ball with radius {{formula:230bef1e-b3b1-440b-a8f4-02be5a779c9e}} and maximum spectral norm bounded by {{formula:411cea4d-aab5-4a27-a4aa-7f43548d2855}}:
{{formula:bf3cd613-a61b-4bc9-824d-35a936fbb6d2}}
Suppose that {{formula:7b5cd8a7-9ec4-4090-88e5-a7fa9b64471c}} satisfies a Lipschitz condition with a function {{formula:c2bf243e-0b2a-4eb9-9e59-744b783eafec}}:
{{formula:4e0a157b-8e2e-4418-b801-b188bb38eb6b}} .
From Lemma REF , the loss function has the Lipschitz property
{{formula:63efe02a-958a-4781-b5c3-1a37a510d151}} ,
which shows that
{{formula:fe374c44-cbc6-46e1-b6cd-754910440b7d}}
We conclude Theorem REF by using Theorem REF .
For robust DNN LAD regression, we have {{formula:c91553ed-a6d8-4402-8029-c9c27bf3db1d}} and thus {{formula:6bdf6a2e-b556-479a-89b0-abc2bc2ab56b}} .
For robust DNN logistic regression, it gives {{formula:75e9f66e-e275-4efa-b842-011c2e7235f4}} and {{formula:e2bdd252-8ee8-44c9-b8bb-6b3afbc2f30e}} .
For robust DNN NBR, we get {{formula:3cf10660-4a3b-45e0-a693-3c65475be4a5}} and {{formula:4d18e4ae-b1ac-4b37-aba9-622d577a8fa6}} .
The proof of Corollary REF
The proof of Corollary REF is based on the following Lemma REF . The proof of Lemma REF is grounded on the two high-probability events {{formula:40bf300d-513d-4348-ad32-9957b33bb5a2}} in Lemma REF and {{formula:8e29102e-73d6-4027-adbf-32c8a797eb6c}} in Lemma REF . The proof of Lemma REF is the same as that of Lemma REF , and we omit these proofs since the arguments are the same as for Theorem REF .
[Basic excess risk bound]
Let {{formula:a8189b46-11e7-4e8c-b9c0-abc353ee21ea}} and {{formula:f4a07d03-2866-4080-9907-334fe45c7cc6}} . Define {{formula:4b77dd0c-3bae-4b1c-8ef4-f73f71f2e708}} . With probability at least {{formula:46fcd897-f617-4eca-9bc0-ce014483fa25}} , under (C.0), (Q.1) and (Q.2), we have
{{formula:1d349c07-7c89-4af5-8b3d-edfc9c0bae2a}}
[Generalization/concentration error bound]
Under (Q.3), we have
{{formula:63ccac90-9188-447c-b33f-d6a56d3b8530}}
Next, the following lemma is crucial to complete the proof of Corollary REF .
[Generalization error bound]
For any {{formula:1b48e1e0-46b3-4501-9abf-c6abb6e6e6ba}}, under (C.0), (Q.1) and (Q.2), the following event (denoted as {{formula:eb3c425a-eb0d-4dee-913b-756625055660}}) holds with probability exceeding {{formula:2937bcf5-8b36-416f-98a2-d8691b7d370c}}:
{{formula:b2590f7e-e63e-41a3-9705-62cbced0b872}}
Since {{formula:3211f735-4a34-462b-a034-c1a9e542e1e3}} is an {{formula:d1a31212-9457-47af-ad6a-330c3024283c}}-net of {{formula:bedaf635-a063-43cd-9705-63c8737a478a}} with minimal cardinality {{formula:3dad9411-f41b-4a34-9fe2-92bd6db3bacc}} and {{formula:ceb76f17-1283-4c5d-b2a9-75a7b6d373eb}}, the definition of an {{formula:f9ddb407-51dc-43f4-b114-5ec494e2718f}}-net implies that there must be a {{formula:9b4811dc-639f-4367-a693-d4d2a6701c9e}} satisfying {{formula:bce4c5f5-21d3-4919-939e-a275cedbeeb4}}. Hence, Cauchy's inequality gives
{{formula:962fc498-3bb6-4572-a34e-a65606ed5ecf}}
To prove Lemma REF , we use Knight's identity {{cite:6867f51e6a5001210d319d65eef9df2956602c74}},
{{formula:8e1cab10-acf0-4a96-9358-b6e3b4190fa4}}
which provides a Taylor-like expansion for non-smooth functions. Then, we have
{{formula:c8d621bb-947a-473c-a83b-0d9670bca29f}}
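For reference, since the displayed identities above are elided in this extraction, the standard form of Knight's identity for the check loss \(\rho_\tau(u) = u(\tau - \mathbf{1}\{u<0\})\) reads:

```latex
\rho_\tau(u - v) - \rho_\tau(u)
= -v\left(\tau - \mathbf{1}\{u < 0\}\right)
+ \int_{0}^{v} \left(\mathbf{1}\{u \le s\} - \mathbf{1}\{u \le 0\}\right) ds .
```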
Below, let {{formula:12f9ea1b-66a6-4f4b-9332-a8a4fa203da2}}. The monotonicity of {{formula:2fb8f8bc-3fd7-45c9-a1c5-c115989a6fa8}} and (REF ) show that
{{formula:c892bbf8-eb28-497d-bfdf-79bc4fba9448}}
Next, we need to obtain the lower bound of the last term in (REF ), similarly to the proof of Lemma REF , but now using covering-number techniques. By the lower bound in (REF ),
{{formula:4d54f4bc-2c34-4228-85bd-0f8f757c4cff}}
where the second last inequality stems from {{formula:dda4dc4a-7987-483d-bf18-ebdb6f405514}} and {{formula:3f730d34-f507-4dc2-b590-ee4b97f1e8b0}} .
By Markov's inequality and the moment bound (REF ), we have for a fixed {{formula:b71855bb-f9a1-48f0-8d83-b0a150b7736e}} ,
{{formula:f2f857e3-d80c-4c54-8aee-f5371966eb2f}}
Note that the set {{formula:14a3c517-9f6b-44d5-960c-d704bd3fd622}} has {{formula:5f36a2d7-8697-4efb-babc-ebb974846c67}} elements. For all {{formula:738262b9-d490-4e48-83c0-81f529c9657a}}, we consider the following union bound based on the single bound (REF ), {{formula:509b91cd-9945-4e9a-ab64-21a4ab9df19d}}:
{{formula:ac531040-1584-4c3a-9811-81b74647a16d}}
Continuing to bound (REF ), {{formula:768bcac0-1569-4925-b6d1-fec053e09533}} , we have with probability at least {{formula:8e96d3a0-cbe2-44b9-b3c5-40f08e3c89dd}} via (REF )
{{formula:577fb2aa-5aac-4457-99a4-b195585716be}}
Let {{formula:65ff495e-ec91-4eee-bde9-5711ca5a8e24}} be the average moment for the independent random variables {{formula:ab29e66d-2f4a-4ae6-8801-b19810d5e4ad}} in some space {{formula:b0b34e9c-d026-459a-9bcf-752d40f3384a}} . Recall that {{formula:22e60516-d83a-4dd7-8d20-20c44768581b}} are independent with densities {{formula:07527db1-6fc2-4578-8410-089cef0d62be}} , where {{formula:9d8e62af-fd38-4b84-bb29-9ce92b0a43a8}} is the conditional density function for {{formula:84eda28b-8075-4a1d-bf53-4605aa51366d}} given {{formula:02d92c19-df48-496e-bea6-78c52364b60b}} . The quantile function gives the relation
{{formula:ccbd1589-a851-4010-9b38-f0af1af5de22}}. To get the lower bound for {{formula:cd64cd79-bbd1-42e4-b80a-b6297f51ed33}}, we use Knight's identity (REF ):
{{formula:ad84039f-df12-48a9-a7e4-c792299595b9}}
i.e. {{formula:63d596d4-dfc7-4415-939d-86049d07b0ee}} , which implies (REF ) has a lower bound:
{{formula:740b0114-f310-4b68-957f-baa29f06170d}}
with probability at least {{formula:a955d4cc-70b9-4b3c-b65c-080825d729cc}} . Then, Lemma REF is proved.
Finally, the term {{formula:9470142e-c184-4225-96b2-0ab8ad7dcbbc}} can be treated as the variance factor. Put {{formula:4afc77f9-01bd-45f0-a6de-4077dfd147f3}} . It gives
{{formula:115c8021-2d40-4685-b948-df9c72972c77}} and
{{formula:87a033e1-cbac-4cb0-b76b-43c1f29ddd98}}
We finish the proof of Corollary REF by checking the covering-number bound in Lemma REF .
To evaluate the covering number of the {{formula:94cb0b7f-5fc6-4e58-ac77-921429428bcc}}-net in (REF ), we need the hypothesis (C.0). The covering number {{formula:702eacbe-439c-41a8-9393-5f6f4defe4d2}} can be estimated by the covering number of {{formula:5dce2b45-95e9-414b-95c6-a3c1e911bc31}}. Lemma REF shows that for any {{formula:a6a65e36-fcdb-410a-b3f5-7c30b6237c0e}}, {{formula:138978f7-8636-4bbb-8df3-d113768d4d6b}}. Noting that {{formula:84c79fb0-6de4-416f-97fb-e5accd29adf8}}, it gives
{{formula:ee5f73c0-dbd4-4cc6-b8c0-3ad3512533bf}}
With (REF ) and {{formula:d88ca562-53be-4535-a3a4-2516b2b56075}} , we have
{{formula:ee2d413e-d301-42ea-818b-f49fba75c654}} which shows
{{formula:db74484e-2256-45aa-99ed-599a7485bbf1}} .
The excess risk bound in Corollary REF is then obtained:
{{formula:d19633c3-f08a-498a-bda6-a36dfbe1acc6}}
with probability at least {{formula:5d62a61e-9352-4e4e-8b29-0a1b5d3a085a}} .
Finally, we show the {{formula:a016dd70-c0a2-4b63-9d79-dc0cb8783475}}-error bound. For any {{formula:7c12348f-c9d4-4c4a-b889-4f949b2c46ce}}, Knight's identity implies
{{formula:ef8b8387-154f-4b33-bcc4-560b9eab439b}}
where the second last inequality is obtained by (Qa) and the last inequality is from (Qb).
Putting {{formula:668b2e74-0843-4661-bc23-793f226a9cf5}} in the last inequality, we get the {{formula:488d5202-c894-492c-9a81-eaf7965e5d98}}-error bound in Corollary REF .
Simulation results of negative binomial regression models
We use the same noises for the NBR models as in the simulations of logistic regression. For the network settings in DNN NBR, we adopt a ReLU-activated 2-layer DNN model with width {{formula:4485499d-56e0-47c3-bf0a-ced284d83cbc}}, and we set the dispersion parameter {{formula:96961970-6ffa-4be1-bb77-552068a9cddb}} in DNN NBR.
In Table REF and Table REF , we report the average {{formula:f0bd6c7c-fc93-4e14-a704-3fbc3fcdcb59}}-estimation errors for the predicted coefficients of each NBR model over 100 replications. Table REF presents the mean absolute errors (MAEs) of the predictors of the response {{formula:4c6f6783-3d85-4513-b9df-bb7db768f1c8}} over 100 replications, defined as
{{formula:c052b564-b12a-44aa-9bfb-80ec022c7e70}}
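A minimal sketch of how such a replication-averaged MAE might be computed (the MAE display above is elided; the data-generating calls and shapes below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rep, n = 100, 200
mae = np.empty(n_rep)
for r in range(n_rep):
    y = rng.poisson(lam=3.0, size=n).astype(float)  # stand-in NBR response
    y_hat = y + rng.normal(scale=0.5, size=n)       # stand-in predictions
    mae[r] = np.mean(np.abs(y - y_hat))
print(mae.mean())  # average MAE over 100 replications
```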
{{table:f3193e34-fb88-47d1-98b8-660a909021ec}} {{table:8bfb1ca7-5aa5-4b9b-bf16-b02b199264b5}} {{table:9c3922c3-abb8-41c4-87dc-9eaa0f12710a}}
Many of the major advances in artificial intelligence in the last decade have embraced the use of data that does not follow hard rules {{cite:4659dbad9ec0410b80d6b4c92bc9490934fcb094}}, {{cite:0fccfeb1c9480d6dfa8b2cc0be579b60677f9fb5}}, {{cite:71ae33e3d3b4afce454d44d8c4bebe5d10af2d68}}, {{cite:a2983424d35e5f69804af532732b317b5188bc7e}}. Systems such as deep CNNs for ImageNet classification, AlphaFold, and GPT-3 have all demonstrated remarkable performance over hard rule-based systems in handling data containing a high degree of uncertainty {{cite:0fccfeb1c9480d6dfa8b2cc0be579b60677f9fb5}}, {{cite:a2983424d35e5f69804af532732b317b5188bc7e}}, {{cite:71ae33e3d3b4afce454d44d8c4bebe5d10af2d68}}. However, many of our existing ontological knowledge base systems are designed only to hold facts about our world, with no allowance for uncertainty {{cite:f6c238af9cb1c77d95355977a09b058504f2bad9}}, {{cite:71cc379ec616cd15ea7bb10dbd58dd1a553c3a9e}}, {{cite:f98dfc7c581b00a7a2e6094b9b7105de76ef9e3e}}. Tyche aims to facilitate probabilistic reasoning about uncertain information through the use of aleatoric description logic, and through the creation of ontological knowledge bases with probabilistic beliefs (i.e., belief models).
{{formula:c3dd87cc-0d40-4d92-9461-d14f700a91ca}} USLAM {{cite:707104ef63f77819bdd1f834d008dca8086209f3}} is an indirect monocular method that combines events, frames and IMU measurements. Its front-end converts events into frames by motion compensation, using the IMU's gyroscope and the median scene depth. FAST corners {{cite:154df11f53d169fb5a773c279670c33ea8f14719}} are then extracted and tracked separately on the event frames and the grayscale frames, and are passed to a state-of-the-art geometric feature-based back-end {{cite:e98e08eae07851056403765fd46723dfd20d4a60}}. USLAM is the only method we consider that uses an IMU. The IMU is tightly integrated into the front-end for event-frame creation, so removing it is not possible without breaking the (robustness of the) method.
Third, when combined with parameter averaging, Federated Distillation methods achieve better performance than purely parameter-averaging-based techniques. The authors of both {{cite:3c64260fb344ad4bc9ae87194f9082dce784d736}} and {{cite:2d02a0d67e5cbd35e10b067e3ad66ae587d36f7d}} propose FL protocols that are based on classical FedAVG and perform ensemble distillation after averaging the received client updates at the server to improve performance. FedBE, proposed by {{cite:2d02a0d67e5cbd35e10b067e3ad66ae587d36f7d}}, additionally combines client predictions by means of a Bayesian model ensemble to further improve the robustness of the aggregation. Our work primarily focuses on this latter aspect. Building upon the work of {{cite:3c64260fb344ad4bc9ae87194f9082dce784d736}}, we additionally leverage the auxiliary distillation data for unsupervised pre-training and weight the client predictions in the distillation step according to their certainty scores, to better cope with settings where the clients' data-generating distributions are statistically heterogeneous.
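As a rough sketch of such a certainty-weighted distillation step, here is our illustration; in particular, using the max softmax probability as the certainty score is an assumption, not a detail confirmed by the text:

```python
import numpy as np

def weighted_soft_labels(client_probs):
    """client_probs: (n_clients, n_samples, n_classes) softmax outputs
    on the auxiliary distillation data. Weight each client's prediction
    by its per-sample certainty (max class probability)."""
    certainty = client_probs.max(axis=-1, keepdims=True)  # (C, N, 1)
    weighted = (certainty * client_probs).sum(axis=0)     # (N, K)
    return weighted / certainty.sum(axis=0)               # normalized targets

probs = np.random.default_rng(0).dirichlet(np.ones(10), size=(5, 32))
targets = weighted_soft_labels(probs)  # soft labels for server distillation
```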
As shown in Fig. REF , the two-area (10-bus) system consists of two synchronous generators in each of the two areas, with a weak tie line connecting them {{cite:30ec7c727ba9d3de1c310f1669a900f04f62688f}}. Here {{formula:a7877f1f-7218-4c53-af36-d22f4d5dbca6}} is considered the slack bus. The system is simulated in a MATLAB Simulink environment. Synthetic PMU measurements are recorded for each bus with a sampling rate of 100 samples/sec.
{{figure:ad13a160-7dfb-47c3-9c1f-6719c071b121}}
The modest improvements facilitated by the weighting scheme proved to be surprisingly consistent across different estimations of word probability derived from extremely dissimilar corpora. Optimal performances using each corpus varied within 1% (cf. also Figure REF ). Contrary to what was hoped for, there was no marked improvement in overall performance from domain-specific word probability distributions; but neither did performance deteriorate, which might just as well have been expected due to possible exaggerations in the rather small word-frequency corpora. Instead, these merely required applying a greater weighting parameter {{formula:d83c6269-1ab5-4d96-b3ee-50a656ef9b39}}, scaling down the weighting scheme's influence, while causing some instabilities at {{formula:adef0675-39ab-4edc-b24c-b6c56f8fdb3b}} (cf. Figures REF b and REF c). This effect is exacerbated for term embeddings that heavily depend upon probable domain-specific words (as in security patch or safety integrity). For the enwiki probabilities, however, performance peaked for weighting parameters where {{formula:0f8ba6e6-41a5-459b-8611-4aea2080a6fe}}, reproducing the findings of {{cite:a7bcdb56f24d90ed09862b3882b7440ad07055b6}} (cf. Figure REF a).
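The scheme discussed here appears to be of the smooth-inverse-frequency type; a minimal sketch under that assumption (the weight formula a / (a + p(w)) and the parameter value are illustrative, not taken from this paper):

```python
import numpy as np

def word_weights(word_probs, a=1e-3):
    # Down-weight frequent words: weight = a / (a + p(word)).
    # A larger a scales down the weighting scheme's influence.
    return a / (a + word_probs)

p = np.array([1e-2, 1e-4, 1e-6])  # word probabilities from some corpus
print(word_weights(p))            # rare words get weights near 1
```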
{{figure:d20a42b3-a1da-4f37-a7c0-d422a32081d4}}
CAM {{cite:6597519704128b610e216ed3cf4be8dd95c969a3}} produces a heatmap by taking a weighted average of the channels at the last convolutional layer of a CNN, where the channel weights are the weights of the fully-connected layer that connects the global average pooling (GAP) value of each channel to the network's classification outputs.
While CAM is a faithful explanation of a CNN's “attention”, it is only applicable to CNN architectures that have a GAP layer right before the last classification layer. GradCAM {{cite:f1b7f8e1dc5e11069f7894d25a0cdfe9cfc2261a}} approximates a channel's weight in CAM by the mean gradient over the activations in that channel, and is applicable to all CNNs, including those without a GAP layer.
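A minimal numpy sketch of the two weighting rules just described (shapes and names are ours; a real implementation would hook into a framework's autograd to obtain the gradients):

```python
import numpy as np

def cam(activations, fc_weights, class_idx):
    """activations: (K, H, W) last-conv channels; fc_weights: (n_classes, K),
    the FC weights from GAP values to class logits. CAM = sum_k w_k * A_k."""
    w = fc_weights[class_idx]                    # (K,)
    return np.tensordot(w, activations, axes=1)  # (H, W) heatmap

def grad_cam(activations, grads):
    """grads: (K, H, W) gradients of the class score w.r.t. activations.
    GradCAM replaces each CAM weight by the mean gradient of its channel."""
    w = grads.mean(axis=(1, 2))                  # (K,)
    heat = np.tensordot(w, activations, axes=1)
    return np.maximum(heat, 0)                   # keep positive evidence
```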
Inspired by the widespread success Koopman operator theory has found in a number of applied fields, and by its recent application to problems in machine learning, we extended Koopman tools to the problem of deep neural network pruning {{cite:bcd2987b050c17f13386bf92c11103401d11ba8c}}, {{cite:c6fa0304db39f1951066533496e03c5ae49241aa}}, {{cite:b68ea07ca854fa554b793dc7b0305a3a7c1cfdf1}}, {{cite:db18bd692a0636f9d7521108fdb8a4fccd837a97}}, {{cite:4286e87bd5362a17d8c50b0fc7bfd2391bc7c9f1}}, {{cite:c0dea816b8897d0f3688926d1386e20d0dcbe9e8}}, {{cite:7c46bca4b36fae9f7fd43d795f177179625d8548}}. By making use of the Koopman mode decomposition, we defined a new class of pruning algorithms (Algorithm REF ) that are theoretically motivated and data-driven in implementation. We found that both global magnitude and gradient pruning {{cite:7c46bca4b36fae9f7fd43d795f177179625d8548}} have equivalent (or nearly equivalent) counterparts in certain regimes of training within the Koopman framework (Figs. REF , REF ). In this way, Koopman pruning provides a unified view of these “disparate” methods.
Surface-response functions. In the PCA, the induced charge is strictly a (singular) surface charge, i.e., {{formula:e796817d-d035-413c-893d-6bb9f5fe6d43}} {{cite:5c417c8a6088c697c401c7f50197605696054f7a}}, {{cite:e422b640e8ee672049532b97a1fffcdc789ba7fe}}, {{cite:d4fb7280953debc6c2aac5e8f0f28eed2213e730}}, while in reality it assumes a nonsingular induced charge density {{formula:19d3899e-6ee4-47aa-a4dd-9f1638ac2393}} of a finite, surface-peaked nature (Fig. REF b). In this context, the Feibelman {{formula:298c5dbc-fcb7-49aa-a76c-51c317ebef36}}-parameters, {{formula:e294094c-f44c-42cb-9e20-cdfa0d263a20}} and {{formula:255692a8-f565-4741-95d4-50a0f28604ee}}, are dynamical surface-response functions that correspond to the first moment (i.e., the centroid) of the induced charge density and of the normal derivative of the tangential current density, given, respectively, by ({{formula:fca20b05-5579-40eb-8da1-0a4ef6eb07e4}}-dependence implicit) {{cite:e422b640e8ee672049532b97a1fffcdc789ba7fe}}
{{formula:ff43bcba-c1fe-450c-b9a0-ca56ff7567b6}}
As a concluding remark, we would like to present several interesting future directions. The first question to ask concerns the high-energy behavior of string scattering in de Sitter space. Recall that in flat space and AdS, string scattering has a mild high-energy behavior due to the existence of infinitely many higher-spin states. In particular, high-energy scattering is captured by a widely spreading worldsheet, which implies an exponential suppression of the amplitudes. On the other hand, in order to have sufficiently many higher-spin states in de Sitter space, one needs to consider strings shrinking and folding inside the horizon. At least naively, this would suggest that stringy UV completion could proceed differently in de Sitter space than in flat space and AdS. It would be important to study this issue further by generalizing developments in holographic correlation functions in AdS {{cite:2462fb5720a7c1da2a8b6f11b18179a26287992c}}, {{cite:179598bb740b8cbf52184893edf6f3f17a4d8972}}, {{cite:fb2959f5749f3bbc1261290225ee8245420f429f}}, {{cite:2737d6e25f227bdab6ba033f069cac687e5dea11}}, {{cite:e171b48c91c7a612ccac7be2c660536d41b94a1c}}, {{cite:083f8c415f2248ca8d8589819b9caa0d40abe313}}, {{cite:756b94b2e795c2e5e9ec441d8e3340f0c12225fe}}, which would provide cosmological Veneziano amplitudes. A related important question is to formulate a framework for studying the consistency of high-energy scattering in de Sitter space. For example, in the case of AdS, we know what the AdS analogues of the Regge-limit amplitudes and the hard scattering amplitudes are (see, e.g., {{cite:eb4d0ee0c833fcd8badbe23217a2cf61684eef09}}, {{cite:5fb55049577664a5560997da1ec5267d53aed625}}, {{cite:a88a90068e4388cba73385f920a2dcb26f01b9fb}}, {{cite:00bdd752431acd68c0a59752709e27d930db0c29}}, {{cite:eacafc50ddfa2d3ab973c7fd41a72de39023a887}}, {{cite:6f992d22dbf12f8fc6875aa011cc799ba888c7e3}}, {{cite:0773afef8d0b2672ba38da43c7dae7fd5625cbd3}}, {{cite:d8920e757483490fbd45136f5922b6fe4c56fa7e}}, {{cite:3d70f743e2ed6574c47600c94982fc2852d8bf39}}, {{cite:d8278a274202950d872733f267d2d150a1105c15}}). For de Sitter space, there is a known flat-space limit of late-time correlators corresponding to the hard scattering limit (see, e.g., {{cite:00bdd752431acd68c0a59752709e27d930db0c29}}, {{cite:e49831e8a8a976f9a56f9198892659ee9cfebbbc}}, {{cite:3337e1a1e01b98d27e4a2c396a22aa9648a0454d}}). However, to our knowledge, its understanding is still limited compared to the AdS case, even at the quantum field theory level, before taking into account stringy effects. It would be important to clarify which kinematics of which quantities are useful for defining the consistency of high-energy scattering in de Sitter space. We hope that this direction will open up a new road toward an understanding of de Sitter space in string theory.
An increasing number of researchers have studied metaphor from different perspectives in fields like linguistics {{cite:aa22bdabf3cb1f5d8b6c0e6f8803e3a914ac5c19}}, {{cite:4e40746d3431b62ec0257257990daa8379291211}}, {{cite:9c4f0226cdec8deeff9bceae051a724a5d7a1983}}, {{cite:55ba4c77acd465fa2a6aca77e0637cb19f971f98}}, psychology {{cite:767913b744014153cadbcb59cde0b943369646f7}}, {{cite:573bb10739f264dc1bf821b82c979a01b1712d70}}, {{cite:1236aad0a0a2028454ddafaffe51ee019beefaf3}}, {{cite:763b0bd1c6786353987f9d6f7de7649c1678e872}}, neuroscience {{cite:8a760ba5a313ad7f552fae1d83b74385d412bdaf}}, {{cite:7aeaafbf309cc3ecf4d2ffbc373492c57aa4f501}}, {{cite:154630b8f2ee5e46c12194ac8536bd934842ecef}}, {{cite:3ab1fead7ef0080c03779be24067c5ebccec2334}}, management {{cite:8809c697fba2b8cad4714226bf0e6c674c074ba1}}, {{cite:36fbc45b85a2116a56d117632492a58e915ba338}}, {{cite:05f5a3c6225ba4a3fc627a5c0aa10b57b260c506}}, {{cite:c9a2e2b3507dee146f07dab5cd22abc351d1be96}}, and computer science {{cite:4bb5ba56a314ba89683cf38e6e07bed883c075e5}}, {{cite:ef0a0ce71cde052e54c9ffa853880e0b8b53b89e}}, {{cite:adad91e54fdf289b0c2b74286f162af7f9734ba5}}, {{cite:a2eab568ae4c5d8604d35c00d7a69d408e77b85b}}. Since metaphor research has been developing dramatically, it is necessary to review its current situation, development, and trends, and to study how it has evolved over time. This may contribute to novel and interesting studies of metaphor from both linguistic and interdisciplinary perspectives, as well as to exploring the underlying mechanisms. Previous studies have shown that quantitative analysis can explain the nature of a particular discipline or field and changes in research focus over time {{cite:07b51785d89a361dfc22f0653b9094bd1b13b15a}}, {{cite:4605f746d07a441808d13fcbaaae2a5c2a806a9a}}. Researchers can use information platforms such as AMiner {{cite:ccdf09f9272b2ec06afbea685ec92cb0e7fcc361}}, Google Scholar {{cite:373c3ba73233cfa45c286e71fe5f68fef0ee4285}}, Microsoft Academic Services {{cite:3f006f5acdb313165cd14cb97635a73395242817}}, and many other scientific online systems {{cite:78ba93e0b54bae75fbd827434eabd388c9d4f8a7}}. These information platforms contain useful data, including but not limited to authors, papers, and references, and they support statistical analysis. So far, based on the above academic systems, a large number of related works have applied quantitative analysis techniques in scientometrics. {{cite:e9ce7bed09f079bc06072cd3efff1192fce54d81}} used bibliographic analysis to summarize human interactions. {{cite:664e4d7a2c33053398eb925e3853317372d2e997}} reviewed various studies using online social networks to identify personality, as reported in the literature. {{cite:4605f746d07a441808d13fcbaaae2a5c2a806a9a}} made a quantitative assessment mapping the intellectual structure and development of computer-supported cooperative work. {{cite:e64c03cee37339c18329ea4d11ae2275d3469566}} used complex network topology to study the evolution of artificial intelligence. Also, {{cite:4549f31fd22c3c233d1b4fb3ead26991fe043452}} made contributions to research in the field of transportation.
| i | 1fdedc6cfce7089b0a8196f2c183a72f |
Fig. REF illustrates the proposed framework for predicting the depth map and reconstructing the 3D vessel. Our method is able to generate a depth map similar to the ground-truth depth map, as demonstrated in Fig. REF (b) and (c). However, since visual inspection alone cannot conclusively demonstrate the superiority of the proposed depth estimation method, we have compared our depth prediction results with those produced by four other state-of-the-art
approaches, proposed by Eigen et al. {{cite:ac606dbe0fce7ba72f9f95dfff0f40c678eb05aa}}, Eigen et al. {{cite:ba4360d5e04c5bc86b6527c9f8d16b65f2802e32}}, Laina et al. {{cite:d0a356770d88d47b8bdd037b8cb2064b2689457f}} and Chen et al. {{cite:73987a9befa4b2b4e67d33d1f4719064333f3780}}. In addition, as the backbone of our work, U-Net {{cite:b3e68d34f9f57006bd2e99700eaeabcde7ead67f}} was also employed as one of the benchmark approaches. Table REF reports the evaluation results in terms of five different depth prediction metrics. As can be observed, the proposed method achieves the best performance on all metrics by significant margins, with only a single exception: its ACC ({{formula:ddc4aa9f-1b63-4541-af69-525e01ae5048}} {{formula:b3a9a1b9-c7f3-4e2c-8df0-3eee5765095d}} 1.25) score is 0.002 lower than that of Chen et al. {{cite:73987a9befa4b2b4e67d33d1f4719064333f3780}}.
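As an aside on the metrics themselves, a minimal sketch of three of the commonly used depth-prediction measures is given below; the exact definitions of the five metrics in Table REF are not restated here, so the standard forms of the threshold accuracy, absolute relative error, and RMSE are assumed.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    # Threshold accuracy (ACC): fraction of pixels with max(pred/gt, gt/pred) < 1.25.
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    acc = np.mean(ratio < 1.25)
    abs_rel = np.mean(np.abs(pred - gt) / (gt + eps))  # absolute relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))          # root-mean-square error
    return {"acc<1.25": acc, "abs_rel": abs_rel, "rmse": rmse}
```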
| r | 5cb23b29a5f2f1c41d5f85f0528ae2bf |
Similarly, MALA is not the only way of doing the sampling. We choose it due to several factors, such as its efficiency in high-dimensional spaces, its theoretical guarantees on asymptotic convergence to the true posterior, and the fact that it does not require a normalized target distribution. The target distribution {{formula:3fde8c76-38bf-41ca-839b-e30687076338}} in our formulation is given by a Gaussian; however, its covariance matrix is not given in closed form, rendering direct sampling difficult. Furthermore, a Gaussian posterior is not the generic situation. For more complicated distributions, approaches such as Hamiltonian Monte Carlo {{cite:f66afdab128417b16c04eaeb899fdb323634c9d0}} can be utilized so that the typical set can be traversed more quickly. Similarly, for multimodal distributions, approaches tailored to such distributions, such as Stein variational gradient descent {{cite:22bf8552c79cdc61f6eac6c840ad1cfd047fa57e}}, can be considered. Furthermore, natural-gradient-based methods, where the geometry of the target distribution is taken into account by introducing an associated Riemannian metric, can be considered to speed up MALA {{cite:f66afdab128417b16c04eaeb899fdb323634c9d0}}.
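For reference, a single MALA iteration can be sketched as follows; `log_pi` and `grad_log_pi` are hypothetical callables for the unnormalized log-density and its gradient, and the Metropolis correction accounts for the asymmetry of the Langevin proposal.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, eps, rng):
    # Langevin proposal: y ~ N(x + (eps^2/2) grad log pi(x), eps^2 I).
    drift = x + 0.5 * eps**2 * grad_log_pi(x)
    prop = drift + eps * rng.standard_normal(x.shape)
    # Log-densities of the forward and reverse proposals (constants cancel).
    fwd = prop - drift
    back = x - (prop + 0.5 * eps**2 * grad_log_pi(prop))
    log_q_fwd = -np.sum(fwd**2) / (2 * eps**2)
    log_q_back = -np.sum(back**2) / (2 * eps**2)
    log_alpha = log_pi(prop) - log_pi(x) + log_q_back - log_q_fwd
    return prop if np.log(rng.uniform()) < log_alpha else x
```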
| d | 1f3aa1331768e309542d837bf94df317 |
In Fig. REF , we plot all of the data included in Table REF .
The masses are given in terms of the Sommer scale {{formula:d96cc8c6-033a-4262-b4b5-fa635107908f}} along the left vertical axis, while the right vertical
axis is converted to physical units by {{formula:f2751a14-eb0a-4690-8264-cdb19268ee70}} fm. The quoted value is determined from {{formula:2931175d-eb6b-4f18-9ef7-01b7b63d5f08}} lattice QCD simulations in Ref. {{cite:83a06a40d8b72150a242f941132f9810327b5b32}}
and was adopted in Ref. {{cite:4603677ebadfa54e9d7adc4a9b264eb9457fe466}}.
For the {{formula:078eefcd-07e7-4ba2-a148-4cde84595664}} results, we use a weighted average of {{formula:a93bcc12-36c1-48cb-8bf0-d26ffdacd91d}}
for the final estimate. The inner and outer error bars on our results represent the statistical and
total (statistical and systematic errors added in quadrature) uncertainties, respectively.
Our final results for the ground-state masses of the {{formula:762ad2e8-3c60-4c86-9c7c-7bbf28ecab20}} , {{formula:362a418a-f8d9-4d1c-b0ad-2c6ef8bae8d6}} and {{formula:403aa673-cc92-4256-a536-19cee26ad893}} glueballs
are obtained in physical units as follows:
{{formula:c61072f7-46fa-47a1-bfd5-84526bd39abf}}
| r | 2201aff36aad91a312838ef14d3364bd |
Our study has limitations as well. As mentioned above, the utilized dataset is still of small scale; as a result, the proposed algorithms were implemented using repeated cross-validations, extensive regularization, and largely frozen pretrained backbones in order to avoid overfitting and false within-subject correlations. This raises questions regarding the generalization capability of the model; however, we expect that the suggested augmentation scheme and multi-objective training will scale efficiently as new data populate our database {{cite:27bed2fcc887e187ba62b1ef01392734f69eb245}}. Moreover, it is a reliable assisting model in cases of data scarcity. Another limitation concerns the absence of image metadata, such as biomarkers or optic disc masks, that could help replace handcrafted detection with an automated machine-learning process. Nevertheless, our proposed approach shows robust performance against the baselines and offers a proof of concept.
| d | 32c70fbc8d847d9381cc181ed636218f |
Although the PESCC offers a factor of 3.5 in spectral bandwidth improvement compared to the SCC, it still does not encompass an entire broadband photometric band.
Therefore, further improvements are desirable, as a broader bandwidth will improve the S/N.
To increase the bandwidth of the SCC the multi-reference SCC (MRSCC: {{cite:af63320767142f9bfe6260e3c388a834a606920a}}) was introduced.
The MRSCC has additional RHs, placed at different clocking angles, and has been shown to reach high contrasts over broad wavelength ranges.
Similarly, we can introduce the multi-reference PESCC (MRPESCC), which would be very similar to the MRSCC, but with polarizers in each RH.
These additional RHs would generate fringes in different directions, which enables more accurate broadband wavefront sensing.
The polarizers in the RHs could be oriented differently, making them sensitive to electric field estimates of opposite polarization states and, therefore, possibly enabling the MRPESCC to measure polarization aberrations {{cite:06498ac7b53028baacb3caf6751c61b43c06a650}}.
Another solution for increasing the bandwidth is via numerical monochromatization of the broadband image {{cite:07ed618207835019e065dc6f0bbaf83f76b69e78}}.
In this method, the wavelength scaling of the PSF is inverted by a vector-matrix multiplication, where the vector is the flattened broadband image and the matrix is the inverse of the monochromatic-to-broadband image mapping.
The monochromatized image could then be used for wavefront sensing.
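A minimal sketch of this monochromatization step is shown below; the mapping matrix `M` is assumed to have been precomputed (e.g. from an instrument model), and a pseudo-inverse is used for numerical robustness.

```python
import numpy as np

def monochromatize(y_broad, M):
    # M maps the flattened monochromatic image to the flattened broadband
    # image (assumed precomputed); invert the wavelength smearing in one step.
    return np.linalg.pinv(M) @ y_broad.ravel()
```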
| d | b3e56c4d5b4ed67dfa666187d56846cb |
The above arguments are valid when we only consider adiabatic
perturbations. This is reasonable since the perturbations generated in
single-scalar-field inflation models are always highly adiabatic and
the curvature perturbation {{formula:b73c8765-be2e-46ec-a053-7d6fa5434bd9}} is conserved on large scales.
However, this picture changes when inflation is driven by
multiple fields {{cite:3028cdc50d273471b064adb39f1ceeafc8115c4d}}, {{cite:46bd5a8e129fcda635aa2c86dd092a917ebe4c8d}}. When we
introduce multiple fields, they generate a large amount of
entropy fluctuations which are converted into curvature
perturbations at later times. This scenario can result in large
non-Gaussianity of the local form. For example, the curvaton
mechanism {{cite:910c48c1522bd101561a65b871cac443964946c9}}, {{cite:8b5ed0bf3a4eb9452fdf5f43e2866a451571d2c5}} and the inhomogeneous reheating
scenario {{cite:b13be359833e612c61690cb18508c1a6eb4741df}}, {{cite:dc21cfbd1f6327d16b877d24c69e0f19b246c3db}}, {{cite:d2543273c401d5c48f2dc82e78500ea0effe6199}} are
able to produce large non-Gaussianity of the local form. Therefore, it
is meaningful to study both the magnitude and the shape of the bispectrum
of multiple-field inflation in detail.
| i | a31a162e0a2a062fe352f1087738d896 |
We conducted extensive experiments to verify the effectiveness of our proposed methods. The experimental results showed that our method significantly improved the quality of generated CAMs. We refined the generated CAMs with IRNet {{cite:edcfbc3fc4584ee78f0f07a627cab0331d7cacfb}} and trained a DeepLabV2 segmentation network {{cite:3b3063a49c560fe200c55da5f007db63bd85c5b2}} with the refined pseudo labels. Semantic segmentation results on two datasets, PASCAL VOC 2012 {{cite:cdc49780ae7516489b8370130e5a90d88c2aaf8a}} and MS COCO 2014 {{cite:62ae38facc41dccbcd5195844f3c75b8c00125a2}}, showed that our proposed method could outperform the state-of-the-art methods.
| i | bac8bd9724abfde52c8c609c34840639 |
The {{formula:9d634f83-d19a-4e9b-8b92-d66b911f8826}} integral is found in tables {{cite:d5b6d606a612e4a4f6640a9d0e3458604a6b8113}} to be
{{formula:7d1e9779-f6f5-4d27-8c20-efeb3f82af94}}
| m | 20e518e668006db1a73589ed7e27ccb6 |
For each target distribution, the MATLAB™ program follows a similar setup {{cite:b2757b55d0ffd06c034f023ef146ba392038a6a4}}. First, we set the parameters, such as the grid size ({{formula:b061d700-eabe-4020-b4f6-08c35493ae00}} ), number of trials ({{formula:aee12fc5-9a04-4563-b872-3d4953799911}} ), number of loops/simulations ({{formula:508bbd7a-7418-4f20-ae8b-df4f14f43849}} ), vision radius ({{formula:b63b7c6c-7653-4bbb-adc0-9ffd588c81b3}} ), and exploration constant ({{formula:35ad1e81-6e07-45d1-9d0e-f37a508aa514}} ). Of note, the vision radius is the maximum distance between searcher and target that registers as a detection. The exploration constant is a parameter of the UCT algorithm which balances the tendency to exploit versus explore the decision space {{cite:e87f9498d713a333c374c4f1f6fee0d16d2f958f}}. Next, we place the target in the search space according to the desired distribution.
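For illustration, the selection rule in which the exploration constant enters can be sketched as follows (in Python rather than MATLAB, with hypothetical bookkeeping names); each child node stores its visit count and accumulated reward.

```python
import math

def uct_select(children, c):
    # UCT score: average reward plus an exploration bonus scaled by c.
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")  # always try unvisited actions first
        exploit = ch["value"] / ch["visits"]
        explore = c * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore
    return max(children, key=score)
```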
| m | c09bb5e8a125c2c6f716a3b62133c010 |
This remarkable result can be proven independently of any specific parameters of the Hamiltonian and the applied driving field. Consider {{formula:df533102-ddc9-40c1-8cf0-6a8fd7b7f38d}} and an arbitrary unbalanced Hubbard lattice. Provided both {{formula:ec800f02-15d4-4178-b065-de459d45d04f}} and {{formula:c2e9f491-26a4-4464-96a5-425645659241}} grow proportionally with {{formula:43a49240-2914-4ae7-b9ea-3350ce257b77}} , we have, following Lieb's theorem, that the ground state satisfies {{formula:109b70ac-714c-457e-9b46-ff7089ba8211}} . Under the application of continuous periodic driving which preserves {{formula:04970207-60e0-421f-84da-46c053eee562}} , Floquet heating will occur but the system will be restricted to the subspace spanned by the eigenvectors of {{formula:56cea7f6-039a-4f47-afea-61ce28fb367a}} which possess the same value of {{formula:b986de98-6503-4ddb-a677-473b947313a1}} as the ground state. The pure steady state {{formula:490bda86-4d05-430d-87af-739d9a60ca05}} is indistinguishable (at the level of the expectation values of few-body observables) from the identity matrix in this subspace {{cite:a2a5b88f96982be05af9d20ab53e52381e04fe7c}}, {{cite:3d901d72b1b868e78e10a29e2277b7946d4d7ffd}}, {{cite:1b58d233b9e4b7e6ded985cda6783e8b34c3017a}}. This identity matrix inherits the permutational invariance of {{formula:4f08258c-dbce-41a4-a440-d880e4269027}} and thus the steady state must have {{formula:1d9d5343-6189-4b29-ab07-6d98ffac9f38}} as well as {{formula:8f4f1b80-8b5e-46e8-b19f-ff2405411e17}} , where both {{formula:4f8fd392-2747-4eb6-b2b9-d8a83679e7a2}} and {{formula:2ed16a46-c03c-41cf-adf0-b5c05d7c9094}} are constant. By definition, it then follows that
{{formula:57bc0e45-4407-4e13-9c8d-dbe3863db5a7}}
| r | f3f7b602ff7a0007316643eb7cc12747 |
In the present work, we have also provided a first systematic cartography of the special point location on the {{formula:faba3a47-b0d0-4f5d-a848-e56d158fd15d}} diagram using just two physical parameters, the squared sound speed {{formula:7fe4ac1e-c286-48ad-a54b-a3e0cd873038}} and the quasiparticle pressure at the fiducial chemical potential {{formula:c00d2aa5-4c48-4b52-9454-28490389c8ca}} , see Fig. REF .
This allows us to conclude that very massive objects like the lighter
companion of GW190814 with {{formula:06076816-38d3-4b82-ac80-7492f566b8a5}} {{cite:1637e8df3ed124aa1db62dfda400b59a22ca58f5}} could well be hybrid stars. This requires only moderate stiffness of the CSS EoS with {{formula:199ef570-4be7-4ee6-b5e9-b6258d20c5be}} .
For such moderate sound speed values, the radius range expected for
hybrid stars with the mass of PSR J0740+6620 corresponds very nicely to the NICER radius measurements {{cite:9ef517978aed81bbffdadab195eef3400ba00c25}}, {{cite:32994d233e78fcba3ee4381e327efff2b2c22cb9}}, while realistic models of purely hadronic EoS (including hyperons) turn out to predict too small stars above {{formula:f51bbced-d556-4e46-b922-dbcb24c06302}} .
We note that recently {{cite:cd16dee8e98016af2cd17b63e184ee6ff3614885}}, within a first-order phase transition scenario with a CSS quark matter EoS, also obtained very massive hybrid stars with large radii.
After completion of this work, an extensive study appeared {{cite:389ca4932432710e5d3036bb4ca13a5c87a20edd}} which discusses the possibility of hybrid star sequences with high maximum masses and large radii obtained by either a first-order transition or a crossover construction.
These are particular cases of the more general systematic study we presented here.
| d | 50830e5066a80ae119c7415f2ed61362 |
Given an arbitrary square matrix {{formula:4663f658-55f6-49bc-aac4-eb2b0fcbbf0e}} , its exponential is defined as {{cite:8d8cdab4d17cb6c0a1822884a518d02dcd6e9d5c}}, {{cite:970f6fb2b0f156e5a00508a702835cd7134d8af5}}
{{formula:f62db8cf-b8b6-427b-8e12-7855db12690a}}
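As a quick numerical illustration of this definition, the truncated power series can be checked against a library routine; the sketch below compares it with scipy.linalg.expm, which implements a more robust scaling-and-squaring algorithm.

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    # Truncated power series: sum_k A^k / k! (illustration only).
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out += term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(expm_series(A), expm(A)))  # True
```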
| m | be3f1ba04b2ec2f5d89e081931a21fa8 |
Denote the parameter vector by {{formula:6d70115d-5329-4c23-8873-13c2b550efaf}} and {{formula:0a7d2fb1-7c6e-4efe-a2dc-d05e4bd1711f}} , where {{formula:bfa85edf-7bc1-4f7b-84aa-353d88ecdff7}} is the normalization matrix which encodes the different convergence rates of the model intercept vis-à-vis the slope coefficient. Then, following {{cite:003385849abe4c3358a7fea2d9e15fc27ab035d7}} and {{cite:e522640112516c1ba08cb7d5455f9619d703f088}}, the quantile regression estimator is obtained via the following optimization problem
{{formula:46bb58e4-1acb-4124-a140-a8cdd90bf72a}}
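A minimal numerical sketch of this estimator, written directly in terms of the check (pinball) loss and, for simplicity, ignoring the normalization matrix, could look as follows.

```python
import numpy as np
from scipy.optimize import minimize

def quantile_regression(X, y, tau):
    # Minimize sum_i rho_tau(y_i - x_i' beta) with rho_tau(u) = u (tau - 1{u<0}).
    X1 = np.column_stack([np.ones(len(y)), X])  # intercept plus slopes
    def loss(beta):
        u = y - X1 @ beta
        return np.sum(u * (tau - (u < 0)))
    res = minimize(loss, x0=np.zeros(X1.shape[1]), method="Nelder-Mead")
    return res.x
```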
| m | ccf585f342fe35663d5643fab501385e |
PSNR measures image quality by calculating the global magnitude of the pixel error
between the image to be evaluated and the reference image. The larger the PSNR value,
the less the distortion between the image to be evaluated and the reference image, and
the better the image quality. SSIM is a commonly used image quality evaluation method
originally proposed in {{cite:9968ae087718b662a3c6ac37900dae31f78a83b9}}. SSIM is composed of three comparison functions.
The luminance comparison function is expressed by Eq. (REF ).
{{formula:e37243c1-6173-47ff-8e54-609f6a3003bc}}
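For reference, both measures are available in standard libraries; the sketch below evaluates PSNR and SSIM with scikit-image on single-channel images, together with the textbook PSNR formula 10 log10(MAX^2 / MSE).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_scores(img, ref, data_range=255):
    psnr = peak_signal_noise_ratio(ref, img, data_range=data_range)
    ssim = structural_similarity(ref, img, data_range=data_range)
    return psnr, ssim

def psnr_manual(img, ref, max_val=255.0):
    # PSNR = 10 log10(MAX^2 / MSE).
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)
```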
| m | 5e5a94ff5aca4b16312809e252538e1c |
In {{cite:ea4fd7f446ee51f123bd33bcc74c89575671abc6}}, Montanaro showed that, using a quantum computer, the number of samples required by Monte Carlo methods can be reduced quadratically.
| m | 0ffa6ba5d39f956cc6490af7c0d3a833 |
Evaluation Metrics: Classically, MOT systems are evaluated with the CLEAR MOT metrics {{cite:1cfd782577dd681a346cf75bbb34d38a7b47a65d}}. As pointed out by {{cite:545513fb95c0ecbf786173e9eec7c863a800166e}} and later by {{cite:9d79656bb0fb52824a6225fd5a04bc8f868ad69a}}, there is a linear relation between MOTA and the object detector's recall rate; as a result, MOTA does not provide a well-rounded evaluation of tracker performance. To remedy this, {{cite:9d79656bb0fb52824a6225fd5a04bc8f868ad69a}} proposes to average MOTA and MOTP over a range of recall rates, resulting in two integral metrics, AMOTA and AMOTP, which have become the norm in recent benchmarks.
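A minimal sketch of the integral metric is given below; `mota_at` is a hypothetical helper that evaluates the tracker with its detection threshold tuned to achieve a given recall.

```python
import numpy as np

def amota(mota_at, recalls=np.linspace(0.1, 1.0, 40)):
    # Average MOTA over a range of recall operating points.
    return float(np.mean([mota_at(r) for r in recalls]))
```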
| r | 80233afa5f43a15eb08971a4d35ff252 |
Le Roux et al. {{cite:842431a6e85dd12a92e9c7c614195e371e52a6d1}} proposed a randomized variant of the Iag method, called stochastic average gradient (Sag), where the component functions are sampled uniformly at random. The iteration complexity of the Sag method, in expectation, is {{formula:6074647b-c426-4e0e-b4f0-c02940bab5cc}} for strongly convex problems and {{formula:6348f003-b678-477f-8477-6d9ecd3d376c}} for convex problems. The maximum allowable step-size for the Sag method is larger than that of the Piag method, which can lead to improved empirical performance (see Figure 1 in {{cite:842431a6e85dd12a92e9c7c614195e371e52a6d1}}). Note, however, that in some applications, the component functions must be processed in a particular deterministic order and, hence, random sampling is not possible. For example, in source localization or distributed parameter estimation over wireless networks, sensors
may only communicate with their neighbors subject to certain constraints in terms of geography and distance, which can restrict the updates to follow a specific deterministic order {{cite:719551bd6b792a839d27ea37847f5bcff64f1074}}. Inspired by the Sag method, there has been extensive work on stochastic gradient aggregation algorithms including, to name a few, Svrg {{cite:ddf68307d366082d6d954c1328455aeb0db94f65}}, Saga {{cite:b5d9935c32471a58a939edab3d51585772bb24e4}}, Miso {{cite:9e6bc770d7979d967c946faced19261314c8c1af}}, Sarah {{cite:2d284eff4659ef8521364bb0610e3f55343d9cc4}}, Katyusha {{cite:16e6bf780252161cbb78ed4de35d7050d0336989}}, and Rpdg {{cite:66fb15ebd647d7fea5da6acfdf89dc98149a4fd6}}.
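For concreteness, the Sag update with uniform sampling can be sketched as follows; `grad_i(i, x)` is an assumed callable returning the gradient of the i-th component function.

```python
import numpy as np

def sag(grad_i, n, dim, steps, lr, rng):
    # Keep a table with the latest gradient of each component f_i and
    # step along the running average of the table.
    x, table, avg = np.zeros(dim), np.zeros((n, dim)), np.zeros(dim)
    for _ in range(steps):
        i = rng.integers(n)          # sample a component uniformly at random
        g = grad_i(i, x)
        avg += (g - table[i]) / n    # update the average in O(dim)
        table[i] = g
        x -= lr * avg
    return x
```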
| m | 4b17bc8f960e3d03457df4789b53094d |
Implementation details. Core-set, multi {{formula:692b94c2-0f1b-421b-af1b-65eafe116ab3}} -means, and {{formula:14face71-0bfc-4cf6-896c-c057cc942100}} -means use feature outputs of a ResNet-18 pre-trained on unlabeled ImageNet with the CompRess SSL method {{cite:583a0ea4a67b37e2d91cc019b82b7dd4d1dc0801}} for 130 epochs, which uses MoCo-v2 {{cite:892ebae244ab7695367ab654e373157923fd8c8f}} as its teacher network. Note that this pre-trained feature extractor is used even for the CIFAR experiments, which means that, technically, the CIFAR experiments use unlabeled data beyond the CIFAR datasets, albeit without any annotation. The Max-Entropy {{cite:acb54ce19f89dd48f8687179f4a3dd0b3c356836}} sampling method freezes the pre-trained backbone and trains an extra linear layer as the classifier on top of it for 100 epochs. In all Max-Entropy experiments, we use the Adam optimizer and lr={{formula:aecf92d3-2ddd-45da-b1a3-2a30d741c065}} , which is multiplied by {{formula:f36854ce-d446-4f66-a20e-2f4a8787cb89}} at epochs 50 and 75. The labeling budget for iterative sampling methods is the difference between two consecutive budget sizes. For random, uniform, and {{formula:034a8d0e-f18c-4aff-b5c2-27a583134ed8}} -means sampling, each budget size equals the amount of unlabeled data selected.
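As an illustration of the Max-Entropy selection step, the following sketch picks the most uncertain unlabeled samples from the softmax outputs of the linear classifier; the array names are placeholders.

```python
import numpy as np

def max_entropy_select(probs, budget):
    # probs: (num_samples, num_classes) softmax outputs of the classifier
    # trained on top of the frozen backbone.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:budget]  # indices of the most uncertain samples
```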
| r | 156381d85fa2fbac503912906f9304d0 |
The characteristic equation {{cite:8122be28c6a1ce61141fb1be5f88c9dc363c2ea3}} of the difference Eq. (REF ) is
{{formula:aee0eac2-b900-4cfc-9606-2cdb35e58a69}}
| m | 1b723f89c9202d55069176b593d892d6 |
The various distinctions revealed within {{formula:3dafc3f3-ca1c-4534-9f4f-a07ea7ce85fb}} are also reflected in the
various categories of sets that can be defined within {{formula:f7511957-cc06-47be-a6cf-70722d7220a6}} .
Besides the standard category {{formula:11f7ceae-0044-4c57-8963-033f152603c2}} of sets {{formula:994bb02d-ddfb-4944-a059-abf5348083d6}} and functions, we have
the category {{formula:fad01198-29bf-4c2b-a9b4-13bf6472a18f}} of sets with an inequality (see Definition REF )
and strongly extensional functions as arrows (see Definition REF ).
This category is the “universe” of
Bishop-Cheng measure theory {{formula:81150205-7cd5-4a3b-9921-648683683900}} , introduced in {{cite:0792360cf051606ec918fecf3cd4393cff83295f}} and extended significantly in {{cite:c9f1e117b4b21a89eb031e15e7f5856f059c35ca}}.
In {{cite:6e54b618a78ed6e0a3958185d26a960db94a7e0a}} we also introduce the category of strong sets
{{formula:833a131f-5410-482e-b883-56eda287f03d}} , a subcategory of {{formula:8e18ccd7-0198-4bc3-98d3-7326dd0ad1da}} , where the inequalities considered are equivalent to the strong negation
of the corresponding equalities, a positive and strong counterpart to the standard weak
negation. In the category {{formula:508420fd-b712-4aba-9bc6-54518cf47b6d}} of sets completely separated by
real-valued functions on them (see Definition REF ), a subcategory of {{formula:4dec2d0e-d8bb-4f20-9262-7719039a2797}} , the
given inequalities and equalities are equivalent to ones induced by a given set of real-valued functions.
The importance of these inequalities lies in the complete avoidance of negation in their definition and in the proof
of their basic properties. The category {{formula:17f691ad-526d-4938-98e9-a29a7dfb3795}} is the
“universe” of Bishop measure theory {{formula:b55b6308-f678-4d4c-b644-14777eee3e7c}} , developed in {{cite:b6957e03674f2976985833b620aa782c025ec37c}}.
(For the relation between {{formula:0bbce39a-542d-4537-a902-8deee88f672c}} and {{formula:eb1245c3-b19c-4f5e-9a56-7ebc718e95b1}} , see {{cite:aecb4a3280229621f3c72d1da8a929ac0665f8b8}}, {{cite:149d70f8665af7a2e789768ec30dad21da750e9e}}, {{cite:bcf90d0ada29c3154318f64cdbb5431d01ee4b62}}.)
| i | 998e7b01017af19bd73aeceda8fee689 |
We compared our approach with several existing KBQA methods. {{cite:86238d23196bbdb81833477ea21b33b713ddf021}} performs query building with existing templates. {{cite:1910d86dadf3032c0a2b3fe46d506e0b4daa61a1}}, {{cite:a871c93ae3b717c9d01e8a39cfdd6fad4825d6b7}}, {{cite:e1c19b0f4846cb3f21c7beafe5d49b7f4bbf5f44}} construct pipelines to generate query graphs. {{cite:377c9fe16c241961b0502b5e4d05320b1db3eec6}}, {{cite:3227f2ac1a9961cff06692861669b39f4ae4b0e2}} obtain state-of-the-art performance on CompQ and LC-QuAD, respectively, via query ranking. {{cite:36c490d83e57fada4e675b4587a6314805c9c4b6}} achieves a state-of-the-art result on WebQ by state transition over dependency parsing. The main difference between our method and theirs is that they perform state transition by enumerating pre-defined transition conditions, whereas we let our model learn how to expand the AQG automatically at each step.
{{table:c825e9fd-1f2a-439e-9232-dde224d6779d}}{{table:d18defc6-9310-4729-abb8-040474f5928d}} | r | 42a56ecc049158dbbb3d1926901318c1 |
In the panorama auto-painting scenario we consider a set of photos {{formula:c9af4109-6d7b-42fa-908f-6b7bd5cc90fc}} taken
from the same location by rotating the camera around its center of projection.
We compute a set of homographies {{formula:af8a80ef-547e-40bb-9ad5-f1e7ecae9aa2}} between photos in {{formula:394668c3-f263-47e1-8448-4d97462ed8dd}} using the method of
Brown et al. {{cite:8133f4e4c39102c1f3f6e4503f1d999bb2a3b912}}. Then we let the artist pick one photo from {{formula:0ec2982d-1973-4a33-b045-e0edda0239c8}}
as {{formula:a7ace805-345f-428f-beea-8a7aaf169305}} and produce its stylized counterpart {{formula:f3833b0e-a02a-424a-adbc-b9e289210113}} . The remaining photos in {{formula:60f5dd90-75ce-4325-a9e3-113003c751f9}} are
used as {{formula:0b64570d-c3fc-429e-a6a9-7a896029f340}} . After the optimization one can use {{formula:5e5bbb58-6bc9-4ae1-ba0b-f07f1bf24b44}} to stylize all photos
in {{formula:c3a867a7-91b1-4c8c-8d59-98e2dd0f830a}} , stitch them together using {{formula:f86437ea-b304-40c0-91cf-7a72de85bb0c}} , and either produce a cylindrical unwrap
or use an interactive scenario where the user changes the
relative camera rotation, from which a pinhole projection can be computed and
stylized in real time using {{formula:da859376-dedc-45c5-9e45-df9d4e1cd223}} . As the comparisons with {{cite:b10a42f3e0ad101454da1aa91765b587f7cd364a}}, {{cite:44f35adf1f53f3202a6f26023af9fb6b43775a3f}} in Figs. REF
and REF show, our approach better preserves the original style details as well as the semantic
context.
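A minimal sketch of the pairwise homography estimation and alignment, assuming matched keypoint coordinates between two photos are already available, could use OpenCV as follows.

```python
import cv2
import numpy as np

def estimate_and_warp(img_a, pts_a, pts_b, out_size):
    # pts_a, pts_b: (N, 2) float arrays of matched keypoints (assumed given).
    H, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(img_a, H, out_size)  # out_size = (width, height)
    return H, warped
```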
{{figure:8fdfea01-6f45-4dea-95ab-a46a9492d349}} | r | 97dc49e416b86ab28e524d7b8cb58681 |
where the minimization goes over all possible ensemble realizations
{{formula:b6f43ccd-0811-45b7-89c4-3e3596c940c4}} , {{formula:7ae93ed4-491f-4dd7-86a8-b788167af3ff}} and
{{formula:2def4d6d-837b-410f-a883-eca505eb56b2}} . For two-qubit states, {{formula:f4435b89-87bc-4110-9330-5ff1aa0d9c1b}} can be calculated directly
{{cite:04e83cfb119544955b025afaef69a4bbfd40c36d}}. For higher-dimensional quantum states, no general
results are available {{cite:5cefaa0743c02261894933504ad143bb13add13f}}, {{cite:ddc79465103893ed003c1469458cbd95498691df}}.
| i | 8eb8874c789df89583b606a80f941dfc |
Moreover, regarding the influence of star–planet interactions on stellar rotation, with ESPEM we recover the results from previous studies {{cite:707b753e1b5c97bc784a924a47e9835d3205e24e}}, {{cite:ae4244169ae896785053b7df55ef3b5203c42abf}}, {{cite:b93af09a0616bf5405f1c6ceb43bcf69318ed4b7}}, {{cite:8c55caa35952e7c4c956c3b27b3863fe83a5dd01}}. Indeed, as seen in Fig. REF , the engulfment of a jovian planet by a solar twin may alter the stellar rotation period by more than 90%, which corresponds to an error of 45% in the estimation of stellar age if we assume the Skumanich law. It is worth noting that an over-rotation of around 20% is likely to last for a few billion years. A deviation from gyrochronology is also possible for stars hosting super-Earths, as star–planet magnetic interactions may lead to their engulfment later on during the main sequence. The induced spin-up is less significant in this case. We show in this work that these effects are barely noticeable in global {{formula:21334a99-2277-42ab-8f54-7f721a1c5dc6}} distributions, as at most 0.07% of a population of star–planet systems may see its stellar rotation significantly altered. A signature of planetary migration may nevertheless become apparent at a given age, in particular in stellar open clusters {{cite:3df5d5503d83b3c176d9581740308aa6b1ef43a5}}, {{cite:ae4244169ae896785053b7df55ef3b5203c42abf}}, and should be confronted with alternative scenarios that have been proposed to account for the observed {{formula:076ebd26-4751-46ad-9670-e9db7aa4dc88}} distribution of low-mass stars {{cite:015c6bf130686afa613594cf2b510a01edf79809}}, {{cite:32d2cf8ad72bb6e2e99bcad5d46896debae9f433}}.
| d | d8ade00e349c54b78e7f3f556e135432 |
See Appendix REF .
In terms of the linear inequalities in Lemma REF , the optimization error {{formula:0f5a8234-a36a-4383-99a3-263dde21e914}} , the consensus error {{formula:eb5adb7d-87da-4ea8-9b0d-a8b7e4c3823f}} and the compression error {{formula:b9fcc6fa-02a8-49ff-84c3-b568c7633ce1}} all converge exponentially to 0 if the spectral radius of {{formula:5791b326-2477-44fd-8503-59b8439e6757}} . The following lemma states a sufficient condition ensuring {{formula:932481a6-185a-43cc-811f-4119ac6d4452}} .
(Corollary 8.1.29 in {{cite:ca8cdc2a4ada49e873d67f14810312feb6d93f7c}}) Let {{formula:39dfda92-3df3-4938-8e40-7d8f98cdaa1c}} be a matrix with nonnegative elements and {{formula:a293eec0-faba-43e3-a63f-e72643be4c8e}} be a vector with positive elements. If {{formula:135ecda9-0b94-4082-8923-f9961a85401d}} , there is {{formula:6ced293a-5c99-42ac-90fb-737ffe594c16}} .
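The lemma is easy to check numerically, as the snippet below illustrates on a random nonnegative matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 0.2, size=(4, 4))  # nonnegative matrix
x = np.ones(4)                          # positive vector
assert np.all(A @ x < x)                # hypothesis: A x < x elementwise
rho = max(abs(np.linalg.eigvals(A)))
print(rho < 1)                          # True, as the lemma guarantees
```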
| r | 1e2d886c25a869ea18b43778abc736d5 |
To verify the effectiveness of our work, we compare it with several UDA and USFDA SOTAs. The UDA methods include Deep Adaptation Network (DAN) {{cite:7d3f36d44cabcaac367b29fc6c440301ec3ccc9a}}, Domain Adversarial Neural Networks (DANN) {{cite:408b698261eb8613bed33e7c7895f61449f02de1}}, Conditional Adversarial Networks (CDAN) {{cite:26866349579acdbff072b09754255742ecb61bb1}} and Batch Spectral Penalization (BSP) {{cite:7401c6725d019426cbed5a598c24c230cd81659d}}. The USFDA methods contain Source HypOthesis Transfer (SHOT) {{cite:ddd3444386c9218fdf37f89a7fbf407bbf90bca0}}, Model Adaptation (MA) {{cite:fabc3e6985ca0fd121109974c5108c6427cfbdc4}}, BAIT {{cite:242fc877a17182bf2cf1464d5e757cbce3b6abbd}} and Batch Normalization (BN) {{cite:c75b97703f7a5cdc7256e7b112154f160f4f188d}}. Moreover, source model only (SO) denotes using the entire source model for target label prediction, and JN only (JN(o)) denotes using the JN regularizer alone to fine-tune the feature encoder. It should be noted that SHOT is the baseline of our model, obtained by setting {{formula:923538d8-9ab1-43b8-982e-eac604d2d03e}} . To evaluate performance, we adopt the widely used accuracy as the measurement. The results of the comparison methods are taken directly from the published papers, since we follow the same setting.
| m | 2a2826983ca8dcc9dcf2b7ed1c9982d6 |
We presented a proof of concept that, through human-in-the-loop learning, we can train models to communicate relevant information to users under network bandwidth constraints, without prior knowledge of the users' desired tasks.
Our experiments show that, for a variety of tasks with different kinds of images, pragmatic compression can reduce bitrates 2-4x compared to non-adaptive and perceptual similarity baseline methods, by optimizing reconstructions for functional similarity.
Since we needed to carry out user studies with real human participants, we decided to limit the number of parameters trained during these experiments for the sake of efficiency, by using a pre-trained generative model as a starting point and only optimizing over the latent space of this model.
This can be problematic when the generative model does not include task-relevant features in its latent space – e.g., the yellow sports car in rows 7-8 of Figure REF in the appendix gets distorted when encoded into the StyleGAN2 latent space, even without any additional compression.
An end-to-end version of PICO should in principle also be possible, but would likely require longer human-in-the-loop training sessions.
This may, however, be practical for real-world web services and other applications, where users already continually interact with the system and A/B testing is standard practice.
End-to-end training could also enable PICO to be applied to problems other than compression, such as image captioning for visually-impaired users, or audio visualization for hearing-impaired users {{cite:63ab406dfe045338509a03d7c45eaee925944882}} – such applications could also be enabled through continued improvements to generative models for video {{cite:3cc89eec6d6c151e6935fb984a8ee24f4654eed6}}, {{cite:599963e4e56a28304775c3961dd635cc7ab996a3}}, audio {{cite:dfb5cd84edaff85db7f2bf2aea704d7a09388559}}, and text {{cite:26e854ef0e97b1759cdc408245481e6fc0580012}}, {{cite:442b2a7dd083c451c91a114c9497f76e03d662d4}}.
Another exciting area for future work is to apply pragmatic compression to a wider range of realistic applications, including video compression for robotic space exploration {{cite:6fc8c305360c37bf11fca6acc8ccc491bb02f90a}}, audio compression for hearing aids {{cite:b0dbc48240268ebea47467b58004cb0ea6bbb8f7}}, {{cite:9392d3384bf379928731cc618d91699f0e45072a}}, and spatial compression for virtual reality {{cite:eb10d8fdf23c70465b225001b09421ac048d59d4}}.
| d | 847d63c79a334037a19ef27f242977d3 |
We study the evolution of cooperation by quantifying the chance that a single mutant type, introduced at a random node, will eventually spread and overtake the entire population.
We assume that the population structure for strategy dispersal is strongly connected, meaning that for any pair {{formula:e0c9a9d9-4258-4f88-8c8b-277aec97f005}} there is a directed path from {{formula:3270d512-abb5-47b6-afff-680f9e5bc6fa}} to {{formula:75f05c3f-7de0-4b10-9077-feee5feba8fd}} in the strategy dispersal network. Letting {{formula:74cb7d1a-2fea-47c3-8233-7a30e53ec403}} (respectively {{formula:8d031b0f-0d08-41eb-9892-515df0c908e4}} ) denote the probability that a single mutant cooperator (respectively defector) eventually overtakes the population, we say that selection favors cooperation over defection provided {{formula:eadc8aa4-f6b8-478c-aedb-f9ce3a7152f2}} {{cite:506e7f4b1c9c8c77ceb530bdcbf2096a02990679}}.
| r | c1c7d9cb2fc1320670beb0bedb511cb3 |
We use conventional Machine Learning (ML) algorithms {{cite:430f629391e9fdc3a72137dc8b87d671979cfe4b}} like Support Vector Machines (SVM) {{cite:cd29ed35f0d98def960c009eaa47de121b150f87}}, Logistic Regression (LR) {{cite:6c19cc96405c732b8416b2f0ceaa44247df6887f}}, and Random Forest Decision Trees (RFDT) {{cite:8be561747cc33d081ba929c5220aab755cfe934e}} for the task and provide a comparison in Section .
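A minimal sketch of such a comparison with scikit-learn, using synthetic placeholder data in place of our features, is shown below.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the task's feature matrix and labels.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
models = {
    "SVM": SVC(kernel="rbf"),
    "LR": LogisticRegression(max_iter=1000),
    "RFDT": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f}")
```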
| m | 79ec58668a8cefbaa60bce087ea1d439 |
The problem with the {{formula:2e3e9313-d202-4edb-a052-22670d23beaa}} values of {{cite:4ac27be335b499a27f7b298ac0d5571cd9ca5d25}} is not the same as adopting {{formula:18a69a3f-509d-4f87-b6ee-d357b60bfa68}} mag in order to make all {{formula:3e6706fe-1181-487a-9fd2-1b0be22f7355}} values negative. His {{formula:74bfbca0-50c9-4bc4-a421-0aa268ec2499}} values are all negative, despite his use of {{formula:3efd27ac-259d-4413-a9e8-b1db5c35075c}} mag. In fact, his {{formula:ce6be1be-267e-4bd0-97b5-0695c4407088}} value is the nearest {{formula:0dfd319a-e1f0-4823-9805-338ca72dce9c}} among the other sources, which would wrongly imply that if one used a {{formula:416f3553-1186-4699-aa5d-b3c6d8661a4f}} value of {{cite:4ac27be335b499a27f7b298ac0d5571cd9ca5d25}}, one would obtain a very accurate luminosity (0.46%), despite the computed {{formula:a22c00f5-275f-453b-834b-480381c392c7}} in reality having a 9% systematic zero-point error. Most probably, the {{formula:15f3a546-6b54-479d-b60a-c1cf74105add}} values of {{cite:4ac27be335b499a27f7b298ac0d5571cd9ca5d25}} were calculated using the following definition of {{formula:465e17d8-1549-49d3-8595-2f4242220d54}} :
{{formula:59c6d65a-32a9-4049-ac0e-190e3e845926}}
| d | 76a99fa6339e9a55a26a71cf1b7f1e24 |
At the heart of the optimisation problems considered by all these
methods is a term depending on the {{formula:f5954ccd-ddba-47de-94f6-9b3a49fc35cf}} norm of the estimated
precision matrix. {{formula:29e455a2-7a66-4187-b0d2-2170a3bd15fd}} -penalisation-based approaches such as
lasso are popular for sparse regression, but they have a known
weakness: in addition to promoting sparsity they also push true
non-zero elements toward zero {{cite:55a46d87d25f558d050c20d38359a2aeda69c0e1}}. In the context of
precision matrix estimation this effect would be expected to be
especially strong when some elements of the precision matrix are large,
which happens for scaled covariance matrices when the covariance matrix becomes ill-conditioned.
This phenomenon occurs frequently when some of the variables are nearly linearly dependent.
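This shrinkage is easy to observe with an off-the-shelf l1-penalised estimator such as the graphical lasso; the sketch below constructs a nearly linearly dependent pair of variables, a setting in which the corresponding large precision-matrix entries are shrunk particularly strongly.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X[:, 4] = X[:, 3] + 0.1 * rng.standard_normal(200)  # nearly collinear pair

model = GraphicalLasso(alpha=0.1).fit(X)
# The large entries induced by the ill-conditioned pair are pushed toward
# zero by the l1 penalty, on top of the intended sparsity.
print(np.round(model.precision_, 2))
```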
| i | 158b28e34931e56d74e9c7965d8e1bca |
Following {{cite:2fe55c74feca77e08dca46126ab95a0002ec9cc9}}, {{cite:0980cb7d6f45d3ac49012c272c1362a30f2a95fa}}, we construct our constrastive learning framework with four major components: augmentation, GNN encoder, projection head and contrastive loss.
Let {{formula:5353dd75-efca-40f5-b36c-4c43ab3aabf1}} and {{formula:04af1315-f853-4292-8e5c-e8c94321c1ea}} denote the atomic and bond attributes of a material's crystal structure, respectively. Each crystal graph {{formula:5e9cab52-f0cb-495e-98a4-3ae4c1d3c351}} is first augmented into a similar pair {{formula:da7a315c-35f6-481c-891c-cb4841fb649c}} and {{formula:e9bfc9bd-ebed-47e1-ae11-53e48435d8a2}} where {{formula:09128922-b018-4e6f-bc1a-ccb6a6a6d825}} and the augmentations used to transform {{formula:e8089e94-173d-4416-b787-d1b52cd1232d}} into {{formula:37dcd2da-d69a-44e3-a261-6793d7c644f5}} are specific to the domain of crystalline materials (Sec. REF ).
A GNN encoder, {{formula:ca35c57f-29ed-4f6e-acec-c3577162ccc4}} , maps crystal graphs {{formula:7f491853-8a41-4343-b673-44633211057b}} into
representations {{formula:ecd605ff-4109-4090-b16b-12a3dd95b2a4}} .
For our work we use the CGCNN architecture as the encoder, following the same graph representation of
crystal structure as inputs {{cite:d24db2a3ebe45bc056cfa4a92f1a0e8dd0b2e097}}.
The projection head, a two-layer MLP {{formula:c9af052d-906e-4499-8e2c-7454f655047e}} , projects representations {{formula:1fe1ba18-5566-4f37-b242-4ede02c79fb9}} into 128-dimensional space to create projections {{formula:47a4d66d-586d-4eb6-bc42-f683c3981c7e}} . The addition of a non-linear projection prior to the loss function has been shown to improve representation quality {{cite:2fe55c74feca77e08dca46126ab95a0002ec9cc9}}.
The contrastive loss function seeks to maximize the agreement between representations {{formula:ec2f91c0-6fc9-4fb5-8233-f45c9ba78aeb}} augmented from the same crystalline material while minimizing the agreement among the rest of the pairs augmented from different crystalline materials. We use the NT-Xent loss {{cite:810ea8c5d5485a8b351543a1304f51a77051876e}}, written for pair {{formula:4d771e0d-a03f-4c1f-9ce8-9c5cbdf5ae98}} in a batch of {{formula:05639c87-a38a-4405-b986-e8cf972709ba}} pairs as:
{{formula:b7e00108-ff22-4c7f-a932-1a916365eae9}}
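A minimal PyTorch sketch of this loss over a batch of N projection pairs is given below; the temperature value and tensor names are assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent(z_i, z_j, tau=0.1):
    # z_i, z_j: (N, d) projections of the two augmented views of N materials.
    n = z_i.shape[0]
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)  # unit-norm rows
    sim = z @ z.t() / tau                                 # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    # The positive for row k is row k+N, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)
```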
| m | e357a9caea5e83b590af0a32576a3717 |
This paper aims to promote a novel contrastive learning objective {{formula:41eb4d27-2c2f-4027-94e6-d79d000923fe}} that overcomes the limitations of the widely employed {{formula:c3d1065e-344a-4e13-9973-b2cc65eebaaf}} . While in all experiments we performed our {{formula:40440a3a-a7fb-48f1-8fac-70318121c227}} outperforms {{formula:8532c5c3-0081-4239-9ccb-5b57549c4a12}} under the same settings, we acknowledge that there is still a noticeable performance gap compared to SOTA results reported in the literature. We want to emphasize that this paper is about bringing theoretical clarification to the problem rather than beating SOTA solutions, which requires extensive engineering effort and significant investment in computation that we do not possess.
For example, the {{formula:30b5654f-c452-4946-b7bc-d3d0507c8871}} paper {{cite:db8bf9c043de20809d0371458bdac0980de9c557}} carried out extensive hyperparameter tuning for each model-dataset combination and selected the best hyperparameters on a validation set. The computational resources accessible to us are dwarfed by such needs. Their transfer learning and semi-supervised learning results are transferred from a ResNet50 ({{formula:2ec2bc10-3319-4bad-9767-ff383870847b}} ) (or ResNet50) with 4096 batch size and 1000 epochs of training on {{formula:bb10c2ad-631a-47ea-9d7d-2e4c027ac2a9}} .
Our results posted here are transferred from a ResNet50 with 512 batch size and 100 epochs of training on {{formula:9af82887-9ddd-4d04-868d-2f6fdcab9acc}} and {{formula:554822cd-e91f-4281-bebc-ff41d22550cd}} . Also, we chose to use the same hyperparameters and training strategy for each dataset to validate generalization and present a fair comparison between {{formula:11a923e0-ef3e-4135-9a4b-27a1d0300a63}} and {{formula:a44065e6-fc15-427f-8f53-62f9c1ef398d}} .
| r | 0056930e78b0125b633ca77216e0ad3f |
To evaluate the performance of the designed networks, we resort to the indoor scenario `I1' of the DeepMIMO dataset {{cite:4940c2a43a3ea2366a33a2762ca73c7f43d9196f}}, which is widely used in DL applications for massive MIMO systems.
The BS 10 in the `I1' scenario is adopted as the RIS and is set as a UPA with {{formula:82498abc-c8a4-4079-b726-2c7c1efbba17}} ({{formula:daef05c5-f946-4515-abfb-515325dd55e1}} ) reflection elements.
The 300-th user in the 500-th row is assigned as the AP with {{formula:ee0c0ead-970f-40c6-bad8-c0a90daea0dc}} antennas in the form of a ULA, and the users are located in the region from the 1-st row to the 400-th row.
We set the carrier frequency as {{formula:44dee63c-648f-4272-982a-927048d39e63}} GHz.
For both the RIS and AP, the antenna spacing {{formula:8bc1d84f-7d15-48a1-a098-b5691c855801}} is set to {{formula:a4931281-88ce-42eb-aba2-cd20699927fa}} , with {{formula:c22c03a0-1dff-438f-94cc-16ddf7520dab}} as the carrier wavelength.
The number of the scattering paths for both {{formula:271f8048-3cf9-4621-961c-583ab82e0ab1}} and {{formula:51f93698-a266-4e2b-883b-dc6d3b97cce5}} is 5.
To generate the channel dataset, we first obtain {{formula:64c334bf-cd23-486c-b623-94a45a67db84}} , {{formula:17a0c827-0be9-4dd7-b69e-8fe651fb3c7a}} and {{formula:93ed2c76-f733-45e6-8d9b-8350c2147381}} with the location of users and AP.
Next, we determine the group size {{formula:f8d6d2b4-c373-4ed8-a0ad-85253f3aada3}} and pilot length {{formula:ec046018-27e2-4861-8753-650e3449a757}} to generate {{formula:8323c6ae-3a19-4fd2-a80e-7c8d504f22fd}} and {{formula:ee33f352-7d03-44e7-acfe-cbe18686aa53}} .
With {{formula:77e9695a-9097-4454-b436-9912f362259a}} , {{formula:980187b9-8b37-4f52-96af-9b12896f995d}} and {{formula:ae4fd16d-98a0-45f3-b8be-edbf325ec121}} , we further generate the input-label tuple {{formula:eb8d03e9-f5dc-4c70-9072-23910d01763b}} as a sample in the dataset.
Since each row in the user's region contains 201 users, the total number of the cascaded channel samples is 80400.
{{table:68360b69-2525-4001-85c7-3658154898e5}} | r | 637fa4f059bc250fbee273cb0184b220 |
This section reviews three typical safe reinforcement learning algorithms: CPO {{cite:73bfd1bb384e539053203140b316e8d6e2bd0155}}, PCPO {{cite:192349a363b473e13d8b8d763f12c0baa2ac0b01}} and FOCOPS {{cite:1bc3498f7cf57e0969a800ddefa59897f82106cf}}.
These algorithms also use new surrogate functions to replace the objective and constraints, which resembles the proposed CUP algorithm.
The goal is to highlight the contribution of our work.
| d | 61f1009673a8e2e58cab075e5b00e7c2 |
We now move on to proposing the methodology used to reconstruct the 3-D statistical picture of the wall-coherent turbulence, from which the geometric estimates for the representative eddy (of the AEM) would be extracted.
There have been several studies in the past {{cite:0b3e0351d56d69b70853cb358d0c5362744ec462}}, {{cite:299929757ccc4d9777b56e3331382f30bc2bc611}}, {{cite:3ca8c426bebc39ecfcd316a87538f401b9e4631d}} which have used multi-point datasets to estimate the geometry of the coherent motions, by computing mostly space-time cross-correlations.
These cross-correlations, however, represent cumulative contributions from eddies of various length scales at a specific spatial offset (say streamwise offset, {{formula:d292e5a8-4f4b-4319-95f1-de191e20e420}} ).
Here, since we are interested in estimating the 3-D geometry of individual hierarchies/length scales to be incorporated in the AEM, we compute the cross-correlations in the spectral domain to get a scale-specific estimate of the coherent structure geometry (i.e. as a function of the streamwise wavelength, {{formula:899599f6-cc56-4cf2-aa7d-d853fbe8b946}} ).
Given that our focus is on modelling the inertial wall-coherent motions coexisting in the outer region, we consider the cross-correlations specifically between {{formula:17a2b233-b8d2-4ba6-ac04-ee228dcf42ce}} -signals in the outer region (at {{formula:a9a2a5c1-8ccf-4775-93a9-4379460b876c}} ) and those acquired close to the wall (at {{formula:df8c2d50-f558-4926-842e-2d4cb6998ed0}} {{formula:75af2126-9af7-48cc-a50e-ed74c2e891c5}} 15), via the cross-correlation spectra ({{formula:11269992-a5e9-477d-92c2-9a2960cc95d5}} ) defined as {{cite:24fefba1464379d7ce06b5f08519bc128ab05650}}, {{cite:519544446db52ff07660d7f4ae6962f5790a9c08}}:
{{formula:69976529-32b6-4169-a3fc-7b2fa04085e8}}
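In practice such spectra can be estimated with Welch-type averaging; a minimal sketch using scipy.signal.csd is given below, where the mapping from frequency to streamwise wavelength through a convection velocity (Taylor's hypothesis) and all parameter values are assumptions.

```python
import numpy as np
from scipy.signal import csd

def cross_spectrum(u_wall, u_outer, fs, U_c):
    # u_wall, u_outer: simultaneously sampled streamwise velocity signals at
    # the near-wall and outer reference heights; fs: sampling rate.
    f, P_xy = csd(u_wall, u_outer, fs=fs, nperseg=4096)
    lam_x = U_c / f[1:]     # frequency -> streamwise wavelength (Taylor)
    return lam_x, P_xy[1:]  # drop the zero-frequency bin
```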
| m | 4da9fb8af412861212b517f6dbaf53d2 |
Because of the large granularity of filter-wise structured pruning, there is always the risk of pruning all filters of a single layer and thereby irremediably breaking the network. This is likely the reason why, in Table REF , the method of Liu {{cite:444fc568d0ffa491c21758c68436044b7acd59ae}} makes the network drop to an accuracy of 10% at the parameter target of 7.5%, while at 10% it still reaches 71.01%. Since SWD can adapt itself so as not to induce damage of this scale, the network does not fall to random guessing before a much lower target of 1%.
| d | e4e1ec863452cfaf0b4d58cdc1220189 |
Theorem 1 Theorem 2.2 (Chapter IV) of {{cite:eafcbfba2ba59548d9dc9fe9dd69b968488cfff4}}
Let {{formula:727730cc-c15b-47b4-8c1b-243b5fd5f45f}} and {{formula:f525efd9-0c83-4801-92e0-d3312852b7fb}} be defined by
{{formula:cb8e23a3-a334-4c2e-a4db-828b4f560222}}
| m | 186ee7766c32e2fac32d95a8d48f8b80 |
Gogna et al. 2017 {{cite:4f9ddf9953f26655667f48615af709970e10a754}}: SAE; reconstruction and analysis of biomedical signals; data from Andrzejak et al. {{cite:679219f58a4afe9b829b1e90079bfe77e3a88236}}, 10 participants (5 healthy and 5 epileptic patients)
| d | 2945bf5d61e293701998e60b9b8c3321 |
In this chapter, we considered a model of an adaptive network and its
fluctuations. We introduced a model based on an SIRS epidemic structure
which included transition probabilities between node states as well as link
dynamics. In this model, the link dynamics are a function of the
state variables, and since the state variables depend on the links, it forms a
closed feedback system between nodes and links. The model is an extension of the SIS model studied in
{{cite:60abb30a1744b7a94205794ea29d652536ff8474}}, which it contains in the limit of large resusceptibility rate {{formula:89fae8cd-f222-4c8f-9f32-d5cf7535b7fe}} .
The fluctuations of the model were simulated in two ways: the full system was studied via Monte Carlo simulation
of a finite population, and a low-dimensional approximation was studied using a Langevin simulation with
an additive noise term added to the mean-field equations.
| d | 52d9a3edb40663676db6627881093d66 |
Let us further extend our discussion from deterministic to stochastic methods for solving (REF ) when {{formula:4c580c0f-ea6d-435f-a9e9-157f8598b596}} is a finite-sum or an expectation function.
The stochastic approximation (SA) method was initially proposed by Robbins and Monro in the 1950s {{cite:408b8d26bff7aad2c4cef04fd912ff17133b2a56}}. It has become extremely popular over the last decades, as it is widely used in machine learning and data science; see, e.g., {{cite:c0a9f44a00ad06c32ef00b087ac43522d2b0106d}}, {{cite:76cc8ede7e974e3b682b8c79bb7ea06f5e527c01}}, {{cite:b622d51037ec21362c2408c4a923dba348447276}}.
| m | 6632a0b579be302eaa0a23ae0f24fcd8 |
Optical Flow
Optical flow models the apparent motion of individual pixels on the frame, attracting widespread attention {{cite:999b5a6108170bb340b578c00a606760b3b359f6}}, {{cite:3b1c9e92e2f60385625b30870556b2dd120e2c35}}.
The optical flow across frames usually reveals the motions of the human subjects, which is clearly useful for pose estimation.
{{cite:a346c3a3c093dca609f8545f1153a9bfbc034828}} combines convolutional networks and optical flow into a uniform framework, which employs the flow field to align the features temporally across multiple frames, and utilizes the aligned features to improve the pose detection in individual frames.
{{cite:74d1f004a5ad27979593dcedaca651293aa27a5d}} presents a Thin-Slicing Network which computes the dense optical flow between every two frames to propagate the initial estimation of joint position through time, and uses a flow-based warping mechanism to align the joint heatmaps for subsequent spatiotemporal inference.
{{cite:ee0fa7e9eabff5382a74ac0ceb0d7980dfe725f9}} focuses on human pose estimation in crowded scenes, which incorporates forward pose propagation and backward pose propagation to refine the pose of the current frame.
However, although the optical flow in these methods does contain useful features such as human motion information, it also captures undesired background changes.
This noisy motion representation greatly hinders these methods from achieving the expected performance.
{{cite:409b3baf507fb76cdd7e6b4fbcb92c89aea9f2b1}} proposes a novel deep motion representation, namely PoseFlow, which is able to reveal human motion in videos while inhibiting some nuisance noises such as background and motion blur. The distilled robust flow representation can also be generalized to human action recognition tasks.
| m | bcde4a462958a66ee57be900d054765c |
Table REF analyzes the correlation between the human evaluation results of Section and several common automated evaluation metrics for generation. Here, we consider BLEU {{cite:460db2befdf6d5f21a47d2cd9bba53829cc0fdf1}}, BLEURT {{cite:5ff05852166d9752c9b6c27b101759f38d2d8dec}}, BERTScore {{cite:d1fc2e63c034d706aedbc06696c54de6ea026647}}, and chrF {{cite:48506761c2649657c32e80398b3f38c78359076d}}. The chrF metric is a lexical-match metric similar to BLEU, but it is character-based rather than word-based and has been found to be more robust than other surface-level metrics {{cite:e9c165aa957884e73b27a2dfe913b4c4dc55c3cf}}. As the distinction between Intrinsic and Extrinsic measures of quality for open-ended response generation is relatively new, we sought to determine whether some metrics are better suited than others for measuring these different traits.
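The correlation analysis itself is straightforward; a minimal sketch, assuming paired lists of human ratings and automatic metric scores for the same outputs, is given below.

```python
from scipy.stats import pearsonr, spearmanr

def metric_correlation(human, metric):
    # human: human quality ratings; metric: automatic scores (e.g. BLEU, chrF).
    r, _ = pearsonr(human, metric)
    rho, _ = spearmanr(human, metric)
    return {"pearson": r, "spearman": rho}
```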
| d | 6a0fd8a4c9bdfc694580e44c785395a2 |
How do micron-sized cloud droplets grow? This question relates to the fundamental mechanisms that determine droplet-size
distributions in atmospheric clouds {{cite:a813466cdf1415a8aaa116e994f8f99d7bfba6b0}}, {{cite:3462744db5d3544623360f11919b14197e0b3433}}, {{cite:aa1f3541021afaaa4bf5a2c4c224792a8b42a6d2}}. In clouds with droplets of different sizes, droplet collisions
occur due to differential settling. This mechanism can cause rapid droplet growth {{cite:91f6e74e0e354fbe7002d88cf31f5c31b6e163e9}}, {{cite:3e0d0dbbe2640eb0e1691099eeb6848ab6710bc1}}. An open question is how this process is initiated, that is, how droplet-size differences develop in the first place. Saffman & Turner explained that micron-sized droplets of similar sizes can collide if turbulent strains bring them together {{cite:01d5e75967ad03c9f3e2726b033d8576f2c4cb2f}}, but this process is very slow on average. Since the sequence of droplet collisions in a cloud is random, the collision times are essentially Poisson distributed, and fluctuations may nevertheless result in some droplets that grow very rapidly {{cite:3b312ade0302eefd13b03ee4f624bbc955245fcb}}.
| i | 212ccd16169a06eee43189eeaa293cfc |
Tucker decomposition factorizes a given tensor into the product of a core tensor with smaller dimensions
and a series of factor matrices.
The best low-rank Tucker approximation of a tensor
was discussed and studied in {{cite:fc89dc6f818a7cd68d7826d086f0e7cc43c45659}}, {{cite:375760950053d1248ebed8af2aa74fa7ee81b8d4}}.
Nonnegative Tucker decomposition (NTD) is a Tucker decomposition in which all entries of the core tensor
and factor matrices are constrained to be nonnegative, providing an additional
multiway structured representation of tensor data {{cite:bbe7e557c407d00d8a0ceee2b9625531aff529eb}},
which has a wide range of applications including image denoising and hyperspectral
image restoration {{cite:564e5d2e858f6f20264666f7bc9bcf2333b88b5d}}, {{cite:581f50dba24fcc3b13fa4d2070d731f560631b9b}}, {{cite:370a412eebebb3bc3fa50699545f0f127c2a275c}}.
Moreover, Mørup et al. {{cite:27a5ae9395d81cd0eb63cfec2254485f789d86cc}} proposed two multiplicative update algorithms for sparse NTD, for observations with additive Gaussian noise and for Poisson observations, respectively,
which yield a parts-based
and more interpretable decomposition.
Liu et al. {{cite:f1ff464ad64267c50c2671f97593d8a9bcbaf328}} presented a
fast and flexible algorithm for sparse
NTD with a special core tensor based on columnwise coordinate descent,
where the observations were corrupted by additive Gaussian noise
and the factor matrices were nonnegative and sparse.
Besides, Xu {{cite:1d51ba7277972045951c98e684a2b97088ebfa11}} designed an alternating proximal gradient algorithm for
sparse NTD and completion with global convergence guarantee,
where the observations were corrupted by additive Gaussian noise.
However, no theoretical
results on the error bounds of models
for sparse NTD with missing values have been established in previous work,
and a general class of noise models
for sparse NTD and completion has not been studied in the existing literature.
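For orientation, a basic NTD (unregularized, fully observed, Gaussian-noise setting) can be computed with off-the-shelf tooling; the sketch below uses TensorLy's non_negative_tucker, which does not implement the sparse or missing-value variants discussed above.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

X = tl.tensor(np.random.rand(20, 20, 20))     # nonnegative data tensor
core, factors = non_negative_tucker(X, rank=[5, 5, 5], n_iter_max=200)
X_hat = tl.tucker_to_tensor((core, factors))  # low-rank NTD reconstruction
print(tl.norm(X - X_hat) / tl.norm(X))        # relative reconstruction error
```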
| i | b20c539e13f791c84b04ef547a30730e |
At the moment, given our ignorance of possible sources of the SGWB, it is wise to remain
non-committal about the relative speed {{formula:01d28f7e-55cd-4e43-a3af-91c869b7b875}} and about the intrinsic properties of the SGWB.
In order to forecast prospects
of detection of Doppler anisotropies, the first step is to investigate the response
of GW experiments to their possible features.
Previous articles studied in detail the response of GW detectors
to anisotropies, starting with {{cite:955704c4ac595cbc9011f0af2447e53e308d84ae}}, {{cite:b5934e6f1e6d0a63fbde1acbe1fc09d09c85d06d}}, {{cite:13b3e0d4c9eabe9dce4483f845f94d7afd8e8d08}}, {{cite:8d5f6e142364d1380ed72453322c6f21e353d6d1}}, {{cite:84ce08efa63e246f2ebbf671135994941809dd61}}, {{cite:d002411220b374f9862a0e43a56b07ea5e715564}}, {{cite:6b18711788fe162a92710184c23a16d12302da4b}}, {{cite:a62494219af8966454c39db3af658d4bac9ee46e}}, {{cite:3792331af37c864224899116997117fe5a98dd24}}, {{cite:57b4dd7e78ed42637e45dc3b4e9e875b82471a55}}, {{cite:cc64de74922cda957d7948746ca454eef0edbee8}} (see {{cite:a2149e0a277d5cc8b973d470082b92ae57b38a96}} for a general review). Usually, one assumes a factorizable Ansatz for the
quantities describing the anisotropic signal: the signal is described as a contribution
depending only on GW frequency times a contribution depending only on GW direction. In general, however, such an Ansatz
is not suitable for describing Doppler anisotropies. In fact, building on {{cite:fee1845c129169a2960b4b401e41d3733197338d}}, we show
explicitly that if the SGWB slope changes within the detector frequency band (a very common
possibility for both astrophysical and cosmological sources; see e.g. {{cite:dc6f4dbee475a165eeb522d2d1b26c72cf76f09a}} in the context of LISA), the aforementioned
factorizable Ansatz is violated.
| i | ac00c4f4bcb73d8db1946b290b573bc5 |
At low frequencies, the GW spectra behave as {{formula:55843c6c-3304-4711-9608-1cbdf8f11bb6}} , in accordance with expectations based on causality arguments, which use the fact that the anisotropic stress of a causal source cannot be correlated at scales above the horizon size at the time of production {{cite:7243f3503e692796430d38d5eacdbf6f7e686be8}}, {{cite:e68c3756f2f1ab09c55abe1c3bf9fd4ae5e664c2}}.
At high frequencies, the spectra fall like {{formula:d73ace4a-a93b-40f4-afad-5f52b1dd1b53}} , allowing a simple distinction from the much steeper falling GW background generated by oscillating {{cite:875e4a17aca5301e038fa81322b08bc2e9ac6b42}}, {{cite:b4589e7efda542d7e3c659f85e66953f599f8f65}}, {{cite:4a94c165c152d4ef861572245bc6a05b90739c76}}, {{cite:d2c81719ccacf85ca4038ddd233caaea92f68b57}} or rotating {{cite:b3d19ea4ab25f961dad28b3a172d155c7b880666}} axion-like fields.
It should further be noted that, when the peak position is fixed, changing {{formula:b77e06f4-42f0-47e0-9d3d-ecef78fbb055}} barely affects the UV tail, while the IR tail goes as {{formula:c78815fe-e880-4aec-9bd5-2dc64dfca026}} (cf. fig:spectralShape), potentially allowing one to disentangle the reappearance temperature and {{formula:aba306d5-e604-4a3e-8eb7-f2cf8152630b}} in the peak frequency eq:fPeak, and thereby facilitating the determination of the relaxion parameters from a hypothetical observed signal.
Larger values of {{formula:6b31f663-ca70-400b-8417-0725b8035fc8}} further result in a flatter peak, although this may be an artefact of our analytic approximation, cf. eq:SpectralShape.
{{figure:21005be5-9a3d-4683-88e1-a8a388e99406}} | d | 944a5ed36eb7698ed16c4e0384558ca9 |
Finally, while RG theory provides a way to predict which combinations of task, optimizer, activation function, and architecture can have winning tickets transferred between them, what the minimal density of a transferable winning ticket is remains an open question. In order to address this, finite-size effects {{cite:e9ce3833a527e4cf4618eb3d826d04f56c7f9039}}, differences in symmetries, and non-linear corrections to the IMP flow {{cite:0560a03b9bb0faaf5761150c394d1ba0026f4622}} may need to be taken into account. This interesting and important question will be the focus of future work.
| d | f52554eab14049545ed7695548b0a205 |
Our approach is achievable for other blockchain networks as well, with the mechanism tuned to the respective consensus methodologies. The Ethereum blockchain network is transitioning from PoW to PoS consensus {{cite:6b8fa22835ebe7c5f90a7535f5382fd971a0ba18}}. The transition is expected to proceed by enabling the use of Casper {{cite:1733ee9c02db793d56273fcaceb0591482ed46cb}}, a hybrid PoW/PoS protocol, in which PoS is applied to finalize blocks after a fixed number of blocks are mined through PoW. This change aims to reduce the chances of illicit activities, such as invalidating blocks, forming a parallel chain, and disrupting communication, by forfeiting the deposits of the offending validators. However, there is still no provision to stop social-engineering-based illicit activities such as phishing, money laundering, and gambling. Applying the reputation model based on transaction data paves the way for identifying the entities that use the blockchain for such illicit activities. This identification helps curtail the power of the proposer and validator accounts involved in such social-engineering-based illicit activities within the blockchain network.
| d | f65c0e8c36f0ab7aa3af8c1029b7b89b |
A phase-folded plot of both the transit and the radial velocities is shown in Fig. REF for the eccentric two-planet model. The radius for TOI-1422 b was calculated with the transformations provided by {{cite:caf51cf6618610828593ae521494f09c1de48dbe}} and, using the stellar radius of Sect. REF , its revised value turns out to be {{formula:49f3acff-ce4e-49b5-8c6a-8420ab47e789}} . Using the stellar radius from Table REF , we derived the mass of both objects to be {{formula:77479a9c-91a1-4487-8fa7-31a73adc9b93}} and {{formula:15301290-ec8c-4512-8106-6c317ae6db43}} . Their final parameters are reported in Table REF . An independent joint analysis of the HARPS-N radial velocities and TESS photometry, after the transits have been normalised through a local linear fitting, was also performed with a DE-MCMC method {{cite:991947940cbee39fe0306ecae3f73ad01b1d5b6b}}, {{cite:1ecb6340999724674adf06170cb0d8837939dad1}}, following the same implementation as in {{cite:be7c66d51e5309a19f46cdf415e340f14bc62fdd}}, {{cite:1a6320a89198a5ea53e1899ba293c682e7213667}}. The obtained results are consistent, within 1-{{formula:43f2ccd9-fad5-4572-a32b-348f0ba597fc}} , with those reported in Table REF .
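For reference, a minimal sketch of the phase-folding operation behind such a plot; the period, epoch, and time stamps below are placeholders, not the TOI-1422 values:

```python
import numpy as np

def phase_fold(t, period, t0):
    """Fold observation times on `period` around the mid-transit epoch t0;
    phases are returned in [-0.5, 0.5) so the transit sits at phase 0."""
    return ((t - t0) / period + 0.5) % 1.0 - 0.5

t = np.linspace(0.0, 27.4, 2000)               # placeholder time stamps (days)
phase = phase_fold(t, period=13.0, t0=5.2)     # placeholder period and epoch
order = np.argsort(phase)                      # sort before plotting flux vs phase
```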
| r | d7716a5bcd326c61fcce6266eb98448b |
Some of these multiscale methods, such as the heterogeneous multiscale method (HMM) {{cite:ef99aa03304f24a4efe18cdd36cffb6e69d0de08}}, {{cite:d8fbcad47c73deff3b0ac4af98732d7ef5be6268}}, {{cite:4a61f33aeed17eb07f4f762570d17c4cca56d78f}}, are based on ideas from mathematical homogenization {{cite:ba99f8a929c659f09ae70d2decaad9695c422a27}} and aim at computing effective coefficients for an appropriate coarse-scale equation.
In contrast, approaches, such as the multiscale finite element method (MsFEM) {{cite:e28391b2025f1bbb39977b9437ade0a31dd8dd6e}}, {{cite:b3c0bf0b3f3384009968bca42cfca841d504ba72}}, {{cite:666ad7455b111fcea524cf6e41eae2df42369331}}, its generalized variants (GMsFEM) {{cite:efe1d2af12c4a6dd5e165e6f6f269d4423af0df8}}, {{cite:c0f8bb1020244e518f08d68f2544d458dee9714f}}, or the generalized finite element method (GFEM) {{cite:01166efacc00e979078c3a6c8d401f35a9f4b1f6}}, construct coarse-scale elements that incorporate the local fine-scale features of the solution and then approximate the solution in the space spanned by these multiscale elements.
Many multiscale methods, especially the ones directly derived from homogenization theory, are designed for specific multiscale problems and rely on strong assumptions about the structure of the problem (for instance, local periodicity).
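As a concrete example of an effective coefficient, consider the classical 1D periodic model problem, where the homogenized coefficient is the harmonic, not arithmetic, mean of the oscillatory coefficient (a standard textbook fact, independent of any particular method above):

```python
import numpy as np

# 1D elliptic problem -(a(x/eps) u')' = f with periodic coefficient
# a(y) = 2 + sin(2*pi*y). In 1D the homogenized coefficient a* is the
# harmonic mean of a over one period, not the arithmetic mean.
y = np.linspace(0.0, 1.0, 100001)
a = 2.0 + np.sin(2.0 * np.pi * y)

a_arith = np.trapz(a, y)             # ~2.0: the naive (wrong) effective value
a_harm = 1.0 / np.trapz(1.0 / a, y)  # ~sqrt(3) ≈ 1.732: the correct a*
print(a_arith, a_harm)
```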
| m | 1d9649da937f3fe4479112916d2f04ab |
Turbulence and multiphase flows are two of the most challenging topics in fluid mechanics and when combined they pose a formidable challenge, even in the dilute dispersed regime {{cite:ab759730e2964d44c759c89dece07c9dce65a104}}. The focus here is on liquid flows laden with disperse bubbles, which can be particularly challenging since the bubbles can strongly alter the liquid phase turbulence {{cite:8e40cacc055eac69cc751ec5e143cffd5b377109}}, {{cite:97ed0a108cf416b1167c34777fa969aa44f5a5ba}}, {{cite:edf1232be32dc62d06abdbd540bfc5f754cd75fd}}. In particular, the bubbles can modify the turbulence due to production effects arising from the bubble wakes {{cite:f138a3b5dcd62b92d1fc6bed71400231d31aa367}}, {{cite:2f874927674558a8e84bbd7bc5a9bde37a2046e2}}, enhanced local turbulent kinetic energy dissipation rates in the vicinity of the bubble surfaces {{cite:686df00418fd6a672e2be629f808c78687090c78}}, {{cite:d0e13ded5619a1843669daaf04f82a81517b0f5d}}, and modulation of the liquid mean velocity profile due to interphase momentum transfer, resulting in an alteration of shear-induced turbulence {{cite:ac2d62581407484994292f67c9c8f329fff160b6}}, {{cite:b703b49e8db63fcfcc97d344ced7cddb71d521e7}}, {{cite:ca74dbee7b5c6320095b13c8fc3108ffa2a33fb3}}. {{cite:96ae6d60fb27baa1e9114380a08fcdc0009da518}} highlighted particular ways in which the classical scenario for single-phase turbulence, based on single-point statistical analysis, is modified due to the bubbles moving relative to the fluid. Turbulence arising from this relative motion is often referred to as bubble-induced turbulence (BIT) and its effects can be captured in the Reynolds-averaged Navier-Stokes (RANS) modelling framework through the inclusion of additional source terms in the relevant transport equations {{cite:4a7d825a2bb42431fbbe8c6b1e8f317eaa397c47}}, {{cite:995666025fbcfa5c27917ff9fcc72cd313770e58}}, {{cite:c5098a0e44da973fd8208c23df9fde7ad2f3bb27}}.
| i | 38afc5d44d8920fa05f1aaa0a878c638 |
We also developed a toy model that reproduces the features of
angular momentum evolution. This model has adjustable parameters with
clear physical interpretations. The relation between the appropriate
values of these parameters and the physical variables requires more
detailed and extensive analysis, which is currently underway. Apart
from studying different initial conditions, we also plan to analyze
the components of the torque parallel and perpendicular to the angular
momentum separately. The nature of these torques can be very different
{{cite:f923506c0e29b0b3eac904f845efe718e63f47b0}}, {{cite:bab40d36bb327425fb9a463a49114c25ba0b0d9a}}, so a separate analysis
should lead to a better understanding.
| d | faa1c9f862648d09744e062f56cc6063 |
4) Learning performance compared to SOTA. To further evaluate the performance and reproducibility of the results of the proposed Dirichlet policy, we compare it to two alternative RL algorithms: the original SAC {{cite:9034ee49c44d8d88df684e8abede191ec59c7a7b}}, one of the state-of-the-art reinforcement learning algorithms, and the deep deterministic policy gradient (DDPG) {{cite:fc6ab26182bdc721aeecf43e6b3c99230456de38}}. We train all the agents over three different seeds on the four-battery case study. As shown in Figure REF , the proposed Dirichlet-SAC (DSAC) shows considerable reproducibility and superior performance and convergence speed compared to the original SAC and DDPG.
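For reference, a minimal sketch of what a Dirichlet policy head looks like; the layer sizes and the softplus-plus-one parameterization are illustrative assumptions, not the exact architecture used here:

```python
import torch
import torch.nn as nn
from torch.distributions import Dirichlet

class DirichletPolicy(nn.Module):
    """The network outputs concentration parameters alpha > 0, and actions
    are sampled from Dirichlet(alpha), so they live on the probability
    simplex by construction (e.g., power-split fractions across batteries)."""

    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs):
        # softplus + 1 keeps concentrations above 1, avoiding densities
        # that blow up at the simplex boundary.
        alpha = torch.nn.functional.softplus(self.net(obs)) + 1.0
        return Dirichlet(alpha)

policy = DirichletPolicy(obs_dim=8, act_dim=4)
dist = policy(torch.randn(32, 8))
action = dist.rsample()            # reparameterized sample for SAC-style updates
log_prob = dist.log_prob(action)   # used in the entropy-regularized objective
```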
| r | 7efd07d37acdb479deaa7715fe0e03b4 |
Table REF shows quantitative results on the JAAD and TITAN datasets (results from training stage 2). We compare our method (GPRAR) with other methods in three different observation modes: noisy, pre-processed, and ground truth. In the noisy (raw) mode, the observed data (poses and locations) are the outputs of a pose detector. In the pre-processed mode, the missing data are estimated using a KNN-imputer, as in {{cite:a284af41ba6d85110b2595be2d2bcea603250659}}. The ground-truth observation is the complete pose data with no missing joints. Our model outperforms the other methods on the JAAD dataset in all three scenarios. Specifically, our prediction results are 50% and 22% better than FPL in the noisy mode on the TITAN and JAAD datasets, respectively. In the ground-truth mode, our model outperforms the others on JAAD and produces results very close to TITAN's. We note, however, that the TITAN method uses IMU sensor data as additional features for prediction, while our method relies on image data alone.
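A minimal sketch of the KNN-based imputation step for missing pose joints; the array sizes and missing rate are illustrative, and scikit-learn's KNNImputer is one standard implementation:

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
poses = rng.random((30, 36))                    # 30 frames, 18 joints (x, y)
poses[rng.random(poses.shape) < 0.1] = np.nan   # simulate missed detections

imputer = KNNImputer(n_neighbors=5, weights="distance")
poses_filled = imputer.fit_transform(poses)     # NaNs filled from similar frames
```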
| r | 1d1be2ec3c8970e62f369a23be01a760 |
Please note that the results listed in this section do not correspond to our best results, as they are obtained using a randomly initialized set of weights, as opposed to the best solution selected out of multiple runs {{cite:c506f24c1e17db5e985b74bb5774647353dc03d9}}.
| r | ffc2140d7c97d5132005a0f9973b3257 |
The last situation we have considered corresponds to {{formula:4e090881-7206-4c33-bafb-f4a61c0e4d44}} , with transversal splitting on.
This parameter measures the frequency of transversal splitting and is expected to increase with {{formula:90749a9c-e30e-4ee1-a249-391e32a0d5d1}} .
Accordingly, the system can change from one-sided when {{formula:98b57b3d-881c-40f9-9ebc-1e79185671d5}} is zero or small, to two-sided when it is large.
The transition has indeed been observed, and the mean-field predictions were well verified far from the transition point.
Unfortunately, while the effect of fluctuations close to that point was obvious, strong size effects have prevented us from approaching it and evaluating critical corrections.
This is the subject of on-going work within the framework of finite-size scaling theory {{cite:c42fb50cf796014f5fda796ab93b5f7337b96ec3}}, {{cite:32376d83c1f4f43ea6bfeb9393ba90c60beb47cd}}, {{cite:57ebf54f22fb89055a3d19149a60c741755a09fb}}.
This follow-up should allow us to establish the universality class to which this transition belongs.
Here, the left-right symmetry of localised turbulent bands with respect to the stream-wise direction is reminiscent of the up-down symmetry of magnetic systems at thermodynamic equilibrium, which leads us to conjecture the relevance of the {{formula:0609bd79-2efa-4783-a231-3858952d22a4}} Ising class {{cite:c74d782d7b3fb326cf235c2dc3ef7d3cf62b52ab}}.
This class also appears applicable to coupled map lattices with the same up-down symmetry when updated asynchronously, one site after the other, close to randomisation by thermal fluctuations.
In contrast, another universality class is obtained with synchronous update {{cite:57ebf54f22fb89055a3d19149a60c741755a09fb}}.
Here, the situation is unclear. On the one hand, configurations are treated as a whole in a simulation step, which tips the scales in favour of a synchronous-update model (in line with what is expected for a problem primitively formulated in terms of partial differential equations). On the other hand, spatial correlations generated by the deterministic dynamics governing the coupled map lattices are weakened by the independence of random drawings at the local scale, which can be viewed as a source of asynchrony in the probabilistic cellular automata.
In its application to the symmetry-breaking bifurcation in channel flow, this uncertainty is however only of conceptual importance in view of size effects: owing to the large and unknown time-scale rescaling that allowed us to pass from flow structures to local agents in the model and to the narrowness of the region where critical corrections are expected, the mean-field interpretation developed in {{cite:2c551d39d11324219c5d60d5533cf8150a377478}} appears amply sufficient.
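To make the synchronous/asynchronous distinction concrete, the following is a minimal sketch of a noisy majority-rule probabilistic cellular automaton on a ring, with both update schemes; the rule, noise level, and system size are illustrative assumptions, not the channel-flow model analysed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state, p_err, synchronous=True):
    """One update of a noisy majority rule on a ring: each site adopts the
    majority of its 3-site neighbourhood, then flips with probability
    p_err. Synchronous: all sites update from the old configuration at
    once; asynchronous: sites update one at a time in random order."""
    n = len(state)

    def local(s, i):
        maj = int(s[(i - 1) % n] + s[i] + s[(i + 1) % n] >= 2)
        return 1 - maj if rng.random() < p_err else maj

    if synchronous:
        return np.array([local(state, i) for i in range(n)])
    s = state.copy()
    for i in rng.permutation(n):
        s[i] = local(s, i)
    return s

state = rng.integers(0, 2, size=256)
for _ in range(100):
    state = step(state, p_err=0.1, synchronous=True)
print(state.mean())   # magnetisation-like order parameter
```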
| d | 11f39f6971269bbdf1a024665b5d2274 |
The proposed filters and the Sobel 3x3 {{cite:e9185016b09d38d9d731dbf3f0745b8a5ba36c1e}} used in the Canny algorithm {{cite:bf467bf5f20e5e4570915eaf53fb76ba2ad331f0}}, {{cite:92290d4ec53f5cdc2d8049ce39d30af517600192}}, shown in Figure REF , REF , give good results in maintaining a balance between filtering out edges that can be considered noise and preserving edges that can later be used for contour and facade detection.
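A minimal sketch of the Sobel 3x3 filtering and thresholding step; the threshold value and test image are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel kernels; the gradient magnitude is thresholded to
# keep strong contour/facade edges and suppress weak (noise) responses.
Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
Ky = Kx.T

def sobel_edges(img, thresh=0.25):
    gx = convolve(img, Kx)
    gy = convolve(img, Ky)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12          # normalize to [0, 1]
    return mag > thresh               # binary edge map

edges = sobel_edges(np.random.rand(64, 64))   # stand-in for a facade image
```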
| r | 291b8edcbdf1482dbf7b0ab5d479b072 |
Even though this paper focused on the simplest toy scalar LQG problem with two controllers, the essential difficulty of decentralized problems, nonconvex optimization over an infinite-dimensional space, was still present, and we were able to finesse it by taking an approximation approach.
We believe the approaches and techniques developed in this paper will also be useful in more
general problems with vector states and multiple controllers.
Moreover, in the process of such generalization, we expect to find a closer relationship and parallelism between wireless information flows and control information flows. For example, the notion of computation over communication channels {{cite:3cd7d8f1583b80c217c1b13cbef6d4870bcd1f73}} or interference alignment {{cite:7b75520603bc2cd9072ad09978420982c92baa5c}}, {{cite:3c90d18d1a1d467a57979588e6107c4ed14a4676}} has to be properly understood in control contexts. Above all, by solving the problems only approximately, we “may” be able to make a breakthrough on this long-standing open problem, the decentralized LQG problem.
| d | 97655471ed5674d0558b0196e5d5a23e |
Temperature dependence of the amplitude of the oscillations at 210 kOe follows the standard Lifshitz-Kosevich (L-K) expression {{cite:c6d9032cd55eaf9899df9bd2f557dc57edaa6bce}}, {{cite:e2de2d1a4488e53ded7217881e3b8ef86bbe1b90}}, as shown in Fig. REF (g) by the continuous curve.
The value of the effective cyclotron mass ({{formula:b411845c-f4fc-4dc0-91aa-139bf46967b5}} ), as obtained from the fit, is 0.14 {{formula:36159313-2c8f-42d3-8cb6-8e72b92d3851}} , where {{formula:3c6971ca-028d-4de7-95d2-216e82f4166a}} is the free electron mass. Thus, the Fermi velocity is obtained as 2.08 {{formula:52d5072a-5e24-41ad-af73-c6e9e47e681a}} m/s using {{formula:10a6728e-c331-445c-98c3-f1b283093c1e}} , which provides the value of the Fermi energy, {{formula:6d797a85-5149-4acc-84ae-88e84845bb9c}} 17.34 meV. The slope of the semi-log plot of {{formula:1b88e664-f2ce-41a6-8c8c-321624043747}} with {{formula:94c46895-304e-479d-a10c-6801b18adba8}} provides the Dingle temperature, {{formula:716d1ca3-66a9-412f-93ae-8f7fe092966f}} = 11.99 K, as shown in the inset of Fig. REF (g). The lifetime ({{formula:6e3d46da-2ee7-4fb5-b0c9-26b779150fce}} ) of the surface charge carriers is obtained to be 1.01 {{formula:a1911d6c-5ccc-4689-9aee-310f1426cd53}} 10{{formula:6198c7af-8bd7-41a2-9e4f-bacb4689183e}} s using {{formula:fbefcb08-9033-481e-aa35-67a5e352fa39}} , which provides the mean free path {{formula:254fea8a-19f8-4429-adc5-711df3b160e2}} and mobility {{formula:5e0cf5f5-7a09-4c51-a997-f78a0be38f99}} as 21.14 nm and 0.80 {{formula:3b6710bd-6cdc-4b6e-bce6-49ac864ea344}} cm{{formula:a6c9e77d-3f57-463d-8ad2-3169bdf05daf}} /V s, respectively.
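For reference, a minimal sketch of the L-K thermal-damping fit, using the standard form A(T) ∝ X/sinh X with X = 2π²k_B T m*/(ħeB) ≈ 14.69 (m*/m_e) T/B for B in tesla; the data points below are hypothetical stand-ins for the measured amplitudes:

```python
import numpy as np
from scipy.optimize import curve_fit

B = 21.0                                    # 210 kOe = 21 T
rng = np.random.default_rng(0)

def lk_amplitude(T, A0, m_eff):
    """L-K thermal damping; m_eff is in units of the free-electron mass,
    so X = 14.69 * m_eff * T / B with T in kelvin and B in tesla."""
    X = 14.69 * m_eff * T / B
    return A0 * X / np.sinh(X)

# Hypothetical (T, amplitude) data standing in for the extracted amplitudes:
T = np.array([2.0, 3.0, 5.0, 8.0, 12.0])
A = lk_amplitude(T, 1.0, 0.14) * (1 + 0.02 * rng.normal(size=T.size))

popt, _ = curve_fit(lk_amplitude, T, A, p0=[1.0, 0.1])
print("m* =", popt[1], "m_e")               # should recover ~0.14
```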
These values are close to the values for Bi{{formula:d97e68f8-8bd8-4f89-aa60-0ff0d25f32a0}} Se{{formula:cd2d9e2f-760a-4288-aeaf-7617602da634}}{{cite:f437c4ae09bf0f043af13295fb8f42e681a5eeb2}}, {{cite:c96f44b4976b5829e6267f8ce515bfb4528d75ef}}.
The {{formula:ca56a464-3cc7-4d12-8bd9-732106154f26}} th maxima observed in the MR-{{formula:34a38578-96c8-4d8c-8b42-5bebb80451f2}} curves are plotted against 1/{{formula:9a95bdf9-b16c-4a4f-bb3b-779a57a197e5}} in Fig. REF (h). The extrapolations of the linear plots for both the {{formula:60d885d6-c4b7-4451-bbed-54375b6204b3}} and {{formula:e8b89e4f-b442-40df-9952-7f2741aaa8e0}} components, representing the Landau level fan diagram, meet at {{formula:7686093e-b970-4467-a77d-331c8995e697}} = 0 with {{formula:f4c392f4-c576-4871-b207-ae83288dba87}} = 0.40(6), which is close to the value of 0.5 expected for Dirac particles,{{cite:3d55e712f676d9e72144ac3bed02ccb6cecfd36a}}, {{cite:46ee4a84cd19cf3693c228dc533e38e3f1a17308}}, {{cite:9e625cf78ada96518e63afab8209ec3b382c2927}} and consistent with the results for pristine Bi{{formula:8ee52121-afba-4354-9cd9-18d0e22ecb44}} Se{{formula:89a33efd-8950-4626-8230-a6a745f8c552}}{{cite:f437c4ae09bf0f043af13295fb8f42e681a5eeb2}}. Our results suggest that the topologically nontrivial states are retained even for nearly 10 % Sb doping in Bi{{formula:a7bf3485-5536-4e06-a7be-9bce0e2041ee}} Se{{formula:26448440-d9b9-42b9-850e-84c80322e33b}} .
| r | 678b2143c4a737cc887be9b38e492b64 |
Our results are of particular importance for non-equilibrium phase-separating systems. By combining previous results from the mathematical theory of CRNs {{cite:c264fe6c5c9ccc8d8bb8efa18245e8808c17a38c}}, {{cite:c8c958da8441e4755a4b8039e387c1da4cbd3ea4}} and concepts of non-equilibrium thermodynamics {{cite:398e206c1da76599884a923bdc58e617bba697cf}}, {{cite:784f4e0941fbc97440d5f5eee950255ba8c20e58}}, we found that the resulting complex-balanced RD system cannot sustain diffusion currents at steady state, see constSS. Since, in many cases, diffusion currents are required for pattern formation in reaction-diffusion systems, breaking complex balance is a necessary condition to obtain such patterned steady states, at least when interactions are modelled in a thermodynamically consistent way unlike, e.g., those in Refs. {{cite:876c874c42764c2aa86abb34e1d0146edc53fda9}}, {{cite:5d73c5f93c90aad0bd3161bd22ef129e27655a42}}.
In this regard, complex balance can be broken in two ways. First, by choosing a suitable network topology that allows for a steady state which is not complex-balanced, as in Ref. {{cite:31658eb621b81ecdb5a379a3c44fabaa451b264f}}. Second, in a system where different phases coexist, by allowing the reaction rates to depend differently on the local environment: for example, in Ref. {{cite:b1e6b11a1d2bf145ce1a42a6b4cb847ae5b4d81f}} a patterned steady state is produced by allowing one (and only one) of the reaction constants to depend on the concentration of an enzyme which localises in one of the phases. Mathematically, this violates one of the necessary conditions for our results to hold, namely {{formula:3460301e-cf20-4b99-ae9b-c43ce66e21e8}} (see Section ), thus allowing for more general steady states.
| d | b481a1a813d20f433d7c0307360872de |
Before introducing our method for reducing communications, we begin with a brief discussion of the classical HB method {{cite:f02591d93f238bfa429736463f332c26567a1730}}, a popular iterative optimization algorithm, and focus on its parameter update rule in a distributed system with a server and {{formula:71158e51-8a40-4a78-82e4-f68fa291c46c}} workers. Specifically, at iteration {{formula:b49ebba5-d7ee-45df-b0ec-2e5591777b0b}} of HB, the server broadcasts the current parameter {{formula:4781c77c-7626-45a8-bba7-5f1b2891be9b}} to all workers; each worker {{formula:3532e8f4-1c8e-4ff0-a7ac-6418dffa8b1e}} computes the gradient {{formula:6a1e407b-94f5-40ee-8c2b-c8c9f6302243}} based on its own local function {{formula:14cc603a-1f04-4f7b-952b-1f11c4535477}} and transmits {{formula:81e9b3d8-5186-49e5-941c-e1da6045ac34}} back to the server. Upon receiving {{formula:00ee4a9e-3285-4fe0-8151-433f3b09b608}} from each worker, the server updates {{formula:0082633f-0489-4ff3-878f-2cee128d838d}} as {{cite:f02591d93f238bfa429736463f332c26567a1730}}
{{formula:d2fa62f6-ea96-4256-9ee7-0d394a98fbc7}}
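For concreteness, a minimal numerical sketch of this aggregation-and-update loop, with the classical HB recursion x_{t+1} = x_t - α · (1/n) Σ_i ∇f_i(x_t) + β (x_t - x_{t-1}) written out; the step size, momentum, and toy local functions are illustrative:

```python
import numpy as np

def distributed_heavy_ball(grads, x0, lr=0.1, beta=0.9, iters=200):
    """Server-side heavy-ball update: each worker i returns its local
    gradient g_i(x_t); the server averages them and adds a momentum
    term beta * (x_t - x_{t-1})."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        g = np.mean([gi(x) for gi in grads], axis=0)   # aggregate worker gradients
        x, x_prev = x - lr * g + beta * (x - x_prev), x
    return x

# Toy example: worker i holds the local quadratic f_i(x) = 0.5*||x - c_i||^2.
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 1.0])]
grads = [lambda x, c=c: x - c for c in centers]
print(distributed_heavy_ball(grads, np.zeros(2)))      # -> mean of the centers
```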
| m | 3736825fd1125bb3395c3b1c16203608 |
Throughout this paper, we use the notation and methods presented in {{cite:fd581a912dadd84e34bffd6b7d18d56c7ec90235}},
which we have adapted to the weakly associative case with some modifications.
In the remainder of this section we give some important definitions.
| m | fce93e1cfb382d7a7d82ac953c513569 |
In this unsupervised setting, the standard pretrain-finetune transfer paradigm becomes inapplicable, as there are no labeled images available in the target domain for finetuning.
This requires us to conduct unsupervised adaptation between two different tasks, i.e., train a model on the labeled source domain and adapt it to the target domain in an unsupervised manner.
Unsupervised adaptation presents two critical challenges that can result in negative transfer {{cite:c0dff33612c6d64143f0e02d61f5651283567ac3}}, i.e., even worse performance than no transfer at all.
The first challenge is the feature distribution divergence, which naturally exists since the distribution of these features differs from the source to the target domain.
Hence, feature distribution adaptation is necessary.
The second challenge is the task semantic difference since diagnosing pneumonia and COVID-19 are two related but different tasks that have different preferences on the critical factors {{cite:24020ac04861da29bcd8c8c6c0b49f773aebe6e4}}.
Thus, the task semantics should also be adapted to maximize the transfer performance.
While existing domain adaptation (DA) methods are able to adapt feature distributions when the source and target tasks are identical (e.g., both the source and target domains classify monitors, just under different backgrounds), they are not applicable to our problem {{cite:84130beb2524c382ad009581f90f1a8ac66c35de}}, {{cite:7ec93d2cc4f3f2aa9147c24d562a37d056300214}}, {{cite:6d382c45b636f4e1d2e8471d0c30df1086f100ff}}, {{cite:c3985cf0fe0fe3146f00c2ca5fbca4efba98c44a}}, {{cite:0acc4861c41c5f851cf5558db140f61fc2f9297a}}.
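For the first challenge, one standard choice (an illustrative option, not necessarily the mechanism adopted in this paper) is to penalize the divergence between source and target feature batches with a kernel maximum mean discrepancy:

```python
import torch

def mmd_rbf(source, target, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy with an RBF
    kernel between two feature batches; `sigma` is an illustrative
    bandwidth choice."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return (k(source, source).mean() + k(target, target).mean()
            - 2 * k(source, target).mean())

fs = torch.randn(64, 128)          # source-domain (pneumonia) features
ft = torch.randn(64, 128) + 0.5    # target-domain (COVID-19) features
loss = mmd_rbf(fs, ft)             # added to the task loss during adaptation
```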
| i | 8f0c47505a667284cc90a400c6b5d66c |
Then we apply the unsteerability criterion for two-party {{formula:1fadae4c-2db4-474d-8f86-87e101b8c67c}} -mode GSs,
under Gaussian measurements, found by Wiseman et al. in Refs. {{cite:3587a2e0784f75dce8d467f2a57b3d3a5cfa6c1b}}, {{cite:3907765dc94ed907d9b7e66b8a2c587e4116dc89}}.
Accordingly, the matrix inequalities (REF ) and (REF ), written for a bipartite
{{formula:06cd6c51-ce34-41a2-809a-7e0ef9f1a9b7}} -mode GS {{formula:44be8c29-f841-4c44-9c4d-7faa713140af}} shared by Alice and Bob, are necessary and
sufficient conditions for its unsteerability from Alice to Bob and, respectively, from Bob to Alice.
We emphasize that the necessity part is proven here with no reference to the Gaussian or
non-Gaussian nature of the state, as well as to that of the one-party measurements involved.
| d | 893f3361e097d92256e52a2dd8270610 |
Before we move on to interpreting our results in a broader context, we must first establish the accuracy of our derived dust parameters.
Previous studies focussing on modified black body fitting of the dust emission in compact galactic dust cores (e.g., {{cite:56b98e92381b0b9f4f2e26ead58799f03fb37cc0}}, {{cite:31b103a8902cea18cd3e717a12a1e0b419e241aa}}, {{cite:ea17e74b0cc8532a874c4386e8b006cd252505fd}}) show that a correlation between {{formula:04bf8e20-f181-4271-9093-a2e72f70e888}} and {{formula:e215fb73-09d5-465a-8cb2-ba6202ed98d1}} can be introduced artificially to some extent by performing {{formula:0e6bc7ae-4db2-4fc2-9111-64ad65daf8ed}} minimization on noisy data. Bayesian methods such as ours are shown to produce more robust results (see also, e.g., {{cite:1fa47793c41d7ab7b8791c691ef3cc2a864ee00b}}) because they treat uncertainties rigorously and self-consistently, and as a result they do not produce spurious correlations between the parameters due to measurement uncertainties.
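As an illustration of the kind of fit involved, a minimal sketch of a modified black body model and a Bayesian-style log-posterior; the prior ranges and normalisation convention are illustrative assumptions, not the ones adopted in this work:

```python
import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8     # SI constants

def mod_blackbody(nu, log_amp, beta, T):
    """Optically thin modified black body S_nu ∝ nu^beta * B_nu(T); the
    log-amplitude absorbs column density, opacity normalisation and solid
    angle (pivoting at 1 THz is an arbitrary choice here)."""
    b_nu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))
    return 10.0**log_amp * (nu / 1e12)**beta * b_nu

def log_posterior(params, nu, flux, err):
    """Gaussian log-likelihood with flat priors, the kind of function one
    would hand to an MCMC sampler such as emcee; prior ranges are
    illustrative only."""
    log_amp, beta, T = params
    if not (0.5 < beta < 4.0 and 5.0 < T < 60.0):
        return -np.inf
    model = mod_blackbody(nu, log_amp, beta, T)
    return -0.5 * np.sum(((flux - model) / err) ** 2)
```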
| m | 2849533611438dc176251560ec9efae3 |
These approaches rely on label information to retrieve cross-modality instances. Yagcioglu et al. {{cite:4755296dac0e58b0150e6b67672dcb77fe1e069f}} used a CNN-based image representation to translate a given visual query into a distributional-semantics-based form. Selecting an intermediate semantic space for correlation measurement during retrieval is an alternative way to tackle the translation problem. Socher et al. {{cite:8d2b0c1742bb6608395263c9ff9f8d3380fe3daf}} used an intermediate semantic space to translate common representations from text to image and vice versa. Similarly, Xu et al. {{cite:3e9f714f6335a7f9886fa81348da228b80aac9d0}} proposed an integrated paradigm that models video and text data simultaneously. Their model contains three fundamental parts: a semantic language model, a video model, and a joint embedding model. The language model embeds sentences into a continuous vector space, the visual model uses a DNN to capture semantic correlations from videos, and the joint embedding model minimizes the distance between the outputs of the deep video model and the language model in the common space to leverage the semantic correlation between the two modalities. Cao et al. {{cite:d6b6340ad00f6026f522ad7436b661bc0652b2a1}} proposed a novel Deep Visual-Semantic Hashing (DVSH) model for cross-media retrieval. They generated compact hash codes for visual and text data in a supervised manner, learning the semantic correlation between image and text data. The proposed architecture fuses joint multimodal embedding and cross-media hashing, based on a CNN for images, an RNN for text, and a max-margin objective incorporating both images and text to enable similarity preservation and standard hash codes. Lebret et al. {{cite:7943dae8ab9f9dcb3c38482ed691ec76c409b4d3}} used a CNN to generate an image representation, which allows the system to infer phrases that describe the image. Moreover, to predict a set of top-ranked phrases, a trigram-constrained language model is proposed to generate syntactically correct sentences from different subsets of phrases. Wei et al. {{cite:85bf13372b29a183d2675f63f4df0f9f173224c9}} tackled the cross-media retrieval problem through a novel approach called deep semantic matching (deep-SM), in which images and text are mapped into a joint semantic space using two autonomous DNN models.
| m | 2d22bf61113ba6e2e98d61facc8f197c |
Despite their success, there still exist two underlying limitations that hinder better exploitation of base-class knowledge, as illustrated in Fig. REF .
First, region-based detection frameworks rely on region proposals to produce final predictions and are thus sensitive to low-quality region proposals. Unfortunately, as investigated by {{cite:1691bd0895b49ff140f3a91cec1eb20c29598fb8}} and {{cite:67e9bb46c7f44ff21879a7f587586a97c587573a}}, it is not easy to produce high-quality region proposals for novel classes with limited supervision under the few-shot detection setup. Such a gap in the quality of region proposals obstructs generalization from base classes to novel classes.
Second, most existing meta-learning-based approaches {{cite:3f65ebafb32e54616fa2b2b6fee5885cd652e0d1}}, {{cite:22de5d1668873450e052d32982753f7bd7573d89}}, {{cite:1691bd0895b49ff140f3a91cec1eb20c29598fb8}}, {{cite:6efd4ffd833634096527791b658e9f7971f78bb9}} adopt `feature reweighting' or its variants to aggregate query and support features, which can only deal with one support class (i.e., target class to detect) at a time and essentially treat each support class independently.
Without seeing multiple classes within a single feed-forward, they largely overlook the important inter-class correlation among different support classes. This limits the ability to distinguish similar classes (e.g., distinguishing from cows and sheep) and to generalize from related classes (e.g., learning to detect cows by generalizing from detecting sheep).
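To make the one-class-at-a-time limitation explicit, a minimal sketch of a 'feature reweighting'-style aggregation; the tensor shapes and channel-wise product are illustrative of the general scheme, not any specific cited implementation:

```python
import torch

def reweight(query_feat, support_vecs):
    """A class-specific support vector rescales the query feature map
    channel-wise, one support class per pass, so each class is processed
    independently of the others (the limitation discussed above).
    query_feat: (B, C, H, W); support_vecs: (N_cls, C)."""
    outs = []
    for w in support_vecs:                      # one class at a time
        outs.append(query_feat * w.view(1, -1, 1, 1))
    return torch.stack(outs, dim=1)             # (B, N_cls, C, H, W)

q = torch.randn(2, 256, 32, 32)                 # query feature map
s = torch.randn(5, 256)                         # 5 support-class vectors
per_class = reweight(q, s)
```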
| i | 087d5a293d406495489750188dd28001 |
As coloring is hard for general graphs even when {{formula:96cec8f8-7177-4e17-9c6c-49fc9a3b49b3}} is 3 {{cite:611672c01074b3e3c13a95f46c8aecbb0991d5b2}}, we can rule out any fixed-parameter tractable (FPT) algorithm for BCP on general graphs parameterized by the number of colors. In this paper, we study the complexity of BCP for restricted graph classes and present FPT algorithms parameterized by several structural parameters.
| i | 133c11cccdff18dd0488e63ee9bc8306 |
In the topological method, we generalise the above concepts by introducing the notion of walk-matrix.
For the entries in a walk-matrix, we do not necessarily use trees of self-avoiding walks. We may use other
topological constructions of {{formula:449777b1-080b-4d66-a658-9a3417070a61}} such as path-trees, universal covers, etc. (see
more about these constructions in the excellent textbook {{cite:193da81e84f4d96eeafc01c2631313392c313d00}}). Also, we have weights on
the paths of this construction. The weight of each path can be chosen arbitrarily, i.e., it is
not necessarily an influence. Each entry in the walk-matrix is a sum of weights of appropriately chosen paths
in the topological construction. In that respect, one might regard the influence matrix {{formula:43a4d467-64e1-49df-92d8-8bb33b26473f}}
as a special case of a walk-matrix (e.g., see Lemma REF ).
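For concreteness, a minimal sketch that builds the tree of self-avoiding walks for a small graph; the walks are unweighted here, whereas in the walk-matrix setting each path would additionally carry an arbitrary weight:

```python
import networkx as nx

def saw_tree(G, root, depth):
    """Build the tree of self-avoiding walks of G rooted at `root`, up to
    a given depth: tree nodes are the walks themselves, and a walk gets
    one child per extension by a vertex it has not visited yet."""
    T = nx.DiGraph()
    frontier = [(root,)]
    T.add_node((root,))
    for _ in range(depth):
        nxt = []
        for walk in frontier:
            for u in G.neighbors(walk[-1]):
                if u not in walk:               # self-avoidance constraint
                    child = walk + (u,)
                    T.add_edge(walk, child)
                    nxt.append(child)
        frontier = nxt
    return T

T = saw_tree(nx.cycle_graph(5), 0, depth=4)
print(T.number_of_nodes())
```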
| m | d706b69e97369db582089cb1fc7eb807 |
In this Section, we describe an application of Corollary REF to the accuracy of approximating the impulse response of a single-input, single-output dynamical system {{cite:32d45c53e93d81f67a1047a0716a553f1028fa6a}} by an Arnoldi-type model order reduction method.
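A minimal sketch of the Arnoldi iteration underlying such a reduction; the toy system below is an illustrative assumption:

```python
import numpy as np

def arnoldi(A, b, k):
    """k-step Arnoldi with modified Gram-Schmidt: returns V (n x k,
    orthonormal Krylov basis of span{b, Ab, ..., A^{k-1}b}) and H (k x k
    upper Hessenberg) with A V ≈ V H; the reduced pair (H, V^T b) can
    stand in for (A, b) when approximating the impulse response."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:       # breakdown: Krylov space is exhausted
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k], H[:k, :k]

# Toy SISO system x' = A x + b u; the reduced impulse response
# c^T V expm(H t) V^T b can then be compared with the full one.
rng = np.random.default_rng(1)
A = -np.eye(50) + 0.1 * rng.normal(size=(50, 50))
b = rng.normal(size=50)
V, Hk = arnoldi(A, b, 8)
```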
| m | 9bd88f091344842067693b528d87ec5a |