which is satisfied according to Corollary 4.8 of {{cite:42e9351356f46bf2fcf582c70b20f4a0535cb1bb}} and (REF ). The second line of condition (REF ) is void, since {{formula:fb2a22bc-0c45-485e-89c8-4419cf13f2cc}} commutes with {{formula:84529ee0-cfbf-4d6c-994b-f85624332d61}}.
Now, we will look at the calculation for {{formula:3fcf1e62-93fd-4194-80be-22aa6282e56c}} when {{formula:f89c293d-d06b-4425-ae12-8f2621b25b5d}} in the magnetic field interval shown in Fig. REF . It is worth mentioning that the case of {{formula:6f7dc8bb-7fac-4a08-aada-55c95dcd52d7}} and {{formula:9558cff8-a037-4bc9-a656-59fc0738ebb4}} represents the conventional Landau levels, which have been reported, for example, in {{cite:d69c58d76152c8b0e94aedb4023710b2e82d8925}}, {{cite:42c52f39865c38143414792c0eb337177765a1c8}}, {{cite:e952185ce121e9f1abee9c980446d83aa8e17c08}}, {{cite:0660719672420f54854ee1932648dd6f5221b971}}. The band energies become degenerate a second time for {{formula:bf4655f9-f82f-47af-8e15-279fb3ad60c3}} when {{formula:c5851681-7322-4ad0-b88b-47fc3c284a65}} (Fig. REF a) and {{formula:003b3451-bada-4307-b283-da810344ec9a}} (Fig. REF c). However, no second band-energy degeneracy is seen for {{formula:35b8d516-b995-48d7-b4fa-ba3ba2eef14a}} (Figs. REF b,d). Furthermore, as {{formula:9be5b351-c6d1-4221-8bd2-2771e32d2b45}} increases, the {{formula:6a592274-085e-4b1c-b74b-fc977f450c77}} degenerate energy bands have higher energy than the {{formula:4996afcc-7b44-42e4-bd45-505f17dcf082}} energy levels because {{formula:f19d9b31-324e-46f1-8ba6-34fe6b2749aa}} do not undergo second quantization.
Location problems with distance constraints have received little attention within the vast literature on location problems in OR and related areas. There are some early works that considered maximum distance constraints between the demand nodes and the facility locations {{cite:a31ae4e4b893a29c03d9cb471d719410421bae59}}, {{cite:7974ca5c4c37b8d9f1d8d92f7fc33f4ba378cc26}}, {{cite:cd5dfbea8bf24628c8f3aaf70242bc3428f3387e}}. Moon and Chaudhry were the first to systematically study location problems with distance constraints {{cite:c5215390e292e9e06d777163343404dae634a3fe}}. Among the problems studied was the
Median{{formula:84cce107-a885-427a-bb69-c9fab4f2319f}} , in their terminology, where we seek to minimize the sum of the distances between the clients and their closest facility (as in p-median) subject to constraints that impose lower bounds (hence the {{formula:f8689b65-c0ff-4d99-9be9-1bbc723a1287}} ) on the distance
between any two facilities and also between any facility and any client.
This is the problem that we revisit in this paper, simply calling it p-median with distance constraints.
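On small instances, the problem can be solved exactly by enumeration. The following brute-force sketch (a toy formulation of ours; function and variable names are illustrative, not from the literature) enforces the lower bounds both between pairs of open facilities and between facilities and clients:

```python
from itertools import combinations

def p_median_distance_constrained(dist, clients, sites, p, d_ff, d_fc):
    """Brute-force p-median with lower-bound distance constraints.

    dist: dict-of-dicts of pairwise distances; d_ff: minimum allowed distance
    between any two open facilities; d_fc: minimum allowed distance between
    any open facility and any client. Returns (best_cost, best_facilities),
    or (None, None) if no feasible placement exists.
    """
    best_cost, best_set = None, None
    for facilities in combinations(sites, p):
        # Lower bound on the distance between every pair of open facilities.
        if any(dist[f][g] < d_ff for f, g in combinations(facilities, 2)):
            continue
        # Lower bound on the distance between every facility and every client.
        if any(dist[f][c] < d_fc for f in facilities for c in clients):
            continue
        # p-median objective: each client is served by its closest open facility.
        cost = sum(min(dist[c][f] for f in facilities) for c in clients)
        if best_cost is None or cost < best_cost:
            best_cost, best_set = cost, facilities
    return best_cost, best_set
```

For example, on a line instance with clients at positions 0 and 10 and candidate sites at 3, 5, 7, requiring inter-facility distance at least 3 rules out the site pairs {3, 5} and {5, 7}, leaving {3, 7} with objective value 6.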
Observations of PSR B1259–63/LS 2883 have also been performed in HE gamma rays with the Large Area Telescope (LAT)
onboard the Fermi satellite. As part of its full-sky coverage pointing strategy, the LAT monitored PSR B1259–63/LS 2883 during its last three periastron passages in 2010/11, 2014 and 2017 (see e.g.
{{cite:c419d55f70170d19e51776a144ff887e404cee13}}, {{cite:2019a591495d71748760cbe5a75b18f0ad7b83d0}}, {{cite:0ad9e10a4ccafe54a6f7ce7a35026394882b5739}}). The source was clearly detected by the LAT, displaying
low to moderate fluxes around the disc crossings and at the periastron passage itself. In addition,
bright HE gamma-ray flaring episodes starting about 30–40 days after periastron and lasting for roughly 30 days
have been detected after each of these three periastron passages. The nature of these flares is still unknown.
In this section, we state our results. In Section REF , we consider the large-scale kernel ridge regression (KRR) problem. We generalize the Fourier transform result {{cite:29d4d05d790124af97039edeaf4f871164eee32d}} of accelerating the running time of solving KRR using the tool of leverage score sampling to a broader class of kernels. In Section REF , we discuss the interesting application of leverage score sampling for training deep learning models due to the connection between regularized neural nets and kernel ridge regression.
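To make the leverage-score idea concrete, the following minimal sketch of ours (not the algorithm of the cited works; the RBF kernel, the sampling scheme, and the Nyström-style solve are illustrative choices) computes ridge leverage scores and uses them to select landmark points for an approximate KRR fit:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def ridge_leverage_scores(K, lam):
    # tau_i = [K (K + lam I)^{-1}]_{ii}; the scores sum to the
    # "effective dimension" of the regularized kernel problem.
    n = K.shape[0]
    return np.diag(K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n)))

def krr_leverage_subsample(X, y, lam, m, gamma=1.0, rng=None):
    # Sample m landmark points with probability proportional to the ridge
    # leverage scores, then solve the KRR problem restricted to the sampled
    # columns (a Nystrom-style sketch of the full n x n solve).
    rng = np.random.default_rng(rng)
    K = rbf_kernel(X, X, gamma)
    tau = ridge_leverage_scores(K, lam)
    idx = rng.choice(len(X), size=m, replace=False, p=tau / tau.sum())
    K_mm = rbf_kernel(X[idx], X[idx], gamma)
    K_nm = rbf_kernel(X, X[idx], gamma)
    # Nystrom KRR: alpha solves (K_nm^T K_nm + lam K_mm) alpha = K_nm^T y.
    alpha = np.linalg.solve(K_nm.T @ K_nm + lam * K_mm + 1e-10 * np.eye(m),
                            K_nm.T @ y)
    return lambda Xt: rbf_kernel(Xt, X[idx], gamma) @ alpha
```

The point of the sampling step is that the m x m solve costs far less than the full n x n one, while high-leverage (hard-to-approximate) points are kept with high probability.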
Results on Head Pose Estimation.
Tab. REF shows the comparison results of our methods with several recent head pose estimation approaches.
SVR {{cite:acfcb7fcf6b6eb9d3b5e300b6c0c8b942f85a442}}, RRF {{cite:c9fe8314428d0390b4f61e56185de788c04f9a44}} and KPLS {{cite:c1394d534c76bff575a940c2d7b02dd712d21363}} are all conventional regression methods, and hence have inferior performance to MoDRN.
SPUDRFs achieve the best performance with an MAE of 0.74 on the BIWI dataset and 0.82 on the BU-3DFE dataset, which is state-of-the-art.
Such a marked improvement, from 1.13 to 0.74 on BIWI, should be attributed to considering ranking fairness in the initial stage of training DRFs.
The importance of considering ranking fairness in SPL is most obvious on BIWI.
In SP-DRFs, as illustrated in Fig. REF , the Gaussian means of leaf nodes are concentrated in a small range, leading to biased solutions.
In SPUDRFs, considering underrepresented examples in the early pace renders more reasonable distributions of leaf nodes.
{{table:b6014df7-2866-4bde-9221-25f3c973a8f1}}
The gauge/gravity duality, and in particular the AdS/CFT correspondence {{cite:819dcfc2dae8e6397555c18f01fdfe066edcc144}}, {{cite:e6dbce007553c0d73d58b9e233bcb02d98e3c0e3}}, {{cite:717cc0f82a49cd692d7daeafd55b203ce0d5e520}}, provides a significant improvement in our understanding of quantum systems with a large number of degrees of freedom at strong coupling. Using the tools of gauge/gravity duality, one can perform the gravitational shock wave analysis {{cite:062f49f12dafeca51fddf666b6155419afe302e8}}, {{cite:aa1663f51e46540e16cd684fa14aeae80455e6be}}, {{cite:95c518560b67d0b4786c18773d8b638a28542907}}, {{cite:eaef72a77d76c5f681df3826a8170019de3ee398}}, {{cite:fdbf06aac09656a0a18434516f92df31ea7f6112}} to calculate the out-of-time-ordered correlation function (OTOC), which is regarded as a measure of chaos in quantum systems.
The OTOC characterises chaotic behavior in many-body quantum systems in terms of two parameters, namely the Lyapunov exponent {{formula:8a922982-baeb-4207-9f6e-4cd5db44f437}} and the butterfly velocity {{formula:c143fb5a-d15c-42cb-b608-21db64f11f8a}}. A substantial body of work has been carried out over the past few years to establish this connection between the OTOC and quantum chaos; see, for instance, {{cite:062f49f12dafeca51fddf666b6155419afe302e8}}, {{cite:aa1663f51e46540e16cd684fa14aeae80455e6be}}, {{cite:eaef72a77d76c5f681df3826a8170019de3ee398}}, {{cite:05153efc05b34833a3488a3da6d11b54b940afdd}}, {{cite:8fd77dd79fb962c5671b02eb946749f1c42224f4}}, {{cite:1ef3c1357e7fc634155d5dbd41598daf8e893c98}}, {{cite:e199c76d4d19370dfc6f56273aedd1d67c7c6961}}, {{cite:769c89009dd125c68526c4f442ba7b66e7573b9c}}, {{cite:efcfef7b449d33ddec06e6e7833558e8ce055162}} and the references therein. More precisely, in chaotic systems the OTOC, which is essentially a four-point correlation function, shows the following exponential growth with respect to time and space,
{{formula:0a05e9a6-bdd7-4f17-9515-76900aac184e}}
Another limitation is that the level {{formula:1a423bea-8ae8-4bb5-b11e-a38b689589ae}} is considered fixed. This is also needed in the proof, and is specifically used in the bound (REF ). In some applications, especially in multiple hypothesis testing, the level {{formula:11019d7a-a65c-48fb-93f8-7333763c5bae}} needs to shrink with the problem size. It would be important to extend our theory to this setting.
Acknowledgments
We thank Edward I. George, Jesse Hemerik, Panos Toulis, and Larry Wasserman for valuable discussions. This work was supported in part by NSF BIGDATA grant IIS 1837992 and NSF CAREER award DMS 2046874.
Appendix
Practical Considerations
When are invariance based tests applicable in practice? When can one invoke the group invariance hypothesis?
We think that this is a challenging applied statistics problem, and we provide some discussion here. We thank a reviewer for raising this question.
When a data analyst is performing a hypothesis test, and they have reason to think that under the null hypothesis the distribution of the data is (nearly) unchanged under some operation, then one can invoke a group invariance condition.
Suppose for instance that the data analyst thinks that under the null hypothesis, the data is equally likely to have come in any order — then one can invoke permutation invariance.
However, suppose that the data comes in predefined clusters (such as strata, or classes based on some key distinguishing characteristic), and under the null hypothesis it is only reasonable to think that the data is equally likely to appear in any order within certain specific clusters. Then one can use permutation invariance only over the permutations within those clusters.
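A minimal sketch of such a within-cluster (stratified) permutation test, with names and implementation details of our own choosing:

```python
import numpy as np

def stratified_permutation_pvalue(x, y_labels, clusters, stat, n_perm=999, rng=None):
    """Permutation p-value where labels are shuffled only within clusters.

    x: data array; y_labels: binary group labels; clusters: stratum id per
    observation; stat: function (x, labels) -> scalar test statistic.
    """
    rng = np.random.default_rng(rng)
    observed = stat(x, y_labels)
    count = 0
    for _ in range(n_perm):
        perm = y_labels.copy()
        for c in np.unique(clusters):
            idx = np.where(clusters == c)[0]
            # Shuffle labels only inside stratum c, preserving the strata.
            perm[idx] = rng.permutation(perm[idx])
        count += stat(x, perm) >= observed
    # Add-one correction yields a valid finite-sample p-value.
    return (1 + count) / (1 + n_perm)
```

The add-one correction corresponds to including the identity permutation, which keeps the test valid at finite sample sizes.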
This type of reasoning is more readily justifiable when testing a point null. In that case, since we only consider one distribution, assumptions can be justified with greater ease. However, if we consider composite null hypotheses, such as those in two-sample testing, then it becomes much more challenging to justify invariance assumptions.
One difficulty, however, is that formally testing (evaluating) invariance assumptions can be very difficult, especially if the invariance groups are large (for instance, suppose that we only have one observation; then it is impossible to test that its density is symmetric around zero).
In our view, these types of decisions can be quite application-specific.
Further there are a number of books and reviews on group invariance and permutation tests in statistics, and the interested statistical data analyst can study them for additional insights {{cite:18214902808a3b19186d29e3c7505004b0725747}}, {{cite:35555cb98b2c3b734400d42fd0bd4279f97f3cb9}}, {{cite:1ecbb0ab97bd35c814f3222089ff34e7f4dcccc7}}, {{cite:6f9f883125404710da7812107a7101a2876bbb81}}, {{cite:831e32d80dbfb09973afb0003e61503dd6c8cf32}}, {{cite:ffbc2cc918ad107d41d996cfc4c3752f61a69f87}}, {{cite:ec2be04fb237893500f9dab606c95b9c543224b2}}, {{cite:05017bdecf3056d6ee3cb39e6367c979f86c9853}}, {{cite:c3cae8e4852c6beaa5223d9a1e1904bb76a93349}}.
Proof for the general theory
Proof of {{formula:46ab0fea-be09-4cb2-8646-0a46ef109be6}} -sub-additivity in Section REF
Let {{formula:ed9193c0-637f-4bbb-9b6c-0f92aaadd0ba}} and suppose first that {{formula:c5009189-f3db-4c65-b46e-8a85d92982de}} . Then, by concavity, {{formula:bdc3d6c1-86d7-4101-b11f-9fea98bc662e}} , or equivalently, {{formula:48fd8e86-18ac-47c0-895d-8f5bd6ba1a37}} . Thus, {{formula:d5c03ac2-74b0-4d74-8a33-ae1005b92b16}} follows if {{formula:e61ff79f-d7fa-4784-9e3f-a742e8a73e81}} . By concavity again, and also using that {{formula:f0b74511-f2f1-4a39-a1f2-33243bae458d}} , we have {{formula:56c33790-041c-4b36-ad0f-c6f75c74e1c4}} , as required. Next, if {{formula:6b52e894-c397-483f-b92d-ab74d4d91019}} , then the above argument used for {{formula:9f283ea1-6c2a-4010-a5e1-477fab794bd5}} shows that {{formula:8580ac23-d10d-494a-8723-69d48e693081}} , thus {{formula:d4c21214-85c1-4f0f-8db3-0b8572d39111}} . This finishes the argument when {{formula:58c3652d-6614-4bd6-bdd1-77e5a3d374f2}} . The same argument applies when {{formula:ba54dd12-bcb7-481e-8956-902e55cd4080}} .
The remaining case is when {{formula:4df41522-0c24-4a36-b53e-73e8f2e3fcdd}} have opposite signs. We can assume without loss of generality that {{formula:b8b4ad4d-5712-4f7f-9ba1-21cfeb49bf9b}} and that {{formula:47914b50-1df1-448e-af18-65c970d03e97}} (otherwise we can consider {{formula:992749b5-e0d2-4026-8407-94eeca0803cf}} ). Then {{formula:f32be901-9d1f-4376-928f-53e7a5c6766f}} , where the last inequality follows because {{formula:61b2f6d3-1a47-4aba-9475-ad04e7400976}} is non-decreasing, and also as {{formula:e034dfd3-c32b-48ee-9c5c-7302a706c87e}} .
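As a numerical spot-check of this sub-additivity property (an illustrative instance of ours, not part of the proof), take the concave, non-decreasing function g(t) = sqrt(t) with g(0) = 0, so that f(x) = sqrt(|x|) should satisfy f(x + y) <= f(x) + f(y) for all real x, y:

```python
import numpy as np

# f(t) = sqrt(|t|) arises from a concave, non-decreasing g with g(0) = 0,
# so by the argument above it should be sub-additive on the whole real line.
f = lambda t: np.sqrt(np.abs(t))

rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, 100_000)
y = rng.uniform(-10, 10, 100_000)
# Count violations of f(x + y) <= f(x) + f(y), up to float tolerance.
violations = int(np.sum(f(x + y) > f(x) + f(y) + 1e-12))
print(violations)  # 0
```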
Proof of Theorem REF
Control of type I error.
The first claim, about the level/Type I error control, is discussed at various levels of generality in many works. The textbook result, e.g., Problem 15.3 in {{cite:8b2822e91a0db5028b29dec4e0915d71b3522e8e}}, considers finite groups, and for infinite groups (e.g., Problem 15.1 in the same reference) assumes that we average over the full group. See also the more general statements in Theorem 2 of {{cite:4fdbfcf7f9d6207d3be332716ba2da0104b4d29d}} and Theorem 2 of {{cite:593ea596e688f80b3f80fd1a695efbfeabc88f6d}}. We provide a simple argument for a key required exchangeability claim, which extends the above results to compact topological groups in full generality, and applies to random sampling of a finite number of group elements. This is crucial for our results, because we use continuous groups, such as orthogonal groups, in many of our examples.
Let {{formula:dabfb93d-5e32-457f-952e-d3ee2189fe87}} , and {{formula:b1349c5c-681a-4dc4-8b96-c063fcf00be7}} for {{formula:30401248-5c2c-4521-8862-a36ce426ad53}} . Note that due to noise invariance, {{formula:d2c60aeb-800a-4d57-a4b4-a00afdb9893c}} , {{formula:cde97662-60db-42f8-8465-a9dc1a038d10}} are exchangeable when {{formula:bcbf9c9a-8249-4080-b5a3-de6e3ad963e0}} are all considered random: the random variables in the vector {{formula:c58f2e6f-424a-4d88-a386-f395088adbf9}} are exchangeable.
The random vectors {{formula:fa14a441-d333-4317-865e-075b24e56c38}} , {{formula:ab84da21-59f7-4a4b-8080-a477778a71e3}} are mutually exchangeable.
To see this, we will show that {{formula:b39e4074-88c5-4146-9ab7-51552096d799}} has the same distribution as {{formula:56004c59-2dea-49fb-b529-fc327addfd13}} , where {{formula:324b6562-f3c6-4ddd-acc6-12e79c33ec3c}} is independent of {{formula:aecc715b-ae78-42d6-a967-6828a047e575}} , {{formula:6be2b2b2-11cf-4d84-a5da-0f49056ae71d}} . Denote {{formula:e6a2a6f5-6dac-4db4-8805-23971dc2939f}} . Then this is equivalent to the statement that {{formula:ffa387b6-ef86-4b91-9d41-8841f5a51de5}} has the same distribution as {{formula:b3a3c2ad-b64f-4a47-ae8d-05ff5f332b22}} .
Let {{formula:e8d71027-d427-41cc-b635-8acdc9804085}} , for {{formula:a86f77ca-3c4c-425d-a11c-be2a6dd344bd}} . Since {{formula:5c829de3-f872-40d4-8a33-25b2320adee5}} , the above claim follows because the vectors {{formula:4521e342-7c92-48a8-9b30-e314baa4fb14}} and {{formula:72b0d941-b563-4868-9332-ab8cc2f384d7}} have an identical distribution.
For simplicity, we show this for {{formula:5951e8b1-1052-42d2-85fa-97c7c629fe99}} . The proof for the more general case is very similar.
We can write for {{formula:6db37d17-a5df-4248-af3c-24cfd62b7d8e}} , {{formula:bffe06f3-d623-4245-be9d-712f45aa58eb}} . Now, let us condition on {{formula:57a6ebc4-b9f5-4b53-8353-015856f7f7a0}} . Then, we can write using the independence of {{formula:e3274a84-5bb5-4d74-9106-39a51798bc52}} that {{formula:bb7aa38c-456e-4706-b296-ba9d4af7d82e}} . Recall that {{formula:45075fc3-d15c-49e7-8b9a-a021533b840d}} are iid from the Haar/uniform probability measure on {{formula:61e15236-b872-4c17-845b-5b07be5dc013}} . Using the left-invariance of the Haar measure, we have {{formula:60c47ef3-ed1c-403b-9493-e4b585b4a7d7}} , and similarly for {{formula:b567c203-b30f-495c-8a9d-ddc22292a8ff}} . Hence, we find, using again the independence of {{formula:3625a61a-e440-4fb5-9254-706c50de39b4}} that
{{formula:a07b0823-e6fc-4134-8893-a5d5f25551a3}}
This shows that the joint distribution of {{formula:2fddfe7d-013a-46a8-8996-2d07b903960b}} and {{formula:db7fbecb-0aad-41d7-b2b8-6ce62bfada2f}} is the same for {{formula:1abeef28-71ad-4f73-a966-c8bc90533de4}} . The same argument works for {{formula:43a2a6c5-6ae6-49e1-8297-b880ea8acf89}} . This finishes the proof.
One can then finish the proof of type I error control as in the proof of theorem 2 in {{cite:593ea596e688f80b3f80fd1a695efbfeabc88f6d}}.
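As a sanity check on this guarantee (not part of the formal argument), the following simulation of ours instantiates the test with the sign-flip group acting on symmetric Gaussian noise, the absolute mean as the statistic, and K randomly sampled group elements; the Monte Carlo rejection rate stays near the nominal level:

```python
import numpy as np

def randomization_pvalue(x, stat, sample_g, K, rng):
    # p-value from K randomly drawn group elements, with add-one correction
    # (the identity element is always included).
    t_obs = stat(x)
    t_rand = [stat(sample_g(rng)(x)) for _ in range(K)]
    return (1 + sum(t >= t_obs for t in t_rand)) / (1 + K)

def sample_sign_flip(rng, n=30):
    # Uniformly random element of the sign-flip group on R^n.
    s = rng.choice([-1.0, 1.0], size=n)
    return lambda x: s * x

rng = np.random.default_rng(0)
K, alpha, reps = 99, 0.1, 2000
rejections = 0
for _ in range(reps):
    x = rng.normal(size=30)  # null: sign-symmetric noise, no signal
    p = randomization_pvalue(x, lambda z: abs(z.mean()), sample_sign_flip, K, rng)
    rejections += p <= alpha
print(rejections / reps)  # close to alpha = 0.1
```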
Consistency.
Now we move to the part about consistency.
We will consider a slight variant of the invariance-based randomization test, where for a fixed {{formula:c5b991fa-376e-42bd-b27a-e5ef9df0dbb1}} we reject the null when
$$f_m(X_m) > \max\big\{ f_m(G_{m1} X_m), \ldots, f_m(G_{mK} X_m) \big\},$$
and where each {{formula:8a4e933d-2f59-4bcf-8e79-0df8cf83d538}} , {{formula:8f8cb1b5-eb0d-4273-8758-e5df62b42a1f}} is chosen uniformly at random over {{formula:25bf16cf-e276-4f1b-afcd-0555615b79f8}} . The type I error probability over the random {{formula:712b9132-3bb6-484f-81ed-ea6c84eb89ae}} and {{formula:50a1e828-e3ce-4c54-becb-445fb20204e4}} of this test is at most {{formula:bcf714e5-5bcf-4940-bfcd-102ba6914c1c}} , see Theorem REF .
The consistency of this test implies the consistency of the quantile-based test. Specifically, given any {{formula:6f27fa61-7e5e-4bb9-9e7b-756bf51e407c}} , choose any positive integer {{formula:a80c3619-857c-4670-bfdd-8fbffb2c2f0e}} such that {{formula:b87224a5-c79e-4488-bb55-43c8818aad36}} . Let {{formula:d813fe03-7e0f-48fa-8322-ead6cafb7183}} denote the event (REF ) and let {{formula:0751dab0-f334-41de-a2c3-8c711da04637}} denote the event (REF ). Then, {{formula:7367afa0-c37f-4d65-9550-bbe6fc143fa1}} , and hence {{formula:8a820369-8730-486e-a99c-fdf1f329ec90}} . We will show that {{formula:dd56e65d-fdd9-4e03-ac44-8ce58e66b20d}} . Thus, it will follow that {{formula:feb27dc2-d744-404a-b8ec-2e74d8339ce9}} . Therefore, it is enough to study the test (REF ).
A simplification is given by the following lemma.
Suppose {{formula:73b28221-7ca7-4fcd-9ac7-6e397463b565}} is fixed. Then we have {{formula:a8fb23ab-141b-449d-af21-1caad1bc3e29}} if and only if we have {{formula:c20c0a1d-127e-439e-903a-4694e96b883e}} for a single {{formula:0042da6d-9e1c-4a8d-aa4b-2b1eb590826b}} .
[Proof of Lemma REF ]
Consider the events {{formula:3891af3d-c206-4be2-bcc4-9e4c8ab1ba9f}} . By taking complements, it is enough to show that {{formula:cbe1cf91-d047-4742-9b9c-97361f27c141}} if and only if {{formula:598968ee-f431-4952-91ed-e224aba88d9e}} .
Since {{formula:fdcb4e65-fa16-4016-b874-a475550398f8}} have the same distribution for all {{formula:cbfde237-4e70-4b4c-ae64-35e84d6b3040}} , we have {{formula:00c7ad6e-0ed1-4242-ae18-5b3095b984f7}} for all {{formula:f31aa7ee-2380-4125-8fe2-8350a10f9f68}} . Moreover, since {{formula:b519c97f-2c1f-4b26-a767-e7b543c0d624}} , we have by the union bound that
$$P(A_1) \le P\Big(\bigcup_{i=1}^K A_i\Big) \le \sum_{i=1}^K P(A_i) = K \, P(A_1).$$
Hence, as {{formula:45a814fa-b237-4d60-b5bf-4b561361cfca}} is bounded, we have {{formula:189327b6-35f4-44af-9245-59c6b42ed1ab}} iff {{formula:e242a840-2432-4079-b599-574ef6750204}} .
Thus, for consistency to hold, it is enough to show that with probability tending to unity,
$$f_m(X_m) > f_m(G_m X_m).$$
Now, {{formula:356d5853-1f95-4dff-95f0-94db146e2450}} . We have the following:
[Independence Lemma]
If {{formula:f5511e5d-bcb2-4469-b5e8-30b045d73efa}} for any fixed {{formula:7a821b9f-c456-41a1-b041-a4821281a01a}} , then {{formula:bc23948c-78ad-4e5a-af9c-693a22acbb84}} when {{formula:99f1c090-b169-4560-a942-eb218438b1fa}} .
[Proof of Lemma REF ]
We can write, for a measurable set {{formula:6981dbb5-1219-4679-8d31-37c60c8d5303}}
$$P(G_m N_m \in A \mid G_m = g_m^0) = P(g_m^0 N_m \in A \mid G_m = g_m^0) = P(g_m^0 N_m \in A) = P(N_m \in A).$$
Since this expression does not depend on {{formula:0f268714-3583-4b52-aa95-9436c1c86733}} , the distribution of {{formula:81bb21cd-6a80-428e-b4d4-97201a8ae82b}} does not depend on the value of {{formula:2e099614-31ca-4192-9d97-ce1e6935b94f}} ; thus {{formula:4c10412d-fb3c-4359-850d-d176cad61953}} is independent of {{formula:6b53abab-3d2b-4f7c-9d6c-17ece67ca8b9}} .
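To illustrate the lemma numerically (an illustrative instance of ours, not part of the proof), take the rotation group SO(2) acting on rotation-invariant Gaussian noise; conditioning on disjoint events for the Haar-distributed angle should leave the distribution of the rotated noise unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 200_000
theta = rng.uniform(0, 2 * np.pi, reps)  # Haar-uniform element of SO(2)
N = rng.normal(size=(reps, 2))           # rotation-invariant Gaussian noise
c, s = np.cos(theta), np.sin(theta)
GN = np.stack([c * N[:, 0] - s * N[:, 1],
               s * N[:, 0] + c * N[:, 1]], axis=1)
# If G N is independent of G, conditioning on disjoint events for theta
# must leave the distribution of (G N)_1 unchanged.
small = theta < np.pi
print(GN[small, 0].mean(), GN[~small, 0].mean())  # both approximately 0
print(GN[small, 0].var(), GN[~small, 0].var())    # both approximately 1
```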
This implies that for {{formula:9915dfc3-6303-4416-8ec0-90d84bc6add1}} sampled independently, {{formula:dd76d417-9aa9-4075-a8e6-313033f45608}} has the same distribution as {{formula:f551e8e3-ac21-44f9-8492-b8f8ad699a57}} . Therefore, {{formula:c2eda6ee-363e-40dc-89e7-cf32fffeff8e}} , and it is enough to give conditions for
the potentially stronger condition that there is a deterministic sequence of critical values {{formula:b54dbe52-79c6-413a-adb5-b64b02e31028}} such that
$$P_{H_{m1}}\big(f_m(G_m s_m + N_m) \le t_m'\big) + P_{H_{m1}}\big(f_m(X_m) > t_m'\big) \to 2.$$
By {{formula:8ad2a99d-d328-442e-abda-307b1570f9cc}} -subadditivity, we can write
$$f_m(X_m) = f_m(s_m + N_m) \ge \kappa \, f_m(s_m) - f_m(-N_m).$$
Since {{formula:5002066d-8251-4157-9c14-0fc0b0895972}} is such that {{formula:fc385730-cec9-4985-9493-5ebef0bbea45}} , we conclude that {{formula:e1063d2f-d9eb-45ef-88bc-f56bd86f2382}} . Hence, if
{{formula:158aaa55-2501-4b21-9f74-74fb61098b42}}
then the desired condition {{formula:8c0d963a-beb9-44cb-9ee2-b438fae85a2e}} holds, provided that {{formula:158fedc9-84fd-4e76-ad01-3bf37eb91e3b}} .
By {{formula:5841674b-c067-4f56-9743-87a1a86cec20}} -subadditivity again, we can write
$$f_m(G_m s_m + N_m) \le \kappa^{-1}\big[f_m(G_m s_m) + f_m(N_m)\big] \le \kappa^{-1}\big[t_m + t_m\big].$$
Taking {{formula:64f6cc16-791c-42e8-8566-726db476d361}}
finishes the proof.
Proof of Proposition REF
As in the proof of Theorem REF ,
it is enough to give conditions for the analogue of (REF ), i.e.,
that there is a deterministic sequence of critical values {{formula:0f4568a5-9d15-4e56-a256-0d8d23fa945d}} such that
$$P_{H_{m0}}\big(f_m(N_m) \le t_m'\big) + P_{H_{m1}}\big(f_m(X_m) > t_m'\big) \to 2.$$
By condition 2(a) of Theorem REF , we can take {{formula:0f337f4e-0c81-470d-927e-acdc274bad3d}} , and {{formula:8437ab53-a11a-4353-aee8-56b3e6b81e2b}} . By {{formula:33b0c1a4-a5df-4cbb-8486-99e9c268964b}} -subadditivity, we have (REF ). Thus, we only need that {{formula:187cc41e-1e5c-4e1b-86b6-9205576720d0}} , which is true by (REF ). This shows that we can take {{formula:b7866cbf-edba-4604-bc5b-5fcc2a642a15}} and finishes the proof.
Proofs for the examples
Proof of Proposition REF
Since {{formula:ae3a982e-ac86-45f7-86d3-5455c13fda3f}} is a norm, it is 1-subadditive.
Thus, the condition from Theorem REF reads {{formula:3dbd7720-f2d1-4de4-a551-1e51fb916603}}
Moreover, {{formula:e02d231d-a0e0-4b6b-8e61-a84ed25f7e42}} .
The requirement on {{formula:aca85807-8834-4553-a889-90287f11ab66}} is that with probability tending to unity, {{formula:c250d931-390e-4cf8-b4ab-585bc76d8dad}} ,
and for Rademacher random variables {{formula:076eecaa-e104-49d8-961e-0e74592133b1}} , {{formula:68560c8f-eff5-4100-ad90-32f70209914a}} , with probability tending to unity, {{formula:156fb828-ef35-4394-9873-037f7821c303}} .
By Hoeffding's inequality, for any {{formula:19de7431-e118-4f0d-bbe4-69b859c56c70}} , {{formula:a04cd92d-c69d-45e1-aeb4-9539ae1d3c65}} . Hence, we can take {{formula:7957bd9d-3717-4a21-848a-d3aa705a0d5d}} , for any sequence {{formula:7e741e75-a40f-4ad4-ac28-b4d1f1ba4ab9}} with {{formula:9b02e6e9-a4fd-4146-8334-f5adee3e06f5}} .
Thus, the condition is that for all {{formula:e3e19e1c-df88-4fe9-b6fe-a18d69df56ed}} large enough,
{{formula:223490ab-d2da-4d5e-a8fc-c48a149ddd4f}}
This requires that {{formula:5601e8b3-d0d7-4a1c-9615-ef873f4e6296}} , which we can ensure holds for all large enough {{formula:abfa51b2-e8b7-4266-99a7-82c989a9c0b0}} by taking {{formula:a799a31f-9c91-4cb0-a99e-3506825feda0}} to grow sufficiently slowly. For such large {{formula:22709b91-d974-48de-b686-f4a3b2fb0254}} , the condition is
{{formula:d824d0ba-2ce9-4b65-8fdc-62dec6defe4c}}
Clearly, this holds when {{formula:d92c081d-9099-4f8d-953f-64ae71637417}} grows sufficiently slowly, for instance when {{formula:7160e0d3-9e12-4a36-a7a2-1899e79ff113}} , if {{formula:ee7e9307-9f9c-410e-a353-c070ebb036a0}} .
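The Hoeffding step above can be spot-checked by simulation (a toy check of ours; the 5% level and the dimensions are arbitrary choices): the maximum absolute Rademacher sum across p coordinates rarely exceeds the union-bound threshold sqrt(2 m log(2p/delta)):

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, delta, reps = 500, 200, 0.05, 200
# Hoeffding: P(|sum of m Rademachers| > t) <= 2 exp(-t^2 / (2m)); a union
# bound over p coordinates gives the threshold below at level delta.
t = np.sqrt(2 * m * np.log(2 * p / delta))
exceed = sum(np.abs(rng.choice([-1, 1], size=(m, p)).sum(axis=0)).max() > t
             for _ in range(reps))
print(exceed / reps)  # at most about delta = 0.05 (typically much smaller)
```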
Proof of Proposition REF
Since {{formula:54835d61-2b95-4ee0-8e34-3fe822d267b2}} is a norm, it is 1-subadditive.
Thus, the condition from Theorem REF reads {{formula:038df54e-1cdb-42b6-9870-a7c662dd2617}}
The requirement on {{formula:22f3e66d-6b37-452f-8da4-2e3204b0dffb}} is that with probability tending to unity, {{formula:b947db97-8db3-4f12-acce-dba42c1067d3}} ,
and for {{formula:441e1a81-bae1-4212-bdd6-9b1b40e169aa}} , with probability tending to unity, {{formula:8b4d7a4d-b9b5-459f-bbdf-aaf49c3d0ae3}} .
Now, for a normal random vector {{formula:4f53943a-a8c0-483d-9e36-4761a904103b}} , we have {{formula:050c7e90-3f67-4292-8627-75dbd330fd7e}} .
For {{formula:3ea72c13-4e22-4946-975d-9b25df600e4f}} , using standard chi-squared concentration of measure {{cite:57bee019bc053c44901c8e1fd315df5c1260798d}}, we have {{formula:440f8b79-3580-461a-ae24-870fb214c2ee}} .
Moreover, {{formula:107c2798-3896-40f2-89bf-619fe4d2286b}} with probability tending to unity.
Hence, we can take {{formula:34b2a81f-db67-4cad-91c1-8366ecbba017}} .
Similarly, {{formula:1a739179-3a04-49db-ad70-263cc4e5c817}} .
Thus, the condition is that there is a sequence {{formula:7dbe0b4d-cf8c-4562-a95d-0e9ad8179877}} such that {{formula:ba40b508-a735-4b30-8142-1b5ea0fd358b}} and for all {{formula:3aa3aa64-c003-4c79-a821-e6cf6a5da1c8}} large enough,
{{formula:e064c297-1d5c-4443-b932-313de4111c2f}}
This holds when
{{formula:ae48e04b-e076-4dd1-85e1-8c20d20ed958}}
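The chi-squared concentration used above can be illustrated as follows (a quick simulation of ours; the dimension is arbitrary): the norm of a standard Gaussian vector is sqrt(p) up to O(1) fluctuations:

```python
import numpy as np

rng = np.random.default_rng(0)
p, reps = 2000, 500
norms = np.linalg.norm(rng.normal(size=(reps, p)), axis=1)
# Concentration of the chi distribution: ||N|| = sqrt(p) + O(1) w.h.p.
print(norms.mean() / np.sqrt(p))  # approximately 1
print(norms.std())                # O(1), about 1/sqrt(2)
```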
Proof of Proposition REF
Since the maximal singular value is a norm, it is 1-subadditive.
Thus, the condition from Theorem REF reads {{formula:29bef88b-0504-41bd-b83d-ec13a2d153d3}}
The requirement on {{formula:f44f4f73-f8fe-46d8-9a0b-0a9c5b4af088}} is that with probability tending to unity, {{formula:47264336-c2ca-4531-8ac6-76fb55b231f1}} ,
and for {{formula:5f412ccf-23a0-454c-b738-c1a57c02ec1e}} , with probability tending to unity, {{formula:0d32b149-e8c1-49ae-b64b-3a74515ded98}} .
Now, for iid normal random vectors {{formula:f73da433-42bc-458a-b1b8-6a4697d00e6d}} , {{formula:b43c90d8-ff09-4fdd-8d41-8fef7916daec}} , we have {{formula:b4433e55-0b00-43a6-9de0-2292c55bfa63}} .
Thus,
{{formula:89e70b60-c3fc-4c00-a94d-c8136fac8210}}
Further, for any matrix {{formula:49505867-8587-467e-9b62-434ab34e2077}} and scalars {{formula:45614528-f139-4c6b-ad8c-8b3f4bccb129}} , {{formula:aa984694-2f46-4c6c-9d0a-8ac7f5f0963b}} ,
$$\Bigg\| \begin{bmatrix} d_1 m_1 \\ d_2 m_2 \\ \vdots \\ d_{p_m} m_{p_m} \end{bmatrix} \Bigg\| \le \sum_i |d_i| \cdot \|M\|.$$
Now, from standard concentration inequalities we have {{formula:032b15b5-12f5-46c1-9ddc-fc0e056dca4d}} . This follows from the Lipschitz concentration of Gaussian random variables, see e.g., Example 2.28 in {{cite:fa9b5ad2afd6def44461605469033eba6415fc90}}, and from the fact that the mean of the {{formula:1337345c-b9be-49b4-a28d-9bd74b95b13a}} random variable {{formula:89e461d2-9bb0-461f-928c-b1d2be042c71}} is bounded as {{formula:573dff35-1f69-429e-b522-c588d207982c}} , see exercise 3.1 in {{cite:57bee019bc053c44901c8e1fd315df5c1260798d}}.
Taking a union bound, we find that {{formula:b6845ed9-94a2-45c6-bb3c-24f3d875aefc}} . So, {{formula:431c3fcf-708d-4c53-b71a-b77334dda1d7}} as long as there is a sequence {{formula:82fd1341-e615-4193-b5af-56e1a29d0e94}} such that {{formula:8cd3c561-6360-46fb-81a8-3f6417c79b8e}} and {{formula:c9ba13cc-1976-4270-b4e6-7567732f6d34}} . This holds if {{formula:f9e1d221-10d3-4460-bbe5-8b614ccb7246}} . Then, we also have that {{formula:15040556-5b70-4566-8340-f68df95106f8}} .
Thus denoting {{formula:d26faa8c-b252-47e4-a276-f61d82361206}} , with probability tending to unity,
{{formula:2798b24f-a5cd-478a-a996-7d293fa8b69f}}
It is well known that as {{formula:b71f34b1-c004-4af0-b5cf-7ce316079ae0}} such that {{formula:1ae36110-b6a3-4a72-97ca-43bf90077f9b}} for some {{formula:e0249bde-9754-4afd-ab1c-95873fa9fc8e}} , we have almost surely that {{formula:cdcbcb62-c6c7-4959-9e9b-5b5a968d6d81}} .
This follows from {{cite:02752078df4f5222abb6d9654dd8042a722c5a8d}}.
Hence, we can take {{formula:0910fc61-d799-4d53-834e-ccc27f0a538a}} .
Now, due to the distributional invariance of {{formula:10935159-9725-40a3-beed-e8c82cc00468}} , we have
{{formula:f59d3f29-b14c-4891-8a6d-c12c577b4097}}
Hence, using the same argument as above, for any sequence {{formula:8d6eecde-cdb2-48dc-a4e3-6ce8a022736c}} such that {{formula:4d324729-32b2-4af6-8d09-d9cccd2107a5}} with probability tending to unity, we can take {{formula:3ac0d202-3983-4c76-a26e-75041d4598c5}} .
Thus, a sufficient condition is that there is a sequence {{formula:08944f1f-e136-4483-ab8c-4c6e9edcb459}} such that {{formula:672eb97f-0187-4d45-aaf5-127801c5732f}} and
{{formula:58fe8112-eb5c-40c6-8590-387bb68ff443}}
This holds when
{{formula:05426dad-074c-4835-8b4b-95f01971665f}}
This finishes the proof.
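The Bai–Yin-type limit for the largest singular value used above can be illustrated numerically (a quick simulation of ours; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 1000, 250  # aspect ratio p/m = 0.25
X = rng.normal(size=(m, p))
top = np.linalg.svd(X, compute_uv=False)[0]
# Bai-Yin law: sigma_max(X) / sqrt(m) -> 1 + sqrt(p/m) almost surely.
print(top / np.sqrt(m), 1 + np.sqrt(p / m))  # the two values nearly agree
```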
Proof of Proposition REF
Since the map {{formula:936358d8-b2b8-46c7-be67-2992972e599f}} is a quasi-norm, it is 1-subadditive.
Thus, the condition from Theorem REF reads {{formula:e3912ff1-bc4b-4bdb-902a-9c789dafa469}}
The requirement on {{formula:aa2e74b1-5ade-4c36-b827-0079588a8e03}} is that with probability tending to unity, {{formula:dbcf4e64-9c56-4c54-a588-cf386671e461}} ,
and for {{formula:5ea60d9c-c7bd-4c93-b486-60a317bfc3a2}} with iid Rademacher entries {{formula:eb48c1d9-dcf9-4d1f-b6d4-eeffec640b16}} , {{formula:9629e195-5664-4d18-a020-4d6238b3026a}} , with probability tending to unity, {{formula:f7f4f182-faf4-4203-b550-329bc0c1b57b}} .
Let {{formula:847a5774-21ff-469a-b64b-dc0cb4ac46d3}} be any sequence such that {{formula:a7757861-010c-45f3-a12b-d92e5f6f93fc}} for all {{formula:d8d90e6b-51eb-4ff1-b840-7895e953efeb}} and {{formula:3642e58b-c6f3-4393-a085-222c8de7743b}} as {{formula:e9701e09-1d3e-4d2c-b62d-681f0471f38d}} .
Now, conditional on the vector {{formula:7cbc5754-43fb-458c-90c8-adedb2c856be}} , {{formula:8cb5bcd2-f293-4149-92c3-9a4098c6521d}} is an {{formula:94ae06dc-2ab8-4271-9894-f762b2d59664}} -dimensional Bernoulli process over the rows of the matrix
{{formula:0ac95240-fc10-4020-b4da-e51d0b4cd864}}
Thus, conditional on {{formula:7a638795-23b8-455b-ab95-394426197d3e}} , we have {{formula:3acca7d8-42dc-46d4-b73c-4f29854292a7}} {{formula:0c82bb16-b036-428f-87fd-a8ddaad7ed64}} with probability going to unity, see (REF ). Thus, it is enough to take {{formula:69ff6ab3-db48-4dfd-a535-fc3b92b281c6}} to be an upper bound of this quantity with probability tending to unity.
Next, writing {{formula:b32b2a6d-00bd-47e7-b4aa-a5b4b88f9ed7}} ,
$$X_m^\top B_m X_m \le m \, \big\| X_m^\top B_m X_m \big\|_{\infty, m} = \sum_{j \in [p_m]} \big| [X_m]_{j,\cdot} \, B_m X_m \big| \, \frac{1}{m} = \sum_{j \in [p_m]} X_m\big([X_m]_{j,\cdot}\big) \, b_m \, \frac{1}{m} = m \, v^\top(X_m) \, v_{b_m}.$$
Thus, it is enough if {{formula:2fc5256c-536b-4847-8873-36bf03344c9e}} .
Thus, a sufficient condition is that there is a sequence {{formula:fba693bd-8120-40d3-be19-120caea77cc5}} such that {{formula:a541bcb9-10e5-40e7-b035-013a22774a98}} for all {{formula:3ba2fcd4-35ac-4064-bb07-cf880c9fff63}} and {{formula:3bff96ce-6b4b-4006-835d-c8447d95b534}} as {{formula:b6581ef7-52f2-4ac7-aa89-5c737436b778}} , and a sequence
{{formula:078e8fd3-e496-484b-beb8-6128392e8701}} such that {{formula:4e7cd052-d235-4fa6-8af3-d3766d4da2ff}} and
{{formula:f2b98bea-3b0c-4fdd-b19a-0e63b9793c47}}
This finishes the proof.
Proof of Proposition REF
We can write {{formula:fd0c606a-2046-4834-91d3-ed7182ecacef}} , for {{formula:bbf4206e-c8e4-4218-9ce5-2a22eef60d88}} , where {{formula:c9e81a0e-18c3-4711-afc3-7b1418c9991f}} are iid.
Similarly, we can write {{formula:c73df6fa-2965-4040-9c51-42e8657d5db6}} , for {{formula:2fe086e7-ccc9-4e62-b5e5-8e2c16f7e75b}} , where {{formula:c3e58a56-7e01-4e82-87cc-e1fbbb2e460a}} are also iid.
We can arrange the datapoints as the rows of a matrix.
This model has a signal-plus-noise form with nuisance {{formula:db440a5c-70b1-4a2c-a844-3ab0aad7297b}} and signal {{formula:d516c449-4e4e-46ab-a0fb-ad450c97fefb}} , where {{formula:a27be49b-7e54-461d-94cb-d62b976a7477}} .
We can follow our general approach for problems with nuisance parameters, see Section .
Let {{formula:3bdc0dc0-70ef-4c11-adcd-c179c0fbf769}} be the projection in the orthogonal complement of the span of the nuisance. We project {{formula:4f9ba423-d139-47d6-bbdd-520e74cc12dc}} to {{formula:76247761-d656-400d-ad3e-0e67798b4d9b}} , and we obtain a standard signal-plus-noise model {{formula:d47f569b-936e-417f-a30f-8d5f16b059aa}} .
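The projection step can be sketched as follows (a generic instance of ours, taking the all-ones direction as the nuisance, in which case projecting onto its orthogonal complement reduces to centering):

```python
import numpy as np

def residual_projection(V):
    # Orthogonal projection onto the complement of span(V) (columns of V).
    P = V @ np.linalg.solve(V.T @ V, V.T)
    return np.eye(V.shape[0]) - P

# Toy instance: n observations sharing an unknown common mean (the nuisance).
rng = np.random.default_rng(0)
n = 50
V = np.ones((n, 1))                     # nuisance direction: all-ones vector
x = 7.3 * V[:, 0] + rng.normal(size=n)  # common mean 7.3 plus noise
Pi = residual_projection(V)
z = Pi @ x
# The projected data no longer depends on the nuisance; here Pi centers x.
print(z.mean())  # approximately 0
```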
Since {{formula:688c8a0e-6971-42ca-8368-0c54f4d7dbd1}} , we have
{{formula:868a1a3d-c591-411d-907a-f52073b6fd97}}
Also
{{formula:b3b94eb8-1f26-498b-bdca-daede578bc7c}}
We can write the test statistic
{{formula:470363b6-ed2c-42d8-a475-de5ea8a029fd}} as {{formula:695c8d32-ad31-44e0-84a9-2c2d9a4011b8}} , where {{formula:478a9c65-8b95-4d7f-87fd-eb8c9b6b738c}} . Note that {{formula:52684546-4b13-4da6-b140-eebd5b8f8ea5}} .
The test statistic is clearly 1-subadditive.
Thus, the condition from Theorem REF reads {{formula:1bfb3018-b225-41a6-9ff3-c1f0880ea166}}
The requirement on {{formula:e47098ab-9e67-45a7-8002-1a8e51939322}} is that with probability tending to unity, {{formula:803764e9-79c9-4498-91fb-1fd1639bd062}} ,
and for a uniformly random permutation matrix {{formula:37878d30-e04d-4ceb-9659-e75d615a3ba0}} of {{formula:115112d7-184c-448f-9c33-b6be5a22385c}} entries, with probability tending to unity, {{formula:8802b0d8-cf0a-47d2-9034-2333c8f00490}} .
Now,
{{formula:00e141c7-bd86-4740-8119-dec272dda504}}
Also,
{{formula:fb105fb1-4de1-4bb1-b04b-af8759b8d97f}}
Consider the random variable {{formula:2bb5bf0c-e2d7-4c21-a212-6fdd0ba5ffa5}} , where the randomness is due to the random permutation matrix {{formula:a134e0f5-0d8a-4a29-ac17-3a510b8bbff7}} .
Let {{formula:0994baa6-2bc5-4d4c-842f-356198e5ddc7}} be the dimension of {{formula:03b7f662-1173-4e52-9e63-1301ce6c7f21}} .
Now, if {{formula:b907c584-702a-4131-936c-d57b9c309ba2}} denotes the permutation represented by {{formula:f7a63edc-1f4f-410f-894f-1caf41295545}} ,
\[
U^2 \;=\; \langle w, \Pi w\rangle^2
    \;=\; \sum_{i,j} w_i w_{m(i)}\, w_j w_{m(j)}
    \;=\; \sum_{i,j} w_i w_j \, w_{m(i)} w_{m(j)}.
\]
If {{formula:05626ae7-52cc-4ddd-857c-247611b2b457}} , then {{formula:d629f409-411b-4c56-a3b5-bca33593cc05}} .
If {{formula:ecb6cd3c-953f-4a77-85b5-712dcaee1758}} , then, since {{formula:b2fb572d-d1b6-4a40-8197-ea5e24e7441b}} ,
{{formula:0e341d5f-dd5f-49c8-bbb8-e682b33304e6}}
Thus,
\[
\mathbb{E}[U^2]
  \;=\; \sum_{i} w_i^2\,\frac{\|w\|^2}{d}
  \;+\; \sum_{i\neq j} w_i w_j \left(-\frac{\|w\|^2}{d(d-1)}\right)
  \;=\; \|w\|^4\left(\frac{1}{d}+\frac{1}{d(d-1)}\right)
  \;=\; \frac{\|w\|^4}{d-1}.
\]
Now, we can check that {{formula:4ade709c-b54f-4b5c-bf0a-4e3282df5e77}} . Therefore, by Chebyshev's inequality,
{{formula:e2d01bbb-9c13-4e1d-b52c-3a2ec38f047f}}
Thus, if {{formula:41da0311-d5ac-412d-a3b9-e5423382dd8e}} , we can take {{formula:e3c60643-3bd7-4142-a786-762134e8deaf}} .
Thus, a sufficient condition is that there is a sequence {{formula:63e95caa-e33c-4dc8-9e68-f25aa96a1cb2}} such that {{formula:0a6bebeb-9ae6-41d6-af40-9f835259b7c6}} for all {{formula:87ce9d2c-1ea1-4a0d-917f-337a6ba4c329}} and {{formula:7c495946-c8d8-4242-8893-ca28ca9bd78b}} as {{formula:c7f5025c-41e4-47f4-a7ef-292dea2828ae}} , and a sequence
{{formula:2f2c9b87-74fe-4b3c-8fee-b9c71d52e8e6}} such that {{formula:3d65ea0c-d2a1-4a99-aaf3-04a0cfde2e06}} and
{{formula:e3d9ad83-6de4-49e0-8722-f0f1ff44b63c}}
This requires that {{formula:b7c26374-1824-4247-98c7-a49ab564616b}} . Then, we can take {{formula:83e43c8e-ff9c-49f2-88ce-a9a63ea70c17}} to grow sufficiently slowly, and the above condition holds if
{{formula:4093ed32-0154-41ca-9edb-009494f6bc10}}
This finishes the proof.
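The identity used in the proof — that for a centered vector {{formula}} the second moment of {{formula}} over a uniformly random permutation equals the fourth power of the norm divided by {{formula}} minus one — can be verified exactly by brute force over all permutations of a small vector. The following is a minimal stdlib-only sketch (the variable names are ours, not the paper's notation):

```python
import itertools
import math

def mean_U2(w):
    """Exact average of U^2, with U = sum_i w[i] * w[m(i)], over all permutations m."""
    d = len(w)
    total = 0.0
    for p in itertools.permutations(range(d)):
        U = sum(w[i] * w[p[i]] for i in range(d))
        total += U * U
    return total / math.factorial(d)

w = [3.0, -1.0, -1.0, -0.5, -0.5]        # centered: sum(w) == 0
d = len(w)
norm2 = sum(v * v for v in w)            # ||w||^2 = 11.5
print(mean_U2(w), norm2 ** 2 / (d - 1))  # both equal 33.0625
```

The agreement is exact (not just asymptotic), since the derivation only uses the first two moments of the permuted coordinates, which are computed exactly by the enumeration.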
| d | 1b3880885988e14507bede33389fb43b |
Virtual Camera: preprocessing of unified camera's intrinsic and extrinsic parameters.
Front View Backbone: the front view feature extractor.
Spatial Transformation Pyramid: projects front-view features to bird's-eye-view features.
YOLO-Style Representation: detector head based on YOLO{{cite:8608f76b44c3c72956e306bbfa50b1ee68f60552}}.
{{figure:ee5c1403-19d5-40fa-a9c0-7bdffb7aa7c7}} | m | 2cdcbfd635edb1e1ea826cb8d298c72e |
where {{formula:26fcfd6b-aaf5-4617-829d-faca12b62735}} and {{formula:d107df06-f0e8-4b31-a066-f94b181d7c3f}} represent the exchange interaction within the unit cell (intracellular) and between the two unit cells (intercellular), respectively. {{formula:7bb256cc-47ef-49b7-9aa1-c7da59064d2f}}
is the easy-axis anisotropy term for the quantization {{formula:ddf9d879-eae6-4bb7-9afb-24429373239a}} -axis, where {{formula:9b74bf7a-e21a-4fac-bf6a-d544e7b5194c}} is the axial anisotropy energy ({{formula:bf5d0dc8-eb99-4e4c-9533-a82e7fe19682}} =10{{formula:c02fa6ee-0611-4237-adb5-db16ae579b9f}} is used for all the calculations). {{formula:6f4d396e-abb4-4e80-8099-67db8a82d005}} denotes the DMI term {{cite:138b718ead8f5aeae77066a437f3f809abc2d144}}, {{cite:3425f6737c815056e153170dcfd564d8e4ad9a33}}, originating from the spin-orbit coupling, while {{formula:7a345999-1e85-420c-a1ca-19f77a18d641}} represents the PDI {{cite:85659028a714ab1a9c33125f61352327c635cd4c}}, {{cite:5cea1965f827a39c87b322d4381a40def384fa68}}, arising mainly from the strong spin-orbit coupling and the orbital degree of freedom.
| m | 993a7a9d28cfb41c3da18aaa615df19e |
The advantage of topological spatial descriptors is their versatility in analyzing spatial and temporal structure in complex data. With the flexibility of PH to propose and adapt different filtrations, the standard PH pipeline can be tailored to study a wide range of other spatially patterned systems.
Here we showcased the power of zigzag persistence, a topological measure that has recently benefited from improved computation {{cite:ade577f124858ab5e79880334e04037d60710793}}. We combined zigzag persistence with statistical landscapes {{cite:7411d2480d443e8c215705777cd792a79a3c8957}} to enhance the identification of the geometry of the initial configuration of species as well as the geometric mechanisms of species competition and, ultimately, coral extinction in the metastable parameter region.
| d | 47dfa518f8bbac468deb70fc2a7e78dd |
To demonstrate the effectiveness of our SS-CADA in addressing the cross-anatomy domain shift, we compare it with several other methods, which can be divided into four categories: 1) using only {{formula:047e799b-8bb1-4b10-acb5-3c5299e3d643}} : a standard U-Net {{cite:9a126600fd24992537c489b2b923a5b8cebeb3d3}} learned only from the small set of labeled XAs, denoted as L-SUP; 2) using {{formula:9ad4808d-9cc5-4815-93dd-b22c7754158c}} and {{formula:23f42ebe-c396-456e-8fab-8661bdf50f4c}} : this is a standard UDA problem, for which we use SC-UDA {{cite:aed63c0edd20bbc09a6d72733596280696d6fee1}} , which incorporates shape consistency between the two domains; 3) using {{formula:ec50c373-c8bf-41cd-9e8d-60eb96daa5a7}} and {{formula:e2a6de8f-70f4-499c-a856-207d8bab8418}} : this is a multi-domain learning problem, for which we adopt three methods: a joint training paradigm (JOINT) that ignores the domain discrepancy, an X-shape multi-modality learning network (MML) {{cite:bbf3b6a7ba0d78d72814419b8978789437da4539}} , and a U-Net with the proposed VSBN; and 4) using {{formula:3fabc201-309e-4a48-a40b-7fffd79df259}} and {{formula:12420f18-b0ba-4048-ae74-9dd00b14b7b7}} : this is a standard semi-supervised learning problem, which is addressed by the self-ensembling mean teacher, denoted as SE-MT.
| r | 3309a85eb0642d073c015f8f8e2bc33e |
For the NCT-CRC-HE-100K dataset, EfficientNet {{cite:99824352bc6558df00f71a03b275f1481b743f0e}} pre-trained on ImageNet was used for feature extraction. The training set consists of 70,000 randomly sampled images, and the test set includes the remaining 30,000 images. Table REF shows that ESH1 and ESH2 achieved the best mAP for 16, 32, 64, and 128 bits. For precision@1000, DSH achieved the best result, with ESH2 2% lower. However, as the number of bits increases, the ESH algorithms improve more strongly: ESH2 and DSH perform equally at 32 bits, and the ESH algorithms outperform DSH by margins of 3% and 5% at 64 and 128 bits, respectively. The ESH algorithms clearly attain the best precision@r=2 under the 64-bit setting. Although DSH and LGHSR achieved the best performance for the 16- and 32-bit settings, the ESH algorithms were still competitive. The third row in Fig. REF illustrates how the ESH algorithms perform compared with recent methods in terms of precision-recall curves; in almost all cases, the ESH algorithms achieved better performance than the other methods.
| r | 22a28bd0418e110427d5b184abaacfb0 |
+ ShakeDrop + RA {{cite:2cc83d2e8feb2d1c5fa7ad1dac31602d81b857d2}} {{formula:6a8b7dd8-0443-41e8-906b-b40b0e118c9e}} - - {{formula:88d982cc-1cb2-4870-8a8f-56bdd7aea754}}
| r | 4a481a0dc7a9be9e3048ffe6511fcefe |
then t:perp recovers {{cite:a28ba4a9ed636b44484d83ea4db020f895f48cd9}}.
Note that (see {{cite:3f96d20325e4ab5ae5fefccf4c308dbce1865fe7}}) a linear isometry need not be
surjective.
| r | a6641ee53bdf50f9af5da17142e530db |
The flare energy, expressed by Equation (2) of {{cite:f35b155ef9ee2c8b4a23e84c05bf50fca32bad26}}, has generally been used to examine the level of flare activity in many studies in the literature; examples include {{cite:2a4dedad85bf64fed3ba290fd3136faadefb1fc3}}, {{cite:2f9b5ace89ee2b62906ee0f0c7ac04811274cd71}}, {{cite:30f3097a1117853fe850fcf181e08b40dcb80745}}, {{cite:c7211255e0930de99a89692119d72db225d5f3b9}}, {{cite:d7ebfff763bade3109184395a986495e71c8d1c8}} and {{cite:d8f6335d07fb95e6437f44f1760b7c559597f017}}. The luminosity parameter ({{formula:9b152627-e91a-4d8d-81a3-26d2905e89bf}} ) appears in the expression for the energy ({{formula:eef1883d-80b4-42ef-8ec2-454ceb53f872}} ) on which all these and other studies rely. The luminosity ({{formula:8073058c-0a6c-4ba1-b586-3f9334e4d901}} ) is different for each star. Although the masses of M dwarfs differ only slightly, the luminosities of two M dwarfs with very similar masses can differ dramatically because of their places in the Hertzsprung-Russell diagram. This means that the computed flare energies can be very different from each other, even if the light variations of the flares occurring on the two stars are the same. For this reason, the equivalent durations ({{formula:7489b92b-e774-461f-ae98-f78d71dd65a1}} ) were used in the analyses in this study instead of the energy ({{formula:81301ad4-8210-44b2-83de-abfff388b8f2}} ). Any difference in the equivalent durations of the flares appears in the energies in the same way.
| r | a5111f2556f515cf3aa60b8b9e7f8fce |
The well-known receding torus model {{cite:46add6e6f62fb6a719c025426d696be5807091e7}}
predicts that {{formula:12e803b4-44ef-4264-8df8-78a4452af835}} decreases with increasing {{formula:a584c7ef-a7af-4bd3-a7e2-6597618bcbac}} . Whilst our data marginally support this prediction, we find a stronger anti-correlation between {{formula:0a2cb450-c1d3-43af-9282-12441396a809}} and {{formula:b06b463b-0951-435c-bd5c-d3734570358d}} . It is therefore not clear which parameter (or whether both) is the fundamental driver of the observed trends. This is further complicated because {{formula:4e934b52-3ae8-4736-8125-eaaad52ae228}} , so there is an implicit bias whereby {{formula:72c93c35-056c-4ed4-9381-8e66b29dd37f}} will anti-correlate with {{formula:da2b7d13-ce71-4aed-bde2-270e6d8214fe}} .
| d | 7b8912491351c648ef32f7d78f7e3971 |
The present paper illustrates the possibilities and scope of such signal-driven policies in the classical optimal investment problem with jumps, going back to {{cite:e710538e8be03b503232cb82dbcd4bd145c6bdd6}}. Specifically, we will assume that in addition to the standard Brownian motion the stock price is also driven by a compound Poisson process. Contrary to the standard, purely predictable case we will allow that, whenever there is a jump, say at time {{formula:11e744a7-7437-4c43-88db-c3c8306ae599}} , the investor receives, with some probability {{formula:ff760b85-1bdd-495f-94cc-9ff124e26d65}} , a signal {{formula:9f4abc30-b345-45c2-b942-e4beb7a0d8e8}} on its size. Given this signal, the investor can then adjust her holdings to be best prepared for the price jump {{formula:14e8a00f-1eb1-40c5-82f8-da894bbda5db}} .
| i | 6e1ec7ba5549d72199346216f490fa16 |
Our
results within the HBM
are given in Table REF , along with the ones from the literature and experimental data {{cite:44386ecd9f33039b25f1ebdc12e57eaa5cd6ac6d}}, {{cite:d101236ba6bb319457e0e1c49be3fabb9a2cf3a9}}.
Our values of {{formula:9a0e3d7a-a61b-439f-8d54-f7f16347bf98}} and {{formula:276206a4-78d6-4e23-84d0-4ca9e708656c}} have small uncertainties because {{formula:18a31ae1-428d-41f1-8710-ee179b25b389}} are correlated in the model calculations.
In the literature,
Ref. {{cite:675702f4e0e82649a37919c7038c54211d841a77}} employs the lattice QCD, and Ref. {{cite:315dd6aafa0cd454f9826a238a792f0f3e9abb09}} includes the contributions from the charmonium resonances.
We see that the angular observables in the literature and this work are basically consistent.
Our results for {{formula:16930325-af74-4101-ae30-dd57be207efa}} and {{formula:94a8571a-2396-456c-9ffb-882a1bb7c4f0}} are slightly larger than the others due to the updated {{formula:45fbe908-9a6d-4e93-a5dc-2845c214e60e}} (they used {{formula:fc3d835f-1d76-4c54-8505-ebf7c045a1b6}} {{cite:4167d001f97cf3aaca0d3d47c693c134b7315e7d}}, in sharp contrast to {{formula:80a9989a-e5f4-455e-bf7d-e9a2af9ec40a}} adopted in this work). Notably, the experimental values of {{formula:0fa7258e-6347-48c5-9da4-a754021e84fc}} are nearly twice as large as the theoretical predictions.
{{table:e6bc5362-a54f-40de-8b90-a4fa095eecbc}} | r | 1cf19bfb138b9044db561bed99f374eb |
If we assume that the coefficients of equation (REF ) are rational functions, then the method in section also applies to the third Painlevé equation {{formula:f1949d98-b5f0-4c22-bbcb-60593bed747d}} . Recall that the first four Painlevé equations {{formula:9ea52692-1847-4cdf-b5d8-ac081dff293c}} , {{formula:19e3195f-7d5a-4e44-aa82-cf938162bd14}} , {{formula:c956629d-97ea-494b-adbf-8f57cc3b2b70}} and {{formula:c1697040-7db6-4cdc-8e84-71c0dbc48f98}} (see, e.g., {{cite:5393723e3f481af2c3ca858bacdd5e516cc11466}}) are:
{{formula:baba62eb-11ee-4023-92b5-527d32c881e5}}
| d | c635a62cbfd7037a9064fd9fb9853e4b |
These approaches are non-heuristic but somewhat arbitrary in that researchers must decide what to resample from the data. A classifier should effectively approximate the hyperplane that forms the boundary between classes. If the resampled data do not fully represent the quality data points that are crucial in determining this hyperplane, the model fails to generalize its classification performance to new observations. Moreover, most approaches focus on handling imbalance in images, and only a few {{cite:acd093d9ad1f4ff345daf51a50b255a00c68c1a2}}, {{cite:f6d9af82314b3caefb3dcabe2a9a20fc7257e96e}} have tried applying SMOTE and its variants to text data. However, these methods assume effective numerical representations of the data, since SMOTE works only in feature spaces. Even though effective numerical representation methods such as GloVe {{cite:0c6afa86a16e11cf8ab96bc62b15694433cfd058}} and Word2Vec {{cite:7facd2f587358345077034efb2a45edee80d740d}} have recently been proposed, methods that can be used without depending on the representation method are still needed.
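Because SMOTE operates purely in a feature space, its core step is a simple interpolation between a minority sample and one of its nearest neighbours. The following is a minimal illustrative sketch (not any cited implementation; the brute-force neighbour search is an assumption made for self-containment):

```python
import random

def smote_sample(X_min, k=3, rng=random):
    """Generate one synthetic minority sample by SMOTE-style interpolation.

    X_min: list of minority-class feature vectors (lists of floats).
    """
    x = rng.choice(X_min)
    # brute-force k nearest neighbours of x in feature space
    neighbours = sorted(
        (y for y in X_min if y is not x),
        key=lambda y: sum((a - b) ** 2 for a, b in zip(x, y)),
    )[:k]
    nn = rng.choice(neighbours)
    lam = rng.random()                       # interpolation factor in [0, 1)
    return [a + lam * (b - a) for a, b in zip(x, nn)]

X_min = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(smote_sample(X_min))                   # a point inside the unit square
```

This makes the dependency explicit: the quality of the synthetic samples hinges entirely on the numerical representation in which distances are measured, which is exactly the limitation discussed above for text data.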
| m | 2cc31f0b9ae40fcc143927d82fbd627a |
It is worth mentioning that with the recurrent design of the architecture, it is natural that the parameters for each iteration are shared, whereas this is not the case for other approaches such as VNs {{cite:901c551db1d2ce566182fbfcf358db5c41806272}}. Similar to other DL-based unrolled network architectures, our proposed approach is a learning-based method designed to achieve optimal performance with a fixed number of iterations set before training, though the recurrent design enables it to be employed for an arbitrary number of iterations at inference time. In our work, we determined heuristically that 5 iterations represents a compromise between GPU memory requirements and performance. Therefore, we fixed {{formula:005c301a-61d9-406a-bff7-9be04c96698d}} to train our network, and thus the optimal inference performance is also expected at {{formula:cf945feb-b7f0-4aeb-867d-377796e0953b}} . If {{formula:e5284b32-dc2e-4c0f-8af3-5d68f690da6b}} is increased correspondingly at the training stage, we can expect the performance of the network to improve further and converge after a few iterations. However, due to the multi-coil setting and the limitation of GPU memory, we were not able to train the model with more iterations. More efficient training schemes will be investigated in the future to improve the performance of the proposed method. In addition, with respect to the hyper-parameters, our DL-based method appears to be more robust at test time, while CS-based approaches may require case-level tuning for optimal results.
| d | 3be7d517af25af91a72a6883fa2665e8 |
For the distributed algorithm using the alternating direction method of multipliers (ADMM) {{cite:0e90dceb459beaf6a140f0b11d774f99f9da7146}}, {{cite:a1a7f5d771693f14bc73da872ab54716507ee29b}}, {{cite:da45db1d3bdc257204085437c80fbee619800af7}}, it is hard to obtain the globally optimal solution of Problem (3) with {{formula:0b77255e-2657-4a43-84f8-dcd7c931492a}} because Problem (3) is nonconvex.
Besides, since the power vectors of different BSs in the Lagrangian of Problem (3) cannot be decoupled, it is difficult to establish the distributed algorithm using ADMM.
| m | 94c91b5e0c8b2576f0bca3fa250bad46 |
Thirdly, {{cite:82ddb366eb366d55608fea98c297d3df1b41ed2c}} consider a more general model in which {{formula:5a0f4293-bb69-46d0-95e8-50ace2095249}} and {{formula:b148cb08-7934-449b-9edf-227b124a74cc}} are not necessarily uniquely determined, in which case the above restrictions would actually have to hold for all valid {{formula:88031c4c-74ef-4359-a729-1fbb34d904bf}} and {{formula:4e9f6100-a775-4c80-8c88-4a7138b26fe3}} functions, and it is not immediately clear that requiring this restriction for a single chosen {{formula:983605a2-92db-4a96-a758-3e9565dac0e3}} and {{formula:26ef9b1f-8618-4557-b4e1-3018a3af363c}} is sufficient.
| d | de680b1367c5b244445b99808a022d3e |
To evaluate the performance of the proposed OESCN, we benchmarked it against EEGNet {{cite:6dd9681a65cd91f2a3388d1192fc486cefb28082}} and the AFBD-SVM method {{cite:d44318e85e0b0b270afc423d0b85a65e77d0f580}}. EEGNet is a classic deep learning network with excellent performance for processing EEG signals; it encapsulates well-known EEG feature extraction concepts for brain-computer interfaces in order to construct a uniform approach for different brain-computer interface paradigms. AFBD-SVM proposed a manual frequency-band extraction method with excellent performance, using an SVM as the classifier.
{{table:d4e19a59-1e19-414f-8e4e-446c37d5e283}} | r | d57c8c5cc9f2a1fa00dbd0c3d3274fbf |
The geometric interpretation of the SGD algorithm we have presented
shares these qualities, as the concept of moving pairs of vertices one by one towards an ideal distance is just as simple.
In fact the stress formulation (REF ) is commonly known as the spring model {{cite:54ee8b22ef1cf2930ba3814679870405cd70534d}}, {{cite:1eb04330be5c7145b6b12ba657eed8d839b268d8}}, and the physical analogy of decompressing one spring at a time very naturally fits this intuition.
The implementation also requires no equation solver, and there is no need to consider smart initialization, which can often be just as complex a task {{cite:c6a1c1e649895bad85ebefd5796445c4f7580887}}.
Considering only a single pair of vertices at a time also makes further constrained layouts easy to implement, and allows an appropriate sparse approximation to grant scalability up to large graphs.
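The pairwise update described above is compact enough to state in a few lines. The following is a minimal pure-Python sketch of the idea (the step-size schedule, the distance weighting, and the step cap are illustrative choices of ours, not those of any particular paper):

```python
import math
import random

def sgd_layout(pos, d, iters=100, eta0=2.0):
    """Move one pair at a time towards its ideal distance (spring intuition).

    pos: dict node -> [x, y], modified in place; d: dict (u, v) -> ideal distance.
    """
    pairs = list(d)
    for t in range(iters):
        eta = eta0 * math.exp(-0.15 * t)          # annealed step size
        random.shuffle(pairs)
        for u, v in pairs:
            dx = [a - b for a, b in zip(pos[u], pos[v])]
            dist = math.hypot(*dx) or 1e-9
            mu = min(1.0, eta / d[u, v] ** 2)     # capped, distance-weighted step
            r = mu * (dist - d[u, v]) / (2.0 * dist)
            for i in range(2):                    # move both endpoints equally
                pos[u][i] -= r * dx[i]
                pos[v][i] += r * dx[i]
    return pos

random.seed(0)
pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in "abc"}
ideal = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 2.0}
sgd_layout(pos, ideal)   # afterwards each pair sits close to its ideal distance
```

Since the cap keeps each move from overshooting the ideal distance, the update never increases the residual of the pair it touches, which is the decompressing-one-spring-at-a-time intuition in code.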
| d | 2eaa4678f81beb088b1698b1a80e2d38 |
Despite its simple implementation, our EGD has not been proposed earlier for user interaction encoding, and it has two important differences from geodesic distances. First, EGD is parameter-free and thus more generalizable. The geodesic distance method {{cite:bba3333eaaeda98aa53411b5caf8c345ba77f57e}} requires a user-defined threshold to ensure that the interactions affect only a local region, which reduces its generalizability, as different images may require different threshold values. In contrast, our EGD requires no such parameter and can be applied to different images without specific adjustment, making it a simple and effective method with wider utility. Second, the EGD naturally outputs a probability map, which can be used as the probability of a pixel belonging to the foreground or background indicated by the user interactions. This probabilistic view allows it to be seamlessly integrated into a conditional random field formulation for refinement.
| d | 68eba30d51ea7c05932653f0ab3243dd |
In this section, we present the details of the interleaving learning framework. There are {{formula:9da3a5cb-0026-4f9c-9ec9-389e0b0531d4}} learners. Each learner learns to perform a task. These tasks could be the same, e.g., image classification on CIFAR-10; or different, e.g., image classification on CIFAR-10, image classification on ImageNet {{cite:eccd5f83dd092d0a895a9c6d7dc6177157a3483b}}, object detection on MS-COCO {{cite:7f51a04f39d4fac4a9650999403d03f7c9a42d25}}, etc. Each learner {{formula:972c496b-9006-4150-9632-efe151ab8126}} has a training dataset {{formula:d0717709-3036-4ed6-9297-86f774b91226}} and a validation dataset {{formula:6ee332e2-23f8-4602-a5d5-1a2c5ba7a94c}} .
Each learner has a data encoder and a task-specific head performing the target task. For example, if the task is image classification, the data encoder could be a convolutional neural network extracting visual features of the input images and the task-specific head could be a multi-layer perceptron which takes the visual features of an image extracted by the data encoder as input and predicts the class label of this image. We assume the architecture of the data encoder in each learner is learnable. The data encoders of all learners share the same architecture, but their weight parameters could be different in different learners. The architectures of task-specific heads are manually designed by humans and they could be different in different learners. The {{formula:90ca4806-3bef-40e3-8a4a-bff6331bac51}} learners perform {{formula:68749a09-21bd-4c5b-b8b5-0a3e242e066f}} rounds of interleaving learning with the following order:
{{formula:cffb7eca-6a55-462f-b000-81bbeb8fe7f3}}
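The round-robin order above can be sketched as a simple training loop. The `train` interface and the stub learners below are hypothetical, introduced only to make the schedule concrete:

```python
def interleave(learners, rounds):
    """Cycle learner_1 -> ... -> learner_K for several rounds.

    Each learner warm-starts its data encoder from the state handed over by
    the previous learner (a hypothetical interface, for illustration only).
    """
    encoder = None                    # shared encoder state passed along
    for _ in range(rounds):
        for learner in learners:
            encoder = learner.train(init_encoder=encoder)
    return encoder

class StubLearner:
    """Records the visit order instead of actually training."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def train(self, init_encoder=None):
        self.log.append(self.name)
        return "enc_after_" + self.name

log = []
learners = [StubLearner(k, log) for k in ("L1", "L2", "L3")]
interleave(learners, rounds=2)
print(log)   # ['L1', 'L2', 'L3', 'L1', 'L2', 'L3']
```

The key point the sketch captures is that knowledge flows through the shared encoder state: each learner both consumes what its predecessor produced and hands its own result to the next learner in the cycle.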
| m | 99fa786e20570b4fdec087735e14dcbf |
Anchor-Free One-Stage Detectors. We quantified the performance of state-of-the-art one-stage anchor-free object detectors on the LN detection task in T2 MRI: 1) FCOS {{cite:45fba05045653a0b88f8736411c709075a42dfd6}}, 2) FoveaBox {{cite:7bd1441ddd29c21b5c8d92927f5f56d0d71da6df}}, and 3) VFNet {{cite:f0d9657f76361171f6c3a1b0bc82b9a7303732a4}}. These detectors are superior to anchor-based detectors (e.g. RetinaNet) and two-stage detectors (e.g. Faster RCNN) because they skip the region proposal stage and directly predict the bounding box coordinates and class probabilities for different categories in a single pass. FCOS {{cite:45fba05045653a0b88f8736411c709075a42dfd6}} employs multi-level prediction of feature maps for object detection inside a Fully Convolutional Network (FCN). It also computes a centerness score to reduce the FPs, which tend to be far away from the target object center. VFNet combines FCOS (without the centerness branch) with efficient sample selection, integrates a classification score based on the IoU value between the ground truth and the prediction into a novel IoU-aware Varifocal loss, and refines the bounding box predictions. FoveaBox {{cite:7bd1441ddd29c21b5c8d92927f5f56d0d71da6df}} has a ResNet50 backbone that computes input image features, which are used by a fovea head network to estimate the object occurrence possibility through per-pixel classification on the backbone's output.
| m | ce57cc4ae492327c85bb61e113aa580c |
Lastly, we argue that the common task formulation, extracting a span or a paragraph from a single document, limits answer coverage.
To improve coverage further, models should be allowed to generate the answer based on the evidence document {{cite:ec3e7d58b7c8adf4e8d890d775c7e3b69cc53875}}, instead of being limited to selecting a single span in the document. Evaluating the correctness of free-form answers is more challenging and requires further research {{cite:5296d2785b8e8ff458bd4c186a5dcff8ae9cb933}}.
| d | be3c825fc46af19b77fa7e21cf564a4e |
However, major works on network quantization, such as BNN {{cite:5baa0f8242140bec087c21cdf908bd29303c33b5}} and WAGE {{cite:049f56ab0fcd796f22cf8952bd46b2c3c6c0cd60}}, mainly explore quantization of DCNNs designed for object classification. Due to the complexity of pixel-wise prediction, semantic segmentation tasks require deeper networks, which makes some quantization methods developed for shallower DCNNs unsuitable for semantic segmentation. Furthermore, existing works on quantization for semantic segmentation {{cite:b3197021a228e8881e8e3933a1f6a9e09e28f181}} {{cite:09ddef585f626113ecf7717ba7eca140ebdb7a60}} do not fully quantize all the parameters in the network, making them hard to implement on integer-based deep learning chips or to generalize to other existing models that are not dedicatedly quantized.
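For context, a generic symmetric uniform quantizer maps each tensor to signed integers plus a single floating-point scale. This is a textbook sketch, not the scheme of any of the works cited above:

```python
def quantize(x, bits=8):
    """Symmetric per-tensor uniform quantization to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in x) / qmax or 1.0   # guard against all-zero input
    q = [max(-qmax, min(qmax, round(v / scale))) for v in x]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

q, s = quantize([-1.0, 0.0, 0.25, 1.0])
print(q)                   # [-127, 0, 32, 127]
print(dequantize(q, s))    # approximately the original values
```

Fully integer-based deployment additionally requires that every layer, including the ones the cited works leave in floating point, consume and produce such integer tensors, which is precisely the gap the paragraph above points out.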
| i | a1b7e2e25456402bfcea495d657e1a88 |
Although the CMS method is also an expansion method, it can conveniently be carried out to the order of 15-20 or higher {{cite:0137b1ed1384bc66ad0f7e4b8bfcdf199746020d}}. Therefore, it enables high-accuracy computation of the high-order {{formula:03def503-3561-438a-8788-72e337c09c7b}} up to the order of the measurements {{cite:2fb13e453b9255147ff3b5bcf37c5992a0faec87}}, {{cite:c1bc6e8688276490febfab258e6b5f8ccca5dcde}}. The CMS method provides further advantages such as its expansion to 3D to account for tidal shape and gravity field perturbations {{cite:dd52b3f19edb2745f5543827a2426c254d4fc4c2}} and brevity in its formulation {{cite:0137b1ed1384bc66ad0f7e4b8bfcdf199746020d}}.
Its only drawback is its high computational cost, even in its accelerated version {{cite:c1bc6e8688276490febfab258e6b5f8ccca5dcde}}. This is because the CMS method explicitly solves for the 2D planetary shape, not only taking the sum over radial spheroids but also integrating over latitude. Even when Gaussian quadrature is used, typically several tens of angular grid points are required. If the integrand were a polynomial, only {{formula:85885567-274c-4818-850a-0b16214ef9d9}} grid points would be required to evaluate the integrals over {{formula:2e5cccfa-8421-44d5-96d2-84b4344e92e3}} , which for {{formula:f1e6f301-6a36-462f-aa11-6490b8d9280e}} {{formula:c4b9d98d-6e45-4d1f-b5a6-3d5cb0cd252d}} amounts to only {{formula:3656d4cc-daca-499f-8a2f-871090da3f6a}} , or even 4 points when accounting for hemispheric symmetry. However, the integrands are functions of the non-polynomial shape itself; in practice, 48 grid points {{cite:dd52b3f19edb2745f5543827a2426c254d4fc4c2}} are used. Obtaining the shape to sufficient accuracy at these grid points is the most time-consuming part of the CMS method. In contrast, ToF solves for the shape explicitly only in the equatorial plane, obtains the shape at higher latitudes by spherical harmonics expansion, and needs the shape only to the precision wanted for the {{formula:03292d54-2bd2-4d34-94b7-beb9c6987b3b}} . One may thus see a benefit in using ToF for the computation of the high-order moments.
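The remark about polynomial integrands follows from a standard property of Gaussian quadrature: an n-point Gauss-Legendre rule is exact for polynomials of degree up to 2n-1. A minimal check with the 2-point rule (nodes at plus/minus one over the square root of three, both weights equal to 1) illustrates this:

```python
import math

def gauss_legendre_2pt(f):
    """2-point Gauss-Legendre rule on [-1, 1]; exact for degree <= 3."""
    x = 1.0 / math.sqrt(3.0)      # nodes +-1/sqrt(3), both weights 1
    return f(-x) + f(x)

f = lambda t: t**3 + 2.0 * t**2 + 1.0
approx = gauss_legendre_2pt(f)
exact = 4.0 / 3.0 + 2.0           # integral of f over [-1, 1]
print(abs(approx - exact))        # ~0: exact up to rounding
```

The CMS integrands are not polynomials in latitude, so this exactness does not apply, and that is why tens of nodes are needed in practice rather than a handful.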
| i | 5aa0822477eb91a0890d615fad84d0d6 |
The function {{formula:dc4a9a8b-b60a-49f6-b8cd-0cc0ceed9dad}} satisfies the differential equation
{{cite:6ab0bdeb04b071bb02c61af1da812233dc1eb92f}}
{{formula:50551c76-d1e9-426d-a805-940d3dd381e6}}
| r | d32fc2c27b1d771c87c80a1150fadf4a |
Three popular E2E models are connectionist temporal classification (CTC)
models {{cite:61c8c00cd9b5a342f707f7eeeccd0d35bee9fc49}}, attention-based
models {{cite:7f7f9dfed54985a8f961be19c222a80e1680a935}}, and RNN-T
models {{cite:b834e8b83a32ebcfa604124dd2ca22ba66cfe2ff}}.
RNN-T models are naturally streaming and can be decoded frame-synchronously. On the one hand, they do not require the full context to predict the
next token, as attention-based models do. On the other hand, they make no
assumption of frame independence given the acoustic inputs, as
CTC models do. As a consequence, RNN-T models are very attractive for industrial
applications {{cite:cab02a28598b23c4407b61e395a63655dfba40c4}}, {{cite:def45faa6859e8ed015ba6f3aa4fa6ae5e717dc4}}.
| i | 91e957b4c1b1316c2d453c61de340230 |
Gaussian beta ensembles, with a parameter {{formula:8ddc325a-8251-4282-8268-345cd25d2615}} , are one of the most studied random matrix models. They are generalizations of Gaussian orthogonal/unitary/symplectic ensembles in terms of the joint density of eigenvalues. A nice tridiagonal matrix model for them was constructed in {{cite:0dee24540dffb7aedb0db3da57943d1a3e48d6c5}}. Based on that random matrix model, results on the limiting behavior of the empirical distribution, the edge scaling limit, the bulk scaling limit and characteristic polynomials have been established {{cite:a0395a3b2db0450adc99f059e26aad68f0619544}}, {{cite:952079d7c96ec60b56bb2d9e65ffc5cd754e8133}}, {{cite:e24252a9d4dc535865a7e35685c513073b144060}}, {{cite:7da56a90a9311c22c4e0c2e57fa0fc16e5f11ff8}}, {{cite:93bde1db1a9c3d20820d91f24b43e88b244f90de}}, {{cite:b89d5b160b4ac6e8718bcdc944c4c2e731eaefba}}, {{cite:bf0b646857e6f9306f1ead790ee61b8ee8f07d36}}, {{cite:9aa94fb468a49c1dab8f1c336f689a60d291cb18}}. It is worth mentioning that the joint density itself is good enough to solve problems such as the convergence to a limit, fluctuations around the limit and large deviation principles {{cite:d5815d5e9c0e57324d2feda325c9ed927703800b}}, {{cite:fa7c20223c0a7393d3634aed8d0805594cb60326}}, {{cite:eab0406d9caa15df4920d50fa2f82d163ab9cc7e}}.
| i | b861c37ec15122a666dbfaa230af5fe0 |
The second method involved deriving the CME speed and Alfvén speed and taking the ratio to evaluate {{formula:9ae681c8-ca15-4f8e-9075-ec5c2dc913f4}} (whilst accounting for the solar wind). As it is possible that the shock formed over an extended region around the nose, we examined five traces over this region. We found the Alfvén speed decreased from {{formula:5bad016c-1853-4217-80c8-24302e76df53}} 500 km s{{formula:826c87e2-9b2b-459c-ad00-3cae546c7cad}} at a heliocentric distance of {{formula:cbcfa53f-6614-45a4-9c0c-7288edb5a17f}} 1.4 {{formula:bf847441-8677-4a19-b85a-592c05b74ce8}} to {{formula:c41eec71-45ce-4cf0-ae0d-004ffb7e4119}} 200 km s{{formula:acf1fd44-24c8-493b-881c-9d58558bcdaf}} at {{formula:22940c30-e567-4319-a9c5-856a1484796c}} 2 {{formula:8c93ea16-4745-44d8-80aa-8824874ddbe0}} . Using {{formula:7d1f0c53-f04f-4ba0-92e9-e8d73e507ff5}} -{{formula:d5ecfc6c-1235-489c-bd21-5055ea5e68a7}} , we found that on average {{formula:add12a3c-f43e-4e53-8dd6-885d3fa515bb}} increased steadily from {{formula:455e242c-5026-4089-ac6f-b8afec76fd08}} 1.5 up to {{formula:3da42712-1be2-4d3f-85ca-564436ecbed4}} 4 over a time frame of {{formula:87032bcb-1ab3-420d-a937-02ac8b744a50}} 17 minutes, which is in agreement with results from {{cite:fa443aeea25e5e977f05cf06aa1da9f687a1c7bd}} and {{cite:dff87f6ffcc243871e552528b5c98b61e46660ad}}.
| m | 1fdeef2d6a5396ae84787c52fe0e6792 |
PreciseBN. We note that recomputing population statistics as in PreciseBN is actually how the original BatchNorm
{{cite:119009c3fa4eb69498c5c83c4f3e6eaab0ed41b8}} is formulated, but it is not widely used.
This paper thoroughly analyzes
EMA and PreciseBN under the standard ImageNet classification task and demonstrates the advantage of PreciseBN
in appropriate situations.
| d | bfa1ab84ba20b8c705b78b6331449ec8 |
Until the next era of surveys is ready to shed light on this issue, our main objective should be to find new probes (or new ways to employ current astrophysical data) that might add information to the argument and be competitive in precision with current measurements. In that regard, new physics via an alternative theory of gravity should describe gravitational phenomena in a very wide range of systems, from cosmic scales to compact objects. Hence, we can combine several observations to improve bounds on parameters. In the era of multi-messenger astronomy, modelling GW propagation is the new window on alternative theories of gravity and some of their inherent challenges; in this paper we therefore study the possibility that late dark energy itself, through a new EoS proposal, can solve the Hubble-Lemaître tension in the late universe. We should mention that possibilities with extra dark energy in the early universe have been considered {{cite:33e32fc3093df48f9f808ab1cc67eaa3b6873538}}; however, this involves first-order transitions and many parameter assumptions to be fitted to the data. The aim of our study is slightly different: we will consider an exponential-like late dark energy parameterisation that, in its first-order
approximation at the present cosmic time, recovers the standard dark energy models, e.g. CPL {{cite:7a8ee155bfa8d48224ecd33881a12bc20b61768a}}, {{cite:3affc7e09f9e035772a05a8e0754fb7768eda0fe}}, BA {{cite:392c9298ec9f08b168022cd3c026e44e480f2fe5}}, etc., and whose higher-order terms can be constrained using GW data to investigate how such corrections affect the {{formula:74473880-d80f-4c56-b508-b229d8854f46}} tension at late times.
| i | e85f064b6acce935207d18fed7a6e27b |
In this work (the source code will be released soon) we apply EWC and LWF on acoustic models trained using the LF-MMI {{cite:09da395ac8090e77b761a906061459f55237926b}}
criterion and assess their efficacy.
Furthermore, as the main contribution of this work, we propose a sequence-level version of LWF regularization that is based on LF-MMI and we demonstrate its effectiveness through multiple experiments.
The experiments are conducted on several public databases that involve several English accents (US, Indian, Singapore), speech styles (read speech, public talks, and conversations) and channels (studio and telephone).
| i | 6d43120f8222a46996301a6913275840 |
Guan's {{cite:52f409844e38758eab2a9da61d7b24f93e50b997}}, {{cite:3a5a730246beef3026c74bb3a86ad5b41deec1c9}} proof of the Chern-Levine-Nirenberg conjecture on intrinsic norms.
The work concerning Donaldson's conjecture in Kähler geometry due to Chen {{cite:adbd6fd6043f2c5537e556657b6c50c9715903ed}}.
For more related topics,
please refer to
{{cite:efb9878af31202a3df252eed123ad61001549528}}, {{cite:2ea9d890b431bab8eaa8923294f23f7e2b961d77}}, {{cite:2725bbd260f113f210c88ce8b5e9f59000ffbd6a}}, {{cite:e85fac5a1c8e3e32a3f87a3ba04de96f406d9fb1}}, {{cite:4257a928c39203c829db2ecadf8c504deaec903c}}, {{cite:78e73cc1191130f6d8648710fa60d813f2ab4ba3}}, {{cite:5f4777596fec4a1520358f3932f50a6051f4de48}}, {{cite:f7329a871566512a37cf76adc55fb377d553c09b}}, {{cite:21b485d1227752da410cf41254275c1a945f144c}}, {{cite:9d551e103a43f1954c52149bd4fa669cfada3ce5}}, {{cite:ab178453419a9787515472c68b671874693f1745}} and references therein.
The regularity of pluricomplex Green function in the pluripotential theory (cf. {{cite:ddc480118f4928d7f8ff34ad3619d57ba98f4344}}, {{cite:31cca67bb456e77567f1cb76ea61f95166febb91}}, {{cite:0841323f43b5c37f1597c8ee99d5c93007c03555}}, {{cite:6bb9a99a453be644f3fdac1a601e48db8983b9d8}}, {{cite:e8efe5cd9259383b29bdd17e94e296aabbbe66bb}}, {{cite:4219475dc25cad13e00c19cf8b40a8b47168798b}}
and references therein).
| r | 1907c613926f64cff15bf2c57bf02e65 |
Finally, the search cost of quantum embeddings is significantly higher than searching for classical neural network architectures due to the computational limitations of near-term quantum simulators. For example, candidates in neural architecture search are convolutional neural networks involving up to millions of parameters, which can be found in only {{formula:43ce24eb-05a2-4327-8c7c-271ebf48e70b}} GPU days by recent state-of-the-art NAS algorithms {{cite:42393fe5bb1da2de5940b086ed4ea8c12ca12cc1}}, {{cite:e62e3c7075161e6ca167513dfffb4e9714c74c6a}}. On the other hand, training even a minor quantum embedding of very few qubits incurs much larger computational expense. Our experimental setting takes {{formula:8eef4a4b-458f-417c-9b2d-1b2673581698}} GPU days to search for an architecture of only 23 parameters on the quantum simulators. The cost grows further when we consider mid- to large-scale datasets. Fortunately, the computational expense is dominated by training the quantum embeddings. In other words, advances in quantum computing/quantum machine learning on near-term devices will directly benefit the effectiveness of our proposed QES.
| r | 0bfff24aae7617cc78416fbc59af921c |
Another possible direction is to compare the model with human behavior in empirical or experimental studies.
Group-structured populations could better describe human behavior than the more traditional well-mixed population model because human relationships often involve a limited number of people.
At the same time, it seems comparably easy in our modern societies to get payoff information even from people that one has no personal ties with.
It has long been recognized in sociology {{cite:93b976dbcaf77b6b4345ba945a3e4ad4e82546e3}} that such weak links between communities may play a pivotal role, which may also affect the evolution of direct reciprocity.
| d | c9b8a01d598a5d7e0c131e2cce0cc0ba |
As commented by {{cite:06051b2b5601ba305e5fd7fc28fb81cccd4d8a8c}}, “PPR does represent an important intellectual advance, one that has blossomed in its reincarnation in the field of neural networks”. {{cite:06051b2b5601ba305e5fd7fc28fb81cccd4d8a8c}} attributed the low popularity of PPR to the computational issue, which is part of the motivation for this paper. This paper studied three greedy algorithms for PPR and proposed an ensemble procedure, ePPR, based on “feature boosting”. Extensive numerical studies suggest that ePPR generally produces markedly more accurate predictions than popular methods, including RF and its variants, for data with either continuous or categorical responses.
| d | c957c6df114c1008d06d11cc5f0eae0c |
This section presents an application of Algorithm REF to the
Douglas-Rachford splitting method for finding zeros of an operator
{{formula:6df5e737-f959-4444-9d52-babe6a288346}} such that {{formula:6b21b4af-dfe3-4e4e-88b5-27090acf7601}} is the sum of two maximal monotone operators,
i.e. {{formula:6295825a-57a7-4432-8787-00a270665fe9}} with {{formula:d9452dfa-8eaf-431b-902c-099be2efc785}} being maximal monotone
multi-functions on a Hilbert space {{formula:5636df1a-4282-4fac-9890-a8fad3d3f2d5}} . The method was originally
introduced in {{cite:eb4709fa6c1ae9bba547764066fab907fd42407e}} in a finite-dimensional setting,
its extension to maximal monotone mappings in Hilbert spaces can be
found in {{cite:f35df9bc3703a371df9ed9af10f5294b651776ca}}.
| m | 6cdd2aba5f9de0ec8459a15c31d3d54c |
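As a concrete illustration of the splitting scheme just described, the sketch below runs Douglas-Rachford iterations on a toy one-dimensional problem: A is the (single-valued) gradient of 0.5(x-3)^2 and B is the subdifferential of |x|, so the two resolvents are a simple affine map and soft thresholding. The operators and step choice are illustrative assumptions, not those of the cited works.

```python
def douglas_rachford(prox_A, prox_B, z0, n_iter=200):
    """Douglas-Rachford splitting: find x with 0 in A(x) + B(x),
    where prox_A/prox_B are the resolvents (I + A)^-1 and (I + B)^-1."""
    z = z0
    for _ in range(n_iter):
        x = prox_A(z)
        y = prox_B(2 * x - z)
        z = z + y - x          # fixed-point update on the governing sequence
    return prox_A(z)

# Toy instance: A = gradient of 0.5*(x - 3)^2, B = subdifferential of |x|.
prox_A = lambda z: (z + 3.0) / 2.0                     # resolvent of x -> x - 3
prox_B = lambda z: max(abs(z) - 1.0, 0.0) * (1.0 if z > 0 else -1.0)  # soft threshold
x_star = douglas_rachford(prox_A, prox_B, z0=0.0)
print(x_star)
```

The zero of (x - 3) + sign(x) is x = 2, and the iteration converges to it geometrically for this toy instance.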
To tackle this problem, unsupervised domain adaptive object detection (DAOD) {{cite:54f45929a343b5d97106da5fa96dec4e520975f0}} attempts to train an object detector on the labeled source domain that can be generalized to the unlabeled target domain. Existing DAOD methods {{cite:54f45929a343b5d97106da5fa96dec4e520975f0}}, {{cite:f92b5fcf8215f255cfbb27d0e0c4fb34aedbe751}}, {{cite:42196d6a00a139839d6b0a05944ea1ff8a1d3132}} have achieved significant progress in improving cross-domain performance for specific object detection models, such as those based on Faster RCNN, SSD, and FCOS. With the recent surge of detection transformers, it is natural to ask: can we empower them with such a capability to perform accurate object detection in cross-domain scenarios?
| i | 7b06f14fe8284ea3af26a7db560c85ed |
Hard clustered FL's second challenge is that it cannot effectively exploit similarities between different clusters. Though FL clients may have non-IID data distributions, two different distributions may still exhibit some similarity, as commonly assumed in personalization works {{cite:b90ac7d3518547a2cd40c5d1efd174c87e967612}}. For example, young people may have more online slang terms in their chatting data, but all users (generally) follow the same basic grammar rules. Thus, the knowledge distilled through the training on one distribution could be transferred to accelerate the training of others. However, in most hard clustered FL algorithms, different cluster models are trained independently, making it difficult to leverage the potential structural likeness among distributions. Note that unlike in other personalization methods where the discussion of similarity is restricted to similarities between individual clients, here we focus on the broader similarities between source cluster distributions. Thus, we can gain better insight into the general data relationship rather than just the relationships between participating clients.
| i | 80d7229efaf588c4f554a8dddf76784f |
There has been a renewed interest in the study of quintessence models within string theory and supergravity.
This interest has been sparked from the difficulty to identify controlled de Sitter vacua in string theory,
and from the various swampland conjectures that restrict de Sitter directly {{cite:f517d0a9246c15fce308ed14e75948b5aa2bd74a}}, {{cite:1a2cd5cb59a4e393eede092ca23482bee6e57549}}, {{cite:e8fa612a45cc80ae7c249ddaa82c739966856a9f}}, {{cite:d14757fb0dd0acd66195b74eea860b2b2fa56bbf}}, {{cite:a5590e260799a9eac1464115e704054cf1a7f810}}, {{cite:dac466238d8ce17bb0ee0d2e407b0a9ff7fe0c7b}}, {{cite:166801cc8946c7247a4acc768083ace8af6bda27}}, {{cite:706c191a6a3c3ae1684d33498f8e472530b04bc1}}, {{cite:841e687fc3fffadb7d3c39da68fb224b3510da7f}}, {{cite:7ffa1112d3d2555f9b9014212bba6b2879aaa1d1}}, {{cite:9ad28c280ecd4271b0d5bc383f9cf4eabd1ac5a2}}, {{cite:b2c442871d5f74aac408f72a70e55ba97b922aa7}}, {{cite:ebbe8dd99aadde8f5b85800637f52e6c45b364c8}}, {{cite:cbeaba3f058c3b7417e3da6d05969ffb4b7d6176}}, {{cite:915ae2ff50dadcf0c4c26747fbb2cb7807b058a9}}, {{cite:6f74d328463b9d6d28c67d404a795a4104a4c411}}
or indirectly {{cite:72692cf110ec38ede3b03daa5ef0b4c1569a02bf}}, {{cite:db344ed3d125040281593e7f0a9595be62eb7484}}, {{cite:53e1fc77cc753682de81f2e7b2b072f13d580ccc}}, {{cite:767c019d9f37cbb9d3ea17eaaea664964fe91a82}}.
| d | 22070763b51e440f5409d647b16238ae |
For the diameter and the mixing time it is not clear what the `correct' answer should be. It seems likely, but it is not immediate, that {{formula:5b0c4354-4036-4f9a-ad5b-395660cd53cc}} should have larger diameter and mixing time than {{formula:92ac3d79-3bdb-4a19-8847-0cb64ae8b5a8}} does. For the diameter it might be that {{formula:ad72b89a-c778-45ff-a18c-33d930032384}} is the correct order of growth. However, whilst the mixing time of the lazy random walk on {{formula:058829df-8573-4df6-b682-e06e32aa5efc}} is known to be {{formula:255a8cc1-58e6-4fe0-819b-da9df4f08bb8}} , see, e.g., {{cite:130a121d038f1e582a88e79ce46792cc375ab802}}, it can be shown that the largest component {{formula:c7d70625-8abe-4613-9e9f-7edf54edd0ee}} of {{formula:4a37a162-72c6-4d8b-aac2-02cd809af2d6}} whp contains bare paths of length {{formula:17e064f1-e5fc-4693-8140-27f9792810fb}} . In particular, since we expect a lazy random walk starting in the middle of such a path to take at least {{formula:f5618221-b46d-42ac-8d4f-b3d649da062b}} steps before reaching either endpoint, it follows that whp the mixing time of the lazy random walk on {{formula:59905d90-f85b-47f9-b0c3-6e534268d222}} is {{formula:069db245-1121-45ff-949d-1813f1d14a81}} .
| d | b91e4499d9bc787081557f284e6c0730 |
with {{formula:a0f96999-5f2f-4888-884b-609cacc29713}} a positive constant and {{formula:14330d09-be46-4764-b0ca-b0dca68d937a}} , {{formula:aa80f00b-75bf-4ff0-82be-b95a3e2d3565}} ; here {{formula:37620b94-a999-4a74-b497-ad9e8780727b}} denotes the natural matrix-{{formula:d7cfb7ea-3515-4e87-915f-1a397df79f65}} -norm.
The known dependence of the penalty on the local polynomial degree is included in {{formula:f76b42cb-ed19-42ef-b0c2-0dcddebf0cbd}} for brevity; see {{cite:d82784fd0493b6fb5b5679c6315976b6b51db377}}, {{cite:cdaf4d43eea2466de29619b4c3c3a5a4168ab184}} for details.
Note that, using (REF ) and that {{formula:c1752909-5fb4-46a5-9007-b710a5612ae1}} on {{formula:81609f28-c0b3-4e11-af4b-b005655e65b5}} for all {{formula:6c3efa23-064f-4b7b-98e6-c8bdaa942381}} , we have
{{formula:92e4e02a-217e-41f1-b012-87532501d8ed}} for all {{formula:6f1d972a-2b28-4cf0-8f94-08403faac97a}} ,
with {{formula:336df00f-faab-4bac-bec3-458c09504703}} the solution to (REF ).
| m | f6b31cb6432d863412b7b3f13a34294c |
Fig. 2 presents examples of stain separation obtained by different methods. As illustrated in the figure, the NMF-based method {{cite:e16c7b6f4c909ffe95f296ccd4f2fe1eec0085b8}} and U-nets {{cite:cb7d7ea70308cf08250fb9201979e67a368a9990}} fail to separate co-localized fluorescent stains in the images. Though the pix2pix GAN {{cite:303e7ec2802b782a49db6ce6fca46059771fd823}} obtains better results, checkerboard artifacts, which commonly occur in GAN models, are observed, especially along image boundaries. Compared to prior art, the proposed method yields the best source separation results.
| r | 3bbc52f775f112d36eb20fa88250065d |
Given the mains (aggregate) power consumption {{formula:7eeb2558-4f49-46e7-87be-77d531bc343f}} at time {{formula:e4cf6135-f2aa-4316-9668-4aae3a3a094f}}, our aim is to estimate the power {{formula:20e54002-8515-42e6-9ad7-d9e07e1f47b3}} for the {{formula:aa7aa921-d88b-4862-bee4-0be1d3bb873b}} appliance. We now introduce our GP variants. In this paper, we fit a separate GP per appliance. We summarise our models in Table REF . We also note that due to the prohibitive training and testing time and memory requirements of exact GPs, we use sparse GPs {{cite:460c7bdcfbfd1b74eaa3e735c7465e67d98de73b}} for all our GP variants. We also use automatic relevance determination (A.R.D.) for models that take a vector input.
| m | 67325dba0afcd9ff2668cf7807a01b74 |
A number of OTFS equalization and detection schemes have been proposed in the literature in recent years. The majority of methods can be categorized into either low-complexity linear equalizers {{cite:ebde2c73f8533faeafc91b7302d99e991164318e}}, {{cite:4341be3e8f0af0de0d0790b5623286837ecc5bc8}}, {{cite:2fae1e4b5c5c68c01cc6a23cd03b3a731aa1fcab}} or non-linear message-passing-based equalizers {{cite:80c36b23f8702422dceb82c00a1df2b64710afa1}}, {{cite:a73449d1610820cc32e2a1af495580617fc2a1e6}}, {{cite:035be4bb7ca4c040625a75122098db1818ce704e}}. However, these methods assume a scattering environment in which the channel impulse response is sparse in the delay-Doppler domain. Under more realistic channel conditions, the low-complexity linear schemes are no longer applicable and message-passing-based detectors become prohibitively complex due to the large number of scatterers {{cite:17be7d4356e3be284d6b17f057ec402ecc0fac33}}. An alternative approach was proposed by the authors of {{cite:17be7d4356e3be284d6b17f057ec402ecc0fac33}} which utilized a least-squares minimum residual (LSMR) based
channel equalizer and a reliability-based
dynamic detector to estimate the transmitted data symbols. However, the system model in {{cite:17be7d4356e3be284d6b17f057ec402ecc0fac33}} only considers a single-user scenario.
| i | 6503f1d49af6c38328c6052c2236fdd4 |
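A toy sketch of linear channel equalization in the spirit of the LSMR-based scheme mentioned above: recover transmitted symbols x from y = H x + n by least squares. For brevity the dense `lstsq` solve below stands in for the iterative LSMR solver of the cited work, and the Gaussian channel and BPSK symbols are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym = 32
H = rng.normal(size=(48, n_sym)) / np.sqrt(48)          # tall effective channel
x = (2 * rng.integers(0, 2, n_sym) - 1).astype(float)   # BPSK symbols +-1
y = H @ x + 0.01 * rng.normal(size=48)                  # received vector with noise

# Least-squares equalization (LSMR would solve the same normal equations
# iteratively, which matters for large, structured channel matrices).
x_ls = np.linalg.lstsq(H, y, rcond=None)[0]
x_hat = np.sign(x_ls)                                    # hard symbol decision
print((x_hat == x).mean())
```

At this noise level the hard decisions recover all transmitted symbols; an iterative solver becomes preferable when H is too large to factor densely.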
In this paper, we aim to train a ML model to learn the relation between halo properties and the occupation numbers of galaxies from a galaxy formation simulation. This invariably includes the complex set of effects related to GAB (such as the preferential occupation of galaxies in early-formed haloes, as one example). We utilize here Random Forest (RF) classification and regression, one of the most effective ML models for predictive analytics {{cite:dde26ed4e5d799b55fb6d128ada1d5313281bbd3}}. RF is an ensemble supervised learning method that works by combining decisions from a sequence of base models (decision trees). We use for this purpose stellar mass selected galaxy samples from the {{cite:54e596a096efcb9a5139508031b45ff2d42185d1}} SAM applied to the Millennium Run Simulation {{cite:0c81ee45d3f5f6c7d7a67e9d089cc7d323d892e0}}. The input is the halo catalogue, including an exhaustive set of halo properties and environment measures, and the output is the occupation numbers of central and satellite galaxies. The RF model will then be used to create mock galaxy catalogues and compared to the true levels of galaxy clustering and large-scale GAB.
| i | 409a62a318a93a81d39738aa31f8a895 |
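A minimal sketch of this setup, assuming `scikit-learn` is available: a Random Forest regressor mapping halo properties to mean satellite occupation. The synthetic "haloes" and the power-law occupation relation below are illustrative assumptions, not the SAM catalogue used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
log_mass = rng.uniform(11.0, 15.0, 5000)        # log10 halo mass (toy)
concentration = rng.uniform(3.0, 15.0, 5000)    # halo concentration (toy)
X = np.column_stack([log_mass, concentration])
# Toy occupation: N_sat rises steeply with mass, mildly with concentration.
n_sat = 10 ** (log_mass - 13.0) * (concentration / 10.0) ** 0.3

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, n_sat)
pred = model.predict(np.array([[14.0, 10.0]]))
print(pred)
```

The fitted forest recovers roughly ten satellites for a log-mass-14 halo here, matching the toy relation; in the paper's setting the feature vector would include the full set of halo and environment properties.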
We consider that the concave hull of {{formula:22ea7aa2-aa56-4bb3-bb11-96372ac57a08}} represents the outermost layer of the RRT expanding outward and can be used to delineate the RRT expanded and unexpanded spaces.
We therefore construct a concave hull enclosing all the nodes in {{formula:78d9a1af-b547-40e5-9ceb-021cffc12d69}}, based on the method introduced in {{cite:6351ba0b7fc3592fbe45dfe06f651676f57bb0f1}}, with a complexity of {{formula:12b00ec6-d7c4-43a9-add2-445a35eb56b7}}, where {{formula:365e74e1-0d0b-4d46-841b-93051d7535b1}} is the number of nodes in {{formula:a58e4759-c6e6-43ab-b566-3ab2f2041d4b}}.
Specifically, after performing Delaunay triangulation {{cite:2907a35ddad38bbb64ae85bb4b1a2d676a1a7c07}} on the set {{formula:6bb599ad-df9b-4b58-9c07-592c41ab0d1d}}, the concave hull can be obtained by repeatedly removing the longest exterior edge from the triangulation until the lengths of all exterior edges are less than the maximum edge length {{formula:a9ba428a-fa56-47f9-bdb9-b858598ea587}}.
The exterior edges here are the edges belonging to only one triangle and not the common edges of two triangles, which is the boundary of the triangulation, as shown in Fig. REF (a).
| m | 5f86bcb7919f5ca1c4f546e3de02de74 |
The decision boundaries of real-world classification problems are often not convex. Outputs of IOC-NN are convex with respect to the inputs by design. Melzer {{cite:f5a8e4ccc9d3c3ea095bab29a854636a73d34cab}} and Kripfganz {{cite:2cedc7ea011113ebba21dd83e86a08afd4ddf954}} show that any piecewise linear function can be represented as a difference of two piecewise linear convex functions. Using ReLU as an activation puts a constraint on the convex functions that IOC-NNs can learn. For instance, we cannot learn an identity mapping using ReLU {{cite:787a9ea872788d02b380dbdaf70624ba9cf765ea}}. A simple architectural change, using ELU instead, overcomes this issue and increases the capacity of IOC-NNs, as can be observed in the outcomes of all our experiments. Following this direction, in future work we would like to explore universal approximation bounds for IOC-NNs.
| d | e257065618fc88bfb1087b216b16ee26 |
Optimal Transport methods assume that the shift between the source and target distributions is induced by a function {{formula:6a1d8b0d-24b6-4ef4-8871-4ab5cdbd4b01}} . That is, if {{formula:14764dfc-bd0b-4bcf-b970-741de33a1e25}} , then {{formula:9372b842-ee90-4e94-9ab1-55230f593e76}}{{formula:1ee460a0-0d7d-481c-bab9-61adb53881cc}} py|xpy|xS{{formula:4fc195a0-f16e-4379-8b48-8eca005cf642}} pxpxS{{formula:f0fa409c-26b0-4198-a214-28a82ec3c82c}} X{{formula:040ffb7e-66da-49a1-b1b9-ec93baad8dcd}} Y|X{{formula:fbd2421b-23b5-4309-88c4-be790c1d0481}} py pyS{{formula:4896ea4b-33a6-46c9-8e3d-14c46d677bf4}} Y{{formula:43238714-b6fa-4b1d-b6eb-1d87e007f8eb}} X{{formula:cd27f81a-2f5e-4c5e-b1a9-c2607f38ae2c}} py|xpy|xS{{formula:aca97b7e-780e-41fe-ae10-463f593f1bdd}} yY{{formula:79407cd2-3fdf-427c-a084-cf3eb4a707a3}}{{formula:b88ccc02-1464-46aa-9d56-ae2944f65326}} x{{formula:d4e9d256-27c0-4e30-8c39-73c53488cbb4}}{{formula:dfe537c7-bb8c-4c70-b990-f21a55cf07ce}} XSpxS{{formula:6fbc3944-5aee-4e1c-8a31-3c4dc93332ae}} Xpx) that minimizes Wasserstein Distance. Intuitively, this means that {{formula:bd1dddc4-1b46-428a-bf99-e467863783c7}} is the coupling of {{formula:19ac6a4a-ddc3-4752-8b13-28af2ef241c1}} and {{formula:a85c94ba-fe07-4527-b0ce-bd1f0f6459d7}} d(XS,X{{formula:2731fc3d-b7b5-4b18-ba31-90bfda8b9717}}{{formula:8ec20c03-3886-42f0-9034-fd375a0805d1}} (x) = Ax + b{{formula:3bf0ff88-eb5a-497a-9a7b-fa4cba20211a}}{{formula:2fc63211-f7aa-4dee-baa1-a89bbab5dbd1}}{{formula:8c0c9250-f122-4951-bf8b-ff4f69d9c70e}} Finally, Deep Domain Adaptation Methods comprise the final category of Feature based approaches. In the literature, there are two types of networks that are often considered: Domain-Adversarial Networks and Autoencoders. 
Domain-Adversarial Neural Networks (DANNs) are used to find a function {{formula:a3e68ced-7dde-47f6-b5a6-7c65e8006e43}} such that (a) source and target examples cannot be distinguished, and (b) the classification error on the transformed source data is minimized {{cite:6322d57d7d004b8cb8aa934c3c424b0c1bade118}}. Note that this is exactly the same tradeoff that Subspace Mapping methods must balance, and thus the theoretical justification for DANNs is heavily grounded in the generalization error bounds developed by {{cite:9ec697c5b52aa460f2ac79849a833e2df47909ca}}. To achieve both goals (a) and (b), DANNs employ two loss layers: one to classify source samples based on their labels, and another to classify samples as belonging to either the source or target dataset. With this architecture, DANNs are trained to simultaneously find a representation of the source and target data that makes the two indistinguishable, while also building a classifier that performs well on that representation of the source data. A practical limitation of DANNs is that the gradients of the two loss layers often point in different directions {{cite:0c20344c4dcf3d7bbb2967cedaf92a04ba70c946}}, which makes sense in light of our previous discussion of how goals (a) and (b) are at odds. This has been ameliorated to some extent by subjecting the hypothesis space to constraints such as, for example, the cluster assumption, which requires decision boundaries not to cross high-density data regions {{cite:23c929ad4bcfd21f5856b3d3055a2bc957e2bc2a}}. Finally, autoencoders are another popular type of network used for domain adaptation. Typically, a single autoencoder is trained to reconstruct both source and target data examples, and the function {{formula:6a3cdc27-717e-4aa5-bb00-cf0a1f5d6bdf}} is simply the encoding layer of the network. 
However, unlike ordinary autoencoders, the distribution of the encoded source instances and encoded target instances are encouraged to be similar to each other, usually through a penalty such as KL divergence. Much like with DANNs, an additional penalty for the classification loss obtained by a classifier trained on the encoded source instances is also included {{cite:512f75bada67c17bdcb484e54d4c1ef59eb87ece}}.
| m | 28e8ccc7e09eff24723474b44007443d |
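The affine map T(x) = Ax + b mentioned above has a closed form in the Gaussian case, which the one-dimensional sketch below illustrates: the Wasserstein-optimal map between two 1-D Gaussians is the moment-matching affine map. The sampled "features" are illustrative assumptions, not data from any cited benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
x_src = rng.normal(loc=1.0, scale=2.0, size=10000)   # source features
x_tgt = rng.normal(loc=5.0, scale=0.5, size=10000)   # target features

# 1-D Gaussian optimal transport map: scale by the std ratio, shift means.
A = x_tgt.std() / x_src.std()
b = x_tgt.mean() - A * x_src.mean()
x_aligned = A * x_src + b

print(round(x_aligned.mean(), 2), round(x_aligned.std(), 2))
```

After the map, the transported source features match the target's first two moments exactly, which is the sense in which the affine restriction minimizes Wasserstein distance between Gaussian approximations of the two domains.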
The claim of {{cite:b38d746df201043f1433a56c0bf4a7e82761028e}} was that any bulk operator with support in the interior of a Python's lunch should be exponentially difficult to decode, with an exponent that is controlled by the size of the bulge and grows as {{formula:35370cfb-af72-45f1-9f04-55452d229d91}} in the semiclassical limit. The justification for this conjecture was based primarily on tensor network toy models, where the fastest known protocols for decoding operators inside a lunch use a Grover-search-based algorithm that takes exponential time.
| i | 938e990a4ec3d353789d8e5cc2093a04 |
Denton et al. {{cite:1d44b2a3f312f01752158719c2ee1aac89f30e80}} have recently shown that the components of eigenvectors can be recovered from the eigenvalue spectra, which they name the eigenvector-eigenvalue identity; it is valid for any Hermitian matrix with non-degenerate eigenvalues. Specifically, the identity reads
{{formula:79e522bb-2446-40fa-a795-ed83ee8a0f1a}}
| m | 920606f921b9a8457cc1a2c647138758 |
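The identity can be checked numerically. The sketch below verifies, for a random real symmetric matrix, that |v_{i,j}|^2 times the product of eigenvalue gaps of A equals the product of gaps between lambda_i and the eigenvalues of the j-th principal minor (the standard form of the identity); the test matrix is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B + B.T                      # real symmetric (Hermitian) test matrix
lam, V = np.linalg.eigh(A)       # eigenvalues lam, unit eigenvectors in columns of V

i, j = 2, 1                      # eigenvalue index i, component index j
M_j = np.delete(np.delete(A, j, axis=0), j, axis=1)  # j-th principal minor
mu = np.linalg.eigvalsh(M_j)

# |v_{i,j}|^2 * prod_{k != i} (lam_i - lam_k)  ==  prod_k (lam_i - mu_k)
lhs = abs(V[j, i]) ** 2 * np.prod([lam[i] - lam[k] for k in range(4) if k != i])
rhs = np.prod(lam[i] - mu)
print(lhs, rhs)
```

The two sides agree up to floating-point error, and the check works for any (i, j) as long as the eigenvalues of A are non-degenerate.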
The development and theory of the extended Kalman filter is
documented in the text {{cite:353bd8576d765583f9315f2666c52e6745603034}}.
A methodology for analyzing evolving probability distributions
with small variance, and establishing the validity of the
Gaussian approximation, is described in {{cite:6156c0b2a4f6ef69f8c6b10c19c4839ed7f971e5}}.
The use of the ExKF
for weather forecasting was proposed in {{cite:ec7f78a694486d693a90bfc027995e9d3adc299e}}.
However the dimension of the state space in most geophysical
applications renders the extended Kalman filter impractical.
The ensemble Kalman filter provided an innovation with far
reaching consequences in geophysical applications, because
it allowed for the use of partial empirical correlation
information, without the computation of the full covariance.
An overview of ensemble Kalman methods may be found in the
book {{cite:078ae4b163e6e7259fef9c272bfc4139f13c7bfe}}, including a historical perspective
on the subject, originating from papers of Evensen and
Van Leeuwen in the mid
1990s {{cite:68bcf5a478d4b870e996734d26f40f815e985a8b}}, {{cite:b4380681975e380273d1a5a7573bb320beb25ddd}};
a similar idea was also developed by Houtekamer
within the Canadian meteorological service around the same time
{{cite:3fbf26ce96ca6673131c48afa2b40857f1a356bc}}, {{cite:1c504a75f16d8221a2c878fdb352efc5dad5ddf9}}.
| d | 72e60ec37f0a77b4b63f3a67c3f184f2 |
We also show the class-level performance using 20% of the labelled data and compare with other SOTA methods in Tab. REF . We compare with the previous baselines, namely original mean teacher (MT) with Densenet169, SRC-MT with Densenet169, MoCo V2, and GraphXNet with Densenet121. We also train a baseline Densenet121 model with 20% labelled data using Imagenet pre-trained model. Our method achieves the best results on nine classes, surpassing the original MT {{cite:367d8d12d7e0905d58ef4bcb66ffd1c1e67fe24b}} and its extension SRC-MT {{cite:de897b2bf5bea5d64cd92ce9a6f71922c24cd3cb}} by a large margin, demonstrating the effectiveness of our self-supervised learning.
| r | a5db4e87e89bd882f4375889766f538a |
One additional objection is that, since depth requires optimization to be inferred, and the choice of loss function is a form of transductive bias, that is no less arbitrary than inductive bias. But this is fundamentally not the case, for the optimization residual in transductive inference refers to the data here and now, and not to data of different images in different scenes. In other words, the optimization residual is informative of the confidence of our estimate, unlike the discriminant from an inductively-trained classifier {{cite:307a239c7f67842fd74a6960681a44bd563678fd}}.
| d | 88c42414f925d6db82666f8eec7bb61f |
The task of question answering is to produce correct answers to given questions, which requires a high level of language understanding and machine reading comprehension ability. As pre-trained language models built on the Transformer {{cite:6bfb8c14c62e6ed6e7548c0a0094a47652baabd1}} have brought significant performance improvements in many natural language processing tasks, there have been many studies on machine reading comprehension (MRC) and question answering (QA) tasks {{cite:f46b4e52e47144626f1298fa6d0b91deace7a6b7}}, {{cite:ecf74f4e56344c4859222a9fc0fd69e5d31e2190}}, {{cite:47816e2248a2a65182443ce87d5133c17ecda90b}}, {{cite:72ee3c8018c8cc6e2d1b2c2d1999be6f2db7ff16}}, {{cite:d7bea74a3869e49c09a104619e3e8322beaf6015}}, {{cite:adc663adc4c6d02333d43b7954e80c07742db193}}. The Stanford Question Answering Dataset (SQuAD) benchmarks {{cite:006a0f99329375b2315c043787d34a0013d659f8}}, {{cite:3e6d9398d51622c9b81ba942e152d6abff0bfbcb}}, well-known machine reading comprehension benchmarks in the NLP area, involve inferring the correct answer span in an evidence document for a given question. Since the SQuAD datasets are composed of pairs of natural language questions and answers, together with corresponding documents of unstructured text, the task mainly focuses on predicting answers from plain text.
| i | 6f20dc8e97c2378327090ee267b4fc17 |
Bounded planar thin-film flows are common in both nature and technology. A variety of interesting flow behaviours fall into this class, including rupture {{cite:47d0041f10cef3a8b516081de4ea6f18a3e576e0}}, dewetting {{cite:0c51a540b291af00026387632433991b27f699f6}}, droplet spreading {{cite:009d68a804622c13f23642cde8464909fa6e1081}} and sessile droplet coalescence {{cite:c93b5601c5f4da0ac42600f32113cf10e344402b}}. Dynamics of these processes can often be well described by lubrication equations (LE), derived using a long-wave approximation to the Navier-Stokes equations {{cite:3ec8b6dd91a2048c3809d957bd985381f5267618}}, which reduces the modelling problem to solving a single partial differential equation.
Motivated by emerging technologies in micro and nanoscale fluid dynamics {{cite:502c76b9cf0042476510d985bf4674fc57df7540}}, thin-film nanoflows have attracted considerable interest recently and challenged conventional theories, due to the emergence of new dominant physics at these scales.
An important physical factor at the nanoscale is thermal noise, which has been shown experimentally to influence interfacial dynamics through the observation of thermal capillary waves on interfaces of (ultra-low surface-tension) colloid-polymer mixtures {{cite:056395d0033f4f2ff6e6bf4d06eeb4aa902a4bb7}}.
Recently, by the use of molecular dynamics (MD) simulations, it has been discovered that fluctuation-driven nanowaves dominate a range of nanoscale interfacial-flow phenomena, such as nanothread breakup {{cite:2ba1854e09089089e3d84b8f961abc1fa7385236}}, nanojet instablity {{cite:73adaa6b2934e2eb2891ee936f38311f2010fd0d}}, nanodroplet coalescence {{cite:bf448645f3030bef9f363852255235275e513019}} and development of rough interfaces on nanoscale thin films {{cite:7772c96c1a1d628419ca8a47b5a7ba9f2e8b9f6c}}.
Notably, the observations of nanothread breakup in MD were further confirmed by an analytical model {{cite:e3e8af1d1071ce7b2a1b5740203d1f5173a8b1ae}} and experiments with colloid-polymer mixtures {{cite:c03cd4709cd2e2b099ac78ffb009971c160c732a}}, {{cite:339de863eeb41f396f702b4cbb6ccecd315c74be}}.
| i | 3ee73bf206244ef62346ebf29f52297f |
We conduct all simulations on a Lambda workstation, which has an AMD Threadripper 3960X with 24 cores, 128 MB cache, 128 GB RAM, and 2 RTX3090 GPUs. We evaluate 5 convolutional neural network (CNN) models – LeNet, AlexNet, ResNet (ResNet18), DenseNet, and VGG (VGG16) – all trained on the CIFAR-10 dataset. We use Keras {{cite:41e0e367e3fdb7c90e704721f794eb4161c669bd}} to train these models. Trained models are converted to SNNs using the conversion toolbox {{cite:043283ead62763a3d42e6a3109eac890ef53a295}}, {{cite:915eb14917f514e56f05d9f6ff69f234971dedcf}} and simulated using PyCARL {{cite:5f903f203d4baafe3b773982a51c8a621235cf02}} with the CARLsim backend simulator {{cite:68549f4441472fca6fe26fd1e5f957ee3038cf04}}. All spiking neurons are programmed as integrate-and-fire (IF) type {{cite:3f201707fdab2deb3264e59d9ff184009fa164d4}}. The simulator is configured to use the OxRRAM NVM model as the synaptic cell {{cite:91f90f6ff0721c12d777a68f9795794dfccf9539}}.
| m | 0181360fa00aa96a08d4f5762d369db9 |
Semi-supervised ResNet-50. Recent works {{cite:0ee430ddfee75d34abf3ede23683b93ad457fcf2}}, {{cite:276ff11fc49bb00146c6703eaf340bf1abd67cdd}} have demonstrated the possibility of leveraging a large collection of unlabelled images to improve the performance of a given architecture. In particular, Yalniz et al. {{cite:276ff11fc49bb00146c6703eaf340bf1abd67cdd}} use the publicly available YFCC-100M dataset {{cite:6c93b130fb149011e3b37a845be589d4359afe36}} to train a ResNet-50 that reaches {{formula:cf2ad3a4-2a7b-40a9-a6fc-9d425c5287a9}} top-1 accuracy on the standard validation set of ImageNet. In the following, we use this particular model and refer to it as semi-supervised ResNet-50.
In the low compression regime (block sizes of 4 and 9), with {{formula:c3b79ce8-0958-4626-a646-14212079cc27}} centroids (practical for implementation), our compressed semi-supervised ResNet-50 reaches 76.12% top-1 accuracy. In other words, the model compressed to 5 MB attains the performance of a vanilla, non-compressed ResNet-50 (97.5 MB).
| r | 07471809a5a95373586f27caff08fc22 |
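A toy sketch of block-wise weight quantization in the spirit of the setup above (block size 4, 256 = 2^8 centroids): split weight vectors into small blocks and replace each block by its nearest codebook centroid. The plain k-means and random "weights" are illustrative assumptions; the actual method learns codebooks layer-wise with activation-aware objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))            # hypothetical weight matrix
block = 4                                  # block size (cf. sizes 4 and 9)
n_centroids = 256                          # 2^8 centroids
blocks = W.reshape(-1, block)              # one row per block

# Plain k-means codebook over the blocks.
centroids = blocks[rng.choice(len(blocks), n_centroids, replace=False)]
for _ in range(10):
    d = ((blocks[:, None, :] - centroids[None]) ** 2).sum(-1)
    assign = d.argmin(1)
    for c in range(n_centroids):
        pts = blocks[assign == c]
        if len(pts):
            centroids[c] = pts.mean(0)

W_q = centroids[assign].reshape(W.shape)   # quantized weights
# Storage: one 8-bit code per block plus the codebook, vs 32-bit floats.
ratio = W.size * 32 / (len(blocks) * 8 + centroids.size * 32)
print(round(ratio, 1))
```

Even this toy configuration compresses the matrix by more than 5x while keeping the block-wise reconstruction error well below the weight variance; larger blocks trade accuracy for higher compression.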
Example 3.4 Mandelbrot introduced a tiling fractal, known as the fudgeflake, in his classic book (see page 72 of {{cite:fe7fd7ddc8ab65288ae364ae862643d2e337cda6}}). The fudgeflake is a self-similar attractor generated by three similar contractions:
{{formula:f61621b3-7af2-4553-a9a9-534de5d26a43}}
| r | 3b96b559607d7c50469ac7c330c18a4e |
We have presented a data-driven kernel method that robustly extracts dynamic modes from high-dimensional, nonlinear data.
The method may be viewed as a confluence of the dynamic mode decomposition, the sparse identification of nonlinear dynamics, and kernel methods.
Specifically, we use a kernelized identification of nonlinear dynamics (INDy, i.e., SINDy without the sparsity promoting regularizer) to robustly disambiguate linear and nonlinear dynamics, enabling the extraction of an explicit linear DMD model and forcing snapshot matrix.
Access to the disambiguated DMD model and forcing snapshot matrix opens up the possibility of performing data-driven resolvent analysis of strongly nonlinear flows {{cite:01eb528d306396e10dd4ba996a0c9e5a9e07529c}}.
Our approach is based on the kernel recursive least squares algorithm {{cite:20a2a0db59bb0d0f2778074c2b5f86f54510da8e}} and kernel dynamic mode decomposition {{cite:06aacddcd32536b479af6a24cca23c3645722c3f}} but introduces several innovations, including stabilised dictionary learning, improved interpretability, and extraction of locally-linear models and forcing.
We have demonstrated our approach on a range of nonlinear dynamical systems and PDEs, and shown in each case that we can effectively disambiguate the roles of linearity and nonlinearity.
The nature of kernel methods, along with the online learning variant, render our approach suitable for data that is high-dimensional in both space and time.
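For reference, the linear DMD model at the core of the method can be extracted from snapshot pairs with a few lines of linear algebra. This is plain exact DMD, not the kernelised variant presented here, and the rotation example is illustrative only.

```python
import numpy as np

def dmd(X, Xprime, r=2):
    """Plain exact DMD: fit a linear map A with X' ~ A X via a rank-r SVD
    and return the eigenvalues of the projected operator."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Xprime @ Vh.conj().T / s   # r x r projected operator
    return np.linalg.eigvals(Atilde)

# Snapshots of a 2D rotation: the DMD eigenvalues should lie on the unit circle.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
X = np.zeros((2, 50)); X[:, 0] = [1.0, 0.0]
for k in range(1, 50):
    X[:, k] = A @ X[:, k - 1]
eigs = dmd(X[:, :-1], X[:, 1:], r=2)
```

The eigenvalue phases recover the rotation angle, i.e. the system's oscillation frequency.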
| d | 4cd21b1c4192001caaf588d0e48cc335 |
We test the different balancing techniques on three problems (Burgers equation, Kirchhoff plate bending and Helmholtz equation) originating from physics-informed deep learning, where the objective function consists of several terms of potentially considerably different magnitudes, and compare their performance as well as their computational efficiency. Training was done on networks of varying depth and width (according to Tab. REF ) and limited to {{formula:b03914b6-0eb2-4802-a470-c89e2f189e43}} steps of gradient descent (GD) using the Adam optimiser {{cite:fadf79f38c04e426bb73df3913abb0ab19554a81}} and an initial learning rate of 0.001. Additionally, we reduced the learning rate by a multiplicative factor of 0.1 whenever the optimisation stopped making progress for over 3,000 optimisation steps, and finally used early stopping in case of 9,000 steps without improvement.
When addressing the inverse problem, i.e. approximating a set of measurements while subjecting the network to PDE-constraints for finding unknown PDE-parameters {{formula:1e8f6dd9-eee0-4118-969a-ab5e519915ff}} , we further investigated the payoff of using two separate optimisers: one for updating network weights {{formula:38854c80-0675-4fee-b16a-e957958a6b92}} , and a separate one for updating PDE parameters {{formula:503d314e-57e9-4db6-ae99-2fcf4259ed85}} .
Further details on hyperparameter tuning and meta learning are given in Secs. and .
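The training-loop control described above can be sketched as follows. The Adam updates themselves are elided; the sketch only illustrates the plateau-based learning-rate decay (factor 0.1 after 3,000 stagnant steps) and early stopping (9,000 stagnant steps), applied to a pre-recorded loss trace.

```python
def run_schedule(losses, lr0=1e-3, patience=3000, stop_patience=9000, factor=0.1):
    """Return the learning rate used at each step, given a loss trace."""
    lr, best, since_best = lr0, float("inf"), 0
    lrs = []
    for loss in losses:
        if loss < best - 1e-12:
            best, since_best = loss, 0
        else:
            since_best += 1
        if since_best and since_best % patience == 0:
            lr *= factor                       # reduce on plateau
        if since_best >= stop_patience:
            break                              # early stopping
        lrs.append(lr)
    return lrs

# A loss that stalls after step 1,000: the lr decays twice, then training stops.
trace = [1.0 / (1 + t) for t in range(1000)] + [1e-3] * 20000
lrs = run_schedule(trace)
```

In a real run the same bookkeeping would wrap the optimiser step rather than a fixed trace.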
| r | 71ad4000b685e813cdead6b980121608 |
Among our competitors, we note that the celebrated conjugate gradient (CG) method is another instance-optimal algorithm for quadratics. Whereas our method minimizes the distance to the solution at each iteration, CG is instance-optimal for minimizing function values at each iteration. Perhaps interestingly, the two methods appeared to behave similarly in our numerical experiments. That being said, the main practical difference between the two methods is that CG's heavy-ball-like formulation naturally relies on higher-order information, while Polyak step-sizes require knowledge of {{formula:6df3b636-f1ed-4b62-9d9c-12d550149787}} . In typical optimization problems, this value is not known. However, there are a few settings where this value is actually well known, typically when {{formula:d7f39e40-d4bb-4648-8fc6-4df22d45d774}} generically (in machine learning, this setting is known as the “interpolation” regime; an alternative could be to use Polyak steps as a competitor to MinRes). Finally, let us mention that a few generalizations of CG, often referred to as nonlinear conjugate gradient methods, have been studied in the literature (see, e.g., {{cite:0748e73483c63fa76d64da037974e9748efe8f28}}, {{cite:c306697559efe3b87d3384217ed77357e17394ed}}, {{cite:b96c103f9481ddf3d29cfb41323d7ef9ef8f6231}}).
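For concreteness, the textbook CG iteration for a quadratic (i.e. for solving Ax = b with A symmetric positive definite) is a handful of lines; in exact arithmetic it terminates in at most n iterations. This is the standard unpreconditioned method, shown only to anchor the comparison above.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Textbook CG for symmetric positive-definite A (no preconditioning)."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # A-conjugate update of the direction
        rs = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.normal(size=(20, 20))
A = M @ M.T + 20 * np.eye(20)     # SPD and well conditioned
b = rng.normal(size=20)
x = conjugate_gradient(A, b)
```

Note that the step-sizes here come from inner products with A, i.e. from the quadratic itself, rather than from knowledge of the optimal value as in Polyak steps.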
| d | ef862c3b9da2c5c0dcf27baa9f16b406 |
As obtained by Banerjee and Majhi {{cite:f59ada5995999b12fcc41f7da5b60146de8b3339}} for the metric that has a time-like Killing vector, the metric (REF ) also leads to such a relation:
{{formula:768a4f3d-f23f-4b84-a63d-0a1e7b7a43bd}}
{{formula:dfe4f744-2e54-4c75-a760-5e387ad9a5c2}}
{{formula:77179564-0fec-4e6c-8719-debc44f5e8aa}}
{{formula:fe458878-62a2-42b7-83b2-c35c1a46a2b5}}
{{formula:8da67854-ffb8-4ca1-86f3-c6e3080adde0}}
| m | 319aa34f49793f20394b3cfe8b2d156e |
For experimental confirmation, clarification of the molecular basis of heterophilic adhesion is important. Possible candidates for the heterophilic adhesion molecules are TgrB1 and TgrC1, which are expressed in the dicty slug {{cite:3a0f9fcbe1c236237e397966b69d2eab1b68786a}}. Further, the distribution of these adhesion molecules on the cell membrane exhibits a polarization {{cite:bbfed4bd6f586e3cb0d5137913a383989b089f8b}}, which is a preferred feature for inducing a cell Marangoni effect at tissue interfaces. However, the discussed scenario assumes the expression of different heterophilic adhesion molecules in the two tissues. In the case of dicty, this corresponds to the situation in which TgrB1 (or C1) acts only in the prestalk region of the slug and TgrC1 (or B1) acts only in the remaining region (see Fig. REF (a)).
To the best of our knowledge, such separated, region-specific action of heterophilic adhesion molecules has not been observed. At the very least, the genes encoding TgrB1 and TgrC1 have a common promoter region and are usually transcribed simultaneously {{cite:257429848291a0cb8e94761b110bb9e93ede550a}}.
Therefore, direct confirmation of the cell Marangoni flow requires direct confirmation of the in situ functional adhesion activity of TgrB1 and TgrC1 on the cell membrane.
| d | 6b50383c441d21810b1ce450d9b9f2f4 |
The first thing to note about our rate (REF ) is
that it is consistent with the rates recalled above. When the complete graph topology {{formula:1856dcba-89b6-43fb-bb2e-c97bd0a43f5c}} is used at each iteration we have {{formula:b2f38481-70ed-4105-9c55-9752c3b97b51}} and {{formula:ef64f526-0363-4bdd-a4a4-2de2583cdf22}} , which allows us to recover the rate of C-PSGD. More generally, introducing the usually considered Assumption REF and using Proposition REF gives the looser bound {{formula:312d60a9-8fdd-4fc2-810d-5b859bcbc0b0}} which is equivalent to the rate of D-SGD found in {{cite:fdf5cc4277e982ea6e81147a009deb2c191823e8}}. We also recover the result of {{cite:c8430e72e1a50e5542106efd47646516bb7937c4}} which states that the choice of topology (i.e., the value of {{formula:ff958755-248a-4714-af0d-da6d85c43c9f}} ) does not have a big influence on the convergence when data distributions are close to uniform and large batches are used (since this makes {{formula:b8753d9f-2012-4052-82e2-81e6d1839d93}} close to 0).
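The role of the topology in such rates can be illustrated with a plain gossip-averaging sketch: local values contract toward their mean at a speed set by the spectral properties of the mixing matrix. The ring matrix below is an illustrative choice, not the matrix analysed in the text.

```python
import numpy as np

def ring_gossip_matrix(n):
    """Doubly-stochastic gossip matrix of a ring: each node averages with
    its two neighbours (weight 1/3 each)."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1 / 3
    return W

n = 12
W = ring_gossip_matrix(n)
x = np.arange(n, dtype=float)            # initial local values
for _ in range(200):                     # repeated gossip steps
    x = W @ x
consensus_err = np.max(np.abs(x - x.mean()))
```

With the complete-graph matrix W = (1/n) 11^T, averaging would instead be exact after a single step, mirroring the centralized (C-PSGD) limit discussed above.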
| r | e07a51fc1f253fc9d80a418fb643dbd6 |
at lowest order in {{formula:8b43e9cc-1cb5-4707-9f44-56ea92eec81e}} .
This behavior should be contrasted with the findings in fluid dynamics, either in simulations {{cite:2b41f2aab0e5b4312ceaa6eade822d25618dec03}}, {{cite:2fbed2d6f0a65cd428f263de940bd909ebdb81ab}}, {{cite:e8ee107e0055da8a2083f35a874bf50b29e5d392}}, {{cite:76094bf8c41b92b626e47e566ac9656bbc499d88}} or using general arguments {{cite:39552bd90813aea72018aa1999829cdfa9f8937b}}, which give {{formula:037af652-984f-4237-b58b-d9b4f40cfecb}} at early times. (Strictly speaking, the measure of the momentum anisotropy is sometimes {{formula:6bcfdc90-779e-4e9b-a4fe-58166b48d79f}} itself, and sometimes related to the asymmetry {{formula:8174f9e3-355c-4cff-b331-a5c5cdc3c3ea}} of the energy-momentum tensor, which necessitates no particlization of the fluid. The latter also scales as {{formula:313ec47d-0edc-41de-a818-9b7f104f86b6}} for the system investigated in the present paper {{cite:df074357e0b0834e01d92a2ca12f00c99ab37e57}}.)
The mismatch between the scaling exponents predicted by kinetic theory and fluid dynamics was already noted in Ref. {{cite:c76145a49cab19edacca38221712b2bff224fe9e}} but only for two-dimensional expansions.
In this paper we showed that the difference remains for kinetic theory in three dimensions.
This implies that there is no “universal behavior” for the development of momentum anisotropies — in contrast to that found for radial transverse flow {{cite:39552bd90813aea72018aa1999829cdfa9f8937b}} — but rather that different classes of theories (kinetic theory or fluid dynamics) may lead to different behaviors.
If there exist “late-time attractor solutions” for truly three-dimensional dynamical scenarios without cylindrical symmetry, as was found empirically for one-dimensional motion in both strong and weak coupling regimes {{cite:728fc3901d373f4e835cb83299ff24f260dd53e3}}, {{cite:254df1fb4919e3a36b4fb6d5addbf1276058da26}}, {{cite:67a5431c83d3b6e5cbb4d7f355bb241ea123dc3a}}, {{cite:e645c93ede1139f03b2eeb82104fc58c8a1bd3ee}}, then it would be interesting to see how this could be reconciled with the different behaviors of anisotropic flow at early times.
| d | a75031ff47c3e908ff3c0051c1edb1b9 |
Quantum mechanical effects such as superposition and entanglement open the doors to novel quantum information processing (QIP) technologies in communication, computation, sensing, and metrology that are hard or impossible to build using conventional classical technologies. Among the various implementations with superconducting qubits, trapped ions, and neutral atoms, the solid-state systems with optically addressable spin states (quantum dots, crystal defects) have shown the most promise in distributed QIP owing to the ability to create entanglement between the spin degree of freedom and a photon(s), serving as stationary and flying qubit(s), respectively {{cite:e9d728b8dfd50a4284c99b5b6f77437a8f6d57dc}}. Having photons as couriers of quantum information is beneficial, as they are unaffected by thermal noise and can travel long distances via fiber-optic cable without interacting with each other. Flying qubits from separate systems can be transported to a common location to perform entanglement swapping, thus creating remote entanglement between systems without their direct interaction. Moreover, solid-state platforms are scalable and offer chip-scale integration.
| i | 6e72a0c3bbc9bebc1d57d146a6fe5bc6 |
The LHCb collaboration also measures the same process at {{formula:0f762c30-29b5-4d0b-bee6-f7bc1137bc24}} TeV
with the same cutoff of {{formula:2db6d787-7a9c-473d-a9bf-589262d497ad}} and {{formula:2a672330-ae41-4921-9334-d1328ec1ba08}} , and
they obtain {{formula:f1ebcb2a-300f-4f62-bbe1-e33d7230078b}} nb {{cite:72aaa34fc0d8f2c7be114f417d73720543b4c575}}.
Our prediction is {{formula:7dc93da3-1725-4b44-bab0-79056f385187}} nb at the LO of {{formula:fdc89901-235a-4836-8473-b4ab9058087f}} expansion.
Compared to the experimental measurement, our result is smaller by roughly a factor of 2.
Although the kinematical cut is the same as in the case at {{formula:1ba8b9b6-ad8d-4daa-8775-fe49b3e4ee6c}} TeV,
the higher {{formula:59a96b70-c3cd-4932-80f7-f612cc24522b}} could lead to larger higher-order {{formula:0857edef-9387-4bf3-8b93-884b643a3986}} corrections.
LHCb also measures the {{formula:f5d12e00-ed7a-49e6-8dd8-a08bc7849abb}} distribution at {{formula:61554cf5-0c95-452f-85d9-0b62e3f83e25}} TeV.
The shape of the {{formula:4c1d2a40-5f5d-43ba-a376-b865121bbab4}} distribution is determined by the Sudakov factor,
and our prediction is consistent with the experimental result as shown in Figure REF .
{{figure:b88e018e-bd17-401f-8e35-700a35a7fa52}}{{figure:2556f452-75f5-468a-a33d-a13d72ee0904}} | r | df784518fd3bf39e3a860db3472bba7f |
Blockchain consensus methods are numerous and covered in detail in {{cite:cdac960ca83f7dc84c63adcdd9efc46b410b6b62}} and {{cite:407f60490faf79e251914343a6a18e676ec8ef82}}, which identify, explain and examine the consensus methods currently used with blockchains. A general overview is that public blockchain consensus methods, such as PoW, trade speed for security so that they can function between unverified and unauthenticated network nodes; consensus convergence is therefore expected to be slow. By contrast, a consensus method such as a variant of a Byzantine Fault Tolerant (BFT) algorithm, often used to negotiate orderly block addition and replication, trades security for speed. Such methods are typically used for nodes in a private blockchain network, which are authorised and authenticated before their participation, so consensus convergence is rapid {{cite:9f5bf8afb8ef9878d6b839f2ecd721c1c63d13e8}} {{cite:0d6086bcd32466821cc221def3c5a2b744ed8a61}}. It must be noted that Byzantine Fault Tolerance-like consensus methods are not unique to blockchain and existed prior to it {{cite:609af0429fc5d599435ee3ddb238ff8ddab2a8f4}}. In the following section we take a different approach and investigate some commonly available open frameworks, projects and source code used for blockchain development, where a traditional block construction is implemented. We will identify each consensus method and how it functions.
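The speed-for-security trade made by PoW can be seen in a toy sketch: finding a nonce whose hash clears a difficulty target is deliberately expensive, while verifying it takes one hash. The block data and difficulty below are placeholders.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int = 12) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(data + nonce),
    read as an integer, falls below a difficulty target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine("block-1")
digest = hashlib.sha256(f"block-1{nonce}".encode()).digest()
```

Raising `difficulty_bits` doubles the expected mining work per bit, which is exactly the knob that slows public-network consensus convergence.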
| m | 40815f250ecda85003ebaee38afe2ba5 |
In what ways can our reachability graph be generalized? First consider staying within the setting of {{formula:0a1f6de1-d266-45f0-abee-54c6e440b846}} -qubit states. The existence of a graph structure is most useful when we have a gate set that can be used to pick out only a finite number of states, rather than a continuous subspace of {{formula:56fd0818-daa6-437c-82ba-ff3ce41687eb}} or the entire Hilbert space itself. Hence we do not expect choosing a modification of the Clifford gates which yields a universal gate set to provide interesting results. We could instead consider the stabilizers of some other finite group of operators besides the Pauli group. It is an interesting question whether there are any such finite groups with a compelling physical interpretation. We could also consider allowing a small number of discrete applications of non-Clifford gates, i.e. allowing a small amount of “magic” {{cite:10fe2a1568115026b8f242aa6354fe872a78a3a0}}, {{cite:845bc929d1272c990a95dda474e87d94fe4444ea}} as a resource. Or, we could consider restricting ourselves to stabilizer states, but allowing some applications of gates which are in the Clifford group but not Clifford gates themselves, which could be used to “fast-forward” a circuit. Finally, it would be very interesting to understand whether there is a set of gates that produces a discrete set of states but allows, for example, for tripartite entanglement.
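The finiteness that makes the graph structure useful is quantifiable: the number of n-qubit stabilizer states is finite and grows rapidly. The counting formula below, |S_n| = 2^n · prod_{k=1}^{n} (2^k + 1), is the standard result we assume here.

```python
# Assumed standard count of n-qubit stabilizer states:
# |S_n| = 2^n * prod_{k=1}^{n} (2^k + 1).
def num_stabilizer_states(n: int) -> int:
    count = 2 ** n
    for k in range(1, n + 1):
        count *= 2 ** k + 1
    return count

counts = [num_stabilizer_states(n) for n in (1, 2, 3)]
```

For one qubit this gives the familiar six states (the ±X, ±Y, ±Z eigenstates); the superexponential growth explains why the reachability graph is studied for small n.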
| d | 68bee17f7ac3a49366b1e316f89497ba |
Some obvious future directions include: (a) self-dual gravity with cosmological constant {{cite:46e941b49cbea631983ea6ce760ca5d27a6678ee}}; (b) higher spin extensions of SDYM and SDGR {{cite:dd2617a5806c41c4d3be41b9155df56aeff1c38c}}, {{cite:cfa14e0c86b76eb226c954f5f1defe615d99c7e7}}; (c) the supersymmetric higher spin extensions of {{cite:9dbd4ad473c040e7651274e478cb4143ec93c543}}; Chiral higher spin gravity {{cite:5ce615323c5b16718529ec00881f167c212bdf83}}, {{cite:a90ba7840d8f72245742a332e81b4ca8c72fe2e6}}, {{cite:514da8f8e6076b0bffcc647a139028f973a19f85}}, {{cite:dd2617a5806c41c4d3be41b9155df56aeff1c38c}}, {{cite:d1fc06b0e88a111d680b4029aeb37619fdc39c62}}, {{cite:4f09c0844bc0978380cd1e105a62bc48c082a440}}, {{cite:2863b65b5bd237b9c7ea1566253c6e5256bde99e}}.
| d | af8a7132ce9c17e25b328e829fba0d7a |
In this section, the measurements of the flow coefficients, the non-linear modes, symmetry-plane correlations and the non-linear flow mode coefficients are presented. They are compared with hydrodynamic calculations with various settings {{cite:b3ca312ff48dff8bcc2db6f2df30cb2135b442c5}}, {{cite:87bd312e77e6ee22f67416b4b7e72af188b853b7}}, {{cite:a97d7362f8c0f008079b1d1c54981f8d509029ea}}, {{cite:6b90118b775698c9215f6bae5ad21983b84edc4d}}. The first calculation is based on an event-by-event viscous hydrodynamic model with EKRT initial conditions {{cite:b3ca312ff48dff8bcc2db6f2df30cb2135b442c5}}, {{cite:87bd312e77e6ee22f67416b4b7e72af188b853b7}} using a value of {{formula:178f2f62-11d0-4cc8-9959-3e28f8887c82}} (param0) and a temperature dependent {{formula:3e8e12f7-43a7-45e4-aead-45566707312e}} (param1). For both configurations, {{formula:838cb473-c827-491b-a22d-13e8aee8821e}} is set to zero. The visualization of the model parameters can be found in Fig. REF . The second calculation employs the iEBE-VISHNU hybrid model {{cite:8dec1a33c7ed1730701c544050dd971e1f2e5571}} with AMPT {{cite:d62ce15e569c87b5413f124a19cf6cd3ad60b573}}, {{cite:9528fe3c92a3f8c394a77a8cb3d53d89005295ad}}, {{cite:616b007a5485e9e3d4e45c16299e8088b5ad6540}} and TRENTo {{cite:e16564400ddd8c7972c086c372e44ee2d82344a6}} initial conditions. The {{formula:1e0d9a8c-29dc-4d63-951c-3aac09faf5be}} and {{formula:8b9b6331-a1a6-4cb4-a00a-9edc2654eb97}} are taken for param2, while the {{formula:cd745878-de35-4328-95da-0e255513bb14}} and {{formula:13278f4c-8d42-48e9-8bcc-ba3ff47b59c9}} (param3), extracted using Bayesian analysis {{cite:3154b2f77f4562c4790f0dbda917ade2b89e50a2}} (except for the normalization factors) from a fit to the final multiplicities of the charged hadron spectra in Pb–Pb collisions at {{formula:4f5dc47c-ef34-422a-8136-f6bf63aa1322}} , are used for the TRENTo initial conditions.
The third calculation uses the MUSIC model {{cite:639b150568d42335357b7be1944d2c48b6297026}} with IP-Glasma {{cite:2c5dddca312b17f7f72aba881b23a5d92ce28a11}} initial conditions with a value of {{formula:838c4689-d5d3-4b00-8c14-5a65caf4db42}} and {{formula:4e4ec0c4-cc4b-48db-a8a6-7cde1bba0c9b}} (param4). Each of the {{formula:e759a109-c01f-4aff-afd7-c60eac120809}} parameterizations is adjusted to reproduce the measured charged hadron multiplicity, the low-{{formula:e29bbc1b-f566-4502-b1da-d8d509a11298}} region of the charged-hadron spectra, and {{formula:9d9005d8-67da-423f-8883-de4b376fbb59}} from central to mid-peripheral collisions up to the fourth harmonic at RHIC and the LHC {{cite:b3ca312ff48dff8bcc2db6f2df30cb2135b442c5}}, {{cite:da55f099a470b1d14048ca3d4cc29c44322ee876}}, {{cite:ad35d08e2f86489215594b7507e2382e65f4d67e}}, {{cite:223d67040ce26f93a880669aa2e6a96a377e3a80}}, {{cite:d62ce15e569c87b5413f124a19cf6cd3ad60b573}}, {{cite:aec4155e7bc574135d73fac680bfc02b56f08c32}}, {{cite:6b90118b775698c9215f6bae5ad21983b84edc4d}}. The model configurations are summarized in Tab. REF .
{{figure:d96586ce-e428-4e35-b133-262afc713d88}}{{figure:272abd08-c25c-4643-811b-e9c5d7ff6190}} | r | fec3678cd6ed8fb13c9dbdba2753f885 |
The primary component of the {{formula:985fc4d0-9c16-4c37-ae0a-2564fae4071b}} Dra system is an evolved A-type star, which is an extremely rare case when it comes to eclipsing double-lined binaries. Querying DEBCathttps://www.astro.keele.ac.uk/jkt/debcat/, an on-line catalogue of detached eclipsing binaries with the masses and radii determined with a precision better than 2% {{cite:0eb2652637473973d66362afcb6334251a5bc719}}, results in only one binary system with an evolved A-type star, namely {{formula:b24d0ae3-66cb-4877-b3de-017e337c1785}} Cen. This binary system comprises a pair of A0 IV and A1 V stars that reside in a 38.8 d orbit. Fundamental and atmospheric properties of both components of the {{formula:524c452b-74d4-4406-b96f-65ef1d8096fc}} Cen system have been determined with high precision, thanks to the available WIRE space-based photometric {{cite:a94cc06c0ef8b826759f25cef4b4c038c4670562}} and ground-based high-resolution spectroscopic data {{cite:6d3debbe20e881c9775824efc8ab6d32838f68c0}}, {{cite:903865525c41dde2bd52faee31d892081e47021b}}. For example, {{cite:903865525c41dde2bd52faee31d892081e47021b}} report the mass and radius of the primary component of {{formula:da2333e5-ac04-4636-a544-f319b2e511a5}} Cen to be {{formula:970f202e-6415-4705-b060-aeeeabd90fad}} M{{formula:bf6cc9d0-db73-49de-a066-b291a8570b38}} and {{formula:03cabf7a-beb5-4948-b654-97a641b87d1e}} R{{formula:79591c65-f8fd-41e8-ae94-33ac5eed0d83}} . Therefore, the primary components of the {{formula:106c628b-397b-4b09-ba4e-590bac0b0b02}} Cen and {{formula:e6298b61-0d2d-4f2c-bc45-6fb129b4ccc0}} Dra are strikingly similar in terms of their masses, while the difference of about 1 R{{formula:df74a47f-498e-451c-9a70-ce683d154b59}} in their radii (see Table REF for the parameters of {{formula:bcafcb0b-ee52-4fa0-802f-585820100cbd}} Dra) indicates the primary component of {{formula:b408450d-41d8-480c-bc43-78a7530eaca9}} Dra is evolutionary more advanced. 
This in turn suggests a slight difference in ages of the two systems, and indeed, we find {{formula:80a7048e-0acb-49df-b164-4c7daabbd185}} Dra to be {{formula:09cecfe3-d5e0-4897-b590-aa28e3342354}} Myr old, which is to be compared to the age determinations of {{formula:07590f3e-6373-47f2-ac9b-b942a01b5a3c}} Cen of {{formula:fb37da56-6700-4061-8695-1b4968718405}} Myr {{cite:903865525c41dde2bd52faee31d892081e47021b}} and {{formula:40537435-7049-4cf6-a90a-c2eb097f66d7}} Myr {{cite:a94cc06c0ef8b826759f25cef4b4c038c4670562}}. Looking for “a replica” of the secondary component of {{formula:4043cf3b-cc15-435c-8702-de32184b1a9c}} Dra in terms of mass and age, the closest matches are the TZ Men A {{cite:e023244fa597d3c5434dec69584f97a823fbe883}} and V541 Cyg A {{cite:c7c33444045a0d58b3a316eae757d23ff95b40ba}} systems. Their masses and surface gravities (used as a proxy for age) are matching the respective quantities of {{formula:d4ff240b-7f3e-4e73-a7ab-6945d13ee5e0}} Dra B within 0.1 M{{formula:c691c9cc-6ab1-4d89-b6e7-979ac3677c5d}} and 0.1 dex
| d | 690d8ae3c9dc6b658442c70492ccf01d |
however some loss functions cannot be interpreted as a negative log
likelihood as shown in table (REF ) and as discussed
for the SVM by {{cite:00d1594658cc69100f5f9cc02f1e817cee85d352}}. If the likelihood is a log-concave
function of {{formula:eb84381b-0c9c-4f22-9a95-25da47fb95be}} , it corresponds to a convex loss function
{{cite:fc1878b6a63e535a996da65eda36f7d2a324da0e}}. Common loss functions and likelihoods
for classification {{formula:ddbabc41-472d-4b5c-9220-eddf8855bd61}} and regression {{formula:87d67dd7-59b2-4979-906c-151b8e891f27}}
are listed in table (REF ).
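The distinction can be made concrete numerically: for the logistic loss, exp(-loss) defines a probability over the two labels that sums to one, whereas for the hinge loss it does not. This illustrates the point at a single score value f rather than proving it in general.

```python
import math

def logistic_loss(yf):
    """-log p(y|f) with p = sigmoid(y f): a genuine negative log likelihood."""
    return math.log1p(math.exp(-yf))

def hinge_loss(yf):
    """SVM hinge loss: max(0, 1 - y f)."""
    return max(0.0, 1.0 - yf)

f = 0.7
# exp(-loss) sums to 1 over y in {-1, +1} for the logistic loss ...
logit_mass = sum(math.exp(-logistic_loss(y * f)) for y in (-1, 1))
# ... but not for the hinge loss, so exp(-hinge) is not a normalised likelihood.
hinge_mass = sum(math.exp(-hinge_loss(y * f)) for y in (-1, 1))
```

Here `logit_mass` equals 1 up to rounding, while `hinge_mass` falls short of 1, consistent with the hinge loss having no direct likelihood interpretation.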
{{table:5fa20ac3-e5f9-43fe-a3f9-8d5ff65c603b}} | m | 186d2585660d2a0e1725643e77fc29d0 |
The community appears to be deeply split on the issue of model dependence, with the proponents citing the necessity of explanation fidelity {{cite:2a7a2668f6627208ec014444b2bc2fdbcc253c2e}}, while opponents doubt the inherent fidelity of the directly model-dependent explanations {{cite:1811817a569307f921479e414fe18c683465d117}} and stress the need for flexible model-independent explanation methods {{cite:cd2e265280f44dc3d1a731a7560e3379ef107822}}.
{{table:0011c319-8510-4761-8c41-224dddc7f714}} | m | 9791866de42c1084d3a4fe403955373b |
A cutoff can naturally occur if it is produced by a maximum acceleration energy in the sources. In that case, the parameters given above would be reduced to upper limits. However, the detection of a pronounced pileup just below the cutoff would be prima facie evidence of a {{formula:82433a83-b924-462d-8e98-0eadc7485d96}} -even LIV effect, possibly related to Planck-scale physics. In fact, {{formula:18c1ee05-104c-43e2-b09e-78a87c901f3e}} -even LIV in the gravitational sector at energies below the Planck energy has been considered in the context of Hořava–Lifshitz gravity {{cite:e3b4e686220b4912b2db81c76c3f56998bf016ed}}, {{cite:3dd2f84e751b7e9a710f2dff3679ace3c8ad6833}}.
| r | c39f7655fdd9c587ac85d6a2e0dc0282 |
We applied these new tools and methods to inspect, for the first time, the imprint on cosmological recombination of inhomogeneous photon injection by accreting PBHs. The physical origin of this inhomogeneity is the dependence of the accretion rate on the velocities of accreted baryons relative to dark matter, thus PBHs. Importantly, these relative velocities are typically supersonic, and therefore ought to have a strong, non-perturbative effect on PBH luminosities. Fluctuations of relative velocities on {{formula:215b06d9-4280-452a-b549-7704721e76b6}} 100 Mpc scales thus translate to a large-scale spatial modulation of the PBH accretion rate and luminosity, thus energy injection rate. To quantify this effect, we adopted the accretion model of Ref. {{cite:6f98396df650db9d4de3f99cc606d2396fc9ff9a}}, which was used to derive conservative upper limits to the PBH abundance from CMB-anisotropy power spectra. Within this model, the PBH luminosity is highly inhomogeneous, concentrated in small islands with subsonic relative velocities (see Fig. REF ). Conservatively, we extracted the free-electron variations resulting from the component of luminosity fluctuations that is correlated with relative velocities squared, as we expect those to give the dominant contributions to observable effects in CMB anisotropies. We found that spatial perturbations to the free-electron fraction {{formula:b5358073-9f2e-44b6-8826-b62729500646}} peak at {{formula:ce0c8c06-7e56-4b73-a636-da4a83e3dcc4}} Mpc{{formula:2993a4c5-b617-4b2d-83c7-9761082dba36}} , and are only partially washed out by the finite propagation distance of high-energy photons. Importantly, we found that the rms of {{formula:19f04157-b078-4fa7-8d7f-a91d904f7d69}} is comparable to its mean, which was the only quantity evaluated in previous studies.
| d | 2c46cfd1062df0fc37900f10f85a0a60 |
Fig. REF reports synchronization diagrams {{formula:a8bafd23-9dcb-41a4-aca2-6118c197e7cb}} in the absence and presence of the coupling constraint for fixed {{formula:0294e532-52ae-453a-ac93-024bcd29a7f9}} . As expected, without consumption {{formula:2b05c639-8651-493d-a703-707e4949edec}} (Fig. REF a), our system exhibits an explosive transition reproducing the hysteresis loop demonstrated in the pioneering work on a frequency-degree correlated SF network by Gómez-Gardeñes et al. {{cite:9b30223635451bce85bd55651d4c6f2903119d1b}}. A mean-field approximation also predicts an explosive transition indicating a bistability region, where both {{formula:6924f581-a1b9-45f5-9d13-e436c2e106d8}} and {{formula:db092baf-21b3-45c4-97af-da72d30b4dfd}} are stable ({{formula:f14cf407-6512-40ff-8ef4-e7b2039d2a9b}} ).
| r | 53bb87c84521038aeb65306bbd7f8240 |
We have also evaluated the above base CNNs (B), and the influence of our novel CAP (+C) and the classification module (+E), on the recognition accuracy on the Aircraft, Cars and Pets datasets (more in the supplementary at the end). The results are shown in Table REF . It is evident that the accuracy improves as we add our modules to the base networks, i.e., (B+C+E) {{formula:73d7a30b-4ad6-40be-9885-42756613a767}} (B+C) {{formula:0417e223-6450-4cf4-845d-6e21f8875831}} (B+E) {{formula:dd077976-172d-415b-a2a3-1439141753ab}} B, with the largest gain contributed by our novel CAP (B+C). This signifies the impact of our CAP. In B+C, the minimum gain is 7.2%, 5.7% and 5.1% on the respective Aircraft, Cars and Pets datasets with Inception-V3 as the base CNN. Similarly, the highest gain is 12.5% and 11.3% on Aircraft and Cars, respectively. These two datasets are relatively larger than Pets (Table REF ), on which the highest gain (7.9%) is achieved using ResNet-50 as the base CNN. We also observe that there is a significant gap in baseline accuracy between lightweight and standard base CNNs on the larger (Aircraft and Cars) datasets. These gaps are considerably reduced when our CAP is added. There is a further increase in accuracy when we add the classification module (B+C+E). This justifies the inclusion of our novel encoding, which groups hidden responses using residual-less NetVLAD and then infers class probabilities using learnable pooling over these encoded responses. For base CNNs, we use standard transfer learning, fine-tuning on the target dataset with the same data augmentation and hyper-parameters. For our models, we use pre-trained weights for faster convergence. We experimentally found that random initialization takes nearly twice as many iterations to converge (to similar accuracy) as pre-trained weights. A similar observation is reported in {{cite:550272c262070772b691b8918d9cb8a2e2c16d0e}}.
{{table:ab764896-37b6-4d50-9524-f3a695182a8f}}{{figure:bd8f3699-26a1-4d9e-9297-d521103c170c}}{{table:320fe26b-ad58-4525-9fe9-050b504a762d}} | d | d5170a6368c1b53cbd98bfc9770bc73e |
In terms of speed, CNNs are very fast and have a smaller memory footprint (see fig:complexitycomparison). The throughput gap is evident when examining the vision transformers reported in Table REF . A particularly strong ViT is Focal-ViT {{cite:e9a20e895d15e68c5c819697c722d222be82b1ed}}; its tiny version improves upon ResNet101 by 0.7%, while the latter enjoys x1.4-times better throughput. Nonetheless, our model stands out in terms of the speed-accuracy trade-off. Comparing QnA-Tiny with Focal-Tiny, we achieve only 0.5% less accuracy while having x2-times better throughput, parameter count, and FLOPs. We can further reduce this gap by training QnA with a larger receptive field. For example, setting the receptive field of QnA to 7x7 instead of 3x3 achieves 82.0% accuracy, with negligible effect on model speed and size.
| r | 35f65fc2bf9318614594afdc915fcf63 |
Comparison with deformable convolution.
In {{cite:b196533919fdfcd73e764151974dd580debd9ca6}}, Dai et al. propose a deformable convolutional layer, which allows the sampling index set {{formula:5c97e026-b65f-406d-84f7-4fa3b9631413}} to be irregular rather than grid-aligned. It assigns an offset to each index in {{formula:dcda706b-093a-4f87-a441-a230f47668d3}} , instead of only modifying the dilation rates. Compared with adaptive dilated convolution, deformable convolution enjoys higher degrees of freedom, but it also has a much higher computational cost. More importantly, the offsets introduced in deformable convolution are completely unconstrained and independent. This may cause the input and output feature maps to lose their spatial correspondence, which can also hurt localization accuracy. We also show experimentally in Sec. REF that the proposed adaptive dilated convolution is better suited to human pose estimation than deformable convolution.
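The difference in the sampling index sets can be sketched in a few lines for a 3x3 kernel at one output pixel; the concrete offsets below are arbitrary placeholders, not learned values.

```python
# A dilated convolution moves the whole 3x3 grid by scaling a dilation rate,
# while a deformable convolution adds a free offset to every index.
def dilated_indices(cy, cx, rate):
    return [(cy + dy * rate, cx + dx * rate)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def deformable_indices(cy, cx, offsets):
    base = dilated_indices(cy, cx, 1)
    return [(y + oy, x + ox) for (y, x), (oy, ox) in zip(base, offsets)]

grid = dilated_indices(10, 10, rate=2)                            # regular grid
free = deformable_indices(10, 10, [(0.3 * k, -0.1 * k) for k in range(9)])
```

The dilated set stays a rigid, grid-aligned pattern (one scalar of freedom), whereas the deformable set has 18 free scalars and can drift arbitrarily, which is the loss of spatial correspondence discussed above.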
| d | a613f2404fcf3ea3cb0242ca62650ff5 |
Additionally, we include results for using stronger networks on CIFAR-10, namely VGG11 {{cite:16f136be25d8c23b63cce54fbac163ba0752e003}}, in Table REF . Here, we use high performing networks trained only on CIFAR-10 to assess how well consensus through HD-Glue works in this scenario. We have dropped poorly performing benchmarks here for simplicity. When the networks are too similar, the benefits of HD-Glue are not as apparent. HD-Glue prefers diverse networks in its consensus. Still, the results are competitive, while maintaining the benefits of HD representations. It should be noted that Logistic Regression likely performs so well in this experiment because of how similar the models are. However, Linear-SVM catches up to this with enough examples.
{{table:2f370e58-5ce7-4f4f-9904-56a874d6b72a}} | r | 551d790724116e7aded4902f307f188b |
To further verify the effectiveness of those technologies, we combine RTD, DES and DA to train models of different sizes (i.e., large, base and small) using standard pre-training settings. Since all of our experiments are built on the DeBERTa code base and follow most of its settings, we denote the new models as DeBERTaV3large, DeBERTaV3base and DeBERTaV3small. The discriminators of DeBERTaV3large and DeBERTaV3base are the same as those of DeBERTalarge and DeBERTabase, respectively. The discriminator of DeBERTaV3small has the same width and attention heads as DeBERTabase and half the depth of DeBERTabase, i.e., 6 layers with a hidden size of 768 and 12 attention heads. The generator of DeBERTaV3 has the same width as the discriminator and half its depth.
We train these models with 160 GB of data, the same as DeBERTaV2 and RoBERTa, and use the same SentencePiece {{cite:6506b93e8f0c8eb44fb56535d501b992d08139b6}}, {{cite:efc2f96e539a64a733f65d26b22fbb71bb2f78f2}} vocabulary as DeBERTaV2 {{cite:19077381452fe85098e244ba3ef9856ff2014ef7}}, which contains 128,000 tokens. All models are trained for 500,000 steps with a batch size of 8192 and 10,000 warm-up steps. The learning rate for the base and small models is 5e-4, while the learning rate for the large model is 3e-4. Following the DeBERTa setting, we use the AdamW {{cite:22fedcb79889e65c2a2e698a03f16a0b72417c88}} optimizer, a fixed version of Adam {{cite:dc674447b936e4a70ecd9326d8c783b3674d5606}} with weight decay, and set {{formula:048cf1b8-c50b-4ffd-9d2a-d9386046e57f}} , {{formula:777862e1-d6fe-4c01-b021-95b36d6262d0}} for the optimizer. After pre-training, the discriminators of these models are used for downstream-task fine-tuning, following the same paradigm as other Transformer PLMs such as BERT, RoBERTa, ELECTRA, and DeBERTa.
We provide more details on the hyper parameters of pre-training and fine-tuning in the Appendix.
| r | 403aa281ec46d55e4afa15eb81702517 |
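The stated schedule (peak learning rates of 5e-4/3e-4, 10,000 warm-up steps, 500,000 total steps) can be sketched as below. The linear-decay shape after warm-up is our assumption; the text only specifies the warm-up length and the peak rates.

```python
def lr_at_step(step, peak_lr=5e-4, warmup=10_000, total=500_000):
    """Linear warm-up to peak_lr over `warmup` steps, then (assumed)
    linear decay to zero at `total` steps."""
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * max(0.0, (total - step) / (total - warmup))

# Base/small models peak at 5e-4; the large model would use peak_lr=3e-4.
print(lr_at_step(5_000), lr_at_step(10_000), lr_at_step(250_000))
```

In a training loop this would simply set the optimizer's learning rate once per step before the parameter update.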
We observe from Table REF that boosting algorithms such as AdaBoost {{cite:803d40d249fb4adb271eb359aa8230142382f138}} and domain-pre-trained transformer models such as LEGAL-BERT outperform all the other models in terms of Accuracy and Macro F1-score on both the ID and AD datasets.
| r | 6c7624d0af9492a3dbebf86323276738 |
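To make the boosting baseline concrete, here is a minimal from-scratch AdaBoost over decision stumps on a toy dataset of our own. The experiments above presumably use a library implementation; this sketch only illustrates the weight-reweighting mechanics.

```python
import numpy as np

def fit_stump(X, y, w):
    """Find the lowest weighted-error axis-aligned decision stump."""
    best = (0, 0.0, 1, np.inf)          # (feature, threshold, polarity, err)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= t, pol, -pol)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def adaboost(X, y, rounds=10):
    """Classic AdaBoost; labels y must be in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        j, t, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-12)           # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= t, pol, -pol)
        w = w * np.exp(-alpha * y * pred)   # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(X[:, j] <= t, pol, -pol)
                for a, j, t, pol in ensemble)
    return np.sign(score)

# Toy data: the label is determined by a threshold on feature 0.
X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0],
              [3.0, 1.0], [4.0, 0.0], [5.0, 2.0]])
y = np.array([1, 1, 1, -1, -1, -1])
ensemble = adaboost(X, y, rounds=5)
print((predict(ensemble, X) == y).mean())
```

The weighted vote over weak learners is what lets boosting compete with much larger models on small, feature-based datasets like the ones discussed above.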
SNGP is a method for deterministic uncertainty quantification: given a single, non-random representation, SNGP improves the base model by enhancing its distance-awareness property. This is an orthogonal direction to that taken by popular ensemble-based approaches (e.g., Monte-Carlo dropout, BatchEnsemble {{cite:59250f5c7a86e19d84f0c9d225054296830bd775}} or Deep Ensembles), which improve performance primarily by integrating over multiple diverse representations {{cite:1a39a07fe8a31102355fcbdeefcd0f81c28b169f}}.
However, when base models consistently make overconfident predictions far away from the data, their ensemble can inherit this behavior as well. As a result, while ensembling vanilla DNNs is effective in improving accuracy and calibration under shift, it can fail to provide as significant a boost in OOD detection (e.g., due to the lack of distance awareness).
| m | e9ec301992eb9c5bcf51a1b5a2beb438 |
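The inherited-overconfidence point can be seen in a few lines: a deep ensemble averages member softmax probabilities, so if every member is confident on a far-away input, the average remains confident. The logit values below are contrived for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_member):
    """Deep-ensemble prediction: average the member softmax probabilities."""
    return softmax(np.asarray(logits_per_member)).mean(axis=0)

# Contrived logits: every member is (over)confident in class 0 on some
# far-from-training input, so the averaged prediction stays confident.
members = [np.array([8.0, 0.0, 0.0]),
           np.array([9.0, 0.0, 0.0]),
           np.array([7.5, 0.0, 0.0])]
p = ensemble_predict(members)
print(p.argmax(), p.max() > 0.99)
```

Averaging only helps when the members disagree; a distance-aware output layer, as in SNGP, instead forces confidence to shrink as inputs move away from the training data.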
More recent methods are often data-driven, in contrast to conventional model-driven methods. As such, the challenge shifts from building a good theoretical model towards building a suitable training dataset that enables good generalisation to unseen data.
In general, data-driven deep-learning methods can be divided into three categories: supervised CNNs looking at specific clues, generic supervised CNNs, and one-class training {{cite:fb08aaac4204a6d03b425defa7c8aa893c66cc51}}.
| m | 30bcc50044c0bc52723f6943d916ab84 |
Multi-epoch photometry can also be used to identify young stars from their inherent variability {{cite:0ddb6aab9ad29168e851943f2b408092dad2430f}}, though {{cite:a395c6effd55cfbea0aea672b945c5d2ef3d71ca}} find that this method needs spectroscopic verification of the members and has only about a 50% success rate. Certain types of stars can be identified from single-epoch photometry: for example, {{cite:4b013fd5b55c7fc17a02ad5cbbf7feca1115336c}} used H{{formula:54c2e389-9c35-4e6d-847c-4f9b76d15924}} photometry to identify young A-type stars, and {{cite:8ca26b7cef1977552284941cf7ec7660086f578a}} used optical and near-IR photometry to identify pre-main-sequence M-type stars. Despite this, single-epoch photometry is generally used to measure the surface density of young stars above the Galactic background level, rather than to identify individual young stars {{cite:16bc18098ff6a98b176f6c263867f62eef7f7837}}, {{cite:b892f1bcded671104b7c535253c0f7422f406726}}, {{cite:74cfe32c4a4c8c9f6b9fddf59af23d680003c737}}.
| m | 183485da57743dcf626629889303eb2d |
This fact, which follows from a simple double-counting, was observed by Wiener in his original paper {{cite:2239530ec1d3c31c027d593d8555c62bbcfea448}}, where only trees were considered.
However, (REF ) fails to hold for most other graphs, owing to the fact that shortest paths are typically not unique.
| i | 77d19d35c731d36782c6c87eadc4d49b |
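The double-counting identity for trees — the Wiener index equals the sum, over edges, of the product of the two component sizes created by deleting each edge — can be checked directly. A small sketch with our own helper names:

```python
from collections import deque

def wiener_bfs(edges, n):
    """Wiener index as the sum of all pairwise shortest-path distances."""
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b); adj[b].append(a)
    total = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2               # each unordered pair was counted twice

def wiener_edge_cut(edges, n):
    """Tree-only identity: sum over edges of n1(e) * n2(e), where n1, n2
    are the component sizes after deleting edge e."""
    total = 0
    for e in edges:
        adj = {v: [] for v in range(n)}
        for a, b in edges:
            if (a, b) != e:
                adj[a].append(b); adj[b].append(a)
        seen = {e[0]}
        stack = [e[0]]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        total += len(seen) * (n - len(seen))
    return total

path = [(0, 1), (1, 2), (2, 3), (3, 4)]   # the path on 5 vertices
print(wiener_bfs(path, 5), wiener_edge_cut(path, 5))
```

On the path the two computations agree (each edge lies on exactly one shortest path per separated pair); on a graph with a cycle, where shortest paths are not unique, the edge-cut sum generally overcounts and the identity fails.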
Results on miniImageNet. The experimental results on miniImageNet are reported in Table 2. It can be observed that our method significantly outperforms other methods under both the 5-way 1-shot and 5-shot settings.
In particular, we are 2.5% better than the second-best method {{cite:bc292e0d8d57d134a76e616538f5c4abd89abc75}} under the 5-way 1-shot setting, with an accuracy of 53.63%. Similarly, we achieve 72.67% under the 5-way 5-shot setting, an improvement of 2.3% over the second-best method {{cite:7f51b632d9fb862a87f7c72e26a1bdaa2c220f9f}}. Note that our model gains 4.7% and 2.3% improvements on 1-shot and 5-shot, respectively, over the most relevant work {{cite:7f51b632d9fb862a87f7c72e26a1bdaa2c220f9f}}, which proposes an image-to-class mechanism to find relations at the class level. This improvement verifies the effectiveness of our model, which can adaptively select the most discriminative local features at multiple scales for a given task.
| m | 2a9c29d377d3de5186f9929f8861362a |
In Fig. REF , by computing the energy for more temperatures and {{formula:e5ae60b8-e79a-4dff-9f91-95e3b106d8b0}} , we verify again that {{formula:892114ec-a82a-4cf4-8c23-b91869a40179}} satisfies the simple relation (REF ).
We have also found that the fermion energy for {{formula:14ee4e72-2891-4dfb-9696-c261501e044b}} is consistent with the result by Dornheim {{cite:409a93ba5142a0efe90558d92c7ced10e9515fc9}} and our previous calculation {{cite:9deef93d5474050f7b6a1b25aafe96daceb399ce}}; in both Ref. {{cite:9deef93d5474050f7b6a1b25aafe96daceb399ce}} and Ref. {{cite:409a93ba5142a0efe90558d92c7ced10e9515fc9}}, however, it is difficult to reach temperatures below {{formula:627f2357-90b9-4e82-b2ce-1ff22bb91d38}} and impossible to treat {{formula:f5a1d9f3-151f-47b5-b3d7-5b0253252a22}} .
In all our results in this work, the error due to statistical fluctuations is negligible; hence we do not give error bars here. Of course, in practical applications or high-precision calculations, one may perform more accurate calculations and a more careful analysis of the statistical fluctuations. It is worth pointing out that the main purpose of the present work is to propose and verify the idea of our method. Hence, we only use a moderate number of {{formula:945147b9-ba91-40b1-bd35-66dcd84f9c1c}} MD steps and {{formula:c363cd70-284a-4eb4-9e1f-fa191d9f0620}} beads with separate Nosé-Hoover thermostats {{cite:83ec7593a868ad53f6a1329f36fdaa71d63dfffa}}, {{cite:780d3b72d4b4256e0b5f1d7c56a7bc2941df6720}}, {{cite:81d2c18dae79925fb6b46e906125228f64010d28}}, {{cite:64b59b139733732406fe55152cce12d273ad79bd}}, {{cite:aaa6eac5c78cf21308782a0c2d81bc54f5ed5795}} in our calculation to assure convergence, test our idea, and demonstrate the efficiency of our method. In practical applications, to meet the precision requirements of particular problems, one may significantly increase the number of MD steps and the number of beads {{formula:fca4c1f6-f0af-4fae-aeef-a97019d40618}} per particle.
{{figure:bcbb31f6-d0ae-48a4-8735-c65f9788ec0d}} | r | b904f47eea04962fa5261a7e1b557ca6 |
The third line of research gives up the expressiveness of the full
{{formula:6c398ec7-86a5-4faa-96bc-a3c63e75d6c0}} -calculus and focuses on decidable fragments. Patterns
{{cite:1803b7f324d7c857ff1e3fece7bde8bab4aabed2}} are arguably the most important such fragment in practice,
with implementations
in Isabelle {{cite:381d96670a7eee4dc042877e42574a880d6153f6}}, Leo-III {{cite:3c4e181110bc1b7834cbb518b69ecfc5b2924d07}}, Satallax
{{cite:10afccaa4311b5a47ccd5dac658ced406056b52d}}, {{formula:6b21f004-4509-4c23-a50d-ae01ecd7deaf}} Prolog {{cite:5068fde7829e6f771bff4dd9c3b9cb07a08265d0}}, and other systems.
Functions-as-constructors {{cite:746e7649b7eb46deb649a5dbb9782dc7059faf9e}} unification subsumes pattern
unification but is significantly more complex to implement. Prehofer
{{cite:2855b7e563ca9ac4a4aecbf81cbe31865215cac6}} lists many other decidable fragments, not only for
unification but also for preunification and unifier-existence problems. Most of
these algorithms are given for second-order terms with various constraints on
their variables. Finally, one of the first decidability results is Farmer's discovery {{cite:658a0bf2c697b98b90610006639bd4116dce24a5}} that
higher-order unification of terms with unary function symbols is decidable.
| d | 3ef91c54a2184411dde44a6e878dffec |
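To make the pattern fragment concrete: a term is a (Miller) pattern when every occurrence of a free (meta)variable is applied only to a list of distinct bound variables. A small checker over an assumed spine-form term representation (the tags and tuple shapes are ours, not from any of the cited systems):

```python
def is_pattern(t, bound=()):
    """Check the Miller pattern restriction on a term in spine form:
    ('lam', x, body), ('var', x, args), or ('meta', F, args)."""
    tag = t[0]
    if tag == 'lam':
        _, x, body = t
        return is_pattern(body, bound + (x,))
    if tag == 'var':
        return all(is_pattern(a, bound) for a in t[2])
    if tag == 'meta':
        args = t[2]
        names = [a[1] for a in args
                 if a[0] == 'var' and not a[2] and a[1] in bound]
        # every argument must be a distinct bound variable
        return len(names) == len(args) == len(set(names))
    raise ValueError(f'unknown tag: {tag}')

# lam x. lam y. F x y  -- a pattern
ok = ('lam', 'x', ('lam', 'y',
      ('meta', 'F', [('var', 'x', []), ('var', 'y', [])])))
# lam x. F x x         -- not a pattern (repeated bound variable)
bad = ('lam', 'x', ('meta', 'F', [('var', 'x', []), ('var', 'x', [])]))
print(is_pattern(ok), is_pattern(bad))
```

The restriction is what makes pattern unification decidable and unitary: a metavariable applied to distinct bound variables can be solved by a single abstraction, with no branching.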
Comparison with Other GAN Models: Our proposed model is simple and unique in that it requires neither 3DMM information nor external synthetic images during training {{cite:5a7f4ca4eab3ae1e5e1d5e07e5f49d27216a6078}}, {{cite:bbd5a32eddd6b7cad74d6192f0227c41299a12d5}}, nor do we need to fine-tune our model on input images at test time. Owing to this simplicity, we choose two popular publicly available unpaired domain-translation models for comparison: StarGAN {{cite:808257d7fe1f57dc88a57db1d0d256a034ea1ef3}} and the more recent StarGAN-v2 {{cite:e424611878c280d8b379350e4aa73ebfd2144174}}. We train these models for lighting and expression manipulation on the same MultiPIE {{cite:c39ecaf96c119499d7f2a8cc20074f14e5874395}} training split for 100 epochs. Additionally, to gauge the effect of {{formula:36854314-e07e-4a75-8206-606fc406bd0e}} on off-the-shelf models, we train StarGAN separately with the auxiliary discriminator added (StarGAN w/ {{formula:f0da5a2d-1f80-45e6-a591-65f494a54c5a}} ).
| r | a39debd01023f181319dc0025d570fe1 |
Limitations. We are aware that there are still limitations of our model.
First, as discussed above, our model exhibits bias to a certain degree,
which might be caused by: 1) the data bias existing in the original LAION dataset {{cite:f03eec13e19d46e833a5155833edaa7e8b30edd2}}, or 2) the performance bias of the face detector {{cite:519ef5bbb3bf0abf8310599ca90e566b2ba537fb}} we use.
Second, our current work has not yet been adapted to some important face tasks, such as face detection, face anti-spoofing and face forgery detection.
We expect our future updates would address these issues.
| d | 3cc7531823669a00e5bc2f5a9a6db3a5 |