To sum up, we conclude that the Lanczos coefficients and K-complexity behave in an inverted harmonic oscillator as if it were a chaotic system. Some comments are in order. First, our result differs from that of the OTOC, which does not exhibit chaotic behavior at low temperature{{cite:560d012672543b27fae10b74d78b4cff7436b112}}. This behavior originates from the microcanonical nature of K-complexity and the OTOC. We will study this in detail in section . Secondly, using the result of {{cite:560d012672543b27fae10b74d78b4cff7436b112}}, we can test the inequality (REF ) for {{formula:198f2b9b-9295-4a29-8c8f-dc8510b5f2cb}} . We summarize the relevant values in table REF . As we can see, the right side of (REF ) is saturated, while the left side is trivially satisfied (up to numerical precision: {{formula:ec3ff729-7c25-4a22-b160-729a86ccbd3b}} appears to be slightly bigger than {{formula:2cf4cc39-86e9-4363-8cbd-7e6354176f2e}} for {{formula:58fdbca1-7859-45e4-b5bd-ab151976b6b8}} , but this is within the range of numerical error; we focus here on the rough tendency). This is in stark contrast with the large-{{formula:d51ffcc1-26f2-4868-b5cc-8a842eab9418}} SYK model, where the left side is saturated for all temperatures, and the right side is trivially satisfied {{cite:b9f31416c19e304b7094056bcf23ecb7cb31ce13}}. We leave the resolution of this difference to future work. It may be related to the difference between chaotic systems and saddle-dominated non-chaotic systems. Finally, we can compute the Shannon entropy of the system numerically and compare it with our results. The Shannon entropy is given by
{{formula:8d6b5a10-8623-4f8d-9a63-f6c1ff6d4805}}
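As a minimal illustration (not the authors' code), the Shannon entropy of a numerically obtained discrete distribution can be computed as follows; the example distribution is a hypothetical stand-in for the amplitudes of the system.

```python
import numpy as np

def shannon_entropy(p, eps=1e-15):
    """Shannon entropy S = -sum_n p_n log p_n of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()            # normalize, guarding against rounding drift
    return float(-np.sum(p * np.log(p + eps)))

# Example: a uniform distribution over 4 states has S = log 4.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # ≈ 1.3863
```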
It would be interesting to explicitly identify the dual {{formula:01a28f43-2ece-41a0-bb7f-30fe3df93ba3}} SCFT together with relevant deformations dual to the solutions found here. Uplifting the {{formula:cb0de23b-8c5d-4c01-aea7-fa062fd89e45}} gauged supergravity considered here to higher dimensions will allow us to embed these solutions in ten or eleven dimensions, giving rise to new examples of AdS/CFT duality in the context of string/M-theory. In particular, it could be interesting to check whether the singularities allowed by the criterion of
{{cite:8516f57fb1624a2a476f82ace185f56428483bc2}} are physical in string/M-theory by using the criterion of {{cite:dbf515d8baab51f232512b76d14418038ff40a1e}}. If this is indeed the case, identifying relevant M-brane or D-brane configurations related to these four-dimensional solutions also deserves further study. Finally, finding more general holographic solutions of {{formula:e258ebea-d3db-4e68-bafa-92aa678cb2d2}} gauged supergravities in other truncations or in other gauge groups could be interesting as well. These solutions might include holographic solutions describing spatially varying mass terms given in {{cite:8532ba56b82a77b269cade5d5ce9b32013be08a1}} and {{cite:07321d2f4d5970e4153e74cc9da6379a1beedc8b}}. We leave all these issues for future investigation.
After all the iterations, we obtain the stationary point {{formula:eba06a40-dc51-4948-9282-b1adf5a37cd3}} .
Here, we can use other splitting methods, such as ALS {{cite:f65d5e9e6716b21a7428977d1f97244c8b097423}} and PRS {{cite:9c1cce6e55be2b0d2bf95cd14c37d74ec92e462a}}, to minimize the objective function (REF ). When the objective function (REF ) is convex, the iteration converges globally to the global minimizer {{formula:8c22fab7-ce42-4bd9-9e19-19343a0aefd4}} {{cite:806b8eb1856227bc0578a4877a9330ef57ee70e4}}. However, the function here is nonconvex, so stricter assumptions are required to ensure convergence, for example, proper choices of {{formula:fbbec245-ee3c-4854-b04b-75c9db3d54ca}} and {{formula:8dbefe9d-1b37-44ed-bf69-cf6d9a1a347e}} {{cite:f4d174296a6161d3e0a4d692d459cc93068cf692}}. In the following, we will prove the convergence results.
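As a minimal hedged sketch (not the algorithm analyzed here), a Peaceman-Rachford-type splitting for a two-term objective alternates proximal steps with reflections; the quadratic test functions and unit step size below are illustrative assumptions, chosen so that the proximal maps have closed form.

```python
import numpy as np

def prox_quad(v, a, t=1.0):
    """Proximal map of f(x) = 0.5*(x - a)^2 with step t."""
    return (v + t * a) / (1.0 + t)

def prs(a, b, z=0.0, iters=50, t=1.0):
    """Peaceman-Rachford splitting for min_x 0.5(x-a)^2 + 0.5(x-b)^2."""
    for _ in range(iters):
        x = prox_quad(z, a, t)        # prox step on the first term
        z_half = 2.0 * x - z          # reflection through the first term
        y = prox_quad(z_half, b, t)   # prox step on the second term
        z = 2.0 * y - z_half          # reflection through the second term
    return prox_quad(z, a, t)

print(prs(1.0, 2.0))  # ≈ 1.5, the global minimizer (a+b)/2
```

For the nonconvex setting discussed above, such an iteration is only guaranteed to converge under the additional conditions cited in the text.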
Robots with different numbers of limbs have different advantages. Quadrupeds are known for their agility {{cite:87c4714c277cf28c24efe6bf8a04eee3ac7d6ef3}}, whereas hexapods and myriapods for their stability {{cite:b6a500b0e835286927345cbacbb4c34f121bb296}}, {{cite:0e0de8d1c4d5a1c3d79f7bddd2612590bfb36343}}, and limbless robots for their ability to fit into confined spaces {{cite:8287a8354587efa33878262becaa5bbae8bd40ff}}.
But robots with increasing complexity and numbers of degrees-of-freedom (DoF) present challenges in motion coordination, which, if not addressed, may render them unusable.
Furthermore, the diversity of shape and form makes it challenging to transfer control insights gained from one platform onto another.
We are left with limited intuition and physical understanding of how to coordinate the many DoF in diverse and complex robots to generate effective locomotion.
We test AVOD's performance on the proposal generation and object detection tasks on the three classes of the KITTI Object Detection Benchmark {{cite:9c991246041fd08a2d2ffad088f2a700e0dcae49}}. We follow {{cite:ed477753967ddeb63a508b67a23e58257a022e35}} to split the provided 7481 training frames into a training and a validation set at approximately a {{formula:fd80c5f8-976d-44c8-b243-45b16b1753d6}} ratio. For evaluation, we follow the easy, moderate, hard difficulty classification proposed by KITTI. We evaluate and compare two versions of our implementation: Ours, with a VGG-like feature extractor similar to {{cite:ed477753967ddeb63a508b67a23e58257a022e35}}, and Ours (Feature Pyramid), with the proposed high-resolution feature extractor described in Section REF .
Many searches for new spin-dependent interactions have been motivated by the idea of axions {{cite:8b74a4e481b00c727af9596c8064477824ffc2bb}}, {{cite:4f065bb78d0a98b67af3d6a720fedade9eec9c02}}, {{cite:9e3ebdcdf275062cefd5884387433c5ca7781555}}, {{cite:97a79b9fa24931cb7b24fd56b5fbd418a343b3a7}}, {{cite:7917c79bc9a42463e09f550754c19a3670cb93a0}}, {{cite:9948d838eab1bd94a9f3bd995dfb27796fa90899}}, which can induce a {{formula:b1ae3c1e-f714-414d-a574-6154b8bf8026}} -odd and {{formula:3edb967e-fc49-4281-9d23-cd0b90dd5c7c}} -odd interaction between polarized and unpolarized particles proportional to {{formula:9bce9c5e-7334-451e-84ba-40eaa1b6941f}} , where {{formula:bd2815f5-699f-4179-aa4e-e7dd97b0f733}} is the distance between the particles and {{formula:da8f5083-7675-4edf-89f4-d79ed1863d87}} is the spin of the polarized particle. Several other ideas can generate exotic spin-dependent interactions {{cite:f70ec58f328a60149bf496e1adde683fbfc7996b}}, {{cite:7f7392d8b6d8e24b30f1985c5218727a4151d54d}}, {{cite:382c5ac2e43cbd65b814136cb6834fa8eebf22f1}}, {{cite:c54d78e61ba3ac9a87ede46eb5a964dc9ba3fa02}}, {{cite:327894c52ab92f60c6f2d8dd07b4d02893253723}}, {{cite:79b420d7c52e94e851c7f1bdd9dba3a00ee547ff}}, {{cite:c708734c03c3312b9c800aca83e23d74af925084}}, {{cite:ebc7abf9e080018ad761241c1dfe5fea7204891a}}. However, the idea of searching for new spin-dependent interactions can be considered within a more general theoretical context. Dobrescu and Mocioiu {{cite:f113d0e084ee5138dfa16a3ff9bae9cc591e8838}} recently performed a general classification of interactions between nonrelativistic spin {{formula:a4a3242a-5482-4f2a-bce1-7b2cb2250e95}} fermions assuming only rotational invariance. This analysis emphasized the rich variety of possibilities for new spin-dependent interactions. Of the 16 different terms in the elastic scattering amplitude uncovered in this analysis, 15 involve either one or both of the spins of the fermions.
Identifying the underlying structure of a data matrix and extracting meaningful information is a crucial problem in data analysis. Low-rank matrix approximation is one of the means to achieve this. CUR factorizations and interpolative decompositions (ID) are appealing techniques for low-rank matrix approximation, which approximate a data matrix in terms of a subset of its columns and rows. These types of low-rank matrix factorizations have several advantages over those based on orthonormal bases because they inherit properties such as sparsity, nonnegativity, and interpretability from the original matrix. Various algorithms proposed in the literature seek to find a representative subset of rows and/or columns by exploiting the properties of the singular vectors {{cite:a58a0a53602f6b29b324390b5bee395270e92020}}, {{cite:3916a369d13c832d117769860dcc527016e5494a}} or using a pivoted QR factorization {{cite:a207005443170e5ab209534e5d20ede253c9797e}}. Given a matrix {{formula:7bd75acd-25ec-438d-a374-e237cf9c87e2}} and a target rank {{formula:29f317ea-2438-4efa-a18c-b81d120b72b7}} , a rank-{{formula:1295811b-3d07-4959-83d5-d20c2671a4e4}} CUR factorization approximates {{formula:44d04377-6726-418b-8458-7a799efe04a1}} as
{{formula:64028e49-9fbd-4652-9ce5-b401efe4ccd1}}
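A hedged sketch of one such construction, combining the singular-vector (leverage-score) selection mentioned above with a pseudoinverse-based middle factor; the deterministic top-k selection rule is a simplification for illustration, not any particular published algorithm.

```python
import numpy as np

def cur(A, k):
    """Rank-k CUR sketch: pick the k columns/rows with largest leverage
    scores (squared row norms of the top-k singular-vector blocks), then
    set U = C^+ A R^+ so that A ≈ C U R."""
    U_svd, _, Vt = np.linalg.svd(A, full_matrices=False)
    col_scores = np.sum(Vt[:k, :] ** 2, axis=0)     # column leverage scores
    row_scores = np.sum(U_svd[:, :k] ** 2, axis=1)  # row leverage scores
    J = np.argsort(col_scores)[-k:]                 # column indices
    I = np.argsort(row_scores)[-k:]                 # row indices
    C, R = A[:, J], A[I, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

# For an exactly rank-2 matrix the factorization is exact.
X = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 1.]])
Y = np.array([[1., 2., 0., 1., 3.], [0., 1., 2., 1., 1.]])
A = X @ Y
C, U, R = cur(A, 2)
print(np.linalg.norm(A - C @ U @ R))  # ≈ 0
```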
Online cloud computing platforms have made large scale distributed computing accessible at affordable prices, leading to a surge in usage of distributed computing frameworks like Apache Spark {{cite:f643ccd4d56a76b54cb7eb6c047dcb106e0082c4}}, Hive {{cite:2fec87388ce0aab165b32a0247de84c0863f40dd}} and Presto {{cite:13019597864a324faf802a9ef31d515d4327630f}}. However, in the case of application failures, the user has to navigate through massive amounts of recorded logs to diagnose issues, causing a dip in productivity and an unsatisfactory user experience.
We consider a sketch to be a collection of strokes, wherein each stroke consists of 2-D continuous offsets and 3-D discrete pen-states. This representation is also known as the stroke-based format. The discrete outputs make it difficult to pass gradient updates from the discriminator to the generator for the weight update. {{cite:056f4fa5105e7d09c91e7361cac423adf9ab414e}} proposed a novel policy-gradient-based loss for generating 1-D discrete tokens, whereas in our case we have a combination of 2-D continuous variates and 3-D discrete variates, which makes adapting the policy-gradient loss a non-trivial task. Currently, there are no metrics that quantify the `goodness' of vector sketches. Hence, we propose the `Ske-score', which quantifies the goodness of vector sketches. Our contributions are as follows:
As a fundamental and efficient approach for solving (P),
the classical augmented Lagrangian method (ALM), dating back to {{cite:ee2c26a3849157a8951c2f00cc2caf266ed6ffdb}}, {{cite:e1103ab22ddca8ebe6660c14479a00e3607c3f9d}}, reads as
{{formula:8e54666e-75ae-4ed9-9e56-a53072b6b253}}
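A minimal sketch of the classical ALM update, shown on a hypothetical equality-constrained quadratic rather than the problem (P) itself; the x-update has closed form here because the augmented Lagrangian is quadratic, and the multiplier update is the usual dual ascent step.

```python
import numpy as np

def alm(a, rho=10.0, iters=50):
    """Classical ALM for min 0.5*||x||^2  s.t.  a^T x = 1.
    x-step: minimize 0.5||x||^2 + lam*(a@x - 1) + 0.5*rho*(a@x - 1)^2,
    whose stationarity condition is (I + rho*a a^T) x = (rho - lam) a.
    lam-step: lam <- lam + rho*(a^T x - 1)."""
    n = len(a)
    lam = 0.0
    for _ in range(iters):
        M = np.eye(n) + rho * np.outer(a, a)
        x = np.linalg.solve(M, (rho - lam) * a)
        lam += rho * (a @ x - 1.0)
    return x

a = np.array([1.0, 1.0])
print(alm(a))  # ≈ [0.5, 0.5], the minimum-norm solution of a^T x = 1
```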
The extracted values of {{formula:06216f39-0b0e-46c6-b44f-f51278c89da3}} from the {{formula:624e8205-f846-4661-8e93-d399f98e7207}} distributions as a function of {{formula:6434a85b-3761-4dfd-aab3-ac8990938673}} should have the least contribution from photon transverse momentum and interference effects. However, there are still contributions from the finite-size {{formula:1dd7dd79-2508-474b-abbd-96d9033ffcd0}} wavefunction {{cite:ca8d686f180168053f6bb4f998bdfc2efd31c033}}, with {{formula:3a482252-ba4a-469f-b97c-bd77355788b6}} fm, and the finite angle between the photon polarization and impact parameter illustrated in Fig. REF C. The depolarization can be estimated from the average angle between the photon and the impact parameter as {{formula:36116fcf-1647-4922-9f8a-4f0d6acbfc33}} . The resulting true nuclear radii are {{formula:99b5387e-d9e7-48ac-be31-136fe65d4300}} , yielding {{formula:496f1168-65d0-4f2b-a059-cc3e55f3efbd}} (stat.) {{formula:358e86a4-6136-459c-86d5-0cf66b2b192b}} (syst.) fm for {{formula:fd428ed6-5507-46e3-b7ae-fb07d83907ab}} and {{formula:e385e0a3-7dd5-40e4-809f-0e400b834ea0}} (stat.) {{formula:29358e8d-9344-45a9-ac04-f323879a7294}} (syst.) fm for {{formula:244773e2-f89e-4cc8-9bec-47abf2540d09}} .
The systematic uncertainty is dominated by the difference resulting from the use of different form factors and fit ranges.
This also shows that, unlike {{formula:f44d6a88-1c71-45a4-ab8e-d24892856c3b}} photoproduction off nucleons or light nuclei, the details of the {{formula:53f00cae-9a4f-4d6b-89df-cbc5f530e401}} wave function (size) actually play almost no role in determining the nuclear radii. The ratio of these two nuclear radii is {{formula:8674651d-f892-44d5-8926-0e03bad965c4}} . Furthermore, these radii are systematically larger than the nuclear charge radii obtained from low-energy electron scattering {{cite:536c7af93ec53eaf551a28be00d5986aca876386}}.
It should be noted that the strong-interaction nuclear radii have been measured at other facilities at lower energies {{cite:74358ce584a574c233c6579fb7b6b30b053bf1d6}}, {{cite:23290a81b426618e5d19ae1deb627d5b5f8dad51}}, {{cite:7a67bbff48d5d84544217bb313a55767573a022c}} with diffractive photoproduction of {{formula:f3ab013e-ee91-4d93-a56f-2a50b2d76cba}} in photon-nucleus collisions. A scaling of {{formula:8ea2c330-11e2-41f9-b24e-644cb894e1e3}} was extracted from the experimental data at DESY {{cite:74358ce584a574c233c6579fb7b6b30b053bf1d6}}, {{cite:23290a81b426618e5d19ae1deb627d5b5f8dad51}} with {{formula:7b75e5e3-9696-4f42-8924-34d219d93057}} {{formula:0ec813f0-78b8-4324-9337-a557474cf3a2}} and {{formula:65d25cbb-e29b-4d29-b253-2a58cd78103f}} {{formula:3a0f96c7-fcfa-4588-ab2c-7137005e8b2e}} while another experiment at Cornell {{cite:7a67bbff48d5d84544217bb313a55767573a022c}} obtained {{formula:3c8d5f02-6890-4b59-a3c9-10dc68a55026}} {{formula:0891f239-5130-40e6-8c5e-b351f99a0651}} . The neutron skin {{cite:fa74bb7a653e7e49816ee061a8564eff53885507}}, a root-mean-square difference between the neutral and nuclear charge radii, has been measured in even-even stable nuclei and shown to be around 0.3 {{formula:1a4a3a7a-2f0b-4bb2-9e59-fb51ee512227}} for {{formula:85391c37-3ac5-4d42-bab4-e9953eb841ef}} {{cite:611d1f3371a3a6b2aadb646bd9f66d5229316039}}. When the strong-interaction radius is taken as a weighted average of neutral and nuclear charge radii, our measurements of the same quantity yield: {{formula:6fdb1146-5563-445a-83c4-673575e678ba}} fm for {{formula:638cae8c-125e-4a0c-82f4-8f918d3bf551}} and {{formula:c7b5be09-90db-4714-bfa2-e8c180ce1e4a}} fm for {{formula:e70bc034-717b-449c-839a-0dd78b511f56}} (using previous measurements of {{formula:0e85c5f0-0f8b-4c46-b251-68134e00d512}} {{cite:c8dc906154de23eb604e41d3625cb2f80f16f1d7}}, {{cite:536c7af93ec53eaf551a28be00d5986aca876386}}). 
Our measurement of {{formula:0da09e20-2b47-4a75-a7e8-07e0286238fa}} for {{formula:a0519b34-dcb8-418b-a300-250d609fd088}} seems to follow the trend of world measurements at low energies {{cite:0aec78f7fc55ace0dfa6390111ea45b10ac2e25d}} while that of {{formula:d3b00b6a-534d-453f-b86d-de4d46c64a49}} is significantly non-zero and indicates a value larger than that expected for neutron skins of similar nuclei [in terms of the fraction of neutron excess (N-Z)/A] {{cite:0aec78f7fc55ace0dfa6390111ea45b10ac2e25d}}.
A version of this result is shown in {{cite:1d97f253d8bd693c7ee0436dea4670a380efc8bc}}
for surfaces over {{formula:9c3b5afa-7a11-4a71-b91d-45380a3b8bd0}} . We choose a conductor subscheme {{formula:e52c964f-656c-4c3d-b080-cc63118b4a9d}} and consider the exact sequence
of Nisnevich sheaves on {{formula:f5e103bc-26cd-4113-b384-9c1670584922}} :
{{formula:b5ed14f4-d41f-4ec1-9391-bfc1c05acd58}}
Since {{formula:40d4ae90-5486-4f88-b16f-1fc3c96b08d9}} for {{formula:61e27d29-4618-45de-b580-de04fc70f890}} for a semilocal scheme {{formula:5377d155-e094-4791-a6e7-5bb2b0ea160a}} ,
the Leray spectral sequence tells us that
{{formula:4386a823-14ac-4936-96df-7cae3d715878}} for all
{{formula:cad400e7-962b-4e54-8d5b-a9cb5fb80f75}} . Since the kernel and cokernel of the map
{{formula:ebbec841-a98c-4de0-8f7c-883d5cc78f85}} are supported on {{formula:e5840552-5f89-4f53-b47e-e6989a1c78de}} , it follows from
the above sheaf exact sequence that there is an exact cohomology
sequence
{{formula:9ab1457c-55ee-4e57-acb5-aab0b92d1c3e}}
We can replace the last two terms by their degree zero subgroups
without disturbing the exactness.
Since
{{formula:7cf4c970-6520-4009-affb-20a0764c5f4e}}
is exact by the Thomason-Trobaugh spectral sequence
{{cite:f6fb3ddb3927baed8383f2a7d50ac4066d4b3fa4}} and since {{formula:88dc2cbf-a299-4528-911f-ef4647ef3d80}} , the left-end term in
(REF ) is the same as the quotient {{formula:e4974a3c-79b6-4525-9f72-013aea6b3921}} .
Since the edge map of the spectral sequence
induces an isomorphism {{formula:0d7efaee-b923-4e5f-b202-ec1b1b9d577d}} for
any Noetherian scheme {{formula:a79cff80-8c0a-4dbe-9bdf-9db4fcf18560}} of Krull dimension at most one,
it remains to show that the canonical map
{{formula:258126fa-c063-46d5-a2c0-6ffd67ad2267}}
is an isomorphism if we replace {{formula:ca3a911a-2d19-42e5-8bd9-d59affa41d64}} by some of its infinitesimal thickenings.
To prove this, we compare the {{formula:b1c3bc63-9ecf-413d-9c8a-5cae1bc5291d}} -theory exact sequences for the
pairs {{formula:fa1f6432-3f0f-461b-b2a6-81baafb2e0ec}} and {{formula:30829d6f-f4a0-485d-b428-341f56215fba}} and use the isomorphism
{{formula:be59de9e-6e32-4a22-8c59-4817ccdec69a}}
to get an exact sequence of sheaves
{{formula:83865568-adc9-4d17-93bf-a0c2af24514b}}
where {{formula:3910464a-3579-4aa5-a7a5-98cdd3fc5db1}} is the sheaf of double relative {{formula:f3bf969f-3ae3-4cfc-b5cd-73587f4ec101}} -theory.
Note that the middle arrow is surjective by
{{cite:dafe7d282c319f14d88c68ef93d3091816d9f7d4}} because this surjectivity clearly
holds for the Milnor {{formula:345c9966-dac9-4931-a620-ab5652dff3eb}} -theory sheaf.
Now, one knows that {{formula:c868e0a8-fa97-4200-8e37-66a5530142a2}} by {{cite:dd94182a2c23c6360a13903cfcb5cd70fe4dfd6a}}.
We thus get an exact sequence
{{formula:d18d72f5-333b-4c80-a629-d4731ffd5fed}}
Comparing this exact sequence for {{formula:83094586-4c0a-4d32-9109-89afc635246d}} and {{formula:6e2fe0ef-43b5-4757-b909-0147d597aa8d}} (where {{formula:8163bd36-9ff5-4a74-960a-3b7c5e68822c}} is
defined by {{formula:55144706-eccf-4622-a157-8c1473a5f713}} ), we get the desired isomorphism if we
choose our conductor subscheme to be {{formula:697e0216-7bb9-4fe0-bd83-21f2c86eb2bc}} .
Numerical solution to the minimization problem (REF ) poses a challenging problem due to
the presence of a highly nonlinear and non-differentiable term. To
get around this difficulty, a large effort has been devoted to constructing
effective schemes for the minimization problem (REF ) and
the PDE problems (REF ) in the past two decades. For
instance, the artificial time marching scheme in {{cite:70eb7899286ecbe57e932e036078fd86bee98756}}, {{cite:49aec8aa4966778c2dfc533cb1944a152695296b}},
the lagged diffusivity fixed-point method in {{cite:f61e13f7af9064cfb5888c0f859a49ea580bd67b}}, {{cite:ea54b97f45be79945d7331ef94062e30be7befb7}},
the Chan-Golub-Mulet method in {{cite:08646383bc8b2209d11f81a5a950fe9c0bf33071}}, the Bregman iteration
in {{cite:0c183152d813111d8c2ddaf228153f9f9e600d07}}, the augmented Lagrangian technique in {{cite:43f1c4cb0fd1d595835f7e0fc0e778a4388c1764}}
and some others in {{cite:4a4d7bb3f26ce3029bdc1171a26a670064800b2b}}, {{cite:0151ddbefd619c2acb42149a689a5a6a606841d3}}, {{cite:0539bcdb6c30aeef2fa22f36d6d99077be42de4e}}, {{cite:d8f5c7c5c0494e328f5284891bda1b6872f5c8ee}}, {{cite:0237931508ee4982674651139f506e02048f2396}}, {{cite:1324e753b8fbae3056d534a7ce5f732f2d617205}}.
However, most of these numerical algorithms are of gradient-descent type
and hence only first-order. The main motivation of this work
is therefore to develop an effective second-order algorithm for the model (REF ).
The proof of Theorem REF is divided into three main steps contained respectively in sections , and .
As a consequence of Theorem REF and the Urysohn property of {{formula:cdb9e659-831f-44fb-9b84-4964e0454413}} -convergence {{cite:2f2fc11dd7f989339382110df946525c9581ea55}} we deduce the following corollary.
After the work of Nambu {{cite:8d7c29bde4be6702d439d81c687ffb98ae21a83a}}, more work has been done on the classical solutions of Weinberg-Salam theory. A well-known case is the sphaleron (a sphaleron is a static, particle-like, spatially localized, and unstable solution of the Weinberg-Salam field equations). Klinkhamer and Manton {{cite:932ff151a64ec31157a01cb5ee10d123c39135e2}} first coined the term `sphaleron' and showed that it possesses baryon number {{formula:09540db1-842b-4cdb-bc99-0c531e5275f3}} . They also noticed that there is an electric current in the U(1) field. On the other hand, the works of Hindmarsh and James {{cite:8b26461bd64fd26d45524376c7610be3d1d6596e}}, and Radu and Volkov {{cite:00a38abc4e0a202697e5edb832a5dace40da6ac8}}, found that within the sphaleron there is a monopole-antimonopole pair and a loop of electromagnetic current. There are also other works {{cite:a1d230291245794a99fe747cd67420663f108ee6}}, {{cite:04473339b7e87dd9bcba98cd26bc7f4e728127ff}} on axially symmetric sphaleron systems that use a similar but more general ansatz and show systems of sphalerons lying along the symmetry axis. Though possessing axial symmetry, these results do not actually reveal the internal structure of the sphaleron.
As shown in Fig. REF , our approach achieves significantly better texture results even when the texture is extraordinarily complex, such as the floral dress, and the geometry we estimate is filled with fine details rather than being a sketchy and bloated object. Even when per-vertex texture mapping is applied to the geometry from Multi-PIFu {{cite:7c141b8aaae28b449465770d1f187e036043ac03}}, image blurs occur much more frequently than in our results.
Deterministic Rule-Based Methods: The general principle for such methods is looking for keywords within a conversation that can be used to instruct an agent to provide predefined responses. For any such conversational agent, a 'script' can be defined from which response clauses must be generated according to expected keywords in the particular context. It is important to consider that even though the agent has a script of meaningful and intelligent responses, it has a severely limited understanding of the language itself.
Scripts are a set of rules which store the knowledge base of a context and act on keywords from inputs. These rules can be of the form "IF some condition THEN some action." These actions act on the input phrase by transforming, adding, or deleting symbols.
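The IF-THEN scripting idea above can be illustrated with a toy example; all rules, patterns, and responses below are made up for demonstration and are not from any particular deployed agent.

```python
import re

# A toy 'script': ordered (keyword pattern, response template) rules.
RULES = [
    (r"\bmother\b", "Tell me more about your family."),
    (r"\bI am (.+)", "Why do you say you are {0}?"),   # symbol transformation
    (r"\bhello\b", "Hello! What would you like to talk about?"),
]
FALLBACK = "Please go on."

def respond(utterance):
    """Return the response of the first matching rule, substituting any
    captured input symbols into the template; otherwise a canned fallback."""
    for pattern, template in RULES:
        m = re.search(pattern, utterance, flags=re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return FALLBACK

print(respond("I am tired"))    # Why do you say you are tired?
print(respond("nice weather"))  # Please go on.
```

The transformation rule shows why such agents appear intelligent while having a severely limited understanding of language: it merely rearranges the user's own symbols.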
Corpus Based Statistical Methods: In contrast with a script that contains rules with defined actions for keywords, a corpus is a collection of natural conversations. Usually, these systems are data-intensive and need to mine through huge datasets {{cite:4e3e8c8971cf245dc8dacb06af52be3ba4b1c8b8}}, {{cite:ef8389fead02ef2af406ec3251054015cbacfaf3}} for training to generate good responses. While the rest of the paper also focuses on methods that use a corpus of datasets, here we look at methods that assume the Markov property on conversations.
To model decision making in sequential environments with stochasticity, Markov decision processes (MDPs) {{cite:9560a7ce499135c5432cbe7b181d68028f30e53f}} are used. The environment changes state in response to actions taken by an agent. The current state is used to determine the immediate reward obtained by the agent and the conditional probabilities for future state transitions. An agent's goal is to choose actions that maximise a long-term measure of total reward.
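These MDP ingredients (states, actions, rewards, discounted return) can be made concrete with value iteration on an invented 3-state chain; the environment below is a hypothetical toy, not one from the paper.

```python
# Value iteration on a toy 3-state chain MDP: moving 'right' from state 1
# into the terminal state 2 yields reward 1; everything else yields 0.
GAMMA = 0.9
STATES, ACTIONS = [0, 1, 2], ["stay", "right"]

def step(s, a):
    """Deterministic transition and reward; state 2 is absorbing."""
    if s == 2:
        return 2, 0.0
    s_next = min(s + 1, 2) if a == "right" else s
    return s_next, 1.0 if (s == 1 and a == "right") else 0.0

V = {s: 0.0 for s in STATES}
for _ in range(100):  # iterate the Bellman optimality operator
    V = {s: max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in ACTIONS)
         for s in STATES}

print(V[0], V[1])  # ≈ 0.9, 1.0 — the reward discounted by distance to it
```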
To prove part ii) we follow the same steps from part i). We obtain {{formula:fdec3afb-5d33-47b4-943c-9e20f1e5bb17}} , {{formula:6b2f50d4-33b4-42ce-a70b-3840c780b6bd}} by following the method in the proof of Theorem REF , and using Theorem 3 in {{cite:e65c2ad7cda00001b626e940b810b13f8a3d31df}}.
This has been proved for the ideal Bose gas ({{formula:aee42b45-c6ac-41b6-a336-a06c324ce25f}} ). The representation of the one-particle reduced density kernel in Theorem REF for {{formula:d984d842-fb5a-4204-8ba7-243198016e23}} allows us to perform the thermodynamic limit (cf. {{cite:fcec3e9966bc2f3f561380d3b97b6b7be34a156a}}),
{{formula:f9607b6f-9b92-4c32-b6ce-6eafce536718}}
As in the original paper, “XEntropy” stands for cross-entropy using fixed word vectors, and “Optimized” stands for optimized word vectors. “Average WordVec” means that we have taken the average of the vectors representing the words in the metadata and used that as the target. We also trained a variant of DenseCap {{cite:1efe5ccb28f8210f668c60cb3489025a2bc9e98c}}, called “DenseCap-objects”, to identify objects rather than phrases of scenes. This was done by removing the recurrent network and using multi-hot encodings as the final layer after localization. With “DenseCap-objects”, we do full backpropagation.
{{table:bafbb613-1805-40aa-a448-947fe7680d52}}{{table:831b3229-8e86-4a96-8157-b32dccc60cde}}
Trying to find such relationships, one can observe that symmetry drivers allow one to construct differential operators which map the characteristics of conservation laws into symmetries, and integrals (including formal ones) generate differential operators which, roughly speaking, perform transformations in the inverse direction. Extending the terms defined in {{cite:c5fcfd7f8be0acc27d3853fb32e9d14a6809bb87}} for evolution systems to a wider context, it is natural to call such operators Noether and inverse Noether operators, respectively. In Section REF we consider the Noether operators generated by integrals and symmetry drivers, but point out that these Noether operators do not explain the simultaneous existence of integrals and symmetry drivers. Despite this, Noether operators seem to be interesting in themselves and may be useful for other purposes. Motivations for this are given in Section and, partially, in Sections REF and REF too.
Recently, Jimenez et al. {{cite:e90f7dc264078e543eae65b31bab48b9ab2ba6f0}} introduced a new class of modified gravity named symmetric teleparallel gravity or {{formula:97c777c5-2513-4c84-9bb6-36660969bd9c}} gravity, where {{formula:e150819e-407f-483d-88f8-083f649f1eff}} is the non-metricity scalar. In this theory, both torsion and curvature vanish, and hence gravity depends only on the non-metricity. The affine connection, rather than the physical manifold, plays the significant role in symmetric teleparallel gravity {{cite:e90f7dc264078e543eae65b31bab48b9ab2ba6f0}}. It is also noted that {{formula:7b9d1efa-05df-46d3-bf75-30d91b626ec8}} gravity leads to second-order field equations, while {{formula:3be82c30-72bb-40af-937f-0028e608a342}} gravity has fourth-order field equations {{cite:c7a10c056191f3d41d98f31370aefb4558a8121b}}. Hence, {{formula:2c0df609-278c-4752-8564-a9b96597f459}} gravity gives an alternative geometric portrayal of gravity, which is nevertheless equivalent to GR. Readers may consult Refs. {{cite:055565179f01087cb42216f4f391862b706245a1}}, {{cite:e5b01f6cfe2eed415705e399ca730f0170be0bae}}, {{cite:9b5cd087c4a5734eb8d740ae35d167abdc53cdbc}} on cosmological and {{cite:4a13b644a9761cd1646830abd371311a5997310f}}, {{cite:c0e6ab886ff30b81f0e0afe7fe6e9f69e1c98fe8}}, {{cite:5907c590ac74398aec659afaad51d00666650329}}, {{cite:a2c9acf4ae03eff9b54d559aaef4aef5d715d7c4}}, {{cite:dfd90a537d7835238c57e70cdbd7caef666ab063}} on astrophysical objects, where symmetric teleparallel gravity has been studied in depth.
We use model, ground-truth and loss function designs very similar to those proposed in CornerNet {{cite:5de99efafa63373bd8e8a4d18ed1ce203e283c1e}}.
These components, and especially our modifications for TetraPackNet, targeting tetragon-based object detection, are explained in the following sections.
This section evaluates our proposed fusion model on the individual test dataset for each weather condition. Similar to the state-of-the-art method {{cite:67c18270b2c70ea044ae095d59269247357251e5}}, we evaluated the proposed GLA framework on the PassengerCar class because it is the most prevalent class in the DENSE dataset.
In this section, we describe the proposed approach to learn discriminative features for multi-label classification. The detailed architecture of the network is described in Figure REF . It is built upon ResNet-101 {{cite:7cd3f3766bc11ad0d6f2a975d3bb18f0e23352aa}} as a representation learning backbone.
The proposed loss operates on features from the penultimate layer of the network and discriminates them using the class-specific attention maps. It consists of two main steps: first breaking the many-to-many relation between features and classes into one-to-one. This is achieved by an object parser that converts the multi-label image to multiple object regions with only a single dominant label each. In the subsequent stage, a discriminative classifier is applied on the predicted features obtained from the object parser.
where {{formula:8dbc6cc4-7918-4ff2-adc4-ca401be7dec2}} for {{formula:071b8197-93f7-45c7-b678-d7bb7c92ed5a}} and {{formula:5095b28f-8825-4a96-a5b5-ab4faf5d1782}} for {{formula:5f7fcd4d-debd-463d-890a-2fe15db52ee7}} . In the case of PBH baryogenesis {{formula:a320339c-cf6c-4094-880e-8e8837f200cc}} . Now using Eq. REF , Eq. REF and Eq. REF , the frequency {{formula:0fea6465-ae55-49ca-8be4-ad371bc95609}} can be calculated as {{formula:c8e6859a-9254-477c-9d25-4cbb8ee2e7c6}} Hz, for {{formula:f720195f-fb43-4ccf-9666-e943ad68721c}} . Therefore, to observe this break, one needs GW detectors at MHz frequencies. In addition, let us mention that beyond the turning point frequency, the fall of the spectrum as {{formula:7a29fc84-0b41-4e00-83b5-727e9f3d623d}} is true only for the fundamental or first few modes {{cite:a9f360b7e4af04e33d22db952887c1144eb94103}}, {{cite:86c8826cce6e6d9da9af53c398b1dd78ce5b3617}}, {{cite:c8d1455418128af152cc11906caa2a669aa11a64}}. Once a larger number of modes is included, the spectrum falls as {{formula:7960ea97-e6be-46b2-b8fe-ad87d7f4d84e}} (see the Appendix for a derivation with a cartoon diagram), a feature that can be observed in the bottom-left panel of Fig. REF . There could be another gravitational wave background via graviton emission by the PBHs {{cite:3074f26c715960f6cb40bc54069e36ddf1f10dca}}, {{cite:d498b3683fd1233ce540bc913bb17bc241b1b1e7}}, {{cite:b942771be3a288d4f6631d7a3695416422f63c3c}}. However, for light PBHs as in the present scenario, the GW background via graviton emission typically peaks at frequencies above {{formula:aaeae7f9-beec-49aa-81ba-87a00ad305ae}} Hz. An analytical relation between energy density and frequency is given by {{cite:3074f26c715960f6cb40bc54069e36ddf1f10dca}}
{{formula:66debdef-1ed5-4953-a13a-96a6f5ef66e8}}
Recent works on generative modeling for 3D objects or scenes {{cite:79cb65980d040ec8c34a0a2935a2dc79b63e7bd1}}, {{cite:d17805c97290983b9cfb4eb499a8bad13ff9abc9}}, {{cite:1f29d013867856baa6d858da80a2f880b27b2c60}} employ a Generative Adversarial Network (GAN) whose generator explicitly encodes radiance fields: a parametric function that takes as input the coordinates of a point in 3D space and the camera pose, and outputs a density scalar and an RGB value for that 3D point. Images can be rendered from the radiance field generated by the model by passing the queried 3D points through the volume rendering equation to project onto any 2D camera view. While compelling on small or simple 3D datasets (e.g. single objects or a small number of indoor scenes), GANs suffer from training pathologies including mode collapse {{cite:39c26ade89a8fed382ccc4792eae0715f791fa76}}, {{cite:9dc3e48fcb561afa685ef340fbef26b868893bd6}} and are difficult to train on data for which a canonical coordinate system does not exist, as is the case for 3D scenes {{cite:74621565759cdd7464273caef77f6be9753f1ef5}}. In addition, one key difference between modeling distributions of 3D objects vs. scenes is that when modeling objects it is often assumed that camera poses are sampled from a distribution that is shared across objects (i.e. typically over {{formula:2085c643-2ef2-41ae-a394-4c7e80878a4d}} ), which is not true for scenes. This is because the distribution of valid camera poses depends on each particular scene independently (based on the structure and location of walls and other objects). In addition, for scenes this distribution can encompass all poses over the {{formula:ee8aadb5-044e-4ecd-bfb2-f9ea2e5cc1e2}} group. This fact becomes clearer when we think about camera poses as a trajectory through the scene (cf. Fig. REF (b)).
| i | 245846f269f7afbc134c283e62740480 |
A promising path that offers almost arbitrary flexibility is the use of optical dipole potentials (see {{cite:543782e28ab0fefef3a3d8a0cebec7e10b0c3e76}}) due to intense (laser) light. Spatial control of the incident laser beam for BECs using digital micro-mirror devices (DMDs) was shown in {{cite:37eb6aff0939ec1eae785dd4505dde94d54b2332}} and {{cite:d3e25d8573e8cd181911f84537940677abae9e09}}. Combined with a suitable optical imaging system, the two-dimensional array of mirrors of the DMD allows precise control of the intensity along the condensate's elongated direction. However, obtaining the required setting of the DMD to achieve a desired effective potential is highly non-trivial.
| i | 62949b722a7136b441661c086bc536ac |
In practical applications, we should not only consider the effectiveness of LP, but also inference efficiency.
Many LP applications generally require fast retrieval of the top scoring neighbors for low-latency services {{cite:29573da1fcbc38463dce8f5115dbf3d1b9b055f3}}, {{cite:7e0631c8d2ec5a126568fae804732ed6fc5ab9d8}}, {{cite:0073c25ceffdfdaae95223a9c5043b0c9f87f43b}}.
For a Dot Product decoder, this retrieval can be approximated efficiently in sublinear time {{cite:30c660a314e34da4ee3d51de92caa982b8bcfd58}}.
However, to the best of our knowledge, no such sublinear algorithms exist for the top-scoring-neighbor retrieval of HadamardMLP decoders.
This means that for every source node, we have to iterate over all the nodes in the graph to compute the scores so as to find the top scoring neighbors for HadamardMLP, which is of linear complexity and cannot scale to large graphs.
| i | 134289cdfa9dcf0818abfb7947926771 |
One disadvantage of any simulation-based design approach is that the objective function to optimise over is stochastic. Even though the classification approach reduces the stochastic noise compared to ABC, the optimisation algorithm needs to take the noise into account. Our focus in this paper is not on optimisation, so we use a simple coordinate exchange algorithm on a discretised design space. However, our design algorithm may get stuck at suboptimal solutions if the noise is too large. We try to alleviate that problem by using parallel runs with randomly selected initial designs and by reconsidering the last few designs visited in each run, where the noise is reduced at these designs by evaluating the objective function several times. This algorithm leads to plausible optimal designs in our examples. For all our examples, the efficiencies of the optimal designs follow a reasonable trajectory as the design sizes are increased. Furthermore, the differences between the design approaches are consistent across the design sizes. For high-dimensional designs with a continuous design space and noisy objective functions, the approximate coordinate exchange algorithm {{cite:bd416aec6da36aeed7c034fcf71925cbb85721ee}} is a theoretically sound and efficient alternative. {{cite:f7d4573031a609dba0fe966197e226bf95a33a1f}} present an `induced natural selection heuristic' algorithm that can cope with moderate to high dimensions and noisy objective functions. Other possible optimisation algorithms suited for noisy objective functions in small to moderate dimensions include `simultaneous perturbation, stochastic approximation' {{cite:0268f346799e86e5a66b6df2c22984a5a6902f8c}} and the rather robust Nelder-Mead algorithm {{cite:434c7403cf9fa634effacbc658d1ae2311de87ec}}.
| d | bc7b4114e5365b436420e0ce6b8a9486 |
To validate the effectiveness of our proposed framework for representation learning, we visualize the generated speaker embeddings with t-SNE {{cite:5d1d54d40289316e8758d1614ab17b7cd0ee41b6}} in Fig. REF . We observe that each speaker forms a well-differentiated cluster for each emotion. These results suggest that the proposed framework can generate effective speaker embeddings, which is crucial for expressive voice conversion.
| d | 79804df70240f0c053c8c592fa46c9ee |
Model selection procedures for LSB processes were discussed using NIC,
an information criterion similar to AIC. While model selection for LS
processes has typically been done through different information
criteria, it would be an interesting problem to develop other types of
model selection methods such as methods based on cross
validation {{cite:cc51d32118161bec7edd6c802287b865eb5cc761}} or Bayesian
methods {{cite:475d8a928cdcd48d5afde6e6436b78b8550dbdda}}, {{cite:44d4ceeee59e199969e855d6a7cbe7ad6da6a512}}.
Related to model selection, we demonstrated in
Section that the choice of block length {{formula:a6494cd8-78a4-40eb-9266-1e7495336897}} and
step size {{formula:a46159c0-e5f1-4142-b875-fc0fcf22159e}} is nontrivial for the block Whittle likelihood
estimator. We provided a data adaptive method of choosing these
parameters by minimizing the mean RMSE for these estimators. This
requires a simulation study to be run in practice. However,
to the best of our knowledge, a theoretical solution to this problem
is yet to be discovered and could be a direction of future research.
| d | 314292e2f6c075f840be2a4ed89f3ffe |
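As a toy illustration of such a data-adaptive choice (a deliberately simplified stand-in, not the authors' block Whittle setting), the sketch below picks the block length b and step size s minimising a simulated mean RMSE for local-mean estimation of a time-varying process: short blocks are noisy, long blocks are biased, and the grid search finds the compromise. All numbers are made up.

```python
import math
import random

random.seed(1)
T = 512
# Toy locally stationary signal: slowly varying mean plus noise.
mu = [math.sin(2 * math.pi * t / T) for t in range(T)]

def block_estimates(x, b, s):
    """Local means over sliding blocks of length b with step s,
    paired with the centre index of each block."""
    return [(sum(x[i:i + b]) / b, i + b // 2)
            for i in range(0, len(x) - b + 1, s)]

def mean_rmse(b, s, reps=30):
    """Simulated RMSE of the blocked local-mean estimator."""
    sq, n = 0.0, 0
    for _ in range(reps):
        x = [m + random.gauss(0, 0.5) for m in mu]
        for est, c in block_estimates(x, b, s):
            sq += (est - mu[c]) ** 2
            n += 1
    return math.sqrt(sq / n)

# Data-adaptive choice: minimise simulated RMSE over a small grid.
grid = [(b, s) for b in (8, 32, 128) for s in (4, 16)]
b_opt, s_opt = min(grid, key=lambda bs: mean_rmse(*bs))
```

The bias-variance trade-off that makes the choice nontrivial in the Whittle setting shows up here too: the simulation study is what turns it into a concrete selection rule.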
{{cite:e95f717d3a801bed379b561bb70ecd9fb8a3ef27}} 2020
{{table:8ec01074-4aa9-4ff5-952f-3d1012b29326}} | m | 757c9d509b615ad0126aabf7c269af5f |
This paper focuses on these approaches by considering the popular StarCraft Multi-Agent Challenge (SMAC) {{cite:0cf04ec74eb3fb029da295f640f1c14abf5e23a0}} as a testbed. We do this by first adapting and then improving the DRL algorithms of the Deep Quality-Value (DQV) family of techniques {{cite:b2ffae152a98884ed7ed1515d086212081af539b}}. These algorithms are characterised by jointly learning an approximation of the state-value function and the state-action value function. They have proven to significantly outperform popular algorithms such as DQN {{cite:86d58ec9b3a3b817684d2bac4d43047bce4bce4c}} and DDQN {{cite:a2d0bfe9fcbb85ec9208e882078b14afd9f26a9c}} in SARL. Specifically, this paper has three main contributions:
| i | 803cd249de8d5dc5d538626dda3a2832 |
This result aims towards a correspondence between orbits of linear forms in Hom{{formula:545b47e3-288f-4ff2-9446-f049537aae6e}} and primitive ideals in {{formula:03770808-e22f-40d4-881b-19f648435372}} , in line with the classical result of Dixmier {{cite:f2bd65a192cbfae7052f65a1eb614316af30db9f}}. In a subsequent paper {{cite:ec30deef7fb2c76b953a139bf31a6bba5ed13244}}, Theorem REF will become a key step in classifying primitive ideals in {{formula:5bed03f7-5d10-4b91-b371-fcb87dd8b6c6}} for {{formula:839937b6-24aa-4fe5-9035-f7aed6410e78}} nilpotent.
| r | a24d9a0d43c89f3f1fec4b7909ebf32f |
Photonic crystals (PhCs) are employed to control the light–matter interaction using the photonic bandgap (PBG) effect {{cite:4e4c98a1a3a9b9408b66353cbed424d1ceaf0be2}}, {{cite:f202e6ff9d7baf1820d8b2ab3b937e0786cf2244}}, {{cite:e7bba567a1d4094924527545ce6ab53340b4d803}}. In particular, two-dimensional (2D) PhC slab structures have been considered a useful platform for photonic integrated circuits owing to the three-dimensional (3D) perfect guiding {{cite:ea6d632d1047bbc661cd84b073d3ea22faf0ed52}}, {{cite:445e78d1b5bf742f3ee68960d7008865d54dbaca}}, easy fabrication {{cite:16ca8b6121a7d43c5b3c05fb2cd957e700f97f9a}}, {{cite:bee9454311479b9533dd91bca5e56c0a3a1c1558}}, and scalability of the structure {{cite:1cc25f681addfbf14aabba3f9d36686fb3d96151}}. However, most of the studies conducted on the topic have been related to the transverse electric (TE) mode because of the existence of large PBG for TE polarization and weak coupling between TE and transverse magnetic (TM) polarizations in a thin slab {{cite:a4ccb715b9d06a30a74234c896fd17265cd7b234}}.
Since Notomi’s report, a nanobeam (NB) structure has been substituted for a 2D PhC slab structure owing not only to the ultra-high quality factor (Q-factor) but also to the smallest possible dielectric cavity {{cite:7d230bd01787f34042bc1b30f97a00910a6b96ee}}, {{cite:96b70e75149996c62908d63b0fb170bef91bd2a4}}, {{cite:00a96ff925ff099d0d416a5ee48542bdcf6968f8}}. There are several advantages of the NB cavity, for example, the simple structure for a high-Q-factor cavity {{cite:96b70e75149996c62908d63b0fb170bef91bd2a4}}, {{cite:00a96ff925ff099d0d416a5ee48542bdcf6968f8}}, low laser threshold {{cite:a8eeb7db1112ca75439322531fdefaad67337596}}, {{cite:4de355ad8b265ff79e9248ddee172cba492c8aca}}, high-density integration {{cite:f95ce7f2c853172bd425a99a39f341b5cf2439c9}}, {{cite:ac669fba35a5017c86dbd48e9a33946683b1d4b9}}, ultra-low power optical switching {{cite:c02e9683958ca2671966829af84d90a90eff156b}}, {{cite:4b15373c36b7a196d0dad300d3bcd0ad6ceacaa6}}, and easy integration with the silicon waveguide geometry {{cite:2c64ae5c2dd5d1e0eb40a0678f7f28234feb22ce}}. Hence, various sensor applications of NB cavities have been demonstrated, such as refractive index sensing {{cite:c68f63bc4a2c8da33b8b91af3401effec1b5fa82}}, {{cite:95407fe18fae1fb8b6bd8e5fbab72216ef376c2f}}, {{cite:5aaef9da8761f16e24168d72c0a55d62dbb1d3cf}}, nanoprobes for biosensing {{cite:a0ca5a03c497fb14851214d50e5735d5eba4c464}}, optomechanical sensing {{cite:23b6badd8f5ce10bc3ccff8cd6e167058723dda3}}, and magnetic field sensing {{cite:6fa18a4f8f56b75b75fb4fba6574c540871603f8}}. It was reported that thick NB structures can have high-Q-factor resonant modes with both TE and TM polarizations {{cite:f1b058a1a06a0343a82612df63c599b75b799aeb}}. In our previous report, we proposed that a thick NB cavity with a horizontal air gap can be a good candidate for ultra-sensitive refractive index sensing {{cite:5aaef9da8761f16e24168d72c0a55d62dbb1d3cf}}.
However, to the best of our knowledge, there has been no experimental demonstration of TM-mode lasers in NB slab structures, because the compressive strain at the quantum wells (QWs) discourages coupling to the TM mode {{cite:ce80f3bd3138e01c348c0f54d17c99355864ed3f}}. There are properties peculiar to TM polarization, such as surface plasmon excitation and strong confinement in a horizontal air gap. In this study, we propose an NB cavity with a thick slab that consists of a 1D array of air holes with quadratic size modification. The numerical results show that an NB structure with a thick slab has a large and complete PBG. In addition, we experimentally demonstrate, for the first time, single-TM-mode laser operation in an optimized NB cavity with an InGaAsP multiple-QW layer, which was lightly etched for sensing applications. The lasing mode was confirmed by numerical simulation based on a scanning electron microscopy (SEM) image of the sample. We believe that the TM-mode NB laser is a good candidate for a compact on-chip TM-polarization light source for surface plasmon excitation and has various sensor applications.
| i | 2eb2e613f1a65a7001d804c88deccc91 |
It is also noteworthy that the storage cost for Tucker decomposition in the proposed procedure grows exponentially with the order {{formula:79bd95f6-4ce1-4e49-9063-e63bc31e02a4}} . Thus, if the target tensor has a large order, it is more desirable to consider other low-rank approximation methods than Tucker, such as the CP decomposition {{cite:7af39a099ac341ebc18a43d196b8b6f8623629ef}}, {{cite:12b9f242c85afb71c421ea31e26b64cd45a2e111}}, Hierarchical Tucker (HT) decomposition {{cite:650e58b840d6c207b793e4b33d6be6775834bf56}}, {{cite:bfc6a6e12de97069847e2be76d6f2a850151b044}}, {{cite:c62f92ab78de132f88a8a7f3d87099793811fe72}}, and Tensor Train (TT) decomposition {{cite:81229920f6c0b70f94246736154972e5a47a4ae5}}, {{cite:5f5d4a83911ed6052974fedfeef21414eed83826}}, etc. The ISLET framework can be adapted to these structures as long as there are two key components: there exists a sketching approach for dimension reduction and a computational inversion step for embedding the low-dimensional estimate back to the high-dimensional space (also see Section ). Whether these components hold for the previously described methods remains an interesting open question.
| d | f015f352a71096e94bf6f85c6872661a |
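The storage comparison is easy to make concrete. A minimal sketch (parameter counts only, ignoring any structure-specific overhead): Tucker keeps a core with prod(ranks) entries, which is exponential in the order, while a tensor train keeps one third-order core per mode, linear in the order.

```python
from math import prod

def tucker_storage(dims, ranks):
    """Core (product of ranks) plus one factor matrix per mode."""
    return prod(ranks) + sum(n * r for n, r in zip(dims, ranks))

def tt_storage(dims, tt_ranks):
    """One r_{k-1} x n_k x r_k core per mode; tt_ranks has length
    order+1 with boundary ranks equal to 1."""
    return sum(r0 * n * r1 for r0, n, r1 in zip(tt_ranks, dims, tt_ranks[1:]))

dims = [10] * 8                              # an order-8 tensor, mode size 10
tucker = tucker_storage(dims, [3] * 8)       # 3**8 core dominates: 6801
tt = tt_storage(dims, [1] + [3] * 7 + [1])   # linear in the order: 600
```

With all ranks equal to 3, the order-8 Tucker core alone already holds 6561 parameters, an order of magnitude more than the entire TT representation, which is exactly the motivation for switching formats at high order.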
Nematic order has been widely observed in two-dimensional (2D) realizations of active matter {{cite:340b14d164338869917b34ca532b04561813404c}}, {{cite:0dfd223b9229aa486f20cdc9b2c87c784bc98fc1}}, from vertically vibrated rods {{cite:ae390caa49076b12e25df11049082a9a02c9ee4b}} to mixtures of cytoskeletal filaments and associated motor proteins {{cite:ba0929c388b4d07d3bc2f5f5346ccd5c588b6ceb}}, {{cite:8f4e947734740d8560ea3b3a4daff27fad664c1a}}, {{cite:7f906c740f1cba733017a7352d3668776648fec8}}, bacterial suspensions {{cite:d5a9026f134c5b89b09ae29657de3383623d1af8}}, {{cite:c87a5f6b95e45588caa0d3bc71285ea7bbcc05d1}}, {{cite:48a162237c105aa5ba81d48a1a32526d4d8a11a5}}, cell sheets {{cite:f09bae1832393360a569b718422bcb6e33898f2b}}, {{cite:10bae034f86b17feb97579630c578f52ba2a47a3}}, {{cite:0dff9cb3df0e04f5c1a7a8f41494b7b4f814fc2b}}, {{cite:23975de0089b9446d1bb3adc1f9ac2b4853e1989}}, and even developing organisms {{cite:0868d56cfc51a731c4e86352d4a0776ae9a9c12d}}. Nematic order has only quasi-long-range (power-law) falloff in 2D active systems {{cite:94daf52be35f35773a0ca678b688008b64e2cf7c}}, {{cite:08b62700edf6959e5e6bfe6cbbd31a2d285239d5}}, {{cite:e8c2a2922387f33c08208a898dbccf636b068b30}}, as it does in equilibrium, and is easily destroyed by active stresses that drive spontaneous flows accompanied
by the proliferation of topological defects {{cite:ba0929c388b4d07d3bc2f5f5346ccd5c588b6ceb}}. A distinctive property of active nematics is that the comet-like {{formula:aabe78a6-676f-4275-b84f-e8d3fbff7612}} disclination becomes motile {{cite:ba0929c388b4d07d3bc2f5f5346ccd5c588b6ceb}}, {{cite:6eec934efa07c85a6d1763fce88a287ae64be988}}, {{cite:80922ec2b996a3fa63a3518c6d784146cb4389ef}}, allowing for an activity-driven defect unbinding transition {{cite:161cf5d7b0d3063c0d379c91e62e5d4f43fd8d84}} to a turbulent-like state with chaotic spatio-temporal dynamics. Both experiments and numerical studies based on the solution of continuum models have demonstrated that this complex dynamics can be characterized by focusing either on the statistics of flow vortices or on that of topological defects {{cite:0dfd223b9229aa486f20cdc9b2c87c784bc98fc1}}, {{cite:3563d009b9f0445a4efcf37d499f691945d8e4ae}}, {{cite:22cfe5c604a782cbec409d58e300977d631501eb}}, {{cite:b3ff24b4b66e1ff01a3b13eb7b0587f2f726b53d}}, {{cite:fc49bd9c41940794dc6ff7cae43fe621c81936ed}}, {{cite:7f906c740f1cba733017a7352d3668776648fec8}}.
| i | 50fa6dd5eaa936daa4c368b6f2598616 |
To also put these results into context with QKD applications over short links, we estimated the back-to-back detection rate, the quantum bit error rate (QBER) and the asymptotic secure key rate for the entanglement-based QKD protocol BBM92 {{cite:895ee9122a26152f0088cf951ee1f2fb623dcafb}}, {{cite:b478680e541b6c588368cbfcfff4bef4c8be6317}} with our frequency-multiplexed photon pair source. The results are shown in Fig. REF . One can see that a strong benefit of our demultiplexing approach is that it allows higher key rates at high photon pair rates, avoiding high-count-rate detection issues. In particular, when the coincidence window is 3 ns, we can easily reach with our setup the mean photon number required to obtain the highest possible key rate: with only 9 mW of pump power, it would be possible to achieve a photon detection rate of around 50 Mpairs/s under laboratory or low-loss conditions. The estimation for a satellite QKD environment, obtained by adding higher losses, can be found in the discussion section.
{{figure:b006cf71-51d9-41a2-96f5-4f3da569da29}} | r | 8d986256a1d734eb47a6eebcfe26d9e9 |
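For intuition on how the QBER feeds into the quoted key rates, here is a minimal sketch of the textbook asymptotic BBM92/BB84 secret-key fraction r = 1 - 2*h2(Q) (symmetric errors, no finite-size or loss corrections; the 2% QBER and pair rate below are illustrative stand-ins, not measured values from this experiment).

```python
from math import log2

def h2(q):
    """Binary entropy in bits."""
    return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

def bbm92_key_fraction(qber):
    """Asymptotic secret-key fraction r = 1 - 2*h2(Q), clipped at 0."""
    return max(0.0, 1.0 - 2.0 * h2(qber))

# Illustrative numbers: 50 Mpairs/s detected at 2% QBER.
pair_rate = 50e6
secure_key_rate = pair_rate * bbm92_key_fraction(0.02)   # bits per second
```

The clipping at zero reflects the well-known ~11% QBER threshold above which no secret key can be distilled in this asymptotic regime.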
If the community structure is truly capturing the protein's topology, we expect this grouping to reveal aspects of protein function. We can test this claim using Gene Ontology (GO) term analysis {{cite:dd14825bb2db65f4a7010f6f9920da0d2da5fa70}}. This effort assigns functional relevance (e.g. lactase activity, oxidoreduction) to genes. The SIFTS project {{cite:dab770c5e6b8133330b4ecf91227cfe7017128a1}} maps these GO terms to records in the PDB, meaning that each protein now has a set of labels encoding information about its function in the cell. We can then test if the grouping results in enriched GO terms, i.e. terms appearing more often than expected by chance {{cite:7b2741ba11a706e878df4c3fc84f5c397f066f32}}, {{cite:3db0ebaca6dedc7bfaeff6b04992ccb617cc7b3c}}. For {{formula:4c95c036-a819-4bdb-90d9-5c64b28eae91}} total proteins, and a subset of that dataset with {{formula:1ccb7f57-5717-47be-8562-3bc02ee8522e}} proteins, the probability of a GO term being found is given by the cumulative distribution function (CDF) of the hypergeometric distribution. For a given GO term, let {{formula:36159fec-4e30-4570-9685-f4a19c9e8208}} be the number of times it occurs in the subset, and {{formula:22328605-56cd-43cf-aef7-cac6ed574f72}} be the number of times it occurs across the full dataset. Then the likelihood that the term would be seen {{formula:d03d5f13-5c4d-486b-a474-6fb7dd91526d}} times by chance is:
{{formula:deda8c26-3b6e-4c2b-b2d4-5200f8b3b906}}
| r | cb7adb77468fc5ee5d8e1e9994d16c53 |
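The tail probability described above can be computed directly from integer binomial coefficients; a self-contained sketch with made-up counts (N proteins in total, K of which carry the term, and a subset of n proteins containing it x times):

```python
from math import comb

def hypergeom_pmf(x, N, K, n):
    """P[X = x] when drawing a subset of size n from N items,
    K of which carry the GO term."""
    return comb(K, x) * comb(N - K, n - x) / comb(N, n)

def enrichment_pvalue(x, N, K, n):
    """Upper-tail probability P[X >= x]: seeing the term at least
    x times in the subset by chance."""
    return sum(hypergeom_pmf(i, N, K, n) for i in range(x, min(K, n) + 1))

# Hypothetical counts: 1000 proteins, term present 50 times overall,
# subset of 100 proteins contains it 15 times (expected by chance: 5).
p = enrichment_pvalue(15, 1000, 50, 100)
```

A small upper-tail p-value flags the term as enriched in the community; in practice one would also correct for testing many GO terms at once.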
MAEs (MaskFeat, VideoMAE, SpatioTemporalMAE, OmniMAE, and AdaMAE) generally outperform previous supervised representation learning approaches (such as TDN {{cite:e35b2ded82397fbc0fe9b5c3db6c0803544b2c77}}, TimeSformer {{cite:3bcac0015dcb887d6017524ce84ea133254190bb}}, Motionformer {{cite:4a0bc9ed28e1d89b926b458706e3accb5b476897}}, Video-Swin {{cite:d7d97e6d2e996834bae73670433958ddc534f7ea}}), which use ImageNet-1K (IN1K), ImageNet-21K (IN21K), Kinetics-400, and/or Kinetics-600 labels for supervision, by a significant margin on these datasets. This empirically demonstrates the powerful representation learning capability of masked reconstruction methods. Furthermore, they also outperform recent contrastive learning approaches such as BEVT {{cite:4465014eadfb85000342817342dd2e3ef6d9b32f}} and hybrid approaches (i.e., masked modeling and contrastive learning) such as VIMPAC {{cite:f16865622d8cc98041c34e4cc21a0a905fc459b2}}.
| r | 74f95a4564bd3072f78ab1ea9db40cdc |
The motivation behind using MTL includes implicit data augmentation, since a model that learns two tasks simultaneously is able to learn a more general representation. Also, if data is limited, MTL can help the model focus its attention on the features that actually matter, as other tasks will provide additional evidence for the relevance or irrelevance of those features. Finally, MTL acts as a regulariser by introducing an inductive bias that reduces the risk of overfitting. An overview of MTL can be found in {{cite:26e0fbc31276ea1490c271a3fa9757426f723737}}.
| m | 1ea096617930ccf040efa5917627580b |
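A minimal NumPy sketch of the hard-parameter-sharing setup these points refer to (all shapes and weights are invented): both task heads read the same shared representation, so each task's loss gradient also shapes the shared weights, which is the inductive bias acting as a regulariser.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 4
W_shared = rng.normal(size=(d, h))   # layer shared by both tasks
w_task1 = rng.normal(size=h)         # task-specific heads
w_task2 = rng.normal(size=h)

def forward(x):
    z = np.tanh(x @ W_shared)        # shared representation
    return z @ w_task1, z @ w_task2

x = rng.normal(size=d)
y1, y2 = forward(x)
# Joint objective: gradients from BOTH task losses flow into W_shared,
# so each task regularises the representation the other one uses.
joint_loss = (y1 - 1.0) ** 2 + (y2 + 1.0) ** 2
```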
a {{formula:32476c35-d720-4b0c-be99-fdfc9af69aca}} and {{formula:d82983ea-e000-4ca0-8f8c-04d7f052d4b3}} are the beginning and end times of the giant X-ray or optical bump, respectively.
b The references of GRB prompt phase observations for our sample.
c The references of GRB redshift for our sample.
d The references of optical afterglow data for our sample.
References: (A02){{cite:8ea433bc273de1f3a94e3da99fcb656276734d49}};(A08){{cite:141d9b55e372e71115b086857bf19112dc30f232}};(B05){{cite:12dd4ebec700f5678274202ece3b02ccd9d297c1}};(B07){{cite:72f2dbff5f200d0d44ed6fcfc1668cef953a2edb}}; (B08a){{cite:b7b3084f25994801558fefa141e06b5ec502afd5}}; (B08b){{cite:48112050d93dfb653643f51ab91a6e0a787f2c67}}; (B12){{cite:b25504f20235c4bbcfcdca0b2c0fc438b6cee9cf}}; (B13){{cite:645fc90f9088f8f5790ede6ef478b005f3dc36f9}}; (B16){{cite:a76ea31a8571a81f7557313194f9d4ab2acd98ff}}; (C08a){{cite:f1c823cd5d1df7dabb7259b425d1911640049a00}}; (C08b){{cite:f29635844a1e63394b200618ffa97e10c333aa83}}; (C09){{cite:6720d7233e9060ce212af2844c60240c91d2a7a4}}; (C10a){{cite:0198e99ded70eb63cc9549f2831ce123e7c69d9a}}; (C10b){{cite:bb72e933442d6c5d85efd53633be6c88af630ab6}}; (C13){{cite:c8e0d9d8f1ee8d068350e796558e0857f1e8a60c}}; (C14){{cite:a2f5d86b8984b4f365538a744ffe07318964c4a4}}; (C16){{cite:52aa5ace43b95e862fc3a5adff7c19fb986546f3}}; (C19) {{cite:40b2be9a9ad46cdbc418649cf00b194e41f74d50}}; (D10){{cite:67ab1cb35894fba76c64948eb8320e85725cb38f}}; (D18){{cite:61226fcfbc15a2531152efbc276edf3bb728d9c9}}; (D19){{cite:842ae72b336cea883ac99466655c891d9a70f3a6}}; (F09a){{cite:8bdcd2d03b0763b1e423512a750f20a425cc61c1}}; (F09b){{cite:4104ce5322c50121d6c384e0d7fc927563b70426}};(J06a){{cite:116e73dd015caa899bc5f8c57ae14508621ebb1c}}; (J06b){{cite:4ee7508809627322384894ea589e7ed031042899}}; (K07){{cite:3e4f07ac6defaae83b12feea0bf094e1992cf133}};(K10){{cite:748160019cb957c0338d56f0f582faa66d39ab2e}}; (K12){{cite:8eb999e46a86c3c212a145a6fc1296ddd3e465ce}}; (K16a){{cite:08cdd79322b9d01129174a89778bc69b6b7debb8}}; (K16b){{cite:b26c613cf087144e12e79e7b81e81ececea1a5fe}}; (K19a){{cite:b03fb6802427e3b0c700b875116f7e842c33d305}}; (K19b){{cite:7cfc69996d2ac8a6bf689634e3c9a0cf46b5cdc5}}; (L13){{cite:ef3fc9a0107b26ea9537346f27eabce98d2c6004}}; (L15){{cite:01116c4edddf2cb661e1ad4c09e012c377cd2f84}};(L19){{cite:43027a18623a271af2056eb8fb502c0a14148e17}}; 
(M08){{cite:30407b40ff392cf9789f85a9c2ed3d9f3d8cc337}}; (M13){{cite:035f49fca0b174dfed0848b7100a856409bb1ffd}}; (M15){{cite:fbbfb32d1975a09cb38b93738ac206fb1fca673d}}; (P06a){{cite:551fb26cb08f22ae8617f3b13f8febbcd82c127e}};(P06b){{cite:9856d8ecae0a49c0136df9857f6a89fe7b70978c}}; (P07){{cite:0bdf97121ac9422e66ab5905aafcc4c9436c4f28}}; (P11a){{cite:39f158fcba9557ad5797219a306d7c7eb82af43b}};(P11b){{cite:5d3e199e48e0a1924dddb9bb9689615f53a5367f}}; (Q13){{cite:f027254747f9efe09e6c6d4086809ceddde97c42}}; (R08){{cite:5dd54381080fac5786641871699a404ffc035acc}}; (S04a){{cite:676e270b4a666048f786ce13d49f2e00a7193091}}; (S04b){{cite:29cec9cff8edadb10853fad004bdc6dae3bfebef}}; (S05){{cite:e72099f73029bbf55c0064b52c0fe1ee8b744cb4}};(S06a){{cite:37b4d38467af5a46a08521f7436a996d3b14eba8}};(S06b){{cite:2c525f9205ab7d2f7a180f781449d217d9d63227}}; (S10){{cite:8570607e7abcda55135a2ea6e7c49565850c848f}}; (S12){{cite:d88ce0f01906ea72d6663296ba5954e0d755d444}};(S13){{cite:75272ba0488bb71ee5c1d98b6669dfff9ae9a067}}; (S14){{cite:c27759f495cd7e35320fced759b763022aefd427}}; (T05){{cite:975746e5e5af12ff3777c6d3cd07b0ab4d47ed1e}}; (T12){{cite:66f8fa9c6ffe2ec5e2e4c6f9ea4575f91df2b5dd}}; (U10){{cite:47fa619447f39a881e91295af548807fa48f0644}}; (U11){{cite:07202a2605058e8cc330924e782f7be6d362e306}}; (V11){{cite:c53aef347fd1233f0d3bec139c6f92d6a23134da}}; (V19){{cite:1dc10ea9dfeb7d8de8a6f9b87c86b29a11ecafdc}}; (X10){{cite:36b064614ebbae0d42c496741cf55abb84925d79}};(Y16){{cite:df7f0c2bf86c8a12826755a7d1ad52e00102143d}}.
| i | d972758c5583f88ce91f331e558f4736 |
Based on {{cite:e2fd3d0703231e72c5dbe4e7e33e8a3df2c8ba2e}}, the optimal solution should satisfy the following Karush-Kuhn-Tucker conditions of Problem (REF ):
| m | ba282faa217f243223b05ed131a8da43 |
which, using the value of {{formula:be7f7dc3-0942-44b8-adff-1043312a9e8c}} from studies of gravitational radiation from string loops {{cite:9c60b25b408638c2c1db89e8401a59784b79820a}}, is about {{formula:6d4cf865-f7ea-4f90-b7b7-bfc056dbfe14}} , assuming no Eddington accretion.
| d | ec4a7c345b4c38a5a6c67465a70d9f9d |
The results presented highlight some ML success against baseline forecasts and also a number of unique avenues that could be explored moving forward to enhance and improve both ML-based guidance and the SPC human-based forecasts, as well as increase interpretability of the ML `black box'. While the feature assembly experiments (e.g., p2, tl10) did not yield forecasts that surpassed the skill of the traditional CSU-MLP system, the simplification of features could be exploited to include other ensemble diagnostic or summary metrics (e.g., mean, high or low member values) that characterize ensemble spread into the medium range. The meteorological predictors could also be varied in any of the ML configurations to explore which predictors add the most value, or objective methods (e.g., permutation importance) could be used to reduce feature redundancy and select a more optimal subset of features. It will also be vitally important that alternative interpretability metrics (e.g., tree interpreter {{cite:a40193426499d32670f89ad1da9d42aaae70e407}}, Shapely additive values {{cite:3fef2ca52887c1350015d6e75dc9035e0f8ecb82}}, accumulated local effect {{cite:256ead66ef6b26f3425a704efd3e3574e7450a08}}) are employed to interrogate how the RFs make predictions; this exploration is underway and will be the focus of a follow-on manuscript. Additionally, the added benefit of the ML system against the underlying GEFS model could be quantified more explicitly. Traditionally, ML-based forecasts have been measured against the very model that generates the ML predictors, with demonstrated success improving upon the raw dynamical models {{cite:57c7ec1dad057164006c9e73184d66367e4e7870}}. 
In this instance, with notable 2-m dry and low-instability biases in the GEFSv12 system (internal SPC surveys have suggested these biases exist and are reducing forecaster confidence in deterministic Global Forecast System and GEFSv12 forecasts {{cite:a766144c5fcc33e34849f807cf1974e098d4ed1f}}), it would be informative to quantify the value added by the ML system to correct for these biases when making severe weather predictions.
| d | e10893d4b90de056f0ac49a85f0847a1 |
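Since permutation importance is suggested as one way to prune redundant features, here is a minimal model-agnostic sketch of the technique (toy model and data, not the CSU-MLP random forests): shuffle one feature column, re-score, and report the mean drop in skill.

```python
import random

random.seed(0)

def permutation_importance(predict, X, y, score, col, reps=20):
    """Mean drop in score after shuffling feature column `col`."""
    base = score(predict, X, y)
    drops = []
    for _ in range(reps):
        Xp = [row[:] for row in X]
        shuffled = [row[col] for row in Xp]
        random.shuffle(shuffled)
        for row, v in zip(Xp, shuffled):
            row[col] = v
        drops.append(base - score(predict, Xp, y))
    return sum(drops) / reps

# Toy setup: target equals feature 0; feature 1 is pure noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [row[0] for row in X]
predict = lambda row: row[0]                 # "model" uses feature 0 only
neg_mse = lambda f, X, y: -sum((f(r) - t) ** 2 for r, t in zip(X, y)) / len(X)

imp0 = permutation_importance(predict, X, y, neg_mse, 0)
imp1 = permutation_importance(predict, X, y, neg_mse, 1)
```

Here shuffling the informative feature destroys skill while shuffling the noise feature changes nothing, which is exactly the signal used to rank and discard redundant predictors.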
Our predictive benchmarking method uses the performance results obtained by a SLAM algorithm in a number of environments to predict its performance in new ones.
Compared to current SLAM evaluation techniques, which are used to perform ex-post benchmarking, our predictive benchmarking approach provides an estimate of the expected level of performance of a SLAM algorithm in new environments without requiring their actual mapping.
While existing SLAM evaluation techniques handle every new environment as independent, predictive benchmarking builds upon the knowledge gained with previous evaluations to create a model of the relationship between environments and their associated SLAM performance; this results in increasingly accurate performance predictions as the number of explorations used for training increases.
It consists of a sequence of modules, each responsible for a logically distinct step, as shown in Fig. REF . In this paper, we consider 2D lidar-based SLAM methods, using GMapping {{cite:391292da13815f3d0a19f28b54e559d0a6a2732a}} as the reference algorithm because of its widespread use and the availability of stable implementations. Validation of the method is performed by considering another widespread SLAM method, KartoSLAM (http://wiki.ros.org/slam_karto) {{cite:9b1ecb5f5ebd74f2c8dfc0f3698e628b7544fe2c}}, but the proposed approach potentially works with any other SLAM algorithm. The choice of these two SLAM algorithms is motivated by the fact that those methods are widely used both for research purposes and for real-world applications {{cite:8023424ab137d9d426e3455b1f22676643dbe5bf}}, {{cite:05c9eef5b996266205aa90996ed7872470682984}}.
{{figure:cdaace9c-ccf9-40f2-9fc2-97bb30293cf8}} | m | c7232679c78695bde3cf6ce1324984d8 |
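A deliberately tiny sketch of the predictive-benchmarking idea (one invented scalar environment feature and synthetic performance values): fit a model on (environment feature, observed SLAM error) pairs from past evaluations, then predict the error for an unmapped environment from its feature alone.

```python
import random

random.seed(0)
# Synthetic past evaluations: (environment feature, observed SLAM error).
history = [(a, 0.5 + 0.03 * a + random.gauss(0, 0.02)) for a in range(20)]

# Ordinary least squares fit: predicted_error = w * feature + b.
n = len(history)
sx = sum(a for a, _ in history)
sy = sum(e for _, e in history)
sxx = sum(a * a for a, _ in history)
sxy = sum(a * e for a, e in history)
w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - w * sx) / n

# Predict performance in a new, never-mapped environment (feature = 25).
predicted_error = w * 25 + b
```

The real pipeline replaces the single feature with a vector of environment descriptors and the line fit with a richer regressor, but the workflow is the same: predictions improve as more past explorations feed the training set.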
Vanilla dense neural network architectures suffer from overfitting on the balanced EMNIST dataset. To mitigate the problem, we tested many regularized models using dropout and the L1 and L2 norms, which clearly show a massive improvement in the models' generalization strength over time when fine hyper-parameter tuning is done. However, the algorithm still falls into local minima, as the figures from our experiments showed. The literature's solution for this is to use CNN models, which use convolutions to extract spatial features from the data, merged with dense regularized layers that add non-linearity to the tensors flowing within, in order to boost accuracy on this 47-label classification task. That is because the model sometimes struggles to extract relevant information from an image given only a flattened vector of 784 units (or 78) as input; better feature extraction can be done this way and will surely improve the results. This has already been done on the MNIST dataset ({{cite:7888b134a5ce86bf30ce0aa0b4428bbac7127708}}, {{cite:5db34b064c04056cffeeb49335ba8d40c07a5d36}}) and on EMNIST with capsules ({{cite:fb11f58dc8bc4d13220a6e2a6258a6f6c1410af9}}), achieving more than 99% accuracy.
| d | 8aad00299038c2c83f5b8b787ac500a5 |
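For reference, the dropout regulariser discussed above amounts to a few lines; the sketch below uses "inverted" dropout, which rescales surviving activations by 1/(1-p) so the expected activation is unchanged between training and inference.

```python
import random

random.seed(0)

def dropout(activations, p):
    """Inverted dropout: zero each unit with probability p and
    rescale survivors so the expectation is preserved."""
    return [0.0 if random.random() < p else a / (1 - p)
            for a in activations]

acts = [1.0] * 10000
dropped = dropout(acts, 0.3)
mean_after = sum(dropped) / len(dropped)   # close to 1.0 in expectation
```

Because the rescaling happens at training time, the layer is simply the identity at test time, which is what makes the trick convenient in practice.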
OpenImages. This dataset {{cite:efa030d382efd50dafea1c414758683806a258aa}} is the latest endeavor in object detection and is much more challenging than its predecessors.
Our classifier achieves 69.0% top-1 accuracy on the validation set of OpenImages V4, which is lower than on the other three datasets. We achieve 58.9 UAP, using the TensorFlow evaluation API for computing AP {{cite:fb2e74146ae54b1b858a97819bcccfcacd4c4456}} on this dataset, which differs from the COCO AP calculation (here we discarded grouping and super-category). We are not aware of any other model scores on this set of OpenImages V4.
| r | e51aaa7759f369e8195e6a61f2a539c7 |
We also compute VMM from the tree level diagram, the vertex
correction and the four-fermion interactions respectively. The
contribution from the tree level diagram agrees with the result
obtained in ref. {{cite:0750ab1853ea5b1bc189c6145ae426009f71cca4}}. The other two contributions counteract
each other and therefore the total contribution to VMM is about
{{formula:cd75e2bc-b47f-49b3-8491-fe30700c9ccd}} if we choose {{formula:c0d271d7-2068-401c-9c73-e92004186136}} and {{formula:62968c2d-becc-49ed-be70-846b533f0d94}} .
| d | 5ff2f5480129c9bfbafd60acab486600 |
As seen in Table REF , DAT improves FID, KID, and IS consistently and significantly across many datasets with three popular models. This suggests our approach is versatile and can generalize across multiple data domains. In particular, the usage of DAT greatly improved the FID of SSGAN on the STL-10 dataset. Due to the large difference between the LSUN-Bedroom distribution and the ImageNet distribution, IS does not seem meaningful for it, so we do not show IS on the LSUN-Bedroom dataset. On some datasets SSGAN with DAT has the best performance, while on others InfoMaxGAN with DAT does. Furthermore, there is a gap between Train FID and Test FID; as {{cite:d9d29d4c05ed4a21253526496f6b12a06cc5b835}}, {{cite:4e7c75b51c1b35679f6a8467c8f8bfb2c2738808}} noted, GANs also exhibit overfitting. This may be a direction for our future work.
| m | db854025d3d3600a2e7a8449b53d4c19 |
In this paper, we discussed multi-soliton dynamics of
anti-self-dual Yang-Mills equations by analyzing
the action density in the asymptotic region.
By considering a comoving frame with the {{formula:315b8016-1a75-4aa8-a6d3-d98a4d1d676e}} -th soliton,
we proved that the entire multi-soliton distribution is asymptotically equal
to the {{formula:9dc52c76-e0f5-4ff4-af48-081ea406707c}} -th one-soliton distribution except for a phase shift,
and we also calculated the phase shifts explicitly.
Therefore, our results can be interpreted as intersecting
soliton walls with phase shifts in the scattering process.
It is surprising that this behavior is quite similar to
the case of KP soliton scattering {{cite:9415d8f8def9a475de0dd5fe6cf8800149b4c70e}}, {{cite:1ad2c2579be386bf38aa39a8f763880ed838717b}},
and it suggests the viability of Sato's formulation
for anti-self-dual Yang-Mills equations.
Furthermore, we proved that in the Ultrahyperbolic space {{formula:00150682-accc-49fa-bc76-70e828d98c4e}} , the {{formula:223f9367-1a1c-48b7-bf88-367d0da187df}} -matrix is unitary. Hence the multi-soliton solutions can be embedded in {{formula:135bb632-0390-4fa7-b4a0-d107933f817d}} gauge theory. This implies that there would exist
three-dimensional intersecting branes in N=2 string theory.
| d | 2404299dca2fc296b4aece9f432b7048 |
We empirically evaluate our proposed algorithm on six different continuous control tasks from the OpenAI Gym {{cite:42bccb2baa79eead11f53e2d0b32105e60d5ff71}}. We compare our method against well-known optimization-based reinforcement learning algorithms such as proximal policy optimization (PPO) {{cite:7526285e0966fe2eb054e15b93df6c5342981d3c}} and trust region policy optimization (TRPO) {{cite:052f35034637d01afb929479dd3b13ba29304ed2}}. We then discuss the performance of the primal-dual formulation with various commonly used optimizers, including Adam {{cite:00678ddbef402971c9bf16d8b64f873f71d1d15c}}, RMSProp, and gradient descent, to show the effectiveness of our method. Our results indicate that ASDGA is not only theoretically efficient but also at least as effective and efficient as other well-known optimizers in practice. To ensure reproducibility, every set of experiments is repeated with five different random seeds, and the mean reward obtained is presented. We describe the configuration of each algorithm and optimizer in detail, along with more experimental results, in the appendix.
| r | af4270108e5c338d03f7566f9b74d7c3 |
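The primal-dual formulation above is optimized with gradient-based methods. As a generic illustration only (this is not the paper's ASDGA algorithm), plain deterministic gradient descent-ascent on a hypothetical toy saddle-point problem can be sketched as:

```python
def gda(grad_x, grad_y, x0, y0, lr=0.1, steps=200):
    """Simultaneous gradient descent on x and ascent on y (toy sketch)."""
    x, y = x0, y0
    for _ in range(steps):
        # Both gradients are evaluated at the old iterate before updating.
        x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)
    return x, y

# Toy saddle problem: f(x, y) = x^2 - y^2 + x*y, saddle point at (0, 0).
gx = lambda x, y: 2 * x + y   # df/dx
gy = lambda x, y: x - 2 * y   # df/dy
x, y = gda(gx, gy, 1.0, 1.0)
```

For this strongly-convex-strongly-concave toy objective the iterates spiral into the saddle point; stochastic and adaptive variants (Adam, RMSProp) replace the exact gradients with estimates.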
Our study is limited to extant words in English and randomly generated character {{formula:805f71ab-3977-405c-962e-aa9d7e82e068}} -grams using the English alphabet. Given the unique impact of a specific language and alphabet on representation spaces, there is motivation to see whether the relationships we identify generalise to other languages and alphabets. Finally, we reiterate that our analysis was limited to the last embedding layer of the CharacterBERT model; future work may focus on weights in earlier layers, including attention mechanisms explored by other BERTology studies {{cite:253e3195e8c2312f7549a98c612bcf46a5513676}}, {{cite:8d8574b5a1edafeba6efff532a618753ea8d36f8}}. By only analysing the final embedding layer, we study the `psychology' of such character-level models; in analogy, much may be gained by studying the `neuroscience' of such models encoded in their attention weights {{cite:9b86f10b6795dec69475f4cb61acff77d3d6fd45}}.
| d | 0afd56c5e1c43724216ac272b79a7474 |
Since the historical success of AlexNet {{cite:ffb9f982dd78f5c4035586bc2bb6f6d43249c3e0}}, convolutional neural networks (CNNs) have been applied for a variety of machine learning tasks {{cite:d713537b241582a42919589abd948abd72f71954}}.
In the research field of geometric deep learning {{cite:b74f8ecefe42c2e0d0a3d783389c788855163cea}}, group CNNs (GCNNs) have been developed to capture the inductive bias behind a variety of datasets
such as sets and point clouds {{cite:0a7c8747557ff873d076d40be6bb0bd2a7556e5f}}, {{cite:00d461c13884e4a8f141daf33d2a64e7a85867d6}}, graphs {{cite:636c4b5e458ad981f7dd4d9296ef44ac8430a703}}, {{cite:2ac46ab02f4e50c2b8dbe98ef70ce23f9f1cef28}}, manifolds, groups and homogeneous spaces {{cite:e818a9d1b89bac13d6d3c27307bc538185fcc713}}, {{cite:fc8426e24185ba20871929e719ae5842ac396fa6}}, {{cite:f07600a85a4f132fb3b91cc5fc0cd8578a7f3de7}}, {{cite:636c4b5e458ad981f7dd4d9296ef44ac8430a703}}.
| i | 2ef4b21bfb822bbad2b7aaa1242fb78d |
The theory of iterated integrals was developed first by K.T. Chen in the 1960s {{cite:7d708cccca1a54b6521a8e3bfb0a5f1607ff1fff}}, {{cite:cb44e4a465848b852463345e109a8b93d36879c6}}. It has played important roles in the study of algebraic topology and algebraic geometry in the past half century. Its simplest form is
{{formula:452de564-3415-46ba-8a88-ae2e7cbf43c7}}
| r | 44b6179a04edfc17a6bb879a241234b8 |
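The formula itself is elided in this excerpt. For orientation, a k-fold iterated integral of functions on the unit interval is commonly written (our notation, not necessarily Chen's original form) as

```latex
\int_{0 \le t_1 \le \cdots \le t_k \le 1} f_1(t_1)\, f_2(t_2) \cdots f_k(t_k)\; dt_1\, dt_2 \cdots dt_k ,
```

i.e., an integral over the ordered simplex in {{formula-free}} words: the integrand factors are evaluated at increasing times.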
Our initialization method is designed to alleviate the problem of convergence to poor local minima, an issue, along with slow convergence rates, that the family of EM algorithms is well known to suffer from {{cite:8e483ff8947b765e72b0860237991df94d355ce6}}, {{cite:a6bc2f17cce9e87a7e725327ec01012f83a780b4}}, {{cite:a6053cec7379d74e9f5921ab338728cdd36a3051}}.
As such, the study of good initialization methods for specific EM algorithms is
an ongoing area of research {{cite:a6bc2f17cce9e87a7e725327ec01012f83a780b4}}, {{cite:3d2765bc058a531cc9a0d25b12ef2464fd9495f7}}, {{cite:f5e5d5cec3450e580726685afea73feb59926596}}, as is work on finding alternate update rules to address the same issues {{cite:a6bc2f17cce9e87a7e725327ec01012f83a780b4}}, {{cite:c6831aa854335ae2465eca20e1bd88d905842954}}.
Our work relates to these investigations, however we focus on EM algorithms that are specific to clustering problems, rather than general data-completion problems
— specifically, we consider intent clustering as encountered in MI-IRL.
| m | 43788eacd0b0a2591802b2e18f393115 |
In this section, the original ViT is referred to as ViT-Base {{cite:ce974ab518dc974e7c57240048544979fc6cc133}}, with several changes as shown in Table REF .
{{table:d7f358be-3c21-47e4-baf7-d93886590f92}} | r | 662bef60891140b45b780cf4263585a9 |
In this Letter, to resolve the stalemate in this research field, we propose an alternative route for creating an effective {{formula:03bfc8aa-c769-4fc9-815d-6435279433ee}} -wave superconductor.
We specifically consider a three-dimensional Josephson junction illustrated in Fig. REF ,
where a thin-film semiconductor is sandwiched by two conventional {{formula:c63d27fb-0963-4f40-8950-fe9fa4308b7f}} -wave SCs.
In addition, we assume that the semiconductor hosts a persistent spin-helix (PSH) state, which is studied intensively in the field of spintronics
{{cite:ace38b763bbabab723b98892092bfa3d2ae1089f}}, {{cite:02ba4c81ae8045f5d111fa914303eed5c75e012a}}, {{cite:5c762cd89cb6bd01964c2977154c1740c15bed48}}, {{cite:edb6e79a4be7dd9aceb14f2e4d2ba57d1f914ea7}}.
This work is inspired by the innovative studies on planar topological Josephson junctions (TJJs), which have been attracting much attention recently
{{cite:f068228f1374de2ffddc3af731991a6ec0092160}}, {{cite:6dcaab01079b67c8545761de744c460eb167ad9c}}, {{cite:1fe3e878a3d07126fd3d0cf4fcc6c30c56ca247a}}, {{cite:0ec3783a55dbe148655f166aef598585d4cdfccf}}, {{cite:e45a835f1155eba3e3cd6b65c752c3313bb85732}}, {{cite:32a94fcbe51ab401daa18fd4412dde2c1e8daba4}}, {{cite:bde77f6c0b5a5ab63a98af387cf7e664546d3f17}}, {{cite:b58e5ec14a821ccfb79f2517435b1936a94c65f6}}, {{cite:73aa5b55791e40133f07963d6329a80e06b9d81b}}, {{cite:39b569a689b4b261efdd4c1cf30603237837163f}}.
In a planar TJJ, effective topological superconductivity is realized at the one-dimensional interfacial region of the two-dimensional Josephson junction.
In our setup, we obtain an effective {{formula:6a85756a-1f36-4202-8ab6-990e477e72e1}} -wave superconductor in the vicinity of the two-dimensional semiconductor segment of the junction.
Thanks to the nature of TJJs {{cite:f068228f1374de2ffddc3af731991a6ec0092160}}, {{cite:6dcaab01079b67c8545761de744c460eb167ad9c}}, our setup can enter the topologically nontrivial phase supporting the flat-band MBSs
by applying Zeeman potentials smaller than the pair potentials in the superconducting segment.
Moreover, in the vicinity of the semiconductor segment, the local inversion symmetry along the {{formula:1edf2315-cf23-45dd-b7d9-211c3c6bb3de}} -direction is preserved, and thus the Rashba SOC can be neglected.
Therefore, our proposed system is free from the two difficult conditions required in previous proposals.
In addition, it has been recently shown that the PSH can be realized not only in III–V semiconductor compounds {{cite:78fbbd6626c3719723f85ff6a12274bfebb706aa}}, {{cite:fb56793dbeb6e7f0c75cea79174380e7a9906329}}, {{cite:5c65d3a08b88ebca66edb765f132f88129294743}}, {{cite:ee5f505b198149e898bdcf6425f76080edf1e6b0}},
but also in two-dimensional ferroelectric materials such as group-IV monochalcogenide monolayers
{{cite:da95a2f43b4bf4a3b45bdc7f2c6dcbd7bc671c78}}, {{cite:30121e66db1562ce432c1e7f6c9ea0c42399c405}}, {{cite:73edabc743eebc97a9be101c1fd7b252d473ff66}}, {{cite:c62a0dbefcc4eaf9b5164f5f061eb31967b6baca}}, {{cite:4e7e5a77f09fb5904986c14b3773b5dbb544fbad}}, {{cite:14ca0da7d888ed95583ed52cbb5c61bf2537cec6}}, {{cite:8c6a7b27cfac22563d1bf55bb362520e17b46838}}, {{cite:4c7e255547aaa0f8598856ecece2a5cded78d7c8}}.
Hence, this rapid progress in the field of PSH makes it very likely that our proposal can be successfully fabricated in experiments.
Consequently, we propose a promising recipe for making an effective {{formula:ce80253e-2784-4a13-a8df-b46aedb8e7eb}} -wave SC harboring flat-band MBSs.
| i | b4f5c119decad5e90168bd32cb673626 |
Our results are mainly based on an adaptation and a combination of the techniques used in the papers of Holden et al. and Chase {{cite:78c622ced26a604e8f34c7cd167e5c10db97d383}}, {{cite:77bfd2ab07019a1c05b530d5d1c099ac6695e356}}.
Here, we will give a brief overview of their methods and why it is not trivial to combine them.
| r | 6c0e3424e422051fa386608f6184f33e |
We compare post-training quantization using INT8 and FP8 quantization.
To allow an apples-to-apples comparison, we do not use any methods to improve post-training quantization that have been developed in the past years (e.g. those mentioned in {{cite:76d331408d2df1dec0c1c72676154c36d9c20f23}}), other than the methods described in Section REF .
| r | ef3daf89ead26777670529abf1dbadb5 |
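Neither the paper's models nor its FP8 format are reproduced here, but the basic mechanics of post-training quantization can be sketched with a symmetric INT8 quantizer calibrated on a hypothetical tensor, with no retraining involved:

```python
import numpy as np

def quantize_int8(x, scale):
    """Symmetric signed INT8: scale, round, clip to [-128, 127]."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Post-training calibration: choose the scale from the maximum absolute
# value observed in a (hypothetical) calibration tensor.
w = np.array([0.5, -1.2, 0.03, 2.4], dtype=np.float32)
scale = float(np.abs(w).max()) / 127.0
w_hat = dequantize(quantize_int8(w, scale), scale)
```

With max-abs calibration, the per-element rounding error is bounded by half a quantization step; FP8 trades some of this uniform precision for wider dynamic range.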
The first inequality is due to Rosser {{cite:639cfae210c44f2f5aeae0d988ab2b3c3d4e892a}}, whereas the second is due to Rosser and Schoenfeld {{cite:a0b79805ed3879470a385a3fa3d02d6e0f8bafae}}.
| r | da57dce7f5f825655f6cca032cc7d798 |
The main limitations of our study are the requirement for hypergraphs to be produced by the generative model of Sec. , the use of the mean-field approximation, and the use of approximation (). [Here, we refer to the approximation that all nodes with the same hyperdegree are statistically equivalent as the mean-field approximation, rather than neglecting pair correlations in Eq. ()]. The generative model we used assumes that the presence of a hyperedge connecting a group of nodes depends only on a set of pre-determined quantities of these nodes, which might not capture the generative mechanisms behind some real-world or model hypergraphs. For example, a simplicial complex model where triangles only join triples of nodes that are already forming a clique (as assumed in some studies {{cite:8b794fe89b45bc2484f81746aefcfc1182ea1032}}, {{cite:00a955a1eee53ee1bdcb495811277e81d86ff4b5}}) is not included in the class of models that the generative model in Sec. covers. In such a model, correlations between the states of nodes belonging to the same triangle could be non-negligible, and thus approximation () could break down. In that case the techniques introduced in Ref. {{cite:c79a9fb5ed0bd85c945ad38c1650370651159097}} could be needed to account for pair correlations. For example, for the SIS model on a simplicial complex, Ref.{{cite:5e8a1bc9bac08aef2bb996bb5dee5bf19f853cd8}} finds that the epidemic threshold is only predicted correctly when accounting for pair correlations. In addition, Ref. {{cite:4810ccd821565701e4e213319a1d59f31f7934cd}} recently noted that synchronization properties in the strongly synchronized regime differ between simplicial complexes and random hypergraphs. Exploring the limitations and possible extensions of our method for simplicial complexes is an interesting problem left for future work.
| d | b33b6768526aa3aa8264458682599fe6 |
Our training set for the ChIMES force field was determined from DFT-MD simulations of amorphous TiO{{formula:7eb123e3-9084-4fcd-a955-e9f6d0859353}} run at temperatures of 2250 and 300 K using the VASP code {{cite:fa05b92be45b4af42f0c001a9bbbf8e4b9738eb7}}. For these calculations, we used the PBEsol functional {{cite:8893e2b4230db2c5bc46927a996082b594a55182}} with an energy cutoff of 550 eV and {{formula:380a1ae1-aed6-42f2-9e81-2839837fe0c8}} -point sampling of the Brillouin zone. PBEsol was chosen due to its improved description of solids and their surfaces. The amorphous phase allowed for improved sampling of a wide range of interatomic distances over the short time scales of the simulations, which was enhanced by including data from elevated temperature. Each MD simulation was run on a system of 216 atoms for a total of 5 ps, with configurations taken for {{formula:c0c26f25-8544-4c42-86c7-d1ea3d6e216a}} training every 100 fs (in order to decorrelate the training data), resulting in 50 training configurations taken from each temperature. We also included an additional 10 configurations from an MD simulation run at 40 GPa in order to improve sampling of close interatomic distances. This yielded a total of 110 training configurations.
| m | cea11c28f1e4958aedfb8af4043cddc1 |
A common approach to discovering these seemingly disentangled representations is the variational autoencoder (VAE) {{cite:4c3dbff8b81b822cab52fc83b4a33e35d082a302}}, which is trained on unlabelled data to learn a lower-dimensional representation capable of reconstructing the given input.
Unfortunately, it has been proven that unsupervised methods cannot reliably learn representations without the introduction of supervision or inductive biases {{cite:862556800cf150515c4ed3839395bc0e8be21cea}}.
Fortunately, the recently introduced Ada-GVAE framework has partially overcome this problem by using a weakly supervised signal to discover these underlying factors {{cite:1475ced96c386ee8f2bcab4c17aba31006583cb8}}, but there still remains room for improvement.
| i | 2aa52a000e2c1392155e40de7c6fbe31 |
On preventing phase transitions, anomaly detection already offers one path forward. Once we can detect anomalies, we can potentially prevent them, by using the detector
to purge the unwanted behavior (e.g. by including it in the training objective). Similar policy shaping has recently been used to
make RL agents more ethical {{cite:39f2bb446e47e2b855e85806845032585b58713c}}. However, since the anomaly detectors will be optimized against by the RL policy, they need to be adversarially robust {{cite:91afe9b9b20b235520ca2937c66a44704f9414fb}}. This motivates further work on adversarial robustness and adversarial anomaly detection.
| d | c4acc129f850ab118e6509eb2222ae56 |
where {{formula:542352a7-5502-40c0-8cd9-bf8431f5577d}} and {{formula:ca40c861-2127-4ab0-8fce-d694f6de2694}} are the bag radius and quark energy, respectively.
Recently, {{formula:23c67551-29d6-434d-a6f0-41d8d0017627}} has been updated by BESIII {{cite:d000ac1d1c874303a9029c76fdd057b760ac9fc3}}, {{cite:6dc7f40dcf7ecf4091da9b4a8fc5918888584898}} with remarkable precision. We take {{formula:22ae1240-205e-447c-8010-c0168511883f}} , {{formula:85fca3a3-2c1b-46c6-aecb-19d775e52d6f}} GeV and the {{formula:ba2c39ba-6ef3-4e55-94fa-1edbb0435f0f}} lifetime of {{formula:b9261d3c-cf0d-4ed1-bc7a-3ce46bdedc21}} s from the particle data group {{cite:75e46553a194d658f92f88c6979342ea7bdbbe5d}}.
The main uncertainties of the HBM come from {{formula:418ec202-3bae-4637-8165-1aadaa4086df}} , which largely affect the form factors in the low {{formula:4590d114-3cf1-4e51-a16f-1776f6e1a1bf}} region.
{{table:d1953d75-435a-4ee1-959a-79d44b988dc5}} | r | cd081e20bae12ab4f4fde887172ea27a |
The Grad-Shafranov method makes it possible to solve the boundary value problems (A), (D), and (G).
Case (A) has already been considered in {{cite:a676fed81094cd2c004ab17c003bf2207ca76c1e}}, {{cite:6b71d6c96f37f292be76690fec6b2943e81bcbb0}}. In this paper we will discuss the application of the Grad-Shafranov approach in Section .
| r | adbeb235829793b5f0378ec3db814d52 |
Recently, {{cite:227b4210472462df533a4a1eee9f442e56c92ba3}} developed a mixed-integer-programming-based (MIP-based) approach that builds representative matched samples. MIP-based matching algorithms ({{cite:d5a14ad6bfce3b9e2fce0561e1043ea4b952ae4a}}, {{cite:c1816e9aedc37d4d798e48d439623c83ad5474c3}}, {{cite:227b4210472462df533a4a1eee9f442e56c92ba3}}) encode requirements for different versions of covariate balance as constraints in a mathematical program. Although recent advancements in computing power have made it more practical to solve large-scale MIP problems with many complex constraints, the general MIP problems are still theoretically intractable or NP-hard. From a very practical perspective, MIP-based methods require installing a powerful commercial optimization routine (e.g., Gurobi or IBM CPLEX), which is proprietary and could be an obstacle to some researchers. This being said, if one finds no difficulty installing the solver and that the match can be readily delivered by MIP-based methods, then this is great and one should do that. Other approaches to adjusting for sample selection bias include weighting and doubly-robust methods (see, e.g., {{cite:ea15396d8e74b0e865ee86c761d89af288484642}}, {{cite:4e59e0427e48b48dbc558b26b5446362c87423d7}}, {{cite:673f67d4cfa08e943e67e204a15b5b836ac1c0be}}); see {{cite:16640ba0113c4a53da9868dbbf1aa47a4a314c47}}, {{cite:fb512df2fa7ccc275ed8ba54486e8d68408acfd8}} for a more comprehensive review.
| m | f70bb8d52bb423a5fd73352f4e4ada93 |
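The general idea of encoding covariate-balance requirements as constraints in a mathematical program can be sketched with a small LP relaxation (a hypothetical toy example in scipy, not any of the cited authors' formulations; a true MIP additionally enforces integrality on the assignment variables):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy data: one covariate per unit.
treated = np.array([1.0, 3.0])
control = np.array([0.8, 2.0, 3.5])
nt, nc = len(treated), len(control)

# x[i, j] = 1 if treated unit i is matched to control unit j;
# minimize total within-pair covariate distance.
cost = np.abs(treated[:, None] - control[None, :]).ravel()

# Each treated unit is matched exactly once.
A_eq = np.kron(np.eye(nt), np.ones((1, nc)))
b_eq = np.ones(nt)

# Each control unit is used at most once, plus a covariate-balance
# constraint |sum_ij x_ij (c_j - t_i)| <= delta encoded as two rows.
delta = 0.5
diff = (control[None, :] - treated[:, None]).ravel()
A_ub = np.vstack([np.kron(np.ones((1, nt)), np.eye(nc)), diff, -diff])
b_ub = np.concatenate([np.ones(nc), [delta], [delta]])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
```

Here the optimal matching pairs each treated unit with its nearest unused control while keeping the matched-group covariate means within `delta` of each other.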
Note that although the SDR of problem (REF ), i.e., problem (), is equivalent to problem ()-new defined in Appendix D, the rank-one reconstruction approach proposed in Appendix C no longer applies to problem ()-new. In general, the SDR tightness for problem ()-new may not hold due to the limited degrees of freedom of the transmitted signals. As a result,
the Gaussian randomization technique may be required to reconstruct a rank-one solution, and a performance loss is inevitably incurred {{cite:ac52433082287f5872d5919ddd032935dca43ab2}}. This result indicates that dedicated radar signals may be required when the cross-correlation design constraint is considered.
| d | d29c271a763874882c277f8912a95593 |
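The Gaussian randomization technique mentioned above can be sketched generically (a hypothetical toy quadratic maximization, not this paper's beamforming problem):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: maximize w^T R w over unit-norm w.  The SDR relaxes
# w w^T to a PSD matrix X with tr(X) = 1; its optimum is attained at
# the top eigenpair, so the SDR bound equals the largest eigenvalue.
R = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 0.5]])
eigval, eigvec = np.linalg.eigh(R)
X_star = np.outer(eigvec[:, -1], eigvec[:, -1])
sdr_bound = eigval[-1]

# Gaussian randomization: draw candidates w ~ N(0, X*), project each
# onto the feasible set (unit norm), and keep the best objective value.
L = np.linalg.cholesky(X_star + 1e-10 * np.eye(3))  # jitter for PD
best_val = -np.inf
for _ in range(50):
    w = L @ rng.standard_normal(3)
    w /= np.linalg.norm(w)
    best_val = max(best_val, w @ R @ w)
```

In this toy case the SDR solution is already rank one, so randomization recovers the bound essentially exactly; when the SDR solution has higher rank, the best randomized candidate only approximates it, which is the performance loss the text refers to.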
The detailed quantitative analysis and comparison of algorithms within the locally-balanced class in the high-dimensional limit is made possible by the mathematical framework developed in Section 3 of {{cite:378140678e9dd1b01961d1854760825bb31062d6}}. This framework identifies and uses only essential Taylor series expansions related to the limiting Kullback–Leibler divergence between a locally-balanced proposal and its time reversal.
Using this we establish optimal scaling for a broad class of algorithms including Barker and Langevin with a single unified proof, along with significantly weaker assumptions on the smoothness and tails of the target distribution than those in {{cite:72802b4b70c94a3939a29f4ec8caf79036ea8092}}.
Our results are at present restricted to limiting expected squared jump distances, rather than diffusion limits as in {{cite:9d926458ca828700b99b68c110967b93fd39fcae}} or {{cite:72802b4b70c94a3939a29f4ec8caf79036ea8092}}, but we believe that it is possible to uncover a limiting process under the current assumptions and such a line of enquiry is being pursued at the time of writing, continuing the axiomatic approach in {{cite:378140678e9dd1b01961d1854760825bb31062d6}}.
| d | da8d679282231ce0aae096115cb7942f |
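As a concrete illustration of a locally-balanced proposal (our generic one-dimensional sketch for a standard-normal target, not the paper's construction), the Barker scheme flips the sign of a Gaussian increment with a gradient-dependent probability given by the balancing function g(t) = t/(1+t), and corrects with a Metropolis-Hastings step:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_pi(x):
    return -x  # illustrative target: standard normal

def barker_step(x, sigma=1.0):
    """One Barker step: draw increment z, keep its sign with probability
    1/(1 + exp(-z * grad_log_pi(x))), then accept/reject for exactness."""
    z = sigma * rng.standard_normal()
    keep = rng.random() < 1.0 / (1.0 + np.exp(-z * grad_log_pi(x)))
    d = z if keep else -z
    y = x + d
    # MH log-ratio: log pi(y)/pi(x) + log s(-d g(y)) - log s(d g(x)),
    # with s the logistic function.
    log_ratio = (-0.5 * y**2 + 0.5 * x**2
                 + np.log1p(np.exp(-d * grad_log_pi(x)))
                 - np.log1p(np.exp(d * grad_log_pi(y))))
    return y if np.log(rng.random()) < log_ratio else x

samples = np.empty(20000)
x = 0.0
for i in range(samples.size):
    x = barker_step(x)
    samples[i] = x
```

The defining locally-balanced identity g(t) = t g(1/t) is what makes the time reversal tractable in the framework the text describes.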
To evaluate our method in simulation, we implement a suction gripper in the ManiSkill environment {{cite:72f6ae2b0f7c88fb5036a139bcf90b4737d038d3}}, which serves as a simulation interface for interacting with the PartNet-Mobility dataset {{cite:cbbea7f43258d7065e1ab82d2c67a23f2e290ed6}}. The PartNet-Mobility dataset contains 46 categories of articulated objects; following UMPNet {{cite:5818e692388f63f01de4c1a52fa61c5c6be8136b}}, we consider a subset of PartNet-Mobility containing 21 classes, split into 11 training categories (499 training objects, 128 testing objects) and 10 entirely unseen object categories (238 unseen objects). Several objects in the original dataset contain invalid meshes, which we exclude from evaluation. We modify the ManiSkill simulation environment to accommodate these object categories. We train our models (ArtFlowNet and baselines) exclusively on the training instances of the training object categories, and evaluate by rolling out the corresponding policies for every object in the ManiSkill environment. Each object starts in the “closed” state (one end of its range of motion), and the goal is to actuate the joint to its “open” state (the other end of its range of motion). For experiments in simulation, we include in the observation {{formula:82bcaa43-e388-4cbf-a3e0-7361aa3ef9c1}} a binary part mask indicating which points belong to the child joint of interest. Results are shown in Tables REF and REF . (Categories from left to right: stapler, trash can, storage furniture, window, toilet, laptop, kettle, switch, fridge, folding chair, microwave, bucket, safe, phone, pot, box, table, dishwasher, oven, washing machine, and door. Clipart pictures are borrowed from the UMPNet paper with the authors' permission.)
| r | 87b46e30d52f52d7af3bfb36749c2cbe |
In our future work, we will apply the photoionization computation to the current hydrodynamical results in order to obtain the local emissivities of O6 {{formula:acd27031-e192-4d8a-90eb-0bab115da184}} 1032 and {{formula:a0bf4830-e7a1-4a38-b025-2ac812ffc4fe}} 1038 and the distribution of neutral hydrogen. The emissivities of O6 will be used as input information for the investigation of the line formation of Raman-scattered O6 features through the Monte Carlo technique adopted by {{cite:cf89fa713bfacba22ff6f34866764a2913dabbdd}}.
It remains to be seen whether an asymmetric density distribution in the accretion flow is responsible for the multiple-peak profiles often displayed by Raman-scattered O6 features at around 6825 Å and 7082 Å.
| d | b99b93ddac7512ef78394a78ac4b1549 |
{{formula:86a9d047-2ae6-4022-84a8-3d219bc3714c}} is triangulated by {{formula:c128a87c-0417-4825-9d8d-8962ecd62cd4}} that consists of simplices {{formula:3b2c164f-9eeb-434a-85e6-322218aee161}} with {{formula:37935db2-2b10-4a4c-85b8-a9c11ddddcc4}} its diameter and {{formula:730e9ce9-ca7c-4665-a88e-fceb134efabf}} . We assume that {{formula:4abeeaa3-952f-4d7b-81da-d0001b9612e4}} is shape-regular in the sense of Ciarlet-Raviart {{cite:4c45ac7256eb2b0ccff4e27ef137d0d1e5a8be85}}: there exists a chunkiness parameter {{formula:0f284d35-f3c7-4806-8c1a-7e70d7be335e}} such that {{formula:aed49876-426a-4396-8772-efcdbbb82c11}} ,
where {{formula:43b92653-ccec-4a94-8126-5794820542f4}} is the diameter of the largest ball inscribed into {{formula:75db7506-f1ef-4962-b423-bb4d872978d9}} . We also assume that {{formula:dc88fc21-8c6f-4851-bf80-7988a9a966de}} satisfies the inverse assumption: there exists {{formula:f5109590-b3fe-49f3-9672-b01739077114}} such that {{formula:c3127691-4e2e-47b0-a345-d53775daccb7}} .
| m | c653a5bebc5fd6ea65147fc7d99ee1b4 |
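The shape-regularity and inverse conditions referenced above have their formulas elided in this excerpt; consistent with the surrounding prose, their standard form (reproduced here as an assumption) reads

```latex
\max_{T \in \mathcal{T}_h} \frac{h_T}{r_T} \le \rho
\quad \text{(shape regularity)},
\qquad
\frac{\max_{T \in \mathcal{T}_h} h_T}{\min_{T \in \mathcal{T}_h} h_T} \le \nu
\quad \text{(inverse assumption)},
```

where, as in the text, the diameter of each simplex and the diameter of its largest inscribed ball bound each other up to the chunkiness parameter, and the inverse assumption forbids strongly graded meshes.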
Our Architecture.
We consider a 3-layer neural network {{formula:43a86815-3144-489d-9479-cd914512261a}} , where the input {{formula:8abc7ed3-eaf9-45d8-842d-174b997fe30e}} passes through a wide randomly initialized fully-connected non-trainable layer {{formula:54cce2e7-1275-4fff-9e5d-b706d92921dd}} followed by a ReLU activation {{formula:b1f277ca-3931-427d-af5b-81c25bfe5cb3}} . (Our results hold for more general activations. The required property of an activation is that its dual should have an `expressive' Taylor expansion; e.g., the step function or the exponential activation also satisfy this property. See {{cite:8f784859487ee618445664b7c478c4441a9d401d}}.) Then, there are two trainable fully connected layers {{formula:598ed59f-cf2a-46d2-913a-8f714c599d30}} with no non-linearity between them.
Each row of {{formula:4a08d3b0-f32d-4d92-88f2-39fc31235bdc}} is drawn i.i.d. from {{formula:277a71fd-d78a-4b77-b21e-88c8a8d65eb7}} . It follows from random matrix theory that {{formula:18332794-d5a0-4cd6-b294-dd90eee0eb88}} w.p. {{formula:fe47bd2f-e702-4905-b363-0403caf44532}} (Section ).
This choice of architecture is guided by recent results on the expressive power of over-parameterized random ReLU layers {{cite:8f784859487ee618445664b7c478c4441a9d401d}}, {{cite:cc2ace2a59fb56108ec4295ddd9710d90a9fb777}}, {{cite:9db20cc019efc6e3919806b6783e22b6aa517d5b}} coupled with the fact that the loss landscape of two layer linear neural networks enjoys nice properties {{cite:8bc3d58009ed4f4f8426385d865bb9b6f8393791}}, {{cite:a7766bc8d00ff9a0cc1ca8773675e3b1f6ba2df2}}.
| r | dd796446b0bf94e33d9272f24ee33140 |
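A minimal numpy sketch of this architecture (the dimensions, widths, and the 1/sqrt(m) row scaling are our illustrative assumptions, standing in for the distribution elided in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, h, k = 10, 512, 64, 1  # input dim, random-layer width, hidden, output

# First layer: wide, randomly initialized, fully connected, *frozen*
# (rows i.i.d. Gaussian; the scaling here is an illustrative choice).
W0 = rng.standard_normal((m, d)) / np.sqrt(m)

def phi(x):
    """Frozen random features: ReLU(W0 x)."""
    return np.maximum(W0 @ x, 0.0)

# Two trainable fully connected layers with *no* non-linearity between
# them, so together they act as one factored linear map on the features.
W1 = 0.01 * rng.standard_normal((h, m))
W2 = 0.01 * rng.standard_normal((k, h))

def f(x):
    return W2 @ (W1 @ phi(x))

y = f(rng.standard_normal(d))
```

Only `W1` and `W2` would receive gradient updates during training; `W0` stays fixed at its random initialization.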
Interestingly, a recent paper {{cite:ae3eb05346074499dcf4427efefef0963aa4b4a0}} that explores the discriminative power of graph neural networks showed that GNNs are at most as powerful as the WL test in distinguishing graph structures. By choosing the right aggregation function, the authors develop the graph isomorphism network (GIN), which achieves the same discriminative power as the WL test. However, in experiments, it is observed that GIN outperforms the WL kernel on social network datasets. One explanation the authors provide is that WL tests are one-hot encodings and thus cannot capture the “similarity” between subtrees (while they can capture whether the subtrees are “the same” or not). In contrast, a GNN satisfying certain criteria (see Theorem 3 of {{cite:ae3eb05346074499dcf4427efefef0963aa4b4a0}}) generalizes the WL test by learning to embed the subtrees into a feature space continuously. This enables GNNs not only to discriminate different structures, but also to map similar graph structures to similar embeddings and capture dependencies between graph structures.
| d | 8c382f77364833690dcd490cac3a1330 |
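The WL test that bounds GNN expressiveness is short to implement. The sketch below (our illustration, not the cited paper's code) runs 1-dimensional WL colour refinement and exhibits the classic failure case: a 6-cycle and two disjoint triangles receive identical colour multisets, while a 6-path is distinguished:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Leman colour refinement.  Colours are
    nested tuples, so no hashing is needed and the resulting multisets
    are directly comparable across graphs."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                  for v in adj}
    return Counter(colors.values())

def cycle(n, offset=0):
    """Adjacency list of an n-cycle on nodes offset..offset+n-1."""
    return {offset + i: [offset + (i - 1) % n, offset + (i + 1) % n]
            for i in range(n)}

c6 = cycle(6)
two_triangles = {**cycle(3), **cycle(3, offset=3)}
path6 = {0: [1], 5: [4], **{i: [i - 1, i + 1] for i in range(1, 5)}}
```

Every node in both the 6-cycle and the two triangles is locally indistinguishable (degree 2 with degree-2 neighbours at every depth), which is exactly the kind of structure a maximally powerful message-passing GNN also cannot separate.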
Since the proposal of the Weak Cosmic Censorship Conjecture, several counterexamples to this conjecture have been found, including collapse of dust {{cite:53d9f8973e2941d4f09d854379d5945f3fcc88ea}}, {{cite:a3b439faa5c03eee56c8b82cf1361adb2ec122f1}}, {{cite:83aed4ef4a000e8e3c62796e8aa9b24ea7d9dd53}}, perfect fluids {{cite:e346af9dc2d1359e110dc928218a9bfbb285fee1}}, {{cite:fc5ac690a4a59afde4d9e317c2a90926f90aec2d}}, {{cite:880f794b22664d5ece27d2438d393bdb5415d81d}} and scalar field {{cite:73f9792110b50bac93f190b5229120c1b36a19d1}}, {{cite:7b30a6e7f4dea821928a2840e22d9a6c4e2a5457}}, {{cite:3ae249defe62e2ec9371b6b3e0580e34d61151cd}}. The instability of naked singularities in scalar field collapse was analyzed in Refs. {{cite:f9dfa1fbb5bbda68f4aebc778d2bc16694c0c598}}, {{cite:5dd11cbfa68cf432e3399ced69c05e6ad7e1b165}}. Motivated by the question posed by Christodoulou, namely whether the smallest black hole forms in scalar field collapse would have finite or infinitesimal mass, Choptuik simulated gravitational collapse of a scalar field {{cite:b8ced218b87e3208d96c2f8e1abb2983f1da3715}}, {{cite:0564960786b50e753971085548cd4a5e4f00de1d}}. When the scalar field is weak, the field will implode and then disperse, while a black hole will form when the scalar field is strong enough. By fine-tuning the strength of the scalar field, a critical solution was obtained. The main results of critical collapse analysis include discrete self-similarity, universality and mass-scaling law. The global structure of critical collapse was investigated in Refs. {{cite:a3501e15526eafa8f72d09f95963183b67a23307}}, {{cite:3db4a4aa363cb7bf98a9720df01dc7b2d3b49d0c}}, {{cite:74345a545dde8dea27152ff1cacd3e8606e0d3f8}}, and a real analytic solution to critical collapse was proved to exist in Ref. {{cite:9e942b308e4a546e2926b7bed5a79279ec7db031}}. 
In critical collapse, the scalar field carries away its energy and an infinitesimal naked singularity is left behind {{cite:af54afff6d8f143e6db152dfab3b72d69f4062c0}}, which is a counterexample to the original censorship conjecture. Due to this discovery, the cosmic censorship was restated such that it only applies to collapse with generic initial data. Consequently, the naked singularity produced in critical collapse was suggested to become less worrisome. For reviews on critical collapse, see Refs. {{cite:5de6ad7d0fa30a0e3356b07193e2bb74f8c68a9d}}, {{cite:3c49fa1a5eb534278500734e2c27644175b4c336}}, {{cite:66f6d001df317329db36517125328f950ca1b7e7}}.
| i | 03c03089c3fce538ad6635c7eb9c8f4e |
The use of semantic vector-space representations of classes, i.e., class embeddings, is prominent in
zero-shot learning (ZSL). Most mainstream ZSL approaches learn
to estimate the
degree of relation between a given input (image) and a class embedding, so that
previously unseen classes can be recognized purely based on class embeddings, e.g., {{cite:4c3c1e1d642ceb69471201589685750aa1004d75}}, {{cite:0c061f98ab3072d7a797dbb06165b44a5943a89f}}, {{cite:ca0b938ad4ad76516b605a7a1c70bef6b18f55f2}}, {{cite:27c19fa40e73c0cb2f090e57049399caa9e87234}}, {{cite:8817f60cbd0cd501eb23f73ee433f6b30b7071cb}}, {{cite:f5633da35c482ee548d92c30102eb3595e308155}}, {{cite:8783d8e3892bed2881292bd61e4d382805e94acb}}, {{cite:fc46eeb1aeffef4eaf4144633250d0614f177028}}.
In ZSL, class embeddings can be interpreted as class summaries from which
valuable discriminative or generative knowledge can be extracted. In our work, instead of relying
purely on class embeddings for building classification models, we aim to benefit from them
in improving sample efficiency in FSL.
| i | 3116637e069f18a4af0fc379af411e00 |
We have made a comparative study of RHDE models in different dimensions. A few points are worth noting. The density parameter is plotted in figs. (REF ), (REF ), and (REF ) for four, five and ten dimensions respectively, taking three different delta values. It is seen that the density parameter exhibits a sharp transition towards a value close to unity as the dimension is increased. For positive {{formula:81b2c836-5528-4c78-b6ab-f955aea71d41}} , the variation in {{formula:8501ac91-ab10-4670-abb9-078053d16bcd}} with delta is seen from the figures, but in the present universe and in the near future the behaviour becomes almost independent of the model parameter {{formula:0951a805-5b01-4f1f-80c8-4be42bd51486}} . This trend is the same for all the dimensions considered. Another thing to note here is that as the dimension is increased, {{formula:72b85c3b-bedf-4755-aea5-ca14c786d8e5}} becomes almost independent of {{formula:e2a7bde4-ab5b-4800-b2ed-636022f1b0b0}} throughout the evolution of the universe. From the plot of the EoS parameter it is seen that dark energy begins in a quintessence era in the early universe and the value decreases, reaching a value of {{formula:13fae04f-91f7-4065-be11-61993691e4f8}} in the present universe. It remains constant at {{formula:1b3ecd1e-6206-4278-8a9e-99755b4fc689}} for negative {{formula:9966ba37-d5a0-4f47-be25-7f91927bf277}} , never crossing the phantom divide ({{formula:5466d14a-c4f7-405f-9948-860bb8ddf35d}} ). This behaviour is dimension independent. We note that for {{formula:1b56a4e3-0b05-43e2-9fac-020f0e5fd1e2}} , the universe transitioned from a dust-filled phase to a quintessence era. The universe transits from a decelerated phase to an accelerated phase as shown in figs. (REF ), (REF ), and (REF ). For {{formula:da07d0ad-b84e-4e03-bc0a-54c29092c490}} the universe transits from a decelerated to an accelerated phase; however, for higher delta the universe remains in the ever-accelerating phase.
As the dimension of the universe is increased, higher {{formula:6b6edc04-4885-45d9-8836-6c545d7e273b}} values correspond to an ever-accelerating universe, as is evident from figures (1)-(3). For different values of the model parameter {{formula:c9352d8d-759e-400f-a438-b24655603e67}} , the present value of {{formula:8bdd002d-24c2-40fe-97b6-0e7316a8cb3b}} is seen to vary from {{formula:a2be9b39-ff41-4d7a-873d-2fd988818221}} to {{formula:a594eeca-c489-45c6-9a6a-da434e29f817}} , which is very much acceptable in light of SN Ia data. Fig. (REF ) depicts a plot of the distance modulus ({{formula:ecf30e2d-2915-4b18-a956-d904819d8d79}} ) against redshift for different values of the model parameter {{formula:f20b17c3-dede-4882-9b7f-eb579a934300}} , along with Union Compilation data {{cite:145eb1dfb763d8066e3043aa4e3a6757e371f0d6}}. Clearly, some values of {{formula:48bae361-60da-4177-a34d-a746da756c6a}} (e.g. {{formula:655d4252-071a-4b7f-9f88-8c045941b558}} ) are observationally favoured. A full-scale analysis of observational constraints, however, is beyond the scope of the present work. The focus of this study has been the late-time evolution of the universe only and the effect of higher dimension on this phase. However, it would be interesting to investigate some unified scenarios based on HDE {{cite:19ccb6a54af6891d1775b6570d231f148a185a2e}}, {{cite:d75a50fcd73069033d04d4149d2158ad9a627b95}} which also include an early inflationary phase.
| d | 576c1882ec8b5c563bec21845bc87e5a |
In our paper, we will build on a formalization of shacl {{cite:b9fa36ae661abfb88a870350f2d824119476eb80}}, which has revealed a striking similarity between
shapes and concept descriptions, as known from description logics
{{cite:92a7e5b29c9f71f9c885ddaeca3583e98285eb40}};
we recently deepened this connection further {{cite:971273bb94004c140927a7f03abbe3d924659c81}}.
Following this line of research, we formalize a shacl schema as a tuple {{formula:f33c1747-634a-4d9b-b777-b8755b679d0a}} , where {{formula:200c21f5-06bd-4ec2-8a3b-ca6b63b4e618}} is a set of rules of
the form
{{formula:42f85e1a-e199-4a39-a252-dc2e108e037b}}
and {{formula:e0bf4bec-c4e7-4984-a02c-eb091969f6f9}} is a set of concept inclusions of the form
{{formula:b2a33aaa-aa25-470d-9455-79494b3fbb7d}}
with {{formula:df65b74c-32f2-4630-85ec-78da90915ccd}} a shape (i.e., a concept description) and {{formula:474a01f7-78ba-4a25-b2f8-68ce4ebaaaec}} a shape name.
{{formula:397e1609-c22c-47db-8971-bcbcb3d88f37}} defines the shapes in terms of relations in the RDF graph and {{formula:20f4c512-406e-43af-aaa5-dd3e07a82155}} contains the so-called targeting constraints.
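A name-for-name rendering of this definition (a hypothetical Python sketch, not an implementation of the cited formalization; shape expressions are plain strings standing in for concept descriptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:            # element of D: a shape name defined by a shape expression
    shape_name: str
    shape: str

@dataclass(frozen=True)
class Inclusion:       # element of T: a concept inclusion between shapes
    lhs: str
    rhs: str

@dataclass(frozen=True)
class ShaclSchema:     # the tuple (D, T) from the text
    D: frozenset       # shape definitions over the RDF graph
    T: frozenset       # targeting constraints

# Hypothetical example schema: teachers are things that teach some course,
# and anything that teaches something is targeted by TeacherShape.
schema = ShaclSchema(
    D=frozenset({Rule("TeacherShape", "∃teaches.Course")}),
    T=frozenset({Inclusion("∃teaches.⊤", "TeacherShape")}),
)
```

The frozen dataclasses make rules and inclusions hashable, so D and T behave as the sets the definition calls for.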
| i | bd4b332ec022b114c0204384970afd3b |
We have performed all the calculations using the Vienna {{formula:cc9e16c0-9bb7-4560-b04a-266a4f4dedd5}} Simulation Package (VASP) {{cite:5f541a423227d3c65ef9c7dc0eee033cba2f09f6}} and projector augmented-wave (PAW) {{cite:a07fc0390c65a16fefe627fa9b43f1e1bcdb4932}} pseudopotentials within the framework of DFT.{{cite:d4a350bf051c0441f627b01fcee1b2a5fbf86100}}{{cite:75a075e7e53c7e76e921692702b06a817a8c3804}} We have optimized the crystal structures of all conformers using the Perdew–Burke–Ernzerhof (PBE) {{cite:1ec74a732864daa6bb68099f3c9924867ed01be2}} exchange-correlation ({{formula:17be7c22-0d6e-48e4-960a-20ec4168a8e0}} ) functional with a {{formula:aaa54d28-7c18-4409-90c0-5d2f5a2eae9b}} -centered 2{{formula:d92df39b-83d3-4ad8-be46-54173965c648}} 2{{formula:af2decc3-fb6c-4e6e-8363-eb7e342f99f7}} 2 {{formula:e56fcb81-1144-4fef-876c-6511257fb745}} -point mesh, and set the convergence criteria for the total energy and forces (for optimization of atomic positions and lattice vectors) to 10{{formula:f91cc146-efec-4975-b19c-5369bb31a6a9}} eV and 10{{formula:69e6c1c0-6009-4d2c-98fb-ba52d727b9fc}} eV/Å, respectively. The energy cutoff was set to 600 eV. Later on, from a convergence test, we found that a {{formula:895d4353-27c8-4a3f-b45b-4d3b1d926ebc}} -centered 3{{formula:1f118f7d-62e9-475e-a75b-d9f7ccaa20c8}} 3{{formula:95533cd1-55c5-4080-a099-e058c5301444}} 3 {{formula:4cf4be72-7b2b-4a60-b3ef-16c7408f168b}} -point mesh is sufficient for sampling the Brillouin zone (BZ), and so the 3{{formula:eaff6933-827a-426f-a0e4-f0f4fb5119ee}} 3{{formula:19cde4f2-87f4-4c76-a2a4-a13dc75002e4}} 3 {{formula:65ecf4e8-8501-401b-8236-6196dc6e911e}} -point mesh has been used in our further calculations. We have used the advanced hybrid {{formula:d89d7f7f-5b8f-4f60-ae4d-071f88a6d5c8}} functional Heyd–Scuseria–Ernzerhof (HSE06) {{cite:5d36a0a773208b7645cfa0e7376700a30e6bf596}} to obtain more accurate results, because the PBE functional commonly underestimates the bandgap of materials.
The spin–orbit coupling (SOC) effect has been duly included in all the calculations.
| m | a859c5160db88779ea313f39f75caf7c |
Despite substantial performance improvements introduced by DNNs, they still have significant shortcomings due to their opacity and, especially, their inability to represent confidence in their predictions. These downsides hinder the deployment of DL methods in safety-critical systems, where uncertainty estimates are highly relevant {{cite:9634fd02f208779a04807f0677987ffcc94a2799}}, {{cite:60278e1f6a2f3778cc623eb8533fb4bc3bd6971e}}. To overcome these limitations, the authors in {{cite:890d256f6d5c4a4d7983dd54f69d815ec7ec7658}} propose the use of Bayesian Deep Learning (BDL) for implementing components used either in modular, E2E, or hybrid fashions. Bayesian methods in DL offer a principled framework to model data (aleatoric) and model (epistemic) uncertainties to represent the confidence in the outputs.
| i | 2c4945c1b701553984ffde450726bc58 |
Recently, deep generative models such as StyleGAN {{cite:0344773e23c7bb2a1a705f2e045e2dc68cdc2b2c}}, {{cite:502d4faa5766542a4d228d53a063ca419d5d164c}} and diffusion models {{cite:13abbed6280b86443923604941700ae4e13a6b6e}}, {{cite:30a1144ea135e7e6271dee095a295e44759846a3}}, {{cite:2306016e8fedee5fbc5daf82e5d9d3da5e63d6f7}} have made a significant breakthrough in generating high-quality images. Image generation and editing technologies enabled by these models have become highly appealing to artists and designers by helping their creative workflows.
To make image generation more controllable, researchers have put a lot of effort into conditional image synthesis and introduced models using various types and levels of semantic input such as object categories, text prompts, and segmentation maps {{cite:36f43c8de079fe2f693b159a4578af98c2b64d69}}, {{cite:6cdb85950966129711932ea438e05fe6da681978}}, {{cite:61bb6fa0f69b94de7a6e746ace6eb14945d09e21}}, {{cite:f1f51043f274caddb872ac03ad92383eef60a5b8}}, {{cite:2ddcb4be8292993d123c7954c7613e97b2927e81}}, {{cite:5bfca4629314f9783a2c9aecc0c5d7f150458d7e}}.
| i | 354febfaa28e21c0953439daa059a150 |
where {{formula:f2617d8f-4d3a-4987-aba1-a2cfe8450fae}} , {{formula:2350a820-d359-4cf3-bb58-68a6e56f454b}} are positive constants and {{formula:0c50f63b-9876-4c13-b9bb-0b2277229fc5}} is a smooth strictly increasing function. This system was introduced in {{cite:7e995126779f6d02285d9a7a9820438fbc2c737a}} to model chemotaxis, a situation where organisms move towards areas of high concentration of the chemicals they secrete. In this model {{formula:43f1556d-9103-44d3-bc43-eea8f95c5935}} represents the concentration of the considered organisms and {{formula:404d3d56-276a-431d-add6-c8b3f5a581fe}} that of the released chemical. A huge literature exists on the dynamics of this system, concerning several fundamental questions such as the Cauchy theory, blow-up in finite or infinite time, and stability, to cite a few of them. Since it is not our purpose to analyse the time-dependent solutions, we just refer to some recent papers {{cite:cea940324cc723c046e4e5620d41dc885af7a193}}, {{cite:806a966fa05ec58ef875359a5908c6226c2e0bc0}}, {{cite:26cffa08b741dd7af8b650401ab5c603394c95ed}}, {{cite:bd26a97fab23c1ea55e681c71e2b0de9784adac9}} and the many citations therein to trace the most important results in this direction. A large variety of highly nontrivial equilibria have been detected since the seminal work {{cite:44223e13acdb9e77423915fce00ca5ce257f2470}}, see for instance
{{cite:0498a6732b712c81b56316563c327f257ad9ef9b}}, {{cite:99892c7ca962dd6635eef741c1540279721a687d}}, {{cite:4f4a524814baa282ba4a49ace1bfa8a144b7b057}}, {{cite:de8747b80bce29accb760c59f18a29a418df746e}}, {{cite:6a6f3b9d6dc180c2e04c71fc2adbb713dfe8e9ad}}, {{cite:410523bc26cf2b7b50fe55ff988a9641b123a62d}}, {{cite:eee17441ae7e8d755d6e9ed137799064b17cddb8}}, {{cite:b66cbbcc5762b489e1410fd1a37e7b8f68a81e63}}, {{cite:4acb299a92a4dc367e7fa1f9f9ab63a50697912c}}, {{cite:1c80d8718f017cd3d843d28aad855986f5ffaa45}}, {{cite:eeeb7abd4219c97283492665d8e47d30d3763cc0}}, {{cite:e05c8274a739d4f3def18e4cbc4a9343d2f17ec5}}, {{cite:e6bbd80e7f77bc11a1efd2d02417f85161e884e9}}. This list of references is far from being exhaustive.
One can prove that stationary positive solutions satisfy
{{formula:316a0126-69ff-4a66-bbea-505129d63de8}}
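For orientation (the displayed formulas are not reproduced in this extract, so the following form is our assumption), one common form of the Keller–Segel chemotaxis system matching the description above reads:

```latex
\begin{aligned}
\partial_t u &= \Delta u - \nabla\cdot\big(u\,\chi(v)\,\nabla v\big), \\
\partial_t v &= \Delta v - \beta v + \alpha u,
\end{aligned}
```

with $u$ the density of organisms, $v$ the concentration of the released chemical, $\alpha,\beta>0$ the positive constants, and $\chi$ the smooth strictly increasing sensitivity mentioned above.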
| r | 57b118429f552ba355e94727c1ccd69d |
Traditionally, images from compressive cameras are recovered by solving a convex optimization problem, minimizing both a least-squares loss based on the physics of the imaging system and a hand-chosen prior term, which enforces sparsity in some domain, Fig. REF (a). For successful image recovery by compressive sensing, there must be both incoherence in the sensing basis and sufficient sparsity in the sample domain. Lensless mask-based cameras utilize multiplexing optics to fulfill the incoherent-sampling requirement, mapping each pixel in the scene to many sensor pixels in a pseudo-random way. The prior term enforces sparsity and ensures successful signal recovery. Over the years, a number of different hand-picked priors have been used for compressive imaging, such as sparsity in wavelets, total variation (TV), and learned dictionaries. TV, which is based on gradient sparsity, has been particularly popular {{cite:5e4ae1089369fd3c3f78c7368bd7a4b28a5a81f3}}. These methods are effective at solving imaging inverse problems; however, they have recently been outperformed by deep learning-based methods, which incorporate non-linearity and network structures that may be better suited to image representation {{cite:10bf3a1c05bf7d29dfad6ff11d76cfaa3939c799}}. Recent work on plug-and-play (PnP) priors {{cite:acf9e9c42dc6aee17940321583da09e0a30040af}}, {{cite:b8ba39250b240d5d2675b2f0bd8860acfbd07784}}, {{cite:304a37084a54e22c7a6b1a109e4c7763475c8138}}, {{cite:5294cfc5e050016a34426dc6cb39d50c61494f03}} offers some hope of combining inverse problems with state-of-the-art denoisers (both deep and not, e.g. BM3D {{cite:c9c02285a82d2f4ab906be5a25c0891ffb3eac9c}}); however, PnP still relies on hand-picked denoisers or pre-trained networks that may not be well-suited for a given application.
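The classical recovery pipeline described above can be sketched with a toy proximal-gradient (ISTA) solver; here an l1 prior stands in for the wavelet/TV/dictionary priors named in the text, and the pseudo-random Gaussian matrix stands in for the multiplexing optics (this is an illustrative sketch, not the paper's solver):

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - b)) / L         # gradient step on the least-squares loss
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold: prox of the l1 prior
    return x

# Pseudo-random (incoherent) sensing matrix and a sparse ground-truth signal.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.02)
print(np.linalg.norm(x_hat - x_true))  # small: the sparse signal is recovered from 60 measurements
```

The example recovers a 3-sparse, 200-dimensional signal from only 60 measurements, which is exactly the incoherence-plus-sparsity regime the text describes.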
{{figure:1f6243f1-68ae-4b26-9e16-f279a08f64f5}} | i | 8277d5635c6d1f929710051c97f3309e |
The compensation of costs and benefits is certainly a natural interaction strategy through which rational agents determine their connections {{cite:fa9d612f463923fc793752c12411237c772d90b7}}, {{cite:eaec82df880ebdcf7e05ea700042ed976376b22a}}, {{cite:9c48e25674dd36bec54934f28c7d8376fd581e1f}}, {{cite:52a947889b0a84fad588488221857f78a45abee7}}, {{cite:9bc166a7940f3affe1d3f4635fcf4b195cef0195}}, {{cite:d73b36e09e21e2de9f4a22c315bc7781f1482be4}}, and therefore our study contributes to the understanding of why the six degrees of separation is such a ubiquitous property across vastly different social networks.
It is, moreover, reasonable to assume that a similar evolutionary principle may also apply to the design of man-made or technological networks {{cite:05f80ef714d4706235a7d8f7907177a4c607baa1}}: take, for instance, air or sea transportation networks {{cite:5e19a844dd9600c36321d341e4fcfc8cce8c0366}}, {{cite:c832916d79e6a45faa38944c9a37c2cbf559d3ed}}, {{cite:0e6fa7095b96bdafc15b7f2903417f5c8ca90fe3}}, in which airports/ports may increase their volume of trade and/or tourism by "being in between" the main routes of interchange of passengers and goods, and in doing so they are willing to incur the associated costs of maintaining (or even enlarging) their number of local connections.
| d | 813424626312e506b901a98c6f972454 |
Our formulation of Theorem REF
owes a lot to the pioneering works of
{{cite:8f559882cb7d40d8c8ab39115b3a0fabf4af93ad}} and {{cite:4a6550294a6bc97fe67b25587261a4b667b638f3}} (PN). Ours is merely a refinement of the results
published by these authors together with a clarification of the underlying mathematics.
The interesting outcome is that the solution of the functional-optimization problem in (REF ) can be implemented by a 2-layer ReLU network.
| d | 432340911e415184340cca52e793704e |
For a thorough treatment of the algebra of quaternions {{formula:821d5145-b34c-4f38-8083-22672fca378c}} , the reader is referred, for instance, to {{cite:58c1ab856d1b7dfdd138896af4f8eac716d6b19a}}.
| r | cfee8b25083cf8615b5fbb5518de1481 |
The KM3NeT/ARCA detector is currently under construction at the bottom of the Mediterranean Sea near Portopalo di Capo Passero on Sicily, Italy {{cite:958968f039e5e7ea7a35514ecce38086c031aeb1}}. The detector consists of a 3-D grid of optical modules that each contain 31 photomultiplier tubes (PMTs). The timing resolution offers opportunities for the identification and reconstruction of tau neutrino interactions. Tau neutrinos can interact through the charged-current (CC) weak interaction, producing a tau lepton and a particle cascade from the shattered nucleon. The tau has a mean lifetime of {{formula:b3f21222-7bbb-4f3d-b40c-45902cade526}} seconds and it decays into hadrons and leptons {{cite:6b86ecad3964e8382815bd13c9c81e8f08376d48}}. The branching ratio to an electron or hadrons is 0.8261, and this results in a double-cascade signature. The cascades are separated by an average {{formula:742f2ca5-e481-4c8b-95ab-53e2f590bea6}} due to time dilation. In this work, we present a new reconstruction algorithm for double cascades. We find improved angular reconstruction performance with respect to single cascades due to the lever-arm effect: the early and late parts of the event are strongly constrained in position thanks to the arrival times of the light they produce. This places the start and end of the event along the direction of the tau lepton, resulting in a better angular resolution.
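The quoted cascade separation can be checked with a back-of-the-envelope Lorentz-boost estimate (an illustrative sketch with PDG constants hard-coded, not KM3NeT reconstruction code): the mean tau flight length grows linearly with energy, L = γβcτ ≈ (E/mτ)·cτ.

```python
# PDG values for the tau lepton (assumed here, not taken from the text):
TAU_LIFETIME_S = 2.903e-13   # mean tau lifetime, seconds
TAU_MASS_GEV = 1.77686       # tau mass, GeV
C_M_PER_S = 2.998e8          # speed of light, m/s

def tau_decay_length_m(energy_gev):
    """Mean tau flight length in metres, in the ultra-relativistic limit (beta ~ 1)."""
    gamma = energy_gev / TAU_MASS_GEV
    return gamma * C_M_PER_S * TAU_LIFETIME_S

for e in (1e5, 1e6, 1e7):  # 100 TeV, 1 PeV, 10 PeV
    print(f"E = {e:.0e} GeV -> L ~ {tau_decay_length_m(e):.1f} m")
```

At PeV energies the two cascades are separated by tens of metres, which is what makes the lever-arm argument for the improved angular resolution work.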
| i | e532945d5ac4ff2779bfa95acc5827af |
As the speech synthesizer, we used Tacotron 2 {{cite:5ba5204a4845204295ea495907205677f62a850e}}. Tacotron 2 is a combination of the Tacotron-style model {{cite:5d42af7b4d2094fd9652dfbd2706f012cc0e518e}} that generates mel-spectrograms from a sequence of characters and the modified WaveNet {{cite:79f016f43c03b11eabf6dae68bf4f31803443891}}, {{cite:e05cba88e416ddc5a833a9a4c64ac2a56f4c8d5f}} that conditions on mel-spectrograms to generate the waveform. Since this approach can directly learn the correspondence between characters and waveforms, it can generate speech of such high quality that it is difficult to distinguish from a real human voice.
| m | 63ec25a9b7de674993e077c44016c58a |
RMs of polarised background sources are often used for analysis of magnetic fields in nearby objects such as the Milky Way {{cite:884ad8aaa98c8811cd9eac420e2f983b05bcb5f5}}, {{cite:5b51773a6f0fc5d055f094924c333e29c8b41514}} or the Magellanic Clouds {{cite:0d06278eb97787a4f3cdde13e5aa6705385b8101}}, {{cite:f61b2ff86f3c829d61dc534849939a90d2426383}}. The resolution of such studies and therefore the number and types of objects we are able to investigate using this RM gridding technique are strictly coupled to the density of polarised background sources. While the previous studies could only use a maximum of three polarised sources per square degree due to their limited sensitivity, we detect a polarised source density of 21 sources/deg{{formula:a0a5fae3-e2ae-4dc0-b5ff-c249ae3547a6}} . This source density is very similar to literature values found at the same frequency and with similar resolution. However, the exact value is always dependent on the primary beam response level up to which the data was imaged. To mitigate this effect, cumulative source counts should be used. Preliminary results show that the Apertif measurements (Berger et al. in prep.) are confirming the results shown in {{cite:e237025ba9ad9e7b44fe9f8e808e2bf392f1f01f}}. The usability of a polarised source for an RM grid is also dependent on whether it is resolved or compact. {{cite:e2d3e64e467a1c65251099fbafc29e3d1660c611}} assumed that only 50 % of the detected polarised sources can be used. From our comparison of the standard deviations of the RM values of extended and compact sources, we can assume that, at least for mildly resolved sources, the RM value at their centres is a reliable probe of the intervening magnetic field and is not significantly influenced by their internal RM. Of our sources, 67 % show a compact morphology, and therefore the usable fraction of sources is likely to be larger than the value given by {{cite:e2d3e64e467a1c65251099fbafc29e3d1660c611}}.
| d | 7792be5bed444b59a31d54c5371269bf |
(iii) HL gravity also treats space and time in a different way.
For the case of isolated black holes, the metric is time
independent and hence the spacetime is invariant under
infinitesimal diffeomorphism transformations. As a result,
according to the argument of {{cite:76150a8719e769085b4e452fec053bd385eb5c9e}}, the field equations of
the theory can be expressed as a thermodynamic identity, {{formula:ef84cc75-2fcd-4348-9960-106112fcd151}} , around the event horizon, as was shown in {{cite:b6e3b4e13d565b902e1d6e6c7c2c5a099a6ed0fa}}.
| d | ba050669be6f5525278077527ab42a2a |
The experimental results of nuScenes are summarized in Table REF . Our SPCM-Net achieves a competitive result compared to other baselines in all metrics. One interesting observation is that both SPCM-Net and PointNet++ {{cite:c535f92ea22634f8b3576ba8b24706b06669af71}} + LSTM outperform PointRNN {{cite:1493008f04356cbb31b9b3ea40ac2af7f9764fc4}}
significantly. We suspect that static points dominate the nuScenes dataset, such that most points carry only ego-motion; the dataset therefore favours models with a better capability of extracting global information.
| r | 374f1604490e90d61177f1ffa0fc30e1 |
Nonetheless, it is worth recalling here the inherent limitations of standard DFT approaches in estimating accurate band
gaps in semiconductors due to the ubiquitous electronic self-interaction errors and other fundamental problems {{cite:b419dd2c76b161c59fe7a095498cb69de0802be2}}.
In particular, it is well known that common exchange-correlation functionals like LDA and GGA, which are computationally
very affordable, tend to significantly underestimate {{formula:25d95b89-8182-47cc-ab73-70309908b3d4}} {{cite:d7c6db7bd0b96312d318f761eb8184328e459124}}, and unfortunately (although also understandably)
most of the results reported in the MP database are obtained with such standard DFT functionals. Thus, it is very likely
that some of the candidate piezo-photocatalysts identified by our high-throughput search actually do not fulfill the band
gap condition {{formula:35299234-20bf-4e65-b026-379c9ce8496f}} eV. Similarly, it is also quite likely that some potentially good piezo-photocatalysts
have escaped our computational sieve due to the systematic DFT underestimation of their band gap (e.g., hexagonal ZnO {{cite:eed4dceab356cc311dcb93520a9675364aa310a2}},
see the Discussion section below). Furthermore, the reliability of the simple PPC model introduced in this work, upon which
the present computational search has been built, needs to be checked. Consequently, we undertook a careful assessment
of the MP data and PPC model employed in our computational search by performing supplementary first-principles calculations.
{{table:e60dacb6-19d8-4d8b-a03d-171f343e8063}}{{figure:98ea76cb-82a2-4b17-b3dc-76caeeab7ee8}}{{table:fcf30d10-9065-440d-8f8f-9e74028dd9e6}} | r | a6d71b8038af3712f5a3264da6d46c23 |
where {{formula:07c0d0b3-5c02-4f9c-90a8-4e06b485d0ce}} denotes concatenation.
For instance, to approximate the RBF kernel, {{formula:bb61504c-95c2-46a1-94f1-36d6060cd97e}} , we would draw the {{formula:27dc5c0f-fdd5-4af3-9912-4f8cbbf98118}} -dimensional frequencies {{formula:103b7d75-d831-4b95-85db-8e41103e814b}} {{cite:06436f639b8e6fcc50c314adecbc42e457b7260b}}.
It is simple to show that when using the approximation {{formula:422c30ad-ab8c-47a9-9ba1-ff77529899f9}} , one may compute Algorithm with {{formula:fb7248fc-db8e-4983-8ce4-55601405bebc}} iterates by avoiding pairwise kernel computations {{cite:fd40c3f3086c2b2defebedbd0578524a4499454d}}; we detail this below.
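As a concrete sketch of this construction (a minimal NumPy illustration of random Fourier features, not the paper's implementation), the frequencies are drawn from the Gaussian spectral density of the RBF kernel and the cosine and sine components are concatenated, so that inner products of features approximate kernel evaluations:

```python
import numpy as np

def rff_features(X, n_features, sigma=1.0, rng=None):
    """Random Fourier features for the RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    Returns Phi with Phi @ Phi.T approximating the kernel matrix."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Frequencies drawn from N(0, sigma^{-2} I), the kernel's spectral density.
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    Z = X @ W
    # Concatenate cosine and sine components, as in the text.
    return np.concatenate([np.cos(Z), np.sin(Z)], axis=1) / np.sqrt(n_features)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Phi = rff_features(X, n_features=2000, sigma=1.0, rng=1)
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
K_approx = Phi @ Phi.T
print(np.abs(K_exact - K_approx).max())  # small, and shrinks as n_features grows
```

Replacing the kernel matrix by these finite-dimensional features is what removes the pairwise kernel computations from the iterates.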
| m | 307617764ad63b617809f6d0bc7181a0 |
We have used the common normalization techniques used in NLP tasks, such as batch normalization (BN) {{cite:a537c710201768c2485fd566763bdb85f04f7dfc}}, layer normalization (LN) {{cite:1d07fcf9157f85eefa1e1a08834bcc68e36c4a98}} and group normalization (GN) {{cite:cd9773e03e0372907818fc427be6a047a3346b98}}. Such techniques enable faster convergence and generalize well on NLP datasets. Apart from that, other common regularization techniques have also been used for comparison, namely Dropout {{cite:1f84af6701bcc10b155f19b5c2e1890b9fed7948}} and Weight Decay {{cite:989f61358465c0c811e26114820ae049d8b2b42f}}. While experimenting, we replace the KL-regularized normalization with the comparison methods (Figure REF ). We have also performed the experiments with the BERT{{formula:5cf84401-6486-4e53-8a3d-be502a1b1846}} model given in Figure REF without KL-regularized normalization.
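For reference, one of the baselines above, layer normalization, can be sketched as follows (a generic NumPy illustration of the standard operation, not the paper's code): each token's feature vector is normalized by its own mean and variance, then rescaled by learnable parameters.

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Layer normalization over the last (feature) axis."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

x = np.random.default_rng(0).normal(3.0, 2.0, size=(4, 8))  # 4 tokens, 8 features
y = layer_norm(x)
print(y.mean(-1))  # ~0 per token
print(y.std(-1))   # ~1 per token
```

Unlike batch normalization, the statistics here are computed per example, which is why LN is the usual choice for variable-length NLP batches.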
{{table:de55f3c0-69f8-47fb-a606-98c0e2b4af96}} | m | 9092bc822de066587a9d65d5a4d1d2a7 |
This work presents an extended contribution to RocketQA {{cite:57ba3c7cbc3645e2021902b1239583f713f34d2d}}, called RocketQAv2.
As seen from above, RocketQAv2 reuses the network architecture and the important training tricks of RocketQA.
A significant improvement is that RocketQAv2 incorporates a joint training approach for both the retriever and the re-ranker via dynamic listwise distillation.
For dynamic listwise distillation, RocketQAv2 designs a unified listwise training approach, and utilizes soft relevance labels for mutual improvement.
Such a distillation mechanism simplifies the training process, and also makes it possible to train the entire dense retrieval architecture end-to-end.
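A minimal sketch of what such a listwise distillation objective can look like (our hedged reading in plain NumPy, not the released RocketQAv2 code): both modules score the same candidate list for a query, the scores are normalized into listwise distributions, and the KL divergence from the re-ranker's softer, more reliable distribution to the retriever's is minimized.

```python
import numpy as np

def softmax(scores):
    z = np.asarray(scores, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def listwise_kl(retriever_scores, reranker_scores):
    """KL(p_reranker || q_retriever) over one query's candidate list.
    The re-ranker's distribution acts as the soft relevance label."""
    p = softmax(reranker_scores)   # teacher
    q = softmax(retriever_scores)  # student
    return float(np.sum(p * (np.log(p) - np.log(q))))

# The loss vanishes when both modules induce the same listwise distribution,
# and grows when the retriever's ranking disagrees with the re-ranker's.
print(listwise_kl([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # -> 0.0
print(listwise_kl([0.1, 1.0, 2.0], [2.0, 1.0, 0.1]))  # > 0
```

Because the teacher distribution is recomputed from the re-ranker's live scores at each step rather than fixed in advance, the distillation is "dynamic", and gradients can flow through both modules jointly.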
{{table:d040dfa2-4f9f-46c5-9e30-5730e462b533}} | d | e3b349fe68a825c76c529a9648fc5a90 |
For the three datasets we have in common with {{cite:111423a1b944a4b601d74901252cd5e5648eab66}} it is possible to draw performance comparisons with that work, though retaining our concerns about the use of AUROC for highly imbalanced data, which we found did not distinguish well between the methods. The two best-performing methods of {{cite:111423a1b944a4b601d74901252cd5e5648eab66}}, in relation to G-Mean and AUROC, were the HD-Ensemble method of {{cite:111423a1b944a4b601d74901252cd5e5648eab66}} and RUSBoost {{cite:586919b5c589270dfc7b3fde8ff712af1cdd12d9}}. For skin-588, GMN, BCE-ASTra, and GMN-ASTra all had statistically significantly indistinguishable performance from RUSBoost, with respect to both metrics. For cod-763, the two ASTra variants tied with RUSBoost on AUROC, but beat RUSBoost on G-Mean, while on ijcnn-971 they achieved a better G-Mean than state-of-the-art HD-Ensemble. Given especially that we used small, single-layer neural networks, with that used for skin-588 having only three neurons, we consider these results encouraging.
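For reference, the G-Mean used in these comparisons is the geometric mean of sensitivity and specificity; a short sketch (an illustration of the standard metric, not the paper's evaluation code) shows why it is preferred over accuracy or AUROC for highly imbalanced data:

```python
import numpy as np

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity (TPR) and specificity (TNR)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tpr = (y_pred[y_true == 1] == 1).mean()
    tnr = (y_pred[y_true == 0] == 0).mean()
    return float(np.sqrt(tpr * tnr))

# A trivial majority-class classifier reaches 95% accuracy here but a
# G-Mean of exactly zero, since it never finds a minority-class example.
y_true = [1] * 5 + [0] * 95
print(g_mean(y_true, [0] * 100))  # -> 0.0
print(g_mean(y_true, y_true))     # -> 1.0
```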
| r | 6772f5f63a5c6f3ca2b2ac56c1a80b95 |
Image-text Datasets: We report the comparison results for both the Pascal Sentence and Wikipedia datasets against prior approaches, in Tab. REF and Tab. REF respectively. We report the mAP scores for prior methods as provided by the authors in {{cite:6c87890dcd7a2b343f6482ee393d2b279b648f41}}. Since there is no fixed split provided for the dataset, we perform the experiment with 10 random train/test splits, and report the mean and standard deviation. We did the same for the state-of-the-art DSCMR {{cite:6c87890dcd7a2b343f6482ee393d2b279b648f41}} method, using the same random splits.
{{table:08feb426-57e1-420d-a668-97b4dfbef0ec}}{{figure:f8a16a8c-c591-461b-92bb-46b27e26ae9f}} | m | 41dce83a070d229b2bdbf597a2ebed3c |
We observed that our basic prototype-based method, under the training-free setting, does not gain from more given examples. We hypothesize that this is because tokens belonging to the same entity type are not necessarily close to each other, and are often separated in the representation space.
Though it is hard to find one single centroid for all tokens in the same type, we assume that there exist local clusters of tokens belonging to the same type.
To resolve this issue, we follow {{cite:3d27e0551f3c98c834bda57ac7569b461dc93504}} and extend our method to a version called Multi-Prototype, by creating {{formula:36eaff00-2572-4228-9ebb-aca1c10ed284}} prototypes for each type given {{formula:bb02b9ad-3778-4d26-993c-072c2ef47a2c}} examples per type (e.g., 2 prototypes per class are used for the 10-shot setting).
The prediction score for a testing token belonging to a type is computed via averaging the prediction probabilities from all prototypes of the same type.
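A minimal sketch of this Multi-Prototype idea (illustrative NumPy code under our assumptions, not the cited implementation; the clustering and the exact probability-averaging rule are simplified): the support tokens of each type are grouped into a few local clusters whose centroids serve as prototypes, and a token's score for a type averages the probabilities contributed by that type's prototypes.

```python
import numpy as np

def kmeans_prototypes(embs, k, iters=10):
    """Tiny k-means over one type's support-token embeddings; the k cluster
    centroids become that type's prototypes (deterministic spread init)."""
    centers = embs[np.linspace(0, len(embs) - 1, k).astype(int)].astype(float).copy()
    for _ in range(iters):
        labels = ((embs[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = embs[labels == j].mean(0)
    return centers

def predict_type(token_emb, protos_by_type):
    """Softmax over all prototypes, then average each type's prototype
    probabilities; return the highest-scoring type."""
    types = list(protos_by_type)
    all_protos = np.concatenate([protos_by_type[t] for t in types])
    logits = -((all_protos - token_emb) ** 2).sum(-1)   # negative squared distance
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    scores, i = {}, 0
    for t in types:
        k = len(protos_by_type[t])
        scores[t] = probs[i:i + k].mean()               # average over the type's prototypes
        i += k
    return max(scores, key=scores.get)

# Hypothetical 2-D "embeddings": one type occupies two separated clusters,
# which is exactly the situation a single centroid cannot capture.
rng = np.random.default_rng(1)
per = np.concatenate([rng.normal(0.0, 0.1, (5, 2)), rng.normal(5.0, 0.1, (5, 2))])
loc = rng.normal(-5.0, 0.1, (10, 2))
protos = {"PER": kmeans_prototypes(per, k=2), "LOC": kmeans_prototypes(loc, k=2)}
print(predict_type(np.array([0.1, 0.0]), protos))  # -> PER
```

With a single prototype per type, the "PER" centroid would sit between its two clusters and misclassify tokens near either one; the two local prototypes avoid that.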
| m | 5deab6eea346c8d3eec563424ebf3101 |