| text | label | id_ |
|---|---|---|
More generally speaking, inverse problems consist of finding the unknown characteristics of a structural system from some of the outputs, or measurements, of that system. Most notably, this includes the above-mentioned inverse conductivity problem in geophysics {{cite:01e35cd44f4e5381b503fbda63ea937b594c1b19}} and engineering, but it also includes a large field with applications in medical image reconstruction {{cite:b37a628794ede582006fac53f06500c06c5d78d7}}, {{cite:8700727e78f4caa1a4227db50624a611b6eb9f6f}}. Mathematically, such problems are ill-posed, broadly meaning that the parameters to be estimated {{formula:0adc7022-7bb9-41e1-84b6-72cd47161f7a}} are highly sensitive to changes in the measurement data {{formula:d1cce50c-3955-4b88-b412-fdd94fe1e29d}}. The solution to the inverse problem involves estimating the parameter {{formula:beee376b-90c8-4e5b-a275-45cb1347a2c0}} from a fixed set of measured data {{formula:53765c62-21de-41a9-9492-8bc530ab2006}}, in contrast to the forward problem of computing {{formula:b4932b88-7a0c-4752-8594-8127ce7db97b}} from knowledge of the system parameter {{formula:add22708-8ec7-4fce-b678-2780824a4473}}. Specifically, given the forward model {{formula:500c7ee0-2b0f-4439-be2e-0804e4d0a537}}, which models the system equations, we first formulate the underlying observation model
{{formula:0eb57b4d-085a-4d7c-8669-94efaabe7619}}
| m | 8f29bd60989b267e5fc036109c56430b |
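A minimal numerical sketch of the forward/inverse asymmetry described above, assuming a linear forward model y = A x + noise (the operator A, noise level, and Tikhonov penalty below are illustrative choices, not taken from the excerpt): even a mildly ill-conditioned A makes the naive inverse unstable, while a small regularization term stabilizes the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-conditioned linear forward model y = A x + noise.
n = 50
s = np.logspace(0, -6, n)                       # rapidly decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
y = A @ x_true + 1e-3 * rng.standard_normal(n)  # forward problem plus small noise

# Naive inversion amplifies the noise through the small singular values.
x_naive = np.linalg.solve(A, y)

# Tikhonov-regularized estimate: argmin ||A x - y||^2 + lam * ||x||^2.
lam = 1e-4
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("naive error    :", np.linalg.norm(x_naive - x_true))   # large
print("Tikhonov error :", np.linalg.norm(x_tik - x_true))     # much smaller
```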
However, in the presence of low-dimensional intrinsic structure and well-behaved noise that is symmetric, sub-Gaussian and independent of the {{formula:c393b32d-e5b3-463c-a7ed-5069aeb85188}} and {{formula:d1d11dad-c18d-47b9-a011-8496250c8a4b}} , RERM
with an optimal, sufficiently large tuning parameter achieves a much smaller prediction error than minimum-norm interpolating estimators {{cite:81354cadd8162dea772c65c2f0552811c98f41c4}}. For example, for sparse linear regression with i.i.d. standard Gaussian design and Gaussian noise with variance {{formula:726c07c1-7a10-451f-81e5-211608633628}}, RERM achieves a prediction error of order {{formula:75490538-968f-453d-87e0-c74fde495941}} versus {{formula:4a051d46-d51a-4849-98e9-192022e582cf}} for the minimum {{formula:34ba9830-cd58-473c-b8b3-946912d7c934}}-norm interpolator. This gap is not due to our proof techniques but is inherent to the nature of using an interpolating solution, as shown in {{cite:0adbf327d249c057d1f2d79b1c765fb18bcb848e}}.
| d | 91fddd97c69f91e3fee69c6e692ad672 |
In Figure REF we compare the evolution of the axial ratios {{formula:1b0bfb64-766b-4bd4-96cb-784eea6d67bf}} and {{formula:9372b075-f11d-4af3-bf56-69e2668538d7}} and the anisotropy parameters for {{formula:6a93d0ef-5b9f-4cc2-b7fd-5f618fdf18bb}} models with {{formula:8299e451-0f83-4520-8473-1929b323af5d}} and 1.8 for additive noise with and without friction. In general, the presence of noise or noise plus friction does not have a significant effect on the onset of ROI for models with a steeper cusp (generally more unstable, as they admit a larger degree of wildly chaotic orbits; see {{cite:b3987e5bf97f5765ca0264531134637cbc825af3}}, {{cite:107a3b1df1b86cdd3a5a1ecaa346529347e76bd6}}) and low values of the initial anisotropy (i.e. {{formula:05ce08a3-727e-476a-b1a5-0c8c98d41079}}), while for larger values of {{formula:dbff89fb-db9c-4468-9883-f217145ac4a6}}, corresponding to a more violent instability, the evolution of the triaxiality and the anisotropy is affected by the presence of noise, with systematically less anisotropic and more "triaxial" end states associated with larger amounts of noise and friction. Introducing a multiplicative noise (with a velocity-dependent friction coefficient) complicates the picture even further, as it apparently does not significantly alter the evolution of the axial ratios, nor the final values attained by {{formula:d001cef5-ba12-46d1-808a-d12f7c2efc11}}, even for extremely anisotropic models with steep cusps, while it seems to somewhat anticipate the time at which the anisotropy parameter starts moving to lower values (i.e. unstable models become more isotropic earlier), as shown in Figure REF for a system with {{formula:25d9f1e6-8766-44ef-8c57-207288b556b6}}.
| d | e8c083c6a949f5915c1099ccee229c42 |
Our method constitutes a significant step towards creating truly realistic 3D talking heads, as shown by our extensive objective and subjective evaluation against other SoTA methods. It is important to note that our method even outperforms DAD, which was trained with 3D annotated data on a large-scale dataset. It should also be pointed out, as is also evident in Figure REF, that the lipread loss not only retains the motion and shape of the mouth, but also makes it more distinct in the rendered mesh. It becomes apparent that in order to achieve realism in terms of speech, we need to opt for more perceptual losses. This has also been done in previous methods regarding emotional expression {{cite:5007acb0efc7034afff590da22770d659f47b0fa}} as well as 3D shape {{cite:d2f2b8aede75b04f7068ed509d663558c8580f98}}, {{cite:482d128ee629852afd3d64e501094489d7895aeb}}.
Note also that training with our lipread loss does not require any kind of text transcriptions or the corresponding audio.
Moreover, even though our method is trained on speech videos according to the proposed lipread loss, it can be used to model arbitrary mouth movements unrelated to speech. This generalization property stems from the fact that we train to perceptually simulate mouth movements, and thus the encoded mouth features do not necessarily correspond to speech-related movements.
| d | a2174b80dd79538e6634ba1b82d84050 |
with {{formula:e4206ee1-954f-474e-92ef-09e85bd5cad4}}. {{cite:1865069f80df0fb444997c9f3bb79210614a081b}} extended this view of eigenvectors to derive sparse loadings by introducing a regularization term. The results are based on Theorem 3 of {{cite:1865069f80df0fb444997c9f3bb79210614a081b}}, which we recap for completeness using our notation. Therefore, let {{formula:6aab9659-5f3c-4c5e-a5b3-3284192dfd87}} and {{formula:52fba8c3-db04-4094-a30d-76e055081f45}}.
| r | e03f4da9bdd1ec4586a152e736ba07d2 |
In the energy bins below 8 EeV no significant measurement has been obtained for the equatorial dipole amplitude {{formula:a127a37d-9b08-40d3-8c20-a4035092bff0}}, and the 99% CL upper bounds on the amplitude are also displayed in the plot for those bins. For both magnetic field models considered the predicted amplitude is compatible with the observations. For energies around 10 EeV a significant dipolar component has been measured by the Pierre Auger Collaboration, with an amplitude of about {{formula:0d4adb1c-e261-469e-b6d7-b4004b262da5}} {{cite:3a484b5a76dbf62068256485524c1221b00dfde9}}, {{cite:6fa8b0b2f90914959bc448f08eabdeecfbd8b42f}}, increasing approximately linearly with energy above 4 EeV and up to the highest energy bin considered {{cite:3f85bc8c0dff8fc62497c8db0b10946a0805c811}}, {{cite:6fa8b0b2f90914959bc448f08eabdeecfbd8b42f}}. This anisotropy is expected to be associated with the sources of the highest energy CRs that dominate the flux for energies above the ankle. Since at the highest energies CRs can only arrive at the Earth from relatively close-by sources, lying at distances smaller than about 100 Mpc, a plausible explanation is that the observed anisotropy arises from the inhomogeneous distribution of the CR sources in the local Universe. The distribution of matter around us, as traced by galaxies in the 2MRS catalog, does in fact show a significant dipolar component with a maximum not far from the CMB dipole maximum {{cite:5647b25c39106dc53cbff0bd36f9323cb0e88995}}. This is in fact expected, as it is the gravitational attraction from the nearby matter that produces the Local Group peculiar velocity that is ultimately responsible for the CMB dipole. The anisotropy of this high-energy component is thus expected to have a direction not very different from the Compton-Getting one but with a larger amplitude, making it the dominant contribution above the ankle. This could in principle also allow one to account for the measurements at the two highest energy points shown in Fig. REF.
| d | fb728b3bb5e8a569a413492d8c1ab5aa |
A CFT module has 8 duplicate Transformer blocks, as shown in Fig. REF.
In addition, since the dimension of the input sentences {{formula:610f3247-9079-41fa-bad8-6baca3972bd0}} is {{formula:ef463880-86b6-4635-aef0-05516ace1342}} , the actual expression of {{formula:69d31218-a94a-4984-b68c-6191d492536c}} in the above Eq. (REF ) and () is {{formula:94a5304c-92a6-4b63-83b1-100029d37786}} .
Apart from the parameters and FLOPs, memory access also needs to be considered {{cite:874076cec0cecaa7d81fc0e9bdf526f145f3b582}}; in particular, when calculating the dot product of queries and keys, an intermediate matrix of {{formula:d2e136c1-6757-401f-8701-71b4c9cc5e15}} dimensions is generated.
When the input picture size is {{formula:c598978f-718e-4d5d-b0fc-86d73fa322e9}}, after two downsamplings ({{formula:27041b80-06ac-4cbc-9dae-f2f2328cf83f}}), the number of elements of the matrix {{formula:28118686-13ab-473f-8a93-2a28b408c3a5}} exceeds 2.4G, which is unacceptable for ordinary computers.
| m | 63ea3ffdf5db655bff82b0e074b8f7de |
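To make the memory argument concrete, a small back-of-the-envelope calculation (the input resolutions, downsampling factor, and float width below are illustrative assumptions, not the exact values in the excerpt): the query-key product produces an N x N score matrix whose size grows quartically with the input side length.

```python
def attention_matrix_elements(height, width, downsample=4):
    """Number of elements in the N x N query-key score matrix,
    where N is the number of tokens after spatial downsampling."""
    n_tokens = (height // downsample) * (width // downsample)
    return n_tokens ** 2

# Illustrative resolutions; memory assumes float32 (4 bytes per element).
for side in (224, 512, 1024):
    elems = attention_matrix_elements(side, side)
    print(f"{side}x{side}: N^2 = {elems:,} elements "
          f"(~{elems * 4 / 2**30:.2f} GiB in float32)")
```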
where {{formula:82f8eaf8-dfd8-4736-a267-6dfc811e3291}} is the principal branch of the Lambert {{formula:a77ad778-e176-456a-981d-20e6f7168e39}} function (or product logarithm). A key property of {{formula:008fb2cd-c7df-4c0c-b1e3-50dcf98d8823}} is that {{formula:8f099d40-637c-4b91-bfed-2f2762ad4e78}} for all {{formula:23a30683-2206-4c04-a89c-4049d443c664}} .
Since {{formula:5fa39cb8-6ce5-403c-895b-463c6b533ada}} for large {{formula:2340eb43-b74c-4aaf-8f30-19326fda7889}} (see, for example, {{cite:1952bcc1bde42d5d8b9c12a16e161f0e645443a8}}), the sequence {{formula:d779bf76-0a25-452f-9341-bff788731453}} is strictly increasing.
Thus, as {{formula:4d4d5d06-8cbe-42fb-8a39-437d05678678}} gets large,
{{formula:c429ed4e-aff6-479e-8d8b-f0ae173a491d}}
| r | 03267d0bb193beb020abb90d5e178731 |
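A small numerical illustration of the principal Lambert W branch referenced above. Which bound the excerpt's placeholder formulas denote is not recoverable, so only the defining relation W(x) e^{W(x)} = x and the standard comparison with log x for x > e are checked here, as assumptions.

```python
import numpy as np
from scipy.special import lambertw

x = np.array([3.0, 10.0, 100.0, 1e6])
w = lambertw(x).real          # principal branch W_0, real-valued for x >= -1/e

# Defining relation: W(x) * exp(W(x)) = x.
print(np.allclose(w * np.exp(w), x))

# For x > e, the principal branch satisfies W(x) < log(x).
print(np.all(w < np.log(x)))
```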
Spatial Graph-Convolution The purely spatial convolution is computed using graph convolutions (GCNs) as defined in {{cite:b06ee98d55dd5100f24f594126bda4453f96c809}}.
Instead of using the full skeletal graph, we again utilize the kinematic tree that was described above to convert the undirected skeletal graph into three directed subgraphs: In the first subgraph, an edge {{formula:193b09eb-01b3-4b2b-979b-560815a864c6}} exists if and only if joint {{formula:4b417903-1cd7-44e5-9209-de7749ea7ee6}} is further up in the hierarchy of the kinematic tree than joint {{formula:24132022-7dc5-494d-abe9-56574d704f18}}. This subgraph retains all edges linking a joint to its immediate child joints. The second subgraph is constructed similarly, but with inverted edge directions, and therefore retains all edges linking a joint to its immediate parent joints.
The third subgraph consists only of self-loops.
The subgraphs are represented by three adjacency matrices {{formula:d304ee26-dfd3-4d1d-bfec-2d8f630d0aef}} . Note that {{formula:fb5d0153-6671-4de2-af13-4ab2f73cb796}} corresponds to the identity matrix. The output of this operation is computed as follows:
{{formula:1832347e-d38a-44d4-96e4-855b614a7421}}
| m | 556f0cfa2d26607d4836096e49092ae9 |
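A minimal numpy sketch of the three-subgraph spatial convolution described above, written in the common ST-GCN-style form (a normalized adjacency propagation per subgraph, each with its own weight matrix, summed at the output). The normalization choice and the tiny 4-joint kinematic chain are illustrative assumptions, not taken from the excerpt.

```python
import numpy as np

def normalize(adj):
    """Row-normalize an adjacency matrix: D^-1 A."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    return adj / deg

def spatial_graph_conv(x, adjs, weights):
    """x: (num_joints, in_dim); adjs/weights: one per directed subgraph.
    Output is the sum of the per-subgraph propagated features."""
    return sum(normalize(a) @ x @ w for a, w in zip(adjs, weights))

# Toy kinematic chain 0 -> 1 -> 2 -> 3 (parent -> child).
num_joints, in_dim, out_dim = 4, 8, 16
child = np.zeros((num_joints, num_joints))    # edges towards child joints
for parent, kid in [(0, 1), (1, 2), (2, 3)]:
    child[parent, kid] = 1.0
parent = child.T                              # edges towards parent joints
self_loops = np.eye(num_joints)               # identity adjacency

rng = np.random.default_rng(0)
x = rng.standard_normal((num_joints, in_dim))
weights = [rng.standard_normal((in_dim, out_dim)) * 0.1 for _ in range(3)]

out = spatial_graph_conv(x, [child, parent, self_loops], weights)
print(out.shape)  # (4, 16)
```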
Because of the inequality {{formula:baf6167a-a706-4e6f-bc21-37f43067067d}} , the second integral can't diverge to {{formula:6a6c38f0-a0ac-4b1f-b00a-1e649aad954c}} and therefore for any measure in the Szegő class we have {{formula:bd7453b6-5ea5-4148-bccb-450e10fe7691}} . Furthermore, in this case there exists an outer function {{formula:d3e3b095-d502-4b1d-a244-c0294a2f01f6}} in {{formula:efb987bf-9d55-4dc8-b97d-9b61101e35ee}} such that {{formula:d19b61a3-a887-43e1-81eb-677a71d18799}} and {{formula:435db11c-85b7-41bf-a4ff-01713eb8c445}}
for Lebesgue almost every point {{formula:ed84f2e7-e28c-4b35-bcc8-ab363afa9754}} on the real line (see Section 4 in {{cite:963a65a0bb958639ce6f2872b55b0cc92323b2fc}}).
We will call {{formula:dc692cb2-623b-4fc9-bd27-545e4bca8257}} the inverse Szegő function of {{formula:8cbb579d-cae8-4702-9263-b12d71831cdd}} .
| r | f23683feab2d7754700ed70e4558b036 |
Finally, the three-dimensional Minkowski Functionals contain more information than their two-dimensional counterparts, and a complete analysis of the three-dimensional field will be forthcoming. In this work, and throughout a series of papers {{cite:77649118239cd63a5bd334f223e43c1ea9fb0ee6}}, {{cite:d451a0d64857b87f2bad17eb0643b2329e012ff6}}, {{cite:648d81579808f01f2d23dfbde93d957dc4957457}}, we have focused on the two-dimensional genus, extracted from shells of the three-dimensional galaxy distribution. The reasoning behind this choice is two-fold. First, the BOSS galaxy catalog is relatively sparse, and we mitigate this issue by taking thick slices along the line of sight. Binning galaxies in this way is a smoothing choice, so we can interpret our approach as anisotropic smoothing perpendicular and parallel to the line of sight. Smoothing on larger scales parallel to the line of sight allows us to use linear redshift space distortion physics, which is important as non-linear redshift space distortion effects on topological statistics are not yet well understood. Second, in future work we intend to compare our results with higher redshift photometric redshift catalogs, which will require galaxies to be binned into thick shells. An understanding of how photometric redshift uncertainty modifies our analysis must be further explored before this comparison can be made.
| d | 2a91b47434741f5e3454c229dda36097 |
We provide additional results of conditional image-to-image translation (Conditional I2I) {{cite:1b14dbd26bef44974299934180efbe5e89799e3a}} and style-guided synthesis {{cite:30ed39489998e8e810f0c9fef01cb203eaa1ca8f}} in Fig. REF, columns 9 and 10. To train the model of {{cite:1b14dbd26bef44974299934180efbe5e89799e3a}}, we resize images and semantic label maps to 64, the original resolution used in {{cite:1b14dbd26bef44974299934180efbe5e89799e3a}}. We test different learning rates and early stopping strategies to prevent the generator from collapsing. To implement {{cite:30ed39489998e8e810f0c9fef01cb203eaa1ca8f}}, we train the model of {{cite:30ed39489998e8e810f0c9fef01cb203eaa1ca8f}} using our patch-based self-supervision. We test multiple learning rates and channel sizes of the generator. However, we could not achieve good results for {{cite:1b14dbd26bef44974299934180efbe5e89799e3a}} and {{cite:30ed39489998e8e810f0c9fef01cb203eaa1ca8f}}. We believe the disentanglement strategy of {{cite:1b14dbd26bef44974299934180efbe5e89799e3a}} is too challenging for the highly diversified COCO-stuff dataset. Meanwhile, the input domain concatenation used in {{cite:30ed39489998e8e810f0c9fef01cb203eaa1ca8f}} may not be sufficient to capture and fuse the style information for the more challenging scene image dataset. In addition, spatially-adaptive normalization {{cite:ab98e835c826e53c1e413e8953aaeb2f4386df16}} might be required for {{cite:30ed39489998e8e810f0c9fef01cb203eaa1ca8f}} to better utilize the captured style coding.
{{figure:6a6eed7f-d1ab-4e40-917d-008a609ffba2}} | r | 6f86b47cfc2e0e32dec0746e2cfa9ae8 |
As mentioned in the introduction, the anomalous decrease of the bandwidth near the magic twist angle promotes the importance of the next-to-leading order terms in setting the anisotropies, thus selecting from the nearly degenerate manifold of correlated states that are obtained if only the leading order terms are kept. Instead of listing all of the consequences of the above symmetries on such higher order terms, here we only mention in passing that {{formula:afaf5fe0-b15f-4488-ab0c-73d3182f3cd6}} and the combined operation {{formula:8e1aeee2-36d3-4094-aca5-965db9302ada}} will be seen to allow for a particularly interesting inter-layer tunneling contact term which, as shown in the companion paper, is the main source of the particle-hole symmetry breaking in the model of Ref. {{cite:41704704104c33843e1b4eb9337411b7f510adde}}, but which is altogether absent in the Slater-Koster type models{{cite:34778621092135cbabb7ae2871ab6a8beb8b9d6c}}, {{cite:59cf939e94dbd5d4899d600fb92df8d7e3f2a6aa}}, {{cite:f129a43feb24c2e32d9383229b1f9114c614db77}}.
| d | a1f2ec41b55327505a08e5093cc3516d |
Instead of comparing the performance of the ERM against the optimal risk attainable in the class, one may wish to compare against the risk of the optimal 1-step-ahead forecast.
For loss functions in the Bregman class the optimal 1-step-ahead forecast is the conditional mean (assuming it exists) {{cite:ad371033712b723cce4d0a5c6dded5c90bd07dc9}}.
Thus, the risk of the optimal 1-step-ahead forecast may be defined as
{{formula:f93777cc-ac68-4fc8-baea-ee277ed72c50}}
| d | 4fb246cdec4a613b7dfa0ed3b99beda4 |
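For completeness, the standard statement behind the claim that the conditional mean is the optimal one-step-ahead forecast under Bregman losses, written here in generic notation (the excerpt's own placeholder symbols are not recoverable):

```latex
% Bregman loss generated by a strictly convex, differentiable \phi:
\[
  \ell_\phi(y,\hat y) \;=\; \phi(y) - \phi(\hat y) - \phi'(\hat y)\,(y-\hat y) \;\ge\; 0 .
\]
% The conditional mean minimizes the one-step-ahead risk:
\[
  \operatorname*{arg\,min}_{\hat y}\;
  \mathbb{E}\!\left[\ell_\phi(Y_t,\hat y)\mid \mathcal{F}_{t-1}\right]
  \;=\; \mathbb{E}\!\left[Y_t \mid \mathcal{F}_{t-1}\right],
\]
% with \phi(y)=y^2 recovering the familiar squared-error case.
```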
Meanwhile, several training techniques {{cite:e3d1b41ebcc85973634a1a4f03c915d5a7c3754c}}, {{cite:41bb5841a62567148157058c7c66d8403747ce46}} have been proposed to derive an embedding model that is more robust against perturbations. On the one hand, data augmentation methods transform a single image {{cite:1445341a750b91b90331a864cf45c4a38c04f36e}} or combine multiple images {{cite:9d2c90bf3a7581cd7517a7af23783d19093cc0aa}}, {{cite:b9350d7550f3ac303a49bb2059d8c130b6b8761f}} to create more training samples at the pixel level. However, these methods cannot create new information that is not included in the given data {{cite:e3d1b41ebcc85973634a1a4f03c915d5a7c3754c}}. On the other hand, adversarial training methods such as projected gradient descent (PGD) {{cite:d05cae3840b00df0339f07ff27c98074b8f77055}} and AugGAN {{cite:e83e75ecdb6b450b1ad5cf1e11689f62ca88ce77}} find perturbed images that confuse the model, i.e., cause it to predict an incorrect label, and use them as additional training samples. However, these methods usually require numerous iterations to generate the adversarial examples by optimizing a predefined adversarial loss, which is computationally intensive. Besides, a clear trade-off has been shown between the accuracy of a classifier and its robustness against adversarial examples {{cite:41bb5841a62567148157058c7c66d8403747ce46}}.
| i | 0c95e53b27aeb93554e0043534fe47a1 |
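A compact PyTorch-style sketch of the PGD adversarial-example generation mentioned above. The l-infinity radius, step size, and iteration count are illustrative hyper-parameters, and `model` stands for any differentiable classifier rather than a specific one from the excerpt; the repeated gradient steps are what makes the procedure computationally intensive.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent in the l-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start
    x_adv = x_adv.clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                         # keep valid pixel range
    return x_adv.detach()
```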
The majority of span-based models {{cite:1ef2004f8fa338dc7cf6a9af91f7ebac859004fa}}, {{cite:a7ff7f19048c670a3a98580b3e883a43d200057d}}, {{cite:0d91205a6bf018bb768d4e19be14a36c90049614}} use Pre-trained Language Models (PLMs) as their encoders directly, which relies heavily on the encoding ability of PLMs and results in insufficient span semantic representations and poor model performance.
To alleviate this problem, some span-based models {{cite:42feb1a41f39d3e7c74afe6b0d7af42368c1fa5c}}, {{cite:f78d551e946be037a18451509eafed042b45ef81}} attempt to incorporate other related NLP tasks, such as event detection and coreference resolution. By using carefully designed neural architectures, these models enable the span semantic representations to incorporate information shared from the added tasks.
However, these additional tasks require extra data annotations such as event annotations, which are inaccessible in most datasets for the task, such as SciERC {{cite:40a66c955ffbc01791d7b55b4ed6a7b50f52721c}}, DocRED {{cite:271461f5aeedbd2decfa3d0aa7542109092f7c89}}, TACRED {{cite:ddc637ba48dbbe10c8f02b8f503637b66fff69f1}}, NYT {{cite:e5e920dce038ef414224ccfe33232226d4240650}}, WebNLG {{cite:b6aa169d691719926a4404eaf7d2a715de52ff4b}}, SemEval {{cite:d5b4eb94a87249bc0618be3d0290d0830ed40d11}}, CoNLL04 {{cite:9fd88c889515e085bb477e61eb971f1e8498c60e}}, and ADE {{cite:2a049f5e57123590a3a96f565c452d88bdf4fd4a}}.
{{figure:78176d85-81e3-4c18-b580-55886406eea0}} | i | 4b6abcb7b23ab6da8858fcf16a2792fb |
We also analysed participant feedback in task 1 by means of inductive thematic analysis based on the framework by Braun and Clarke {{cite:d36bf42f3a1969d5ba7e8eca1a631235e64c5144}}. The goal was to identify common themes regarding positive and negative factors considering the three videos created for task 1 of the user study.
| m | 870bfec43b2bbf702bb30fa8470bd91c |
As discussed in Section REF it is very likely that there are more general supersymmetric black holes with an additional electric charge corresponding to the {{formula:fc54ee1b-60ed-4e59-ae6b-2775a64fb804}} flavor symmetry of the class {{formula:0d86ebc3-76a5-4f09-b43c-e30fad4b035f}} SCFTs. It will be very interesting to construct these solutions, either by using an appropriate 5d matter-coupled supergravity theory arising as a consistent truncation of 11d supergravity, or by a judicious Ansatz for the fields of the 7d maximal gauged supergravity. Of course it should in principle be possible to construct such solutions directly in 11d supergravity but we expect that to be prohibitively difficult since the corresponding BPS equations should reduce to a system of coupled nonlinear PDEs.
There are two other generalizations of the black hole solutions discussed above that one can contemplate. For concreteness here we have focused the discussion on the class {{formula:f6b46099-8455-47a7-b511-95c531ce1d6c}} SCFTs associated to the 6d {{formula:6d26690f-64e5-4967-8879-20b4e55f60ce}} SCFT of type {{formula:36cafc70-df6f-45f9-9a00-3e90f5809b19}} compactified on a smooth {{formula:e86a318c-0ac5-4bc5-86c4-7fa056253991}} with {{formula:445bd4ec-df03-421b-8072-2f23d2af30fb}} . Using the results in {{cite:4ba0c3a19b00fbc4c60e72a2fe0fd8c88f57b793}}, {{cite:3d68fd8ed59dd3c02d3a6f226c45db968d406dd6}} it is straightforward to generalize this setup to the {{formula:3f484706-900c-44f0-92e3-3b867eec8dc9}} type class {{formula:7b488ae3-62f4-4011-9ab7-56cff7bf3f53}} SCFTs and to {{formula:00862b01-98a5-4f93-84bf-f163858b4373}} . On the supergravity side this corresponds to modifying the 11d supergravity solutions discussed above to have an internal {{formula:10d5270d-2970-4922-b8a4-d1ef1064815f}} space instead of the {{formula:d36bf735-f881-4636-902b-5e2fe146d046}} used in (REF ) and to modify the metric in (REF ) to that of the torus or the two-sphere. More non-trivially, one can attempt to find a larger class of black hole solutions arising from wrapped branes on punctured Riemann surfaces. While it is known how to construct such AdS{{formula:12fb14a4-c5ff-46e4-adad-d1cbba8a0437}} vacua with {{formula:84c08531-fb7b-4bee-a5ca-353b76100175}} and {{formula:62e31a1d-4429-425b-ac54-64f277b49e31}} supersymmetry, see {{cite:4c7cfab48298c51c64254c3686a6e4479c9e075d}}, {{cite:9c49de4f856669f98d3819de4c3c0f92c00eb2a6}}, {{cite:e8ace1d3da624d64d884ca5b08f980f0bd0ccac1}}, it is not entirely clear how to generalize these supergravity solutions to include a black hole. Perhaps the 7d gauged supergravity method developed in {{cite:e4ee6c9b3e06e4c976ec18c5e9e09e82f3dce151}} to treat some special punctures on the Riemann surface will prove useful in this regard.
As already emphasized in Section REF it is not entirely clear to us why the Cardy-like limit of the index used to derive the “second sheet” formula for the index in {{cite:fc0674153fd3b5704e44fa2a8f520932211c9a03}} has a larger regime of validity that captures the behavior of the large {{formula:96d69ffd-b66d-441b-ac54-dbacda1a7b02}} limit of the index relevant for holography. It will be most interesting to understand this better and also to generalize the “second sheet” formula for the index to include fugacities associated with flavor symmetries. We have proposed such a generalization in (REF ) and it will be interesting to check our educated guess more rigorously. Finally, it is important to study {{formula:c2c10e61-a804-4464-b78e-17f5c289e766}} corrections to the large {{formula:2eed3556-9f49-4cb3-bbda-800b0e76c6fa}} class {{formula:cbf1230b-ed86-4f14-a8fa-4fd6e3ef8453}} superconformal index which should correspond to the 1-loop contributions to the gravitational path integral of the KK modes of 11d supergravity around the AdS{{formula:335d8717-3bf3-4199-abbc-9747bae28e07}} black hole solutions. Establishing this relation rigorously will provide a stringent precision test of holography. We also note in passing that subleading terms in the large {{formula:5de48045-105f-4646-b6ad-1497efe1e0a3}} power law expansion of the superconformal index are also captured by “second sheet” formula. These correspond to higher-derivative corrections to the CCLP solution in gauged supergravity. This was studied recently in {{cite:4943db70a744fc528e4f76fb5c9e1c4bcab0bc7b}}, {{cite:8769e20f2e08a79d40968591a7d5a50312fe3572}} where a precise agreement between the holographic and QFT results was established. In particular a discussion on higher-derivative corrections to AdS{{formula:4fd0952b-e1b3-4140-a00e-257c9ab48d2f}} black holes arising from wrapped branes was presented in {{cite:4943db70a744fc528e4f76fb5c9e1c4bcab0bc7b}} based on the assumptions that such solutions indeed exist. Our results here indeed show that this is true.
The superconformal index for the class {{formula:7539e333-c643-48ce-b05f-68f45742d18f}} SCFTs discussed above was studied in {{cite:af8f7852fe7243c4efc55fddb6dc12281aed8a2c}} for low values of {{formula:bd11cfc9-e118-457d-9155-9e477bea3c97}} . It will be most interesting to understand how to extend the results of this work to larger values of {{formula:ae75583f-048c-492c-bf21-05b3da7647ef}} . For gauge theories coupled to matter a useful approach to studying the large {{formula:cc110925-7da7-4d94-a780-23aaeb9de402}} limit of the index is to employ the Bethe Ansatz re-formulation, see {{cite:32cbc9c9ba0623493848003441750445b71f7a7b}}, {{cite:a21002fb4f00c2826c9ffee6f390037c40f8f266}}. If such a formulation is possible for class {{formula:aad425c6-cb13-4ac5-bdc1-b07ea45f4574}} SCFTs it will provide a concrete calculational tool to gain valuable insights into the quantum gravity corrections to the properties of the black holes constructed in this work.
| d | e5da26b22f1bfafd308d1f32f298dd28 |
Such an approach is convenient in circumstances where it provides physical insights into quantum field theory {{cite:eeb610177504e33874f2da3643f617144c6d462a}}, {{cite:284ddc44cc096ab36c16713117198e8cb756aca4}}, {{cite:72ee64c4fe91fe7c68c8a262584677b5f69ffa71}}. For example,
an appropriate version of OPT is given by Schwinger's proper time formalism, which is the best way to carry out certain effective action calculations {{cite:c8ec7434f0ca1466d7d8d08ec9f6fd1da2cbf495}}.
Moreover, OPT is also convenient for giving a general proof of the infrared finiteness of averaged transition probabilities {{cite:e1e3440b32e9f4052f4d7f4039c5ee79c3b06672}}.
Another helpful feature of OPT is that it clarifies the way that the singularities of the {{formula:c2af4313-d8fa-462d-a475-12dbec82a8c4}} –matrix arise from various physical intermediate states. This may be useful in connection with the unitarity methods applied to loop computations in gauge theories {{cite:de021199d91e3cde72e3cc51ef9e914cf3d8db37}}, {{cite:bf6357598f55a6e939ebdf418192767ea8cd5c19}}, {{cite:dffe7804d6807bd505ac2903ee0fc9b1866a6140}}, {{cite:5372ac9e35cb9bddd126775e194c5e4c50570190}}, {{cite:a9a2f3e5d2dcac264032d608b176ce40275c9d4b}}, which rely on the fact that loop amplitudes are determined by their singularities.
| d | cc6d76a45dd953e487483667a7e3a758 |
Our approach is fully differentiable and can therefore be trained end-to-end using multi-view images.
Our experiments show that when trained on large amounts of data, our method can render high-resolution photo-realistic novel views for unseen scenes that contain complex geometry and materials, and our quantitative evaluation shows that it improves upon state-of-the-art novel view synthesis methods designed to generalize in a single shot to new test scenes.
Moreover, for any particular scene, we can fine-tune IBRNet to improve the quality of synthesized novel views to match the performance of state-of-the-art neural scene representation methods like NeRF {{cite:784ad94b893a20b528bb13c8635472d006c1fe66}}.
In summary, our contributions are:
[–]
| i | 6732164ca8024f1a7fab82412947b693 |
Generalization. As explained in Sec. ,
different types of media data (e.g., human face vs. vehicle photos) are
generally projected toward distinct manifold
spaces {{cite:5d35a2c706e23e8ec335675661435987eac7b08b}}, {{cite:a6eadf43ba7ee4dd9540efb4fb39ea976025136d}}, {{cite:a5583311c243970c57ab4d9aca999a513098d403}}.
Hence, training a unified model to recover media data of different classes is
beyond the scope of our SCA.
| r | c66e309e0d4cee38cfd8b4d535010103 |
In {{cite:4e4cd36db67226b3ce9e69e45b4c935aa9264b17}}, {{cite:a98df987eeed9aa48b509a8839c29c5749c2e449}} a sequence of probability distributions
{{formula:4e063e1f-ead7-41bf-bcf7-2130a5317c3a}} is said to be chaotic if, calling
{{formula:8a8ab092-f539-4fa3-8bd3-7b4db02d6071}}
| r | 6a23ed0015c8ffea9e8f634ef31a3c9d |
Due to the evident domain gap for vision-based detection systems, some works consider the auditory domain instead of the visual domain for object detection. The auditory domain does not exhibit a domain gap between different lighting conditions, which makes this modality robust to such environmental conditions. The task of Sound Event Localization and Detection (SELD) is to localize and detect sound-emitting objects using sound recordings of a multi-channel microphone array. The difference in signal volume and time difference of arrival between the microphones can be exploited to infer the location of a sound-emitting object relative to the position of the microphones {{cite:8150cef46007867b5697357b6a4f8e1c08143835}}. However, state-of-the-art learning-based approaches for solving the SELD problem also require hand-annotated labels to be able to infer the sound direction of arrival of a sound-emitting object {{cite:ca7f4d870a640edfe641d2b52d1acb07ba62a899}}, which restricts their applicability in many domains due to the need for large-scale annotated datasets.
| i | 9ffd23f93096533586d940989b842cf2 |
The LTS schemes in {{cite:7b0e6d67a45d294283ce007caf38c6838d834810}} are based on explicit methods of second and third order accuracy, normally referred to as strong stability preserving Runge-Kutta (SSPRK) methods {{cite:99a1778b0004604468c770b89df0df02414c7fe0}}, {{cite:fe16f4ae4d2cf430a68fa7d1c8f0d6c5130e0ef3}}.
The second order SSPRK method, from now on denoted SSPRK2, is a two-stage method, whereas the third order SSPRK method, from now on denoted SSPRK3, is a three-stage method. The two methods can be written compactly as one with the following notation.
| m | 8c2f8eecef69bc218d275bcb66d6057b |
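For reference, minimal implementations of the two SSPRK schemes named above in their standard convex-combination (Shu-Osher) form; `rhs` is the semi-discrete right-hand side L(u), and the formulas below are the textbook ones, written here as a sketch rather than in the paper's unified notation.

```python
import numpy as np

def ssprk2_step(rhs, u, dt):
    """Two-stage, second-order SSP Runge-Kutta step (Heun-type)."""
    u1 = u + dt * rhs(u)
    return 0.5 * u + 0.5 * (u1 + dt * rhs(u1))

def ssprk3_step(rhs, u, dt):
    """Three-stage, third-order SSP Runge-Kutta step (Shu-Osher)."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# Example: exponential decay u' = -u, exact solution exp(-t).
u, dt = np.array([1.0]), 0.1
for _ in range(10):
    u = ssprk3_step(lambda v: -v, u, dt)
print(u, np.exp(-1.0))   # numerical vs exact value at t = 1
```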
In this work, we have built upon the techniques used in {{cite:1c9c1692b56d6add8c2b72c4376b10655079360e}} and presented a graphical framework that constructs complex codes from tensor networks, which may be considered as a natural generalization of code concatenation. While holographic codes were among its first concrete implementations, it is far more capable and can generate a range of different codes with different properties. In particular, it connects tensor network geometry with error correction codes and can deduce properties of the larger code from those of the simpler components using local moves like operator pushing.
| d | 84183f1d46d88a356d098e6ae18335e6 |
with {{formula:4798771b-d7b5-4771-9f98-18846097e849}} satisfying the so-called second Heavenly equation {{cite:3909bebc7807a9469b2af88f3d2d63114b12fad8}}
{{formula:e29d8491-6e85-4860-ac06-1ef80ff8a7b1}}
| d | 14785b628398f6f1ac0a098ff20d3879 |
One interesting question is to construct the characters of these
representations and hence form the partition function. One could
further compare with the partition function of the Euclidean black
hole {{cite:718ebade74d0f356a896a22bf1958b6a5e9f3555}} and trace the winding modes to the Lorentzian
geometry. In this regard, a significant contribution to answering this
question already appears in the work of {{cite:58fa7011d06b977834953506b519fed046c5ebee}}. In this work,
the authors construct the partition function for Lorentzian {{formula:8941e7d2-d770-493b-bd7c-f1acc7b1eda1}} by
building upon characters of the gauged {{formula:d554d24f-4f36-4c33-89c8-d6ea6eb0660e}} theory. Using this, they
have also constructed partition functions for various marginal
deformations - in particular, the Lorentzian black hole (see eqns 5.15
and 5.17). It will be of much interest to read off the spectrum of
states from the partition function. We should compare the structure
of the spectrum with the work of {{cite:e2ab15ac5d4a0f89cee434308b97a41f6ce07fda}}. In this work, the
(worldsheet) elliptic genus of the Euclidean black hole which included
states from the discrete series was argued to satisfy a curious
identity. This identity should be related to the observation that
while the timelike geodesics with {{formula:1b12031f-ef98-4234-9b4a-63094eae1858}} are absent in regions
V and VI of the black hole, one can construct “T-dual” string
configurations which satisfy the physical state conditions. The
computation of various correlation functions, either from the point
of view of regions I and II or from the T-dual regions V and VI, is
another open question. As in AdS holography, the geodesics and evaluation
of the action can be used to obtain a saddle point approximation to
correlation functions.
| d | 01f166f9e7d92fc893d14362649b857d |
Further suppose that, in the graph on the right, the causal influence of V{{formula:2377dc8e-3966-4e8a-8848-b5e3f375e275}} on V{{formula:d3354b65-4024-4760-9c55-e85728412c86}} is precisely canceled by the causal influence of V{{formula:edcf9745-da8f-49bd-b7ff-603a97474a51}} on V{{formula:d29709d1-8bd4-45ec-a16e-b62f399fb070}}. Then both of these graphs entail V{{formula:3be2779b-3d39-4de6-9197-3cd56c815015}} and V{{formula:6abe0f18-59dc-48d7-8e83-17ba2c2656fe}}, but there is something unsatisfying about the graph on the right: the required conditional dependence relations have been recovered only by fine-tuning causal influences to cancel precisely. This is why {{cite:b30f65ec7ecccf850d7ec737acf5a3297f5374dd}} calls this a stability condition: holding fixed the strength of the causal influence of V{{formula:57221717-3f2f-46f0-93fc-6360d6917c9f}} on V{{formula:620712ba-461e-4a9f-b0a9-872adb99272b}} while perturbing the strength of the causal influence of V{{formula:36ba9350-6e96-4b8f-b8cd-9d55576ae18a}} on V{{formula:b3c27097-fc29-454a-9810-98f5cf6c6533}} by any {{formula:bfb694eb-d073-4670-8164-ee42b5ff511b}} will destroy the conditional independence relations. It is these kinds of finely-tuned graphs that are ruled out by faithfulness. Faithfulness is often motivated by the fact that the set of parameter values quantifying causal influence that produce this type of cancellation has measure zero in the set of all parameter values {{cite:7261ea3daafd2b49988a9b20c029dadbd4c37ce9}}, {{cite:4e27b0c9716c075182b364af3926b396056e9960}}. For a clarifying discussion of alternative justifications for imposing faithfulness, see {{cite:d6cdfc58e91020ad3ea56f5252c9be35e8317ce3}}.
| m | 448a6e04468d2c07d75f84df9e5f482b |
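A tiny linear-Gaussian illustration of the fine-tuned cancellation discussed above (the variable names, path coefficients, and Gaussian noises are hypothetical choices made for the example): a direct effect a and an indirect effect b*c cancel exactly when a = -b*c, producing an "unfaithful" independence that disappears under any perturbation of a.

```python
import numpy as np

def simulate(a, b, c, n=200_000, seed=0):
    """Linear SEM: X -> Z -> Y and X -> Y, with standard normal noises."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    z = b * x + rng.standard_normal(n)
    y = a * x + c * z + rng.standard_normal(n)
    return np.corrcoef(x, y)[0, 1]

b, c = 0.8, 0.5
print(simulate(a=-b * c, b=b, c=c))        # ~0: the paths cancel, X and Y look independent
print(simulate(a=-b * c + 0.1, b=b, c=c))  # clearly nonzero after a small perturbation
```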
An XAI method {{formula:884abeba-9d03-45cb-b759-caf6f0a874cd}} takes a model {{formula:bbdcf6e6-4d55-44e3-a514-f09ff2d9b057}} and a molecular graph {{formula:08537901-bc41-4b88-9dd6-4cdbe0d13028}} as inputs to generate an attribution score {{formula:7dce8fbd-6c55-433c-95de-1259915363e0}} where {{formula:67ac0658-f7ee-4c20-8cb6-45967dd31cac}} and {{formula:9bb47412-9c4a-4d6b-86b1-256bda01551b}} are node and edge weightings relevant for predicting property {{formula:3e271dae-5db4-4590-b2f4-6f665253b71e}} . These weightings can be visualized as a heatmap superimposed on a graph. Our ground-truths for attributions are node-level, so we redistribute edge attributions equally onto their endpoint nodes’ attributions. In our framework, we utilize the following methods for molecular graphs: GradInput {{cite:9ba6a8b80271e6dd07a97db1bbb23c340b7f71e7}}, GradCAM {{cite:6bf41bfaa174b68e44153d7bf8b57b82c606d5c6}}, Integrated Gradients {{cite:addc3993d35817aa39eefe0d0294e27ea1dbea26}}, CAM {{cite:cdfbe8d910fb916739d97ea65c379771ea89c391}}, MCTS {{cite:b0324332542cf8987c1811beb9cda74e98ed9a3e}} and Attention weights {{cite:e515e3e3063f0f669a01ace58f6a8be2e937320f}}.
| m | fd60bdc9f5ed6116922a6b3756bcbe48 |
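A small helper sketching the redistribution step described above, in which each edge attribution is split equally between its two endpoint nodes. The data layout (an edge list plus per-edge scores) is an assumption made for illustration.

```python
import numpy as np

def redistribute_edge_attributions(node_attr, edge_index, edge_attr):
    """node_attr: (num_nodes,); edge_index: list of (u, v); edge_attr: (num_edges,).
    Returns node attributions with half of each edge score added to each endpoint."""
    node_attr = np.asarray(node_attr, dtype=float).copy()
    for (u, v), w in zip(edge_index, edge_attr):
        node_attr[u] += 0.5 * w
        node_attr[v] += 0.5 * w
    return node_attr

print(redistribute_edge_attributions(
    node_attr=[0.2, 0.0, 0.1],
    edge_index=[(0, 1), (1, 2)],
    edge_attr=[0.4, 0.2],
))  # [0.4, 0.3, 0.2]
```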
Under {{formula:b1c1a377-0cf2-4402-b68c-c83ad7d7b754}} , the test statistic
{{formula:fdf9de05-44a0-4bd0-ad6c-765293afc40c}} has an {{formula:b5b632f7-e431-4dc2-b0d3-d18443d42a6b}} distribution with
{{formula:3a3f0b44-98c1-46a4-9f6f-8a97173e7170}} degrees of freedom,
where {{formula:8b3b5a0a-23d4-4727-b8e2-d8b75854b66c}} and {{formula:0fd118ee-a0f6-48db-ba58-8c3bd3e9e941}}
{{cite:0c439d7a770752efe48d643321bc99680d2a2fc2}}. The probability of
{{formula:6f3d72ab-dd4a-4701-aa60-acee182aa623}} reaching values higher than {{formula:fc778c86-69ba-47c4-b386-a1b8f59ae2f7}}
is called the critical level {{formula:0543f2f8-e7e9-4410-831c-c62077a248b6}}.
We reject the {{formula:9b3abe21-d222-44fc-96f1-2a1eb65a6ac0}} hypothesis if
{{formula:ab0ba24d-dbed-4636-905d-1c64f564ba6d}}
| m | 6e92bede5e5d789821172865b4258daa |
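A short sketch of the decision rule described above using SciPy's F distribution; the statistic value, degrees of freedom, and significance level are placeholder numbers, since the excerpt's own formulas are not recoverable. The critical level is the upper-tail probability of the observed statistic, and the null hypothesis is rejected when it falls below the chosen threshold.

```python
from scipy.stats import f

def f_test_decision(f_obs, dfn, dfd, alpha=0.05):
    """Return (critical level, reject?) for an observed F statistic."""
    critical_level = f.sf(f_obs, dfn, dfd)   # P(F > f_obs) under the null hypothesis
    return critical_level, critical_level < alpha

# Illustrative numbers only.
p_value, reject = f_test_decision(f_obs=3.2, dfn=4, dfd=40)
print(f"critical level = {p_value:.4f}, reject H0: {reject}")
```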
Additionally, we report UCE values from a DenseNet ensemble for comparison.
In contrast to what is reported by {{cite:95fad12b334727ed0b59a803c0d12e9167d9bc2a}}, the deep ensemble tends to be less well calibrated.
Only on BoneAge is the ensemble better calibrated than the other methods prior to their recalibration.
After recalibration, both approaches outperform the deep ensemble.
| r | f1dd65d7e0ff5fe2f43a6965ec40362f |
Furthermore, we compare our result to the condition in {{cite:831c5ca6707cb11d782da218962eb974dea26240}} that involves a bound in terms of the algebraic connectivity (the second-smallest eigenvalue of a Laplacian) {{formula:cd5a2dc8-2275-4cdc-8109-d323152c1aba}}.
We compute {{formula:65f4851d-a034-4298-81fc-c351b9129ed2}} where {{formula:5dd2143c-0e26-47a6-a2b4-901c162993bd}} , and parameter {{formula:5ac92819-2a73-4ad5-8ae8-99262736a851}} in {{cite:831c5ca6707cb11d782da218962eb974dea26240}} is computed as {{formula:4e637573-1d91-4c27-9e46-9f69790b411f}} . We have that {{formula:25406f3e-7bb1-4422-a6ec-4ca033fb8e99}} is less than {{formula:d8104d1a-a745-4868-96a6-e3a70e98359a}} , which shows that the condition in {{cite:831c5ca6707cb11d782da218962eb974dea26240}} fails even though the system synchronizes. This is similarly observed for other network sizes.
Hence, the condition in Theorem REF is less conservative compared to that in {{cite:831c5ca6707cb11d782da218962eb974dea26240}} when local asymptotic stability of the synchronization manifold is deduced.
| r | 4af639fd33e97921f0f904ed24083c21 |
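A small numpy check of the quantity used in the comparison above: the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian. The ring graph below is an arbitrary example, not one of the paper's networks.

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Second-smallest eigenvalue of the (combinatorial) graph Laplacian."""
    adjacency = np.asarray(adjacency, dtype=float)
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    return np.sort(np.linalg.eigvalsh(laplacian))[1]

# Example: ring graph on 6 nodes.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
print(algebraic_connectivity(A))  # 2 * (1 - cos(2*pi/6)) = 1.0
```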
is precisely the sum of Dirac measures located at the eigenvalues
{{formula:86147aef-6a03-43ee-99b4-0065608e2dff}} of the operator {{formula:a18a43c4-ad16-42a1-8462-8dc027f95015}} .
Similar results were obtained for various classes of differential and pseudo-differential operators in {{cite:668b815079aa86f08392407b80b99cec2561064f}}, {{cite:9bd30ede0bc674f201f685bab6acf7e302ad05aa}}, {{cite:10b442ad5630a02b3cc2e4588b8acfc711ab8365}}, {{cite:4173913b81e8d958726ff2a23694f58dee82abbc}}, {{cite:ba4233b6e16d6862bfff3bb70a75d3ca2fa7cd70}} and {{cite:96066f6475ae75323cc0832612789fba0adbe686}}.
| i | b8784fc64617ef505df9036f0ba54395 |
The present data-driven approach was demonstrated with a set of local sensor measurements.
As this method performs spatial field reconstruction at each time, changes in the number of sensors or motion of the sensors are easily accommodated.
Having this flexibility allows for extensions to incorporate other types of measurements, such as under-resolved satellite based measurements or particle image velocimetry.
The current formulation also opens a path to incorporate intelligent sensor placements {{cite:969900068c3a9a8cee9076eafaa29cb54af20a18}} to further reduce reconstruction error and enhance the robustness with data redundancy.
Moreover, the present method can be applied to cases where output attributes are different from the input data attributes.
In such a scenario, the Voronoi-tessellation is akin to a projection operation to extend learning one step further.
While we use the Voronoi CNN solely for flow reconstruction in the present paper, high-order turbulence statistics, e.g., root-mean-squared values of velocity fluctuations, can be extracted as well.
The power and simplicity of the present approach will support scientific endeavor across a wide range of studies for complex data structures.
| d | cd296d3463df2f6091a9597eb3e56027 |
There are infinitely many sets of context-free grammatical rules, and thus it is not appropriate to assume a uniform
prior over all possible PCFGs (such a prior would be improper).
This study adopts the hierarchical Dirichlet process (HDP) for the prior over PCFGs {{cite:130aa7d01a13eaffa7ab9ac37e9851ff42e9791c}}, {{cite:6c93687bd61709694dde9ad1ed196e03f89c9d2f}}.
The HDP prior introduces a bias for compact PCFGs:
It assigns exponentially smaller prior probability to PCFGs whose production probability mass is spread over a greater
number of grammatical rules, favoring those with a smaller number of reusable rules.
The Bayesian inference balances the PCFG likelihood (fit to the data) and the HDP prior (compactness),
and the posterior probable PCFGs are those that can generate the data with high probability
while reusing a limited number of production rules.
Similar balancing between the explanatory power (likelihood) and compactness (prior) is widely adopted
in scientific evaluation of models {{cite:d1f6f938e3987fa6053dc9fc2468273996480077}}, {{cite:dc1504343589bdf7ad484f033b6a4b1c7000956e}}, {{cite:614dbba470c9ebe0a82bb7da0e41e2204680452f}}
as well as modern theories of learning
{{cite:33282761156b976a648101ec2baf40244f0a7f37}}, {{cite:83d94185eabbf83dede56052024d12998090508b}}, {{cite:2128b8747d567a7b6484e68afbbf990650c0c3a4}}, {{cite:0a8cd62102db6434d7aff561b4120651c44aeffd}}.
| m | 16eb52f10a9d3a5cdbbd239df85c2f9e |
Condition REF is usually imposed in the analyses of high dimensional linear regression models {{cite:7a0cdd841d9e7ce4491be7608e0e8ecbe9752da8}}. This condition implies that the sample covariance matrix {{formula:dcaa41a3-2150-45aa-bc90-55c614d087d8}} satisfies a so-called restricted value condition over a cone set, which is described more clearly below. As discussed in {{cite:e999af7edf085a7de2bcf6aae5a45bb5b33b1ab9}}, Condition REF is a refined version of a commonly imposed condition in the SIR literature, that is, {{formula:810b151d-804e-4510-9a47-1522cf21ac79}} , meaning the dimension of the space spanned by the central curve equals the dimension of the central subspace. Finally, Condition REF controls the smoothness of the central curve {{formula:b9056947-f87a-4035-8213-d9af559def2b}} and the tail distribution of {{formula:99fbfa55-665e-4083-959c-562ead5cdd7a}} ; see {{cite:12f8f213e7b327e87e034ac5278ded0794557fa0}} for a detailed discussion of this condition.
| r | 9b59fb82fb7d17d01bb5a5ceec664d84 |
In section and the Appendix, we outlined and revised the necessary formulas for the AFCDM which allowed us to construct exact and parametric 8D solutions with a fixed energy parameter in the conventional co-fiber space of the phase space modeled as cotangent Lorentz bundle nonholonomically deformed by a nonassociative star product determined by a R-flux nongeometric structure. Such nonholonomic geometric constructions are nonassociative versions of geometrical and physical models from {{cite:db2e1ae8275da1634a16200e372a234142fe3d25}}, and citations therein. Nevertheless, the methods and solutions are very different from the former commutative ones where generalized “rainbow" configurations with a dependence on a variable energy parameter {{formula:1384a587-9cf2-451b-a62d-154311d5d81e}} and on a temperature parameter {{formula:392f6909-6489-48f3-8808-12a3aa8ff61f}} were emphasized. In this work, we consider fixed values {{formula:d5ede983-d8f2-43ed-91b3-b67e9e73fc7c}} and {{formula:53d429df-a7e6-40af-ae8c-6435ff4f2300}} and study possible relativistic momentum effects which via nonlinear symmetries (REF ) (see also appendix REF ) encode possible nonassociative data and R-flux contributions.
The Tangherlini type 6D BH solutions {{cite:0b6bf6d30f0c9f546b65e93de3853e0550362910}}, {{cite:2181807e0ee6a95df359da6cc6418254f6ea4134}} were extended to 8D phase space configurations in section REF , which define higher dimension BHs with a conventional horizon and radius in phase space and which, via nonlinear symmetries and effective cosmological constants (see formulas (REF ) and appendix REF ), encode nonassociative data.
Another class of solutions for nonassociative vacuum gravitational equations provided in section REF describes double BHs - a 4D one on the base spacetime and another 4D one in a co-fiber (momentum) space. These two BHs are not independent because of nonlinear symmetries of the generating functions and the effective sources which connect to another type of effective cosmological constant and determine (effective) masses and horizons.
Off-diagonal parametric R-flux deformations of prime 8D metrics can be performed in such forms that they result in quasi-stationary deformations with a fixed energy parameter (see proof in subsection REF ). Such generic off-diagonal solutions describe, for respective classes of nonholonomic constraints, certain embeddings of some prime BH configurations into a non-trivial phase space vacuum, or some locally anisotropic deformations of horizons with polarization of effective constants. In general, it is not clear what physical properties these solutions, which encode nonassociative R-flux data, may have. In explicit form, the possible physical importance of a solution can be clarified for certain special sets of data, topological assumptions, and asymptotic conditions. For more special cases with correspondingly prescribed classes of generating functions, we can construct some variants of BE configurations. It is possible to prove certain stability conditions for such BE solutions, and their physical interpretation is quite clear for the models with polarization of constants and deformation of horizons because of certain nonassociative data.
We generalize the Bekenstein-Hawking entropy approach to phase space BH and BE configurations in section REF and speculate how the corresponding temperature and associated thermodynamic values encode nonassociative data. Nevertheless, we conclude that such an approach is limited only to classes of solutions with conventional horizons, duality and holographic properties. More general configurations in nonassociative gravity are supposed to be characterized by another types of geometric thermodynamic models, for instance, involving relativistic nonholonomic/ noncommutative / supersymmetric generalizations {{cite:b9827955ce1fd0cc39355efda16f31b14322d806}}, {{cite:ebe59ae44ccde399459f6ebfda2c7faf85b58cf6}}, {{cite:d98a3720b0bac93dd8406cd66ba01963e38715a9}}, {{cite:db2e1ae8275da1634a16200e372a234142fe3d25}} of the concept of G. Perelman entropy {{cite:64d58a90d98e1c86e2fadba71a3b86314b911c72}}.
In section REF, the concept of W-entropy is generalized for 8D phase spaces in a relativistic way including, via nonlinear symmetries and effective cosmological constants, nonassociative star product and R-flux contributions. We show how a statistical analogy for nonholonomic phase space Ricci flows can be formulated in order to derive and compute thermodynamic variables for quasi-stationary solutions of nonassociative Ricci solitons with fixed temperature parameter {{formula:ba096440-b5ee-4a1c-a7d5-7d459d816ce1}} (in particular cases, they are equivalent to modified vacuum Einstein equations). It was proved that Perelman-like thermodynamic variables for {{formula:91e21a21-1bac-4749-881a-6f84ee5e6367}} –parametric configurations are determined by certain temperature and hypersurface (spacetime and phase space ones) integrals with effective volume elements.
Section REF gives two explicit examples of how the Perelman thermodynamic variables can be computed for general quasi-stationary R-flux deformations of phase space Tangherlini BHs and deformed double 4D BHs. These two configurations can be distinguished thermodynamically by different dependencies on the effective cosmological constants and temperature. This new geometric thermodynamic physics can not be studied using the concept of Bekenstein-Hawking entropy.
| d | f404147218f667a0957f79bb2d799b12 |
Base: Each client uses local data to train local models.
FedAvg {{cite:f36a9e06fca1c4f7a1cd48540131645be8a1a2b1}}: The cloud aggregates all client models without any particular operations for non-IID data.
FedBN {{cite:eba658855cb8ffa4e231ed7752072c36f9439b06}}: Each client preserves the local batch normalization.
FedProx {{cite:be5f7fc3d9fa32e1d9ba31e1811c985010f3933c}}: Allow partial information aggregation and add a proximal term to FedAvg.
FedPer {{cite:f9808b2b0e0172f041cbde729b6d2b0e90ec887a}}: Each client preserves some local layers.
| m | 58ed5c0c7059444d79dc0dccfc778c88 |
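A minimal sketch of the FedAvg aggregation step listed among the baselines. Parameter dictionaries keyed by layer name and data-size weighting follow the usual formulation; the specific structure below is an illustrative assumption.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client model parameters, weights = local data sizes."""
    total = float(sum(client_sizes))
    keys = client_params[0].keys()
    return {
        k: sum((n / total) * p[k] for p, n in zip(client_params, client_sizes))
        for k in keys
    }

clients = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.0])},
    {"w": np.array([3.0, 4.0]), "b": np.array([1.0])},
]
print(fedavg_aggregate(clients, client_sizes=[100, 300]))
# {'w': array([2.5, 3.5]), 'b': array([0.75])}
```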
Question types. In tab:mainresult, we report accuracy on six types of questions based on their prefixes. Most models tend to perform better on the “Is” and “Can” questions while delivering worse results on “What” questions, likely due to a smaller number of answer candidates – most questions with binary answers start with “Is” and “Can”, offering a better chance for a random guess. Moreover, we observe the largest gap between the blind test (model w/o 3D scene context input) and our best model on the “What” and “Which” categories, suggesting the need for more visual information for these two types of questions. This also partially echoes the finding reported in {{cite:54c083a01c5ae7b47e18edf9c7131338f0717164}}.
| r | 8fe12354e52f7ee3ee704ded1bc31409 |
M2 SMOTE TomekLinks {{cite:f7527b03d9bfd2d58b7a7c8186fdebfefda7f158}} 13 Binary C4.5 AUC
| m | 23cee281446eb94c343f5799a94debec |
Note that without the constraint of being in an LOCC protocol, a
measurement on a {{formula:67461186-8e37-4a30-a83e-228040448437}} -dimensional system can be compressed to
{{formula:09a7f14d-c2c4-49b9-b053-c76568c29e05}} outcomes.
This bound {{formula:481f47e4-d60a-474e-9c4c-b52e0acca9eb}} shows that to optimize state discrimination in
LOCC, about twice as many outcomes (or one additional bit of
communication) are sufficient.
This is independent of the number of rounds (finite or not), of how deep
into the protocol the parties have proceeded, of the number of
parties or the total dimension of the system, and of the
number of states in {{formula:2e6edba5-f6b9-4b18-b591-fdc910c6c2bb}}.
In comparison, in {{cite:26bcf8911f3335473e583a4937544e9174a8b4f3}} each measurement in round {{formula:4b79ad10-d74f-4ee5-879c-5b807a58e601}} out of
{{formula:4d8bf35a-82f5-4ac9-afa9-97076481cbb1}} has at most {{formula:550cd636-0fb7-4339-84d3-129f6a4e057b}} outcomes where {{formula:85b85003-9115-4ae5-a3b4-4aef06a3f617}} is the number of
outcomes, after coarse-graining, at the end of the protocol (which
is {{formula:3f21ab9a-b300-4d4a-961e-301a3876bdd4}} for state
discrimination) and {{formula:19213278-8b08-432a-b881-3b18954df457}} is the global dimension.
Most importantly, this bound diverges when {{formula:7ff7db59-6fde-4b84-99c5-3462f25e1a3a}} diverges.
| m | 4d2edb0feefcc45bbe66bf375a561884 |
GradSurgery (originally proposed for balancing multi-task learning; it can easily be applied to auxiliary learning by specifying one task as the target task and the others as the auxiliary tasks) {{cite:dc7700d3d8724c934ae603d5877817b6def04915}} replaces {{formula:849e0b86-0b52-4dcd-a64b-7809380ab8fd}} by its projection onto the normal plane of {{formula:6912c277-ca25-46e8-be4d-c226d5b49f05}} if {{formula:d4a847f8-1553-4265-854c-bd6b319d89e3}} is negative, unlike {{cite:be29fa0c994289f5be66450ec8ced06ab42013ee}}, where {{formula:3301313c-1233-4983-bd96-b0a93adc9dd7}} is just ignored. Formally, if {{formula:7de5e668-bca8-4876-b58a-4a0d1b9b0f70}} is negative, they let:
{{formula:79b1e591-17f0-41fc-a228-af586e7b8418}}
| m | c3a7c020574dd40a0e50b6585216f7ce |
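A compact sketch of the projection rule described above: when the auxiliary gradient conflicts with the target gradient (negative inner product), its component along the target gradient is removed rather than the whole gradient being discarded. The vector shapes and toy numbers are illustrative.

```python
import numpy as np

def project_conflicting(g_aux, g_target, eps=1e-12):
    """If <g_aux, g_target> < 0, project g_aux onto the normal plane of g_target."""
    dot = np.dot(g_aux, g_target)
    if dot < 0:
        g_aux = g_aux - dot / (np.dot(g_target, g_target) + eps) * g_target
    return g_aux

g_t = np.array([1.0, 0.0])
g_a = np.array([-1.0, 1.0])              # conflicts with g_t
g_a_proj = project_conflicting(g_a, g_t)
print(g_a_proj, np.dot(g_a_proj, g_t))   # [0. 1.], 0.0
```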
Given the tremendous progress witnessed in the deployment of QKD, practically secure QKD protocols are becoming an important research direction in quantum communication, since they are relevant for the practical deployment of QKD. In this study, we propose an FP-TFQKD protocol using coherent states that can be guaranteed to be secure against realistic flawed sources. We further demonstrate the feasibility of the protocol through a proof-of-principle experimental implementation {{cite:9166753a11f92f308acbe6b3652225468c50858f}}.
By combining the twin-field setup and security analysis with the RT method, the practical security of our protocol against device imperfections can be proven. For source flaws, we consider SPFs, side channels, THAs, and state correlations, which typically cover all possible source imperfections that are currently observed in experimental implementations. Although we only consider the special polarization side-channel and one special THA, we note that other side-channels and THAs can also be included in the calculation of fidelity with extra parameters. In addition, we offer relations between parameters characterizing source flaws and experimental data, and provide specific implementations in our experiment on how to define and measure these parameters, which can be applied to other types of source flaws . Therefore, our protocol is more practical than other QKD protocols considering imperfect devices. In addition, the simulation results show that our protocol generates a higher secure key rate against source flaws than the other QKD protocols in Ref. {{cite:7d75d9508dc9866f7efea403de3f1a4d1e96b876}} and Ref. {{cite:8e1ea6a43406763fb456565eaec7420fd9fc3e98}}). In particular, compared to the side-channel-free QKD in Ref. {{cite:8e1ea6a43406763fb456565eaec7420fd9fc3e98}}, and its experimental implementation in Ref. {{cite:8ac465880c913c5147eaa0cd50575481d14e4d71}}, our work removes the requirement of vacuum state preparation, which introduces intensity correlations that invalidate the key assumption of this method {{cite:6960d3e87f543979b54ce02f8c846512cd2ba4e3}}. For experimental implementation, we also present a finite-key analysis against collective attacks, and the method we use can be easily modified to prove security against coherent attacks {{cite:47bf682160c9ec3c64d6decd9f1973c406297b61}}, {{cite:6ad784abafcf688add7f853232f91f06df979b5f}}.
| d | 259d369558b512ec4d63bf0d8463b7ec |
Contributions: In this work, we introduce IMEA, an Informed Multi-context Entity Alignment model, to tackle the above obstacles.
Motivated by the strong expressiveness and representation learning ability of Transformer {{cite:37e5ac543402c90c39a03a7b2ce0b466d1f1c2cd}},
we design a novel Transformer architecture to encode the multiple structural contexts in a KG while capturing the deep alignment interaction across different KGs.
Specifically, we apply two Transformers to encode the neighborhood subgraphs and entity paths, respectively.
To capture the semantics of relation triples, we introduce relation regularization based on TransE {{cite:96524291fe4f291132a51dc5bd930fff79e14182}} at the input layer.
We generate entity paths by random walks across two KGs.
To capture the deep alignment interactions, we replace the entity in a path with its counterpart in the seed alignment (i.e., training data) to generate a new semantically equivalent path.
In this way, a path may contain entities from the two KGs that remain to be aligned,
and the Transformer is trained to fit such a cross-KG entity sequence and its equivalent path.
The two Transformers share parameters.
Their training objectives are central entity prediction and mask entity prediction, respectively.
The self-attention mechanism of Transformer can recognize and highlight helpful features in an end-to-end manner.
| i | 3757a3f292843b87f0f4dec99aecdf78 |
As the remnant radio galaxy ages, its electrons will lose so much energy that their spectra will eventually steepen. This steepening would reach a point where they do not emit significant radiation at high frequencies; instead, the radio emission may only be observable at low frequencies.
However, {{cite:08b609c995cb9603e4316f38324286022e13a51d}} argues that in addition to the aged population of remnant radio galaxies, there should be a younger population as well, in which the lobes have not had time to steepen over the observed frequency range {{cite:6c4394ce556c55ec2e0be38644b1a750cd5bf90f}}, {{cite:1c9fe71a93934dfbd8093d4d04de48201dfa63c8}}.
It is possible that the remnant phase of a radio galaxy is governed by a Sedov-like expansion, which will cause the dimming of radio emission due to a decrease in the magnetic field strength and a decrease in particle energies due to adiabatic expansion losses. Unfortunately, models of the spectra of remnant radio galaxies are highly degenerate, and the modeling of an individual remnant radio galaxy cannot constrain the remnant phase of radio galaxies {{cite:9ee70a91ef4966751a0761e3bca821c1ea1e7804}}.
Therefore, a possible way to constrain the remnant phase is via a statistical approach, and sadly, due to their small number, the physics of remnant radio galaxies remains poorly understood.
Of course, alongside their must exist a significant population of remnant radio galaxies, which show the absence of detectable radio emission in many clusters associated with merger activity {{cite:db81e91836062b28920972bee97155e1abd3174d}}.
Thus, searching for low-surface-brightness profiles, say {{formula:059bbc32-767a-4fe7-891f-5868d99d2ac3}} 50 mJy arcmin{{formula:291b9c13-9388-43a6-be79-1709314047b4}} , with an absence of radio cores and hotspots, and with no well-defined radio morphologies, is a possible way to identify remnant radio sources in large survey images.
| d | 9300e1004273b1984f04db91fc926a67 |
Quantum state engineering has been a subject of increasing interest to
construct various novel nonclassical states in quantum optics and quantum
information processing{{cite:f7d418c576ae65923e6acc8016ee8244fe4cd0a6}}, {{cite:f9931816686afa7737a00a1d61d572b0f3b52eb5}}. From a theoretical point of view, the
simplest way of generating nonclassical field states is to apply the photon
creation operation to classical states such as the thermal and coherent
states{{cite:94ecc73bfae19253b38f69cf7a7404b67a9668eb}}, {{cite:415f00bbaddfb16c4414350206033e0399825843}}, {{cite:8b11863ace77603c82c584cf21c5f55fb45b3b0f}}. These nonclassical states, such as the single-photon
added coherent state{{cite:6c4690d291a66541682420f7e952a66031a881ba}} and single-photon-added thermal state{{cite:dd831e226dcfd64d5c753a9857f7f1e5bd2eaece}}, have been realized experimentally. Subsequently, it has been demonstrated
that states obtained by subtracting photons from traditional quantum states exhibit an
abundance of nonclassical properties{{cite:658e9132f60534356a9079ca26c770c64d2a7524}}, {{cite:31a60e2e8cdc15f1d4bf7fbcc1d17e6afd094fd9}}, {{cite:a72c497278da8cdcf62cf8dc045b94a314b37829}}, {{cite:891044c316361b3b4d2be07d2c936846bc23f6ac}}. Photon subtraction
or addition can improve entanglement between Gaussian states{{cite:6414baedd7000b5f223e05b8f404639d907b3b76}},
loophole-free tests of Bell's inequality{{cite:e6e05fa4b6a900acccdec2efd3bc4ac775f78412}}, and quantum computing{{cite:1d6de957b01bb9d2e4db72c0cee462dea861d7fc}}.
| i | 072df33578f4559a002ab3862989da64 |
The recent discovery of superconductivity in the strongly correlated system UTe{{formula:62534752-53a1-4442-8d98-50d1729123b9}} has sparked enormous interest {{cite:c5801f2decb6ac278c4ebccf1d21b0fea819f302}}, {{cite:84790b1c90101404a4307d891f8fd528c3f6e61d}}. This orthorhombic compound is a paramagnet with anisotropic magnetic properties {{cite:493b249b5d7921e1f8270d00ae76a01b50743683}}, {{cite:c5801f2decb6ac278c4ebccf1d21b0fea819f302}}, {{cite:84790b1c90101404a4307d891f8fd528c3f6e61d}}, {{cite:44f086807c8df7c77a545c6fe4742cdb9d6d9e3c}}, {{cite:668c9954582402eb1072210f4801f23261b02885}}: the magnetic susceptibility along the {{formula:df0623c6-1f30-407d-a638-a7d9fb9fb068}} -axis increases strongly at low temperature, leading to the initial suggestion that the system is very close to ferromagnetic order {{cite:c5801f2decb6ac278c4ebccf1d21b0fea819f302}}, whereas the other directions are "hard" magnetization axes, with {{formula:33e41784-34d0-47c8-9bdc-ff4bc518688a}} being the hardest at low temperature. But the properties of the superconducting state are the most striking aspect, and in particular the strong enhancement of superconductivity when a magnetic field {{formula:98bc07ba-b568-44f5-9b40-2615ae0e4d79}} is applied along the {{formula:6966a910-8cfe-440d-bfff-18c462ffe6ce}} -axis {{cite:5ac0d847702b1a7f1f2992f0f22f85c11bcbff91}}, {{cite:75cbad6780c7b2fa80fb44e4784ac699fd076cea}}. In this case superconductivity persists in magnetic fields up to {{formula:2dd75791-3f10-4838-abc0-215bd43885e2}} T, where a first order metamagnetic transition occurs with a large jump of the magnetization {{cite:44f086807c8df7c77a545c6fe4742cdb9d6d9e3c}}, a similarly large jump in the residual electrical resistivity of the normal state {{cite:d184ca9b3d16d8f1dcf6934c943ceb11bda41899}}, and the destruction of superconductivity {{cite:75cbad6780c7b2fa80fb44e4784ac699fd076cea}}, {{cite:5ac0d847702b1a7f1f2992f0f22f85c11bcbff91}}, {{cite:3a7008dbecaec10e409323489cd64c3235240090}}. Even more remarkably when the field is tilted by about {{formula:3bb8aab9-3bc5-464b-9df0-dce6c439a317}} from the {{formula:50839d24-95f0-4f18-bef9-cff0119c1a25}} -axis in the hard {{formula:9d26e2a5-e44b-4215-8283-004fb6588619}} plane, superconductivity re-emerges above {{formula:992ce37f-d004-461d-9587-075c5c4ecdec}} T for this angle {{cite:5ac0d847702b1a7f1f2992f0f22f85c11bcbff91}}, {{cite:0b6e92ab331893d95a87cc47ffb57e53dbb33868}}. The extremely high values of the upper critical field, {{formula:e20b88f4-c64f-4829-8cd2-88fe7c362ab8}} , compared to the initial superconducting critical temperature ({{formula:6cde0944-b5f9-452d-9d3e-bcc6af27ea83}} K) suggest a probable spin-triplet order parameter, at least in some parts of the phase diagram. This enhancement of superconductivity is very reminiscent of the phenomenon found in the ferromagnetic superconductors URhGe {{cite:f48700135e3bcf86fcda591314d0c47699947d91}} and UCoGe {{cite:87d127c796254c3b77bfda5fab92564f9687db02}}. However, in these cases the reinforcement of superconductivity, when a field is applied along a hard magnetic axis, is understood as a consequence of the collapse of ferromagnetism, since an enhancement of the ferromagnetic fluctuations have been shown to be responsible for the superconducting pairing {{cite:ca3b2e415ba718302b8f61d4d5e096a9d8761387}}, {{cite:b8030eddc486947b083b9eb8e6b08da032119729}}. 
This explanation can obviously not be directly transposed to UTe{{formula:c5264004-4f5d-48ee-afde-d218ac01143e}} where no sign of magnetic ordering has been found down to very low temperatures {{cite:864853c9e935579426a147e6f0df6d5dcac7ff1c}}, {{cite:da95bcc262c29940e46efeacc67a3b21fe4dbb3b}}. Low-dimensional antiferromagnetic fluctuations were reported, suggesting that UTe{{formula:9d58c313-79d0-4694-bd52-fffad10c97aa}} , whose U ions form a magnetic ladder structure, is subject to antiferromagnetic exchange leading to antiferromagnetic correlations {{cite:81d567233184808c008fecf8ebb5f1a1aa6ada8b}}, {{cite:097aa9d3ee39de23240a67c7c8171a1fedb88263}}. The opening of a gap associated with these antiferromagnetic fluctuations was also observed in the superconducting phase {{cite:6f422b69bc49f4cad1e82036a3a5c8b59d7ad6eb}}, {{cite:00096fe6f5f43083beb1f85f40c506efc2253f0e}}. The magnetic properties of UTe{{formula:282347c2-0bc0-40a1-ae01-5879cf4051f1}} are thus associated to its unusual superconducting properties. A full description of the relationship between the two is essential to understand superconductivity in UTe{{formula:a777fe43-0b08-4167-b9bb-476884cd833c}} , and may well advance our understanding of magnetically-mediated superconductivity in general.
| i | 09e5d1440f75a055631e17e325a3e665 |
The existing methods for robust recommendation {{cite:3e1b7d6ca3a4c7249eccfebcc3046f43d9c998d1}}, {{cite:a9d170a79896474940380d8471a199efc832cc63}}, {{cite:7540ddfde83fe59ef14cbb4edca986413331392f}}, {{cite:db35307c13c0a5dbd9ce9d1a18eb577fc91d4bf6}} often inject extra noise into the training data or model parameters to deal with noisy user-item interactions, and roughly fall into two classes. One class of methods, such as CDAE {{cite:3e1b7d6ca3a4c7249eccfebcc3046f43d9c998d1}}, uses a Denoising Auto-encoder (DAE) {{cite:9cdf3eeee54a3a0b41aea51745b58a241bf22f88}} to generate robust embeddings of users and items: random drop-out noise is added to the user-item interaction vectors, and an auto-encoder is trained on the intentionally corrupted input with the objective of minimizing reconstruction errors.
| m | 6603286b553e52fe676caf4e9a4d77cc |
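A minimal sketch of the denoising-auto-encoder idea described above, assuming PyTorch is available. The hidden size, drop-out rate, and the toy interaction matrix are illustrative choices, and CDAE's additional per-user input node is omitted.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Minimal DAE over user-item interaction vectors (CDAE additionally adds
    a per-user embedding to the hidden layer, omitted here for brevity)."""
    def __init__(self, n_items, hidden=64, drop=0.5):
        super().__init__()
        self.corrupt = nn.Dropout(p=drop)      # random drop-out noise on the input
        self.encoder = nn.Linear(n_items, hidden)
        self.decoder = nn.Linear(hidden, n_items)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(self.corrupt(x)))
        return torch.sigmoid(self.decoder(h))

# Toy training step: reconstruct the clean interaction vector from the corrupted one.
n_users, n_items = 32, 100
interactions = (torch.rand(n_users, n_items) < 0.1).float()
model = DenoisingAE(n_items)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = nn.functional.binary_cross_entropy(model(interactions), interactions)
loss.backward()
opt.step()
```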
For UG2-2.1, we report baseline results from a total of six AR models: I3D {{cite:0b404d195caed0aaf417c2ba923be}}, 3D-ResNet-50 {{cite:a661bb81ee37fa2e16cb8343adeb7dbc2ba923be}}, 3D-ResNeXt-101 {{cite:a661bb81ee37fa2e16cb8343adeb7dbc2ba923be}}, TSM {{cite:456c24509ef09f3e5b1162b1a724dd14fb6a7f05}}, SlowOnly {{cite:b60b8a8bc7792b2f4d6bfef201800de6b42ebe98}}, and X3D-M {{cite:964bbe3767fa6eca4d99fbf35dfe613ad94cff32}}. RGB frames are utilized as the input for all methods, while we also report results utilizing optical flow obtained through TV-L1 {{cite:7199b6c53b7a5b8333929c7c904cdbce2485333e}} for the I3D and SlowOnly methods, along with results obtained by class score fusion {{cite:6d584e4cd40abf76c4da9b870d8c38f9041708ef}} of both RGB frames and optical flow. Meanwhile, applying enhancement methods that improve the visibility of dark videos is an intuitive way to improve AR accuracy. Therefore, we also evaluate the above methods using RGB input with four enhancement methods: Gamma Intensity Correction (GIC), LIME {{cite:c5d54710c454554c654c4b6cc16f8e8ff4b0fbe7}}, Zero-DCE {{cite:14d6057650cacda9a4db717e96cf75b56d2c9fe8}} and StableLLVE {{cite:a69715f08291c825ae214e6da7f1d7cff9054d29}}. All AR models and enhancement methods adopt the officially released versions when applicable, and all learning-based methods are written with the PyTorch {{cite:e2db40f47346b190ebc39edb653feed6430019a6}} framework. All AR models are fine-tuned from models pre-trained on Kinetics-400 {{cite:0fed75a0e1d5c6948b6bb5b520a69c483290e821}}, and trained for a total of 30 epochs. Due to constraints in computation power, the batch size is unified for all models and set to 8 per GPU. All experiments are conducted with two NVIDIA RTX 2080Ti GPUs. The reported results are an average of five experiments. The detailed results are found in Table REF and Table REF .
| r | e9b9070d4227fb2a179a87848c8152ff |
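Of the four enhancement methods, Gamma Intensity Correction is simple enough to sketch directly. This is a minimal illustration; the gamma value below is an assumed setting, not necessarily the one used in the benchmark.

```python
import numpy as np

def gamma_intensity_correction(frame, gamma=2.0):
    """Brighten a dark frame: normalise to [0, 1], apply x**(1/gamma), rescale.
    gamma > 1 brightens; the exact value used in practice may differ."""
    x = frame.astype(np.float32) / 255.0
    return np.clip((x ** (1.0 / gamma)) * 255.0, 0, 255).astype(np.uint8)

dark = np.random.randint(0, 40, size=(8, 8, 3), dtype=np.uint8)  # toy dark frame
bright = gamma_intensity_correction(dark, gamma=2.2)
```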
Reinforcement learning (RL) algorithms have achieved state-of-the-art performance
in many fields including locomotion control {{cite:1a14dc407d1521a112c4097cd81955cfc8a6b57e}}, {{cite:246e45e7f8d090b711f2223b8cb22cc0e7304847}}, autonomous driving {{cite:c8acd276ed8f3e0e7fec6ab9168c07b9564f422f}}, {{cite:a813d5f485e9d6c282cf26dcb1d679057b18fbe3}}, robotics {{cite:4da05a8a935acbc5a4833b0951e63fe887f53bea}}, {{cite:693f141c12c74edb79b10fb2d0733cd6a09a6116}}, multi-agent systems and control {{cite:66c90b48360abec33b9c1d685ff73da380016265}}, {{cite:ab364bd6741ebf7acfb44a7d96a6279269472824}}.
A model-free RL algorithm bypasses the fundamentally hard problem of system modeling,
but it suffers from the issue of sample inefficiency.
It relies solely on an agent's interactions with the environment for policy learning/optimization.
A typical strategy is to train an agent by explicitly learning a policy network aided by a concurrently learned state-value (V-value) or action-value (Q-value) network, within the actor-critic (AC) architecture {{cite:ff5d411da5cac6d1387eb9f0b3da52118743c7a6}}, {{cite:2c56c4aa8f8d195e94564afcfc015bf6a6c6b74b}}. To boost sample efficiency, an off-policy RL algorithm maintains a so-called experience replay (ER) buffer, which stores all past samples for future reuse. In an off-policy AC algorithm, actor and/or critic networks are trained using data uniformly sampled from an ER buffer.
| i | 17d7b97734a55b91bfc31e60062865a9 |
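A minimal sketch of the experience replay buffer with uniform sampling mentioned above; the capacity and the toy transition are illustrative assumptions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform-sampling experience replay buffer used by off-policy AC methods."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)   # old transitions are evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)   # uniform, without replacement
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

# Usage inside an off-policy loop: store every transition, then train the actor
# and critic on uniformly sampled minibatches once the buffer is warm.
buf = ReplayBuffer()
buf.push(0.0, 1, 0.5, 0.1, False)
```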
As mentioned above, operational semantics {{cite:3723545b5f217b77c6aeb9e39edc345734c07523}}, {{cite:623536612797549bc1d50cd515071570a9d8fce0}} captures the meaning of an executable program through the environment transitions induced by the instructions of an abstract machine. To be more concrete, we illustrate our motivation with structural operational semantics {{cite:d66457f882c586567e77b4ce8667ee5903e3e2d2}}, {{cite:b1b5bbb433659001f3bf42c4f970f43700f6c7ae}}. The meaning of assignment and composition on a simplified abstract machine can be represented respectively as
{{formula:509893a4-8877-43e5-ba0a-2f6747f23655}}
| m | 068f55457c9dbbd6ffc9f61e5909d4ec |
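The small-step reading of assignment and sequential composition described above can be mirrored by a toy interpreter. The tuple-based statement encoding and the `step`/`run` names are assumptions made for this sketch, not the paper's formalism.

```python
def step(stmt, env):
    """One small-step transition <stmt, env> -> <stmt', env'> (stmt' may be None)."""
    kind = stmt[0]
    if kind == "assign":                       # <x := a, s>  ->  s[x |-> a(s)]
        _, var, expr = stmt
        new_env = dict(env)
        new_env[var] = expr(env)
        return None, new_env
    if kind == "seq":                          # composition S1 ; S2
        _, s1, s2 = stmt
        s1_next, env_next = step(s1, env)
        if s1_next is None:                    # <S1, s> -> s'       gives <S1;S2, s> -> <S2, s'>
            return s2, env_next
        return ("seq", s1_next, s2), env_next  # <S1, s> -> <S1', s'> gives <S1;S2, s> -> <S1';S2, s'>
    raise ValueError(f"unknown statement {kind}")

def run(stmt, env):
    while stmt is not None:
        stmt, env = step(stmt, env)
    return env

# x := 1 ; y := x + 2
prog = ("seq", ("assign", "x", lambda s: 1),
               ("assign", "y", lambda s: s["x"] + 2))
print(run(prog, {}))   # {'x': 1, 'y': 3}
```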
Due to lack of space, the state-of-the-art in meta-classifiers has been omitted from the literature review. However, note that this work is focused on zero-shot learning. Hence, six standard meta-classifiers have been tried: (A) Meta-Decision Tree MDT {{cite:0d1447335accdacc0d12a980440df07860dfce6b}}, (B) deep neural network DNN {{cite:6b292ad24451dd883068d5f189d377e5aee1ad2d}}, (C) Game Theory-based approach GT {{cite:ec8cc532e631cafa164d102f9f54e8c4cbc8d1df}}, (D) Auction-based model Auc {{cite:ec8cc532e631cafa164d102f9f54e8c4cbc8d1df}}, (E) Consensus-based approach Con {{cite:c068567e3e11939ee38a8ee30584bdeed4fd5951}}, and (F) a simple majority voting MV {{cite:d76fd3a94f4581a544a7102bebef2fd7e69c1783}}. Here, classifiers (C), (D), (E) and (F) have been implemented directly as in the cited literature. However, the implementation of (A) differs from the one described in {{cite:0d1447335accdacc0d12a980440df07860dfce6b}} by not applying a third condition, which is the weight condition on the classifiers. This is due to the fact that all individual classifiers are applied simultaneously to the same datasets. Finally, the DNN (B) is implemented as a simple neural network with two hidden layers. The hidden layers and the output layer use the rectified linear activation function. The optimizer is the SGD algorithm, and the mean squared error loss function is used.
| m | 6bf56d4d9fe3b85e5793b64bed152095 |
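Of the meta-classifiers listed, the simple majority voting (F) is easy to sketch. The label sequences below are toy data; ties are broken by the first-seen label.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by simple majority voting.
    `predictions` is a list of label sequences, one per base classifier."""
    combined = []
    for labels_for_sample in zip(*predictions):
        combined.append(Counter(labels_for_sample).most_common(1)[0][0])
    return combined

# Three base classifiers voting on four samples.
preds = [["cat", "dog", "dog", "cat"],
         ["cat", "cat", "dog", "bird"],
         ["dog", "cat", "dog", "cat"]]
print(majority_vote(preds))   # ['cat', 'cat', 'dog', 'cat']
```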
The two temperature model represents the next logical step on expanding from a one temperature model, and is motivated by previous literature {{cite:9114142054a4a352bc1b512f9f962382cfe2821a}}, {{cite:0d2ce0d8cbb11c1f61aee3f2248193bf353a4834}}. However, the particular temperature values found using the two temperature model may not indicate the presence of gas at those specific temperatures (see {{cite:c4f6c228fb9da6c1cbfdb007b43da11a0fe16933}}). The spectral fits may be a simplification of a more complex physical situation with gas distributed over a range of temperatures. In that case, the specific temperature recovered with a two component plasma model may depend on the energy band used for the spectral fitting and how the plasma is sampled along the selected lines of sight. As noted in our previous comparison with the results of {{cite:ddbf8e427af559d47b8d50760e4dec47d2b89539}}, use of a two temperature versus single temperature model shifts the temperatures. Using ROSAT, {{cite:209a1c15ac79336428e5807fdb91d6f64b87ea30}} and {{cite:e9d523a04d473ec3d11969475addf4f470ed658e}}, find temperatures of {{formula:1ea7bd91-1a9f-4e8e-9fad-ec0f2d3d3517}} keV and {{formula:a404f41a-1d91-4d08-ad21-70cc3adae95d}} keV. These results are not actually inconsistent due to the difference in the energy range of ROSAT versus HaloSat. HaloSat is unable to rigorously detect the softest component seen by ROSAT, while similarly ROSAT cannot rigorously detect the hot component we've seen. As such, the difference in temperatures of the overlapping component would be due to that component covering excess emission from the respective missing third component in each fit. This could explain why the temperature of the warm component is higher in the ROSAT study than in any of the HaloSat studies. It appears there may actually be at least 3 temperature components in the CGM if one looks at a wider energy range, although it is likely that these components are just peaks of a broader distribution of gas temperatures. Some examples of alternative models that could be investigated in the future can be found in {{cite:b521388cd2e571b170657e66db6f1fb19b62ef3c}} and {{cite:f6e8bdcf93383c457cc45557645f560d25d1fdd5}}.
| d | ee8f006e2be48ccb5e1bcf4fce932bb6 |
First, in Sec. , we consider the absolute retracts of bipartite graphs and some important subclasses of the latter. We observe that in the square of such graph {{formula:a016faf5-2800-46cc-b665-31afbc9ca6ee}} , its two partite sets induce Helly graphs. This result complements the known relations between Helly graphs and absolute retracts of bipartite graphs {{cite:c499f88812ba9eaddd59894fb11d2aa0f186cd80}}. Then, we show how to compute the diameter of {{formula:abe54fbf-f2a6-43f5-ac59-3318ff67c95d}} from the diameter of both Helly graphs (actually, from the knowledge of the peripheral vertices in these graphs, i.e., those vertices with maximal eccentricity). Recently {{cite:82bbd9cda0d3e4dddb92ea309e2ab4163adfb79e}}, we announced an {{formula:27e9758f-c21b-45dc-ac42-7b4d4bc10f76}} -time algorithm in order to compute all the eccentricities in a Helly graph. However, extending this result to the absolute retracts of bipartite graphs appears to be a more challenging task. We manage to do so for the subclass of chordal bipartite graphs, for which we achieve a linear-time algorithm in order to compute all the eccentricities. For that, we prove the stronger result that in the square of such graph, its two partite sets induce strongly chordal graphs. Here also, our result complements the known relations between both graph classes {{cite:3f6c89a7495b68e2d8946fc6075689633cf280d0}}, {{cite:b6b2c83a7d132d7db093655a11c1a8f6b98ba842}}.
In Sec. , we generalize our above framework to the absolute retracts of {{formula:011299ed-1f27-4a49-a003-9e7a07bc6378}} -chromatic graphs, for any {{formula:51dd1c4b-3cbe-4255-805d-780dcafa444e}} . Our proofs in this part are more technical and intricate than in Sec. . For instance, we cannot extract a Helly graph from each colour class anymore. Instead, we define a partial eccentricity function for each colour (i.e., by restricting ourselves to the distances between vertices of the same colour), and we prove that the latter functions almost have the same properties as the eccentricity function of a Helly graph.
Our positive results in Sec. and rely on some Helly-type properties of the graph classes considered. However, our hardness result in Sec. hints that the weaker property of being an absolute retract of some well-structured graph class is not sufficient on its own for faster diameter computation. Specifically, we prove that under SETH, there is no {{formula:92d9f0d5-7011-4185-8450-b4ed2c47b260}} -time algorithm for the diameter problem, for any {{formula:32b805bc-1a09-4d95-a50c-15313725f48f}} , on the class of absolute retracts of split graphs. This negative result follows from an elegant characterization of this subclass of split graphs in {{cite:01804df349bed3ff96d5e76201af455657ce4774}}.
Finally, in Sec. , we briefly consider the absolute retracts of planar graphs. While there now exist several truly subquadratic-time algorithms for the diameter problem on all planar graphs {{cite:cedc7c97204ca827f9d6d3b39ef6809aa2396dc5}}, {{cite:7931dea8f1b9daccac9872a70629f58ede82b612}}, {{cite:cd50a03b9a13fc5b3b88b1ecd1bd88b888bf5b73}} – with the best-known running time being in {{formula:b235b989-614a-4900-ae1b-041eba8414e6}} – the existence of a quasi linear-time algorithm for this problem has remained so far elusive, and it is sometimes conjectured that no such algorithm exists {{cite:cedc7c97204ca827f9d6d3b39ef6809aa2396dc5}}. We give evidence that finding such algorithm for the absolute retracts of planar graphs is already a hard problem on its own. Specifically, we prove that every planar graph is an isometric subgraph of some absolute retract of planar graphs. This result mirrors the aforementioned property that every graph isometrically embeds in a Helly graph {{cite:def918d4fb6b98df531419ab4763799d3139ce06}}, {{cite:92496996ec6eb016b89912cd4d9fbcd01b18decd}}. It implies the existence of some absolute retracts of planar graphs with treewidth arbitrarily large and inner vertices of degree three. Doing so, we rule out two general frameworks in order to compute the diameter in quasi linear time on some subclasses of planar graphs {{cite:8e480041506d66a9bffc5a20bfb70e94fe48c40f}}, {{cite:76005377171ecdb930307e9a9e3fcc3fa846a175}}.
| r | 9fcea7a764773055f0c661d4691bfa87 |
On the mathematical side, the problem of SMEFT tree-level matching
up to dimension-six terms was completely solved in Ref. {{cite:88a39565a1da2754c285137822d27b1d1f96d8df}},
which provided the complete dictionary of SMEFT contributions for
all possible tree-level mediators. The key observation behind this
work is that, for any fixed maximal effective operator dimension,
the number of extra fields and couplings which can give rise to SMEFT
operators at low energies is finite, so that the program of Ref. {{cite:88a39565a1da2754c285137822d27b1d1f96d8df}}
can, at least in principle, be carried out up to any effective operator
dimension.
| m | 689470b9d2f73a00c1c2fa4852c1add1 |
Accelerated methods for multiobjective optimization have not yet been sufficiently discussed from a theoretical point of view in the literature. In {{cite:fa75f6507917cd90744bcb6a44454d4a21b17442}} El Moudden and El Moutasim propose an accelerated method for multiobjective optimization which incorporates the multiobjective descent direction by Fliege {{cite:3d26f21c0bba0bba16480fc2b5466fd44c5df715}} and the same acceleration scheme as in Nesterov's accelerated method {{cite:373cf5905e7f91eb0d1099f83af69664dc4b2fe2}}. El Moudden and El Moutasim prove convergence of the function values with rate {{formula:187a2af1-855f-4f62-98d5-5bf30fd4d752}} . Their proof relies on the restrictive assumption that the Lagrange multipliers of the quadratic subproblem, which is used to compute the step direction in every iteration, remain fixed from a certain point on. Under this assumption, the method simplifies to Nesterov's method for single objective optimization problems applied to a weighted sum of the objective functions with fixed weights. Only recently, Tanabe, Fukuda and Yamashita derived an accelerated proximal gradient method for multiobjective optimization problems in {{cite:f439f35968bd1ea39e67e17aaa20bc266eb1017d}}. They developed their method using the concept of merit functions (see Subsection REF ) and show that the function values converge with rate {{formula:ddd5d1db-9273-4771-99fb-aca38d5c396c}} without additional assumptions on the Lagrange multipliers.
| m | 3655765598e91ee0de80d77944877429 |
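A minimal sketch of the scheme the first method reduces to under the fixed-multiplier assumption, namely Nesterov's accelerated gradient applied to a weighted sum of objectives. The quadratic objectives, the weights, and the Lipschitz constant are illustrative assumptions.

```python
import numpy as np

def nesterov_weighted_sum(grads, weights, x0, lipschitz, n_iter=100):
    """Nesterov's accelerated gradient applied to F(x) = sum_i w_i f_i(x), i.e.
    the scheme the multiobjective method reduces to once the Lagrange
    multipliers (here `weights`) are held fixed; F converges at rate O(1/k^2)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    step = 1.0 / lipschitz
    for _ in range(n_iter):
        grad = sum(w * g(y) for w, g in zip(weights, grads))
        x_new = y - step * grad
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Two convex quadratics f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
grads = [lambda x: 2 * (x - a), lambda x: 2 * (x - b)]
x_star = nesterov_weighted_sum(grads, [0.5, 0.5], np.zeros(2), lipschitz=2.0)
print(x_star)   # approaches the midpoint (0.5, 0.5)
```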
Rather than plotting the full distribution, it is also common {{cite:3a26882bccd4b03c26ec340420e940453d5624b4}}, {{cite:aa8ebe20641a15f7a0b73d3046df8daa5e9b7e06}}, {{cite:c9e2ed80c91068a3a5c792ba0987ea552fb7009f}} to compare the posterior with the ground truth parameters of synthetic data.
Such a comparison does not require the true posterior to be tractable.
A good estimate by this criterion concentrates on the true value, or assigns high probability to the true value {{cite:f1d5e0c4794ddffde74c231b105658db9419c836}}, or has mean near the true value {{cite:86183980e9b2088967519f82372c8838deb6bf8f}}.
Like plotting the full posterior, comparing to a known ground truth is easy to do and can diagnose significant algorithm failures.
It does not require knowing the posterior and can be supplemented with calibration tests {{cite:f1d5e0c4794ddffde74c231b105658db9419c836}} or cross-validation {{cite:374f859dadd8081410114ba3c8391ce9c9ad8fb4}}.
Alone, however, ground truth comparison provides limited information. A distribution may have high density at the ground truth parameters while still being a poor approximation to the true posterior, either by underestimating uncertainty or because the data is atypical for the true parameters.
| m | 3cc8572fc1d3c7f7918cc9ee3f262963 |
Although a promising quantitative explanation for the researched phenomenon is developed, it would be premature to conclude that EpiRank is by any means a sufficiently accurate index of urban epidemic hazards. The current dynamic network model draws on little besides the two aspects, urban population and inter-city transportation, and too many real-world factors are left out. Validation of the EpiRank results is also difficult to conduct in a systematic and rigorous way, beyond using the six epicenters as the ground truth. Despite being a mathematically consistent and empirically effective approach, the proposed simulation framework and the constructed EpiRank index need further analysis and extensive tests in various settings (e.g., to investigate the situation in the US {{cite:2ab0db2db7ee0847af24dbff41de3ccbc0ec795a}}) before their powers and shortcomings can be substantially uncovered; this study only serves as a first attempt.
| d | 61fe2bb00d3f0bdc338f6d8926f6a8d6 |
where {{formula:0cb98e56-797f-4f65-8785-bf7865ed8aaf}} and {{formula:1de47bb2-d49f-492a-b432-72b1bf10a004}} are their 2nd Fourier coefficients, i.e., total transverse rapidity flow and radial flow {{cite:ba44e304cf0f7c5c383561aca6e1503d92725d56}}, {{cite:2197f44d75a453d379ccac63d4296dde6137acb4}}, {{cite:8b7c854f849842d28e9eaef9fecd815e6d2bfaaa}}, respectively.
| m | 7581a9f1ab42d5726837d835755b3669 |
Other methods such as {{cite:1c64cf815859175352e445af4e35b61fa15a4589}}, {{cite:4cffc18ef87f1c7d1fb4f573502fc4e5c3204bc9}}, {{cite:f97bf0cb8a3d6b20827693037d00d60f57310273}}, {{cite:dc2603b33ea3c91cbd4203d41731076a60940ad8}}, {{cite:ea8f2b59b1209d2ad4345c4ad686229c13625772}}
and {{cite:f6b9a990b03ab7dbd8cbc454e19d9fd3c8c03215}} attempt to address the scarcity of existing ground truth labels by
adopting a semi-supervised or unsupervised approach. These include methods which
attempt to perform segmentation based on a single datapoint as demonstrated in {{cite:ea8f2b59b1209d2ad4345c4ad686229c13625772}}.
However, approaches such as these only factor in low-frequency information and are designed for
landcover classification. {{cite:f97bf0cb8a3d6b20827693037d00d60f57310273}} utilizes point-based
pseudo labels to segment objects of interest in imagery using a network with a localization
branch and an embedding branch to generate full segmentation masks. While similar in nature to P2P,
{{cite:f97bf0cb8a3d6b20827693037d00d60f57310273}} optimizes a squared exponential distance metric between same-class
pixels in embedding space to generate segmentation masks, whereas P2P frames the segmentation task
as a simple image-to-image translation problem with localization constraints
enforced by an adversarial objective. Our approach thus circumvents the need for pair-wise
calculations from a similarity function with the additional benefit
of a separately tunable constraint.
| m | 8575c205bb1961400f0046b39ba5ac6d |
Each method described above was applied to deep CNNs tested on visual classification. Models were trained on the MNIST handwritten digits dataset, the CIFAR-10 image set, and the ImageNet dataset. On all three datasets, we implemented relatively simple baseline architectures following the basic paradigm established by LeCun et al. {{cite:fb21b202a3ee2cb33984e32a77a346895a419643}}, namely several convolutional layers each followed by a pooling layer, with one or several fully-connected layers on top. To test the performance of our proposed methods on models with increased depth, we also applied them to more complicated architectures with nine convolutional layers {{cite:10c3f69451a40293825b7e65cff68c56786ffa75}} on CIFAR-10 and ImageNet. The exact architecture details are presented in Supplementary Tables REF and REF . In addition to FA, we trained models with direct feedback alignment (DFA) and a new method we call dense feedback alignment (DenseFA), which adds residual feedback connections from every downstream layer. Unfortunately, the memory requirements of these methods precluded their use on the larger models we trained. To further investigate the relationship between FA and BP, we also investigated the performance of models that were trained with BP but with either added noise (BP + Noise) or with weight matrices that were forced to align with arbitrary random matrices (BP + Alignment). These techniques and their motivations are discussed in greater detail in Section REF . On ImageNet, as a means of circumventing the need for normalization altogether, we also experimented with BP (BP Const.) and FA (FA-uSF Const.) models with weight norms constrained to be constant over training (see Section REF for details).
{{table:79276080-29fc-4fab-9203-37e1fab96bc7}} | r | 0ed5a41f9452166a3593654740e5a8d3 |
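A minimal sketch of the feedback-alignment idea behind the FA models above: the backward pass projects the output error through a fixed random matrix instead of the transpose of the forward weights. The two-layer architecture, squared loss, and learning rate are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

# Two-layer network trained with feedback alignment (FA).
n_in, n_hid, n_out = 20, 32, 5
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B2 = rng.normal(0, 0.1, (n_out, n_hid))   # fixed random feedback weights, never trained

def fa_gradients(x, y):
    h = relu(W1 @ x)                       # forward pass
    e = W2 @ h - y                         # output error under squared loss
    dW2 = np.outer(e, h)
    delta_h = (B2.T @ e) * (h > 0)         # FA: B2 replaces W2 in the error backprojection
    dW1 = np.outer(delta_h, x)
    return dW1, dW2, 0.5 * float(e @ e)

x, y = rng.normal(size=n_in), rng.normal(size=n_out)
for _ in range(200):                       # loss decreases as W aligns with the random feedback
    dW1, dW2, loss = fa_gradients(x, y)
    W1 -= 0.05 * dW1
    W2 -= 0.05 * dW2
```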
Mahalanobis Detector
The above three methods are based on confidence scores. Another approach is to formulate the problem as unsupervised anomaly detection.
{{cite:dd171df2b7cea410f8c15cfdcd420e4753ed06d9}} proposed to model the distribution of each intermediate layer's activations by a Gaussian distribution for each class, but with a covariance matrix shared among the classes. Given an input, the Mahalanobis distance with respect to the predicted class is calculated at each layer. A score for OOD is given by a weighted sum of the distances calculated at different layers. The weights are predicted by logistic regression, which is fitted by assuming the availability of OOD samples.
To remove this assumption, a variant is suggested that generates adversarial examples from ID samples and regards them as OOD samples.
It is also reported in {{cite:bbda6d96d4bdd081355fbba58a8c7cf815fd0265}} that setting all the weights to one works reasonably well. We evaluate the last two methods that do not need OOD samples. Although the original method optionally uses input perturbation similar to ODIN, we do not use it because our experiments show that its improvement is very small despite its high computational cost.
| m | 21a53e0527c549cebb7fa992cc1bce61 |
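A minimal sketch of the Mahalanobis detector described above for a single layer: class-conditional means with one shared covariance, and a score given by the negative minimum distance over classes. Summing such scores over layers with all weights set to one gives the variant evaluated here. The toy features and the ridge regularization constant are assumptions.

```python
import numpy as np

def fit_gaussians(features, labels):
    """Per-class means with a single shared (tied) covariance for one layer's activations."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    return means, np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))   # precision matrix

def mahalanobis_score(x, means, prec):
    """Negative minimum squared Mahalanobis distance over classes:
    higher score means more in-distribution."""
    dists = [(x - mu) @ prec @ (x - mu) for mu in means.values()]
    return -min(dists)

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 8))
labels = rng.integers(0, 3, size=200)
means, prec = fit_gaussians(feats, labels)
score_id  = mahalanobis_score(feats[0], means, prec)
score_ood = mahalanobis_score(rng.normal(5.0, 1.0, size=8), means, prec)
print(score_id, score_ood)   # the OOD point receives a much lower score
```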
We train our method on the ShapeNet {{cite:c2d85eb027d3f6afe5ef451801689cd27e9716bd}} data set. In order to achieve better generalization we use objects from all classes. In each step the object class, and a corresponding object from it, are selected randomly. We follow this strategy for both the pre-training and training phases. To assess our agent we conduct two comparisons with different baselines. In the first experiment we compare our method with the results of slice-based surface reconstruction methods (Bermano et al. {{cite:11a0cd9aa87d6295e7f22865a9951ba398e22035}} and Zou et al. {{cite:cea27db1dee745958a839e24c572ef67141b0ae7}}), a point-cloud-based method {{cite:416f1e3fa1aa222b6faf0dbbdc9fd7f333c7a7ff}}, and a baseline variant that does not use projection images. Both slice-based approaches are built on the level-set extraction of some indicator function. Additionally, Zou et al. {{cite:cea27db1dee745958a839e24c572ef67141b0ae7}} search for an optimal topology that complies with the user-prescribed genus. From this perspective Zou's method is similar to ours; however, our approach towards solving the branching and connectivity problems is completely different: we focus on parsing the 3D shape into smaller regions, which increases reconstruction accuracy. While Bermano's method does not search for the optimal topology, it additionally proposes a solution for unknown and multi-labelled regions. Figure REF illustrates the comparison of these methods on shape reconstruction from the given contours. In this example, 3 out of 5 contours are downloaded from the Internet while the 2 others represent ShapeNet objects. Here we consider two different settings of our method for 3D shape reconstruction: sparse input, which represents the planar cross sections shown in Figure REF , and dense input with twice as many slices. As can be seen from Figure REF , our method captures difficult topologies and geometries more accurately in both settings. Also, the quality of reconstruction is similar for all objects in terms of both the IoU and CD metrics, even though the model was trained only on the ShapeNet data set (Table REF ). It should also be noted that none of the baseline methods generated results for the dense inputs, while our method works effectively with both dense and sparser inputs. This indicates the generalization capability of our method. It should be noted that our IoU calculation takes only surface correspondence into account.
| r | e65ea8b2c3c0bbdca9d437a5628a102d |
For evaluation we use BLEU {{cite:ccc6c9928bb67ccb40c699ff7bf95ce97fd9eed2}}, ROUGE {{cite:0247153b7479e978e481dbe573bb7c6fdafa55d1}} and METEOR {{cite:9cb180b98b16dfea3b7fa2fa905d3674302058c3}}. For BLEU, we report BLEU-1, 2, 3, and 4 scores, and for ROUGE we report the ROUGE-L F1 score. These metrics allow us to compare directly with previous works. METEOR is calculated in addition, as it demonstrates higher correlation with human evaluation than BLEU on several MT tasks. All reported results, unless otherwise specified, are averaged over 10 runs with different random seeds.
| d | fbafceb4ba1aa1d85316ef99e1dcde3f |
When we compare the RS and NC network methods to the random guess method, we find that all network methods, except RESCAL and DEDICOM, are significantly (p-value{{formula:e97dff84-c470-409f-8224-42e296a96688}} 0.05) more accurate in terms of all evaluation measures (Figures REF , REF and Supplementary Figure S5). The most accurate method that we evaluate, DMF, achieves gains of 115% in terms of precision, 150% in terms of recall, 131% in terms of F1 score, and 21% in terms of accuracy over the random guess method. Note that when we compare an approach, say DMF, to the random guess method, we measure the gain (i.e., relative change) of DMF over the random guess method as {{formula:4cd82141-d03f-487c-a90a-83e1b6815440}} . For example, if {{formula:0bfb50da-1c3d-4d6b-9bb9-326e10fb1c64}} is 0.612 and {{formula:8fc0b854-c403-4de4-bfba-9086bb1c51ff}} is 0.245, the gain is {{formula:c678ca13-233e-4e8e-9a36-28a60df2921a}} . RESCAL and DEDICOM do not accurately predict depression - RESCAL and DEDICOM show similar (i.e., not significantly different) values as the random guess method, in terms of all evaluation measures (Figures REF , REF and Supplementary Figure S5). Note that the superiority of DMF over the other two RS methods (RESCAL and DEDICOM) is not surprising: DMF was already shown to perform better than RESCAL in recommendation tasks {{cite:e07f76cc70c36eac620284699d92ac0c377fd69f}}. In turn, RESCAL was already shown to perform better than DEDICOM in recommendation tasks {{cite:7b7b778a31a7059c97ac358264b17426cf3d236b}}. Therefore, we expected DMF to work the best of these three RS approaches. However, it is surprising that in our task of predicting mental health, RESCAL and DEDICOM produce random-like results. Also, it is at least somewhat surprising that DMF is superior than the fourth considered RS approach, HERec, given that the latter is a more recent approach than the former. Examining why RESCAL and DEDICOM produce random-like results and why DMF is superior to HERec is non-trivial, given the heuristic-like nature of these methods in the recommendation task, without many if any theoretic guarantees. As such, this is out of the scope of the current study.
{{figure:3b676c42-fec5-4686-9428-aecdea5f90cb}} | r | 148d631ce4677e602c2e916823118e63 |
The aim of this study was to evaluate the bias of face-based BMI prediction models across four gender-racial groups (Black Females, Black Males, White Females, and White Males).
Experimental results suggested a performance differential of facial-analysis-based BMI prediction tools across these groups.
However, contrary to bias analyses of other computer vision systems, which report the lowest performance for dark-skinned people {{cite:47a9ee16ded0e0fda12786df0226baff66c9d69d}}, {{cite:3d52cee863a04df905ee56a519b266ef280f91f4}}, {{cite:b9d3d9a81c1404264fbdd066ff582a123447d3ad}}, {{cite:092e98b25059187d269a488617570901fb3bc4f2}}, Black Males obtain the lowest error rate in BMI prediction from facial images in this study. The psychology-related features suggested that as the BMI increases, the changes in the facial region are more prominent for Black Males than for any other gender-race category. This assists BMI prediction tools based only on facial image analysis in making more accurate predictions for Black Males than for other gender-racial groups. In our experiments, males outperformed females in BMI prediction, which is also the general trend reported for other computer vision applications {{cite:47a9ee16ded0e0fda12786df0226baff66c9d69d}}, {{cite:3d52cee863a04df905ee56a519b266ef280f91f4}}.
| d | 897e67d711ec7af6e4dbb306dd414050 |
Films of known thickness were measured by compressing powder
within apertures between the KBr discs. Apertures were made in the
{{formula:0d08f786-9cd2-4147-9af2-5dedc5cd0e75}} -thick microphone foils measured by {{cite:1d4d4a1259458c3adb34b5e4e4d93b8b4a58fdb5}}
and with 6 -thick tin gaskets used for IR cells (these were
measured with a micrometer). The very thin films required to obtain
unsaturated carbonate peaks were obtained by gently squeezing the
discs together and rotating them to make a uniform film. Spectra of
each sample were taken in the following order: microphone foil, a repeat after
squeezing, the 6 gasket, and finally, with the gasket taken out, the very
thin film. The thinnest calcite film ({{formula:0ecb3663-87c6-4aa2-a9eb-cda1b5965fef}} ) consists of grains
remaining in the scratched KBr discs after cleaning with a lens paper.
| m | d0f6841e4b575ad43f0fbed00a0c2c2b |
We compare our proposed algorithms CORAL-CEM and MMD-CEM to ERM {{cite:d0073302f022bb70ce844615a60473f6de7479de}}, IRM {{cite:063aea776f3093a029aed156c4563e4db2657423}}, IB-ERM {{cite:c56a5eabb1c2f8f19cb706a8726fb1c438fb8282}}, IB-IRM {{cite:c56a5eabb1c2f8f19cb706a8726fb1c438fb8282}}, MMD-IRM {{cite:d84d2a13fb4e9764f50f6af2a497ff2617106cc1}}, and CEM {{cite:bc9161452b22fcbbfbf21775a9ec999ee410ce10}}.
Since the code for the MMD-IRM algorithm was not released by its authors, we implemented it according to the description in {{cite:d84d2a13fb4e9764f50f6af2a497ff2617106cc1}}.
| m | 13f5386f6b4d3d53d588bf57687a9042 |
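A minimal sketch of the covariance-matching term that a CORAL-style penalty is built on; the feature dimensions and toy environments are illustrative, and the exact penalty used in CORAL-CEM (e.g., whether a mean-matching term is included) may differ.

```python
import numpy as np

def coral_penalty(feat_a, feat_b):
    """CORAL-style alignment penalty between two environments' feature batches:
    squared Frobenius distance of their covariance matrices, scaled by 1/(4 d^2)."""
    d = feat_a.shape[1]
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    return np.sum((cov_a - cov_b) ** 2) / (4.0 * d * d)

rng = np.random.default_rng(0)
env1 = rng.normal(0.0, 1.0, size=(128, 16))
env2 = rng.normal(0.0, 1.5, size=(128, 16))   # distribution-shifted second environment
# In CORAL-CEM such a penalty would be added to the conditional-entropy
# minimisation objective, summed over pairs of training environments.
print(coral_penalty(env1, env2))
```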
Real-world populations.
In Table REF , we report values of the rescaled average RSE distance {{formula:9b162c30-5d96-471c-b6b5-46699f760d82}} for various real-world networks together with their degree-heterogeneity {{cite:174ee657f82ca0271f6d18aaa35b92d6758a5ea6}} and clustering coefficients {{cite:56cac892c2d9d5a74172655a620676e66dc21081}}. As expected, low values of {{formula:ee07a7e8-85c9-4493-b053-1dc2a4a10bc2}} are correlated with high heterogeneity and low clustering. More generally, {{formula:90d51ba4-3ed0-4a31-9c67-87db1ba18a8e}} tends to decrease as (i) the degree heterogeneity increases and (ii) the clustering decreases. Details concerning the underlying network data can be found in the Appendix (see also the SI).
{{table:0016d523-f2bb-45dd-ad00-e10e01e5fff2}} | r | 6abea548571641c6979eed39d91297b1 |
We randomly sample {{formula:bc0678e8-b182-4277-8a84-cfe84610ddc1}} sentences from the Toronto BookCorpus {{cite:b08c1b33fd93f816df2fd79b0adb85f6b4eaa4dd}}—a collection of English books of various genres—for computing Mantel tests using Levenshtein distance, both raw and normalized, as the textual distance.
We repeat the procedure 5 times before averaging results.
We use the same operations as previously to control for synonyms and stop-words; i.e., we test whether removing stop-words altogether and normalizing words based on their WordNet synsets improve mfc.
| m | c6ddcba6b895e123b86a43b1e46071ab |
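A minimal sketch of a Mantel test using raw Levenshtein distance as the textual distance, as described above. The toy sentences and the stand-in semantic distance matrix are assumptions; the normalized variant would simply divide each edit distance by the longer sequence length.

```python
import numpy as np
from itertools import combinations

def levenshtein(a, b):
    """Edit distance between two token sequences (or strings)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = curr
    return prev[-1]

def mantel(d1, d2, n_perm=999, rng=None):
    """Mantel test: Pearson correlation between the upper triangles of two
    distance matrices, with a permutation p-value."""
    rng = rng or np.random.default_rng(0)
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(d1))
        count += np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1] >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)

sentences = ["the cat sat", "the cat sat down", "dogs bark loudly", "dogs bark"]
meanings  = np.array([[0.0, 1, 5, 5], [1, 0, 5, 5], [5, 5, 0, 1], [5, 5, 1, 0]])  # toy semantic distances
n = len(sentences)
text_dist = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    d = levenshtein(sentences[i].split(), sentences[j].split())
    text_dist[i, j] = text_dist[j, i] = d
print(mantel(text_dist, meanings))
```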
The advantage of GPS is twofold. First, the sampling is guided by the designated adaptive sampling policy, enabling more effective message propagation than in GraphSAGE, whose node representations are generated through random sampling of the neighborhood of each node. Other importance-based sampling works adopt a different strategy, where inter-node probabilities need to be calculated and optimized to reduce the variance caused by sampled nodes {{cite:2ef25f8d03ba85b51f3800e664523b9b2d82ab7f}}, {{cite:82300b91fab5f96a1202b91ee6d75f4c414e4dd3}}. However, this incurs a significant computation overhead in AS-GCN {{cite:2ef25f8d03ba85b51f3800e664523b9b2d82ab7f}}, and the latest work GraphSAINT splits sampled nodes and edges from a bigger graph into subgraphs on which the GCN is applied, causing unstable performance on small molecular graphs (details are given in the experiment section). GPS, on the other hand, enjoys a more stable performance and high accuracy at the same time.
| d | 6339e01be6bc980068e99e605c65c033 |
Unlike the previous research that combines two independently trained systems in parallel, in this paper, we propose a two-pass hybrid and E2E cascading (HEC) framework as shown in Fig. REF . A conventional hybrid system is built as the first-pass model, which outputs segmented audio and its corresponding N-best hypotheses. An attention-based encoder-decoder (AED) model {{cite:a009cffb1a23e476162440887172eb4545b97f12}}, {{cite:db28faee10f581cd7caba6f5d5e83ad5e1ed2de3}} is then trained as the second-pass model that takes the segmented audio and N-best hypotheses as input and integrates them into the decoder module through attention mechanisms. By doing so, the two systems are combined more tightly, as the second pass is fully aware of the first pass's output during training. This cascade design also allows the second system to act as an error corrector for the first system, which parallel-combination systems cannot do.
| i | 9327f34d150e76a2bca4e1b4febd230d |
While our framework leverages the work of {{cite:6a2df1e67f39c57c43133c7c8c9790c0f47cc04b}} to enter the SNN domain, this work also introduces novel results and further innovations. In the {{cite:6a2df1e67f39c57c43133c7c8c9790c0f47cc04b}} experiments, results were shown for networks with plastic and neuromodulated synapses on only the recurrent weights. In their experiment, the cue association process was iterated for 200 time-steps without introducing any noise. They show that only modulatory variants of ANNs with fixed feedforward weights and neuromodulated self-connecting recurrent weights are capable of solving this task. In our experiment, we extend a similar task to the spike domain and introduce a significant amount of sensory spike-noise. Additionally, the time dependency is more than doubled. We show that not only are neuromodulatory feedforward weights without recurrent self-connections capable of solving this task, but so are plastic feedforward weights. We also show that the introduction of spike-noise does not decrease training convergence. On the cue association task, we show that with Oja’s rule, neuromodulatory signals drastically reduce spike-firing rates compared with the non-modulatory variant. This reduction in activity does not apply to BCM, which has a natural mechanism for both potentiation and depression. Finally, our experiments showcase a meta-learning capability to adapt beyond what the network had encountered during its training period on a high-dimensional robotic learning task.
| d | bf92606d3a14c93eccf423ce5c42d8bf |
Discriminative patches are the patches that were correctly classified by the model with high confidence. The softmax probabilities of the predictions from the classification model were used to obtain the confidence scores. Despite achieving better performance, modern neural networks tend to be overconfident. This can be attributed to the increase in width and depth compared to older neural networks like LeNet, and to methods like batch normalization and weight decay. Thus, the predicted probabilities are not representative of the true correctness likelihood, which calls for a calibrated confidence score. To this end, we use a straightforward calibration technique: Temperature Scaling {{cite:378a148b2ce56063537bd311d6d6cbc54f130304}}. The confidence score is calculated as
{{formula:a150ffe3-85b2-4b94-8394-fb63159dbc36}}
| m | cbd602408b64f2f8da5bda9ee32974c5 |
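A minimal sketch of temperature scaling as used above to calibrate the softmax confidences. A grid search over T stands in for the usual LBFGS fit on a held-out set, and the toy logits are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    probs = softmax(logits / T)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Single scalar T fitted on a validation set by minimising the NLL."""
    return min(grid, key=lambda T: nll(val_logits, val_labels, T))

# Calibrated confidence of a patch = max softmax probability of (logits / T).
rng = np.random.default_rng(0)
val_logits = rng.normal(size=(500, 4)) * 4.0          # over-confident toy logits
val_labels = rng.integers(0, 4, size=500)
T = fit_temperature(val_logits, val_labels)
confidence = softmax(val_logits[:1] / T).max(axis=1)
print(T, confidence)
```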
Our interest in the graph {{formula:da133439-b270-4145-a3fc-366918be5846}} was triggered by the following question, which is related to Schramm's locality conjecture, see {{cite:e8fac29c329d592198c72f4cae99e15a418454c4}} and also {{cite:d33062ab07e5c34ca384cf430f1a7025f78282e4}}, {{cite:a27ae3e677b899de7bbc0d16eba61f69751458e3}}, {{cite:0db3ce9ec7f972b485324b4787a8fe90a5385dc0}} for some recent progress.
Suppose that we are given an integer {{formula:29812709-1499-4ac0-9d30-78b43db8b66f}} and a real {{formula:e5c8bc0c-f349-4571-95ed-54ec7e70b3f0}} . Then, remove all vertices of the random geometric graph {{formula:86bafe52-4bc1-490b-8977-dd2c0a99636e}} which have degree larger than some constant {{formula:32dff6fe-71bc-4ea4-a1a9-d395ebd70acf}} together with all the edges emanating from them. This defines a subgraph of {{formula:789b3ac8-dc4a-4d76-8712-553c5a526d33}} in which all vertices have degree bounded by {{formula:7f7b8495-727a-4e1a-ab14-dfcf2a498d2b}} . Moreover, as {{formula:7d6d13d8-5339-45fa-8ae5-a00121bf9b9b}} , these random subgraphs (say rooted at the origin) converge locally in the Benjamini–Schramm sense to {{formula:1479ee32-a02e-4c68-92d7-def1d3e1f36f}} (see e.g. {{cite:e77181833443afd168a70aa358baeef85c21101e}}, {{cite:fe0c1e38707e19191e5f23fef12cb20b361880a2}} for more on this notion of local convergence). Thus, we were initially aiming to understand whether the bond percolation thresholds on these subgraphs converge to {{formula:962d25cc-9001-4e61-874b-9414f38bb2e7}} , the bond percolation threshold of {{formula:0de2c690-86b3-4871-8a98-79fdefb99d33}} . Our first main result confirms a more general version of this statement. We remark that, while Schramm's locality conjecture was stated in terms of vertex-transitive graphs, we assume translation invariance of the distribution of our graphs. First, a random set embedded in {{formula:0bebb0a0-abc0-42e2-b629-31a3ae9234da}} is said to be locally finite if for any bounded domain {{formula:207404a3-521c-454f-8b6d-fd879b01e6e6}} , the restriction of the set to {{formula:a7c894e2-c27d-4a9f-a02d-1af2a69bf53d}} is almost surely finite.
| r | 69eababde1e9f3187d4a7fc4c86dcf24 |
Recently, the success of deep learning in the fields of image and activity recognition has motivated its application to gait recognition. The basic feature representations of these methods are also gait templates. {{cite:d4a2646992cff49dc06c29b7b98585f582a7e2bb}} employed a CNN (Convolutional Neural Network) to define the degree of dissimilarity between two GEI templates, i.e., a probe GEI and a gallery GEI. Deep learning requires a lot of training samples, so they used the largest gait dataset publicly available, which is the OU-ISIR-LP dataset. This method, named GEINet, produced an accuracy of 95% for same-angle and 80-95% for cross-angle performance. {{cite:19b58cd358ff522f06e7fa98902a8cbb0fc41b96}} proposed the DeepGait design to outperform GEINet using the VGG {{cite:a630b08b5f293d074b5da777883957bc8fbe16c4}} deep convolution model and a Joint Bayesian model for view invariance. Using the same dataset, DeepGait achieved gait identification rates of up to 98%, and its cross-view identification accuracy ranges from 88 to 98%. A more thorough experimentation using CNNs was conducted by {{cite:c740cf48a62056daf1757d216d039ce9f2673069}} on both the OU-ISIR-LP and CASIA-B datasets. They used three different CNN configurations for gait identification. Their experimentation revealed that the ensemble of the networks with GEI and additional temporal features gave the best accuracy. Though their cross-view normal recognition performance became the state-of-the-art, their accuracy for covariate factors is relatively low. The average CCRs with SetB ranged from 80 to 90%, and with SetC from 60 to 75% across view angles.
| m | 6ae272e9bee7a1539c76ea794f5f1967 |
The second set of experiments compared our proposed framework with other baselines in the setups of linear evaluation, fine-tuning and few-shot transfer learning. In addition to our model (SSL with LPA), we studied 4 baselines: SSL with LAA, supervised models trained without augmentations, and supervised models trained with LPA or LAA. The details of training the SSL models follow Section . For the supervised models, we used the same architecture as NeuroSAT. The hyperparameters for the supervised models were the same as for SSL in Section , except that the learning rate was chosen to be {{formula:5fc6d074-2656-41bb-919e-b30d8049fbcd}} , following {{cite:3703622b24a0d160828df64872e3187c607326f4}}. For each dataset, we chose the best augmentation combination for LPA and LAA according to Figure REF . The degree parameter associated with each augmentation was tuned separately for SSL and supervised models. We used 200 instances as validation sets for early stopping of all methods.
| m | 6695ddf22d22a8464b7a978539f575e0 |
However, the applicability of our result does not stop at (syntactic) extensions of our framework, as it applies to arbitrary query languages and querying formalisms of different types. In particular we would like to stress the relationship to the very comprehensive existential rules fragment of bounded treewidth sets (bts) of rules {{cite:131b6657c9bf6f84112d2cf973fece4307c0dcf8}} that is not chase-terminating and encompasses a plethora of well-known existential rule fragments with decidable query entailment, including guarded {{cite:e457c8d5c1cb65482fdedbc5c8890121796ce3d2}}, frontier-guarded {{cite:131b6657c9bf6f84112d2cf973fece4307c0dcf8}}, and glut-guarded existential rules {{cite:8d66db68a6097685b1d5379ae30049b48e1a4c25}}, as well as greedy bts {{cite:b8ed189a8476e673ff1ca42f9e26986c156de2b6}}:
| d | f14b154fb10fde7e74f6be991c319c09 |
In the present work, only the variable `sex' was considered as a sensitive attribute. However, in many practical applications, multiple sensitive variables may need to be considered. For example, the models may be considered fair with respect to gender but still make discriminatory decisions with respect to race. Likewise, Caton and Haas catonfairness2020 also warned of the effects of variables that are themselves not considered sensitive, but are still related to sensitive variables. Furthermore, only one fairness metric was considered in this research, risk difference, which is a measure of demographic/statistical parity that captures group fairness. As previously discussed, reduced unfairness according to one metric may not reduce, and may even increase, unfairness according to another metric {{cite:f5277b68d8b56b6b7603f0c72c33cec6b738ccee}}, {{cite:eb7dc0f9b636562db98e93c16733e64a99046edc}}, {{cite:25a75449ecd07cd71e26a4fe43a15ce1c6975e91}}. If the models used in this research were to be employed in real-life settings, it would therefore be crucial to consider whether demographic/statistical parity is suitable for the specific application. Related to this are the chosen thresholds: for example, in this research, the limit of risk difference beneath which a model was deemed fair was 0.05 for the strict threshold and 0.1 for the lenient threshold. However, whether these thresholds are suitable in a real-life application may depend on the use case and legislative requirements {{cite:67a78dd7f1d349d06b6eee12bfb6c2174d055c2b}}. If the goal of fairness is preferred over utility, the differentially private and fair logistic regression models by Xu, Yuan and Wu xu2019achieving should be preferred over the equivalent models presented here. Likewise, utility may be measured differently in different applications, which may require using balanced accuracy, F1-score, or other metrics, which might lead to different model rankings. Additionally, it would be interesting to assess whether combining pre- and post-processing fairness techniques would improve results. Lastly, a relaxed notion of differential privacy was considered in this research, while in some applications, strict or traditional differential privacy may be preferred.
| d | 1b202b44532b0a55f15a529acaefb406 |
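A minimal sketch of the risk-difference (demographic-parity) computation and the strict/lenient thresholds discussed above; the toy predictions and the binary encoding of the sensitive attribute are assumptions.

```python
import numpy as np

def risk_difference(y_pred, sensitive):
    """Demographic-parity gap: |P(y_hat=1 | group 0) - P(y_hat=1 | group 1)|."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def is_fair(y_pred, sensitive, threshold=0.05):
    """threshold = 0.05 for the strict criterion, 0.1 for the lenient one."""
    return risk_difference(y_pred, sensitive) < threshold

y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sex   = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # the single sensitive attribute considered here
print(risk_difference(y_hat, sex), is_fair(y_hat, sex))
```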
In the literature of multitask learning, {{cite:e02e18cc150d927ce02dba71ad6324c4c7c58821}} showed that the excess risk, namely the difference between the learned parameters and the optimal ones, scales as {{formula:3c448d2a-70c4-44b7-b1ad-cecc67878f17}} , where {{formula:65e34783-12c1-4a9d-a14e-841458e78f6b}} is the number of tasks and {{formula:11d39153-8b16-4397-9642-1744b0b70f3d}} is the number of training data per task. {{cite:d6dd38719787a1b4ce3abfdf08a15016ab3772cc}} further integrated task diversity into the generalization bound showing that the transfer learning risk decays with increasing number of training tasks as well as task diversity. Here, the task diversity is defined to encode how well the training tasks can cover the space captured by the learned representation needed for predicting on new tasks. In our multi-task formulation (Section ), the Classical fine-tuning is essentially sampling only one task which is identical to the whole training set. Although it contains all the training signals as MAMF, the task diversity is lower in the sense that the learned representation is narrowly fit to this one task, making it hard to generalize to new tasks during meta-testing.
| d | 226f673a8f93c073da2b1abd5e409efe |
More formally, in the centralized setting we present a {{formula:bc17c539-a638-4934-a6e1-7ccabb79295d}} -approximation algorithm with summary size {{formula:515aa54e-92eb-448e-a649-41e08f507bb6}} . In the streaming setting we provide a {{formula:318fd7b3-322c-476d-b92e-68dd84cc1ca4}} -approximation algorithm with summary size and memory {{formula:989a5f4c-5eca-441d-a005-d6ddd0947b32}} . The constants in the two cases are {{formula:1c852df3-48ce-43d4-a7c9-abf08b66ab9d}} and {{formula:ff9c7311-093c-44ba-9fee-96f139479f3e}} , where {{formula:51297260-663f-43ba-8e44-e3be9aec998f}} is {{formula:3ea073fc-b806-4c23-89bb-dc823f71302d}} , i.e., the best-possible approximation guarantee for the standard centralized problem {{cite:e0ad9008bef32eb42b70bc08d7cd3a6ca794d80c}}, {{cite:a99e6c56804e5f3c46097d36132cb6d732363e71}}.
The “price of robustness” is thus just an extra additive 2 or 4 depending on the setting. At the same time, up to possibly a logarithmic factor, the memory requirements are tight. Finally, note that the state-of-the-art for (non-robust) streaming submodular maximization with matroid constraint is a {{formula:b9d6dcf9-5872-4636-8505-728530c17a84}} -approximation {{cite:22371504527e73d23dcbd250fb4f8b2bb2a82c75}}.
| r | 8a23c6f34ed4be134afebf92816ee869 |
We use Ramulator {{cite:ee6a660b92b0191da1e823b17674ddf880490b88}}, {{cite:151b7fedf88bb1ea365496283dcfaa6f085fbedb}}, a cycle-accurate
DRAM simulator with a simple core model and a system configuration as listed in
Table REF , to implement and evaluate the RowHammer
mitigation mechanisms. To demonstrate how the performance overhead of each
mechanism would scale to future devices, we implement, to the best of our
ability, parameterizable methods for scaling the mitigation mechanisms to DRAM
chips with varying degrees of vulnerability to RowHammer (as described in
Section REF ).
{{table:e854652c-0752-43a0-a0db-d938f0922dbb}} | m | 38eae5b666e30b8158819313110aa8e4 |
The value of DIC can be computed directly from the Markov chains generated with MontePython.
For values {{formula:357adfd4-3b0d-409c-9466-3e964358fba4}} we would conclude strong evidence of the RRVM's as compared to the {{formula:f451039c-3643-45fd-b3a6-4df5c2b78e3e}} CDM, and for {{formula:db4ffc77-032f-490f-9ebf-9d6488082305}} the evidence is very strong. Such is the case when we use a threshold redshift {{formula:468b638c-8f0b-4aac-8c98-ba38e35d9dec}} in type I RRVM (cf. Tables 1 and 2). In contrast, when the threshold is removed we find only moderate evidence against it ({{formula:278c9821-fb77-4bd0-9f0d-4ab9933c3077}} ), although the fitting performance keeps on being slightly better (smaller {{formula:c969a6b5-53c1-447e-b6b7-65e6be5f63ec}} ) than the {{formula:b404cfec-63c0-4892-9825-3e11832d3c61}} CDM, similar to e.g. coupled dark energy {{cite:8b693cfda9f125c037150275afc6bd8a309a2629}}. Quite obviously, the effect of the threshold can be very important and indicates that a mild dynamics of the vacuum is very much welcome, especially if it is activated at around the very epoch when the vacuum dominance appears, namely at around {{formula:44bb0a0e-3d40-454f-9a92-7dbc7482d9a9}} . To be more precise, the vacuum dominance in the {{formula:e26d0e91-2cf4-46ac-bf91-3e75f4ac5c1b}} CDM starts at around {{formula:c2a47c43-7829-4d33-92c2-8e96058f6161}} . Therefore, these results suggest that if the vacuum starts to be slightly dynamical at an earlier point which is `close' (in redshift terms) to the transition from deceleration to acceleration ({{formula:588a4675-563a-484a-8665-f1526674c3e8}} ), then the impact on the description of the overall SNIa+BAO+{{formula:bdeb899d-bbae-46c9-926a-3feb2e924bb9}} +LSS+CMB data becomes extraordinarily significant on statistical terms. Before the transition point, physics can remain basically unaltered with respect to the standard {{formula:14ba3d3f-5dcf-4867-bcaf-7bf9878695a7}} CDM model, but the vacuum dynamics allows to suppress an exceeding amount of LSS in the universe, leading to a better description of the {{formula:fbb0da9e-9b3a-4993-a2c2-a048a58cf78c}} data set. It is not just that the total {{formula:1655c684-c001-4bea-ad6c-7812a9c0bc9b}} is 13 to 18 units smaller as compared to the {{formula:709e3fd9-64ab-43dc-9a8e-870b62a1c9ca}} CDM in the presence of the threshold {{formula:8c98fec1-f018-4490-9734-da28741a5092}} (cf. Tables 1 and 2), but the fact that the information criteria (which take into account the penalty to be paid by the RRVM's for having more parameters) still decides very strongly in its favor. In the absence of the {{formula:b23d6070-2233-4691-a005-2d267a035eac}} prior {{cite:22d214c338f4c6e9ca45d56ed2ebd6174e449962}}, type II RRVM performs a bit better than the {{formula:f770fb16-397d-4f4e-a013-9f270cfaa6ee}} CDM (cf. Table 1), but the improvement is not sufficient. Occam's razor penalizes the model for having two additional parameters than {{formula:7dbee5b5-3a35-4269-8336-fa6382eadf88}} CDM and leads to a moderately negative evidence against it. When we include the prior, however, we get a strong evidence in its favor ({{formula:8d22491e-9a6c-44bc-8660-4091ea3a14ce}} , cf. Table 2), since this model can accommodate higher values of the Hubble parameter and hence loosen the {{formula:baace47e-508a-44b9-86e6-fba6b0c03f39}} tension. This is similar to what we found in {{cite:fe3d46f645efdd989cede7586c669c383b0a1981}} for Brans-Dicke cosmology with {{formula:97426947-298b-47e9-9bbd-067d7f5c0c03}} .
| r | 6ecf8710790404223def93db74951e51 |
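The DIC referenced above is a standard chain-based statistic: with deviance D(theta) = -2 ln L(theta), one computes p_D = mean(D) - D(theta_mean) and DIC = D(theta_mean) + 2 p_D. A minimal Python sketch with a toy Gaussian likelihood (the likelihood, data, and chain below are illustrative stand-ins, not the MontePython pipeline):

import numpy as np

def minus_loglike(theta, data, sigma=1.0):
    # toy Gaussian likelihood; in practice this is the full MontePython likelihood
    return 0.5 * np.sum((data - theta) ** 2) / sigma ** 2

rng = np.random.default_rng(1)
data = rng.normal(0.3, 1.0, size=50)

# stand-in for thinned, converged posterior samples of a single parameter
chain = rng.normal(data.mean(), 1.0 / np.sqrt(len(data)), size=5000)

D = np.array([2.0 * minus_loglike(t, data) for t in chain])   # deviance along the chain
D_at_mean = 2.0 * minus_loglike(chain.mean(), data)           # deviance at the posterior mean
p_D = D.mean() - D_at_mean                                    # effective number of parameters
DIC = D_at_mean + 2.0 * p_D                                   # equivalently 2*mean(D) - D_at_mean
print("DIC =", DIC)

Model comparison as used above then relies on the difference of DIC values between the reference model and the RRVM, with larger positive differences indicating stronger support for the latter.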
Let {{formula:fde29690-a704-4f57-8e9c-2b4a182386f7}} denote the set of digraphs.
Various important results in graph theory have been obtained by considering some functions
{{formula:ba25ae5a-a6bb-41bd-883d-db22ebed987c}}
or {{formula:4f0ee8e8-dbd8-4d06-a3f2-21549a5e19ba}} called
operations
(here each {{formula:3251d489-be93-4846-9ed4-4e51ac770484}} ) and
by establishing how these operations affect certain properties or parameters of graphs or digraphs.
The complement, the {{formula:ee4c5a55-f043-4094-a520-a54e31ffe5ce}} -th power of a (di)graph, and the line (di)graph are well known examples of such operations.
Also, the Bondy–Chvátal and Ryjáček closures of graphs are very useful operations in graph Hamiltonicity theory
{{cite:2da66ed9a1073bcbcad96e684586a2a8e70ec4c6}}.
(Strengthenings and extensions of the
Ryjáček result are given in {{cite:8ff9cce559c2f6f3a65c15afe732c6ec3c457033}}). Graph operations introduced by Kelmans in {{cite:da4ce65e1f812f71f3db5aafc8c245b497fc09ea}}, {{cite:5d9a319ebca9ed4cac868fbeb94935712d8cbc6e}}
turned out to be very useful because they are monotone with respect to some partial order relations on the set of graphs {{cite:a32c2c1c13e9a6e81bd03d7a8a6a69189335a0ec}}, {{cite:120547fdb5e936e9163a34be2b7d2206df2c7f7e}}.
Gross and Tucker introduced the operation of voltage lifting on a graph which can be generalized to digraphs {{cite:6a5c48e04c6c0d7dcc198451fe9f96a721385c85}}, {{cite:b93aaf34bd5b5a96bfcdf379424698be55611663}}. By this operation one can obtain the derived covering (di)graph and the relation between the adjacency characteristic polynomials of the
(di)graph and its derived covering (di)graph {{cite:844cd74b9ad44ae03a0582ced7753173704a8aa5}}, {{cite:6a5c48e04c6c0d7dcc198451fe9f96a721385c85}}, {{cite:723ecc0ec4140f053b338ecfe2cfd827f9234eac}}.
| i | 4a91dcac623211d02a3e0122519437c5 |
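As a concrete illustration of such operations, the complement, k-th power, and line graph mentioned above are available in standard graph libraries; the following sketch (illustrative only, using networkx on a small undirected graph) applies them:

import networkx as nx

G = nx.cycle_graph(6)            # C_6

G_complement = nx.complement(G)  # complement operation
G_square = nx.power(G, 2)        # 2nd power: join vertices at distance <= 2
L_G = nx.line_graph(G)           # line graph: vertices of L(G) are the edges of G

for name, H in [("complement", G_complement), ("square", G_square), ("line graph", L_G)]:
    print(name, H.number_of_nodes(), H.number_of_edges())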
We train our model with both detection and segmentation heads to verify its learning ability across multiple tasks, and the results are shown in Tab. REF .
When comparing different BEV encoders under the same settings, BEVFormer achieves higher performance on all tasks except road segmentation, where its result is comparable with BEVFormer-S. For example, with joint training, BEVFormer outperforms Lift-Splat{{formula:34ec4e06-ef61-4956-90ed-d4cd13527ace}} {{cite:b0985af44410f0d97276194246e35f3300747765}} by 11.0 points on the detection task (52.0% NDS vs. 41.0% NDS) and by 5.6 IoU points on lane segmentation (23.9% vs. 18.3%).
Compared with training each task individually, multi-task learning saves computational cost and reduces inference time by sharing more modules, including the backbone and the BEV encoder. In this paper, we show that the BEV features generated by our BEV encoder can be well adapted to different tasks, and the model trained with multi-task heads performs even better on the detection task and vehicle segmentation. However, the jointly trained model does not perform as well as individually trained models on road and lane segmentation, a common phenomenon called negative transfer {{cite:7f1c6a55bec602aa47b10e7708676c6531842a79}}, {{cite:3792b3c18deca2104d6e0b87fbdda66b62dd88a3}} in multi-task learning.
{{table:d96cd6be-a4d9-4045-8a9c-35cf1ca9683d}}{{figure:6d56d9c0-fe24-499d-a81a-34571b648c43}} | r | 0edb8b80d97f5b744c9ed4d45164fc61 |
In this section, we present numerical results to verify the performance of the proposed transmission scheme based on Bayesian optimization. The BS is equipped with {{formula:b66ac639-e77a-4d74-a341-099f61cf2f5c}} antennas. The number of elements in the RIS is {{formula:4b756003-cb4f-4845-a997-57d49a568a33}} unless otherwise specified. The parameter {{formula:a76615ec-d4ed-4191-b6e7-1a7ad39f6674}} in (REF ) is set to {{formula:3027706f-26ef-425d-bbbe-0357230f6022}} . The window size is {{formula:973334a4-a750-4cb2-b456-a56c40f2b0a0}} . The SE kernel is chosen for Bayesian optimization and the hyper-parameter {{formula:be82a999-b66b-44e5-abf0-39b9df2d125c}} is learned by maximum likelihood estimation as in {{cite:65389ca3d938d329e1a478247ce6c6eb8027a6d1}}. The number of partitions {{formula:a7b20330-47e3-41a9-9684-0d04dc4b6d7d}} is equal to the input dimension {{formula:c121392f-3e19-414e-b9fa-099d509244ac}} . The smooth parameter is {{formula:7b4c30b9-fd01-4ef6-97db-e94d862b286d}} . The simulation results are obtained by averaging over 1000 random realizations, and the maximum number of iterations is {{formula:2abacb39-9ecd-4df1-894b-b4770dc94c7c}} .
| r | 53cd9e03c419cba5682a7eeabeaa4f68 |
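For readers unfamiliar with the ingredients listed above (SE kernel, maximum-likelihood hyper-parameters, iterative sampling), the following is a generic Gaussian-process Bayesian-optimization sketch. It is not the proposed partitioned scheme; the quadratic objective is a stand-in for the actual rate objective over the RIS phase-shift variables:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def objective(theta):
    # stand-in objective; in the paper this would be the achievable rate for configuration theta
    return -np.sum((theta - 0.3) ** 2)

def expected_improvement(X_cand, gp, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
dim, n_init, n_iter = 4, 5, 25
X = rng.uniform(0.0, 1.0, size=(n_init, dim))
y = np.array([objective(x) for x in X])

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(dim))      # SE kernel
for _ in range(n_iter):
    # kernel hyper-parameters are refit by maximizing the log marginal likelihood
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    X_cand = rng.uniform(0.0, 1.0, size=(2048, dim))
    x_next = X_cand[np.argmax(expected_improvement(X_cand, gp, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best value found:", y.max())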
Recent GANs {{cite:18a3ce4cab855ff1df66d798518883399d952973}}, {{cite:5217899c5ed560ff00e61a9b20fec7df04a1e65d}} demonstrated that synthetic images can be generated with very high quality. This motivates research into embedding algorithms that embed a given photograph into a GAN latent space. Such embedding algorithms can be used to analyze the limitations of GANs {{cite:58a331ae8ce71b07fa42d53c8aac5959fd16d288}}, do image inpainting {{cite:47246fccaebf78b42e7a4ef91336f8d6972e02c1}}, {{cite:d0926242ebd3b8de1dc93c5236f0e866ed389f7c}}, {{cite:9bca5846da39427ad659dd5929d8346605b9b951}}, {{cite:fffbf71f52ae371a33aa1b95505b5edbaf210946}}, local image editing {{cite:8a5fd51e0eebf17d688f12d330f2cfeafef9268b}}, {{cite:c934fcd05b7d8e2dab520abffc387a91ad9da43a}}, global image transformations such as image morphing and expression transfer {{cite:11ccd368805279db96da5092341ce2efd05a7078}}, and few-shot video generation {{cite:afd409c33d326664510c0d1ed7b88c1dc4da3c2d}}, {{cite:bfa74493e1c37a22156325a70b49706a687554fe}}.
| i | c797876482fbe1ee14bcd1249e8afb53 |
In Fig. 2, the critical temperature of the three dimensional
ternary alloy model has been plotted as a function of {{formula:4b4f5381-6389-48d6-b557-a91171916d02}} for
various values of {{formula:c705d3e7-6a66-446a-8e90-7cec0557a2ba}} when {{formula:58edf2ca-e769-4b8a-96a9-5e8aa4a5b10b}} . It can be seen from Fig. 2
that the critical temperature of the system has a linear dependence
on the interaction ratio {{formula:17fc19df-c6f1-4fb3-a4fd-3a7008824a8b}} and there is a critical behavior at a
special {{formula:080b0ac3-09ae-4a2b-838a-d32a8031187f}} value. When {{formula:277a73b5-933b-432a-9b8f-9adef6de76dd}} , the critical temperature of
the system has a fixed value of {{formula:ccd9c8f1-dd78-4ee5-8cf5-749d68eb5d9b}} for all {{formula:5689bcff-57ee-4faf-955d-b90736ff4553}} values. At
{{formula:5ad8bbd1-d4f6-4d79-a727-be6b5c997574}} , the critical temperature of the system does not change with
concentration {{formula:36aaa96b-1cb2-4446-a4f9-9ede26ee62e2}} . This means that neither the spin-1 ions nor
the spin-5/2 ions substituted into the system change the critical temperature
of the system at {{formula:12e7a09d-ab44-48f5-b58c-6cbc584a11a6}} . This critical behavior has been reported
in theoretical and experimental studies {{cite:020c606a03205e8898c45d4fa6883ef34d548062}}, {{cite:6ac2190ab98a7a5e54c523d206379ecc815200f1}}, {{cite:4f3273f69a403b5d0cf58db2a74659a45c16ed2a}}.
The value of the {{formula:b4420a3d-e110-4cf1-aa3d-8ca68d95130d}} for ternary alloy AB{{formula:4f71a447-459c-4606-b332-cf93250551ff}} C{{formula:64cc1ebf-2a7b-4602-aaa0-c701f19e3690}} whose
spins consist of S{{formula:e70a7a7a-0e79-48b4-856a-092bdaaee3fc}} , S{{formula:9798b66a-c934-4c24-8ffd-292680189918}} and S{{formula:0103e8e2-26cf-4bce-bf64-9abb28aceb4e}} has been
obtained as {{formula:bc6dde81-50a8-402d-a6a7-34f0557967a0}} in the study based on mean field
approximation {{cite:6ac2190ab98a7a5e54c523d206379ecc815200f1}} and as {{formula:b0c1683e-5690-4fb6-87cf-3472b3fae5cb}} in the Monte Carlo
simulation of a two dimensional system {{cite:020c606a03205e8898c45d4fa6883ef34d548062}}. Furthermore,
the experimental measurements indicate that the Prussian blue
analogs at the {{formula:a96c5843-9135-4554-87c8-cc6f8d955cc1}} have a {{formula:961738ab-ead0-4da6-9266-cfc34482e96a}} which is almost independent
of {{formula:302b544c-8c89-464f-9b38-8fec175dbe46}} {{cite:4f3273f69a403b5d0cf58db2a74659a45c16ed2a}}. Fig. 2 also reveals that concentration {{formula:078a0c61-572b-42fa-8b06-23242b65f406}}
plays an important role for the ternary alloy model
AB{{formula:afe9548e-c7ba-4642-a6a1-2fcbf5dbebbc}} C{{formula:f556ab3f-989f-4717-8977-1a7c079848ac}} since it determines the kinds of spins and
interactions in the system. For example, when {{formula:8c79da74-617d-4821-98c7-7af8658217d7}} and {{formula:94aab5b6-195a-4d27-9ccf-7ebf84f35979}} , the
system AB{{formula:04bf9b35-0a5c-4c92-b0c9-be394f4f6f7f}} C{{formula:727e2396-51cc-44f3-971f-2f55c1614711}} fully reduces to the ferromagnetic mixed
spin-3/2 and spin-1 and ferrimagnetic mixed spin-3/2 and spin-5/2
Ising system, respectively. As seen in Fig. 2, although {{formula:60c724d1-8d2d-4cb2-81b3-902ede491aba}} of
the system is independent of {{formula:0082d5ff-5fd1-43e1-bcfe-09638f1667bd}} at {{formula:374a8df4-492a-4083-b1f5-60e17eebff6f}} , the total
magnetization of the system may considerably change owing to
relatively small variation of the concentration {{formula:a53fc11a-d672-422e-967a-2918cf75e8de}} . Indeed, for
different values of {{formula:39c02b17-e5d1-424c-ace1-8ac8ab9788b4}} , the dependence of critical temperature of
the system on the interaction ratio {{formula:d3a36ff1-67d3-45dc-8bc8-cb0b3bc302a9}} is very different above and
below {{formula:43d842ec-341e-4725-8262-097bafc51620}} . This behavior can be explained by the change of
the concentration {{formula:5bf31852-0036-4a82-9495-eeba364ae8ab}} in the system. On the other hand, it can be
detected from Fig. 2 that when {{formula:a911b096-ad62-4c6f-9de6-c6c61760d14d}} , the critical temperature
of the mixed spin-3/2 and spin-5/2 system is smaller than that of the mixed
spin-3/2 and spin-1 system. On the contrary, when {{formula:41ee27b2-5e1f-4d9c-aa4e-2d248148ba4f}} , the
critical temperature of the mixed spin-3/2 and spin-5/2 system has
the highest value. On the {{formula:2acf2f24-118b-46de-a55c-21e63a523b9a}} lines, the critical temperature of
the mixed spin-3/2 and spin-1 Ising system is equal to that of the
mixed spin-3/2 and spin-5/2 Ising one.
{{figure:67f222c9-649c-46cd-accd-298fa4c628ca}} | r | d4c8e6672ac8261ae2c7caa6f4fe862f |
The choice of an entropic penalty does not require specifying a
reference measure {{formula:682c17c7-62db-4549-86ed-4802fa5a47e5}} and connects (REF ) with the
classical maximum entropy methods {{cite:481e174a7fa88bba1b9a625c804967a261b6e1ba}}. Indeed,
(REF ) is the Lagrangian associated with the following primal
problem
{{formula:1da8b900-601f-4678-bc3b-0e279c887384}}
| m | a0f6426ee0e85e3653c139be86a34977 |
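Since the primal problem itself appears here only as a placeholder, the following LaTeX snippet records the generic maximum-entropy template this construction follows (the constraint functions f_j and targets mu_j are illustrative stand-ins, not the paper's notation):

\max_{p}\; -\int p(x)\log p(x)\,dx
\quad\text{s.t.}\quad \int p(x)\,dx = 1,\qquad \int f_j(x)\,p(x)\,dx = \mu_j,\quad j=1,\dots,m,

with Lagrangian

\mathcal{L}(p,\lambda_0,\lambda) = -\int p\log p\,dx
+ \lambda_0\Big(\int p\,dx - 1\Big)
+ \sum_{j=1}^{m}\lambda_j\Big(\int f_j\,p\,dx - \mu_j\Big),

whose stationarity condition yields the familiar exponential-family form p(x) \propto \exp\big(\sum_j \lambda_j f_j(x)\big).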
We also note that this paper is limited by the model of opinion dynamics. Experimental research has shown that exposure to differing opinions may increase polarization {{cite:079b168d731c892dc58c853959b45f85732a5cfd}}. There are recent extensions to the Friedkin-Johnsen model that incorporate opinion repulsion, such as {{cite:e66597754fe36ac8bce06e82a734c327a994d4ef}} and {{cite:7dc1f7171be8f77e3bb5fda46cc1d62e0b5d0cf0}}. It is interesting to study this paper's heuristics, and develop techniques for reducing polarization in these more complex models.
| d | fd658420cd596ad4fe229509237e4ea7 |
We first used two baselines, following {{cite:07722ec90264cc9d36317a283d87ee34627a84f1}}, i.e. using the mean of training drug response values as the prediction for unobserved drug responses; we considered 1) the cell-line-specific mean (cls-mean) and 2) the overall mean (all-mean).
We then used two state-of-the-art machine learning approaches: MultiNMF {{cite:671ee3b02f31f58f2d8726fc9a20503a4ab40c1a}} and KRR {{cite:0d0232a23200a241f0ba47f6ae51e646df43d183}}. Lastly, we used DrugCellNet, which is a straightforward but efficient network interpolation method for drug response prediction. We chose DrugCellNet since DrugCellNet already outperformed standard machine learning methods such as ElasticNet, random forest and support vector regression in
{{cite:5261132d3773feb67dabfda7e367fd9fedbfee1a}}. We note that these three methods (MultiNMF, KRR and DrugCellNet) cannot use all five data sets in our experiment, whereas DIVERSE can handle all of them. Instead, these three methods used only the drug response ({{formula:ac4ba813-66b2-4c92-a8e3-9901a33b1120}} ) and drug similarity ({{formula:23920661-4539-4c23-8ed8-2beef0dfd3cd}} ) data sets.
| m | e6cc26f347380c0f29512e26cd360b38 |
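The two mean baselines are simple to reproduce; a minimal numpy sketch on a toy response matrix (rows = cell lines, columns = drugs, NaN = unobserved; the numbers are illustrative, not the real drug-response data):

import numpy as np

Y = np.array([[0.2, np.nan, 0.5],
              [np.nan, 0.1, 0.4],
              [0.3, 0.2, np.nan]])

all_mean = np.nanmean(Y)                               # "all-mean": one global value
cls_mean = np.nanmean(Y, axis=1, keepdims=True)        # "cls-mean": one value per cell line

pred_all = np.where(np.isnan(Y), all_mean, Y)
pred_cls = np.where(np.isnan(Y), np.broadcast_to(cls_mean, Y.shape), Y)
print(pred_all)
print(pred_cls)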
Phase separation organising chemical reactions was suggested as an important concept to understand cellular biochemistry {{cite:1aaef3e956fb2a758ae39af05ea72b5a9e1c6e3d}}, {{cite:f88335f33bcbd45035d0d6a71319eed9cca4d2ba}}, {{cite:6085950d6310318de16399fe3ab73418a2e5af1f}}, {{cite:89922af63daf7ea3593ea50c2f99e84d95a77ef7}}.
In particular, phase-separated condensates can provide distinct biochemical environments and serve to localise and confine chemical reactions {{cite:a2f49bf9c78b054da42884d058f961c3c970eb34}}, {{cite:9afb63032c77a3410d6dff634361ead0e8ebb96b}}.
The study of biochemical processes in phase-separated systems is currently a rapidly growing field.
Our theory can play an important role in interpreting observations in experimental systems where solute components undergo chemical reactions in the presence of coexisting phases.
In particular, our work clarifies that the increased concentration of reactants in a condensed phase does not by itself lead to increased reaction rates.
Rather, if the coexisting phases are at phase equilibrium, the reaction rates {{formula:8a1c34bc-c13f-433d-8b6d-4074961c9263}} of component {{formula:1ebfacf6-5e99-47b8-910f-f4a698d45d54}} in each phase can only differ due to different reaction rate coefficients {{formula:13fe5ea1-414c-4cbd-a314-14612eca8dd8}} .
In other words, the increased local concentration of a reactive solute due to phase separation does not necessarily increase the rates of reactions in which it participates.
The speed-up or slow-down of reactions is solely determined by the reaction rate coefficients in each phase, which can also decrease upon condensation.
These insights might be relevant to explain recent observations in coacervate emulsions with enzymatic reactions {{cite:1c0cf218ca9233c3c22aa2bd93f27a9afbacc4e5}}, {{cite:f117f3909a9fce6670838190207526ad7a4fbdce}}, {{cite:ba40d030fcdefbb1cb2e7c59a3e9e1667e828deb}}, {{cite:620609ff01e10f892a6a95a46e34edf3ffd8c54b}}.
Another important insight of our work is that the rate of change of the concentration of reactive molecule {{formula:1d0532a2-cbf0-4178-811c-cc891a72434e}} in one phase is not equal to the reaction rate {{formula:7a643506-4600-4430-b149-ecee21843e07}} of this component.
This is because phases are coupled and components are rapidly exchanged between the phases at phase equilibrium.
To determine the reaction rate {{formula:995285b7-381a-406b-a133-5cd892beaee9}} of component {{formula:7971b590-295d-469d-8fa1-524be75db912}} , the exchange rate between the phases as well as the changes in phase volumes need to be taken into account.
Thus, the chemical kinetics in coexisting phases tightly integrates
phase separation kinetics and reaction kinetics.
To highlight this point, we note that the effect of phase separation on chemical reactions in two coexisting phases cannot be inferred from the study of reactions in the two phases when they are isolated.
| d | 7c806e7eaa649f0a4b7293d477ff116e |
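The last point can be made explicit with a schematic two-phase balance (the notation here is illustrative rather than the paper's: V^I, V^II are the phase volumes, c_i^I the concentration of component i in phase I, r_i^I its reaction rate there, and J_i the net exchange flux of i from phase II into phase I):

\frac{\mathrm{d}}{\mathrm{d}t}\big(V^{\mathrm{I}} c_i^{\mathrm{I}}\big) = V^{\mathrm{I}} r_i^{\mathrm{I}} + J_i,
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\big(V^{\mathrm{II}} c_i^{\mathrm{II}}\big) = V^{\mathrm{II}} r_i^{\mathrm{II}} - J_i,

so that

\frac{\mathrm{d} c_i^{\mathrm{I}}}{\mathrm{d}t}
= r_i^{\mathrm{I}} + \frac{J_i - c_i^{\mathrm{I}}\,\mathrm{d}V^{\mathrm{I}}/\mathrm{d}t}{V^{\mathrm{I}}},

which reduces to the bare reaction rate only when both the exchange flux and the change of phase volume vanish.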
The second (“hybrid”) corpus was generated by combining authentic content and machine-generated content. For this task we utilized some of the latest abstracts from the Artificial Intelligence domain on arXiv. Each hybrid abstract is made of 4 parts. The initial content is extracted from an original abstract up to the point where it introduces the proposal (e.g. “In this paper,”, “We propose”, “Here we”). The next sentence is generated until the first full stop, using the Arxiv-NLP model provided by the Huggingface team {{cite:a88b6bcad406e8e69e2379825023dbfac16053b5}}. Then, the rest of the original abstract is copied until the point that corresponds to the conclusion (e.g. “We conclude”, “Our results show that”). Again, using Arxiv-NLP, the rest of the abstract is generated. In this way, 100 new abstracts were composed. The temperature parameter was set to 1 and top-p was 0.9. This generation is done with human intervention, so that it is deliberately biased towards making the generated content difficult to detect.
| m | 1f552655bcbceb917140b8b29c1ad1e8 |
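A minimal sketch of one such generation step with the Hugging Face transformers pipeline (the model id below is a placeholder standing in for the Arxiv-NLP demo model, and the prefix is illustrative; sampling uses temperature 1 and top-p 0.9 as in the text):

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder for the Arxiv-NLP model

def continue_until_full_stop(prefix, max_new_tokens=60):
    out = generator(prefix, max_new_tokens=max_new_tokens,
                    do_sample=True, temperature=1.0, top_p=0.9)[0]["generated_text"]
    continuation = out[len(prefix):]
    # keep only the generated text up to (and including) the first full stop
    return prefix + continuation.split(".")[0] + "."

prefix = "In this paper,"   # original abstract text up to the proposal cue (illustrative)
print(continue_until_full_stop(prefix))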
Fig REF displays the density wake created by our fiducial single perturber after it is turned on at {{formula:be43c3d1-1d2a-46ad-af5f-ccaba03d9e0e}} . Results are shown both in the Lippmann-Schwinger approach ({{formula:a97e5eb2-635a-4494-a28a-587dc74a827b}} , left panel) and in the Madelung approach ({{formula:b5d5269b-89fb-4f39-b82b-db32a5076a74}} , middle panel).
The density contrast {{formula:abfcb2d9-95fc-4d5b-8ede-105a4bd30767}} is plotted in the orbital plane after the completion of {{formula:854640d9-899b-4ae8-a65e-ae4c37008fb7}} rotations. The white symbol indicates the perturber's position on its circular orbit. The wake produced by the perturber's gravitational disturbance is a deformed ellipsoid in the vicinity of the circular orbit. The inner overdense wake is surrounded by an underdense, outwardly spiraling region whose tip slightly lags behind the perturber. The main differences from the wake pattern in the gaseous case {{cite:9218bac793728321f7ac8947c6f589f5953d22e2}} are the existence of underdense regions together with the absence of sharp discontinuities and caustics (the latter arise in the gaseous case when the motion is supersonic).
In the FDM case, small scale density features are smoothed out by the "quantum pressure" Eq. (REF ), reflecting the de-localized nature of the FDM particles.
| r | 1dd12c69e8cc7e9de0cf4ff05771c4b7 |
Although slow-roll is almost an inevitable requirement of inflation, the observed small temperature fluctuation in the universe (or, more precisely the small CMB amplitude) is not. Slow-roll inflation could have explained homogeneity, causality, and the quantum origin of density perturbations without predicting the small CMB amplitude. The observed {{formula:33089d0e-0928-487b-b965-618e2a07324f}} (or {{formula:0fd552a3-c137-485d-9102-2349b845d27c}} ) {{cite:008a474b0354a60a1e333271bc4eda36d4ca8f45}}, {{cite:a936f3b7c9b0f0ee789e5635bc227bd6ba0a4319}} is typically translated to {{formula:08747019-b90b-46ae-82f8-86031b603156}} for an effective quartic potential description {{formula:656dbba0-c9f2-4219-bc24-284de7b1a717}} of an inflaton {{formula:d16d2866-1fcf-4735-a925-79ea157429a4}} . Such a small coupling is seemingly unnatural even though it is not a necessary prediction of slow-roll. Why is our universe realized so? Could this small CMB amplitude be related to other physics?
| i | 92ac6239fec2a7c577f93253d62e042b |
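The translation from the observed amplitude to the quartic coupling follows from standard slow-roll formulas; a sketch of the estimate (with M_P the reduced Planck mass and N the number of e-folds, here taken as roughly 60, is given as an illustration and is not quoted from the paper):

V(\phi)=\tfrac{\lambda}{4}\phi^4,\qquad
\epsilon=\frac{M_P^2}{2}\Big(\frac{V'}{V}\Big)^2=\frac{8M_P^2}{\phi^2},\qquad
N\simeq\frac{\phi^2}{8M_P^2},\qquad
A_s=\frac{V}{24\pi^2 \epsilon M_P^4}\simeq\frac{2\lambda N^3}{3\pi^2},

so A_s of order 2x10^-9 with N of order 60 gives \lambda = 3\pi^2 A_s/(2N^3) of order 10^-13.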
Similarly, the SFR is very low ({{formula:c9b78335-664a-4c0b-b4b4-4cdbe0af83ec}} 0.1 M{{formula:96130fb8-e15f-4879-a9be-0a9c8773cd03}} yr{{formula:6bf3976b-6902-496b-87a3-1a0b82f4efde}} ) for half of the sample with an average value of {{formula:bf48274d-3a09-4ff5-b862-c7de836bbed0}} M{{formula:42a98b21-c98d-4fb7-8fac-a01c0cc26c96}} yr{{formula:fbb7f759-15ba-4e5e-ae40-216b6792577c}} . These values are similar to the typical values in a sample of late-type galaxies {{cite:39cdaa5757485ad865903beeed690d73700229a9}}, although it is very small compared to the SFR of interacting galaxies, which is of the order of 1-{{formula:541a64ac-97ea-4c51-88e2-8f8dd1709a7d}} M{{formula:d9378450-86dc-456e-a486-2ba19949dad6}} yr{{formula:fe8c3c05-7ade-4fd4-b50c-d5dfd73923c1}} {{cite:9db93d5699072e8e5164cbb2256fe2b44237c641}}.
| d | 8f4d3de4105667da987a0c55b8d019bf |
Why should one expect eq:srequals2ew to be true? An intuitive proof, also discussed in Ref. {{cite:f8bbb068268b51c78a985a6543b8253711e9b538}}, is that {{formula:fadacbde-6413-4f49-af15-cd35aafc1eff}} is dual to two copies of the original entanglement wedge, glued along the RT surface as depicted in fig:EW. The entropy {{formula:50b860ff-6337-41d3-915f-5282dcd9b044}} then equals {{formula:6ae29d1e-bc32-4caf-bc59-3a286dc6e551}} by the RT formula and symmetry.
| i | 0c117594615802e2fa3824333cc614d8 |
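In the common notation for this construction (a schematic rendering, since the symbols above appear only as placeholders), the RT surface of the doubled region in the glued geometry consists of two copies of the entanglement wedge cross section \Sigma_{A:B}, so

S_R(A:B)=\frac{2\,\mathrm{Area}(\Sigma_{A:B})}{4G_N}=2\,E_W(A:B).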
Hypergraphs, a generalization of graphs in which edges can be incident to more than two vertices, serve as a natural tool to model such complex and high-order relationships. For example, in a co-citation hypergraph, hypernodes represent papers and hyperedges represent citation relationships. The ubiquity of complex relationships in the real world naturally encourages the study of hypergraph learning,
including clustering of categorical data {{cite:eb6409ab0870d080d9f8f55129f2eb8b4810214f}}, multi-label classification {{cite:9e3976ca453740f6f97867ffb2e4efc787cee360}}, image segmentation {{cite:7bfea91aec35b43e7a0351e54a0339c8361b8edf}}, image classification {{cite:4ead8947d4fc548d5ddc7d52e028a40be5129796}}, mapping users across different social networks {{cite:e69844a4b8fe7323dd55d510f77ae662dafc4035}}, and so on.
| i | 6352526c499e22d5eddd398800908d91 |
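A toy version of the co-citation example, written as an incidence matrix (papers times hyperedges; the names and groupings are purely illustrative):

import numpy as np

papers = ["p1", "p2", "p3", "p4"]
hyperedges = [{"p1", "p2", "p3"},   # e1: papers cited together by one source
              {"p2", "p4"}]         # e2

# incidence matrix H: H[v, e] = 1 if paper v belongs to hyperedge e
H = np.array([[1 if p in e else 0 for e in hyperedges] for p in papers])
print(H)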
For the most popular distance metrics—the Euclidean distance ({{formula:b0075072-7161-4e27-805f-88920d54534c}} -norm) and Manhattan distance ({{formula:59ffd5a3-bf94-4b7e-80f9-6c141d4db061}} -norm)—classic dimension-reduction (sketching) techniques provide very efficient distance oracles {{cite:416f1d22b7984440c958862886a3e2fa34302ac3}}, {{cite:cb2e380ecc50f9eb0985316f76c222b25452ba12}}, {{cite:4b7d6d1880ea3a1b15dc1e035b3713a461b2add3}}.
However, in many real-world problems, these metrics do not adequately capture similarities between data points, and a long line of work has demonstrated that more complex (possibly learnable) metrics
can lead to substantially better prediction and data compression {{cite:cf4ac97a20b330ae9e2fa70a0a334ac3a9ecb8ae}}. In particular, many works over the last decade have been dedicated to extending various optimization problems beyond Euclidean/Manhattan distances, for example in (kernel) linear regression {{cite:edfc42244ef1f06e331a3910df435da62afa7f22}}, approximate nearest neighbor {{cite:8d3bf4c84fa66d594fa888d073bae5d0a03db8fc}}, {{cite:b9b6821d89ee1a61a946758bc3d7b9699f2b11dd}}, sampling {{cite:b3d4b60fb4d0f70038fd3a3011c7cac9fe441a13}}, matrix column subset selection {{cite:bef1a3b9fd69d0b67cfdbaca48326bfd455d7dda}}, and statistical queries {{cite:a11c219ef47115ca46a665086f037d1616108512}}.
| i | c863f917cab2a0f7108c64a3b480f724 |
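As a reminder of what such a sketching-based distance oracle looks like for the Euclidean norm, here is a minimal Johnson–Lindenstrauss-style random-projection example (dimensions and data are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 512, 64                       # k grows like log(n)/eps^2 in JL-type bounds
X = rng.normal(size=(n, d))                   # data points
P = rng.normal(size=(d, k)) / np.sqrt(k)      # random Gaussian projection
Y = X @ P                                     # low-dimensional sketches, stored instead of X

i, j = 3, 7
print(np.linalg.norm(X[i] - X[j]))            # true l2 distance
print(np.linalg.norm(Y[i] - Y[j]))            # distance estimated from the sketch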
Furthermore, four topics beyond the scope of this paper are worth further
investigation. In this paper, our proposed
algorithm is based on two-step estimation under a piecewise constant
proportional hazards model assumption. Proposing an efficient
sampling algorithm that avoids the Laplace approximation is an important direction for future work.
Furthermore, we fixed the number of pieces of the baseline hazard in both
the simulation studies and the real data analysis. Allowing an adaptive number of pieces
in the baseline hazard is left for future research.
In addition, variable selection approaches based on the hierarchical CRP
{{cite:a0c106316980ca66487f4184992887f7c35a4a3d}} are also worth investigating. Allowing
different covariates and the baseline hazard to share
different clustering processes is also an important direction for future work.
| d | 80ea3778acbe6343186f33ce09495a36 |