text: stringlengths (54 to 548k)
label: stringclasses (4 values)
id_: stringlengths (32 to 32)
The quantitative results in these tables show that NanoNet consistently outperforms or performs on par with its competitors. They also show that NanoNet can produce real-time segmentation (i.e., close to 30 FPS or more on each dataset in the tables). This is one of the major contributions of the work. The other strength of the work lies in its parameter efficiency. From Table REF , we can observe that the best-performing NanoNet (i.e., NanoNet-A) uses nearly 35 times fewer parameters than ResUNet {{cite:66a90348c698df1a43e9570878286c00345cd5e1}}. Similarly, NanoNet-C uses 225 times fewer parameters than ResUNet while also producing better DSC, mIoU and FPS on Kvasir-SEG.
d
6c7f97ca362300ffd3e6d56a3254f116
Generation: All the generative networks are trained on the {{formula:d8919c77-49b0-4d7c-8428-e87364474cb1}} real data in E-Gait. We report an FID score of {{formula:e6854721-6770-4c4b-b3eb-1befb5fe19fb}} , while the FID score of Baseline-CVAE is {{formula:03c144a4-f84c-4cb9-8e6d-99d8621577a4}} . Lower FID indicates higher fidelity to the real data. However, we also note that vid2vid {{cite:f4a12639bc51352e38a099bbb8aaf10cdc465824}} completely memorizes the dataset and thus gives an FID score of 0. This is undesirable for our task, since we require the generative network to produce diverse data that can be added to the training set of the classifier network. {{figure:860b6e04-b57a-4d1e-a12a-722ff4aef409}}
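The FID compared above treats real and generated features as Gaussians and measures the Fréchet distance between them. A minimal numpy/scipy sketch of that standard computation, on illustrative random data (not the E-Gait features):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^(1/2))."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):       # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)

def stats(x):
    """Mean vector and covariance matrix of a (samples, features) array."""
    return x.mean(axis=0), np.cov(x, rowvar=False)

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 8))             # stand-in for real features
fake = rng.normal(loc=0.5, size=(500, 8))    # stand-in for generated features
score = fid(*stats(real), *stats(fake))
```

A model that memorizes the data reproduces its feature statistics exactly, which is why vid2vid's score collapses to 0 in the comparison above.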
r
87449fe121e49781218311df609c63c8
Next, we turn to a physics-based design of the interparticle {{formula:5a8d298f-b10e-4707-becf-3296b7b27a1b}} , and refer to a canonical, Bogoliubov-type form {{cite:04a928d3866c95c37a62110fadbd7f6875443494}} for the square of the normal mode excitation ({{formula:c332d0d0-3fcd-4246-9392-0b43b513bc9c}} ) energy {{formula:3e2f7a93-bead-40aa-a2c4-f277fd895516}}
r
c264fe2d10286c687de8ea6eb9175de4
There are also several baselines worth comparing to NP-RepMet: for instance, we can train a standard object detector on base classes using the same FPN-DCN backbone and then fine-tune its classifier head on novel classes. This is denoted as `baseline-FT' in {{cite:0c2f7bb4a46c3b5e6ec8179100ad54c24f9f675e}} and Table REF : the reported results are 35.0, 51.0 and 59.7 in the 1-, 5- and 10-shot settings, respectively. More baseline implementations can be found in {{cite:0c2f7bb4a46c3b5e6ec8179100ad54c24f9f675e}}; they all perform considerably worse than RepMet/NP-RepMet. {{table:6cd8a42d-2f11-4ccb-bd82-35d476a096de}}
r
490e1129397ea5ddd086d9a03cf8bb81
The main distinction between random network models lies in the probability with which an edge is added between two nodes. In the BA model {{cite:e83f98a055832812574a65f2aaa7c84d288fa169}}, a node is added at each step, and each of its new edges attaches to a pre-existing node with probability proportional to that node's degree. In the ER model {{cite:ebdef214c66cceebd361229a168a9bfb72353b37}}, the number of node pairs is {{formula:fb1eff6f-1159-4c0f-b8a1-260308c9d637}} , and each pair turns into an edge with a fixed probability {{formula:0f98ab17-1593-4025-bfa7-c82d5108c0f7}} . The CL model {{cite:5f397b9bb4fbb64517a64939068ef1ec4a1d2fc4}} generalizes the ER model to match the expectation of a given node degree (number of neighbors) distribution by using unequal edge probabilities.
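The ER construction described above can be sketched in a few lines; each of the n(n-1)/2 node pairs independently becomes an edge with fixed probability p (function and parameter names are illustrative):

```python
import itertools
import random

def erdos_renyi(n, p, seed=0):
    """G(n, p): each of the n*(n-1)/2 node pairs becomes an edge with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u, v in itertools.combinations(range(n), 2) if rng.random() < p]

edges = erdos_renyi(100, 0.1)
# Expected edge count is p * n(n-1)/2 = 0.1 * 4950 = 495.
```

The BA and CL models differ only in how that per-pair probability is chosen: proportional to current degrees in BA, and set per pair to match a target degree distribution in CL.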
r
e2fbe2fa5b26c2f2ac5f8f84fee6da36
$E^2_{s'}(n,R) = \frac{-R + s'\sqrt{R^2 + 4n \cdot 2 v_F^2}}{2}$ Second, the Landau levels (LLs) for fermions in graphene can be found by requiring {{formula:7335e319-bac2-4c19-bd20-a09bd37bcc01}} : {{cite:d69c58d76152c8b0e94aedb4023710b2e82d8925}}, {{cite:42c52f39865c38143414792c0eb337177765a1c8}}, {{cite:e952185ce121e9f1abee9c980446d83aa8e17c08}}, {{cite:0660719672420f54854ee1932648dd6f5221b971}} $E_n^s = s\, v_F \sqrt{2 e B\, n}$ . Additionally, by imposing some restrictions on the magnetic field (the strong-magnetic-field limit), we can make further approximations. Thus, at high energy, we have {{formula:7e2d7b88-f365-4f12-8b38-c9a7586a6570}}
r
0efb1dd9dd78bba63ea9301677c76422
Furthermore, we conducted experiments on the publicly available real-world dataset provided by SPANET {{cite:37f7eb4517f219c779715c6de466a2f962c58ac9}}. As shown in Fig. REF , the traditional hand-crafted method, JCAS {{cite:0add08d0c17c87a8ad87d1761700b6fc9bd4ba8b}}, encountered difficulty in removing rain artifacts. Although CNN-based methods {{cite:37f7eb4517f219c779715c6de466a2f962c58ac9}}, {{cite:b9b8b9d98e64e4a9d29f7c6bbc0b4cff6e935ee5}} remove rain streaks better than the hand-crafted method, they still suffer from dot-patterned rain streaks. In contrast, our method preserves background details better and effectively removes various rain streaks, including dot-patterned and long-shaped ones. In addition, we conducted a quantitative evaluation using the RealDataset {{cite:37f7eb4517f219c779715c6de466a2f962c58ac9}} in the fourth column of Table REF . Interestingly, the hand-crafted methods {{cite:00b391dfe4cc1cc9843189429bcdeffe42373c35}}, {{cite:d63b376ce8a3270f1e9eb9914af700ffef109989}}, {{cite:0add08d0c17c87a8ad87d1761700b6fc9bd4ba8b}} outperformed the CNN-based method {{cite:09a72737af0719bdcc990db430fbbb3cee2b8fc9}}. This shows a limitation of the fully supervised learning paradigm: such methods tend to fail on real rain streaks under conditions never encountered during training. Our model achieved the best results by leveraging real-world time-lapse data without ground truth. {{figure:85a93f97-8b66-408c-8296-dd4705ce3321}}{{table:1919526a-1fe9-4ed7-828b-2cbc98cc3e3d}}
r
9cc7a75b7c75d86945bd64cfd1d4af74
Global Non-Penalized (GNP) : In this basic approach, the control points are optimized without a smoothness penalty in the cost function (Equation REF with {{formula:1356aa5c-43bf-4523-99a6-e01b9af7a896}} ). The number of control points is set to match the RMSE threshold given in section REF , paragraph 2. We call it global because the spatial and radius dimensions are not addressed separately. Global Non-Penalized with Akaike criterion (GNP-AIC) : Optimizing the number of control points to obtain the desired spline smoothness is a common approximation method in the literature. In this approach, the optimal number of control points minimizes the Akaike information criterion ({{cite:8915c445636d6e4e7331a765cfd20dc7f91401cc}}) {{formula:250cb312-7ac4-4ad1-b8c6-3135343970a9}} : {{formula:1ec2be10-736a-4882-87ee-4b8f1ff2a0e1}} where {{formula:30af895c-2966-4597-b6ba-db5600583fee}} is the number of data points, {{formula:f53dc133-3d4e-4682-a6de-e0055a9c0a9a}} the degree of the spline, {{formula:5119a843-ed9e-4830-83b6-d0a5ccf5d811}} the number of control points, and SSE is the sum of squared errors from the data points, including their four coordinates. Global Penalized with Akaike criterion (GP-AIC) : This approach corresponds to the original approximation by penalized splines described in {{cite:203ccf6282294953bc1a65e214060049dde878e2}}. It uses the same global approach as GNP, but with a smoothing penalty defined by a parameter {{formula:a64bbb37-62dd-4076-95eb-84aec42092f2}} as in Equation REF . Spatial coordinates and Radius Penalized with Akaike criterion (SRP-AIC) : The approximation strategy that we propose in this work penalizes the spatial and radius dimensions separately. The comparison of our strategy with GP-AIC allows us to evaluate the contribution of treating the spatial and radius coordinates individually.
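The AIC trade-off used in the GNP-AIC and GP-AIC variants can be illustrated as follows. The penalty form n·ln(SSE/n) + 2(k + p + 1) is a common least-squares variant and is an assumption here, not necessarily the paper's exact criterion; the SSE values are hypothetical:

```python
import math

def aic(n, sse, k, p):
    """Akaike information criterion for a least-squares spline fit:
    n data points, SSE sum of squared errors, k control points, degree p.
    The effective parameter count k + p + 1 is an illustrative choice."""
    return n * math.log(sse / n) + 2 * (k + p + 1)

# More control points lower the SSE but pay a complexity penalty;
# the selected k is the one minimizing the criterion.
candidates = {8: 4.1, 12: 1.9, 16: 1.7, 24: 1.65}   # hypothetical k -> SSE
best_k = min(candidates, key=lambda k: aic(200, candidates[k], k, p=3))
```

Past the point where extra control points stop reducing the SSE meaningfully, the 2(k + p + 1) term dominates and the criterion rises again, which is what drives the selection.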
m
8cc720abf4a5ddb878af8619a4a64ed4
Second, we do not understand how the pseudo-parallel data translated by the backward model affects the forward model's performance. For example, {{cite:06de090eb5c69659f2db2c9224d60a74ca8d6fb0}} have observed that pseudo-parallel data generated by sampling or by beam search with noise from the backward model trains better forward models, even though these generation methods typically yield lower BLEU scores than standard beam search. While {{cite:06de090eb5c69659f2db2c9224d60a74ca8d6fb0}} attributed their observation to the diversity of the generated pseudo-parallel data, diversity alone is clearly insufficient; some degree of quality is necessary as well.
i
5e128b184585a3d704224465233e1c4a
We focus on 4 tree ensemble baselines from the literature: single-task soft tree ensembles, sklearn GBDT, sklearn multioutput RF {{cite:a31b180d823e3385745f3e38cff3edca53a9ec8c}} and the r-grf package for GRF {{cite:4afa82a16a44dc51e2e65e9d2db20ebca2feae72}}. We consider two multi-task settings: (i) fully observed responses for all tasks, and (ii) partially observed responses across tasks. In the former case, we compare against RF and GRF. In the latter case, we compare against single-task soft tree ensembles and GBDT. Note that the open-source implementations of RF and GRF do not support partially observed responses in multi-task settings, and GBDT does not support the multi-task setting at all. We refer the reader to Supplemental Section REF for details of the tuning experiments. {{table:3919c2a2-107e-4c5b-a27b-61e66ed758bd}}
m
faeb4c3c64a907e4e2ad6f051f6f960d
Fig. REF a-c show the time series {{formula:8733f109-dafa-40d3-923a-e654fcb635c8}} and corresponding histograms (Fig. REF d-f) with increasing {{formula:5f3cd270-e296-4476-9a01-ef2a6b8c6c8b}} . As {{formula:cdea7a00-df9b-4fd5-8d8d-92a17cd95608}} approaches the forward transition point, the number of produced metastable coherent events increases. Analysis of the histograms reveals that the variation of {{formula:07d40e43-7467-4dc4-8f8d-e99af7a3fd7d}} near the preferable incoherent state obeys a power law {{formula:5c160005-b6aa-4299-b6e8-2eb5980dff34}} . Although the power law is fitted in a relatively narrow region of {{formula:c7f5fa92-52e4-4471-ac62-64febda6356f}} values near the stable fixed point, this observation could be considered a hallmark of self-organized criticality {{cite:6e48c42d49922d32feb9400321bdd572674eac5a}}. It implies that the occurrence of small- and medium-size events, i.e., the short-term establishment of coherence in small- and medium-size groups, obeys the scaling of the same physical mechanism. Although the dynamics seem highly turbulent at first glance, the power-law fit of the tail indicates the presence of spatial order. At the same time, a peak beyond the power-law fit indicates a non-negligible probability of large deviations of {{formula:a63ba670-35ff-41b8-8203-69ff2dba321b}} . Such events lie outside the scaling rule and obey different physical principles. Such a distribution is a hallmark of a specific type of extreme event, dragon kings (DKs) {{cite:af951ceaa4b4b61a429948647415cf76a98a685b}}, {{cite:583de1b3c4549f4c83bf424b83efa545c4595f90}}. DKs possess a remarkable property: these events are significant and non-random, and are therefore predictable to some degree. Indeed, DKs are generated in a deterministic system and obey certain mechanisms underlying critical behavior in the vicinity of the tipping point.
Predicting such states, although tangible, is a challenging task that requires a deep understanding of the structure and dynamics of the considered networked system. {{figure:b5d17974-ace5-4c23-b43f-3463d208fba3}}
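A power-law tail p(x) ∝ x^(-γ) appears as a straight line in log-log coordinates, so the exponent can be fitted by linear regression on the logs. A minimal sketch on synthetic data (not the measurements from the histograms above):

```python
import numpy as np

def fit_power_law(x, y):
    """Least-squares line through (log x, log y); the slope is -gamma
    for a power law y = c * x**(-gamma)."""
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return slope, intercept

x = np.linspace(1.0, 50.0, 200)
y = 3.0 * x ** -2.5                      # exact power law with gamma = 2.5, c = 3
slope, intercept = fit_power_law(x, y)   # slope recovers -2.5 up to rounding
```

On real histograms, the fit is restricted to the scaling region; the peak of large events beyond the fitted tail is precisely what flags the dragon-king candidates.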
r
4f83278779c65a41593719e50ebd392e
In order to judge whether the observed magnitude of the conductivity signifies strong- or weak-coupling behaviour, we note that at weak coupling (including only QCD processes) the usual expectation is that {{formula:9117c10b-9afe-4e57-b478-f2865d3046c4}} {{cite:f975b1a90c510c5ce227827820dbb6dccca4247e}}, which is much larger than 1 in the region where the weak-coupling analysis is valid, namely at asymptotically high temperatures. A useful benchmark at strong coupling comes from holography, for the charge diffusion coefficient {{formula:8a62e4f7-d947-465d-9ad4-d7db621ca980}} , where {{formula:24535df0-d6c3-4e8d-865f-b8d85142caf0}} is the charge susceptibility. The characteristic result at strong coupling is {{formula:8e6a3747-86dd-44d2-ba05-eb831a02b4e2}} in {{formula:4fb24167-ead4-4e25-a1e1-cc09fb93ad29}} Yang-Mills theory at nonzero temperature {{cite:11bedbbdd506c6df319801381c2cfb0a8b13b376}}, {{cite:dd298227699c041a7fe4606c1f579a4dbad1da14}}. In Ref. {{cite:e02e48308c27514f93b6cdbe00ce6494e3f6b59f}} the temperature dependence of {{formula:95d518fd-3640-45ce-8415-b38840ebcfe6}} was computed in a self-contained manner, i.e. by also computing {{formula:40da8266-e640-457b-9cc4-efa1589d9d05}} within the same lattice QCD setup, with the result that {{formula:eb5c2f06-99ba-4524-a7d2-07081c9e6cdf}} , compatible with the holographic order of magnitude at strong coupling. Moreover, it was observed that {{formula:e5dd2b5a-da30-458b-bcbe-49693c609b91}} has a minimum in the crossover region; see Fig. 14 of Ref. {{cite:e02e48308c27514f93b6cdbe00ce6494e3f6b59f}}.
r
d2263d8843383e9cf357a93b1c6f8470
Other advantages. Recomputing population statistics is also found important in the presence of other train-test inconsistencies: {{cite:006a5f73e7e728b3dbf61bea045a5618905bf7f2}}, {{cite:ad5a8e4c51741fbc1928847aeebf02425abc0daa}} recompute statistics because the model weights go through averaging after training. {{cite:091fe452ca8b80f20924fd58e0d5059b85840ce9}} recomputes population variance to compensate for the “variance shift” caused by the inference mode of dropout. {{cite:722d242aede1ff4d0db85dd75524bee4b0e65d56}}, {{cite:b6a31f03e651a235d98fd18845b3f6f3ddfbc57f}}, {{cite:1d610c284025a81c6b9d32b6ea0760f33d6c23e8}} recompute / recalibrate population statistics to compensate for the distribution shift caused by test-time quantization. Recomputing population statistics on a different domain {{cite:39bc2cdede5ae9f1a056f534b3a7c1710991e0e8}} is common in domain adaptation, which will be discussed further in Sec. . All these examples involve extra train-test inconsistencies in either model or data, which justify a re-evaluation of population statistics, as the EMA statistics estimated during training may be inconsistent with the feature distribution during testing. State-of-the-art vision models often regularize the training by strong train-test inconsistencies {{cite:121913d2e08c63e7134324292a118b965a211301}}, {{cite:ad467ee74e83a9909609b91c36956713b529c9ca}}, {{cite:c630454769c02330f4b634334e80231b9a141ffb}}, {{cite:43f04b8680d335cee0b1ce2866fbc14b1ef01011}}, {{cite:0c46bf68072daf664a5977e1a3b82d97458ec23a}}, which may also interact with BatchNorm in unexpected ways.
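Recomputing population statistics, as opposed to trusting the training-time EMA, amounts to one extra pass over (possibly shifted) data that accumulates exact moments. A framework-agnostic numpy sketch of that accumulation:

```python
import numpy as np

def recompute_population_stats(batches):
    """One pass over activation batches, accumulating exact population
    mean/variance (via sums and sums of squares) instead of an EMA."""
    count, total, total_sq = 0, 0.0, 0.0
    for b in batches:
        count += b.size
        total += b.sum()
        total_sq += (b ** 2).sum()
    mean = total / count
    var = total_sq / count - mean ** 2    # population (biased) variance
    return mean, var

rng = np.random.default_rng(0)
batches = [rng.normal(2.0, 3.0, size=64) for _ in range(50)]  # stand-in activations
mean, var = recompute_population_stats(batches)
```

Because the accumulation is exact, the result matches the statistics of the concatenated data, whereas an EMA weighs recent batches more heavily and inherits whatever distribution the training-time batches happened to have.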
d
cc0d93861fde450d35879d28d90b4d14
In 2012, Gleiser and Stamatopoulos (GS) {{cite:fa6d154a2c2c6b3d963b07b9bdc26abbb068d900}} showed that CE carries information about the parameters of a given model for which the energy density is localized. In Ref. {{cite:fa6d154a2c2c6b3d963b07b9bdc26abbb068d900}} it was shown that the higher the energy of the function that approximates the actual solution, the higher its relative CE, defined as the absolute difference between the CE of the actual function and that of the test function {{cite:539ccf4e2b135c7e89df35b2bd0d5987b3be4530}}. CE applications have proven to be a promising path toward the physical understanding of several systems, e.g., the non-equilibrium dynamics of spontaneous symmetry breaking {{cite:d339ce7d857df3ca7d7500bdfa90f33df51ed5ae}}, the stability bound for compact objects {{cite:8b4ba591359040f28a4f620dfbaf30ff1266401a}}, and the emergence of localized objects during inflationary preheating {{cite:45f57eebea8911ec0ea34da19566b771839dfb9a}}.
i
50802974cee4a70604cd22a57b6915b3
For a graph {{formula:d204972b-626a-4c5b-a747-979fadd4232b}} , a natural generalisation of {{formula:b6d5b8e4-dc99-4022-86d5-d96b4770d6e5}} is the graph {{formula:230d340c-eacb-4280-846f-14f3cfa3635e}} , which has the same vertex set as {{formula:bfb12d15-db35-4c27-b679-efe339508f44}} , and {{formula:caa5b7de-4292-4544-992b-a9d8fe07c43c}} is an edge in {{formula:004b0dbc-f8d4-4e94-ae98-d169edca744f}} if and only if {{formula:20b31f41-7e2c-489a-a4d4-b63c20caa92f}} and {{formula:7a4cd478-c544-48ae-845e-48f64e21acca}} have odd distance. Both constructions in the previous paragraph tell us that for outerplanar graphs {{formula:7d7ef087-b31a-4398-b7e5-fa6e622e20cc}} the chromatic number of {{formula:94a900ac-afb0-4320-8dd0-18966f461722}} can be arbitrarily large because the clique number {{formula:dd9c4f9e-afaf-46e3-9552-36b80aacaa2d}} can be arbitrarily large. This motivates the following open problem of Thomassé, which appears in {{cite:73d098bc58df8f85ba0a56189fb7e7af839bb5db}} (see also {{cite:e2a9d306eaa0822a4389247ad1899b5228b344a4}}).
d
d97a703d4fe49bd0681f1265f70fb45b
Peak-to-peak maps are scatter plots of consecutive local maxima of 1-dimensional solution trajectories. Many chaotic systems (e.g., Lorenz {{cite:85c78d931216a2bcf7c8ede646f693cde1776c21}}) are known to exhibit so-called peak-to-peak dynamics, in that the value and time of future local maxima can be accurately predicted from the value of previous local maxima, or peaks. A peak-to-peak map, which relates the values of a 1-dimensional trajectory at consecutive local maxima, represents a reduced-order model of the original system that can be used, for instance, in optimal control {{cite:44cc75292d7b4fd2098d43cce093c74dc53947c0}}. Here, we use peak-to-peak maps to provide a compact visualization of the system, allowing a by-eye comparison of the approximating-system dynamics to the true system and a qualitative indication of system stabilization as the number of interpolating nodes is increased. To construct peak-to-peak maps, local maxima were extracted from interpolated solution trajectories over the time interval [0, 2.5e3], sampled at a resolution of 100 points per unit interval.
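The construction described above can be sketched as follows, on a synthetic modulated oscillation (the actual trajectories come from the interpolated solutions):

```python
import numpy as np
from scipy.signal import find_peaks

def peak_to_peak_pairs(x):
    """Return (p_k, p_{k+1}) pairs of consecutive local maxima of a 1-D trajectory;
    scatter-plotting these pairs yields the peak-to-peak map."""
    peaks = x[find_peaks(x)[0]]
    return list(zip(peaks[:-1], peaks[1:]))

# A slowly modulated oscillation: successive peak heights trace a smooth curve.
t = np.linspace(0.0, 100.0, 10_000)
x = (1.0 + 0.3 * np.sin(0.1 * t)) * np.sin(t)
pairs = peak_to_peak_pairs(x)
```

When the map collapses onto a thin curve, the next peak is (nearly) a function of the current one, which is exactly the reduced-order structure the visualization is meant to expose.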
r
d439be9688d08af5ada9e5d32e2fec46
In vision, explaining an instance with its corresponding counterfactual {{cite:0d54ab1321eda8f579c1c79284964cfbcb829fb6}}, {{cite:1b385dcea8e48b5f96ad6806b29a8951487c3d98}} has become common for highlighting the changes that would most easily flip the prediction. In {{cite:0d54ab1321eda8f579c1c79284964cfbcb829fb6}}, the authors perform minimal edits by swapping regions of a query image with those of a distractor image until a decision flip occurs. However, the choice of a suitable distractor image is crucial for quick convergence, and this choice can be unintuitive in some domains, such as the medical domain, or when little information is available about the dataset. Further, the image resulting from such edits can look unnatural and therefore lack explanatory value. For such explanations to be effective, the changes applied to the image to obtain a different prediction must be minimal and human-interpretable {{cite:5e5d1db2f331518fb2c419b0f111a78789e7802c}}. Alipour et al. {{cite:1b385dcea8e48b5f96ad6806b29a8951487c3d98}} use the latent space of a pretrained StyleGAN to retrieve counterfactual latent codes; their approach is similar to ours in idea but differs in implementation (unlike ours, their method produces only causal explanations, and it employs pretrained attribute detectors in latent space that are largely unavailable for medical domains). Recently, semi-factuals have been argued to offer advantages similar to counterfactuals {{cite:242dcde8fc6f10dc4c9833d4471679a849b3218d}}. As opposed to counterfactuals, which phrase explanations as an 'if only' clause, semi-factuals propose explanations of the type 'even if', i.e., what changes to the situation would still lead to the same outcome. In our earlier example, a semi-factual image might illustrate the inflammatory changes that occur right before an ulcer starts forming; at this point the doctor will still identify the image as abnormal.
i
455155f5a0ebe17f4ef0d3534265e22c
Following the experimental progresses, there have been plenty of theoretical works concerning various properties of these excited {{formula:a9898b62-1b41-41db-99fd-82be27f03943}} ({{formula:8039ba63-da0b-4877-b994-27e3d51cfe42}} ) states {{cite:f139f107e70c86e85919f2fc1dfe2dfeb6b13bec}}, {{cite:44ef7238d09318a80b98da51631b2cbff7a379a5}}, {{cite:15805dcd0238c9537a299fa5a08e62d2b2cba900}}, {{cite:6992292f6888f769377a5bfe9286398948fa6f8e}}, {{cite:0f8b0ea7ddeb6c10eff0895f9e8fea1c49379f70}}, {{cite:7f5310218c2890da4db505f5ea758873beab3cd0}}, {{cite:0cdc52181186018618633488961b2f8e723d3dbc}}, {{cite:79054a0d27a917a852afca6a792928189c58fbea}}, {{cite:64cb14eefd504da4ee11ff61b25a55d2efe9a265}}, {{cite:b4686c125d4fd568a71286dcd714a44f81f0a390}}, {{cite:cd42920cc8545846ad734ab8f6b157b699436a67}}, {{cite:d646693ec2882d2e567d95a7435fd85beb3d5c9e}}, {{cite:d762a701477445582842091a8502767490237eb9}}, {{cite:0fd9f3784a3ead13ad6b82b73454821f0336d9f0}}, {{cite:8620dfe0cd06bfb632fefff32da993d67fbbd081}}, {{cite:f72321422dd92418c1c3e0b9166a624e070ea21d}}, {{cite:d66467aba615367c843869171c1a687a3d4d8837}}, {{cite:8410447e6b48d64ffef722d8cee2617da9f803b2}}, {{cite:d74c6270084c91872dd17a8bfbe88cbe3d15bd41}}, {{cite:d0fb491a4d5714323ea32a8b8c7a2e3358e7528c}}, {{cite:f0db4917447a54e30a05e1c3e49ffa55fa66b4c7}}, {{cite:27ee3ab20e1a574411e5daeee8ca474f9c578fda}}, {{cite:202675524e4c1e4d91906f9994558baef0ea1748}}, {{cite:10e532e3caeb8b4863eb68516f69eb1c4a870c7b}}.
i
35049faf0705887008f3a663cf179219
{{formula:415daf09-5142-4aef-a03a-334309bdba56}} being Riemann's zeta function. Mézard and Parisi {{cite:fc22f3862e338e962cc33e2189d78322ad24d734}} also conjectured that in the exponential case ({{formula:5c13c19b-30e4-4491-8079-d6dcc0ea39fe}} ) it is true that {{formula:205663c6-cb00-4534-bc08-03984922199e}}
i
db658dfc3c304c33f007925a229f232c
NGCF {{cite:0ec1d48670ec35dc3e1b51d2263bb3b3ae6bd4f1}}: NGCF integrates the bipartite graph structure into the embedding process based on the graph convolutional network. It explicitly exploits the collaborative signal in the form of high-order connectivities by propagating embeddings on the graph structure.
m
641884ebaa9d489d53dc27e1666759d8
where {{formula:ee7360c2-4b61-4d69-94da-4268ea02f8b2}} is the discrete gradient operator calculated in the {{formula:aaa8ba3a-c5dc-453f-b651-6c4836143345}} th cell. To discretely satisfy the equilibrium condition in static droplets and bubbles, {{formula:16ddf592-d5a1-4d65-9af1-94e023e6cf1c}} has to be discretized with the same scheme as the discrete pressure gradient {{formula:dff66e63-f028-46e3-a856-12864e5855ca}} {{cite:faac793c85cf8f3ce299901d8883d71ee5d7b74a}}. Thus, the modelling of the volume fraction gradient is coupled to the underlying discretization method in the CFD solver, and there is no room for further improvement of its accuracy. In this regard, {{formula:07851a11-2ef0-4456-a6ec-bf94a438d02d}} is the only component available for modelling, and its correct estimation is of key importance. The simplest estimation is given by {{formula:9ce21656-c3f2-4828-b84b-9e0dacc51a5b}} where {{formula:55ff74e9-374e-4087-a735-4938eea05aa7}} . The major problem in this formulation is the noisy discrete derivatives of the volume fraction field due to its abruptly varying nature. Despite this shortcoming, mature CFD libraries like OpenFOAM {{cite:724ceb1bf242cab7b1614e135397b92dca00659c}} are still distributed with this simple method for estimating the curvature. In their original paper, Brackbill et al. {{cite:6cb331fb286ef7127fde48ac80c7632ba5057c61}} tried to remedy this issue using smoothed volume fraction fields {{formula:7b81828c-2f1c-4449-b766-ad26a2eed863}} . However, both the smoothed and raw variants lack convergence with grid refinement {{cite:546136c2a2c533432eafae160ff5f319c089d002}}. Moreover, utilizing a smoothed volume fraction field also modifies the discrete gradient operator and violates the static balance between pressure-gradient and surface-tension forces {{cite:faac793c85cf8f3ce299901d8883d71ee5d7b74a}}.
On regular grids, a more consistent and well-balanced smoothing can be achieved using height functions {{cite:ebd8ad04d6b0a1cca3a9458fbf3dc287bedca004}}, {{cite:546136c2a2c533432eafae160ff5f319c089d002}}. To this end, larger stencils, typically 7 cells along the height, are employed. The height function method has very good convergence properties {{cite:3236fb44782d6c317d325317de0f1a881f019917}}, but it struggles greatly on coarser meshes where the number of cells per radius of curvature is less than about 10 {{cite:faac793c85cf8f3ce299901d8883d71ee5d7b74a}}. Because of this drawback, alternative versions using adaptive grids {{cite:517663df12f6f6c6412044a9e95dd2e40794de5a}} and coordinate rotations {{cite:d396ea293fe86dbe21782bd13809be17ee9c6f0e}} have been suggested. Another curvature-estimation method, proposed by Cummins et al., is based on the reconstructed distance function (RDF), a signed distance function from the interface {{cite:546136c2a2c533432eafae160ff5f319c089d002}}. In this method, derivatives are applied to the RDF field instead of {{formula:989b9f58-65cf-433f-8c7d-8c97f099e232}} , which is a smoother field, hence delivering more accurate results. While the RDF method is in general less accurate than the height function method, it is much easier to generalise to unstructured grids.
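The height-function idea can be illustrated in one dimension: with column heights h(x), the interface curvature is κ = |h''| / (1 + h'²)^(3/2). A finite-difference sketch on an exactly circular interface (an illustration of the principle, not the cited implementations):

```python
import numpy as np

def height_function_curvature(h, dx):
    """Interface curvature from discrete heights h(x):
    kappa = |h''| / (1 + h'^2)^(3/2), via central differences on interior points
    (practical schemes use wider stencils, e.g. 7 cells along the height)."""
    hp = (h[2:] - h[:-2]) / (2 * dx)                # first derivative
    hpp = (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2    # second derivative
    return np.abs(hpp) / (1 + hp**2) ** 1.5

R = 1.0
x = np.arange(-0.3, 0.3, 0.01)
h = np.sqrt(R**2 - x**2)                 # heights of a circular cap of radius R
kappa = height_function_curvature(h, 0.01)
# Every interior estimate should sit close to the exact value 1/R = 1.
```

The accuracy here hinges on many cells resolving the cap, which is consistent with the degradation reported above when fewer than about 10 cells span a radius of curvature.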
i
af9c1bd8c11068fe0d1dc484424dadac
The asymmetric factor ({{formula:335fabaf-90cd-48ad-89ba-2d48cde3f3de}} ) is introduced to quantify the asymmetry in the SPDC emission profile. It is defined as {{formula:411c84d0-9c37-4aed-b929-97d40e078f61}} , where {{formula:7f7bd14e-46cc-4592-b9c4-72b16efe42c1}} is the {{formula:b14975d5-6f6b-41aa-9ca0-d85dc86c0352}} value of the thickness along the x axis, and {{formula:d32db454-3443-4f89-936a-e9318ddbba0c}} is that along the y axis. We study the effect of crystal length, pump beam waist and phase matching angle on AF separately by varying one input parameter while fixing the other two. First, by varying the crystal length we find that the asymmetry of the SPDC emission profile initially increases linearly with the length of the crystal and flattens for larger crystal lengths, as shown in Fig. 4(a). The theoretical curve (solid line) in the figure is plotted using equations (6) and (7). The error bars account for the phase-mismatch value, which is not considered in the equations, along with propagated instrumental errors. Because SPDC photons are generated throughout the length of the crystal, the width of the SPDC annulus is larger for long crystals. This effect, combined with the walk-off effect explained in the theory section, leads to greater asymmetry in long crystals compared to short ones. To assess the asymmetry due to the alignment of the crystal, we fix the crystal length and pump beam waist, rotate the BBO crystal to effectively change the emission angle, and study its effect on AF. We observe that the asymmetry in the SPDC mode increases linearly with the emission angle, as seen in Fig. 4(b). This is because the walk-off angle of the extraordinarily polarised pump beam with respect to the crystal optic axis increases with the emission angle; as a result, the asymmetry due to the walk-off effect is larger for larger emission angles. The three plots in Fig. 4(b) are made with different pump beam waists, varied by using a biconvex lens before the BBO crystal and carefully focused at the center of the crystal. All three SPDC parameters play an important role in determining the down-converted beam profile. It should also be noted that the SPDC photon collection efficiency into SMF is high for a large pump beam waist, as shown in the literature {{cite:b5023815a26cc955beedd2cf133ca4cac0aef5f4}}. This does not contradict our result, as the effect of the pump beam waist is insignificant compared to the other effects. Careful design of the experimental setup is required to minimize the asymmetry in the SPDC profile; however, its complete elimination is difficult to achieve. {{figure:4d78093d-7b4f-43a1-893b-8b734462be6f}}
r
9d019895c176f269436d3441fe2eca0b
where (*) follows from the sub-exponential concentration property of {{formula:6594e9c5-12d3-460e-a557-ea17bc2534f4}} random variables (see Example 2.11 in {{cite:339479fa953776b1939f5ae7241338fd08bcbcae}}). Finally, plugging (REF ) and (REF ) into (REF ), we obtain the bound for the minimum sparse eigenvalue as {{formula:081b4f09-bf87-4834-83f0-f9f7654c644a}}
r
8f25ca3ce315e7d79fd835218825dec9
A key characteristic of our proposed Spacing Loss is that the latent space regularization it offers can effectively act as an add-on to existing methodologies. We showcase this capability while evaluating in the single-stage setting. In Tab. 1, we organise different dataset splits based on the balance between the number of classes in the labeled and unlabeled pools. The concise notation in Row 2 can be expanded as: dataset{{formula:cdb8d3c7-6046-4bdb-b18c-519b4c927cb3}} total_class_count{{formula:1fe0b054-b5a9-4373-91c2-8413b795d49e}} labeled_classes{{formula:4e74a931-eb68-403e-935c-f1abcadbb7c3}} unlabeled_classes. The latent space separation induced by Spacing Loss helps to improve the class discovery capability in all settings. It is interesting to note that the improvement is more pronounced in the more pragmatic setting, where the split of classes between the labeled and unlabeled pools is skewed. The t-SNE {{cite:fd13cf5651699487c37811f65878095c61be7dc6}} visualization of backbone features in fig:tsne shows good separation in the latent representations of novel categories in the CIFAR-10-5-5 setting.
r
cab7d9a83f6d42bf78d426674431a4b9
Remark 3 (Adapted from Theorem 1 in {{cite:0ae9f395a47d1fc79446ab27105a566eafb05e82}}) For general {{formula:551b8d5a-c2e4-4503-ba4c-b537d7dc8524}} -divergence, we can obtain the following equality asymptotically under some mild conditions for {{formula:d8447577-dec1-437d-8ac2-751b2e4fd1da}} and {{formula:133a62df-24d2-4e5c-b852-8f3393173857}} : {{formula:a7759119-f875-4468-842a-5eb4def4a0d6}} {{formula:6bf9e662-f014-4b45-8599-ad41ac2ec6fb}}
r
7bf6b8c63cc98d83db152da0d6cad4bb
The structure theory of infinite-dimensional Lie algebras has been extensively studied due to its important role in Lie theory (see, e.g., {{cite:d93eff87d693ee0e5de45cbe9167fab8d6fe6bc7}}, {{cite:1759e65292f129353332abb2d50c6eb88a765b30}}, {{cite:b22714e3b77020c3fa3a562ff5071387eaea7de9}}, {{cite:8907f78ff8a7d12dbeda85105a30a66b3f9abf0c}}, {{cite:596e0c22a7b5720aa5803e9b6fcd9865340ea086}}, {{cite:27a4393a8b1d6b118847bd7b2ab04889f8e85aac}}, {{cite:c0c69b45d06a08f4ccaf5c45262cec3e7b899cf2}}, {{cite:0c88c429343a4922b3c3e17bdbf6218371a206dd}}, {{cite:501c7c5e2dc54c5130f252facf5dfc42a75913fb}}, {{cite:ab43c06cc23307a4af0858100794ef8b33268276}}, {{cite:221fbf7d469ef8925a6844efc3927db3b5341498}}, {{cite:ebdc8972eae119d6482c6e51833e273d3e74a338}}, {{cite:5f60db7275fe773a009e0d6d6bfedc47d316aba0}}). Loop algebras are a class of Lie algebras, of particular interest in theoretical physics, that contribute a large ingredient to the structure theory of Lie algebras, such as the affine Lie algebras in {{cite:e2532a98ed5033e7abc684f30031c75b92d36bad}}. Recently, the generalized loop super-Virasoro algebra, the loop Witt algebra and generalized loop Virasoro algebras were studied in {{cite:f4e196b1a774c1fca74bb54844bd58eb3e69deb2}}, {{cite:af9f43f75414d432e0e0acdf0823876b8786a78b}}, {{cite:6f1374e95e711a87dca5c88aca7c6e7dfb4dc392}}, {{cite:f66cb80db0c6a79bc5436a054dbaa0798309f7fe}}.
i
94e4216dc31a2083223ed89875194bf1
where {{formula:7142de28-6280-4810-a1f6-391398ddd9ba}} is a regularisation parameter and {{formula:1bb32e35-0e5c-4053-b31c-811e858e678f}} is a finite-dimensional subspace of {{formula:08d5d263-9936-4c05-8fe9-15ffd67a389b}} . What is more, for all {{formula:b68b6f4a-31ad-4058-af5b-292a661a626e}} the Representer theorem {{cite:6536c8d82942f58413e9ae1363ff25e8e0466c04}}, {{cite:5162fcffc6d4bf0429a80d8b569df1a2a276ec89}} guarantees that there exist coefficients {{formula:5d5a5d4b-4b6a-40fe-b3a1-417081777abc}} such that the solution to Eq. (REF ) is of the form {{formula:6e90fd49-8ac5-466a-ace2-addbabfa1191}}
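A concrete instance of this representer-theorem structure is kernel ridge regression: the minimiser is a kernel expansion over the data points, with coefficients solving (K + λnI)c = y. A numpy sketch with an RBF kernel (the kernel choice and all names are illustrative, not the paper's setup):

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * (a_i - b_j)^2), 1-D inputs."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def fit_coefficients(x, y, lam):
    """Regularised least squares in the RKHS: by the representer theorem the
    minimiser is f(.) = sum_i c_i k(x_i, .), with c solving (K + lam*n*I) c = y."""
    n = len(x)
    K = rbf_kernel(x, x)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

x = np.linspace(0.0, 3.0, 20)
y = np.sin(2.0 * x)
c = fit_coefficients(x, y, lam=1e-4)
f = rbf_kernel(x, x) @ c          # the kernel expansion evaluated at the data
```

For small λ the expansion nearly interpolates the data; the point of the theorem is that the infinite-dimensional search collapses to these n coefficients.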
m
7f5a4c4a3ee5c1ea1ebf82d4ee2193fd
Traditional method (termed Tr.): This method is to pre-process IMU data using the traditional approaches, i.e., a low-pass filter followed by the intrinsic modeling {{cite:ffb4e34ea7ab72d96f8cca02d08b0fee9361d59f}}. The results are used either standalone for integration or fused with the visual data for long term navigation {{cite:445cc2c4250a7bcde93490c29fee75894c91596f}}. VINS {{cite:f521ccb1e0d3c0386944afb7be9602b94ff38add}}: The second one is a representative state-of-the-art visual-inertial odometry method with open-source implementation, which also relies on analytical equations to process IMU measurements. End-to-end learning (termed EL.) method {{cite:88e6b8aa3f67f8bf620615104c2afe931c3317bd}}: The third one is a competing DNN method by learning the relative position and rotation directly, with our customized RNN implementation. TLIO {{cite:9c29dc68f7eccc094fec3f1eca887893b4e2c06f}}: The fourth method is the EKF and deep learning combined method, which demonstrates high-quality performance in augmented reality applications. Gyro de-noising {{cite:27caa540289239284d584a2d81d888579f2abca5}}: The last one is a DNN based gyroscope de-noising method for attitude estimation.
m
59352ee1376b9d81c1f47d3d62e60f37
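The first step of the traditional pipeline, a low-pass filter over raw IMU samples, can be sketched as below. The first-order IIR form and the cut-off/sampling rates are illustrative assumptions; the text does not specify the actual filter.

```python
import numpy as np

def low_pass(samples, fc=5.0, fs=100.0):
    """First-order IIR low-pass (exponential smoothing) of IMU samples.

    samples: (N, 6) array of [gyro_xyz, accel_xyz] readings.
    fc: cut-off frequency (Hz); fs: sampling rate (Hz).
    Both values are illustrative placeholders.
    """
    dt = 1.0 / fs
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * fc))  # smoothing factor
    out = np.empty_like(samples, dtype=float)
    out[0] = samples[0]
    for i in range(1, len(samples)):
        out[i] = out[i - 1] + alpha * (samples[i] - out[i - 1])
    return out
```

A constant signal passes unchanged, while high-frequency noise (e.g., an alternating sequence) is attenuated.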
Our approach consistently outperforms state-of-the-art methods by a considerable margin, with a parameter count comparable to Sketch {{cite:ad429749852330db369217dab8a39745b6adcb51}} (when using BiSeNet {{cite:012c2a2171f1aeea3a9ea990081a58bc666cf6f9}}). When using the DeepLabv3 {{cite:74becdde6c3905594da43616dd9ede36aef4ed1b}} based method, as shown in Tables REF , REF and REF , the improvements are even greater, achieving {{formula:e1d4b57c-94ad-4c87-b234-f489bcae56e8}} and {{formula:14e4116d-3dfe-4b2d-af37-0b169dd8d37d}} in terms of SSC on NYU and NYUCAD, respectively. Also note that our SISNet's output resolution (60, 36, 60) is on par with or lower than those of existing methods, indicating that our voxel-wise accuracy is much higher. Our design, which locally increases the resolution of instances during the intermediate stages of the network, achieves better results and does not increase computation cost significantly. In addition, we observe that our completion performance on instance classes consistently surpasses that of existing methods such as TVs, with gains of up to 10%. We attribute the improvement to the exploitation of the completed 3D scene semantics, which greatly boosts the performance of instance completion with well-preserved shape details.
m
5419c4cd7142f66f4be4d46b706f8be2
Recently, Mei and Patel {{cite:b2bbd2917b2586821b44d3ee306ddbb10a1fc91d}} proposed a simple method, called ElasticAug, that can model higher-order distortions due to turbulence. The mathematical model for this method is the same as that in Equation REF . The method uses Gaussian blur as the blurring operator and the elastic transformation {{cite:e144c1a78185be7aa1908f1ffc8220b1477d7def}} as the deformation operator. The elastic transformation displaces pixels using random motion vectors. Table REF contains the parameters used for the blur and elastic transformations in our experiments. {{table:511172c5-c474-40bc-aad8-c2761721b212}}
m
902dac372ee050e8cee6efb51dbd98a9
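The elastic transformation can be sketched as follows: a random displacement field is Gaussian-smoothed and used to resample the image. This is a NumPy-only sketch; the magnitude/smoothness parameters and the nearest-neighbour resampling are simplifying assumptions, not the cited implementation.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth2d(field, sigma):
    # separable Gaussian smoothing via two 1-D convolutions
    k = gaussian_kernel1d(sigma, int(3 * sigma))
    field = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, field)
    field = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, field)
    return field

def elastic_transform(img, alpha=30.0, sigma=4.0, seed=0):
    """Displace pixels by a Gaussian-smoothed random vector field.

    alpha scales the displacement magnitude, sigma its smoothness;
    both values are illustrative, not those of the cited method.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dx = alpha * smooth2d(rng.uniform(-1, 1, (h, w)), sigma)
    dy = alpha * smooth2d(rng.uniform(-1, 1, (h, w)), sigma)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # nearest-neighbour resampling at the displaced coordinates
    yi = np.clip(np.rint(ys + dy), 0, h - 1).astype(int)
    xi = np.clip(np.rint(xs + dx), 0, w - 1).astype(int)
    return img[yi, xi]

img = np.arange(64 * 64, dtype=float).reshape(64, 64) / 4096.0
warped = elastic_transform(img)
```

Because the field is smoothed, nearby pixels move coherently, giving the wavy, turbulence-like deformation rather than per-pixel noise.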
Here we present experimental results on MNIST and SVHN. For comparison we also report the results produced by {{cite:155f06cbcd57f73cb7124b1ba03a823515f5bbc6}}'s method.
r
a17879927a38d4fa1153faa4bc0c78d3
We choose the following set of parameters and initial conditions to study the film rupture: the initial film profile along y for fluid {{formula:056673c2-8923-4914-8e02-425ac7346d86}} is a step function of width {{formula:be52ae10-acd3-4aad-b068-a7b799a5370f}} in the range from 6 to 13 lbu, which drops from {{formula:70cc1275-2d5d-4472-821a-c0950f55bc4f}} (for y{{formula:7e17a5e1-2895-430b-8fd8-6d28a020644c}} ) to {{formula:93714c68-7416-41ea-bdfd-a6af9be04c11}} (for y{{formula:4bbc5cc8-d7d5-4c84-b241-712a47934c12}} ). Conversely, the density of fluid {{formula:7113a536-2285-4d45-9cee-67bd583f88c5}} rises from {{formula:2f28a34d-d3a6-4dbf-b323-c0d507e05bab}} (for y{{formula:def28289-2163-4f6f-850d-4f5de91e9dd6}} ) to {{formula:4609ac4a-650d-4dd3-94ae-00de72b35fad}} (for y{{formula:bc001683-c152-4fda-bbe6-2347426ce4e7}} ). The interaction parameter {{formula:0668ceff-a78c-4649-9441-d992d542fb6f}} ranges from 1.4 (the minimum value ensuring phase separation) to 1.7, above which the film is always stable and does not rupture. The range of {{formula:2a3329c9-269a-475d-8009-c202eb2a6307}} is directly related to the choice of {{formula:a9e795b5-4028-45ca-ae2f-96e43a116322}} and {{formula:22aad33a-7acb-4770-8fca-f4cdf176b576}} , as highlighted by the corresponding phase-separation diagram (see Supplementary Material, Fig. S2). In the absence of the wall, these values of {{formula:039c27b3-1c92-482a-8783-b9c7414d4970}} lead to interfacial widths in the range from 2 to 4 lbu (see Supplementary Material, Fig. S2). To ensure the stability of the simulation, we use wall-fluid interaction parameters {{formula:11d12b36-ca0f-4537-af42-9741c9349f91}} and {{formula:0bdb5a97-d343-40c7-b2f0-9ba9c7932bef}} in the range from {{formula:143d86ff-cdb7-409d-a50a-7817c633464f}} to {{formula:0774c92a-d5b8-4114-b696-fe871a14397d}} . 
{{formula:977d75e4-68a4-4bd9-85c7-e6e2740466e4}} is directly related to the fluid-fluid surface tension {{formula:ea13be65-3f28-4b69-8b15-b42ec9c751c7}}  {{cite:63ef8ceb060c03e00800cf3e9c13263858b36cb3}}, which can be measured via Laplace experiments (see Supplementary Material, Fig. S2). The definition of the wall-fluid interactions in Eq. (REF ) suggests that {{formula:08fe3a2a-ab38-4884-9127-a598309063cb}} and {{formula:78a7cff4-084c-41e9-8479-c6a9e3d33f35}} are related to the wall-fluid interfacial tensions {{formula:c7aafb3d-0bf7-4ba9-b2eb-eaeb28a28d86}} and {{formula:14604111-db25-4ec0-9bd7-7272651679bd}} , which, however, cannot be measured directly. Instead, for a given value of {{formula:e18fa562-09c9-44a1-bc52-152b6ad2178b}} , their difference {{formula:7cb88ae8-1861-4c2b-a4e3-875fd92c7cb2}} tunes the value of the equilibrium contact angle {{formula:5b4f35d9-a790-4856-b3bf-46e29e56cb5f}} . The latter is expected to be proportional to {{formula:925b2aff-a01b-49b3-94c2-5eb563b2cec4}} , as observed by Huang and coworkers {{cite:3f60ec443b18d40690a80d69699e14f5664d2055}}. Hereafter, all dimensional quantities will be reported in lattice Boltzmann units (lbu). All simulations are performed on a domain of size {{formula:5fdff708-1890-46a1-9081-8c0c656199a5}} lbu along the x-direction and {{formula:83b0fa37-3e7f-4e60-8b03-7627f56da26c}} lbu along the vertical y-direction.
m
8ca97e027f41edcd0795697f6c3d967c
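The complementary step-function initial profiles described above can be sketched as below. The grid length and the high/low density values are illustrative placeholders, since the actual values are given only symbolically in the text.

```python
import numpy as np

def step_profiles(ny=40, y_lo=6, y_hi=13, rho_hi=2.1, rho_lo=0.1):
    # Complementary step profiles along y: fluid A forms a film of
    # high density between y_lo and y_hi (the 6-13 lbu band above);
    # fluid B fills the rest. rho_hi/rho_lo and ny are placeholders.
    y = np.arange(ny)
    film = (y >= y_lo) & (y < y_hi)
    rho_a = np.where(film, rho_hi, rho_lo)
    rho_b = np.where(film, rho_lo, rho_hi)
    return rho_a, rho_b

rho_a, rho_b = step_profiles()
```

By construction the two profiles are complementary, so the total density is uniform along y at initialisation.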
Over the past two decades there has been considerable interest in black hole phase transitions in anti-de Sitter space {{cite:0f9f251db4198e4ec6f5c2e308a8797f750c858d}}, {{cite:ce04cf5752474d9cfc70842899dc86e3fe990223}}, {{cite:31509f8c21be4a0a2a64538efff27d7f516b5bcb}}, {{cite:a2af6e373fb60a9f3ec9014d343b056e92875163}}. Motivated by the interpretation of the cosmological constant as a thermodynamic pressure {{cite:69efc0cf365e328947e1128317708101fbd41bdb}}, {{cite:2c5f0b0803f44e8d8140110fa64d97d865f5bf86}}, four-dimensional charged AdS black holes were found to exhibit a small-large black hole phase transition, fully analogous to the liquid-gas phase transition of a Van der Waals (VdW) fluid {{cite:a2af6e373fb60a9f3ec9014d343b056e92875163}}. Later studies also showed that this small-large black hole phase transition is ubiquitous, taking place in most black hole systems {{cite:d5790a7821ed772ee6f6045bb7b6c95352a25884}}.
i
cf53306297523b61d7e67bdae1249818
Even though our comprehensive analyses show improved attribute consistency, this study has several limitations. First, the training data require annotations of attributes. Fortunately, CelebA provides 40 annotations for facial images. To label attributes of new datasets, semi-supervised learning largely reduces manual labeling efforts. Second, manipulation of attributes may be biased by training data; thus, reconstructed images deviate from originals. The second row of Fig. REF shows that the manipulation in the positive direction of "wearing lipstick" also enhanced female attributes. However, when manipulating the image toward negative attributes such as "female", "not wearing lipstick", and "mouth closed", the results might be biased because the original image was consistent with these attributes. Fig. REF also shows that a female with long hair became a male with short hair after manipulation, even though the original stimulus was a male with long hair. This results from the co-occurrence of "female" and "wearing lipstick" and of "male" and "short hair" in the training data. Thus, although the attribute manipulation following {{cite:bffb306f626039e98a2a7c453274190e019873ba}} applied conditional manipulation to isolate one attribute from another, some entangled attributes are difficult to manipulate independently. Third, images reconstructed from fMRI data corresponding to the same visual stimulus were different (Figs. REF and REF ). To the best of our knowledge, previous visual reconstruction studies have not attempted to solve this many-to-one problem. However, we speculate that as the number of attributes to be manipulated increases and the performance of the fMRI-based attribute classifier is improved, the similarity among reconstructed images corresponding to a single stimulus will also increase. Despite the limitations of this study, our results indicate that the proposed framework successfully improved attribute consistency.
d
909f3d0f5913cf4c7c47a7d57f79e325
Although we can obtain the spectral convolution in Eq. (REF ), note that the convolution kernel in Eq. (REF ) is only a {{formula:5e05c276-b2bb-4270-a08d-5c6e0bb5fb00}} -localized kernel: each iteration aggregates nodes at most {{formula:50a2f508-43d6-40c5-a736-a3d497832ccb}} hops away, which restricts the flexibility of the kernel. Furthermore, the initial pre-defined hypergraph structure is not always the optimal one for the specific downstream learning task. Notably, it has been shown that GCNs that are {{formula:5ae04160-9a7e-4061-9f1f-50f32826cbd7}} -localized and topology-fixed actually simulate a polynomial filter with fixed coefficients {{cite:33b724c455e1259f102a00b00211eaea538fef47}}, {{cite:d93076f442efb4a64d988d3d34dec95b7bdad935}}. As a result, existing techniques {{cite:7ad235f280cb6e62240b342ca6eedf07b7087606}}, {{cite:ed46d6e034a0eab2de04350bba1e000a8678406d}}, {{cite:14a3c097f6f9d76b84fd899588f44a979154620e}} may neglect non-local information and fail to obtain high-quality hypergraph embeddings.
m
36b8da64a079fde009524f0dfca36f12
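The locality restriction can be made concrete with a fixed polynomial filter on an ordinary graph Laplacian (a simplification of the hypergraph case, which would use a hypergraph Laplacian instead): with K = len(theta) - 1, a node's output depends only on nodes within K hops.

```python
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2} for an adjacency matrix A
    d = A.sum(axis=1)
    dis = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - dis[:, None] * A * dis[None, :]

def k_localized_filter(A, X, theta):
    # Polynomial filter sum_k theta[k] * L^k @ X with fixed coefficients;
    # with K = len(theta) - 1, each output node only sees K-hop neighbours.
    L = normalized_laplacian(A)
    out = np.zeros_like(X, dtype=float)
    P = np.eye(len(A))  # L^0
    for t in theta:
        out += t * (P @ X)
        P = P @ L
    return out

# Path graph 0-1-2-3; a K=1 filter cannot reach nodes 2 and 3 from node 0.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
X = np.zeros((4, 1)); X[0, 0] = 1.0
out = k_localized_filter(A, X, theta=[0.0, 1.0])
```

The zero entries at nodes 2 and 3 illustrate why a short, fixed-coefficient polynomial filter cannot model non-local interactions.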
Early experiments that used human annotations in place of programmatic reward used simple domains {{cite:8f3b7994cdbec900280c714a2cd81eed767c1f8d}}, {{cite:67abea24d13b7506f49e259110623898331d6200}}, {{cite:d5793dcca74efd0d6edd404edcf74a95633e3135}}, {{cite:dd484f33a869f1fe334566c8b03eb5ccf6ecffa5}}. There has been a recent surge of work in more complex settings, bringing forth the techniques into the realm of deep RL {{cite:2de7c09abc2b037e20d824d8fdd6bef77f9879ac}}, {{cite:bbe32102df10ffdb627f8b22dfabdd4d905d1376}}, large language modeling {{cite:5bd6f437513107816e7a421c6a2a9dc17cc395f6}}, {{cite:fb2dadc20aa0faaf742f2d5bf356a9c27cd68d67}}, {{cite:55d8eeeecf51562090187ce5beea654b84774e95}}, {{cite:3b049e0b1c55c8351e5dac8925dd1a42ab71eeb9}}, {{cite:0fa20b8f9f0169b25aa2eccb2040b02603dbf441}}, and robotics {{cite:b68d1653aa0439058d447c59d9df91644dd0eaa4}}.
d
56251aa29b51cb7fb2fa51bf5b729603
With the worldwide boom of E-Commerce (business-to-client), research in recommender systems has become one of the top priorities both for academia and industry {{cite:7f98f93a63203bbb350c5450cf8d63bb4ff5c77a}}, {{cite:7cf582bc54a7a5909a142173f12783a5f7dcde72}}. Recommender systems benefit both the business and the client. Unlike a physical marketplace, online portals have a virtually inexhaustible collection of items. It is a huge task for the user to sift through all the options and buy/rent the one he/she is satisfied with; thus, recommender systems assist the buyer/client with tailored suggestions. For the E-Commerce portal, without proper recommendations the user does not purchase items; hence the portal loses the business opportunity and thereby prospective revenue. The initial days of recommender systems saw the application of content based filtering {{cite:aee765ece385ebb3e4d755f3f35d17aa2ab4a9e5}}. Content based filtering was a well-established approach in information retrieval – this was in the early 2000s. However, it never became popular; several reasons are briefly mentioned in {{cite:18e087312a83a5decad6cd8d7791e18bfcdf50f6}}, mainly owing to the requirement of user intervention in the definition of ‘content’, i.e., one needed to find out the exact attributes for matches between user and item. For example, for books the factors might be the author, genre, publisher; for music, they might be the singer, genre; for movies, they can be anything ranging from the actors to the director to the production house to the genre. One did not know whether the expert-defined list captured all the possible variability in the chosen attributes. If the list of attributes is too small, important factors would be missing; if it is too large, one might end up capturing noise in the data. 
During the same time (the early 2000s), more abstract yet powerful techniques based on latent factor modeling and representation learning started gaining momentum. Instead of explicitly designing the attributes, the latent factor model makes an abstract assumption: it assumes that the user’s choice of items is guided by several latent factors. Instead of designing these factors (as in content based filtering), they are learned from the data. Eventually, matrix factorization {{cite:49653da597891980e19874fbdc303d309ad6c2ba}}, {{cite:4b5c9af4d10b3c69a3d503b95eb193fd3d832872}} became the most popular technique for the latent factor model, especially after the announcement of the famed Netflix competition {{cite:d0ec4354d55325d94ec973d6f515de0f50b2aed6}}. In the late 1990s and early 2000s, neighborhood-based approaches gained popularity in collaborative filtering. They were interpolation-based techniques. In the user-based approach {{cite:ebd7e51c8c8a9f600e5cd8d8ddfe4082be59b973}}, similar users were selected to constitute the ‘neighbourhood’, and the ratings of these users were used to impute the missing values. The same could be done from an item perspective {{cite:d9d0bdc6f3d876ef0ae90be36fde23e665b0cff1}}. These approaches were simple and easy to interpret. However, they were heuristic: the interpolation weights were defined rather arbitrarily. The neighborhood-based methods yielded significantly lower accuracy (at least on the benchmark databases) than the more powerful albeit abstract latent factor models. Even though the matrix factorization technique was the popular choice for latent factor based collaborative filtering, a seminal work {{cite:c9722d2ad0dbfd0d9cbef0c346e55751c05996cc}} showed the possibility of using another representation learning/latent factor approach for collaborative filtering: the restricted Boltzmann machine (RBM). 
There has been hardly any work on RBM-based collaborative filtering since the publication of {{cite:c9722d2ad0dbfd0d9cbef0c346e55751c05996cc}}. The basic RBM formulation used in {{cite:c9722d2ad0dbfd0d9cbef0c346e55751c05996cc}} is an unsupervised one; it was based only on the users' ratings of the items. However, in a real system, users' and items' metadata is always available. User demographic information such as age, occupation, and gender is collected as a part of the sign-up process. The item information is also collected during its registration. The prior RBM-based formulation could not make use of such auxiliary information. In this work, we propose to improve upon {{cite:c9722d2ad0dbfd0d9cbef0c346e55751c05996cc}} (in terms of rating prediction accuracy) by exploiting the user demographics and item metadata. In recent times (the last few years), researchers have started exploring the possibility of using another powerful representation learning technique for collaborative filtering: the stacked autoencoder. Both stacked autoencoders and deep belief networks (built from layers of RBMs) are used to train deep neural networks. Almost all studies in autoencoder-based collaborative filtering are minor variations of each other. The basic autoencoder formulation is used directly in {{cite:61c331a8d082fc8f42032b2e18c48d29ca6594cc}}, {{cite:9e73703cb84e4bf88e052daf11ad411d7946e9ef}}, {{cite:62df6e4cd04d9d172260f05d6bd101f222aa49c4}}. In {{cite:48c6e7e6e03153afd6bb1f479727f6d2f2edc3c0}}, baseline prediction is used along with the ratings in the autoencoder framework; the baseline values are simply appended to the available ratings so that the autoencoder learns to reconstruct both the ratings and the baseline values. A combination of a marginalized denoising autoencoder and probabilistic matrix factorization is used in {{cite:e562112f6c1e9d07758515b71f412b416d987ba9}} for rating prediction. All representation learning approaches are inherently nonconvex. 
Therefore, the associated theoretical problems are ever present. There is an elegant convex solution to the matrix factorization approach for collaborative filtering, called matrix completion {{cite:02e2d3455072d158725a974311b56a3a769fe6d8}}, {{cite:f03f75cdae716c2d182c95a37785feceb2aa18c0}}. It is a convex variant that directly solves for the missing ratings instead of going through the intermediate step of determining the latent factors for the users and the items. However, since this is unrelated to the representation learning/latent factor model based approaches, we will not discuss it in detail. The rest of the paper is organized into several sections. Background on collaborative filtering is discussed in the next section. The proposed formulation is detailed in Section 3. Experimental results are shown in Section 4. Conclusions of this work and future directions are discussed in Section 5.
i
1fc7ff76d1afee67ecdbbaa10243055d
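The latent factor idea discussed above can be sketched as plain SGD matrix factorization on (user, item, rating) triplets. The factor dimension, learning rate, regularisation, and toy data below are illustrative choices, not those of any cited system.

```python
import numpy as np

def factorize(ratings, n_users, n_items, k=8, lr=0.02, reg=0.02,
              epochs=1000, seed=0):
    # Learn user/item latent factors by SGD on the regularised
    # squared error  sum (r - U[u].V[i])^2 + reg * (|U|^2 + |V|^2).
    rng = np.random.default_rng(seed)
    U = 0.3 * rng.standard_normal((n_users, k))
    V = 0.3 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Toy (user, item, rating) triplets; a real system would stream these.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0),
           (1, 2, 2.0), (2, 1, 1.0), (2, 2, 5.0)]
U, V = factorize(ratings, n_users=3, n_items=3)
```

Note that the factors are never hand-designed: whatever structure explains the observed ratings emerges from the optimisation, which is exactly the contrast with content based filtering drawn above.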
We evaluated our model on the Human3.6M {{cite:5a1a22e2f9eb38f255ff82a16a6e25ffc42710d8}} and MPI-INF-3DHP {{cite:36030a3f3f276729cb2a1a60092a0053af2a7542}} datasets and report quantitative results. In addition, we show qualitative results on the in-the-wild datasets MPII {{cite:ce1f00b35aa9a43c9966f6f76402a22393ecebfc}} and LSP {{cite:a2edf9618c928c1d9575bedfabfe6c6b465bf86c}}, where 3D ground-truth data are not available. {{table:b82931c7-9c59-43de-b581-a24539e4ed5c}}
r
6f64d770ee1977406296e66c69ef717a
Details of the electronic structure calculations in the presence of Hubbard-{{formula:958c09fc-7fb4-4766-9631-9f01fbdec6c7}} , SOC, and zigzag magnetic ordering are presented in supplementary Figs. S5 and S6, and Table S5. The full orbital content of the energy eigenstates was characterized using a combination of Quantum-Espresso and Wannier90 softwares {{cite:16a82415b3a4264c3e409522aa6f1a0a351f07af}}, {{cite:263b60a1063e14268d2add9ab7babc74deadbc7f}}, {{cite:0224078956956ee232ce57a295d79972198f94eb}}.
d
6ccf18600826d784c888aab0fde02ec9
MuJoCo {{cite:2c8fd5f94508f2925e55d357486d280d514fe968}} and PyBullet {{cite:32f8f5eae88f2f79986960837ceb550512a897cf}} are physics engines that facilitate research and development in, e.g., robotics. They provide fast and accurate simulation of rigid multi-body dynamics and control. Such simulation tools make it possible to scale up training for advanced contact-rich environments. OpenAI Gym {{cite:625ad36ac4fe5b0253664f7296ed518719c8501d}} is an open-source interface to RL tasks. The gym library provides a suite of tasks for getting started with RL. Besides, it defines a standard interface between the agent and the environment: the agent sends actions to the environment and receives back an observation, a reward, and a flag indicating whether the episode is done. This interface definition has become standard in RL research and development.
i
7747ad568a9e52d43eaf7a148e3935fc
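The interface just described can be sketched with a toy environment. `CoinFlipEnv` and its reward are illustrative inventions, not part of the gym library, but they follow the reset/step contract the text outlines.

```python
import random

class CoinFlipEnv:
    """A toy environment exposing the Gym-style interface:
    reset() -> observation; step(action) -> (obs, reward, done, info).
    An illustrative stand-in, not an actual gym environment."""

    def __init__(self, horizon=10, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # initial observation

    def step(self, action):
        self.t += 1
        obs = self.rng.randint(0, 1)
        reward = 1.0 if action == obs else 0.0
        done = self.t >= self.horizon
        return obs, reward, done, {}

def run_episode(env, policy):
    # The canonical agent-environment loop defined by the interface.
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

env = CoinFlipEnv()
ret = run_episode(env, policy=lambda obs: obs)
```

Because every environment exposes the same four-tuple, the same `run_episode` loop (and hence the same RL agent code) works across tasks unchanged, which is why the interface became a de facto standard.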
One interesting open problem for untrained reconstructions is the network choice. In our experiments, we utilized a convolutional network with an encoder-decoder structure and found that this worked well; however, our network tended to blur out high frequency features and there may be other architectures that could potentially provide better priors. For instance, deep decoder {{cite:d7acc14afc42029f282ddc80a7174b098fec916d}} has been demonstrated for similar tasks, but we found that for our application it required more iterations to converge and was outperformed by an encoder-decoder structure, demonstrating that the choice of network architecture is important and application-specific. Although our network worked well for photographic scenes, other network architectures may be more appropriate for other types of images, such as fluorescent biological targets which may have very different statistics than photographic scenes.
d
cf8e123ad14874b9e5d4431c78672942
Smartphones and the casual photography they enable have brought an increased demand for image manipulation techniques such as photo enhancement. Image enhancement is one of the fundamental problems in computer vision, starting with methods like histogram equalisation for contrast enhancement. The Retinex theory of color vision {{cite:8a851f2d4f040de446b5a7bc6be6338916eec973}} by Edwin H. Land inspired many methods, e.g., {{cite:a54e425007f794d811e4f05a86ce883fdd091cbc}}, {{cite:7942292e4d75400b78353382ba887daa8801d92f}}, that consider images as the pixel-wise product of reflectance and illumination. These works treat the image enhancement problem as an illumination estimation problem, where the estimated illumination component is used to enhance the input images. However, they produced inferior results because of the high non-linearity across channels and the spatial sensitivity of colour in the image.
m
777cf457796fb098a5e135e8ab9fa873
There are a number of promising data-driven speech representations. Some directions include self-supervised contrastive learning {{cite:fc337cc55a083257845ed913abec2e3d97ee0767}}, {{cite:728152ec0b66bf05034abe232859a712eeaedf98}}, {{cite:d7c89e61798beec0c5b5cf1adce0e636b51ca875}}, predictive coding {{cite:3a34e5ac1cc2f469b6c91bd2776b7217040210eb}}, {{cite:c759b5cbeb31c236b13a813cff0c3ed9f1e38148}}, masked-unit prediction {{cite:33b760e4c59c219c7b812412ac768ddad6eb56ed}}, multi-task learning {{cite:acab7694cbf16da373f57d0ee7572cb5e78575ed}}, multimodal coincidence {{cite:ebbe658726a6a2faa96aafb3c422dd9bdd50fe75}}, {{cite:cb6bb1a86e710a73c12befad0d9628278c123b23}}, and intermediate representations from a supervised task {{cite:0be726e17108b15a2018d3e5cc535f5ddf7daaf4}}, {{cite:93c0c0491f6270326155c2bc6ca250af052aabae}}. One of the most promising objectives for representation learning for speech recognition was proposed in the recent Wav2Vec 2.0 {{cite:5973bfc58ba4a655bb3f65d2204cf368af032523}} framework, which combined Transformers {{cite:db14e6a61d91b1f9ea476d75976ded93a1313819}} and a self-supervised contrastive learning objective {{cite:c759b5cbeb31c236b13a813cff0c3ed9f1e38148}}. The Wav2Vec 2.0 training objective was subsequently combined with more powerful Conformer architectures, producing large improvements in semi-supervised speech recognition applications {{cite:b63c70a441bf17d2bddd3e8c7d8055533356372f}}, {{cite:45184f52b66055b2f649e0d3868607a4bf24d131}}, {{cite:aeec0b42e689b4c3024d4660e45dc0d77137ccbf}}. This paper explores the use of these Conformer-based models to define fixed representations for non-ASR speech analysis and paralinguistics tasks. To fully evaluate the potential of these models, we evaluate several combinations of model size and pretraining dataset.
i
02c8cf75b60fdc1143c50c050ae19545
Although the developed TSA approach is promising for transient analysis, its performance relies on the training data, as for any other data-driven model. Here, as the transient data are generated from simulations, the training dataset is ensured to be balanced, i.e., the numbers of stable and unstable transients are close. However, real-world transient training datasets are imbalanced, i.e., the ratio between the unstable and stable events is less than 5%. This imbalance occurs because unstable transient events are high-impact but low-frequency events. An imbalanced training dataset may adversely affect the classification accuracy. To address the problem of imbalanced data, two strategies, i.e., data resampling and redesigning the classifier algorithm, are suggested to improve the HGAN-based TSA method. Data resampling methods, such as oversampling and undersampling, can rebalance the ratio between the two types of events {{cite:7e29ff805982964415cf2609ab029568d3399c3a}}. In addition to resampling the training data, the classifier algorithm can be adjusted such that the learning process is more sensitive to the correct identification of unstable events. For instance, in the cost-sensitive method introduced in {{cite:a508f4b99e5ec2141c299e8933d168fc61163bed}}, the misclassification cost is modified such that a higher cost is assigned to a wrong prediction of the unstable events. {{figure:aba60688-ff21-4fc6-9b99-5f693d406334}}
d
746bca78fdb6c3927a6168d0bb8b8e73
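Random oversampling, one of the resampling strategies mentioned above, can be sketched as follows; this is a generic NumPy sketch, not the cited implementation.

```python
import numpy as np

def oversample(X, y, seed=0):
    # Randomly duplicate minority-class samples (with replacement)
    # until every class in the binary label vector y is represented
    # as often as the majority class.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        extra = rng.choice(members, size=n_max - len(members), replace=True)
        idx.append(np.concatenate([members, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

# A 9:1 imbalanced toy set (e.g., stable vs. unstable transients).
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 9 + [1])
Xr, yr = oversample(X, y)
```

After resampling, the classifier sees the rare (e.g., unstable) class as often as the common one; undersampling would instead discard majority-class samples to the same effect.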
Throughout the paper {{formula:1cc7b9d4-7adf-4aeb-893e-17acdacd1ac1}} is a commutative Noetherian local ring and all modules over {{formula:88f9eaee-f1a6-45bc-8f8e-42c2907cbf00}} are assumed to be finitely generated. We start by recalling several definitions and terminology from {{cite:7b6a6635ff286dd2ed9e0515139a79cfb4ba76ce}}, {{cite:1a588f4d169413774f2ca3b3772e252a04d956b9}}, {{cite:309678ccd5937cbe3db8828d3566803bced049ae}}.
r
0fc2e0710fad213b178221d791cbd249
The AdS-CFT correspondence {{cite:afb82540289997f208d8071ffaffdca1b06ed44e}} seems to indicate that quantum mechanics should work as expected for a description of black hole evaporation. That is because one can have evaporating black holes in a spacetime that is asymptotic to anti-de Sitter space. The boundary theory that is dual to the bulk theory in anti-de Sitter space is perfectly unitary and defined in a way that is non-perturbative, {{cite:61209766fa0263f69932fe2550961782b8aa1237}}. The puzzle then is to determine where the semi-classical picture is wrong. Whatever the problem is, one should be able to rely on semi-classical arguments as long as fields do not approach Planckian scales. That seems to indicate that quantum gravity effects are important but only as one gets close to the singularity.
i
a13e743529287265f06d06d4cc94de0c
Probably the most interesting outcome of our analysis is the existence of structures mixing complex and paracomplex structures. As we have demonstrated, these mixed structures arise very naturally from (a pair of) pure spinors of a given real index. Such more general structures are only possible when the signature of the metric is not definite, and so it may seem that they are not of interest in Riemannian geometry. However, metrics of split signature {{formula:f0aa80dc-b824-41ba-b7a6-f4b81146e43c}} do appear in the context of generalised geometry {{cite:7c69ae32e53a58ac16cf53ce1d744b49d9402ab9}}. Thus, a generalised complex structure on a manifold {{formula:d9516a87-0aa5-47ee-b1f4-9bf6eac24e19}} of dimension {{formula:40d980ad-6273-4fd4-8131-af05436125f0}} is a complex pure spinor {{formula:9fea2caa-9831-41e9-a69d-55bbf3c407d2}} of {{formula:3180a5d3-62d1-4f0e-89b6-e4a326c9e797}} (which is a complex polyform on {{formula:9ea8f3db-4a77-4559-88c1-be46078e79cf}} ) that has the property {{formula:13f21d84-be7f-4219-bc04-326da2970ffb}} (and thus defines an almost complex structure on {{formula:292b63e6-b59c-432f-9ad1-499dd4c44052}} ), and which has the integrability property that the null eigenspaces of the complex structure defined by {{formula:b714e277-a82b-4d89-b810-5c2f8a1d8753}} are closed under the Courant bracket. The integrability condition can be shown to be equivalent to the condition that the polyform {{formula:e5d81a8b-45e7-4bd5-8dfb-27109ad59746}} is closed on {{formula:f7fb664a-7b13-4e37-be92-7a65c22fe0b1}} , see {{cite:7c69ae32e53a58ac16cf53ce1d744b49d9402ab9}}. 
Given that there are several different types of pure spinors of {{formula:6bb85947-6765-4865-abd7-7833e7ccad05}} (with different real index), with a pair of complementary pure spinors {{formula:40c705d5-cad8-4846-9ce3-deea71f2895d}} defining in general a structure of a mixed type on {{formula:5443d019-27e4-465b-a071-c084d58f42e2}} , it would be interesting to study the arising more general types of geometric structures on {{formula:436b1594-b713-4961-b89e-6bda79a984a5}} , thus further generalising the generalised geometry of {{cite:7c69ae32e53a58ac16cf53ce1d744b49d9402ab9}}.
d
0a48692464a7613a128acbaf1bef594c
A limitation of the proposed method is that the single whole-heart template used may not capture the full geometric variation observed clinically. In particular, the current template assumes four separate and distinct pulmonary vein ostia and thus may not fully capture pulmonary veins with alternate branching patterns, which can be important for preoperative planning of pulmonary and cardiac surgery {{cite:9ebc7875c9754e27eaff8ffa5d7f2731d9d2ceed}}. Similarly, the template used would not be suitable for cardiac malformations such as those of single-ventricle patients with congenital heart disease, since the structures of the heart differ significantly from our current training template. Nonetheless, this framework could still be utilized if sufficient training data of, say, single-ventricle patients were available and a corresponding single-ventricle mesh template were used. To better handle the above applications, in future work we aim to add a template retrieval module to automatically select a template that best suits the input. Furthermore, implicit shape representation {{cite:cfdb82083131f43c061b2492e4599b48338174e0}} can be combined with our learning-based shape deformation approach to predict cardiac structures with different anatomies.
d
7739dd30bff9c29c707fad62f08a8826
For each {{formula:14a5976f-dcdd-45a8-9936-22832e01c137}} , put {{formula:420dfe79-2544-48c1-9dbe-e6604b7dffcc}} and {{formula:9dd8b006-736e-4a3b-89ac-47210fae31ab}} By the {{formula:9f207ce5-72bf-4678-8574-bbbd9004ff98}} -inequality (see, e.g., {{cite:bb8073ecb8017a6357fcd14691948fb795770eb2}}) and (REF ), we have {{formula:2174f560-68ce-4129-abd7-cfdcaba42c93}} {{formula:206996f2-43d1-4bfa-9d63-c22495916523}} {{formula:026eecfa-6dd3-4fad-aa18-7c9004ff3281}}
r
515f4cb0d0755e8902684403417df75d
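For reference, a standard statement of the inequality invoked above (with the usual constant) is:

```latex
% The c_r-inequality for real or complex numbers a, b and r > 0:
\[
  |a+b|^{r} \,\le\, c_{r}\left(|a|^{r}+|b|^{r}\right),
  \qquad
  c_{r} \;=\;
  \begin{cases}
    1, & 0 < r \le 1,\\[2pt]
    2^{\,r-1}, & r > 1.
  \end{cases}
\]
```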
Our results suggest that including MD simulations in the training process can be considered as a means to improve the reliability and accuracy of NN potentials. Active learning NN potentials {{cite:2ce0030d464974b2417afaa363ddfddbddfa5cb3}}, {{cite:e2017b3cfbfe7253efc35705f1265f7b3f5ea0e4}}, recognized as a major building block in achieving stable and transferable models {{cite:9fed4d22d0655880ce2be604e952271a9c8ca891}}, {{cite:e5aca45686633ccabec99572f555c5002ee17e3f}}, {{cite:0312a979eb3f829a127b95059f9c22716bde07b5}}, can be similarly interpreted as an incorporation of MD simulations into the FM training scheme: Performing MD simulations and screening visited molecular states for high-uncertainty configurations allows to augment the data set iteratively in phase space regions that are reachable by the NN potential but still sparsely represented in the data set. Alternatively, MD simulations can also be inserted directly into the training pipeline {{cite:5f2099fe926ce784bebba248ed5c4704e90afd1a}}, {{cite:ce2325a32429a92c2d27815736bba81dbc9908db}}, {{cite:c3118cc68b17d498a03074537a3e87eb4df0c8ef}}, {{cite:460df1a88e2e636c90cb039e6a36a4f3e7052023}}, {{cite:0a2d7c25e2692875301729a379cce8e337a97b41}} using auto-differentiable MD codes {{cite:ce2325a32429a92c2d27815736bba81dbc9908db}}, {{cite:460df1a88e2e636c90cb039e6a36a4f3e7052023}}. We expect that the benefits of using ML in simulations and, inversely, simulations for ML training will continue to drive the ongoing synthesis of ML and physical simulations in molecular modeling and beyond.
d
c1fe71190df0f42cd2a9d7f95aa18cfc
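The active-learning screening step described above, flagging configurations on which an ensemble of NN potentials disagrees, can be sketched as below. The ensemble-standard-deviation criterion and the threshold are illustrative; `models` are assumed to be callables returning per-atom forces.

```python
import numpy as np

def screen_high_uncertainty(configs, models, threshold):
    # Keep configurations whose ensemble force predictions disagree:
    # max per-component standard deviation across models > threshold.
    selected = []
    for x in configs:
        preds = np.stack([m(x) for m in models])
        if preds.std(axis=0).max() > threshold:
            selected.append(x)
    return selected

# Two toy "ensembles": one that agrees, one that disagrees everywhere.
configs = [np.zeros(3), np.ones(3)]
agreeing = [lambda x: 2.0 * x, lambda x: 2.0 * x]
disagreeing = [lambda x: x, lambda x: x + 1.0]
kept_none = screen_high_uncertainty(configs, agreeing, threshold=0.1)
kept_all = screen_high_uncertainty(configs, disagreeing, threshold=0.1)
```

Configurations flagged this way would be labeled (e.g., with ab initio forces) and added to the training set, iteratively densifying the data in phase-space regions the potential visits but is uncertain about.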
We note that polaronic transport is favored in materials with strong SOC that lack inversion symmetry {{cite:2246dfa6957f19f7b47b7fcbf56b2931c614ab02}}. To gain insight into the relevance of SOC, we further performed first-principles calculations using density functional theory. We applied the WIEN2K {{cite:dd873235773ea1ef8c0dce16bbe08a6f0efcf3e4}} implementation of the full-potential linearized augmented plane wave method in the generalized gradient approximation, using the PBEsol functional {{cite:e893d2e5e765a7974d334751e16df6844c5d72d3}} on stoichiometric IrSbSe {{cite:25bb3529eb9605681c62923f4ed18e27bbd52435}}. SOC is treated within the second-variational method. The basis size was determined by {{formula:555537e7-c485-41c2-b2cc-6a047e9b5816}} = 7, and the Brillouin zone was sampled with a regular {{formula:10ed0252-b8e9-4bee-b4a8-82c141c0a5d4}} mesh containing 176 irreducible {{formula:2c7a3963-13af-441e-b65d-6744a19b8fa1}} points to achieve an energy convergence of 1 meV. As shown in Fig. 6(a-d), the calculated band structure and atom-resolved density of states (DOS) indicate that the system is a nonmagnetic semiconductor. There is a band gap of about 1 eV for the calculations without SOC [Fig. 6(a,b)] and about 0.9 eV for the case with SOC [Fig. 6(c,d)]. The total DOS (black line) rises rapidly from the band edges, with the band character being mainly of Ir {{formula:de07df26-9df8-466c-89e7-271653e44221}} orbitals. However, the contributions from Sb and Se {{formula:863239f1-3793-42cc-990d-4e7dd15b7d4f}} orbitals are significant in reducing the Ir {{formula:f760907f-71b7-49bb-8ff5-84861274e286}} orbital weight, compared with the dominant character of the transition metal in CoSbS and FeSbS {{cite:743ae57ff0058ef4c04ff599dc8b5a219f6435e4}}. This weakens magnetic instability, if any, upon charge doping. The effects of SOC are seen more clearly in the band structures [Fig. 
6(a,b)], where the SOC induces the band splitting is about 0.2 eV for the valence bands around the Fermi level. The band-structure plots also suggest that the valence and conduction bands are substantially massive. These are similar to the results for IrBiSe {{cite:c8fb596c5faa3ed0f306886eab99c30da786ec7c}}, where the SOC splitting of about 0.3 eV was reported. Thus, the split band in stoichiometric IrSbSe is also expected to be fully spin-polarized with 3D chiral spin texture {{cite:c8fb596c5faa3ed0f306886eab99c30da786ec7c}}. The present off-stoichiometric polycrystalline sample with a much smaller band gap suggests the occurrence of in-gap states. {{figure:71230fc3-d70e-4110-a04f-768400239015}}
r
b022b6678f1d08b3aaee5260ac8e59ce
The TM charge accumulates due to the flux of energetic charged particles in the space environment. Cosmic rays with energy above {{formula:9352ab7f-28a8-4ced-b3ac-d3c3b547fb62}} 100 MeV/nucleon are able to penetrate the shielding of the spacecraft and deposit charge on the TM {{cite:a4610056532976712d0cb0c19a18eba9e58625b5}}. The incoming spectrum extends many orders of magnitude above 100 MeV/nucleon making the problem suitable for simulation by high-energy physics simulation tools. During the development phase of the LISA Pathfinder mission, a number of studies simulated test mass charging due to galactic cosmic rays and solar energetic particles in LISA and LPF {{cite:e0ca96f4b8be8c637d389f2f8e5089698c465ee2}}, {{cite:5a6da3b708ad9deb560bdd0c96d0b1cb40d41584}}, {{cite:35f9ba8a6ff874cd43e6ba0ee2ba10b077f79738}}, {{cite:00484a3d25f30a7ad7f068ea9c49042b6cb5a5ec}}, {{cite:97690e628852e1bcc507a500feba9d82983d2039}} using Geant4  {{cite:d3db7b088888f0617986efebb64917b8a6060ec5}} and FLUKA {{cite:b17f255ac711301d701c90387e60cdbe9f7f1713}} high-energy physics simulation toolkits. This paper updates and extends the previous modelling using Geant4 to include a detailed assessment of the low energy electron population needed to explain new LPF results which show a dependency of charging rate on the TM potential {{cite:75af6b6379759449e1c3c25eef3142e78b91653a}}.
i
e746a6400e1997dd4ab0bc45cf931118
where clearly {{formula:bcddc635-b834-4320-813f-9d095c26c2ed}} and {{formula:2867d07d-e288-4a92-bb49-b029fc02e9c7}} . By {{cite:7551e9d0ad28794cbf18296ce2a716774e94eb47}} there exists a constant {{formula:c9766c69-bdb6-415c-bf76-fd850a95ac11}} such that {{formula:36f33f62-1386-4d94-b2f3-3dd171e23e6e}}
r
bcf19b288850ddf4da3590de453f2d6f
In our particular case, we seek to continuously maintain a dictionary {{formula:1b52abce-086b-4569-b4c6-2b0900a6841c}} such that all incoming data points are well modelled by this dictionary. Therefore, the generic 2-class formulation in (REF ) reduces to (REF ) in our case, since all data points must be associated with the {{formula:72dd8fb2-c147-4ad5-9ebd-adfe6eb44285}} class to be inliers {{cite:13105c9c3a3e325f21362d2e9d3052c491f7a251}}: {{formula:fce38027-8886-4a88-a498-1ab87bd6618a}}
m
0d37d2da709cbfd6b7db1695e6f09853
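The inlier criterion implied above (a point is an inlier if the dictionary models it well, i.e., its representation residual is small) can be sketched with a minimal two-atom dictionary. The least-squares fit, the atoms, and the residual threshold are all illustrative, not from the paper.

```python
# Inlier test sketch: fit y ~ a*d1 + b*d2 over a 2-atom dictionary in R^n
# via the 2x2 normal equations, then threshold the residual norm.
def lstsq_2d(d1, d2, y):
    g11 = sum(u * u for u in d1)
    g12 = sum(u * v for u, v in zip(d1, d2))
    g22 = sum(v * v for v in d2)
    b1 = sum(u * y_i for u, y_i in zip(d1, y))
    b2 = sum(v * y_i for v, y_i in zip(d2, y))
    det = g11 * g22 - g12 * g12
    a = (g22 * b1 - g12 * b2) / det
    b = (g11 * b2 - g12 * b1) / det
    return a, b

def residual(d1, d2, y):
    """Norm of the part of y not explained by the dictionary."""
    a, b = lstsq_2d(d1, d2, y)
    return sum((y_i - a * u - b * v) ** 2
               for y_i, u, v in zip(y, d1, d2)) ** 0.5

D1, D2 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]   # two dictionary atoms in R^3
inlier = [2.0, 3.0, 5.0]    # lies in span(D1, D2): 2*D1 + 3*D2
outlier = [1.0, -1.0, 5.0]  # far from the span -> large residual
```

Points with residual below a chosen threshold would be assigned to the inlier class; the rest are treated as outliers or trigger a dictionary update.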
Note that when {{formula:f248c472-86fc-41fc-a9e1-7e308946ff02}} is a complex Lie group (and {{formula:ba8b6c5a-956d-4ec3-8863-c08f804e16d3}} is the associated complex structure), it is always Chern flat, but the converse is not true. So in Theorem E the conclusion says a bit more than the Lie-Hermitian manifold being Chern flat. In general, two Lie-Hermitian manifolds can be holomorphically isometric but with the two Lie groups not isomorphic to each other. For instance, there are examples of non-abelian group {{formula:baee8fcf-26e1-4237-8bea-a5bfc8834732}} where {{formula:47822ab2-4bc0-4b90-b791-e4576a509210}} is Kähler and flat, thus holomorphically isometric to the complex Euclidean space {{formula:95d504fe-6f70-4f2b-bf1f-51ce78f1a098}} . See {{cite:4be00207aeab3417add6baca37b2eaa43f295370}} or {{cite:f873d1468cc416a57ef5fd62b41d170ae03d87b2}} for example. See also {{cite:0135c32037d65ff3eb98aa1a2a7a52d81a04dc9e}} for the characterization of Lie groups with flat left invariant metrics.
i
a0f93a14ca721f3627bb64fdc6537eee
Orthogonal frequency division multiplexing (OFDM) is the most popular multi-carrier transmission technique due to its simplicity in generation and recovery, high spectral efficiency, multiple-input multiple-output (MIMO) compatibility, and robustness to inter-symbol interference. Hence, OFDM is the waveform currently used in 4G, while 5G utilizes a flexible multi-numerology OFDM scheme. A great deal of research has been done on OFDM, and some extensions have been very successful, such as OFDM with index modulation (OFDM-IM), where subcarriers are grouped and data is transmitted through the indices of the active subcarriers along with {{formula:96ef5821-59df-4326-a3e2-1f28cbda6548}} -ary signal constellations as in classical OFDM {{cite:20bde901f4f2df4a4e6cf0ab895d3606c40a0721}}, {{cite:9e1ac5930bfa9593212bcb9f60f3f3022ab5db8e}}, {{cite:4776a8cc51633fe8f14111ce5c056681083916bd}}. However, OFDM-IM alone does not meet ultra-reliable and low-latency communication (URLLC) requirements, since it does not provide diversity in the frequency domain.
i
4abf7681089ad94018c0471f4039fb9e
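The OFDM-IM bit budget described above can be made concrete with a small sketch: per group of n subcarriers with k active, floor(log2 C(n,k)) bits select the active-subcarrier pattern, and each active subcarrier carries log2(M) constellation bits. The parameter values and the pattern-to-bits mapping below are illustrative, not a standardized look-up table.

```python
from itertools import combinations
from math import comb, log2

def ofdm_im_bits(n, k, M):
    """Bits per subcarrier group: index-selection bits + M-ary symbol bits."""
    index_bits = int(log2(comb(n, k)))   # floor(log2 C(n, k))
    symbol_bits = k * int(log2(M))
    return index_bits, symbol_bits

def index_pattern(n, k, bits):
    """Map an integer (the index bits) to one choice of active subcarriers,
    using the lexicographic order produced by itertools.combinations."""
    patterns = list(combinations(range(n), k))
    return patterns[bits]

# Common textbook example: groups of n=4 subcarriers, k=2 active, QPSK.
ib, sb = ofdm_im_bits(4, 2, 4)
```

With n=4, k=2, M=4 this yields 2 index bits plus 4 symbol bits per group, illustrating how part of the payload rides on *which* subcarriers are active rather than on the symbols themselves.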
The shared control arm, the adaptive nature of the trial, and the stopping rules mean that the treatment effect estimates based on the trial may be biased, especially for those that stopped due to superiority or futility {{cite:ba93727008f25e1389aa1bad246f6aed8b2601f8}}. Exploring whether modifications to deal with such bias can be incorporated into our model is an area of future work. Our method specifies a parametric model in the first stage of the hierarchy and a nonparametric DPM at the second stage. Completely nonparametric hierarchical Dirichlet process mixtures have been developed and proven successful {{cite:fc6d4902a378ef4586820ffb553941cb69c3ea3e}}. Another avenue for future work would be to implement such mixtures to flexibly model the treatment effects at the first stage of the hierarchy as well. We have focused on a single continuous covariate. However, it will not be uncommon to have a mix of continuous and categorical {{formula:fc37f861-ee38-4ae0-b1d8-5b6e33307ee9}} . We can use the specification of {{cite:c149f8ce0a91872102adc4cb272f31befaa9cbf0}} and assume independence among the categorical covariates within the DPM and/or, in the case of many covariates, replace the DPM with an enriched DPM {{cite:2fbb5297d69b42f2c1748e5ce90190362c22d655}}. Finally, although we demonstrate how independent censoring can be accounted for easily, investigation of dependent censoring may be useful for trial settings without registers and for outcomes other than all-cause death.
d
5db5824d401b3a5ed96315f27d3da07c
We demonstrate the potential of our algorithm on various shape classes from the ShapeNet {{cite:c2d85eb027d3f6afe5ef451801689cd27e9716bd}} data set, comparing with state-of-the-art methods in terms of how well the details of the 3D model are captured. Additionally, we assess our method on various models downloaded from the Internet to show its generalization capability.
i
a1e0a37f5603f358291ed03292c29a7e
PLMs have led to a tremendous performance increase in a wide range of downstream tasks, including machine translation {{cite:b89b698d4d900e3185b1022aee24cfa4f595ee2c}}, text classification {{cite:6000b280f641e3f51d367165bb827d2fd834a33f}}, document ranking {{cite:ee9b7a3164f3e72f76caa45a358ac40d003f0690}}, etc. The core component of a PLM is the self-attention mechanism, which allows the model to capture long-range dependency information.
i
ca02d1ba1367a39fc79b0decbb029a02
The corresponding undirected graph is called the undirected power graph of {{formula:aba60e9f-3381-4a56-b4c1-58be4804ccb3}} , denoted by {{formula:799b907f-81e9-4ed3-b097-06e0bd87bf8c}} . The undirected power graph of a semigroup was introduced by Chakrabarty et al. {{cite:e0cbd0947f9eb79637c447372926fae87e675dbd}} in 2009. So the undirected power graph of {{formula:8de56afe-d1f1-4fe8-aec1-344ae4323974}} is the graph with vertex set {{formula:363cd041-51a5-4cea-bcb5-29f02df16814}} , with an edge between two vertices {{formula:f0db8062-3275-457b-92c1-347475ae43af}} and {{formula:3dffa3d9-cb13-409b-bdab-dc748b2c29b8}} if {{formula:d46f4b16-2ffe-4d4a-ab3c-0e04d5f21b8e}} and either {{formula:e98495f3-90d9-4b9b-9c7a-53b4d261da53}} is a power of {{formula:02810ca7-c604-4a63-9a7b-5b921329df2a}} or {{formula:38fba144-3ff1-4da4-bee5-2f2118c7fd46}} is a power of {{formula:d6e86aab-f0c6-44ae-9cdd-1d122b6bef41}} . In the sequel, “power graph” will mean “undirected power graph”.
i
39c347118153b6b03f8018f43aa97747
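The definition above can be instantiated directly for a cyclic group. The sketch below builds the undirected power graph of Z_n (under addition mod n, so "powers" are additive multiples); for a cyclic group of prime order every non-identity element generates the whole group, so the power graph is complete, which gives a quick sanity check.

```python
def powers(x, n):
    """All additive powers {x, 2x, 3x, ...} of x in Z_n."""
    seen, cur = set(), 0
    while True:
        cur = (cur + x) % n
        if cur in seen:
            return seen
        seen.add(cur)

def power_graph_edges(n):
    """Undirected power graph of Z_n: join distinct x, y iff one is a
    power (additive multiple) of the other."""
    pw = {x: powers(x, n) for x in range(n)}
    return {(x, y) for x in range(n) for y in range(x + 1, n)
            if y in pw[x] or x in pw[y]}

# For prime n the graph is complete: C(n, 2) edges.
edges5 = power_graph_edges(5)
```

The same two functions work for any n; for composite n the graph is no longer complete, since elements of coprime-order subgroups need not be powers of one another.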
Bringing the RG framework to other sparsification methods, like RigL {{cite:91a9d95cdfdb46e31c0594df22cc7b8e54ecfe14}}, which can similarly be viewed as RG schemes; Bringing the RG framework to study winning tickets beyond computer vision, such as in natural language processing {{cite:8218e7dd0e9adbabe1c6e1d7540c917dd32053bb}}, {{cite:ae9610a70093e3d48b5e510fc0a00961616c5817}}, {{cite:89d875cab8e75f67dcf1d8c67a27bd6093f9ad38}}, reinforcement learning {{cite:89d875cab8e75f67dcf1d8c67a27bd6093f9ad38}}, and lifelong learning {{cite:8deaba3d92b103a545ea3fcbdf217ff676f3c5b5}}; Computing and classifying systems by their critical exponents (such as {{formula:1127990f-f1ae-4f6f-b0a7-ec90f431573d}} in Eq. REF ). We attempted to do this and had some promising results. However, we were limited by having only a few independent seeds and a scaling function with multiple free parameters {{cite:a60d788ee6f06de84f9d74313b6bee77f7fccdb2}}, which led to large differences between the computed critical exponents of systems that displayed qualitatively similar scaling behavior (see Appendix B for more details). Bringing in methods such as finite-size scaling {{cite:e9ce3833a527e4cf4618eb3d826d04f56c7f9039}} and increasing the number of independent seeds may alleviate this problem; Exploring the connection between the RG framework developed here and the work to build an effective theory of DNN behavior {{cite:3fef35d843cd640ea96c53c2bd148399a09bfee7}}.
d
d4d64a55a11e71ba5d154a22b67cb2e1
Let us continue with the famous Representation Theorem, first obtained by {{cite:78a006ef821dedaee2a8bf52da6c93ed9560ec92}} in the continuous setting and again for arbitrary measurable functions by {{cite:2dd3e2d108e4041b42fcc1ae97b62fb9b6a02add}}, cf. {{cite:7f216b0ba91b4cd0369d48a78ad6ed145aafa3be}}. It states that there exist bounded measurable functions {{formula:23a470dc-e901-47f9-b511-67652d539c9e}} and {{formula:93101bdf-ec5c-4731-bb61-055529c95251}} such that {{formula:d642786a-4954-4208-9a8d-765b34d49d8b}}
r
900951133c48e646ac737fda664fdf40
Regression based methods such as {{cite:c08cd29aa753095e0c6d5c969a9c62d1ebb48e97}}, {{cite:7cdf304ac7f51b70908cb20cc502adb8934e006c}}, {{cite:edfc1c2523b9a0ecff50b768e959faea822795ba}}, {{cite:f59d2555083722e63f0bee86123c8ea85f990e0f}}, {{cite:ed416a94303a73392f8af614be716abca2a5912c}}, {{cite:b74b3c0fdd36c5114b7148f8c42f932163b684ef}}, {{cite:da5ea6ab5c423e88799f7b51a72d59dd573ac23c}}, {{cite:b5c8ae7af7440c1791bb262ce974a34dc1d53e70}}, {{cite:df3493defce2ad6c37336652004c6b6313ce3caf}} are mostly inspired by general object detectors (e.g., Faster R-CNN {{cite:53bb9ac89440208eacbf85f78525c1cd2a62b558}} and SSD {{cite:e633b27055dc6ca03866e2985adbc9e8e0ca4add}}); they directly regress the entire word or text-line with arbitrary shape in an image at object level.
m
b8c0a9248833778d13a28e8ad8c5d2a5
BayLIME is a Bayesian modification of LIME that provides a principled mechanism to combine useful knowledge (e.g., from other diverse XAI methods, from human knowledge embedded in the training of the AI/ML model under explanation, or simply from previous explanations of similar instances), which is a clear trend in AI {{cite:dffd577effa848914ce5f5d78eca3e279e658215}}, {{cite:3de12ac4216bec541897034ae1ebfe2bcefea7c2}}, {{cite:8faf546bb2ad547329fe41ac3d78ed364533d776}}. Such a combination benefits the consistency of repeated explanations of a single prediction and the robustness to kernel settings, and may also improve efficiency by requiring fewer queries to the AI/ML model. In what follows, we discuss several questions that highlight the practical usefulness of BayLIME.
d
b18bb683af74525db7e710b82faeaf18
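The prior-plus-data combination behind a Bayesian surrogate of this kind can be illustrated with a one-dimensional conjugate Gaussian update: a prior on a feature weight (e.g., carried over from a previous explanation of a similar instance) is merged with freshly queried perturbation data. This is a minimal sketch of the idea, not BayLIME's actual implementation; variable names, noise levels, and data are illustrative.

```python
def bayes_weight(xs, ys, mu0, alpha, lam):
    """Posterior mean and precision for y = w*x + noise, with prior
    w ~ N(mu0, 1/alpha) and Gaussian noise of precision lam
    (standard conjugate update for Bayesian linear regression)."""
    prec = alpha + lam * sum(x * x for x in xs)
    mean = (alpha * mu0 + lam * sum(x * y for x, y in zip(xs, ys))) / prec
    return mean, prec

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.9, 4.1, 5.9]          # roughly y = 2x, so the MLE weight ~ 2
mean_weak, _ = bayes_weight(xs, ys, mu0=0.0, alpha=0.01, lam=10.0)
mean_strong, _ = bayes_weight(xs, ys, mu0=0.0, alpha=1000.0, lam=10.0)
# A weak prior defers to the queried data; a strong prior pulls the
# explanation toward the prior knowledge, stabilizing repeated explanations.
```

The same mechanism explains the efficiency claim: with an informative prior, fewer perturbation queries are needed before the posterior mean settles.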
Alternative supervision. Much of the progress in semantic parsing has been due to the ability to learn from weaker supervision. In the framework we presented, this supervision consists of the desired actions {{formula:e1e748b7-e1cb-4809-958a-ca0e5711d958}} (e.g., answers to questions). One can use a large corpus of text to exploit even weaker supervision {{cite:617cfd4c71fdeecb69ff3055517b18e797038d67}}, {{cite:8c3c26d2b5a8a87611ba9d04c8967a48a1d8571d}}. More generally, one can think about language interpretation in a reinforcement learning setting {{cite:f4f2df526bfe526fd530447ff79ff6785ae60c09}}, where an agent, presented with an utterance in some context, performs some action and receives a corresponding reward signal. This framework highlights the importance of context-dependence in language interpretation {{cite:265199c6bdd7ee29b5c3b326699772738be665f1}}, {{cite:09676b0b285011d2221dd4cfd035b478d5ae55d2}}.
d
174a30812279e10af0f200a22703e111
A noteworthy outcome of the precise comparison between BCFT and gravity is the identification of different disconnected OPE channels with QES inside or outside the horizon. From the examples studied in this paper, we infer that a QES outside the horizon on one side corresponds to a disconnected channel with twist field one-point function contributions from the same side of the TFD. A QES inside the horizon on one side, on the other hand, originates from disconnected contributions due to BCFT images on the opposite side of the TFD. It would be interesting to perform a more exhaustive study of all possible OPE channels in the BCFT picture (for our two interval problem there are 24 such channels) and the precise characterization of their associated QES. It is interesting to note that the purification of modes and consequent dip in entanglement entropy requires QES behind the horizon. This is similar to the nonequilibrium situation with evaporating black holes {{cite:492bb36f8f07b15593ba4a50d4543aa9dac06ba4}} where the QES remains inside the horizon while the entropy relaxes after Page time and only pops out of the horizon at parametrically late times when the system is approaching equilibrium.
d
6aa7c4ce774571660b182848574bb437
Nevertheless, our proposal is much more modest. We argue that the universal laws of thermodynamics should also apply to the vacuum, as already considered for black holes and in attempts to endow the gravitational field with entropy {{cite:4aa920dc6fa4cc43f277468e2dbbb6e6ad959273}}, and, furthermore, that inflation provides a suitable way out of the fine-tuning problem of the vacuum energy. Thus, as the inflaton evolves and drives the inflationary accelerated expansion that shapes most of the observable properties of the Universe, it also induces thermodynamical transformations in the vacuum that result in its suppression. In fact, the idea that the vacuum evolves {{cite:471bf58af648e42e38dcf567d49f4700002fe275}}, {{cite:9b557af11cc0c6d633525721b70d87c5db076c3c}}, {{cite:7f99f9f039fe13fb8393e67ebe99036e45eda0a6}} in a smooth way with the cosmic time, {{formula:80bb745f-ab7a-403e-b207-3dc35f156d99}} , actually as {{formula:4b392b27-1c52-4eab-9f8f-da4a51b231fa}} , allows for interpolating, in specific contexts, between the expected {{formula:98b2e60c-aa17-459d-a515-24a5e0b45675}}{{formula:3640144d-c72d-4f10-86b6-909a444c8cfc}} initial value of the cosmological constant and its observational value. Thus, if the vacuum evolves, it is not all that surprising that an abrupt and significant event such as inflation can have an even more dramatic impact on the vacuum energy. The arguments presented in section 2 seem to support this conjecture. It remains to be seen whether the presented assumptions can also be used for generic cosmological phase transitions, as proposed in section 3; if so, the several avatars of the cosmological constant can all be tackled with the same underlying set of arguments.
d
7f610853325c01a9d8ae33b9866b7728
We observe that the random intersection graphs considered in this paper admit asymptotic power-law degree distributions, but their degree sequence is not an iid sample from a power law. We mention that some real affiliation networks are believed to have a power-law degree sequence, albeit with an exponential cutoff, {{cite:aafd46a8f8597d429932da9766d1daafec7ab053}}, {{cite:7cdb11e1aabf5d0981781aed730556279c1b04d8}}, {{cite:10728b8aa39c0d3d1d898dd6632302359cc17d2e}}.
d
9aae5444b3606062840793af6f790b29
By REF and REF it follows from Theorem 3(a) in Section 1.2.2 of {{cite:39e1b29e4ad1c04c9f71f52b47ef7a45044f248e}} that {{formula:a212ab35-2d31-4d09-b04c-244f5956c8d4}}
r
aed180b57d4f878bcd0d2009d6f46052
Although the 3D keypoints in {{cite:46b2de23ccc97274e6747367c91d3468bb0cb668}} are not oriented, local orientation is implicitly used to encode the object target configuration in their pipeline, for instance in the "put mugs on a table" task (Fig. 6). This highlights the importance of local orientation and the benefit of incorporating it explicitly.
d
45735024de6a5afe9cd513f4a09c58d4
While our approach imposes significantly fewer constraints on the structure of slow dynamics than competing approaches (e.g., the ARHMM), there are still limitations. First, we relied on PCA for dimensionality reduction of locomotor trajectories. Since the success of PCA relies on linear separability of trajectories in the data space, the length of trajectory segments is limited. In our case, one minute appears to be the upper limit for a 5-component PCA-based decomposition. This issue could be circumvented by incorporating trajectory-informed kernels in the decomposition process {{cite:fd24f6cf95066ae45dcd6a9053bd57b012711c18}}. Second, we note that our model does not take feedforward-feedback interactions between slow and fast dynamics into consideration. Including such connections in the model would lead to exponential growth in the number of free parameters and require much larger datasets to fit. Simplifications of the slow dynamics may help trim down the parameters and prevent overfitting.
d
5f9a3675018dbb77e6578fa641df4ccb
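The linear-separability point above is exactly the sense in which PCA summarizes data: it finds the directions of maximal variance, which works well only when the structure is close to linear. A minimal pure-Python sketch (top principal component via power iteration on toy 2D data; illustrative only, not the paper's analysis pipeline):

```python
def top_pc(points, iters=200):
    """Top principal component of 2D points via power iteration
    on the sample covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    pts = [(x - mx, y - my) for x, y in points]
    cxx = sum(x * x for x, _ in pts) / n          # 2x2 covariance entries
    cxy = sum(x * y for x, y in pts) / n
    cyy = sum(y * y for _, y in pts) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Points exactly on the line y = 2x: one component captures everything.
line = [(float(t), 2.0 * t) for t in range(-5, 6)]
v = top_pc(line)
```

For a curved trajectory no single direction captures the variance, which is why long nonlinear segments defeat a low-rank PCA and motivate trajectory-informed kernels.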
The focus of this paper is on the image captioning problem, which requires analyzing the visual content of an image, and generating a caption, i.e., a textual description that summarizes the most salient aspects of the image. Image captioning presents several challenges beyond those addressed by object recognition, e.g., inferring information that is not explicitly depicted in the image. However, existing methods for image captioning (See {{cite:e5ff2a2052f216de271291c92e348b660ce0bcc0}} for a review) fail to take advantage of readily available general or commonsense knowledge about the world.
d
89f7666f4a418702388300b8f21579f8
This follows from Equation 1.163 of {{cite:3cee45577a1d6b7cec0ed3dfba41746e3450e290}}, up to rescaling. In view of this, we may rewrite {{formula:ec4a37bb-e903-49f6-b356-60f4ba4f8a03}} as {{formula:e3686219-0664-421e-8ee8-da99d8b00804}}
r
7d7ad25350e77bd25ceae5a1e324b333
We match perturbative orders between the two fits at NLO in {{formula:51256547-b9cf-4750-a419-eba07686b849}} . In practice, this means that in the CT fits, performed by default at NNLO, we instead compute the hard cross sections, perturbative PDF evolution, and running of {{formula:c136cc85-ad6b-426a-8d4e-2dabe044b746}} at {{formula:53bfa140-b9a6-4035-b507-56c988e9b6e9}} accuracy to agree with the default NLO settings used in CJ. We perform supplementary fits by excluding some data sets that appear in one fit only. While both CJ and CT fits include Tevatron lepton charge asymmetry measurements presented as a function of the charged lepton's rapidity, the CJ fit also includes the fixed-target low {{formula:ae6488cd-815a-4fcb-847e-d4add87d7957}} and {{formula:6a152ee4-b4c8-42ef-bd4a-8b1393c5c75e}} DIS data from SLAC {{cite:1f62a4ee4cc340a64e0b999b60cad1327860c656}} and JLab {{cite:0c52dc37d1f9670e0c956dc30b41e1a606417869}}, as well as the CDF {{cite:17c4f4f6bc924ba7216ccbbb26907f438c957074}} and DØ {{cite:0cbc0a6b0f26d24b311914f99c49791af630d305}} {{formula:33971143-1ecf-4b9b-b943-ba4c635e5f37}} boson charge asymmetry with reconstructed weak boson kinematics. On the other hand, CT makes use of neutrino-initiated DIS data sets on heavy nuclear targets (both inclusive and semi-inclusive DIS [SIDIS] di-muon production in {{formula:bb8d8bb6-1216-4309-8430-209faa61f4bb}} -A scattering). In CT, data on heavy-nuclear targets are fitted at the isoscalar level after being corrected in the fit using a phenomenological parametrization of the {{formula:9dc85d62-3a13-40ef-9bee-0f780b36e89a}} ratio from Ref. {{cite:b3e89c56742d6ad79de5eab9b09cab56182b07d7}}. To isolate the impact of these extra experiments, we performed CJ fits without the {{formula:31304614-a479-4dd4-8585-4d8608ac111c}} asymmetry and SLAC DIS data sets, and CT fits without the inclusive {{formula:18cfb53b-98bf-4be5-a9d6-736cf6addf33}} -A DIS data. 
As in the original CJ and CT publications, we estimate the final PDF uncertainties using the Hessian method {{cite:258254b771e8e6510e1a7d3aab997aefd0a5d7eb}}, but in this paper we fix the tolerance to be {{formula:d4c629b3-d5b8-4372-a0cf-62ccee2185c5}} for both global analyses, in between the nominal {{formula:562180ae-53a3-4bb8-a005-728331f00c5d}} in the CJ15 fit and the {{formula:6b17d25f-fc67-48a5-9de4-f6a4698df505}} value (at the 68% probability level) used in the CT18 fits. Furthermore, we do not include the additional “Tier-2" tolerance contribution {{cite:5e014bc8ed8e0a55a85c497664eeb2ad37468ccf}}, {{cite:1b84b641942918c073ee28e7b918c7df6f76edcb}} that is applied in the CT18 fits to prevent the error PDFs from running into strong disagreements with individual experiments, but content ourselves with the “Tier-1" tolerance as defined in {{cite:7a8afd922b74d08f34a678aa1ecc78232d19ef7b}}.
m
9c0dc84ded398cb14724533af83d9f17
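The Hessian uncertainty estimation mentioned above can be sketched with the standard symmetric "master formula": given paired predictions X_i^± along each eigenvector direction of the fit's Hessian, the uncertainty is half the quadrature sum of the differences, and a linear rescaling converts between tolerance conventions. The numbers below are toy values, not CJ or CT eigenvector sets.

```python
from math import sqrt

def hessian_uncertainty(pairs, T=10.0, T_fit=10.0):
    """Symmetric Hessian master formula for an observable X.
    pairs: list of (X_plus, X_minus) over eigenvector directions,
    computed with tolerance T_fit; result rescaled linearly to T."""
    delta = 0.5 * sqrt(sum((p - m) ** 2 for p, m in pairs))
    return delta * (T / T_fit)

# Toy eigenvector-set predictions around a central value of 1.0.
pairs = [(1.02, 0.98), (1.01, 0.99), (1.05, 0.95)]
delta_x = hessian_uncertainty(pairs)            # at the fit tolerance
delta_x_68 = hessian_uncertainty(pairs, T=1.3)  # rescaled to a smaller T
```

The linear T-rescaling is what allows uncertainties quoted at, e.g., T = 1.645 sigma or T = 10 to be compared on a common footing, as done in the text.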
{{formula:53ef28ba-579f-4999-916c-54b6c3c31a88}} ; {{formula:04aaa80f-52cc-4144-817a-1a0ba192c6d1}} ; and {{formula:9d238e1e-879c-443b-b8e9-4a62eacc178f}} . From lemma A.2 and (8) in {{cite:b1b098b1a62b935df735794e3e65daba57c21cc0}} and (S1) to (S5) in {{cite:0b97fbe21ead4b2d17e6bcbde63b3a8d6e67abb0}}, {{formula:42eb0f4d-95f5-4be5-8183-b83b3853f1ef}} ; {{formula:731d0116-fc4c-4c7d-ae81-e1541493c0a2}} ; and {{formula:32576f38-90c6-4861-8498-c5477d2f9c98}} , {{formula:6b3098a8-750f-445c-af37-feb2de8bf893}} , {{formula:6d52e127-53c2-4948-84e7-5efa749c6925}} . Besides, {{formula:1c15ca19-60f8-4567-b35b-2a053eca4bf6}} ; {{formula:2bcaec9a-d37d-4d69-99a4-e892e590981c}} ; {{formula:8f493236-d896-4032-974e-441ef6fb528e}} ; {{formula:ff68f2f9-2406-4a41-a06a-9180e757263f}} ; and {{formula:6f7e4f7e-9793-4e90-bca1-3d3689c6371e}} . Therefore, {{formula:838f744c-d858-459c-9c5f-dc24c471571d}}
r
73dcf5cf8bf11ded03e4cdc5f70a0a6b
Two (linear) product operations of a certain kind defined on the same vector space are said to be compatible if their linear combinations are still of the same kind. In recent years, many studies have been devoted to various types of compatible algebraic structures, for example, compatible associative algebras {{cite:cfa559aada6502a2105ac0b7a3bddb75df06dc33}}, compatible Lie algebras {{cite:500012e68f6b8f5b7dd45174e5c02c366c33b49d}}, {{cite:65638ff796231ee7e54351b977b0038b005e801e}}, {{cite:9a06a6f67fa4a0ed58e4ba0cd49ea691d9001495}}, compatible Lie bialgebras {{cite:6960a32e4f2ba0b4de52f66361f942d4e53f2668}}, {{cite:cd7e61b6e7835614394bb0e32521bca1d2726659}}, etc. Compatible associative algebras have appeared in connection with Cartan matrices of affine Dynkin diagrams, infinitesimal bialgebras, integrable matrix equations, and quiver representations. Bolsinov and Borisov {{cite:652dc40d7882c712fca520d0d7d08c3a9f0fa833}} showed a close connection between compatible Lie algebras and compatible Poisson structures via dualization. Compatible algebras have shown up in the study of loop algebras over Lie algebras {{cite:65638ff796231ee7e54351b977b0038b005e801e}}, elliptic theta functions {{cite:a9ff3c0892f65f846580abdd4d36e70a0cd63541}}, the classical Yang-Baxter equation {{cite:188ba702740f17b03d4ee64abab6a3ffd45b611d}}, and principal chiral fields {{cite:500012e68f6b8f5b7dd45174e5c02c366c33b49d}}. Compatible algebraic structures have also appeared in many other interesting works of mathematical physics; see {{cite:e0a5935f0a468e35d8d4fac7b34ee6cd0b064445}}, {{cite:b5738b80c508a51679589ae74d31eb0cd81ee7f1}}, {{cite:506e9fe928d460771a3646f36be49ac4d58346da}}, {{cite:ec416d02711a53cda13743c8d341e33a81d7f9c4}}, {{cite:9fceeb3842511ba02333ea10da0ca9f25ed0f89c}}. In {{cite:d433e2b9d3c21025f3ecfd86168910984ade925b}}, the authors studied the cohomology and deformation of compatible associative algebras.
In {{cite:125bca16a6580be4150460d591bb622fb6d3fa3c}}, the authors characterized compatible Lie algebras as Maurer-Cartan elements and studied cohomology and deformation of compatible Lie algebras. In {{cite:a3285d2d57f0a2a39e1821541ff7076b9d54b95f}}, the authors studied compatible Hom-Lie algebras generalizing the work of {{cite:125bca16a6580be4150460d591bb622fb6d3fa3c}}.
i
6a81b9ad8ffe341ad1a1c9fe7900aace
It was subsequently shown that JT theory captures the dynamics and thermodynamics of higher dimensional near-extremal black holes in both flat and {{formula:0b69e251-6a2e-460b-913c-67bcef801d7f}} spaces {{cite:5ca90e14559ddcfb562c844698cb392dc2f7b5be}}, {{cite:1e2792a70100be52d29bf5105e4d2412a9f0f750}}, {{cite:9e82491584ca3288e52d728c7c05a0a3249009e4}}, {{cite:e2858a84eda6fd05a620b1302fbfaf2dc3bc1276}}, {{cite:8db47d09c205f21db5726e6059cad49207b8e9db}}. Specifically, the JT theory was shown to capture the thermodynamics of near-extremal charged and rotating black holes, wherein an extremal black hole is perturbed by changing its mass while its extremal angular momentum and/or charge is held fixed {{cite:1e2792a70100be52d29bf5105e4d2412a9f0f750}}, {{cite:e2858a84eda6fd05a620b1302fbfaf2dc3bc1276}}. The JT gravity theory was shown to reproduce the excess mass and entropy as a function of temperature close to extremality of the higher dimensional black hole. It turns out that the JT theory coupled to 2d matter can be solved exactly {{cite:1df35cc8fff8cf2d298bce46471dd8888d989278}}, {{cite:fb1d1151417fbd013163cc4d31e1e4c06ca58416}}, which is useful since the quantum effects associated with the gravitational degrees of freedom in this region are important {{cite:72e6fb8ff6f00cd5c3a94ed02e276d1dbb72ac45}}, {{cite:f5e8b4a3a0150ed27d61493a92e4551130aea51d}}, {{cite:b2efe5677477ef4426ec6a6deeb52f296169c72b}}, {{cite:10dc947159d84561ad723d338cb13e7ac8757adc}}. These effects are known to modify the naive classical gravity results at low temperatures. The density of states vanishes continuously as the geometry approaches extremality for non-supersymmetric configurations {{cite:22ed25f97a1cac46b40b2bf0b8410df5432be8a7}}, {{cite:fb1d1151417fbd013163cc4d31e1e4c06ca58416}}, {{cite:acb537b2f97eb20776dffe15e26d3c8f73d1e47f}}.
However, for supersymmetric configurations, there exists a gap in the density of states and a large degeneracy of states at zero energy above extremality {{cite:0d2093ca7cc5868ca0c5e748dc7d27d35c3d8926}}. This has recently led to some interesting observations in supersymmetric extensions of the JT model {{cite:05769d52bf09bb91f15f82c544907878c91d0f60}}, {{cite:3512abb65a124578c0b81a82f0ad504dce36aac3}}.
i
398e254b2920b95f8251669b654f6343
The remaining parameters, namely {{formula:da9c5187-4c11-4601-84ca-90fc4d505f46}} , {{formula:193eb7ec-f98f-44e8-b655-1861502808fc}} and {{formula:1405b63d-46f8-41cc-8f61-ca1078a5e83b}} , will be flavour-dependent. Using eq. (REF ), {{formula:80267c0a-cb72-4fe2-b1e8-7760af4684c4}} can be fixed by the mass of the lightest scalars through j = -m_j z_0 J_0(m_j z_0)/J_1(m_j z_0), where {{formula:e83f2dac-c9d6-4895-b635-776ff8f7351d}} . In Fig. REF we show the dependence of {{formula:ec7690fa-3e6f-4b8f-b14b-f265353ae18d}} on the scalar masses, where we have used {{formula:f1a6f734-c775-4d1f-a767-482f6a2ba5eb}} MeV and {{formula:b55a61b8-c64c-45b8-bb8a-a0344b758412}} MeV together with the conservative range {{formula:eca51fe8-a789-4ef1-bcdd-d79113d879b5}} MeV {{cite:c829934caacc75d17762edfcba973ad5a8efa822}}. The masses of the excited states are also predicted by this single parameter. However, the resulting spectrum of the excited states differs substantially from the physical one.
d
7e974683bd91a642fe3a5a76b224202b
Recently, the multimodal fusion of point clouds and images has proven effective in many tasks. For example, in the 3D object detection task, the performance of Lidar-image fusion-based methods usually surpasses that of single-modal methods {{cite:466ba2a70de9c39a33bdef750d36c5f27bf912bd}}, {{cite:96ed792a915fa4fe01adbbd6c5414f06b2bada48}}, {{cite:d31bdf7e9ed4c9b76f1a77fe1339fcebc8060bad}}. Some researchers have begun to make use of 2D images to help the completion of point clouds.
m
840cc09c0a4645cfaaed890b05b00c7c
Quantum unknown source encoding. For an independent and identically distributed (i.i.d.) source, Shannon's theorem {{cite:4247e92b6cbef0bbb04d91650684529ca97ddaf5}} characterizes the redundant information with a fundamental limit that is achievable for a noiseless channel. The main idea comes from the asymptotic equipartition property of typical sequences, i.e., the joint distribution of typical sequences is asymptotically dominated by the Shannon entropy. A similar result holds for quantum sources {{cite:87d16296f1463e8d2d707ef395cabd543069e7a4}} in terms of the von Neumann entropy {{cite:cf37e84f1689b464873652e323f3caa09033b4ed}} by using typical states. Here, inspired by Theorem 1, we provide another method to compress an unknown quantum source with only partial information of the measured state {{formula:a1221991-5657-4dc5-bca3-8ba164a8d75f}} {{cite:cf37e84f1689b464873652e323f3caa09033b4ed}}.
r
d65f03d141ad8bed935adcd088f1a433
This equation is valid in the {{formula:0228227e-a56e-4924-9b7d-a8b13d0d143a}} limit and, technically, only for certain (distributional) initial conditions; a follow-on paper {{cite:305f8c5cfab894dd716c4d583fdd1aff1020694f}} established under mild assumptions that the manifold in question is attracting, so the above equation should hold after some initial transient, while recent work has shown that the Ott-Antonsen (OA) manifold is not attracting for finite {{formula:efcc51e0-1e36-43d4-b5bc-92dc29043fa6}}  {{cite:26c1d22b059b5275c68d4367b63ce1b0f80e070b}}. Equivalent results hold for the case in which the oscillators are organized into modules, where within each module the oscillators' natural frequencies are drawn from a Cauchy distribution and the coupling strength between oscillators {{formula:3605c9e5-1602-4057-9a5d-604f4bbc4c77}} and {{formula:a679639d-3e86-439f-80bb-a4bfd9aae2d0}} depends only on the modules they belong to. If the distribution of natural frequencies is not Cauchy but a different rational function of some order, then a similar reduction is possible, but with one variable for each pair of poles of the natural frequency distribution {{cite:68bc486171087a4aee79e62b2ea7f42aa72fc216}}.
m
6b9088a77f82d31d9bea8633dcb4b1aa
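Assuming the standard globally coupled Kuramoto setting with coupling strength K and a Cauchy frequency distribution of half-width Delta, the OA equation for the complex order parameter z takes the familiar form dz/dt = -Delta z + (K/2)(z - conj(z) z^2), and a minimal Euler integration (illustrative parameters) reproduces the predicted steady state |z| = sqrt(1 - 2 Delta / K) above threshold:

```python
from math import sqrt

def oa_steady_state(K, Delta, z0=0.5 + 0j, dt=0.01, steps=20000):
    """Forward-Euler integration of the OA order-parameter equation
    dz/dt = -Delta*z + (K/2)*(z - conj(z)*z**2)."""
    z = z0
    for _ in range(steps):
        z += dt * (-Delta * z + 0.5 * K * (z - z.conjugate() * z * z))
    return abs(z)

# Above the synchronization threshold (K > 2*Delta) the OA manifold
# predicts a partially synchronized state with |z| = sqrt(1 - 2*Delta/K).
r = oa_steady_state(K=2.0, Delta=0.5)
```

Because the Euler map shares the ODE's fixed points, the iteration converges to the predicted |z| to machine precision for these parameters; below threshold (K < 2*Delta) the same code relaxes to |z| = 0.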
We finally claim that this last display is equal to the claimed form in the statement of the result. Let {{formula:f5e4307b-c848-4a68-b415-1e22b221c86f}} be such that {{formula:28df9b60-4c48-425a-8b34-896d1bbb2a94}} . Then there exists a probability measure {{formula:1fb08603-2ebf-47f7-81ff-3a9bc8960fb9}} on {{formula:1e3093ed-7528-46f7-ac0c-07b9114d07d8}} with marginals {{formula:c68dc069-5882-483d-867c-219831a00c63}} for which we can write {{formula:6d1f71ec-e6d8-4cc6-a76f-3677f15ca90b}} , where {{formula:6b6676b0-3098-4975-9b06-57df13dc2499}} . Since every open set in {{formula:3d8b54bc-f2df-4617-bae5-4da74697d8f0}} is {{formula:f68d5db2-6c57-4e6a-953a-48b398f54a6d}} -compact, the probability measure {{formula:33f8e100-6ef7-4c49-8e1a-dbd741c9118e}} is necessarily Radon {{cite:68178619aa8c979df242d3ee6b609f53062a511c}}. Now for all {{formula:29085530-2b7e-4a24-adba-69fe1f9ee490}} , and {{formula:cdc4e3f8-d09f-4b85-821a-3d6f3289be66}} , {{formula:97d06998-7719-468b-ac4e-ccef7a940c6a}}
  In real-world applications, novel distortions may emerge in arbitrary order. As a result, a continual learning method for BIQA is expected to be robust to different task orders. In addition to (i) the default chronological order, we experiment with seven more task orders: (ii) synthetic and realistic distortions in alternation: LIVE {{formula:5ea439a8-3098-4898-97cf-5ce8782182f5}} BID {{formula:805d5327-aa72-4c5d-99f0-5b88f0e949dd}} CSIQ {{formula:1e014cc0-4e97-49e1-8779-7e9d34de43d2}} LIVE Challenge {{formula:f8296303-d0cd-4276-9d6f-54b986ef607d}} KADID-10K {{formula:19cabe21-0c49-4d9e-b1fd-185d00966965}} KonIQ-10K, (iii) synthetic distortions followed by realistic distortions: LIVE {{formula:0a017934-c025-47a2-bb00-ce69559987f1}} CSIQ {{formula:e9b3babd-0ac4-478c-9a66-1ce9e94a58a2}} KADID-10K {{formula:e1279d49-e129-4591-a130-df7264e85927}} BID {{formula:72f32c2e-5f3e-48f8-8758-945344b2ee30}} LIVE Challenge {{formula:abee52b5-c274-4e51-95cc-5917bf0f92a0}} KonIQ-10K, (iv) realistic distortions followed by synthetic distortions: BID {{formula:dcc73e0f-d862-4753-8190-64c9ee93e238}} LIVE Challenge {{formula:1d7ab6d7-a286-433f-bf72-8b2a6df4e5ea}} KonIQ-10K {{formula:a07ca984-19af-478b-9ad0-69588558e4b4}} LIVE {{formula:a0717950-ade0-4781-9e10-c8de9e7138d3}} CSIQ {{formula:9b63742b-2dbd-43c0-a2a2-2064016e3b66}} KADID-10K, and (v)-(viii) the reverses of Orders (i)-(iv). We quantify the task-order robustness of a learning method by the average MPSR of different orders. We compare our method to LwF-AW {{cite:2a80afbd361bf26a88edf6d99b42027d4d25a19d}}, SI-AW, and MAS-AW in Table REF . The main observation is that our method is more robust than LwF-AW, SI-AW, and MAS-AW for seven out of eight task orders. 
The only exception is Order (v), in which we begin with KADID-10K {{cite:3902e55e1249cbb1f7daddab9a1b34f5106474fc}}, a synthetic dataset considered visually much harder than LIVE {{cite:5a1b2700b5147f2ab953e114bb3399d5f5ad4cec}} and CSIQ {{cite:fb2e6f2bb3126d14c56d9c2b4eaf1a40548c3161}}, and which therefore poses a challenge for performance stabilization. Given a specific task order, we also measure task-length robustness by the mean MPSR over different lengths, {{formula:53820f0c-67d5-48b1-bb00-29154a8f7b72}} . We compare our method to LwF-AW {{cite:2a80afbd361bf26a88edf6d99b42027d4d25a19d}} in Table REF (as the task-length robustness of SI-AW and MAS-AW is inferior to that of LwF-AW, we omit their results from Table REF for a cleaner presentation). We find that task-length robustness depends on the task order, and that our method performs better than or on par with LwF-AW across all task orders. Relatively inferior results are observed for Orders (v) and (vi), where KADID-10K {{cite:3902e55e1249cbb1f7daddab9a1b34f5106474fc}} is listed in first and second place, respectively. Altogether, these promising results indicate that our method has great potential for use in practical quality prediction scenarios. {{figure:b3963bfd-310f-4d2c-8e5d-0a23197fe896}}
{{formula:e3d5f795-6995-49ef-b88b-b3007bd0d38b}} Single-scale CNN: It processes {{formula:094bf33d-8ad6-45ae-954c-682d28bf864f}}s at a single magnification. A {{formula:6ee10e98-9724-40b6-84f5-e77dfcb94b3c}} is trained to predict patch-wise cancer subtypes, and the patch-wise predictions are aggregated to produce a {{formula:3d2a0db0-a060-4843-965b-81a9e1750c08}}-level prediction. We experiment with three scales, i.e., 10{{formula:b023389e-3680-42d1-8466-9cc9517efd23}} , 20{{formula:2e903c51-c721-48a4-9813-347d6529450c}} and 40{{formula:d57ec955-75c6-4122-8fb0-343aa03b7b61}} , denoted {{formula:a5a75c7b-35dc-40bf-8cb4-8c5199c26777}} (10{{formula:db556313-874b-4473-bcb4-5b038d59052b}} ), {{formula:688ba412-a031-4c79-9178-fe08a862e14e}} (20{{formula:6cb8070c-870b-4851-ba65-dc7b550cdb5a}} ) and {{formula:76bf6e66-d366-4fc6-a187-5d2f187abd97}} (40{{formula:762bed46-a817-44b2-ba2d-a5dabb8ad39c}} ). The same network architecture and training strategy are employed for all scales. For each scale, we extract patches of size 128{{formula:14dcc049-6ae0-4022-9ac0-c448aa1ef2df}}128 pixels with a stride of 64 pixels. The {{formula:6ce75f29-c28c-4a0b-a0da-6f139cc7c3a7}} follows the single-scale training procedure from {{cite:4868805ba4a93532002d66803e513608ba65058d}}, and patch-wise predictions are aggregated using the Agg-Penultimate strategy proposed by {{cite:029d087dab2b196508d614c6cfc2484fe1918c25}}. We use transfer learning with a ResNet-50 architecture, pre-trained on the ImageNet dataset, as our {{formula:97af169e-bb46-441b-8ff6-bea96d1d1c6e}} backbone. Following feature extraction with ResNet-50, we employ a two-layer {{formula:5e9bb5b0-5411-40cf-8394-3d1a08540869}} of 128 channels to classify the patches. The ResNet-50 parameters are fine-tuned to improve classification. 
Adam optimizer {{cite:c4e115ec1292b8be5837b89dc529c0f8c3cb3416}} with {{formula:77ca4d9d-f429-4104-8e27-2f55c8b94b74}} learning rate and batch size 16 is used to optimize the categorical cross-entropy objective.
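The patch tiling described above (128{{formula:14dcc049-6ae0-4022-9ac0-c448aa1ef2df}}128-pixel patches with a 64-pixel stride) can be sketched in a few lines of numpy; `extract_patches` is an illustrative helper, not the authors' implementation:

```python
import numpy as np

def extract_patches(image, patch_size=128, stride=64):
    """Tile an H x W x C image into overlapping square patches."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A 256 x 256 image yields a 3 x 3 grid of overlapping patches.
img = np.zeros((256, 256, 3))
print(extract_patches(img).shape)  # (9, 128, 128, 3)
```

Each patch would then be fed to the backbone, and the per-patch predictions pooled into the slide-level output.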
We considered in this work segmentation of two-dimensional slices from multiple brain images. We believe the proposed approach can extend to three-dimensional, multi-slice segmentation of the hippocampus with little difficulty. In fact, the new ADNI 3 studies include hippocampal scans consisting of six to seven slices, and chromatic sampling has been shown to scale at a much slower rate than block sampling {{cite:58614183c5e77d911f2650bfd68d38f766aea1e7}}. In some cases, many more slices are needed, either for larger binary segmentation problems or for whole-brain parcellation. Toward this end, there exists in the literature a variety of techniques for MCMC in high-dimensional spaces, including when the model involves GMRFs {{cite:f937aec0b69f930ea2b54eac34d6cc89073a9335}}. Additional options are available if one is willing to move away from GMRFs. For instance, {{cite:1ccb5634d242ea568aabe835deb8930f23a0ccad}} use pre-defined regions to aggregate voxels and reduce the dimension of the spatial field underlying fMRI data while maintaining spatial dependence. {{cite:d7fe4a2e28411fcd81c021534055d5ea0ef58ed8}} mitigate the computational burden with a multi-resolution MCMC approach that successively refines resolutions in interesting areas of a brain image. {{cite:66546d3741cbd5624d9eb77ea74e7910d2fd852f}} review many large-scale Gaussian process techniques that have been proposed recently. Also available are posterior approximation methods such as variational Bayes, though such approximations can yield inferior results to those produced by MCMC {{cite:d9bafb873bf0db595a5cde43e7734c43d611552d}}, {{cite:f937aec0b69f930ea2b54eac34d6cc89073a9335}}. Concerning the extension from binary segmentation to whole-brain parcellation, an obvious modification to our proposed approach is to replace the binomial response with a multinomial one. This raises new challenges, not the least of which is computation. 
Given the aforementioned MCMC advances, though, we are optimistic that such a path is within reach.
From the magnetization scaling analysis we have derived {{formula:89992b1a-a5c1-46e3-972d-4d53a34cd79d}} . This value is small compared with the mean-field Hertz-Millis-Moriya prediction for either clean or disordered systems ({{formula:50d50630-3341-4048-aa61-6c11ede55db2}} ), which is usually enhanced by critical fluctuations {{cite:78daa6e1ac46d84008d9c959b4695904fdb4e4c9}}, {{cite:bc2a71ff3a679ba4b026f3087ec2f2d21e47a021}}, {{cite:da26a1a53103cee12da2b1368116f2e78b2883c1}}, {{cite:d1b0b7f68a383d86084b0c5a36322e4f01f61ac7}}. The dynamical critical exponent {{formula:6ec2fdc3-a9d8-4445-88eb-dd87c2f2ea1d}} can be deduced from {{formula:9acde642-adad-480b-8ad2-d5da67e87f7d}} if we assume {{formula:e98c4145-9c27-4e55-97b9-7250eb71a3f5}} . Such an assumption is reasonable, as the crystal structure of Ni{{formula:5cd9db0e-6817-423a-bfd6-8107274eb0f5}} Rh{{formula:fa888615-0b41-41f8-86b4-90806ce42443}} is face-centered cubic. {{table:2a9b7127-bdb1-4931-a24e-17bd7f55ffdf}}
The variances and covariances associated with the different TSC motifs, given in Table REF , can be used to calculate the two- and three-variable MIs within the Gaussian framework {{cite:fcf39aaf88c8104fe90f7fbe934b2ceaa6bcfa2c}}, {{cite:5b81c6b1732dbe1cd1c84c5030755a5691a64833}}, {{cite:cee13419796229fbddd5c9aaf025670c1d7a0f65}}, {{formula:d09094af-ff2d-48ff-9eab-fede9c6a3f95}}
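For jointly Gaussian variables, the two-variable MI follows directly from the covariance matrix via I(X;Y) = (1/2) ln(Σ_XX Σ_YY / det Σ), and the three-variable case is assembled from the same log-determinants. A minimal numpy sketch (the function name and example covariances are ours, not taken from Table REF):

```python
import numpy as np

def gaussian_mi(cov):
    """Two-variable mutual information (in nats) for a jointly Gaussian pair,
    given its 2x2 covariance matrix: I = 0.5 * ln(s_xx * s_yy / det(cov))."""
    return 0.5 * np.log(cov[0, 0] * cov[1, 1] / np.linalg.det(cov))

# Uncorrelated variables carry zero information about each other.
print(gaussian_mi(np.array([[1.0, 0.0], [0.0, 2.0]])))  # 0.0
# Correlation 0.8 gives I = -0.5 * ln(1 - 0.8**2) ≈ 0.511 nats.
print(round(gaussian_mi(np.array([[1.0, 0.8], [0.8, 1.0]])), 3))
```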
In this section, we show that the {{formula:570188b0-7bd6-4f80-b25a-e1f7903c42cc}}-Tuple Domination Problem and the Liar's Domination Problem are W[2]-hard. In {{cite:af70f9ba0becfd50c2c667d1440b94c4d0be5f30}}, it is proved that the {{formula:2626525d-24ba-4ad8-aa91-b2691f6a720e}}-domination problem is W[2]-hard for any recursive sets {{formula:a5765b47-250b-4657-8ce8-6fa20d4976f5}} and {{formula:0f3c6992-ce5d-457f-8daf-ec84521c4aed}}, which already implies hardness for {{formula:f8cb875b-4937-46b1-a03c-902bfa369627}}-tuple domination in general graphs. In this paper, however, we give a simpler, direct W[2]-hardness proof for {{formula:6487c2ee-48ec-4568-aa9f-a627c7786a8c}}-tuple domination in general graphs. To this end, we present standard parameterized m-reductions from the Domination Problem, which is known to be W[2]-complete {{cite:5ee1f76133980df923c3e618b0d0618603d16beb}}, to the {{formula:2f69075c-fc56-4b36-80c0-826d1f3700bf}}-Tuple Domination Problem and the Liar's Domination Problem, respectively.
In this paper we have argued that the thermodynamics of near-extremal Kerr geometries, under perturbations of both mass and angular momentum, is captured by the Jackiw-Teitelboim model. We considered changes in the extremal mass and angular momenta such that {{formula:1404e54b-c584-441c-a845-a7341893cf03}} for Kerr {{formula:f2e0e0ca-8ec8-477a-b465-edb3740a3109}} and {{formula:6907ce4b-dc82-4b5a-b0a6-5051b9f98206}} for the Myers-Perry Kerr black hole in {{formula:be5026de-83c1-40a0-87a3-edff9f340d3e}}, so that the resultant geometries are near extremal. We therefore focus on the thermodynamics of near-extremal Kerr black holes in the canonical ensemble, where a linear combination of ADM charges is held fixed. We do this by explicitly generalizing the near-horizon limits prevalent in the literature, approaching the outer horizon along in-going null rotating geodesics parametrized by specific angular momenta {{formula:5f232a36-e937-4191-8f07-777a01ed9925}} for Kerr {{formula:3124af60-5df0-427e-860b-7139b65b2eba}} and {{formula:c2c47bf0-a8e8-48df-aab0-f1f22ab78a47}} for MP Kerr {{formula:1c0b7665-6f8a-4936-9173-18ed81b81cec}} . We discover distinct IR geometries parametrized by the specific angular momenta used to obtain the near-horizon metric. These geometries, labelled by {{formula:18495474-3160-4a40-9850-82347a4635dd}} for {{formula:a37a6457-7d6f-4d7a-b9ff-b2e7c5ef050e}} Kerr {{formula:d236901a-4413-41bb-a379-829a243e6ed1}} and {{formula:4eadd650-5bf8-4bc4-bf43-b4462a0843bb}} for {{formula:6763166c-26e2-488e-8841-27fdeea21876}} MP Kerr {{formula:8d4d9c76-193d-4690-97b9-7e48d98ba1d5}} , cannot be related to each other by a simple change of their near-horizon coordinates {{formula:72bb1895-420d-434c-adb2-a654e9594c23}} (subsections REF and REF ). 
The near-horizon JT action obtained for such IR geometries, referred to as JT{{formula:0ce5d5ed-144a-4454-8892-0d4bf0ff2e4d}}, differs in the manner in which it is embedded in the IR of the theory dual to the higher-dimensional {{formula:c0ea23bf-69cc-42c2-811c-b587fd8adf5b}} geometry. We explicitly derive the small-temperature dependence of the excess mass and entropy of such black holes from the JT{{formula:148dd272-d024-48ce-b7a7-e56017431c64}} model in the near-horizon region obtained using these limits. Setting the specific angular momenta to zero from the start reproduces the similar analyses prevalent in the literature {{cite:1e2792a70100be52d29bf5105e4d2412a9f0f750}}, {{cite:e2858a84eda6fd05a620b1302fbfaf2dc3bc1276}}.
Determining all the congruent numbers is an old problem in Number Theory. There are various generalizations of the problem; see, for example, {{cite:4c4cc654b868fcd22094e0d64f69c01e911897f5}}, {{cite:1d3d4495d1d9d0413ab6ef955b864ed9efb6c3fd}}, {{cite:208130f0d6022e01455c7e5dfb0536e78095f302}}, {{cite:a0b209f002311472e57ade18d30bbdc8de723a35}}, {{cite:e4f900484b4608eeb357e62a8dafa7a92fe499b4}}, {{cite:da321d87ea23a7936ef3d2dcb7b3c761952ca9d8}}.
The broader aim is to determine whether gradient descent would be feasible in an operational context. For this to be the case, it would (at least) need to outperform the current state-of-the-art data assimilation technique used in forecasting centres worldwide, 4D-Var {{cite:f272316868d8298cc7f8f39d3d61e1ac11048756}}, and outperform the sequential Bayesian methods in development such as the ensemble Kalman filter (EnKF) {{cite:5b811164e1f23726657c00d0437b081f61863c4a}} and particle filters {{cite:a7428c15be7863094ae3a9566213529db1aa4c24}}. While a numerical comparison is beyond the scope of this paper, there are a number of conceptual advantages of gradient descent over these other methods. These differences are discussed in more detail by {{cite:33f4fa42544d22f57bcee2acf8970c05bb95b53d}} and {{cite:6493a27a63c8b73f6157ef15333e7d4710c1f186}}.
where {{formula:ca994d83-e3ec-465a-b8c9-21dee0c8fbd1}} is the Frobenius norm, or {{formula:2bb2ebf4-46b2-4e1c-b0a9-99aebbef3fff}} matrix norm, and {{formula:779c2a0d-a666-457e-b1c4-e5124e1a7861}} is the total number of samples. We return to the choice of {{formula:e6ab9a89-558b-41c2-ac95-7ccad4e4905a}} in the section below. Theorem 1 in {{cite:6df3d1820c5479f97f7f0df2ed890506dbca3321}} states that {{formula:3d0315f5-c6e6-49e4-bde8-2e8107e8704e}} is a DAG if and only if {{formula:55afb5f5-e348-479f-8755-510179dd59c9}}
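If, as in the NOTEARS line of work, the acyclicity condition takes the trace-exponential form h(W) = tr(exp(W ∘ W)) - d = 0 (an assumption on our part, since the formula is not reproduced here), it can be checked numerically in a few lines. In this sketch, `expm_taylor` is a naive stand-in for a proper matrix exponential and is adequate only for small matrices:

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Matrix exponential via a truncated Taylor series (fine for small A)."""
    E, P = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        P = P @ A / k
        E = E + P
    return E

def acyclicity(W):
    """h(W) = tr(exp(W ∘ W)) - d: zero exactly when W encodes a DAG."""
    return np.trace(expm_taylor(W * W)) - W.shape[0]

dag = np.array([[0.0, 1.5], [0.0, 0.0]])   # edge 0 -> 1, acyclic
cyc = np.array([[0.0, 1.0], [1.0, 0.0]])   # a 2-cycle
print(acyclicity(dag))       # 0.0
print(acyclicity(cyc) > 0)   # True
```

The point of this characterization is that h is smooth in W, so the DAG constraint can be enforced with continuous optimization rather than combinatorial search.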
are representative of transversely polarized EM-wave absorption by excitons and of Coulomb-energy losses for longitudinally polarized plasmon excitations, respectively {{cite:ea3d7bd267c481b60419d445237c6b688f3cc9dd}}. From Eqs. (REF ) and (REF ), {{formula:93cb02ce-f301-40eb-a59b-9bf6cb2cfd13}} and {{formula:d11cda67-ade5-4c38-8a6e-2fc873999dc5}} can be seen to be their respective resonance peak energies. The latter is an analogue of the interband plasmon resonance, situated between the two neighboring exciton subbands {{cite:9f82c989baa3203e55fc2db5019d20b8f7757719}}, {{cite:6ea8fca568906a2c2ab8f4fcf94368e0469ce713}}.
The TransE model {{cite:9fbdeec1e94fcd07cb326afbb92306295e483458}} is inspired by the intuition from Word2Vec {{cite:16a63a516c803bc5037ab67a2864fcf89a8e0bf4}}, {{cite:47d90c577da8dbcde265118128e5f7feed50d4e1}} that many predicates represent linear translations between entities in the latent embedding space, e.g., {{formula:94bcea35-46f3-4fc4-a377-f1cfb88dc718}}. TransE therefore learns low-dimensional, dense embedding vectors such that {{formula:5398eef7-7dba-43fa-9d90-1bd495668c0a}} for a true fact {{formula:68beb5e9-0513-4541-9450-5c009b0289cd}}. Its scoring function is defined accordingly as {{formula:c59d3450-9778-4ca8-8ff6-5ca8d7000fdb}}. RESCAL {{cite:5ec390c29ad3624b7f73262f6376df6534252a4d}} is a tensor-factorization-based method. It converts multi-relational graph data into a 3-D tensor whose first two modes index the entities and whose third mode indexes the predicates. A low-rank decomposition technique is employed by RESCAL to compute embedding vectors {{formula:eb4fa0cc-8866-475a-a4f6-256ea77e5cb2}} for the entities and embedding matrices {{formula:71b3d0a2-04e1-45e4-bae9-d7431e1f46a5}} for the predicates. Its scoring function is the bilinear product {{formula:4ff5fcfe-f235-4c75-82f4-affff80e5d39}}. DistMult {{cite:d347a2cd8cffd0e16cc50b5428cf0beed1f1faa2}} is also a bilinear model, based on RESCAL, in which each predicate is represented by a diagonal matrix rather than a full matrix. The neural tensor network (NTN) model {{cite:46ca64586aaef82a040e8243b55af5b7e1c324bc}} generalizes RESCAL's approach by combining traditional MLPs and bilinear operators to represent each relational fact.
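As a concrete illustration, the TransE and DistMult scoring functions reduce to a few lines of numpy; the embeddings below are toy values chosen by hand, not trained vectors:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE: a fact (h, r, t) is plausible when h + r ≈ t,
    scored as the negative translation distance."""
    return -float(np.linalg.norm(h + r - t))

def distmult_score(h, r, t):
    """DistMult: RESCAL's bilinear score with the relation matrix
    restricted to a diagonal, i.e. sum_i h_i * r_i * t_i."""
    return float(np.sum(h * r * t))

h, r, t = np.array([1.0, 2.0]), np.array([1.0, 1.0]), np.array([2.0, 3.0])
print(transe_score(h, r, t))    # zero translation distance: a maximal score
print(distmult_score(h, r, t))  # 8.0
```

During training, scores of observed facts are pushed above those of corrupted (negatively sampled) triples.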
The connection between probability theory and PDEs has been widely analysed since the introduction of the well-known Feynman-Kac formula ({{cite:a1c62a2aa3696d56cceb723efe08279cf2b6fe88}}), which expresses the solution of a large class of second-order PDEs of elliptic and parabolic type as the expectation of a diffusion process. Years later, in {{cite:7ade742c99b3ce9ee5127639f207cec97873f282}}, {{cite:a09a2a8f39879ab78ea9b49ee421b390e765b382}}, Pardoux and Peng generalized this approach to show the connection between Backward Stochastic Differential Equations (BSDEs) and a system of semi-linear PDEs, thereby proving the nonlinear Feynman-Kac formula in the Markovian setting. Concerning the non-Markovian scenario, we know from {{cite:ddae9e7447658b1525e4ad344a2a08fcd9a42f13}}, {{cite:ac5eed22f46cec5e2de7b8ec2de3ea18699daaf0}} that a nonlinear Feynman-Kac formula can be established, associating a path-dependent PDE to a non-Markovian BSDE. More recently, the introduction of horizontal and vertical derivatives of non-anticipative functionals on path spaces by Dupire {{cite:d4b5ac4088070639af05fb30152e20d37f1e6957}}, Cont {{cite:087c0f9d02ae87415f99c45e901deee2a2d8631d}} and Fournié {{cite:e3be302b065d5f23d7ab12591249fb5936c4931f}} facilitated the formulation of a new class of path-dependent PDEs and the introduction of so-called viscosity solutions; see {{cite:c6082080f5c6017cf8e522d68181765df9831b6a}}, {{cite:ed792859c3fe1d50512612e86db9fc89f4096d37}}, {{cite:ac5eed22f46cec5e2de7b8ec2de3ea18699daaf0}} for more details. In {{cite:b5ae15c8e9eff611f15ec50f653e705d6ff03685}}, path-dependent PDEs of Kolmogorov type are studied via a reformulation of the functional Itô calculus, and the relation between the functional and Banach-space approaches is discussed.
There are important questions that we did not address, such as what the reference {{formula:2a6da23d-2804-4833-b7cb-639a845384aa}} represents exactly and how we should describe it from the viewpoint of an observer falling into the black hole {{cite:3d4de40d23bebcfbbcc5f3f41dd2dde2ea61edc5}}, {{cite:1f4a7692a081b13d5baf150aeab68f9fca8be623}}. It could well be that a semi-classical spacetime, including notions such as a smooth horizon, can only be defined from a viewpoint that ignores the reference {{formula:0908a7b6-ab0c-464b-b5e8-3f66b180b94a}} . In candidate theories of quantum gravity, one could make more detailed speculations about {{formula:4054ad5c-0fa8-46c7-a84a-c059aa6234b4}} . For example, in holographic theories with a gravity/ensemble duality {{cite:5ab2a6cd73b7db5d94a2555486f9bbf3610617b3}}, {{cite:d2bf063eed50b037ab4bd47dce6832e73be3d13c}}, {{cite:1bd54484381ea0f8eaebb8e7b22968c8ef2d1922}}, {{formula:5c023c27-598e-4c53-b276-dabcc03df9b7}} could represent the distinct boundary Hamiltonians for an ensemble of quantum field theories; in the fuzzball paradigm, {{formula:a295b48a-79e2-44b9-a5ad-d09a25ec5ff3}} could represent the stringy and braney excitations that live on the horizon {{cite:6d2345edb364986380310247d3adf11c9a1a57ff}}, {{cite:a1cb1d793baed42f56d9fc17de6f0c974f73f0ad}}; and loop quantum gravity would assign {{formula:86b42785-e5fe-4ca6-b38e-74eb046aa11b}} to the quantised geometry at the Planck scale {{cite:6d3d3b6af23233145e5617f7bd7d5dc08e568995}}, {{cite:2a74400cbc325ab466bb87e94f05414c7b62571c}}, {{cite:cf4136b0c7156c31321be749f4ebe32048d107b8}}, {{cite:834fffdb18b074364a95ccc679562141aa758bfd}}. Black holes still have lots of insights to offer for years to come.
By structuring the corrections at a given perturbative order into numerically dominant and subdominant pieces, a substantial gain in terms of computational efficiency can be achieved. A natural approach to identify the dominant QCD contributions is through an expansion in the number of colours {{formula:c0e1bfb3-e2f1-48b0-8df5-09eb23dea32c}} that further facilitates a decomposition into separately gauge-invariant contributions. The leading-colour contributions to a given process take a particularly simple form at each order: virtual loop corrections contain only planar diagrams {{cite:384cda422cd52514b113020de0975bfd2129cfd6}}, and real radiation corrections are obtained by summing squares of colour-ordered Feynman amplitudes {{cite:8fb1bf7c5c8488cb77161951ae5dbdd501f17680}}, thereby discarding all interference contributions, rendering their evaluation considerably more efficient and simplifying their infrared subtraction in certain schemes {{cite:7814bf6a09c7c916299db359542fa77c01432935}}, {{cite:32b6e2f7b14cc9a549a0742099053465f060f78b}}.
Our techniques make use of local electron density differences (DD) and natural orbitals. Our method using natural orbitals (Method 2) showed faster convergence compared to the other method based on localized Kohn-Sham orbitals. Similarly, previous studies {{cite:45afde289f663a39a9b52740679c56167ac06b7a}}, {{cite:5ff862a790a289aa9ae640406c31fa1bfd93768d}}, {{cite:ba0b3f7e19193983ca733e2b02fa1eb770320663}} show that natural orbitals lead to a faster convergence of CAS energies to the full basis limit, as they capture anti-bonding virtual orbitals as opposed to the Rydberg continuum {{cite:aeea3550935be85b4224662b58bae5cd151cfa4d}}. While the construction of active spaces using CCSD natural orbitals scales polynomially in system size using classical computers, it is expensive for large systems in practice. In this situation, where a CCSD calculation with the full set of virtual DFT orbitals is prohibitive, one can choose a subset of orbitals from the sorted list obtained after implementation of DD method (Method 1) to perform a CCSD natural orbital calculation that fits within the available resources.
The basic statistical approach is to use treatment assignment ({{formula:8f518c92-1f21-44c8-a0f6-eb871daf5581}} ) as an instrumental variable for the potentially endogenous belief variable {{formula:50784892-9da6-4977-ab01-311dcbbfabd2}} {{cite:dba2cc24ef0a0033eeb9cf2dd35764fb47f47aeb}}, {{cite:14a200401dde3b908a6a397f7ac6a5125016684c}}, {{cite:5bd019c88b3312d6aba5777f2676af96ded1911e}}. By randomization, treatment status is credibly exogenous; yet, under mild assumptions, it affects beliefs, as we demonstrate in section REF . We will then specify a generic parametric model that identifies the average treatment effect (ATE) on beliefs. In subsection REF we will connect this model to our probit participation model (REF ) and show how the treatment-induced variation in post-intervention beliefs can be exploited to recover the true causal APE and to reliably test {{formula:64b9a588-b5fc-4e66-a47d-3171711d2ff0}} . The direct test of the hypothesis and an indirect test of the key identifying assumptions are specified in Section REF .
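The Wald/IV logic behind this identification strategy can be illustrated on synthetic data. The data-generating process below is a simplified linear stand-in for the paper's probit specification, and all variable names and coefficients are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n).astype(float)      # randomized treatment assignment
u = rng.normal(size=n)                       # unobserved confounder
b = 0.5 * z + 0.8 * u + rng.normal(size=n)   # endogenous post-treatment belief
y = 2.0 * b + u + rng.normal(size=n)         # outcome; true causal slope is 2

# OLS of y on b is biased upward because u moves both b and y ...
ols = np.cov(b, y)[0, 1] / np.var(b, ddof=1)
# ... while the Wald/IV estimator Cov(z, y) / Cov(z, b) recovers the slope.
iv = np.cov(z, y)[0, 1] / np.cov(z, b)[0, 1]
print(round(ols, 2), round(iv, 2))
```

Randomization of z is what makes the denominator a clean first-stage effect of assignment on beliefs, mirroring the exogeneity argument in the text.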
It is well known that 2d and QG point vortex dynamics have Hamiltonian structures which provide insight into the dynamical behavior of the vortices. This fact, together with the behavior of the trajectories seen here, leads us to think it likely that QG{{formula:c02b8b2a-8791-4bab-a16f-fa8e51c243cc}} point vortex dynamics is Hamiltonian. There is a large literature on Hamiltonian fluid dynamics, e.g., {{cite:1230b849495decdc13eca3bbbffea524f4592706}}. Based on the theoretical understanding of Hamiltonian fluid dynamics, we expect that the imposition of balance and the asymptotic expansion in QG{{formula:06243279-e635-4050-b759-431e9e9461a9}} will preserve the Hamiltonian structure present in the primitive equations. We also note that the balance formulation of {{cite:861f0d0bc6c1183c694fe88ac6ac0bde8180b67e}} has an explicit Hamiltonian structure. We leave the development of any potential Hamiltonian structure in QG{{formula:b024e09b-aafd-40f8-898e-ef7418da31e3}} point vortices for future work.