The involvement of the entorhinal-hippocampal complex – the most probable candidate structure underlying network-like cognitive maps and multi-scale navigation {{cite:09f5c5f37279194f59018400894875ca87987f17}}, {{cite:c870c59060ec72d17c6282f484a3b83071e5f538}}, {{cite:309964176b56eaaf3a9e95415bdffd0dcbaa27fc}}, {{cite:2dcacae9b371b41b464ce2d1b91e487c016f9129}}, {{cite:72112395844566a325d123bcedf0392afa818882}} – in language processing has already been demonstrated experimentally {{cite:8012c4be90c5327a47892035363e5595252965c7}}, {{cite:584a313403ac70e512794dc296b5e43aa7fad262}}. Our study further supports, in particular, the involvement of place cells as the nodes of the "language network" suggested in cognitive linguistics {{cite:6aeac762eb2d070598a044f0d4445b0688468004}}, {{cite:02132ac7c2c15fbe2a4731d041066a3b334adc9f}}, {{cite:717d645f64743e0aa97ce378da15ab446d834b6c}}, {{cite:1301d415ea27bb11ae59839086a40328b8b39dbf}}, {{cite:e13ae2bae03a91c064f1b6d2f12216c1a391389d}}, {{cite:5704c3db0c1c82dfd23ba280e6fd630666abe946}}, {{cite:0bc9f2bb1f81359df3abf09e4b6b11d8565331bc}}. Early language acquisition, especially, is driven by passive listening {{cite:54fb24bcfb52944f0f2cd1c749d4068608078ff4}} and implicit learning {{cite:3419fca002ef22feb7642ed4ff77b384d242aae3}}. Our model replicates learning by listening and therefore resembles a realistic scenario.
Imagenette Data Set: Imagenette is a subset of the well-known ImageNet ILSVRC 2012 data set {{cite:384d05b431a52165342db5abe71d9b136e8e03c7}} with 10 easily classified classes. Images were rescaled to the range {{formula:d0043e0c-1e6d-4490-8f95-33be9bf26855}} and normalized using the mean and standard deviation of the images in the training data set. The XResNet-18 {{cite:84391b884061ee8a509d247a98bbc5c5968b1a69}} network architecture with the Mish activation function was used to train a classifier on the Imagenette data set. Details of the training hyperparameters are given in our Supplementary Materials section.
{{table:d649c248-0fa7-48cd-9107-152ff85c15b1}}
Inspired by the Contrastive Language-Image Pretraining (CLIP) {{cite:09f5e733c2101a639190269adc3d629bf35058e2}} framework, we propose the Contrastive Surface-Image Pretraining (CSIP) framework, which utilizes dual encoders trained with a contrastive loss to encode imagery and above ground level (AGL) maps into similar latent-space representations. Given an image {{formula:0cd69e67-ec81-494f-b2c7-9f12de297230}} and AGL map {{formula:760f84a4-2b78-40c2-8e52-cdf94d0fa687}} pair, an image encoder network {{formula:a45c37ce-c1e0-4cf0-98f7-1ab09c21d762}}, and an AGL encoder network {{formula:c880a68d-a73d-4aa3-bc73-0192d0562aab}}, we encode the image and AGL map into feature vectors {{formula:55c886cc-e6ee-4dec-b3fc-11da10fe1732}} and {{formula:192cf921-650b-47bd-87b5-c7cb2280a62e}}, respectively. Each encoder network is composed of a backbone and a projection multi-layer perceptron (MLP) head. We then minimize the Normalized Temperature-scaled Cross Entropy (NT-Xent) loss {{cite:d160f7a84af45b8a71a12c2b054b82ec078241a4}} with a learned temperature parameter {{formula:0772c8d7-2871-440e-9651-339e15a85a95}}, a variant of the InfoNCE contrastive loss {{cite:aeb23948bc9f89e89e22059633ac060582557eec}}, described in Equation REF. By minimizing the NT-Xent loss we seek to maximize the cosine similarity between latent RGB and AGL vectors from the same pair relative to the other latent vectors in a minibatch. Within a minibatch, the other randomly sampled pairs serve as negative samples. An overview of the CSIP architecture is provided in Figure REF.
{{formula:ca16f871-906a-413c-ad68-12d113245f98}}
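For concreteness, the following is a minimal PyTorch sketch of a symmetric NT-Xent objective of this kind. The function name, tensor shapes, and the learnable log-temperature are our own illustrative assumptions; the paper's exact formulation is given in Equation REF above.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(img_emb: torch.Tensor, agl_emb: torch.Tensor,
                 log_temp: torch.Tensor) -> torch.Tensor:
    """Symmetric NT-Xent over a minibatch of (image, AGL) pairs.

    img_emb, agl_emb: (N, D) outputs of the projection MLP heads.
    log_temp: learnable scalar tensor; exponentiated to keep it positive.
    """
    # Cosine similarity = dot product of L2-normalized embeddings.
    img = F.normalize(img_emb, dim=-1)
    agl = F.normalize(agl_emb, dim=-1)
    logits = img @ agl.t() / torch.exp(log_temp)          # (N, N)
    # Matched pairs lie on the diagonal; all other pairs in the
    # minibatch act as negatives.
    targets = torch.arange(img.size(0), device=img.device)
    loss_i2a = F.cross_entropy(logits, targets)
    loss_a2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2a + loss_a2i)

# Usage: log_temp = torch.nn.Parameter(torch.zeros(()))
# loss = nt_xent_loss(f_img(x), f_agl(y), log_temp)
```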
Finally, we compared the Color Magnitude Diagrams (CMD)
of our disequilibrium and equilibrium models
against MKO observations of brown dwarfs by {{cite:a68b0c44567072dce91884472751462550065f4e}} and Spitzer data from {{cite:70ce6de98ad7a28b6d0423c4acc380acb7b383f5}} (Sect. REF ).
We noted that our disequilibrium chemistry models give a better fit
to photometry of mid to late T type brown dwarfs, supporting the importance of disequilibrium chemistry for T type dwarfs {{cite:7fcd74620bc09fc3d67add388108c356dd587364}}, {{cite:8abddb0cf8bce27dbdd491325c531b0ad478c509}}. In particular, only
disequilibrium models could fit the Spitzer colors of {{cite:70ce6de98ad7a28b6d0423c4acc380acb7b383f5}}. The
equilibrium models have more CH{{formula:52c5cdc6-473c-427c-80e5-f04e08e063cd}} and less CO than the observed atmospheres, so they
appear redder than the latter. On the other hand, disequilibrium chemistry increases the CO content
of the atmosphere and reduces its CH{{formula:c55c4ea5-f614-45eb-8115-73631222d1d9}} content (Fig. REF ), which
results in bluer colors, in agreement with the observations. A number of these target atmospheres have already been suggested
to be in disequilibrium. Previous fitting of spectra for some of these
targets suggested that a subsolar metallicity or high gravity may be responsible
for their different colors, but the authors did not explore the possibility of
disequilibrium chemistry for these targets. Refitting these observations with
equilibrium and disequilibrium chemistry models at different metallicities would be
of interest. Extending the Sonora grid to atmospheres of different metallicities
with disequilibrium chemistry will be part of future work.
We provide the results from the ImageNet {{cite:759127a7aebabd848c528e1abdeb04cb1753ec6a}} {{formula:bc5f5a03-72f4-4e17-9dab-6d9f82d812ad}} conditional model with our method in Fig. REF, and the results from LSUN Cat {{cite:83defd1b84ae7eac1f68f954415b298c70ca4cb2}} with our method in Fig. REF.
Also, in addition to Fig. REF in the main paper, we show more comparisons of unselected samples from the unconditional ImageNet {{cite:759127a7aebabd848c528e1abdeb04cb1753ec6a}} {{formula:9aafd576-7b7b-4eab-9ba9-cab77bfc9bb6}} model in Fig. REF. We report each result with the same seed for comparison. These results show that applying our method improves fine details, e.g., the texture of food, human faces, or the details of animals.
{{figure:f27350dc-2f85-4f8f-b89f-cbb638a333ca}}{{figure:8f2b6cf8-c47e-49ce-8f70-557197a661d0}}{{figure:8d1b3ce1-5586-4d2b-8d51-6bbbe61eddeb}}{{figure:08dd3b0a-03ac-470a-8b89-2beee729eaf2}}{{figure:e6ca4345-2306-4a8a-9b0c-6d81c85b1a05}}{{figure:b2b745e8-414e-475b-8d2c-c69dadc2b3d5}}
In the counting of pairs, the weighting scheme is applied to both the data and the random samples (except that we use a 10x larger {{formula:6f579d92-b85b-4e56-bdc4-c9762088511f}} in the randoms). This treatment differs from that adopted in {{cite:742c0faa4feed361506d3c4e26b581acf990e5b5}} (where the authors fixed the weights of the randoms to 1). We do this to take into consideration the possible correlations between nearby objects induced by the smoothing, although we tested and found no significant changes to the measured 2pCFs.
As shown in our previous work {{cite:af31ec7f1caffb39cb83ca3406dd33dcb552516b}}, the simulations shown in Figure 2 were implemented as follows. Numerical parameters were set to the known properties of aqueous glycerin, air, and sand ({{formula:164c4104-1b97-4aff-8524-87a86e9c75ba}}, {{formula:24d12279-1f04-4103-b8b5-78a63a889281}} to {{formula:5647ca69-07cc-4953-8a92-d14d6d643035}}, {{formula:983aa13b-247f-461b-b627-4e725bd09332}}, {{formula:a0438066-e89e-449a-91b6-be544d95beea}}, {{formula:b7161d48-6b69-4e9e-8d63-8d0779a17f0f}}), the air-glycerin surface tension {{formula:f89e45a8-48f5-43c2-b94a-0898b9587fb9}}, and the sand grain radius {{formula:55357fac-4432-406c-ab83-7ed1bdf4c70b}}. To mimic the existence of sub-REV-scale heterogeneity, the solid fraction was initialized as a normally distributed field {{formula:137e2268-23e0-4b14-ba40-00d881784a05}}. To account for the non-reversible and compressive nature of the experiments, the deformable solid was modeled as a Herschel-Bulkley-Quemada plastic with a density-normalized yield stress of {{formula:dab57371-9ccb-499e-96df-6e3fdc84ff92}} {{cite:719ffe59a86249faa5ce591d155ef3afc0d2ca25}}, {{cite:343d43b1d2b534560587340db7f34ff3a7655a09}}, {{cite:af31ec7f1caffb39cb83ca3406dd33dcb552516b}}. Permeability was modeled as a function of porosity through the Kozeny-Carman relation: {{formula:96ab3812-8815-43ca-9de4-76d6ae66808d}} with {{formula:16d7121b-ed0d-48b4-93d8-527b216186b6}}. These two complementary models couple the solid rheology and absolute permeability to the porosity, making the solid harder to deform whenever it is compressed and easier to flow through whenever it expands (and vice versa). Relative permeabilities were calculated through the van Genuchten model {{cite:d86bb4e1d0ab83e2c5047eb142c1c6bf0eacc4c7}} with wettability parameter {{formula:33d38e0d-e6b4-4756-8726-bb165abc993b}}; capillary effects were assumed negligible.
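As an illustration of the porosity-permeability coupling, the sketch below assumes the standard Kozeny-Carman form k = k0 φ³/(1−φ)²; the paper's exact prefactor and constants sit in the formula placeholders above, so k0 here is a hypothetical reference permeability.

```python
import numpy as np

def kozeny_carman(phi, k0=1e-11):
    """Permeability from porosity via k = k0 * phi^3 / (1 - phi)^2.

    k0 (m^2) is a hypothetical reference permeability; the paper's exact
    prefactor is contained in the formula placeholders above.
    """
    phi = np.asarray(phi, dtype=float)
    return k0 * phi**3 / (1.0 - phi)**2

# Compaction lowers phi and hence k; dilation raises both, which is the
# two-way coupling described in the text.
```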
We use CuBERT, CodeBERT, and GraphCodeBERT as encoders of the input sequences. For all of them, checkpoints for input length 512 are available. GraphCodeBERT allows an additional 128 tokens for data-flow information. For CuBERT, a checkpoint for length 1024 is also available. We experiment with all of these. For span prediction, the token encodings are followed by a dropout layer and a single-layer classifier.
As a non-pre-trained baseline, we train a Transformer encoder from scratch with input of length 1024 for span prediction.
For relevance classification, we finetuned the CuBERT model for length 512 with a dropout layer followed by two feedforward layers. We used the AdamW optimizer {{cite:c8cbaca07b0e69e946b3a0a831504fa45f6b3b29}} and selected learning rates through initial experimentation. Model checkpoints were selected by the lowest validation loss. The Appendix provides the complete details of our training setup.
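A rough sketch of such a per-token span-prediction head is given below; this is a hypothetical PyTorch illustration, and the hidden size, label count, and dropout probability are assumptions not specified in the text.

```python
import torch.nn as nn

class SpanHead(nn.Module):
    """Per-token classifier: dropout followed by a single linear layer.

    hidden_size, num_labels, and p_drop are illustrative; they are not
    specified in the text.
    """
    def __init__(self, hidden_size=1024, num_labels=2, p_drop=0.1):
        super().__init__()
        self.dropout = nn.Dropout(p_drop)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, token_encodings):
        # token_encodings: (batch, seq_len, hidden_size) from the encoder.
        return self.classifier(self.dropout(token_encodings))
```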
{{table:0b05460e-e53e-4aa3-a429-99ff6cb82013}}
In recent years, distributed optimization techniques over a multi-agent network have attracted considerable attention, since they play an essential role in engineering problems in distributed control {{cite:56368a4e3a619fb2e707ced21eda384885b14b37}}, {{cite:d741e35956a54a5ae4c7552ae073d9f2a7e2e87f}}, signal processing {{cite:7e07ea163911ecd0a0846641b5280d3c13fe34ad}}, {{cite:09f551b202b59006c25072f813d20e132536f0fc}} and machine learning {{cite:3f39217c1735312eec0780cdb445330e85679907}}, {{cite:87f5a768bb7052596823fb192555e9f7cf9d653b}}, {{cite:41850987ad06d840fdf96acc59521948fd6562bc}}. In distributed optimization, each agent has its own local cost function, and the agents collaboratively seek a minimizer of the sum of those local cost functions, with each agent using only information from its neighboring agents; the neighborhood structure is described by a graph, which may be undirected or directed.
In recent years, the convergence of AI and neuroscience has brought new ideas to both fields, leading to interesting results. One such idea, based on the concept of TD learning, gave new insights for interpreting recordings of place cells in a moving animal during a spatial navigation task {{cite:a8857ff32d1098224f51b88c55c5a610effc2881}}. In that study, the SR model, which decomposes the probability of state transitions from the TD algorithm, showed that the learned pattern of state transitions is comparable to the pattern recorded in the place cells of a moving animal. This provided the rationale for the predictive map theory of the hippocampus. In this article, I attempted to elaborate the SR model into a biologically plausible model and derived heterogeneous synaptic plasticity rules from the SR model.
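As a rough illustration of the SR idea discussed above (not the paper's biologically plausible formulation), a tabular TD(0) update of the successor matrix might look as follows; the state indices, learning rate, and discount factor are illustrative.

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.95):
    """One TD(0) update of the successor representation matrix M.

    M is (n_states, n_states); row M[s] estimates the expected discounted
    future occupancy of every state when starting from state s.
    """
    n_states = M.shape[0]
    one_hot = np.eye(n_states)[s]
    # TD error on occupancy: observed one-step occupancy plus the
    # discounted bootstrap from the successor state.
    td_error = one_hot + gamma * M[s_next] - M[s]
    M[s] += alpha * td_error
    return M
```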
GSD vs. Multi-Pass Models
Bayesian MCDO {{cite:2cba2c91645e63d70ea3864e31e511bbd1524c82}} and Deep Ensembles {{cite:bd4ed2603c36bbfc9aae5c9600d4a3f1d7902764}} are considered the current state-of-the-art methods for multi-pass calibration. Bayesian MCDO requires multiple passes with dropout during training and inference in order to achieve stronger calibration. Deep Ensembles requires {{formula:c43d671c-71a7-4898-8042-ddb3aa3c3344}} times the number of parameters of the single model it ensembles, where {{formula:1f937b50-3063-49b4-bf35-78e1b5cf7b3c}} is the number of models ensembled. The obvious disadvantage of Deep Ensembles is that it requires {{formula:059a01d7-652a-4697-b89b-e4470f781e58}} times as long to train and run inference as its base model. While no model currently beats Deep Ensembles in accuracy on both clean and corrupted data, we have shown that our model has stronger calibration at certain corruption severities (Tab. REF and REF ). Bayesian MCDO has been shown to have stronger calibration than the same model trained without dropout, but it tends to suffer large accuracy drops and, even with many passes, is not as well calibrated as other single-pass models or Deep Ensembles. Our model empirically suffers minimal accuracy drops compared to its backbone and in some conditions even achieves stronger accuracy on corrupted data (Tab. REF and REF ).
We also note that system identification is significantly more challenging if greater numbers of candidate bases are retained. In this case, stepwise regression is robust, as it iteratively eliminates inactive operators instead of attempting to find a solution in "one shot", thus mollifying the problem to a degree. Given the larger number of operator bases, the loss would be small and remain flat for several iterations before the model becomes underfit. However, VSI also suffers from missing key operators, similar to related methods such as SINDy {{cite:fd09ed9e881876f36124cb48776b71e70a20392a}}. In this case, VSI would find the "closest" governing system, where the missing bases are compensated by a larger number of the other basis terms being considered. Difficulties may then arise from non-unique combinations and non-sparse solutions. We also anticipate that the loss will show many jumps, indicating that operators that do contribute to the model have been eliminated in those iterations.
Qualitative Comparison: The 4× and 8× qualitative results are depicted in Fig. 4. The top two rows show the 4× visual results. We observe that Bicubic interpolation produces over-smoothed results. SRGAN {{cite:e23d2e5ef0e8c4f0a0ee13d05796fd5da273aadf}} enhances the SR results relative to Bicubic interpolation, but it still fails to generate fine details, especially in facial components such as the eyes and mouth. ESRGAN {{cite:a2323c043e5f95933e744b76e75c82cb374000de}} clearly produces overly smoothed visual results and misses specific textures. In contrast, the 4× SR images produced by our method retain face-specific details and are faithful to their HR counterparts.
Several previous studies {{cite:d17d768fe8f432877e87b0c687a08e5e66b8e455}}, {{cite:995456811e4f49bdb0e064f011762483c68b6716}}, {{cite:7bc148f679018cedc6715392cae12389d471cf86}}, {{cite:1743680d6b97722352de6626550452d292e0ba1d}}, {{cite:bdfe8931b1686a971a996c41179806f5103d71aa}} have achieved promising results on self-supervised visual representation learning in the image domain. State-of-the-art methods are mostly based on contrastive learning under the instance discrimination pretext task {{cite:14b46ba414c676df57cbf16fcb63beb9542cda62}}. These methods aim to increase the feature similarity between positive pairs while minimizing it between negative pairs. To achieve this, two randomly augmented views of the same image (i.e., the result of applying random augmentation twice to the same image) are regarded as a positive pair, and augmented views of different images are regarded as negative pairs. Although good performance has been obtained on large-scale benchmark datasets such as ImageNet {{cite:d658d23bcdbf95f5b57f01fcb57dd451bd806f8a}} and COCO {{cite:b2bde2b8fa818b0bcc8e202aa05783962ebc335f}}, challenges remain in applications where all images are visually similar to each other (see Fig. REF ). For these applications, the high similarity among samples becomes an obstacle to optimizing the embedding space for contrastive learning, as the visual gap between negative pairs is quite small. Due to this property, we refer to these applications as contrast-agnostic. One well-known example of such applications is medical image analysis: many medical datasets consist of images showing certain body parts captured by specific medical instruments, resulting in visually similar images with low variance across different instances.
Several recent works have proposed data-driven path planning models {{cite:fe4b1952ab4c474b24158d1541be8881091b9fd5}}, {{cite:17be8f45ad422d1eea0c5223d0ada37751ce6a2f}}, {{cite:61f9bc4426e53e3c4646468eba6b9884d29875d0}}, {{cite:080ba423a401a0755cb6f535c3866dc6d3330dff}}. Similar to how classical algorithms like {{cite:5237812388f158a0f4a8988f8984bdaa69899ff3}} move outward from the goal one cell at a time, predicting distances iteratively based on the obstacles in the map, current learning-based spatial planning models propagate distance values only within a local neighborhood using convolutional networks. This kind of local value propagation requires {{formula:a32ee620-406a-4545-bf8f-362511b24dce}} iterations, where {{formula:c7ed3476-4258-4d93-bc0e-79e0fd6c2913}} is the shortest-path distance between two cells. For two corner cells in a map of size {{formula:d553e8f5-b617-49e1-ab1d-64a2a36733c6}} , {{formula:b6c24e3e-2225-495b-8777-2f3023dca93e}} can vary from {{formula:e58e9f1c-9d72-4434-88e0-4987188b0366}} to {{formula:63935ca8-23a5-4303-851a-34eb3605b392}} . In theory, however, optimal paths can be computed much more efficiently, with a total number of iterations on the order of the number of obstacles rather than the map size. For instance, consider two points with no obstacle between them: an efficient planner could directly connect them with an interpolated distance. This is possible only if the model can perform long-range reasoning in the obstacle space, which is a challenge.
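To make the inefficiency concrete, here is a small sketch (our own illustration, not any of the cited planners) of local value propagation on a 4-connected grid: each sweep only pulls distance values from immediate neighbors, so roughly L sweeps are needed before the goal's value reaches a cell at shortest-path distance L.

```python
import numpy as np

def propagate_distances(obstacles, goal, max_sweeps=10_000):
    """Local value propagation for shortest-path distances on a grid.

    obstacles: boolean (H, W) array; goal: (row, col) tuple. Each sweep
    pulls values from 4-connected neighbors only, so the number of sweeps
    needed grows with the shortest-path distance L, as discussed above.
    """
    dist = np.full(obstacles.shape, np.inf)
    dist[goal] = 0.0
    for sweep in range(max_sweeps):
        prev = dist.copy()
        padded = np.pad(dist, 1, constant_values=np.inf)
        best_neighbor = np.minimum.reduce([
            padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
            padded[1:-1, :-2], padded[1:-1, 2:],   # left, right
        ])
        dist = np.minimum(dist, best_neighbor + 1.0)
        dist[obstacles] = np.inf
        dist[goal] = 0.0
        if np.array_equal(prev, dist):
            return dist, sweep  # converged after roughly L sweeps
    return dist, max_sweeps
```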
MIDeepSeg was also compared with several existing interactive segmentation methods. In 2D cases, in addition to traditional methods like Graph Cuts {{cite:f99cb775214a4cc42aff78c68957cd58be0a0593}}, Random Walks {{cite:548d3891b2cac70ae652653fa811f3bba1ffedf4}} and SlicSeg {{cite:c8fcf2316759d6cd6093c835cf65efeae61125fd}}, we also compared recent deep learning-based methods including DeepIGeoS {{cite:bba3333eaaeda98aa53411b5caf8c345ba77f57e}}, DIOS {{cite:f9dfeebb837ce1f9d8aecf2900346f40bb0635f0}}, DeepGrabCut {{cite:9466b70f7a3c6a0b80480887b3b5ecb1498a7c97}} and DEXTR {{cite:bb52f14d423bdd5af6376054b520ccc661a83968}}, which used the same 2D network structure as our 2D version of MIDeepSeg. For 3D segmentation, we compared MIDeepSeg with ITK-SNAP {{cite:2e6486d5931700df9a2f17ec7203350f48a8684a}} and 3D Graph Cuts {{cite:f99cb775214a4cc42aff78c68957cd58be0a0593}}, as well as with 3D versions of DeepIGeoS {{cite:bba3333eaaeda98aa53411b5caf8c345ba77f57e}}, DIOS {{cite:f9dfeebb837ce1f9d8aecf2900346f40bb0635f0}}, DeepGrabCut {{cite:9466b70f7a3c6a0b80480887b3b5ecb1498a7c97}} and DEXTR {{cite:bb52f14d423bdd5af6376054b520ccc661a83968}} that used the same 3D network as MIDeepSeg. Graph Cuts, SlicSeg, Random Walks, DeepIGeoS and DIOS allow the user to refine the results multiple times. DeepGrabCut only allows the user to draw a bounding box at the beginning and does not support further interactions for refinement. DEXTR takes extreme points as the user interactions and allows the user to refine the results once. Graph Cuts, SlicSeg, Random Walks, and ITK-SNAP are traditional interactive segmentation methods that do not require training on an annotated dataset and generalize well. In contrast, DeepIGeoS, DIOS, DeepGrabCut, and DEXTR are deep learning-based methods that require labeled data for training, and DeepIGeoS cannot deal with unseen objects. Two users respectively used these interactive frameworks to segment each test image until the result was visually acceptable, and we report the average results achieved by the two users. The segmentation results were compared with the ground-truth labels, which were manually annotated by experienced radiologists. For quantitative evaluation, we used the Dice similarity coefficient and the average symmetric surface distance (ASSD).
{{formula:6fa34859-6c3d-4767-ac89-0cdc4fe38842}}
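As a quick reference, a minimal Dice computation for binary masks might look like the sketch below; the formula placeholder above presumably gives the paper's exact definitions, and the epsilon guard is our own addition.

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
```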
On the BraTS'20 unseen test dataset, our method achieved higher Dice values than on the validation set, obtaining an evaluation score that was equal fifth highest and placed tenth in the overall challenge ranking. As an indirect comparison with existing methods using different validation datasets, our method achieved a Dice value of 0.89 (on the BraTS'20 validation dataset) for the WT class, comparable to the top-ranking methods of BraTS'17-19 ({{formula:edd3ac6e-804a-403a-a1af-c5857ced1aef}} 0.90) {{cite:56ff3c1dcde731ec9885803eb215d4c3156af0aa}}, {{cite:f7f8d94c67f8f0671081a9373835f436f65dcf9f}}, {{cite:effcf367fccaceb4f374ade84472c53d199d62c2}}, {{cite:9240b8929de92c65fd27b4eab62d3ad013d9a276}}, {{cite:479cb62b60a88f52de935a3019e97c27d31f84ba}}, and a Dice value of 0.77 for the ET class, on par with the top-ranking methods of BraTS'17 ({{formula:a9eacfbd-9925-456c-896e-7d54805099d3}} 0.78) {{cite:56ff3c1dcde731ec9885803eb215d4c3156af0aa}}, {{cite:f7f8d94c67f8f0671081a9373835f436f65dcf9f}}. It is worth noting that even though the previous BraTS challenges used data from different subjects, this indirect comparison could be quite useful in determining the potential of our method, since it involves the same task and type of data.
We note that the lack of new detections is not unexpected. Firstly, signals from binaries with the inclinations targeted by our banks are significantly weaker than those from the face-on binaries typically detected by existing searches. Secondly, in contrast to existing searches, we have performed a two-detector search using only Advanced LIGO Hanford and Advanced LIGO Livingston data, reducing the loudness of the signals across the detector network. We have checked that these two effects lead to an optimally observable volume {{formula:c997164e-08e4-4252-842a-1eb9797766fd}} times smaller than that of searches targeting face-on signals. The fact that existing searches have reported only three events to date within our targeted mass range (GW190521, GW200220 and GW190426) {{cite:d9cb1b19e2960a1499220ba118f15f149adf6a79}} is perfectly consistent with our results. In addition, the inclusion of a third – Virgo – detector in our search will improve our ability to discriminate glitches from true signals, further increasing our sensitivity. We leave the inclusion of additional detectors to future work, which will primarily involve extending the two-detector ranking statistic to a multi-detector one.
Task specifications (and non-Markovian reward functions) are naturally separable, so an automaton (or reward machine) allows an optimal policy to be found more efficiently by breaking the complex task down into a sequence of Markovian subtasks. Nevertheless, recent work has recognised that this automaton is usually a priori unknown, so it must be learnt on the fly before an optimal policy is found (see the Related research section). We concentrate purely on the first part of this process – how can one best learn the task automaton (TA) representing a task specification in sparse, non-Markovian environments? With this TA, one can leverage work on temporal logic planning to synthesise an optimal policy using RL or other methods {{cite:d27b631025cc6765dabbeebf042a02cb11b17e17}}, {{cite:32f2606576a48ee89f802d74b8f92c8fd11bc4a3}}. The learnt TA must be as small as possible and have as little environmental bias as possible.
We provide a probabilistic interpretation of attention as a generative model for queries and values through a set of memory units. Under this formulation, traditional attention in transformers {{cite:4589594e0684fb5479af533c27dbbccb74130d9d}} reduces to the special case of maximum a posteriori (MAP) inference of values given queries, assuming Gaussian likelihoods. Using Bayesian inference, we provide a systematic approach for adapting keys online as a local maximum-likelihood (ML) update of the corresponding model parameters. Our formulation also allows us to fix the values of certain units and propagate their influence to other units online by conditioning on the fixed unit values. The following sections provide more details on the probabilistic model.
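As a toy illustration of this Gaussian/MAP view before those details: with isotropic Gaussian likelihoods, a uniform prior over units, and unit-norm keys, the posterior over memory units reduces to a softmax of scaled dot products. The sketch below is our own illustration, not the paper's exact model; sigma and the prior are assumptions.

```python
import numpy as np

def map_attention(query, keys, values, sigma=1.0):
    """Posterior-mean readout of values for one query.

    Each memory unit m is a mixture component with Gaussian likelihood
    p(q | m) ~ N(k_m, sigma^2 I) and a uniform prior. With unit-norm keys,
    the posterior over units reduces to a softmax of scaled dot products,
    recovering standard attention weights.
    """
    sq_dist = np.sum((query[None, :] - keys) ** 2, axis=1)
    log_post = -sq_dist / (2.0 * sigma ** 2)
    post = np.exp(log_post - log_post.max())   # stable softmax
    post /= post.sum()
    return post @ values                        # expected value given q
```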
Denote by {{formula:72782f45-d138-405c-9907-84a5c1689c6f}} the communication time needed for each master-worker exchange. For simplicity, we assume that {{formula:416c070f-4703-42e8-a9ca-8d9a994006b9}} is fixed and the same for all nodes. If the time needed for computing one update is {{formula:5fd5965f-e68c-4ad4-a6b9-85004f72a3ce}} , then the total time needed by the distributed algorithm {{formula:7b9d8b70-68b0-4925-a9e2-4d52f72b0985}} could be higher than that of sequential SGD {{formula:13c62260-c73b-4225-b994-6daab1a0f73c}} . In such cases, existing distributed algorithms increase the local batch size so that {{formula:57131a01-d29e-42ac-9732-824cc0183b19}} increases, resulting in lower stochastic gradient variance and allowing a higher learning rate to be used, hence a better convergence rate. This introduces a trade-off between computational efficiency and sample efficiency. Increasing the batch size by a factor of {{formula:7cc264fc-1e0c-40bd-bfb7-9ec419e02ae9}} increases the time needed for local computation by {{formula:2326802b-3bce-4770-898d-e8d22f8317a5}} and reduces the variance proportionally to {{formula:c52edb2e-a7fc-4efd-8db3-08ef4391b5f8}} {{cite:5da1994ba4ea2afd1230321bc7af42d808448768}}, so a higher learning rate can be used. However, there is a limit on the size of the learning rate; in other words, maximizing the learning speed with respect to the learning rate and the batch size has a global solution. This maximum learning speed can be improved using DPSGD, which performs {{formula:43dd1834-a787-45ef-95da-8404e5a2c627}} times fewer communication steps. For mini-batch SGD with mini-batch size {{formula:8d74bc6c-5eda-4f33-9aad-ac473ff70e9d}} , the convergence rate can be written as {{formula:93d5cb32-5650-4a00-b9b6-ac057c6de9cd}} . Since the total number of examples examined is {{formula:d95582ca-0245-4d5c-8fbd-fc1e9e6899dc}} and there is only a {{formula:76790cc1-f017-4967-be50-4f66d852293a}} -fold improvement, the convergence speed degrades as the mini-batch size increases. The convergence rate of DPSGD with mini-batch {{formula:3f3f4775-0e92-48c4-98e8-8027d1bdb804}} can be easily deduced from Theorem 1 as {{formula:44018ac7-44c6-459a-8686-5c460ec41c8d}} . Hence, DPSGD attains a {{formula:b56ba615-f4b7-43cb-8ee4-8d846415f552}} better convergence rate than mini-batch SGD and a {{formula:a56b9c08-228d-492f-8bf3-22cf8be73782}} better convergence rate than standard asynchronous SGD, with {{formula:d8097475-160a-4977-b411-d2b6ab9062d6}} times less communication. These improvements are studied in the following.
(4) Applications. Currently, the only direct applications of FSS (I2S and S2I) are entertainment and law enforcement {{cite:dab2fc63e01dad1bd6a65798383debe9e3e12504}}, {{cite:4dabdd942ae811145568be5cd04ea8e49e0c4f52}}.
With the development of FSS techniques, many other promising applications could also be implicitly or explicitly facilitated by FSS research, such as art design and animation production.
In addition to these industry applications, we believe that FSS methods and ideas could also benefit other fields of research. For example, sketches could be used to assist image resizing {{cite:ee4b4bba64581bce5c4789e749f9718e51fa8bdc}}, super-resolution {{cite:99b7c325e61242d1636d8a682d1577edb8faf916}}, etc.
Further, sketches usually contain the most conspicuous information of an image and can therefore be considered compressed versions of RGB images.
This characteristic makes sketches useful for the image compression task.
Besides, the S2I task can be considered a specific case of image super-resolution in a broad sense, because both tasks aim to reconstruct detailed RGB images from the given inputs.
The only difference is that the input of S2I is high-frequency information, while that of the standard super-resolution task is the low-frequency information of the original image.
First of all, we extend the analysis of the {{formula:bc191cde-bf3e-4a32-a8a4-8aa58de6440d}} , already shown in Figure REF , to the results reported in {{cite:2c8aa9cf9d9a216614e1b81d3d922b6680823a4b}} in Figure REF . Here we can see that, of all the countermeasures proposed (for CIFAR-10 data), only {{formula:0262791d-da51-4cb7-8088-49d00455c6cf}} and {{formula:e8154b08-d55d-4d92-99ce-d90459a0e899}} would be considered by the defender, again depending on the strategy of the adversary. If the adversary chooses to attack in less than {{formula:9b43d0e1-b5d2-4dc5-b76b-b556097b8fb4}} of the cases, or {{formula:8391594e-1e4b-4a9d-8b7d-1ee499330132}} , the undefended model would be strictly preferable. The proposed ideal solution from {{cite:2c8aa9cf9d9a216614e1b81d3d922b6680823a4b}}, {{formula:ce6dee6b-b7f3-4423-9053-e8a9ef9d869f}} , is only optimal if the adversary chooses to attack with a probability of more than {{formula:ed472228-8f82-4a14-910e-acd265bcba7b}} (or if {{formula:ac17314c-0e08-47a4-b700-bdf3c3a2739d}} ). In between, the countermeasure with {{formula:5ad14ef8-ba92-4318-b882-814e99b2b36f}} is the defender's optimal strategy, and all other strategies ({{formula:f5a99d39-f44f-4ff8-b8d3-1cb86b0eba87}} ) are strictly dominated and thus never optimal (under assumption REF ).
A similar figure was also shown in {{cite:3b29035cd1f4074024c65d92d2603c22544b0d13}}, but with a completely different focus and without a theoretical foundation. By analysing our advanced adversarial classification game with the help of {{formula:d09a4454-9fca-4050-8028-49d9acdf9038}} and {{formula:3f34f493-a772-4d43-84ca-151a9c61c7b3}} , we place such figures on a solid theoretical footing and encourage researchers to incorporate this evaluation method when reporting results about new attack methods or countermeasures.
Assumption REF (A4) gives the following analogue of Lemma 5.7.8 of {{cite:b27d91d5b59172726f0365ee10b0cb5f597c5017}}.
At SBC ({{formula:6262808e-f90f-423f-8a4d-bf3df86d6795}} ), the unstable modes that degenerate owing to spherical homogeneity emerge simultaneously from the static, thermally conductive state above the critical Grashof number {{formula:8833a9bd-9f56-4c4a-8882-6be3062293f9}} .
Owing to the nonlinear interactions among these modes, highly symmetric steady states that are invariant under a set of transformations of point groups, such as axisymmetric or polyhedral patterns, may bifurcate directly from the static state {{cite:19cc79493ceb19661ff9addc29fc1c5db7b08d0b}}, {{cite:b40bf69f8aafe0b9d77c30ee1d881651c09802fe}}, {{cite:c040bfc51e1806afaf171ce337b41c09b6b252a4}}.
For the top view (Fig.REF (e)), it must be considered that the 3-fold state achieved at the point R additionally satisfies the mirror symmetry with respect to the {{formula:4df787c1-c099-4e53-8e2c-e25df0c5b64c}} plane, {{formula:42d54c2c-a2d2-4c67-83c0-3dd18553e5f1}} .
However, the equatorial view (Fig.REF (f)) suggests that the mirror symmetry is not practically satisfied by the 3-fold state obtained at R, where the Grashof number is significantly larger than {{formula:5ca63855-077b-4da7-b161-c5ff3ea5d46d}} .
The weak breaking of the mirror symmetry is associated with the nonzero angular momentum around the polar axis, as discussed later.
This was also clear in the equatorial section of the flow structure of the 3-fold state at R, as shown in Fig.REF .
{{figure:b2a11fac-b5d7-4a19-b0ff-d9afc224a43f}}{{figure:2f8f7399-096d-4010-8b2a-613cae8fb77a}}
By an application of a concentration inequality for Lipschitz functions of Gaussian vectors, we have for any {{formula:a0d268fb-ab9c-458d-a43c-e7b82239a2d4}} (see Example 2.28 of {{cite:c6bf4948f6d3f52680b23bd905cc1012f1e61637}}):
{{formula:14f02842-4b19-4597-9423-976d19d8469c}}
There are two main branches of adjoint methods in use today: the
differentiate-then-discretize approach, which derives the continuous
adjoint equations from the state equations and discretizes them separately,
and the discretize-then-differentiate approach, which discretizes the state
equations and derives a discrete adjoint that is consistent with the discrete
state equations. The first approach is usually preferred because it allows
different choices of numerical schemes or even meshes for the state
and adjoint equations. This can prove very advantageous, for example, when
using adaptively refined meshes, since the state and adjoint variables will
have very different areas of interest. A downside to the
differentiate-then-discretize approach is that, at a discrete level,
the obtained gradient is not guaranteed to be consistent, and typically gives rise
to numerical instabilities. On the other hand, in the
discretize-then-differentiate approach, the adjoint equations are
obtained from the discrete state equations, and the gradient is guaranteed to
be consistent at the discrete level. For a more general discussion on the
trade-offs involved see {{cite:fcdbfde85de8cc21f18964f73b8a1506c03f4410}}, {{cite:124822dc23667ec2d214364586f1e750a731eeaa}}.
Fast and numerically stable uncertainty propagation has numerous applications {{cite:7486664e7618dbd3e9aed820de6be8b63f1011dd}}. We could use it for selecting the next best view {{cite:32fc09b4cfed70b66a7c2c9148a3e86175f1d3bb}} from a large collection of images {{cite:e75f27cc35fd0b6074037e0425bc97d896747afa}}, {{cite:213ce2c683756968b18ae02e5cfbbb2f0afe37c8}}, for detecting wrongly added cameras in existing partial reconstructions, for improving the fit to the control points {{cite:27ce7a7ea2896b50994668e34906c3774998c353}}, and for filtering the mostly unconstrained cameras in the reconstruction to speed up bundle adjustment {{cite:d1843c2680b8d797466b3a09f842ee347258d3a5}} by reducing the size of the reconstruction. It would also help to improve the accuracy of the iterative closest point (ICP) algorithm {{cite:b7fea2e262b55415cb83104a1222924294905dda}} by using the precision of the camera poses, and to provide the uncertainty of the points in 3D {{cite:f39f0d7c0dd5912cd4bb51a37489230e426e218f}}.
In this section we present the performance of our estimation and uncertainty quantification algorithms for images of single edge dislocations in single-crystal aluminum. As described in {{cite:5122a048de4b307140faf522cf40aa2b5fb81e69}}, the displacement gradient tensor field {{formula:2571db40-b843-47f4-8328-bc10df94f89d}} induced by a theoretical infinite length dislocation is invariant along the direction of the dislocation line vector. Indeed, in this model system, {{formula:0665b951-84d4-45d6-918f-a66dbbd40ba3}} can be expressed analytically at all 3D points in an arbitrary coordinate system, or described as a 2D function along the coordinate system defined by the Burgers vector {{formula:0643c078-661e-44b1-807d-f331c565e4e6}} and slip plane normal {{formula:288ec3d5-af14-4c99-8298-ed8e0c6f16ec}} . Since we assume a known character and therefore line vector for the dislocation, its core position can be represented in 2D by a point defined in the plane orthogonal to the line vector. We define this plane to contain the original origin from the lab frame and call this the algorithm plane.
Expressing our algorithms in this coordinate system captures our intuition that we should have infinite uncertainty in the {{formula:49f68b7a-d5fc-47a4-9ddc-44ff9e23ca60}} direction for an idealized infinite length dislocation. We emphasize the fictitious nature of the algorithm plane, as we can still truthfully represent finite-length dislocations using this system, so long as they behave this way within the span of the observation plane.
Fig. REF (a) visualizes the algorithm plane along with the observation plane and the dislocation coordinate system for a dislocation with Burgers vector {{formula:04ba471f-a6e6-4444-bd75-61f4a6e6b2db}} and slip plane normal {{formula:c15daafb-f554-4a69-8381-07cedde50885}} . One may interpret inference results in the lab frame by projecting points in the algorithm plane onto the observation plane in the direction of the line vector, as shown in Fig. REF . If another coordinate frame is of interest, one can similarly project points from the algorithm plane into that frame.
Theorem 2.1 (PPM, {{cite:ad8b9dc4cd0cde401ac935f2e4926bd296041cff}})
Assume {{formula:60c879c0-c20f-4225-88d0-85cc518565da}} is {{formula:1ca2b53c-ec14-46d7-ac1e-637b03f81c43}} -convex and continuously differentiable everywhere. The (PPM) converges at the following rates:
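For intuition, a generic proximal point iteration x_{k+1} = argmin_y f(y) + ||y − x_k||²/(2λ) can be sketched as below; the step size lam, iteration count, and the use of a black-box inner solver are illustrative assumptions, not part of the theorem.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point_method(f, x0, lam=1.0, n_iters=50):
    """Proximal point iterations x_{k+1} = argmin_y f(y) + ||y - x_k||^2 / (2*lam)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        # Inner subproblem solved by a generic black-box optimizer.
        prox = lambda y, xc=x: f(y) + np.sum((y - xc) ** 2) / (2.0 * lam)
        x = minimize(prox, x).x
    return x

# Example: for f(x) = sum(x^2), the iterates contract toward 0 at a rate
# governed by the strong-convexity constant, as in the theorem.
```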
Unsupervised representation learning has been highly successful in the domain of natural language processing {{cite:4daa0bd54b0811d6851d1b9e80b9b5326d663e87}}, {{cite:fc09a861e8e130c946baaa45e5a73a17d9f0f17e}}, {{cite:6d0eacd6812e8e83502b8c0bd1ecf36e7016c1b7}}, {{cite:ebbf12bf6cc2d1a353813d0d90c39659edc81aaa}}, {{cite:6e99e6f7da33052ce1aac6e5b79e9221f80da5db}}.
Typically, these methods first pretrain neural networks on large-scale unlabeled text corpora, and then finetune the models or representations on downstream tasks.
Under this shared high-level idea, different unsupervised pretraining objectives have been explored in the literature.
Among them, autoregressive (AR) language modeling and autoencoding (AE) have been the two most successful pretraining objectives.
Recently, end-to-end approaches that directly predict trajectories from raw sensor data have gained traction {{cite:e3919a60aeae019bfab28266099da193968ad371}}, {{cite:c6633f90a6fe38630531cf3a5f3df2d87baeacdb}}, {{cite:831f050136bc6532117f1da8e3473ba6343ce16c}}, {{cite:fba906261cbe607fc7ed8ebdff208d4019a04adb}}, {{cite:b4bde0efab466a0b870c32bb031e2c0fc5489d70}}, {{cite:8a53b65b1f8b70c66bf957af5e85b3a9b4354a53}}, {{cite:d62e361a230231c77cb815035aa82f12f3b45c82}} due to their efficiency and high performance. These approaches typically exploit lidar as well as HD maps for this task. Lidar captures positional information about objects, while HD maps provide scene context and a prior on possible paths. To accurately predict future motion, these methods need to implicitly understand the dynamics of objects in the scene. They do so by relying on a sequence of position observations from lidar over time, increasing the reaction time of the vehicle. This can be problematic in certain critical situations where accurate, low-latency motion prediction is needed to avoid an imminent collision.
For practitioners we present a simple cost-benefit analysis using our work as an example. Achieving a roughly 2% improvement in downstream tasks over general BERT required 48 hours of in-domain pre-training. Another 48 hours of training led to a further 1-1.5% jump in performance. At a run-rate of $22.33/hour (for 8 x V100s on Azure cloud), the financial cost to achieve the initial performance boost is $1,071, and if pre-training is run for 96 hours the cost is $2,143. As shown in Table REF , the general trend is that the longer you train, the more you will spend for each percent of improvement. We note that this analysis is representative of the experiment we conducted, and can easily be improved upon by adding optimization techniques such as Whole Word Masking, faster GPUs, and/or deep learning optimizations such as DeepSpeed {{cite:9a7c151f583ea45d7aec3a7a19644a307b027472}} or NVIDIA mixed precision {{cite:d77e67ef0d0becb6427ce0df251e091ffab3d780}}.
In recent years, security has become a pressing issue in the control and estimation of cyber-physical systems, such as chemical processes, power grids and transportation networks {{cite:c7446f224816df41710f8a220c245bc3be1acced}}, {{cite:647c7ae3ec72d2056d18e7068b18b94a47445f45}}, {{cite:98f0e86d8b6b49eb741f44a898cb70a42ce688e0}}, {{cite:afda99cde8c5663fbf0b712c6b4c91b9514afd85}}. The robustness of various system properties has been investigated under internal faults (like disconnections of links/nodes {{cite:647c7ae3ec72d2056d18e7068b18b94a47445f45}}, {{cite:af7a829cee5168856b04a443183566633eb753d9}}) or external attacks (like adversarial sensor/actuator attacks {{cite:98f0e86d8b6b49eb741f44a898cb70a42ce688e0}}), including stability {{cite:08821965542cc008fe18ed1791d394152d0fa1e6}}, stabilization {{cite:73c659bc5744daca4caa0fab2579097311715076}}, and controllability and observability {{cite:aa678331de4ccd307fbe7a9ce703a96d907b9553}}, {{cite:6ac89869320cb7cfb71a0276448ff8d58abaf3c5}}, {{cite:f1ea849395091b3e0b073e898bf5837ac5fdffdf}}, {{cite:43ad8359fbd6691a308f1d96be9d1e03e3e22440}}, {{cite:39e2bcae94e2d9c55682649c88a04c633a229884}}. In particular, controllability/observability, as a fundamental system property, has been extensively explored with respect to its robustness under structural perturbations (such as link/node/actuator/sensor removals or deletions). To name a few, {{cite:aa678331de4ccd307fbe7a9ce703a96d907b9553}} considered observability preservation under sensor removals, {{cite:6ac89869320cb7cfb71a0276448ff8d58abaf3c5}} investigated controllability preservation under simultaneous link and node failures, while {{cite:f1ea849395091b3e0b073e898bf5837ac5fdffdf}}, {{cite:43ad8359fbd6691a308f1d96be9d1e03e3e22440}}, {{cite:39e2bcae94e2d9c55682649c88a04c633a229884}} systematically studied the associated optimization problems with respect to link/node/actuator/sensor removals from a computational perspective. Since controllability/observability is a generic property that depends mainly on the system structure {{cite:faa75d7dc54e751b05cabb403b829c406f31ad92}}, its robustness is largely determined by the robustness of the corresponding graphs.
Parameters for the different model scenarios are presented in Table REF . The vacuum effective potential {{formula:e941347f-8ae6-4e78-878e-b3f5759fb644}} is a function of the two variables {{formula:3deadd7c-f808-4595-bd20-9f903783b099}} and {{formula:b6f563d8-33ad-4402-8749-a7fc2a24fe6c}} . Its minimum is located at {{formula:03f1788b-8830-4633-b532-2e71a2db79f6}} MeV, {{formula:855dd347-004e-4658-abf3-fffd9db4681e}} MeV in all the 2+1 flavor model scenarios, irrespective of the {{formula:38955502-5ff9-42d4-874a-2af040b1ef8b}} meson mass values. In order to study the variation of {{formula:66f75329-9229-457a-96d6-f67bfea14934}} in the nonstrange direction, {{formula:4f8f2fa5-8568-498b-baea-1e42886395b6}} is fixed at 433.34 MeV
and the normalized vacuum effective potential difference {{formula:e110f340-6062-44cc-9cf8-dfb3cce824ca}} is plotted in Fig.(REF ) with respect to the scale-independent nonstrange quark mass parameter {{formula:7bc74707-fe44-4702-a691-754170993380}} for {{formula:104b8852-0f6c-4f11-b2b3-262c6cd9a98a}} 400 MeV. It is shallowest for the no-sea approximation of the QM model and deepest for the QMVT model, while the on-shell parameter fixing of the RQM model gives a shallower effective potential than the QMVT model. The temperature variations of the nonstrange and strange condensates for {{formula:e47ed636-f6ff-492b-a572-15743184faf5}} are presented in Fig.(REF ). Similar to the two flavor case {{cite:63a44fa561dc5ddc33ccf3ffa588ed7ee50ee9c4}}, the sharpest QM model temperature variation of the nonstrange quark condensate {{formula:0f72b880-f305-4648-bff5-bf30d9fa730b}} becomes quite smooth for the on-shell parameterization of the RQM model, and the smoothest variation of the nonstrange condensate is witnessed in the QMVT model plot. The temperature derivative of the condensate in the nonstrange and strange directions when {{formula:b0ce42dd-f4e8-492f-b375-a5eba49b13ed}} =0 MeV defines the chiral crossover transition temperature (called the pseudo-critical temperature), {{formula:e50c9d5c-2f5c-427c-8fc9-bec73e6bbe59}} for the nonstrange direction and {{formula:a70245ce-f8cd-4437-92ff-a914285ee04a}} for the strange direction. The earliest and sharpest crossover transition occurs at a pseudo-critical temperature of {{formula:deed80e7-5f77-47a2-8c71-3862e2b53bab}} MeV in the QM model, a smoother chiral transition is witnessed for the RQM model at {{formula:9a1266f2-15f6-4bb5-a00c-f37b6006846b}} MeV, while the most delayed and smoothest chiral crossover occurs at {{formula:80efb2d2-2e8d-44e1-84d3-5e59e7a72b21}} MeV in the QMVT model. The melting of the strange condensate {{formula:04f3967b-7aee-4bc0-bbff-32bf26c511ab}} is most significant in the RQM model when compared with its temperature variation in the QM and QMVT models. Fig.(REF ) depicts the phase diagram for {{formula:8b970886-f2ab-4ec8-b86d-19911d4449ad}} MeV. The critical end point (CEP) location {{formula:c2883f5e-d597-4ef7-b155-fbdfd67efa5b}} 113.3 MeV, {{formula:7cd113b2-0946-4b46-8ace-cf51c85ae32b}} 96.7625 MeV of the QM model shifts significantly, as reported in earlier works {{cite:ea1ef61283fa3f548106223f6d149d6c3a371606}}, {{cite:72d9c7fe7acd1db9269990314a5a3aa67337d097}}, {{cite:925daf7d4614c166c1074e70796906fb10565247}}, {{cite:903773b4a03d338e87c85e310b4bed843a376cac}}, {{cite:b665350b415753f7004db046ab047353bb5c3f72}}, to a far-right position in the lower corner of the {{formula:51bc1777-4243-4b1d-82a6-a52e3f9299aa}} plane at {{formula:6bcdeb83-3a5c-40f6-b75c-dcac60381553}} 285.91 MeV, {{formula:d4310773-bb57-46ec-9302-b843c3e7bf98}} 32.23 MeV for the QMVT model setting. It is worth emphasizing that, due to the exact on-shell parameter fixing, the CEP in the RQM model moves to a higher position compared with the QMVT model CEP: it gets located at a higher temperature {{formula:0a23845a-2cdb-4973-9e9f-f2659f9ee7d4}} 37.03 MeV and a lower chemical potential {{formula:05608d3f-b440-41a2-a621-6c3b49ddc52a}} 243.12 MeV.
Coupled waveguides play an important role in numerous optical devices such as multicore fibers, optical directional couplers, polarization splitters, ring resonators, and interferometers {{cite:90249d00794d8a24d02302b2927d2d6ec86dad5c}}, {{cite:a39a465f7e28f37b74a030ca184db4ae3681b6a4}}, {{cite:e5507acc9a66010ff36323c91f2359348da931b2}}. Recently, optical devices comprising twisted or knotted nanofibers have been fabricated {{cite:cf1ba1b5857b84b385a2f5245e9b4d27d54565fc}}.
Coupling between two nanofibers has been studied {{cite:cf1ba1b5857b84b385a2f5245e9b4d27d54565fc}}, {{cite:7950332567e6465e274d7928a8fee6010fda57ec}} in the framework of the coupled mode theory {{cite:90249d00794d8a24d02302b2927d2d6ec86dad5c}}, {{cite:a39a465f7e28f37b74a030ca184db4ae3681b6a4}}, {{cite:e5507acc9a66010ff36323c91f2359348da931b2}}. It has been shown that the guided normal (array) modes of two coupled dielectric rods can be calculated by using the circular harmonics
expansion {{cite:d4a3522634eac31be66910c138b1b500cd9e931c}}. This method has been extended to multicore fibers {{cite:01558a55b2a968c9a6b45da8427cb88f5f8e6c50}}, {{cite:4a4ceb4462f594425a9328227876f2e75daf6443}}, {{cite:b1c46555aa6e65ad0d0c55e487cded83f4af464d}}, {{cite:28cf8e103ad0595838a5cbd2ffaa264081f465b1}}. The propagation constant and the flux density of the field in a guided normal mode have been studied {{cite:d4a3522634eac31be66910c138b1b500cd9e931c}}, {{cite:28cf8e103ad0595838a5cbd2ffaa264081f465b1}}, {{cite:99fc395ea73df46230dc1860f86ab971863dc339}}. The polarization patterns {{cite:28cf8e103ad0595838a5cbd2ffaa264081f465b1}} and the mode cutoffs {{cite:4a493b14b8dddd7aafea23e0ed3aa1f3104c9304}} have been investigated. In optomechanics, forces arising from internal illumination by light traveling in coupled waveguides have been studied {{cite:075af590edc91faf5ea2c3b6f1b7fe32f1f71c18}}, and light-guiding arrays of mechanically compliant glass nanospikes have been fabricated {{cite:9dbf47fa60506ae9f319f377a5e0836e905b49a4}}.
Bursting behavior is known to be extremely variable and violent, and the bursts influence the accretion process.
To date, a significant soft excess has been detected in most bright bursts, caused by the interaction between the burst and the corona or/and disk.
In this work, however, this phenomenon is absent or, conservatively, not as significant as in other bursters.
Together with the small deficit fraction in 30–100 keV (for comparison, the deficit fractions of 4U 1636–536 {{cite:5917c31b4e6ec2663c066a35e8803e8c22feadf8}}, {{cite:87a8d11b2660461039519519eb2ed81eae2c197d03e}} and Aql X–1 {{cite:b458673efa3181898792bebe3c7c89256844ea31}} are roughly unity), this favors a larger inner disk radius than in other bursters,
since the radiation pressure and the flux density of the burst imposed on the disk/corona are proportional to 1/{{formula:842b1bcb-dd71-4e40-b894-59eae70a58f2}} .
Other possibilities, such as MAXI J1816–195 having different structures/physical parameters of the disk/corona than other bursters, cannot be ruled out.
As more NICER data become available, more bursts from MAXI J1816–195 will be found to be simultaneously detected by NICER and Insight-HXMT.
We expect to study those bursts in a forthcoming publication to better understand the interaction between the bursts and the inferred outburst emission.
Furthermore, such a smooth translation between different modalities can allow us to gather, for instance, additional annotated data with no extra effort. The top row in Fig. REF shows an original camera image from the Semantic-KITTI dataset. By using the corresponding LiDAR scan, our framework trained on this dataset can already produce a variation of this scene, as depicted in the middle row of Fig. REF . We can now simply replace the Vid2Vid head with the version trained on the Cityscapes dataset {{cite:3782faf2080fb1dfd5ceaf95bc4b9d01043e575d}} to produce a different variant, as shown in the bottom row of Fig. REF .
Producing such different variants without additional effort can help us to augment the available image datasets needed to efficiently regularize the neural networks.
In all three stages of training, we use an effective batch size of 16K. We utilize the Adam optimizer {{cite:e1bce8fdebcf9b5b0fca995fc4da49a57642b1c0}} with weight decay and set peak learning rates to [5e-4, 3e-4, 1e-4] for the three stages, respectively.
We train for up to 30 epochs and select the best model based on the validation-set loss over all languages.
The mid-infrared wavelength region (MIR; 3-20 {{formula:5d126efc-0c2e-4747-b830-674e47b6582e}} m) provides an excellent opportunity for the search for life, as it contains features for multiple biosignature gases, as well as for gaseous species that could provide evidence for or against biological O{{formula:6b7f4670-223d-4b3f-8a0d-03f6473dba01}} production {{cite:efa5c6a95b6aa9275a8b235ac780c27eb3995824}}, {{cite:fcd6f3181dc819c9db291039b6fb22233ea864c2}}, {{cite:93b0557f05a45e455d416c66724994adab3f6682}}. Furthermore, thermal emission observations are less impacted by clouds (e.g., {{cite:95e80446febedb9a35c0b41f8d5e22390f3d448e}}) and could also allow measurements of a planet's surface temperature {{cite:efa5c6a95b6aa9275a8b235ac780c27eb3995824}}.
The collisionally induced O{{formula:3ce793a6-76ed-43e2-b012-6e8e23810df9}} absorption feature at 6.4 {{formula:c4fc89f6-aca3-4919-b333-41e9e93faa8b}} m is the only MIR feature that allows for the direct detection of O{{formula:b7a55e99-150e-483a-966d-8f5f89a711f1}} , although it would be extremely difficult to use for O{{formula:8a4d8314-caff-4043-8469-05cef1cb4d0d}} abundances consistent with biological production {{cite:237f8ba5fc228c150f4abdb14e6d61fa064a6425}}. It will, however, be useful for identifying high-O{{formula:a0043e7a-adfe-4378-bd41-ce9a2ee3a994}} desiccated atmospheres, a possible mechanism for abiotic O{{formula:efe363dc-7543-47c1-91dd-2d615985c576}} production {{cite:aec358aa521f697a0341f1e01df20e4787e95df4}}, {{cite:56a23d8fab9c779d60bce4f41dff13b5f16c18cf}}.
Inferring the presence of biologically produced O{{formula:d7e1b82f-14cd-43f2-924e-36f032fd4398}} will be restricted to indirect detections via the 9.7 {{formula:24cab775-9ce1-4d7a-a086-adb474c2e532}} m O{{formula:536c9780-e8b2-4dfb-9d51-61c04cf47ba7}} feature in the MIR.
This fact can be seen by adapting the classical reasoning for the finite element method to the spectral setting (see, e.g., {{cite:b53cfc63268c1bba9ae3e587f92e1fa26c329f4b}}). Moreover, using the inverse inequalities (REF ) and the above accuracy estimates, we can show that for {{formula:8a1047f2-7cfd-4edf-905e-13ebd137a444}} with {{formula:47fdc8eb-9f49-4822-acd8-ef1610f3592e}} sufficiently large, we have {{cite:b53cfc63268c1bba9ae3e587f92e1fa26c329f4b}}, {{cite:4c854bee8ae026b8b531d80fab9bdc8987439d2c}}
{{formula:41dd5557-aae0-455b-9c9c-48023c0fc4de}}
According to the idea proposed by Leon Chua {{cite:8d0b714033d48e212c279e0e28a3b9903bfdb11f}}, the memristor relates the transferred electrical charge, {{formula:58e89e97-f09a-47e0-a409-2bcd47cf5e03}} , and the magnetic flux linkage, {{formula:9291fbcd-73f4-4c60-9d75-256bd3f2c3f6}} , by means of the linear relationship {{formula:2e793879-eb83-4ab8-8d83-f222f6dbb11b}} , whence it follows that {{formula:9905abc8-76aa-4cc9-a413-30062ecf8c7f}} . In this way, using the relationships {{formula:d8918027-cafa-468a-9f00-76eb75e47293}} and {{formula:c14e4a14-eb4e-43b7-8615-a4b79345b082}} ({{formula:12aeb234-104b-4969-bc9c-dad4747975e2}} is the voltage across the memristor, {{formula:e4ecec60-2942-45a0-ad91-baa2daef556b}} is the current passing through the memristor), the memristor current-voltage characteristic can be derived: {{formula:0cfad2c7-0df9-4981-9240-37956aae670f}} . Here, {{formula:4dd99c02-8e47-4092-a469-23ed5cc67534}} is the flux-controlled conductance (memductance) and depends on the entire past history of {{formula:188087f6-821b-4e7d-9561-caa92666c074}} :
{{formula:bc2546bc-3644-4587-a136-96e87fa66df6}}
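To illustrate this history dependence numerically, the sketch below integrates a flux-controlled memristor under a sinusoidal drive. The cubic charge-flux curve q(φ) = aφ + bφ³ (so W(φ) = a + 3bφ²) and all constants are illustrative assumptions, not values from the text.

```python
import numpy as np

def memristor_iv(a=1e-3, b=5e-4, amp=1.0, omega=2.0 * np.pi,
                 t_max=4.0, n=5000):
    """Simulate i(t) = W(phi(t)) * v(t) for a flux-controlled memristor.

    Assumes the cubic charge-flux curve q(phi) = a*phi + b*phi^3, so the
    memductance is W(phi) = dq/dphi = a + 3*b*phi^2. The flux linkage phi
    is the running time-integral of v, i.e. its entire past history.
    """
    t = np.linspace(0.0, t_max, n)
    v = amp * np.sin(omega * t)
    # Trapezoidal integration of the voltage gives the flux linkage.
    phi = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))
    i = (a + 3.0 * b * phi ** 2) * v
    return t, v, i  # plotting i against v traces a pinched hysteresis loop
```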
We have proposed herein a model to describe spin-half quantum particles in curved space-
time in the framework of quantum field theory. Its novelty consists again in assuming that the
Einstein equivalence principle and general covariance hold for spin-half quantum particles. It
is not a self-evident assumption, because the mainstream approach in quantum field theory in
curved spacetime is based instead on exploiting global isometry group of a given spacetime for
expressing quantum-field operators through creation and annihilation
operators {{cite:3cac15946026c77ac90ac81af37676b9903664e1}}, {{cite:e756bebd8cf2747942e74a7defb823fc34c32aa2}}, {{cite:1c718a7da88d7bd6b9a0169fb8ca872fc2db59d9}}. Since local Poincaré group and
global isometry group are, generically, not isomorphic to each other, it is unclear if
global isometry group should be favoured with respect to local Poincaré group,
taking into account that it is the latter which plays a fundamental role in high-energy particle
physics {{cite:5414e76af53d528436bd1f7fe2ccbb496b3b23ea}}, {{cite:29f0152798a6435782dc72ab4f79b3d168e0e94b}}, {{cite:2acc97c483cde8f45f95a494ff81600ad223dacc}}, {{cite:6a8dd40bea516b0f6e23e4522ed6113f832dc93b}}, {{cite:46586f7d4fc233c9dec6f8e4dd5d06572ea8fb70}}. Furthermore,
general covariance is abandoned in this approach. This follows from
favouring different mode functions for the definition of creation and annihilation operators in
different patches of the same spacetime. From an experimental standpoint, this is in tension with
the Bonse–Wroblewski experiment {{cite:c4241270bccf6d51e6bf607fcf9c08bc19d0ba6f}}. Indeed, the observed interference
pattern is induced there by acceleration which, in the leading-order of approximation, enters
wave-packet phase in the form {{formula:9f0089d1-05d6-4f27-9602-fa451e523ea3}} , where
{{formula:9a061ef7-fee7-46e7-9903-40aee14bf094}} are Rindler coordinates. Wave functions that are eigenfunctions of the
Killing vector {{formula:91546dcd-f99b-4b27-9db8-5fb3df33487d}} of Minkowski spacetime cannot acquire this acceleration-dependent
correction to {{formula:662bb812-32e7-4e78-b8b4-5129fde6acfe}}; such eigenfunctions are, however, the ones normally used to do
quantum particle physics in Rindler spacetime {{cite:e756bebd8cf2747942e74a7defb823fc34c32aa2}}. In accord with general
covariance, plane waves re-written in
Rindler coordinates do have that observed correction {{cite:043b6bde4e17955d908b9bd00572ef4b9b06ce57}}.
| d | a73a42215a8584b44015e7de3e47709c |
We refer to {{cite:c5664d037ce88e22c75638d423a2a641eb12c99a}}, {{cite:4e668ca4284af48e814c717435b0b1c0cd15b79e}}, {{cite:6c38a7e2ccdb6367f223a682ab2358dfb3106c96}}, {{cite:e544211af5a16f53dc723e09a777463304d0e592}}, {{cite:3fb3a2ec779d606623ef9eb11e1a0ecd83683828}} for several sufficient conditions for the comparison principle. In short, {{formula:a836475b-9c3f-4f17-ae82-be9537179cb1}} satisfies the comparison principle if {{formula:b7c409f1-906f-409e-8650-83c3cabb0dd8}} is independent of {{formula:12700321-a6a7-4548-a59e-8a6a004f64d2}}, and one of the following conditions holds:
| m | ed0733f39b8fc2f5f28a72ccc14956df |
Before synthesizing pseudo-bursts, it is essential to align the input burst frames (having arbitrary displacements) so that the relevant pixel-level cues are aggregated in the later stages.
Existing works {{cite:ea183df28cb67b8557db6f5d6735699a439aeb20}}, {{cite:5d4b53a008b5201a1609b4c7cb509c91ab495356}} generally use explicit motion estimation techniques (e.g., optical flow) to align input frames; these estimators are typically bulky pretrained modules that cannot be fully integrated into an end-to-end learnable pipeline.
As a result, errors made during flow estimation can propagate to the warping and image processing stages, negatively affecting the generated outputs.
In our case, the proposed BIPNet implicitly learns the frame alignment with deformable convolutions {{cite:78649f90342d7eac2c20c75682b6321f0b699323}} that can effectively adapt to the given problem.
Further, we integrate the edge boosting refinement via back-projection operation {{cite:87455600f9511a00cb57c9712edcd2a6111cf9c8}} in the alignment stage to retain high-frequency information.
This helps sustain alignment accuracy in cases where highly complex motions exist between burst images and deformable convolutions alone may not be sufficient for reliable alignment.
| i | eddb40d0f1712f078c09d334fdbccbba |
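The implicit alignment idea can be sketched with a deformable convolution whose offsets are predicted from the frame/reference pair, so no optical-flow module is needed. This is only a minimal sketch: the module name `BurstAlign`, the channel width, and the offset-prediction design are assumptions, not BIPNet's actual architecture.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class BurstAlign(nn.Module):
    """Align one burst frame to a reference frame with a deformable conv;
    offsets are learned from the concatenated features, end to end."""

    def __init__(self, channels: int = 32, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # 2 offsets (dy, dx) per kernel sampling position
        self.offset_pred = nn.Conv2d(2 * channels, 2 * kernel_size**2,
                                     kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, frame_feat, ref_feat):
        offsets = self.offset_pred(torch.cat([frame_feat, ref_feat], dim=1))
        return self.deform(frame_feat, offsets)

frame = torch.randn(4, 32, 64, 64)     # features of one burst frame
ref = torch.randn(4, 32, 64, 64)       # features of the reference frame
print(BurstAlign()(frame, ref).shape)  # torch.Size([4, 32, 64, 64])
```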
where the equalities are understood for the expansions of the rational functions in {{formula:90dcb752-7406-4fba-8fdb-28b4c65afd39}}
as series in {{formula:dff10573-a151-44e8-a71c-ab724d3a9248}} and {{formula:b7a0c51d-f85a-4a24-8f39-b0ea8911aff9}}, respectively. (The roles of {{formula:01d81e98-7a21-4305-81da-1ff97f473160}} and {{formula:5f1e8fa6-8379-4a4d-9c59-66907c67b880}}
are swapped in our notation as compared to {{cite:cd130de8a504269400c649a3f933a61bb8fb7fa1}}.)
Every {{formula:1e983867-9996-4ae0-9362-8f216df6fc8e}} -tuple of polynomials {{formula:f42045f4-5c50-4b04-9ed4-367ab4595dec}} in {{formula:4b755e7a-579e-4c16-a2dd-10dc5e072aa3}} ,
where each {{formula:32cf8014-986c-4324-b157-2a8c4ad62184}} has constant term 1, arises in this way.
| r | ace9ca4f5fa681ec5f38f96a510a0d33 |
We use PAWS {{cite:dffbd0556e4aef5888084b212298a9ab6f641d1b}} as a canonical example for this family.
A key difference from threshold-mediated methods is the lack of the parametric classifier {{formula:8a4c9d4e-2450-4635-a2e7-00a7fadc29f1}} , which is replaced by a non-parametric soft-nearest neighbour classifier ({{formula:f127fabd-02ea-4b7f-a1d8-fea2e83edf33}} ) based on a labeled support set {{formula:dde25da5-23a3-482b-b4e0-048d333d0d36}} .
Let {{formula:e8db7be3-5afb-4264-9ebe-b9404987afa0}} and {{formula:7b048af2-8032-4f05-97bd-ea59c187dec6}} be the representations of the two views from the backbone encoder; their pseudo-labels ({{formula:357f36ad-ab1b-49a4-8d65-8a0be79835a9}}, {{formula:5f1796ff-5584-42d1-8d45-8dd80d92013b}}) and the unlabeled loss are given by:
{{formula:c2a28a9b-ee0d-4a55-b649-3997de8d5bf1}}
| m | 4a26763b953155a56de7d9c7e19cdbbd |
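A minimal sketch of the non-parametric soft nearest-neighbour classifier is given below, assuming cosine similarity with a temperature tau and one-hot support labels; the symmetric cross-view consistency loss shown is a simplified stand-in for the full PAWS objective, which additionally sharpens targets and adds a mean-entropy regularizer.

```python
import torch
import torch.nn.functional as F

def soft_nn_pseudo_labels(z, support_z, support_y, tau=0.1):
    """Soft nearest-neighbour classifier over a labeled support set.
    z: (B, D) view representations; support_z: (M, D); support_y: (M, C)."""
    z = F.normalize(z, dim=1)
    support_z = F.normalize(support_z, dim=1)
    sim = z @ support_z.t() / tau              # cosine similarity / temperature
    return F.softmax(sim, dim=1) @ support_y   # (B, C) soft pseudo-labels

B, M, D, C = 8, 64, 128, 10
z1, z2 = torch.randn(B, D), torch.randn(B, D)      # two views from the encoder
support_z = torch.randn(M, D)                      # labeled support embeddings
support_y = F.one_hot(torch.randint(0, C, (M,)), C).float()

p1 = soft_nn_pseudo_labels(z1, support_z, support_y)
p2 = soft_nn_pseudo_labels(z2, support_z, support_y)
# Cross-view consistency: each view predicts the other's pseudo-label.
loss = -(p2 * torch.log(p1 + 1e-8)).sum(1).mean() \
       - (p1 * torch.log(p2 + 1e-8)).sum(1).mean()
print(loss.item())
```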
Since Assumptions REF and REF ensure uniform injectivity of {{formula:d2fd31f7-b5ab-4776-a68e-773c4512a80f}}, and {{formula:cb522143-f3ff-4c08-98dc-87039cc4552d}} satisfies the PDE (REF ), the inverse {{formula:67060a15-ada2-4136-8797-9c35240406b1}} exists and is unique. Thus, the data samples used in training are of the form {{formula:5821478c-4e98-494b-89e1-98030134458b}} and {{formula:5310bb1e-e13f-4081-8c1f-cd7a92015e05}}, which entails that problem (REF ) is a realizable learning task that is probably approximately correct (PAC) learnable {{cite:bdf4ac57536ddaec21a5ef7493ad57a08f792538}}. Then, one source of non-zero generalization error is the fact that the loss {{formula:c649c155-b6f4-4b26-92cd-fbd5c15ac018}} in (REF ), induced by the training data {{formula:5c3bde7e-bdea-4833-a97a-ac7c7ac8f354}}, is an approximation of the actual loss
{{formula:c16d0eda-7d15-41d9-8739-35e2a788d2a2}}
| d | 24e0beb8e2682df6210da8d87a6bd929 |
In this section, we recall some known results on the simultaneous dimension of graph families and the fractional dimension of graphs. We begin with some useful observations. The open neighborhood of a vertex {{formula:4e002920-47f5-471e-bf0e-f01777f5d29a}} is {{formula:9a5d1baa-304e-4aa7-9464-a5b90a1650f4}} . Two vertices {{formula:628e513f-ff25-4ac4-a460-0be3ab9b81f7}} are called twins if {{formula:e2e775c5-247f-405a-9889-a0325b6d6943}} ; notice that a vertex is its own twin. Hernando et al. {{cite:f0b92515f2c3670898c190f852d4a1a99354dbe7}} observed that the twin relation is an equivalence relation and that an equivalence class under it, called a twin equivalence class, induces either a clique or an independent set.
| r | 05487865961a9ea745c7b684197f4792 |
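A small sketch of computing twin equivalence classes is given below, assuming the usual convention that u and v are twins when N(u) \ {v} = N(v) \ {u} (which covers both equal open and equal closed neighborhoods, since the elided formula is not shown); the greedy grouping is valid precisely because, as noted above, the twin relation is an equivalence relation.

```python
import networkx as nx

def twin_classes(G: nx.Graph):
    """Group the vertices of G into twin equivalence classes, where u, v
    are twins iff N(u) - {v} == N(v) - {u}.  Comparing each vertex with
    one representative per class suffices since twinness is transitive."""
    classes = []
    for v in G.nodes:
        for cls in classes:
            u = cls[0]
            if set(G[u]) - {v} == set(G[v]) - {u}:
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes

# In K_{2,3} each side of the bipartition is one twin class, and each
# class induces an independent set, as the observation above predicts.
G = nx.complete_bipartite_graph(2, 3)
print(twin_classes(G))   # [[0, 1], [2, 3, 4]]
```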
Ground truth for data-driven approaches.
When applying data-driven approaches to map light-field images to high-resolution 3D volumes, the ground truth can be provided by other high-resolution imaging modalities, including but not limited to confocal, light-sheet and multi-photon microscopy images {{cite:2bc6c99fd37f387836f480fd802175ae200b2ff7}}, {{cite:d55ef81db069c082185811367b9f222053954a54}}. However, due to the high cost of obtaining ground truth, data-driven methods commonly lack labelled data. This makes it challenging to directly exploit supervised learning for training, and may also cause overfitting and reduced generalization capabilities. To mitigate the lack of ground-truth data, other learning techniques and strategies, such as transfer learning, adversarial training, semi-supervised learning and unsupervised learning, can be adopted to develop feasible solutions. For example, one can use well-founded optical models to generate a large amount of synthetic data to train data-driven models {{cite:7a2f6d5beb42df7e6424df375847f683471e2be3}}, {{cite:391ca10772d8582d7edf3eeef4786a6528aa1578}}. The trained models can then serve as a good initialization for further fine-tuning on sparsely labelled real data. Alternatively, domain adaptation {{cite:054ed3a4ab67b9c30fd2c0c9af1425a6e43ab6fe}}, {{cite:3778c0e5f2f709483b6a79aecfddeda3ee567c8c}}, {{cite:9cfa13d70bfa280995358bf81fc14ed05c201fe7}} can be exploited to learn a domain-invariant feature representation by minimizing a distance metric, e.g. the maximum mean discrepancy (MMD), or an adversarial loss between the source (synthetic) and target (real) distributions. In this way, a model trained on the labelled source data can be directly applied to the target domain.
| d | 9edf90fbd41fba480b1e1bb5c4af41e3 |
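As an illustration of the distance-metric route to domain adaptation mentioned above, the sketch below computes a (biased) squared maximum mean discrepancy with an RBF kernel between synthetic- and real-domain feature batches; the kernel bandwidth and the toy Gaussian features are assumptions.

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel; a small value
    suggests the two feature distributions are aligned."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 16))   # synthetic-domain features
target = rng.normal(0.5, 1.0, size=(200, 16))   # shifted real-domain features
print(f"MMD^2 across domains: {mmd2_rbf(source, target):.4f}")
print(f"MMD^2 within domain:  {mmd2_rbf(source, rng.normal(0, 1, (200, 16))):.4f}")
```

Minimizing such a term (or an adversarial loss) over the feature extractor pushes the two domains toward a common representation.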
In this work, we learn a task-agnostic head (or joint head) and propose a new evaluation protocol for class-iNCD (see Fig. REF (b)). In detail, we first use the new head to estimate predictions for the unlabeled data from the new classes. We use the HA {{cite:36f0372e630be02e3800754679f65e063817f142}} to re-assign ground-truth IDs based on the predictions and ground-truth labels for the new classes only. The joint (task-agnostic) classifier is then used to evaluate the new-class test samples by directly comparing its predictions against these re-assigned ground-truth labels, while the old-class test data are evaluated against the original old-class ground truth. As shown in Fig. REF (b), our evaluation protocol explicitly distinguishes the old and new classes. It is therefore more reasonable than {{cite:b74770c1b5c36029109c34fd12fd8ad76c416a93}}, as it penalizes the metric when new classes are classified as one of the old classes, which is the desired behaviour.
| m | 18f24791b77ae9cbc03cb3f77181ccc3 |
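A minimal sketch of the re-assignment step, assuming HA refers to the Hungarian algorithm: the confusion matrix between predicted cluster IDs and ground-truth new-class labels is built, and the optimal one-to-one mapping is solved for.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def reassign_new_class_ids(preds, labels, n_new):
    """Map the new head's cluster predictions to ground-truth new-class
    IDs by maximizing agreement in the confusion matrix (Hungarian)."""
    cost = np.zeros((n_new, n_new), dtype=int)
    for p, y in zip(preds, labels):
        cost[p, y] += 1
    rows, cols = linear_sum_assignment(cost, maximize=True)
    mapping = dict(zip(rows, cols))
    return np.array([mapping[p] for p in preds])

preds  = np.array([0, 0, 1, 1, 2, 2, 2])   # cluster IDs from the new head
labels = np.array([2, 2, 0, 0, 1, 1, 0])   # ground-truth new-class labels
remapped = reassign_new_class_ids(preds, labels, n_new=3)
print((remapped == labels).mean())          # accuracy after re-assignment
```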
Our proposed Point2Seq shares a similar intuition with the concurrent work Pix2Seq {{cite:a08e2c5718dbc35c8943ebffce33e63dffc63222}}, which was proposed for image-based object detection, in terms of leveraging objects as words that can be read out from a feature map. However, our method is intrinsically different from {{cite:a08e2c5718dbc35c8943ebffce33e63dffc63222}} in three aspects: 1) Unlike {{cite:a08e2c5718dbc35c8943ebffce33e63dffc63222}}, which merges all objects into a single sequence, we treat each object as a sequence and predict all objects in parallel, while the words within each object are generated sequentially. In this manner, we circumvent the object ordering problem in {{cite:a08e2c5718dbc35c8943ebffce33e63dffc63222}}, and our method is much more efficient at the inference stage compared to {{cite:a08e2c5718dbc35c8943ebffce33e63dffc63222}}, in which the inference latency is heavily influenced by the total object count in an image. 2) We adopt continuous word representations instead of the discrete tokens in {{cite:a08e2c5718dbc35c8943ebffce33e63dffc63222}}. The use of continuous representations removes the need for quantization and makes our method compatible with the existing loss functions tailored for 3D object detection. 3) We propose the scene-to-sequence decoder to generate words for each object, in lieu of the Transformer architecture in {{cite:a08e2c5718dbc35c8943ebffce33e63dffc63222}}. The scene-to-sequence decoder is lightweight and leverages a sparse set of features to predict each object, which is more suitable for 3D object detection, where the detected targets are usually small and sparse.
{{table:5b032106-0294-4749-9403-d598f3ca57a8}}{{table:9152e620-5565-4471-a907-516f3cfbb7ef}} | d | 3f55fed90567a32ae54a8abaf3a553dd |
PSO with AAR fitness: The PSO method is performed by using the objective of problem (P3), i.e., AAR, as the fitness function, where each iteration requires all generated channel samples to evaluate the fitness and update the particle positions. This method serves as an upper bound for the proposed mbs-PSO method.
Sum-path-gain maximization (SPGM)-based scheme {{cite:644af7e929a7ea7dc9f82c1e60585bb6b56b3543}}: In our proposed two-timescale beamforming framework, the large-timescale passive beamforming optimization is replaced with the SPGM-based method, where the IRS reflection matrix is obtained by maximizing the effective channel gains.
SVD-based scheme: We adopt the conventional SVD precoder in the considered IRS-assisted MIMO system, where the BS optimizes its transmit powers solely based on the outdated CSI at each slot.
Random phase shift: Based on the studied SVD-ZF system, we adopt a randomly generated long-term reflection matrix, and then optimize the transmit powers by using the method developed in Sec. III-A.
Without IRS: We optimize the transmit powers to maximize the overall AAR of the studied SVD-ZF system with the IRS disabled.
| r | f74bc3b2d2c461e99ab30b3f2ace06a9 |
In order to prove Theorem REF , we need the following two lemmas. The first one is an immediate consequence of Dedekind's theorem. The second one follows from {{cite:8422a01d8afe35696e0ff70726293853c9a87f93}}.
| r | 312c7074674c6b444a8a9d7a8e40959f |
Another possibility is that our dataset (410 students) was simply too small for complex deep models to find success, compared to the 1000s or even 100,000s of learners in other datasets where DKT has been evaluated. However, model complexity alone does not explain the difference, since the simpler BKT model did even worse than DKT, and our Code-DKT model, which had far more parameters, performed better. Additionally, DKT has historically performed well on some other small datasets (e.g. the “ASSIST-Chall” and “STATICS” datasets from {{cite:4ac0bd3cabb790b0995df8af1a1b876a01f96835}} with 300-700 students). Regardless, many tutoring systems only have hundreds of students, and effective KT models must still be able to perform well on these small datasets. Thus, to the extent that our dataset is representative of the domain, our results suggest the need for improved KT models for programming.
| d | 5afc65648229a2d0c82c25c38784a9dd |
As with Furstenberg's conjecture, Theorem REF
also has valuable consequences for expansions of natural numbers.
Let us first recall that a set {{formula:bee4bcae-a5d2-4128-80bd-67987c7b77ac}} is {{formula:6e45766b-0fb3-46d6-a6e7-e5435bf67899}} -automatic if its elements,
when written in base {{formula:6fd7038b-9942-44a5-a56d-f21e1a7ffa07}} ,
can be recognized by a finite automaton (see, for instance, {{cite:2981f360b098d597506a4b9aad02c0ac044f5289}}).
In this framework, there is a famous theorem by Cobham {{cite:cbe02fdf71b49fa3ccfac0a013cd6bac4ce03bdc}} that can be stated as follows.
If {{formula:d2e4f198-d92e-4b5b-b74f-e87db6588275}} is both {{formula:384f7acf-80d7-4cee-8258-e7ef80ce3500}} - and
{{formula:9605d4c2-d27d-4ba0-af2d-49959e427816}} -automatic,
where
{{formula:2e88fc0a-2566-4192-89b5-4674049ac962}} and {{formula:978944f9-7fe5-443c-858e-afcbfcab254b}} are multiplicatively independent, then {{formula:bf1f820b-919a-43e3-8626-6e9677cb7327}} is a periodic set, meaning that {{formula:41b5bdd2-0ebf-45fe-9370-18c023a8b641}}
is the union of a finite set and finitely many arithmetic progressions.
| m | 5146e7201503dab9f8a1f408bbdc33b6 |
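As a worked illustration (not taken from the excerpt): the set of multiples of 3 is 2-automatic, since a three-state finite automaton tracking the value modulo 3 recognizes its base-2 expansions; it is also trivially 3-automatic, and, consistent with Cobham's theorem, it is a periodic set (an arithmetic progression).

```python
def is_multiple_of_3(n: int) -> bool:
    """DFA over base-2 digits: the state is n mod 3 so far; reading a
    digit d sends state s to (2*s + d) mod 3; accept iff state is 0."""
    state = 0
    for d in map(int, bin(n)[2:]):
        state = (2 * state + d) % 3
    return state == 0

print([n for n in range(20) if is_multiple_of_3(n)])  # [0, 3, 6, 9, ...]
```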
To address the aforementioned issues, we propose the Contrastive Learning for Local and Global Learning MRI Reconstruction network (CLGNet), which is composed of a spatial branch and a wavelet branch, as illustrated in Fig. REF . Firstly, according to Fourier theory, a point-wise update in the Fourier domain globally affects all input features in the spatial domain, which suggests another way to learn global information. Hence, we introduce the Fast Fourier Convolution (FFC) {{cite:9c2fd66a16c94cb9398ac6c1eb235c575bd38612}} into our model and propose the Spatial and Fourier Layer (SFL), which adopts vanilla convolution and FFC to learn local and global information (see Fig. REF ). Based on the SFL, we further propose a Spatial and Fourier Residual Block (SFRB) and a Spatial and Fourier Residual Block in Residual module (SFRIR) as the main components of our model. Compared with self-attention and transformers, the proposed SFL learns global information more effectively and in less time. Secondly, we introduce Contrastive Learning (CL) to constrain the upper and lower bounds of the proposed CLGNet, as shown in Fig. REF . Constraining the quality of the recovered image requires two steps: 1) constraining the upper bound and 2) constraining the lower bound. For the upper bound, we pull the recovered MR image closer to the clear MR image; for the lower bound, we push the recovered MR image farther away from the undersampled MR image. After these two steps, the solution space is successfully constrained and the quality of the recovered MR image is further improved.
| i | 071ecb55f977853b63883ef556321473 |
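A minimal sketch of a spatial-plus-Fourier layer in the spirit of FFC/SFL is shown below; the layer name, channel widths, and the 1x1 convolution on the real/imaginary parts of the spectrum are assumptions, not CLGNet's exact design. The point is the one motivated above: a point-wise update in the Fourier domain affects every spatial location, giving a global receptive field in a single layer.

```python
import torch
import torch.nn as nn

class SpatialFourierLayer(nn.Module):
    """Local path: vanilla 3x3 conv.  Global path: 2-D FFT, a 1x1 conv
    over stacked real/imaginary channels, then inverse FFT."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq = nn.Conv2d(2 * channels, 2 * channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")        # (b, c, h, w//2+1)
        z = torch.cat([spec.real, spec.imag], dim=1)
        z = self.freq(z)                               # point-wise in frequency
        spec = torch.complex(z[:, :c], z[:, c:])
        global_feat = torch.fft.irfft2(spec, s=(h, w), norm="ortho")
        return self.local(x) + global_feat

x = torch.randn(2, 16, 64, 64)
print(SpatialFourierLayer(16)(x).shape)   # torch.Size([2, 16, 64, 64])
```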
We note that WideResNet {{cite:62035234a9ca38350c165b79c1ec3e146da81a4b}} used in prior works is much deeper and computationally more expensive (with {{formula:588ce0f0-40a0-4bfc-8ced-772be750d2df}} million parameters), whereas MSDNet with {{formula:51e08174-0623-4898-9207-4a47e8aefd5c}} million parameters achieves similar OOD detection performance, as shown in Table REF .
{{table:0157b0eb-5f09-4536-b8eb-07b0cbf3afb9}} | m | 7cd9198df58ab96468404e5352ba1231 |
These methods attribute importance to input features/signal dimensions for the output, i.e., how much the signal dimensions/features of the input contribute to the output across the neural network {{cite:8116aa31672b486d9d3679740fc0e697f74d4910}}, {{cite:d9b86876013b02b5c31235dd9351b21b3546920d}}. For a linear model y = w*x, the attribution is r = w {{formula:0945176d-e7c9-4bcf-b4ab-1aad5a3eac2b}} a, where w is the weight vector, {{formula:32f39bcf-28aa-4178-8236-6139e0563eb5}} denotes element-wise multiplication, and a is the signal {{cite:d9b86876013b02b5c31235dd9351b21b3546920d}}. Attributions are built upon signals, i.e., an attribution tells us the importance of each signal dimension/feature of the input image toward predicting the output; this is sometimes also referred to as relevance {{cite:a88e73f03444da0e9a01223b8938de5754452971}}.
Attributions give a more detailed explanation of model predictions than signals alone, and these methods have been used to analyse many medical DL models {{cite:fb12d4d40d30d5f084fc9aac9bcaca2086edaa57}}. Bohle et al. {{cite:fb12d4d40d30d5f084fc9aac9bcaca2086edaa57}} employed LRP (an attribution method) to locate Alzheimer's disease-affected areas in brain MRI images. They contrasted the saliency maps produced by guided backpropagation with LRP and discovered that LRP was more accurate in detecting areas known to be affected by Alzheimer's disease.
| m | 9f89efb73620a9c7dd1381f5f8eacdc8 |
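For the linear case above, the attribution rule is a one-liner; the toy weights and signal below are purely illustrative.

```python
import numpy as np

w = np.array([0.5, -1.2, 0.0, 2.0])    # weights of a toy linear model
a = np.array([1.0,  0.3, 4.0, 0.5])    # input signal

r = w * a                               # element-wise attribution per dimension
print("attributions:", r)
print("attributions sum to the output:", np.isclose(r.sum(), w @ a))
# Dimension 2 carries a large signal but zero weight, hence zero relevance:
# attribution separates what is in the input from what the model uses.
```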
We use the notation
{{formula:b2374f23-78ef-45c4-9098-1b9dde7cb5fd}}
By Markov's inequality, Theorem REF implies that there exists a set of the form
{{formula:92058e83-9f29-417e-9a25-05bc07896982}}
with
{{formula:43806c67-ca17-4774-8ac6-7b4a53936048}}
Since the distribution {{formula:003a2521-e0d2-49d3-bc95-c4344015847e}} is determined by its moments (see e.g. {{cite:8a9d6c70355e07491cdc5c0b435ebdc9e4de07a1}}), Theorem REF follows.
| r | 67c4793a228ffea172cb8c677c684b6e |
Astronomical observations over many length scales
support the existence of a number of novel phenomena, which are usually attributed
to dark matter (DM) and dark energy (DE).
Dark matter was introduced to explain a range of observed phenomena
at a galactic scale, such as flat rotation curves,
while
dark energy
is expected to account for cosmological-scale dynamics,
such as the accelerating expansion of the Universe.
For instance, the {{formula:45371a46-4cca-44f7-814f-dba74acdf84f}} CDM model, which is currently the most popular approach
used
in cosmology and galaxy-scale astrophysics,
makes use of both DE and cold DM concepts {{cite:a2e4b64f34c110ed6621cf0b514f7beb7374e7b0}}.
In spite of being a generally successful framework purporting to explain the large-scale structure of the Universe,
it currently faces certain challenges {{cite:43901c2d6dc520bd0f9cd51a520175b8a2603a78}}, {{cite:16f5c6c57e1531ddb52932be3163acda30f79d8e}}.
| i | 9134f92847d3a0399fe9cfbedfe9e142 |
First-order: Some first-order methods, such as extragradient {{cite:766aa58bbd25f64c4013866ca4dd96a391d6ba84}}, require a second, costly gradient evaluation per step.
Similarly, methods that alternate player updates are bottlenecked by waiting until the first player's gradient has been used before the second player's gradient can be evaluated.
However, many deep learning setups can parallelize the computation of both players' gradients, so alternating updates effectively cost an extra gradient evaluation.
We want a method which updates with the effective cost of one gradient evaluation.
Also, simultaneous updates are a standard choice in some settings {{cite:b64a8c8d4b7405a9f19c6849af34c4bb65909447}}.
| m | 4cf86362d9e8ca9ac4fe648b5f0cda31 |
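The trade-off can be seen on the classic bilinear game min_x max_y x*y (a standard illustration, not from the excerpt): simultaneous gradient descent-ascent spirals away from the equilibrium at the cost of one gradient evaluation per step, while extragradient converges but pays for a second, look-ahead gradient evaluation.

```python
import numpy as np

eta = 0.1
x_gda = y_gda = x_eg = y_eg = 1.0      # both methods start at (1, 1)

for _ in range(200):
    # simultaneous GDA: one gradient evaluation per player per step
    x_gda, y_gda = x_gda - eta * y_gda, y_gda + eta * x_gda
    # extragradient: extrapolate, then update using gradients evaluated
    # at the extrapolated point -- the second, costly evaluation
    x_h, y_h = x_eg - eta * y_eg, y_eg + eta * x_eg
    x_eg, y_eg = x_eg - eta * y_h, y_eg + eta * x_h

print(f"GDA distance to equilibrium: {np.hypot(x_gda, y_gda):.3f}")  # grows
print(f"EG  distance to equilibrium: {np.hypot(x_eg, y_eg):.3f}")    # shrinks
```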
where {{formula:fc3ecb1d-a712-4e67-b970-f5ded2212036}} and {{formula:29ada17e-f95e-451b-ada3-3e96807cceb8}} are borrowed from {{cite:11066c6bb9090ef60c42551622151c8f1852e6f5}} and {{formula:e933fd7e-f0e2-4504-a887-7eb06614e672}} is the adversarial loss discussed above. {{formula:7a382052-35ed-48d1-b31e-1c676f2e5dc9}} is the main weakly supervised loss {{cite:11066c6bb9090ef60c42551622151c8f1852e6f5}} for event classification, while {{formula:5f55f7b1-0c7e-4a28-aa01-84ee819087bb}} is the modality-specific classification loss that is applied after label smoothing. These two components are also elaborated in the extended version of our paper.
| m | 80bebd2331fb35bbf33959d2e16b35d8 |
We carried out Monte Carlo simulations in order to understand whether these {{formula:67cca974-8fe8-4125-9121-a27695f71d15}} corrections can explain the reduced scintillation photon collection in the solid phase. Events from the decay of {{formula:b87ff9ae-4409-4545-a30e-ab059a715aea}}Co are simulated using a dedicated event generator {{cite:2f26be4b443bfb2afb7d7503dfa7b9a065f91cec}}. Particles from the simulated source are then transported through the xenon bulk volume using the GEANT4 framework {{cite:b0246b9227fe45a57d306c4606c4a00d72f43019}} (version 4.9.6 patch 2) coupled to a scintillation light production module for xenon {{cite:25297ebd7023e2104ff42d55763d45115ae8f9ae}}. The model accounts for all relevant xenon properties, including the non-linear scintillation light yield as a function of incident gamma-ray energy. We assume the same scintillation light yield model in the liquid and solid phase simulations and account for the differences in the xenon emission spectrum in each phase. However, we used different scintillation wavelengths in liquid (178 nm) and solid (172 nm), which effectively changes the energy-dependent W{{formula:a94813b3-a101-453d-b92a-79a946c4756d}} value (the average energy required per scintillation photon) in solid by 3.4% in the model. In fact, the W{{formula:e76b4bbe-5d01-4755-b62f-449e4e6f25de}} in solid has some ambiguity in the references {{cite:6182d11c5b9cc6cbdf440bed0ac8b6531744abd7}}, {{cite:c781b880fb909d4c1142d16b2ca281bbdae60e9f}}. The model also accounts for all relevant wavelength-dependent material properties. We assume a xenon bulk absorption length of 100 cm and a Rayleigh scattering length of about 30 cm {{cite:9e2963f86c38387b43bdeb74f24781fb216f877c}}, and we account for the wavelength-dependent quantum efficiency of the PMTs. We found that the major uncertainty in the light collection efficiency comes from the PTFE reflectivity; for the PTFE housing, we assume a reflection coefficient of 0.57 at the xenon scintillation wavelengths. Figure REF shows the results of the simulation, which suggest a 14.2% reduction in the number of photo-electrons expected in the solid phase compared to liquid, consistent with the measured values (see Table REF ). The measured energy resolution at 122 keV in liquid xenon is about 0.45 (FWHM). The energy resolution of this system depends on photo-electron statistical fluctuations, fluctuations of electron-ion recombination due to escaped electrons, the non-proportionality of the scintillation light yield, the single-p.e. response of the PMTs, and effects from the detector geometry {{cite:19df61bd36589b678fbe123e0b5f0498193d41d7}}. According to the simulation results, the energy resolution contributed by the photo-electron statistics, detector geometry and optical system is about 0.272 (FWHM). The ratio of the measured width of the single-p.e. distribution to its mean is 0.3 (=s) in sigma, which contributes about 0.092 (FWHM) to the energy resolution (=2.35/{{formula:5e8ee5e3-7662-475b-b67a-39c9a6508bbe}}). The remaining fluctuation may not be attributable solely to fluctuations of the electron-ion recombination but also to some unknown detector systematics, which deserve further study with an improved detector setup.
{{figure:875edbb8-3d8f-48f9-b926-0181aa489ac4}} | d | 5f6be16ef67256edc669bea8343403f8 |
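A hedged back-of-the-envelope check of the resolution budget quoted above, assuming the listed contributions are independent and therefore add in quadrature:

```python
import numpy as np

measured = 0.45            # measured FWHM resolution at 122 keV (liquid)
pe_stats_geom = 0.272      # photo-electron statistics + geometry + optics
single_pe = 0.092          # single-p.e. width contribution

accounted = np.hypot(pe_stats_geom, single_pe)
residual = np.sqrt(measured**2 - accounted**2)
# The residual (~0.35 FWHM) is the part attributed above to recombination
# fluctuations and unknown detector systematics.
print(f"accounted: {accounted:.3f}, unexplained residual: {residual:.3f}")
```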
In order to test our proposal, we use 7 video sequences from the UVG dataset {{cite:e0efd8ba4b56b9830e831ada83bfacb9f2b3b7d7}} at 1080p resolution. We use 10 sets of 16 consecutive frames (160 frames in total) from each sequence and compress them with the compressai {{cite:577ba868010dcc550edbd77eb3928c6dd068ed41}} implementation of SSF {{cite:de8b6bb761b8c96c4d3c141674d0760a782d8e72}} and the authors' official implementations of LHBDC {{cite:48bbbc9fedbee7f9aa31ca05ccf31207fd802c9d}} and AIVC {{cite:796191d8a6480103ba9a2d08576f375bb6a56ff2}}. Since these methods were proposed for different purposes, we do not compare them against each other, but instead measure the saving of our generic solution relative to each baseline model. SSF encodes the first frame as an I frame and the remaining 15 frames as P frames. LHBDC needs two reference frames; we assume the first and the 17th frames are I frames, and the 15 frames in between are B frames encoded with hierarchical bi-directional settings. In the calculations, we did not count the 17th frame, because it is also the next GOP's first reference frame and should be accounted for in the next GOP's file size. AIVC encodes the first frame as an I frame, the 16th frame as a P frame, and the remaining in-between frames as B frames. To show the sources of the gap in detail, we give the results in Table REF for the lowest-bpp objective. In all cases, the ratio is the percentage of each information's bitlength ({{formula:58889d26-518d-4d1c-8e02-5daae7f27886}}) with respect to the total bitlength {{formula:f65dbbb1-e408-4f83-bd24-43d2957db541}} in the baseline model. The gap of each information type is {{formula:f002ef11-dbfd-453e-bd18-eb72c1b42bf8}}, while the total gap is {{formula:03e056cb-38f9-47d2-94de-e7fb1cc37f76}}. It can be seen that for all methods the gap is not negligible, since a potential performance gain of several percent could be achieved. We calculate our savings as {{formula:9dfaba2a-0d5a-410f-bbad-7f405f854273}} for each information type, where the total saving is {{formula:b5c59d74-6240-4ea7-9f66-d04fb5f5b590}}. We measure these percentages for each frame type as well as for the 16-frame sequences of the videos reported in Table REF .
| r | faffca047494807a426c5b5ba34553a8 |
In general, approaches to image super-resolution can be classified into two categories:
single image super-resolution (SISR) and multi-image super-resolution (MISR).
Single image super-resolution has recently attracted considerable attention in
the image processing community {{cite:8c75c3b4c0573f223e6fb70eb9c1054c7aec68bc}}, {{cite:8106107f5b888c2fb1e2e5e5f02ed459a8571295}}.
It is a highly ill-posed problem: during the acquisition of the low-resolution (LR) images, some high-frequency components are lost or aliased, hindering their correct reconstruction.
In contrast, MISR aims to recover the true details in the super-resolved (SR) image by combining the non-redundant information from multiple LR observations.
| i | c4f1d5cedb4cd75050ab34a0030bf927 |
Approach 2 - Fair meta-learning for segmentation: This strategy adds a meta-fair classifier to the segmentation network to train a model that not only segments cardiac MR images but also classifies the protected attribute(s). As the classifier we used a DenseNet network {{cite:28d12106c97f48bec4c2c7fb5380ed7aa07262ce}}; its input was the cardiac MR image as one channel and the output of the nnU-Net segmentation network as a second channel. We formulate the problem as multi-task learning, where both networks are jointly optimized; the idea is that the classification network prevents the learning of a dominant group from negatively impacting the learning of another. Both networks were first trained for 500 epochs independently, and then jointly trained for another 500 epochs without deep supervision. This is a novel approach to segmentation fairness, based on the work proposed by Xu et al. {{cite:bb5c22d9b5a65ab1d2a413a9b1fe1c452d7124d2}} for classification fairness, which combined a ResNet-18 network for facial expression recognition with a fully connected network for protected attribute classification.
| m | 8e8ee15f204eeb99505e831b86f8fe2d |
In this section, we verify that Open-sampling can boost the standard training and several state-of-the-art techniques by integrating Open-sampling with the following methods:
1) Standard: all the examples have the same weights; by default, we use standard cross-entropy loss.
2) SMOTE {{cite:6a46e47700b85bc1d47c629a9bb8e94684f8cc48}}: a variant of re-sampling with data augmentation.
3) CB-RW {{cite:cc9943b1f2f77367262dcb68e1c5e760f6fdf0c5}}: training examples are re-weighted according to the inverse of the effective number of samples in each class, defined as {{formula:6f73e546-038c-41c8-a856-86f65722d144}} (a minimal weight computation is sketched after this list).
4) CB-Focal {{cite:cc9943b1f2f77367262dcb68e1c5e760f6fdf0c5}}: the CB method is combined with Focal loss.
5) M2m {{cite:77976f2a1012f772d13ac773bb135c3977d186a5}}: an over-sampling method with adversarial examples.
6) LDAM-RW {{cite:e13cdea08e148df17198a80e8a0324e26705a661}}: the method derives a generalization error bound for the imbalanced training and uses a margin-aware multi-class weighted cross entropy loss.
7) LDAM-DRW {{cite:e13cdea08e148df17198a80e8a0324e26705a661}}: the network is trained with LDAM loss and deferred re-balancing training.
8) Balanced Softmax {{cite:14ca2340107d942afe1334e717317cf010eadb4e}}: the method derives a Balanced Softmax function from the probabilistic perspective that explicitly models the test-time label distribution shift.
9) SSP {{cite:9584c0d972df4f83215f288c88daa678cb6a1987}}: the method uses self-supervised learning to pre-train the network on the auxiliary dataset before standard training.
Here, we do not expect vanilla Open-sampling to achieve state-of-the-art results compared with many complicated methods; still, our method is a promising option in the family of class-imbalanced learning methods, because it outperforms existing data re-balancing methods and improves existing state-of-the-art methods.
{{table:5b094381-9054-4e2a-b328-d865779701f1}} | m | 8e123e11aed15cbc55f33f046f6219da |
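A minimal sketch of the CB-RW weighting in baseline 3, assuming the effective number of samples from the cited class-balanced loss paper, E_n = (1 - beta^n) / (1 - beta); classes are weighted by 1/E_n and the weights normalized to sum to the number of classes.

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    counts = np.asarray(counts, dtype=float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)  # E_n per class
    weights = 1.0 / effective_num                          # inverse weighting
    return weights * len(counts) / weights.sum()           # normalize

counts = [5000, 2000, 500, 100, 10]       # long-tailed class frequencies
print(class_balanced_weights(counts).round(3))
# Tail classes receive much larger weights than head classes.
```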
The sensitivity of our observations towards TMC-1, between 0.19 and 0.35 mK (1{{formula:308573aa-6062-406a-832f-b6e832e77d07}}), is significantly better than that of
previously published line surveys of this source at the same frequencies {{cite:c3274b5e794d203144ac1f2a3d9d662dc27ca5d4}}.
In fact, it has been possible to detect many individual lines from molecular species
that were previously reported only through stacking techniques {{cite:320d11b37b07f6d66a64f7514b69509c8ac635eb}}, {{cite:4738401d3e77e022651abf856d45b03957a88512}}, {{cite:d015784a2fc022310c8de8e42c29978729ce1835}}.
Line identification in this work has been performed using the MADEX catalogue {{cite:d3dc60ddfdd34622f137e7e1d3fdaca373f0dd0a}},
the Cologne Database of Molecular Spectroscopy (CDMS) catalogue ({{cite:78ce0ef8afb0e019f6ced455dac639efb9f88a93}}), and the
Jet Propulsion Laboratory (JPL) catalogue (JPL; {{cite:5c281324234c443c35c364aa4ebe1ef4bc883e98}}).
| r | ecda31470bd2884cbd93022b005376a2 |
The RL algorithms which have achieved state-of-the-art results on various robots rely on deep neural networks because of their good approximation capabilities. With advancements in computing and auto-differentiation frameworks, there has been a surge in the adoption of neural networks in the robotics community; they have even found applications in learning control policies for multirotor UAVs. Hwangbo et al. {{cite:809d682a5bee6ba9f9a13820c9ea73b047be2732}} present a quadcopter policy learning approach in the actor-critic framework using deterministic policy optimization, with an RL algorithm based on information-theoretic natural gradient descent. The algorithm presented in this work is conservative and uses a PID controller along with the learned policy to stabilize the learning process. Koch et al. presented an RL-based approach to learning a quadcopter attitude controller in {{cite:3bb9af35ba21e17ff39b9fe3b0e880131d0f7b5c}}. Another excellent work on the sim-to-real transfer of learned UAV policies using deep RL was presented by Molchanov et al. in {{cite:86f90c9f09d16a371a00df6283a1808a35ceaef0}}, where the authors showed a successful transfer of quadcopter control policies learned in simulation to real-world platforms. They considered {{formula:15622400-5952-40c6-9155-2a2622eaef33}}-configuration quadcopters with varying physical properties (such as the mass and dimensions of the system) to analyze the robustness of the learned policies.
| i | c55a69da3150bc187a66c6cc15d3477f |
In this paper we have generalized the factorization proposal introduced in {{cite:e3c88cee83d65ed0c65b523852263494a85ab425}}. The main idea is to decompose observables into a self-averaging sector and non-self-averaging sectors. We find that the contributions from different sectors have interesting statistics in the semi-classical limit: when the self-averaging sector survives in this limit, the observable is self-averaging. An interesting phenomenon is sector condensation, meaning that the surviving non-self-averaging sectors tend to condense; in the extreme case only one non-self-averaging sector is left over, resembling Bose-Einstein condensation. The half-wormhole saddle is then naturally understood as the condensed sectors. We apply this proposal to a simple statistical model, the 0-SYK model, and a random matrix model. Half-wormhole saddles are identified and are in agreement with the known results. With our proposal we also show the equivalence between the results of {{cite:e3c88cee83d65ed0c65b523852263494a85ab425}} and {{cite:53e09668d49262d6ad9a0c3f0e20e6caaee897ab}}. We also studied multi-linked half-wormholes and their relations. There are several future directions.
| d | 7aba51ff13702a14a6a9b1479ba6e6fc |
(2) A Hom-alternative algebra {{cite:b71c39ba690fb5a4e64bf881888e8093adf97121}} is a multiplicative Hom-algebra {{formula:6a4944c8-246e-4b27-ba96-0a77b3026b68}} that satisfies both
{{formula:93402e50-16fe-42b5-bbff-056cdcd9fd1e}}
| r | 9e799f886cbc218b99c47868dd02f1c1 |
We next investigate the performance of our joint graphical horseshoe estimator, jointGHS, through comparison with the Bayesian spike-and-slab joint graphical lasso of {{cite:ba10c30db8945d6de15ec91dc23b06a94df6f129}} and the joint graphical lasso of {{cite:4dc72530cbf68b64a670d9dfbe3b3f8518aa56a9}}. We restrict ourselves to a setting with {{formula:0672d799-5a17-4277-a78a-b67778a21a04}} graphs with {{formula:4f7b5852-7ffc-46ae-847f-8e85675d471b}} nodes each because the latter methods are either infeasible within reasonable time (still running after multiple days) or very time consuming (taking more than 24 hours) for a larger number of nodes and graphs. By varying the similarity, i.e., proportion of common edges, between the graphs, we assess the performance of the methods in different settings. Namely, we consider six settings, with the similarity between the two graphs varying from {{formula:61b08885-72a1-4734-a602-c968855be890}} edge disagreement (i.e., the same edge set) to {{formula:7c97bab0-ac7a-472c-aec6-8e6aa7adb20c}} edge disagreement (i.e., no common edges). For each setting, we construct two {{formula:3362d280-1cc8-4438-9d0c-66b165034037}} precision matrices with the desired level of similarity and sample {{formula:eab77a51-f487-44e1-8616-30afb2001b75}} data sets from each of the two corresponding multivariate Gaussian distributions, with {{formula:60c0ecfe-904a-4e59-9df9-d38413e33c0c}} observations for the first graph and {{formula:85129214-a86d-4625-bc8c-6949b68220f4}} observations for the second. In all settings, both graphs have true sparsity {{formula:b751e025-1044-4184-aa7a-138464e432fd}} , corresponding to 49 edges.
| m | 872e68ea7f4d8211631ff66f36157245 |
A naive way to utilize these labels is to design a multi-task learning process {{cite:d28b8f09d7e91f6d74dc1789bfad1ceb725cf1fc}} that directly combines vanilla unsupervised pre-training losses such as MLM {{cite:e898fb4a9217608dc0b9684d2bd1a95cda6391d0}} with a supervised DA classification loss.
However, this approach has several drawbacks when generalizing to large-scale pre-training paradigms:
(1) The DA annotation schema is inconsistent among existing corpora, making it challenging to collect large-scale DA annotations;
(2) A vast majority of available dialogs do not have DA labels, and a naive joint training process without careful regularization would lead to severe over-fitting on the labeled samples, resulting in low performance;
(3) All supervision signals from unlabeled data are self-supervised, without any explicit inference over the DA space, so the linguistic knowledge PCMs can extract is only of a general type, and knowledge of dialog policy cannot be effectively explored.
| i | 2c711b9c077313a49a7dba7a41558499 |
where {{formula:89890577-2b99-4bdf-81c7-337a06d132ad}} such that {{formula:51eaf7f9-86c2-4dd7-a7b3-d4297ecfc682}} for each {{formula:d985bcb7-12a7-4a7d-8ec9-fb7e7ad014ae}} . Note
that the model in {{cite:8d769ff5da6fde1aa96df669f68dc85f83fe9f70}} also contains a bias
vector, so that {{formula:189d2e8e-0d53-48ec-b3ae-13000cf0fa65}} in (REF ) is replaced by
{{formula:d2d0d616-915f-4ef8-9645-604cbdef08bb}} , for {{formula:c4419544-23cf-42a2-a2b9-7696d79b1765}} . We drop the bias to
remove some degeneracy of the problem for simplicity. Another, more
crucial, difference is that in actual deep learning, as considered in {{cite:8d769ff5da6fde1aa96df669f68dc85f83fe9f70}}, the feature vectors {{formula:b20c810a-a376-45bc-9bfe-3661f1e6b258}} are given by
the output of deep neural networks acting on the input data; this would
make the variational problem much harder to analyze, and thus we will
only study the simplified scenario.
| i | 5659b203b257dde0050eada0dac21275 |
In this paper, steering wheel angle prediction is performed using a two-stream deep Convolutional Neural Network (CNN) {{cite:f5778c82cbb1ad89668e800f152f72e91c8535a7}}, {{cite:d633e987c9539b590579d44cacc6db08e0aaa2ba}}. The architecture is described in detail in Figure REF . Each stream in our implementation has two convolutional layers and a max-pooling layer. After concatenation, there are two fully-connected layers activated by the ReLU function.
{{figure:4ac1b2be-c1a2-496b-a174-b79d53c90475}} | m | 61958a8f901b719ba6fb1369181eec01 |
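A minimal sketch matching the description above (two convolutional layers and a max-pool per stream, concatenation, then two fully-connected layers): the channel widths, kernel sizes, 64x64 input resolution, and the nature of the two input streams are assumptions, and the final regression output is left linear.

```python
import torch
import torch.nn as nn

class TwoStreamSteering(nn.Module):
    """Two-stream CNN regressing a single steering wheel angle."""

    def __init__(self):
        super().__init__()
        self.stream_a = self._make_stream()   # e.g., current camera frame
        self.stream_b = self._make_stream()   # e.g., a second input view
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 1),                # predicted steering angle
        )

    @staticmethod
    def _make_stream():
        # two conv layers followed by one max-pooling layer per stream
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),
        )

    def forward(self, a, b):
        feats = torch.cat([self.stream_a(a), self.stream_b(b)], dim=1)
        return self.head(feats)

a = b = torch.randn(2, 3, 64, 64)
print(TwoStreamSteering()(a, b).shape)   # torch.Size([2, 1])
```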
We are interested in extending this research in multiple future directions.
Despite using a decentralized trajectory optimization framework, our experiments in this paper were performed on a single desktop PC, and commands were then transmitted to each UAV individually in real time.
We are currently working on a new multi-UAV system with onboard processors, where one master drone will run the fast centralized planner and then communicate with all other UAVs via a mesh network so that each drone can optimize its own trajectories.
In addition, we plan to employ our system in novel applications such as 3D reconstruction of scenes in the wild.
Even though multi-view systems are already well developed for indoor environments {{cite:0a3fe67ef2d8580ddca9856a53ebcd9046b2ccb9}}, creating such systems for capturing images in natural environments is still an open research field.
| d | 7306f3ed630b9ab55c5ea0aa73520b2e |
In CivilComments-WILDS, we divide the data into training, validation, and test datasets and maximize worst-group accuracy on the validation data (and, by association, maximize the average accuracy over all domains). Then, we perform model selection and evaluate the OOD accuracy on the test data.
For Amazon-WILDS datasets, we adopt the same hyperparameter selection strategy as that for CivilComments-WILDS.
As a metric, we evaluate not the worst-group performance but rather the 10th-percentile accuracy over the per-domain performances, following the standard federated learning literature {{cite:45aa2d9112e7c2c274124cb232692ac6d4de75e2}}.
{{table:05b7b5c7-4f71-489d-8b75-656f4b973e7b}}{{figure:714d5e3d-cc1b-48f1-8bb8-ea415e3d6237}} | m | 29b50355757caaf7917559c5de255070 |
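The reported metric reduces to a percentile over per-domain accuracies; a sketch with made-up accuracy values:

```python
import numpy as np

per_domain_acc = np.array([0.91, 0.88, 0.85, 0.93, 0.72, 0.80,
                           0.95, 0.77, 0.83, 0.90])   # one value per domain
print(f"worst-group accuracy:     {per_domain_acc.min():.3f}")
print(f"10th-percentile accuracy: {np.percentile(per_domain_acc, 10):.3f}")
```

The percentile is less brittle than the minimum when some domains have very few evaluation samples.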
The formalism of the {{formula:057315a2-8609-4ce5-9ab1-35da429902dc}}-modes {{formula:5273ab8f-3c0d-480e-8d75-409b5b5395be}}, or “reparametrization modes”, discussed in the last section to obtain the {{formula:e04d6d8a-a3e8-4f32-a776-98494d42cf1b}} corrections to the 4pt functions in the JT theory, has since been generalised to arbitrary {{formula:0001cfd6-1683-494a-9b2d-adcd27ed39b3}}-dimensional CFTs in {{cite:8a3d8a3bec0fd0b171d049de4de1c3433947261e}}. These modes have conformal dimension -1 and can be seen as a tool to obtain the contribution of the stress-tensor conformal blocks to 4pt functions; the stress tensor and its shadow can be seen as descendants of these {{formula:0a9918bd-66b9-43e4-ae9a-fcb9749b062f}}-modes. Their 2pt functions in a {{formula:274279a9-778e-454f-8514-0be0a2a4afc9}}-dimensional CFT (without a temperature or chemical potential when {{formula:35998911-1887-4f4a-8d6f-47d601c60742}}) can be used to obtain the stress-tensor conformal block contribution to a 4pt function {{cite:8a3d8a3bec0fd0b171d049de4de1c3433947261e}}, in much the same way as used in section 5. The authors of {{cite:8a3d8a3bec0fd0b171d049de4de1c3433947261e}} also find an action which reproduces the expected {{formula:b51d3592-1820-4f5f-95d1-ce47e2c75dea}} correlator from general conformal invariance {{formula:729a20eb-a877-41d2-9b7e-d023bd42c03d}} an anomaly action for even {{formula:9150a363-8160-4fcc-82fc-a91ce4cef51d}}. Given the appearance of the {{formula:73195309-8d7f-4d72-bc4a-2377315d4d62}}-modes in the near-horizon analysis, it seems interesting to speculate upon the bulk analogues of these modes in a holographic setting. (Pedagogically, the {{formula:2bbf8223-f0b0-45c0-9e77-ef07603ee70d}}-modes were first discovered in the near-horizon setting, where the JT model was used to describe near-extremal dynamics {{cite:95e4331cef9a042632b18e04040469461f535a8e}}, {{cite:6c0d45600bde096c5b31f554fa0ff2fdb7ec8d35}}. Following the renewed analysis of 3d gravity in AdS by Cotler and Jensen {{cite:a1bdb9bdf5735747798b9db00d1a7578ad5d112f}}, they were subsequently generalized in {{cite:8a3d8a3bec0fd0b171d049de4de1c3433947261e}} to any CFT{{formula:07f8a747-26f1-4777-942e-1b36ab3aec97}}.) The {{formula:54c83017-21e7-4c3d-a886-ff48faddc07a}}-modes discussed in {{cite:8a3d8a3bec0fd0b171d049de4de1c3433947261e}} were indeed described as degrees of freedom capturing the chaotic behaviour of the CFT. It is therefore not far-fetched to speculate that the {{formula:43c17a94-15fe-484c-85ba-17a1fb1572fb}}-modes appearing in the JT or the JT{{formula:23a956d6-3f1c-412f-a581-8d21626af0df}} analysis are the dimensionally reduced, near-extremal analogues of the {{formula:3fd58475-3aa3-43f4-89ec-757f8c1cc1cb}} modes described in {{cite:8a3d8a3bec0fd0b171d049de4de1c3433947261e}} for a CFT{{formula:702504cd-ab21-4ca8-b087-d88c62971e68}}. Holographically understanding the {{formula:c5bd4347-9540-4f95-81d5-0c339df67617}}-modes for AdS{{formula:19fd6810-40b2-42a0-a432-0f411b90bcbe}} black holes, along with the prescription of going near horizon and near extremal furnished in section 3, would provide a better understanding of the UV completion of the IR degrees of freedom we see in the JT analysis.
| d | f670375817cbb1fbcde87beeeacd094b |
Principal Component Analysis (PCA) is a well-known data dimensionality reduction technique {{cite:b390639706ceb056407e79fe0db527fb8c436c36}}. It works by projecting a dataset of {{formula:56d942d6-04a7-46de-9ab3-1f098ce52ae6}} vectors {{formula:925d197f-bbbd-4a47-a478-abc57e11553c}} , {{formula:e7817ea7-a867-4ef4-90d8-ef290075f6dd}} (represented by a data matrix {{formula:f2e09061-4d4e-43d3-b844-3898436c4ced}} , whose rows are such vectors) onto a reduced {{formula:6832a6f1-8560-4101-b341-f038c17cbd0f}} -dimensional subspace of {{formula:9e8f06e4-4033-40cc-aa2e-7db779df99c0}} , which is generated by the first {{formula:b7b25477-9420-4ed0-8a3f-49107304dd5a}} so-called principal directions. These are orthonormal eigenvectors of the symmetric matrix {{formula:db50f07d-ac5d-4d16-a4ad-6813bab02c9f}} , associated with its {{formula:b4d2c3b5-6243-47d3-ad4b-42adf71bf6c3}} largest positive eigenvalues. The latter are proportional (via the multiplicative factor {{formula:06f9421e-5c0e-48fb-9c4e-90b602206174}} ) to the {{formula:b966b37c-fb1b-4866-8f97-d450ab919790}} largest positive eigenvalues of the related Gram matrix {{formula:ddec8ba6-a640-4ae7-9a97-60f7284b169d}} , whose element in position {{formula:db6d5e46-406d-411e-b6c6-54b81f3e0f06}} is the inner product between the vectors {{formula:03b612be-41fa-4529-bf0d-5a180a781f49}} and {{formula:d83ee93f-efbd-46ad-8c3e-27d8d7b1d7b5}} . Focusing on such eigenvalues is important because the eigenvalues corresponding to discarded principal directions (the ones associated with the successive eigenvalues, not selected by PCA) provide information about the mean squared error of approximation of the dataset when only the first {{formula:ade37654-a570-4e14-bfac-d225567fbbc4}} principal directions are kept to construct that approximation (as a consequence, knowing these eigenvalues is useful also to select a suitable value for {{formula:6de06d29-2ed0-4ac5-8f0f-3f5a6300301b}} ). Moreover, when the dataset has zero mean, each such eigenvalue represents the empirical variance of the projection of the dataset onto the corresponding principal direction.
| i | 9a0f4c4c1ad0d4d3586368f98edeede6 |
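A small numerical check of the relations described above, assuming the elided symmetric matrix is the scaled covariance (1/n) X^T X: its spectrum matches 1/n times the Gram-matrix spectrum, and the discarded eigenvalues sum to the mean squared approximation error of a rank-k PCA.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 5, 2
X = rng.normal(size=(n, d))
X -= X.mean(axis=0)                                    # zero-mean dataset

cov_eigs = np.linalg.eigvalsh(X.T @ X / n)[::-1]       # spectrum of (1/n) X^T X
gram_eigs = np.linalg.eigvalsh(X @ X.T)[::-1][:d] / n  # top d Gram eigenvalues / n
print(np.allclose(cov_eigs, gram_eigs))                # True: same nonzero spectrum

# Mean squared error of projecting onto the first k principal directions
# equals the sum of the discarded eigenvalues:
U = np.linalg.eigh(X.T @ X / n)[1][:, ::-1][:, :k]     # top-k principal directions
mse = np.mean(np.sum((X - X @ U @ U.T) ** 2, axis=1))
print(np.isclose(mse, cov_eigs[k:].sum()))             # True
```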
Since industrial machines are operated by humans, learning control strategies by imitation learning {{cite:5202a36248e0f64be6060f12ea52ddd04ede901c}} may be useful for a wide range of applications.
If the applied task has multimodal state transitions, its policy model can be extended by multimodality or robustness {{cite:1896bb47301956ed030306f526c0fb66b1882085}}.
In addition, deep kernel learning {{cite:c169868a72c6673fe6be1e721540c51040ff3c6f}} can be introduced if handling such high-dimensional data is needed.
| d | 4827cf4d3f590f746d077a99d29ba431 |
Motivated by the goal of generalizing Artin's Approximation Theorem (cf. {{cite:7963fde16935a0972829e43201ed64610d256beb}}) to excellent Henselian
rings, the third author developed a powerful tool, the General Néron Desingularization (cf.
{{cite:ad19ed1e7f9b5b134925e610a2d05449def909a3}}). This result was later discussed and used by many other authors (cf. {{cite:c876d93024aacb4c99a6cffba673bc2e0d8e7f80}}, {{cite:37aea8780d82abfb5b6210b3706e706177b8e6bb}}, {{cite:056b466c631a657cdaea5bbfba177d1100f181c8}}). The proof of the desingularization was not constructive. In this paper
we give an algorithm to compute the Néron Desingularization in an important special case. We begin by recalling some standard definitions.
| i | e6809012ccdaff53577e796392c471a4 |
One important point shown in this section is that the irreversible islands-in-the-stream formalism can also be captured by a remarkably simple quantum statistical mechanics model involving a temporal sequence of unitary averages. One can view this as a nested generalization of Page's theorem. The fact that ensemble averages are involved is not a surprise, because there are strong arguments that the semi-classical gravitational path integral actually computes an average over some more microscopic description {{cite:2eef6e74a92d0e4fcf4a0e2e0f87b1d2a91ea9f6}}, {{cite:49696b0897b7d9815e61c62cd87578ce6b3e2279}}, {{cite:fcf8064d7ff7e7ec1b133057d69765b06d32f8cc}}. This is immediately clear in JT gravity {{cite:ae2f68aba8084c3ed3a3055522a654d6ed1f3510}}, {{cite:b1aef1e308090e1aa20e98267f584e269350d75b}} but has been argued to hold generally in {{cite:c1cf6565fb5550fea0ab0dad53d859258d5689ed}}. In this latter work the ensemble average arises from the fact that, when computing a Rényi entropy using the replica method, there is something special about a gravitational theory: the way the replicas are joined along a future Cauchy surface can itself fluctuate. The upshot is that the Lorentzian replica wormhole and associated island correspond to a region on the Cauchy slice where the replicas are joined in a non-trivial way. This kind of occurrence is familiar from the theory of baby universes, and an ensemble interpretation arises in the same way. It would be interesting to relate this baby universe ensemble to the sequence of unitary averages that we have found describing the correlations in the Hawking radiation.
| d | b654a36bdae17b3a6be29e105a0ef016 |
where {{formula:6c1a1766-a362-498a-a3cd-0b43edb84cd0}} have the disjoint coherence support, while a strictly incoherent operator has the form {{cite:32a5eb1b7cf535247de18d198ff43588621135c6}}
{{formula:00a3a91b-a19c-43b5-9728-5149870a5f5a}}
| d | feb65072fc9a23e38090826f410bf219 |
This is a paper about geometry of variations. We formulate definitions of the objects and structures which are cornerstones of
Batalin–Vilkovisky formalism {{cite:a62926e61002d890f4319af8d331e56c095c32a8}}, {{cite:5db1eabe16e1e4726f7bbf0789079a031719c588}}, {{cite:2588b9a01e8b34a23e70a6e8bb0ac7f71b987718}}, {{cite:e37ce1f967b62822411cd3e666d3222447fd57fe}}, {{cite:6386dc25aebb50f77feb441cb2b6eed4c29926b6}}.
To confirm the intrinsic self-regularisation of the BV-Laplacian, we explain why there are no divergences in it (such excessive elements are traditionally encoded by using derivatives
of Dirac's {{formula:2d4085a0-38ab-48d4-bd0c-04e9512c927a}}-distribution).
Namely, we specify the geometry in which the following canonical inter-relations between the variational Schouten bracket {{formula:5917da3d-af1c-408d-8d1b-996f2a9821bc}} and BV-Laplacian {{formula:f1c83ee8-eb58-4508-9dce-887186a9e290}} are rigorously proven for any BV-functionals {{formula:f85f05b5-ff53-44f5-9f71-e6efc696679f}} :
{{formula:9726f4ab-6ce1-4557-855d-693d702dd939}}
| i | 3cb9d45e2ff38afdef883482793217f6 |
As mentioned above, directly finding the solution of CSP is infeasible. However, as developed in {{cite:ef7215a502de6207c139c5394d87509436fa4bdf}}, iterative algorithms such as compression-based projected gradient descent can be developed to approximate the solution.
In this case, a denoising step such as (REF ) in the algorithm can be recognized as a code (a pair of encoding-decoding mappings) playing the role of (REF ).
Assuming the above solution is a good approximation to CSP, the main reconstruction error term in (REF ) is bounded by the distortion of the code, i.e., {{formula:33bcee84-efc9-486e-b064-532acbe2f196}}. Therefore, a better code (with a smaller {{formula:3ee71f26-3d92-4028-9e01-0bf64d886592}}) for the SCI signal can potentially achieve a better result. This is consistent with the experimental results in Sec. and has also been observed in the literature {{cite:d8a2ec80d733d063e694026b53cc5368a63dccaf}}.
If we treat the deep denoiser (the deep denoising network) as a code, then, since the performance of deep denoisers is usually higher than that of conventional denoising algorithms, the plug-and-play algorithm using a deep denoiser (described in Sec. REF and shown in Fig. REF (d)) is expected to yield good results for SCI reconstruction. This is consistent with the theoretical analysis in Theorem REF .
However, the end-to-end deep learning algorithms described in Sec. (shown in Fig. REF (b)) may not be amenable to this theoretical analysis, since it is challenging to define the corresponding encoder and decoder described in the theorem.
| r | b50de3fd39efe284aa0c49a168cf56b9 |
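A minimal sketch of a compression/denoising-based projected gradient descent of the kind discussed above, with soft thresholding standing in for the code or deep denoiser (the sparse-recovery setup and all parameter values are illustrative assumptions):

```python
import numpy as np

def soft_threshold(x, lam):
    """Toy 'denoiser' playing the role of the encoding-decoding step;
    any plug-and-play denoiser could be substituted here."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pnp_pgd(y, A, steps=300, lam=0.05):
    """Gradient step on ||y - A x||^2, then a denoising/projection step."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + eta * A.T @ (y - A @ x)   # data-fidelity gradient step
        x = soft_threshold(x, lam)        # denoiser as signal-model projection
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                             # compressive measurements

x_hat = pnp_pgd(y, A)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

A stronger denoiser (smaller distortion) in the projection slot would tighten the reconstruction error bound, matching the discussion above.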
The computation of a primitive element of a field extension can be done by hand (if the degree is small enough) or with the aid of any computer algebra system.
For some of the computations we used the algebra software Magma {{cite:95600d751deac7d21de9315637994b3106193d67}} and Singular {{cite:2f4f7af3af6c9428adbf2ba2c1bcb5f9282adfc6}}, as they allow the computation of the geometric genus of a curve.
| d | d7f3036e7082dc233e44c3ba435f3a5f |
The performance of the baselines, presented side-by-side, reflects the limitations of GNNs stated earlier. Of all the networks, GGRNet {{cite:21bcb1a3a3f0df455eace2cefe8a5c4653fe24ba}} performed the worst, ranking even below the MPNN {{cite:4dad8920cf985e9749e2c5e8bc15bdd142cec4b4}} implemented by Flam-Shepherd et al. {{cite:a02613d7b471b6af0943e3946adfefead30a9dbf}}. Though GGRNet implements a more complex MPNN, its readout function simply averages the node vectors and passes the result through an MLP to obtain the output. The simpler MPNN instead uses the set2set readout, which likely led to its better performance.
| d | 387a37cb5eaee885614bb627ad317201 |
Overall, prosocial agents underperformed their selfish counterparts, but the picture is nuanced.
Optimizing for per-capita return can be difficult because it complicates credit assignment, and creates spurious reward “lazy agent” problems {{cite:43633327218145ec582cd03af32ea1f878a306cf}}, {{cite:3d4b2e2d05669542e3f11f47ce9a34fa8f8993d9}}. However, in the social dilemma Clean Up, only prosocial agent architectures managed to learn policies that were significantly better than random. Prosocial agents perform best on several other substrates as well (Prisoners Dilemma / Stag Hunt in The Matrix, Commons Harvest Open / Partnerships). This suggests that doing well on Melting Pot will require agents to be able to contingently balance selfishness and prosociality.
| r | 763d91d6ae1458b984306ab42199bb2f |
Our goal is to organize the implicit storage of knowledge, to add target knowledge (yellow box in Figure REF ) and anchor to select target knowledge.
A simple approach is to train updated LMs from scratch; however, this is far too expensive considering the parameter sizes of recent LMs, such as 175B for GPT-3 {{cite:9e26f2b0f611a4188b5d4fa576f4d4d09943aaaf}} and about 11B for T5 {{cite:120e41e0da3503608f6876178cad41e16fa59e17}}.
There has also been related work on the two scenarios.
For Scenario 1, a continual learning method can be adopted, constraining the distance between the parameters before and after fine-tuning {{cite:cf4ddcfb1e5cff3a97f7b7e2395f72c870efba44}}.
However, this approach still suffers from so-called catastrophic forgetting, in which the LMs fail to retain large amounts of source knowledge.
For Scenario 2, one may consider knowledge editing methods, which achieve reasonable performance for a single knowledge edit while retaining the rest {{cite:a34b408b02d759a0a137563d2d91a7d05aa8a1fc}}, {{cite:bbdbd21910cd5378db4c12a71f31523b5a4b1be1}}.
However, this line of work does not perform well when multiple edits accumulate; e.g., only 67% of 125 edits were successfully applied, as reported in {{cite:bbdbd21910cd5378db4c12a71f31523b5a4b1be1}}.
| i | 0051d9637555eaa8c766cc0ea7621d72 |
Notice that Proposition REF does not exclude the existence of Turing kernels (we again refer to {{cite:fdcb4defe39009e7c51941a1712cee62dc9f2cae}}, {{cite:9197a9ea2555b4f0a2ec0046a87b131b1ef1cebd}} for the definition of the notion). This makes it natural to ask whether Elimination Distance–({{formula:7c35c3be-3e54-4a86-95e2-037b84865f2e}}) to {{formula:1f8d2b1d-42ec-4701-bf37-b9867d298595}} admits polynomial Turing kernels for {{formula:706eb3ed-a20f-42e6-ba34-7090d915e665}} for {{formula:1fafa937-4b2a-45e7-9998-60d0558bab84}}.
| d | 0dc6fd06717940114b992d2bd59a6a55 |
Interestingly, the ({{formula:3953c45d-d781-4bc9-98a4-24ef333bfee8}})-surface (Fig. REF ) has basins and the saddle region arranged along both directions. Conventional wisdom would have led to the conclusion that {{formula:bf453a75-3476-486a-90e3-d9678016c4ab}} is important in defining both the reactant and product basins as well as the barrier-crossing process. However, the double-well structure exhibited on the ({{formula:8e2bd588-4896-4fb4-8061-4e63db4a97f7}}-{{formula:9b8c35ab-7009-46b9-b774-e9fa3f6dfa6e}}) plane is profoundly misleading. None of the topological features on the surface over the ({{formula:7dcde67b-a50e-472e-9dc6-2f9f40a7f273}}-{{formula:3dfe69b3-399b-4d93-a745-35cb4af7bf1e}}) plane retains the dynamic properties of the original 5-d surface, as the committor test showed that configurations corresponding to the peak at the transition region fall entirely into the reactant basin. This demonstrates that the correlation between {{formula:bd93cb95-60b6-4944-92b4-4e5c85dffe66}} and {{formula:4d64251d-776a-4322-b441-632bfcf5d2b7}}, due to the minor role of {{formula:7f587e8d-8d32-4d9a-8fd1-53b99ad8e055}} in the transition
process {{cite:e22c9d3eb132516aa4ac5d28922dcc224f0dcc8e}}, {{cite:6bee35e9496693b58f627effbdd2c2035ca56cad}},
distorted the probability distribution along {{formula:0413b62a-3904-477a-b188-3df16fb53892}} , such that the ridge/saddle extended into the reactant peak, leading to incorrect {{formula:febbcf0e-fb66-4314-a1f9-0af2144fcc41}} value that marks the location of the peak in the transition region.
In contrast, {{formula:3d612681-bde8-42db-b833-4761baaa72ee}} did not impact the distribution of {{formula:e1f636be-f52d-48b3-b7e5-266c1ea3e3b3}} , so the peak in the transition region on the ({{formula:134efdf1-0684-451f-8848-ad6a3e39e3db}} -{{formula:a8948a7b-ab9b-4f46-a063-9daba3bce3a6}} ) plane still bears the correct {{formula:afa13042-f2c1-4eb9-b91c-4ddf4aa827fb}} value for the transition states.
| d | 87e856d2cb35dbea825f2ccd945812cb |
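The committor test invoked above can be illustrated on a toy 2-D double well: launch many unbiased trajectories from a candidate configuration and count how often they reach the product basin first. This is a deliberately simplified numpy sketch with overdamped Langevin dynamics and a hypothetical potential, not the authors' 5-d surface.

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x, y):
    # Double-well potential V = (x^2 - 1)^2 + 2*y^2; basins at x = -1, +1.
    return -4.0 * x * (x**2 - 1.0), -4.0 * y

def committor(x0, y0, n_traj=200, dt=1e-3, beta=3.0, max_steps=200_000):
    """Estimate p_B: the fraction of unbiased trajectories started at
    (x0, y0) that reach the product basin (x > 0.9) before the reactant
    basin (x < -0.9). A toy stand-in for the committor test used to
    validate candidate reaction coordinates."""
    hits_B = 0
    sigma = np.sqrt(2.0 * dt / beta)  # overdamped Langevin noise amplitude
    for _ in range(n_traj):
        x, y = x0, y0
        for _ in range(max_steps):
            fx, fy = force(x, y)
            x += dt * fx + sigma * rng.standard_normal()
            y += dt * fy + sigma * rng.standard_normal()
            if x > 0.9:
                hits_B += 1
                break
            if x < -0.9:
                break
    return hits_B / n_traj

# A true transition state should give p_B ~ 0.5; a point that merely
# "looks" like a saddle on a misleading projection falls to ~0 or ~1.
print(committor(0.0, 0.0))
```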
The electromagnetically induced transparency (EIT) phenomenon{{cite:dbfd70f742804ea8e2fe913a928efcfb9b647f7b}} is another important quantum-optics effect in the driven three-level cQED system, and it is closely related to the ATS effect {{cite:5f59e593cdac5589d56c4c1d591566cef00a65b8}}, {{cite:b6e09298f88272520b4866602256853b4e3db356}}. Because the EIT phenomenon is caused by destructive interference between two different excitation pathways, it can be used to slow down or even trap optical{{cite:4e3cf38bc9f8f4498707c977d6690b78cfd5b75a}} and microwave photons{{cite:51ab9cd560f752dbe4f64b2a6ebadfd3c529bf5d}}, showing great potential for application in single-photon storage{{cite:138358c50c51573911a851c4e0137486c44f0175}}. Under different damping-rate conditions, the driven system can transition between the ATS regime and the EIT regime. A significant EIT phenomenon can be realized by using the quantum coherence effect between cavity and qubit when the qubit's damping rate is much smaller than the cavity's. We set the qubit's damping rate to {{formula:64a7fa66-8a82-4b37-a0f5-a3687da3976a}} MHz (corresponding to a decay time {{formula:5e3b604e-b6f2-489f-9645-6107abee91c4}} {{formula:38c20b6b-0d59-4f00-854e-4fa3ca6609c4}} s, which can be reached in various superconducting qubit systems{{cite:e0d04eef04b32b7687837de2d310611b1d4c37dc}}, {{cite:dcb08ba4dc9444d4a9cc332c7ec825a45ddef27d}}, {{cite:a5f890841c82cb982aeeb9ae9b9433713460b045}}), and all the other parameters of the system are the same as in this work. With this setup, the system can fulfill the condition for realizing EIT{{cite:1ce78d5def7aedf36e30d9608db21a7711bfb83e}}. In the EIT regime, the transparency window of the spectrum is caused by destructive interference, and the coherent Rabi oscillation of the populations between the states addressed by the coupler tone is strongly suppressed by the state damping. That means the inverse Fourier transform of the spectrum {{formula:159a7ccf-7054-46ed-be82-d6709b0485c9}} is an exponentially decaying curve, without oscillation.
| r | e7486bb46442f0d6f29c95313069c544 |
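For intuition about the EIT-vs-ATS distinction, the textbook linear response of a Lambda-type three-level system can be evaluated directly: when the qubit (ground-state) coherence decay is much smaller than the cavity's, the response vanishes at zero detuning, opening the transparency window. The functional form below is the standard textbook expression, used here as an assumption-laden stand-in for the paper's full cQED model.

```python
import numpy as np

def probe_response(delta, omega_c, gamma_e, gamma_s):
    """Steady-state linear response of a driven three-level system
    (textbook Lambda form): gamma_e is the decay rate of the fast
    (cavity-like) coherence, gamma_s that of the slow (qubit-like)
    coherence, omega_c the coupler Rabi frequency. Im(chi) tracks
    absorption; chi -> 0 at delta = 0 as gamma_s -> 0, which is the
    EIT transparency window."""
    return (delta + 1j * gamma_s) / (
        (delta + 1j * gamma_e) * (delta + 1j * gamma_s) - omega_c**2 / 4.0
    )

delta = np.linspace(-10, 10, 2001)  # detuning grid (MHz)
chi_eit = probe_response(delta, omega_c=2.0, gamma_e=5.0, gamma_s=0.01)
chi_ats = probe_response(delta, omega_c=20.0, gamma_e=5.0, gamma_s=0.01)
# EIT: narrow interference dip inside one absorption line;
# ATS: two well-separated Lorentzians (Autler-Townes doublet).
print(abs(chi_eit.imag[1000]), abs(chi_ats.imag[1000]))  # on-resonance absorption
```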
We also note that SetRank outperformed the DLCM and GSF baselines both with and without the initial rankings. On the Istella dataset, without the initial rankings, SetRank{{formula:e31ab979-8f8a-45a0-842d-f4963d03e378}} improved over DLCM{{formula:1b89c624-f0d1-479c-9bc5-da7232621fef}} by about 0.05 points in terms of NDCG@1; with the initial rankings, SetRank{{formula:54d5347e-4c4e-41ed-8d16-41dfb37b51ff}} improved over DLCM by about 0.02 points in terms of NDCG@1. Similar phenomena can be observed on the other datasets with other evaluation measures. As discussed previously, the RNN in DLCM is very sensitive to the order in which the documents are input {{cite:a778614907d81a07bfed760d8227babadd7af809}}. SetRank, on the other hand, learns a permutation-invariant ranking model and is thus more robust.
| r | 2ba5c6f178f16e7c72690115fc8f590f |
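The permutation invariance credited to SetRank can be demonstrated with a minimal self-attention scorer that omits positional encodings; permuting the input documents then permutes the output scores identically. Dimensions, depth, and layer choices below are illustrative, not SetRank's actual configuration.

```python
import torch
import torch.nn as nn

class SetScorer(nn.Module):
    """Minimal permutation-equivariant ranker in the spirit of SetRank:
    self-attention over the document set *without* positional encodings.
    All hyperparameters here are illustrative assumptions."""
    def __init__(self, d_feat=136, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(d_feat, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.score = nn.Linear(d_model, 1)

    def forward(self, docs):                 # docs: (batch, n_docs, d_feat)
        h = self.encoder(self.proj(docs))    # no positional encoding added
        return self.score(h).squeeze(-1)     # (batch, n_docs) relevance scores

docs = torch.randn(1, 10, 136)
perm = torch.randperm(10)
model = SetScorer().eval()
with torch.no_grad():
    # Scores of permuted input equal permuted scores of original input.
    assert torch.allclose(model(docs)[:, perm], model(docs[:, perm]), atol=1e-4)
```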
One limitation of this work is that we assume knowledge of which TG in the set to use during deployment. In future work, it would be interesting to eliminate this assumption so that a suitable TG can be selected autonomously and efficiently, e.g., using Bayesian Optimisation through Intelligent Trial and Error {{cite:5064e224f8940e34936a2de9cc4bb2737fe339cd}}, {{cite:e56489d3a4997181d8b2fe2dd76fb3119c9c9d9c}} on online data collected during deployment.
| d | ac310fcefb17c03ce7b41ac4fea66f25 |
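As a rough sketch of what online TG selection could look like, a bandit-style rule can pick among a discrete set of TGs from deployment returns. This is a much simpler stand-in for Bayesian Optimisation through Intelligent Trial and Error; the UCB rule and all names here are hypothetical.

```python
import numpy as np

def select_tg_online(tg_return_fn, n_tgs, n_episodes, c=2.0):
    """UCB-style online selection among a discrete set of trajectory
    generators (TGs). `tg_return_fn(i)` runs one deployment episode
    with TG i and returns its score; a simplified illustration only."""
    counts = np.zeros(n_tgs)
    means = np.zeros(n_tgs)
    for t in range(1, n_episodes + 1):
        if t <= n_tgs:                         # try every TG once first
            i = t - 1
        else:                                  # then exploit + explore
            ucb = means + c * np.sqrt(np.log(t) / counts)
            i = int(np.argmax(ucb))
        r = tg_return_fn(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update
    return int(np.argmax(means))

# Toy deployment: TG 2 is secretly the best choice.
best = select_tg_online(lambda i: np.random.normal([0.2, 0.5, 0.9][i], 0.1),
                        n_tgs=3, n_episodes=50)
print("selected TG:", best)
```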
The baseline demonstrates that the DS problem exists for all 5 anatomies. The difference between the Dice scores on the training and test distributions is sometimes as high as 60 Dice points; a model that provides almost perfect segmentations on test images from the training distribution can produce completely unusable segmentations on test images from a shifted distribution (e.g., test images from a different hospital).
Data augmentation {{cite:baaa8080214db08520fff5da96bc0cc2de0b61ba}} helps substantially. This strong baseline is much more robust to DS than the baseline, in some cases providing a performance jump as high as 50 Dice points. These results corroborate numerous similar findings in the current literature. Given the generality and effectiveness of the approach, we believe it is imperative that works studying DS robustness in CNN-based medical image segmentation include stacked data augmentation during training.
A gap to the benchmark still remains - in most cases, heuristic data augmentation falls short of rivalling the performance of supervised fine-tuning.
Results of the TTA methods are described below. When making statements about statistical significance in the text below, we follow a strict threshold based on Bonferroni correction to account for the multiple-comparison problem {{cite:911121940e207eddd3926819f585acddf1a67d9e}}. For each dataset, permutation tests ({{formula:27acd1cf-94d6-45e0-a183-ca391a65c9bd}} ) were used to compute the statistical significance of the performance improvement or degradation provided by each TTA method with respect to the strong baseline. Thus, 5 comparisons were made for each dataset, so the p-value threshold was divided by 5.
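For concreteness, a paired permutation test with the Bonferroni-corrected threshold described above might look as follows; the paper does not specify its exact permutation scheme, so the paired sign-flip variant is an assumption.

```python
import numpy as np

def paired_permutation_test(a, b, n_perm=100_000, rng=None):
    """Two-sided paired permutation test on per-subject Dice scores of
    two methods: randomly flip the sign of each paired difference and
    compare the permuted means to the observed mean difference."""
    rng = rng or np.random.default_rng(0)
    d = np.asarray(a) - np.asarray(b)
    obs = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    null = np.abs((signs * d).mean(axis=1))
    return (1 + (null >= obs).sum()) / (n_perm + 1)

# 5 TTA methods compared per dataset -> Bonferroni-corrected threshold.
alpha, n_comparisons = 0.05, 5
threshold = alpha / n_comparisons
p = paired_permutation_test([0.81, 0.79, 0.84], [0.78, 0.77, 0.80])
print(p, "significant" if p < threshold else "not significant")
```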
Entropy minimization-based TTA {{cite:e16e2d83691f2a10076618a2b22bb3fc27ae242e}} does not require construction of additional models to capture training distribution traits; yet, it provides performance improvement in some cases. Also, unlike other works {{cite:ee8ba6817a5b32a71ea9c9438e31ccd97311fd20}}, we largely do not observe the problem that the entropy minimization leads to all pixels being predicted as the same class. This might have been due to the limited adaptation ability provided by {{formula:e3f0251f-6eb8-4491-8aa0-e9464fe2fc13}} .
However, the performance gains are statistically significant for only 2 out of the 12 test datasets. Moreover, the method degrades the strong baseline's performance in a statistically significant (though small) manner on the lesion test dataset. Another downside of this method is that it can only be applied to tasks with categorical outputs.
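A minimal sketch of entropy-minimization TTA in this spirit, adapting only normalization-layer parameters — one common way to obtain the limited adaptation ability mentioned above; the exact parameter subset and step counts are assumptions.

```python
import torch
import torch.nn as nn

def entropy_minimization_tta(model, test_images, steps=10, lr=1e-4):
    """Test-time adaptation by minimizing the entropy of the softmax
    predictions, updating only normalization-layer parameters (an
    illustrative restriction that limits how far the model can drift)."""
    params = [p for m in model.modules()
              if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.InstanceNorm2d))
              for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        probs = torch.softmax(model(test_images), dim=1)  # (B, C, H, W)
        entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
        opt.zero_grad(); entropy.backward(); opt.step()
    return model
```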
TTA-DAE {{cite:08a5130cab9047774d5d6100c2889fd411338d1f}} provides the best performance for the most number of test datasets for healthy tissue segmentations. For 5 out of 11 healthy test datasets, the improvements provided by this method over the strong baseline are statistically significant.
It also leads to a drop of 5 and 8 Dice points in the mean results for the two spine datasets; however, permutation tests show that the drops may be due to large degradation for a small number of test subjects within those datasets. Even so, the large drops in performance for particular subjects may be indicative of the DAE's DS problem - that is, the DAE's outputs may be unreliable when it is fed with segmentations that do not match the heuristically designed noise distribution used for its training.
Furthermore, the DAE fails to improve performance for the lesion dataset. We believe that this reflects its inapplicability to tackle the DS problem in anatomies where reliable shape priors cannot be learned.
In terms of applicability, DAE-based TTA is also restricted in the tasks it can be applied to. For segmentation, the DAE could be trained by heuristically designing a suitable corruption distribution; it is unclear how to achieve this for other tasks.
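A sketch of how a DAE prior can drive TTA: the frozen DAE "corrects" the current soft segmentation, and the corrected map serves as the adaptation target. The loss and the set of adapted parameters are illustrative assumptions, not the cited method's exact objective.

```python
import torch

def dae_guided_tta(seg_model, dae, test_images, adapt_params,
                   steps=10, lr=1e-4):
    """DAE-guided test-time adaptation (sketch): the frozen DAE maps the
    current (possibly corrupted) soft segmentation to a plausible
    'denoised' one, which then acts as the target for adapting part of
    the segmentation model (`adapt_params`)."""
    opt = torch.optim.Adam(adapt_params, lr=lr)
    for _ in range(steps):
        pred = torch.softmax(seg_model(test_images), dim=1)
        with torch.no_grad():
            target = dae(pred)                # shape-prior "correction"
        loss = ((pred - target) ** 2).mean()  # pull prediction toward prior
        opt.zero_grad(); loss.backward(); opt.step()
    return seg_model
```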
Autoencoder-based TTA {{cite:86046f9b2ad5ceb543d682669f37aa93c492ab7b}} provides performance improvements in several cases. However, statistically significant improvements could only be obtained for 2 test datasets, and even in these cases the improvements were marginal (1 and 2 Dice points). This method also led to drops of 12 and 13 Dice points for the prostate BIDMC and the WMH datasets, respectively; the latter was statistically significant.
The FoE-CNN, which in principle resembles the concurrent approach of {{cite:3d67947e161d54d7a8201fdfd5c0a481a2383b32}}, is overall less performant than the PCA-based extended FoE proposed in the current work.
As compared to the strong baseline, the proposed FoE-CNN-PCA based TTA improves performance for 7 and retains performance for 2 out of the 12 test distributions. In particular, the proposed method shows promising performance gains in cases where the other task-agnostic methods falter substantially (e.g. prostate BIDMC and WMH). Out of these, the improvements are statistically significant for 3 test datasets, including the lesion dataset.
For the 3 test distributions where the method leads to a performance drop, the drop is relatively small: 3, 1, and 1 Dice points. We claim that this illustrates the stability of the proposed TTA method and validates our initial hypothesis: FoE-based TTA improves performance in the face of acquisition-related DS in medical imaging, while itself being substantially more robust to the DS problem that other priors, such as the DAE {{cite:08a5130cab9047774d5d6100c2889fd411338d1f}} or the AE {{cite:86046f9b2ad5ceb543d682669f37aa93c492ab7b}}, may be vulnerable to.
Importantly, the proposed method provides the best performance for the task of WMH segmentation - indicating its superiority in cases where CNN-based helper modules such as DAEs {{cite:08a5130cab9047774d5d6100c2889fd411338d1f}} may be unable to learn appropriate shape priors. Notably, all competing methods from the literature fail to improve DS robustness for this lesion segmentation experiment; the proposed method is the only approach that shows promising results in this challenging scenario.
Analysis Experiments
Approximating Expert Distributions with KDEs rather than as Gaussians: Comparing the KDE and Gaussian approximations (Fig. REF ), we observe that the actual distributions do not differ substantially from their Gaussian approximations. This is also reflected in the TTA results in Table REF : the performance of the proposed method is very similar for both estimates of the expert distributions.
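The KDE-vs-Gaussian comparison can be reproduced in miniature with scipy on a stand-in 1-D feature channel; the data here are synthetic, purely for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
features = rng.normal(loc=0.3, scale=1.1, size=5000)  # stand-in 1-D channel

kde = gaussian_kde(features)        # non-parametric expert distribution
mu, sigma = norm.fit(features)      # Gaussian approximation of the same

grid = np.linspace(features.min(), features.max(), 512)
gap = np.abs(kde(grid) - norm.pdf(grid, mu, sigma)).max()
print(f"max |KDE - Gaussian| on grid: {gap:.4f}")
```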
Effect of the weighting between the CNN and the PCA experts: Results of this hyper-parameter tuning are shown in Table REF . The introduction of PCA experts with {{formula:50701f6e-7aa2-40f8-bda4-bc53f8f273a8}} improves TTA performance for 4 of the 5 prostate datasets. However, increasing {{formula:7d6f4fdb-d50e-4a7a-abe4-f6b98b55718b}} to {{formula:dd97079a-dcbd-43c8-9d8c-8454d3cde741}} leads to a performance decrease in 3 of the 5 datasets. Based on these results, we choose {{formula:cffbc53d-e84a-4984-866f-8ad71ad09ffe}} for all datasets of all anatomies.
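A minimal sketch of the convex weighting between the two expert families; whether the paper aggregates the expert log-likelihoods exactly this way is an assumption.

```python
import numpy as np

def foe_tta_objective(cnn_log_likelihoods, pca_log_likelihoods, alpha):
    """Combine per-channel CNN-expert and PCA-expert log-likelihoods
    into a single TTA objective (to be maximized). The convex weighting
    with `alpha` mirrors the hyper-parameter tuned in the text; the
    exact aggregation used in the paper may differ."""
    return ((1.0 - alpha) * np.mean(cnn_log_likelihoods)
            + alpha * np.mean(pca_log_likelihoods))

print(foe_tta_objective(np.array([-1.2, -0.7]),
                        np.array([-2.0, -1.5]), alpha=0.25))
```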
{{figure:5b146f81-75ab-42d5-b87b-54f5bb0f0ae8}}{{table:89ff4b4a-b482-41e4-9216-228fcdca933a}}{{table:16cba8fc-d56d-44fd-b13c-2ad91100dfc9}} | r | 569e158f25809ebcd81abb9f934dad0c |
In addition to previous work {{cite:88d7438414aa238decb2aa563df331c485436a8f}}, in this paper we have also considered corrections to the entropy which are logarithmic in the black hole entropy, as well as the shift in the black hole entropy arising from the backreaction of the diary. This refinement is necessary in order to evaluate the time needed for the information in the diary to start appearing in the radiation. This time was identified in {{cite:34b842e59d404135bf626c224670ec45b36b832e}} as the scrambling time and, interestingly, it involves two terms (REF ): one term is proportional to {{formula:634b91f8-b1ea-4933-a806-43e4767f9275}} , where {{formula:9165d711-c8fd-4a40-bfd3-21c1e832b239}} is the shift in the black hole entropy due to the backreaction of the infalling object, and can be interpreted as the usual scrambling time, whilst the other term is proportional to {{formula:97a65f8e-bff5-4863-96fa-b72491a460c6}} , where {{formula:26673dd3-a7e2-4696-8a51-497d491a15a2}} is the entropy of the diary. This second term has a thermodynamic origin: it arises because the absorption of the diary by the black hole is an irreversible process whenever the first law {{formula:36f1af23-285b-4050-b7ab-acfe1b3e2739}} is not saturated. This irreversibility therefore delays the time at which the black hole returns the information in the diary to the radiation, so the black hole no longer behaves as a mirror.
| d | 495425e55ba4b4eb8babaacd3780bbdd |
where {{formula:84975781-1480-4b57-bc42-eb06fb872c3e}} for {{formula:4f332e46-4109-4c00-987d-124d8ff4d75f}} and {{formula:2100f46a-28e8-46c3-a6b7-b68c7739c889}} for {{formula:32dd4663-b9e4-4ecb-ba3b-937f906d5a82}} , respectively, are regular functions. Eq. (7) can be further written in terms of the scaled magnetization {{formula:722d409a-d590-4cac-8f71-30dbf8640604}} and scaled field {{formula:07bc3c8b-ade2-4112-b08d-aa638545b101}} as {{formula:069c31b0-bd33-473a-b006-1aef9f40796c}} . This suggests that, for true scaling relations and the right choice of critical exponents, the scaled {{formula:7a410961-f830-45c5-ad08-059cd2b7284c}} and {{formula:bd855498-531a-40b7-8180-35fefb3aa958}} will fall on universal curves above {{formula:5510123d-450d-4ca6-8a37-e614b940284a}} and below {{formula:8e0e6aff-3d3f-4ff0-a790-48a30944560b}} , respectively. The scaled {{formula:a9ef46d9-3a2b-4246-9b4b-cf76aad964a3}} vs {{formula:840e62f0-3e8c-4f5f-aa8c-261d0f14fbe3}} curves are clearly separated into two branches below and above {{formula:e16263a8-75e6-43cd-bd6a-482e30d43d74}} , respectively, as shown in Fig. 2(e). This can also be checked with another form of the scaling equation of state {{cite:e7f2ed9b738dce262dc25f753be1af54c87613c2}},
{{formula:2aeb3abf-4cce-47ef-b862-f4f21d241d61}}
| r | af69f9282c82e588f77df7c204d22856 |
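The scaling collapse described above amounts to a simple change of variables; here is a sketch, with placeholder isotherm data standing in for the measured M(H, T) curves (real data with the right exponents would collapse onto the two universal branches).

```python
import numpy as np

def scaling_collapse(M, H, T, Tc, beta, gamma):
    """Rescale magnetization and field by the reduced temperature t so
    that, with the correct (beta, gamma), isotherms collapse onto two
    universal branches for T < Tc and T > Tc. Follows the conventional
    scaled variables m = M |t|^(-beta), h = H |t|^(-(beta+gamma))."""
    t = (T - Tc) / Tc
    m = M * np.abs(t) ** (-beta)
    h = H * np.abs(t) ** (-(beta + gamma))
    return m, h, t > 0  # last array: which branch each point belongs to

# Toy check with mean-field exponents (beta = 0.5, gamma = 1.0).
H = np.logspace(-3, -1, 20)
for T in [0.95, 0.98, 1.02, 1.05]:
    M = np.sign(1.0 - T) * 0.1 + 0.2 * H   # placeholder isotherm data
    m, h, above = scaling_collapse(M, H, np.full_like(H, T), 1.0, 0.5, 1.0)
    print(f"T = {T}: branch above Tc = {bool(above[0])}")
```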
Although the Standard Model (SM) is successful in explaining most of the observed elementary particle phenomena, there are at least two pieces of evidence that hint at new physics beyond the SM: the nonzero but tiny neutrino masses and the dark matter (DM). A simple extension of the SM is to introduce an exotic {{formula:a5a9e5e1-a6ca-421d-ade5-277485e9da68}} gauge symmetry, which corresponds to a new short-range neutral gauge boson. Various models have been proposed from the top-down approach {{cite:ae16b2a44b9b441d01316d57664d67ea8b2e41ec}}, {{cite:39186c94904fa6488aefca0868871c502b039e90}}, {{cite:8e8ef37f3c88babf4f24617f06ad70684e0a48f5}}, {{cite:3bd66113512e58d8e21777dc6383bf1e43b4fdbd}}, {{cite:2abecdd520ad37cd9489db2ad3fb6e46bbb70dbd}} and the bottom-up one
{{cite:03eb51d67be9f7c79547744a4e9246f54e649779}}, {{cite:e4fe971e8d80546c474186d1f3897a49c5d33f19}}, {{cite:93b73c9f79ea0eb8ecb26476691457fce7464171}}, {{cite:119018b3fba6c893e3325975ad054143d351d266}}, {{cite:4f995a4c381f8db1a26ad0fa2fe6d2f5fa44227c}}, {{cite:cebb830e69f5c2e60936033055008b00df96cd40}}, {{cite:4b1a44f99fe059d9f22d6bd4a2aef2cd1ecdbc5c}}, {{cite:d78848201c2ef6441f8f19c392bfdd02c8c780eb}}, {{cite:94a2f68a2f5cfc8734aa7c04c0c260d32f60183b}}, {{cite:928df1022cf18dd020c68f954d0fb01e144bc352}}, {{cite:ad39fcfbf9b2d104c932f30f2a2729913e246b77}}, {{cite:30b73f2fc587540aa91b0d45ebcad8cb9c05c37b}}, {{cite:2c628ea64e65d35b2ec17ab87501590f77b304cb}}, {{cite:4a0c24cd896092d67c79a2fb9dc8eb577f26937a}}, {{cite:f142f8f1dfb9710f4d5562dec8216580923681e7}}, {{cite:ef0102b5bb2e8056bf7094ac1d847672f8ca685c}}, {{cite:e7c4a088be898938f460a36e16a27ae70015e06f}}, {{cite:2f113f94f38551e57d20b7e35423e56c96cf6202}}, {{cite:dde908e9e7dba1d092095055279115e8f85c94c1}}, {{cite:acc7d22c5b47927000e116cda13e26e4671fb445}}, {{cite:13e5fbf69e98059f19ea62979f24df4332f79051}}, {{cite:9936d229b089335eac38127a12dcb7530c8bc01f}}, {{cite:8b1695ca6022421fc5c99a52213ff1d661504ac7}}, {{cite:f2b4cbda5b96c339c58037b0e45811b1e4757307}}, {{cite:8616a6520bb4f7dac7b151a2b89a7a202c6418d9}}, {{cite:ac42116ca7845c806512834348155d508fde9695}}, {{cite:9f1cb7311736a5091664da1efa801dcca7f4ab7b}}, {{cite:73068f9353ae1617ef7757d77290d0abed01ef58}}, {{cite:541592bcc93eecc48cd8e089c5914820bd442c28}}, {{cite:28f533389779879099acd08d816ab86fb6925d01}}, {{cite:0d3025d04b02f801bbb3f4e759e4eb7635bd2c15}}, {{cite:0a7bc27a58ed05277c1fa8de7113803104af71b9}}, {{cite:b004bce238529985dc1a0314b62ff174d8640b59}}, {{cite:39b4386ce71830da5500c70c340f253c22f89fcd}}, {{cite:320d773214b2599121dad77fe9bdb2ff927c5f33}}, {{cite:110aba3821f95d05add91d163a025210a900d3f8}}, {{cite:962b2da707a7353cbe75bd80971e192ded819e04}}, {{cite:3dfcb180772317b011d7a0910359383c034fd5a3}}, {{cite:19a2de688635675c475800b4c7a46f1e8aedd783}}, {{cite:c2b98bc822a28ce5e150edc14610ab79b729f1b8}}, {{cite:48d9f295904aea72efeecf890f38d6a5f9836ec0}}, {{cite:a5a092ca057bc82861327b254fb107f52cc446e0}}, {{cite:b7e24ca69b4ded3799a6f9aa3a41d2a3fa27ee77}}, {{cite:35f70433a556fe45ef2fc849c85e74f7ddd98d87}}. In the {{formula:aa29ec83-1488-4dfc-8b9a-2390eb9c0ef0}} extensions of the SM, the right-handed neutrinos and even exotic chiral/Dirac fermions are introduced to achieve the anomaly cancellations, generate the masses for the observed neutrinos via the well-known seesaw mechanism, and provide the DM candidates. 
The DM phenomenology has been investigated in various {{formula:30bdb817-f494-4645-aa5e-45f0bffc2eae}} extensions of the SM {{cite:4a24c1913ed172d8e06626108891b5e8c95bc1ac}}, {{cite:49dc3c358d6eaa5eb57a33edfbcd38e9940ce402}}, {{cite:3029fcf055fb3e346174e213de411d4e1c59b67b}}, {{cite:831144eb8fdc0d1e1b182896e3a042c192ca9dd7}}, {{cite:5784cc647e4f2cd3c5a6416670ba6481bbc182e8}}, {{cite:16a0b52bb99015ba59d888d67d3ff78edc402b0d}}, {{cite:41112ffc5328d870d86065ee595da99a29c108d7}}, {{cite:504d59e71a44c5c0f80d5be52c98a3fd72dc3d37}}, {{cite:cdac87051ed904d11adc68726b5e43705ca17fbe}}, {{cite:5f4aae11b7438030385f0b557a8a5263b426948c}}, {{cite:8e46b41e9c6cabd00b5f7539d17cfe030d6d643a}}, {{cite:fd6d6212a9ea511e254a1323084b85a8fe4600fc}}, {{cite:ddb075a6b4839032a35da0608938b148d3df4d65}}, {{cite:6ac1a0bd3b017df8ba8b2f7509a4785daf86f415}}, {{cite:7d0b3a74488406b094757cfe8990d6480179a493}}, {{cite:625ccb1c4d5d65a0dd53b8116aca90a76b86decb}}, {{cite:2e2c39e5a44018a2f836e42203e71195eb602d38}}, {{cite:fec5b9a99a9f84b0d5220abd2dd7e5fc4e3ab10e}}, {{cite:984168683a8dd943743671e77cc6ce742e7f6eed}}, {{cite:16c577d59c81711f44fa2012b0a5f54eb74073b8}}, {{cite:b046568ed9b9b1402f7bb70363f08ce58290aa84}}, {{cite:9de919c4d392d9f7cdc005d793bed87e807bd910}}, {{cite:c5015afe188ca6cbd08732fde852b8373a56a0e8}}.
| i | ef0d69b32d9b001565da9f76927db7bf |
As with any other study of this kind using metallicities obtained from Hii region emission lines, our results depend on the calibration of the strong-line diagnostics adopted. In our work, we have re-calibrated the {{cite:92f82d251e0e8a1e51d3bf4a18bdb91e9f0879b3}} oxygen abundances by applying a shift {{formula:3ef9701a-b446-43bf-a246-9ca5f672a225}} based on the quantitative spectroscopy of blue supergiants in spiral galaxies {{cite:7509aa3ef4e4ce0294d92813e99f5ded08c2cd13}}, {{cite:33340f38e263dd4461c3f09139e54dfb8ccf3751}}, {{cite:4f6df3c439a66ae4355a3940beba7632ed63de2a}}, {{cite:8731149e321d377a851bdc3e087582fcc017789f}}. If we use the original {{cite:92f82d251e0e8a1e51d3bf4a18bdb91e9f0879b3}} calibration, we obtain larger values for {{formula:70245dd2-d1e2-4059-8356-9a3dbb61ddf4}} , but the qualitative conclusions remain the same as before. In particular, the fraction of galaxies with very small accretion rates {{formula:aa421ad6-d490-4ebb-83c7-f289f65d4d66}} remains unchanged. For future work, we regard it as crucial to improve the calibration of the Hii region strong-line methods based on a larger sample of stellar spectroscopy of blue and also red supergiants {{cite:33e78fafa69d1b657cdf657abd857a3ede2f04c2}}.
| d | 884fc2fdd026cff667225b195acf22da |