Choosing a non-standard {{formula:eb27d9c2-4d53-493b-9038-71a4963a0035}} from Table 1 contributes very little (less than 1%) to the uncertainty of a computed {{formula:9dc1545a-55ac-4eb2-92f5-9240c6a27257}} . The largest error contribution therefore comes from the choice of {{formula:438f327f-1cc2-4e23-a97a-8a962f9d91b3}} , because an infinite number of {{formula:6d423374-09a9-49ae-88cf-a67192193df4}} and {{formula:33077779-a6b5-4f76-929d-f10ac3fc3ef5}} pairs indicate a single value of {{formula:867b9220-0ea6-4fb0-82d1-d7e14ffe0f62}} according to equation (11). In the process of computing {{formula:5bb253fc-1213-42e4-9e0d-fa056078dd09}} values, {{formula:2430cd51-d689-40c1-b166-884ef0a5f1b4}} is a quantity most probably coming from DDEB, and is thus a fixed value. Since {{formula:be6f94b5-a5ce-48a8-b26e-94cad1a18c7f}} , {{formula:c36df607-d3a8-4947-9651-763a3e82b421}} is also a fixed value. Therefore, pre-computed {{formula:d24cf8d1-bb84-4c64-8d49-051e8b748c6e}} values are affected directly by the choice of {{formula:51d4b372-d78b-4bb7-9f34-d0b40471e6a5}} in the classical method using equation (11). As in the case of {{cite:6fa6f0669ebc4047630a07041f447a1ea402e69c}}, who preferred to use {{formula:3bd6fba1-bc42-4a62-82ee-1380ab051ddc}} rather than the nominal 4.74 mag, all pre-computed {{formula:068d49d4-5318-452e-b8ba-370740c5c9e3}} values would be 0.01 mag smaller compared to the standard {{formula:18426550-8410-4cfa-9ae1-79f56c032014}} . Although {{cite:6fa6f0669ebc4047630a07041f447a1ea402e69c}} do not state clearly which {{formula:07c7131c-d20b-43ed-ba61-c58dfbd3afb1}} value they used, we have assumed they are using the nominal {{formula:168061e2-2ce6-4d02-97e2-a2cd80c16d1a}} , since they cite IAU 2015 GAR B3 for the value of {{formula:47d282e8-b7ba-4fdd-8551-031f20fd1af5}} used in their equation (3) for obtaining the bolometric flux received from a star.
The value of {{formula:650aae6f-2021-42f2-a6ad-22b4b8722d76}} is taken from the potential model calculation {{cite:4e5ce5bb4bc9640a42abe072532ffb91c750009a}}. For the strong coupling constant, we adopt the two-loop formula as used in our previous paper {{cite:ed6dd4c4d89ea001724bb89c74f2f3967c57af5c}}, where {{formula:d9ee1b53-f9c3-4719-b57d-075c75376b2e}} .
Architecture and training. For all experiments, we train for 10 epochs using the Adam optimizer in minibatches of 8 images and a learning rate of 1e-4. We initialize {{formula:8ab3d596-2a0d-4560-a965-0fb5ab14372c}} and {{formula:e6f28882-1eef-46bf-9902-3c13cd027582}} to {{formula:09b4f414-1b36-4352-b58f-20e93e766e08}} and {{formula:6a5e05ac-3b5a-4df8-8bc4-2f9afa78a1cd}} , and perform one epoch of `warm-up' to refine these global parameters, after which we reset the atomic coordinates to their initial values. For homogeneous reconstruction, we directly optimize all RBF parameters. For heterogeneous reconstruction, we use a VAE to predict the {{formula:acde5de9-5f33-422d-81cf-647a12b52392}} -dependent offsets of the RBF centers from their reference values. Both the encoder and decoder are 3-layer MLPs of width 256 with residual connections, and the latent variable is 1-dimensional. We use ground-truth poses {{formula:04e33cf7-0479-4827-9b76-a02eda5f6730}} for training. In real applications, the poses would be inferred with traditional cryo-EM tools {{cite:98a382f810cbd957f2a81d1fcb15946965b23c4c}}, {{cite:22f477ebdff10fda462949bf710ef34cfaf893c1}}, {{cite:efe5bcc2667c9eab1a36bdec09b21951c4eeb781}}. The model is implemented in PyTorch {{cite:66c2590c9bf88810c0f94f28629d4daf148997e0}}.
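A minimal PyTorch sketch of the encoder/decoder described above (3-layer MLPs of width 256 with a residual connection, 1-dimensional latent); the class names, input dimension, and number of RBF centers are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ResMLP(nn.Module):
    """A 3-layer MLP of width 256; the middle layer carries a residual connection."""
    def __init__(self, dim_in, dim_out, width=256):
        super().__init__()
        self.fc1 = nn.Linear(dim_in, width)
        self.fc2 = nn.Linear(width, width)
        self.fc3 = nn.Linear(width, dim_out)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        h = h + torch.relu(self.fc2(h))  # residual connection
        return self.fc3(h)

class OffsetVAE(nn.Module):
    """Predicts latent-dependent 3-D offsets of RBF centers from a reference."""
    def __init__(self, img_dim, n_centers, z_dim=1):
        super().__init__()
        self.encoder = ResMLP(img_dim, 2 * z_dim)    # mean and log-variance
        self.decoder = ResMLP(z_dim, 3 * n_centers)  # one 3-D offset per center

    def forward(self, img):
        mu, logvar = self.encoder(img).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.decoder(z), mu, logvar

model = OffsetVAE(img_dim=128 * 128, n_centers=100)     # toy sizes
opt = torch.optim.Adam(model.parameters(), lr=1e-4)     # lr as stated in the text
offsets, mu, logvar = model(torch.randn(8, 128 * 128))  # minibatch of 8 images
```

The VAE loss (reconstruction plus KL term on `mu`/`logvar`) and the RBF rendering are omitted; this only shows the encoder/decoder shape described in the text.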
The 3-dimensional trajectory plot in each result section is accompanied by an X-Y projection for a better view of the trajectories. Detailed variations in the positions of both the UAVs and survivors are presented as well. The data is plotted using the matplotlib {{cite:1956d761abbe6a50d31a5e1a3eb48498703b2bc4}} library for Python.
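A minimal matplotlib sketch of the figure layout described above (a 3-D trajectory plot accompanied by an X-Y projection); the trajectory data here are synthetic placeholders, since the real UAV and survivor positions come from the simulation.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

# Illustrative trajectory; real positions would come from the simulation logs.
t = np.linspace(0, 10, 200)
x, y, z = np.cos(t), np.sin(t), 0.5 * t

fig = plt.figure(figsize=(10, 4))

# Panel 1: full 3-D trajectory.
ax3d = fig.add_subplot(121, projection="3d")
ax3d.plot(x, y, z, label="UAV")
ax3d.set_xlabel("x"); ax3d.set_ylabel("y"); ax3d.set_zlabel("z")
ax3d.legend()

# Panel 2: X-Y projection for a clearer view of the trajectory.
axxy = fig.add_subplot(122)
axxy.plot(x, y)
axxy.set_xlabel("x"); axxy.set_ylabel("y")
axxy.set_aspect("equal")

fig.savefig("trajectory.png")
```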
We aim to test whether using a different augmentation for each ensemble member gives better calibration. We use three augmentations: 1) Adversarial perturbations: images are perturbed to increase the training loss, adapting the `fast gradient sign method' {{cite:20e3e4f1c9734b1bb5e3bc2b7dd7bdc3c3164c93}}. 2) AugMix: augmentations from {{cite:e2e6c7ebb6b7f0016622133edf721b48e28d4bb2}}, with minor modifications. 3) Stochastic Depth {{cite:c29ef29758a614e0c4eb93f5131a6b556f4c382d}}, {{cite:18ea3ce6020216414e8fb108c6becdf0f56caa5b}}: we randomly drop residual connections. With the exception of Stochastic Depth, these augmentations apply to the input data and require no modification of the neural network architecture.
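The first augmentation can be sketched as follows: a hedged PyTorch implementation of the fast gradient sign method, which perturbs each image in the sign direction of the loss gradient. The model, image sizes, and step size `eps` are toy assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, eps=0.03):
    """Fast gradient sign method: move each image a step of size eps in the
    direction that increases the training loss."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    return (images + eps * grad.sign()).detach()

# Toy usage with a linear model on flattened 8x8 "images".
model = torch.nn.Linear(64, 10)
imgs, labels = torch.randn(4, 64), torch.tensor([0, 1, 2, 3])
adv = fgsm_perturb(model, imgs, labels)
```

Each perturbed pixel differs from the original by at most `eps`, so the augmentation strength is directly controlled by that single parameter.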
GAT. In addition to more general ways of set aggregation, another common approach to improving the aggregation layer of GNNs is to introduce attention mechanisms {{cite:f7f478a65a73d9d428192ddfbe6f41ee269848fa}}. The basic idea is to give each neighbor a weight, or value of significance, which is used to weight that neighbor's influence during the aggregation process. The first GNN to use this mechanism was Veličković et al.'s Graph Attention Network (GAT), which uses attention weights to define a weighted neighborhood aggregation: {{formula:745fb82f-09de-454b-84d3-914459080520}}
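A minimal single-head sketch of the idea, assuming the standard GAT formulation (LeakyReLU-scored pairwise logits, softmax-normalized over each node's neighbors); the dense adjacency matrix and toy sizes are illustrative, not an efficient implementation.

```python
import torch
import torch.nn.functional as F

def gat_layer(h, adj, W, a):
    """Single-head graph attention: each neighbor j of node i receives a
    weight alpha_ij used to weight its influence during aggregation."""
    z = h @ W                                   # (N, F') transformed features
    N = z.size(0)
    # Pairwise attention logits e_ij = LeakyReLU(a^T [z_i || z_j]).
    pairs = torch.cat([z.unsqueeze(1).expand(N, N, -1),
                       z.unsqueeze(0).expand(N, N, -1)], dim=-1)
    e = F.leaky_relu(pairs @ a, negative_slope=0.2)
    e = e.masked_fill(adj == 0, float("-inf"))  # restrict to graph neighbors
    alpha = torch.softmax(e, dim=1)             # normalized attention weights
    return alpha @ z                            # weighted neighborhood aggregation

N, Fin, Fout = 4, 3, 2
h = torch.randn(N, Fin)
adj = torch.ones(N, N)  # fully connected toy graph (self-loops included)
out = gat_layer(h, adj, torch.randn(Fin, Fout), torch.randn(2 * Fout))
```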
where {{formula:9f402c54-786e-415f-b403-a2c96d50c504}} is the Fourier transform of the smoothed top-hat spherically symmetric region with characteristic radius {{formula:fb4a4d1b-6429-4cae-a56e-7cb2d4a93431}} defined as {{formula:d654f7bf-5949-400b-baeb-f80140a5680b}} normalized to unity. Then, the following expression is used {{cite:ce4ee93b1d7296e7fef05747cf5332d2a562f192}}, {{cite:c8695befe3195eb3ee4d5b94b6b7b8a4c5a6d783}}, {{cite:5b88bea5131491b26ac6e722c0f4d406dfa86bc6}}, {{cite:d0d9421c08a0fd8395fe7e213a09987a9e2cc29a}} {{formula:abfd3791-9119-4817-b13d-81b9b9e338eb}}
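For concreteness: if the window is the standard spherical top-hat of radius R in real space (the precise smoothed form used here is defined in the redacted formula above), its unit-normalized Fourier transform takes the familiar form

```latex
\tilde{W}(kR) \;=\; \frac{3}{(kR)^{3}}\left[\sin(kR) - kR\cos(kR)\right],
\qquad \tilde{W}(kR) \to 1 \;\; \text{as} \;\; kR \to 0 .
```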
Furthermore, the Ryu-Takayanagi formula and its extension in the presence of quantum matter can be obtained by the replica method for the gravitational path integral {{cite:25bb474c7f384bc70995a6967aba1e2c0acb64ba}}, {{cite:8b353d3f9bbbf264e89ce6548c241f1419a38a28}}. This means that the island formula is also calculable using the replica method {{cite:65bbf57ae8d4072bd5c29ab6d5c35afd834ecfcc}}, {{cite:b262e4479df4184393d8dce567a99c2ffa850f0a}}. The latter implies that there are geometries connecting the different replicas, known as replica wormholes. These geometries are used in JT gravity to analyze the late-time behavior of correlation functions {{cite:f03ea774da9b638004c8abef2d401e7a31af1db7}} and the spectral form factor {{cite:aa5d856e33795c462e2727deb4af1721100ff252}}, {{cite:61e8071bdd96d7afbc763239f77cf7031d528c2e}}. Intriguingly, these wormholes can give a small overlap (of order {{formula:5e2b819b-1657-4456-a163-d4e656991bf9}} ) between naively orthogonal bulk states, and this small correction to the Hawking radiation can restore unitarity in the evaporation process. But apart from these fascinating characteristics, they lead to a factorization puzzle. Since there is no interaction term between the dual QFTs on the two sides, the partition function of the combined system is actually the product of the partition functions of the left and right systems, {{formula:a984eaa3-cae7-485b-b9ec-2af3f05d2e33}} . However, the presence of replica wormholes in the bulk seems to imply that {{formula:3ceb1923-e181-4033-bc2f-35ee7cbde4b2}} . A suggested resolution was that in the presence of wormholes the bulk theory is dual to an ensemble of field theories {{cite:61e8071bdd96d7afbc763239f77cf7031d528c2e}}. For the FSC case, we have the same factorization problem, and the same resolution as for JT gravity might apply, which would be interesting to explore.
More importantly, one can ask what happens to the wormholes connecting the decoupled systems when we focus on just one element of the ensemble. For the dual of JT theory, the SYK model, it was shown that not only do those wormhole saddles persist, but new saddles, named half-wormholes, also exist {{cite:d628a090f000aa0c26522c43b8661616b25563b9}}. Exploring these new saddles for the FSC case would also be very interesting.
the entropic scenario plus varying {{formula:0b5d6fed-6989-4f89-9d87-e54cd388c545}} and/or {{formula:5f6e1a3b-7d70-4b95-952c-b64e4a624329}} is nearly indistinguishable from a pure-{{formula:a4ad2e50-ac63-4c39-a144-b17d7bf3d8ae}} CDM model, which is why we call it an entropic-{{formula:efa4af88-db80-47f2-bdc7-81ad73c10aa3}} CDM model. Present data are still unable to differentiate between the two scenarios; the model obtained (entropic-{{formula:d7cbc89a-e3ac-4452-8ae6-542636e492c7}} CDM cosmology) is a variation of the model with exchange of energy between vacuum and matter studied in Refs. {{cite:a6f2b8ee7f845d0bbce4072932830118f43ad340}}, {{cite:04687d3ef06ffba26bf15014ab86e02113a14d83}}, {{cite:031f8d9a2ae2d936294789eb55963f8c340b89b7}}. The best fit for the value of the Hawking temperature coefficient {{formula:c01f304f-5e8b-4f3b-a9e6-f2a30253f0a3}} is quite different from the theoretical values used in the literature, i.e. {{formula:00ed9a90-cf8a-4e7e-9d6e-75eb17744b56}} or {{formula:2eea2f06-8210-40b2-842a-9954665f879d}} ; it should be pointed out that the other entropic scenarios considered have values of {{formula:76ae0dff-5bfa-48a3-8d75-2988a319be2c}} (e.g. {{cite:7a3e61bd2015fc06dcedf7b23927bc07cf3f3edd}}); the model with small values of the parameter {{formula:1b2d10db-8ab9-42d9-b0cb-293759a4a8a5}} is equivalent to a dynamical vacuum model with small variation of the vacuum energy, studied in Refs. {{cite:a6f2b8ee7f845d0bbce4072932830118f43ad340}}, {{cite:031f8d9a2ae2d936294789eb55963f8c340b89b7}}; the value for {{formula:0a67cdca-cef8-48a9-aafb-cad6c0ac681b}} is compatible with zero, since we were able to put only an upper limit on it. This would mean that the Hawking temperature is zero for the models under study; it is also clear that we still have the deceleration-acceleration transition, as we show in the plots of the relation for {{formula:58c26f43-314a-450f-97cb-7d6e114762ee}} and also for {{formula:1dd3af72-0ab0-4ec8-88ec-99e3083abd60}} in Fig. 
REF , where our models are compared with a standard {{formula:84efe0d4-fb89-46ed-9abd-3bb4a354672c}} CDM and are, as said above, barely distinguishable.
The initial Slater determinant was obtained by performing a standard LDA calculation. The molecular orbitals, namely their expansion coefficients in the GTO localized basis set, as well as the matrix {{formula:355fe5ba-47fb-4a2f-98ed-9cbcb9f9e4a5}} determining the Jastrow factor, were simultaneously optimized with well-established methods developed in recent years {{cite:28b8c5de45bb36db805116191c1d02c740d22d8d}}, {{cite:860ba8f493a9c1a6b485b16827e5850b14095b4e}}, which allow us to consider up to 3000 independent variational parameters in a very stable and efficient way. Note that the two-body Jastrow term {{formula:f8bc3d8e-f7fd-4503-94df-8fcb469a3b89}} can be chosen to explicitly recover the EI mean-field wave function (REF ), as shown in the Supplementary Discussion. After the stochastic optimization, the correlation functions / order parameters can be computed in a simple way within variational Monte Carlo (VMC).
It will be an interesting investigation to further enhance the LEzGP method. Better strategies for selecting the tuning parameter {{formula:c117e1af-ef78-4e1e-9f9c-375c5ceed857}} need to be investigated. Other methods for selecting subsets may also be useful in the LEzGP method, e.g. the localization method in {{cite:0d70e004a4955434e0d973ea95af212617f90e71}}. In addition, one issue with the current LEzGP method is that when there are many different level combinations of qualitative factors in the target inputs, the model estimation can still be computationally cumbersome if the goal is to predict the whole response surface. One possible solution is to arrange the target inputs into a few groups according to their level combinations, and then apply a more flexible LEzGP method to each of these groups.
The tensor product possesses a very useful property: the associative law ({{cite:43d1f7f1f51c029214e5d8b3735ef2d73d624f70}}, Theorem 1.1). With the general product, the following definition of the similarity relation of two tensors was proposed by Shao {{cite:43d1f7f1f51c029214e5d8b3735ef2d73d624f70}}.
Our implementation uses the matrix and vector functions in the Basic Linear Algebra Subroutines (BLAS) for {{formula:764c9691-1214-4465-8b82-7d8dd74befd3}} and the CUDA Basic Linear Algebra Subroutines (CUBLAS) library for {{formula:4626c1f0-b874-4ede-a484-c0e5cd493d92}} . The routines used for computing the minimum of {{formula:32907fe4-4957-4b20-9454-af33e651fc9a}} on {{formula:ce4de359-351a-4e80-9e50-e980d4eceed8}} and {{formula:3e8af46c-7385-4bda-8601-64d67ecbe0a6}} are described in {{cite:f178f68d077483aa82d3b6040898414cd364e634}} and {{cite:7a5771f731aff18a9ef5b8f4e2c4a038981565a1}}, respectively.
Self-supervised, blind-spot networks have been shown to be strong suppressors of i.i.d. noise without requiring pairs of noisy-clean training samples. However, the assumption under which these networks were proposed breaks as soon as any correlation exists within the noise {{cite:bf898ab7942ecdad80653acfbf11052c57a46cba}}. In this section, we first introduce our implementation of blind-spot networks through pre-processing alongside a tailored loss function. This is followed by a discussion on the general neural network (NN) architecture and training schemes used in this study and a detailed description of the implementation of transfer learning in this application.
Gulrajani et al. {{cite:08c3fe0519b6d439cf378daed548377371986809}} · WGAN-GP · Google One Billion Words · gradient penalization · visual inspection · generated discrete data, yet not comparable to baselines.
The ANN-to-SNN method first trains the neural network with the rectified linear unit (ReLU) as the activation function, and then replaces the ReLUs in the trained network with spiking neurons. However, the task performance of SNNs obtained by this direct conversion is generally poor, so additional operations such as normalization {{cite:8fc2ee62965acd1fe0658edd062068793ba2f29f}}, {{cite:36c14a91d5bda97a04034ebe5080c333bb68eb2e}} or threshold adjustment {{cite:da3b918d1871f60ed3f7d9bf20f43798826ce613}}, {{cite:3a649e8cbf118158f4358d7fb8de1e2b1fb22a58}} are required. On the other hand, backpropagation with a surrogate gradient is used to train SNNs end-to-end from scratch: spiking neurons run in the forward pass, and continuously differentiable approximation functions replace the gradients of the spiking neurons in backpropagation.
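The surrogate-gradient idea can be sketched in a few lines of PyTorch: a Heaviside spike in the forward pass, with a smooth sigmoid-derivative surrogate standing in for its (zero-almost-everywhere) gradient in the backward pass. The threshold, steepness `beta`, and the choice of sigmoid surrogate are illustrative assumptions; the literature uses several such functions.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike forward; sigmoid-derivative surrogate gradient backward."""
    @staticmethod
    def forward(ctx, v, threshold, beta):
        ctx.save_for_backward(v)
        ctx.threshold, ctx.beta = threshold, beta
        return (v >= threshold).float()            # emit a spike: 0 or 1

    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        s = torch.sigmoid(ctx.beta * (v - ctx.threshold))
        # Smooth stand-in for the Dirac-delta derivative of the step function.
        return grad_out * ctx.beta * s * (1 - s), None, None

v = torch.tensor([0.2, 0.9, 1.5], requires_grad=True)  # membrane potentials
spikes = SpikeSurrogate.apply(v, 1.0, 5.0)             # threshold=1.0, beta=5.0
spikes.sum().backward()                                # gradients flow via surrogate
print(spikes)  # tensor([0., 0., 1.])
```

The forward output is binary, yet `v.grad` is nonzero everywhere, which is exactly what makes end-to-end training possible.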
To construct the latent prior, L-BRGM relies on a pre-trained StyleGAN2, a recent state-of-the-art GAN architecture developed by Karras et al. {{cite:2461e6f6f412a6c7ea07cc0da47dc96f016e45f3}}. StyleGAN2 is an improved variant of its predecessor StyleGAN {{cite:1b80943cbe9c0912a4a4d21ef32a082d4793e977}}, named after its architecture's inspiration from the style transfer literature. In StyleGAN, the input latent space {{formula:fac8df59-dfd0-4577-b546-e7d350290df4}} is warped into an intermediate disentangled feature space {{formula:29a39bb5-68c6-4943-91c3-6c8c852d62bb}} (also called the style space) via an eight-layer fully connected mapping network {{formula:8517d9e1-caa9-4340-98e1-3274edb6f033}} . The StyleGAN generator {{formula:ed9a9107-a210-4ac6-bf94-888e525a6d30}} , consisting of adaptive instance normalization (AdaIN) and convolution blocks, subsequently produces photo-realistic images from these disentangled features. Randomness is introduced into the generated samples by feeding noise into the generator. StyleGAN2 enhances the sample quality by redesigning the generator architecture and by introducing a novel path-length regularizer (PLR). The PLR seeks to ensure that the generator approximately preserves length, i.e., a fixed change in {{formula:37d35dab-32e9-4873-8e4e-962e401d7041}} causes a fixed-magnitude change in the sampled image. The effect of the PLR is to promote orthogonality in the Jacobian matrix {{formula:07cf0398-08c6-4575-9d12-f055d9829b71}} at any {{formula:621bed74-e024-4cca-901c-dd2d8da276ce}} . As remarked in {{cite:2461e6f6f412a6c7ea07cc0da47dc96f016e45f3}}, the PLR makes it easier to invert {{formula:175f24cb-95c2-4330-bd4b-3546aad591f2}} , motivating the usage of StyleGAN2 in our approach.
We have used {{formula:8c71efa9-8097-4763-bbc4-13ee597154a5}} as in {{cite:60d84fb35a171e7f26862665330b999b02c5cccb}}.
An alternative supervised approach, which requires a limited number of training samples to achieve satisfactory classification accuracy, is representation-based classification {{cite:d7660d13a7082bc0a4bd156efec064221647a2f2}}, {{cite:44b42547db7026db1d855a5189bbf2dc701f4e20}}, {{cite:58050784f2f5de6d1c66b1d18b1b6b97381e6f5b}}. In representation-based classification systems, a dictionary is pre-defined whose columns consist of the training samples, stacked so that each subset of columns corresponds to one class. A test sample is expected to be a linear combination of training points from its own class. Therefore, given a pre-defined dictionary matrix {{formula:37e508f4-309a-4034-8742-123636916b42}} and a test sample {{formula:6d44968b-1315-4112-b784-a328d712f51a}} , we expect the solution {{formula:e5f67657-315a-4645-ad80-fc4acb2ecb80}} of {{formula:8210115d-4bcb-4593-a46d-4c070a948adb}} to carry enough information about the class of {{formula:0fc952ec-6a29-4a7e-8e6e-6352a6d6a5a8}} . The two well-known representation-based classification methodologies are sparse representation-based classification (SRC) {{cite:44b42547db7026db1d855a5189bbf2dc701f4e20}} and collaborative representation-based classification (CRC) {{cite:d7660d13a7082bc0a4bd156efec064221647a2f2}}. Of the two, SRC provides slightly better accuracy by solving a sparse representation problem, i.e., producing a sparse solution {{formula:691c7de0-f172-478a-9603-e1129c1145ac}} of {{formula:a4543365-0a81-483f-8cf7-52307548738f}} . The locations of the non-zero elements of {{formula:633d4fe8-bf7d-48b1-8fad-5acaba87d00e}} , also known as the support set, then provide the class of the query {{formula:c6110b8f-10d9-4652-bb98-f2538d914c46}} . Despite the improved recognition accuracy, SRC solutions are iterative and can be computationally demanding compared to CRC. 
In a recent work {{cite:bf1e97877cd76c29c11d0c3c7fe9e6b0488c174f}}, a compact neural network design was proposed that can be considered a bridge between learning-based and representation-based methodologies. The so-called Convolutional Support Estimation Network (CSEN) uses a pre-defined dictionary and, from a moderate/small training set, learns a direct mapping from query samples {{formula:08dfd106-a877-43d6-bc95-feab79e60a12}} to the support set of the representation coefficients {{formula:86ca0a83-902e-4992-8752-33bdc10349ec}} (which should be purely sparse in the ideal case).
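The CRC decision rule described above can be sketched in NumPy: solve a ridge-regularized least-squares problem in closed form, then assign the query to the class whose columns best reconstruct it. The identity dictionary and toy sizes are deliberately simple assumptions chosen to make the example deterministic.

```python
import numpy as np

def crc_classify(D, class_ids, y, lam=0.1):
    """Collaborative representation (CRC): x = argmin ||y - D x||^2 + lam ||x||^2,
    then pick the class whose partial reconstruction D_c x_c is closest to y."""
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    residuals = {c: np.linalg.norm(y - D[:, class_ids == c] @ x[class_ids == c])
                 for c in np.unique(class_ids)}
    return min(residuals, key=residuals.get)

# Toy dictionary: two training samples per class in 4 dimensions (e1,e2 | e3,e4).
D = np.eye(4)
class_ids = np.array([0, 0, 1, 1])
y = np.array([0.0, 0.0, 1.0, 0.0])  # query equals a class-1 training sample
print(crc_classify(D, class_ids, y))  # 1
```

SRC would replace the ridge solve with an iterative sparse solver (e.g. an l1-regularized problem), which is where its extra computational cost comes from.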
We are interested in the setting of switching bandits, where the distributions of the arms' rewards change an unknown number of times, say {{formula:b7a451ff-504a-4d35-bc82-1af3101dd682}} times, before a time horizon {{formula:6abf49a6-af96-47cd-8c55-0d36229315a1}} . Performance is then measured by a dynamic regret, which compares rewards against those of playing the unknown best arm (maximizing the mean reward) at each round. {{cite:e38b35622434ab5ee3e9283d2e9676055e95ad44}} showed that various existing procedures {{cite:3731657c1bc3c8a879a8c25766371f194b0e1a50}}, {{cite:4f4de35d9b00ab402acce666134f28457d768794}} achieve a near-optimal dynamic regret {{formula:66ab3adb-96fd-4ea3-8e6a-788d6830aaae}} , however requiring knowledge of {{formula:54900b40-97aa-4b2f-9818-8eb73e06d9f2}} . This impractical requirement of knowing {{formula:09073a01-d4b7-401d-8aee-69d939b99b50}} remained standing until a recent breakthrough of {{cite:c59727bd62c81090340eb886e6ac5d7f475d0174}}, {{cite:02bb23930560bdc8b13d552390c59080482ab535}}, with important follow-ups in {{cite:bb0ad8a18883471b5dab90c88a6c9665324a91ae}}, {{cite:a7f3be1b9be486fa9f4939c9378649952e171100}} for more general contextual settings.
In the first part of the paper, we take advantage of the results obtained in {{cite:4d7755698fe3668cfd997dee156a9b6a87b07294}} to study the retraction of uniconnected solutions by means of left braces, giving special attention to the solutions associated with cyclic left braces. As applications of these results, we answer a question posed in {{cite:a9bd082964f41ff210f9240767e99723eecf7952}} and we extend a result on the retractability of solutions obtained in {{cite:6662d64d60da653f6275de569fdb34f6e2561d55}}.
where {{formula:2f1d8d94-e66f-482a-a947-e8842bdbf51c}} is the critical exponent related to the change in {{formula:6ee080ea-986c-43fd-9d1c-3c369dbd3c79}} from applying the block spin RG (i.e. {{formula:d203e57d-5d95-4b0e-8215-e72aa6107730}} ) and to the correlation length (i.e. {{formula:4cb31391-1bb4-4834-87e8-93698761d6ba}} ) {{cite:226bd6cef2d17de29dcb0c628ca322b25aa1aed0}}. Therefore, {{formula:71cc0672-ce4c-4766-b7d6-199393988772}}
The electronic structure of ML black phosphorus is studied with DFT calculations using the Quantum Espresso code {{cite:9dcdbb88bb837cf055e34407ebc09766ba44e144}} along with PAW pseudopotentials {{cite:6b0bd2272726078598b7b146bbdcfbded429cc28}} and the PBE functional within the generalized gradient approximation (GGA) {{cite:3e80e38e50c8529d8ff9ab8fd08ad2bc71566cbe}}. The van der Waals (vdW) interaction has been taken into account for the monolayer structure {{cite:be2f63aa43217a229062e4d0fc40b33d87f52a3b}}, {{cite:927c18b7afe3317ebcb674c40e380b8c3bad62f0}}. A Monkhorst mesh of {{formula:6774281a-52da-4b1f-9e15-c2ce5a611e5c}} k-points is used for geometry optimization, with 540 eV as the plane-wave cutoff energy. The optimization is iterated until the total force converges to {{formula:1fc6b3b1-0ebd-4053-a049-d44cc5a04bd0}} 0.001 eV{{formula:8bd9269d-9e84-4573-b11b-658fdacf8249}} . Supercells with lattices of 12 {{formula:8f3eb6b5-354d-467e-93b6-11f5636c19d7}} in the z-direction are considered to suppress the periodic interaction among the surface images of the monolayer sheet structures. We use a {{formula:e8b9c5f4-a196-496d-b43a-e846b82321cc}} k-mesh for electronic structure calculations. The phonon-mode simulations are performed within the framework of density functional perturbation theory {{cite:79db225108320da72dfb28af8d9121f6c505ee88}} with a {{formula:18a754bd-238c-4505-8def-2a3d6d7cbc62}} k-point mesh. The transport phenomena have been studied using the Boltzmann transport equation (BTE) {{cite:6031ab193805cd987b512c514509745d417b716b}}.
In this study, we convert a reconstruction problem into a GAN inversion task. This work seeks to relieve the bottleneck and enhance performance using an in-domain GAN {{cite:fa555241105212a069fe742895d6c3a4377d14f8}}, which not only reconstructs facial images from brain activity, but also reveals the semantics of the inverted code. We found that attribute manipulation can be used to improve consistency between the reconstructed image and the stimulus image (Tables REF ). For example, compared with VAE-GAN {{cite:723236eb99f310af36eaeeea84a00c4761181031}}, we retain the correct gender characteristic (Fig. REF C) using the proposed framework. Also, Table REF shows that images generated by our method coincide with human perception of specific attributes. These findings suggest that we can achieve better results by leveraging the ability to learn semantically meaningful latent codes and to adjust facial characteristics.
For the most stable candidates [{{formula:09d36abf-f586-44ed-a9a1-6047e9f3ee05}}  (100-140 GPa) and {{formula:70100475-dbea-4118-a166-94c7d3800d30}}  (140-300 GPa) of LaYH{{formula:8ab4e5e9-cc4a-4a01-b45e-9b4fad41a2f6}} , for example], we performed ab initio phonon calculations to evaluate their structural stabilities and {{formula:af543cea-8d5d-4681-9ad7-fae5ad9873e2}} using the Allen–Dynes formalism {{cite:18406e219eac2c3232b719e788c7e75db00e289b}}, {{cite:931c21836b32618aeef01905c088d88e929073de}}, {{cite:49290efa60d153916c604f58babd44a7e40e7604}} implemented in Quantum Espresso. {{cite:18406e219eac2c3232b719e788c7e75db00e289b}} Computational conditions such as the cutoff energy of the plane-wave basis set, the sizes of the grid mesh over the Brillouin zone, etc. were chosen so that the energies were sufficiently converged. We finally used an {{formula:c83ed797-c12c-485f-9da6-eb67a95f565d}} {{formula:336aee00-fd3f-430d-9764-9b6108db7bca}} -mesh for the self-consistent field calculations, assisted by the Marzari–Vanderbilt smearing scheme. {{cite:5835b54b8d73c6b38d85f7d2edbd637e3f360199}} The final energy values were estimated by extrapolating the smearing parameter toward zero. Mesh sizes for the phonon calculations were {{formula:0266864c-9d4a-4c1c-99c4-16aeee646c69}} for the {{formula:7ad138d9-80e0-418b-804b-a9c882a8fec0}} - and {{formula:bdbc9971-c2ae-46e9-9b77-7601ac24541a}} for the {{formula:5abb1801-def4-4b28-9cfe-9d88805fe091}} -mesh.
Privacy leakage is one societal concern with zero-shot quantization, because the generator creates synthetic samples that follow the distribution of the real data. As several input reconstruction techniques point out {{cite:96fd35d4a6e255e3d0f68e3139db44fb58ab75c7}}, {{cite:266255457060bcebcfed4b273ac95cb071f814ad}}, the synthetic samples could reconstruct the private training data. However, to the extent of our observation, there is no sign that AIT reconstructs the real data, as shown in fig:samples. The training method for the generator in AIT is no different from that of its baselines: since it does not alter the generator loss in eq:gdfqG, it does not contribute any further to the privacy leak.
In order to assess the performance of our method, one dataset is chosen: DOTA {{cite:4a8948993397e1e3093adaa695e3094d6b7b24c3}} which contains remote sensing images. It has 16 different classes and contains around 400k annotated objects. These objects are distributed within 2800 large-scale images and their sizes vary greatly, even within a single class as the spatial resolutions between two images can be significantly different.
So far, implicit representations have been explored to model the 3D geometry of local shapes {{cite:fdd5413de70761d17297f76364cf66ea0863a579}}, {{cite:b47d563275877199872d627a2c0bcd0d08eaf5de}}, single objects {{cite:b3d4e3cf21849a7ac1804207c9328ca96e17efad}}, {{cite:5e74a35e77ac373ef172d57257d185a9f792acfb}}, indoor scenes {{cite:7858ede6ce28876e8e0f1669d50c3a18d50e949b}}, {{cite:cf131f1557fa6b7d067701e9c0b5e569a63c5447}}, {{cite:79d3aea2e31dce258d00e582bc99508b9feb4981}}, {{cite:c8a5bc8dde97d1606b7622c8b5b3898bfa6a030d}}, and single buildings {{cite:e0c998735dd1643ad1d03a624e661ce03634f9ea}}. In this work, we go one step further and investigate their potential to accurately reconstruct 3D urban scenes, on the order of several km2, from satellite data. To that end, we introduce ImpliCity, a coordinate-based, implicit neural 3D scene representation based on a point cloud derived from satellite photogrammetry. Since such point clouds are comparatively sparse and lack high-frequency detail, we additionally use an image stereo pair to guide the occupancy prediction. ImpliCity reconstructs city models with fine-grained shape details, smooth and well-aligned surfaces, and crisp edges. It thereby reduces the mean absolute error by >60% compared to a conventional stereo DSM.
Proposition 2.4 (Section E of {{cite:4ffe7a4496ee7f707a3b97bcabbed387519ab203}}.) Let {{formula:a784702d-8dcd-4e9b-8ef9-faa85101efb4}} , and let {{formula:2c1ecbce-1f64-413a-bd56-854bb3288979}} be a matrix with {{formula:f7552ee6-7247-471e-815e-e30497c4e34d}} . Suppose there is {{formula:59907ed5-5260-42e3-916b-e46ed83d46f4}} such that {{formula:8108798d-a6b7-4265-95cc-58c243105b5c}} . Let {{formula:f37b7a04-2d7d-4ae3-9038-a6b5e4d01c4b}} . Then we have {{formula:0c966d3d-261d-4785-8998-ed49adc6378f}} .
A detailed proof of Theorem 2 and Theorem 3 can be found in the Appendix. In order to illustrate the double robustness property of our proposed estimators, we borrow the idea of the oracle estimator {{cite:0304ba1c01832d77f063515df7114b02af67b14c}}. Define the oracle estimator {{formula:2d78fe25-a6d5-4e94-b372-047955dfba53}} as the solution of {{formula:d576aa66-b39e-4c5d-b32f-b602d9f81e2b}} using {{formula:3b88e210-756e-47bc-9c2a-52030d3136b2}} and {{formula:80af6133-8906-4824-8b5c-24666c76bc60}} (i.e., assume that the zero and non-zero of the coefficients are known in advance). Since we do not know the truly important variables in the application, the oracle estimator is just a conceptual idea to help us establish the theoretical properties in variable selection. Due to the double robustness of {{formula:4ffe4a7e-e377-4718-95a1-987d50f51c7e}} , {{formula:204e854c-ea4c-4ff3-8e8f-c0f85cdd10a8}} is a consistent asymptotically normal estimator of {{formula:694bd3da-f923-4cd5-921f-daa5b7abc177}} under standard regularity conditions (see Appendix for the details) for {{formula:d9f21219-ab48-4523-aa09-75f53a66bcf3}} -estimators. The properties of {{formula:0b2b0009-325e-436a-95de-a25c6586f2c8}} in Theorems 2 and 3 are referred to as the oracle property {{cite:0304ba1c01832d77f063515df7114b02af67b14c}}, i.e., {{formula:81deddd2-3210-4a1a-bfd4-3313c06716b7}} performs as well as the oracle estimator {{formula:46390772-21e2-4065-bfbb-8ffee3affe8b}} .
The recent measurement of the mass of the {{formula:33eef831-84a7-47c9-90e8-e2f21b8ed856}} that we use to constrain our parameter space is given by (REF ). We will also use the average published by the PDG collaboration, given by {{formula:d1dc1b12-6ac1-4f11-bce4-3d38cbb96932}} , to compare the parameter space that leads to these two different measurements. We fix the mass of the lighter CP even Higgs to the current average central value of the SM Higgs {{formula:f360f66e-5dc6-46db-b399-bf32294a6aca}}  GeV {{cite:e4671b9eee0eac0bdd9a0db2ffb3160d684afad8}}.
Once we have a mathematical form of the intrinsic interface, the density and pressure tensor can be obtained in a reference frame that moves with the surface, {{formula:708c1245-effb-4481-bb74-8610d864c433}} , which takes a different value at every point in {{formula:6ea3a99b-b57a-4830-b701-b1668a4307b9}} and {{formula:89995877-e869-42ac-b206-14f6e2827c6e}} and is updated at every time step, {{formula:63a29dfb-a644-442e-ba13-923595cd3161}} . To obtain quantities that move with the interface, the {{cite:45a6a2b7ce9455866b37fac82b9212b528b25608}} definitions can be integrated over a volume whose surfaces in the {{formula:caabb8c0-5680-4bbc-a06e-9e62a3e4dd2e}} direction follow the function given by Eq. (REF ), with uniform grids in the {{formula:a2ca7317-ed4b-4da6-b607-1bac78d34e0a}} and {{formula:310a63f2-08d9-4707-a1fb-866d29b2a459}} directions. The density in a volume moving with the interface is then {{formula:3d3596a1-7ca9-40d1-9a79-c2de3ac91c8b}}
m
283fd911074d036bc71cd59bf2c9296c
For task-related reasons, on the other hand, this evaluation approach lacks validity in terminology harmonization. When evaluation is based on one relation type only, a decision must be made about which relation type an evaluation data set should feature. {{cite:62be0c01690c53601c240f5a7caa9335677ffe76}} assume that this choice should be based on the down-stream application a distributional semantic model is supposed to support. A similar argument is brought forward by {{cite:27ba3613ba666a3a66d5f5899b0a001bc85d2879}}. They consider evaluation data sets that focus on similarity to be especially important for lexical resource building. Here, we come back to our motivation as discussed in section . Modern lexical resources go well beyond classical glossaries and aim to be knowledge resources. This is especially so for domain-specific resources, where onomasiological modeling of terminology has become predominant. Lexical and terminological resources accordingly become multiply related data structures (for example iglos {{cite:0aaa2d9d50786055c9e1d64a5241756a7b0b94ad}}, Frame-based Terminology {{cite:33d9255115f7ffad9afe00371bee11b0e482bad3}}, FrameNet {{cite:897f4cb97721d63285d5fc11d1fcf6cc39685be7}}, or WordNet {{cite:50954cce27c98e9830f243db2656cec7dfe4a8b7}}). For the creation of such lexico-terminological resources, it is therefore not sufficient to create data sets based solely on one relation type, that is, similarity- or relatedness-based ones. For terminology work and terminology harmonization (cf. section 1), very different relation types – some from the relatedness spectrum, some from the similarity spectrum – play an equally important role. The consequence would be to create several evaluation data sets instead of one. We have therefore paid for validity and methodical efficiency with some limitations in reliability.
d
0ac7ba14e89f223d9a8cad27091694a5
The above local stability assumption is also related to the Lipschitz sub-minimization path assumption suggested in {{cite:90d0d054d7981ed7a8286e7769b5ef35ddd176fe}}. As discussed in {{cite:90d0d054d7981ed7a8286e7769b5ef35ddd176fe}}, the Lipschitz sub-minimization path assumption relaxes the more stringent full-rank assumption used in the literature (see the discussions in {{cite:90d0d054d7981ed7a8286e7769b5ef35ddd176fe}} and references therein). As {{formula:dcd1e4dc-db69-4e99-ab79-ffab13ff939c}} is a compact set, {{formula:75196c8d-167f-46b0-bfbd-7eac935b83d6}} can be taken as the supremum of these stability constants over this compact set. Based on Assumption REF , we have the following lemma.
d
b759fd82e81577ae37ee4958b826e47d
As a consequence, in order to find all meromorphic solutions of both CGL3 and CGL5, only a finite number of possibilities need to be examined. This is done by two methods. The first method (Hermite decomposition {{cite:d2847528071fa666422557ac3ee4b21cfed2958a}}) represents {{formula:2f4cda7f-e47b-4a96-8c84-3eba09b62179}} as a finite sum of derivatives of “simple elements” admitting only one pole of residue unity (Weierstrass' {{formula:dd22f4db-0aef-4ce1-b9d7-fc09a554f761}} {{cite:c67fa8346b5656fe72113627a386ecd7b61cdf4a}} or its degeneracies {{formula:67a9b362-1a4f-41b9-98d9-09065fc0c032}} and {{formula:a5aea748-d15a-4da7-9e8d-8d67ff28d89a}} ), while the second one (subequation method {{cite:137e79a937014d1161bdde8fae36bb0e8c0b65cc}}, {{cite:288264ab3fe4eb478c989765cd7e12d0327d7463}}) builds the first order ODE obeyed by {{formula:fd676942-f2ce-4610-8e4f-3ea15b351c3b}} then integrates it. Full details can be found in {{cite:b7060156b752a8cce3dbdeb5eea3a3aae99df4f3}}.
m
dfba1b7d6c73254f04eaa63064feb4e6
Let {{formula:15d4e8bf-bdda-467e-ba64-9e9f22490760}} and {{formula:c17a2973-f378-4a12-9e2c-afd62da00ae1}} be the states of a two-level atom (TLA), with energy difference {{formula:eb0f70f5-24a6-48a4-b7ea-adf6183eb5e8}} and atomic dipole moment {{formula:78fce1da-3561-46e7-8d42-541ad7c001a8}} . If we let this system interact with a classical field {{formula:65c3fc43-d911-47aa-aa83-3f9ea257a911}} , the equations of motion for the corresponding probability amplitudes are {{cite:9b135bd7de7d6b3fa373a245fc03be553d6152bb}}: $\dot{C}_a = \frac{i\mu}{\hbar}\,E(t)\cos(\nu t)\,e^{i\omega t}\,C_b$,
d
a56f0f244023d3c337551ef103f39133
where {{formula:75ecdd3e-fb17-4829-bf69-aa1c206909eb}} s are the expansion coefficients with the Slater determinants {{formula:2bc5b61e-ff29-494f-be66-457a91cf1b17}} s generated by exciting the HF wave function {{formula:1752d0c4-f912-4201-9d2f-df03e55f7f2d}} . Due to the extremely steep computational cost, the truncated configuration interaction (CI) method is usually considered for multi-electron systems in practical scenarios. At a given level of truncation, the coupled-cluster (CC) theory accounts for electron correlation effects more rigorously and satisfies the size-consistency and size-extensivity characteristics, in contrast to the CI method, thereby earning the title of the gold standard of electronic structure calculations {{cite:4aac4a5056d399d0b553e059c985b172c02a4c86}}. In the CC theory ansatz, {{formula:36fe2022-3b5c-4517-a8ef-a5e4fe443b42}} takes the exponential form (e.g. see Refs. {{cite:4aac4a5056d399d0b553e059c985b172c02a4c86}}, {{cite:ed01cfac633d058e64d1bcbb57a4885e39a37181}}) {{formula:004be7e0-55ce-42e5-ae17-3cde30104708}}
m
06f5d1e30da7930a73c8f5e779a21472
Classic: STP can be approximated by computing the minimum spanning tree of the metric closure over the terminal nodes, i.e., the complete graph on the terminals whose edge weights are shortest-path distances. Given this minimum spanning tree, we can construct a subgraph by replacing each edge of the tree with its corresponding shortest path in the original graph. We include it as a classic algorithm{{cite:628e22a09d1657a19491dc573aefdb52eca0001c}} that usually appears as a baseline for heuristic algorithms. Vulcan: Vulcan uses a combination of graph embedding and DDQN and learns to select the optimal vertex in the current state. To stabilize the training, we decay the exploration probability linearly from 0.1 to 0. The hyperparameters are tuned on small generated instances and fixed for the other instances. GNNs: In the introduction, we explained why current GNNs are not directly suitable for our problem. To verify this, we select several representative GNNs for comparison with Vulcan. The original GNNs are mainly used in node classification or other graph combinatorial optimization problems. To handle STP, we use the same encoder network and decoder network as Vulcan. To isolate the difference, we use GNNs in place of the processor network that we proposed in Vulcan. Finally, we use the same reinforcement learning procedure for training. S2V: S2V{{cite:d44a7ccee23849117bcf57c4ee04c48e89324b8a}} is used in MVC, MC, TSP, and other problems to capture the structural information of the graph and has achieved impressive results. GCN: GCN is the most common baseline for node classification and node prediction. Owing to its efficient performance on graphs, it is also used in many NP-hard problems, such as MIS and MVC{{cite:efc3290b3cde91e76140c1b2b8ddf93408a9ead4}}, {{cite:93b72947fb4f0304c9b090421360d611091f1bb4}}.
GAT: Graph Attention Networks (GAT){{cite:19d82c2e7f81860ebdbd137d65125a192f0fffdc}}, {{cite:9afbd7b0d5eea991ac9dbf913190ff028c2a5c8e}} add an attention mechanism so that the network first attends to more similar adjacent nodes, thereby improving the model's accuracy. MLP: This variant does not use GNNs and only adds a few Multi-Layer Perceptrons (MLPs) to improve the generalization ability of the encoder network.
m
599ee321f7ee8d15e4f82e3e718fefe9
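The classic MST-based baseline described above fits in a few dozen lines. The sketch below is a generic illustration of the 2-approximation (metric closure, then MST, then path expansion), not the study's code; the toy star graph and all function names are invented for the example:

```python
import heapq

def dijkstra(adj, src):
    # Shortest-path distances and predecessors from src in a weighted graph.
    dist = {v: float("inf") for v in adj}
    prev = {v: None for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def steiner_2approx(adj, terminals):
    # Metric closure over terminals -> Prim MST over closure distances ->
    # replace each MST edge by its shortest path in the original graph.
    closure = {t: dijkstra(adj, t) for t in terminals}
    in_tree, mst_edges = {terminals[0]}, []
    while len(in_tree) < len(terminals):
        u, v = min(((a, b) for a in in_tree for b in terminals if b not in in_tree),
                   key=lambda e: closure[e[0]][0][e[1]])
        mst_edges.append((u, v))
        in_tree.add(v)
    tree = set()
    for u, v in mst_edges:
        prev = closure[u][1]
        while prev[v] is not None:      # walk the shortest path back to u
            tree.add(frozenset((v, prev[v])))
            v = prev[v]
    return tree

# Toy star instance: three terminals around one Steiner node "m".
adj = {
    "m": [("s1", 1), ("s2", 1), ("s3", 1)],
    "s1": [("m", 1)], "s2": [("m", 1)], "s3": [("m", 1)],
}
tree = steiner_2approx(adj, ["s1", "s2", "s3"])
print(len(tree))  # 3 edges, all through the Steiner node "m"
```

On this instance the expanded paths share the central node, so the union of edges is the optimal Steiner tree even though the closure MST alone would cost more.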
The calculations have been performed using the Density Functional Theory (DFT) approach with ultrasoft pseudopotentials and the Generalised Gradient Approximation (GGA).{{cite:6b808fd0d9084c2a5868faf624144b238f46d728}}, {{cite:953a8f5c7102c4ca06898cbaec65a35d49ec7af3}}, {{cite:5a75773a968b81fb2e772f87826fd64cd1b8852e}} We have used the CASTEP{{cite:97c0d46628b2feed60d3f766be527cf538a76218}} implementation of this method and the pseudopotentials provided with this package. The elastic constants of the NaCl phase have also been verified by an independent calculation using VASP{{cite:1d4d4b53085e952d1aeb3bdf93f112d563fb5971}}. The electronic minimisation used a density mixing scheme and pseudopotentials parametrised in reciprocal space. The summation over the Brillouin zone has been performed with a weighted summation over wave vectors generated by the Monkhorst-Pack scheme.{{cite:7f0a145bc7304b078a539b1e71459c5623ca92d6}} All computations were done in a 1{{formula:f01d9787-dedc-496e-9f22-63ef2e1c4cdb}} 1{{formula:512924a4-cff4-47e4-8f30-53223d92efe7}} 1 supercell with 8, 2, or 8 atoms for the NaCl, CsCl, or KOH structures, respectively. Some tests for NaCl have been carried out on the primitive unit cell, others on the face-centered cubic unit cell.
m
2b5cb38f9c7d12f3ee02c40a010a5de3
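For reference, the Monkhorst-Pack construction used for the Brillouin-zone summation reduces to a simple fractional-coordinate formula, u_r = (2r − q − 1)/(2q) along each axis. A minimal sketch in fractional reciprocal-lattice coordinates (symmetry reduction and k-point weighting are omitted; the function name is ours):

```python
from itertools import product

def monkhorst_pack(q1, q2, q3):
    # Monkhorst-Pack fractions along one axis: u_r = (2r - q - 1) / (2q), r = 1..q.
    axis = lambda q: [(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)]
    return list(product(axis(q1), axis(q2), axis(q3)))

kpts = monkhorst_pack(2, 2, 2)
print(len(kpts), kpts[0])  # 8 (-0.25, -0.25, -0.25)
```

Note that odd subdivisions include the Gamma point while even ones straddle it, which is why even and odd meshes can converge differently.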
The theoretical predictions of the current study can be tested in state-of-the-art experiments, such as the Brownian particle system {{cite:70b6a306f4b7428c271b3942dd66b38c1f861e53}} and the trapped ion system {{cite:97a62ed0349fd28cf6ac5f8faad6f91c2c9fdfa6}}. As a direct extension, similarly to the stochastic efficiency defined here, the coefficient of performance of a refrigerator can be defined as the ratio of the stochastic released heat to the average input work. The statistics of a stochastic refrigerator can then be further discussed. Besides, it is expected that the many-body effects of the working substance {{cite:ee2a46d90484f87654afeac8cb428f1dffe154e1}}, {{cite:e8585624ae978e3280e8fbdb189ec17ba4e538e5}}, {{cite:f0f2a81610397d216c0f03ab8ddd4dfc2572cc50}}, {{cite:9a6796b72343bd9f251b04c314b568ab8c3a63e0}}, {{cite:aa16e29b8275603f144a6eb0eb1e0d2657d2b53f}}, {{cite:05f574c13c4a9aa079c9f3626995540360a5b30f}} and the influences of the control protocols for the cycle {{cite:858c316e39fdc3c04eecb19f7047a6d95541d5e5}}, {{cite:a59d7543f162e9327bbeae5d5a2830c493103f64}}, {{cite:685599225b4b6fdeb03f2ae98ae2add228514d0d}} on the efficiency statistics and TUR will be taken into consideration in future investigations.
d
69c664a4eb71b07652b5fd52e1f935b0
Other directions for future work include pruning on top of PCACE to further understand the impact of the top-ranked neurons. It would also be necessary to compare the PCACE rankings to other quantifications of the definition of importance of a channel, as it was recently done in {{cite:1ee77dae0f1ab0384eb667de23ef32aafee4f597}}. However, there is a scarcity of work in this direction, and thus a lack of standardized methods to compare to. We hope that our algorithm provides a step towards a quantifiable notion of explainability in the deep learning field. Lastly, we could consider to perform a non-class-based PCACE, and run the algorithm with a weighted set of images from all classes. We could also aim to incorporate the possible correlations between pairs of neurons, or even consider computing the PCACE with groups of neurons as predictor variables, as they have also been found to work in groups ({{cite:7d4d42089eb252d6a0abb42f5a8ea6bce0fec373}}, {{cite:d51545217e7aa63e94c3c18f7547ad6e6595bf87}}). There is still much to be done to open up these black boxes.
d
0fce75a99f60028f8fb1f1f1f647f5e9
In this work, we proposed a novel framework for estimating stochastic sample trajectories that reflect prior knowledge about the target system. Unlike existing methods with OT {{cite:7a298230e9590982b6e5a64683c1b658a4ded272}}, {{cite:445db1e67a96b062697cb30b40e4df63f07e13ad}}, or CNF {{cite:97885a348f005ded941279ee78f1165748681024}} for a biological system, we explicitly modeled the diffusion phenomena by using SDEs for the time evolution of samples with stochastic behavior. This allowed us to handle the uncertainty of the trajectory (Fig. REF ) and to successfully capture the diffuse transitions (Tab. REF and Fig. REF ). In contrast, {{cite:ba593916ab2bb20bedced81e1c29b9c56b0cf5f6}} and {{cite:34cec0bffdc0598feff6664df423aa49f2f2361a}} estimated the sample trajectories of biological systems using the SDE solution to the SB problem. {{cite:ba593916ab2bb20bedced81e1c29b9c56b0cf5f6}} proposed an iterative proportional maximum likelihood (IPML) algorithm to solve the SB problem. They recast the iterations of the dynamic iterative proportional fitting procedure (IPFP) algorithm as a regression-based maximum likelihood objective. {{cite:34cec0bffdc0598feff6664df423aa49f2f2361a}} proposed GSB-Flow, in which two SB problems are solved sequentially. The first SB problem with Gaussian marginals was solved in closed form, and the next SB problem using the closed-form solution as the reference path was solved by maximum likelihood training. Compared with these methods, our method handled a wider class of SDEs, and the diffusion function of the SDE was learned from data. Moreover, designing the Lagrangian enables us to flexibly incorporate prior knowledge about the target system into the model along with the principle of least action. Our Lagrangian-based regularization also treated the biological constraints proposed by {{cite:97885a348f005ded941279ee78f1165748681024}} and {{cite:06b4a6e46de3842d54703bc589095543a0cddf2c}} in a unified manner.
In experiments, Figures REF and REF , and Table REF indicate that the prior knowledge introduced by the Lagrangian is useful to estimate the trajectories of individual samples with stochastic behavior.
d
ee14d5c50d665f8ad352c2cb6911d3e0
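The SDE-based time evolution discussed above can be illustrated with a plain Euler-Maruyama integrator over a batch of samples. This is a generic sketch: the Ornstein-Uhlenbeck drift and constant diffusion are toy choices standing in for the learned drift/diffusion functions, and all names are ours:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_grid, rng):
    # Simulate dX_t = f(X_t, t) dt + g(X_t, t) dW_t for a batch of samples.
    X = [np.asarray(x0, dtype=float)]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        dW = rng.normal(scale=np.sqrt(dt), size=X[-1].shape)
        X.append(X[-1] + drift(X[-1], t0) * dt + diffusion(X[-1], t0) * dW)
    return np.stack(X)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)
# Ornstein-Uhlenbeck toy system: mean-reverting drift, constant diffusion.
paths = euler_maruyama(lambda x, s: -2.0 * x, lambda x, s: 0.5, np.ones(500), t, rng)
print(paths.shape)  # (101, 500)
```

Averaging the 500 paths recovers the deterministic decay exp(-2t) of the drift, while the spread of the paths carries the trajectory uncertainty that a point-estimate method would discard.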
The dataset provided by the competition organizers consists of the corpora {{formula:864774a8-d335-42f9-8a9d-ba6ec4edfbc5}} and {{formula:127d106b-96be-451c-b004-477e3c8ed6d5}} , and the list of target words for four languages: English, German, Latin, and Swedish {{cite:3f1bd776b2b7856c106ac3673770dd5f3449c885}}. All of the corpora were already preprocessed: they were in tokenized and lemmatized form, with punctuation marks and one-word sentences removed.
r
bb4a494cdd47349ac8315b2ec6a9d991
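The preprocessing described above (tokenization, lemmatization, punctuation stripping, and removal of one-word sentences) can be mimicked in a few lines. A rough sketch; the trivial lowercasing stand-in replaces a real lemmatizer, and the toy corpus is invented:

```python
import string

def preprocess(sentences, lemmatize=str.lower):
    # Whitespace-tokenize, "lemmatize" each token (lowercasing stand-in),
    # drop pure-punctuation tokens, and remove one-word sentences.
    cleaned = []
    for sent in sentences:
        tokens = [lemmatize(tok.strip(string.punctuation)) for tok in sent.split()
                  if tok.strip(string.punctuation)]
        if len(tokens) > 1:
            cleaned.append(tokens)
    return cleaned

corpus = ["The plants grew quickly .", "Attention !", "Plants need water ."]
print(preprocess(corpus))
# [['the', 'plants', 'grew', 'quickly'], ['plants', 'need', 'water']]
```

In practice the `lemmatize` argument would be a language-specific lemmatizer rather than `str.lower`.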
Heuristic techniques that find satisfactory solutions to the Examination and Course timetabling problems have also been extensively investigated {{cite:c4e63d09105d953cab6e8f8da522c4271e11c6af}}, {{cite:4988fe5bdbfb12ebc1acd358a3a6b904f6c56ba8}}, {{cite:c0bf4bafb06fffebaaa94daa735ad3ca5ff69c64}}, {{cite:00ba31e69b8d5652f782ab9438ff371021d544c4}}. Implementations of some of these algorithms are available as packages; for example, DEAP - Distributed Evolutionary Algorithms in Python {{cite:cd2f50c0ae5712a7c51a28cb5638b2102dca6f4e}} is a Python implementation of the Genetic algorithm. This study explores the use of some well-known algorithms to solve the combinatorial problem in this research and compares the quality of the resulting solutions.
m
2f873f4c5c4df2313383fa436f4cd101
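To give a flavour of the genetic-algorithm approach, the toy sketch below evolves a clash-free exam timetable. The conflict graph, population size, and mutation rate are invented for illustration and are unrelated to the study's actual instances; a library such as DEAP would replace this hand-rolled loop in practice:

```python
import random

# Toy instance: assign 6 exams to 3 slots, minimising the number of
# student-shared exam pairs scheduled in the same slot (a 6-cycle of conflicts).
EXAMS, SLOTS = 6, 3
CONFLICTS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]

def clashes(tt):
    return sum(tt[a] == tt[b] for a, b in CONFLICTS)

def evolve(pop_size=40, gens=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(SLOTS) for _ in range(EXAMS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=clashes)
        elite = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        for _ in range(pop_size - len(elite)):
            p, q = rng.sample(elite, 2)
            cut = rng.randrange(EXAMS)
            child = p[:cut] + q[cut:]         # one-point crossover
            if rng.random() < 0.3:            # random-reset mutation
                child[rng.randrange(EXAMS)] = rng.randrange(SLOTS)
            children.append(child)
        pop = elite + children
    return min(pop, key=clashes)

best = evolve()
print(clashes(best))
```

Because the conflict graph here is an even cycle, three slots suffice for a clash-free schedule, which the elitist GA finds easily on this tiny search space.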
when {{formula:e3048e0c-1c59-495a-a34a-9add4daa2614}} vs. {{formula:d17d710c-9478-4c5a-84f4-3ecc1933eca4}} when {{formula:d017ad69-85fe-4b3f-9c7b-c7e6e945d2dc}} . In MIMO, it is {{formula:e921c4d9-7b56-4456-b508-00e504c5255f}} vs. {{formula:908b5ec4-9b69-4083-aa9c-d68bb3291cff}} . Because subnetworks do not share features, a higher {{formula:12ad4ba7-081f-4cf2-9e46-5b677ba8a6e3}} degrades their results: only two can fit seamlessly. The ensemble Top1 decreases overall in spite of the additional predictions, as already noticed in MIMO {{cite:b8c13eada53c401f53c79d6a4cd93daf4aa80bbe}}.
r
7354577e9b9b6e4250035108a46179a0
We briefly consider the way in which our techniques fail to extend to Clifford+T circuits (i.e., including a {{formula:b0b99e38-72a1-4196-a80b-7c70206bc725}} gate for {{formula:5f8053c4-cdc4-4fd2-b954-1dfd6d45fd9a}} ), or Clifford-plus-controlled-S circuits. Consider a representation of states similar to Eqn (REF ), in which the relative phases are expressed as powers of {{formula:29f04a49-a18f-42ff-8710-ff124d782c16}} instead of powers of {{formula:b96e6bfa-bd78-47de-940d-dddce861d65a}} . This would complicate the analysis of the simulation of the Hadamard operation (or more precisely the procedure analogous to ZeroColumnElim), requiring the analysis of a new case in which some entry of the Gram matrix {{formula:f55a86da-8deb-4602-97fe-0b73681d06fa}} represents an odd power of {{formula:25385cb6-2bde-4ec9-9e80-9922be2d06be}} , to extend the analysis of Eqn. (REF ). There is no algebraic `coincidence' regarding {{formula:35bb9058-7299-4a71-bbec-566e6549233f}} , analogous to the equality {{formula:e5be27af-d459-454a-8770-9d37b29691b6}} , which would allow us to reduce the rank {{formula:366beb17-6f74-4702-b046-7f388d4629ae}} to simplify a quadratic form expansion involving relative phases which are powers of {{formula:11c49133-d4c6-4cee-a19c-0b3657d65648}} ; more sophisticated (and computationally demanding) techniques would be required. In the case of controlled-{{formula:46c93fce-2ac9-48b8-b00a-0babfff86ac7}} gates, any attempt to simulate them directly on a state as in Eqn. (REF ) would require that we abandon the constraint that {{formula:05b3243a-9554-48e8-b0b5-511ffbbf6cc5}} is symmetric; however, this complicates the analysis of Eqns. (REF )–(REF ) in ZeroColumnElim, as well as any analysis involving a change of variables (as Lemma REF relies on the symmetry of {{formula:5638b993-9584-4b16-bee3-6b1a98406263}} ).
The existence of such obstacles comes as no surprise: the ability to efficiently simulate either of these circuit classes, suffices to simulate arbitrary quantum computations with bounded error {{cite:9b16b796e01dcee74243b42443c00debbb170bae}}, {{cite:65476e9dd64274ef94b86a91e6a3e11b9aa3f054}}. However, extensions of our techniques in this direction could yield modest reductions to the simulation complexity of more complex quantum procedures.
d
93742da623088ecd5c98ef2406aa843c
A number of approaches have been developed to accurately compute the energy that arises from electron correlation, which are usually carried out after a HF calculation, thus they are termed post-HF methods. The most straightforward is to consider the exact wavefunction being a linear combination of all possible excited electron configurations that can be generated from the HF wavefunction – the full configuration interaction (FCI). However, the factorial scaling of FCI with number of electrons and basis functions makes it computationally costly except for very small molecular systems. Coupled cluster theory is another approach that converges to FCI with increasing numbers of excitations {{cite:fa9b51f2ea2830c8861bbd3a4bfcb6a5ebbe3529}}. Notably, the unitary coupled cluster with single and double excitations (UCCSD) is a frequently encountered quantum computing ansatz {{cite:2665bb941e2ddf91943658c22e9f610222146d32}}, which scales as the sixth power with system size on classical computers. The UCCSD approach provides a fermionic unitary transformation that is easily translated into quantum circuits via standard fermion-to-qubit mappings such as the Jordan-Wigner {{cite:f695c7b5901269af16278c23b75e2d578da0f95b}} and Bravyi-Kitaev {{cite:706c68d3e908437e6c8210fe56f01efccbc904fa}}, {{cite:593f19dc2d067d1a4ea5a397dee605f6dc8eb845}}. However, despite best efforts in reducing quantum resources needed, the UCCSD ansatz requires at least {{formula:eb7abe79-c061-4b27-b257-4dc1b744ffdb}} circuit depth and {{formula:ef923cf0-44bb-4627-a255-94ebe2affc78}} number of parameters, which make it difficult to implement on currently available NISQ devices {{cite:2665bb941e2ddf91943658c22e9f610222146d32}}.
i
83cea80b6bf2d18f76ebda86ad2bc074
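The Jordan-Wigner mapping mentioned above can be written down directly for a single creation operator: a_j† maps to Z_0 … Z_{j−1} (X_j − iY_j)/2. A minimal sketch, with Pauli strings represented as dict keys and coefficients as values (the helper name is invented):

```python
def jordan_wigner_creation(j, n):
    # a_j^dagger -> Z_0 ... Z_{j-1} (X_j - i Y_j) / 2 on an n-qubit register,
    # returned as {Pauli string: coefficient}. The leading Z's carry the
    # fermionic parity; "I" pads the untouched qubits.
    parity, pad = "Z" * j, "I" * (n - j - 1)
    return {parity + "X" + pad: 0.5, parity + "Y" + pad: -0.5j}

terms = jordan_wigner_creation(2, 4)
print(sorted(terms))  # ['ZZXI', 'ZZYI']
```

The growing Z-chain is what makes the Jordan-Wigner encoding non-local, motivating alternatives such as the Bravyi-Kitaev mapping cited in the text.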
SCMs with deterministic relations have been considered in a few works. In {{cite:f3b9a16310ddc963faa326179446c0ad314c6539}}, {{cite:84c2aaa69c6be721f3916039688aa4140772c9a3}}, a D-separation condition (different from the classical d-separation, with a small letter d, described in {{cite:557bafd93d723bad5ad357b65bf97916027f8de9}}) is proposed for graphically determining conditional independencies when deterministic relations are allowed. Yet, it remains unclear whether the D-separation condition can capture all conditional independencies induced from the distribution. Further, a few approaches have been proposed for causal discovery. When the system only consists of two variables with one variable deterministically causing the other, {{cite:93f5ed03d2f8885ea17817da7c37434133b98df4}} showed that the correct causal direction can be learnt if there is no correlation between the density of the cause variable and the slope of the deterministic function (w.r.t. a reference measure). Their analysis however does not hold for linear relations. In a system where both deterministic and non-deterministic relations are present, {{cite:5c3950cd8bd5fb52038a88d136e4104c25281a7c}} considered recovering the reduced model where all deterministic variables are removed. {{cite:951d77e6d6eabffc89171ffbab499b67a226f4c2}} adapted the conventional constraint-based IC algorithm {{cite:557bafd93d723bad5ad357b65bf97916027f8de9}} and added new rules in the independence tests to detect deterministic relations. {{cite:d59b80792a8f01920f16faa984f3b31d951ece54}} combined constraint-based methods with greedy search {{cite:158efeed4550d0b924b9b1099b82610b60e05fe8}}, where the deterministic relation is detected by calculating the conditional entropy among variables. {{cite:35d8fbee860e98848d75b02720bc469323e67f4e}} introduced information equivalence, where two variables are information equivalent (w.r.t.
a third variable) if knowing one variable is equivalent to knowing the other, from the viewpoint of the third variable. Information equivalence is thus a generalization of deterministic relations. {{cite:35d8fbee860e98848d75b02720bc469323e67f4e}} adapted the PC algorithm with additional tests to detect information equivalence among variables. The aforementioned methods all suffer from the same identifiability problem as conventional causal discovery methods, where the underlying causal structure can only be estimated up to certain equivalence classes, and the majority of these methods do not discuss the capability of their algorithm to identify the underlying structure (i.e., the equivalence class of the recovered result). Further, these methods do not consider the presence of latent confounders in the system.
m
aa415971f5196c38ec9f79a671b1ece3
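One of the detection devices mentioned above, flagging a deterministic relation via vanishing conditional entropy, is easy to illustrate on discrete data. The estimator and the toy examples below are ours, not the cited algorithm:

```python
from collections import defaultdict
import math

def conditional_entropy(xs, ys):
    # Empirical H(Y | X) in bits; it is zero exactly when Y is a
    # deterministic function of X in the sample.
    joint, marg = defaultdict(int), defaultdict(int)
    for x, y in zip(xs, ys):
        joint[(x, y)] += 1
        marg[x] += 1
    n = len(xs)
    return sum(c / n * math.log2(marg[x] / c) for (x, y), c in joint.items())

xs = [0, 0, 1, 1, 2, 2]
print(round(conditional_entropy(xs, [x % 2 for x in xs]), 6))  # 0.0, deterministic
print(round(conditional_entropy(xs, [0, 1, 0, 1, 0, 1]), 6))   # 1.0, fair coin given x
```

In a finite sample the estimate is exactly zero only for an exact functional relation, so practical tests threshold it rather than compare to zero.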
for {{formula:de252ac8-b14d-457c-983b-bee7c65b473f}} . Here, {{formula:754e365a-8127-4905-a13c-a2870dd0d97a}} is the membrane potential and {{formula:1cd33025-c881-4311-a22a-214e025b74e7}} is the membrane recovery variable. When {{formula:45fede74-72a2-4a0c-bd30-14f8c62cd74c}} reaches its apex ({{formula:f518c7a9-d44b-43ca-879e-37b9aaf17d46}} mV), {{formula:f6cb6133-af79-485a-9c1f-9d06755ba8f5}} and {{formula:dc0e6262-f2fb-4ee4-a7ce-65c0698dd0b4}} are reset according to Eq.(3). {{formula:b82bffee-5a8f-4c96-b016-d587ff5dcb4c}} , {{formula:78c0e339-755a-4319-b50d-564ae586af15}} , {{formula:9d8f9bc0-902a-4fd3-97db-d2d812e68399}} and {{formula:b915d106-bb24-49f2-bcd3-43dfe219e635}} are adjustable parameters that determine the pattern of firing and are different for excitatory and inhibitory neurons (see Table 1) {{cite:b28d44658ac1cf57eb31330adb40fb3c0edb3bb6}}. The population density of inhibitory neurons is set to be {{formula:6b3ee9d2-5ad0-4279-b75e-9e3ca9587e5c}} . {{table:76091a6e-3b06-4109-b959-6de57bd67078}}
m
1322006d44b4da3c250941ed8dcd4e87
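The dynamics quoted above are the Izhikevich model. A minimal sketch with the standard regular-spiking parameter set from Izhikevich's 2003 paper (the constant input current, step size, and simulation length are arbitrary choices for the example):

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, T=200.0, dt=0.5):
    # v is the membrane potential (mV), u the recovery variable; when v
    # reaches its apex (30 mV) a spike is recorded and (v, u) are reset.
    v, u, spikes = c, b * c, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # apex reached: spike and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)))  # number of spikes in 200 ms of tonic firing
```

Switching (a, b, c, d) to the inhibitory (fast-spiking) parameter set changes the firing pattern without touching the update equations, which is the model's main appeal.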
We consider first a {{formula:fa83f115-b999-4e59-a40c-e65fe409c500}} agent network whose topology is displayed in Fig. REF . The combination matrix is given by the Metropolis rule {{cite:34e1cc0454ee3f2492cf59a570d968528d116440}}, {{cite:d7b52bc1c73fbb6314acb502334997813c0723d9}}, resulting in a doubly-stochastic and symmetric matrix with {{formula:0f11c700-9969-4b7e-b945-69a1c0da3bcc}} . {{figure:ec4271fc-01db-4fc4-af0b-5265afe44ffd}}
r
c745dfa14ba58f0ab99a16862381ce48
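The Metropolis combination rule referenced above can be computed directly from the graph's neighborhoods. A minimal sketch, assuming the common convention in which each agent's neighborhood size counts the agent itself; the 4-agent path graph is illustrative, not the paper's topology:

```python
import numpy as np

def metropolis_weights(neighbors):
    # neighbors[k] = set of agents linked to k (excluding k itself).
    # Off-diagonal: a_lk = 1 / max(n_k, n_l) with n_k = deg(k) + 1;
    # diagonal: whatever is left so each column sums to one.
    n = len(neighbors)
    A = np.zeros((n, n))
    for k in range(n):
        for l in neighbors[k]:
            A[l, k] = 1.0 / max(len(neighbors[k]) + 1, len(neighbors[l]) + 1)
        A[k, k] = 1.0 - A[:, k].sum()
    return A

# Small path graph 0-1-2-3.
A = metropolis_weights([{1}, {0, 2}, {1, 3}, {2}])
print(np.allclose(A, A.T), np.allclose(A.sum(axis=0), 1.0))  # True True
```

Symmetry plus column-stochasticity makes the matrix doubly stochastic, which is exactly the property the excerpt attributes to the Metropolis rule.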
Problem setting and notation. The paper tackles the problem of UDA for action recognition. Given a source dataset {{formula:13ab414c-365a-4662-82e4-4a05db59b60e}} of videos and associated annotations, and an unlabelled target dataset {{formula:bd3bcf80-cf06-4e0b-bcb0-a0d0d11aadb8}} , where {{formula:b3aaea71-cf3e-4074-a6c8-dfce4fd8822e}} and {{formula:64e7d2ff-e7a2-43d5-abf6-fdeca175c6ad}} , {{formula:359194cf-ef7a-41a8-97a9-db4d1c753c3c}} ({{formula:25e05b2f-8f93-41f5-87da-f1741d63eb83}} denotes the number of action categories), we aim to learn a function {{formula:05f0a8e9-8137-4ff5-a3fc-7f4fce447211}} with parameters {{formula:6c82ea23-ef30-4886-8bb6-1cdcc296f8e0}} that maps an input video {{formula:4f145895-ce42-4cf9-bc09-5e814f034f4a}} to a class label {{formula:ccf93ab8-9ea4-48cd-9358-3490c7db72ca}} and performs well on target data. Note that this is not a trivial task, since source and target data are sampled from two different distributions, {{formula:cda9086b-40ae-49ef-87c5-a485c908690b}} . To tackle this problem, we propose a novel approach called UDAVT that combines two main components: a spatio-temporal transformer architecture and a novel distribution alignment scheme derived from the IB principle {{cite:62492b302f3b39782d54729c906ed32fa024315d}}. {{figure:55d504b1-210f-469a-a0a9-08b287c44aa8}}
m
6caed53b906d63ed0fada34e295eeb76
In quantum mechanics in the Heisenberg picture, the state vector {{formula:ac5eb4cb-9ddb-4632-8b2b-b9c5c4deaebb}} does not change with time, while an observable {{formula:2a31c757-f3d8-41a2-824c-34b11ddc5a98}} is a physical quantity that can be measured and that satisfies the Heisenberg equation of motion {{cite:aa34b34220b2d0b0ae5996db42998246e44a996c}}, {{cite:62985f718e3d6c20abe6377ef89b67bf38b31dbe}}, {{cite:25d2e2bd536797ffc1de013620222fb7085d6cd2}}, {{cite:edc4a960501cf923718b4fa9d2999bfa967f3efc}}, {{cite:e8776c41fd0e27592ad6636b492a354751940a61}}, {{cite:dd63c51820c17ed04cdc2340f29dd680d36be886}}, {{cite:0a35e2d08833a4723cd61e59d14e49fcd8c3d615}}. {{formula:152cfecb-e6db-4b84-bebe-fbb29dc30e7b}}
m
84bbc63403a181f05542ff8ae607e4fa
Another constraint for this formulation is the cluster decomposition principle {{cite:f59ca79c56956200790b56fe22f33bf5340b1a07}}, {{cite:0f111a5e91965a6af26079155a25bd49bc746a1e}}, {{cite:5240cf4a032976b9e575c1218ef114cf10a2805b}}, which may be stated as "the outcome of a scattering event, in which two or several particles come in close contact with each other is unaffected by the presence of any number of particles very far away, or differently stated, that several scattering events separated from each other by large distances are independent of each other" {{cite:f59ca79c56956200790b56fe22f33bf5340b1a07}}. In fact, once the particles participating in the interactions can be expressed in terms of field expansions as in (REF ) and then quantized, the cluster decomposition is guaranteed {{cite:5240cf4a032976b9e575c1218ef114cf10a2805b}}. However such an expansion for individual particles may be inapplicable in some cases because the particles may tend to act collectively due to long-range correlations or due to long-range interactions. If the de Broglie wave-lengths of particles in a system of particles overlap then there may be long-range order (e.g. a Bose-Einstein condensation {{cite:942a5243446829b556509b2985d931572c14e077}}) so that the system of particles tends to move as a whole rather than acting independently. In that case an expansion of the form of (REF ) can be done only for the particles that are not in the condensate. Therefore we should impose the condition {{formula:eca18edb-6810-465d-bc7c-05f90bd01bea}}
m
9e3a07a27ee02bcbcfe17e8a1be1a6b7
{{cite:076093606853b9e3bbb70076fdbdc18bf695ce25}}. Consider now the random variables {{formula:44339178-7536-4113-86db-142a89328e72}} and {{formula:53aa5d32-d702-4d60-8746-79395ffef25b}} ; then, applying the triangle inequality, for each {{formula:dcf3db34-d46a-497a-9331-03e61d82a6ca}} , {{formula:bb4b6802-4b62-4400-853a-0139c6f223d3}}
r
20b0b37987eca62d26926a78dce15381
{{formula:dd198109-75ae-4cbf-82eb-289362ef47d5}} , and where {{formula:f5b83311-7ca1-407e-82ef-8358245a9427}} refers to weights between neurons in the hidden layers, and {{formula:d0625d86-61b1-4834-8ed6-0d19d8c781d9}} to weight between hidden and output neurons. The discontinuous spiking function enters the gradient as the term {{formula:b19707e4-a5cd-4fd8-8dea-2fabab80aae6}} , and here we use the differentiable surrogate gradients {{cite:1fccea00caaa4568f5871fc6844dbe11ce775a29}}.
m
fdc2a516735574b230063fc561140767
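The surrogate-gradient idea above is a smooth stand-in for the derivative of the hard spike nonlinearity during backpropagation. A minimal numpy sketch; the fast-sigmoid surrogate and the slope `beta` are one common choice, not necessarily the one used in the excerpt:

```python
import numpy as np

def spike(v, threshold=1.0):
    # Forward pass: hard Heaviside spiking nonlinearity.
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    # Backward pass stand-in: derivative of the fast sigmoid,
    # 1 / (beta * |v - threshold| + 1)^2, replacing the ill-defined
    # derivative of the Heaviside step.
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

v = np.array([0.2, 0.99, 1.0, 1.5])
print(spike(v))            # [0. 0. 1. 1.]
print(surrogate_grad(v))   # peaks at v == threshold
```

In an autograd framework the two functions would be bound together as a custom operation: `spike` in the forward pass, `surrogate_grad` scaling the incoming gradient in the backward pass.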
Because {{formula:22d31562-e9da-4a69-9f28-964952ebc316}} depends only on the band structure, we carried out the first-principles calculations to have a better understanding of the large {{formula:a52d1001-1032-4e78-83a9-78fd23f151b8}} in LiMn{{formula:0851b203-cdaf-44ad-8e18-2bae1ebf8a61}} Sn{{formula:bfee6710-67f7-4f8e-b04a-a5131453aeec}} . Figure REF (a) shows the band structure without SOC in the vicinity of {{formula:4ce76bc6-2a33-4de2-805b-01d6ac45b8cc}} . Both spin-up and spin-down bands pass through the {{formula:57ffd31e-c310-4f50-bc4a-727d7ee3e273}} , with several crossings close to the {{formula:a0a11356-d5f7-4d56-86e8-e26b31c49ace}} . If the SOC is taken into account, the band structure is almost unaltered due to the light elements, with only some small gaps opened at the band crossings. This band structure is consistent with the two types of carriers revealed by the Hall measurements. It is worth pointing out that there is a spin-polarized linear crossing slightly above the {{formula:51319d06-4d0c-4cab-b64d-5372e960e0da}} at the K point, and it is gapped when the SOC is considered. This is reminiscent of the Chern gapped Dirac fermions found in TbMn{{formula:e0f293f1-0b86-4801-ade5-261ce8868e4e}} Sn{{formula:311ea4b4-bb3c-4b50-9d47-3fcf5fa1a564}} {{cite:4d1abfd9b87615c9b53d133693e1b832a766ed6a}}. Figure REF (c) shows the spin-resolved density of states near the {{formula:a731824e-6684-4f33-8942-c0637b1f6e19}} . The bands near the {{formula:65ea09f3-600c-469e-840c-9d0c56e88e06}} are mainly hybridized from the Mn-{{formula:ff1aee59-4fa2-4ed8-a29a-93be0eb79883}} and Sn-{{formula:2edbc1f0-1671-40b7-8438-6dbc57815306}} orbitals. The DOS for both spin directions has relatively small values at the {{formula:dac146d6-1725-4206-bde3-7cc563cf771d}} , consistent with the several band crossings near the {{formula:dc371481-c05e-489c-ba58-005d2dcc2cba}} revealed by the band structure.
The calculated magnetic moment based on the spin-resolved DOS is 2.4 {{formula:41228628-d279-43ff-a62b-e06cb27aff4c}} /Mn, the same as the value obtained from the magnetization curves. The energy dependence of the intrinsic AHC {{formula:88073798-a72a-42a6-a5d7-cd9daa1c8c13}} can be calculated from the Berry curvature based on the band calculations. As shown in Fig. REF (b), there is a peak near the {{formula:660098f6-8d47-4242-b78e-492d09010658}} with the value at {{formula:2bc854ab-7669-4f1e-bf84-e83ee6fb7a97}} of 300 {{formula:6784cc3a-7702-4146-9ceb-45ab32a7a42b}} cm{{formula:54a82328-e18a-49ea-8cfe-1a995d0add9a}} , which is close to the experimental value. In time-reversal-symmetry-broken systems, a band crossing near the {{formula:150dde16-bbae-42d2-8be2-01afeea9ad3f}} with a SOC-induced gap is believed to contribute a large Berry curvature {{cite:3acab90864c5c681d85eb1172ec5970ea33f90c4}}. Thus, the peak of {{formula:c1061c39-2d62-482e-8b79-c50347802c60}} near {{formula:b0ed9c6d-d03a-4e1b-9d58-b34f65c7157b}} can be understood as a consequence of the close positions of the band crossings to the {{formula:b235e90d-8709-479d-b914-f66fad691212}} . We can also notice a higher peak of {{formula:8fe392d2-e2d3-41cd-a483-d7da8a94fb46}} centered at about 0.15 eV, which is followed by a deep valley around 0.2 eV. If we assume that the {{formula:5cd81b99-090d-478c-a39e-f808fbf34ebe}} Mn{{formula:02a4eed5-4c4e-4ae1-adfd-11f9d3ff601b}} Sn{{formula:f436e4a0-a9e0-4760-a4cc-2c1b99b67099}} compounds have a similar band structure, the {{formula:305f3aee-45ea-4b8d-8a4e-2c7b0edd9b56}} can be lifted by about 0.23 eV by the 2 extra valence electrons of {{formula:effeb5c4-27d1-4f80-acff-5e2d0c640224}} . This results in the smaller {{formula:a0da8b6a-2597-46bf-8d11-1f7f3b4adada}} in {{formula:7efc61bd-c4a2-4f67-a6b8-522cac694d8b}} Mn{{formula:100637a4-c79d-480a-af8b-0dc6e4dc4da6}} Sn{{formula:5219b3a5-5ef2-4329-9feb-f5d9eb98d825}} , as shown by the upper dashed line in Fig. REF (b).
All the calculated results agree well with the experimental observations, supporting the reliability of our calculations. Based on the calculations, a much larger AHC, up to 0.73 {{formula:0aa41343-3d55-4a57-a091-642e3c12ea64}} per Mn layer, can be expected if ferromagnetic compounds with {{formula:fc7dfca0-2e02-49c0-aacc-0e7b0cf5f7a4}} located at about 0.15 eV can be synthesized, such as (Li, {{formula:e19165cd-c89b-417b-9a03-002429bc9563}} )Mn{{formula:b01cc388-9b7f-4b6a-98ed-620ca159d3f2}} Sn{{formula:ca7d7a8c-b8f4-4ce9-8744-35cf97bd43d7}} and (Mg, {{formula:9c96f98e-07c4-4f0c-aea1-b261dd529342}} )Mn{{formula:c460c38d-dede-4544-ae4b-979ca5d8a0d3}} Sn{{formula:97c45a6c-6d76-4567-878b-3081ce11dbaa}} .
In contrast to learning a condensed representation of the visual input, our work proposes an MDP formulation that exploits the semantics of representations in pre-trained GANs to generate high-fidelity images for facial aging. For this task, we bin the continuous range of plausible age variations into consecutive buckets. Inspired by goal-reaching RL problems, we learn a conditional policy that samples face images belonging to different age bins, conditioned on the base state (the latent vector that defines the base identity to be generated) and the required age alteration, i.e., making the face older (ascending) or younger (descending). In our MDP formulation, the states and the transition function are defined over the latent space of the ProgressiveGAN {{cite:90f92ecb930ba1c0352e8170411d616491787eb2}}.
Finally, we study class 4 CA, which involve a mixture of order and randomness, with localized structures that move and interact in complicated ways {{cite:4b3e9856e42bae833dc0a7b490019d5d266f637f}}. A well-studied example is ECA Rule 54, defined in Fig. REF (a), which can be interpreted as a discrete analogue of excitations in an active nonlinear medium with mutual inhibition {{cite:8fbff764f2bac993d300125c89bf9bd74d0ee3ac}}. In this case, mobile self-localizations called gliders appear on a stable periodic background called the ether. Gliders behave like solitons in many regards {{cite:bd6cc2a74d0b69ad8f9b984570cd284707d5d28e}}. Optical solitons have become a rich and diverse field of study {{cite:68610e92aad5bc384aa3a2140479936db01cc784}}; however, they usually arise from a balance between nonlinear and linear dispersive effects. In contrast, we have demonstrated optical soliton-like behaviour in a synthetic temporal lattice governed only by simple binary rules. Despite its simplicity, our system captures physically relevant features, since a reversible extension of ECA Rule 54 has produced insightful results in non-equilibrium statistical mechanics and generalized hydrodynamics {{cite:1e4c38608eb9d4a87390732fd8ae1754d525db36}}. By programming ECA Rule 54 into our photonic simulator, we experimentally demonstrated a glider collision, shown in Fig. REF (b), whereby gliders emerge after the collision with the same shape and velocity but with a phase shift, which is characteristic of soliton collisions {{cite:3095e9f28b42d5e9a6ce19eaa1cd4a2718821ca0}}. Such glider collisions can be used to construct logic gates {{cite:532b8feec65bc6fcc7146228257e9a3bee2cc401}} and Universal Turing Machines {{cite:c7ee88652791888a07eda88017f1a6fa2ed26dd6}} for unconventional computing. Furthermore, we also observed a glider gun, shown in Fig. REF (c), in which a higher-order localization produces lower-order gliders, akin to the process of soliton fission {{cite:5ebc74780493624598f17975b8559a81c715b053}}. Conversely, a glider black hole, shown in Fig. REF (d), resembles the process of soliton fusion. We have therefore demonstrated a diverse range of glider and soliton interactions in our simple photonic computational platform, which can help unlock new methods of optical information processing. {{figure:340b96fe-940a-4a08-9400-167f99ab3973}}{{figure:5ba3d76d-ad19-4d11-a48b-75e71068b0f4}}
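An elementary CA update is simple enough to state in a few lines of code. The sketch below is a generic rule-table implementation (with periodic boundaries, an assumption on our part), instantiated for Rule 54:

```python
def eca_step(cells, rule=54):
    """One synchronous update of an elementary CA with periodic boundaries.

    Cell i's next state is the bit of `rule` indexed by the 3-bit
    neighborhood (left, center, right) read as a binary number.
    """
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single excited cell under Rule 54 spreads to both neighbors in one step.
row = eca_step([0, 0, 1, 0, 0])  # -> [0, 1, 1, 1, 0]
```

Iterating `eca_step` from a suitable initial row reproduces the ether, gliders, and their collisions discussed above.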
Recently, Chatterjee (2021) introduced a novel correlation coefficient based on simple rank statistics {{cite:3edf24a1227fd2783d442e4ea0e1ef1f515122af}}. His test has quickly attracted much attention, as it is distribution-free, consistent against all fixed alternatives, and asymptotically normal under independence. We begin with a brief review of this test. Suppose the {{formula:19e5ee85-13d0-4b45-8418-d840c0be8424}} 's and {{formula:2389decd-bbf3-478a-9b23-5ae4760ace57}} 's have no ties; then there is a unique way to rearrange the data (with respect to {{formula:ce00399b-db49-4509-bfad-ea3ea0a401a6}} ) as {{formula:a528fba6-5fcb-48c9-b2b5-865af3dcccd5}} , where {{formula:8decdd85-6531-48cd-a05e-fb3d588b2129}} and {{formula:aa590ea4-516e-471d-99ff-9dedec7f8d1c}} denote the concomitants. Let {{formula:92abe77e-f57c-41f8-b781-f51f84b0aa79}} be the rank of {{formula:947a56eb-fc98-44dc-b8d9-27b81bbbc14a}} , i.e., {{formula:83379709-abc9-4d87-a9cc-26f01d48192e}} . Chatterjee defined the following correlation coefficient {{formula:3f68ea89-aa83-4efd-953b-92969980fd3d}}
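In the no-ties case the coefficient is straightforward to compute from ranks of concomitants; the following is a minimal illustrative sketch (our own, not Chatterjee's code), using the standard formula with the factor 3/(n^2 - 1):

```python
def chatterjee_xi(x, y):
    """Chatterjee's rank correlation coefficient, assuming no ties in x or y."""
    n = len(x)
    # Rearrange the pairs by increasing x; the y values become the concomitants.
    y_concomitant = [yi for _, yi in sorted(zip(x, y))]
    # r_i = rank of the i-th concomitant among all concomitants.
    ranks = [sum(v <= yi for v in y_concomitant) for yi in y_concomitant]
    # xi_n = 1 - 3 * sum_i |r_{i+1} - r_i| / (n^2 - 1)
    return 1 - 3 * sum(abs(ranks[i + 1] - ranks[i]) for i in range(n - 1)) / (n**2 - 1)
```

For a strictly monotone relationship the consecutive rank differences are all 1, giving xi = 1 - 3/(n + 1), which tends to 1 as n grows.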
We derive the log-barrier algorithm presented in sec:algorithms by first approximating player 1's subproblem as an unconstrained problem using interior-point methods {{cite:18774e28901377ef432bcddc431669f86f50f346}}. {{formula:4af317e5-ee28-4a0a-93a1-0c895c2cb42d}}
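The generic idea can be illustrated as follows (a sketch of the standard log-barrier construction, not the paper's specific algorithm): inequality constraints g_i(x) <= 0 are folded into the objective through a barrier term with weight mu, and the barrier problem approaches the constrained one as mu shrinks.

```python
import math

def log_barrier_objective(f, constraints, mu):
    """Return the unconstrained barrier objective f(x) - mu * sum(log(-g_i(x)))."""
    def phi(x):
        return f(x) - mu * sum(math.log(-g(x)) for g in constraints)
    return phi

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
phi = log_barrier_objective(lambda x: x * x, [lambda x: 1.0 - x], mu=0.01)
# Crude grid search over the strictly feasible region.
xs = [1.0 + k * 1e-4 for k in range(1, 20000)]
x_star = min(xs, key=phi)  # close to the constrained minimizer x = 1
```

As mu is decreased toward zero the barrier minimizer converges to the constrained optimum, which is the basis of path-following interior-point methods.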
In this paper we add two attention modules that incorporate both the global classification score and the correlation among spatial locations into the segmentation predictions. As the first attention module, we propose a channel attention on the segmented output. As shown in Fig. REF , it multiplies each output channel by an attention weight, where the weight is the probability assigned by the classification branch of the network. The weighted channel is then added back to the resulting features, similar to self-attention approaches {{cite:038859dfe98d3b0246aa4c21fa0c6d32948d235d}}. Let {{formula:044588c0-1648-4527-8f6d-f73fa62db11a}} be the RGB image, and let {{formula:21af0dd4-2b28-4844-bf05-97daa218343a}} and {{formula:ef750ac0-5140-45d4-b1fc-5f2dffaad684}} be the classification probability and segmentation features extracted by the CNN, respectively. {{formula:00fb4f87-5fcb-4f0c-b170-47d8ba4d28fa}} could be the output of any segmentation network; in this paper, we use the DeepLab v3+ encoder and decoder {{cite:f170ffbc7e0bea9a1314d50064851f67b41aacf7}} to extract the features. {{formula:c8550d27-40fb-420d-82fb-024ee2e19259}} is computed by applying the sigmoid function to the classification scores obtained by the classification branch (see Fig. REF ). Following {{cite:a3bebc04219ef457924bf23c56e227d1afaa348b}}, we compute the channel attention model as {{formula:ff67e43a-3d65-4c1b-8a54-8b28cb65a707}}
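The channel-attention step described above can be sketched as follows (a NumPy illustration under assumed tensor shapes; the exact implementation details are the paper's):

```python
import numpy as np

def channel_attention(seg_feats, class_scores):
    """Weight each segmentation channel by its class probability, then add it back.

    seg_feats: (C, H, W) per-class segmentation features.
    class_scores: (C,) raw classification logits; sigmoid gives the weights.
    """
    probs = 1.0 / (1.0 + np.exp(-class_scores))   # classification probabilities
    weighted = seg_feats * probs[:, None, None]   # channel-wise attention weighting
    return seg_feats + weighted                   # residual add, self-attention style

feats = np.ones((3, 4, 4))
out = channel_attention(feats, np.array([10.0, 0.0, -10.0]))
# Channels with high classification confidence are roughly doubled,
# while low-confidence channels are left nearly unchanged.
```

The residual addition keeps the original features available even when the classification branch assigns a class probability near zero.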
There are several limitations to this study that warrant further discussion. First, we used only 20 volumes in total to test the segmentation performance for each device. Second, the study was performed using only spectral-domain OCT devices, not swept-source. Third, although the enhancer simultaneously addressed multiple issues affecting image quality, we were unable to quantify the effect of each; we were also unable to quantify the extent to which the ‘DL-enhanced’ B-scans were harmonized. Fourth, we observed slight differences in LC curvature and LC thickness when the LC was segmented using ONH-Net trained on different devices (Figure REF , Figure REF , Figure REF ; 2nd and 4th rows). Given the significance of LC morphology in glaucoma {{cite:5a40e695d6dbbf5bc19219e6a54669b3758c267b}}, this subjectivity could affect glaucoma diagnosis; this has yet to be tested. Further, in a few B-scans (Figure REF , Figure REF , Figure REF ; 6th column), we observed that the GCC segmentations were thicker when ONH-Net was trained on volumes from the RTVue device. These variabilities might limit truly multi-device glaucoma management. We are currently exploring the use of advanced DL concepts such as semi-supervised learning {{cite:4efa7aa970eb4053b0db4761121e861388d5b8fd}} to address these issues, which may have occurred as a result of limited training data.
The goal of this paper is to establish analogous Liouville correspondences between the flows and Hamiltonian conservation laws of the Novikov and SK hierarchies, as well as the DP and KK hierarchies. Furthermore, an underlying correspondence between the Novikov equation (REF ) and the DP equation (REF ) is also constructed. Our motivations are three-fold. First, it was shown that the Novikov equation is related to the first negative flow of the SK hierarchy {{cite:7be1da76954f8a38c0213f56297b44aef670bb40}}, while the DP equation is related to the first negative flow of the KK hierarchy {{cite:f9c50736e3f738054d9bddf11b483aca59ca8b25}}. Second, the CH and mCH hierarchies are related, respectively, to the KdV and mKdV hierarchies through Liouville transformations relating their isospectral problems. Third, the SK equation is related to the KK equation by a Miura transformation {{cite:1ad7cca6425babc078a38ae81d7bbcb35f2eba27}}, and a transformation found in {{cite:10594815463945a8cd5ba69df657cf999a5672e5}} maps the mCH equation to the CH equation.
We first compare with the trained-from-scratch methods in Table REF . We show the results on the test set with cross-entropy training and then SCST {{cite:8c96c78e74e1fd71f5e35332c7397869fea24b53}} RL fine-tuning. With complementary information provided by the retrieved text descriptions and image conditioning, our method improves the baseline model {{formula:fdac08d5-6cbe-4964-90ff-fd47176c305c}} by {{formula:d2a3b709-73ed-43b2-aecc-7cd647895b2c}} in CIDEr and {{formula:78050c2d-060c-41d1-bb34-16f3f3ebb15e}} in BLEU-4, and compares favorably with all previous trained-from-scratch methods across all metrics.
In the literature, significant attention has been devoted to UAV measurement campaigns covering several modes and scenarios. Considering transmission modes akin to terrestrial ones, air-to-ground and air-to-air are the two prominent classes {{cite:043800f8a16abd3e15ad55e201d849886af4a61a}}. Of these two, the air-to-air class needs further investigation owing to the aforementioned reasons relevant to UAV modes of operation {{cite:1d0c2bfc614b3c4747280229948df437e6ad9141}}, {{cite:c87aa6e57c3dce21d00486c2838d0bdf7b0d6165}}. The air-to-ground class, on the other hand, can be considered a transitional class, since it contains both terrestrial elements and the UAV simultaneously. The air-to-ground class consists of LOS, NLOS, and OLOS cases {{cite:4e837c2e7e842be37776535412585e20966ddbc2}}. Of course, a comprehensive air-to-ground model also requires the probabilistic transitional states among the LOS, NLOS, and OLOS cases to be defined {{cite:eb7831eb8daa5455a83f7a6395d320a63093b13a}}, {{cite:bc1c569d34f9e3a3bb5d6d28006d1a859ccd285a}}, {{cite:df0e354033645734174bb264f16aa813994752c9}}. Further analysis is clearly required to obtain an extended model that takes shadowing into account as an additional parameter {{cite:c4256479fad630e9b5638700ef623ebb8c71f766}}, {{cite:76e4c9fbd8ba4419c7ba6bce0d6e4b4fb4143da3}}, {{cite:f2d1391f54b4f2c889dfe671fbafe13e7271a70d}}. A very detailed collection of the studies present in the literature can be found in {{cite:fe7f2b8f923115c2dc29b07aaa675d18f1ed0922}}. Besides statistical models, which depend heavily on theoretical derivations {{cite:69379a16c82a7653dfbb8c2c675f8cdadcd1f050}}, there are exact {{cite:f7005236ff721b8f92a01f204a6c29c6f422d777}} and numerical approaches to determining the propagation characteristics as well. In particular, the ray-tracing method, run in downtown scenarios extended with building heights, is employed very frequently {{cite:51751d065f6c9de115e906a0ad90e474b4c694fa}}, {{cite:c87aa6e57c3dce21d00486c2838d0bdf7b0d6165}}.
The drawback of such a formulation is that it is unable to detect implicit targets in situations where the same aspect-category-sentiment pair has explicit targets associated with it. Take this review for example: "Always crowded, but they are good at seating you promptly and have quick service." [{service, SERVICE#GENERAL, positive}, {NULL, SERVICE#GENERAL, positive}] are the actual labels for the targets, categories, and polarities. {{cite:df0f31fec096c5409ba04ddbfaf13fb3f7d8945a}}, on the other hand, deleted the second opinion owing to redundancy, while {{cite:7bd020e20a635ad91e73880273be37a5e4d1307b}} removed it due to target conflicts.
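The information loss is easy to see in code (a small illustration of the example above; `None` stands for the NULL target):

```python
review = ("Always crowded, but they are good at seating you promptly "
          "and have quick service.")
# Gold (target, category, polarity) triples: one explicit and one implicit target.
triples = [
    ("service", "SERVICE#GENERAL", "positive"),
    (None, "SERVICE#GENERAL", "positive"),
]
# A formulation that only keeps (category, polarity) pairs collapses both
# opinions into one, so the implicit (NULL-target) opinion cannot be recovered.
pairs = {(category, polarity) for _, category, polarity in triples}
```

Here `len(pairs) == 1` even though the review carries two distinct opinions, which is exactly the redundancy/conflict that led prior work to drop the implicit annotation.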
The ratio of the superconducting gap to {{formula:d7b74c24-51b3-44cc-83c0-8d62a6661845}} was estimated to be (2{{formula:9ad6b8c0-025f-40f0-b2a4-8cf27b6733ba}} /{{formula:51716b61-6f80-4faa-be02-8925f782d7e0}} ){{formula:a95ffab9-b153-4910-86f6-8935f69401af}}  4.2, which is consistent with strong-coupling-limit BCS superconductors {{cite:294fad138b11a16a98ec9506b71c101a1c0af0ae}}. Interestingly, a similar ratio can also be expected within a Bose-Einstein condensation (BEC)-like picture. Thus, from the ratio 2{{formula:fa211855-a3c8-4844-b3f0-1e0cd55238a5}} /{{formula:08105a64-7b84-4e7f-9a4b-672e7c6aa2d0}} alone, we cannot effectively distinguish between BCS and BEC condensation {{cite:9833605fb92291cdca84a18da0307ef4f1ffbad5}}. In this regard, the {{formula:7ce384dd-cb78-4a04-991c-dc5c0a199c70}} ratio is a crucial parameter for addressing this conjecture. Within the picture of the BEC-to-BCS crossover {{cite:ffe882efe51f8fef387f7d7e92b4d6ce2fc37d95}}, {{cite:17c8e9a5abde985d2032696617d49257235c45d6}}, systems exhibiting small {{formula:f7ca4386-5585-4ac3-b03b-778a707721b3}} are considered to be on the BCS-like side, whereas a large value of {{formula:1747a603-5cc2-4465-9d5d-7aeea6b8623f}} {{formula:ab432f82-f518-43a9-ae15-c79ca092bb0e}} 1–20 and a linear relationship between {{formula:eabbeca9-d148-4f92-91ec-f03bdec7c34f}} and {{formula:2a995b93-0642-447e-a5f4-29db96c3d246}} are expected only on the BEC-like side and are considered a hallmark feature of unconventional superconductivity {{cite:294fad138b11a16a98ec9506b71c101a1c0af0ae}}, {{cite:e993b979877396b061f7b024f7c4df0be4d764bc}}, {{cite:a9922bcc127b17ebfccc7a1a04feae80fca9d0ed}}, {{cite:74efe0a80ed197991a98743032c96ae20836efe3}}. For NbIr{{formula:fd33d541-4924-48d1-a3d9-6560c793f018}} B{{formula:66cc30a3-97d4-46e8-bf9c-fdf95a466de8}} , we obtained the ratio {{formula:96e2ab48-9e5e-435c-8794-4f01f2a49881}} {{formula:517bcf55-92a8-4224-88c5-66a2536010b5}} 2, which is intermediate between the values observed for electron-doped ({{formula:ae46b456-0b84-4eac-8d50-bfba38ed9457}} ) and hole-doped ({{formula:e43303d5-701b-476b-bde6-ff70de75d782}} ) cuprates {{cite:ffe882efe51f8fef387f7d7e92b4d6ce2fc37d95}}, {{cite:17c8e9a5abde985d2032696617d49257235c45d6}}, {{cite:6bee3eec495d0921f5f8dcb96cb0cb6a742edf67}}. This result provides strong evidence for an unconventional pairing mechanism in NbIr{{formula:d1ba68ee-c429-4845-8d08-1bb673f913b4}} B{{formula:7925d116-4b95-4e65-904a-6be416de4b4b}} , which also exhibits a linear temperature dependence of the upper critical field.
It is worth noting that we only quantize the weights and activations (note that we did not yet add {{formula:2b78bb37-df30-4960-be89-792650622e13}} after concatenation) with a {{formula:25bb1063-ace6-4171-8d72-4da6ef12e8ab}} operator, instead of running fully fixed-point inference. Rounding and truncation errors of fully fixed-point arithmetic will lead to some additional error. However, as most of the quantization noise is due to the weights and activations, the simulated quantization generally correlates well with on-device accuracy {{cite:73afde489fd219a42e2def15698edab6f4db6bc5}}, {{cite:80f98738c8147af48b436df40197ba960542cbbf}}, {{cite:c49c9539341fb6734c18860dda237439d9fd0f90}}.
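Such a simulated-quantization (quantize-dequantize) operator typically looks like the following. This is a generic uniform affine scheme sketched for illustration, not necessarily the exact operator used here:

```python
import numpy as np

def fake_quant(x, scale, zero_point, num_bits=8):
    """Simulated quantization: quantize to integers, clamp, dequantize to float.

    Values stay in floating point but carry the rounding and clipping error
    of `num_bits` uniform affine quantization.
    """
    qmin, qmax = 0, 2**num_bits - 1
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale

x = np.array([-0.30, 0.02, 0.49, 1.50])
xq = fake_quant(x, scale=0.1, zero_point=5)  # snapped to the 0.1-spaced grid
```

Values far outside the representable range are clipped to the grid's end points, which is the truncation error referred to in the text.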
Summing over the branching fractions of {{formula:2a7a9f99-1df6-41ed-941f-22f56bd64785}} decays and the other exclusive {{formula:f1838ce1-190f-4ee8-b6db-f7b855f31122}} decays in PDG {{cite:9e491e9e3fa460c4d712519f5f36232c87c8052d}}, we obtain the sums of the branching fractions of all the exclusive {{formula:8cda138e-b941-43a2-b57e-054e79a1b709}} and {{formula:41ab21b6-cd03-4906-bb90-1d23a0125d7d}} to be {{formula:6b433dfe-06e5-409b-af6c-7592d876ea28}} and {{formula:4ad8dd57-342f-4ec0-973b-8fb69193b576}} , respectively. They are consistent with the measured inclusive production {{formula:91e41f49-d8e9-458f-8bfb-f85d736ac808}} and {{formula:7839d61e-43d3-488b-84dc-f2d8d94f6b70}}  {{cite:cb7436f2b0128ae9cb93b6f29c53bfd8fd8a6e79}} within {{formula:bc72229d-79cd-4733-95d9-a435d095054f}} and {{formula:776ac71c-da62-4d54-94ce-740ddc6c2d51}} , respectively. This excludes the possibility of additional exclusive {{formula:9afce36f-eef2-4fd4-ab1e-1f39db8650dc}} decay modes with large branching fractions.
With digitalization penetrating all aspects of life, we are witnessing an explosive growth of data. Data clustering {{cite:e2fb00909067c8763f9e7c71dd6e55a6412fba71}}, i.e. the categorization of data sources into different groups, is one of the most popular approaches to harvesting knowledge from this deluge of data. In countless applications, data clustering has been shown to reveal latent yet meaningful structures. Clustering can be applied to both non-relational data (information about individual data sources) and relational data (information about the relations between data sources). In non-relational data, clustering aims to group data sources based on some measure of similarity. In relational data, clustering - also called community detection - focuses on identifying sets of data sources that are more densely connected within, as compared to between, sets.
In the context of reinforcement learning (RL), a related question is the following: can offline data, resulting from observation, be combined with online data, resulting from experimentation, in order to improve the performance of a learning agent? In the Markov Decision Process (MDP) setting, where the agent observes the entire state of the environment, the answer is straightforward and practical solutions exist, leading to the fast-growing field of offline reinforcement learning {{cite:5c5d110ef708dae3c32280c5092cbace782d60a7}}, {{cite:28cc74c3dfae4f3180fd3da07400e3ade22491ee}}, where large databases of demonstrations can be efficiently leveraged. In the more general Partially Observable MDP (POMDP) setting, however, the question turns out to be much more challenging. A typical example is in the context of medicine, where offline data is collected from physicians who may rely on information absent from their patients' medical records, such as their wealth or lifestyle. Suppose that wealthy patients in general get prescribed specific treatments by their physicians, because they can afford them, while being less at risk of developing severe conditions regardless of their treatment, because they can also afford a healthier lifestyle. This creates a spurious correlation called confounding, and it will cause a naive recommender system to wrongly infer that a treatment has positive health effects. A second example is in the context of autonomous driving, where offline data is collected from human drivers who have a wider field of vision than the camera on which the robot driver relies. Suppose human drivers push the brakes when they see a person waiting to cross the street, and the person enters the camera's field of vision only when walking in front of the car. Then, again, a naive robot might wrongly infer from its observations that whenever the brakes are pushed, a person appears in front of the car. Suppose now that the robot's objective is to never collide with anyone; it might then deduce that never pulling the brakes is a good strategy. Of course, in both situations, the learning agent will eventually infer the correct causal effects of its actions if it collects enough online data from its own interactions. However, in both situations also, performing many interventions for the sole purpose of seeing what happens is not really affordable, while collecting offline data by observing the behaviour of human agents is much more realistic.
On the other hand, epistemic or model uncertainty characterizes uncertainty in the model parameters. Examples are a lack of knowledge about, e.g., out-of-distribution document pairs, or the described issue of re-sampling heterogeneous same-author pairs. This uncertainty can, in principle, be explained away given enough training pairs. One way to capture epistemic uncertainty is to extend our model to an ensemble. We expect all models to behave similarly for known authors or topics, but the predictions may be widely dispersed for pairs under covariate shift {{cite:6ace002e5073e1d329908bb5a26eb41694b65232}}. We will discuss our approaches for capturing these uncertainties and defining non-responses in the PAN 2021 submission paper.
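The ensemble-based view can be sketched as follows (our illustration; the member predictions below are hypothetical numbers, not results from the described system):

```python
import numpy as np

def ensemble_epistemic(member_probs):
    """Mean prediction and epistemic spread across ensemble members.

    member_probs: (M, N) array of same-author probabilities from each of
    M ensemble members for N document pairs.
    """
    mean = member_probs.mean(axis=0)
    spread = member_probs.std(axis=0)  # high spread -> candidate non-response
    return mean, spread

# Members agree on an in-distribution pair but disagree under covariate shift.
probs = np.array([[0.90, 0.20],
                  [0.88, 0.75],
                  [0.91, 0.40]])
mean, spread = ensemble_epistemic(probs)
```

Pairs whose spread exceeds a chosen threshold could then be mapped to a non-response rather than a hard same-author decision.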
If the linesearch terminates with {{formula:77877cdd-2c60-4d2c-bdc9-f721b16ba01f}} , then clearly {{formula:d6f44f32-0c2a-47bf-acc1-c33cbb05dd6c}} . The optimality given by the dual gap (see {{cite:24a358fc731974047976be5d9c0d363c229a6b28}}) can therefore be stated as {{formula:a797dba8-8d25-4af7-89de-461892342bfe}}
However, most current deep models only learn local convolutional features inherited from a network pre-trained on recognition tasks, such as VGG {{cite:69d8bc50a45211fe9fc24d295f0061ab7f576a57}} and ResNet {{cite:1bc40e3287904cf5911f9bb6b9d4fa7e85803107}}. In real-world scenarios, various distortion types can be applied, either locally or globally, within an image. The conventional scheme for deep IQA models, with a CNN extractor and an MLP score regressor, only considers the final representation of the visual content, and attention among different feature levels is often ignored.
Despite its compelling role, CCFT itself remains poorly understood. On the one hand, celestial conformal field theory behaves in many respects like an ordinary two-dimensional conformal field theory. Many techniques from ordinary CFT can be borrowed and applied to CCFT. For example, one can construct the stress tensor {{cite:26224976df8cbb33af2b8799ee4f0cd8103e6f6e}}, {{cite:90e17aaf6524d1aa5301dde8b5f722c800da459b}}, and it is meaningful to discuss the operator product expansion (OPE) of various operators {{cite:df5cb1ebd051e55a45796cee494837d2453a0536}}, {{cite:4c1847253a0cd22cec5c9062ffb70deadd339f5a}}. On the other hand, CCFT features many peculiarities. In particular, the spectrum of operators and the notions of inner product and conjugation seem to be drastically different from those of conventional CFTs {{cite:88b5b1ebc84dd8505298ea7d835fc1eababaa3cd}}. See {{cite:d4bdb65d1138e9f59ab8f7dea0d39347854d4db0}}, {{cite:e2b4ce95d3112069d7ae3a42abaa57cc8e162a69}} for recent discussions. Nevertheless, the rich symmetries and various self-consistency conditions already impose stringent constraints on CCFT.
In the AdS/CFT correspondence the radial direction can be interpreted as the scale of the boundary field theory. Thus, a radial evolution can be thought of as an RG evolution and has been dubbed “holographic RG” {{cite:94941bd6b47a611da2c6843eaf6be5197ed0bec9}}, {{cite:be4d8b6607c72303a13481824418fbee95b5654a}}, {{cite:0429f33365a2a9d10cbb4adf9fdb8503a8b2994c}}, {{cite:074f9eebca02791ede7fa503017839f7022fc691}}, {{cite:148c44aa89b44c397261af6e59591d0c69c5dad0}}, {{cite:3580b5ab84f766347e3b1ed5199082f6374eace8}}, {{cite:3135e735052fd19fe9ae01729621b2e9050cf6ad}}, {{cite:041d21d22cb6bdce6c02999b32271e9c6cc54436}}, {{cite:7f4e63522682e3de039c350fdf535a551b4b8299}}, {{cite:024fd893f31b04fcda336a42d3291dadc2826d40}}, {{cite:4264466b6769148a716d3bc04aedf83ce3034cde}}, {{cite:8c39276ee8fc7b92a98fa095e73220c175a278c1}}, {{cite:6ee668a55ece67a965783e124c63713d0a8b0a3f}}, {{cite:d9b83a140adb60fdcc03df96cc4524c108b19db2}}. The precise connection between the boundary RG and the holographic RG is, however, still an open question.
Similar to the literature on anomaly detection in energy time series data, multiple works propose forecasting-based unsupervised anomaly detection {{cite:b59b22c5c3b706174e84941761c919e1f39591c2}}, {{cite:3a3f36016af8c9e1e70f7a947101832b9154fb42}}, {{cite:e8d263df313e34f517cf9288154fdec138ac3df3}}. In most cases, the Euclidean distance between the point forecast and the ground truth is used to flag anomalies based on a defined threshold. In {{cite:885ae5a2194c46936c21982fef28a18818afc225}}, the authors propose the use of a CNN as the time series forecaster. According to the authors, the proposed method can be trained on comparatively little training data and without removing anomalies from the training dataset. A novel approach to anomaly detection in time series data is proposed in {{cite:fca3728b3f2b73511f6ab1faed3f84f7b752811a}}. The authors introduce the use of the SR algorithm from saliency detection in computer vision for unsupervised anomaly detection in time series.
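The common forecasting-based scheme reduces to a few lines (a generic sketch; the threshold choice is application-specific and the data below are illustrative):

```python
import numpy as np

def forecast_anomalies(forecast, actual, threshold):
    """Flag time steps whose pointwise forecast error exceeds a fixed threshold."""
    error = np.abs(np.asarray(forecast) - np.asarray(actual))
    return error > threshold

actual = [10.0, 11.0, 10.5, 30.0, 11.2]
forecast = [10.2, 10.8, 10.6, 11.0, 11.0]
flags = forecast_anomalies(forecast, actual, threshold=3.0)  # only the spike is flagged
```

Any point forecaster (e.g. the CNN mentioned above) can be plugged in; the detector only sees its residuals.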
By utilizing micro-batches (i.e. disjoint subsets of each mini-batch), we find that explicitly regularizing either the average micro-batch gradient norm {{cite:e43559883cb89a274678bbcfa1f8e1bb8223c972}}, {{cite:d6d3c4f697af7b95f4769cfcf7314e87c7b7ebbc}} or the Fisher Information Matrix trace {{cite:a479a16008f8939261969c9192b3b411e59e381e}} (equivalent to the average gradient norm when labels are drawn from the predictive distribution, detailed in Section REF ) in the large-batch regime fully recovers small-batch SGD generalization performance, but using Jacobian-based regularization {{cite:a42f1c16d13a9b121ddaae018ff707b133589fc1}} fails to recover small-batch SGD performance (see Figure REF ). We show that generalization performance is strongly correlated with how well the trajectory of the average micro-batch gradient norm during training mimics that of small-batch SGD, but that this condition is not necessary for recovering performance in some scenarios. The poor performance of Jacobian regularization, which enforces either uniform or fully random weighting on each class and example (see Section REF ), highlights that the beneficial aspects of average micro-batch gradient norm or Fisher trace regularization may come from the loss gradient's ability to adaptively weight outputs on a per-example and per-class basis. We demonstrate that the generalization benefits of both successful methods no longer hold when the micro-batch size is closer to the actual batch size. We subsequently show that in this regime the average micro-batch gradient norm behavior of both previously successful methods differs significantly from the small-batch SGD case. We also highlight a high-level issue in modern empirical deep learning research: experimental results that hold on CIFAR10 do not necessarily carry over to other datasets. In particular, we focus on a technique called gradient grafting {{cite:b410c6aee6c650a05e1a347e4138f2ef08cfc1d8}}, which has been shown to improve generalization for adaptive gradient methods. By examining its behavior for plain SGD and GD, we show that gradient grafting recovers small-batch SGD generalization performance on CIFAR10 but fails on CIFAR100, arguing that research in this line should prioritize experiments on a larger and more diverse range of benchmark datasets.
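The regularized quantity itself is easy to state. For a linear least-squares model it can be computed in closed form per micro-batch, as in the following illustrative sketch (our own; the experiments above apply this to deep networks via autodiff, and the data here are synthetic):

```python
import numpy as np

def avg_microbatch_grad_norm(X, y, w, micro_size):
    """Average L2 norm of per-micro-batch gradients of the mean squared error."""
    norms = []
    for start in range(0, len(y), micro_size):
        Xm, ym = X[start:start + micro_size], y[start:start + micro_size]
        grad = 2.0 * Xm.T @ (Xm @ w - ym) / len(ym)  # d/dw of mean((Xm w - ym)^2)
        norms.append(np.linalg.norm(grad))
    return float(np.mean(norms))

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = rng.normal(size=32)
w = np.zeros(4)
# This scalar, scaled by a coefficient, is added to the training loss.
penalty = avg_microbatch_grad_norm(X, y, w, micro_size=8)
```

Note that the penalty vanishes only when every micro-batch gradient is zero, which is a stricter condition than a zero full-batch gradient.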
The proof of Theorem REF relies on the following key lemma to upper bound the second moment of {{formula:c35e6c86-9570-4df5-9532-f1c7c7b61393}} . The idea of bounding {{formula:42be3aa1-125c-4319-b930-8ed9c6cbc157}} appears in much of the relevant literature, such as {{cite:a8e56b607b4cf60523c1ba1514d0c2d4bcb3c0ea}}, {{cite:ef79d1b2815746c9cb766abc35927cc35661a0cc}}, {{cite:0ab3b5ef4680278d5dd6bf9c292c2d616e624e92}}, {{cite:8238bff2b2990ea8eb9c0687dcf82983080c951d}}, under various technical assumptions. Our tradeoff assumption REF appears to be novel, to the best of our knowledge.
From a theoretical point of view, our paper provides the first finite-sample concentration inequalities for a number of different CDF and risk estimators, which are widely applicable to recent distributional reinforcement learning settings that learn the CDF of returns and are capable of optimizing different risk functionals {{cite:bbfc44b854c7af83b07fce2463de81b48eb8abfa}}, {{cite:d880fa69b331bfc4c8f598f0d602c2a7701fda35}}. Of these estimators, the doubly robust estimator is a novel contribution and has not yet been defined or analyzed for distributions in the literature. From a practical standpoint, our method can be used to comprehensively evaluate the behavior of a target policy before deployment using a wide range of risk functionals, a contribution that is especially important in real-world applications.
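As a concrete instance of a risk functional, conditional value-at-risk (CVaR) admits a simple plug-in estimator from return samples. This sketch is illustrative only; the paper's estimators, including the doubly robust one, are more involved:

```python
import numpy as np

def empirical_cvar(returns, alpha):
    """Plug-in CVaR_alpha: mean of the worst alpha-fraction of sampled returns."""
    sorted_returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(sorted_returns))))
    return float(sorted_returns[:k].mean())

returns = [1.0, 2.0, 3.0, 4.0]
cvar_half = empirical_cvar(returns, alpha=0.5)  # mean of the worst half -> 1.5
```

Concentration inequalities of the kind discussed above quantify how fast such plug-in estimates approach the true risk as the number of return samples grows.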
In contrast to other modalities like images and texts, generating instance-level explanations for graphs is non-trivial. It is particularly challenging because individual node embeddings in GNNs aggregate information using the entire graph structure, and explanations can therefore be on different levels (i.e., node attributes, nodes, and edges). While several categories of GNN explanation methods have been proposed, namely gradient-based {{cite:67017ef620d3bd1b4da4a4f6045fda44cb24188f}}, {{cite:8a6e11351ca19b19056defa175c3b982cc9cd045}}, {{cite:cc240316ecb416ee9a6b0b9f6e0634f5e8c4df29}}, perturbation-based {{cite:7cacb8e42571a444610d883b2a50f21625fce60d}}, {{cite:0b74d107d54cc6852879ee04ed9259e5afb52832}}, {{cite:ff4f784f944546f1a34655663dcf68532906c7b7}}, {{cite:b022092e51d6e64c33d3766c16839835668cd7ef}}, {{cite:9357a4938408edf777e672306c6fe5e7796a4cfa}}, and surrogate-based {{cite:39ef5737c42ad28dead013a9951c46eef1191d1a}}, {{cite:ba740b2dd750076552feac3dad243df5da93e3ee}}, their utility is limited to generating post hoc node- and edge-level explanations for a given pre-trained GNN model. Thus, the capability of GNN explainers to improve the predictive performance of a GNN model remains poorly understood, as there is very little work that systematically analyzes the reliability of state-of-the-art GNN explanation methods with respect to model performance {{cite:f5370300638f6a483d62072a1b5aead6c44e4ad5}}.
One of our main results is spectral sparsification of the kernel matrix {{formula:779c2be3-196d-47da-93cd-d7ea77171acc}} interpreted as a weighted graph. In Section REF , we compute a sparse subgraph whose associated matrix closely approximates that of the kernel matrix {{formula:e151c6eb-8957-4c3c-b72e-a63cd393085e}} . The most meaningful matrix to study for such a sparsification is the Laplacian matrix, defined as {{formula:9176094c-7c73-4b57-939f-fc39748541d1}} where {{formula:d988efeb-4275-4254-844f-c56a5e68d4ff}} is a diagonal matrix of vertex degrees. The Laplacian matrix encodes fundamental combinatorial properties of the underlying graph and has been well-studied for numerous applications, including sparsification; see {{cite:1109cf64a13106953c639e8a440223946d6bd416}}, {{cite:21a7df7da527bf60d19013defa98de0032eb8d70}}, {{cite:98f558a25d049c2c8130404465cf1d03edfe3232}} for a survey of the Laplacian and its applications. Our result computes a sparse graph, with a number of edges that is linear in {{formula:7c6b5abc-f645-406e-b9b3-24ce2a05366a}} , whose Laplacian matrix spectrally approximates the Laplacian matrix of the original graph {{formula:f34bf1e0-38ab-45f6-9293-3b2c35069edf}} under Parameterization REF .
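The Laplacian of a weighted (kernel) graph is the standard construction below; a small sketch for reference (the sparsification itself is the contribution described above):

```python
import numpy as np

def graph_laplacian(K):
    """L = D - A for the weighted graph with adjacency A = K minus its diagonal."""
    A = K - np.diag(np.diag(K))  # self-loops do not contribute to the Laplacian
    D = np.diag(A.sum(axis=1))   # diagonal matrix of weighted vertex degrees
    return D - A

K = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
L = graph_laplacian(K)
# Laplacian row sums vanish: the all-ones vector is in the null space,
# and for a nonnegative symmetric K the matrix L is positive semidefinite.
```

A spectral sparsifier replaces `K` with a much sparser weighted graph whose Laplacian multiplicatively approximates `L` on every quadratic form.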
– Has severe defects {{cite:2f294968abb65bf57ad62001c9f893aad16eaa5c}}
After constraining the values of {{formula:e46e12b0-8859-46d0-accb-8b15d0798f71}} and {{formula:937c5784-2ad2-47df-b7f1-601733737207}} to the experimental data of {{formula:bfecda9c-8e5b-467a-a6ca-6d166d1860dc}} and the binding energy for {{formula:b7a6fec8-22bd-487b-9640-1a54fad9b0e4}} Pb, we calculate the BE and {{formula:55c99170-44b3-4f66-ad34-aaa4655978e0}} for some of the spherical nuclei. The calculated results are displayed in Table REF along with the experimental data {{cite:9dba8f415395e8500c86a2ab0bf1855bd33fdcef}}, {{cite:0bc3b5c60b06468177df00acf8447c10c89ded9f}}, {{cite:ad479d1386773a58efa766417e5cbd33bb4424b4}}, {{cite:f90ea7202ac5485f9009e73e204fb20de95db294}}, {{cite:3b4fa2428e9172acb87dac8818463dce83e5a4cb}}, {{cite:ffe6e959511fd00e5ef83195362c7cdea4d0e2f1}}. As expected, the BE and {{formula:71dd6ef2-da63-4887-bcd1-e7862a0f7795}} remain unchanged for symmetric nuclei, where the number of neutrons is equal to the number of protons, i.e., N=Z nuclei ({{formula:fb3bc0ce-d211-4c59-9e23-974322f1b4b9}} O and {{formula:4c485ef4-3080-427f-8bdd-229587959cfb}} Ca). The neutron distribution radius changes considerably for all other nuclei, with a marginal influence on the binding energy. Nuclear matter quantities such as the binding energy per particle for symmetric nuclear matter (E/A), the isospin-dependent symmetry energy {{formula:233ada8c-882a-4772-b772-2140cbefd56f}}, slope parameter {{formula:c4b60c33-28be-48c3-aa92-b550ce731f4e}}, surface symmetry energy coefficient {{formula:999ff541-c415-499a-b663-c26a5cd115e0}}, skewness parameter {{formula:8df17c89-0427-4b28-9ea8-e864ca491cbc}}, and incompressibility {{formula:deb4da36-78e3-4178-9264-a326aed95610}} are calculated using the new/modified G3(M) and IOPB-I(M) parameter sets.
Furthermore, the incompressibility coefficient of asymmetric nuclear matter {{formula:9309671e-edd8-484d-a4c2-a63c86541379}}, {{formula:c29a2588-f424-47d5-9e2f-abe2dada9435}} in symmetric nuclear matter, the isospin asymmetry coefficient {{formula:faedc69b-30b2-4e61-9819-0f848abdfea0}}, the central density {{formula:075dd231-57e4-4335-bd39-ef7639e0bec2}}, the second-order incompressibility at saturation density {{formula:20b5603a-2d0f-4471-a009-e1e8c4b39b25}}, and the slope of the incompressibility {{formula:3a080483-f9fd-4200-9aca-f3efcf9fef7a}} are also estimated using the new/modified G3 and IOPB-I parameter sets. The calculated nuclear matter (NM) properties, along with the neutron star (NS) properties from the new/modified G3(M) and IOPB-I(M), are listed in Table REF together with the old/original G3(O) and IOPB-I(O) estimates and the experimental/empirical values.
Recent years have witnessed a great development of non-task-oriented chatbots. Existing approaches fall into generation-based methods {{cite:f61d488b768d675f49a6e55fb206fd36c18732cf}}, {{cite:deb3c781df2e01e491d9b3d3fc1780c00b921b9c}}, {{cite:c6307d7c7f1979dbedf9f2cfa74364f1c2020d46}}, and retrieval-based methods {{cite:e1237f0c6a178cf772d4973aa179966a773e53f3}}, {{cite:7798d27b087e4ef319fe222f24777d9337d09391}}, {{cite:de5b79ecff5dd6f41efe61190cc778ff3efb4e87}}. The retrieval-based approaches, which enjoy informative and fluent responses, have attracted more attention in industry, e.g., the social chatbot XiaoIce {{cite:f21aff7df8e91da0cd6ad6b7e4c362254dbe999d}} and the E-commerce assistant AliMe {{cite:ed5daf79e223b398d10102ded39ab7fc3708c7dc}}.
High-dimensional learning problems often train models with a large number of parameters. Modern large-scale learning tasks can easily involve {{formula:1bbc3c12-e49b-49a5-b375-e453ca2d5633}} model parameters, typically represented in 32- or 64-bit floating-point formats. Their substantial memory footprint is a critical obstacle to deploying these models on resource-constrained edge devices for inference tasks. Consequently, the problem of quantizing trained models to low precision for efficient implementation on memory-constrained devices has received significant attention {{cite:b3b737e8adfb787e05f5216a1723b2848cc1830e}}.
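As a minimal sketch of the idea (symmetric per-tensor scaling is an assumed, simplest-possible scheme, not the method of the cited work), post-training uniform quantization of a float32 weight tensor to 8-bit storage looks like this:

```python
import numpy as np

def quantize_uniform(w, bits=8):
    """Symmetric per-tensor uniform quantization (int8 storage assumed)."""
    qmax = 2 ** (bits - 1) - 1              # 127 for 8 bits
    scale = float(np.max(np.abs(w))) / qmax
    if scale == 0.0:
        scale = 1.0                          # all-zero tensor: any scale works
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=10_000).astype(np.float32)
q, scale = quantize_uniform(w)
w_hat = dequantize(q, scale)

assert q.nbytes * 4 == w.nbytes                       # 4x smaller than float32
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6  # half-step error bound
```

The memory saving is exactly the bit-width ratio, and the per-weight error is bounded by half a quantization step; more sophisticated schemes trade these off per-channel or with learned clipping.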
In {{cite:27c88ae0290033b6ac93bd4e74d1cf44462153a3}}, Gromov introduced the following non-negative topological invariant for a connected, closed manifold:
From the predicted mass spectra, we can see that all the triply-heavy tetraquarks lie above the corresponding meson-meson thresholds, which indicates that according to our model no stable {{formula:092c5097-e220-40b5-8cd7-be07dfdedfa1}} state exists. In previous studies {{cite:6d775f6296607590ff5abd265014eb9ebac8cfdf}}, {{cite:c047a981c073b94625632578e71b940cf22ae5c7}}, we also investigated the mass spectra for the singly-heavy tetraquarks {{formula:3c37025e-8ce4-41df-9caa-0ca2132b9a1b}} and doubly-heavy tetraquarks {{formula:6a128444-8416-4957-9b52-053cc1869534}} , and found that only the {{formula:218eef3e-eeaf-4948-9dce-bf4f7a9d666b}} {{formula:0d651793-38ba-4bad-b033-a36036ca7ec5}} state lay below the relevant strong and electromagnetic thresholds. In the case of the doubly-heavy tetraquarks {{formula:b7d4a277-57a3-4000-96ef-7eab7db65ea6}} , we pointed out that the mass ratio between the heavy {{formula:855522c2-9fda-4ef5-b0f9-7c38db3f7a4e}} and light {{formula:410425aa-e2de-42fe-b5b5-131cfb06e9de}} subsystems was the determining factor in the formation of a stable tetraquark {{cite:6d775f6296607590ff5abd265014eb9ebac8cfdf}}. If one keeps reducing the mass of heavy quark {{formula:c963c981-c48c-4c75-b15c-e7351f6da862}} , a doubly-heavy tetraquark {{formula:e9b27aa7-08b4-4919-ad65-bb1ef450c59e}} will become a singly-heavy one {{formula:7153fde4-3480-4ad3-9c68-1a9b6811117a}} . Similarly, replacing the {{formula:93356b9e-9239-417e-822f-9b2363b54c27}} in a {{formula:cba398c3-4b2a-4851-b3a9-9d78fe2bf4c5}} state with {{formula:71aa01ef-247d-465b-8185-9546c26b8dcd}} , one can obtain a triply-heavy tetraquark {{formula:5f9e5cc0-90af-4653-bc71-6e2096346757}} . We can speculate that the empirical analysis of mass ratios for the doubly-heavy tetraquarks {{formula:c030641e-5a9a-447b-8fca-0c6fda402d08}} also holds for the singly- and triply-heavy tetraquarks. 
Indeed, the stable {{formula:9a060dd7-e06c-4b3e-89f6-8a5204ce99a8}} state has the largest mass ratio, while all other systems with smaller mass ratios cannot form bound compact tetraquarks. Caution should be exercised in generalizing this conclusion to the {{formula:c75c1a06-2c2f-4156-bd05-bb7045d94590}} , {{formula:1ae8a120-1397-4b08-9d90-6cda516a079d}} and {{formula:fdbf048f-ce04-46f2-ad86-4ba5bed83567}} systems in which a charge conjugation quantum number may emerge, and then the above empirical conclusion may not hold. Moreover, it should be emphasized that, even if a tetraquark is not stable, it may subsist as a resonance with finite decay width and be observed by future experiments.
We recall that if {{formula:710a17e6-42f4-4d7e-8b36-7954c1f99eaf}} is open and {{formula:3c9928c6-5fbd-4acd-bd2d-f55e77ab8b9d}} is a {{formula:c3424663-ecef-4505-b42b-8650f2983f24}} -mapping for some {{formula:1c661599-4639-401b-ad12-8d8f3e37ee5d}} , then {{formula:d17b5095-9c10-4b9e-b090-0c2b0493c228}} is Pansu differentiable almost everywhere {{cite:3ceeea3cb76803e272868e7789120f979b1b9049}}, {{cite:261b9af188b4ebcd3d4de9fe517d7a44c0ca272e}} (see also {{cite:cf7d4dccc70c66391665980f30a9b10d4330f574}}, {{cite:813ea3b448090f69909f788d14668b8520610203}}), and we may define the Pansu pullback of a differential form {{formula:846f3042-33b6-4009-a098-5d56859cca20}} by {{formula:a39d3d6f-d071-4011-b7b0-1906f0957dc7}}
Tweets, the messages of Twitter microblogs, have a high density of semantically important keywords. This makes it possible to extract semantically important information from tweets and to generate features for predictive models that support decision-making. Different studies of Twitter are considered in the papers {{cite:a1138ae049e7999228246553f601e541b68dbaf0}}, {{cite:b2cce3c24c161b6d759c605f1609a6feb3acdfb8}}, {{cite:556bbba661a90aa51be943e342857eac14422274}}, {{cite:9b0fadac333a8ace93666df9112367b047366330}}, {{cite:e798fb54845d65a26b6b9db46066e2c090615bb4}}, {{cite:67100c6aec8efd8d92471e11749cce3afc0e4cfc}}, {{cite:6e890779c20d939a287e639e3aea51de263b1f27}}, {{cite:404368241db1389bdf2c305d83e7ce974ce9e296}}, {{cite:077f2b969816fd9ef6dd3e3fc6c844c2a00fe344}}, {{cite:631cf0ea9bae380cf3a6508b2b81d774e4c74f1f}}, {{cite:28380bbd602228517a080aedbfca83b64d5a8024}}. In {{cite:79cc731381908bf6ba15d1409139c54e764eeaef}}, {{cite:14bf3c909aae344d2549ca9d84cfb24231c2d251}}, we study the use of tweet features for forecasting different kinds of events. In {{cite:538d01f91f889c262704f97f6e1b219ecb0b9f91}}, we study the modeling of COVID-19 spread and its impact on the stock market using different types of data, and we also consider the features of tweets related to the COVID-19 pandemic.
Main application. Consider the well-studied fairness notion of EFX, in which each agent {{formula:7ff7eb67-43cb-463e-8b8a-8c09998d7b3b}} prefers her own bundle over the bundle of any other agent {{formula:34345f4f-0d6e-4a79-8eb4-8f0ac5714215}} , if one arbitrary good is removed from {{formula:fd904d03-78cc-46d4-a2b9-47379a2cf0ca}} 's bundle (in the related EF1 notion, the most valued good is removed from {{formula:31271768-d01a-41cf-a4d7-0c50ff267be0}} 's bundle). As standard in the literature, we assume additive values/costs, and study exclusive allocations. In light of the recent progress on existence of EFX for goods, culminating with the result of {{cite:2d19b112e51a39e852d01fc6fe01b16d0f7dac94}} for three agents, a natural question is whether EFX existence persists in our first model of interest – goods with copies. We show (in Section ) that an EFX allocation does not always exist for goods with multiple copies. This negative result holds even in a simple setting with any number {{formula:c914a706-d425-43fc-b2cd-a30d29b9ece1}} of agents and identical values (see Example REF ), demonstrating how adding copies can completely change the fairness landscape.
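The EFX condition stated above can be made concrete with a small checker for additive valuations; the helper names and the toy instance below are hypothetical illustrations, not taken from the paper:

```python
def bundle_value(values_i, bundle):
    """Additive value of a set of goods for one agent."""
    return sum(values_i[g] for g in bundle)

def is_efx(bundles, values):
    """bundles[i]: set of goods held by agent i; values[i][g]: i's value for good g.
    EFX: every agent i weakly prefers her own bundle to any other agent j's
    bundle after removing ANY single good from j's bundle."""
    n = len(bundles)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for g in bundles[j]:
                if bundle_value(values[i], bundles[i]) < \
                        bundle_value(values[i], bundles[j] - {g}):
                    return False
    return True

# Hypothetical toy instance: 2 agents with identical additive values.
values = [{0: 3, 1: 2, 2: 2}, {0: 3, 1: 2, 2: 2}]
assert is_efx([{0}, {1, 2}], values)           # envy vanishes after any removal
assert not is_efx([set(), {0, 1, 2}], values)  # agent 0 envies even after removals
```

Replacing "any good" with "the most valued good" in the inner loop would turn this into an EF1 check, matching the weaker notion mentioned in the text.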
In this work, we first study the Floquet two-level system, where the intensity of the probe field has the form of a square-wave periodic sequence; this is a basic model of direct frequency comb spectroscopy {{cite:7bdd81f5fcde92b173d65594045f34ed3cc93358}} and differs from the sine/cosine pulse trains in the Mach-Zehnder interferometer {{cite:b5277403d1016f94d976e41b0ed5a78768ed06a2}}, {{cite:1eee40515f00204c7a93d1750ae3574715e5a66c}}. Note that here the modulation period is shorter than the system's coherence time. We explore the steady excitation probability over a wide range of driving strengths and derive analytical results for some special scenarios. A lotus-like multi-peak spectrum is observed, and coherent destruction of tunneling is found in the strong driving regime. When the Floquet two-level system is assisted by a third energy level and a periodically modulated control field, each peak splits into two, resulting in multiple transparency windows. Here the modulation pulse of the control field is the same as that of the probe field, but the two are asynchronous. An intriguing finding is that the central transparency window (CTW) in the Floquet three-level system has a profile similar to that of the traditional EIT or ATS in unmodulated systems. We use the Akaike information criterion (AIC) method {{cite:f74d3c60c97e61965407834f715afa393990a1eb}} to discern the CTW from EIT and ATS by evaluating their relative AIC weights for different modulation periods, and find that the CTW can be EIT-like or ATS-like in different parameter regimes. Moreover, the quantum interference of the CTW can be modified by the modulation period, which, as an additional degree of freedom, enlarges the tunable parameter space of the CTW. Therefore, the CTW may provide a platform for quantum technologies superior to the traditional EIT/ATS.
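A minimal numerical sketch of a square-wave-driven two-level system is given below. The specific Hamiltonian, the field-on/field-off schedule, and all parameter values are assumptions for illustration only, not the paper's model; the point is that stroboscopic evolution over one modulation period is a single unitary, so long-time populations follow from repeated application of that unitary:

```python
import numpy as np

def expm_herm(H, t):
    """Unitary exp(-i H t) for a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def mean_excitation(delta, omega, period, n_periods=200):
    """Average upper-state population under a square-wave drive:
    field on (strength omega) for the first half-period, off for the second."""
    H_on = 0.5 * (delta * sz + omega * sx)   # illustrative Hamiltonian
    H_off = 0.5 * delta * sz
    U = expm_herm(H_off, period / 2) @ expm_herm(H_on, period / 2)  # one period
    psi = np.array([1.0, 0.0], dtype=complex)  # start in one bare state
    probs = []
    for _ in range(n_periods):
        psi = U @ psi
        probs.append(abs(psi[1]) ** 2)
    return float(np.mean(probs))

p = mean_excitation(delta=1.0, omega=2.0, period=1.0)
assert 0.0 <= p <= 1.0                         # population is well defined
assert mean_excitation(1.0, 0.0, 1.0) < 1e-12  # no drive, no excitation
```

Sweeping `delta` at fixed `omega` and `period` in this kind of sketch is how a multi-peak excitation spectrum would be traced out numerically.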
Transfer to Audio/Speech-only tasks. To test the generalizability of the learned audio representations, we further evaluate them by transferring AS-pre-trained MAViL to other out-of-domain speech-only or audio-only tasks. We conduct experiments on the Environmental Sound Classification (ESC-50) {{cite:df1de945345c723155cf6e414c26b14a8809632f}} and Speech Commands (SPC-V1) {{cite:fa1bc357f1e53abac6a8a330edc2aca463ea6d0e}} datasets. We fine-tune only the audio branch of AudioSet pre-trained MAViL. The results are summarized in Table REF . MAViL outperforms all recent supervised and self-supervised models and achieves a new state-of-the-art. MAViL demonstrates desirable transferability from audio-video self-supervised pre-training to audio-only downstream tasks. {{table:53864cbb-3aac-4fe0-bcde-1d527ac186ea}}{{table:087fea6a-a4a0-4aeb-9284-50415bc6cf9b}}
The elastic constants of YRh{{formula:bbaea067-89d8-46bc-abfc-9b41abadfd2a}} B satisfy all of the above criteria {{cite:d8418bc95f12b3f1423bf09eee04b407ea2d7a34}}, and therefore the material is mechanically stable.
We start our overview with some game settings that fall under the scope of our approach. Obviously, all memoryless-determined objectives are among them, since we generalize Gimbert and Zielonka's work {{cite:d0b5911efad145c360189b566ff3c51e27b0680b}}: this includes, e.g., mean-payoff {{cite:c89abd521c3877e426cc8a62a09ac25af02fa7d8}}, parity {{cite:b81d098f44d61328386b8d9ab6f4a7e0519a01e6}}, {{cite:ec7d2c59cd1428a48e7012c668e54533b4e43900}}, energy {{cite:27dec8da7570aec3c384f7bb6e270f89d1bd7328}} or average-energy games {{cite:03ed435fffcb0ea50ccde75a58b71ad57c2f3fef}}. As established in Section , our results encompass all cases where arena-independent memory suffices. Hence they permit to rediscover the existence of UFM strategies for games such as, e.g., generalized reachability {{cite:73c79db6a4af32ac7b5b5bce60b31514eddbcb12}}, generalized parity {{cite:4c9df05b3d8968cf14a7894125203ecb7099391a}}, window parity games {{cite:324ecf56f2958a354e9580b1d491932156379c04}}, or lower- and upper-bounded (multi-dimension) energy games {{cite:309b8c619188f53f6e4aaf47ed4c33d06f8ea961}}, {{cite:03ed435fffcb0ea50ccde75a58b71ad57c2f3fef}}, {{cite:0733acc05c9c5ca8b2d44de73c7e2fc59b1c1cae}}. Our approach can also be useful to extend these known results to more general combinations, either via appropriate memory skeletons or through the lifting corollary (see an application in Section REF ).
We replicated our main experiments (Cifar-10/VGG) on another dataset and another network architecture. We chose the Fashion-MNIST dataset {{cite:81fc6b9a8cc34c801d549b95704aa3dd982b7587}} and a simple multi-layer perceptron architecture with one hidden layer of 5000 neurons and ReLU activations. We list the details of our experimental setup in tab:fashion-experimental-settings. We varied two key parameters compared to our Cifar-10 results: we used 64 workers instead of 32, and we used SGD without momentum and without weight decay. Because this task is easier than Cifar-10, the initial phase where both training and test loss converge at similar rates is shorter. We therefore consider the first 500 steps as the `initial phase', as opposed to 2500.
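The architecture described above can be sketched as a plain forward pass; the weight initialization, batch size, and helper name are illustrative, and the SGD training loop is omitted:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP with ReLU, matching the described architecture."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer: 5000 ReLU units
    return h @ W2 + b2                 # class logits

rng = np.random.default_rng(0)
d_in, d_hidden, n_classes = 28 * 28, 5000, 10  # Fashion-MNIST shapes
W1 = rng.normal(0.0, np.sqrt(2.0 / d_in), (d_in, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.normal(0.0, np.sqrt(2.0 / d_hidden), (d_hidden, n_classes))
b2 = np.zeros(n_classes)

logits = mlp_forward(rng.normal(size=(64, d_in)), W1, b1, W2, b2)
assert logits.shape == (64, 10)
```

This model has roughly 4 million parameters (784×5000 + 5000×10 plus biases), which is what makes the task considerably lighter than the Cifar-10/VGG setting.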
To date, there are several candidates for PPISNe {{cite:69a437c4f56d124ee8099d233f04ba5c002bca34}}, {{cite:52c646954684c0ae020d4a82128768de45a3f92a}}, {{cite:809b27e9df047710ae727f6a89fcba131a0e6cee}}. {{cite:136236a1eae5b40575786821962b3e5ace5ed404}} reported the detection of a Type IIn SLSN 2016aps with energy {{formula:1e41f180-5281-49fb-936f-3c0a2f7a52c3}} and a total mass (ejecta + CSM) exceeding {{formula:3aac9496-f41a-45a1-ad6b-058f57f27a3c}} . Detailed one-dimensional radiation-hydrodynamic simulations {{cite:d8bd0606e0e26a1fc383ec18117bb4fdb0c15458}} suggest that SN 2016aps could be a collision of an ejecta mass {{formula:a2f5c807-7f9a-4281-8083-2ef7374c8d98}} with a {{formula:3f7e2f5f-f001-4f4c-bf80-e8270766f39e}} wind-like CSM of outer radius {{formula:f23ab34b-f939-417f-8cb8-8e2e8330252c}} . SN 2016aps has a radiated energy {{formula:bd721c0e-5276-4c9e-ab15-101637c7fb65}} , which is an order of magnitude higher than iPTF14hls. The mass loss rate of SN 2016aps, {{formula:fd4b657e-db25-4519-a44b-44232794a033}} , assuming a wind velocity {{formula:df00121f-d4f6-4a56-bdcb-fcbbd5ba3a66}} , is comparable to that of iPTF14hls. {{cite:8071e3a8a6c88ea22487860d01ed647f5befb9c5}} reported the Type II SN 2020faa, whose first six months of light curves are of great similarity with those of iPTF14hls.
In this paper, the void-channel solutions were obtained using the open source finite element solver FreeFem++ {{cite:3759293eea4dcc6c29024694ecee9caadf86a35e}}; we used this to solve the Helmholtz equation for our void-channel model, {{formula:2ddc8c4f-b0b2-4909-9d14-9efc77b0ed44}}
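The actual solver is FreeFem++ (a finite-element code, typically on 2D or 3D geometries); as a much simplified stand-in, the sketch below solves a 1D Helmholtz problem with homogeneous Dirichlet boundary conditions by central finite differences. The domain, forcing, and wavenumber are assumptions for illustration, and `k` must stay away from the discrete resonances where the operator becomes singular:

```python
import numpy as np

def helmholtz_1d(k, f, n=200, length=1.0):
    """Solve u'' + k^2 u = f on (0, L) with u(0) = u(L) = 0
    using second-order central differences on n interior nodes."""
    h = length / (n + 1)
    x = np.linspace(h, length - h, n)          # interior grid points
    main = -2.0 / h**2 + k**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, 1.0 / h**2), 1)
         + np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return x, np.linalg.solve(A, f(x))

# Manufactured solution: u = sin(pi x) gives u'' + k^2 u = (k^2 - pi^2) sin(pi x).
k = 2.0
x, u = helmholtz_1d(k, lambda x: (k**2 - np.pi**2) * np.sin(np.pi * x))
assert np.max(np.abs(u - np.sin(np.pi * x))) < 1e-3
```

A finite-element formulation, as in FreeFem++, would instead assemble the weak form of the same operator, which is what makes irregular void-channel geometries tractable.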
We provide the qualitative comparison between 3DCrowdNet and SPIN {{cite:b2fbfb3116d027d6cd81c76328f689967a71edf2}} in Figure REF . SPIN {{cite:b2fbfb3116d027d6cd81c76328f689967a71edf2}} is the most relevant competitor among current state-of-the-art methods, since it leverages image features and estimates SMPL parameters, as 3DCrowdNet does. As the figure shows, 3DCrowdNet produces much more robust 3D shapes on crowd scenes than SPIN. 3DCrowdNet disentangles different people from a target person in a bounding box and estimates reasonable 3D pose and shape under diverse occlusion, including inter-person occlusion. We provide more qualitative comparisons with other methods {{cite:f7cf39110c14621ab049316f4ad171a62c5d1947}}, {{cite:879257d7c465fdaa3fdf24dca934a618bbdbf3e0}} and failure cases in the supplementary material.
In Table REF , we report the average results of all tasks with and without ground-truth object detection over the considered episodes. For task on, we randomly generated 400 different goals, defining 400 episodes; for tasks open and close, we randomly generated 100 goals, defining 100 episodes for each task; for the object goal navigation task, we used the test set of goals proposed in {{cite:5fcb150f643a145b2c35e7da938d2598af7e4b8a}}, defining 2133 episodes. It is worth noting that, for the object goal navigation task, two different episodes often have the same goal but a different initial pose of the agent.