As mentioned above, global FI measures can be model-agnostic or model-specific. A global model-specific FI for GBM can be derived by evaluating the FI of its individual trees and averaging it across all trees in the ensemble {{cite:d3c7d69862afb05374ee56dda37c9a52ff2595d2}}. Given a tree, there are many methods to quantify how important a feature is by inspecting the non-terminal nodes and the features used for splitting. The most prominent measures are:
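Independently of the per-node measure chosen, the ensemble-level averaging described above can be sketched as follows; scikit-learn's GradientBoostingClassifier is used here as a generic GBM stand-in, and its impurity-based per-tree importances are one illustrative per-tree measure:

```python
# Sketch: global model-specific FI for a GBM obtained by averaging per-tree FI.
# GradientBoostingClassifier is a generic stand-in; each fitted tree exposes
# impurity-based importances via feature_importances_.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
gbm = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X, y)

# gbm.estimators_ has shape (n_stages, n_outputs); flatten over all trees.
per_tree = np.array([tree.feature_importances_
                     for stage in gbm.estimators_ for tree in stage])
global_fi = per_tree.mean(axis=0)  # average across all trees in the ensemble
print(global_fi.round(3))
```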
In this paper, we have developed the Layer-Peeled Model as a simple yet effective modeling strategy toward understanding well-trained deep neural networks. The derivation of this model follows a top-down strategy by isolating the last layer from the remaining layers. Owing to the analytical and numerical tractability of the Layer-Peeled Model, we provide some explanation of a recently observed phenomenon called neural collapse in deep neural networks trained on balanced datasets {{cite:f1b0a7b7db37beaa4cb80ff63d7624ad3b3ac055}}. Moving to imbalanced datasets, an analysis of this model suggests that the last-layer classifiers corresponding to the minority classes would collapse to a single point once the imbalance level is above a certain threshold. This new phenomenon, which we refer to as Minority Collapse, occurs consistently in our computational experiments.
In Einstein-Aether theory there are scalar, transverse vector, and tensor perturbations, whose propagation speeds {{formula:6306ed29-423a-4634-9002-c8a13a20fc48}} , {{formula:9c8b888b-5a1a-43e7-a30e-af5accc45a6d}} , {{formula:b79a4650-07ad-4724-a9ad-9accb6e92b8a}} on the Minkowski background are generally different from that of light {{cite:6b6e6c02e727e74eb440e68acfc91dcbf8fc2cbd}}. To ensure the stability of Minkowski spacetime, we require that all of {{formula:5b8b8e6e-f583-4ff7-af1d-638539d5fcd9}} , {{formula:17ffdada-cac3-4dea-84ef-e9d371715a81}} , and {{formula:b9e14531-301d-48fa-9270-2cdbed9b9dbd}} are positive. Moreover, the observations of gravitational Cerenkov radiation {{cite:1a2c622dd304af2f333ab90845521f09c5a0c4dd}}, solar system tests {{cite:b52ad9ccbb7471f6478d69cd17b98674c7f3655d}}, big-bang nucleosynthesis {{cite:44afe00c61f6f75b23bcd7a776484f1c1ab402b0}}, binary pulsars {{cite:7e27806bf68d26e28ebdd00d9af988f3019d3c09}}, {{cite:c895fe14cd4f76f2c15ed7e977b8807080ec0dcf}}, and gravitational waves {{cite:31c972db88b833e6f946c9084487a785daee6b34}}, {{cite:def6f86368e8e059ed55bdb1b85a6e58bf582fa3}} put constraints on the dimensionless coupling constants {{formula:cce5ef7b-de88-4609-a5a8-e8d26041c36f}} of Aether derivative interactions. In particular, the gravitational-wave event GW170817 {{cite:ece16299cafebd23bc4bb68086264d9301d0636c}} together with the gamma-ray burst 170817A {{cite:339140b02f4b9c2041847a7d437ec225daee1175}} placed the upper limit {{formula:4b23cfbc-566b-451e-8bba-7743df5950f7}} , which translates to {{formula:2c87df60-497e-4efe-aea6-abd742650f2d}} {{cite:31c972db88b833e6f946c9084487a785daee6b34}}, {{cite:def6f86368e8e059ed55bdb1b85a6e58bf582fa3}}, where {{formula:19ac0578-9d9d-4ea6-b30a-108404513f10}} . However, there are still theoretically viable parameter spaces in which all the observational constraints are satisfied.
The Gaussian process (GP) is the method of choice in the popular PILCO algorithm {{cite:c80c0e7075bcb1a392408516b077bc5343d56dce}}. On the modelling side, it cannot handle non-Gaussian (multimodal or heteroscedastic) posteriors or {{formula:ff6e11ef-23cf-4a58-81fe-ba834a790919}} -interdependence, failing Req REF . More importantly, similarly to {{cite:552970c1560238a8e82a7c41efaf1498c108f1cd}} and {{cite:27ad5ee091ec1051dac4b2874d5b7fc9cc93248f}}, we found it very hard to tune and slow to simulate from. We obtained reasonable performance on the sincos data set, which we report; however, GPs failed on the raw angles data set (as expected, due to angle non-continuity) and, more importantly, the tuned hyperparameters led to suboptimal dynamical performance, so we decided not to report these results. We believe that generative neural nets that can learn the same model family are more robust, faster to train and sample from, and need less babysitting in the MBRL loop. {{table:1d3468a4-0b25-4f63-9382-e6157b2590f8}}
We have only considered cases where we do not have access to {{formula:953d4f15-60ec-4fa4-a4b8-49f48d437c6a}} . However, there are ML applications where we do. For instance, in computationally intensive numerical simulations, in the physical sciences, economics, or climatology, some models are run on supercomputers and can take days to produce a prediction. Speeding up these models is a research area of its own that gathers computer scientists, physicists, and mathematicians. This speed-up is important for obtaining faster predictions, but also for conducting statistical analyses of the results. One way of achieving it is to use surrogate models, i.e., lighter, cost-effective models that approximate the heavier, expensive model. These surrogate models are often based on learning an approximation of the full model from data points, and NNs are increasingly used for that purpose; see for example the works of {{cite:9e22204fa76f1802a13df409db5d28248e884b78}}, {{cite:98d9c70cd56d691cb1a9f9d222e897ac2ae520af}} or {{cite:50b337f3c92904057000cc2d1bb4ec489d1731fb}}. Yet, since the data points are computed using the full model, it can be costly to obtain a sufficiently large data set. Our approach could be applied by sampling points using the information given by {{formula:060dae42-3ce7-48e3-b5aa-54bc280f33dd}} , to obtain more informative data and improve the accuracy of the NN for the same training data set size.
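As a schematic illustration of the surrogate idea itself (not of the gradient-informed sampling), here is a minimal sketch in which a toy function stands in for the expensive simulator:

```python
# Sketch: fit a small neural-network surrogate to an expensive simulator.
# expensive_model() is a hypothetical stand-in for the full model; in practice
# each call may take days on a supercomputer, which is why the data set is small.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_model(x):          # placeholder for the heavy simulation
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 2))   # 200 costly simulator runs
y_train = expensive_model(X_train)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

X_test = rng.uniform(-1, 1, size=(1000, 2))
err = np.mean((surrogate.predict(X_test) - expensive_model(X_test)) ** 2)
print(f"surrogate MSE: {err:.4f}")
```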
Another approach for investigating the properties of strongly coupled theories at finite temperature and chemical potential is via the AdS/CFT correspondence {{cite:fdb497c6689bac9d692099bc7813ca5111aa36f8}}, {{cite:0511b751ab7299c5f00d097a33ff945ef2e45922}}, {{cite:70c0421a29337f6d51762217077142117bfa888c}}, which relates the weakly coupled gravitational theory in {{formula:26992a12-5c6d-4376-957b-26107706c9f6}} -dimensional AdS spacetime to the strongly coupled conformal theory in {{formula:e8dc5307-f267-4459-b792-57cfa2266118}} dimensions living at the boundary of the AdS spacetime; this is referred to as the holographic approach. The application of the holographic approach to QCD has been investigated in the top-down approach, in which holographic QCD models arise directly from ten-dimensional superstring theory {{cite:80e94a9c2d0a47ce6318b35d378d859148f62e74}}, {{cite:ddebb2e837cae5dbe187b51122c295ca877571ac}}, {{cite:58cc90983113d54b9773159dfa3c86e162c31d0b}}, {{cite:c8a96cb60fe065cc312141ada6a826a468b9db59}}, {{cite:5fcfcfd9406d04487ac75cd8006397bb49363b8b}}, {{cite:df5d9561cefcd0f3a482988c0417f34327697001}}, {{cite:6fb1a8380663816ea76300d88d8c436af4143f12}}, {{cite:0161bc7cea7c1fa363ca05c819f10428806a22d0}}. In the bottom-up approach, motivated by phenomenological reasons, holographic QCD models have been introduced both ignoring the backreaction on the background spacetime geometry {{cite:6fa5439b399e1d0e943d34065f16c349dec305d9}}, {{cite:6f3ef2e51ebe83decbfb8b5a3384e45cd15c65d6}}, {{cite:428879650716252a89525d127692d043f5e922f0}}, {{cite:5b5d317baf4e9ee345bc084ebf5b5f07a1afda41}}, {{cite:c021889320a53e2d4a13378b26fdf797f53643e0}}, {{cite:72a3e958469169aa393e1349dc173437afe3b9ea}}, {{cite:71a367e3fc8f89099e6e68a5fddcb43ea2e9b0c9}} and taking the backreaction into account {{cite:3909b8a8b8acd29b586f67f01226d2486d5c347f}}, {{cite:304e92267bc6af658329a7aa4704c0c5db405827}}, {{cite:fd25bcf3a3d908f5f6b36e1578e4288dc346bba0}}, {{cite:be03d2b804b624e7262a83d044b160ff7cddd1c5}}, {{cite:d13652cd8ceb14f4c2d57340eaf40b6ab2faeaba}}.
Researchers have switched their attention to the robustness of deep models, as these models now achieve high-performance results on clean/in-domain validation sets. Previously, the object recognition {{cite:fb3c2df23f49135b2e8a4545f75929083864b381}} and semantic segmentation {{cite:b786f2b6d0240cea7fd9b1f4d98175aa28b47ebf}} tasks were rigorously evaluated for image corruptions. To the best of our knowledge, we are the first to conduct a similar study on the instance segmentation task. In addition to evaluating models under image corruptions, we also evaluate on images collected with a different set-up and from different environments than the training images; in other words, images that exhibit a domain gap with the training images. Our evaluation is also not limited to network architectures and backbones but is extended to models with different normalizations, augmentations, and initializations, and it also investigates the impact of multi-task training.
where {{formula:67f97c02-6e6e-4583-94bd-6478aa2c88d2}} is defined using the following equation {{cite:20d67d6eaed0d36bda7b9bed723ac5de4927e02c}}: {{formula:43046ac5-2ebe-41f6-8c54-45e8702353b2}}
Finally, it would be an interesting issue to study the dynamics of kinks and bound state solutions of the model (REF ) in the context of quasi-integrability, which would be relevant to the study of quasi-conservation laws such as (REF ) and the relevant quasi-conserved quantities in the so-called (quasi-)integrable approach {{cite:67b220961bac401d37e5e8b8a06d96f632197743}}, {{cite:8216315e3c6a4b50dba9d15a491b4cc5d243f0ef}}, {{cite:a088b49e2c8f10b2492b93918f1a29e7c45a9c9a}}, {{cite:8909861f2d59233a8e8673a9b1834eff6d9a238f}}. So far, to our knowledge, a model possessing strong-weak duality sectors has not been studied in the quasi-integrability context. Besides, it could be relevant in the context of one-dimensional topological superconductors in order to discuss how the interactions influence the shape and the lifetime of the bound states, in view of the recent proposal to identify the Majorana modes by means of local integrals of motion in interacting systems {{cite:0ee79188e046a197f6db671b2946bed21d71680b}}.
Methods of solving the Kohn-Sham equations {{cite:69f5822f2bff3105c6e60e6bfdb777d3355d8e76}} in average atoms typically involve numerical finite-difference integration schemes like Adams-Bashforth or Runge-Kutta. A largely unexplored alternative is to use basis functions via spectral methods, where one solves a matrix problem for the coefficients. The primary problem is that the boundary condition for the average atom requires that the wavefunctions match the free electron solutions at the cell boundaries (or beyond, depending on the model). This condition admits a discrete bound state spectrum and a continuous free electron spectrum. Thus, to use basis function methods one would need to discretize the continuum, introducing a physical approximation in doing so {{cite:cfd9e28d2e001bdadf8774ee887ff6eacbeaae3c}}.
Noise-Limited Regime (NLR): ({{formula:c52e37fc-0341-4342-9ddd-09822c44db0f}} in Fig. REF , Fig. REF and Fig. REF ). In this regime, the typical MU is likely to have a NLoS path to the serving BS, see Fig. REF . The network in the NLR is very sparse, so the interference can be neglected relative to the thermal noise when SINR is used as the performance metric. In this case, {{formula:67ad6ac4-4a48-4421-a65f-8c3b03d39cb2}} and the coverage probability increases with {{formula:3aa7539c-6869-4e38-9208-f3fd32167697}} , since the strongest received power ({{formula:f7f89f21-bbd7-4f95-829c-e0d87174ecad}} ) grows while the noise power ({{formula:324583da-9bb4-45cf-8cca-f5b31d39a86c}} ) remains the same. If instead SIR is used as the performance metric, the SIR coverage probability remains almost stable in this regime as {{formula:e87c0761-409b-4547-8792-dfd022560691}} increases, because the increase in the received signal power is counterbalanced by the increase in the aggregate interference power. Besides, since the aggregate interference power is smaller than the noise power, the SIR coverage probability is larger than the SINR coverage probability.

Signal-Dominated Regime (SDR): ({{formula:1d5c146e-bd1a-4c4b-9b6a-049e9da8fabd}} in Fig. REF , Fig. REF and Fig. REF ). In this regime, when {{formula:f179a1a8-3d9d-428d-a6df-d25f7e882214}} is small, the typical MU has a higher probability of connecting to a NLoS BS, while as {{formula:0f7970a2-c3f2-4f66-b297-a48af69304e6}} becomes larger, the typical MU has an increasingly higher probability of connecting to a LoS BS. That is to say, with the increase of {{formula:b7098dda-83b2-47d7-9bb4-047aa7a52467}} , the typical MU is more likely to be in LoS with the associated BS, i.e., the received signal transitions from a NLoS to a LoS path. Even though the associated BS is in LoS, the majority of the interfering BSs are still NLoS in this regime, and thus the SINR (or SIR) coverage probability keeps growing. From this regime on, the noise power has a negligible impact on coverage performance, i.e., the SCN is interference-limited. Besides, if the noise power is ignored, from the NLR to the SDR the coverage probability contributed by NLoS BSs decreases to almost zero while the coverage probability contributed by LoS BSs increases. This is because when the network is sparse, almost all MUs are associated with NLoS BSs, and as the network becomes denser, MUs shift from NLoS BSs to LoS BSs.

Interference-Dominated Regime (IDR): ({{formula:7a105cb4-2063-493c-a3f4-71691255c0aa}} in Fig. REF , Fig. REF and Fig. REF ). In this regime, the typical MU is connected to a LoS BS with high probability. However, different from the situation in the SDR, the majority of the interfering BSs experience transitions from NLoS to LoS paths, which causes much more severe interference to the typical MU compared with NLoS interfering BSs. As a result, the SINR (or SIR) coverage probability decreases with the increase of {{formula:81f0abfa-3889-4c64-97b6-2c8311e99bc3}} , because the NLoS-to-LoS transition of the interference causes a larger increase in interference than in signal. Note that in this regime the coverage probability in our model exhibits a huge difference from that of the analysis in {{cite:b9c08a0556ddd81a99dca920d90fdd5b6c30b6f4}}, indicated as “NLoS only” and “LoS only” in Fig. REF .

Interference-Limited Regime (ILR): ({{formula:ece3c66a-4018-4208-954d-4bbc25237a65}} in Fig. REF , Fig. REF and Fig. REF ). In this regime, the network is extremely dense and grows close to the LoS-BS-only scenario as {{formula:1a34b933-40a9-4fff-a77e-a0cd16e519f6}} increases. The SINR (or SIR) coverage probability becomes stable with increasing BS density, since any increase in the received LoS BS signal power is counterbalanced by the increase in the aggregate LoS BS interference power, as also illustrated by Corollary REF . {{figure:596cce0f-467f-4869-af26-647abfcc55a7}}{{figure:76c8cd56-5162-4ddd-bfd2-40650649e717}}
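The qualitative trend across these four regimes (coverage rising, peaking, then falling and flattening with BS density) can be reproduced with a small Monte Carlo sketch. All numbers below (the exponential LoS probability, path-loss exponents 2.1/3.8, noise level, and SINR threshold) are illustrative assumptions, not the parameters of the analysis above:

```python
# Monte Carlo sketch of SINR coverage vs. BS density in a LoS/NLoS model.
# All parameters here are illustrative assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)

def coverage(lam, n_trials=2000, radius=1000.0, tau_db=0.0, noise=1e-9):
    tau = 10 ** (tau_db / 10)
    covered = 0
    for _ in range(n_trials):
        n = rng.poisson(lam * np.pi * radius ** 2)
        if n == 0:
            continue
        d = radius * np.sqrt(rng.uniform(size=n))      # BSs uniform on a disc
        los = rng.uniform(size=n) < np.exp(-d / 63.0)  # assumed LoS probability
        alpha = np.where(los, 2.1, 3.8)                # LoS/NLoS path-loss exponents
        p_rx = d ** (-alpha)                           # unit transmit power
        s = p_rx.max()                                 # strongest-BS association
        i_agg = p_rx.sum() - s
        covered += (s / (i_agg + noise)) > tau
    return covered / n_trials

for lam in [1e-6, 1e-5, 1e-4, 1e-3]:                   # BS density per m^2
    print(f"density {lam:.0e}: coverage {coverage(lam):.2f}")
```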
Object detection {{cite:e0b84551dbbb302ed7f8fbcffc8930e9cbd427be}}, {{cite:54e7c519bc86f3ed10e553f487bbfffae00c1622}}, {{cite:99debd81591233b2435dd94dd2d658fb63643b75}}, {{cite:4fe5664cac3cab2c2e6739fcf6b5fcf271b9440a}} is a core task in computer vision that has advanced considerably with the adoption of deep learning and continues to attract significant research effort {{cite:2e728c0ac3831a1a9fe924a858c4d84c67218767}}, {{cite:fdfb73ded4b32b668bc4c053d3bc8f9f94e4eb61}}, {{cite:8280a373abe6ecc9953ed5197272d2eac1ccfae8}}. Current deep object detection methods achieve astonishing performance when learning a pre-defined set of object categories that have been annotated in a large number of training images (PASCAL VOC {{cite:062807c8940877d0d878b8dcb1b3d9c5e94d1225}}, COCO {{cite:1bfda69d0d89d2a0bc35a10348a908f567b93a75}}). Unfortunately, their success is still limited to detecting a small number of object categories (e.g., 80 categories in COCO). One reason is that most detection methods rely on supervision in the form of instance-level bounding-box annotations, hence requiring very expensive human labeling efforts to build training datasets. Furthermore, to detect objects from a new category, one has to annotate a large number of bounding-boxes in images for this new object category.
Sparse voxel-based methods quantize the 3D points into sparse voxels, and then apply 3D convolution operations only on those non-empty voxels to reduce computation and memory cost. Minkowski CNN {{cite:ccf34a510156aabc8f220bb4c4c9261e49b9c98e}} is the first efficient sparse voxel framework, and it surpasses all point-based methods in both accuracy and speed. A possible reason is that sparse voxels are structured, which is convenient for convolution operations. SPVNAS {{cite:03faf7a6fe65d3bf4265c79ada0617dbef522552}} introduces neural architecture search (NAS) into {{cite:ccf34a510156aabc8f220bb4c4c9261e49b9c98e}}, and achieves better results with lower computation cost. Recently, variants {{cite:26e961ff89fd298feb477dd047b9b1f278fca780}}, {{cite:43ef5e6c082a3ddb55ad33b2a8633f0542615078}}, {{cite:a08be544360a5a4455182f8f46aa169b15c04856}} of sparse voxel-based methods have been proposed. Cylinder3D {{cite:26e961ff89fd298feb477dd047b9b1f278fca780}} quantizes the 3D points in the cylindrical coordinate system, and proves its efficiency. AF2S3Net {{cite:43ef5e6c082a3ddb55ad33b2a8633f0542615078}} proposes the attentive feature fusion module (AF2M) and the adaptive feature selection module (AFSM) to efficiently extract local and global structures simultaneously. RPVNet {{cite:a08be544360a5a4455182f8f46aa169b15c04856}} fuses range view, point, and sparse voxel features in a single framework to alleviate quantization error, and achieves the best results on the SemanticKITTI and nuScenes benchmarks. Although these methods dominate the LiDAR semantic segmentation benchmarks, they are difficult to deploy and cannot run in real time on mobile platforms.
Aspect Conditional Masked Language Model (ACMLM) {{cite:52611a7b9cccd6bd89cac42512fb53fc310f1b84}} is a fine-tuned BERT {{cite:c46f261541e4a4e92419df177d8d59f654ad04e4}}, where an attention layer is introduced to encode the features for both the user and the item. By predicting masked tokens, this model can produce diverse sentences.
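A generic sketch of the underlying masked-token prediction with a vanilla BERT masked LM (not ACMLM itself, which adds the user/item attention layer and fine-tuning on review data) looks like this:

```python
# Sketch: masked-token prediction with a vanilla BERT masked LM. ACMLM builds on
# this mechanism; the user/item conditioning is omitted here.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

text = f"the pizza here is absolutely {tok.mask_token} ."
enc = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = mlm(**enc).logits

pos = (enc.input_ids == tok.mask_token_id).nonzero()[0, 1]
top = logits[0, pos].topk(5).indices
# several plausible fillers for the mask -> diverse generated sentences
print(tok.convert_ids_to_tokens(top.tolist()))
```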
The present paper constitutes a generalisation of the holonomy groupoid constructions in {{cite:a5a7a6a2bf574095ef0c37f28022222489206689}} to singular foliations, and is inspired in part by the recent work of Garmendia and Villatoro {{cite:7078f5a5c0d43cfb50f13262fca814ac7d216e13}}, who showed how to recover the Androulidakis–Skandalis holonomy groupoid as a quotient of a diffeological path space. In the author's view, the primary contribution of this paper is a novel perspective on the holonomy of singular foliations which arises from parallel transport of conservation laws. In particular, this places the holonomy of singular foliations in the same realm of differential geometry that deals with symmetries and conservation laws of differential equations in the sense of {{cite:e003edc3c1619e4a4d5e8a99b0fd6300645526e1}}, {{cite:b340430fceb90618b8c27858e0e627351c1bf0f7}}. In addition, the diffeological pseudo-bundles of germs that we introduce in this paper are shown to be extensions of jet bundles, which are closely related to (but topologically distinct from) étale spaces of sheaves. We believe that these pseudo-bundles may be of independent interest and utility. Let us now outline the content of the paper in more detail.
We note that, in general, for loop calculations some care is needed when using dimensional regularization. To take advantage of the spinor-helicity formulation in a one-loop calculation, we need to choose an appropriate version of dimensional regularization. Specifically, instead of taking the external polarization tensors and momenta to be ({{formula:c74b8b35-098e-4710-9de8-84ccfc61ffde}} )-dimensional as in conventional dimensional regularization {{cite:058ec24afb7785ebd7eea1f9ce05c94470f9bbf5}}, we use the so-called four-dimensional helicity (FDH) scheme {{cite:e3f0befc13160d336596bdd7d6def9219e490253}}, {{cite:fa46133d7e162c828c022d69d18d5a52a7a95f2c}}, where both the external and loop state counts are kept in four dimensions and only the loop momentum is continued to {{formula:f24c8735-24bb-410e-974f-2cbaafcc2140}} dimensions. Because the massive one-loop amplitudes that we obtain here are neither ultraviolet nor infrared divergent, the precise distinction between the different versions of dimensional regularization drops out of the final results for the amplitudes. We do, however, need to regularize intermediate steps, because individual loop integrals are ultraviolet divergent, with the divergences canceling in the final results.
The pattern formation mechanism presents significant advantages; first, and in contrast to classical chimeras, the generation process is robust to the choice of initial conditions. Secondly, patterns that follow such a global instability principle are known to be stable and not transient, which is also observed in our numerical simulations. These two aspects are the main open questions regarding the process through which chimera states emerge. As part of the novelty, in this work we have particularly studied the case where the coupled dynamical systems are fixed points instead of the usual (nonlinear) oscillators. The main reason is that this makes it possible to use the solid classical results already existing in the theory of pattern formation. Nevertheless, we believe that the results shown here can also be extended to the case of coupled oscillators. We are confident that our results will shed light on a better understanding of the generation mechanisms and stability of chimera states and also fill the theoretical gap of a mathematical explanation for recent experimental observations in this domain {{cite:19ffc699b64e9f38b4e3205a268f4fcac0240ef0}}.
Take {{formula:4786635c-1c2e-4ea4-a1bc-3db3a76b4b45}} as in Lemma 3, p. 161 of Blum et al. {{cite:63d13a867758b85fa134310ad906f9f9b8688dc1}}. Also, the following auxiliary results from elementary convex analysis will be needed:
We now turn to a 2D S-TI-S junction with helical magnetization, depicted in Fig. REF . The helical pattern is given by {{formula:9a49e473-8d20-4af2-ac0f-fa56e991e11b}} , where {{formula:468dac88-e541-4087-8ead-a6fb20ffddaf}} and {{formula:e962307e-2fcd-4bb2-a0ec-457b96933cc3}} determine the actual pattern of the magnetization. Helical magnetization with a period of {{formula:e75894c7-5977-4d97-b81f-2ef0b14951e1}} nm has already been observed experimentally in manganese on a tungsten substrate through spin-polarized tunneling experiments {{cite:9c9963d3a4a36eee9856baf89e3d263a837c087f}}. In order to solve Eq. (REF ) we consider the limits {{formula:3ca28883-2e9c-4353-a083-9015776a6256}} for {{formula:519a69f9-6973-48ab-b38f-2a407af4287a}} and {{formula:acf859aa-0113-44db-bdf1-03da1fa2844e}} for {{formula:fabab723-587e-40ba-a5e6-4e8d3f4ec4ed}} . We substitute the Green function, obtained for the configuration shown in Fig. REF , into Eq. (REF ) and find the supercurrent: {{formula:a3d022b9-d97c-48e7-9660-c24342d2d7a4}}
We have hence seen that we can design a quantum Hamiltonian simulation algorithm which has time complexity {{formula:e9afe83a-8f61-4d8a-911a-a54efc2058f6}} even for non-sparse Hamiltonians. The algorithm relies heavily on access to a seemingly powerful input model. It is questionable whether such a data structure can even be implemented physically, due to the exponential amount of quantum resources required {{cite:b937175683398a62401bcda2049e0abf96babd52}}, {{cite:c967e706abca7a001a61aae6c6f179740ebce5e2}}, {{cite:12b1a969c256645e44a296098751299b27c4a2e3}}, a burden that might be further increased by potentially stringent requirements on the per-gate error rate of {{formula:b4bf2392-bb78-4966-bfc5-c4bf32e23608}} needed to retain a feasible error rate for applications {{cite:14ada8c6bc5b10c9e20a82d4bc772bdc82727a36}}; for us, however, there is an even more important question: how fast can classical algorithms be if they are given a similarly powerful data structure? We investigate this question in the next section, where we use the so-called Nyström approach to simulate a Hamiltonian on a classical computer.
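To fix ideas before that section, here is a minimal numerical sketch of the Nyström idea, applied to a generic PSD matrix rather than the actual Hamiltonian access model of the paper:

```python
# Sketch: Nystrom low-rank approximation of a PSD matrix A from a column subset.
# Generic illustration of the Nystrom idea, not the paper's full algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, r = 500, 40
B = rng.normal(size=(n, r))
A = B @ B.T                      # rank-r PSD "Hamiltonian-like" matrix

m = 60                           # number of sampled landmark columns
idx = rng.choice(n, size=m, replace=False)
C = A[:, idx]                    # sampled columns
W = A[np.ix_(idx, idx)]          # intersection block

A_hat = C @ np.linalg.pinv(W) @ C.T    # Nystrom approximation A ~ C W^+ C^T
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # ~0 when m >= rank
```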
On the other hand, CeCu{{formula:bcb2378a-6127-4dce-b37a-5db4d9ba5d4d}} Si{{formula:f625f354-b3bb-4bad-a245-7261cc48a58d}} , CeNi{{formula:e5ccc89f-8ea0-4cc4-be24-469e45f8cf99}} Ge{{formula:91ecaf06-9e33-40bf-95c1-c54cf704665e}} , YbIr{{formula:f61fbea2-adbc-4272-abb7-5a71d4dddc09}} Si{{formula:94aa0ca0-0772-4a85-a3b5-a04dd6c64eb3}} , and YbRh{{formula:6a9a8527-a102-4a3e-aab5-d54d0fa537d3}} Si{{formula:7675e92e-4ea6-4e79-839e-463aefa8020c}} are in the itinerant regime or in the localized regime close to the QCP; both {{formula:8f56f2de-d71a-4dff-a39c-d9f31e2573ee}} and {{formula:ee6f03b6-2fc4-4c65-a503-b8b562c57bd1}} increase with decreasing temperature even below {{formula:cee20df0-3615-43e0-94fb-18ffd32853d2}} . This result implies that the {{formula:15f64ef7-a036-4447-9c9f-0581a873c027}} -{{formula:15d4f606-00d9-4d16-a39f-75038930f2ce}} hybridization develops at low temperatures. It should be noted that above {{formula:8c4872fc-5169-4ab8-81bd-47bef913fee0}} , the effect of the electron-phonon interaction should also be taken into account, and this effect cannot be separated from the {{formula:1b054723-752c-416b-9b18-49a879809df5}} -{{formula:ad4d46c8-ef1a-4c13-9b5e-5a14fe5ab174}} hybridization effect. In these itinerant materials, the temperature dependences of both {{formula:b192c988-b143-4f17-b740-6d7a2790112a}} and {{formula:9ca776d1-8e33-4f6d-827f-f53b52d5e4d3}} begin at or above room temperature, which seems to be independent of the {{formula:057f95c8-d2c9-4f4b-83e6-2189cc66f6d7}} . This result suggests that the change in the electronic structure starts at a much higher temperature than {{formula:49fecd01-bc93-4c9b-b858-86ed6705c47f}} . A similar thermal effect in the {{formula:c2c4852b-1af6-426a-8830-fcebfaf0f2e1}} spectra has been reported from ARPES results of CeNi{{formula:7b97c4e3-0db9-48f0-88c9-77a45c15cc65}} Ge{{formula:c3f58c4b-a89b-4835-bb08-59c75cde039c}}  {{cite:ad3130547ac7b28ced98d77f84f43b95a2f78b37}}, YbRh{{formula:86d13e3c-bb29-4b00-9d81-d9f6534ab832}} Si{{formula:fddd9dc7-682c-4b4d-a8ba-2e6c10907ce9}}  {{cite:6c6fdb093e56fae4f31cf91d5567d7606885a8ad}}, and CeCoIn{{formula:375da5a1-1198-491a-a601-f15ddb3fba3e}}  {{cite:6fe21c3d0bae54eeb1b62fed6592339fad31e893}}.
Our lower bound is incomparable with that of Fung et al. {{cite:928230e9b89de7eedde2e6b28e1eb756013a51d7}}, who showed that for any constant {{formula:7297d1ec-8e79-4363-af28-99591b24ff7d}} , there exists a graph for which obtaining a factor {{formula:b54d3580-7d62-43fa-ab76-5f4bc1943b33}} -cut sparsifier by averaging trees requires at least {{formula:2bc0c8bb-bd16-40a8-bfd5-fd38d1375c82}} trees to succeed with constant probability. Whereas Fung et al. {{cite:928230e9b89de7eedde2e6b28e1eb756013a51d7}} used triangles in their lower-bound construction, our bad examples are based on collections of small cliques, which let us ensure cut differences even in a single tree by producing longer-tailed degree distributions. All of our lower bounds are based on simple constructions from collections of edge-disjoint cliques, and use the fact that the exact distribution of the degree of a fixed vertex in a random spanning tree of the complete graph is known. Note that a lower bound for cut approximation implies a lower bound for spectral approximation, because the contrapositive statement is true: spectral approximation implies cut approximation.
Since it is preferable to have more than one criterion for the difference between the two corpora, future work will focus in particular on downstream tasks to confirm this {{cite:5b2321901e5fc3fbe31c1a2290e0c8e5876ca43a}}, {{cite:a5bf5fd6a8dd509c68775b70b099bbdc5d26f509}}. Implementation without the pre-processing script by {{cite:5d2cd79d3b2637307154769ed4a974ad31590688}} on the original Wikipedia corpus will also be attempted.
In this paper, we show that Spiking ResNet cannot achieve identity mapping for all neuron models. Even when the identity mapping condition is met, Spiking ResNet suffers from the problems of vanishing/exploding gradients. Thus, we propose the Spike-Element-Wise (SEW) ResNet to realize residual learning in SNNs. We prove that SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems at the same time. We evaluate Spiking ResNet and SEW ResNet on both the static ImageNet dataset and the neuromorphic DVS Gesture {{cite:8eef29126886a02e3bdbe7447990ff851126f7b6}} and CIFAR10-DVS {{cite:fa741c389ec4d3ee990d5fabde327929cd326ad6}} datasets. The experimental results are consistent with our analysis, indicating that the deeper Spiking ResNet suffers from the degradation problem — the deeper network has higher training loss than the shallower network, while SEW ResNet can achieve higher performance by simply increasing the network’s depth. Moreover, we show that SEW ResNet outperforms the state-of-the-art directly trained SNNs in both accuracy and time-steps. To the best of our knowledge, this is the first exploration of directly trained deep SNNs with more than 100 layers.
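A minimal PyTorch sketch of the SEW idea follows, with ADD as the element-wise function g and a Heaviside spike with a rectangular surrogate gradient standing in for a full spiking neuron over multiple time steps; the block structure is our simplification, not the paper's exact architecture:

```python
# Sketch of a Spike-Element-Wise (SEW) residual block with g = ADD.
# Simplified: one time step; Heaviside spike with a rectangular surrogate
# gradient instead of a full LIF neuron.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()                 # fire when potential >= threshold
    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        return grad_out * ((v - 1.0).abs() < 0.5).float()  # surrogate gradient

class SEWBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, s):                          # s: input spikes
        out = SpikeFn.apply(self.bn1(self.conv1(s)))
        out = SpikeFn.apply(self.bn2(self.conv2(out)))
        return out + s                             # g = ADD on spike tensors

x = (torch.rand(2, 16, 8, 8) > 0.5).float()
print(SEWBlock(16)(x).shape)
```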
subject to the boundary and initial constraints in the last three lines of (REF ). The computed solution to (REF ) is set as the obtained minimizer. However, due to the nonlinearity {{formula:726fa127-6f23-4088-8c21-161e9553e122}} , such a functional is not convex. It might have multiple minima and ravines. Therefore, the success of optimization-based methods depends on a good initial guess. In reality, good initial guesses are not always available. Motivated by this challenge, we propose to construct a fixed-point-like iterative sequence based on the quasi-reversibility method and a Carleman estimate in {{cite:d96fbc5d9c7095fcfdb171a6acd37a08f332fdab}} and {{cite:f551f6061ab0babd54868b427ec72ee508855092}}. The convergence of this sequence to the solution of (REF ) will be proved by the contraction principle and a Carleman estimate. Our Carleman-based iterative algorithm for numerically solving (REF ) is as follows.
where {{formula:ca1378d1-feb2-4915-829f-6d35ec66f794}} and {{formula:5240b222-0740-4802-a2b4-d3d0a6cf5397}} are the retarded and advanced Green's functions, respectively, and {{formula:ab057b5f-fd5d-4a17-b300-5fbb1d3c55f5}} and {{formula:f1db7192-9883-4854-a7d3-866513477aa8}} are the coupling factors. In terms of the self-energies {{formula:1dafe1a5-ea30-43d6-b0b9-90a0a021c3f5}} and {{formula:443ef3ee-1a11-4d69-b073-113a09ac3516}} due to S and D, the Green's functions are defined as {{cite:958552e06ffc1b64b163f4b8193ade074e68e59f}}, {{cite:a5d88c2225558b108ccf7b44359220ca289b763f}} {{formula:0289e02f-cf85-4091-a925-0c2b7fa066b7}}
In our experiments, DIP-GM is the proposal based on our MILP model (Model REF ) of the graph matching problem; the name stands for Deep Integer Program-Graph Matching. DIP-GM{{formula:b029e86c-f085-40e6-853d-7cc9ded16ea0}} is our proposal based on our topology-relaxed MILP model (Model REF ), where the topological constraints are removed. Model REF can be seen as two independent linear sum assignment problems operating separately on the vertex and edge sets. The BB-GM {{cite:7cec86c2d09e4fd0144869b64151ee367e57bc23}} method is also compared in exactly the same setting, i.e., with the same feature extraction module and loss function. It is equipped with the effective heuristic-based Lagrangean decomposition {{cite:587eea8118d24224b6915f3ef741308c8dcc18e2}}. All the aforementioned methods operate in the discrete domain and call a combinatorial solver. In contrast, Sinkhorn Net {{cite:f009eb1736c22de219b2e153c76b8da5536153bb}}, a well-known algorithm in the deep graph matching field, is also considered. It reduces the graph matching problem to a relaxed linear sum assignment problem (Problem REF ) applied to two sets of vertices. This problem is solved in the continuous domain by the Sinkhorn algorithm {{cite:d41715ca91bf73b132a702015632ea58bb2e7bcf}}.
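For reference, the Sinkhorn normalization at the heart of Sinkhorn Net can be sketched in a few lines; the temperature parameter tau below is an assumed hyperparameter:

```python
# Sketch: Sinkhorn algorithm turning a score matrix into a (near) doubly-stochastic
# relaxed assignment matrix by alternating row/column normalization.
import numpy as np

def sinkhorn(scores, tau=0.1, n_iters=50):
    P = np.exp(scores / tau)                # tau: assumed temperature parameter
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)   # row normalization
        P /= P.sum(axis=0, keepdims=True)   # column normalization
    return P

rng = np.random.default_rng(0)
S = rng.normal(size=(5, 5))                 # vertex-to-vertex affinity scores
P = sinkhorn(S)
print(P.round(2), P.sum(axis=0).round(2), P.sum(axis=1).round(2))
```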
RISs are mainly planar, rectangular, tile-resembling devices whose physical characteristics, particularly their permittivity and permeability, can be engineered in real time to obtain a required macroscopic electromagnetic behavior {{cite:09d52f1cbe4d6b04b97e19f5301ac3e163c36392}}. Thus, when a wireless wave emitted by a user device (e.g., a smartphone) impinges upon an RIS, it can be programmatically reflected in any custom direction, or even be split among several ones. Based on this property, a popular demo case of RISs is the establishment of effective LOS (line-of-sight) conditions between a transmitter and a receiver who are actually in NLOS {{cite:09d52f1cbe4d6b04b97e19f5301ac3e163c36392}}. Further RIS uses include the mitigation of multi-path fading phenomena, the overcoming of localized coverage holes, and optimal energy management in IoT systems {{cite:09d52f1cbe4d6b04b97e19f5301ac3e163c36392}}, {{cite:3beb217f4e880ba13f55fca459f6237c834138fd}}. Moreover, RISs could assist in the design of low-complexity and energy-efficient massive Multiple-Input-Multiple-Output (MIMO) transmitting and receiving antennas, meeting a very promising expectation of 5G and Beyond-5G (B5G) networks {{cite:e2a97658c29e235277fb3aec155ec1b1379d6b69}}. Since RISs can act as generic facilitators of the communication links between base stations and end-users, their system use cases are abundant. Smart homes, cities, hospitals, and industries are theorized to benefit greatly from RIS technology. Moreover, the utilization of RISs in vehicle-to-everything (V2X) communications could overcome the current challenges stemming from the acute fading and Doppler shift suffered in these systems {{cite:039390a2af30a1f8d969a45fc67bc2d40ef978e7}}, {{cite:e5a5a876546b67df543455ba465c996420b507b7}}.
In recent years, transformers {{cite:cf58472f6dd8e8e6bff0fcff618f6709d7c61ec0}} have been shown to perform strongly in the field of natural language processing. By solving the problem of long-term dependencies and effectively paying attention to the right words for the context, very impressive transformer-based language generation and translation models have been created, including Google’s BERT {{cite:a08023a51ca89c1afabc7f7a3466bdb854c946dd}} and OpenAI’s GPT-3 {{cite:df3d187f5880042dc3dc4b4a303b5fae9e0a2480}}.
For nonlinear functions, several formulas have been proposed to calculate {{formula:c64e0e2c-f67d-4615-988c-3c014eaf5cc5}} (for further details, see {{cite:179b7cacc98156d25b73bf55fa8cd42cb2f9e3f3}}). In this study, we use the following formula for {{formula:41e9a730-42ec-4f96-9c00-c7eb88a6b83f}} proposed by {{cite:96ab85cc661d5cce447a8f42c4923671e6025c52}}, which exhibited the highest ASR in our preliminary experiments (appendix:OtherConjugateFormulation). {{formula:08214931-5c5c-48cc-8d81-f64195680b31}}
We have run {{formula:394967d5-ed1a-4e1b-ba04-6a9849ae7223}} -body simulations of star clusters on circular orbits about a point-mass galaxy, using the GPU-accelerated {{formula:15b34060-024f-4324-b435-2b1a74ebb10c}} -body code nbody6 {{cite:6d21fb8d5e2cf3d3a3d0ae566f3f71dfbb864a6c}}, {{cite:4e85f95d07f42ea5802093ff492fdddb8c7f4d6b}}. The equations of motion are solved in a co-rotating reference frame centred on the cluster {{cite:fec8176399188e995a7c55fde60dd58b06dc94db}}, rotating with angular speed {{formula:20c2dbc3-01fe-488c-b329-667fb624f0c1}} , equal to the angular speed of the cluster's orbit. The {{formula:43a9c683-8d2c-479a-b2ea-682c72ce17d5}} -{{formula:4c6cd996-cfd2-4452-a8e0-9b80f3e1b178}} plane of the coordinate system coincides with the cluster's orbital plane, the positive {{formula:17f3737f-e9d0-4c0e-ad60-b920b231939f}} -axis of the rotating reference frame always points away from the galactic centre, and the positive {{formula:e7a84ac2-890f-4c66-b3f8-6f53d41c2137}} -axis points in the direction of the cluster's motion. The cluster's orbital angular velocity vector is parallel to the positive {{formula:c46b0954-be32-46c7-a410-ae6453d057c0}} -axis.
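To make the frame concrete, here is a bare-bones sketch of the resulting accelerations for a single test star: galactic point-mass gravity plus the Coriolis and centrifugal terms of the co-rotating frame. Cluster self-gravity, regularization, and all other nbody6 machinery are omitted, and the G = 1 units and all numbers are assumptions:

```python
# Sketch: test-star motion in a frame co-rotating with a cluster on a circular
# orbit about a point-mass galaxy (cluster self-gravity omitted; G = 1 assumed).
import numpy as np

M_gal = 1.0e6                 # point-mass galaxy
R_orb = 100.0                 # circular orbit radius of the cluster
omega = np.sqrt(M_gal / R_orb**3)      # orbital angular speed (G = 1)
Omega = np.array([0.0, 0.0, omega])    # along the positive z-axis

def acceleration(r, v):
    # r is measured from the cluster centre; the galaxy sits at (-R_orb, 0, 0)
    r_gal = r + np.array([R_orb, 0.0, 0.0])
    a_grav = -M_gal * r_gal / np.linalg.norm(r_gal) ** 3
    a_cor = -2.0 * np.cross(Omega, v)                    # Coriolis
    a_cen = -np.cross(Omega, np.cross(Omega, r_gal))     # centrifugal
    return a_grav + a_cor + a_cen

# simple kick-drift-kick integration (not symplectic with velocity-dependent
# forces; adequate for a sketch)
r, v, dt = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.05, 0.0]), 1e-3
for _ in range(1000):
    v += 0.5 * dt * acceleration(r, v)
    r += dt * v
    v += 0.5 * dt * acceleration(r, v)
print(r)
```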
We also formulate Dark Experience Replay (DER) {{cite:4bd93da94ecad51802520759ca8d41740a98fd7d}} for UCL. DER for SCL alleviates catastrophic forgetting by matching the network logits across a sequence of tasks during the optimization trajectory. Notably, the loss for SCL-DER can be defined as follows: {{formula:a7d72bac-3218-4fcd-b557-6501b805ac66}}
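In code, this objective amounts to the current-task loss plus a logit-matching penalty on replayed samples. A hedged PyTorch sketch follows, where the replay buffer interface and the weight alpha are assumptions rather than the paper's exact code:

```python
# Sketch of a DER-style loss: task loss plus logit matching on buffered samples.
# `buffer` (storing past inputs with the logits recorded when first seen) and
# the trade-off weight `alpha` are assumed interfaces.
import torch
import torch.nn.functional as F

def der_loss(model, x, y, buffer, alpha=0.5):
    loss = F.cross_entropy(model(x), y)          # current-task loss
    if len(buffer) > 0:
        x_old, z_old = buffer.sample(batch_size=x.size(0))
        loss = loss + alpha * F.mse_loss(model(x_old), z_old)  # match stored logits
    return loss
```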
Our analysis here supports the findings of Madry et al. {{cite:7db885fcd509f8528f357a9920ff23ab4a88c49e}} and Bubeck et al. {{cite:4d35b2e1f8bdabedfa99f53f2487dbc3112a9137}}, {{cite:e5eaeee5cef7cefb8594d78b46e6e98ce188d837}}, with the caveat that the analysis of Bubeck et al. is based on the {{formula:107c30c9-f9f0-4574-aa64-bff15f6f0ce8}} norm, whereas the attacks considered here (and also by Madry et al.) are based on {{formula:794afcb8-b560-462d-ac21-3143dfab5430}} . This remains to be explored in future work.
Besides mask-based explanations, there are other causal explanation methods, e.g., LIME {{cite:88cf4ab51c594489448eb594664ed91fbbb197ed}} and Kernel SHAP {{cite:a44c29c666236ad478bb14405d78c659589f7a6d}}. They are regression methods based on super-pixels and, as such, produce coarse-grained explanations.
Specialization to the bipartite graph alignment. When specialized to the bipartite graph alignment problem, the proposed Algorithm AttrRich provides an alternative polynomial-time algorithm to the celebrated Hungarian Algorithm {{cite:f6b98e6c609ca7dd1cdf717faae63decbad05b1c}} when {{formula:22b84ff5-70e0-4bba-bb96-d4a24421c4b1}} , with slightly lower complexity when {{formula:4f3bb8c0-8a81-49fa-af8c-5b3935dd12b4}} .
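For concreteness, the bipartite alignment that both algorithms address can be solved off-the-shelf with SciPy's Hungarian-style solver; the random cost matrix below is purely illustrative:

```python
# Sketch: solving the underlying bipartite alignment with SciPy's
# Hungarian-style solver, as a baseline for comparison.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.uniform(size=(6, 6))        # pairwise alignment costs
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())
```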
Within certain modelling frameworks, classification of time series is well established. For example, within an ARIMA framework {{cite:0941e24d7a385e23d2746725e8f543acf5b23cbf}}, or within a state-space framework for exponential smoothing {{cite:671961db23d37a6e2fe98d1d84150d6c95bb2a15}}, series may be classified, for example, based on the AIC {{cite:4197ce9bace88b0da9b25fcf9c5c62457d64d9d4}}. It is more challenging to classify series according to their recommended forecasting method if some of the candidate methods, such as Croston's method (see §REF ), lack a fully satisfactory model base. In the field of intermittent demand forecasting, {{cite:e0c4e1d2faa2db5aedf9f9d2d84914689f01738c}} proposed the SBC classification scheme, enabling time series to be classified according to the length of their average demand interval and the coefficient of variation of their demand sizes (when demand occurs); a minimal sketch of this classification is given below. These rules were based on assumptions of independent and identically distributed (iid) demand, and on a comparison of the expected mean square error between methods. The scheme has been extended by {{cite:1cb9ddef60f83db44575930171e3bed57b8bbfbe}} and by {{cite:a9244d545872ba9cbaa021fc3a3a89cff5c3669d}}. In an empirical case-study, {{cite:3be528880a9717b11207bfcb84cc4796daaafa7e}} examined series not necessarily conforming to iid assumptions and found the rules to be robust to inexact specification of cut-off values. {{cite:930d5c51331139461f672802eb7a78d2430cba48}} used logistic regression to classify time series of demand for spare parts in the South Korean Navy. The classification was designed to identify superior performance (accuracy and inventory costs) of direct and hierarchical forecasting methods, based on the serial correlation of demands, the coefficient of variation of demand volume of spare parts (see also §REF ), and the functionality of the naval equipment.
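The sketch below classifies a demand series by its average demand interval (ADI) and squared coefficient of variation of demand sizes (CV²); the cut-offs 1.32 and 0.49 are the values commonly quoted in this literature and are used here as assumptions:

```python
# Sketch: SBC-style classification by average demand interval (ADI) and squared
# coefficient of variation of demand sizes (CV^2). Cut-offs 1.32 / 0.49 are the
# values commonly quoted in the intermittent-demand literature.
import numpy as np

def sbc_class(demand, adi_cut=1.32, cv2_cut=0.49):
    demand = np.asarray(demand, dtype=float)
    nz = np.flatnonzero(demand)
    adi = len(demand) / len(nz)                   # mean interval between demands
    sizes = demand[nz]
    cv2 = (sizes.std() / sizes.mean()) ** 2       # variability of demand sizes
    if adi <= adi_cut:
        return "smooth" if cv2 <= cv2_cut else "erratic"
    return "intermittent" if cv2 <= cv2_cut else "lumpy"

print(sbc_class([0, 0, 3, 0, 0, 0, 2, 0, 0, 4]))  # -> intermittent
```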
Memory comparison and speed-memory trade-off: In table:recontablesynthetic and table:recontablellff, we also compare the memory requirement of our method along with our baselines. For SNeRG {{cite:2d2816478a9882fa6d07b660eac46573fa5c4115}}, NeRF-SH (PlenOctree) {{cite:2f309db01cfaabc65c05fcc74f100540ce348721}}, FastNeRF {{cite:0fd7ed5b6dfdc432399eb33d587a622c15c8cbed}} cache-based inference, and our method, memory refers to the network cache size. For the other methods, memory refers to the memory occupied by the weights of the trained neural network. From the two tables, we can see that non-caching-based methods have considerably less memory overhead but slower inference speed compared with caching-based methods, so there is a clear trade-off between the memory efficiency of a model and its inference speed. From table:recontablesynthetic, table:recontablellff and fig:memspeedgraph, we can see that our model can generate images at a high speed competitive with caching-based methods, while keeping the memory requirement sufficiently low, closer to that of the non-caching-based methods.
In {{cite:9747f1c4b067715c24fec43319be72e274613b32}}, Isbell proves that a collapsible 2-dimensional cellular complex {{formula:6a682bef-6640-44d8-a298-73e11b3a4413}} admits an injective metric by explicitly constructing a hyper-convex metric on {{formula:1d9a2fae-8e6d-475a-91a3-4d80365a856a}} as follows: taking a triangulation of {{formula:a4bc2444-049d-490f-aaba-daf0ee30103a}} , he subdivides its triangles into squares so as to form what he calls a collapsible cubical 2-complex, {{formula:4f95e662-47be-48cc-ad98-032c782267e7}} . He then metrizes {{formula:b6b77fd3-df4c-41ef-8432-f36873d34a22}} as a geometric realization of {{formula:5eb96589-975e-4e50-b23e-e212ff280b29}} , having first realized each 2-cube as a copy of the unit cube in {{formula:36baca9a-2ec2-4bb9-b94a-201c7201ab18}} and endowing the resulting 2-dimensional piecewise-{{formula:4ed52db1-8667-4ac2-98b2-06a022a38889}} polyhedron with the associated quotient metric. Isbell's verification of the injectivity criterion then proceeds in two steps:
Hybrid methods combine ideas from the above-mentioned methods, such as pseudo-labeling, consistency regularization, and entropy minimization, for performance improvement. Moreover, a learning principle, namely Mixup {{cite:9147720ce68d799c7ad344684579410ccb4607c9}}, is introduced in these hybrid methods. It can be considered a simple, data-agnostic data augmentation approach: a convex combination of paired samples and their respective labels. Formally, Mixup constructs virtual training examples {{formula:a0604aa9-260b-4a7c-a6ec-f1c8891c1900}}
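In code, the construction reads as follows (a generic sketch; drawing the mixing coefficient from Beta(α, α) follows the original Mixup paper):

```python
# Sketch: Mixup builds virtual training examples as convex combinations of
# paired samples and their one-hot labels, with lambda ~ Beta(alpha, alpha).
import numpy as np

def mixup(x, y_onehot, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))            # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```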
i.e., each node can only affect its own iterate and the iterates of its neighbors. When nodes only have information of themselves and their neighbors, this is a natural way to preserve feasibility of the coupling constraint. The extended method in Section REF for handling general {{formula:09577b8e-6f33-42bd-a9dd-f56d75a72d5b}} 's and the feasible methods in {{cite:991c60d1190ed687f3ea208d2997a4d50da4c5d4}}, {{cite:257ce3bce15e1a62640b9b9f9b2dae33ae02804d}}, {{cite:7d4eaf9edef0b94737709c12793077f011a6ff1d}}, {{cite:9c8e486dc042690d32de47f7388d6852cdae0849}} all belong to this scheme.
Model-in-the-loop's performance. Interestingly, the MoViE+MCAN model that was not used in the loop and was trained with a different seed performs very similarly to the other models. This suggests that, to some extent, annotators overfit to the model instance. An alternative explanation is that model selection for all evaluated models was done on the AdVQA validation set, which was (obviously) not possible for the model in the loop used to construct the dataset. In Adversarial NLI {{cite:3dd10d9fd1fe90720a812be983713c0002ab7167}}, the entire model class of the in-the-loop model was affected. Note, however, that all VQA models perform poorly on AdVQA, suggesting that the examples are by and large representative of shortcomings of VQA techniques overall, and not of an individual model instance or class.
Our models are built using PyTorch {{cite:420bc06c918e754dba42ade70eb5d4c4bae1dcec}} and Open-NMT {{cite:dfe42139a90cbe8ab6345b3998969f8d0e2207ee}}. We configure Transformers with a word embedding size of 512, gloss-level tokenization, sinusoidal positional encoding, 2,048 hidden units, and 8 heads. For optimization, we use Adam {{cite:3e9cc2e1ea163fbf621776e438b3c6d23a6938a3}} with {{formula:27549fc3-8a0f-4f77-80ca-c14e017b5c9d}} , a Noam learning rate schedule, 0.1 dropout, and 0.1 label smoothing.
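For reference, a minimal PyTorch sketch of the core module with these hyperparameters, plus the Noam schedule as a learning-rate lambda; the warmup step count and Adam betas are assumptions, since the text elides them:

```python
# Sketch: the Transformer configuration from the text in plain PyTorch, with a
# Noam learning-rate schedule. warmup=4000 and the Adam betas are assumptions.
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8, dim_feedforward=2048, dropout=0.1)
loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)
opt = torch.optim.Adam(model.parameters(), lr=1.0)   # beta values elided in the text

d_model, warmup = 512, 4000
noam = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda step: d_model ** -0.5 * min(max(step, 1) ** -0.5,
                                            max(step, 1) * warmup ** -1.5))
```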
First observe that by Lemma REF , {{formula:2a8e9571-8bcd-4ce5-8894-0f6e4f6171aa}} is {{formula:8e89f98f-34cb-4659-a563-08baa61e089f}} -strongly convex for all {{formula:3cd455b6-dcef-4aa9-83f3-e318e82ba2dc}} . Next, notice that by Lemma REF , there exist {{formula:fe00f57a-18c3-499f-b72c-1d155fb12ab7}} such that {{formula:257541c0-1d90-449e-806c-85eded5417ef}} has a {{formula:c9d07e5b-3584-4896-8e58-5c033425fa59}} -Lipschitz gradient for all {{formula:2ff1538a-34fb-4f1d-8418-6c7e4d92cf30}} . Then, the result follows directly from {{cite:dcfc52b58d550c32441423eb5296b6b3639ed345}}. Note that under additional assumptions on the {{formula:52da57dc-bad5-4cf3-87b9-4b45b9ec6bdf}} -Lipschitzness of {{formula:f87fa3fd-76c6-48f8-b3fe-1cfc39cf910d}} , we can plug in the explicit smoothness constants established by {{cite:33bf0e9a2326e64c1ea373bda32af8f5c71ac2fd}} to obtain explicit constants in the convergence rate, i.e., {{formula:66c014b7-12e7-49e0-998c-0747631dc21b}} and {{formula:911c3200-2105-4c5e-92c5-85d6471da322}} . Theorem REF indicates that solving TERM to a local optimum using gradient-based methods tends to be as efficient as traditional ERM for small-to-moderate values of {{formula:7e5c7cf3-4646-40d5-b94e-676ad6d482c0}}  {{cite:0eb4020a09d43554257e0a3d96997f2e0dbaab94}}, which we corroborate via experiments on multiple real-world datasets in Section . This is in contrast to solving for the min-max solution, which would be similar to solving TERM as {{formula:5e973efe-f890-463e-93cb-962f9e1e1fa6}}  {{cite:a88295b5049db15b2148827c9eff8d5cecf02566}}, {{cite:475e1983a668b226da7caacf4b87583e04ba94ac}}, {{cite:fee0d28ab04b1dce620e47661f87517ec23bc5d0}}.
In {{cite:01b62cda59c6c9fdac36bfebcf03516e9398597c}}, a special type of set-theoretical solutions of the quantum Yang–Baxter equation was considered, the so-called non-degenerate rational maps. Nowadays, this type of solution is referred to as quadrirational Yang–Baxter maps. Note that the notion of quadrirational maps was extended in {{cite:10010463d0b1ea44634e42e842925471f48fa390}} to the notion of {{formula:920b1cea-8135-4467-8435-96d3f018c0f9}} -rational maps, where highly symmetric higher-dimensional maps were considered. Under the assumption of quadrirationality and modulo conjugation (see Definition REF ), a list of ten families of maps was obtained in {{cite:f188289f41c4b94762bcabb2f457cb07146bae86}}, {{cite:8f7001352ff7d507f78cfb2da7ef6632e91b3296}}. Five of them were given in {{cite:f188289f41c4b94762bcabb2f457cb07146bae86}}, constituting the so-called {{formula:d8419578-b372-4bfc-b06f-001e2f0a409a}} -list of quadrirational Yang–Baxter maps, and five more in {{cite:8f7001352ff7d507f78cfb2da7ef6632e91b3296}}, constituting the so-called {{formula:8ac355d5-102c-46e7-8323-d7bc56a723c3}} -list of quadrirational Yang–Baxter maps. For their explicit form see Appendix . The Yang–Baxter maps of the {{formula:e4f4fdd5-b0b4-4fcb-88fb-4f2efb048180}} -list and the {{formula:cd82fcae-9499-4984-869c-2d053afa0bf9}} -list can also be obtained from some of the integrable lattice equations in the classification scheme of {{cite:1da4f0aa8c92f41370986a7d8024cdd9d868f2b2}}, by using the invariants of the generators of the Lie point symmetry group of the latter {{cite:91aa1b2fcab44a370e02e5e65e091287b185aa86}}. In the series of papers {{cite:72765659e2f24033003c6c17d3d3151415c28167}}, {{cite:ce27434fd4cd11e764d5505751d5682afcfeb631}}, {{cite:91df6e33c2c712fc3320390f4e40ab6677f0f92d}}, integrable lattice equations and correspondences (relations) were systematically constructed from the Yang–Baxter maps of the {{formula:02796c3d-5b9c-46f5-aff8-bcd44522c05a}} -list and of the {{formula:81cefc16-b97a-47b1-9211-878952600bd8}} -list. Invariant functions of the maps in which the variables appear in separated form played an important role in this construction. The cornerstone of this manuscript is invariant functions in which the variables appear in separated form.
A possible extension of this research is to investigate analogues of these problems for multivariable functions. Previous research on learning multivariable functions {{cite:b1031059f371b73d9f33f6ce6469634936a9161e}}, {{cite:40677691a979ada74579f9b4d9eb264eecc1804f}}, {{cite:7cdce6b3dca7104b89d3d7ff37f5863f24fa18dc}} has focused on expected loss rather than worst-case loss, using models where the inputs {{formula:3b686a4f-e369-48ba-b876-1a804300ccd1}} are determined by a probability distribution.
The majority of the experiments use the synthetic oracle for labeling. We also run experiments with actual human annotators in the demos + preferences experimental setup, with the full schedule and with the schedule reduced by a factor of 2. In our experiments the humans were contractors with no experience in RL who were instructed as in {{cite:2e9381d5bd594bf3fb3b0eed2625cb6a98c5bcf8}} to only judge the outcome visible in the segments. We label these experiments as human. {{figure:af335eca-4c2c-4052-9ea6-394af4fee648}}
An interesting direction to explore would be cost-sensitive learning in Class Conditional Noise (CCN) models without using/tuning the noise rates {{formula:954ad409-8993-400b-8316-1748dca7effd}} and {{formula:95f197dc-8ee2-4af5-a0b2-a3fdb710f52d}} . We could invoke Elkan's result {{cite:0b5a5ddc1c672a2338ec2bc7630ba83ab0d1cb9a}} in the re-sampling scheme due to the uniform nature of the noise. However, the estimation of {{formula:77064241-3b5d-4280-91ac-130b0ac1360f}} induces an iota of approximation bias in the scheme. A theoretical proof of correctness of the re-sampling scheme could provide directions for dealing with CCN models, including perhaps a bound on the error in SLN models; see Remark . Even though the constant term appears in the upper bound of Theorem REF , it would be interesting to understand whether it is due to the effect of noise on the systemic component of the risk or to something else. Also, using synthetic data, one can comment on the tightness of the bound. A more fundamental question would be: “Is the lack of a guarantee on consistency a price that one always has to pay to get away without requiring knowledge of the noise rate?”
Figure REF visualizes the datasets in the Background Challenge {{cite:70c4128cb86d8cc17293bd3ec008f35e590a5f11}} benchmark, which provides various combinations of foregrounds and backgrounds: Original ({{formula:613d5709-fbf4-4cf9-885d-03b8ab752e37}} ), Only-BG-B ({{formula:d700286e-72b5-47f9-9a46-1b5a33ee255e}} ), Only-BG-T ({{formula:174e89e4-0142-4542-aa59-ceae26841ff4}} ), No-FG ({{formula:ef2f36c9-116b-4544-895b-10b50d0df5d4}} ), Only-FG ({{formula:1950fe02-c9c0-43e9-90ba-1a6962d43089}} ), Mixed-Same ({{formula:6c14caf3-6afe-4251-b7ba-ae89466cf295}} ), Mixed-Rand ({{formula:abd55e08-3588-4cc0-ae51-4aed6b51a56c}} ), and Mixed-Next ({{formula:3c350858-45b6-428a-bc14-30088d74fb2a}} ). The upward and downward arrows indicate whether the model should or should not predict the class well, respectively. We omit the Only-BG-T, No-FG, Only-FG, and Mixed-Next results for brevity of presentation. Nevertheless, we provide some observations and discussions on them:
However, we notice that both CLM and MLM still have some deficiencies in calculating sentence scores. For instance, some works {{cite:5a362bfd52bacf39e0d1e4abc5558a5ced04f0e9}} utilized a GPT-style model for ASR reranking within a single inference pass, yet a GPT model can only extract unidirectional information due to the limitation of CLM, without considering the semantics of the whole sentence, which affects the sentence score. To utilize bidirectional context, some works {{cite:b9387ef33cc7377ec35a2e7cb33624b8ea61d873}}, {{cite:962a5037ce74caba9e83941759c600eb72b0259e}}, {{cite:499e08e8a119d7a3ac58e3734086e3c4a3079a8f}} applied a BERT model for rescoring. However, MLM by nature requires masking tokens in the sentence for prediction, which means the BERT model must run forward multiple times, with each forward pass masking only one token for prediction. As a result, it is time-consuming to directly adopt MLM for scoring sentences. Overall, we find that for calculating sentence scores, CLM needs only one inference pass but uses only unidirectional information, while MLM can use bidirectional context but is costly in computing the sentence score. Therefore, we raise a natural question: is it possible to design a language model that uses bidirectional context for sentence scoring with only one inference pass?
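The cost asymmetry is easy to see in code. Below is a sketch of BERT pseudo-log-likelihood scoring, in which a sentence of N tokens requires N forward passes; this is generic HuggingFace usage, not the model proposed in this work:

```python
# Sketch: BERT pseudo-log-likelihood sentence scoring. Each token is masked in
# turn, so an N-token sentence costs N forward passes (the cost discussed above).
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pll_score(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    score = 0.0
    for i in range(1, ids.size(1) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[0, i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked).logits
        score += logits[0, i].log_softmax(-1)[ids[0, i]].item()
    return score

print(pll_score("the cat sat on the mat"))
```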
Architecture scope: We sample our architectures from the large AnyNetX network space. It contains the CNN building blocks to span basic designs such as AlexNet or VGG, as well as the whole ResNet, ResNeXt and RegNet families. We acknowledge that there are popular CNN components not covered; however, Radosavovic et al. {{cite:ce9243b029b1f8077a73d7db8690b0c91f4629f9}} present ablation studies showing that network designs sourced from high-performing regions of the AnyNetX space also perform highly when swapping in different, originally missing components such as depthwise convolutions {{cite:6654a0a40e6c8fb6fd4229f0b4d2c7cb3260e791}}, swish activation functions {{cite:0833b5a23f40366782c28163f10ca009d312cfb9}}, or the squeeze-and-excitation {{cite:6ec198792aac294b00c23549d855b5b607a5797d}} operations.
We strongly believe, though, that robotics competitions like AIRR reveal highly relevant research areas for AI. In this case: How can AI best be designed, so that robots need minimal time and data to reach robust and highly agile flight? A monolithic neural network trained end-to-end purely in simulation likely requires too many training samples to form the best answer to this question. And, if we equate the experience accumulated in a simulator with the evolutionary experience before the birth of an animal, this is not the strategy that we observe in animals either. Animals – even from the same species – are all different physically, and their intelligence is set up in such a way as to deal effectively with these differences. Whereas humans need a long development time before becoming operational, many flying insects can almost immediately fly and perform successful behaviors. The reason for this is that evolution has put in place various mechanisms to deal with, e.g., the physical differences between members of the same species, ranging from adaptation to various learning mechanisms. This means that true AI will require not only reinforcement learning {{cite:ed50174741cb805be52a6b077df9566ddd28e043}}, but also various types of self-supervised learning {{cite:5146b55fea563f2ae4ee39195f70b14eaea3bf92}}, unsupervised learning {{cite:d0f9cb2bc0c16bfd616dadb61a95479bf8a19541}}, and lower-level adaptations as used for instance in adaptive control {{cite:afb85e4ecb9a2e4d6120f0f5655ba072e765ed51}}, {{cite:81f77cbbeaee40d564a21f8b6da26441edb0b943}}. This last type of adaptation, arguably at the lowest level, is hugely important for crossing the reality gap in robotics {{cite:85f4de10946135b28a8f8ff908535c51177ab676}}. To make our approach work on time and robustly enough for the competition, the employed AI still relied quite a lot on us as human system designers. We learned the drone’s model based on flight data, used supervised learning with human labeling of 2336 images for training GateNet, and designed an active vision algorithm for finding corners in the segmented images. For state estimation and control, we predominantly used human-engineered solutions that most would classify as part of the field of control system theory, and these were adjusted by experts when, for instance, the robot moved to a location with very different air density.
d
e39a96e2857c8a23bf7381296e2c4bab
Let {{formula:635014f8-966f-4a60-bf24-30d529abbd43}} and {{formula:b7168b6b-ab4a-44f8-ac96-31c5da2e0f3d}} . Then the following hold (cf. {{cite:c2c560ff366c620c53825e1df14ccd14c99e8209}}, {{cite:30a4fe7d9a553713a32e8407a298314296af3b5b}}).
i
2b9be2312173476c84f35abf630bdf35
The equivalence of conditions (1) and (2) as well as the equivalence of conditions (4) and (5) can be found in {{cite:f1c4c02ecdfc706f1df4fbea81496000e80fd7d3}}. That (2) implies (3) is immediate from Proposition REF .
r
e840d2682d3025ec667e0832f9829f88
As a backbone of our explainability tool, we require a method to represent the special activation properties induced by a single sample in FR models. The AM visualization scheme used in this work is Score-CAM, proposed by Wang et al. in {{cite:fa3f142309f5dad48e6a7c8f4153909e54306ac5}}. This method is designed to efficiently produce visual explanations for CNNs: it re-weights the final activations, emphasizing the most relevant regions within each feature map according to the network's decision. Such activation-based CAM methods overcome the inherent limitations of gradient-based CAMs {{cite:a904374b9edf61e38e28b3452cbb159905b4c58b}} and provide a more effective and faster way to compute the saliency map {{cite:b28760100101a4d2f13fb0c3e404a145770eafef}}.
m
c725dcffa856b9017aea2cd40ec33a5a
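As a rough illustration of the re-weighting step just described, the following is a minimal Score-CAM-style sketch, assuming PyTorch; `model` and the pre-extracted (detached) last-layer activations `acts` are placeholders that would come from a forward hook in practice.

    import torch
    import torch.nn.functional as F

    def score_cam(model, acts, x, cls):
        # acts: (K, h, w) activations of the last conv layer for input x (1, C, H, W).
        K = acts.shape[0]
        H, W = x.shape[-2:]
        maps = F.interpolate(acts.unsqueeze(0), size=(H, W), mode="bilinear",
                             align_corners=False)[0]               # (K, H, W)
        weights = torch.zeros(K)
        for k in range(K):
            m = maps[k]
            m = (m - m.min()) / (m.max() - m.min() + 1e-8)         # normalise to [0, 1]
            with torch.no_grad():
                logits = model(x * m)                              # masked forward pass
            weights[k] = torch.softmax(logits, dim=-1)[0, cls]     # confidence as weight
        cam = torch.relu((weights[:, None, None] * maps).sum(0))   # weighted combination
        return cam / (cam.max() + 1e-8)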
Our method of testing identification addresses in some sense a statistical problem `in between' classical treatment evaluation, where both the treatment of interest and the identifying assumptions are predetermined, and so-called causal structure learning or causal discovery, see e.g. {{cite:a89a4ce1e3f0215d689ff449fae8b2bb20785939}}, {{cite:d9a4f0ef087fcadfb112c30179c3fdf7be5de5ae}}, {{cite:85fd3a4cd153f7558af6f1021723968c7b9e4ce0}}, or {{cite:ac13996004543c0567c21b57f20dde3f805d56cb}}. Causal discovery typically does not predefine the treatment and outcome variables, but aims at learning the causal relations between two or more variables in a data-driven way, possibly under parametric assumptions or the assumption that all relevant variables in the causal system (apart from random error terms) are observed. Here, we do not rely on such assumptions, but instead impose more causal structure, which permits us to distinguish the treatment, outcome, covariates, and the supposed instrument. Such a causal structure appears realistic in many empirical contexts with information about the timing of measurement of the various variables. For instance, it is obvious that a treatment taking place in an earlier period can affect an outcome measured in a later period, but not vice versa. In contrast to classical treatment evaluation, we do not, however, pre-impose specific identifying assumptions, but test them in the data.
i
ccc4e20b15e823eb903f4592016dcecc
The Fast Gradient Sign Method (FGSM) is a popular one-step method to generate adversarial examples in the supervised setting {{cite:3307e3c627520d0a1190df3e833759ff275a1173}}. Its untargeted mode perturbs the input {{formula:e3d3971b-9f86-48ff-b983-c4c8837eea8a}} by taking a step of size {{formula:f84291ba-2f89-4bdf-9846-6a3c06a278d0}} in the direction of maximizing the classification loss {{formula:f8eca55e-c8b5-4715-b357-db2be8e336ff}} relative to the true label {{formula:ba454fec-ff3d-42c0-a9b4-fe1cf0e83e83}} . In targeted mode, it minimizes the loss of classifying {{formula:47a0a4d8-e465-49c6-92d1-0c997b4a3142}} as a target class {{formula:4eae7544-7197-4ea3-92df-4a351a79133e}} : {{formula:e539c5d3-c649-4bc3-a5fb-41d5a56e842c}}
m
2948abbe8818277f73cb7c73654c1488
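The two modes just described translate almost directly into code; here is a minimal PyTorch sketch (the clamping range assumes inputs normalized to [0, 1]):

    import torch

    def fgsm(model, loss_fn, x, y, eps, target=None):
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y if target is None else target)
        loss.backward()
        step = eps * x.grad.sign()
        # Untargeted: ascend the loss w.r.t. the true label y.
        # Targeted: descend the loss w.r.t. the chosen target class.
        x_adv = x + step if target is None else x - step
        return x_adv.detach().clamp(0.0, 1.0)   # keep the input in a valid range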
Table REF summarises the quantitative results of this experiment. The DiSCVAE yields higher accuracy rates than competitive sequence disentanglement techniques, like the DDPAE and DSeqVAE, despite being completely unsupervised, i.e. there is no need to fit a GMM post-training. These improvements additionally transfer to the semi-supervised case, with the mAP of the k-nearest neighbour classifier trained on the DiSCVAE representations surpassing that of its competitors. In contrast, the MSE of last-frame prediction largely leans in favour of the VRNN, which is consistent with how entangled representations often yield better predictions at the expense of interpretability {{cite:cdb926631d31beff48a834284bdfd0d9acdbe2c6}}, {{cite:57e51a1ae758c74e22a6e4ff09e717cbc489584d}}. Note that the DDPAE MSE is not reported, as it was a factor of 10 higher than the rest, which we suspect is due to its nature of decomposing frames into components and modelling intra-component pose variations, rather than holistic object dynamics.
r
30a98ae4efbc80fbd3f51526eb00eab3
In the last decade, many works have managed to further establish sharp convergence rates, which mainly requires investigating quantitatively the sublinearity of the corrector {{formula:e5177a05-005d-43ac-9869-6793feaa7728}} ; see {{cite:1f3d651981e4ceadf222fe807a6218279b71f081}}, {{cite:49c11495d8745d6199cbe5c68f44fac66be9eaee}}, {{cite:185335c78c0b3ecbfb7475c06476585423b556b5}}, {{cite:add42595e3c9c62fedf0adf61c2da75e99b777fc}}. In dimension {{formula:14b9ac12-62ad-454a-8748-b2ce373e8bf2}} , the corrector can itself be chosen as a stationary field with bounded moments: it is then uniquely defined up to an additive deterministic constant, which is fixed for instance by choosing {{formula:7dd997f5-2c64-4a99-9c1d-27efed1eaa98}} . In dimension {{formula:474ef825-1b57-40e1-9d7a-8214d9f095d7}} , the corrector cannot be chosen stationary, as it has some nontrivial (sublinear) growth at infinity. Based on optimal corrector estimates, one deduces that for all {{formula:6c9b2eb2-71fa-4bb7-8214-9595a648a787}} , {{formula:0760b049-2e2d-48df-a5f1-3533bc7fa2e2}}
i
1a4cdde87d800ce8dcf0386585b65d68
Entropy inequalities have been a core part of information theory since its inception; their development driven largely by the role they serve in impossibility results for coding theorems. Many basic inequalities enjoyed by entropy, such as subadditivity, boil down to convexity of the logarithm, and hold in great generality. Others are decidedly more analytic in nature, and may be regarded as capturing some deeper geometric property of the specific spaces on which they hold. In the context of Euclidean spaces, a notable example of the latter is the Shannon–Stam entropy power inequality (EPI), stated in Shannon's original 1948 treatise {{cite:7ab9a33d586d0c9487f99d4d7ba90e69b95ee1c8}} and later proved by Stam {{cite:9981114b828f36f566aa8dafff89dc74f44a388c}}. Another example is the Zamir–Feder inequality {{cite:e8796db6597bf2ac345556789ef8f8350cb02baa}}, which can be stated as follows: Let {{formula:62b18d32-a520-4416-8bae-6eb9605846b1}} be a random vector in {{formula:d17ed371-b524-42a1-81a1-5e2e50aa012d}} with independent coordinates {{formula:3f1ad096-ec2e-4d60-9ac6-b4b8b1710fa5}} . If {{formula:9f3c5bf7-c32d-4761-8062-0daaf4ec8224}} is a Gaussian vector with independent coordinates {{formula:b7fba222-54c6-4e0e-b571-7fddf2cb33c3}} and entropies satisfying {{formula:31bbc682-dc0c-4154-9b33-b5e43805611a}} , {{formula:b74177f5-fb9b-423d-8546-971b8f4a40da}} , then for any linear map {{formula:5e72a380-1cd5-48b3-85dd-c342a5fe1df6}} , we have {{formula:5921197d-c6be-4d82-86e2-db22b8b3a5e1}}
i
9da0f381a968760bba827bb99d144c0e
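Since the display formula itself is elided in this excerpt, it may help to spell the conclusion out; a plausible rendering of the Zamir–Feder inequality being described, writing h for differential entropy, is

\[
h(AX) \;\ge\; h(AX^{*}),
\]

that is, among vectors with independent coordinates of prescribed entropies, the Gaussian one minimizes the entropy of any linear image.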
Over the last few decades, Stein's method has become one of the essential tools for proving, and obtaining rates of convergence in, Central Limit Theorems for sums of dependent random variables. It was first introduced by Charles Stein in 1972 {{cite:e0a727babda3e3ca02ce9160af9ed717876b333b}}, who combined Gaussian integration by parts, or the “Stein characterizing equation for the standard normal distribution”, with a certain “noise robustness” property, now called the exchangeable pair approach. The method can now be applied using a variety of approaches, namely exchangeable pairs, dependency graphs or local dependencies {{cite:3da371c7af39e24793b2f94e113a6b947bca3e25}}, {{cite:caa394c1c5357067cc90ce48361604ef248247e6}}, size-bias {{cite:16d8825d546b1fb63c71cdb5d58e610da4dd1008}} and zero-bias couplings {{cite:e95129f3bb6c5dec08217099e1eb545af188494a}}, Stein couplings {{cite:85ab30b88498fbe89650ea57b69ebfe29e07fc00}}, and Malliavin calculus {{cite:0c2ad660f06bf024ee3ecd568fd5fefd330a20c6}}, among others. The main underlying idea of Stein's method for the CLT is sketched below.
m
32f6d79c740e0db76ba2f878257d0afd
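For concreteness, the idea the excerpt leads into can be stated in standard notation: Stein's characterization of the standard normal law is

\[
W \sim \mathcal{N}(0,1)
\quad\Longleftrightarrow\quad
\mathbb{E}\big[f'(W)\big] = \mathbb{E}\big[W f(W)\big]
\quad \text{for all absolutely continuous } f \text{ with } \mathbb{E}\,|f'(W)| < \infty,
\]

and, for a test function \(h\), one solves the Stein equation

\[
f_h'(x) - x f_h(x) = h(x) - \mathbb{E}\,h(Z), \qquad Z \sim \mathcal{N}(0,1),
\]

so that bounding \(\mathbb{E}\big[f_h'(W) - W f_h(W)\big]\) over a suitable class of \(h\) controls the distance between the law of \(W\) and the standard normal.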
Compared to the GALEX-based measurements in the same redshift range of {{cite:6e631beb328acfb7846c13fa98b38194280f160d}}, our sample is smaller, covering a smaller sky area and with a shallower limiting magnitude. On the other hand, while {{cite:6e631beb328acfb7846c13fa98b38194280f160d}} suffers from systematics related to source confusion, this is a much smaller issue for our survey: the superior point spread function of XMM-OM leads to minimal source confusion ({{formula:639b9e07-a0e7-4b04-b427-eafe2a73c912}}  per cent).
d
22eb854648bc48a0bd914bdabd29470e
We also compare with focal loss on the Multi-Color MNIST dataset. We use {{formula:e6b0f269-ee67-4fc8-b3e6-37e48a1e88a0}} and {{formula:56858376-2bd4-43e0-bfef-08e3cf29e2ec}} as they perform the best in {{cite:42ed0124818e657a112fc386b08c1592b6dc98aa}}. The results, shown in the supplementary table (tab.supp.comparefocal), indicate that focal loss performs even worse than the vanilla model. This shows that hard-negative methods are not well suited for debiasing. Since DebiAN differs from hard-negative methods by using estimated bias group assignments to mitigate biases, our method achieves much better debiasing results.
m
2c718b903a053a88f1e22f45dafbc34c
This work is an important step in understanding performative prediction in dynamic environments. Moving forward, there are a number of interesting future directions. In this work, we consider one class of well-motivated dynamics. Another practically motivated class of dynamics is periodic dynamics; indeed, in many applications there is an external context which evolves periodically, such as seasonality or other temporal effects. Devising algorithms for such cases is an interesting direction of future work. As compared to classical reinforcement learning problems, in this work, we exploit the structure of the dynamics along with convexity to devise convergent algorithms. However, we only considered general conditions on the class of distributions {{formula:217d919d-78d2-4127-afa9-d2976b73f7d7}} ; it may be possible to exploit additional structure on {{formula:f7a20d90-3094-4b07-8aec-6279566b453a}} in improving the sample complexity of the proposed algorithms or devising more appropriate algorithms that leverage this structure.

Technical Lemmas and Notation

Notation. Throughout we will use the following derivative and partial derivative notation. For a given function {{formula:b643ef78-5be3-4c20-bee8-d0a8d7372e16}} , the partial derivative of {{formula:9ee6f668-9aa3-4385-94e9-ef787c0fa7b1}} with respect to {{formula:09cd3b22-265e-41bb-9793-f5d2a0a74ede}} is denoted {{formula:e49f2500-77c4-477b-a70d-444506468aba}} and the partial derivative with respect to {{formula:25addbbf-a717-4556-8cf8-b47225aa159f}} is denoted {{formula:2f31b2f1-bfe8-464f-abd1-ed0ebf55b4fb}} . For the expected risk {{formula:b32996cb-191d-4913-b18d-cfeb599be90b}} , the total derivative with respect to {{formula:ed5368ee-7983-42f9-b76e-e2ce1354ba0b}} is denoted {{formula:c505c7ca-fb2c-4c29-a80e-5a6999731d8e}} where {{formula:7f9676d6-b1cf-4f65-aa02-bf02e54087b3}} is the density function for {{formula:688cb01d-f244-4e20-85a6-a1fd91b76823}} and in the last equality we have applied the so-called `log trick', which comes from the chain rule. Throughout, we use the notation {{formula:d34a8734-d87d-4de8-bd72-19e778f40c50}} for the Euclidean norm.

Technical Lemmas. The following lemma is a direct consequence of the dual form of the Wasserstein-1 distance.

Lemma 2 Let {{formula:4fb50114-bba1-4ac3-a2f4-de69987f2d54}} be {{formula:afb6eccf-0704-470d-bfb4-d0af116fcaac}} -Lipschitz, and let {{formula:023c7a0e-2db4-43e2-807a-dd72f928e6a2}} be random vectors with distributions {{formula:fdddf19b-e6da-4e87-9b2a-ffdc44e3433c}} and {{formula:a364a3f5-78f9-4f87-bc0f-bc0f47c22cac}} , respectively. Then, {{formula:3921ce91-df69-4306-b0e2-a713366f17a0}}

Proof. Observe that we may write {{formula:4466d885-f95c-4a2d-8da0-185610e92bc9}} Therefore, we deduce {{formula:36286400-5613-4303-9556-17dfda0ce342}} Analogously, observe that {{formula:eaeab0f2-cd2d-4887-b52d-1566ed2c95e1}} where {{formula:dee8ac83-edb2-4d51-a051-2700b011415b}} is the Hessian of {{formula:1579e55e-37fb-48ee-b1a7-17d173d7929f}} with respect to {{formula:9226bdac-f7a7-400a-8923-f001809be3ef}} . Therefore, we deduce {{formula:d8ba4b03-a02c-4d56-ad86-645afdbaa2f3}} The proof is complete.

Proofs for Zero Order Oracle Setting

Technical Lemmas

Recall that {{formula:adbece71-34b2-45c2-83c5-3cbf3ba07efa}}

Lemma (restated). Suppose that Assumptions REF , REF , and REF hold. Choose {{formula:522bb7ab-0f36-4cb9-b5f3-34a392fbfe52}} for some constant {{formula:75e897c3-f840-4958-ba0e-dbe9bab6e2ed}} .
Then the map {{formula:0f15c989-015d-44f8-8e05-a450ac222e88}} is strongly convex over {{formula:dc349b1c-6dfa-4715-9adf-8e7a111d62d4}} with parameter {{formula:a8b55dcb-becc-4500-9d15-b0efc15b4084}} .

Proof. We first estimate the Lipschitz constant of the difference map {{formula:08bf94dd-5424-4afa-a4e8-f2afdec32f49}} To this end, we compute {{formula:2a3366db-dcf8-4ff8-bc5c-a4325a032194}} Taking into account that the map {{formula:1f03c0ca-a495-4666-8af5-3914c65c4941}} is {{formula:4b55dee0-d833-4bab-83c2-f61c89a2abec}} -Lipschitz continuous, we deduce {{formula:8520e654-599e-4416-9113-d22e5bc05477}} Thus the map {{formula:9d6858f5-e8db-4943-9be3-357933b5aabf}} is Lipschitz continuous with parameter {{formula:6c52d176-abd9-4d71-b53f-7aa5e3bdd24e}} . We therefore compute {{formula:e4a78ec0-f694-42ef-a0bd-b998cc0d39cd}} which completes the proof.

Proof of Lemma REF . Observe that using Jensen's inequality along with Lemma REF , we deduce {{formula:a9c9a097-1657-4d30-a33e-78bfdf452aec}} Hence, we need an upper bound on {{formula:d6020fb5-2215-40c7-bd84-230be5488513}} which is the Wasserstein-1 distance between the distribution at time {{formula:d139dadc-521e-4151-be3c-cda2f16793dc}} and the fixed point distribution for the query point {{formula:813cfe40-0e49-4958-8b14-de7874620961}} .

Upper bound on {{formula:9621639c-9688-44b6-959a-aee707fb570f}} . Using the fact that {{formula:bc3f39c1-9f53-4be4-9139-dbc007fd0d34}} , we expand {{formula:d7bc0c66-9e5f-48a3-91e9-192910c1c4eb}} as follows: {{formula:ed9f6eca-65f1-4e76-aa4c-a21f6c40d7a4}} where we have used the triangle inequality, Assumption REF (d), and the fact that {{formula:c4c15018-a47f-46e6-8368-b180c2725ba7}} for any {{formula:0a9c5317-a90b-47f4-a53e-986c0c55e700}} , {{formula:d519c1ce-89fd-47dd-ac70-7ce892f667a5}} for any {{formula:322d1a62-6bff-48cd-a1fa-2a65e925de6c}} , and {{formula:ac341b43-8d77-4ae0-b401-750ca1f59806}} . Continuing to unroll the recursion, we have that {{formula:1eadca28-ec59-4885-9415-ea3dbb888d20}} Hence, we need a bound on {{formula:046ab8ce-eb13-4184-826a-4e9a084f8af7}} for each {{formula:32a6d418-c39a-4b6b-b210-410e9bdb7f60}} . Using the fact that {{formula:e2084053-8782-4804-af16-804e0655f6f2}} where {{formula:31cba97f-b053-4cb0-899e-b8db36e115ea}} , we have that {{formula:e6c4812c-edcf-4a90-883f-1f59cde3efa2}} where the penultimate inequality holds from the fact that the learning rate is non-increasing. Hence, we have that {{formula:5e4f539f-70fd-4e2b-9586-d422b5b6b3c3}} where the last inequality holds using the fact that {{formula:3fb9ffae-1545-4d08-8a5b-feb2c3a691ac}} .

Bounding gradient error. Using this bound, we deduce {{formula:0f0dcb50-79d3-4879-a705-89e6f9b58cf9}} This concludes the proof.

Lemma 3 Suppose that Assumptions REF and REF hold. The loss {{formula:ffb67696-83ea-4a8a-bc36-f3e121f5ac13}} is differentiable and the map {{formula:5bba201c-89cd-401e-8cb9-0748c59ff280}} is {{formula:6ce5c394-dcea-4fb2-8bd9-d5a1e0952c51}} -Lipschitz continuous. Moreover, the estimate holds: {{formula:4c0f4f8e-10e5-4598-b1b9-5a567ed7e3ad}}

Proof. For any point {{formula:2050ec69-d9dc-442b-b5f6-c0ad441f0292}} , we successively estimate {{formula:50c7b9ee-db99-462d-9921-336d8f517982}} Thus {{formula:35e160d4-78cf-4af9-8074-de23e4893895}} is {{formula:7d813b32-04b5-46bd-9cdb-bad8f450a87d}} -Lipschitz continuous. Next, we estimate {{formula:70e80514-c458-45c8-b7bf-7e77798ee70f}} which concludes the proof.
Define the smoothed loss at {{formula:dfc5b0eb-cdd8-466d-a00e-a5ffb0cbdc63}} as {{formula:c56b5e36-d418-4f48-b6ef-d2e5c6c55d78}} Let {{formula:44d8e24c-f518-4911-8372-3d3c0a04ea6a}} be the optimal point of {{formula:668d14e5-e176-4876-a402-9f7469d8c62a}} on {{formula:6a82f82d-0138-4ad9-bdc7-71290687d32b}} , and {{formula:873e1c6a-7a57-41a6-9735-18804b963375}} be the optimal point of {{formula:2707c652-354a-452e-9d7d-d45e8742dc06}} on {{formula:24a7206d-eac9-4433-8366-eb0168da2cea}} . We have the following bound on the distance between the optimum of the performative prediction problem defined by {{formula:40257f54-195c-4130-b4f4-ecbc72fe5cb6}} on {{formula:dbf4209d-6660-4bc3-81c2-235bc7710d15}} and the optimum of the perturbed problem defined by {{formula:85448a5a-e2e4-4442-a245-595560819fb5}} on {{formula:0f096267-1dd1-4fcc-865e-e183fd430140}} . The normal cone to a convex set {{formula:56814dd4-6362-44c9-9654-856769849bd0}} at {{formula:d812990b-7217-4581-88f8-33f3af01c218}} , denoted by {{formula:34bd08e5-01b9-45b0-b03c-e0fdb3090233}} , is the set {{formula:69326515-e575-4862-9804-0c372b1f18ff}}

Lemma 4 Choose {{formula:6313b5b0-8ce8-4433-891d-896a1fc08bcf}} . Then the estimate holds: {{formula:7945f3f2-4f08-4c2c-becc-0833d5ba74bf}}

Proof. There are two sources of perturbation: one replacing {{formula:695389e7-8a3e-49b9-839f-1e93bfb7f2d9}} with {{formula:074018a3-9fee-480b-bb93-3a19f8c692b5}} and the other replacing {{formula:6c223f5d-5228-48fc-a986-497f8f61bb29}} with {{formula:d0d6d8e3-f6ff-44b8-9cc4-a57f442c4fda}} . We will deal with each one individually. To do so, set {{formula:720cfc54-e5a3-4bf8-b55b-cfc65431ec15}} and let {{formula:9f6ed689-c063-49d0-9ab3-2e1944d0d13f}} be the optimal point for {{formula:7d8a8dab-5bc9-4082-b9d9-69f8892a0e0e}} on the shrunken set {{formula:0022eb58-4d46-45c0-957d-51c001d80b5d}} . Thus {{formula:602bfa27-5591-465a-bd96-58b66f22ad7d}} satisfies the inclusion {{formula:b94a97c4-9f42-482f-bd8f-c3e37a156c3a}} where {{formula:67865fed-6a08-47f8-af7a-1ea9153923fa}} denotes the normal cone to {{formula:db70a857-9210-432b-bc62-e5a9ce16024c}} at {{formula:d6678216-f193-4026-868f-5dffbe777993}} . The triangle inequality directly gives {{formula:047ca2bf-cce0-4966-acc0-48fffa6650ba}} Let us bound the first term on the right hand side of (REF ). To this end, since the map {{formula:e2dbe277-4c47-488b-901c-917342463148}} is {{formula:9bb2341c-8f7d-48c1-adff-2661e882427a}} -strongly monotone, we deduce {{formula:1a56f36c-48c0-469b-bdbf-fd2de094d3ae}} Let us estimate the right hand side of (REF ). Since {{formula:81090f1f-6a59-4b25-b5b1-8516621353f8}} is optimal, the inclusion {{formula:e06c7301-b2fc-4d1d-ad1e-106f417decce}} holds. Taking into account the identity {{formula:b5b5de85-ea96-4cf3-a812-4ef59cc4100c}} , we deduce {{formula:b83c24ba-1899-4547-bd96-c289d59b41b4}} where the last inequality holds since {{formula:239e7b8b-832b-429f-9fb1-e66eee1d810e}} is {{formula:43c27ccf-6054-41c4-8c43-300be4acd0dd}} -Lipschitz continuous. Appealing to (REF ) and using the triangle inequality, we therefore deduce {{formula:08e0f6e8-d8d7-4617-a8d8-6cd7fb64f5ed}} It remains to upper bound {{formula:fba000b7-cd3c-41a3-91b8-25ca0045c9cc}} .
Since {{formula:376618b6-e65f-4ca2-89f2-c76e266093ab}} is optimal, we have that {{formula:1d2546cd-2f16-4b11-aa63-c3aa6789d6b1}} Analogously, since {{formula:69658163-86d9-475f-8d76-d9c3cef7e0cf}} is also optimal, we have that {{formula:a629b328-28db-4f54-862d-dfe2bf2220b1}} Then, by strong convexity and estimates (REF ) and (REF ), we get that {{formula:b0a1e41d-b05f-419c-9833-1337ab0c1da4}} where the last inequality follows from Lemma REF . The following lemma holds by a simple inductive argument.

Lemma 5 Consider a sequence {{formula:6eb7fc86-5b19-446a-a2a7-a62b8e518421}} for {{formula:6e409f0a-a44a-4592-998e-718c31285847}} and constants {{formula:a9ee6981-67b5-4f31-834d-0b4d71db4c20}} , {{formula:1cef8dcd-cfc2-4f15-b1e3-0fd5e1df1e73}} satisfying {{formula:b52bc45a-2653-4172-a2ab-40d43abee586}} Then the estimate holds: {{formula:a55d784f-d841-45f8-b1f8-d481e0125e3d}}

Proof of Theorem REF . Adding and subtracting appropriately, we have that {{formula:20fce2a5-3274-4cbe-ab30-c1b1a271697f}} Now, to bound {{formula:500e51d4-c658-4999-8cec-57672d2e16b8}} , we have that {{formula:54dad152-8874-4f1a-b9f6-c22f421c3c45}} where the last equality holds since {{formula:5becc165-24dd-445c-9973-57fe1696b972}} . We rewrite the smoothed gradient of the loss at time {{formula:01e4f307-5fcd-4015-9c2d-aa932deb8ef7}} as {{formula:f6dd4787-f592-43bb-8352-23a0af2902f5}} Hence {{formula:0a3e60d0-7779-4b69-b81c-e6233226db9e}} where we used the fact that the smoothed loss is {{formula:8660eefc-6e84-42d3-812e-208928a73952}} strongly convex for any {{formula:51e84b4d-c5a8-4664-b46e-099985bfb4d6}} and we let {{formula:e66ef9b4-b359-401f-affb-086844d4714f}} . Using the fact that {{formula:f4694df6-115d-4c4b-b07b-564bde5c9981}} we have that {{formula:5532f2fa-6227-494e-8993-9a06cdc450c6}} where we use {{formula:d2f4095d-07a2-4946-9b79-7b7c8db6de42}} . Now, since {{formula:08d5d3db-9868-42bb-9297-c25c75d10c76}} we have that {{formula:0abc338b-d342-40e2-99ae-712def758ea1}} so that {{formula:ff0573e8-1e32-402e-a6f8-06b41c302eab}} Therefore, we deduce {{formula:2dd20744-88f3-404f-8e7a-4210981e0998}} Since {{formula:dafbd56d-7008-4bae-86e4-056ccd437d72}} , we apply Lemma REF to deduce that {{formula:78421cf6-d208-419d-8fcd-11707a375f08}} This concludes the proof.

Proof of Corollary REF . The assumed upper bound on {{formula:69e995a5-0f26-4c88-835c-7b5834f0deca}} directly implies that {{formula:6532887a-b22d-4dab-91b5-6eb64b3937e7}} and {{formula:02f387cb-abb4-40cd-a9ce-c9e642d70cac}} . An application of Theorem REF yields the estimate {{formula:ba42c348-fe99-42d1-8347-b3684e654e40}} Setting the right side to {{formula:403003e5-acaa-435c-ba2c-a02134d50dbc}} , solving for {{formula:5ccf866c-1d49-4a4e-a2df-ed27230b5404}} , and using the trivial upper bound {{formula:ad4cd315-cf13-4a32-8b0a-f7377a394482}} completes the proof.

Proofs for First Order Oracle Setting

Proof of Lemma REF . Observe that using Jensen's inequality along with Lemma REF , we deduce {{formula:4acbb432-d07e-486a-84d9-b2ce2253c342}} The remainder of the proof is identical to the proof of Lemma REF . Indeed, we have that {{formula:fe5828ec-0ddc-4629-b644-40b74a4371cf}} Hence, we need a bound on {{formula:fdafcd5d-c3ae-48c4-9867-9415298db2ab}} for each {{formula:1bea136a-4cbf-44f9-8af6-516d001ee284}} .
Recall that {{formula:b76f448f-b9f6-4cc3-86a5-c9c9217fbf44}} where {{formula:4e82be64-60cf-469c-a953-7d3b5e63748f}} Moreover, {{formula:325d05c0-d6f2-403e-a2dc-d13fb708443c}} since {{formula:d9369c56-6635-4a22-bd50-119f07abba18}} is {{formula:8dca9bf8-3c9c-4462-acb7-d3ccb0512ad6}} -Lipschitz continuous. Hence, we have the following bound: {{formula:da34251d-c73c-4971-a3ee-9a97f45a2044}} where the penultimate inequality holds from the fact that the learning rate is non-increasing. Therefore, we deduce {{formula:64b74fa0-cb85-4545-b1a1-12ae10da9565}} where the last inequality follows from the fact that {{formula:449cd03a-0ba3-434e-a47d-0dbd85bf1d16}} . Using this bound on the Wasserstein-1 distance between the current probability distribution {{formula:8b1376a1-4b32-43bb-ac09-9318ad9acc1f}} at time {{formula:ee12c205-0006-4034-abca-8cde35d15a8d}} and the fixed point probability distribution {{formula:d7711159-e82c-4714-a646-34adf4f0b9f8}} induced by {{formula:36689cb0-ca7c-41d5-92a0-e54327dc6c5f}} , we have that {{formula:1d102c88-0876-46cf-80fe-39091779b885}} since {{formula:96d9172b-5290-4301-95fa-043e6efd70c2}} . This concludes the proof.

Proof of Theorem REF

We restate the theorem for convenience. Note that the gradient {{formula:a5eab049-d711-4474-aae7-6883a0f42380}} approximates the gradient {{formula:414d520a-57c1-4287-b2e1-2a1cb0bf69a5}} . {{formula:bdb830d0-8da5-47a1-9941-b78bfeb879fd}} Noting that {{formula:e79c45c6-bdb1-4f20-9f01-7c1e00f0b20e}} is the minimizer of the 1–strongly convex function {{formula:9b87b170-5555-44fc-b869-4a8ca0bbd986}} over {{formula:b842c83b-7676-4516-a752-6753d9f5aa18}} , we deduce {{formula:7dbf78c1-5afb-42e3-ab56-3d0bc667bfa6}} Expanding the squares on the right hand side and combining terms yields {{formula:a198818b-342d-4070-97d5-87c2482ce45f}} Setting {{formula:96c01de8-481a-4563-855c-a6482bb22ab8}} , we successively compute {{formula:db3f1324-0bda-46ee-9c0b-18e54d07daf9}} Strong convexity of {{formula:02ff4a12-0bd9-430b-b318-e89c5f53ac93}} implies that {{formula:6e524022-e603-4d73-b682-b48aed450f8f}} so that {{formula:cfc1eb79-9045-4907-ba52-3d4921571b9d}} Using Young's inequality, we upper bound {{formula:f147444d-1a73-45ed-bae3-d089a302fbe5}} as follows: {{formula:d6ce9eba-e561-4ca7-8e8f-5b12a14ec1e7}} using Assumption REF . Using Young's inequality again, we have that {{formula:b58b0b87-2a49-4efc-9ce5-5528995b56bc}} Next observe that {{formula:e996b497-26c3-45e1-932a-383750fb8bc3}} where {{formula:97186dae-e15f-482b-a85c-4ef1722cfbc4}} Therefore {{formula:ffc4af49-ff9c-47ac-ae04-26c1ad2fd338}} Now we have that {{formula:3a040fdf-2635-4ae1-867d-262d28441240}} Setting {{formula:caf15da9-6c7a-4ba0-837d-3506f95a3b4a}} and {{formula:3b3876c6-4496-42c7-99f6-21c7f3e2519c}} ensures the last term on the right hand side is zero. We also have that {{formula:3c024bcd-df93-40ea-9e27-183f73aecd30}} implies that {{formula:a7d05f4a-4a17-4896-ad42-532093369c51}} . Rearranging (REF ) we get that {{formula:960aa980-eec1-4d93-bbfa-2ca70900ce15}} Next we verify that our choice of {{formula:fc1ff987-4b97-4ceb-a478-a1cd71927d79}} is large enough so that {{formula:004de88f-5c17-4d66-b3ed-773df7a789c7}} . Indeed, this is equivalent to {{formula:53d7a9c1-f415-4a6b-bf21-2bc093967d4f}} which is in turn equivalent to {{formula:0a6a7f6e-fef8-416e-a34c-c6e75273b17e}} Hence, for our choice of {{formula:1b52e6e9-6edf-4a50-8379-6aa3cd4ee38f}} , we have that {{formula:844ba728-861a-4881-80f7-20baf6beb0ad}} which completes the proof.
Numerical Simulations

In this section, we start by describing the SFpark data and experiment set-up. Then we provide additional figures and details for each of the two experiments conducted in the main text. Finally, we introduce a synthetic data example which abstracts strategic classification in settings where agents have memory.

SFPark Data Description

In this section, we provide more details on our data cleaning strategies and our model for the SFpark dataset.

Data cleaning. We start by discussing our data cleaning strategy. Of the many features in the dataset, the key ones of interest to us were the street name, district name, total time available (number of parking spots multiplied by number of seconds per hour), total time occupied, and rate. Many of the rates were unavailable in the original dataset, but the rate charged for the day before and day after were. If we encountered a missing rate, we replaced it with the rate before and after, if those rates were equal. We only worked with blocks where we could successfully fill in each of the missing rates. This process can be found in the accompanying code.

Estimating price sensitivity. The model we consider is explained in the main body. To provide more intuition and details, as an example, consider the 600 block of Beach Street (Beach ST 600) for the time window between 1200–1500. The initial distribution, {{formula:339d5e98-c3df-4742-85a5-f78325baa160}} , is sampled from the data at the initial price for parking along Beach ST 600, which in this case is {{formula:baf2cfe1-d0be-44af-92db-2b8a0f84262c}} per hour. As described in Section , we assume that for an announced price difference of {{formula:d5158b22-f883-4e5d-b7ba-3585ccd69a7c}} , {{formula:23a92a71-b2f0-4e6d-9592-ba5443d3376c}} is the charged price and {{formula:f736ea2e-90e6-4158-ab88-5c99a8fbbae5}} is the variable of optimization. The occupancy follows a distribution of {{formula:1f3ca9dd-ea33-4665-a1b6-5be2c0f64891}} , where {{formula:de7fcb5a-b5c9-425e-9002-da905ef270a0}} follows the same distribution as {{formula:6d2cd1e4-b25d-4db8-bb1d-680ccbf0a25d}} . The price sensitivity {{formula:cb31c34e-2495-4635-a5c3-a16fe87ae6a2}} is a proxy for the price elasticity, in that it provides us with a relationship between the change in price and the change in occupancy mapped to a {{formula:c3ddeef6-feec-4cf0-9b46-7c07efe38267}} scale. Indeed, recall that price elasticity is a change in the percentage occupancy for a given change in percentage price. Hence, price sensitivity as we have defined it has the same sign as price elasticity except that it is in the right units of our mathematical abstraction for the problem, and is in this sense a proxy thereof. We compute {{formula:9ac81066-adac-4cab-8a22-fec9cb68d932}} by considering the following:
- The average occupancy for the initial price over every weekday in the beginning of the pilot study until the price is changed.
- The average occupancy over every weekday in the final week of the last price announcement.
As an example, for the 600 block of Beach ST, the initial price was {{formula:649fa9f2-2fce-403b-a0b9-6506a512b70a}} per hour and the average occupancy before a new price was announced was approximately {{formula:3c25dbcc-d217-4175-9723-270d7646be14}} %, the final price announced during the pilot study was {{formula:524d40ae-b8dd-496a-a47e-853d25fe5c3d}} , and the average occupancy for the final week was approximately {{formula:b1d2f414-b8a8-4881-bd4c-657ddb753881}} %.
Therefore, for the 600 block of Beach ST, we estimate that {{formula:210849c1-3508-4c5a-9db8-9415d1053f1d}} where occupancy percentage is mapped to the {{formula:6c87d851-909b-4df9-9341-6c74d5f95a45}} scale. It was shown in {{cite:94d9c6b021b565b0bc0174a826837d9ef236c193}} that price elasticity is in general a small negative number on average for the SFpark pilot study and experiment. This is consistent with prior studies on price elasticity for on-street parking where information about price and location plays a crucial role {{cite:53f4b1b20481a535acc176bdbcc02b100304eb36}}, {{cite:6e7584a7e461192533a76e478e81744c73cc2f69}}. However, for the SFpark pilot study, the price elasticity also depends highly on the block and neighborhood.

{{figure:b0b49df7-6a7f-49da-9303-2daa76a0946f}}{{figure:45598423-3cef-4f14-8d29-5ab8ccc3d820}}

Estimating geometric decay parameter {{formula:c57fa7fc-827a-44c9-affc-bd362086053a}} . We also use this data to estimate the geometric decay rate, {{formula:2fb15faf-64e7-4a37-92bc-de9727f41162}} . As described in Section , when a new rate is posted, the effect on the occupancy is not immediate, and so the geometric decay rate, {{formula:ef2aa265-c476-44d1-bef2-59d56241d0ce}} , in this context represents the speed at which this new announced price travels through the population (and consequently affects the parking occupancy). We group the occupancy data by day of week, in order to account for different traffic patterns on different weekdays. We assume that the week before a new price is announced is the fixed point distribution of the previous rate. For example, for the 600 block of Beach ST, a rate of {{formula:ef59a17f-a12f-4d51-9385-126f83efe944}} per hour was announced on February 14, 2012, which means that we assumed that the occupancies on February 7–13, 2012 were the fixed point distributions of the previous rate {{formula:86ed05a2-fbe6-4a3b-8d48-0b3f9c8635f5}} . We now fix a day of the week (e.g., Monday), a block (e.g., Beach ST 600), and a time window (e.g., 1200–1500). Suppose the prices {{formula:772cca5b-5206-41ba-b485-96cfc0bd2396}} are announced and {{formula:04d22479-4f19-474f-bcc5-b0ac5c6109ac}} represents the fixed point distribution of announcing {{formula:0113f8ea-5a0d-4224-9460-ab8428d2ec8f}} , where the price {{formula:d4a89083-2713-43fd-9f23-3615bdab9335}} is in effect for {{formula:2ce75f08-9788-4e63-98ba-945702b01e38}} weeks. Then, for the {{formula:5d67dbb9-5d29-49c3-ab45-63e6b5bcb38b}} -th week after announcing {{formula:23cd923e-ee55-46c3-b9cd-6ba08ca3691c}} , we assume that the occupancy is represented by {{formula:6eb168f0-a955-409b-a615-571275d58586}} . For each week {{formula:f4c9a1e8-c5d2-4eee-bd73-750830d94fca}} , and for price {{formula:28ada599-1b0c-41a1-800b-373764972324}} , the occupancy for the specified day is represented as {{formula:ecf2b834-bf29-4e4e-932d-fc874e3fb77c}} . To find the value of {{formula:85483d4e-b02f-4895-b632-b860029a8326}} , for the specified day and block, we solve the following optimization problem: {{formula:b4d874a4-8341-482b-8616-7762be65a9a6}} We perform projected gradient descent to solve this problem. For the final value of {{formula:b50da90f-5cba-46b4-b56c-a5d8500abe18}} that we use for the specified block, we average the estimated values of delta for each day.

Comparing Performative Optimum to SFpark

Here, we provide experiments for other blocks on Beach Street (beyond just the 600 block in Section ).
Each row in Figure REF shows prices and corresponding occupancies for the two algorithms for the 500, 700, and 800 blocks of Beach ST, respectively. In each instance, we make similar observations to those in Section for the 600 block on Beach ST, namely, that SFpark consistently overshot the price to reach the target occupancy, and that the choice of {{formula:9a772216-eb96-4811-a8a4-a9a8ecc111e2}} is reasonable, in that a time period of 8 weeks is sufficient for the population to equilibrate before announcing a new price. An interesting observation from Figure REF comes from the fact that the 500 block of Beach ST has a price sensitivity of {{formula:888c84c6-7554-460d-9053-64bb480f05a7}} , and the 800 block of Beach ST has a price sensitivity of {{formula:c99bded8-2041-44e9-ba42-481617b07802}} . Since both of these values have large magnitudes, we observe that for a small price reduction, the estimated occupancy increases to {{formula:12d860c6-2868-4d3d-bb96-161635a76a64}} . Therefore, for blocks where the magnitude of the price sensitivity is large, our experiments suggest using a smaller choice of {{formula:c3cc41dc-e101-422f-a1f7-0c825c1cb397}} , and consequently a larger choice of {{formula:21a6d7e2-6790-4346-bc2b-25fb9f601ba6}} , in order to reduce the variance of the price announcements and prevent large fluctuations in occupancy. All four of the blocks on Beach Street have very similar estimated {{formula:4c6ec2f7-3ade-4660-81ef-8b19f7401b2a}} values.

{{table:8e63f828-57a1-47d9-aa32-146841f9dad7}}

Table REF indicates that each block adjusts to new price announcements at similar rates. This makes sense given that each of the blocks is on the same street, all next to each other as seen in Figure REF , and located near similar landmarks and attractions.

Redistributing Parking Demand

In this appendix subsection, we describe the details for the experiment on redistributing parking demand. The study includes the four connected blocks of Hawthorne ST 0, Hawthorne ST 100, Folsom ST 500, and Folsom ST 600 because the blocks are adjacent to one another as shown in Figure REF . Thus, we wanted to investigate whether price changes would redistribute the traffic such that each block had an occupancy closer to the target of {{formula:d3037b29-45d6-4852-aaf7-06a6745808a4}} . An interesting note is that while Folsom ST 500 and Folsom ST 600 both have negative price sensitivity values of {{formula:92fd570a-1582-475a-9f02-2da01f00b3e0}} and {{formula:253d68e9-89cb-4e15-8df9-a8f650791cd3}} respectively, Hawthorne ST 0 and Hawthorne ST 100 have positive price sensitivity values of {{formula:49d3fe13-56ff-4e40-8cfa-017d20ab826c}} and {{formula:7f3ccd26-6168-4015-9c9c-ce6126714056}} respectively. Since Hawthorne ST has a very high initial average occupancy, SFpark should consider decreasing prices on this street in order to shift demand to the nearby streets. This is exactly what we see done by both algorithms, so that both streets are closer to the target occupancy. Although the price sensitivity is very different for these blocks, the estimated {{formula:59c8b105-f3c1-40c3-9e61-61171a56152b}} values are very similar. Hawthorne ST 0 has {{formula:11072d35-448f-4245-b81c-7002033e09ff}} , Hawthorne ST 100 has {{formula:93918e1a-efaa-4d4a-bdda-25f92a4fbb31}} , Folsom ST 500 has {{formula:7067da3d-d012-48d8-9114-e2d0bc2329f5}} , and Folsom ST 600 has {{formula:c3a2d1dd-19b6-4dc3-b63f-33799f2d92c3}} , so each block adjusts to new price announcements at similar rates.
Synthetic Data: Strategic Classification in Dynamic Environments

In this appendix subsection, we apply our algorithm to a synthetic strategic classification problem—which was considered in the dynamic setting in {{cite:1d46eb2fb43c7b11d680050944ea4995b2085b26}} and in the static setting in {{cite:b9653b22aaf4ed668b6a8b26c526df60f617f9ab}}, {{cite:f083416b69d6938d2659e24aa8c094f2aa3c6577}}, {{cite:2d6e80983eb34d6dfabb348e6e541ffc22eed33c}}, e.g.—where there is memory in the agent population. For simplicity (and to support visualization of the classifier performance), each data point contains a feature vector, {{formula:05879df5-61cf-4ca8-af38-8ab706a78db2}} , and a corresponding label, {{formula:a79f621e-20bf-4636-ad4d-f744127aacbf}} where {{formula:bc4386a3-d8f8-476c-91f0-141f508d5d75}} and {{formula:47fd51fb-f71b-40ac-91a0-46dfbb527f15}} is the number of strategic users. The loss incurred by the institution is given by an {{formula:d89dc539-2b16-43ec-8aff-625faeae2d6d}} -regularized logistic loss: {{formula:6e928537-e27b-4860-ab52-e0b049c60468}} where we set {{formula:b1840fdc-71dd-482f-aa00-9a155895e511}} . The agents are non-strategic (meaning they do not perturb their true feature vector {{formula:14596f6c-8682-4a6a-91ce-7aafae1a30f1}} ) if they have label {{formula:fef6d738-766f-42b0-94c9-d9285ace07f1}} , and otherwise `best respond' to the announced classifier according to the model {{formula:8ae1959a-a195-4911-8ee4-b88756718420}} We take {{formula:04d7bfaa-26fa-4188-8b99-42ba989bca1a}} , but the observations we make hold more generally, with the exception of very large magnitude perturbations, for which the problem (even in the static setting) becomes untenable. We randomly select a subset of the two features to treat as strategic. We also randomly generate a ground truth data set by drawing {{formula:787dab5b-73c8-4a81-89a3-52afb73be3ee}} samples from a normal distribution, drawing the ground truth {{formula:eb4823df-5b08-46d0-bbc7-49a6a03cc12f}} from a (2-dimensional) normal distribution, and then assigning labels according to {{formula:ef98be3c-7042-4e25-8f26-cad8a4893c1f}} Specifically, agents are allowed to perturb in the {{formula:02629276-b79d-4693-b99a-7f1daf1b1857}} direction, as can be seen in Figure REF . Moreover, we take the initial data distribution {{formula:f0747368-d10f-4274-9d8e-53e88f4bb94e}} to be far from the base distribution for users' true preferences {{formula:04fa09e3-e6b8-4f2c-824c-bc3022f654d4}} even with performative effects; specifically, {{formula:3609ec3e-347f-480b-9b8f-d7fa315b72d5}} is a Gaussian distribution with a mean of {{formula:0fb9b227-4b7a-4b89-a847-fa76f26c85e4}} and scale (standard deviation) of 45. More details on the implementation can be found in the accompanying code. We divide the data into a training and test set with a {{formula:3d970ab1-3f53-4fba-bac2-93856d3c8b4b}} –{{formula:21a4f330-b667-4159-983a-6df59ba18ce7}} split. We set the regularization parameter to {{formula:006c55d5-28e1-46ca-ae6e-d63743427e51}} where {{formula:03f5e5b6-339c-4456-9d18-62c3aaff6f1f}} is the size of the training data set. For (REF ), the inner product can be interpreted as the utility of the agent and the norm difference as the cost of manipulation.
We present results for a modest value of {{formula:46cc15a9-1e50-4b78-9134-8b5c6785a913}} ; similar or lower values are consistent with our observations, and, as our theory suggests, as {{formula:cd970f71-a910-4e17-94b3-87662e587304}} , the solution obtained by Algorithm  approaches the performatively optimal solution.

{{figure:9b8c1d60-0065-4ecb-bd05-47fdcdda7dde}}{{figure:cb708e24-881a-4f3f-b93f-e9809ac2b817}}

We explore the effect of different values of {{formula:d0eea7be-134a-482d-bf23-e37b89df08f6}} and {{formula:d72e3235-4c2f-4e1a-a8e0-10954a2ff98d}} —i.e., the mixing parameter of the geometric dynamics and the epoch length of Algorithm —on not just convergence but also accuracy. The observations we report actually lead to a number of interesting open questions for this field, including how performative optimality relates to generalization. We find that, depending on the skew of the data distribution and the strength of the perturbation power of the strategic agents—namely, {{formula:2f3186d9-56d1-4fe4-b08a-93370f4a347b}} —the performatively optimal point may, surprisingly, not generalize very well as compared to the solution obtained by Algorithm  when the mixing parameter {{formula:cbcb50b5-07fc-4f73-8d6b-fd76ad47c5f6}} is large. The latter has better accuracy, as can be seen in Figure REF ; the loss value per iteration and the classifiers for different {{formula:d5b7a6c8-56fd-4334-ad2f-f99079897199}} values are shown in Figure REF . In other settings (e.g., with different ground truth data), the solution obtained by Algorithm , even with different values of {{formula:f7817ee2-ca61-4a8a-9ba7-9f21e2901a2b}} and different choices of epoch length {{formula:8aa4bab1-e8cd-465c-aa04-dedd58dac406}} , performs just as well as the performatively optimal solution, as depicted in Figure REF , the data for which has the original distribution depicted in Figure REF , which also contains the learned classifiers and losses per iteration for different {{formula:69aa008b-70d0-4e4d-a601-8bd2f19bc1ed}} values.

{{figure:aff5097c-aaa1-4191-873d-11d88fcbe9e2}}{{figure:47089710-fa43-4bd4-81b9-7aa3e8f0e126}}

These observations about the generalization performance of the obtained solution under our proposed algorithm (for different values of the geometric process or mixing constant {{formula:f47f8902-4e32-40b5-82eb-9f476f20d66e}} ) as compared to the (performatively) optimal point, while highly dependent on the underlying data distribution, open up a number of interesting directions for future work on understanding precisely when the optimal point gives good generalization and robustness guarantees.

Semi-Synthetic Data: Strategic Classification in Dynamic Environments

As a point of comparison to the existing literature, we perform additional numerical experiments on a strategic classification simulator from the Kaggle Give Me Some Credit dataset discussed in {{cite:2d6e80983eb34d6dfabb348e6e541ffc22eed33c}} and {{cite:1d46eb2fb43c7b11d680050944ea4995b2085b26}}. In this dataset, each data point contains a feature vector, {{formula:3d4a07b4-369e-47a5-a2b4-69513bece65b}} , which represents historical information about an individual, and the label, {{formula:d08d1436-e53c-4352-96f3-6ce763d76b1e}} , which represents whether or not the individual has defaulted on a loan. For more details on the dataset itself, see Appendix B.2 in {{cite:2d6e80983eb34d6dfabb348e6e541ffc22eed33c}}. Let {{formula:7cbef734-9c1d-4108-a3e2-364de6ba6f19}} be the subset of features that an individual can strategically manipulate.
We assume that the best response of every individual to an announced {{formula:93a5ef99-ff6d-49bf-be77-d7acafb74471}} is given by {{formula:b9f5d5e7-cee8-4309-a7a1-616b547fdd02}} , where we use the notation {{formula:9265d134-7730-4585-8494-76e0c85797b0}} to be the restriction of {{formula:84b5975e-8c74-40ed-beed-797f8466a1e5}} to the subset {{formula:36b949b7-16da-4393-98f0-77d810cc770e}} and similarly for {{formula:c3daf8a0-11e7-44f8-95dc-5cac6c9cc0cd}} . The remaining features of the individual stay the same as the original data. We conduct two sets of experiments. In the first set, we compare our algorithm on the total number of iterations—i.e., epochs {{formula:e54a122a-ec4e-45ca-afc2-a6da6b1cd87c}} multiplied by {{formula:7becb166-2118-4bfb-a7ee-b0a8513187ad}} —to repeated risk minimization (RRM) {{cite:2d6e80983eb34d6dfabb348e6e541ffc22eed33c}}, {{cite:1d46eb2fb43c7b11d680050944ea4995b2085b26}}, and repeated gradient descent (RGD) {{cite:2d6e80983eb34d6dfabb348e6e541ffc22eed33c}}—implemented for the dynamic environment, which was not considered in {{cite:2d6e80983eb34d6dfabb348e6e541ffc22eed33c}}—both of which, notably, update {{formula:b972d79b-309a-4401-9297-b2128f2c0e39}} at every iteration in {{formula:8613ce3a-4917-40d8-966b-694a8b0799eb}} , whereas our approach (Algorithm ) only updates every {{formula:7b30339e-d3c7-4289-9ca6-764012c763be}} steps in that same interval.

{{figure:bbf46161-06f4-41eb-b847-77cae8955391}}

In the second set of experiments, we compare our approach to an epoch-based implementation of both RRM and RGD, where in these implementations the dynamics are also allowed to “mix” and the decision maker updates only every {{formula:8484fc2b-8444-4a72-aa06-752e34dda9b6}} steps, as in our method. These latter experiments are more comparable, even though the epoch-based implementations of RRM and RGD have not been studied theoretically. For both experiments, we plot the {{formula:fb9e4bfa-4e0d-4f29-8c77-6766c1f0f18d}} distance to the optimal point.

{{figure:7e289cba-c218-4372-ba62-daeba4d7cd0e}}

Experiment 1: Comparison to Iteration-Based (Classical) RRM and RGD. Figure REF shows the results of the first set of experiments, for which we have taken {{formula:b6a36dad-4a1e-4870-b6ef-f376c10ece4b}} , which is relatively large, meaning that the mixing time for the geometric process is large. Neither RRM nor RGD targets the performatively optimal point, but instead the performatively stable point, i.e., the point at which repeated retraining will stabilize. As shown in Figure REF , a performatively stable point (the point RRM was shown to converge to in {{cite:1d46eb2fb43c7b11d680050944ea4995b2085b26}}) may be far from the performatively optimal point. Interestingly, we also observe that for small values of {{formula:5682168f-1ab3-48af-8a25-1c731fc8e6ff}} (i.e. on the order of {{formula:47a9e032-f24f-4312-b363-1561ed9feb55}} ), the performatively optimal point and the performatively stable point are very close, and so RGD behaves nearly identically to calling Algorithm with {{formula:cc67a35d-3240-4b52-9384-32def256a1f8}} . This seems to imply that when performative effects (i.e., the size of {{formula:7b1b42a3-e218-4dcd-b9b5-6f75be80cc0d}} in this set of experiments) are very low, the naïve strategies of RRM or RGD suffice when trying to find the optimal point.
On the other hand, for values of {{formula:bc0f9e73-2069-4171-bfb8-28f819e99ec2}} on the order of {{formula:ff80a0b2-ff75-4c22-b0ca-a4d7a9a2601b}} or larger, RRM and RGD do not converge to the performatively optimal point while Algorithm  does, albeit with worse iteration complexity than the respective algorithms' convergence to their stable points.

Experiment 2: Comparison to Epoch-Based RRM and RGD. Figure REF shows the results of the second set of experiments. As noted above, in this set of experiments we compare epoch-based implementations of RRM and RGD to Algorithm , which is also an epoch-based algorithm, the idea here being that these are more comparable algorithms in a sense. As can be seen in Figure REF , the observations are analogous to the first set of experiments. Epoch-based RRM and RGD converge to the performatively stable point (as defined in {{cite:2d6e80983eb34d6dfabb348e6e541ffc22eed33c}} and {{cite:1d46eb2fb43c7b11d680050944ea4995b2085b26}}, for the dynamic setting). For {{formula:19eb6a6c-525e-4527-963b-33d858c076fc}} on the order of {{formula:2e7a210a-b805-4559-a9c4-9bc76ca9f81c}} , the performatively stable point is close to the performatively optimal point (although still not equal to it), and for {{formula:96fc5d26-58d2-4ece-8c84-aacb22089656}} on the order of {{formula:ab902945-b5cd-4839-b755-1f4db451baca}} or larger, the performatively stable point is considerably farther away from the performatively optimal point. On the other hand, Algorithm  converges to the performatively optimal point for all shown values of {{formula:5a70594a-6d22-4b59-95ee-e01559340f93}} , the size of the strategic perturbation. We note that we did not compare to the zeroth-order method since it has access to different information than both RRM and RGD and is thus less comparable. We expect the same observations about non-convergence of RRM and RGD for large {{formula:4a90101b-fffb-470e-89ca-9c48327f3c02}} to persist, and Algorithm  will converge as the theory predicts, albeit at a much slower rate than Algorithm  due to the bandit feedback.
d
369f13f0c4dd2dd062a625e622917bba
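To make the geometric decay model in the excerpt above concrete, here is a schematic simulation, assuming that for a scalar occupancy statistic the mixing dynamics reduce to moving a (1 - delta) fraction toward the fixed-point response each week; the response function and all parameter values are illustrative placeholders, not fitted SFpark values.

    import numpy as np

    def fixed_point_occupancy(price, beta=-0.15, base=0.85):
        # Illustrative fixed-point response: occupancy drifts linearly with price.
        return float(np.clip(base + beta * price, 0.0, 1.0))

    def simulate(prices, weeks_per_epoch=8, delta=0.7):
        # Each week the occupancy moves a (1 - delta) fraction of the way
        # toward the fixed point of the currently posted price, so a newly
        # announced price takes effect geometrically over the epoch.
        occ, trace = fixed_point_occupancy(prices[0]), []
        for p in prices:
            target = fixed_point_occupancy(p)
            for _ in range(weeks_per_epoch):
                occ = delta * occ + (1.0 - delta) * target
                trace.append(occ)
        return np.array(trace)

    print(simulate([2.0, 1.5, 1.0]).round(3))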
The commonly used tetrahedron method {{cite:3c07347ed4b6363f1062ac74bd74923a34bd0467}} is a popular tool for density-of-states calculations within density functional theory {{cite:8b3aaadec06748cdcd6bc080d9bf3e02b0a1f14f}}. A fundamental building block of this approach is the linear approximation of the spectrum within an elementary tetrahedron. However, when one of the tetrahedron vertices is a van Hove point, its contribution, which is actually dominant, is crudely averaged with the contributions of the other vertices. This implies very poor accuracy of the method when the energy is in the vicinity of van Hove points. In Fig. REF , a comparison of the results of the direct calculation using Eq. (REF ) and of the tetrahedron method is shown. For the latter method, we find strong artificial oscillations in the vicinity of van Hove energies, and the convergence of the results in the limit {{formula:625834f7-f11f-4a4b-bff2-75d5b981e7b5}} is thereby substantially degraded. The origin of this discrepancy is that vertices of a particular tetrahedron with zero and non-zero velocity are averaged on an equal footing, which underestimates the contribution of the vicinity of the van Hove singularity point in reciprocal space; see also Ref. {{cite:a4f875ea4f617758067495e1cfc9ccf793f4cb0e}}. {{figure:52a981e3-4d19-40db-9a50-9d26b5da00cd}}
m
0d3fb8a989edd8ecadbf0ec646b4acc2
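The van Hove issue is easy to reproduce numerically. Here is a brute-force sketch for the square-lattice tight-binding band, whose density of states has a well-known logarithmic van Hove singularity at E = 0; grid sizes and bin counts are illustrative.

    import numpy as np

    t, N = 1.0, 2000                            # hopping and k-grid size
    k = np.linspace(-np.pi, np.pi, N, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))  # square-lattice band

    # Brute-force DOS: histogram of band energies over the Brillouin zone.
    hist, edges = np.histogram(eps.ravel(), bins=400, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    i0 = np.argmin(np.abs(centers))             # bin nearest the van Hove energy E = 0
    print(f"DOS near E=0: {hist[i0]:.3f} (grows logarithmically as the grid is refined)")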
A more general nonstationary problem is studied in {{cite:1da536edced87c6e181227ddad1d67ee975d1f12}}, wherein the cumulative maximum variation in mean rewards is subject to a variation budget {{formula:025efac7-186b-4a6c-b0d3-25e02e7f4740}} . Additionally, the authors in {{cite:1da536edced87c6e181227ddad1d67ee975d1f12}} establish a {{formula:f2ab329e-819c-4483-8a52-3938e4aed375}} minimax regret lower bound and propose the Rexp3 policy. In their subsequent work {{cite:23fbc5adad44d6ac45a34e5a48d71fa1b2d4c6ee}}, they tune the Exp3.S policy from {{cite:c5c886262f432516c8dd1b2f27e494131056359b}} to achieve near-optimal worst-case regret. Discounted Thompson Sampling (DTS) {{cite:6f6c36b10beb58ea7ba5b09e9cc472b30681fa47}} has also been shown to have good experimental performance within this general framework. However, we are not aware of any analytic regret bounds for the DTS algorithm.
i
04f5d903c8a2609f0e6d12444115b258
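For orientation, here is a compact sketch of the Rexp3 idea mentioned above (Exp3 with periodic restarts), assuming rewards in [0, 1]; the batch length and the drifting bandit are illustrative, not the tuned choices of the cited works.

    import numpy as np

    def rexp3(pull, K, T, batch, gamma, seed=0):
        # Rexp3: run Exp3 and reset its weights every `batch` rounds to
        # hedge against drifting mean rewards.
        rng = np.random.default_rng(seed)
        total = 0.0
        for start in range(0, T, batch):
            w = np.ones(K)                                  # restart Exp3
            for t in range(start, min(start + batch, T)):
                p = (1 - gamma) * w / w.sum() + gamma / K
                arm = rng.choice(K, p=p)
                x = pull(arm, t, rng)
                total += x
                w[arm] *= np.exp(gamma * x / (p[arm] * K))  # importance-weighted update
        return total

    def drifting_bernoulli(arm, t, rng):
        # Arm 1's mean slowly drifts; arm 0 is stationary.
        mu = (0.4, 0.5 + 0.4 * np.sin(2 * np.pi * t / 5000))[arm]
        return float(rng.uniform() < mu)

    print(rexp3(drifting_bernoulli, K=2, T=10_000, batch=2_000, gamma=0.1))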
The effects of weak corrections in parton shower algorithms are known to become significant already at LHC energies, in particular with the upcoming luminosity upgrade, and will be even more relevant at future colliders. One of the major challenges of the construction of such a shower is the calculation of the relevant branching kernels, which in this paper was done using the spinor-helicity formalism. Compared with QCD, the electroweak theory involves many theoretical subtleties that have to be handled carefully. One major issue is the chiral nature of the electroweak theory, which forces the shower to be helicity-dependent and leads to a large number of possible types of branchings. In particular, the scalar components of longitudinal polarizations lead to unphysical, unitarity-violating contributions that have to be treated carefully. The collinear limits of the computed branching kernels are found to be in agreement with the results of {{cite:28f41553097b4cffcbe4fb87cedb28feb2a56597}}. The electroweak shower also includes many branchings that would usually be considered to be decays of resonances, in which case the distribution follows a Breit-Wigner peak. A strategy to match the parton shower to a resonance decay was proposed, but this may likely be improved upon by a better understanding of the interplay between the virtual corrections contained in the Sudakov factor and the decay width. A more sophisticated treatment of this matching is beyond the scope of this paper, and will be the topic of future study {{cite:68bafc1615f7f897d56c5819b04b2b3a9deaa6f3}}. Further electroweak effects added to the shower include a recoiler selection procedure that compensated for recoiler effects of previous branchings and treatment of bosonic interference effects. Results were shown that quantify the general size of electroweak shower corrections at future collider energies and at LHC energies.
d
814d9357b77b4bba73375fde28a02e60
What qualifies as the best performing model for SCF? Our SP-GRU learns the best fit for synthesized behavior. On the commonly used metric of NLL {{cite:9211cda377f653ad5b99ed37390e419fbe1f228b}}, {{cite:407b691bd6cfc1bc8177f6d6de666442e8ab093d}}, {{cite:401254e53e3ae71cf619a05dfbbcbf2f2557b756}}, our SP-MLP models perform the best for real-world data. However, they fare the worst at estimating the mean. On the other hand, the SP-GRU models estimate a better likelihood than the NP baselines with comparable errors in mean forecast. While the NP baselines attain the lowest errors in predicted means, they also achieve the worst NLL. From the qualitative visualizations and ablations, it seems that the models minimize NLL at the cost of orientation errors; in the case of SP-MLP seemingly by predicting the majority orientation of the two sellers who face the same direction. Also, the NP models forecast largely static futures. In contrast, while being more dynamic, the SP-GRU forecasts also contain some smoothing.
d
704ef4695c96ef3b7d6c1447eb32aec2
Figure 1: Background colors represent the maximum mass of hybrid stars for different parametrizations of the NJL model (i.e. different values of {{formula:8d25b638-7ebf-49a0-9020-eeb8b85572ab}} and {{formula:acf3a8f5-b621-4139-89d0-871018c1dfb5}} ). In each panel we use a different hadronic EoS (without hyperons): (a) GM1, (b) TM1 and (c) NL3. Notice that the color scale is different for each panel. The solid contour lines indicate specific values of the maximum mass. The black solid line represents the boundary between parametrizations that allow for stable hybrid stars and parametrizations that do not. The red dashed line indicates the value {{formula:f452754d-5f0b-40ce-98ed-b6591d8f6b7e}} corresponding to the observed mass of PSR J1614-2230 {{cite:241d19a34563a3128659548767087cffd590930d}}. The region between the red dashed line and the solid black line allows one to explain the mass of PSR J1614-2230. Source: Reference {{cite:c0b9a1298f92a561aaacc3782d4245b1351654ed}}.
r
94cf002163e8e069b589ad99967fba08
In this section, we provide simulation results of the proposed algorithm. The positions of users are generated according to a uniform random distribution in a circular range with a radius of 50 m. The minimum distance between the UAV and any user is 1 m. Both path loss and Rayleigh fading are considered. The parameters of the probabilistic LoS channel model are {{formula:ccb8adcc-47d3-4b74-a311-36f7c23c0e9f}} and {{formula:81b0799a-ae75-4b7b-a362-1a3fd8ef3f02}} . The bandwidth {{formula:f67b9082-ef0f-4e25-827b-8280e63589e9}} is 1 Hz. Considering realistic scenarios, the configuration of the multiantenna, rotary-wing UAV is specified in Table I with reference to {{cite:fb640a90a0a66d2af1603437bdcac4188e3044c8}}, {{cite:528a80ced29625330e16528cd44a9a1461060b36}} and {{cite:26a044326f7134b0f57db78f9539c186366ae12a}}. The other simulation parameters are listed in Table II with reference to {{cite:5f9c35225b27b1036ff4867f801d0c1318c7e961}}, {{cite:14535ebc1d00ee72f4ea7cb35e0ea60f63cba4f3}} and {{cite:dc5df9638da77ce0fc4cb2dcb9c06d0dd79d25ff}}. {{table:11447c0a-dc9f-4616-8688-b21a8762f364}}{{table:452f87b6-5828-40cf-a7d2-91eb8517493b}}
r
88649b5ca84f8ea1aa2ca690958b1371
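For reference, here is a small sketch of the kind of probabilistic LoS channel model referred to above, assuming the widely used sigmoid elevation-angle form; the constants shown are illustrative urban values, not the parameters used in the paper.

    import numpy as np

    def p_los(theta_deg, a, b):
        # Sigmoid elevation-angle model for the LoS probability.
        return 1.0 / (1.0 + a * np.exp(-b * (theta_deg - a)))

    def mean_pathloss_db(ground_dist, height, fc=2e9,
                         eta_los=1.0, eta_nlos=20.0, a=9.61, b=0.16):
        # Free-space loss plus LoS/NLoS excess, averaged with the LoS probability.
        theta = np.degrees(np.arctan2(height, ground_dist))
        dist = np.hypot(ground_dist, height)
        fspl = 20.0 * np.log10(4.0 * np.pi * fc * dist / 3e8)
        p = p_los(theta, a, b)
        return p * (fspl + eta_los) + (1.0 - p) * (fspl + eta_nlos)

    print(mean_pathloss_db(ground_dist=50.0, height=100.0))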
We consider a cluster of sine tasks, one of linear tasks and one of quadratic tasks, regression problems inspired from {{cite:55e72ca3b7fd7c6acbcf55e91a318d8bcfe8e28b}}. Details on the problems can be found in Appendix REF .
r
111c1ee4d1cac232c080d4ae371ce09d
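A minimal sketch of how such task clusters could be generated; the parameter ranges below are illustrative assumptions, not the ones specified in the paper's appendix:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(kind, n=50):
    """Draw one regression task from the given cluster (ranges are illustrative)."""
    x = rng.uniform(-5.0, 5.0, n)
    if kind == "sine":
        amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0.0, np.pi)
        y = amp * np.sin(x + phase)
    elif kind == "linear":
        slope, bias = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        y = slope * x + bias
    else:  # quadratic
        a, b, c = rng.uniform(-0.5, 0.5, size=3)
        y = a * x**2 + b * x + c
    return x, y

# Three clusters of tasks, ten tasks each
tasks = {kind: [sample_task(kind) for _ in range(10)]
         for kind in ("sine", "linear", "quadratic")}
```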
On the other hand, Bennett and Brassard proposed a completely different unauthenticated key agreement protocol, now called BB84 {{cite:e26e27e55ee8d5057ef2962a2165e0b6258ccabd}}, whose security is based not on a mathematical problem but on physical assumptions about quantum mechanical systems. In this protocol Alice needs to be able to send Bob qubits. In most implementations, like {{cite:4e35384211604533e5ca08d53d4a004f1cec9e16}}, this is done by sending photons over a fiber optic cable, with the qubits encoded in the polarization of the photons. After BB84, other variations have been proposed, notably E91 {{cite:5e985606e1562739149f50cf80e7df8e4f7141a2}}; they are all collectively referred to as Quantum Key Distribution (QKD).
i
67d422b2236df676eada0ce47722b370
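A toy sketch of the BB84 sifting step under idealized assumptions (noiseless channel, no eavesdropper); basis choices and polarization outcomes are modeled as random bits, so this illustrates the protocol flow rather than its security:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)  # 0: rectilinear, 1: diagonal polarization basis
bob_bases = rng.integers(0, 2, n)

# With no eavesdropper and no noise, Bob recovers Alice's bit whenever the
# bases match; otherwise his measurement outcome is uniformly random.
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: both parties publicly compare bases and keep matching positions.
sifted_key = alice_bits[match]
assert np.array_equal(sifted_key, bob_bits[match])
print(f"{len(sifted_key)} sifted key bits out of {n} transmitted qubits")
```

On average half the positions survive sifting; an eavesdropper measuring in random bases would introduce detectable errors in the sifted key.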
- Integer linear program (ILP). The standard ILP formulation of the MDADC (REF )-(REF ) {{cite:eba924fd8eb87be4b27bc1ef524c48b64698abf6}}, {{cite:e4b6763a92775748d7c7971f2884f59eba7b14d4}} involves {{formula:4572968e-e1fc-4752-aae5-ebdcd6409de1}} binary variables (the number of edges in a complete {{formula:72661b1f-64da-4203-b3c4-5100dbbc7aa3}} -partite graph with {{formula:19fff432-8143-4131-b005-ad3ef3122771}} nodes in each subgraph), {{formula:a187e1c5-58e0-434f-8c6a-40f5c0d3f556}} equality constraints and {{formula:2da3e07a-0db7-42e6-b6e4-a7ade3e2c43b}} inequality constraints (so-called triangle or clique constraints). Another formulation of the ILP expresses the triangle constraints with reference to one of the {{formula:6311fc8c-4898-47b0-b510-c403ea640c58}} subgraphs, thereby reducing their number to {{formula:d5b79135-2bfc-44c5-ac8c-7cd53532d49f}} . A toy instance of the standard formulation is sketched below.
- ILP relaxation and integer quadratic program (IQP). Two of the methods in {{cite:eba924fd8eb87be4b27bc1ef524c48b64698abf6}} are considered: the first consists in dropping the triangle constraints, solving {{formula:6d2f321a-354c-4e63-a3ac-f141b1faa245}} separate assignment problems, and recovering a proper solution with multiple-hub heuristics. The second expresses the triangle constraints with reference to one of the {{formula:5f0c7f84-d12c-4e53-b1c4-d37e223454e2}} subgraphs as in the above ILP, and formulates the objective function only in terms of those edges starting from and arriving at the reference subgraph. This reduces the number of optimization variables to {{formula:ae2064ed-dcad-4076-a648-73054944d333}} but turns the linear program into a quadratic one.
- Constrained {{formula:327795e4-a6f6-45d2-9ad1-e5d13608c4ed}} -means. The COP-KMEANS {{cite:979c56c8fa8d092748fc2f401194b68f842fe56e}}, MPCK-MEANS {{cite:0dafbdf6d413eda2c30e1ff8f3e3d3e7c6e63bfc}}, LCVQE {{cite:6be8393a7b68efea4406df01a456dbdf90bf6bc4}}, and CCLS {{cite:d5169888556714a05de0f24346b691ce6072bed1}} algorithms all handle equivalence constraints and can thus be applied to (REF ). They are conveniently implemented in the R package conclust by the last authors. COP-KMEANS and CCLS treat equivalence constraints as hard constraints and thus exactly solve (REF ). MPCK-MEANS and LCVQE handle equivalence constraints as soft constraints (in addition, MPCK-MEANS incorporates metric learning) and thus approximately solve (REF ).
m
b2eeed1ffe8296034336131816032384
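A hedged sketch of the standard edge-variable ILP on a toy instance, using the PuLP modeling library; the instance size and random costs are illustrative, and only the structure (assignment equalities plus triangle constraints) follows the description above:

```python
import itertools
import random
import pulp

K, n = 3, 2  # K subgraphs (dimensions), n nodes each
nodes = [(k, i) for k in range(K) for i in range(n)]
edges = [(u, v) for u, v in itertools.combinations(nodes, 2) if u[0] != v[0]]
random.seed(0)
cost = {e: random.random() for e in edges}  # illustrative edge costs

prob = pulp.LpProblem("MDADC", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", edges, cat="Binary")
prob += pulp.lpSum(cost[e] * x[e] for e in edges)

# Equality constraints: each node pairs with exactly one node of every other part.
for u in nodes:
    for k in range(K):
        if k != u[0]:
            prob += pulp.lpSum(
                x[(a, b)] for (a, b) in edges
                if (a == u and b[0] == k) or (b == u and a[0] == k)
            ) == 1

def key(a, b):
    """Return the stored orientation of edge {a, b}."""
    return (a, b) if (a, b) in cost else (b, a)

# Triangle (clique) constraints enforcing consistency across three parts.
for u, v, w in itertools.combinations(nodes, 3):
    if len({u[0], v[0], w[0]}) == 3:
        for a, b, c in ((u, v, w), (v, w, u), (w, u, v)):
            prob += x[key(a, b)] + x[key(b, c)] - x[key(a, c)] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
matched = [e for e in edges if x[e].value() > 0.5]
print(pulp.LpStatus[prob.status], matched)
```

For K=3 and n=2 this yields the expected K(K-1)n^2/2 = 12 binary variables; the selected edges partition the nodes into n cliques, one node per part.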
Our framework may not generalize well when the training data is small and/or has a large domain gap with the test data (e.g., synthetic FlyingThings3D to real-world KITTI). This is mainly due to the Transformer-based design. Unlike convolutional networks, Transformer models do not carry strong inductive biases and therefore require training on large-scale datasets to generalize well to unseen domains {{cite:f5d0d7ef5956578af6a5f48820d5998209f38713}}, {{cite:74230e7d3af3e23f8c798c8cc9f5522a9f07bbda}}, {{cite:18d3b60880dfd77d49d7db3d7646da5ed370a98c}}. Fortunately, many large-scale datasets are currently available, e.g., Virtual KITTI {{cite:a0ab078fc770efa45f841f2576e419f07c41e081}}, {{cite:0695ad6d10f2f5506e2b2af385bf2a486fa3d3f0}}, VIPER {{cite:30ee3d9923d53dd828b19ea700a412fc76bf6a2f}}, REFRESH {{cite:f152e4cf79050abadf7e6a69177e356d3440071e}}, AutoFlow {{cite:4255a067763aa6b77213714cbaa88b739fbd95db}} and TartanAir {{cite:8b179344fc3732858ac3b10c4c658282e47ef4c7}}; they can be used to enhance Transformers' generalization ability.
d
2d0ce89393f90e16158bbeece324a741
The surface tension force {{formula:61de69e4-aae5-4252-be44-cddd21758ba5}} in Eq. REF is modeled with the continuum surface force model {{cite:30fb83f1ba6cf544cf61191555082d9bf5c987d3}}: {{formula:d78c9975-ff64-4596-a50a-0d800197e1e7}}
m
652a40f09fe8c268056234841d480799
Owing to the relation in Eq. (REF ), commonly referred to as the kernel trick, operations in the high-dimensional feature space can be carried out without ever explicitly computing the coordinates of {{formula:fe9e03a3-01c8-49d5-b387-060688ddb94c}} ; instead, a kernel function evaluates the required inner products directly in input space. This alleviates the need to evaluate the high-dimensional coordinates {{formula:ea33eb3f-fd2c-473e-836e-a9b0cec0c801}} and {{formula:d8aecc1e-56c4-4f00-b532-2541daf44f34}} while still providing a similarity metric between the vectors {{formula:cdc57a55-7ba2-4ab9-88b8-da9f041b1eb2}} and {{formula:ae4fa501-438d-4445-a572-d9f85ed37e5a}} in the rich feature space {{cite:9e8410ca2374c2170c79f1a3cc189b612609c991}}.
m
23f0cf8b8d13372ca1f4ed248b25eedc
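A small sketch illustrating the kernel trick with the homogeneous polynomial kernel k(x, z) = (x^T z)^2, whose explicit feature map lists all degree-2 monomials; the kernel value matches the feature-space inner product without ever constructing the feature vectors (the kernel choice here is an illustrative example, not the one used in the cited work):

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map: all pairwise products x_i * x_j."""
    return np.outer(x, x).ravel()

def poly_kernel(x, z):
    """Kernel function evaluating <phi(x), phi(z)> directly in input space."""
    return np.dot(x, z) ** 2

rng = np.random.default_rng(0)
x, z = rng.normal(size=3), rng.normal(size=3)

explicit = np.dot(phi(x), phi(z))  # works in the 9-dimensional feature space
implicit = poly_kernel(x, z)       # never leaves the 3-dimensional input space
assert np.isclose(explicit, implicit)
```

The identity holds because sum_ij x_i x_j z_i z_j = (sum_i x_i z_i)^2; for kernels such as the RBF, the implicit feature space is infinite-dimensional and the trick becomes essential rather than merely convenient.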
Another research direction was to improve the computational complexity of Courcelle's theorem, since the function {{formula:12727831-e162-49a6-bc0c-1a5a9f9c5259}} grows as an exponential tower in the quantifier depth of the MSO formula. However, Frick and Grohe {{cite:0f196416a2d402f2bc8f16ffdda22c57253f30cd}} proved that this is unavoidable unless {{formula:278b91a4-360a-475a-9ad5-816c6dbc79fe}} , which raises the question: is there a (simpler) graph class where MSO model checking can be done in single-exponential (i.e., {{formula:3391d033-09e3-4f63-8cb3-7feeb04c91f6}} ) time? This was answered in the affirmative by Lampis {{cite:01c6c614980c41e4b783fce73e5491c1c8da4547}}, who introduced graphs of bounded neighborhood diversity. The classes of bounded treewidth and bounded neighborhood diversity are incomparable: for example, paths have unbounded neighborhood diversity but bounded treewidth, and vice versa for cliques. Bounded treewidth has become a standard parameter with many practical applications (cf. the survey {{cite:9ced350a27f35b6fa3f72891447c7c4c928b84b0}}); bounded neighborhood diversity is of theoretical interest {{cite:c5e513d482e90ca20d015903864b92461427ec5f}}, {{cite:02c7c18d02862d9be911d6b0202ddb4483a7353b}}, {{cite:d9d8ff0c1fbe51a7e8b3cee272da650d3aa4fb90}}, {{cite:23496446c269bd58011a49d16e80eff029ee7599}}, {{cite:8907dde703664505ba8bceeceb2854ca77e2524d}}, {{cite:b4b94c75b8193d222d371161c82da69d35260f47}}, {{cite:4246f2070644908f341d2f2233275b59035fc1ef}} because it can be viewed as representing the simplest of dense graphs.
i
4df0d679558c76be6c6ff116e800a155
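A hedged sketch of computing neighborhood diversity, under the standard definition that two vertices u, v have the same type iff N(u)\{v} = N(v)\{u}; the parameter is the number of resulting type classes. This is a quadratic-time illustration, not an optimized implementation:

```python
import networkx as nx
from itertools import combinations

def neighborhood_diversity(G):
    nodes = list(G.nodes)
    parent = {v: v for v in nodes}

    def find(v):  # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    # u and v share a type iff their neighborhoods agree outside {u, v}
    for u, v in combinations(nodes, 2):
        if set(G[u]) - {v} == set(G[v]) - {u}:
            parent[find(u)] = find(v)
    return len({find(v) for v in nodes})

print(neighborhood_diversity(nx.complete_graph(8)))  # 1: all clique vertices share a type
print(neighborhood_diversity(nx.path_graph(8)))      # grows with the path length
```

The two prints match the incomparability example in the text: cliques collapse to a single type, while paths produce one type per vertex.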
Broadly, NAS methods fall into three categories: reinforcement learning (RL) based {{cite:46c5bceb1ef893b62b141d50e7e3f0dbed989f7a}}, evolutionary algorithm (EA) based {{cite:ed059c38e7110aa4643b13785c1d92661c476ab7}}, and gradient based {{cite:4e32264b867ddc2ae3d544a1db5be424c8f59f33}}. The former two generate a finite number of discrete neural networks, either with a controller RNN or by sampling from a population, and train these models from scratch, which incurs a relatively high computational overhead (2000 GPU days for RL {{cite:16b19ce93180753733d2f1fed82fd6d44b201fd4}} on CIFAR-10 and 3150 GPU days for evolution {{cite:d224cc4b53970df01132e21b614b3238795ddeb8}} on ImageNet). In contrast, differentiable architecture search (Darts) {{cite:4e32264b867ddc2ae3d544a1db5be424c8f59f33}} is based on a continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. It eliminates the need to train and compare discrete neural networks: architecture parameters and operation weights are optimized alternately within a single supernet.
i
cde9833db64dc89f3859fb00b9e2e4ff
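A minimal PyTorch sketch of the continuous relaxation underlying Darts: the discrete choice among candidate operations on an edge is replaced by a softmax-weighted mixture, making the architecture parameters alpha differentiable. The candidate operation set below is an illustrative assumption:

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Softmax-weighted mixture over candidate operations (Darts-style relaxation)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture parameters

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

# Illustrative candidate set on one edge of the supernet
candidates = [nn.Conv2d(16, 16, 3, padding=1),
              nn.Conv2d(16, 16, 5, padding=2),
              nn.Identity()]
edge = MixedOp(candidates)
out = edge(torch.randn(2, 16, 8, 8))
# In Darts, operation weights and alpha are updated alternately; after the
# search, each edge is discretized to the operation with the largest alpha.
print(out.shape, torch.softmax(edge.alpha, 0))
```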
Remark 1 (Unification of {{cite:73bfd1bb384e539053203140b316e8d6e2bd0155}}) If {{formula:bd925a12-48fc-4188-bb49-60b852280abf}} , then the distribution {{formula:dbdc97d6-ea97-4444-9fb7-c15c92c407b0}} is reduced to {{formula:bef8bbc0-078d-4ca8-964f-ee7e9a264db8}} and the bound (REF ) is reduced to {{formula:e6f424c2-13b3-4ceb-b91c-13f3e66e4444}}
r
23bbf356833b69cfb24e3554af6fe08a
Our study is motivated by the fact that over the last few years several papers {{cite:b6abd5e6a252858e60a13562304cc36ca7b88e44}}, {{cite:c08ff479e9ad4648f43702098dd3a6a768ea72b7}}, {{cite:51a885b966763f3b606ecddfe4b5a6d13c3ea1d5}}, {{cite:083baa87101d541092439c69b487080d8e1d7058}} have independently reported that vertices in the inner shells of networks can be leveraged to identify high-centrality nodes or serve as seeds for community detection. However, each paper focused on only one type of analysis, and there was rarely any overlap between the networks studied in these papers. When we conducted an integrated study over a large set of real-world and synthetic networks, we observed that the reported properties of the vertices in the inner shells hold only for a certain type of network. This observation impelled us to investigate the topological properties of networks whose inner shells contain high-centrality nodes.
i
019c4b971d2dd7e11e091114c2606103
The fact that {{formula:675fc865-0aec-40ad-95a2-791c1655c003}} admits no compact perturbations increasing its scalar curvature is known as the Geroch conjecture. By identifying faces of a large coordinate cube in {{formula:383e7c09-9035-448b-b795-72a586a4b3ae}} , one can deduce this fact from the non-existence of positive scalar curvature (psc) metrics on the torus {{formula:2d14dc41-49de-4363-b975-51a3da120785}} , first observed by Schoen-Yau {{cite:549d83b9974797aff782019376354d85da7b7334}}, {{cite:3bac976a9bad03a563b38750419676bcd6123a22}} in dimensions less than 8 and subsequently established by Gromov-Lawson {{cite:53f03fe0d6a3f9dc13dc6392b41b7d1147a58b01}} in all dimensions. Though it is far from obvious, an argument due to Lohkamp {{cite:945c731bbf162b973685475ae8dbdb2569a2d7fa}} shows that the more far-reaching (Riemannian) positive mass theorem also follows from the statement that the connected sum of a torus and a closed manifold admits no psc metric. See Gromov's Four Lectures {{cite:930dd5d5efbe5b24b744a9f052b2f1f20cdd14fd}} for further discussion of this and other rigidity phenomena related to scalar curvature.
i
576152524d771131a3c82fa7e9ef88bc
We implement an aggregation-based method, TARN {{cite:5a900ad46372a2a07ae0c2cafec091a8b3f89e58}}, and a matching-based method, OTAM {{cite:8a99c7ff31340c6ad0dc9b430e2558905a87f176}}. We train these two models and our proposed CMOT on the meta-training set of HMDB-51, and test them on different meta-testing sets. We show the testing accuracy on two different meta-testing sets in Table REF . As expected, the aggregation-based method fails in the Ordering-Dominated setting while the matching-based method fails in the Content-Dominated setting. Our proposed method works well in both settings, verifying the effectiveness of the semantic-temporal trade-off. {{table:94ebeb31-7aca-46bc-97a7-e9554594b165}}
r
01e45470d09463fc9d350218b4a0b23b
Remark 1.5 Theorems REF and REF tell us that the effect of the partial confinement is so strong when {{formula:8eb303ef-0a20-4719-93fd-ec98e48f036a}} that a dimension reduction appears. On the other hand, this effect is so weak when {{formula:e9e600e6-2576-42ee-addc-87f2a024a8d4}} that we can almost ignore it. The latter is not trivial since the partial confinement is unbounded. Moreover, the dimension reduction of Bose-Einstein condensates is important both theoretically and experimentally, and it has been studied in various settings; see {{cite:bd3d4d14dbe5a621da0d70d70a91a0b6b995ab73}}, {{cite:1b33e2708ba6b0462ea3e2d7d6e8ae953bdaab78}} for cigar-shaped and disk-shaped condensates, and {{cite:efbbf762953b719f02f512646d3cfe403b599d1e}}, {{cite:b7e1f805b3111d3220a2dab8c29748270c0f9dc2}}, {{cite:c2cd0d1999ca38fcc9223f315814b0615852ab1a}}, {{cite:52b447247f57b2844552413ae300d467debb132a}}, {{cite:43d555293a85f7d16cfb2ca65e7c0c5d14a5d829}}, {{cite:26b85e8e9c3115ce71c9a7b8a0054064d58bbcfd}}, {{cite:a74e8b4362d189bc92ade826542549e46973dc29}}, {{cite:37cb7ead92f481b4570d3e3a4e3d0a9425b764c1}}, {{cite:3d908d7182cef43166ffd46cb2b7e2365f543aaa}}, {{cite:e9b05b08df04dac0b2c38051e9d631a2c56dc810}}, {{cite:18d862832b2ff15c31b9e08a1778589dd5cfec6e}} for further references and related results.
i
243a521b7fbdef2907c328f504bb913f
We then raise some concerns about the procedure and the benchmarks used to assess new CL methodologies. As we can see, with a pretrained model we can achieve impressive results relative to the current CL state-of-the-art {{cite:7cce20876ffa96dbc2871aebdf3ffcf5efef6f25}}, {{cite:33b00b51d2eb795e543f8677f6706115b7f7dfcb}}, {{cite:e18a53499a0357ec703dd5eda3299d780c326b65}}. This point has also been raised by GDumb {{cite:b5663595a855e592ee5c59452fa387aadbd96194}}, whose authors questioned the reported progress by providing a very simple baseline.
d
a94f26ddf29fecf6b5f6c9be40461d13
We first evaluate AA-RMVSNet on the DTU evaluation set {{cite:7e9aec8d3f47c}}f7822977efc5b2a430d50c51ae8}}. The depth comparison of Scan 13 with {{cite:4b572613b540ddf535ff134be193d30430fd6aa9}}, {{cite:3bebb764aa1c555bd4133a8b6caef50d50e773e4}} is shown in Fig. REF . Benefiting from the intra-view AA module, which integrates multi-scale and context-aware features, our method is able to estimate more complete and continuous depths for the low-textured surface of the paper box. Qualitative results compared with other methods are shown in Fig. REF . Owing to the improved depth map estimation, our method obtains more complete dense 3D point clouds with details preserved. The quantitative results on the whole DTU evaluation set are shown in Tab. REF , where accuracy and completeness are two absolute distances calculated by the official MATLAB evaluation code {{cite:7e9aec8d3f47cb7822977efc5b2a430d50c51ae8}}; overall is the mean average of the two metrics. Compared with the advanced methods, our method achieves the best completeness and competitive overall performance. In comparison with the two previous recurrent MVS networks, R-MVSNet and {{formula:64574266-20f6-4f1f-b1ef-c4248ffd7f55}} HC-RMVSNet, our method significantly improves both accuracy and completeness on the DTU dataset.
r
7a821a0fb32b2f0a8be37bf30511ed76
Since the dawn of modern exoplanetary science in the nineties, one of its most ambitious aspirations has been the discovery and characterization of “true” Earth analogs, i.e., rocky planets orbiting within the habitable zone (HZ) of solar twins. Looking at the distribution of the currently known exoplanets, it is evident that only a handful of candidates approach that sweet spot in the parameter space, and that all of them lack a reliable mass estimate, being hosted by stars too faint to be properly investigated through ultra-high-precision radial velocities (RVs). The pioneering results from the CoRoT satellite (built by an international consortium led by CNES; {{cite:b0b5673900e79f77be421fdeb3c5aa9215f4e4e9}}) and the very successful NASA missions Kepler {{cite:8ce01948b1e2294ca84538182e8fe364b046e4a9}} and TESS {{cite:64d7c41d835da3811a1952344e816309fcc2a68d}} demonstrated that space-based, wide-field photometry is extremely effective in detecting transiting planets, and new-generation ultra-stable spectrographs based on the HARPS/HARPS-N legacy {{cite:3874bc15eb946730d50831ed850f6693879479bf}}, {{cite:08209aea9302ccfbe16a524a517cfff5aa8359b1}}, such as ESPRESSO {{cite:f0c4edd0b77d32382768af61ba0b0e6dfcd76f9d}} and the planned HIRES@E-ELT {{cite:1687a09a4f55aaa45eae123abfdaa4fbd8a0b2d0}}, are approaching the 10 cm/s RV precision needed to confirm true Earths. In this context, the ESA M-class mission PLATO (PLAnetary Transits and Oscillation of Stars; {{cite:5d42656135a1bcebd39815938b0db2e81f0d418a}}), planned for launch in 2026, is designed to push the current technology to its limits in combined terms of photometric precision and overall field of view (FOV). The latter is directly linked to the number of available bright, solar-type main-sequence stars, which in turn will allow not only a proper follow-up by ground-based facilities, but also the extraction of stellar parameters (including age) with unprecedented accuracy from the asteroseismological analysis of the light curves themselves.
i
021f3da6d6d981f8590cb28ce759bc6a
To construct a Gaussian {{formula:c7453141-65ec-4a69-997f-d22c4a3c2005}} function, we refer to the formulation proposed in the PTOLEMY project {{cite:85ad9b956f817be23b794ffa9f83ad5b66351a99}} (see also {{cite:0dc275096dc66b72e1dc341984f88a127b29dcf1}}, {{cite:8d3ba0113191fb88a8c140f5e3733862628682f0}}, {{cite:941d80afa663489ca7bd35e7521416ff9e2b0839}}). We define the would-be observed numbers of {{formula:e82feaec-41de-473d-95ff-e352b40a132e}} -decay background events and C{{formula:d089046f-be29-49b6-ad39-b92eaa89f463}} B signal events within an energy bin {{formula:286edfaa-d60f-45d4-aafd-aec7f4143888}} , equal to the energy resolution of the detector, centered at {{formula:323f5f18-c6d3-4072-a03b-eef63e403134}} : {{formula:e9dc3e36-17a8-4107-a28d-df78f519a516}}
m
ab9cbc78b8160243334dff7ee118f3cd
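A schematic sketch of a binned Gaussian chi-square of the kind described above, assuming Poisson-dominated per-bin uncertainties; the count model below (background shape, signal location, bin grid) is a placeholder, since the actual signal and background rates and detector response are defined in the cited references:

```python
import numpy as np

def gaussian_chi2(n_obs, n_th):
    """Binned Gaussian chi^2 with sigma_i^2 = N_i (Poisson-dominated errors assumed)."""
    sigma2 = np.clip(n_th, 1e-12, None)
    return np.sum((n_obs - n_th) ** 2 / sigma2)

# Placeholder counts per energy bin of width Delta centered at E_i:
# a smooth beta-decay tail plus a localized CnuB signal peak.
E = np.linspace(0.0, 1.0, 40)                       # bin centers (arbitrary units)
background = 1e3 * np.exp(-3.0 * E)                 # illustrative beta-decay background
signal = 20.0 * np.exp(-0.5 * ((E - 0.8) / 0.02) ** 2)

n_obs = background + signal                         # would-be observation with signal
print("chi2 of background-only hypothesis:",
      gaussian_chi2(n_obs, background))
```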
The discovery of gravitational waves by LIGO {{cite:a8e65b8d2b344e0958c6c4ba5e2a7fa2909ea854}} opens a novel possibility for observing the early universe. The comparatively weak interaction between gravity and matter means that gravitational waves free-stream after their production, carrying a fingerprint of whatever process produced them, a fingerprint which is potentially observable today. In particular, a strong first-order phase transition in the early universe would give rise to a stochastic background of gravitational waves (for recent reviews see Refs. {{cite:c2fa784b171ac407d89f41e056b601586690f1d1}}, {{cite:31cdf60eafe1367e0ecfe780b1f662b764b7612f}}, {{cite:1df42a858f227a9d20e7153d177e8371ec7b4f8b}}), which may be visible to planned gravitational wave experiments such as LISA {{cite:8e49b00b5d4eaab356e46415542d4ca82638a1ce}}, DECIGO {{cite:e8aaae42f3a370cbbd417614a83c900181bb43c6}}, BBO {{cite:6991339206f84d0af6232713bfd0e5d9fc87e444}} and Taiji {{cite:9330a034d6d73cd322c9be02cec9ad0ea5ee5156}}. In fact, recent evidence for a possible stochastic background of gravitational waves from the NANOGrav experiment {{cite:230dcdfedf67fc623f889577807a6096d200894a}} has been interpreted as the gravitational wave signal of a first-order phase transition {{cite:a15f0524cbdb524010f7f0ce50af4a05d88268e2}}, {{cite:574bab81fa590ef6c3a1cffaacd41823fb5dc3e1}}, {{cite:862b7838e265ab1e3b12833248b6e4415ed47f47}}, {{cite:2a319ca6237e18f0f8e7bf8d98dbe99abb478210}}.
i
e49cead65d4f269b98122e935456d350
Finally, there is the issue of scalability. Although some DGP methods have been shown to scale well to large datasets, they have not been thoroughly benchmarked on highly structured datasets such as Imagenet {{cite:2cc674e08ed4f9f971edf25edab04c009512fc3c}}. The problem lies in the model depth required to achieve good performance on such a dataset: unlike MNIST, Imagenet requires substantially deeper networks, whereas DGPs are usually only tested on models with up to ten layers. It would therefore be essential to study and understand how DGPs scale to such a dataset.
d
cd39c7e4f1b968b5328ab7db06d1ec0c
For further evaluation, we quantitatively assess image translation performance using SSIM {{cite:0994b35424667439e3677e8b5c5b971702ea68e4}}, MSE, and PSNR. The results are shown in Table REF . Our model outperforms E-CDRD {{cite:c9d3b678ce203272ee327e9506bc9fee6739aa61}}, which learns a disentangled latent encoding for the source and the target for domain adaptation. Meanwhile, it matches the performance of StarGAN {{cite:aae21bd7e3b456b4f0180427ca7aa583d3a69217}}, which is designed for multi-domain image translation. {{table:7257e3eb-89a3-4e42-98b7-f28f63c20607}}
r
f149d701d632a0a93b54627352f46f73
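For reference, all three metrics are available in scikit-image; a small sketch on synthetic grayscale arrays (the evaluation in the table is of course on the actual translated images):

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(0)
reference = rng.random((64, 64))                     # stand-in for a target image
translated = np.clip(reference + 0.05 * rng.normal(size=(64, 64)), 0, 1)

mse = mean_squared_error(reference, translated)
psnr = peak_signal_noise_ratio(reference, translated, data_range=1.0)
ssim = structural_similarity(reference, translated, data_range=1.0)
print(f"MSE={mse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```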
The kernel of the HAP method is a similarity-paradigm link prediction algorithm. To verify the performance of the proposed HAP method, we compare it with six other link prediction methods: JA {{cite:c66980b4e4e250a0c6b6e6e6f88e61fb342ba4fa}}, PA {{cite:5e65aec37c8a0b4755469c1e07ca70cca9b00d44}}, CN {{cite:9e730bd6974ee5b984e42c1201685df2ff2a7f2e}}, CN1 {{cite:bfdefeef5abcf33ff23c6c42fefa53d273655a52}}, RA {{cite:fe2efd6ea4c466a253d89024ee74f90475c36eb4}} and RA1 {{cite:bfdefeef5abcf33ff23c6c42fefa53d273655a52}}.
m
7959cba1018db054141b3793b0c99a57
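As a point of reference, several of these similarity indices are available in NetworkX; a sketch computing CN, JA, PA, and RA scores for candidate node pairs (CN1 and RA1 are variants defined in the cited work and are omitted here):

```python
import networkx as nx

G = nx.karate_club_graph()
pairs = [(0, 33), (2, 13)]  # candidate non-edges to score

cn = {(u, v): len(list(nx.common_neighbors(G, u, v))) for u, v in pairs}
ja = {(u, v): s for u, v, s in nx.jaccard_coefficient(G, pairs)}
pa = {(u, v): s for u, v, s in nx.preferential_attachment(G, pairs)}
ra = {(u, v): s for u, v, s in nx.resource_allocation_index(G, pairs)}

for p in pairs:
    print(p, f"CN={cn[p]}  JA={ja[p]:.3f}  PA={pa[p]}  RA={ra[p]:.3f}")
```

Higher scores indicate a higher predicted likelihood that the pair forms a link; ranking all non-edges by such a score is the standard similarity-paradigm evaluation setup.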
{{formula:b5db6b87-d900-4470-bea0-435ccd6fc7cb}} Notice that the a.n.i. or e.n.i. property of {{formula:0760cab1-b1d0-45da-aba9-e2f9df894b70}} does not imply that {{formula:5ac174ad-b7c0-4125-9f58-310fc5f02f4e}} is regularly varying. Recall from {{cite:9e001d7123e7bfbae1044e96fa6a7145a7bb0339}} (cf. {{cite:e9f8bd05606b904506df06e35831f4b7fbf89f77}}) that the density of the standard log-normal distribution or the Weibull distribution with parameter {{formula:73d55cbf-c0cf-404f-acce-99eaa0aa3d3f}} is subexponential and e.n.i., but it is not regularly varying. Both distributions are known to be self-decomposable (cf. {{cite:e1962d686f858736dae48054b8eed12c2f76a57b}}).
r
94838346c8970eddef6ded0629234d9d
This paper is concerned with the derived symplectic geometry (in the sense of {{cite:b80f9ceb52a95087d10991c573d245349488a3e4}}) of critical loci in the presence of symmetries. Derived symplectic geometry can be seen as a model-independent, homotopy-invariant, and global way of dealing with the {{formula:c02a8943-dc19-4f35-b970-256facd362ea}} -manifolds of {{cite:2cb5ec6f4476378e9e80cb862e162407efd510b9}}.
i
8055da0f86edd0b3c58ecc9356f6e819
All numerical simulations were performed with COMSOL Multiphysics, which employs a finite element method in the frequency domain for the calculation of electromagnetic fields {{cite:9933a8a0fccb7ea8bce3d1225ebec08eeb1f569f}}. Our simulation model for the design and optimization step is reduced to a unit cell containing a single resonator of the metasurface (see Figure REF b). The surface character of the structure is maintained by applying Floquet periodic boundary conditions to the unit cell in the {{formula:8e9ad228-13dc-4ba1-a420-1e86a0888217}} - and {{formula:d4e26843-b772-4a34-9aab-fe8ac2edb343}} -directions to imitate an infinite surface of identical resonators. The boundaries in the {{formula:b069dc11-b0c7-40f8-9360-d17c511fe404}} -direction are set to open boundary conditions by adding perfectly matched layers (PML) to absorb the outgoing electromagnetic waves. Additionally, the model contains lossless integration ports above and below the resonator, where the different far-field quantities at the FF and TH are calculated by a near-field to far-field transformation (NFFFT) for periodic structures following {{cite:ca78c7f89ab2eb6ba07f170c79889b47184b7862}}: {{formula:13dc9a99-0573-4d3d-8b04-e40c7c747f31}}
m
cffe5a9cc781aca716ba2707258790ab
As seen in Table REF , we present a few statistical metrics to assess fairness across ethnicities in the RFW dataset using the MTCNN {{cite:e093c6ae7388348875f1e5d08c33f0bbdad75744}} model and FaceNet embeddings. It is not immediately evident from the results that one group consistently fares better, but a clear pattern of bias towards certain ethnicities emerged on deeper study. The prediction accuracy for the Asian (A) and Black (B) groups was lower than for the Indian (I) and White (W) groups; however, this alone is not enough to indicate bias, as the difference between groups is not significant. The Positive Predictive Value (PPV) and False Positive Rate (FPR) are more telling: White faces show a significantly lower FPR, and their PPV reaches 0.98 compared to only about 0.78 for the Asian group, indicating higher precision in detecting White faces than faces of other groups. {{table:75fd71b7-4b73-4bf0-9f9d-cd6d29e85a6b}}
r
9baec189731ba5a0602adac27f3c0579
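The group-wise PPV and FPR reported above follow directly from each group's confusion matrix on verification pairs; a minimal sketch, assuming binary same-identity labels and predictions per group (the toy predictions below are illustrative, not the RFW results):

```python
import numpy as np

def ppv_fpr(y_true, y_pred):
    """PPV = TP/(TP+FP); FPR = FP/(FP+TN), from binary labels and predictions."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return tp / (tp + fp), fp / (fp + tn)

# Illustrative per-group verification outcomes (a 90%-accurate toy model)
rng = np.random.default_rng(0)
for group in ("Asian", "Black", "Indian", "White"):
    y_true = rng.integers(0, 2, 1000)
    y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)
    ppv, fpr = ppv_fpr(y_true, y_pred)
    print(f"{group}: PPV={ppv:.3f}  FPR={fpr:.3f}")
```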
A promising direction for future research is the addition of a third task, potentially radiotherapy dose plan estimation. We could then generate contours that are consistent with optimal dose planning. Further studies could also focus on sophisticated MTL network architectures similar to sluice networks {{cite:9c97aa3f6f464bad476bfbefc45dbe3cb569f5fc}} or routing networks {{cite:7002493188550ac72d03eeeef655e1196ca9dc9d}}. Moreover, one could study how to fuse the contours from the segmentation and registration paths in a smarter way than simply selecting one of them based on the validation set.
d
41690663ad919dd63732a6f2cd4eef50
The results of Section REF suggest two things. First, data complexity (i.e., the diversity of images) largely affects the pre-training rate {{formula:d1988310-0e48-4634-ab20-feaedbf1a449}} . This is reasonable because if we want a network to recognize more diverse images, we need to train it with more examples. Indeed, prior studies {{cite:54920963a466f898477061c03357d6bb12829ac8}}, {{cite:47dfcad9a2e6aa2ee9e970120821892e54f902c9}} observed that {{formula:9ef843c2-f28d-426b-82a8-bf54740f1812}} is inversely proportional to the intrinsic dimension of the data (e.g., the dimension of the data manifold), a notion equivalent to data complexity.
r
2f3b217deb4aad077762c9aa50776585
We use the Surface Evolver {{cite:6df3a6c04bb3e388d0467bdb20d15cc8d61c6b2c}} in the manner described by {{cite:a2bbb7342d26e885f4e64219a30352c8da2ab1ef}}. We create three foams of around 725 bubbles (in this range the number of bubbles does not affect the results; data not shown) between parallel walls with a Voronoi construction {{cite:ab641c64f85d509da8b88388a8e480b0c00e24e9}}, {{cite:19aa1b75682353c007e023d33a7ee3ffc07e94b8}}. The channel has unit length and width {{formula:1ef2d718-5122-4ff5-ae42-b26483c45cb6}} . The foams are monodisperse, with bubble area denoted {{formula:9d3a765d-2fea-48ab-b18c-b9448f4dd6a4}} and about 22 bubbles in the cross-section of the channel. A bubble in the centre of the channel is chosen to represent the obstacle, and its periphery constrained to the required shape; its area is then increased until it reaches the desired area ratio {{formula:4f1c93ee-2379-4fba-99c6-ca3b06a705dd}} and it is then fixed – see figure REF (a). The tension of each film, {{formula:942d8c4b-76a6-4d6e-9ce9-1a0cecf41621}} , which is twice the air-liquid surface tension and is in effect a line tension, is taken equal to one, without loss of generality.
m
c3299e7c25e08515b98592c6d06a59a2
Results are replicated using the original authors' covariate set for matching and their model specification for regression. Since virtually all of the papers replicated in this study use matching in conjunction with regression {{cite:bedb74bc9bdeca8471602c24b758dc2f5fd5a1af}}, we do the same here. We stay as close to the original authors' preprocessing and post-matching estimation as possible, changing only the method used to construct the matches. This means that we run the same regression models the authors do after matching. Each paper produces results using different combinations of matching covariates, treatment and dependent variables, or post-matching regression adjustment. We replicate as many of the combinations used by the original authors as possible; each row in Figures REF and REF and in Tables REF and REF is one such combination, which explains why there are many different results for a single paper.
m
0394e4ba1b5411bab7111f6e9b08d07a
In order to compute the dominant eigenvalue of {{formula:e0674bab-173f-4399-b00b-a237c732ffd7}} numerically, we restrict ourselves to intervals {{formula:bd3af1c8-e058-444d-a11a-fc0817d7c107}} for some {{formula:cfef9eb7-6b61-41d2-8817-c7db3611f6ae}} . It is shown in {{cite:063e4dfd2b4ec2e8053bb745784025420e21b8c9}} that the dominant eigenvalue {{formula:f223918c-78f6-4dec-a4d1-1294f2aee482}} of {{formula:39c962b4-3f54-42f1-815a-eaa25bfb413d}} and the smallest positive solution {{formula:d1f005b9-a1bc-4b8b-94ba-c16d1dc4bab4}} of the transcendental equation {{formula:b606d529-427a-4a01-ad0a-ed2b19a33120}} are related via {{formula:900e9b70-ce47-4c2d-93fe-fd8be53b5b71}} . We refer to Tab. REF for exact values and approximate the dominant eigenvalue with positive collocation methods (piecewise linear, polynomial and spline) and positive Bubnov-Galerkin methods (piecewise constant). This is based on a grid (REF ) with {{formula:f2735da6-adcc-47a4-9ae6-ccc2ca04c0a5}} , {{formula:4c7bf3ad-9054-4619-be85-fa908a13d460}} and {{formula:04db09a5-7fb7-4c0f-ba09-0a79c0fd0ffb}} , {{formula:3dd339a2-7bd3-4ed6-b0d8-712530d5e62c}} . For the discrete projection methods, the remaining integrals are approximated by the summed midpoint rule (REF ) with the centered nodes {{formula:f42dfb30-2c00-4918-a387-c6e63b2ff922}} , {{formula:4b4a164a-a5db-404e-9b2e-b0d8c20b6a82}} .
m
e3286db7edd4c54191b13e3d54f9113e
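A small sketch of the summed midpoint rule with centered nodes used for the discrete projection methods; the specific integrands and the transcendental eigenvalue equation are as in the references, so the integrand below is a placeholder used only to check the rule's second-order convergence:

```python
import numpy as np

def summed_midpoint(f, a, b, m):
    """Summed midpoint rule with m centered nodes x_j = a + (j + 1/2) h, h = (b - a)/m."""
    h = (b - a) / m
    nodes = a + (np.arange(m) + 0.5) * h
    return h * np.sum(f(nodes))

# Sanity check on a smooth integrand: error shrinks roughly 4x per doubling of m
exact = 1.0 - np.cos(1.0)  # integral of sin over [0, 1]
for m in (10, 20, 40):
    err = abs(summed_midpoint(np.sin, 0.0, 1.0, m) - exact)
    print(m, err)
```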
However, a complete description of these measurement processes has been missing thus far. To make use of the full power of such protocols for quantum state engineering in optical quantum technologies {{cite:f73ca5b6dac1ab066c462938128b9ed291b144b3}}, {{cite:84510f589ec9ddc42206be1b1633a4a6b02afda4}}, {{cite:75ceaa87b698c7266228c30ec90e92f5bfff1a2d}}, the corresponding theory and the properties of the states of light are needed. One goal in this direction is to construct the quantum theory of measurement for HHG and to obtain the generic structure of the entangled states of the electron and the field modes in strong-field processes. Both are presented in the present work by introducing the natural language in which measurements are described. This is done by providing the necessary measurement operations and the corresponding positive operator-valued measures {{cite:1505396007e884be165915b0d7dca88ec54c68d2}}, {{cite:31f940cc85fdddaf5732a9105fc7051fa2c0659a}}. The notion of measurement operations is well known in quantum information science and has led to many advances, ranging from light-engineering protocols {{cite:5c307fb824e56423fad0cc73777e958abeb19d9d}} to the description of open quantum systems {{cite:71721cc09f4e5be6255aef4d77bda1bb109cb48d}} and quantum control {{cite:ded15c1ad18195a145ea8c8980d1eed8c32cf555}}. These methods, however, play a new role in the description of the interaction of an intense laser field with matter. A quantum operation {{formula:19ab7c14-28b9-4ac7-8794-ecec990303af}} is a completely positive map of one quantum state to another {{formula:3e9ba1b2-400a-4959-b477-10243552920f}} , and describes how the state {{formula:e4ff3e7e-ff4d-4a19-810d-72886245f121}} changes during a process with outcome {{formula:06a7d152-07e4-4213-93c9-bd728abe8062}} , which occurs with probability {{formula:2094cd52-7558-4075-b059-e55f91b72a1c}} . The representation theorem for quantum operations states that the operation can be expressed as {{formula:0862d2ef-52d1-421f-bc2b-c71cf2896a04}} , with linear operators {{formula:19cfe001-8bc2-4828-b65b-86659d406e2b}} satisfying {{formula:9a65a1e6-9909-4f09-a6c8-f0acb6f9dff3}} . The operators {{formula:6db0b2bd-da1a-4fe2-bae0-6a9f2861d3f0}} correspond to generalized measurement operations, which for the special case of projective (von Neumann) measurements are given by projection operators {{formula:1e4b5b4e-0f47-4520-922f-710209cb981b}} . This concept of quantum operations naturally leads to measurement theory via positive operator-valued measures (POVMs), in which each possible measurement outcome {{formula:5f0fca4f-8bff-41ab-98a2-83da49d32286}} is associated with a positive operator {{formula:55aeb397-08ba-4978-9caf-1923059ff8ae}} . These operators are the elements of the set {{formula:ecf65804-7f8f-49b3-83c7-6f622a9739e1}} defining the POVM, and allow one to construct all possible operations which can be performed on a quantum system. It will be shown that within this framework a conditioning measurement on the electron allows one to generate massive, controllable entangled and superposition states of the optical field.
i
18ee0fa93854b9c70c178c1d870b670c
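A numerical sketch of the representation theorem stated above: Kraus operators satisfying the completeness relation define outcome probabilities p(m) = Tr(K_m rho K_m^dagger) and the conditioned post-measurement states. The example uses a projective measurement of a single qubit, the special case mentioned in the text:

```python
import numpy as np

# Projective (von Neumann) measurement in the computational basis: K_m = |m><m|
K = [np.array([[1, 0], [0, 0]], dtype=complex),
     np.array([[0, 0], [0, 1]], dtype=complex)]
# Completeness: sum_m K_m^dagger K_m = identity
assert np.allclose(sum(Km.conj().T @ Km for Km in K), np.eye(2))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())  # state |+><+|

for m, Km in enumerate(K):
    p = np.trace(Km @ rho @ Km.conj().T).real  # outcome probability
    post = Km @ rho @ Km.conj().T / p          # conditioned post-measurement state
    print(f"outcome {m}: p = {p:.2f}")
# Each outcome occurs with probability 1/2, and conditioning collapses |+> to |0> or |1>.
```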