text
stringlengths
54
548k
label
stringclasses
4 values
id_
stringlengths
32
32
In the quantum field theory, redundant coupling parameters can be removed by applying a local field redefinition, while physical observables like particle masses and scattering amplitudes are independent of field redefinitions {{cite:826f5d525966a90c5790e669cc936f777618e5bd}}. In this letter, we investigated the possibility if a physical mass parameter can be redundant by applying a nonlocal field redefinition. For this purpose, we presented three explicit examples equipped with invertible nonlocal wave{{formula:63387e19-ce69-4a5e-981b-43c050959689}} field redefinitions. In the case of the classical harmonic oscillator, the angular frequency became redundant using (), while it could be recaptured to the system by (). For the scalar field theory, by applying (), the mass parameter of the spin-0 field was removed from the theory, while it could be reattached to the theory upon (). In similar fashion, we introduced () and its inverse () for the Dirac field theory, using them, the mass parameter of the spin-{{formula:cca31e63-168e-41c2-9fe5-65f77992f994}} spinor field could be eliminated and restored respectively.
d
650f70f2c07ce78d0c0d36253e1da0f0
We note that our results are mostly concerned when {{formula:25239aba-9567-4dbb-aa52-941ccc8464f0}} is odd. Such condition comes from the necessity that the function (REF ) be non-negative. Essentially, this also explains why the assumption that {{formula:2fb0984d-ccb5-4759-962f-b6c6055e7450}} is odd appears in {{cite:93fcd435d5e04238d5d4b4fd1816e1fdf0247ffe}}. We could replace this hypothesis by assuming that {{formula:a16f724b-0dc5-4a40-97ec-b1582a6afe8a}} is either non-negative or non-positive, however, whether (REF ) has non-negative or non-positive solutions for {{formula:822c0465-5522-4b6f-8c57-da6a7a2af4b8}} is an open question. For the CH equation this is proved by using a diffeomorphism constructed from the its solutions {{cite:9b8774c2c260986d9a1abd74649e32d9697a0306}}. Although we could find a similar diffeomorphism for (REF ), we do not have a similar result as {{cite:9b8774c2c260986d9a1abd74649e32d9697a0306}}. Moreover, the impossibility of the extension of such result to (REF ) make us also unable to proceed similarly as in {{cite:aaf792f4d68f2b2db477735081a69787c1dc5b4d}} to prove that (REF ) does not have compactly supported solutions corresponding to an initial data compactly supported.
d
ab2b3156973a1ddf501dcc900f7967c6
BLEU: BLEU score is a common automatic evaluation method for majority NLP tasks {{cite:92422a605c7b1dbc663e8e380455b4fdeaa116fa}}, {{cite:c7cb2e5c4c035d609ae7112d872a646573f08e14}}, which score the n-gram overlaps between the generated sequences and reference sequences. For our experiment we report the BLEU-4 score. Diversity: Diversity is usually reported in the task of dialogue generation {{cite:cc901dff75753e732adf26d9b693e458b2315728}}, which score the number of distinct n-grams in generated responses, and {{formula:9b2999db-d59d-4e32-8e72-55d890d1d8b2}} for this experiment.
r
cfac5e6331c9d432f010767793087c20
Intuitively, it is possible to learn general features by training a model to simultaneously do well on multiple tasks. Recent work in NLP started to show promising results on learning a generalist model with multi-task learning {{cite:35fd980d7f978afef369b528096d2564308e112d}}, {{cite:4ca7bae83e79208e61f247d1613f9bfc89269784}}. In computer vision, the biggest challenge of training a multi-task model is in the data collection and annotation. Despite datasets like COCO {{cite:9b49e8a6869834b1b218cef7958750473dbdf919}}, collecting a wide variety of annotations (, instance segmentation, person keypoints, image caption) for the same image dataset is quite challenging. Due to the time consuming nature of annotating images with labels, it is hard to scale such efforts with the number of images and the number of tasks. The lack of large scale multi-task datasets impedes the progress in multi-task learning for computer vision.
i
ea0ae0214b3e33e0cdc8337e24278a24
Let {{formula:6a8286fb-1080-4f2e-8662-0732f55e19e1}} denote the set of the first {{formula:314b198f-3462-4362-b98e-7d5d7edfe5d3}} natural numbers. Let {{formula:c377f8de-5377-4317-b120-cbfe495700af}} ({{formula:b66909b6-ebd4-4df8-a220-b50fe93a4a96}} ) denote the largest (least) integer less (greater) than or equal to {{formula:c68dfcec-a5d3-404b-96a8-b1b7f7e30728}} . For two positive integers {{formula:0115bd22-4b11-4329-bbc5-48175c1b571e}} and {{formula:e5ccf134-47e0-4dd6-9433-7eb9d565742d}} with {{formula:4ec3c46c-9868-40a4-9ed2-3debfba0874f}} , the set {{formula:52f46a4c-0e07-4120-8542-f60d807e6dd3}} is denoted by {{formula:83a572e5-5db3-4b4a-b072-e8e144397909}} . The terms and concepts that we do not define can be found in {{cite:523821ccde22344a439ddd16c42303e03f87efba}}.
r
ca9ccee30d15b5ddf1dfbe8e6e044426
Datasets. Our data was gathered using Pushshift {{cite:11294999c4eb07e2efe9b46c231ba83d10ab18ea}} and comprised of all the comments and posts made on Reddit during the period from 01/2015 to 04/2020. In total, this included 5B comments and 684M posts from 39M unique users. For the analysis presented in this section, we use this data to construct three different datasets that are described below. The characteristics of each dataset is illustrated in table:relationship:datasets.
m
2c28413d1f627756dcc7ae66b1e87a9f
The proof technique is inspired by the arguments from {{cite:20e00768fcbb54c8a1db6bd41cab81dd1c5d7d9e}} developed for analyzing stochastic model-based algorithms, with several new elements along developed for handling the challenges introduced by the model averaging and partial participation mechanisms associated with FedProx. A particular crux here is that due to the random subset model aggregation of {{formula:784d4175-aee3-4637-b200-2e11dd7dbcd3}} , the local function values {{formula:dcb01cba-7fc7-45f7-b323-c2785fa8b5dd}} are no longer independent to each other though {{formula:4517eb7d-9767-467d-af31-efe623bc9971}} is uniformly random. As a consequence, {{formula:72f4aaa3-7418-4896-8e39-7f21123f1230}} is not an unbiased estimation of {{formula:4ca7a6fc-13e3-4b84-b640-de21c2cd52a8}} . To overcome this technical obstacle, we make use of a key observation that {{formula:75e80d5c-5467-4f01-860a-32ae4a64b10f}} will be almost surely close enough to {{formula:da59f5c6-ba4d-43bd-be60-99e763e448a2}} if the learning rate {{formula:a77b0400-7ea5-4522-8921-234231cebd48}} is small enough (which is the case in our choice of {{formula:cfbdb190-10c9-4e3e-8c80-ee270236b68f}} ), and thus we can replace the former with the latter whenever beneficial but without introducing too much approximation error. A full proof of this result can be found in Appendix REF .
r
6dc7d1e33040b527321194d2c789b305
As another baseline, we compare the performance of our method against SuperGlue {{cite:8d6342b1cfe0b322f4ef30c91ed8ae4fc8a7f873}}. While initially proposed for pose estimation and homography, SuperGlue has achieved state-of-the-art performance for feature matching {{cite:2d640a3ef0c43be0db9168c3c8dc6f169b12f412}}, {{cite:b005213d5e61c3b9d92e2aadee51b9716015d57c}}, {{cite:91ebe34475c28cc2c122bc5df1a378e9948e76b6}}. So, we extend SuperGlue to object identification using the inlier to outlier ratio as the matching score. In Figure REF , we show the object identification results for SuperGlue along with NetVLAD and AirObject. It can be observed that SuperGlue provides high recall and low precision matching. We believe that the high amount of perceptual aliasing coupled with no background context leads to low precision for SuperGlue. Furthermore, the computational time required for local feature matching makes SuperGlue infeasible for object identification, given the large array of objects present within a single video. Hence, this further solidifies the necessity for our proposed robust object encoding and identification method, AirObject. {{figure:fb43ea9b-de95-4041-a7bb-f690540854ca}}
m
af99d829c6e32faecdb508f62c7b1ac0
In addition to the score function, each neuron of a hidden layer also has an activation function that makes the network nonlinear. Several activation functions such as sigmoid, hyperbolic tangent (Tanh) and rectified linear unit (ReLU) functions have been used as the activation function. While some studies such as {{cite:563a615ca11b9ec5e5b60b7aacf9237580b14b8c}} have shown that ReLU outperform the others in most of the cases, we also examined sigmoid and Tanh in the following experiments. Finally, the last layer of MLP, called output layer, maps the final hidden layer to the scores of the classes by using its own score function. We used both the classical c-operator and the new ef-operator at the output layer to make the final decision.
r
7af2844d1e23b8e9b31d9477f61852a5
Data Augmentations: Following {{cite:ba4d613f7cca3db0306bf8990e3b85b62664aa97}}, {{cite:c3366af2273b17d5fb2520d461f13a0cd4f57d2f}}, {{cite:8824194439b3339438ea6342bb468578e231457b}}, we use horizontal flipping, random cropping, Gaussian blurring, and color jittering for augmentation. We also use a multi-temporal-resolution idea which is analogous to multi-crop in SwAV {{cite:0cde94917bb18ade9f081294d90ccc8dfa54ad2a}}. In this augmentation strategy, we sample shorter length clips alongside longer ones. We observe this makes convergence faster, but does not affect the final accuracy.
r
85f9ef844bb78b39d0411fadf9b801e4
We see SQL variants being the top performing amongst off- and on-policy agents. Generally, Q-learning performs worse than policy gradients however, when comparing their combinatorial counterparts (SQL vs SPG), we see this trend reversed. Generally the greedy baseline helps SPG versus using a critic, consistent with {{cite:dd7671bb10438870ab692edecfd4570410d54177}}. Increasing the beam size from k=1 (Greedy) to k=20 (Beam) for SQL improves performance slightly however, the highest performing variant is the only non-sequential method evaluated – SQL (Masked). We believe this shows the promise of non-sequential generation of proteins, where the model learns to directly generate the best performing structures. {{table:d2c0c378-6ec0-47e3-ad02-4c2bcc00fcfb}}{{figure:4fd7aebe-4bf4-4cb1-abbf-ea23d3499068}}{{figure:c3d34b4b-df1f-438d-b0be-3473e9bd1251}}
r
fd642fb74d330af9e30e9e1e3c2e405c
Recently, the concept of being able to engineer materials for the fine control of water flow through nanochannels has gathered a lot of attention. Its application can be envisioned, for example, in membrane separation technology, where selectivity is achieved through both the size of the nanoscale pore or channel {{cite:1c7980d31936bec8840f1c6c75c5e259a7dd6fa6}}, {{cite:c82f4e0babd71194a1ab53e74a47947519177284}}, {{cite:3800923248c0a83a5ac9ce124b2056e0987a43f7}}, {{cite:8bb04de93b09706c76a3f3b778d9f5ec346c734a}}, {{cite:23feff9e7e8c6a43fba4072bffebeb750a8e294b}}, or alternatively by manipulating the surface chemistry effectively mimicking mechanisms found in biological membranes {{cite:dade3505da608c082122dc49482750b1e38f2ec9}}, {{cite:dbc6e254d98ac20d90f7282f3ec8536298b060ea}}, {{cite:e51ba090ec5bb89fb5cab19b46fc165722a4ffb0}}, {{cite:36a10973f31e28047188f798b5ac09ebe1c1ee23}}. Water, a substance which already displays anomalous properties in the bulk {{cite:2ee5efbc9e8173625d2eb5a2b41da1b53f3cabcd}}, {{cite:942a4d40ac21d4a45eb0ded5cd86feb4f94d810c}}, {{cite:dd5f37485f386b785237f9c1b7e08a189590b314}}, shows even more curious behaviour under confinement. Interesting structural and transport phenomena, such as single file diffusion {{cite:8536e45dabd3c82b2d16fb84180e28943588c9b0}}, {{cite:0e9c7507a2a1a082746e9aae58334e6be55e8e67}}, {{cite:763b18f42612e87cc45fac7fda06614cb2e90982}}, {{cite:9eb7d8df8eac0ca56688a055cc42b8398f83a5c5}}, {{cite:1a462863f594a9edd3f03fbde03298d4ffd9fac1}} and ultra-fast flow through carbon nanotubes (CNTs) {{cite:43255a446b7385908874cb27f2afafa7624a86b0}}, {{cite:7d3694dc78ed984f3f18360017c7b8bbae593ab7}}, are only some examples of this. The behaviour of water under confinement has been studied at length with a view of quantifying the extent of these phenomena in terms of the properties of the porous media such as geometry, surface interaction and in terms of the fluid's thermodynamic state {{cite:43255a446b7385908874cb27f2afafa7624a86b0}}, {{cite:7d3694dc78ed984f3f18360017c7b8bbae593ab7}}, {{cite:d17cd29f986d9aeb602ebfbe51688f383c55d973}}, {{cite:ca656b425019d170f70e867e600ba3cb6f9a8276}}, {{cite:ee29953be84ea289ff567f6dbfb50331449af0d8}}, {{cite:7e4f091e41ff036f3142d8a11bf63ef839aa6cf9}}, {{cite:1fd85e7e32053b27d8818aea74a3bb77228f7ad1}}, {{cite:83bf23016b1a44f86b165e1d36803aadf2cc9e7f}}, {{cite:febc235a953326c83d61e584de4dde6327d4bb53}}, {{cite:2c5337a33cabb1256c456db2a1670e2aa8c7bd9b}}.
i
acb3cf73dda435223f21e73d4bf2643a
We conclude the appendix with our robustness when classifying adversarial examples created from other models. First, we tested the transferability of adversarial examples from model (A) in Table REF , a baseline model that was trained in standard training only on clean images, to our model. We attacked each of the five models trained with our method using adversarial examples generated with model (A). Additionally, we tested the transferability of adversarial examples generated with a model trained using adversarial training as in {{cite:838e15e22e3b07a8c74225793d3f11c6a4c5c961}}. Results are shown in Table REF . It is evident that the transferability of adversarial examples from models trained with methods that are different from ours is much lower compared to the one presented in Table REF .
m
cd7b7629c7db997443fac950786dd004
In order to state our result, we first recall the local existence of strong solutions to the ideal incompressible MHD equations (REF )–() in the domain {{formula:2e34634e-9266-4346-a0ff-f0dc7a6f33b0}} . The proof can be found in {{cite:d7a38e0a3cc4e0465b95241d1a56683a76ff4ac4}}, {{cite:814da7a17750f85be76d9b9f7694a6985294d1a0}}.
r
c530eb8f3a67e891846c5e6322682bb6
In 2018, Li et al.{{cite:e019b3170e570395ac92d85a827da6ef879c55ba}} proposed a technique called PhotoWCT, which further improves the baseline established in {{cite:b29ce6ad89bf634108659d336d2ea17e18d01fb2}}. This method changes the upsampling step to the unpooling operation in the decoder, to smooth the final result. This method maintains the spatial consistency of the content image and the stylized image. In 2019, Yoo et al.{{cite:ac017525c25d19ec09caa6afcfaf2e439678e632}} proposed WCT{{formula:698a4b9f-ef77-4e10-b60f-8ec7066bb20c}} method inspired by WCT. This method uses wavelet pooling instead of maximum pooling in {{cite:e019b3170e570395ac92d85a827da6ef879c55ba}}. Wavelet pooling retains the image details, while the model introduces richer style elements through multi-level stylization operations.
m
fa6ae53933d8f3332bad8af3ca9deae8
where {{formula:a4bcb0e2-7e97-4a73-add3-5a88fb58f2d9}} denotes the scaled (by the energy constraint) adjoint veclocity projected onto the hypersurface tangential to the energy hypersphere at {{formula:70d8a65c-5641-4bae-97df-7d6c496f114b}} , as described in detail in {{cite:8f735ef15d944dbc32a33aafb95179e0a1415683}}. The angle of rotation {{formula:9a28636f-b3ed-48db-8e2c-0df8aa8f2f03}} is calculated by using a backtracking line search {{cite:d778a880a49fa614d3c490796581dfaf98148e02}}. This looping procedure is repeated until convergence has been reached as measured by the normalized residual {{formula:37290c02-9337-41f7-b6a5-820e05ec4191}} , defined by {{formula:21a087d9-6199-4b17-afed-dca7b99a896e}}
m
db1b8fb1355e256be402d0adeb0d0d16
Figures REF and REF show the models that best fit the observed polarization for each source, alongside the maps from Figure REF . Comparison with our simple model shows that the polarization in DG Tau and Haro 6-13 is broadly consistent with that expected from grain alignment to the radiation anisotropy; the polarization vectors are azimuthally oriented, with two low-polarization holes caused by beam averaging on either side of the disk major axis. In contrast, models of the expected polarization from mechanical alignment through the Gold mechanism for these disks produced azimuthally oriented polarization vectors with low-polarization holes on either side of the disks' minor axes. However, the azimuthal variations in polarized intensity expected from radiative and aerodynamic alignment are not seen in DG Tau and Haro 6-13 (see {{cite:15bd59ed114dba0d996045f47624b08b0660629e}}, Figure 12.). The polarization in RY Tau and MWC 480 is broadly consistent with that expected from scattering; the polarization vectors are aligned with the minor axis of the disk, and the polarized intensity peaks at the center of the disk. In contrast, the model for radiative alignment predicts that polarized emission would peak at two points along the major axis in RY Tau and MWC 480. Additionally, we note that while Haro 6-13 and MWC 480 have nearly the same inclination angle (40{{formula:b18400ed-e239-4308-a7f4-b04f4538509c}} and 36{{formula:19a85ed7-57b3-4be1-87d3-702d61f3dc57}} , respectively) they have different polarization morphologies, which indicates that the differences in these two disks cannot be attributed solely to differences in inclination angle.
d
8ecdc89e121f4e8c8a66cca5542f4d30
Quantitative properties, a.k.a. quantitative languages, were defined in {{cite:345721eb4a44eedf5fa56021264e6b5ba063f21f}}. Although such properties have been studied much in the context of probabilistic model checking {{cite:9bd8db4f052ab5480abf7500b6a4217774441804}}, decision problems in verification {{cite:345721eb4a44eedf5fa56021264e6b5ba063f21f}}, and games with quantitative objectives {{cite:4ca0c20f5a1246e61b6bf52ec26cdf35bbf15b32}}, {{cite:12044daa6e48315d1960235aaa5a0e8488feb4d7}}, in runtime verification, we observe a gap. While some formalisms for monitoring certain quantitative properties have been proposed {{cite:d849ba91f3b987a4ad9b17dfb7ab97a083aa4bac}}, {{cite:a7b9fa8a9f19b637e3b8cd1e8fffe2e6c7773560}}, {{cite:b52f674e9aa47d936650c8e7462ae9fd012b99b6}}, to the best of our knowledge, our work is the first general semantic framework that explores what it means to monitor and approximate generic quantitative properties of traces. We believe that such a framework is needed for the systematic study of precision-resource trade-offs in runtime verification. See {{cite:9c075b74a376cc1a82ed38c92b33e42840d86803}} for a discussion of why quantitative verification at runtime is needed for self-adapting systems, and {{cite:d788eef70f76a773e6406bb8ed5a7564c1ace8b3}}, {{cite:63b284bab4ff2e69cf9a113c838c591e6b9695e8}} for monitoring neural networks.
i
b6a3b343a08290d0ac01b7ec73fbe864
In the past few years, convolutional neural networks (CNNs) have been widely used for LF image SR and achieved promising performance {{cite:a02e7e760528be77345194df50773e15b472e86e}}, {{cite:92c8e910a5ca4353148f5896757348f9b4eb6521}}, {{cite:875f2e39f1298b5c73441d1f570537126ff20cc3}}, {{cite:3b36f99fce2c1dad67ce86def5939333792d40f1}}, {{cite:e0b4bbb7e2878fa5ea7e61c559320d47ad50d46f}}, {{cite:29c40e4b2a5a7f5dde42b64b7b8c1090fa08bc47}}, {{cite:c97adb442d5e8e537ff65b17f000bcf1a31bfc4a}}, {{cite:52491215da4543d967b8408e66e19dceafd34d86}}. Yoon et al. {{cite:a02e7e760528be77345194df50773e15b472e86e}} proposed the first CNN-based method called LFCNN to improve the resolution of LF images. Yuan et al. {{cite:92c8e910a5ca4353148f5896757348f9b4eb6521}} applied EDSR {{cite:a7d81bade945cce4d5e00f0dcfeced10b1dc863e}} to super-resolve each sub-aperture image (SAI) independently, and developed an EPI-enhancement network to refine the super-resolved images. Zhang et al. {{cite:3b36f99fce2c1dad67ce86def5939333792d40f1}} proposed a multi-branch residual network to incorporate the multi-directional epipolar geometry prior for LF image SR. Since both view-wise angular information and image-wise spatial information contribute to the SR performance, state-of-the-art CNN-based methods {{cite:e0b4bbb7e2878fa5ea7e61c559320d47ad50d46f}}, {{cite:c97adb442d5e8e537ff65b17f000bcf1a31bfc4a}}, {{cite:29c40e4b2a5a7f5dde42b64b7b8c1090fa08bc47}}, {{cite:52491215da4543d967b8408e66e19dceafd34d86}} designed different network structures to leverage both angular and spatial information for LF image SR.
i
789277eb3f8e56d84565dbcc25308462
Relation to BN. The proposed LogN is formally close to BN {{cite:ad2abb112fede11a3183e027a8a4213d10f32a00}}, i.e. they both use statistics for normalization. This ensures the versatility of LogN, since by post-hoc enforcing BN on logit, it can be conveniently adapted to any detectors and distributions in a plug-and-play manner, without training or tuning. However, differently, BN normalizes intermediate activations with feature statistics to accelerate network training; while LogN normalizes final activations with logit statistics to cope with long-tail learning. Meanwhile, BN functions both in training and testing phases, to ensure the homogeneity in the input distribution of the data in both phases; yet LogN functions solely in testing phase, to handle their heterogeneity in the label distribution. Additionally, LogN does not include scale and shift, as the normalized data exhibit a basically uniform label distribution, which is also guaranteed in test set by a regular long-tail setting.
d
5305f95c211b61c5cb210d75d94577f3
Most existing works use human-written verbalizers {{cite:d07507ac7423b711455cf19461ba13d9a3c77284}}, {{cite:6c8dd30bd140d84b8e81de3e84a732f22d222d14}}, in which the designers manually think up a single word to indicate each class. However, the human-written verbalizers usually determine the predictions based on limited information. For instance, in the above mentioned example, the naive verbalizer {science}{{formula:1616b8c9-09c6-4eaa-be02-717ed8ff26bf}} Science means that only predicting the word “science” for the [MASK] token is regarded as correct during inference, regardless of the predictions on other relevant words such as “physics” and “maths”, which are also informative. Such handcrafted one-one mapping limits the coverage of label words, thus lacking enough information for prediction and also inducing bias into the verbalizer. Therefore, the handcrafted verbalizers are hard to be optimal in prompt-tuning, where the semantics of label words are crucial for predictions.
i
65070ac287f9307d571cf00a44370887
As was stressed in {{cite:2b77e12daa1a12780750d727d7bfeac7eaf8e530}}, however, the time complexity of EDA is dominated by the computation of {{formula:e616b40b-9b5d-413a-b167-15a2303aff9f}} and {{formula:d3f51095-1664-4c6b-96cc-cfa7bf4ed571}} , as well as the evaluation of the large matrix exponential eigenproblem. More precisely, one has to explicitly form and store {{formula:b3867fb9-b1ac-48d1-97c2-7cfdb1992d38}} and {{formula:458aabaa-5f9b-42f0-bc9e-806125f8260c}} , as well as their product {{formula:2d892c68-7902-4739-ab28-81af779ff6a9}} . The complexity is {{formula:d93f1f44-4c19-46b9-8498-4287c6a6185e}} , which is prohibitively large as the dimension of the data is high {{cite:970f6fb2b0f156e5a00508a702835cd7134d8af5}}. Moreover, we have to solve a large matrix exponential eigenproblem with respect to {{formula:eb76965d-1c07-4f5d-aa9d-a8ef453c0896}} . If the matrix exponential eigenproblem is solved by using the implicit QR algorithm {{cite:ac50d20f5406e6a9d3b1ad07fd72d685329ca2b3}}, {{cite:8d8cdab4d17cb6c0a1822884a518d02dcd6e9d5c}}, another {{formula:6a511cab-14f2-4b9d-8980-732df16663f8}} flops are required {{cite:ac50d20f5406e6a9d3b1ad07fd72d685329ca2b3}}, {{cite:8d8cdab4d17cb6c0a1822884a518d02dcd6e9d5c}}. Therefore, it is necessary to seek new strategies to improve the numerical performance of the EDA method.
m
0855c456c230cd00c3808bcffe4e84ac
(3) Although the theory of general relativity is a field theory of gravity, the definitions of gravitational fields are not based on continuum mechanics {{cite:dc0685bc1e53f49c526e7fd942170f90a4594c42}}, {{cite:c03a041e73bf3ee325952cb5a79cc4bf79cca231}}, {{cite:3d33c1d4a93c2480767bc1ad25f3656fdb167c24}}, {{cite:c649b45651044a4a0c66592fb9fc916697c1d235}}, {{cite:546c873b8b4ecbe103f65d419b323da6fc07c141}}, {{cite:d43b84d598ff43efe4e6f52baae7a6f99b83b5c9}}, {{cite:217b44f3a236b91480afe740b97d2f1aadbd9a8a}}. Because of the absence of a continuum, the theory of general relativity may be regarded as a phenomenological theory of gravity. In our theory, gravity is transmitted by the {{formula:411bdacf-7469-4599-abe7-09a813cb2527}} substratum. The tensorial potential {{formula:cf552a82-066e-43f0-ae0c-57520191f4b7}} of gravitational fields are defined based on special relativistic continuum mechanics.
d
34d75445aec8444a1a6952960bc79371
Theorem 1.3 (cf Lemma 1 in {{cite:8473d860c5f3b2133ca287769bda0a98a5a3fc90}} and around equation (17) in {{cite:1c76a6788ce157b618bae7b534b078d65d60cf5b}}) For any dataset {{formula:1500b23a-a5fb-4bc2-8bf9-f7d1557395b1}} we have {{formula:8ab4f96e-aabb-4f80-b934-a5249791f0f4}}
r
54f17bdeca76848159d28a2cd5f3199f
endows {{formula:d8293565-831a-478a-a71c-8c12513e616b}} with a Banach space structure. Additionally, (see {{cite:441cd5707ea9b7a787ecbc657ee59188c1f5336c}}) this norm generates a topology in {{formula:a7db3c6e-8f76-438d-9963-6ad16809f6c1}} that is independent of the choice of Riemannian metric {{formula:625fdba7-e3ff-4206-828c-070f0ad6d2c6}} and coincides with the weak and strong topologies introduced in Chapter 2 of {{cite:e688d46cedc9c551e362fdf76e2116bf8dcf16a3}}. These notions can be extended to higher order differentiable maps in a straightforward manner.
d
df60f01356039fb14cb96dd930562f1d
One common prerequisite of these experiments is the cooling of the oscillator to minimize thermal fluctuations and its coupling to environmental degrees of freedom. This can be achieved either passively (as e.g. in Refs. {{cite:347022317d15036f558bab14a51d7075b0638c1f}}, {{cite:c4a4409af3a4761d83f59f9265498d2f2f223469}}), or by relying on active feedback cooling protocols typically borrowed from the optical domain {{cite:ee9a5c631c42b0a120108db8fcbefb806d1811f3}}, {{cite:e1107aa52481923936519e4da287e6746db1064b}}, {{cite:d7b8b40c1693aa112e291c226a9911e18aab5249}}. Either way, low temperatures provide an advantage.
i
812d466d825120911481cf61d115d82e
Figure REF compares the performance of five popular classification networks based on the methodology outlined by the current state of the art {{cite:72ac87195d20d6040c789b229f0b8fb4f16fa901}}. Results demonstrate that for a mixed occlusion dataset, ResNet101 and ResNet34 {{cite:430a91346e1603e039e3cdfdc9d4c6923cf01041}} achieve a 2.1% improvement over the MobileNetV2 classifier {{cite:77d27eed12c071b5ce44388b59b3ece289a55f1b}} used by Apurv et al {{cite:72ac87195d20d6040c789b229f0b8fb4f16fa901}}, using the same training data, backbone detection network, and classification pipeline.
r
6c92de47e0e54001820f7903ec679bfe
One of the major reasons why previous force-directed algorithms, such as in {{cite:f8f856a13fbeb5bc76b363c116cab4db1905f17d}}, {{cite:54ee8b22ef1cf2930ba3814679870405cd70534d}}, {{cite:1da99657f2e57c1f30b2f295b4bae023f8339a28}}, have become popular is how simple and intuitive the concept is. The idea of a physical system pushing and pulling vertices, modeled as sets of springs and electrical forces, makes them easy to understand and quick to implement for practical use.
d
710be166b3810a8a86a3f3f8638c66b8
To infer potential informal roles, we need a latent representation of each node (i.e., each player) to capture its structural identity in the dynamic evolving network. We introduce three representative dynamic network embedding methods: 1) Diachronic Node Embedding (DNE) {{cite:90e9fe2015c2d72bc39a995921090ed03d20c624}}, 2) an alignment version based on struc2vec {{cite:3a46fd90cee7b7a8ec38a06ba199e41eb9f74bd5}}, i.e., {{formula:0a5edbb2-d283-420b-975f-d231333383cc}} , and 3) dynAERNN {{cite:71b48023e9322cd36719e682a2162eba7e619025}}. DNE learns node embedding by modeling neighboring local information in a dynamic graph. However, it is incapable of learning structural identity due to the nature of deepwalk used in DNE, i.e., the structurally similar nodes that locate far apart will not have equal representation according to deepwalk {{cite:42fabdac9c5016f77e3b4388134af33d652d5e3d}}. To eliminate the limitation, we replace deepwalk with struc2vec, which encodes node similarity at different scales and thus can have a global perspective. The proposed method is named {{formula:ca5364b2-7a2c-418e-9263-d1059bf63735}} . Similar to DNE, we attach a TransE {{cite:0896920e4fa1043a240c8ceda966b313ebbf8f12}} module to struc2vec, which aims to align the struc2vec embeddings between the continuous networks. On the other hand, the state-of-art dynAERNN proposes an encoder-decoder architecture to predict missing links in a dynamic graph, and the last hidden layer outputs the learned embeddings. After the embeddings are generated, we use the dimensionality reduction technique t-SNE to project the node embeddings generated from the above three approaches into a 2-dimensional space. Then, we cluster the projections using X-Means {{cite:2b24824f08dee2d5efc41f69eaba442d1ce70c63}}, an optimized clustering algorithm that extends K-Means in terms of calculation efficiency and efficient estimation of the number of clusters. The resulting clusters are then considered as informal role candidates. In the following subsection, we first design evaluation metrics to compare and evaluate the node embeddings generated from the above three approaches and then explore the identified informal roles.
m
a72b7f3e9a98d51a84d5cee419feea50
For scenarios with single-antenna users, Fig. REF shows the sum rates of ZF & MMSE in {{cite:701eaf13aae8619ef5ecfda7ac5c0dd5a86e8712}}, MF in {{cite:36c4947a5984ec96ec7fb0be71b58e1096697617}} and their widely-linear counterparts under IQI. The performance of both WL-ZF(WL-MMSE, WL-MF) and ZF(MMSE, MF) degrade when IQI becomes more severe. However, WL-ZF and WL-MMSE outperform ZF and MMSE significantly, especially in the high SNR region. The sum rates of ZF and MMSE level out in the high SNR region as a result of the IQI. In contrast, the proposed WL-ZF and WL-MMSE can efficiently suppress the negative impact of IQI and approaches that of ZF without IQI. In the high SNR region, WL-ZF has the same diversity gain as ZF (i.e., the same slope of the curves) with a minor power offset (i.e., the shift of the curves) around {{formula:54f9e11e-19f9-4f5c-966b-20d9d96a8760}} dB for {{formula:53db1477-e3dc-43ac-9643-215d577a02af}} and 0.6 dB for {{formula:f2e6e7eb-d022-4511-b735-5184c09c47ff}} , which verifies the results in Theorem REF .
r
d0fb99ede8710a583dcb6910a28c4025
Results of PASCAL VOC07+12. We also construct the experiments of object detection fine-tuned on PASCAL VOC07+12 using Faster RCNN with R50-C4 {{cite:dbd9c5bdafe8f15cd4c1c92a5e28e5203da19234}} in Table REF , fully following the setting of {{cite:92c1a9d8dfdb65516d43362456998ddb7b64ba36}}. It can be observed that UniVIP is still better than DetCo, even the number of pre-train images and iterations is far less than DetCo.
r
fd0a2f6fbe22d3872da312cd2139a33f
Clustering is another approach used in our study for data sampling. For this purpose, first, some samples are selected from the unlabeled dataset pool by the least confidence approach. If {{formula:0e8a6ad1-8bc0-4f80-9198-3f54f5de95a6}} instances are to be selected for labeling, we initially choose {{formula:06025fe6-46ad-4389-a708-272cae8f1682}} instances based on the least confidence criterion as our candidates. Then, for clustering, questions are encoded with the use {{cite:ff3e47b10f292af1f58119f4a162c7051682445b}}, and using the {{formula:cdc9566c-4889-4799-920b-ffd57394ab29}} -means algorithm and based on the Euclidean distance measure, those candidates will be grouped into 10 clusters. To select final {{formula:f5077b4a-b542-4c8b-b9de-6ff0949547ff}} samples, each cluster is sampled proportional to the number of its members. Selected instances are annotated and added to the current labeled dataset. Then the model is re-trained on the resulting dataset. This procedure continues until our unlabelled data are exhausted.
m
ea79988f550dec58042d11807229830d
However, most of the above works only focus on capturing the temporal inter-frame dependencies (motion information), the spatial intra-frame features (appearance information) are rarely discussed. Wang et al. {{cite:9e141712ff154bbdb64c3dfdacfac8dfe8c36b09}} proved that the spatial features and temporal dependencies are equally important for predicting high-quality videos. They designed a spatiotemporal LSTM structure (ST-LSTM) and the new predictive model was denoted as PredRNN. To alleviate the gradient propagation difficulties in deep predictive models, Wang et al. {{cite:d5740311ccf309be67a2ea8beb95789d2c5ebf85}} improved PredRNN with gradient highway unit, and the new model was denoted as PredRNN++. Moreover, to capture more complex spatiotemporal signals, they further proposed E3D-LSTM {{cite:725a502d2b5e3b5b6b5322ab4e9e56a7399bac47}} and MIM {{cite:38ca73836505784b23119184e92e67b811cd3f1a}} by integrating 3D convolutional layers and additional forget memory block to ST-LSTMs, respectively. To improve the ability in capturing global dynamics for videos, Lin et al. {{cite:0d004755634806c993bc896d05737ebc7571fd92}} introduced the self-attention mechanism into ST-LSTMs to memorize long-range spatial features and Lee et al. {{cite:3d3c711a9a2e6d6a2add96f29d51f47cb2657f66}} introduced memory alignment learning to memorize long-term temporal dependencies. {{figure:343e0ed2-6f2e-4417-80b5-3f3b8cebb75c}}
m
8c8f542f68b1dd14a9dafc1ce132e206
Most researchers agree that these events involve a massive circumstellar matter (CSM). However, there is no consensus on some basic ingredients of 2009ip-like events. One issue in disagreement is the powering of the main events. One view (e.g., {{cite:4e60d8f8e07406611a17f5090c5e34672ffb6b8b}} for SN 2009ip) is that, at least in come cases, the precursor (pre-explosion outburst) is a CCSN event and that the main peak of the light curve (the `explosion') results from ejecta-CSM interaction. Another class of possibilities is that the explosion is a CCSN while the pre-explosion activity results from either an instability of the CCSN progenitor or from a binary interaction (section ). Another model for SN 2009ip was the merger of a massive star with a luminous blue variable (LBV) star {{cite:c030fed5d6b6ca7141e1449681c398d28fd251c4}}, and much of the energy results from accretion onto the main sequence star. Then there is the possibility that the explosion is driven by the accretion power onto a neutron star (NS) or onto a black hole (BH) companions that spiral-in in a common envelope evolution (CEE) inside the RSG envelope (e.g., {{cite:d058e9739f496dfb511fa78aca8ad38e1e6db65d}}), or deep to the RSG core (e.g., {{cite:1cc74d6cb67fb6d5d8bfd1e362b29415b759709d}}, {{cite:2db4725a234e54042781b231575440a9b2ccff2c}})
i
f3e158e67729ea1e5d02eb82e556e564
In the case of filaments with isotropic or near-isotropic cross-sections i.e. rods, the force-extension relationship has been found to be {{cite:47f64c0414c17cbab79a1fcf4a641853c379476d}} {{formula:5cc0eeb2-b3a8-4272-9259-6a68d87e1b62}}
d
09d0f78c9322330eecd893d13860c4f4
This truncated system is clearly not closed, since in the last equation the evolution of {{formula:c38b28ac-674f-4f31-a999-31f2c3a1ff72}} depends on {{formula:90874c6b-00c1-427d-8871-bbd83f764abc}} . There are various ways to close the system. For example, the classical {{formula:42e162ac-ca53-4afe-b5d7-e5710f13a585}} model {{cite:1024c0ceb1810183b6c6a4f85a3eb5e0bc72f886}} assumes an ansatz of orthogonal polynomials in velocity space and the resulting closure is {{formula:8f1c5774-a4f6-492f-b444-07c57f889019}} Many other methods exists, including the entropy-based {{formula:13bc6401-b404-4a0b-9a7f-6cee82407d95}} models {{cite:e7d3d9e86a4b0013ac1bafab1ae4b3aefec410e1}}, {{cite:ebde33e413fdf6e25cdec0fa14563f615cdccad0}}, {{cite:5151a3b65893d043fd3e6eed699c9e7eb05b3638}}; the variable Eddington factor models {{cite:aefbc13c4669e3a9b5a75c63aad894e6260064d7}}, {{cite:f62b86a7c5b2305652a082943e6eeb9d32b85717}}; the positive {{formula:178c37fe-b3f4-4abe-bf20-983b40985494}} models {{cite:0318efbc23bd133832700ecd628a01fb5e4ab33d}}; the filtered {{formula:65a03934-1e75-4707-8cad-8da97c541219}} models {{cite:0a048c9684294f62b74b9789d8cb4c8a10321ee2}}, {{cite:4ccd1e4b890971b8190ce882805314e6912a08ab}}; the {{formula:d764ccb5-7d7a-4ec2-be33-d4f99fe2398e}} models {{cite:331d358e329e2dd5ad0613705021dace5c0a553b}}; and the {{formula:34208ec6-412e-45d2-a7fe-cac502c0db81}} model {{cite:43239c62d54da09c5592208c57dc0d35637f75ad}}, {{cite:9c9168a52ab847adecc5aee93808a8b2be59c829}}, {{cite:282333c45501776885f92f75112555ceac923dca}}.
m
a74dc9d4a70805734e2902dbcc13ae30
Reinforcement learning (RL) combined with deep learning has achieved outstanding success in various areas such as video games {{cite:0097810c37306704be82988ccebcf4933e03735b}} and robotics {{cite:a57f7dc1131cf95de0edf0d0899dc2dec849d1e5}}. These advances have inspired the research community to examine the performance of deep reinforcement learning in autonomous driving {{cite:62d287c32051da462b3361935783e577d44e51e2}}. Although, prior research has extensively explored AV navigation in the presence of other vehicles at unsignalized intersections {{cite:64e9b9a80a9013faf26f53b35eb718f701d3466c}}, {{cite:a19fc5ab9bad39a2206965d3d198c485cb91197e}}, {{cite:60ecf13e87e1f21b36efe4feb8aa922ed0960fa2}}, navigating amongst pedestrians, who are the most vulnerable element in the urban environment, is under explored.
i
40ccdec18869fb3c6a839fb68d069f5e
We start by showing, in Fig. REF , some examples of the political parties networks generated by the network-based approach, for each year. To help in the analysis, we have also applied the fastgreedy algorithm {{cite:490cc7d6b383adacc0c7e3c04f65eba047a08304}} to detect communities in the networks. The node representing the government is highlighted in black. As one can observe, as a general rule in these networks, the position of the government's node is highly influenced by the party currently in power. In the first graph, for instance, the presidency was from PSDB party, which results in the government's node being directly connected to this party in the network. This same feature can be observed in the graphs from other periods as well, with the node from the government always being closer to the party currently in the presidency, and also to other parties that supported that presidency, in each year. {{figure:2e7a7183-9e26-43d4-bd3b-8f3eab51e636}}
r
a0e9298391a454821c4058fcf7ba4d26
Leibniz algebras first appeared in the paper of A. Blokh {{cite:82a70db327bcecb7096d1cb3d7601d2bb85bc16f}}, while the term “Leibniz algebra” appeared in the book of J.-L. Loday {{cite:3718d0629f327bbce79929bf41646e887a1bee86}}, and the article of J.-L. Loday {{cite:bf202f945b55a53bc4b7beb7ce64335c4b27b720}}. In {{cite:b8f530bd3f93bc1bcdf4c38642adbdaa80e6e407}}, J.-L. Loday and T. Pirashvili began to study properties of Leibniz algebras. The theory of Leibniz algebras has developed very intensively in many different directions. Some of the results of this theory were presented in the book {{cite:98375f326751f9d5adb041b7f429b9f6d322b0ad}}. Note that Lie algebras are a partial case of Leibniz algebras. Conversely, if {{formula:49eac85e-64c6-4396-b8fc-3f4b26ce10de}} is a Leibniz algebra, in which {{formula:e4390232-3160-4ad9-8a45-40473a4ba0e5}} for every {{formula:321d4ee6-6f5e-40d3-92c6-d65cebfdf02a}} , then it is a Lie algebra. Thus, Lie algebras can be characterized as anticommutative Leibniz algebras. The question about those properties of Leibniz algebras that are absent in Lie algebras, and accordingly about those types of Leibniz algebras that have essential differences from Lie algebras, naturally arises. A lot has already been done in this direction, including some results of the authors of this article. Many new results can be found in the survey papers {{cite:5959036cb51ba62fc8f8e997ad537b38fd5109b4}}, {{cite:1a5c8aa5292554ec1dd67d7aa93bbe10fe07ea9e}}, {{cite:006d24584db538b3355dda73c4b72e5d238bfdfb}}. Other results related to this topic can be found in {{cite:2ad23f3a0898ce27ae7c931e8a973ade365d1687}}, {{cite:d6f372847f2a069b41f955690e0dd77d860f778d}}, {{cite:4fdfe066f58b8547f500ac1c22e2d64aa71c0a32}}, {{cite:ba3052c33901eff4af51eca8b3235f5110f2e292}}, {{cite:9bb3f85ddcf81883de5a42b245511f0311192078}}, {{cite:392e23a00abb023a226c4ae23093782fb3d75e88}}, {{cite:f7778a60644db1ca11a5ef6c8f0ac1152e2f5ce5}}, {{cite:11325ba3e531c12b68ffc7f6a3967e0eb2894b55}}, {{cite:765872d69c5c71b316ef8832b06f70314680572a}}.
i
c90acf86e9272f099b93596886a2e9d8
One first type of viscous fingering controlling protocol has been proposed in Refs. {{cite:47e60f9d559b6ef24cbf6e8ebf1ee21820ff4e71}}, {{cite:b53f7b21b06f016c37a4d902b1362a5e4c753598}}, {{cite:6973f9e28c85421173239b8f7227d2bcc8ed000d}}, {{cite:a19f0e0457a93a9928d0b4c042508e3f1e94086d}}, {{cite:04cbe5df27883fe8758fceca0d642cf7a2753ba8}}, {{cite:852eeb7a65ece6eca5ac369e047404585682d200}}, {{cite:c2cdf3ae30eb90c7edd52ae55efdb2a64fc87b4c}}, {{cite:60b3274b06a7c6ea0dff5c0dae6991f7c140c93b}}, where it has been demonstrated that viscous fingers can be considerably stabilized, by modifying the basic geometry of the classic Hele-Shaw cell setup, while keeping constant the fluid injection rate {{formula:66fcb830-fec9-450b-9467-2e55623d4650}} (area covered by unit time). For instance, Al-Housseiny et. al {{cite:47e60f9d559b6ef24cbf6e8ebf1ee21820ff4e71}}, {{cite:b53f7b21b06f016c37a4d902b1362a5e4c753598}} have shown that fingering formation can be properly inhibited if the upper cell plate is slightly tilted, so that the Hele-Shaw plates are no longer exactly parallel. A related stabilization scheme has been considered in Refs. {{cite:6973f9e28c85421173239b8f7227d2bcc8ed000d}}, {{cite:a19f0e0457a93a9928d0b4c042508e3f1e94086d}}, {{cite:04cbe5df27883fe8758fceca0d642cf7a2753ba8}}, {{cite:852eeb7a65ece6eca5ac369e047404585682d200}}, {{cite:c2cdf3ae30eb90c7edd52ae55efdb2a64fc87b4c}}, where the rigid upper cell plate is replaced by a flexible membrane. More recently, Zheng and collaborators {{cite:60b3274b06a7c6ea0dff5c0dae6991f7c140c93b}} suggested a time-dependent control strategy in which fluid injection is applied while the Hele-Shaw cell gap thickness is increased in time in the power-law form {{formula:98f2ebfc-ca7b-4604-a5d3-1382e2209cce}} . In this situation, either the fingering instability is suppressed, or a constant number of nonsplitting fingers are maintained during the fluid displacement process.
i
f437f0e691faac40741cefde4df13980
Following {{cite:aec855d237a5e4ff07ddec03c8eb7c117a8eccca}}, we use a batch size of 32 and 3-epochs of fine-tuning for each dataset in GLUE. For each task, we report the best accuracy on the development dataset with learning rates 2e-5, 3e-5 and 4e-5. Table REF shows the results of GLUE datasets for RoBERTa, RoBERT-ABS, and RoBERTa-M4M. Due to the fact that we use less data in pretraining (54G vs 161G), RoBERT-ABS underperforms the official RoBERTa model on 6 out 8 datasets. Using the same, smaller pretraining corpus (16G) as we did with RoBERTa-ABS, RoBERTa-M4M is able to match the performance of the official RoBERTa model, mainly due to the effectiveness of the M4M method. It outperforms RoBERTa on the MNLI, QNLI, SST-2 and MRPC datasets but underperforms on the rest of the four datasets. It is worth noting that RoBERTa-M4M tend to outperform RoBERTa when the training dataset is relatively large. We hypothesize that, with more pretraining and downstream training data, the RoBERTa-M4M may obtain further improvements in accuracy.
m
f63cb3a1aff47e08dc47bca21903c783
The difference between old and young clusters herein is large: old clusters (like the globular clusters in the Milky Way) formed before there was a Galactic disk, and remain relatively free of its influence. Young clusters however form in the galactic disk, and the tides experienced by these clusters are dominated by encounters with giant molecular clouds and spiral arms {{cite:a506d104b7f59591b942b597442d00d42ba4f0dc}}, {{cite:2b875cf1ee80b7777a683f245d1e6f9e994c7ac9}}, the effect of which is about four times larger than the tidal field {{cite:a506d104b7f59591b942b597442d00d42ba4f0dc}}. For this reason, we focus on old stellar clusters and initialize our simulated star clusters at an early epoch, before the galactic environment would have formed. In order to simulate young clusters, a galaxy simulation including baryons would be required. However, when the old clusters formed, the GMC density in the star-forming environment was likely very high, causing early disruption of low-mass globular clusters {{cite:6019e0ed7c57e3f3cdbe939a82384fbcff1ea546}}. This effect is not included in our simulations.
d
5617f26f9f4cf447b595ee65f8dfe809
The breaking of large amplitude electron plasma waves/oscillations has been receiving a great deal of attention since 1959 due to it's basic nature and practical importance {{cite:1796e804368f9ef436e6055d955c0069b4dd8bd7}}, {{cite:fb0a43555a2b0f9382c9f26769aabfa858deaeda}}, {{cite:ae2e440b14c8745f7b88762093f6beeb9bb8e174}}, {{cite:515f7088bf9022f6be014616e6399ef9293a832f}}. It has wide applications to some current research problems ranging from laboratory plasma to astrophysical plasma where breaking of large amplitude relativistically intense electron plasma waves are routinely encountered {{cite:5927602a45dcc820ae78f318ec03208bbef58342}}, {{cite:749cd362c5944a5dcdfca0289b9e10cd5f5d4d51}}, {{cite:6ea8dde34eacaff5124df0eac7c51f226a9f61b5}}, {{cite:6b7f678115ee1ea25b46c3fcba36589b91556c05}}, {{cite:feb94364f6862d6d73de36455127134e987e3f21}}, {{cite:ce3ec8e5a68a7013d8ee8c0e206c7c24f0df0a93}}, {{cite:c411c01238d772b906720cde912f928880ec1023}}, {{cite:7defd56725b5ecf08c0b3230f570bd8832f93409}}, {{cite:31cd36acecc8cf25be72148d53f6230779ff9c03}}, {{cite:d6c7c9f4ecc9f2fd19dac9fc6f27b4c856f05e5f}}, {{cite:8fd7689b8abd045b0fa25ef4be71a84421d11fe5}}, {{cite:58257a4547cc26d3cdec95a3026ebf481e3a3856}}, {{cite:38555af2912ca29e71d4ee4a8908132676bf2782}}, {{cite:1667bf8e6615e0d7ddf3b680379860aed1d07eb3}}, {{cite:cf0dd270ad4cce24b9d580677223f8b8a0b6eb33}}, {{cite:342b50dfab4c29724f522da5db199aaf9bf58970}}, {{cite:e03e9938100ddcf87317c22002548295da01c6df}}. For example, recent experiments on plasma acceleration by laser and particle beam have shown that the breaking of excited plasma oscillations/waves plays a major role in the particle acceleration process {{cite:5927602a45dcc820ae78f318ec03208bbef58342}}, {{cite:749cd362c5944a5dcdfca0289b9e10cd5f5d4d51}}, {{cite:6ea8dde34eacaff5124df0eac7c51f226a9f61b5}}, {{cite:6b7f678115ee1ea25b46c3fcba36589b91556c05}}, {{cite:feb94364f6862d6d73de36455127134e987e3f21}}, {{cite:ce3ec8e5a68a7013d8ee8c0e206c7c24f0df0a93}}, {{cite:c411c01238d772b906720cde912f928880ec1023}}, {{cite:7defd56725b5ecf08c0b3230f570bd8832f93409}}, {{cite:31cd36acecc8cf25be72148d53f6230779ff9c03}}, {{cite:d6c7c9f4ecc9f2fd19dac9fc6f27b4c856f05e5f}}, {{cite:8fd7689b8abd045b0fa25ef4be71a84421d11fe5}}, {{cite:58257a4547cc26d3cdec95a3026ebf481e3a3856}}, {{cite:38555af2912ca29e71d4ee4a8908132676bf2782}}. Wave breaking is also important for first ignition concept in inertial confinement thermonuclear fusion {{cite:53494258059c41fe5fba2811feda3c3be9a0bce2}}, {{cite:18d7e983b6d02454a55250134755def9bf73d5a0}}, {{cite:a705be3d1b17e688cf4f9de0c4cb5f7a0446aee4}}. The concept of wave breaking in a cold homogeneous plasma was introduced by Dawson {{cite:e84185a8051f7302adc6c10818be38a15a27e831}}, where thermal motion was neglected and ions were fixed. Dawson demonstrated that the amplitude of applied perturbation can not be increased beyond a critical limit, known as wave breaking limit, as the trajectory of the neighbouring electrons constituting the oscillation/wave start to cross each other beyond this limit. This results in fine scale mixing of various parts of the oscillation which destroys the oscillation/wave. But, when non-linear density perturbations are excited in a large amplitude plasma wave, thermal effects may become important as the electron thermal pressure may not allow the density compression to build up as predicted by the simple cold plasma fluid model. In 1971, Coffey {{cite:657612a335d5e17353334eee0ae441b8550a6be9}} investigated this phenomena for electron plasma wave in a warm plasma by using the simplest distribution i.e. “water-bag” distribution {{cite:b659776271a19bc8711c17efd6415bc999c68ea1}} for electrons. Unlike in the cold plasma case where the wave-breaking limit is defined by trajectory crossing, in the case of warm plasma Coffey defined wave breaking as the trapping of background plasma electrons in the wave potential. An analytical expression for the maximum electric field amplitude and density amplitude as a function of the electron temperature has been derived which shows that temperature effects significantly reduces the wave breaking limit {{cite:657612a335d5e17353334eee0ae441b8550a6be9}}. Unlike the nonrelativistic warm plasma case, where Coffey's limit is the one and only existing theoretical wave breaking limit available in the literature (till date), the relativistic counterpart contains several theoretical results given by several group of authors in last three decades. These are as follows:
i
08ec75e710a72381dc08760e414d71ee
As is known to all, the study of exact solutions for integrable equations which are used to describe complex physical phenomena in the real world have been paid more and more attention in plasma physics, optical fiber, fluid dynamics and others fields {{cite:2e00ee7a955920a7893e7e7b3577f0b18838583d}}, {{cite:742e2886326a5ddc60ef47bd239c118cb66497db}}, {{cite:8241c6379f06176c2980beea1304acb145bbf714}}, {{cite:08982ff9e9d7fe6f540200ed0c902dc7f09cc5b7}}, {{cite:70f9010befb9ec98dc1f4e6966c97ba4b2487ae3}}. The Hirota bilinear method, the symmetry reduction method, the Darboux transformation, the B{{formula:d95f8031-b65d-4db3-9153-8aaef201e655}} cklund transformations, the inverse scattering method and the function expansion method are powerful means to solve nonlinear integrable equations, and many other methods are based on them {{cite:e1eaef9bbaae01219bcad6af3f251a715c3110e3}}, {{cite:29fe511eedc067aa185b1a1a1188c76c7b5a3653}}, {{cite:8088925d0d96124fd9cbc86dcab02eead5e6226e}}, {{cite:1d0f511b7b06ff747d8adc3625096d0f5b116e1a}}, {{cite:6a4425d571ce935b1cb5cec8a54910235a1b195e}}, {{cite:b679efffaaf04156d258a253dfe05fde92c49657}}, {{cite:fbd73c66c561717ea1f7f32a2da4879db12a71c6}}. Although the computational cost of some direct numerical solutions of integrable equations is very high, with the revival of neural networks, the development of more effective deep learning algorithms to obtain data-driven solutions of nonlinear integrable equations has aroused great interest {{cite:00caf70dc032ba89a86b7efd58ef88d53a9e2b62}}, {{cite:ef6af7f6a9bf023f44eb6b1d2534ff302af9917c}}, {{cite:401fdbd4de9fd8d0e4cd38d224b8b21405d3bd7b}}, {{cite:5faf1eba7b97b3727edaeb12b8a7a8d2eb673f90}}, {{cite:5b855cacf7a3d6ee165163b9c1de504fa16fa0dc}}. Li and Chen constructed abundant numerical solutions of second-order and third-order nonlinear integrable equations with different initial and boundary conditions by deep learning method based on the PINN model {{cite:00caf70dc032ba89a86b7efd58ef88d53a9e2b62}}, {{cite:5faf1eba7b97b3727edaeb12b8a7a8d2eb673f90}}, {{cite:5b855cacf7a3d6ee165163b9c1de504fa16fa0dc}}. Previous works mainly focused on some simple solutions (e.g., N-soliton solutions, kink solutions) of given system or integrable equation. Relatively, the research results of machine learning for constructing rogue waves are rare. In Ref. {{cite:4105c7051596f761d596709f8213fbd29594b871}}, the bias function including two backward shock waves and soliton generation and the generation of rogue waves are studied by using a single wave-layer feed forward neural network. As far as we know, the soliton solutions, breather solution and rogue wave solutions {{cite:9fddb09a5377548c4cf4915371e1581fbd5e68f0}}, {{cite:2975b2ef6c35400754aa6312ae0cc95d98baa797}} of the integrable nonlinear Schrödinger equation have not been given out by the deep learning method based on PINN. Therefore, we introduce the deep learning method with underlying physical constraints to construct the soliton solutions, breathing solution and rogue wave solutions of integrable nonlinear Schrödinger equation in this work.
i
f1af0129c1ed78fd5bb775558930bf99
Since the {{formula:8fa8d5f4-0ccf-46bf-abb9-95636ac80334}} -norm is (arguably) the most natural way to measure the sparsity of a vector, the above idea suggests that the {{formula:d255aaa9-0055-4ad7-ac85-6dddbfdbf36f}} -penalization method is a “fundamentally correct" (but computationally intractable) method for variable selection, provided that some mild conditions hold (noiseless, Signal-to-Noise Ratio (SNR) is high, signals are sufficiently sparse {{cite:51c3a033bb9844634e45f20215e2024708276d14}}, {{cite:27b6ff6820023820bbdfe54c89756c2f06e05646}}).
m
887eb5931dde1b72b2e78b030a26ffd2
Other methods: Our aim is to show the predictive power of the semantic features constructed from the transcripts as well as to measure the accuracy of the proposed StockGNN method compared to baselines. To achieve that we propose several baselines which have two phases. First, we generate unsupervised low-dimensional embeddings via the well-known Doc2Vec method {{cite:247af7148bfc0ba3fe0c58aaf755edfee3e42d51}}. Intuitively, similar documents will be embedded closer in that low-dimensional space. Second, these embeddings are further fed into a classifier for the final classification or label prediction. We have used three different classifiers: Support Vector Machine (SVM), Logistic Regression (logreg), and a Multilayer Perceptron (MLP). Other classifiers (such as Naive Bayes, k-NN) produce inferior results. We call the entire pipeline of using Doc2Vec and a classifier as DEsvm, DElogreg, and DEmlp when the classifiers are SVM, logreg and MLP respectively. DEmlp is a modified version of the method proposed in {{cite:547121b38a3e1338efd6a95ba3efcb0b732caa7a}}. Unlike using prior embeddings of words and aggregating them, we use Doc2Vec {{cite:247af7148bfc0ba3fe0c58aaf755edfee3e42d51}} to generate the embeddings for the document. Furthermore, we construct features only from transcripts as our aim is to classify with features from transcripts, whereas the method in {{cite:547121b38a3e1338efd6a95ba3efcb0b732caa7a}} use the company embeddings as additional features that make the effects of transcript information ambiguous. We find that in most cases StockGNN outperforms these methods.
m
ef57bf22d37e3b789606d95dbd8c6615
First, we consider special cases that can be solved in polynomial time, motivated by similar studies for problems on uncolored graphs {{cite:ea095e7f4b480fd1a45cc4f07ffb912ffd3b2d87}}. We are in particular interested in whether or not we can exploit structural properties of input graphs that can be expressed in terms of colored forbidden subgraphs. We show that BPD can be solved in polynomial time on graphs that do not contain a certain type of bicolored {{formula:a56ea73a-19fa-439f-aad9-08433fc80290}} s as induced subgraphs, where bicolored {{formula:0b35e0de-f05f-4d9d-b616-037f3cce4b97}} s are triangles with edges of both colors. Moreover, we show that BPD can be solved in polynomial time on graphs that contain no {{formula:1ce16268-42ae-48b0-accd-b8429f6e77ce}} s with one edge color and no {{formula:764b557e-3df6-42f4-b27a-ca77d51aa931}} s with one edge color as induced subgraphs.
r
70fd1f371f62af829e5ee33d2e4c458c
With the advent of powerful hardware solutions (TrueNorth {{cite:f5e3fc10de7dd42ef70a419a69fb82faa35f3c65}}, Loihi {{cite:b6c3cd66c793f3c1e8cd3989ac571bd37c93b698}}) for spiking neurons, the question arises how deep learning can be implemented with spikes. The current approach is to implement real-valued activation values by spike rate {{cite:bc3996a34a2ef45b9419f009fde5b025ef722614}}, {{cite:5a52d6ecfb6e953cb249e7e8f5953c8c74851af1}}. Some of these approaches yield near state-of-the-art results but require a large number of spikes and exhibit rather long response latency (time to solution) due to the fact that the estimation of small rates requires large integration windows. A potential remedy to this problem is to compute with spike times {{cite:e994d8fe0c64492ddf816e65f42b290b06d409bd}}, but current solutions are brittle and cannot be scaled up to challenging machine learning problems.
d
8a37f92cac613a9de2668640f6d41aac
In the present paper, we focus on a specific class of Markov processes dubbed local density-dependent Markov population processes, which preserves the density-dependent assumption of Kurtz, but allows an underlying network structure that dictates the environments observed by each individual. We incorporate interactions between more than 2 vertices into the model with an underlying hypergraph structure accordingly to reflect on some recent developments in the theory of higher order interactions {{cite:be3bcd95b7b6e46c7fdcb1dc4210852d8444766d}}, {{cite:d99ccafa6157c2a0dfacd53c51885d9787d4cd10}}, {{cite:d0394e8b0f4c08753fec05ec9eea0e1d3a415c7b}}, {{cite:982432da2df16fe819410e1118cce153ae6db730}}. We provide general error bounds for NIMFA that are strong on well-distributed networks.
i
052bd974b3e488ad16461ff7f28ab05f
Finally, the results in Figure REF again verify the usefulness of unsupervised post-processing also in cross-lingual settings. We observe improved performance with both m-bert and xlm-100 when mean centering (+mc) is applied, and further gains can be achieved by using abtt on the mean-centered vector spaces. A similar finding also holds for static cross-lingual word embeddingsNote that vecmap does mean centering by default as one of its preprocessing steps prior to learning the mapping function {{cite:a9bc2fb85ebc48fcc1ef1244101b45de8d2ac56a}}, {{cite:c63217f99be4cc50ba6de2f637e1be941d012d6e}}., where applying abbt (-10) yields higher scores on 61/66 language pairs.
r
1977ea413c424d8cbaa24604d4a7c54e
with {{formula:beb31f48-3fb5-4321-a4c0-25be21c9c59e}} the electron gyromagnetic ratio, {{formula:a0b3e828-9cee-4cc5-8921-10dcff5dc3d5}} the vacuum permeability, {{formula:e73eaec0-eec1-4b07-9125-b4235cf0d0f6}} and {{formula:dae84a89-b0a0-450b-8e0d-bebce66668a8}} the YIG saturation magnetization and thickness, respectively, {{formula:395a2553-05dd-427e-87af-3e223f2a0aa9}} the spin-wavenumber, {{formula:6e8ddf13-2ff0-40be-b4b7-4c7d2a9a768b}} the metal resistivity, and {{formula:7f142407-6e0d-47c2-9074-bf635a99574b}} the spin-wave ellipticity. This expression is derived under the assumption of a homogeneous magnetization across the film thickness {{formula:2a81b5de-3255-400b-b608-f169c636053f}} , which becomes strictly valid in the thin-film limit {{formula:40eb2c3e-c336-4b9f-89f7-1f40342c3e16}} . The form factor {{formula:f4ce47fa-8f78-4feb-82f5-c1596ebc66fd}} arises from spatially averaging the dipolar and eddy-current stray fields over the thicknesses of the YIG and metal films. An analysis equating the magnetic energy losses to the power dissipated in the metal yields the same expression (Supplementary Section REF ). We plot Eq. REF and its thin-film limit in Fig. REF d using {{formula:ef4cc606-2b55-4be2-8da6-75aaa6464158}}  {{formula:05a706a7-6257-4cd8-96c0-f7a77c112f6f}} m for the resistivity of gold{{cite:aa7118da38c537c0ad07b486d2b73d6dcb951b0f}}, finding a good agreement with the damping extracted from the various sets of data without free parameters. The finite width {{formula:6eb778fa-a4bb-49a6-9bf3-d7ad5df6b25e}} of the stripline can be disregarded when {{formula:4f4f92e6-bcda-4f6f-b053-4980133c418c}} (Supplementary Sections REF and REF ), as is the case in Fig. REF d. Accounting for a non-homogeneous magnetization may be achieved via micromagnetic simulations{{cite:1b34702724680fdfda6d18ce978f90dae94ef5e0}}
r
2117db3470b2b54797ed3baaed702f99
Evaluation of the state-of-the-art neural network pruning models {{cite:a74f8031d10c5138dda3730de9e77b68fabc619f}} namely; Global Magnitude Pruning, Layerwise Magnitude Pruning, Global Gradient Magnitude Pruning, Layerwise Gradient Magnitude Pruning along with Random Pruning using ResNet-50 as base architecture fine-tuned for ocular-based user authentication in smartphones. Comparison with the compressed version of ResNet-50 models, namely; ResNet-20 and ResNet-8 along with lightweight MobileNet-V2 and ShuffleNet-V2 models, trained using knowledge distillation {{cite:cec49cbc1d6b53481ceb16eacd53ef823f5f8ec2}}. Subject independent evaluation of all the compact lightweight models on VISOB {{cite:b8a84e1277e1c6a90a87926aebf0879a6707e7fe}} and UFPR-Periocular {{cite:159ed25ac4c108543095c6f9d055273660f572e1}} on mobile ocular datasets. Inference time evaluation (in terms of deep feature extraction time) of all the compressed models by real-time implementation on iPhone 6, iPhone X, iPhone XR, iPad Air 2, and iPad 7th generation.
i
5907d66d103258a0bc9613439344bb2d
The input SAR images are resized to {{formula:3740c898-9e7e-4bcb-8125-7201237d1a2a}} in both the training and the inference stage, and the output feature {{formula:b109f6f4-90be-4cca-b4a8-065002d1c440}} is with the resolution of {{formula:69251a9f-30d9-4390-bd06-b1cf2c0c9045}} . In the training stage, we use the ImageNet pre-trained weights to initialize the parameters of the feature extraction backbone. The hyper parameter N in the polar encoding process is set to 8. Adaptive moment estimation (Adam) optimizer {{cite:cb897a8b6463243c70d1668c18a0451cc264921c}} is adopted as the training optimizer, the weight decay of which is 0.0005. The initial learning rate is set to {{formula:5edb9205-d205-47e2-9e3a-3a73c03dd807}} . The learning rate is then adjusted according to the exponential decay rule. The mini-batch size used in the stochastic gradient descent algorithm is 8. The model is trained in a total of 150 epochs. The algorithm is implemented with the deep learning framework Pytorch {{cite:1adbc9cc9630a31727815b2cfe982d9405a9e908}}. The comparison experiments are conducted based on the framework proposed by {{cite:d8f7b6a72726d2cea71d6d594fbfaa2f6bc7d9fe}}. All the experiments are carried out on the platform with Ubuntu18.04 system, 32G memory and Tesla P100 GPU .
r
bd2e3f0e146a6f1437e16227e0f8dbc8
In Fig. REF , we compare the median-likelihood estimates of dust physical parameters derived using the optically-thin assumption with those derived using the general opacity model. The parameters are well-correlated, however, we find strong systematic offsets in the derived dust temperatures: the general opacity scenario produces {{formula:eee05f5d-4f20-4972-9460-97eae26553c6}} typically {{formula:cd0178b8-8194-4608-af65-9d06f58dfe33}} 10 K warmer than the optically-thin case (see also {{cite:5b1f4e6c9003e6fc5d7b18cdfac0d53eb8aa75b3}}). This leads to a strong offset in the inferred dust masses, which are typically 0.5 dex (i.e., a factor of 3) lower in the general opacity scenario than in the optically-thin case (the same offsets are also seen in the medians of the stacked likelihood distributions in Fig. REF ). These differences can have very strong implications when interpreting inferred dust masses in the context of chemical evolution and dust production models, especially at high-redshift, where current models required substantial ISM growth to account for large inferred dust masses (e.g., {{cite:38d05108058215852960722a468bc8a132bfd7ff}}, {{cite:c88aa15559c2197e113fa2260d56a7ff9e7cd020}}, {{cite:c34a7df1eb4225a4859a790f0c4a6e40b505d26b}}, {{cite:3d9d1ea1f20fc484c6f726d79c0bc21200c8f69f}}). Nevertheless, we find that the inferred dust luminosities and emissivity indexes seem quite robust against dust model assumptions, with no systematic offsets. We checked that these differences would be more pronounced if we chose to include models where the dust remains optically thick beyond 140µm. However, in that case, the parameters with the highest systematic offsets, {{formula:bd41336a-9801-4a47-b8b8-5e6e4aa20218}} and {{formula:64de33de-d466-4d29-8308-fc306f7478c9}} would remain almost unconstrained with the current data due to the strong degeneracy with {{formula:78943645-2973-4f1d-afeb-148612b784dc}} , therefore the systematic offsets would be mostly a result of the prior. Given our calculation in Section REF , very high values of {{formula:728c3edb-6d44-4a4e-97b1-9c1c2386962f}} seem unlikely (though see {{cite:bb6d0f529777ae0ac38b34486ecdc00b94bc9613}}, who claim {{formula:29304e00-167b-41c6-9b6c-57a6b9a5d14e}} in high-redshift, lensed SMGs). We caution that very optically-thick dust could lead to even more significant differences in the inferred dust masses (factors of 10 and more). {{figure:df1de489-b62a-4e8b-a491-36f207a34ca9}}
r
6077a5c36ba38853caef140ebc3e6d3c
Let us take some cases where the dimension {{formula:0059c66d-0356-42ff-b8c9-5b61a77df375}} is not prime. Consider {{formula:e028e26a-b490-43c0-818d-112a7db59d64}} -GBS sets with {{formula:835361bb-ae69-4dec-b3ed-5f855af2019e}} . In the case of {{formula:df7ee166-65c4-4efb-adca-6d5fe41a84ee}} , there are two types of one-way LOCC distinguishable GBS sets that are not F equivalent {{cite:1cf0737ed56e7ce25158bb59e184636384511aaa}}, {{cite:8fed087e859d36f3afff94e7767900b2d93035d0}}. One of them is {{formula:32db4c5b-0257-47f8-bfdc-5aefe899ba88}} . This set can easily be generalized to general {{formula:aefe9daa-9abe-4289-a0da-94ea074c6a0a}} cases. Consider the GBS set given by {{formula:bdcf87b5-314b-4e95-ab2f-4e2d9199e112}}
d
265e238b7f91b5b2d663f9bc71e1cbe5
However, both FD and MMD are not suitable metrics for KITTI. As shown in Figure REF , real-world LiDAR scans usually contain clutters which should be removed in the recovered point cloud. MSN {{cite:1c78ca6c14c73c0ba31c5452a176dd1d537ab9f0}} incorporates the minimum density sampling (MDS) to preserve the structure of the input point cloud. Although MSN outperforms other methods in terms of FD, the clutters in the input point cloud are also preserved. MMD s measures how much the output resembles the cars in ShapeNet. However, cars from ShapeNet cannot cover all types of cars in the real-world. {{table:a4b050cd-af26-4a73-a438-a9366d529384}}
r
4de69a7cc1eab63ead21c699cd3e34cd
In this section, we provide the results obtained using various approaches described in Section and REF in the form of latency-quality curves. We use Average Lagging (AL) {{cite:adb81f40a0ed00ce06ee54ae6bdf9835ac2c01e6}} as our latency metric and case-sensitive detokenized BLEU {{cite:96636af3e0df4f26b3428b5cf11dc9379287d6c3}} score to measure the translation quality. In Table REF , we also compare the performance of the models after the initial training phases without the latency loss. It provides the improvements obtained using each approach for the {{formula:b288261e-a487-40bf-8ff8-f52fa27afd1f}} models. The baseline MMA is trained using the same architecture but without any help from the MT task.
r
e54c039eda11fd22c98a8707d6ca33bf
Results on a larger architecture. We compare the test accuracies of RCAD and ME-ADA (most competitive baseline from ResNet-18 experiments) when trained with the larger backbone Wide ResNet 28-10 {{cite:c85f2a47d3fe9504c5b9e2c748b9190215cfd408}} (WRN) on CIFAR-100 and its derivatives. We plot these test accuracies relative to ERM trained with WRN in Figure REF a. Clearly, the benefit of RCAD over ME-ADA still persists albeit with slightly diminished absolute performance differences compared to ResNet-18 in Figure REF .
r
6817423a776bd04e0f62bf06c1c5f280
Remark 1.2 There are some studies about the isolated collision for the general {{formula:9203319f-53e2-4a82-8504-b450aaffc2cd}} -body problem(see {{cite:5a5c7a7e2031389d5a6a879bec8fa2238844fcda}}, {{cite:8b5d3f3245918312f83a50a48c542998afb70567}}, {{cite:c33c58e20a88b5f7a3180160f119dd42d9ab52cb}}). However, all the results of them only said that: there exists an isolated collision for the general {{formula:3508a0a4-2b06-4ec5-9abd-157b0a5575aa}} -body problem. Our results show that we can say more about the one-dimensional Newtonian {{formula:bcdbd91b-c93c-4bab-9dc3-17f5ab1a392b}} -body problem with equal masses: all the collisions are isolated and finite.
i
b108dc1afdd06b5ee9fc1961039fc2a3
Pre-trained language models have become a cornerstone of natural language processing, thanks to the fact that they can dramatically improve data efficiency on tasks of interest – i.e., using a pre-trained language model for initialization often produces better results with less labeled data. A historically common approach has been to use the pre-trained model's parameters for initialization before performing gradient-based fine-tuning on a downstream task of interest. While fine-tuning has produced many state-of-the-art results {{cite:607af880509900fb0cad4a97d13f3001cccf819f}}, it results in a model that is specialized for a single task with an entirely new set of parameter values, which can become impractical when fine-tuning a model on many downstream tasks.
i
8e48b17cc129c3697fcce3e4ac175016
In this appendix we explain an effective general method to obtain high-precision numerical results for the coefficients of high-order growth terms involving {{formula:639d13a1-fefb-445e-a8df-663714ddaa15}} and powers of {{formula:8c85140b-d0f1-47f2-ad49-1129cfb79968}} . For applications to {{formula:e73380ea-5596-4af1-90bd-1c05331169da}} growth see, for example, {{cite:659641a465c602438056fca5f4d435ecfd97fabe}}, {{cite:b809457dd265cfcd3859c16567f985bb42b46fc1}}. We first summarize the conventional Richardson extrapolation method {{cite:96a2cc531cbd39e3e9703938c1653f86d29b7711}} in a form that makes the generalization simple to formulate, and also simple to implement.
m
e534326109cce0be504f2137a9497b0f
In theorem:curlupperbound,theorem:curllowerbound, the size of the representation {{formula:fa75eb1d-8a7e-42cc-9b1e-6c1428dedd04}} is assumed to be bounded. This assumption is reasonable from the experimental perspective since it is common to normalize representation to employ the cosine similarity as the similarity metric. Several works reported that the normalized embeddings improve the performance {{cite:eb3faab054c28878695514eae7bc30d0f151e004}}, {{cite:0841b7991c2365f411d3537768691cfa0cffa51d}}. Unlike the existing analyses (reviewed in section:comparisonwithexisting), we take advantage of this assumption to derive the sharp bounds.
r
c3829e51078cae37732d4a19f14e3ea5
However, convolutions are not the only approach to increasing the receptive field of image processing networks, and models like MLP Mixer models {{cite:166307f0490e6f3cc82ae94e89aa4268617e2da1}} as well as vision transformers {{cite:f6237da983559647e4b00ce9ece12b3915088cf9}} are exciting topics for future study in this field. Especially in the case of super resolution, higher receptive fields will unlock possibilities for the development of deep learning for high-resolution image processing, allowing for an increase in fidelity, diversity, and processing speed. Improved super resolution will allow for advancements in medical imaging {{cite:eebcf18bbc838e16c05809250de541ba54f5cbe2}} and compression of image and video formats {{cite:fa1b23aa5b13bae1408dd93141daad7f092179d5}}, among many other applications.
d
e4ec989f60416fcc62ddd90887ba053f
To further validate the efficacy of image deduplication on neural nets, we aim to evaluate a wider range of image deduplication algorithms, including SIFT {{cite:3828848b55e0cf4bbaf198ad4ff3f9fdd538a58c}}, SURF {{cite:46980f09ac2e9dee1261a902da14f176f86904d1}}, ORB {{cite:e6b6fd17429ac86775bc68f7e66451b64b50ac4e}}, feature extraction-based approaches. As future work, we would also like to test a more diverse set of image datasets such as MS-COCO {{cite:d221fd475796fac644bf0860f21bca470086358e}} for object detection, and more CNN architectures.
d
0d9430b413d1f436b43595c9b506b220
Therefore, to tackle the above challenges, we propose a generic framework for boosting current SOTA traffic speed prediction methodsCurrent SOTA methods in traffic speed prediction are graph-based models that use GNNs to directly learn over road network topology, such as TGCN {{cite:8742cb306e309cf3095453a8fbf13beac7701148}}, STTN {{cite:e03481cf5ccf8906490dbde011fb9a91695eabf1}}, STGCN {{cite:954674fb014a1235bd7a302a6cb6f435fb0aac05}}, DKFN {{cite:6154c61973f0a5961c47b952dfb69c4851af27b8}}, etc. by flexibly integrating implicit spatial correlations. Specifically, we first develop a Dual-Transformer architecture to preserve the implicit spatial correlations and the respective dynamic patterns among road segments automatically without utilizing explicit geographic information as prior knowledge . To further integrate the explicit and implicit spatial correlations, we devise a knowledge distillation-style learning framework, where we take the SOTA models (explicit spatial correlations captured) as the teacher model and the proposed Dual-Transformer architecture as the student model. Along this line, the learned Dual-Transformer architecture will preserve both the explicit and implicit spatial correlations, and thus, boost the performances of the SOTA traffic speed prediction models. Our contributions can be summarized as follows:
i
0e0bf68ad60cda5621b89c64b64a8d52
One of the questions of interest in the theory of persistent homology is the following: given a random function on some topological space {{formula:3b9c0da6-6378-4138-ae4a-8932d439c89e}} , what can we say about the barcode {{formula:83df7c8e-403f-4cc3-adcc-7435a63472ca}} of this process? The study of the topology of (super)level-sets of random functions has been a subject of interest in probability theory for a long time. Many advances in this direction have been provided by a myriad of authors {{cite:951233771de8a2afad97029d557084e4d100c4a8}}, {{cite:dcec269cb5d1ed5a2ec03a6fb3ea0dc7709ba852}}, {{cite:d7dad38141bb5e8f77415b4f3bd123f62c97f382}}, {{cite:b75886f1cd76abc2a0cc04efe60f4f6094baf474}}, {{cite:9e5d67e81fa63c33444086a8588c0a4b434f8861}}, {{cite:547ccb15d84968ffc055bad979512b1ad887e35a}}. Most prominently for this paper, by Le Gall and Duquesne, who gave a construction of a tree from any continuous function {{formula:84665d5e-a13e-4b97-87cb-1ebf317a660f}} {{cite:7083c59781dc7eff7572256a93b9edf60dd094ef}}, and who interpreted different properties of these trees to give fine results about Lévy processes {{cite:3bcb1d0f2b5863a5f734fad4c865da5ddf15e6af}}. Picard later linked the upper-box dimension of these trees to the regularity of the function {{formula:5fcee3cd-66e1-4a21-875e-31819e5867e2}} {{cite:e67b178d89689ef2fd6ef1243365ccf932719e64}}. In essence, these trees have proved to be a fruitful and natural setting from which many results regarding the topology of the superlevel sets of the function {{formula:91446d1a-b44f-474f-8add-4a7cb130b739}} stem {{cite:cbec6ee386ca56349c646e3e1579b7dc78e0f678}}, {{cite:8ea334ffcf6e5054217165c5109fb1e9400ed0a0}}, {{cite:e634ea9ff2f4b38d39da3607a9733c8c4c8f14a2}}, {{cite:9a5b6e408c1068314ee04df37f4695fa039e3631}}, {{cite:7183e42c29e5670af1e3167c2b614a7daa12c6eb}}. A natural question is whether, or indeed how, these results are applicable to the persistent homology of stochastic processes. The answer turns out to be total: the study of barcodes and trees are completely equivalent, at least in degree 0 of homology. This has been established in {{cite:7183e42c29e5670af1e3167c2b614a7daa12c6eb}}, in which a dictionary between {{formula:5b94425f-a8c8-46a2-9f7e-388c0216364b}} -barcodes and the probabilist's trees was constructed.
i
225c3fcaeb1acca5fad58e6b88103ee3
As in Section REF , we investigate how the weights scale with depth and whether Scaling regime REF or Scaling regime REF holds true for convolutional layers. To that end, we follow the steps of {{cite:b54abb98592a3b2d3fe7f5fd90b4686c2c285308}} to get the singular values, and therefore the spectral norms, of the linear operators defined by the convolutional kernels {{formula:428e3c5e-e406-4113-9876-f42d85f1d178}} and {{formula:a8c7a197-1d0f-46dc-9ede-df4c8c62ccd8}} . Figure REF shows the maximum norm, and hence the scaling of {{formula:7e15a0c1-a51f-44d9-97c1-160654f21e2a}} and {{formula:bb7f3e35-6901-4a2a-847f-6669886da03f}} against the network depth {{formula:4a91910e-3e91-4516-b4bc-3cfa1d1f058b}} . We observe that {{formula:147eacd3-90f0-4577-a06a-d078c9f8ba11}} and {{formula:c23f1bc4-525f-404a-8715-99d7e24b12bb}} with {{formula:a8b3d1f1-3546-4402-90f3-3f367aa22f34}} and {{formula:d3499590-eeff-4c8e-91d0-9e602cbdb377}} . {{figure:8ba84b21-9650-4e87-b9b8-a09866ad8770}}
r
809a75079cf687f41ddd45466b020bdd
Remark 11 (Computable Parameters) We note that both {{formula:dee6cbc3-fff1-4828-9ee0-78745d083fa6}} and {{formula:3efa6b86-6374-4458-8866-93c5c6a77e06}} are quite easy to compute. This is valuable because it means that, given a sampling pattern {{formula:2b8c5cdd-1ffd-4007-b053-37867ad79997}} and a desired weight matrix {{formula:9551e0a1-43cc-4dcf-9457-75be250513c4}} , one can quickly compute the guarantees given by Theorems REF and REF . In contrast, common deterministic conditions which guarantee accurate recovery under random samples (for example the restricted eigenvalue condition {{cite:70e678683d337f2242d62ba6a96050e619624c64}}) are not in general computationally easy to verify.
r
94003799aaa94139a02e541bb1c60e4e
These shortcomings have shifted the interest to benchmarks of advantage that are thought to be more robust against noise and that are known to be verifiable with a feasible number of samples. Prominent examples are the heavy output generation problem (XHOG) {{cite:93a02b7d60d15d82e06d6c0a3c9c91304c38a91c}}, {{cite:b2cd54867bde139188e47aef02697317bdc16308}} and the related linear cross-entropy benchmarking (linear XEB) fidelity {{cite:d0b4696641a90319f2b2a8d161fd8c6ce4fbf71d}}, {{cite:f21d11b973f7d7175c1db1aeafc97aab65c2b2fe}}, {{cite:315942b3aa263b463c556bc4f693240884eb70a9}}. The recent quantum advantage experiment used linear XEB as a benchmark for the quantum state generated by the 53 qubit device. However, these approaches have two main drawbacks. First, they require the computation of the probability of sampled strings under the circuit's ideal distribution, which consumes a running time growing exponentially with the system's size. Secondly, the number of required samples grows exponentially with the size of the system for a constant noise rate {{cite:315942b3aa263b463c556bc4f693240884eb70a9}}. Thus, the linear XEB verification approach requires us to be in the “sweet spot" where both the number of samples needed given the noise level and the size of the circuit are not too large to render the verification impossible. This approach is not scalable to larger system sizes with current levels of noise. Besides, it is still unknown how to reduce the hardness of the heavy output generation problem (XHOG) {{cite:93a02b7d60d15d82e06d6c0a3c9c91304c38a91c}}, {{cite:b2cd54867bde139188e47aef02697317bdc16308}}, {{cite:0d7ee440f506a2e2b5a318e865cbe3ad83e9aeef}} to standard complexity-theoretic assumptions.
i
917930db46ade123f9581c2fefb2d6d6
There are various approaches to derive QNMsThe detailed discussion on this aspect is referred to {{cite:e243bf018694a7abc4c28709301ea71a5e4bc559}} and the references therein.. Different approaches produce the different precision {{cite:57c692c65cfcac80c83e95cbf0e110a302227c79}}, {{cite:6de47eceb2c6167b8a9d2246ccf7c990220eb222}}, {{cite:fd55fe9b2fdf327d0656379199fd54f8f4b3e28b}}, {{cite:124805d60e22a69b4a0ecce4c24bb5bded699524}}, {{cite:06bfd19cb59cf40d200ee723a42111a6d726eb63}}, {{cite:d1820c81dde06430926959bea7d9a46ff0acbef2}}, {{cite:06c599bdc1f62a4845a6fb2ea7104e4013a5f681}}, {{cite:47296bfb633da3f001a7f6160e513b71b5564204}}, {{cite:05d8766083c1ac52f29342c7b4c886e1a7fd8489}}, {{cite:c098bf7233bde3831155b8013cf9da2504ef223c}}, {{cite:7d0bb9b216b6627056e385737012d7d5d14089b0}}, {{cite:458b86729b322693556913521f39bf21923e262d}}, {{cite:db63c6e2de27aaf6c06dcd643f201b710d5d7767}}, {{cite:bfd5e324d6d523eb36c5c9493c8699f474961057}}, {{cite:dd10832a3e61bbfaf4a01b22575db56c6af30c2b}}, {{cite:43701cf83a8c7528cd6d6e1385552bfd840cd4b5}}, {{cite:672cd05617c6a5a950395089ec87a1b3d9b5fa34}}, {{cite:5ca37c5461e398d741689da39391fc773b75f708}}, {{cite:16a8901718cb82a34c2767ef413957dec207976f}}, {{cite:73a5e6c897345b34b679bfa597fd65f979a5bea2}}, {{cite:1e79fc355f33593d1724ae82357408af0c657af9}}, {{cite:a1e070be276a0f679a059721f0239892a5625662}}, {{cite:55af3da239c800ce990534604602618453e4dc58}}, {{cite:b0886de4d9639016e73ac93adf0132724be1d19d}}. Null geodesics are useful tools to obtain QNMs. The angular velocities at the unstable null geodesics orbits of black holes determine the real parts of the QNMs. The second derivatives of the effective potentials for radial motions are introduced to express the Lyapunov exponents in the imaginary parts. In the eikonal limit, the relation between the QNMs of black holes and null geodesics was derived by Cardoso et al {{cite:75fda86b032937cb8fe574a9ac76f693b0cdc7a0}}. However, the correspondence between QNMs in the eikonal limit and null geodesics does not always exist. As shown in {{cite:7ba986f9c94c4fd1b8fec7da7c8a139c862a140f}}, {{cite:a2222c9c7ec318c3641db1326a14de5157275688}}, this correspondence is guaranteed only for the test fields, while for the gravitational and other non-minimally coupled fields it may not be fulfilled. Similarly, shadow radii are closely related to QNMs. Through the relationship between the photon sphere and shadow radius of a black hole, Jusufi expressed the real parts of the QNMs in the eikonal limit by the shadow radii {{cite:127728f699666969b9cc0fe4da898ed7d73e26db}}, {{cite:12955d46abc39d4678ad9f70297d9c254a1e5c81}}. This shows a correspondence between the shadows and test fields in the spacetimes of black holes. And then, he studied the effects of the perfect fluid dark matter parameter {{formula:6f03c0d8-3a0f-43ec-8454-c671b4d48842}} on the QNMs by the massless scalar and electromagnetic field perturbations, and found the value of the reflecting point {{formula:71a92be8-4a67-4dbc-a5ce-fdaf78a9fd02}} corresponding to maximal values for the real parts. This work is helpful to detect indirectly the dark matter near the event horizon. Subsequently, this correspondence was verified by the {{formula:32d9852f-431f-4602-b22e-ac410f5d9738}} order WKB approximation approach in {{cite:43a633a69741c84d173693ac9c8d982a96ced5ed}}, {{cite:983e43fd81085b8b9a015015dd2276908cab3654}} and by the {{formula:d9e0f137-1959-4cb5-a454-882c7998c623}} order WKB approximation approach in {{cite:f01c904aa6b07aa01aa160fada935cf5c146e321}}, respectively.
i
c17db43afa935d583c4d81db0a6529fa
Both CNNs and bi-directional LSTMs with fine-tuning achieve accuracy higher than 90%. If we disable fine-tuning, classification accuracy is still high, although overall fine-tuning appears to consistently outperform non fine-tuning configurations, which is also consistent with the results presented in {{cite:edb482a248a2176788f736daac3a80101ae8c744}}.
d
629720ed04efd0c8c3e0a5ce0e7c5f5a
Baseline: Max-softmax   {{cite:bb1957e954842dd8e0de574391827ee66d1b6a06}} showed that the maximum of the softmax outputs, or confidence, can be used to detect OOD inputs. We use it as the score of an input being in-distribution (ID). We will refer to this method as Baseline. It is well known that the confidence can be calibrated using temperature to better represent classification accuracy {{cite:4aed49ee03b27b296e4695b5f411c6466dc41d50}}, {{cite:f7ce65dbb92a2d696c39de74916577774979f52a}}. We also evaluate this calibrated confidence, which will be referred to as Calib.
m
98306576cf5b04705bf1a1113b2052b5
where {{formula:b2c1cdb8-ac39-473f-9835-f31529ab7143}} is the coherent neutron scattering length for atom {{formula:1701b7cd-0021-4b67-b6ba-dac478b86cac}} , {{formula:766d9fde-aa90-472b-9fd7-054574d4486a}} is the wave vector transfer, {{formula:51efc493-f0e7-49d1-a6d4-4a8786673bdd}} the equilibrium position of atom {{formula:c24b0e4b-c473-451c-98a6-3439d20d4930}} , {{formula:563b2252-9e92-4224-842b-c8d2c05d62ce}} the eigenvector of phonon mode s for atom {{formula:e203446c-e966-4063-bda9-0658a2d9115f}} , and {{formula:ab0439a7-dad6-47bf-bd61-68a0db87202d}} and {{formula:86973da7-4028-4b30-a509-bfc0afea30d0}} are the final and incident wave vector of the scattered particle, {{formula:1b009e89-540a-43d3-ba28-faa6691d5a2f}} the phonon wave vector, {{formula:4bcb31de-4a44-405d-8d76-ecbb3ee0db32}} the eigenvalue of the phonon corresponding to the branch index {{formula:326dc8d6-8e91-43e6-97de-3f163074084e}} , {{formula:7f782140-8161-4790-8c33-722796bf6c46}} is the reciprocal lattice vector, {{formula:6eed393d-1f36-4a3a-8784-56a557fd1185}} the atom index in the unit cell, {{formula:66eb9078-6ccb-4e25-9321-f8e543c8c680}} the corresponding DW factor, and {{formula:33c33a83-7b9a-4870-8c16-88b1dcef2ef2}} is the Bose-Einstein occupation factor ({{formula:87fa7747-bc8a-47b3-a46f-e01e6d57952b}} ). The {{formula:7efb9bce-be5d-4f27-b889-e015197f9c5b}} and {{formula:99f9d8dd-f6af-408a-9406-2d2740907564}} sign in Eq. (REF ) correspond to phonon creation and phonon annihilation, respectively. The phonon eigenvalues and eigenvectors in Eq. (REF ) were obtained by solving dynamical matrix using Phonopy {{cite:2c8a4bbafcf1411f837bc65911225e95c85bc82c}}.
m
a19bb4ff3d00d0c3718101fb9a2839c7
Bolukbasi's method {{cite:bdd7e0663b0495c14f14e1efff09539d7d1ad24a}} requires sets of pairs that define the gender direction. For this we use their predefined pairs, since we target grammatical gender bias, which we have demonstrated to be similar to social gender bias. In addition, a predefined set of inherently-neutral words is also needed: these are the words that will be debiased by the algorithm. As a first step, and in order to estimate the feasibility of using this method for reducing the grammatical gender bias, we use the set of the inanimate nouns from SimLex-999 as our set of inherently-neutral words.If this method doesn't mitigate the bias we showed in the previous section, then using inherently-neutral words extracted from the vocabulary automatically cannot possibly work as well.
m
3678193b2555e09c48742b55e596a435
for some {{formula:3778ee0e-305b-4005-a5c2-037e5d1e9819}} independent of {{formula:30485662-10a2-4972-ab58-5818de9755b6}} . The asymptotic behavior of {{formula:390513e6-a4bb-4024-9162-9bc045985026}} in the energy space {{formula:2b351ea5-7d1c-4421-9fce-f248c474d65d}} has been known since Struwe's seminal work {{cite:ec4b13105b5960f556918fb9b09582213b7cf50e}}: assume e.g. that {{formula:850ca60d-eecf-4973-93d2-7a2220b45ca9}} converges in {{formula:3e9a812f-98be-4d2e-8b0a-e08182758309}} towards {{formula:e73be5ca-e134-4ce7-9937-9758a383deda}} . Then, up to a subsequence, {{formula:f7c871d5-faf0-4dc3-a820-53d3f61cb0f5}} decomposes as {{formula:d72f4cba-c361-40cb-bf02-35937a135287}}
r
acaf3aed89d1a6c2c34cbbb17b52e0a7
The presented approach clusters the data into segments and models each of them with an individual, locally-stationary model. Such a decomposition of a complex dynamical behavior may not be optimal since it may not always be clear how to differentiate between a nonlinear and non-stationary time series without knowing the underlying structure of the data-generating process. Identifying an appropriate model structure is therefore an important step. For discrete-time systems, {{cite:2d13c568f3ef8d11fdbafcc4f5d1bf3d10645c0b}} developed an estimation algorithm which can sequentially rate and collect important model terms. Following that, {{cite:bd12c6794686d7c40f9f8418d74c1ad11b7b9955}} find an effective model for a highly nonlinear terrestrial magnetospheric dynamical system. Within their framework the time-varying parameters can be estimated with a multi-wavelet {{cite:50a8d55b1fdee41724a872fe6e73ee94b4af6ba7}} or a sliding window approach {{cite:8edb77b905ea3965b6dd42f0ad075581a260496e}}.
d
26d34d07af7ad4e0524073a7193995b5
{{cite:6ff2a61fbcfecbcce34f4f44215e38f85f27bd9c}} makes the case that researchers building language models should be purposeful in curating training datasets, as curation choices are effectively world design choices. Corpora built on top of news scrapes snapshotted at a specific point in time will capture all of the inherent social bias and structural issues related to news reporting at a given point in time. {{cite:c0aaca87b709f963985d94a61d988f726dc9db61}} demonstrate that models with fixed sizes will be capacity-limited, further highlighting the need for careful data curation: in practice, most deployed language models are fixed-size, and care must be taken to ensure that they are learning from the highest-quality data possible to make the best use of their capacity.
d
16388f991765b63321100ff949fefce3
Active regions exhibit sufficiently slower rotation as compared to ephemeral regions. Moreover, there exist a weak tendency for larger active regions to rotate slower. This finding is in agreement with previous conclusions made in other studies {{cite:f2fe13c654c3e21b356ac6a5660cd0fa6148b137}}, {{cite:e3c7065cfddc4ebde956ae8472465e6925ecbc53}}. Ephemeral regions exhibit higher scatter of the rotation rates as compared to active regions. Most of unipolar active regions rotate slower than the mean rotation rate at all latitudes analysed in this work. At the same time unipolar active regions exhibit on average lower peak magnetic flux as compared to multipolar magnetic structures {{cite:2e59383fcf9ed899886d4f39d48f347bc009f886}}. Consequently, they disobey the rule found for all active regions regarding the tendency for weaker active region to rotate faster. We found no significant difference between the parameters of the rotation rate distributions of active regions of classes A and B. These distributions exhibit similar widths and near-zero modes (Fig. REF and columns 6, 7 in Table REF ). In contrast, the rotation differences of unipolar active regions form relatively narrow distribution shifted toward negative values.
d
e81cb7fb605fd6bca0444fd5718ac5a9
In our experiments, we extend the GPT2 architecture to formulate our model, named GPT2E and train it on the CoNLL-2012 dataset {{cite:22de77c99480646ea452405845aa28f8139cb7b4}} using the annotated coreference information. We evaluate the model's performance in terms of Perplexity on the ConLL 2012 and the LAMBADA {{cite:72010fb3ef1450212891a636355f110924d53114}} datasets and showcase the effects of such training on the word representations as well as on the downstream task of Named Entity Recognition (NER) using the CoNLL 2012 dataset. To that end, we compare GPT2E's performance to a base model (GPT2) when trained on the same data, to highlight the effects of coreference information when paird with our Entity-Transformer architecture.
i
7afcd9d6548dd22a745479184e81423a
Besides click feedback, a few methods also consider other kinds of feedback to construct training tasks. For example, CPRS {{cite:f4c15d34a0dc0bb55d34614b825926e586b331e1}} trains the recommendation model collaboratively in the click prediction task and an additional reading satisfaction prediction task, which aims to infer the personalized reading speed based on user interest and news body. FeedRec {{cite:2035b59ff82fdd74da24962ab22c17995e0a77e6}} trains the model in three tasks, including click prediction, dwell time prediction and finish prediction. These methods can encourage the model to optimize not only CTR but also user engagement, which can help learn engagement-aware news recommendation models. There are also several methods that use additional news information to design auxiliary training tasks. For example, EBNR {{cite:e51612dd90c424115f305249105541f112457297}} uses autoencoder to learn news representations and it uses another weak supervision task by encouraging the embeddings of news in the same topic to be similar than the embeddings of news in different topics. TANR {{cite:2939890dbad35ef76d3255ff2a58f3854cb1df39}} uses an auxiliary news topic prediction task to help learn topic-aware news representations. SentiRec {{cite:f4694d4a04eb22bc19b0bf9d5a719297b87ba9bd}} uses a news sentiment orientation score prediction task to learn sentiment-bearing news representations. KRED {{cite:521a64baf883e59a61beee05f880f169b3b98626}} trains the model in various tasks including item recommendation, item-to-item recommendation, category classification, popularity prediction and local news detection. These methods can also effectively encode additional information into the recommendation model without taking it as the input. However, it is usually a non-trivial task to balance the main recommendation task and the auxiliary tasks.
m
fe5aad59bc6c3755a614ec4aa1cf84f1
Remark 2 The weighting matrix in WLS provides the relative importance of the components of an error vector to be minimized {{cite:7824013d598e9a1b711a89cec837ff6a4a3d3db7}}. In the proposed method, the derived weighting matrices ignore the second- and higher-order error terms, which are non-negligible when the noise is large. To increase the robustness of the algorithm, the weighting matrices should include the second- and higher-order error components. An additional refinement mechanism is proposed in the following section to learn higher-order noise terms in a large noise environment by embedding NNs.
d
bae4267222e45a58582ea6775fb0fe54
Observe that, for Formulas {{formula:b777327a-0fe8-44df-8d4a-3a388d21216c}} and {{formula:79da97aa-b636-43ac-a6f2-e521063056ef}} to be satisfied, a trace has to be finite and so that in its last instant the value of the {{formula:8a8f7e14-cf6d-4e90-b707-430f58d85d35}} -counter is {{formula:992a5abf-313b-4f4b-b7e2-336dc2edcc2b}} . As it will become clear below, this step differs from the proof of {{cite:f6683a74750471a0eb961926b11aa17997f64578}}, since we exploit the last instant of a finite trace to indicate that the construction of the corridor is completed.
r
e188b92d7715b6fa7d0dad5374b0f70e
An obvious challenge in the use of loss functions which include {{formula:4112ea05-3db1-4cf2-8b9f-c74dec4ab9bc}} is that the number of samples required for density approximation grows geometrically in the dimension of {{formula:8b377e4d-3a3d-4a68-95e5-0f837a738b8c}} . In this work we have focused on the case where {{formula:0dee8ea6-4b0f-483b-82c4-261dfa735618}} is scalar valued and thus avoided the issue. Problems in higher dimensions may require approximating density through some observable or via distance to neighbors along a low dimensional manifold via technique like diffusion maps {{cite:a3b927ad7f8586b89060bcee7b30e60c56bad855}}, if such a manifold exists. The relative entropy loss does not require approximation of {{formula:ba9b137d-55d9-4463-9c1a-600b3c66bfc2}} , but does equate importance of a particular sample with its exponentiated magnitude. Thus, it may not be an ideal choice when the rare events of interest are not substantially different in magnitude from the core of the distribution. Further investigation of such problems would be an interesting research direction, but we consider it to be outside the scope of this work.
d
4b83a438cce7e08c3f40c052724063a2
On the other hand, we found that the benefit of the axial summarizer mechanism is dependent on the complexity of the prediction tasks and the size of the data available for training. When randomly initialized, we observed that it is important for the complex sequence-to-sequence prediction in the MIMIC tasks but it was not useful in the simpler PUMMEL tasks. When initialized with pre-trained weights, the axial summarizer clearly improved the data efficiency, i.e., improved faster when more data are available, as compared to the commonly used additive visit summarizer. Modeling the intra-visit codes separately could also be useful for interpreting the predictions as demonstrated in {{cite:a4beaf999f189459d1460fac28bb1636439874db}}, {{cite:8a2d1634179a554c48d6e52033e91945aa13ac0c}}. An interesting extension of our work would be to compare the features learned by SANSformer models with features that were engineered by domain experts in the field of risk adjustment.
d
c39d377b3530307960cae3c87b008231
For many meta-heuristic methods, there exists little to no theoretical justification or convergence proofs {{cite:4d37a384031a43ea7ae200da0a2ee38ff7ddea2e}}. Others may converge with probability arbitrarily close to 1, but might only do so after infinitely many function evaluations. In practice, many meta-heuristic algorithms even fail to work reliably for smooth, convex problems with few parameters {{cite:f06afb1b34b9eb95ddc90be571e3182ded33db3f}}. In fact, for some algorithms, non-convergence can even be proven mathematically {{cite:114c06f0d7bd56f3fd451671a2f7994422c4f2fe}}. Eventually, even a rigorous convergence guarantee will be useless if the convergence rate is too slow for practical purposes. For most methods, there exists a plethora of disparate variants, which renders comprehensive analysis of convergence proofs and convergence rates challenging. This is quite unsatisfying from a theoretical perspective. In practice, reasonable results can be obtained using global optimization methods {{cite:68c205794c9c00aeb10a84c28e6b73b6247a9a99}}, {{cite:1e5722abb2502758870b5d0ac734a1f95e6d7dc5}}, {{cite:c80c90212300e8c2b043fdc4104a79c931f90a15}}. Yet, usually no guarantees of global optimality can be given.
m
3a34274aeb315e3a7fd97694e604aae6
The crucial step is the determination of the weights {{formula:da4a0c56-9c3b-46f1-a14c-d3ca2db405d5}} and {{formula:3febec3d-aa11-45bf-8047-4d2f5fa6f5e6}} of the atomic neural networks, for which we need a reference data set containing the atomic spin values of a representative set of atomic structures. This data set can be used to iteratively optimize the weights until the neural network can reproduce the atomic spins in the configuration space spanned by the trained data with the desired accuracy. In order to check the predictive power of the HDNN and to reveal possible remaining errors, predictions for test data, which are not involved in the training process, are compared with known reference values. Only if the error of the test and the training set are comparable the predicted spins can be trusted. Additional validation steps can be applied in analogy to HDNNPs.{{cite:19eff0c59616d8c46a92ec32ea769de27f1890b1}}
m
9ba94cde91b9d3e04e71e7f917a3a8f0
One way would be to employ time-consuming techniques requiring to both manually collect and check (digital) evidence; for instance, one could look up sources like encyclopedias, newspapers and even gain further evidence by asking friends. Another way is to devise automatic fact-checking systemshttps://fullfact.org/blog/2016/aug/automated-factchecking/ {{cite:858f7f37020af448e4b877e2ea9dde0a536bfcb5}}, {{cite:043b3fcb0c3209f1261fe9a4220a4ceee8b25c0b}}. Existing approaches, can roughly been categorized in three main categories. First, text-based approaches based on a variety of learning models; these can use probability and logics (e.g., {{cite:a79e94420454678206390d35cdc95fae082222d5}}), deep-learning (e.g., {{cite:7c7d46b4bbdbc2f913a4d4fa8153acb46e251364}}), and also include multi-modal (e.g., text and video) information (e.g., {{cite:26d0e937d3f5243a6def9721b8142f0828a4e774}}). While these approaches can rely on large amounts of text and/or mutimedia sources like audio and video, there are difficulties in automatically understanding such pieces of information to (dis)prove a fact. This makes it difficult to give precise semantics to the fact being checked and contextualize it. On one hand, giving semantics boils down to understanding the fact itself rather than relying on statistical indicators like the popularity of a tweet about the fact. For instance, to (dis)prove the fact (Dune, director, D. Lynch), it is crucial to understand that the predicate director relates a Film and a Director and that Director is a subclass of Person. On the other hand, contextualizing facts and gaining insights from (chains of) related facts can represent a valuable source of knowledge {{cite:ed8f2cbd6f390d3472bccf090e5ddaba8e1a64a3}}. As an example, the fact (Jaguar, owner, Tata Motors) provides more insights when understanding that it is about the car brand instead of the animal; the additional fact (Tata Motors, type, Company) can help in shedding light on this aspect.
i
54a511e334ce8e4289d5f4a047e95d7d
Automotive security research has been traditionally focused on in-vehicle vulnerabilities or adversaries exploiting the lack of secure communication {{cite:bceedf9c6f4334027f55d969117cd2feea560a5c}}, {{cite:daa5652b1ce9a3bec169f1479b19360fe249f212}}, {{cite:8887df1a614ac602af5fd1400bf0ce8aa1ca2248}}. Machine learning has primarily been used for computer vision modules to improve on-board perception {{cite:8fec51cb17cb2fd861cd73f8767a7a116d245ef6}}, {{cite:5914a1de25d497190dd053606662881c3de5b01b}} or for securing in-vehicle networks, e.g., CAN bus {{cite:c4430a17c190c9268b166e7216a12ec0d5f84a33}}, {{cite:6489a385c86158105f4b03429b2fbcae7c7005af}}. With the emergence of CAV systems, recent research has focused on security of cooperative and safety applications such as platooning {{cite:bc84e45d50394a09468de7123faaa49b17a7ce9d}}, intersection management {{cite:046920523ce5d222672c35aff2fa24df2c55b48c}}, collision avoidance, emergency vehicle warning, lane merge and turn conflict warning, etc. {{cite:de6dd9dfb38bedc1b69de0a3b7b108d2bb945d90}},{{cite:283152c66b35e028d05493d9b29116dfb871b4e9}}
d
84e1c26ed151227ce85359313bcb8899
In this section, we evaluate the performance of the proposed widely-linear precoding schemes through simulations. We compare the proposed widely-linear precoding schemes with their linear counterparts, e.g., MF {{cite:36c4947a5984ec96ec7fb0be71b58e1096697617}} & MMSE {{cite:701eaf13aae8619ef5ecfda7ac5c0dd5a86e8712}}, RBD {{cite:7238f66502f2eb2955087e752d7d41a3a05a8f4c}} & S-GMI {{cite:9afa7724313451321f4409508ccb2f30545b1c69}} for single-antenna and multiple-antenna users, respectively.
r
8be197071a59265b00b0411bb71fbd84
EF in PLAX view is estimated based on the distance between inferolateral and anteroseptal landmarks, i.e. LVID. We use the length error (LE) of LVID as well as the location deviation error (LDE) of inferolateral/anteroseptal landmarks (abbreviated as IL/AL) as key errors. LDE is also the most widely used criterion for detection/tracking methods. The comparison is mainly made among the proposed method, the most recently proposed frame by frame detection-based method (Modified U-Net{{cite:4dfc363f8145b7596497971084848c00e2c2bd13}}, CenterNet{{cite:fcd1179786df37ba2a81b8daf855a6413ce0b183}}), and the regular detection+tracking method (Unet+C-Ynet{{cite:dfc2fbde018945d048bcd4e585c92cf15cb7169e}}). Unet here is with the same structure as the proposed method. Unet and C-Ynet are trained separately. A general comparison can be found in Table REF . {{table:080481b1-1e93-4d0f-978b-36e7936b0e70}}
r
0c8b4d08e4462caea2accc1d3004edca
We compare the results against traditional variational autoencoders and generative adversarial networks. The results show reconstruction of the inputs with feature and generator. We also show random samples from the generator network. We evaluate the Fréchet Inception Distance (FID) {{cite:03eff438dd65dcf1a36a2cf33e769f490ec2d262}}, {{cite:289036c9ee723fdbb0b630616d4932d0a14fadcb}} for each data set as a measure of the quality of the samples. The FID score is calculated by extracting the activation of the global spatial pooling layer of a pre-trained Inception V3 model {{cite:202c4d2dcb02242ed1f2c515e958a42906279b31}}, for equally numbered images from the data set (here we choose 10k images) and sampled from a generator model: {{formula:3fe8613b-6e2c-4de7-ab47-51b66313059d}}
r
1f98317aa37259c42b60c12f7cb6373b
Compute {{formula:86b7a6c0-8a5e-46eb-97d7-d30e52dab2c0}} , the optimal shape fitting {{formula:23616381-0396-4ea4-b91e-b582f81f4396}} . (It suffices to use an approximately optimal shape.) Compute {{formula:d0b99551-49d5-4b5e-9e46-1ecf86750911}} , the projection of {{formula:f1453faa-554d-4f0b-8591-5403e40b4d99}} onto {{formula:c8bbb423-dbc8-4dad-8094-93161c8f7e76}} . Compute a bound on the sensitivity of each point in {{formula:17fea798-903e-43bc-a534-d4d58d56e6a2}} with respect to {{formula:bbf3b675-ce82-4723-837e-1e73f98e1ee2}} . Since the ambient dimension is {{formula:3344b798-0298-4ccd-ab22-354e70e5cf37}} , we may use a method that yields bounds on {{formula:36ef950b-7f6f-4f33-bb09-644a003bcc7a}} with dependence on the ambient dimension. Use Theorem  to translate this into a bound for {{formula:9255ec9b-5cc9-4330-bffc-e828540d3ccb}} for each {{formula:8758f74d-912f-4832-ba71-975ec629b3aa}} . Sample points from {{formula:754ce527-fced-4db6-ad08-c0ce67eddd18}} with probabilities proportional to {{formula:3e127a4f-9cf5-4925-9f57-5eca6920e15e}} to obtain a coreset, as described in {{cite:10be2fe0ed251665dce73a862a062834019d4f98}}, {{cite:5e56d296cff06b6431fa92aa1eab6ae8c5115354}}.
r
ababf1cf171f4c0a6c4b27b943d64072
Particle yields in 0-10% central Pb–Pb collisions at {{formula:3799def5-09a1-4ba9-bdbf-09ec31789aaa}} = 5.02 {{formula:0af6e64d-6e6d-44d2-872a-184ce1386c06}} are compared to predictions from three Statistical Hadronization Models (SHMs), each based on a grand-canonical ensemble, and are shown in the left panel of Figure REF . The models assume hadron production at chemical equilibrium and reproduce most of the measured yields within uncertainties. The estimated chemical freeze-out temperature is about 153 {{formula:f2243274-5172-47c1-a339-095a835d296f}} , same as the value obtained in describing the data in Pb–Pb collisions at {{formula:bfa3c0c4-ad81-40aa-9407-0a9b671a827f}} = 2.76 {{formula:64052ff2-177c-486e-9434-949e27d8cb1c}}{{cite:648491a977a8ed795f9ec3f3bd84b7add2ceee92}}, {{cite:42a57ed6bcdecbd3a27dc655fe7f8204f1dda75c}}.  A remarkable exception is the {{formula:1da7d2a3-2433-46d9-9501-fcd8c0c80fe6}} , for which the deficit with respect to the predicted yield can be explained by loss of {{formula:6afc7e50-d4ae-4cb7-b5ac-f856c0c6bc48}} signal in the hadronic phase. There is also a tension for protons and multi-strange baryons whose explanation requires additional effects (baryon annihilation, interactions in the hadron gas or feed-down from excited hadronic states, etc.){{cite:86c8dafd888fa706e4fd74840f6d42a3eaa99d9b}}, {{cite:4ae749f9ab73f5909c586b88e98735c27ae0cf4c}}. It is notable that the models also reproduce production of nuclei and hyper-nuclei although their binding energies are much smaller than the extracted values of the chemical freeze-out temperature. Overall, SHMs are quite successful and describe yields of production of hadrons which vary over seven orders of magnitude. {{figure:e777d27a-0ef5-45a9-9c6f-d79b67bc86e0}}
r
a622f73e3aede6ecdcb35558e259aaf0
We then add the Planck tSZ angular power spectrum {{cite:40d0e9fe8777832f4b415f7c31ac43a2d1a7251c}} in the likelihood to add contraints coming from large scale ({{formula:8e6b87a3-bb24-4fdd-90dc-1a69b147d0ca}} ). We only consider the case using the RF modelling of the tSZ spectrum. Adding Planck tSZ data does not improve drastically the constraints (Fig. 3), but move the best parameters within 1-{{formula:7aaf18a9-5706-4d74-81c7-cd776235c9f2}} . {{formula:a9bc9adb-5ebb-4090-8fc6-5983dfbec01a}} and {{formula:61ed5c3e-5cff-4cb9-8338-c55a626b4788}} are shifted to lower values, while the mass bias parameter is shifted high, towards a better agreement with the combination of Planck CMB and cluster number counts {{cite:21d6eb3b36becd089b56d5c3b4cae62925e768fd}}. The constraints are still not good enough anyway to determine the mass bias. We thus study how the constraints vary when adding a Gaussian prior on the latter (from CCCP {{cite:fa485021c27f20fb59034eaa1f1c30d976aa4a9a}}). The degeneracies are broken, and low values of {{formula:79e30e8a-3f24-4a33-a354-5cc3737e86de}} and {{formula:5c9eac87-d84a-4277-ba48-71e531dd006b}} are preferred. All results are summarise in Table REF . {{figure:922675fe-e3b9-4b8e-80a9-80da681a9f98}}{{table:7c95b77f-47dc-4fb7-b4f9-8dc8a1675cea}}
r
df206a86ad3b6113d6455b3e801df868
It is not clear how much bigger dimension one needs in order to achieve a coarse embedding of a graph of growth {{formula:66b4b5af-d90e-479a-870a-06a39f068167}} in {{formula:746ac979-bf06-41c3-ba12-2c13a98c3357}} instead of an embedding considered by Linial, London, Rabinovich {{cite:681ddd5d774d4c463e40b71ae55302540a86baa7}} and Krauthgamer-Lee. Note that the embeddings of {{cite:72bdfae4a7f8de276ea6cbf750bdc7250fced54a}} are weaker than coarse embeddings, as one only requires that distinct vertices map at distance {{formula:1eb18045-e8be-4a93-a31b-df569fda1221}} from each other. For example there is an onto embedding in their sense from the linear graph with vertices {{formula:247880ae-f183-4ee7-880c-a93a31dc30ed}} to the standard Cayley graph of {{formula:f9e0d9f1-e6f1-481f-b168-af6becd998d9}} -so these embeddings may raise {{formula:a1ec94a6-73dc-4225-b729-074e7b9521f3}} .
d
4470daf312c37ab9fda1d92965f319ca
We perform both frequentist and Bayesian analyses of our data. The measurement of {{formula:b1aaf22d-4bc8-4803-8b67-78c89f121d78}} from each of the analyses is in agreement, with the presented confidence intervals coming from a frequentist analysis and the Bayes factors and credible intervals coming from a Bayesian analysis. In the frequentist analysis a fit is first performed to the near detector samples binned in the momentum and cosine of the angle between the lepton and the beam direction, with penalty terms for flux, cross-section and detector systematic parameters at the near detector. Systematic parameter constraints are then propagated from the near to the far detector via the covariance matrix, {{formula:4f4c3a09-dd9b-4f27-bcc3-f4e087cd2ed8}} , in Eq. REF and their fitted values. The matrix is the combination of the posterior covariance from the near detector fit with the priors for the oscillation parameters, with some parameters affecting both detectors directly, while others that affect only the far detector are constrained through their correlation with near detector affecting parameters. Gaussian priors for {{formula:1169f1b9-5aef-4ab0-af59-889b9530c915}} , {{formula:542d0e6b-2968-4429-b637-4d7f07ce1554}} , and {{formula:60e03b90-c7a2-4ba5-96f0-5208021ef5f8}} are taken from the Particle Data Group's (PDG) world combinations {{cite:a0fd7a215fc5573051ee82c9a9498c721bc4e450}}, while {{formula:92fe6844-324c-4ae4-aeb5-187bbf724c5a}} and {{formula:e56af971-a8bd-49de-aac9-ad2cd1b45f6a}} ({{formula:1cb06fc0-1b1e-480b-b5c8-53680a7f69ea}} ) have uniform priors in normal (inverted) mass ordering. For the Bayesian analyses the prior for {{formula:7163f9af-45d5-4996-aa55-0c0e648e4aec}} is uniform, with an additional check applying a uniform prior in {{formula:072940ba-91dc-472f-9def-67d84229d6a8}} producing the same conclusions. Furthermore, rather than fitting the near detector and propagating to the far detector as a two step process, the Bayesian analysis directly includes the near detector samples in its expression for the likelihood and therefore performs a simultaneous fit of the near and far detector data.
m
f332bbcb7a76d1158dc0b9a763cfe25d
In this paper, we present a new mechanism, differing from the self-tuning or well-tempered classes, to solve the Cosmological Constant Problem through a simple, minimal scalar field model. Our model belongs to the Kinetic Gravity Braiding (KGB) {{cite:a81de749a5a0f3762f931d12d4fe9f8cd1673b90}}, {{cite:ae89520bed04b405da4dad1fb408aa9e1c6a3769}}, {{cite:525e2f7e05ab02f21ecc78abc777b41ff4c588d9}} subclass of Horndeski theory. The mechanism contains two key mechanisms - cancellation of a large vacuum energy density through a linear potential, and dynamic self-tuning to a de Sitter attractor through a minimally modified kinetic energy and scalar field equation. We show the early-time cancellation of the vacuum energy during slow-roll allows for the existence of a matter dominated era after which the field enters stable, fast-roll evolution at the attractor. We also demonstrate the robustness of the mechanism under a phase transition in the vacuum energy density.
i
127fc326aa831486ffa04a1c8a79ab9d
Although there are at present no direct constraints on the escape fraction from faint galaxies during the epoch of reionization, deep searches for escaping Lyman continuum radiation at lower redshifts do show some evidence for redshift evolution {{cite:8d451e2f1b1a898184b919b277524e4e80ab6d18}}, {{cite:1bffb188b826987481278421e3cacf7c3a195d09}}, {{cite:9070a53720dfecb217d9187be117a0823c8175a3}}, {{cite:d4f21784772c7b64defcbdc50dc9e196f2545671}}, {{cite:985355dcfb2131d0a8ccdccca57a9608af3eb366}}, {{cite:f7d54c83355a749c40195aaf9ee8e19438a3c177}}. Such evolution, in which the escape fraction increases with redshift, could owe to increased feedback at earlier times when star formation was more vigorous {{cite:770473d11367c22b944deb1e87d91de0bbb5640c}}. This picture, in which ionizing photons escape galaxies along lines of sight cleared of obscuring gas, would be consistent with Lyman continuum observations suggesting “on/off” escape, possibly connected to the viewing geometry {{cite:9070a53720dfecb217d9187be117a0823c8175a3}}, {{cite:f7d54c83355a749c40195aaf9ee8e19438a3c177}}, {{cite:6f2cf00acb68f77d93e75effc84f4323f60a203b}}. Another possibility is that faint galaxies may typically have higher escape fraction than more massive galaxies {{cite:0d0f91e6066dd96b3ed0639c8f18ec21fb369dfb}}, in which case the larger relative abundance of faint galaxies at high redshift would result in an increase in the population-averaged escape fraction. The extremely blue UV continuum slopes recently reported for {{formula:0a5398fe-af7b-4730-bb18-b34a783ef86c}} galaxies {{cite:10ce409305529882226776b150daaeffe8e141ad}} are also suggestive of weak nebular recombination emission, which would be consistent with very high escape fractions of ionizing photons (but see Dunlop et al. 2011 for a critical analysis of the UV continuum slopes).
d
b8dc99f383144ba5425021e706b50e3d
It is a classical result that if each ball selects one bin independently and uniformly at random, then the maximum load is {{formula:c5ad55ab-60b3-4497-93f1-8fac5c1e8881}} In general, with high probability refers to probability of at least {{formula:f4ffda3c-39dc-4fda-a754-2c6766db8b6f}} for some constant {{formula:e454dc3f-b9d2-453d-bb7b-0504740cfac7}} . for {{formula:bdabef4a-ceed-4ad2-af36-728dd4cea6c7}} , and {{formula:a2fa271a-586f-492f-ac50-3aa5516b60a8}}  for {{formula:ec5e5495-1cbe-4666-bfb5-136fcc8fafd1}} . In the following, we will call such a process one-choice. Azar et al. {{cite:9380dad16f3cc9fea41104804a7f95ddb1b98443}} and Karp et al. {{cite:025ef4f7ae79d843376f94e21771fa002292efac}} proved the remarkable result that if each ball is given two randomly chosen bins, then the maximum load drops to {{formula:1786a096-d665-4b7c-bdbe-f629cd63b0a2}} , if {{formula:1685b4e2-dbe7-4318-9eed-e96516b092c1}} . This dramatic improvement of the two-choice process is widely known as “power of two choices”, and similar ideas have been applied to many other problems including routing, hashing and randomised rounding {{cite:1415687e537dae485d900964a29e6853e43708fc}}.
i
12f3356743e1b5368725ee4ab0cb5bdf
Unlike two schemes in Section , we develop an accelerated variant of the Douglas-Rachford splitting method by utilizing the Halpern-type idea in {{cite:ac6b1fec51fa1dfac7da1704df777a65eeb66575}} and the Lyapunov analysis in {{cite:77f4bbf25c451b42e7da23fd36b9a83caa960240}}, {{cite:28c64930f17e6f82b88b90e31b836674be0e209f}} without requiring the Lipschitz continuity of {{formula:0407ad47-c748-4cf5-ab61-c9615e4f0db7}} .
m
3bd73d8cb10d0a1b2e0319bbda81dc43