| text | label | id_ |
|---|---|---|
Classical numerical level set methods {{cite:a9420e74e6bf65bd6a86564b0f36c83d14f3d251}}, {{cite:bab2a199a57ef503d35a0ae3a464720a4c92a76f}} attempt to directly find level sets in the input domain {{formula:68d47616-2648-4e3c-915c-71a5595ba080}} via boundary evolution.
Let {{formula:a9aaa9ac-ddfa-4f14-bd4b-d1878fb7b043}} be the
enclosed part of an object defined by the boundary {{formula:87ad8884-cb36-4783-a17f-b620a580a97a}} . The zero isovalue of a signed distance function {{formula:63bbc58f-e101-42c3-904d-39a087658c48}} , defined as
{{formula:a25e2b6d-9d51-4a1a-a8f3-f22468a65da5}}
| m | c62c45753859215c0b90c42e4f8166d8 |
is called a {{formula:87e70250-9356-44be-928d-fdad6691ca7e}}-set of {{formula:a638d6e6-4dd2-43b1-8312-8bf3e9608831}}; see {{cite:f4dd03f18d9cce5b6dda77abff430e65309441b1}}.
| r | 79d8207558dc13ebf34b8100b01b6ff7 |
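To make the definition concrete, here is a minimal Python sketch that evaluates a signed distance function for a disk (an assumed toy shape) on a grid and recovers its zero level set with marching squares; the shape, grid resolution, and use of scikit-image are illustrative choices, not taken from the text above.

```python
import numpy as np
from skimage import measure  # marching squares for iso-contours

# Signed distance function of a disk with radius r centered at c:
# negative inside the object, zero on the boundary, positive outside.
def sdf_disk(X, Y, c=(0.0, 0.0), r=1.0):
    return np.hypot(X - c[0], Y - c[1]) - r

x = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(x, x)
phi = sdf_disk(X, Y)

# The zero-set of phi is the object boundary; recover it with marching squares.
contours = measure.find_contours(phi, level=0.0)
print(len(contours), "boundary component(s), first with", len(contours[0]), "points")
```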
We tested our new algorithm on two examples from chemical process engineering. Additionally, we compared the performance against three other algorithms from the literature, namely the original Vector Direction Method used by Atkinson and Fedorov, the algorithm by Dette et al. {{cite:1fbc2ff9fcf51e86b692a2ec795489f22d5de629}}, and the original SIP formulation which solves the maximin problem (REF ) by the Blankenship & Falk algorithm. All algorithms have been implemented in Python {{cite:8cfa1979292e555411605e54850420d231578200}}.
| r | 8f653a5f2bde8cc32022b003428ac939 |
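As an illustration of the Blankenship & Falk idea referenced above, the following is a hedged sketch of the cutting-plane loop on a toy semi-infinite program; the objective, constraint, bounds, grid, and tolerance are invented for the example and are unrelated to the chemical process engineering case studies.

```python
import numpy as np
from scipy.optimize import minimize

# Toy SIP:  min -x1 - x2   s.t.  g(x, t) = x1*cos(t) + x2*sin(t) - 1 <= 0  for all t in [0, 2*pi].
# The feasible set is the unit disk, so the optimum is x = (1/sqrt(2), 1/sqrt(2)).
f = lambda x: -x[0] - x[1]
g = lambda x, t: x[0] * np.cos(t) + x[1] * np.sin(t) - 1.0

x, T_k = np.zeros(2), [0.0]                  # start from a single discretization point
for it in range(20):
    cons = [{"type": "ineq", "fun": lambda x, t=t: -g(x, t)} for t in T_k]
    x = minimize(f, x, bounds=[(-2, 2)] * 2, constraints=cons).x
    # Inner problem: locate the most-violated constraint (a coarse grid here;
    # a local refinement step would normally follow).
    grid = np.linspace(0.0, 2.0 * np.pi, 721)
    t_star = grid[np.argmax(g(x, grid))]
    if g(x, t_star) <= 1e-8:                 # (approximately) feasible for all t: stop
        break
    T_k.append(t_star)                       # add the cut and re-solve the outer problem
print(x, "after", it + 1, "outer iterations")
```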
which is equivalent to the divergence of the radiative flux
{{cite:79e28d71e2e8acca75277512431b338c284efebc}}. The opacity
is {{formula:a686ac06-94db-40d9-8382-5a2b1044b08f}}, tracing gases with two different
opacities {{formula:dbe60b29-0fde-4949-8bc3-7ad86b62a9e9}} and {{formula:ec3fb080-2e28-4ddb-b078-902a296cb9ce}}. The mean grey intensity is
{{formula:9de4cbc8-912c-413f-81c9-5b8daac8acd7}} with {{formula:b78a8671-8d74-45ee-9b8a-6c079a3728a6}} and {{formula:9cd71e44-d71f-4070-9bd6-93645a474b6c}} the upward
and downward intensities computed using the radiative transfer equation in a
standard two-stream approximation {{cite:8be40ad812c547d55af0c5d9041084f4171929d3}}. We
treat only absorption opacities here; scattering is ignored.
| m | f1b03f8cce018f5c4a451f329250e809 |
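A minimal sketch of how the mean grey intensity could be assembled from the upward and downward streams in a pure-absorption two-stream setting; the layer-by-layer formal solution, the boundary conditions, and the assumed linear-in-tau source function B are simplifications for illustration, not the scheme used above.

```python
import numpy as np

# Grey two-stream sketch (absorption only, no scattering).
ntau = 200
tau = np.linspace(0.0, 10.0, ntau)        # optical depth, 0 at the top
B = 1.0 + 0.5 * tau                        # assumed source function (hypothetical)

I_down = np.zeros(ntau)                    # no incoming radiation at the top
I_up = np.zeros(ntau)
I_up[-1] = B[-1]                           # lower boundary: thermalized upward stream

dtau = np.diff(tau)
for i in range(ntau - 1):                  # march the downward stream layer by layer
    att = np.exp(-dtau[i])
    I_down[i + 1] = I_down[i] * att + B[i] * (1.0 - att)
for i in range(ntau - 1, 0, -1):           # march the upward stream
    att = np.exp(-dtau[i - 1])
    I_up[i - 1] = I_up[i] * att + B[i] * (1.0 - att)

J = 0.5 * (I_up + I_down)                  # mean grey intensity
print(J[:5])
```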
Currently, many tools aid in signal processing, and some of the more widely used demodulation techniques include adaptive optics, background noise rejection, relay transmission, and hybrid RF/FSO communications {{cite:f2384d26e72afbda80e3d31a4a35b90b09aff0ab}}. While techniques such as adaptive optics aid in demodulating noisy signals, neural networks have several advantages that we explore in this paper. One key advantage of using a neural network is that it can be integrated into other tools used to demodulate signals, such as adaptive optics, without significantly increasing the system's size, weight, and power.
| r | 76c82f42773fb87880a4641567c71b13 |
Much of the ongoing research in high energy physics (HEP) requires high-end computational resources. Examples include analyzing petabyte-scale data collected at the Large Hadron Collider (LHC) {{cite:b779182315f785410c7ef0e75c3ef0b8dbec5ece}} or obtaining precise theoretical predictions that may require taking into account matrix element calculations from thousands of Feynman diagrams {{cite:c61445826679ba4d1e4c5133b67deb0084240622}}. As such a computationally intensive discipline, HEP relies on many kinds of digital objects (DOs). The DOs used in HEP include large datasets obtained from detector operation or Monte Carlo (MC) simulations, data analysis software like ROOT {{cite:79e0a48f75c304ff148334ff0f29308ce0490943}}, MC simulation software like Pythia {{cite:88a81aac51f75f6bc39441516847c4c4b241ac40}}, Sherpa {{cite:4377ce39fc0cc6478ab0e9b5402f8e7f88339a6b}}, and MadGraph {{cite:f454b7bc0aaf82edc15398b91c5a1384a0b5a2c6}}, dedicated libraries like LHAPDF {{cite:854db567c991261eacdf4ea49837140f1cf277b6}} for calculating parton distribution functions, as well as numerous privately developed codes and software packages, documentation, tutorials, and notebooks.
While collaborations facilitate the internal preservation of their data and common software frameworks for collaboration-wide usage (for example, the ATLAS collaboration supports a dedicated data storage facility {{cite:de5bd59be736839314e92da2ce9bb26fba5d9b83}} and maintenance of data processing software {{cite:437fce65bcd92bd6b8a89652a9b9b4b86f12acf9}} for its members), the preservation of digital resources independently developed by smaller research groups or individuals is equally important for reproducing the results of HEP research.
Hence, a significant amount of community-wide effort and resources has been devoted to preserving these DOs. For instance, the Durham High-Energy Physics Database (HEPData) {{cite:eb87e55d61b3fb995c4a2d08f0d658497544dcc1}} is an open-access repository established for sharing details of physics analyses from scattering experiments in a digitally accessible format. Similar efforts have been made to preserve and reuse dedicated data analysis frameworks and statistical models (see Refs. {{cite:c357a03ce947bce1663a1cf6ee9f759eeb63eb5f}}, {{cite:7ca9ef865da912a48a177ab1d048fa5c7560e5c9}}, {{cite:edc44c2f9512a3f4ae71a856b7038ecf04d53e8c}} for example). These efforts emphasize the importance of a holistic approach to preserving different kinds of digital objects. The importance of such preservation has been repeatedly emphasized in the literature, especially in the context of reinterpretation and reproduction of analysis results in HEP {{cite:bd1c00074b53e4f302a4e5167660a1622096644a}}, {{cite:c53ddbfe5569cf561813c19eae0099b8c205c010}}.
| i | ee1c8a9cb10805ae3f72e454d111377a |
We now turn our attention to the {{formula:5ebd0520-c7b0-48fd-8b86-b2d6cf9a6cf9}} and present the longitudinal and transverse LFDAs in Figure REF where we juxtapose them with predictions
from LQCD and QCDSR. Notably, the longitudinal distribution is a concave, nearly symmetric function of {{formula:0e6b4676-0d1e-420a-b46b-adb64a018ab8}} , much broader than the asymptotic form, which is a
consequence of the smallness of the {{formula:55bf4712-45d1-40e5-97fd-fbbf0c973a8b}} coefficient. The transverse LFDA, on the other hand, is asymmetric around the midpoint and its maximum is
located at {{formula:55b1d868-1556-40c7-9cb7-1019469f22de}} , which clearly indicates SU(3) flavor symmetry breaking and that the strange valence quark carries a larger amount of meson momentum.
The asymmetric shape is due to the similarity of the Gegenbauer coefficients: {{formula:7349041d-c470-4ff0-b639-c946d0891004}}, whereas {{formula:8a696515-f51f-4669-8e69-ebedd3446b14}}; see
Table REF . This is in agreement with a recent calculation in LQCD, though in that study {{formula:58b57d7c-1ab1-4687-af0a-58dcdfd44ed5}} tends toward the asymptotic
distribution {{cite:a177b5cceaf37ca570a4f44bc5484eccb9d61a01}}. In contrast to these findings, QCDSR predicts {{formula:0950a6cd-8418-412b-b989-940d238051d0}} {{cite:513c6766fc70b9c32755dc6ac3021f73ceecf20a}}.
| r | 2f4467feb27209465e6baee0c5bcc64b |
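To illustrate the Gegenbauer-coefficient language used above, here is a short sketch reconstructing a distribution of the standard form phi(x) = 6 x (1-x) [1 + sum_n a_n C_n^{3/2}(2x-1)]; the coefficient values below are placeholders, not the values from the paper's table.

```python
import numpy as np
from scipy.special import gegenbauer

# Placeholder Gegenbauer coefficients; a nonzero a_1 produces the asymmetry
# (and the shifted maximum) discussed in the text above.
a = {1: 0.10, 2: 0.02}

x = np.linspace(0.0, 1.0, 501)
phi = 6.0 * x * (1.0 - x) * (1.0 + sum(a_n * gegenbauer(n, 1.5)(2.0 * x - 1.0)
                                       for n, a_n in a.items()))

x_peak = x[np.argmax(phi)]   # a_1 > 0 moves the peak away from the midpoint x = 1/2
print(f"maximum located at x = {x_peak:.3f}")
```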
Yet another avenue that might help disentangle the formation mechanism for this companion is a comparison of the bulk composition of the secondary with that of the primary. The spectral inversion technique can successfully recover gas abundances and atmosphere properties of brown dwarfs (e.g. {{cite:3f7feadf9b216e9fc4a05349c37c68048fe80124}}, {{cite:e1b8e53bf675192befb99e1ee72a913875fd8d80}}, {{cite:a9bc9cf286c1ebe9e426b0e449ba3cd20673f29c}}), including C/O, Mg/Si, etc. Previous studies of giant planetary-mass companions (e.g. {{cite:34dcd54b0193bf553a4bee715690c2bf55e319b8}}) have used the C/O ratio comparison between components to speculate on a formation route. Recent work by {{cite:0b768339474d69181c1976564a881d71e451b7df}} used the spectral inversion technique to confirm that two brown dwarfs share a common origin based on similar composition measurements, including a measurement of C/O for both components. The BD+60 1417 system is an excellent candidate for a spectral inversion approach to determine whether chemical composition could help differentiate the formation mechanism. While that is outside the scope of this work, the data collected herein can and should be used by retrieval codes such as BREWSTER ({{cite:3f7feadf9b216e9fc4a05349c37c68048fe80124}}, {{cite:e1b8e53bf675192befb99e1ee72a913875fd8d80}}), which uses complex cloud approaches to examine the detailed atmospheric chemistry of a given source.
| d | 6477f8fa580b71a153f5a76d5fe99b25 |
Clearly, {{formula:2a5bf4c8-ff61-483b-9255-6bf316ce1eff}} , while, for the other coefficients, we have, following
Biggs {{cite:e982d2adc4e2c663414f0abeb5a25aa8edda17df}}, that
{{formula:46da7b00-0cfe-4794-88fd-64cb09cce627}}
| r | 6932031a7212acede8deb442ef8418be |
Mobile hardware accelerators are usually limited in the types of operations that can be massively parallelized for fast execution. Thus, more complex quantization methods are often not supported by existing hardware. As such, other works have focused on quantization-algorithm-specific optimization methods (e.g., targeting 8-bit uniform quantization). These include quantization-aware fine-tuning {{cite:6baccc673df9a6237fad3dc1ec02e4beb4ebb94d}} and differentiable optimization of quantization parameters {{cite:d45ab1247b343303f51f936cc230895e2de33fa0}}, {{cite:98422e2f2c321f6ba58a21a6a7aecaa3b32bc165}}, e.g., finding the optimal max/min thresholds of each weight/activation tensor for minimal quantized degradation. These methods train a model that is robust to quantization perturbations by simulating the error/noise of fixed-point arithmetic.
| i | 68b1b81d77898a9e13d247883a60a84c |
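A minimal sketch of the simulated fixed-point noise such methods train against, assuming plain 8-bit uniform (affine) quantization; the tensor-derived max/min thresholds used here stand in for the learned thresholds mentioned above.

```python
import numpy as np

# "Fake quantization": quantize to an integer grid and immediately dequantize,
# so the forward pass sees realistic rounding/clipping noise.
def fake_quantize(x, num_bits=8, x_min=None, x_max=None):
    x_min = np.min(x) if x_min is None else x_min   # thresholds; learnable in the
    x_max = np.max(x) if x_max is None else x_max   # methods cited above
    scale = (x_max - x_min) / (2 ** num_bits - 1)
    q = np.round((np.clip(x, x_min, x_max) - x_min) / scale)  # integer grid
    return q * scale + x_min                                   # dequantize

w = np.random.randn(4, 4).astype(np.float32)
w_q = fake_quantize(w)
print("max abs quantization error:", np.abs(w - w_q).max())
```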
To this end, let us first introduce a class of cycle statistics. We will denote {{formula:724a388a-38ba-4e34-9d7b-452fbfae3d4c}} as a cycle on the factor graph corresponding to the posterior distribution {{cite:b654ab040cd1f143733b9139c529e851815808dc}}, shown in Figure REF . Specifically, the factor graph is denoted as {{formula:1de5d027-2e5d-45ed-9c83-a985dcfcb976}} . The vertices are split into two groups {{formula:2d637a8e-0937-4271-8b22-b025db0b16cc}} , where {{formula:ec8344f7-44aa-46f5-a0a3-b06efad3246b}} denotes vertices from the adjacency matrix {{formula:41f26d92-714e-4564-b4c0-4b74883fb9a3}} , with {{formula:7c64e793-d0eb-4ddb-ab33-e90a609e3af6}} , and {{formula:8d9eb9db-6ba8-4724-91d0-c9ce93eda8d7}} denotes vertices from the covariate matrix {{formula:09cdca03-9b33-4f23-a43d-b328a0fb7cb9}} , so {{formula:70d04cf2-78de-4c4b-a052-8224e90778b5}} . In Figure REF , vertices in {{formula:de7ea78c-d92e-4add-a813-6fdb70204bb9}} are shown by dots, and those in {{formula:8cc4d6e0-05b1-46a9-994a-39bb590de35e}} are shown by squares. The edges also split into two groups {{formula:1a2bef27-a89d-468e-aadb-abe8d113f1be}} , where {{formula:557b2292-31ab-4381-bbed-aa4dbc86bf57}} , and {{formula:f970cae1-5f16-4a43-82c6-f154ee7b2e38}} . We will refer to edges in {{formula:27d66ca5-23ad-454a-bfe1-3f46ab0506f3}} as {{formula:63d698e6-3b0e-4eef-a490-f59e1abe403c}} edges, and edges in {{formula:dc3819e9-63ef-4ba0-bd9f-1ee63dd95502}} as {{formula:21980da8-a016-4632-8c57-9d19ad16eb46}} edges. Because {{formula:69a31bb9-6fa3-45f2-80a5-256bfa8f3c27}} edges must appear in consecutive pairs in a cycle, we refer to such pairs of {{formula:265de4fb-153a-4f6b-a054-079b875055a5}} edges as {{formula:3d09cec3-d0dd-472f-bd03-dd18265c5098}} wedges. The graph of a cycle {{formula:f172463a-6b9b-41bc-9524-56d9bab99da2}} is denoted as {{formula:a2967609-c8f3-42e2-87b3-62695b4e2dc2}} . We use {{formula:0a7469a8-70bc-42dd-90d8-42533f2d183c}} to denote the length of the cycle, and {{formula:da8887d8-5bff-4f60-81f2-a3ad7dd03555}} to denote the number of {{formula:c5f9a4fb-c87c-4cef-93f7-3aebd0e17c4c}} wedges in the cycle. For a cycle {{formula:38e011cc-5789-412c-afb6-167e4646a130}} , we denote {{formula:364f3944-b5dd-4f95-94a9-537591379885}} the subgraph of the {{formula:772a1027-4ade-44a0-a989-cb08718e5b63}} edges, and {{formula:4d9a58ed-0d94-4c56-8e41-4f069d200b03}} is the subgraph of {{formula:a700cc21-eb5b-4c34-b504-e4ec6aa1abc9}} edges.
{{figure:61699629-2da1-426b-b677-36fd3951f5bf}} | r | 8da708bfc80a2fc0ca9f0bdbf13dbb19 |
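For concreteness, a small illustrative sketch (on an invented toy graph, not the posterior factor graph above) of how cycle lengths and X-wedge counts could be tabulated with NetworkX.

```python
import networkx as nx

# Tiny stand-in factor graph: "A" vertices (dots) and one "X" vertex (square),
# with edge labels recording whether an edge is an A edge or an X edge.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 1)], kind="A")   # A edges
G.add_edges_from([("x", 1), ("x", 2)], kind="X")       # X edges (a wedge at 'x')

for cycle in nx.cycle_basis(G):
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    kinds = [G.edges[e]["kind"] for e in edges]
    n_wedges = kinds.count("X") // 2   # X edges appear in consecutive pairs
    print(cycle, "length:", len(edges), "X-wedges:", n_wedges)
```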
We evaluate our models using the official evaluation protocols, i.e. AP{{formula:95d6cf9a-010f-4126-990b-576df183df1e}} for UAVDT and mAP and mAP{{formula:2f7d86b2-2378-4cbf-b5a3-50f6d507e76f}} for VisDrone, respectively. Furthermore, similar to {{cite:a5a232eff4d4527d44e123b6158ee1fb34244215}}, we report results on individual domains and their respective averages AP{{formula:4a2dcd60-8947-4973-8b05-787dce1bd1ae}} and mAP{{formula:396ddc05-5f2c-4345-8505-1174d3ff3ac4}} over all respective domains to measure the universal cross-domain performance. These metrics weigh each domain equally and therefore mitigate the influence of domain imbalances in the test set. They favor models that perform acceptably on all domains instead of just a few, possibly over-represented domains.
| r | 6540681b30d1364bfe6c17df2b14f21d |
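A tiny sketch of the domain-averaged metric just described, in which every domain contributes equally regardless of how many test images it holds; the domain names and AP values are hypothetical.

```python
# Average AP over domains, weighting each domain equally.
def domain_averaged_ap(ap_per_domain: dict) -> float:
    return sum(ap_per_domain.values()) / len(ap_per_domain)

print(domain_averaged_ap({"daylight": 0.42, "night": 0.28, "fog": 0.31}))
```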
Table REF presents a comparison to the state-of-the-art techniques on graph metric learning. In particular, we compare with the different architectures proposed in {{cite:6aaf9984aef99ce607a5fa0aff13175c68b63fbd}}.
| r | e61dd10cdb6b4cce02201eb314f528e8 |
All experiments were carried out in Python (all code available at: https://github.com/JamesFitzpatrickMLLabs/optlearn). Graph operations were performed using the NetworkX package {{cite:3f5f59beb6b8549b339b5c92c8da1667cd71d0dd}} and the training was carried out with the Scikit-Learn package {{cite:1e2db4baac5bc8dd738f0dff318b790c6d81f020}}. The linear programming features were computed using the Python interfaces for the Xpress and SCIP optimisation suites {{cite:88f95b6d0935c45c073372e28dffc836ea5249f0}}, {{cite:83d03967ffc216475129b6e8e3598d1fd5a71f09}}. Training and feature computation were performed on a Dell laptop running Ubuntu 18.04 with 15.6 GB of RAM, an Intel® Core™ i7-9750H 2.60 GHz CPU, and an Nvidia GeForce RTX 2060/PCIe/SSE2 GPU.
| r | 27dd2bcf3c109853f6a9bff59c1525e9 |
The multi-label task of assessing the aesthetic quality of images based on different aesthetic attributes (aesthetic, memorable, and attractive) using high-level semantic information is explored in {{cite:1a7f86ec148b62ce4a125b5d468d733920b36809}}, as shown in Figure REF (a), by designing a Bayesian network that predicts the aesthetic level from multi-attribute predictions. Each aesthetic attribute is represented by a three-node Bayesian network comprising its label, value, and measurement. The framework has two modules: measurement acquisition via SIFT {{cite:67a2f3c8943ef3eac1bd6ed04c4e8daf9f837d13}}, GIST {{cite:f17d97064eb5e6321e68e5ff37b25056d5c4ceca}}, HOG {{cite:28d0a6a28a3e8596b9ab9bd1964c834d75ce930d}}, or self-similarities, and multi-attribute relation modeling. Finally, a support vector regression (SVR) is trained, the ground-truth values are discretized while building the model, and a hybrid Bayesian network structure is learned on continuous and discrete values. Training (with ten-fold cross-validation) and testing are performed on the memorability dataset {{cite:d65e59d58c7a52fb0e3668ae99d675b24eb314fd}} containing 2222 images, evaluated on three metrics: F1-score, Kappa, and accuracy.
The BLIINDS-II algorithm {{cite:6d896b8490c4070e5a3ffaa4f42047d4cbfce60e}} employs the discrete cosine transform (DCT) {{cite:d4ccca32c812614b7c4aa5fbe5b726ba0f306444}}, {{cite:538d8fd73ef2c41e428ac2e6ec425336bdaf0bc0}}; its pipeline is given in Figure REF (b), where local DCT coefficients are computed from the input image and a lowpass downsampled image. Afterwards, a Gaussian model is built to extract model-based features, which are then fed to a Bayesian model that predicts the quality scores. The simple Bayesian probabilistic model requires minimal training {{cite:264d558dfe86feb9e1fb4a4fb3b96ac2700df82c}} and is trained on randomly selected data samples from the LIVE IQA dataset {{cite:14aa92fba8ea959be82233af3593e47c1cf2e13c}} containing 779 images. The algorithm yields 91% accuracy.
A scene-dependent aesthetic model (SDAM) {{cite:9580af41c5cd2d2f54790c5b744138f253671e20}} takes into account both visual content and geo-context by utilizing a transfer learning {{cite:2c6ef626a74d4948f8e88273503f9c76808f7184}} approach, where input images along with their geo-context (online images with contents similar to the input image) are used (see Figure REF (c)). The SDAM learns from two types of images: geo-contextual images that are location-wise similar to online photos, and images of a similar class from the available database (DB). If a sufficient number of contextual images are available, machine learning {{cite:01f58d1ae90a8f70bd349bcb1238443f8495513f}} approaches are applied to assess the input image quality. Retrieved contextual images may show the same location but different objects; GIST identifies such irrelevant images, which are then discarded. Moreover, the SDAM learns with a support vector machine (SVM), which is tested on 9600 geo-tagged and 32k auxiliary dataset images, achieving an accuracy of 81% on popular spots and 73% on images of less prominent locations.
Wang {{formula:7171312d-8068-40a0-880d-3bf4728575f7}} Simoncelli {{cite:ae7efefb8b71b2b7317bfc7cce13d35e098da579}} use a wavelet-domain natural image statistical model, providing a distortion-measure algorithm for communication systems where images are transferred from one location to another. The input image is decomposed into 12 wavelet bands, i.e. three scales and four orientations. Six wavelet bands are randomly selected to extract features and minimize the KLD {{cite:80fe79cd035f6983595258bdc4d0d15c4c83f0c9}}, rendering a quality score that rates images at different distortion levels. The architecture of the proposed deployment scheme is given in Figure REF (d). The framework is tested on a LIVE database containing 489 images, showing 92% accuracy.
Recently, Riaz et al. {{cite:7bf45337db12d38bea2b0eb72664ae93d36ff5ee}} employed generic features, including both global and local features, by extracting SURF features in addition to wavelet and composition features. The method also determines basic photographic features: colour combination, saturation, contrast, smoothness, intensity, hue, and aspect ratios of the input image. The approach has three steps: first, an online database comprising 250 images (downloaded from Photo.net) is assembled; second, human professionals rate the pictures; and third, all the features mentioned above are extracted. An artificial neural network is trained on these features, achieving 83% accuracy.
{{figure:33d6f547-ab42-4c1c-857c-5b4d423f6233}} | m | ade27ae1e097f930bcfe7467efaeb14e |
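A hedged sketch of the SVR-with-ten-fold-cross-validation stage described above, using random stand-ins for the SIFT/GIST/HOG measurement features and the attribute values rather than the actual memorability data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2222, 128))        # one (hypothetical) feature row per image
y = rng.uniform(0.0, 1.0, size=2222)    # attribute values to regress

svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
scores = cross_val_score(svr, X, y, cv=10)  # ten-fold cross-validation, as above
print("mean CV score:", scores.mean())
```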
The exclusion limits were calculated by employing the multi-bin limit setting technique in a program based on
the RooStats package {{cite:9824c81f8cbcd244d54b96521f846877d2da8120}} with the modified frequentist approach, using the profile likelihood
as a test statistic {{cite:dfed3422c11bccabddd572b95f16d0dec53e8844}}, {{cite:0b2b372ec1052213f78375468ff5f6e4aa00bb89}}, {{cite:fc6eb7bc325caa5ad7eeb096850ad1a1f1d112a7}}.
The 90% C.L. excluded region in the two-dimensional plot {{formula:29dd6daf-5eab-4183-9d56-f01744495005}} is shown in Fig. REF .
The regions excluded by the {{formula:0a05677f-1bba-465f-a9fe-595d8fcb802e}} measurements are also shown; the most stringent is from LKB {{cite:172c4cb2d1f3fc3755a09d1cd00102040ed3c2a0}}.
The central value of this measurement has the sign opposite to the possible contribution from a pseudoscalar particle {{formula:aa4dede4-67a7-4e3a-a39d-edc26b99532a}} coupled to electrons.
We used a frequentist approach to calculate the 90% C.L. limit from it.
We note that the limits from the {{formula:402fa8d4-ae51-496b-8f89-0e8a93c1c7a7}} measurements are model-dependent and can be significantly less strict in some scenarios
{{cite:347785ffeecf29fd28fe04e13ca476ac21593f33}}, {{cite:f52131d61b1c23b598c42f9d38a16217e40740e7}}.
{{figure:f30d9dae-d49d-4faf-8a88-c575709119cb}} | r | 225953f89d70f3b3336d300d1237e363 |
We illustrate the performance of our algorithm on both simulated data and a real-world marketing data set. We focus on the generalization error of the minimizer of eqn:loss, rather than on the optimization performance of alg:asyncl2gd,alg:asyncal2sgdplus, since related approaches have been studied in the single-cluster regime {{cite:226796bba427378695e5b7f05e2de965e1354cd9}}, {{cite:b026fdd7b979616481911e6899cff962eb5484cd}}, {{cite:5eace82292ea19d1aed801873b34d4d5943b7dd3}}, {{cite:9430672a33d2f878aff0a1141f20ef40d03b044c}}, {{cite:cee474cc87e308caf7ee05e31d3b91da479fd9fc}}. Our aim is to complement those studies.
| r | d8e462b1a4d2a006898e11e8ed6216dd |
We evaluate the performance of the proposed receivers by Monte Carlo simulations and compare them with several state-of-the-art methods. Consider a MIMO-SCMA system with {{formula:d7eda718-c0d8-4e5d-bb42-36678cc7773b}} antennas, {{formula:ae01d707-3330-4182-afe8-f47143618fa9}} users, {{formula:5e3be212-9c26-4e07-895d-696f857fea52}} nonzero entries in each codeword and {{formula:8b79bcb3-0127-4539-b3ac-7a5df87666cc}} . Therefore, the overloading factor is {{formula:27526b25-3985-4673-a1b5-69ae3e9800af}} . The SCMA codebook is designed according to {{cite:431886b5eba1ce48ca360cbb01f6451dbd437721}} with the indicator matrix {{formula:49d5260d-25f9-4c16-9468-a2516a722cd9}} defined as
{{formula:acb24c33-06d7-405a-9a88-0f5aad808013}}
| r | 042793eba0929fb6019bffb4b51d034f |
where {{formula:b98ba8cc-2699-48e6-8893-b6549e2dee9c}} .
Note also that for {{formula:5090179a-d33a-4f9d-97ef-d34f46002e12}} the classical Azuma-Hoeffding inequality (see, e.g., {{cite:622b618f79de07b3bf2f3cb22b75925263a15592}}) allows one to replace the constant {{formula:f7ca8042-6941-417b-8e9c-b06699d2e43a}} by {{formula:968e8d9b-1fd3-4ac6-a567-5d95654590a9}}.
| r | 408c75dd4a08dfdca0e7ef521122c999 |
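For reference, a standard statement of the inequality invoked here, in the bounded-increments form (the particular constants discussed in the text may differ):

```latex
% Azuma--Hoeffding: for a martingale (M_n) with |M_k - M_{k-1}| <= c_k,
\[
  \mathbb{P}\bigl(|M_n - M_0| \ge t\bigr)
  \;\le\; 2\exp\!\left(-\frac{t^2}{2\sum_{k=1}^{n} c_k^2}\right),
  \qquad t > 0 .
\]
```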
The online nature of our learning algorithm allows it to be used in real-world applications where training data arrive in a streaming fashion, and learning needs to be performed incrementally {{cite:6b2af5cd8f5798c6a13c6b243d018b46d601fe7e}}, {{cite:be8b99175308c30615f1394baabf92e415396538}}. Further extensions may include integration with few-shot learning methods for faster learning {{cite:e58a7b36028d7488b09d03555cbc7a6554033d20}}, {{cite:be8b99175308c30615f1394baabf92e415396538}}.
| d | 13361378cf4695974971e8b333677d9a |
So far, two ways to supersymmetrize the Randall-Sundrum model have been proposed, in {{cite:6821a91a77851d8fb85c1efb6781788eff0d6170}}, {{cite:71e9a9d516ff251c176852d67ef0030531af09f7}} and in {{cite:bb06b2dbb5f1f86867be3d3ebc53911cc7f0b9bc}}, respectively.
The former involves a kinky gauge coupling which has position dependence
like a step function in the extra dimension.
In particular, the product of the step function and its derivative, the delta function, vanishes everywhere.
The latter does not involve the position-dependent gauge coupling,
but it implicitly assumes that the product of a step function
and a delta function takes a finite non-zero value on the branes {{cite:d62a214d6ddd67e968e8e84c3abad7d16b1eecec}}, {{cite:b057ac39b93ac3a3afc23e9ab5b04b6fab3ae33f}}. (We thank Y. Sakamura for pointing out this issue.)
| i | 56c9ed5ef4eb73c6ff542993bcc70739 |
A quantum phase transition occurs between different phases of matter
as a driving parameter, such as magnetic field, chemical
potential, or interaction strength, is varied at zero temperature.
It is driven by quantum fluctuations associated with the Heisenberg
uncertainty principle rather than by thermodynamic fluctuations
{{cite:e953819893978985222fbfe4c21dc0c32b5e69e5}}.
In the critical regime, i.e., the regime near the critical point, the
problem becomes much more difficult because quantum fluctuations and
thermodynamic fluctuations couple strongly with each other.
Novel critical phenomena associated with rich symmetries also
emerge at the quantum phase transition.
For example, the one-dimensional (1D) quantum Ising chain in a transverse
field exhibits {{formula:810736f1-e73a-46e6-8aa7-cc6077c7470d}} symmetry near the critical point {{cite:fb87d5daa4a1eafdbb39ba327ae15f2a0dce4056}}.
| i | 24e20d59c1a1c69b6757cf8e5f038dc1 |
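As a concrete companion to this example, here is a minimal exact-diagonalization sketch of the 1D transverse-field Ising chain; the chain length, couplings, and open boundary conditions are illustrative choices only.

```python
import numpy as np

# H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i (open chain); the quantum critical
# point of this model sits at h/J = 1 in the thermodynamic limit.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_at(op, site, n):
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

def tfim_hamiltonian(n, J=1.0, h=0.5):
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_at(sz, i, n) @ op_at(sz, i + 1, n)
    for i in range(n):
        H -= h * op_at(sx, i, n)
    return H

for h in (0.5, 1.0, 1.5):   # below, near, and above the critical point
    E = np.linalg.eigvalsh(tfim_hamiltonian(8, h=h))
    print(f"h = {h}: gap = {E[1] - E[0]:.4f}")  # gap closes near h = J as n grows
```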
Following the previous white-box source-free UDA {{cite:dd093c002b3b9cc971770eee92cc5049420816b9}} and UDA with source data {{cite:0f1b83765d13e4f825b14ea3dab05eac20604f87}}, we used HGG subjects as the source domain and LGG subjects as the target domain, which have different size and position distributions {{cite:0f1b83765d13e4f825b14ea3dab05eac20604f87}}. We trained {{formula:ab1080d8-29be-491e-ba65-40ac5f893de3}} using the prior work {{cite:dd093c002b3b9cc971770eee92cc5049420816b9}} and did not have access to its network parameters at the adaptation stage. For simplicity, we used the same 2D U-Net backbone for {{formula:103d5c93-9044-4dd1-8aff-8efaf5ee6110}} and {{formula:cb0a6933-6d30-4203-b3ec-9f4ead78989f}} as in {{cite:0f1b83765d13e4f825b14ea3dab05eac20604f87}}. Specifically, we used a U-Net {{cite:0a9634abda22707801a82a681256cc0f98d92c4b}} as our segmentation network with 15 layers, batch normalization, and dropout. The network was trained using Adam as the optimizer with {{formula:d8845f27-08b8-481a-8fa3-d9d3f6bdf3ac}} and {{formula:e5039068-3ca7-43e7-bca9-ffbaa65b9888}}.
| r | 0614ce42a231fa5ac00edb081beac21d |
There are several directions for extending our work. One important area is to move beyond normal sequence models and consider multiple graphical models. The graph fused lasso method for estimation and structure learning in multiple Gaussian graphical models {{cite:fd4dc0787a135c488f8fc394a34269854b98feb6}} has received much attention in recent years. The Horseshoe fusion prior may be utilized in this instance for precision matrix estimation over several groups and structure learning of the respective graphical models. Additionally, our proposed approach may be extended to isotonic regression models, where there is a natural ordering in the means of the observations. We propose to explore these ideas as future work.
| d | 0a4a555068f2fde55bb3171702e26e5a |
In the “standard cosmological”
framework for the early universe (cf. the books {{cite:ca8783faf36afba6401f37986536ff1720990624}}, {{cite:164b75821b147f149f2e2f2de4e0086bbaa16859}} and references therein)
the universe starts with a period of exponential expansion called “inflation”.
Likewise, after the discovery of the accelerating universe {{cite:5f21105bfad70bbc6c434e5695d817b314b4031e}}, {{cite:f8076539aca3c2b6356b3c75231cee319dad5e5d}}, we now have a “standard cosmological”
framework for the late universe, the {{formula:11913a89-c16e-4618-9eeb-b9918c7981cb}} CDM picture {{cite:30f3353b8991370f426d13c2fc531da5ea69c2fa}}, consisting of a cosmological constant, dark matter, and ordinary visible matter,
the Universe being now dominated by the Cosmological Constant, or Dark Energy (DE), and the Dark Matter (DM).
This simple {{formula:e7186fd1-688b-4111-9e49-d5558161a3a3}} CDM picture is now being challenged by the discovery of several cosmological tensions, the {{formula:bc1058b3-9057-4ae6-9933-6427582bee75}} tension {{cite:81b8aa301733a5722b734e458da5618b2c4367ab}} and the {{formula:62e2b873-ed38-435b-af0a-558820951c3a}} -8 tension {{cite:17a47293715b53323089f1c35020f9f7219fb9bc}}.
This suggests that the introduction of a cosmological term to describe the DE and the addition of DM may be too simple a description of the late Universe.
Primordial density perturbations are also generated during the inflationary period
(Ref. {{cite:164b75821b147f149f2e2f2de4e0086bbaa16859}} and references therein). “Inflation”
is followed by particle creation, during which the observed matter and radiation were generated
{{cite:ca8783faf36afba6401f37986536ff1720990624}}, and finally the evolution arrives at the present phase of a slowly
accelerating universe {{cite:5f21105bfad70bbc6c434e5695d817b314b4031e}}, {{cite:f8076539aca3c2b6356b3c75231cee319dad5e5d}}.
In this standard model, however, at least two fundamental questions remain
unanswered:
| i | fdb32f4d954142cf69a93c256be79a27 |
In this paper, we characterise completely a flag Hardy space {{formula:97fa5c9e-72b7-4454-ae41-44b501dc90d3}} on the Heisenberg group {{formula:068b602f-4ff5-45da-a61b-6eff0c91d245}} .
It is a proper subspace of the classical one-parameter Hardy space of Folland and Stein {{cite:620267dcf523fc6fe10dd9168a7299e2ac0a9862}} that was studied by Christ and Geller {{cite:b007b77ba35e4bf972d3cfc08eb2c0c374d49212}}.
Our space is useful in several applications:
| i | 6dae66ec69c8bb78ce3b31c4b245dad5 |
Sampler for meta-learning. Tasks in meta-learning are heterogeneous in some scenarios, which cannot be handled by globally sharing knowledge across data. Therefore, it is crucial to address the task-sampling problem in meta-learning. {{cite:deefb5bee2eee7113bb5345ed1a29803cd2ce955}} assigned many tasks that are randomly sampled from different clusters using their similarities, and only used the most related task cluster for training. This method addresses the task-sampling problem from the perspective of tasks. {{cite:2c5905a81cb1cf9a08da5bb45dd6962d16370e62}} proposed a greedy class-pair-based sampling method, which selects difficult tasks according to the class-pair potentials. This method addresses the sampling problem from the perspective of classes. In our paper, we propose CATA (Section REF ) based on clustering rules regarding data, which takes the perspective of the data.
| d | b95bc77983a87adca46f9fbe19e309f6 |
In this section, we construct many group LCD and group reversible LCD codes using the code construction {{formula:5b018994-60cc-4640-b1c6-8ba7af1040db}} given in Equation (REF ), where {{formula:5a6cbcd7-8821-4a3e-9330-ed1bb93f76b2}} are some of the {{formula:1e764699-8d7b-42ba-9824-88a05ca1fee1}} matrices described in Section . The searches are performed in the software package MAGMA ({{cite:72c53d0d378592bfabadb0a14bc3a81d0e12f466}}). We only tabulate the lengths, dimensions and the largest minimum distance of the codes that we construct. Codes with the largest minimum distance that are optimal (according to {{cite:dd889fa7cc4fe5efbcfbcef0aabe00c39fc79790}}) are written in bold. The generator matrices of these codes and their corresponding weight enumerators can be found at {{cite:705ce5f00bfa31a2415d330999b1f627c639b29d}}.
| r | 0332d1bc06f86a5b91fab06d4721f417 |
The results for the mass-radius relation of neutron stars are discussed here and shown in Fig. REF . The constraints from the observations of massive neutron stars, PSR J1614-2230 {{cite:d4fa27b1db1450d3a3867f4ca63f6fc2b8a38961}}, {{cite:d38a62a2137d339a439416aac4a43d137b91a100}}, {{cite:0909e6e7efba39b1351d31217fa7c534398e2a43}}, {{cite:7443acfa984a70420bb5214fe6f84cb36368808c}} and PSR J0348+0432 {{cite:46fc847238802f4cbb0c1b3d08a5332644b79ce1}}, are also shown as shaded bands. The Neutron star Interior Composition Explorer (NICER) collaboration reported accurate
measurements of the mass and radius of PSR J0030+0451 {{cite:367f0cf82627432e1f62b88299bf990a8121aab7}} in 2019 and of MSP J0740+6620 in 2021 {{cite:8ca1b540bbc988a1247d3668572b53d4774961af}}. For the solid lines without the {{formula:4a5e56e6-655e-4d6f-a628-6508455d746a}} -cut, different coupling parameters {{formula:03ccfefb-6e5d-4e40-bace-95614194cc1a}} have a significant effect on the maximum mass and radius of the neutron star, showing that the {{formula:4d4cf603-5994-4e44-a0b2-98ea4f9b636c}} resonance increases the maximum mass and decreases the radius. As {{formula:40cd2739-4b8b-4d7f-99a0-1a1c9df21478}} increases (1.05{{formula:d324ed24-a94c-4b81-98a4-8c15520db9aa}} 1.1), the maximum mass decreases, but is still greater than in the case of pure hyperons. The dashed lines denote {{formula:ca1ade93-deb2-4e23-980b-93ffe4e7a546}} =0.15; this scheme significantly increases the maximum mass of the neutron star, making it heavier than {{formula:607ae7db-6203-49bc-b3ec-6be8441b6473}} , and also accords with the constraints from gravitational waves and NICER (2021). Note from Fig. REF that {{formula:2c0d7bc1-417a-4c1e-adf3-83ae0f5193df}} does not appear when {{formula:11237a3f-f796-43ad-882c-b390349a5b4d}} =0.15. We list the simultaneous measurements of the radius of MSP J0740+6620 and PSR J0030+0451 from the NICER data and the maximum neutron star mass for various values of {{formula:092df9c0-2d5f-4b49-95e0-b730543ae4ae}} in Table REF .
{{table:26f79000-e1b6-40fa-9653-ef1546416e01}}{{figure:c6ce3d15-b8eb-4082-a23e-0144cf5a54bb}} | r | 4b9264ebd993fde89226e36cedc0cd20 |
In this work, we propose a novel Graph Relation Transformer (GRT) which uses rich, vector-based edge features in addition to node information for graph attention computation in the Transformer. The proposed GRT outperforms the M4C baseline model {{cite:e0b5f8fbba52361aa912b4678691eba1b62721f8}} while also improving the spatial reasoning ability of the model. We also provide qualitative examples of cases where our proposed approach performs better than the M4C baseline model {{cite:e0b5f8fbba52361aa912b4678691eba1b62721f8}}.
| i | 8f164e5ac59a1ca1ab7f4ebe69f1df2d |
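A hedged sketch of the general idea, attention scores built from node pairs plus vector-valued edge features; this is an invented minimal module for illustration, not the actual GRT architecture or its hyperparameters.

```python
import torch
import torch.nn as nn

class EdgeAwareAttention(nn.Module):
    def __init__(self, d_node, d_edge, d_head):
        super().__init__()
        self.q = nn.Linear(d_node, d_head)
        self.k = nn.Linear(d_node, d_head)
        self.v = nn.Linear(d_node, d_head)
        self.e = nn.Linear(d_edge, d_head)   # projects edge features into the score

    def forward(self, x, edge_feats):
        # x: (n, d_node); edge_feats: (n, n, d_edge)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.T / k.shape[-1] ** 0.5                 # node-node term, (n, n)
        # edge term: query i attends to edge feature (i, j)
        scores = scores + (self.e(edge_feats) * q.unsqueeze(1)).sum(-1) / k.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ v

attn = EdgeAwareAttention(d_node=16, d_edge=4, d_head=8)
out = attn(torch.randn(5, 16), torch.randn(5, 5, 4))
print(out.shape)  # torch.Size([5, 8])
```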
Our source, like the majority of photon sources, has a greater than 50% vacuum component. This property makes the generated heralded photons less suitable for quantum computation applications {{cite:fccb22e59e4f22bdd4699e1f32701d0738e29543}}. In the future, this limitation may be alleviated by using detectors with higher quantum efficiency and by enclosing the signal and idler modes in cavities.
At the source's current level of performance, up to {{formula:5be86f58-20c2-4dd0-962a-645b18d00d4d}} kcps, our heralded single photons are in a quantum non-Gaussian (QNG) state, as estimated from our {{formula:d8f03039-b0f8-41c1-8e55-a62bd7539f50}} measurements using the QNG state criterion {{formula:7bf939f4-0860-4db7-bd24-8df0371335a8}} (for {{formula:f3a9c070-1533-4fa5-9149-977b3bd5ba25}} ) {{cite:b999024d47882c011013b5d0e118e4f7e791b7ea}}, {{cite:0bbdbc0c994c89f2f5b76a358271eddaccef95d5}}. Here, {{formula:be7fca66-bd39-4b2b-a9ac-7a8f7a978245}} is the probability, upon a heralding event, of detecting exactly one photon in the signal mode, and {{formula:e1d2b6fa-e757-4ad0-85eb-d7e2d0ca99df}} (for {{formula:458a5de6-aee3-4935-9ba7-d315f728cb62}} kcps) is the probability of detecting two photons. The QNG criterion, which is stricter than the non-classicality condition of {{formula:d4532466-6ef5-499e-bee2-46587b48cf2d}} {{cite:b999024d47882c011013b5d0e118e4f7e791b7ea}}, is a sufficient condition for the security of QKD {{cite:bbc46f9127a35833f1d7f166c0209cffc283b9af}}, thus making our source a potential resource for QKD applications.
| d | fb83f0b5ec444a0673e0a3043fdf1434 |
Nuclear recoil (NR) events from background sources including those from
detector components, surface contaminants, cosmic ray muon-induced
neutrons, atmospheric neutrinos, and so on, are expected to be small
over the duration of a typical SN burst event. For example, the LZ
experiment {{cite:61914d5b711bcb98b809904522d3bbdaa995651f}} estimates a total of {{formula:223cea0d-c943-4b85-88a8-09a77408815a}} NR events in the 6–30 keV NR energy range from all background
sources for a fiducial mass of 5.6 ton of liquid xenon over 1000 live
days of exposure. This works out to {{formula:18da827f-1834-4921-b9b7-6adeb48da553}} NR
events/ton/sec. For a 40 ton active mass class experiment like
DARWIN {{cite:61914d5b711bcb98b809904522d3bbdaa995651f}}, this nominally gives a number of
{{formula:ae58c759-9d2e-4ead-aa5c-19a95355536c}} NR events from background sources over the
duration of {{formula:bbe5fee5-f6c5-4f4b-93ac-81217ecbe7dc}} sec of a typical SN burst, well below the
expected numbers of SN neutrino induced NR events due to both CE{{formula:ace97d3d-f855-4bd4-809c-d1c85c2e087c}} NS and {{formula:92d32b44-d4fa-4f3f-b874-ce5c5f595771}} In processes estimated above.
| d | dccf6d2a73cdc6efc9082d10224097bb |
This comparison of our results with the existing literature {{cite:d34a5f5d87756add5756105e7211a4e2973e9957}}, {{cite:ee668b49fd1e4025aeeeeab72e3d42cc94ecaada}}, {{cite:e036e35428d661293a19f3bbf2b201c658cb7c06}}, {{cite:9f0475d964e4d77347655989147a41b7bf38a3f4}} brings forward the issue of model dependence in EoS constraints obtained from observations, experiments, and calculations that span many orders of magnitude in density.
By design, the GP EoS prior used in {{cite:d34a5f5d87756add5756105e7211a4e2973e9957}} does not allow for as much model freedom as our model-agnostic process due to the strong intra-density correlations it assumes a priori.
This is especially true at high densities.
Another approach to nonparametric inference is to use neural networks, as in {{cite:d9e10bed6b1514060daca1efbd223aee16e4fbba}}, though the model constructed in that study deliberately seeks to closely reproduce the behavior of a handful of tabulated EoSs from the literature.
In this sense, the nonparametric models used in {{cite:d9e10bed6b1514060daca1efbd223aee16e4fbba}} and {{cite:d34a5f5d87756add5756105e7211a4e2973e9957}} are more analogous to the model-informed GP prior from {{cite:e156735eb0fb1de7733a9af1f01bf46ef6001b93}} that makes relatively strong prior assumptions about correlations within the EoS.
Parametric EoS models, such as piecewise polytropes {{cite:f5b978949e0ea3a5b5d98f07786f8a89f5650194}}, the spectral decomposition {{cite:2b525c6d47343572388f6ba02575be2dba0767e9}}, and the speed-of-sound parameterization {{cite:f91ec6f1437028d87e30e151aad80a775f3b58d5}}, {{cite:56876534be8d57a5538b1de3148cf707488f06b3}}, impose even more restrictive assumptions on EoS morphology by virtue of specifying the functional form of the EoS with a finite number of parameters to describe an infinite-dimensional function space.
Examples of such model dependence are given in Fig. 10 of {{cite:d34a5f5d87756add5756105e7211a4e2973e9957}} and the variation between the two models presented in {{cite:e036e35428d661293a19f3bbf2b201c658cb7c06}}.
| d | f6e720247afcc06719436464dd5693ed |
On the other hand, recall that the regularizers {{formula:f46b0d2a-1f78-4c08-ba7d-ce03ef2c9328}} are important in controlling set sizes, so Figure REF examines the marginal results over 100 different pairs of {{formula:d2ecc9a9-b773-4078-aa52-231ae0670c69}} at a fixed {{formula:ae728087-d59a-4402-bd74-d34133085d74}} . The pairs come from the Cartesian product of 10 uniformly spaced {{formula:66432838-c8b8-4df2-bc41-39d8fadae47f}} (resp. {{formula:06b15864-d8a7-4ba0-ade1-3d0d1a32e9b0}} ) over {{formula:2fb3e94d-8e3c-4224-b435-62b9822a5782}} (resp. {{formula:61d68771-e6bb-461e-9f68-d8dc837666ee}} ). On the first two datasets, in Figures REF and REF , SRAPS sometimes yields much more conservative coverage than ERAPS by producing prediction sets that contain almost all labels. Such sets can be too big to be useful in practice. In contrast, ERAPS typically produces sets of size between 2 and 5 that are more precise. The results are also very stable over different regularization parameters, thus relying less on parameter tuning. On the last dataset, in Figure REF , both methods have nearly identical results. We suspect this happens because the dataset resembles typical image classification datasets, in which data are less dependent, so existing CP methods already perform well {{cite:f1b68f94a03c610acf42c313fe7fe185627a872c}}, {{cite:61bdc8d0e6ee5d6db63baff4e005e9044926a06e}}. Overall, the effect of different regularizers on the empirical coverage of ERAPS seems minimal, while for a fixed {{formula:ccfd5322-9e32-45c2-b1b1-ee0aaa41883a}} , a larger {{formula:1654a16c-25fa-4ec1-9d13-a08aa81e16d1}} corresponds to weaker regularization of set sizes, so sets tend to grow as {{formula:3189cb42-6920-4762-ad10-d83de7961790}} increases. Hence, we suggest picking {{formula:58100c36-2322-43ed-89df-8d39d29969a6}} relatively small (e.g., 1 or 2) so that the sets tend to be smaller; {{formula:a995bcb9-bf89-4bef-8b8a-c19e24b2f54a}} can be set to 1. Note that {{cite:f1b68f94a03c610acf42c313fe7fe185627a872c}} provides guidance on computing these parameters to optimize set sizes, but doing so requires a separate set of tuning data, which may not be feasible when data are scarce.
{{table:a8fbb48a-6af9-45d3-b246-32a91f1a1c5d}} | r | e9b711f23e076bc2011d54a83d4df337 |
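For intuition, a sketch of the regularized set construction used by RAPS-style methods; the threshold tau would come from conformal calibration (and practical implementations add randomization), both omitted here, and the probability vector is invented.

```python
import numpy as np

# Labels are added in decreasing-probability order, and (k_reg, lam) penalize
# sets that grow beyond k_reg labels.
def raps_set(probs, tau, k_reg=1, lam=0.1):
    order = np.argsort(probs)[::-1]                      # most likely label first
    cum = np.cumsum(probs[order])
    ranks = np.arange(1, len(probs) + 1)
    scores = cum + lam * np.maximum(ranks - k_reg, 0)    # regularized running score
    cutoff = np.searchsorted(scores, tau) + 1            # smallest set reaching tau
    return order[:min(cutoff, len(probs))]

probs = np.array([0.55, 0.25, 0.12, 0.05, 0.03])
print(raps_set(probs, tau=0.9))                          # -> [0 1]
```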
We note that all of our results hold for simple MH algorithms as well as refinements such as
Hamiltonian Monte Carlo {{cite:e2199630601686f232fbbefea478e409a3492967}}, {{cite:c25834304b198e509e794bc6874f7e88db9905b8}}, {{cite:dd27f6a7ac56c83cf2e63f0128d68e188e7c374f}}, the Metropolis-adjusted Langevin
algorithm {{cite:ef1e1184a1bcde998372cbe020906fc8ff48ee18}}, and particle MCMC {{cite:160144027474c04bedeae824a1dbeeea7d7c7a00}}.
They also hold on both continuous and discrete state spaces, `lazy' implementations where the
proposal distribution is not absolutely continuous with respect to the base measure, and methods
like Barker's algorithm {{cite:da087dbe4383593d406e7cc0809dce59053a473d}} where the acceptance rate function differs from the usual
MH form. Thus, our characterization of kernel couplings and maximal kernel couplings applies to most
Markov chains which involve a sequence of proposal and acceptance steps.
| i | 082a790cdd3464a368ca2cc9c95e5189 |
Through diverse experimentation, Kolesnikov et al. {{cite:3e1744937cbf59b2e12f67bc8cef68d4d0ee9d18}} showed that performance is not consistent across architectures and pretext tasks: no single architecture performs best for every pretext task, nor does a single pretext task perform best across architectures. One possible future direction is to develop a pretext task that is practically invariant to the downstream task, or in other words, one that learns domain-invariant representations that can easily be transferred to different vision tasks.
| d | df9951f66edad7b5de64f166855e8286 |
On the other hand, a number of astronomical observations have confirmed that the late-time universe is undergoing an accelerated expansion, such as observations of type Ia supernovae {{cite:41a0c43cd0dd34ac502e277c84d2382060e34f55}}, {{cite:b454d2ec613d06a20ed782477ef468a0056d2df3}}, {{cite:2f2df911e06711e5c1ec92b1ebc85d2f0f071443}}, {{cite:10d84ca3b6a87ff08e12d6a6664bcc4e4ccbdffe}}, {{cite:c83a773ef70a9395c60c16efa5da77353b6dccf7}}, {{cite:deca7c1b79d43ddb302f1feb08f55e3119c8ae11}}. In order to explain this observed phenomenon, an unknown energy component, dubbed dark energy, must be introduced in the framework of general relativity, looking only at the matter part of Einstein's equations {{cite:fc5894bf62695d0aa6a90c16e484187c9c8b201d}}, {{cite:82e9c80fa1c5d95039f8aeac1dfb805366585cf8}}, {{cite:fe0aa905a6889cc94c291f30eda4dd8b27285a20}}. The simplest candidate for the dark energy is the cosmological constant, which is consistent with most of the current astronomical observations; however, it suffers from the cosmological constant problem {{cite:6ff3c89a6ad2f2bda3a4038124cd631799c2ab7d}}, {{cite:2c2eb448333d3805c761a8e83c816b720ba58c15}}, {{cite:606398b3cf906b1edd2061169d96b0e0af696861}} and maybe the age problem {{cite:54a3f5e1085111cacd1a82f717c6370455c288c5}} as well. It is thus natural to consider other, more complicated cases. A dynamical scalar field can also serve as the dark energy component, such as quintessence, phantom, {{formula:04dc7d31-bbc7-470f-997c-b7d58bf88832}} -essence, etc. Quintessence is the simplest scalar field dark energy model without theoretical problems like Laplacian instabilities or ghosts {{cite:57dfc80d825aa381d24592458d21267702800521}}, {{cite:ab3132abe2d061f6f27de6db63eed24f50602f33}}, {{cite:ae0ab461a415c334f6eebdbdd59ca41947540b95}}, {{cite:f0e9534fe1dfa747578636514c0ed7215f38fdc4}}, {{cite:36cd0750d2af89983abe59b488716652e3ad2ba9}}. One Schwarzschild-like solution related to the quintessence model was found in {{cite:087a1467337d2861a9821bc17feaa48670f3c4d8}} by Kiselev. The solution describes a spherically symmetric and static exterior spacetime surrounded by a quintessence field. To derive this solution, Kiselev used the quintessence stress-energy tensor with the additivity and linearity conditions {{formula:c12a16c6-884a-45eb-9133-e99fb39447bb}} and {{formula:29435bd1-68b1-4e3e-80ee-644f37c7b55b}} . Several notable papers in the literature have focused on the Kiselev (KSL) black hole, such as the study of its null and time-like geodesic structure {{cite:e625df874296801bd58d8e17db33a48fd12bfae3}}, {{cite:c40e954af21280b2e661e08caab671220e28b314}}, the study of the geodesic structure of the Schwarzschild-anti de Sitter black hole with quintessence {{cite:4e38e5120636da72cff6e6feafd659f5699626c4}}, the study of accretion onto the Reissner-Nordström-anti-de Sitter black hole surrounded by quintessence {{cite:feaeb476186ae9a0b59c79981051d70c7ccc2801}}, the study of gravitational lensing due to this Schwarzschild-like black hole {{cite:9989bcf6f1168c8607daa5df4e54b4889a625a24}}, etc. {{cite:4828b8d17a335eb0ee0ea24ff15551a5e15fbf6c}}, {{cite:4e38e5120636da72cff6e6feafd659f5699626c4}}, {{cite:985a2c48109d69abb6ce0c83fa7ae45f5535b493}}, {{cite:052f17c88a0f4de51f72ac37c4f0f037b0bcfcae}}.
In 2019, Shahjalal considered the KS and KSL black holes simultaneously and found the metric of the quantum-corrected Schwarzschild black hole surrounded by quintessence (which we call the Kazakov-Solodukhin-Kiselev (KSK) black hole); its thermodynamic properties, in addition to its area and entropy quantization, were then investigated {{cite:648c05129425ec6577443ad8fb378790e7870b68}}, {{cite:3c0f298b988c33a57321d67d32e7e94152e3ea09}}. Recently, the geodesic structure and Hawking radiation as a tunneling process for such a black hole were studied in Refs. {{cite:84ac2f6bba57743c5bebc531e377562ae0b7f408}}, {{cite:424d9c445a5ecd8b505be62a6ecd69d3dfd22815}}. Also, the effects of quantum corrections on the criticality and efficiency of such black holes were studied in Ref. {{cite:5f3930deb1bab6a15994bea052d6dc0c94c4a318}}. In Ref. {{cite:052f17c88a0f4de51f72ac37c4f0f037b0bcfcae}}, the authors considered accretion onto a SCH black hole with a static exterior spacetime surrounded by a quintessence field (i.e., onto the KSL black hole).
| i | f271b6f118ff3307b8584a901be3f471 |
Domain Generalization by Solving Jigsaw Puzzles {{cite:307889b793246bf6c7350c1ea8ae763e31f0b24a}}.
This paper studies two settings. The first is the robustness setting, exactly the same as in {{cite:c414fda2bb29bb30bbf5c0f6598ffa0e524f16e5}}, except that evaluation is done on MNIST{{formula:4885c58c-0516-4654-a16f-e91cac2d1262}} MNIST-M and MNIST{{formula:4e4ae216-f9dd-439a-bbda-9ab99d1936ea}} SVHN.
Their baseline is also a method from the robustness community that trains on adversarial examples and uses no target data.
Again, because their problem setting is very challenging, the accuracy is low for both their proposed method and the baseline.
The second setting is called domain generalization, which is very similar to meta-learning.
The goal is to perform well simultaneously on multiple distributions, all labeled.
Evaluation is done using the mean accuracy on all the domains.
Besides the name, there is little similarity between the setting of unsupervised domain adaptation and domain generalization, which has no unsupervised component.
| d | 4587ddfe569d618c2b27ee0e65c7bb67 |
In this section, we present results for object detection with the EfficientDet {{cite:c30b1864ac2e6a08b6511a89b9f3af07bfb8bf50}} model. In particular, we focused on the D3 variant of the model as it has shown competitive results on other object detection tasks. It has been demonstrated that EfficientDet models can reach high performance as shown on the COCO dataset {{cite:c30b1864ac2e6a08b6511a89b9f3af07bfb8bf50}} and in some cases can be an order of magnitude faster to train than other object detection models.
| r | cc6ea27d0bf505c4049a53e550de5840 |
Intermediate Pre-Training. The second group contains models trained on specific content types and domains, i.e., SciBERT {{cite:9812c2ec09f0f3c39c289dd8e9112c8ecc792d30}}, BERTweet {{cite:057e66786ae4f73c6210ba09b0f1f828dae5c437}}, and BioClinicalBERT {{cite:c4562d18941db4424d890996c7d2cdb90a6b643f}}. For example, SciBERT adapts BERT for scientific articles. These models show the effect of intermediate pre-training on specific sources compared to general-purpose training (e.g., whether BERTweet is superior to BERT for misinformation on Twitter). Moreover, we compare the models in this group to language models optimized using intermediate pre-training on COVID-19 data (third group).
| m | 722b1045bd5fde83b8ae5bfac73407f9 |
We consider various other attribution methods for our analysis, i.e., Vanilla Gradients, Integrated Gradients {{cite:3edc9a2b1223e0dff8727f1150f3eec5fcb45fa4}}, Vanilla Attention, Attention * X (or inputs), and Explainable Attention {{cite:71b615a29ec8454ad1fe0357004384408befce1e}}. We do not consider techniques such as LIME {{cite:d5586414ae2e2f2acaafbc497cfc90de6ddd34e1}}, LRP {{cite:f79ae5b6ee92919aa10ffceb696a25fdd9242517}}, and DeepLIFT {{cite:b3e0eca812943149c215cf650b99343ee1bc301b}} as they are relatively more computationally expensive at inference time. As the style-masking policy, we use the "attribution-surplus" policy to determine the final style-masked sequences.
| m | aafd2c882191137de7b6fd96d7778570 |
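As one concrete example among the methods listed, a minimal sketch of Integrated Gradients via a Riemann-sum approximation of the path integral; the toy model, zero baseline, and step count are illustrative choices.

```python
import torch

# attribution = (x - x0) * average gradient of f along the straight path x0 -> x
def integrated_gradients(f, x, baseline=None, steps=64):
    x0 = torch.zeros_like(x) if baseline is None else baseline
    grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        xi = (x0 + alpha * (x - x0)).requires_grad_(True)
        f(xi).sum().backward()
        grads += xi.grad / steps          # Riemann-sum approximation
    return (x - x0) * grads

f = lambda x: (x ** 2).sum(-1)            # toy differentiable model
x = torch.tensor([1.0, -2.0, 3.0])
print(integrated_gradients(f, x))         # approx. x**2, since grad f = 2x on the path
```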
{{cite:63b34ed7571b3af3ed2787a8247d1c0989287777}} argued that the magnetic carpet on the photosphere continuously pumps out the 2D structures along with a minority population of Alfvén waves (i.e., slab turbulence) above the photosphere. The 2D structures advect through the chromosphere, across the transition region, and into the solar corona. Since advected 2D structures do not reflect at the transition region, there should be no abrupt and significant decrease in the 2D flux at the transition region, as is expected of outward propagating Alfvén waves {{cite:63b34ed7571b3af3ed2787a8247d1c0989287777}}. {{cite:63b34ed7571b3af3ed2787a8247d1c0989287777}} argued that the mechanism for heating the solar corona is the same for fast and slow solar wind flow. Recall that the slow solar wind studied in this letter emerges near the equatorial region, and may be due to the liberation of hot loop material into open field regions by interchange reconnection above the photosphere {{cite:6397c793e9a08c077c8fd51c6d37854dd6db0a34}}. The slow solar wind in the equatorial region accelerates rapidly within 4 R{{formula:7af88382-4926-444a-beef-b5c677488bf3}} , similar to the fast solar wind in open field regions {{cite:0494ec86b50677b568d14a0ee9d807308ac5b14e}}, {{cite:d6efef527b8ecbb05e3f7e07c5446fd053a26339}}. {{cite:9ac07aea3857d9963934fd4b4bc60930273cfe82}} found observationally that the solar wind accelerates rapidly within 2 – 4 R{{formula:b925c1a5-2618-4176-96c6-7a9955cc9a56}} , consistent with Figure 1 (left). The Alfvén velocity increases initially to a peak value of {{formula:8ea6861a-7fa9-4ff3-b033-4e7f07b2ca9c}} kms{{formula:6eccc5b3-379c-4c38-8a66-6f57676f59cf}} , decreases gradually, forming the Alfvén surface at {{formula:78abcf06-d5e5-4b89-b623-04d9f0854cdd}} R{{formula:c5791e0f-15fd-4061-9cdb-bb4ebab0b37e}} , and is similar to the in situ PSP-observed Alfvén speed (blue full triangle).
| r | 767b08c22f58d3d2afe5609e34bdc869 |
Nearest neighbor (NN) search is an important computational primitive for structural
analysis of data and other query retrieval purposes. NN search is very useful for dealing
with massive datasets, but it suffers from the "curse of dimensionality" {{cite:8d88595c7ab767ed5f56156923ce622a68612915}}, {{cite:c9c09fde6ea698b51bf162cdbb04cd14b967a657}}.
However, a recent surge of results shows that it can also be very efficient for high-dimensional data,
provided a suitable space-partitioning data structure is used, such as a kd-tree, quad-tree, R-tree, metric tree,
or locality-sensitive hashing {{cite:350a2b12309d6930829dda942da5b989cfab2d9f}}, {{cite:db39cb3f9d592295c384a8584191cd6e83f82c08}}, {{cite:015c5cac7722c690446fc754d844c6bdded70a25}}, {{cite:f3b3fdc8588aad1ad22a0f71d24e1e63ae7198cd}}.
Some of these data structures also support approximate
nearest neighbor search, which hardly degrades the results while saving a great deal of
computation time. In the NN-search problem, the goal is to pre-process a set of data points so
that later, given a query point, one can efficiently find the data point nearest to the
query point in the metric space under consideration. NN search has many applications
in data processing and analysis, for instance information retrieval, searching image databases, finding duplicate pages, compression, and many others.
To represent the objects and the similarity measures, one often uses geometric notions of nearness
{{cite:3e6ffc62f3b0682ddada4c9e0e3a07d40f291546}}, {{cite:5b00133f5d1ac00915a92c0c130d0b72610ca2c4}}.
| i | ebb02e533d9e2771ad1758aa628d0eb4 |
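A small sketch of exact and approximate NN queries with a kd-tree, one of the space-partitioning structures listed above; the data, dimensionality, and eps value are arbitrary choices for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.normal(size=(100_000, 8))     # pre-processed data points
tree = cKDTree(data)                     # space-partitioning index

query = rng.normal(size=(1, 8))
dist_exact, idx_exact = tree.query(query, k=1)
# eps > 0 relaxes the search: a (1+eps)-approximate NN, trading accuracy for speed.
dist_approx, idx_approx = tree.query(query, k=1, eps=0.5)
print(idx_exact[0], idx_approx[0], dist_exact[0], dist_approx[0])
```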
While adding robustness to machine learning models often leads to better generalization errors on perturbed data, the accuracy on clean data often decreases compared to non-robust models. This trade-off between robustness and accuracy has been extensively studied in the ML literature {{cite:0aca7052012f01d7f8eb25f36154536f9f4a1d94}}, {{cite:d082b69f783d2a9460fa6e279e4ecc7a0f3e0f9c}}, {{cite:ee5edb66f854db26dcfc4ae0c085dbbb52f84723}}, {{cite:812131b4a9f8a419d5e493daae262f3e3154a29a}}, and it has been argued that robust models need larger training sets {{cite:c5741ac126a043ade8c1451d0ae3effb969e4182}}. Furthermore, the size of the uncertainty set used during training influences this trade-off, and since we do not know in advance how large future perturbations will be, it is not well defined what performance a robust model should achieve. While increasing the size of the uncertainty set can lead to better accuracy on strongly perturbed data, the accuracy can decrease on slightly perturbed or clean data. Hence, finding the right size of the uncertainty sets is a difficult task which has to be solved by the user.
| i | 56ef96bbb7c0cb33c2aa8a06a8c8a4ff |
2. The aspect of quantum information. According to "{{formula:0a7c1b71-f1f9-4929-aef9-5e2a36277389}} " {{cite:19f162e6f54473990ebb31469877cd222f4cff4b}}, the entanglement between the DOF of two subsystems produces a connected geometry, a wormhole that bridges them to each other. However, the island formula only provides the evolution of Page curves; it does not explain how the quantum information escapes from the black hole into the Hawking radiation. We can only treat the island formula as a black-box operation.
| d | a9d6df2535ae024562cb7ae1d94edc8f |
Recently, reinforcement learning (RL) algorithms have outperformed human experts in many fields that require decision-making {{cite:f19554fabd21ce7804cbec37719bffc1933ac63b}}, {{cite:606bf3ff3925dbc8b64ae347c5010e5fcc7e7b9f}}. In contrast to traditional control methods, RL can complete some challenging tasks in learning dexterous in-hand manipulation {{cite:e45a9bd3c787b2fba26d74581445b4698a5ff6a1}}, {{cite:2837450d83f74965a7c7de7a61e42feaa2d6a713}}, {{cite:39427b646437c061b9b897efa1f1745babb23bcb}}. However, manipulation that produces changes to the object is still difficult {{cite:41d3f993b7b1de15f6e8b3feb951c7341aa32a36}}. Even more difficult is generalization across tasks: previous work has achieved simple tasks such as throwing {{cite:9cce4a0057b25d086ded11f9d37e222e290220ac}}, sliding {{cite:a1cc294bfc38de477776fa8e8fc5010fb9298662}}, poking {{cite:9604218730ff5fe1cd0fed285bbd14fac543cb0e}}, pivoting {{cite:567a44dd975e06e359ec5d30d0e2da927b5f6159}}, and pushing {{cite:61820a53e5a6f777edcca998b53898f52528985e}}, but it remains difficult to perform well in unstructured or contact-rich environments, which require the ability to combine and generalize complex manipulation skills. In a nutshell, reaching human-level sophistication in hand dexterity and bimanual coordination remains an open challenge for modern robotics researchers.
| i | 2de0cff3d085af7ee9279e66481e0aca |
This section discusses Model-Agnostic methods, one of the two taxonomies of the Method-Generability approach. Model-Agnostic methods do not depend on the specific model used; instead, they separate the explanation from the machine learning model {{cite:deba91a6e497cde4b87e1ff96827e9af27ac35e3}}. This technique consists of adding a new layer between the black box and the human for interpretability purposes. These methods cannot access the internal parameters of the model; they analyze the model using a sample of inputs and the corresponding outputs to assess how it works internally.
| m | e1284bfc948f23de4c6b67c3e4b6130c |
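Since model-agnostic methods interact with the model only through input-output pairs, a permutation-importance probe is a compact illustration of the idea; this is a generic sketch (the function names and the choice of metric are our own assumptions), not one of the cited methods.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic probe: query the black box `predict` only through
    inputs and outputs, never through its internal parameters."""
    rng = np.random.default_rng(seed)
    base = metric(y, predict(X))                 # score on untouched data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's signal
            drops.append(base - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)           # larger drop => more important
    return importances
```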
Note that we include in the definition that the derivatives are uniformly bounded.
This is not the same as the Whitney topology on spaces of {{formula:b63e704a-5b05-4940-b705-6294f2cc1dd0}} times differentiable
functions in a {{formula:2c83806e-168d-4ca9-a5da-c711993407ab}} -compact manifold {{cite:bbc280b8ffb30266ac37ed4bf1b32af6f592e827}}, which is a Fréchet
topology. Even more general definitions appear in {{cite:b2b49814943f5875d345f2967a09c41e57e1bd76}}.
| r | 585a88268667d02ca386aa8e04a61425 |
All of the circuits in this study are trained in parameter spaces {{formula:d81a89dd-93b5-4af0-aa5e-01915b835394}} with {{formula:0d9b178d-7ab5-49b6-825b-0b1b79c60d08}}. Contour plots (two-dimensional cuts) are defined by projecting points in the {{formula:7f7854ac-94fc-4f89-b22f-f01c281cd37e}} -dimensional loss landscape onto a 2-dimensional plane defined by ({{formula:4d3b7fc6-248c-46aa-8923-61453e1791b1}}, {{formula:444673f3-181a-4633-a623-dec69bfdc067}}, {{formula:3f243821-c782-4b19-8f55-8b160b8f6e33}}). One point ({{formula:af4a59d7-0031-4924-a2f4-489548794519}}) is chosen to define the origin; the two orthogonal directions in the plane are then defined as {{formula:362d6846-ec29-4a33-9f22-6eed8e36c3b1}} and {{formula:857bebee-dc4c-448e-bc6e-cccc57080c4d}}, where the second direction (orthogonal to {{formula:2bfd2618-31c5-4efd-bf40-c7b8039cc96d}}) is found using the Gram-Schmidt method {{cite:7b4cbaf52609acc8e74715c5a0a77deb922914fa}} and fixed such that {{formula:cbaf5832-9883-4bc4-9a03-291c26924858}} sits on the {{formula:3abac857-172f-4489-be1f-0ac32ac9ca9f}} axis. Both directions are normalized, and numpy.meshgrid is used to generate a grid of points in this plane on which the MSE loss is evaluated.
| m | ea2df7573dde1e0c78189458f48f7775 |
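The construction above translates almost directly into NumPy; the sketch below shows the Gram-Schmidt step and the meshgrid evaluation (the grid extent, resolution, and the generic `loss_fn` callable are assumed placeholders).

```python
import numpy as np

def plane_directions(theta0, theta1, theta2):
    """Build two orthonormal in-plane directions from three points in
    parameter space, using one Gram-Schmidt step."""
    d1 = theta1 - theta0
    d2 = theta2 - theta0
    d2 = d2 - (d2 @ d1) / (d1 @ d1) * d1   # remove the d1 component
    return d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)

def landscape_cut(loss_fn, theta0, d1, d2, extent=1.0, n=50):
    """Evaluate the loss on a grid in the plane theta0 + a*d1 + b*d2."""
    a, b = np.meshgrid(np.linspace(-extent, extent, n),
                       np.linspace(-extent, extent, n))
    Z = np.empty_like(a)
    for i in range(n):
        for j in range(n):
            Z[i, j] = loss_fn(theta0 + a[i, j] * d1 + b[i, j] * d2)
    return a, b, Z   # feed to a contour plotter, e.g. plt.contourf(a, b, Z)
```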
In this section, we explain the DynamicTriad {{cite:b8b2419b8c0d43f7477caf3caa83c7570601fed7}} method, which does not fit into the above categories.
| m | 563316e0fdbbc5c794274b81660bff85 |
Now, we seek a condition on the cocycles to know when two {{formula:547d6e62-e4e2-43e7-883b-d4feefdcf512}} -central extensions are isomorphic.
Let us fix a basis {{formula:d26abbee-7e4c-44ec-94f5-c42278fbab07}} of {{formula:88e165fe-e3e3-4923-bfdc-cf88b434d403}} , and {{formula:55a3ad63-3726-4ca7-9d29-27bec6cc684d}} . Then {{formula:dff597a7-2ced-473d-b0d3-cc5d5f60561c}} can be uniquely
written as {{formula:2f77dfe8-ca53-4299-b0a3-6c3a8cd5a79f}} , where {{formula:72703b5d-c009-4232-b901-1ac4ffdc90fe}} . It holds that {{formula:56e5eda6-8bd6-4009-b2d2-571e867f5d93}} if and only if all
{{formula:b5ec75c9-d8ec-4a12-86ef-b99d1af37bcf}} , for all {{formula:40134bca-503a-4b2c-aee8-e4b74234d959}} ; and it also holds that {{formula:9d52d97f-b007-40af-8d1b-98fc10dffe87}} .
Furthermore, if {{formula:5d74ff98-3cba-4e9a-a539-545360823c5d}} , then {{formula:322af154-0c86-4e84-9736-a56fd0df7802}} has an
annihilator component if and only if {{formula:c3c70758-2490-4ab9-8691-0d38104c7066}} are linearly
dependent in {{formula:176bfe66-f4ac-41e2-989b-8992493b6bcb}} (see {{cite:6f725b36a8cf2d647c4c6b4fc55322b062ccacc7}}),
where {{formula:4b451f7c-e464-4fec-b2db-c50ea916a868}} is the image of {{formula:09759dec-5cbb-4141-a211-4dd3c13564df}} in {{formula:2cd6450e-b714-4827-ae9e-17f47af80812}}.
| m | 9e4af2a527ced3ef12cffc246cbbeb36 |
With the proliferation of large-scale pre-training data {{cite:97ec63f3c7d0c34eb56ab6a48fe2c2958bbef350}}, {{cite:66f2465e4d2d121302cb8fc7d9c2185851566550}}, {{cite:fc41ad17cd5d5eae97fa1775949f4c2877381779}}, model size in neural networks has been increased correspondingly in order to reach a certain learning capacity. On the other hand, the rapid increase in model size has also spurred interest in developing efficient transfer learning methods {{cite:bf0264964972cc9f36711b8b54d6f8cda2bbb9c8}}, {{cite:65acdfd3cde6f581e9356dfa28a3f80ede26b1f4}}.
| d | 86a27c3699644bc826822b072ae39025 |
While our approach learns network architectures of increasing complexity by adding connections, network pruning approaches find new architectures by removing them. It is also possible to learn a pruned network capable of performing additional tasks without learning weights {{cite:6dcbaaf792d1af061a2dfa8a3749c7df86c063ac}}. A work concurrent to ours {{cite:f5663ca5095533c311f295867938c940fd57c8d7}} learns a supermask such that the sub-network pruned with this mask performs well at image recognition even with randomly initialized weights; it is interesting that their approach achieves a similar range of performance on MNIST to ours. While our search method is based on evolution, future work may extend the approach by incorporating recent ideas that formulate architecture search in a differentiable manner {{cite:2bc8955c1258ae7dd97aa8fd2437883cef2f2e6e}} to make the search more efficient.
| d | d4a550196a818ac895f1f764142018f6 |
Spontaneous polarization can occur in the AB{{formula:633fbeec-9730-486b-a1c8-b87dee5621db}} -stacked few-layer and bulk {{formula:787a295d-77ad-49d4-b1df-5dabfcccc96f}} -GeSe because their atomic structures are noncentrosymmetric.
Ferroelectricity has already been confirmed in group-IV monochalcogenides such as GeSe in various other atomic structures {{cite:f096dd8696b0255cae52cfd82abb2a6bbcb39cfb}}, {{cite:0b5f6bf94662189d39926f088ab739d37fab607b}}, {{cite:5488068ded0b4896e0189bbe808fdcfba31034d0}}, {{cite:fd1648076346348ac75b77640d20b28afd842107}}.
We applied the Berry phase method {{cite:5183af9b53f598d39854d0b588f5b32151869422}} to the AB{{formula:7a62e527-b6b4-43e0-8010-1c4fa50108cd}} -stacked bulk {{formula:52c9a73c-bccb-4948-87f3-62f5817bbf42}} -GeSe and we simply integrated the charge density times the position vector along the out-of-plane direction for the AB{{formula:2a39fd9f-ae95-487d-87b7-d0b5a09d4b80}} -stacked bilayer {{formula:a8653112-1496-4d66-bf41-b7a0a1f735e7}} -GeSe.
Spontaneous polarization does not exist in the in-plane direction due to the presence of multiple mirror planes containing the {{formula:fd763d3e-174a-4d8c-88d9-5c3451691424}} -axis.
Along the {{formula:a330ce2e-e5ee-40eb-8c13-b01bb7badec8}} -axis, spontaneous polarization of
0.168 Debye per unit-cell exists for the AB{{formula:bd744e93-c443-4b84-a04a-f1bd1bdc2e8b}} -stacked bulk and
0.0173 Debye per unit-cell for the AB{{formula:7b5de5f8-55de-432e-ab58-bfbc7f88bfbe}} -stacked bilayer.
These values correspond to
polarization per volume of 0.296 {{formula:5d61ae79-1327-4835-b85a-f5e6dfc23864}} C cm{{formula:6da06ce7-f7f5-46a8-abe7-40810c93498e}} for the AB{{formula:b2f53652-45cc-4978-b96b-c5e49514ce8e}} -stacked bulk {{formula:6f9d3b1e-8266-48fa-a42c-035c8fe01623}} -GeSe
and polarization per area of 4.71{{formula:b24df7cf-02da-45ef-8866-6cbc0cdd0800}} 10{{formula:2a0b2249-93a7-4de0-b6f6-71fec6315e3b}} C m{{formula:f9bb346e-6def-4986-b375-c0501921b032}} for the AB{{formula:c4a958f4-6911-4c3a-b67d-5628f2def193}} -stacked bilayer {{formula:07ae3e08-0706-46ab-ab10-07bfb0802dae}} -GeSe.
These values are
comparable to polarization values in
{{formula:29f2f825-f261-41d9-ae69-ac747e3779d9}} -GeSe with SLHL {{cite:f096dd8696b0255cae52cfd82abb2a6bbcb39cfb}},
traditional perovskite ferroelectric materials,
two-dimensional materials with a hexagonal buckling structure (SiGe, SiSn, GeSn, AlSb, GaP, InP, etc.) {{cite:1d1f25a952b5a09f33b021adb6ba2640750e596f}},
and graphitic binary compound bilayers (BN, AlN, ZnO, MoS{{formula:3298694c-759a-408f-a54c-0d6b60479fd0}} , GaSe, etc.) {{cite:944f8c74be72ea5e4e2aab35dd61ba3fe9f1ddb5}}.
It is very interesting that {{formula:0e6969fb-e584-492a-82f7-86757ca40ccf}} -GeSe either exhibits or lacks spontaneous polarization depending on how the quadruple layers are stacked, even though each constituent quadruple layer has no polarization when isolated.
This feature can be advantageous for devices with ferroelectric/nonferroelectric junctions {{cite:2439079d6c85a536c1d4de3d7a185d63713c90fb}}, {{cite:5f443840c941a65e6b4c56dae2b798848bf7e650}}, {{cite:ebe481f9e254b79ce5534d9cb49b45cb0aa1504d}}, {{cite:67c895beaa6a7f89a907af974cdf1facd47da4c6}}, {{cite:7215c758648ebbb91f8acba9ca8c113a081eff8b}}.
| r | 9bd2ea3af2af47503ba11364c7e289cf |
For the proof, we leverage the algebraic characterisation of hitting times {{cite:9b1604d9feab4216ac49feda0eb09fe06f384710}}, as opposed to the standard analytic characterisations. Indeed, the hitting time matrix can be obtained as the solution to a certain matrix equation. Our key observation is that the matrices in this equation are Generalised Laplacians, and that the matrix operations used in it satisfy good closure properties vis-à-vis Generalised Laplacians. Hence, the hitting time matrix must necessarily be a Generalised Laplacian. In fact, the Generalised Laplacian framework is powerful enough to capture any random walk parameter that admits such an equational characterisation. Consequently, the closely related commute distance {{formula:116920a8-23a1-4be6-ba32-9efd46d2e1c8}} can be understood as a bilinear form induced by a Generalised Laplacian,
{{formula:3d9e43be-3669-42f0-83cb-2d37b7561422}}
| r | 45b447eca6b4e7eaceba2b3796e8605c |
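As a concrete illustration of the commute distance as a Laplacian-induced bilinear form, the sketch below uses the classical identity C(u,v) = vol(G) (e_u - e_v)^T L^+ (e_u - e_v) for the ordinary combinatorial Laplacian of an undirected graph; this well-known special case is our own illustration, not the Generalised Laplacian machinery of the proof.

```python
import numpy as np

def commute_distances(A):
    """All-pairs commute distances of an undirected graph with
    adjacency matrix A, via the Laplacian pseudoinverse."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                  # combinatorial Laplacian
    Lp = np.linalg.pinv(L)              # Moore-Penrose pseudoinverse
    vol = d.sum()                       # total degree vol(G)
    diag = np.diag(Lp)
    # C[u, v] = vol * (e_u - e_v)^T L^+ (e_u - e_v)
    return vol * (diag[:, None] + diag[None, :] - 2 * Lp)
```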
Musicnn shows competitive results on MTAT. However, other models (sample-level + SE, self-attention) outperform Musicnn on the larger datasets (MSD and MTG-Jamendo). This confirms the intuition, reported in {{cite:9ada7463ee87638ec9466e2742785229a7fede4f}}, that domain knowledge can be beneficial for relatively small datasets. However, the design choices for the parameters of Musicnn restrict the power of the model when it is trained on larger datasets.
| r | a3e87d6384425119518e8408b75be838 |
Monte-Carlo Dropout. We implement the MC-Drop method as introduced by {{cite:0d4d096a8d6a710dccd6062355ad09cc70412e26}}. Following the work of {{cite:1fc956e447a3ce0be6cc4ea5b9f7c6cd2e7fa3d6}}, we configure MC-Drop with a dropout rate (DR) of {{formula:7dc83392-14fe-4158-9a07-e36eaab05948}}. In line with the other methods used in our study, we use a ResNet18 backbone {{cite:2f5861605feac372c37e3e01c837dc308c2ea781}} and also report the posterior (Baseline with DR {{formula:6c0fffcf-2139-4fdf-af1d-531c47c6d381}}), for which we perform static inference on a model trained with dropout.
| m | 73083c103083f21aea1bea7795fdb4b3 |
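A minimal sketch of MC-Drop inference follows, assuming a PyTorch classifier whose dropout layers are standard torch.nn.Dropout modules: dropout is kept active at test time and several stochastic forward passes are averaged, with their spread serving as an uncertainty estimate.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Average `n_samples` stochastic forward passes with dropout on."""
    model.eval()
    for m in model.modules():            # re-enable only the dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    probs = torch.stack([torch.softmax(model(x), dim=-1)
                         for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)   # predictive mean and dispersion
```

Static inference (the "Baseline with DR" above) corresponds to simply calling `model.eval()` and a single forward pass on the dropout-trained model.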
We first consider the class of sharp configurations which are even-order designs. The definition of a sharp configuration was first given in {{cite:fba4d6811c29f463912bbefda52e2323555a09fd}}.
| r | 101ba1be90116423d6fca264ce52c668 |
The complexity of ICA is harder to characterize. Firstly, the FastICA algorithm assumes that the data has been centered and whitened {{cite:c0a51be09eb38c273be2a763f0439f0ce8febe46}}. The implementation we use, from scikit-learn {{cite:3c2f5d909def2248dc4d6a50cd181abeaed461b2}}, uses PCA for the whitening preprocessing, so its complexity is at least as high as that of PCA. The iterative algorithm to find the weight matrix {{formula:4e2b62a4-50ad-4f6a-9bab-3230fbd2b3f6}} is repeated until the matrix has converged, but there is no prior assurance that the algorithm converges or how many iterations it takes to do so. The scikit-learn implementation defines a maximum number of iterations to stop execution; we set this value to 1000. Each iteration involves calculating {{formula:1e5cc51b-6aad-43a6-9c05-182420774152}} for each component and each node, which the scikit-learn implementation does in {{formula:16ff36fd-c6c8-472c-84fa-e119f5acce4a}}, where {{formula:71db18d6-5180-4d82-addf-1c21b94fa502}} is the number of components to be calculated, plus the decorrelation step, which is done in {{formula:64373dde-735b-4de1-b6b4-26d9c4df38dd}}. The overall complexity of ICA then becomes {{formula:246dd981-e266-4f65-89a3-b093f3f38f2c}}, where {{formula:82e89ed9-4dfd-4552-a297-2b3b843b97ea}} is the number of iterations.
| d | f003e87fe2e3d22a49ccc9a935a84f76 |
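For reference, the corresponding scikit-learn call might look as follows; the data, component count, and tolerance are placeholder assumptions, the iteration cap matches the value of 1000 used above, and the `whiten="unit-variance"` option (scikit-learn 1.1+) selects the PCA-based whitening preprocessing.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))      # placeholder: n samples x d features

# max_iter bounds the fixed-point iterations; convergence is not guaranteed.
ica = FastICA(n_components=4, whiten="unit-variance", max_iter=1000, tol=1e-4)
S = ica.fit_transform(X)                # estimated independent sources
W = ica.components_                     # unmixing matrix
print(ica.n_iter_)                      # iterations actually used
```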
As shown in Table REF , we compare our model against class-supervised, visually self-supervised, and textually supervised baselines. The results of the class-supervised and visually self-supervised baselines are taken from {{cite:f4c462a9d52e13e0b1697a2530e8ed5423ebddb3}}. They are pixel-wise classification models finetuned from the pre-trained ViT models, i.e., DeiT {{cite:97dc61015aa011de8ea9455901c16e3afb7cf77a}}, DINO {{cite:e6ef5f9452f7a644f915ec688dba9465802db819}}, and MoCo {{cite:11c0eb35c5f57341b06db6cd3f69657e8ab77cd6}}, with a 1{{formula:c528822e-9e71-407e-a92a-a7ff2b6f74a3}} 1 convolutional layer as the semantic segmentation head. The finetuning datasets are the training sets of VOC and Context, respectively. Compared with the class-supervised model (53.0%), our result on VOC (52.5%) is still comparable, even though we train without manual pixel-level annotations.
| m | de4b80643455c9746affb7af8ff7d576 |
Lattice Monte-Carlo simulations with magnetic fields do not suffer from the sign problem.
There have been lattice studies on the chiral and deconfinement transitions {{cite:3d53fb7bff964c8ed2cd15790c3437ce0fb45fa3}}, {{cite:33be418a78abf12f864fa6f73b1870d7ce95ea04}}, various condensates such as chiral condensates {{cite:d579288cdd7043e0b31cc07735d220fbfa4dc3ae}}, {{cite:84eb77fb05f9f8d0ddd27cebe263939a29c59a73}} and the Polyakov loops {{cite:1bf64d41e0ac1728cd3872d6cdb542cbe2923e2d}}, the string tension {{cite:af0e7f5b512e683ad8389d0ed77096ce5c495f5c}}, {{cite:ad461bdf991d24f8ea0a36eac4645c36704b1b97}}, {{cite:7f7d8a12d67ded5f1f0fdec0d1b09d02f61194f0}}, equations of state {{cite:73f7c0a5ad81b2384684da746a978c4d42162491}}, and hadron spectra {{cite:7e16e12796b327b8a4ca5ab1b20113d0592d5907}}, {{cite:69e6d7e0b640860aa17a91679c2373a11411664c}}, {{cite:65aa598cd09a5afde9a0458dd800a083048e97f2}}, {{cite:371149dcf1eb71008cd58dc3607f9f4646ce0d00}}, {{cite:c18cef783a5a6820dbdd7d1d12a30ff5db0e38a5}}, {{cite:a65f10f4eefefec5a1158447c6b8055295a2a7c2}}, {{cite:e6f12bc037e745d9eae6fc5dc126278d88cea099}}.
The effective model descriptions for these quantities are not straightforward, and the attempts to reproduce the lattice data should improve our understanding of each model.
| i | 51d4a0d28f73d949670a2a0966995836 |
Next, we compare the performance of the proposed approach with the classic feedback-based algorithm. For a fair comparison, we still employ the factor graph and message passing algorithm for the feedback-based scheme (existing feedback-based methods {{cite:163f8d9a0a1bbbb855dc2d4146e14bfedba5bff}} rely on the EKF, which in general performs worse than the message passing algorithm because it uses a first-order Taylor series expansion). Note that in the feedback-based scheme, the pilots are contained in the downlink communication signal. Therefore, the reflection parameter in the observation model of (REF ) is replaced by the channel gain {{formula:e321c2b8-b8b8-42d8-9b52-cc5618981a13}}, which is determined by estimating the range parameter {{formula:d329e202-f7e7-428f-b0c3-8becbcf9fd93}}. In contrast to the DFRC signal, for which the whole block can be used as pilots, the feedback scheme can employ only 1 or 2 pilots, leading to a much smaller SNR gain after matched filtering. For simplicity, we equivalently multiply the noise variance {{formula:b467a401-6187-4d64-b6b0-3cc40578b01b}} by a constant for the feedback-based scheme.
We compare the CDF of the estimation error of {{formula:727074c4-17aa-4d01-96c0-39dd5e79f877}} for the proposed approach and the feedback scheme at the last time instant with {{formula:f04ed607-8250-4587-8bf2-6468dd3d60a9}} antennas. The high-complexity PF-based message passing algorithm and the EKF method in {{cite:349df2e563c60867a3c559398b2a67d1d7f329d0}} are used as benchmarks. It is observed that, with significantly reduced complexity, the proposed parametric message passing method attains the performance of the PF-based one, verifying the effectiveness of applying the Taylor series expansion and MF message passing. Moreover, the proposed approach significantly outperforms the feedback-based scheme with 1 or 2 pilots due to the higher SNR gain.
{{figure:fe8bdbb8-35a4-4d64-8ab7-a514c90ddf32}}{{figure:45db4202-e7f9-4d25-9a99-0935b9b9327a}}{{figure:3002966f-79a6-4c9e-854b-4b290511b98f}} | r | ea917c785ab9ca5c1f3eb7896122cd30 |
From a theoretical perspective, virtual constraints extend the application of zero dynamics to feedback design (see for instance {{cite:78283428aff6ac82d91722659d120973340cf45f}} and {{cite:5d4ade917bd9b798e2fd8511b77e95ffcd9b6465}}). In particular, the class of virtual holonomic constraints applied to mechanical systems has built rich theoretical foundations and applications in the last decade (see {{cite:91403c5ec7283ad2227391b76354a37de27686af}}, {{cite:77e046149d8cda46c6fd9cde7753edbde170ebfd}}, {{cite:4ecb1990d2ad1805d9908fc67233390550ff8ea5}}, {{cite:c5e3147f688c0110c4b412e64f212626072249a0}}, {{cite:018f03e2790a501679678c39be10b06cfc60c31d}}, {{cite:38560f8aea9d45713c33eea33b06fb69e0e89e56}}, {{cite:4839d91032ec96437970effece4e8d3e9a1ade2b}}, {{cite:5a5e6ae48bd88828cf51a1a59f71136403bf5ee0}}, {{cite:9a26a1211fc900090d117e6363c0a3c7791df686}}, {{cite:2845a3dbd4485af4d3ef345ca16116724c812132}}). Nevertheless, there is a lack of a rigorous definition and qualitative description of the class of virtual nonholonomic constraints, in contrast with the holonomic situation. The recent work {{cite:6478bc53652fd0e0a7510b3629295ca738798616}} gives a first approach to rigorously defining virtual nonholonomic constraints, but the nonlinear nature of the constraints makes a thorough mathematical analysis difficult. In this work, we provide a formal definition of linear virtual nonholonomic constraints, i.e., constraints that are linear in the velocities. This particular case includes most examples of nonholonomic constraints in the literature on nonholonomic systems (see, for instance, {{cite:d370b2e829e518ad1ca9c05307868792fd488e8c}} and {{cite:449883367ed3329caf769dcd8c3a18ac2f53094a}}). Our definition is based on the invariance property under the closed-loop system and coincides, in the linear case, with that of {{cite:6478bc53652fd0e0a7510b3629295ca738798616}}.
| i | 405ed5f27638d96a3e41af35a1a5f828 |
It is unknown whether there exists a loss function for which the original system of equations can be exactly recovered from one-dimensional measurements and random fitting-parameter initialization.
There is always the possibility of adding more physics-informed constraints to the loss function, such as known problem-specific symmetries, to help the optimizer find the desired model.
Furthermore, the diffeomorphic transform between the delay embedding and the full-state attractor can be further enforced by using a bijective neural network, i.e., an invertible neural network, which honors the one-to-one map by construction.
In addition, the SINDy algorithm used in this study is relatively basic in its use of the {{formula:059b6e65-2c6d-4833-82b6-9e9aaf279d61}} loss for enforcing sparsity, and many improvements suggested in the past few years {{cite:9f231145d0d21585f6216ba37e6da4505170c486}}, {{cite:5bf5d6ffceccfa422dd55c9e28e13518078ba65d}} can be integrated into the proposed delay SINDy autoencoder framework. In particular, using a probabilistic {{cite:81026ee21346b6229bd3c621ca5564f5efd7daba}} or a weak-form {{cite:ca2ebd60eab7fac636915673295e9f3c029b0f3f}}, {{cite:dd51e3e1d533a1546ae5c0bf42910ba852b3807e}}, {{cite:e4eb8620d81b34bec3900a4f7e8981b60fe67b10}}, {{cite:be3bce97e05917304aaa27a098048e09b193993b}} SINDy algorithm would certainly improve the results obtained from real-world noisy data such as the Lorenz waterwheel video.
| d | 8ba1969c744beee7ae1056817dea6b78 |
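For orientation, the sparse-regression core of basic SINDy is sketched below using sequentially thresholded least squares, the classic variant; the threshold, iteration count, and candidate library `Theta` are assumed placeholders, and this stand-in is not the exact sparsity penalty used in this study.

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: fit dX/dt ~ Theta @ Xi,
    then alternately prune small coefficients and refit the survivors."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold            # candidate terms to drop
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):            # refit each state equation
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi   # sparse coefficients: one column per state variable
```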
Another key aspect relates to the question of the best shape representation.
Numerous representations have been proposed, including Signed Distance Functions (SDF) {{cite:e1039b01aa2e0e3ff93082252ddc7668eb5aec57}}, meshes {{cite:980bcc03d59e4cfc34d7a1eb3b1cf3e9a30fb6d3}}, {{cite:0d96182e52cb153515b5b8a4c0328cca5e55ea17}}, voxel grids {{cite:1e2a4629293bb33be2f298524f5ea86ed985b877}}, point clouds {{cite:ee46452972421f9b65b5386f3912747feb2a72ad}}, {{cite:74731049bbe3339b9ee2964546346b23663a37dd}}, and even hybrid approaches {{cite:54c8a2f878480287dda25aac2138380c69550ec4}}; they all have task-dependent advantages and disadvantages.
In this work, we propose a representation-independent shape selection mechanism.
That is, shape exemplars are selected from a given shape database that can implement different (or multiple) representations.
The most convenient representation is chosen depending on the specific task, be it for defining objective functions or for visualization purposes (Fig. REF ).
| i | d3f99f5a979798ff0aa5b0795fc8b9d6 |
NeuralSparse {{cite:5f5ce87ab455e12df00a24fa0edf4db619fa2bbf}}. NeuralSparse learns to select task-dependent edges using signals from the downstream task. Given a hyper-parameter {{formula:9299ec79-ab54-440d-afd5-c71b79df3ca6}}, NeuralSparse samples k-neighbor subgraphs, which are given to the GNN as input. Sparsification and representation learning by the GNN are performed simultaneously.
| m | 551081b512bc0c0b695f10bc81df5a50 |
We use PatchGAN {{cite:13077c6337900aa1722088b671656879c5041bd2}}, {{cite:c132e0321fbb2d273b3a43c9f711395fcfdfa0e4}} for the discriminators {{formula:9c501980-aad7-4bc2-821c-d9d4ff6937a2}} and {{formula:9c4c3ef9-4a91-40ee-9605-b8863f3de0d8}}, which take two inputs (e.g., source and target probability maps or entropy maps) and distinguish patches of size {{formula:a8831746-2379-4b9b-b5b7-a6eed6619fa2}}. The PatchGAN discriminator mainly penalises structure at the scale of local image patches and is run convolutionally across the input images. The architecture has five convolutional layers with a kernel size of {{formula:4a9e9414-b6f2-4f6f-87ca-bdbc73878c44}} and a stride of 2, and the numbers of feature maps for the layers are {{formula:ba88007b-cba2-4e9f-9a08-8d8f46dedff1}}, respectively. Each convolutional layer is followed by a leaky ReLU activation with slope 0.2.
| m | aa5c73b17548f55746fc43443035f80e |
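A sketch of such a discriminator in PyTorch follows; since the actual feature-map counts above sit behind a formula placeholder, the widths used here are assumed pix2pix-style values, and the two-input concatenation is one plausible reading of how the pair is fed in.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic: five 4x4 stride-2 convolutions, each
    (except the last) followed by LeakyReLU with slope 0.2."""
    def __init__(self, in_channels):
        # in_channels = channels of input a + channels of input b
        super().__init__()
        widths = [64, 128, 256, 512, 1]   # assumed feature-map counts
        layers, prev = [], in_channels
        for i, w in enumerate(widths):
            layers.append(nn.Conv2d(prev, w, kernel_size=4, stride=2, padding=1))
            if i < len(widths) - 1:
                layers.append(nn.LeakyReLU(0.2, inplace=True))
            prev = w
        self.net = nn.Sequential(*layers)

    def forward(self, a, b):
        # The two inputs (e.g. source and target maps) are concatenated
        # along the channel dimension; the output is a grid of patch scores.
        return self.net(torch.cat([a, b], dim=1))
```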
To work out the matrix elements in the coordinate space, we follow the same method
adopted in our previous works {{cite:8cd0195753365c24657735e52dc40b1c64de980d}}, {{cite:89730459fe82d0985f3cfdc6ef609fae94695079}}, {{cite:c26dc7bba6c6af35fbd028d61050e0c04e042587}}.
As we know, the relative-motion wave functions {{formula:7a93895a-2c8c-42fa-afb0-9cdebd3c329f}} can be expressed as
{{formula:b53f758f-e9bd-4fa0-a0ef-a75111b328cf}}
| m | 3db2c1adf6bad983ab01ae57e7c56d68 |
Now, one goal of the Everett program has been
“to take the mathematical formalism of quantum mechanics as it stands
without adding anything to it” {{cite:2ea38158844c319fa940fa820015d96e0386f125}}.
If we find it necessary to modify the formalism, is the effort worth it? (Van Esch {{cite:370a2c2634be4052b1bf165050ae7269e70435c8}}, Barrett {{cite:a85a33d4270c4af0161c9f188005f5c08e66c155}} and Vaidman {{cite:3fcb7644e3ddeef6b8b9da703675c7a8bde89f35}} argue that adding assumptions of some sort to the formalism is a requirement if one is to obtain probabilities.) The answer to that question remains as simple and compelling as ever: the Everett interpretation, by virtue of eliminating the insistence on a single outcome to quantum measurement,
avoids the nonlocality {{cite:b1ad54f805e1053de45b9c4c231f6297b8c782bf}}, {{cite:12a5422b55b6a011fbbef9e13babea228dcafb1f}}, {{cite:b6c5b1d9359324d002157770b81cd50f5a533b6d}}, {{cite:04908fd9274e44b86fa7665a697b38065de40eb7}}, {{cite:507232ffef7bef7960135cda0acf0772a0bca963}}, {{cite:3a55d5fda8293f76ca95d37ed15f6d0c4ff1d6aa}}, {{cite:f2e74e38a8225365dd8f15a7d3f991e62e833e04}}, {{cite:c078aa2912a2d15cd01a2d6c67f7861cd553e762}}, {{cite:f61794d022328e569623c9ee0dc430096d6f4a45}}, {{cite:05ffe677273c6ff4dd14675c6c07b079cc6c4d85}}, {{cite:44affcbff16951b763004f4b17adb49a368b05b1}}, {{cite:c55e32d3af0f91ebbb29356cdc776d530578fa02}}, {{cite:1203acbd4eb36e28818aaea02aced2134bceb15b}}, {{cite:d3bb02ef3a41664760c3a6415bb73303e592defa}} that Bell's theorem assures us must be present in any single-outcome quantum theory {{cite:482cc3ce1bf350ef79f1eef7134ad8ea367e66b7}}, {{cite:36dc5a7ccfc9b7c3d12b1e56170f2a7b45f5d35e}}, {{cite:a94eb9ec281b652b87dc60a2cc091898e5247eda}}, {{cite:c16f961d7909a1746501456ef36c228ccaf32ff2}}, {{cite:37c8cc4fbb452b9705302d3a5948475646dc6424}}, {{cite:49e161ccf248b6c2d7f3f8a5b1577f13e261186f}}, {{cite:737d2c8f43010c6c6daa8dc82db2b9c084e64c88}}, {{cite:3bcac532ffe9e8b6070ba26938aec78bfad19214}}. (Weissman {{cite:f369882e3250a14a58d0058e6e93f500645b2b27}} introduces nonlinear modifications to the Schrödinger equation but retains multiple outcomes, thereby retaining locality.)
| d | af5c73216a769cff9e2333d079b66a0d |
The best understood avatars of the AdS/CFT correspondence provide a map between geometries and CFTs preserving maximal supersymmetry, which for AdS{{formula:5595431b-4f6d-4144-9b24-a749ba9db818}} solutions in ten or eleven dimensions is 16 real supercharges {{cite:79215df2056abdd7762ff552457a128beaa5e3c5}}. These maximal cases are thus a well-motivated place to focus a classification effort. Since {{formula:d3310057-0870-455b-bfd1-e382483415b9}} superconformal algebras are chiral and a solution can support two distinct algebras of opposing chirality, there are several ways to construct maximally supersymmetric AdS{{formula:964e4708-321d-44fc-b3e7-24386b91edf9}} solutions. The canonical examples are the D1-D5 and D1-D5-D1-D5 near horizons of {{cite:b9bd2047ac97231e7ce2d8958191630017245e86}}, {{cite:99c52abe61bdda4502881c5b71bb84f45d8c226d}}, which preserve {{formula:f2430936-a776-4d8a-bbf1-f75b58f3a0a8}} symmetry with small and large superconformal symmetries, respectively. (A classification of AdS{{formula:9f0b912e-3dcc-447c-9a51-a31ca2769c05}} solutions with large {{formula:71f15a9e-14f5-4707-ae7d-037ca793745f}} in M-theory is given across {{cite:a6f2ff2b4b95bb5d51649436627e05a489b7d4c6}}, {{cite:5eac85947c0d9d161fd4ec23ba6d2628c8426ebe}}, {{cite:e9820a437a2880f3a74972d4adf3ff1083bb06ec}}.) Generically, any solution that is {{formula:4f6ffe67-144d-49d2-b0a8-9555b8d01d3c}} supersymmetric, for {{formula:c4010f37-6871-4ec4-b839-1de390140ff5}}, is a maximal case; they are all deserving of study, but we will not attempt to do justice to them all here. Instead we set ourselves the more modest goal of classifying all solutions in ten and eleven dimensions preserving {{formula:09a2279b-e043-490f-b188-6361b5a644e8}} supersymmetry (the difference between this and {{formula:7fa3318c-4a3a-463e-bf9c-e8cfb11ffab3}} is essentially just a matter of conventions; additionally, we need only consider solutions in type II and M-theory explicitly since, while AdS{{formula:71ac83bb-acad-4432-bd3e-917840d3b3cc}} solutions exist in Heterotic supergravity, they are incompatible with {{formula:9a97f96b-b298-4bb5-84de-4ffb96e68461}} {{cite:d097bd70cb2569da38e7f390529ee5bbdf40c98c}}); we will in fact find the local form of each of them.
For related work on AdS{{formula:da8db4bf-eb32-41df-b54e-850247831aec}} solutions preserving various supersymmetries see for instance {{cite:11f2e17306d84ca13251b7e37ff6c8446bdf0ea7}}, {{cite:d5fe11d59c4d181bbf4449d54e70d8c19acded2e}}, {{cite:acbacd8c8238908efc2f3c912e44a68cc374ccfb}}, {{cite:2ba94ddd410247f1c04a74d3a8a504008b9a0ee0}}, {{cite:f2b2d87bb1d9494850c7cb6863c0ef6e9e401a30}}, {{cite:5b4a394f00778cf445e64aead80bd84f18c6f4db}}, {{cite:9cc0c27fdd1fe98284bef3023fe255ff8c90296b}}, {{cite:39312f5055311546b353e55965c7456b429f455b}}, {{cite:defd79da724dc23222e47d660eef93722a8689ef}}, {{cite:fd0e02dccdce9a872aa6b80a508e6a8f6625c236}}, {{cite:bf4abc96c11bbb29d19309a7647ce64be54c7bbf}}, {{cite:dd01aa911d9e2139e2f40c57fb51bb348bb4a2a6}}, {{cite:1f07b998b8843569004fa397196f3aa20ef66b6c}}, {{cite:8c125a1e0a3a058d7fe732992fa46f8867705499}}, {{cite:91e4820ad955a6c9204f337b93b7b213e7adf6ee}}, {{cite:7c0fb6b31262f099f2425d03cb3afc27bf5a46ca}}, {{cite:0a9218c0727029d4d0cd3bf56a9bb68e5efb1f07}}, {{cite:d77f6731c0275715109e5579203efc7f7357979e}}, {{cite:d931574e72c2d43c09630815391b7d4919ea4ef0}}, {{cite:e49e1114c5ee82e80c3e6be62953911856503f7a}}, {{cite:385de647266f2612f82bfb6bb2d7d2e296b3b839}}, {{cite:68a7e7e0912ee21e96f8ab35c1cd87efda01a643}}, {{cite:8b9b31821ef52ee8456b9a8c7bfbc2d090540289}}, {{cite:30fbcd310784d6381eb31efa627c9410c646c865}}, {{cite:af8d5a3a71d299e5cf25338c80facbfd3b1869b0}}, {{cite:f78a9949720987aa5da24a3dd36e0970073768f1}}, {{cite:24af0a88671ff25a0d1ba78c1f39293f26cf084d}}, {{cite:b8020091f7d7becfa2aba2f9314ca5785924d447}}, {{cite:a1da259943195222d1832aa888e63eecacbdac44}}, {{cite:90f7371f28ac59631693457863fee50daa636b58}}, {{cite:c4acc42996061f28920f96e633e14f5b515cf39d}}, {{cite:b668ad5d7177421b15c853f7f17583f74b210285}}.
| i | cd8b6ab484b61b9197d53ec8308cf22b |
The endeavor to understand certain geometric aspects of decision problems has led to intense research in statistical learning, ranging from the study of data manifolds, through landscapes of loss functions, to the delicate analysis of a classifier's decision boundary. In the present work we focus on the latter. So far, a wealth of studies has analyzed the geometry of decision boundaries of deep neural networks (DNNs), with profound implications in the fields of adversarial machine learning (adversarial examples), robustness, margin analysis, and generalization. Inspired by recent isoperimetric results and curvature estimates ({{cite:1914bff9798abe992eeedf1795861af6ed90997a}}, {{cite:4e5682d01d79d40d2362f7cc17932b5644548ee8}}, {{cite:c90091f6ebbadcf407a7ddaa38cdc9c74ae2554d}}), we attempt to provide some new aspects of decision boundary analysis by introducing and studying a corresponding diffusion-inspired approach.
| i | 408ae956a2030b4a74c4306d42e9a642 |
Our empirical observations show that, when the optimal regressor deviates mildly from the dummy vector assumed by IRM (i.e., a vector where each dimension has norm 1), IRM seems to find worse solutions than ERM. However, when the norm is close to the dummy vector, it outperforms ERM. Our observation might be related to other observations (we point the reader to https://github.com/reiinakano/invariant-risk-minimization for IRM replication experiments) showing that IRM is difficult to optimize and falls easily into local minima when the {{formula:54fffd53-38cb-4de9-83a5-eb77c9ba4c11}} balancing the feature-stability term and the ERM term is not well tuned. On the theoretical side, our results might be related to {{cite:82589f4551bbb91d5bf0b7045979b338c4c5bb3b}}, which shows that IRMv1 can indeed perform worse than ERM depending on the ground-truth model.
| d | bfe69698d03ddb8f29a917f2037fe0b0 |
DSLR: To capture LDR images with a DSLR, we mostly used auto-exposure settings and captured a total of 25 LDR shots with this configuration. Notably, we chose varied lighting conditions such as midday sun, low light, high-contrast lighting, and sunset as shooting environments, which allowed us to cover the most challenging real-world shooting environments.
Smartphone: Smartphone photography has gained significant popularity over the last decades. Therefore, we included images captured with different smartphone cameras in our LDR dataset. Typically, due to the shortcoming of a smaller sensor size {{cite:20b6903448882d7255a6a62076ec6af33ca07b9f}}, {{cite:9fa0e381982e45761630136870918ac66c97e9bf}}, {{cite:2b3aa4e3f68767a003786e51b0a7e562df03da12}}, smartphone OEMs ship their devices with the ability to produce HDR images. However, such default HDR settings do not fit well with our target applications. Thus, we used a third-party camera app known as Open Camera for capturing the LDR images with different smartphones. We disabled the HDR mode, including HDR contrast enhancement, in the default settings of the application. Apart from that, we kept the exposure setting in auto mode and captured a total of 27 LDR images in tricky lighting conditions similar to the DSLR setup.
| m | 86add3b8881497829632f7a6d4db72ad |
Before the success of masked autoencoders, visual self-supervised pretraining was dominated by joint-embedding methods, either contrastive ones {{cite:96f2d56290f524b3433d5b84ee520a323663ba02}}, {{cite:81d73ee81c91cda0953bb4902b435175eca39990}} or negative-free ones {{cite:dbb633c3028a5b797b3c5d8c26bbad410954fa30}}, {{cite:2a0d3a65824ce3584ed38a581df38f18c3fa43a5}}. Thus, it is highly relevant to compare masked autoencoders with joint-embedding methods for visual self-supervised pretraining.
| m | 9646373d5b40b9d7af28988cd1f709a6 |
To demonstrate the efficacy of AVAC, we further show the performance and energy gains for a real-life Edge application called HealthFog {{cite:c9ac3596f97109527eb0257ecd5b558786653718}}, which provides high-accuracy healthcare services using ensemble deep learning. Figure REF shows the application running in 3 major stages: (T1) the program performs a large number of ECG read operations and shares data in real time with sensors and actuators; (T2) the program performs task scheduling and migration decisions to minimize service-level-agreement violations {{cite:314c8c1b4f64de4b35d98d4bc9fa4d91e34329b0}}; (T3) the program utilizes ensemble deep learning methods to evaluate the data and generate results such as health analyses and automated prescriptions. Gains in the static and adaptive cases are the same in T1 due to only-read operations. T2 has a large number of convolution operations, giving high energy gains in both the Static and Adaptive cases compared to the reference. Further, T3 has many MM-like operations and also bootstrapping processes similar to those in the DT, leading to higher gains (29% performance and 19% energy). Also, Figure REF highlights how the proposed ML-based model can adapt to shifts in memory access patterns to instantly enhance gains.
| r | 41f41b69aab44f3e1116362bec52acf9 |
Multivariate stochastic processes have been the focus of intensive research in the last decade {{cite:4c801196d1c8fa01f067853d77853b1ce637eaf4}}, {{cite:49af611f3d7eca976b18ae5e42275fe979855449}}, {{cite:72c4f830582237da0750c3cbaf2051d2d13cc3ee}}, {{cite:bcadc5ee2cc3b7823acbbe37edf6a9610dc0d61e}}, {{cite:e0d836dc95e12a6eb8379ab0e0fdd9d844022945}}. There is much advantage to modelling underlying geometry in time series {{cite:1e7a72a6695fecd9ce29527808c47492bb080675}}, and that viewpoint corresponds exactly to how the underlying structure in the observations evolves over time. Oscillations are a natural starting point for modelling when studying stationary phenomena. The multivariate generalisation of an oscillation is an observed trajectory from an ellipse {{cite:1778919dff149ad8e42a79cc3eebd9d43a3d0171}}. This puts an emphasis on classes of models that start from oscillations and broaden to partially observed trajectories on the ellipse.
| d | 39581e84924827d97aa339565f24d992 |
We include an alternative view of the results shown in Figure REF here in Figure REF .
In this Figure, we plot each editor's performance on a growing history of previous edits and its performance on the pretrained model's upstream data.
For both Figures, optimal performance is in the upper right-hand corner.
As expected, each model decays the pretrained model's upstream performance over time, though this decay scales down with learning rate.
Also as expected, MEND underperforms in each case as it lacks an edit training set, which is privileged information with respect to our task.
We also observe that editing BERT on SCOTUS is far noisier than T5 on QA for all editors.
This finding corroborates recent works that show BERT training tends to be highly unstable {{cite:31d58735ecb62977b871c58f4675caaf3a6debd1}}, {{cite:4677603a78699d8933d341efbf059d41e021bd68}}.
{{figure:40487d52-f1f0-4b85-98b0-5b7e2c5be540}} | r | 124e7b75176fedc07d5af96090cf3ef4 |
Our approach assumes that linear spacing in latent space corresponds to slice spacing in image space. Although alternatives such as spherical latent-space interpolation {{cite:6ba8ba68699133463521421316e37b14ccdca0b5}}, {{cite:7751a3c74a3ce157bcd441e392a1dc5f6cfff6bc}} or an enforced Riemannian latent space {{cite:17e802dbd1ffdad17ac67fc63889fc73583f3aeb}}, {{cite:c07539ee3cc54f0ca620856ecaa5d24a0df4853e}} could be used, linear interpolation in the latent space of an autoencoder trained with the proposed synthesis loss showed excellent results. Nevertheless, human anatomy does not change linearly along spatial dimensions. We conjecture that the model can learn such nonlinearity from the training data. Furthermore, we presume that the synthesis loss encourages the model to learn the nonlinear mapping between distances in latent and image space. For example, our experiments on cardiac MRI revealed that the model has learned that structural changes at the base of the heart differ substantially from those at the apex. Finally, our experiments on neonatal and adult brain MRI apply mixing coefficients ({{formula:675cf271-cef8-4770-b97f-5f209dd05ae5}}) unequal to 0.5. The results of these experiments corroborate our assumption that linear steps in latent space can approximate anatomical distances in image space; however, the approach does not guarantee that this relationship is exact.
| d | aa961c1033c012d3737e023e26c68bc8 |
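To make the interpolation alternatives concrete, a minimal sketch of linear versus spherical interpolation between two latent codes follows; this is pure NumPy, and the encode/decode usage in the closing comment is an assumed interface, not the actual model API.

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation: the equal-spacing assumption used here."""
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t):
    """Spherical alternative: follows a great arc between the codes."""
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):          # nearly parallel: fall back to lerp
        return lerp(z0, z1, t)
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical usage: synthesize an intermediate slice with mixing
# coefficient t = 0.5, given encode/decode from a trained autoencoder:
# z_mid = lerp(encode(slice_a), encode(slice_b), 0.5); decode(z_mid)
```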
where {{formula:c414afda-8601-4145-80fa-e8d32743eb46}} is the NS spin frequency and {{formula:d5cbaa57-c558-4f3e-a96c-0bf5bb6eb21e}} is a coefficient in the range
of 0.87–0.95 ({{cite:321d3584c111cc38e9783ad91630d31571e14999}}). Similarly,
letting {{formula:4f0f9e21-0875-4ca4-ab21-f452a2a3071e}}, we obtain the lower limit of {{formula:1bd40ea3-c774-4b93-9ace-20d4c812fd52}}, because
less magnetic pressure is needed to balance the gas pressure from the disk:
{{formula:35f2c517-4bca-4bc5-bff0-b2d7387081a2}}
| m | 9698893daf6fe9772857fbe0cd8cb07a |
we would like to solve for {{formula:cbdcd579-3b2c-49e6-8d74-d6fd594e94b2}}, but this is a system of inequalities in {{formula:a1db8f2c-d651-4cb5-b392-c8ba2cf46c78}} unknown integer variables, with the additional constraint that the entries of {{formula:ca0b80e3-74dc-42c7-b54a-c58947914ddc}} have to add up to {{formula:c776b162-17f5-4acc-b426-27001511fbd5}}: there is no easy way even to know whether it will have a unique solution for this specific set of covariates {{cite:8a4e9236c03c9805670cef77b2b332a9cf7c0876}}, {{cite:b9e8560182aab3428355f1edeaae9828d17d12ab}}, let alone a way to devise balance constraints that ensure uniqueness of the solution for an arbitrary dataset. Depending on the values of {{formula:8099e725-20ee-4070-b7cb-eb6ed2ae795d}} and {{formula:1bf726b7-1b06-48de-9c0d-4cd9726d4316}}, {{formula:660dbef5-0532-4a9f-8b8c-09e250120c28}} and on the additional constraints that we impose on {{formula:9b1d7304-dc13-485a-9599-2558c1af3891}}, there may be several different values of {{formula:2b0063b4-52cb-4455-8e1f-a094014374c0}} that solve the system above and thus achieve the desired balance or better; this would be true even if we were to replace the inequality in (REF ) with strict equality. If we allow only one-to-one matches, or even one-to-many in certain cases, different solutions will match the same treatment unit to different control units, discarding a different sample of controls every time. The problem is that the system above guarantees balance of at least {{formula:3a689690-de78-4e51-8104-4faa6ff19142}} on the matching covariates but says nothing about the values of the dependent variable of the control units we match with. It is important that the dependent variable never actually be used to make matches {{cite:b40625bc22d3717a0f4809222d79299ae23bd39c}}, as this would amount to selecting cases on the dependent variable, which would introduce confounding into the design.
| m | e945302b1793ede41121ff3a52ac08eb |
This section presents the proposed architecture for estimating 3D human pose from 2D. Inspired by the recently developed transformer approach Poseformer {{cite:8db8d417a889ce9f70acf537274e99710c42ab01}}, we propose interaction modules inside the spatial and temporal encoders to make the transformer more efficient when lifting 2D poses to 3D. The 2D input poses can be inferred by any 2D pose detection approach, such as {{cite:0c0d95051836f9e31fba397900bb119e97954102}}, {{cite:492d9596ad593a0f79a13f07a6824c9f0851807c}}. The poses of consecutive frames in an input video are concatenated to form the input to the proposed architecture. Suppose {{formula:35fa0949-2d97-4376-8758-eaf1f9ddda5e}} denotes the set of 2D input frames, where {{formula:4b067473-db0b-4497-bdde-7496b4317623}} is composed of the 2D positions of the body joints for frame {{formula:2704d977-a407-4a88-8a86-09c0687a3a8d}}, {{formula:af7b39fd-2007-4dc8-a8d5-e9f4917ebbb7}} is the total number of frames in the input video, and {{formula:b86cf5b2-814d-47f0-9dcd-6e9bad283321}} is the number of joints. The output for each frame is the 3D body joints, {{formula:b653ae5a-e24d-48ee-b59c-b0c348a0b901}}. The proposed architecture incorporates cross-interaction modules into a vanilla spatial and temporal transformer {{cite:da86151130aa7603298761b1c5602b1adf9a02c1}}. Incorporating these modules into the transformer helps to capture both long-range relationships and local interactions between body joints and across frames in the spatial and temporal domains, respectively.
| m | 0036c7ccbb4dd71dc40a60b274522f3a |
Here we also argued that, given that a geometry is optimal for one simple objective function, it is more likely to be an optimal geometry for another simple objective function, compared with a null expectation based on random functions. This is a rather surprising result, given that we typically conceive of exponentially large search spaces. Our argument is somewhat similar to that of Valle-Perez et al. {{cite:3e85aff7a24457c00e0b4d5a1b7004afda4c3235}}, who proposed that a reason for the success of deep neural networks in machine learning is that they are biased towards simpler functions, and natural functions also tend to be simple, so a coincidence of functions is more likely than expected under a random-function null model.
| d | c4ead0de220f0b12d29d9d89d6df3db5 |
Image translation problems like denoising {{cite:6bb885192ceb94fc3463fc8efbb0731d0d005bb1}}, segmentation {{cite:71f90d79a46148edf87ff712d63f5d38b528f5db}}, and super-resolution {{cite:dbe51536fbbc34a1c3efd12d6baa8d7a64d9ceaf}} are spatially proportional/similar translations and are much easier to solve, akin to a pseudo-conditional probability {{formula:221ad8b1-3d11-4835-9a64-263a18744e2c}}. We are trying to solve problems like removing haziness {{cite:92ba4a9455f8b7986ac1b38f4dc736a813d14df8}} in images, performing non-conventional transformations like 3T-to-7T MRI {{cite:5f97bde4507149c86d3614c2978b4fc4abfaa671}}, or locating a specific organ and providing a clearer and more detailed view, i.e., the true conditional probability {{formula:8cc1bade-66e4-4f14-a87f-0d8dd217b52d}}.
| r | 2fd0c1ce0f04686cda34d158830c10a0 |
Let us start from the strong disorder case {{formula:c0af0f97-e527-493a-8351-9344d9ac4773}} illustrated in Fig. REF . When the typical tunneling transparency {{formula:3bbc89d7-6cb3-4e2a-90de-af80546aaa71}}
of the insulating barrier separating neighboring puddles is small (the action {{formula:7bbf1926-a806-4b24-b515-23e0d0527887}} in units of {{formula:56aadde6-f2f5-42de-9079-8b7fa339fba1}} is large), one can envision a sequence of three mechanisms of activated transport replacing each other with decreasing temperature, as in a lightly doped wide gap semiconductor {{cite:0c680d22552535886ff79f1db724148ad8a4cc6e}}.
This three-mechanism sequence is illustrated in Fig. REF .
At relatively high temperatures {{formula:ddd7c042-ee1b-48ec-94f5-d752f198e464}}, electrons and holes can be activated from the Fermi level to the percolation level (i.e., the classical mobility edge).
Thus, the conductivity at such temperatures is given {{cite:6f768ec3a3e067106a606236d39453ef9b907ba5}} by
{{formula:536ce5eb-aa18-4bf8-8f28-71d370530601}}
| r | d5141958c5dc60094494aab707d67849 |
For a long time, it has been known that the distribution which satisfies the problem-defining constraints while maximizing the entropy functional is in a sense the least-biased distribution {{cite:0130e5cd761b5a5d425f8f70c60c12a1628bd0e4}}, {{cite:0592a02007cdd1e54bfadae65eed5d8c00ab9238}} that incorporates the provided information and nothing more.
Aside from this conceptually appealing interpretation, the entropy-based approach to modeling offers many benefits of practical significance.
Given certain information from phenomenology, entropy maximization uniquely selects one model distribution as the most representative among all phenomenologically viable distributions, offering clear deductive reasoning for modeling.
At the same time, the maxent logic combinatorially assigns probabilities to any other distribution in the most intuitive way, namely according to its “distance” from the provided set of observations.
As this point appears to be of fundamental importance in the maxent logic, we start our more formal parameter-agnostic exploration by investigating the distribution over phenomenologically viable distributions.
| i | 14a6e7b416a4ca9d2d7a167a47ddb629 |
In Fig. REF , we show the facial editing results of our method with different numbers of condition attributes.
It can be observed that our method is able to handle complex disentanglement between face attributes by adding more condition attributes to the conditional manipulation operation using the strategy presented in {{cite:ee191cdb1f3bc002c60a0521721912b1f5f3ab31}}.
| r | 28fcdf9a43c93c49b8810c70234dfc66 |
As noted above, for the GCM to be equivariant to large transformations of the input, the parts need to be detected equivariantly. Some capsule papers have used the affNIST dataset https://www.cs.toronto.edu/~tijmen/affNIST/, but this only uses small rotations of up to {{formula:7324fa31-3a79-4b68-9d7c-b016a9a7d9dc9b93}}. {{cite:046e061179fbae66abfed97957b740dddaf31029}} did investigate the use of very different viewpoints on the smallNORB dataset; while their capsule results in Table 2 did outperform a competitor CNN, there is still a noticeable performance gap between novel and familiar viewpoints. We have demonstrated (see Fig. REF ) that the PCAE decomposition is not equivariant to large rotations, and similar observations have been made by {{cite:225810dd3eab1c5d90371f109a9b917a83821824}} for their model. Thus we believe that further work on the equivariant extraction of parts is necessary in order to achieve equivariant object recognition.
| d | 5949485af6f04aa88aa5320a28938261 |
This has historically not been the case in finance, however, where PCA-based methods have dominated since Markowitz' modern portfolio theory {{cite:1002143495f7f4b86f9aea2f8a2fb56677df5761}}, {{cite:792e14832424305bc5caf3cdf7df3bf2a88ec33b}}, and only a handful of studies have even seriously looked at ICA {{cite:e4bbac15d03dc6ce2f6fc337a0188af26ef79ac5}}, {{cite:fa3d20e43b9a67b820397f5777d61fdc276799ea}}, {{cite:7102674fe759d49e555ed8371a55b3537ea4f5a7}}, {{cite:f3e107034639b8d56e59b1077cecfe30cf7ab0c7}}, {{cite:42424ef5893694ba622af7e6eed09451b6af4a2c}}, {{cite:aa70c2f2c589378b734dc57daee5f03aee87c120}}.
| d | ce642dc759bb51fc1ee43fe107dc6d1c |
We studied bi-modal variational autoencoders (VAEs) based on a product-of-experts (PoE) architecture, in particular VAEVAE as proposed by {{cite:ab99367126426abcaf4cbe8d00eac7ad1ea96502}} and a new model, SVAE, which we derived in an axiomatic way and which represents a generalization of the VAEVAE architecture. The models learn representations that allow coherent sampling of the modalities and accurate sampling of one modality given the other. They work well in the semi-supervised setting, that is, not all modalities need to be observed during training.
It has been argued that the mixture-of-experts (MoE) approach MMVAE is preferable to a PoE for multimodal VAEs {{cite:fc9da8e2f2304e0bd1f0e2f2ba9b5b498f62bdc1}}, in particular in the fully supervised setting (i.e., when all data are paired). This conjecture was based on a comparison with the MVAE model {{cite:00beccb5e7d83b66fd80310c8babed0886ff9920}}, but it is refuted by our experiments showing that VAEVAE and our newly proposed SVAE can outperform MMVAE on the experiments conducted by {{cite:fc9da8e2f2304e0bd1f0e2f2ba9b5b498f62bdc1}}.
Intuitively,
PoEs are more tailored towards an “AND” (multiplicative) combination of the input modalities. This is supported by our experiments on halved digit images, where a conjunctive combination is helpful and the PoE models perform much better than MMVAE.
In a real-world bioinformatics task,
SVAE outperformed VAEVAE in predicting molecular structural fingerprints from mass spectra.
We also extended SVAE and VAEVAE to the 3-modal case and showed that SVAE achieves better reconstructions of individual modalities while having fewer parameters than VAEVAE.
| d | 192a335ff7ffd3a37139c09acd84236b |
Our approach to synthetic dataset design is to understand how DL models perform classification on the existing real-world datasets (i.e., the problem under study for classification or counting).
To achieve this, we take advantage of the work in {{cite:088a198a627f846686b4dedc97ed191da477a2fd}}, which allows one to visualize what happens inside DL models, i.e., which aspects/characteristics of the image trigger the final classification. These characteristics can then be used to better design the synthetic datasets, emphasizing these aspects when creating the simulated images.
In this paper, we focus on two different applications:
| m | 390f653a04215ddfdb6692ab0beb3a35 |
Speech, as one of the most natural and common media of human communication, carries a rich array of information that extends far beyond the verbal message being expressed.
From an utterance, the speaker's gender, age, dialectal background, emotional status and personality could be identified by human listeners.
Part of the paralinguistic information in speech reflects the physiological and health condition of the speaker.
It is feasible to capture natural speech in the form of acoustic signals and extract the speaker's information via signal processing techniques and statistical modeling {{cite:5b2ce2af446d5fb86719e8d9b2af6313f911ffec}}, {{cite:3fa210500fdabfb57400298ca1cec701c6658d82}}. Acoustic and linguistic analyses of speech signals are effective means of detecting and quantifying disorders, diseases, and other changes in the human body.
| i | 1315df1545e43842174088dc7d9ae7b0 |
We compare
our models with several state-of-the-art neural NLU models on two
publicly available benchmarking datasets: the ATIS {{cite:5af231dd92728dced9584d75f3c9edcd599336c4}} and SNIPS {{cite:4fcbaa24e81cfabdbdf408f411658ac9cabd01c8}} datasets.
The results show that our models outperform previous
works.
To examine the effects of adding syntactic information, we conduct an ablation study and visualize the self-attention weights in the Transformer encoder.
| i | 131412dae8e023027ef20f6d40f39b02 |
Fig. REF shows the style transfer when the style and content images contain a word cloud. The challenge here is to supervise style and content features while maintaining the readability of the text. DPS {{cite:21702dd039853dc943ae2c5a06892935b4ec8ec4}} spills over unrelated features onto the word cloud. To investigate photo-realistic style supervision with the contextual loss CL {{cite:965e674a9735cf39db3867262de9c03349fe6928}}, we integrate the photo-realism regularization module {{formula:90389aba-4671-4b69-bba4-8b4eb1448978}} {{cite:21702dd039853dc943ae2c5a06892935b4ec8ec4}} with CL {{cite:965e674a9735cf39db3867262de9c03349fe6928}}. The photo-realism regularization mostly suppresses distortions and preserves the structure of the objects in the output. However, {{formula:fa47b334-f7e4-4ac4-a489-1884e8029396}} {{cite:21702dd039853dc943ae2c5a06892935b4ec8ec4}} with CL {{cite:965e674a9735cf39db3867262de9c03349fe6928}} does not distribute image features well, possibly because contextually similar features between the source and target images were not well used in the output. WCT2 {{cite:b7933280980ba987620b1935ce9f99f68b8020a1}} and STROTSS {{cite:e3544df5b1e52f0b9738f4936a4025abc4a28e7e}} do not preserve local-level image feature details and reduce the text readability. DeepObjStyle provides a better distribution of features even when the segmentation mask does not provide the position of the word cloud. (We illustrate the extended version of Fig. REF in the supplementary material.)
{{figure:29135225-8377-482c-a084-5a0921a036fe}}{{figure:7d3fdfbb-1eda-45a1-9d8b-41580a6e645b}} | r | e699a17a5f305531ba48b2cfe371f14a |
While designing a hybrid functional, the choice of the mixing fraction ({{formula:70f4424d-acaf-4a77-82be-5f337ea835b8}}) is crucial to the bonding character, the ordering and alignment of bands, and thus to the electronic and dielectric properties of the material. At an operational level, the parameters that define the screened hybrid functional form a two-dimensional space spanned by the range-separation (screening) parameter ({{formula:3f4e670a-e1a1-4118-a425-ff42e242ed84}}) and the fraction of Fock exchange ({{formula:f46db8a1-9164-4e42-8730-4c957f821323}}) {{cite:89c2e507c041c1557d034f69c154c369e669f5df}}. Given the variety of systems, there is no fixed, universal combination of these parameters ({{formula:aa1bfb5d-df3d-472f-a0f0-ffa258bd9b82}}) that leads to predictions of satisfactory accuracy. The mixing fraction is often fixed based on the dielectric constant of the material {{cite:b492d69ebe83fbf9e1d290ea736456bd616c5521}}, {{cite:4afe500d679cbb44706c05a0c7fed9ceb44b0791}}, {{cite:65a5a72ceb9faa393cfca73a910e1f52a35a6fa3}}, following the interpretation of the dielectric constant as inverse screening {{cite:a89ca4c4795e554b96291ac341dfb6ab47b377ad}}. Based on fitting atomization energies of a large number of molecular species, the fraction of Fock exchange {{formula:9fa566f8-f244-4348-806e-550543caa566}} and the screening parameter {{formula:65c202bb-4891-4c91-b7cf-b8c5ea0853bb}} were originally set to 0.25 (25%) and {{formula:12542790-9083-46cf-b786-492172e30aab}}, respectively, for the standard HSE06 functional {{cite:2be5da13460869c2bb170a6fd57347d72b99a25c}}, {{cite:f2b11e82a2ff95ef9ad003e735ecbf3109cc91cc}}. In the present investigation, we maintain the screening parameter at {{formula:510bf335-067d-4729-8ebb-b6cd87445531}} (corresponding to a screening length {{formula:da7bf4f3-f4bb-417b-9bd2-f103649095c6}}). However, in the spirit of Refs. {{cite:0932dafc3467b487f107874c276970f7d011c73d}}, {{cite:0a9cc0c37abbe400903bddcc2c58d22f39cff6eb}} and {{cite:5d51783e7c184e8733dbd7cfdfa2383cc188a2ee}}, {{formula:94f2dc8a-b072-43ff-983e-db8fcc946873}} is tuned and optimized such that the modified HSE06 functional yields a DFT band gap close to the experimental gap. The DFT wavefunction of the modified HSE06 functional so obtained with the optimized {{formula:cfaec972-711b-4338-bb22-20662492237f}} is chosen as the initial state for the MBPT ({{formula:59543e29-24e3-4c8b-b3a6-dd98f96646cd}}+BSE) calculations.
| m | 571805fea5ec993b5e19940b4cce2007 |
Quantitatively, to obtain a detailed grasp of the contribution of each input parameter to the mixing process, we perform a feature-importance analysis.
Three different types of feature selection methods are studied for consistency: the F-test, the MI criterion, and RF.
The entire simulation dataset is used for the F-test and MI criterion methods, as no QoI prediction is involved.
For the RF method, 1620 simulations (approximately 70%) are used for training and the remaining 695 simulations are used for testing the feature importance.
All training samples are given equal weights during tree construction.
Samples are not bootstrapped when building trees.
Out-of-bag samples are used to estimate the generalization accuracy (the {{formula:fed60b6c-ddde-41c4-90e3-8d667e8368ac}} -score) of feature selection using RF.
In our case, the generalized {{formula:42835db3-db3f-470a-9364-44e324366157}} -score for testing the RF model for feature importance is 0.88.
The input values for these feature selection methods are as follows. The number of neighbors used for MI estimation is 3.
For RF, the analysis is performed for different numbers of trees in the forest: 5, 100, and 250.
The Gini criterion {{cite:38271102bf1e486a3aac309e127a806483ba42c4}}, {{cite:6a0eb693a89df77dd407674ecdb687e2a65dd07c}} is used to measure the quality of a split for each node during the construction of a tree.
The minimum number of training samples required to split an internal node is 2.
The minimum number of training samples required at a leaf node is 1.
The maximum number of features considered when looking for the best split is 4.
| r | 1cc02c6cd0d57fa1bdc3c045eb8f1cdc |
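For concreteness, the following scikit-learn sketch wires up the three feature-selection methods with the settings quoted above. `X` and `y` are placeholders for the simulation inputs and target QoI, the classification variants (`f_classif`, `mutual_info_classif`, `RandomForestClassifier`) are an assumption on our part, and, since scikit-learn provides out-of-bag estimates only when bootstrapping is enabled, the sketch scores on the held-out 30% split instead.

```python
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X (n_samples x n_features) and y are placeholders for the simulation data.
f_scores, _ = f_classif(X, y)                         # F-test on the full dataset
mi_scores = mutual_info_classif(X, y, n_neighbors=3)  # MI with 3 neighbors

# 70/30 split for the RF importances (1620 train / 695 test in the text).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
for n_trees in (5, 100, 250):
    rf = RandomForestClassifier(
        n_estimators=n_trees,
        criterion="gini",     # split-quality measure
        min_samples_split=2,  # min samples to split an internal node
        min_samples_leaf=1,   # min samples at a leaf
        max_features=4,       # features considered per split
        bootstrap=False,      # trees see the full training set
        random_state=0,
    )
    rf.fit(X_tr, y_tr)
    print(n_trees, rf.score(X_te, y_te), rf.feature_importances_)
```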
Effectiveness of Graph Transformer Networks on heterogeneous graph datasets. Tables REF and REF show the classification results on six heterogeneous graph datasets. On the large-scale graph datasets (e.g., CS, ML, NN, DBLP, BLOGCATALOG, and FLICKR), we trained the GNN-based methods and the GTN in the mini-batch setting with a graph sampling algorithm {{cite:8485fb86a95f1ca221defec46f6d24e29bdc66cb}}, {{cite:023a1b97129037830031cda7ff277cc163205b22}}. We observe that our proposed methods, GTN and FastGTN, consistently outperform all network embedding methods and graph neural network methods on the six heterogeneous graph datasets. GNN-based methods perform better than random-walk-based network embedding methods. Interestingly, although the HAN is a GAT modified for heterogeneous graphs, the GAT usually performs better than the HAN. This result suggests that relying on pre-defined meta-paths, as the HAN does, may adversely affect performance. In contrast, our GTN and FastGTN achieved the best performance among all baselines on all the datasets.
This demonstrates that the GTN can learn a new graph structure consisting of meta-paths that are useful for learning more effective node representations.
Also, the performance gap between GTNs and FastGTNs on IMDB (60.02% vs. 64.64%)
shows that the graph transformations in FastGTNs based on semantic similarity (i.e., non-local operations) are effective. We additionally provide an ablation study of non-local operations in A.6 in the supplement.
| r | 2208052a65b9dbc88ebb2a3ddff45e16 |
Hard labels were utilized in the baseline to train the target network directly. We also compared the proposed method with label smoothing regularization (LSR) {{cite:478f2d94292850f3a29e88c05357842f20aef90e}} and with self-knowledge distillation regularization approaches, including teacher-free knowledge distillation (Tf-KD{{formula:5356fc54-cf0d-47a1-a727-094a57c3f108}}, Tf-KD{{formula:7638b762-2c93-4b79-8a9c-7599484ba71c}}) {{cite:2c8e06ea637ca303b32b9c7cefa77be4e0876955}}, class-wise self-knowledge distillation (CS-KD) {{cite:261387fb7dd137bd0fd79e13c8220ebbc1acd810}}, and progressive self-knowledge distillation (PS-KD) {{cite:089202c3f9301bc1ff91d03f583417b2d9cb97ec}}. The above methods focus on logit-level regularization (a minimal sketch of LSR, the simplest of them, follows this entry). Data-distortion guided self-knowledge distillation (DDGSD) {{cite:980f0f2ce28ffb194349b2da817223c9c63aef19}} is a data-augmentation-based distillation approach, which we compared against and tested for compatibility with DLB. For a fair comparison, we removed the feature-level supervision in DDGSD {{cite:980f0f2ce28ffb194349b2da817223c9c63aef19}}, i.e., the MMD loss. Since DDGSD is an augmentation-based method, we also explore the performance compatibility between DLB and DDGSD. All extra hyper-parameters involved in the compared methods were kept at their original settings.
| m | c6fdd2374a35b57fd66bf17c5be5c5bb |
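As an illustration of the logit-level regularizers compared above, here is a minimal PyTorch sketch of label smoothing regularization (LSR); `epsilon = 0.1` is a common default rather than necessarily the setting used in the cited works, and this is not the proposed DLB method itself.

```python
import torch.nn.functional as F

def lsr_loss(logits, target, epsilon=0.1):
    """Cross-entropy against a smoothed target distribution that puts
    (1 - epsilon) on the true class and spreads epsilon uniformly over
    all classes."""
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    smooth = -log_probs.sum(dim=-1) / n_classes  # uniform-target cross-entropy
    return ((1.0 - epsilon) * nll + epsilon * smooth).mean()
```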
In this section, we evaluate the performance of the proposed algorithms.
Both the DL and UL channel power gains are modeled as {{formula:aaf6c527-cb74-4deb-ab4d-5f22e6b65524}} {{cite:93b60d68b4c43fb901116b794c3a3dd210b78b9c}}, {{formula:96cde8a0-a794-4163-84cf-7f7d8899392d}}, where {{formula:0e6bdf33-bb99-42c9-8eed-d8134ce9fb81}} represents the short-term fading; that is, {{formula:a0cb9011-a5c8-49fb-9dd3-b35a6c8e31d7}} is an exponentially distributed random variable with unit mean,
and {{formula:168e26db-68ba-40ea-854a-8e4221f65b34}} (in meters) is the distance between user {{formula:a9ac2fde-883b-4286-9d42-9edb4a8c5b43}} and the BS.
Note that according to the above channel model, a 30 dB average signal power attenuation is assumed at a reference distance of 1 m, and the pathloss exponent is set to {{formula:8d6e1168-f080-4a3a-9aa2-a6bf96bb56bf}} .
There are {{formula:b803770a-7407-449e-979e-c091e9425af2}} users in total, ranging from 2 to 10, uniformly distributed in a square area of 10 m {{formula:9523b7c5-ab00-44d7-8f11-7a8cf27be7e9}} 10 m.
For each user, the energy harvesting efficiency is assumed to be {{formula:98da007e-b05f-4ec3-a3a5-5327eb867880}} .
The noise power is {{formula:470b75cf-0675-43ae-b825-84954962a1a8}} dBm and we set {{formula:4c4263ce-26b6-486b-9dbe-6c3467487255}} dB assuming that an uncoded quadrature amplitude modulation (QAM) is employed {{cite:967a5e69f94f7e0acb4fba48438b55155cb76732}}.
The number of epochs is set as {{formula:4e157ae5-6804-4a4c-b701-ed456de6fba2}}.
Unless specified otherwise, the system parameters are set as {{formula:1ba4d2e2-41a3-4564-a662-6a1d682136d3}}, {{formula:87035450-98ac-40fb-bf8c-026e4c5003da}} W, and {{formula:0d232185-3d3c-4e78-91f3-e90290f0557d}} (a sampling sketch of this channel model follows this entry).
| r | ec31c99eb72a8737b31cbc07bb190326 |
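To make the simulation setup reproducible in outline, here is a numpy sketch of the channel model above: exponential short-term fading with unit mean multiplied by a distance-based pathloss with a 30 dB reference loss at 1 m. The pathloss exponent and the BS location appear only as placeholders in the text, so `alpha_pl = 3.0` and the corner-mounted BS are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_power_gain(d_m, alpha_pl, rng=rng):
    """Sample one channel power gain for a user at distance d_m (meters)."""
    fading = rng.exponential(1.0)         # short-term fading, unit mean
    pathloss = 1e-3 * d_m ** (-alpha_pl)  # -30 dB at the 1 m reference
    return fading * pathloss

# K users uniform in a 10 m x 10 m square; BS assumed at the origin corner.
K = 6
xy = rng.uniform(0.0, 10.0, size=(K, 2))
d = np.maximum(np.linalg.norm(xy, axis=1), 1.0)  # clip below the 1 m reference
gains = np.array([channel_power_gain(di, alpha_pl=3.0) for di in d])
```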
Besides point-cloud-based methods, voxel-based approaches compress the event data into voxels, bridging event cameras with conventional CNN techniques {{cite:a8ad6d663bc112127cb216fbccbb082318837965}}, {{cite:36bf61db9ca930502bc646fd2430cf68e9e0dc3c}}, {{cite:76056652a6f36bd14ce035fbe1d5f80094d1dea3}}, {{cite:ad75438afc201c4e3a06c165011d7211ef24e361}}. Voxel-based approaches such as {{cite:a8ad6d663bc112127cb216fbccbb082318837965}} and {{cite:ad75438afc201c4e3a06c165011d7211ef24e361}} achieve state-of-the-art (SOTA) performance on several event-based datasets while being more efficient than point-cloud-based methods, owing to their regular structure, which suits parallel computing on GPUs (a generic voxelization sketch follows this entry).
| i | d1544032f51bf204a0e1fbd73603b416 |
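The cited voxel-based methods differ in detail, but they share a generic voxelization step; the sketch below bins events (x, y, t, p) into a T x H x W grid, splitting each event's polarity bilinearly between its two nearest temporal bins. This is a common construction, not the specific encoding of any one cited work.

```python
import numpy as np

def events_to_voxels(x, y, t, p, H, W, T):
    """x, y: integer pixel coordinate arrays; t: timestamps; p: polarity
    in {-1, +1}. Returns a (T, H, W) float grid of accumulated polarities."""
    grid = np.zeros((T, H, W), dtype=np.float32)
    # Normalize timestamps to [0, T-1] and split each event between its
    # two neighboring temporal bins.
    tn = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (T - 1)
    t0 = np.floor(tn).astype(int)
    w1 = tn - t0  # weight for the upper temporal bin
    np.add.at(grid, (t0, y, x), p * (1.0 - w1))
    np.add.at(grid, (np.minimum(t0 + 1, T - 1), y, x), p * w1)
    return grid
```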
In this paper, we focused on the simplest one-loop amplitude with the lowest Kaluza-Klein level {{formula:9de2a642-1754-46be-a3d7-2a0066b4e857}}. To further explore the loop-level dynamics, we should also study more general amplitudes with higher Kaluza-Klein levels. Computing these amplitudes requires a thorough analysis of the mixing problem at tree level, which is so far missing in the literature.
Relatedly, the tree-level super gluon amplitudes on {{formula:ee19ee17-8ea4-4a62-9871-a18588c2f2c6}} were shown to display a hidden eight-dimensional conformal symmetry {{cite:f1beb672f59ca083ce7b13011b485e20570bf192}}. It would be interesting to explore how this structure can help to organize the loop-level amplitudes.
Recently, a double-copy-like relation was found in {{cite:d17c72b274af56eebef4b2e4a54eb71ae7ce7fa3}} which relates all tree-level four-point amplitudes of {{formula:1a1f5b89-696c-4bfb-88ab-58c1549d7b8c}} IIB supergravity, {{formula:d43ba92d-5235-4aed-a924-d56b31ef9805}} SYM, and bi-adjoint scalars on {{formula:146fea6b-24dd-4924-8094-56b419462fd8}} . It would be very interesting to see if the tree-level relation can be extended to the loop level as well.
For super gluon four-point functions at {{formula:38ee8207-8c9e-474f-ab83-83da3dcc59c9}}, the only loop contribution is the one we computed in this paper, with gluons running in the loop. However, at {{formula:4535ef98-fa45-4fe6-a4ca-feefc1f2ef92}} we can also have one-loop box diagrams where two internal legs are gluons and the other two are gravitons. Since the gluons are restricted to the eight-dimensional subspace while the gravitons can propagate in the full ten-dimensional space, it will be particularly interesting to examine the flat space limit of this amplitude. Presumably, it should match a ten-dimensional flat space one-loop box integral where two of the four internal propagators are confined to a codimension-2 subspace.
It would also be interesting to extend the one-loop analysis to other backgrounds which have AdS factors other than {{formula:49343901-c394-4778-9465-dda512f350e4}} , and to explore the structure of one-loop amplitudes across different spacetime dimensions. Similar supergravity one-loop four-point amplitudes have been computed for eleven dimensional supergravity on {{formula:6c751bac-3ab3-4ab8-aa74-e2afef71b392}} {{cite:21108726f7a52e1a523be2e6944ff9c1e21ea4b9}} and {{formula:4f400873-ca41-4848-9846-a480c29d849c}} {{cite:e09deba0c8ded8342e12242551e2e376a0d5ef9b}}.
We can also apply the AdS unitarity method to compute all-loop contributions that correspond to the iterated s-cuts in flat space, as has been done in the supergravity case {{cite:391c8f039c4909853ceb82db7b5a5ec083ef1392}}, {{cite:348e9e665e9f2feaa61b325dc7ea97ce9c7e6ce8}}. However, to fully determine these higher-loop correlators, knowing just the tree-level four-point functions is no longer sufficient. We would also need information about multi-trace operators, which is encoded in higher-point correlators.
| d | 2f0b794613b5e59d16bac57bfa057de3 |