| text | label | id_ |
|---|---|---|
Besides the combination of generative and contrastive tasks, SSI {{cite:6a616ce673838900777e2393279405f03c35a1cd}} assembles a label-based predictive task and a contrastive task to pre-train models for sequential recommendation. Given a sequence, two consistency constraints, temporal consistency and persona consistency, are imposed through predictive learning. For temporal consistency, the method must predict whether the input sequence is in its original order or has been shuffled. For persona consistency, it must differentiate whether the input sequence comes from a single user or has some items replaced by unrelated items. Meanwhile, an item-masking-based contrast is conducted between the masked item representation and the sequence representation. The combination of generative and predictive tasks is seen in PTUM {{cite:6ebd2b2be91718df5cb8af3fee65b7499cef9cb6}}, which imitates BERT {{cite:f2c504121a17ac5fcb13704bce732cd75c2d2e8e}} to conduct a masked-item-generation task and a next-item-prediction task. Moreover, UPRec {{cite:9d0df08ea15f5dc31248691682af014c16928cf1}} connects BERT with heterogeneous user information, and pre-trains the encoder with social relation and user attribute prediction tasks in parallel.
| m | 43b7642b15f0f3043aeee1cd40b358bc |
Denote {{formula:3b1a1670-2e3c-42c0-ba74-3c0d9829f9f8}} . For
{{formula:dab484ec-4c31-4544-88ff-5a44804a1c68}}
and {{formula:0ca08014-6174-4bbc-b73f-b8c68702c062}} , as in {{cite:f57b7d5f86e6fd54275d291760d28c73dfd1040f}}, define
{{formula:1ef307c5-36e3-42ab-a29e-ceb683c419ba}}
{{formula:bd4fae0b-e62c-4164-a0ed-945eb4e9f7ef}}
| r | 1e8690bb2e1bc73433a5e6ab7c000e81 |
Convection was treated according to the standard mixing-length theory {{cite:d45280323e995ad7d0a82fa527d7427021bffaff}} in this work.
The treatment of convection is one of the sources of uncertainty in the modeling of stars.
Different treatments of convection, such as {{cite:a2aaa56f8b2c5cd57f6f9dba3200c56b7fbb4722}}, {{cite:587039d4b683bf787c75792cdd735fc1a62c94c7}}, {{cite:86953d5d23737d5f999ca93dd566092fe9ee86b7}},
and {{cite:fd2349b8a2851f79af32d1c8594fd3773b100dd1}}, could affect solar models and deserve more detailed study.
| d | 6d797249ab507a6635cd739ed14d4b7c |
Based on the calculations in Sec. , a finite expansion along the geodesics in the FLRWK universe may imply the circumvention of the Hawking and Hawking-Penrose cosmological singularity theorems {{cite:8681f9d4183c9682e60bef8e86f1b552875761cc}}, {{cite:3450c76b18cec84050aecda02e3da0cef6227301}}, {{cite:f4c1aa0597000e3b62aef421aa622dddf299c377}}. However, due to the discontinuity in the expansion of the geodesic congruence at the spacetime defect, it would be more appropriate to adopt the following interpretation: the singularity theorems are still valid in the FLRWK universe, but the "singularity" of these theorems corresponds to a spacetime defect with a local degenerate metric {{cite:0b76bed77c3de379e17ba890dcbd85c37fcfe1c7}}. See the last paragraph in Sec. 3.3 of Ref. {{cite:0b76bed77c3de379e17ba890dcbd85c37fcfe1c7}} for more discussion on the nature of the singularity.
| d | 4e07d16af12f210fc9446845e375f230 |
Neutrino spin oscillation effects, predicted in our work, are expected
to be probed with current and especially future neutrino telescopes.
The flux of neutrinos, if a core-collapsing SN takes place in our Galaxy,
is estimated to be {{formula:ef15367d-1fde-48a6-8f86-5f19d99438b7}} events for the JUNO detector {{cite:682f1987c6270fb9958037110aa0f96c2962a5f5}}
and {{formula:ba91ddfe-e804-4edc-87ad-5ba5ff655c9e}} events for the Hyper-Kamiokande detector {{cite:24a4503b8a4e0fe2f38276026d2a507f6cd8fd1c}}.
The sensitivity of such detectors is likely to be sufficient for the
observation of neutrinos along path 2 in Fig. REF .
| d | d833b329c9f5994d10b7e2d19c3fb07d |
In this section, we report our experimental results for the applications of image deblurring, image inpainting, and image CS recovery. All the experimental images are shown in Fig. 1. Since group sparse representation (GSR) is exploited to demonstrate that WNNM is more accurate than NNM, we call the proposed scheme GSR-WNNM. To verify the effectiveness of WNNM, we have also implemented a variant of GSR that uses NNM, denoted GSR-NNM. To evaluate the quality of the restored images, the PSNR and the recently proposed powerful perceptual quality metric FSIM {{cite:c69d03aef4d1537e05a38dc427277c8ca47c66f0}} are calculated.
| r | 1b0818db48bd36ed2457394b97812450 |
Two-fluid relativistic plasma flow equations describe each fluid (ion and electron) in the plasma using the equations of special relativistic hydrodynamics (RHD). The fluid components are coupled via electromagnetic quantities using Lorentz force terms. Finally, the electric and magnetic fields are evolved using Maxwell's equations, where currents and charges are described using fluid variables. The resulting set of PDEs is a hyperbolic system of balance laws with nonlinear flux and stiff source. Due to the nonlinearity in the flux, the solutions will exhibit discontinuities {{cite:98b492947763b99ba31cf64d3b34dde5d3ead428}}. Hence, we need to consider weak solutions, which can be characterized using the Rankine-Hugoniot condition across the discontinuities. As weak solutions are non-unique, an entropy inequality is imposed to avoid nonphysical solutions. More recently, even the entropy solutions for these systems have been shown to be non-unique {{cite:2c025d54acaf6e743ca1e6de60013a1c4f3c3d69}}; still, entropy stability is one of the few nonlinear stability estimates available for the solutions, and hence it is desirable to have a numerical scheme that replicates this stability at the discrete level.
| i | aa9b0f101582cb074ded3544aafe2936 |
Specifically, we use Codex to aid in implementing a Mask Region-based Convolutional Neural Network (Mask R-CNN) {{cite:f9b8c6fa6bd3aa3829f63782b7e25cadb77d7df7}} combined with a loss function that computes the pairwise cost between each object from the previous frame and each object in the current frame {{cite:f7d99165a5d0efab611b4d02dbe6b7a71e9ca802}}. Our modified algorithm is able to segment blobs in a three-dimensional volume by preserving the spatiotemporal features in the GPI video data. We discuss the qualitative utility of using Codex to help with our code implementation, and present the results of this process with the model's performance on both synthetic training data and real experimental data.
| i | 1dcc343c4001b2235d7f37715606eeb1 |
Introduced in {{cite:a30e54edef02b18ba7842979285f335f9120f606}} and {{cite:0765f40670c16f5e45f312f6fb0f06e7e1055161}}, the synthetic control approach consists in solving
{{formula:3abe0c66-1716-4772-87c3-052344cfba19}}
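For concreteness, the standard synthetic-control weighting problem can be read as a constrained least-squares fit: find convex weights over donor units whose combination best matches the treated unit's pre-treatment outcomes. The sketch below is a generic illustration with made-up data and a general-purpose solver, and may differ in detail from the exact objective of the cited works.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(X0, x1):
    """Find convex donor weights w (w >= 0, sum w = 1) minimizing ||x1 - X0 @ w||^2,
    where columns of X0 are donor units and x1 is the treated unit's pre-treatment data."""
    J = X0.shape[1]
    objective = lambda w: np.sum((x1 - X0 @ w) ** 2)
    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * J
    res = minimize(objective, np.full(J, 1.0 / J), bounds=bounds, constraints=constraints)
    return res.x

# Toy example: the treated unit is an exact convex combination of the first two donors.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(10, 5))                 # 10 pre-treatment periods, 5 donor units
x1 = X0 @ np.array([0.6, 0.4, 0.0, 0.0, 0.0])
print(synthetic_control_weights(X0, x1).round(3))
```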
| m | 2931fdc8bd9335a1cac6e1ec990d61b1 |
As we will see in Section REF , the difficulty in ergodic control essentially arises from the storage and computation with multi-dimensional arrays involved in the algorithm. Tensor methods have recently gained popularity in signal processing, physics, applied mathematics, and machine learning communities due to their efficiency in storing and working with multidimensional arrays. These methods exploit the structure inherent in multidimensional arrays such as symmetry, parallel proportionality, and separability to represent them compactly and robustly. Furthermore, they allow performing algebraic operations efficiently in the compact format. Thus, the storage complexity and the algebraic operations complexity are significantly reduced. For a survey of classical tensor methods, we refer the readers to {{cite:156d9e1df251120eb0d8f14a0a65b4061d5a2097}}. For applications of tensor methods in signal processing and machine learning, we refer to {{cite:fd1d44d030db592bd9f84de2c6672b9c26baf5ed}}, {{cite:8596109e1712001e5dcc2127c6f1ac335a29a493}}. In control, tensor methods have been used in {{cite:d1be6605d8a2cd9b4f232a3306bef1e8e2204047}} and {{cite:09270ac5e5264397265a02298d99766428c4c463}} to solve multidimensional optimal control problems which were previously considered to be intractable.
| m | 21de6776aec6cdfd28d20d9e46a6bfd4 |
Rank metric codes were introduced by Delsarte {{cite:b68df01648131e0b9aa3b308579e0660a48e25d5}} in 1978 and have been used in several contexts, such as crisscross error correction {{cite:809941ad034ea43473e1b133acbf1bda3c3bdffd}}, cryptography {{cite:73ebe77e19cb06aa04047c92e894ab1242242d8a}}, and network coding {{cite:792412e01cc5ac6202da0eff1dfa6ab3a13e398f}}.
Because of their ubiquitous applications, they have attracted increasing attention in recent years; see e.g. {{cite:68867004a2a792873df2d205bd1ce831b7f64cc9}}, {{cite:9e8650e8f510075d2bdbd629b8c540eee04f177e}}, {{cite:e431c904eb6718f27f7fe35ca825dcb7698495b9}}.
| i | 50ccb92a65f174fa3603ad5d73ea4983 |
Intelligent reflecting surface (IRS) is a recent hardware technology, which promises a significant increase in energy efficiency while achieving high spectral efficiency gains. Its main advantage stems from its structure consisting of a large number of nearly passive elements that can shape the propagation environment since they can be digitally controlled by adaptively inducing amplitude changes and phase shifts on the impinging waves. Subsequently, IRSs have attracted a lot of research interest {{cite:a96e8e5b421c84a038437a05a6a9c9b508169a18}}, {{cite:243962ed381b4ddbf33ac9dbe6a1693136a74b66}}, {{cite:24bf3b9554bc5102a09a177091ca5d4e91141f29}}, {{cite:0f1c3e2f7c3616428f35a4e34a54d1c2f6676729}}, {{cite:ac4ed7b24e3d0863de13edb50861a8116e6d9b13}}.
| i | 7faf803622a1bd175b10d03dff76145d |
As mentioned, approximate tensorization is known to hold in a variety of settings,
so the algorithmic result from thm:alg-main has
a number of interesting applications, including product distributions {{cite:f51684edc5713b44227134057f01d6328a8be92d}}, {{cite:1442367b46e1ba3bebeaeef5837b957c80cbd521}}, sparse undirected graphical models {{cite:d9c53f4f820bd6d3b6d39f60fc81f7722ca8e1d1}} (in the so-called tree uniqueness regime), and distributions satisfying a Dobrushin-type uniqueness condition {{cite:282c2d087313de317ca09244d50bef9a8f804163}}.
See subsec:application for more details and formal statements for these applications.
| r | 83dc8cf361a6a6774acb1157e04db7b1 |
PerDoor uses the principle of adversarial attacks to generate backdoor triggers (a secret pattern that causes altered inputs to be misclassified to an incorrect, targeted label) because of their ability to misclassify human-imperceptible perturbed examples to the desired class with high efficiency {{cite:32d7474e602832f8fafe1997dd7ed72f611f4c3f}}. Adversarial attacks have previously been used to generate efficient backdoor triggers for traditional deep learning systems {{cite:c6be103bc604389659abda96f913b2a4d03d50c2}}; however, the efficacy of such a method in a massively distributed FL framework has not been explored previously. The application of adversarial attacks also aids PerDoor in generating non-uniform trigger patterns for different inputs, in contrast to the fixed uniform trigger patterns of existing backdoor attacks in FL {{cite:93cb7bbb7eda869712cd56d969abbd39186a07de}}, {{cite:62657228b32a141abaeb1ed7136cada1558b2bf7}}, {{cite:1238ade4249d284f29c04043f6484aed67f8731d}}, which are prone to being easily mitigated {{cite:46259e7c223708f946605fc475713b3f18c44b09}}. A brief illustration of PerDoor is shown in Figure REF , compared to the traditional backdoor attacks in FL shown in Figure REF .
{{figure:99d58104-7a4b-4b8d-b6aa-f5943bf28c7e}} | i | a2adc94daadac04f8e6608e9cd8af9e4 |
Overall, DR1 and DR2 perform similarly across scenarios except when the sampling score model is correctly specified and the outcome models are not. In that case, DR2 shows noticeably higher efficiency than DR1, especially under a strong selection effect for trial participation. This could be due to the tendency for practical positivity violation {{cite:a3cdf2b9e8069dd7711843642b2e36111074c5a3}}, {{cite:d9604e29980239cd2a4acf151129478432f0023e}} as the selection effect grows stronger. Because the trial is only a small fraction of the target population, the estimated sampling scores could be fairly close to zero, leading to extreme sampling score weights for DR1. For this reason, our simulation results favor DR2 over DR1. Correspondingly, using an estimated treatment propensity score generally has minimal effect on efficiency for both DR estimators, except when the sampling score model is correctly specified and the outcome models are not. In that case, estimating the known propensity score clearly improves the efficiency of both DR estimators under moderate effect modification. It therefore remains appealing to consider an estimated propensity score, as it does not appear to adversely affect the finite-sample efficiency of the DR estimators in the settings we considered.
| r | bada59004da9593373fbcdbc33f44fbe |
Full waveform inversion (FWI) has been receiving wide attention in recent years {{cite:f29cca4da924be0f7ed0c0f853e342ae871cf54d}}, {{cite:3e97f1c436090f378f4c3f4336d360be11396371}}, {{cite:37c8e54d17742158c48c393104b797774988833f}}, {{cite:422341ba48a4e016508ee657b86c9291e3c395d0}}, {{cite:f110f9821ce4c43d75085a61753f2514d37cbf89}}, {{cite:4955ab8e81e4edf3a85803d63fb7bec02dbad762}} due to its high-resolution imaging of geophysical properties. Mathematically, it can be formulated as a PDE-constrained optimization problem consisting of two parts {{cite:69073722beeee201e66dc6a886a03a0338f5025a}}: the forward modeling of the seismic wavefield, and the optimization problem searching for suitable model parameters to minimize the mismatch between the predicted and observed seismic signals. In previous decades, limited by computing power, most tomography methods were based on ray theory, which ignores finite-frequency phenomena such as wave-front healing and scattering {{cite:ec38bdd5c4f266ae54373134d31217e464ebaab0}}, and thus yields low-resolution inversion results. With the rapid development of computing power and forward modeling methods, more accurate synthetic signals can be computed by directly simulating seismic wave propagation. This makes it possible to obtain high-resolution results with FWI, which can provide important information for seismic hazard assessment {{cite:bfb3b9112348d12d8a59b55c3cd54f1f5ebb4fc7}} and exploration geophysics {{cite:69073722beeee201e66dc6a886a03a0338f5025a}}.
| i | aaecaa2bd0a5cea2bcb6300f9d25e671 |
For evaluating the similarity of learned word representations, we use Urdu-translated versions of the SimLex-999 {{cite:770166786c6b1d6cbcea81bc23741a36733a2af6}} and WordSim-353 {{cite:f82c608d4c7049228616ec05a90b1148bf2e26c5}} corpora. SimLex-999 is a gold-standard dataset for evaluating word embeddings. It contains 999 noun, adjective, and verb pairs in concrete {{cite:895adcbddae5852d77c34dd879daf840f7fa3690}} and abstract form. The dataset is designed to evaluate the similarity of words rather than their relatedness, and contains similarity scores for word pairs. The WordSim-353 dataset {{cite:3dd0d4f8d58f91d899300e0e286c05a437dfa846}} contains relatedness scores for 353 word pairs.
| r | 176a58b09050f0d02fe39cb6f830e1d7 |
The second dataset we use is the Liver Tumour Segmentation Benchmark (LiTS) https://competitions.codalab.org/competitions/17094 {{cite:07dfe3d3435418c8567bc61dde9b639c28b4d82f}}, where we focus only on segmenting the lesions of the liver. We chose 29 images at random, using 2 images for training and 27 to evaluate performance. Segmentation of the lesions was performed by providing the network with input inside the lesions for each image, so the network could be provided with geometric information.
| m | 764e280fa7fc95d478306fe14a5f419e |
The Relaxed LSAP is a special case of the Kantorovich problem (also called optimal transport problem) and it can be solved efficiently thanks to the Sinkhorn algorithm {{cite:d41715ca91bf73b132a702015632ea58bb2e7bcf}}. Such an approach is called Sinkhorn Net {{cite:8b56dc8c5c3be842d34bfc34e866d4c0c02dd2e5}} in the literature.
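As an illustration, here is a minimal sketch of Sinkhorn iterations for the entropy-regularized (relaxed) assignment problem with uniform marginals; the cost matrix, regularization strength, and iteration count are arbitrary choices, not taken from the cited works.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Approximate the relaxed LSAP: find a nonnegative plan P minimizing
    <P, cost> - reg * H(P) subject to uniform row and column marginals."""
    n, m = cost.shape
    r, c = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # target marginals
    K = np.exp(-cost / reg)                           # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = r / (K @ v)        # rescale rows to match the row marginals
        v = c / (K.T @ u)      # rescale columns to match the column marginals
    return u[:, None] * K * v[None, :]                # transport plan / soft assignment

cost = np.random.default_rng(0).random((4, 4))
P = sinkhorn(cost)
print(P.round(3), P.sum(axis=0), P.sum(axis=1))       # both marginals are close to 0.25
```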
| m | c36923248a11ef5de8bd83023eb84c06 |
Finally, though we have focused on the moduli space dynamics of framed BPS
particles in {{formula:e8421d66-8c43-44d9-989f-6ef4caa195d8}} {{formula:15754685-366c-4139-b376-19d20d5f0abd}} supersymmetric gauge theories, our analysis can be potentially
applied to study the wall-crossing phenomena of any supersymmetric
theories in presence of higher dimensional external objects.
One potential application is a study of the wall-crossing formulae
of the four-dimensional gauge theories in presence of a surface operator,
which has been conjectured in {{cite:d226b4bb7f7debd84e83b4d31b9f3599084a93b7}} as
a hybrid of the 2D Cecotti-Vafa WCF {{cite:cbdbdc4ae7c5f38c78db9b303c18819300ab3485}} and
the 4D Kontsevich-Soibelman WCF {{cite:793fb86f1de67d1ce32b9158d4d2cc92785e619a}}. Our analysis would also be
useful to study the wall-crossing formulae of
two-dimensional {{formula:2512ace3-afa5-4285-bf92-3c348b9e262e}} massive {{formula:5b4da9d7-f795-4878-958f-b16d8aa3840e}} models in relation to
that of four-dimensional {{formula:ea948ed7-a83d-4440-999e-5096dea502ca}} SQCD {{cite:f6bafbf774807e839035b65be90026cf10f79f7e}}, {{cite:cbaae4d3206eaca5287e99ea2373ea9c79875d57}}, {{cite:5ae124d3e19b1b46bd95c4f0ae599af6534fe3cf}}, {{cite:c97492806851a9d707acf90f8df391a19b0e123b}}.
| d | 9c4537405cbc667124409865811042e1 |
Deep learning has made large strides in recent times, providing breakthroughs in several computer vision and machine learning problems. However, deep learning models remain unable to assess their confidence when performing under unforeseen situations.
As argued by Begoli {{cite:e5aa37177c87ad680dbe270f96cc4162c34b9229}}, uncertainty quantification is a problem of paramount importance when deploying machine learning models in sensitive contexts such as information security {{cite:f0a8ee756894548ccbfceb17345d597db77eb062}}, engineering {{cite:8ff2fc5b90324ea7f6ecc6f846aa5ac6746b81c7}}, transportation {{cite:3aa35c36a37fb0fe49df6bae9f70df6d4df06653}}, or medicine {{cite:e5aa37177c87ad680dbe270f96cc4162c34b9229}}.
{{figure:bada6c07-95ee-4913-b9b8-2adaf4294dfd}} | i | e782f22dacb344b62df13dd6c78faae3 |
higher-order sample statistics require more samples in order for the sample statistic to provide a reasonable estimate of the true statistic.
Moreover, as we learn separate sample statistics for each leaf in each tree, we are still able to model complex distributions over the entire dataset using distributions parameterized only by location and scale parameters.
Finally, one could also store the sample quantile information of each leaf and draw samples according to the stored quantile information. This would remove the need for specifying a particular distribution. While this seems an attractive option, calculating sample quantile information for each leaf is computationally difficult as it requires an expensive sorting operation, and storing a sufficient number of sample quantiles to reap the full benefits of this method requires storing at least 2–3x the number of leaf weights. In short, there is no real need to use higher order moments or leaf sample quantiles to provide accurate probabilistic estimates as we demonstrate in Section REF .
Output sampling. PGBM allows one to sample from different output distributions after training, which allows practitioners to train their model by minimizing some point metric (e.g., RMSE) and after training try different distributions for optimizing the probabilistic forecast based on some validation metric. The key benefit is that this allows PGBM to achieve state-of-the-art point forecasting performance as well as accurate probabilistic forecasting performance using the same model. We will demonstrate this in Section REF . Note that practitioners can of course also directly optimize the probabilistic forecast by using a loss function that optimizes the probabilistic forecast.
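To make the post-training choice of output distribution concrete, the sketch below turns per-sample mean and variance estimates, assumed to come from an already trained model (the names mu and var are illustrative, not PGBM's actual API), into forecast samples under a few candidate distributions, each rescaled so its variance matches the learned variance. A validation metric such as the sample-based CRPS (sketched in the evaluation subsection below) can then be used to pick among them.

```python
import numpy as np

def sample_forecast(mu, var, dist="normal", n_samples=1000, seed=0):
    """Draw forecast samples for each observation from a chosen output distribution,
    centered at mu and rescaled so that its variance equals var."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)[:, None]
    sd = np.sqrt(np.asarray(var, dtype=float))[:, None]
    shape = (mu.shape[0], n_samples)
    if dist == "normal":
        z = rng.standard_normal(shape)
    elif dist == "studentt3":                        # Student's t(3) has variance 3
        z = rng.standard_t(3, shape) / np.sqrt(3.0)
    elif dist == "laplace":                          # Laplace(0, b) has variance 2 b^2
        z = rng.laplace(0.0, 1.0 / np.sqrt(2.0), shape)
    elif dist == "logistic":                         # Logistic(0, s) has variance s^2 pi^2 / 3
        z = rng.logistic(0.0, np.sqrt(3.0) / np.pi, shape)
    else:
        raise ValueError(f"unknown distribution: {dist}")
    return mu + sd * z

# Same learned (mu, var), three candidate output distributions; per-row std is ~ sqrt(var).
mu, var = np.array([0.0, 2.0, -1.0]), np.array([1.0, 0.5, 2.0])
for dist in ("normal", "studentt3", "laplace"):
    print(dist, sample_forecast(mu, var, dist).std(axis=1).round(2))
```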
Split decisions and tree dependence. In PGBM, split decisions in the tree are not recomputed based on the stochasticity of the leaf weights, even though it could be argued that this would be appropriate when sampling from the trees. We intentionally avoid this as it is computationally expensive to recompute split decisions after training when sampling from the learned distribution. Secondly, by sequentially adding the mean and variance of each tree we omit the explicit covariance between tree {{formula:878ec3f7-a534-43a3-b52b-84901ab09a42}} and trees {{formula:781e2a5d-521c-4f73-82fc-88691bd19069}} , and only model the covariance between subsequent trees. We implicitly model both these effects using a single constant correlation hyperparameter {{formula:c429b207-e34f-4e8e-b500-12178e2221a4}} (Eq. (REF )), for which we provide a more detailed analysis in Section REF .
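To illustrate how a single constant correlation between subsequent trees can enter the accumulated variance, here is a toy accumulation of per-tree leaf means and variances for one sample. It is only meant to convey the idea and is not PGBM's exact Eq. (REF); the learning-rate scaling is an assumption.

```python
import numpy as np

def accumulate_prediction(tree_mus, tree_vars, rho, lr=0.1):
    """Combine per-tree leaf means/variances into an overall prediction mean and variance,
    assuming only subsequent trees are correlated, with constant correlation rho."""
    mu_tot, var_tot, prev_sd = 0.0, 0.0, 0.0
    for mu_k, var_k in zip(tree_mus, tree_vars):
        mu_k, var_k = lr * mu_k, (lr ** 2) * var_k       # learning-rate-shrunk tree output
        sd_k = np.sqrt(var_k)
        var_tot += var_k + 2.0 * rho * prev_sd * sd_k    # Var(X+Y) = Var X + Var Y + 2 Cov(X, Y)
        mu_tot += mu_k
        prev_sd = sd_k
    return mu_tot, var_tot

print(accumulate_prediction([1.0, 0.5, 0.2], [0.4, 0.3, 0.2], rho=0.05))
```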
Hessian distribution. The distribution of the Hessian {{formula:c7531438-e5d8-4081-9bf3-41bcc9042f84}} should have a support of {{formula:8d904aea-028f-4db9-9b15-f41d2afb90c3}} to avoid division by zero in Eq. (REF )–(REF ), or equivalently, we require the sum of the hessians (plus regularization constant {{formula:47f01179-fb69-4b2f-b82b-79b6388cc659}} ) of all samples in an instance set {{formula:1855f062-03e7-454e-9cc3-9e8566adf76b}} of a leaf to be positive. For common convex loss functions such as the mean-squared error this is not an issue, however for non-convex loss functions this might pose a problem in rare cases where all hessians in an instance set add up to zero. In those cases, numerical issues can usually be avoided by requiring a decent (e.g., {{formula:71a20c73-962b-400a-b9b7-4382e264311f}} ) minimum number of samples in each leaf in a tree – this can be set as a hyperparameter in PGBM.
Experiments
In this section, we first demonstrate how PGBM can provide accurate point and probabilistic predictions on a set of common regression benchmark datasets from the UCI Machine Learning Repository (Section REF ). We show how PGBM allows practitioners to optimize their probabilistic estimate after training, thereby removing the need to retrain a model under different posterior distribution assumptions. Next, we demonstrate the efficiency of our implementation of PGBM compared to existing gradient boosting methods. Finally, we demonstrate PGBM on the problem of forecasting for hierarchical time series, which requires optimizing a complex loss function for which deriving an analytical gradient is too complex (Section REF ).
To facilitate reproducibility of the results in this paper, our experiments are run on open data, and our experimentation code is available online.Repository at https://github.com/elephaint/pgbm
UCI regression benchmarks
Task. We perform probabilistic regression on a set of regression datasets. Our goal is to obtain the lowest probabilistic prediction score as well as the lowest point performance score.
Evaluation. We evaluate the probabilistic performance of each method using the Continuously Ranked Probability Score (CRPS), which is a measure of discrepancy between the empirical cumulative distribution function of an observation {{formula:aa8305da-c780-4403-bb24-657b72d19c63}} and the cumulative distribution {{formula:1ffd9e74-b4ec-4644-80ff-ec36cafe3d8a}} of a forecast {{formula:e52dcd09-1cfd-4cd5-9444-70e105c0bbd7}} {{cite:8d00d49ff16eaead96c0804ece49e7ae639f4583}}:
{{formula:eac56112-0698-4483-95c3-61ad41775ff1}}
in which {{formula:aadbcd2b-6b4e-494b-a6c1-1890cbf2d579}} denotes the indicator function. We compute the empirical CRPS based on 1,000 sample predictions generated by the trained models on the test set. We evaluate point performance using Root Mean Squared Error (RMSE):
{{formula:936dc811-080d-41c2-8afd-25afd960dea5}}
We present these metrics relative to the median of PGBM over all the folds tested for a dataset, and we refer the reader to Table REF of Supplemental Materials for further details.
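A minimal sketch of how the two evaluation metrics could be computed from sample forecasts; the sample-based CRPS estimator below is one standard form (using the sorted-sample identity for E|X - X'|), and the array names are illustrative.

```python
import numpy as np

def empirical_crps(samples, y):
    """Sample-based CRPS = E|X - y| - 0.5 * E|X - X'|, averaged over observations.
    samples: [n_obs, n_samples] forecast draws, y: [n_obs] observations."""
    n_obs, n_s = samples.shape
    k = np.arange(1, n_s + 1)
    scores = np.empty(n_obs)
    for i in range(n_obs):
        x = np.sort(samples[i])
        term1 = np.abs(x - y[i]).mean()
        term2 = ((2 * k - n_s - 1) * x).sum() / n_s**2   # equals 0.5 * E|X - X'|
        scores[i] = term1 - term2
    return scores.mean()

def rmse(y_hat, y):
    return np.sqrt(np.mean((np.asarray(y_hat) - np.asarray(y)) ** 2))

# Example: 1,000 draws per test observation, as in the evaluation protocol above.
rng = np.random.default_rng(0)
y_true = rng.normal(size=500)
draws = y_true[:, None] + 0.5 * rng.standard_normal((500, 1000))
print(empirical_crps(draws, y_true), rmse(draws.mean(axis=1), y_true))
```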
Protocol. We follow the same protocol as {{cite:58534dff6c6b7dc16ece722d01ae1ff6568c227d}}, and create 20 random folds for each dataset except for msd for which we only create one. For each of these folds, we keep 10% of the samples as test set. The remaining 90% is first split into an 80/20 validation/training set to find the number of training iterations that results in the lowest validation score. After validation, the full 90% training set is trained using the number of iterations found in the validation step. As output distribution for the probabilistic prediction we use a Normal distribution, similar to {{cite:58534dff6c6b7dc16ece722d01ae1ff6568c227d}}.
Baseline models. For probabilistic performance, we compare against NGBoost {{cite:58534dff6c6b7dc16ece722d01ae1ff6568c227d}}, which has recently been shown to outperform other comparable methods on the current set of benchmarks. We use the same settings for NGBoost as in {{cite:58534dff6c6b7dc16ece722d01ae1ff6568c227d}}. For point performance, we also compare to LightGBM {{cite:9f069a564b151439b30ffddaa64c7ef84807f61c}}, one of the most popular and best-performing gradient boosting packages available. We configure LightGBM to have the same settings as PGBM.
PGBM. For all datasets, we use the same hyperparameters for PGBM, except that we use a bagging fraction of 0.1 for MSD in correspondence with {{cite:58534dff6c6b7dc16ece722d01ae1ff6568c227d}}. Our training objective in PGBM is to minimize the mean-squared error (MSE). We refer the reader to Table REF of Supplemental Materials for an overview of key hyperparameters of each method.
Results. We provide the results of our first experiment in Figure REF and observe the following:
On probabilistic performance, PGBM outperforms NGBoost on average by approximately 15% as demonstrated by the relatively lower CRPS across all but one dataset (msd, where the difference is tiny). This is remarkable, as the training objective in PGBM is to minimize the MSE, rather than optimize the probabilistic forecast as does NGBoost.
PGBM outperforms NGBoost on all datasets on point performance, and on average by almost 20%, which is in line with expectation as we explicitly set out to optimize the MSE as training objective in PGBM in this experiment. However, as becomes clear from this result, PGBM does not have to sacrifice point performance in order to provide state-of-the-art probabilistic estimates nonetheless. Compared to LightGBM, PGBM performs slightly worse on average (approx. 3%) on point performance. We suspect this is due to implementation specifics.
The main takeaway from this experiment is that even though we only optimized for a point metric (MSE) in PGBM, we were still able to achieve similar probabilistic performance compared to a method that explicitly optimizes for probabilistic performance.
{{figure:ab838cb8-c2f9-4f98-82ea-b87f095a06af}}
Analysis: correlation hyperparameter. We perform a brief analysis of the correlation hyperparameter {{formula:87a22bdc-d06b-4609-932c-b6d306a3134d}} (Eq. (REF )). This hyperparameter controls the dependence between variance estimates of subsequent trees in PGBM, and is critical for probabilistic performance. Figure REF shows the CRPS evaluated on the validation set at different settings for {{formula:0fd9da0e-6559-400e-8d55-d274cd3ba856}} for each dataset for a single fold. We normalized the CRPS scores by the lowest CRPS for each dataset. Across all datasets, the CRPS seems to follow a parabolic shape, and consequently there seems to be an optimal choice for {{formula:5c67a7e4-2a5b-4adf-8bd9-70fad4630ca6}} across different datasets: a value of {{formula:1c63d7f3-537b-471e-b4d0-ef654da445f2}} –{{formula:b4dc8a55-8ffc-406b-aa7d-7beff37ecb19}} typically seems appropriate. Empirically, we found that an initial value of {{formula:8c0da308-c7f3-493a-849c-628230293e38}} , where {{formula:a1495d4b-5428-454c-9dde-13598fa0f392}} denotes the size of the training set, generally works well, and we therefore used it as the default value in our experiments. Intuitively, a positive correlation between subsequent trees seems logical: if the leaf weight of a given tree shifts more positively (negatively) as a result of stochasticity, the residual on which the next tree will be constructed shifts in the same direction. Consequently, the leaf weights of this next tree will also shift in the same direction, thus exhibiting a positive correlation with the previous tree's leaf weights.
Furthermore, larger datasets tend to cluster together in behavior, as can be seen from the curves for the protein, msd, kin8nm, power and naval datasets. It seems that for larger datasets, typically larger settings for {{formula:1b4c0f5c-d358-40e1-aec4-a399ac812e01}} are appropriate. Our hypothesis is that for trees containing many samples per leaf, the correlation between subsequent trees is higher, as more samples in the tree's leaves will generally imply that the model has not yet (over)fit to the training set and there is likely more information left in the residuals compared to the situation where there are few samples per leaf. This would explain the behavior observed in Figure REF as we train each model with a maximum of 8 leaves per tree, resulting in more samples per leaf for larger datasets. We test this hypothesis by training the relatively larger protein and msd datasets using different settings for the maximum number of leaves, for which we show the results in Figure REF . Confirming our hypothesis, we indeed observe the optimal correlation parameter decreasing when we train PGBM using a higher number of maximum leaves per tree (i.e., the minimum of the parabola shifts to the left).
{{figure:96b6c4ee-2481-454e-b175-f641e3e3e15c}}
Analysis: posterior distribution. One of the key benefits of PGBM is that it allows us to optimize the probabilistic estimate after training, as the choice of distribution {{formula:aa29f216-8d3d-48ac-8d5b-434eddda7661}} in Eq. (REF ) is independent of the training process. This offers the benefit of training to optimize a certain point metric such as RMSE, and then choosing the distribution that best fits the learned mean and variance by validating a set of distribution choices on a validation set. To demonstrate this benefit, we repeated the experiments from our first experiment for a single fold. For each dataset, we evaluated the learned model on CRPS on the validation set for a set of common distributions and a range of tree correlations {{formula:955ef21f-1628-4fd0-9eff-04d630174cf1}} . The optimal choice of distribution and tree correlation on the validation set was subsequently used for calculating the CRPS on the test set. We report the results in Table REF , where `Base case' refers to the base case scenario from our first experiment, where we chose a Normal distribution and a tree correlation hyperparameter of {{formula:5d0218e1-c030-4273-85ce-bea098225a14}} across all datasets, and `Optimal' refers to the result on the test set when choosing the distribution and tree correlation according to the lowest CRPS on the validation set. We see that for most datasets, the minimum CRPS on the validation set is similar across choices of distribution, which implies that it is more beneficial to optimize the tree correlation rather than the choice of output distribution for these datasets. On the test set, we see improved scores compared to the base case on all but the smallest dataset, thereby showcasing the benefit of optimizing the probabilistic forecast after training. We would advise practitioners to start with a generic distribution such as the Normal or Student's t(3), and optimize the probabilistic estimate after training by testing different tree correlations and distribution choices.
{{table:eff5cf18-ec56-4423-937b-26c276fe84fa}}
Analysis: training time. Our implementation in PyTorch allows us to use GPU training by default, which allows us to significantly speed up training for larger datasets. We demonstrate this benefit in Table REF , where we compare training times for datasets of different sizes against a baseline of PGBM (we refer to Table REF in the Supplemental Materials for the absolute timings). For this experiment, we also included the higgs dataset, which is a 10M sample UCI dataset commonly used to benchmark gradient boosting packages. For PGBM, we show results for training on GPU-only and CPU-only. NGBoost does not offer GPU training and runs on top of the default scikit-learn decision tree regressor. We ran our experiments on a 6-core machine with a nVidia RTX 2080Ti GPU. As can be seen, PGBM is up to several orders of magnitude faster than NGBoost as the dataset size increases. This demonstrates that PGBM and our implementation allow practitioners to solve probabilistic regression problems for datasets much larger than those NGBoost can handle.
In general, for smaller datasets, training on CPU offers the best timings for the two methods. We include the relative timings of LightGBM for reference in Table REF , which shows that PGBM even becomes competitive with LightGBM for the largest dataset (higgs). However, the timings for LightGBM represent the time to train a single LightGBM model. If one is interested in obtaining a probabilistic forecast, the timings would need to be multiplied by the number of quantiles required. Hence, for a fine-grained probability distribution, the timings would be 5–10x higher for LightGBM, again demonstrating the effectiveness of our implementation for probabilistic forecasting.
{{table:a72f26d3-d10d-4d00-91ce-6ec0b80cbb79}}
Hierarchical time series
Task. So far, our experimental results were obtained using the mean-squared error as objective function for which an analytical gradient and hessian can be easily derived. In this experiment, we apply PGBM to the problem of hierarchical time series forecasting, which is a problem where our loss function is rather complex, so that it becomes very tedious to manually calculate an analytical gradient and hessian for it:
\mathcal{L} = \sum_{j}^{N} w_j \, (y_j - \hat{y}_j)^2,
where {{formula:d92a1bf4-75f5-40a2-9e37-be616fe25b43}} is the weight of the {{formula:24d9fe19-8db6-4298-8e2a-1d9dbbfc5d6c}} -th time series, and {{formula:a002ee41-2a71-4305-8d4c-231743ff1901}} is the number of time series. In hierarchical time series, we aggregate time series across multiple levels. For example, in the case of two time series and two levels, {{formula:75bde24d-713c-4188-b9da-9dd4be7eb0f1}} and our loss for each series reads {{formula:ae38275f-1efd-4aba-8a73-5ed13eb63966}} , {{formula:9147dc4c-0167-4437-97ce-f947ee0870c5}} , {{formula:8cd01689-7792-40b1-bc39-c1573bf26319}} with {{formula:f906b80e-80ef-4d9a-978c-8faab3ed5dc9}} weights of each series, for example {{formula:84cbccfd-7dc9-4108-9f4f-19191af885d9}} . Hence, the gradient and hessian of {{formula:5ea49502-73ae-42a3-9527-0efac411c168}} with respect to the first estimate {{formula:9bfbf6fc-3481-402f-ac84-32a19ebdda24}} becomes {{formula:a76a67bb-4a9c-40f8-aea5-b293937ec945}} and {{formula:c56e84be-a141-4e11-8ed7-55abae7262ea}} . It becomes clear that deriving this result analytically becomes increasingly complex when the number of levels and the number of time series increases, which necessitates the use of autodifferentiation packages such as PyTorch if we are interested in optimizing this objective.
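To illustrate why autodifferentiation is convenient here, the sketch below obtains the per-prediction gradient and diagonal hessian of a weighted, aggregated squared error with PyTorch autograd; the aggregation matrix, weights, and function names are illustrative and do not reflect PGBM's actual interface.

```python
import torch

def weighted_hierarchical_mse(y_hat, y, S, w):
    """Rows of S map the N base series to all aggregation levels (0/1 indicators);
    w holds one weight per aggregated series."""
    return (w * (S @ y - S @ y_hat) ** 2).sum()

def gradient_and_hessian(y_hat, y, S, w):
    y_hat = y_hat.detach().requires_grad_(True)
    loss = weighted_hierarchical_mse(y_hat, y, S, w)
    grad = torch.autograd.grad(loss, y_hat, create_graph=True)[0]
    # Diagonal of the hessian: differentiate each gradient entry w.r.t. its own prediction.
    hess = torch.stack([torch.autograd.grad(grad[i], y_hat, retain_graph=True)[0][i]
                        for i in range(y_hat.shape[0])])
    return grad.detach(), hess.detach()

# Toy example: two items plus their total as a second aggregation level.
y_hat = torch.tensor([1.0, 2.0])
y = torch.tensor([1.5, 1.0])
S = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # item 1, item 2, total
w = torch.tensor([0.3, 0.3, 0.4])
print(gradient_and_hessian(y_hat, y, S, w))
```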
Dataset. We use a subset of the dataset from the M5 forecasting competition {{cite:3f93c787f68043346008a909a9377765fe0225d2}}, in which participants were asked to provide hierarchical forecasts for Walmart retail store products. We use a single store and create forecasts for a single day. For each store, we are interested not only in accurately forecasting individual item sales, but also in optimizing the aggregate sales per day, aggregate sales per day per category and aggregate sales per day per department. Hence, we obtain four levels for our weighted loss function:
(a) individual items, (b) category aggregates per day, (c) department aggregates per day, and (d) total daily aggregates.
For more details on the data and preprocessing we refer to Supplemental Materials .
Protocol. We compare against a baseline of LightGBM, NGBoost and PGBM trained with the regular mean-squared error objective. All models are trained using the same hyperparameters. We validate on the last 28 days of the training dataset and pick the number of iterations resulting in the lowest item RMSE on this validation set. After validating, we retrain on the entire dataset for the number of optimal iterations and test on a held out test set of 28 days (with the first day starting the day after the last day in the validation set). We use the Normal distribution with a tree correlation of {{formula:5bd1077e-7c7e-4b00-9573-42d179fabe33}} to generate our probabilistic forecasts for PGBM.
Results. We evaluate our model on RMSE and CRPS for each aggregation and the results are displayed in Table REF . We observe that using the weighted MSE that incorporates our four levels of aggregation results in a similar point forecast score for individual items, but in a much better forecast for the aggregations – differences up to 10% compared to the second-best point performance of PGBM are observed. Secondly, we see that the gain using the weighted MSE increases at hierarchically higher aggregation levels such as `total by date'. This is important, as this implies that we are able to generate item-level forecasts that are more consistent with higher-level aggregates. Finally, we observe that item-level CRPS is worse in the weighted MSE setting compared to the regular MSE setting, whereas our probabilistic estimate for higher aggregations improves up to 300% when using the weighted MSE. This can be expected: in the MSE setting, each individual item forecast `does not consider' aggregates in the category or department, whereas in the weighted MSE setting, item forecasts are optimized to also consider the impact on the overall aggregations. All in all, this experiment demonstrates the benefit of our implementation: we can optimize over more complex loss functions, thereby enabling probabilistic forecasts of more complex problems such as hierarchical time series problems.
{{table:c215fb96-1bf0-4428-b7c8-c457d9cfe5c4}}
Related work
Traditional forecasting methods such as ARIMA {{cite:a0616b2a0f14c677851799f2a28164428b1bf1f8}} allow for probabilistic forecasts through specification of confidence intervals {{cite:6481397b2913c45ee175331602802e51a7724dfa}}. However, creating a confidence interval in these methods often requires assuming normality of the distribution of the target variable or its residuals. Generalized Additive Models for Shape, Scale and Location (GAMLSS) {{cite:3e1fad555e10381a0747abb06717338a5adcd000}} is a framework that allows for a more flexible choice of distribution of the target variable in probabilistic regression problems. A disadvantage is that the model needs to be pre-specified, limiting flexibility of this method. Prophet {{cite:9f9ac7f42cf23a3727456f8cec8312ba70fc7557}} is a more recent example of generalized additive models applied to the probabilistic forecasting setting. However, Prophet has been shown to underperform other contemporary probabilistic forecasting methods {{cite:9982687baa1c506a9a3ae2bafa4c420f573f2290}}, {{cite:e6de4a3b63d7e6419ebd9685b217bdd4a367c62d}} and to have difficulties scaling to very large datasets {{cite:e6de4a3b63d7e6419ebd9685b217bdd4a367c62d}}. Other Bayesian methods exhibiting similar issues include Bayesian Additive Regression Trees (BART) {{cite:80468527b35e23d83b8f4080914e8efd31b243a5}}, which requires computationally expensive sampling techniques to obtain probabilistic predictions.
GBM {{cite:5692a777a1384d58cf05f5271b5369c0d9e4ae72}} are widely used for regression problems such as forecasting {{cite:4da79854ae0507dd55bbce5704fb0ad81f27144d}}. Popular GBM implementations such as LightGBM {{cite:9f069a564b151439b30ffddaa64c7ef84807f61c}} or xgboost {{cite:4da79854ae0507dd55bbce5704fb0ad81f27144d}} have won various machine learning competitions {{cite:4da79854ae0507dd55bbce5704fb0ad81f27144d}}. The winning solution of the accuracy track of the recent M5 forecasting competition was based on a set of LightGBM models, and 4 out of the top-5 solutions used LightGBM models in their solutions {{cite:3f93c787f68043346008a909a9377765fe0225d2}}. However, GBM are not naturally equipped to provide probabilistic forecasts as these models return point predictions, requiring multiple models when a practitioner desires a range of predictions. For example, the uncertainty track of the M5 forecasting competition required contestants to provide a set of quantiles for a hierarchical time series problem. The winning solution was based on 126 (!) separate LightGBM models, one for each requested quantile and time series aggregation level {{cite:e17ef53694b51e13a87e5e0351fcdccba92b6c8e}}. To address these limitations, NGBoost {{cite:58534dff6c6b7dc16ece722d01ae1ff6568c227d}} allows for probabilistic regression with a single GBM by using the natural gradient to boost multiple parameters of a pre-specified distribution. Compared to NGBoost, our method PGBM does not require a natural gradient; it can achieve better or on-par predictive uncertainty estimates, without sacrificing performance on the point forecast of the same model as does NGBoost. {{cite:e31a0120970711f42f1c952433b0589447f74e4a}} also propose boosted additive models for probabilistic forecasting. Our work is different in that we use GBMs with stochastic leaf weights to estimate the conditional mean and variance simultaneously for each estimator. {{cite:e1b41c28391d966fe8839b0ef689c19293c03ee7}} present a method for incrementally constructing decision trees using stochastic gradient information. Our method is different in that (i) we focus on the general case of probabilistic regression instead of incremental online learning of trees and (ii) we obtain our stochastic estimates by approximating the ratio of the gradient and hessian.
Outside of the GBM context, decision trees have also been used for probabilistic regression problems in Quantile Regression Forests (QRF) {{cite:ba3eedad0b59ed9139f386de4ca8f6a832586640}}. However, this method requires storing all observations when computing leaf weights of a decision tree, which makes this method less suitable for large datasets. In addition, GBMs commonly outperform random forests on regression tasks, making the former a better choice when performance is a key consideration.
Contemporary large-scale probabilistic forecasting techniques often leverage the power of neural networks, such as DeepAR {{cite:e279cd0ecf3a642971e61ad0dc88e297cdda4e9b}} or Transformer-based models {{cite:9d4cc3f61b0901eeffddf6c436d7f0e87bf4e248}}, {{cite:5555cb339d2bdfa1baefad70ec802052ecb2d0f6}}. However, in practice GBMs still seem the technique of choice—only one out of the top-5 solutions in the M5 uncertainty forecasting competition used a neural network method as its primary probabilistic forecasting tool.
In summary, we contribute the following on top of the related work discussed above: (i) PGBM is a single-parameter boosting method that achieves state-of-the-art point and probabilistic estimates using a single model, (ii) PGBM allows choosing an output distribution after training, which means the probabilistic forecast can be optimized after training, (iii) our implementation allows training of larger datasets up to several orders of magnitude faster than the existing state-of-the-art, and (iv) our implementation allows using complex differentiable loss functions, which removes the need to calculate an analytical gradient, thereby opening up a wider set of problems that can be effectively addressed.
Conclusion
In this work we introduced Probabilistic Gradient Boosting Machines (PGBM), a method for probabilistic regression using gradient boosting machines. PGBM creates probabilistic estimates by using stochastic tree leaf weights based on sample statistics. By combining the learned weights for each subsequent tree, PGBM learns a mean and variance for samples in a dataset which can be used to sample from an arbitrary distribution of choice. We demonstrated that PGBM provides state-of-the-art probabilistic regression results across a range of small to large datasets. Benefits of PGBM compared to existing work are that
(a) PGBM is a single-parameter boosting method that optimizes a point regression but achieves state-of-the-art probabilistic estimates using the same model,
(b) PGBM enables the choice of an output distribution after training, which means practitioners can optimize the choice of distribution without requiring retraining of the model,
(c) our implementation allows training of larger datasets up to several orders of magnitude faster than the existing state-of-the-art, and
(d) our implementation in PyTorch allows using complex differentiable loss functions, which removes the need to calculate an analytical gradient as is common in existing gradient boosting packages.
We demonstrated the latter benefit for the problem of hierarchical time series forecasting, where we observed up to 10% improvement in point performance and up to 300% improvement in probabilistic forecasting performance.
A limitation of our work is that PGBM only learns the mean and variance in a tree, which limits the choice of output distribution. However, we did not observe any negative performance effects of this in our experiments.
In the future, we intend to further work on the theoretical error bounds of PGBM. Under mild assumptions, sample statistics in each leaf of each tree appropriately represent the true statistics of the samples in each leaf provided a sufficient number of samples. However, we have yet to determine appropriate theoretical error bounds on the final estimated statistics when considering the simplifications we make during decision tree learning, such as the greedy approximate split finding, using a limited number of tree leaves, our approximation to the stochastic leaf weights, keeping decision points constant and treating the correlation between subsequent trees as a constant across samples and trees. Regarding the latter, we also expect that the probabilistic estimate can be further improved by using a better approximation to the tree correlations instead of our choice of keeping it fixed across trees and samples.
Derivation of stochastic leaf weights
Expectation
We approximate the mean in each leaf by using a second-order Taylor approximation of the expectation of a function of the two random variables ({{formula:6e5b3f07-7eb4-4145-ad74-618c4052f606}} , {{formula:7d6e35e6-0645-4c7f-b512-ff107a071cd3}} ) around the point {{formula:3bdc396e-4606-4f05-b48c-4ff6c8d26a2b}} :
\begin{aligned}
E[f(g, h)] = E\Big[ & f(a) + f'_g(a)(g - \mu_g) + f'_h(a)(h - \mu_h) \\
& + \tfrac{1}{2} f''_{gg}(a)(g - \mu_g)^2 + f''_{gh}(a)(g - \mu_g)(h - \mu_h) \\
& + \tfrac{1}{2} f''_{hh}(a)(h - \mu_h)^2 + H \Big]
\end{aligned}
with {{formula:ec6c7509-9725-47a4-b75c-dbbf39c3b180}} denoting the higher-order terms that we drop for our estimate. Using the laws of expectations we then obtain:
\begin{aligned}
E[f(g, h)] \approx{} & E[f(a)] + E[f'_g(a)(g - \mu_g)] + E[f'_h(a)(h - \mu_h)] \\
& + E[\tfrac{1}{2} f''_{gg}(a)(g - \mu_g)^2] + E[f''_{gh}(a)(g - \mu_g)(h - \mu_h)] + E[\tfrac{1}{2} f''_{hh}(a)(h - \mu_h)^2] \\
={} & E[f(a)] + f'_g(a)\underbrace{E[(g - \mu_g)]}_{0} + f'_h(a)\underbrace{E[(h - \mu_h)]}_{0} \\
& + \tfrac{1}{2} f''_{gg}(a)\underbrace{E[(g - \mu_g)^2]}_{\sigma^2_g} + f''_{gh}(a)\underbrace{E[(g - \mu_g)(h - \mu_h)]}_{\sigma^2_{gh}} + \tfrac{1}{2} f''_{hh}(a)\underbrace{E[(h - \mu_h)^2]}_{\sigma^2_h} \\
={} & E[f(a)] + \tfrac{1}{2} f''_{gg}(a)\,\sigma^2_g + f''_{gh}(a)\,\sigma^2_{gh} + \tfrac{1}{2} f''_{hh}(a)\,\sigma^2_h,
\end{aligned}
with {{formula:29ea7bb4-e4d0-4641-bc38-123f4de2f3df}} denoting the covariance of the gradient and the hessian.
For a function {{formula:fad4cfd6-8603-46d6-910b-4c0712f6c93b}} , we have:
f'_g = h^{-1}, \qquad
f'_h = -g\, h^{-2}, \qquad
f''_{gg} = 0, \qquad
f''_{gh} = -h^{-2}, \qquad
f''_{hh} = 2 g\, h^{-3}.
Substituting and using {{formula:b474edd0-432a-44ce-a304-8f8491ba6238}} , {{formula:4b4c22e3-5623-4a58-9812-be3d99214605}} :
\begin{aligned}
E[f(g, h)] &\approx E[f(a)] - h^{-2}(a)\,\sigma^2_{gh} + g h^{-3}(a)\,\sigma^2_h \\
&= \frac{\mu_g}{\mu_h} - \frac{\sigma^2_{gh}}{\mu_h^2} + \frac{\mu_g\,\sigma^2_h}{\mu_h^3}.
\end{aligned}
Finally, we can include the regularization constant {{formula:3f97cab7-8d77-4217-b76c-7a52c37116ed}} to arrive at the final estimate of the expectation for the leaf weight {{formula:082a39df-e3e5-491d-be7d-b8cf740de7bc}} . This constant only affects the mean of the random variable {{formula:953f3225-755f-4a75-a9cb-8aa6b593417f}} , therefore we can safely add it to the terms containing {{formula:ef6baf45-f87f-4f86-858e-caf2d5b22d6e}} :
E\!\left[\frac{g}{h + \lambda}\right] \approx \frac{\mu_g}{\mu_h + \lambda} - \frac{\sigma^2_{gh}}{(\mu_h + \lambda)^2} + \frac{\mu_g\,\sigma^2_h}{(\mu_h + \lambda)^3}.
Note that we can obtain the first-order Taylor approximation of the mean by dropping the last two terms of Eq. (REF ):
E\!\left[\frac{g}{h + \lambda}\right] \approx \frac{\mu_g}{\mu_h + \lambda}.
Variance
For the variance, we start with the definition of variance for a function {{formula:f5b3da56-0378-4a42-abc2-fdf52ed0e421}} :
V[f(g, h)] = E\big[(f(g, h) - E[f(g, h)])^2\big].
We perform a first-order Taylor expansion of {{formula:04657ad1-6aed-41cb-94d3-4b7d71dae3a3}} around the point {{formula:35287676-61ee-462e-9b69-0f85d74ff05d}} and we substitute the first-order approximation of the mean:
\begin{aligned}
V[f(g, h)] &\approx E\Big[\big(f(a) + f'_g(a)(g - \mu_g) + f'_h(a)(h - \mu_h) - E[f(a)]\big)^2\Big] \\
&= E\Big[\big(f'_g(a)(g - \mu_g) + f'_h(a)(h - \mu_h)\big)^2\Big] \\
&= E\Big[f'^2_g(a)(g - \mu_g)^2 + f'^2_h(a)(h - \mu_h)^2 + 2 f'_g(a)(g - \mu_g)\, f'_h(a)(h - \mu_h)\Big] \\
&= f'^2_g(a)\, E[(g - \mu_g)^2] + f'^2_h(a)\, E[(h - \mu_h)^2] + 2 f'_g(a)\, f'_h(a)\, E[(g - \mu_g)(h - \mu_h)] \\
&= \mu_h^{-2}\,\sigma^2_g + \mu_g^2 \mu_h^{-4}\,\sigma^2_h - 2\,\mu_g \mu_h^{-3}\,\sigma^2_{gh} \\
&= \frac{\sigma^2_g}{\mu_h^2} + \frac{\mu_g^2\,\sigma^2_h}{\mu_h^4} - \frac{2\,\mu_g\,\sigma^2_{gh}}{\mu_h^3}.
\end{aligned}
Finally, including the regularization constant {{formula:e4f95539-7677-4b7a-b5c1-ecb8bc57659f}} we obtain:
{{formula:4c771b5f-ded5-4dcb-875c-0f6bf9bba141}}
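The approximations derived in this appendix can be sanity-checked numerically. The sketch below draws correlated (gradient, hessian) pairs from an arbitrary joint normal distribution (parameters chosen for illustration only) and compares the Monte Carlo mean and variance of g/(h + λ) with the second-order mean and first-order variance expressions above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 1_000_000, 1.0
mu_g, mu_h = 0.5, 4.0
sig_g, sig_h, rho = 0.3, 0.5, 0.4
sig_gh = rho * sig_g * sig_h                      # covariance of gradient and hessian

cov = np.array([[sig_g**2, sig_gh],
                [sig_gh, sig_h**2]])
g, h = rng.multivariate_normal([mu_g, mu_h], cov, size=n).T
f = g / (h + lam)                                 # leaf-weight ratio

mh = mu_h + lam
mean_approx = mu_g / mh - sig_gh / mh**2 + mu_g * sig_h**2 / mh**3
var_approx = sig_g**2 / mh**2 + mu_g**2 * sig_h**2 / mh**4 - 2 * mu_g * sig_gh / mh**3

print(f.mean(), mean_approx)   # Monte Carlo vs. second-order Taylor mean
print(f.var(), var_approx)     # Monte Carlo vs. first-order Taylor variance
```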
Reproducibility
We report absolute scores and dataset statistics for the UCI benchmark in Table REF . An overview of the key hyperparameters for each method for both experiments is given in Table REF , and the absolute timings corresponding to Table REF are given in Table REF . An overview of the M5 dataset is given in Table REF . We refer to our code at https://github.com/elephaint/pgbm for further details, such as the features of the M5 dataset, which mainly comprise lagged target variables, time indicators (e.g., day-of-week), event indicators (e.g., holidays), and item indicators.
{{table:b0221cc0-ae1d-4ccb-8073-c87db2816c93}}{{table:0445e860-de3b-4cd3-9d99-72818339109d}}{{table:6868b055-71ee-49c0-aa56-7c2b1474638f}} | d | 601840d326d48aa8d95da5a5f99d84c7 |
In this work, we address both generalization and personalization of federated unsupervised representation learning by utilizing local style information. In {{cite:f41ecc3009c74e01f66df38c8aa964e14f9de975}}, the authors assume that an image is generated by a content factor ({{formula:07a7745d-0bed-46c8-80bb-4c1b408f9f23}} ) and a style factor ({{formula:5f52d1cb-9f85-4ef3-9278-a8528a9e8d6b}} ). The authors further prove that, under a self-supervised learning framework, the feature extractor learns content-identifiable representations that are robust to the augmentations applied (upper part of Figure REF ). Based on this argument, our intuition is that a locally extracted image feature is a product of a generalized content feature ({{formula:c84d27a4-3815-4801-bffb-a0b688e987da}} ) and a personalized style feature ({{formula:c839eaf6-cca1-411e-ba2f-7ef3033a651e}} ). During self-supervised learning, the content feature is infused with locally extracted style features so that the learned representation is also robust to this latent distortion (lower part of Figure REF ). Since every client has its own style and preference, infusing extracted local style features into global content features and facilitating personalized features in a decentralized setting makes it feasible to achieve good results in both generalization and personalization. We propose Federated with Style (FedStyle), which trains a local style feature extractor and combines a content feature extracted by a local content model with the style feature to generate a style-infused representation for contrastive learning. This incorporation of style and content features empowers the local content models to be robust to local distortion in addition to the other image-level distortions applied for contrastive learning {{cite:a6c073d7f32ef354419adf41f651829313fe2aff}}. With local models robust to local distortions, a generalized global model robust to various distortions can be aggregated. When deploying on downstream tasks, apart from the generalized content feature, a stylized content feature is also used to promote personalization.
| i | f8cc6758e660a420e633a876c1cf2882 |
It is worth noting that, although there have been the aforementioned recent attempts to design instance-level explanation methods for GNN models (e.g., {{cite:4d2bcae263780e7d0e855bd8877931103a744370}}, {{cite:04a4fa19a59efb1336c44a75d4ddb37f2af28cf7}}, {{cite:12917fbb1dfbf9eee4436e22ccc5cf44ae22e4e9}}, {{cite:755419e3a1506ad83b6231251293541fa805610d}}), model-level explanations for GNNs have been largely underexplored in the literature. XGNN {{cite:cfbf84746ab986cac4ad56fec46abb34cb6aa96e}} was developed only for model-level GNN explanations, where a subgraph pattern that maximizes the target class probability is generated via a reinforcement learning framework.
| m | 768358bfd5509e56a780ec03ebd654c2 |
{{formula:79c83de2-ee1e-44f2-b3ec-3f49d101d12a}} is the unique metric with the least Hawking-Horowitz mass {{cite:9bb940a7eceaf2cb56fa76703356521e32d29b24}}.
| i | d266db83de3e50d4ea51d147ffbbc2d3 |
Using a population of FDs of magnitude (CR(%) {{formula:660b89e7-645a-4e41-bb0a-0e8b199bfc0b}} 4) that occurred between 1966 and 1972, {{cite:072a00bb5a1260f75acfcccb545d186be3b6a706}} observed that the interplanetary disturbances responsible for Forbush decreases are characterized by a region of high solar wind velocity and enhanced interplanetary magnetic field intensity.
Similarly, {{cite:ccc6303e00b3397211987f7e85187ce57dc3cb23}} reported an association between FDs and variations in IMF intensity using Deep River neutron monitor data from 1967 to 1968. They noted that from 1967 to 1968, all of the cosmic ray flux variation was associated with the passage of a region where the magnetic field intensity is high. About two decades later, {{cite:76d751ac925b1ff7a6413d381485bfc6b4fad148}} found that both the sudden increase in IMF and the SWS are preceded by the onset of the Forbush decrease at the Deep River neutron monitor (NM) station. The results of the present work, in which the FD-IMF and FD-SWS relations yield correlations significant at the 95% confidence level, are in close agreement with the results in the literature referred to above. From a statistical analysis of 695 events selected with GSM, {{cite:c7cb9bfe77b688fd08b6a4d3380911e7d9857374}} found that 49% of the FD variation is due to the effect of solar disturbance characteristics.
The result obtained here for the FD-solar disturbance characteristic relation is in fair agreement with their finding.
| d | 45a01af1dc99e58539e5a2c5c15b1a7d |
These properties are satisfied in the case of a closed manifold, as considered in this work. Indeed, the doubling property follows from {{cite:83351c414c833820b9bd077d5e57d54d73ed5a6e}}, and the asymptotic expansion holds by construction of the heat kernel via the parametrix method, cf. {{cite:f7554329a960511904257ff6b9e67ef583ccb37d}}. Finally, (REF ) and (REF ) follow from the Li-Yau inequality for weighted manifolds {{cite:5d965301500d92ad7c33aba838645555f25bc9d6}}.
| r | 4b8393963b6c93746d32e9777be0ae01 |
We compare our method with seven state-of-the-art methods {{cite:dce8436a5a6fbdcf3e7c90406711eba1d456c059}}, {{cite:64a20fe0497a6381096e844f5f4c4ae8121a77d8}}, {{cite:c1153624e7a314dc1aa3fd7e5644bb81213656a1}}, {{cite:c793552475045e060ee818451e64fb4b0aad1719}}, {{cite:2bbc21a645172bc4b6254086fc123c711484d0c3}}, {{cite:a57280b1242750ac483f605de626435187ef772a}}, {{cite:31d5951010f3eed31a60af024df8b79bcfbb498f}}. The full Recall@K and MedR results are reported in Table REF , REF , REF , and REF .
Our model achieves state-of-the-art or competitive performance on all datasets. It shows that our TVTS is capable of learning the association between video patterns and language semantics.
| r | 867623c723b6cf237037fec6e913b545 |
Inverse reinforcement learning (IRL) {{cite:7ef4d74d62bda3a0116fbd032d26af717b6deb65}}, {{cite:f848a3257e2628cee05cc23b343121a70ccae091}}, {{cite:6c12d9b95f80f838449422311e16f143a3b9dda1}} focuses on inferring the latent costs of expert demonstrations. Early works assume that the cost is linear in a set of state features and minimize the feature expectation difference between learned policy and demonstrations. {{cite:6c12d9b95f80f838449422311e16f143a3b9dda1}} use dynamic programming to find a maximum entropy (MaxEnt) policy which maximizes the likelihood of the demonstrated actions. Later works {{cite:5abfcb952eb4c2dd9f73650f68129a03a3a46a18}}, {{cite:33aba5f74106b86238b1762a3b90597f5e634467}} introduce Gaussian process or deep neural networks to learn nonlinear cost functions. {{cite:355c4a6bbfcbb50ea39448ce70b895f2a49a6038}} use sampling to estimate the partition function in the MaxEnt formulation. {{cite:c207c2aad9d161228a326ed45ae2c25a00672aa1}} use adversarial learning to find disentangled costs that are robust to environment changes. Most IRL models, however, consider general cost formulations that do not explicitly capture the sequencing and compositional requirements of the demonstrated task {{cite:fb3890442a96e5101d60aa69a20cdb29e789876d}}, {{cite:5d4ec7d9cd7ed70ff2b46b07f9588c6a924c8659}}. Compared to a general cost formulation, this paper shows the logical structure of a complex task can be inferred from demonstrations. Exploiting the underlying task logic in planning ensures that the learned agent behavior mode matches the demonstrations.
| i | 635f918f2ff5579b4abb6ec47263da83 |
This requirement is generally satisfied unless the probability of {{formula:969713c2-bbc6-4cb9-83fc-33c8be027522}} is 1.
The proof of Theorem REF is provided in Appendix .
Theorem REF states that the gradient of either the MSE or CE loss with respect to any positive activation {{formula:b6cb4673-ba58-462d-bd09-ca69e04d2d2c}} is positive in expectation.
Hence, any training algorithm based on negative gradient directions tends to reduce the magnitude of such positive activations, which will lead to a smaller training loss.
Here, the expectation is taken with respect to the randomness in the last layer parameter {{formula:c62711e6-6c52-48b0-8d85-8290a324b983}} .
Hence, our result can be considered as an analysis of DNNs at initialization, where weights are often chosen randomly from a fixed distribution.
In particular, the required properties for the distribution of {{formula:8563f5cb-6f17-45cc-acfd-5b0a0b6ea769}} in Theorem REF for both MSE and CE losses are satisfied by commonly used initialization methods, such as the one in {{cite:c632c935de844a5c66a24f4e63f8904923acc0f1}}.
On the other hand, Theorem REF does not apply to subsequent training iterations since the label {{formula:adadd9e7-edc4-41d0-b2a2-107f9111818e}} is no longer independent of {{formula:2762bda8-6f89-4d0e-bb66-adce53ead4e0}} .
However, it can be seen empirically from Figure REF that the trend of a decreasing percentage of nonzero entries persists for a certain number of iterations during the beginning of training until such a percentage reaches a low level and stays relatively stable until the end of training.
| d | ee398f3a9c930077ac890392e37c5a59 |
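A minimal numerical illustration of this statement for the MSE case is sketched below; the dimensions, the zero-mean Gaussian last-layer initialization, and the label distribution are illustrative assumptions, not the setup analyzed above.

```python
import numpy as np

# Sketch: for a random zero-mean last layer W and a label y drawn independently of the
# penultimate activation h, the Monte Carlo average of the MSE gradient with respect to
# a strictly positive activation vector h is positive in every coordinate.
rng = np.random.default_rng(0)
d, m, trials = 32, 10, 20000                 # activation dim, output dim, samples
h = rng.uniform(0.5, 1.5, size=d)            # a fixed, strictly positive activation

grad_sum = np.zeros(d)
for _ in range(trials):
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))   # random last-layer weights
    y = rng.normal(size=m)                                # label independent of h
    grad_sum += W.T @ (W @ h - y)            # gradient of 0.5 * ||W h - y||^2 w.r.t. h

print((grad_sum / trials > 0).all())         # True: positive in expectation
```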
In addition to the baseline method, we use
CutOut {{cite:c935bbcd53cc207f43da9c265c8b6fcc72a8d68f}}, RE {{cite:1ab18a0746dd9d007969b1ea2ed204088d1174e8}}, and CutMix {{cite:7fb9da6fc195624d418036db53d9b9a31bdfd3fc}}.
Table REF gives the evaluation results.
The proposed method outperforms the conventional methods on all metrics.
The depth estimation performance of the proposed method tended to be higher when {{formula:ea1b0d8d-d2c2-4417-a8bc-da08607d18fd}} was 0.5 or 0.75.
The performances of the conventional methods decreased as {{formula:67b7d061-f300-480c-97ad-b0143c718853}} increased.
The performance of the proposed method did not degrade with increasing {{formula:d42df052-7f30-4a6d-b3c9-570f996c560c}} owing to the small change in the lower-order feature levels.
| r | 5afe0304a4e1ece111c3947a6bfcb83c |
Fix one, vary two. In common empirical studies, the number of degrees of freedom is usually equal to the number of variables (e.g., network depth and width for model scaling {{cite:fbdd24895d5ac41dce3e96758ca1d045d28c6622}}, {{cite:4b237a6352ae2b0528ad47fc25da8cc5c233ed5d}}). In contrast, our main subjects of study – image size {{formula:0c88300b-c0e2-4a9b-a62c-38265369a9c7}} , patch size {{formula:b9333b10-c26b-4c37-84c3-14925b38fc56}} and sequence length {{formula:bd12ed02-1035-4795-87b0-209f10f34cac}} – have a deterministic relationship: {{formula:d2f1ed18-2158-4183-bc99-d2a0900359ae}} .
As a result, it is impossible to employ the typical strategy of varying one variable while fixing the rest.
Instead, we choose a reverse strategy that fixes a specific dimension ({{formula:f3798766-9997-47a8-8f49-ef1baeb25a83}} ) and lets the other two change jointly ({{formula:29ba36bf-bab8-498d-a8d5-e928ba163a2f}} and {{formula:4c55db02-11c8-4673-8d1f-f22d169ab991}} ). Going through all the combinations (shown in tab:ablationstudy), we arrive at a solid conclusion that sequence length is the key to input scaling of MAEs.
| m | 6eff150c1de267332f9edb66c7df63fc |
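The enumeration below sketches this "fix one, vary two" strategy, assuming square inputs so that the sequence length equals (H / P)^2 for image size H and patch size P (an assumed form of the elided relation); the concrete sizes are illustrative, not the studied configuration.

```python
# Sketch of the "fix one, vary two" enumeration under the assumed relation S = (H/P)**2.
def seq_len(image_size: int, patch_size: int) -> int:
    assert image_size % patch_size == 0
    return (image_size // patch_size) ** 2

# Fix the patch size P: image size H and sequence length S vary jointly.
for H in (112, 224, 448):
    print(f"P=16  H={H:<4} S={seq_len(H, 16)}")

# Fix the image size H: patch size P and sequence length S vary jointly.
for P in (8, 16, 32):
    print(f"H=224 P={P:<4} S={seq_len(224, P)}")

# Fix the sequence length S: H and P scale together so that S stays constant.
for H, P in ((112, 8), (224, 16), (448, 32)):
    print(f"S=196 H={H:<4} P={P}")
```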
RLE limitations.
The proposed approach does not currently support the quantification of higher-order interactions between features for the relational local explanations. A more complex graph-based representation could be utilized for this task in future work.
Furthermore, patches of image data should have adequate local information, and the adequacy depends on the resolution of the images. In our experiments on the ImageNet data set, where an image is usually cropped into the {{formula:36881843-61f4-4262-ae47-edb80a82d192}} format, we observe that we can operate with up to 36-49 patches (depending on the content of an image).
Lastly, the current implementation of the RLE framework cannot be utilized on the tabular modality, since tabular data has no spatial relationships {{cite:f1d78abb698b966719ce226c8a7412dd98bb866e}}.
| d | c8871c002b7fe16257b2e088e339d962 |
Then, we have confirmed by the Hamiltonian analysis that the number of DoF can change after an invertible transformation in the singular branch. Moreover, even in cases when the number of DoF is preserved, completely different dynamics can emerge in the singular branch. As a further application, we have discussed novel singular but invertible disformal transformations {{cite:5b15e3d8de133c1ab05a9965942089a4959ad7e2}} of the metric. These transformations result in the emergence of a new degree of freedom of the same type as in mimetic gravity {{cite:7efb820f929dc182d7bfe8a93ba87646ec11cbe8}}.
| d | b09ce8b43070009b32c11cf897ce2aa4 |
One of the important standing goals in quantum gravity is to understand the microstructure of black holes. As a microscopic theory of quantum gravity, different attempts were made to employ string theory to study the microscopic properties of black holes (for reviews see for example {{cite:fc66e97c15236d4fe67b03373af7937e38e9f980}}, {{cite:c301ffee7f7c75c2aeeaa89115431307574edb3c}}).
One such path is the “correspondence principle" {{cite:2f19b4a9821a74b2cbcd6541a950770752a2c83d}}, {{cite:7cd2f0c403a5b8ea55a92dacffb08319f4d8bcfc}}. The basic observation is that when one adiabatically shrinks a black hole horizon to the string scale, its thermodynamic properties are qualitatively the same as a generic string state with the same energy {{cite:a63c268cc25659c582d206b9a3ce682a62c44b29}}, {{cite:9aeb77e69562644a4b0afe97ea4086e6b439e634}}.
A canonical version of the correspondence principle was offered in {{cite:ddd146e63c8f31ba549270b86cf8d18531161693}} in terms of the thermal string theory partition function on (asymptotically) {{formula:e04ac7bb-d77b-4bd2-94e2-75d86d057911}} (see also {{cite:331c9b743507121d2666cf028846d514007d958a}}, {{cite:9320c62ab40562be06f4ec76080c7df5b7671e1d}}, {{cite:ae42cb57f195e2ee5570daa36a8877e8d5512c97}}, {{cite:c83b4a9f0c36887fce9ca4c47931f82325033292}}, {{cite:6560cf3156ff57f714779c0cf80f2f992ac2480c}}, {{cite:4959690942b32cf1638c076bcf6438107bf1f7aa}}, {{cite:7c41a6c7138fe2f9b825ab3e11978d380c42e04f}}). For near-Hagedorn temperatures {{formula:232f5e7e-1737-4f6d-a4ac-ab12c275434c}} , the first string winding mode {{formula:8eee46ba-7f21-4dfe-a47e-1ba5c3e2e20d}} around the thermal circle is parametrically lighter than the string scale. Upon compactifying the thermal circle, the authors found a ({{formula:107aa6c6-04b2-4a06-92e8-8c2271e63ac7}} dimensional) bound state solution of the winding scalar together with gravity. This solution seems to describe a self-gravitating bound state of hot strings. We will call this solution a “string star”.
The Euclidean Schwarzschild black hole is another saddle that contributes to the thermal partition function. The solution is perturbative only for large horizons, when the temperature is low enough, {{formula:0b4b1053-56f2-4134-9a77-bc1b2c441842}} . As a result, at generic intermediate temperatures {{formula:90ca9c61-e65b-48d8-87c6-eb64d4de271c}} both the black hole and the string star descriptions fail. A naive extrapolation from the perturbative regimes to {{formula:ae4f7886-b382-4fe3-b1ee-7c0aec797477}} surprisingly shows a qualitative agreement on the thermodynamic properties between the two saddles. The conclusion might be that thermodynamically, the string star and the black hole are two continuously connected phases. Recently {{cite:7c41a6c7138fe2f9b825ab3e11978d380c42e04f}} gave an extended and modern outlook on the string star solution, including an analysis from the worldsheet perspective. Below we will follow their conventions.
| i | d52e03119d11026cf0995d55423a5fab |
In fairness, it has to be mentioned that
a similar scheme was applied to the one-dimensional {{formula:a71bee88-3fd6-4521-b4cc-3537f44b9a7b}} magnet
under the transverse field {{cite:6c3f1c3db6736826ae3a1c0d507d44493a2b8ed8}}.
In this pioneering study {{cite:6c3f1c3db6736826ae3a1c0d507d44493a2b8ed8}},
the authors took a direct route toward the multicritical point,
{{formula:fe0687bd-0c58-416e-8420-8a3aafbed3c5}} .
The {{formula:c7cb9bba-37f5-40bf-a685-7f030fc18a0e}} data
exhibit intermittent peaks, reflecting the successive
level crossings along the ordinate axis {{formula:775bfc67-002a-4ee7-9788-f0f50691b832}} , as shown in Fig. REF .
In this paper,
to avoid such a finite-size artifact, we took a different route to the multicritical point,
keeping the anisotropy {{formula:1c8899e5-5ee7-44a1-806d-e662a957010b}} to a finite value.
That is,
based
on the crossover-scaling theory {{cite:d94b0bedaeb7de81ae8503cb04f057b553a680ce}}, {{cite:f28fbd88ca88888ae88500056fb1c69197d56de1}},
the anisotropy {{formula:33faf13f-3acd-4d5c-870d-91014c841175}} is scaled properly, as the system size {{formula:3ec5b074-ab8a-433d-9f69-b2db61cdb8f8}} changes.
As a byproduct, we are able to estimate the crossover exponent {{formula:9dde8201-7ad9-48d6-99f3-bd36af26cb28}} quantitatively,
which characterizes the power-law singularity of the phase boundary;
see Fig. REF .
Before commencing detailed crossover-scaling analyses of {{formula:edcf7522-47dd-4ded-b2d7-3ebf3ef4bde6}} ,
we devote ourselves to the fully anisotropic case {{formula:c9a28f0a-b7ee-47b7-aaac-206090d126ec}}
so as to examine the performance of our
simulation scheme.
| r | 61b3efd5a1c49c42f833a4b7d489ac73 |
Following {{cite:fc1cfe02e08123b9839bb11a8b20b57cd978d253}}, we use P@K (Precision) and MRR@K (Mean Reciprocal Rank) to evaluate the recommendation results, where K is 5 or 10. Precision measures the ratio of hit items, while MRR measures the ranking of hit items in the recommendation list.
| m | f52ff9fcf37732c794d14bda343f0cbc |
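For concreteness, a minimal per-user sketch of these two metrics is given below (scores are averaged over users in practice); the item identifiers are illustrative.

```python
from typing import List, Set

def precision_at_k(recommended: List[int], relevant: Set[int], k: int) -> float:
    """Fraction of the top-k recommended items that are hits (relevant items)."""
    return sum(item in relevant for item in recommended[:k]) / k

def mrr_at_k(recommended: List[int], relevant: Set[int], k: int) -> float:
    """Reciprocal rank of the first hit within the top-k list (0 if no hit)."""
    for rank, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

# Example: the ground-truth item 42 appears at rank 3 of the recommendation list.
recs, truth = [7, 13, 42, 5, 99], {42}
print(precision_at_k(recs, truth, 5))  # 0.2
print(mrr_at_k(recs, truth, 5))        # 0.333...
```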
The proposed shift in perspective from precise to imprecise probability would also be beneficial beyond the classroom. One statistical factor contributing to the replication crisis currently plaguing science is a misunderstanding of statistical reasoning, in particular, the meaning of “statistical significance” {{cite:5bd6c7f5858127c19e7ad33eb24904e1dac1c865}}, {{cite:35a3dc28b12ec8e17f22f9876a7c20e129b1e69a}}. Pushes to ban p-values {{cite:414f1300daf343d0360f9fdb8a47d21ff9049ee6}} or to abandon statistical significance {{cite:8860278c4cf0b23e007831b7f8c492e90b74328c}} are aimed at preventing “{{formula:0833a7d8-cb7d-449a-9b46-e6616586e628}} ” from being misinterpreted as direct evidence supporting a scientific discovery claim. But nothing in the original literature supports such an interpretation. In the 1925 edition of {{cite:8cd5b4750e966da8b7635271fd9a38cbed39303b}}, and even further back to Edgeworth in 1885, comparing p-values to a pre-determined threshold was intended “simply as a tool to indicate when a result warrants further scrutiny” {{cite:35a3dc28b12ec8e17f22f9876a7c20e129b1e69a}}. Fisher definitely did not intend for statistical significance to imply scientific discovery, but apparently he did see p-values as a measure of how plausible the null hypothesis is based on the data, posited model, etc. Unfortunately, p-values look like probabilities and, without a more appropriate mathematical framework to explain them in, the community defaulted to the familiar probability theory and interpreted small p-values as an indication that the alternative is probable, hence a discovery. What's changed from Fisher's time is that now there exists an imprecise probability framework wherein the mathematics can be developed in a way that aligns with the p-value's intended interpretation. For example, the sub-additivity property of plausibility measures leads to the following:
| i | aaa87bc953d6f6a17fbba19e42b19391 |
With the continuous explosive growth of data traffic in wireless communications, sixth generation (6G) communication networks are expected to meet a number of pressing requirements in the near future {{cite:5b986e43463442d90fd1c1a9481a6828674a2a19}}. To support bandwidth-hungry applications (e.g., on-chip communication, virtual reality, kiosk downloading service, nanoscale localization and nanoscale biomedical communication), the terahertz (THz) frequency band (0.1-10 THz) has been regarded as a prospective alternative to provide large spectrum bandwidth and support ultra-high-rate data transmission for 6G communication networks {{cite:520ef1af8fa41a34a23421710bae29dfc07241f1}}. However, there are still some imperative challenges in THz communications. On the one hand, due to the high path attenuation and strong molecular absorption effect experienced by THz waves, the transmission distance of THz communications is limited to a small area, and thus is not applicable to practical communication scenarios {{cite:50d9d138e6e28f82e5d6a978caa7dc6ec23b7fce}}, {{cite:dcc7fc3ce8455219905d9f9ccbfd2a42e8657824}}. On the other hand, THz waves at such a high frequency band undergo extremely poor diffraction and are easily blocked by obstacles. To tackle this issue, the concept of the nanoscale reconfigurable intelligent surface (NRIS) has been newly proposed to mitigate blockage vulnerability and improve coverage capability {{cite:324177b17166a321705df2d25d65a731ba18d2c4}}, {{cite:24375aa5dd8d629852c85ed4c35188c1c7de6387}}, {{cite:d3d229d8356d5579eacc5fa92bb259b486ca36ed}}. To be specific, the NRIS, which consists of a large number of passive reflecting elements, is a kind of physical meta-surface and is a special case of the conventional RIS except for its extremely small physical size. Each reflecting element is capable of adjusting its phase shift by using a smart central processor {{cite:10b151744a1fb9ce9c85e5d3a80615b3380c5ce3}}. In addition, the NRIS is passive and lacks active radio frequency (RF) chains, and thus is more energy-efficient compared with existing active devices, such as amplify-and-forward relaying {{cite:cc29e7aa1cf5f177c3bce44a9ec01822e83d6b84}} and massive multiple-input multiple-output (MIMO). Therefore, the combination of NRIS and THz communications is worthy of further exploration.
| i | 08e1152a80066562f39e9bfae1decc5d |
Coupled cluster theory {{cite:76d3601210ff2a6c067b49d3fa184e2a153b14a7}}, {{cite:8741a82f3db0abd9edd79a063e93dafab914f07d}}, {{cite:15058470c741e476bbf05a3d628f36a0573d7499}}, {{cite:cc177e75b78e8bcdcd4f62568372de2c97c2c735}} is a well
established method for solving the electronic
Schrödinger wave equation for small to medium sized
atoms and molecules.
CC theory employs an exponential wave operator {{formula:d66db261-031f-4459-8df5-dbedcd467cdf}} ,
that brings in the effects of the excited Slater
determinants on to the reference ground state
wavefunction, which is usually taken to be the
Hartree-Fock determinant. The wave operator {{formula:92eb6eec-4537-4664-a84e-109cc4d24a01}}
is chosen as
{{formula:e97be354-a832-4341-bc3c-dc4ff86156cd}} , where {{formula:39cfdb1a-260e-493b-be22-275ed72bd1db}} is the sum of all possible
many-body hole to particle excitation operators. In the
most common cases, {{formula:c14c7ee6-52ba-439f-ab3a-f4b72be9b35b}} consists of one and two-body
hole-particle excitation operators. The resultant theory,
known as CC with singles and doubles excitations
(CCSD), predicts quite accurate energies for molecules with
predominantly single-reference character. The amplitudes
corresponding to the excitation operators, which are the
unknown quantities, are determined by projecting the
similarity transformed effective Hamiltonian
{{formula:60ca5757-a165-4377-90b7-fbb55b70544a}} against
the excited state determinants. The correlated ground
state energy is calculated as the expectation value of
the effective Hamiltonian with respect to the chosen
reference function, {{formula:d7bb3622-f3ca-4054-b039-39e14124d502}} . Due to the exponential nature of the
wave operator, the similarity transformed Hamiltonian,
{{formula:48ca6f60-c884-4da9-98d8-8d0998718ecd}} is highly non-linear in {{formula:95760b56-ab7c-4580-ab89-6f02b6e18e69}} , and hence one
employs iterative scheme to solve these equations. For
CCSD, the most expensive computational step scales as
{{formula:629c4051-6b62-4a20-bd50-d4c42fcbd044}} per iteration, where {{formula:73754d48-a81a-42e0-b048-ae03aed14f45}} is the number of
the occupied orbitals, and {{formula:e4f61754-968e-4152-ae24-80207293d498}} is the number of the
virtual orbitals in the chosen reference determinant.
This often makes the theory prohibitively time consuming
for large systems. The theory, since its inception, has seen
many developments to increase its accuracy with a reduced
computational scaling. Due to the iterative nature of the
solutions, there have been significant efforts to
accelerate the convergence {{cite:cb65fa07e168c664e4cc4208488dea2ba16ef935}}, {{cite:aba813b2a47fca7ba84f2ea6ec792249b1b65a7d}}, {{cite:451911fa4fa9fefcb6dfeecf7d930d6be977ccf5}}, {{cite:3795fc0670b61088f1837ca734faf64211d91d7d}} or scale down the
steep computational scaling associated with the solution
scheme {{cite:3d05933cc9e15288103236535b28177ab46483b1}}, {{cite:b7df8a4e40726697373afb427a244737f27c7414}}, {{cite:991cfd66b88180795dc00d4c3a10a01dc85fc368}}, {{cite:91d5c135225f9758e1a6fb48d2258958c6a5cb76}}. One may thus look for an iterative scheme where
the expensive {{formula:ac3e8db5-9926-402f-b5f2-9c870af2a312}} scaling may be bypassed, at
least partially, which is likely to save a lot of
computation time. This may be achieved by exploiting the
nonlinearity of the iterative solution scheme.
In line with this, some of the present authors imbibed
ideas from nonlinear dynamics and synergetics{{cite:ffb0374ad898d73c98dc4bb116e42f259050b762}}, {{cite:e1b1543a8dead922c703de1a5e913f043a4f49f8}}, {{cite:e5c14f9783341934a7965b85c8c0cc5c676b214e}} to
demonstrate that not all the cluster amplitudes are
equally important in the nonlinear iteration scheme.
Based on the magnitude of the amplitudes, the authors
classified the amplitudes into unstable master
amplitudes (later to be referred to as the principal
amplitudes) and stable slave amplitudes (later
to be referred to as the auxiliary amplitudes). Borrowing
the key concepts from nonlinear discrete time-series
{{cite:703780a5c3a76e4c0581ea9e5428c65f834f8c83}}, {{cite:bfbf8302ec6bd98315d63c43777142875990c07b}}, coupled with
principles imported from the areas of Synergetics, a
Machine Learning (ML){{cite:252d9a00ed7919839dbe05d64451c93a72fa1126}} based
hybrid numerical scheme was developed to establish a
relationship between the two classes of amplitudes. This
effectively reduces the independent degrees of freedom
to accelerate the overall iteration process.
In the resulting scheme, only a few initial iterations scale as
{{formula:68f702ec-93df-4781-8e09-23680754bd79}} , while most of the other iterations scale
significantly less. This saves a lot of computation time,
as was demonstrated by the authors. Complementary to that,
one may employ an adiabatic approximation based on the
difference in the time scale of relaxation of various
cluster amplitudes during the iteration, where one may
formally reduce the scaling of the iterative scheme by at
least one order of magnitude, without undue sacrifice of
the accuracy.
| i | 0fa046fb8a8a3ea20c72e8ac308ac9b1 |
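A minimal sketch of the magnitude-based partitioning described above is given below; the threshold, the mock amplitude distribution, and the simple mask-based split are illustrative assumptions rather than the authors' actual classification or ML scheme.

```python
import numpy as np

# Sketch: split cluster amplitudes into a small set of large-magnitude "principal"
# amplitudes (updated by the full, expensive iteration) and "auxiliary" amplitudes
# whose values would be inferred from the principal ones.
def partition_amplitudes(t_amplitudes: np.ndarray, threshold: float = 1e-2):
    principal_mask = np.abs(t_amplitudes) >= threshold
    return principal_mask, ~principal_mask

t2 = np.random.default_rng(1).normal(scale=5e-3, size=10000)  # mock doubles amplitudes
t2[:50] = 0.1                                                 # a few large amplitudes
principal, auxiliary = partition_amplitudes(t2)
print(principal.sum(), auxiliary.sum())  # only a small fraction is treated explicitly
```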
In this study, we examine the correlations between accurately-derived Hi column densities and dust-based proxies for {{formula:5fc3501b-a7ba-491a-84ac-124dea8111cc}} . We make use of opacity-corrected Hi column densities derived from two surveys: the Arecibo Millennium Survey {{cite:09456625703bf5dd5ee344b19a52ddbeb57300dd}}, and 21-SPONGE ({{cite:b9f51392e0814ca9c5505f25957eca45d5800b62}}), both of which used on-/off-source measurements towards extragalactic radio continuum sources to derive accurate physical properties for the atomic ISM. We also make use of archival OH data from the Millennium Survey, recently published for the first time in a companion paper, {{cite:a4f472786b7c85450a410eb51ec6b08b8c76975a}}. OH is an effective tracer of diffuse molecular regions ({{cite:fd856603bc118f008af949cd3f355f06cf46468d}}, {{cite:0d771c3e2f22edccd0763bd656757b137efcbb3f}}, {{cite:fc17fdd7e921379b712e545bdcb954ec2058585e}}, {{cite:c5770eddaa3c41698c70c9a77dbefbd95c0918cb}}, {{cite:1190de3f44ae63e830ce39aa3aede309c24027ee}}, {{cite:cf6515937789f7e4e2e96ba16bcea71efd3e0454}}, {{cite:a4f472786b7c85450a410eb51ec6b08b8c76975a}}), and has recently been surveyed at high sensitivity in parts of the Galactic Plane {{cite:724fc967491dd46f95cc7aa51a9a2cda4db9668d}}, {{cite:c0b27b2dcae21489536100755cd098e60a2f4978}}. There exists both theoretical and observational evidence for the close coexistence of interstellar OH and H{{formula:39f77d05-f501-4932-bc8d-1d53bb307988}} . Observationally, they appear to reside in the same environments, as evidenced by tight relations between their column densities {{cite:a38266bd3efa722fd8fd5157d69b00a7ce6dfb35}}. Theoretically, the synthesis of OH is driven by the ions O{{formula:5b924de1-ea3b-4af3-babd-caf98fc5e04f}} and H{{formula:7f88b54b-2438-4638-897a-1c8a2839ba63}} but requires H{{formula:cd624df3-129d-4679-a274-79a9b73568f0}} as the precursor; once H{{formula:9218a332-b03f-4692-aacc-890213881bee}} becomes available, OH can be formed efficiently through the charge-exchange chemical reactions initiated by cosmic ray ionization {{cite:82d39131027760c36308198b7c33b982c2ca018b}}. Here we combine Hi, OH and dust datasets to obtain new measurements of the abundance ratio, {{formula:b0a9d36f-0b5b-48a9-b9ed-9b95d924a487}} ={{formula:294020a8-8479-4325-8baa-87c7c366ca45}} /{{formula:583b8f69-8f33-4fc5-8a78-d854786d4993}} – a key quantity for the interpretation of OH datasets.
| i | 0f9e4fe0378f79a4605d2367840fdd13 |
Galaxy populations are bimodally distributed in color, morphology, metallicity, and so on, which indicates divergent galaxy evolution paths. The mechanism that governs these observational results is still under debate. This bimodality can be illustrated more clearly in the color-magnitude diagram {{cite:f1562216b96cf8a79d65d98f7036ab074c900051}}, {{cite:57188030efe2b87d5c35ec97de7d09014328467a}}, {{cite:6d20762ec5e657956a8fcd161c1a6ee5f3033b33}}, where galaxies occupy either the blue cloud (BC) or the red sequence (RS). Blue cloud galaxies are still star-forming, while the red sequence is dominated by quiescent galaxies.
| i | 16b101f664e8825ceb3c353e57b55d02 |
where {{formula:d78f2d27-4008-40cd-993f-f62fb409c511}} , {{formula:9a413752-2770-435d-a0af-0112efd2d778}} ,
{{formula:d4e8b4b9-9314-46fd-91e9-bc81c37a17e9}} and {{formula:8ccb82e7-9d65-41e0-8488-499cdacaf75e}} . Note that
(REF ) and () are written in the forms of (6.1) and (6.4)
in {{cite:055d5beb9061a852d99bfd988535a1ed793a512f}} (p.449, p.450), respectively. We actually consider (REF )
so that {{formula:79cb826a-c7f4-4cee-a5d9-6bf2a422027f}} . Recall that {{formula:bd7a6134-101c-4aa6-b581-7b70fd9d4b1c}} satisfies (REF ) and
this, in particular, implies the linear growth property of {{formula:5036d8a5-c736-49a2-ab9d-884326bf286b}} : {{formula:b3933180-5948-4f59-a098-916b46cea1f2}} . Thus, we see that the conditions a)–d) of Theorem 6.1
({{cite:055d5beb9061a852d99bfd988535a1ed793a512f}}, p.452), especially,
{{formula:0e6364ba-1d08-4963-a421-b40be7fde757}}
| m | 6540076813dc82e0a57c351c24cb96cf |
Our main result is presented in Fig. REF . The first novelty is that we have found observational bounds on the Hawking temperature coefficient {{formula:4e324e86-0e6b-4867-adf6-6b17d7b48aae}} , which (on theoretical grounds) was usually taken to be of order unity {{formula:cd414fb4-b07b-4418-8822-765c77c5a1b0}} . Our evaluation gives that it should be of the order of {{formula:986f63c2-2a29-4b00-a80a-d509dc8f9126}} . This difference is not unexpected, because the {{formula:0221ed90-71fe-4d12-8ab8-882fb2302cd8}} estimation was based on purely theoretical considerations, with no previous connection to data. Now, we show that observations are not consistent with such large values of {{formula:2ff3a370-ce75-4eb4-877f-f7bd11f22908}} . Instead, it is at least two orders of magnitude smaller. Thus, the entropic force in the model we have considered gives only a small contribution. Similar results were obtained in {{cite:a6f2b8ee7f845d0bbce4072932830118f43ad340}}, {{cite:031f8d9a2ae2d936294789eb55963f8c340b89b7}}. Another novelty is the bound on the variability of the speed of light {{formula:b7beef69-13e0-4bb3-86e1-828b01cb8fad}} and the gravitational constant {{formula:f3d32a03-1aa7-4233-8ae2-245c9851b43f}} . According to these bounds, in the entropic scenario we have investigated, both {{formula:bf3ab3cd-733c-468f-bc46-d8dfbf84a448}} (Fig. REF , left panel) and {{formula:43a72867-4fde-4c46-a547-5c7fa3f708f8}} (Fig. REF , right panel) should increase with the evolution of the universe. Bearing in mind that the speed of light is related to the inverse of the fine structure constant defined as
{{formula:6148dfb8-b1fe-4960-8ed7-1161d77e7c5e}}
| r | bafd5279de1367f8c9fedf9075594ad8 |
Meanwhile, in our numerical calculation, we used a specific model of the two-dimensional system introduced in Sec. REF .
However,
our theory ensures that the fluctuation theorem should hold for a much broader class of models that satisfy the ETH (Table I).
Since it has been numerically confirmed that the ETH holds for various non-integrable systems {{cite:61c408233f6ee089bd62823d74e8241c85bd09aa}}, {{cite:3ee0b793598caa204afe0f919fb4dc60d2f0be57}}, {{cite:77e01b233473c03d6a24810af512af32a715a29e}}, {{cite:f05b19ea1158cf2e6a1c79c4d79726dffd28bbe5}}, {{cite:c1599e3dc9f57a678608e3e2e5c38fcc25753e7c}}, {{cite:abd423c564061a5af63eb6b394723f976430b97f}}, {{cite:205d44f357dac7f1385f02ff2cb24df4d279d832}}, {{cite:82ea071d3ee29f4264cc4105e5fb16d2cca46930}}, {{cite:813073c35ee509aa5f4694d1e6bc8e8a6254b049}}, {{cite:e01600496117eb2192dd806592ec138d8a2ae127}}, {{cite:01cafb2fcf1cb46428cac1caf28981303a082d52}}, {{cite:ed2b57ceaa27c0f16baafa2e5656e4f6ce89d188}}, {{cite:f80487a85d04e5c7c2b86860e48800480a625b25}}, {{cite:be9536d8167bb03b4cd8f74d5c0f10a94f8b4339}}, the fluctuation theorem holds for these systems.
| d | 9d12a00b1184db536915de3ec867525a |
To learn adversarial robustness without input perturbations, we follow {{cite:20dd114099424ec3e106e11bc21db30f57ff69d7}} and distill an adversarially trained model using only the clean training data. We focus on the self-distillation scenario where both the student and the teacher are ResNet-18. We find the AT teacher's epoch has a significant impact on the student's robustness, so we used the checkpoint from epoch 40 for all experiments since this provided the best trade-off between natural and adversarial accuracy. Additionally, we find distilling the logits with the L1 loss leads to consistently better student robustness. Justification for this setup and additional details are in Appendix .
| m | 8b6c97e1951cec6976e35984351a76a6 |
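A minimal sketch of this distillation step is given below, assuming standard PyTorch models and an optimizer; the function and variable names are placeholders rather than the actual training code.

```python
import torch
import torch.nn.functional as F

# Sketch: the student is trained on clean images only, matching the logits of a frozen
# adversarially trained (AT) teacher checkpoint with an L1 loss on the logits.
def distillation_step(student, teacher, images, optimizer):
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)      # frozen AT teacher (e.g., epoch-40 checkpoint)
    student_logits = student(images)
    loss = F.l1_loss(student_logits, teacher_logits)  # L1 distillation loss on logits
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```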
One barrier to improved algorithms for robust IL is a lack of appropriate benchmarks.
IL algorithms are commonly tested on Reinforcement Learning (RL) benchmark tasks, such as those from OpenAI Gym {{cite:c21879c6bf91518c570a1a0923d556ac05ac93fb}}, {{cite:41a0a2720ea95dc22e014d4fbd1748e8744c0705}}, {{cite:5bb81c2c461e6eda5624bc82378df62f9b25276e}}, {{cite:b1883eb68cee668206980b653ba7f2a91b288dbe}}.
However, the demonstrator intent in these benchmarks is often trivial (e.g. the goal for most of Gym's MuJoCo tasks is simply to run forward), and limited variation in the initial state distribution means that algorithms are effectively being evaluated in the same setting that was used to provide demonstrations.
Recent papers on Inverse Reinforcement Learning (IRL)—which is a form of IL that infers a reward under which the given demonstrations are near-optimal—have instead used “testing” variants of standard Gym tasks which differ from the original demonstration environment {{cite:3c75fde6240c31a0742b6b851f3411ca231129a7}}, {{cite:af7b70c2ac27d58d36c16678b0eb53d0dd96b550}}, {{cite:da1345ec4aec4e32e2282e0350db1e83416bb067}}, {{cite:5d16fcbf43ee5b0a9b4138fb75b56e5ef6efa96d}}.
For instance, {{cite:3c75fde6240c31a0742b6b851f3411ca231129a7}} trained an algorithm on demonstrations from the standard “Ant” task from Gym, then tested on a variant of the task where two of the creature's four legs were disabled.
Splitting the environment into such “training” and “test” variants makes it possible to measure the degree to which an algorithm overfits to task-irrelevant features of the supplied demonstrations.
However, there is so far no standard benchmark for robust IL, and researchers must instead use ad-hoc adaptations of RL benchmarks—such as the modified Ant benchmark and similar alternatives discussed in sec:related—to evaluate intent generalisation.
{{figure:d66b4757-3c08-43ea-acbd-1c3407158549}} | i | ea0f5c41bf5fdf8372beab2dfd9cfc7f |
where again we have neglected various constants.
The first term is the standard area term, which can be put in a form reminiscent of
the Bekenstein-Hawking entropy {{cite:6a8a4691e223e2216e4598b5ef853bcb5c5cda16}}
by defining an effective Newton's constant
{{formula:18441d4b-085a-4277-a91d-047d54671362}} {{cite:99bbd57b488bd55528671dd6178f3b947d330529}}, {{cite:c9e39931661d5323698f6b874a6cacaed1a34c9c}}.
The coefficient of the second term involves the combination that is proportional to
the central charge of the dual CFT, in this case
the {{formula:5390b57a-2e75-46eb-a4fd-d3d346cce1c6}} supersymmetric {{formula:f9358706-11ff-40c8-ae1e-b42ab8c67f4d}} gauge theory in the large-{{formula:8ccffbfd-b36f-4cef-b820-c6e0d85c4bb6}} limit: {{formula:53567240-ef68-4bb5-a466-d1be53ee11a4}} .
For this particular CFT, the coefficients {{formula:7e049a51-4625-4159-b0c7-36b28667d39f}} and {{formula:6bce22ed-8d24-49ed-bec8-f143100bf198}} are related. In particular,
we have {{formula:9b1a0e2c-9d07-4e36-80bc-223339f9dbf4}} . The last term in eq. (REF ) determines
the coefficient {{formula:5965e0c6-1bae-4857-bc0e-493b1ab6ebd5}} in eq. (REF ), displaying its explicit dependence
on the expansion rate and the spatial curvature.
Notice also that the form of the logarithmic term is consistent with
the direct calculation of the entanglement entropy in dS space
for a theory of a massless,
minimally coupled scalar field, for which the coefficient is equal to 1/90
{{cite:3ed1810e06aa5c1a628d8615fe783ad0413f2d89}}.
| d | c58a36654ed7ce858170543e328a9b0f |
We measure the angular momentum content of each {{formula:5c92cca0-4df8-432d-b518-3f1e35210bd7}} -body model via two distinct metrics. First, we calculate the kinetic energy due to rotation and compare it to the total kinetic energy in the model. The second parameter measured to understand spin is the well-known “spin parameter” (e.g., {{cite:11be4f08e70b810a64e2783e3a428acc6dbaee5e}}),
{{formula:37f5f895-f089-4a24-ae4e-5eb187623e7a}}
| m | f77416aaf8937ff82f0b6102aaf4a56b |
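A sketch of the two diagnostics for an N-body snapshot is given below; the Peebles-style form lambda = J |E|^{1/2} / (G M^{5/2}) is an assumed reading of the elided definition, and the unit system (G = 1) is illustrative.

```python
import numpy as np

# Sketch: (i) rotational-to-total kinetic energy ratio about the z-axis, and
# (ii) a Peebles-style spin parameter (an assumption about the exact convention).
def rotation_diagnostics(pos, vel, mass, energy_total, G=1.0):
    R = np.hypot(pos[:, 0], pos[:, 1])
    v_phi = (pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]) / np.where(R > 0, R, 1.0)
    T_rot = 0.5 * np.sum(mass * v_phi**2)                     # rotational kinetic energy
    T_tot = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))       # total kinetic energy

    J = np.linalg.norm(np.sum(mass[:, None] * np.cross(pos, vel), axis=0))
    M = mass.sum()
    lam = J * np.sqrt(abs(energy_total)) / (G * M**2.5)       # assumed spin parameter
    return T_rot / T_tot, lam
```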
The CER returned by the model trained on the joint dataset shows an improvement in the results on both datasets and a large improvement in the out-of-domain testing scenario. Even though the datasets used by us are the largest among all Cantonese ASR datasets, they are still much smaller compared with more comprehensive datasets such as LibriSpeech. Each of the Cantonese datasets contains fewer than 100 h of speech, while LibriSpeech alone contains 960 h of English audio {{cite:60f00bbec7b1854f9dd6f420ce9fcca8884aaba1}}. Thus, combining both datasets is a natural step in creating strong Cantonese baselines for data-dependent deep learning models.
| r | 90753f0ccf20856bc15cd57c2d0700db |
As demonstrated in Table REF , we report evaluation results on the 1-channel real-world noisy speech extracted from the CHiME-4 challenge.
Two categories of results are displayed: the upper part lists the results of supervised methods. The first three rows are the results reported in the challenge {{cite:d367aefa7d72dfedf52cdd815f8d6c791c971628}}, {{cite:ce13c0c1eb89e0ece895721cb706c7f3fd715597}}, {{cite:8f5eff2ae6a0b5001c369d7c7b3f0958d1f627b8}}, and the fourth row displays the best result on the 1-channel track data to the best of our knowledge {{cite:f5ae771cd3bd85a01d98c7072a47b7d7447b2522}}. The lower part of the table lists the results obtained from the pre-trained self-supervised models that are fine-tuned on different percentages of labeled data.
The major difference between these two categories is that our proposed method does not perform any form of preprocessing, and not all labeled data are required to train the acoustic model. Meanwhile, the supervised approaches conduct speech enhancement to preprocess the noisy inputs before feeding them to acoustic backends. Moreover, to obtain better WERs, techniques like speaker adaptation and ensemble acoustic modeling were employed in supervised studies {{cite:8f5eff2ae6a0b5001c369d7c7b3f0958d1f627b8}}, {{cite:f5ae771cd3bd85a01d98c7072a47b7d7447b2522}}.
After applying an LSTM-based language model, we achieved 6.5% average WER with only 5% of labeled data, which was 0.4% better compared with the top-1 result reported in the challenge {{cite:ce13c0c1eb89e0ece895721cb706c7f3fd715597}}. Furthermore, by using all labeled data for fine-tuning, we reached a 4.1% WER on average, which was 21.1% relatively lower than the best WER obtained by the supervised approach {{cite:f5ae771cd3bd85a01d98c7072a47b7d7447b2522}}.
Compared to only utilizing the pre-trained model (no continual training), adding the reconstruction module resulted in around 4.7% relative WER improvements.
We also show the results using the LibriSpeech (LS960h) based pre-trained model fine-tuned on the five channels of the labeled data (excluding the microphone channel that faces backward). For this relatively small sized pre-trained model, we achieved competitive performance with an average WER of 7.0%. In addition, adding the reconstruction module to the pre-trained model brought around 14.6% relative WER improvements on average. The performance boost for the LibriSpeech based model was more obvious, and is likely caused by the difference in the model scale and the size of the training corpus (960 hours versus 60k hours).
This result attests to the potential of our approach, as we are capable of achieving noise robustness with a limited amount of labeled speech and no preprocessing.
| r | f0a066f605aad56294de6eb384f05be0 |
As previously discussed, the aim in this work is to address all three issues impeding the progress of LFQA, as outlined in {{cite:92a95213cf4e310ff8f91a3288ea9632190f68f0}}. We discuss our methodology towards solving each of the above tasks in Sections REF , REF and REF .
| m | a8ad948e7483825b406a469b69567851 |
Human infants are not as tabula rasa as models like InferSent but rather encode useful inductive biases {{cite:8004a2277c0452e17369422cd7b6b009eb3309bb}}, {{cite:7fcb4364d9234b8866305369fd3692d00eb2bd65}}, {{cite:6a141af622ec249d8228a54d319c9be2779a518c}}, {{cite:ce82c696aa7039addaf8c7dbcb8792ffa2796829}}, {{cite:15bf35222cefd6f03ce10212d179a1757a235fc5}}. Building such biases into our models {{cite:b8c3ac044b4cb60110bce15f98864c56cecf5922}}, {{cite:b76715136f56429b8250496ac5badbb081210ac2}}, {{cite:abd30ea19c7714ecbcdf6c7b7debd1bfd2a07c5f}}, {{cite:404ec18ea7ce66895a19cae83e8238e76bd30cef}} is a promising direction towards scalably learning systematic representations. We also showed how analysis and controlled testing for heuristic strategies in the learning environment can provide rich insights into the representations learned. Such analyses could also be used to improve learning and subsequent performance by leveraging this underlying structure {{cite:363620bdda99a8a52eaa8113db436d1b432f1adb}}, {{cite:b6324f4986b4060e7e06bff4ea47ec2033e757cc}}, {{cite:85f1eb299f6c3bc7827c6afddbbf4d7d6c29a435}}, {{cite:0d83645498f576d04c54c85c4d0a34e438e83749}}. Finally, we leverage methods from cognitive psychology to introduce a new structured test dataset (the Comparisons dataset) as well as a new metric (context-tying) for sentence representations. Rather than the traditional single-dimensional metrics of the accuracy achieved on ad-hoc test datasets, our approach provides insights into the kinds of mistakes made and therefore a more principled and nuanced ways to benchmark artificial systems against humans {{cite:e8369e7ba3f81069b40f309d3cb374b8ccff107e}}, {{cite:b85bebd0e004be57a69a2d89b02fb988e9b4ec4d}}, {{cite:89c52a85662f2d1f9752dd67680b5309d09d072d}}, {{cite:1e14239ce50c6f2b21d224992276b1ce3741c4c2}}, {{cite:cfda6a13469de1ade29fcf0cd5b8457cb7d247fd}}, {{cite:c499a4f9b50e0b0bb73cf3897ca2996904fc39c9}}. A metric like context-tying is not bound to the domain of language, and can also be used to benchmark systematicity in other domains that benefit from abstract compositional representations – like scene understanding {{cite:078966f6293d667a5fe731c3d13eff0caa38fc6f}}, {{cite:53a406edffbd12e8017044c6e1c9a2c17929ceac}} or structured planning {{cite:100e9094e4cc3c310b66a4ba29fc9a4aac0ab477}}, {{cite:9523cd6e1e5c63235dc08113914f74ff58505123}}. Future work should pursue other such diagnostic metrics, to build towards a comprehensive suite of testable criteria for exactly what constitutes human-like representations, and also to further inform which aspects of these we wish to emulate in artificial systems.
| d | fdf1ea791e857c25a6cd6f04208f77b1 |
With bandit feedback, players need to derive an individual gradient
estimate from
the utility value to update the next action. The most commonly used
methods for gradient estimation can be divided into two
types: multi-point estimation and single-point estimation. In fact,
multi-point
estimation techniques have been widely used in various optimization problems
{{cite:9815e15c0136052c2505a4ed67969e140681412f}}, {{cite:7469d81a6859518c850c8031b1c85b1ba7959824}}, {{cite:e0c58d228a99d80f42f503834245633c8db5c68f}}, {{cite:fd9d0dffd1e0551fe696def338874889177e4c99}}, {{cite:c61bbffbb24c65671f803d5198dacc8371f62579}}, {{cite:6bdf2a30049138ec5a59d8bdf6aee268a40014ec}},
which are also
known as zeroth-order oracles. Different from optimization, for the repeated game
problem, each player's utility is not only related to the actions taken by itself
but also related to the actions taken by its opponents. When its actions change,
the
actions of its opponents will also change accordingly. Therefore, multi-point
estimation methods are either not applicable or too costly; hence,
single-point
estimation methods such as simultaneous perturbation stochastic
approximation
(SPSA)
{{cite:822c6b774e1f2776fc62935940509d5eae077c43}}, {{cite:9a0aaef0730cfe5e32cecea30d5842cdcdf5626b}}, {{cite:0a5dfba4556c08ccbb9cf9fce22df88e7d325e73}}, {{cite:992c3310754207eeed4a29c8f4a27315c59b51f9}} are
studied
in the context of non-cooperative games. For example, the work of
{{cite:4e10e13f0230ac60b1b183e085b8151ca7056d9f}} considered potential games and proved that
the exponential weight
learning program with bandit feedback can achieve a sublinear expected
regret
and
converge to a Nash equilibrium.
{{cite:b1965066dfbf7cdb333082e7848ad00ad2381337}} showed that in
monotone concave games, no-regret learning based on mirror descent with bandit
feedback can converge to a Nash equilibrium with probability 1. With delayed
bandit
feedback in monotone games, {{cite:da15430bfb4c4fde3a22960f2eec4723ff5ccc07}} proposed an algorithm with
sublinear regret and convergence to a Nash equilibrium. {{cite:512d24a7d412e0e98391dc5703a6aeb7e8a0d6e3}} showed
that the no-regret learning in the Cournot game with bandit feedback can
converge to the unique Nash equilibrium.
| i | 68381281b3b655cb1cb702c1f365b87f |
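A minimal sketch of a single-point (payoff-based) gradient step of the kind discussed above is given below; the smoothed one-point estimator and the plain ascent update are illustrative choices, not a reproduction of any cited algorithm.

```python
import numpy as np

# Sketch: with bandit feedback, a player observes only the realized utility of the
# perturbed action it played, and forms a gradient estimate from that single scalar.
def one_point_gradient_step(action, utility_fn, step=0.01, delta=0.05, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = action.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                    # random unit perturbation direction
    payoff = utility_fn(action + delta * u)   # single bandit (payoff-only) query
    grad_est = (d / delta) * payoff * u       # one-point gradient estimate
    return action + step * grad_est           # ascent step on the player's utility
```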
Groenendijk et al. {{cite:951abadd565cdba45cb27e26a4bcb3818ca18556}} proposed a weighting scheme based on the coefficient of variation, where the set of weights depends on properties observed while training the model. The proposed method incorporates a measure of uncertainty to balance the losses, and consequently, the loss weights evolve during training without demanding another optimization. Additionally, they have illustrated that coefficient of variation (CoV) weighting outperforms other weighting schemes such as GradNorm {{cite:b2b8fe6cdd4405481f59abde00789e4dd63b1e23}}. When a physics-informed neural network is developed, one ends up with a loss function defined as a weighted linear combination of multiple losses, where the final performance is sensitive to the weights for these losses.
| i | 5da2cd736a02843550417e7680e41e1a |
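A minimal sketch of a CoV-style weighting rule is given below; the running-statistics bookkeeping and the normalization are illustrative assumptions and may differ in detail from the cited CoV-weighting scheme.

```python
import numpy as np

# Sketch: each loss term is weighted by the ratio of its running standard deviation to
# its running mean (coefficient of variation), so weights evolve during training
# without a separate optimization.
class CoVWeighting:
    def __init__(self, n_losses: int, eps: float = 1e-8):
        self.mean = np.zeros(n_losses)
        self.sq_mean = np.zeros(n_losses)
        self.step = 0
        self.eps = eps

    def combine(self, losses: np.ndarray) -> float:
        self.step += 1
        self.mean += (losses - self.mean) / self.step          # running mean
        self.sq_mean += (losses**2 - self.sq_mean) / self.step # running second moment
        std = np.sqrt(np.maximum(self.sq_mean - self.mean**2, 0.0))
        cov = std / (self.mean + self.eps)                     # coefficient of variation
        weights = cov / (cov.sum() + self.eps)                 # normalized loss weights
        return float(np.dot(weights, losses))                  # weighted total loss
```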
For benchmarking, we use the DE model in {{cite:3c6caba7107ac94ccb6b0141d87a269d8ab912cf}} and the results of the DE model on UDCv2 as published in
| r | 3a6f0eebc54b57c57b6edeb666e68b55 |
Next, we select the {{formula:961fb520-2682-461b-92d2-bdb33f50e59d}} NNs of each cluster's centroid in the graph-level embedding space in order to find the set of {{formula:f965187c-b5da-4642-8c78-0cf7b2b30f8f}} graphs, {{formula:74a9d9ae-8d46-4374-848f-f26739db4030}} , for each cluster (see line 4 in Algorithm ). To this end, we sort the vectors {{formula:caaec83b-507b-43d5-894d-4bc0c727471e}} in descending order with respect to the Mahalanobis distance from the mean vector {{formula:79764d5f-e1ad-461a-af1b-b918cf9afa2a}} within the {{formula:65f481f0-61ca-4725-817d-2705668da19d}} -th cluster {{cite:85229c9b9ab31e74b4e02002bef577ef2312ac40}}:
{{formula:562a35fd-ffd7-468a-87ca-9c9dfdde258f}}
| m | 83b0b707d921434cc5788a7d467cfc73 |
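A minimal sketch of this selection step is given below; the choice of k, the pseudo-inverse covariance, and keeping the top entries of the descending ordering are illustrative assumptions.

```python
import numpy as np

# Sketch: within one cluster, sort graph-level embeddings by Mahalanobis distance from
# the cluster mean (descending, as in the text) and return the indices of the top-k.
def top_k_by_mahalanobis(embeddings: np.ndarray, k: int) -> np.ndarray:
    mu = embeddings.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(embeddings, rowvar=False))  # robust to singular cov
    diff = embeddings - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)          # squared Mahalanobis distances
    order = np.argsort(d2)[::-1]                                # descending order
    return order[:k]                                            # indices of selected graphs
```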
The work presented here extended the previous work of Bonifacio and Preparata {{cite:85f2819e4719634b091d60c95195b135d7d858e9}} to develop both a short-time and long-time solution to the differential-difference equations Eq.(REF ) for the quantum amplitudes of various relevant `in-states.' In the short-time limit defined by {{formula:35b8e7dc-3955-463f-ad79-8720d079fbcc}} the pump Fock state {{formula:2d55e197-fffe-4c3b-a3e5-3d934e1758fa}} essentially factors out from the rest of the signal/idler portion of the state, and the usual two-mode squeezed state with an un-quantized source of PDC is recovered. Since a Fock number state is a highly non-classical state, it might be more appropriate to have modeled the initial BH `pump' state by a coherent state {{formula:dff31b3e-bf4f-4de5-9df1-eeeaa888ccf8}} with {{formula:9313393a-5420-47ac-ad4c-155c2fbcd24f}} , which for a laser pump source is the quantum state most like the classical state of complex amplitude {{formula:c08d5fc0-4a8c-4773-8514-9f991ee857c9}} (i.e. {{formula:17b47eb2-e9ae-4461-b9f8-6063313cdc46}} , while {{formula:86c186fe-fa4d-4a10-b87f-3254cb7bf206}} ). However, the point of view taken in this paper is that since {{formula:76396361-dadd-4d60-ac04-b9db082606aa}} , the coherent state {{formula:5a6f3115-54d7-4878-a7f6-427515230ef6}} is very sharply peaked about the mean {{formula:0a58858a-2d7a-456e-b928-3979150d0756}} with {{cite:102a253b9d2d0736f99782114c16e7988ac0db67}} standard deviation {{formula:98d9ca7c-7ad4-4afd-836e-032b461d0702}} implying a fractional uncertainty {{formula:e21e8201-d33d-4bcc-8f24-eb7e3f998d5f}} , and hence {{formula:aa65adb7-6489-4542-9ce4-de903b0f66ca}} . The extra summation in the definition of the coherent state could be accommodated in an analytical treatment (as was performed numerically in Section and will be explored further in future work {{cite:8edfbca3ad4733496d4c0f153d712f5d6db365fe}}), but in this present work would only serve to complicate and obscure the central analytical features discussed.
| d | 9761631d5f7861213b82803350566eb5 |
Our saliency maps are of good quality (see Figure REF ), as shown by the visual
comparison of some of them with two state-of-the-art models
(“Hierarchical saliency detection”: HS {{cite:af7a7e5f1f7253926d661d5724a1057e72fb08ad}} and “Hierarchical image saliency detection on extended CSSD”: CHS {{cite:6be602f8b4153362b51b26d59d4eeeb8e38c4be2}}).
To evaluate our salient object detection model, we used the Mean Absolute Error (MAE), the Precision-Recall (PR) curve, the
{{formula:2eb63a15-8624-48de-8b33-9541b89fab05}} measure curve and the {{formula:7f032767-359e-4e14-8d1f-e30dd2e63d64}} measure with {{formula:9df43ab7-27e3-4222-ba9b-86156082a5b5}} . Table REF shows the {{formula:69057192-560c-4716-8367-f033ad0e68d9}} measure and
the Mean Absolute Error (MAE) of our model on the ECSSD and MSRA10K
datasets.
| r | a13edfb53893779089baaba53060dae4 |
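A minimal sketch of the MAE and F-measure computations is given below; the binarization threshold and the value beta^2 = 0.3 (a common convention in salient object detection) are assumptions, since the exact settings are elided above.

```python
import numpy as np

# Sketch: metrics for a predicted saliency map S in [0, 1] and a binary ground truth G.
def mae(S: np.ndarray, G: np.ndarray) -> float:
    return float(np.abs(S - G).mean())

def f_beta(S: np.ndarray, G: np.ndarray, threshold: float = 0.5, beta2: float = 0.3) -> float:
    pred = S >= threshold
    tp = np.logical_and(pred, G > 0).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max((G > 0).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```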
Besides, inspired by ACL {{cite:7a73d767cebfffcf549d5b965450d1340815a4ae}} from the continual learning literature, we adopt an adversarial training scheme to make the knowledge in the global net {{formula:a31c1c91-186c-4f97-a15d-511de7f2b188}} task- and domain-invariant. Similar to the GAN pipeline, the global network can be regarded as a generator. We designed two discriminators: the task discriminator {{formula:20734927-4061-48aa-8300-713b3eb80281}} , which evaluates which task the generated sample is from, and the domain discriminator {{formula:d900d3db-f259-40f3-ab45-597e65808eeb}} , which classifies the domain of the current sample; see Eqn. REF .
{{formula:2a468e61-da6a-4584-bfe3-ea2ec1ccaa73}}
| m | 4e28d3fda412098cbcda1f7d60696f36 |
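A minimal sketch of such an adversarial objective is given below; the cross-entropy form and the sign-flipped generator loss are illustrative assumptions and are not the Eqn. REF referenced above.

```python
import torch
import torch.nn.functional as F

# Sketch: the global network acts as a generator whose features should fool both a task
# discriminator and a domain discriminator, pushing the shared knowledge to be
# task- and domain-invariant.
def adversarial_losses(global_net, task_disc, domain_disc, x, task_id, domain_id):
    feats = global_net(x)
    # Discriminators learn to recognize the task / domain of the (detached) features.
    d_loss = F.cross_entropy(task_disc(feats.detach()), task_id) + \
             F.cross_entropy(domain_disc(feats.detach()), domain_id)
    # The generator (global net) is trained to make them fail.
    g_loss = -F.cross_entropy(task_disc(feats), task_id) \
             - F.cross_entropy(domain_disc(feats), domain_id)
    return d_loss, g_loss
```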
where the function {{formula:25211ec7-4e76-490e-b21c-ede1ab67084e}} is the density of the Lévy measure of the hyperbolic cosine function {{formula:6cd17471-876a-42ad-91f1-a4ac635418ea}} .
Formula (REF ) is confirmed by {{cite:b4ae5368c8d333de4647de214230fc0950fdb3fb}} and (REF ) was confirmed numerically for {{formula:94a30ccc-7b32-4c64-b659-37e3e0b24082}} . Formulae (REF ) and (REF ) seem to be new and might be of some interest.
| r | 23ad8dbc94f90d421a6881e72984abc1 |
To efficiently obtain the unitarity cuts of the four-graviton amplitude,
we use the double-copy construction {{cite:5e427eacb8441d200403bdf500b1df32f78d2976}}, {{cite:3e7b66cbbf60201f471abf9e6be6beae43709e63}} which expresses gravitational
scattering amplitudes directly in terms of gauge-theory ones. Here we
use the BCJ form of the double-copy relations {{cite:3e7b66cbbf60201f471abf9e6be6beae43709e63}}, {{cite:f1febe82f9dd77a474f387c5a960b77459396b94}}, which
is more natural when organizing expressions in terms of diagrams.
{{figure:5073835d-dedc-432d-be84-af7a7e3cb968}} | m | a58c70351ea329eacc53eff6f044cf14 |
Game theory was initially developed
to investigate different strategic
situations with competing players {{cite:2dd2a452f77404f3fbb1470e37450ba89701e9e2}}. Of late, the concept of game
theory is being applied to
different statistical events to
measure the success rate when one’s
success depends on the choice of the
other agents. The game of Prisoners’
dilemma (see e.g., {{cite:9fb51fdf9f60da72eb1e9d29cea2388da921e7c8}}) is a popular
example where two non-communicating
(or non-interacting) agents choose
their actions from two possible
choices. It is a two-person, two-choice,
one-shot (one time decision) game. The
Nash equilibrium (see e.g., {{cite:e27bf806e470d5df3a58d6665cf19bf69631de0e}})
solution employs the strategy from which
no player can gain by unilaterally
changing their choice, and both players
necessarily defect. However, this is
not a Pareto optimal solution (see
e.g., {{cite:b42ee118be3fc9bc22ee35b7ed7d78a7a76a3d6a}}), where no change in the
decision can lead to a gain for one
player without any loss of the other.
This problem has been used to model
many real life problems like auction
bidding, arms races, oligopoly pricing,
political bargaining, salesman effort
etc.
| i | 28e36194c01374d0388fb5f299855a7c |
Shape matching is an extensively studied topic with a variety of different approaches and methodologies. We summarize references relevant to our approach here and refer to recent surveys {{cite:c5188435cba62ee7974ba4f559f3f3809a4a2fad}}, {{cite:0fc4c942660f34c683db97ccbeb2250a527d6da7}} for a more complete picture.
Classical methods for non-rigid matching often devise optimization-based approaches that minimize some type of distortion metric {{cite:2fd3a0324ce8ee9fc4c8439b899cfaebb5be5cfc}}, {{cite:b51a5f9b41ee70a38ac98ff7517e8656c9b04d30}}, {{cite:d82890e0a742be7455f90d62f78a831661e25503}}, {{cite:53c89887b250e517080628e439d1d09610eb386b}}. A common prerequisite of many such methods is the extraction of hand-crafted local descriptors that are, in approximation, preserved under non-rigid shape deformations. Common definitions include histogram-based statistics {{cite:9b2f1a44c9fe42419b19afcd4179f4da9d4291b5}} or fully intrinsic features based on the eigenfunctions of the Laplace-Beltrami operator {{cite:dcafb5f851db4944e8fe7153212993e35b2af222}}, {{cite:308b3c244a7a52dd9ff4e7d2b3351b3859221452}}, {{cite:cc75cca42c9414e55c2dfecf97bb88727c263be5}}.
Over the last few years, functional maps {{cite:42ef84e820b941950a65569143e8d830e0abf625}} have become a central paradigm in shape matching. The core idea is to reframe the pairwise matching task from functions (points to points) to functionals (functions to functions). There are several extensions of the original framework to allow for partial matching {{cite:75d5ae62a20e740e8382145b4922f4fb3a81f02a}}, {{cite:a7d366da40e780b45821e9d7a299f011ea14f33b}}, {{cite:5f97a354f410969057db61f48d3f4f917726a2ab}}, orientation preservation {{cite:2660d950c6edd3a7f726fe8223f83ce1d64eb817}}, iterative map upsampling {{cite:48128c44c37129035d2a90862c096069159c67c7}}, {{cite:b1067663f654f1fee3acd7067454089293481bc8}} and conformal maps {{cite:3439c97ca287754efbf68ebcb74b06b59590ca2c}}. Our approach utilizes functional maps as a fundamental building block within the differentiable matching layer.
| m | d89685e98263eb623fbd13c7ff6099c9 |
There are ripple effects to our interactions with recommendation engines - our data goes on to be used in other settings as raw data for other algorithms. Algorithmic assemblages, operating in "networked information environments" {{cite:993878c7472721f6695bd13920b334e3473c8637}}, do not only produce individual distributive outcomes but are also "intimately bound up in the production of particular kinds of meaning, reinforcing certain discursive frames over others" {{cite:e2b7a27ab723432971df6e3eb500e63a0be50185}}. Addressing these considerations requires employing frameworks that (1) allow us to think in terms of algorithmic assemblages situated within a sociotechnical context {{cite:cfb0e474cc4d24aed6d6e81dea313bea7d932ebb}}, {{cite:8a9834ce1c9358633f4301397a3bc0450e462dd9}}, {{cite:20ba165397565a7028335f523a65eacb4a01d9e3}} , and (2) see data and algorithms not as static artifacts but as part of an ongoing process which evolves at different time-scales {{cite:10bf0dd9c5af5142c18f778397cefb5c41497d4b}}, {{cite:8308755251388fe29a5e32078aa06a6c55f91734}}, {{cite:993878c7472721f6695bd13920b334e3473c8637}}. Specifically, (1) we need practitioners and researchers to employ evaluation metrics that go beyond optimizing for the inputs and outputs of a single algorithm, and (2) consider how the data that an algorithm operates on is part of a dynamic larger whole (for example, a user's preference or identity). Building on prior work showing the value of interdisciplinary research, we see an urgent need to draw from the fields of Philosophy, Systems Theory, Complexity Theory, and others, which have developed the concepts of emergence and synergy. "Synergy is the only word in our language that means behavior of whole systems unpredicted by the separately observed behaviors of any of the system's separate parts or any subassembly of the system's parts. There is nothing in the chemistry of a toenail that predicts the existence of a human being" {{cite:e78cee94999b976345395dbf30505cbb692e0e2e}}. Similarly, it is imperative for us to question the predictive power of individual datums, for example click-rate, over a person's political inclination or other aspects of their identity.
| d | c0e9895ae8b06490049b0273fe4afabb |
While early level set methods are “pure PDE methods" which construct the evolution equation (REF ) by first defining an evolution PDE in the Lagrangian framework and then converting it to an evolution PDE for the level set function {{cite:26f95fe1efec2172843736aa9e59e5dae3c90a7a}}, a method known as variational level sets obtains the desired evolution equation by minimizing an energy functional defined on the level set function itself {{cite:26f95fe1efec2172843736aa9e59e5dae3c90a7a}}, {{cite:6c39e53dd021dbebf9724db588f2540c2e1900d3}}, {{cite:41a23867edc8aaddc639814a06967350524c1c02}}. First proposed in {{cite:698c15b1640924b52ee28bf5daa617892bb93bec}}, this approach was popularized by {{cite:bb571c2d6576e30b02ed6d8f80cbcfb0e6b7d5eb}}. Not only does the variational level set approach allow the evolution equation to be obtained in a more straightforward manner, but it also allows additional details such as region-based information {{cite:bb571c2d6576e30b02ed6d8f80cbcfb0e6b7d5eb}}, {{cite:45198a5a628fcdf91c86718254bf27e76bf7d4b9}} or shape information {{cite:8128c179bb3984651867a68223ef3e340fc35f46}} to be included in the energy functional.
| m | 325d5c49aac57613fcda8f3cc8c7a8e3 |
In order to keep the modality constant, we first convert the MIDI files to audio using FluidSynth {{cite:f0cf57107df870855a89c5e768088e4b4bb1b5ea}}. We then transform the frame-level audio patches to image spectrograms using librosa {{cite:09645769f7e0365f09525f2abe881f3999d2c6fa}}, a Python library for audio and music analysis. We conduct experiments using both the Short-Time Fourier Transform (STFT) and the Constant-Q Transform (CQT) of the raw audio. We briefly explain our choice of loss function below:
| r | 6a7f3253369a19eed17ee460ed67997b |
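A minimal sketch of the spectrogram extraction described above is given below; the file name, sampling rate, and frame parameters are illustrative assumptions.

```python
import numpy as np
import librosa

# Sketch: load the synthesized audio and convert it to both an STFT and a CQT
# magnitude spectrogram (in dB), which can then be treated as image inputs.
y, sr = librosa.load("rendered_from_midi.wav", sr=22050)

stft_db = librosa.amplitude_to_db(np.abs(librosa.stft(y, n_fft=2048, hop_length=512)),
                                  ref=np.max)
cqt_db = librosa.amplitude_to_db(np.abs(librosa.cqt(y, sr=sr, hop_length=512)),
                                 ref=np.max)
print(stft_db.shape, cqt_db.shape)   # frequency-bins x frames "image" spectrograms
```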
These results extend recent findings on the similarities between brain responses to speech and deep learning models trained with biologically-implausible objectives and data.
First, fMRI {{cite:0f0c696997c708cd115a3181c774f641be71646b}}, {{cite:c439fa510b1a61f5749ca1d0fcdbaf49c9de80d6}}, {{cite:ea76115b2c8c463fc78bbd84a2da8f0c03da45e2}}, electroencephalography {{cite:ef8b3943ec6c81681b38adb607f76f271ae3ba4e}}, and multi- or single-unit responses to sounds {{cite:319ff234bc84f2df1b325868ad554873e35e5392}}, {{cite:7f424e16d166fe68af127317fee8f31bda8c3bc6}} have been shown to be linearly predicted by the activations of deep convolutional networks trained on supervised auditory tasks. For example, {{cite:c439fa510b1a61f5749ca1d0fcdbaf49c9de80d6}} showed that a supervised model, DeepSpeech 2.0 {{cite:9fa67d33e8df5754161cafe0da805812d608b908}}, better accounted for brain responses to speech in 102 individuals when it was trained on speech recognition rather than auditory scene classification. Similarly, {{cite:0f0c696997c708cd115a3181c774f641be71646b}} showed that eight participants listening to brief speech and non-speech sounds demonstrated fMRI responses in the temporal lobe that aligned with those of a deep convolutional neural network trained on a dual auditory classification task. Our results, based on up to 50 times more fMRI recordings of the entire cortex, not only show that such representational similarities hold with a self-supervised objective, but reveal the remarkably similar hierarchical organisation of speech processing between these two systems {{cite:bb2d397b1126e4038e4d4f72c054dca74025cff3}}, {{cite:812bffb58dcb1e38dbe916ffbf99664103eb2fa9}}, {{cite:9478c05844a337d8b90af91044a00243c8095f06}}, {{cite:a59bded4a39244150df17bc507ad1432c879fa01}}.
| d | 6077bd865bee2d245f5b119280499eec |
Our modification involves a certain projection of one half of the celestial diamonds so that there is only one radiative soft mode. We then showed how to extend this to operators of any weight, so that we can go to a basis where the corrected shadow transform is again a conformally soft limit. In the end, we've seen that different bases make different symmetries manifest at the level of the celestial OPE and explored how certain helicity-asymmetric representations of the amplitudes can be better suited to taking multi-soft limits. These explorations expand upon a variety of exciting approaches to elucidate the symmetries and structure of the 4D {{formula:edeb461f-cc0b-49f8-9704-f9e53fd1e2d9}} -matrix {{cite:e86969b31be98687ed3d4c03b468e357afb971b6}}, {{cite:60f0bf897611dc8fc3e8f5de172816a732e350cc}}, {{cite:c8ab3a3c9815fe49dbab13ebb9660b3e307e61a5}}, {{cite:3f522430985b5d650e6da751020a5265ddeab9f7}}, {{cite:3bd4e16e23de6e1a49f7092a85974cb8b07d5529}}, {{cite:3e58ceea1ffac09244770ed5e3ab7af13fde9724}}, {{cite:0434360029ab39c46c0da8c926cc03ae3b1f7561}}, {{cite:c3fad04da9e4e353f5db3ff73046599ec0a00607}}, {{cite:e8e3e652ca322346defd1a5757ba6f15e61b8bd5}}, {{cite:45ecd5379c8665c6f8727996b412ac2f32ab7b79}}.
| d | dfb19ba5742f5d7e95c636ac26f87204 |
When dealing with the “complete” evolution equation (REF ), it is quite clear how we can recover an estimate of
{{formula:f8afe4e1-b9d6-4375-a67e-a13a63de9ae0}} starting from the equation (see the Caccioppoli inequality proved in {{cite:f06e8a8e2bbc69944476b9f448e59eb0876d33bf}}).
However, combining this information (that corresponds to the choice of {{formula:512493f6-5b96-4bf8-875b-150f066e4f60}} ) with the previous theorem
does not provide us with the best Sobolev exponent for the spatial regularity, which in {{cite:bc1cbb767479123e3ac6e3a69833f02c2671d7ee}} is shown to be {{formula:25bb83a5-2daa-4f32-84c6-f74b77bac70e}} in the direction {{formula:0a576db0-7431-425b-9947-ea1439acbd75}}. As we already
pointed out at the beginning of this introduction, the method proposed in {{cite:bc1cbb767479123e3ac6e3a69833f02c2671d7ee}} is very involved. For this reason, in {{cite:2a6df44e714da8dc836f4e89f1316ba816d98016}}
the author proposed to show that a solution to (REF ) is such that {{formula:41ff361e-6d48-4bbf-9d52-e435b5cae356}} and then apply Theorem REF , with {{formula:5f0d84bf-218f-4874-839f-35978e375fb2}} .
Here, we state the analogous theorem for solutions to (REF ), for whose proof we refer to {{cite:2a6df44e714da8dc836f4e89f1316ba816d98016}}.
| r | 70b067fc122a0cf1d9ec0e05e379c4e2 |
We evaluated 10 machine learning applications that are representative of the three most commonly used neural network classes — convolutional neural network (CNN), multi-layer perceptron (MLP), and recurrent neural network (RNN).
Table REF summarizes the original models and their baseline accuracy, which is assessed from the inter-spike interval {{cite:d1c5cc654ccf368b4dd26d7a9ce5e49b56dec5c3}}.
{{table:dae81a33-3454-4e86-8c21-cffb375f48b4}} | m | 5080491ecfa061aec34bc9e0750459a4 |
Considering additional "forgetting" to provide space for information prospection. LTD, which represents forgetting, is as important as LTP, which represents memorization in synaptic plasticity, because our brain could be overwhelmed by the influx of information if no forgetting occurred. Moreover, the idea that forgetting might be beneficial for memory maintenance has been frequently expressed {{cite:b457a386b39f4140117bc5295675282286cbe9a1}}. Existing CL models always focus on memorizing important old information, and it remains to be examined whether active forgetting plays an important role in anterograde memory. By forgetting certain redundant information, CL models may gain additional room to learn new information and more flexibility to optimize it.
| d | d1d5c91236b04a14f3ba1e6934d8cfb2 |
The potential game was first proposed by {{cite:716a0b007a34db5ddf18c94b3be8e4daecf2eba1}} and then systematically developed in {{cite:e40c668e43a2ef23cef3be56ec1294d35258462d}}. It is of particular importance in game theoretic control {{cite:f88bbb732a8ba478fab36aab13a7b8abd743aa33}}. Various problems concerning Bayesian potential games have also been investigated {{cite:3b06b8712525a2c43742a6a3a1b03e4c6848479f}}, {{cite:a49463f314e664f3561f168c6ffbcf74ffe3758d}}, {{cite:ca5c65b050516b58ccd16a0956d3eade362443dc}}, {{cite:0eaa2c34cbb6de0ed076f93d843ad47f917de7b6}}.
| i | 33fcfe4341d01175b3a82318fdd3855a |
The basic components in InstaHide schemes are inspired by computationally hard problems derived from the classic SUBSET-SUM problem: given a set of integers, decide whether there is a non-empty subset whose integers sum to 0. It is a version of knapsack, one of Karp's 21 NP-complete problems {{cite:1d9e17fd0c60cf0d82849b09a3dbd2752b75c0cb}}. The {{formula:5dcaf1ad-525c-4915-94b0-87b318e3a25a}} -SUM problem {{cite:dcb468099887b4a71e65f9f65a57ab806565ea16}} is the parametrized version of SUBSET-SUM: given a set of integers, decide whether there is a subset of {{formula:6d092d24-fda9-4e49-9635-a3299ea80f07}} integers that sums to 0. The {{formula:92ba594f-6d83-41cb-823d-5f95d2a38e0d}} -SUM problem can be solved in {{formula:683a5227-9914-4b93-b3f6-2cffe4f7715e}} time. For any integer {{formula:7b4129fe-75fb-4b8f-aefa-34061c5f9544}} and constant {{formula:f680bae3-2944-4506-9ecb-ca1ed8cada2b}} , whether {{formula:231674cc-4233-47b1-a5a5-1189b9f6505e}} -SUM can be solved in {{formula:80ce2703-3091-455a-a3fa-e24ab18d0cf6}} time has been a long-standing open problem. Patrascu {{cite:9481a9758dddb873f168c38027d2e531565eeee6}} and Abboud and Lewi {{cite:2a574bdbc9afbb41a429f411d6b73129b8f622bc}} conjectured that no such algorithm exists.
| r | 22ed561b6da1a7169ff541169a310e20 |
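As a concrete illustration of the problem statement (not of any algorithm from the excerpt), a brute-force k-SUM check in Python; it enumerates all size-k subsets in O(n^k) time, whereas a meet-in-the-middle approach brings this down to roughly n^(k/2):

```python
from itertools import combinations

def k_sum_exists(nums, k):
    """Brute-force k-SUM: is there a subset of k integers that sums to zero?"""
    return any(sum(combo) == 0 for combo in combinations(nums, k))

print(k_sum_exists([-7, -3, 1, 2, 5, 8], 3))   # True: -7 + 2 + 5 == 0
```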
The scope of an explanation is highly relevant to the stakeholder and goals of an explanation, and is related to whether the stakeholder operates at a system or individual level. Researchers found that the scope of an explanation can influence whether or not an individual thinks a model is fair {{cite:64a5b3980eb403457a6bff61a93d49fd6b90411f}}, {{cite:7e2d85fafa0ae6ad501a4db9f3f69855b328230f}}. Policymakers and ADS compliance officers are more apt to be concerned with system-level goals, like ensuring that the ADS is fair, respects privacy, and is valid overall, while humans-in-the-loop and those individuals affected by the outcome of an ADS are likely more interested in seeing local explanations that pertain to their specific cases. Technologists should consider both.
| m | 89c6d553b719f1a083cf49ff7aed236f |
Using the top-1 accuracy to measure model performance on ImageNet is challenging. The ImageNet dataset assigns one category to each image, and the top-1 accuracy tests the model prediction against this label. Although both the top-1 and top-5 accuracies are reported for ImageNet models, the top-1 accuracy is the common measure used to rank the models and determine the state of the art. Beyer et al. {{cite:9a4652a5d45b9f6326ea0c312b85efc6ac84479e}} found that 29% of the ImageNet samples they examined contained multiple objects or could be assigned to synonymous categories. Because so many ImageNet images have ambiguous labels, the top-1 accuracy cannot be a true measure for the models. If mislabeled images are additionally taken into account, it is questionable whether top-1 accuracies above 80% provide a valid ranking of the models. The author agrees with the opinion in {{cite:ac723e5f62ad502cd65d6a28bcd8703e8438ccac}} that the top-5 accuracy is a more valid measure for the models until ImageNet is properly fixed and receives multilabel annotations.
| d | 01f2081ab3e814c7feadbe6b42c19e31 |
We build upon the method proposed by {{cite:95dbb933ef33ec1c34152c1b5bd8898e96755a89}}, which estimates trustworthiness in predictions by a classifier.
The central notion behind this method is that similar instances should receive a similar prediction, which is also the core idea underlying case-based reasoning methods {{cite:4b141c885564ed36a2fef3d675cb215ff26e7365}}, {{cite:3f245d2fc91585f639c874eb0a133588504442b2}}, {{cite:411fc186609e0288ec6e81596d154934a553b8e8}}.
| m | 6b5cf58dd9d6d0334a9e0d7c25b368c9 |
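A simplified sketch of that idea (a distance-ratio "trust score" computed over nearest-neighbour indices), not the exact method of the cited work; the single-neighbour lookup and the small stabilizing constant are assumptions made for brevity:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def trust_scores(X_train, y_train, X_test, y_pred):
    """Distance to the nearest training point of any *other* class, divided by the
    distance to the nearest training point of the *predicted* class.
    Larger ratios suggest the prediction agrees with similar training instances."""
    nn_by_class = {c: NearestNeighbors(n_neighbors=1).fit(X_train[y_train == c])
                   for c in np.unique(y_train)}
    scores = []
    for x, c in zip(X_test, y_pred):
        d_pred = nn_by_class[c].kneighbors([x])[0][0, 0]
        d_other = min(nn_by_class[o].kneighbors([x])[0][0, 0]
                      for o in nn_by_class if o != c)
        scores.append(d_other / (d_pred + 1e-12))
    return np.array(scores)
```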
As an application, the scaling relations of {{formula:b7fb943a-c01a-4f8d-b096-df566f7b805f}} -{{formula:e263c930-aa34-4103-b58a-5b3811c19095}} would be
useful for estimating the contamination of the SZ effect on the CMB, which
is important especially on small scales {{cite:4847dfb494abf5d5e423ab3981a68263bcc77614}}.
Recent simulations have shown that the Planck project would be capable of
probing {{formula:5814a434-b782-4a20-9c70-78512ed430cb}} on the order of {{formula:49cf0c2c-bd89-48e1-8dab-0dfdb9c99edb}} {{cite:4df24f35506f4675a1b38cd943e74767bc227851}}.
This might give a direct test of the universal scaling relations of
{{formula:e5e5a232-4f7e-4b4d-86a1-a063776f481d}} -{{formula:fd8de0cb-6e5f-48b4-bc02-041320777384}} given by the self-similar hierarchical evolution.
| d | a897aafb7c902790b016f618f2c65385 |
The LQR in (REF ) uses the same cost-function matrices {{formula:85c4aaea-c9e6-4411-b767-ee234a9ca06b}} as the MPC. The MPC's predictor model has been discretized at {{formula:3867af6a-2481-4110-bb5a-f7be322ce731}} while it has been recomputed at a rate of {{formula:5b560650-2d72-4ac6-a8bc-5341d1db3862}} , which yielded better flight performance during the simulation. The optimization problem has been solved using the C++ OSQP library {{cite:3802f06af2ecf3917913541ce5f8f25a82053c9d}} and the osqp-eigen interface (https://github.com/robotology/osqp-eigen, http://eigen.tuxfamily.org). In order to connect to the PX4 Autopilot firmware and send offboard commands from the C++ script, the MAVSDK library (https://mavsdk.mavlink.io/main/en/index.html) has been used.
| r | 2e975050c9d84d64682504e1c4ea8796 |
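For reference, OSQP is also available from Python with essentially the same problem form as the osqp-eigen interface used above. A minimal sketch solving a small QP (the matrices below are the standard OSQP demo problem, not the MPC from the excerpt):

```python
import numpy as np
import scipy.sparse as sp
import osqp

# Minimise 1/2 x'Px + q'x  subject to  l <= Ax <= u  (the form OSQP expects).
P = sp.csc_matrix(np.array([[4.0, 1.0], [1.0, 2.0]]))
q = np.array([1.0, 1.0])
A = sp.csc_matrix(np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]))
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u, verbose=False)
res = prob.solve()
print(res.x)   # optimal decision variables
```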
where {{formula:34d0c30e-01e2-4a67-95dd-27f4f280c750}} is the load of the initially attacked node {{formula:29c6e5cf-72f3-4a4c-b8cf-14da316274af}} , {{formula:5b3e2b4d-5444-4a7e-abe8-00c8f5ba3d46}} is the maximal load of the First Failures triggered by the initially attacked node {{formula:bc13de4e-e797-4eb7-b351-3bb52785ea5a}} , and {{formula:c5605473-d699-4636-9964-3afbb441e606}} is a tunable parameter controlling their weights. In fact, when {{formula:f351413e-a315-47ea-91db-dec0eee18f9a}} , HLM is equivalent to MLF; when {{formula:cf3a2ce6-2a22-4fa1-83c6-58744f33e517}} , HLM reduces to the commonly used centrality index, namely the betweenness (load) of the initially failed node. To validate the effectiveness of HLM, we employ the Area Under the Curve (AUC) {{cite:d070f360fc193ea5924ca261e017da4bc47965b0}} to evaluate its ability to predict the final cascade size. We randomly choose one node in the left peak and one in the right peak and then compare their HLM values at each time. In {{formula:380efc1e-bead-44c2-8ce7-9b95a480fb9d}} independent comparisons, if the node in the right peak has a higher HLM value {{formula:091c09ef-fbfa-40e2-b395-6c434a48c94c}} times and the two nodes have the same HLM value {{formula:b823cb99-5806-4215-972a-3d07c50bff62}} times, the AUC is expressed as
{{formula:f1a29b05-e30d-4e10-8b85-1de888069f11}}
| r | b3084a9e6a680ade9e3658f298f4070b |
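A sketch of this Monte Carlo estimate, assuming the HLM values of the two peaks are available as arrays; the array names and number of comparisons are illustrative:

```python
import numpy as np

def pairwise_auc(right_vals, left_vals, n=10000, seed=0):
    """Monte Carlo AUC: probability that a randomly drawn right-peak value exceeds a
    randomly drawn left-peak value, counting ties as 1/2."""
    rng = np.random.default_rng(seed)
    r = rng.choice(right_vals, n)
    l = rng.choice(left_vals, n)
    n_higher = np.sum(r > l)
    n_ties = np.sum(r == l)
    return (n_higher + 0.5 * n_ties) / n

print(pairwise_auc([3.0, 4.0, 5.0], [1.0, 2.0, 3.0]))
```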
Moreton waves are a kind of large-scale atmospheric wave that occurs in the solar chromosphere. These waves were first observed through an H{{formula:b8630dc3-0840-47bf-805b-fa3187a32121}} filter by {{cite:b9c0950b3130e510f53e12c5434e99d54a16c5f0}} and have velocities in the range of 500-1500 km/s and angular extents of 90-270 degrees {{cite:d8c5ee1a1bc30ba50dad4f4d2885b118485d71b6}}, {{cite:e5917c7615cd26b0c988a84396a39a10d9ad8f53}}. They are often associated with a flare or coronal mass ejection (CME), which are considered to be their trigger {{cite:6752b054370923d98110002e0145c06ad1064866}}, {{cite:82698cbf2b35dac0ff52d8d5170fa94c706450d7}}, {{cite:1493b663656b9db289923cf16dd43629d5ba4d77}}. The "sweeping skirt" hypothesis interprets Moreton waves as the rapidly moving intersection of the chromosphere and the flare-produced coronal fast-mode wavefront. In this picture, Moreton waves are closely related to their coronal counterparts, namely EIT waves {{cite:5219f1054fbf8431f2e0cf7e28bae2f35eba3aed}}, which move at significantly lower speeds (2-3 times slower) and are accompanied by type II radio bursts resulting from the rising plasma.
| i | 78f0a65e68bdf5a732e1d7294fb7d0f3 |
Over the past few years, many UDA methods have emerged, which can be roughly categorized as two types: 1) difference-based methods {{cite:f0d214fbeef1ae2330708c02dccc11cc7d27eb43}}, {{cite:971df6a3c8ba1e91ac368141e456d241aee95478}}, {{cite:2e8b6f4e708af2391a09e40ecf8dd894eef4ce80}}, {{cite:bacf3475d60a26cb71289214ce7ba4a1c8147b89}}, {{cite:1230abdc772de813a9db72befd67aa5a4c0e306d}}, {{cite:a8f1a34e9348a320f69e8c7a378b170ddeb1beb1}}, {{cite:c8536459269c9cc37e0ca8b98bde9cd99f87316d}}, {{cite:ec1175bb4e6da7d086c88e28df822b5b8a8f73fa}}, {{cite:fa9c06abdd0c66ecc18727b484a98baf14b9dad2}}, {{cite:ea56ff135caefc5ed596d9f479ee9451a089ae9e}}, which explicitly mitigate inter-domain difference by minimizing statistical metrics such as maximum mean discrepancy, joint maximum mean discrepancy, margin disparity discrepancy, and optimal transport distance; and 2) adversarial-based methods {{cite:40a3e34090856d64b118069d6caf54018693fe25}}, {{cite:cbffa0f95ad64aa3c91cc87a304e9954f5826b78}}, {{cite:61a5703fa0c5a78e2914a058c9be3847e6a47ef5}}, {{cite:aeb928147540119484036431ce4ce35444e0ec63}}, which introduce a discriminator to adversarially learn domain-invariant representations by obfuscating domain-specific information.
These methods are devoted to strengthening the feature transferability between domains to alleviate the problem of inter-domain differences. However, under the UDA setting, for the former, the feature extractor may incorrectly draw different classes closer together, resulting in a decline in class recognition ability, i.e., a decrease in discriminability; for the latter, the discriminator forces the feature extractor to learn domain-invariant representations while ignoring the learning of class-variant representations, which may also lead to a decline in discriminability. This decline in discriminability is inevitable. Since the target domain is completely unlabeled, when the domain gap is large, class-discriminative information is easily lost when performing domain alignment, and the classifier inevitably remains biased towards the source domain {{cite:61a5703fa0c5a78e2914a058c9be3847e6a47ef5}}, {{cite:1e5cda35cb963e8ddeae12c3dd7ae5e97aef0a63}}, leading to an unexpected deterioration of discriminability in the target domain. Based on the above observations, our work focuses on designing a more robust solution that simultaneously enhances feature transferability and discriminability to further reduce the domain gap.
| i | 93c1ad5922bd11b6fad95f3cbad760b0 |
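To make the first family concrete, a minimal sketch of a (biased) maximum mean discrepancy estimate between source and target features with an RBF kernel; the feature tensors, bandwidth, and batch sizes are placeholders:

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Biased estimate of the squared MMD between samples x and y with an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

src = torch.randn(128, 64)           # source-domain features (stand-in)
tgt = torch.randn(128, 64) + 0.5     # shifted target-domain features (stand-in)
print(gaussian_mmd(src, tgt).item())
```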
NAVER-Catcall-Yellow and NAVER-Posneg Results. Table REF shows the experimental results on NAVER-Catcall-Yellow and NAVER-Posneg. In both cases, ST shows the best performance. Applying ST to NAVER-Catcall and NAVER-Posneg yields performance increases of 16.89% and 2.4% in terms of F1-score, respectively. Applying ST has a much more significant impact in the NAVER-Catcall case than in the NAVER-Posneg case, as shown in Figure REF . This is because the former is a binary classification task and the latter is a multi-class classification task. Considering that most methods are ineffective and may even have a negative effect on multi-class classification tasks {{cite:9a6dd868c4e83c137e2fd215cba75c0da96d7070}}, the relatively small increase in performance is still meaningful.
{{figure:52a74974-1b74-4820-9522-edc2611de018}}{{table:3b8c9dce-57f9-42aa-aa32-a8773f685c89}}{{figure:2cb0feb9-0591-41ba-b9a5-3fc277e626c5}} | r | 94336fdc66a87c8af6078130f611a77f |
We compare our method with several representative unsupervised hashing methods, including
LSH {{cite:4bdf03f7b85a3b4f4e21726c225394972fd2babc}}, SKLSH {{cite:98f23da4f13f0b435e39b3a6ae6ccce65de24bf0}}, SH {{cite:878eab544f593cd889382c13e25778493201575c}}, PCAH {{cite:28f360e8add25dcf7b459c9db390c73ca58388d1}}, ITQ {{cite:4f2e9d8f65998879c504b075bd69beba59b8316e}}, SGH {{cite:0d6635a5b662e8b8dda36ebc88745a93e2b5de87}}, UH-BDNN {{cite:62531d728ea96e633d89da2a100bfc6f431832e8}}, DeepBit {{cite:ae8b53f096b310cc90c86eb9147f1bd37c1a4408}},
SSDH {{cite:edf033e6040780e72fb2ff054acba3cfe03c879f}}, SADH {{cite:229890df23c93fb1df70e858e3197aa295b47101}} and TBH {{cite:ea5addbf5d5a5729c96f40a3badb02c4d42f14b5}}. The former six are shallow hashing methods while the latter five and ours are deep hashing methods.
For fair comparisons, the shallow hashing methods and UH-BDNN abandon hand-crafted features and instead use deep features extracted by the VGG16 model for the experiments.
The source codes of all the baseline methods are provided by their authors.
We carefully tune the parameters of the models to fit our datasets and report their best results for comparison.
| m | 40562a14a2e8910e569a02784ed1d899 |
To achieve this result, our construction is randomized, i.e., some weights of the neural networks are selected at random, and the simulation of the WL test is correct with high probability {{formula:91596553-37ea-4c2c-8708-8852ce10dd22}} . Thus, our construction can be viewed as creating a distribution over neural networks computing the function {{formula:85586c14-e5e8-42b1-b2ad-203be6a6d62e}} .Note that selecting {{formula:27dea52b-14ee-4321-adf2-cae25959b360}} at random is quite different from random node initialization, e.g., as investigated in {{cite:ce677d29d1d9686f950e3fa7430782e4d064f7ef}}. In particular, in our model all nodes use the same function {{formula:abd6c79d-8210-4dac-8616-fcad31e35967}} (with the same parameters), without breaking the permutation invariance property of GNNs, as in the standard GNN model. In particular, this implies that, for each graph, there exists a single neural network implementing {{formula:f19fc85b-265e-49e3-803b-81b0904beb3d}} that accurately simulates WL on that graph.
The size of the network is exponentially smaller than in {{cite:ab7e8b7bf8046d78b436e25ce9f6b3b452204cd8}}, although the construction is probabilistic.
| r | 48a58a292ee9c7872446b4399c7fe1e5 |
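For context, a small sketch of the 1-WL colour refinement that such constructions simulate; the adjacency-list representation, number of rounds, and example graphs are our own illustrative choices:

```python
def wl_colors(adj, rounds=3):
    """1-WL colour refinement on an adjacency list {node: [neighbours]}."""
    colors = {v: 0 for v in adj}                       # uniform initial colouring
    for _ in range(rounds):
        # Signature = own colour plus multiset of neighbour colours.
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        relabel = {}
        for v, sig in sorted(signatures.items(), key=lambda kv: kv[1]):
            relabel.setdefault(sig, len(relabel))      # compress signatures to integers
        colors = {v: relabel[signatures[v]] for v in adj}
    return colors

# Two graphs are distinguished by 1-WL if their colour histograms differ.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path3 = {0: [1], 1: [0, 2], 2: [1]}
print(sorted(wl_colors(triangle).values()), sorted(wl_colors(path3).values()))
```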
There are many centralities for characterizing the importance of a node in a network {{cite:cf7e7504f8a0c8286dba82957b5a708731ccdeb8}}. Among those, local quantities fail to give interesting information about global structures while path-related measures are more relevant to describe the large-scale organization of networks. In particular, the betweenness centrality (BC), introduced in {{cite:d1f9088511eb048a3b353c0264bd838e8b8ac700}} is a good probe of the structure of a network. Also, if one assumes that (i) individuals or goods travel on shortest paths in the network, and (ii) the demand is uniform (each pair of nodes constitutes an origin-destination couple) then the BC of a node (or an edge) corresponds to the local traffic that can be found at this node. In reality, the two assumptions are not always satisfied and how much of the real traffic the BC can explain is a debated question {{cite:752fd26f7444cd57b5a30624d202afb8828460fb}}, {{cite:f050d6e81a8533d0a91e1372b8d1f8ce9724f5ce}}, {{cite:2be16b065c98891e05ecbafb7b05c3daa309f181}}. In general, highly congested points are signaled by very large values of the BC and this is relevant not only for transportation networks but also for communication networks such as the Internet where information packets can experience congestion problems at routers. In a router-based communication network, all nodes are connected to it directly and the BC is irrelevant in this case. A new direction for modern design of physical layer networks is to construct `wireless ad hoc networks' where routers are absent and packets of information are routed in a multihop fashion between any two nodes {{cite:e71295ab4e10c5bdf293fea971b030da0c03bfc4}}, {{cite:471d99e6f0b85b7537a38b175efba8d8f191b045}}, {{cite:394c1ae184ba2cc6b5d2b0360a66aa66ec9fbcc6}}. This design allows for much larger and flexible networks and are nowadays realised under Wi-Fi direct standards. For these decentralized systems, the BC is a very relevant quantity and can be used as a criteria for identifying cluster nodes {{cite:71a8ebd37c9f6fd5e020924c2f8c675da85f8777}}, or to identify the vulnerability backbone of the network {{cite:b5ca68a69cdde90698788a14214b2c2d41ca97a1}}. Still in communication networks, it is intuitive to think that the traffic
between nodes tends to go through a small core of nodes. In this case,
the shortest paths are somehow curved inwards and it has been suggested
that this is related to the global curvature of the network
{{cite:93c1dd71840fd5b9cff8487c2f429407cc02d21d}}, {{cite:1042598afc2c54f5e5c3d669a7d4c536c98b3c0b}}. A natural way to measure the
impact of the structure on the load in the network is then to
understand how the maximum traffic - approximated by the maximum BC - varies with various graph
properties and scales with the system size measured by the number of nodes {{cite:93c1dd71840fd5b9cff8487c2f429407cc02d21d}}.
Narayan and Saniee {{cite:93c1dd71840fd5b9cff8487c2f429407cc02d21d}} empirically studied various networks
and found essentially two families characterized by different values
of the exponent that governs this scaling. These authors proposed the idea that
this behavior is controlled by the curvature of the network and this
was justified mathematically by Jonckheere et
al. {{cite:1042598afc2c54f5e5c3d669a7d4c536c98b3c0b}}.
| i | 612ea231f86971cfeb112c10e3a792dd |
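A toy version of such a scaling experiment using networkx; the graph model, sizes, and connection radius are arbitrary illustrative choices, not those of the cited studies:

```python
import networkx as nx

# Maximum betweenness centrality on random geometric graphs of increasing size.
for n in (100, 200, 400):
    g = nx.random_geometric_graph(n, radius=(8.0 / n) ** 0.5, seed=0)
    bc = nx.betweenness_centrality(g, normalized=False)
    print(n, max(bc.values()))
```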
At the moment, most datasets proposed in Continuum are image-classification based. We plan to add other types of datasets, such as segmentation and detection {{cite:72cae4009eded9d84e8f10c888a3ff2ec4e28eae}}, {{cite:e08292ec8ce988dbe5ebc914812786c72c161af4}}, {{cite:9dd4430dcc4c18fe89b2c4c5d37420cb0e18c01f}}. We also plan to expand our portfolio of text-based datasets and to provide audio-based datasets.
| d | 49d48d63694acfe90343543206ac4b21 |
This difference in the color structure of the quark-antiquark and quark-quark systems makes it possible to extend the quark-antiquark model to the quark-quark system simply by changing the color factor {{formula:58914c55-85f8-478e-aac4-9b7e2e448bd6}} and the string tension {{formula:c0ab7253-3ad0-4314-8450-5038cfaf4788}} . Changing the color factor {{formula:a3afe942-4994-4970-9246-55c1e265387f}} (for the quark-antiquark system in the color singlet state) to {{formula:a81aadc9-7bd0-4000-8c7b-ca51884733f5}} (for the quark-quark system in the antitriplet color state) is equivalent to introducing a factor of 1/2
in the Coulomb part of the Cornell potential for the conventional quark-antiquark system. This factor should be taken as a global factor since it comes from the color structure of the wave function. Therefore, the string tension should also be divided by a factor of 2. The general rule for obtaining the diquark potential from the quark-antiquark potential is thus {{formula:b9abd194-019f-4f03-8938-eb989387136c}} . The same conclusion was reached in different tetraquark models {{cite:dbb8655a77ea2528a818723724b8133a65541e9c}}, {{cite:2684f69a842a0314e2ddaecadd4d4abd0b9764e1}}, {{cite:e345c07cdee68595df2d0eb499abd54e4c5d325c}}, {{cite:5e8cff7d6df3da7bb93663312c3b6c9e34033684}}. So we also divide our potential (Eq. REF ) by a factor of 2 to obtain the diquark spectra.
| d | 6460d848886cf39b9579d509ff3be39c |
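For illustration only, the factor-of-2 rule applied to the familiar Cornell form (this is the generic Cornell potential, not necessarily the exact potential of Eq. REF ):

```latex
V_{q\bar q}(r) = -\frac{4}{3}\,\frac{\alpha_s}{r} + b\,r
\qquad\Longrightarrow\qquad
V_{qq}(r) = \frac{1}{2}\left(-\frac{4}{3}\,\frac{\alpha_s}{r} + b\,r\right)
```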
Another important application of the GCA is the precoding of orthogonal frequency division multiplexing (OFDM) signals to reduce their peak-to-average power ratio (PAPR) {{cite:943cc3cedd557123f4b7203064dd8891699df472}}, {{cite:7f787c56c58b19f3ccda1a1c25eb5fed5a29c20f}}. However, the code rate is restricted by the total number of different Golay sequences obtained by projecting the high-dimensional arrays {{cite:3b1f2b5fdc906b24b42dda190197e10cf81189f5}}. The enumeration of GCAs of some special sizes has been extensively studied in {{cite:943cc3cedd557123f4b7203064dd8891699df472}}, {{cite:7f787c56c58b19f3ccda1a1c25eb5fed5a29c20f}}, {{cite:718a69460162ee41ec85ec7c3460eec4222e96d6}}, {{cite:3b1f2b5fdc906b24b42dda190197e10cf81189f5}}, {{cite:922d4d262bba15e8623f7e36cbbfddff6da66a6a}}.
| i | cb38f43ccea1ca0b626c86c6594dc9e3 |
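For context, a tiny numpy check of the defining Golay-pair property, namely that the off-peak aperiodic autocorrelations of the two sequences cancel; the length-2 pair used here is a textbook example, not one from the excerpt:

```python
import numpy as np

def aperiodic_autocorr(seq):
    """Aperiodic autocorrelation of a +/-1 sequence at all non-negative shifts."""
    seq = np.asarray(seq, dtype=float)
    n = len(seq)
    return np.array([np.dot(seq[:n - k], seq[k:]) for k in range(n)])

a, b = [1, 1], [1, -1]
print(aperiodic_autocorr(a) + aperiodic_autocorr(b))   # -> [4. 0.]: zero off-peak sum
```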
Among the main issues in mathematical optimization, optimality
conditions play a key role in the theoretical understanding of solutions and
their numerical computation. In the beginning, such conditions were
established for linear or smooth optimization problems. Later on,
developments in variational analysis allowed the researchers to extend this theory to general nonlinear nonsmooth convex
optimization problems defined in infinite-dimensional spaces (see, e.g., {{cite:33fb0ed941c536750c025a2ef4722f1c2648c4e0}}). In
the same spirit, generalized differentiation properties became an
intensive field of research with numerous applications to nonsmooth and
nonconvex mathematical programming problems. Nevertheless, in order to provide such general calculus rules and compute optimal conditions, a certain degree of smoothness is required, while working either in Banach spaces with the smooth norm or in Asplund spaces (see, e.g., {{cite:074764bb13526b0fdecd67a029ffa1fc6b45cf29}}, {{cite:4ddec4253121b5cca54c7622a60352f862dcec54}}, {{cite:1eb8118ff2b3a0c7b02e711709d619d5565be2f7}}, {{cite:ec48fbdb7e9b3b53040a106d4b5d07c14e2ae4da}}, {{cite:2e787810041133d214c360b3c1a974c80524d46b}}). In this paper, we provide
necessary and sufficient optimality conditions for a general class of
optimization problems under new qualification conditions which constitute
real alternatives to the well-known Slater constraint qualification. Our
approach is based on the notions of the (regular) subdifferential and
coderivative, and we show that these tools work outside the
scope of Asplund spaces, not for the whole family of lower semicontinuous
functions, but for the so-called class of B-DC mappings.
This class of mappings is introduced in Definition REF , and constitutes a slight extension of the concept of DC functions/mappings.
| i | 7f01d65d2bc17565ec3d98c9bf162208 |
Adaptive synthetic sampling (ADASYN) {{cite:aa586f5552a791976fd200bce51121c58486a92d}} was proposed with the goal of eliminating bias and moving the classification decision boundary towards the hard examples. The primary idea behind ADASYN is to use a weighted distribution over the minority class examples based on their learning difficulty, so that more synthetic data is created for the harder minority class examples than for the easier ones. The efficacy of this method is demonstrated by experiments on a variety of datasets using five different evaluation measures.
| m | bcfb3e1dbbe60a816b2afd70818b99db |
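A minimal usage sketch with the ADASYN implementation in the imbalanced-learn package; the toy dataset and class weights are illustrative:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import ADASYN

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_res, y_res = ADASYN(random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_res))   # minority class is oversampled adaptively
```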
Fig. REF shows the {{formula:40328ada-63df-4cf5-bc4b-2eebbbef52b4}} reconstruction. Whereas its left panel shows the results obtained using {{formula:ac17b596-69ca-4613-a293-3022116f3f5d}} measurements, the right panel shows the results from SN Ia data. In both cases, one can see that {{formula:07ad8d1d-bac0-426d-b257-9ac20b0c4e7b}} CDM modelFrom now on, when {{formula:a1cb90fc-dd43-4364-8881-6847ccd73efc}} CDM model is compared to our results, it is implicit that we refer to the best fit of the Planck analysis using data from TT,TE,EE+lowE+lensing+BAO {{cite:b27fdbb71e4052504ad36f1ebcf4951aec66dd8b}}. is consistent at least at {{formula:022aa9df-5e82-410e-8920-05972ee19903}} CL over the entire redshift range. Furthermore, in both cases, the best fits of the reconstructions indicate that the universe switched from a decelerated phase to an accelerated one at around {{formula:e18c7730-61d1-435f-8704-84c7bcb4b4b6}} . We estimate the CL of {{formula:72d09ba7-8f37-40d3-bca1-a97f953eb7bf}} at {{formula:e5781719-dc79-4114-8f35-3cc7b6016e25}} up to {{formula:093de5a6-b8e3-4602-97f7-6796500ba6cc}} via Monte Carlo sampling for the two data sets and confirm the present cosmic acceleration with a higher CL. Our results indicate current acceleration at {{formula:5374c799-ce45-4ead-b3bd-3da42b20b507}} CL with the {{formula:9520763f-3b32-448f-8013-b8107041f395}} data and, if we assume the gaussianity is maintained at higher levels, the SNe Ia data confirms cosmic acceleration at {{formula:a673410a-f74a-4fd0-886b-c7f9a29e1b61}} CL.
It is worth emphasizing that these results only assume a dark sector composed of pressureless matter (whether interacting or not) combined with a general DE component, which is utterly free from any hypothesis about its nature. In the SNe Ia analysis, even though all data points have been used, we restrict ourselves to showing only the result for {{formula:c3a21ada-46e0-4baf-a01c-5743bebd2005}} . For higher values of the redshift, the SNe Ia analysis cannot properly constrain {{formula:9c2139c4-7094-4e0a-a267-b195957c05b3}} , and it has no physical meaning.
| r | b2bef23ae07b1768238d4e87ea255be1 |
The nodules are significantly smaller in size than the lungs. Therefore, patches of size {{formula:ba5c6047-6f16-4d01-b191-63245e4c40ce}} are extracted from the region segmented as lungs in each slice in Stage 1 for further processing in Stage 2. A LeNet {{cite:c1ae5e0cb695b335aa5e8e8cf5cdf63339ee06cd}} based classifier is used to detect the presence of nodules in the patches, as shown in Fig. REF . The third convolutional layer of the standard LeNet architecture is modified to have a kernel size of 5. An additional fully-connected layer with 256 neurons is introduced after the last sub-sampling layer. Also, the number of neurons in the last layer is changed from 10 to 2. Thus, by evaluating patches within the lung region, the presence of nodules in each slice of the CT volume is determined.
{{figure:570aa36c-b5df-47e1-91ab-70e983675a41}} | m | 0ac13973dcf361009855347be5e38e2b |
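A PyTorch sketch of one plausible reading of this architecture; the 64x64 patch size, channel counts, and exact placement of the extra 256-unit layer are assumptions, since the excerpt leaves them unspecified:

```python
import torch
import torch.nn as nn

class PatchLeNet(nn.Module):
    """LeNet-style nodule/non-nodule patch classifier (illustrative, not the exact model)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 120, kernel_size=5), nn.ReLU(),  # third conv with kernel size 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(120 * 9 * 9, 256), nn.ReLU(),        # additional 256-unit FC layer (placement assumed)
            nn.Linear(256, 84), nn.ReLU(),
            nn.Linear(84, 2),                              # nodule vs. non-nodule
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = PatchLeNet()(torch.randn(4, 1, 64, 64))   # assumed 64x64 lung patches
print(logits.shape)                                # torch.Size([4, 2])
```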
Infinite dimensional frame theory, which originated in the work of Duffin and Schaeffer {{cite:fe2abe974fbf7d3ec807ee1e3d0141dccc0287dc}}, {{cite:0101d6823bc037c5619137adf105d5450c712f74}}, influenced the development of frame theory for finite dimensional Hilbert spaces towards the end of the 20th century, and the theory developed rapidly in the first decade of this century. Revolutionary works on finite frames include {{cite:8a7f48c71f5f950f056970868236e2ac27350186}}, {{cite:4b52752af4e7b99065e64011a95bec94fdae4e56}}, {{cite:0e1bc0ea1eace21f5c0375bbb9dbf80d5511f0b8}}, {{cite:f4f900ee4d5ecffaf8bc4cd7a2b451fd37dbc4d3}}, {{cite:8da483271a8da610c3a2e1376663d4289656e526}}, {{cite:5929b0782bfa8e0c1d415364beb958259f887847}}, {{cite:faca96ac1d09a9426f581b7df3a6694c346980cb}}, {{cite:3b91a44c4edd48a2a7072d183ce2d39757cd2d30}}, {{cite:3d4df06220d2afce916674aedc7c3fbdb42dda0f}}, {{cite:256f99e9f46c21055ae8321713f8e10b507d3ef7}}, {{cite:5e4bcd6c632d59578f3cab8f90fe5597b5f11ffa}}, {{cite:c88aeafed19e3c0762e1b6091fe5b350b12e2835}}, {{cite:e16c15d00217371deae1005ed6a6137cd5aa6b93}}. The finite dimensional definition of frames reads as follows. The letter {{formula:7c3bd2d0-ffc9-479a-89a8-519a0d01959b}} always denotes a finite dimensional Hilbert space. We denote the identity operator on {{formula:cd0aedfe-2b91-4664-a860-08101bf3697c}} by {{formula:4d1ba6b4-57de-4505-8635-e8db70cf5ccb}}
| i | 4e828c2113de7ec6b2be41fefada46af |
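For reference, the standard finite-frame definition that the excerpt leads up to (notation ours, not necessarily that of the source): a finite family of vectors in the Hilbert space is a frame if there exist constants 0 < A <= B such that

```latex
A\,\|h\|^{2} \;\le\; \sum_{j=1}^{n} \bigl|\langle h, \tau_j \rangle\bigr|^{2} \;\le\; B\,\|h\|^{2}
\qquad \text{for all } h \in \mathcal{H}.
```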
The second issue in LD is how to choose a good teacher model.
An intuitive choice of teacher model is the detector with the highest accuracy.
However, when the gap between student and teacher is too large, LD may be less effective.
To address this issue, we further introduce the teacher assistant (TA) strategy {{cite:e337c12560c48ea460fa01415ba959c8af7b5738}} for LD.
By introducing several intermediate assistant models, the gap between the student and teacher models can be bridged, and thus LD with TA is not sensitive to the choice of teacher model.
| i | 6efbf33a6d65187b1ba54bed76df7831 |
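As a sketch of the mechanism only, the generic temperature-softened distillation loss, which with a TA is applied first teacher-to-TA and then TA-to-student; this is not the specific localization-distillation loss of the excerpt:

```python
import torch.nn.functional as F

def soft_distill_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened distributions (standard KD loss)."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```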
On the rôle of the visual modality.
The qualitative results indicate that there are still cases for which it remains unclear how the multimodal system uses the visual component.
Previous work in the context of both multimodal speech recognition {{cite:90cde02918c65bc28e4b7dd976a669e04a85a7c3}} and multimodal machine translation {{cite:3698e99656687897ed8278a6da3367395952531e}} has observed that the visual channel helps in unexpected ways.
In particular, Wu {{cite:3698e99656687897ed8278a6da3367395952531e}} suggest that the visual branch plays the role of a regularizer and is not necessarily injecting useful information into the system.
In our case, while we believe that the proposed speech data augmentation may alleviate the issue to some extent, its effect may still be insufficient.
We conjecture that the problem lies in the loose coupling of the input modalities.
A possible solution for a more pervasive fusion would be pretraining a self-supervised audio-visual model on large quantities of data.
Such audio-visual systems are becoming commonplace, but they have not usually been applied in the multimodal speech recognition setting.
The closest works in this direction are the ones of
Hsu {{cite:249af534813c2f50a89353d052c7204731266160}}, which uses the pretrained audio-visual representations for unimodal speech recognition, and
Rouditchenko {{cite:c62491ef0f16b1f3cfef40e27bbad05b6921b02a}}, which performs multimodal text retrieval.
| d | 1642bce9b7feec79e90ab7795bf54583 |
We wished to verify that the results illustrated in these figures are indeed improving because our clustering technique was finding groups of similar users, which allowed the prediction techniques to learn more personalized classifiers for these groups.
For instance, it is conceivable that splitting users into groups and learning multiple classifiers is helpful regardless of the groups picked, as this procedure would be similar to bootstrap aggregating {{cite:999d3ce47fd18976881d39276150fad29c29918d}}, which allows simple classifiers to model multiple weak correlations in data.
To test this, we repeated the FriendPredict experiment, but clustered agents into {{formula:b433b865-3ca8-400f-a629-ecae9793797f}} clusters randomlyThis random clustering essentially partitions the data set into {{formula:41db34cb-1ac9-4f67-87c9-07fe9aeae307}} random samples (a close emulation of bootstrap aggregation)..
The results illustrated in Figure REF compare this random clustering to clustering by social circle overlap.
This figure clearly shows that the reduction in error is largely due to the non-random clustering approach.
The solid and dashed lines start off identically at {{formula:9b290576-2e15-4095-afd8-077d36fb1585}} on the x-axis (no clustering), but as the number of clusters increases, the error decreases for the case where social clusters are used.
We take this as evidence that the clustering technique is improving accuracy because clustering genuinely enables more personalized predictions, not simply because the number of models being learned has increased.
{{figure:863c8084-abc8-47d0-8a16-ea03e4b66d5d}} | r | 0ed182243ed760043f5ea8d25619d7c0 |
We compared our approach to other competing methods, including, for completeness, those which can only work with differentiable oracles. For models that originally used a GAN prior, we instead use a VAE prior so as to make the comparisons meaningful. (As shown in {{cite:7af37625a51e0e26ae6fab4d5709a9a3f9c98c28}}, the GAN and VAE appear to yield roughly similar results in this problem setting.) We compare our method, CbAS, against the following methods:
(1) AM-VAE—the activation-maximization method of {{cite:4d01bdc0177c3ba055550575d61bc4ba42f9710f}}. This method requires a differentiable oracle.
(2) FB-VAE—the method of {{cite:bda38a6fad596e797ef5d8f1ab302e376fb28510}}. This method does not require a differentiable oracle.
(3) GB-NO—the approach described by {{cite:a28093bcf94e1dc000d5288e32fa50bd340f5026}}. This method requires a differentiable oracle.
(4) GB—the approach implemented by {{cite:a28093bcf94e1dc000d5288e32fa50bd340f5026}} which has some additional optimization constraints placed on the latent space that were not reported in the paper but were used in their code. This method requires a differentiable oracle.
(5) DbAS—a method similar to CbAS, but which assumes an unbiased oracle and hence has no need for or ability to incorporate a prior on the input design space.
(6) RWR—Reward Weighted Regression {{cite:800a1def6a18bc008092424f31bac64a70cba7b4}}, which is similar to DbAS, but without taking into account the oracle uncertainty. This method does not require a differentiable oracle.
(7) CEM-PI—use of the Cross Entropy Method to maximize the Probability of Improvement {{cite:554c8ef1a584b60042478f4269305eb00efe2eb8}}, an acquisition function often used in Bayesian Optimization; this approach does not make use of a prior on the input design space. This method does not require a differentiable oracle.
Implementation details for each of these methods can be found in the Supplementary Information.
| m | 2098124e9bb9ad62567c9f1c5608de19 |
To calculate the Bromwich contour integral (REF ) we first note that {{formula:98400488-8f7e-40a5-95f2-c358eba7a6a5}} has a branch point at {{formula:2c2f4a09-2d82-476d-a925-98c0f234914e}} . Therefore, we use the contour of Figure 1 with a branch cut along the negative real axis, cp. {{cite:212925962d93317b674ee0da280a8e2845ef42bd}}. Furthermore, because the integrand has no poles within and on the contour {{cite:6d38cb5c7f57229361c885bf34cf8e91589f1a9e}} and because the contribution from the small circle around the origin vanishes, we obtain
{{formula:0de537e8-c09e-49e8-8bdf-1c62b39fee7e}}
| i | 9edf61accfd30aac159e9cf1c5086197 |
In this paper we see that the JT theory describing the near horizon dynamics of near extremal BTZ is more capable of describing the departures from extremality than previously explored in the literature {{formula:5ea35aad-825e-467b-b1bc-9dcfb94c1e45}} the constant angular momentum departures from extremality. This is made possible by writing the metric in terms of in- and out-going null geodesics with angular momentum per unit energy {{formula:b087fc53-d8a2-4803-b249-1a6969ae08a8}} (REF ), first used in {{cite:68eafef9df8dd5aa56f9f95c61ab99abb3b5e9ab}}, and then taking a simultaneous near extremal, near horizon limit generalizing similar previously known limits {{cite:5c253e973536c2ba13d74bebc515e7f4e7f5b1d9}}, {{cite:4d9aa56738bfd8a99365f7b888159ae604ed4005}}. The thermodynamics of the JT theory obtained about such a near horizon metric parametrized by {{formula:254423dd-c96a-4cf7-bdfc-6958273d44b3}} , which we refer to as JT{{formula:a9b19d9c-9b80-43ed-8b66-898bb06aa797}} , is consistent with the near extremal thermodynamics of BTZ, where the excess mass and angular momentum are related by {{formula:cf005393-5dbe-49a9-8f38-fdccd4c5b2f3}} . We also verify that the behaviour of the excess mass and entropy above extremality in the JT{{formula:bc8cace1-3de6-4800-a1a1-2be10d7ce6c5}} theory matches that of the relevant near extremal BTZ configuration. Thus, we also find that the expected form of the first law of black hole thermodynamics is reproduced by the near horizon description. A crucial requirement for this match in the thermodynamics is to identify the near horizon {{formula:0e28a522-1928-4d13-a2b2-c194aca5c6df}} throat temperature as {{formula:d7ab2436-399f-4004-b703-95c60481cdb7}} . As a consequence, we see that a computation of the 4pt OTOC of 2d probe scalar fields along the lines of {{cite:95e4331cef9a042632b18e04040469461f535a8e}}, {{cite:6c0d45600bde096c5b31f554fa0ff2fdb7ec8d35}} exhibits an exponential scrambling behaviour with a Lyapunov index {{formula:673e7856-427e-47a2-b330-cbf75665c6c1}} . We are therefore able to partially see the near horizon (IR) dynamics that gives rise to the sawtooth-like behaviour of the OTOC in a state dual to rotating BTZ {{cite:118f4d6cfee1333df6bec14436dfcb1f0144ce6b}}, {{cite:7001eb2a49394343a097a3bf845b006aec5c9d7a}} and to the scrambling of mutual information controlled by {{formula:20b8c146-9bd9-407d-9b0c-abb8d57d6a66}} {{cite:68eafef9df8dd5aa56f9f95c61ab99abb3b5e9ab}}. This near horizon description also demonstrates how the generalized bound on the growth of chaos derived in {{cite:06504835ac5d7b8344ff33e759187ecb9ac11044}} can be viewed holographically.
| d | 9f1f834f7cff0bb81fc2e43a27e4c5b7 |