| text (string, length 54–548k) | label (string, 4 classes) | id_ (string, length 32) |
|---|---|---|
Natural language explanations alone, however, might cause unwarranted trust in the human recipient {{cite:125ad4bb9058c2c715ac775815b5b08c91161c58}}, because the models that generate them are usually optimized via direct supervision towards a few human-acceptable gold rationales from static datasets {{cite:cfa57eab18c748dfeeddf6ebc6b5579694d44f05}}.
Other explainability methods excel in faithfulness (sometimes called fidelity) {{cite:44e5ab9e184dac1d994c2086912bee554528b77d}}, which measures how truthfully an explanation reflects the internal representations of the explained model.
In particular, {{cite:3bfe1097d4ece7723f42b75089bd2ccb3a1d7828}} found that faithful explanations are not achievable with free-text rationalization models, and truly faithful explanations might be impossible altogether {{cite:44e5ab9e184dac1d994c2086912bee554528b77d}}.
| m | a554214852cf573f863e4c5769390040 |
Results shown in Fig. REF correspond to the QCM algorithm run on the IBM Quantum processor ibmq_montreal – the qubits used are indicated on the device map. Comparison simulations were run on the Qiskit QASM simulator. We have plotted, as a function of the trial-state parameter {{formula:f131df71-50e2-4f21-af30-937c14aa22d3}} , the moments {{formula:a341f9ab-d36b-45d3-ad0d-4ea12e1793be}} , and associated cumulants {{formula:921cabdf-f776-41ee-80f4-f0c9299c0f49}} , assembled from the QC measurements of the TPB sets {{formula:bd680fda-306f-41ab-be16-6ae4660de4c9}} up to {{formula:0429df7b-1b0b-4df7-9b4d-b6d16eebc336}} . Quantum calculations were carried out using {{formula:ba6f52e4-158a-4c17-a4e5-26b6f9b50745}} shots per expectation value. Note: these are raw results with no attempt at error mitigation or improved sampling {{cite:5241c3afda65fdb3fe8cac03770a27f629755434}}, {{cite:eee38d36795c4122d46034eab98149e437f2c00d}}, {{cite:da06f1c4d23c934d482398527583679f990a2aeb}}, {{cite:8161d4eb969f0e355e254e0739d29b0aeb2dfa5f}}. Compared to the exact/simulation results (solid lines), the moments computed on the quantum computer system are surprisingly free of shot noise, with deviations largely due to the device errors. The cumulants have higher statistical noise, as expected given their composition in terms of the moments. In Fig. REF (d) we plot the infimum estimates {{formula:0f395894-b150-4c9c-a3d6-a1d89bbc4dfe}} obtained from the device runs together with variational results on {{formula:3e105a80-8170-40cd-9c4c-042597147997}} and simulations carried out for different noise levels (zero to 8{{formula:aa8f61d7-7556-433b-b9dc-94ceec5557a6}} device default error model). We make the following observations:
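For reference, cumulants can be assembled from measured moments through the standard moment-cumulant recursion; a minimal sketch of this step (illustrative only, not the post-processing code used for these runs):

```python
from math import comb

def cumulants_from_moments(moments):
    """Convert raw moments [<H>, <H^2>, ...] into cumulants via the
    standard moment-cumulant recursion (illustrative only)."""
    kappas = []
    for n, mu_n in enumerate(moments, start=1):
        kappa_n = mu_n - sum(
            comb(n - 1, m - 1) * kappas[m - 1] * moments[n - m - 1]
            for m in range(1, n)
        )
        kappas.append(kappa_n)
    return kappas

# Example: moments of a unit Gaussian give cumulants (0, 1, 0, 0)
print(cumulants_from_moments([0.0, 1.0, 0.0, 3.0]))
```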
| r | 2dc1701aad279ed49c0e8e0e7937519d |
To determine the amplitudes for the systems consisting of pseudoscalar mesons and baryons, we use the lowest order chiral Lagrangian
{{cite:4f4ef2e564556941b6c91da5fc74a0fec42cbdc5}}, {{cite:32b52fa8c552b6b5379b1191918a2565a61de5a8}}, {{cite:28e6121d87456648bef04e6fc5abcb64846de576}}, {{cite:ec0c7a3a8cedc15475203a10cc26c59c78b07708}}, {{cite:61474f0f1de12118bcae099b082dcf51ea4cf9ad}}, {{cite:9192773c201407b7533aadfc85456d8d4afe8a7b}}, {{cite:fe897feb3b2fc32291fad192b9573aacacd1070c}}
{{formula:48152308-ba34-48ce-8a0e-2e4119a8722d}}
| r | ffe13f6a515ad159dd70e6f4d3a52094 |
The MolTrans framework learns to predict DTI as follows.
Given the input drug and protein data, a frequent consecutive sub-sequence mining module first decomposes them into a set of explicit sequences of sub-structures using a specialized decomposition algorithm with inputs consisting of vast unlabelled data. The outputs are then fed into an augmented transformer embedding module to obtain an augmented contextual embedding for each sub-structure through transformer encoders {{cite:6cf5bdf361034d875fe66ccbe85545e1781a1829}}. Next, in the interaction prediction module, drug sub-structures are paired with protein sub-structures to produce pairwise interaction scores. A CNN layer is later applied to the interaction map to capture higher-order interactions. Finally, a decoder module outputs a score indicating the probability of pairwise interactions.
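A minimal sketch of this pipeline shape (hypothetical vocabulary sizes, layer counts, and module names; not the released MolTrans implementation):

```python
import torch
import torch.nn as nn

class MolTransLikeDTI(nn.Module):
    """Sketch of the MolTrans-style flow: sub-structure tokens -> transformer
    embeddings -> pairwise interaction map -> CNN -> interaction probability."""
    def __init__(self, drug_vocab=2048, prot_vocab=4096, dim=128):
        super().__init__()
        self.drug_emb = nn.Embedding(drug_vocab, dim)
        self.prot_emb = nn.Embedding(prot_vocab, dim)
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.drug_enc = nn.TransformerEncoder(enc, num_layers=2)
        self.prot_enc = nn.TransformerEncoder(enc, num_layers=2)
        self.cnn = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.decoder = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(8, 1))

    def forward(self, drug_tokens, prot_tokens):
        d = self.drug_enc(self.drug_emb(drug_tokens))      # (B, Ld, dim)
        p = self.prot_enc(self.prot_emb(prot_tokens))      # (B, Lp, dim)
        interaction = torch.einsum('bld,bmd->blm', d, p)   # pairwise scores
        h = self.cnn(interaction.unsqueeze(1))              # higher-order patterns
        return torch.sigmoid(self.decoder(h))               # interaction probability

scores = MolTransLikeDTI()(torch.randint(0, 2048, (2, 50)),
                           torch.randint(0, 4096, (2, 200)))
```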
{{figure:6f0f9bd2-704d-4722-adfe-e325a7133bcf}} | m | da32dd2cb040e91aa133bda6e3c81a28 |
Writing (2.16) as {{formula:d7c07fdf-332f-43a5-8315-c80d4ae5fa1f}} , substituting {{formula:c2c97407-97f6-48da-ad73-802c41ff7aa7}} for {{formula:71cba5c5-0f09-4250-89f5-8cda04e55975}} , and using the {{formula:4ebad8d3-7e80-4285-81b3-23b38868b611}} -contact property: {{formula:57d36a01-d030-4abe-831c-e4c1e1ef1e68}} {{cite:099823125b275ee40aa2c4781bf4e14343622f4b}}, we get
{{formula:db44fc4b-ce60-447e-80b3-2bc5c6811600}}
| r | 8017ec225a9d45c1ed47a3c044083796 |
Under the above assumptions, it was pointed out in {{cite:a8bee4533f1c3495a41bdba8de15970923982c9f}} that
the Cauchy problem of the compressible MHD
equations (REF )-() has a global weak solution,
which can be stated as follows (see also {{cite:cd6a8195552ab1665a60c1a2b3fb4959466185e8}}).
| r | 0b0e53a47e2f5bddd538011cbbabaafa |
The favoured scenario of binary formation is fragmentation during the runaway collapse of a cloud core
{{cite:952631ff93ed58aa4cb50cb7978316fc1ae38394}}, {{cite:ca9425b524c464b92dd4324920606617e2ffd9a8}}, {{cite:a29ce606013b2e261a0bcbc8eaca848d157e9fab}}, {{cite:ec8cb4fdc794f8ae1ba9a2fe9f6294d9cdd49015}},
and fragmentation in the protostellar disc after the runaway collapse {{cite:b48ead550606e04870c168a3b1f9f907e6cf0d76}}, {{cite:f81fc3b35418f08de4600d48f72cec7551dd8736}}, {{cite:07f5a424d9c130f052c340138335962ffd6e9369}}, {{cite:07a3acdd74f757b69f9543749b2591dac558e8c6}}, {{cite:9478c3e484b7655821d01aa8fb692326c452ab03}}, {{cite:3bfcf699206a0b7a710450f7bdedc96c9d37be2a}}.
Details of the fragmentation process during cloud core collapse are described in {{cite:2b906e5777b761ebe5b9f5098f77874dcb340e14}} and the references therein.
In the present paper, we do not discuss the fragmentation process, but focus on the evolution of a seed binary after fragmentation.
| i | 27226dcb73eedef15d5fc6049bf7ed10 |
Passive cooling
allows the ground state to be reached
while preserving the mechanical {{formula:1c7b4eb6-177a-4d47-b735-fca2d4a115d9}} . This potentially provides much longer mechanical coherence times,
enabling unique sensitivities for force sensing to be attained
{{cite:9e0f6eb47c75204bf72050b45772c27a1e28ddb8}}; the figure of merit being {{formula:81157d9e-565f-4567-aae0-5f6feac0cd51}} for us at 500{{formula:aaf5624d-ee81-4a72-bb30-ec49e855b9a7}} K,
with plenty of scope for improvements
in {{formula:fda410d0-d65f-453c-befb-f99004fff562}} (see e.g. Refs. {{cite:c32337b4c948356295829d0777de564c182b8dba}}, {{cite:bcd6539661e73c5f9d3a4c23246286fa156c1980}}).
The unexpected properties of the fluctuations reported here call for
theoretical input on both thermodynamics and the constitutive matter {{cite:943c300d38a444ee8e596b109da776fc77d1859c}}, {{cite:0ca00f645bf2fe69db73c5a2772a53e52783aa37}}, {{cite:98ad42269226f6b3ec64e24f4bafc27a07962ca1}}, {{cite:3a7692f7738c31ca5aac62292855541f3be74fe7}}.
As for superconducting mesoscopic electronic devices {{cite:a382b7706609f2ac075802408b82775cb4ecfc47}}, two-level systems are certainly the key to understanding the complex microscopic environment interacting with the mechanical mode.
| d | 0a77465466e070d9cc603e8bdbdecac7 |
We trained and tested our proposed framework with both synthetic and real datasets, and then compared the performance with some related work. Most existing methods, however, are developed for static scenes {{cite:8fcf9d2c06eb2838e31fcc431b8acaebb2692f9a}}, {{cite:c513e080bdcf676f56ed6df3a2b44c336d64b22f}}, {{cite:fdb722479112df5ca2752e9c170fa49f70b1568c}}, {{cite:5514d6f9f1718d020716b55c295de3b307d56699}}, and some of them are not truly end-to-end deep learning frameworks {{cite:8fcf9d2c06eb2838e31fcc431b8acaebb2692f9a}}, {{cite:c513e080bdcf676f56ed6df3a2b44c336d64b22f}}. Also, their source code was not available at the time of writing. Therefore, we compared the performance of our method with the state of the art in image and video restoration: i) UNet {{cite:fdb09a879d775ab5e6be014b74c15d63ad7b22db}}, the backbone architecture of many image denoisers and restorers {{cite:b0f727aa47a29c5a1f99f0d21a8387adce39a6d0}}, ii) EDVR {{cite:9186eaa4f47cf42de100b7f9331d6e57058f14df}}, the winning solution in the NTIRE19 Challenges on video restoration, and iii) FFDNet {{cite:2f3ec2d12a38f51a1f37ff8354dd37d08ed54888}}, which offers the best denoising performance reported in many surveys {{cite:892003f5e842f6e545a593df1be5ac60ac168f59}}, {{cite:b0f3d71362f5b3b015f0a7100c1e4a84fea54ea7}}.
| d | b9cae631c440a6f44646a9102998d1ce |
for normal hierarchy (NH) and inverted hierarchy (IH), respectively. The case of only two HNL generations in comparison with the case of three generations has been analysed within the cosmological framework in {{cite:89b7312d2d7ad161c6cc3532307443c726b3e3a6}}. Significant differences from the case of {{formula:70f09ea8-d7d7-4e3b-8d81-2c6200762509}} I appear with a complex-valued {{formula:36b7a805-2de1-496f-86eb-428abfd4ecdf}} parameter, which leads to the factors {{formula:e6953084-9d1e-4ad9-a466-ea2acb8da6f6}} in the mixing matrix {{formula:df73d102-2d37-4e1a-8204-d25f4b0c4ea0}} . Three new parameters are introduced in this case, {{formula:328d8768-fe73-4ef0-ada1-898876436b15}} , {{formula:fdc75028-f107-4d96-a579-2b1e8c18edc4}} and {{formula:e7c91b63-0cec-45b3-ace7-57af085610ae}} . Assuming the framework of thermal leptogenesis, when the lepton asymmetry is formed due to decays of the right sterile neutrinos {{cite:943d153fed49d52c13bf20dd254ac6ba7e74d11b}}, the CP violation parameter reaches sufficiently large values. A detailed and accurate phenomenological analysis of active and sterile neutrino mixing in {{cite:9eab598715bc4db38410ead9072016b2ed61cbcc}} showed that a phenomenologically consistent hierarchy of mixings {{formula:edea2fbc-b412-43bb-92a9-39d422f1dd46}} , {{formula:d47ea354-5f69-47b3-880b-53ac705d491c}} and {{formula:8a0e72d3-5d0a-4498-bdcf-1a9c03bf410b}} with suppressed {{formula:0b4b8f7f-b692-4715-9828-285acc4d6161}} relative to other matrix elements can be achieved in a wide interval of {{formula:8cca867f-f81c-4cab-ad4f-2b4e056e16d0}} independently of the values of the HNL masses. Translating the experimental upper bounds on {{formula:a4268da5-4ebe-4de9-a856-8c9278bdee8e}} from the shortest possible lifetimes of {{formula:ce50716b-59ef-4a98-b917-055625a37b3f}} from {{formula:513ea0c3-faac-4c02-931f-d0be257010c2}} and {{formula:17fd99eb-49d0-4b0f-9752-61d907d888d1}} meson decays into the upper bound on {{formula:3c4fb905-750c-486d-b95f-ecf18b19ee5a}} , one obtains, at the HNL mass scale of 10{{formula:51da0289-e771-45b3-9bf6-3d104a2e4878}} MeV, {{formula:f412cd2f-5f38-44ee-9e6f-ada3077bbc57}} 4.5 for a lifetime of the order of 1 sec and {{formula:969ac733-2dda-4d80-9871-c35e6caf12ea}} 7 for a lifetime of the order of 0.01 sec in the case of NH. Values of the omega parameter greater than seven can lead to large mixing parameters of the {{formula:3e69ab7d-a8fc-43c7-8a9a-20d9eae985fe}} -neutrino interaction that are not consistent with the data. In the case of IH the bounds on {{formula:344fd9ff-f8a3-4568-8f34-28dc958e3e35}} are stronger; see details in {{cite:9eab598715bc4db38410ead9072016b2ed61cbcc}}.
Complex-valued parametrization of {{formula:069656f3-8c26-42d2-aa17-33d9526c0c77}} was also used for the study of HNL properties at the TeV scale {{cite:b783f8663d1f7dc775b03b74c765c6cb101ea9ef}}, {{cite:3097239748d63757ded86a360d2c5099c3e07222}}, see also {{cite:5955e95647c23acf9035813a8172d5b65371ef11}}.
In what follows, the case of Eq. REF is designated as mixing scenario 2. The analysis of lepton universality within scenario 2, assuming the {{formula:c6160990-03d2-4b28-b71f-dbc33a1b0d3d}} framework in the two-particle decay approximation, was performed in {{cite:987e22866d80461f21b8e9e5cf707ebad4f8ade9}}.
| i | 4fc96ba216a238cf4014037e26e0a8ee |
Previous contributions on RIS have mainly focused on maximizing the spectral efficiency/achievable rate or minimizing the transmission power {{cite:040e23a07727f8c19a4a6148b74f01328b76b6a2}}, {{cite:9924b730deebd4f24e28bdd9245d2c599ec51c33}}, {{cite:996849ea0bb35b2b51152312940902d736063edf}}, {{cite:280499ef3d3d856d75f64867cdefe3f540cb4e3b}}, {{cite:1622807bbcc63eb4067d46382334b47067948a62}}, {{cite:55ec6bb4e275e951802662d6a6f5e528d12dabf0}}, {{cite:ef9aaa748b49aa60bf2a53e360f0c547fff4e60d}}, {{cite:ad93688803dcc9b492b85dd6101d6666b3b6dc3b}}, {{cite:e986ef471ff7d5e8ce7b843b5d4988a2028d1916}}, {{cite:ade0ae25d59db164ad4d04870b0f78db64db17eb}}. In {{cite:040e23a07727f8c19a4a6148b74f01328b76b6a2}}, Wu and Zhang minimized the transmission power in the downlink of RIS-aided multi-user MIMO systems, where the popular alternating optimization and semi-definite relaxation (SDR) methods were employed for jointly optimizing the active transmission beamforming (TBF) of the BS and the passive beamforming, represented by the RIS phase shift matrix. This was achieved by approximately configuring the RIS reflecting elements. Ning et al. {{cite:9924b730deebd4f24e28bdd9245d2c599ec51c33}} maximized the sum-path-gain of RIS-assisted point-to-point MIMO systems, where the low-complexity alternating direction method of multipliers (ADMM) was employed for configuring the RIS phase shift matrix, while the classic singular value decomposition (SVD) was employed for designing the TBF. In {{cite:996849ea0bb35b2b51152312940902d736063edf}}, the optimal closed-form solutions of the phase shift matrix and TBF were derived by Wang et al. for single-user multiple-input-single-output (MISO) millimeter wave systems.
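As a rough illustration of the SVD-based TBF step mentioned above, the following sketch (arbitrary channel dimensions and a randomly drawn channel; the RIS phase shifts are held fixed while the transmit beamformer is taken from the dominant right singular vector of the effective channel):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 8, 64, 4          # BS antennas, RIS elements, user antennas (assumed sizes)
H_d = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))   # direct BS->user channel
G   = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))   # BS->RIS channel
F   = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))   # RIS->user channel

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # fixed RIS phase shifts
H_eff = H_d + F @ np.diag(theta) @ G                 # effective channel for this configuration

# Transmit beamforming from the dominant right singular vector of H_eff
_, s, Vh = np.linalg.svd(H_eff)
w_tbf = Vh.conj()[0]                                  # unit-norm TBF vector
print("dominant singular value (array gain proxy):", s[0])
```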
{{table:eb90db3d-940d-434c-8449-fcaf52adbb4e}} | i | 86c459e7439584668b73c4641403a42a |
There are special conditions for opening a wormhole {{cite:1945e2d9e80f4dc87ef4b8ca1479f26d79c7018d}}, {{cite:80ac4c7a4d60247b25913c7641f7d573991c65fd}}, called flare-out conditions, that result from the equations (REF )-(). In our model, a special monotonic case {{formula:4679a721-5ca9-4c9e-9d9b-bd4c45c7904b}} will be considered. Then the throat of the wormhole satisfies the following conditions:
{{formula:d4b4741e-0c8e-46b7-8ba0-b1ff7dc7afac}}
| m | 29cfddba6a303e897e6d13595200611a |
The objective of this section is to give an analogue of the Skjelbred-Sund method for classifying nilpotent assosymmetric algebras. Since other analogues of this method have been carefully explained in, for example, {{cite:4f2668a1d49478c59158b5610c3bd4fb6e50f26b}}, {{cite:3a8023511b304cde3d0a3d4d5176324561511570}}, we will give only some important definitions and refer the interested reader to those sources. We will also employ their notation.
| m | ccaf5fa1ae3db3bae0820898406b844f |
The asymptotic behavior of the probability (REF ) has been studied by many authors,
we refer to
Lévy {{cite:02ac8f9652afd45c2b22b3ec8680553f6470a106}}, Borovkov {{cite:92ba3374f93e7399f7d50ab919ea386ec2159f47}}, {{cite:539cf2dac808ebed0dea7424999ebbc574968d8f}}, {{cite:ea88639158f98329b8cd73626dd8319ba35cb073}}, Feller {{cite:373ce3162511b35ceadb326ef224c68fa4a10e1e}}, Spitzer {{cite:c7ed832936d55dee89448b35c46b6684a4c944a0}},
Bolthausen {{cite:66a92ca79dadc6c84553219946607e6e6570504f}}, Iglehart {{cite:be9d5519204c510b0912bb367b89eda5f1710610}}, Eppel {{cite:4c680500d735d2c6f2772f0a36bbe20d5e44da5b}}, Bertoin and Doney {{cite:530c2cc641c8257b374590aa5b1383cce340b393}},
Caravenna {{cite:8bc29ffd4380b8720b600432b0a3fdad5a0e7d36}}, Vatutin and Wachtel {{cite:b8a408154e345b6e9379069ffee80c7cd25d3f26}},
Denisov and Wachtel {{cite:c9447fc5734095235aa6fb878f1217f3db220ccd}}, Kersting and Vatutin {{cite:e8510d35eda479cd583d0df42d04a535d0a900bf}},
Denisov, Sakhanenko and Wachtel {{cite:67399067b70873e3b4f0ea495bc8b6d1c8746ec7}} and to the references therein.
In particular, Eppel {{cite:4c680500d735d2c6f2772f0a36bbe20d5e44da5b}} and later Vatutin and Wachtel {{cite:b8a408154e345b6e9379069ffee80c7cd25d3f26}}
found its asymptotics in the case when {{formula:dcd51c6e-2fbf-49d8-b365-ea3e68588a52}} and {{formula:677915bf-bbb1-4fe2-a8ba-ee60b4766fa1}} .
However, there are very few results dealing with the case when these conditions are not satisfied. We refer to Doney {{cite:7f1a363f697c188e87836d75c2b6b9b4fb918e9d}} for the case when
{{formula:eac3ddac-af2c-4c7d-a9d0-895b51973a89}} and {{formula:89880496-59e1-475e-b35f-f55ab97b6161}}
for some constants {{formula:3ef00a5e-d056-48a2-a45a-42118c9f8157}} .
Precise large deviations for a special class of heavy-tailed distributions have been studied in
Doney and Jones {{cite:2ab8e550953be0c4dd64502f352a2f0fd9efe98b}}.
Note also that the results in {{cite:7f1a363f697c188e87836d75c2b6b9b4fb918e9d}} and {{cite:b8a408154e345b6e9379069ffee80c7cd25d3f26}} are stated for the case of random variables
in the domain of attraction of stable laws, which will not be considered here.
| i | 6c78493814a1dff65a1ae34054723915 |
Theorem 1.2 (Kameko {{cite:bbce4f4d6aa24e395a14c879aaa57e183e2d9c17}})
Let {{formula:9338d7f7-ed49-42db-95e9-43a520dba492}} be a positive integer. If {{formula:1806a320-6f6a-48f0-ac6b-e0f3b7cdedd8}} , then
{{formula:01bc7c35-77bc-4515-be86-2bce6a1967f7}}
is an isomorphism of the {{formula:de660e1d-b1b1-4cf5-846f-0b73621248c5}} -vector spaces.
| i | dc08e136973e15ff84023721587333ac |
In this section we empirically investigate our theoretical findings on different BNNs.
We train a variety of BNNs on the MNIST and Fashion MNIST {{cite:56c33c606bce9d66f19e0048fb0ad58249818aff}} datasets, and evaluate their posterior distributions using HMC and VI approximate inference methods.
In Section REF , we experimentally verify the validity of the zero-averaging property of gradients implied by Theorem ,
and discuss its implications on the behaviours of FGSM and PGD attacks on BNNs in Section REF .
In Section REF we analyse the relationship between robustness and accuracy on thousands of different NN architectures, comparing the results obtained by Bayesian and by deterministic training. Further, in Section REF we investigate the robustness of BNNs on a gradient-free adversarial attack {{cite:05a31a26a386262a5874753881a19de68fb6594f}}.
Details on the experimental settings and BNN training parameters can be found in Appendix .
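As a rough sketch of how a posterior-averaged gradient attack of this kind can be assembled (hypothetical model handles and step size; not the exact attack configuration used in our experiments):

```python
import torch
import torch.nn.functional as F

def bnn_fgsm(posterior_samples, x, y, epsilon=0.1):
    """FGSM on a BNN: average the loss gradient w.r.t. the input over
    posterior weight samples, then step in the sign direction.
    `posterior_samples` is assumed to be a list of nn.Module instances,
    each a network with weights drawn from the (approximate) posterior."""
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(net(x), y) for net in posterior_samples)
    loss = loss / len(posterior_samples)
    loss.backward()
    # If the expected gradient averages to ~0 (zero-averaging property),
    # the perturbation direction carries little information about the label.
    return (x + epsilon * x.grad.sign()).detach()

# Example usage with two dummy "posterior samples" (tiny linear classifiers):
nets = [torch.nn.Linear(10, 3), torch.nn.Linear(10, 3)]
x_adv = bnn_fgsm(nets, torch.randn(4, 10), torch.randint(0, 3, (4,)))
```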
| r | efe4402f10ffd7bdf34068bb9f118d62 |
In addition to Quasi-Newton methods, other commonly utilized derivative-free direct search methods worth mentioning include the Rosenbrock search method {{cite:171f3e7e1e3fd8725df09c57427242e905c6c4d9}}, the Hooke and Jeeves pattern search method {{cite:a0d6978f756b84ed0f0314cada737c0a9b5bf0de}}, Brent's method {{cite:bf3f66b0c15dae75b0b23ca8398f7d7e0b7587ba}}, and the Nelder-Mead simplex method {{cite:8673cbded44017dfcd3bd442d7ad9934ae23231e}}. The Nelder-Mead simplex method in particular has been frequently utilized as a local search method within hybrid metaheuristics and memetic algorithms for finding solutions to SNEs via global optimization {{cite:9426fa19d99afed0c3f2ebd232a9b0552a06f8c8}}, {{cite:a3d6e7fbb0a7a399185c9ae6e879d0efcd10761a}}, {{cite:a38c292af1db876f400aee8f31669205ffc20d59}}, {{cite:0e5d5b02c64a3f09771081d746d7e697ddc13a39}}. The books {{cite:e317ebb4547b8e0bdf8f050d0b73b5684471d5ea}}, {{cite:db34f0f30559d90ed0c22aa8504256d4c9e8a3f4}} provide a nice introduction to methods for derivative-free optimization.
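For illustration, a hedged sketch of using Nelder-Mead as a local search on a sum-of-squares merit function for a small system of nonlinear equations (the example system is arbitrary and not taken from the cited works):

```python
import numpy as np
from scipy.optimize import minimize

def residuals(x):
    """Example nonlinear system F(x) = 0 (chosen only for illustration)."""
    return np.array([x[0]**2 + x[1]**2 - 4.0,      # circle of radius 2
                     np.exp(x[0]) + x[1] - 1.0])   # exponential curve

def merit(x):
    return np.sum(residuals(x)**2)   # global minimum 0 at a root of the system

res = minimize(merit, x0=[1.0, -1.0], method='Nelder-Mead',
               options={'xatol': 1e-10, 'fatol': 1e-10, 'maxiter': 5000})
print(res.x, merit(res.x))
```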
| m | 84017866af50b8a2fd647ec8d419f03c |
Using surrogate gradients, the BPTT gradient in the SRNNs can be computed using standard deep learning frameworks, where we used PyTorch {{cite:ba4675be846465ff98cb5a957ee4b56ac63dc225}}. With this approach, complicated architectures and spiking neuron models can be trained with state-of-the-art optimizers, regularizers, and visualization tools. At the same time, this approach is costly in terms of memory use and training time, as the computational graph is fully unrolled over all timesteps, and the abundant spatial and temporal sparsity is not exploited in the frameworks. This also limits the size of the networks to which this approach can be applied: for significantly larger networks, either dedicated hardware and/or sparsity-optimized frameworks are needed {{cite:59ae459553c407e621f49cb8eadeb0be4d42277f}}. Approximations to BPTT like eProp {{cite:7f6228d7df0fc0e2279f73b06a9c46158160ee08}} or alternative recurrent learning methods like RTRL {{cite:9552b8fcb1bf4c055e20dc687b4f3da92bb3c3a9}} may also help alleviate this limitation.
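A minimal PyTorch sketch of the surrogate-gradient mechanism (a generic fast-sigmoid surrogate; the exact surrogate function and neuron model used in our experiments may differ):

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()            # binary spikes

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2       # fast-sigmoid derivative
        return grad_output * surrogate

spike = SpikeSurrogate.apply
v = torch.randn(4, requires_grad=True)
spike(v).sum().backward()           # BPTT-compatible gradients flow into v.grad
```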
| d | 83f191f43a4b4f678d04fb635fc669b4 |
[Note that Equation REF is obtained by equating {{formula:6a59b6d8-679f-4318-9819-99e18c06fb0b}} with {{formula:7698348f-6b48-4664-a5f9-eedac7bcc473}} where {{formula:a1dd7de0-e15b-49b3-ba9b-5815e95e5186}} is the solution of the fractional-order differential equation {{formula:2c377451-d4fb-4b3c-9369-dd1a7a4097ba}} obtained using the inverse Laplace transform identity {{formula:433d98f1-6184-4ddd-b662-33d09243b389}} with ({{formula:712ee34c-570b-4f3a-95f5-d1d988bbff5e}} , {{formula:437f197d-8229-4fad-b4dc-538ab78bff41}} , {{formula:9c42a66b-74b0-4557-9688-fe8256d78272}} ) {{cite:7aa2923f1985c81c95c53db62a5d05e498b2c4ed}}]. At {{formula:f954dc11-594a-49bd-9155-a85b56c737d2}} , the steady-state charge {{formula:617555f0-997a-433d-97ca-da0c35c6e862}} is a function of both the parameters of the charge waveform ({{formula:c0d1604d-fe5f-4e71-9be0-18e7045b0b63}} and {{formula:cb89068f-c399-442f-a18e-1107458df555}} ) and those of the supercapacitor. [Equation REF simplifies to {{formula:500e53f0-16bc-4850-a4ae-b5cfb0adcb85}} for an ideal capacitance ({{formula:cbf19129-8559-4f6e-916d-b7dbfecbf47e}} and {{formula:08f2a4a5-9788-4089-a4f0-cc07ee9a2f3e}} ) using the identity {{formula:0c6d20e4-8fc9-428c-b59d-0a905fb8c19e}} which applies when {{formula:622b9424-fc85-4ce5-9525-8c774ae8d572}} . If {{formula:8b80c69f-512a-4f9e-b038-9c49ad012614}} (step voltage), the charge {{formula:fa57ba0b-39e8-4c0c-9502-df42e54849b3}} , otherwise {{formula:f8b87d85-f5d1-4917-a600-9b172067a039}} is a function of {{formula:a267c399-3568-4a25-aa68-13b5ff25a5f6}} ]. This is in line with our recent findings in which we highlighted that charging a supercapacitor with a voltage input results in a device- and waveform-dependent accumulated electric charge {{cite:a6dbcf108ecc7afc71dd1e4ffe22a7dec5d7c055}}, {{cite:3c35fb06b6ca5bd08f2d0acbf41e9676309d7832}}, {{cite:f8e8994f76efd031f093a60eacfeeba10dabc86e}}. In dimensionless form, Equation REF becomes:
{{formula:9489e9c3-6b19-4e37-a921-34eb7781280c}}
| d | 8ae8085a70c88cb745a564820f611ffb |
As shown in REF , 11 of 13 participating teams achieve an SRCC score higher than 0.75 on PIPAL, which significantly surpasses the highest performance of existing algorithms (0.65).
The champion team achieves an SRCC score of 0.799 and a PLCC score of 0.790, refreshing the state-of-the-art performance on PIPAL.
In order to evaluate their performance on traditional distortion types, we also report their results on the TID2013 {{cite:e04d8e1dccdb66f739add35360814214ab7bef08}} and LIVE {{cite:92fa84a0b27c3cabbb011ab3575ad174127465db}} datasets in REF .
The top three teams in the challenge all achieve competitive results with existing methods on TID2013 and LIVE, showing their good generalization ability on traditional distortion types.
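For reference, the two reported correlation measures can be computed with standard tools; a small sketch with dummy score arrays standing in for the real predictions and subjective labels:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

predicted_quality = np.array([0.62, 0.71, 0.35, 0.88, 0.54])   # model scores (dummy)
subjective_scores = np.array([0.60, 0.75, 0.30, 0.90, 0.50])   # MOS labels (dummy)

srcc, _ = spearmanr(predicted_quality, subjective_scores)   # rank-order correlation
plcc, _ = pearsonr(predicted_quality, subjective_scores)    # linear correlation
print(f"SRCC={srcc:.3f}, PLCC={plcc:.3f}")
```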
| r | 29dc0c0a6989c3d806aa914c7b448ffe |
We would like to emphasize that these results are achieved with only 500 networks trained using about 200 GPU-hours, while the compared architecture search methods utilize far more computational resources to achieve their best results, such as {{cite:c06a7d5a35bea38611c6cd42c089e9b108ef0a6a}}, which used 48,000 GPU-hours.
| r | 193cf77d3a5097238dcba3ea6897ec9a |
which, together with the boundary conditions, implies that {{formula:39bf2fd8-1ef7-4a9f-97ff-ef316f26e201}} for some positive constant {{formula:189b50d7-6b59-426e-94e8-679ebecd8b90}} . Taking {{formula:5beccb38-0cc8-4a5e-932a-b4de6f948c37}} , one recovers (REF ) for a suitable choice of parameters {{formula:509385d1-59b3-4daf-8e90-b020f55718b5}} and {{formula:ebe68d75-bfb4-4d98-b5dc-663e320ef8dd}} . The Lin-Ni-Takagi problem has been intensively studied over the last decades. We refer to {{cite:44223e13acdb9e77423915fce00ca5ce257f2470}}, {{cite:92d019f16da12186409b105129d35277e4c8be63}}, {{cite:05502f0e67071ca8c13a98721827ce0ffb2bae05}}, {{cite:a145bdcd4528511b4fe17b916e53508cf302aff4}}, {{cite:223efc2f99e88c0aaf79543a36691439c99775b8}}, {{cite:67467b8263e1eba512b1579a71c12bc82e8dcbe9}}, {{cite:08c9ab1da8e87c851b544cf75cc1347c19caf442}}, {{cite:1f0493913c1c7e5938202470e7a0e82f5f56e1cb}}, {{cite:ddb712652195e5dab246cc8f3161a663e9382316}}, {{cite:de8747b80bce29accb760c59f18a29a418df746e}} for a non-exhaustive list of existence and non-existence results for (REF ), related to the so-called Lin-Ni conjecture, and mostly concerning the close-to-critical case. As mentioned by several experts in the community, the role of the many highlighted equilibria, as indeed of any stationary solution, in the dynamics of the parabolic problem (REF ) remains unclear up to now. However, it is to be expected that loss of compactness by bubbling in the stationary frame plays a role in the dynamics of (REF ). This issue has been investigated in other time-dependent equations, as for instance in {{cite:23208e63268b1c0481616e04e3770ac4200102ae}}, {{cite:b146749ff1cd696812e6471e81bd39e10abbbefb}}, {{cite:b47e392121e3b8d20270237bb00ea5a678cb8208}}, {{cite:0c5de426103de80b3d29cf819ce26f0b16c18989}}, {{cite:b47e9f995595d15ef751ad33acf84361c6155990}}, {{cite:4a0a6581e4920944c0a3afb0afebb36df718da25}}, {{cite:08563ec947641ef0a8d90c40e1c9bc1bd39a4154}} and the included citations.
| r | 9738476d77a5666d4b7a3b73b6c0b33c |
A solution {{formula:15ecd055-a890-44c4-962a-cf365837bbf7}} of the problem (REF ) is called a {{formula:2cbbad30-9dec-48ab-8065-edb8ebe7e6dd}} -solution of the Hamiltonian systems. The problem (REF ) is related to closed geodesics on Riemannian manifolds (cf. {{cite:092e618873169706a520b2a5ebd975f3051a6bce}}) and to symmetric periodic solution or quasi-periodic solution problems (cf. {{cite:4e75a43c13da89c9acacf78e3804bc49e46e4245}}). In addition, the first author, C. Liu, in {{cite:f038eb6ea8004d13834eb8e40c4c85fd5c5e5d51}} transformed some periodic boundary problems for nonlinear delay differential systems and some nonlinear delay Hamiltonian systems into {{formula:d8f3e369-51c0-4875-beb4-759a8a5e154a}} -boundary problems of Hamiltonian systems as above; we also refer to {{cite:a3487cdcaefb00fbec9f044c43e1640495ae506a}}, {{cite:f642043c3d72191c401160d1478fd9b2595a32e0}}, {{cite:39d9cd70d2979717fcabdb565fa9d6faaa888a6e}}, {{cite:6388b0b02e08ebe357dcbbdc14090038b9c002a8}} and the references therein for the background of {{formula:62b1e00d-6bd7-4fc6-96d1-974cabf5d4a2}} -boundary problems in {{formula:1ca81b9e-6c02-4830-af12-f7b515007d04}} -body problems.
| i | 1d9814e76486ef07137830a54ea1240d |
Finally, from the result on {{formula:48751b4f-8ec0-4aae-bf35-96186cbbd166}} , using relation (REF ) and the measured value of {{formula:3ab4e831-d295-4184-ba46-a22bf0a58e1f}} {{cite:e7697036903914c3dd31371187209d1af78e8fbe}},
an upper limit for the branching ratio of the {{formula:5888420e-1dcb-4639-a3bb-f09b3b232078}} decay can be
derived:
{{formula:51d45a4f-460a-4909-ab52-a4628b8742fd}}
| r | 84a919a16c53dadd209c4fde327c7f76 |
where {{formula:5ed61718-fa82-4c46-8748-2b6e53e6a08c}} is the constant step size, {{formula:1cc63517-3a92-4d3b-97da-99bc116643d8}} is the same constant as that in (REF ). The following convergence result on (REF ) has been established in the literature.
({{cite:a847ebcb9050f0438ede16c086801d571d0ed491}}, {{cite:29cdf93a92226e5656accee23617d0ea03b122c2}})
Let {{formula:de162212-4fe2-42aa-a65a-6b3a22629af9}} be a sequence generated by (REF ) with {{formula:7fb769bb-60c7-465a-a1c1-ad4b624b03c8}} . Then {{formula:5f741198-b80e-4bed-8817-3bc644efa28b}} converges to a stationary point of (REF ).
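As a generic illustration of such a constant-step-size scheme (a plain gradient iteration on a toy least-squares objective; the actual iteration (REF ) and its step-size condition may differ):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])

def grad_f(x):
    # toy smooth objective f(x) = 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

L = np.linalg.norm(A.T @ A, 2)          # Lipschitz constant of the gradient
alpha = 1.0 / L                          # constant step size below the threshold

x = np.zeros(2)
for _ in range(200):
    x = x - alpha * grad_f(x)            # fixed-step iteration
print(x)                                 # approaches the stationary point [0.5, -1.0]
```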
| m | 03782425b8c29178904e217e3494f5b8 |
The zeros of the dispersion relation ({{formula:a5075f96-e639-4f16-bf8b-9aaddde4b525}} , equation ()) were explored within the complex {{formula:b14ba18d-eea3-40ac-a3bf-3e9067aab92e}} plane inside the region {{formula:8f746024-6658-414c-a930-a90df5725789}} and {{formula:49e7e60e-4610-4b14-9ae2-b83a7d0caccc}} . The first step in the spatiotemporal analysis entails finding the most unstable mode for real wavenumber, {{formula:81f28627-0ba0-441b-b9ec-61469cbea5f4}} . This mode corresponds to the largest positive imaginary component of any root of the dispersion relation, also known as the temporal growth rate, {{formula:9cfe2a83-c407-44bf-b156-c431646d9cd4}} {{cite:fe8d5438b9eea4dee3093d211d36720f09acd810}}. In other words, this step consists of detecting the admissible saddle points ({{formula:9c9905db-7ffb-4320-afaa-6ce98396ea67}} ) satisfying the equations {{cite:cf5fd6797b6b7311cae23d21710ca7feaa7f5d74}},
D(k, ω) = 0,
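A schematic of the first step described above (scanning real wavenumbers and taking the largest imaginary part of the roots), using an illustrative polynomial dispersion relation rather than the one studied here:

```python
import numpy as np

def temporal_growth_rate(k):
    """Largest Im(omega) among the roots of an illustrative dispersion
    relation D(k, omega) = omega**2 + i*k*omega - (k**2 - 1) = 0."""
    coeffs = [1.0, 1j * k, -(k**2 - 1.0)]        # polynomial in omega
    return np.max(np.roots(coeffs).imag)

k_values = np.linspace(0.0, 3.0, 301)            # scan over real wavenumbers
growth = np.array([temporal_growth_rate(k) for k in k_values])
k_max = k_values[np.argmax(growth)]              # most unstable real wavenumber
print(k_max, growth.max())
```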
| m | 789c44c5a028786972bf424edc6c9560 |
Our definition is simpler and easier to work with.
To summarize, we define unbiasedness based on individual relevance estimates because it is easier to work with and is interchangeable with the unbiasedness definition based on system performance used in previous work.
While unbiasedness has been the main focus of most click-based LTR work, it is not the only property that is important to the field.
As Section REF noted, in practice counterfactual estimation methods are often deployed in a biased manner.
In particular, propensity clipping introduces some bias but reduces variance greatly and is widely applied in counterfactual LTR {{cite:1d3bb815e5be730f263e6843d27dca01168d4a66}}, {{cite:8c7c6b83297ca261be1d84e8352a4bae17be55f7}}, {{cite:c61dc2402f514ef20d80f4af0c519b115bade5ae}}, {{cite:249878c9b8db9970a5b39b872d73233cf7c0b10d}}, {{cite:20cc68e61d4de1f2da7ef3fb8fcb9be390b2a5ed}}.
Similarly, it is conceptually hard to apply the unbiasedness property to click modelling methods {{cite:9e0dc95183f356168f66c31318d3ac243276dad2}}.
For these reasons, we will also consider consistency {{cite:79ec4e66e293d61443f5437e6177af224115c7b1}} with the following formal definition:
Consistency.
A click-based relevance estimation method is consistent, if the values of all the resulting relevance estimates are equal to the true relevances in the limit of an infinite number of interactions:
{{formula:efa798a8-cb25-4ccf-a5cf-70c7074fa08d}}
Consistency is a desirable property since it guarantees accurate convergence as interaction data continues to increase.
Furthermore, variance is generally negligible when the number of interactions is extremely large {{cite:1d3bb815e5be730f263e6843d27dca01168d4a66}}; therefore, unlike unbiasedness, there is often no trade-off between variance and consistency.
Our goal is thus to identify under which conditions the unbiased LTR approach can provide methods that are unbiased and consistent according to our definitions.
Our method for identifying these conditions inverts the prevalent approach that starts with behavior assumptions and derives an unbiased method.
In contrast, we first describe the two main families: counterfactual estimation and click modelling, in the most generic terms.
Subsequently, we derive click behavior conditions from these generic methods, thereby revealing the assumptions that are implicitly present in the existing approach.
The following two sections will do this for counterfactual estimation and click modelling; in addition, Section will discuss other methods that fall outside these two categories.
The Limitations of Click-Based Counterfactual Estimators
As discussed in Section REF , counterfactual estimation represents the largest branch of the unbiased LTR field {{cite:55c328fd9b77e765d8baab3b9edccdc049a2a664}}, {{cite:1d3bb815e5be730f263e6843d27dca01168d4a66}}, {{cite:ee98758f791d5c742f186b75ecb589f0eb3413a4}}.
The underlying assumption of these methods is that click probabilities are decomposable into relevance and display factors, e.g. the estimator in Eq. REF assumes click probabilities are determined by relevance and display-position according to Eq. REF {{cite:249878c9b8db9970a5b39b872d73233cf7c0b10d}}.
Click-based counterfactual estimators aim to convert a click signal into an unbiased relevance signal by correcting for the display factors.
In order to investigate the entire family of click-based counterfactual estimators, we first define a display context {{formula:fd85a822-4029-4bad-a0b6-aa18b550dc40}} in the most generic and broad terms:
Display Context.
The display context {{formula:f2050b5c-8ee0-422f-aeb4-3e62800841ea}} contains all information about how item {{formula:f371c8dd-4604-4bcc-91c6-ff2638161ede}} is displayed that could affect the click probability of {{formula:bf588a3b-6b02-48db-b985-8029811bc3a0}} at interaction {{formula:69ca5b4a-564a-4018-a6f8-76889412df69}} .
It does not contain any information about the relevance {{formula:6513263e-e752-459c-97a9-f3b4e84735a7}} : {{formula:35144d28-fe79-4c29-9db3-5925f644210e}} .
In other words,
one should be able to determine {{formula:2a9fc648-5ea4-4c49-b609-e041dec981f4}} without any knowledge of {{formula:75551629-ce12-4d56-93b4-4dcd1fdaea7e}} .
For example, for the estimator in Eq. REF the display context is the display position: {{formula:ed5f6498-4e7d-4d9b-8076-766d1463ef8a}} {{cite:8c7c6b83297ca261be1d84e8352a4bae17be55f7}}.
Many alternatives are possible, e.g. {{formula:a4743181-70ef-43a3-97d4-f27476d0c3c1}} can also represent a probability distribution over positions as in the policy-aware estimator {{cite:20cc68e61d4de1f2da7ef3fb8fcb9be390b2a5ed}}, {{cite:c61dc2402f514ef20d80f4af0c519b115bade5ae}}.
With this broad definition of the display context, we propose a generic description of a click-based counterfactual estimator:
Counterfactual Relevance Estimate.
A click-based counterfactual relevance estimate is an average over independently sampled interactions where each click or non-click is transformed by a function {{formula:e1f33d4e-b360-4b7d-8136-3556d18cf213}} such that:
{{formula:3375cad1-e1b9-4012-abf3-ceeaae98ed94}}
Accordingly, {{formula:c1845433-e180-4f64-8e56-e068f8a3656f}} only has two relevant values per context {{formula:8033a496-faa2-4825-beda-106a2fcf26d3}} for the counterfactual estimate:
{{formula:36add2df-1ca4-4b82-b765-176e9085a17e}} when a click takes place on {{formula:1be84539-5118-4000-b86f-69d897ad8d41}} and {{formula:a3b754c4-1e16-4f13-a18b-5e902f3199e4}} when no click takes place.
To the best of our knowledge, Definition covers all existing click-based counterfactual estimators {{cite:55c328fd9b77e765d8baab3b9edccdc049a2a664}}, {{cite:1d3bb815e5be730f263e6843d27dca01168d4a66}}, {{cite:20cc68e61d4de1f2da7ef3fb8fcb9be390b2a5ed}}, {{cite:249878c9b8db9970a5b39b872d73233cf7c0b10d}}, {{cite:ee98758f791d5c742f186b75ecb589f0eb3413a4}}, {{cite:5fbddb277de60cfecfa5940408a21d735637120d}}, {{cite:8c7c6b83297ca261be1d84e8352a4bae17be55f7}}, {{cite:c61dc2402f514ef20d80f4af0c519b115bade5ae}}, {{cite:e3d2de23fb9e0209b7a68707a2f8b553ea6de6ec}}, {{cite:17dcfc59f21c774a3f9755d2ab1e95eaca480280}}, {{cite:0ec29301b6b781436c81f12758940c122c6322ab}} with the exception of three counterfactual pairwise estimators {{cite:4b8bbd44d8a770812a2e328f8d8a0479704f8a59}}, {{cite:de0a05dffe52453e14aab6ff068093aa0dc4792f}}, {{cite:abed48944e661d7f1b03c9c9c40ef8c54fec795d}} that will be discussed in Section .
For example, we see that the estimator in Eq. REF is a specific instance where {{formula:b943612f-be34-48c7-ae51-6764d1df4024}} is chosen so that {{formula:e7a7909e-1747-44c7-b367-835e59ffb766}} and {{formula:3d93a638-2ff7-4d70-af7b-58cdd4ff5e11}} .
The aim of our definition is to cover all existing estimators and as many future estimators as possible.
Accordingly, - to the best of our knowledge - different choices of {{formula:28145ba5-ddea-487a-85d1-4c7f70caa04e}} can cover almost all existing counterfactual estimators, and moreover, they can also cover many more possible estimators that have yet to be introduced to the field.
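To make this definition concrete, a minimal sketch of an estimator of this form, instantiated with an inverse-propensity transformation and optional propensity clipping (illustrative values only; the exact transformation of Eq. REF is discussed above):

```python
import numpy as np

def counterfactual_estimate(clicks, propensities, tau=None):
    """Average of per-interaction transformed clicks f(c_t):
    f(1) = 1 / propensity (optionally clipped at tau), f(0) = 0."""
    p = np.asarray(propensities, dtype=float)
    if tau is not None:
        p = np.maximum(p, tau)          # propensity clipping: less variance, some bias
    return np.mean(np.asarray(clicks) / p)

# Example: clicks logged under position-dependent examination probabilities
clicks = [1, 0, 0, 1, 0, 1]
props = [0.9, 0.5, 0.2, 0.9, 0.2, 0.5]
print(counterfactual_estimate(clicks, props))           # unclipped IPS-style estimate
print(counterfactual_estimate(clicks, props, tau=0.3))  # clipped variant
```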
With these definitions, we can now start to derive the conditions for which a click-based counterfactual estimator is unbiased or consistent.
First, we note that the expected value of a relevance estimate {{formula:1ca02b5f-021e-4b83-b4b6-86718c30cbc7}} is simply the expected value of {{formula:59ca10f8-c960-4ea2-a2cf-4aee050cea94}} conditioned on {{formula:6da32d7f-86aa-4b48-9d57-07b2b7cd6697}} :
In expectation a counterfactual relevance estimate is:
{{formula:724bb7e0-60d7-4f3f-bc0d-b8a802ab94b3}}
Using Definition :
{{formula:e2792364-9f30-487c-a3a1-d598456e5da3}}
With this Lemma, we can prove that an unbiased estimator is always consistent and vice-versa:
A counterfactual estimator is consistent if and only if it is unbiased:
{{formula:008ad0e5-301b-4e33-8342-08713ecc685c}}
With the use of Definition and Lemma :
{{formula:4f27089f-dd8d-4c93-b5e1-37bae9c626be}}
Therefore, it appears that the focus of previous work on proving unbiasedness was not misplaced, since implicitly it has also proven consistency.
Nevertheless, we think it is important that we have a theoretical motivation for this focus now.
Finally, we can derive the following conditions for the unbiasedness - and therefore also the consistency - of click-based counterfactual estimators:
A counterfactual estimator can only be unbiased if click probabilities follow an affine transformation of relevance s.t:
{{formula:4d22cf79-85a2-49a5-a180-a702a7acdae9}}
From Lemma it follows that the unbiasedness criterion for a counterfactual relevance estimate can be reformulated as:
{{formula:fba8d546-54ee-4077-b073-90db6de5c095}}
The expected value can be rewritten to:
{{formula:45c95a48-59a4-48ec-bc9d-c9d6f81f29b5}}
Combining Eq. REF and REF reveals that the unbiasedness criterion also has implications for the click probability:
{{formula:aa285211-d9c6-4ce0-a1f5-7220df77732d}}
To prove Theorem , we derive the following {{formula:4c948943-52d1-4df3-9421-6b72a6d8a407}} and {{formula:a36981cf-727a-40a9-97bc-f28821e0adde}} from Eq. REF which
show that click probability is an affine transformation of the form stated in Eq. REF :
{{formula:4bece1bc-3587-45fc-95a7-72c1b9580403}}
Theorem can be difficult to interpret since the expectation over the display context in Eq. REF depends on the choice of logging policy.
Nonetheless, Theorem clearly shows that - in spite of being broad and generic - Definition implicitly limits the click models counterfactual relevance estimation can be unbiased or consistent for.
It appears that this is a result of the estimate being a mean over individually transformed clicks (Eq. REF ) which makes its expected value a linear interpolation between the possible {{formula:df6af7b3-c848-4b79-bb12-8e70b18406c2}} and {{formula:b9fffc58-ed5d-4659-a330-87f54d15b3a5}} values.
Consequently, a counterfactual estimator following Definition can only be unbiased or consistent if clicks follow a transformation that can be corrected by such an interpolation; Theorem proves that such a transformation has to match the affine form of Eq. REF .
To better understand what Theorem entails, we consider what it would mean for a deterministic logging policy:
If an item is displayed without randomization then a counterfactual estimator can only be unbiased if the item's click probability is an affine transformation of relevance in the form:
{{formula:8228ed6d-ea90-4ec2-af30-47a822c0aaf1}}
This follows directly from Theorem .
Furthermore, often practitioners have no control over the exact logging policy that gathers data.
In such a scenario, unbiasedness guarantees cannot rely on a specific choice of logging policy as the deployment of that policy may not be possible:
Without control over the logging policy, unbiasedness guarantees for a counterfactual estimator are only possible if click probabilities are affine transformations of relevance in the form:
{{formula:02b17cf3-350d-4f70-9ba2-6340d6cfadf1}}
We cannot rule out the possibility that for each possible display context an item exists which the logging policy will only display in that context; thus, Cor. follows from Cor. .
While the individual steps of our theoretical derivation are very straightforward, the result is quite significant:
we have proven that the form of click-based counterfactual estimators entail implicit assumptions about the click behavior they are able to unbias.
In other words, because click-based counterfactual estimators transform individual click or non-click signals and then average the result, they can only correct for click probabilities that are affine transformations of relevances.
Conversely, this proves that no unbiased counterfactual relevance estimator (Def. ) can exist for click behavior that does not match the affine transformation of Eq. REF .
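A quick numerical sanity check of the affine case (simulated clicks whose probability is an offset plus a slope times relevance, corrected by the matching affine transformation; all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
true_relevance, theta, eps = 0.7, 0.4, 0.1      # arbitrary illustration values
n = 200_000

clicks = rng.random(n) < (eps + theta * true_relevance)          # affine click behaviour
estimates = np.where(clicks, (1.0 - eps) / theta, -eps / theta)  # f(1) and f(0)
print(estimates.mean())          # ~0.7: the affine correction recovers the relevance
```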
Furthermore, there is a less obvious implicit requirement that {{formula:6ead14b4-70ab-46bc-9724-588648d682c4}} and {{formula:b2d883f1-a7ce-4e80-aae7-7b42abe56718}} should only depend on the display context and not on the item relevance:
If two items are always displayed in the same single display context, their {{formula:728b305b-e23c-4e26-a355-d9168add3599}} and {{formula:420c7aff-645f-4d38-b242-fe34a07d9c5f}} values should be equal:
{{formula:d226042e-7dad-4138-8b4d-6764ede9d8a4}}
According to Definition , {{formula:68007a89-7254-479c-afeb-87a0316f0194}} is not determined by the relevance of {{formula:07ee4f6a-aaae-4ae5-9afa-16b60d7f4099}} .
Therefore, two items with the same display context {{formula:e4ce27c4-5292-4eaa-9be4-8a3bc507a5c8}} should also have the same values from {{formula:84d1acc9-292a-479e-a0a6-ab6eb23080be}} :
{{formula:8d5c4f5f-a4c4-49e9-9765-ef135489bcf1}}
Cor. and
Eq. REF show {{formula:cf0846ff-32ac-4fb7-bf2f-eb80e17c4333}} and {{formula:d1bba869-8c45-4f0c-8b3b-f35b88a8e11c}} must then be equal.
This requirement becomes more obvious when one considers the implications of its negation: a setting where the value of {{formula:7a27a198-4f1d-4b66-8801-4b60f26d6d9a}} and {{formula:d0c5c706-97df-438b-a8b1-53003cc8bf6d}} are determined by {{formula:f0387239-7388-4bd2-95d0-e44a3642ded4}} .
In such a setting the value of {{formula:fa94032e-5998-4b08-a53e-e1224b49cbc7}} and {{formula:f775e7f0-6c19-493f-8265-cccca40bf7ee}} are also determined by {{formula:abb90522-a7de-465e-9ca6-470d14c8cfe0}} , and therefore, one should first know the relevance of an item before being able to unbiasedly estimate it.
In other words, in such a setting one can only unbiasedly estimate relevances if one already knows those relevances and thus has no need for estimation.
Importantly, we are not arguing that such settings do not exist but merely that unbiased counterfactual estimation is infeasible in such cases.
In addition to the requirement stated in Theorem and Corollaries , and , there could be more requirements that have yet to be proven.
In particular, our analysis has not considered under what conditions the values of {{formula:6f6f61c8-df66-4f25-9c5b-fefdd9f9239a}} or {{formula:db22e88a-3a33-4298-b24e-e3cb08b198fb}} and {{formula:441203ce-fa34-407a-b6c6-19474b8a902f}} can be determined.
Previous work on position-bias estimation {{cite:158cd80fdba08ba82d0aeb85cdc406d456ad56bf}}, {{cite:37e71eb9191cb05e4893b9e7d87a2fb15a13bf57}}, {{cite:f0d2feceb53ee56a193f532ba2e3519a5ca2927d}}, {{cite:b43d32cb847f75e0d8fe5c42c2a1e23e49eaa7af}} shows that bias estimation is far from a trivial task.
Therefore, it seems reasonable that it is also necessary that the bias parameters {{formula:ecb0ce44-0fda-4d6f-966e-0a4cb711a372}} and {{formula:46d81ce4-64f9-4c1b-98f2-13f5288b004a}} can be accurately inferred for unbiased counterfactual estimation.
However, we currently lack a theoretical approach to prove broad conditions about bias estimation in general (Section REF will discuss bias estimation with click modelling).
To summarize our theoretical findings on counterfactual estimation for click-based LTR:
We have provided a generic definition of click-based counterfactual estimators that captures almost all methods in the field {{cite:55c328fd9b77e765d8baab3b9edccdc049a2a664}}, {{cite:1d3bb815e5be730f263e6843d27dca01168d4a66}}, {{cite:20cc68e61d4de1f2da7ef3fb8fcb9be390b2a5ed}}, {{cite:249878c9b8db9970a5b39b872d73233cf7c0b10d}}, {{cite:ee98758f791d5c742f186b75ecb589f0eb3413a4}}, {{cite:5fbddb277de60cfecfa5940408a21d735637120d}}, {{cite:8c7c6b83297ca261be1d84e8352a4bae17be55f7}}, {{cite:c61dc2402f514ef20d80f4af0c519b115bade5ae}}, {{cite:e3d2de23fb9e0209b7a68707a2f8b553ea6de6ec}}, {{cite:17dcfc59f21c774a3f9755d2ab1e95eaca480280}}, {{cite:0ec29301b6b781436c81f12758940c122c6322ab}}.
From this definition, we have shown that estimators of this form can only provide unbiased and consistent estimates when click probabilities are affine transformations of relevances.
We have thus identified a significant limitation of the high-level approach for click-based counterfactual estimation: it can never find unbiased or consistent methods for non-affine click behavior.
The remainder of this section will provide some example scenarios that illustrate this limitation and discuss some edge-cases related to our definitions.
{{figure:8d58744a-07eb-430b-bff0-a9469a3c922c}}
Example Scenarios
To better understand what the implicit limitations of counterfactual estimation regarding affine click-behavior entail,
we will briefly discuss two models of click behavior that illustrate scenarios where unbiased estimation is possible and impossible.
To start, we will consider a model of behavior where a user first considers all items in a ranking and then chooses according to a Plackett-Luce decision model:
{{formula:9f436037-688a-4d50-a2a4-413f87d410a7}}
Thus the probability of a user choosing an item in a ranking is equal to its relevance divided by the sum of the relevances of all items in the ranking.
This Plackett-Luce click model is based on well-established decision models from the economic field {{cite:5282c90f64879dbbe63db5c5960d6db104cd758e}}, {{cite:1e93427cd97896e0a22e128955180fa0908182c5}}.
To keep our example simple, we have not added any form of position-bias or trust-bias in the model, although such extensions are certainly possible.
At first glance the Plackett-Luce click model may incorrectly seem to fall within the limitations of counterfactual estimation with the following parameters:
{{formula:6f5c04eb-90d0-4d2b-8050-0d1db582ac10}} and {{formula:4ef43baf-4ecf-4daa-9ef6-5e67581a185b}} (cf. Eq. REF ).
However, the denominator of this {{formula:70942c76-6bad-479d-b60e-9c12ca866ea4}} is directly determined by the relevances of all items in the ranking, and thus also by the very item whose relevance is being estimated.
It therefore does not match the definition of an affine click-model.
This can be clearly seen by plotting how the click probability changes with an item's relevance; Figure REF displays this for a scenario where the other relevances sum up to {{formula:da91dad9-12b7-4139-98f3-243c382fea08}} : {{formula:800a9a83-560a-4636-bddc-821695edc27b}} .
Clearly, there is not an affine transformation from relevance to click probabilities, and consequently, it is impossible for a click-based counterfactual estimator according to Definition to provide an unbiased or consistent estimate in this scenario.
This does not mean that unbiased counterfactual estimation is impossible for all click models where click probabilities depend on the relevances of other items.
For example, let us consider a classic cascading click model where a user considers one item at a time until one is clicked {{cite:b43d32cb847f75e0d8fe5c42c2a1e23e49eaa7af}}, {{cite:f0d2feceb53ee56a193f532ba2e3519a5ca2927d}}.
The click probability is thus:
{{formula:d39bbcde-76d0-4480-92ad-8c672e9fbd0f}}
Therefore, this scenario does provide an affine transformation with {{formula:a2b34290-af23-4aeb-bd7e-a711b623bd30}} – the probability that no earlier item was clicked – and {{formula:dd58b12b-8833-47f2-8dc5-f6ae7387a4a4}} (cf. Eq. REF ).
Figure REF also shows how the click probability changes with relevance in a scenario where {{formula:cea3b18d-019e-4288-bd77-60d24a207583}} , which clearly reveals an affine transformation.
The crucial difference in this scenario is that the value of {{formula:403bc3fa-dd59-4653-9e78-96eff48ba798}} only depends on other items and not on {{formula:48fa5652-c0bc-4ba0-ab81-200efc1ab762}} itself.
These examples illustrate an intuitive way to think of Theorem :
unbiasedness and consistency guarantees for counterfactual estimation are only possible if there is a linear relation between click probability and item relevance.
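The contrast between the two examples can also be seen numerically; in the sketch below, the other items' relevances are arbitrarily taken to sum to 3 and the cascade no-earlier-click probability is arbitrarily set to 0.25 (neither value is taken from Figure REF):

```python
import numpy as np

R = np.linspace(0.0, 1.0, 5)             # relevance of the item being estimated
others = 3.0                              # summed relevance of the other items (assumed)

pl_click = R / (R + others)               # Plackett-Luce: R also appears in the denominator
cascade_click = 0.25 * R                  # cascade: P(no earlier click) * R, here 0.25 * R

for r, pl, ca in zip(R, pl_click, cascade_click):
    print(f"R={r:.2f}  Plackett-Luce={pl:.3f}  cascade={ca:.3f}")
# The cascade column is a straight line through the origin (affine, offset 0);
# the Plackett-Luce column bends, so no single affine correction can unbias it.
```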
Adaptive Normalization and Clipping
While our definition of counterfactual relevance estimates (Definition ) covers almost all click-based counterfactual estimators, it does not fit perfectly with adaptive self-normalization {{cite:d0f882b58a4ad852c9b3b2476593ca01b08ac894}} and clipping strategies that change with {{formula:ca557d64-02f1-4f68-84c9-947ca7045975}} {{cite:c61dc2402f514ef20d80f4af0c519b115bade5ae}}, {{cite:20cc68e61d4de1f2da7ef3fb8fcb9be390b2a5ed}}.
For instance, a popular clipping strategy is one where {{formula:75dbc9d1-5a40-4790-8ffc-a26822063fe6}} , e.g. with {{formula:2495a1a3-0e21-4133-a3d5-a3857adb8c3e}} {{cite:c61dc2402f514ef20d80f4af0c519b115bade5ae}}, {{cite:20cc68e61d4de1f2da7ef3fb8fcb9be390b2a5ed}}.
To incorporate these exceptions one could use the function:
{{formula:4b905b5f-02cb-4c5b-9c7f-ffcbf81807cd}}
in Definition .
This change would mean unbiasedness now has to be proven for all possible {{formula:df55ac33-864e-4ad7-9b15-d79240b16a02}} values, and consistency only requires unbiasedness in the limit, i.e. {{formula:a267d40a-868e-498a-ad19-586b6fa607bd}} .
In spite of these differences, Theorem would only need minor changes and the main point would still hold: unbiasedness and consistency are only possible under affine click behavior.
We argue that the simplicity of our theoretical analysis with Definition and the lack of significant implications of including adaptive strategies justify our choice to exclude them.
The Limitations of Click Modelling
The second branch of click-based LTR is click modelling {{cite:b43d32cb847f75e0d8fe5c42c2a1e23e49eaa7af}}, {{cite:ac3283f7c3ab61e0c727d7fa4d378cb3f36457dd}}, {{cite:c37ff46fe1b04ebbd7a7fd193ee1579436df87f1}}, {{cite:2a56db8126d72526cc7c4d16f522e7fd52f143c9}}; as described in Section REF , click modelling methods fit a predictive click model to a dataset of observed user interactions.
Part of the fitted model parameters represents relevance; thus the click model relevance estimates {{formula:74f7a80e-be68-4643-b862-7c182a07f685}} are variables that are optimized, in contrast with counterfactual estimation, where they are the output of the estimator function (see Section ).
We will define a click modelling method by the loss that they optimize:
A click model loss is a function {{formula:0f3cc4ef-ec8b-433a-8687-746d2b437cde}} that takes as input the vector of relevance estimates {{formula:01456a9f-b4e6-4e82-a625-f068371a3239}} , the estimated latent variables {{formula:a5729135-8e2d-47cd-99d7-48b29fc6e067}} and the observed data {{formula:669a5d0f-76af-4565-9823-9a35930f77bc}} : {{formula:d34faf06-4858-428c-b6ab-85d02ff94343}} .
The value of {{formula:de303fb3-c502-4947-b7e0-fd5e35fe3382}} indicates the quality of both the relevance and latent estimates, where a lower value indicates improvement.
Our definition is intentionally as generic as possible and - to the best of our knowledge - covers virtually all click modelling methods used for click-based LTR {{cite:b43d32cb847f75e0d8fe5c42c2a1e23e49eaa7af}}, {{cite:ac3283f7c3ab61e0c727d7fa4d378cb3f36457dd}}, {{cite:c37ff46fe1b04ebbd7a7fd193ee1579436df87f1}}, {{cite:5bdfffeb67d86d2fc98db14afbd8525a6a8e1381}}, {{cite:2a56db8126d72526cc7c4d16f522e7fd52f143c9}}, {{cite:2d8285df0ab70ae1026a0a8cf001fc3a952a1b9a}}, {{cite:a90b4abe0a34be4d739ece1f4c7dc8f8a9b153fb}}, {{cite:158cd80fdba08ba82d0aeb85cdc406d456ad56bf}}.
Usually, there is a model of user behavior built around the latent variables {{formula:8dd226a3-655c-4a6c-be8c-1ec23e98c4be}} and relevances {{formula:fa28685a-460d-4874-8514-9de43e631c51}} , and the loss represents the predictive power of that model in describing the data {{formula:fc620800-a60a-44a2-a537-22b2c0590262}} .
In traditional click modelling work, this model is generally a graphical model with a clear interpretation {{cite:2a56db8126d72526cc7c4d16f522e7fd52f143c9}}, {{cite:2d8285df0ab70ae1026a0a8cf001fc3a952a1b9a}}, e.g. the rank-based position-biased model has a single latent variable per rank representing user examination {{cite:b43d32cb847f75e0d8fe5c42c2a1e23e49eaa7af}}, {{cite:f0d2feceb53ee56a193f532ba2e3519a5ca2927d}}.
More recent work has introduced neural networks for click modelling where the latent variables are the parameters of the networks; these models provide no intuitive interpretation, but they have enormous predictive power {{cite:ac3283f7c3ab61e0c727d7fa4d378cb3f36457dd}}, {{cite:5bdfffeb67d86d2fc98db14afbd8525a6a8e1381}}, {{cite:a90b4abe0a34be4d739ece1f4c7dc8f8a9b153fb}}, {{cite:c37ff46fe1b04ebbd7a7fd193ee1579436df87f1}}.
Our definition covers both traditional and neural click modelling methods by avoiding assumptions about the underlying predictive model.
While in the context of click models, relevance estimates are parameters to be optimized, a click modelling method still has to produce relevance estimates to be used for LTR.
We define the output of a click modelling method as the optimal relevance estimates:
The optimal relevance estimates of a click model are any relevance estimates that minimize the click model loss:
{{formula:f069c511-35e8-4976-9e68-e6433245e5ff}}
We note that this definition is intentionally ambiguous in that it allows for the possibility that multiple relevance estimates could minimize the loss.
Depending on the exact loss and optimization method, it is possible either that there is a single unique vector of optimal estimate values or that multiple values are optimal.
With these definitions, we can now analyse the consistency and unbiasedness of click modelling methods.
To start, we prove the following conditions for consistency:
A click model estimator is consistent if and only if the only relevance estimates that minimize its loss are the true relevances as {{formula:162f74b5-0b37-4ca4-a112-dc642aa7d099}} tends to infinity:
{{formula:ca0998f3-7b69-44a9-8a16-eedba8089228}}
From Definition REF and , we see that the true relevance estimates need to minimize the click loss in order to be a possible optimal estimate:
{{formula:730ce618-6fa3-4a99-980d-64214ce60c12}}
However, Definition also reveals that multiple relevance estimates may be valid.
Thus to guarantee that the correct values are estimated, we also have to exclude other possible values that minimize the loss:
{{formula:292c8479-f4af-4cd4-aa1a-1078d5acdef9}}
Combining Equations REF and REF directly proves Theorem .
In other words, Theorem proves that a click modelling method is consistent if - in the limit of infinite data - its loss is only minimized by the true relevance values.
Moreover, it also proves that these are the only conditions under which it can be consistent.
Importantly, these conditions relate to both the click modelling method, i.e. the loss and the underlying predictive model, and the data collection procedure,
since it is both the loss function and the data that determine where minimum values of the loss are.
Thus, while Theorem provides a limitation that is conceptually straightforward - the true relevance values should provide the only loss minimum - it appears difficult to determine when this condition is met in practice.
Finally, we prove the following unbiasedness condition:
A click modelling method is unbiased if and only if the expected value of its optimal relevance estimates are equal to the true relevances:
{{formula:e5ab4616-d233-496b-88cc-f01b2debd653}}
This follows directly from Definitions and .
Admittedly, the condition proved in Theorem is extremely straightforward: the expected value of the estimates that minimize the loss should be equal to the true relevance values.
Similar to the consistency conditions of Theorem , this condition relates both to the loss function of the click model and to the process that gathers the data {{formula:4e77a5c0-466d-423c-9bd7-58d175d209f2}} .
Furthermore, while the unbiasedness condition is also conceptually straightforward, it is again hard to understand in practical terms.
To summarize our theoretical findings on click modelling for click-based LTR:
We have provided a very general definition of click modelling methods that covers virtually all existing methods in the field {{cite:b43d32cb847f75e0d8fe5c42c2a1e23e49eaa7af}}, {{cite:ac3283f7c3ab61e0c727d7fa4d378cb3f36457dd}}, {{cite:c37ff46fe1b04ebbd7a7fd193ee1579436df87f1}}, {{cite:5bdfffeb67d86d2fc98db14afbd8525a6a8e1381}}, {{cite:2a56db8126d72526cc7c4d16f522e7fd52f143c9}}, {{cite:2d8285df0ab70ae1026a0a8cf001fc3a952a1b9a}}, {{cite:a90b4abe0a34be4d739ece1f4c7dc8f8a9b153fb}}, {{cite:158cd80fdba08ba82d0aeb85cdc406d456ad56bf}}.
From this definition, we derived the condition for consistency: correct relevance estimates need to provide the only minimum of the loss in the limit of infinite data;
and the condition for unbiasedness: the expected value of the estimates that minimize the loss need to be equal to the correct relevance estimates.
While these conditions are conceptually straightforward, it heavily depends on the exact model and data collection procedure whether they are met.
As a result, it seems there is no straightforward general way to determine whether a click modelling method is consistent or unbiased in practice.
On a higher level, it thus appears that click modelling cannot provide robust guarantees on unbiasedness or consistency, but simultaneously, we were unable to find clear and substantial limitations regarding the click behavior they can debias, as we have found for counterfactual estimation.
Moreover, our theoretical findings contrast with the recent empirical success of this approach, which has shown it to be effective for click-based LTR.
We thus conclude that the main limitation of click modelling methods is that we currently lack robust theoretical guarantees regarding their unbiasedness or consistency.
Example Scenario
We found that the conditions we derived for the unbiasedness and consistency of click modelling methods (Theorem and ) are conceptually straightforward, but they also heavily depend on the exact model and data collection procedure.
To illustrate this dependency, we will consider the following example scenario that consists of two rankings with the following click probabilities:
{{table:955e5f8d-ed65-4e28-96c0-ed8bb98a33d9}}
Given all the click probabilities, one could fit the commonly used rank-based position bias model {{cite:b43d32cb847f75e0d8fe5c42c2a1e23e49eaa7af}}, {{cite:f0d2feceb53ee56a193f532ba2e3519a5ca2927d}} to this scenario: {{formula:ccd3d486-47f5-472b-89de-9351ae346b4c}} with {{formula:8266a036-169a-4f7c-a90c-2fc280ab92af}} .
Clearly, the result would follow {{formula:b9f01b16-6008-43a3-a088-2191958b7ba1}} , {{formula:d97a0ab0-c860-49c9-9c79-984515836c62}} , {{formula:a5c4578c-a750-43e1-b86a-0c8d46b44b25}} , {{formula:60a4d6c0-9f62-4060-b46b-cb281555dab5}} and {{formula:07bee829-446d-4081-bce4-4615d805dbd7}} .
Thus, while the values for {{formula:25914741-448a-406d-ae08-fe119aaa7f9c}} and {{formula:9e883d73-c8da-4add-bd6e-6ac3d2ba850b}} are determined exactly, {{formula:2345fa9e-c222-49db-9b72-68aad4d38b64}} and {{formula:260e92a9-70a1-4599-85ee-738730e7c6dd}} are merely constrained and a multitude of values would fit the data equally well.
We can thus conclude that this click model does not provide consistent estimates for the relevances in this scenario.
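To make this under-determination concrete, the following minimal Python sketch (with hypothetical click probabilities rather than the values from the table above, and with the first-rank bias parameter fixed to one as in the model above) exhibits two different parameter assignments of the rank-based position bias model that reproduce the same observed click probabilities exactly.

```python
# Hypothetical illustration of the under-determination discussed above: items C and D
# (indices 2 and 3) only ever appear at the third rank, so their relevances and the
# third-rank bias parameter trade off against each other.
import numpy as np

rankings = [[0, 1, 2], [1, 0, 3]]              # item shown at ranks 1..3 in two rankings
clicks = np.array([[0.8, 0.3, 0.1],            # hypothetical click probabilities, ranking 1
                   [0.6, 0.4, 0.1]])           # hypothetical click probabilities, ranking 2

def predicted(theta, gamma):
    # position-bias model: P(click | item d at rank k) = theta[k] * gamma[d], theta[0] = 1
    return np.array([[theta[k] * gamma[d] for k, d in enumerate(r)] for r in rankings])

theta_a, gamma_a = [1.0, 0.5, 0.25],  [0.8, 0.6, 0.4, 0.4]
theta_b, gamma_b = [1.0, 0.5, 0.125], [0.8, 0.6, 0.8, 0.8]

print(np.allclose(predicted(theta_a, gamma_a), clicks))   # True
print(np.allclose(predicted(theta_b, gamma_b), clicks))   # True: a second, equally good fit
```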
Our example scenario shows the difficulty in determining whether a click model is consistent.
In this scenario, even when all click probabilities and one bias parameter are known, we are unable to determine all document relevances.
Importantly, if we had an additional ranking that placed item C or D at one of the first two ranks, all relevances could be determined, illustrating that consistency depends both on the model and on how the data is gathered.
We have not considered unbiasedness because it requires an extensive amount of assumptions and derivation: including a probabilistic model of what rankings are displayed and how many interactions are logged.
Finally, we note that the click model of our example scenario is extremely simple compared to recent neural click models {{cite:ac3283f7c3ab61e0c727d7fa4d378cb3f36457dd}}, {{cite:5bdfffeb67d86d2fc98db14afbd8525a6a8e1381}}, {{cite:a90b4abe0a34be4d739ece1f4c7dc8f8a9b153fb}}, {{cite:c37ff46fe1b04ebbd7a7fd193ee1579436df87f1}}; we expect that analysing unbiasedness or consistency conditions for such complex models is even less feasible.
Bias Estimation With Click Modelling
While our discussion has focused on click modelling for relevance estimation, it is also often applied to estimate bias parameters for counterfactual estimation {{cite:55c328fd9b77e765d8baab3b9edccdc049a2a664}}, {{cite:249878c9b8db9970a5b39b872d73233cf7c0b10d}}, {{cite:79ec4e66e293d61443f5437e6177af224115c7b1}}, {{cite:158cd80fdba08ba82d0aeb85cdc406d456ad56bf}}.
While a full discussion of this application falls outside the scope of this work, we note that our methodology in Section can be applied analogously to bias estimation.
This would lead to a similar conclusion: for unbiased and consistent estimation of the bias parameters, only the correct vector of values should minimize the loss in expectation and in the limit.
Correspondingly, in spite of its empirical success, it thus seems that click modelling currently lacks any robust theoretical guarantees for bias estimation.
Other Click-Based LTR Methods
We have attempted to cover as many existing click-based LTR methods in our discussion as possible.
Nonetheless, we will now briefly discuss several methods that fall outside of our definitions of counterfactual estimation and click modelling.
Firstly, we have not considered the multitude of online LTR methods {{cite:0a20be2bd44509fbc688c83172c8b98a511bd95d}}, {{cite:cf18fd4c522c33bf9733b8a901cf2c4d8052f617}}, {{cite:b2d1b18d28e944cf79facc891b107311a2cfe0d5}}, {{cite:f1be0253e40c8cfdd8b0e959ea94405386e1c02f}}, because several earlier works have already determined they are not able to provide theoretical guarantees w.r.t. unbiasedness {{cite:ee98758f791d5c742f186b75ecb589f0eb3413a4}}, {{cite:fb34b306a6828aeab844ac86e8d9528bda83d665}}, {{cite:c61dc2402f514ef20d80f4af0c519b115bade5ae}}, {{cite:65471d93ec051f373da3b98810f4df100ef3fac0}}.
Secondly, {{cite:16831985cb5ecde306451de8d070c297713e6bcf}} propose applying Heckman corrections {{cite:5e942f5ace5d76fa08d071bb1e6de0692ab01029}}, commonly used in econometrics, to the click-based LTR problem {{cite:d8e85b17df6cb8725ac8a175777819eea23c7156}}, {{cite:16831985cb5ecde306451de8d070c297713e6bcf}}.
Unfortunately, this approach falls outside the scope of this paper because it differs substantially from the rest of the click-based LTR field.
Lastly, there are several click-based pairwise LTR works: some were excluded because they do not aim to be unbiased {{cite:4b8bbd44d8a770812a2e328f8d8a0479704f8a59}}, {{cite:32ae360b05f929c55e64396f4ab9e508b83ae88e}}.
A notable exception is the pairwise method by {{cite:abed48944e661d7f1b03c9c9c40ef8c54fec795d}} which does not match our LTR problem setting, but does rely on counterfactual relevance estimation, making our analysis in Section still applicable.
Finally, there is the Unbiased LambdaMart method by {{cite:de0a05dffe52453e14aab6ff068093aa0dc4792f}}, which is unbiased but not due to its method; the next section addresses this interesting case separately.
Unbiased LambdaMart: Trivially Unbiased
{{cite:de0a05dffe52453e14aab6ff068093aa0dc4792f}} propose Pairwise Debiasing that aims to estimate a pairwise loss, based on the assumption that per rank there is a static ratio between click probabilities and relevances across queries:
Pairwise Debiasing Assumption.
Per position {{formula:13ab271b-d82e-4592-9d9a-f622362ef476}} there are two ratios {{formula:1e6aabe3-80f9-4151-a59d-7d98363c543c}} and {{formula:973e7e3d-a5f2-4129-bc3d-7826a3b9a81f}} between click and non-click probabilities and relevances when any item is displayed at position {{formula:502ba6b5-6c6f-400d-88eb-c590b4a9a468}} :
{{formula:3413cc87-f110-4a8a-a5ce-0da12bc9f312}}
{{cite:fb34b306a6828aeab844ac86e8d9528bda83d665}} pointed out that since Eq. REF is equivalent to the popular rank-based position-bias model {{cite:1d3bb815e5be730f263e6843d27dca01168d4a66}}, {{cite:158cd80fdba08ba82d0aeb85cdc406d456ad56bf}}, {{cite:f0d2feceb53ee56a193f532ba2e3519a5ca2927d}}, {{cite:b43d32cb847f75e0d8fe5c42c2a1e23e49eaa7af}}, the additional Eq. seems to conflict with that model.
We will now prove that the pairwise debiasing assumption actually implicitly assumes that there is no bias at all in non-trivial ranking circumstances:
If two items {{formula:45cc9a2d-139f-4342-a7a7-6d8266c85d56}} and {{formula:2271352a-25aa-4fae-8d16-4d468822f89b}} are both displayed at position {{formula:413423dc-8b89-4f9d-84a7-b9cc68f9dd0c}} for two different queries {{formula:bff9089b-1c72-48c7-9f9c-0bfafc7ba33f}} and {{formula:d43a989f-c40d-4822-970f-ccc8411420f1}} and they have different relevances s.t. {{formula:40c137f4-900e-430f-9914-963af4bea3c3}}
then under the pairwise debiasing assumption click probabilities at {{formula:4490f586-ed28-46bf-a8ee-b13f50d202ef}} are equal to the corresponding relevances:
{{formula:31cdf8b8-de34-446a-ae34-28f15d5993d8}}
Because the click probability (Eq. REF ) and non-click probability (Eq. ) have to sum to one, the following must hold:
{{formula:84d3a61e-6a26-4697-b094-0738d7265120}}
From this we can express {{formula:6cc59d64-5da7-4b48-a8c3-904a8b2ed74c}} in terms of {{formula:0fa57ded-243d-4ffd-a7b2-57a00939d724}} and {{formula:9352c98a-41e6-4fcd-bd7c-cb56fd5fed2f}}; similarly, this can also be done with {{formula:2d0597bb-1c30-40b8-ba9b-56a1efcd1ef8}} and {{formula:23cf7652-f031-4dd4-a91e-b4972c99f2dd}}, therefore:
{{formula:240850a9-59da-47ac-b898-36c6e184417a}}
From the latter part of that equality, we can derive:
{{formula:a9107edb-5fcf-48a6-83cc-c22ccc2c09fc}}
Since {{formula:b0ae08e5-9c18-4901-9225-436a9302e3a2}}: {{formula:c63ffc51-da30-44d1-919a-935a8703ec2a}}, which proves Eq. ; in turn, since Eq. REF and Eq. sum to one, {{formula:c49fb2b9-b5b0-4869-bb6c-4f4b004cdcb2}}, thus also proving Eq. REF .
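The algebra above can be checked mechanically; the following small sympy sketch (a verification aid, not part of the original proof) solves the two sum-to-one constraints for the two ratios and confirms that both must equal one whenever the two relevances differ.

```python
# Verify: t_plus * rho + t_minus * (1 - rho) = 1 for two items with different relevances
# at the same position forces t_plus = t_minus = 1, i.e. click probability = relevance.
import sympy as sp

t_plus, t_minus, rho_i, rho_j = sp.symbols('t_plus t_minus rho_i rho_j', positive=True)

eqs = [sp.Eq(t_plus * rho_i + t_minus * (1 - rho_i), 1),
       sp.Eq(t_plus * rho_j + t_minus * (1 - rho_j), 1)]

# For generic rho_i != rho_j the unique solution is t_plus = t_minus = 1.
print(sp.solve(eqs, [t_plus, t_minus], dict=True))   # expected: [{t_plus: 1, t_minus: 1}]
```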
Without discussing the details of {{cite:de0a05dffe52453e14aab6ff068093aa0dc4792f}}'s Unbiased LambdaMart method {{cite:de0a05dffe52453e14aab6ff068093aa0dc4792f}}, we can already expect it to be unbiased, since its starting assumption already implicitly entails that clicks are unbiased w.r.t. relevance.
Therefore, its unbiasedness is completely trivial from a theoretical perspective, i.e. rankings according to CTR are also unbiased under the pairwise debiasing assumption.
We would like to point out that there may still be practical value in pairwise debiasing, i.e. {{cite:de0a05dffe52453e14aab6ff068093aa0dc4792f}} report promising real-world performance improvements.
For our discussion regarding implicit limitations, it provides an illustrative example of why one should be critical when choosing or evaluating assumptions and why investigating implicit limitations matters: they can make the subsequent theoretical findings trivial and inconsequential.
Discussion and Conclusion: The Future of Click-Based LTR
We have critically analyzed the theoretical foundations of the main branches of unbiased LTR for implicit assumptions and the limitations they entail, and have come to the following conclusions:
(i) Counterfactual estimation can only unbiasedly and consistently estimate relevance when click probabilities follow affine transformations of relevances.
(ii) The conditions for click modelling methods to be unbiased and consistent do not translate to robust theoretical guarantees; we currently lack strong guarantees for complex models, i.e. the best performing neural networks {{cite:c37ff46fe1b04ebbd7a7fd193ee1579436df87f1}}, {{cite:ac3283f7c3ab61e0c727d7fa4d378cb3f36457dd}}.
(iii) Assumptions that assert constant ratios between clicks and relevance as well as non-clicks and non-relevance {{cite:de0a05dffe52453e14aab6ff068093aa0dc4792f}} implicitly assume that clicks are unbiased w.r.t. relevance.
Our findings have revealed significant limitations of the unbiased LTR approach:
(i) There is a clear limit on the click behaviors that the current counterfactual estimation approach could possibly correct for.
(ii) The theoretical assumptions of pairwise debiasing {{cite:de0a05dffe52453e14aab6ff068093aa0dc4792f}} are not applicable to biased clicks.
(iii) The current unbiased LTR theory lacks the ability to properly analyze the unbiasedness and consistency of click modelling methods.
These limitations do not undermine the value of the unbiased LTR field: its usefulness and effectiveness are clearly evidenced by a multitude of empirical results {{cite:c37ff46fe1b04ebbd7a7fd193ee1579436df87f1}}, {{cite:55c328fd9b77e765d8baab3b9edccdc049a2a664}}, {{cite:158cd80fdba08ba82d0aeb85cdc406d456ad56bf}}, {{cite:1d3bb815e5be730f263e6843d27dca01168d4a66}}, {{cite:8c7c6b83297ca261be1d84e8352a4bae17be55f7}},
but could help guide future directions and provide valuable lessons to the field.
The implicit limitations that we have uncovered for the existing approach reveal that, in order to find counterfactual estimation methods that are unbiased and consistent w.r.t. non-affine click behavior,
future research should depart from our generic definition (Definition ).
Concretely, such methods should not rely on averaging over transformed individual clicks, and as a result, proving unbiasedness would be much more difficult for them.
Furthermore, novel theoretical research is needed for finding theoretical guarantees of unbiasedness and consistency for click modelling methods, in particular, those that utilize complex models.
Our findings reveal that it is not enough that the underlying click model of a method matches the real-world click behavior; whether the method will produce unbiased or consistent relevance estimates also depends on how data is gathered, and ultimately, where the minima of the resulting loss are.
Similarly, theoretical research that explores in what circumstances bias estimation is feasible would be very valuable for further understanding the theoretical guarantees of counterfactual estimation.
Lastly, future research that introduces novel assumptions should critically investigate what these assumptions implicitly entail.
One should actively avoid assumptions that - intentionally or incidentally - make learning from clicks a trivial problem.
Finally, the findings of our critical analysis also provide some important lessons for the field:
Firstly, we should realize that unbiasedness is not always possible.
Accordingly, we should thus not invariably expect nor require it from future work, for this could systematically exclude research that tackles novel problem settings (e.g. non-affine click behavior).
Secondly, since unbiasedness may not be a realistic long term goal, the field will likely shift to bias mitigation or partial debiasing as feasible future directions.
Correspondingly, we recommend replacing the term unbiased LTR with the less-demanding debiased LTR or the neutral click-based LTR.
This work was supported by the Google Research Scholar Program.
All content represents the opinion of the author, which is not necessarily shared or endorsed by their employers and/or sponsors.
| m | 4df5b3e6f476c47f8caa70c5239d7f33 |
Role of Prediction Head: A deeper prediction head results in a student with higher representational capacity and thus a model that
better matches the teacher representations. Table REF shows results for models with
a common MobileNet-v2 {{cite:e4df4cb4ff77ff791a9b0995df4bde1f4d6b3b03}} backbone and different prediction head architectures.
The prediction head is used during
both student training and evaluation. We observe that a deeper model has lower MSE with teacher features and
better classification performance. However, a deeper model also implies greater inference time and
memory requirements. The student architecture is fixed based on deployment needs, and thus requiring a larger model goes against the very essence of distillation. To analyze performance at different layers of the prediction head, we train a single ResNet-18 {{cite:297dda41866b003d0cc62d49b330cbc37c512132}} student with all intermediate dimensions of the MLP equal to that of the output. Surprisingly, a model trained with an MLP prediction head performs well on the downstream task even when the prediction head is discarded during inference (Table REF ). The performance using features from the backbone network is slightly better than that from the final layer outputs whenever an MLP head is used (more results in suppl.). More importantly, this observation enables us to use
deeper prediction heads for
distillation in place of linear layers without any concerns about altering the student architecture or increasing
inference time.
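A minimal PyTorch sketch of this setup is shown below; it is an illustration under assumed dimensions (e.g. a 2048-d teacher feature), not the paper's exact code. The MLP head is only used to compute the feature-matching loss and can be discarded at inference, leaving the MobileNet-v2 student unchanged.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

backbone = mobilenet_v2(weights=None)
backbone.classifier = nn.Identity()              # expose the 1280-d pooled backbone features

teacher_dim = 2048                               # hypothetical teacher feature dimension
head = nn.Sequential(                            # deeper prediction head used only for training
    nn.Linear(1280, teacher_dim), nn.ReLU(inplace=True),
    nn.Linear(teacher_dim, teacher_dim), nn.ReLU(inplace=True),
    nn.Linear(teacher_dim, teacher_dim),
)

images = torch.randn(8, 3, 224, 224)             # dummy batch
teacher_features = torch.randn(8, teacher_dim)   # would come from the frozen teacher

student_features = backbone(images)              # used on their own at deployment time
loss = nn.functional.mse_loss(head(student_features), teacher_features)
loss.backward()
```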
| r | 40448fc46730a57a779f0fec17f5a2a1 |
where {{formula:81fe112c-7f28-4430-930d-83cb35a5af59}} is the “exchange-only” version of Eq. (REF ). Note that in standard textbooks, e.g., Refs. {{cite:d8015fa229165e700a646398535f490e870d8998}}, {{cite:73501553f7965904535eec968a2cc7326da3f131}}, the occupations are usually neglected, since they are fixed. We explicitly include the {{formula:140eae8f-8729-4fb8-8811-b196341016bc}} here to highlight the difference to RDMFT. The stationarity condition then leads to the {{formula:223782b4-b192-4d12-b145-3b89db9fc60d}} coupled equations (cf. Eq. (REF ))
{{formula:7d74502d-556a-4ca9-a31c-e2ef3a7973ec}}
| m | 68889593b28e5335adaa2925fb5da22d |
We suggest trying a wider range of hyperparameters and design choices for SplitMixer, such as strong data augmentation (e.g. Mixup, Cutmix), deeper models, larger patch sizes, overlapped image patches, label smoothing {{cite:e17ed177bb25684e55c513be5824c2689293c7ee}}, and stochastic depth {{cite:cf0b60346bf7eefa8a9cad1b6a97a584e6335dd2}}. Previous research has shown that some classic models can achieve state-of-the-art performance through carefully-designed training regimes {{cite:6421f1f5c636c80358be357da07b52322e3e8a73}},
We tried a number of ways to split and mix the channels and learned that some perform better than the others. There might be even better approaches to do this,
Incorporating techniques similar to the ones proposed here to optimize other MLP-like models is also a promising direction,
MLP-like models, including SplitMixer, lack effective means of explanation and visualization, which need to be addressed in the future (See Appendix ),
Our results entertain the idea that it may be possible to find model classes that have fewer parameters than the number of data points. This may challenge the current belief that deep networks have to be overparameterized to perform well,
Large internal resolution and isotropic design make SplitMixer-type models appealing for vision tasks such as semantic segmentation and object detection. This direction needs more exploration. Some works have already reported promising results regarding this (e.g. {{cite:4786f5e6a7f6ee17e4b272034353928c8de71e34}}, {{cite:93a410d5657223da494fa1915b3a298a847b771a}}, {{cite:9653da8058fb469dc0f5aea8ad655d315f9b5e09}}), and
Lastly, it is essential to a) understand the basic principles underlying CNNs, Transformers, and MLP-based models, b) explain why some design choices work while some others do not (e.g. {{cite:c5ccb3ced4d910a2e8fec76472d0365dad350890}}), and c) enumerate the key must-have building blocks (e.g. convolution, pooling, residual connection, normalization, data augmentation, and patch embeddings). This will help unify existing architectures, improve them, and invent even more efficient ones.
| d | 91d863e8d66b4eb2336b600895710ca1 |
Definition 4.6 (Left-point method {{cite:07afa5d7f2146d85b104f484b6f9b3ca70d6cd9d}}, {{cite:1feab0e8708424868a24c712677c8f1eb6db748e}}, {{cite:1febefc07cdafdb3818d70c6e12510a7a06b98c0}}) We define a numerical solution
{{formula:7c3d972a-6006-46fd-be14-2b4d0293b9d1}}
for the SDE (REF ) by setting {{formula:41172120-4e41-4e9a-9edc-545ff87bc0da}}
and for {{formula:b5119ca8-e99c-46c4-8eab-5a9c78ede44a}} , defining {{formula:92f17b32-09bf-4825-8bf5-2bc01f8cd17f}} as
{{formula:4361d351-e170-46cb-abf3-3a8f643cecb0}}
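Since the SDE (REF) is not reproduced here, the following sketch illustrates the left-point scheme for a generic scalar SDE dX = a(t, X) dt + b(t, X) dW, with the drift and diffusion evaluated at the left endpoint of each subinterval; the concrete coefficients and initial value are placeholders.

```python
import numpy as np

def left_point(a, b, x0, T, n_steps, rng):
    """Left-point discretization of dX = a(t, X) dt + b(t, X) dW on [0, T]."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        t_n = n * dt
        dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment on [t_n, t_{n+1}]
        x[n + 1] = x[n] + a(t_n, x[n]) * dt + b(t_n, x[n]) * dW
    return x

# Example: geometric Brownian motion dX = 0.1 X dt + 0.2 X dW with X(0) = 1.
path = left_point(lambda t, x: 0.1 * x, lambda t, x: 0.2 * x,
                  x0=1.0, T=1.0, n_steps=1000, rng=np.random.default_rng(0))
```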
| m | feaa7fb460cc20426dea1cb37e561de7 |
In the past two decades, both traditional {{cite:d37c6d539865a223cd6734f67cbeea4ca91f10ed}}, {{cite:ce2704a7872ae61083b86d6cba88583da866b97d}}, {{cite:47ab273f0ff3ec3c61ca65ea371b4baa1ccfeb2b}}, {{cite:0dfea6f628c727b03d2ffe0782ec64711d98628b}}, {{cite:7c4afb08062c0244f1ac37b9745a9eb1e0f7729c}} and deep learning-based {{cite:eb8d6d3198f7c1c7cf227ee4b8149542e6e0867a}}, {{cite:0d3d9d596189c3727ec1627d971d41fd516272a9}}, {{cite:db4fad162eac659fa53dd99ab1a361a853d3bcf2}}, {{cite:6fa8ad4ce5e51fb83e0d4b73f92f1fe96175bdff}}, {{cite:7aa8918d5bb90d1556dabee6a42f91979642fecc}}, {{cite:fbb24881ee9037f154c4cb1fc31d4268143a0f91}}, {{cite:2644ee15fc5d36d36cdc05083bbff24c5495ac86}} methods have shown their effectiveness for presentation attack detection (PAD). Most traditional algorithms focus on human liveness cues and handcrafted features, which need rich task-aware prior knowledge. In term of the methods based on the liveness cues, eye-blinking {{cite:d37c6d539865a223cd6734f67cbeea4ca91f10ed}}, {{cite:0aa3af6bfc725d694c02a3dd86090a6b1194e80f}}, {{cite:2a6e42bba716c104b4e5c5a230cc396b46adaa8c}}, face and head movement {{cite:e048a7cd5713a3a19a5eb41cc0ef8369fe660fda}}, {{cite:416bcaf5bca01651c4f879e24f05153eeac25f1c}} (e.g., nodding and smiling), gaze tracking {{cite:9b5c7a6bf66f417adc0cdc7a23eb11b16a12d672}}, {{cite:2474f8384a22816d2365ab1cc81b22f331678ca5}} and remote physiological signals (e.g., rPPG {{cite:ce2704a7872ae61083b86d6cba88583da866b97d}}, {{cite:db4fad162eac659fa53dd99ab1a361a853d3bcf2}}, {{cite:916509f5f4a9a0626eb34ed744ad193d74d91e64}}, {{cite:7e6458a37997011ed770c9d4bba3fa0c645982ff}}) are explored as dynamic discrimination. However, these physiological liveness cues are usually captured from long-term interactive face videos and easily mimicked from the video attacks, making them less reliable and inconvenient for practical deployment. On the other hand, classical handcrafted descriptors (e.g., LBP {{cite:1456b23803d0b298347917cf709bb178798c7cac}}, {{cite:47ab273f0ff3ec3c61ca65ea371b4baa1ccfeb2b}},
SIFT {{cite:7c4afb08062c0244f1ac37b9745a9eb1e0f7729c}}, HOG {{cite:0dfea6f628c727b03d2ffe0782ec64711d98628b}} and DoG {{cite:720173fd69969be0c2f2af895ff88256fdf25ba7}}) are designed for extracting effective spoofing patterns from various color spaces (RGB, HSV, and YCbCr). Although such handcrafted features could be cascaded with a trained classifier (e.g., SVM {{cite:316770b3662f5cb02a7225b866fdb781f024e94a}}) efficiently, they still suffer from limited representation capacity and are vulnerable under unseen scenarios and unknown PAs.
{{figure:5fff8931-8ebe-46db-ad34-ebab67bbeff5}} | i | 7db032f6c669bf429c36ab9df376faa2 |
We conducted a metalinguistic experiment (vs. a processing task testing implicit knowledge) where participants used a web interface to assess the relatedness of WordNet senses in a two-dimensional spatial arrangement task.
We then obtained BERT embeddings for word types in the Semcor corpus {{cite:963649f536f65f41a9af1dd7a81484d55a71ebf1}} and compared them to the experimental data, looking both at cosine distance in the embedding space and at the accuracy of a sense classifier using BERT's contextualized word embeddings as input.
All stimuli, code and visualizations are available at https://osf.io/fm78w.
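As an illustration of the embedding comparison (a sketch only; the actual pipeline, sentences and Semcor processing are not reproduced here), the following snippet extracts a contextualized BERT vector for a target word and compares two occurrences with cosine distance.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from scipy.spatial.distance import cosine

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (n_tokens, 768)
    target = sentence.split().index(word)                    # position of the target word
    pieces = [i for i, w in enumerate(enc.word_ids()) if w == target]
    return hidden[pieces].mean(dim=0)                        # average its sub-word pieces

v1 = word_vector("he deposited money at the bank", "bank")
v2 = word_vector("they walked along the bank of the river", "bank")
print(cosine(v1.numpy(), v2.numpy()))                        # cosine distance between two uses
```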
| m | bf49c65b2ec7181204aa173a994c5e49 |
Normalized Mutual Information.
We quantify the echo-chamber effect by measuring the extent to which users from different retweet communities share the same sources of information, as a proxy for the information siloing in an echo-chamber.
To do this, we employ Normalized Mutual Information (NMI) to gauge the similarity between the RT and CO communities {{cite:010b5477fead1800331af670d4bfefb07fe37ab0}}, by using the normalized_mutual_info_score function in the Python package scikit-learn.
Spanning {{formula:1e04cd87-0ddc-4033-837c-c7caabd89c2c}}, an NMI of 1 means that the community structure is the same between the two networks, while a low NMI means different community structures.
Note that this metric does not use the opinion leaning determined in the labeling step, and thus can be computed for any country/period network, whether or not it has a no-vax community.
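For concreteness, a toy computation of this NMI score with scikit-learn is sketched below; the two membership vectors are hypothetical community assignments of the same users in the two networks.

```python
from sklearn.metrics import normalized_mutual_info_score

rt_communities = [0, 0, 0, 1, 1, 1, 2, 2]   # community of each user in the retweet network
co_communities = [1, 1, 1, 0, 0, 2, 2, 2]   # community of the same users in the second network

# A value close to 1 would indicate near-identical community structure.
print(normalized_mutual_info_score(rt_communities, co_communities))
```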
| m | c48c4c8c0fb846e57114e98f5e769533 |
Approaches REF (or REF ) and REF are similar with respect to their implementation.
That is, approach REF can be implemented using GP regression {{cite:21083b9c4a98b24441d87ad60ac15b7013088bdd}}.
Therefore, the essential difference between REF and REF is that the former explicitly introduces mechanistic knowledge.
It will be shown here that approach REF is currently constrained to linear mechanistic knowledge, while popular methods based on maximum entropy {{cite:891979eb319a2773e309c1c973c65db9a3b3e445}}, {{cite:ad0358325152e037def3e576d6ea26c2b09e1a86}}, {{cite:bef02331a9cf250e797638fa54fa73053be71669}} can handle nonlinear knowledge.
The difference between GP-based mechanistic emulation and maximum entropy methods is that in the latter, the mechanistic knowledge is added as constraints on the moments of the predictive or posterior distribution, while in the former it is added as the dynamics of the prior model.
Adding constraints to predictive distributions requires expert knowledge available at the level of the emergent behavior of the simulator, while adding dynamic information requires knowledge about the constitutive parts of the simulator.
The latter is likely to be readily available from the development of the simulator itself.
| i | ce57c6be916e54a908cd3bcd469a008e |
The primordial black holes (PBHs) are the one of the type of black hole that produced in the early Universe {{cite:959c64368f602218d25e14e138c789ca84c42dbe}}, {{cite:4cc5a8824aaae681891bfbfd0ae1661b4da5814f}}, {{cite:0e1376b412d6b2986b01e08c15294b3824e0f401}}, {{cite:1c1888f286e972ab4d41d2f91c1f327b639e8bec}}. The PBHs are produced via a number of mechanisms, such as the collapse of large density perturbations generated from inflation {{cite:c5aa0d5b7b9d36b7c089c2641dd37b1840c5ee6b}}, {{cite:abfabccab66e837b13ae881af35d74b26e8d1cfb}}, {{cite:ef03d2470feae2b38a7ce9c7cfb642b9d47097ca}}, {{cite:104d3dd5dcaecc069fa3b4be51b948e2445571f2}}, {{cite:db4c484c27fa9239b9a4d6e20bcbaf4314f1231c}}, {{cite:9436103348c64e5417e91cc4b0f17c5673c90273}}, {{cite:5ee7ee6b998930edc2be2329f8b3bff452d35f2d}}, {{cite:739f5520eab2a9b24cffb5b5472199e767cb925b}}, {{cite:f0a2437cc04419c1a473fe4e46a8048e90cfb85d}}, a sudden reduction in the pressure {{cite:ed0ad1cdc106a55f20b67b89d34f21cc6a1dba2a}}, {{cite:5116c1fe3773f25e63e31860ad215ee9c86ed926}}, bubble collisions {{cite:12683c560f5f9492ad3480efe2969bdbf9692365}}, {{cite:1f25ca4e03a1348c8dbd23a856cbaf5a0f779d15}}, {{cite:bd296441984802820188497de55422e75230908d}}, {{cite:b1de72b18f9c61b8634966e1922b602b4828ae67}}, {{cite:7980abc7fc7588baa6a413080243838267f0c01f}}, a curvaton {{cite:fcf6f057c8696bc2905b74a7a7b6c6f1c76ad73c}}, {{cite:058da7d6581c6477ee04711344ccac243e833661}}, {{cite:250c99a0225bd2cbcd18162d3d2ffe7e577374f4}}, {{cite:0d8b460d256ceea00c7b7402ba453bbd341eb42e}} and collapse of cosmic string {{cite:d77e8b3dbbdfed66d4a58e642dc07bd22a44cefd}}.
| i | 3d8aec873e1c6ac644b4607812e005a2 |
Let us first recall a classical version of van der Corput's lemma (cf. Corollary to Proposition 2 in {{cite:b093c61bfbdf9e4233eb2436670b2493a36858c5}}).
| r | c66331cf1675c72577d8522b970bec63 |
However, a proximity parameter can in principle be found such that its cubic partition is the better choice for a given simulation. In fact, N-body simulations and the friends-of-friends algorithm have been used to calibrate these kinds of codes to obtain values of the linking length parameter such that it yields the best identification of galaxy groups in catalogs; see for instance {{cite:c97e9bd783e34c4005b99ca32155108a497ed619}}, {{cite:7ca2c523435111bd80759a01afef8479f98f79a7}} and {{cite:ae4e86e1edb55cd3eb27053fa3fe0640b727c87e}}.
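For illustration, a minimal friends-of-friends grouping (not the cited group finders) can be written as follows: points closer than the linking length are linked, and groups are the connected components of the resulting graph; the positions and linking length below are hypothetical.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import radius_neighbors_graph

rng = np.random.default_rng(0)
positions = rng.random((500, 3))                 # hypothetical 3-D particle positions
linking_length = 0.05                            # the proximity parameter to be calibrated

graph = radius_neighbors_graph(positions, radius=linking_length, mode="connectivity")
n_groups, labels = connected_components(graph, directed=False)
print(n_groups)                                  # number of friends-of-friends groups
```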
| d | 088549023ba6624d95a29f6679e52534 |
Other experiments considered different data sources including characters (written on a tablet), and other features including physical properties such as speed and acceleration. In these tasks, other methods sometimes showed better performance, specifically a KNN classifier using edit distance with real penalties {{cite:396f2f48170c15e8d194c045df6558665e872d9c}} performed exceedingly well on the character classification tasks, and linear classifiers on the mode-of-transportation tasks.
We did not specifically design tasks to take advantage of the distances or embeddings that preserved the orientation of direction of the trajectories (as has been done elsewhere {{cite:19402b03cc9af812de988a705ab6c3e030630ca3}}, e.g., by modifying the Pigeons dataset {{cite:20d09d04b0b85c0fd74f27460672df8a452750a9}}, {{cite:7f354b7a6eaddba4cc3f70a7763c63e035e26ad8}}, {{cite:16c4ac678099fccd8efc01b4411df1126fb91b57}}), and there was no significant observed advantage of that class of techniques over ones that simply treated trajectories as geometric objects. Identifying naturally occurring tasks where this is needed, and performing the empirical study is an intriguing future direction.
| d | 03093e32340c20b48e4c5806933896c9 |
Selvaraju et al. use datasets with manually annotated bounding boxes on objects in the image and compare the saliency map against the human annotation using Intersection over Union to determine the quality of their explanations. Muddamsetty et al. {{cite:a079f4e77cc83d489b0435e4046be2ce0021569f}} also create a dataset comprising user saliency maps in the form of eye-tracking details of medical experts on retinal images. They compare the two saliency maps based on metrics like Area Under the Curve (AUC) and Kullback-Leibler Divergence (KL-Div) {{cite:f7066e2915ec43ed919cb16af97007ac1cc73337}} and show that the model's saliency maps closely align with those of human experts.
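As a rough sketch (illustrative only, not the cited implementations), the two kinds of comparison can be computed as follows: KL divergence between a model saliency map and a human fixation map treated as distributions, and IoU between a binarized saliency map and a bounding-box mask.

```python
import numpy as np

def kl_divergence(human_map, model_map, eps=1e-12):
    p = human_map / (human_map.sum() + eps)       # normalize maps to distributions
    q = model_map / (model_map.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def iou(model_map, box_mask, threshold=0.5):
    pred = model_map >= threshold                 # binarize the saliency map
    union = np.logical_or(pred, box_mask).sum()
    return np.logical_and(pred, box_mask).sum() / union if union else 0.0

model_map = np.random.rand(224, 224)              # hypothetical saliency map in [0, 1]
human_map = np.random.rand(224, 224)              # e.g. an eye-tracking fixation map
box_mask = np.zeros((224, 224), dtype=bool); box_mask[50:150, 60:180] = True
print(kl_divergence(human_map, model_map), iou(model_map, box_mask))
```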
| d | 2a4cadf25fbebdff37e3553bb047467a |
Usually AM methods are only compared to a few others and only on vanilla ImageNet-trained CNNs {{cite:3142ea881778ab09d8406724be592319d3b4595d}}, {{cite:0536914cbefb39d066f203b0b085846955dec8ee}}, {{cite:8a39e84c5f9422470010b7a9e7b6e03e297c4b2e}}, {{cite:6597519704128b610e216ed3cf4be8dd95c969a3}}, {{cite:8c89231cf4a24cd2e480a1448ace646300d56b01}}.
It is therefore often unknown whether the same conclusions generalize to other non-standard CNNs that are trained differently or have novel architectures.
Here, we aim to find out the best overall AM methods.
| m | 8bd6dfe6504a34393b91217eb2a29df7 |
It is well known that the low-energy chaotic behaviour of large-{{formula:263dc24a-f4f4-4ef7-8bef-539c1646d6cf}} holographic systems is captured by the near-horizon dynamics and effective theory of black holes.
The JT theory of gravity has been shown to reproduce the near-extremal thermodynamics of a wide class of black holes {{cite:0cdc8f624012ca930488773177002cb77387faed}}{{cite:9036ea1377c561c4ce12c4ed3c776100f8e6fd81}} for fluctuations in mass and entropy over extremality with the other black hole charges held fixed.
Such a JT theory can be used to deduce the {{formula:45c4c45e-66d5-42f2-84a3-a91bbc8b28bb}} due to interactions with 2d probe scalar fields in the near-horizon region, which is given by {{formula:247726d4-da42-4a49-b78e-eb5dcc6cf21b}} {{cite:51f6af0279eb096885a867590d33db25b9adf537}}{{cite:4b579846e936b47da8204f009ed3be0bf7585ac5}}. However, the phenomena explored in this paper (and in {{cite:a76f47f3e35da0ac4ce1f27b5693deb245e1dc87}}{{cite:ada3214b2c2b1e059c362fbeb4f9c9a0b2506356}}) concern perturbing a Kerr black hole with {{formula:f4c0df30-b533-4f12-9000-686a77383e25}} and {{formula:f9683733-d9d5-4e7e-a007-afc66cc814f2}}.
{{formula:9956ad02-c0ca-462e-b9fa-4dfd989102d8}}
| d | d37b06228afe1588f63955800385acc6 |
Annotation-Efficient Segmentation: Developing a segmentation model with encouraging performance always requires many high-quality annotations, but labeling the abdominal multi-organ data is very expensive and time-consuming; each volume takes around 1.2-2.6 hours. To reduce the labeling costs, annotation-efficient learning has attracted many researchers' attention, such as semi-supervised learning {{cite:f0801e61df829ebe30d7c20011dbb7daab0ed1df}}, {{cite:4705f155aa04b5d17997d9ecc6788af3936e14f0}}, {{cite:90f76a68c91f54dd1bd375a71733a1c92cbc9d25}} and weakly supervised learning {{cite:d9b3a3c8faff3704b88512c982a9b76f593126cc}}. In this work, we propose to learn from scribble annotations by combining entropy minimization and intra-class intensity variance minimization. Although our proposed method improves the baseline by a large margin, there is still a large performance gap compared with dense annotations. With this work, we hope these attempts inspire more annotation-efficient research in the future.
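A hedged PyTorch sketch of the two unsupervised terms is given below (an illustration of the idea, not the proposed method's implementation; the class count, loss weight and shapes are assumptions): prediction-entropy minimization and minimization of the image-intensity variance within each softly predicted class.

```python
import torch

def entropy_loss(probs, eps=1e-8):
    # probs: (B, C, H, W) softmax outputs; low entropy encourages confident predictions
    return -(probs * torch.log(probs + eps)).sum(dim=1).mean()

def intra_class_variance_loss(probs, image, eps=1e-8):
    # image: (B, 1, H, W) intensities; per-class soft mean intensity, then weighted variance
    loss = 0.0
    for c in range(probs.shape[1]):
        w = probs[:, c:c + 1]                                # soft assignment to class c
        mean_c = (w * image).sum(dim=(2, 3), keepdim=True) / (w.sum(dim=(2, 3), keepdim=True) + eps)
        loss = loss + (w * (image - mean_c) ** 2).sum(dim=(2, 3)).mean()
    return loss / probs.shape[1]

probs = torch.softmax(torch.randn(2, 4, 64, 64), dim=1)      # dummy predictions, 4 organs
image = torch.rand(2, 1, 64, 64)                             # dummy CT intensities
total = entropy_loss(probs) + 0.1 * intra_class_variance_loss(probs, image)
```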
| d | 53406b6efa1418447e4954a15370b018 |
conducted a more comprehensive evaluation with all combinations of KGE models and sampling strategies totalling 648 data points (324 for each prediction model).
The rest of the paper is organized as follows.
Section introduces essential concepts to the subsequent sections.
Section introduces the use case where the knowledge graph and prediction models are applied. Section introduces related work. The creation of the knowledge graph is described in Section . Section introduces the prediction models, while Section presents the evaluation of these models. Section elaborates on the contributions and discusses future directions of research. Finally, Appendix gives an overview of the knowledge graph embedding models used in this work.
Preliminaries
In this section we introduce important background concepts that will be used throughout the paper. Table REF contains the most important symbols.
{{table:4ab3c8a2-c934-4ab5-bcc8-3c1df1787a30}}
Ecotoxicological terminology
Taxonomy in this work refers to a species classification hierarchy. Any node in a taxonomy is called a taxon. Species is a taxon which is also a leaf node in the taxonomy. An Organism denotes an individual living organism which is an instance of a species.
Chemicals or compounds are unique isotopes of substances consisting of two or more atoms.
Effect, used in this work as short form for chemical effect, refers to the response of an organism (or population) to a chemical at a specific concentration.
Endpoint (not to be confused with a SPARQL endpoint) denotes a measured effect on the test population at a certain time; e.g., lethal concentration to 50% of the test population (LC50) measured at 48 hours. Note that an experiment can have several endpoints, e.g., LC50 at 48 hours and LC100 at 96 hours (lethal concentration for all test organisms). See Table REF for the most common endpoints.
Ontology-enhanced knowledge graphs
In this work we consider the most broadly accepted notion of knowledge graph within the Semantic Web: an ontology enhanced RDF-based knowledge graph (KG) {{cite:6b8e6318bd0e98831a6615974a16a20783ee041a}}.
This kind of knowledge graph enables the use of the available Semantic Web infrastructure, including SPARQL engines and OWL reasoners. RDF, RDFS, OWL and SPARQL are standards defined by the W3C: https://www.w3.org/standards/semanticweb/
Thus, in our setting, KGs are composed by RDF triples in the form of {{formula:e1c40baa-ca4f-47b4-9dc8-83517e18a4ba}}, where {{formula:e9b6bc06-47aa-43a7-9893-70d7e5e4867a}} represents a subject (a class or an instance), {{formula:22195abe-e60e-4f25-9bf6-3cf6ffd34255}} represents a predicate (a property) and {{formula:13d7692e-2e17-4333-b2ca-29d7091f85d6}} represents an object (a class, an instance or a literal).
Here {{formula:e0a7199e-c9f8-4ec1-baf4-8ff0817caacf}} is the set of all classes and instances, {{formula:fec39210-80a1-4393-b7b3-e93210eb61d9}} is the set of all properties, while {{formula:b396fffb-b61e-4cfd-b929-61ca2f0ef9b3}} represents the set of all literal values.
KG entities (i.e., {{formula:32485bc0-98bf-455a-a517-72534ba515a7}}: classes, properties and instances) are represented by a URI (Uniform Resource Identifier).
An (ontology-enhanced) KG can be split into a TBox (terminology) and an ABox (assertions). The TBox is composed by triples
using RDF Schema (RDFS) constructors like class subsumptions and property domain
and range;
and OWL constructors like disjointness, equivalence and property inverses. Note that the Web Ontology Language (OWL) {{cite:de55d1550264f5c6b1215ff6e9a5009f2c3b3261}} also enables the creation of complex axioms that are translated/serialized into more than one triple: https://www.w3.org/TR/owl2-mapping-to-rdf/
The ABox contains assertions
among instances, including OWL equality and inequality,
and semantic type definitions.
Table REF shows several examples of TBox and ABox triples.
Ontology alignment
Ontology alignment is the process of finding mappings or
correspondences between a source and a target ontology or knowledge graph {{cite:863b475f78d627e2f128ac5a3494ccce184e331d}}, {{cite:3a8148024f8ed85266597fae5e0443f14ab69cd6}}.
These mappings typically represent equivalences or broader/narrower relationships among the entities of
the input ontologies.
In the ontology matching community {{cite:fb4db37f3e6bc06add214f0acf00966000fcafd4}}, mappings are exchanged using the RDF Alignment format {{cite:938d67f73c848c08f5c019dd09728433dd8f45b8}}; but they can also be interpreted as standard OWL axioms (e.g., {{cite:dcae8e10551d7cea6d7216b34638cfcf3554ea6d}}, {{cite:5de1bf24a1e5700cb383ffdf1929e823b179d105}}). In this work we treat ontology alignments as OWL axioms (e.g., Triple REF in Table REF ).
An ontology matching system (e.g., LogMap {{cite:4c27b2be6e983a2cda0bf1cf19a2d1236123fb40}}) is a program
that, given as input two ontologies or knowledge graphs,
generates as output a set of mappings (i.e., an alignment) {{formula:569a4c47-ad8f-48ac-a65d-d1748ccad455}} .
Embedding models
Knowledge graph embedding (KGE) {{cite:bdb5cdb2503aca75ef359589c59be1497db20ae2}}, {{cite:9bd8b874fd7d4de01884a9c8e62dc65d9bf06e80}} plays a key role in link prediction problems where it is applied to knowledge graphs to resolve missing facts in largely connected knowledge graphs, such as DBpedia {{cite:b03647578cccf735b13fe5e0baf56c06ec538980}}. Biomedical link prediction is another area where embedding models have been applied successfully
(e.g., {{cite:2ad8a26f068ae3979b047dd085b008a77c927d13}}, {{cite:442985a9857c44fb36da6e2033141908f3523c5f}}).
The embeddings of the entities in a KG are commonly learned by (i) defining a scoring function over a triple, which is typically proportional to the probability of the existence of that triple in the KG, i.e., {{formula:9f24acb6-833d-4a9d-a700-53a9b9e08775}}, {{formula:d18b9c65-2e44-46dc-a338-afe9b0a64889}} (for the embedding process, we focus on triples where {{formula:b29740b0-d5e7-4062-af00-22448e48e657}} is a class or an instance); and (ii) minimizing a loss function (i.e., deviation of the prediction of the scoring function with respect to the truth available in the KG). More specifically, KGE models (i) initialize the entities in a triple {{formula:1b862099-cdb0-4e88-83d2-adf2c653d27b}} into a vector representation {{formula:10eaa13c-79ab-4610-ba49-6576215657f7}}, where {{formula:c9b2fa9a-78b0-45d4-a83c-77e83be583fa}} is the dimension of the vector; (ii) apply a scoring function to {{formula:0954f8a1-8d55-458d-b603-0c4b4e47c04f}}; and (iii) adapt the vector representations to improve the scoring and minimize the loss.
Several knowledge graph embedding models have been proposed. In this work, we used models of three major categories: decomposition models, geometric models, and convolutional models. The interested reader may refer to {{cite:9bd8b874fd7d4de01884a9c8e62dc65d9bf06e80}} for a comprehensive survey.
The decomposition models
represent the triples of the KG into a one-hot
3-order tensor and apply matrix decomposition to
learn entity vectors.
Geometric models, also known as translational, try to
learn embeddings by defining a scoring function where the predicate in the triple acts as a geometric translation (e.g., rotation)
from subject to object.
Convolutional models, unlike previous models, learn entity embedding with
non-linear scoring functions via convolutional
layers.
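To make the recipe concrete, the following toy PyTorch sketch (all dimensions, triples and the margin are hypothetical) initializes entity and relation vectors, scores triples with DistMult and TransE as two example scoring functions, and takes a gradient step on a margin loss over corrupted triples.

```python
import torch

n_entities, n_relations, dim = 100, 10, 50
E = torch.nn.Parameter(torch.randn(n_entities, dim) * 0.1)      # entity embeddings
R = torch.nn.Parameter(torch.randn(n_relations, dim) * 0.1)     # relation embeddings

def distmult(h, r, t):                     # higher score = more plausible triple
    return (E[h] * R[r] * E[t]).sum(dim=-1)

def transe(h, r, t):                       # negative translation distance
    return -(E[h] + R[r] - E[t]).norm(p=1, dim=-1)

pos = torch.tensor([[0, 1, 2], [3, 0, 4]])                       # hypothetical (s, p, o) ids
neg = pos.clone()
neg[:, 2] = torch.randint(n_entities, (len(pos),))               # corrupt the objects

margin = 1.0
loss = torch.relu(margin
                  - distmult(pos[:, 0], pos[:, 1], pos[:, 2])
                  + distmult(neg[:, 0], neg[:, 1], neg[:, 2])).mean()
loss.backward()                            # gradients adapt E and R to improve the scoring
```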
Ecotoxicological risk assessment and adverse biological effect prediction
The task of ecotoxicological risk assessment is to study the potential hazardous effects of chemicals on organisms from individuals to ecosystems. In this context, risk is the result of the intrinsic hazards of a substance on species, populations or ecosystems, combined with an estimate of the environmental exposure, i.e., the product of exposure and effect (hazard).
{{figure:91201279-db48-4d2e-a046-57057971cd39}}
Figure REF shows a simplified risk assessment pipeline. Exposure data is gathered from analysis of environmental concentrations of one or more chemicals, while effects (hazards) are characterized for a number of species in the laboratory as a proxy for more ecologically relevant organisms. These two data sources are used to calculate the so-called risk quotient (RQ; ratio between exposure and effects). The RQ for one chemical or the mixture of many chemicals is used to identify chemicals with the highest RQs (risk drivers), identify relevant modes of action (MoA; the mode of action describes the molecular pathway by which a chemical causes physiological change in an organism) and characterize detailed toxicity mechanisms for one or more species (or taxa). Results from these predictions can generate a number of new hypotheses that can be investigated in the laboratory or studied in the environment.
Note that this risk assessment pipeline is a simplified version of the one in use at the Norwegian Institute for Water Research (NIVA: https://www.niva.no/en); however, similar methodologies are used across regulatory risk assessment pipelines.
{{table:99a8b0f6-2613-43fd-b23e-17b37a58c5d7}}
The chemical effect data is gathered during laboratory experiments, where a sub-population of a single species is exposed to an increasing concentration of a toxic chemical.
The endpoints of the experiments are recorded at chemical concentrations and time after exposure.
These endpoints are categorized into several categories, e.g., lethality rate of test population (see Table REF ).
Ecological risk assessment methods require a large amount of these experimental data to give an accurate depiction of the long-term risk to an ecosystem. The data must cover the relevant chemicals and species present in the ecosystem, e.g., an ecological risk assessment of agricultural runoff in Norway will mostly concern pesticides and waterfleas, copepods, and frogs, among other species {{cite:8df25d5504ed927659467250772cb70dee895b9e}}. Even with just a few relevant chemicals and species, the search space becomes immense and performing laboratory experiments becomes unfeasible. Thus, it is essential to develop in silico methods to extrapolate new chemical-species effects from known combinations. We differentiate between two complementary strategies:
(i) highly specialized (restricted in chemical and species domains) models to predict chemical concentrations that will have an effect on a test species, and
(ii) models that produce rankings of highly representative chemical-species pair hypotheses which can be used by a laboratory to perform targeted experiments.
In this paper we focus on the latter strategy, using a method based on knowledge graph embeddings.
Methods that fall into the first strategy are introduced in Section REF .
Related work
This section will cover related work from ecotoxicology and knowledge graph based prediction.
Toxicity extrapolation
There are two main research areas in toxicology to extrapolate chemical effects,
i.e., Quantitative Structure-Activity Relationship (QSAR) and read-across. QSAR
modelling tries to find a relationship between the structure of a chemical and the chemical's biological activity (cf. reviews {{cite:10b7c8e9fd2e6ba1415bed8eb28d80189c7943e6}}, {{cite:9bcf4e77d2786f3020539109e35d5f6fd7ad61d8}}). This relationship is described using derived chemical features. Some features are simple, e.g., the octanol-water partition coefficient or logP, while others concern the entire chemical, e.g., chemical fingerprints. The basis of the QSAR relationship is usually modeled as polynomial equations. Parthasarathi and Dhawan {{cite:72781454109f1dd199e78c9c04bea82b1c326f4d}} take
this further by using the logarithm of chemical concentration to achieve a polynomial relationship: {{formula:5c58056f-abf9-4d33-8d0a-5e7df4ff666c}} , {{formula:691cbf02-08d2-4326-9a76-ebbb3278d85d}} and {{formula:717af9fe-cd52-40a8-94ba-6ccef9edb831}} ({{formula:364b37f5-a31e-41f4-a95b-4f35f2e34abf}} is a polynomial of {{formula:561809fd-5125-4135-9f28-3949ada66e92}} th degree), where
{{formula:2e7e623c-a506-4c0d-8f09-641e825ea75f}} is the chemical concentration while {{formula:0c04ba12-e4ad-4e45-8bab-0e8e8eee77e1}} and {{formula:af80af9a-1732-46b3-a817-c4631a03c302}} denote the derived chemical features hydrophobicity (a measure of the absence of attraction to water) and electronic effects in the molecule, respectively. The drawback of these models is their limited applicability domains. Usually, a QSAR model considers a small set of chemicals (tens to hundreds) and one single species.
This means that new features and relationships need to be developed for each species and each chemical group.
The read-across methods try to mitigate these drawbacks, mainly by considering extrapolation of the effect at the chemical and species levels. Similar to QSAR models, read-across of chemicals uses the chemical features to create similarity measures between chemicals to justify the read-across of chemical effects. Read-across in the species domain is harder, since species do not tend to have easily derived features.
Therefore, genetic similarity has emerged as a viable option. Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS), developed by the United States Environmental Protection Agency (U.S. EPA.), is an example of such an approach {{cite:b3ed7004082eff69d2f54690a62afbb66f88da91}}, {{cite:345a90c79ff31e2056014f9449ddd37903dafb00}}.
SeqAPASS uses a large amount of data available for humans, mice, rats, and zebrafish to extrapolate to areas with lower coverage.
Embedding models
In this work, we use nine KGE models across three categories of models. Here, we will give a brief introduction to the models, while a more extended explanation of the models is found in Appendix . The interested reader please refer to {{cite:9bd8b874fd7d4de01884a9c8e62dc65d9bf06e80}} for a comprehensive survey.
The three categories of models are decomposition, geometric, and convolutional {{cite:9bd8b874fd7d4de01884a9c8e62dc65d9bf06e80}}. The decomposition models are DistMult, ComplEx, and HolE.
DistMult models the score of a triple as the vector multiplication of the representation of each subject, predicate and object {{cite:c5f80a8a86dbb5f36e3422b0fd298e6d7977c286}}.
ComplEx uses the same scoring function as DistMult, however, in a complex vector space, such that it can handle inverse relations {{cite:333d4c031fa834597f59f186aa3ff39aa7646769}}. HolE is based on holographic embeddings {{cite:14c9d37bc994a88161b402f57719334464a78865}}, however, it has been shown that HolE is equivalent to ComplEx {{cite:98e7e1d9396b3555784725a4d06245d0250d1064}}.
The geometric models are TransE, RotatE, pRotatE, and HAKE. TransE is the base of a whole family of models and scores triples based on the translation from subject to object using the representation of the predicate {{cite:738b6c10b68cf769d607781475c0ef972d4241d0}}. RotatE is similar to TransE; however, the translation using the predicate is done by rotating it (via Euler's identity) {{cite:9d3089040cd71ef4628ca47733b9d0bcebf2db69}}. Furthermore, pRotatE is a baseline for RotatE where the modulus in Euler's identity is ignored {{cite:9d3089040cd71ef4628ca47733b9d0bcebf2db69}}. Finally, HAKE is a hierarchy-aware model in which entities at each level of the hierarchy are at equal distance from the origin and relations within a level are modeled as rotations {{cite:67cc97bf0ffe0cccddf7077b1632160cb6af1a63}}.
The convolutional models take a deep learning approach to the task of KGE. We use ConvKB {{cite:b524533a1296a105c231f2734a808711982f3008}} and ConvE {{cite:34346ef0dd19f03518ee65b8773240e1c1ae4691}}, which are similar but have slightly different architectures. They have shown good performance given their relatively small number of parameters.
Although quite a few KGE models have been proposed, the adopted ones are either classic models or can achieve state-of-the-art performance in some benchmarks.
They are representative of mainstream techniques, and have been widely adopted in KGE research and applications {{cite:9bd8b874fd7d4de01884a9c8e62dc65d9bf06e80}}.
Thus, the benefits and shortcomings of the KGE models analysed in this study
provide good evidence of the general performance of this type of models in a complex prediction task, i.e., adverse biological effect of chemicals on organisms.
Using KGE for prediction
Our focus to use KGE models is to predict if a chemical has a lethal effect on an organism.
KGE models have been explored in the biomedical domain to solve similar prediction tasks (e.g., finding relationships between diseases, drugs, genes, and treatments).
Several works have shown improvements in results by using KGE models for prediction, e.g., {{cite:2c70979ec0b7725281b9679d93c8f79dc29d4604}}, {{cite:2ad8a26f068ae3979b047dd085b008a77c927d13}}, {{cite:442985a9857c44fb36da6e2033141908f3523c5f}}. Chen et al. {{cite:7142cc3899a5bb222e845580aaf7dd9a02d8211f}} used random walks over networks to perform drug-target predictions. The ChEMBL and DrugBank KGs have also been used to predict chemical mode of action (MoA) of anticancer drugs with high performance on benchmark datasets {{cite:27c773108cace75076a97dafb2b5e33dd3236d18}}.
Opa2vec {{cite:bdfd3e85e381bf1a0e877097e0ac6938ce8b6d31}} and Blagec et al. {{cite:12f458fddd0266344ad79a2475ebc7b37d7a8cbd}} have developed embedding models to improve similarity-based prediction in the biomedical domain, while OpenBioLink {{cite:76a93305751c288d1dcfbe3e80b4696f54c94969}}
has created a framework for evaluating models in the biomedical domain.
EL Embeddings {{cite:8d82fc69befa13f98b5f60aa5a5f8c10f55ff96e}} and opa2vec {{cite:bdfd3e85e381bf1a0e877097e0ac6938ce8b6d31}} present new semantic embedding methods for KGs with expressive logic expressions (i.e., OWL ontologies) to predict protein interaction.
The former utilizes complex geometric structures to model the logic relationships between entities, while the latter learns a language model from a corpus extracted from the ontology. OWL2Vec* {{cite:995cce80a1a60562cc9215d73d2c1f00e3a0dc2a}} also learns a language model from an ontology and applies the computed embeddings to two prediction tasks: class subsumption and class membership. OWL2Vec* has also been used to predict the plausibility of ontology alignments {{cite:8f19090b49ccc3a336ec819bbe9702bba10ca6d7}}.
To the best of our knowledge there is no work using link prediction or KGE models to support
ecotoxicological effect prediction.
This study will give novel insights and empirical results of KGE models
in this new domain.
TERA knowledge graph
{{figure:9907df85-2f63-421c-9155-633003ecc53e}}
One major challenge in ecological risk assessment processes is the interoperability of data.
In this section, we introduce the Toxicological Effect and Risk Assessment (TERA), an ontology-enhanced RDF-based knowledge graph that aims at providing an integrated view of the relevant data sources for risk assessment. Resources to create and access TERA: https://github.com/NIVA-Knowledge-Graph/TERA
The initial inspiration for TERA was the aid of ecotoxicological effect prediction where access to disparate resources was required (see Section REF ).
However, by integrating these sources into a KG, we were also able to directly apply TERA into the prediction process by leveraging knowledge graph embedding models (see Section REF ).
The data sources integrated into TERA vary from tabular and RDF files to SPARQL endpoints over public linked data. The sources currently integrated into TERA are:
(i) biological: NCBI Taxonomy, Encyclopedia of Life, and Wikidata mappings ({{formula:8bcff442-55c1-474b-b45f-e99c4354433f}} species);
(ii) chemical: PubChem, ChEMBL, MeSH, and Wikidata mappings ({{formula:2989b66b-1540-4f0e-9e53-eb1e6c621a3c}} compounds); and
(iii) biological effects: ECOTOXicology Knowledgebase ({{formula:92ec00d6-af6b-408f-a4f6-122206c00c11}} results, {{formula:b48315f7-fba6-4c92-817e-96cf461ef118}} compounds, {{formula:1a010c0e-fd57-4c33-ab76-70909ca0aa8c}} species), and system-generated mappings.
These three distinct parts make up the sub-KGs of TERA, i.e., (i) the Taxonomy sub-KG (KG{{formula:5fd80c1e-c43a-43d0-b30b-5be59f20b7ab}}), (ii) the Chemical sub-KG (KG{{formula:dd18985d-5836-466c-9bb2-c0f48e76543c}}), and (iii) the Effects sub-KG (KG{{formula:3dccd72c-8411-400d-bb51-6eb3ebc1bc74}}).
The different processes to transform and integrate these sources into TERA are shown in Figure REF .
A snapshot of TERA is available on Zenodo {{cite:1b161420debd027da473db3c7549036205abc50d}}, where licenses permit. The source licenses are: EOL: Various Creative Commons (CC),
NCBI: Creative Commons CC0 1.0 Universal (CC0 1.0),
ECOTOX: No restrictions,
PubChem: Open Data Commons Open Database License,
ChEMBL: CC Attribution,
MeSH: Open, Courtesy of the U.S. National Library of Medicine,
Wikidata: CC0 1.0.
PubChem and ChEMBL are not included in the snapshot due to size constraints; these can be downloaded from the National Institutes of Health (ftp://ftp.ncbi.nlm.nih.gov/pubchem/RDF/) and the European Bioinformatics Institute (ftp://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBL-RDF/), respectively.
The subgraph of TERA used for prediction is available alongside the chemical effect prediction models in our GitHub repository: https://github.com/NIVA-Knowledge-Graph/KGs_and_Effect_Prediction_2020
Table REF shows several examples of RDF triples from TERA. Prefixes associated with the URI namespaces of entities in TERA:
et: (ECOTOXicology knowledgebase),
ncbi: (NCBI taxonomy),
eol: (Encyclopedia of Life), mesh: (Medical Subject Heading), compound: (PubChem compound), descr: (PubChem descriptors), vocab: (PubChem vocabulary),
inchikey: (InChIKey identifiers),
envo: (Environment Ontology)
cheminf: (Chemical information ontology),
chembl: (ChEMBL),
chembl{{formula:6a049249-0f76-4e55-b16b-932abc14693f}} m: (ChEMBL molecule subset),
chembl{{formula:33561cb1-6aee-458a-beda-7a486371186f}} t: (ChEMBL target subset),
wd: (WikiData entities),
wdt: (Wikidata properties),
qudt: (Quantities, Units, Dimensions and Types Catalog),
snomedct: (SNOMED CT ontology), and
bp: (Biological PAthway eXchange ontology).
owl:, rdfs:, rdf: and xsd:
are prefixes referring to W3C standard vocabularies.
{{table:28f214e7-8db1-4ee8-a54c-f94ce671194e}}
{{table:ba071909-95d0-4575-a727-b74bce3d66ef}}
Dataset overview
TERA, as mentioned above, is constructed by gathering a number of sources about chemicals, species and chemical toxicity, with a diverse set of formats including tabular data, RDF dumps and SPARQL endpoints.
Biological effect data of chemicals.
The largest publicly available repository of effect data is the ECOTOXicology knowledgebase (ECOTOX) developed by the US Environmental Protection Agency {{cite:773e8b7a52384408c0a0ade2eb86b7e9cf4fc2b3}}. This data is gathered from published toxicological studies and limited internal experiments.
The dataset consists of {{formula:c9fd214a-bc8b-4c4c-8cd6-84ac33fca6ad}} experiments covering {{formula:6c75db3a-ccc5-462c-81fb-64757f1dc6cd}} chemicals and {{formula:df7f33c6-110a-4d97-9f40-0327f14f9b1e}} species (version dated Sep. 15, 2020), implying a maximum chemical–species pair coverage of {{formula:5dcd637d-d34f-4ab1-a917-3d335360b3a2}}. The resulting endpoint from an experiment is categorised in one of a plethora of predefined endpoints (see Table REF above).
Tables REF and REF
contain an excerpt of the ECOTOX database.
ECOTOX includes information about the chemicals and species used in the tests.
This information, however, is limited and additional (external) resources are required to complement ECOTOX.
Chemicals.
The ECOTOX database
uses an identifier called the CAS Registry Number, assigned by the Chemical Abstracts Service, to identify chemicals. The CAS numbers are proprietary; however, Wikidata {{cite:02ee2f30ae813f21b597de49f37142012c68c53e}} (indirectly) encodes mappings between CAS numbers and open identifiers like InChIKey, a 27-character hash of the International Chemical Identifier (InChI) which encodes chemical information uniquely {{cite:d286cbebbab75b52f7618d9995367fbf40eb63e6}}. While InChI is unique, InChIKey is not, and collisions have greater than zero probability {{cite:e8cd0a7e85aba69664420c1599e88cc3292d12b9}}.
Wikidata also provides mappings to well known databases like PubChem, ChEMBL and MeSH, which include relevant chemical
information such as chemical structure, structural classification and functional classification.
Taxonomy.
ECOTOX contains a taxonomy of species.In the context of this paper, "taxonomy" refers to a classification of organisms. However, it only covers the species represented in the ECOTOX effect data. Hence, to enable extrapolation of effects across a larger taxonomic domain, we include the NCBI Taxonomy {{cite:75908cecd6213a36a05e9964c414dddfb58a1947}}. This taxonomy is distributed as a number of database dump files containing a hierarchy of all sequenced species, which equates to around {{formula:d1d98593-1233-4593-b87e-e7ebbde52878}} of the currently known life on Earth, making it one of the most comprehensive taxonomic resources. For each taxon (species and classes), the taxonomy defines a handful of labels, the most commonly used of which are the scientific and common names. Labels such as authority give the citation where the species was first described, while synonym gives an alternate scientific name that may be used in the literature.
Species traits.
As an analog to chemical features, we use species traits to expand the coverage of the knowledge graph. Apart from taxonomic classifications, traits are the most important information to identify species and will be of great importance when predicting the effect on the species.
The traits we have included in the knowledge graph are the habitat, endemic regions, and presence (and classifications of these). This data is gathered from the Encyclopedia of Life (EOL) {{cite:7bab61d9f6d21d0872c432b5c1ce34bd67158eb6}}, which is available as a property graph. Moreover, EOL uses external definitions of certain concepts, and mappings to these sources are available as glossary files.
In addition to traits, researchers may be interested in species that have different conservation statuses, e.g., if the population is stable or declining, etc. This data can also be extracted from EOL.
Dataset preprocessing
In this section we present the different steps to extract, transform and integrate the source datasets into the main TERA components and sub-KGs. All data is transformed using custom mappings (scripts) from the sources to RDF triples.
Table REF shows an excerpt of the triples in TERA.
{{table:77b2adfc-8575-4370-b703-44808ed86a3e}}{{figure:bcf30f8c-1f41-43d7-83a6-48427fc804e3}}
Effects sub-KG construction
The effect data in ECOTOX consist of two parts, i.e., test definitions and results associated with the test definitions (see Tables REF and REF , respectively).
The important
columns of a test are the chemical and the species used.
Other columns include metadata, but these are optional and often empty. Each result is composed by an endpoint, an effect, and a concentration (with a unit) at which the endpoint and effect are recorded.
This tabular data in ECOTOX is transformed into triples that form the effects sub-KG in TERA ({{formula:9ef886e2-f98f-4d2e-915f-29dc4e2ea833}} ). Note that a test can have multiple results.
A subset of the effect triples are listed in Table REF (see Triples REF - REF). A graphical representation for an effect test and its result is also shown in Figure REF .
ECOTOX contains metadata about the species and chemicals used in the experiments. This metadata is also included
in TERA to facilitate the alignment with other resources (see Section REF ).
The ECOTOX metadata file species.txt includes common and Latin names, along with a (species) ECOTOX group (see triples REF- REF in Table REF ). This group is a categorization of the species based on ECOTOX use cases. Prefixes and abbreviations like sp., var. are removed from the label names.
The full hierarchical lineageAs defined by U.S. EPA. Note that species hierarchies are contested among researchers. is also available in the metadata file species.txt.
Each column represents a taxonomic level, e.g., genus or family. If a column is empty, we construct an intermediate classification; for example, Daphnia magna has no genus classification in the data, so its genus is set to Daphniidae genus (family name + rank; the actual genus is Daphnia). We construct these classifications to ensure that the number of levels in the taxonomy is consistent (see triples REF and REF in Table REF ).
Note that when adding triples such as REF in Table REF , we also add a taxonomic rank to facilitate the querying for a specific taxonomic level.
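For illustration, a minimal Python sketch of this gap-filling step (the rank list and the naming scheme are illustrative choices, not necessarily those of our actual scripts):

RANKS = ["kingdom", "phylum", "class", "order", "family", "genus"]  # illustrative subset

def fill_lineage(levels):
    """levels: names per rank (aligned with RANKS), possibly containing None."""
    filled, previous = [], None
    for rank, name in zip(RANKS, levels):
        if not name:
            # construct an intermediate classification, e.g., 'Daphniidae genus'
            name = f"{previous} {rank}"
        filled.append(name)
        previous = name
    return filled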
The ECOTOX source file chemicals.txt includes chemical metadata and it is handled similarly to species.txt. The file includes chemical name (see REF in Table REF ) and a (chemical) ECOTOX group.
For the units in the effect data, e.g., chemical concentrations (mg/L, mol/L, mg/kg, etc.), we reuse the QUDT 1.1QUDT 1.1: http://linkedmodel.org/catalog/qudt/1.1/ ontologies.
When a unit such as mg/L is not defined, we define it
according to Listing REF .
Alignment with state-of-the-art tools
The ECOTOX database provides proprietary chemical identifiers (i.e., CAS numbers) and internal ECOTOX ids for species. In order to extrapolate effects across a larger set of chemicals and species than those available in ECOTOX, TERA integrates taxonomy and trait data from NCBI and EOL, and chemical data from PubChem, ChEMBL and MeSH.
Alignment
between ECOTOX and the NCBI Taxonomy.
There does not exist a complete and public alignment between the 23,439 ECOTOX species and the 1,830,312 NCBI Taxonomy species.There are a total of 27,133 and 2,246,074 taxa in ECOTOX and NCBI, respectively; however, we focus on species, i.e., instances. We have used three methods, two state-of-the-art ontology alignment systems and a baseline, to align ECOTOX and the NCBI Taxonomy:
[(i)]
LogMap {{cite:4c27b2be6e983a2cda0bf1cf19a2d1236123fb40}}, {{cite:0f61cab5249383e37a16112c92dd52c86bd22a7b}},
AgreementMakerLight (AML) {{cite:ec36f0669a447d7934f0f5804156f3cd1c0fc7c3}}, and
a string matching algorithm based on Levenshtein distance {{cite:48cc72c2edec8a43b399c3d32f2c6da205ee08ea}}.
LogMap and AML were chosen since they have performed well across many datasets in the Ontology Alignment Evaluation Initiative (e.g., {{cite:c94d11d438c3ec8bbd04c5f515e36b2376e82e84}}, {{cite:10a972bfda30545d68f1e77794c5f70efa33db52}}, {{cite:fb4db37f3e6bc06add214f0acf00966000fcafd4}}). Most mappings in our setting are expected to be lexical; therefore, we also selected a purely lexical matcher to evaluate whether more sophisticated systems like LogMap and AML bring additional value.
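A minimal sketch of such a lexical baseline, assuming normalised Levenshtein similarity over lower-cased labels and an illustrative acceptance threshold (not necessarily the one used in our pipeline):

import Levenshtein  # python-Levenshtein

def lexical_matches(ecotox_labels, ncbi_labels, threshold=0.95):
    """ecotox_labels/ncbi_labels: dicts mapping entity id -> label."""
    mappings = {}
    for e_id, e_label in ecotox_labels.items():
        best_id, best_sim = None, 0.0
        for n_id, n_label in ncbi_labels.items():
            sim = Levenshtein.ratio(e_label.lower(), n_label.lower())
            if sim > best_sim:
                best_id, best_sim = n_id, sim
        if best_sim >= threshold:
            mappings[e_id] = (best_id, best_sim)  # keep only the best candidate (1-to-1)
    return mappings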
Due to the large size of the NCBI Taxonomy, we needed to split NCBI into manageable chunks to enable the use of ontology alignment systems. Fortunately, this can be easily done by considering the species division, e.g., mammal or invertebrate. This divides the NCBI Taxonomy into 11 distinct parts, which can be aligned to the taxonomy in ECOTOX.
{{table:76eebcb0-2d9d-4ccc-9d2c-d9f656eb3fc9}}Note that an entity from ECOTOX is expected to match a single entity in the NCBI Taxonomy, and vice versa. Hence, 1-to-N and N-to-1 alignments were filtered according to the system-computed confidence.
A partial mapping curated by experts can be obtained through the ECOTOX Web.ECOTOX interface: https://cfpub.epa.gov/ecotox/search.cfm We have gathered a total of 2,321 mappings for validation purposes.
Table REF shows the alignment results over the ground truth samples for
the
1-to-1 (filtered) system mappings. We report number of mappings (#M), Recall (R) and estimated precision (P{{formula:e1037e6f-9c64-4839-8cce-683fdfbd1b76}} ) with respect to the known entities in the incomplete ground truth, assuming only 1-to-1 mappings are valid. P{{formula:d50d2c01-90bc-4195-970f-47eec32f0879}} is calculated as
{{formula:ed676536-412a-499d-a636-b6fac23cfefc}}
where {{formula:e24ba16c-9f2e-42a9-88d1-699c622fd4d2}} is the (incomplete) reference mapping set and {{formula:80e7908a-02a5-458e-8bd6-48b7f67c811c}} is the set of generated mappings between entities {{formula:36863638-76d5-4bce-a0d0-a6d65866c32f}} from ECOTOX and entities {{formula:3404946b-30c1-4263-b7b0-8355765a8fde}} from the NCBI Taxonomy, {{formula:330c511d-e8b5-4903-9920-bae6e9dd868b}} and {{formula:dde8bd90-d316-4cfe-9cc8-2728e960c7c4}} are the sets of entities that appear in the reference mappings.
Thus, {{formula:411242f1-3ac9-4a97-9de2-b164aeb488e9}} is defined as a subset of mappings from {{formula:1a5eb064-eb5b-4e0c-91cf-e1d3d2c22b24}} involving entities in the reference mapping set {{formula:9f969ec1-c0bd-4a3e-947e-a4894d632598}} .
Recall is defined in the standard way as
{{formula:ca77905c-974a-4f52-bea2-0ceda3cfa75c}}
Note that the recall will be the same for {{formula:82f2d53d-5da2-465d-ade6-35e697e3911c}} and {{formula:f88b33b4-e013-4a32-84c1-03a5ec16f0a4}} .
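Both measures can be computed directly from the mapping sets; a sketch, assuming mappings are given as sets of (ECOTOX entity, NCBI entity) pairs and that "involving" reference entities means that either side occurs in the reference set (our reading of the definition):

def estimated_precision_and_recall(system, reference):
    """system, reference: sets of (ecotox_entity, ncbi_entity) pairs."""
    ref_ecotox = {e for e, _ in reference}
    ref_ncbi = {n for _, n in reference}
    # restricted set: system mappings involving entities occurring in the reference set
    restricted = {(e, n) for e, n in system if e in ref_ecotox or n in ref_ncbi}
    true_positives = system & reference
    p_star = len(true_positives) / len(restricted) if restricted else 0.0
    recall = len(true_positives) / len(reference)
    return p_star, recall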
We have selected the union of the 1-to-1 equivalenceThere is no need for more complex mappings in this use case. mappings computed by AML and LogMap to be integrated within TERA, as they represent the mapping set with the best recall with a reasonable estimated precision. This choice was made by considering the large uncertainty of downstream applications (effect prediction and risk assessment), where we prefer a larger coverage of the domain.
See Triple REF in Table REF for an example of a system computed mapping between ECOTOX and the NCBI Taxonomy.
We use Wikidata as source of alignments between the NCBI Taxonomy and EOL, and among the used chemical datasets. Alignments are extracted via Wikidata's query interface (i.e., SPARQL endpoint).Wikidata endpoint: https://query.wikidata.org/sparql
The data in Wikidata concerning species and chemicals is in large part manually curated {{cite:6465529def27f932392ea9673b0f2d6d0959d5a5}} and is expected to have a low error rate compared to the output of automated ontology alignment systems.
Alignment between the NCBI Taxonomy and EOL.
In order to include in TERA trait data from EOL, we need to establish an alignment between EOL and the NCBI Taxonomy. We have constructed equivalence triples between the NCBI Taxonomy and EOL identifiers using Wikidata. The species identifiers are available as literals in Wikidata. Therefore, we concatenate them with the appropriate namespace.
Listing REF represents the SPARQL CONSTRUCT query used against the Wikidata endpoint. Here, we query Wikidata for instances of taxa, thereafter adding optional triple patterns for NCBI Taxonomy and EOL identifiers which are added as owl:sameAs triples to TERA.
Examples of resulting mapping triples are shown in REF- REF in Table REF . The proportion of species in Wikidata where this mapping exists is {{formula:3a4312fb-e265-4f2e-a3a4-3954330f1b7c}} .
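A sketch of how such mappings can be retrieved programmatically with SPARQLWrapper; the query below is a simplified stand-in for Listing REF (Wikidata property P685 is the NCBI Taxonomy ID, P830 the EOL ID, and Q16521 the taxon class; the concatenated namespaces are assumptions for illustration):

from SPARQLWrapper import SPARQLWrapper, RDFXML

QUERY = """
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
CONSTRUCT { ?ncbi owl:sameAs ?eol . }
WHERE {
  ?taxon wdt:P31 wd:Q16521 .
  OPTIONAL { ?taxon wdt:P685 ?ncbi_id . }
  OPTIONAL { ?taxon wdt:P830 ?eol_id . }
  BIND(IRI(CONCAT("https://www.ncbi.nlm.nih.gov/taxonomy/", ?ncbi_id)) AS ?ncbi)
  BIND(IRI(CONCAT("https://eol.org/pages/", ?eol_id)) AS ?eol)
}
"""

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery(QUERY)
endpoint.setReturnFormat(RDFXML)
mapping_graph = endpoint.query().convert()  # an rdflib Graph of owl:sameAs triples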
Alignment between chemical entities.
The mapping between ECOTOX chemical identifiers (CAS Registry Numbers) to Wikidata entities enables the alignment to a vast set of chemical datasets, e.g., PubChem, ChEBI, KEGG, ChemSpider, MeSH, UMLS, to name a few. The construction of equivalence triples between CAS, ChEMBL, MeSH, PubChem and Wikidata identifiers is shown in Listing REF . As for the case of species identifiers, the literal representing a chemical identifier is concatenated with the corresponding namespace. For the CAS Registry Numbers we also remove the hyphens to match ECOTOX notation. Examples of resulting mapping triples are shown in REF- REF in Table REF .
These mappings are not complete, but for some sources the coverage is large. Of the chemicals used in ECOTOX, {{formula:2eea0591-487c-47cd-a292-0b0e45cb462c}} have an equivalence in Wikidata (through the CAS registry numbers). Moreover, of the chemicals in Wikidata, {{formula:9a13ce8d-03fd-4cd7-b958-3f082ca5192e}} have ChEMBL identifiers, {{formula:55b4f6ba-60df-4141-875b-7208d592d112}} MeSH identifiers, {{formula:49ac9df5-e543-4eea-8efc-14235c164dc0}} PubChem identifiers, and {{formula:3864516f-0e3a-4028-9ed2-039bfbf2a2fc}} InChIKey identifiers.
Taxonomy sub-KG construction
The Taxonomy sub-KG ({{formula:a50f7d02-ca89-43d4-b92b-128d64312645}} ) integrates data from the NCBI Taxonomy and the EOL trait data.
The integration of the NCBI Taxonomy into the TERA knowledge graph is split into several sub-tasks.
We load the hierarchical structure included in the NCBI Taxonomy file nodes.dmp. The columns of interest are the taxon identifiers of the child and parent taxon, along with the rank of the child taxon and the division where the taxon belongs. We use this to create triples like REF- REF and REF- REF in Table REF .
To aid alignment between the NCBI Taxonomy and the ECOTOX identifiers, we add the synonyms found in names.dmp. Here, the taxon identifier, its name and name type are used to create triples like REF in Table REF . Note that a taxon in the NCBI Taxonomy can have several synonyms while a taxon in ECOTOX usually has two, i.e., common name and scientific name.
Finally, we add the labels of the divisions found in divisions.dmp (see triples REF and REF).
We also add disjointness axioms among unrelated divisions, e.g., triple REF in Table REF .
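A minimal rdflib sketch of loading nodes.dmp, assuming the standard column layout of the NCBI dump (0: taxon id, 1: parent id, 2: rank, 4: division id) and illustrative TERA predicates and namespace:

from rdflib import Graph, Namespace, RDFS

NCBI = Namespace("https://www.ncbi.nlm.nih.gov/taxonomy/")  # assumed namespace
g = Graph()
with open("nodes.dmp") as f:
    for line in f:
        cols = [c.strip() for c in line.split("|")]
        child, parent, rank, division = cols[0], cols[1], cols[2], cols[4]
        g.add((NCBI["taxon/" + child], RDFS.subClassOf, NCBI["taxon/" + parent]))
        g.add((NCBI["taxon/" + child], NCBI["rank"], NCBI["rank/" + rank.replace(" ", "_")]))
        g.add((NCBI["taxon/" + child], NCBI["division"], NCBI["division/" + division]))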
We use the TraitBank from EOL {{cite:d86f9bce0b3b88ccb3a3b573e9562ef7a78dbeef}} to add species traits to TERA. The TraitBank is modeled as a property graph and can be accessed as a neo4j database or via a set of tabular files. To integrate the TraitBank into TERA we validate the identifiers used in EOL and convert to URIs. If an identifier is not a valid URI, we replace invalid symbols.
A trait example is shown as triple REF in Table REF . The EOL TraitBank also includes
subsumption definitions (i.e., via rdfs:subClassOf)
for a large portion of traits. These
subsumptions
can be downloaded separately and are added to TERA in a similar way as mentioned above.
Chemical sub-KG construction
The Chemical sub-KG ({{formula:8cafbedc-a3a2-46bf-a9d9-5ca88953f8a3}} ) is created from PubChem {{cite:dd3db283086aed97769179782b066d7bcc2cecf6}}, ChEMBL {{cite:0303b8844c23980b103284077953ba55a1b39b3c}}, and MeSH {{cite:ab6261e3f04792e423ba6ed43d247e35755abe34}}. These datasets are available for download as RDF triples. In addition, ChEMBL and MeSH can be accessed through the EBI and MeSH SPARQL endpoints, respectively.
The chemical subset of PubChem is used since information about chemicals is standardized in PubChem, while information about substances is not. In this subset we use:
[(i)]
component information, i.e., the building blocks of the chemical or the parts of a mixture;
type assertions, which either link to ChEBI or describe the type of molecule, e.g., small or large;
role assertions, which describe additional attributes or relationships of the chemical, e.g., FDAApprovedDrug; and
drug products, which link to the clinical data in SNOMED CT {{cite:88e1ed25e7acd843346b33163c92a465e10047b4}}.
Examples of these can be seen in triples REF, REF and REF in Table REF .
Parent chemical data in PubChem is limited to permutations, e.g., bonds, polarity, and part-of-mixture axioms (triple REF in Table REF ). Therefore, we use the hierarchical data about chemicals from MeSH.
In addition to this data, we create similarity triples between chemicals. These are impractical to download, but can be calculated on demand. We add similarity triples to TERA where the Tanimoto (Jaccard) similarity between the chemical fingerprints (gathered using PubChemPy {{cite:1501e7249d7f37ee64d65a28727120b5cb65fbe7}}) is {{formula:ee20388f-d2b0-426f-9498-c695b60f9c91}} ,Default value used in PubChem {{cite:05fca5377a86715f2ec9a12a3fe89bfee14db62f}}. see triple REF in Table REF .
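A small sketch of how these similarity triples can be computed from PubChemPy's CACTVS fingerprints (the compound identifiers below are purely illustrative; 0.9 is PubChem's default threshold):

import pubchempy as pcp

def fingerprint_bits(cid):
    fp = pcp.Compound.from_cid(cid).cactvs_fingerprint  # string of '0'/'1' characters
    return {i for i, bit in enumerate(fp) if bit == "1"}

def tanimoto(cid1, cid2):
    a, b = fingerprint_bits(cid1), fingerprint_bits(cid2)
    return len(a & b) / len(a | b)

if tanimoto(2244, 3672) >= 0.9:  # illustrative CIDs
    print("add a similarity triple between the two compounds")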
ChEMBL contains facts about bioactivity of chemicals. This contributes in assessing the danger of a chemical. In TERA, we use the mode of action (MoA) and target (receptor targeted by MoA; triple REF in Table REF ). These targets are organized in a hierarchy using chembl:relSubsetOf relations (see triple REF).
Each receptor links to the organism it belongs to; however, we leave the inclusion of this information for future work.
We use the entire MeSH dataset in TERA. MeSH is organised as several hierarchies. The most prominent classifications are based on chemical groups and the intended use of the chemicals. Triples REF and REF in Table REF show examples of chemical group and functional classifications.
TERA for data access
TERA covers knowledge and data
relevant to
the ecotoxicological domain and enables integrated semantic access across data sets. In addition, the adoption of
an RDF-based knowledge graph enables the use of an extensive range of Semantic Web infrastructure (e.g., reasoning engines, ontology alignment systems, SPARQL query engines).
The data integration efforts and the construction of TERA go in line with the vision in the computational risk assessment communities (e.g., Norwegian Institute for Water Research's Computational Toxicology Program (NCTP)), where increasing the availability and accessibility of knowledge enables optimal decision making.
The knowledge in TERA can be accessed via predefined queriesPredefined queries are typically abstractions of SPARQL queries. (e.g., classification, sibling, and name queries, and fuzzy queries over the species names) and arbitrary SPARQL queries.
The (final) output format is flexible with respect to the task, and can be given either as a graph or in tabular format. Listing REF shows an example query to extract the chemicals and the concentrations at which the species in the Oslofjord experience lethal effects.
TERA for effect prediction
TERA is used as background knowledge in combination with machine learning models for chemical effect prediction.
TERA's sub-KGs play different roles in effect prediction.
The rich semantics of the species and chemical entities in the Taxonomy sub-KG ({{formula:2ea1a8ea-b83e-4c91-bae2-7a6715cbadf7}} ) and the Chemical sub-KG ({{formula:4c26d161-2018-4d3a-923f-3b6d0768246e}} ), respectively, are
embedded into low-dimensional vectors;
while the Effects sub-KG ({{formula:026f108b-343e-43e8-ad9b-b0928eb2bdbf}} ) provides the
training samples for the prediction model.
Each sample is composed of a chemical, a species, a chemical concentration, and the outcome or endpoint of the experiment.
More details are given in Section , where the effect prediction model is built upon state-of-the-art knowledge graph embedding models.
{{table:486e4df2-f1bf-4f94-bd2f-ec2020bebc6b}}Table REF shows the sparsity-related measures of common benchmark datasetsYAGO3-10 {{cite:537ebe4bd4192ca9af00803e57cd0574be22792a}}, FB15k-237 {{cite:5dca39ef5c758fffab3aa1e647c7bf0f0d8d4da3}}, WN18 {{cite:f1c99358cc1994720dd13e8bed44ec87f681cca9}} and WN18RR {{cite:34346ef0dd19f03518ee65b8773240e1c1ae4691}}. and TERA's {{formula:897f5100-ccfc-4eb4-bbc6-0456caeae1c8}} and {{formula:08a9f2c5-05c2-4482-9c4c-79c8c8f9e7a1}} (triples involving literals are removed).
We follow Pujara et al. {{cite:3944449ce91b37ed1d6bcb5ad08e0bfa4d12f435}} and calculate the relational density, {{formula:d7946d3f-6147-4b06-be77-aaa74ec3e4a2}} , and entity density, {{formula:cd8a6ef4-631a-4a35-bdc1-f07a32ba1cf6}} , where {{formula:5cf46db3-2017-45eb-a911-e22f72017d44}} , {{formula:45f7cd8e-0c8e-4123-a5a4-9f2536543e11}} , and {{formula:cf1bedd0-6728-4c6f-b36d-3ab3685f77d5}} are the sets
of triples, relations, and entities in
the knowledge graph, respectively.
The entity entropy (EE) and the relation entropy (RE) indicate whether there are biases
(the lower EE or RE, the larger bias)
in the triples in the KG {{cite:3944449ce91b37ed1d6bcb5ad08e0bfa4d12f435}}, and
are calculated as
{{formula:ea0d75f1-5b49-4b03-b857-c96fd300a532}}
where {{formula:ffc21a97-9e71-43e4-877f-b25200dc6d94}} is the number of triples with {{formula:e9c436b3-5b7b-48b8-96d1-f3f85254a02d}} as predicate, and {{formula:6ad99377-d3d0-4588-9105-c02599220c31}} is the number of triples with {{formula:306da690-c169-4c52-a747-af1c7f12fa2b}} as subject or object.
In addition, we calculate the absolute density of the graph, which is {{formula:1e2d7191-1598-4375-844e-9e52dffc3e66}} . This is the ratio of edges to the maximum number of edges possible in a simple directed graph {{cite:1810ac3b0036fa9aac5c64f1a2ddf17812b4cd76}}.
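These statistics are straightforward to compute from a triple set; a sketch, with the entropy probabilities normalised by triple counts (our reading of Pujara et al.'s definitions):

import math
from collections import Counter

def kg_statistics(triples):
    """triples: iterable of (subject, predicate, object) tuples."""
    triples = list(triples)
    n = len(triples)
    rel_counts = Counter(p for _, p, _ in triples)
    ent_counts = Counter()
    for s, _, o in triples:
        ent_counts[s] += 1
        ent_counts[o] += 1
    rd = n / len(rel_counts)        # relational density |T| / |R|
    ed = 2 * n / len(ent_counts)    # entity density 2|T| / |E|
    re = -sum((c / n) * math.log(c / n) for c in rel_counts.values())
    ee = -sum((c / (2 * n)) * math.log(c / (2 * n)) for c in ent_counts.values())
    return rd, ed, re, ee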
High RD and low RE typically lead to a worse performance, while high ED and low EE often lead to better link prediction performance (e.g., {{cite:34346ef0dd19f03518ee65b8773240e1c1ae4691}}).
In Table REF we can see that the density and entropy values are in between those for YAGO3-10 and FB15k-237, which typically lead to worse and better predictive performance, respectively {{cite:34346ef0dd19f03518ee65b8773240e1c1ae4691}}.
This shows that TERA is a suitable background knowledge to extrapolate effect data and, at the same time, an interesting dataset to benchmark state-of-the-art knowledge graph embedding models. Note that using the full TERA (i.e., {{formula:3c0fd3e7-3d55-4aff-8985-97bc6cf2db3a}} and {{formula:83a2587f-439c-4956-8d8c-db2525b3b073}} ), according to RD, will be more challenging than using the reduced TERA fragments (i.e., {{formula:13ac4764-41df-4e2b-a9d0-ac9baf7cb44e}} and {{formula:6294c02f-031a-4255-a17e-0f17399ac4ef}} ) for prediction.
Full details of the construction of {{formula:aebdaf99-d02f-457d-9f7a-4173ff40935a}} and {{formula:febc9aad-caa3-4a90-8bf2-c92ab01a9934}} are given in Section REF .
Adverse biological effect prediction
The aim of chemical effect prediction is to extrapolate existing data to new combinations of (possibly unknown) chemicals and species. In this section we present three classification models used to predict the adverse biological effects of chemicals on species:
[(i)]
a multilayer perceptron (MLP) model (our baseline),
the baseline model fed with pre-trained KG embeddings,
a model that simultaneously trains the baseline model and the KGE models (i.e., it fine-tunes the KG embeddings).
An MLP was chosen as the baseline as it is a basic model where additional components and penalties can be easily added and assessed, as we do in our third model (see Section REF ).
The models have three inputs, namely a chemical {{formula:5daa9809-c8a2-4930-9792-515c661295c2}} , a species {{formula:1e8decb2-7b76-4b79-aae2-67984ed31b56}} , and a chemical concentration {{formula:cf068735-54b6-415a-a5ef-23a9bd973238}} (denoted {{formula:e9d16268-ad4d-4bba-a24a-907c0bc50b0b}} ). The output is a binary value that represents whether the chemical at the given concentration has a lethal effect on the species:
{{formula:de424ca1-ee0b-454a-838e-dd8c262447d7}}
Note that the effect can have a more fine-grained categorization (endpoints LC{{formula:0e18d1be-f4ae-42c2-83e3-0f0d201c9b14}} , LD{{formula:9cf52918-3e73-4d3e-b012-8f4a37de8760}} , EC{{formula:caae2421-5e07-44a2-9245-ac319fdd4f52}}If effect is mortality (e.g., see Table REF )., and NR-LETH in Table REF ). Without loss of generality in introducing and evaluating our effect prediction methods, we simplify the effect into two cases: “lethal” and “non-lethal”.
Notation.
Throughout this section we use bold lower case letters to denote vectors while matrices are denoted as bold upper case letters.
The vector representation of an entity and a relation are noted as {{formula:ca6d0b11-70e5-4868-afec-37979542b1a9}} and {{formula:deed4b57-8a4e-46e6-a0ba-46626cc62bff}} , respectively. These vectors are either in {{formula:24835dec-0ada-4a18-a811-86da0ffe4d49}} or {{formula:e36f855b-9797-4af6-b993-773754a0ed16}} , where {{formula:d4257110-dc64-415b-9103-dd0c59e131d4}} is the embedding dimension.
{{figure:83da3a7c-3ddb-43f4-adf5-f75aedd7a827}}
Baseline model
Our baseline prediction model is a multilayer perceptron (MLP) with multiple hidden layers.
{{formula:18acb505-8b35-4735-80a2-df039f608a3c}} hidden layers are appended to the embedding {{formula:7c836762-9419-4d85-a6eb-15d9d9522311}} of the chemical {{formula:a19c9f1d-3578-4219-8ba2-3960417d4dfa}} , {{formula:a9ad81bd-6837-4d76-a05e-f4dbdc7aff1a}} hidden layers are appended to the embedding {{formula:18eb80d0-4d15-4d3b-b8a2-baad24c33565}} of species {{formula:e4769879-23c7-472b-8194-e0270d23e2f6}} , and {{formula:274b2ff5-1408-445a-af00-e344760523da}} hidden layers appended to the real valued chemical concentration {{formula:477c6a59-ac81-4835-b468-8fb995f6d075}} . Thereafter, {{formula:c5c4ffe1-1aa4-41c7-9eb5-b668c58618a6}} hidden layers are further appended to the output of the previous hidden layers concatenated.
Specifically, the model can be expressed by the following equations (with
{{formula:4cc7df9c-0bea-4109-8467-e632d7fe4067}}
as input):
{{formula:d2000252-2be4-4a61-9ef8-22e198b16211}}
{{formula:bf7a1e69-1fe0-4bd8-8635-2169aaf4e156}} in (REF ) denote the embeddings of {{formula:dcaf7611-8f7e-4c39-9261-d70dc2b24450}} and {{formula:37f38c6e-89e1-402f-b254-9dbfafdb19fa}} respectively, and are calculated as
{{formula:dc14bc34-c1f6-486b-8562-d33cd811960a}}
where {{formula:61907dca-45ad-40bf-a068-f74f153f4e48}} and {{formula:12f44f8f-3c79-4855-8ce6-36a11b94ca72}} denote the one-hot encoding vectors of the chemical entity {{formula:835e2a4d-c2d6-4ba6-a946-c2a76367d5e8}} (w.r.t. all the entities in {{formula:d0f493bc-a8fc-4e53-bb29-d828f309d484}} from {{formula:5dd47bc3-3ef4-4a2e-a9d9-74b765cf93db}} ) and the species entity {{formula:e6928e78-dd00-4533-b977-e3428abe237f}} (w.r.t. all the entities in {{formula:2c98638b-8ad3-4123-814b-91086d4129fc}} from {{formula:462c3d2d-b6b6-4523-beb6-e575aa103c59}} ), respectively;{{formula:9795ed2a-6749-4b7f-a7ac-2737be2d5877}} , where {{formula:864a1ba0-4399-403b-865d-564c89116ac0}} if {{formula:bb8828a2-a3b7-476e-9e89-2c71b3f5fd59}} is the {{formula:7980fc1f-91e8-4419-8777-9972699b8fe1}} chemical in {{formula:5e47e692-ac65-4826-8ac3-433808dd2234}} , else 0. {{formula:34f3ed32-c71e-4ea7-abe6-3a2426c3cf71}} is defined similarly. {{formula:1d19bba7-3bb9-46f4-9211-4e9467dae3cc}} and {{formula:073458cd-2bd6-4a00-819d-4f6fc073c56d}} are embedding transformation matrices to learn.
(), () and () represent the hidden layers, where {{formula:e9af7932-979e-4150-9a42-e45ee87f05da}} denotes the rectifier function (i.e., {{formula:6cdfaa6d-5bad-49e4-ada3-e8f5c028c566}} ), {{formula:1e6b3bf0-e89d-42d7-9e37-305a6df5da81}} , {{formula:d3bffa34-50c0-4f96-bf6b-24e645b0b9ac}} and {{formula:b03a0b35-771f-4dc3-9553-1275147ae6ec}} denote the weights, {{formula:a08e69dc-8e51-4ab4-9ca4-8f63279c3483}} , {{formula:6f374b01-462b-47d0-968f-10f3757a2173}} and {{formula:bfd8c52a-829a-4295-8d4f-510f5405c6f9}} denote the biases.
{{formula:c5e5f95b-78b8-46f5-b4b5-561c0cfeeed8}} in () denotes vector concatenation.
{{formula:820d8afc-a41d-4bb0-8cb5-c2bdb2897f6c}} in () denotes the sigmoid function (i.e., {{formula:fb5cc014-d4e6-4a89-9936-559906ae9ecf}} ).
Note that a dropout layer and a normalization layer are stacked after each hidden layer for regularization.
We differentiate between two settings of the baseline model (see Figure REF ):
Simple setting. Figure REF shows the model without embedding transformation layers, i.e., {{formula:f1a2903f-f617-4db1-bebb-7c5d99ad722f}} , and {{formula:7237283c-4a52-46ab-9ac4-54ebc0a35d48}} .
Complex setting. The complex model shown in Figure REF introduces transformation layers on the embeddings and the chemical concentration input. These transformations aim at extracting the important information in the inputs and disregarding redundant information, as learned from the prediction target.
In the experiments we refer to the baseline models as Simple one-hot and Complex one-hot, depending on the selected MLP setting.
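To make the two settings concrete, a minimal Keras sketch of the complex baseline (layer sizes, dropout rate and the number of branch layers are illustrative, not the tuned values reported later):

from tensorflow.keras import layers, Model

def branch(x, n_layers=1, units=128, dropout=0.2):
    # hidden layers followed by dropout and normalization, as in the text
    for _ in range(n_layers):
        x = layers.Dense(units, activation="relu")(x)
        x = layers.Dropout(dropout)(x)
        x = layers.BatchNormalization()(x)
    return x

def build_complex_baseline(n_chemicals, n_species, dim=128):
    c_in = layers.Input(shape=(n_chemicals,))    # one-hot chemical
    s_in = layers.Input(shape=(n_species,))      # one-hot species
    conc_in = layers.Input(shape=(1,))           # log-normalized concentration
    c = layers.Dense(dim, use_bias=False)(c_in)  # chemical embedding transformation
    s = layers.Dense(dim, use_bias=False)(s_in)  # species embedding transformation
    h = layers.Concatenate()([branch(c), branch(s), branch(conc_in)])
    out = layers.Dense(1, activation="sigmoid")(branch(h))
    model = Model([c_in, s_in, conc_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model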
Baseline model with pre-trained KG embeddings
This model relies on pre-trained embeddings of chemicals and species computed using state-of-the-art KGE models (see Section REF and Appendix for an overview).
A (different) KGE model is applied to the chemicals {{formula:6eee6d42-f7c5-429d-ad3c-0c7f8e27bc75}} and the species {{formula:9e886bd3-0102-440b-b86d-ca246ef3ed48}} .
These pre-trained KG embeddings are then given as input instead of the one-hot encoding vectors in the baseline model.
We replace the trainable matrices {{formula:bf61741d-0aec-4ec3-b862-abc5779d1342}} and {{formula:5e2ee2d5-1bc8-4ca3-ac79-edd7d9b07304}} in Equation (REF ) by the matrices composed of embeddings by the respective KGE models.
Namely {{formula:8f525945-0d73-41a3-81a6-0f76e9a5aa52}} is set to {{formula:46a911fc-fd7b-4a21-a26d-0357afe7dbd2}} , {{formula:bd3fbfec-36ad-4125-9cd4-da7b789a229a}} is set to {{formula:fd1a7be2-9a02-4b82-88cb-b60467cf9d3c}} , where {{formula:226a6853-21f8-4296-8c52-03781ab4d583}} denotes stacking vectors, {{formula:693d9fed-9827-4d2f-aa18-d06219c2a298}} denotes the embedding of the {{formula:dfb1037e-e427-4ac6-bad3-6e4521f7e4c6}} chemical in the chemicals {{formula:704806c4-01d5-4df6-bf65-4727d6cff8b3}} , {{formula:877248af-b334-4193-8fe5-90faef230fa1}} denotes the embedding of the {{formula:a37ee6a1-1c59-47bd-92e1-a55cef9174c8}} species in the species {{formula:a5bbd00a-6335-4d2a-af01-b7c82c3a3618}} .
In the experiments we refer to these models as Simple PT KGE{{formula:138db392-3e11-4812-abe1-27c223f0c2ec}} -KGE{{formula:ded2f5d4-9bc5-425d-b3d9-1aac06ac1305}} and Complex PT KGE{{formula:56d1ff8e-f933-41b5-86a0-26e35d4d8f8a}} -KGE{{formula:a1a69a28-ff0e-4b5d-9b70-226280c7766e}}, depending on the selected MLP setting, where PT stands for pre-trained, and KGE{{formula:bff3bb55-d479-4f31-a8ce-4f1a081831b3}} and KGE{{formula:d64b0a5d-e80d-41f1-bfda-427067557215}} are the KGE models used for the chemicals KG and the species KG, respectively (e.g., Complex PT DistMult-HAKE). For simplicity, we also refer to these models as PT-based models.
{{figure:4641dbea-4ab1-4292-bb4a-de6e05116d0a}}
Fine-tuning optimization model
This model improves upon the pre-trained KG embeddings with fine-tuning based on the effect prediction data. This is done by simultaneously training the (selected) KGE models and the MLP-based baseline model, such that {{formula:611665b1-07fa-4e1e-99d8-aebac2cd4957}} and {{formula:fbcf83da-d213-4123-b470-62c8ce3a988a}} , and the MLP weights ({{formula:567b440d-1598-4557-b4c0-b53fe9abdc32}} and {{formula:266a3564-8ac6-46ea-8c15-3f856ec824d0}} in Equations (), (), () and ()) are optimized jointly. Note that we initialize the KGE models with the previously pre-trained embeddings.
The model architecture is shown in Figure REF and the overall loss to minimize is
{{formula:d8fc22e6-a51d-40bc-bb29-c24695dfac74}}
where {{formula:a43c24f2-757c-4623-a31a-9fadda2daeee}} and {{formula:1dbffa99-5956-4f94-9e82-d8ef1d6dc18e}} respectively denote the loss of the chemical KG{{formula:d30aa5e8-5d95-4a18-b274-9c6ae629ecda}} and the species KG{{formula:1c47573b-834f-48f5-8dd2-e8ffbe295f27}} when a specific KGE model is used,Appendix REF introduces the loss functions used in this work. The selection of the loss function for a KGE model is a hyper-parameter. {{formula:ab9ff49c-7a0f-4ee3-a0d5-305c3bb2149d}} and {{formula:e492679c-b1af-415a-9903-ab090eb5e4bf}} denote their respective weights,
{{formula:d6d843db-0ab0-4b42-8753-51f096104252}} and {{formula:db66b156-1e1c-49de-81e7-144d75ee8f7b}} denote the loss of the MLP and its weight.
Specifically, we use binary cross-entropy (BCE)
as the loss for the classification. {{formula:d35bd0a6-e1ca-4d07-96cf-d6aaca1aba7b}} is calculated as
{{formula:3a5c0854-e4b3-46ac-a360-7dbce05665f6}}
where {{formula:3a7b5aa8-6f11-4493-a6b3-72b100af4ef0}}
denotes the size of training samples, {{formula:0de1e2fa-c2c6-434c-8740-9855fd0f1333}} and {{formula:16d72773-8a1d-4779-9351-d92c0740fb10}} denote the sample label and the MLP output, respectively (as in Equation (REF )).
With the overall loss, gradient-based learning algorithms such as the Adam optimizer {{cite:6a4f51d99bdff8154592d8a79a5ac96e7cf44993}} can be adopted to jointly train the embeddings of both KGE models and the MLP.
Figure REF shows the full simultaneous fine-tuning model and the optimization process. The initial state of the entity lookups is the pre-trained embeddings.
The full training procedure is summarised as follows:
Select {{formula:97d40571-8633-4ceb-81c4-ba3ac4e3f10e}} triples from {{formula:2abe6dad-1985-4886-9b25-27f1c3d07975}} and {{formula:7916f9e5-4b88-4625-a5b2-795f7abe65a4}} , where {{formula:ba97be61-5509-482b-b5f1-98dcefb6e7c3}} is the length of the effects training set.Section REF describes how the known effect data extracted from ECOTOX is split into training, validation and test sets.
Generate negative knowledge graph triples (see Appendix REF for details) from the extracted subsets of triples from {{formula:3f8b4f33-892f-41eb-af69-6a8cc8ba4e20}} and {{formula:b1a6d33c-004b-4ccf-9610-f465e2f6e9cc}} , these negative KGs triples are referred to as {{formula:7b892e76-1fb9-4f7d-abf0-cff61363007e}} and {{formula:44f59fa2-7082-46c5-a3e0-5c671480f3c9}} .
Feed the input forward through the model, calculate the loss for each model component, and combine them according to the loss weights.
Optimize the KG entity and relation embeddings, and the MLP layers.
These steps are repeated until the loss (only {{formula:973d0d77-05b2-4823-9b46-cfe5fb1a349e}} ) over the validation set stops improving.
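A sketch of the combined loss used in this procedure, assuming the chosen KGE implementations expose loss functions over batches of positive and negative triples (function names and signatures are placeholders, not an existing API):

import tensorflow as tf

def total_loss(kge_loss_c, kge_loss_s, pos_c, neg_c, pos_s, neg_s,
               y_true, y_pred, alphas=(1.0, 1.0, 1.0)):
    loss_c = kge_loss_c(pos_c, neg_c)    # loss of the chemical KGE model
    loss_s = kge_loss_s(pos_s, neg_s)    # loss of the species KGE model
    loss_mlp = tf.reduce_mean(
        tf.keras.losses.binary_crossentropy(y_true, y_pred))
    # weighted sum of the three components
    return alphas[0] * loss_c + alphas[1] * loss_s + alphas[2] * loss_mlp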
In the experiments we refer to these models as Simple FT KGE{{formula:74911ccc-5297-4eec-b43d-9c074be814e1}} -KGE{{formula:04913248-298b-4187-80a1-3029827bdb11}} and Complex FT KGE{{formula:218549e9-5e04-49c1-bba8-fecbb274914c}} -KGE{{formula:c6350fe7-68d4-4300-965d-c4c9586e9b53}}, depending on the selected MLP setting, where FT stands for fine-tuning, and KGE{{formula:1981b9cd-2ef6-4852-95e2-785d34bea461}} and KGE{{formula:1e18a772-13be-452e-8a3b-8ee7475d86fb}} are the KGE models used for the chemicals KG and the species KG, respectively (e.g., Simple FT HAKE-HAKE). For simplicity, we also refer to these models as FT-based models.
Results
Experimental setup
All models are implemented using Keras {{cite:e1b66861d5ac4bdf9bfe075976592daafabe3dde}} and the model codes are available in our GitHub repository, alongside all data preparation and analysis scripts.https://github.com/NIVA-Knowledge-Graph/KGs_and_Effect_Prediction_2020
Preparation of TERA for prediction
As shown earlier, TERA consists of three sub-KGs. These are the basis for the chemical effect prediction.All data used to create TERA was downloaded on the 14th of May 2020.
We process the sub-KGs further to limit their size by removing irrelevant triples for prediction. This is necessary to scale up the training of the KGE models. The reduction of TERA's sub-KGs is performed according to the following steps:
Effect data. For prediction purposes, the effect data in {{formula:89e1ab62-2b84-4efa-8f78-f1c98d6daeb4}} is limited to four features, namely, chemical, species, chemical concentration, and effect. The chemical concentrations ({{formula:d920f5a9-215e-4e1a-8b97-2ff6932acc92}} , converted to {{formula:a991b4c9-5308-4623-8573-7521be733fb8}} ) are log-normalized to remove the large discrepancy in scales. As mentioned, we separate the effects into two categories for simplicity, lethal and non-lethal effects. This reduces the possibility of ambiguity among the effects that do not cause death in the test species. We label lethal effects as 1 and non-lethal effects as 0.
{{formula:46566390-eede-45b7-8fe9-c09963fd612d}} . For each chemical in the effect data, we extract all triples connected to it using a directed crawl. This reduces {{formula:4162f209-308d-4a86-a4a7-2f9d53b6fd99}} to a manageable size for the KGE models. Moreover, we do not deem triples that are not directly connected to the effect data relevant for the prediction task, and they may introduce unnecessary noise. As mentioned before, PubChem contains similarities between chemicals based on chemical fingerprints; however, for our use case it is impractical to query them from the PubChem RDF data, therefore we calculate similarity triples based on queried PubChem fingerprints. We use the same similarity threshold as PubChem, i.e., {{formula:97b9b824-3fb0-4582-9de0-bd226a022f7a}} {{cite:05fca5377a86715f2ec9a12a3fe89bfee14db62f}}.
{{formula:9b2202eb-7838-4bfa-a897-84f6e4f61cf2}} . The same steps as for {{formula:4ec4651b-c708-45e3-a1e6-94b8298e89ef}} are conducted for all species in the effect data.
A simple directed crawl over all predicates is sufficient to gather the interesting data in this setting as both {{formula:c8206b1d-42aa-473c-8103-70127caf2925}} and {{formula:4835d876-f269-4fe2-8860-df5473ef92ca}} are primarily hierarchical and we start the crawls at the leaf nodes.
These steps reduce {{formula:4d08b7d9-6364-49b8-9f1f-7aa0535b0c59}} to {{formula:d50feaac-5df7-44b0-8686-5e53888dfd91}} triples and {{formula:991ebefa-c048-4bf7-8da4-7abf9445507e}} to {{formula:c9a63932-a5f7-45ce-8053-a7fc0265dd82}} triples. Some statistics of {{formula:6c0ca7bc-86ab-4d16-b2cc-f433c9b5153d}} and {{formula:57f43fb2-59e2-4602-ab42-decdaa679b9b}} , and the reduced fragments {{formula:821ed347-c609-4efe-9ec7-a379e566cdfb}} and {{formula:33940297-04cb-4184-be8b-53dbda986e6e}} , are given in Table REF (Section REF ). In the rest of the paper we refer to TERA's reduced sub-KGs simply as {{formula:b952996b-9994-4dcf-8d5b-992d0c141cc4}} and {{formula:0278ec18-ccb1-4420-bae6-92616640d6f5}} .
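A sketch of the directed crawl used to build these reduced fragments: starting from the chemicals (or species) occurring in the effect data, triples are followed in the subject-to-object direction and everything reachable is kept (helper names are illustrative):

def directed_crawl(triples, seeds):
    """triples: iterable of (s, p, o); seeds: entities occurring in the effect data."""
    out_edges = {}
    for s, p, o in triples:
        out_edges.setdefault(s, []).append((p, o))
    kept, visited, frontier = set(), set(seeds), list(seeds)
    while frontier:
        s = frontier.pop()
        for p, o in out_edges.get(s, []):
            kept.add((s, p, o))
            if o not in visited:
                visited.add(o)
                frontier.append(o)
    return kept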
The transformation from
TERA's {{formula:a6ef2ae3-a4e6-48d7-ad00-7a62827b5b15}} and {{formula:f50778d7-f48d-4b57-a540-1f7db8605802}}
to model input is done by first dropping literals and thereafter assigning each entity a unique integer identifier, which corresponds to the index of a column vector in matrices {{formula:4eb024f4-e335-4dbc-a6a0-79832697b843}} or {{formula:5f735ee2-fec9-4217-ab6c-3b593bb3495e}} in Equation (REF ), depending on which sub-KG is transformed.{{formula:c82010fe-1ad5-473c-b623-307568ce23fa}} for {{formula:e068545b-c4ff-4992-b6cc-6328fb79c123}} and {{formula:f7e8b0d2-48de-4901-844e-1a3bae2059e3}} for {{formula:a94e723b-6b3c-4cf3-a5d6-9901d2f4c035}}
Relations are treated similarly.
Sampling
We use four sampling strategies
of the effect data to analyze how the proposed classification models behave by varying the data parts that are used for training and testing. Note that, we only consider effect data where the chemical and species have mappings to external sources (e.g., NCBI Taxonomy and Wikidata, cf. Section REF ) so that there is additional contextual information that can be used by the KGE models.
For each of the strategies, the validation and test sets contain unseen chemical-organism pairs with respect to the training set. The strategies, however, differ with respect to the individual organism and chemical as follows:
Random {{formula:be997000-c8dd-48b7-8d59-83a8ea9481b9}} training/validation/test split on the entire dataset (i.e., the chemicals and the organisms in the validation and test sets will most probably be known).
Training/validation/test split where there is no overlap between chemicals in the three sets (i.e., the chemicals in the validation and test sets are unknown). This resulted in a {{formula:c3997ee2-faa3-44f8-a47a-9f2924672aff}} split.
Training/validation/test split where there is no overlap between species in the three sets (i.e., the species in the validation and test sets are unknown).
This resulted in a {{formula:0648c940-3ddd-4d66-99ce-31eca59bca5d}} split.
Training/validation/test split with no chemical or species overlap in the three sets (i.e., both the chemicals and the organisms in the validation and test sets are unknown). This resulted in a {{formula:0ea0fc9b-036c-4007-bfae-623cad251b82}} split.
Note that since we use the species and chemicals as groups to divide the data rather than the samples, the splits can vary.
For strategies (i)-(iii) there is a total of 14,377 effect data samples, while for strategy (iv) the total number of samples is 5,621.
As above, this discrepancy is down to the way we split the data. We do not split across samples, but across chemicals and species. For example, some chemicals are used on (close to) all species, therefore, these chemicals are discarded in the sampling strategy (iv), affecting the final number of samples.
There were originally 57,560 samples, however, this includes experiment duplicates, i.e., same chemical, species, and endpoint, with different chemical concentrations.
This is down to large discrepancies in laboratory testing variance; therefore, we use the median concentration across the duplicates. The prior probability is approximately {{formula:0286f951-9c1b-45d7-8fd1-0ea88d3113a2}} (i.e., {{formula:5cf82bc5-605f-42b9-8970-2de530948513}} of samples are labelled as non-lethal and {{formula:5bc3e4c0-5110-4fb9-805b-cf48a0703f36}} of samples are labelled as lethal) across all sampling methods. We address this during training by randomly oversampling the minority class until the prior probabilities are {{formula:917b44c7-f310-45c5-b321-0885720749bd}} in the training set. In this case, the oversampling is performed by adding duplicate samples labelled as non-lethal. Oversampling is a well established technique used in many classification problems to remove bias during learning {{cite:080883771e4a24a8c9e5ab3f6b32b931d3cb38a5}}.
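A sketch of one of the group-based splits (strategy (ii), unseen chemicals) and of the oversampling step, assuming lethal (label 1) is the majority class; it uses scikit-learn's GroupShuffleSplit, and column names and split sizes are illustrative:

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def split_by_chemical(df, test_size=0.3, seed=42):
    """Split so that no chemical occurs in both the train and held-out sets."""
    gss = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(gss.split(df, groups=df["chemical"]))
    return df.iloc[train_idx], df.iloc[test_idx]

def oversample_minority(train):
    """Duplicate non-lethal (minority) samples until the classes are balanced."""
    minority = train[train["label"] == 0]
    majority = train[train["label"] == 1]
    extra = minority.sample(len(majority) - len(minority), replace=True)
    return pd.concat([train, extra]).sample(frac=1)  # shuffle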
{{table:e21f380a-dc11-4a6e-ab6c-1f0793030d4f}}{{table:1febc456-7c99-4c79-9a50-eb36c8c7a061}}
Hyper-parameters
To optimize the hyper-parameters for the KGE and classification models we use random search over the parameter ranges. We conduct 20 trials per model. Tables REF and REF contain the best hyper-parameters and can be used to reproduce the top performing models.
To find the best hyper-parameters for the KGE models, we use the loss as a proxy for performance, normalized by the initial loss, {{formula:b3ef2435-9f43-473a-b6e7-7ebd4f029f8b}} , where {{formula:ccbc2721-0f18-418e-bcf4-3c93b56ef0c8}} is the training loss at epoch {{formula:15762be7-ee25-45d2-a187-a9b5fc8358c1}} , {{formula:b4858505-f2f3-42c4-ba29-7076a40ada68}} is the loss with the initial weights.
We use validation loss to select the best hyper-parameter setting for the classification models presented in Section .
The best prediction models are refitted and evaluated 10 times to reduce the influence of initial conditions on the metrics. The average and standard deviation of the metrics are presented in Section REF .
The hyper-parameter ranges for the KGE models are shown in Table REF based on common values used in the literature. We conduct 20 trials of random hyper-parameter choices and validate over the validation data. In Table REF we show the best hyper-parameters.
{{table:7dd06afb-eb59-4434-8da3-e850242f6bda}}We can see in Table REF that the decomposition models have similar hyper-parameters for {{formula:8eb88b50-059a-4775-adc5-f422de45d782}} and {{formula:e507391b-3729-44bb-84e8-2fa67a2af10a}} . As shown in Section REF , the major difference between {{formula:c0d09dc2-0e2f-4634-94f1-3335bc501c99}} and {{formula:d039512c-3edd-4f12-a99c-0d9553b5700f}} is the relational density. Therefore, it is reasonable to believe that a lower relational density KG requires more parameters to have an equivalent representation in the embedding space.
We observe the same for the geometric models, except for TransE, where the embedding dimensions are similar.
ConvE is more efficient in embedding dimension than ConvKB; however, since ConvE is slightly more complex than ConvKB, this is expected.
The difference in negative samples could be down to our implementation of ConvE, which varies from the original. Our implementation of all models relies on 1-to-1 scoring of triples, while the implementation of ConvE originally used 1-to-{{formula:bddc313d-9a28-4971-a3ba-be27f1a91841}} scoring, where {{formula:8e928596-df04-4ad7-b30d-e7c34f175234}} is the number of entities in the KG {{cite:34346ef0dd19f03518ee65b8773240e1c1ae4691}}.
The fine-tuning optimization model (Section REF ), in order to save on intensive computation, reuses the same hyper-parameters found for the KGE models. Depending on the optimizer choice, the choice of loss weights, {{formula:4c31b202-844a-4346-8066-571928db0167}} and {{formula:9e051899-9dc4-4f3a-868d-229de80916bb}} , is important. However, our optimizer choice has dynamic learning rates per variable, and therefore, will adapt regardless of the loss weights and we can set {{formula:391891a5-b278-41d1-9be5-191f154d1090}} .
Had we used, e.g., stochastic gradient descent, these weights would have needed to be tuned.
Initialization of the fine-tuning optimization models
As presented in Section REF , we simultaneously train the KGE models and the MLP-based baseline model. This is done by initializing the model with (i) the weights learned in the correspondent baseline model with pre-trained embeddings, and (ii) the KG embeddings learned with the respective KGE models. For example, the Complex FT DistMult-HAKE model is initialized with the learned weights with the Complex PT DistMult-HAKE model and the pre-trained KG embeddings using DistMult and HAKE models.
Then the model is further trained with a small learning rate. We found that reducing the learning rate by a factor of 100 worked well. Using this learning rate we optimize the model until convergence.
Simple and complex settings
As presented in Section REF ,
we use two settings in our classification models: simple and complex. This will help us isolate the effects of the KG embeddings versus the power of the MLP model.
The simple setting uses no branching layers, i.e., {{formula:0f53b73e-f75e-43b2-a6be-bef7fad64cef}} and {{formula:0e50cfe7-fd87-40a3-a6bc-f9f2ec60e512}} as in Equations (), (), () and () with 128 units in the hidden dense layer. For the complex models we use random search (20 trials) to find the optimal number of layers and units out of the ranges shown in Table REF . The optimal choices for the top performing models (using one-hot and pre-trained embeddings) are shown in Table REF .
Looking at the increasing complexity of the layer configurations of the one-hot models in Table REF , we can see a correlation from the simplest sampling strategy (i.e., (i)) through the most challenging one (i.e., (iv)). The same can be seen for PT HAKE-DistMult from strategy (iii) to (iv), where the number of layers increases.
Overall we can see that the layer configuration of the chemical branch is more complex than that of the species branch. This indicates that the KGE models are better at representing {{formula:2fecfb03-03d5-4eb1-bb27-8271e68a221d}} than {{formula:f0199f45-f429-4cfa-bbb9-cdcc6dc9981e}} .
Prediction results
In this section we present a summary of the conducted
chemical effect prediction evaluation. Complete results are available at the project repository.https://github.com/NIVA-Knowledge-Graph/KGs_and_Effect_Prediction_2020
The default decision threshold is set to {{formula:1d0b9e9f-dda3-41e6-8333-d77a136ace52}} . That is, if a model predicts {{formula:b2313402-0c7a-42d5-8101-feca5ecb12cc}} for an input {{formula:9fc56e57-0359-46df-a4c4-488990497d6e}} then the chemical {{formula:423e18d8-5c29-4042-8053-697a6f84a99e}} is considered lethal to {{formula:183529ad-b52a-47af-a0c3-e9df66a5c24f}} at a concentration {{formula:85e667af-7380-4b93-807b-56c2e799b1af}} .We set the decision threshold {{formula:05bec66d-6886-4236-9999-ee213577a2a3}} since the model output bias (cf. Equation ()) will be (close to) 0.5 after training. Recall that we have oversampled the classes to reach a {{formula:ecbcf567-a2d1-4a71-82bb-8b5fcfc9a2eb}} prior probability during training (cf. Section REF ).
We use several metrics to compare the different prediction models.
These are Sensitivity (i.e., recall), Specificity, and Youden's index ({{formula:d819d784-d849-4864-8427-1701fd4a04ac}} ) {{cite:0ebc72140601fbb587eb065bc9d2b955b09a05e3}}. Precision and F-score were also considered as metrics. However, they were not representative for the performance with respect to non-harmful chemicals. This is attributed to the larger number of positive samples (i.e., harmful chemicals) than negative samples (i.e., non-harmful chemicals) in the test data.
Sensitivity and Specificity are defined as
{{formula:c0786642-a952-435c-b7bf-0acac579e64c}}
where TP, FN, TN, and FP are true positives, false negatives, true negatives and false positives, respectively.
YI is defined as
{{formula:95c430d7-c701-4013-95a5-2d45c8f60f5d}}
We also present the maximized Youden's index ({{formula:196a6dd5-3b29-4ba2-aa3d-b7f703b785ab}} ), which is defined as
{{formula:77ae5639-5e7c-4769-a1f9-acff05f3b61c}}
i.e., we maximize Youden's index over the decision threshold ({{formula:6afe7011-3e71-4b3a-b6f6-b132254c7558}} ); we call this optimal threshold {{formula:09ef2615-077f-492f-9028-301ae2d49d3c}} .
This metric is equivalent to the maximum vertical distance of the Receiver operating characteristic (ROC) curve above a random model, and can be used to select the optimal decision threshold in a production environment (based on validation data). We do not present ROC (or the area under the ROC curve, AUC) as a metric, as it correlates ({{formula:86419931-ef6a-438a-8d65-b6a1ababb63a}} ) with {{formula:0b9a1a22-1457-4b03-accc-0db70c3fb1f1}} in our case.
In our setting, sensitivity is a measure on how well the models identify harmful chemicals while specificity measures models' ability to identify non-harmful chemicals.
Youden's index is used to capture the usefulness of a diagnostic test (or, in our case, a toxicity test). A useless test will have {{formula:e21396f4-2aa1-4e81-a892-b79e538ad6c4}} , while a test with {{formula:c153b5a0-4057-47f4-b205-d88e786e0ffc}} is useful. {{formula:ddb094af-63f7-40a3-99c7-79e388f85f7e}} can also be thought of as how well informed a decision based on the test might be.
Note that {{formula:722419dc-1075-4884-bd51-fa58687b66c5}} can be less than 0, but this is solved by swapping the labeled classes, similar to how a negative correlation is still useful.
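A sketch of the metrics as used here, with Youden's index computed as sensitivity + specificity - 1 and its maximized variant obtained by sweeping the decision threshold (the threshold grid is illustrative):

import numpy as np

def sensitivity_specificity(y_true, y_prob, threshold=0.5):
    y_hat = (y_prob >= threshold).astype(int)
    tp = np.sum((y_hat == 1) & (y_true == 1))
    fn = np.sum((y_hat == 0) & (y_true == 1))
    tn = np.sum((y_hat == 0) & (y_true == 0))
    fp = np.sum((y_hat == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

def youden(y_true, y_prob, threshold=0.5):
    sens, spec = sensitivity_specificity(y_true, y_prob, threshold)
    return sens + spec - 1

def max_youden(y_true, y_prob):
    thresholds = np.linspace(0.0, 1.0, 101)
    scores = [youden(y_true, y_prob, t) for t in thresholds]
    best = int(np.argmax(scores))
    return scores[best], thresholds[best]  # maximized index and optimal threshold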
{{table:eb354ac4-5416-4640-af98-246c82f15b28}}{{table:dd1df1f4-e204-4288-a849-fb5ee0fdb2d3}}{{table:ff16dffd-6957-4750-aa53-294e150cf4c3}}{{table:8e47f53b-0ce2-49a0-b8bf-6551585328e6}}Tables REF -REF show the results for each of the data sampling strategies (i)-(iv), respectively.
The tables include the three best models (based on {{formula:2067a682-e961-4440-b674-60155c4d6d19}} ) for the baseline model using one-hot and pre-trained (PT) KG embeddings, and the fine-tuning (FT) models using the same combination of KGE models as the selected PT-based models.
We have also included a model with middling performance (i.e., ranked 40 out of the 81 models) and the worst performing model. Note that for the PT- and FT-based models we have evaluated 81 combinations KGE{{formula:f19338ee-a865-406a-823e-0f074117da63}} -KGE{{formula:e58073db-fefd-43ec-86ca-00a478dab8a3}} of KGE models.
All models were evaluated using the simple and complex MLP settings. For example, the model Complex FT DistMult-HolE denotes that fine-tuning was used together with the complex MLP setting, and DistMult was selected to embed the chemicals {{formula:6a33399f-4c29-4095-a3bc-331b4b3e5cce}} while HolE was used to embed the species {{formula:b0ea92bf-2fe0-4b03-8762-3b2a297db1ec}} .
We present the mean and standard deviation over 10 evaluation runs, i.e., we re-initialize and re-train the models 10 times. Results highlighted in bold are the best mean
results of the corresponding metrics.
Underlined results are where there is a {{formula:c41d496c-a819-4e8c-bf7e-fbacc532f5e0}} chance that a single run outperforms the best mean (i.e., one standard deviation contains {{formula:40211f09-f3dc-4b90-8b36-9315aca74394}} of results, assuming normally distributed results).Note that we only consider the best mean result and not the standard deviation in both directions.
Overall, models with the complex setting and fine-tuning are needed as the data sampling strategies become more challenging. Moreover, all models favour sensitivity over specificity at the default decision threshold ({{formula:d66c0c33-541c-4957-beb2-167761f8ba56}} ). This is down to the imbalance in the data. We can see the imbalance through {{formula:aa594d2d-df5e-4739-a599-1f17384c4350}} , which is {{formula:e6a8628b-3983-424d-8666-7bea2f8d2e18}} for most models. As we use a log-loss instead of a discrete loss, this is to be expected for imbalanced data.
For settings (iii) and (iv) the performance drops and the standard deviation increases compared to the other strategies. This large standard deviation leads to large overlaps in quantiles among top-3 models in all categories, such that, by chance, one of these models could perform best in one individual evaluation.
One-hot baseline models
For the sampling strategy (i) the one-hot baseline models perform well, especially with the complex one-hot model. This complex model is equivalent in terms of {{formula:310e3516-4221-4f0b-af9e-4f2158f8869a}} to the best simple pre-trained model. The story is largely the same in setting (ii), where the complex one-hot model performs within {{formula:30cfff70-7fd8-4831-9a3e-6772da9e909b}} of the best simple pre-trained models. With strategies (iii) and (iv) the one-hot models degrade, especially in strategy (iv) where the Youden's index is near zero ({{formula:43def93f-0733-45b6-97ac-62880e8c10be}} ). This is expected as the one-hot baseline models lack important background information about the entities, especially for unseen chemicals and species, that the KG embedding models aim at capturing.
Baseline with pre-trained KG embeddings
We can see that the PT-based models do not lead to a notable improvement with respect to {{formula:bfcdc81a-059e-4e66-9bf9-0004462654bc}} in sampling strategy (i).
The top-1 complex PT model, however, yields a better balance between sensitivity and specificity leading to an improved {{formula:6b23d183-47b6-4bd0-94ca-e17a24efc8db}} over the complex one-hot models. The two middling performing models, Simple PT pRotatE-ConvE and Complex PT ComplEx-DistMult, still retain a decent level of performance.
The results with strategy (ii) are similar to those of strategy (i); the delta in {{formula:2c1d5bc5-6115-4f7e-a0ba-3f68159162a8}} between the simple and the complex PT-based models is about {{formula:6556ae8d-c4da-4417-b8db-ddaebbb1a65a}} . This slight improvement is due to the increased balance between sensitivity and specificity, which in turn leads to a higher {{formula:cfe47ad7-71b5-434d-9c98-5f4c8d12ce59}} .
In the sampling strategy (iii) we can observe that the improvement of the PT-based models over the one-hot models increases. The increase is up to {{formula:ed7f1bf9-bab8-469f-bebb-a626b7f53603}} in {{formula:20e60b53-60b4-466f-896d-e97d98f9b5e1}} of the best PT-based model over the best one-hot model. In addition, we observe in this strategy that the standard deviation increases, especially in specificity, leading to a large portion of the models being within one standard deviation of the best model in terms of {{formula:9aea1a3e-5fc2-45c3-9e1e-fc9062974592}} .
Finally, the impact of using PT-based models is strengthened in strategy (iv). The delta between the one-hot and PT-based models is up to {{formula:4c8fe026-2370-4dd1-8ac4-5edab5782d5b}} in {{formula:963dc096-fc33-4418-8542-a9fe1e9982ea}} , and larger for {{formula:fe1582da-a4a9-4f91-96a0-51b4fe05ebc7}} . We see that all models struggle with specificity in this setting; this is down to the difficulty of predicting true negatives. This also leads to a larger variation, with certain models yielding a standard deviation of the same order of magnitude as the metric (e.g., Simple FT HAKE-ComplEx).
Fine-tuning optimization model
The FT-based models, with some exceptions, improve the results over the PT-based models, most notably in sampling strategies (iii) and (iv).
For example, the FT-based models Complex FT HolE-DistMult and Simple FT HolE-ComplEx are the best models in terms of {{formula:3e68e7a9-933b-4c17-8888-653248efce88}} and {{formula:26069c02-2bb6-45d8-b17a-fd75b75317cc}} in strategy (iv), respectively.
We can also see in strategies (i) and (ii) that the FT-based models improve the middling and worst performing PT-based models, e.g., Simple FT RotatE-ConvE in strategy (i) improves from {{formula:09f366b6-2625-4e15-8bcd-88bd3fcec1d4}} to {{formula:b59a108e-c105-45b8-9f09-e10c157bb31b}} using fine-tuning of the KG embeddings.
The results are expected as the fine-tuned KG embeddings are tailored to the effect prediction task.
KG embedding analysis
In this section we look at correlations between KGE model choices and prediction performance. KGE models are designed to capture certain structures in the data, and this can give some explanation of which parts of the KGs are important for prediction.
First, in Table REF we show how many times each KGE model appears among the top 10 performing combinations (out of the total 81 possible combinations). We focus on the choices made when using the simple MLP setting to reduce the influence of the non-linear transforms on the embeddings.
{{table:33dd745d-1b9f-45ce-b72d-085802998a27}}Looking at Table REF we can see that the choice of KGE model used to embed the chemicals {{formula:7b8b8aae-b048-4f91-93e1-52ce1b026f43}} in the best performing models is distributed evenly across most models and settings. This indicates that the performance of the prediction models is not highly correlated with the choice of KGE model on {{formula:80a13187-065b-41e2-9ef9-21fb32af892e}} . Referring to Table REF , the high relational density of {{formula:1860027c-86b8-409d-88d9-2b010b2a95e0}} can contribute to worse performance {{cite:3944449ce91b37ed1d6bcb5ad08e0bfa4d12f435}} , which may explain the even distribution of models in Table REF .
This is different for {{formula:734484e7-45a6-4fcc-9bf9-1172f7aebf4e}} . For sampling strategies (i) and (ii), HAKE is extensively used in the top models to embed {{formula:f343824a-48d3-4955-9fa5-eba160a261c8}} . HAKE is designed to embed hierarchies. Therefore, this indicates that in strategies (i) and (ii) the hierarchical structure of {{formula:3b06e2ce-902d-4ab7-9013-7b3796818ddf}} dwarfs the rest of the KG. {{formula:97f34344-0a56-43d1-89fb-a5fc113f9619}} has a higher entity density and lower entity entropy (Table REF ) than {{formula:ac4345b3-9145-4221-bc87-732e1f4caaa3}} . This should lead to higher performance generally, but might also lead to larger discrepancies between models as seen in Table REF .
{{figure:6e9a33c3-6e0a-468b-a46b-4faa1f2a2d7f}}The use of the decomposition models increases in strategies (iii) and (iv) for the embedding of {{formula:44f4debe-3e0c-458e-a493-f22310a216d4}} , which indicates that KG structures other
than the hierarchy are important.
Overall, DistMult and ComplEx can be used to great effect in strategies (iii) and (iv) while the geometric model, HAKE, is more successful in the less challenging strategies (i) and (ii).
{{figure:4195a0a9-7f87-4949-9630-366090ac126f}}Explained variance
Explained variance measures the fraction of the total variance in the data that is captured by a given number of principal components. We use the scikit-learn implementation {{cite:40414970dbdc0bd9c4d33df153578520b3846ae1}} based on {{cite:8c2d82dc1c151671b6f105991aee42dcdd275d5b}}. In Figure REF , we present how the {{formula:2332358b-11e0-4a4b-a8a7-9f64efdd517c}} metric depends on the explained variance of the top-10 principal components (i.e., {{formula:9325ee6c-ce50-4401-9d89-227c7086db0e}} ). We show all (81 per sampling strategy) PT-based prediction model results, with the simple MLP setting in Figure REF and the complex setting in Figure REF . For example, in Figure REF , the best model in strategy (iv), Simple PT pRotatE-ComplEx, has an explained variance of {{formula:d4655a75-67a0-49a9-a168-1afbc1f300ef}} compared to the worst model, Simple PT HAKE-HAKE, with an explained variance of {{formula:50fb961f-7094-4263-af0a-c716a1300555}} . Coincidentally, these two points do not follow the trend lines in these figures, which indicate a negative correlation between {{formula:e5c662c0-84b6-4498-8998-be5371773048}} and explained variance. The trend lines can be interpreted in two ways. On the one hand, this is counter-intuitive, as we would expect more descriptive embeddings, i.e., larger explained variance, to perform better. On the other hand, the top-10 principal components may not be representative enough to capture the semantics of the KG embeddings, and thus a large explained variance does not necessarily correlate with high performance.
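As a rough illustration of how this quantity can be obtained, the following Python sketch computes the explained variance of the top-10 principal components of an embedding matrix with scikit-learn; the `embeddings` array is a random stand-in for trained KG embeddings, not our actual data.

import numpy as np
from sklearn.decomposition import PCA

# Stand-in for a trained entity embedding matrix (n_entities x embedding_dim).
embeddings = np.random.randn(1000, 200)

pca = PCA(n_components=10)
pca.fit(embeddings)

# Fraction of the total variance captured by the top-10 principal components.
top10_explained_variance = pca.explained_variance_ratio_.sum()
print(f"Top-10 explained variance: {top10_explained_variance:.3f}")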
Figure REF plots the explained variance against sensitivity. We can see that the trend is flat for strategy (iv), but positive for strategies (i)-(iii). This means that the trends in Figure REF are driven by specificity rather than sensitivity.
By balancing sensitivity and specificity, i.e., {{formula:765fe3d5-4a19-424a-9951-0f92c3058bac}} as seen in Figure REF , the rate of change is reduced compared to {{formula:f27e529d-d26c-422f-b866-e4ca4242a6e3}} in Figure REF .
{{figure:442dae57-f582-4be5-83ac-14ccbb8fdf80}}
Example predictions
Table REF shows a few examples of correct (TP and TN) and incorrect predictions (FN and FP).
{{table:5196ad20-1522-459a-b7cc-85abf4cae61d}}Benthiocarb and permethrin are both biocides with different targets: benthiocarb is a herbicide and permethrin is an insecticide. It is therefore not surprising that benthiocarb has a low predicted effect on sea urchins, while permethrin has a severe effect on bivalves.
There are several possible explanations for the failed predictions.
A wrong prediction of potassium chloride toxicity to a marine copepod (Megacyclops viridis) could be due to the prediction model not being accurate enough for metal salts, or the copepod species being particularly sensitive to changes in osmolarity due to salt content. The wrong prediction of lack of herbicide toxicity (i.e., carfentrazone-ethyl) to a flower (i.e., eudicots) could be due to the fact that flowers, and plants in general, are severely underrepresented in the available effect prediction data.
Discussion
We have introduced the Toxicological Effect and Risk Assessment (TERA) knowledge graph and shown how we can directly use it in chemical effect prediction. The use of TERA improves the PT-based prediction models over the one-hot baselines. In the most challenging data sampling strategies, we have also
seen the benefits of creating tailored (i.e., fine-tuned)
KG embeddings in the FT-based prediction models.
TERA knowledge graph
The constructed knowledge graph consists of several sources from the ecotoxicological domain. There are three major parts
in TERA: the effects data, the chemical data, and the species taxonomic data. Integrating each part
poses different challenges. The chemical and pharmacological communities have come a long way in annotating their data as knowledge graphs and ontologies. Here, selecting the correct subsets to work with the chemical effect prediction data was a major challenge. This had to be done based on mappings between effect data and chemical data that were extracted from Wikidata.
We selected a relatively small subset of the chemical sub-KG to facilitate faster model training; this subset is, however, still larger than the fragment extracted from the species sub-KG. The species sub-KG was created from tabular data and cleaned by removing several annotation labels with redundant information. This sub-KG was aligned, using ontology alignment systems, to the species taxonomy in the effects sub-KG. This required pre-processing of the KG, dividing it into smaller parts so that the selected systems could perform the alignment. We used several standard ontologies to facilitate the transformation of the effect data into a knowledge graph. This involved not only automatic processes, but also a substantial amount of manual work.
Integrating more data into TERA involves the creation of mappings to the existing data. This is possible for a large number of chemical datasets, as Wikidata links multiple datasets; e.g., the chemical compound diethyltoluamide (wd:Q408389) has {{formula:4e28eb3c-709e-4b44-b7d4-26efff2dcfa5}} distinct identifiers.
Biological data, both taxonomic and effects, might be harder to align to TERA as these mappings are not available in Wikidata. Here, ontology alignment systems play an important role to fill this gap.
The additional integrated data will give larger coverage of the domain, and thereby, improve model performance. However, adding more data will also increase the memory and time requirements of KGE models. This was bypassed in this work by reducing TERA to only relevant parts.
Adding additional domain knowledge is also critical in other applications, such as using TERA for data access.
Performance of prediction models
We have shown that the ability of different KGE models to embed certain structure types largely impacts the prediction models. Some KGE models fail to capture the semantics of the chemicals and the species, which leads to performance similar to the one-hot baselines. Moreover, in a few isolated cases the performance is reduced further, which leads us to believe that the embeddings collapse in one or more dimensions, making it impossible to distinguish among entities.
We suspect that the even distribution of KGE models used to embed {{formula:2aef1af3-36ab-49d4-917d-b41e3013774b}} (Table REF ) in most settings is down to the structure of {{formula:1762a17e-7e6a-4008-9421-239c7f8cd1d6}} . Unlike {{formula:b59381fd-a547-41d2-b049-db606bea961b}} 's tree structure, this sub-KG has a forest structure, and models that can deal with trees (as in {{formula:a774cd39-46df-4d98-aba3-4f4b01e18e8a}} ) fail here; e.g., an entity in {{formula:5be9ecf2-2b34-4d44-befd-36efdcb34751}} can have multiple parents, but only one grandparent.
In this case, some models may
create very similar or the same embeddings for the parent nodes.
Conclusions and future work
TERA is a novel knowledge graph which includes large amounts of data required by ecological risk assessment.
We have conducted an extensive evaluation of KGE models in a novel and very challenging application domain. Moreover, we have shown the value of using TERA in an ecotoxicological effect prediction task.
The fine-tuning optimization model architecture to adapt the KG embeddings to the prediction task has, to our knowledge, not been applied elsewhere.
Value for the ecotoxicology community
The creation of TERA is of great importance to future effect modelling and computational risk assessment approaches within ecotoxicology, where the strategic goal is to design and develop prediction models that assess the hazards and risks of chemicals and their mixtures when traditional laboratory data cannot easily be acquired.
A great effort in the hazard and risk assessment of chemicals is the reduction of regulatory-mandated animal testing. Wide-scale predictive approaches, as described here, answer a direct and current need for generalized prediction frameworks. These can aid in identifying especially sensitive species and toxic chemicals. At the Norwegian Institute for Water Research (NIVA), TERA will be used in this regard and will support several research projects.
In environmental risk assessment it is often unfeasible to assess the hazard and risk a chemical poses to a local species in the environment. These species may not be suitable for lab testing, or may even be endangered and thus are protected by national or international legislation. The currently presented work provides an in silico approach to predict the hazard to such species based on the taxonomic position of the species within the tree of life.
From an economic perspective, TERA and the prediction models are useful tools to evaluate new industrial chemicals during the synthetic in silico stage. Candidate chemicals can be evaluated for their potential environmental hazard, which is in line with the Green Chemistry initiatives by authorities such as the European Parliament or the US Environmental Protection Agency.
The effect prediction using TERA is also in line with a larger shift in ecological risk assessment towards the use of artificial intelligence {{cite:371cc86ac1c448d16ce113ef23f03bf559db9f57}}.
We also believe the development of TERA contributes to a methodological change in the community, and encourages others to make their data interoperable.
TERA as background knowledge
As mentioned, in this work we use TERA directly in prediction models. However, TERA could be used as background knowledge to improve many emerging techniques for toxicity prediction (e.g., {{cite:20737a3f738d4d485df9622bf94577fb5939cc08}}). These methods often use chemical features, images, fingerprints and so on as input, and machine learning methods such as Convolutional Neural Networks and Random Forests as prediction models {{cite:b4b57cd78c59dd551b449fdb9d3cad04e57c0b2e}}, {{cite:a2a8e25cc5db00906335e62f9665fcbe6594421a}}. These models are often uninterpretable, and the predictions lack domain explanations.
TERA can also provide context
for machine learning tasks such as pre-processing, feature extraction, transfer and zero/few-shot learning. Furthermore, the knowledge graph is a possible source for the (semantic) explanation of the predictions (e.g., {{cite:cebdf7d4a1c4e3de8bc6945689ac85203830fe89}}).
Benchmarking KG embedding models
We have shown that embedding TERA brings new challenges to state-of-the-art KGE models with respect to capturing the semantics of the chemicals and the species. Furthermore, as shown in Section REF , the sparsity-related measures indicate that TERA represents an interesting KG. KGE models could be benchmarked in a standard KG completion task or in a specific task such as chemical effect prediction.
Value to the ontology alignment community
As mentioned in Section REF , there does not exist a complete and public alignment between ECOTOX species and the NCBI Taxonomy.
Therefore the computed mappings can also be seen as a very relevant resource to the ecotoxicology community.
The alignment techniques used achieve high recall over the available (incomplete) reference mappings.
However, aligning such large and challenging datasets requires preprocessing before ontology alignment systems can cope with them.
We removed all nodes which did not share a word (or shared only a stop word) in labels across the two taxonomies.
This quartered the size of ECOTOX and reduced the NCBI Taxonomy 50-fold. However, possible alignments between entities without labels are lost when reducing the dataset size.
Thus, the alignment of ECOTOX and NCBI Taxonomy has the potential of becoming a new track of the Ontology Alignment Evaluation Initiative (OAEI) {{cite:f6e3a0ad986a2de3eeceb26d8a473e34348c37c1}} to push the limits of large scale ontology alignment tools. Furthermore, the output of the different OAEI participants could be merged into a rich consensus alignment (e.g., as done in the phenotype-disease domain {{cite:297bd33de63bf11759246d4e5625e73abc70357e}}) that could become the reference alignment to integrate ECOTOX and NCBI Taxonomy.
Future work
We plan to extend TERA to include a larger part of ChEBI (which ChEMBL is a part of). ChEBI includes relevant data on the interaction between chemicals and species at a cellular level, which may be very important for chemical effect prediction. In this work we only consider effect data from ECOTOX as this is the largest data set available, however, the inclusion of e.g., TOXCAST {{cite:47ae8cd450d8b4b5c198d50cd174b7d54a1ad422}} is in our interest. New sources will always bring more coverage of the domain and will improve TERA for prediction, as background knowledge, and for data access.
We plan to evaluate effect prediction under different parts of TERA, i.e., which sources in TERA provide value and which do not contribute to the effect prediction. A similar effort exploring different KG crawling techniques can be found in {{cite:eee17bb7d69fa28714709dc4cd6eee45f981f86c}}. In a similar vein, we plan to evaluate how materialization, via OWL reasoning, of TERA's implicit triples affects prediction performance.
Finally, as mentioned already, some KGE models cannot deal with parts of the structure of TERA. An in-depth analysis of this is an interesting direction for future research. This could be solved by embedding the hierarchy separately, e.g., {{cite:62ff07da35d646c5f6a7df5f3c69e02db77d1e24}},
or imposing restrictions on the embeddings, such as a minimum distance constraint.
Resources
We encourage feedback from domain researchers on extensions to TERA and associated tools.
A snapshot of TERA is available at
https://doi.org/10.5281/zenodo.3559865
This snapshot does not include data that is impractical to re-share (i.e., partial {{formula:53955fae-1f1a-4b5a-a9b2-75f4620dc0e0}} as described in Section ). However, we include the full {{formula:a248760b-ac43-4e9d-94f7-c34abb946943}} and {{formula:1d6a9df4-b901-41a9-85de-10a875b9d9f3}} .
All the material related to this project is available at
https://github.com/NIVA-Knowledge-Graph/
Source codes to create TERA are available in the TERA GitHub repository.
The prediction models and data used for prediction can be found in the
KGs_and_Effect_Prediction_2020 GitHub repository.
The prediction models require the implementation of the KGE models from the KGE-Keras GitHub repository.
Acknowledgements
This work is supported by the grant 272414 from the Research Council of Norway (RCN), the MixRisk project (Research Council of Norway, project 268294), SIRIUS Centre for Scalable Data Access (Research Council of Norway, project 237889), Samsung Research UK, Siemens AG, and the EPSRC projects AnaLOG (EP/P025943/1), OASIS (EP/S032347/1), UK FIRES (EP/S019111/1) and the AIDA project (Alan Turing Institute).
Knowledge Graph Embedding Models
In this work, we use 9 KGE models from three major categories: decomposition models, geometric models, and convolutional models. The interested reader may refer to {{cite:9bd8b874fd7d4de01884a9c8e62dc65d9bf06e80}} for a comprehensive survey.
Notation
Throughout this section we use bold letters to denote vectors, while matrices are denoted as {{formula:0b69e156-222e-4d6a-b793-39c315a58045}} . Common notation for all KGE models is: {{formula:b054507d-edb2-480d-914b-50d1d8a15550}} for the {{formula:df816316-0176-42cd-ac67-bfce92fc0143}} -norm, {{formula:4c825b1c-24c2-4ad5-aba0-e8ef41d0b205}} for the inner product (dot product) between {{formula:3c4d8dee-8e38-486a-886a-1e039a5d1f08}} and {{formula:e0c315cb-c4e2-46f0-9617-fce2d2e00f20}} , {{formula:5ae229f3-86f3-4525-b4ca-3c2b21823091}} for the concatenation of {{formula:9d045b7d-0e6d-44a4-92cc-d1716ce7ed9e}} and {{formula:af0ddecb-cccd-4849-8891-e00382e3d407}} , and {{formula:130afd7b-c113-4efe-83bf-c6c4dbe098bf}} for the reshape of a one-dimensional vector into a two-dimensional image (except in HolE, where it represents the complex conjugate); finally, {{formula:e3408aa9-60ee-40c0-b09d-65c732913232}} reshapes a matrix into a one-dimensional vector.
The vector representations of an entity and a relation are denoted as {{formula:b98f2d10-b296-4702-805c-a44aeb76accb}} and {{formula:4640ad51-a13a-4028-b96d-883d71e21653}} , respectively. These vectors are either in {{formula:0990e72d-eb09-47b4-a7cf-8ef593bda849}} or {{formula:aac70b4e-e3d9-46fb-9fc0-336cf82a5c38}} , where {{formula:54269b11-1cab-471d-98d4-d3e730e06e54}} is the embedding dimension.
Decomposition models
DistMult.
Developed by {{cite:c5f80a8a86dbb5f36e3422b0fd298e6d7977c286}} and shown to have state-of-the-art performance on link prediction tasks under optimal hyper-parameters {{cite:3e2c61fa0e9b00f32309f05902ca11b90545d63f}}. This model represents the score of a triple as a Hadamard product (dot product) of the vectors representing the subject, predicate, and object of the triple.
{{formula:2beccfcb-f7cc-4df8-9965-de14b59a3f3e}}
This model does not take the direction of the relation into account, that is, {{formula:1bf2bcca-cfb5-460d-9d45-322d961fcf1b}} .
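For illustration, a minimal NumPy sketch of the DistMult score follows; the vectors e_s, w_p, and e_o are placeholders for learned subject, predicate, and object embeddings rather than names used in our implementation.

import numpy as np

def distmult_score(e_s, w_p, e_o):
    # Tri-linear (element-wise) product summed over the embedding dimension.
    return np.sum(e_s * w_p * e_o, axis=-1)

# Symmetry: swapping subject and object leaves the score unchanged,
# i.e., distmult_score(e_s, w_p, e_o) == distmult_score(e_o, w_p, e_s).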
ComplEx.
This model uses the same scoring function as DistMult {{cite:333d4c031fa834597f59f186aa3ff39aa7646769}}. However, the entity vector representations are in the complex space ({{formula:311d0018-8ae8-47c4-a987-f48605cc47c0}} ), and therefore the drawback of lacking directionality in DistMult is solved.
{{formula:c301163b-700b-40d0-91b5-fb0a9c17ccb8}}
where {{formula:a66c3b3c-64c4-4196-a1a1-f20b1a7c337b}} , and {{formula:255fddc2-cf2b-4a74-bb8a-19f19b49bb05}} and {{formula:cfe71c48-9b97-4663-b2de-95b495a9f7bd}} are the real and imaginary parts of {{formula:4ff2b388-e4c3-4b62-b1b8-3b465e7787d7}} , respectively. We can easily see that {{formula:264651d7-d070-4f87-9069-ae2ff569d405}} if {{formula:5dc3c839-cf41-49bb-8eae-6bd56b8f7c6c}} .
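A corresponding NumPy sketch with complex-valued placeholder embeddings is given below; the asymmetry of the score follows from conjugating the object vector.

import numpy as np

def complex_score(e_s, w_p, e_o):
    # Real part of the tri-linear product with the conjugated object embedding.
    return np.real(np.sum(e_s * w_p * np.conj(e_o), axis=-1))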
HolE.
The Holographic embedding model is described in {{cite:14c9d37bc994a88161b402f57719334464a78865}}, and uses a circular correlation scoring function
{{formula:843d124b-589a-4402-9b7d-cafa94bdb9fe}}
where {{formula:7e2a9eaa-7308-4f28-b847-994512df1ee0}} and {{formula:4e25c4b5-03a7-499d-b9a7-5bd0b9f2b20e}} are the Fourier transform and its inverse; for this model we use {{formula:c591daf2-e3e0-433b-9575-6e8035911704}} as the element-wise complex conjugate, and {{formula:9f03f04b-5a2f-49f1-9079-e6541693d877}} denotes the Hadamard (element-wise) product. HolE has been shown to be equivalent to ComplEx {{cite:98e7e1d9396b3555784725a4d06245d0250d1064}}, and therefore we expect the performance to be similar.
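A minimal sketch of the circular-correlation score using the FFT identity above is shown below, with real-valued placeholder vectors.

import numpy as np

def hole_score(e_s, w_p, e_o):
    # Circular correlation via the Fourier transform, then a dot product
    # with the predicate vector.
    corr = np.fft.ifft(np.conj(np.fft.fft(e_s)) * np.fft.fft(e_o)).real
    return np.dot(w_p, corr)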
Geometric models
TransE.
The translational model has the scoring function {{cite:738b6c10b68cf769d607781475c0ef972d4241d0}}
{{formula:cf3adc06-b500-4bdc-8a98-21a89625f000}}
If {{formula:fa3be550-2c15-4c52-9ec2-8271895d1852}} exists in the KG, the relation embedding translates the subject embedding close to the object embedding.
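As an illustration, the sketch below scores a triple as the negative distance between the translated subject and the object; the choice of norm (p) is left as a parameter and is an assumption for illustration.

import numpy as np

def transe_score(e_s, w_p, e_o, p=1):
    # Higher (less negative) scores indicate more plausible triples.
    return -np.linalg.norm(e_s + w_p - e_o, ord=p, axis=-1)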
RotatE. This model is inspired by Euler's identity ({{formula:560787b0-d02c-4d12-a653-53ea7ae9f75d}} ) and scores triples by rotating the subject embedding by the relation embedding in complex space. RotatE has been shown to be capable of modelling symmetric, inverse, and composite relations {{cite:9d3089040cd71ef4628ca47733b9d0bcebf2db69}}.
The scoring function of RotatE is defined as
{{formula:dfa7b194-ff94-4e8f-b3dd-ae14e4fa7e5d}}
Here, we concatenate the real and imaginary parts of {{formula:e9bb6a29-ebda-489d-9260-a620d82bb7b6}} . The modulus of {{formula:086fed90-89e7-45ce-8fa3-29941dcdd7ab}} is constrained to 1 and is therefore not included in the scoring function.
See the original publication for details of derivation.
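The sketch below illustrates the rotation-based score with complex placeholder embeddings; the relation is parameterized by phases only (unit modulus), and the use of the 2-norm here is an illustrative assumption.

import numpy as np

def rotate_score(e_s, theta_p, e_o):
    # Unit-modulus relation embedding: a rotation by the phases theta_p.
    w_p = np.exp(1j * theta_p)
    # Rotate the subject embedding and measure the distance to the object embedding.
    return -np.linalg.norm(e_s * w_p - e_o, axis=-1)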
pRotatE.
This model is described as a baseline for RotatE, enabling a comparison between including modulus information in the model and limiting it to phase information only {{cite:9d3089040cd71ef4628ca47733b9d0bcebf2db69}}. pRotatE has the scoring function
{{formula:9f85c075-6542-4525-bc11-7f85c141499d}}
where {{formula:05dd763a-e93b-427f-989e-b45d2fcd053e}} (phase of {{formula:71b4ac2f-2ee8-451b-adb3-664acc208ebd}} ) and {{formula:0d6220a9-0f9b-457b-bf42-c7974d67ae91}} is the modulus constraint on {{formula:5ad2c6a0-6c05-4970-ab91-d4f79ef99da5}} and {{formula:f77c2606-7608-4c8e-8a9b-605d3a290f5b}} .
HAKE. The hierarchy-aware model uses the modulus and the phase parts of the embedding vectors {{cite:67cc97bf0ffe0cccddf7077b1632160cb6af1a63}}. Entities at the same level in the hierarchy are modelled using rotation, i.e., phase, while entities at different levels are modelled using the distance from the origin, i.e., modulus. Therefore, the scoring function of HAKE is composed of two parts
{{formula:743f80de-ea5f-4c51-8f90-5122f0d6d36a}}
where {{formula:2de10ca3-6467-4e37-8070-af20e3c1656b}} is the modulus of {{formula:ca21fb7c-f0be-4437-a157-e134ed0cfaa9}} . The authors noted that a mixture bias can be added to {{formula:192632fb-e932-41e0-8240-a5865d880697}} to improve performance {{cite:67cc97bf0ffe0cccddf7077b1632160cb6af1a63}}. We omit these details here.
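A minimal sketch following the published HAKE formulation is given below; the modulus and phase parts are combined with a weight lam, which, like the omitted mixture bias, is an illustrative choice rather than the exact configuration used here.

import numpy as np

def hake_score(h_mod, h_phase, r_mod, r_phase, t_mod, t_phase, lam=1.0):
    # Modulus part: distance from the origin separates hierarchy levels.
    d_mod = np.linalg.norm(h_mod * r_mod - t_mod, ord=2, axis=-1)
    # Phase part: rotation separates entities within the same level.
    d_phase = np.linalg.norm(np.sin((h_phase + r_phase - t_phase) / 2.0), ord=1, axis=-1)
    return -(d_mod + lam * d_phase)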
Convolutional models
The final set of models used in this work are convolutional models. The convolution between an image {{formula:6799e16a-4b03-41b6-8989-e021b503c84d}} and filters {{formula:68d60eb7-458e-4eda-8d13-7375d13844f5}} is denoted as {{formula:faa8009e-024c-4ee7-aba7-3a8be84d81c7}} . The models also use dense layers, which are denoted by transformation matrices, e.g., {{formula:502b62ec-1502-49df-935a-9b56a61ca3e7}} ; note that these also include biases, even though we do not explicitly state them. Moreover, dropout layers are used between every convolutional and dense layer.
ConvKB.
The scoring function of ConvKB {{cite:b524533a1296a105c231f2734a808711982f3008}} uses a single convolutional layer and a single dense layer
{{formula:555aba32-70d9-402b-8b3f-4c6289fc9adc}}
where {{formula:a46d470c-166d-48a0-b336-8d3b5d5f10bf}} reshapes {{formula:0421c50e-134c-4951-955f-f229822d631e}} into a 1-dimensional vector, and {{formula:2c54f86a-9df4-4bc4-b6c9-7fb14eae2586}} are the convolution filters.
{{formula:5db44d30-6b0b-46ad-80a6-959cc06b27ec}} is the transformation matrix for the output dense layer. ConvKB can easily be extended to use multiple convolution and dense layers.
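A minimal Keras sketch of a ConvKB-style scorer is shown below; the number of filters, dropout rate, and kernel size are illustrative assumptions and not the hyper-parameters used in this work.

import tensorflow as tf

def build_convkb_scorer(embedding_dim, num_filters=8):
    # Input: the triple matrix with e_s, w_p, e_o stacked column-wise (d x 3).
    triple = tf.keras.Input(shape=(embedding_dim, 3, 1))
    x = tf.keras.layers.Conv2D(num_filters, kernel_size=(1, 3), activation="relu")(triple)
    x = tf.keras.layers.Dropout(0.2)(x)
    x = tf.keras.layers.Flatten()(x)
    score = tf.keras.layers.Dense(1)(x)  # single plausibility score
    return tf.keras.Model(triple, score)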
ConvE.
In contrast to ConvKB, ConvE {{cite:34346ef0dd19f03518ee65b8773240e1c1ae4691}} only performs convolution over the subject and predicate image (concatenated and reshaped) and multiplies the output of the dense layer with the object vector as follows
{{formula:ef37f675-f751-43b9-bdb0-dbc5ae571486}}
where {{formula:98ab343e-5b70-4f9b-8f77-733b5b190ff8}} reshapes {{formula:3e409826-d6b8-48d7-8822-86d2191a2f72}} into a 2-dimensional image.
Here, the last dimension of {{formula:481717d3-78f8-4a9d-b49e-c57331a2bf3c}} is equal to the embedding dimension. This model can also be extended with multiple convolution and dense layers; however, {{cite:34346ef0dd19f03518ee65b8773240e1c1ae4691}} found that this did not yield improved results.
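The Keras sketch below illustrates the ConvE pattern: convolve the reshaped subject-predicate "image", project back to the embedding dimension, and take a dot product with the object embedding. The image dimensions, filter count, and dropout rate are illustrative assumptions.

import tensorflow as tf

def build_conve_scorer(embedding_dim, img_h, img_w, num_filters=8):
    # img_h * img_w must equal 2 * embedding_dim (stacked subject and predicate).
    sp_image = tf.keras.Input(shape=(img_h, img_w, 1))
    e_o = tf.keras.Input(shape=(embedding_dim,))
    x = tf.keras.layers.Conv2D(num_filters, kernel_size=(3, 3), activation="relu")(sp_image)
    x = tf.keras.layers.Dropout(0.2)(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(embedding_dim)(x)
    score = tf.keras.layers.Dot(axes=1)([x, e_o])  # dot product with the object embedding
    return tf.keras.Model([sp_image, e_o], score)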
Loss functions
Work on KGE models usually defines loss functions specific to the models. However, as shown in {{cite:1ff63094968d08913142e27749fa417a0f86e01f}}, {{cite:b04fbec02ae25c3b5040a62bc98fca45728dfa08}}, the choice of loss function has a large impact on model performance. In this work we use four loss functions. We experimented with other loss functions, e.g., absolute/squared error; however, these did not yield improved results.
To optimize a loss function we need to generate negative examples. Under the local closed world assumption we replace the object of each true triple with all entities and sample negative examples from this set {{cite:dfca83b2a6cfd605c2e7cf337afe0e3d868decbe}}, i.e., we sample from {{formula:b1878210-1076-44da-9f63-01a093e29fa1}} . This can be expanded to the stochastic local closed world assumption, which corrupts both the subject and the object of true triples (illustrated by Fig. 3 in {{cite:3f0d4e8c460de805b865d7b1c162040d62b67a2a}}). The number of negative samples per positive sample is controlled by a hyper-parameter; however, {{cite:3e2c61fa0e9b00f32309f05902ca11b90545d63f}} show that the largest possible number is favorable.
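The NumPy sketch below illustrates this corruption scheme; for simplicity it samples entities uniformly and does not filter out corrupted triples that happen to be true, which a production implementation might do.

import numpy as np

def corrupt_triples(triples, num_entities, negatives_per_positive=10, corrupt_subject=False):
    # triples: integer array of shape (N, 3) with columns (subject, predicate, object).
    s, p, o = np.repeat(triples, negatives_per_positive, axis=0).T
    random_entities = np.random.randint(num_entities, size=len(s))
    if corrupt_subject:
        # Stochastic LCWA: corrupt the subject or the object with equal probability.
        corrupt_s = np.random.rand(len(s)) < 0.5
        s = np.where(corrupt_s, random_entities, s)
        o = np.where(corrupt_s, o, random_entities)
    else:
        # LCWA: only the object is replaced.
        o = random_entities
    return np.stack([s, p, o], axis=1)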
Pointwise hinge.
The objective of pointwise losses is to minimize the scores of negative triples and maximize the scores of positive triples.
{{formula:61fbe7e5-9a21-4ad2-8fe0-914ca0dc08e9}}
where {{formula:5136ec18-05dc-445b-b144-04798af62eae}} is the set of positive and negative triples, {{formula:c01771cc-de0e-4d80-8b6f-006c48a179f8}} is the triple label ({{formula:77ce3d2d-b23b-4e3b-b111-7b33d2b0f65d}} for false and 1 for true) and {{formula:e98ea740-cf71-4c7c-b7b0-1edd59a372b3}} is the score of triple {{formula:06b476cb-ddbf-4a34-bf8a-e90fa61e940d}} . {{formula:a73dfd14-a120-4b9c-8f95-b287cee4646e}} is the margin hyper-parameter. {{formula:2576210b-7b95-4295-9e2b-52b6a0213f38}} is the positive part of {{formula:3f3851cf-f1f1-42f5-8594-69f50652c786}} .
Pointwise logistic.
In contrast to hinge loss, logistic loss applies a larger non-linear loss to predictions that are further away from the true label.
{{formula:fe91f7b5-f644-4674-9b69-7f0300873e63}}
Pairwise hinge.
The objective of pairwise loss functions is to maximize the distance (in score) between a positive and a negative triple.
{{formula:a2849649-c0e2-4e1b-98d2-7d6f18e5bf33}}
where {{formula:dbaf9f8b-9aa0-48ea-8a7e-72fb943e3805}} and {{formula:006eb9d9-9e4a-44a2-839a-db995741b3e8}} are the sets of positive and negative triples, respectively. {{formula:a30ab945-8340-47cc-80a1-6d3adbc536b6}} is the margin hyper-parameter, which for pairwise hinge loss represents the maximum score discrepancy between a positive and negative score.
Pairwise logistic.
Akin to the move from pointwise to pairwise hinge, pairwise logistic maximizes the distance between positive and negative triples, but in a non-linear way
{{formula:c0748090-aafd-4254-86eb-e72c7c30f70b}}
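For reference, the four losses can be sketched in NumPy as follows; scores are model outputs, labels are in {-1, +1}, and these are plain illustrations without the numerical-stability tricks one would use in practice.

import numpy as np

def pointwise_hinge(scores, labels, margin=1.0):
    # Penalize scores on the wrong side of the margin.
    return np.maximum(0.0, margin - labels * scores).sum()

def pointwise_logistic(scores, labels):
    # Smooth, non-linear penalty growing with the distance to the true label.
    return np.log1p(np.exp(-labels * scores)).sum()

def pairwise_hinge(pos_scores, neg_scores, margin=1.0):
    # Push positive scores above negative scores by at least the margin.
    return np.maximum(0.0, margin + neg_scores - pos_scores).sum()

def pairwise_logistic(pos_scores, neg_scores):
    # Smooth version of the pairwise ranking objective.
    return np.log1p(np.exp(neg_scores - pos_scores)).sum()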
Implementation
We have implemented the KGE models in Keras {{cite:e1b66861d5ac4bdf9bfe075976592daafabe3dde}}, and the model code is available at https://github.com/NIVA-Knowledge-Graph/KGE-Keras. This enables us to easily use the KGE models as components in other models, as described in Section .
| i | 806549b0df80967ab4b27e348752a46b |
AttMask can be incorporated into different methods to either replace the block-wise strategy of BEiT {{cite:37c498c6398aa8b527b67d37da1eab5f222e5526}} or introduce masking. For iBOT {{cite:42fa89fd85bd8a832bc798db5a1297bbcdf673a9}}, we use {{formula:1111b213-5377-4c4e-af33-1a3c587277ae}} in {{formula:890a3a43-304c-45de-8180-3e8cecfbe42b}} (REF ) and {{formula:92966b0e-4cba-4c18-96d2-ff08ba9c6a84}} (REF ). For DINO {{cite:4939e3016f497c5a782ad894e7137f7ccbc19593}}, we introduce masking by using {{formula:f33fb7cd-27b4-47a2-8870-347390eb6b80}} for global views in {{formula:9ca5f2b5-39f9-4928-8cde-b4d14b509bb2}} (REF ), but not for local crops in the multi-crop loss {{formula:18ecd9d9-55de-4e65-b34a-3827122e53a7}} (REF ) (discussed in Appendix sec:more-setup).
| m | 029882e4a972de8db4406caa01d29a3b |
Meanwhile, delineation of anatomical structures relies on multi-modal medical images, such as Computed Tomography (CT) and T1- and T2-weighted Magnetic Resonance Imaging (MRI). CT is required by radiotherapy dose computation as it can measure the density of different structures. It can show dense structures like bones and implants with less distortion, but has a low contrast for soft tissues {{cite:1643b4d15ad66dcd28328ef7d5c6bb45b4477837}}. T1- and T2-weighted MRI can better highlight different soft tissues than CT {{cite:1643b4d15ad66dcd28328ef7d5c6bb45b4477837}}. Therefore, employing different imaging modalities can provide complementary information for accurate structure segmentation required by radiotherapy treatment planning. At present, the delineation of multiple brain structures is implemented by experienced radiation oncologists through slice-by-slice manual annotation, which is labor intensive, time consuming, and subject to the operator's experience. Thus, accurate automatic segmentation of anatomical structures like ABCs is desirable for reproducibility and for reducing the workload of annotators and the waiting time for patients {{cite:f6e48af7a521b50d0b7ecc0253139bd7fc167b1d}}. Considering the state-of-the-art performance of Convolutional Neural Networks (CNNs) in medical image analysis {{cite:cc648ad9a380469e9c47a9cd05854f39a2aacb0f}}, {{cite:5d1eafdc0934a3a628d4bf169607803816d46dae}} due to their powerful feature representation capability, it is promising to use CNNs to automate this task.
| i | 3446182e425bebda9fb42a58ee1cbb22 |
In the outburst, the accretion rate of the ADAF at {{formula:1adc567a-765b-4143-a527-be847ef68935}} rises on the thermal timescale {{formula:ea1227a6-4961-4aa7-9660-9455635275db}} as the heating front approaches the outer radius of the ADAF
{{cite:a247901545350ba27fa58897707174940e32a3b9}}, which is roughly of the same order as the accretion timescale of an ADAF without magnetic outflows (see Equation REF ), because the disk thickness of the ADAF {{formula:35b8b527-9147-4f9f-9ad8-9c08fd235dc9}} . In the case of an ADAF with strong magnetic outflows, {{formula:6f10cad8-5d2d-4e75-9a64-2875d42ead41}} is substantially higher than unity, and the rising timescale of the accretion rate at {{formula:45f9b20e-2fed-4037-b78c-79bcf275c73b}} is usually longer than the accretion timescale of the ADAF with magnetic outflows (see Equation REF ). The magnetic coupling between the disk and the outflows takes place on a much shorter timescale, i.e., of the order of the disk dynamical timescale {{cite:aa087c167a1547b20a726cb246d1413d7596f653}}. This implies that the analysis based on the steady model of the ADAF with outflows in this work is a good approximation.
| d | cb2d51188b65b502ca0f7dfff2d2e733 |
Simulations were performed using the Quantum-Espresso
code, which is an efficient, open-source implementation of density functional theory (DFT) in periodic
boundary conditions {{cite:63b578c5e9cfa1255dd9e395faca3016ae90a2f2}}, {{cite:946896357a97ddab61da7d7deb57fbe5fe9c337c}}. The standard
package was modified to carry out free-energy calculations based on the
proton coordination number.
Energy and forces were computed
with the BLYP exchange-correlation functional {{cite:620d97cd4616eabba566aeab007317933538dc06}}, {{cite:93ea4fb55edf3fd2db777ebd49677d7dd79cfeff}} including
Grimme's Van der Waals corrections {{cite:d52a8d32879726283d5b077f53b71df0091a491f}}
and combined with Vanderbilt ultrasoft pseudopotentials {{cite:a945a0c65dd8250ae0d85a59e562850d2b160d0a}}.
The Kohn-Sham orbitals and the charge density were
expanded in plane-wave basis sets with kinetic energy cutoffs of 20 and 160 Ry, respectively.
Car-Parrinello molecular dynamics simulations {{cite:b7a8616cba06a9ada200fa4044942ad6fcc7198c}} were performed
using an effective electron mass of 400 a.u., coupled to a Nosé-Hoover thermostat at 300 K and a time
step of 0.145 fs. Reciprocal space sampling was restricted to the {{formula:60cb890f-8a4b-4482-8345-d2f9a86800c7}} -point.
| m | 1ecbc0315ae995e9fefbad9461e13e53 |
With the advent of deep learning, a great number of state-of-the-art methods have been developed for low-light image enhancement. LLNet {{cite:4953d8d3d329faf1025e88c8e862f76bb64f95fa}} is a deep auto-encoder model for enhancing lightness and denoising simultaneously. LLCNN {{cite:cb81372c1228549fa81b0658dfe1b291ec9160f8}} is a CNN-based method utilizing multi-scale feature maps and SSIM loss for low-light image enhancement. MSR-net {{cite:9d15a85860d5e97c74aa889eee2f5ecf79ec7bfb}} is a feedforward CNN with different Gaussian convolution kernels to simulate the pipeline of MSR for directly learning end-to-end mapping between dark and bright images. GLADNet {{cite:b73e2b550401ab45c1cd533a8dda17a73ae8e8ec}} is a global illumination-aware and detail-preserving network that calculates global illumination estimation. LightenNet {{cite:b73e2b550401ab45c1cd533a8dda17a73ae8e8ec}} serves as a trainable CNN by taking a weakly illuminated image as the input and outputting its illumination map. MBLLEN {{cite:46e634dd8a686b49a9bdb39462d575b0cf9d910a}} uses multiple subnets for enhancement and generates the output image through multi-branch fusion. RetinexNet {{cite:12e7e87a2a2b4913808359ac2e6533991611c53b}} decomposes low-light input into reflectance and illumination and enhances the lightness over illumination. EnlightenGan {{cite:abee6ef51f065796ca18b651d51203ea195ff718}} trains an unsupervised generative adversarial network (GAN) without low/normal-light pairs. KinD {{cite:352d85ff6a8f1f4e66470eaf9bb30631bb2510ba}} first decomposes low-light images into a noisy reflectance and a smooth illumination and then uses a U-Net to recover reflectance from noise and color bias. RDGAN {{cite:03840cd3b5bbedf610a0e2f14dcf829672314759}} proposes a Retinex decomposition based GAN for low-light image enhancement. SID {{cite:16a9ff37ef4408763828b3937addca68531f526d}} uses a U-Net to enhance the extremely dark RAW image. RetinexDIP {{cite:61cbfd45144ccbcbfc884d63701270d3848b4e91}} provides a unified deep framework using a novel ”generative” strategy for Retinex decomposition. Zhang et al. {{cite:fdc25a20298ac39cafa2e50e93cad53588bc0a2a}} presented a self-supervised low-light image enhancement network, which is only trained with low-light images. Zero-DCE {{cite:45003469965f46e7af02c934af74cc26d2b0db55}} estimates the brightness curve of the input image without any paired or unpaired data during training.
| m | 57e70879d3582415114442ce837f8d68 |
However, as we often see in practice, the information assimilated in each gradient can vary widely. After the first few training epochs, the gradients of specific data points lose their value {{cite:1bf7f51380ed7b0dc18ede33c42190264aff85d2}}. We could select data points randomly or by batching the entire training set per round. Both methods are inefficient and do not consider the information contributed by the gradients or the data points.
| i | e831fa9f8e7972ced40fd75c9153fa2a |
The axial-vector anomaly by Adler {{cite:fb7c0c278de741fc49be2a479e7a3b200bb330a2}}
and Bell and Jackiw {{cite:cc75c3ed7fa5a8fe0100447c78eb017c5e54239f}} states that parallel electric and magnetic fields cause a divergence of the axial current. In units of the flux quantum
{{formula:e98bd467-c606-4829-9936-7ddf324c3a92}} , and the Planck constant {{formula:6e359c99-fffa-45e5-8384-9e0b30b69d8f}} , the divergence of the axial current reads
{{formula:7169cd12-788d-489d-97f2-7c1562c967c2}}
| i | 7bd03923537c3935efac5ac4beb47280 |
Meanwhile, analytical {{cite:d29fba447d308e4d2b538326d611efc2638c2975}}, wavelet {{cite:f631f51371262e9fec50c737984a9ba96d9d6625}}, fractal {{cite:045af4cb30a470c6516ff4b5be97c8e16afbfbcf}}, fuzzy system {{cite:ea58533f6a9b26875e741c32291120c19cefcfe2}}, Kalman filter {{cite:f97497b21f5cab3606dcaf7e3cb4c854070ce73b}}, chaotic interrogation {{cite:50dff8521bfb9c668b9b99177eed40929a3127d1}}, shape function {{cite:89a5f70decfc3bdd66a9c54f64d3b9aba29f9aba}}, and particle filtering {{cite:4f5cd8649217aefbf5a3e09a89f6cba6f2005017}} approaches, among many others, have proven successful in uncertain inverse parameter identification applications within SHM.
Broadly speaking, inverse SHM approaches can be grouped as either deterministic or probabilistic {{cite:0a9b39da3e591723bab78e32f6f7d74291a3782c}} – which is also generally the case in classical inverse problems {{cite:b91e2e2c170077551ba1fc13846b23d519f9e0ec}}.
In the latter case, estimation of uncertain SHM parameters takes the form of a probabilistic term, for example a value with an associated certainty, a probability itself, etc.
| m | 282132ea8163ecc1f3bf8c6490fb743a |
BERT {{cite:b982552ab597feb170573f729fc2f6d567e661e3}}: This is the BERTbase model from the publicly available checkpoint.
ASP: In line with ASP practices {{cite:a92fca70f4ac28516c5669a7746677ae6c0aeaaa}}, we perform one-shot magnitude-based masked pruning on the pretrained model (a simplified sketch of this masking follows this list). This baseline is considered best practice for semi-structured sparsity in a large pretrained language representation.
ADMMUnstructured: To measure the accuracy cost of semi-structured sparsity specifically, we create another baseline that uses ADMM but induces unstructured sparsity at 50% per-layer (rather than global) sparsity.
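The following NumPy sketch illustrates one-shot magnitude masking under an assumed 2:4 semi-structured pattern (two of every four consecutive weights kept); this is a simplification for illustration and not the exact ASP procedure or the configuration used here.

import numpy as np

def two_four_magnitude_mask(weights):
    # Assumes weights.size is divisible by 4; groups of 4 taken along the flattened array.
    flat = weights.reshape(-1, 4)
    # Indices of the two smallest-magnitude weights in each group of four.
    drop_idx = np.argsort(np.abs(flat), axis=1)[:, :2]
    mask = np.ones_like(flat)
    np.put_along_axis(mask, drop_idx, 0.0, axis=1)
    return mask.reshape(weights.shape)

# One-shot pruning of a weight matrix.
w = np.random.randn(8, 8)
w_pruned = w * two_four_magnitude_mask(w)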
| r | c8dd3d731a1e972183c2f19f7c736c1e |
The main results on CIFAR-10 using PreAct-ResNet-110 are summarized in Table REF . We experiment with different constructions and different {{formula:7e561c78-a23d-4a73-bf3a-1d16f315109f}} s. 1xSkip is our reimplementation of ResNet-110 with pre-activation and outperforms the result reported in {{cite:aa74cfed303fd8afb403ee3346d189f27abf1abe}} by 0.06. This margin can roughly serve as the boundary for deciding whether an improvement is caused by random variation or by the experimental setting. As we can see, for the skip connection without normalization, it is indeed best to add the input without scaling, and expanding the skip results in a significant loss in accuracy. However, with the application of layer normalization, the expanded skip can also be sufficiently optimized and surpass the baseline, as suggested by the extensive structure search of {{cite:aa74cfed303fd8afb403ee3346d189f27abf1abe}}. Moreover, with the recursive design, the performance can be maintained and further improved, reaching the best result in our experiments when using {{formula:dfae098e-307d-4d80-bd9c-c71c50d2a128}} . It is also interesting to see that, in combination with layer normalization, the {{formula:6c509bed-9add-49a0-8890-2d057cf9d199}} that results in the best accuracy is also different. Intuitively, the recursive structure adjusts the effective ratio between the skip and the transformed input in a more sophisticated manner than the plain expanding structure, so that a different {{formula:a48d2e2e-fe09-4201-b327-fca259992f1b}} may be required.
| r | 52de72c90fc3211c0005fe2211b343e2 |
Our model is of interest for the following reasons.
First, we predict the luminosity of these planets will be low even at early times, plausibly between the luminosity of Earth and Neptune or less, and should be initially rather insensitive to the planetary system's age for hundreds of millions to billions of years.
This can potentially be tested via the direct imaging of farther-out exoplanets in the mass range of interest {{cite:22c6aa9e1b38c22d4d8d9c998587b17137215805}}, {{cite:24edb004f228a033eda8ea3bbc771dedd72b94f7}}.
Second, a supercritical core with a very low internal heat flow may have consequences for the dynamo and expected magnetic field of the planet.
Third, the temperatures within the core may be so extreme that the contribution to the planet's density due to heat cannot be neglected, and the required quantity of hydrogen to match observed exoplanet radii may be smaller than usually thought {{cite:97c8cde09e8af9a9e04fccfc115e1aa4024210af}}.
Along the same lines, sequestered hydrogen may further contribute to core inflation.
Fourth, the bimodal distribution of exoplanet radii is a highly active area of research.
This model may be of interest to researchers seeking to explain that observation, because current atmospheric escape models rely on interior models that neglect convective inhibition and the possibility that enormous reservoirs of hydrogen are permitted to dissolve into a supercritical core.
Finally, this model for super-Earth interiors further demonstrates the importance of considering exoplanetary systems holistically using intuition arising from fundamental physics rather than analogs to more familiar worlds.
Indeed, it is yet another reminder that our own planet may be atypical.
| d | 5fedc6f22a22a8af2f0675a4af51608c |
The ALA can in principle be applied to any model, but one expects it to work best when the log-likelihood is concave or at least locally concave around {{formula:720b993d-cd74-4e45-936d-8a965c32e82a}} , e.g. models satisfying local asymptotic normality.
Yet another avenue is to apply ALA to conditionally concave settings in the spirit of INLA for latent Gaussian models {{cite:a77aaf69ae94d52820efa78251f1587c38552a2f}}.
| d | 3e853e67cbf2ac319cab69f9f3d4ce42 |
The Carrollian limit of Einstein gravity arose initially as the “strong coupling limit” {{cite:239c110f23defd65c3b3317b8ded347fd2e7355e}} or “zero signature limit” {{cite:db1d73b358ac615180c7c78c8736714d99c4aa30}}, {{cite:f98ce1c56a277b814f94f66ddee187e6e4933c53}} of general relativity. It is relevant to the description of the generic behaviour of the gravitational field in the vicinity of a spacelike singularity {{cite:e1bacf4535ff00b9097b4fc3a3ca9b7ec07a9ef1}}, {{cite:d8d11113443a450f5616132faf7fcf9baa710a2b}} and even more so when {{formula:8a1677de-a435-464d-b5fc-2a9d31e569fe}} -forms are included {{cite:a7e8290c1c24a04a07baec6546efee87928a7183}}, {{cite:4cd30c994f83b1b54b08c65e42fc2653d47eb963}} (see the review {{cite:0a77aadc4a539517e502bb5299a886662db389d9}}). The geometry of the Carrollian limit was constructed in {{cite:79b69ad8094954db2dd1d1cbb1568e2660893ca8}}. Since then, many applications and properties of Carrollian structures have been studied, in particular in the context of the geometry of null surfaces and the BMS symmetry {{cite:c7a059cfea38ef16539780166f0c7faab2c08a51}}, {{cite:fea590e29a2ad428f3da2ad99e2ae2ac1eb5b275}}, {{cite:c1dd200cd8906e36452e747dfa8ea439e574741b}} (see the recent articles {{cite:b03729b9803a48780e4501a2f8d93b5c6e3b806e}}, {{cite:5c3252ab037c17117356c3aecab2b9dacfee1354}} for an updated, comprehensive list of references and {{cite:a605c7e6ac700abec5ae87885b69412acf76048f}} for a review of non-lorentzian theories).
| i | 01d80a7f1d8859cbe56e76526b523428 |
Random Forest (RF) {{cite:c0cb3dc5c86bdb8def9e2b4cac4b3aee1793f118}} and a transformer (TF) {{cite:d2222ce551dd5249e04a7cd9e4018c4c2b700d97}} were used for the supervised crop type classification task in this study. Overall Accuracy (OA) was used as the measure to evaluate the experiments.
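A minimal scikit-learn sketch of this pipeline is given below; the feature matrix and labels are random stand-ins, not the study's data, and the hyper-parameters are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-ins for per-sample time-series features and crop-type labels.
X = np.random.rand(500, 12)
y = np.random.randint(0, 5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
oa = accuracy_score(y_test, rf.predict(X_test))  # Overall Accuracy (OA)
print(f"OA: {oa:.3f}")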
| m | b856da9ed997e11d9e93a30b6fdf2093 |
We presented a method to reduce duality gaps in the Lagrangian relaxation of the MAP-inference problem in a continuous MRF taking a nonlinear optimization-driven approach. Our theoretical contribution identifies extremality of the lifting as a key component, as it leads to convexifications which do not discard any information for a wide range of nonconvex cost functions. Using results from convex algebraic geometry we provide, to our knowledge, the first tractable formulation of the polynomial discretization in terms of semidefinite programming. We have provided a parallel implementation of a first-order primal dual algorithm on a GPU which can handle large problems. Indeed, the approach of {{cite:2be74a8e69508def884ef1c0361e2b4fcdd44ac4}}, {{cite:c84ba3d83f77fc9e95e7d8f8ca29a8352c7b48f4}} applied directly to the original problem (REF ) (with {{formula:caa31ba7-b659-4e97-a84d-0c23b2580701}} polynomial) attempts to solve the full marginal polytope relaxation which is tight but intractable for large {{formula:ed6c3c97-e51e-4b4a-8657-01e75f58641d}} as the number of coupling moments grows exponentially with {{formula:47da49a7-4a84-4d9d-806e-59a175ff03d9}} . In contrast, our framework applies to the local marginal polytope relaxation which is not tight in general but leads to a tractable formulation as it exploits the sparse structure of the optimization problem.
| d | 588e038c0cd781ff7c36cb842ef6b7dd |
One of the earlier {{formula:5ebb72d0-9c7f-46ca-8df5-899365e9d7ad}} studies in 2HDMs following the discovery of the light Higgs boson was the work of Broggio et al. {{cite:9a5bf93ad5c32c8b221a836d89614e80f25c08f6}}. They restricted their analysis to the “alignment limit" ({{formula:a163b5c6-a3d9-42bd-933a-d9e85cda7e3b}} ) in which the tree-level couplings of the lighest 2HDM scalar are identical to those of the SM Higgs. As noted above, they found that only the type-II and type-X models could account for the {{formula:e0e5baf6-ad8d-4ff4-b655-e4df6f806561}} discrepancy. In both models, a very light pseudoscalar Higgs mass ({{formula:d31fe9df-9c1e-4427-95de-6ee7616e1eb0}} ) is required, typically below 100 GeV, as is a relatively large {{formula:61689893-550f-49d3-9f0e-a220c5498165}} , typically greater than 60.
Another analysis of the type-X 2HDM with low-mass pseudoscalars is that of {{cite:3beaa1551c19f8c9c8c1c395c37357479fe40053}}, where it was shown that {{formula:c51d3c3d-2eaa-4c31-a1ec-f74190f7df10}} could be as low as 10 GeV. With values for {{formula:944db8cb-985e-46e1-b5bf-5b6a492368e8}} as in these references, unitarity and electroweak precision constraints then force the
charged Higgs mass ({{formula:6b3af39a-49ac-4414-a26e-74a6c57fd413}} ) to be less than about 200 GeV. This is a problem within the type-II model, since radiative {{formula:fa8fd99e-45f8-4fc7-8709-7419e1a85402}} decays, {{formula:c6b75d5e-f601-4b31-80a0-3d45461915b0}} and the hadronic {{formula:8b9856bc-31d5-42b7-85ea-d373049dc93c}} branching ratio force {{formula:108072d9-db6e-405b-88f8-bd80abea7db9}} GeV {{cite:2b7a0d6bce8d8a9ff1ae6c6028c96803fbf1999f}}. As a result, the type-X model is favored. A subsequent analysis in Ref. {{cite:fc8f771056808a56c446ca7c8497b15df800b4a0}} focusing explicitly on the type-X model considered all experimental constraints. It was noted that bounds from leptonic {{formula:16423744-7042-4647-a9a5-b409d5673978}} decay restricted the parameter-space further and that the discrepancy in {{formula:f3ba1c06-928c-4a22-bee0-0ab8b5ce7335}} could be explained at the {{formula:d29520b1-f067-402f-bd25-7e9a038f6530}} level with {{formula:a6137cd9-4a6c-4513-80bd-ab1688b06761}} between 10 and 30 GeV, {{formula:71edd79a-85ed-4883-9461-356bb6f965da}} between 200 and 350 GeV and {{formula:de3fa037-5a59-4987-b77f-89fb82dbbbb5}} between 30 and 50. Shortly thereafter, Ref. {{cite:550afcf9b5484d24d5ceeba08b89b86862805423}} included more recent data from lepton universality tests and found bounds that were somewhat weaker, but in general agreement with {{cite:fc8f771056808a56c446ca7c8497b15df800b4a0}}. By the same token, studies have been performed in Refs. {{cite:c6ca2e851addeccf73c88165b4bc9af4cb10ddf4}}, {{cite:f43cd5b1ca49bc151882750317fd969a197f95c5}}, and a more recent work including limits from Higgs decays to {{formula:b29b2091-244e-4af2-9b92-07121b4c3280}} {{cite:4d8598b26068127f87b410012ffb3e2834d327f0}} also found similar restrictions on the parameter-space.
| i | ad5bead9bb30e8dc934fa373b207d351 |
We compare LaSS to three types of knowledge graph completion methods: shallow structure embedding, deep structure embedding, and language semantic embedding. We refer the reader to {{cite:ef6c579de31a32e1190b4d69119941d80c1760e2}} for a more comprehensive review of knowledge graph completion methods.
| m | 25ac5e8495a5e0dad1eabd79c4b0e9e3 |
Additionally, both detail and context information are crucial for occluded person Re-ID; they have their own advantages in different situations. As illustrated in Fig. REF (c), we can easily identify ID1 and ID2 by local details while finding it hard to distinguish them based on their outward appearance. {{cite:7532186cb9d1ea1b4bcdfb73c8021732549cb628}}, {{cite:e183d1b6e264b8b29115687c5f6ac93d3cf8df28}}, {{cite:17ffff4db828f998473df5e0e6b5250ecca1ddcf}} proposed using additional clues, leveraging foreground segmentation and pose estimation models to extract body parts and predict the locations of occlusions when global features are polluted. Methods that do not use extra models or annotations usually focus on extracting discriminative descriptions from local and global features. Part-based methods {{cite:f5f280904d46d276d2c04fc5b9559495cbc41919}}, {{cite:785c977c18279b45d20662870f8ba3966e800f96}}, {{cite:c2391395111fe504cad4c2a2bd418b2817d23a70}} proposed splitting the global feature into several parts and using finer features that contain detailed information for training. Global information becomes more crucial when the body is hindered by unknown impediments or the details are similar, as shown in Fig. REF (d). ViT-based approaches {{cite:402a1a2401830948d9f4df1ecdea3b0d6d4fd663}}, {{cite:4b37c43c84019f7c0445f19a2de9120d2375e449}}, {{cite:17ffff4db828f998473df5e0e6b5250ecca1ddcf}} proposed using ViT as a feature extractor due to its strong sensitivity to global information.
| i | 1800861b14ee0ac1c6962d010b32ec99 |
The continuous transition between replica symmetry broken and unbroken solutions studied in Section  should be contrasted with the discontinuous transition between two similar large-{{formula:f07a3776-c65f-4141-9bc7-f8cdd79d4d64}} solutions in the unitary time evolution of Rényi entropy {{cite:9abe4186cebddf764212ed08c4b5ee18e260cfff}}. As the replica non-diagonal solution is closely related to the replica wormhole observed in the context of the black hole information paradox {{cite:e522e28c5d6a72f6266fee802632739349f41f92}}, {{cite:b549b9f77036bde051ad18f1395ccddcf467cf4e}}, it is worth speculating about the physics of a monitored black hole. In the current setup, the monitoring is implemented on the full system and causes the restoration of replica symmetry. We expect that, by increasing the monitoring of the black hole and its environment at late times, the replica wormhole continuously disappears. Because the so-called entanglement island is obtained by continuing the cyclic symmetric wormhole {{cite:e522e28c5d6a72f6266fee802632739349f41f92}}, it should also disappear with sufficiently frequent measurements. An interesting question is to monitor the environment and study the effect of such measurements on the black hole. Though the measurement destroys the entanglement between the black hole and the radiation, the state remaining in the black hole is expected to be maximally scrambled. We leave this exploration as future work.
| d | 481af291ee3f7a000726eb64a17ce7a2 |
There is other evidence that the X-rays we observe along our line of sight are not representative of the 4{{formula:88fef051-f19f-4cd2-bd94-3a1a6ba95f38}} emission:
Optical reverberation mapping {{cite:a75a920ed04ca088baf1864f94552970b68edece}} has shown that the Balmer line H{{formula:75498794-b740-4da6-a513-d211cccd74f6}} and the
high-ionization line HeII{{formula:4fc88234-a9c5-47e3-8fa5-51e3ff0a923a}} 4686 of Mrk 335 follow the amplitude of the optical continuum
variability (the Balmer line emission is driven by the EUV beyond the Lyman limit, and the
ionization potential of HeII is 54 eV) and not the higher amplitude seen in X-rays. While {{formula:c80cd887-901a-4ce6-989e-6f76429fe89d}} = 1.53 in optical
continuum variability, and {{formula:00614192-32c9-4c4e-a2b7-e5a61ba1c93e}} = 1.55 in HeII (Grier et al. 2012; where {{formula:485278e8-8f17-400c-95e0-cefcb16712d2}} is the
ratio of maximum to minimum flux in each light curve), we measured during the same time interval {{formula:ee4b6409-40e4-475f-8c23-25531f6e0283}} = 5.44 in X-rays.
| d | 1ca103ce552f96790eabdbf8f624e344 |
for its expansion at {{formula:f6ac8166-97cc-43e3-8a9e-17182a7189df}} . For simplicity we assume that {{formula:36a50ce3-c1b4-4ea4-b9b7-4b090abfe5de}} . According to {{cite:c08be08c64ecd4bf88b17d2030caaa21f0c39c43}}, the {{formula:a83a4387-9ff2-4349-9f88-8f0c265eec74}} -th Fourier coefficient of {{formula:deb0cfd3-f190-4af1-acc0-8adfb0c53d40}} is given by
{{formula:3f86e31f-9aba-4980-9cdc-0543941d493f}}
| r | 4eacb7cdbc8a7c5b69819c37ec97e0c6 |
Despite the fact that the natural evolution of the platforms considered here (and also of those not considered) will impact the viability of the proposed network, in both its constituents and design, useful observations can be made from its current form. Probably the most striking of these is the balance between high entanglement distribution rates and the fidelity of the distributed states with respect to an ideally maximally-entangled state. Although the objective is to reach high values for both, there might be certain regimes where the rate could be augmented to the detriment of the fidelity – the distinction between configurations A and B discussed in Section  is an example of such a regime. This strategy, if in keeping with the distributed state's fidelity threshold of {{formula:5aebf1bf-8f53-4e82-aa69-89621d837733}} , can benefit from the technique of entanglement distillation {{cite:bde3c3ddec950e5393c0058c93bd1a4ef3c18f9c}} inside the QRs (which have the potential to implement such techniques {{cite:79693f6275dbe1d1ca5bcc61183f8d1249871eea}}), or even at the end-nodes, to establish high-quality entanglement at a reasonable rate.
| d | 39896a45f5227ef2083bbfca5cc0d0ba |
where {{formula:8e9b191a-dd87-4c46-bceb-35257db16a26}} is the sound (or thermal) velocity of protons.
Since the ratio of {{formula:90fc3ad2-ec97-4348-ad81-a2db8deb8101}} is usually less than unity,
the plasma is strongly magnetized ({{formula:9960f9aa-3b99-437e-b81e-3b76ecdf8e04}} ) at the end of the magnetic field generation.
Therefore, the generated magnetic field can be further amplified by some magneto-hydrodynamical dynamo
after the generation of the magnetic field {{cite:01ddf15667b6270e08af8f626cdb873d2a13ba14}}, {{cite:4fec017a48f904ccaeb1e4850cee99490bb19642}}.
It should be noted that the magnetic field generation is sometimes limited by a finite time smaller than {{formula:fae4253e-4919-40f6-ab5a-d8f798f4a86c}} .
Even in such a case, the plasma can be magnetized if the condition of {{formula:0c2c7860-8e36-486c-b03f-9b12ca6832d6}} is satisfied.
| d | 77dfa201789023d97b60cfe866d621d1 |
While things may still take a turn in favor of the dark matter hypothesis, the current situation is serious enough to consider the possibility that the popular paradigm might need to be amended in some way, if not replaced altogether. The present paper seeks such a modification. Unlike other ideas such as Milgrom's MOND
{{cite:7f8e5c68cfd8e31bf9b3c14a84e0056dc531b599}}, Mannheim's Conformal Gravity {{cite:c790c59b7a2a8fd5ad77bfe562fdf4c47042d731}}, {{cite:db35810c1fee770744a5cbd64b5ffa504f27b74f}}, {{cite:996a02bee419988b5a79a666d7827e7e26a104cd}},
Moffat's MOG {{cite:c52a0aa51fa2d61ecd1a1c8f1ddec8a9868e907a}} or {{formula:55f354f3-7f89-42bf-8c1c-859e0767e819}} theories and scalar-tensor gravity {{cite:3f0cd0b5ffb80e496c679c5283813f4d4bd00c07}}, the present approach is the minimalist one, adhering to Occam's razor. It suggests replacing dark matter with effects within standard GR.
| i | 323f6f63fe85d7a023bdfabce4a92551 |
The historic detection of gravitational waves (GWs) {{cite:5c89e105590e1e5c3691b967bc4642c0ce8ca282}} paved the way for precision GW astrophysics and black hole (BH) spectroscopy to develop at an unprecedented rate. The increase in sensitivity of ground-based interferometers, as well as the arrival of next generation space-borne detectors, will further improve our understanding of the gravitational interaction {{cite:4da18a4d30bcf3a5699ad8d44ceade44bb85af8c}}, {{cite:f89faf9837222cd9c1c2b5c7fe24a7a85c8a3881}}, {{cite:d1d6eb42314bd99408962749af1f0970efbeacd6}}. BH binaries produce GWs through the coherent motion of the sources and form ideal testbeds of the gravitational field in extreme conditions. Together with new radio and deep infrared interferometry {{cite:40f219add1aa95bcb3c23f2da4529f593f0e3a68}}, which provide direct images of the dark compact objects lurking at galactic centers, we are now able to perform novel tests of issues such as the BH no-hair theorem {{cite:a050b0c9f358a5e333e6a6c122c055d0bc00a368}}, dark matter and environmental effects {{cite:62945c36cce67efa861188d46fbf32ad5b625547}}, {{cite:4f5028d96866a3036780fab67581518cb0818210}}, {{cite:2e2e83b61ef1866009eb14df8f0c700d8466550c}}, {{cite:d428daf039ac859c26b91e282c7a99ab7c51aae2}}, {{cite:db077dd93c626b8207baeb6deaaaba4c1b80b96c}}, {{cite:85a7a2ad6a0664ffcb17088ef2d07f26761aacea}}, potential quantum effects at the horizon scale {{cite:031f2019a929acbbd6d062f977eb4bccb21fd63b}}, {{cite:e3edbe1b7f7157595f84de8c9887e0860672b004}} as well as the existence of horizonless exotic compact objects (ECOs) {{cite:074025e83d988c36b75e07842e842436cf92e8b5}}, {{cite:031f2019a929acbbd6d062f977eb4bccb21fd63b}}, {{cite:4a954658cd1df5f340e46380a0ee52d455842752}}, {{cite:972f923851be7b24712cc2af548482151422d184}}.
| i | 63d06e12898fdc07e0ac3d9d3cc89590 |
where {{formula:895a73fa-24e8-4fff-a05e-3d351f7d75e9}} and {{formula:7e072ea7-6de9-44a2-b560-45ca5f5646ed}} .
For {{formula:6a9f8a74-6129-439e-8797-64062dc9f619}} , we present {{formula:8ec32d6b-eac0-4f3d-aed0-ce27b78c0b12}} in Table REF ,
where {{formula:53f3e613-cb79-4bea-b370-1813b945993d}} and {{formula:e977170e-cad1-44a5-ac19-624cae2a347a}} decays are both considered,
together with the decay constants
{{formula:c795c718-10d8-42e1-abcd-41f3a85cd7ef}} MeV {{cite:fba382b185a0d19c9910c6e493110c5b7a7419b7}}, {{cite:bacd0f98db6efc9e7d6121d9b4bb248d6afba4c6}}
and {{formula:1dba88ed-f2f6-4ec0-9cb3-003ef6db85f4}} MeV {{cite:5e738b1c623eef173d56beb37b69a6cd2f8ff8e4}}, {{cite:5902c5687a10ffefa5017abae0644539663c2576}}.
In the generalized version of factorization {{cite:401ad0c43eac6eec631564bf93b48cd41447f902}}, {{cite:003e25767917a03b91092c3277c5d46b24a8c969}}, {{formula:2e3e2454-a664-4068-b9cb-2fe23c64bc37}} is taken as
the effective color number, with {{formula:e4617751-a22a-44f8-8bcd-4ac9130101f7}},
so that the non-factorizable QCD corrections can be estimated.
{{table:767bcc2d-27bd-40af-b079-494a6c5c1908}} | r | dd4d81408c7bac4f86cc55a4e02ea245 |
{{formula:957b96b4-6bc9-43ab-bf6f-bea21b4cc6b8}} violation plays an important role in tests of the Standard Model (SM) and in extractions of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. Nonleptonic decays of {{formula:26d3caab-2e8e-40af-ac6b-eafa16acd8bb}} mesons provide us with opportunities for exploring {{formula:a5f81475-6040-4ae6-85f8-5606ab31b081}} violation. In the SM, {{formula:b1f6e871-bbcd-4654-bdf6-38fd94daeb9e}} violation depends on the weak complex phase in the CKM matrix {{cite:ccaf765cb8d466ec6183af8c333bdb96c7188a03}}, {{cite:2dffcd40121f0815493019424ac019187a51e182}}. The main uncertainties in {{formula:da25c15b-126b-4c02-9af6-67da24a4d90c}} violation come from the insufficient understanding of the strong interaction associated with nonperturbative QCD. In the past few years, a large amount of experimental data on {{formula:3913ea72-b4eb-4fbc-ac96-a9a3b6a5e6b6}} violation in two-body decays of the {{formula:aec929f1-06fc-4688-bf98-bf432c6ec643}} meson has been collected by the {{formula:5d920465-a33e-400e-94a2-12cc1db063a7}} factories BABAR and Belle and by the LHC experiments. Large {{formula:b0899bcb-1b94-4532-bb0c-ea2d2a27ca7f}} violations have been found by the LHCb Collaboration in the three-body decay channels {{formula:191557b5-45f6-46f3-bdef-8018f9686584}} and {{formula:b9098fc2-4a6f-4ba1-80dd-c85a6ce8c915}} {{cite:e9b97355a3f127f623d3104e627c6cc9715a6414}}, {{cite:9278ecdb62ea089c05cdd92d99bd2b6602242470}}. Hence, exploring the theoretical mechanism for {{formula:f7dad964-7b40-4e8e-858f-91a2c5001b00}} violation in two- and three-body decays of the {{formula:d40a846c-db7d-4fd2-a219-edf4bedbba86}} meson is of considerable interest.
| i | 233a36fab3abd5273b6405616b74fe1c |
As in the previous works ({{cite:3673c46511ab37dcd3b8238ca6ada9a253991363}} and {{cite:31aafa44fcaa6f4148c43bd2212ed269aa82cab2}}), we clearly see in Figure REF that the observed radio luminosity of the low-luminosity AGNs and microquasars can be explained by the magnetic power released by fast reconnection in the core region of these sources. This emission is due to synchrotron radiation from relativistic electrons, which can be accelerated by a first-order Fermi process directly within the magnetic reconnection site in the coronal region around the accretion disk ({{cite:3673c46511ab37dcd3b8238ca6ada9a253991363}}, {{cite:cd7215609bf4132dc82c3dcb36e6cf098974dbc1}}, {{cite:2608751f9a235d45cf6a48951249a1050dd6a85a}}, {{cite:1631c96f624099be44923db77533df10949c8e8c}}). The corresponding {{formula:55f7f470-97e1-4c0d-b2b6-7746a7fb5e6e}} -ray emission from these sources, which is produced by the interaction of the accelerated relativistic electrons and protons with the surrounding photon and density fields (through inverse Compton scattering, and/or pp inelastic collisions and photon-meson decay), can in principle also be associated with the same emission zone in the surroundings of the core of these sources. However, direct evidence for this association, though found for the microquasars and the Crab, is not found for most of the low-luminosity AGNs. This is explained by the fact that the {{formula:5e2d3b7a-12fb-418b-bbe2-1e6c8c19be3d}} emission (contrary to the radio synchrotron emission) depends not only on the local magnetic fields, but also on the photon and density fields in the surroundings of the source/accretion disk, as stressed above, and these factors can provoke the loss of correlation with our nuclear emission model (induced by magnetic activity around the accretion disk).
| d | dc6805be706ba69648f54c767cba45c8 |
By Silvestre's strong maximum principle (see {{cite:714f23c9632937d5cd4a4b67f97af794303bfd51}}), we get {{formula:d5a92529-e704-43cd-8a01-c66c6d1c4eac}} in {{formula:16eee1b2-9f84-4134-912f-2e89969e7a10}} . Therefore,
{{formula:1dfc6b96-a3b1-439c-80fb-75311efe83fc}}
| r | 5cd7d3ad3a9eb7bef817eb1f2aa96545 |
where we use the superscript {{formula:aa407890-3668-46a8-a3b0-b3ae6617c4a1}} to denote the weights associated with the {{formula:61549cb5-b1b3-4abc-bc36-cd9ac1005f0d}}th layer, {{formula:7486e06a-baeb-4932-9f4e-0ca2371b1c7b}} represents the layernorm {{cite:e8ed2203389ea747595dd734a018f50cb0b1f40c}}.
| m | 8a10d7770ee786bfb5e568f1942c80d7 |
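For concreteness, here is a minimal NumPy sketch of the layer normalization operator referenced above; the residual structure indicated in the trailing comments is a generic pre-LN transformer pattern and an assumption, not necessarily the exact architecture of the model in question.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Layer normalization over the last (feature) dimension: normalize each
    token's features to zero mean and unit variance, then rescale and shift
    with the learned parameters gamma and beta."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# A typical pre-LN residual sub-layer (sketch; `attention` and `feed_forward`
# are placeholders, not the exact layers of the model above):
#   h   = x + attention(layer_norm(x, gamma1, beta1))
#   out = h + feed_forward(layer_norm(h, gamma2, beta2))
```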
Bayesian methods have been recognized as a powerful paradigm for quantum state tomography {{cite:93ac2b0ae64438e0dcb45ec3674f42138b666bb0}}: they deal with uncertainty in a meaningful and informative way and are the most accurate approach with respect to the expected error (operational divergence), even with finite samples. Several studies have been conducted: for example, the papers {{cite:6e8d8c47204b6c808fa23bb6d4fa6852e86a222b}}, {{cite:a6750b920200006c846c6857f9ad35d0ee8372bb}} performed numerical comparisons between Bayesian estimation and other methods on simulated data; algorithms for computing Bayesian estimators have been discussed in {{cite:a0fe07b1f4b98e04540ee38746d258168006b109}}, {{cite:75d8866012e965f314a663e3fbbb966c0d2a989d}}, {{cite:5109717f44a19459f6aa4dcc5390fe432488915a}}, {{cite:13484a5554282e9ca1696381ebcf4cb761ce1466}}, {{cite:3d148aabe0b93edebd38a3adb68272d17123bfcd}}.
| i | 2f3380612b7ccd553edeb23f83343474 |
A major limitation of the linear transition core MDP framework is the assumption of known feature maps {{formula:48aa089c-213d-4fec-b124-5f9d81f6ef15}} and {{formula:5aa42ea4-7c0a-4b01-800e-ede10d82e521}}. This allows studying the usefulness of meta learning for rapid learning in a newly encountered task. In empirically successful meta deep RL works, however, the feature embeddings are not given but instead learned. Thus, the effectiveness of meta learning could lie in learning a good initialisation/bias or in learning a set of reusable features. For the case of model agnostic meta learning {{cite:66176a24e5e4f71f5162f413748c1b8fbd35b19b}}, the empirical study {{cite:6690e5e9cb571ecd5f2cbe9654f5471680e2e593}} finds that feature reuse is the dominant factor in the examined few-shot classification and RL tasks. To fully understand the usefulness of meta learning in general MDPs, it is thus necessary to also take feature learning into account.
| d | d4ebcf7a4348112e95ab37c301195dc5 |
The slow-roll correction can also make the masses of the intermediate states time dependent, and thereby generates signals with scale-dependent frequencies and scale-dependent magnitudes {{cite:0c64657b26bc8854e58e31d94401be0f8056d952}}, {{cite:f02786a904a23c78007352e03a2671aaaaf22c3b}}. The phases in such scenarios can in principle be calculated as well. As a side remark, we note that the scale dependence of the frequency induced by the slow-roll corrections persists in the deeply squeezed limit, while the phase considered in this work reduces to a constant in the same limit. Therefore, the scale-dependent frequencies can be disentangled from the {{formula:c4dd051c-3f93-4618-925a-c84da77b057a}} -dependent phases studied in this work, by measuring the signals in the deeply squeezed limit {{formula:62fa189e-728d-40c6-b2ee-c1b53f383c63}} .
| d | fe5418ffbd20b0d1fa96738b9d470ee1 |
The approximate error (REF ) converges with the same rate {{formula:45df78a2-445f-49c9-bf39-309b830741d3}} p {{formula:e4912593-a33e-4005-a533-e2bfe2b6e6ff}} as the error measured in the norm $\| K^{1/2} (u - u_n) \|_{0,{{formula:38f138c3-b1d6-4abf-a455-c1fa8bc3fe7e}}}$. As stabilizations for the primal and mixed formulations, we use those defined in (REF ) and (REF ), respectively.
We fix shifted and scaled monomials {{cite:1a958d03326ec5614ef6323976bfadfb5b511efb}} as a polynomial basis in the definition of internal degrees of freedom (REF ) for the primal formulation.
As for the choice of polynomial bases for internal moments (REF ) and (REF ) of the mixed formulation, we use those described in {{cite:5cfb191ac3626ab7f06e4441be14e7258689b053}}.
| r | e9e7815e886a6344f594b87f5f953f7e |
In order to highlight the potential of our method, it is necessary to compare our results with those obtained in previous works. Traditionally, the angular diameter distances are derived from the SZ effect of galaxy clusters {{cite:0838c330699944710c2a8f0b774cf3c4511608ad}}, {{cite:b5a59ba7de316d045e67457a4b41326272d9b931}}, BAOs {{cite:01dfdef2335c97478474a3102578ebb9b1fe1f3d}}, GRBs {{cite:28ce176f4926864054f4d400cb288d98cd85ab79}}, and SGLs {{cite:940f57d4e3d6d0920960ec272df90c23ab40b984}}, {{cite:dcd20dafe7848f9630761668547172ed7b3461db}}. One can combine these with the luminosity distances obtained from SN Ia observations to test the validity of the CDDR. Our results are consistent with the findings of these previous works, which confirm the validity of the CDDR in the early Universe. However, the angular diameter distances inferred from the SZ effect and BAOs are model dependent, SGLs require the assumption of a flat Universe, and GRBs require additional external calibrators at low redshifts. We remark here that, without any such assumptions, the angular diameter distances estimated from compact structures in radio quasars provide a new possibility to test this fundamental relation in the early Universe in a model-independent way. More importantly, in the work of {{cite:9f78cabbd1701234cc41562af3e7e64de62c9db4}}, gravitational wave (GW) observations based on the third-generation GW detector Einstein Telescope were simulated, together with radio quasars from VLBI, to test the CDDR. Their results showed that the CDDR parameters are {{formula:fd09fab7-e5a5-4eb8-bfb3-7be5d8e77384}} , {{formula:3ddcdc96-5d0f-43de-b2b8-e74911024961}} , and {{formula:05f493bb-7ae9-4340-b61a-fb9394adaba2}} (at the 68.3% confidence level) for the first, second, and third parameterized forms used in our work. However, other methods and technologies will be needed until the GW events observed by third-generation detectors become numerous enough to yield statistically robust results.
{{figure:f6adf824-3500-42b7-a0ad-2159e5c4074c}}{{figure:518c81dd-7a47-4194-a2b2-765f1e1a8572}}{{figure:faefde33-eb03-4053-b9a8-405f9e021789}} | r | f0f5c586b812fd58275019ed6b3c1d3f |
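As a rough illustration of the kind of consistency test described above, the sketch below computes the distance-duality ratio from paired luminosity and angular diameter distances. The three one-parameter η(z) forms are common choices from the CDDR literature and are assumptions here, not necessarily the exact parameterizations adopted in the work quoted above.

```python
import numpy as np

def eta_observed(z, d_lum, d_ang):
    """Distance-duality ratio eta(z) = D_L / ((1+z)**2 * D_A).

    The CDDR holds exactly when eta(z) == 1.  d_lum: luminosity distances
    (e.g. from SNe Ia); d_ang: angular diameter distances (e.g. from compact
    radio-quasar angular sizes); both in the same units and at the same z.
    """
    return d_lum / ((1.0 + z) ** 2 * d_ang)

# Commonly used one-parameter deviations (assumed forms):
def eta_p1(z, eta0):   # eta(z) = 1 + eta0 * z
    return 1.0 + eta0 * z

def eta_p2(z, eta0):   # eta(z) = 1 + eta0 * z / (1 + z)
    return 1.0 + eta0 * z / (1.0 + z)

def eta_p3(z, eta0):   # eta(z) = 1 + eta0 * ln(1 + z)
    return 1.0 + eta0 * np.log(1.0 + z)
```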
The understanding that nonlocality can arise at different scales leads to an even more challenging scenario: the possibility for nonlocal interactions to occur and interact across dissimilar scales. It is immediate to see that this latter scenario involves the simultaneous presence of nonlocal and multiscale elasticity concepts. This generalized elasticity problem involving nonlocal effects at different scales will be referred to in the rest of this manuscript as multiscale nonlocal elasticity. The current literature on nonlocal elasticity has primarily been concerned with what might be called single-scale nonlocality, and hence it has not specifically identified this multiscale nonlocal phenomenon. However, we emphasize that multiscale nonlocality should be expected in most real-world applications characterized by material and geometric heterogeneities. The latter conclusion follows not only from the observation of previously studied systems {{cite:7d85deeaa70ca2b63b59dd32c0389fba9df42577}}, {{cite:008eab7b7927bb0d9006d75fef469e27d9e280b4}}, {{cite:81e3a845cb49b14167db0c1a9a63c5c168f5c924}}, but is also a rather natural consequence of the design approach at the core of next-generation architectured materials.
| i | 7ccb4fc3f5d99c4bfff53c9c1662ebf4 |
We performed extensive experiments on the proposed algorithm. We compared its performance with classical techniques such as NLM {{cite:07a8ff3dfdc2f8f4a88f03ca3fc968becb7b3dfa}}, BM3D {{cite:075b1dfc2675c2b56996483cee829c8ae4722633}}, K-SVD {{cite:bc4a11c06c2f1b9d3b0342f040688d8b4e1ab020}}, and CoFiB {{cite:7985dcc2300fbd335eeba3ae31c0ea6689b95a2f}}. We also compared with NN-based techniques such as the FFDNet {{cite:2b44efeb1db58be861a7786ceebb845f55fc1db1}} and the DnCNN {{cite:5d83f20f98c9222ec2ff0d1d529cef584e3dddf5}} algorithms. We give details of our experimental settings in the following section.
| r | f9af770da1d3bd3db95ac60f0d514873 |
Finally, in Table REF we show the resulting Information Transfer Rate (ITR) for the MFF and the EMF. The ITR measures the efficiency of the system and is expressed in bits/trial. It is computed using the following formulas {{cite:bf2cdfb00c4a9b86311edc4eb36e7258db726c50}}:
{{formula:607c05e5-b27e-416d-864b-f95b6f1cb184}}
{{formula:61ebc0f4-d574-48fc-aec3-ffe368631af1}}
{{formula:7ef100ff-5357-4eab-a579-9bb1b75be289}}
| r | 65a763935fc8e18f95eb3066b2e75a39 |
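Since the ITR formulas above appear only as placeholders, the following is a hedged sketch using the standard Wolpaw definition of bits per trial; it is an assumed form that may differ in detail from the exact expressions cited, and `n_classes`, `accuracy`, and `trial_seconds` are illustrative parameter names.

```python
import math

def itr_bits_per_trial(n_classes: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per trial:
        B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    valid for accuracy P at or above chance level 1/N."""
    N, P = n_classes, accuracy
    if P <= 0.0:
        return 0.0
    bits = math.log2(N) + P * math.log2(P)
    if P < 1.0:
        bits += (1.0 - P) * math.log2((1.0 - P) / (N - 1))
    return bits

def itr_bits_per_minute(n_classes, accuracy, trial_seconds):
    """Convert the per-trial rate into bits per minute."""
    return itr_bits_per_trial(n_classes, accuracy) * 60.0 / trial_seconds

# Example: 4 classes, 85% accuracy, 3 s per trial -> ~1.21 bits/trial.
print(itr_bits_per_trial(4, 0.85), itr_bits_per_minute(4, 0.85, 3.0))
```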
Regarding saliency, each existing method mainly focuses on one of the following tasks –
i.e., eye-fixation prediction, image-based salient object detection and objectness
estimation. Among them, image-based salient object
detection {{cite:d07c575e078378dd52974af876399b46d21c441e}}, {{cite:b5cd0aefd5e8d873df25202169c7cf71bd4642d0}}, {{cite:0c413568baffb01e2998f380b5d3a1a62149e002}}, {{cite:6326e31ef22df5990c57b67b527a4306b2aa8a4a}}, {{cite:661d6ebf9d3512ef173ba52ccf65d696e1f75a49}}
is an important stream, which can benefit several applications including detection
{{cite:a53243fe0b1048bcfbb3a58aba2c97fd5627192a}}, classification {{cite:54f2ef4c1399d647368c0ac3860e7acdc75ac132}}, retrieval
{{cite:80f5dc8c7ce6383c06fca7f5ec1f141fd15f9e6c}}, and object co-segmentation {{cite:aba106390e09fb4e13ca4386b7b12eee322588da}}, for
optimizing and saving computation. The goal is to detect and segment out important
regions from natural images.
| i | d5334314e29d726f8630dd520a839be9 |
About Eq. (REF ), we refer the reader to Section-4.3.1 of Ref. {{cite:94229619c89fb0b192093b27ab36abe453be06f6}} for further details.
| r | 409e3cd74449027e8369dd89be46cffb |
Deepwalk exploits local information obtained from truncated random walks to learn latent representations of vertices in a network by treating walks as the equivalent of sentences.
LINE optimizes a novel objective function that involves both the local and global network structures, and employs an edge-sampling algorithm to address the limitation of the stochastic gradient descent and improves both the effectiveness and the efficiency of the inference.
TransE models relations by interpreting them as translations operating on the low-dimensional embeddings of the entities.
TransD simplifies TransR {{cite:99e1ba501d2e5a7950479615f477a12ca23a2715}} by decomposing the projection matrix into a product of two vectors. TransR builds entity and relation embeddings in separate entity space and relation spaces, which is different from TransE.
TransH makes a good trade-off between model efficiency and capacity to preserve different kinds of mapping properties of relations, by modeling a relation as a hyperplane together with a translation operation on it.
HolE employs correlation as the compositional operator to capture interactions and learns compositional vector space representations of entire knowledge graphs.
ComplEx only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors, and introduces complex-valued embeddings to better model asymmetric relations.
| m | 2fcccecee49641a4afd687bf1000293f |
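As a small illustration of the translation idea behind TransE, the sketch below scores a triple by the negative distance ||h + r − t|| and shows the margin ranking loss typically used to train such models; the embeddings and hyperparameters are toy values, not the settings of any of the models compared above.

```python
import numpy as np

rng = np.random.default_rng(0)

def transe_score(h, r, t, norm=1):
    """TransE plausibility score: negative L1/L2 distance ||h + r - t||.
    Higher (less negative) means the triple (head, relation, tail) is more
    plausible; h, r, t are embedding vectors of equal dimension."""
    return -np.linalg.norm(h + r - t, ord=norm)

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    """Push a corrupted (negative) triple below the true one by `margin`."""
    return max(0.0, margin - pos_score + neg_score)

# Toy usage with random 50-dimensional embeddings (illustrative only).
dim = 50
h, r, t = (rng.normal(size=dim) for _ in range(3))
t_corrupt = rng.normal(size=dim)
loss = margin_ranking_loss(transe_score(h, r, t),
                           transe_score(h, r, t_corrupt))
```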
While our results and those of {{cite:00f636671b556f96f135e05ca2ea9039caff3864}} represent a step toward understanding the plausibility of such an algorithm, many questions remain, including how performance is affected in networks without weight sharing {{cite:39fe71b3c2201e2f81abaf3b0093b2d142551596}} and how to incorporate continuous learning without separate forward and backward passes {{cite:6f4c086b9abbcf4ea123d1e9a32a0941aefe4979}}, {{cite:43baa7e4b1774924a9ca24007d77f87092601772}}. Further experimental and computational studies are needed to develop clearer analogies between biological and artificial neural network learning.
| d | 5edb61ebfee39474482ca21d0f772353 |
By matching the AD model to the H{{formula:0ecbd616-7f9f-4f95-a702-94017c78b0dc}} broad emission line profile, we determine the inclination, the inner and outer radii, and additional ring radii. The inner radius is defined by matching the far red and blue wings of the observed line profile to the red wing of the AD model line shape. The red wing is most sensitive to the gravitational redshift effect, and therefore the extension of the red side of the line wing determines how close to the BH the gas emitting the optical Balmer lines is. We note that we determine the inner radius of an AD of the H{{formula:64f20514-92dd-48a5-b786-a9340ab1b017}} broad line from the fit of the model, and that the inner radius in gravitational units is usually {{formula:658de954-65e6-461e-a9ec-9e855167af1a}} 100, much larger than the inner radius derived from X-ray lines (like Fe K{{formula:376d3a7c-dada-47ee-a2c4-f8ba993fa3a1}} ), which is usually close to the innermost stable circular orbit, 1 to 6 gravitational radii {{cite:ff40b87857fa54cfd2b9b765b6d09bee9c1834b7}}, {{cite:7bc59635d84fb06cd30700c7987a275026b006d2}}, {{cite:d4fd9f4d226089d18e643ca8e770a9b7235d5e8e}}, {{cite:6f6e43450286b8d13fe0a43d1cb4e3a84d551f87}}, {{cite:cad3f79261d2d4cab91ec3a1f70a6e0793f64916}}, {{cite:435f7a8875efec4e3473dcf8fad67ab0e35f54ab}}, {{cite:014e24516d6cab785971ae1340db6f16ce0600d7}}, {{cite:a25518789cfc093dd28fe8a70693c97b9a47886f}}, {{cite:929b212adecd7d4c7b270f8417f703fd26659c8b}}, {{cite:b06e0334fb94d0ddfb7ff512c713a65ebaef2ab4}}. The part of the disk that contributes to the thermal continuum emission is the one of highest temperature, also close to the innermost stable circular orbit {{cite:4bc226476861215727a4d1c385ca26846f35d19c}}, {{cite:2703b98112481aa6394dbe048d33eba554d72bf6}}. This region is expected to be much closer to the black hole than the AD region responsible for the optical line emission. From the fit of the model to the observed broad line we determine the inner and outer radii of the AD and of each ring, as well as the inclination and emissivity law, which are common parameters of the complete model.
| m | 286f68ecd4693242680765381d3a8d75 |
Initially demonstrated in {{cite:5fa2643c91325b42a9776a62316029fd66d5acec}}, {{cite:2b9b2c9265a80912cf7d91eff5541e8305705dd8}}, quantum accelerometers based on cold atom interferometry generate high precision measurements with extremely low drift over a long time period {{cite:00675a461216b8e7d4a1bffdd49b38f7e3867729}}, {{cite:ada4f5b5ccc3ed78bbab29fe7cb2aee6b4e9c637}}, {{cite:08913cd74bc7e6848aba2104219c7dd34c8a5dd2}}. Laboratory experiments demonstrate the accuracy of cold atom accelerometers to be fifty times greater than that of their classical counterparts {{cite:d4be9909e6e786623472008f70613a24798382d7}}, {{cite:fdfb32ad99975006269995aaa8ced235d12437fd}}. This makes cold atom sensors potentially well-suited to inertial navigation systems, where the bias and drift of accelerometers and gyroscopes has a direct impact on the quality of positioning and attitude.
| i | 2c10ee7815e543f94e0dac5cada0ed8b |
Deep learning models are both useful and challenging. Their utility is reflected in their increasing contributions to sophisticated tasks such as natural language processing {{cite:a24fb5da1ac024866602700d5f888531fddd3cb7}}, {{cite:42dca16e641a2cb1106658e26824f83c2b51ead9}}, computer vision {{cite:6e2424f0c5dff14288e4b412f21a2b45a0acd2aa}}, {{cite:bb9e6c64ef1ca0d5bd0418aaf9924a48ee194001}} and scientific discovery {{cite:b232e40477cf8c8db0c72ee82285e6e30bd08969}}, {{cite:2b3f568bf71b38d02561cf27f754b9cf31e1aad2}}, {{cite:83cb42d00736504cba3fed57644c10a4bae083a9}}, {{cite:3a94a59bbf587ae3e3a3d9c345d1a6b94e924fd7}}. Their challenging nature can be attributed to their inherent complexity. State-of-the-art deep models typically contain millions to billions of parameters and, hence, appear as black boxes to human users. The opacity of black-box models makes it difficult to: anticipate how models will perform at deployment {{cite:28a9a3d4ebc91b9482cbc3e9618cbadb03b385ea}}; reliably distil knowledge from the models {{cite:40ba6b2b470daf16897731f746bc9db72c8b0212}}; and earn the trust of stakeholders in high-stakes domains {{cite:09cba14e266857a998f247d2f6127d3208ac7a78}}, {{cite:d8fdc858345b11bc05e0bb516308e38289e0e4e4}}, {{cite:77e8ac308697d8d56cad1bd0659393453b3f49fa}}. With the aim of increasing the transparency of black-box models, the field of explainable AI (XAI) has developed {{cite:de575bcd6ee845e95c397145e431dea265b917ae}}, {{cite:63bfcf588430a4218adcc005ae3053d833ebef98}}, {{cite:6df5980f34ff579c6cb4edb44946f00b7982fe63}}, {{cite:ca9d82cf710805e9fd50de49509175c40f8c98ba}}. We can broadly divide XAI methods into two categories:
(1) Methods that restrict the model's architecture to enable explanations. Examples include attention models that motivate their predictions by highlighting features they pay attention to {{cite:d35d0060a847ecf77ce958c13626bef5adc4d43c}} and prototype-based models that motivate their predictions by highlighting relevant examples from their training set {{cite:302e7768d2d88991f6c2e703aa09d62b2821fa03}}.
(2) Post-hoc methods that can be used in a plug-in fashion to provide explanations for a pre-trained model. Examples include feature importance methods (also known as feature attribution or saliency methods) that highlight features the model is sensitive to {{cite:14e8149f851120906a4b9aeffaaf57db169f6016}}, {{cite:9554c7f13b234a2c653d6486d2acb9df83ed21bc}}, {{cite:7a0ebf61b495c176f6bb8ede19a81e718ff6ab45}}, {{cite:80e308081bc22109e38e20f5b127c2e091740561}}, {{cite:2e8cc1fa779eaa47916a4a66ece06e264379e7fa}}, {{cite:fa2d015842dc3e9f1b09b91f846075fe9945803b}}, {{cite:ac72b87114de02b0913c86523597bce63e468e93}}; example importance methods that identify influential training examples {{cite:c67c5ab93e8fa6d77c9d58d1956f71348c26dec3}}, {{cite:b923dbf05c14c48592d87d8f631a8201fa9c75da}}, {{cite:1fb576d69062b6816793978c5a9456fa4f9ffc1a}}; and hybrid methods combining the two previous approaches {{cite:2d83d27fcde9558b01e88b284cb1a49585a75004}}. In this work, we focus on a different type of explanation method known as concept-based explanations. Let us now summarize the relevant literature to contextualize our own contribution.
| i | dc7c5e10e761991c839cfb37bd6a6b41 |
In this section, we present the details of the numerical
method employed to solve the relativistic Boltzmann
equation in the Anderson-Witting relaxation time
approximation {{cite:05da43eacc1132b8b0595fa0f598784fb9462d39}}, {{cite:4a1c8178495afaaf61766afd5c562844a0ed31f0}}.
The method is inspired by the finite difference Lattice
Boltzmann (LB) algorithm
{{cite:36c32a070958a9fe80e9008845dae110e8b665ca}}, {{cite:181dd706c3eebd66b186ab8a37283226a5c43e32}}, {{cite:e2a5009a992d39909cf2ddb74fbca3e5cac0209c}}, {{cite:9cd2a34561e17dcf2fd551e5569a4451006cc235}}, {{cite:60bad80e131102d2008a0c3d3d804872aa4f6a71}}, {{cite:d73deb42de51bc6526670169876724df1ee6a537}}.
| m | a2910b0053568bb39759bc85f630efff |
In recent years, the Hermitian higher-order topological insulator (HOTI){{cite:1c931ca5066e424e0287b287175483517c6e0664}}, {{cite:32952aac01c7b4793426e13076949b0c357afa99}}, {{cite:30ee01e0452f1f1fc147ed0df1abcdbf1827c061}}, {{cite:9c2e973906a0207c3812bd636d13971b78692e31}}, {{cite:cf2dc6a646a73d1d6f6ede21e247b211cf69489b}}, {{cite:a748b6460631f6de2d385577e82813d1f72ae5fa}}, {{cite:e96144cec62d5635d86db0b2ee0123b66618d46a}}, {{cite:4aee29d204f1e63e29b1e25a088e01a090089cb6}}, {{cite:56c499f92f6fc35d55ea3b79fa7285129bb31ceb}}, {{cite:d7b38e8bf5512735fbd03f0e49523ed68fe4a95f}}, {{cite:3c024e780fb8e39df35d1b998c423ecfec087701}}, {{cite:8cf57ad88782e1947ba42fd71cbc38ed5de8cb86}}, {{cite:25036e1446fc030de83b296bb575f2ff0ad95e09}} has been one of the main focuses in the study of topological states. Both the nested-Wilson-loop method{{cite:32952aac01c7b4793426e13076949b0c357afa99}}, {{cite:30ee01e0452f1f1fc147ed0df1abcdbf1827c061}}, {{cite:9c2e973906a0207c3812bd636d13971b78692e31}} and the real-space topological invariant{{cite:56c499f92f6fc35d55ea3b79fa7285129bb31ceb}}, {{cite:d7b38e8bf5512735fbd03f0e49523ed68fe4a95f}}, {{cite:3c024e780fb8e39df35d1b998c423ecfec087701}}, {{cite:8cf57ad88782e1947ba42fd71cbc38ed5de8cb86}}, {{cite:25036e1446fc030de83b296bb575f2ff0ad95e09}} have been reported to characterize such states. In particular, in two-dimensional systems, the quadrupole moment {{formula:0e3f543c-81a8-4b2e-a189-98fe5a474a4c}} {{cite:56c499f92f6fc35d55ea3b79fa7285129bb31ceb}}, {{cite:d7b38e8bf5512735fbd03f0e49523ed68fe4a95f}}, {{cite:3c024e780fb8e39df35d1b998c423ecfec087701}}, {{cite:8cf57ad88782e1947ba42fd71cbc38ed5de8cb86}}, {{cite:25036e1446fc030de83b296bb575f2ff0ad95e09}} was proposed as the real-space topological invariant of the HOTI. Initially, the quantization of this topological invariant was thought to be protected by the point group of the HOTI{{cite:9c2e973906a0207c3812bd636d13971b78692e31}}, which is then regarded as a topological crystalline insulator{{cite:8d2f11ac4fd22111d5001027c1e90b9144e48ebb}}. However, Li et al. {{cite:3c024e780fb8e39df35d1b998c423ecfec087701}} showed that the quantized {{formula:a2636ad7-2afb-4f00-a2ee-f86edbbdb4ad}} could also be protected by chiral symmetry or particle-hole symmetry in Hermitian systems. Such results still hold even with disorder. They also carefully studied the disorder-induced phase transition in the corresponding systems and predicted the existence of a higher-order topological Anderson insulator (HOTAI).
| i | ca36021b5fb4ac7c8c4900785493d1e6 |
To solve a SDE problem, we apply the standard multilevel Monte Carlo method and regard the Taylor-Itô scheme as the discretization subroutine {{cite:b17d77b9e238eb083d44bed2956bfc49d2a951c3}}, {{cite:747f7e92cc50147e88d9a43ee7632b4aee262f34}}.
| m | ee73bea696eb237fc5221db5c3b18f00 |
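Below is a minimal sketch of the standard multilevel Monte Carlo estimator described above, using the simplest (Euler–Maruyama, i.e. order-0.5 Taylor–Itô) discretization as the subroutine. The drift, diffusion, and per-level sample counts are illustrative assumptions; a production MLMC implementation would choose the number of samples per level adaptively from the estimated variances.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama_pair(a, b, x0, T, n_fine):
    """One coupled fine/coarse Euler-Maruyama path (lowest-order Taylor-Ito
    scheme); the coarse path reuses the fine Brownian increments, summed
    pairwise, so both paths share the same driving noise."""
    dt = T / n_fine
    dW = rng.normal(scale=np.sqrt(dt), size=n_fine)
    xf, xc = x0, x0
    for i in range(n_fine):
        xf += a(xf) * dt + b(xf) * dW[i]
    for i in range(0, n_fine, 2):          # coarse step = 2 * dt
        xc += a(xc) * 2.0 * dt + b(xc) * (dW[i] + dW[i + 1])
    return xf, xc

def mlmc_estimate(a, b, f, x0, T, levels=5, n_samples=2000, n0=2):
    """Estimate E[f(X_T)] as E[P_0] + sum_l E[P_l - P_{l-1}], with
    n_l = n0 * 2**l time steps at level l and a fixed sample budget per level."""
    estimate = 0.0
    for level in range(levels):
        n_fine = n0 * 2 ** level
        diffs = []
        for _ in range(n_samples):
            xf, xc = euler_maruyama_pair(a, b, x0, T, n_fine)
            diffs.append(f(xf) if level == 0 else f(xf) - f(xc))
        estimate += np.mean(diffs)
    return estimate

# Toy usage: geometric Brownian motion, exact E[X_1] = exp(0.05) ~ 1.051.
est = mlmc_estimate(lambda x: 0.05 * x, lambda x: 0.2 * x, lambda x: x,
                    x0=1.0, T=1.0)
```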
The anomalous Hall effect in a ferromagnetic conductor is considered to come from three different contributions, and the AHC can be expressed as {{formula:42a3e5d6-d287-44ef-bda7-cd5969decbba}} , where {{formula:79b09a6a-6ad4-473e-b7f6-4dc9efadc3c7}} is the intrinsic contribution, and {{formula:754e08d8-2c9a-46a9-91c3-bc6f993b567b}} and {{formula:7ea32422-8db7-4710-be99-9b49b962d5aa}} are the parts arising from the skew scattering and side jump mechanisms, respectively {{cite:5e5dbd32a7a6353f44c4dbea2a5f539db408eac7}}. Both the {{formula:6adf834a-1d15-496b-b798-ee604dc1b4cf}} and {{formula:d10549d4-46ce-487b-81fb-926f6164f7db}} terms are consequences of extrinsic scattering from disorder or impurities, while the {{formula:243f6f7a-2ace-4fdb-a285-bc5aab0150b2}} term depends only on the band structure. Figures REF (a) and (b) show the anomalous parts of the Hall conductivity {{formula:14e91fc2-7a5e-4326-b4ae-a64ad0e53848}} and {{formula:39dffea5-7ef3-48df-be3e-5ae36a9264ae}} , respectively, which are obtained by {{formula:b8c79cdd-21b0-4a09-b656-91d65e881bec}} . To extract the intrinsic contribution of the AHC, we employ the so-called Tian-Ye-Jin (TYJ) scaling {{cite:ef2969f3f318cb4dad0badaccf724604deeb9ce1}}, {{cite:111e483e753ba2ec599e99bca37b169d7fcdd5b4}}: {{formula:4e5759e6-a3c9-4f88-b45a-0fad1dc64a1e}} , where {{formula:e32be699-3dbc-4572-9509-05ddc4f4dadd}} is a system-dependent constant, and {{formula:8c72effa-752e-4b79-9784-e52970dc5dfe}} is the intrinsic AHC contribution. Figures REF (c) and (d) show the AHC as a function of {{formula:1cf6a526-6e7b-438e-9167-a42d5377628d}} for the two directions. The intrinsic contribution of the AHC can be obtained from the intercept of the fitting line, which is 170 {{formula:3b69b8ff-4cf3-4152-9d4d-0ed508c1e8d7}} cm{{formula:75dfdbf2-3495-42a2-95df-3e2ae275e24b}} and 380 {{formula:2ce6b33c-ec71-4a51-8ac1-2fe81d5e2c6c}} cm{{formula:c4187f41-5a05-4c45-b381-2e01ea5b791e}} for {{formula:d2ed6938-4632-4051-8594-baa31ab0c354}} and {{formula:7de9550e-2898-455c-95e5-1cc39405850e}} , respectively. This anisotropic intrinsic AHC again suggests the influence of the magnetic field direction on the electronic structure of LiMn{{formula:d264fcaf-88dd-48c2-b00f-4df339342e70}} Sn{{formula:4e090aba-0dce-403a-ad8b-73cff9230356}} . The value of {{formula:fcd3a28e-d50b-4265-af2f-9a006fd0f18f}} can be converted into 0.88 {{formula:16ee6df7-be1f-4b9d-a3b7-b9c2d52730c5}} , where {{formula:eb8922bd-ad6e-44f3-8ed4-4d80362f7e8a}} is the cross-plane lattice constant. Considering that there are two kagome Mn layers in a unit cell, this gives 0.44 {{formula:0a89390c-5c0b-483c-b12b-6b3de8e8788c}} per Mn layer. This intrinsic Hall conductivity component in LiMn{{formula:51c4582c-c64f-4de9-8e77-46de364c5b4e}} Sn{{formula:d66360e0-2a40-4318-9e14-007d379fd3bc}} is much larger than those found in the {{formula:c7cf46fb-ee83-4ff8-836f-075cccc38d75}} Mn{{formula:fb64bed2-18f6-4c3e-971e-44fc9f043a30}} Sn{{formula:da2a5ef5-b4d5-4e1d-9fcd-2a265eff8a9d}} compounds with ferrimagnetic order, such as 0.27 {{formula:46f9d9aa-8b9f-4e7a-90b9-fc684888f340}} per Mn layer for GdMn{{formula:17638f7d-5ae6-4d02-93d7-0b9a83d96fb6}} Sn{{formula:1bcbee78-cbde-4ec6-bc4b-98cc6291f6f1}} , and 0.14 {{formula:35314329-c428-4251-8ce8-9adc574f1d79}} per Mn layer for TbMn{{formula:cb928055-2937-4bef-99b1-fa4efbc0dfd2}} Sn{{formula:7edf51a1-080c-48d3-a852-7b7a07a810b7}} , as listed in Table REF .
{{table:0e0d0913-79b4-4d80-948f-d166b7567bc5}}{{figure:e8b7d8ad-06e2-4801-a9a8-8054253279d0}} | r | 085cf31b290bbb0c1133966f21823170 |
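A small sketch of the linear-fit step implied by the TYJ scaling analysis above. It assumes the commonly used form σ_xy^A = a·σ_xx² + σ_xy^int, so that the intercept of the straight-line fit gives the intrinsic contribution; the exact variables plotted in the figures may differ.

```python
import numpy as np

def intrinsic_ahc(sigma_xx, sigma_xy_anomalous):
    """Extract the intrinsic AHC from a TYJ-type linear scaling.

    Assumes sigma_xy^A = a * sigma_xx**2 + sigma_xy^int, so the intercept of
    a straight-line fit of sigma_xy^A against sigma_xx**2 (e.g. for data taken
    at different temperatures) is the intrinsic contribution."""
    slope, intercept = np.polyfit(np.asarray(sigma_xx) ** 2,
                                  np.asarray(sigma_xy_anomalous), deg=1)
    return intercept, slope
```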
In this paper, we have shown an extension of the QGT for curved spaces in which the metric may depend on the parameters of the system. The derivation of the QMT was done in two different ways: one geometrically, extending the work of Provost and Vallee {{cite:a134bd1047e4a1ab3b1ca0d6840fa189d134a988}}, and the second via the fidelity-susceptibility approach, which is shown in Appendix A. To get the QGT we had to define a new Berry connection (REF ). This connection presents an extra term solely dependent on the metric of the curved space. This new term and the modification of the inner product are responsible for the fact that the Berry connection transforms not only as a connection but also as a density of weight one. With this modified Berry connection, we computed the Berry curvature. Finally, the QGT is given and, as expected, it contains the QMT (symmetric/real part) and the Berry curvature (antisymmetric/imaginary part). It would be exciting to find out if, using the QGT, it is possible to extract some global information beyond the Chern character associated with the Berry curvature and the information contained in the Pontrjagin characteristic classes {{cite:53e6117be84290b3bb303b3bd0b5c6b878026a10}}. To show how a non-trivial metric dependent on the parameters of the system affects the QMT and the Berry curvature, we provided four examples: three in one dimension and one in two dimensions. One interesting aspect of the one-dimensional examples is that they are isospectral, i.e., they have the same energy spectrum as the harmonic oscillator. Thus, we conclude that the energy eigenvalues are not enough to detect the particular system we are working with, nor the QPTs that might be present. Another interesting point is that the example with a Morse-like potential has some similarities with the Liouville Quantum Theory on the Riemann sphere {{cite:f1c1720e23a1ea645de81954422cc9ff077d6217}}, and it will be interesting to use our procedure to compute the QGT in this case. Also, the generalization of our results to a perturbative form of the QGT in the curved background could be helpful in detecting critical points in the shape of figures of interest {{cite:f337da0d01ee82a14aa33c7c5d4b781d556648bd}}, since the Laplace-Beltrami operator in higher dimensions gives the Schrödinger equation without potential in the curved background. Finally, we want to extend this work both to relativistic cases and to mixed states in order to detect QPTs, e.g., for quantum black holes, perhaps in a similar way to the quantum complexity approach of Susskind {{cite:37ed0e49ea883721e6b09cec359801eb2ae1a7e1}}.
| d | 1798fee7e2b75f0fdf674086b1ac2c34 |
where the sum of true positives and true negatives is divided by the sum of true positives, true negatives, false positives and false negatives. True positives (TP) are positive examples labeled correctly as positives. True negatives (TN) are negative examples labeled correctly as negative. False positives (FP) are negative examples labeled incorrectly as positive. False negatives (FN) are positive examples labeled incorrectly as negative {{cite:58fd3ff44bd28a12491850c575c0d3b38b4dca55}}.
Precision is defined as:
{{formula:c4871959-0c0d-48fa-a5aa-94e6f95f1beb}}
| r | 24e211fabc38cde270fc9c0fdc860e8e |
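The two metrics just defined reduce to simple counting over the confusion-matrix entries; a minimal sketch of the standard definitions, with an illustrative example in the comment:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of correctly labelled examples, over both classes."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp)

# Example: tp, tn, fp, fn = 40, 45, 5, 10 -> accuracy 0.85, precision ~0.889
print(accuracy(40, 45, 5, 10), precision(40, 5))
```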
We compare DOTIN with previous pooling-based methods in Table REF . Here, we further clarify the connections and differences with the similar methods gPool {{cite:ab5487ee34a5c9099140896b34b132f37840a39e}} and SAGPool {{cite:140f68d10db3994e914d6b303f1922bc2b4f9e08}}. Connection: all three methods use attention to calculate node importance and adaptively choose task-irrelevant nodes to be dropped. Differences: the main difference between DOTIN and these methods is that DOTIN can recognize task-irrelevant nodes, and its task-irrelevant sub-graph pooling also lets DOTIN achieve higher accuracy.
Moreover, DOTIN only introduces {{formula:2df42070-bc28-4f5f-a983-638c9cb43546}} extra vectors while the previous two methods (gPool and SAGPool) have to learn extra GNN / GCN layers, incurring additional overhead.
| m | 289a8d0ed14ee4c528ac8ef8252655ed |
Nekrasov's conjecture states that these partition functions give deformations of the Seiberg-Witten prepotentials for four-dimensional (4d) {{formula:cbcd3e65-5d60-4c2d-bbdc-26315592f42d}} supersymmetric gauge theories.
This conjecture is proven in Braverman-Etingof {{cite:5903b3df0f0d1de1bf80ff72a134e99c2411a5be}}, Nekrasov-Okounkov {{cite:9b3e17bc0f59e96fa36b8fb8a4748cc5e2560898}} and Nakajima-Yoshioka {{cite:65196e1b7ea945aef9403234dd01ff88790614f6}} independently.
In {{cite:65196e1b7ea945aef9403234dd01ff88790614f6}}, they study relationships with similar partition functions defined for blow-up {{formula:1876b4aa-010f-466b-8e33-0fadf7790291}} of {{formula:df8491d3-38a3-41a7-849a-336685f15225}} along the origin, and get the blow-up formula.
These are bilinear relations of {{formula:7e348dc9-63d5-4079-869c-a9e3fbb86473}} and {{formula:3bd3e530-f5f8-42a6-8351-bfbeeed726bb}} , which correspond to {{formula:f9936cf3-af35-4084-94ff-068bcd352f4c}} -fixed points on {{formula:4087ff42-9adc-47de-919b-9ba1bbc872eb}} curve of {{formula:ac4edf79-8998-4363-87e8-9bba230bb867}} .
Furthermore these arguments are extended in {{cite:2844181fb1cfdf8c571f8ee5f2a062ed73580e47}} to various cohomology classes other than 1 using the theory of perverse coherent sheaves.
This method also gives functional equations of Nekrasov functions {{cite:d71971588a3f1fbe374d774caea453bf71adf22b}}, {{cite:6df2b0d32fda9c0ddbff01a4eeb9fd7741096e92}}.
| i | f819fca1052b8073473a67707409bfdc |
Regarding the FreeSolv dataset, as the results in Table REF show, on the training set our M_RMSE is worse than the one obtained from GC+BHO; however, on the validation and test sets our method is slightly better than GC+BHO. It should be noted that on the validation set these two results are very similar, as the rejection {{formula:1c259861-f5a4-4df4-bce8-283331c64576}} -value for d.f. (degrees of freedom) of 30 and of 60 is 2.042 and 2.0, respectively, while the {{formula:14d7cbf8-c4ca-4945-95a6-6ea337ae3391}} value we obtained is -2.117, which is close to the boundary of acceptance. It is also noted that, since we do not know the exact size of the RMSE sample group in the reference paper {{cite:b40e902d53a784c9cd70324765893dbe4a33874d}}, we suppose it was in the range of 1{{formula:4fd8cd8c-b6eb-4ee5-b1de-45085fce8d4c}} 30, and thus d.f. of 30 and 60 are both considered in this work.
{{table:cdb4797f-8862-43e0-9acc-729ae1e028a9}} | r | 8525e9ae9f0f63927939b1863ae5974e |
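A hedged sketch of the kind of significance check described above: the two-tailed 5% critical t-values quoted for 30 and 60 degrees of freedom, and a Welch two-sample t-test between two groups of per-run RMSE values. Only SciPy calls are used; the exact test variant adopted in the comparison above is not specified, so this is illustrative.

```python
from scipy import stats

# Two-tailed 5% critical values for the degrees of freedom quoted above.
t_crit_30 = stats.t.ppf(0.975, df=30)   # ~2.042
t_crit_60 = stats.t.ppf(0.975, df=60)   # ~2.000

def compare_rmse_groups(rmse_a, rmse_b):
    """Welch two-sample t-test between two groups of RMSE values.

    |t| below the relevant critical value means the difference between the
    two methods is not statistically significant at the 5% level."""
    t_stat, p_value = stats.ttest_ind(rmse_a, rmse_b, equal_var=False)
    return t_stat, p_value
```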
where {{formula:73e56d49-c6cc-427e-ace0-0d9bae4492f9}} is the batch size, {{formula:1f8cdac0-83b1-4164-9c85-bf967f641045}} when the {{formula:a86cd609-3e94-4a8c-89c9-80629cd5b97d}} graph
(protein) is an enzyme and {{formula:36bcf92e-d019-41b4-a78b-5d27e1688cc1}} otherwise, {{formula:09421191-2af3-45ad-b301-54595975fef7}} is the sigmoid function that maps the output {{formula:1b5a51a6-9ea5-403c-9ce3-5d547603ec87}} of the 3-layer fully connected network {{formula:79f79eb9-0f22-45f0-86d8-e85a562ac66f}} to {{formula:8693880a-9d47-51ed7f45c997}} . Three performance metrics were computed: accuracy (ACC), area under the receiver operating characteristic curve (AUC), and average precision (AP), i.e. the area under the precision-recall curve computed from precision scores. These measures are defined as follows (see the sklearn.metrics module documentation in scikit-learn, or {{cite:34a546cf308e655455d9ab1037bde57b1f7ca5a7}}).
| m | 72b23b53d0e1fecb8c2a4865e39b630b |
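Since the loss and metrics above appear only as placeholders, here is a plain NumPy/scikit-learn sketch of a sigmoid binary cross-entropy over a batch and of ACC/AUC/AP computed with sklearn.metrics; the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, average_precision_score

def binary_cross_entropy(y_true, logits):
    """Mean binary cross-entropy over a batch, with a sigmoid applied to the
    raw network outputs (logits)."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    y = np.asarray(y_true, dtype=float)
    eps = 1e-12                       # numerical safety for log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def evaluate(y_true, logits, threshold=0.5):
    """ACC, AUC and AP as computed by sklearn.metrics."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    acc = accuracy_score(y_true, (p >= threshold).astype(int))
    auc = roc_auc_score(y_true, p)
    ap = average_precision_score(y_true, p)
    return acc, auc, ap
```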
where {{formula:01e9d7d3-f1d5-4ea6-91c7-a39e62b06885}} is the magnitude of the pion three-momentum in the rest-frame of the {{formula:2e0c6be4-afc7-4f0c-b634-ec196433e47f}} .
The lifetime of the {{formula:4085c69c-e6a7-4bc7-b021-9773adc20011}} is chosen as {{formula:6c56f1cf-bf6e-4449-a941-5031e1de8a6a}} ,
the Wilson coefficients are chosen as {{formula:599cac0a-237d-4257-b022-2394cf7d89e1}} and {{formula:4b421439-2e0e-40cd-a837-b10583b4b4d0}} {{cite:49a72b4423fc1651b62617c7ce3f55bc8dd1635c}},
{{formula:85f1a3e3-5b54-4381-89cd-c6e1372b53a7}} GeV{{formula:01152e1c-2beb-4b53-93c7-74644e788200}} and {{formula:1fcbd570-a82e-411f-9828-6cf3e83e1e1e}} {{cite:46129df589ddf4dbc755b79154d93fcab65d17de}}.
The branching fractions are listed in Table REF . In the first five lines
the branching fractions are evaluated using the non-factorizable amplitudes from this work
and the factorizable amplitudes from the literature. The last four lines present the branching
fractions with both kinds of amplitudes taken from the literature. In the last
column we list the values of {{formula:336a76b0-d675-4803-b095-f52c58d08f50}} from all the theoretical methods
mentioned above. We note that most of the calculated {{formula:6a819bff-d821-443a-9062-0a5eb0b75273}} are much larger
or smaller than the experimental value {{formula:7ac99f88-f9c1-4a6a-856e-eca6532af850}} .
However, the value obtained from HQET+LCSR leads to the fraction
{{formula:897d0253-0d2b-4a93-a275-e90b54eb7ce9}}
| r | 34729dc21ec63296e5ba5414a3dbb7b3 |
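For orientation, a sketch of the standard two-body decay kinematics underlying branching-fraction estimates of this kind: the daughter momentum in the parent rest frame and the conversion of a partial width into a branching fraction via the lifetime. The width normalization Γ = p_c|A|²/(8π m_B²) is a conventional assumption here, not necessarily the exact convention used above.

```python
import math

HBAR_GEV_S = 6.582119569e-25      # hbar in GeV*s

def daughter_momentum(m_parent, m1, m2):
    """Magnitude of the daughter three-momentum in the parent rest frame."""
    lam = (m_parent**2 - (m1 + m2)**2) * (m_parent**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * m_parent)

def branching_fraction(amp_gev, m_parent, m1, m2, lifetime_s):
    """BR = Gamma * tau / hbar, with Gamma = p_c |A|^2 / (8 pi m_parent^2).

    amp_gev: decay amplitude |A| in GeV (assumed normalization);
    masses in GeV; lifetime in seconds."""
    p_c = daughter_momentum(m_parent, m1, m2)
    gamma_gev = p_c * amp_gev**2 / (8.0 * math.pi * m_parent**2)
    return gamma_gev * lifetime_s / HBAR_GEV_S
```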