| text | label | id_ |
|---|---|---|
We compare our DeCoNASNet with six lightweight networks (SRCNN {{cite:3f18b38e576c155675821f60c5d296221eee5caf}}, VDSR {{cite:647097b617dd289db1e32f5d1d34b0cc9ed1d78d}}, MemNet {{cite:f8c487ea1a1b7d2c271bdd02531cdf5ae2d90387}}, LapSRN {{cite:414ac2c530cff4142019238c9530818c758e1f4f}}, SelNet {{cite:b00b74e944fb3447cbe0096eb1b05152b9e3977e}}, CARN {{cite:8fd89978b51591e1e064eeb0ae093d7b313009a1}}) and two NAS-based methods (MoreMNAS {{cite:e59e11328fc6309cde1a6e0cf49d5d10c1b4dcdc}}, FALSR {{cite:14d3ec4e4d77ba571c12618bb62768ff471426ed}}). The results are shown in Table REF , where we can see that DeCoNASNet outperforms other hand-crafted lightweight models and existing NAS-based ones while using somewhat more parameters, though still fewer than 2M. The most important advantage of our method is that it finds the optimal structure within 16 hours, which is {{formula:a52e3c8f-d9b6-4976-8c1f-89cc55311332}} faster than other NAS-based methods.
| m | d1f2fdafb02dd57db122be3c5b09808e |
In this work, we predicted M1 radiative decay widths of strange and charmed baryons in SQCS using isospin-broken effective quark masses from EMS as well as the screening of quark charge. We estimated isospin splittings, masses, magnetic moments, and transition moments up to charmed baryons utilizing the inputs listed in Tables REF and REF during the evaluation process. The numerical results are given in Tables REF , REF , REF , REF , REF , REF , REF , REF , REF , REF , REF , REF , and REF . In addition, we compare our results with experimental values and other theoretical models including HB{{formula:b862ea00-ad0d-413a-b311-7cd8dbf283e9}} PT {{cite:68bebaefbc6d7415753b1e8257d0da087b9a60c1}}, {{cite:1616891746630cae456e772a4ed1de7da7749672}}, {{cite:0c7ae004752f2cf534c3a9218688d7d313ecdbee}}, {{cite:609ff5c7c5be47eb0c0bd2d47e329ddd67280fc3}}, {{cite:dda5c87fb4dbcb89a1d4246e8c24e2a7600fa11b}}, {{cite:06dc6835828803448bfeca879ea2815eb8bead37}}, hCQM {{cite:80073b1ada27b7d67d05aff56853157d26408b92}}, {{cite:de7abbc69ed64afd044ae98cb32afbd8bc1e9e68}}, {{cite:f324de72bc13e910d2c3c53f53aa9e19cbfa4f5f}}, {{cite:a9c70478e4cc06350d647a85e3ea0bba65b9ab53}}, {{cite:3735e2ebf0ce609b716cf6ea17f683aa5cf87c6b}}, QCDSR {{cite:66cffa4cdf4ee7e1be7fadcbcbf7a0ec61c16130}}, CI {{cite:c6d2ea1d23040887a085055cb6e63a06feee3c77}}, LQCD {{cite:56a38fe4bf519fed615da8463e9b0cfd9457f788}}, {{cite:5101e390705dada4f34b07bb664094d4e252c6b5}}, {{cite:a8ab0b2fbb691a2026d3378dd0d719ea806a03de}}, pion mean-field approach {{cite:cadf7432912e88368b6c950808793f5765a8e14e}}, heavy quark symmetry (HQS) {{cite:22e94bf8922d98d454f604204cff6f7e2f6d0a5e}}, chiral quark model ({{formula:5e8c9137-ce82-43fc-9c7d-d9a7c3ae154b}} QM) {{cite:c0d091b341dae2a4ef00e2c1df1d4f8413791176}}, {{cite:7a661360cdb1807f07afb8100031edb542ff2de0}}, chiral constituent quark model ({{formula:abac19cb-8333-4e22-9cf0-a11d5685c93e}} CQM) {{cite:a71752bb81523774a7f7a8b5f52c428358fc2359}}, bag model (BM) {{cite:953ac5ff09392660e7dcf65da26b436e60b6d872}}, NRQM {{cite:f9aa2371915e5641cd875587f0e75dc5eaecdf36}}, relativistic three-quark model (RTQM) {{cite:d735bf99567408b08f00b5e828ae51581ff1ceb6}}, covariant baryon chiral perturbation theory (B{{formula:d0b74237-d785-4b36-9016-03a64eaa468a}} PT) {{cite:f91acca5b5dda1e5940c3e415f365bf290ddd9c9}}, {{cite:c721688e3dd35b6716fe6d7eec49e60995528a6d}}, {{cite:493b1743224ad2009b9fff6ab0420ee5873c1bec}}, light cone QCD sum rule (LCQSR) {{cite:d54cc2bec76748be1cc2bad2114ab8500466b800}}, {{cite:577d4cd2e6a97ad7c90f3e020214417508a35f57}}, {{cite:bed621ec402693c9d6013be0afcd162337a95a96}}, {{cite:a9285e13b5b6895567d7dd9763055130cf2c97b2}}, {{cite:3645369ff0f20b74dc919f4c0de015bd639f2133}}, {{cite:0b534134af343c0f6626c15ba7ca6ee444e97a83}}, {{cite:8c10afcb3ab9e8956c5c113c0c842b4100340036}}, {{cite:ea536f15afb8a2c2a17bb0a72921c670645665fb}}, {{cite:7a8d7e2434469144425c1ea627bdf31d92f84248}}, {{cite:6cc58fdab4e5596b536b7d51f4773393c0e785cc}}, hypercentral model (HCM) {{cite:9f7c687ec758f05b0189b2a51ee6114d248d8ced}}, {{cite:14dc1001cdccecd1e1426170e4186568eb35d3f6}}, {{cite:08283fe75b4b2ad27b6f77b7c0088efc545a6e51}}, covariant spectator quark model (CSQM) {{cite:0bfc5c163901e06da69d61d7cabbc1d2bb0141e5}}, {{formula:ded842e7-d22d-4f89-b134-d7d2d8c827e6}} PT {{cite:f630a1b755e60827578adc45d61ee0ff3e89f294}}, chiral quark soliton model ({{formula:cda88e34-4647-4cc5-9e27-8afff7d36523}} QSM) {{cite:73f565d5f8e0ade06a2156e75689439a2f2d0a5d}}, {{cite:d16593e95df945e552b508bd4bc5d660a82ed2f7}}, and constituent quark model (CQM)
{{cite:0172c09d122d820d6b64fb67de04852250425428}}.
| r | c3ea636fffb0cd39699ba22d98a33dad |
Traditional methods mainly rely on hand-crafted features {{cite:7d05353f866aa5b5a7d69dfd7dadf153c256fdcd}}, {{cite:14e6d17afcd45f7620685ea7bb8f131f412ab8aa}}, {{cite:04531f8ade99f0ddfe013e9ff01e3a033c61e073}}, {{cite:d46246f69468940a028ffc1e28f11eda7bcb88ea}}. Lately, deep learning-based methods have made great progress and gradually become the mainstream {{cite:87b3b4bc57dbda03e21744c48a89e465697c79a0}}, {{cite:933a6e8fc1535901db184afd12e9e31f4adcc21b}}, {{cite:19de56455a6c0cec28fe6300834c3b55a63c3512}}, {{cite:3e8db9873641aa8e9c1a28577656fe1050b9e4af}}, {{cite:be50638aa514353230ab025271188b0a4cccf878}}, {{cite:c02174a07c31337af3ee4977df0dfe3228e7ff91}}, {{cite:666c2507209ee65e6c18d96317017f657734fff2}}, {{cite:92998d0d6a663726621e9b8dacc619820921ba89}}, {{cite:5908f0137d3a566fef9849df571b5bf8faa2a75f}}, {{cite:6dafa0665b1bc3cca480265f213d75e9d381cc60}}, {{cite:5e1241184fbab1e5ddc79c649a5649e3b7e869d7}}, {{cite:1d2689b415f0fb9d7793efc189b3ba33c41d301b}}, {{cite:676aaffc093329a4b0256126c2975a1de565319c}}, {{cite:1d1f14784070f36fc9d2dda0bb73d9dfbd371da0}}. Qu et al. {{cite:b5de4c974b1cdfad5ba3955eb269d8ced850d5ad}} first introduced CNNs to infer object saliency from RGB-D data.
Zhu et al. {{cite:92998d0d6a663726621e9b8dacc619820921ba89}} designed a master network to process RGB data, together with a sub-network for depth data, and then incorporated depth features into the master network.
Fu et al. {{cite:be50638aa514353230ab025271188b0a4cccf878}} utilized a Siamese network for simultaneous RGB and depth feature extraction, which discovers the commonality between these two views from a model-based perspective. Zhang et al. {{cite:19de56455a6c0cec28fe6300834c3b55a63c3512}} proposed a probabilistic network via conditional variational auto-encoders to model human annotation uncertainty.
Zhang et al. {{cite:c02174a07c31337af3ee4977df0dfe3228e7ff91}} proposed a complementary interaction fusion framework to locate salient objects with fine edge details.
Liu et al. {{cite:b79f482733b648f7a7d12ea623fc4a80bf22a59f}} introduced a selective self-mutual attention mechanism that can fuse attention learned from both modalities.
Li et al. {{cite:5b739a6378510cd3c558ea371a3e093560e35c8a}} designed a cross-modal weighting network to encourage cross-modal and cross-scale information fusion from low-, middle- and high-level features.
Fan et al. {{cite:6dafa0665b1bc3cca480265f213d75e9d381cc60}} adopted a bifurcated backbone strategy to split multi-level features into student and teacher ones, in order to suppress distractors within low-level layers.
Pang et al. {{cite:933a6e8fc1535901db184afd12e9e31f4adcc21b}} provided a new perspective to utilize depth information, in which the depth and RGB features are combined to generate region-aware dynamic filters to guide the decoding in the RGB stream.
Li et al. {{cite:666c2507209ee65e6c18d96317017f657734fff2}} proposed a cross-modality feature modulation module that enhances feature representations by taking depth features as prior.
Luo et al. {{cite:8ab84b83d0aca0565e6314cdc41978f0491d654d}} utilized graph-based techniques to design a network architecture for RGB-D SOD.
Ji et al. {{cite:1d2689b415f0fb9d7793efc189b3ba33c41d301b}} proposed a novel collaborative learning framework, where multiple supervision signals are employed, yielding a depth-free inference method.
Zhao et al. {{cite:1d1f14784070f36fc9d2dda0bb73d9dfbd371da0}} designed a single stream network to directly take a depth map as the fourth channel of an RGB image, and proposed a depth-enhanced dual attention module.
| m | d31ae1fe25edeb601e0746db4f0a9dea |
In this section, we focus on the transformer, a state-of-the-art model that has been successfully applied to text, image, and audio processing tasks {{cite:ded9d8b20725578411799310c95881e5f2e5f83b}}. The transformer model is well suited for processing channel data since it supports a variable number of input elements (users/layers in our case).
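As a rough illustration (not the exact architecture of this work; all dimensions, names, and the padding-mask mechanism are assumptions), a transformer encoder handles a variable number of users/layers by treating each one as an input element:

```python
import torch
import torch.nn as nn

class ChannelTransformer(nn.Module):
    """Minimal sketch: encode a set of per-user channel-feature vectors."""

    def __init__(self, feat_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x, pad_mask=None):
        # x: (batch, n_users, feat_dim); n_users may differ across batches.
        # Shorter element sets are zero-padded and excluded via pad_mask
        # (boolean, True marks padding positions to be ignored).
        return self.encoder(x, src_key_padding_mask=pad_mask)
```

Because self-attention is permutation-equivariant over the input set, the same weights apply whether a batch contains 2 or 20 users/layers.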
| m | d7bbc79ee1324ff70f139c6f98ea3e71 |
Interestingly, these constraints are of the same order as the ones coming from Gravity Probe B {{cite:39cfeee15c452049746e1774b1c96af583db55c7}}, which is the best satellite experiment so far. On the other hand, we should point out that the best laboratory constraint on deviations from Newton's law still comes from torsion-balance experiments performed on Earth. To give an idea of the precise order of magnitude, we can rely on the results coming from the Eöt-Wash experiment, which gives {{formula:4f7293fb-cc28-4268-b6bf-983b39169fbe}} {{cite:7031cd00d1dd872115b8470dbb8060d80e612431}}.
| d | 1035add44a46afd1142bfec88eb4b32e |
The {{formula:af4349c5-01a0-47d5-963d-768e64a6798e}} CDM model faces, even at the observational level, some (yet unsolved) problems. The most important is the "Hubble tension", which originates from the serious difference between the value of the Hubble constant, {{formula:bd5b2bd2-05ca-4dfa-a075-2596cbfa1502}}, obtained from the CMB measurement {{cite:6ae3c88deca08994fe955d72ebfcc957fdac4579}} and the values obtained directly from observations in the local Universe {{cite:3b195185cf30ec637ebd3c2221bdd5f532a16125}}, {{cite:57dc7846138bba75ce946a0562f6cc4e9a5e5d8a}}, {{cite:8a950ae5c78321faffd6f060c24e7e7cd6d18265}}. The SH0ES determination of {{formula:3aad5022-8e68-4489-83b1-2c43fe0f196f}} gives the value
{{formula:7b622cfe-e04b-4ffd-a2b7-bc553a1a0dc2}} km/s/Mpc {{cite:3b195185cf30ec637ebd3c2221bdd5f532a16125}}. On the other hand, from the early Universe
surveys performed by the Planck satellite one obtains {{formula:a388a0a6-51e2-4b95-8fca-f40c23b321b8}} km/s/Mpc {{cite:c4ba07800d508aff4575da443bb525b76c58aa96}}, a value that
differs by {{formula:f2efedb9-36a9-4280-9709-6fdfa08f28e0}} from the SH0ES result.
| i | 954bda61d89845804e40efeb6653f89d |
By a computable {{formula:5995461d-9baa-4c28-ad13-d76f31e5f964}} -continuous domain {{cite:7c02823120be5a49bef88c215bcbfe283cdfc68f}} we mean a
pair {{formula:0553f010-94f7-4a6e-8a21-7a2343173479}} where {{formula:8264d4fb-24c8-4605-af99-a4ae08f2f1cd}} is an {{formula:8d1cd09e-065f-40a5-959f-58e6d37b4004}} -continuous domain and
{{formula:09559bfd-a9e6-4c27-b95e-88d15ba263d1}} is a numbering of a domain base in {{formula:df568ff8-aa96-4380-abea-71c40acecfa5}} modulo which the
approximation relation {{formula:c68c3c38-bf2f-405a-b147-f7fdcce5c627}} is c.e. Any computable
{{formula:8c457721-258f-463b-9b33-e02e5f237d6f}} -continuous domain {{formula:5ae69b1b-9344-4d8e-a523-628ff5fb7934}} has the induced effective base {{formula:dc3f25d2-880b-4b17-bef3-c1f3c6964a06}} where {{formula:9679e61f-e0d2-4532-a459-e6dd20a9234b}} . Most of the popular {{formula:1740c08b-bfed-482b-9584-33b97651f2cd}} -continuous domains are
computable.
| d | 9aa70ba7d5b332d3865d93454bbaa189 |
Heinrich {{cite:7e18aa2ad510fe3985888226d3805b603c471582}}, {{cite:b0475a4fb696888134055cebfe61d77b60aae284}} introduced multilevel Monte Carlo for applications to parametric integration. Motivated by applications in computational finance, Kebaier {{cite:546445486e874ef45c76419ce69ab0f385e727b7}} introduced a two-level control variate technique in Monte Carlo (MC) sampling for the
weak approximation of stochastic differential equations (SDEs).
Giles {{cite:e38a972833429d4b6c7f7a137a1f8c29a948ef58}} extended this approach to the now-famous multilevel Monte Carlo (MLMC) using a full
hierarchy of discretizations with geometrically decreasing grid sizes.
By optimally selecting the number of samples on each level and sampling more from coarse, inexpensive levels,
and less from fine, expensive levels, the MLMC
method decreases the computational cost.
This cost reduction with respect to single-level MC usually goes beyond a constant factor, unlike the cost reduction of
standard control variate techniques. MLMC can reduce the computational complexity to compute a
solution with an error tolerance {{formula:a7a06f33-d599-4b46-ad25-d35d23544aee}} , cf. Theorem REF .
Central limit results are useful for estimating and controlling the statistical error in the MLMC in terms of its variance.
These results, cf. {{cite:a616b5d5d5f39c5551306c03d1c4332ad7eec071}}, {{cite:86a0c6c2523f90125d0f2e0c4ee3304db5ec8f55}} and the generalization in {{cite:6e483cb6a1c099b06cc62878e17a899045dd4f47}}, are applications of the Lindeberg central limit theorem because the MLMC samples are not identically distributed across levels.
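To make the sample-allocation idea concrete, here is a minimal sketch of a generic MLMC estimator with the classic optimal allocation N_l proportional to sqrt(V_l / C_l); the `sampler` interface and the pilot-run heuristic are illustrative assumptions, not the estimators analyzed in the cited works:

```python
import numpy as np

def mlmc(sampler, costs, eps, n_pilot=1000):
    """sampler(l, n): n coupled samples of the level-l correction
    Y_l = P_l - P_{l-1} (with Y_0 = P_0). costs[l]: cost per sample."""
    L = len(costs) - 1
    # Pilot run to estimate the per-level variances V_l.
    Y = [np.asarray(sampler(l, n_pilot)) for l in range(L + 1)]
    V = np.array([y.var() for y in Y])
    C = np.asarray(costs, dtype=float)
    # Optimal allocation: N_l ~ sqrt(V_l / C_l), scaled so the total
    # variance is at most eps^2 / 2 (Giles' classic formula).
    N = np.ceil(2.0 / eps**2 * np.sqrt(V / C) * np.sum(np.sqrt(V * C))).astype(int)
    est = 0.0
    for l in range(L + 1):
        if N[l] > n_pilot:  # top up; otherwise keep the (extra) pilot samples
            Y[l] = np.concatenate([Y[l], sampler(l, N[l] - n_pilot)])
        est += Y[l].mean()  # telescoping sum of level corrections
    return est
```

The telescoping sum E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}] is why most samples can be drawn on the cheap coarse levels, where the corrections have large variance, and few on the expensive fine levels.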
| i | d991179536450b26d1613691290122f0 |
for {{formula:32be5c7f-3baf-4714-b840-461bca95b5b5}} and {{formula:40a18e30-fd2f-4aad-967c-00636562e76b}}
The integral in (REF ) defines, for every {{formula:353746f0-6333-4250-9dad-e28801cc9cd7}} , a contraction on {{formula:bc6d084b-43c2-4787-94a6-169d8162bb98}} , for every {{formula:26f0c75d-4541-439c-9f06-58a072f827bf}} . By defining, for each {{formula:1aeb33c2-7dab-4176-a770-6b2fe62c20d8}} , {{formula:77fb84b9-6cbe-4dfc-9dde-31ee31164ecb}} by (REF ), the family {{formula:4da35e45-e300-40ad-9e15-6555611b650c}} is a symmetric diffusion semigroup in Stein's sense in {{formula:27aaf389-06c6-4fef-8372-b3285a9b3c3e}} (see {{cite:a4b57aa6a9731d4623df74a11750931f24b2b1d7}}).
| i | 9f0102d352447c11a31f4c9f4d59ad0d |
Finally, we point out that geometric thermodynamics is related to several fascinating topics, and has the potential to clarify the geometric properties of these topics. The classical correspondence of shortcuts to adiabaticity for stochastic processes {{cite:fc96d14506b584cfe2059bdbe728d8d85ae836f9}}, {{cite:3d42dd5ad5737f589355a9b0d9ecff034d7fd1b9}}, {{cite:f4fbb2df4cc543198da72f59f49dc36b02342d60}}, {{cite:1f2ba026ad13d6dc6696415c1240ac43517d96f7}}, {{cite:4d8961783172432112fe809eecee576894c37076}} is related to the geometry of the probability distribution. Remarkably, the link between shortcuts and information geometry has been proposed in Ref. {{cite:2ac323f501dc81d709214acfc7991b878c4c4ff1}}. A connection between geometric thermodynamics and a geometrical interpretation of another excess entropy production rate proposed in Ref. {{cite:f3448f920de6ca048c47ebc1d8bbc6114692773b}} in terms of the Berry phase {{cite:c6e67cdff7e203ada9b68474712f7353bbebb8c5}}, which is related to the geometry of the cyclic path, would also be interesting. The cyclic path in information geometry and optimal transport theory was discussed for the optimal heat engine {{cite:6607dcdb9590f6d01cc92e0292219f01b0a37d29}}, {{cite:6b1372bcfd0548a0468e1fb43c0f3a0decef6c00}}, {{cite:6e867780512eea7b071ff906c68a1bf1b88a81f2}}, {{cite:3f3ca19a66f4082100fc6464dbf3f99e9b0934fd}}, {{cite:8b5bb80638ec4c69b9a4a60827ed8eee9f8bcb6e}} and the geometric pump {{cite:c013c38ad8f927295de4357409ac7e0df00b34ec}}. A geometric interpretation of the restricted path may also be interesting in the context of optimal limited control {{cite:f0b42f33119d9b59b969f16bfb67ee7d9f06dd3b}}, {{cite:ec14eceeeae24e84e0dcd3c6e488182cc90fc9f9}}, {{cite:f02a084bd88e1b138d380eeb1cf374a414afaeef}}. The dual coordinate systems provide the duality in stochastic thermodynamics {{cite:64af26af5d6db0b2b4ee8541702c950bccf2e3b7}}, {{cite:9a47115d612d4473f8ea83ea7c1e07f73fbf3fd7}}, {{cite:4b006a515d5fbf860c16617ffb12752ce7263db8}}, {{cite:10c08389b57d70951703c5070f51843d79143abc}}, {{cite:f87ef5c5c14e760dd148058dbb9087f464900615}}, {{cite:cad00057a03c436307277157252186bc2b3d5c05}}, {{cite:1f3da51de0dcfa311394eb601ee4b40a1f01da77}}, which is related to variational calculus such as the maximum caliber principle {{cite:fb3d200f4da6655cc2d287b382ac901f157cc8e2}}, {{cite:705fce09fe6206b93330dfa12b89b2b1a96c979a}}, {{cite:cad00057a03c436307277157252186bc2b3d5c05}}, {{cite:1f3da51de0dcfa311394eb601ee4b40a1f01da77}} and the Schrödinger bridge problem {{cite:74d784379b901aede076108a555cfe9ecdcc345c}}, {{cite:51b0b92068f948b949713f5d736420e339af25aa}}. A connection to information thermodynamics {{cite:e005141430088412ad97fead4f3d3b1a368164a9}} is also interesting.
For example, the information-thermodynamic quantities called the partial entropy production and the transfer entropy {{cite:a55823d05f4a3b079485de5e08a6a391e7468388}}, {{cite:b3e3cc8bb446184ba87ef6987db182e6193d0ddd}}, {{cite:6733934f7f2c6b801e4a767abada10beccf290fe}}, {{cite:a1c60a12d7c9946e2e79881df119b825b27cc5e5}}, {{cite:32dfc539a09b71e7ff13d640d100fdced74c1c46}} can be treated information-geometrically via the projection theorem {{cite:b97807a6f35fd2bbc836325a769e75b9c8fc9d90}}, {{cite:711f5ecc5f9952c4f3579a3952d5f83215e4f4d7}}, and the optimal transport for a subsystem is related to the problem of finite bit erasure {{cite:32d1d8c9a91436c9c967c07e763696eb8fb4e1b7}}, {{cite:9e4f3e23230bdacf9e7d4b2fc8db2e0588c91a03}}, {{cite:88ca06576a7577f4bde0a68e28ac8ca18c1ae938}}, {{cite:b20357fed73c0cd0a9a0d60df56de5e52d484c7f}} and to the problem of the minimum partial entropy production in a subsystem {{cite:6b1372bcfd0548a0468e1fb43c0f3a0decef6c00}}, {{cite:2b37894d19aa1f826eebcfb2d20798efeff368a7}}. Applications to evolutionary processes {{cite:c06adcbcc4bbd18590c2627edcc741ddb7067692}}, {{cite:11b59cbb42a1953db0559d3f9d3d1a16e9615aab}}, {{cite:7ef0df36b84e04e39dfe34e3290bdbf7a0671aff}}, {{cite:c10dc0bacf08e7a7bf31896dfacabae3f8609471}} are also interesting because information geometry provides a geometric interpretation of the Price equation {{cite:4fa83ef65d832b05a4e3d54337d7bd738aff1586}}, {{cite:cade92e60108347cc1fcde3951c816b40d7acdb7}}.
As a generalization of the gradient-flow expression, an approach based on the general equation for non-equilibrium reversible-irreversible coupling (GENERIC) {{cite:fcde87272627473dc75e5dec6b086c77908325c5}}, {{cite:c1fb0a81390062c0459c9305071dbd2da941ed6f}} might be promising {{cite:567dfcdbc1f1788a55ad37912ef685cca6acb0b7}}, {{cite:5c5cf09d2a4097de2cc39065215ab370ecb51812}}. The experimental application of geometric thermodynamics to biological dynamics is also of interest {{cite:17a7bb153484dbd27212cc496df9afe2d969c2ac}}, {{cite:f88a371f408300d9acb7fe06827323e15d6d5944}}, {{cite:8fade7ed08b54fffd16217fefcf882012df95e1d}}, and a quantitative discussion of the design principles of complex biological systems from the viewpoint of geometric thermodynamics could be a significant topic in the near future.
| d | 3983f8752a9ba45dd315e23defbf7f58 |
To evaluate our method, we train and test our model on our D2CRealSR with scaling factor x8 and on an existing real-world SR dataset, RealSR, with scaling factors x4, x3, and x2. In addition, we validate on the DRealSR testing set to assess cross-dataset performance. We compare our model with other state-of-the-art SISR methods, including DRCN {{cite:406788a4e9e39ea1963d08068671937097e8acc2}}, SRResNet {{cite:b09f518326b6ca7c1fa2cd7128b2d94eed7c3fc1}}, EDSR {{cite:7b41a481a413836a821eb5123275105a0eb29dd5}}, RCAN {{cite:aa812ef8e86ee02ed21ca46770c61ff2c82bd54e}}, ESRGAN {{cite:84fd3e932ab0439061d9e3f48affc196e5ea0498}}, LP-KPN {{cite:b1f2871e98f9c972f8b27b0578e0863e001d1ae1}} and CDC {{cite:76526da30ee1032a5b6b767072738c1d250c4689}}. The SISR results are evaluated on the Y channel in YCbCr space using PSNR and SSIM. Among these SR methods, EDSR and RCAN are classic SISR methods, and DRCN is an open-source ensemble-based method. In addition, LP-KPN and CDC are designed to solve the real-world SR problem and perform strongly on real-world SR benchmarks.
| m | f8813e1eed764a15d3e4ecf185968e4f |
In Evolutionary Game Theory, cooperation is often modelled as a strategy in a Public Goods Game (PGG) {{cite:aef7a6939afbaad5813ccfc272ed344ffd0dd634}}. This game makes it possible to analyse the main dilemma of cooperation: the tension between self-interest and the social optimum. In the PGG, individuals interact in a multi-player game with two strategies: cooperation or defection. All cooperators contribute a given amount to a common pool of resources. The total contribution is multiplied by a factor {{formula:af275fc8-0799-4f01-b524-781baf8dd481}} (the multiplicative factor represents the synergistic effect of the collaboration) and then divided equally among all individuals (including defectors).
If all agents contribute, everyone obtains more than the individual contribution. If no one contributes, no one gets a better payoff. Nevertheless, if a single individual believes that the peers will cooperate and contribute, this individual has a strong incentive to defect, keeping his/her initial endowment while receiving the benefits of the collective pool. If all agents follow this rationale, no one will contribute and the whole population ends up in the so-called "tragedy of the commons" {{cite:8b251d4e08d87b246918e1b8898fbd92c7d93ca1}}, {{cite:665f42d58c8fce65f880422a987adb0f7d9afd89}}.
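The payoff structure just described can be made concrete in a few lines; the sketch below (with an assumed unit endowment and illustrative names) shows exactly where the free-riding incentive comes from:

```python
import numpy as np

def pgg_payoffs(strategies, r, endowment=1.0):
    """One-shot Public Goods Game payoffs.

    strategies: boolean array, True = cooperate (contribute the endowment).
    r: multiplication factor applied to the common pool.
    """
    s = np.asarray(strategies, dtype=float)
    n = s.size
    pool = r * endowment * s.sum()        # synergistic common pool
    share = pool / n                      # equal split, defectors included
    return endowment * (1.0 - s) + share  # cooperators gave up the endowment

# With n = 4 and r = 3: full cooperation pays 3.0 to everyone, but a lone
# defector earns 1.0 + (3 * 3) / 4 = 3.25 > 3.0, the incentive to free-ride.
print(pgg_payoffs([True, True, True, True], r=3))
print(pgg_payoffs([False, True, True, True], r=3))
```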
| i | 54891e751476d9597e81151b3ce35d50 |
UA properties can also be established for two-hidden-layer neural networks. The difference from the one-hidden-layer case lies in the distribution of the nodes across the layers. According to {{cite:5ff7a2455e559e4a90a33152903a692938da6913}}, page 517, little is gained from a theoretical perspective by including an additional layer.
| r | edf2a5c6201533ad772128f9d7e6807a |
NEUT has a long, rich history, originally developed in the 1980s as a tool to study atmospheric neutrinos and nucleon decay in the Kamiokande experiment {{cite:6db7c39ec15da8108147ad79251cfe78ec0ecb40}}, and some of the original FORTRAN77 code is still in use. NEUT continues to be predominantly developed and maintained by members of the Kamiokande series of experiments (Super-Kamiokande, T2K, Hyper-Kamiokande), and many source files contain comments and messages from the numerous physicists who have contributed to the simulation over the past 35 years—including those working on the Nobel prize-winning Super-Kamiokande (SK) analysis {{cite:0bc209d0e173b59713282f3128a556ecbd1c899f}}. Recent development has targeted the improvements most needed for precise and robust analyses of SK and T2K neutrino oscillation data and neutrino cross-section measurements. Because of the in-house nature of NEUT development and analysis usage, it is not yet open source, due in part to recent work on cross-experiment analyses. We do not yet have the resources to migrate to an open-source model, but access to the code and usage instructions is available upon request.
| i | fec98e1caf33b12990fd8be2cca44408 |
Although perhaps similar at first glance to the other two regimes, the subexponential regime is more sophisticated and requires a much finer analysis. In the subexponential regime, {{formula:31e16fe8-4976-4a13-8b37-5cf18c86ef12}} , we prove that the SVP formulation of {{cite:3fbb789da073f37624050f26ed126f04618ec3e4}} with cost
{{formula:3514527d-184c-410d-908a-0874321c7f70}}
| r | 9941d78c16f21e5528aa0712e4bc1292 |
In Fig. REF , we visualize the standard deviations of generated frames over 100 samples for Improved-VRNN {{cite:44da6b052b58e4760cd5d6e3e9363225e1704215}} and our Conditional model. While the uncertainty is mostly uniform over the image for Improved-VRNN, our model can pinpoint it to the foreground regions.
| r | f3e9426985d11e7980e858edaca26c7f |
The loss function {{formula:50ba7e51-0975-4872-969b-58bed2898b9d}} assigns a penalty when the model does not predict the outcome {{formula:16773e09-2471-44a6-abf6-e8a056d876aa}} well for example {{formula:bafce85e-4c72-4d70-8d97-10de9e4a9f64}}, while the regularization term {{formula:4781b270-84d1-4c31-9fa2-4683986a6d8d}} penalizes models with high complexity. We refer to {{cite:270512e1159f4005a30b9f6444686de6920047cb}} for a
classical overview of empirical risk minimization.
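As a concrete, deliberately simple instance of this template, the sketch below minimizes a squared loss plus an L2 penalty by gradient descent; the specific loss, regularizer, and hyperparameters are illustrative assumptions, not the general framework discussed above:

```python
import numpy as np

def erm_ridge(X, y, lam=0.1, lr=0.01, steps=1000):
    """Regularized empirical risk minimization:
    minimize (1/2n) * ||X w - y||^2 + (lam/2) * ||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        residual = X @ w - y
        # gradient of (1/2n) sum of squared losses, plus lam * w from the penalty
        grad = X.T @ residual / n + lam * w
        w -= lr * grad
    return w
```

Larger `lam` shrinks the weights toward zero (lower complexity), trading a higher training loss for better generalization.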
| m | e5f4f4a8545ec85659f22411afbe5939 |
While slightly beyond the scope of this paper, it is worth mentioning that LSMR (or LSQR) can be used as the inner solver for the Alternating Direction Method of Multipliers (ADMM) algorithm, which has become popular for MRI reconstruction {{cite:07c6b343f8a699474ae5444c098e2920c190e33c}}, {{cite:3e50c7554b27d3968b9849328e58652d739ab9be}}, {{cite:b3005dfdd232cc00e91f8ad69ca102f9707914ee}}. An example of how to formulate this with LSQR can be found on the website associated with Boyd et al. {{cite:f729c300b09833ce49212e89fbdfb824df51c0bd}}.
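A hedged sketch of this idea (not the formulation from the cited references): ADMM for a regularized least-squares problem can call SciPy's LSQR as the inner solver for its x-update; the splitting, the `prox_g` interface, and the parameter choices below are assumptions:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def admm_lsqr(A, b, prox_g, rho=1.0, iters=50):
    """ADMM for: minimize 0.5*||A x - b||^2 + g(z)  s.t.  x = z.
    The x-update solves [A; sqrt(rho) I] x = [b; sqrt(rho)(z - u)] with LSQR.
    prox_g: proximal operator of g (e.g., soft-thresholding for an l1 prior)."""
    m, n = A.shape
    sr = np.sqrt(rho)
    stacked = LinearOperator(
        (m + n, n),
        matvec=lambda v: np.concatenate([A @ v, sr * v]),
        rmatvec=lambda w: A.T @ w[:m] + sr * w[m:],  # use A.conj().T for complex data
        dtype=float,
    )
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        rhs = np.concatenate([b, sr * (z - u)])
        x = lsqr(stacked, rhs)[0]   # matrix-free inner least-squares solve
        z = prox_g(x + u)           # proximal step on the regularizer
        u += x - z                  # dual (running residual) update
    return x
```

The matrix-free `LinearOperator` is the point: for MRI, `A` is typically a composition of coil sensitivities and Fourier transforms that is never formed explicitly.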
| d | d2e3fc8c2565f23e8e34ad41ad1563b0 |
From Eq. (REF ), we find that the contribution of the first term, due
to the direct {{formula:a0ab5f46-d532-48fe-aeed-50970d1d4252}}, is comparable with that of the
second term, due to the {{formula:fc88fb12-791e-4f4c-b38e-526b6f0f541f}} mixing. Within 1{{formula:3541736c-0540-4476-a6b0-73831244c861}} uncertainties,
our theoretical value of the branching fraction is
{{formula:c766e5f5-0074-4069-bb86-d0b0855b8d05}} , which agrees with the values given in
PDG {{cite:dcad4195101bc79a21d8993e716616d08b68685e}} and by the recent dispersive
analysis {{cite:fa617ab47495019163ac1c3e86bf840cbd3092b2}}.
| d | f4ae5141bf183ef6a98dffd4b2665bac |
We now discuss the results of the systematic scans. In Fig. REF ,
we plot {{formula:c5fd4964-c747-4e36-94bf-4e3d09b53a52}} as a function of {{formula:2fb44e37-40fd-4407-970a-91f54294c75e}} . Gray points are consistent
with REWSB and LSP neutralino. Orange points satisfy the mass bounds including
{{formula:78a5f3c4-32e3-41df-91bc-74c8423ab17a}} and the constraints from rare {{formula:83b0449b-4861-48bd-8754-106db1bca6b9}} meson decays.
Blue points form a subset of orange points and satisfy {{formula:5fa0661a-8288-4d03-942e-1720102fb0f3}} ,
while red points form a subset of orange points and satisfy {{formula:ca8e9620-8443-4597-a629-57fe327cbe8e}} .
Two horizontal blue and red lines represent {{formula:feba040f-9d45-40c1-a4b3-f583d094fdc6}}
and {{formula:e2c0c749-691c-4606-ac5a-1336861f8936}} , respectively.
The first vertical line shows the upper bound on {{formula:3525a96e-b8d7-49d5-9f8c-881be988611d}} for red points ({{formula:c069400c-1d48-44e3-97a5-09e32ef0c5b3}} TeV), while the second vertical line
shows the upper bound on {{formula:1e403ab4-880c-4e2d-848a-821d53ab8d25}} for blue points ({{formula:66e7624c-b0c8-410b-a3a9-f01d66cd8c04}} TeV).
From the upper bounds on gaugino masses {{formula:0a7fb7b3-b4d3-4edb-810e-939c15f78ba7}} given by two vertical lines, we obtain that the upper bounds
on gluino masses are 8 TeV and 15 TeV respectively for the red and blue points.
Therefore, we clearly show that SUSY GUTs with {{formula:f763e931-94f6-40b1-8b19-77b79657fe47}} GeV
in the gravity-mediated SUSY breaking scenario {{cite:45da9d4953dc98e87074cae82432674aeab3614a}}, {{cite:ee623e4f9de51474e074ea2f737ebcf7c9b1eb44}}, {{cite:89820d000703b9d7fee4e7807aef3394ce029566}}, i.e., the red points, can be probed
by future 100 TeV pp colliders such as the {{formula:2dc447c2-0733-47d7-a1fb-34b138d98061}} and SppC.
Moreover, the blue points and orange points with {{formula:2abc7d4b-b35a-48d0-a47e-4252e0a71b54}}
can be explored by the Hyper-Kamiokande experiment.
In the latter part of the paper, we examine the impact of these bounds on the fundamental parameters
of the mSUGRA/CMSSM and on the sparticle spectrum.
| r | 8fe77482163d2c690828ac124782d15c |
1. Regret from deviating from the expected reward: Note that regret compares the expected optimal gain {{formula:ef036344-bc8c-4af6-9d04-0c4a5469e554}} with the observed rewards {{formula:7d429000-f385-41b2-88bc-ccbf879e7fa6}}. Since {{formula:b2715bac-3223-4824-ba6a-8324bf46f924}} is a random variable between {{formula:56c63617-3835-4f71-849d-46243ad73cea}}, the agents suffer regret if the observed rewards are lower than the mean. Hence, we use Hoeffding's inequality {{cite:0709b88a080a9c1ccb86a2516a9c32e61cebd4ba}} to bound the regret generated by the randomness of the observed rewards. This gives regret bounded by {{formula:27b313ec-c6a0-48bf-8e83-f25849cba82b}}.
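For reference, the version of Hoeffding's inequality invoked here (with notation assumed, since the text's own symbols sit inside the formula placeholders) reads:

```latex
% For an average \bar{X}_n of n i.i.d. rewards X_1,\dots,X_n \in [0,1]
% with mean \mu:
\[
  \Pr\!\left(\bar{X}_n - \mu \le -\epsilon\right)
  \;\le\; \exp\!\left(-2 n \epsilon^{2}\right).
\]
% Choosing \epsilon = \sqrt{\log T / n} makes this probability at most
% T^{-2}, which is the standard route by which square-root-logarithmic
% terms enter regret bounds driven by reward randomness.
```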
| r | 54476b8a6509098c9893b35b50127a85 |
That fermionic model is an interacting system characterized by a breakdown
of the basic Fermi-liquid quasiparticle picture. Indeed, no quasiparticles and no quasiholes
with the same quantum numbers as the corresponding free fermions occur when the motion of the interacting fermions is
restricted to a single spatial dimension {{cite:f65fc2d73429c6fed9a48b236e4ca875394d252b}}, {{cite:e75462e43ffbdd778fd7835619520deb4a028599}}. In 1D, correlated fermions instead split into the basic
fractionalized charge-only and spin-only particles whose representation is used in our study. The fact that, for finite repulsive interaction, the
generators of the exact energy eigenstates onto the fermion vacuum are naturally expressed in terms
of the creation onto it of such fractionalized particles renders this the most suitable representation for studying the
up-spin and down-spin one-fermion spectral functions.
| d | 07213d6044d94f48beb3fb91f1c7d89c |
Typically, a predictive model is defined as a function from predictor variables (e.g., the customer id, the product id, and the categories of the product) to some target (e.g., the rating). The most common approach in predictive modeling for multi-view multi-way data is to describe samples with feature vectors that are flattened and concatenated from structural views, and to apply a vector-based method, such as linear regression (LR) or support vector machines (SVMs), to learn the target function from observed samples. Recent works have shown that linear models fail for tasks with very sparse data {{cite:34359ebe949766210ae2849d8f496a15d0bc91f8}}. A variety of methods have been proposed to address the data sparsity issue by factorizing the monomials (or feature interactions) with kernels, such as the ANOVA kernels used in FMs {{cite:34359ebe949766210ae2849d8f496a15d0bc91f8}}, {{cite:128582ddf8fbdf9ab7e824cd4313a218462bb558}} and the polynomial kernels used in polynomial networks {{cite:247015db6e9b9f3b94a8ecb132386a4d1dac4ac7}}, {{cite:b0b280159217b3b2454ef7cdfb6844b16c997ad0}}.
However, the disadvantages of this approach are that (1) the important structural information of each view is discarded, which may degrade prediction performance, and (2) the feature vectors can grow very large, which can make learning and prediction very slow or even infeasible, especially if each view involves relations of high cardinality. For example, including the relation "friends of a user" in the feature vector (represented by their IDs) can result in a very long feature vector. Further, it will appear repeatedly in many samples that involve the given user.
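For concreteness, the sketch below scores a second-order factorization machine using Rendle's identity, which reduces the pairwise interaction term to O(kd) and is exactly what makes FMs viable for the sparse, high-cardinality features described above; variable names are illustrative:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order FM score for one sample.

    x: (d,) feature vector; w0: bias; w: (d,) linear weights;
    V: (d, k) factor matrix, one k-dim embedding per feature.
    Uses:  sum_{i<j} <v_i, v_j> x_i x_j
         = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ].
    """
    linear = w0 + w @ x
    s = V.T @ x                   # (k,) per-factor sums
    s2 = (V ** 2).T @ (x ** 2)    # (k,) per-factor sums of squares
    pairwise = 0.5 * np.sum(s ** 2 - s2)
    return linear + pairwise
```

Because only the nonzero entries of `x` contribute, the cost is O(k * nnz(x)) per sample, so very long sparse vectors (e.g., all friend IDs of a user) remain cheap to score.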
| i | 62978e94106b9695dcb380fcf21ce806 |
Of course, exponential Boltzmann distributions also form a basis for expressing probabilities in quantum phenomena {{cite:ab78520701a7a2be6f97d90f666abb69d228e6c6}} and the size distributions of raindrops {{cite:b6f63285217e497196c95ce90aa66b4c595aa211}}, {{cite:2712a16a5f71239ea34b4d775f50742096075013}}, and examples of power laws over a range of time and spatial scales can be seen throughout nature, in such seemingly dissimilar phenomena as neuronal firing, earthquakes, microbial diversity, war intensity, and personal wealth {{cite:a9e3e0c0e8ae4887f3f78f1e9fcb3c0060272d23}}, {{cite:647443d8c3d17e40a523a84ddef1d41c499b2b3b}}, {{cite:7f92c6fb081b13a09494a0d523f77003957b5b00}}. Theoretically, power laws, or the property of self-similarity, can also be obtained from a more purely mathematical perspective than was used here: they emerge when existing objects compete probabilistically for whatever enables their growth in direct proportion to their current size, or in the sizes of connected clusters within a lattice when the occupancy of any given cell has a predetermined uniform probability {{cite:647443d8c3d17e40a523a84ddef1d41c499b2b3b}}.
| d | d12ecb7e3569548dda364459bf2be475 |
where {{formula:00ba6b78-9cc0-472c-93c5-81f706af58f3}} and {{formula:98d10352-7763-4036-8bba-b03e55ec0337}} are at most second-degree polynomials, while the degree of the polynomial {{formula:71ed71d4-9465-474d-9245-831e2474a534}} {{formula:2ac1945e-86fd-4cf9-9afe-548230a37e6b}} is strictly less than 2 {{cite:d774c2c7c9b66de539e4212ae5fd7ae164b55231}}, {{cite:0df55d001996113cd82e20d06be7e6936f4a4b95}}. If we use the following factorization:
{{formula:c6a2fb03-0886-4de7-bd6c-92e0d3aa222b}}
| m | 7198fb9769565a1f6734a90d865756c8 |
To counter adversarial examples, researchers have built models that are robust to adversarial noise {{cite:6554bd99b958f024bbd55932e2a5d6a7ee8f50a3}}, or models that can identify adversarial samples {{cite:d44f353efef1e6798ce7bf39d4d7e2a659efea6b}}. Akhtar et al. {{cite:6d1cd807a9bbb2326ab015371019c7ed5a057cfa}}, Yuan et al. {{cite:59f481b4c91ee93ab1860166d5bca9044f3c167e}}, and Biggio et al. {{cite:1c9eca2d771ecae54fb640afffff75e161bd724c}} reviewed possible defenses and proposed taxonomies to classify them.
These works focused mostly on classification networks but similar approaches could be applied to reduce the susceptibility of segmentation networks to adversarial attacks.
However, our results suggest that segmentation networks cannot easily be attacked in a targeted manner when the model weights are unknown. In situations where the exact model weights cannot be known to possible attackers, the risk of adversarial attacks manipulating segmentation outcomes appears limited.
| d | 55652474acc38efc6e0b4c7db7e659d4 |
where {{formula:e808fdf1-70fa-4147-9f7e-2f74af897afa}} is the {{formula:707d92c4-4860-459a-b499-b8126a2340e7}} order covariance matrix of the process, {{formula:7b8966eb-1c73-4f01-9c61-c592f0ac6667}} denotes its determinant, and {{formula:73a191ae-d439-49d6-b83e-b57fd3a833f6}} is the variance of {{formula:b0451948-eb18-4574-a763-b419f0d07cea}} (and of any other sample, by stationarity).
This non-negative quantity is zero iff the {{formula:4b4144ad-34d9-4fed-b738-830ca2e8625a}} order covariance {{formula:7d786dbb-da8c-466f-b78b-09eb70a839e9}} is proportional to the identity matrix {{formula:79348cf8-ad87-4cd8-b9a7-953c45e89fca}} ,
i.e., the vector {{formula:0d14fb6c-f402-40e0-a44f-4c5dae3626f9}} is white.
Thus, {{formula:817e6f94-d3d4-4888-bd41-3131b5300903}} can be thought of as a measure of the memory strength (“distance from whiteness”) which increases with {{formula:40d025a0-10c7-4375-815f-f7df15560e4f}} for a process with memory {{cite:a5634b98972252f498b48b4ef2a8e7b1a2392cb6}}, {{cite:6c287058b8eb5e1b30a00cae224bae3b27b66fa7}}, as implied by the monotonicity of {{formula:0e2e414f-d510-4954-a598-a8365173c07f}} (REF ).
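One plausible concrete instance of such a "distance from whiteness" is sketched below; the specific log-determinant form is an assumption on our part (the text's actual definition sits in the formula placeholders), chosen because it is non-negative by Hadamard's inequality and vanishes exactly when the stationary covariance is proportional to the identity:

```python
import numpy as np

def whiteness_distance(x, p):
    """Assumed measure:  Delta_p = -0.5 * log( det(C_p) / sigma^(2p) ),
    with C_p the p-th order covariance of the stationary series x.
    >= 0, and = 0 iff C_p is (up to sampling error) sigma^2 * I."""
    x = np.asarray(x, dtype=float)
    n = len(x) - p + 1
    X = np.stack([x[i:i + n] for i in range(p)])  # p lagged copies, p >= 2
    C = np.cov(X)                                  # (p, p) covariance estimate
    sigma2 = np.mean(np.diag(C))                   # common variance (stationarity)
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet - p * np.log(sigma2))
```

For a white series the sample value hovers near zero; growing values with increasing `p` indicate progressively longer memory, consistent with the monotonicity noted above.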
| i | 6fffcd400f618a7575897e0fabfb2806 |
Spectral and singular value distribution results {{cite:67095d7a224ac1898bb48f516f1c453df760563a}}, {{cite:42a0a0c6a9ce8ff25ed3803f72d9a8722fd0e99b}}, {{cite:6705b6cee51fe4707491aa569d6a8262e20b890a}}, {{cite:02399595896e3047bdfb83cdeaca6354c567fd31}}, {{cite:d2bb3fb2cb3e71462577e293d6ecfbd52a3463f9}}, {{cite:6a13296c1d0d517156845d47ddbaafc81603a6e0}}, {{cite:47898838090afe8ac1b5555164735e971f992ece}}, {{cite:4199a22d1db3dac164f2baae9b026778578d7fd2}} of structured matrix-sequences represent one of the key ingredients in the design and in the convergence analysis of several well-known (preconditioned) iterative methods {{cite:57c3aec44c96583a06def9e4ecfe6334215f4112}}, {{cite:048c75c303c2b6212a044632f52ff723bd33ed73}}. In many contexts, symmetry is a particularly desirable property for a matrix when we want to solve an associated linear system with iterative methods. Hence, starting from the original work by Pestana and Wathen {{cite:b1da530e9cdb39c4a364c464806395fae312835f}}, symmetrization procedures combined with various preconditioning techniques have been introduced and studied with the very purpose of developing a competitive method for the solution of real non-symmetric structured systems.
In particular, the singular value and spectral features of the symmetrization of Toeplitz matrices generated by a Lebesgue integrable function have recently been discussed and exploited in several settings. Indeed, under the assumptions that {{formula:44506ceb-f047-4441-9254-cb18ea82cb2e}} belongs to {{formula:92384677-a1c3-44f0-b615-e36fd3fdc166}} and has real Fourier coefficients, the spectral and singular value distribution of the matrix-sequence {{formula:14441ff1-c359-4203-99f6-c3231183f7c3}} has been studied, where {{formula:ca3685a2-e484-42b7-afbf-a3bc2667ee90}} is the anti-identity matrix and {{formula:929d05ba-4932-475a-b2b3-05593d12b4a0}} is the Toeplitz matrix generated by {{formula:c0b4032a-5bf5-438f-ab42-b0cd4d92e760}} {{cite:3370648fc1c376311935b1e7b5b8d7d20a552775}}, {{cite:faba73f8784d97b116196e430834f2dedf8456e7}}. Several extensions of the latter result have also been treated. For example, the generalization to block structures is treated in {{cite:3370648fc1c376311935b1e7b5b8d7d20a552775}}, i.e., assuming {{formula:0ed0bf56-9185-4a46-bff7-06ef40226f5a}} to be a matrix-valued function, while the spectral distribution of matrix-sequences of the form {{formula:1006c00a-8935-4fd1-a86b-4ea1a51597ab}}, with {{formula:960ae796-364e-4045-b2fb-072848b1b582}} an analytic function, is studied in {{cite:e34e18ec041b0687834d69889b14017084ce2f93}}.
| i | 9c79468a96b7bd51d2ef17b22e949369 |
Quantization forces DNN models to be represented by low-precision numbers instead of the 32-bit full-precision representation, leading to a smaller memory footprint as well as lower computational cost. In recent years, numerous quantization methods have been proposed to quantize DNN models to as few bits as possible without an accuracy drop, e.g., uniform quantization with re-estimated statistics in batch normalization and estimation-error minimization of each layer's response {{cite:171eacbaa7206027c828c53481e41c339d0b70dd}}, layer-wise quantization with reinforcement learning {{cite:56a597a93ce2592e271c6ec7c5263bc97b5b602f}}, channel-wise quantization {{cite:d812c832a8045dfaec8ae9fc4de9717bcfc3374b}}, etc. The extreme case of quantization is binary neural networks, which represent the DNN weights with binary values. For instance, Rastegari et al. adopted binary values to approximate the filters of DNNs, called Binary-Weight-Networks {{cite:9d3e094d39f56caf403ea73e4fd80d02f48ec86f}}.
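A minimal sketch of the two basic operations mentioned above (symmetric uniform quantization, and Binary-Weight-Networks-style binarization); the per-tensor scaling here is a simplifying assumption that the cited methods refine per layer or per channel:

```python
import numpy as np

def quantize_uniform(w, bits=8):
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax                 # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax, qmax)    # integer codes
    return q * scale                                 # dequantized weights

def binarize(w):
    """Binary-Weight-Networks style: W ~ alpha * sign(W), alpha = mean |W|."""
    return np.sign(w) * np.mean(np.abs(w))
```

The dequantized (or binarized) tensor is what the forward pass uses; the memory win comes from storing only the integer codes (or sign bits) plus one scale.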
| m | 405ea7014b88f10b006b1b82ec0763c9 |
Frameworks for PIM.
DualityCache {{cite:a461106575e43ddf5461ce59726217ce36346fbc}} is an end-to-end framework for in-cache computing, which executes a fixed set of operations in a single-instruction multiple-thread (SIMT) manner. Employing DualityCache in DRAM is not straightforward due to the fundamental differences between in-cache computing and in-DRAM computing (e.g., the destructive behavior of DRAM operations and cost-sensitivity of DRAM chips). Two prior works, Hyper-A {{cite:f5a5fc16c2eea526f5db2b72821eb26064e9a97e}} and IMP {{cite:d388079c2f4482b91673eafa8fd237872982f3b4}}, propose frameworks for in-emerging-NVM computing.
Since Hyper-A and IMP target in-emerging-NVM substrates that utilize different computing paradigms (e.g., associative processing {{cite:22a5e5a732aac83dd8bbd91a05ad15d6e42129f3}}, {{cite:61313ddcd560da77c4ef1af35850ff04d00cccf6}}) or rely on particular structures of the NVM array (such as analog-to-digital/digital-to-analog converters) to perform computation, they are not applicable to an in-DRAM substrate that performs bulk bitwise operations. Olgun et al. propose the PiDRAM {{cite:1597f54bd507df4a4c775ad600a262bd48ea8e16}} framework, a flexible end-to-end and open-source FPGA-based framework that enables system-integration studies and evaluation of in-DRAM computing techniques (e.g., in-DRAM copy and initialization {{cite:487412181b8847545943365d099ea21579d08b1e}}, {{cite:bf33fc2b178d0760f10ce3096abe8739364a2ac2}} and in-DRAM true random number generation {{cite:0822760941a3a2c2fa2977eb639aaa9c43846022}}, {{cite:07c5bf586b4f5b04f21c50defaed04471d1d7ee1}}, {{cite:29e2c072ad7b80197168840ff9205cba90c2a99a}}) using real unmodified DRAM chips. PiDRAM is publicly available at {{cite:d6afcbb31439320cb4e299994fa92d20f659eef1}} and can be used to prototype our SIMDRAM framework in a real system.
| d | 5968f50d948249b328f52ddb03a3e1d4 |
On the other hand, most previous studies of the Penrose process have focused on four dimensions. However, whether our spacetime has extra dimensions is a fundamental question in modern physics. Over the past one hundred years, different types of extra-dimensional models have been proposed, such as the Kaluza-Klein {{formula:3911537c-adbc-47ba-9950-d2cb2118c394}} KK{{formula:6758285a-3f0a-4edb-a5d5-04feffd740dc}} model {{cite:45612a2cc4bf3a92181d2b28c442e4619e55e5ce}}, {{cite:8ad263f51fcf9e9e077f914bbdf855320c668ab2}} and string/M theory, which assumes that the extra dimensions are needed but quite tiny (approaching the Planck scale) in order to explain why they have not been found in experiments. The braneworld model {{cite:199772f99c8932722d78958a98f1893b81cbd3c1}}, in contrast, conceives that our visible universe is localized on a 3-brane in a higher-dimensional spacetime and that the extra dimensions are large. Randall and Sundrum developed double-brane models based on Arkani-Hamed, Dimopoulos and Dvali's {{formula:178a076b-fd64-4a51-bf3c-687b4c9860aa}} ADD{{formula:1be734f3-e83d-4d4b-a2d4-e7df5b88f2ca}} theory {{cite:da70acc12f45e7b8fef457630a85d0dc5256413c}}, {{cite:32dacca81932cfe9aef0936f8b2e2e7f4b93e36d}}, {{cite:68af29f23010d2bf860954c7215ea38ebb2a235f}} and elegantly solved the hierarchy problem {{cite:199772f99c8932722d78958a98f1893b81cbd3c1}}. Later, they further developed their model so that all the matter of our universe is localized on a single brane at the origin{{formula:5f8848b2-5209-4b99-bb90-45605f3e5a2b}} and the extra dimension is infinite {{cite:793a71b0a9596d6188acccd3c04c26ced65162ab}}. Interestingly, a black hole solution in the braneworld model was obtained by Naresh Dadhich {{formula:f59d34fb-29a3-4c97-b49c-7a29f381598d}} {{formula:6df93364-2700-4900-8d6b-463be4d0394e}}, which gives an effective solution for a rotating black hole taking the form of the Reissner-Nordstrom metric, but with the real charge replaced by a new parameter, the tidal charge {{formula:90a5c0b5-8f91-47f3-9f77-27b1a6bdb00e}}, as an imprint of the extra dimension {{cite:0acd8f8e708adaaa0c2416548caa61b03327f0dc}}. After this vital work, Aliev and Gumrukcuoglu obtained stationary and axisymmetric solutions describing charged rotating black holes on a 3-brane in the Randall-Sundrum braneworld {{cite:741980128871f6529e44d89c0e69eb7300069f3a}}. The Kerr solution can then be viewed as the limit of the tidal charge {{formula:7eee07e1-1af3-4437-abac-42d1c75d6fe9}}. The collisional Penrose process of the braneworld Kerr black hole with spinless particles has been discussed in {{cite:78cf6f2102066fb16f7e14437c1f64be9908055b}}. The present paper aims to study the collisional Penrose process with spinning particles. The necessity for such an extension is twofold: first, spinning massive particles are more realistic than massive spinless particles; second, spinning particles in the Penrose process usually have a much higher maximal energy extraction efficiency compared with spinless particles {{cite:fc35d09c8dd1352e5f9dcb4d211f8415588af2ee}}.
| i | e3dd0a775c1f4c849f1908b402b7c3ac |
In practice, most real-world datasets have long-tailed label distributions {{cite:969478fa3cd12248baf1fdb0f683c708c19dd156}}. As a result, deep learning algorithms usually perform poorly or even collapse when the training data suffer from heavy class imbalance, especially for highly skewed data {{cite:b2e9eb00265273dec3f355e4f0a4d265c77c08ac}}.
Due to the imbalanced data distribution, neural networks can overfit the minority classes, which biases the learned classification boundary {{cite:92ae915cd90dba59957d7b82e00ba75861dd564d}}.
| i | 13ed0f69f5d9443c32fb5330191747a0 |
Generally, there are two event-based approaches that have been proven effective, namely, event-triggered control and self-triggered control {{cite:9f81829c0d43db4a364f5284f7ecad10b09c109e}}.
In the former, an event, e.g., a data transmission,
is triggered only after some designed triggering condition is met.
This condition must be tested at each state or output, thus requiring continuous monitoring of the system.
While a level of robustness against uncertainties and unmodeled dynamics can be expected, having the sensor operating continuously results in a waste of resources.
Related work can be found in {{cite:103e8c4649e0f4b5c7bf37362c4d99da06a4c57a}}, {{cite:ffc8a3886a537d61efa1fc6892367c3de9aa4df9}}, {{cite:e7ff760824e3ef0b385c076ef3e7e6871a8acc40}}, {{cite:fb53aca509e113208cff0da7cba6c341ab99b97e}}, {{cite:6ecc633c262189c56768e1428ecade8a9a6eadb6}}.
Self-triggered control, on the other hand, determines the next sampling time and transmission once a sampled measurement is received, and thus does not need to sample the outputs continuously;
see, e.g., {{cite:4d9d4af86dcb9d439c32d878e37958282ce3bb89}}, {{cite:a5eca65c51bc5230362d2fcdbb1aafcc7e5fc3ed}}, {{cite:5df7b8690471f03b9630ee8d27a435d1225d195e}}, {{cite:51c4b582d4de85e1b6c9a798f7f141f93f67cac0}}.
Notably, the sensors in self-triggered control can be completely shut off between sampling times, which saves energy and prolongs service life of the sensor.
This feature appears promising in the model-based case and motivates the generalization of self-triggered control from model-based to data-driven settings.
The key idea of traditional self-triggered control is to predict the future trajectory
using the system model, as the sketch below illustrates for the model-based case; it remains unclear how such a trajectory can be obtained in the absence of a model.
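The following toy sketch shows the model-based prediction step; the linear dynamics, the relative-error trigger condition, the integration scheme, and all names are illustrative assumptions rather than a method from the cited works:

```python
import numpy as np

def next_sample_time(A, B, K, x_k, sigma, dt=1e-3, horizon=1.0):
    """Self-triggered sampling: at sample time t_k with state x_k,
    simulate the model x' = A x + B u (u = K x_k held constant) and
    return the first time the assumed trigger ||x_k - x(t)|| > sigma * ||x(t)||
    would fire. The sensor can sleep until then."""
    x, t = x_k.astype(float).copy(), 0.0
    u = K @ x_k                        # control held constant between samples
    while t < horizon:
        x = x + dt * (A @ x + B @ u)   # forward-Euler model prediction
        t += dt
        if np.linalg.norm(x_k - x) > sigma * np.linalg.norm(x):
            break
    return t
```

Event-triggered control would instead evaluate the same inequality online against the measured state at every instant, which is exactly the continuous monitoring the text contrasts it with.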
| i | 84998bd311a0a8de69e3272849542aed |
The theory of isolated systems plays a central role in astrophysical
applications of Einstein's theory of General Relativity. A
particularly influential approach to this theory is through Penrose's
definition of asymptotic simplicity —see
e.g. {{cite:a43f654ef5167c4c7d7dc6a58f8a1c396a26752b}}, {{cite:44cd0d8857f64e1cf53e2618f18426b34c82e10c}}:
| i | b5ec495d9c2443a7eb9620b0b511a552 |
Finally, as shown in Sec. , the general theory of the massive graviton admits flavor off-diagonal terms between {{formula:10d98cc3-fbd3-4402-8bae-ca18a370d4a9}} and charged leptons in Eqs. (REF ) and (REF ), which would give rise to LFV processes such as {{formula:27f8e944-12b5-4eec-bc4a-0259d9e151b1}} {{cite:40937d66ad2e0379f46459d2f15761f75ffd2fa2}}, {{formula:d84c5745-c617-4de1-a500-29771ed3bfb3}} {{cite:a2693a3a07bbe24f3ab49a3fd259d08fbcb7c154}}, and {{formula:3aa1b31d-74f7-4a7f-a68f-c09b14c1b4e3}} -{{formula:53833f92-7b5b-4fc4-88fa-a596cb06676a}} conversions in nuclei {{cite:58b0644a96604e9b33d8ce3564e0027c0b8d1a67}}. Moreover, the Wilson coefficients of these effective interactions {{formula:2856acf0-61f7-4024-a046-aaaba6e9bfcb}} are generically complex, so they would also lead to CPV observables such as the electron and muon EDMs {{cite:5c22e488122c6e6f887e5d6ae3caffd6ebcd0ead}}, {{cite:bd93241227836cabefa619ff6982fbfbd35e944a}}. Therefore, it is generally expected that such lepton flavor off-diagonal terms are stringently constrained. However, as seen from our fits to the latest Muon {{formula:6205006b-df61-478c-9be9-8abed6d35d7a}} and {{formula:6a7be50b-37e3-4035-89e3-aba1244f27b7}} results, the current experimental data still allow the {{formula:9d7a0e96-36aa-4d0a-a731-14f5a4f3ab2a}} -lepton couplings to be flavor universal, i.e., {{formula:49cab490-ecb4-488e-be4a-0a520a8f685f}}. In this special case, the total energy-momentum tensor {{formula:b4bd557b-8903-4e6c-8839-96dd256079bd}} of the charged lepton sector coupled to {{formula:7cc12306-60db-4b18-9140-18e8348a4e78}} is the same as that defined in the SM. If the charged lepton mass matrix is diagonalized in one basis, this property is inherited by the {{formula:a339b742-4b69-4b71-b9b6-2eb314c6f9c4}} -charged-lepton couplings through the associated energy-momentum tensor. Then the LFV processes can only be induced by interactions with active neutrinos, and are therefore highly suppressed by the tiny neutrino masses, remaining unobservable under the present experimental status. Furthermore, due to the self-Hermitian nature of the total charged lepton energy-momentum tensor {{formula:51272819-4e93-4b07-8546-6dd2bc6c573e}}, its universal coupling to {{formula:bfe218fb-b342-4e8b-b611-f0022686dc2f}} can only be real, which automatically avoids the appearance of CPV vertices. Nevertheless, if the massive graviton theory ultimately violates this charged lepton universality, there is no other known natural mechanism to forbid the complex flavor off-diagonal couplings and the associated LFV and CPV effects. A detailed discussion of these flavor issues is beyond the scope of the present work, and we leave it for future study.
| d | c06ed31c2c14bfbcb5ec72a62a8f7e21 |
The masses and decay constants of the final-state particles are taken from {{cite:c65d2ad4fef8d774d44de3f7bf6729926374b99f}}, {{cite:01e288ab56264d2c79b5bb4e9887b911c23cf8ca}}, {{cite:ad6bd289ea34ecdcc6f1031f649dcde9a025b5e9}}. The transition form factors in Eq. (REF ) have been computed by several methods. We will use the results calculated by the light-front quark model {{cite:ddd56a8c5d38ddbdbea10bb4adfbe8d2ff2ccca2}} as inputs, which have been successfully used in the prediction of the discovery channels {{formula:e8a36a8a-42cb-489e-a6cb-ebf630e6e449}} and {{formula:0696de10-23e3-4c8d-997a-71b8ae1ceb90}} in {{cite:5522382d1b6e7ac9749027ea4f7f5f62298c8863}}.
Strong coupling constants can be found in the literature {{cite:5a70297dc229feffe231133fa07cc12a0ea171c1}}, {{cite:f9534766a02172b0ec6dc39104083080d9b76471}}, {{cite:300be039c9b71b0308fc067e49c7c3a0e8ef8064}}, {{cite:ecbba71a64dea38df7e25a8616072282fef65980}}, {{cite:ce267eaed7659fd75a5034aaf92f4efa88955f2a}}, {{cite:fe90a03765361153a738456215e54de6e8e58707}}, and those not found there are calculated using SU(3) symmetry.
The strong coupling constants appearing in our calculation are gathered in the Appendix.
| r | ffbfb4fbcbf955844cdfbd777acc2f30 |
We illustrate our methods with an application to air pollution levels of NO{{formula:8c9ae6f8-7a64-484c-be47-9e8b96a6ccc9}} at a single location using a hierarchical Bayesian approach and the integrated nested Laplace approximation (INLA) for inference.
In general, INLA requires the model likelihood to be log-concave {{cite:455087df35cb6786e88ec78fcc9408037e9a1b97}}, which is not the case for the GEV and the bGEV distributions.
Although, in practice, the lack of log-concavity does not necessarily mean that INLA will not converge, it is highly advisable to try to mitigate potential numerical instabilities by, e.g., choosing informative priors for the likelihood parameters.
Additional standardisations of the response variable can also help to reduce convergence issues.
| d | 4a52687342fd110517511d35c3d9e1a9 |
Both {{formula:5b34c946-2482-44d7-80bf-b33d887db60b}} and {{formula:1e9b921a-bac5-4501-a3b8-473efba2df8c}} can be used to estimate the importance of different features in categorizations. Thus, this approach allows us to estimate the importance of individual features in agendas (and the categorizations given by them), possibly obtained from a complex deliberation process. These estimated importance values also provide an alternative method for categorization when we are only interested in flat categorization, i.e., a partition of the objects. The importance values of features can be used as weights in computing the proximity or dissimilarity between different objects based on the features shared and not shared between them. The dissimilarity or proximity data obtained in such a way can be used to cluster objects with several machine learning techniques {{cite:68411603e102068d52e76601625af8962861b17e}}, {{cite:515115aba79ba23863ba2e467db4d303958d8ad2}}. For more detailed properties of these transformations and a comparative study, refer to {{cite:d26f5ecbff0bb496d7bd90577ab7ef1995c807d5}}. These methods can only provide a flat categorization or clustering, unlike the stability-based method, which provides a hierarchical categorization.
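As one hedged illustration of this pipeline (the weighted-Jaccard form and all names below are assumptions, not the transformations studied in the cited reference), feature importance values can turn a binary object-feature matrix into a dissimilarity matrix ready for any standard clustering routine:

```python
import numpy as np
# downstream: scipy.cluster.hierarchy.linkage, scipy.spatial.distance.squareform

def weighted_dissimilarity(F, w):
    """F: (n_objects, n_features) binary matrix; w: (n_features,) weights.

    Dissimilarity of a pair = 1 - (weighted shared features) /
    (weighted union of features), i.e., a weighted Jaccard distance.
    """
    n = F.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            shared = w @ (F[i] * F[j])
            union = w @ np.maximum(F[i], F[j])
            D[i, j] = D[j, i] = 1.0 - shared / union if union > 0 else 0.0
    return D

# Z = linkage(squareform(D), method="average")  # feed D into a clusterer
```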
| m | 8ce4973ef8b01e7ed5df3b45c8791439 |
In order to access radial flow, the mean transverse rapidity and its azimuthal distribution are introduced {{cite:ba44e304cf0f7c5c383561aca6e1503d92725d56}}, {{cite:2197f44d75a453d379ccac63d4296dde6137acb4}}, {{cite:8b7c854f849842d28e9eaef9fecd815e6d2bfaaa}}, where the mean is the average over all particles in a specified azimuthal direction, so the influence of the number of particles is diminished. The 0th and 2nd Fourier components of its azimuthal distribution represent the isotropic and anisotropic expansion of the source at kinetic freeze-out {{cite:2197f44d75a453d379ccac63d4296dde6137acb4}}, {{cite:d9bdd6775474b2cad3188ed9ddf632f9c63d44d5}}, {{cite:f2ee61db04f219d274b6f5e960f0227fd1de2755}}, respectively. The anisotropic parameter also arises in non-central collisions and has been shown to be consistent with the radial flow obtained from the Blast Wave parameterization {{cite:ba44e304cf0f7c5c383561aca6e1503d92725d56}}.
| i | c9fd88fd34a7e69db62d48d444776d3d |
which is situated between {{formula:19603495-e089-4f36-b1f8-d55d94f6c63c}} and {{formula:d5300320-c052-4099-98d4-b46beb59ac0a}}. Finally, in noiseless situations, Mo {{cite:a01c0285682b77c95c7edbde8ffcc0a1d808f2a9}} showed that the condition {{formula:7d61d426-31e5-48bf-a64d-2ef1afc4a5e6}} is sharp for guaranteeing the success of OMP for {{formula:d97bfedb-a067-43cc-a916-6217084aa8e4}}-sparse signal reconstruction in exactly {{formula:de43a8b2-fe8c-4ee2-a8eb-a432d4b5bc86}} iterations.
In noisy scenarios, Cai and Wang {{cite:d717be730978a7da142eeeb2f7acaa0e9cca47c7}} and Wu, Huang and Chen {{cite:46e494c59a28e1152236585700b4ca96ffee08d2}} developed some sufficient conditions for the guaranteed performance of OMP, which consist of the bound (REF ) and certain requirements on the smallest nonzero signal entry;
Chang and Wu {{cite:9155f48b229b7b9354b8f9b1e7d109c62cf444d2}} established a similar result under the bound (REF ) and a slightly improved condition on the smallest nonzero signal entry.
Another improvement was achieved recently in {{cite:2cc55d442b477402b1e0d0e3cabb0e8f01da6309}}, wherein it is shown that the condition {{formula:c67098ee-c521-4cbf-b181-c756a578b8c7}} together with a condition on the smallest nonzero signal entry is sufficient for OMP to correctly identify the support of a {{formula:42efd38a-0c57-4ba2-b92c-a142ff774a93}} -sparse signal in exactly {{formula:1c5e0982-b4e3-4ec5-bd60-58ff3c4d165b}} iterations. It is worth mentioning that the stability of OMP under RIP assumption was also studied in the literature (see, e.g., Zhang {{cite:1a04a7c455124e15bfc2d64617f6fd6957518d8e}}).
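For orientation, here is a minimal sketch of the OMP algorithm itself, run for exactly k iterations as in the support-recovery results above; implementation details (the least-squares solver, names) are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit for y = A x + noise, x k-sparse.
    A: (m, n) sensing matrix with (roughly) unit-norm columns."""
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        support.append(j)
        As = A[:, support]
        x_s, *_ = np.linalg.lstsq(As, y, rcond=None)  # LS fit on the support
        residual = y - As @ x_s                       # orthogonalized residual
    x = np.zeros(n)
    x[support] = x_s
    return x
```

Each iteration adds one column index to the support; the RIP-type conditions discussed above guarantee that, for suitable signals and noise levels, all k selected indices are correct.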
| r | c03a56d06cffa56dd44748db88bee6a8 |
Substitutions on finite alphabets and associated dynamical systems generated by shifts are well studied from dynamical, topological, and geometrical points of view {{cite:a69211daf18d58d212455f9e8336704614c2bbea}}, {{cite:bb12e8155c20bc5916e65a454defb1696a0f9091}}, {{cite:e53b5128c5450360e005f000d1457f08c07a9e85}}, {{cite:073566ade6afdc1f06d4cfa5e8e1d39578b7d7fa}}. Under certain conditions, such substitutions generate substitution tilings with finitely many building blocks called prototiles, with some algebraic inflation factor {{formula:12384e98-aeee-4bb7-b6a8-ffd9c4e2ecc0}} .
In this setting, the algebraicity of {{formula:ca13ec80-d543-4c1a-bde8-cb4b34164e8b}} follows immediately from the finiteness of the alphabet, and hence the finite-dimensionality of the associated substitution matrix.
| i | daecf759447c49192627dd038c6c0262 |
In the paper, we compared our algorithm against algorithms from various categories: non-learning (elastix {{cite:d63bdac3160c41cf577aada5f7b1229f8e066af4}}, a popular conventional tool), hybrid {{cite:4f5048b97206fdd5b51e824bebf9c386e2613931}}, and GAN-based {{cite:ea56a9d7086d6ff9c3dc7cc15046b9d9a2da1a95}}. The presented multi-task networks outperformed these approaches on the validation set and performed on par with them on the test set. However, the test times for the hybrid and elastix methods are on the order of minutes, while the presented methods have the advantage of fast prediction in less than a second. This enables online automatic re-contouring of daily scans for adaptive radiotherapy. Moreover, in our hybrid study {{cite:4f5048b97206fdd5b51e824bebf9c386e2613931}} we carried out an extensive dosimetric evaluation alongside the geometric evaluation. The predicted contours from that study met the dose coverage constraints in 86%, 91%, and 99% of the cases for the prostate, seminal vesicles, and lymph nodes, respectively. Since our multi-task networks improved on the geometric results of that study, we expect that our contours would achieve a higher success rate in terms of dose coverage. This could potentially reduce treatment-related complications and therefore improve patient quality-of-life after treatment.
| d | 454ab333f53830acf10dad0ba9fec0b6 |
One may compute {{formula:8d8f5277-148a-47f4-8707-cb136798511f}} as a function of {{formula:d1b7cf01-c590-49e1-b24e-dae27b3f589a}} and {{formula:17b37cee-5b8b-4a3e-a43e-a1f321ed3905}} to locate the region where this ratio is much less than unity, which means that the energy level width is much less than the energy {{cite:f343c4c8511f6fda3844c7eebe4129fed79f9353}}, {{cite:6c2683f306a8855fa292821e216336f449793115}}. The merit of using this ratio is its independence from the energy parameter {{formula:1c2579ed-2fb0-47ec-8fee-b9cc73318a4c}} ; hence it can serve as a criterion for the existence of quasiparticles. Here, we plot {{formula:b59a3240-d5e8-4592-bcc0-405722a77b55}} as a function of {{formula:21ca5800-2641-4fb2-9d03-39041d07b494}} for several values of {{formula:4ccaa357-f7e4-4794-ba6b-baf85065d9c1}} in Figs. REF -REF , in order to be compatible with the foregoing plots. It is seen that {{formula:636dfe07-0f25-48d4-a1fe-7fefa63a7730}} is sufficiently small for {{formula:867d3b5f-dd3b-46e4-94fa-013ea974afa2}} ; however, in the region {{formula:17203221-e131-40d6-9d1b-1f7854be3fc9}} it gets large ({{formula:71bf3374-f16d-48c5-8e83-3c32f8e5501d}} ), and about {{formula:8cde530e-10a1-4733-bb3f-cdb9256389b5}} it becomes singular. The numerator and denominator of Eq. (REF ) as functions of {{formula:2b900d3d-4209-451c-a8a8-2e18f5e84b07}} are displayed in Figs. REF and REF , respectively.
| d | b1ecc3d452da01a1dc3ab5492172aab6 |
The optimization in Section 3.4 can be further controlled by the user by defining bounds on the value of {{formula:7a020e5a-0a2e-491a-98ac-2a1a08ec5a0e}} . If the user wants to guarantee some level of change in the image without allowing the image to get too close to the protoimage, they can place a lower and an upper bound on the value of {{formula:1f12de62-e9de-444e-92d6-301742c52090}} . During the optimization, {{formula:029b1f05-3c9e-422e-a55e-b8cfa45fefb0}} would simply be projected back into the feasible set if it were to exceed the bounds. We find this variation to strike a balance between the full user control of {{cite:4927c086dd51002cfe6c7b0cc4bdab310aad8bfb}}, {{cite:dc4c54fc27f2315b310b02f8c9f57cb8500cb15d}}, {{cite:03069b8e6758c4e9674541145dc244a04b6b5d27}}, {{cite:f62058e9335a4e72ed3b10dc5bbb7a933d12dd29}}, {{cite:ef3ac38528c5e348236f617f051f75fd62c41d1b}} and our automated method.
| m | 3afa3baddd7999bac82146b112f5d18d |
First, we give an overview of the algorithm, which is presented in detail in Algorithm .
Algorithm: VCFR
Input: {{formula:57b3cc39-4e98-4cb6-8563-6008ff684dd8}}
Sample {{formula:0c81d9ac-38db-44b6-82b2-0e810d067351}} according to the observed rewards and transition probabilities from the unknown environment
for each update do
    Sample rewards from {{formula:ad37186e-baaf-47b1-889f-0416fb6d79d4}} , {{formula:57316ab7-d593-4ef4-bb06-8187fceae463}}
    Collect all {{formula:160d9918-c848-4829-98e7-19b9b2df4648}} as datasets
    for {{formula:e1a1ad8b-9786-40b0-be07-63a7992d0ac2}} in the datasets do
        Calculate {{formula:86e5868c-3493-4804-905f-11629f9ddfba}} for each {{formula:f92ead40-1599-4b73-808f-0528e1b65132}} through the model built by the BNN
        Construct {{formula:c8c2259f-33f5-4ce3-a6de-20708782c791}}
    Train the BNN using the datasets and update the distribution of the environment
    {{formula:46a9e117-a951-41dd-a6a2-8fc390370624}}
    {{formula:580b07c9-ad42-48fd-9275-06054c4218ea}}
    if the probability that the node is reached {{formula:7e67a88a-fdce-4977-b196-42b764759af2}} then
        prune this node
    Calculate {{formula:dcf1753f-e978-4522-b3a8-cd5cbe1415b4}} from the regret values with regret matching
    Calculate the average strategy with information gain {{formula:4f96b11d-a7e4-4c17-9423-c4e16cf557bb}}
    {{formula:46070eb9-bcd6-43d1-b953-540fd9f9b212}}
    Gather the environment data and update the observed rewards and transition probabilities with interaction strategy {{formula:b8e25bf4-7003-4112-99c7-421e32954868}}
The posterior distribution of the reward corresponding to each action is stored in the data pool {{formula:379c5ecf-8b20-4d53-85ca-2c96a6bc5521}} . We take the prior distribution as input and use a Bayesian Neural Network (BNN) {{cite:584df4f4b9693f3f9972c5036a60fb3b88f9c0f5}}, {{cite:8181877fbc158686159867f1e81e8da5c0237b17}} in VIME to obtain the posterior distribution {{formula:ce7087bc-b3a2-4521-92a6-a687415d6346}} of the reward corresponding to each action. CFR then calculates the average strategy using the reward augmented with the information gain, and explores the environment to collect data according to the curiosity-driven strategy. An approximate Nash equilibrium is found after repeated iterations. The whole architecture of our proposed algorithm is shown in Figure REF .
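For reference, the regret-matching step invoked in the algorithm has a standard form; below is a minimal sketch, with a usage comment showing one plausible way the information-gain bonus could enter the regret update (names such as `eta` and `info_gain` are illustrative assumptions, not the paper's notation).

```python
import numpy as np

def regret_matching(cum_regret):
    """Map cumulative counterfactual regrets to a strategy.

    Actions receive probability proportional to their positive regret;
    with no positive regret, the strategy falls back to uniform.
    """
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    return np.full(len(cum_regret), 1.0 / len(cum_regret))

# Illustrative use inside the loop above (assumed names):
#   regret += counterfactual_values(reward + eta * info_gain) - node_value
#   strategy = regret_matching(regret)
```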
{{figure:c493dbdb-ce67-48c4-9270-b00cad9ea6a3}} | m | 28333bab9cfaa2afc62e8e508abe66dc |
The theory of integrability for differential systems is classical and is useful in the study of the dynamics of differential systems. Integrability has different definitions in different fields. Here we mainly concern ourselves with the algebraic aspects of integrability for polynomial differential systems, which involve analysis, algebraic geometry, field extensions, and so on. For further information on this subject, we refer readers to Darboux {{cite:4b559458ca53002f323cda2e9e332da6551374ac}}, {{cite:f07017837afc0be9a79d2af8eca1a097983a5a37}}, Jouanolou {{cite:4aed9d4cabb1f3baae75b301062991a2f6fdf88e}}, Prelle and Singer {{cite:88c0d4bd79eba7017962b3f7ec31a398ab6ed789}}, Singer {{cite:8509a1b90037caa2a083c5fcf68e531192ec5506}}, Schlomiuk {{cite:e891f2eac29f24a3a10f900ad143d468c9774cc2}}, Llibre {{cite:92a4746fbe63dc16a48a4c3fcb57497589f830e0}}, Dumortier, Llibre et al. {{cite:9d6704e47cebecf7851a8febbbe7443b4dfb0f75}}, Christopher et al. {{cite:d9f57d59dc5840341bf0b34035e3a74154d4e362}}, {{cite:039299a25e83032d882db446c894446e04b9109c}}, and Llibre and Zhang {{cite:e04b0815e9ef2cfa5c9c96c115a48b222315497f}}, {{cite:44b132f02a9879d1bdba0348a08b2f35b752cfef}}, {{cite:a26a34afc81854c3c8a61119aee807ad0dbbc47f}}.
| r | 709d4d9de1ba6d121408238594abb70b |
The problem of resolving two point sources has been intensively studied as a model problem for characterising the resolution of optical systems for more than a century. For a long time only visual observation of the light coming from the sources was available, so resolution criteria were based on visual characteristics of the intensity distribution {{cite:876bbdb7363fe6deaafa2a2437c2eda2b7076338}}, {{cite:dad1108e8a555f7af3b35c77261d343f88840d49}}. Visual criteria, such as the Rayleigh criterion, aimed to distinguish the image of two sources from that of one, but not to describe the problem of separation estimation.
| i | dcb7002bb6f7aa82e773f92307d62c1a |
In the experiments reported in this paper, Python v3.6, the {{formula:93ac9429-a7bd-4474-9967-0fe31e885f42}} library, and scikit-learn {{cite:a8ac2f143e0f9fb8d786efc44dd42b1b331c80d2}} were used for the implementation. Each experiment was executed 10 times and the average result was reported. Each dataset was split into a train set and a test set using 5-fold cross-validation.
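A minimal sketch of this evaluation protocol with scikit-learn is shown below; the logistic-regression classifier is a placeholder for illustration, not one of the models considered in the paper.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def evaluate(X, y, n_runs=10):
    """10 repetitions of 5-fold cross-validation, averaging test-fold accuracy."""
    scores = []
    for run in range(n_runs):
        kf = KFold(n_splits=5, shuffle=True, random_state=run)
        for train_idx, test_idx in kf.split(X):
            model = LogisticRegression(max_iter=1000)  # placeholder classifier
            model.fit(X[train_idx], y[train_idx])
            scores.append(model.score(X[test_idx], y[test_idx]))
    return float(np.mean(scores))
```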
{{table:a0285e72-1f5c-45d5-9e06-c05604a4fffe}} | r | d4497fc161bd77aec067852c3692ca1e |
RNTK: We used only a single-layer RNTK with {{formula:b6e4c7d6-687f-43aa-ac6b-8a1eb21fdf00}} and the following hyperparameter sets for the variances:
{{formula:72876402-1727-482f-a76c-4e7946c8c137}}
NTK: The formula for the NTK of an {{formula:627f15e8-cb1f-4b2c-b2cc-b4e598a881ef}} -layer MLP {{cite:54589919f06314f828c66675ac603a0c2fa4e58e}} for {{formula:a2193f5a-5054-44d3-826e-7f9e2a28e229}} is:
{{formula:125f6d1a-576c-40b6-8db1-94106044ccd7}}
and we used the following hyperparameters:
{{formula:5dd0e6e1-569e-462d-951f-92c86b988e65}}
RBF:
{{formula:ad3c2a78-2556-4913-8b1f-1b43aa2b82ab}}
Polynomial:
{{formula:db2a493b-8d06-4ec0-9e10-a1d59199e6e0}}
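For completeness, the two classical baseline kernels referenced above have the standard forms sketched below in NumPy; the actual hyperparameter grids are given only as the elided formula blocks, so `gamma`, `c`, and `degree` are placeholders.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    """k(x, z) = exp(-gamma * ||x - z||^2), evaluated pairwise."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def polynomial_kernel(X, Z, c, degree):
    """k(x, z) = (x . z + c)^degree, evaluated pairwise."""
    return (X @ Z.T + c) ** degree
```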
| m | 87621610c1aef6c7d62f8358150052e2 |
The limitations of General Relativity (GR) on large scales led to the investigation of astrophysical processes in modified theories of gravity, e.g., theories which provide improved models of the accelerating universe. Among these modified theories (some of the review articles are listed in {{cite:28bcc2df22b08e3b6ecf718f81f60731a1cd9399}}), {{formula:e1c44d47-02df-4952-8fdc-2a95326a96c3}} gravity presents a very elementary modification by including higher-order curvature terms to incorporate the dark energy components as well as the inflationary phase {{cite:42aaaa0b236567eee99c00519f38b6436efcbce2}}. Not only does {{formula:6714fe05-d78d-4efc-a18f-a1f2d3ef3cc0}} gravity reproduce the {{formula:a7197afd-9812-435c-8cbc-61e89fec4440}} CDM epoch and mimic the cosmological constant at the present epoch, it can also unify the entire evolution history of the universe {{cite:c9a5d382df638a7160aa7e126117ebd162a07aba}}. The null dust non-static exact solutions in {{formula:06adbd58-21a1-4438-8f20-fa269c5c571b}} gravity constrained by constant curvature, describing anti-de Sitter background evolution, were studied by Ghosh and Maharaj {{cite:a1c272bb227aefb09d052fc0b087d6c7fb58b3f7}}. Cembranos et al. {{cite:d0c06b72e8c19952f1f0606749c1df9a761e1c50}} studied the evolution of gravitating sources in the presence of dust fluid in a general {{formula:591e0b14-4068-490b-a38d-0a3c4770791a}} model with a view to determining the possible constraints on such models. Goswami et al. studied the collapse of massive stars in {{formula:b7e31e33-1a19-4c0b-959f-bd82b2d3b179}} gravity and showed that the extra matching conditions arising in modified gravity impose strong constraints on the stellar structure and thermodynamic properties {{cite:5f1560f47156a96322a99566f9a8244bf6885550}}. Chakrabarti and Banerjee investigated the collapse of a perfect fluid source described by a Lemaitre-Tolman-Bondi type geometry {{cite:f45f5419823bb32e4f8be1072ee8ff06b066df2f}}. Sharif and Yousaf studied the dynamical instability of charged spherical collapse under the expansion-free condition {{cite:0832ba5c52ff54e3c26a526a77ca67e56b19300d}}, and the stability of dissipative charged spherical collapse in the CDTT-{{formula:167ec04e-b459-449d-807e-7e95f7401655}} model {{cite:1bc72964e323a2950c183064206f1573c3b158f7}}.
| i | ee5481f93ebacad5a8dbcdf1435ede42 |
We continue the training with the ground truth image- and optical flow data from Sintel {{cite:33d4ac0d56f082554bd1e37c7e6ea6cd8edd7bee}} and KITTI {{cite:4cda4ada7d286b699010b67327c07ae74d99c27e}} to compare with other methods in the finetuned regime.
We again follow the training schedule and settings from RAFT {{cite:3b0903b76e6c710498e32cda835fba6dae14bd33}} with the exception that we do not include the Things dataset and only finetune on either the Sintel or the KITTI dataset.
To prevent overfitting, we only train for 50k iterations in contrast to RAFT-ft which was finetuned for 100k iterations.
Results are shown in Table REF .
{{table:cbb23be2-7919-48cb-a54c-0ef2703aadea}} | m | a7290a3fb06cf8ea2c77d982d9850671 |
Alongside the advancement of NLG models, attention towards their limitations and potential risks has also increased.
Some early works {{cite:d6c3541408f96d69153a9f27f9ad9963f53b7b36}}, {{cite:560b03c5fe540e64588dd97a7a785d5849ed9de5}} focus on the potential pitfalls of utilizing the standard likelihood-maximization objective in the training and decoding of NLG models. They discovered that such likelihood-maximization approaches can result in degeneration, which refers to generated output that is bland, incoherent, or stuck in repetitive loops.
Concurrently, it was discovered that NLG models often generate text that is nonsensical or unfaithful to the provided source input {{cite:669916e5980ec2141df69dbd2fbe7db011f3811b}}, {{cite:1d68c1b9408dcbddf36ee88f03f80bac437eed19}}, {{cite:07667093c766fee4c9639be054affa1c52b98fcf}}, {{cite:be6d5e33691cb041beaf109a9968c04b9f211ce6}}. Researchers started referring to such undesirable generation as hallucination {{cite:6ec2dc702743c21335ceaee79f5dadc22585063e}}. (The term “hallucination” first appeared in Computer Vision (CV) in {{cite:c28257175badee3fdcb3dbb7ccfad502f268d2f4}} and carried more positive meanings, such as superresolution {{cite:9606be9bbf6b5076ac5e7bb2fcdc42d1bb441a83}}, {{cite:c28257175badee3fdcb3dbb7ccfad502f268d2f4}}, image inpainting {{cite:14b44d86a004c93c4da953d1a4fcb2d8840b925f}}, and image synthesizing {{cite:035104c29f705d1553dd9924aed89fbf0fce98b2}}. Such hallucination is something we take advantage of rather than avoid in CV. Nevertheless, recent works have started to refer to a specific type of error as “hallucination” in image captioning {{cite:64158c51425e77c746d8a4e384598111afb443f1}}, {{cite:07667093c766fee4c9639be054affa1c52b98fcf}} and object detection {{cite:bf64d28d95c511866fa0d9d340e6cb2a69343899}}, {{cite:ba9a479371a65bd29ff808dc2474805cb5c89bcf}}, which denotes non-existing objects detected or localized incorrectly at their expected position. The latter conception is similar to “hallucination” in NLG.)
| i | b79f8e4a78905f2d9e2d74a5dda1ffd9 |
where {{formula:cdad8405-87b5-4a75-8211-27b7b94cd584}} is the constant of normalization. Here, the initial quantum state {{formula:213d494e-eeed-4692-8eff-b8abb6971f93}} can be the leading eigenvector of any Kraus operator {{formula:80a29429-3241-4d18-8dcd-a8c846d2b2a4}} (see Appendix ).
This naïvely suggests optimizing over {{formula:92a2537e-faf8-4fd2-b2cf-cae7a3f24568}} , but this is generally cumbersome as {{formula:49cb8ed3-ed3b-4a66-b9c0-20a50f891f36}} is constrained by the completeness relation {{formula:e04dc28f-c1b6-42fa-bb3f-842136541352}} .
In the Methods, we instead devise a way to optimize over a set of unconstrained {{formula:d634cd9a-f58b-48cd-ac23-03c055f166a4}} complex matrices {{formula:4e9eb0a0-ad7e-49b9-982a-98abe1521877}} , whose value enables the deduction of {{formula:7b3cfbd7-e4df-4d84-a832-e2f2c21c796e}} via tensor network techniques {{cite:73d87c314b3d3ee602ae1ca04177228eadb528a4}}.
| r | 4c23e2446bd6c047f463d6fd5c331cc4 |
In the equilibrium statistical mechanics of lattice systems, the non-uniqueness of the infinite-volume grand canonical Gibbs measure is referred to as a phase transition {{cite:3ba851f55312a269a28409fae360df1c7cb7e8ee}}. The presence of such a phase transition can then be detected with the help of some order parameter, for example the thermodynamic pressure for the nearest-neighbour Ising model, or the average magnetisation of the ensemble for the mean-field Curie–Weiss model. Based on the behaviour of these quantities (or rather on the behaviour of the infinite-volume partition function), one can then characterise the phase transitions as either continuous (second-order) or discontinuous (first-order).
| i | 5026332b024d492c476c90e5ba66c839 |
To improve the interpretability of the DeepAR model and explore the impact of the considered features on the model output,
we also compute the Shapley values {{cite:633536f1af888daeb89442fc47aec1ecc8a23efe}} of the model using the SHAP library available for Python {{cite:1e199566e57c5695d94408923d0b4f7ad614e0f2}}. SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model {{cite:633536f1af888daeb89442fc47aec1ecc8a23efe}}. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions {{cite:1e199566e57c5695d94408923d0b4f7ad614e0f2}}. In our case, we use the model-agnostic KernelExplainer method, which explains any function by using a specially weighted local linear regression to estimate SHAP values for the considered model. For this analysis, we only focus on the model having the Nelson and Siegel term-structure factors and all GDELT variables as covariates, that is, DeepAR-Factors-GDELT.
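A minimal sketch of this analysis with the SHAP library might look as follows; `model`, `X_background`, and `X_explain` are assumptions for illustration (the fitted predictor exposed as a plain prediction function, a background sample, and the rows to explain).

```python
import shap

# Model-agnostic Kernel SHAP: explain predictions relative to a background sample.
explainer = shap.KernelExplainer(model.predict, X_background)
shap_values = explainer.shap_values(X_explain)

# Global importance, as in the bar plot discussed next:
# mean absolute SHAP value per feature, sorted by impact.
shap.summary_plot(shap_values, X_explain, plot_type="bar")
```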
To get an overview of which features are most important for the DeepAR model, in Figure REF we show a standard bar plot with the mean absolute SHAP values of the top features over the data sample, sorted by importance from the features with the largest impact on the model output to those with the smallest. As expected, FACTOR_1 has the largest impact, although this variable is immediately followed by other GDELT features generally referring to emotions, such as sadness, diffidence, stupefaction, confidence, distress, anxiety and happiness (see Table REF in Appendix B for the complete mapping).
These results confirm the findings in previous literature that have shown media emotions to be relevant predictors for movements in spreads {{cite:e9adba713c40bc1611082bbac854e75adb322a51}}, bond markets interest rates {{cite:4cac741d4c1964b2e79ce4e8b17b22447f38dfaf}}, {{cite:1e26b9dcd1f71f1fab85bce58c725687efc00c2d}}, and stock prices {{cite:d056655ea90e4cc5acb1fbe66a4b8a4eb8d9288d}}, and pointing at the explanatory and predictive power of these alternative variables in conjunction with classical determinants used for government credit spreads forecasts.
| r | a312933227c877c24131f73d261e75cc |
Results for this experiment are presented in Table (REF ). Despite using a much smaller pre-training dataset, training for just 12 hours, and using less than 50% of the parameters, our model outperforms the LSTM model described in {{cite:c4e8b5be0c026b596a968569a515a812f5ddd6b9}}. We also include a self-attention-based model, DistilBert {{cite:c2b750df04ae1b43f5d696ced21a0e604ecd1b87}}, for comparison; note, however, that DistilBert was trained on a more general yet much larger dataset (the concatenation of English Wikipedia and the Toronto Book Corpus).
| r | 411f3266d2d0fc1621bdb341da549f74 |
In this section, we provide additional quantitative results on KITTI {{cite:714d84724e439a5173aa03c23063ff46eaee6a8f}}, {{cite:fa0a33a60e027e0680201165357fc2262172caea}} in addition to the ones presented in the main paper by comparing the proposed shared encoder between the pose and mask to separate encoders for each, by changing the image resolution in the paper to both lower and higher resolutions, and finally expanding the Cityscapes {{cite:6e3ca5ed08517163272d17f5e74f4004461c2997}} and DDAD {{cite:0648e7353afb469991a726d859a7d6ee740f5444}} results, and the results on the moving regions into more detailed tables.
| r | 786c9ed9e556879ff0e1a12dcd1323cd |
Applying Theorem REF to the generating function (REF ) (cf. {{cite:ca47722ff6d9c0818ec55473995350a2cd208e89}}), one finds that {{formula:ba662a59-8b08-461a-aa59-5309cc37ecf1}} as {{formula:657b83c9-72e9-4c54-a06f-9b3c89640d1d}} , with {{formula:63bef9aa-9ef5-404b-818d-cf766c7a9e12}} ; this is Robinson's constant. The theorem can also be used to solve a problem of Erlebach and Ruehr in counting Hamiltonian cycles for bipartite graphs; see Knuth {{cite:cf5bcfd5bf0f397e093b5d3b8a60efe4a2f1431b}}. Darboux's method also plays an important role in determining orthogonal measures, especially those with explicit three-term recurrence relations; see Ismail {{cite:0047c67c4d96b6ede1a7e0ebce3bd4b5c85e9198}}.
| m | 1112ebed0edb77f37d6818a4f292a48b |
Overall, we believe this work brings a new perspective into questions related to Hamiltonian complexity {{cite:54b58634c88aa1476a59e41a2fbbcdef3eed94d1}} by focusing on problems that can be solved efficiently by quantum devices, unlike
problems such as the QMA-complete ground state problem {{cite:856f74ec9c3958edeb3b102c0478f2628197cab2}}.
Furthermore, we believe it could inspire new demonstrations of quantum advantage for measuring other quantities of interest in quantum many-body physics, which would strengthen the belief that quantum computers and simulators can answer problems about quantum matter beyond the power of any present or future classical algorithms.
| d | de93b45fbd88e108dd5adf09811349c3 |
The use of higher dimensionality, such as 512, in the DCC layers is shown to perform better {{cite:bd97fea332aa3b78d96c8162e6b12cded3590278}}. Using 512 dimensions in the transformer decoder, however, can make it computationally intensive. It is therefore better to run the DCC encoder at a higher dimensionality than the transformer decoder. Since we use projection layers to convert between the encoder and decoder dimensions, we include a skip connection in the decoder to mitigate the effect the projection layers have on gradients. Our ablation study shows that the skip connection improves the SI-SNRi from 9.06 to 9.43.
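A minimal PyTorch sketch of this dimension-bridging scheme is given below; the widths, the single decoder layer, and the exact placement of the skip connection are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ProjectedDecoderBlock(nn.Module):
    """Bridge a 512-d encoder and a narrower transformer decoder with
    projection layers, adding a skip connection around them."""

    def __init__(self, enc_dim=512, dec_dim=256, nhead=8):
        super().__init__()
        self.down = nn.Linear(enc_dim, dec_dim)  # encoder -> decoder width
        self.decoder = nn.TransformerDecoderLayer(
            d_model=dec_dim, nhead=nhead, batch_first=True)
        self.up = nn.Linear(dec_dim, enc_dim)    # decoder -> encoder width

    def forward(self, enc_feats, memory):
        x = self.decoder(self.down(enc_feats), self.down(memory))
        # Skip connection so gradients can bypass the projections.
        return enc_feats + self.up(x)
```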
{{table:f900e4ed-642b-4f2e-815d-56a72a823da2}} | r | 6f76bbb3854ba4adeb1c9fec140685b3 |
Note that by duality (see {{cite:3367243dd52ad9ac49399b01b563c113db3083d0}}, ch. XII, §2)
{{formula:ae109200-d85d-4093-94eb-30a59de853a8}}
| r | fc324fda2d0a60c77c9e6a0523b6cc7d |
The monolingual and multilingual models offer comparable performance according to all measures and across all languages. This indicates that including other languages in the train set, besides the target language, generally does not improve the performance of the models, especially if the training dataset is sufficiently large. This finding supports the so-called curse of multilinguality {{cite:ac28a71142f0d361199d8f6b6ef75551426bc363}}, i.e., a trade-off between the number of languages the model supports and the overall decrease in performance on monolingual and cross-lingual benchmarks. It is, however, very likely that the transfer between languages would be more successful if the language set contained more similar languages.
{{table:8480fa36-d2ce-4246-acde-fb332f4c1a79}}{{table:c0be7241-7a6c-488c-b4eb-ece4f57e002b}}{{table:6a1893ac-4c47-4f72-be01-af070cffa45e}} | r | 3312340476f5a56bd456c029cc0728a4 |
The problem of estimation of functionals of parameters of high-dimensional and infinite-dimensional statistical models has a long history going back
to the 1970s. A very incomplete list of important references includes {{cite:e6d5bdf75cf95239673ac54a1bb797071ca6010c}}, {{cite:c1d9a24c3e94d7dc99379741281c2217e6fbb249}}, {{cite:10114c6b6e51a7a54330a6eeb41f0b977a923173}}, {{cite:a62015bca651b68068ffc55b4794168b2b64a137}}, {{cite:fa8ddec068da6b9873a0ff386841003212e52d4e}}, {{cite:6f4126d710439e66416480695164c237ad81c3ad}}, {{cite:666cf934acfdefedc15ccc8b3466248728ef9cbb}}, {{cite:69dd716e835111b7f590ca7a76c1e858c23d9a3e}}, {{cite:7d5cdfcadad615ca25fcc86c39e131294117e532}}, {{cite:fcf8d18282c16b419f08dfee3e88b6e2a77a6544}}. One of the main difficulties in this problem is related to the fact that, in the case of a high-dimensional parameter {{formula:42ee6dc3-d7f5-4b51-aad4-caebe0daf432}} and its
reasonable estimator {{formula:bd7330dc-8c82-46b8-befe-b2b572bbbe9f}} (for instance, the maximum likelihood estimator), a naive plug-in estimator {{formula:28942792-9e53-4540-897c-efd5e0bbbebd}} of a non-linear functional {{formula:2680a593-83ad-47ee-b182-34e47a1dfdc2}} would typically have a large bias and, due to this, would fail to achieve optimal convergence rates. Thus, the development of bias-reduction methods becomes a crucial ingredient of functional estimation. Motivated by this problem, several general approaches to bias reduction (jackknife, bootstrap, and Taylor expansion methods) were studied in the recent paper {{cite:5aa4dfae849710e1375a35664b871d3a147bfb1e}} in the case of estimation of a smooth function of the parameter of the binomial model, and it was shown that, even in this simple case, the analysis of these approaches leads to non-trivial problems of approximation theory.
| r | f0aee38de513bf12e1e4a2bd60fa6ca6 |
2. An experimental detection of the axion particles is evidently a demanding task. As mentioned, the haloscope approach may be a feasible method {{cite:1c9694400b768b8b48b35b5e3f5952b71e9e9718}}, {{cite:0afe65095400e55ce39d2c8f0d3ed0380ea3b55a}}, {{cite:f5b14b30e2027d127684f4d6514b8831e0992811}}, {{cite:f61d22b64e0df199f5bae75480c8f34fbba40099}}, {{cite:39f1d451297b813df106bf32dd76d53be8c1c971}}, although one must then be able to lower the resonance frequencies {{formula:bea32afb-858e-4aeb-a545-c33f1af4bf81}} in a dielectric cylinder enough that they approximately coincide with the axion frequency {{formula:05ef6ec3-a95e-4082-abb3-59ee5183e3a4}} rad/s. There are various ways of doing this {{cite:f61d22b64e0df199f5bae75480c8f34fbba40099}}, {{cite:39f1d451297b813df106bf32dd76d53be8c1c971}}. For instance, in Ref. {{cite:f61d22b64e0df199f5bae75480c8f34fbba40099}} a special variant of a dielectric haloscope was proposed, aiming to detect axions in the high-mass range {{formula:72b5fc9a-56c6-41a2-b342-27b237ac2a98}} eV. A stack of a large number {{formula:46b27c96-a0b3-42bc-a4d3-40bd937aa261}} of parallel plates ({{formula:a31820e4-250d-407c-8f3c-edabe4e8aefb}} ) was assumed to be situated parallel to a mirror. This setup offered advantages from a large transverse area and also from the possibility of making both broadband and narrow-band tests.
| d | d2190d16ce418bca8cd1bfef6cd04466 |
The positively drifting type II structures have also been interpreted as being due to downward particle acceleration at a termination shock generated by a reconnection outflow ({{cite:29f29ca33f62e635907b1a1ab29f0a4d6309e65c}}, {{cite:83ce73c5860af5f008534477dca9e0a341f22a54}}, {{cite:8be2bdb9a63d3d12781e93215dc89f259082cac6}}) or to slippage of field lines causing a shift in the reconnection point ({{cite:253d07d880c3e1728e430fd38b8882510eb66cb8}}).
However, the event reported in this letter occurred very far from the flaring active region and was produced
in a region where, according to the field extrapolation, no oppositely directed magnetic field lines were observed; as such, we can rule out the above hypotheses.
{{cite:61273d2ccc56152b420d79f3b6e40a0789228f6d}} used a 2.5D numerical code to perform simulations of fast-mode MHD wave propagation in the corona and its interaction with coronal holes.
In their simulations, they presented the temporal evolution of the incoming wave, its impact with a low-density region characterized by high Alfvén speed, as in coronal holes, and the subsequently evolving secondary reflected, transmitted, and traversing waves.
As the wave moved toward the coronal hole, they observed a steepening of the wave that subsequently developed into a shock.
At the coronal hole boundary, a reflection of the incident wave occurred as an immediate result of the impact of the wave on the coronal hole.
As a matter of fact, a large coronal hole was visible in the STEREO-A/EUVI images at lower heliolatitudes (see Fig. REF ), and a clear interaction between the EUV wave and the coronal hole boundary could be seen in the 195 Å passband (see Fig. REF ), hinting that the shock reflection must have occurred, as expected, somewhat earlier than 10:15 UT.
Our proposed mechanism is exemplified in the cartoon shown in Fig. REF : the reverse type II radio spectral feature was emitted at the intersection of the shock wave, reflected at the coronal hole boundary, with an intervening low-Alfvén-speed region characterized by an open field configuration.
| d | b25e0999154c7d9c2992c52146be1e6a |
which we can solve by the well-known subgradient algorithm {{cite:25f8db3d9b65458c05986ed4efb157af696f1ed2}}. From now on, we concentrate on the calculation of the subproblem (REF ).
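For reference, a minimal sketch of the subgradient method reads as follows; the diminishing step size 1/sqrt(k+1) is one standard choice among several, not necessarily the schedule used here.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, steps=1000):
    """Plain subgradient method with a diminishing step size, tracking the
    best iterate since the objective need not decrease monotonically."""
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for k in range(steps):
        x = x - subgrad(x) / np.sqrt(k + 1.0)  # any subgradient at x works
        fx = f(x)
        if fx < f_best:
            x_best, f_best = x.copy(), fx
    return x_best
```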
| m | 3360342eb528cf9ad94642443b460a7d |
VGG is a convolutional neural network (CNN) model that achieved significant success in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) competition in 2014 {{cite:891d22b44b0583da43060fc4fa9e4f014370381f}}. VGG consists of sequences of convolution and max-pooling layers. In our numerical experiments, the VGG16 variant with 16 trainable layers is used.
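A minimal sketch instantiating this variant from torchvision is shown below; whether training starts from pretrained weights is not stated above, so `weights=None` and the 10-class output head are assumptions.

```python
import torch
from torchvision import models

# VGG16: 13 convolutional + 3 fully connected trainable layers.
vgg16 = models.vgg16(weights=None, num_classes=10)
x = torch.randn(1, 3, 224, 224)  # VGG's standard input resolution
print(vgg16(x).shape)            # torch.Size([1, 10])
```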
| m | f0c3582ce32000e875a17db3b6442aa8 |
(For a more precise estimate of the error term in the Prime Number
Theorem, we refer the reader to {{cite:6035220c5b5823263850cd35d6aa5e0b97d973a0}}). Furthermore, for {{formula:791964bc-3393-43a1-b678-a464371090fc}} , we have {{formula:7c95aac2-efed-4ce1-82f2-da1966e3105d}} . Hence we get
{{formula:11cf78d9-a97a-465c-835a-3dc1954b7230}}
| r | 4a77e46e7ecd143b37d425eaa7c2d0a3 |
In this paper, we formally derived the asymptotic posterior normality for the family of GEV distributions under a large class of priors. Proposition REF demonstrated that the posterior approximation can be easily obtained if the parameter space {{formula:7888a60a-ea2b-415f-9f98-f17302d43e48}} is compact, as is generally the case {{cite:be6f073d5043ae8d76ea5b8a444ad981f294c3e7}}. The bulk of the technical difficulty lies in evaluating the integral of {{formula:7f369b08-63f4-4957-a769-5fea4a347f6f}} outside the neighbourhood of {{formula:14262586-4723-44c2-8f26-00470363fa2f}} , which is closely related to the elliptic integral equivalent to the Carlson {{formula:d743d6cd-5bdb-47ce-b78b-da4578722750}} -function; see Lemma REF . An interesting side result of the posterior normality is that the lower bound in (REF ) is attained asymptotically. It therefore provides a novel way of estimating an integral that would be difficult to calculate otherwise.
| d | ef1ab9dc12e8e2edf7ba8e4094dd1196 |
Optical flow is intuitively a matching problem that aims at finding corresponding pixels. To achieve this, we can compare the similarity of the features for each pixel and identify the corresponding pixel that gives the highest similarity. Such a process requires the features to be discriminative enough to stand out. Aggregating the spatial contexts within the image itself and information from the other image can intuitively alleviate ambiguities and improve their distinctiveness. Such design philosophies have enabled great achievements in sparse feature matching frameworks {{cite:18eb563f4605bb587600f0092c3e85d1d12d9a9d}}, {{cite:4cca6c26b6a7d6c7d145aa0e152c862fac73e917}}. The success of sparse matching, which usually features large viewpoint changes, motivates us to formulate optical flow as an explicit global matching problem in order to address the issue of large displacements.
| m | 703f0420b6c60eed0d0bfea9eaccc6be |
To be able to talk about thermodynamic phases, phase transitions, temperatures, and so on, the system under consideration must reach equilibrium {{cite:561f19ed80af6c724a2399c641b64ad6e0db3b1a}}. Only at equilibrium can variables such as temperature, pressure, and entropy density be defined and the thermodynamic relations between them be investigated. Therefore, equilibrium is assumed in lattice QCD and various QCD-based models in their calculations of the phase transition.
| i | 429e63788164b50e04829b49dfd90396 |
In star-forming regions, protostellar jets are observed as evidence of star formation.
Jet and outflow driving in the star formation process has been investigated in core collapse simulations.
However, in many simulations, the jet driving was investigated only in a limited parameter range {{cite:8e40562ebb6ebaf96743d495f4dc191f0509184e}}.
Clouds with strong magnetic fields and low-mass accretion rates are usually chosen as the initial condition of core collapse simulations
to investigate a high-velocity flow driven by the protostar
{{cite:8e40562ebb6ebaf96743d495f4dc191f0509184e}}, {{cite:2e7fa06d7da1d1e0633930fa7ed8c516f647081b}}, {{cite:56857c00529f6c2e68fcb7b5a0574a9eca0bb9fa}}. (It should be noted that, for the low-mass star formation case, {{cite:0c906924b77e42f93cca172dbea2ea6b2ed0afcd}} investigated jet driving in clouds with different magnetic field strengths ({{formula:bd5ebab6-b84e-4e85-a90e-38377bc355ed}} , 10, 20, 100) while resolving the protostar. Although they calculated the jet driving only for {{formula:67933db7-0877-4628-8745-28ff7e9f5b8e}} yr, the high-velocity jet extends to {{formula:1444936e-8626-414d-8ce5-a74b905c5f49}} au from the protostar in the models with {{formula:82d354e9-5c68-40b5-8e35-28425892fbe6}} , 10, and 20. Note that, in their study, no jet appears in the model with {{formula:716c4c83-d2c6-4895-8859-1cb9f02780a6}} . I confirmed that, in my simulations, the jet appears and is sustained at least in the very early phase for the models with {{formula:a34df256-7be7-42b3-a553-fa6797f31040}} , 5, 10, and 20 (Fig. REF ).)
The difference in the initial condition may cause different outcomes {{cite:6ff9dd7b0bb3c4c91f76c69992527dd066e3234a}}, {{cite:af835a643736cdd2b1e34ea50a21a66bc62844d8}}.
Thus, we need to perform core collapse simulations with various initial conditions to investigate whether protostellar jets universally appear in the star formation process.
This study focuses on jet driving with different star-forming environments.
In other words, I especially included in the investigation jet driving in clouds with weak magnetic fields and high mass accretion rates.
Another purpose of this study is to investigate whether a jet driven near the protostar can help to drive a low-velocity and wide-angle outflow, based on the finding in Paper III that the outflow fails to be driven with a weak magnetic field and/or a high mass accretion rate.
{{figure:dbfd8ae7-4fe2-43e2-b11c-32f13d4dcbff}} | d | faa3ede516b755dd8819a82ad51323f8 |
Figure REF (right) illustrates the ROC with respect to the pixel-level measure. In Table 1, we compare the pixel-level EER of our approach to that of other approaches. Our method's EER is 24 percent, whereas the next best result, 29.9 percent, is reported for the method of Li {{cite:8ffe4266d5ef65101ea1b88bb5d264eb7b857d36}}. Our method is thus 5.9 percentage points better than the next best result.
The results (both ROC and EER) show that our algorithm outperforms the other methods on the pixel-level measure. We also use a dual-pixel-level measure to analyze the accuracy of anomaly localization. Figure REF shows the effect of the parameter {{formula:a1a07ac8-f8d0-44da-aa80-828749cab95e}} on our algorithm. The algorithm performs well, even better than the state of the art, at the pixel level with {{formula:08238373-f422-44d5-8125-1a8b43b03bb0}} = 0.05 percent and 0 percent. Figure REF compares our approach at the frame level and the pixel level; in contrast to all reported algorithms, the pixel-level measure is very close to the frame-level measure for our algorithm.
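As a point of reference, the EER reported above can be computed from ROC quantities; a minimal sketch using the nearest-crossing approximation of the EER point (this is the generic definition, not necessarily the exact interpolation used in the paper):

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(y_true, scores):
    """EER: the operating point where the false-positive rate equals the
    false-negative rate (1 - TPR), located at the nearest ROC crossing."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    fnr = 1.0 - tpr
    idx = int(np.argmin(np.abs(fpr - fnr)))
    return (fpr[idx] + fnr[idx]) / 2.0
```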
{{figure:49d7115d-9f6c-4d25-8b22-c41e266c4ef0}}{{table:1088259a-4625-491f-b9ce-1fb90dca152a}} | r | 0229576bde8736d9881c6a5a01dc97ce |
The general position problem of finding a largest general position set of a graph is a generalization of the No-three-in-line problem in the {{formula:e43ae5d7-3cf4-493b-9950-7b48c393e3f7}} grid from discrete geometry, which can be traced back to Dudeney's famous “Puzzle with Pawns” in his book “Amusements in Mathematics” {{cite:ca1c36a1293eb0e90fa8a7602731a4980200e60b}} from 1917. In 1995, Korner {{cite:4ae2f072ec87224a8e38c31e202808479a7bf2ed}} investigated the general position problem on hypercubes, while in {{cite:dc9ae8392e39ec91b996e5ee7e51e01c92cdfbea}} it was considered for the first time on general graphs. However, the formalisation of the problem as we know it today and the notation now in use were introduced in {{cite:1ce3d05f1ece71212b633c17afcc5b9fb848740a}}, {{cite:5300741d52e614c99164139f89223274fe107754}}. Also see {{cite:155c1fa2ceebbefbcd07872d98ae6595f63b7aab}}, {{cite:101a105f70f5e13de907fb9c62c3c3bfcc6d0a52}}, {{cite:d43f89fe93e5bcc024a625e8d48d216b62564535}}, {{cite:4ea25503cc85414365db7b209fb7aebf06dadf57}} for the related general position subset selection problem in computational geometry.
In 2019, general position sets in graphs were characterized {{cite:8cbf8e734fd670d3bae2ac783fd313e717deaae4}} and, after this, several additional papers on the general position problem were published, many of them with bounds on the maximum size of a general position set and exact values in graph products {{cite:cdc0129559400e93b1b58837b9467c32e8dc1022}}, {{cite:b312fb0f2ef12efcef620a06959d9e2f1b44bbbb}}, {{cite:f14db3bd73d7f67dd521c8db6942c93069058530}}, {{cite:bc25f9f4107cb9b629092b23f916531ffb877a87}}, {{cite:5f638b9c050f9d6a9afd7cc8b54fc9a56fcccd07}}, {{cite:169fce8340b0beb3bc3c6f49a929ee73d1aa7d16}}, {{cite:c256d2c72e7379577ef2eee49b7fd1093568eec3}}, {{cite:2552f6fe592fb18984bc9888a64c037cf98c5aab}}.
| i | 5a2e73dc5825f9900dd0b3ca5f27e5ba |
Coherent control of quantum states plays a significant role in extensive practical applications including quantum information processing {{cite:fc740f91668ab03cb0d325128e74f1f01c004b5f}}, high-precision measurement {{cite:e8c1bbb6fdfa7ee24adf7761c7d08c7413a9b98c}}, and manipulation of atoms and molecules {{cite:b561a086ce97cb44b3ed92235c417a3c39f0fde3}}. To robustly obtain a target state with high fidelity, adiabatic passage (AP) as well as its variants has been demonstrated as an effective means through well-tuned radiative interactions {{cite:5fd209fbf0734d51d1b85babaaf53f4f6a212c6a}}, {{cite:e73413f957beee9348886fa77ff1dd1e7e33e264}}, {{cite:bcc69785f3b616d332e1e36832a8159b9630e5af}}, {{cite:65ff5456c2e9c496ee73d1ff8aa6be8081b4e380}}, {{cite:9506d85753ab54ca0e0b001a8c8bf7139fed066c}}, while suffering from lengthy evolution owing to the limitation of adiabatic criterion. Recently, methods for speeding up AP, called shortcut to adiabatic passage (STAP), provide new approaches for rendering a system to reach the desired state quickly, accurately, and robustly {{cite:dafa3a25d49a584022ad112d1e361387796e8527}}, {{cite:4c1542d08f0ca04aedc008d68879d47a88e499b6}}, {{cite:b79ac59de60ca963f8669d2ecff617d2c1c7e0b4}}, {{cite:870d0a608ca0bee9b84e4d60f69c08cf07b20440}}. The development of STAP makes it applicable widely from quantum optics {{cite:d08a4c2ac79c4ffc4bf2ac58f87414006fc4f837}}, {{cite:82da7b5f51761deb658a87b18d6155df0b2b78a6}}, {{cite:79ca2bde22c00eee9299b51dee5b1920bc8d1ed9}}, {{cite:f015b83947af9ba1f189243bbb6c392121a1c0ba}}, {{cite:f4ea7ba495153b4b63e3da1c93f6a1e39e021e6e}} and integrated optics {{cite:72591bfa2045981871dca27ef431c0919d30fd0e}}, {{cite:7069f239846b412ab7b8b782cda99058ce1ea213}}, {{cite:d67581264b9d9a4c67e8df1773d3317150be6dd1}}, {{cite:70e84c545abca8769bc748863f0bb06936e32378}}, {{cite:566c327b0cd6b0e214402bb83d0695f3c5c131e5}} to mechanical engineering {{cite:336669e476596d7274a8732a70e0543fcd58bf66}}, {{cite:c6e4e47c94462dae3278945c9d32a3b949ecf291}}, physical chemistry {{cite:88a647b6a123a532a13c86bc9d3bf2163349f55b}}, {{cite:75a3419889bf7da20911b9afbd3c4a41ae30afe3}}, and biology {{cite:416d2777793ae41058853af9604f5169afbc0e45}}, {{cite:c80ed0eceaee50c11188d93af147f5e9d0150738}}. In this work, we intend to apply STAP for designing compact couplers and functional metamaterial in the field of acoustics.
| i | 4575463ca459fd183696af4b5cb65367 |
In fact, analyzing the effect of the dynamical friction of a dark matter density spike in a binary system is not a new idea. Nevertheless, most of the related studies have focused on binaries of compact objects (e.g., black hole binaries) rather than BH-LMXB systems {{cite:84b95150008c0e485bd85ebfa2e2bcc61bc5039d}}, {{cite:9dbc84e7ce0c33e2657d3fd6e470a096d01bb7b8}}, {{cite:4a232d6197501701d7e51a1a8c693ebdefb0b5f6}}, {{cite:2c95aa7242563e45cab2e5f494a82d1eaf98c4a8}}, {{cite:80f91ee7987b8a542417dc51439f31cb904fa9e8}}, {{cite:4819f41ebc58829d12866ab15002a8772cae97e0}}, {{cite:74dad56b2e6df22320c3e9ec51548f7eb792e1af}}, {{cite:1bf0d191a152c43995591a8f19a172f104a6de5d}}. In compact binaries, both gravitational radiation and the dynamical friction of dark matter are significant. Therefore, gravitational wave detection might be required to reveal the nature of dark matter, which might contribute extra uncertainties to the constrained parameters. Since optical and X-ray observations can give very precise measurements of most of the important physical parameters in BH-LMXBs, we anticipate that analyzing BH-LMXBs can better reveal the nature of the dark matter density spike surrounding a black hole. There are at least 18 black hole X-ray binaries in our Galaxy {{cite:de70e353c984f56d6a4127a2feaf6e7c348ec117}}, which can give rich information to constrain the nature of dark matter. For example, the nearby black hole X-ray binary Nova Muscae 1991 also shows an abnormally fast orbital decay {{formula:f219c993-4578-4ee8-b829-9fcc66e5400d}} ms yr{{formula:5b3b8c21-4500-4b3e-a27b-414149fee413}} , although the uncertainty is quite large {{cite:09e4dd8ce8a691e235402c5556fb3929e256f194}}. Future high-quality measurements may help further confirm the existence of dark matter density spikes in these black hole X-ray binaries. This kind of analysis would open an entirely new window for observations and theoretical studies investigating dark matter astrophysics {{cite:e9de9f7826c18af819b7ccef6d9339cafd543e25}}.
{{table:8cbc7298-4b99-4106-9d64-f70e9b863560}}{{table:2768c735-ab16-4cc0-957f-31309d593434}}{{figure:b47a415e-e1c5-4d5d-bf34-ec2705db3ffa}} | d | aa79b56c72f28b808e32fa11e91da7d3 |
In addition to the studies on some aspects of the new solution, we also discussed the Kerr/CFT holography associated with the spacetime. The agreement of the near-horizon metric (REF ) with the general form discussed in {{cite:9d2c75b0a10e03911e3dc9f4a50ff0b0cffa9422}}, {{cite:a65fd786184602a9e8dcf4e9c2cc6fa05fba8e44}} allows us to employ the method developed in those works to obtain the corresponding central charge and Frolov-Thorne temperature. In turn, these quantities give us the extremal black hole entropy after using the Cardy formula, which can be viewed as a microscopic calculation of the black hole entropy. This result is a generalization of some previous works on the magnetized Kerr/CFT correspondence {{cite:7c51964e58373b485d69f21f31408e9a81943fa1}}, {{cite:1cb4acf03230209d3996654855afb1ca9656c52d}}, where now the spacetime is equipped with the NUT parameter.
| d | 36a97be29c9c14511bfa5beb1fd20022 |
Finally, we recommend considering alternative inference methods in order to accommodate larger datasets, which is currently an active research area in computational statistics (e.g., {{cite:9bf54c84931b9532935758eb9562dfda70766def}}).
| d | c2aea142354718477848000ed7a76f03 |
In this section, we test the original PINNs with different weight selections {{formula:41a2f550-67e1-4190-9e60-41a40efe7a87}} in order to contrast their capability to represent a variety of equations with that of lbPINNs. We aim to highlight the ability of our method to handle the classical Navier–Stokes equations, which are closely related to the physics of many scientific phenomena. The full Navier–Stokes equations are relevant to many engineering applications, such as vascular flow studies, molecular diffusion analysis, and airplane and automobile design. We apply the proposed lbPINNs to simulate different incompressible Navier–Stokes flows. First, we consider the two-dimensional steady Kovasznay flow, which has an analytic solution, to investigate the effectiveness of lbPINNs with boundary constraints. Then we employ the self-adaptive loss-balanced method on an unsteady cylinder wake in two dimensions. Finally, the three-dimensional unsteady Beltrami flow, which involves both initial and boundary constraints, is also successfully simulated by lbPINNs. All collocation points are provided by CFD simulations {{cite:8fdc4704ecbe2d4bdbbd54608415962af12ac144}}.
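As a rough illustration of the self-adaptive balancing idea, the sketch below implements one standard uncertainty-weighting recipe in which each loss term carries a trainable log-variance; this is an assumption about the general form of such schemes, not necessarily the exact formulation of lbPINNs.

```python
import torch
import torch.nn as nn

class BalancedLoss(nn.Module):
    """Each loss term (e.g., PDE residual, boundary, initial condition) gets a
    trainable log-variance s_i; sum_i exp(-s_i) * L_i + s_i is minimized
    jointly with the network, letting the weights adapt during training."""

    def __init__(self, n_terms=3):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(n_terms))

    def forward(self, losses):
        losses = torch.stack(list(losses))  # scalar loss tensors
        return torch.sum(torch.exp(-self.log_var) * losses + self.log_var)
```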
| r | 0cbf14117933e8ecd4cc0fb91233634a |
where the shape parameter {{formula:8abce3ea-ff98-43d9-8fb8-9a13d32b0f44}} GeV has been fixed
using the rich experimental data on the {{formula:6ad406cc-60e9-4556-801d-23daa1f0b21b}} and {{formula:ea1c8799-80f6-44ab-a704-dcdb8fea6a7e}}
decays {{cite:fc283edb906c56f4ddbbf8f53c58ccfb74125804}}, {{cite:95dcc2ac1d082b4956e863914823ae8514353310}}, {{cite:c9d9ea2b6ba0c18682b687048206ec4df34c8898}}.
| r | 489963f91c5ee965b44524141f40764f |
This sounds more or less straightforward; however, finding the global minimum of a nonlinear function such as {{formula:2729e296-ae39-478f-9a30-3536be14c9a4}} is an NP-complete problem {{cite:a7e11210b185b198cdbd482b92317bf3cb84ad3e}}. In a practical sense, one cannot expect to solve such problems exactly. However, there is an attractive feature of the form of {{formula:234dc6b2-1d01-48a8-af31-fcb07780be33}} that permits us to accomplish more.
| m | a2cea222dc666a5e3e0a8c9fc166c1c3 |
Experiments.
We first created explanations for the same data point following the methods of LIME, SHAP and SAT, which are described in detail next.
LIME {{cite:ff509a873e554941ae54f9893f983324ab8e2b55}} provides local explanations of model predictions by creating a linear approximation of the model near the input. Feature importance can be inferred from this local linear approximation by looking at the feature weights. For our experiments, we mirrored the feature-importance experiments of the LIME repositoryhttps://github.com/marcotcr/lime.
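A minimal sketch mirroring that setup might look as follows; `X_train`, `feature_names`, and `model` are assumed to exist and are illustrative, not taken from the repository's exact scripts.

```python
from lime.lime_tabular import LimeTabularExplainer

# Local surrogate explainer fit around a single input row.
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 discretize_continuous=True)
exp = explainer.explain_instance(X_train[0], model.predict_proba,
                                 num_features=5)
# Feature weights of the local linear approximation around this input:
print(exp.as_list())
```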
{{figure:69f1dd54-fd19-4811-9f4e-a7fbde5effd1}}{{figure:36eb7042-8949-4138-b11c-13779aca1a81}} | m | 976b245f81a74de73c5f7655b7969b7b |
where {{formula:03a0a956-f05b-4007-9e6e-c1c102a73e60}} {{cite:c2f96c6a8380f38e77de211684c71b267be9a579}}, {{cite:18fdf27edf451dad4423f5987bb51c254cf0a31f}}.
Therefore, the initial length {{formula:f4f5ac36-f143-4b21-a788-9da114a5e943}} of the loop decreases as
{{formula:db803d57-2933-42d8-a38f-95349735d8ae}}
| d | 945a79e15eb9e411f930e01c69fe4ed7 |
Against this background, this paper considers continual learning by extending EWC to PSFA, which is regarded as the model underlying the multimode dynamic processes that generate the observed sequential data. The proposed method is referred to as PSFA-EWC. Data from each mode arrive sequentially and unknown modes are allowed.
Moreover, the proposed algorithm is designed to distinguish real faults from normal operating deviations. When a new mode is identified by PSFA and prior knowledge, it is assumed that a set of data is collected before learning. A quadratic penalty term is introduced to avoid dramatic changes of mode-relevant parameters when a new mode is trained {{cite:207f8d108b77497592965e6d4095ab902a0cd734}}. Similar to {{cite:e04fffeae905d203824084c731ae646cc14f7585}}, EWC is adopted to estimate the importance of the PSFA model parameters. The information from novel modes is assimilated while the learned knowledge is consolidated simultaneously, thus delivering continual learning ability over successive modes.
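The quadratic consolidation term mentioned above has the standard EWC form; a minimal PyTorch sketch, where `fisher` and `old_params` are assumed to hold the per-parameter Fisher estimates and the parameter values saved after the previous mode.

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1.0):
    """Quadratic consolidation term: parameters important for previously
    learned modes (per the Fisher information `fisher`) are anchored to
    their old values `old_params`; `lam` trades plasticity for stability."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# total_loss = new_mode_loss + ewc_penalty(model, old_params, fisher)
```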
| i | d235c7419fbc1e519e7b206bef18aad5 |
Circuit-QED solid-state systems {{cite:0ad2e5fe3f35e9a9f68d36afbc6f1e0900720706}} made of artificial atoms (AA) and resonator modes {{cite:5abf9290f3531f0ffc5b183344ad56819996523e}} are paradigm models for studying fundamental physics, from measurement theory {{cite:0ad2e5fe3f35e9a9f68d36afbc6f1e0900720706}} to quantum thermodynamics {{cite:43758035227e45e3ce4629c18abff71220595367}} and quantum communication {{cite:22c4e96f5f5694b04f6fc52880b1a9a739d99cc3}}, besides being one of the most promising platforms for quantum hardware {{cite:0b85e2f862f44919aeb5c8bd0c5f43f03879bcf1}}.
| i | e2957a61214e813e7882da06363cca6f |
Li et al. {{cite:b29ce6ad89bf634108659d336d2ea17e18d01fb2}} developed a whitening and color transform (WCT) method for style transfer based on a feature transform. Such methods can achieve real-time, arbitrary style transfer.
| m | f95708881c8a83034688a277338d0554 |
Various post-processing modules have been proposed in the literature. Modeling of the image prior based on maximum a posteriori (MAP) criteria is used in {{cite:ec75ec8ad6c1f70e18edc7b108c3b277c35ab428}}, but it is found to be considerably more expensive than handcrafted models. The post-processing method in {{cite:9a6925ec8dea0400cc0cd7f75e24abe963b1b93c}}, based on the adaptive DCT method, is quite good but fails to generate sharp images. The notion of Wiener filtering for denoising used in {{cite:ff31482403b576b80b206667e0e684ec7ab2555a}} was novel; however, the generated images contained highly visible noise in the edge regions. Similarly, the modeling of the image prior and quantization is found to be expensive in {{cite:7ebfac82ba7e0e220ab045e04a278d0698739ecb}}. Ren et al. {{cite:73aa7034d49028839006af617de42deeca7d0fd7}} combined the local sparsity image prior model and the non-local similarity model for artifact reduction, using the low-rank minimization and patch clustering proposed in {{cite:ec75ec8ad6c1f70e18edc7b108c3b277c35ab428}} and {{cite:7ebfac82ba7e0e220ab045e04a278d0698739ecb}}, respectively. Dictionary learning and variational regularization, used in {{cite:4e8d0bd80f94014127aa1ae733e9d6ee4ed2ed63}} for restoring images, outperformed the other decompression methods but consumed substantial computational time. The weighted nuclear norm minimization (WNNM) approach used in {{cite:0ada5d1b3d28c037065f368d466430f18f2360f3}}, though complex, outperformed the approach of Dabov et al. {{cite:ff31482403b576b80b206667e0e684ec7ab2555a}} in preserving local structure and reducing visual artifacts, at the cost of more computation time.
The deblocking method given in {{cite:51459f5429e0fcbf3682ce5b3d4ecc7cc1214258}}, which uses a non-convex low-rank constrained model, optimized the problem well. The denoising method proposed by Zhang et al. {{cite:7ad926594b462573c98197befe926aae48322b7a}} helped reduce artifacts but failed to retain edge information in the reconstructed images. The benchmark deblocking method ARCNN, proposed by Dong et al. {{cite:383b2ada868f5ac974795dded15a93f06a87fdf8}}, outperformed the methods discussed above.
All these deblocking and post-processing models aimed to improve the reconstruction quality while ignoring the joint optimization of the traditional codec with an encoder-decoder structure.
| m | 06aa196749968f6dd903f6cd1c6af8be |
The DFT+NEGF transport calculations within the local density approximation (LDA) {{cite:4953cc68a33438c7d358082584b62d3b1937ad0e}} were carried out using TranSiesta {{cite:1f37e0d26a04f047a3e7df0f5a1a8d7bc60d3d81}}, {{cite:dca54a71fe4231e6fd9bcfe3ac84eefabde21676}} and the post-processing codes TBtrans and sisl {{cite:94690cde02ffeba709fabb0beffe58b5a8a04279}}. The transmission function is calculated using the NEGF approach as
{{formula:8d6c7e17-9e05-4f8c-8674-df66ed2cadb0}}
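For readers unfamiliar with the NEGF expression above, a minimal NumPy sketch of the transmission evaluation is given below; this is the generic formula, not the TranSiesta/TBtrans implementation, and all matrices are assumed to be given in the same device-region basis.

```python
import numpy as np

def transmission(E, H, S, sigma_L, sigma_R, eta=1e-6):
    """T(E) = Tr[Gamma_L G Gamma_R G^dagger] with the retarded Green's
    function G = [(E + i*eta) S - H - Sigma_L - Sigma_R]^{-1}."""
    G = np.linalg.inv((E + 1j * eta) * S - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)  # electrode broadening
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return float(np.real(np.trace(gamma_L @ G @ gamma_R @ G.conj().T)))
```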
| m | fe669e7c7270bbe456bc88a031de88af |
Due to the lack of work that focuses only on text and audio, we compared with models that also consider the visual modality.
We compared our performance with two recent state of the art emotion recognition models on CMU-MOSEI.
Specifically, these are the graph Memory Fusion Network (Graph-MFN) {{cite:b7189cebe26f621f47e0af7f545d5dd7af60b306}} and the contextual inter-modal attention framework (CIM-Att) {{cite:53b8d67101eb4e5a83ff102c267a5a002599f6f7}}.
To match learning conditions, we compared with the single task learning (STL) model of {{cite:53b8d67101eb4e5a83ff102c267a5a002599f6f7}} where only emotion labels are used in training.
| m | 331f7e796c5673f74c56e75e004b1c33 |
F1 evaluates the average of the F1 scores between the positive and negative aggregate predicate sets.
{{table:1439da91-8076-4684-8882-2835aff2652a}}{{figure:cab06673-4083-4c2e-b7ef-fbd9ee99f447}}{{figure:ab6b735d-ee31-430f-a1ed-adee39ef780e}}Baseline Comparison
Table REF presents the scores for the baseline, GoalNet, and its variations. For a fair comparison with the baseline, we considered a grounding-aware version of GoalNet where we include the instance groundings for all objects in the input instruction as part of the goal-object set {{formula:84c96ae6-1b75-4c71-b8e0-ef81cdf3242e}} (see Section ). GoalNet improves SJI, IED, F1 and GRR by 25%, 34%, 27% and 26%, respectively. However, when we use Rintanen instead of SymSim, we get a marked improvement, albeit taking {{formula:e68fc081-c7cf-4703-a876-6ef31fea6186}} more time to train the model. With Rintanen in the training loop, we get 44%, 47%, 40% and 51% higher SJI, IED, F1 and GRR scores, demonstrating the importance of the action side-effects introduced by the planner in lieu of a simplified simulator. The improvement is primarily due to the dense representation of the input state, unlike the hand-crafted feature approach of {{cite:b50b3bb8cf2cc552c910982c391e6cc6b8a95357}}, enabling GoalNet to generalize to settings unseen at training. Figures REF and REF show state-action pairs generated by GoalNet in the kitchen and living room domains, respectively, demonstrating its ability to execute tasks successfully and reach a goal state.
Analysis of Model Components
Table REF also presents scores corresponding to model ablations. We fix the model capacities for a fair comparison.
Without the relational information in the form of the adjacency matrix {{formula:cfc7ff52-59d1-46b6-8308-c8f533d4a5f3}} for input {{formula:1e821d70-0386-4bc0-96c5-150cf8aaec0a}} , the model is unable to capture changes in the spatial relations among world objects. For instance, when filling a mug with water (see Fig. REF ), the {{formula:e0c81876-ae97-4d71-97eb-bde2a7e5b6ca}} establishes a {{formula:0c2cec72-b0c8-4e9a-96c0-647a66cc647c}} relation between the two objects. Missing such changes in the state leads to a drop in performance of {{formula:47280fb4-0e59-4ac2-b002-e7d2da8b2c92}} 5%.
Without the instance grounding of the objects, the problem becomes harder. For instance, without knowing that coffee-table in the instruction “place beer on top of coffee-table” is mapped to {{formula:1bbdfe30-3c83-476f-8f3a-9c567c507d96}} , the agent additionally needs to infer the specific instance to be manipulated in case multiple tables are present (see Fig. REF ). We see a drop of {{formula:d253c6b7-d5dd-4d29-b365-24695e69da9c}} 6% in this case, though the result is still higher than the baseline.
An alternative approach could be to predict only positive or only negative predicates. For instance, when we only predict positive predicates (- {{formula:5c5a1670-1907-4cc7-9b06-e056e1bb7a47}} prediction), GRR drops by {{formula:aa452e82-2ded-4bdc-b2c6-832ad089167b}} 11%. Such a model is unable to predict the required predicates when relations are only removed from the state. Examples include dropping objects, where the negative predicates include {{formula:9013c8cd-cdc7-47d4-ac05-dc9e9b1ba8a4}} for an object {{formula:a31c3d99-3607-40ba-afd5-c4311c9a4fca}} . Similarly, when only predicting negative predicates (- {{formula:d1d73a1c-2ac2-4bb3-8a86-06632207fcca}} prediction), the model suffers a drop of {{formula:232e2bc0-9e68-471c-9d33-f37380f211ea}} 80% in goal-reaching performance. This evidence demonstrates the importance of predicting both positive and negative predicates to be sent to the underpinning planner/simulator.
The inclusion of temporal context allows learning of correlated and commonly repeated action sequences. For instance, the task of filling a mug with water typically involves placing the mug beneath a faucet/tap and turning on the tap (see Fig. REF ). The ablation of this component leads to erroneous predictions when a particular predicate in a common plan fragment is missing or incorrectly predicted, for instance, when turning the tap on without placing the mug underneath.
Successful execution of an instruction may involve manipulation of multiple objects, such as beer and wine, when fetching both (see Fig. REF ). Attention over the objects in the input language instruction (see eq. REF ) enables GoalNet to attend over the goal objects to dynamically prioritize manipulation at each time step. When we replace this (- Goal Object Attn) with the mean of {{formula:4b262dcf-65dc-4bae-9b37-88e793ed25cb}} embeddings of goal objects, the GRR drops by {{formula:33704438-713c-4192-904c-2dea90510389}} 8%.
We use language instruction encoding {{formula:231e9d32-e8d9-4c0f-8c95-ff909fcd4e3b}} to attend over the world objects. The attention operation aligns the information of the input language sentence with the scene to learn task-relevant context by allocating appropriate weights to objects. Without this, the GRR score drops by 2.6%. Finally, without the grammar mask {{formula:c47acf4f-a8a8-4f47-9bf5-a1c9095b31f2}} , the GRR drops marginally (2%), showing the ability to learn grammar-related semantics of the domain.
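As a rough illustration of the instruction-conditioned attention described above, the sketch below shows one plausible parameterization; the dot-product scoring and the dimensions are assumptions for illustration, not GoalNet's exact form from eq. REF.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GoalObjectAttention(nn.Module):
    """The instruction encoding queries the object embeddings so that
    task-relevant objects can be prioritized dynamically at each step."""

    def __init__(self, obj_dim, instr_dim):
        super().__init__()
        self.query = nn.Linear(instr_dim, obj_dim)

    def forward(self, instr_enc, obj_embs):
        # instr_enc: (instr_dim,); obj_embs: (num_objects, obj_dim)
        q = self.query(instr_enc)            # project instruction to object space
        weights = F.softmax(obj_embs @ q, dim=0)
        return weights @ obj_embs            # attended object context
```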
{{table:241cca5d-5a80-4a80-8449-561116cfaab2}}Model Explorations.
We also explore additional variations of GoalNet. For instance, when encoding the language instruction using {{formula:6baa555a-6d47-42bd-a5f9-a609e708ee1e}} instead of {{formula:01320206-42c9-4136-bdc6-f85c71e84c39}} , the scores drop by at least {{formula:b4e59703-9f3a-4636-8f08-1ff6194f007e}} 26%. This highlights the power of pre-trained language models in their ability to encode the task intention in natural language instructions.
Additionally, we explore sending past predictions of both {{formula:6a93b23f-624e-4ab4-b277-dcdf91576599}} and {{formula:d9748fb9-4a1f-4741-b07c-b2311e65eb8c}} predicates when encoding the temporal context. In such a case, the GRR score drops by 8%. Similarly, when encoding temporal context using the encoding of the previous states, i.e., utilizing {{formula:533ee542-eebd-42da-aa37-359e2583dda0}} , we see a drop in GRR by 19%. This indicates that we need only the positive predicate information to encode the temporal context and additional information is superfluous to predict goal predicates effectively.
Generalization.
We additionally test the ability of GoalNet to generalize to unseen instruction inputs by building two generalization datasets (see Table REF ). We test performance when replacing verb frames in the training data with ones absent from the dataset; for instance, we replace boil in “boil milk” with heat. Even in such cases, GoalNet is able to successfully reach the goal state (see Fig. REF ). We additionally paraphrase the language input to test instruction-level generalization. For example, we paraphrase “gather all used plates and glasses, place into sink” as “collect all used dishes and glasses, keep in wash basin”.
Table REF above presents the performance scores of the baseline and GoalNet models. We observe that the baseline is unable to generalize to unseen verb frames and objects without human intervention. GoalNet, on the other hand, generalizes to novel object references and unseen verbs relative to the baseline, improving GRR by 65% in both the verb-replacement and instruction-paraphrasing settings. This generalization is achieved mainly due to the dense token ({{formula:ed816f49-98ac-444e-bedc-e5f77ffb874f}} ) and instruction ({{formula:096e74cb-3552-41be-9838-68e4731a9f92}} ) representations, as opposed to storing observed verb-frame hypotheses as the baseline does.
Conclusions
This paper proposes GoalNet, a novel neural architecture that learns to infer goal predicates for an input language instruction and world state, which, when passed to an underpinning symbolic planner, enables reaching goal states. GoalNet leverages independent inference over the objects in the world state and their spatial relations, applying instruction-conditioned attention and using temporal context to autoregressively predict goal predicates. GoalNet is trained on human demonstrations in kitchen and living-room environments and is able to generalize to unseen settings. This work demonstrates how learning and classical planning can be tied together to address the challenge of following multi-stage tasks for a robot. The neural model enables generalization to unseen language instructions, outperforming a state-of-the-art baseline in terms of goal reaching performance.
Future work will investigate out-of-domain generalization to an a priori unknown number of objects, learning from sub-optimal or failed plans, and principled handling of ambiguity among equally plausible goals.
Acknowledgments
Mausam is supported by an IBM SUR award, grants by Google, Bloomberg and 1MG, and a Jai Gupta chair fellowship. Rohan Paul acknowledges support from the Pankaj Gupta Faculty Fellowship. Shreshth Tuli is supported by the President’s Ph.D. Scholarship at Imperial College London. Jigyasa is supported by Samsung Research and Development Institute-Delhi. We thank the IIT Delhi HPC facility for compute resources. We express gratitude to Joyce Chai and Lanbo She for sharing the code and dataset of {{cite:b50b3bb8cf2cc552c910982c391e6cc6b8a95357}}, and to Jussi Rintanen for his guidance on using the SAT planner.
Dataset and Domain details
Domain Details. A typical world state in the kitchen and living-room environments consists of 40 objects each (Table REF ). Some objects in the dataset do not play any functional role and have been removed to facilitate training. These include the buttons of the Fridge and Microwave in the kitchen domain and the buttons of the TV remote in the scenes from the living-room domain.
Each object has an instance identifier ({{formula:04d32de8-eaab-4b92-ac77-3f330807bb25}} ), a set of properties such as {{formula:c0d529bf-747b-472c-8233-37196d6de8aa}} and {{formula:792a2a05-e196-4796-9caf-a031a432c1a8}} used for planning, and a set of boolean states such as {{formula:b1eb4346-09f7-4901-bd0b-eff76fe996eb}} that can be changed by robot actions. The robot is also an object in the environment.
The environment model consists of 12 actions including {{formula:7d2458a0-f6ed-42b3-81f1-6b1a36797c14}} , {{formula:b8dcdbb3-b3d7-4236-a0d7-ce56717f2e8a}} and {{formula:b63d8e44-a540-404e-b766-99cd8612a300}} , each taking environment objects as arguments. Examples include {{formula:881067d9-ac14-46dc-8aaa-b8759a3ee82d}} and {{formula:fcc2b6b4-d802-4ff7-97d5-84ecec677de8}} .
Each action results in effects encoded as postconditions in the domain description (via PDDL). The encoded effects take the form of a conjunction of predicates or their negations. An action can introduce a new spatial relation between two objects, for example {{formula:0acf1f65-4b93-4cf4-b07e-97271ad456d0}} , or modify the state of an object, such as {{formula:929c7815-eac6-4251-bf5a-2a4efb3599e7}} .
{{figure:031c136a-802f-40c2-bba7-5fd5a966db54}}{{table:fb172fcc-ce95-4e69-b739-438881a3292c}}Dataset Details.
The dataset is devoid of metric information such as position, as well as of point-cloud models that include the object geometries, bounding boxes and pose data.
Both the training and test datasets consist of long-horizon plans with action sequences of up to 30 steps, albeit with decreasing frequency as plan length increases (see Fig. REF ).
The original dataset of {{cite:25bdb581401982745cf2aeb599412be439f27af4}} defines ten high-level objectives, five for each domain. These include “make ramen”, “clean the room”, “make coffee”, “prepare room for movie night”, etc. Each high-level task is described as a sequence of low-level instructions mapped to an initial environment and action sequence. For instance, the high-level task of “make ramen” is decomposed into “Take pot on counter and fill it with water from the sink”, “Place the pot on a burner on the stove”, “Turn on the burner and let the water boil”, and “Once it is boiling, open the lid of the ramen cup and pour boiling water in until it reaches the top of the container”. {{cite:b50b3bb8cf2cc552c910982c391e6cc6b8a95357}} extract these low-level instructions, each with a single verb and its arguments.
The above decomposition leads to a training set with 77 unique verbs, an average of 6 words per instruction text, and an average plan length for train and test instances of {{formula:5d9850bd-bcf5-47e7-be52-9694b2dc51fc}} 5. Out of 1117 data instances, 56% are from the kitchen domain and the remainder from the living room. We perform data cleaning following {{cite:b50b3bb8cf2cc552c910982c391e6cc6b8a95357}} to remove noise in the dataset, for instance, removing wait statements in the discrete-time control setup we consider.
{{figure:6114444d-5121-4ba7-bc57-7984a4a3dcaf}}{{figure:7b190045-3678-4e8f-8843-29589be2888f}}{{figure:2c302c2f-bde3-4a7d-aa93-164b837faa0e}}{{figure:3fb6a594-5cc4-432a-9f80-5ce5b46f1a5e}}
Implementation and Training Details
We detail the hyper-parameters for the GoalNet architecture introduced in this paper.
The Parameterized ReLU activation function with a 0.25 negative-input slope is used in all hidden layers of the world-state encoder described in eq. REF . The word embeddings (derived from {{formula:f7d24e87-d937-4e6c-823b-88bb0e488bd1}} ) have a size of 300. Sentence embeddings from {{formula:8f41c283-55de-46b8-b362-4874c5bc0621}} have a size of 384. Additionally, the properties of objects such as {{formula:63f440b9-9343-42ae-8b54-1ce7442dacd1}} , {{formula:bf4d35a3-7eb9-4780-9521-73690dcf1e3c}} , etc. are encoded as a one-hot vector of size 12. Object states such as {{formula:f340da2d-ec1e-462f-9b4e-7b4ad65497c9}} are also encoded as one-hot vectors of size 7.
World-State Encoder: The relational information between object nodes, in the form of an adjacency matrix, is encoded using a 1-layer FCN of layer size {{formula:9726e932-340f-447e-a592-35cb2e5827b1}} with the {{formula:a767d968-871d-4273-bb8d-746c6b772d3c}} activation function.
Temporal Context Encoder: A Long Short Term Memory (LSTM) layer of size 128 is used to encode the temporal history of predicted goal constraints, encoded as a concatenation of likelihood vectors as {{formula:2a523aff-1cb8-40e5-8221-d715d607599a}} .
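This component can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch rendering of an LSTM over the concatenated constraint-likelihood vectors; the domain sizes and variable names are illustrative assumptions, not taken from the released GoalNet code.

```python
# Minimal sketch (assumed sizes): LSTM over concatenated likelihood
# vectors of previously predicted goal constraints.
import torch
import torch.nn as nn

NUM_RELATIONS, NUM_OBJECTS = 8, 40                  # hypothetical domain sizes
CONSTRAINT_DIM = NUM_RELATIONS + 2 * NUM_OBJECTS    # relation + two object vectors

class TemporalContextEncoder(nn.Module):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(CONSTRAINT_DIM, hidden_size, batch_first=True)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, T, CONSTRAINT_DIM), one row per past prediction
        _, (h_n, _) = self.lstm(history)
        return h_n[-1]                              # (batch, 128) context vector

ctx = TemporalContextEncoder()(torch.randn(4, 6, CONSTRAINT_DIM))  # -> (4, 128)
```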
Instruction Conditioned Attention: The language instruction is embedded using a pretrained SBERT and then passed through a 1-layer FCN of output size 128. We attend over the state encoding conditioned on the encoded input task instruction, where the attention weights for each object in the state are generated using a 1-layer FCN with the {{formula:cf0a9d85-768e-4f80-be5a-e0f564e529f4}} activation function. Next, we use these goal-conditioned state-object embeddings to generate the encoding of the objects in the instruction using Bahdanau-style self-attention. This is again achieved using a 1-layer FCN with the {{formula:f7d3231c-5e70-438f-87ea-0bcf2d17e1c8}} activation function, generating attention weights for each of the objects specified in the input instruction.
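A hedged sketch of the conditioning step: a single FCN scores each object embedding against the 128-d instruction encoding, and softmax weights pool the objects. Tensor shapes and class names are assumptions for illustration, not the released implementation.

```python
# Minimal sketch (assumed shapes): score each object embedding against the
# instruction encoding with a 1-layer FCN, then softmax-pool the objects.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionConditionedAttention(nn.Module):
    def __init__(self, obj_dim: int = 64, instr_dim: int = 128):
        super().__init__()
        self.score = nn.Linear(obj_dim + instr_dim, 1)   # 1-layer FCN for weights

    def forward(self, obj_emb, instr_emb):
        # obj_emb: (batch, N, obj_dim); instr_emb: (batch, instr_dim)
        instr = instr_emb.unsqueeze(1).expand(-1, obj_emb.size(1), -1)
        logits = self.score(torch.cat([obj_emb, instr], dim=-1)).squeeze(-1)
        weights = F.softmax(logits, dim=-1)              # one weight per object
        attended = torch.bmm(weights.unsqueeze(1), obj_emb).squeeze(1)
        return attended, weights                         # (batch, obj_dim), (batch, N)
```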
Goal Constraint Decoder: After generating the final goal embedding by concatenating the instruction-attended world state {{formula:3ff072f7-2da1-41fd-b8ec-163801a5cee0}} , the encoding of the constraint history {{formula:a13764c5-3597-4165-951e-e3e30177f29f}} , the encoding of instruction objects {{formula:1bfff313-7b79-41c7-8e84-2bf3c64ea0ff}} and the sentence encoding {{formula:bd8b1abd-e3f9-4197-905d-290fee91a5ae}} , we predict the predicates. We pass {{formula:eaccc62c-eba1-4929-bb4f-8f0f1e85b3a6}} through a 1-layer FCN of size 128 with the {{formula:7d976831-19de-4c52-a028-4188728b148d}} activation function. We predict a pair of positive and negative constraints as relations {{formula:57c7c403-8235-4797-9362-83cef61cbb63}} and {{formula:349dfb29-682e-4458-8a9c-da560b320733}} . Two identical decoder heads independently predict the positive and negative constraints. To predict the relation in each constraint, a 1-layer FCN is used, with an output layer of size {{formula:ce46d0c5-d761-44ea-b578-34676249aed4}} . We take the output of the Gumbel-softmax function and pass it to the decoder of the first object. The {{formula:db1eb797-946e-4a30-a841-b87525663edf}} predictor generates a likelihood vector of size {{formula:8359ebe5-ad2c-44b7-8a79-4892f8bfcd37}} by passing it through a 1-layer FCN with the {{formula:d10edb57-54ee-4203-8933-69558b2fe984}} activation function. The Gumbel-softmax outputs of the likelihood vectors of the relation and the first object are sent to the {{formula:94f19514-a46c-4c24-8403-25a4d9e12f05}} predictor to predict likelihoods for all object embeddings. This part is implemented as a 1-layer FCN with output size {{formula:fca6f2ea-cc12-4e31-9650-8966f918ae58}} followed by a {{formula:fb27ab12-473f-4f2e-ae22-8966ceab734e}} activation function.
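The autoregressive structure of the decoder (relation, then first object, then second object, with Gumbel-softmax samples of earlier outputs fed forward) can be sketched as follows; sizes and activation details are simplified assumptions rather than the exact released implementation.

```python
# Minimal sketch (assumed sizes, simplified activations): autoregressive
# relation -> object-1 -> object-2 prediction with Gumbel-softmax feedback.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstraintDecoderHead(nn.Module):
    def __init__(self, goal_dim: int = 512, n_rel: int = 8, n_obj: int = 40):
        super().__init__()
        self.rel_head  = nn.Linear(goal_dim, n_rel)
        self.obj1_head = nn.Linear(goal_dim + n_rel, n_obj)
        self.obj2_head = nn.Linear(goal_dim + n_rel + n_obj, n_obj)

    def forward(self, goal_emb: torch.Tensor):
        rel_logits = self.rel_head(goal_emb)
        rel = F.gumbel_softmax(rel_logits, hard=True)            # one-hot relation
        o1_logits = self.obj1_head(torch.cat([goal_emb, rel], -1))
        o1 = F.gumbel_softmax(o1_logits, hard=True)              # one-hot object 1
        o2_logits = self.obj2_head(torch.cat([goal_emb, rel, o1], -1))
        return rel_logits, o1_logits, o2_logits

# Two identical heads, for the positive and negative constraints:
pos_head, neg_head = ConstraintDecoderHead(), ConstraintDecoderHead()
```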
Model Training.
We use the Adam optimizer {{cite:f49cfbf0605e9828676f5247823d0255b4db49d1}} to train our model with a learning rate of {{formula:b76b6772-93bd-470f-a3e5-4aae7572bbae}} . We use early stopping with the loss on the validation set as the signal, and decay the learning rate by {{formula:97473953-8b67-4607-afa8-30281fabbf04}} every 50 epochs. During training, we use a constant teacher-forcing probability of {{formula:d1084aa9-fa0f-49a6-9024-d592c5421098}} , i.e., the probability of using the planner to iteratively update the state instead of following the ground-truth state-action sequence.
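A minimal sketch of this training scaffolding, with assumed values (learning rate 1e-3, decay factor 0.5, patience 10) standing in for the unspecified quantities above, and a placeholder module standing in for GoalNet:

```python
# Hedged training-loop skeleton: Adam, step LR decay every 50 epochs,
# early stopping on validation loss. Hyper-parameter values are assumptions.
import torch

model = torch.nn.Linear(512, 128)                    # stand-in for GoalNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

best_val, bad_epochs, patience = float("inf"), 0, 10
for epoch in range(500):
    # ... one training pass; with a fixed teacher-forcing probability,
    # roll the planner forward from the model's own predictions instead
    # of following the ground-truth state-action sequence ...
    val_loss = 0.0                                   # placeholder validation loss
    scheduler.step()                                 # decay lr every 50 epochs
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break                                    # early stopping on val loss
```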
System Specifications.
The GoalNet neural network is trained and evaluated on a machine with the following hardware specifications: CPU: 2x Intel Xeon E5-2680 v3 2.5GHz/12-Core “Haswell”; GPU: 2x NVIDIA K40 (12GB VRAM, 2880 CUDA cores); Memory: 16GB RAM.
Additional Results
Performance with increasing complexity.
Figure REF characterizes the variation in SJI, IED, F1 and GRR scores as the size of the ground-truth constraint set ({{formula:c40ec82c-6c08-434c-b4a7-5ee8e6075c84}} ) increases. We observe graceful degradation in model performance as the plan length increases, which suggests there is scope for improving the model toward performance that is agnostic to plan length. However, GoalNet performs better than the baseline in most cases, and this performance gap widens for larger constraint sets, showing that the neural approach in GoalNet effectively encodes the temporal context, enabling it to outperform the baseline on multi-stage, long-horizon tasks.
Generalization.
Figures REF , REF , and REF demonstrate the ability of GoalNet to generalize to language instructions unseen during training. These instructions correspond to verb frames that are out-of-distribution with respect to the training data. Robust inference of conjunctive goal predicates enables the symbolic planner to generate feasible plans and reach goal states for these unseen tasks. Figure REF shows a trace of the inferred goal predicates and actions generated by the Rintanen planner for the input language instruction of “boiling the milk” when the instruction uses the verb heat instead of boil from the training data. GoalNet correctly predicts the predicates for opening the microwave, placing the bottle of milk inside, and turning on the microwave to heat the milk. Similarly, Figure REF shows a trace of predicates and actions when the input language instruction is “fetch milk from the fridge”, which uses the unseen verb fetch instead of known verbs such as get or bring. Figure REF shows another example where the model generalizes to paraphrased sentences, performing the task correctly when “bring pillows to the table” is changed to “arrange pillows on the table”.
| r | f2133fca5fc0918d818ca8353ef83230 |
We provide additional experimental details on the validation split of the KITTI {{cite:2035059542c4d70fcc72d421e181b1e22f9b479e}} dataset in this section. In tab:kittivalcyclist, we first show the 3D and BEV AP at moderate difficulty on the Cyclist class for PV-RCNN and its variants. The table shows that both our proposed modules improve on the baseline results, showcasing the robustness of our approach in naturally benefiting smaller and more complicated objects like cyclists. We then list the 3D AP and BEV performance with respect to distance from the ego-vehicle in table:distance. We find that the proposed blocks especially improve detection at farther distances, where points become sparse and context becomes increasingly important. These results hold especially for the cyclist class, as opposed to the car class, suggesting that context is more important for smaller objects with fewer points available for detection. In tab:kittivaldetails, we provide results for all three difficulty categories on the car class. We see consistent improvements across backbones with various input modalities on the hard category. This is consistent with our premise that samples in the hard category benefit more from context information of surrounding instances. We also note that PointPillars {{cite:1fe6b94b3365eda69867e258fa277c4413d9305e}}, which loses substantial information due to pillar-based discretization of points, can compensate for this loss with fine-grained context information.
| r | 2f1d58ff53daa9cafcf7892b35d44436 |
Here we show that one can. To do so, we build on two recent developments in deep learning theory. The first is the theory of infinite-width networks, which has shown that, as hidden-layer widths tend to infinity, commonly-used neural networks take remarkably simple analytical forms {{cite:cb45d6e2ea6d0789a8e01f5b6b892e29f31391af}}, {{cite:537d2f21ba0cf106eaa82a443e3adfcdb3a70c9e}}, {{cite:fec779db6edad926cf854399261dac7b536bd735}}, {{cite:fc4417d421482be187556b7d67a3c8df6d093075}}. In particular, a wide network trained by gradient descent with mean-squared-error (MSE) loss is equivalent to a classical model called kernel regression, where the kernel is the network’s so-called “neural tangent kernel" (NTK) {{cite:091c95e72e6466acb0bc9450351cead88c2c0f90}}, {{cite:ae3984a4d46699e6d56e96ea03623dcc4a6f8080}}. This line of work suggests that, by studying these simple infinite-width expressions, we might gain insight into the behavior of real, finite networks.
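Concretely, kernel regression predicts f(x) = K(x, X) K(X, X)^{-1} y for a kernel K. The sketch below uses a stand-in RBF kernel rather than an actual NTK, which depends on the network architecture; the data and lengthscale are arbitrary assumptions.

```python
# Kernel regression, the infinite-width limit described above:
# f(x) = K(x, X) K(X, X)^{-1} y, here with a placeholder RBF kernel.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    return np.exp(-d2 / (2 * lengthscale ** 2))

def kernel_regression(X_train, y_train, X_test, reg=1e-6):
    K_tt = rbf_kernel(X_train, X_train)
    K_st = rbf_kernel(X_test, X_train)
    alpha = np.linalg.solve(K_tt + reg * np.eye(len(X_train)), y_train)
    return K_st @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3)); y = np.sin(X).sum(1)
print(kernel_regression(X, y, X[:3]))   # nearly reproduces the training targets
```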
| i | 49c8ce1121b504c95e8088b1a37a8f34 |
Recall that earlier in this section, we suspected that fixing the support pool randomly prior to the start of training may induce higher variance in the final meta-test performance over different runs.
From the observations in Figure REF (c), we confirm that this is not the case. We compare and on -5w5s and -5w1s over five different runs each, where for every run a different random support pool was sampled and then fixed. We can clearly see that the performance standard deviation is not worse but in fact better than .
Finally, in Figure REF (a) we see that Protonets trained using objective achieve competitive results on -5w5s when compared with other meta-learning methods and with a simple transfer-learning method {{cite:0f7f35d6e5a567972b6902d4080bf660d6418ffa}} devoid of any self-supervised learning {{cite:4cb79675068a0cf6a813edaac07a62c26a26ce27}} tricks or a transductive setting {{cite:1b5382a0edf68690f13871a3e7b7d6304e2de5b0}}.
| r | 3263fb80a1dd318eac7bc2a7ec5bf740 |
The covariance and cross-covariance operators are among the most important and established tools in RKHS theory. These operators are used in the computation of kernel principal component analysis (PCA), the kernel Fisher discriminant, kernel partial least squares, kernel canonical correlation analysis (CCA), and many other learning algorithms {{cite:fb12a7e57bdccc75800fb4feeb43ea4ed5d85de1}}. Some learning algorithms relying on the use of (REF ) and () can be written in terms of their corresponding Gram matrices. For Gram matrices, only the inner products between {{formula:4446107c-fb8f-47e9-abc7-a7bfea45b4db}} and {{formula:bcd8485f-2d77-4ecd-ba9d-5fdcc6a247a2}} and between {{formula:a31bfea7-e5c7-4cc4-a47b-ded9a3c5fef3}} and {{formula:52d5dec4-a0c8-4cba-bede-9fa27857d572}} need to be computed. Thus, by computing Gram matrices on a finite number of data points, we can alleviate the need to explicitly compute the possibly infinite-dimensional covariance and cross-covariance operators. The ability to construct algorithms in the inner-product space using kernels, and more specifically the kernel trick, makes them particularly useful in a variety of learning scenarios {{cite:9e8410ca2374c2170c79f1a3cc189b612609c991}}. In the next section, we lay the framework for the use of transfer operator theory in the RKHS and later show how this RKHS expression can be used as a learning technique for modeling coherent sets and planning with this awareness.
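As a hedged illustration of the kernel trick described above, the following computes centered Gram matrices of the kind consumed by kernel PCA or kernel CCA, using an assumed RBF kernel and synthetic data; the infinite-dimensional operators are never formed.

```python
# Gram-matrix computation: everything is n x n regardless of the
# (possibly infinite) dimension of the feature space.
import numpy as np

def gram(A, B, lengthscale=1.0):
    # RBF kernel Gram matrix k(a_i, b_j); only pairwise data terms enter
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 5)), rng.normal(size=(100, 4))
K_x, K_y = gram(X, X), gram(Y, Y)             # both 100 x 100
H = np.eye(100) - np.ones((100, 100)) / 100   # centering matrix
K_xc, K_yc = H @ K_x @ H, H @ K_y @ H         # centered Grams for kernel PCA/CCA
```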
| m | 45306031b9730da92a8edeb2dd561f35 |
Framework Overview –
Our proposed framework is composed of three components: perception, a task and motion planner (TAMP), and a skill module, as shown in Fig. REF .
A Chemical Description Language (XDL) {{cite:0633e242c56f7b3c75ccca3919d9c973f68e2b01}} provides the experiment instructions as an input to the TAMP.
The perception module updates the scene description by detecting the objects and estimating their positions using fiducial markers {{cite:7c7d43dc04510dd8fcbfb6a67de5fb4497105ba2}}.
Currently, we assume prior knowledge of vessel contents and sizes, and each vessel is mapped to a unique marker ID.
Given the instructions from XDL and the instantiated workspace state information from perception, an action sequence and robot trajectory are simultaneously generated by the TAMP module using PDDLStream.
The resultant plan is then realized by the skill module and robot controller with perception feedback.
| m | b141ce38c63b751f69a2a85eaaff058e |
It is easy to see that {{formula:6c273545-51ba-4b39-ab25-3a29507280d5}} involves the residual of the projection of {{formula:70dbe59c-55e1-4413-ba66-2604b9cc43ac}} onto {{formula:19f32695-0e36-482a-9aa9-1875bd283843}} , so {{formula:9cf073f4-c29b-40d6-a6f6-d1237caadf95}} . Note that the latter is the variance of the inverse probability weighting estimator when {{formula:199570e0-3d56-4239-93f1-53ba57bc7697}} is known in a data-fusion setting. This implies that the efficiency of {{formula:a3d677f4-5d88-4b7f-8431-647464e5c7bf}} can be improved by modeling {{formula:e4aef80c-b805-4488-aa4b-a56d27bdd875}} even when it is known. Such a counter-intuitive fact has been studied in traditional missing-data problems; see, for example, {{cite:ad3f77a649e075d9761877622b823e3bacb90fd4}} and {{cite:3f15023cd84baa62c20d6aacbd890e7a7f5d5d54}}. The following proposition presents the efficient influence function of {{formula:8b233c54-a356-47a2-87a6-fee1a477be16}} defined through (REF ) and provides the corresponding semiparametric efficiency bound.
| r | fc42aa4f8537cb0b0a341bbd6c5a7cb7 |
E-commerce, as one of the largest multi-modal downstream scenarios, has greatly facilitated people's lives.
There are many product-based tasks in this application scenario, such as similar-item retrieval {{cite:f087bfedff5303451b6c2debd3f14b4aa0828e91}}, online commodity recommendation {{cite:f5b659e0dae9e68a4053b3f7617df8c37da3c11b}} and identical-product matching for cross-price comparison {{cite:75e56e1e158e3e0e65c60ba44f6716133f689b09}}.
Given the wide variety of downstream E-commerce applications in the real world, we focus solely on pretraining over multi-modal product data.
In general scenarios, multi-modal vision-language pre-training models {{cite:b4b70af88207c1e021551da4cd7f2c4aba98b759}}, {{cite:6e4a7f7bc2fccd085d301e7fc7c96499ce2578ce}}, {{cite:ed9f64e42c77a971373b032016c8543dac70af55}}, {{cite:c635930e3e701b6b4e2c555c351b201478d4ec71}}, {{cite:5dae1190941be6f8610c0f4a7703fb8693ae5e9f}}, {{cite:e33df9779a38e8504eca9fab9b377cfd70a7bc1f}}, {{cite:c157941101bf7fd78c2a14614698fce7299cd12f}}, {{cite:6d57498c7c3130c449beb002590ef46e93f50091}}, {{cite:ba5070158f1168e23c00368795e7b7d53ff4156d}} such as CLIP {{cite:e33df9779a38e8504eca9fab9b377cfd70a7bc1f}} and VilBert {{cite:b4b70af88207c1e021551da4cd7f2c4aba98b759}} have shown superior performance on diverse downstream tasks, such as zero-shot classification {{cite:4c1c35848ccda4f3b6b1f6b7b2c932268d557880}}, cross-modal retrieval {{cite:18948459fa3b502b69ac9aa6d1a0d988925a2a7f}} and open-world detection {{cite:53556b7b1532f7d8029b966a1a6c8321d75e7407}}.
They learn visual and textual embedding features from a huge number of image-text pairs acquired from different sources and exhibit considerable generalization capability and robustness.
The fundamental methodology behind these models is either contrastive semantic alignment of images and texts or specific network architectures designed for better common space learning.
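As a hedged illustration of the first of these methodologies, the following is a minimal CLIP-style symmetric InfoNCE loss for contrastive image-text alignment; the encoder outputs are stand-ins and the temperature value is an assumption.

```python
# Symmetric contrastive alignment: matched image-text pairs sit on the
# diagonal of the similarity matrix and are treated as classification targets.
import torch
import torch.nn.functional as F

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)            # unit-norm image embeddings
    txt = F.normalize(txt_emb, dim=-1)            # unit-norm text embeddings
    logits = img @ txt.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(img.size(0))           # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```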
However, the models mentioned above focus on attention regions adaptively learned by themselves and are easily influenced by content unrelated to the visual objects in each modality, thereby ignoring the importance of concept knowledge, i.e., words or phrases with a specific meaning.
As illustrated in Figure REF , in a more fine-grained application setting we expect the trained multi-modal network to pay more attention to four typical objects (Fair Water, Facial Cream, Cleaner and Clear Lotion) and to ignore extra content (Four-piece Set). If the multi-modal network is trained directly by visual-language contrastive learning, the learned attention regions may instead be SK-II, Cream Cleaner, Lotion Four-piece, which fail to capture the expected objects of attention.
In the past few years, increasing attention has been paid to designing knowledge injection {{cite:5d72fd644793df3c7bd77dbba4a7b169dd977eba}}, {{cite:4cba14a6375090a29c4719ca972eba1066912e20}}, {{cite:4a6692c5fa4d79aec4d6a47fb2afda099c28c99c}}, {{cite:8fe27cec39c14522fa24cc936abd75bd65ae5e73}} for natural language processing (NLP) tasks to demonstrate and enhance the attention effectiveness of NLP models.
These models largely rely on a well-constructed, fully annotated knowledge graph, which is time-consuming and laborious to build for a new task.
In this paper, we balance the trade-off between knowledge-base construction and knowledge extraction, and propose a simple but effective entity-graph-enhanced visual-language pretraining model, which explicitly injects the concept knowledge generated from the caption data into the multi-modal model via a special knowledge graph (entity graph).
{{figure:749784fb-55af-44bb-abec-8b28833990dc}} | i | 4f4098727e2d3f5ce0d4167f43c8bada |
The to-be-fused decisions are determined by the corresponding feature extraction models (FEMs), for which there is a large number of choices.
As examples:
(1)
Standard decision (Std-Dec), the FEM utilizes a standard CNN-based classification structure, such as {{cite:49e26edb8b88c5d9eee34c61f2abcab6ce8a6554}}.
(2)
Meta decision (Meta-Dec), the FEM introduces the meta-learning strategy to the network, just like {{cite:dd4cddabddb3d17e678db225267628f00f01ab99}}.
(3)
Self-supervised decision (SS-Dec), the FEM adds auxiliary losses to the standard CNN-based classification structure from a self-supervised perspective to strengthen the robustness of the network, similar to {{cite:758dd4a10a5e41bef8e73dd203820ee329f594fa}}.
We report the results of all fusion variants in Table REF ; a minimal sketch of probability-level fusion is given below.
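The following hedged sketch averages the per-class probabilities produced by the three FEM variants listed above; the uniform weights are an assumption, and learned or validated weights would follow the same pattern.

```python
# Decision-level fusion: combine the softmax outputs of Std-Dec, Meta-Dec
# and SS-Dec, then take the argmax as the fused class decision.
import torch

def fuse_decisions(std_p, meta_p, ss_p, w=(1 / 3, 1 / 3, 1 / 3)):
    # each input: (batch, num_classes) softmax output of one FEM
    fused = w[0] * std_p + w[1] * meta_p + w[2] * ss_p
    return fused.argmax(dim=-1)                  # fused class prediction
```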
| d | 37abf01b4f937661fdd07558b482e187 |
The DRL training and optimization process is relatively standard. We use Deep Learning (DL) as a function approximator that generalizes effectively to enormous state-action spaces through the approximation of unvisited states {{cite:fa078192f2b8cf41eff391043861f4ed12701de4}}, as shown in eq:drl.
{{formula:0f527581-a1a5-4616-ad5a-d52f1e422bef}}
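Since the equation itself is not reproduced here, the following shows the standard deep Q-learning instance of DL-as-function-approximator as a hedged stand-in, not this paper's exact objective: a network Q(s, ·) is regressed toward bootstrapped targets r + γ(1 − done)·max over a' of Q(s', a'). Network sizes are assumptions.

```python
# One temporal-difference step of deep Q-learning with an assumed
# 4-dimensional state and 2 discrete actions.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def td_step(s, a, r, s_next, done):
    # s, s_next: (B, 4) float; a: (B,) int64; r, done: (B,) float
    with torch.no_grad():
        target = r + gamma * (1 - done) * q_net(s_next).max(dim=-1).values
    pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```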
| m | c149bbf41f2042a2beeed54c78f95065 |
It is important to bear in mind that the X-ray scaling method is not equivalent to making general assumptions about the accretion rate and the bolometric correction and deriving the BH mass from the X-ray luminosity via the formula {{formula:3e1d4363-d9b6-4976-8c1f-89cc55311332}} , where {{formula:3324098e-51b3-48de-976c-d8013b22b4bc}} is the bolometric correction, which may range from 15 to 150 depending on the accretion rate of the source {{cite:2acce1ec6a7de6d7c71e3731e1576494949da0bf}}, and {{formula:d6f6558d-f8ad-4c55-9b3b-cb3e463879c2}} is the X-ray luminosity in erg/s. With this simple equation, and without a priori knowledge of the accretion rate of the source, one could at best obtain the order of magnitude of {{formula:4eb9a815-1ec9-4afa-9cd6-172c56c82ef0}} . Since {{formula:81caa5e1-bc80-45bd-9a33-e323fee04319}} can vary over a broad range (for example, for this small sample of obscured AGN the Eddington ratio varies from 0.01 to 0.3), it is not possible to obtain a specific value of {{formula:ab6e9940-7780-496b-9eda-8896cbaee270}} that can be quantitatively compared with the value from the dynamical method to check for agreement, as we did with the scaling method.
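A short numerical illustration of this point, using the standard Eddington relation L_Edd ≈ 1.26×10^38 (M/M_⊙) erg/s to reconstruct the elided formula as M_BH = k_bol L_X / (λ_Edd · 1.26×10^38), and the ranges quoted above; the X-ray luminosity value is an arbitrary assumption.

```python
# Order-of-magnitude spread of the BH mass over the quoted ranges of the
# bolometric correction k_bol (15-150) and Eddington ratio (0.01-0.3).
L_X = 1e43  # erg/s (assumed 2-10 keV luminosity, for illustration only)

for k_bol in (15, 150):
    for lam in (0.01, 0.3):
        m_bh = k_bol * L_X / (lam * 1.26e38)     # in solar masses
        print(f"k_bol={k_bol:>3}, lambda_Edd={lam:<4}: M_BH ~ {m_bh:.1e} M_sun")
# The estimates span roughly 2.5 orders of magnitude: only the order of
# magnitude of M_BH is constrained, unlike with the scaling method.
```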
| d | 7cacf2576d20a31e5007af2cc395a171 |