An important trend is to use large networks pre-trained on external datasets such as ImageNet {{cite:0303c6f7df627761512863314725482d9be2e4dd}} to model the distribution of normal features. For example, P-SVDD {{cite:61111a20831b686fb0ecaa4e6aaf71608b1658cc}}, PaDiM {{cite:5cece5899561e931007ea04483404065e8eabbd4}}, SPADE {{cite:61111a20831b686fb0ecaa4e6aaf71608b1658cc}}, PatchCore {{cite:13e0da0d60d9ae278ead8d4bddc31abae10b08c2}} and U-Students {{cite:5205db0dc479702347d12eeecd362232aa4a31a2}} assume that the normal data fit into a predefined kernel space. They then define distances between the normal data and the abnormal data, which are assumed to lie outside this space. To do so, P-SVDD looks for the smallest hypersphere that encloses the normal data and uses the Euclidean distance to the center of the hypersphere to detect anomalies {{cite:61111a20831b686fb0ecaa4e6aaf71608b1658cc}}. While some models use clustering techniques {{cite:61111a20831b686fb0ecaa4e6aaf71608b1658cc}}, {{cite:13e0da0d60d9ae278ead8d4bddc31abae10b08c2}}, {{cite:233a75e69c85603ec9a5fe8dcd45178ced5ab283}} to detect samples outside the normal distribution of the data, others model this distribution with Gaussian models {{cite:5cece5899561e931007ea04483404065e8eabbd4}}, {{cite:653b3236a23196d6ce1b00b6897845b7536c1774}}, {{cite:5205db0dc479702347d12eeecd362232aa4a31a2}}, {{cite:29f66b651927fa28b92d0e3f663d7005a816d6c7}}.
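As an illustration of the hypersphere idea, here is a minimal sketch (an intentionally simplified assumption: the actual P-SVDD operates on patch-level deep features and learns its kernel space, whereas here the center is simply the mean of pretrained-style feature vectors):

```python
import numpy as np

def fit_center(normal_features):
    """Center of the normal-feature hypersphere, approximated by the mean."""
    return normal_features.mean(axis=0)

def anomaly_score(features, center):
    """Euclidean distance to the center: larger distance = more anomalous."""
    return np.linalg.norm(features - center, axis=-1)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 64))   # stand-in for CNN features
center = fit_center(normal)
outlier = np.full((1, 64), 5.0)                  # far from the normal cluster
print(anomaly_score(outlier, center)[0] > anomaly_score(normal, center).mean())
```

A decision threshold on this score would then separate normal from abnormal samples.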
| m | c5f27c0d30ebee3f66b97fee7a2cdade |
It has been known that the Palatini formulation {{cite:c5f6f61d00664eb260e7737c1f1efc70bb7f0611}}, {{cite:fc135bcce630858112a9820e4262902fffb388e4}} of General Relativity (GR) (first-order formalism) is an alternative to the well-known metric formulation (second-order formalism). In the latter, the spacetime connection is the usual Levi-Civita one, while in the Palatini approach the connection {{formula:3c425517-93d1-40e5-bddc-b9b0ab2db57b}} and the metric {{formula:54cd4fd0-8a6d-4122-a95c-4bc04d1ec2f7}} are treated as independent variables. In the context of GR, the two formalisms are equivalent at the level of the field equations, with the Levi-Civita connection in the Palatini approach being recovered on-shell. When nonminimal couplings between gravity and matter {{cite:c19abfaa30f84e6fcdc296688ffbc50298864520}}, {{cite:9ce0a464943e01cb763e26937b894f49664970be}}, {{cite:c0dd5ed11e3ab2f829b0303501bf00df89f0b5ed}}, {{cite:ab1ed75357399fca996f67a6e5056c5247626f3b}}, {{cite:b966f51f154cb5ccd7fe1be94adcfdd3da95da00}}, {{cite:f7dcd53a4e81cd184c8642a781652ac37a49e295}}, {{cite:6dbf5d3a87eabed4cf5e8c162b998c2d3c301fd4}}, {{cite:2802fed2d783ff8e3104e02c2bfaaea26dd815f6}}, {{cite:ecf37b1cd49533c1620a568791ce2306f97c3579}}, {{cite:be906ad4826d1bfe5d92107d173d2bb762054dfd}}, {{cite:7acfdd17131e7ac70b16c761f9275f3bc368fe5f}}, {{cite:84926ef94e75f26f872ae6087d950db89dff6f0d}}, {{cite:a5e63eecff9dae7f58158afc48a2c7c854db9ccb}}, {{cite:d7b2c1395aeccdc25a9c8e29396e270a34868649}}, {{cite:c695a0b76d04341eaa66731b6599bacaeb5845bc}}, {{cite:e8108b024c79b0d2683e29995d7661daac5f5915}}, {{cite:dc1e9745595cccb825c9e7f35782ab90bc7dda2a}}, {{cite:3ab8cfce14003b62f413e49e903c3391d1b4a86a}}, {{cite:9cf176caef05e371114e0276710786d79dd7e437}}, {{cite:2d159beb94028e681bb196ae0c0817a84af3a756}}, {{cite:fc78a289249d77519c7ff41cdceb3d35ebad2462}}, {{cite:65114e196ea62deb34737ecd35cadb31abb00701}}, {{cite:e88aec8d401a5416aa4ff910a83688924bca22cd}}, 
{{cite:d68f09b9cffbd7df704c7568dd62274f95a730bc}}, {{cite:6d7d7ff5441e2ea200ccebf0e8a3d885e35978cd}}, {{cite:1a1f6e634773e307d0ba704b2ebd6a6622c89277}}, {{cite:f045668ea8189fb2f97b4124bb0eb344dea2e9a0}}, {{cite:67bfac7fe3239c9075f0941704f04c93f21ff22e}}, {{cite:6dda87aca72534036af912dd9e1d847af16f804b}}, {{cite:774a19d54d065ff43b31eff50919a31729cf76b9}}, {{cite:78a5d5a1f073d7b5fd9c78c82a10c9e6b2161b73}}, {{cite:3a03f1cf3ba430e10e4d0dd7d3b67d207e9fa4d6}}, {{cite:8653849bea37b21af6d54d844238c2a83d074ada}}, {{cite:e2cdbacfdfdcaa7837f70afa5cfb1c0a9f9a8cf8}}, {{cite:26637a0a24c530ea08488c9e330ff84fd283dd2c}}, {{cite:5c8781257bcdb5e5b25f2431a846f1a75a220a1e}}, {{cite:8b77b7b337861b4547b13fe3be5a8890e07c930c}}, {{cite:5c6e3c2dc8313b404d6db8bdfb12f7225764ac7a}}, {{cite:315ab5f0295e26940a637d5d436746c2a489088b}}, {{cite:5dd8c2d82f7bf457f9e699de94e37d92ade2fc4c}}, {{cite:7b61e9e1370ec141d66043ca504d902fafd4e759}} or/and {{formula:c5855fb1-42ec-4f7b-a44f-6b1ec523be87}} theories (throughout this paper we use different symbols for the curvature scalar and tensors: in the metric formulation we denote them by {{formula:4b713089-e43b-49ee-acf6-0032cce843ae}}, while in the Palatini approach by {{formula:8cc6ee26-5ea3-4126-a647-658ffe469813}})
{{cite:2427c20507e56b86f005f22faa726bf059f7115e}}, {{cite:1cfc28a28dffbf9319752020b1caf721d17f264e}}, {{cite:1808451295ab1b7484dd8582b9898a1f2ec57e3e}}, {{cite:1600f052b3c460f0d9aba0a273d4ae6423db17b0}}, {{cite:f86c5dcc076b17eaab8b382af84580793967d263}}, {{cite:bbbef250441da891d8c744cb3ff555fa986ba43d}}, {{cite:8dd6d3915ea930b1ba9632b5663e4a1109d0d6f6}}, {{cite:9704d5161961dc35ea6e6513eeeb0b9468c1b114}}, {{cite:35333f392ecdd937bd5818a118b2772d517699ee}}, {{cite:9303973ea99d5fa67961002479aaea796b86a3b5}}, {{cite:d59c36476b774bdc185ea67d3be388c6a355e5f0}}, {{cite:89d016b38c2977bee5095daf5a981c7181b9d161}}, {{cite:299be260ccf570dd90daf7bcb4a4d4931eefd9ad}}, {{cite:fd7db28958730e79138778dab63027d129a283f5}}, {{cite:a3ac658239ed1f0da2415e14a27bea0227a2e8c3}}, {{cite:673efce716678283958876c2b1e7e2071c7dead9}}, {{cite:bd95ddbaf994c32b6e07cabd2cb58714de5acd0d}}, {{cite:7685b8d5d09dfa1551481bf2eecf7d39bc29d918}}, {{cite:4d81ac616e288fdf117d673353721a01649d9090}}, {{cite:92b4f9bac5dc51775d4e1df77eccdd81639921f3}}, {{cite:c2245b2dc1894addf369563cd863828ddefffbc2}}, {{cite:1a1e0415c4b917312a0a1ebc82ba82672832fb3e}} are considered, the resultant field equations are no longer the same and thus the two formalisms lead to different cosmological predictions. A remarkable example is the Starobinsky model of inflation {{cite:0385b345b2bd2add7c62702dd870c40ed89f69f1}}, where the addition of an {{formula:a092fc7c-6234-4303-b417-5c75d27f32ee}} term in the usual Einstein-Hilbert action is translated to a new propagating scalar degree of freedom which plays the role of the inflaton. In the Palatini formalism there are no extra propagating degrees of freedom, therefore the inflaton has to be added ad hoc in the action. 
The advantage of considering the Palatini formulation is that the addition of the {{formula:2ce41437-33a3-4108-8ac0-b3d915605799}} term can be used to reduce the tensor-to-scalar ratio {{formula:189c4326-adfa-44f1-a764-43caf5b87e62}} {{cite:fc4eab78fd663e534ec2e76d732dfbdfdeabb49f}}. Thereby, various models where inflation is driven by a scalar field can be rendered again compatible with the observations {{cite:1600f052b3c460f0d9aba0a273d4ae6423db17b0}}, {{cite:f86c5dcc076b17eaab8b382af84580793967d263}}.
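Schematically, the Palatini setup with an added quadratic curvature term takes the standard form (a sketch; normalization conventions vary between the cited works, and the inflaton potential is left unspecified):

```latex
S \;=\; \frac{1}{2}\int d^4x\,\sqrt{-g}\,
        \Big[\, M_P^2\,\mathcal{R}(\Gamma) \;+\; \alpha\,\mathcal{R}(\Gamma)^2 \,\Big]
        \;+\; S_{\rm matter}\!\left[g_{\mu\nu},\,\phi\right],
```

where the curvature scalar is built from the independent connection. Since this term carries no extra propagating degree of freedom in the Palatini formalism, the inflaton must be supplied by the matter sector.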
| i | f0b456385568a43e38d1fb4b4323e108 |
TorMentor is developed as a technology demonstrator that enables the encoding of domain-specific expert knowledge to generate plausible distortions.
While defining an augmentation is constrained by its formalism, the fact that an augmentation is defined once and automatically applied to images, point clouds, etc., makes it much easier to maintain than popular frameworks such as Albumentations {{cite:8467c55fd00d99ef1dfb83aebabf0ef35987096d}}, which redefine operations for every kind of data.
The choice and cascade augmentations are also created through the `^' and `{{formula:de8160e5-c1cb-4d08-8daa-4f091cf4d763}}' operators, respectively, allowing for readable and dense Pythonic constructs of complicated augmentation graphs.
The fact that it could be used to successfully train a UNet strictly on augmented ground-truth maps demonstrates its potential.
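A toy sketch of such operator-based composition is shown below. The class and the `|` cascade operator are hypothetical stand-ins (the cascade operator in the text is hidden behind a formula placeholder), and TorMentor's real API differs:

```python
import random

class Augmentation:
    """Toy augmentation node (hypothetical sketch, not TorMentor's classes)."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, x):
        return self.fn(x)
    def __xor__(self, other):      # choice: pick one branch at random
        return Augmentation(lambda x: random.choice((self, other))(x))
    def __or__(self, other):       # cascade: apply self, then other
        return Augmentation(lambda x: other(self(x)))

blur   = Augmentation(lambda x: x + ["blur"])
rotate = Augmentation(lambda x: x + ["rotate"])
noise  = Augmentation(lambda x: x + ["noise"])

pipeline = (blur ^ rotate) | noise   # choose blur OR rotate, then add noise
print(pipeline([]))
```

Each call of `pipeline` samples a fresh path through the augmentation graph, which is what makes such constructs convenient for stochastic data augmentation.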
| d | 70e3f9feb12deaf7ac7327b615079906 |
If a charged massive scalar field is scattered off a charged black hole, then under certain conditions the energy of the outgoing wave can exceed the energy of the incoming wave. As a result, the outgoing modes at spatial infinity are amplified {{cite:d47525096094118f94f9b1f299afdb0a6bb082bd}}, {{cite:216c97b24aa889967eaf5b155f8156bc28292610}}. This effect is known as superradiant scattering {{cite:d47525096094118f94f9b1f299afdb0a6bb082bd}}: a process by which the outgoing wave extracts energy from the black hole, in a similar fashion to the Penrose mechanism for extracting energy/angular momentum from a rotating black hole {{cite:7b0a8284a93954d8235bae267f126c4349bb9e41}}, {{cite:d47525096094118f94f9b1f299afdb0a6bb082bd}}. If these modes are trapped in a potential well, such a process can lead to the so-called superradiant instability {{cite:d47525096094118f94f9b1f299afdb0a6bb082bd}}. A system that exhibits superradiance does not necessarily develop a superradiant instability; additional conditions must hold. A simple way to visualize the superradiant scattering effect is to consider the event horizon as a membrane that loses energy. Numerical studies of the superradiance effect have been carried out for different kinds of black holes, namely static black holes {{cite:216c97b24aa889967eaf5b155f8156bc28292610}}, stringy black holes {{cite:15d82230dae503598ad60df21292889cb6ce928a}}, and the Kerr-Newman black hole {{cite:45be5f7df7fc29fce446654a15703806279def70}} (see {{cite:d47525096094118f94f9b1f299afdb0a6bb082bd}} for more details).
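The amplification condition for a charged scalar on a Reissner-Nordström black hole can be checked directly. Below is a minimal sketch in geometrized units (G = c = 1); the condition 0 < ω < qΦ_H is standard, but the specific numbers are illustrative only:

```python
import math

def horizon_potential(M, Q):
    """Electrostatic potential at the outer horizon of a Reissner-Nordstrom
    black hole: Phi_H = Q / r_+, with r_+ = M + sqrt(M^2 - Q^2)."""
    r_plus = M + math.sqrt(M**2 - Q**2)
    return Q / r_plus

def is_superradiant(omega, q, M, Q):
    """Charged-scalar superradiance condition: 0 < omega < q * Phi_H."""
    return 0.0 < omega < q * horizon_potential(M, Q)

print(is_superradiant(omega=0.1, q=1.0, M=1.0, Q=0.8))  # low-frequency mode
print(is_superradiant(omega=2.0, q=1.0, M=1.0, Q=0.8))  # too energetic
```

Modes satisfying the condition extract charge and energy from the hole; whether an instability follows depends on the additional trapping conditions mentioned above.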
{{figure:527a551d-73dc-4af4-96f4-e12a9b3ae16e}}{{figure:1d1b3ef0-2387-4193-a93d-c81b3685f04a}}{{figure:b4fb959f-d325-4f95-83b6-e3f1bd73c0a7}} | r | a45ff6aef65fcb44d7bd235100c2389b |
Whereas the criteria used in temperature-dependent specific heat and magnetic susceptibility measurements, as well as in field-dependent magnetization measurements, are outlined in many publications and are reasonably well established, this is less the case for magnetic-field-dependent specific heat measurements. To provide a consistent choice of criteria between the different data sets, we compare in Fig. REF the ambient-pressure, field-dependent magnetization and specific heat measured at {{formula:29587ae0-3569-43cf-93c6-09c153bda785}} K, with the magnetic field applied in the {{formula:d4f1e341-0990-47c4-8ec9-b99758680e13}} plane, and with the measurements performed on both increasing and decreasing magnetic field. The {{formula:1c5b80e8-d1d7-4d5c-9d63-deeea1c4c85d}} data are consistent with the literature {{cite:ef93afb56daaf4329b0f1881c4588018096ec83e}}, {{cite:4bf63a842b31c22bfcec2757229fe07fb4e85c74}}. The lower metamagnetic transition, {{formula:1d8007ee-c5f9-46e3-9d53-19aad7881613}}, shows a clear hysteresis. For {{formula:475ab1e0-8a54-4a79-a4a4-c9ff633fe6d8}}, the magnetization reaches {{formula:bea20382-c4f3-4765-92eb-ed5d57da99e7}}/mol. The hysteresis of the second, final metamagnetic transition is negligible, and {{formula:8aa4404c-e942-498e-ba80-005b8e5110ca}} above {{formula:04acda24-d9d5-4212-9de8-65cd5b3913ff}} has a value close to 6 {{formula:b2336082-0b89-4d12-9679-d1fbd027719d}}/mol. The specific heat data clearly show features associated with both metamagnetic transitions. The lower transition appears as a shoulder. The specific heat in the antiferromagnetic phase has essentially no field dependence. On further increase of the magnetic field through the {{formula:1b27adf4-5c49-406d-a7c1-daeadd3bb64b}} transition, {{formula:afefd3c8-fddb-4059-ae39-662cb649c9ea}} decreases. In contrast, a rather large peak in {{formula:4f4645e8-5c7d-4359-b87a-726936b90eab}} corresponds to the upper {{formula:9df140fc-2116-48f5-b9af-68f14f32d410}} transition.
Whereas in magnetization the position of the peak in {{formula:a385a214-970c-4117-a4cd-7ce86443e1ec}} is often taken as the criterion for the critical field of a metamagnetic transition, it appears that, at least in our case, the positions of the shoulder and the maximum in {{formula:9c5f441f-2031-4027-8179-15257bd29487}} [see inset of Fig. REF (a)] approximate the lower and upper critical fields reasonably well. The slight difference apparent for the upper transition could be either specific to these measurements, or caused by a slight misorientation of the samples with respect to the applied field.
| r | 9c17870a590cfc1eafc3ddf3a13e1980 |
Allen and Dynes.
Building on the work of McMillan, Allen and Dynes (AD) in 1975 presented a re-analysis{{cite:f88e13b623b95ef654ef1d35bee920cc43d40d68}} of T{{formula:acdcdabb-54a3-4de7-897a-2a701d767007}}, using more than two hundred solutions of the Eliashberg equations for {{formula:63b041fb-9c8f-4368-a91f-c3c7546600f1}} up to 10 and {{formula:583f4c51-2491-4ccf-b833-23839eb0b78b}} up to 0.20, together with experimental determinations of {{formula:cdb1ee0e-6bfc-48f8-b040-a6a40a5ca622}} (by numerical inversion of tunneling data) with the associated measured T{{formula:5b9dbfff-aa41-4124-a1a0-3d5f756f0166}}. AD chose to generalize the McMillan equation. The most obvious change is that the frequency prefactor was replaced, from {{formula:03ac0eb3-2a01-40c6-aa21-8fcefdf1e51f}} (itself altered earlier from McMillan's {{formula:55e35220-098f-4a10-856b-67123baf61fa}}) to the logarithmic moment {{formula:078d85c4-ad33-46de-bbb5-862675d9f460}} with an adjusted constant, as the latter produced a better fit to their data.
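The resulting AD-modified McMillan equation can be evaluated directly. The sketch below uses the standard form with the ω_log prefactor, omitting the f1·f2 strong-coupling correction factors that AD also introduced; the input numbers are illustrative, not taken from the text:

```python
import math

def allen_dynes_tc(lam, mu_star, omega_log):
    """Allen-Dynes form of the McMillan equation:
    Tc = (omega_log / 1.2) * exp(-1.04 (1 + lam) / (lam - mu* (1 + 0.62 lam))).
    Valid only when the denominator is positive (lam not too small)."""
    return (omega_log / 1.2) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )

# Illustrative inputs: lambda = 1.0, mu* = 0.13, omega_log = 300 K.
tc = allen_dynes_tc(1.0, 0.13, 300.0)
print(round(tc, 1))   # Tc of order 18 K for these inputs
```

For stronger coupling (λ well above 1) the omitted correction factors become important, which was precisely AD's point in generalizing McMillan's fit.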
| d | 0d30a993f6e703dcbf577418869b2f9e |
In light of recent results showing that the Sun has been able to give rise to several extreme SEP events which are likely not manifestations of unknown phenomena but rather the high-energy/low-probability tail of the “regular” SEP distribution {{cite:9eecf54d8839b8029f174dab43a180c14658b1d5}}, we have identified an upper limit to the spectrum of conditions produced by the extreme events (i.e. an upper limit {{formula:0fe3555d-e7fa-44cf-b2de-c1d9a9a7567d}} X600 SXR flare {{cite:d61a5ab6ed9e6c68949d7c6b77e1a2681747de96}} based on the AD774/775 cosmogenic nuclide event; {{cite:1fc3030865cd770bd902e1743b34d95f4e883a35}}). We find that the {{formula:1c99b1c7-b2a6-454f-9d58-71096ce30c4e}} at E{{formula:01722350-1db9-427d-9aef-10f18560304c}} 200 MeV is {{formula:6caf4761-0ebb-442f-91c7-e1241b32a3e0}} 10{{formula:f94b6aba-6ab0-40ad-8b20-1d1d002e1d8a}} cm{{formula:868663eb-55ce-4b11-ba11-532a6850c875}} and at E{{formula:38003b71-c2a5-4d55-90a6-92fef621c86d}} 430 MeV is {{formula:015cdeda-4b01-45e8-b268-59ccca71f9a6}} 1.5 {{formula:79778a08-f6b9-4acb-81cf-47dba936d4e4}} 10{{formula:ba127ede-99ba-4ece-b009-1fdff290af13}} cm{{formula:03c218bf-5e39-453f-979b-30d530146ddb}}. In turn, this also means that the Sun produced several extreme solar flares in the past that most likely affected the Earth's radiation environment and evolution. It should further be noted that such extreme events in terms of fluence could also be caused by two {{cite:d61a5ab6ed9e6c68949d7c6b77e1a2681747de96}} or three equal-intensity SEP events {{cite:70846104e4f20a844ad994bf056a5fe1ae46cfd0}}. To assess such a possibility, a re-scaled estimation of 3 x {{formula:a7a3a23e-4cfd-45ea-b7a1-3b495df55d4b}} X140 SXR flares was used in this work, and the obtained fluences for E{{formula:1951bce9-3de7-410e-afb4-e24f52067baf}} 200 MeV and E{{formula:ea6a16de-4e26-4695-b926-569efce67337}} 430 MeV were added to the inset of Figure REF as red triangles to facilitate direct comparison.
| d | 464c3814a59577a0367056da3c709cf3 |
Strengths and Weaknesses.
GraphVAE-MM inherits the strengths of GraphVAE {{cite:a93a2faa463fada94b0d1181ec80402908b6e70a}}: expressive power through graph embeddings, and fast generation due to all-at-once parallel edge generation.
GraphVAE-MM also inherits the known limitations of GraphVAEs: 1) a maximum number of nodes must be known before generation (smaller graphs can be generated using a mask); 2) the decoder is an FCL that outputs {{formula:0a4bec66-1d7e-42a3-bb98-551c4c651757}} numbers, so we need to (implicitly) assume a node ordering to evaluate edge reconstruction probabilities based on the FCL output. The dependence on a node ordering is common to both GraphVAEs and auto-regressive GGMs, and efficient heuristics have been designed whose effectiveness has been confirmed in experiments, including those reported in this paper. We note that MM modeling makes GGM training less dependent on a node ordering because it also uses permutation-invariant graph statistics.
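The masking mentioned in limitation 1) can be sketched as follows. This is a hypothetical simplification of the all-at-once decoder, assuming the FCL emits one logit per entry of an n_max × n_max adjacency matrix:

```python
import numpy as np

def masked_adjacency(logits, n_nodes):
    """Turn a flat FCL output into edge probabilities for a graph with
    n_nodes <= n_max, zeroing out padded rows/columns and self-loops
    (a sketch of the GraphVAE-style all-at-once decoding, not the
    exact parameterization used in the paper)."""
    n_max = int(np.sqrt(logits.size))
    probs = 1.0 / (1.0 + np.exp(-logits.reshape(n_max, n_max)))  # sigmoid
    probs = (probs + probs.T) / 2.0        # symmetrize: undirected edges
    mask = np.zeros((n_max, n_max))
    mask[:n_nodes, :n_nodes] = 1.0         # keep only the first n_nodes
    np.fill_diagonal(mask, 0.0)            # no self-loops
    return probs * mask

rng = np.random.default_rng(1)
probs = masked_adjacency(rng.normal(size=36), n_nodes=4)  # n_max = 6
print(probs.shape, float(probs[4:].sum()))
```

The masked entries contribute nothing to the edge reconstruction loss, which is how a fixed-size decoder can still emit graphs of varying size.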
| d | 075723a0bccf575168796d50777783af |
with {{formula:46e5479a-a9bf-43f5-88c2-dd35f758ba52}} being the ground state of the elliptic equation {{formula:9b15eb26-ec6a-4f12-88a0-54b2feabd073}}, Holmer and Roudenko {{cite:99555f501777f5fbd2d815e0b7e1eeb00266d3b9}} utilized the concentration compactness/rigidity method to obtain the global well-posedness and scattering result for radial initial data. This was later extended to nonradial initial data by Duyckaerts-Holmer-Roudenko {{cite:8048ce7c232df0d0bfe2fa574f66e6cb12383578}}. Recently, Dodson-Murphy {{cite:93542a85c99647972a8c9ddd08bf1af190d663b5}}, {{cite:1c08298017a7cbdcde69b8c0a8a17dafb446818b}} gave another, simpler proof that avoids the use of concentration compactness by making use of virial/Morawetz estimates. When {{formula:b8bac92a-918a-4e79-b8bc-fde0e9d97fb6}}, Farah and Guzmán {{cite:f1328b014f56255060b7cbedee03efc8393ba223}} established the scattering result for radial initial data below the ground state. This was extended to nonradial initial data by
Miao, Murphy, and Zheng {{cite:c62042839a787013f4071c86ee34bf31271168c2}} by exploiting the embedding of the nonlinear profile. Recently, Campos and Cardoso {{cite:cfbd590ee582ed593d3cd5993edb656c48b4b00b}} gave a simple proof of the result in {{cite:c62042839a787013f4071c86ee34bf31271168c2}} by using Dodson-Murphy's new method from {{cite:93542a85c99647972a8c9ddd08bf1af190d663b5}} and establishing the decay {{formula:372d6b04-ef35-4c3c-b764-dd7376db6e48}} in the nonlinear term, which avoids the use of the interaction Morawetz estimate. In this paper, we will utilize the argument of {{cite:cfbd590ee582ed593d3cd5993edb656c48b4b00b}} to consider the case {{formula:96931e90-2cc1-4c47-9cd5-65e5af17fb31}} and {{formula:f56eca47-cb09-4374-90aa-ff30e0ddd2c4}}.
| i | e4638bc7016405c8a34223cc781bbe3b |
According to the Perelman's seminal paper {{cite:0893980b156d00a1759b6cfdf1051fe1e18dd308}} the Ricci flow is an entropic based differential equation. In fact, employing an appropriate diffeomorphism {{formula:f7395138-c58e-4d5b-bf47-9d1d1320ae22}} , it can be seen that the Ricci flow is the gradient of Perelman's {{formula:6da21b84-b692-4bb1-ac4f-ed857b414395}} -entropy (i.e. {{formula:7e849aab-431c-4bfc-b19e-f11ff1e7bd9d}} ) via a volume preserving variation of the metric {{formula:fb787b4e-9e74-4e82-9f5d-7dd1528eb3c0}} .For more discussion about the appropriate diffeomorphism see {{cite:fb0861d6a026b4c5ce17e78cd1de68887ade3539}} and {{cite:0853d6e2d83e72256562e79b7e45b403aefc4c81}}. The entropy functional {{formula:a8e34c34-eac5-4c14-94d9-f1b576d436a7}} has a natural interpretation in terms of Bochner-Lichnerovicz formulas which we will use to formulate the Yang-Mills gauge theories within the Wiener fractal measure in an independent paper {{cite:0d3516d3358d316d18210147f09dbcae387cbef0}}. However, from the theoretical physics insights, {{formula:f2238a7a-0c11-4058-af91-cf71c0673c9d}} and its first variation describes the low energy effective action in the string theory, wherein {{formula:f828104a-3421-4c08-b482-76197bbb62d1}} is called dilaton field.See for example {{cite:7771fdac01a471c91b02eb394d0308d0a7561573}}, where the author shows that the generalization of the Perelman's {{formula:86aeedc8-4482-4d38-a3fe-5c325b17f5ec}} -entropy to all loop orders equals to minus the central charge at the fixed points, in agreement with the general claim of the Zamolodchikov's c-theorem.
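For reference, the entropy functional discussed above has the following standard form (a sketch of the well-known definition; the notation may differ from the formula placeholders of this text):

```latex
\mathcal{F}(g, f) \;=\; \int_M \big( R + |\nabla f|^2 \big)\, e^{-f}\, dV ,
```

whose gradient flow, after fixing the measure $e^{-f} dV$ and modding out by diffeomorphisms, reproduces the Ricci flow $\partial_t g_{ij} = -2\,(R_{ij} + \nabla_i \nabla_j f)$.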
| d | e86fe96398b6d34e19704260224b9547 |
We also compared the above algorithms with major CNN-based semantic segmentation models including PSPNet{{cite:245d7800c4b3aba9f65229f0636429f146d8579c}}, U-Net{{cite:7c530475822f1d56a3824effedb3ba999ea7efd0}}, DeepLabV3{{cite:73fd08276b4b7a79dc6d792258c79a93cd1759c1}}, FPN{{cite:9f0b649091ce7f20559d6c84dc7b7c2e4a4a634b}}, PAN{{cite:2a28fb1ef7facbee4dca3c9109a34241d2bc7967}}, and LinkNet{{cite:97fc0e0c3e6d938a188636bea6b92cdaf2310863}}. These models were implemented using the open-source Semantic Segmentation Models package{{cite:121836ee085ef58653a5e80b4202c093cfbdf18a}}.
| m | 1871be33bf6e3cb18d2b8bbb8f6419b2 |
In panel (d) of Fig. REF , we compare the cosmological mean metallicity ({{formula:6533cfa4-ed15-4073-9d0c-32d120eb58a4}} ) computed for
individual elements (e.g. Zn, S and Si) of DLAs and sub-DLAs (taken from
{{cite:b5cc62ed98e262270a71403608487a0741d4da64}}, {{cite:18c517b66d8c59ab5b3369c490ca35170fdf10fd}} and {{cite:84793c3c06e59533463e1ad34c1cd5f1ab875c8f}}) with our metallicity estimations. The solar abundance value is also indicated in this plot.
Our results predict a mean metallicity for local objects in agreement with the solar value (12+log(O/H)=8.69). This value is about the same as the maximum oxygen abundance derived for the central parts of spiral galaxies {{cite:6518bb89cc33be9c13225ae43558ad4f6d786af3}}, and for circumnuclear star-forming regions in both AGNs {{cite:9aae5078d1cd89098038557b5ec5769c3fbfba1a}} and normal galaxies {{cite:c6c9aa0c51075eb26a4c7f641b4ed4747f595f3e}}.
Concerning the {{formula:03dba105-bf5a-4b1d-bb2d-e31972e00ce1}} in DLAs and sub-DLAs, these values tend to decrease with redshift, while our estimations for AGNs present an almost flat behavior, showing agreement only in the Local Universe.
{{cite:b4a6dc0ab149fa183af6a52968bd83157dae610a}} pointed out that {{formula:8dcc9d85-2501-4675-95c0-c17abda44dae}} estimations in DLAs can be systematically underestimated due to two factors. First, dusty high-metallicity systems might dim quasars in the line of sight {{cite:629389c63cedfa6af35cb1d0114cc08ab748bffb}}. Second, the outermost regions of spiral galaxies often have lower {{formula:b66f3928-8fd9-49a4-a8ab-fb668f24c0dc}} than the central regions; thus, {{formula:c83a2c9f-15e5-4f3e-9a7c-bd97f0a9ed27}} estimations of objects at high redshift, which are not spatially resolved, represent values lower than the one attributed to the active nuclei. The {{formula:e48cc125-78bd-4f5f-ac69-cea55f8618a0}} estimations for the objects in our sample are affected at least by the second factor. Therefore, it is unlikely that the discrepancy found in Fig. REF (d) is due to the factors discussed by Somerville and collaborators.
| d | f9a58392549a08f7a4ac80fd1aec7909 |
For the plateau height of {{formula:f57a3dc4-6074-4dcd-ab24-0e52396430f5}} to satisfy the constraint on the inflationary energy scale {{cite:11ff87bfb83cd6d1b9b4348eeb57fac8953518df}}, the choice of model parameters must ensure that
{{formula:8df64311-b3ec-47d9-ba0d-ed8109a02d0e}}
| d | 9c5986083f8f994d79abcf86f3b1b4d8 |
As their name suggests, these particles are a generalization of the quantum chromodynamics (QCD) axion, predicted by the Peccei-Quinn mechanism in order to solve the strong CP problem {{cite:deea30477d3c5d09236ee3954e3a381c5c5819b4}}, {{cite:13e1fdddc5fc769d31d0fcb9de150e212bb54e47}}, {{cite:0640434c7509dfa53d8bb72a2286973ef31b4590}}. In contrast to axions, the mass and the coupling constant of ALPs are completely independent parameters {{cite:3b19bff5d7558527f19b965dc0dbca146580f447}}. ALPs are also a cold dark matter candidate {{cite:7e9f37481e6ed0cf6f9a487160355d7e5c78c63a}}, {{cite:ebed988cf4a7849bc034ce02bae2dc8478da9bd7}}, {{cite:b464bfdce803e899dfc4ff7634be8f4cdae7dec7}}, {{cite:d0b11ae559ae942650c3245faeaae8b0d1490dbf}}, {{cite:d339ee4ae7c243fdfc814a8736f5d2187a9b8ba0}} for certain values of the mass and the coupling.
| i | c45037b4d7dea83ee21ea81c9dbe468a |
#5. Temporal action proposal model: to understand how well a class-agnostic action boundary proposal model can detect generic event boundaries, we train a BMN model {{cite:656b412193250fe50fb6a16d3a3e7361b00a82bc}} on THUMOS'14 {{cite:4fab543a6ce23dc2c38d5bd404449d18db6891d3}} and test it on Kinetics-GEBD to generate action proposals.
We denote by BMN the variant that treats both the start and end of each action proposal as an event boundary.
Alternatively, since one intermediate step in BMN is to evaluate two probability scores of being, respectively, an action start and an action end, we watershed each probability sequence to obtain intervals above 0.5 and treat the center of each interval as an event boundary. We take the union of all these centers and denote this method as BMN-StartEnd.
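The BMN-StartEnd post-processing described above can be sketched as follows (a simplified stand-in for the watershed step: plain 0.5-thresholding into contiguous intervals, with illustrative probabilities):

```python
import numpy as np

def boundary_centers(probs, thresh=0.5):
    """Threshold a per-frame start/end probability sequence, group the
    above-threshold frames into contiguous intervals, and return the
    center of each interval as a candidate event boundary."""
    above = probs > thresh
    # Pad with False so on/off transitions mark interval edges.
    padded = np.concatenate(([False], above, [False]))
    starts = np.flatnonzero(~padded[:-1] & padded[1:])   # first True of a run
    ends = np.flatnonzero(padded[:-1] & ~padded[1:])     # exclusive run end
    return [(s + e - 1) / 2.0 for s, e in zip(starts, ends)]

probs = np.array([0.1, 0.7, 0.9, 0.2, 0.6, 0.6, 0.6, 0.1])
print(boundary_centers(probs))   # -> [1.5, 5.0]
```

The union of the centers from the start sequence and the end sequence then yields the BMN-StartEnd boundary set.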
| m | 78d43dfb43c748075c96732dd6526d07 |
Results on Age Estimation. The comparison between SPUDRFs and other baseline methods on Morph II and FG-NET are shown in Fig. REF .
The baseline methods include: LSVR {{cite:75882c12dcb90dae8574d04c1641c495dde9e4d2}}, RCCA {{cite:64b70c51f1b39127878e887a5a7f3ba67dae2f95}}, OHRank {{cite:af0b99e7b2d072208a59d4a2627be2b25dcc6ea6}}, OR-CNN {{cite:c61f9c634bbb443b07c93c9f1e85bcf9ad2daf2a}}, Ranking-CNN {{cite:40fc14294cf7e550ddcf8f2ecb21447cff6cfb7a}}, DRFs {{cite:5ce715db4e3c49fe86341a7c0c7365318821669b}}, DLDL-v2 {{cite:30cab3b8099fa09cb250fc178aa58778512a8eea}}, and PML {{cite:16f0a17e8751ad391bcab5ffc45b5c7d1570e42c}}.
The results show some consistent trends.
Firstly, owing to DNNs, SPUDRFs have superior performance to conventional discriminative models, such as LSVR {{cite:75882c12dcb90dae8574d04c1641c495dde9e4d2}} and OHRank {{cite:af0b99e7b2d072208a59d4a2627be2b25dcc6ea6}}.
Secondly, due to the self-paced regime, SPUDRFs outperform other DDMs, thereby leading to more robust and less biased solutions.
Thirdly, SPUDRFs outperform SP-DRFs on both MAE and CS, and almost achieve state-of-the-art performance.
Fig. REF (b) shows the CS comparison on this dataset.
We observe that the CS of SPUDRFs reaches 93.34% at error level {{formula:4bddb2ce-5124-4343-9c74-9c54698287be}}, significantly outperforming DRFs by 2.04%. Fig. REF (a) shows the comparison of SPUDRFs with the state-of-the-art approaches on FG-NET, where SPUDRFs reach an MAE of 2.64, outperforming DRFs by 0.16.
Besides, the CS comparisons are shown in Fig. REF (b); SPUDRFs consistently outperform other recently proposed methods.
{{figure:72ae97be-7350-4cf0-a2fd-93d0bff72e48}} | m | 01a3e22c8da25d3565003de8f4ee490c |
The ALIGNN-FF model has been integrated with the atomic simulation environment (ASE) {{cite:76dbd337029981423a183078327b61cac4eed610}} as an energy, force, and stress calculator for structure optimization and MD simulation. This calculator can be used for optimizing atomic structures, e.g., with a genetic algorithm {{cite:ab1f62b7dc30d33ef4dc40369dd6074adc5bf495}}, and for running molecular dynamics simulations, for example constant-temperature, constant-volume (NVT) ensemble simulations. The structural relaxations are carried out with the fast inertial relaxation engine (FIRE) {{cite:053d150051205ce0b21a9b4e9375e69087bd8bc6}}, available in ASE.
To obtain the equation of state (energy-volume (EV) curve), we apply volumetric strains in the range -0.1 to 0.1 with an interval of 0.01.
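The cell scaling behind such an EV scan can be sketched without ASE. In practice the energies would come from the ALIGNN-FF calculator through ASE's `Atoms` interface; here only the volumetric straining of the lattice vectors is shown:

```python
import numpy as np

def strained_cells(cell, strains):
    """Apply volumetric strains to a 3x3 lattice cell: a volume change of
    (1 + s) corresponds to scaling each lattice vector by (1 + s)**(1/3)."""
    return [np.asarray(cell) * (1.0 + s) ** (1.0 / 3.0) for s in strains]

strains = np.round(np.arange(-0.10, 0.10 + 1e-9, 0.01), 2)  # -0.10 .. 0.10
cells = strained_cells(np.eye(3) * 4.0, strains)            # cubic cell, a = 4
volumes = [np.linalg.det(c) for c in cells]
print(len(cells), round(volumes[0] / 64.0, 2), round(volumes[-1] / 64.0, 2))
```

Evaluating the calculator's energy at each strained cell and fitting the resulting (V, E) pairs, e.g. to a Birch-Murnaghan form, yields the equation of state.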
| m | 2e13b25d8cc3ff216b3b7cc41c60ed20 |
Previous work in {{cite:d7bcc60c3070773b55efceb3202eb55e5648eace}} uses an autoencoder to learn novel, highly nonlinear modulation schemes that offer an implicit coding gain and are robust to channel variations. In another example, the authors of {{cite:eedde3745698ae4b083cc48dee01b472d3c7266a}} propose an OFDM-autoencoder structure that performs a pair of time/frequency transforms in the latent domain to impose on it a structure similar to that of an OFDM signal. Using autoencoders for compression has been extensively studied in computer vision, where they are used for lossy compression of images {{cite:8781d67d5f511472369ea6421d778acd955b638c}}, with results competitive with state-of-the-art JPEG compression, as well as for image denoising.
| i | b318584dc1e4b908e5eb569788d9fbf3 |
Note that, after we factor {{formula:ae492683-f3c6-44c7-9dd5-2c211c56ff0e}} , the mixing {{formula:1affbd6f-c3bb-485f-964c-6af8a5793275}} in () of up-quarks with the heavy {{formula:1ad21eda-5e99-4cf4-98b5-0cfcf71d3807}} quark through {{formula:ec967261-d7de-4be0-a936-99af22320c87}} exchange is given by
$\theta_i \;=\; \frac{v}{\sqrt{2}}\,\frac{Y_{Bi}}{M_B}\,,$
i.e., it is proportional to {{formula:1cac5fc2-3262-44c4-95b9-f44745a25384}} in (); see eqs. () and ().
So the constraints in Figs. REF , REF and REF can be interpreted as constraints on {{formula:76d732cc-c464-426f-9c93-cad1fedc5f18}} .
Similarly, flavor changing neutral currents couple to the {{formula:f6105b20-4b4a-4920-8490-0c5998cb9e1c}} boson through {{formula:015c1445-27ec-4a82-af33-97fc496ddfdd}} in (), which is proportional to
$X^d_{ij} \;\simeq\; -\,\theta_i\,\theta_j^{*} \;=\; -\,\frac{v^2}{2M_B^2}\,\big(Y_{Bi}\,Y_{Bj}^{*}\big)\,, \qquad i \neq j\,.$
The approximation refers to the leading seesaw approximation.
The factor inside the parentheses is exactly the quantity appearing in Figs. REF , REF and REF .
Indeed, if we link the flavor changing physics scale in () to {{formula:0069f43c-96f9-44e6-bacd-3eac97943412}} , apart from a global factor, we can extract from the figures the milder hierarchies for the NB case,
$\Lambda_{sd}^{-1} : \Lambda_{bd}^{-1} : \Lambda_{bs}^{-1} \;\simeq\; 0.18 : 0.4 : 1\,.$
In part, this is due to a larger set of observables we consider, which is not restricted to {{formula:288b1075-bfa7-4eec-8ad3-4e95f2bc5316}} observables.
It is important to emphasize that the NB scheme leads to a hierarchy of {{formula:a0d3b950-d7e0-449b-a84d-886f6823a7a1}} inherited from the SM Yukawa couplings but the overall scale is not determined {{cite:3d512f4b89c1d31b2ed545b299bc7ec543250dcd}}.
In particular, such a scale is not suppressed by the bottom Yukawa, as it is in the MFV setting, and is limited only by perturbativity on the theoretical side, while flavor observables in the {{formula:c2330da2-acc9-4baf-b381-f64da74343ba}} sector, cf. Fig. REF , allow {{formula:b2717d5e-a9ed-4d36-8727-174ca62ac5ec}} for {{formula:14c25d1b-da0a-4203-a324-a9f837907019}}, considering that {{formula:ab0bec3d-702b-4dbf-b849-44a9a531bee4}}.
For larger {{formula:fe389106-2e6d-4468-8e10-1ab7fe9f16b0}} , larger values are allowed.
See appendix for approximate formulas for {{formula:b57f3e86-62e4-4c00-b9eb-a646d725af0e}} .
| r | 25df9246dc5d71b25bc5cd26d90d7ed2 |
Deep Neural Networks (NNs) {{cite:1d998eb7897b12dc279fa44a5362d8228a7bba6d}} have revolutionized many machine learning areas, including image recognition {{cite:374cdc6655dfbfad8e05fc3f9c56543b2738a901}}, speech recognition {{cite:c702648d2e51277275a2a2811324385c585cb0da}} and natural language processing {{cite:74f5802a5ce78e4f91993c5624eb6444163c7f0c}}. One major strength is their capacity for, and effectiveness at, learning latent representations from Euclidean data. Recently, the focus has shifted to applications on non-Euclidean data {{cite:e90a68c0e00b4ec30bf7e05eda90ce177bd3b78f}}, e.g., relational data or graphs. Combining graph signal processing and convolutional neural networks {{cite:c11ed972e0b7003b5ce47fe0d51f3eed74addda3}}, numerous Graph Neural Networks (GNNs) {{cite:747b2f35e7b77079d15a3cbc947a7d310d24f97e}}, {{cite:40b599a74f054551f0ac32d075a46e44cb3aa0d6}}, {{cite:a716ae7a0e42a695026eab39a0da8708621fd411}}, {{cite:5981b030ab5feb51d6c6c4737b1035a47886363f}}, {{cite:623109a3c8f1c7647f934f6318a016ffcce537ad}}, {{cite:37a5d366f5d59725b353667d62aa79b872c1c520}} have been proposed that empirically outperform traditional neural networks on graph-based machine learning tasks, e.g., node classification, graph classification, link prediction and graph generation. GNNs are built on the homophily assumption {{cite:bac221f35c9ee4d3d57dc910ed407855fd934c21}}, i.e., that connected nodes tend to share similar attributes with each other {{cite:491879da2fde669cae92b39a7e09b9afd840bf4a}}, which offers additional information besides node features. Such relational inductive bias {{cite:0e1438a34fc23ffc6ad038a836a1a64450ef43e4}} is believed to be a key factor in GNNs' superior performance over NNs' in many tasks.
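A single graph-convolution layer of the kind these GNNs build on can be sketched as follows; this is the common symmetrically normalized propagation rule, not the specific architecture of any cited work:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution layer,
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    which mixes each node's features with those of its neighbors,
    exploiting the homophily assumption described above."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # D^{-1/2}
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ features @ weights, 0.0)

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path graph 0-1-2
h = np.eye(3)                                             # one-hot features
rng = np.random.default_rng(0)
out = gcn_layer(adj, h, rng.normal(size=(3, 4)))
print(out.shape)
```

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is what the relational inductive bias buys over a plain NN on node features.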
| i | 075e2096df689d80c1fcd9690df560fb |
In this paper we are interested in on-line estimation using recursive algorithms. It is well known that, in contrast with off-line estimation schemes, on-line estimation provides, via the accumulation of past measurements and noise averaging, stronger robustness properties. Moreover, if the adaptation gain of the estimator remains bounded away from zero—a property usually referred to as alertness (it is well known {{cite:b9fcd6033ecf7315315196b7eec19434b1a9d6ea}} that, due to the so-called covariance wind-up problem, the alertness property is lost in standard least-squares estimators; we therefore concentrate on gradient-descent schemes)—the estimator has the ability to track slowly time-varying parameters. We concentrate our attention on the case of a single uncertain parameter. Our main motivation to study the scalar case stems from the recent development of the dynamic regressor extension and mixing (DREM) estimator {{cite:912fa7af34f800b53845b0a1f5394327fb6d7026}}, a procedure that generates, from a {{formula:11c98721-02e2-44d6-85cf-d7ec0065ac42}} -dimensional LRE, {{formula:325daf3c-0235-4c09-a0b5-58ba47b6edd3}} scalar LREs, one for each of the unknown parameters. It has been observed in several applications that the absence of excitation stymies the successful use of DREM. For instance, in {{cite:ac5093c87f7d80e1f316214b7f25663c07ebb2fe}} it is shown that consistent estimation of the parameters of a linear time-invariant (LTI) system with DREM is possible if and only if the original regressor is PE. In fact, since the key scalar function that defines the convergence properties of the gradient estimator in DREM is the determinant of the extended regressor, which in many cases converges to zero, excitation is often available only on a finite interval.
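The DREM construction can be illustrated in discrete time: delayed copies of a q-dimensional LRE are stacked into an extended regressor, and multiplying by its adjugate yields one decoupled scalar LRE per parameter, driven by the determinant. A minimal numpy sketch (the sinusoidal regressor and the gain are illustrative choices, not from the paper):

```python
import numpy as np

theta = np.array([2.0, -1.0])   # true parameters of the LRE y_k = phi_k^T theta
theta_hat = np.zeros(2)         # estimates from the decoupled scalar LREs
gamma = 50.0                    # adaptation gain of the scalar gradient estimator

phi = lambda k: np.array([np.sin(0.1 * k), np.cos(0.1 * k)])

for k in range(1, 200):
    # Extended regressor from the current and one delayed measurement.
    Phi = np.vstack([phi(k), phi(k - 1)])
    Y = Phi @ theta                               # stacked measurements [y_k, y_{k-1}]
    adj = np.array([[Phi[1, 1], -Phi[0, 1]],
                    [-Phi[1, 0], Phi[0, 0]]])     # adjugate of the 2x2 Phi
    delta = np.linalg.det(Phi)                    # scalar excitation signal
    Yscal = adj @ Y                               # decoupled LREs: Yscal = delta * theta
    # One scalar gradient-descent step per parameter.
    theta_hat += gamma * delta * (Yscal - delta * theta_hat)

print(theta_hat)  # converges to [2, -1]: here delta = sin(0.1) stays away from zero
```

For this regressor the determinant is constant, delta = sin(0.1), so the scalar estimators converge; when delta decays to zero, adaptation stalls, which is exactly the loss-of-excitation issue described above.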
| i | 44c1a117bce3279103cda2405cfc1069 |
Continuing their great success in many areas, DNNs are being deployed in critical systems such as personal identity recognition systems and autonomous vehicles. This raises serious security concerns about DNN deployment. In recent years, researchers have investigated various types of adversarial example attacks on DNNs and proposed algorithmic countermeasures against adversarial examples. In this paper, we have proposed a new type of stealthy bit-flip attack on protected DNNs that compromises their robustness while preserving their accuracy, by attacking the DNN weight parameters in hardware. We mathematically formulate this stealthy attack as an optimization problem and introduce a gradient-based algorithm to efficiently find the most vulnerable weight bits. Experimental results demonstrate that the robustness of protected DNNs can decrease significantly under our adversarial attack with a small number of bit-flips, while there is negligible accuracy loss for clean inputs. Our attack on TRADES {{cite:cda95a467889a11d990aa0e8e5781fd6fbad81a0}} protection-based models can decrease the robustness value by {{formula:ba2fcbb5-5001-4a6c-8454-b215b3d21735}} to {{formula:73b4cf08-d92d-4e29-b0ea-7d355fad7f09}} .
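The gradient-based bit search can be sketched on a toy quantized model: each weight bit is scored by the first-order change in the attack loss that flipping it would cause, and the top-scoring bit is flipped. This is only an illustrative reconstruction on a linear toy classifier, not the paper's algorithm or loss:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=8)          # signed 8-bit quantized weights
x, y = rng.normal(size=8), 1.0               # one clean input with label +1
scale = 0.05                                 # dequantization step

def attack_loss(w):
    # Toy attack objective: negative classification margin (linear in w).
    return -y * np.dot(w * scale, x)

grad = -y * x * scale                        # d(loss)/d(weight), per weight
best_gain, best = -np.inf, None
for i in range(8):
    u = int(w[i]) & 0xFF                     # unsigned byte view of the weight
    for b in range(8):
        fu = u ^ (1 << b)                    # flip one bit
        v = fu - 256 if fu >= 128 else fu    # reinterpret as signed 8-bit
        gain = grad[i] * (v - w[i])          # first-order predicted loss increase
        if gain > best_gain:
            best_gain, best = gain, (i, v)

i, v = best
w_attacked = w.copy(); w_attacked[i] = v
print(attack_loss(w_attacked) - attack_loss(w))  # equals best_gain: loss is linear in w
```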
| d | 15083220e18b0a1416d540dd5fdc1dce |
Wormholes in General Relativity (GR) are solutions of Einstein equations presenting hypothetical tunnels connecting different parts of the Universe or two different Universes. The first discussion of a wormhole configuration was presented by Flamm {{cite:6ea8843403d05a8f55c41d485456965230574c3f}} and later Einstein and Rosen (ER) introduced the “ER bridge”, a physical space being connected by a wormhole-type solution {{cite:f085224a337351885129ce4e88c9591308290127}}. The wormhole spacetimes as solution of GR were further investigated in the pioneering articles of Misner and Wheeler {{cite:03bd7fb936302881276bd5181489e9d577286e0f}} and Wheeler {{cite:48da7876f846226df65162db44f83e45157cdf3e}}.
| i | eca1ce59c9ddadbb7bbdc8810d7c16db |
We apply both the distillation algorithm and naive substitution in DQN (in which we simply substitute the deep neural network in DQN with a neuro-fuzzy controller) to three different OpenAI Gym {{cite:b488a10bb170d2cd94311c0ab7eb624ecbabc698}} environments: CartPole, MountainCar and LunarLander. In the CartPole environment, the agent is tasked with balancing a pole attached to a cart by applying a force of +1 or -1 to the cart. In the MountainCar environment, an underpowered car on a one-dimensional track is positioned between two hills; the goal is to drive the car up the right hill by driving back and forth to build momentum. Finally, in the LunarLander environment the agent controls a 2D spaceship and needs to land it on a landing platform. The results are summarized in Figure REF . In all three cases, the teacher model was a deep Q-network with 64 hidden nodes followed by a BatchNorm layer (models retrieved from https://github.com/araffin/rl-baselines-zoo).
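The distillation step can be sketched as matching softened Q-value distributions between teacher and student; the toy linear teacher/student, the temperature, and the learning rate below are illustrative placeholders, not the models used in these experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, dim = 256, 3, 8
S = rng.normal(size=(n_states, dim))           # random state features
W_teacher = rng.normal(size=(dim, n_actions))  # frozen "teacher" Q-function
W_student = np.zeros((dim, n_actions))
tau, lr = 1.0, 0.05                            # distillation temperature, step size

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

targets = softmax(S @ W_teacher / tau)         # softened teacher policy

for _ in range(2000):
    probs = softmax(S @ W_student / tau)
    # Gradient of the cross-entropy distillation loss w.r.t. the student weights.
    W_student -= lr * S.T @ (probs - targets) / (n_states * tau)

agree = np.mean((S @ W_student).argmax(1) == (S @ W_teacher).argmax(1))
print(agree)  # the student recovers the teacher's greedy action on most states
```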
| r | 6a921e6a925c5acd70d29c6f4cad7ee2 |
Due to the boundedness of the sequences {{formula:a1723a11-3a1c-4c2a-9f63-febfb3c208d4}} ,
the convergence {{formula:84758e32-bddf-473d-9f6f-29c58d2eeeff}} as {{formula:dc02587a-754c-467b-904e-86548af862ed}} for all
{{formula:924d0861-c97f-4c95-a7e6-c107529c2c19}} , and the outer/upper semicontinuity of the
mapping {{formula:042c0462-d698-47a8-b62f-d247ee5829cb}} proved in
{{cite:7b1f3bee17a759ae98e1e0f39ea1baddd4cd1ed2}}, we have that the sequences
{{formula:412299d0-b966-4588-9c93-121acb82306d}} are bounded as well. Hence there are subsequences of
these sequences (without relabelling), scalars {{formula:2c365a03-19bd-44ea-bfa1-029d5f3f8fc8}} ,
and matrices {{formula:b770c39d-c711-45ce-af04-7965dcc1b266}} as {{formula:cfc12664-6026-4ca2-86cf-0084a3cfca6c}} such that
{{formula:8e2d095e-b28f-43b4-96bf-842d600ed292}}
| m | acfe32c24ff796b9733b8baa9b2ebcfd |
We have calculated LHC bounds upon four models that have been fitted to {{formula:eef87f5a-cc46-4323-a027-5307d02222f6}}
anomalies. Each of the models includes an electrically neutral, massive
{{formula:8b8ab2a7-189f-476d-ae7f-a8d27001fab9}} gauge boson which has family dependent couplings to SM fermions,
the most important for our discussion being the couplings to {{formula:c7d15fbf-7634-40e7-8fa1-f33f52c59710}} and to {{formula:ce610f5a-6229-48a6-8ebd-525f739cc4c0}} . In principle, a {{formula:8a20ff93-c933-481e-9257-ce28828f5612}} coupling to {{formula:63ff5952-0661-49cd-842b-2d60ddd45e0b}} can
change the prediction of the anomalous magnetic moment of the muon {{formula:7607f131-ed8e-45a1-9154-b805ffb9e405}} ,
which has been measured to be in tension with its SM
prediction {{cite:ae0dc1ba9b9437f0d8990643ca0549ddffed65dd}}. However, in order to satisfy other experimental
constraints, simple {{formula:47f7ce01-16c0-4cb1-870f-9bb89c9858ef}} models such as those deployed in the present
paper are
forced into a parameter space where the beyond-the-SM contribution to {{formula:480c3067-305f-468f-bc1a-33895bbb55a6}}
from the
{{formula:e0a06331-00c7-4e3b-baca-11d8af3e0d6d}}
is too small to explain the discrepancy and so further model building
involving additional
fields and/or additional {{formula:754e093d-9fbc-4e87-8b69-2ba3ab0803e2}} couplings {{cite:488a69661ed1d7f4c22c7405148ef5b799c3de4c}} would be required to
explain the measured value of {{formula:7ae1ba06-2b28-42e2-be33-abae8c848dc7}} .
| d | c74b6dac64d6850ed8a4a95a66cd6a3f |
The property of {{formula:14fbd242-0e07-4e20-be07-b2faaf94320e}} has been a very interesting topic since its discovery, because it is generally thought that there are not enough unassigned
vector states in the charmonium spectrum (taking into account the recently reported {{formula:9a5eb95c-2f6d-4a45-831f-d3051eda6acd}} , {{formula:6d171a93-0955-4515-8fea-783c3d0f59b6}} /{{formula:022ca83a-f1e5-4aa5-aaff-1f70486d8365}} states), according to naive quark model predictions {{cite:86b052e2b7de6935a18f83dcc547b7a20fffe915}}. The
only such {{formula:c82b764b-27df-45dc-8e2a-4a951c3b11bd}} states expected up to {{formula:a3de6d41-640a-491e-b9c9-957cb52a5f5f}} GeV are generally {{formula:a2c1bade-97f9-4eec-b219-7687402c12df}} , {{formula:d7fbc982-9cef-4902-9cbe-4e0517935975}} , {{formula:116d5a3f-6f90-4606-8510-fc2018a24122}} ,
{{formula:d03f6a80-606e-437f-a4e5-af7c93f02074}} , {{formula:911236a4-b4b7-4fc1-b593-b4bed95b43d6}} and {{formula:80ba6429-46de-499f-89dc-fddc8bd564d2}} , and they seem to be well established {{cite:21184bca24c03dc2b412db7383e10c5644434d9b}}
– the situation is depicted in Figure REF .
{{figure:c2e050ef-3494-4a84-aff7-5d8eeb379e6b}} | i | a34ab1d0a0749c82fa44651231e0bb1b |
On the other hand, three hidden-charm {{formula:d5519123-c317-43e7-af20-e2d87a1b5a2a}} pentaquark
states were observed by LHCb collaboration in recent
years {{cite:32f797367291138211a522fe64989a6eec101a87}}, {{cite:1b4dc1302b696b3268cf3d44f720b05dbf7e79fd}}, where the experimental data
are in agreement with the predictions made in
Refs. {{cite:5af46b97cd969e62b34da9d9f496d443d7c1de86}}, {{cite:c79133ebf2b4905e2bd2ecc89e3782ef4ca0ee76}}. Consequently, rather than the
three-quark picture, it may be more interesting to study the
properties of baryon resonances near or above 4 GeV within a
five-quark picture, since the energy for pulling a light
quark-antiquark pair to form a pentaquark configuration as the
baryon excitation may be lower than that for the traditional orbital
and radial excitations of a three-quark
configuration {{cite:f84f43e72b27ae4cb3e24bd5faa25f74cf7bf5f0}}. Taking the five-quark picture, the
{{formula:9330982a-5fac-499f-9d4d-f071f8663faa}} excited states with negative
parity {{cite:cbe3a949c633fa68daa7e1ba7356db973b464c44}}, {{cite:e309a68347fc839dddc404a6a49fac2efced9ee0}}, {{cite:ccd5679cbcaeca3b3476b5a323874a0590690243}}, {{cite:eebbff606fccbe83f1346c15506103604e4b27f5}}, {{cite:e86cd771322ff304a2c391155a92776c09686a5b}}, {{cite:0ed0156034d0f20c5dc385e880bcb42c3ebee320}}, {{cite:e90d9be55107517b7721e21efab71fbe9be6989d}}, {{cite:2f48be248e10fe6abaeb58c43542414fda42559f}},
nucleon excited
states {{cite:4bb7ee9028ab44b3d773510038e71592490b0a92}}, {{cite:10231a6490340f3045a117f2c49b12caf49d3106}}, {{cite:1367d2b413787d48176e948602ae18e5b10e2dbc}}, {{cite:313c12739d31ee701c7ce542ab8d4e9795cc2ee8}}, {{cite:55293cfd89845b5e95b4f873afffb8b27e5cac44}}, {{cite:304810d514d3d01f3cb699f571855093782b3099}}, {{cite:94983dd39546a88032632d7aefeb734e1d8d700d}},
and the newly observed {{formula:15363f74-8823-42c1-b8a8-6835094e83dc}}
resonances {{cite:f6d1f47967033d231a3818dc6e8621a4daa2e6c6}}, {{cite:30985f702f3ca80a8068012e448823d431179d33}}, {{cite:1f64323e964202b557ae9fe7928bd6c478652ccb}} are
investigated explicitly. Suggestions on how to observe the
{{formula:c38c0515-5fa2-452c-a3c8-aa9bbf3203ff}} state by looking at {{formula:c851e8c0-8b88-4f16-8692-a6185a925657}} weak decay process
have been made in Ref. {{cite:355d05927f5e9bf04ed68477ab1ad69935da4c11}}. It was found that the
observed small energy splitting of the {{formula:c86ac286-4376-48e6-8656-832c66cd7564}}
resonances {{cite:1f64323e964202b557ae9fe7928bd6c478652ccb}}, the masses and decay behaviours of the
observed {{formula:a1927d55-2a9a-4d0c-9a2f-64dee71992e1}} {{cite:e86cd771322ff304a2c391155a92776c09686a5b}}, {{cite:0ed0156034d0f20c5dc385e880bcb42c3ebee320}}, {{cite:e90d9be55107517b7721e21efab71fbe9be6989d}}, {{cite:2f48be248e10fe6abaeb58c43542414fda42559f}}
can be well
described by taking either the hadronic molecule picture or the
compact pentaquark configuration,
although, of course, one cannot
rule out three-quark components in these baryon
resonances.
| i | 9f9618762c209344c6f84731f5004ef6 |
where {{formula:68856c88-5f01-493f-bb64-14b5a42ff400}} is a regularization parameter, and {{formula:ac380e76-6f36-4008-9fa5-19fdb75f3f97}} is the projection matrix (see {{cite:12f87282adbfcd5e3163ac7d8a5767ccfdbe0e50}}), which is learned jointly with {{formula:cb13f471-0caf-46f8-9374-8fb8423d90f9}} . The whole optimization runs iteratively over {{formula:f204fa70-4700-4bf2-83cc-4490d32bcd85}} and {{formula:fef2d13e-747d-4474-a846-ad43f321909c}} . When {{formula:c677dd39-7fe7-4847-9ef4-d3b98694e7cb}} is fixed, we apply the DPCD algorithm to {{formula:3b04d1a5-a70f-438c-ad79-4295daad3a26}} . The key step is to calculate the gradient of {{formula:97454eff-683d-43d5-8391-365b781fee55}} as:
{{formula:99c6ef8a-d9b5-4a23-9665-34f564161408}}
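The alternating scheme (fix one block of variables, update the other) can be sketched on an assumed quadratic objective; the reconstruction loss, dimensions, and closed-form block updates below are illustrative placeholders, since the paper's exact loss and its DPCD inner solver are defined via the formulas above:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 20))        # data matrix
P = rng.normal(size=(20, 5))         # projection matrix, learned jointly with W
lam = 0.1                            # regularization parameter

def loss(P, W):
    # Assumed objective: reconstruction through the projection plus ridge penalty.
    return np.sum((X - X @ P @ W) ** 2) + lam * np.sum(W ** 2)

history = []
for _ in range(20):
    A = X @ P                                        # fix P, ridge solve for W
    W = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ X)
    P = np.linalg.solve(W @ W.T, W).T                # fix W, least squares for P
    history.append(loss(P, W))

print(history[0], history[-1])  # each block update is exact, so the loss never increases
```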
| m | ef42bcd385140ff25f3a92860ad93e6a |
Qiu et al. 2018 {{cite:7593b8bb941a2d028a7cf814478508bcaf4e30b4}} SAE Epilepsy Normal, Interictal, and Ictal From Andrzejak et al. {{cite:679219f58a4afe9b829b1e90079bfe77e3a88236}}, 10 Participants (5 Healthy and 5 Epileptic Patients)
| d | 1d0528da39df45ea85fe0d780abe04ff |
OCRA builds on capsule methods {{cite:378b10b52292bceb3af96240ae4c4f7f898bb0f8}}, recurrent attention, and the DRAW architecture {{cite:afa15ecf61568b00e09c5d32f0b214ba6ddb3e13}}. While it addresses some of their limitations, there is still room for improvement. For one, we believe a better capsule routing algorithm will be important for applying our method to more complex objects that require more involved part-whole matching. Using a larger backbone, and generally a larger architecture, will also be crucial. Determining whether our approach scales well to more complex objects and scenes is a direction for future work.
| d | c1f4871348eaea621e6555ba5ab7b523 |
In our simulations of LIF neurons, we compare against the Akrout method {{cite:bd396e66fa2bc1c5673c10acd4dc3e2f094567b3}}.
This rate-based method makes use of an inference phase in which neurons are stimulated (with mean zero) and then the levels of activity of input and output neurons are correlated to form a weight estimate.
This approach was shown to be highly successful for weight inference and thereby training of rate-based neural network models.
However, since we simulate spiking neurons, which cannot have a negative firing rate, we instead demean the neuron firing rates and randomly stimulate the input neurons (post-synaptic from the perspective of the backward synapse).
In particular, we use an update rule of the form
{{formula:b9a01201-bcc9-4db8-97e3-6ac7f4a78a92}}
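The demeaned correlational update can be sketched for a linear rate model: input neurons receive random mean-zero stimulation, forward responses are computed, and the backward weights accumulate the demeaned input-output correlation, which aligns them with the transpose of the forward weights. The decay term, gains, and batch size are illustrative, not the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pre, n_post, T = 10, 6, 20
W = rng.normal(size=(n_post, n_pre))   # forward weights, unknown to the backward synapse
B = np.zeros((n_pre, n_post))          # backward weights to be inferred
eta, lam = 0.01, 0.001                 # learning rate and weight decay

for _ in range(2000):
    Xi = rng.normal(size=(n_pre, T))   # random mean-zero stimulation of input neurons
    Y = W @ Xi                         # linear approximation of output firing rates
    Xi_c = Xi - Xi.mean(axis=1, keepdims=True)   # demean rates over the batch
    Y_c = Y - Y.mean(axis=1, keepdims=True)
    B += eta * (Xi_c @ Y_c.T) / T - lam * B      # correlational update with decay

# Alignment between the inferred backward weights and the forward transpose.
cos = np.sum(B * W.T) / (np.linalg.norm(B) * np.linalg.norm(W))
print(cos)  # close to 1: B becomes approximately proportional to W^T
```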
| m | 2208f1e52caee10c4f981db8462485cd |
Our study found that CNNs for brain MR segmentation can exhibit significant sex and race bias when trained with imbalanced training sets. This is likely due to the algorithm introducing bias and/or the well-known effect of representation bias {{cite:09ee951e80aa71c5065ddf1cd28300b778b55108}}, in which distributional shifts combined with data imbalance lead to biased model performance. Interestingly, the biases we found have a strong spatial component, with certain brain regions exhibiting a very pronounced bias effect (in both sex and race), whilst others show little or no difference. This is likely caused by a similar spatial component in the distributional shift, i.e. differences in brain anatomy between sexes and races are likely to be localised to certain regions. We found that sex bias in performance still exists even when the model's training set is sex balanced. This is likely due to algorithmic rather than representation bias.
Overall, we found that the effect of race bias was stronger than that of sex bias. Furthermore, the bias effect was much more pronounced in black females than black males.
| d | 404e04e0d94d911f494559578a04211c |
Tables REF -REF compare our results for logical structure prediction from the table image on the TableBank and PubTabNet datasets, respectively. The scores are obtained by averaging the score for every table across all the tables in the evaluation dataset. From the tables, it can be inferred that despite being trained with a much smaller set of data, our model achieves better performance than {{cite:bc5374e9e5c25bc2384e027d04debbff6df8aec5}}. A direct comparison, however, would not be fair because of the use of different input modalities for training.
{{table:c52b59d5-99b8-43b0-9d24-d2ace7ca00b2}}{{table:b1cc10a5-4f44-4aa4-b957-589a8ec3c240}}{{table:84c72f2d-445e-4d80-9b97-bb5956d9e197}} | r | 81495d015fa390d38504b943916b683a |
In the plots of fig1 we verify the rather stable mechanism
{{cite:0e6cb80decc9458070d7739b9c387dcbd4a9d025}}, {{cite:8773dad0c84cb3c59405b16f1c9465a82947fda6}}, {{cite:e486758940ae3380929aa8456a058ed9879bd05d}} which establishes T- and E-model
inflation: {{formula:9988edf5-08b5-42e8-a550-ffdcbfa29a5b}} expressed in terms of {{formula:7edf2336-2511-4b1f-865d-b85e1adee142}} develops a plateau
for {{formula:421bfb4f-8f0a-474c-9fbc-992d4dfb12c9}} but {{formula:cc9f03de-b90d-4973-973f-c32a22c30850}} – since {{formula:d371e6b8-bcc4-4ab9-855a-e5b86c3abf9a}} increases w.r.t {{formula:d7c33620-ece2-47f3-880d-0f09a14789bd}} as
inferred from je. Indeed, the {{formula:dab51355-0294-4895-95cf-5b92cc58bc28}} values depicted in
Fig. REF -(a) – and arranged in the Table of fig1 – get
enhanced according to je, i.e., {{formula:06827f5c-f2f8-46eb-a787-5b7923cf7dd6}} [{{formula:978f4a8a-3e4c-45b0-99c1-8e6d108102b8}} ] for
EHI [THI] – see Fig. REF -(b). Contrary to the standard E-
and T-model inflation, though, we here observe that (i)
the inflationary path terminates at {{formula:e278c333-029c-44d4-91b2-85bbfded7639}} due to the
instability in vevi – this is the same for both EHI and
THI since {{formula:688ea93a-70bb-471e-850e-acb4f5feacd7}} and {{formula:34cd2b16-6a57-42c3-979b-de478e261802}} are kept fixed in both cases – and
(ii) the required {{formula:9f77fcab-076c-4894-87e2-5551ba9cd0ae}} does not lie extremely close to
the location of the pole at {{formula:a3887077-60b7-424d-9ffb-2159a491bf05}} and so the relevant tuning of
the initial conditions, somehow quantified – cf. so,epole
– by the quantity
{{formula:d3b34f88-508d-4167-81fc-2a4dc0563ebc}}
| r | 8bb880a799f9d96e5d965873d078512f |
Studying the high-redshift (high-{{formula:923cdecf-54a8-44ce-bb85-f34573b20167}} ) universe allows us to investigate the first formative stages of the Universe. Among these stages, the last one to occur is cosmic reionisation. During this stage, the ultraviolet (UV) and X-ray photons escaping from the first galaxies caused the neutral intergalactic medium (IGM) to be heated and ionised {{cite:06ec8455b3c66e7a9f79197912c37303752511f4}}. One of the most important probes of the high-{{formula:15cabf78-b795-457b-ad04-305c9aa04f41}} universe is the hydrogen Lyman alpha (Ly{{formula:135f7ca7-31fa-4f33-9ad9-dab80088bdca}} ) emission (line). Hydrogen is the most abundant element in the universe, and Ly{{formula:0d655d40-517c-485a-a51c-1d22fc8a4465}} emission is produced by hydrogen atoms’ electron transition from n = 2 to n = 1 (ground) state, two of hydrogen’s lowest energy levels. {{cite:39f7fe24a3f27917345931d243a81b0d30935a3c}} first predicted that an early galaxy produces powerful Ly{{formula:94d2dcc2-2e61-46e8-837f-f54879934558}} that composes {{formula:255ac69d-305f-41d5-b5ac-5e6424f43215}} 6-7% of its total bolometric luminosity at {{formula:532c8999-2bcd-464f-ae40-057cebfff615}} {{formula:6c4bd12f-cecf-474d-85bb-9f5591609af4}} 10-30. {{cite:95425f7a777a1b2792158ca025c6b5800a5707e3}} even predicted that the fraction of Ly{{formula:f8784e44-b389-4af5-8f58-4e80728b50d1}} that composes the total luminosity may reach up to 20-40% when the metallicity and the initial mass function (IMF) at higher redshifts are considered. However, it took around 30 years after the first prediction by {{cite:39f7fe24a3f27917345931d243a81b0d30935a3c}} to detect high-{{formula:178da473-8a0c-4bf7-a131-6d8761f095ea}} galaxies with prominent Ly{{formula:5978d370-04d0-41d5-a10d-4600ac251ad2}} emissions due to the limited sensitivities of the telescopes at the time.
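For reference, the observed wavelength of the Ly-alpha line scales with redshift as lambda_obs = lambda_rest (1 + z), with lambda_rest ~ 1215.67 Angstrom, which is why high-redshift Ly-alpha searches move from the optical into the near-infrared. A quick check:

```python
LYA_REST_ANGSTROM = 1215.67  # rest-frame Ly-alpha wavelength

def lya_observed_angstrom(z):
    """Observed Ly-alpha wavelength at redshift z (cosmological redshift only)."""
    return LYA_REST_ANGSTROM * (1.0 + z)

for z in (2, 6, 10):
    print(z, round(lya_observed_angstrom(z)))  # at z = 10 the line lands near 1.34 microns
```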
| i | 8e2901fca4704dde56f1d3453e2f4d4a |
Theorem E.2 (Martingale Central Limit Theorem {{cite:e6653f29edc1af82618078a78c780ad65467af05}})
Let {{formula:2cd6726d-4cc2-4e35-9658-53f5e08e9367}} be a square-integrable martingale in {{formula:22e18919-9686-4c71-85b6-e4b38e5a1542}} adapted to a filtration {{formula:cfbe225a-64e9-41db-aab1-a6b5ba767042}} and let {{formula:f75292d5-f8a0-4a00-9a5e-3ee6bb1134df}} denote the predictable square variation of {{formula:dd04fa3a-90df-41fc-8f14-5182ce9b3e6a}} , given by
{{formula:085f6982-3fc5-42ff-8e67-186f340a9eb8}}
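The theorem can be illustrated numerically: for a martingale with bounded increments whose predictable square variation grows linearly, the rescaled terminal value is approximately standard normal. A minimal sketch (the simple random walk below is an illustrative special case):

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_paths = 2000, 5000

# Martingale increments: +/-1 with equal probability (a bounded MDS),
# so the predictable square variation after n steps is exactly n.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M_n = steps.sum(axis=1)
Z = M_n / np.sqrt(n_steps)       # rescale by the square root of the variation

print(Z.mean(), Z.var())         # approximately 0 and 1, as the CLT predicts
```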
| r | b6c8dd722c57afecbac891afe9420941 |
If the two temperatures we have detected are a simplified representation of a range of temperatures, then the properties of the two components should be correlated. Figure 10 compares the EM of the warm component to the EM of the hot component. The best-fit linear relation between the components has a slope of {{formula:0acfcefb-07e9-4e6f-87f2-74ff8ad134ce}} and an intercept of {{formula:ae525ce6-99fc-4e47-94ee-87858898b507}} . The correlation coefficient for the data is {{formula:bca59398-e764-4821-b27a-4d22605a6f95}} (90% confidence interval). Since the intercept is so close to zero, the slope is effectively the ratio of the component EMs. The linear trend persists if harsher or looser restrictions are applied to the included fields, with the slope maxing out around 0.13 for the full data set and dropping towards {{formula:848cc889-6d6b-4f36-bcbe-340d3ddfc70d}} as more high-EM fields are removed. The slopes for the north and south data sets are consistent, even though the distributions of the north and south data visually appear a bit different. Our EM ratio of 0.092 is 2-3 times higher than the EM ratio for NGC 891 from {{cite:82cd37b28094b36b0c93d2e320208b5d113ceaae}} of {{formula:8ef1812e-14af-4c71-9084-a59571a3f156}} .
| d | 8024919469a667b8edba082b5eb5ecb4 |
For all three data sets the full AHC solutions, employing single linkage, were also computed for comparison. The water coordinates were used as is, while the two larger data sets were standardised by removing the mean and dividing by the standard deviation across samples or pixels. The microarray and digits data were further transformed using
UMAP {{cite:84089d87344619d35b62715e965bccb1dff325fb}} prior to clustering.
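The standardise-then-single-linkage pipeline can be sketched in pure numpy (naive agglomerative merging with minimum linkage; the UMAP step is omitted here since it requires the external umap-learn package, and the toy points are illustrative):

```python
import numpy as np

def single_linkage(X, n_clusters):
    """Naive agglomerative clustering with single (minimum) linkage."""
    clusters = [[i] for i in range(len(X))]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)       # merge the closest pair of clusters
    return clusters

# Standardise (zero mean, unit variance per feature), as done for the larger sets.
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 5.0]])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
print(sorted(map(sorted, single_linkage(Xs, 2))))  # two well-separated groups
```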
| m | e61845a069a174b9e167280e428b033e |
For KL routing, experiments were done on a simple unsupervised perceptual grouping task {{cite:22cf549a9904b010f509cedf3cb62172775bf3ba}}, where KL routing performs better than dynamic routing. There is no direct comparison between EM routing and KL routing, but given the type of task, EM routing is also likely to perform well. Given the performance of EM routing on SmallNORB, and of Self Routing beating both dynamic and EM routing, Self Routing will likely perform well on such tasks too. Furthermore, capsule networks give better performance when used in an ensemble {{cite:84c66ed51a8d416d464b44ed11abe6a6b38288bb}}, supporting a Mixture-of-Experts approach to routing.
| d | ca194d8c88b8c69dad9353ad0a27198e |
In the meantime, SGD with momentum acceleration (SGDm)
remains the current workhorse for the state-of-the-art (SOTA)
centralized deep learning training {{cite:d007d8049baf90a382bb492b5330bf1cc3cff15b}}, {{cite:822660ea280f1d9b2eb0dff1dd96269a4a2a7faf}}, {{cite:6fadb4bf3fd5e03f312928f5b78b21a7a9510b4d}}.
For decentralized deep learning,
the currently used training recipes (i.e. DSGDm)
maintain a local momentum buffer on each worker {{cite:ff891d3332b69bd3e3bad8b431b7bcac165e9e70}}, {{cite:ae5b3767c6f29cfb2d2604f461b634812e661c76}}, {{cite:7f9b689232af85570bf9b93303ddb12e91bbf53f}}, {{cite:b7baba8771860404a04df93880de3fe91e34bd7a}}, {{cite:9cf5f8d8f6654f8538d16bbca0fcdb0633c4bf57}}
while only communicating the model parameters to the neighbors.
However, these attempts in prior work mainly consider homogeneous decentralized data—and there is no evidence that local momentum enhances generalization performance of decentralized deep learning on heterogeneous data.
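The recipe described here, a local momentum buffer per worker with gossip of the model parameters only, can be sketched for scalar quadratics with heterogeneous local minima (the ring topology, gains, and objectives are illustrative):

```python
import numpy as np

n = 5                                    # workers on a ring
c = np.array([0.0, 1.0, 2.0, 3.0, 4.0]) # heterogeneous local minima: f_i = (x - c_i)^2 / 2
x = np.zeros(n)                          # model parameter held by each worker
buf = np.zeros(n)                        # local momentum buffers (never communicated)
lr, beta = 0.01, 0.9

# Doubly stochastic ring mixing matrix: average with both neighbors.
Wmix = np.zeros((n, n))
for i in range(n):
    Wmix[i, i] = 0.5
    Wmix[i, (i - 1) % n] = 0.25
    Wmix[i, (i + 1) % n] = 0.25

for _ in range(500):
    grad = x - c                         # local gradients
    buf = beta * buf + grad              # DSGDm: momentum stays local...
    x = Wmix @ (x - lr * buf)            # ...only parameters are gossiped

# Consensus near the average optimum mean(c) = 2, up to an O(lr/(1-beta))
# residual disagreement caused by the heterogeneous local objectives.
print(x)
```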
| i | 04787f2003260cbcca29ccf2e0d29149 |
classification model which is based on word embeddings; thus we applied the fastText
https://fasttext.cc/docs/en/
word embedding tool, which represents words as embedding vectors. These embeddings were trained on Common Crawl and Wikipedia. We used the Arabic ar.300.bin file, in which each word is mapped to a 300-dimensional vector {{cite:2cdee66bee75c1fc36d6a1a0252a3eb04ff791e6}};
| m | d7c9b5483b615d2d09b39c04f2b7e5f8 |
We now present our simulation results for evaluating the performance of the proposed algorithms. The dense C-RAN is assumed to cover a square-shaped area of 700 m {{formula:4452db2f-0107-4950-9eb2-708f593417c6}} 700 m. The numbers of RRHs and users are set to {{formula:d246ad52-7e1e-406e-98a7-4a8405145407}} and {{formula:6a521ef7-a21b-460b-894f-54fbd184fcbe}} with densities of 73 RRHs/km{{formula:8f2d9bc6-7d22-4813-a5d8-0759afe5d9f1}} and 49 users/km{{formula:8fe39b6b-6120-4007-b363-3613b85fc2bf}} , respectively. This is a typical 5G ultra-dense cellular network, as stated in {{cite:34a5205f68809fa1ba251835c3be325610159d8a}}. Both the users and RRHs are uniformly and independently distributed in this area. It is assumed that each user is potentially served by
its nearest {{formula:d71dde72-6742-419a-942c-4d3cfb1c271e}} RRHs. Each fronthaul link is assumed to support only three users, since mmWave communication is employed as the wireless fronthaul link. The maximum transmit power for each RRH is 100 mW, and the pilot power is 200 mW. The maximum reuse time for each pilot is {{formula:f40349ec-4d06-4c57-95e9-5b9695120d2a}} , and the rate requirement for each user is 4 bit/s/Hz.
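The simulated topology can be reproduced in a few lines: drop RRHs and users uniformly in the 700 m square and attach each user to its nearest RRHs. The counts below follow the stated densities over the 0.49 km^2 area; the serving-set size of 3 is an assumed placeholder for the formula above:

```python
import numpy as np

rng = np.random.default_rng(5)
side = 700.0                       # square side in meters (area = 0.49 km^2)
n_rrh, n_user, L = 36, 24, 3       # 73 RRHs/km^2 * 0.49 ~ 36; 49 users/km^2 * 0.49 ~ 24

rrh = rng.uniform(0, side, size=(n_rrh, 2))     # uniform, independent placement
users = rng.uniform(0, side, size=(n_user, 2))

# Each user is potentially served by its L nearest RRHs.
dists = np.linalg.norm(users[:, None, :] - rrh[None, :, :], axis=-1)
serving = np.argsort(dists, axis=1)[:, :L]
print(serving.shape)  # (24, 3): candidate serving set per user
```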
We compare our proposed algorithm to the following algorithms:
| r | a76e3ed0744920f68eada1841407bc17 |
Reviewing the gravity models, we find that three groups of authors independently employ different ideas that lead to equivalent results (within given parameter configurations).
We interpret this observation as strong support for the gravity idea, i.e., that some sort of interaction decays as a power law with some sort of distance, interfering in some way with urban scaling.
A closer inspection of the variants permits us to draw valuable conclusions in terms of (i) good access to all parts of the city, (ii) influencers reaching distant parts of the city, and (iii) interaction between socially distant people. We also discuss how these aspects affect the scaling and socio-economic development of cities.
The analogy to gravity in physics has a long history and has been studied in a wider context, including population flows {{cite:d87900397d3a260ea9b96b8c793d9e66b967a6e3}}, such as commuting {{cite:fdae2e82a4966c11935002eec1edd44f2fc6ca43}}, and spatially explicit modelling {{cite:5b4b187431e9d866f8074eb6614380fa0fd163ed}}.
Accordingly, as a small outlook, it would be interesting to further unify gravity applications which could then also lead to an explanation of the {{formula:2aedc4e7-7b4e-4a24-a150-253d3fdc42ea}} -exponent itself, that is, the parameter that controls the space impedance.
| d | 8504340e8e308678949ce29e8c3f68b8 |
Perhaps due to the need to identify causal relationships, the study of DTRs originated in causal inference, with the pioneering works of {{cite:122a242447f2b76bb6783f2a68a5c3f39bd12145}}, {{cite:e751441d8e1d509c482c3f295557fa835832f230}}, {{cite:576681d4c07579e4c6727885eb4d5458eb8f233b}}. Over an extended period of time, the author introduced three basic approaches for finding optimal time-varying regimes in the presence of confounding variables: the parametric G-formula or G-computation {{cite:122a242447f2b76bb6783f2a68a5c3f39bd12145}}; structural nested mean models (SNMMs) with the associated method of G-estimation {{cite:8acd093e4b09249485ece695e6ba79c877038a99}}, {{cite:25c6919ef4124857a4dfbfe18eb5ca71ef280b2c}}, {{cite:e751441d8e1d509c482c3f295557fa835832f230}}; and marginal structural models (MSMs) with the associated method of inverse probability of treatment weighting (IPTW) {{cite:422288ed9e3563bc6dc879f4a91a0e9154bd3eae}}.
In spite of their advantages, SNMMs and G-estimation have not become as popular as MSMs and IPTW methods. Possible reasons are discussed in {{cite:5076e7a500d107587cfa58d92c4077626ddcb14b}}, who use the appellative “partially realized promise” in reference to SNMMs and G-estimation.
| m | a5c92a68d4c773a8ffc292666b816afb |
The predicted masses of the {{formula:938ebd6e-c0d0-44d6-b0a6-783257f1d8ba}} baryons up to {{formula:2bffd7a3-2b30-489f-9f13-75d2b164a709}} shell have been given in Table REF and also shown in Fig REF .
For comparison, some predictions from other models are listed in Table REF as well. It is found that the masses of the first radially excited states {{formula:0bd99ec9-0b52-4070-90c9-1234e3212171}} and {{formula:56775008-a84c-4229-ad8e-3ae8e1cc214e}} obtained in the present work are compatible with the predictions in Refs. {{cite:94e19552f76f252dddc3664907081c0b9a198522}}, {{cite:b8d6d516e511a873ab8e345a458ef96e50c7b384}}; however, the mass splitting between them, {{formula:463ad12d-21e0-4971-a8b8-98cc6c07c3ff}} MeV, predicted in this work is obviously larger than the other model predictions. The mass of the {{formula:a0cb103d-f5a0-424b-a527-2f0947f64705}} {{formula:9e58211f-7bd5-4125-8c24-ce5e65665ff8}} -wave state {{formula:f7dd50eb-01ee-4f9f-abe6-68429d50b2e2}} , 2141 MeV, predicted in this work is close to the predictions in Refs. {{cite:4db28e8c1f0d90629ac53277029a660a75acf1ab}}, {{cite:b8d6d516e511a873ab8e345a458ef96e50c7b384}}; however, our prediction is about 60-160 MeV lower than those predicted in Refs. {{cite:7cb86965748f93b3dcf188e2b439e476dbe18fda}}, {{cite:bdf5fd706549083ab5f830140164560157d619bc}}, {{cite:f87c763b8dcb1c8ecb66be074171e847accaae97}}, {{cite:94e19552f76f252dddc3664907081c0b9a198522}}. The masses of the {{formula:4946beca-163d-4058-a3a9-2be922377ce8}} {{formula:c4d2e769-ef77-4e87-ad6b-d6d0099becee}} -wave states
{{formula:6f1be3d0-895a-4e99-bee4-74c81e020be5}} and {{formula:ac2412a9-ba2d-4381-9060-3f7759a36069}} and their mass splitting {{formula:511c34e0-29a2-4ef4-8c9f-377e72d14cae}} MeV predicted in this work are close to those predicted in Refs. {{cite:bdf5fd706549083ab5f830140164560157d619bc}}, {{cite:f87c763b8dcb1c8ecb66be074171e847accaae97}}. The masses of the {{formula:054d0e6d-5df5-4b3d-a75e-cebfc9cb5100}} {{formula:dbd98703-dd74-44c3-a459-76cd0aea86a4}} -wave states
{{formula:c0d35ce6-7d7d-428c-be8d-282b59758a19}} and {{formula:6ea6ff8e-f14a-4f6c-9e4a-b8d1a0e1520a}} and their mass splitting {{formula:7f3b3809-e9b1-4395-91d7-d5393a348c7d}} MeV predicted in this work are close to those predicted in Refs. {{cite:94e19552f76f252dddc3664907081c0b9a198522}}, {{cite:bdf5fd706549083ab5f830140164560157d619bc}}, {{cite:f87c763b8dcb1c8ecb66be074171e847accaae97}}. The mass of the {{formula:1f5b53b0-7a5f-41eb-b0a0-a1011aa9c0da}} {{formula:e901a68b-c1b8-419f-a9bf-ef6bf7e7bfae}} -wave state
{{formula:ee906e3f-0892-463a-a320-636754446e7c}} is close to the predictions in Refs. {{cite:94e19552f76f252dddc3664907081c0b9a198522}}, {{cite:7cb86965748f93b3dcf188e2b439e476dbe18fda}}, but about 100 MeV higher than the predictions in Refs. {{cite:bdf5fd706549083ab5f830140164560157d619bc}}, {{cite:f87c763b8dcb1c8ecb66be074171e847accaae97}}, {{cite:b8d6d516e511a873ab8e345a458ef96e50c7b384}}. Finally, it should be mentioned that if we consider a fairly large mass splitting {{formula:be4e7e56-63d9-4798-8a5e-4f5aef26f759}} MeV between the two {{formula:a07469eb-f899-4d60-b05a-03be95ff1433}} -wave states {{formula:19965a84-726e-4c93-a4d6-5f93bf19e6af}} and {{formula:2eec67a6-b763-48ce-bc03-8253686a12c8}} due to the spin-orbit interactions, the mass splitting between two adjacent {{formula:483fb9c4-c55c-461a-ad42-00f3035b5eb7}} -wave spin-quartet states {{formula:32990768-f931-474d-8777-6d8a4c1d2080}} and {{formula:ca92a4f3-9bcd-426c-970c-e3a1fd1cc005}} might reach up to {{formula:8215ac88-4adb-4c19-87a2-24b8ca194632}} MeV, which is larger than the value {{formula:60f4d166-1234-4ad6-bf3f-20859362d2a1}} MeV predicted in the literature {{cite:94e19552f76f252dddc3664907081c0b9a198522}}, {{cite:7cb86965748f93b3dcf188e2b439e476dbe18fda}}, {{cite:bdf5fd706549083ab5f830140164560157d619bc}}, {{cite:f87c763b8dcb1c8ecb66be074171e847accaae97}}, {{cite:b8d6d516e511a873ab8e345a458ef96e50c7b384}}, {{cite:f3d81a14a468bacb628a0e9bc26d31adfcf5dc76}}.
| r | f25b9af5db1c9a347c926692c54145d9 |
To verify that our method is easy to implement, as mentioned in the manuscript, we explain how to apply it to Soft Actor Critic (SAC) {{cite:7e8722043820efc46faa3d6a893f999edac507fa}}, {{cite:0ba1b07789f94091405927b74bb8fc7cea1f7293}} based on PyTorch 1.7.1. SAC uses double critic networks together with their target networks. We can implement model-augmented Q-networks (MQNs) from the given critic networks as
{{figure:d746ffcc-83c7-4c35-8977-d971588cb5a9}} | m | 253c706e3d74045d2337b463c6420908 |
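The figure referenced above is not reproduced in this excerpt. As a loose, framework-agnostic illustration of the idea (not the paper's PyTorch implementation), one can wrap an existing critic so that it scores the state-action pair augmented with a learned model's predicted next state; the names `critic`, `dynamics_model`, and `model_augmented_q` below are our own stand-ins:

```python
import numpy as np

# Illustrative stand-in for one of SAC's critic networks: a fixed linear Q-function.
def critic(state_action):
    w = np.ones(state_action.shape[-1])
    return float(state_action @ w)

# Illustrative stand-in for a learned one-step dynamics model.
def dynamics_model(state, action):
    return state + 0.1 * action[0]

def model_augmented_q(state, action):
    """Evaluate the critic on the (s, a) pair concatenated with the model's
    predicted next state, i.e. a model-augmented Q-value."""
    next_state = dynamics_model(state, action)
    augmented = np.concatenate([state, action, next_state])
    return critic(augmented)

s = np.array([1.0, 2.0])
a = np.array([0.5])
q = model_augmented_q(s, a)
```

In an actual SAC implementation the same wrapper would be applied to both critics and their targets, leaving the rest of the training loop unchanged.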
Having introduced the notion of an abstract trace map and Green identity (REF ), we switch to the symplectic description of self-adjoint extensions of {{formula:2f5f006b-6dec-4734-a2e9-6de4689444f2}} and a symplectic version of the Krein resolvent formula. We note that the right-hand side of (REF ) can be written as {{formula:fb77756f-a85e-4eb1-a75b-823bdb867788}} , where {{formula:77a076b6-b260-4a9b-ab42-57d8a5ee1c3c}} is the natural symplectic form. It is well known that self-adjoint extensions of {{formula:65e242cd-f8db-4d1a-9092-4c630db14f05}} in {{formula:8587140d-8563-404c-991d-ffb475b30d4e}} can be described by Lagrangian planes in various symplectic Hilbert boundary spaces. W. N. Everitt and L. Markus {{cite:df25d71d05914abbff6fd19175829a348a6dbc81}} and B. Booss-Bavnbek and K. Furutani {{cite:1bc595f8aaa808428d81d53ba46fdeffa36ba36f}}, for example, relate self-adjoint extensions to Lagrangian subspaces of the symplectic quotient space {{formula:dd877265-8c5b-4b70-8a70-45cb4199d6f4}} , while J. Behrndt and M. Langer {{cite:cc6f8e8b6fbd2e147a19912199dbfff90a9d370b}}, K. Pankrashkin {{cite:24c5e5594ff22a7f37b13c3889f1f493cc153e4e}}, and K. Schmüdgen {{cite:aad7c5fd9e4c309305b9690984d84e56f2d4d9d7}}, on the other hand, discuss self-adjointness in terms of linear relations. Closely following these works, we utilize the abstract Green identity (REF ), assuming a (possibly non-surjective) embedding {{formula:8a895b28-4f5b-4dbf-9108-3fb140a9c1fc}} , and associate self-adjoint extensions {{formula:98bd0ab3-c4d6-491c-8071-f0a4967e9273}} of {{formula:695e7b9f-97e6-4855-856a-0d08dbc62387}} with Lagrangian planes {{formula:c21486ec-231d-4ac5-a42a-3de8885da24f}} via the mapping {{formula:b796db4b-f5a5-4f53-a101-06e924cea138}} , see Theorems REF , REF and Corollary REF for more details on this correspondence.
This observation brings us one step closer to the perturbation theory for self-adjoint extensions with continuously varying domains of self-adjointness as it allows us to recast this non-additive perturbation problem in terms of the perturbation of Lagrangian planes, or more specifically, in terms of perturbation of the orthogonal projections onto the planes.
| r | d87c031eb149f34cd8aca763d8343d72 |
The current version of the algorithm assumes bright objects on a dark background. Depending on the raw image material, additional preprocessing steps such as intensity inversion, contrast adjustment or even more complex methods such as vessel enhancement filters might be necessary to produce reasonable results {{cite:e8c61cc2649219cd4d04ccd73643511fd7fea4be}}. The current implementation only searches for a single elongated object of interest in the images. The approach can be adapted to detect and extract multiple elongated structures by simply applying the regression-based ROI extraction separately on all connected components that contain elongated structures. In cases where the elongated structures overlap, a template matching approach that searches for line-like structures could be employed to perform the initial localization of the objects.
| d | 99a29a0203cc5197205a212107b65230 |
Contribution.
We demonstrate that our U-Net architecture is able to accurately predict surface wave dynamics in complex straight-sided and curved geometries, even when trained only on data sets with simple straight-sided boundaries.
Our U-Net is able to simulate wave dynamics four orders of magnitude faster than a state-of-the-art spectral/{{formula:28b4fefc-c19d-4aa8-8d6d-1740004cfeff}} element numerical solver {{cite:64bc71c3811549018ef5a124b6b4ea71856a203c}}, so it could be an effective replacement for numerical solvers in applications where performance is critical.
We also demonstrate that a 3D CNN is able to time-interpolate our U-Net predictions and increase the temporal resolution of the simulations by a factor of four.
| i | fde5829bd1e1115afd93e843df385f71 |
Proposition 3 [{{cite:8c45c1f4916ffa07959c219cf4811383d3b12f58}}, {{cite:b54ab2799eafea2c9a4d8f074b847896764914c4}}]
Let {{formula:17b4308c-01e0-47b9-a1c7-c0b6812c6cd1}} , {{formula:74441c0a-74d4-4ff3-8060-9f87f1fea5a1}} and {{formula:12115da2-d57b-4190-b1b4-45c50870d28e}} . Then,
{{formula:1abbf728-db75-4579-9e85-770ead41bb32}}
| r | cb6a8178a842dce16459512b2492938e |
An alternative to deal with the non-differentiable issue is to use a continuous function to replace the samplings. After a multinomial distribution over the words from a given vocabulary is estimated by the generator, a differentiable sample, Gumbel Softmax for example {{cite:b623e6161f2b09e34e9f5b92b865ee1b90a5795d}}, that can be smoothly annealed into a categorical distribution is used to replace the non-differentiable sample from a categorical distribution.
However, as the support set of the multinomial distribution is the whole vocabulary, words with close-to-zero probabilities are all taken into consideration.
Such an approximation becomes imprecise since these unnecessary words account for a large majority of the vocabulary, a phenomenon well known as the long-tailed distribution.
Although this can be mitigated by a temperature that controls the “steepness” of the distribution, the problem cannot be completely solved because many unwanted words with nonzero probabilities are still involved, which makes the training inefficient.
| i | d48e7cbc1ed2aa10bdc33f677dc23e8a |
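As a concrete sketch of the sampling scheme discussed in this row (a generic NumPy illustration, not the cited implementation), the Gumbel-Softmax trick perturbs the logits with Gumbel noise and applies a tempered softmax; lowering the temperature pushes the sample toward one-hot while every word in the vocabulary keeps nonzero probability:

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Differentiable surrogate for sampling from a categorical distribution:
    perturb the logits with Gumbel(0, 1) noise and apply a tempered softmax."""
    u = rng.uniform(1e-12, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))        # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = z - z.max()                # for numerical stability
    e = np.exp(z)
    return e / e.sum()

# A long-tailed vocabulary distribution: most mass on a few words.
logits = np.log(np.array([0.70, 0.20, 0.05, 0.05]))

# Same noise realization, two temperatures: high tau -> smooth, low tau -> near one-hot.
soft = gumbel_softmax(logits, tau=5.0, rng=np.random.default_rng(0))
hard = gumbel_softmax(logits, tau=0.1, rng=np.random.default_rng(0))
```

With the same noise, the low-temperature sample concentrates almost all of its mass on a single word, yet the tail words retain small nonzero probabilities, which is exactly the inefficiency the paragraph describes.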
The most common class of explainability methods for image classifiers visualizes the reason behind the network's classification as a heatmap. These methods can make the rationale of the decision accessible to humans, leading to higher confidence in the ability of the classifier to focus on the relevant parts of the image rather than on spurious associations, and can help debug the model. In addition to the human user, the “computer user” can also benefit from such methods, which can seed image segmentation techniques {{cite:96e55277126754de545321b357c29dc7abb60e0e}}, {{cite:465ccb065084e6dd19bb297fddf10515be54411d}}, {{cite:df281021f926a3ed613d367c940c3f18dfe74b0c}}, {{cite:890d71d1ac31f0bce7e251cf5b6f39a76db93017}}, or help focus generative image models, among other tasks.
| i | 74d2ad8f84246b2f7e819d26c9333c12 |
Then, assuming that {{formula:673c8d49-88a6-4a79-9d78-912726f27bdd}} is big enough and under other technical but mild conditions
{{cite:1721285575faaccc1dac2d0ffdebc433a95fb4ac}}, the constrained problem (REF ) shares the
same solution as the penalized unconstrained optimization of the Lagrangian function,
that is {{formula:2de8472b-1305-4907-8adc-a410146c9e33}} . Therefore, the unconstrained optimization problem becomes
{{formula:07b510cf-07a5-4268-b4c9-0811a2f1fdae}}
| m | 285a454150588e93f33eb109c27db3b8 |
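A minimal one-dimensional sketch of this penalized reformulation (an illustrative toy problem, not the paper's objective) is the augmented Lagrangian iteration for minimizing f(x) = (x - 2)^2 subject to x = 1:

```python
def augmented_lagrangian_min(c):
    """Minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 = 0 by repeatedly
    minimizing L(x, lam) = f(x) + lam * g(x) + (c / 2) * g(x)^2 in closed form
    and updating the multiplier lam <- lam + c * g(x)."""
    lam = 0.0
    x = 0.0
    for _ in range(50):
        # Stationarity: dL/dx = 2(x - 2) + lam + c(x - 1) = 0
        x = (4.0 - lam + c) / (2.0 + c)
        lam += c * (x - 1.0)
    return x, lam

x_star, lam_star = augmented_lagrangian_min(c=10.0)
```

For a sufficiently large penalty parameter `c` the iteration converges to the constrained minimizer x = 1 with multiplier lam = 2, matching the KKT conditions of the toy problem.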
A vital part of the Langevin MC approach is the choice of the step-size {{formula:94d4ae30-386d-42ca-8581-0341f3a04655}} . In this work, {{formula:23f91705-f6df-44aa-8496-870578060add}} is chosen such that the acceptance rate in MALA is around {{formula:bbd6b82e-a7e4-4cec-8bc6-cfb3663bfb2d}} , motivated by {{cite:8bd8ac351c9b4f0508ce9369aedcc1bbb685352f}}. We have also tried a decreasing step-size {{formula:928d17cc-a6f9-4315-8555-43301ede3e6c}} ; however, this choice does not improve the results at all compared to the choice defined through the acceptance rate. It is noted that there are several other ways of choosing {{formula:de8870b3-a73a-4108-b2e8-7e965f384a18}} ; for example, {{formula:406bc77c-07f5-4739-b22f-dd1d8aaf2b15}} can be adaptively changed in each iteration as in {{cite:7904b4fed6476f47045cd7618172eca58ccd4702}}. The study of such approaches for BRRR is left for future research.
| d | a261ef1cf6f5857c6b45b76fcccaf3f8 |
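A toy sketch of MALA with an acceptance-rate-driven step size (a generic illustration on a standard normal target; the 0.574 target follows the optimal-scaling heuristic mentioned above, while the simple multiplicative adaptation rule is our own choice, not the paper's):

```python
import numpy as np

def mala(log_p, grad_log_p, x0, eps, n, rng, target=0.574):
    """Metropolis-adjusted Langevin algorithm with a crude adaptation that
    nudges the step-size eps toward a target acceptance rate (persistent
    adaptation formally perturbs detailed balance; this is only a sketch)."""
    def log_q(a, b):               # log density (up to a constant) of proposing a from b
        return -((a - b - eps * grad_log_p(b)) ** 2) / (4.0 * eps)

    x, accepted, samples = x0, 0, []
    for i in range(1, n + 1):
        prop = x + eps * grad_log_p(x) + np.sqrt(2.0 * eps) * rng.standard_normal()
        log_alpha = log_p(prop) - log_p(x) + log_q(x, prop) - log_q(prop, x)
        if np.log(rng.uniform()) < log_alpha:
            x, accepted = prop, accepted + 1
        eps *= np.exp(0.01 * (accepted / i - target))   # steer toward target rate
        samples.append(x)
    return np.array(samples), accepted / n

rng = np.random.default_rng(1)
samples, rate = mala(lambda x: -0.5 * x * x,   # standard normal target
                     lambda x: -x,
                     x0=0.0, eps=0.5, n=5000, rng=rng)
```

After a burn-in, the chain's empirical acceptance rate settles near the target while the samples track the target distribution.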
We report snow simulation results in Fig. REF . As the figure shows, 3D Stylization changes the floor texture but cannot add physical entities to the scene, limiting its realism. Swapping Autoencoder {{cite:a3ab123e3614bd44d4dbe1f0765c76fa06f74617}} changes the overall appearance but hallucinates unrealistic textures (e.g., car texture). On the other hand, ClimateNeRF simulates photorealistic winter effects, including accumulated snow and change of sky and tree colors, etc. ClimateNeRF even piles snow on tiny structures like pedals, as shown in the figure.
| r | 9b005da8a869523229dd70e1f4fe12c6 |
During the decay phase of two bursts in Obs 1, the oscillations are found with {{formula:2f6c90e5-e233-4e11-8278-a4d7000f6348}} confidence. Decay-phase oscillations are generally described through cooling wakes {{cite:22494acd6b003ff186b0b5393d0350b6f19f14d6}} or surface oscillation modes {{cite:71c5595cf0fe57e61708df4c1dc1e5a83c93de34}}. A cooling wake is the temperature asymmetry due to cooling of the neutron star's surface, spreading across the star in a finite time after an X-ray burst. High-amplitude decay oscillations (>10%) are usually explained using this model {{cite:25dec132bf63a83f09dcb7e90a57699ed4327755}}. A surface mode is an asymmetry in the neutron star ocean or atmosphere excited during an X-ray burst; the oscillation amplitudes are typically {{formula:cc510215-73f0-4c07-bb2a-8318f1234c2a}} 10%. It is plausible that both models contribute to the evolution of burst oscillations {{cite:6dd9333cefc629010ebbc2bedcabb7fab82b509b}}.
| d | 9d3c03983ba37bc2c644364aeff36acb |
Table REF summarizes some of the other annihilation decays of the {{formula:1032c2a2-b59a-4999-8e5b-a6c59fd51b82}} into {{formula:ddb26a2e-7649-4369-a2d0-56df41ca1f69}} and {{formula:fb2c87c4-bf6c-40be-96e0-fe6aa1f56bfb}} . As far as the {{formula:78397af8-0b2d-40e7-be93-5a47356f2e3d}} is concerned, we find that the present results are considerably higher than {{cite:f9787c69670173e0e5ab3fe909bc055ed5adb46f}}. For {{formula:caf03c0f-f16c-423f-9ebc-f6683cfc6966}} the result is again consistent with the PDG {{cite:4806a585f85da3f95dc5920230c91f9c22e95883}} and {{cite:f9787c69670173e0e5ab3fe909bc055ed5adb46f}}. The radiatively corrected results almost match those of {{cite:f9787c69670173e0e5ab3fe909bc055ed5adb46f}}. Results for {{formula:734fe30e-ffc0-41b5-9bc3-71852678bdae}} are a few keV lower than {{cite:f9787c69670173e0e5ab3fe909bc055ed5adb46f}} but still comparable. For all these decays one has to wait for experimental confirmation. In general, we can conclude that our instanton potential predictions and the constituent quark model predictions {{cite:f9787c69670173e0e5ab3fe909bc055ed5adb46f}} are in good agreement for the {{formula:2973498d-9842-4ef9-8c4e-bbf734d67acf}} as well as {{formula:e5099b8d-73da-4c27-8f2a-244d0868f79d}} decays.
| r | 11c39d226544a33a121e4f125604fd80 |
Deep neural networks are often considered black boxes with only limited possibilities to look into and comprehend their decision-making processes.
However, due to their state-of-the-art performance in an increasing number of tasks and the advantage of applying them to problems without domain knowledge, the necessity for explainability grows faster and faster.
Furthermore, debugging such complex models is a tedious task for developers, because finding a good architectural fit for the underlying data is often a trial-and-error process.
While large models such as AlexNet {{cite:847a85d6768a497213341d603402298e77d10ecf}} or GPT-2 {{cite:43ad9fec66f9a37e5f6642c9abf40258fb77be8c}} achieve peak performances in image classification and natural language processing, such deep neural networks are not deployable in many real-world cases, as they are too large for the hardware used in, e.g., embedded systems {{cite:aa7b53fa9c2c32c651649b7438523348fa3dedc7}}.
| i | 3ce5a5a120b0c87a238364b4f6b3eaf0 |
Holographic models from string theory {{cite:223048228d330bbeed37110e37dba161ee87cec0}}, {{cite:1da6c058a29705ec7393a0665f5170f9c8708191}}, {{cite:fc0cafadad7377dee2213c2f1fdb1761153779f8}}, {{cite:29aaa8dc7c9bb4869165e9f4ad111bb13261b8a1}} have been extremely successful in describing a great variety of quantum gravitational phenomena, including the physics of high-energy scattering processes where gravitational effects are important, the formation, dynamics, and evaporation of black holes, and even the emergence of spacetime and gravitation from intrinsically quantum mechanical phenomena. However, it remains a significant open challenge to find a quantum gravitational description of the physics in cosmological spacetimes like our own universe.
| i | 8c79562be2ae57682dc798bb37ee0a00 |
The core novelty is that the model is trained to improve the quality of its explanations by using diagnostic properties of explanations as additional training signals (see Figure REF ). We select the properties Faithfulness, Data Consistency, and Confidence Indication, as they can be effectively formulated as training objectives. Faithfulness is also employed in explainability benchmarks {{cite:575b4c85b5344ce06307bcf2c6172e00c1096a29}} and in related work for unsupervised token-level explanation generation {{cite:6c8f4d91795fcf3fae7d8a1bd5172b976234844b}}, {{cite:9347183b9ef8f7389073dce23fd1d593a8f10fb9}}, whereas we consider it at sentence level. Further, multiple studies {{cite:7d07f994cafa0de07ac03df5fb132136cbf6b571}}, {{cite:ed8ebce5997af8993baa51d44b2f96405b70a461}} find that explainability techniques are not robust to insignificant and/or adversarial input perturbations, which we address with the Data Consistency property. We do not consider Human Agreement and Rationale Consistency, proposed in {{cite:c3336e4356726f809913d62efc20398b06f53acb}}. The supervised explanation generation training employs human rationale annotations and thus addresses Human Agreement. Rationale Consistency requires the training of a second model, which is resource-expensive. Another property to investigate in future work is whether a model's prediction can be simulated by another model trained only on the explanations {{cite:21c2d63b9ec049bdaadcaa00d5e4c0ab31b18c01}}, {{cite:4119b3c4593c46b503afd898d400ff02fbd3ba10}}, {{cite:74b17c12ecc8a1d721a47fff5fd640a06a931f9b}}, which also requires training an additional model. We now describe each component in detail.
| m | 37f0553f3ca41902119012d24e21c5eb |
In this paper, we consider a natural extension of the gravitational Schwinger-Keldysh path integral prescription of {{cite:b507d3b8f82e5c15708278025c745b47a2f1bd9d}} to rotating BTZ black holes. The gravitational space-time asymptotes to the real-time Schwinger-Keldysh contour of the dual rotating CFT at a given initial state with finite temperature and chemical potential due to angular momentum. We study a probe scalar field in the rotating BTZ geometry and obtain the ingoing quasi normal modes and outgoing (time-reversed) Hawking radiation in section . By computing the on-shell action and integrating over the complexified radial coordinate we construct the effective theory of an open scalar field that is coupled to a two-dimensional rotating thermal CFT plasma at the boundary in section . The quadratic influence functionals computed in section REF match with the results of {{cite:c5b27f7f560ae9e171e9b1c713fa229478c08a39}} under appropriate coordinate transformations and redefinition of parameters. The quadratic effective theory has a dual stochastic description in terms of a linear Langevin equation in presence of rotation. The noise and dissipation terms in the Langevin equation are related by the fluctuation-dissipation relation with chemical potential.
| d | 9414748ecc568beac6b5b27986131dec |
Having dealt with the theoretical treatment of electron-nucleon scattering in the absence and presence of a laser field, we now turn to the numerical analysis of the obtained results.
As mentioned in the theoretical framework, the DCS of electron-nucleon scattering involves two factors: the number of neutrons {{formula:f04db11c-5aa3-4855-8654-58996fd86cdf}} and the number of protons {{formula:37462d10-9b26-4fb5-aa23-f8d07802a14a}} .
To be more precise, for the electron-proton (e-p) scattering, we will consider the scenario where {{formula:db3081cf-a9b8-4eab-8186-f64e34933dde}} and {{formula:fd587894-84f1-4a9b-8321-3c3a97f430b1}} , while {{formula:56e62b37-298d-496e-990c-6380f70acf0f}} and {{formula:36311cc9-53d4-4cf9-b097-325f7197f92b}} for the case of electron-neutron (e-n) scattering.
We use spherical geometry for both the absence and presence of a laser field such that {{formula:8f87e740-0447-4810-bb04-901c496271ac}} and {{formula:af0c38be-9985-4492-a1d3-a7134568411c}} are the spherical coordinates of the incoming electron, while {{formula:daa0d96b-e80b-41df-8138-8b1b56970be7}} and {{formula:b311f32e-9d60-40c1-b6e0-9166ac43c15f}} are the spherical angles representing the scattered electron.
Throughout this work, we choose {{formula:98fdde76-5019-45f8-b9fe-1b92f9387944}} , and the direction of the wave vector is taken along the (oz) axis ({{formula:4fb97871-dff8-4969-bbb0-0f839baf0c6c}} ).
The traces that appear in {{formula:41eb98f6-e350-453d-8572-fdbbfd6b200e}} and {{formula:6755908a-42bc-489b-a11a-34b79e86dc60}} are computed with the symbolic-algebra program FeynCalc {{cite:02eb64d441db212de4641018c70cb34c9127fe6b}}, whereas the numerical evaluation of the DCS is performed using the Mathematica programming language.
We begin our discussion by analyzing how the electromagnetic field's strength and its frequency affect the number of transferred photons {{formula:eb8bd76f-7938-4322-b915-140f144cb511}} .
{{figure:3a26f714-bdda-46b8-9659-d9bbebc8dc2e}} | r | 93fdc827fccdb4d1e15dc69204758c62 |
First we discuss string-averaging methods in which a single pair {{formula:61e43f00-c1aa-4e70-89cf-f666cca2b50f}}
is picked at the outset and kept fixed throughout the iterative process.
Such string-averaging methods will be termed “static string-averaging
methods”. We will make use of the convergence theorem in {{cite:d74d4a03d18e40432d2c0f94ed1af9823cb2cda1}}.
Halpern's algorithm is a sequential algorithm which generates a sequence
via the iterative process
{{formula:b4257194-128f-41dd-bf17-1dc7e4eac61a}}
| m | 9f168e7713b8085611e5c64572845243 |
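As a small numerical illustration of Halpern's iteration (our own toy example with T a metric projection, not taken from the cited work):

```python
def halpern(T, u, x0, n_iters):
    """Halpern iteration x_{k+1} = lam_k * u + (1 - lam_k) * T(x_k) with the
    classical anchor weights lam_k = 1 / (k + 2); for a nonexpansive T it
    converges to the fixed point of T closest to the anchor u."""
    x = x0
    for k in range(n_iters):
        lam = 1.0 / (k + 2)
        x = lam * u + (1.0 - lam) * T(x)
    return x

# T: metric projection onto the interval [2, 3] (a nonexpansive map).
T = lambda x: min(max(x, 2.0), 3.0)
x = halpern(T, u=0.0, x0=10.0, n_iters=20000)
```

Here the fixed-point set of T is [2, 3], and the iterates approach 2, the point of that set nearest the anchor u = 0, at the slow O(1/k) pace typical of Halpern-type schemes.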
In this section, we briefly review the basic features of each method employed in our work. A more comprehensive analysis of the GP-based techniques used here can be found, for instance, in {{cite:1ae4b20f45c37417de69b3461396710c138b258b}}, from which the discussion below is largely inspired. In order to compare our GP-based methods with a strong Bayesian DL baseline, we additionally implement Monte Carlo Dropout (MCD) {{cite:f1c5736312b1a2185984d590f6f8b497906039fd}} and apply it to the same RUL benchmark dataset. Our choice of MCD is also motivated by the interpretation provided in {{cite:f1c5736312b1a2185984d590f6f8b497906039fd}}, which establishes a connection between MCD and the probabilistic GP introduced in {{cite:3d729db69a96b882d332d8ad29309dec82224a2a}}.
A description of the main principles underlying MCD can be found at the end of this section.
| m | 9d2cc3be04662a42fdf39360b5193d2d |
Remark 3.9
By invoking an argument
based on the central limit theorem
(see {{cite:8cc2accd9b6bd520678dda3306382d245d103123}})
the Kahane–Khintchine inequality
for Rademacher sums (REF ) implies
a corresponding result for Gaussian averages:
For all {{formula:87219812-60a1-4b02-8a43-29ea418c46dd}} ,
for any orthogaussian family {{formula:ed15b94e-a17c-4eab-b9db-de3a4fe39a45}}
on a complete probability space
{{formula:3f8c8f69-160f-4d35-b5c7-276c253ec3e9}} ,
for any real Banach space {{formula:ef83349c-953f-4ecf-8cc0-2dfcf0846029}} ,
for all {{formula:ec4e15fe-37ca-4656-a81d-9e14f9fc4fec}} ,
and every {{formula:7a39d97b-8cef-487a-936a-8d26b84eea04}} , we have that
{{formula:6016966b-3f3e-42cb-a102-852404f0be48}}
| r | 33b141e536880cd3dd7dd2e0091ec903 |
Theorem 6 [Theorem 1 in {{cite:491817baab9b2bb01a82f47e0512e4dad43e29e8}}]
For all {{formula:3343b843-5224-41a9-9b85-7bfc4613602b}} :
{{formula:70a6c725-0df9-4271-83f2-3426dff4eb78}}
| r | 24ffffbbc7e84a009ebba1267de0596d |
The question remains why Mrk 335 is not an optical Seyfert-type changer, as observed in an increasing number of other AGN {{cite:d847c243ea892b483a23817afeaba5a9a8f89290}},
despite its high amplitude of X-ray variability (a factor {{formula:1cc5905a-37c7-4cbe-b40f-9915c44814c1}} 50 between highest and lowest state in our
long-term Swift light curve). As described above, the emission-line regions seem to see a less variable EUV-X-ray
SED. Furthermore, because the broad Balmer lines are intrinsically bright, even a drop of a factor of 2 still preserves the
type 1 nature of Mrk 335.
| d | e9d5e90c1d8852a7594397b158b298c9 |
It is possible to accelerate the rate of convergence from {{formula:9a46f6fe-05fa-4d3b-b8ae-31c4de028f77}} to {{formula:1866333a-de6e-449d-806b-1fe0b4f435fd}} for gradient-type methods. The first acceleration result was shown by {{cite:572a9f1a29e2076118c526f88cc9f87897b4f076}} for solving smooth unconstrained problems. The technique has been generalised to accelerate gradient-type methods on possibly non-smooth convex programs {{cite:456b4d45407d643ff047b5106e3f088db784bc45}}, {{cite:bddedc7d92f96a9aaec7491dec4753f2ee3d8ca4}}. Primal–dual methods for solving linearly constrained problems can also be accelerated by similar techniques. Under the convexity assumption, the augmented Lagrangian method (ALM) is accelerated from {{formula:3a3e0899-04d6-4cc5-b79a-4e266e2de5d9}} to {{formula:690955cf-1733-42c4-950b-e14f86b8ab48}} in {{cite:cbd8b868cf265aecbe9c9ec77b50c099b95efc65}}.
| m | ce8234f9ae8e81703caf921b8e2992f2 |
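The O(1/k) versus O(1/k^2) gap can be seen on a toy quadratic (an illustrative sketch; the eigenvalues and iteration horizon are our own choices):

```python
import numpy as np

def gd(grad, x0, step, n):
    """Plain gradient descent, with the O(1/k) objective rate on smooth convex f."""
    x = x0.copy()
    for _ in range(n):
        x = x - step * grad(x)
    return x

def nesterov(grad, x0, step, n):
    """Nesterov's accelerated gradient: take the gradient step at an
    extrapolated point y, improving the objective rate to O(1/k^2)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n):
        x_next = y - step * grad(y)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# f(x) = 0.5 * sum(a_i x_i^2): an ill-conditioned diagonal quadratic with L = 1.
a = np.array([1.0, 1.0 / 800.0])
f = lambda x: 0.5 * np.sum(a * x * x)
grad = lambda x: a * x
x0 = np.ones(2)

err_gd = f(gd(grad, x0, step=1.0, n=200))
err_nesterov = f(nesterov(grad, x0, step=1.0, n=200))
```

On this example the accelerated method's objective error after 200 iterations is below its 2L||x0 - x*||^2 / (k+1)^2 guarantee, while plain gradient descent is still limited by the tiny eigenvalue.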
The results show that all RGB-D methods outperform the RGB-based methods. Even with only 20 sparse depth samples, the overall accuracy is greatly improved. Our approach consistently outperforms the state-of-the-art {{cite:33223d582b3baf8347445bc06ead855273dd7ee4}} and others {{cite:49348c5ed2a113a9ed6f5c39d732d3d76e6a66a7}}, {{cite:6a7e5bb91a163cf78d4e26574887c0dae443446d}} with different amounts of sparse samples. With 200 sparse samples, which is common in most applications such as AR or SLAM, our results are significantly better than those of Ma et al. {{cite:33223d582b3baf8347445bc06ead855273dd7ee4}} in all the metrics.
Next, we evaluate each component in our design and demonstrate how each helps to improve the accuracy.
{{table:7e64114c-c051-4c7a-892e-d76a30c2c0da}} | r | 111823929933dec9db9ecc3f43dc58f6 |
We consider the following schemes for comparison: 1) PDCA: the proposed PDCA joint-design scheme; 2) AO: the alternating optimization (AO) scheme proposed in {{cite:0a1e35acea5e94ad229f0e47bf7c4013177f1566}}. To be more specific, on the one hand, the RIS phase-shift design problem is solved by semidefinite relaxation (SDR) with a Gaussian randomization technique. On the other hand, the transmit beamforming design problem is derived in closed form by leveraging the solution of the generalized Rayleigh quotient problem {{cite:14b598acd3e41a173d0804a3eb8510e8169f84cc}}; 3) Opt w/o RIS: the beamformer is designed to maximize the LESR in the absence of the RIS. By substituting the solution of the LESR maximization problem into formula (7), the ESR is computed by Monte Carlo simulation, taking the average over {{formula:7c7f65eb-d9bc-4945-9820-d2eec0f219bc}} i.i.d. random {{formula:17fc6a85-8c70-457b-b3e8-a823ef4a8d10}} realizations.
| r | 00360e213e56f46b2ddd80be285bb469 |
As we have discussed above, we adopt a single pyramid framework. We hope the features of every single spatial size can independently learn rich multi-scale information. We notice that layers within the same spatial size also have different receptive-field sizes. Thus, aggregation among these layers may also be able to exploit multi-scale information. Motivated by this, we propose an orthogonal attention block (OAB). It performs layer-wise aggregation via a densely connected architecture similar to {{cite:ce1b0acf1841ed1ddd362ea34cebf00a85e63805}}. However, we further enhance it by mask and channel attention units, which can explicitly help different layers to be more diverse and reduce the redundant connections. It will be further discussed in Sec REF .
| m | 3fcbf475f9928bdc7dd2f1d132128ad2 |
It has been shown {{cite:96f030483b4943b719c6d87425d0c573b84a6fef}}, {{cite:55ca68be9c76bbdbe29b7204b7dfaaba1b933182}} that the update (REF ) enjoys a {{formula:cdbe4cca-691c-4b49-ac0a-a842d0ce4119}} rate of convergence. In this section, we show that this algorithm in fact achieves asymptotically geometric convergence.
| r | a05678bdee10e160f72265fcd770e585 |
Next-generation wireless communication systems are expected to provide a 1000-fold increase in network capacity over the operational system to satisfy the ever-increasing demand for higher data rates driven by emerging applications such as augmented reality (AR) and virtual reality (VR). To achieve this goal, promising techniques relying on massive multiple-input multiple-output (MIMO) solutions {{cite:2fff993398fa81c975fef01b543cf80924c384d2}}, millimeter wave (mmWave) communications {{cite:563d14d9a34cff5ff9b59d3a4db8213b66872ebb}} and ultra-dense cloud radio access networks (UD-CRAN) have been advocated {{cite:babc283bf4a4c732dfe93709d272d150762c773e}}, {{cite:badefbdf716e8ea4c3bb3ae04daef7982249c4f2}}, {{cite:5c4729958dc678358c82b35556e13e2b83055ce9}}. By deploying a massive number of antennas at the base station (BS) for transmission over the millimeter-wave (mm-wave) bands, significant spectral efficiency improvements can be achieved by exploiting the joint benefits of a high spatial multiplexing gain and high bandwidth. However, escalating signal processing complexity, increased hardware costs as well as high power consumption are incurred by the associated high number of radio frequency (RF) chains operating in a high frequency band. These issues erode their practical benefits. Although the access points (APs) can be densely deployed in UD-CRAN systems to reduce the distance between the users and the APs, the limited fronthaul capacity becomes their performance bottleneck. Furthermore, these techniques have to operate in the face of unfavourable electromagnetic wave propagation, which entails a high blockage probability.
| i | faf267096871657e6797581298cd2cb8 |
Novel View Synthesis (NVS) aims to generate images at new views, given multiple camera-calibrated images, and is an effective route to realizing Virtual or Augmented Reality.
With the introduction of Neural Radiance Fields (NeRF) {{cite:7c0a4dea25854b0319dc85becd48504774b3afe7}}, NVS tasks {{cite:b768306c0afb4ef1ef39c44a72d56e2b70ece6b2}}, {{cite:06e3c1cda5bef4a2461dd683ca2a0644ca10afd8}}, such as large-scale or dynamic synthesis {{cite:3cbad1c39083ff75955df3e6796d81e14dda2e26}}, {{cite:a6ab6a85f54d17e5a4d769af3725bd2b3f1eef42}}, {{cite:387b0fbbcbca8f72381e8179a0b01c698cce20a3}}, have been successfully addressed with high quality.
NeRF adopts implicit functions to directly map 3D-point spatial information, in terms of locations and directions, to the attributes of color and density. To synthesize high-resolution images, NeRF needs to densely sample points over the whole scene, which consumes far more computation than traditional solutions {{cite:41b7f1ae3aca5be8862e419d69db0c64adad37d7}}, {{cite:e4664d428e95087533edce14fb51709136d65961}}, {{cite:1541fab64e6769bec1c3f0f2481c389f415e2253}}. For instance, for a scene containing 100 images at resolution {{formula:6dbeaba5-ca13-47b4-9e94-844d8d2a2266}} , NeRF training usually takes 1-2 days {{cite:7c0a4dea25854b0319dc85becd48504774b3afe7}}, and the per-image testing time is around 30 seconds. These two inefficiencies impede the fast practical application of NeRF.
{{figure:59e19a55-a959-4f43-9515-e3e6bbf59fbe}} | i | 47a1e97f5fb15d2d41bca4720c621087 |
We first reviewed the expansion due to Girvin and collaborators (cf. refs {{cite:6123228cd0e3cffcc33031e09e97a62b6e939f22}}, {{cite:60145e9e527764c215f90628c567f0fc42f10fa0}}).
This expansion is ill-suited for extrapolation to the thermodynamic limit since it is highly non-orthogonal. As a consequence,
it causes the evaluation of the expansion coefficients to be numerically unstable.
We have found that by orthogonalizing the original basis using the Gram-Schmidt procedure,
a new basis is obtained in the form of (modified) Jacobi polynomials.
Using this basis, we have been able to extract pair correlation expansion coefficients for a wide range of wavefunctions, including the Laughlin series,
composite fermions with both reverse and direct flux-attachment, as well as Moore-Read and Bonderson-Slingerland states.
We have also been able to extrapolate these coefficients to the thermodynamic limit.
This way, we obtain expressions for the correlation functions on an infinite disk.
| d | e27b10938c6defe8e9bd6d607dfbe7ce |
Quantales were introduced by Mulvey (see {{cite:79dfd31ed40995f89d6d16939a9d75b917ccb5bb}}) in order to provide a lattice-theoretic setting for studying non-commutative {{formula:ded1ba6e-cac7-4c75-9a59-adf381aa5592}} -algebras, as well as constructive foundations for quantum mechanics (see {{cite:7186ba5ef394e0bdef6a24e27d4c09e066f00e4a}}, {{cite:06459ceb38986e5d9e204162e71852bdde4e29b6}}, {{cite:f99cf95690093899f2da5648fe042c356c99abc9}}, {{cite:96941293579993de10cf5cf0d08a61fca0615fbd}}). Quantales form an
important class of ordered algebraic structures, and interest in quantales was
stimulated by the fact (see {{cite:31469717431a174b177a45aae7278d9444cd79ee}}) that Girard quantales provide a sound and
complete class of models for linear intuitionistic logic, just as complete Heyting algebras model intuitionistic logic. For further types and aspects of quantales, which are not explained here, we refer the reader to {{cite:4262d980a54718e32c32dc91cb5998e78fa3d826}}.
| i | ab077315735bbc83f89d8f43855493f0 |
This is a common quantity of interest in causal inference, and we choose to estimate it with the standard difference-in-means matching estimator, defined for some match assignment {{formula:3fdb9964-8e25-4634-b7c5-63ddfba2fac8}} as {{cite:a204ab07035e8fe4a3c50659e456a6c06891b192}}:
{{formula:e79e3799-97a4-4ed6-9f1e-109b31af5ae5}}
| m | 329add4a4c7d4155f0a13aada17c5807 |
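A minimal sketch of this estimator (with hypothetical outcomes and match assignments; the matching step itself, e.g. nearest-neighbor search on covariates, is assumed to be done already):

```python
def matching_att(y_treated, y_control, matches):
    """Difference-in-means matching estimator: average, over treated units i,
    of y_treated[i] minus the mean outcome of the control units matched to i
    (matches[i] is the list of control indices assigned to unit i)."""
    diffs = []
    for i, ctrl_idx in enumerate(matches):
        ctrl_mean = sum(y_control[j] for j in ctrl_idx) / len(ctrl_idx)
        diffs.append(y_treated[i] - ctrl_mean)
    return sum(diffs) / len(diffs)

# Hypothetical outcomes and 1- and 2-nearest-neighbor match assignments.
y_t = [5.0, 7.0, 6.0]
y_c = [4.0, 5.0, 3.0, 6.0]
matches = [[0], [1, 3], [2]]
att = matching_att(y_t, y_c, matches)
```

Each treated unit contributes its outcome minus the mean of its matched controls; the estimator is just the average of these per-unit differences.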
The Hubble parameter at the second-lowest redshift must be computed using the third, fourth, and fifth galaxies (counting from lowest redshift), as they yield the effective redshift {{formula:f9d0d078-522f-4e37-bae3-38ade53871f7}} and {{formula:56171da1-acff-4e87-a387-fdab9c3693ad}} km/s/Mpc. For this value the uncertainty is on the order of 80%. To obtain the same uncertainty as in {{cite:12ee0a8dbb366ae1cfcdae7c5615dd5ea2aa1f65}}, that is, 20%, one must lower the age uncertainties to {{formula:a0c47f94-23aa-4cd1-bb9e-07d1ad0490bf}} , yet again a significant reduction.
| r | 2c282bb1dd93bd9f55e08cae922c7c95 |
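The differential-age (cosmic chronometer) estimate underlying these numbers can be sketched as follows; the redshift-age pairs and the single-pair form below are illustrative, not the galaxies or weighting used in the text:

```python
def hubble_from_ages(z1, t1, z2, t2):
    """Differential-age estimate of the Hubble parameter,
    H(z_eff) = -1/(1 + z_eff) * dz/dt, from two (redshift, age) pairs
    with ages in Gyr; the result is converted from 1/Gyr to km/s/Mpc."""
    z_eff = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / (t2 - t1)   # negative: older galaxies sit at lower z
    h_inv_gyr = -dz_dt / (1.0 + z_eff)
    return h_inv_gyr * 977.79       # 1 Gyr^-1 ~= 977.79 km/s/Mpc

# Hypothetical pairs: ages in Gyr decrease with redshift.
H_est = hubble_from_ages(0.10, 12.5, 0.20, 11.5)
```

Because H is proportional to dz/dt, the large fractional age uncertainties discussed above propagate almost directly into the uncertainty on H, which is why tightening the age errors shrinks the H error so strongly.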
The electronic structure in the vicinity of the Fermi level in the dimerized phase is shown in Fig. REF (c). We used the local coordinate system with coordinate axes pointing to the oxygen ions. One may notice a strong bonding-antibonding splitting for part of the Ru {{formula:e92d1ad9-1f7c-4d08-89a7-6f9a10005d9d}} states. These are the {{formula:191ed7db-b247-44ec-9b3c-ae5937dcb10d}} orbitals, which look to each other in the common edge geometry {{cite:3ba4a57020a7f251a6daccdd5c05c7ad487a2c76}}, {{cite:a5608c41d9fdd67591254ae2be7f7d50877eeb94}}, {{cite:3e651933bd1750daef817758683873807ec18339}}. This large bonding-antibonding splitting is due to a direct overlap between the {{formula:432e0d30-fff4-4bc8-8caf-6d4f4129e7d5}} orbitals on two different sites forming Ru-Ru dimers. The splittings of the {{formula:c4191fde-45a6-4f4c-9231-28f77bd24905}} states are much smaller. One may compare corresponding splittings in dimerized phases of Na{{formula:4833592f-bc7c-47a7-adbd-61fd314362fc}} RuO{{formula:4571c836-6a12-4863-a57e-78ae88862467}} and Li{{formula:8f594535-da7e-463e-aa70-f5e985a694bc}} RuO{{formula:635e988f-f872-4759-9cd7-8628d0333ff8}} . The bonding-antibonding splitting between the {{formula:03ee0b86-6dba-48ec-8d3e-b675bfd7f83e}} orbitals in Na{{formula:4f76ff0d-1146-4572-be3a-3ec2eae5cc99}} RuO{{formula:e5cdd872-4c71-477b-8a32-b1662996e5be}} is {{formula:fa6ae618-53d2-4ba4-aa09-bb0d4713097c}} eV, i.e. on {{formula:22fc9063-3b0a-4869-a5e3-475e1210cd28}} eV smaller than in Li{{formula:7a9b42c1-20b9-4333-81b4-913e3cf54ed2}} RuO{{formula:625c3127-829b-4ad0-988c-4afde35c094c}} {{cite:3ba4a57020a7f251a6daccdd5c05c7ad487a2c76}}. 
The dimers are mainly stabilized by the formation of these bonding orbitals, and thus one may see that, even under pressure, Ru-Ru dimers are much less rigid and stable in Na{{formula:ede5d86c-f86a-4e45-b0c3-3c110e564d73}} RuO{{formula:061c9cb0-531b-41e9-a0e5-0a51c0ac90db}} than in Li{{formula:5581f275-8c7f-4cbd-bd36-3d9848ad9b60}} RuO{{formula:1ea6a677-d8c8-4745-808e-c8a88ea2b954}}. This also suggests that one may expect a smaller spin gap at low temperatures in the high-pressure phase of Na{{formula:999edf53-b195-4a3f-85b5-3a79fe161e09}} RuO{{formula:ca0a5ed8-a32c-4008-8a66-dc5b4bf13c15}} (smaller than in Li{{formula:e7c27e00-aa77-45aa-b75b-5e970e35f2d8}} RuO{{formula:8c214249-9427-4873-b3cb-050a58f6cbf9}}).
An alternative approach, which yields a larger ROA, is presented in {{cite:2e8f436e59273de1c111478a90885e2df8dd6ae4}}; it uses a disturbance-affine feedback gain to parameterize the control input, as {{formula:74ba1824-19f0-4d1a-82be-85189a6920ed}}, for {{formula:1c2e0bda-0194-4d18-9c5f-de51047e406c}}. In this method, hereafter referred to as fully parameterized disturbance-affine MPC (FPD), {{formula:975867f5-70f0-4780-9262-c4c4886396b8}} and {{formula:5ef5769f-8652-4370-8ce3-27d43d7496a9}} are online optimization variables.
The sky signal is a composite of ground-emission spillover, atmospheric emission, and the cosmic signal attenuated by atmospheric absorption;
a detailed treatment is beyond the scope of this discussion (see {{cite:6b940942faab4a450636c62ced8b52c696820d46}} for more information).
If two measurements of the sky are performed using one of the switching procedures described in Sect. 1, we can compute their
difference to cancel all unwanted signals and retrieve only the cosmic signal of interest ({{formula:bd235ea8-b964-4f05-9dae-95365cad5004}} is hereafter referred to by the
more commonly used {{formula:d1771a86-f1f2-4b62-8de0-0395ff3ad369}}, or antenna temperature, and similarly {{formula:e68a7c5f-6d37-4f55-8495-1dc946ab3d50}} becomes {{formula:57a20fb4-7d56-421f-8684-59b0f5af1864}}):
{{formula:3d473026-5349-45f7-a265-ab46be44410a}}
SLAM Integration.
One benefit of our method is that it is CPU-friendly and can run in real time. We integrate our NDD into the existing LiDAR odometry and mapping {{cite:c42ad2c619b7d77c0dea6a711bc23c30822e765e}} (LOAM) system for loop-closure detection, as shown in Fig. 6. NDD-LOAM is accurate for loop-closure detection and corrects the cumulative trajectory error during mapping.
{{figure:21fae89d-7cd2-4058-9cf3-b1f457e1a637}}
Also, to compare the predictive accuracy of our method with that of the others, we used the Diebold-Mariano test {{cite:3ff8638b3fe58b5bda964d04a735f183d7460bf7}}. Based on the test, the null hypothesis of equal accuracy of any two given methods at the {{formula:546a631f-492a-4816-b372-53bca54dd2fb}} confidence level is rejected if {{formula:e216f5bb-7305-4bed-ab1d-bf40f276470f}}, where DM is the Diebold-Mariano test statistic calculated from the corresponding squared-error residuals.
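As a hedged illustration, the Diebold-Mariano statistic on squared-error loss can be sketched as follows (a minimal version without the long-run variance correction used for multi-step forecasts; the function name is our own, not from the cited work):

```python
import numpy as np

def diebold_mariano(e1, e2):
    """DM statistic comparing two forecasts via squared-error loss.

    e1, e2: forecast errors of the two methods on the same test points.
    Under the null of equal accuracy, DM is asymptotically standard
    normal, so equality is rejected at the 5% level when |DM| > 1.96.
    """
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2  # loss differential
    n = d.size
    return d.mean() / np.sqrt(d.var(ddof=1) / n)
```

For serially correlated multi-step forecast errors, a heteroskedasticity-and-autocorrelation-consistent variance estimate would replace the plain sample variance.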
When {{formula:3591365f-4adf-46cb-97a9-fcfc35113237}} , the designer decides how much data to sample without controlling the type of data. We explore classification on CIFAR-10 {{cite:8e9c650369425a233ae811082ad39a6b841b89ac}}, CIFAR-100 {{cite:8e9c650369425a233ae811082ad39a6b841b89ac}}, and ImageNet {{cite:5fcb7d09553155b7812b2c375f5bc2db4f2124fb}}, where we train ResNets {{cite:0f424e0583fadc0f6ec9a75349186100401d7d37}} to meet a target validation accuracy.
We explore semantic segmentation using Deeplabv3 {{cite:b398f128f96ef696a10e341f9219a65d54735c3e}} on BDD100K {{cite:9d7830631a27d78f24c0bf1618c8c0289173dea7}}, which is a large-scale driving data set, as well as Bird's-Eye-View (BEV) segmentation on nuScenes {{cite:37aa09ef1de8a9ae13b8dc43654fed9db5b4d6d2}} using the `Lift Splat' architecture {{cite:19a92fc44088e5c1606266274b957fd280863137}}; for both tasks, we desire a target mean intersection-over-union (IoU).
We explore 2-D object detection on PASCAL VOC {{cite:66fb3b056dcf1d89367df5886163a4c254b7bd28}}, {{cite:1a764fd6a04bb215adb0ff20dbec5f0351ae7652}} using SSD300 {{cite:25c978c77b9617bb1b0ea2baa22586382e266c56}}, where we evaluate mean average precision (mAP).
The first motivating point for this paper and its prequel {{cite:8155f31b8acae2096ff622a987b71361c8c5f6f2}}, suggested to us by S. Shatashvili, concerned precisely the last item: namely, the systematic understanding of the relation between 7D abelian Chern–Simons theory and 6D Kodaira–Spencer {{cite:5022f55f27b2a45320415afae378ed5a88f24bed}} gravity (otherwise known as BCOV theory {{cite:793ca843cb4391c7c138e8cdcc8f6df7387c2481}}) from the BV-BFV perspective.
At the semiclassical level, the relation is a result of Gerasimov–Shatashvili {{cite:8e6bc47e05c4ebcc281bcb909444773c5a457080}} (see also our review in {{cite:8155f31b8acae2096ff622a987b71361c8c5f6f2}}). In this paper, we explore the perturbative BV-BFV quantization and show that, for an appropriate choice of gauge fixing, no quantum corrections are added to the semiclassical result.
DFT calculations. We used the generalized gradient approximation
with the Perdew-Burke-Ernzerhof (PBE) density functional {{cite:88d8a5e5c491907756c742b2b8da1e64a39c7e0f}}, {{cite:6380983478575c95369e7f7687fdf1a03e8fbe6b}}
implemented in VASP {{cite:4408a62bf68c0f23b9fbd2f3aad3cbade7e574e8}}, {{cite:9c62e80dbcade384a9bc8abba11b3a3051d096db}}. The calculations
were performed with the kinetic energy cutoff of 410 eV and a Methfessel-Paxton
smearing of order 1 with a smearing width of 0.1 eV. Monkhorst-Pack
{{formula:ec0663a1-79f8-449e-bedd-71757f9f7617}} -point meshes were used to sample the Brillouin zone, with the
number of grid points chosen to ensure the energy convergence to around
1 meV/atom. Before the training, all supercell energies (per atom)
were shifted by a constant such that the DFT energy of the equilibrium
BCC structure is equal to the negative experimental cohesive energy
of BCC Ta.
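The energy-shift step before training can be sketched as follows (an illustrative snippet; the cohesive-energy value is an approximate literature number and the function name is our own, not taken from the text above):

```python
# Approximate experimental cohesive energy of BCC Ta (eV/atom); treat as
# an illustrative constant, not the exact value used by the authors.
E_COHESIVE_TA = 8.10

def shift_energies(energies_per_atom, e_bcc_dft):
    """Shift all supercell DFT energies (per atom) by one constant so that
    the equilibrium BCC energy equals minus the cohesive energy."""
    offset = -E_COHESIVE_TA - e_bcc_dft
    return [e + offset for e in energies_per_atom]
```

Because every energy is shifted by the same constant, energy differences (and hence forces) are unaffected; only the reference zero of the training data changes.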
Importantly, in order to account for all the above empirical
phenomenology, the model needs to assume multiplicative
variations, i.e. that the variability between the parent's trait
and those of its offspring increases (linearly) with the parent's lag
time: the larger the parent's lag time the larger the possible
variation. This multiplicative process — at the roots of the
emerging heavy tails in the lag-time distribution— resembles the
so-called rich-get-richer mechanism of the Matthew
effect {{cite:1ce62dfff1d406d19c50e5a9cbb762d80898e37f}}, {{cite:3bee634264a79339f3d67d60fc2210c8f9c32a63}}, {{cite:960fffc3029d3114434631c971efbf7ab20e15d6}}, {{cite:acd96c00ef2736972c46bf80d171f653b3fbde10}}, {{cite:81aba0004631645e40a505257b97f209591fc3de}}. This type of variation implements an effective dependence
between the parent's trait value and the variation amplitude, which was
hypothesized as a possible mechanism behind the experimental results
and that could stem from a highly non-linear map between genotypic
changes and their phenotypic manifestations
{{cite:9f796f3e661bab0d723b24504f750ca3479387eb}}, {{cite:504caa4a08a81d2c0da8be4564cc401595013f97}}.
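A minimal sketch of such a multiplicative parent-offspring rule (our own illustrative model, not the exact specification above): the spread of the offspring's lag time grows linearly with the parent's lag.

```python
import random

def offspring_lag(parent_lag, sigma=0.3):
    """Multiplicative variation: offspring lag = parent lag times a random
    factor, so the variation amplitude is proportional to the parent's
    trait. Lag times are kept non-negative."""
    return max(parent_lag * (1.0 + random.gauss(0.0, sigma)), 0.0)

# Iterating this rule over many generations yields heavy-tailed lag-time
# distributions, in the spirit of the rich-get-richer mechanism.
```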
We bring the study of adversarial robustness to the field of neural combinatorial optimization. In contrast to general learning tasks, we show that there exists a model with both high accuracy and robustness. A key finding of our work is that the assessed neural combinatorial solvers are all sensitive to small perturbations of the input. For example, we can fool NeuroSAT {{cite:81d1ee71c7e8175b2ae7573fe982ee112616cb49}} on the overwhelming majority of instances from the training data distribution with moderate perturbations (5% of literals). We show that adversarial training can be used to improve robustness. However, strong perturbations can still fool the model, indicating a lack of expressiveness. In summary, contemporary supervised neural solvers seem to be very fragile, and adversarial robustness is an insightful research direction to reveal and address neural solver deficiencies in combinatorial optimization.
Discriminator Network.
We use the R50+ViT-B/16 hybrid model pre-trained on ImageNet from ViT {{cite:c1b05ca8aeb70fe220298fef6c64864f2760d6a4}} as a starting point for our discriminator design, leveraging pre-training to learn effectively from the limited target-task data. Then, we simply apply a two-layer multilayer perceptron (MLP) to make a prediction about the identity of the class-aware image.
Following previous work {{cite:8cbf41a4670acf084497921a36f2a762d7f20711}}, we first utilize the ground truth image {{formula:404043f8-e62f-4722-87e7-68c810abc676}} and the predicted segmentation mask {{formula:fc275013-b58e-4bdb-a674-50f25a52f9b1}} to obtain the class-aware image {{formula:41db5711-0f53-428f-a22c-e1160a211b7e}} (i.e., pixel-wise multiplication of {{formula:8bf6a395-31ba-4bd0-9e80-0e0a33877bcc}} and {{formula:a31b62a1-44b1-4982-839e-4da32de2c183}} ).
It is important to note that this construction re-uses the pre-trained weights and does not introduce any additional parameters.
{{formula:3ddb7bb5-9e87-4ae9-b83b-1d0af6c9a592}} seeks to classify between real and fake samples {{cite:c715d1fb978e0fbe8b314554f7975fd1c85a99bf}}. {{formula:94c84714-1f90-4e52-9c8f-a138aab34000}} and {{formula:c48105ae-8d6f-4bb4-974b-263b705beb26}} compete with each other through attempting to reach an equilibrium point of the minimax game.
Using this structure enables the discriminator to model long-range dependencies, allowing it to better assess medical image fidelity. It also endows the model with a more holistic understanding of the anatomical visual modality (categorical features).
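The class-aware image construction, pixel-wise multiplication of the ground-truth image and the predicted segmentation mask, can be sketched as follows (a hypothetical minimal version; array shapes and the function name are our assumptions):

```python
import numpy as np

def class_aware_image(image, mask):
    """Pixel-wise multiplication of an (H, W, C) image with an (H, W)
    segmentation mask, broadcasting the mask over the channel axis."""
    return np.asarray(image) * np.asarray(mask)[..., None]
```

With a binary mask this keeps only the pixels of the predicted class, so the discriminator judges real versus fake per class rather than for the full image.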
In finite precision arithmetic, due to the influence of rounding errors, the Lanczos vectors computed by the Lanczos bidiagonalization gradually lose their mutual orthogonality as the iteration progresses {{cite:a3d1e023ed60e34b77001cd3e4f8743fca22156e}}, {{cite:8daa3d1c8b4ab24c617ebc39229f1cfea778611d}}. This is a typical phenomenon in Lanczos-type algorithms, which was first observed in the symmetric Lanczos process used for computing some extreme eigenvalues and eigenvectors of a symmetric matrix {{cite:4130418468e5022a76d76c2af1665eb10fb14ce8}}. The loss of orthogonality of the Lanczos vectors leads to a delay of convergence in the computation of eigenvalues and eigenvectors, and sometimes it is also difficult to determine whether some computed approximations are additional copies or genuine close eigenvalues {{cite:70842eb61a29836f6e57f47e04ab761482b41bc0}}, {{cite:89338068bb8c2de8beef187c4fb0e7e3d45afb2b}}, {{cite:98ab47098d15d64aa3817513066763a14f33489d}}. The above properties carry over to the Lanczos bidiagonalization, since the Lanczos bidiagonalization of {{formula:59adc1b3-8347-4f7d-a311-c9dbea14e3dc}} with starting vector {{formula:afe1c032-2ba9-415c-b4ee-2f71cccc19b1}} can be written as the symmetric Lanczos process of {{formula:686cbc0e-0357-497c-bd51-5cec3a74223d}} with starting vector {{formula:6fb3d7db-53f7-400d-a1f1-c3b694fe8553}} {{cite:70a074a7bdd960250b251546abf09fddb06792b4}}. On the other hand, when using LSQR to solve least squares problems, the loss of orthogonality may cause the algorithm to require many more iterations to converge; the finite precision behavior of LSQR is very similar to that of the closely related conjugate gradient (CG) algorithm based on the symmetric Lanczos process, and we refer to {{cite:c49ded7d0b8782af3df8b962ae757424e2fdb662}}, {{cite:50db1dc58624dd6563b668a65ba7ae42a564e38b}}, {{cite:aecabec304afc1ba4b76e1372e1b89c8319007ed}}.
For discrete linear inverse problems, Lanczos bidiagonalization based regularization algorithms also suffer from a delay of convergence of the regularized solutions, which can make the propagation of noise during the iterations rather irregular {{cite:8db49173f2f7d376570a70fd6a6b13b0f6e809be}}. For these reasons, the Lanczos bidiagonalization is usually performed with reorthogonalization for solving least squares problems and discrete linear inverse problems. Several reorthogonalization strategies have been proposed to maintain some level of orthogonality, such as partial reorthogonalization {{cite:8daa3d1c8b4ab24c617ebc39229f1cfea778611d}} and one-sided reorthogonalization {{cite:17f6f18206be7fbad29945eadf30d230887c6c6f}}.
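As an illustration (our own minimal sketch, not the cited algorithms), a Golub-Kahan (Lanczos) bidiagonalization with optional full reorthogonalization might look as follows; in exact arithmetic the reorthogonalization is redundant, but in floating point it maintains the orthogonality that is otherwise gradually lost:

```python
import numpy as np

def golub_kahan(A, b, k, reorth=True):
    """Lanczos (Golub-Kahan) bidiagonalization of A with starting vector b.

    Returns U (m, k+1), V (n, k) with (nearly) orthonormal columns, and the
    bidiagonal entries alphas, betas. With reorth=True, each new Lanczos
    vector is re-projected against all previous ones (full
    reorthogonalization); partial or one-sided variants are cheaper.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alphas = np.zeros(k); betas = np.zeros(k)
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j] - (betas[j - 1] * V[:, j - 1] if j > 0 else 0)
        if reorth:  # remove components along previous right Lanczos vectors
            v -= V[:, :j] @ (V[:, :j].T @ v)
        alphas[j] = np.linalg.norm(v); V[:, j] = v / alphas[j]
        u = A @ V[:, j] - alphas[j] * U[:, j]
        if reorth:  # remove components along previous left Lanczos vectors
            u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
        betas[j] = np.linalg.norm(u); U[:, j + 1] = u / betas[j]
    return U, V, alphas, betas
```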
Table REF shows the end-to-end results on Spider dataset.
Based on the codebase (https://github.com/microsoft/rat-sql) provided by {{cite:c3287c6b9fd90b1ceeed1bcb445c6fb4ed783011}}, we replicate the RAT-SQL + BERT-large model, achieving 0.665 exact set match accuracy on the development set.
This matches RAT-SQL V2 + BERT but is still worse than its V3.
By replacing BERT-large with the encoder of BART (the BART encoder has 12 transformer layers, while BERT-large has 24), we obtain an accuracy of 0.676 on the development set and 0.651 on the test set.
The BART-encoder-based model thus achieves results comparable to the RAT-SQL V3 + BERT-large model on the hidden test set with fewer encoder layers.
With our GAP Model, the RAT-SQL can be further augmented, benefiting from enhanced contextual encoding ability.
The model achieves accuracy of 0.718 on the development set and 0.697 on the hidden test set.
This confirms the effectiveness of the Generation-augmented pre-training.
This is the state-of-the-art performance on the hidden test set of the Spider dataset at the time of writing, achieved with fewer model parameters.
Comparing scores on the development set and the test set, we observe that BART-based models (+BART Encoder or GAP Model) generalize better to the hidden test set, as the gap between development and test scores is smaller than for models such as RAT-SQL V3 + BERT.
Concurrently, {{cite:3feb9d28763733f438b1439146e63abb488a9eb0}} used a synchronized context-free grammar to generate synthetic data for pre-training; {{cite:71e6e8f26eb28f544d1b536130368414c4db53c3}} leveraged an existing large-scale data-to-text dataset to enhance the structured data representations. Both achieve performance comparable to ours, but require more model parameters (24-layer transformers in the pre-trained model) and extra crowd and expert annotation effort.
{{table:85cbba84-9ada-4070-80e5-9e9fa307b17e}}
where {{formula:9dd45d58-e16c-4971-b0f4-5fa0b3d715d0}} and {{formula:ba9fac0f-5c02-45a7-9221-f8642af75e44}} (see {{cite:d47d704da2619ba4916ec3f3c952dbf0e2af10e8}}). In particular, for {{formula:64a9f44d-ddba-4991-983c-1a5bd43e03b4}} , we have {{formula:7b3879fd-460c-4e30-9d09-1569de76c487}} .
The following estimate holds for the modulus of continuity of the Bessel potential function {{formula:5eeeb4a1-5982-49c1-a8da-bcb39265896b}} :
{{formula:443c9803-deb5-4739-aec1-ae9232532f86}}
Galaxy surveys have revealed a strong bimodality of the galaxy population in
colour, star formation rates, and morphology {{cite:d16c9df6ce466b66ce7643be822b1cea270770a9}}. Galaxies can indeed be broadly described as either
red, predominantly early-type galaxies, with low or no star formation
or blue, predominantly late-type galaxies, with active star formation
{{cite:2ed8ded66847ade8a1550c9c01464bf3ffe54026}}, {{cite:2f1a4b7dd24ccd217c2bbf52e6d0de12d54fce2c}}, {{cite:ac875a9602d4764cc1317ca5c778fed6e7217c8d}}. A major thrust of ongoing
research is to understand how the quenching of star formation starts
and works in galaxies, which leads ultimately to the build-up of the
passively-evolving population. The fraction of star-forming galaxies is
lowest inside galaxy clusters, while at the same time the fraction of early-type
morphologies (lenticulars, ellipticals) is highest in
clusters relative to the field {{cite:b393832af07a0178a0e24db3e1ca47e85b3c427b}}, {{cite:bacb7e4019c3f48cd432b439ef826ddc4a495b67}}. There is no shortage of
proposed physical mechanisms to explain how galaxies stop forming stars at a
higher frequency in clusters relative to the field: tidal stripping
{{cite:14004bf58ce56a02eecf279b72f1a29687d83948}}, ram-pressure stripping {{cite:4d2e515f03ef7dc9e6af5a52ec455d5111771e0a}}, thermal evaporation
{{cite:96cd499eee9e70f1f41bbe8c9ea7d0fe7a1270e7}}, encounters with other satellites
{{cite:e79a12591ddf1314c89c8b35e80742bb9c8b8cdd}}, and removal of the diffuse gas reservoir of
galaxies {{cite:5948d9845715b405a6feaeabe002cc9dfc8434f1}}, {{cite:ca6d8868e97d5e606e76f0961137711de3e9cd87}}. However, we are
still lacking the observational evidence that would distinguish the relative
importance of the different mechanisms put forward and delimit their spheres
of influence.
Definition 3 {{cite:d47d704da2619ba4916ec3f3c952dbf0e2af10e8}}
An analytic semigroup is a family of bounded linear operators {{formula:9400f07a-1772-4b29-9918-0a6d78af5125}} in a Banach space {{formula:3716d64b-28f4-40df-94ab-4cb7ae1732dc}} satisfying the following conditions:
Whether jets are launched in the soft state, that is, in the thin-disk mode, remains an open issue (see also {{cite:9f3f42868122737770c7885b085b47a867a27021}}). The leading models for the jet-formation mechanism are the Blandford-Znajek and Blandford-Payne mechanisms ({{cite:65c9b77f941cdf20a8cf974dd34b6393bc5459da}}, {{cite:c5a502dd1b74daf5d557d7f6d491fea5ef6f95d7}}), which tend to generate a continuous jet unless there is strong instability in the disk, and which require the presence of a large-scale open magnetic field. For the episodic jet production mechanism, {{cite:de98b674344cf86c8ef6e252eacba6dc747328d5}} initially suggested a magnetohydrodynamical model by analogy with coronal mass ejections on the Sun. A general view is that thin-disk flows do not host a strong large-scale magnetic field and therefore should not produce strong jets ({{cite:5cbb7c0614637452964ede7f6ca83857dc215466}}). However, a large-scale magnetic field could also be produced effectively in a thin disk when the radial velocity of the accretion flow increases significantly due to the presence of outflows ({{cite:4952ae81a0a433083b31a36d3b07b7ef7727bc5e}}).
We start from the effective structure factor {{formula:c04b860e-799b-4547-86ac-ad564044b001}} , which
represents one of the relevant quantities for the variational
approach proposed in this paper. In the upper panel of Fig.
REF , we plot {{formula:e9440d2b-cc1a-4d67-b554-62df63552335}} as a function of the wave-vector
{{formula:9af5ea2e-c8e9-4a8a-a9ca-c3bb7520f81a}} comparing different approaches for the treatment of
polaron-polaron interactions. In the case of Hartree-Fock approach
({{formula:e4c7a75f-4622-43ad-b71a-23c109085db5}} curve in the upper panel of Fig. REF ), the
structure factor increases quite fast for small {{formula:b3e86ef1-e960-45cc-834c-a620df9ca6b3}} , and it
presents a discontinuity at {{formula:43cc097a-68fe-4bf8-a793-c443edb272b3}} . On the other hand, the
structure factor obtained within the Gaskell approach ({{formula:656e719a-cc0b-40a8-ac28-e318407e1af6}}
curve in the upper panel of Fig. REF ) increases more slowly
for small {{formula:18c377ac-4c5b-4fd6-a0a1-e60e1864e508}} indicating that charge correlations are accurately
treated for small values of {{formula:dc4ec3f9-8bbd-4268-b297-7a52536511fd}} . Actually, in the limit {{formula:442ee499-1f25-45a8-b54d-93e8dcf1dd24}} , the structure factor recovers the Bijl-Feynman formula
{{cite:04711df7a54040d8e5e5ba779fc96f08700d3aee}}
{{formula:ce6daf24-c8bd-4772-ae30-6ccebce746c1}}
As is customary in the classical deterministic Krylov–Bogolyubov (K-B) averaging (e.g. see {{cite:9d61373e3da70bc71b6612fa2058d7c98cbfea4f}},
{{cite:8a04d38f6ec85df6b13b641915f4035bdd761e95}} and {{cite:c0e1eef367ea46070fb13eeedadd6d06154fe1ac}}),
to study solutions {{formula:995016aa-ac75-4b64-9d41-bf30e258b6c9}} we write them in the
interaction representation, which preserves the norms of the complex components {{formula:6d4dd97b-ad31-45c4-afd7-f99b303da447}} but
changes their angles; see substitution (REF ) below.
The first principal result of this work is Theorem REF , where we assume
bounds, uniform in {{formula:ab0bc2f4-9e93-4b98-9648-027855f17a48}} and in {{formula:9caa09b1-d8f3-4bef-9478-2070ea78004b}} , on the first few moments of the
norms of solutions. The theorem states that as {{formula:74eaccae-ab32-486b-a256-14016655d4a2}} , for {{formula:556d04c3-608d-4d8e-9c6c-8271ec3cc8f8}}
the solutions {{formula:abf53de3-ed74-4500-ad7f-0013626efa93}} , written in the interaction representation, converge weakly in distribution to solutions
of an effective equation. The latter is obtained from eq. (REF ) by
a certain averaging of the vector field {{formula:dee1252b-6dab-4ed1-b705-876f6ca33661}} in terms of the spectrum {{formula:5293daf9-2db7-45c9-bbc4-da65c5bf0206}} and in many cases may be written down explicitly.
The proof of Theorem REF , given in Section , combines the K-B method (as
presented e.g. in {{cite:c0e1eef367ea46070fb13eeedadd6d06154fe1ac}}) with the Khasminski approach to stochastic averaging {{cite:6205759bb6c8243c9193436c6919dd7bdc182725}}; it may serve as an
introduction to the latter. The literature
on stochastic averaging is immense; see Section REF for some references.
We were not able to find the result of Theorem REF there, but we do not insist on its novelty (and
related statements may certainly be found in the literature).
We are interested in understanding when (under what conditions on the distribution shift) and why (via model properties) neural networks can exhibit effective robustness. We perform a systematic study of three tunable "knobs" that are available to the neural network practitioner and have been shown to impact OOD robustness: pruning {{cite:0810a4cd2e38f2415e2e325e0811930d17fce72e}}, data augmentation {{cite:512dae1dedcb0987e33b5ded854bc456917bea77}}, and weight ensembling {{cite:687676a05cdec3d46579dfa3bb16b67c5c0722ec}}. For each knob, we evaluate a benchmark set of pretrained CIFAR-10 or ImageNet models on the original in-distribution test set, several OOD test sets, and a suite of model property metrics capturing local smoothness (via model Jacobian norm), frequency response to pixel-space interpolation within and between classes {{cite:6369f86dec88a2cdd57bbed2f63a56c2a3c93ce8}}, and Fourier amplitude and phase sensitivity (via high frequency fraction and consistent distance). Representative results from our analysis of these three knobs are presented in Sections REF , REF , and REF . For each dataset we consider one natural distribution shift (CIFAR-10.1 or ImageNetV2) as well as six synthetic corruptions (from CIFAR-10-C or ImageNet-C) comprised of two low-frequency corruptions, two mid-frequency corruptions, and two high-frequency corruptions (in that order). Full results on all the corruptions in CIFAR-10-C and ImageNet-C are deferred to the supplement. In all figures, we show 95% confidence intervals around each measurement (Clopper-Pearson for accuracy measurements and Gaussian bounds for averages of other metrics).
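For instance, the local-smoothness metric via the model Jacobian norm can be estimated generically by finite differences (our own sketch under assumed naming; the exact implementation in the study may differ, e.g. using automatic differentiation):

```python
import numpy as np

def jacobian_norm(f, x, eps=1e-4):
    """Finite-difference estimate of the Frobenius norm of the Jacobian
    of f at input x; larger values indicate a less locally smooth model."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps  # i-th column of Jacobian
    return np.linalg.norm(J)
```

For an image classifier, x would be a flattened input image and f the vector of logits; averaging this quantity over a test set gives one scalar smoothness metric per model.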
In this paper, we explored the behavior of the entanglement of purification (EoP) for different excited states dual to asymptotically AdS geometries. We used the holographic proposal established in {{cite:62435874ab3073862ae0670280c3c9b66cae128b}} for computing this quantity, which gives the EoP in terms of the minimal cross-sectional area of the entanglement wedge. In particular, we computed the variation of the EoP under small perturbations away from the vacuum, when generic operators acquire nontrivial expectation values. To get a better understanding of the results, we also compared the variation of the EoP to other correlation measures, including the HEE and HMI. Our study was mainly for a symmetric configuration consisting of two disjoint strips of equal width, which is the simplest case in which to utilize the holographic proposal to compute the EoP. However, we expect the qualitative features of our results to be independent of the specific configuration. Although for finite excitations we performed a numerical analysis, for small perturbations around the vacuum we evaluated the leading-order variation of the holographic correlation measures analytically.