However, it is clear that semantic shifts are not always accompanied by changes in word frequency (or this connection may be very subtle and indirect). Thus, if one were able to model word meaning more directly, such an approach should be superior to frequency-proxied methods. A number of recent publications have shown that distributional word representations {{cite:66b12c8773d89de88958db64c2a21e89ff8a7148}}, {{cite:4ae4e9f05a2b9fd358c7d1a66d4a7b6e5b2bc45d}} provide an efficient way to solve these tasks. They represent meaning with sparse or dense (embedding) vectors produced from word co-occurrence counts. Although the source of the data for these models is conceptually still word frequencies, they `compress' this information into continuous lexical representations which are both efficient and convenient to work with. Indeed, Kulkarni et al. kulkarni2015statistically explicitly demonstrated that distributional models outperform frequency-based methods in detecting semantic shifts: they managed to trace semantic shifts more precisely and with greater explanatory power. One of the examples from their work is the semantic evolution of the word gay: through time, its nearest semantic neighbors changed, manifesting the gradual move away from the sense of `cheerful' to the sense of `homosexual.'
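As a minimal illustration of this neighbour-based analysis, the sketch below compares a word's nearest neighbours across embedding models trained on separate time bins. The decades, file names, and use of gensim are our own assumptions for illustration, not artifacts of the cited study.

```python
# Trace a word's semantic shift by listing its nearest neighbours in
# embedding models trained on different time bins (hypothetical files).
from gensim.models import KeyedVectors

DECADES = ["1950", "1980", "2010"]  # illustrative time bins

for decade in DECADES:
    kv = KeyedVectors.load(f"embeddings_{decade}.kv")  # one model per bin
    neighbours = kv.most_similar("gay", topn=5)
    print(decade, [word for word, _ in neighbours])
```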
Long-range multi-particle correlations in the final state have been considered a signature of hydrodynamic evolution of the strongly coupled quark-gluon plasma (sQGP) {{cite:560908a2d9deb141a683d76e28d2d3816af73e20}}, {{cite:b3b6e8eb3a3f30f4f82d82c593724c33fcf1cfa4}}, {{cite:9b45e3001552e5d0de2fb5d3cecc2cbfccfb9c98}}, {{cite:121d4a290863bcf66599ed45e6d5bf3776b52efb}}. LHC results, however, unexpectedly revealed that high-multiplicity proton-proton events exhibit similar collective behavior, with substantial azimuthal anisotropy ({{formula:066aa412-dd92-41a1-be6d-5ae71e6345f3}} ) {{cite:ed5cbe45fe98a1bc78447019f2b94b51d5e5d63c}}, {{cite:907b0197fe3fb2fda16270dfd736a1e0f386d227}}. Part of these effects can be explained by vacuum-QCD processes at the soft-hard boundary, such as multiple-parton interactions (MPI) {{cite:df05220b287ce45537b831757e2295226e659a4a}}, although the creation of sQGP in a small volume cannot be completely ruled out {{cite:bfce1fe23e7f12a797507261b7601343bf8e84f2}}, {{cite:32f69842d5da47ec06fcab9e3f0b795c3d902347}}, {{cite:4ef2659c1d54753c6a7d28c1c803bd61402824da}}, {{cite:9e62781ea83af3331874ac6c7c624be4c29e6a35}}. It is worth mentioning that the experimental data support the presence of MPI in high-energy hadronic interactions, see, e.g., Refs. {{cite:4e0bb50eaf0daaa69e063a3a3cd22ff144589780}}, {{cite:75e5c183bfaa47c4ea4a80a54216ecf19c18fdaf}}. In models that handle color flow in softer MPI processes in a simplified manner, an additional step is applied {{cite:4982a11c98f0df3eeed780563af8068e6dc256b0}}. This step, called color reconnection (CR), plays a major role in shaping the multiplicity as well as the transverse momentum ({{formula:3e51b8b6-d5e3-48bb-951d-36ec68697ebc}} ) distribution of the final-state particles, and may be a key ingredient in the formation of collective behavior {{cite:411756ed34f639bb488da3211beed39394abf411}}, {{cite:c231bac154c72f6af5e0a06d87ef62511b4c3b42}}. Hence, vacuum-QCD effects establish a connection between the leading hard process and the underlying event (UE), which arises from particle production in soft and secondary hard processes as well as from beam remnants {{cite:baca0d52b94d5d486b90cb92191623c841b83abe}}. The possibility of this connection producing the observed collectivity was explored in phenomenological calculations {{cite:507142cff5df1781d44a1e7b3795c4e641c3c845}}, {{cite:efa06bd258434a183e3544dc41d0babc7b68bd62}}, {{cite:a1d8698516f1cda97bccca2f9c9da626584be4bf}} and later corroborated experimentally {{cite:d2549320ed143aeda9ea003e85f21de6a7c3d376}}, {{cite:77ecbe68fd4c69d2089b796d727a38414766d458}}. These works, however, mostly concentrated on light-flavor and strangeness observables. Jet fragmentation is affected by the flavor of the initial parton in several ways. First, light-flavor jets are initiated either by a gluon or a quark, while heavy-flavor jets are mostly initiated by quarks; therefore, the average color charge of the initiating parton is different. Second, the heavy-quark fragmentation functions differ from those of light quarks and gluons {{cite:8627a10edc49ae31c0b21ae54b5790cb98e2e36f}}: heavy-quark fragmentation is on average harder than light-quark fragmentation, meaning that a larger part of the momentum is carried away by the heavy quarks {{cite:fc2352504987dff6d094655668ba033aee2e4f59}}. Finally, heavy-flavor jet shapes at low {{formula:d7e80ae4-8b28-4655-bef8-b931997ffd0e}} are influenced by the dead cone effect, i.e., the suppression of forward gluon emissions {{cite:d240a27271c155f9a4ba5ae9c73909d112ac52de}}.
We remark that the space {{formula:422b0e53-bdd4-432a-a907-67b3bd347399}} is different from {{formula:b69afc8a-39d7-4d8e-b1dd-64e66d9fb169}} introduced in {{cite:b79b69f6c83fadece5daaf28ad6bb0a97452c17e}}, where {{formula:804c12fd-90d8-4921-a38c-3e90c8eaa550}}
It was shown in previous works such as {{cite:17bba087d0c326f385573b7a9cd1222fd55c29eb}} that self-supervised learning usually benefits from longer training than fully supervised learning. In Table REF , we show the impact of different numbers of pre-training epochs on our model. We find that the accuracy improves significantly when the number of epochs increases from 100 to 500, whereas only slight improvements are achieved by training for another 500 epochs (1000 epochs in total). We therefore pre-train our final model for 1000 epochs.
Wavelet analysis. We use wavelet analysis to quantify the periodicity of the flare microwave emissions. The core wavelet software is provided by Torrence and Compo {{cite:59b9c6a6a7c259278e6ea38921cd28ab76090613}} and available at http://atoc.colorado.edu/research/wavelets; the code for fitting the background power spectra and determining the significance levels is adopted from Ref. {{cite:c326025cfacb49510e81f58b508b788a9690d7a7}}. Here, we use the `Morlet' mother wavelet and fit the background of the Hann-window-apodized Fourier spectra with the combination of a power-law function and a constant value as {{formula:4ee714eb-2866-4423-a55b-e1a5f03cb5d9}}
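A minimal sketch of this background fit (our own code on a toy time series, not the cited software) using SciPy:

```python
# Fit the background of a Hann-apodized Fourier power spectrum with a
# power law plus a constant, P(f) = A * f**(-s) + C.
import numpy as np
from scipy.optimize import curve_fit

def background(f, A, s, C):
    return A * f ** (-s) + C

t = np.arange(2048, dtype=float)                      # assumed uniform cadence
flux = np.random.default_rng(0).normal(size=t.size)   # toy light curve

window = np.hanning(t.size)                           # Hann apodization
power = np.abs(np.fft.rfft(flux * window)) ** 2
freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])

mask = freq > 0                                       # drop the zero frequency
popt, _ = curve_fit(background, freq[mask], power[mask], p0=(1.0, 1.0, 0.1))
print("A, s, C =", popt)
```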
P()=23-6+E , where {{formula:4e54e2e1-158f-4b03-a5ca-d12dc52b10ab}} ({{formula:605fb3e3-fbe6-48b0-821b-583a272f2a83}} ) are the roots of the cubic equation {{formula:a2f5918a-13f6-4b42-aa16-ab5936e67038}} . The first integral in () is a standard table integral {{cite:561df22d5a3bf2a19cb2bb1b528a03985efe916f}}; the second integral must be evaluated explicitly.
The deployment of FL over real wireless networks still faces significant challenges, among which communication latency is becoming the major bottleneck given the rapid advances in computational capability. Considerable research has been devoted to addressing this issue for both analog and digital FL systems. In analog FL systems, over-the-air computation (AirComp) is broadly leveraged to implement efficient concurrent transmission of locally computed updates by exploiting the superposition property of wireless multiple access channels (MAC) {{cite:96cba4567f4ff20e27850abe98ce98b165e55b0f}}, {{cite:1993e5f5f6e619d7041326648401b8b4053aa534}}, {{cite:96f5a850c46f3bdf3698ea87ccac9ba2f18f160c}}. For multiple-input single-output (MISO) AirComp, to better trade off learning performance against communication efficiency, {{cite:96cba4567f4ff20e27850abe98ce98b165e55b0f}} proposed a joint device scheduling and receive beamforming design to maximize the number of scheduled devices, and {{cite:1993e5f5f6e619d7041326648401b8b4053aa534}} proposed a broadband analog aggregation scheme that enables linear growth of the latency-reduction ratio with the device population. Furthermore, a distributed stochastic gradient descent algorithm was implemented for a bandwidth-limited fading MAC {{cite:96f5a850c46f3bdf3698ea87ccac9ba2f18f160c}}. However, the model aggregation performance of MISO AirComp is severely limited by unfavorable wireless propagation channels. To build a communication-efficient edge FL system, the multiple-input multiple-output (MIMO) technique has been widely recognized as a promising way to support high reliability for massive device connectivity as well as high accuracy and low latency for model aggregation by exploiting spatial degrees of freedom {{cite:e3432a60103367ec2ac71d3c6e8ad18810f1cffc}}, {{cite:c745e5e53fcc86a5f234dd0a874438f5049080c0}}, {{cite:6bb9c52a2feac7cb086afac21ba4efae069eb5ac}}.
Diffusion models belong to a category of probabilistic models that require excessive computational resources to model unobserved data details. Their training process requires evaluating models that follow iterative estimation (and gradient computations). The computational cost becomes particularly large when dealing with high-dimensional data like images and videos {{cite:d3971e83770aff369eabf2737e634ab422c24577}}. For instance, training a high-end diffusion model in {{cite:f806b8d65a7ce993bbaad75e232be1fc9f2db365}} takes 150-1000 V100 GPU-days. Moreover, since the inference stage also requires repeated evaluations of the noisy input space, this stage is computationally demanding as well: in {{cite:f806b8d65a7ce993bbaad75e232be1fc9f2db365}}, about five A100 GPU-days are required to produce 50k samples. Rombach et al. {{cite:376d597be32d4920dfc63e7445642ebf95eeb599}} rightly noted that the huge computational requirements of training effective diffusion models present a critical bottleneck for democratizing this technology, because the research community generally lacks such resources. It is evident that the most exciting results using diffusion models have first been achieved by, e.g., Meta AI {{cite:f6de49ed2e06cc2248e2806b1f660343aa51aef5}} and Google Research {{cite:f219b1e279f58ed904fadf5e701e136b84d03ce8}}, who have enormous computational power at their disposal. It is also notable that evaluating an already trained model has a considerable time and memory cost, because the model may need to run for many steps (e.g., 25-1000) to generate a sample {{cite:7ffd2babc0eb9cc6675667f2c7946b04e28af052}}. This is a potential hindrance to the practical application of diffusion models, especially in resource-constrained environments.
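To make the sampling cost concrete, here is a minimal sketch of DDPM-style ancestral sampling (a generic textbook formulation of ours, not any specific cited model): the network is called once per step, so a budget of T = 1000 steps means 1000 sequential forward passes per generated sample.

```python
import torch

def ddpm_sample(model, shape, T=1000):
    # Linear beta schedule (a common assumption; schedules vary by paper).
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                        # start from pure noise
    for t in reversed(range(T)):                  # T sequential network calls
        eps = model(x, t)                         # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise   # sigma_t^2 = beta_t choice
    return x
```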
For basic inequalities like {{formula:6bd52bdf-b1a1-4cbc-ac45-64cca9d3d1f3}} and {{formula:878b8d21-ce26-4359-98b0-4662bfd13ebb}} , one may look at {{cite:e8aa68ff073ef0735aa2a58e9e5fafc22790710c}} for a reference. In this article we prove a very fundamental inequality in both the discrete and the continuous case, and show that it is more general than both the AM-GM and the AM-HM inequality. As an application, we also give an asymptotically better bound for the following quantity, which occurs in Hardy's inequality {{cite:ba3740bc4b72c571447a66adb5b92f15275a48c7}}. {{formula:db3a106c-4e2c-4d0a-b1a7-b9dd0f34233c}}
A common problem in objective metric-based Single Image Super-Resolution CNN models is the generation of over-smoothed images despite high PSNR values. EnhanceNet {{cite:dc5b5b6c216e635cd7e40dce4371e801ab1660cf}} aims to address this problem by focusing primarily on generating realistic textures with higher perceptual quality rather than just optimizing PSNR.
Independently of the theoretical predictions of {{formula:9c3f7916-60d9-4014-a440-82e5d5690cea}} mentioned above, its value will be definitively fixed once axion dark matter is found in future haloscope experiments, such as the Axion Dark Matter eXperiment (ADMX) {{cite:3d417c81ff589ce1ca61377381771802816d4316}}, the CAPP Ultra Low Temperature Axion Search in Korea (CULTASK) {{cite:d2938b6d00e99a7f1cbab9ac6ce43e174f5321b5}}, the Haloscope at Yale Sensitive to Axion CDM Experiment (HAYSTAC) {{cite:7b50399cf34e82e70a086f88f23438f971e8edb4}}, the MAgnetized Disc and Mirror Axion eXperiment (MADMAX) {{cite:f20502c0d30a5ef5d0042d7051c88ac775e2a0fb}}, {{cite:172ec13d3dbf965b35815d661964a82b06d8ee62}}, and the Relic Axion Detector Exploratory Setup (RADES) {{cite:0803202d77d0f9681ffdfd88a36c18883ad15b5a}}. If the axion actually constitutes the main component of dark matter, it is likely to have been detected by the epoch when advanced GW detectors like ultimate DECIGO are taking data. Hence, measurements of the GW spectrum should bring us further information about the model, such as the self-coupling of the PQ field, the Higgs portal coupling, and the number and masses of exotic fermions, in addition to the value of the PQ scale. In this sense, future GW observations can be used to probe details of the PQ sector that may not be reachable in other high-energy experiments.
In Fig. REF , we illustrate the effect of correlation on the CSP for both scenarios. For the random individual mobility model, we have assumed all the mobile users move with the same speed {{formula:d10950d8-7e68-46f7-92e7-a2785d146d91}} , and each mobile user moves in a random independent direction {{formula:5dfa0a1b-250d-43e1-850f-812b2ec5f42f}} , uniformly distributed in the interval {{formula:fea3e6d4-fa45-4b1e-af74-3fe6f296eacb}} . To better understand the impact of correlation, in Fig. REF , we also compare the results with the independent case, where the CSP is equal to {{formula:e69e7555-d461-46a9-ae3e-4edc94d0ca21}} . According to the displacement theorem {{cite:63ad1a4b65442cfe6495a566e45c763ed2a6f8df}}, at any time {{formula:f1bc2c8b-8805-496d-acd3-a812b2ab0be7}} , interferers form a PPP with intensity {{formula:4ddc93a3-4f43-4b9f-95fa-65a76edf6d1d}} . Thus, in Model II, {{formula:fbf53ff5-a652-411b-846d-486cda6deb8b}} does not depend on {{formula:b368c094-7a9b-4627-8e7f-817bb8358248}} (and consequently {{formula:bf55717c-e706-475f-b239-647042e8d2fa}} ). Using the stationarity of the PPP, we can obtain the same result for Model I.
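As a quick numerical check of this claim (our own sketch, with arbitrary parameter values), one can displace each point of a simulated PPP by v*t in an independent uniform direction and verify that the empirical intensity is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, side, v, t = 5.0, 100.0, 1.0, 3.0        # intensity, window, speed, time

n = rng.poisson(lam * side ** 2)              # PPP on a square window
pts = rng.uniform(0, side, size=(n, 2))

theta = rng.uniform(0, 2 * np.pi, size=n)     # independent uniform directions
moved = pts + v * t * np.column_stack((np.cos(theta), np.sin(theta)))

# Count points in an interior box to avoid edge effects.
inside = (moved > 20).all(axis=1) & (moved < 80).all(axis=1)
print("empirical intensity:", inside.sum() / 60.0 ** 2, "nominal:", lam)
```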
The results of score-based attacks on MNIST and CIFAR10 are shown in Table REF . For ZOO {{cite:aaeb18c2c2d3ceeb262a610a0383282644f10fe9}}, the maximum number of iterations (Max.iter) limits the number of searches used to perform gradient estimation; the larger the value, the better the approximation, so a larger iteration budget usually results in better performance. Similarly, for the Square attack {{cite:a8fadd979f16aec8e392a218ab3e24c85c62e666}}, the maximum number of iterations controls the number of random-walk steps used to search for adversarial examples, so a larger value again usually leads to better performance. The authors of the Square attack studied the properties of adversarial distortion and leveraged them to perform the attack; the method achieves better performance than ZOO for the same maximum number of iterations. {{table:1b269e70-c25b-4a39-92f1-5827cc7171ef}}
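The role of the iteration budget can be illustrated with a generic random-search sketch of a score-based (query-only) attack; this is our simplification, not the exact search distribution of ZOO or the Square attack:

```python
import numpy as np

def random_search_attack(loss_fn, x, eps=0.05, max_iter=1000, seed=0):
    """Keep a proposal whenever one model query reports a higher loss."""
    rng = np.random.default_rng(seed)
    x_adv, best = x.copy(), loss_fn(x)
    for _ in range(max_iter):                   # the query/iteration budget
        delta = rng.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + delta, 0.0, 1.0)
        score = loss_fn(candidate)              # one score-only model query
        if score > best:
            best, x_adv = score, candidate
    return x_adv
```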
To train a compact model efficiently, knowledge distillation {{cite:8255348b19165b1c3ec4c7e07ed25972029c393d}}, {{cite:d71c72e626218cdacbaafeed9dc85f1ebfee506a}}, {{cite:7c85c57c0bc543f62db3734b22de74ce770046fe}} concepts can be tailored to LFM-based neuroimaging. Taking the depth localization of neurons using LFM as an example, this problem can be formulated as a multi-class, multi-label classification problem after converting an original light field into the Epipolar Plane Image (EPI), a particular spatio-angular feature in phase space {{cite:391ca10772d8582d7edf3eeef4786a6528aa1578}}. Unlike common classification tasks, this task has the specific feature that the hard labels are well structured. In particular, for an arbitrary class label, its adjacent/neighbouring class labels exhibit high correlation and coherence. This leads to a Gaussian-shaped group structure that indicates the inter-class relationships, similar to the class probabilities represented by a softened teacher model's logits. Based on this observation, {{cite:391ca10772d8582d7edf3eeef4786a6528aa1578}} proposes to exploit this specific prior knowledge and directly construct the soft labels (i.e., class probabilities) by convolving the hard labels (i.e., the ground truth) with a Gaussian kernel and then normalizing. In this way, the Gaussian kernel plays the role of a "temperature" scaling function as in knowledge distillation: it smooths the probability distribution to reveal inter-class relationships learned by human experts, who serve as the teacher model for this specific task. A larger kernel width corresponds to a higher "temperature" that lets the model pay more attention to the inter-class correlations. Accordingly, the soft labels are used to compute the loss function, specifically the distillation loss. The soft labels also benefit training: the incorporated inter-class relationships provide additional guidance that accelerates training and enforces group sparsity in the network's prediction.
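A small sketch of this soft-label construction (our reading of the scheme; the class count and kernel width below are illustrative):

```python
# Convolve one-hot labels with a Gaussian kernel and renormalize, so
# neighbouring classes receive correlated probability mass. The kernel
# width sigma acts like a distillation "temperature".
import numpy as np
from scipy.ndimage import gaussian_filter1d

def soften_labels(hard_labels, num_classes, sigma=1.5):
    one_hot = np.eye(num_classes)[hard_labels]           # (N, C) hard labels
    soft = gaussian_filter1d(one_hot, sigma=sigma, axis=1, mode="constant")
    return soft / soft.sum(axis=1, keepdims=True)        # renormalize rows

labels = np.array([0, 5, 11])                            # example classes
print(soften_labels(labels, num_classes=12).round(3))
```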
Apart from the results of interval analysis, we use the following results from classical convex analysis throughout the article. (Projection {{cite:da02f386272af681b93d6d088a8792d873ca3a52}}). Let {{formula:45e287cb-c134-451f-a8e1-aae362314f7b}} be a nonempty closed set in {{formula:a6192403-b7a2-440d-b36e-454bcc15176e}} . Then, the projection of a point {{formula:ecb0acc0-c0a6-4e8b-b991-39b7350b9c42}} onto the set A is denoted by {{formula:345e13f1-5730-4c49-bc9f-906a1aaa975a}} , and is defined by {{formula:ab4d01b4-333c-43e2-a6eb-975455ad9de8}}
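For concreteness, the standard textbook form of this definition (which we expect the elided formula to match) is the distance-minimizing point

\[ P_A(x) \;=\; \operatorname*{arg\,min}_{a \in A} \, \|x - a\| , \qquad x \in \mathbb{R}^n ; \]

since A is nonempty and closed, the minimum is attained, but for a nonconvex A the minimizer need not be unique, so the projection may be set-valued.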
We assign these categories based on the (textual) semantic relatedness of the phrasal modifier and the numeric proximity of the count. For example, regional languages is likely a subgroup of 700 languages, especially if it occurs with counts {{formula:2390f874-d9bf-4314-8731-66db50c1b8ff}} . tongue is likely a synonym, especially if it occurs with counts {{formula:72f62535-b10e-4889-a565-323e9cf5d748}} . speakers is most likely incomparable, especially if it co-occurs with counts in the millions. CNPs with embedding-cosine similarity {{cite:e441ddf13bad9950abe5a23a3db60139d3a41fe7}} less than zero are categorized as incomparable; from the remainder, those with a count within {{formula:50be0fde-ff8c-492a-b030-c63e7a30dc62}} are considered synonyms, lower-count CNPs are categorized as subgroups, and higher-count CNPs as incomparable.
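Read as a decision procedure, the rule looks roughly as follows (our sketch; the exact tolerance band is elided in the text, so tol here is a hypothetical stand-in):

```python
def categorize(cosine_sim, cnp_count, head_count, tol=0.10):
    """Assign a CNP to synonym / subgroup / incomparable."""
    if cosine_sim < 0:                 # semantically unrelated modifier
        return "incomparable"
    if abs(cnp_count - head_count) <= tol * head_count:
        return "synonym"               # counts close enough
    return "subgroup" if cnp_count < head_count else "incomparable"

print(categorize(0.4, 700, 700))       # synonym
print(categorize(0.4, 40, 700))        # subgroup
print(categorize(-0.2, 700, 700))      # incomparable
```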
In practice, we rely on a modified U-Net architecture {{cite:e1d163843ab5d392c6a15926ef17ea5be2741684}} and on an extended 12-class density grid that improves the density resolution compared to the traditional BI-RADS classification (4th edition). In contrast to the state of the art, our classification and segmentation scheme does not rely on the model's attention but uses a loss function that efficiently correlates a tissue mask with the target breast density values. Moreover, the output is constrained by the breast binary mask, removing spurious activations.
Radiatively driven relativistic fluid jets admit a very rich class of solutions. The `e'-type solutions may have one inner-type sonic point, multiple sonic points, and shocks, while the `f'-type jet is a low-energy solution that passes through the outer sonic point. Radiative driving is most effective for `f'-type jet solutions (Fig. REF a). This class of solutions can be compared with radiatively driven {{formula:cb90b475-2865-4ae5-acc8-c12c4de026cb}} jets in the particle approximation {{cite:6e4558c301dce7ac71b005f452b869440a257d3b}}, {{cite:f4913a06f526141e1a6258f9086c9a8522dc4dfb}}. Interestingly, discs with sub-Eddington luminosity can power lepton-dominated jets ({{formula:7b970648-e289-483c-80c9-a273eef2ed1b}} ) to terminal Lorentz factors {{formula:0298e8df-f88d-42da-899c-d786338e604d}} , but super-Eddington discs can power those `f'-type jets to {{formula:ee2c5798-c812-4803-a272-7ce0243a9db4}} (Fig. REF b). Above, we argued that the radiation driving of particle jets is more efficient than that of fluid jets due to the presence of the enthalpy term in the denominator of the radiation term (Eq. REF ). However, the advantage of considering radiation driving of fluid jets is that wherever the jet is hot, radiation driving is not effective but the thermal gradient term is; in the region where the temperature falls, the thermal gradient becomes less effective, but radiation takes over, provided the region is relatively close to the disc ({{formula:cfd11f30-3902-4f3d-b9ab-202e93ec41d0}} ). Therefore, the lepton-dominated jets achieve terminal speeds similar to the {{formula:a8651fac-8d7a-4653-adc3-1c4d42be09a7}} particle jets, and, in addition, radiation driving can produce fluid phenomena like shocks in the jet. An unstable shock can also produce effects like QPOs in the jet, a scenario worth investigating. Moreover, such internal shocks close to the jet base have been invoked to explain the high-energy power-law tails in some of the microquasars {{cite:b7fd3be0335d7ff5b32d253d404085d0c4cd4871}}. {{cite:9ac498630f6e6bb3260ddb39eee10febaad03dd5}} also showed the existence of shocks in radiatively driven jets when the disc is quite thick and the jet geometry deviates from conical. Although those authors did not consider the accelerating effect of radiation on jets, the {{formula:c1627252-85e6-4605-9b8b-c57434e60b54}} values they quoted were all mildly relativistic ({{formula:17017a10-096f-4513-a905-733b47eb6b54}} ), whereas we find that {{formula:4f89cbfe-401c-46a8-bf09-49ae72da673a}} is generally a few times higher. This is because {{cite:9ac498630f6e6bb3260ddb39eee10febaad03dd5}} considered mostly isothermal jets and therefore missed the thermal driving of the jet.
where {{formula:6a4b26b9-e628-46d8-bf07-ec80086554a2}} is a normalised bond vector (tangent vector). For the mutual (steric) interaction between the beads we used the Weeks-Chandler-Andersen (WCA) potential {{cite:7a8f5ca359f563ba9eb7e2ead59295dc3cfc7734}}, which for bead {{formula:03002de8-7972-4218-b6fb-f6536abcfe57}} on chain {{formula:fc30b203-eb2e-4bd4-ad85-8f28f92b6977}} reads, for {{formula:f852c5cb-d601-42cf-a021-3bc2b2d69104}} , {{formula:811f5e42-a76d-489e-bc5b-b7e09b40d6d7}}
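The potential itself is simple to state in code (its standard form: the Lennard-Jones potential truncated at its minimum r = 2^(1/6) sigma and shifted up by epsilon, so it is purely repulsive and vanishes smoothly at the cutoff):

```python
import numpy as np

def wca(r, epsilon=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen potential for bead separation r."""
    r = np.asarray(r, dtype=float)
    cutoff = 2.0 ** (1.0 / 6.0) * sigma
    sr6 = (sigma / r) ** 6
    u = 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon
    return np.where(r < cutoff, u, 0.0)

print(wca([0.95, 1.0, 1.2]))   # repulsive below the cutoff, zero above
```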
Moreover, to quantify the degree of transfer learning provided by the pretrained models, we also report the performance of the three networks (CNN6, CNN10 and CNN14) without a pretrained model, i.e., initializing their weights at random and not making use of AudioSet {{cite:b5414bb3f62214544496c4efce2c4b5e62b27638}} pretraining. We call these three models the Baseline CNN6, the Baseline CNN10 and the Baseline CNN14.
Reaction-diffusion systems arising in chemistry are often temperature-sensitive, both through reaction rates (such as those arising from scaling reaction terms with temperature-activated Arrhenius parameters {{cite:3dc95af74a45e8059a87a2e87db82c8d95212adc}}, {{cite:806aafe54f820ae49e573c76e20ef3b49fa73986}}) and through temperature-dependent diffusion (such as the linear Einstein, Wright-Sullivan, or Stokes-Einstein-Sutherland relations in liquid reactions {{cite:dd2e01f9e00f0400e164d2c9b8bbd0cf434c5606}} or more general power-law relations in gas reactions {{cite:a444aa95008e4896aa841e923388dd54c1589567}}). The role of temperature in Turing pattern formation from reaction-diffusion systems was recently studied under a variety of mechanisms in {{cite:dddec6c6ca4a4a8f01fd82298ca8c23137b87f6d}}, where it was shown that localized Turing patterns can be modified through spatial or temporal changes in local temperature. Since temperature can be used to modify both diffusion and reaction rates in experiments, it may be an attractive means of implementing wave management in chemical systems that admit travelling wave solutions. Temperature-dependent wavespeeds have already been observed experimentally in fields as different as physiology {{cite:b0fa274f533a024c2fc57447329354a62b07fe6a}} and energy management {{cite:d370c260c0e7b1cf23da7cc92ad7fe50e3a30380}}, and the more precise control offered by the wave management techniques outlined in this paper could be of service to any of these applications.
In this study, our models are compared with state-of-the-art multi-task methods, including three aggregated loss optimization methods, GradNorm {{cite:b33c16924ae98a17572c2d74cd81cd5c7c2dc872}}, MGDA {{cite:d8b457f46a23631bcd13933d44a61e4a7edf1e5f}}, PCGrad {{cite:2a01359c3da61bda04b19e7b96ac4838bef33d9e}}, and one partitioning method, Maximum Roaming (MR) {{cite:065d38f23636e5886100fbc27a862cb7f7b4d74c}}. To ensure a fair comparison, all baselines have been implemented in the same pipeline, under PyTorch {{formula:75b7e1fc-6edd-4bed-a9ac-cf993425b553}} , and run on Nvidia Titan-XP GPUs. For every baseline compared on a dataset, we perform a grid search on the learning rate. All the scripts used to perform our experiments are available on GitHub: https://github.com/lucaspascal/AITSO_MTL. The data comes from the official releases of the three datasets.
Inspired by difference rewards {{cite:4eaa53ca9985db14d407da8fc7d778f774fb589c}} and the counterfactual baseline {{cite:cf17877389178b74c9e37d7aa0a724f8431019a1}} for policy gradients, we propose a counterfactual assistance loss. For each agent {{formula:fd2af9b0-9f50-4f93-a137-2236dbefa14f}} we can use an advantage function that compares the {{formula:e34a985e-87b8-4809-9e2e-e0a0b22ddaf8}} from {{formula:80ed1c3f-752d-412b-a117-7e37227693d1}} and {{formula:5fae2d4a-7201-44f6-83dc-66e790dd4e38}} from {{formula:f6c362e9-b4c5-4080-ab24-756b062f31ed}} to a counterfactual baseline that marginalizes out the agent's potential optimal action (which relegates {{formula:36b07ccd-8096-46b8-ae5a-488700a1519f}} ) while keeping all other agents' actions {{formula:8632ef44-0231-49da-b617-43d94a0e6f13}} fixed. The counterfactual assistance loss is thus proposed as: {{formula:bc91afc5-132b-461b-b230-b07446c5fe0e}}
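In the spirit of the cited counterfactual baseline, a minimal numerical sketch of such an advantage (shapes and names are our illustrative choices, not the paper's notation) is:

```python
import numpy as np

def counterfactual_advantage(q_values, pi_i, a_i):
    """q_values: (A_i,) joint-action values over agent i's actions, with all
    other agents' actions held fixed; pi_i: agent i's policy; a_i: its action.
    """
    baseline = np.dot(pi_i, q_values)   # marginalize out agent i's own action
    return q_values[a_i] - baseline

q = np.array([1.0, 2.0, 0.5])
pi = np.array([0.2, 0.5, 0.3])
print(counterfactual_advantage(q, pi, a_i=1))   # 2.0 - 1.35 = 0.65
```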
Camera calibration, or camera resectioning {{cite:9aede1ac000df48b1dc10a7fe3d3d63e9bf31736}}, is the process of recovering a camera's intrinsic and extrinsic parameters. The intrinsic parameters usually include the focal length, the coordinates of the principal point, axis skew, and distortion coefficients. The extrinsic parameters describe the pose of the view in the form of a rotation and translation w.r.t. the scene origin.
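A minimal sketch of these parameters in the pinhole model (all values are illustrative):

```python
import numpy as np

fx, fy = 800.0, 800.0            # focal lengths in pixels
cx, cy = 320.0, 240.0            # principal point
skew = 0.0                       # axis skew

K = np.array([[fx, skew, cx],    # intrinsic matrix
              [0.0,  fy, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                    # extrinsic rotation w.r.t. the scene origin
t = np.array([0.0, 0.0, 5.0])    # extrinsic translation

X = np.array([0.1, -0.2, 0.0])   # a world point
x = K @ (R @ X + t)              # project (lens distortion omitted here)
u, v = x[:2] / x[2]
print(u, v)
```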
The mse method, based on the modified equations (A3) and (A4) for the estimate of t{{formula:070c6ca2-2ef1-4403-aca6-b1be6c3e4f0d}} , which include the logO{{formula:a1fb7bdc-d9a6-4c16-a56a-e05d2eed2fce}} term, allows us to remove the systematics inherent to the original se method of {{cite:3cdb8892a2c8d96cd1ae01db6f697dd0b44d939a}}, as illustrated in Fig. REF (bottom panel). As can be seen, the blue and red points scatter evenly about the linear regression line fitted to the whole sample. Moreover, as a result of removing the systematics, the overall scatter about the linear regression is reduced substantially: from rms = 0.125 dex in Fig. REF (top; original se method) to rms = 0.090 dex for the mse method. This indicates that adding a term with logO{{formula:cf1aa5b3-c584-4013-87f9-25d3097ebb88}} to the original formula for T{{formula:eb3565c1-e311-44fd-8e72-5dfc663e4a5a}} from {{cite:3cdb8892a2c8d96cd1ae01db6f697dd0b44d939a}} indeed improves the accuracy of O/H in the indicated O/H range for a wide range of O{{formula:42c6a072-bfbe-4f3b-913f-5ea5fee490e8}} .
The results of the two scenarios under study indicate that synthetic data can be useful for training DL models, particularly for UAV-based aerial imagery. This evidence is backed by the related work listed in Section . Nevertheless, these indications call for caution, because the DL models were optimized to perform well on the specific validation datasets. It is questionable (and has not been tested) whether the DL models would produce similar results on different real-world datasets that target similar problems and applications {{cite:96f62db139c5b2cba72f22225248c85a10356e18}}.
Why is ViT better than ConvNets? From the results, we can see that the Swin Transformer {{cite:7ed88d6969ec0b8c8b0c534f994045b14dd469a9}} based model achieved significantly better results than the ConvNet-based one, both with and without simCrossTrans, although the two have similar computational complexity. The paper {{cite:534243dd0155d56fd59fe242f7f7d6f92021ce4b}} claims that a multi-head self-attention layer with a sufficient number of heads is at least as expressive as any convolutional layer. The ViT paper {{cite:1d323104bb95b96d87733c1b0748e5e96089c439}} states that "Transformers lack some of the inductive biases inherent to CNNs, such as translation equivariance and locality, and therefore do not generalize well when trained on insufficient amounts of data. But for larger datasets, learning the relevant patterns directly from data is sufficient, even beneficial." From our observations, we believe the following are the critical reasons that Swin-T performs better than ResNet-50: {{formula:dfba2d93-a70e-48de-8a46-7bc047948e60}}
Discriminator (D) Network: We further use a PatchGAN {{cite:2dd647090fda29d180b896eb5e3ec2732d927a93}} based discriminator network to distinguish foreground from background on patches of {{formula:12615a54-803d-41d0-89d4-ab8b070f3c33}} pixels. The proposed architecture, shown in Fig. REF , adheres to the recommendations made in the PatchGAN work {{cite:2dd647090fda29d180b896eb5e3ec2732d927a93}}. It consists of five strided convolutional layers. After each convolution the number of channels doubles, except in the last output layer, which has a single channel. The network uses a fixed stride of two, except for the second-to-last and last layers, where the stride is set to 1. A fixed kernel size of 4 is used for all layers throughout the discriminator network. {{figure:49152517-1675-4399-8474-e7aab9ab903d}}
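A sketch of this discriminator in PyTorch (the base width, activations, and absence of normalization layers are our assumptions; the layer count, kernel size, and strides follow the text):

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, base * 8, 4, stride=1, padding=1),  # stride 1
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 8, 1, 4, stride=1, padding=1),  # 1-channel output
        )

    def forward(self, x):
        return self.net(x)  # per-patch real/fake scores

print(PatchDiscriminator()(torch.randn(1, 3, 256, 256)).shape)
```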
The filepath and emulation models yield surprising observations: both perform relatively poorly on their own, especially when compared to the ember FFNN. A potential explanation is the ember FFNN training set, which consists of 600k feature vectors from the original Ember publication {{cite:22e26955b6ebb1d42e32a9348f16c7d9ddcfe85c}}. Such a training corpus captures the "true" malicious PE distribution much better than our 100k samples, which reflect the threat landscape in a specific time window.
We observe that most existing methods are trained on static images. Directly applying image-based models to videos (image sequences) can lead to unsatisfactory results, since they fail to consider the temporal consistency across video frames. To address this issue, a large number of approaches have explored utilizing the additional temporal information to achieve higher pose detection accuracy. According to how the temporal information is exploited, we broadly divide these approaches into optical flow-based {{cite:f58c24e824f6b0c5fd20799887f94ab6ebee2ceb}}, {{cite:a346c3a3c093dca609f8545f1153a9bfbc034828}}, {{cite:74d1f004a5ad27979593dcedaca651293aa27a5d}}, {{cite:409b3baf507fb76cdd7e6b4fbcb92c89aea9f2b1}}, {{cite:ee0fa7e9eabff5382a74ac0ceb0d7980dfe725f9}}, RNN-based (Recurrent Neural Networks) {{cite:ba58eee01cb944f47e53ea011ebc5d1a2e08c69a}}, {{cite:82ace995cc9964d222b1ee011ea7115d34ce42ce}}, {{cite:e1de757c5f629c43ea2e4c5f60eee2afd9c5d954}}, pose tracking-based {{cite:24d7583d2bcae07fe788560141c2f917686a4200}}, {{cite:8dc229b758a88b3cd1130b41197d077685041782}}, {{cite:5509ed1b2fa482c3d5799e671797da077a22bd66}}, {{cite:dc565ac1279f62649692b7b969c44db293ef02de}}, {{cite:6a8f5eea02724baadcd5fabcb33d06b17b5eb304}}, {{cite:4e4031aa379c29908617790299c7199b05498fc2}}, and key frame-based {{cite:b43e8d856a4dd3ac6070adb01c27f1a4be15189b}}, {{cite:72ad4a6766bafa159637947ef5b160aba6245303}}, {{cite:8e1758665b10365c03530712ae7719a0babe2389}}, {{cite:7cc573ce807e1ce1cbbbd98df90dbfe0198a15b3}}, {{cite:c1fabfcd606d8d85836fd8b4b8ac52aef0a1f09e}} paradigms. Below, we elaborate on these methods in detail.
Quantitative Comparison. The comparison between our PoseVQ-Diffusion and the competing methods is shown in Table REF . The row PoseVQ-AR refers to the vector quantized model with an autoregressive decoder. The row PoseVQ-MP refers to the vector quantized model with the Mask-Predict {{cite:37dadd89b4916cf71f9cf9ab969224e38f7a1fab}} strategy, which is also a variant of the discrete diffusion model {{cite:1a99fd3e065df6d56ef91b8c8bf61c8bab71a39e}}. PoseVQ-Diffusion refers to the vector quantized model with the mask-and-replace diffusion strategy. As indicated in Table REF , both diffusion-based models outperform the state-of-the-art G2P models, with relative improvements on the WER score of {{formula:99233cfb-225e-48f0-962d-49257a78be74}} ({{formula:3ebea350-abe0-4a21-a160-2cb101a5fea5}} ) and on BLEU-4 of {{formula:e4e27089-d7fa-4d79-a92b-c0cf7896f9a3}} ({{formula:bbf053af-dd80-4fb2-86ae-83ac5305bbac}} ). This shows the effectiveness of the iterative mask-based non-autoregressive method on the vector quantized pose sequence. In addition, Mask-Predict is a mask-only strategy, similar to PoseVQ-Diffusion with {{formula:fc64c79a-4341-47ad-97da-2a15b8903923}} ; PoseVQ-Diffusion nonetheless achieves better performance than PoseVQ-MP, which reflects that the mask-and-replace strategy is superior to the mask-only strategy.
As pointed out by one reviewer, the original envelope formulation uses a decomposition of the variance of the error term, and the independence between the material and immaterial parts is only guaranteed under normality. A null covariance only guarantees that the information in {{formula:1e2fbd28-510f-491c-9762-7ef31d713b76}} is immaterial in the first two moments, rather than in all moments, as would be implied by independence. Motivated by this observation, we explored alternative ways to guarantee independence in a separate paper {{cite:2746d36148fd8a07c9a9266b13d6a881e9290109}}. Specifically, we modified the envelope method by imposing the independence conditions directly and used semiparametric methods to derive the semiparametric efficiency bound. Missing data under this newly defined envelope model can be handled using semiparametric estimating equations {{cite:e2f73f093584a0724062ae8b4b7f19d63d28590e}}, {{cite:ad1115488291a70cdf24b1025cca3f3763695898}}, {{cite:b1205164aea5c62650cd356d0481f363d02c6da8}}, {{cite:34cc8ac307d5b0af4a61f1340b7260a43b01e922}}. We leave extensions of our missing data estimation methods to semiparametric inference for future research.
Several previous approaches in the DSL shared tasks have formulated the task as a two-step classification, first identifying the language group, and then the specific language {{cite:21ec399485a5bb9a53d88b86ccc015e5d090a475}}. Instead of taking this approach, we formulate the task as a multi-class classification problem, with each language / dialect representing a separate class. Our system is a deep neural network consisting of a bidirectional Gated Recurrent Unit (GRU) network at the upper level, and a Deep Residual Network (ResNet) at the lower level (Figure REF ). The inputs of our system are byte-level representations of each input sentence, with byte embeddings which are learnt during training. Using byte-level representations differs from character-level representations in that UTF-8 encodes non-ascii symbols with more than one byte, which potentially allows for more disambiguating power. A concrete example can be found when considering the relatively similar languages Norwegian and Swedish. Here, there are two pairs of letters which are interchangeable: where Swedish uses `ä' (C3 A4) and `ö' (C3 B6), Norwegian uses `æ' (C3 A6) and `ø' (C3 B8). Hence, using the lower-level byte representation, we allow the model to take advantage of the first shared byte between these characters. The architecture used in this work is based on the sequence-to-sequence labelling architecture used in Bjerva et al. bjerva:2016:coling, modified for the task of language identification. Our system is implemented in Keras using the Tensorflow backend {{cite:45e0f39ccb225b3240a717d9121f2a20d8238803}}, {{cite:33baef9b3984d7d9aaacdab2289012771f139fd5}}. {{figure:ffb50ae8-8c88-4079-aa83-a368641bd7fd}}
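The byte-level point is easy to verify directly (our illustration):

```python
# The interchangeable Swedish/Norwegian letters share their first UTF-8 byte.
for ch in "äæöø":
    print(ch, ch.encode("utf-8").hex(" ").upper())
# ä -> C3 A4, æ -> C3 A6, ö -> C3 B6, ø -> C3 B8: shared leading byte C3
```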
Traditional GOF methods, mainly inspired by {{cite:3bdbed92a61447ae22a888b153f54512bf30c13e}}, construct orthonormal polynomials on a case-by-case basis for each parametric discrete distribution {{formula:9cbb17fb-02e7-48f5-aea4-ea52e59fc534}} . This is generally done by solving the heavy-duty Emerson recurrence relation {{cite:23ce140f732f29907b9d63d42115c758d5997210}}, {{cite:15c552d9a99c5ffa9b14e0d608842c0adc367c4e}}, {{cite:927e70bb0a667de2a54a0391d0adf883f871c9e9}}, which can be quite complicated for non-standard distributions. A few concrete examples: the proposal of {{cite:4a91e27101a99710a88c99b547a3be50484c1ff6}} for testing the Binomial distribution is based on Krawtchouk polynomials; {{cite:bc08f5d6aac4c8b1a3784eb2d0ee7b557b15e388}} develop a test for the geometric distribution using Meixner orthonormal polynomials; and {{cite:2f27de2adecc9875d7f3f332ec538176e8fc59bf}} construct a Poisson GOF method based on Poisson-Charlier orthonormal polynomials. For a theory of classical (distribution-specific) orthonormal polynomials, see {{cite:b327741c478f2eb8e9176756f7988f2dab0ef4dd}}.
An optimal control problem involves finding controls for a dynamical system such that a certain objective function is optimized. Traditionally, most research on solving optimal control problems for non-linear dynamical systems uses optimal control theory, which, in principle, finds the optimal control by deriving Pontryagin's maximum principle or by solving the Hamilton–Jacobi–Bellman equation. These classical strategies are offline and require complete knowledge of the system dynamics, making them unsuitable for dynamical systems with uncertainties {{cite:92a4da044e85f2fcbe135b0f7b48ee69e1046e76}}. Recently, model predictive control (MPC) – a feedback control based on real-time optimisation – has attracted increasing attention in stochastic optimal control research {{cite:add666d495a50a43ce32680e823afd2d3498fc32}}, {{cite:de35254e3167302eed054ead1b421f45a399c16e}}, {{cite:7254594a33560cee527bb83bab529037de84cc11}}. Alternatively, optimal control problems can be solved using reinforcement learning (RL) approaches. {{cite:1a6529f6155ce82df0f60fe3feb76a6042549b8f}} provide a benchmark study comparing model predictive control and model-free reinforcement learning, showing that RL results are comparable to MPC; further, after a certain break-even point, model-free reinforcement learning is shown to outperform nonlinear model predictive control with an inaccurate model. Furthermore, as opposed to MPC, RL offers the extra advantage of generality, since it does not need complete knowledge of the model dynamics. Research on the application of RL to optimal control problems is advancing rapidly in fields like manufacturing {{cite:517e432ff046f12ff256852953ffba2580867bb6}}, energy {{cite:8c5203e9d387ace5890cde8fa3b269a3fcc4b617}} and fluid dynamics {{cite:1b4f4f10aaa00e5fda2c19862d3108003df918a0}}. However, developing RL strategies based on an assessment of their robustness against uncertainties is still an open area, especially for cases where model uncertainties have a substantial effect on the optimal control.
Let {{formula:fd684254-ca73-44f5-aeaf-027b3cd29e45}} be a simple, connected, finite graph with vertex set {{formula:4e40e94c-096e-423b-a7a8-3c92a51b2da2}} . Let {{formula:7331b3b0-05d8-4147-be41-ff50be82baf9}} be the {{formula:2031e481-e284-4458-b8a6-bdf40a0044be}} -adjacency matrix {{cite:dd729f4cbd6d149d62196ece655130d53d115cf2}}, {{cite:3c1530d03555c82d072cf578f11054fd20bde71c}} of {{formula:e88191ff-1820-4a46-ba12-dbebd4e3181e}} . Then the Seidel matrix {{cite:713513be5e3c5af3055e7fbc949cfd3cffb2efda}}, {{cite:38780d496a12b00babc24edb539188f802343ab4}}, {{cite:e29ca6873f66448ce3941640606a10ed828e00ed}}, {{cite:b42602c03f98551019edec89b7503d62423ffa71}}, {{cite:3775461c226de899ef88543b0d7b86d501fe4140}}, {{cite:d3741aee2e47dccce3174c499700f72d09ddf656}}, {{cite:85ba21c47f2d2aea477a529ea939ad942b93e53f}}, {{cite:2e662d11d299cf4e89d1a301ed392936601b2a0c}}, {{cite:383c9f6ff0478975b01f1c23868ef3a2cf9106c3}} is a square matrix of order {{formula:41a858c6-3f06-4672-8eef-ada8fab999ed}} defined by {{formula:fecb0ea3-499c-4f29-a0f9-13f9af9aa9a4}}
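For reference, the standard entrywise form of the Seidel matrix (which we expect the elided formula to match) is

\[ S(G) \;=\; J_n - I_n - 2A(G), \qquad S_{uv} \;=\; \begin{cases} 0, & u = v,\\ -1, & u \text{ adjacent to } v,\\ \phantom{-}1, & \text{otherwise,} \end{cases} \]

where J_n is the all-ones matrix and I_n the identity matrix of order n.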
In Fig. REF we provide some qualitative examples, illustrating cases for which the QB-Norm model correctly retrieves videos that are not retrieved without QB-Norm. Examining failure cases, we found qualitative examples for which the retrieval ranking produced with QB-Norm was more “reasonable” (as shown in the bottom set of Fig. REF ). However, in line with prior work {{cite:494abf1b344ecec639dc96665375ad82d321613b}} suggesting that hubness is a property of the distribution (rather than driven by individual samples), we did not observe consistent, obvious qualitative trends among the samples that were corrected, or remaining failure cases. As an example, we observed gains for queries with both shorter and longer, highly descriptive captions.
(3) TMM {{cite:6832f008fde0c6775fb4e6b31852e430d22d683c}}. This method supposes that all data points are drawn from a central Student’s t-mixture model (TMM) and utilizes the EM algorithm to estimate the TMM components as well as rigid transformations.
The discovery of giant magnetoresistance{{cite:cb80535cb95cfd7a18797c43d5f3b96a8ab39564}} greatly facilitated longitudinal magnetization, accelerating the development of high-capacity magnetic storage devices. In the 1960s, the inverse Faraday effect (IFE){{cite:e0d567aff227188aba255e12407acddc6c8fc5d5}} was first proposed to describe an optically induced magnetization in a nonabsorbing material. Beyond this pioneering work, a new physical phenomenon called all-optical helicity-dependent switching (AO-HDS) was experimentally discovered{{cite:d33c7e7c43a259621c68b8dda467e9c8e234bf84}}. AO-HDS was later found to be far from trivial, occurring in paramagnetic materials{{cite:e0d567aff227188aba255e12407acddc6c8fc5d5}}, nonmagnetic and antiferromagnetic metals{{cite:83e029c9da92d5213d9f7da562ca3d3ade695506}}, ferrimagnetic materials at specific laser fluences{{cite:d4858d61dd4dafa2acc39efd7906f38c83e4fab1}}, {{cite:0947da65ff7a760f67bb3c93dc744766aa3b8242}}, {{cite:e8224e4797bceb1ac2dbf89b4c065dceac14ca91}}, and magnetic thin films, multilayers, and even granular films{{cite:1f2769324e1bf09dd7855acafebd882387429c66}}. Owing to its high density, high energy efficiency and erasable feature, all-optical magnetic recording (AOMR) has emerged as a key component of the next-generation storage technology of ultra-high areal density{{cite:e8224e4797bceb1ac2dbf89b4c065dceac14ca91}}, {{cite:1f2769324e1bf09dd7855acafebd882387429c66}}. To improve the storage density, the circularly polarized beam was focused into a half-wavelength region under a high numerical aperture (NA) lens{{cite:a96e47ee04a0ee7524ea0acb1cfeafa006f13863}}, {{cite:88ce19ba5ab29f8c34ea4dc6ceadf1a2291019c8}}, {{cite:595b74a019162185b3edc6d9637f924b1faa9fa9}}. Toward practical applications of longitudinal magnetization, the magnetization needle{{cite:ef4eb2c6ab32e9a5f2e9d1e32de8024cf0bf1808}}, magnetization chain{{cite:622c64d5ef63646b15c2e2ec63df8f9bf52f1166}} and magnetization spot arrays{{cite:ca504476aa4f74d5ae6960a8e35c4cb16abc3606}}, {{cite:1d25fbfc4e43c86507bc112700e63523cc5bbc49}} were also produced by controlling the phase, amplitude, and polarization distributions of the incident beam.
The hypothesis that the starting point of iterations {{formula:6ea0f255-60ee-42f5-b0ab-1ddc9c5db61f}} is such that the ball centered at {{formula:8616a282-5ee3-498a-98aa-b6a2ea88f974}} with radius {{formula:e2e38c27-5506-4569-a3c9-7c0189606c2a}} is contained in {{formula:35b851e2-7b04-4a04-b9ab-21b175d79de2}} does not weaken Theorem REF . In fact, if this hypothesis is not satisfied, we can replace {{formula:0831cd35-020b-4bf5-b9f0-5118abfc012b}} by a larger ball {{formula:4cd1a1e3-7895-4e78-a31b-ce5711c6f02e}} where {{formula:6a9570a5-1222-4041-9123-d82be553a260}} Note that in the convexification method, {{formula:bfd9cf6e-d308-4403-98bf-9dd4d256828d}} is a ball with an arbitrarily chosen radius. The assumption that {{formula:45278daa-db4c-415c-8247-70376cb67d89}} lies inside {{formula:89f622bd-ab5f-4f70-9543-1a63a6946c8b}} is the main reason we can replace the gradient projection method of {{cite:57586e5acf327459750a1b740a629e3a11cf0360}}, {{cite:f10fd27e1336b030355d0d5144c7c89d374e2abb}}, {{cite:69010802b2908651e85ac1269f4064e8fb3ad95b}}, {{cite:3d2ff90be67dcc88431600ec15fb6ba65d12fc9e}} with the gradient descent method in Theorem REF . Without this assumption, elements of the sequence produced by the gradient descent method might leave {{formula:fce41663-0540-4c50-ba97-35d4a93586d9}} , causing the sequence to diverge. We refer the reader to {{cite:0b574e3a157bf7b057800bb9a3d0787c42c17e17}}, in which the authors proved a particular case of Theorem REF .
Defense Algorithms and Mitigations. Mitigations of poisoning attacks have been studied extensively in recent years. As the major focus has been on neural networks {{cite:a1ca2b6731d63354cc8087f07a0b756dd389e149}}, {{cite:c84916f8ba824491de77b4e77ba357bc739a239a}}, {{cite:18f411c60733e478c5376e028c1bcbf713dc3651}} and classification tasks {{cite:ae9f3af5da86bc1b70e4998cfc24be94ace046ff}}, {{cite:884db4015d6dc082f8cbc6603816975c9c8fe11d}}, there are very few works on mitigating linear regression poisoning attacks. Liu et al. {{cite:ee090b49e1c02c1daa54e497c34675bdc1a06365}} proposed a robust training algorithm for high-dimensional linear regression models; this algorithm, however, addresses noise in high-dimensional data. Interestingly, poisoning CDF functions tends to "populate" relatively dense areas of the key space, and as a result we expect the poisoning points (in this new CDF context) to remain undetected when noise is removed.
A method for unsupervised deep-learning image synthesis of medical images has been presented. To synthesize new intermediate slices, and thereby recover spatial information, the method exploits the latent space interpolation ability of autoencoders: new intermediate slices are generated by mixing the latent space encodings of two spatially adjacent slices. High-resolution ground-truth images are not required to train the approach. Results of our preliminary experiments using MNIST data demonstrated that our proposed approach outperformed a variational autoencoder (VAE) and the Adversarially Constrained Autoencoder Interpolation (ACAI) approach {{cite:9f25c99089f91ed80c7786bab34c61bd70a81b00}} for interpolating rotations of handwritten digits. Evaluation of the approach on cardiac and brain structures using four publicly available MRI datasets revealed that the method can outperform cubic B-spline interpolation. Performance differences between the evaluated methods become more apparent as the upsampling task becomes more difficult, i.e., for highly anisotropic volumes (e.g., cardiac MRI) or larger through-plane upsampling factors. This may indicate that the model can infer the missing information from contextual and anatomical information captured in the latent space. Furthermore, the experimental results revealed that our proposed approach can compete with related unsupervised {{cite:ae67e92f1e2e46e2d2072a7e9addc2819314fefd}}, {{cite:6fa5f1efefb76a65532fcc2c5e90329776d107aa}} and supervised {{cite:d457b13d28bfb895aab82a7a7dd43ccf583bf657}} super-resolution approaches. Compared with the unsupervised super-resolution methods of {{cite:ae67e92f1e2e46e2d2072a7e9addc2819314fefd}}, {{cite:6fa5f1efefb76a65532fcc2c5e90329776d107aa}}, {{cite:cfb27ff9d79636ae8b05eb7b6e47834f753f1c66}}, {{cite:2f7be5d9fd157f57fa12aa08ea5c822a8b6b0499}}, {{cite:2ee944cf47964569ea6808866d9070db7e9df202}}, {{cite:2bd0c77ca7530e9ad5ca2f334817325e9f75011b}}, our approach can be applied with any desired upsampling factor and uses a single encoder-decoder structure. Moreover, applying the methods of {{cite:ae67e92f1e2e46e2d2072a7e9addc2819314fefd}} and {{cite:6fa5f1efefb76a65532fcc2c5e90329776d107aa}}, {{cite:cfb27ff9d79636ae8b05eb7b6e47834f753f1c66}}, {{cite:2f7be5d9fd157f57fa12aa08ea5c822a8b6b0499}} requires optimization during inference for every image at hand, so inference takes several minutes of GPU processing time {{cite:2f7be5d9fd157f57fa12aa08ea5c822a8b6b0499}}. In contrast, at test time our method can synthesize multiple intermediate slices between each pair of adjacent slices in an MRI scan in less than a second on a GPU. Furthermore, the method of {{cite:2ee944cf47964569ea6808866d9070db7e9df202}} necessitates the creation of a common atlas space to which each image must be transformed. Finally, it is fair to note that the approach of {{cite:cfb27ff9d79636ae8b05eb7b6e47834f753f1c66}}, {{cite:2f7be5d9fd157f57fa12aa08ea5c822a8b6b0499}} explicitly performs anti-aliasing using an additional CNN.
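The core mechanism reduces to a few lines (a sketch with stand-in encoder/decoder modules, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Stand-in convolutional encoder/decoder pair (illustrative only).
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 32, 3, stride=2, padding=1))
decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),
                        nn.ReLU(),
                        nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

def synthesize(slice_a, slice_b, alpha=0.5):
    """Mix latent codes of two adjacent slices; decode the mixture."""
    z = alpha * encoder(slice_a) + (1.0 - alpha) * encoder(slice_b)
    return decoder(z)                  # estimated intermediate slice

a, b = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
print(synthesize(a, b).shape)          # torch.Size([1, 1, 64, 64])
```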
When trying to understand the content of a video, it is essential to consider the video from a wider perspective to learn contextual information. To achieve this, some prior works {{cite:feafcb3899b790178b32d245d7e202ee8d367222}}, {{cite:f002aeb473a6c6c4010b7e4546c012d3668e7ce9}}, {{cite:323f4e9e8cca616a422d7e96c430f34de66e6b5d}}, {{cite:39f907edfcfbfbea871ea6d12825d7dc5188c18c}} obtain global contexts via the self-attention mechanism {{cite:ce40e7d2fa8e1bbeb5bbd78e6aad3a0664721f5f}} or LSTMs {{cite:dcbdda441537ae6ccc2155b77cb8420e0782857b}} to enhance video comprehension. However, when generating global features, the fusion of contextual information brings information from other segments into the local content, for example introducing noise from background segments into event-related segments. Since the temporal boundary of the event is required, it is necessary to predict whether a segment belongs to the background or to an event based on its local content. Therefore, although global information benefits video classification by providing contextual information, it also brings noisy information into the local content, which has a negative effect on temporal localization.
The proof of this theorem, however, does not use the explicit formula of the operator {{formula:7d7d77c4-ef6c-4451-939a-6feda658fcb2}} and relies only on the fact that the operator has orthogonal polynomials as eigenfunctions. The operator {{formula:ac5b8c6c-7efe-4085-a273-8097b061151c}} in (REF ) contains a factor {{formula:0515986b-fad2-4b3b-827b-480059645fe9}} . It is known that the Laplace-Beltrami operator {{formula:9a8cf2c6-1eeb-4d66-9698-e03637f5dbd1}} can be decomposed in terms of angular derivatives (cf. {{cite:85addec4ce3bfa634fd9b32d99e20370dd923509}}), {{formula:e1711c73-753e-48b5-a6a0-d47913c77633}}
A diverse set of studies utilizes quantum CS or alternative approaches in more practical settings to further reduce the number of measurements and the required resources, such as online learning and shadow tomography {{cite:006ff258d6572a46b13e36269f5b0d58ec9d1e8b}}, {{cite:759fafdef7ef4d333c389ace689b4ff8314d0f34}}; self-calibrating quantum state tomography, relaxing the blind tomography problem to sparse de-mixing {{cite:c9063b81726dc5287269e22e383e63ad92c7d404}} and hierarchical compressed sensing {{cite:dd9f14e51d79cdb6ef22a54e07ecbeb448037c81}}; adaptive compressive tomography without a-priori information {{cite:f8fb0339bc32264399135a8e3368def7f3dd8e4c}}; reduced density matrices {{cite:05dc6d3cc78c714b093c6d417f99c917b1e227cf}}, {{cite:639a0253614bfdc772693410c42ee0089f706d16}}; matrix product state tomography {{cite:5bed195b80b00c0e747f5b2967837adc24f61eaf}}, {{cite:737b2ce16f6595149a2832b39411bf2999e86182}}; neural network {{cite:5c0ece4193fe41adf54430d05de00f2f2b19e987}} and machine learning {{cite:4f36033e0f1ebcfded48d391d9b43a6d32b1b230}} based approaches; and other methods including {{cite:a405ba608bfa3a9569cbedf5e392723c7caa88f5}}. No polynomial-time algorithm, quantum or classical, is available for exactly reconstructing {{formula:1e805dd0-9822-4232-912e-9dc8c80af99d}} -sparse pure states in (REF ), i.e., {{formula:c965d91c-7634-4d45-bc2d-b17ac9594d15}} -sparse vectors in dimension {{formula:b2534bcd-4f91-439e-ad42-38633c96ccc7}} . Achieving {{formula:b417f5f3-6fe0-4bd9-97d2-4ec70ebea9ed}} -norm minimization based QST of {{formula:1774447c-fe64-4598-b602-08b352e50fcf}} -sparse pure states with polynomial-time quantum algorithms remains an open problem.
We report results on the same 10,000 instances for each TSP size as in {{cite:6c9410d95707617f7c1cb097fd2b609fcc981e83}} and rerun the optimal results obtained by Concorde to derive optimality gaps. We compare against the Nearest, Random and Farthest Insertion construction heuristics, and include the vehicle routing solver of OR-Tools {{cite:81a588d441814e09ec957076295cdb78a39572b7}} containing 2-opt and LKH as improvement heuristics {{cite:786733c0848342136ec8c9e29fa005a73f9df97f}}. We add to the comparison recent deep learning methods based on construction and improvement heuristics, including supervised {{cite:28490fe1f4bb1ffcf698f3a87492ae38273565b8}}, {{cite:36311c1158c46088b2e5657c05739c76faffe7f6}} and reinforcement {{cite:624ec026a83d9da9ee5c8ab7e8953ce5dd95aaea}}, {{cite:6c9410d95707617f7c1cb097fd2b609fcc981e83}}, {{cite:cf467fcd11df0a2cc2697820240faa236b6daa90}}, {{cite:36c9b18ed7af143738fcdd016f8b87be2c055242}}, {{cite:786733c0848342136ec8c9e29fa005a73f9df97f}} learning methods. We note, however, that supervised learning is not ideal for combinatorial problems due to the lack of optimal labels for large problems. Works prior to {{cite:6c9410d95707617f7c1cb097fd2b609fcc981e83}} are presented with the running times and optimality gaps reported in the original paper. For recent works, we present the optimality gaps and running times as reported in {{cite:6c9410d95707617f7c1cb097fd2b609fcc981e83}}, {{cite:36311c1158c46088b2e5657c05739c76faffe7f6}}, {{cite:624ec026a83d9da9ee5c8ab7e8953ce5dd95aaea}}. We report previous results using greedy, sampling and search decoding, and refer to the methods by their neural network architecture. We note that the test dataset used in {{cite:624ec026a83d9da9ee5c8ab7e8953ce5dd95aaea}} is not the same, but the data generation process and size are identical; this, together with the high number of samples, decreases the variance of the results. We focus our attention on GAT {{cite:6c9410d95707617f7c1cb097fd2b609fcc981e83}} and GAT-T {{cite:624ec026a83d9da9ee5c8ab7e8953ce5dd95aaea}} (GAT-Transformer), representing the best construction and improvement heuristics, respectively.
This problem is well suited for an interior point method {{cite:0be63faf0dbc45bbbdb6a53a698e6b8060eaf4e3}}, {{cite:71b47dd9d2140cbddcbb33952c2eeccfebde9eb6}}, {{cite:e11ab1df1a1d4abd8d21242100897c7d5bfe8a8b}}. First, we relax the inequality constraint {{formula:e603dea0-ff38-4179-b551-ed0368582222}} via a log-barrier penalty, obtaining a minimization problem for a new objective {{formula:78177ab3-2d23-4c08-a456-5dfa3bea6913}} : {{formula:a52dbbb5-fbf6-4949-866a-81af6a4d46be}}
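In generic form (our notation; the paper's constraint and barrier-parameter symbols may differ), relaxing inequality constraints g_i(x) >= 0 with a log-barrier yields

\[ \min_x \; F_\mu(x) \;=\; f(x) \;-\; \mu \sum_i \log g_i(x), \qquad \mu > 0, \]

where decreasing \mu toward zero drives the minimizers of F_\mu along the central path toward a solution of the original constrained problem.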
m
e88eedc44de080cdd54a0507a4ac800d
Motivation offers a framework compatible with other methods in machine learning, such as R-learning, goal-conditioned RL, and hierarchical RL (HRL). In R-learning {{cite:bba6b01a5451a6d4b4d5f1cffea899f47ffbf75c}}, {{cite:fa766842c2f67566fd378f11a5b781b713649d13}}, the cumulative sum of future rewards is computed with respect to the average level. The average reward level is a slowly changing variable computed across several trials, which makes it similar to motivation. In goal-conditioned RL – the closest counterpart to RL with motivation – the Q-function depends on three parameters: {{formula:09b2b11f-f0b3-48eb-954c-a9f8d77da573}} , where {{formula:145936f1-031f-4184-b2dd-d4d5ebfef92d}} is the current static goal. In the motivation framework, multiple dynamic goals are present at the same time, and it is up to the agent to decide which one to pursue. HRL methods include the options framework {{cite:3930d5a3d02e3b6b5ef6760181b58d60064ac8ec}}, {{cite:99c0118c9829945c16125a62c38e3cd4d1113acc}}, RL with subgoals {{cite:99c0118c9829945c16125a62c38e3cd4d1113acc}}, feudal RL {{cite:a12881b1338f72e7536731fae35ba48f5269add2}}, {{cite:289e961246a1dd0dd8daa3596bc3cef54f45184f}}, and others. In HRL, complex tasks are solved by breaking them into smaller, more manageable pieces. HRL approaches have several advantages over traditional RL, such as the transfer of knowledge from already learned tasks and the ability to learn solutions to complex tasks faster. Although HRL methods are computationally efficient and generate behaviors separated into multiple levels of organization – which resemble animals' behavior – a mapping of HRL methods to brain networks is missing. Here, we suggest that motivation offers a way for HRL algorithms to be implemented in the brain. In the case of motivation, both the manager and the lower-level actor networks receive the same reward, which makes motivated networks different from, e.g., their feudal counterparts {{cite:a12881b1338f72e7536731fae35ba48f5269add2}}, {{cite:289e961246a1dd0dd8daa3596bc3cef54f45184f}}.
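For concreteness, a toy sketch of a goal-conditioned Q-function Q(s, a, g) follows; the architecture and sizes are illustrative assumptions, and under the motivation framework the goal input would be a dynamic quantity chosen by the agent rather than a static one.

```python
# A toy goal-conditioned Q-network: the goal g is an explicit input alongside
# state s and action a. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GoalConditionedQ(nn.Module):
    def __init__(self, state_dim, action_dim, goal_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + goal_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a, g):
        # Concatenate state, action, and goal; under motivation, g would be
        # selected dynamically among several competing goals.
        return self.net(torch.cat([s, a, g], dim=-1))
```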
d
d415a64886e24fbca1f0a53fac47fead
For tumor detection, the techniques adopted to detect tumors inside WCE images are mostly based on SVM {{cite:b689f1cba4f92bad99e14c5d8dc9f16a82d61ef4}}, {{cite:12961dfe4b985db0793bfee42c46e017cf27c9f5}}, {{cite:33ff10f932f0a42aa46439fa6b8b09ff16e3e20c}}, {{cite:9dc7e6563a55b9d04a5de1fe45df8c6eb572fd7c}}, {{cite:3ed5d138f933ab964f82ab2580ebcd9873f09aee}}, {{cite:3b85844749590af32bec96dd5bc5b50c0d8148ed}} and deep learning (DL) {{cite:d7613446b2b74fda623cc58b391722bf19de6a58}}, {{cite:4e0ddd536cc1fc050c48d62a5335ed88033ef40e}}, {{cite:df67c16dec0229788dee84797b5ae6e84c75e8e6}}, {{cite:be807817a5c19e314313fc7572cd52610d9de56b}}, {{cite:b373cf71b4e520fac339957bbf3bea1a0eab6dd8}}, {{cite:ec5e2dcf630adeaa2dfdbc7ed603cb6e9c48a26c}}, {{cite:1df0ac49c322572b7de3b42c39b02783a091163a}}, {{cite:08769e587296803a61bcf0ef43e54bedf900b0c1}} after extracting features. A preprocessing step is essential for each technique, followed by a classification approach. For instance, in {{cite:3ed5d138f933ab964f82ab2580ebcd9873f09aee}} LBP is used as a texture feature extractor together with two feature selection schemes, i.e., SVM-SFFS and SVM-RFE, on 1200 images, yielding good performance. Similarly, {{cite:be807817a5c19e314313fc7572cd52610d9de56b}} uses a large data set and implementations of CNN models like LeNet, achieving high-performance results. However, each technique follows the generalized rule of preprocessing followed by a classification approach for classifying only one disease at a time, and none of the studies attempt to address issues such as JPEG compression during the generation of WCE videos and AWGN effects while transmitting these frames to a remote physician. The proposed possible future work concentrates on a cascaded approach for denoising these artifacts via DnCNN {{cite:26909acfb40c6783fa55a0c8353e9496139edc0e}} and then proposing a deep learning model {{cite:f57a8de9cb1918ace96acf7e4d5485978201f5a6}} that will categorize tumor, polyp, and ulcer in a joint classification manner.
d
6ace4e20d40368380dad2eec9c8ab1fa
As mentioned in {{cite:573ede106e9e9007cbf4b6ba3c7d004037c415ee}}, InceptionTime networks can exhibit high variance in performance between training runs and can therefore benefit from an ensemble approach. Consequently, this work also considers an ensemble of five networks for mood-state classification for both the Short and the Short-Long network. The predicted state is thus the average prediction over the five networks' outputs. These methods will be referred to as the Short Ensemble Networks and the Short-Long Ensemble Networks, respectively.
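A minimal sketch of the ensemble prediction described above follows: the predicted state is the average of the five networks' softmax outputs. Network construction and loading are assumed to happen elsewhere.

```python
# Average class probabilities over ensemble members, then take the argmax.
import torch

def ensemble_predict(networks, x):
    with torch.no_grad():
        probs = torch.stack([net(x).softmax(dim=-1) for net in networks])
    return probs.mean(dim=0).argmax(dim=-1)
```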
m
f062826ad71c660c50d7fd9475ddc36e
The following moments bound on Gaussian mechanism with random sampling is proved in {{cite:088fc87c25b0e04f1bf0a5ef470627663d807b93}}.
m
c4879a8f9c73721b3e1af1bbf9172357
In all three cases and all datasets, the posterior of {{formula:55050add-9c2f-462c-b15e-4980afe12d1e}} has two distinct modes, namely, the SI mode characterized by a larger {{formula:813b19e0-d9ab-4df2-88df-4d2d2c9a2484}} , and the MI mode with a smaller value. The origin of the SI mode is explained as a result of degeneracies between {{formula:1d5ce58d-aff3-4c95-8977-c26469cf2852}} and other {{formula:d2e8a584-d389-4502-b241-fcbe618b5aba}} CDM parameters. The absence of free-streaming neutrinos in SINU leads to a phase shift and enhancement of the CMB angular power spectra. These changes are compensated by the other {{formula:5f795719-bb74-445e-afc7-073a4592794c}} CDM parameters. Specifically, {{formula:8d6a3f3a-1e1e-4881-9f0c-5794276888e7}} compensates for the phase shift, {{formula:5934092c-7785-463f-b0c3-9a7966f8b06d}} and {{formula:844b4f88-e63a-419f-b149-1e39526607d2}} correct the overall amplitude, and {{formula:a2f8556a-4319-47d1-9aee-c547cc6a6a4d}} tilts the whole spectrum to achieve a good fit to the data. We showed that this compensation mechanism works only for those values of {{formula:de5122a6-f164-4aa3-bddf-e341d24da067}} for which neutrino decoupling happens close to matter-radiation equality. Intermediate values in the `valley' between the SI and MI modes are not favored by the data. The large {{formula:b50c13c0-120d-4c0a-ad19-53371b04e1c3}} corresponding to the SI mode globally affects the whole CMB spectrum observed by Planck, which can be undone by changing other parameters. In contrast, the MI mode spectrum is virtually indistinguishable from the {{formula:2da23fc3-9de7-4615-b793-7f4810afdc31}} CDM as it only affects very high-{{formula:e6a06768-3d4a-45c2-96f3-3a0a4394a5a1}} modes, due to the small {{formula:1e10d2e7-c5ea-4d35-8058-3a8437d999e1}} , which are not observed by Planck. The valley in between corresponds to intermediate {{formula:7644ca8e-4402-41f7-81fb-dde2aa6a2eeb}} values which modify only the high-{{formula:4c2b9fa5-51f7-4b9f-8966-ccedd1ef3d0f}} portion of the spectrum. These partial changes are difficult to compensate using other parameters, resulting in a worse fit to the data. In the {{formula:bd036858-f7d2-4c19-a0af-91b21791c1b4}} scenario, the significance of the SI mode is greatly diminished if CMB polarization data is included. This is a consequence of the relatively poorer fit of the model to the low-{{formula:9fefed04-a76e-4979-8d8c-1ab9eb70fa4c}} polarization data. Even though the phase shift from the free-streaming neutrinos is the same for both temperature and polarization spectra, the peaks in the polarization spectrum are relatively sharper. As a result, the EE data is more sensitive than the TT data to any change in the number of free-streaming neutrinos {{cite:a51e9468e67016644fe4c63618fe5945a8deb7cd}}. The inclusion of the polarization data also shifts the whole posteriors towards smaller {{formula:c7e7910a-0dda-4401-a457-2de479e4b9b5}} . However, the SI mode is rejuvenated in the flavor-specific {{formula:fb1295c3-e9bf-45c6-a574-4c3220c79fdd}} and {{formula:b6977389-a751-4a7f-aa7d-0fcc16414e9a}} scenarios, which are in lesser conflict with laboratory experiments. We showed that even with the polarization data, the SI mode significance is comparable to, or sometimes even greater than, that of the MI mode. In these cases, the number of self-interacting neutrinos is less than {{formula:c2b24aaa-4594-4944-8fc5-0edfee4c755e}} , and as a result, the changes in the CMB spectra are also relatively moderate.
This allows for more freedom to use other degenerate parameters to achieve a significantly better fit. Due to the phase shift in the CMB spectrum, the SINU scenario favors a larger {{formula:8326b7e4-b48a-453d-b8a8-be0bca973923}} in the SI mode. For example, in the {{formula:2fb2654e-d12e-4d42-a85f-6697b2d73f6f}} scenario, we find {{formula:d4293a33-acfe-4a5f-858f-5d6bfc5fd2a4}} for Planck temperature, polarization, and lensing data. Although in the flavor-specific scenario the significance of the SI mode increases, the value of {{formula:84b5c691-1401-43f5-8548-aff7df6572b5}} slightly decreases due to the smaller phase shift. We do not find any strong correlation between {{formula:497f0257-ff58-4d8e-8fe4-01cfff18346a}} and {{formula:0f36ad82-7264-43d8-85d4-e0ee7021b7c4}} .
d
14011c76c3d5d123cc6a869f98ea0450
Timing and speedup. DeepTag is implemented using PyTorch {{cite:5ce066abe6b25130559ba737fdb6b6899c3f82b2}}, and tested on a PC with an Intel i7-11700K @ 3.60GHz CPU, 64GB RAM, and an NVIDIA GeForce GTX 3090 GPU. CNN prediction can be accelerated on the GPU, while postprocessing runs on the CPU. In Stage-2, multiple ROIs are processed in parallel by the CNN and postprocessed serially. Computational times are recorded for input images of different resolutions; for each resolution, we use the average time over 100 images.
d
dd11017c9e32ca8d1be6dc5abea4d2c9
Besides, our framework mainly consumes time on bucket statistics aggregation, best split finding, and the preparation of data for the left or right branches. We only take into account the time consumption of MUL because it is the main computation. Our implementation of MUL can execute element-wise multiplication, and is thus much faster and executed fewer times. For bucket statistics aggregation, it executes {{formula:2b3845b4-69a6-4d1c-a3b5-39caa74ad125}} MULs. In our implementation, we actually suppress this to {{formula:c93adf95-a360-44ec-af23-11d3e9197034}} MULs by a proper design. For split finding and sign determination, {{formula:ec2e12de-5f88-4fa5-8b88-94f7778c2fd9}} MULs are executed. In the preparation of data on child nodes, 6 MULs are conducted. During training, each prediction update requires element-wise MULs between {{formula:7712afaf-5f64-4fcd-af44-50cacc995ef3}} shared local indicator vectors, which have {{formula:6eebd6fe-eb38-4fe2-bedd-953146a2dc2f}} elements. Thus the prediction needs {{formula:45609319-67f2-4e2a-9491-87d2328ad4e8}} MULs. {{formula:4fdd0f02-5fc7-47c5-9313-0d384e92dc2e}} rounds of training need the update of the prediction, causing {{formula:edc3199f-6855-4566-af2e-33021351f2f9}} MULs in total. Suppose each MUL needs {{formula:8bb7eb81-d9be-4a04-8bd6-34a23e5e6e6d}} ; in total, the time cost of operations inside our framework is {{formula:7d657cf3-1115-4570-8b9a-2d987829a938}} . Executing one python package (https://github.com/n1analytics/python-paillier) of the Paillier algorithm {{cite:63aa493ca79c72588c45d76038a03bf85f659032}} with a 1024-bit key takes {{formula:068ce473-63e1-41e1-b120-fd4fb1543363}} and {{formula:d6370084-067e-40b5-8719-5d65b7db96f7}} , while our MUL takes {{formula:4c1674fc-3760-4b4d-9549-02fa2203adec}} on average. Suppose we have {{formula:86311590-316f-402a-90ab-d6ddb11775fa}} participants and {{formula:9aae0d56-997b-42e4-ba40-818cc5bfbd7e}} instances. If we build a model with {{formula:2f9b691d-c9ec-47fa-96cb-72d452ed4fed}} trees, max depth {{formula:c609ebdb-361f-4ba5-8b10-8790bc395d5c}} for each, {{formula:8ff75aca-cde0-4a27-b54b-27edb3ba2f17}} features, and {{formula:198071d4-8e35-4632-8777-085124d9cfb2}} buckets, we will have {{formula:1f1ad239-95e1-477e-ac35-0bb50416bf1f}} s and {{formula:23594322-d0d3-4e42-840e-e6ab44260d09}} s. Note that we do not calculate the time cost of communication because we simulate our framework on one machine with virtual participants; we aim to show the computational efficiency of our framework.
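A back-of-the-envelope sketch of this timing model follows: the total cost is the sum of the per-step MUL counts times the measured per-MUL time. The per-step counts are left as inputs because their exact expressions are given by the formulas above; the example values are made up purely for illustration.

```python
# Total runtime = (sum of MUL counts over the framework's steps) * t_mul.
# The counts below are hypothetical placeholders, not the paper's values.
def framework_time(bucket_muls, split_muls, prep_muls, pred_muls, t_mul):
    return (bucket_muls + split_muls + prep_muls + pred_muls) * t_mul

print(framework_time(1e6, 5e5, 6, 2e6, t_mul=1e-6), "seconds")
```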
m
d34986d2f88dde4097176233d00fb5ac
The benefits of label smoothing have recently been highlighted by {{cite:3ce51e327567453570a1851075bd68ac016b2a55}}. We were able to shed light on a severe limitation of the approach, of which practitioners were previously unaware. Similar to LS, other easy-to-apply calibration methods are also damaging in practice. We observed that the degree of refinement degradation varies from one dataset to another. {{cite:5e1983fb9a8127c1ab5cac808f3c71c38ce057f2}} discussed the causes of miscalibration and attributed it to over-fitting on the training data (under cross-entropy loss). We found that the training accuracy achieved by the baseline is {{formula:6a5e2aba-8c64-40de-b0a6-000b71298a32}} , {{formula:b514fce4-6620-42f3-a2a5-55e62fd0226d}} and {{formula:92b3121d-e1a7-46d8-a6dc-0c9872c8eac3}} for CIFAR-10, CIFAR-100 and ImageNet, respectively. This points to comparatively lower over-fitting in models trained on ImageNet and, subsequently, a lower impact on the calibration vs. refinement trade-off. A key takeaway is that the CIFARs are still relatively important in assessing the impact on calibration and refinement for DNNs, as they are helpful in evaluating families of over-fitted models.
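For reference, a standard label-smoothing cross-entropy can be sketched as follows; epsilon = 0.1 is a common but illustrative choice, and this is the generic technique rather than the exact setup of the cited works.

```python
# Smoothed target: (1 - eps) mass on the true class, eps spread uniformly.
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, targets, eps=0.1):
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    smooth = -log_probs.mean(dim=-1)       # uniform component of the target
    return ((1.0 - eps) * nll + eps * smooth).mean()
```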
d
682fced7d21d7451acf008441b772115
Nevertheless, one can test the effect of the HOMO-LUMO gap opening in the absorption spectra of a finite (5,5) CNT using the Aharonov-Bohm effect {{cite:cfe3f685f221efa7524fb6efce4f16543fe93271}}. For that we add to the normal DFT/TDDFT calculations a static magnetic field parallel to the CNT main axis with a value, for the (5,5) CNT with {{formula:d7643e95-b242-42c5-89e8-0c5f4f3e3697}} , between {{formula:aa41a877-7772-4ef1-bc49-ff22a5775537}} and {{formula:08fe9a35-d150-45b7-ae99-0df2aab2f820}} Tesla. In a celebrated work, {{cite:cfe3f685f221efa7524fb6efce4f16543fe93271}} Aharonov and Bohm describe a possible experiment to study the quantum effects of the application of a potential to charged particles. In their work {{cite:cfe3f685f221efa7524fb6efce4f16543fe93271}}, an effect is described that arises from the quantum nature of matter, i.e., a charged particle feels the effect of a potential that acted on it even if there is no field in its current region. To explain the latter effect, Aharonov and Bohm proposed a hypothetical experiment demonstrating that, by using a magnetic flux in a selected region of space, two electron beams acquire a phase difference that is proportional to the magnetic flux quantum, {{formula:5cf83eda-33e6-46c9-96a8-2e9db6ecef7f}} with {{formula:628ab95e-8261-41e3-af04-765bda849843}} the Planck constant and {{formula:7dcb1895-40d4-40af-8ce0-808d6f7d5571}} the electronic charge, even if the magnetic flux is zero in the region where the measurement is performed. The Aharonov-Bohm effect {{cite:cfe3f685f221efa7524fb6efce4f16543fe93271}} was demonstrated experimentally by Tonomura et al. {{cite:7cea641c9fd1d00635f5eb64408781e7c050b5fe}} in 1986, starting the application and study of this effect in meso- and nanosystems. For an infinite CNT, the application of the Aharonov-Bohm effect results in oscillations of the band gap, first described by Ajiki and Ando {{cite:9aeb2d266b641d13d8baa11931737847fad32cfc}}, {{cite:58cc52f46f850bf046f3b01115857f257b0e05f9}}. Furthermore, in the case of metallic CNTs, there is a gap opening that depends on the applied magnetic flux and has {{formula:74a656fb-f488-4ba0-9a93-576ac3219f7b}} periodicity {{cite:9aeb2d266b641d13d8baa11931737847fad32cfc}}, {{cite:58cc52f46f850bf046f3b01115857f257b0e05f9}}. More recently it has been shown that curvature effects break the periodicity of the gap oscillation {{cite:d32764f320f2c4377870fd86ab3ca91d8090c215}}. Fig. REF presents the absorption cross section versus applied magnetic flux for the (5,5) CNT. Two types of peaks can be observed: the plasmonic peak, which oscillates as the applied magnetic flux increases, and the usual metallic peak, which splits upon application of the magnetic flux. {{figure:04555adf-937b-41a1-bfbf-6c935f5d9ab5}}
r
cb8975d8b5e2c9017bb7034ec42648a3
[Proof of Lemma REF ] (i) Suppose that {{formula:5e4ac122-13a0-4ed4-920e-492820094ab8}} is bounded below. Then the operator {{formula:eb104a7b-07d0-408a-8763-ea6982318637}} given by {{formula:79695bde-9183-498c-8537-1ccfd75a16ba}} , for {{formula:6c05559b-33ec-4294-b574-c1a0bac62539}} , is a continuous map between Banach spaces, and has a continuous inverse. In other words, {{formula:44cf8f68-9dd8-4050-a88f-204154cba82c}} is an isomorphism between Banach spaces. Thus {{formula:a9c23d29-9431-494a-91a0-fc73c7715437}} is also an isomorphism between Banach spaces, and so, by the inverse mapping theorem {{cite:3651b69c7b5091c39b2441c69f87240bba3ac65e}}, so is {{formula:e2478a72-b2cf-4191-aa75-528803dd87e1}} . Consider now an arbitrary {{formula:1179b346-9551-4a2c-8796-73a0dd199ad2}} , and set {{formula:374b6415-12d1-4afb-96f1-e2e580c74411}} . As {{formula:459a6f60-d64f-43bd-9d19-62ed15bac416}} is a continuous linear functional on {{formula:b5439709-43f5-45b7-88b7-e34af9a17455}} , it follows by the Hahn-Banach theorem {{cite:3651b69c7b5091c39b2441c69f87240bba3ac65e}} that {{formula:5cb006d1-fbbc-4e70-bd0f-79cea795bd42}} can be extended to a continuous linear functional {{formula:53f485ef-23ba-4b97-845b-e2e4fb9fbc52}} defined on {{formula:5ea34ebd-cdf5-49c6-9675-a72e47a930eb}} . Now, since {{formula:3bed7815-9859-495f-b613-166a92c9ec19}} , we have {{formula:2c77a62e-8772-483d-abc9-e79eee2d4c9e}}
r
d46b58e115d7fd5359da5d2b13c5a360
Symbolic dynamics associated with horseshoe-type structures: In connection with the previous discussion, subharmonics can be also detected by applying to the Poincaré map and its iterates various methods coming from the theory of dynamical systems, the most paradigmatic being the celebrated Smale's horseshoe (see Smale {{cite:558d0c5cb70856a2deec7dcdfddb801b5faf0e8b}}, {{cite:54ade4a29a55f162036f9cf2bbe16b00c8be012a}}, Moser {{cite:99de4d4922ddb48f4d0dce90a0920be287d26f58}} and Wiggins {{cite:45ca629c1e8ebc8d4f857fb988166659a41f47f1}}). Essentially, it consists of a toy-diffeomorphism stretching and bending, recursively, a square onto a horseshoe-type domain.
i
04ec843e38f4147fdd19f67a9fc6d17e
More specifically, we show that the stiffness maximization problem of trusses subject to unilateral contacts can be recast as a second-order cone programming (SOCP) problem. This is a convex optimization problem and can be solved efficiently with a primal-dual interior-point method {{cite:b505986b08cca5b1a470ac2baa0ba7c7ee00731c}}. We next extend the presented formulation to continua. Here, we adopt the conventional solid isotropic material with penalization (SIMP) approach {{cite:667ab50d498ebd107127f1c7fcba5fb0c2b4ac1d}} with the density filter {{cite:bc70d62dcd90c4aced9e8d5e34a6cd1c85531438}}, {{cite:188ea4c0fb90d414de872a2ebd37dd2550ffaead}}. Due to the SIMP penalization, the formulation extended to continua is nonconvex. However, this formulation is suitable for the application of a sequential SOCP approximation method, in which the SIMP penalizations on the densities are sequentially linearized. Thus, the proposed reformulation for continua can also be solved, albeit in a local sense, with a standard mathematical optimization approach (i.e., it does not require any special treatment of complementarity constraints).
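To illustrate the problem class being invoked (linear objective with second-order cone constraints, handled by interior-point conic solvers), the following is a generic SOCP sketch in cvxpy; it is not the paper's truss formulation, and all data are random placeholders.

```python
# Generic SOCP: minimize c^T x subject to ||A x + b||_2 <= c^T x + d.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((3, n))
b = rng.standard_normal(3)
c = rng.standard_normal(n)

x = cp.Variable(n)
constraints = [cp.norm(A @ x + b, 2) <= c @ x + 10]   # second-order cone
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(prob.status, prob.value)
```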
i
02f63de9dabc66d6b598431096525272
In this section we compare the proposed approach with t-SNE and U-Map. In order to provide a quantitative comparison, we cluster the output from Sections REF and REF using K-means and compare the results using three clustering performance evaluation metrics, namely Purity {{cite:0a0c742f5858fa599f8aae6d02743e5ba9e3f99f}}, Normalized Mutual Information (NMI) {{cite:77a4fcaf2379de3fcbbfc07134dcdaf8340f0582}} and Adjusted Rand Index (ARI) {{cite:4fde2bfd385343b96d9901781ec3754da76d0520}}; see table REF for results. For this class of problems our proposed method gives a better result, which can be linked to the robustness of representing the data directly in terms of the transition probabilities rather than the coordinates generating them. {{table:c3d960a9-296c-41d8-9a03-5bcc73df3a93}}
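A sketch of this quantitative comparison follows: cluster an embedding with K-means and score the assignment with Purity, NMI, and ARI. The embedding X and ground-truth labels y are assumed inputs; purity is computed manually since scikit-learn does not ship it.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def purity(y_true, y_pred):
    # Fraction of samples assigned to the majority true class of their cluster.
    total = 0
    for c in np.unique(y_pred):
        _, counts = np.unique(y_true[y_pred == c], return_counts=True)
        total += counts.max()
    return total / len(y_true)

def evaluate_embedding(X, y_true, n_clusters):
    y_pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return {"purity": purity(y_true, y_pred),
            "nmi": normalized_mutual_info_score(y_true, y_pred),
            "ari": adjusted_rand_score(y_true, y_pred)}
```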
d
76e91686248f1d2c78df7033aa121be8
Automated WS (AutoWS) methods are a best-of-both-worlds solution. These techniques generate LFs automatically by training models on a small initial set of ground truth labels {{cite:696b7fb995275743e850b10bf600a94bb1d7634d}}, {{cite:c22eacae22ec5e04fd652bd963f04c3e709a7d27}}, {{cite:c7a4f1d2fe44235398560fdfedb78446ca01a5e3}}. At first glance, AutoWS techniques resemble few-shot learners. However, they are much more, forming omnivorous frameworks capable of incorporating and combining signals from many sources, including powerful few- and zero-shot models. Indeed, recent evidence {{cite:9626473f95f0a3facadd167350b75d69a8246dc7}}, {{cite:787d80e5792de876a1f38604d4eeda5ff44ffeaf}} suggests that WS can exploit signal from powerful foundation models like CLIP, GPT-3, and T0++ {{cite:2a4128adb34a5affc50edb634b92afaa5e6d088a}}, {{cite:fa21693f76affe751ad331a58063ee62b5f4c2a6}}, {{cite:c7296d75b479c219cbc9d36e1e8c5a99af13bdcd}}. {{figure:ef0d8df4-062f-4997-954c-692590f553b9}}
i
6e9a129ab4a328514d5a1454f188c85d
The goal of the Retriever in Retriever-Reader type ODQA systems is to search for and find a small set of relevant abstracts which are then fed into the Reader. Therefore, the performance of the Retriever significantly influences the performance of the entire system. A variety of methods, ranging from sparse lexical models to dense deep learning based models, have been used for retrieving relevant passages. DrQA {{cite:ca9d52e9dc0113f93aceff021b17bb135afe9f29}} uses a TFIDF {{cite:53db7917b2eae39540176a47ea742b673e13fd8f}} based retrieval model to retrieve relevant Wikipedia passages. TFIDF and similar sparse lexical models like BM25 {{cite:98e2f61eabf69df859b3293e9ab462f408b4937d}} have been used extensively for document retrieval in ODQA as well as other information retrieval tasks. First, an inverted index of the different words / word-tokens in the entire text corpus is created {{cite:7e47c209f970775af45b064f4f8c66ba0b16d39b}}; this produces a sparse, high-dimensional representation of each document. The query terms are matched against the inverted index and a list of top-scoring documents (scored by BM25 or TFIDF) is returned.
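A minimal TFIDF retrieval sketch of the kind described above follows; the toy corpus and query are assumptions, and a real system would operate over an inverted index rather than computing similarity against all documents.

```python
# Embed documents as sparse TFIDF vectors, score a query by cosine similarity,
# and return the top-k documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat", "open domain question answering",
        "dense passage retrieval with transformers"]
vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)      # sparse high-dim representation

def retrieve(query, k=2):
    scores = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    return sorted(zip(scores, docs), reverse=True)[:k]

print(retrieve("question answering over passages"))
```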
m
1e2dbf69e3703a7010d636fda448b6dd
We use the Dice score for our final evaluation on the extra testing data; note that none of the testing data is involved in the training process. For the comparison, we run five settings: Source only, where the model is trained only with annotated source data; Adv UDA, which represents the method of {{cite:3e4c54c2bac620af12a66f3701e00c73de1c7f0a}}; Box-Adapt without any label information from the target domain, which drops the losses {{formula:6afecfc2-c7ef-4e73-995d-4c2185364cb9}} and {{formula:b16e5bac-9adb-4023-b6ee-e31b41976962}} in training stages one and two (in this case, stage one of Box-Adapt is just the source-only model); our Box-Adapt stage one, which gives the results of our proposed method after stage one; and our Box-Adapt DA stage two, whose results are generated by stage two of our proposed method.
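For reference, the Dice score on binary segmentation masks can be computed as below; the smoothing constant is a common convention rather than something specified in the text.

```python
# Dice = 2|P ∩ T| / (|P| + |T|) on boolean masks, with a small eps for
# numerical stability on empty masks.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```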
r
8de1198059cad1ca8a478c794fce8fa7
In this section, we explore the performance of the detectors derived in the previous section through numerical examples. Consider a narrow-band MISO communications system with {{formula:c9358cb9-8b7c-4ef5-9df6-8e9bc8327571}} transmit antennas and a single receive antenna, whose carrier frequency is 900 MHz. The duration of each data packet equals a sub-frame of 5G New Radio (NR), i.e., {{formula:ba51e528-9861-4b25-8efb-19a52daa86a6}} ms. A block-fading channel is assumed, such that the channel remains the same during one data packet but generally varies from packet to packet {{cite:514686b0ee98bda483a9261b9f26b22620b7abc7}}. We take the training sequence {{formula:19a6983a-0670-453f-9aca-937666ddfedd}} in (REF ) from a normalized discrete Fourier transform (DFT) matrix of dimension {{formula:0bb038a2-6fd8-43d4-a0b7-2b58233ebf11}} , e.g., {{cite:514686b0ee98bda483a9261b9f26b22620b7abc7}}. Specifically, {{formula:b08e436b-53b4-4a45-be74-489c8deed4f0}} is chosen as the first {{formula:635ed9e7-4de6-41c7-a38e-b924d969c3ee}} rows of this DFT matrix. Also, the SNR is 10 dB. For the parameters used in Algorithm REF , we choose {{formula:6149d029-92f4-41c9-a185-c75cf5f59332}} and {{formula:1fa90cda-dddb-4d8e-b972-b3c5fb2ce39c}} .
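A sketch of the training-sequence construction described above follows: take the first L rows of a normalized (unitary) N x N DFT matrix. The values L = 4 and N = 8 are illustrative assumptions, since the actual dimensions are given by the formulas above.

```python
# Rows of the unitary DFT matrix are orthonormal, so the first L rows form a
# training sequence with S S^H = I_L.
import numpy as np

def dft_training_sequence(L, N):
    n = np.arange(N)
    dft = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT
    return dft[:L, :]

S = dft_training_sequence(L=4, N=8)
print(np.allclose(S @ S.conj().T, np.eye(4)))   # True: rows are orthonormal
```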
r
178e105ae3c20b1de15287cc31f9ec28
M101 was chosen for this survey because its nearby distance enables its properties to be studied in detail {{cite:29d51e6f39c83cab47dd06780c0cf6d8954d54c5}}, {{cite:2a48cdab15f19be1459a9bebc2b9d7b119f7fda5}}, {{cite:f7c4bb9116c34659b9648b5dcf6e9d2bff481b23}}, {{cite:5dc0c7e973cd0eee126b2ed73ff68c6464e98837}}. M101 is also currently interacting with its satellite population, likely its massive satellite NGC 5474 as evidenced by its asymmetric disk {{cite:cfac558f59419aa15d45106da80bb2d721627a83}}, {{cite:b1a6695d1c31b3249588ca8a861cad54bcfb71ad}}, {{cite:cd283cb981093c895fd65a4c69a87ca87e25806d}}. Given that interacting systems often display extended or outlying star-forming regions, the M101 Group could be an excellent case study to explore the conditions under which extended intragroup star formation is triggered. Additionally, M101 has been found to harbor ultradiffuse galaxies {{cite:45c11040efd70f776300d4fab5845f6a8fdd1433}}, {{cite:84f65bbcda017462de6c8b573b598c84dd4672f9}}, {{cite:000916ac3094e41e6d3b8affc249b5df896890dc}}, {{cite:522fd35781f8ffbc5ae33f19fc8e102161511095}}, {{cite:f013f2476ed546b5b67b89e91826aa908dd1cc7a}} and constraining the star-forming properties of these objects will aid in understanding star formation in low-density environments.
i
f2318d156126524c850fe0cd3c556baf
In this section, an overview of spectral difference methods for the fully compressible Navier-Stokes equations for an ideal gas is given. We begin by presenting the governing equations, followed by the discretized solution evaluation methodology specific to unstructured hexahedral meshes with straight-edged elements. We note, however, that such methods can be easily extended to generic-shaped meshes containing tetrahedra and prisms {{cite:84702bc390d5901233abfad0ddd437d84801cd28}}, {{cite:f87d3e3a8dad1a8520ab636b1faa807535d3c5c2}}, {{cite:597712a86754a5b5bfd8ef37b15d7f778c238b29}}. In subsequent sections, we discuss the adaptive order refinement strategy for multidomain spectral difference methods.
m
6f70afd7f69dee3332145fa7a95c321e
Imitation of human behaviour was central to our training approach. Unlike other domains, like language {{cite:db51296f9fe0254250726391baf0e4deae365880}}, {{cite:78cceed3db55ec9fb3240d8cb281bd251d28e7e3}}, {{cite:1c3ae557c54f7604190b95402d577597f00a25bd}}, human behavioural data in 3D simulated worlds often needs to be collected and curated from scratch. Some previous work has collected human (child) multimodal behavioural data {{cite:cfc7627dd89801048225bcb6ed40047348a02d43}}, {{cite:eb531746209dd7f0ad5bedb059a40d6419f70f13}}, {{cite:b7ccbcd34e1488b0df14b838cff937bb88aff82a}}, but has not used it to train artificial agents. Social learning, imitation, and mimicry are prevalent in the animal kingdom {{cite:914007d2048ea0edeacfd4746285099a06eb645a}}, {{cite:42d4e6615ba0c7901667b70f4b31c6057b1d92d7}}. In humans, infants naturally imitate both the language and actions they encounter {{cite:4780c358c2a2d5175cce31e59afe961a84a6a8f4}}, {{cite:8ac0874237247419ca99daa7af05b2d2386e9490}}. However, in AI, imitating multimodal, embodied behavioural data is especially challenging because latent in all trajectories of human experience are intentions and goals. It is not clear whether we currently possess the algorithms and models that can interpret implicit causal factors of human behaviour {{cite:667220ef6a46b4f27b8067894ebfd91c4e41f756}}, and hence can engage in consistent and coherent naturalistic interactions. Nevertheless, as in previous work, we observe that some techniques, such as dataset and model scaling, reliably improve performance as measured by human evaluators. In addition, a number of architectural improvements proved crucial: self-supervised learning that gauged whether cross-modality representations “matched” or not, and hierarchical control of motor actions, both substantially improved agent performance.
d
af282175d57c9e5a6d8a9bfea6fdb984
Spin singlet physics in 2D systems has played a large role in cuprate HTS, and similarities of nickelates to cuprates have encouraged extension of this concept to superconducting nickelates. The dominant concept has been that of the ZR singlet {{cite:3b4ed36dfc5a17b2cb66ff48df0dabbaa3ecc2c4}}, building on the characterization of undoped cuprates as CT insulators rather than explicit MH insulators. The latter would involve only {{formula:a29fa271-01cd-4a7a-ac0e-75a1614131fe}} orbital reoccupation by doping or low energy excitations, while in CT insulators doped holes will lie primarily on neighboring O {{formula:8bd55d8b-57da-496a-b192-b0b0d52cab97}} orbitals. A linear combination on four O sites will form a molecular orbital of ({{formula:d4da94e2-cd92-4f87-8eae-3d6fed9d816a}} ) symmetry, and a hole in this molecular orbital will pair with the central spin-half Cu{{formula:dfe44f40-4034-4add-a9ce-70e442b54cfc}} to form a spin singlet, the ZR singlet. While most studies have characterized nickelates as having more Mott than CT character, the ZR singlet concept is sprinkled through the recent nickelate literature. The infinite layer nickelates are more often considered from a Hund-Hubbard viewpoint {{cite:faf9811232ac4cad9ef40b4698a905ff3bc008ab}}. We find that electronic structure parameters suggest that BNOAS lies in the vicinity of a charge transfer to Mott-Hubbard (CT-MH) crossover, making the basic `physics' harder to pinpoint.
d
d02b7b8119b217c4bbeba599a25781e1
This paper focuses on whether the actual sequence of updated strategies converges to an equilibrium, i.e., last-iterate convergence, which is necessarily a stronger notion than average-iterate convergence. A series of optimistic no-regret learning algorithms has been proven to exhibit last-iterate convergence {{cite:cfe44819735696e68374b18500d6931ce1e5a800}}, {{cite:a0115f2b0f3ac3a3477d3e5860ecfae516f3d722}}. In particular, the optimistic MWU (OMWU) algorithm is guaranteed to converge to a Nash equilibrium at an exponential rate {{cite:0c682612052fbe5fcfcc7d0e0fe26b0ff68ed713}}, {{cite:82452870196d7c70b6cf916e0085d6cb0b39b81a}}. However, it is required that players observe the exact gradient vectors of their utility functions at each iteration, which we call full-information feedback.
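For concreteness, one step of the OMWU update under full-information feedback can be sketched as follows: the current utility gradient is weighted twice and the previous one subtracted before the multiplicative-weights renormalization. The step size is an illustrative assumption.

```python
# One optimistic MWU step: x_{t+1,i} ∝ x_{t,i} * exp(eta * (2 u_t,i - u_{t-1,i})).
import numpy as np

def omwu_step(x, u_now, u_prev, eta=0.1):
    # x: current mixed strategy; u_now, u_prev: utility gradient vectors.
    logits = np.log(x) + eta * (2.0 * u_now - u_prev)
    w = np.exp(logits - logits.max())      # stabilized exponentiation
    return w / w.sum()
```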
i
bdd1c9672cce4ba859088be0f581d51e
But sensitivity analyses predicated on such sensitivity parameters suffer from a number of shortcomings. First, users of grid-based approaches must explicitly state the granularity, range, and even signs of plausible values for the separate (conditional) associations between the hypothetical confounder and the exposure, and the outcome. Reference values are sometimes estimated or postulated based on the measured confounders' (conditional) associations with exposure (and outcome) via an empirical “calibration.” A specific measured confounder is selected as a reference based on, e.g., its (standardized) estimated regression coefficient {{cite:3243e6a97a9a4805c59120953b73971f0cf03674}}; or its marginal effect size on the outcome {{cite:e8d953e5b798d97c91fe68b814b90fd97371ac9b}}; or its relative risk on the (binary) outcome conditional on other measured confounders {{cite:323fb1b0175071bdc5ef9aebb2ac6001f657ec38}}; or its contribution to the variation in the outcome {{cite:594817c03607d94ef98664288dd2583f44239c0d}}. Bayesian sensitivity analyses for unmeasured confounding {{cite:717bb9c75ab0f57d54595e6f061b97f0eeacb63f}}, {{cite:f1b71bdb6858ae817d961007ee9abfec147ee193}}, {{cite:17ab37e4a88d907663d6f0ee52a7ca60ee3f279b}}, {{cite:27c40c47c87ff392b2e56dc7d54ef95ec6198f81}} avoid such user calibration through the application of Bayes Theorem to obtain a posterior distribution for the sensitivity parameters, but require more restrictive parametric assumptions about the outcome model. Second, with the exception of {{cite:03507242d9409c66e137df7a857559d616d1ea95}}, {{cite:e5276486d9e4f9368907edf576d2550f3db3dd8b}} and {{cite:d383c31df188660207a7682e336254bf36819eb9}}, most sensitivity analyses that evaluate the associations of a hypothetical confounder with exposure, or outcome, or both, (implicitly) require certain assumptions about whether the confounder is continuous or dichotomous, or the distribution of the confounder, or whether the bias is away from or toward the null. Third and most crucially, sensitivity analyses based on such sensitivity parameters can potentially lead to scientifically meaningless and logically incoherent conclusions {{cite:31d3120a873e5a2057b578b461b143dddeb20ee5}}. Consider the following simple example where the (true) probability of receiving a binary exposure ({{formula:76e076bd-3f83-43fd-af8f-6f1659f67343}} ) depends on a logistic model with main effects for three confounders, {{formula:d2e4fcbb-7f43-4d29-a126-bd27c241c1dc}} , and {{formula:9063d885-10d7-425e-863d-5916e0daa901}} ; e.g., {{formula:6fdee40f-8c73-4a50-9127-c99c94933ff2}} , where {{formula:7bc44abe-3f6c-4d77-a7e5-f1362865a322}} . The interpretation of the conditional odds ratio of being exposed due to {{formula:2c825275-72fe-47f6-85d4-437111d31c70}} under the true model, {{formula:5cee09ff-4de0-4199-a508-20469f4ba1e9}} , depends on the values of the other confounders {{formula:ddb00e11-fe5d-4d5a-bf48-c276b10a7759}} and {{formula:c7ec7573-5d4a-4357-990d-bc8093daabcb}} , even in the absence of any interactions. Suppose that the researcher could only collect data on {{formula:1508954f-82e3-4951-8462-4605310c2c57}} and {{formula:d08d6b71-b91d-496c-b1f7-006b78e1c9d9}} , so that {{formula:f8a88c08-7a93-4adc-873d-5950bcbcbe94}} is left unmeasured; the researcher is thus restricted to fitting a different treatment model, e.g., {{formula:41090a2c-4b9f-4aa0-b958-7bd4fd0f3007}} .
The interpretation of {{formula:13cbf6ce-eb5a-4171-a45e-495f32396f88}} under the fitted model thus depends on only the value of {{formula:deaa48d0-53a4-4d91-a8d1-b579e133f7cf}} . It follows that the interpretation of the sensitivity parameter, {{formula:ef7d0f05-4110-4aa1-a05a-f9c934e8f61d}} , is incompatible with both {{formula:fb57db07-bec2-4363-ba02-ba131c9cad38}} and {{formula:e45cc36f-783b-49da-b1a3-76912e444832}} due to non-collapsibility {{cite:20e30fcfdefb4aa7052e80ddd88f8e6883dd81fc}}. Of course, one may consider a different treatment model instead, e.g., {{formula:267f138a-91af-4407-8ae0-150281ffbe6b}} , so that the interpretation of the sensitivity parameter {{formula:480edbaf-71b6-48cc-b6c8-0a54583d1a20}} that encodes the conditional odds ratio due to {{formula:d42ad41a-b08b-443a-92c6-734d7d0a0e1c}} given {{formula:6f98b791-9022-4b55-95d8-d3426d4f2e9d}} is seemingly comparable with {{formula:00521642-1466-4f78-af19-2b5cb2b51154}} . But there is no guarantee that the range of plausible values for {{formula:bcd0ddd9-8069-4f20-9fb5-c45941fbd694}} will necessarily be narrower than that for {{formula:9a3c4ca0-1707-46ee-ba16-5578c4e4b7db}} , even though the former is based on an exposure model that adjusts for both {{formula:7435f72c-2d10-42e2-8012-ce90fce24731}} and {{formula:a5d43eed-b92a-4f69-b824-5239538cd834}} thereby reducing the extent of biases due to unmeasured confounding. It can therefore be difficult (or impossible) to conceptualize sensitivity parameters that retain the same meaningful interpretation regardless of the measured confounders being adjusted for.
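The non-collapsibility phenomenon driving this example can be reproduced in a few lines of simulation: omitting a confounder U that is independent of Z still attenuates the logistic coefficient of Z. The coefficient values below are illustrative assumptions, not those of the text.

```python
# Non-collapsibility of the odds ratio: dropping an independent covariate U
# from a logistic exposure model shrinks beta_Z toward 0.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000
Z, W, U = rng.standard_normal((3, n))                 # mutually independent
p = 1 / (1 + np.exp(-(0.5 * Z + 0.5 * W + 1.5 * U)))  # true exposure model
A = rng.binomial(1, p)

full = sm.Logit(A, np.column_stack([np.ones(n), Z, W, U])).fit(disp=0)
reduced = sm.Logit(A, np.column_stack([np.ones(n), Z, W])).fit(disp=0)
print(full.params[1], reduced.params[1])   # beta_Z attenuates when U is dropped
```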
m
2b9cc8d02fb1ad8102e7c7afe7d9a1be
Deep learning models are widely applied in medicine. However, these models require more interpretability to simplify the steps of the prediction process. Despite advances in XAI models, there is a pressing need to consider XAI together with DA and FL in medical applications in order to achieve the goal of personalized medicine. For example, DA-based XAI models have the ability to improve performance in computer-aided diagnosis {{cite:c0849f9950cda10f7952fd0e8e12f3aa0219fc2d}}, but the same model will not be able to provide feasible performance on multi-institutional data. For this reason, FL is a good option for data sharing under the condition of protecting data privacy {{cite:ad47bc16eb45ab1b79331c79ad646936b423f4ca}}. However, there is an additional issue related to intra- and inter-site differences of distributions, which leads to a domain shift between sites. Recent work proposes to integrate Unsupervised Domain Adaptation (UDA) into the FL framework {{cite:a230e61b0985df189ac83d9003b868055211331f}}. For example, UDA guides the model to learn domain-agnostic features through adversarial learning or a specific type of batch normalization {{cite:c4dd264edec173ee093a550ff493b1428513a164}}. Furthermore, DA techniques such as a mixture of experts (MoE) with adversarial domain alignment can be used to improve the accuracy of different sites in the FL setup {{cite:61b509edbceea82b77208d9d5d38914e7a19fcb3}}. These techniques are presented below in three subsections related to DA, XAI and FL.
m
0240dab29a5c7b9988056485233f9030
After the decoding, the BS updates {{formula:7272d3ca-4d7e-4ff6-b148-53006bfa8294}} to {{formula:0d4d2b55-748a-4d5e-a7fb-ed4ca5765c2d}} based on its optimizer, and iterates the above process until Eq. (REF ) is fully maximized. This paper updates {{formula:810a9caa-0c05-4379-ac7d-c6ef2592f54a}} based on multi-start local search {{cite:54aae2706f942dbb198b8ee805ad0975d08ded14}} with the Nelder-Mead simplex {{cite:827db84e958eae4b9e412967c45b4025f392c6a3}}. The multi-start method repeats the local search from differently initialized hyperparameters in order to find a better solution. The Nelder-Mead simplex is a heuristic optimizer that finds the minimum of an objective function in a multidimensional space and can efficiently search for a local solution based only on the output of the objective function; the combination of multi-start local search and the Nelder-Mead simplex can therefore tune the hyperparameters efficiently while avoiding local optima.
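A minimal sketch of this tuning procedure follows, restarting scipy's derivative-free Nelder-Mead simplex from several random initial points and keeping the best local solution; the objective is a toy stand-in for the actual hyperparameter objective.

```python
# Multi-start local search with the Nelder-Mead simplex.
import numpy as np
from scipy.optimize import minimize

def objective(theta):                       # hypothetical hyperparameter loss
    return np.sin(5 * theta[0]) + theta[0] ** 2 + (theta[1] - 1) ** 2

rng = np.random.default_rng(0)
best = None
for _ in range(10):                         # restart from random initial points
    theta0 = rng.uniform(-2, 2, size=2)
    res = minimize(objective, theta0, method="Nelder-Mead")
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```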
m
a449ee8e65791fdf5e04de6ea4421037
Sparsity approaches have proven successful, but more complex models with additional structure have been recently proposed such as model-based compressive sensing {{cite:ceaed1ef1a283f04791c3b467faadc9e33d0ab1b}} and manifold models {{cite:495325f7ffc376cc845390d5c4bfc8d726de188e}}, {{cite:4701553ca145fd8c66f984a3a2a1f8b135052b40}}, {{cite:17089109fa153f6ce786f286a1ca98985bb44d7b}}. Bora et al. {{cite:616c98c966072bccaafcb18ec4d984e09caf8f1f}} showed that deep generative models can be used as excellent priors for images. They also showed that backpropagation can be used to solve the signal recovery problem by performing gradient descent in the generative latent space. This method enabled image generation with significantly fewer measurements compared to Lasso for a given reconstruction error. Compressed sensing using deep generative models was further improved in very recent work {{cite:356470d7a5bc2edbf502fb8aa04da0e6674090f1}}, {{cite:55cbe044c32c8fa685aadf8b44fcd5e4ecee7c01}}, {{cite:f1ec63a05c67323c9f1d6fd1934ba83f9c883ef3}}, {{cite:48711a9f879fe82da9d47fa593e605f33208e593}}, {{cite:50d01b0bd866e9a2370b9a0c977f54358bf049d5}}, {{cite:7e3b058268783f822bb25904d3b8c6b1a4a92b84}}. Additionally a theoretical analysis of the nonconvex gradient descent algorithm {{cite:616c98c966072bccaafcb18ec4d984e09caf8f1f}} was proposed by Hand et al. {{cite:447736f6eb916e16f5cb02cb08c4123f6f2a7c42}} under some assumptions on the generative model.
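A minimal sketch of the latent-space recovery procedure of Bora et al. follows: a latent code z is found by gradient descent on ||A G(z) - y||^2. The generator here is a random untrained toy network, and all sizes are illustrative assumptions.

```python
# Compressed sensing with a generative prior: descend in the latent space of
# a generator G to match the compressed measurements y.
import torch

torch.manual_seed(0)
k, n, m = 8, 64, 20
G = torch.nn.Sequential(torch.nn.Linear(k, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, n))       # toy generator
A = torch.randn(m, n) / m ** 0.5                       # measurement matrix
with torch.no_grad():
    x_true = G(torch.randn(k))                         # signal in G's range
    y = A @ x_true                                     # m << n measurements

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):                                  # nonconvex gradient descent
    opt.zero_grad()
    loss = ((A @ G(z) - y) ** 2).sum()
    loss.backward()
    opt.step()
print(torch.norm(G(z).detach() - x_true) / torch.norm(x_true))
```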
i
7c87e64043edf930fb526dc07a3a15b0
We train two models for each calculator: one with lab values and one without lab values. Missing values are imputed using {{formula:27298837-220c-4370-a684-8b2f17685f9b}} -nearest neighbors imputation {{cite:52d44b8f5601d102bb0a1b271703de87b0d39720}}. We exclude features missing for more than 40% of patients. We train binary classification models for both risk calculators, using the XGBoost algorithm {{cite:b87ca30c98a977cf5fdbfd575de2e865ca387be1}}. We restrict the model to select at most 20 features, in order to make the resulting tool easily usable. We use SHapley Additive exPlanations (SHAP) {{cite:6a113dd45981817be2f437e333759990abab5f9f}}, {{cite:aebd7bb9a1aa9162fc31e2c0083658c2dd9f46d3}} to generate importance plots that identify risk drivers and provide transparency on the model predictions.
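A sketch of this modeling pipeline follows, combining k-nearest-neighbors imputation, an XGBoost binary classifier, and SHAP values; the feature matrix X, labels y, and hyperparameters are assumptions, not the study's actual settings.

```python
# kNN imputation -> XGBoost classifier -> SHAP values for risk drivers.
import shap
import xgboost as xgb
from sklearn.impute import KNNImputer

def fit_risk_model(X, y):
    X_imp = KNNImputer(n_neighbors=5).fit_transform(X)   # impute missing values
    model = xgb.XGBClassifier(n_estimators=200, max_depth=4)
    model.fit(X_imp, y)
    shap_values = shap.TreeExplainer(model).shap_values(X_imp)
    return model, shap_values
```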
m
1d6b735bf95cdeae4036f76b148fd50d
As regards the identification of the absolute and spatio-temporal analysis, the 1D velocity profile extracted from the baseflow at several streamwise positions is considered parallel, as in the local stability analysis {{cite:2c654b67a8e90ce13ef28b7e8106710c972f1978}}. The resulting parallel linear equations are then discretized using a pseudo-spectral method employing Gauss-Lobatto-Legendre collocation points. The saddle point in the complex wavelength space {{formula:6291103b-1de0-455f-823f-ea60b3319d46}} of the local angular velocity {{formula:09b244e1-efa2-4c6c-90e0-4be0b06fe4e0}} is then localized using a Newton iterative method.
m
3633f18973d8f66d0219e8a7ab420e25
Blind-spot networks offer a solution to the predicament of requiring noisy-clean pairs of training images for deep learning denoising procedures. Previously utilised in applications on the likes of natural images {{cite:ca35a2fb237ff0099fc487cc0838d4aa72ca8ca6}}, Computed Tomography (CT) images {{cite:72e0e8f0a53b2915a83e5cc37b20db62b4c86139}} and Synthetic Aperture Radar (SAR) images {{cite:514a223e35cd4f0a6599037cafdaff4f4fedd41d}}, we have shown that under the right circumstances, N2V can also be a powerful denoiser for seismic data. N2V relies on the assumption that noise is statistically independent between pixels, or, as in the seismic case, between each spatio-temporal sample. In reality, noise in seismic data is always correlated to some extent. Despite this, we have shown that, whilst providing the best results on WGN, N2V can still efficiently denoise both synthetic band-pass filtered noise as well as recorded noise from a field acquisition. It was observed that for seismic denoising the number of epochs had to be significantly reduced in comparison to the initial N2V applications, whilst the number of active pixels had to be increased. The reduction of epochs hinders the network from learning to replicate mildly correlated noise whilst still providing adequate training time to learn the dominant signals in the data. Increasing the number of active pixels, on the other hand, acts as a regulariser on the training procedure by introducing additional corruption into the training dataset.
d
0be873243adc21f67c94cfcb5ec16332
In this paper, we proved an operator extension of weak monotonicity. It is interesting to note that our argument also leads to yet another proof of strong subadditivity {{cite:5dfd4a118bfdae8a90e5de49c0d38d6abb3e7bd2}}. What is notable about this new proof is that the strong subadditivity is proved by first proving the weak monotonicity, not the other way around. The key observation was Lemma REF , which followed immediately from constructions of certain isometries. We leave it as an open problem to explore the consequences of this simple but powerful observation.
d
017cf7245382924f9d479fa93a50cefb
The neutron pair-distribution-function analysis revealed that the local CDW fluctuation occurs at {{formula:ed074dc1-6d63-4ea4-aaf7-218bbf0e12fb}} and disappears upon further cooling {{cite:d827f6835318f7ee5ac85f0e1c9ea15322aa62ea}}. The temperature evolution of the electronic structure is shown in Fig. REF . The photoemission intensity along {{formula:04b74fb1-ecb0-421e-b96c-bbb02ff455b9}}  - M (cut #1) is displayed in Fig. REF (b); we found that there is no evident spectral weight change with temperature within our energy resolution, except for some thermal broadening effects [Fig. REF (d)]. Along M - X (cut #2), as shown in Fig. REF (c), there is a spectral weight suppression with increasing temperature for {{formula:c96a6813-7522-4cf9-9b96-0c0175dbe2ab}} , {{formula:fb936a84-f1b9-41bb-82c3-69a92f33df4f}} , and {{formula:41cc1d86-a6d9-4246-90d5-ed883e06d33c}} , as indicated by the up arrows in Fig. REF (e). However, the spectral weight suppression rate does not alter noticeably across {{formula:550ce519-bad7-4ef4-9c6d-0ba8a667705a}} . A similar spectral weight change was observed in Sr{{formula:758d2764-09b4-44b9-af37-471d65d77434}} CuO{{formula:e01eeff5-6f39-4614-a68a-3de80b45de4e}} Cl{{formula:af86bdb4-39fa-4ed2-84c5-a29cf5b96aec}} and BaTi{{formula:af01c50e-dc19-4fad-943f-338fb243f5a3}} As{{formula:2fd0636d-9f75-4d7b-80e4-c6fc776ab7c4}} O, which is explained by strong coupling between electrons and magnons or phonons {{cite:e4ebf499cf9c279d125902315ecf0cd75af7d583}}, {{cite:c8c55cdfa5fe9e2d067ecdd4498298650528549e}}. For BaTi{{formula:0b0c9ffd-2497-4f51-9d88-4b282451e45f}} As{{formula:9a44fb4f-f325-40e0-8372-bc9d7fe88b2d}} O, a change of the spectral weight evolution rate was observed at the CDW ordering temperature. The absence of an anomaly at {{formula:5b909df3-ff41-4c7c-82ff-44b4fcd602b2}} here is likely due to the fact that the CDW fluctuation in KNi{{formula:01709763-74a9-475b-9df7-5090c649c5de}} Se{{formula:00cca306-dd19-43c1-a7ae-ab02b774abd3}} was reported to be dynamic and/or entirely uncorrelated between unit cells within the {{formula:30a52970-ba0e-4491-94c4-d137f00e9794}} plane, in contrast to the coherent CDWs observed in structurally related compounds such as NbSe{{formula:45960f76-105e-40ae-9de2-0dc27b7bdc5a}} {{cite:a80898ed6dae83e3ada2bc4c2cf55b64c04d4bbf}}. Nevertheless, since we observed spectral weight suppression at nested Fermi surface sections, it may indicate that enhanced electron-phonon interactions in these sectors are responsible for the local CDW fluctuations.
d
05bbe0741cf4a5634a568b2855b8deb7
Graph Convolutional Networks (GCN) {{cite:67a68b79e16a28bbb1173888b61f35b353a0950f}} borrow the concept of convolution from convolutional neural networks (CNN) and convolve the graph directly, using the connectivity structure of the graph as the filter to perform neighborhood mixing. GraphSAGE {{cite:ae5458bc3dc1b4ffe0cf0a7147b6f98df9b9db84}}, unlike GCN, is an inductive learning algorithm that learns aggregator functions which can induce the embedding of a node given its features and neighborhood information.
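A minimal sketch of a single GCN layer in the sense of Kipf and Welling follows: symmetric normalization of the adjacency matrix with self-loops, neighborhood mixing, and a linear transform; the sizes and the choice of ReLU are illustrative.

```python
# One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import torch

def gcn_layer(A, H, W):
    # A: (n, n) adjacency, H: (n, d_in) node features, W: (d_in, d_out) weights
    A_hat = A + torch.eye(A.size(0))               # add self-loops
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization
    return torch.relu(A_norm @ H @ W)              # neighborhood mixing + transform
```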
m
f9c3607f736227e4a3fb80ced87e9d52
These properties are reminiscent of the behaviors sought by the line of research on Dynamical Movement Primitives (DMP) {{cite:49ada89df42c9356679174a31324e8532a620733}}, aiming at modeling attractor behaviors for autonomous nonlinear dynamical systems. The essence of the approach is to define a simple dynamical system, and tune it so that it exhibits prescribed attractor dynamics by means of a learnable forcing term. The online control performed by the prediction error minimization process in our model gives rise to similar attractor behaviors when confronted with external perturbations.
d
24ee871bffe9b9616c15826d8ab8834a
On the other hand, determining the order of {{formula:99441b03-326d-4b4d-8d32-fc52f243e7e3}} -cables of a torsion knot is much harder. The only known result of this kind is given recently in {{cite:e51bd62cf731dcfdb5d994501e6b49b3af78b766}}. They showed that when {{formula:3c337fe7-8beb-4e55-9345-3e792a078383}} is the figure-eight knot, the set of cables {{formula:e7fb595b-a247-4abf-9f9a-a1a3c9f4333e}} is linearly independent in {{formula:6749be29-7aee-4307-a7a1-a463c8aaf091}} using involutive knot Floer homology {{cite:702be31ec819c01f80f216382907dd7b46f8121f}}. (This also gave a first example of an infinite order rationally slice knot.) In this article, we expand this result to a much larger family. Roughly, we show that “half” of the torsion knots have infinite order once cabled (compare it with {{cite:11d3623f9ca7d8300eea4422b3407054720b7754}} where they consider cables of fibered knots). Let {{formula:78266a3f-9e13-44ac-b4eb-668b6b6ddc26}} denote the iterated cable of {{formula:fb70e018-0019-469e-ad7e-a770f74ffd45}} .
i
7793be3e628165a51d284b3897cee588
In this section we describe the performance of the Pre-Trained (PTR) model on the datasets described in section REF . The model is pretrained on the time direction of subjects from the Human Connectome Project (HCP) {{cite:c486b97680eeba654d02bf6c833407017b6d9e93}}. The pretrained model is then further trained on four datasets for the downstream task of classifying three abnormalities, namely autism, schizophrenia and Alzheimer's disease, against healthy controls in the respective datasets. Schizophrenia is a severe psychotic mental disorder; its symptoms include diminished emotional expression, lack of motivation, paucity of speech, etc. Alzheimer's disease is also a neurological disorder, a progressive form of dementia that leads to a decline in memory functions and deteriorating social and behavioural functions. Autism leads to severe impairment in social-emotional reciprocity, affects non-verbal communicative behaviors, and also impacts the ability to understand and maintain relationships. Interestingly, all three of these mental disorders have common clinical features, as they present some kind of impairment in cognitive ability {{cite:7248f2c46ac309f07032bba8025c31505ecd41d5}}.
r
209ec5a6eab4c882e55b0b1d7610d001
In this paper, we have proposed a new kind of Chandrasekhar transformation for the Teukolsky equation in Kerr spacetime, which reduces to Chandrasekhar's original transformation in the limit {{formula:ec3d1f7a-47bf-430d-a7ef-a8aba63b8426}} . The original Chandrasekhar transformation was obtained from the point of view of a gauge transformation for the linearized Einstein equation. We expect that our transformation has a similar origin, and it would be interesting to find it. One can also see that there could obviously be other transformations arising from different choices of the variable {{formula:32ca3967-0bc2-4e83-8745-12f30582a766}} , the differential operators {{formula:430b6445-1723-4e70-b59b-767226dc18e6}} , the multiplication transformation, and the functions {{formula:56df38ba-1043-4923-8d32-d8f6ce62b510}} and {{formula:081c1061-18a7-4e89-b8ce-11af89987291}} . Well-known examples are of course the (Chandrasekhar-)Detweiler equation and the Sasaki-Nakamura equation {{cite:82904124599b6dc6e27d8eb6bfe07156b5650d67}}, {{cite:c06373e92bcbcb6dffb07836f7238bf5401d91d3}}, {{cite:e04ab20d25ed6fe9fc7aaa240ceaa3a93a4bcfc1}}, where the tortoise coordinate (REF ) is used. Instead, here we have used the coordinate {{formula:649e5d00-2ba5-4674-9d8e-68e8daefa6bf}} defined by (REF ) from (REF ), in order to keep the structure of the singularities. We have obtained the differential equation (REF ), which also reduces to the Regge-Wheeler equation in the limit {{formula:aeed99e2-ff81-4d51-8698-6a72e4be9d0f}} , as do the (Chandrasekhar-)Detweiler and Sasaki-Nakamura equations. The extension to the case of different spins (scalar waves and electromagnetic waves) {{cite:3f2d60d31bf6981a10665f43afc8e67840c51407}}, {{cite:814a2e8a7070107e81f38731759a4894349ec394}}, {{cite:e03fcf05a0469ad38791a5b1ae50aed689d8626c}} and an explicit comparison of the quasinormal modes {{cite:87c7b404787aefe409a06eedb60e065d07eb8cf9}}, {{cite:ecc5e8b08738a495f9aca474bbd90f1217015fd7}}, {{cite:3e527c3f9536ce38a30d9d6340a1047879593a06}}, {{cite:9e8ec50353215b1c09d69b0b1da34815219ffafc}}, {{cite:bcb801d0212f7815aaed7629d896bd4ac13367ef}}, {{cite:d901c2fdf9074643c6c822f523f0267485184cff}}, {{cite:9ebee89f1c12f01fbd9d74caa8083e9711c713e3}} would also be interesting.
d
7de0cd79c79908c1da3d5da9d5556f34
CIFAR10 images were also rescaled to the range {{formula:e590a74a-74b2-4e36-9200-1197d0a6e1cd}} and normalized using the mean and standard deviation of the training data set, and augmentation methods were applied during training. For the CIFAR10 data set, the ResNet-9 {{cite:29738695b1f552acea578664aaa65bf16e8320df}} network architecture was used with a CELU activation function for all layers. The lambda value was chosen in a manner similar to that for the MNIST data set. Details of the training hyperparameters are given in our Supplementary Materials section. The model was trained using the proposed loss function (see Eq. REF ) and the generated unlabeled data set. The training and comparison of results were conducted similarly to the MNIST experiments. Table REF shows the results obtained using Input Gradient Regularization {{cite:2dadd503331d3bafffaa93b69f1dede1d27a7607}}, Jacobian Regularization {{cite:35a7870855a03474b7340b30ada528195f4a02e5}} and the method proposed in this paper. The {{formula:414078ca-4640-458f-a424-fd5aae4ac4b4}} values achieved for the proposed algorithm, and when combined with adversarial training, are larger compared to models trained with no defense. Regularization methods and adversarial training also show larger {{formula:cb2bb58d-b48a-48a6-a9fe-0961a877bc74}} values. The test accuracies achieved were slightly lower using our training framework. We believe the prediction accuracies are traded off against achieving a smooth SoftMax surface outside the support of the data distribution in the high-dimensional input space.
r
9bf3ba888dc5700e470f239b3bacd29c
Mahalanobis distance (Maha) {{cite:03691e4d3da8903ecd8b7c2f83427c3375e49685}}, which measures the distance between the test input and the fitted training distribution in the embedding space. The training distribution is fitted using a class-conditional Gaussian, and the uncertainty score is the distance. Relative Mahalanobis distance {{cite:9a97f1fa6eee003b30cd402902845a0e23549ef4}} is a modified version of the Mahalanobis distance which corrects for the background confounding effect using another Gaussian distribution fitted on the entire training data, ignoring class labels. Entropy of the softmax probability: high entropy suggests high uncertainty, so the uncertainty score is the entropy. Maximum over the logits (MaxLogit) {{cite:ebaa3173f80335eed74c49dd7b5bb79e8de4a069}}, which uses the maximum of the un-normalized logits as the confidence score; the uncertainty score is then the negative of the MaxLogit.
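A sketch of the (class-conditional, shared-covariance) Mahalanobis score described first in the list above follows; Z denotes training embeddings with labels y, and the test-time uncertainty is the minimum distance to any class mean. The small ridge on the covariance is a robustness assumption.

```python
# Fit class-conditional Gaussians with a shared covariance, then score a test
# embedding by its minimum Mahalanobis distance to any class mean.
import numpy as np

def fit_gaussians(Z, y):
    means = {c: Z[y == c].mean(axis=0) for c in np.unique(y)}
    centered = np.concatenate([Z[y == c] - means[c] for c in means])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
    return means, np.linalg.inv(cov)

def maha_uncertainty(z, means, cov_inv):
    dists = [float((z - m) @ cov_inv @ (z - m)) for m in means.values()]
    return min(dists)          # larger distance = more uncertain
```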
m
6f0da43266c48dc5b715c684a64146de
In Fig. REF we illustrate some of these properties by using DMRG {{cite:6d89e6bd94a9c51128433d11a86d3b1f6d8d5a48}} to determine the ground state and its correlation profile for several open-boundary quasi-1D Hubbard lattices with {{formula:6b90ce26-f2a1-45ab-8c2c-79bec4fa8517}} to minimise any boundary or finite-size effects. To aid our analysis we introduce the distance measure {{formula:75a99ce0-d248-46f5-8b2a-efa728e00d0d}} which is the minimum number of edges (bonds between pairs of sites with non-zero {{formula:7ca39790-3c1a-43b6-9c93-d4bf7aa1a229}} ) that must be traversed to move between sites {{formula:411a31b9-71a7-4508-9fda-ae086af7be5b}} and {{formula:e7bad912-b6d5-4c69-bd27-01d12cacb0ae}} . In graph theory terms, {{formula:3888ee31-d6cf-4299-a10f-7d43c930bac5}} is the shortest length path between {{formula:f3876c66-a85d-41d1-a286-e741fbbc744b}} and {{formula:abb13604-e586-4cd9-b2a5-44c2ecd37594}} . Using this quantity, we can define the distance-dependent correlation function {{formula:aa60fd23-e927-4290-b7be-8c99f6c4b0f1}}
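The distance measure d(i, j) defined above is simply a shortest-path length on the lattice graph, which can be sketched with a breadth-first search; the adjacency structure is an assumed input listing, for each site, the sites connected to it by a non-zero bond.

```python
# Minimum number of edges between sites i and j via breadth-first search.
from collections import deque

def graph_distance(adj, i, j):
    # adj: dict mapping each site to its neighbours (sites with nonzero bond)
    dist, queue = {i: 0}, deque([i])
    while queue:
        u = queue.popleft()
        if u == j:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None                 # sites are disconnected
```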
r
2c8f1bc0d34676abbbd1b370f50c7f96
Theoretically, in addition to {{formula:75bb8e0a-1c69-4eca-bce4-0e4d2b7d7a79}} , it has also been possible to obtain the spatio-temporal behavior of different thermodynamic quantities in the hydrodynamic regime. The variation of density, pressure, temperature, and velocity with spatial distance, {{formula:758ae57b-f478-4d7c-aaa1-59199e0fe536}} , and time, {{formula:db199e58-6389-4e9b-b687-b00dd0dc56b7}} , is described by continuity equations for mass, momentum, and energy. These hydrodynamic equations correspond to the Navier-Stokes equations. In the scaling limit {{formula:632c14aa-60a6-4ac5-9198-7f9c8f495b2f}} , {{formula:0f221fa7-eeba-4222-93f4-4754a3fdcaf7}} , keeping {{formula:9930f99d-ffce-4182-95b2-609c454167f7}} fixed, and after appropriately non-dimensionalizing the thermodynamic quantities, it was shown that the heat conduction and viscosity terms vanish and hence can be dropped. The Euler equations, resulting from dropping the heat conduction and viscosity terms, were solved exactly in three dimensions by Taylor, von Neumann and Sedov {{cite:6041e7c17a1f3914f49d87844562d01be8b91901}}, {{cite:78ec9aa60bb6cff230ac1565019e24b26af690d1}}, {{cite:2eabe6e819cf1b81f7e5fb3740253d0c95d7d9c0}}, {{cite:fb2ba0ac4cad41c4a19e0c7434be24643b8db9fa}}, {{cite:2d64937750fb30521a54e43501e70068cfff321a}} to obtain the scaling functions analytically, and we will refer to these self-similar solutions as the TvNS solution. The significance of the TvNS solution lies in its wide applicability. For instance, it has been used for modeling the evolution of supernova remnants in the adiabatic stage {{cite:2b1c78bd547bb892d66d551847d5cc9480ecf87b}}, {{cite:432719aa7268791e12df256421e9771d658e6b7d}}, {{cite:19126fd30f8a18e11701cc6dc8bd08848ee86555}}, {{cite:4eaff529076ccb5aa4dc2865ee043968365dab1e}}, {{cite:436af4d36e26222aede188b750b23c733f4cdb49}}, {{cite:4e70b0a34450d0c790ccc51cc2e5e3e784ccfa81}}. TvNS theory has been used to study systems where energy is continuously input in a localized region of space {{cite:07e8408cccaa5bdda00e933a326beb9664eca590}}, {{cite:eeb44b37eee4fba432140400cbc856c47e938b44}}. TvNS theory has also been generalized to granular systems, where the conserved quantities are fewer in number because energy is no longer conserved {{cite:1751072f043c70b6cb837cfed99f79259212dca6}}, {{cite:6c4f7f30ba76250b651d86a071197e79c159d14d}}. There are many examples in granular systems where a blast wave is generated when perturbed by either a single impact or continuous energy injection: for instance, crater formation in a granular bed due to the single impact of an object or a continuous jet {{cite:985e37be1b27ca7e073ac4fc35c31cbe53261ecf}}, {{cite:f484dd7da819667cc23f3a597021b50b4f885d1d}}, {{cite:65f6997a6161480838a4b27c110cf076476ec745}}, shock propagation in a granular medium due to a sudden impact {{cite:dabf183703ed00cfa01de511fc98b8d790971562}}, {{cite:ecdb4f229869bbf2f65c2e34c49abd7698b16c89}}, {{cite:6b7a1afe375b9627729fc9bc9aee392b3f024b32}}, viscous fingering due to continuous energy injection {{cite:749fca9e33764c410a5ad8ccb7e7fc3377e5351a}}, {{cite:8af215f4d19cf902ef33203529cf8285477d6743}}, {{cite:ab86d9eb3681ee024d62fe46d577d8b16f718a05}}, {{cite:9d87001b39567c12f1d111b8aa8110184f8cb7e4}}, {{cite:e9582591489e31b218546305f8c0084df3eb71cd}}, or shock propagation in continuously driven granular media {{cite:705b7532e0a93342bf754be287efa7838f8df9ad}}.
i
cd4c4f09076d692584c91bde8ae57f3f
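As a worked illustration of the self-similar TvNS scaling from the excerpt above (a sketch under the standard assumptions of an instantaneous point release of energy E into a uniform medium of density rho0, not the full solution): the shock front grows as R(t) = xi * (E t^2 / rho0)^(1/(d+2)), with xi an O(1) dimensionless constant fixed by the scaling functions; xi = 1 below is a placeholder.

```python
import numpy as np

def shock_radius(t, E, rho0, d=3, xi=1.0):
    """TvNS (Sedov-Taylor) shock-front radius for a point release of
    energy E into a uniform medium of density rho0 in d dimensions:
    R(t) = xi * (E * t**2 / rho0)**(1 / (d + 2)), with xi of order unity."""
    return xi * (E * t**2 / rho0) ** (1.0 / (d + 2))

t = np.linspace(0.1, 10.0, 5)
print(shock_radius(t, E=1.0, rho0=1.0))  # decelerating growth, R ~ t^(2/5) in 3D
```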
Our approach is depicted in Figure REF . Two losses are computed at each iteration to optimize for content and style in a coupled manner. We build on the drawing representation and the content-loss methodology of CLIPDraw {{cite:63b9518b2197240d7a64ba4bf7886457e086e5e3}}, which we briefly describe to keep the presentation self-contained.
m
db74abf89ae7473feb8c3ee822278135
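A schematic sketch of the coupled optimization loop described in the excerpt above, made runnable in PyTorch; the `render`, `content_loss`, and `style_loss` callables and the loss weights are hypothetical placeholders, not the cited CLIPDraw implementation.

```python
import torch

def optimize_drawing(params, render, content_loss, style_loss,
                     w_content=1.0, w_style=1.0, steps=200, lr=0.1):
    """Jointly optimize drawing parameters (a tensor with
    requires_grad=True) for content and style at every iteration.
    `render` differentiably maps parameters to an image; the two
    loss callables and their weights are illustrative placeholders."""
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        img = render(params)                  # differentiable rasterization
        loss = w_content * content_loss(img) + w_style * style_loss(img)
        opt.zero_grad()
        loss.backward()                       # both objectives coupled per step
        opt.step()
    return params
```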
In this section, we present some results that are needed to prove the main theorems of the paper (in Sections and ). We first provide a block anti-triangular decomposition of Hermitian matrix pencils. This result can be proven in the same way as the analogous result for symmetric matrix pencils {{cite:123779c894cfdfc0bd552402a09d643d478504db}}, but using the factorization {{formula:9603fa69-7a0d-4b7a-b8b2-afccc1e12b5a}} (equivalently, {{formula:0034b507-0a6a-4580-a2b3-506c54c7054f}}), where {{formula:ef55d4d5-9011-49ae-a7a9-f88a5da51fbe}} is unitary and {{formula:989bcf7d-4176-4810-a89c-0047c6fef4bf}} is upper triangular; see, e.g., {{cite:7a662d720b8bb5569baabae212845a94f0a15d1d}}. (Block anti-triangular form of Hermitian pencils.) Let {{formula:3a758ab0-6758-4c98-9565-6d0e1f842be5}} be a Hermitian pencil. Then there is a unitary matrix {{formula:c8f998f1-2dfb-4973-907a-8e8a309da767}} such that {{formula:7392a687-7f2b-4ed4-866f-9d8cf69dea3b}}
r
25dd0abf6764da673f39166f22165624
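The unitary-times-upper-triangular factorization invoked in the excerpt above is presumably the complex QR decomposition (the formulas themselves are not shown here, so this is an assumption); a quick numerical check, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)                          # A = Q R
assert np.allclose(Q.conj().T @ Q, np.eye(4))   # Q is unitary
assert np.allclose(np.triu(R), R)               # R is upper triangular
assert np.allclose(Q @ R, A)
```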
The flux drop in the radio or optical/NIR band usually associated with the hard-to-soft state transition is thought to be due to jet quenching {{cite:71067cccbd6c6b2cfa29f0ea21612ae70859bba8}}, {{cite:17f87b908d1723047b2081a9b5d3eadd4f870147}}, {{cite:cd0218d837ef5904b9e1f1f06f0db6b3fd2e783f}}. In the observations we analyzed, a drop in the UV flux started about 10 days before the transition. This drop is similar to those seen in the radio, the NIR, and the optical, which is highly suggestive of a common origin; hence the drop in the UV flux should be due to jet quenching as well.
d
e6c6c18b1bb8f4d09d8ad43ba4ffd405
Direct searches for the right-handed boson set a lower bound on the mass of the {{formula:f886f2bb-0dfd-4c3d-98a0-cc6218191376}} boson of {{formula:4b72f096-8adc-4baa-934f-2b6daa197f89}}  {{cite:c015be4214067e79d58312139233b1dbe10d1203}}, {{cite:9701d094737e5e0846617bb8546e0ec45a855a90}}, {{cite:ad3ec88ee0fc5f1ffcadba8b9947f329f7948944}}, {{cite:bddd217f165412f658c9f58eb94de4c9264adef4}}, and constrain the {{formula:deaf8d82-eede-4f5f-b531-435d1148c53b}}, {{formula:bee4eeb3-c0b2-4a1b-8d28-558bf482ae9b}} mixing angle to {{formula:0e686ad0-7bf6-487c-b0e9-2abde9ce2689}}  {{cite:92cdbdeacd18ffa4dd95adec9c832916ec6933eb}}. Finally, the CP-violation data for the {{formula:d5c68014-7106-4973-b210-5c6f168b8837}} and {{formula:e6ca5de9-a589-407f-9eec-73809a3e9ff6}} mesons require {{formula:941ba22e-505e-4aaf-90d8-31bd25ce1b83}}  {{cite:d5f51ed4b350c7ff8f2277216b5b33c61e4f9b3a}}, where {{formula:cb2c3226-b61c-412f-85df-74eb98c64c5e}} and {{formula:cda42cb4-c678-4506-baf2-8fd465607950}} are the vacuum expectation values of the two Higgs fields in the LRSM.
r
792cdeaec93b2d4240b7c7ece11bf1b3
Through a Python tool, the memory gains of the proposed synapse-compression technique are quantified. The tool calculates the synaptic-memory requirements of a given CNN for the proposed method, the naive LUT approach, and the reference hierarchical-LUT approach {{cite:095a8511c6b27eaab3ff915f7918e83e12aa5031}}, {{cite:f71a1af07dc9dd423b394c770210d510a4f3901f}}. In the proposed technique, unlike in the reference techniques, the memory requirements of a layer increase with the number of fragments the layer is split into. The tool therefore calculates the memory requirements for the proposed scheme considering feature-map (FM) cuts that ensure that the total memory footprint of each resulting fragment stays below 256 kB, the single-core limit of our chip. By accounting for these required (non-beneficial) FM cuts, we report realistic rather than optimistic gains for the proposed technique.
r
03f5971d6eee842a455eee88a086a7eb
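A simplified sketch of the kind of calculation such a tool performs. The 256 kB single-core budget is taken from the text; the per-fragment memory model `frag_mem` is a hypothetical user-supplied estimate (fragmentation generally adds overhead, which is why memory grows with the number of cuts).

```python
CORE_LIMIT_B = 256 * 1024  # single-core synaptic-memory budget (256 kB)

def fragments_needed(frag_mem, core_limit=CORE_LIMIT_B):
    """Smallest number of FM cuts such that every fragment fits in the
    per-core budget. frag_mem(n) estimates one fragment's footprint
    after an n-way cut (hypothetical model; cuts add overhead, so it
    is generally larger than total / n)."""
    n = 1
    while frag_mem(n) > core_limit:
        n += 1
    return n

def total_memory(layers):
    """Total memory of the proposed scheme over all layers. Each layer
    supplies its own frag_mem model; unlike the reference LUT schemes,
    the total grows with the number of required cuts."""
    total = 0
    for frag_mem in layers:
        n = fragments_needed(frag_mem)
        total += n * frag_mem(n)
    return total

# example: three layers, naive "even split plus 5% overhead" model
layers = [lambda n, m=m: m / n * 1.05 for m in (300e3, 800e3, 120e3)]
print(total_memory(layers))
```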
where {{formula:ed61c79c-ec43-4fa6-abea-2b66c4137114}} is the vacuum pressure. From {{formula:607e8105-1541-4055-8826-9fb24860189d}} one can determine the energy density, {{formula:9d3e1f32-a93b-4355-8f6c-63156c71dcfb}}, the trace anomaly, {{formula:accd52da-c868-426e-9e48-745d5fdfc363}}, as well as the conformal measure, {{formula:fb887b13-3a61-4d96-bdbc-b06e0ce35c01}}. For simplicity, let us start by considering the case of symmetric quark matter, {{formula:3226d07f-b4c2-4f55-821a-767a6fcb6339}}. Fig. REF illustrates the baryon density as a function of {{formula:6ce834c1-93ab-4595-8532-6cd42b6d0ab9}} for {{formula:f1a86171-cd8d-459d-86c9-f7b0f3373c41}}, {{formula:aeb199ce-a3e9-482d-96e5-221c67a5b1c0}}, and {{formula:a0ce8dd9-4489-41a3-8d46-84e723398002}}. The figure clearly shows how {{formula:e753b232-d237-4db0-b4e7-d734df30f283}} interpolates between the other two cases, predicting that after the chiral transition {{formula:79b59062-1e77-4ad8-be99-beda1956f675}} converges to the free-gas result. The possible phase-transition patterns can be better analyzed by evaluating the quark number susceptibility, {{formula:e3e9ed34-f9b5-4550-9aad-d7d5b4b0c5d3}}. The results displayed in Fig. REF show that all three possibilities reproduce the usual first-order (chiral) transition which, as expected, is delayed and softened when {{formula:c8ef8737-2c2f-4b25-9583-1e7155bdc792}} {{cite:06a03a023134949405ae87492e7929ac297675c1}}, {{cite:562825640bcae72a9d5018d493d105d5b11263fe}}, {{cite:37012354405b4b0644917abcd4efc76857375b53}}, {{cite:53726d1f43b2a44dae440fecfc2122fbf5812028}}. On top of that, at {{formula:630a39f0-cc40-4962-a096-08948b9ed8ff}} the running coupling induces a crossover towards the free-gas result. Fig. REF shows the squared speed of sound as a function of {{formula:6d4f7475-d3d5-4162-a2a9-be79259c9c35}}. The results obtained with the running coupling indicate that {{formula:4fc028f7-8190-475e-8221-f3461e6d2024}} exceeds the conformal limit at {{formula:490f5849-9526-4685-8cb1-2e8c6f396ad0}} for both cases in which repulsion is present. However, when {{formula:017f0cc9-4ec5-4edd-ad7c-91755c70c357}} is fixed, {{formula:21306f55-76ec-4917-8f0e-51d0e471cd6e}} continues to rise monotonically, whereas the running coupling displays a non-monotonic behavior, producing a peak, {{formula:62ac940a-92eb-4f12-8715-9089fcf6e83b}}, at {{formula:c017ccae-672c-4f3a-9768-15f2eadd582b}}. After that, {{formula:3548ffd2-adc5-4eb4-9bf2-e32ec5abeb0b}} returns to the sub-conformal region and reaches a minimum, induced by a crossover (at {{formula:242789ce-052c-411f-88e1-a8af874d6552}}), before converging to the conformal value. The figure also illustrates the pQCD results when the {{formula:c20db9ff-9efb-4324-a3e8-be63e9016fa4}} renormalization scale varies from the “central" value, {{formula:06032821-55c9-41e5-91b2-ff7a65566bd3}}, to {{formula:15b8e066-4569-49d8-be54-d08d3ef77403}}. The pQCD predictions were obtained by adapting the {{formula:577330cc-62b1-4dae-8527-45e1179eccb5}} results of Ref. {{cite:20db62af3b3cb5cbabe58e20f2810ec1d97f30b3}} to {{formula:6fd557c9-f659-4ec2-8a0f-a8446b64f91a}}; this is straightforward because the authors presented per-flavor results to the first non-trivial order.
Notice that the conjectured coupling running predicts that after peaking at the super-conformal region, {{formula:306ac937-4611-47a2-b6ec-f92ceff835e0}} approaches the conformal value from below, like pQCD, whereas evaluations performed with the hard density loop resummation {{cite:2b5ea50ba6b8941de829432302f730fee8da84d7}} predict that the approach is from above. A preliminary analysis with the renormalization group optimized perturbation theory resummation {{cite:14d5761edff9169754232a99b01bf7c797845504}} also indicates that the approach is from below {{cite:d5f1fec50cd4267663a0134b89e92b4cc6027176}}. Finally, it should be emphasized that the shape of the curve generated with {{formula:941b951b-6dfc-45d3-8997-24496a478a5c}} resembles some of those recently predicted in Refs. {{cite:bb18a38e84b361b389b1705ee192938abdbf1d01}}, {{cite:db427bc7eb615789fdfda3fcac768a73c776f1ea}}, {{cite:b5f844a43a3886f04387728f2f337395015d5055}}. {{figure:d4fb7991-3519-476c-bf26-edb337be620c}}{{figure:a5343a3b-0a89-4635-9949-952da14c6790}}
r
aae0ea4456509357e99e752d5830917f
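For reference, the quantities discussed in the excerpt above all follow from the pressure by standard zero-temperature thermodynamics: n = dP/dmu, eps = mu*n - P, chi = dn/dmu, and c_s^2 = dP/deps = n / (mu * dn/dmu). A numerical sketch with a free-gas placeholder pressure (not the model of the text):

```python
import numpy as np

mu = np.linspace(0.4, 2.0, 400)   # quark chemical potential (arbitrary units)
P = mu**4 / (12 * np.pi**2)       # placeholder: free massless gas (arbitrary norm)

n = np.gradient(P, mu)            # quark number density        n   = dP/dmu
chi = np.gradient(n, mu)          # quark number susceptibility chi = dn/dmu
eps = mu * n - P                  # energy density at T = 0
trace_anomaly = eps - 3 * P       # vanishes for a conformal system
cs2 = n / (mu * chi)              # squared speed of sound      dP/deps

print(cs2[-1])                    # approaches the conformal value 1/3
```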
In December 2019, a severe respiratory disease emerged in Wuhan, China {{cite:f1c6685442f40c839ac5c9ef9a22ed735f65012a}}. Since the first identified case of COVID-19 was reported on 8 December 2019 {{cite:ef379ef5f6baf047350e10cf9a2e23755845ae93}}, {{cite:6b5fb07d361867e14b952027d52ddeae1a17f7d2}}, the virus has spread quickly, causing a worldwide health crisis. The spread has occurred in waves, and the virus continues to evolve and mutate; with 215 million confirmed cases and 4.48 million confirmed deaths to date, we have not yet seen the final numbers. To halt the spread, various non-pharmaceutical interventions were put in place, such as travel restrictions in combination with social distancing, as important measures to slow down the short-term spread {{cite:614179737d2969eb771beccab738c3e374b57909}}. Eventually, pharmaceutical interventions were administered through vaccination to slow down the long-term spread {{cite:4eb2d5fc417e5918a7c3812f993d5d0ecebfa77a}}, {{cite:d3c00ad7feecba5cdbe93e804ce641cabb4b4a30}}. The first vaccine doses were administered in December 2020, a year after the pandemic started. Since then, vaccination has been seen as the most important strategy for containing the spread of the virus, minimizing risks for citizens, and ultimately slowly alleviating the non-pharmaceutical restrictions that have been put in place {{cite:d3c00ad7feecba5cdbe93e804ce641cabb4b4a30}}. As of 27 August 2021, 33% of the world population has received at least one dose of a COVID-19 vaccine and 25.1% are fully vaccinated.
i
5d0cfe670d1be58a5b1c223d376a5ceb
Human-made 3D shapes, such as chairs, are composed of a set of meaningful parts and exhibit hierarchical 3D structures (see fig:partstructure). Extracting multi-level part instances from a point cloud is challenging, especially for fine-level 3D instances such as chair wheels. Existing studies perform 3D part instance segmentation independently on each structural level and also suffer from insufficient labeled data for some shape categories. By utilizing the hierarchy of shape semantics and part instances, we extend our feature-fusion scheme in a multi- and cross-level manner, where the probability feature vectors at all levels are used to aggregate instance features. Furthermore, to better distinguish part instances that are very close to each other, we propose to predict the centers of grouped instances, called semantic region centers, and use them to push the predicted instance centers away from these region centers, since the semantic region centers play the role of the centers of groups of part instances sharing the same semantics. On the PartNet dataset {{cite:ea4fd99e27c7ffd21f73a41f2cdbfd5d8d58e4c0}}, in which 3D shapes have 3-level semantic part instances, our approach exceeds all existing approaches in part-category mean average precision (mAP, IoU {{formula:11539d59-ff26-419d-9702-e984fe14b9ca}} 0.5) by an average margin of +6.6 over 24 shape categories.
i
c2fd304b072d1b0136625a0ecc9a37a3
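A minimal sketch of the repulsion idea described in the excerpt above (illustrative, not the authors' exact loss): predicted instance centers are penalized for lying within a margin of their semantic region center, so the gradient pushes near-coincident instances apart.

```python
import numpy as np

def center_repulsion_loss(inst_centers, region_center, margin=0.1):
    """Hinge penalty that is positive whenever a predicted instance
    center lies within `margin` of its semantic region center, so the
    optimization pushes near-coincident instances apart."""
    dists = np.linalg.norm(inst_centers - region_center, axis=1)
    return np.maximum(0.0, margin - dists).mean()

# two chair-wheel centers close to their shared semantic region center
centers = np.array([[0.02, 0.0, 0.0], [-0.03, 0.0, 0.0]])
region = np.zeros(3)
print(center_repulsion_loss(centers, region))  # > 0: still too close
```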
Here we compare our method with XLM-RoBERTa {{cite:0eec3e7260a920ac37dbe5903d6a1599156dd0d2}}, InfoXLM {{cite:8a0fd7bef719cd60d46970e5cb7c496923dc0dce}}, and LayoutXLM {{cite:b00234f8b4f3ebbeba5ce8fb1c6339919ec7d306}} on XFUN. The results are shown in Table REF . The table shows that XYLayoutLM achieves the best performance among the listed methods. More specifically, among the multimodal methods, XYLayoutLM outperforms the original LayoutXLM {{cite:b00234f8b4f3ebbeba5ce8fb1c6339919ec7d306}} by 1.48% F1 score on the XFUN dataset for the SER task. Besides, our XYLayoutLM achieves a 0.6779 F1 score in the RE task, a clear improvement over the baseline LayoutXLM (0.6432). Similar conclusions are drawn on the FUNSD dataset, as shown in Table REF . Our XYLayoutLM achieves performance comparable with recent methods such as DocFormer {{cite:3f4a0b430740d2d029005f2d794cb50f1516168e}} and SelfDoc {{cite:54af49681448ad6819e9406204bb5bf3dc344d39}}. Note that StructuralLM* {{cite:6a90dcabea92ab209e0d8ef69b12cfd1dbc257bc}} uses the LARGE model to get the best performance, while the other methods in this table only use the BASE model. {{table:1a6b1dfb-2011-4252-b900-7940d0d47822}}
r
98708b82e95b62b708be1051fe2d3e72
To introduce the notion of translations on the Cayley tree {{formula:ea9e371e-2a66-4cf9-818c-3d5813dc84c8}}, one uses its group representation {{formula:ce8a50c0-5120-4eb8-9bb2-329f369acf6b}}, which is the free group with generators {{formula:8766e901-edf2-437c-8ed9-6b36f5676ae5}}, each of order 2 (i.e., {{formula:26de46ad-d8e4-4746-a31d-ab89205bf66e}}). It is known (see, for example, {{cite:f063e5df441587b9a4d65a54ec914710895fcc3b}}) that the vertices of the Cayley tree are in one-to-one correspondence with the elements of the group {{formula:3ec13b6b-be13-4828-92f2-02489ae792f2}}. Consider the family of left shifts {{formula:102bb039-dfd2-4d9c-be2a-7396596aa98e}} ({{formula:7fcaaf40-dbd1-42db-95e1-74081b927923}}) defined by {{formula:9c5ed00e-c41c-433c-a9f2-b93cfc9fd5f6}}, {{formula:69696d2f-152b-43ff-b811-b72a09e0a9c8}}.
r
2ef9a815afc83e6b533a1c2d3abcddae
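An illustrative sketch of the word representation in the excerpt above (the number of generators `m` and all names are placeholders): group elements are reduced words in which no generator repeats consecutively, and the left shifts act by left multiplication with cancellation.

```python
def reduced_words(m, max_len):
    """Elements of the group as reduced words over m generators of
    order 2: since a_i * a_i = e, no generator repeats consecutively.
    The words correspond one-to-one to vertices of the Cayley tree,
    with the empty word at the root."""
    words, frontier = [()], [()]
    for _ in range(max_len):
        frontier = [w + (g,) for w in frontier for g in range(m)
                    if not w or w[-1] != g]
        words.extend(frontier)
    return words

def left_shift(g, word):
    """Left translation by generator g: prepend g, cancelling g * g = e."""
    return word[1:] if word and word[0] == g else (g,) + word

print(len(reduced_words(3, 2)))   # 1 + 3 + 3*2 = 10 vertices within distance 2
print(left_shift(0, (0, 1, 2)))   # (1, 2): leading generator cancelled
```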
The nonperturbative inputs used in this work include the decay constant, the Gegenbauer moment, and the form factor. Unfortunately, for the {{formula:62a5d443-883b-4af1-ab6f-d2e703e34c49}} and {{formula:9636013d-254e-48f1-b221-94b18fb18958}} decays concerned in this work, some of these nonperturbative inputs are not known. The standard light-front (SLF) approach {{cite:43f160c7f72392a85d46e16cb3eb93f6818c01f5}}, {{cite:e58db326bcd26045fee85b540ffca8260c054708}}, {{cite:5c4c67e21264e5f9c1fc737c0219b47b3b138b83}}, {{cite:80921abfc9e77fb3c40004fe9d5d10bebc160482}} provides a conceptually simple and phenomenologically feasible framework for calculating the nonperturbative quantities of hadrons. However, it cannot determine the zero-mode contributions by itself, and Lorentz covariance is lost. To remedy these shortcomings of the SLF approach, a manifestly covariant light-front (CLF) approach was developed {{cite:194596daccfc9e57981def9592d4289502a2148b}}, {{cite:80efe887bebaa53a7eb468d7c725f09551754540}}, {{cite:9e9c4ee0020d500ef12a79cb70e2ff146af51ee0}} with the help of the manifestly covariant Bethe-Salpeter (BS) approach, and it has been applied to study the {{formula:53abdbd3-81f7-43d1-948b-69c52eba4eb3}} and {{formula:c29e67f2-ebc5-4c63-9515-df445429fd0b}} decays {{cite:74bd5a165325665f50a5fec671d0d62a92fd2fa7}}, {{cite:abdce88390a290f6cbea407b30bdc0c2f486b759}}, {{cite:deb34a20e12589e60c9b6e8cc38aab64806e4054}}, {{cite:6ac002756ab01e01192db78fcb1e3eda4af166e3}}. Unfortunately, this traditional CLF approach has some self-consistency problems, and covariance in fact cannot be strictly guaranteed owing to the residual spurious {{formula:d6c86011-4866-42d1-ba99-45bfff3f5c9f}}-dependent contribution {{cite:9e9c4ee0020d500ef12a79cb70e2ff146af51ee0}}, {{cite:34f7aa187b6ce3c0ec3880860ed0bbb6f9719233}}, {{cite:94480971ea776f9ad1199033db26eb8faa032217}}. To resolve these problems, a self-consistent scheme was presented in Ref. {{cite:34f7aa187b6ce3c0ec3880860ed0bbb6f9719233}} by improving the correspondence between the CLF and BS calculations; it has been tested in, for instance, Refs. {{cite:d95c77fc2cece5c6f9881db0a8135f46af4ae3b9}}, {{cite:9f00105b4199e4e575beddc707d5aecb8ef90f60}}, {{cite:c849c5218d3744c5b2e4635dbacc66514e051a47}}, {{cite:133844e8825ec52a1e59ee7929059c8763af863f}}, {{cite:a9737cf5247f98309c90d0c8e2773caa04ce09cc}}, {{cite:94480971ea776f9ad1199033db26eb8faa032217}}, {{cite:1447dd93e74399898783d61ce5c47a2e77f2664f}}, {{cite:85420459a232caafcd482dcc0199341fc9b4943b}}, {{cite:be747160391349a91421330f6ff6d599e226c22e}}. Most of the results for the decay constant and form factors based on this improved self-consistent CLF approach agree with the experimental data and with the predictions obtained using lattice QCD (LQCD) and light-cone sum rules (LCSR) (some examples can be found in Refs. {{cite:133844e8825ec52a1e59ee7929059c8763af863f}}, {{cite:a9737cf5247f98309c90d0c8e2773caa04ce09cc}}, {{cite:85420459a232caafcd482dcc0199341fc9b4943b}}, {{cite:c849c5218d3744c5b2e4635dbacc66514e051a47}}, {{cite:d95c77fc2cece5c6f9881db0a8135f46af4ae3b9}}), while the self-consistent CLF results for some DAs of light {{formula:00540b17-1d1e-4785-a477-c79119d25df9}}-mesons (for instance, the twist-3 DAs of the {{formula:7df985cf-8e6c-445a-aaf3-666c82e69572}} and {{formula:993488be-ab26-45cb-97cf-d7aebc091b50}} mesons {{cite:9f00105b4199e4e575beddc707d5aecb8ef90f60}}, {{cite:d5973f8ae3b5aa2c084654167c375874244c4e2e}}) differ from the QCD sum rule (QCD SR) results.
r
5d69c16736430ea526820f3af5d03be6
After computing the energy deposition, the ionization and the thermal balance are solved in NLTE {{cite:d73a9249f9363174e09607ed6b753bee023e660a}}. Ionization is assumed to be entirely due to impacts with the high-energy particles produced by the deposition of the radioactive products, while photoionization is assumed to be negligible {{cite:85c6b8a1373d356983d43a56545d2f36b267827b}}. The rate of impact ionization and the recombination rate are balanced for each ion to compute the degree of ionization. Level populations are computed by solving the rate equations under the assumption of thermal balance, i.e., by equating the non-thermal heating rate and the rate of cooling via line emission. Under the assumption that the nebula is optically thin, radiation transport is not performed. The resulting line emissivity is used to compute the emerging spectrum.
m
8263dc7e6833768c8e9a83eec5464587
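A toy single-species version of the ionization-balance step described above (illustrative rates; the actual code treats many ions and couples ionization to the thermal balance): equating the impact-ionization rate with the recombination rate, with n_e = n_ion, yields a quadratic for the ion density.

```python
import numpy as np

def ion_density(n_tot, gamma, alpha):
    """Solve gamma * (n_tot - n_ion) = alpha * n_e * n_ion with n_e = n_ion,
    i.e. alpha * n_ion**2 + gamma * n_ion - gamma * n_tot = 0 (positive root).
    gamma: impact-ionization rate [1/s]; alpha: recombination coeff. [cm^3/s]."""
    return (-gamma + np.sqrt(gamma**2 + 4 * alpha * gamma * n_tot)) / (2 * alpha)

n_ion = ion_density(n_tot=1e8, gamma=1e-6, alpha=1e-12)
print(n_ion, n_ion / 1e8)  # ion density and ionization fraction
```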
As for the method proposed by {{cite:61ff1b7cd62578481851585e4381508fcec01e69}}, which we present in Algorithm REF , its second-order Edgeworth expansion was discussed in Section . To make the statement more precise, let {{formula:667ea1b9-3506-4772-8269-9b494cf0d80a}} be drawn from Algorithm REF , and let {{formula:bbe823ee-e59d-4ef3-9969-9c4ba312daf8}} be drawn from the standard Weighted Likelihood Bootstrap {{cite:544bf2c63d948bcf8e87cdc07ed1cd4295893a00}}. Then, up to the term of order {{formula:ad4921d5-0666-421d-84bc-be3ef29a6985}}, the Edgeworth expansion of {{formula:91ff8292-9685-42cb-b0c8-d29f014e38b2}} is the same as the Edgeworth expansion of {{formula:de8d1e69-6097-4279-9304-d69d77468158}}, for any choice of the fixed parameters {{formula:a4664984-6946-4c1b-bed7-27bb593c69b3}} and {{formula:0ffb86ce-a354-4f06-bab6-2ed2c1ee416d}}.
m
bc81ea85eb61421cce6656d0f2e957e8
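For context, a minimal sketch of the standard Weighted Likelihood Bootstrap for a model where the weighted MLE has closed form (a Gaussian mean); this illustrates the baseline draws only, not the modified algorithm discussed in the excerpt above.

```python
import numpy as np

def wlb_draws(x, n_draws=1000, rng=None):
    """Weighted Likelihood Bootstrap: each posterior draw maximizes a
    randomly reweighted log-likelihood. Exp(1) weights are Dirichlet
    weights up to scale; for a Gaussian mean the weighted MLE is just
    the weighted average, so every draw has closed form."""
    rng = np.random.default_rng(rng)
    w = rng.exponential(1.0, size=(n_draws, len(x)))
    return (w * x).sum(axis=1) / w.sum(axis=1)

x = np.random.default_rng(1).normal(2.0, 1.0, size=200)
theta = wlb_draws(x)
print(theta.mean(), theta.std())  # concentrates near the sample mean
```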