Leighton {{cite:ce040779c557a0d8d446ce87342b5798d77a0630}} used the well-known Lipton-Tarjan planar separator theorem {{cite:fadec49a29025e0e142ee2c099ceee4ec62f68f5}} to give a connection between the crossing number and the bisection width {{formula:d1ed62c1-7467-41fc-bfa9-4460ee539992}} of a graph. Explicitly, it was shown by Pach, Shahrokhi and Szegedy {{cite:d104034e3039384a3d23326fe241a5da4454e1c7}} that for every graph {{formula:2a8ba0b3-10ed-46d9-aa44-ffa26bfad6b5}}
{{formula:8ff6711a-1318-4a1b-a418-dffe82462126}}
Finally, the best results are obtained with CPE. In contrast to the other PE approaches under analysis, CPE adapts to the input signal itself, which can be an advantage, although it makes the generated positional embeddings harder to interpret. Despite having no explicit information about absolute position, CPE outperforms absolute PE and gives results close to the state of the art on the ESC-50 dataset. We also tried summing absolute positional embeddings to the transformer input (CPE + Absolute), but no significant gains were observed. This suggests that absolute position information is either not required or can be learned by the CPE, as shown in {{cite:8c271ba1dd51e65a19524db7b3a03687e93e2d54}}.
{{figure:3b9121de-5bf0-4a61-a3c6-3238927a965a}}
Template matching is one of the earliest methods for question answering over knowledge graphs. In this class of techniques, a dataset of question templates and matching query templates is used. A new question is processed using linguistic rules and converted into a question template, which is then mapped to a query template. Both the question and query templates contain placeholders for entities and relations. Finally, the structured formal query is produced by mapping the surface forms or terms in the question to the knowledge base entries (entities, relations, classes, etc.) and using these entries to populate the placeholders. The final query is then executed on the knowledge graph and the answer is returned. Examples of the process can be found in Figure 2. The initial focus of this paradigm was on converting simple natural language questions into a triple representation. However, since not all questions can be represented by a single triple, mechanisms for converting complex questions to formal queries were explored in {{cite:e4ca0c7b09b57d356bd046fdc6bcce1bf1651afa}}. Since the templates are hand-crafted by experts and hence have restricted coverage, the method can be very fragile when unseen question formulations are used {{cite:fc43aeb82ef02939fbb1e1923d4411fc16887524}}. One solution to this problem was proposed in the QUINT framework {{cite:fc43aeb82ef02939fbb1e1923d4411fc16887524}}, {{cite:b7bf5bc55885e270292f40a69c635b5075d0c821}}, which learns the query paths using distant supervision. The question terms and the query entities are aligned using integer linear programming and further processed to create question-query template pairs with placeholders for terms and entities. These templates are used to map new questions to their corresponding templates, and an entity linking step populates the query template with entities.
The QUINT framework relies mainly on syntactic similarity to match new questions to question templates, so it cannot answer questions that do not match any known template. The NEQA framework {{cite:9bd489481b742d03b9802362f399efe2f00f319b}} expands on the ideas of QUINT and uses the semantic similarity of new questions (which do not map to any known template) to known questions in the training set to derive the most likely query template. The semantic similarity score between a known question and a new question is calculated using: 1. the surface-level or word-level matches between the questions, 2. the cosine similarity between the word2vec embeddings of pairs of phrases and words in the questions, and 3. the expected answer type of the questions. The template of the highest-scoring known question is used to construct the query for the new question. If the generated query answers the question correctly (based on feedback from the user), the new question-query pair is added to the known set and the query ranking model is updated. The NEQA system is thus designed to keep improving continuously from positive and negative user feedback.
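The NEQA-style scoring step can be illustrated as a weighted combination of the three signals above. The weights, the toy embedding table, and the answer-type flag in this sketch are illustrative assumptions, not values from the cited work:

```python
import math

def word_overlap(q1, q2):
    # Signal 1: surface-level match as Jaccard overlap of token sets.
    s1, s2 = set(q1.split()), set(q2.split())
    return len(s1 & s2) / len(s1 | s2)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def avg_embedding(q, emb):
    # Average the word vectors of the tokens that have an embedding.
    vecs = [emb[t] for t in q.split() if t in emb]
    if not vecs:
        return [0.0]
    return [sum(col) / len(col) for col in zip(*vecs)]

def similarity(q_new, q_known, emb, w=(0.4, 0.4, 0.2), type_match=True):
    # Signals: (1) word overlap, (2) embedding cosine similarity,
    # (3) expected-answer-type agreement, combined with assumed weights w.
    return (w[0] * word_overlap(q_new, q_known)
            + w[1] * cosine(avg_embedding(q_new, emb),
                            avg_embedding(q_known, emb))
            + w[2] * (1.0 if type_match else 0.0))
```

The highest-scoring known question would then supply the query template for the new question.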
Typically, the literature reports only final results for the fully quantized setting, making it difficult to assess the impact of the individual quantization choices for each tensor. In this section, we aim to better understand the impact of the individual range estimation methods for gradient or activation quantization on the final accuracy. We compare our hardware-friendly method, in-hindsight min-max range estimation, to the commonly used dynamic quantization methods: the current min-max {{cite:4766ec978d3e3146db07fa814b1de5c52671e1d2}}, {{cite:6def504f6e9d8bef0a39ab39628c208b17c5da35}}, {{cite:c560b6f0cf2f7e4c448c593c868ca202ee01ce76}}, {{cite:3b55a6fee4ef2619e47b5569467ac70c2ffe282d}} and running min-max {{cite:a1a73cb177557e07cdb1f338c595e5099775a0cd}}, {{cite:775b7ce7dfb0dd4819b3dfcc9149c4e3f8d3ab3e}} estimators.
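The estimator logic of the three strategies can be sketched for a stream of tensors as follows (a minimal NumPy illustration of the range updates only, not the paper's training setup; the momentum value is an assumption):

```python
import numpy as np

def quantize(x, lo, hi, bits=8):
    # Uniform affine quantization of x into the range [lo, hi].
    scale = (hi - lo) / (2 ** bits - 1) or 1.0
    q = np.clip(np.round((x - lo) / scale), 0, 2 ** bits - 1)
    return q * scale + lo

def current_minmax(batches, bits=8):
    # Dynamic: recompute the range from the current tensor at every step.
    return [quantize(x, x.min(), x.max(), bits) for x in batches]

def running_minmax(batches, bits=8, momentum=0.9):
    # EMA of the per-step min/max, applied to the current tensor.
    lo = hi = None
    out = []
    for x in batches:
        lo = x.min() if lo is None else momentum * lo + (1 - momentum) * x.min()
        hi = x.max() if hi is None else momentum * hi + (1 - momentum) * x.max()
        out.append(quantize(x, lo, hi, bits))
    return out

def in_hindsight_minmax(batches, bits=8):
    # Quantize step t with the range observed at step t-1, so the range
    # is already known before the tensor is produced.
    lo, hi = batches[0].min(), batches[0].max()  # bootstrap from first batch
    out = []
    for x in batches:
        out.append(quantize(x, lo, hi, bits))
        lo, hi = x.min(), x.max()  # updated "in hindsight" for the next step
    return out
```

In-hindsight estimation never needs the current tensor's statistics before quantizing it, which is what makes it hardware friendly.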
As an example of our earlier calculations, we revisited the fitting of the Geminga profile using the analytical 3-D electron distributions, projected numerically onto the 2-D surface and convolved with a Gaussian function with a kernel of {{formula:2231e20f-c707-489b-813a-63ec0a97fb26}}. The results are shown as a red curve in Fig. REF. The derived best-fit {{formula:ff038e9a-b1cc-4943-95c0-23b9c1ed55f1}} is {{formula:ab368189-80f1-427d-ac05-881ab3e54ad8}}, which is consistent with the results in {{cite:95f5ac9b585c9dabb16461cdfec1faf1906c1b3e}}. However, we found that the same profile can also be fitted with the projected {{formula:f95c4239-b892-4391-aea3-f0019eb946ce}} profile with a smaller {{formula:101339b2-f1c8-4b7a-baf5-59d6ed0ed22a}} of about {{formula:d9cf9fbb-4fe2-4df3-8a49-e66e1fbf1988}}. We note that in the fitting of the {{formula:02b52570-3bce-4ae5-b2ef-3e04e90a8a3a}} profile, an additional uniform background is added to fit the flat distribution beyond {{formula:1a45d1c0-a42f-48c5-857e-1c561efaaf5c}} from Geminga. Such a uniform background is quite reasonable for TeV {{formula:172bff83-b4f5-4187-b5ea-2a7fbe19991d}}-rays due to the CR contamination. Thus, the profile itself cannot rule out a hadronic origin of the {{formula:eb08fdaf-b491-4040-8c97-a305d2668f69}}-ray emission. Of course, in this case the leptonic origin is much more natural considering the presence of the pulsar and the lack of CR proton accelerators in the vicinity; a hadronic origin would require an unrealistic energy budget in parent particles {{cite:6446a6e11e233ed44268f52da6926ac4fce160f4}}, {{cite:d3186d5f2733a4347fa1474228c34a9792244164}}.
Sun et al. {{cite:f0d74b01b2d3e465caa76cf2427ae122dc158192}} apply part-level features because they provide fine-grained information for person description. The authors propose a Part-based Convolutional Baseline (PCB) network for part-based feature extraction. Part-level features are learned by uniformly partitioning the conv-layer without explicitly partitioning the images, and the within-part spatial consistency is then exploited to refine the coarse partition provided by PCB. This refinement of the uniform partition is performed by the Refined Part Pooling (RPP) network. The method achieves 92.3% rank-1 accuracy with PCB alone, which improves to 93.8% when RPP is employed with the PCB network, on the Market-1501 {{cite:d9126fcf732cf51402659f5cde7c8e4e7ec097cc}} dataset. Text attribute combinations exist at a large scale in real scenarios, yet only a minimal number of combinations have sufficient training data; combinations outside this set are never modelled during training. Thus, Dong et al. {{cite:6b372bd1746101c39d90a3edd6e1bc361f1458c7}} formulate the textual-attribute-query-based person search problem as a zero-shot learning (ZSL) problem for the first time. The authors propose Attribute-Image Hierarchical Matching (AIHM), which matches attributes and images at multiple hierarchical levels. The algorithm achieves state-of-the-art rank-1 mAP of 43.3% on Market-1501 {{cite:d9126fcf732cf51402659f5cde7c8e4e7ec097cc}}, 50.5% on DukeMTMC {{cite:d04a235d0ac7b0c43d13bea7b234630013abfeb1}} and 31.3% on PA100K {{cite:267f7260ecc4ead8e68d3fa0e847f3386e4e8bea}} datasets. Table REF compares handcrafted and deep feature-based methods with their advantages and disadvantages.
{{table:5fbc7c91-7e44-4973-bbbb-3d3aa16fa2e6}}
A smooth latent vector representation of outfit style is learnt using variational inference in a novel style encoder named the Variational Style Encoder Network (VSEN). There are two main trends in representing an outfit: as an ordered sequence of items {{cite:c268e674f9d099f4688be801f1ed9a363307d08e}}, {{cite:2a66e3adffb41dbe7c1071707c3f9441caba8763}} or as a set {{cite:136caa1648a2d5460153efb36a87e76d586c83b2}}, {{cite:09b087795479406066514a1e7770145504e07fc1}}. We choose the latter representation, which brings two important properties, namely permutation invariance and support for varying length. This assumption enables us to select the set transformer approach proposed in {{cite:bd687121b18ba79dba3b2a5c6e785ce6c69768a0}} for our style encoder. Keeping in mind that our work is not restricted to compatibility learning and also involves outfit generation, we ensure that every outfit style is represented by the first two moments of a Gaussian distribution that is kept proximal to the unit Gaussian {{formula:2def2d68-2882-443c-8270-0878aa6fc088}}, a mechanism borrowed from variational inference {{cite:14c9bfc6fd57de975219b7be9f00825fda6fd4f7}}. This further ensures a smooth representation of the latent style space; the advantage of this step will become clear at the outfit generation stage.
The other three components of our architecture are optional and are introduced in an attempt to help the emotion and enrolment encoders learn their intended functions: the emotion encoder should learn a speaker-agnostic embedding {{formula:af46c0fc-3b29-4e46-af89-894ff64b5a35}} that the enrolment encoder will help personalise through a speaker-aware embedding {{formula:b296da68-952a-469e-869c-a147757615ea}} .
This conceptualisation offers the following design principle: we should guide the emotion encoder to `forget' any speaker information and help the enrolment encoder learn more of it.
This is achieved by the additional speaker classifiers, {{formula:eedcb0a3-bd3d-4f26-b376-cc874e8224c8}} and {{formula:f5887abc-686c-426a-9585-63329ba9a5cf}} , respectively.
Both accept as input the corresponding embeddings {{formula:971ae402-5d81-4d00-9d50-6e4f9eca8b28}} and {{formula:9e156bcc-f25d-4278-8a25-a5a90a663498}} and output a predicted speaker label {{formula:a50d3461-3cb9-44bb-be87-a54ef4d692d5}} ; they are then trained with a speaker loss {{formula:f0f40d9f-f6c7-40b9-8ef2-15b743cccff9}} .
However, there is one crucial difference: the speaker classifier attached to the emotion encoder is preceded by a gradient reversal layer.
This layer was introduced by {{cite:a6588f4230b46c5de5a41a8f11d019c589c86178}} for domain adaptation, and has been widely used to remove undesired information from learnt representations – including for speaker-invariant SER {{cite:3478d69120956e5be67cddacfcdc3720eadcd0e7}}.
The functionality of this layer is simple: it inverts the gradients, forcing the emotion encoder to perform gradient ascent (as it continues to minimise an error which now corresponds to the negative of the speaker loss).
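The gradient reversal layer can be sketched with a few lines of manual backpropagation (a toy scalar "encoder" for illustration; a framework-level implementation would override autograd instead):

```python
class GradientReversal:
    """Identity in the forward pass; scales gradients by -lam in the
    backward pass, so the preceding encoder performs gradient ascent
    on the attached (speaker) classifier's loss."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                        # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # invert the gradient

# Toy manual-backprop illustration with a scalar "encoder" h = w * x
# feeding a squared-error "speaker loss" through the reversal layer.
x, w, target = 2.0, 0.5, 3.0
grl = GradientReversal(lam=1.0)
h = grl.forward(w * x)
loss = 0.5 * (h - target) ** 2
grad_h = h - target                 # dL/dh
grad_w = grl.backward(grad_h) * x   # reversed gradient w.r.t. w
w_new = w - 0.1 * grad_w            # a nominal "descent" step...
# ...which, because of the reversal, actually increases the speaker loss.
```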
When studying the association processes of multiple chains, we switch to periodic boundary
conditions and use the smooth Particle Mesh Ewald procedure {{cite:6d18fb88861b23d52ddd2a86336b81d5db068470}}
to treat the long range electrostatics.
The procedure is a variant of the Particle Mesh Ewald method {{cite:79e6b37d042c3fbe62c8e5120f1d60c84561b2aa}}
as implemented in NAMD. The grid spacing is 0.16 nm.
In the simulation of two chains, we used a cubic simulation box
with an edge length of 60.0 nm. We prepared
50 starting systems in the following way.
We first chose all possible pairs from the set
of the initial structures that were simulated as single chains.
This yields 10 different sets of pairs. For each of them,
we prepared 5 different conformations by
changing the mutual orientations
of the chains using the PyMOL software {{cite:5df5ec7501c0ea33003918fd640b3c75f9c80340}}. Each starting
system was simulated for 30 ns, which yielded 15 000 frames.
Altogether, we have considered 750 000 conformations.
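Assuming the set of single-chain runs contained five initial structures (an assumption made so that all unordered pairs give the ten pair sets stated above), the bookkeeping can be checked with a short script:

```python
from itertools import combinations

# Assumption for illustration: five initial single-chain structures.
single_chain_structures = ["s1", "s2", "s3", "s4", "s5"]
pairs = list(combinations(single_chain_structures, 2))

orientations_per_pair = 5      # mutual orientations prepared in PyMOL
frames_per_run = 15_000        # 30 ns per run, i.e. one frame every 2 ps

starting_systems = len(pairs) * orientations_per_pair
total_conformations = starting_systems * frames_per_run
```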
Recall efficiency decreases with increasing storage time, in part because addressed atoms move out of the interaction region via atomic motion in the warm vapour and because of atom-atom collisions. The reduction in recall efficiency is clearly visible as a reduced area of the recalled signal in Figure REF and is plotted in Figure REF . The efficiency drops from 84 {{formula:99563d20-2d56-4b06-8b96-490c8b15b4b1}} 3% at 4 {{formula:d363332c-b0aa-4397-847a-e71011eb9504}} to 57 {{formula:62f3643d-08e1-40d2-9dae-7e7e3aaef80c}} 3% at 13 {{formula:9d087400-81d6-4ccf-97bb-f27885a00a2d}} s, before falling below the no-cloning limit {{cite:291441689ed347e334adb7ec6250ae2744bc30b8}} at 17 {{formula:ec24face-200a-4b41-a6b0-1919e9030f53}} s, when we recall only 40 {{formula:a8027964-939a-45fa-803b-bf0d6e3157b7}} 3% of the stored signal. Storage times shorter than 4 {{formula:3eea47c1-28d5-4041-aa5a-664113f3cfd0}} are inaccessible due to limitations of our setup. Error bars are derived from the propagated statistical noise in each measurement, directly related to the total number of coincidence counts measured. Our protocol was optimised to store the single photons; the success of this optimisation is evident in the recall efficiency for single photons outperforming that of the coherent state at all times. From the relative scaling between the two, it is evident that single-photon storage suffers no additional degradation at extended storage times compared to coherent-state storage. A detailed discussion of how the recall efficiency was calculated can be found in the Supplementary Materials.
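Error bars of this kind follow from standard Poisson (shot-noise) propagation on the raw coincidence counts; the counts below are hypothetical values chosen only to land near an 84 ± 3% point, not numbers from the experiment:

```python
import math

def efficiency_with_error(n_recalled, n_input):
    """Recall efficiency with the shot-noise error propagated from raw
    coincidence counts, assuming Poisson statistics (sigma_N = sqrt(N))."""
    eff = n_recalled / n_input
    rel = math.sqrt(1.0 / n_recalled + 1.0 / n_input)  # relative error
    return eff, eff * rel

# Hypothetical counts for illustration.
eff, sigma = efficiency_with_error(1680, 2000)
```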
where {{formula:4e80fcb5-b953-4b71-9f7f-3e52db367357}} is the diagonal covariance matrix of
{{formula:ad57ad99-b720-41d6-943b-aece5db1e668}} . This result also follows from standard VAR-process
results {{cite:18dd55b069eefaeab744564a6c93bcb12c0c570e}}. We
will call Equation () the integrated
covariance equation. We will see that the linear Hawkes model and more
general time series models also satisfy this equation when the
parameter matrices are given the correct interpretations. There is also
a clear similarity with the parametrization of the observed
covariance of a linear structural equation model as noted by
{{cite:192c5d97d046b0311189cd4db8332b167dea1dc6}} in the linear
Hawkes model. Therefore, more general
identification results from cyclic linear structural equation models
may be used {{cite:192c5d97d046b0311189cd4db8332b167dea1dc6}}.
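As a concrete illustration of the standard VAR-process result referenced above (a generic sketch, not the paper's integrated covariance equation itself): for a stable VAR(1) with diagonal noise covariance, the stationary covariance solves a discrete Lyapunov equation, which can be verified numerically.

```python
import numpy as np

def stationary_covariance(A, Lam, iters=500):
    # Fixed-point iteration for Sigma = A Sigma A^T + Lam, the
    # stationary covariance of a stable VAR(1) X_t = A X_{t-1} + eps_t.
    Sigma = Lam.copy()
    for _ in range(iters):
        Sigma = A @ Sigma @ A.T + Lam
    return Sigma

A = np.array([[0.5, 0.1],
              [0.0, 0.4]])        # stable: spectral radius < 1
Lam = np.diag([1.0, 0.5])        # diagonal noise covariance
Sigma = stationary_covariance(A, Lam)

# Cross-check against the empirical covariance of a long trajectory.
rng = np.random.default_rng(0)
eps = rng.standard_normal((100_000, 2)) * np.sqrt(np.diag(Lam))
x = np.zeros(2)
traj = np.empty_like(eps)
for t, e in enumerate(eps):
    x = A @ x + e
    traj[t] = x
emp = np.cov(traj.T)
```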
Recent studies of FL have provided promising results in healthcare applications.
For example, data from 20 institutes across the globe were used to train a model (EXAM: electronic medical record (EMR) chest X-ray AI model) for predicting COVID-19 {{cite:12da8c24c3ff88befed1ecafdd855a3537a1ffca}}. This model is based on HFL and uses differential privacy to mitigate the risk of data interception during site-server communication. In {{cite:77fab28e0d9d14be180487425be64596f5f409ce}}, three main models, using standard transfer learning, FedAvg, and Federated Proximal frameworks respectively, are considered to diagnose diabetic retinopathy. In {{cite:484773a45d18a400d2b8492007dde599176e17a1}}, a new study proposes federated disentangled representation learning for unsupervised brain anomaly detection using MR scans from four different institutions. To address security and trustworthiness, a densely connected CNN based on FedAvg was proposed and used for COVID-19 {{cite:54ccfb8a4afb6d909aaa84b0612852427d0f7f44}}. Given the limitations on sharing data across hospitals and countries, broader collaborations will improve the feasibility of FL in medical applications.
{{table:3cdbbba3-e625-49e6-a054-e88fa506511a}} | m | b5425f8a9c8b35b0d7761c52743b0d46 |
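Several of the cited studies build on FedAvg, whose server-side aggregation step reduces to a sample-size-weighted average of client parameters; a minimal sketch with made-up hospital sizes and weight vectors:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: the server averages the clients'
    model weights, weighted by each client's number of local samples."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical hospitals with different data volumes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 6.0])]
sizes = [100, 300, 100]
global_w = fed_avg(clients, sizes)
```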
Theorem 1.1 {{cite:a8f7786747c6c1a04a418d2c048151a1fc4d0f3c}}
If {{formula:bdf8c6c2-e48f-46c9-a770-a51f8842617c}} and {{formula:a39e319e-b8cf-4353-960b-d129b5fa2ee5}} for {{formula:7e12a445-7c56-4d22-80d5-0bf5da62aa4a}} , then
{{formula:f9ea9822-7e4e-4570-97f3-7be3e01fb1a1}}
where {{formula:05a2905c-d785-499f-8329-37dd9049e625}} is the pure electromagnetic contribution, {{formula:9d98b1d9-c6b9-48f5-867e-735b532777ae}} is the hadronic contribution, and {{formula:d5261526-50a4-43d2-a34b-b5341a329cca}} accounts for the electroweak corrections due to the exchange of the weakly interacting bosons. At present, {{formula:7d09039c-4c13-413b-9bdd-88d686fe876f}} has been calculated with high accuracy up to five-loop order {{cite:08d4b5c3565e0b3b510298deb6b5f97157a19c37}}, {{cite:4640a7ed20a91e359299c308a61ebf669b7893f4}}, {{cite:58bc7f3c7b764855873173bfbc9eb1c098b2eab8}}, {{cite:15917ad6640a708589bb5aef2893924d6ed49aa1}}, and {{formula:121a98a9-d6da-4708-a341-8b2620b017c4}} has been computed up to two-loop order {{cite:efc5125625e2a9ec488fc3fc94bafc7065ae757b}}, {{cite:25a7efe2ccc2c35078375025171768fcc4a2a962}}, {{cite:6be58a09932ac9857afc67601c44bfa34dba9e35}}; these contributions are given by
{{formula:e821687e-0df7-476b-83bb-2f6958207882}}
Though some recent works {{cite:bba3333eaaeda98aa53411b5caf8c345ba77f57e}}, {{cite:5640d99c8c2f10b5be9bd5edc25b0276f554a81d}}, {{cite:a3a1004f5903edd9bd8d8319281ac982e9fd26bd}}, {{cite:4c1395458abc3d5e186139cb01fc965ef3d1dd17}} on deep learning-based interactive segmentation have shown good performance, it remains a great challenge for current CNNs to generalize well to previously unseen object classes, as they learn directly from annotated images {{cite:60fa18dacb961bee74b3f7e39f590190df2402fb}}. For medical images, annotated images are precious and scarce, since accurate annotations require both expertise and time to obtain. This limits the ability of CNNs to deal with unseen objects that are not present in the training set. Compared with traditional CNNs {{cite:a6f528b5bfff9c9028e84bc2a57ec45201e463d5}}, {{cite:c36b46d729f6e07159b551cfd88c208d3c81bfd3}} and transfer learning {{cite:945443879c82755fa9ad85b9a9c5dcbcbcb514b1}}, {{cite:ac5f0ccbde7eb7d24b548aa16b7a14013c30791e}}, the major advantage of our proposed framework is that it can segment unseen objects without re-training or fine-tuning. It therefore noticeably reduces the burden of collecting and annotating data, and can be applied directly to segment or annotate unseen objects. Compared with DeepIGeoS {{cite:bba3333eaaeda98aa53411b5caf8c345ba77f57e}} and BIFSeg {{cite:945443879c82755fa9ad85b9a9c5dcbcbcb514b1}}, MIDeepSeg requires only a few clicks as input and has higher generalizability.
Thus, we obtain the so-called Yule-Walker equations, and we can therefore estimate the AR parameters by solving this system using Levinson recursion {{cite:5682713fa4b9786fcd0f48ec676d0bed1f8e934b}}.
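A minimal sketch of solving the Yule-Walker equations by Levinson(-Durbin) recursion, checked on a synthetic AR(2) process (this is the textbook recursion, not a specific library implementation):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for the AR coefficients by Levinson
    recursion; r[k] is the autocovariance of the process at lag k."""
    a = np.zeros(order)
    err = r[0]
    for m in range(order):
        # Reflection coefficient from the current forward-prediction error.
        k = (r[m + 1] - np.dot(a[:m], r[m:0:-1])) / err
        a_prev = a.copy()
        a[m] = k
        a[:m] = a_prev[:m] - k * a_prev[:m][::-1]
        err *= 1.0 - k * k
    return a, err   # coefficients and final prediction-error variance

# Sanity check on a synthetic AR(2): x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + e_t.
rng = np.random.default_rng(1)
n = 50_000
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + e[t]
r = np.array([x[: n - k] @ x[k:] / n for k in range(3)])  # sample autocov.
a_hat, err_hat = levinson_durbin(r, 2)
```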
UniLM {{cite:a2f917bc230978ae07561a0d3a902421e0fbc5ba}} A pre-trained language model that unifies three tasks: Seq2Seq generation, conditional generation, and NLU.
Keyword Spotting
As shown in Table ({{formula:34964cc4-ebee-4000-8dbb-4429cfe50060}} ), the performance of phase1-{{formula:a65ccfe3-de84-48e9-85ee-c07458450145}} decreases compared to vanilla. This is because, in this phase, we prune the unimportant 90% of the full net's weights and quantize the important 10% from 32-bit to 8-bit or 4-bit using iterative pruning and QAT. A compact model such as ResNet-8-narrow, which uses fewer channels than ResNet-8, is more sensitive to model compression: it degrades by 9.7% and 17.3% at 8-bit and 4-bit in phase1-{{formula:79004d91-7e23-4e56-be4a-e1e09fe54f54}} . Such severe performance degradation of compact models under quantization has also been reported in other research {{cite:3a700d8455981caf4c1f664d090dffa81a069c6c}}, {{cite:be2879caa4930a14e7a095ef32758e5fd0e64e84}}. At phase 2, by training the unimportant 90% of the full net's weights, the full net becomes a teacher that improves the performance of the pruned net. Surprisingly, the performance gap between phases 1 and 2 is larger for the pruned ResNet-8-narrow model than for ResNet-8. Table REF shows that PQK is more effective at recovering the compact model's phase-1 performance drop, with enhancements of 4.7% and 9% at 8- and 4-bit in ResNet-8-narrow. Concerning bitwidth and accuracy, 8-bit consistently outperforms 4-bit because of the higher representational power of the larger bitwidth. In ResNet-8, regardless of bitwidth, the performance of the pruned net increases as the pruning ratio decreases. These numbers show that the usage of model parameters is important to performance. In ResNet-8 with 4-bit, although the pruned net performs well, the accuracies of phase2-F are lower than those of phase2-P; ResNet-8 with 8-bit shows the opposite trend, meaning that combining 32-bit and 4-bit trained models is more unstable than combining 32-bit and 8-bit trained models.
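The phase-1 operation, keeping the important 10% of weights in low precision and pruning the rest, can be sketched in NumPy (a one-shot approximation for illustration; the method described above uses iterative pruning and quantization-aware training rather than this single pass):

```python
import numpy as np

def prune_and_quantize(w, keep_ratio=0.1, bits=8):
    """Zero out all but the largest-magnitude `keep_ratio` of weights and
    uniformly quantize the survivors to `bits` bits."""
    flat = np.abs(w).ravel()
    thresh = np.sort(flat)[int((1.0 - keep_ratio) * flat.size)]
    mask = np.abs(w) >= thresh               # important weights survive
    lo, hi = w[mask].min(), w[mask].max()
    scale = (hi - lo) / (2 ** bits - 1)
    q = np.round((w - lo) / scale) * scale + lo   # uniform affine grid
    return np.where(mask, q, 0.0), mask
```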
We noticed in this work that the mercury density and sound speed that best fit the simulation and experiment are slightly off from the nominal mercury physical properties {{cite:7b9902f1e8ae599cb0db77e3c6acc07cb9a63caa}}. This offset stems from three major sources. First, the limitations of the Sierra model, or what is known as the "model-form uncertainty" resulting from numerical methods and physical approximations. Second, biases and errors in the strain measurements. Third, mercury cavitation damage, which also contributes to density reduction. Consequently, the EOS material model parameters compensate for these effects to improve the fit to the data. We should therefore emphasize that these settings should only be used for the problem, computer model, and strain data of this study, and cannot be generalized to other applications involving liquid mercury. This overfitting nature of the results is indeed the case for most Bayesian calibration studies {{cite:be6d38b599f81714b38702cacd8407f454c62b27}}, {{cite:86ad0d75dce4e161c6b906e437bae884e9ecf106}}.
{{figure:9fb18ea2-bdf3-450e-8aef-11fe90b12a17}}
One straightforward way to actively modulate the Casimir effect is to
change the dielectric properties of materials by external means {{cite:f7b17945a09bc3b82a8df913a222634d2e822784}}, {{cite:24c07dc0025e967c353cabd36d3020ebe5c9b908}}, {{cite:449f71d1178e97403e7c42ce731c6dc6f0328a28}}. Vanadium dioxide (VO{{formula:4aebcd31-0825-420c-ab7f-9d9fbc8794ef}} ) {{cite:ec2b84d835c9c38c866877218905a96141a934e7}}, {{cite:d8a96188f1b2d9d678916e799cd71434320148b3}} is a
phase change material (PCM), which undergoes a transition from a
low-temperature insulating phase to a high-temperature metallic phase at
critical temperature 340 K. The phase transition of VO{{formula:32e7ee18-76fc-49e6-8668-88995d5f3295}} is accompanied by
a structural transformation from the monoclinic phase to the tetragonal one.
Meanwhile, the dielectric function of VO{{formula:74cbc748-940d-4b72-9e61-ecdb0fadbc72}} changes dramatically during the
phase transition, leading to many interesting applications {{cite:9bfa2eb4426f60757e3b197dd3fb4c73ffa3707b}}, {{cite:a93e2173df1e4fcb10b48070edaed7ac93b18c1d}}, {{cite:440ee87fff84c921d98a35fb665cb64347fcdf6b}}, {{cite:8364b6bfe91cc297b68c4dc6c022fafbcc8ffc6e}}. In general, the phase transition of VO{{formula:c2200cff-7682-4850-8ee6-3ef5b35b000f}} can be
induced by changing the temperature of systems. Alternatively, the phase
transition can be driven by optical lasers {{cite:da84194c51e5d1460df128ba0090ee858c92fdaa}}, {{cite:8e0fae4f8840e5b20878971214a58dba9a62f00d}} or
electrical gratings {{cite:5fdea1f1407a976655923c638e5db009fed760bc}}, {{cite:608fdc6c5b87e2164437f4d5a17d6bcd7f384e42}} on a sub-picosecond timescale.
Recently, VO{{formula:af8f807f-3063-4aed-8354-15f4c967dddd}} has been employed to study the tunable Casimir effect in
the vacuum {{cite:8f4f0f4cb84d45b828ce035a8437f0525a70215d}}, {{cite:ca0d96ef5196e40ad9dc3fa0015ffdfe0e9477dd}}, {{cite:e75a30948db076db792ca6c028ecb98191c3ec06}}. For a large separation (e.g., {{formula:8d79f5ea-088d-4942-b2b2-c1881bd49cc8}} 1 {{formula:dcc0997f-ca1f-4611-a4fc-64ffdfd84e6a}} m), the contrast of Casimir forces due to the phase transition is quite large (e.g., over 2 times for two semi-infinite plates of VO{{formula:856c5ec3-8209-49b0-a11e-388a038cd83a}} ; this value can be even larger for the case of finite thickness {{cite:8f4f0f4cb84d45b828ce035a8437f0525a70215d}}, {{cite:ca0d96ef5196e40ad9dc3fa0015ffdfe0e9477dd}}). When the separation is small (e.g., {{formula:eb40e760-ea8d-418a-9732-0a0d0dc9992f}} 100 nm), however, the modulation of Casimir forces owing to the phase transition and finite thickness decreases greatly {{cite:ca0d96ef5196e40ad9dc3fa0015ffdfe0e9477dd}}, {{cite:e75a30948db076db792ca6c028ecb98191c3ec06}}. Nonetheless, the Casimir forces are always attractive, and only magnitude modulations have been reported in a vacuum-separated configuration. The influence of the phase transition of VO{{formula:68e48dbe-d389-4e43-a9f6-f295b62cbf1f}} on the sign modulation of Casimir forces (e.g., from attraction to repulsion) remains largely unexplored. In a liquid environment, sign modulation and related phenomena such as tunable Casimir equilibria are expected based on the phase transition of VO{{formula:1acdc6f7-7435-439c-9406-cc2586b04545}} .
Training was completed in the Gazebo simulation environment using the SAC algorithm provided by the Stable-Baselines3 RL package {{cite:ac70c0dd7970368174ebedda565b73dd0fa70947}}. In each training scenario, policy convergence typically occurred within 3000 episodes/rollouts (Figure REF ). Compared to the previous work by Habas et al., which applied the parameter-exploring policy gradient method individually to each set of flight conditions {{cite:1311abc231641fefc413064b2ece845e65b05586}}, the deep RL method required approximately {{formula:c1368b69-5588-408d-957f-19826c6975e0}} of the number of rollouts to cover similar velocity and angle combinations {{cite:e782d4fbd642cdba66f536f7804000a24996bf29}}. In addition, the output is a continuous policy that can handle untested observation values within the training range. Because SAC maximizes both the reward and the entropy of the system, there was large variance in the learning curve (Figure REF ). This behavior can be understood as the system exploring various state-action combinations to find the optimal policy, which caused the curve to vary wildly at times. However, this behavior still resulted in robust convergence towards an optimal policy and consistently improved the reward across the learning time frame.
When the output normalization is {{formula:25dd1f15-84e6-4615-ad78-aebf84d2a6ba}}-norm instead of BN. Experiments in Figure REF seem to suggest that there is still a gap between using {{formula:5bbaa08d-ee48-439e-9b84-2bb8965b6958}}-norm and BN as the output normalization method. In this case, the acceleration effect may not occur in exactly the same way as in the BN case, but we believe the two share the same underlying mechanism, which could be proven in theory.
The mystery of the projection head. As our experiments in Figure REF showed, the outputs of the projection head in the symmetric case (without the prediction head) suffer from an extremely strong correlation even when batch normalization is used. However, the impact on the base encoder is milder, and thus the network avoids complete collapse, as shown in Figure REF and Figure REF . It remains mysterious how the projection head works in non-contrastive learning, and how this compares to the contrastive case, which has been studied by {{cite:8bb55a97ca88ad0556911d2cc8721198f518e41e}}, {{cite:534cb25e03617686959ed7532e70e3f7b809679f}}.
Learning non-linear features. For simplicity of analysis, we have assumed that the features in the data set are linear. It is of interest to study whether neural networks trained by non-contrastive self-supervised learning can learn non-linear representations better than traditional methods such as linear regression or kernel methods, as a series of papers {{cite:70a5ac3ccbe1c015ccf43b1ab3b6aa999e7331eb}}, {{cite:f4759024cee74fb047f8dbf952c4d9ee2f01b3e0}}, {{cite:19eda972251d1a8e55f638d56d99bdcfde49866f}}, {{cite:412738d7571c079b301aa27cab2982ce26b58585}}, {{cite:02c19d1a2a548d01a6cc01a0b30415c6f193df19}} has attempted in the supervised setting.
Apart from the alignment of existing implementations and the introduction of new abstract theories,
our future work includes a comparison and investigation of other approaches, like the aforementioned {{cite:56b8e3467d5e92516c24dbf316c40638d5d0bc4c}},
and other approaches to AMT {{cite:b4ea327566c0dd3fb53fccd4d3f329ac91d29f4a}}, {{cite:4fcf69dcefd1fc9175545696bb5d38a407e28aac}} or plain SMT {{cite:ccf5acc026c6ace091bb3977159847f4fed32eb5}}
in view of their formal characterization in terms of {{formula:ab604b79-321e-44ee-8055-a762c5b2582b}} .
One of the major questions that we approach in our framework is how experimental perturbations should be represented in the model. While the focus is often on model parameters as representations of physical dials that are experimentally accessible, this direct correspondence is not obvious for a statistical model inferred from data. As an example, when couplings inferred by a pairwise maxent model were compared directly with physical contacts between protein residues, the model recovered only a subset of the real interactions even while recovering the statistical ensemble {{cite:66f23ea525b50d2c1f989c4730dfe9cd4409b22e}}, {{cite:e4a1ef26a8697e6f47791844e6e38064117ba3c7}}. For the pairwise maxent model, it is clear exactly what increasing a coupling does to the energy function, enhancing the tendency of a pair of neurons to coincide, but the actual result is a nontrivial modification of the entire distribution. For the experimentalist, it seems more natural to consider the problem from the perspective of the observable statistics, which can be perturbed in a controlled and precise manner. In the case of Boltzmann-type models, the relationship between observables and model parameters can be made exact, and it takes a simple form for small perturbations. Thus, our formulation is a theoretically and experimentally tractable framework for predicting the effects of perturbations.
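For Boltzmann-type pairwise maxent models, the exact relationship alluded to above is the standard linear-response identity: the derivative of a mean observable with respect to its conjugate field equals a covariance. A brute-force sketch on a hypothetical three-unit model (the fields and couplings are made up for illustration):

```python
import numpy as np
from itertools import product

def moments(h, J):
    # Exact means and covariances of a pairwise maxent model over
    # {0,1}^n by brute-force enumeration (feasible for small n).
    n = len(h)
    states = np.array(list(product([0, 1], repeat=n)), dtype=float)
    E = states @ h + np.einsum('ki,ij,kj->k', states, J, states) / 2.0
    p = np.exp(E)
    p /= p.sum()                       # Boltzmann distribution
    mean = p @ states
    cov = ((states - mean).T * p) @ (states - mean)
    return mean, cov

# Hypothetical fields and (symmetric, zero-diagonal) couplings.
h = np.array([0.2, -0.1, 0.0])
J = np.array([[0.0, 0.5, -0.3],
              [0.5, 0.0, 0.2],
              [-0.3, 0.2, 0.0]])
mean, cov = moments(h, J)

# Linear response: d<s_i>/dh_j equals Cov(s_i, s_j).
eps = 1e-5
h_pert = h.copy()
h_pert[1] += eps                       # perturb the field on unit 1
mean_pert, _ = moments(h_pert, J)
numeric = (mean_pert - mean) / eps     # finite-difference derivative
```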
| d | 00a03b3cde4c573a7edfcc34fb629599 |
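As a toy illustration of the point above - that turning up a single coupling in a pairwise maxent (Boltzmann) model enhances the coincidence of the targeted pair while reshaping the whole distribution - consider this brute-force sketch. It is illustrative only, not the authors' code, and is feasible only for small numbers of units:

```python
import numpy as np
from itertools import product

def pairwise_maxent(h, J):
    # Enumerate all binary states and form the Boltzmann distribution
    # P(x) ∝ exp(h·x + Σ_{i<j} J_ij x_i x_j); only feasible for small n.
    n = len(h)
    states = np.array(list(product([0, 1], repeat=n)))
    J_upper = np.triu(np.asarray(J, float), k=1)
    energy = states @ np.asarray(h, float) \
        + np.einsum('si,ij,sj->s', states, J_upper, states)
    w = np.exp(energy)
    return states, w / w.sum()

def coincidence(states, p, i, j):
    # Probability that units i and j are active together, <x_i x_j>.
    return float(np.sum(p * states[:, i] * states[:, j]))
```

Increasing a single `J[0, 1]` raises `coincidence(..., 0, 1)`, but every state's probability changes, not just those of the targeted pair.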
We train all networks (both the original and the optimized) using 4 adversarial training methods: Standard Adversarial Training (SAT) {{cite:8596897b317da628a12b605437eec9eeb592f09e}}, TRADES {{cite:8d3c669317b4b7166d65bd0136de6352adbee856}}, Misclassification Aware adveRsarial Training (MART) {{cite:d79fb1620ea26f2c89850df4cac008b5408278db}} and Robust Self-Training (RST) with 500K additional data {{cite:9bef0c8900cf06b7a211e20a0f7bf3da2e993d9e}}. For SAT, TRADES and MART, we apply their training strategies to train the networks for 100 epochs using Stochastic Gradient Descent (SGD) with initial learning rate 0.1, momentum 0.9 and weight decay {{formula:1973288f-6f34-4ce2-8305-bbfb0a92f49e}} . The learning rate is divided by 10 at the 75-th and 90-th epochs. For RST, we set the weight decay to {{formula:f5d0c8b9-e02a-4ff6-a8fc-8445d4c6c326}} , train for 400 epochs and use a cosine learning rate scheduler {{cite:c4a47a0d2a5dbe3de136eb7ff7a233e63070af6c}} without restart. We train the networks on both CIFAR-10 and CIFAR-100 with maximum adversarial perturbation {{formula:5cbb680a-3c45-441c-93eb-c1599512fd02}} . For all training methods, we use the PGD{{formula:1af1566b-6d1e-4618-8485-f0b33c95d9bd}} with step size {{formula:0894061f-6299-41bd-a3d3-6b4b2e83c811}} for its inner maximization. All experiments are performed on NVIDIA Tesla P100 GPUs with PyTorch implementations. Code and pre-trained weights are available on GitHub: https://github.com/HanxunH/RobustWRN.
| m | fe998243fab2211689e8fb57786c8e0f |
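For reference, the PGD inner maximization used above can be sketched as follows. This is a minimal NumPy illustration rather than the paper's PyTorch implementation; `grad_fn` stands in for the gradient of the training loss with respect to the input:

```python
import numpy as np

def pgd_perturb(x, grad_fn, eps, step, k, rng=None):
    # Projected gradient ascent on the loss inside an L-infinity ball
    # of radius eps, with pixel values clipped back to [0, 1].
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.uniform(-eps, eps, size=x.shape)   # random start
    for _ in range(k):
        g = grad_fn(np.clip(x + delta, 0.0, 1.0))
        delta = delta + step * np.sign(g)          # signed gradient step
        delta = np.clip(delta, -eps, eps)          # project onto the eps-ball
    return np.clip(x + delta, 0.0, 1.0)
```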
libvdwxc implements these partial derivatives, while the calling DFT code
is responsible for calculating the density-derivative {{formula:79bbf1d6-409a-4b1e-89d8-16f3d33688a2}}
and combining the calculated partial derivatives (REF )
and () to obtain the potential.
Any DFT code that supports GGAs already implements the requisite
functionality, which is also the requirement for calling libxc.
For completeness we provide the expression for {{formula:b16fb471-149a-41da-8637-c2a4528eed82}} in the
appendix (see also Ref. {{cite:473fc8b14e9b07fde5ce04354828073b7aa69f7a}}).
| m | 0f659fb282016692aed0a4a33cec174a |
Our proposed method differs from the original RuleFit in two ways. First, rules were created based on the transformed outcome rather than the outcome. Second, the original RuleFit fits the rule terms and linear terms into a sparse linear model using a least absolute shrinkage and selection operator (lasso) {{cite:1e50aa9993dbd5fdcb1a3826eac42086c974a32e}}. However, we constrained the same base functions for the treatment group ({{formula:bbd0941f-744c-41c5-a2b7-7b0bb61e1d86}} ) and control group ({{formula:449134cc-1cf7-406d-8362-fa049b704b06}} ) models. In other words, a group of parameters must be estimated for each base function. Therefore, instead of the lasso, we use the group lasso, which forces our base functions to be sparse at the group level. We summarize the algorithm of the proposed method in Fig.REF . The algorithm is illustrated in detail in the next subsection.
{{figure:0c02242d-24c3-43fc-82ae-3a1afa988210}} | m | 90b575f2b4cb4646badd68a65ae913d5 |
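The group-level sparsity above comes from the group-lasso penalty. A minimal sketch of its proximal (group soft-thresholding) step, not the authors' implementation:

```python
import numpy as np

def group_soft_threshold(beta, lam):
    # Proximal operator of lam * ||beta||_2: shrinks the whole group of
    # coefficients toward zero and, if its norm falls below lam, removes
    # the group entirely - the base function is dropped from both models.
    norm = np.linalg.norm(beta)
    if norm <= lam:
        return np.zeros_like(beta)
    return (1.0 - lam / norm) * beta
```

In the proposed method each base function contributes one coefficient per group model (treatment and control), so `beta` here would be that length-2 vector: either both coefficients survive (shrunk) or both are zeroed jointly.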
kutuzov2020uio report that different test sets from the shared task manifested strong preference for either the PRT or the APD method, and that this is correlated with the distribution of gold scores in the test set (but not with its language). If the right method was chosen, then using contextualized embeddings to rank words by their degree of semantic change consistently outperformed the shared task baselines (frequency-based and count-based approaches) and the methods relying on type-based embeddings with orthogonal alignment {{cite:8f8f8bc65a36b5cfdc147b5af0fc120eb097c0b7}}.
{{table:4fc716a2-9a7f-4575-9148-d1af49ce1436}} | m | 58c1ce4778bc7cee874aab42b150145d |
The dual-gated architecture of the device allows independent tuning of both the charge-carrier number density, {{formula:d98bccd3-f959-4426-9967-52d471955854}} , and the displacement field perpendicular to the device, {{formula:f80cea16-d1e8-4158-aacf-7d4df52022ea}} , using the relations {{formula:d1dfb8ab-1ad7-439d-8ccb-bf53281b4339}} and {{formula:5a1eb8ad-6d03-484f-a364-9cc2d87731e0}} , where {{formula:65175048-ec26-4023-a361-6e28d37bf2b6}} is the residual charge-density due to doping and {{formula:18e8bca6-2d7b-425a-9ba3-36c4246d3247}} the net internal displacement field. The values of {{formula:669a7052-7957-4dd7-bd1e-96a28a8538a5}} and {{formula:4231d185-1136-4fcd-9d28-bda4945d2773}} were extracted from the position of the Dirac point in the {{formula:982d5df8-7022-414d-8246-f1c7093c1ea8}} plane. A plot of the four-probe longitudinal resistance as a function of the top-gate voltage, {{formula:b3b841be-57db-4eda-817e-ff792d55bad8}} at the back-gate voltage, {{formula:ee11387e-00f5-4aa5-b73b-408de67b576c}} equal to {{formula:07c52f03-94d1-4241-b6cf-e01f6beb9f1b}} V is shown in Fig. REF (d). Arrows show the primary Dirac point (PDP) and the two satellite peaks, called clone Dirac points (CDP). These two satellite peaks are the result of band structure reconstruction of BLG due to the moir{{formula:0b5cfe83-aa63-48f0-9db4-f90496d8b1ed}} superlattice potential caused by a near-perfect alignment of the top hBN layer with the BLG {{cite:fbb3aad832aac48aa219f36db0a64918d910defc}}, {{cite:85cfa5f63d3eeaa8c7bfabf9d9970c49895cf6ac}}, {{cite:e73bebfcab5241733d7663639441a4a345cf7381}}. From the positions of the CDP, the angle between top hBN and the BLG was estimated to be {{formula:e665fd35-d4ee-4a05-b3bc-aeb1e35cd7ee}} (see SI). In Fig. REF (e) is plotted a contour map of the 2-probe device resistance versus {{formula:53bf4931-d6f6-45c1-851b-ce14e6a6f4f8}} and {{formula:fd7fba4f-f64b-44ec-b8a1-44d02def3bd2}} . 
The asymmetric feature seen near the PDP (outlined by dashed lines) is a consequence of the band-splitting in the BLG by the induced SOC {{cite:d16581784384d4cf58b1c9bb9cd4d908232a7789}}. Depending on the direction of {{formula:bc7782b2-83f1-42ab-83a6-31e98e018d15}} , this splitting occurs either in the conduction band (for positive values of {{formula:d8412984-b636-40e1-9503-dde28329c5b4}} ) or in the valence band (for negative values of {{formula:b81e19bf-3d91-4aac-8e7f-22eceb345d23}} ). Note that the appearance of the moir{{formula:8d34fcd9-3ff5-4dd6-8b24-a56dbe6687df}} potential induced subgaps in the band structure of the BLG ensures that the {{formula:7439551f-b12f-4878-b946-6d5d5266e4dc}} and the {{formula:34bc8d95-ff8d-4bd2-9d93-06a2a736b7a6}} points are no longer connected in the same band which significantly suppresses inter-valley scattering in our system {{cite:0526761301a53e9c916ad82eafbaafeb02a57876}}.
| r | 26d4e732c093d4b330dc3934de062e42 |
Supervised quantum machine learning builds on kernel methods {{cite:386a8b9e4336307634e7dd4187fe9cfa13e4ac17}} - algorithmic solutions for pattern analysis that can address non-linear relationships such as cancer evolution. Specifically, kernel methods (such as the Support Vector Machine) solve data-driven problems by transforming the input features into a higher-dimensional space, so that problems that are hard in the original space become easier in the embedded space. The same concept appears in the Gaussian process, where the time-continuous stochastic process is assumed to have Gaussian kernels. In modern Quantum Neural Networks, the input features are embedded into a high-dimensional Hilbert space to derive feature maps or representations of the inputs (the Hilbert space of a quantum computing system with {{formula:3b1fc599-e3fc-49b8-bfca-c56ec2cba2a6}} subsystems of qubits is {{formula:a5e23910-40e3-4477-9a26-182aff89349f}} ). Kernels are defined as positive definite functions of observations, taking real or complex values. Our {{formula:d93faf14-25c4-42cc-8b8a-5c35f0e82fcb}} Net is a kernel method in which the evolution of tumor dynamics is modeled by Equation REF , leveraging kernels {{formula:cf4df265-8277-49ce-86b8-ff18a92dba44}} . The quantum state after evolution for time {{formula:6d4991e2-2ca4-4032-8ef7-deeaec547131}} encapsulates the tumor dynamics up to the corresponding time through variational learning of {{formula:bfc61a6f-ed02-4dcc-962f-5f3fac63fbcb}} and {{formula:5fd7780e-1520-45fe-989c-ad948eb64076}} , and can be represented as {{formula:e88daa2c-0fdf-435f-ac2a-f2d53d6ee828}} with {{formula:340d9593-cebe-408c-9ab0-7d3cbea32984}} denoting the number of qubits. Measurement of the associated observables {{formula:be986527-3bc4-46f8-9470-54f129c86c4d}} yields the feature maps {{formula:29926a06-fbb3-4171-9398-d43ff1ca2d35}} = {{formula:513d8a75-3f33-42b6-9909-2a84c769f289}} .
| m | f2f1053a40f940cec1b6489c179e1bfe |
There are many studies showing that the jet helical structure can be produced either as a result of the black hole precession or the action of magnetohydrodynamic instabilities {{cite:f48b96a504265c0fbffe984161bf44f41d3ff551}}. The long-term brightness variations and apparent motion of superluminal jet components are explained by helical jet structure arising as a result of the black hole precession
{{cite:0d42a1934e1fef64e276e1f38f49c3ad7239d008}}, {{cite:c83cd6c1dc91a8fc75d90392e1f7c39d5af5b667}}, {{cite:89f25002fcedb911f3855f9d756e21db1387b7cc}}, {{cite:be9b10c228030ce8086a7ea9733f130255ccecf8}}, {{cite:e2774c662535442df6ed956554f5191511a6b850}}. The shape of spectral energy distribution of the blazar Mrk 501 and its variations also find explanation under the assumption about the helical jet {{cite:a31e589ce98ecf36adb295e5c12bd2a1f24d65f1}}. The two-peaked light curve of the blazar OJ 287 is interpreted by a double helical jet that originates from precession in the binary black hole system {{cite:34d2e06d0488f54d0ddc6e3da114136e60e10d95}}.
Therefore, it is natural to assume that the blazar S5 0716+71 may have a helical jet. This is supported by the observed periodic change of the inner jet position angle {{formula:721cb381-2eb4-432f-973c-bfe974f548a9}} {{cite:043ec2a88fff8c7fc91ae1a4356af8df08129545}}, {{cite:1adc2122d5e49f307ec3449d88556f6ec4f9e43e}}. From the analysis of the optical light curve, {{cite:bba7d41822fbd5e351ee08716b6538f0588db84b}} conclude that the jet of the blazar S5 0716+71 precesses. To explain the multi-wavelength behaviour of the optical and {{formula:d8587fe9-181c-4e68-a37a-a3c027ccf00f}} -ray flares of S5 0716+71, {{cite:592ef6b4684aae35db8bbed9769ddfb277693b91}} and {{cite:1ad7efb403619c4b02b4f34f4ca31f8bf9fdb06a}} supposed that a shock wave moves along a bent (possibly helical) trajectory.
In this paper, taking into account both the period and amplitude of the {{formula:b5175d1f-328b-47f0-8185-d82c5929cdba}} variations {{cite:1adc2122d5e49f307ec3449d88556f6ec4f9e43e}} and assuming that the jet matter lies on the surface of an imaginary cone, we obtain the geometrical parameters of this cone (Section ). This result does not depend on whether the jet components move ballistically or not. The value found for the cone's half-opening angle agrees well with the jet opening angle determined from VLBA maps {{cite:d4219c2879f79e3f09f6f147aa88f9925fbda303}}. On the one hand, this confirms the validity of our assumption about the jet geometry. On the other hand, it shows that geometric effects substantially affect the variability of the emission flux from S5 0716+71. {{cite:1215cd33a9961001e29ae473cb2194fee7ebb045}} come to the same conclusion based on the analysis of long-term data on optical variability. They reported that, first, the optical color index correlates only weakly with brightness and, second, the amplitude of variability is proportional to the magnitude, which can be easily explained by variations of the Doppler factor. Also, the different values of the apparent speed of jet components and the differences in their position angles argue in favour of the geometric interpretation of the long-term variability of the blazar S5 0716+71 {{cite:043ec2a88fff8c7fc91ae1a4356af8df08129545}}. It is worth noting that there is a contrary conclusion as well. Based on B-, V-, and R-band optical observations, {{cite:4fbc9fc44fee9dc67fc66e4259c7332c17a9e4d6}} found that S5 0716+71 shows a blue-when-brighter chromatism attributable to variability caused by shock waves passing down the jet. However, the color index behaviour requires further comprehensive study, as it can be influenced by differences in the step and amplitude of variability at different frequencies.
| d | b593bc66b3df7a9066214ce18ba8a250 |
In the simulations, we use random initial value data with a {{formula:25cdbec3-90f5-4e0f-9449-7854c1e444e1}} mesh so that the size of the input data is {{formula:83af5937-ed75-4c08-b20c-92d2665ff71a}} containing a pad as a boundary condition. Also, {{formula:87952c29-c8cb-4fc9-9084-84597ee12bab}} (Heat, Fisher's, AC) or 9 (Sine, Tanh) for {{formula:c507705a-2d4e-4c88-bfc6-8a2e3c5f94ed}} is fixed depending on the given equation, and a {{formula:140495d9-f7c1-4077-b141-f3b9497109c6}} convolutional filter is used with a stride of 1 in Eq. (REF ). Hence, the filter has 10,000 ({{formula:49008f22-f8c8-4840-8c2c-35fc6d5c7ee9}} ) chances to learn the evolution of the resulting images, so training a model using only two consecutive images is enough to optimize nine or thirteen model parameters ({{formula:ba3d608f-e354-47bb-b7f8-f9ead6730f8d}} ). As an optimizer, ADAM {{cite:4f90d276651f5b84c89c5ba277cc82d1ffdc9a9f}} is used with a learning rate of 0.01 and without any regularization. Instead, we apply early stopping {{cite:dd246f80e1749dd7f8cca7c88fc68c97efda4e41}} based on validation data to avoid overfitting. To demonstrate the approximation {{formula:7cfeaeeb-b9ff-409f-94c1-f391db123900}} for non-polynomial functions {{formula:85e62748-98a9-4659-9dab-625abc0a71c0}} , we additionally consider sine and tanh functions besides heat, Fisher's, and AC equations.
| r | 3c484673564ed047388993dad8f7e2ac |
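To see why a single small convolutional filter can capture the evolution operator, note that one explicit Euler step of the 2-D heat equation is exactly a 3x3 stencil. The sketch below is illustrative only - in the paper the filter weights are learned from two consecutive snapshots, not set by hand:

```python
import numpy as np

def heat_step_kernel(alpha):
    # 3x3 stencil implementing u_{t+1} = u_t + alpha * (5-point Laplacian),
    # i.e. one explicit Euler step of the 2-D heat equation.
    return np.array([[0.0,   alpha,           0.0],
                     [alpha, 1.0 - 4.0 * alpha, alpha],
                     [0.0,   alpha,           0.0]])

def conv2d_valid(u, k):
    # Stride-1 'valid' correlation: a padded (n+2)x(n+2) input -> n x n output.
    n = u.shape[0] - 2
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sum(u[i:i + 3, j:j + 3] * k)
    return out
```

Every one of the output pixels constrains the same nine filter weights, which is why two consecutive images already provide thousands of training signals for so few parameters.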
Another important set of demonstrations includes the macroscopic cases shown in Fig. 2(e-h) and Fig. 5. We chose the example of Nd:YAG for concreteness, familiarity, and relative ease of theoretical description. We wholly expect that there are better lasing platforms to realize the physics proposed in this work. However, given the size of the design space, we have chosen a simple system somewhat arbitrarily for concreteness. For the gain medium, an obvious choice to consider is electrically pumped semiconductor gain media, which can provide rather high gain, need not be laser pumped, and can provide gain over a very broad frequency range, enabling compatibility with many different nonlinear materials. Another important advantage of semiconductor gain media is that they could be integrated into nanophotonic platforms which present high single-photon nonlinearities (at least 10 orders of magnitude higher than the bulk realizations presented, due to the correspondingly reduced mode volume {{cite:7d31c2ab71e6a86e7fa7116b4a38fe8546556f0c}}). Moreover, recent advances in nanophotonics enable the construction of very small, and very sharp transmissive elements (in particular, consider the recent developments in Fano resonances and bound states in the continuum in photonics, as well as earlier seminal work on high-Q photonic crystal defect cavities, as well as microsphere and microring cavities {{cite:40e762015c2e3b8b28e2c9312ac81645a9891cd3}}, {{cite:d44bc47e6281597baf70799da54cbc7b4e2edc71}}, {{cite:a76f67f925fd8ad532552bce36fe5dfc4f45d25a}}, {{cite:990eb40825d8aadbdda8b7c42aa5ceb8ff3b6fbd}}). These small cavities could enable production of photon states with a small mean number but uncertainty well below 1 photon, looking more like a “true” Fock state. 
In terms of experimental realization, because the effects are optimized by having sharp loss, high nonlinearities, and regions of low loss: we expect that experimental observation of this effect will require mitigation of mechanical and thermal noise (via external-feedback stabilization techniques {{cite:06e784116d0ffa60419e44513b95ec94298f324b}}), as well as noise arising from the pump (e.g., quiet pumping schemes, as applicable to semiconductors).
| d | 48c32a0f5892db7d2e2372b12eb01f78 |
DP theory is based on the rigid band approximation. Surprisingly, this
approximation works well for most cases when the electronic bands around the Fermi
level are not highly degenerate{{cite:b9bf1cedf0bc7987a69e01c362620b9def40e7a2}}, {{cite:02af785c818beec93c9e15430b56083730f21f8a}}. Besides,
software packages such as VASP, Quantum ESPRESSO, and
BoltzTraP{{cite:f5d06b9381ed97ecde32144281a68b467e8aa724}}, {{cite:e773ee9da81d29b84a4598fa878939a39cad7739}}, {{cite:5d30d82c8f3368fb8b937b092cacd4fc04ba4879}}, {{cite:312e72a93dbfa728593e6621adcf1fcd55884231}},
to name a few, can also add electrons to or remove electrons from the system
through a compensating uniform charge background of opposite sign to maintain
charge neutrality, which additionally supports the validity of this approximation.
| d | fc06bfb4869baef4ab606dd24ca60772 |
The {{formula:92c89411-91c4-4c7c-9b16-667dd239e9fd}} obtains the classification results including {{formula:196ac4bf-ff4a-4a2a-a1f8-c451e71492a4}}, {{formula:24058491-ad66-4fb9-8ffb-9605266fa324}}, {{formula:b2461a24-2380-4a7e-a87c-48433931dea4}}, and {{formula:1afd8f9c-e053-455f-be8b-638c48d3fa27}} over Clean, PPMD, PMLM {{cite:26cd393673d6bc3cf12a8dc4c79e4c918b214f93}}, NbAFL {{cite:955bb317f1d339ed2e6d3f2b36d9fb30b2cbaff2}}, and MLPAM {{cite:48b1e9f01cbae4c68ba42c874f3892324304d481}}, as demonstrated in Figs. 5(a)-(e) to 8(a)-(e).
In PPMD, the maximum value of {{formula:801968a1-3034-4f0e-bcdb-319e35a562fa}} is 93.75% on the Arrhythmia dataset using the ANN classifier. The minimum value of {{formula:6ba4657f-f5a7-4de7-b2cb-33435525dfb4}} is 62.50% on the Hepatitis dataset using the KNN classifier. The average value of {{formula:edbfb1af-af6f-4509-a5d7-485a49da5995}} is 72.20%, 73.40%, 73.61%, 70.03%, and 84.11% over Heart Disease, Arrhythmia, Hepatitis, Indian-liver-patient, and Framingham dataset, respectively.
The highest value of {{formula:220e3559-31da-4fbd-9d44-f819a0195f69}} is 94.11% on the Indian-liver-patient dataset using the NB classifier. The lowest value of {{formula:5f71cf05-4aa5-446e-89c5-fe21322db31e}} is 43.33% on the Heart Disease dataset using the ANN classifier. The average value of {{formula:85494288-6ba6-46bc-960e-bedfe8c3e201}} is 69.33%, 53.51%, 74.18%, 78.23%, and 76.15% over Heart Disease, Arrhythmia, Hepatitis, Indian-liver-patient, and Framingham dataset, respectively.
The maximum value of {{formula:fd1175d0-9bea-4864-acff-10b4d0948d97}} is 100% on the Indian-liver-patient dataset using the SVM classifier. The minimum value of {{formula:7a4cd43c-8f36-4fc3-9d7c-6e9d4bac1045}} is 39.13% on the Arrhythmia dataset using the ANN classifier. The average value of {{formula:482ed085-efe8-4fbd-b9b1-d7411d0e8bde}} is 59.62%, 62.48%, 82.30%, 76.95%, and 84.06% over Heart Disease, Arrhythmia, Hepatitis, Indian-liver-patient, and Framingham dataset, respectively.
The highest value of {{formula:b4345eed-bde4-492e-a38a-192b913e4e36}} is 87.99% on the Hepatitis dataset using the RF classifier. The lowest value of {{formula:0b977104-9275-4eeb-bf11-583a39e95dc0}} is 41.37% on the Arrhythmia dataset using the ANN classifier. The average value of {{formula:19a6dec9-3ad0-486d-b83f-115929385cb1}} is 62.84%, 57.20%, 79.48%, 75.97%, and 78.77% over Heart Disease, Arrhythmia, Hepatitis, Indian-liver-patient, and Framingham dataset, respectively.
The datasets' performance descends in the following order: Framingham, Hepatitis, Indian-liver-patient, Heart Disease, and Arrhythmia.
{{figure:064e1fd9-fc65-4842-9ffc-01499690caa5}}{{figure:37ade863-cfab-405a-a760-b77c885c7b12}}{{figure:ac5c9127-f52d-4265-8491-922e1789013b}}{{figure:136cac18-2f8a-4362-807b-70f6ecb9f01b}} | r | 5db5e57d6a28ccc75113a637c792fc64 |
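The four reported metrics are formula placeholders above; assuming they are accuracy, precision, recall, and F1-score (a labeling assumption, not stated in this excerpt), they all follow from the binary confusion matrix as in this sketch:

```python
def binary_metrics(y_true, y_pred):
    # Accuracy, precision, recall and F1 computed from TP/TN/FP/FN counts.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return acc, prec, rec, f1
```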
In contrast to the literature on semantic textual similarity, where deep neural architectures predominate - e.g. {{cite:09b355e3c6feaa990154e44a9b1348a3b4955771}} - text de-duplication overwhelmingly uses {{formula:44524314-b9eb-4bae-9e3b-92c1172ddd67}} -gram methods.
There have been few efforts to formally evaluate the adequacy of {{formula:a6978e60-e153-4d3a-8554-ac6f739e800f}} -gram based de-duplication or to explore potential performance gains from neural text de-duplication.
This study builds a large de-duplication dataset and develops neural methods for robust textual de-duplication that significantly outperform {{formula:e0f3292b-fdd5-4930-9d09-9648c6407916}} -gram based methods and scale efficiently.
| i | c7ba6cfc16436d2eb06207a585a7d940 |
As an information distance between two random variables {{formula:c864c402-d88c-4638-bd8c-3185421131f6}} and {{formula:8bf00060-06f3-4c4d-9c55-0ab42af11352}} , {{cite:5d6f0894f11f19b019e110f8c64852791859c40f}} proposed a directed divergence defined as
{{formula:a50f7cf8-0bc2-4ccd-8407-dfd300a01cf9}}
| i | ca6251a56d07efe93907283ac7c81985 |
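The directed divergence in question is the Kullback-Leibler form; for discrete distributions it can be sketched as:

```python
import numpy as np

def directed_divergence(p, q):
    # D(P || Q) = Σ p_i log(p_i / q_i): asymmetric, non-negative,
    # and zero iff P and Q coincide (terms with p_i = 0 contribute 0).
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

Note the asymmetry: D(P || Q) generally differs from D(Q || P), which is why the divergence is called "directed".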
We used a within-subjects study design where participants were asked to imagine they were at a train station. Tasks were then completed on both a touchscreen and LMC using on-screen point-and-push. To prevent step memorisation, the tasks in the two conditions differed slightly.
The order of the touch and non-touch conditions was randomised between participants and tasks were ordered so they increased in difficulty (Table REF ).
The system recorded an activity log for each participant and after completing each condition, participants were asked to fill out the System and Gesture Usability Scale surveys {{cite:2ecf1b05f5f3729ff705af0c0c97b8dacf4d92e3}}, {{cite:c69ca247ccd2bb46d6fb60597b2f91a60c6e46e0}} (SUS/GUS). After both conditions, they also filled out demographics, interface preference, and overall system feedback questions. Finally, we provided information about the cleanliness of the screen and asked for their preference again.
| m | dab44c73c5f5e04f0ddecfcf73a29065 |
Restricting to abelian C*-algebras and to orthogonal projections,
we show that our parameters coincide with the
Choquet capacities of Haydon and Shulman (see {{cite:8a8baa0fe4d6cd21675ab854ef709b6688cc251b}}).
The positive operator {{formula:0aa1b6e4-8330-430b-8c89-0d6885babb22}} can in this case be thought of as a measurable
cost function in the sense of the theory of optimal transport; in the special case of a bounded cost function,
our duality result recovers the classical Monge-Kantorovich theorem {{cite:97d788e24082399d72cd26244bae7f57ee673358}}.
On the other hand, restricting to the case where the C*-algebras are matrix algebras,
we see that the duality result implies
the quantum versions of Strassen's Theorem established in {{cite:1b2da12f5f938205cc9ec016b965812a9cee81d7}}, {{cite:e18b01d473be8f39b18101678df4539fee5709bd}}.
Thus, our result can simultaneously be thought of as a quantitative extension of a
C*-algebra version of Strassen's Theorem and of
a non-commutative version of Arveson's Null Set Theorem.
| i | a0878f09c074f44ce33303a9028539e5 |
The Gauss-Seidel method is known to be convergent when {{formula:df3418b2-174a-44fa-aff8-9aedffff947f}}
is symmetric positive-definite, e.g. {{cite:718108ad64e986fa9aa77771e62f4e6945582080}}, or strictly or irreducibly diagonally dominant, e.g. {{cite:1f081dd5f14ac009c4b8a71d0ca819a50e4a3a2b}}. The case when {{formula:b8c46b95-f841-4abc-bfd7-7acf0cdcadd2}} is only positive semidefinite
(with positive diagonal) seems to be less well-understood, and our approach sheds more light on this case. In particular,
numerical results of the type shown in Figure REF apply here.
| m | dd4a2590a51d3644cb7f5018dcfa55a4 |
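For concreteness, a plain Gauss-Seidel sweep can be sketched as follows (illustrative only; as discussed above, convergence is guaranteed in the symmetric positive-definite and strictly diagonally dominant cases):

```python
import numpy as np

def gauss_seidel(A, b, iters=200):
    # Forward sweeps: each component is updated in place using the
    # most recent values of the others (requires nonzero diagonal).
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x
```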
Since the policy {{formula:c8586675-02df-4583-babe-8d1e13d9f63c}} is a discrete binary variable, it is intractable to optimize the policy network with backpropagation due to non-differentiability. To resolve this issue, we propose using the Gumbel-Softmax sampling method {{cite:ed59f9d304c768ad251fbb7a76909c0ce4cddac7}}, {{cite:94d1aafdef6ac74f3723cd57b78d895ca8cd57ef}} to generate the actions (freeze or fine-tune) from a discrete distribution. We refer to UAF with Gumbel-Softmax training as UAF-Hard.
| m | ebf79bd58716a2716d994e9407595798 |
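A minimal NumPy sketch of Gumbel-Softmax sampling (forward pass only; the straight-through gradient trick that makes the hard sample trainable requires an autograd framework and is omitted here):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, hard=False, rng=None):
    # Relaxed sample from a categorical distribution: add Gumbel(0, 1)
    # noise to the logits and apply a temperature-tau softmax.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-12, 1.0, size=len(logits))
    g = -np.log(-np.log(u))                      # Gumbel(0, 1) noise
    y = (np.asarray(logits, float) + g) / tau
    y = np.exp(y - y.max())
    y = y / y.sum()
    if hard:
        # "Hard" variant: the forward pass emits the one-hot argmax.
        one_hot = np.zeros_like(y)
        one_hot[np.argmax(y)] = 1.0
        return one_hot
    return y
```

With two logits (freeze vs. fine-tune) and `hard=True`, this yields the discrete binary action while keeping the underlying distribution reparameterizable.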
Some recent works try to solve the representation limitation directly through a joint Q value function with complete expressiveness capacity. QTRAN learns a joint Q value function with complete expressiveness capacity and introduces two soft regularizations to approximate the IGM principle. QPLEX {{cite:878928e2755b446765b76f33827df5d217e44cf2}} theoretically achieves complete expressiveness under the IGM principle through a dueling mixing network, where the complete expressiveness capacity is introduced by the mixing of individual advantage functions. However, since the state space and the joint action space grow exponentially with the number of agents, it is impractical to learn the complete expressiveness in complicated MARL tasks, which may result in convergence difficulty and performance deterioration.
| m | d489ee8f814246b61d3f21075d3077d9 |
For validating the proposed approach, we have evaluated architectures with vanilla algorithms such as Maximum Likelihood Estimation (MLE) {{cite:a2d923005720493457f807b21d51fd288a103b50}}, Support Vector Machines (SVMs) {{cite:b399e9551fdad3732964bcf8e249fb08139c4f24}} {{cite:b09601f8871d4c54b3314d61f4c837fe02a8a1c3}}, Convolutional Neural Networks (CNN) {{cite:9c63975b7b045e898fe2f5049e9febd65daf9132}}, and Long-Short Term Memory (LSTM) {{cite:e5addf237db1eb44eb1ff20648eb5503d53d5bc2}} to determine the most suitable technique for predicting turn-taking in conversations.
We have found that the CNN models achieved higher accuracy than the other ML techniques on all three datasets, and that the content improved the overall performance of all approaches for the topic-oriented finch and multibotwoz dataset but not for the chit-chat dialogues of the sitcom dataset, where the ML models have not been able to beat the baseline. Finally, we found that the size of the corpus had a very positive impact on the accuracy for the content-based deep learning approaches, since we observe that those models perform best in the multibotwoz dataset, i.e. the largest one.
| i | 781e6e3b28677d34eab918f2fd78d10c |
Inflationary cosmology {{cite:65c6407e75c498be9fa9332593153aa7cf37fdcd}}, {{cite:293b170bee3e4c5f1937bbde0c29610610d9e9df}}, {{cite:ce32403479d8d9ccc1857d54b23a5f13462dc5c4}}
addresses many conceptual challenges of modern physics. It has become an integral part of the cosmological standard model, which assumes a period of inflation, followed
by the inflaton oscillation epoch, reheating and a long period of radiation–dominated Universe evolution. These ingredients are sufficient to explain the observed structure of the Universe
{{cite:8dc5aab80f35c1b20d99f581d008294f8546e1cc}}.
| i | 7c4a8da000a164f83d6ba912c3bb41b5 |
In recent years, advances in machine learning (ML) have achieved astonishing successes in many applications that transform our society, e.g., in computer vision, natural language processing, and robotics.
Traditionally, ML training tasks often reside in cloud-based large data-centers that process training data in a centralized fashion.
However, due to the rapidly increasing demands for training data, high latency and costs of data transmissions, as well as data privacy/security concerns, aggregating all data to the cloud for ML training is unlikely to remain feasible. To address these challenges, federated learning (FL) {{cite:d5a49542c08109af916d3e6dab493e7249ed5197}} has recently
emerged as a prevailing distributed ML paradigm.
FL employs multiple clients, typically deployed over wireless edge networks, to locally train a learning model and exchange only intermediate updates between the server and clients.
FL provides better avenues for privacy protection by avoiding the transmission of local data, while also being able to leverage parallel client computation for training speedup.
| i | e58d6913fed92742c2cd42226aac3d77 |
In addition to the specific problem of variability reflected in multiple data sets, observations of epidemic dynamics are often incomplete in various ways: only certain health states are observed (e.g. infected individuals), data are temporally discretized or aggregated, and subject to observation errors (e.g. under-reporting, diagnosis errors). Because of this incompleteness together with the non-linear structure of the epidemic models, the computation of the maximum likelihood estimator (MLE) is often not explicit. In hidden or latent variable models which are appropriate representations of incompletely observed epidemic dynamics, estimation techniques based on Expectation-Maximization (EM) algorithm can be implemented in order to compute the MLE (see e.g. {{cite:a5d3d0ba217cf2fdcda8b6bf4836c06a20effd2c}}). However, the E-step of the EM algorithm requires that, for each parameter value {{formula:f058d25e-7892-449d-80fe-65d20d963d96}} , the conditional expectation of the complete log-likelihood given the observed data, {{formula:05a8c053-9c53-40e2-816b-592ad185daff}} , can be computed. In mixed-effects models, there is generally no closed form expression for {{formula:a9bc4d79-d15e-40c3-8f4c-bec55474ef1b}} . In such cases, this quantity can be approximated using a Monte-Carlo procedure (MCEM, {{cite:319bc0cc34c4dd58913bcd8f964279c4d1a41b60}}), which is computationally very demanding. 
A more efficient alternative is the SAEM algorithm ({{cite:414da559cec8381a9e4a0c65df9b6dbf2da5b241}}), often used in the framework of mixed-effects models ({{cite:71e129f8db16ae307b9fc5f4118031113b0ff683}}), which combines at each iteration the simulation of unobserved data under the conditional distribution given the observations and a stochastic approximation procedure of {{formula:c4a4b38b-e988-43cc-b9ce-8118042b8066}} (see also {{cite:42276095d54b56b46eb949083f309fb51ba38c36}}, {{cite:99fbccfec79e9cf7298904df60bc5d9c0fbdf010}} for the study and implementation of the SAEM algorithm for mixed-effects diffusion models).
| i | 6c26c37f9ffd19413bc7b56377bbab1b |
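The stochastic approximation at the heart of SAEM can be illustrated on a toy quantity. This sketch shows only the averaging recursion Q_{k+1} = Q_k + gamma_k (q_k - Q_k) with a decreasing step gamma_k = 1/k; the epidemic model, the simulation of the unobserved data, and the M-step are all abstracted into `sample_fn`:

```python
import numpy as np

def stochastic_approximation(sample_fn, n_iter=5000, seed=0):
    # Robbins-Monro averaging: with gamma_k = 1/k the iterate converges
    # to the expectation of the simulated quantity, using one sample per
    # iteration instead of the large Monte-Carlo batches of MCEM.
    rng = np.random.default_rng(seed)
    Q = 0.0
    for k in range(1, n_iter + 1):
        q = sample_fn(rng)       # one simulated "complete-data" quantity
        Q += (q - Q) / k
    return Q
```

This one-sample-per-iteration update is precisely what makes SAEM computationally cheaper than the MCEM approach mentioned above.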
We consider estimations via the three-stage kernel anchor regression with disjoint data set projection (KAR) and two-stage kernel anchor regression with joint data set projection
(KAR.2). The baseline approaches include the kernel-based nonlinear methods: kernel instrument variable regression (KIV), kernel partialling out regression (KPA), kernel ridge regression (KReg); and the linear models: linear anchor regression (AR), linear instrument variable regression (IV), linear partialling out regression (PA) and ordinary least square (OLS).
We use the Gaussian kernel for all kernel methods, where the median heuristic is used for choosing the bandwidth {{cite:3c0649d46b933825199c7af927fe4df34d01a6cb}}.
For the synthetic example, we set {{formula:fdc5eb9a-e0a5-4b8c-b503-5809e2c2005b}} for all anchor regressions (KAR, KAR.2 and AR).
| m | f843e665f215490ca53277288b642e48 |
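The median-heuristic bandwidth and the Gaussian kernel can be sketched as follows (illustrative helper functions, not the experiment code):

```python
import numpy as np

def median_heuristic(X):
    # Bandwidth sigma = median of the pairwise Euclidean distances
    # between samples (off-diagonal pairs only).
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt(np.sum(diff ** 2, axis=-1))
    return float(np.median(d[np.triu_indices_from(d, k=1)]))

def gaussian_kernel(X, Y, sigma):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```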
with the functions {{formula:51ce3801-cdb1-4c8d-b1d5-60bd99ba54bd}} belonging to a suitable neural network space. The use of neural networks in this setting has the advantage that one can easily implement meshfree methods by randomly sampling collocation points (see {{cite:8e6bff9226dd272b4eca38d0d0e9c4ec7e883dfe}}, for example), and thereby be able to deal with high-dimensional problems, where most classical numerical PDE methods become unfeasible.
| m | 0f6f11e01e38704083f38801df1e2557 |
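The meshfree collocation idea above can be sketched minimally: sample random points instead of building a grid, and penalize the PDE residual there. The helper names below are hypothetical and the residual function is left abstract:

```python
import numpy as np

def sample_collocation(n, dim, rng=None):
    # Meshfree sampling: draw n collocation points uniformly from the
    # unit hypercube, so the cost does not blow up with the dimension.
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(0.0, 1.0, size=(n, dim))

def residual_loss(u, pde_residual, pts):
    # Mean squared PDE residual of the candidate solution u at the
    # sampled points; this is the quantity the network minimizes.
    r = pde_residual(u, pts)
    return float(np.mean(r ** 2))
```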
Action recognition in videos is a fundamental task in computer vision. Recently, with the rapid development of deep learning, and in particular, deep convolutional neural networks (CNNs), a number of models {{cite:9944d507af1df170db8e9e9221bb5d5805f2fac1}} {{cite:49a2044dd9b750e2bfa02f99d4229513aa5b49a2}} {{cite:575563cf410256a17ca0389f648d9e0a13987452}} {{cite:fd864b699b19903accf81c66d3dc6ac6ce566706}} have been proposed for image recognition. However, for video-based action recognition, a model should accept inputs with variable length and generate the corresponding outputs. This special requirement makes the conventional CNN model that caters for one-versus-all classification unsuitable.
| i | f175e27b96b7dcf2f0e9e83f0dbf007f |
To evaluate the above algorithm, we numerically solve the time evolution that implements the protocol for the density matrix, using QuTiP {{cite:d32ed50faa0069ae7f5ee4d17b276fb896c33d7b}}, {{cite:34904651178c0cd33b9dac5f7edefdeb5737270c}}, satisfying
{{formula:f8407953-f677-4212-a4d9-997d31b2f030}}
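A minimal stand-in for such a density-matrix evolution (a plain-NumPy Lindblad integrator for a toy two-level system; the Hamiltonian, decay rate, and time grid are illustrative choices, not the protocol's parameters, for which QuTiP's `mesolve` would be used in practice):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator
H = 0.5 * sx                                     # drive Hamiltonian (hbar = 1)
L = np.sqrt(0.1) * sm                            # collapse operator, rate 0.1

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H, rho] + L rho L† - (1/2){L†L, rho}."""
    comm = -1j * (H @ rho - rho @ H)
    diss = (L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the ground state
dt, steps = 0.001, 5000
for _ in range(steps):                           # classical 4th-order Runge-Kutta
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Because the Lindblad generator is trace-free and Hermiticity-preserving, the integrated state should remain a valid density matrix up to integration error.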
| r | 3d144fa2f7066798fc20e2a73f70e290 |
We have also shown that the general procedure of
{{cite:b6d9dc8d552a3126e35229ae5237ea5c9b284f48}}, which solved for the non-local boundary
conditions related to QNMs in terms of monodromy data of the Heun
differential equations involved, can be used effectively to find QNMs
in the zero temperature limit. This limit led us to consider the
monodromy parameters to be determined by the Painlevé V transcendent,
obtained from the confluence limit of the Painlevé VI. The expansion
of the Painlevé transcendents in terms of {{formula:00eb2e54-ffa9-4f1e-89c7-1f41e5216701}}
Virasoro conformal blocks, as studied in {{cite:010f410f0d7fa8074ba161cb8e00fb06c5706b35}},
provides the same interesting factorization of the
four-dimensional conformal blocks in terms of
(semi-classical) two-dimensional ones, as first anticipated by
{{cite:de3f342440b4ce5b0ee1db994e35c76554619ea6}}. The new ingredient here is that the
two-dimensional conformal blocks are of the irregular type, as also
appeared in the black hole perturbation context in
{{cite:71788774203ba137ce77bc201d26974910621027}}. The semiclassical conformal blocks
associated to the radial perturbations seem to arise from a unitary
theory, and the Vertex operators associated to the inner and outer
horizons can be interpreted as thermal states, with their degeneracy
equating the entropy of the scalar perturbation with quantum numbers
given by {{formula:c30be94a-2970-4d03-9014-55bd515b6936}} and {{formula:7e21c6d3-2502-4d44-a7e2-3caa5f01b461}} as it is absorbed by the black
hole. Finally, the study of QNMs at small values of {{formula:9f431d4b-1e33-4618-a1be-62b679d34c12}} showed the
relevance of short Virasoro representations for the intermediate
states of the conformal blocks, again anticipated in a different
context by {{cite:604bd1d8e88c7122548bb164038d3d920f80dd84}}.
| d | 17a0f993f311268e416bc7f8fb12806c |
SMILES-based methods {{cite:ee54f2ceaa38df32716799837dd22615cde9eb52}} are infeasible for scaffold-based tasks (a drug design scheme in which the drug candidates are designed from a given scaffold), since molecular structures can substantially change through the sequential extension of the SMILES strings. Also, as explained in the Introduction, atom-based generation methods such as You et al.'s GCPN {{cite:5f5bd2e1b101e91ddefe16b49252772ed05b95b0}} inherently suffer from generating unrealistic molecules. Thus, we focus our discussion on motif-based generation methods.
| m | 4f23f61280c1e5295558173d4e07366a |
The current trend on video captioning is to perform Dense Event Captioning (DEC, also called Dense-Captioning Event in videos in {{cite:056586ab2ee5dd5f4351794704ced70dd1d55095}}). As one video usually contains more than one event of interest, the goal of DEC is to locate all events in the video and perform captioning for each of them.
Clearly, such dense captioning enriches the information we obtain and is beneficial for more in-depth video analysis. Nevertheless, to achieve this goal, we need to collect the caption annotation for each event along with its temporal segment coordinates (i.e., the start and end times) for network training, which is resource-consuming and impractical.
| i | 14ca96334ed4c5149016f0f22defae20 |
We also compare against SAC {{cite:20874e02018d858cbbdd9e23450cb978b0b2c136}} and QT-OPT {{cite:31f16c4736c1918bbc553d294d586b7b17c7797c}}, two commonly used off-policy RL algorithms. Despite significant effort, we have so far not succeeded in training a successful policy using either method on this goal-reaching task (as shown by scores of 0 in Figure REF for both algorithms) or on the simpler task of returning a ball to the opponent's side of the table. We do not claim that off-policy methods do not work on this problem, however we do observe that it appears significantly more difficult to train policies using this approach compared with either on-policy RL or GoalsEye.
{{figure:f218a122-f74d-4e03-953e-7102946e0881}} | r | c7585a2aba5065f0126e3503ab89627a |
Remark 1.6 In the literature (see e.g. Hatcher {{cite:a58719ccd2018b4bd51626024b490564a4fbcbff}}), an (open) {{formula:4929ea3f-f014-40a6-a5d0-10b48fae7470}} -cell of a CW complex is a topological space that is homeomorphic to the open unit ball in {{formula:8aa6fc47-1b47-4769-8f6e-d6ad6cee78f5}} for {{formula:a9aef13e-44b9-44ea-b575-59a79cf70960}} (a 0-cell is a single point). The {{formula:a0b6464e-5773-4bf1-87dd-4cdbb6103fed}} -cells of the CW complex defined in Theorem REF are the nonempty sets {{formula:80dd934b-40d1-4f5f-b494-39f4e261d8c8}} with {{formula:69d8cbc6-df51-4606-a871-d802f3855e2f}} ({{formula:4bc6f0d6-8749-4a66-adad-d6af3484be4f}} ). Here, for a {{formula:5cebf38c-b852-48d5-987d-418a8957bf89}} -manifold {{formula:1429670f-a66e-4908-8c41-bebda05bc6e7}} with boundary, {{formula:8e2427f7-f4d0-451f-83cc-7f4d3634c731}} denotes the set of {{formula:b68e1929-0492-4804-bc27-e9e6784ff68f}} having a neighborhood that is homeomorphic to a {{formula:40f717c4-d9f2-477b-8603-6c8692fa72f2}} -cell (contrary to the topological interior {{formula:37a9aced-7afc-43ff-bdfe-d3bce84d7289}} of a set {{formula:2ffcd4b9-eebc-4b24-8a1a-3df84c8db06a}} w.r.t. some ambient space). We use closed cells for notational convenience.
{{figure:ad6df757-c815-4b5c-ac7d-2120f3eaf4d3}} | r | c214e6a848957bb3f98c95e5a8b53494 |
In the literature, UDA methods have predominantly been designed to adapt from a single source domain to a single target domain (STDA). Such methods include optimizing statistical moments {{cite:0a96bfbc7201fad41f3d33c747819f9539b1eecd}}, {{cite:c5fcb8f198f451870e3be504b379c3ef73aa3565}}, {{cite:ce85860931af0d05c71c6693cc7800d8d4ce7646}}, {{cite:1b534b8d8dbb39e1bee3142224de0d30f7d854ee}}, {{cite:81837ecaaa764c7db33b48a8ea1cef50bed33783}}, {{cite:779995fe83e9813700f3cb9afd219944705cf471}}, {{cite:153c80e2617430225473be36c6e319d6e8c975a0}}, {{cite:83731e38dd21b32b002d78f1f76d196c3dd23f3c}}, adversarial training {{cite:77a1a9a1d697bd5a6250f91a05608151f684a5c0}}, {{cite:7d164cfdf120d07c31c817953505c9c72ca44c70}}, {{cite:7498406aafad81c0a37d142ef19de98f1af3219c}}, and generative modelling {{cite:26698a74ac0998d1a2d2e1df35bebe51f8bd8404}}, {{cite:86f66d030883be92c780e8c186ca2c8f69978834}}, {{cite:513c223c2d585ba3b94934473b0b9f92cf54a932}}, to name a few. However, given the proliferation of unlabeled data acquisition, the need to adapt to just a single target domain has lost traction in real-world scenarios. As the number of target domains grows, the number of models that need to be trained also scales linearly. For this reason, the research focus has very recently been steered to address a more practical scenario of adapting simultaneously to multiple target domains from a single source domain. This adaptation setting is formally termed Multi-target Domain Adaptation (MTDA). The goal of MTDA is to learn more compact representations with a single predictor that performs well in all target domains. Straightforward application of STDA methods to MTDA may be sub-optimal due to the presence of multiple domain-shifts, thereby leading to negative transfer {{cite:72f7335434d50aeded2d5f5872b2e85e6970d1e7}}, {{cite:f0d301bc75f21a461918702c715aedaf85b0f07c}}. 
Thus, the desideratum to align multiple data distributions makes the MTDA considerably more challenging.
| i | 1c149187076380ca62d2021e2ca2ad69 |
We apply a simple bias correction following the
procedure outlined in {{cite:ba93064a4df981e3a7bb0382681f3156237ee239}}. Given the simulated and fitted values
of the Sérsic index for the Ser model, we plot the bias as a function of
the fitted value output by PyMorph. In this case, the output value represents
the measured value in real data. The simulated value represents the true
underlying value of the galaxy Sérsic index. We can determine an average
bias and uncertainty in the bias, labeled as {{formula:18ea51d2-c128-4dc8-bfff-3d635e940ae3}} and {{formula:813bfdd4-4c06-4cbf-a105-61183b86ada0}} , as a
function of output Sérsic index. Additionally, we can measure the random
error in the fits from the width of the bias distribution as a function of
Sérsic index, labeled as {{formula:ce2a6ce6-de09-4ca6-810a-8aa924336446}} . Then the corrected Sérsic
index and the uncertainty on the corrected index are
{{formula:52b6ad8a-c020-4a63-9fb4-ea2361e4f39e}}
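The correction step can be sketched as follows (hypothetical Python; the exact functional form is given in the cited work, and we assume here the common convention of subtracting the mean bias and adding the bias uncertainty and random fit error in quadrature):

```python
import numpy as np

def correct_sersic(n_out, bias, sigma_bias, sigma_rand):
    """Apply an additive bias correction to a fitted Sersic index.

    n_out      : Sersic index measured by the fitting pipeline
    bias       : mean bias at this value of n_out (simulated - fitted)
    sigma_bias : uncertainty on the mean bias
    sigma_rand : random fit error (width of the bias distribution)
    """
    n_corr = n_out - bias
    sigma_corr = np.hypot(sigma_bias, sigma_rand)  # quadrature sum
    return n_corr, sigma_corr

# Illustrative numbers only, not values from the text.
n_corr, sigma_corr = correct_sersic(n_out=4.2, bias=0.3,
                                    sigma_bias=0.05, sigma_rand=0.12)
```

In practice `bias`, `sigma_bias`, and `sigma_rand` would be interpolated from the simulation-derived curves as functions of the output Sérsic index.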
| d | a00dce4e859a8a7c1ef3d9d86e4dbb7f |
Is such spin-induced descalarization detectable with current or future GW observatories? For such descalarization to be detectable, one must first detect that the binary components were scalarized during the inspiral. Our simulations showed that the scalar charges lead to scalar quadrupole radiation because of the highly symmetric configurations (equal mass, equal spin magnitude) we chose to evolve. More realistic astrophysical configurations (with unequal masses and unequal spin magnitudes) force the binary to emit scalar dipole radiation. Such emission of dipole or quadrupole radiation accelerates the inspiral, and thus affects the GW phase at {{formula:3b4d8164-d12d-4e73-a13e-4c5c18c89b9e}} 1PN and 0PN respectively, as shown in shift-symmetric theories {{cite:bc6fba8fba365de2d36398999c21eac4dfe139b9}}, {{cite:39de57dcd04b610df1ad0f76b34d4677cf35b63b}}, {{cite:4f54e3c03ed07cf011777ca5206c12841c031cab}}, {{cite:e80b630dd4bceee43317a7123ecea7d9a229a5e3}}, {{cite:1012ba1f978f0f5d95b224306016477aa42a1886}}, {{cite:ca27538820dcccd4c8059fdcbf87a93b3f77078a}}.
These effects in the inspiral are observable and can thus be constrained with current ground-based {{cite:19ad3ef261264e0b383646d2387c5ac0826e9fa2}}, {{cite:78fb62a9472f83d294009b212cb4f84c7e479faa}}, {{cite:0cbf5c8b9e8183a51bc21cecde07ea7c06b90734}}, {{cite:970c4b3a790813cce28d973af5a0283edc367071}}, {{cite:ceffc736ac3ffc65c31847d55cd41dc1887cb804}} and future detectors {{cite:ff471c3b4e5397ee8fb3acfb50d8da01b193faec}}, {{cite:35459736bfbe421b6d2affede97f139c2365eba6}} within the parameterized post-Einsteinian framework {{cite:70a10aaa63839b6a378e98316e55df3f597f5f04}}, {{cite:3e3c718bcd78c97e5ccb3efbf692090fde582d63}}, {{cite:821dae5b13b7231643b5b2e952cc4e6cdfb3b567}}, {{cite:ee4bd9291928cc66f1259c7cc9040105f4ba0dfe}}, provided the binary is of sufficiently low mass such that enough of the inspiral is observed {{cite:35459736bfbe421b6d2affede97f139c2365eba6}}.
In fact, a constraint of this type was recently obtained using the GW190814 event {{cite:cee8e7825aaf8aecaf9e8e581bacf526c1264f85}} in {{cite:02d7570b73c77447b8587136b3e400586b4caed0}}.
| d | 1a799689e6b5f5fd3d6e17360f411d32 |
For models trained with DG methods, we observe a large gap between the certified loss and the empirical loss on benchmark and synthetic domains (see Fig. REF ).
Therefore, to improve the certified loss of the models trained with DG methods, we also propose a DRO-based iterative training algorithm, DR-DG, that augments the training data with samples incurring high loss under the current model.
DR-DG can be used to improve the certified loss of models trained with any DG method and effectively reduces the worst-case loss over a large distance from the sources
with computational complexity similar to adversarial training {{cite:0761cb4d761377adee0f4a8c46ef0fec1bf5356d}}, {{cite:fa69c0db0789682ef3e0fe389d56cd852dfb1947}}, {{cite:de475b377efa0f79a18116f4da259d60e0b430f3}}.
Our results demonstrate a significant improvement in the certified loss of the models trained with DR-DG with a minor decrease in the accuracy of the models on the source domains.
The reduction in the gap between the empirical and the certified loss suggests that DR-DG-trained models generalize better to unseen domains and that their performance on benchmark datasets is representative of their performance in the wild.
Thus, our certification framework, which explicitly provides performance guarantees based on distance, and our algorithm DR-DG, which trains to minimize the worst-case loss, are efficient and effective ways for certifying and improving the generalization of models to unseen domains.
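The inner-maximization idea behind DR-DG, augmenting training data with samples that incur high loss under the current model, can be illustrated with a toy sketch (a logistic-regression stand-in with gradient-ascent augmentation; the model, data, and step sizes are illustrative and not the paper's algorithm):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def logistic_loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))

def perturb_worst_case(w, X, y, eta=0.05, steps=3):
    """Inner maximization: move each sample along its input-space loss gradient,
    producing high-loss augmented samples near the source distribution."""
    X_adv = X.copy()
    for _ in range(steps):
        g_x = (sigmoid(X_adv @ w) - y)[:, None] * w[None, :]  # per-sample dL/dx
        X_adv = X_adv + eta * g_x
    return X_adv

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)        # linearly separable toy labels

w = np.zeros(5)
for _ in range(300):
    X_adv = perturb_worst_case(w, X, y)   # augment with hard samples
    X_aug = np.vstack([X, X_adv])
    y_aug = np.concatenate([y, y])
    p = sigmoid(X_aug @ w)
    w = w - 0.5 * X_aug.T @ (p - y_aug) / len(y_aug)  # minimize on the union

clean_acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
```

The alternation of a cheap inner maximization with the usual outer minimization is what gives the method a computational profile similar to adversarial training.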
| i | 46ac35cfa979aa3bb1700fd05ffba603 |
Ab-initio nonperturbative approaches, such as Lattice Quantum
Chromodynamics (LQCD) {{cite:08481f2b924630b5b4ba921ee2cc131aa70d0f40}}, {{cite:97f1b270e5cb0c8ce34b48a42dbb7ef492942bd8}}, {{cite:b993e9c50340db0e6ed02829e826263e6c8fe39f}}, are called for to solve QCD. This well-known method
starts with the QCD action, {{formula:1da68320-d422-4f69-8679-dc6a524e27d3}} , evaluates the generating
functional {{formula:531c5550-3fea-45e3-9baf-a226ebfe93c4}} by
numerical simulations, and finally accesses the matrix elements of the relevant operators between
hadronic states.
The procedure relies on both space-time discretization and a Wick rotation, which changes
{{formula:2e7c37a2-047b-423e-9f56-fb925a807fdb}} to {{formula:63342391-a414-434e-9c12-a9a35cc712b6}} , i.e.,
puts it in correspondence with a Statistical Mechanics formulation.
The numerical calculations are performed with Monte Carlo simulations.
Despite its power, LQCD faces some intrinsic difficulties, such as the need to
extrapolate results to vanishing lattice spacing, the need for powerful
dedicated computational facilities, and the “fermion sign problem” {{cite:a09a8e647cdda8bf2859790b2c63c16336911956}}.
| i | fb84f87977ab0620a728863c070b9b10 |
We have analyzed the optical imaging and spectral data on CTD 086.
Surface brightness profiles from the HST data do show a minor intensity enhancement in the central 1-2 region, but it is not well enough resolved to comment upon.
Based on the isophotal shape analysis, we conclude that CTD 086 is an elliptical galaxy free from dust and/or other sub-components. It should be classified as E2; it was misclassified as a doubtful spiral (S?) in the Third Reference Catalogue {{cite:2e0b4adaa26bfecd607a73917b9e4d497c805017}}.
From the analysis of the optical spectra of CTD 086 we conclude that it is clearly a narrow emission-line galaxy.
The emission line widths and intensity-ratios are typical of type 2 AGNs. {{cite:1780d3a9fb83830d354bc28fba93d3cc34011cd7}} have defined the low luminosity or dwarf AGNs to be those with {{formula:5e80abd1-a729-40a7-8bdd-ca30e76bde86}} and
{{formula:5a613ad4-ebc8-4950-a11f-4c6b60be908f}} ,{{formula:6400e0ff-f6e4-44d1-b2de-83e9739988de}} , and {{formula:4356adf9-e007-4aac-b8bf-502775cf1648}} (Seyfert) or 0.17
(LINERs). The H{{formula:2fe4face-8a6a-412c-9f93-84de89df5e8d}} luminosity of CTD 086 is about a factor of
four smaller than those of the low-luminosity AGNs.
The flux ratios {{formula:64140983-79bb-4271-b87a-0c84ff4ce407}} , and {{formula:a5b43849-649d-4683-9b21-336710cebf3f}} for
CTD 086 are much higher than the limiting flux ratios set by
{{cite:1780d3a9fb83830d354bc28fba93d3cc34011cd7}}. The forbidden line [O I]{{formula:aaa78e44-240f-45fd-be0b-35424100f727}}
is clearly seen in the spectrum of CTD 086. The
flux ratio {{formula:ecf5025b-e1db-4fdf-9915-36d3a15dc3ce}} for CTD 086 suggests that it is more likely a Seyfert nucleus than a low-ionization nuclear emission-line region (LINER) galaxy.
| d | 1051ba439a3b2396a203c4b82ef7f44e |
Very recently, {{cite:37d0857fa1d7f7bd92f07cc4e4004f761c280448}} reported a marginal candidate, GW190920_113516, whose secondary could be a heavy NS. With an effective inspiral spin of {{formula:a3a4263e-a720-4475-b437-46b32630aded}} , this event could be a potential NSBH merger with the NS first born. {{cite:f87368aaafb2f78ded82a2f9d327228579ee2e99}} claimed that the fraction of the NS-first-born systems with different SN engines is {{formula:e0d55ac0-3e83-46a0-8687-f8ddf158cbd7}} {{cite:6595d753ae7c57c79bf40211319223c01dd36ad3}}. The event rate and system parameter distribution for NS-first-born NSBH mergers are subject to further studies in the future. With the upgrade of GW observatories and the update of large survey telescopes, it is foreseen that one may detect high-confidence GW signals with associated EM counterparts from NSBH mergers, especially for NS-first-born NSBH mergers, in the near-future GW-led multi-messenger era.
| d | 1456056047c9d734d42e517a20e8ec29 |
If we replace the auto-correlation function {{formula:87f9418e-4579-4bd5-acb2-1164e09bbe2b}} in Eq. REF by a vector containing the density, the
momentum and the energy, the GLE transforms into a matrix equation {{cite:dd2bc2a8c46990ce29e383896aa6cd5f66e604b7}}, {{cite:1ce510bab6873b2270ff7947ebdc0916295f7ba4}}. Applying the same approach,
we can derive an expression of the dielectric function that conserves the three previous quantities.
This makes the memory function/memory matrix approach a very powerful and flexible tool to
build up a physically meaningful dielectric function. Moreover, if needed, two or more memory functions describing relaxations on different timescales can be used within Götze's mode-coupling formalism {{cite:37113e2804d106c7e60d541116f6089ac32f8753}}, {{cite:6b830c14bc1ef6cb4901a72dfff080c08d0229c3}}, {{cite:74e8eb6c48fd732b3ba18bcac17e47f3b363ae21}}. This
makes it possible to incorporate different decay channels into the modelling of the dielectric function.
| m | f8c46445c7f2b4330ae84ec154e77a8a |
The mathematical structure of chemical kinetics and chemical thermodynamics, since their establishment, has been continuously developed within various fields such as physics, applied mathematics, applied chemistry, and systems biology over the past century {{cite:bf750ea8e3538a222baf5480578fb4c64a4f5d88}}, {{cite:6f58d11bb30560833eece3f4a8938bc6afd8987b}}, {{cite:cd50d39c6b87d956d226245cbc44485f678a7990}}, {{cite:ca7ef7ca245c0c215d8b3cceb1c75e0b344c0e74}}, {{cite:20c8bad3f8baec7265f59f6d686aa453c7c56aef}}.
However, the developments were mostly separated and shared only within the individual fields.
In applied mathematics, there is chemical reaction network theory by Feinberg {{cite:ca7ef7ca245c0c215d8b3cceb1c75e0b344c0e74}}, which is based on the work of Aris, Horn, and Jackson {{cite:cd50d39c6b87d956d226245cbc44485f678a7990}}, {{cite:680ff3725a46fcb0243efe89dc31bc97d64bfedf}}.
In real algebraic geometry, chemical reaction systems and toric geometry are becoming important research topics {{cite:0a2312aae3f8d2f15461915bed75f0ce00538931}}, {{cite:88d72fe6149d8cd8728547669972b856c6bc6e6f}}, {{cite:b50fbd9197a61820f9a197f7c05d76850a5ca52c}}, {{cite:7b08a16e70360e81a8d6c34765865c6518ee3c05}}.
In systems biology, a new theory emerged, which connects properties of reaction networks with the network topology {{cite:d2f18fd775ac703f43a443571d34e500e3611e17}}, {{cite:f543ed880ed276ded4881986a20acb729cc41c06}}, {{cite:4c5a69036e90574d5ccf635dfb03dc06031f27cb}}, {{cite:97763233c2fc9da8c186c1db86d01a4eb44f0615}}, {{cite:3200786c0c20750ca6ec26da38dd458beb8b8a19}}.
In physics, network thermodynamics by Hill and Schnakenberg {{cite:3252329c9c8d6d73ac60b3208b01bcb15f10ce6a}}, {{cite:721566617b012bb8ef58efcb9ceda1e3620a3fe9}} and stochastic thermodynamics of chemical reaction theory by Qian and Esposito have been studied {{cite:bf750ea8e3538a222baf5480578fb4c64a4f5d88}}, {{cite:6f58d11bb30560833eece3f4a8938bc6afd8987b}}, {{cite:1c616dc25df360dcf5bfa3cb8b99cbbcf69a0fff}}.
Now, information and Hessian geometry can be added to this variety of applications.
Even though the theories have been developed to explain the same physical object, i.e., a chemical reaction system, the interrelationships between them are not clear yet.
It will be an important next step to integrate these theories from a unified perspective, which is expected to spur further development of chemical reaction theory.
| d | c0c39a649de9b6e1486f6fa416ece5e6 |
Eq. REF indicates that GaitSSB carries much of its ability to push negative pairs apart (increasing the inter-class distance) over to the evaluation stage, regardless of the challenging cross-view factors.
However,
we notice that some contrastive frameworks {{cite:19e9b0b30169d17ba9c0d255d338b7dcdb125b07}}, {{cite:5302d8843dd7a16a0905eba079261aa24f570253}} have achieved remarkable performance without any design for pushing negative pairs apart.
We argue that the difference lies largely in task granularity:
| d | ca42a54ea568c7ae8e1668063523a1aa |
We have computed the Hubbard {{formula:b643a483-8eb9-4e1d-8dd6-087b48a60f00}} parameters for both Ag and F using the linear response approach of CDG as implemented in the VASP software package {{cite:b425b9aa3dc230d210dbbe8fe5e2e8edc6755331}} (see also “Calculate U for LSDA+U” on the VASP wiki, available at https://www.vasp.at/wiki/index.php/Calculate_U_for_LSDA%2BU).
The Monkhorst-Pack k-mesh sampling was 6 × 6 × 7, and the energy cut-off for the plane-wave basis was set to 520 eV. The PBEsol exchange-correlation functional was used.
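For illustration, the corresponding input settings might look roughly as follows (a hedged sketch, not the exact files used; the species ordering and linear-response tags are assumptions that should be checked against the VASP wiki page cited above):

```text
# KPOINTS -- Monkhorst-Pack 6 x 6 x 7 grid, as stated in the text
Automatic mesh
0
Monkhorst-Pack
6 6 7
0 0 0

# INCAR fragment (illustrative)
ENCUT   = 520       ! plane-wave cutoff (eV), from the text
GGA     = PS        ! PBEsol exchange-correlation functional
LDAU    = .TRUE.
LDAUTYPE = 3        ! potential-shift mode used in the linear-response U procedure
LDAUL   = 2 1       ! d states on Ag, p states on F (assumed species order)
LDAUPRINT = 2
```

In the linear-response procedure, a small potential shift is applied to one site at a time (via LDAUU/LDAUJ) and the occupation response is recorded to extract U.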
| m | 9790e13ced1978610c85c6a287f82637 |
However, if the problem is {{formula:06563dda-43f7-4f30-8ea6-871fea6e5db9}} -critical or supercritical, i.e., {{formula:3aeaf0d5-d837-4f15-b936-08821a5479e6}} , where {{formula:fac960ce-7a96-48ec-a3fd-64b3a210aaf7}} if {{formula:dea59220-d3be-44c3-a1fc-c88bc256de87}} and {{formula:73dc3c6d-199f-48c0-b470-daa62c23bb0d}} if {{formula:f2ded64c-340a-4913-b63e-3af1106fc4df}} , then {{formula:1cb1c0cb-3bb9-4ba9-aa75-48c0bb09afc1}} is unbounded from below on {{formula:44ca301e-bde8-4a8a-857f-f8a5fc7b0ba2}} , so it seems impossible to search for a global minimizer to obtain a solution of (REF ).
Furthermore, since {{formula:09eb5b5a-cc61-49a6-8d6e-f98ef4b735fe}} is unknown and {{formula:732a2929-913b-4a4d-93ff-cbbd47f3ffac}} is not compact, some classical methods cannot be directly used to prove the boundedness and compactness of any {{formula:892f6019-dcbb-42d8-83bd-a4f1f5b684fb}} sequence.
This case was first studied by Jeanjean in {{cite:c66bc9bbb209aa3e0824776205c190004068b927}}. For quite a long time, the work of Jeanjean {{cite:c66bc9bbb209aa3e0824776205c190004068b927}} is the only one in this aspect, and the idea of {{cite:c66bc9bbb209aa3e0824776205c190004068b927}} has been exploited and further developed in the search for normalized solutions when the energy functional is unbounded from below on the {{formula:a87b46de-e57c-434f-a79a-a3998869a178}} constraint, see {{cite:4cde7e08fcdf73c806df81959cb1f6c7e12e6220}}, {{cite:47d55b4b6e3ee1b3e337cd4c9a5f0e35c95a1e35}}, {{cite:a79be5a4235fb149ffec51c625e81b944f9741b5}}, {{cite:bad6934f03e48aef12770d5c66db29a0779db627}}, {{cite:a7abe03f3ff5964b72919bf1dc2fc598bc241930}}, {{cite:6711f52e4c446a4369e6f2919e6b31c3b9429f7d}}, {{cite:c801656b5bfd1c5d38c08ab477d516fe2c8e06ff}}, {{cite:6977cb75a0ce5d02bf19ef6e31553a286d4f1b32}}, {{cite:59432d4b46d46d6218736b9e4691ede28ec1e4f5}}, {{cite:99b95f2f1af5ba4d01670dbe56aa333697f80505}}, {{cite:f27312dfeeb78c9565ed937141e6168670df98b3}}, {{cite:0cf6c5644850f11e1f5c40a82e9c5a87ec898d7b}} and so on for normalized solutions in {{formula:7b18a56e-f1c4-4d85-85c6-4e64a64397b3}} , {{cite:4820086d94e2870aa6149486a060c5c8f3d6669c}}, {{cite:5234ac05053e70e8d4addce0ca31934c017daa45}} for normalized solutions in bounded domains, and {{cite:5cfd5ce0e8d18497c8fde6466298fc7e57f4cf57}}, {{cite:ecd8a2cbe2133cf9101a9d52f193aa960f5453d7}}, {{cite:9ee5749ce0724749985bf36b0e07df519c0ddebe}}, {{cite:ccfaca22121762a9408c11d318bbbdb9e57f50da}}, {{cite:fd7f5dafea701a22c0ca885d4a2ff4ee932a48bb}} for normalized solutions of Choquard equations. It is worth pointing out that in {{cite:a79be5a4235fb149ffec51c625e81b944f9741b5}}, {{cite:bad6934f03e48aef12770d5c66db29a0779db627}},
Bartsch and Soave presented a
new approach that is based on a natural
constraint associated to the problem and proved the existence of normalized solutions for (REF ) by using a minimax principle based on the homotopy stable family.
Borrowing some arguments from Bartsch and Soave {{cite:a79be5a4235fb149ffec51c625e81b944f9741b5}}, {{cite:bad6934f03e48aef12770d5c66db29a0779db627}}, Jeanjean and Lu {{cite:c801656b5bfd1c5d38c08ab477d516fe2c8e06ff}} also complemented and generalized the results of {{cite:c66bc9bbb209aa3e0824776205c190004068b927}}.
Comparing the above two methods, we find that Jeanjean's method requires the energy functional to have a scaling-type property.
| i | 0455cd1879e4e19c5f19bdb8a3707c12 |
One can compare Eq. (REF ) with the following equation {{cite:31f45b4fd6445e2cd3f15ecaa4fbe36ef1af3318}}, {{cite:6e9aa02e5ea5324f74d9086452c0fc0aa11915a9}}, {{cite:5e38d1da17acad27263de2469fc87d68a2601fac}}, {{cite:ce43db60ca087d8ff0a778e09596234de8846ce6}}, {{cite:288365ba46805b0c8b112073a10206288b3fec65}}:
{{formula:3e64da8a-bf96-4400-8460-397f58e9dc3c}}
| m | 13805a5ec9c11af79946c0ff188e01db |
The spectrum efficiency for the proposed DAP architecture is shown in Fig. REF , with the same simulation settings as in Fig. REF . The SNR is set to -10 dB. The baselines include fully-connected hybrid precoding with 8 and 16 RF chains using an alternating minimization algorithm {{cite:052c152fafe60888450b90de5a0734c800029dae}}, sub-connected hybrid precoding with 8 and 16 RF chains using a SIC-based algorithm {{cite:e6b250ae16e341eb401b6fd99675996145d9c95b}}, and fully-digital precoding. When the transmitter-receiver distance is small, the proposed scheme can achieve a two-fold increase in achievable spectrum efficiency compared to classical hybrid precoding schemes. This is because the proposed architecture can benefit from the extra DoFs in the near-field region. When the transmitter-receiver distance grows, the fully-connected hybrid precoding outperforms our proposed architecture. The reason is that a gap remains between the fully-connected and sub-connected architectures with the same number of RF chains. When the transmitter-receiver distance grows larger, the DoFs decrease and our proposed architecture can no longer obtain the multiplexing gain from the extra RF chains.
| r | 4d03176323d4d9de5853e97ecc7ad60c |
The subclass of {{formula:2c9f0d07-d4ee-4193-a6ed-b98501c0ee38}} spaces having an upper bound on the dimension by {{formula:07f4ff05-5fac-4297-8460-348e971f8813}} in a synthetic sense is denoted by {{formula:6698b90c-4599-4469-a8da-d0035e7af417}} , see {{cite:0c5f533466464968159b2dc0da738c2b1bf7207a}}, {{cite:da2c9fbed69537575cd2b5149f567cf8fd7b705d}}, {{cite:734ec991a20053d3d7cce2a166ba7aa5a2641453}}, {{cite:efd4c091588239533cc2711b9721822b3a1265db}}.
| i | d239022dfca8c82eff0ecc0d54f1afbc |
Using {{formula:402f772d-8aea-4508-85f4-544a4da1e6cb}} GeV{{formula:c1c8f8f1-418f-4bd2-bf51-7eab71bd6e53}} {{cite:9fd2b3cc404dac593b3327552aa8f16a36af6bbf}} and {{formula:5d26d103-f731-45fe-972a-f805465e033e}} {{cite:e53ba5038782ff0d852089b64ec35f8ca48df6dd}}, the hadronic decay width of the W boson with QCD corrections at one-loop order, obtained from equations (REF ,REF ), is
{{formula:d5cd000c-a57d-437e-8947-0c359e13355c}}
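As a rough numerical cross-check (with illustrative PDG-like inputs rather than the exact values referenced above; CKM unitarity is assumed and quark masses are neglected):

```python
import numpy as np

G_F = 1.1664e-5   # Fermi constant, GeV^-2 (illustrative value)
M_W = 80.38       # W mass, GeV (illustrative value)
a_s = 0.118       # alpha_s, used as a stand-in for alpha_s(M_W)

# Tree-level leptonic partial width: Gamma(W -> l nu) = G_F M_W^3 / (6 sqrt(2) pi).
gamma_lep = G_F * M_W**3 / (6.0 * np.sqrt(2.0) * np.pi)

# Hadronic width: 2 open quark doublets x N_c = 3 colors (CKM unitarity),
# times the one-loop QCD correction factor (1 + alpha_s / pi).
gamma_had = 6.0 * gamma_lep * (1.0 + a_s / np.pi)
```

With these inputs the leptonic width comes out near 0.23 GeV and the hadronic width near 1.4 GeV, consistent with the measured values.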
| r | 205edf583637f1897310b492390628ea |
Panoramic image stitching is a fundamental problem in computer vision.
When solving this problem, we are given a sequence of images taken from a single point in space with a camera rotating around some 3D axis.
The objective is to map the images into a common reference frame and to create a larger image composed of the captured ones, thus, covering a much wider field-of-view than each individual image.
In other words, the goal is to estimate the unknown relative rotations and inner calibration parameters of cameras with coinciding optical centers, i.e., cameras undergoing purely rotational motion.
In the computer vision community, this problem is often considered to be solved with a number of existing solutions {{cite:9d3755f2100175dabce9f76d721562dc9658697a}}, {{cite:9b97c502e5713584bbee479e44ca879d5b636132}}, {{cite:15d604c07251b207acc6c6e04a05e48c472407ae}}, {{cite:0f66a1bda47627894a7bdad9ebb5dd7e8825d803}}, {{cite:4a32066daedf93aba5c6ec1c59b930b3f0982a4c}}, {{cite:6d38bfdfda3336e3b72b86cdadd4cdac2a73de08}}.
However, in this paper, we will show that the existing solutions do not exploit all available information which can be easily obtained from recent devices. This information can be used to simplify both the problem formulation and solution.
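For a purely rotating camera, corresponding pixels in two images are related by the homography H = K2 R K1^-1, which is the basic relation that stitching methods estimate; a minimal NumPy illustration (the intrinsics and rotation are made-up values):

```python
import numpy as np

def rotation_homography(K1, K2, R):
    """Homography mapping image 1 to image 2 for a purely rotating camera."""
    return K2 @ R @ np.linalg.inv(K1)

def project(K, X):
    """Pinhole projection of a 3D point X (camera at the origin)."""
    x = K @ X
    return x[:2] / x[2]

# Intrinsics (focal length, principal point) and a 5-degree rotation about y.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
th = np.deg2rad(5.0)
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0,        1.0, 0.0       ],
              [-np.sin(th), 0.0, np.cos(th)]])

X = np.array([0.3, -0.2, 2.0])     # a 3D point seen from the shared center
x1 = project(K, X)                 # pixel in image 1
x2 = project(K, R @ X)             # pixel in image 2 after the rotation

# The homography reproduces x2 from x1 without knowing the 3D point.
H = rotation_homography(K, K, R)
x1_h = np.append(x1, 1.0)
x2_pred = (H @ x1_h)[:2] / (H @ x1_h)[2]
```

The depth of X cancels out, which is exactly why pure-rotation image pairs can be aligned with a single 3x3 homography per pair.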
{{figure:aee3a8ce-85d3-4d69-8656-ef2026c69973}} | i | 28f5efe3e855e2ad9a83fc0d37927ffb |
To further verify the impact of teacher forcing, the integrated models (row (e)) with high inter- and inner-layer teacher forcing probabilities (rows (f)-(h)) are also evaluated.
Note that when teacher forcing is activated probabilistically, the strategy is also known as scheduled sampling {{cite:58f6985fd785dfa2faa5978b675480b0a038c60a}}.
Row (g) shows that a high probability of triggering inter-layer teacher forcing results in slight performance degradation, while a high inner-layer teacher forcing probability (rows (f) and (h)) further benefits the model.
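The scheduled-sampling mechanism can be sketched as follows (a toy decoder with a per-step coin flip; the "model" and sequences are illustrative, not the evaluated system):

```python
import numpy as np

def decode_with_scheduled_sampling(step_fn, ground_truth, p_teacher, rng):
    """Autoregressive decoding where, at each step, the ground-truth token is
    fed back with probability p_teacher, and the model's own previous
    prediction otherwise."""
    outputs, prev = [], ground_truth[0]
    for t in range(1, len(ground_truth)):
        pred = step_fn(prev)                    # model's next-step prediction
        outputs.append(pred)
        use_teacher = rng.random() < p_teacher  # per-step coin flip
        prev = ground_truth[t] if use_teacher else pred
    return outputs

# Toy "model" predicts prev + 1; ground truth steps by 2, so teacher-forced
# and free-running decoding produce visibly different outputs.
rng = np.random.default_rng(0)
gt = list(range(0, 20, 2))
outs_tf = decode_with_scheduled_sampling(lambda x: x + 1, gt, p_teacher=1.0, rng=rng)
outs_free = decode_with_scheduled_sampling(lambda x: x + 1, gt, p_teacher=0.0, rng=rng)
```

Setting `p_teacher` between 0 and 1 interpolates between full teacher forcing and free running, which is the knob varied across rows (f)-(h).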
| r | b390a5762448d4cfd6faba31d19a51c5 |
Lemma 2 (Hoeffding-type inequality {{cite:5ab00fbcd19729d480098fe7911cab2c60f13e48}})
Let {{formula:73a94f1b-5b57-4e3e-8cbc-7e6bd68183fd}} be independent zero-mean sub-Gaussian random variables, and let {{formula:a7bddbeb-7b59-4e93-bc69-458c7edf2b3d}} . Then, for any {{formula:a794f7cc-9590-411c-b507-54ea28423826}} and any {{formula:2b5ad06b-04b7-44b4-83c4-817573de4711}} , it holds that
{{formula:545e207e-b9e1-45f8-8a90-b340a48e8882}}
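For reference, a standard formulation of this Hoeffding-type inequality for sub-Gaussian sums (e.g., in Vershynin's form; the precise constants and notation in the cited lemma may differ) is:

```latex
\[
  \mathbb{P}\Bigl(\Bigl|\sum_{i=1}^{N} a_i X_i\Bigr| \ge t\Bigr)
  \;\le\; 2\exp\!\left(-\frac{c\,t^{2}}{K^{2}\,\lVert a\rVert_{2}^{2}}\right),
  \qquad t \ge 0,
\]
where $K = \max_i \lVert X_i\rVert_{\psi_2}$ is the largest sub-Gaussian norm
and $c > 0$ is an absolute constant.
```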
| r | 8d93e60150d00d6d632f5cbe31522aa0 |
In order to obtain a numerical estimate for the HFS, determining the {{formula:a51cae95-73eb-4df0-bdb6-ea15472af6df}} couplings is almost as important as fixing the sign of {{formula:347639c4-76cb-4887-ad59-49ccc0c25555}}. In the following, we use short-distance constraints, which allow us to relate the nucleon Compton scattering tensor to the nucleon axial form factors in a transparent manner. This allows us to fix the sign and, eventually, to obtain the desired couplings within a resonance saturation scheme.
In particular, the relevant short-distance constraint follows from the operator product expansion (OPE) of two vector currents in the limit where {{formula:c2ebfaae-d26f-4c5d-83db-49004887568c}} , where we have introduced {{formula:9d8f3186-3baf-4643-8e2a-044a28a3874e}} and {{formula:5d44dd06-c8da-4d28-bbda-ff6392fe6f7e}} . This reads {{cite:0c312a80479390f78919782e7fa0fa010dd6999a}}, {{cite:83882bc0ba71ed3fff8dba59f20f0bc041651ce3}}:
{{formula:280b6c9b-4fea-488c-8319-2a221ea95167}}
| r | 2481a74855e771f891128aeeafb6af0e |
Traditional feature selection methods, including both unsupervised and (semi-)supervised methods, all exploit knowledge of seen concepts rather than unseen concepts {{cite:20707bb64855d79286c8218d86254e170e7a6cb6}}.
Specifically, unsupervised methods generally prefer the features best preserving the intrinsic structure of seen concept data.
(Semi-)supervised methods prefer the features best reflecting the discrimination (i.e., class labels) among different seen concepts.
Consequently, as the data may vary dramatically among totally different concepts, traditional methods may not generalize well to unseen concepts.
| m | fa6490c94fcc0d0f9effdd3105b4d760 |
Prior methods {{cite:1008431812e51846b7b637ca681789323783064f}}, {{cite:d6c41487dace8efa8f67cf3749df5aead231ecf3}}, {{cite:c2d5006cdc2e6e7f00c6e66a4d06e0cc88889598}} are tailored for dense prediction and use pre-defined manual rules to match corresponding pixels. They emphasize local feature learning and largely ignore the learning of global features, which is also important when transferring to both classification and detection tasks (see Sec. 3.4 in {{cite:d6c41487dace8efa8f67cf3749df5aead231ecf3}}). In contrast, LEWEL is a generic method that benefits both image-level and dense predictions. LEWEL leverages the global projection head to predict the spatial alignment maps, thereby coupling the learning of global features and aligned features.
Our experimental results in tab:expsptalm show that LEWEL significantly outperforms {{cite:1008431812e51846b7b637ca681789323783064f}}, {{cite:d6c41487dace8efa8f67cf3749df5aead231ecf3}}, {{cite:c2d5006cdc2e6e7f00c6e66a4d06e0cc88889598}} in terms of classification by up to 13% while performing on par with or even better than {{cite:1008431812e51846b7b637ca681789323783064f}}, {{cite:d6c41487dace8efa8f67cf3749df5aead231ecf3}}, {{cite:c2d5006cdc2e6e7f00c6e66a4d06e0cc88889598}} on detection/segmentation under {{formula:3918a04d-8151-46fe-bbb1-301f9adb18be}} /{{formula:2f650b18-7cfd-4fee-8fcb-c46f8417425a}} training schedule, highlighting the generalization ability of LEWEL.
| m | 21701630637dc4c622e6f0e975200922 |
During the encounter between primary cosmic rays and the Earth's atmosphere, a non-negligible number of muons are generated over a wide energy spectrum. The fundamental principle on which muon scattering tomography is founded is to follow the propagation of cosmic-ray muons within the volume-of-interest (VOI), where the entering muons of a certain energy deflect from their initial directions as a result of physical processes that depend primarily on the atomic number, the material density, and the material thickness {{cite:04c83abe4a5fc5f46cfdf35fc27b3a0b7a5cb3f2}}, {{cite:7e0d25e8a2c30a6be56fea34216f03f42d07d33a}}, {{cite:53f78d74930df6f7e67a39c745b1e46ed8318f56}}. Among the detection modules used in tomographic setups based on muon scattering, plastic scintillators have found substantial application owing to favorable aspects such as fast rise and decay times, high optical transmission, ease of manufacturing, low cost, and large available size {{cite:681f539addc815e1df1b0e56a13256c8519fa50e}}. The hodoscope structure for scattering-based tomography consists of two sections installed atop and beneath the VOI under investigation {{cite:cc1290466bd0989fc969c505359789279f9fc605}}, and each section is composed of two or more distinct detector layers {{cite:f21c097e15a86a82ca17055446bfcabb28698a64}}, {{cite:df9aa85df6dc72900b721698a2c0c6c6668c93e7}}, occasionally made out of plastic scintillators, usually of moderate thickness.
In the course of the muon propagation through the detection system, the hodoscope components themselves contribute slightly to the deflection of the traversing muons, and this small contribution can serve to categorize the detected muons by building a binary relation between the deflection angle and the muon energy {{cite:2387cfce4ce6840d9b295c65c8a04b04e6c3b266}}, especially in the common case where the tomographic configuration is inherently incapable of directly measuring the kinetic energy.
| i | 5d773474dda78de1d37866d22861e00f |
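The dependence of the deflection angle on muon momentum and material properties sketched above is commonly approximated by the Highland multiple-scattering formula; the parametrization below is our illustrative choice, not one the excerpt commits to.

```python
import math

def highland_theta0(p_mev, beta, x_over_x0):
    """RMS plane-projected multiple-scattering angle (radians) via the
    Highland approximation:
    theta0 = (13.6 MeV / (beta * c * p)) * sqrt(x/X0) * (1 + 0.038 * ln(x/X0)),
    with momentum p in MeV/c and thickness x in radiation lengths X0."""
    return 13.6 / (beta * p_mev) * math.sqrt(x_over_x0) * (1.0 + 0.038 * math.log(x_over_x0))

# Example: muons of 3 GeV/c and 10 GeV/c (beta ~ 1) crossing a slab
# 10 radiation lengths thick -- faster muons scatter less.
theta_3gev = highland_theta0(3000.0, 1.0, 10.0)
theta_10gev = highland_theta0(10000.0, 1.0, 10.0)
```

The inverse relation between momentum and scattering angle is exactly what lets a hodoscope categorize muons by energy from their measured deflections.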
Experimental results for the Poisson, gamma, Weibull, and CMP models are computed using the stats {{cite:6b2dff190c28ab824ecb445a62c49c107dbcd41a}}, Countr (gamma and Weibull) {{cite:d85a7d30209800be5ebe59186e11c660a17a694b}}, and COMPoissonReg {{cite:c6d1a058623c5cf10b3fc0cd72e82bf456f9f127}} R packages. The fertility and the takeover bids datasets are available from the Countr and the mpcmp {{cite:43cb1af1820f735d0dd6714f5f13f64599c3125b}} R packages, respectively. The SUE is implemented in R {{cite:6b2dff190c28ab824ecb445a62c49c107dbcd41a}} and C++. Most of the code is written in C++ via the Rcpp {{cite:320e2391affb23b1f027c94ecbc0e035d4ccbf8c}} package to accelerate computations.
| r | 775a9644d621e69e34bf7b8b8c5b340e |
PROBLEM: Under what conditions can a datum survive in a sensor network?
Given that the nodes as well as the links can fail with some probability, the obvious model is a Markov chain, but such a model can grow in complexity very quickly because the number of possible states becomes {{formula:58cd3bea-0962-4aff-b8c7-124cc6d8e8d0}}, where {{formula:2ca68148-e0a5-458f-8ab9-a6f74d381e97}} is the number of states. To avoid this mathematical problem, one alternative is to model the system as a non-linear dynamical system.
Recently, some very interesting and relevant research articles about virus-spread behaviour in P2P networks or in scale-free networks such as the Web have appeared in conferences and journals. In {{cite:ab680613a2a9b53682a4121d6646288be9863c73}} the authors study the communication mechanisms for gossip-based protocols. Another very recent and interesting work, on how to distribute antidotes for controlling epidemic spread, is presented in {{cite:d133c77bbc45fe72e5e2c6999b5541166ef7f0ef}}. In this research the authors analyse the problem using the approach of contact processes {{cite:701604ad27ae82294e8341046dca1773597e060a}} on a finite graph and obtain very interesting and rigorous results. Concerning the properties that arise in random graphs, such as the existence of a giant component, percolation phenomena, node degree distribution, and small-world phenomena, which underlie many recent works on virus spread on networks, we can mention {{cite:0e46745d382ab86106fb855f3b2aad748481c4d5}}, {{cite:1d257b7dbd576162e50b0fd2e2522641206bffb6}}, {{cite:b58ff379c9a9fc4155002c5989753d51510f6573}} as well as {{cite:b4bb1fc5fc246b2f37635eca0e9424f0733e2f70}}, {{cite:e0eba200c728fcb3a06a8cbcafda194321ea3fa9}}, {{cite:2e6ed16691e2cce6abbd80a5ba915e9d93b3f82e}}, and {{cite:77554fe11f8684e51f4a319f22e36e3b1e4e0dee}}. Concerning the mathematical modeling of epidemic spreading, we should mention the outstanding work done by Romualdo Pastor-Satorras and Alessandro Vespignani in {{cite:062ae13604be0f10ef5e29b241400b5635e9cfae}}, {{cite:a27241d2e72a4a583c93d4e51ebd75cfe0eab424}}, {{cite:e08e0f101176fc271b230162a1baffe0cf2d97e8}}.
In the present work we take {{cite:c18a657151d844e2317354475a031b4c41475f1b}} as our source of inspiration.
In {{cite:c18a657151d844e2317354475a031b4c41475f1b}} the authors run experiments on several real sensor and P2P networks (from Intel, MIT, Gnutella, and others) to show the accuracy of their method. They claim that their method is applicable not only to sensor networks but also to many more settings where a piece of information may be replicated across faulty links and faulty nodes. The authors establish a survivability condition that yields a bound for the design of distributed systems, allowing one to:
| i | c5b0f3d9554e995b65a307a78e056e50 |
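To make the state-explosion point concrete: an exact Markov-chain analysis over all joint node/link configurations is exponential in the network size, whereas a Monte Carlo estimate of the survival probability is cheap. The sketch below is purely illustrative — the ring topology, failure probabilities, and replicate-to-neighbours rule are our own assumptions, not the model of the cited work.

```python
import random

def datum_survives(adj, holders, p_node_fail, p_link_fail, rounds, rng):
    """One stochastic trial of datum survival: each round every node may fail,
    and each surviving holder replicates the datum to a neighbour unless the
    link fails. Returns True if some copy of the datum is left at the end."""
    alive = set(adj)
    for _ in range(rounds):
        alive = {v for v in alive if rng.random() > p_node_fail}
        holders = holders & alive
        new_holders = set(holders)
        for v in holders:
            for u in adj[v]:
                if u in alive and rng.random() > p_link_fail:
                    new_holders.add(u)
        holders = new_holders
        if not holders:
            return False
    return True

rng = random.Random(0)
adj = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}  # ring of 20 nodes
trials = 2000
survived = sum(datum_survives(adj, {0}, 0.05, 0.1, 10, rng) for _ in range(trials))
survival_prob = survived / trials
```

The Monte Carlo runs in time linear in nodes and rounds per trial, sidestepping the exponential state space of the exact chain.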
Using the subspace encoder method (REF ) we estimate a model where the three functions {{formula:88946d8c-9bc6-4373-b317-d104a199fea9}}, {{formula:4d402ebf-d66f-4ca1-bcec-595ea59b35f4}} and {{formula:f8ff31be-69e9-42be-bc0f-2a292757b71b}} are implemented as 2-hidden-layer neural networks with 64 hidden nodes per layer, tanh activation, and a linear bypass from input to output, similar to a residual connection, for both benchmarks. As ODE solver we use a single RK4 step between samples and assume that the input signal is zero-order hold. For the implementation of the CT subspace encoder method the following hyper-parameters are considered: {{formula:c994eb20-12fb-429a-92e4-d0f5c3344a7f}}, {{formula:5417cbd2-89a1-4df0-99e1-b9eaa496589d}} and {{formula:dc06269a-6d46-4eb0-b136-9c9d0874ecef}} for CCT, and {{formula:af36c72d-45f3-419d-949c-680e043c48c5}}, {{formula:e6740a8d-df64-4b65-b7cd-20aac203bbcb}} and {{formula:9ca64940-58a6-4258-8f77-38691ee89a14}} for CED. These hyperparameters were chosen based on few-step prediction-error figures as shown in {{cite:80e9f573905bea0392742a4d7e2493194e55a12d}}. Training is done using the Adam optimizer with default settings {{cite:84071444f6ae6f91c88410d8149f473b50eddabc}}, with a batch size of 32 for CED and 64 for CCT, and using a simulation on the validation dataset for early stopping to reduce overfitting. To increase confidence in our results we estimate at least 17 models per hyperparameter setting for each experiment on both benchmarks.
| r | 85eb98a97badc5e1d8e8d84c7281592f |
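The integration scheme described above — a single RK4 step between samples with a zero-order-hold input — can be sketched as follows; the dynamics function `f` here is a stand-in linear system for checking the step, not the benchmark model.

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One classical Runge-Kutta-4 step for dx/dt = f(x, u), with the input
    u held constant over the interval (zero-order hold)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on dx/dt = -x + u with u = 0: exact solution is x0 * exp(-dt)
f = lambda x, u: -x + u
x = np.array([1.0])
dt = 0.1
x_next = rk4_step(f, x, np.array([0.0]), dt)
# exact value exp(-0.1) ~ 0.904837; RK4 local error is O(dt^5)
```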
Further, we do not require a dataset where ground truth body, hands, and face reconstructions are all available at the same time: creating such data at sufficient variety is very difficult.
Instead, we only require existing part-specific datasets.
Our network features four task-specific modules that are trained individually with different types of data, while being end-to-end at inference.
The first module, DetNet, takes a color image as input, estimates 3D body and hand keypoint coordinates, and detects the face location in the input image.
The second and third modules, namely BodyIKNet and HandIKNet, take in body and hand keypoint positions and regress joint rotations along with shape parameters.
The last module, called FaceNet, takes in a face image and predicts the shape, expression, albedo, and illumination parameters of the 3DMM face model {{cite:85a98a9d1de397ea7109599720f0d6ca393d0d01}}.
This modular network design enables us to jointly use the following data types:
1) images with only body or hand keypoint annotations;
2) images with body and hand keypoint annotations;
3) images annotated with body joint angles;
4) motion capture (MoCap) data with only body or hand joint angles but without corresponding images;
and
5) face images with 2D landmarks.
To train with so many data modalities, we propose an attention mechanism to handle various data types in the same mini-batch during training, which guides the model to utilize the features selectively.
We also introduce a 2-stage body keypoint detection structure to cope with the keypoint discrepancy between different datasets.
The above multi-modal training enables our superior generalization across different benchmarks.
| i | dd9ba118c5e817a269b76f08fd8518df |
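One simple way to realize training with mixed data types in the same mini-batch is to mask each task's loss to the samples that actually carry the corresponding annotation. The sketch below is our illustrative simplification of that idea, not the paper's attention mechanism.

```python
import numpy as np

def masked_batch_loss(per_task_losses, modality_mask):
    """Combine per-task losses over a mixed mini-batch: each sample only
    contributes to the losses for which its data type has annotations.
    per_task_losses: (B, T) loss of each task for each sample;
    modality_mask:   (B, T) 1 where the annotation exists, else 0."""
    masked = per_task_losses * modality_mask
    # normalize each task by its number of annotated samples (avoid /0)
    counts = np.maximum(modality_mask.sum(axis=0), 1)
    return float((masked.sum(axis=0) / counts).sum())

losses = np.array([[0.5, 0.0], [1.5, 0.0], [0.0, 2.0]])  # 3 samples, 2 tasks
mask = np.array([[1, 0], [1, 0], [0, 1]])  # sample 3 has only task-2 labels
total = masked_batch_loss(losses, mask)
# task 1 mean = (0.5 + 1.5) / 2 = 1.0; task 2 mean = 2.0; total = 3.0
```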
Classification
Promising results on reinforcement learning tasks lead us to consider how widely a WANN approach can be applied. WANNs which encode relationships between inputs are well suited to RL tasks: low-dimensional inputs coupled with internal states and environmental interaction allow discovery of reactive and adaptive controllers. Classification, however, is a far less fuzzy and forgiving problem. A problem where, unlike RL, design of architectures has long been a focus. As a proof of concept, we investigate how WANNs perform on the MNIST dataset {{cite:8aa3f78a2b6aae7769766ef88ad6f3db2d55d62a}}, an image classification task which has been a focus of human-led architecture search for decades {{cite:a4fea6a7973516a94e98ed3e2e61f899de6d529e}}, {{cite:e7416bbb52129c115b2c564f01a246cee2b190f2}}, {{cite:61f75eef5969355ae717e0a76d237e123672a657}}.
| r | 378e69c1d698f8ca813b5d187577b798 |
The existence of this singularity has direct implications for understanding generalization in terms of the training process, especially the Neural Tangent Kernel (NTK) {{cite:d31b805ea769ab60f19d61f5165749fa944de9d8}}. First, this confirms that training is a dynamical system, not a difference system. Second, NTK and other mean-field approaches take the limit as the width of the network approaches infinity. Under this assumption, the Hessian of the training loss does not evolve during training. In practice, the scaling of neural networks generally involves scaling the depth, input dimension, and dataset size as well as the width {{cite:901925d3cda58efd956cfdd0c6fdfed00f43cc69}}, {{cite:1d65156010255377c2f72509e51bccd4648f53a7}}. If multiple quantities tend to infinity simultaneously, then the assumption that the Hessian does not change during training may not hold, as we have shown here.
| d | 7d154981ecbb21e6159129f290842562 |
Several studies have been conducted on the linear MRI theory ({{cite:5b723287220b53d22cba810442c0f0e065bca9db}}, {{cite:76931c45283945666c02d5ae6783ab7fe1cf112f}}, {{cite:282cf548ed599fc8fbf1ca8b8ef74d06da852ab0}}, {{cite:3f400a7ccbeb5d60d2eadcade9dea604717e861a}}), but the full nonlinear development of this instability cannot be treated analytically. Decades of research efforts have employed magnetohydrodynamic (MHD) simulations to study the complex, turbulent state that arises during the nonlinear stage of the MRI. In particular, local models of accretion disks — where only a small sector of the disk is simulated — have attracted widespread recognition, owing to the possibility of studying the MRI development separately from its embedding astrophysical environment. From the very beginning, such simulations have been conducted with the so-called shearing-box approach ({{cite:1194f0c466e7472e2006f6ca7e6c3f8b8749f361}}, {{cite:714f201c7c1d95756b34657a038b604b173805f4}}), where a small simulation box with locally Cartesian coordinates is taken as representative of the large-scale behavior of the whole disk. In the shearing-box paradigm without density stratification, the vertical ({{formula:9aa39072-a0ca-4e2c-b8af-3cd00e0f169a}} , parallel to the rotation axis) and toroidal ({{formula:afac2308-ae18-492b-9bbc-9bbf2543a2c4}} , the direction of rotation; locally {{formula:23934c40-ffaf-482f-943e-2c77a4e56a60}} ) coordinates are periodic, and the radial ({{formula:4d90a583-9d24-41fb-98c8-4339c8105adf}} , locally {{formula:26aa8ea6-eb7d-4d79-bd19-8bb4d8becb35}} ) coordinate employs shearing-periodic conditions that model the background differential rotation of the disk.
Several MHD codes implementing the shearing box have been developed (e.g. {{cite:6c77956a623bdb55c5d939c479c5559670cb724c}}, {{cite:874f2dbb010264bb4c04cf0036baca8f72e237f7}}) and applied (e.g. {{cite:f425d8f53af876162df567a2e43d0e00181f02cd}}, {{cite:579a2b0cbbf4cf76a11650078187205893358838}}, {{cite:04d578e3180debdf9931744efb58ef5e589901c6}}, {{cite:3c29e350299f4f81573f39e9fa6ef8ea2d29ecdc}}, {{cite:aea88ef3a77b503b6a9f9e7a7ad897cef98c08a7}}) to study the MRI. These works have found that, when the initial conditions involve a weak, purely vertical magnetic field, the system initially evolves to a state of so-called “channel flows” where strong radial and toroidal magnetic fields develop. The magnetic-field polarity and flow bulk velocity change sign at the channel interfaces, which become susceptible to the development of parasitic instabilities (e.g. tearing, Kelvin-Helmholtz, or drift-kink modes among others; {{cite:76931c45283945666c02d5ae6783ab7fe1cf112f}}, {{cite:3f400a7ccbeb5d60d2eadcade9dea604717e861a}}). These secondary modes feed off the primary instability, eventually destroying the macroscopic channels and driving a turbulent state.
| i | ef29f2f8b10ded9d7cc44234cc33146a |
In Table REF we present 3D FILM results but using the contrastive pre-trained encoder described in Section REF . Specifically, we perform an ablation on the type of data augmentation used during the contrastive pre-training stage (described at the end of Section REF ) and find that 3D data augmentation is essential for the encoder to distinguish whether a pair of views come from the same scene or not, as shown in the `NCE accuracy' column (9.13% for 2D versus 99.72% for 3D). However, both 2D and 3D data augmentation are necessary for the FILM task to yield the best results, as seen in the last row. This is consistent with the observation that very strong data augmentation is required to ensure that contrastive techniques do not learn trivial features that perform poorly on downstream tasks. Similar to Table REF , utilising the viewpoint camera for rigid transforms produces the best results, with 86.01 {{formula:cbfbe061-96fc-45e9-9d7e-8d25f325eddf}} 0.69% test accuracy. While the best result of Table REF is slightly higher, we re-iterate that some of those runs hit undesirable local minima, which we did not experience with this contrastive formulation. Furthermore, as noted in {{cite:35d581cb96938ffdd7099a003dbf5d9db85fca6b}}, contrastive encoders have to be significantly overparameterised compared to their supervised counterparts in order to achieve roughly the same classification error, so further architecture tuning may be required.
| r | 13d6504279dbf55fe75bde87781f7eb1 |
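The `NCE accuracy' quoted above measures how often each positive pair scores highest among all candidates in the batch. Below is a minimal numpy sketch of a standard InfoNCE objective together with that accuracy; the temperature value and toy embeddings are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE over a batch: z1[i] and z2[i] are positive pairs; the other
    rows of z2 act as negatives. Returns (loss, accuracy), where accuracy is
    the fraction of rows whose positive pair scores highest."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature               # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = -np.mean(np.diag(log_probs))
    acc = np.mean(np.argmax(logits, axis=1) == np.arange(len(z1)))
    return loss, acc

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.01 * rng.normal(size=(8, 16))  # near-identical views
loss, acc = info_nce(anchors, positives)
```

If the augmentations are too weak (as with the 2D-only case above), positives and negatives become hard to tell apart and this accuracy collapses.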
It is a common practice in training SGG models that the relationship detection module also learns to output refined object detection results, which either only refine the object classification (labels and confidences), e.g., {{cite:7a4a540b37ed8857efaa4a6b87a97bc38e925d8d}}, {{cite:981f405f2f12ba34d68326824251e15c3676fd0a}}, {{cite:08bfc1bf525a73145b2cb0f296637b8ad0d5110e}}, or refine both object classification and box regression, e.g., {{cite:7ef593cf21fd909dab0e19f08be375145bc51ce1}}. This refinement introduces extra OD parameters and an OD loss in the relationship detection module. The hope is that by jointly training OD and relationship detection, the OD performance can be improved. However, we found that this is not the case. The refined OD results are typically worse than the pre-trained OD results. As shown in Table REF , the worse and constantly changing OD results also make it more difficult to train the relationship module, resulting in worse relationship detection performance. Therefore, we do not refine the object detection results in the relationship module.
| r | 3e907caa4c482bb107104fb9ec233252 |
Figure REF provides an overview of the proposed method.
Our SLU model is a combination of two pretrained models. First, we use the encoder block of a pretrained end-to-end ASR model {{cite:140e0809ee0104c5ff7e6f78cdb57caaa2e1724b}} to convert the acoustic features of the speech signal into a hidden representation. Second, we feed the hidden representation through a learnable linear mapping into a pretrained masked language model {{cite:9fd264049eb5929afcb6a62c73c07e024e07fb4e}}, fine-tuned to produce semantic sentence embeddings, which serves as the NLU model. Finally, we utilize a teacher-student learning method to align the output of our SLU model with the output of the pretrained NLU model. Both the ASR and NLU models are based on the Transformer architecture {{cite:02490ad5f7d2cf5bb0b2aadbb270bc0112e54f03}}, which is widely used for sequence processing.
{{figure:32dc69d3-e14f-4b7a-8ceb-5bb6ae24d797}} | m | 35f451f92d7bb8748df51c783ab03d06 |
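The alignment step above can be sketched as follows: pool the ASR encoder output, pass it through the learnable linear mapping, and pull the result towards the frozen NLU sentence embedding. All dimensions and the mean-pooling/MSE choices below are illustrative assumptions, not the exact recipe of the paper.

```python
import numpy as np

def align_loss(hidden, W, b, teacher_emb):
    """Teacher-student alignment: project the (mean-pooled) ASR hidden states
    through a linear map and measure MSE against the teacher NLU embedding."""
    pooled = hidden.mean(axis=0)          # (d_asr,) mean over time steps
    student_emb = W @ pooled + b          # (d_nlu,) student sentence embedding
    return float(np.mean((student_emb - teacher_emb) ** 2)), student_emb

rng = np.random.default_rng(0)
d_asr, d_nlu, T = 32, 16, 50              # hypothetical dimensions
hidden = rng.normal(size=(T, d_asr))      # ASR encoder output, one utterance
teacher_emb = rng.normal(size=d_nlu)      # frozen NLU sentence embedding
W = 0.1 * rng.normal(size=(d_nlu, d_asr)) # the learnable linear mapping
b = np.zeros(d_nlu)
loss, student_emb = align_loss(hidden, W, b, teacher_emb)
```

In training, only `W`, `b`, and the ASR encoder would be updated; the teacher NLU model stays frozen.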
In the present work we provide a rigorous definition of parameter concentrations and demonstrate it for variational state preparation. The definition is motivated by how this effect can be leveraged for efficient training. However, different approaches claiming concentrations appear in the QAOA literature, and our results are clearly distinct from them. Specifically, {{cite:a8e5f837a8b2f584b2acd5d8dd8722ce75daa47a}} analytically addresses what we call instance concentration in the case of the Sherrington-Kirkpatrick model. There, one finds that the variance in objective function value vanishes in the infinite size limit ({{formula:c22b94c4-a626-44d1-b117-6a77d251df4b}} ), and therefore QAOA becomes instance independent. However, that result alone neither predicts nor addresses the behavior of the optimal parameters.
| d | 82b13d1bfc6e6a74359187d7c6ff0034 |
Fig. REF shows style transfer when the style and the content images have an extreme mismatch of image features. DPS {{cite:21702dd039853dc943ae2c5a06892935b4ec8ec4}}, WCT2 {{cite:b7933280980ba987620b1935ce9f99f68b8020a1}}, and STROTSS {{cite:e3544df5b1e52f0b9738f4936a4025abc4a28e7e}} output images with higher perceptual error and lower image quality scores. DPS {{cite:21702dd039853dc943ae2c5a06892935b4ec8ec4}} does not preserve the semantics of the objects. WCT2 {{cite:b7933280980ba987620b1935ce9f99f68b8020a1}} trains a decoder on sample images; therefore, its higher perceptual error might be due to a bias toward the sample images and a lack of generalization to new images. STROTSS {{cite:e3544df5b1e52f0b9738f4936a4025abc4a28e7e}} transports style features onto the content image with minimal distortion of the geometry of the objects, but in the challenging scenario of content mismatch its structure preservation degrades. DeepObjStyle outperforms the other methods and preserves the semantics of the objects in the output.
| r | c9d2dc39c2a003b5bd7b38b4702bf260 |
NSF-TransE (w/o SDBN) and NSF-DistMult (w/o SDBN) are the models trained using our approach without the ShuffledDBN layer. In NSF-TransE (w/ SDBN) and NSF-DistMult (w/ SDBN) we transform {{formula:b0510528-62c5-4e2c-89c3-1263d494aefd}}, {{formula:95c4480f-6d33-4146-a0e9-c5e998d83dd7}}, {{formula:604a923f-97f5-41b3-9cc8-3ade87af7683}}, and {{formula:1c1326ad-502e-40fb-87b4-d6231c4f20ff}} using the ShuffledDBN approach described in {{cite:5aa92112fe19c6ea08614623e14f6dc7a21a7b9a}} before feeding them to the BT loss. We compare our models to TransE {{cite:d616ca6bc34dec39a321861769fc4cba5f23ecdc}} and DistMult {{cite:26baf84449c73a98ceb4b6d59b7366324287f2dc}} trained using negative sampling on the FB15k and WN18 datasets. The results reported for these two methods are taken from the corresponding original papers. The performance of the baselines and our models on the FB15k and WN18 datasets is shown in Table REF . In addition, we conducted a set of experiments to compare the performance of our approach with that of the only negative-sample-free approach in the literature, reported in {{cite:e5ab01907ad25ca5cea9727e9c60c74380795c71}}; the results are shown in Table REF .
| r | 8b7d79b2931ece6a5770ea78f0d45635 |
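For reference, TransE (cited above) models a triple (h, r, t) as a translation h + r ≈ t and scores it by the negative distance ||h + r − t||, so plausible triples score near zero. A toy numpy sketch with illustrative embeddings:

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score: negative distance ||h + r - t||.
    Higher (closer to zero) means a more plausible triple."""
    return -np.linalg.norm(h + r - t, ord=norm)

rng = np.random.default_rng(0)
dim = 8
h = rng.normal(size=dim)                          # head entity embedding
r = rng.normal(size=dim)                          # relation embedding
true_tail = h + r + 0.01 * rng.normal(size=dim)   # consistent with h + r
random_tail = rng.normal(size=dim)                # an implausible tail
s_true = transe_score(h, r, true_tail)
s_rand = transe_score(h, r, random_tail)
```

Negative-sampling training pushes `s_true` above `s_rand` by a margin; the negative-sample-free approach in the excerpt replaces that ranking objective with the BT loss.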
Among various extended models, models with no dimensionful couplings in the tree-level action have been attracting much attention recently {{cite:c4edb0207a639c6907517a1bdf4989aa6e39457a}}, {{cite:e39c25456ce535f0a3454d3ee73ffc233ce2dcb0}}, {{cite:2ab6db3da89b785f98be311a7c4c9e55703b6b0c}}, {{cite:7c5384ea9625f4a5aad18c92cb177d02a604a0a2}}, {{cite:97457c54c1ebc8d082b36fce1db7037bf85351ff}}, {{cite:02d224fa7f38bf2ef20892c4a496b77e96e67b9e}}, {{cite:b211675ad207c5227c9ebce49bb6133709271058}}, {{cite:69e2e75c9226c072d1c84b547d40b34517657afb}}, {{cite:f645a5cb3754e8d96428454f528a09be35852404}}, {{cite:9a5dbc097611e55cb0d9353b86d56138f9909863}}, {{cite:33d07c4729f4aa095ba41d4c4a5036acff89c054}}, {{cite:69d4678364532a6a47a3c4363ffad3f130c3b9ac}}, {{cite:250828d6300dc3c59cb69c0850f3df748642aec0}}, {{cite:a527d6f9dc153e90017ffc3b3d96a3d3f0ea1f4a}}, {{cite:e0494be6363dab8d1d6018a1d860eaabbef8f644}}, {{cite:327eb54a97dffa2caba5b95e0e140b18ef9225e3}} because they can naturally realize the Coleman-Weinberg (CW) mechanism and predict a very strong FOPT with ultra-supercooling {{cite:4ed0bd4f267545c17d3447ca0d87d6003273067a}}, {{cite:1c64c141c158ef058a239b14bc366d4e195676f3}}, {{cite:1d72b432644a14af0c74f49014f0a570ca76c6c6}}.
In this case, the resultant GW signals are largely enhanced, and they can be detected by future detectors such as LISA {{cite:f4dc1ccf94b7db9016ddfda8c14ff8e91c0a5470}}, {{cite:a0eff4c1415d06dd98f11fb35a57762891eb85a0}}, {{cite:e1f6aa90756ae0e1a3fa0452b03c195b2efc4711}}, BBO {{cite:7f6141381cab2b30e28ff2574412aeed806b08e9}}, DECIGO {{cite:f4c5de21f7817d2774959d5cc381248fefedf348}}, {{cite:93362ef3cbd52c1ae463bbca823f1bad7601dd6f}}, Ultimate-DECIGO {{cite:3d73ea668cd46678adf00b1961ef929a5acafe73}}, {{cite:8b3d3158dec5ef0ddeb182eaf220f54a7018fff2}}, SKA {{cite:2f7a9bb9d89c3c4b6afbfc45418e400ade8c9822}}, {{cite:2d38d5cf8ca34d9fc15ab2505802161da3e5a21b}}, {{cite:165a45df7116c1d65f1b566af15adaec2f2d1ff5}} and so on.
| i | e6cad01097c9f9caea361daa26fb791a |
Table REF summarizes the results achieved by {{formula:a01b09b2-b2c2-486d-b788-6e225c99b2f3}}ARSRG on the COIL-100 database. In order to perform a direct comparison with the methods employed in {{cite:26e1686d8380705913b1da0bc5c4a6ce77b5dfb5}}, {{cite:43112d9992a67ecddcc3865207f602d96188b57b}}, the same setup is adopted: 25 objects are randomly selected, 11% of the images are used as the training set, and the remaining ones as the testing set. The results are compared against the baseline Logistic Label Propagation ({{formula:2177cb4a-d378-4b25-9821-923d0afca981}} ) {{cite:30fbae42868f8310acddddb45f3ccd009ce5758e}} + Bag of Words (BoW) {{cite:6605a645080eac08bd42de38de0bfb824e42d98f}}, and against those obtained in {{cite:26e1686d8380705913b1da0bc5c4a6ce77b5dfb5}}, {{cite:43112d9992a67ecddcc3865207f602d96188b57b}} with their approach (VFSR) and with the approaches proposed in {{cite:6780f4056039be450aa6e311c1e6dd021039aa6a}} (gdFil), in {{cite:66548a75790f9180481c91597e0b7428289052a7}} (APGM), in {{cite:9fdaae879a3fbbc8d2717bdef0e6c1fbbed5a2ab}} (VEAM), in {{cite:92a8b4fa3ff1f785626edd45e9bc40ef89cd8f8d}} (DTROD-AdaBoost), in {{cite:70a731148312f6de289ad7d4898be74ba96a016c}} (RSW+Boosting), in {{cite:957c28e961fdf5b006f3ab8ab10aa2fc91172519}} (Sequential Patterns), and in {{cite:688b4010a79dbff9ba9647f747ab26bd36dcc4db}} (LAF). The results are presented in terms of accuracy, and the best performance is highlighted in boldface. As can be noticed, the ARSRG embedding confirms its qualities also on this database: it obtains the best overall accuracy.
{{table:6fa586fe-d73a-4a0a-aa50-c1c750010982}} | d | 146da446845225d2fb6f0cb0cdb1d4da |
Over the last two decades, extensive efforts have gone into producing
efficient, fast mixing MCMC algorithms, in general, and in the context
of GLMMs, in particular. For example, effective data augmentation (DA)
strategies have been proposed for specific GLMMs {{cite:544aa63cbc07008a47044e1c3ac5878b53d41bcc}}, {{cite:f761e6fa8ac0467c58236e8d16460fad3da81a6a}}, {{cite:b6e3954f28d0916cc0bad9c19949e158ef1f68de}}, {{cite:7372bf898d3d24d05a41e94fc250a4086bbd094f}}. Also novel MH algorithms based on Langevin diffusions
and Hamiltonian dynamics such as the Metropolis adjusted Langevin
algorithms (MALA) {{cite:f46eb9a551741a861cedee92ca7dfc345778e7cc}} and the Hamiltonian Monte
Carlo (HMC) algorithms {{cite:d7338aeceec288e28b37157cb7b8909daebf1a67}} have now emerged as the
popular methods for MCMC sampling due to their ability to make distant
moves with high acceptance probability and favorable scalability with
respect to increasing state space dimensions. MALA and HMC have also
been applied for inference in GLMMs {{cite:6cb6c70f325439473035a1a40960b01732b509ba}}. The goal of this article is to present these efficient MCMC
algorithms for fitting GLMMs.
| i | c2024292aaf9a27d54ec73d422474588 |
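A single MALA update, as referenced above, combines a Langevin proposal with a Metropolis-Hastings correction for the asymmetric proposal kernel. A minimal sketch on a toy Gaussian target (the step size and target are illustrative, not a GLMM posterior):

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, step, rng):
    """One Metropolis-adjusted Langevin step: propose with a gradient drift
    plus Gaussian noise, then accept/reject with the MH correction."""
    prop = x + step * grad_log_pi(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)

    def log_q(a, b):  # log proposal density of moving to a from b
        return -np.sum((a - b - step * grad_log_pi(b)) ** 2) / (4 * step)

    log_alpha = log_pi(prop) - log_pi(x) + log_q(x, prop) - log_q(prop, x)
    return prop if np.log(rng.random()) < log_alpha else x

# Toy target: standard normal in 2D
log_pi = lambda x: -0.5 * np.sum(x ** 2)
grad_log_pi = lambda x: -x
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(5000):
    x = mala_step(x, log_pi, grad_log_pi, 0.5, rng)
    samples.append(x.copy())
samples = np.array(samples)
```

For a GLMM one would plug in the log posterior and its gradient in place of the toy `log_pi`; HMC generalizes the same drift idea to multiple leapfrog steps.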
Classical orthogonal polynomials, like many important special functions, satisfy remarkable relations both in the physical and in the spectral variables {{cite:b2aceb619632ce008ad24e7950a4c32c34799809}}. More precisely, they are eigenfunctions of an operator in the physical variable (say {{formula:6e07d148-4a11-4197-9866-329b6ae17110}} ) with eigenvalues depending on the spectral variable (say {{formula:0d1994bf-2bd3-4b69-bb84-74ba714344a3}} ) and, the other way around, eigenfunctions of an operator in {{formula:8fda3310-7058-40d9-824b-f7e5b2e00463}} with
{{formula:705d886a-3da2-4d88-a8e7-3793d9a35d59}} –dependent eigenvalues.
Such bispectral property was explored in the scalar case in the work of J. J. Duistermaat and F. A. Grünbaum {{cite:a10930c6f5cd18d68c1b7376fb6e971aae0cb875}}. It turned out to have deep connections with many problems in Mathematical Physics. Indeed, it could be
arranged in suitable manifolds which were naturally parameterized by the flows of the Korteweg de-Vries (KdV) hierarchy or its master-symmetries
{{cite:6ed6380b74a8651b0585a143abcc981b64b68132}}, {{cite:5d1e4f2f9a84cd4977c3295180cfed3b00805273}}. It led to generalizations associated to the
Kadomtsev-Petviashvili (KP) hierarchy {{cite:5797b370661744d9d2358739761cb630095e5410}}, {{cite:4d2c523c90016c7734dea170a320b6b994f89914}}, {{cite:9ca4cddb1068315d6f4d1f0e238564242b5b7a42}}, {{cite:20487b1c410a8b6868859e2b85b52ed42d0af698}}.
| i | e2e54c6998d6d53baddad0bd52cf54ef |
Key challenges: We present the first theoretically grounded work outlining how to create good graphs for learning from unlabeled data. The graph-based semi-supervised learning literature has largely focused on learning approaches given a graph, and very little progress has been made on the arguably more significant problem of designing good graphs. The problem was noted by {{cite:ad80860422d37075209d84955c7e7f83960acfb0}} and has remained largely open for two decades. We use a data-driven algorithm design perspective {{cite:17b68ce26b6125278a178b4e7f6e451a0f1f5e66}}, {{cite:c13c1d116ec93d32be840c1d4b969f77f02b44d5}} and take steps towards resolving this problem. We remark that our techniques are very general and apply simultaneously to learning the graph when we do prediction by optimizing various quadratic objectives with hard or soft labels (Table REF ).
| r | c3f0b320fcadde28d4f5e71f26a00f4b |
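One concrete instance of the quadratic objectives mentioned above is the harmonic solution with hard labels: minimize sum_ij W_ij (f_i - f_j)^2 subject to the labeled values, which has a closed form via the graph Laplacian. A sketch on a toy chain graph (the graph and labels are illustrative):

```python
import numpy as np

def harmonic_labels(W, labeled_idx, labels, unlabeled_idx):
    """Minimize the quadratic objective sum_ij W_ij (f_i - f_j)^2 with hard
    labels on the labeled nodes. The minimizer is the harmonic solution
    f_u = L_uu^{-1} W_ul f_l, where L = D - W is the graph Laplacian."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    W_ul = W[np.ix_(unlabeled_idx, labeled_idx)]
    return np.linalg.solve(L_uu, W_ul @ labels)

# Toy chain graph 0-1-2-3-4 with unit edge weights; endpoints labeled 0 and 1
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
f_u = harmonic_labels(W, [0, 4], np.array([0.0, 1.0]), [1, 2, 3])
# harmonic solution on a chain interpolates linearly: 0.25, 0.5, 0.75
```

The graph-design question of the excerpt is precisely which `W` to feed such an objective, since the predictions depend entirely on it.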
The new dataset follows the same
principles as Kinetics-400 {{cite:909069f39962e6a87a8cd23e740cd144449b95d2}} and Kinetics-600 {{cite:36ab7ca2c48e97dfba11653120b93e81b5474218}}: (i) The clips are from
YouTube videos, last 10s, and have a variable resolution and frame
rate; (ii) for an action class, all clips are from different YouTube
videos. Kinetics-700 is almost a superset of Kinetics-600: the number of classes is increased from 600 to 700, with all but
three of the Kinetics-600 classes retained. As in the case of Kinetics-600, Kinetics-700 has
600 or more clips per human action class – this
represents a 30% increase in the number of video clips,
from around 500k to around 650k. The statistics of the three Kinetics datasets
are detailed in table REF .
| i | 95d30827253f9b4d4f363d6e06e79d32 |
Self-absorption is not expected to produce double peaks in lines of low-abundance species, as their concentration in the low-density foreground layer is not large enough to absorb photons emitted in the dense core. Nevertheless, self-absorption has been considered as a possible contribution to double-peaked line morphologies for some transitions in past works {{cite:973a2600fbb2e8c92c0254420eaf0d8d57ad3eb7}}, {{cite:c1db8e62212905edc08ecd754500f315c9672a64}}. To make sure that a sufficient amount of HC{{formula:c88b14ac-322e-4a6e-bf00-56d62b2cf2a6}} O{{formula:b04f2a51-8ae5-409f-8d88-8c9b3d535617}} in a foreground layer would not induce self-absorption, we modelled the observations by adding a profile from 0.3 to 1.5 pc with a HC{{formula:e0b619fd-c75f-4e56-8328-b88e34d24794}} O{{formula:3fe5b3cd-84a3-4d1e-8454-95d0db09a466}} layer corresponding to a visual extinction of A{{formula:77b804e3-bc91-44cf-995c-5d02a67b6e26}} = 4 mag (more information in Appendix REF ). This test also did not reproduce the observed spectrum, as shown in Figure REF .
| d | 4438e9ba5e46edc899bc79ca0029bbf0 |
Lemma G.1 (TPM, adapted from {{cite:0ab8b11e0f58eb7f6fd6d2998a7e126af2366202}})
Consider an increasing sequence {{formula:d9b3951e-5780-413c-9981-4e51c4783d6f}} defined by {{formula:6ef02738-a963-433b-81aa-9107eefaa0d4}} for some integer {{formula:0876e28c-c8d0-4972-9365-2bea46bed303}} and {{formula:00178979-bf90-4c9f-b764-298040dbe673}}, and suppose for some {{formula:aeb1f705-7b6e-4996-b918-16b1c2f66c94}} there exist {{formula:a25ec11b-fbe1-4cde-9772-09f51bd9b123}} such that {{formula:32c29efd-b520-4531-8226-2e756217de97}}. Then for every {{formula:b128ade4-49f1-4b2f-bdb6-ba1cd58b4222}} and every {{formula:b4bf522b-96c3-4d6f-9c84-0dbf9da7b88b}}:
{{formula:01db341f-2a56-41c2-9ee6-b772f31a4ced}}
| m | def7e217f458ce44532d787b3b583954 |
Note that, by definition, {{formula:8b5ccd74-37b7-4190-ae36-6001ac80887a}}. The Lagrange functional {{cite:b6ecf9e6b3335670533e8d1f3cf401ad11b53049}} associated with the optimization problem (REF ) is (the derivation follows directly from the AL method discussed in the next section and is hence omitted)
{{formula:f0a246ce-06ca-4aeb-a818-4bce5ab4f1c5}}
| m | 0a68be5a32fa080994f95dd78907c400 |