A new PUF based on a priority arbiter, called the PA-PUF, is designed. The PA-PUF offers a uniqueness of 49.63{{formula:4f58365b-0761-4e44-931f-90c56780a03f}} and a uniformity of 49.45{{formula:a0407075-f591-4240-b96d-0403b0d8ed1c}} at the output. The priority arbiter increases the non-linearity of the PUF output. We demonstrate the configurability of the PA-PUF by varying the number of CRPs, which can be increased by lengthening the data path with additional feed-forward arbiters. We study the performance of the proposed priority arbiter PUF as a function of the number of feed-forward arbiters in the data path and the length of the data path; for example, the uniqueness of the PUF increases with the length of the data path. The PA-PUF offers a reliability of 94.5{{formula:7e2c357b-2758-41ce-84ac-f964ac180c9f}} for a 128-bit response, which can be raised to 100{{formula:f78ca60f-0743-4d5d-a222-c89df97f8bca}} by implementing Bose-Chaudhuri-Hocquenghem (BCH) error-correcting codes {{cite:6c5155502330e213a52aa2ff924064e5842cd539}}. {{figure:4f77c0e0-a9bc-4b7f-a07f-813415fa8d50}}
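For concreteness, the two quality metrics quoted above can be computed from collected responses as follows; this is a minimal sketch assuming responses are available as 0/1 arrays (the function names and simulated data are illustrative, not part of the PA-PUF design):

```python
import numpy as np
from itertools import combinations

def uniformity(response):
    """Percentage of 1s in one chip's response (ideal: 50%)."""
    return 100.0 * np.mean(response)

def uniqueness(responses):
    """Mean pairwise inter-chip Hamming distance in percent (ideal: 50%).
    responses: (n_chips, n_bits) array of 0/1 values."""
    dists = [np.mean(a != b) for a, b in combinations(responses, 2)]
    return 100.0 * float(np.mean(dists))

# Example with simulated data: 10 chips, 128-bit responses
rng = np.random.default_rng(0)
resp = rng.integers(0, 2, size=(10, 128))
print(f"uniqueness: {uniqueness(resp):.2f}%, uniformity: {uniformity(resp[0]):.2f}%")
```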
An example of a purely AI approach is {{cite:66734beaa8ea08e8985a41015265fe82d6f12e01}}, which evaluates the performance of an artificial neural network (ANN) on data from a laboratory test rig for an engine. However, it provides no understanding of {{formula:58d4ed89-c200-4aa9-ab00-f1b75bf7310e}} formation and produced spurious, non-physical results. Instead, engine scientists prefer an approach to predicting emissions that is interpretable using domain knowledge {{cite:1bff84b5eb232571a358ccba9fc3f2feaaf3d1bf}}, {{cite:038046266a4aac5026d5320eab565cd38eb2b691}}, {{cite:1837a481bfe4f08becfb5c9f7e35548d8337cf90}}, {{cite:a79e2c439f55f5d3a8efea9916a9c6c6dba073fb}}.
As a second novelty, we propose the DeepFH segmentation. DeepFH extends the segmentation algorithm by Felzenszwalb and Huttenlocher (FH) {{cite:241c07c330f47a2eb0c18d0fe5b00190d8286d66}}, which relies on RGB features for pixel similarity, with semantically richer deep features. We generate the deep features with a lightweight encoder-decoder architecture based on a variation of ResNet {{cite:e5abf8aca9edad81732bb4e591dae4d3ceb1675d}}. In our evaluation, we show that DeepFH outperforms the original FH and further improves the downstream superpixel-based object proposal refinement. Since FH's general structure is unchanged, its properties, including non-compact superpixels, are preserved.
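The following minimal sketch illustrates the idea, not the authors' implementation: features from a pretrained encoder replace RGB values as the input to FH (here via scikit-image's felzenszwalb, under the assumption that it accepts arbitrary multi-channel feature maps; the ResNet-18 stem is an illustrative stand-in for the paper's encoder-decoder):

```python
import numpy as np
import torch
import torchvision.models as models
from skimage.segmentation import felzenszwalb

# Illustrative feature extractor: the ResNet-18 stem, upsampled to image size
backbone = models.resnet18(weights="IMAGENET1K_V1")
encoder = torch.nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu)

def deep_fh(image, scale=100.0):
    """image: (H, W, 3) float array in [0, 1]. Returns a superpixel label map."""
    x = torch.from_numpy(image).float().permute(2, 0, 1)[None]
    with torch.no_grad():
        feats = encoder(x)                                  # (1, 64, H/2, W/2)
        feats = torch.nn.functional.interpolate(
            feats, size=image.shape[:2], mode="bilinear")
    feats = feats[0].permute(1, 2, 0).numpy()               # (H, W, 64)
    # FH on deep features instead of RGB
    return felzenszwalb(feats, scale=scale, channel_axis=-1)
```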
MOT16 We compare PatchTrack with other MOT systems on the MOT16 {{cite:b61da861df65d5c82470b40e509bd17167e026fc}} test set under the private protocol (Table REF ), where PatchTrack achieves state-of-the-art results in MOTA, ML, and FN. Compared to LMP_p {{cite:f4ab442f5e5b597f26007afaed9957b414b57f5f}} and POI {{cite:97da927003d81f617c0bd1252c6f560477d2dce1}}, which collectively achieve the best results in the remaining metrics, PatchTrack has significantly lower ML, showing overall better tracking performance. Figure REF shows an additional visual comparison with LMP_p and POI, where PatchTrack tracks partially occluded objects and distinguishes crowded objects better, without missing objects or tracking one object multiple times.
The overall observational properties of SN 2012au show that it is a transitional event between SESNe and SLSNe I {{cite:e004e1bf3d8ca7a152f905399d88cd98866ecd6c}}, {{cite:5d803072cb91d443f2a80d91f3b5df49263bda71}}. Therefore, we also compared (see Fig. REF ) the linear imaging polarization properties of SNe Ib (SN 2007uy and SN 2008D; {{cite:b29771e8bfc9b92f3566ce1270a9f57de88136ab}}, {{cite:d6d57169d32d6632723061b958d24862578ca90b}}), IIb (SN 1993J; {{cite:49ef2366fea7223a6a04856e5ec56d6ce727238e}}), GRB-SN (SN 2006aj; {{cite:1a5c35fc47ae362d5a0b02af261e0220a3067856}}), and SLSNe I (SN 2015bn; {{cite:3c482fe2117210ad1dc3d5040908a973cd35d1bf}} and SN 2017egm; {{cite:60dbed6cf5e7a1d652ebaa6354705b2358173831}}) from the literature along with those determined for SN 2012au. Here, we consider only events with multi-epoch observations. This analysis reveals that among SNe Ib, SN 2008D and SN 2012au show a major variation in the polarization parameters, whereas the evolution of SN 2007uy is minimal. Similarly, SN 2015bn exhibits an increasing trend in the degree of polarization, while SN 2017egm remains below 1 per cent without any significant change. Pre-maximum, SN 2006aj exhibits high values of linear polarization in comparison to all presented SNe; after peak, however, its polarization values are closer to those of SN 2012au.
CTPN {{cite:143935dc198ca54eb1a6b677ee7a3589679b748f}} is a pioneering work that brought text detection into the deep-learning era, but it simply merges text segments according to a fixed threshold and can only deal with horizontal text. SegLink {{cite:57acc015606d2f35a9c4ccdec87f37f28b13898c}} is designed to detect multi-oriented text, aiming to connect the centers of two segments with an eight-neighbourhood link prediction. Subsequently, as the research focus shifted to curved text, text detection transitioned from detecting bounding boxes to detecting contours. TextSnake {{cite:b8decb3b5ed4ee14b875bc703c0453f6f6ae339a}} is the first work to attempt this transition, using several circles with predictable angles and radii to fit curved text. At this point, both geometric and visual information have been largely considered in curved text detection.
We observe that Davis' decomposition is also valid in the martingale Hardy-amalgam spaces. Indeed, we have the following decomposition, which can be proved as in the case {{formula:8a99b924-5f23-49d5-b7fb-5d5363d5e4f0}} (see {{cite:534428b852494c871cdb156b64110f4bf6fec525}})
Variance of the gradient. Using a single domain discriminator also helps reduce the variance of the gradient. Large variance in the stochastic gradients slows down convergence, which leads to poor performance {{cite:2662ef1d8a0e467e964e5ab095e7cc425afca5ea}}. Herein, we analyze the variances of the stochastic gradients under existing optimization constraints. Excluding the weighted source combination strategy, we can approximately express the optimization constraint of existing adversarial MDA methods as a sum of the information constraints: {{formula:16dca6fa-1cf5-44e4-9412-2facb9101495}}
Adapters are small bottleneck modules consisting of a down-projection, a non-linearity, and an up-projection, with a skip connection (see Fig. REF ). The initial implementation {{cite:23cfccd0c024b67e8bb741ee60d078f212389f9e}} applies adapters after both the self-attention and feedforward layers. However, it is possible to apply adapters at different positions throughout the transformer block {{cite:276105ed4dfe99be3f504dd1c2a45521a54b73b1}}. The fully connected layers are initialized as a near-identity function. The identity initialization and the skip connection allow the module to be ignored if not deemed necessary during training.
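A minimal PyTorch sketch of such a bottleneck adapter (the hidden size, bottleneck width, and initialization scale are illustrative assumptions, not the cited implementations):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-projection, non-linearity, up-projection,
    plus a skip connection; near-identity at initialization."""
    def __init__(self, d_model, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()
        # Small weights and zero biases make the residual branch ~ 0,
        # so the skip connection yields a near-identity module.
        for lin in (self.down, self.up):
            nn.init.normal_(lin.weight, std=1e-3)
            nn.init.zeros_(lin.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

# Usage: inserted after a self-attention or feedforward sub-layer
adapter = Adapter(d_model=768)
h = torch.randn(2, 16, 768)   # (batch, seq, hidden)
out = adapter(h)              # approximately equal to h at initialization
```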
{{cite:b6144640f729c6aff5d3e0d333435d485f170432}} propose several strategies for storing experiences in the replay buffer that have been shown to reduce catastrophic forgetting in RL. All our methods use a prioritized replay buffer that resembles the surprise strategy {{cite:b6144640f729c6aff5d3e0d333435d485f170432}}. In addition, we compare this with the FIFO and Reward strategies. For FIFO, we set the prioritization exponent {{formula:44f14dbf-b778-41fa-9d83-4e4da4083ec7}} to 0 {{cite:6fd5bf3493cd0e2e9f966012d86aafb8184fb4ed}}, which is equivalent to uniform sampling. In the case of Reward, we do prioritized sampling that favors experiences based on the absolute value of the reward instead of the TD-error used in our default case. As can be seen in Figure REF , ER with prioritized sampling performs best compared to the Reward and FIFO strategies in terms of both average score and average forgetting. Implementing other sampling strategies such as Global Distribution Matching and Coverage Maximization is left for future work. {{figure:fe09fc94-defd-4e88-81ca-50d1ad479d40}}{{table:d02dcd49-5bc7-4051-b7de-c02af3a0924b}}
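The strategies above share one sampling rule, sketched here (following the standard prioritized-replay formulation; variable names are illustrative): sampling probability is proportional to priority raised to the prioritization exponent, so an exponent of 0 recovers uniform sampling, while the priority itself can be the absolute TD-error (surprise) or the absolute reward.

```python
import numpy as np

def sample_indices(priorities, batch_size, alpha, rng=np.random.default_rng()):
    """Prioritized sampling: P(i) proportional to priorities[i]**alpha.
    alpha = 0 gives uniform sampling; priorities may be |TD-error|
    (surprise strategy) or |reward| (Reward strategy)."""
    p = priorities ** alpha
    p = p / p.sum()
    return rng.choice(len(priorities), size=batch_size, p=p)

td_errors = np.abs(np.random.randn(1000))                  # surprise priorities
batch = sample_indices(td_errors, batch_size=32, alpha=0.6)
uniform_batch = sample_indices(td_errors, 32, alpha=0.0)   # uniform sampling
```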
where {{formula:a83f335c-869c-4c80-83a2-4fef6baab4fa}} is the critical exponent related to the scaling of the magnetic susceptibility near {{formula:38263b2c-6af7-4c76-9897-a9c439e61625}}, {{formula:acce19b4-1d93-451d-9ce4-6cebf2bfabda}} is the critical exponent related to the behavior of the two-point correlation function right at {{formula:ae5b0142-cb32-4e57-b4f7-72d16d38c941}}, and {{formula:94986a30-6da0-49f2-9d17-3fd454d339b8}} is the critical exponent related to the scaling of the energy of the system as a function of the magnetization near {{formula:836e02a7-d177-4c3f-b5a6-fc5b3e3ebc72}}. Evaluating Eqs. REF -REF using the {{formula:65175e11-9095-4fbd-b173-ce54fec0b7e4}} found at {{formula:bfa8657d-f7fb-4b01-86e3-b2b2a01186be}} and {{formula:0f8c0e68-7f75-4eaf-989e-0fe38b1dc184}} from 100 different simulations gives mean values of {{formula:012752de-ccfa-48cd-a71c-cd2f46e9c411}}, {{formula:10926a5f-4812-4520-8395-afc81e5b31a8}}, {{formula:d879007d-0f93-4352-b254-d95d099273cb}}, and {{formula:b8b2e6af-04af-4176-82aa-df94a824c18c}}. These compare well to the true values {{formula:901447cc-b47a-438a-8c27-3d23a7e79ac4}}, {{formula:afe41d2c-f87b-45df-ba16-8f7e3aadeb53}}, {{formula:9e63618c-21ef-4f40-9e36-6a35fb521752}}, and {{formula:8dfd28f7-f3bc-49b5-b9ff-87324a6036ef}} {{cite:226bd6cef2d17de29dcb0c628ca322b25aa1aed0}}.
where {{formula:85c6cc4e-0501-4d4f-8134-f8372e091bef}} denotes the inverse DFT matrix {{cite:8d2ec61d4fb8bcc798c91812f78feb14c360e7fc}}, {{formula:39ac3abf-8d46-44e1-8e1d-e65abf5bf7a9}} is the matrix nuclear norm, and {{formula:b69be1da-0a03-40a8-82a2-6c916ffe4df7}} denotes the {{formula:03226a1e-942c-4045-9b9e-4105045deddf}}-th frontal slice of {{formula:79b44ef6-67ce-4335-9d79-ed90b53f7fb3}}. Problem (REF ) implies that low-tubal-rank structure can be characterized by the sum of the nuclear norms of the frontal slices under the linear DFT. {{figure:88ed396f-a9ba-418c-a43b-ab6e50313942}}
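A minimal NumPy sketch of this characterization (assuming a third-order tensor and the common tensor-nuclear-norm convention with a 1/n3 normalization, which varies across papers; this is an illustration, not the paper's code):

```python
import numpy as np

def tensor_nuclear_norm(T):
    """Sum of nuclear norms of the frontal slices in the DFT domain,
    for a third-order tensor T of shape (n1, n2, n3)."""
    That = np.fft.fft(T, axis=2)                         # DFT along the third mode
    tnn = sum(np.linalg.norm(That[:, :, k], ord="nuc")   # nuclear norm per slice
              for k in range(T.shape[2]))
    return tnn / T.shape[2]                              # 1/n3 normalization

T = np.random.randn(20, 20, 8)
print(tensor_nuclear_norm(T))
```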
It is usually assumed that the remnants of PPISNe are massive black holes and that no {{formula:6f65d6ef-5ead-4bcd-b536-430e3d317776}} Ni is ejected to power the SN light curve {{cite:f36d0fe5854a80197f7bbe884b1fbdf31a74b019}}, {{cite:e7da4650c3734e225362804573c5c80fceb4a716}}. {{cite:741de0b2c27cd87c887de8f1fbe88c8d476894d5}}, on the other hand, propose that a PPISN can eject a large amount of {{formula:69c8817b-d90d-4dc0-8653-1047571d807e}} Ni ({{formula:65d40376-fcc0-4bcd-9744-36c6fa27986f}}) to power the light curve of the SLSN PTF12dam, despite the formation of a black hole in PTF12dam. Given the moderate amount of {{formula:8a20e405-86d4-420f-ab37-41b7362fe6a5}} Ni in the CSI-LW+{{formula:6cdea2be-bd20-45a9-b4eb-b49b5c80fffe}} Ni model (see Table REF ), we suggest that the {{formula:47e632ec-483d-4d18-99ef-3238a80e7c5d}} Ni may be ejected during the collapse of the helium core to a massive black hole, i.e., a collapsar {{cite:e18e74fbb48f62fb268b905bbc97200ff936613b}}. This implies rapid rotation of the helium core. However, such rapid rotation is in tension with the existence of the hydrogen-rich envelope of the progenitor of iPTF14hls, since rotation tends to remove the hydrogen envelope. This tension can be relaxed by assuming differential rotation of the progenitor star. The derived explosion energy (Equation REF ) is also consistent with the collapsar model.
Arguably the first approach to the neighbor search was developed in 1974 using a quadtree {{cite:1b260073ad91e4e519812edc9e4f4ff1c315a7a5}}, which hierarchically indexed a reference set {{formula:61fbf188-83fc-4cee-a996-316b33bf8b68}}. In higher dimensions, a KD-tree {{cite:443ff3ee5553f6be8ba5f868529d5c4b8b136d27}} more cleverly subdivided a subset of {{formula:1897e6b5-6cfc-4f22-a6ed-db93dacf1820}} at every level into two subsets instead of {{formula:892aa565-248e-4a24-aeda-d5693e00610c}} subsets. {{cite:4fbc73cebeed247b27ebf7fb37edb4a8bcf7eb6a}} described further developments since the 1970s in detail. Section  will technically compare the most recent results after formalizing the key concepts below.
the standard NMF algorithm {{cite:410560ee5997e18f660a028c5072d556710f9f56}} {{formula:b53270ca-8f24-4cad-a7ff-958a884ba817}} LS; the original UNMF algorithm {{cite:797f1aecc39be052cfc31fd7105899e270373804}} {{formula:920215f3-6165-4d91-bfa3-7aa4bc018ef6}} D-U; the original BNMtF algorithm {{cite:797f1aecc39be052cfc31fd7105899e270373804}} {{formula:8f7d242b-5196-42b0-8e39-8aaed13f3615}} D-B; the MUR-based algorithm for UNMF, i.e., algorithm 3 in {{cite:2b64065573d9dd042fc43131b5a7a89972c8cbb5}} {{formula:1452e3e8-bbff-44a4-85be-449a649273f2}} MU-U; and the convergent algorithm for UNMF proposed in our previous work, i.e., algorithm 4 in {{cite:2b64065573d9dd042fc43131b5a7a89972c8cbb5}} {{formula:1f70bedf-1268-4e76-be82-90791ca70b70}} AU-U.
The ARD procedure described above takes approximately the same amount of time as adversarial training. Adversarial training is slow since it requires far more gradient calculations than natural training. Several methods have been proposed recently for accelerating adversarial training {{cite:3fac2ecd395f1baa1ea6aa3430c87bc92f95c328}}, {{cite:13ffadfcb9a5054202baecdcf25842d6c1095c95}}. We similarly accelerate ARD by adapting “free” adversarial training to distillation. This version, Fast-ARD, detailed in Algorithm REF , is as fast as knowledge distillation (see Table REF for a list of training times). During training, we replay each mini-batch several times in a row. On each replay, we simultaneously compute the gradient of the loss w.r.t. the image and the parameters using the same backward pass. Then, we update the adversarial attack and the network's parameters simultaneously. Empirically, Fast-ARD produces less robust students than the full ARD above, but it produces higher robust accuracy than models with identical architectures trained using existing accelerated free adversarial training methods, as seen in Table REF . Furthermore, Fast-ARD from a TRADES WideResNet onto MobileNetV2 produces a more robust student than our most robust MobileNetV2 produced during vanilla knowledge distillation, in the same amount of training time.
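A minimal PyTorch sketch of the replay step described above (a sketch under stated assumptions, not the paper's Fast-ARD code: the cross-entropy loss stands in for the distillation objective, and the step sizes are illustrative); one backward pass yields gradients for both the perturbation and the weights:

```python
import torch

def free_replay_step(model, x, y, delta, opt, m=4, eps=8/255, step=8/255):
    """Replay one mini-batch m times; each backward pass updates both the
    adversarial perturbation (ascent) and the network weights (descent)."""
    loss_fn = torch.nn.CrossEntropyLoss()   # stand-in for the distillation loss
    for _ in range(m):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        opt.zero_grad()
        loss.backward()                     # one pass: grads for delta AND weights
        with torch.no_grad():
            # ascent step on the perturbation, clipped to the eps-ball
            delta = (delta + step * delta.grad.sign()).clamp(-eps, eps).detach()
        opt.step()                          # simultaneous weight update
    return delta
```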
Variance reduction: The variance of gradients is detrimental to SGD, motivating variance reduction techniques {{cite:a57ba8207eabf5c3245fe2b46c106319d218b50b}}, {{cite:62f6800bcdc198a07d396d24393f568cc5be6df1}}, {{cite:cc0afd8ae0fb94726c40ea1e0f2037f1481badf3}}, {{cite:c207d363476a08b3d15f58f411c3e2e0fa271107}}, {{cite:38285626825c2430ebac29b7a1d7791410289377}}, {{cite:25443a00db29879436975c30788b04d375d1ce46}}, {{cite:53e8298713e3b448db7ceb7d3b97fd924061b0f0}} that aim to reduce the variance incurred by the stochastic estimation process and to improve the convergence rate, mainly for convex optimization, with some extensions to non-convex problems {{cite:e7e089abcf8f5611483311d3797cceee8133b97b}}, {{cite:fb6ee6a6b231026ffd6ec35c2397fd704a673cb8}}, {{cite:bfd23a024acde152988fab6a9d4e338c1b8bffc6}}. Among the most practical algorithms for better convergence rates are momentum {{cite:bc6e299f01261635ad0e7c6e5f537d7fa8356ae3}}, modified momentum for accelerated gradients {{cite:1c85b1c4604788db7fe95149fe44b6e22a46b199}}, and stochastic estimation of the accelerated gradient (Accelerated-SGD) {{cite:30ceddb4753f8bb0cecb8261375907eb3d1de94c}}. These algorithms focus more on convergence efficiency than on generalization.
Qualitative findings of the DiSCVAE handling four sequences from the Sprites test set are depicted in Fig. REF . The middle row presents reconstructions of these example sequences with swapped {{formula:1b69c6ee-6a19-47d8-be53-c6453f0d9e54}} and {{formula:9627288a-04ba-46e1-8c70-09bdcadc394b}}, where the global and local variables exclusively manipulate caricature traits and their behaviour, respectively. The same observations were reported in the DSeqVAE paper {{cite:a6b7893a33568ae0eb364f3d47600d6179934930}}, yet the DiSCVAE required a specific value of {{formula:b87237ef-2aec-4acd-a70c-98801c3c109e}} to replicate this exact effect. To further expand on the relevance of {{formula:ba067d94-bb1f-4123-bc31-fa7eab4076b5}}, the figure's bottom row shows that sampled last-frame predictions are correct when drawn from the inferred cluster (top frame in grid), but exhibit varying diversity depending on {{formula:7be3fb3b-0e97-4da8-afba-7118a57fb21d}}. For {{formula:4bfda913-3b7e-4ce2-be84-6b6e1c882253}}, component samples influence how the action pans out, even portraying unseen orientation turns in the first column of the grid. On the other hand, {{formula:c5f5f62d-696d-4ccd-bbb6-788b020a076e}} components mainly control attributes, such as the sprite's hair colour. {{table:309236f2-fed7-4cfb-bfed-cf6f3f4cf5be}}
Blind source separation (BSS) is the problem of separating a set of source signals (sources) from a set of mixed signals (mixtures) given little to no information about the sources and the mixing process {{cite:b974c00ce2f3be28809c6fd645470bce213eac25}}. In the linear setting, where the mixtures are linear combinations of the sources, the blindness often refers to knowing neither the source realizations nor the mixing weights. Without any further knowledge about the sources the problem is infeasible, and hence further assumptions on the sources are needed to facilitate source separation. An example of BSS is the independent component analysis (ICA) framework, in which the sources are assumed to be drawn from independent non-Gaussian distributions. Other examples include the statistical blind source separation regression (SBSSR) model in {{cite:973127496a5fbbcc22d0bfa38599146efcfd51ea}} and non-negative matrix factorization in {{cite:8324631ba01cce55c8797c24f14cec3aee7c24f2}}. In this section, we provide a brief description of these methods and present how to utilize them as seeds for our causal structure learning framework.
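As a toy illustration of the ICA instance of BSS (using scikit-learn's FastICA; the sources and mixing matrix are invented for the example, and recovery is only up to permutation and scaling):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent non-Gaussian sources: a sine and a square wave
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]

# Observed mixtures: linear combinations with an unknown mixing matrix A
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
X = S @ A.T

# Blindly recover the sources from the mixtures alone
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)
```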
With regard to future improvements of our approach, we believe that using character-level embeddings {{cite:3d43bd043cb6ef128cb12421490c0f81b0751924}} instead of word embeddings, or even combining both methods, could boost performance. Combining domain-specific word embeddings with generic embeddings has also been shown to improve performance {{cite:962246b87e626c5e5c4045a5f8959d8555f15cff}}. Future work may also involve hyper-parameter tuning as well as the study of model ensemble techniques, other approaches for extracting features from convolutional layers, and other similarity measures for comparing features.
As mentioned in the introduction, an obvious question is the origin of such a field. The magnetic decay timescale of {{formula:e012f967-3a9f-49c4-9161-de25d4d5d409}} plasma is {{formula:3a79c596-779a-42fb-8a28-88af55301d6c}}, so it is natural for such a plasma to support magnetic fields, and the question is identifying their source. Within dynamo theory {{cite:ef77e966cb4eb0739e511aa13d6815745d31c86d}} there are several mechanisms to amplify the field up to the saturation strength of {{formula:6712f230-4421-4756-8588-ac78f41b792f}} {{cite:3005b90d44ebabea86289922e33cc9eb7dfabc3b}}, {{cite:b10365f5415a05f57e73f6e94c1c7213f6923c23}}, {{cite:9f512328c7c9b730c8321d302ca64a6f5a175ff0}}, {{cite:426a0279641480f79f40f276edfc4a69a5b03c9d}}, and the nature of the amplification mechanism will be imprinted in the magnetic field structures. This may happen at several points during the evolution of the WD. In the Chandrasekhar-mass ({{formula:7109af4b-2503-4998-af93-f8dc18205f09}}) explosions considered here, a WD close to equilibrium begins to burn as a result of compressional heating, which in turn results from accretion from a companion. This leads to subsonic deflagration, which then transitions to supersonic detonation. The correlation length of the flow during each phase will set the correlation length of the magnetic field, which in turn impacts the escape of positrons. During the accretion phase, the dominant length scale is the radius of the white dwarf. During the late-stage run-up to the deflagration, the so-called smoldering phase {{cite:7fa15cb98c95bc6587fd64bc42679d8bc7c6d962}}, {{cite:cb7fbdefda8d0f41abe7328fe7cdb25b64a3471f}}, the dominant length scale is the pressure scale-height of the star. During the supersonic explosion, the scale is set by the sound-crossing timescale, because the flame propagates as a weak detonation. Our results show that amplification during the deflagration phase is unlikely. Hydrodynamical simulations {{cite:185b3e18cf3368a580748a4f5f5c20933ce74eb3}}, {{cite:3dc8e6abd2bdde176d479a7d909b20e5bae20809}} have shown that the instabilities that could give rise to a dynamo are frozen out during the expansion phase of the WD. This leaves the accretion phase or the smoldering phase as the likely candidates for magnetic field amplification in WDs.
Optimality. Hierarchical reinforcement learning admits two notions of an optimal solution: (i) recursive optimality and (ii) hierarchical optimality. Under recursive optimality, as in MAXQ {{cite:74f1a7b70ea32715969ac5f694ee38f6d6586f3b}}, the expected return for performing a subtask {{formula:5d4841c6-efa6-4719-b3c2-353b98f86028}} is {{formula:a7b07e17-a7a6-4af1-9154-17a32a0344de}}. The expected external reward {{formula:8f4c2dfe-3910-44f0-b59c-9016a811975a}} after the completion of subtask {{formula:006a900e-d5e4-480e-ad80-9fc00164ebe2}} is not part of the value function, which makes MAXQ a recursively optimal solution. In contrast, under hierarchical optimality {{cite:b108aa902bf6e628c14f579ab3cb34b80713be39}}, when solving the entire hierarchy, the policy is hierarchically optimal given the rewards from the external subroutines (Eq. REF ); each individual task may or may not be locally optimal. The Bellman equation for hierarchical optimality is given in Eq. REF .
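For orientation, the standard MAXQ value decomposition takes the following form (Dietterich's notation, which may differ from the placeholders above):

$$ Q^{\pi}(i, s, a) \;=\; V^{\pi}(a, s) + C^{\pi}(i, s, a), $$

where $V^{\pi}(a,s)$ is the expected return for executing subtask $a$ from state $s$, and the completion function $C^{\pi}(i,s,a)$ is the expected return for finishing the parent task $i$ after $a$ terminates. The reward obtained after task $i$ itself completes is excluded, which is exactly why MAXQ is recursively rather than hierarchically optimal.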
The master equation does not necessarily preserve the positivity of the reduced quantum state of the system. Only one form of the master equation is a completely positive map: the celebrated Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation {{cite:dd53ec4dac0febf8c4e8c56e401ba9a1aa01bdd5}}, {{cite:b2cdf8d44c01ed736f82919579b5ccc75f7aaf17}}, {{cite:9aeea5fbd1627bd9884584ab4cecc8108fb3eb28}}. The GKSL equation is a consequence of the Dirac–von Neumann axioms, while the physical scope of the equation rests on coarse-graining the reduced system dynamics over a timescale much longer than {{formula:d1e1a5dc-c42e-46f5-8d22-da3f3c23148d}} {{cite:39c757b62be1239b1d3ed6a788fddc8311eb8d31}}.
Models designed using traditional methods usually rely on statistics and design thresholds over data such as semantics, pose, and order constraints of objects to predict the relationships between objects. {{cite:4816776230c176ad7067a5e783a86e87a2bd3c92}} employed information about the spatial layout of detected objects to predict the relationships between objects. To predict the correlation between objects, {{cite:252c4aeafb9bc2242133fab9b14a832644a7703e}} uses prior statistics of the owner-member relationship between objects obtained through a Conditional Random Field (CRF) {{cite:b48a62abaa723a2b9646eca42b296fa0a34eaa49}}. {{cite:095452f0d027ef8c3023ca90e80533eb4fa89c41}} used prior knowledge to train the model to learn objects that frequently occur in pairs, and long short-term memory (LSTM) networks {{cite:dad3fd03bc0a144d4900267b67f37a4b26276a89}} to encode the images to enhance contextual information and thus improve the performance of relationship prediction.
We also analysed the sensitivity to semi-annihilating DM through its neutrino spectrum in detectors like Super-Kamiokande {{cite:f5f9a8dbce70e894a2deb9c22c2f104fc30dd7b9}}. We found that the limits are at least {{formula:a09e8d0b-26eb-4c51-a664-2ccf69bc7b01}} orders of magnitude weaker than for DM annihilating to two neutrinos {{cite:de22a63e254835aec806ada5c0591467ee8d15ce}} and 2-3 orders of magnitude weaker than the complementary photon channel discussed above. This is because the neutrino spectrum is weaker than the photon spectrum and, additionally, neutrinos interact weakly with the detectors. Thus, a dedicated analysis that includes direction and energy information might provide stronger limits. Similarly, neutrino experiments/detectors like DUNE {{cite:df5c344dc822ab66147227849c8ad94130e78ad4}} and Hyper-Kamiokande {{cite:e59ec0869a9dcc35ffafa5b582fc566827769a29}} can provide stronger limits for low-{{formula:d2916dd1-23da-4088-b202-4c2f8b521f92}}, whereas IceCube {{cite:2f8b751aa0d4ffd3d79a9714428a36f3fb7f8b50}} and ANTARES {{cite:ce51221485745ab8955bf4d1c1679e50d410d7e5}} can be utilized for heavy-{{formula:2ef0dd4d-6e81-428f-ab5f-a7237dff0870}}. Even so, these are expected to be weaker than the complementary photon channel.
The successful reconstructions raise concerns about the potential vulnerability of certain common anonymization methods used for MRI data. While the current implementation has only been shown to work on test data coming from the same dataset used to train the model, pre-trained networks are commonly used outside of their original domain. As an example, the Human Connectome Project {{cite:7317768df96e58503909c1043115da674a31cedd}} provides face-blurred data of 1113 subjects, which could have been used to test the generalization properties of our procedure; however, to respect the privacy of these subjects, we have not attempted to apply our trained GANs to them. In addition, the generalization properties of the network could be improved by training it on datasets acquired from multiple sites and with different scanning parameters.
In this section we report our main results. We first consider the convex case with constant step size, where we prove (1) that the existing bounds in {{cite:d6f5e571e539304853308cbbd342dc1f7460eaf2}} are tight, and (2) that for linear models a data-dependent analysis shows that {{formula:c5320acc-083c-4089-a580-e277ac925c4f}} does not increase with {{formula:87528774-4803-4f71-a744-10c838646626}}. We then move on to the non-convex case, where (a) for decreasing step size we give a lower bound suggesting that, within a wide range of {{formula:75681b34-eee4-4957-9caf-ec878b6a1f66}}, the existing bound in {{cite:d6f5e571e539304853308cbbd342dc1f7460eaf2}} is not tight, and we prove a tighter upper bound that matches our lower bound; and (b) for constant step size we give loss functions whose divergence {{formula:2277f5fe-7b4b-4825-a0ab-c1b7dfced158}} increases exponentially with {{formula:8f05ac7a-f58e-45b3-8b7d-dc423273a43e}}.
In previous research on covert communication, researchers studied the performance of covert communication for various channel models and proved the existence of the Square Root Law (SRL), i.e., {{formula:2a93fee1-30dd-42ba-acde-433eb4b7fdf2}} {{cite:c024ee6190b3482dee6d581d5e15a3561b08e5df}}, {{cite:5325c8452edd572ee164a1170dba8cf5cad7487d}}, {{cite:8fb819a70158ad7d3d1de43616bc3fab11e04c65}}, {{cite:f919c5c7d707069b90e570daca29f3ad38e71215}}, which was first discovered in the AWGN channel by Bash et al. {{cite:c024ee6190b3482dee6d581d5e15a3561b08e5df}}. To increase the covert transmission capacity, many researchers exploited uncertainties of the system in order to reduce the adversary's probability of correct detection. These uncertainties include noise, channel, power, and transmission slot uncertainty {{cite:214a70f766d1aa129f349a977909e6c77958efd4}}, {{cite:d715b2c53ab36ddba66488e0581a22a5641f7002}}, {{cite:26127082e9b7ff26f27a6f2de00a9e95338de7c4}}, {{cite:85f1c4bbd22a6995284579314b8f33ca9a255766}}, {{cite:dd290ca3cb1e586491408c63460e040e4ed53481}}, {{cite:598aa434eb03bcb139b9a4bc1b59ea9bc59ac99b}}, {{cite:6cd088b38940c87653415fd902f994e297896549}}. The authors proved that when the noise uncertainty obeys a certain distribution, the covert communication rate can be improved {{cite:214a70f766d1aa129f349a977909e6c77958efd4}}, {{cite:d715b2c53ab36ddba66488e0581a22a5641f7002}}. The influence of channel uncertainty was studied in the Binary Symmetric Channel (BSC), and it also helped to increase the covert rate {{cite:dd290ca3cb1e586491408c63460e040e4ed53481}}. Additionally, a random transmit power was shown to enhance covert communication performance {{cite:598aa434eb03bcb139b9a4bc1b59ea9bc59ac99b}}. Moreover, the authors showed that a positive covert rate can be achieved with the help of Alice's transmission slot uncertainty {{cite:6cd088b38940c87653415fd902f994e297896549}}.
One approach that has gained attention recently is the use of skills: short sequences of single-step actions representing useful, task-agnostic behaviours extracted from datasets of expert demonstrations {{cite:195270eef9fc7119d5a0cdaaed88563dbb61ba36}}, {{cite:1d474d6fe5363240a9c9d5ace8e8c7ded4d3c559}}, {{cite:86b1afb5d9852dad20ada6c60a5d823b011c43cc}}. In the manipulation domain, for example, such skills could include move-left, grasp-object, and lift-object. These skills are typically embedded into a latent space, which forms the action space for a high-level RL policy. While this yields significant improvements over learning from scratch, outstanding challenges remain for training RL agents using this latent skill space. {{figure:7cc490f4-01d3-4f5b-9ad1-cbf457a9e8a7}}
The study of neutrino properties is known to be a powerful tool for searching for physics beyond the Standard Model. The observation of flavour oscillations in experiments with solar, atmospheric, reactor and accelerator neutrinos implies that neutrinos have nonzero mass; this, in particular, means that they should also have magnetic dipole moments. As neutrinos are electrically neutral, they have no direct coupling to electromagnetic fields, and their electromagnetic interactions should arise entirely through quantum loop effects. In the simplest extensions of the Standard Model capable of producing nonvanishing neutrino mass, the predicted neutrino magnetic dipole moments are too small to be probed in the foreseeable future. (Strictly speaking, neutrinos may have magnetic and/or electric dipole moments. The former are described by the real part of the matrix of neutrino electromagnetic dipole moments {{formula:4947fd0e-a49f-4285-a241-2974ddd3cecd}}, whereas the latter by its imaginary part. Both can cause the physical processes we consider in this paper; for brevity we refer to {{formula:b4fa98e4-9887-4e52-9263-dae71ea02807}} as simply the magnetic dipole moment.) However, a number of models with new physics at the TeV scale predict neutrino magnetic moments that may be close to the current experimental upper bounds (for a recent discussion, see e.g. {{cite:e78a13cf1ae4426561c6b93d14b0d34d1b6c21a7}} and references therein).
Expert Evaluation. Two error metrics (i.e., MSE and MAE) have been used to evaluate the quality of our models in Tables REF and REF . Both are pixel-based, which may be insufficient for assessing structured data such as images {{cite:57afb9acca600614a3736cf5e643bb5391696f72}}. To rule out subjective evaluation, we conduct a blind survey completed by 20 domain experts to evaluate the quality of our interpolation and extrapolation data. The participants' years of experience in the seismic imaging area are listed in Figure REF a, which can be taken as an indication of their level of expertise. The survey can be found in our online Google form https://docs.google.com/forms/d/e/1FAIpQLSfEZVvuXHcIub_mVzmLGEIOA4ZXkFWL_Tz8sXwwG1nqRrcukA/viewform. In this survey, we design five questions regarding specific geologic features: overburden, CO{{formula:9815a446-f0a6-4777-aef6-cf0b8fe76eaa}} plume, reflection below the CO{{formula:002eafe0-5d41-4900-8266-a274a8ac227e}} plume, noise, and overall quality. We pick those features since they have been utilized to characterize CO{{formula:20d186f7-9f10-4303-8fb3-d700d27a74c0}} migration in various works {{cite:301a186d2a5f0a47d3a1895a139248ba11b99d57}}, {{cite:38152e419c6e34d26ee07c0707fa4a3f6962edec}}. Four different seismic images are used in the survey: real data, 2D interpolated data, 3D interpolated data, and 2D extrapolated data. We then ask the experts to grade (on a 4-point scale) the quality of the dataset based on the aforementioned geologic features, without telling the participants in advance whether the data are generated or not. The results are shown in Figure REF b.
We evaluate the impact of different parameters on the achievable sum rate. We assume that the Rician factor is {{formula:cb9c837d-795d-46e2-a3fa-1b6606c9fa38}} = 10, the noise power is {{formula:3bdd44a9-7758-4b73-86e3-c52450be7523}}, and the transmission power is SNR = {{formula:7e11b0db-94e2-4108-91ab-79d111dc7abc}} for {{formula:11c996ee-7fe2-4b01-aab2-67a9c0a3ea2a}}. The main parameters for the GA are: {{formula:de11fd08-bf53-4bd2-8f8d-dfdbf3f8f2e8}} = 100, {{formula:710db1dc-f2ab-48cf-9495-51e7ed095d65}} = 50, {{formula:e6c0d909-bd46-4286-8372-58c7385ab278}} = 50, {{formula:863e2483-e98b-46ca-aa7b-cd2e8218759d}} = 1, {{formula:5552c09f-f067-48e7-892d-427f5f55567c}} = 10000, {{formula:6713ad76-2236-4e0e-8a02-2f5c322ba706}} = 0.1, and {{formula:c255a986-b4bf-465f-ad5b-cbd04de4ecca}} = {{formula:6150f07c-91a4-4d1e-84d5-2ff4ae3c4f96}}. The other parameters are summarized in Table REF , where the AOA and the AOD are randomly distributed within [0, 2{{formula:c97196bb-683e-478a-a13a-0eb03dfc021d}}), and the large-scale fading coefficients {{formula:95e156e1-8bf9-4e98-a218-fcc3c24fe2fc}} and {{formula:418400b8-83e9-4bb3-9653-d34efa1c6c13}} are set according to {{cite:a9c566c6c9505f91757e1fa97e14494fa47aa4c1}}.
We test Mixing Method++ on three well-known applications of (REF ): MaxCut, MaxSAT, and MIMO signal detection. We use the same formulation as in {{cite:35a04744a3215fb8f16a41467a88d694867cfb49}} to solve MaxCut and MaxSAT, while for MIMO we follow the experimental setup in {{cite:6415a185e9315f9238cb58825373f3f0e40570cc}}. We implement Mixing Method++ in C with {{formula:413fac37-96b4-46f4-9d7d-e644ab74e188}}. We extensively tested our theoretical bound on {{formula:b5ce6a40-2a1c-45f0-a2df-f292777021fb}} in experiments: after testing all values in the set {{formula:f9048a5c-3c8c-4e91-9bb7-0f1dd9e05bd6}} with an interval of {{formula:8de89a27-9f46-4620-b4f2-0044f2d9a049}}, we observe that {{formula:5dbbb5bc-c0fc-4a02-a503-7a2f64ba6bd4}} indeed yields the best performance on over 90% of instances. Each experiment is run on a single core in a homogeneous Linux cluster with 2.63 GHz CPUs and 16 GB of RAM. We compare Mixing Method++ with a wide variety of solvers depending on the application. For MaxCut, we compare with CGAL {{cite:83aaa1aa1a578ee3e4ed9e7876174c58f631f2b0}}, Sedumi {{cite:10d001feae63c2c6aa5d7a2b8a803f1da292dc82}}, SDPNAL+ {{cite:18dcfdba180373920bb3ed58fe2ab5b37cca4d7c}}, SDPLR {{cite:879493e30c58a1bed6c82f0a9407590c70eb3a26}}, MoSeK {{cite:ec198168c2d83f5eb713f1a028b8e88ba554e938}}, and the Mixing Method {{cite:35a04744a3215fb8f16a41467a88d694867cfb49}}. For MaxSAT, we compare with the Mixing Method and Loandra {{cite:24351f307de8cbf1235a44a4854ecfc1ee703dbb}}. Lastly, for MIMO signal detection, we compare with the Mixing Method.
In this work, we propose a new multi-resolution spatiotemporal analysis of multivariate time-series. In contrast to standard multi-resolution analysis using wavelets defined on Euclidean space {{cite:4fb670708417584e6291db37f679345b3b1bdea4}}, {{cite:f93a4417863abf4c2661613ea3ad28a70cdd3e86}}, we present an operator-based analysis approach combining manifold learning and Riemannian geometry, which we term Riemannian multi-resolution analysis (RMRA). Concretely, consider a multivariate time-series {{formula:77dcf028-288d-4833-8ffa-e7a1e317bc51}}. Suppose the temporal propagation of the time-series at time step {{formula:21f95b0c-7185-4a1e-b651-122e6c12a8f8}} can be modelled by two diffeomorphic manifolds {{formula:ff6def8c-4580-4dc7-8b90-e3f30b879cc2}}, and suppose the corresponding pairs of time samples {{formula:32c5f0dc-4ffe-4db1-bd5d-9b8992f8c68c}} are given by {{formula:805b8479-b52d-4ac3-89ac-35ad7a26f7db}} and {{formula:5db65edb-56fd-4d06-b7dd-4ce6eee7b92b}}, where {{formula:3d2fc202-b950-4f36-b953-72b6ed7af8c3}} is the {{formula:6f54f06e-d687-4226-a4f5-a384710851b1}}th entry of the sample {{formula:5c7e9b6a-cc6a-4d2d-a741-183a7e06aca4}} for {{formula:395da0fa-54e7-43aa-a3ac-5c65f5343a87}}. Note that the entries of the samples {{formula:c8d1688f-0b9a-4bf4-ad50-2228200d9b5b}} lie on a manifold, and therefore each entry is typically high-dimensional. In other words, at each time {{formula:6cd16f27-12f5-4ec6-879a-82245663a3ac}}, we have {{formula:feb163e7-243e-4821-85c3-621ae1431c6b}} high-dimensional points that are distributed on the manifold {{formula:466851da-569f-44fa-8724-935a5ec4872d}}. Our RMRA consists of the following steps. First, we construct a diffusion operator for each time sample {{formula:f57cf54b-c3bf-4924-b2cf-2df05ecb65d1}}, characterizing its underlying manifold {{formula:f68e976f-8768-4dd4-bbdd-c4d077737c9a}}. This step is performed using a manifold learning technique, diffusion maps {{cite:753229f3e7784e4d939b8a13c17760875e04e8a3}}, which facilitates a finite-dimensional matrix approximation of the Laplacian operator of the manifold based on the time sample. This approximation is informative because the Laplacian operator is known to bear the geometric information of the manifold {{cite:83282b98fa1905f959d288aa2949e14752fcf0f9}}, {{cite:dcca5ce08e0d477464e2d45a443afcdf7fadef81}}. Then, for each pair of temporally consecutive time frames {{formula:83e5a347-636d-49be-bb56-63cc801939f2}}, we present two composite operators based on “Riemannian combinations” of the two respective diffusion operators. Typically, diffusion operators are not symmetric, but they are similar to symmetric positive-definite (SPD) matrices. We could thus define diffusion operators as SPD matrices, whose space is endowed with a Riemannian structure. Therefore, taking this Riemannian manifold structure into account when composing the operators is natural. Indeed, we show, both theoretically and in practice, that one operator enhances common components that are expressed similarly in {{formula:63ce53cb-2ebc-454b-a2fd-c8cf61f8f834}} and {{formula:8aa152df-459f-41d4-8c91-ad3c8b32ba1e}}, while the other enhances common components that are expressed differently. These properties can be viewed as analogous to low-pass and high-pass filters in this setting, leading to a spatiotemporal decomposition of the multivariate time-series into “low-frequency” and “high-frequency” components, by considering the common components expressed similarly (resp. differently) as the slowly (resp. rapidly) varying components.
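A minimal NumPy sketch of the first step, building a diffusion operator from one time sample with a Gaussian affinity kernel (the median-based kernel scale and the row normalization follow a common diffusion-maps recipe; this is an illustration, not the paper's construction):

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_operator(X, eps=None):
    """X: (N, d) array of N points sampled from a manifold.
    Returns the (N, N) row-stochastic diffusion operator."""
    D2 = cdist(X, X, "sqeuclidean")
    if eps is None:
        eps = np.median(D2)                  # common heuristic kernel scale
    W = np.exp(-D2 / eps)                    # Gaussian affinities
    return W / W.sum(axis=1, keepdims=True)  # row-normalize

# Two consecutive time samples would give two such operators,
# which are then combined on the SPD manifold as described above.
X_t = np.random.randn(200, 10)
K_t = diffusion_operator(X_t)
```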
In the non-trivial eigenspace, {{formula:89bd7ef1-9216-4c30-9770-c255e40cbd05}} takes damped Newton-type steps, while flat directions are updated with SGD at a learning rate {{formula:6ddb8f47-62ec-4072-b5ff-2df5c1774879}} . Based on the findings in sec:exp-noise-monitoring and {{cite:b9d5463346ea3ee475a498ee6111ebc9e1193175}}, we omit the SGD update along flat directions in the following due to their negligible overlap with the gradient.
Quantum frequency conversion (QFC) of photons is important in the development of long-distance quantum communication and effective optical quantum computing {{cite:89a9070638f764a4819d1d34b0a46e245f2ad754}}, {{cite:0198ca6db716efca371b0493ee6813a451ff8e2e}}, {{cite:5270c5a63c30828c91bb71111661f987e884a4e7}}. In nonlinear optics, wave mixing away from resonance can not only avoid the quantum noise caused by vacuum fluctuations but also convert the photon frequency over a larger range and bandwidth {{cite:da0dd0067d38fd5b72d1fb4b9feddc81c9efa374}}. Therefore, in the past few decades, researchers have often used wave mixing far from resonance to achieve efficient frequency conversion. Most experiments achieving high-efficiency frequency conversion are performed in solid materials using sum-frequency generation {{cite:953bdc78a9ffd02aa73f9b5671cf2b68c27ba742}}, {{cite:fd06cadc20b696f8b3ffadd0e2a8b5720519b68b}}, {{cite:5678294db2d12bd4f356c0312bf8d76174041cb3}}, {{cite:ce24a7572805631d0f6e044a62e60720260ae8c9}} or Bragg-scattering four-wave mixing (FWM) {{cite:72c6668fbc3dfc7c3c5ee5257c896d4f3c0ea252}}, {{cite:d6f8ba19fa829015d9f70961a7e61e53a3d598bc}}, {{cite:37d9ade93584c403e57493c9775b617a425ccc39}}, {{cite:8398dc418a0cd621c801596ce2ebd151d39edfb6}}. In nonlinear optical systems, the highest internal conversion efficiency (CE) reached thus far exceeds 90%, obtained using sum-frequency mixing in a nonlinear crystal and Bragg-scattering FWM in an optical fiber {{cite:953bdc78a9ffd02aa73f9b5671cf2b68c27ba742}}, {{cite:8398dc418a0cd621c801596ce2ebd151d39edfb6}}. In nonlinear crystals, a strong pump light is usually required to achieve high-efficiency QFC. However, under strong pumping, additional noise photons are often generated by spontaneous Raman scattering or parametric conversion, which hinders the practical application of QFC {{cite:e7f1c8a4b6e3ad1e49b623aebb4450c4e8d7dab8}}. Although the required pump power can be reduced by using waveguides, cavities, or fibers, this introduces coupling loss for the incident photons, thereby reducing the overall CE of QFC.
However, the in- and out-of-distribution performance of any Bayesian method depends heavily on the choice of model (prior and likelihood combined). We begin by noting that the training-data-dependent EmpCov priors from {{cite:a2a8a259c9e27a4a972a04e0bbf7eaec78a0a122}} can be equivalently viewed as training-data-dependent likelihoods. We then develop a new training-data-dependent likelihood, ShiftMatch, which has two advantages over EmpCov priors {{cite:a2a8a259c9e27a4a972a04e0bbf7eaec78a0a122}}. First, EmpCov priors apply only to the input layer, so they might not be effective for more complex corruptions that are best understood and corrected at later layers. In contrast, our likelihoods modify the activity at every layer in the network, so they have the potential to fix complex, nonlinear corruptions. Second, EmpCov modifies the prior over weights, preventing the use of publicly available samples from BNNs with standard Gaussian priors {{cite:608251b90e111c806bc622d960bf4e2279415fcb}}, which is especially important as some gold-standard Bayesian sampling methods are extremely expensive (e.g. Hamiltonian Monte Carlo (HMC) in {{cite:608251b90e111c806bc622d960bf4e2279415fcb}} took one hour to obtain a sample on a ResNet trained on CIFAR-10 using “a cluster of 512 TPUs” {{cite:608251b90e111c806bc622d960bf4e2279415fcb}}). In contrast, ShiftMatch keeps the prior and training-time likelihood unchanged, allowing us to directly re-use e.g. gold-standard samples drawn using standard Gaussian IID priors from {{cite:608251b90e111c806bc622d960bf4e2279415fcb}}. Indeed, ShiftMatch is highly efficient as it requires no further retraining or fine-tuning at test time, allowing us to e.g. use a very large batch size.
Ideally, the discriminator should measure the gap between {{formula:56a90c08-f28b-414a-926b-6998b9bd4e63}} and {{formula:a318c868-02ec-421d-8c00-c4678cac9fa2}} and guide the generator towards {{formula:846097d9-5880-4540-ab60-3d715c2ddf0c}}. However, in practice, large-capacity discriminators can easily overfit on a given training set, especially in the limited-data regime {{cite:6387e4ea4d115a4761e7be633341cf8cf08f8aa4}}, {{cite:8429e3eb250e188d721b0c1d123ac6a0ea10632b}}. Unfortunately, as shown in Figure REF , even when we adopt the latest differentiable data augmentation {{cite:8429e3eb250e188d721b0c1d123ac6a0ea10632b}} to reduce overfitting, the discriminator still tends to overfit, failing to perform well on a validation set. In addition, the discriminator can potentially focus on artifacts that are indiscernible to humans but obvious to machines {{cite:6bdd6fdce9c17fa294291a0e2cc1d88790e97e49}}.
The structure of the CNNs employed in this paper is chosen to be simple enough that it may be trained on most modern high-performance GPUs. The architecture described in Fig. 5 was found to be effective and is employed in the experiments here. During training, mini-batches of size {{formula:17d2fac1-891c-4614-8aa8-0dba603cbc2b}}, consisting of 16 triplets from each of 6 randomly selected tracks, were formed in real time by randomly selecting from 28,345 songs, excluding all songs in the SALAMI-IA and BeatlesTUT datasets. 256 mini-batches form one epoch, and training took place over 240 epochs, taking approximately 8 hours. The triplet margin was set to {{formula:7db7a8af-6640-4607-977f-13e088c1bc14}}. Despite the observed error rates in Figures REF and REF , {{cite:4761c39bbb0b16b9be4b82c16583f44fba572ecf}}, {{cite:b39420651f30f0b340cf391c962b6e0165b8b35b}} argue that it is the difficulty of separation between examples that is important, and so not all false positives and negatives are equal. In practice it was found that {{formula:727cfde1-6cb8-4658-8da7-d603976aba33}}, {{formula:c0aed777-4035-4843-9160-afc27f41b812}} and {{formula:b5f47918-49ed-48fb-ad70-bdb79c299f28}} provide optimal results.
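A brief PyTorch sketch of this triplet setup (the embedding network, input shape, and margin value are illustrative stand-ins for the paper's; torch.nn.TripletMarginLoss implements the standard hinge objective):

```python
import torch

embed = torch.nn.Sequential(          # stand-in for the CNN embedder
    torch.nn.Flatten(),
    torch.nn.Linear(1024, 128))

triplet_loss = torch.nn.TripletMarginLoss(margin=0.1)  # illustrative margin

# One mini-batch: 16 triplets from each of 6 tracks = 96 triplets
anchor, positive, negative = (torch.randn(96, 1, 32, 32) for _ in range(3))
loss = triplet_loss(embed(anchor), embed(positive), embed(negative))
loss.backward()
```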
Theorem 1 (G-computation formula; see {{cite:7adbe1d2aa26a2b6c92593bea8984d69fe3d2186}} and {{cite:a376a67c44adc6f4b49f50572ac44dfafe3d9996}}) Let {{formula:363b8af2-709c-4ed5-8681-fac940e93041}} denote a counterfactual outcome in a hypothetical world where {{formula:1151ce60-3567-4dae-8869-0c731ccd38f6}}. Assume {{formula:2cef862a-1fe0-4d62-bdf9-aafa06b09b10}} and {{formula:71d64a52-cd19-445e-869c-9ba2da26b957}} for all {{formula:3feb920a-b124-4592-a940-ba17be58d4a6}}. Then {{formula:39e26f9c-f1ae-48a4-81dd-054c5835c303}} identifies {{formula:f85cd1a9-8e5c-4c8e-9314-554bfeba3fe5}}.
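For orientation, the point-treatment version of this identity takes the standard form (generic notation, not necessarily matching the placeholders above):

$$ \mathbb{E}[Y^{a}] \;=\; \sum_{l} \mathbb{E}[Y \mid A=a, L=l]\, P(L=l), $$

which holds under consistency ($Y = Y^{A}$), conditional exchangeability ($Y^{a} \perp\!\!\!\perp A \mid L$), and positivity ($P(A=a \mid L=l) > 0$).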
Exact solutions in the Einstein-Maxwell theory are always fascinating objects to study {{cite:ebfdb0237f1af14cd15c28e9c92b49917c7db36c}}, {{cite:b2f3567c166d83d2054689bbb26e23c9482b7ab6}}, {{cite:85c79cb6268ac136d4b9998c090723a4338247d1}}, {{cite:e25545e4808f0ce80faa2b43b33e0f8b05e93ab2}}, with interest ranging from the mathematical aspects of a solution to possible astrophysics-related phenomena. In the Einstein-Maxwell theory the spacetime can contain electromagnetic fields, and such a setup is sometimes known as an electrovacuum system. The most general asymptotically flat spacetime solution in Einstein-Maxwell theory that contains a black hole is the Kerr-Newman solution, which describes a black hole with both rotation and electric charge. Although it is very unlikely for a collapsing object to retain a significant amount of electric charge, this type of solution has been discussed extensively in the literature {{cite:ebfdb0237f1af14cd15c28e9c92b49917c7db36c}}, {{cite:85c79cb6268ac136d4b9998c090723a4338247d1}}, {{cite:b4b70fde61728faff89cb56ce95b56f707200dc0}}.
Deep neural networks (DNNs) have achieved great success in various areas, including computer vision (CV) {{cite:a61accd68126c61f54a89120c46cf66c98dcca5f}}, {{cite:63671c5892a0d7c92e60bfc4a7794c0da9375099}}, {{cite:fb1757f12ff4ceb57a7ef02a62fad55deee27b18}} and natural language processing (NLP) {{cite:0fb2e302a039f6f0ea7c44272b60e73cb0aa1bc9}}, {{cite:1375ce9fc065ff519de83f772923669882545379}}, {{cite:f08686f908180398d6e6bd0fef87a459bfaa9343}}, {{cite:2562fdca9494bd618d4dfbb97f988c9a6eda1d56}}, {{cite:249e43682528367391eb8d4eb9ceb78185d05821}}, {{cite:34597c9068c0bd42abd2bbda72c02ea033767557}}. A commonly adopted practice is to utilize pre-trained DNNs released by third parties to accelerate development on downstream tasks. However, researchers have recently revealed that this paradigm can lead to serious security risks, since the publicly available pre-trained models can carry backdoor attacks {{cite:79f5e853450134a00ca10265585992fb53483bad}}, {{cite:a20619aa344db6dc46d8a94131950222c8ee6ed6}}, by which an attacker can manipulate the model to always classify special inputs as a pre-defined class while keeping the model's performance on normal samples almost unaffected.
First, the parameter-setting procedure used in this study could be refined to avoid optimization bottlenecks and limitations that affect all layerwise training protocols at scale. In particular, the Powell method, while effective at small {{formula:77e1a046-2097-42cd-aeca-6ea2ba911cb2}}, would eventually become intractable, and alternative gradient methods might be hampered by barren plateaus if the parameter-setting protocol is kept “layerwise” {{cite:3570dd9882946a6f4bacf0dfc07f95826c20fd6e}}. Recently developed analytical methods based on series expansion {{cite:68308d35c45a6c50cb866349762f3a2782ea6053}} or quantum control {{cite:7749269fbaf29323c5de6954407d4f2a5f5fcf21}} might come in handy to analyze further the reachability deficits and strengths of these algorithms {{cite:6fb8341b99592326e1e0b61aa2b6f79fb29a2a49}}, to study the observed optimal parameter concentration {{cite:5fc9b6516beb1d80e50eafd0dc6a81c2b628d0a0}}, and to estimate the performance of QAMPA at scale. Ties between the quantum annealing schedule and QAOA parameter setting have also been noted {{cite:a115be6e8a2f5b7f454e4d73ef46be407c4036d2}}, {{cite:98a9ffe2a6ad257b186334e6cab3584e867d6eba}}, {{cite:4cfdfcdb3f492cc6f2367559badf6e1044b4c04f}}, further indicating that cross-overs between digital and analog optimization methods are an interesting possible development for QAMPA {{cite:e488a9f5da955e8879a884b963987e592ada988e}}, {{cite:046a0e079679cb485d049de1d5358605f24495b7}}, {{cite:c50435246beeaa5110f9f1349254a453d946de79}}, {{cite:42416064f830c44088321dee5a550bca28bf80b9}}.
As already mentioned earlier, our proposal to calculate the smeared spectral function is not limited to the case just discussed. Any sort of weighted integral of the spectral function can be considered. A well-known example is the contribution of quark vacuum polarization to the muon anomalous magnetic moment {{formula:331e655e-4225-428a-bd7e-ae2f111acc2c}} . Phenomenologically, one employs the optical theorem and dispersion relation to relate the vacuum polarization function in the Euclidean domain {{formula:81ebde70-c099-4a70-ace3-651792cf1493}} to a weighted integral of the experimentally observed {{formula:00b1d816-66e4-4a32-b24b-7c7c8c700d61}} -ratio, or the spectral function. Then, an integral of {{formula:2eb31d05-c9a4-41ef-b288-4ad23c4838d3}} with an appropriate weight gives the contribution to {{formula:4ce222f3-1c64-4eaf-b67b-20022204b9c1}} . In this case, however, a direct expression in terms of the time correlator {{formula:321974ad-dc12-4244-a743-de1e1de41662}} is known {{cite:230e033a9be7eaa3741d33cb3a68c6bc1138f67d}}, and we do not really need the approximation method developed in this work.
Luminosities of gamma-ray pulsars are over-estimated when the decay of their flux density {{formula:57fc5d05-1920-4c7b-aaa8-75e104fe1033}} is assumed to obey the inverse-square law {{formula:02210b96-552b-4e9f-904d-b954fd114de3}} instead of {{formula:d34d3161-e54c-4d9e-a748-2f60d3d35d8b}} by the factor {{formula:a235aba3-3bc1-461d-8740-45172cb6eb90}} (see equation REF ). The value of the scale factor {{formula:58a4d272-10cb-490f-9a65-8675efc51cf9}} is of the same order of magnitude as the values of the light-cylinder radii of these pulsars {{cite:4becc877abdc24e2d486c00415af3169bd476d41}}. Hence, the factor by which the luminosity of a 100 ms gamma-ray pulsar at a distance of {{formula:bf68eb7f-c1c4-4cd4-b878-d62b1d77d10b}} kpc is over-estimated is approximately {{formula:a42793f4-d5b7-45c6-a9e5-417176ead7a2}} . Once this is multiplied by the ratio {{formula:14cf49e4-2f8c-48f2-8fdc-9161356b10e5}} of the latitudinal beam-widths of gamma-ray and radio pulsars (implied by the fraction of known pulsars that are detected in gamma-rays), we obtain a value of the order of {{formula:71ba32f1-3b97-4942-b4d3-12a5d905849f}} for the over-estimation factor in question: a result that implies that the range of values of the correctly-estimated luminosities of gamma-ray pulsars is no different from that of the luminosities of radio pulsars.
Finally, we present the changes in {{formula:744f8816-f8bd-4df8-a237-87d94b8bdf68}} relative to the Planck+ACT {{formula:55ef6861-bce8-4f77-878d-8fd4a68b9d2e}} CDM baseline for the other three best-fit models, listed in Tab. REF . The model M1 increases {{formula:6b001b59-0d5c-4d02-bfa8-da79bdaa4cff}} by roughly equal amounts in Planck and ACT, while the model {{formula:e12c83bf-7bb5-4f0a-8502-ff721377e37d}}, with one extra degree of freedom in the baryon density PDF, is comparatively more disfavoured by Planck than by ACT. ACT TT actually appears to favour the baryon clumping models, which holds for both the entire {{formula:2939e044-d596-4a84-af17-89ad4dca7b42}} range covered by ACT and {{formula:b4016c0a-0c11-4c6a-afda-86391bb57e8e}}, the part used in our likelihoods. On the other hand, ACT TE and, to a lesser extent, EE disfavour baryon clumping. This may be explained by those models' inability to keep the thickness of the last-scattering surface fixed (cf. Fig. REF ). Another possibility is simply that the steeper slopes of the polarization spectra translate into higher sensitivity to {{formula:d98a61ff-89b0-47fd-95ea-ec662eb5a76b}}. It should be mentioned that there have been hints of inconsistencies between ACT TE and other CMB data {{cite:c0dfce7d6550d414ac956182be88ca8ceaaf335e}}. As no mechanism has been proposed that could explain these trends, and the tensions are still relatively mild, we do not see this work as the appropriate place to investigate them further.
derived in Fokas {{cite:7e0ec8e4c4edbc8dbd94c67f4d0b79effb208f56}}, Fuchssteiner {{cite:85a92f80eaa28dbca3ac211bd9efe57a7dab283b}}, Olver and Rosenau {{cite:e3093fe7e829c5d70c97ae6a77f15890650688c0}}, and Qiao {{cite:af66442690e80c3a61dc3c3e8d1025bb9e6a2f5f}}; while the choice a = 0, b = 3 gives the Novikov equation (NE) {{formula:e32caf99-4b7e-4b32-9926-466f8d0fac63}}
From Table REF , several observations can be made. Firstly, for both the nature models and the federated adversarially trained models, randomisation successfully mitigates the adversarial attacks. When we apply randomisation to the original test dataset, the performance only drops by a small amount (usually less than an absolute 1 % UAR). When randomisation is applied to the generated adversarial examples, it considerably improves the performance, probably because randomisation destroys the specific adversarial perturbation pattern {{cite:88b6842f7b506ae35f574e2b5576e08cb7feb17a}}. Specifically, randomisation helps the models mitigate the DeepFool attack, improving the UAR from 2.96 % to 71.34 % ({{formula:032c8014-a5c3-4752-a29d-89f9720599af}} in a one-tailed z-test) for the nature model and from 1.98 % to 72.32 % for the adversarially trained federated model ({{formula:b592edf1-4ea7-49f0-8165-c37931497cb6}} in a one-tailed z-test). Moreover, the UAR improvements for the nature models under the FGSM (from 20.82 % to 47.91 %) and PGD (from 9.96 % to 41.55 %) attacks are also significant ({{formula:b8aca249-39c8-4c5d-b915-718787e58ba3}} in a one-tailed z-test).
r
b47cd47ac483092d50c7a2c1779ea499
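The excerpt above does not spell out the randomisation transform itself. Below is a minimal sketch of one common input-randomisation defence (random crop followed by re-padding at a random offset, in the spirit of the cited randomisation idea); the function name and the `max_pad` parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def randomize_input(x, max_pad=4, rng=np.random.default_rng()):
    """Random-crop-and-re-pad defence: drop a few rows/columns at a
    random offset, then place the crop back at a random position inside
    a zero canvas of the original size. This perturbs the pixel
    alignment that adversarial patterns typically rely on, while
    leaving the overall content intact. x: 2D feature map."""
    h, w = x.shape
    dh, dw = rng.integers(0, max_pad + 1, size=2)
    crop = x[dh:h, dw:w]                       # random crop
    out = np.zeros_like(x)
    oh = rng.integers(0, h - crop.shape[0] + 1)
    ow = rng.integers(0, w - crop.shape[1] + 1)
    out[oh:oh + crop.shape[0], ow:ow + crop.shape[1]] = crop
    return out
```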
Group testing is a well-established area attracting the attention of specialists in optimum design, combinatorics, information theory and discrete search. The origins of group testing can be traced back to the paper {{cite:c4264b5700b62938581e682b8c1837467b9c9675}}, which is devoted to sequential procedures of blood testing for the detection of syphilis. Since then, the field of group testing has seen significant developments, with an extensive literature and numerous books dedicated to it. The textbooks {{cite:3d3abc2107c945a66cd5a3a62931d1441c1809aa}}, {{cite:425d4e3e62ca798fc0ff9305ad3a768b00e40812}} and lecture notes {{cite:0beb8029584cf39c53c86dd10969d439b6d1ce1f}} provide a background on group testing, especially for zero-error non-adaptive problems. An excellent introduction to and summary of recent developments in group testing and its connection to information theory can be found in {{cite:7b939c3a0a309ccb5a42379203e03aba383e9a85}}. The group testing problem for a binomial sample is especially popular in the group testing literature, see {{cite:7b939c3a0a309ccb5a42379203e03aba383e9a85}}, {{cite:31772e045b1d58a906cff9355a6a0ffa11a97539}}, {{cite:da54d0d2b93fd59f43dd96b9ce5875ba095a7258}}.
i
98283c53b8df0e42988d925027d005a8
In the result below, Theorem 1, we prove the existence and uniqueness of the process described above and provide a uniform control on the maximal membrane potential of the system. The proof of Theorem REF is omitted here since it is analogous, modulo a small change of notation, to the proof of Theorem 1 given in {{cite:ee99f33903f7415318caf614de60adedb46f61bc}}. In what follows, for any vector {{formula:5f4425d5-899d-471a-b2cb-016dcce626e2}} , {{formula:c83ec3ff-47b4-444b-9123-5f19ece470ab}}
r
ac428788d481252cad5102ffd3698c1d
In existing BP literature, there has been much interest in exploring the use of message schedulings for improving BP performance. The naive scheduling is known as Synchronous or Loopy BP (LBP), where all messages are updated in parallel {{cite:dc439716de2483ea759a36c2081156f33d04e2b0}}. Asynchronous approaches, where some amount of sequentiality is enforced during the message updates, for example via subgraph updates {{cite:e8337cce4f9cf2ec4cf7d3d3a475fdf8aa1053f1}} or greedy message selection {{cite:9120249c17067ab03bb56da9a600ccd0b8331555}}, {{cite:b56e7b5bdac063b39ca363642d84e3d0596e8d03}}, have been shown both empirically and theoretically to outperform LBP in single-core environments. The general intuition is that enforcing sequentialism in the scheduling encourages more direct propagation of information, thus converging faster. The contrast between LBP and Asynchronous BP introduces a parallelism vs. efficiency spectrum (also found in other graph problems such as SSSP {{cite:3edfa7eb774f65f7ec385aecfaa81ff551a25ceb}}). LBP exposes high levels of parallelism but is work-inefficient. Asynchronous BP is efficient and convergent but exposes little parallelism. We hypothesize that there exists a tradeoff between the parallelism and sequentialism in Belief Propagation, and that GPUs can effectively harness that tradeoff to yield performant BP.
i
71800721bf4b221f1367db4c021cac94
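As a concrete illustration of the parallelism vs. efficiency spectrum described above, the following sketch shows the greedy (residual-style) asynchronous end of that spectrum; LBP would instead recompute every message in parallel each round. The helper names, message representation and tolerance are assumptions, not taken from any of the cited systems.

```python
import heapq

def residual_bp(messages, compute_message, dependents,
                tol=1e-6, max_updates=10000):
    """Greedy asynchronous BP: always update the message whose pending
    change (residual) is largest. `messages` maps an edge to its current
    message (a list of floats); `compute_message(e, messages)` returns
    the would-be new message for edge e; `dependents(e)` yields edges
    whose update depends on e."""
    def residual(e):
        return max(abs(a - b) for a, b in
                   zip(compute_message(e, messages), messages[e]))

    heap = [(-residual(e), e) for e in messages]
    heapq.heapify(heap)
    for _ in range(max_updates):
        if not heap:
            break
        _, e = heapq.heappop(heap)
        r = residual(e)            # heap entries can be stale: recheck
        if r <= tol:
            continue
        messages[e] = compute_message(e, messages)
        for d in dependents(e):    # only downstream residuals changed
            heapq.heappush(heap, (-residual(d), d))
    return messages
```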
Symbolic representations need to capture both syntactic and semantic structures in code. Approaches to generating symbolic representations can be categorized as sequence-, tree-, and graph-based. Sequence-based approaches represent code as a sequence of tokens and only capture the shallow and textual structures of the code {{cite:9b9d38200e020a9cded72c3f34c87a60cb67693b}}. Tree-based approaches represent the code via abstract syntax trees (ASTs) {{cite:f8bae96eedd932b8a665d808246516db731d8a2d}} that highlight structural and content-related details in code. However, some critical relations (e.g., control flow and data flow), which often impact machine learning models' success in abstracting code information, are not available in trees. Graph-based approaches augment ASTs with extra edges to partially represent the control flow and the data flow  {{cite:66e96d4ef22f7353abb53d84d124b6c93f92b61a}}, {{cite:9b9d38200e020a9cded72c3f34c87a60cb67693b}}, {{cite:5910a99ef61cd597c8b17ef712e1f5e1250c6833}}. Depending on the type of symbolic representation used, the approaches for generating the neural code representations are either sequence-based {{cite:6e63cf1a6c332dbb1c5b246f109239be53a32fb4}}, {{cite:9afcdea7fb2eae45453714a05fb00c4a48460760}} or graph-based {{cite:9b9d38200e020a9cded72c3f34c87a60cb67693b}}, {{cite:5910a99ef61cd597c8b17ef712e1f5e1250c6833}} neural network models. However, these works are generally task-specific, making it hard to transfer the learned representations to other tasks. In addition, the scarcity of labeled data may cause insufficient training in deep learning models. {{figure:53f66a39-8050-42a3-b4a0-cc076b41ccba}}
i
ec39175de24a253f774c6fab837457fb
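A small illustration of the sequence vs. tree distinction drawn above, using Python's standard ast module (the snippet is ours, not taken from the cited approaches):

```python
import ast

src = "def add(a, b):\n    return a + b\n"

# Sequence view: a flat token-like stream loses nesting information.
tokens = src.split()

# Tree view: the AST exposes the syntactic structure explicitly.
tree = ast.parse(src)
print(ast.dump(tree.body[0], indent=2))  # FunctionDef(name='add', ...)

# Graph views would add extra edges on top of this tree, e.g. a
# data-flow edge linking both uses of `a` to the same parameter,
# which the plain AST does not encode.
```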
In this context, machine learning (ML) is beginning to provide powerful new capabilities for the computational design of materials with targeted properties. For example, ML can be used to train a model that replaces the direct computational evaluation of the property of interest, which significantly decreases the time needed for each iteration of an optimization routine {{cite:9864c355b931a40fe88f55953756ec1c7a84fa6d}}, {{cite:322619f4fcf68480f40658e1b817fcfef919c006}}, {{cite:a34188dcd379d7a9d5bc82d331dbb8ccd0abcf7a}}, {{cite:b4deab4024ae0c46e7517d93958f6ccc5d5d5423}}. There have been several recent reviews that discuss other ways in which ML can be incorporated into an inverse framework to enhance materials design, including using ML to generate new molecules and materials, and to aid the optimizer for prioritized search of design spaces {{cite:84f2c044d265fccd8796c6011e5fa9d59f293c8b}}, {{cite:89ee8c0766d5bc38108482186b36fe13bd146a36}}, {{cite:e5dd8abd5c6315b498f1f9907adc42525c0fb9fe}}. Other reviews have focused on ML-assisted design for specific classes of materials, including photonic nanostructures {{cite:cc729e15804bdf3d756fa7947456c50328aa1c4c}}, {{cite:f516cc028899329e2d8ab83346db17106b19d8bf}}, {{cite:3f5087ae150c3d5ae4b8417c5e68354c083433e7}}, chemical compounds {{cite:0fa690b79321bae36326756b81f0219ca7e0cff6}}, {{cite:1d180415ab5fecf14d20cb31648d868f7fb444bb}}, {{cite:52e496cd686b18300920444f720c483b3cf31c8b}}, and self-assembled soft materials {{cite:80cb1b7d8be467e730b67458958879e8b5b85917}}, {{cite:dd3b3460fb2c3269a339994fea432d20748078a3}}, {{cite:f50ffcbb207cd12bd0b3bc8f2ce657e81b549feb}}, as well as how ML might be used for high-throughput experimental investigations {{cite:a605b7456c401ce7e165a77eaf641191505dfb32}}.
i
183cffe8525e18b0c46aef32657c92ea
Sample images and faces generated by the three systems are shown in Fig. REF and Fig. REF , respectively. Qualitative inspection shows that the models are able to generate high-quality images. The generated images have high fidelity, are diverse, and are most of the time physically plausible (with a few exceptions, e.g., the right woman in the last row of Fig. REF ). The generated faces (Fig. REF ) generally look very good, although not as good as the faces generated by models trained specifically on portraits (e.g., StyleGAN {{cite:b3b13add1cf15d7c1dc8fe7c765a0cd5363f79b5}}).
r
5279eff479e6fa878c85beddfdfd835b
Various works investigated the possible observational signatures of wormholes. They include studies of the shadow {{cite:148b332094921b19e8d5e52182b56d437928131a}}, {{cite:545de532760f084e2bcbb2e6c8410d55614f6d04}}, {{cite:8333fd5bb899369627af6ac8b65f4d69765ca5b4}}, gravitational lensing {{cite:321286b02ac32ce62dff27298a0cad7639cad91b}}, {{cite:0224a4dea69babbea44b93bf5f1a9411bff3e8b0}}, Lense-Thirring precession {{cite:962b1de358bb8f8a41a2618cf04afd929c7f4a96}}, accretion disk radiation {{cite:feee76034039dc579a140e9652cabd3d3c416354}}, iron line profile {{cite:025fd0dd633eec05fdd04fb40384f76cb0c9e26a}}, and quasinormal modes {{cite:10054795dee3e8ca0ac25310e20b0ad30d45f236}}. It was demonstrated that in some phenomena wormholes can mimic closely the Kerr black hole, while in others they possess distinctive features. Since it is hard to distinguish wormholes from black holes in certain experiments, being familiar with a broader set of characteristic effects will be useful for their identification.
i
54a550022f779d6e40a6461d49fc9247
The last expression is known as the Jacobi triple product identity {{cite:4e3d6e119b733aaabce7b37ff30b1fdfaaabceed}}. The function {{formula:8f9c8f07-6686-4f01-8085-24c63d59675e}} has simple roots on {{formula:4a691ecf-2de7-4ed6-b9d0-04fff9caae49}} . We define the {{formula:be4e41e1-9a3e-4a3a-b230-8c14696c9d12}} -character to be {{formula:94fc1bb2-1f38-4032-a05f-0ca4fdb22b43}}
r
10b08b920554d640253dbcb456ef889b
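Since the identity itself is redacted in this excerpt, one standard form of the Jacobi triple product identity is reproduced here for reference (conventions for the nome and the variable differ across references):

```latex
% Jacobi triple product identity, |q| < 1, z \neq 0:
\prod_{n=1}^{\infty} \bigl(1 - q^{2n}\bigr)\bigl(1 + q^{2n-1} z\bigr)\bigl(1 + q^{2n-1} z^{-1}\bigr)
  \;=\; \sum_{n=-\infty}^{\infty} q^{n^{2}} z^{n}.
```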
When studying black holes in {{formula:011c4953-25f6-4439-a892-49c2805c555d}} dimensions with {{formula:ba3cd009-1a99-4368-b0cf-456b483d366d}} , it turns out that the non-trivial, interesting part of the gravitational dynamics is strongly localized within a region close to the horizon {{cite:233298858920c559abbd8c93ac89f70b4a3d256e}}, see {{cite:8b6971d2614a56a77f33a31c04c0b3e5c4e81b23}} for a review. Beyond the near-horizon region, black hole effects are suppressed. Within this near-horizon region, {{cite:b9e26a7da26fe0da7626183ca92394bb8c8d7054}}, {{cite:21ac6b5b23f0d1d8c5a36d54ff6ef8fa508bf694}} pointed out the following. For a wide class of black holes, the near-horizon dynamics becomes that of the black hole arising in the low-energy limit of two-dimensional string theory {{cite:107b9d3dc8a6bc359762e17c5d3a4285b4cc37a0}}, {{cite:71602c33b01e844fb5063003481f5a89cd0dbc19}}, {{cite:f11e1f1e609c778c1d825a5c4660f2268adb099e}}, which is governed by the two-dimensional string effective action.
i
747e96aedf4920f693e46191dda3f6e8
A standard way to calculate the spectrum of doubly heavy baryons is i) to first calculate the heavy-diquark mass, and then ii) to regard the whole system as a heavy-light system and apply a potential model, as for a heavy-light meson. In this approach, especially in the second step, one normally obtains eigenstates with definite quantum number {{formula:4f4d111f-3e44-4a33-b015-a07e2465b579}} , since the interactions for a heavy-light system include spin-orbit or spin-spin terms like {{formula:b24c79e6-f14d-4437-9f49-c7c3a9c9bf7a}} or {{formula:3e5ad63b-a521-4f57-9638-fc0acd0254b4}} , which break the heavy quark symmetry (HQS), as shown, e.g., in Ref. {{cite:4843199f4642fd6dd00a6f9d25c80d86e17c816b}}. Another standard way is to solve the three-body system in the quark model, as in Ref. {{cite:2e7b681ed52ed124a276d1b40cfcfc13ffca5284}}. To obtain a heavy-quark-symmetric spectrum for doubly heavy baryons, one needs to explicitly exclude the terms that break the HQS, or include only the term {{formula:e19189e3-a426-422b-b59b-8736fa8bbfc8}} that preserves it. The best way is to first obtain the spectrum of doubly heavy baryons in the standard way with the potential model, and then rotate the obtained wave functions as well as the masses with the mixing matrices of Eqs. (REF , ), although off-diagonal elements appear in the mass matrix that should be neglected as an approximation.
d
5fbc72599bd1cfa51b96dd0f48efd373
Fig. REF compares the AUD performance of different schemes versus the number of OFDM symbols {{formula:c67f3344-834c-4d7f-a70c-13771672b0f0}} . In Fig. REF and Fig. REF , we set {{formula:4e3a92cd-6cf1-4ed0-9683-e0da1f0cd1bb}} and {{formula:449127f5-973c-4b9d-8ddb-887ef4c63745}} , and we consider the cases of {{formula:5682b00a-8727-491b-9854-864f3899794b}} and {{formula:e6bf7184-3728-4ac4-a269-7f8bcbc01901}} . We observe from Fig. REF that the proposed OAMP-EM-MMV algorithm outperforms the other three greedy algorithms (namely the SAMP, SP, and SWOMP algorithms utilized in {{cite:3d5f5396bf3474992e424b126e7a4af8364a8747}} as baseline schemes), despite using fewer pilot subcarriers, and has a significant advantage over the GMMV-AMP algorithm {{cite:3d5f5396bf3474992e424b126e7a4af8364a8747}}. Furthermore, for the proposed OAMP-EM-MMV algorithm relying on the CG-AD and BI-AD, the AUD performance of BI-AD is distinctly better than that of CG-AD. When {{formula:d9bd3f48-d844-404d-9545-ecc3227e1b76}} , the AUD error of the CG-AD and BI-AD for {{formula:bd98b31c-cbcf-4312-971b-47aaa16fbbc4}} tends to zero quite rapidly. In the case of {{formula:7346f31f-82ae-4218-9e4d-e79fc984478e}} , the AUD error of the proposed BI-AD tends to zero rapidly when {{formula:078325e8-87c4-479a-8286-a23f25d63aea}} . Hence, all the UEs can be detected correctly within an access latency of {{formula:a09a9dc3-a561-4e38-9533-f065fd3ae72e}} .
r
022a8d2c7d2ad36dfbb81cf680490ec0
Linear Projection (LP): This is the simplest method, developed in Chapter . For every embedded word vector {{formula:af3554e7-780c-49b8-b22e-34a4aabf3ab9}} , it projects along {{formula:a9c68425-96b3-42cd-82e7-4b529cd63783}} to remove that component as {{formula:16134ee1-fed8-4c27-a75c-66bba46e3b7d}} Afterwards, the {{formula:f1eca9cd-0b33-4bf2-b564-bca0f6398a1a}} -dimensional vectors lie in a {{formula:743bf8aa-16cf-4861-aec8-7faaeafae6f2}} -dimensional subspace, but retain their {{formula:0556a532-81f8-42c6-973a-344e14984db4}} coordinates; the subspace {{formula:0b2d7b8f-c3e0-4f7c-bcba-3ea58e1f4539}} is removed (see the sketch following this passage). Lauscher et al. {{cite:2e5fce991bc53ac5999951bdf69796eb7ee2084d}} show this method {{cite:dd678dd3eae43d71f8bc9f5c8884feaf3d2d5458}} reduces the most bias and has the least residual bias. Hard Debiasing (HD): This original method of Bolukbasi et al. {{cite:695b0e6f9c54eab91473dcb6ffdff1c4f8d4e7f4}} begins with the above projection operation, first applied to all words {{formula:78899e87-e117-4e04-9abf-09e4ec064ea0}} not in {{formula:d83f51c9-d5e6-40e2-83a2-fba5f052214d}} . Then, using an identified subset {{formula:a143af12-2827-46b9-a09c-14bff9482697}} which comes in pairs (e.g., man, woman), it projects these words as well, but then performs an “equalize” operation. This operation ensures that after projection the paired words are the same distance apart as they were before projection, but lie entirely outside the subspace defined by {{formula:a7616dce-ac62-42e4-b7e1-cb8bb8cf5d5a}} . As we will observe, this equalization retains certain gender information in the embedding (compared to projection) but has trouble generalizing when used on words that carry gender connotation outside of {{formula:d7181dab-ef15-4e42-bd48-6de609d6df38}} (such as names). The final locations of other words can also retain residual bias {{cite:f65cef889a6ab9ca4b088142210641cc612e316f}}, {{cite:2e5fce991bc53ac5999951bdf69796eb7ee2084d}}. LP and HD are detailed further in Chapter . Iterative Nullspace Projection (INLP): This method {{cite:3169ba6fef8261d5107d5c68794c7b95180450f7}} begins with LP using {{formula:322738ee-ce97-4382-b358-8643788dde4b}} on all words except the set {{formula:c6ca4d44-a10c-4686-97e5-bbe116bdcdcd}} . It then automatically identifies a second set {{formula:d99be2fc-a0ea-483d-a01d-98b7044ad3bf}} of most-biased words: these are the most extreme words along the direction {{formula:b7d5a966-fd9a-4b3c-8038-7f5ef8114fa2}} (or {{formula:bbd6d171-a403-4a11-8ebc-1eec99e275e7}} ) {{cite:3169ba6fef8261d5107d5c68794c7b95180450f7}}. After this operation, it identifies the residual bias {{cite:3169ba6fef8261d5107d5c68794c7b95180450f7}} by building a linear classifier on {{formula:7b416236-2c76-4e89-864e-d9206beced59}} . The normal of this classifier is then chosen as the next direction {{formula:7d617219-ea62-4feb-89ac-2bcea7a37be5}} along which to apply the LP operation, removing another subspace. It continues for 35 iterations, finding {{formula:b036c629-c7e5-49e0-9001-496c4d106854}} and so on, until it can no longer identify significant residual bias. Finally, we also apply our proposed method, using `he-she' as {{formula:2b5cd7b1-947c-4dc8-b24e-9caa41821535}} , and the subspace defined by an occupation list (see Appendix ) as {{formula:2dfeb825-941a-472a-b7cf-860713a5b066}} . This subspace is determined by the first principal component of the word vectors in the list. Our code for reproducing experiments will be released upon publication.
m
e930f26a101e61b3e2f4fd5f9a19f831
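A minimal sketch of the LP operation described above; the anchor words used to build the bias direction are an illustrative assumption, not prescribed by the text.

```python
import numpy as np

def linear_projection_debias(W, v):
    """Linear Projection (LP): remove from every word vector its
    component along the (unit) bias direction v, so all vectors end up
    in the (d-1)-dimensional subspace orthogonal to v.
    W: (n_words, d) embedding matrix; v: (d,) bias direction."""
    v = v / np.linalg.norm(v)
    return W - np.outer(W @ v, v)

# Illustrative bias direction: the difference of two anchor vectors,
# e.g. v = E["he"] - E["she"] (normalised inside the function).
```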
Discussion – Several reasons explain the superiority of the proposed method over conventional PIT. First, conventional PIT makes a hard decision when assigning the output-label permutation that minimizes the total separation error. This is not an efficient decision, especially in the initial steps of training, when the network is unable to perform an effective separation. Hence, in probabilistic PIT, we consider the costs of all possible permutations for training the network. Second, the minimum cost function used in PIT is replaced by a soft-minimum function in Prob-PIT. In several applications of machine learning {{cite:47a8f8de9b117d288b2c52d72d1c4664a9068a18}}, it has been demonstrated that replacing the minimum by a soft-minimum results in a smoother optimization landscape, making it less likely to converge to a poor local minimum. Our results confirm this finding for speech separation as well, based on two observations: (1) the SDR and SIR values of Prob-PIT are generally better than those of PIT; (2) the variance of the SDR and SIR values is lower for reasonable choices of Gamma ({{formula:c42eff03-4298-4d98-b3ca-82d11d7a441a}} ). A lower variance in the results indicates a more stable system, which may be explained by a smoother optimization landscape.
d
5de505bb042199ec22b04251a30129d2
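A sketch of the soft-minimum over permutations discussed above; the exact weighting used by Prob-PIT may differ, so this is meant only to illustrate how the soft-minimum smooths the hard min as a function of the permutation costs.

```python
import numpy as np
from itertools import permutations

def softmin_pit_loss(pairwise_cost, gamma):
    """Soft-minimum over output-label permutations:
        softmin_gamma(C) = -gamma * log sum_pi exp(-C_pi / gamma),
    which tends to min_pi C_pi as gamma -> 0 and so smooths the hard
    assignment of conventional PIT. pairwise_cost[i, j] is the
    separation cost of assigning output i to source j."""
    n = pairwise_cost.shape[0]
    costs = np.array([sum(pairwise_cost[i, p[i]] for i in range(n))
                      for p in permutations(range(n))])
    # log-sum-exp with a max-shift for numerical stability
    m = (-costs / gamma).max()
    return -gamma * (m + np.log(np.exp(-costs / gamma - m).sum()))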
In the limit where all the quantities and functions in () are known with perfect accuracy, {{formula:9cd002a1-f477-449c-9c97-1531d631d1b2}} is a likelihood. By the Neyman–Pearson lemma, the ratio between the likelihoods obtained under two different hypotheses {{formula:a2aba987-f0e7-4b0a-ae5b-817086f42254}} and {{formula:23e4f523-c10a-4671-9b8d-c050122b9274}} is the most powerful test statistic to discriminate one from the other {{cite:de23df649f072547ecc8dac3d01332f3ade847f1}}. Hence, if it can be implemented, the MEM should provide optimal experimental sensitivity. In practice, we are limited by the use of leading-order matrix elements, or by assumptions made in constructing the transfer function and efficiency term. The quantity () is therefore not a true likelihood, and the Neyman–Pearson lemma does not strictly apply. For discrimination purposes, it is then common to use the event information as input of another multivariate method (typically a boosted decision tree or a neural network).
m
b2b476048d7a1058327c2569603e1efb
Estimating covariance matrix: In this work, the LogDet estimator measures entropy via the covariance matrix. In this case, we can adopt many covariance estimation methods for improvement. One possible approach is to introduce prior knowledge about the estimated samples. For example, when estimating with image data, the locally correlated nature of pixels can be exploited: we can discard covariances between distant variables, keeping only the correlations with their {{formula:50405353-0467-4341-974d-e328e5f964de}} neighbours. This can be achieved by eliminating all but the diagonal and nearby off-diagonal values in the covariance matrix {{formula:947d8b7b-0c86-4d2b-9014-0894c04f0c3f}} . Covariance estimation has been well studied in modern statistics; we refer to {{cite:ad3cc83c8aaf7ed83123a7075a6daff61135d55d}}{{cite:c199d8868b0148baf0e2fd49b2c9d7a124b2fd22}} for potential improvements. Kernel extension: Since the multi-signal extension reveals the potential of kernelizing the similarity matrix {{formula:a2ac8e3f-6fe6-400b-92ea-eacd385be672}} in {{formula:51f1f0cb-4eef-41a6-9077-3af37ce948b3}} , we can replace it with Gram matrices in which each component is an arbitrary kernel function {{formula:04e90f8d-0cb5-4d3b-8a2a-2df29cae619e}} . A similar kernel extension using the {{formula:91cc0ade-1f59-454e-8bad-80b2e58d5756}} function has been applied in {{cite:8851371d5f012595880824615ec461f16fd24f8e}}{{cite:7a9ed25fce97154db2ac07d8b03371edcb6478de}} and shown to be flexible in visual recognition problems. We believe this can also be applied to enhance LogDet information measuring on more sophisticated sample distributions. Learning Objectives: The LogDet expression has revealed the potential to act as an objective in supervised learning tasks, and it exhibits robust performance under corrupted labels {{cite:08b5cd63574eea88f6911d5b3f711acf787edd59}}. By further analysing the network behaviour, it is possible to use LogDet as an objective function and optimize it through backpropagation. For example, we tried using a kernel function {{formula:e6646d06-f873-4b8c-8372-3b568bb94492}} to force the similarity matrix to capture only positive correlations. Then, we used {{formula:14456121-32e4-4ece-bd91-abf6eb9936d9}} as a learning objective. This expression can be used to train neural network classifiers and can increase accuracy through training. However, since {{formula:930302c1-b1c0-4138-b304-851eb1bf5097}} cannot eliminate all negative correlations while retaining the positive ones, our objective is not an ideal adaptation. We believe this is a quite promising direction.
d
ab0a140224dc1ec6de623e9f35f56c4e
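For concreteness, here is a sketch of a LogDet-type entropy surrogate computed from a sample covariance matrix; the paper's exact normalisation is redacted above, so the scaling and the `eps` regulariser below are assumptions, and the banding comment paraphrases the locally-correlated-pixels idea rather than quoting it.

```python
import numpy as np

def logdet_entropy(X, eps=1e-3):
    """LogDet-style entropy surrogate from a sample covariance:
        H(X) ~ 1/2 * logdet(I + Sigma / eps).
    X: (n_samples, d) data matrix."""
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / X.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(X.shape[1]) + Sigma / eps)
    return 0.5 * logdet

# The "locally correlated pixels" prior discussed above amounts to
# banding Sigma: zeroing entries beyond the k-th off-diagonal before
# taking the log-determinant.
```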
The last sub-net makes the final prediction {{formula:740d75d6-d224-4b0d-9dcf-4deedf9b873e}} , which is used to compute three losses: i) a Reconstruction Loss {{formula:fb83073a-d3df-4f27-838d-8b20c9e637fb}} , ii) an Adversarial Loss {{formula:c77414f3-8784-44f0-9eb6-a0ab9fb31f99}} , and iii) a Structural Similarity Loss {{formula:56db0784-9bea-40e6-b0dc-149fd6e19d04}} . In {{cite:7bac7400b06865531467ab3940908dec62c6c0a0}}, Shao et al. combined the {{formula:26191c31-09c6-4b4f-aa60-5d63452e4d00}} and {{formula:740c71af-7878-4dab-9e59-259d5de7ae3d}} losses to improve the enhancement results in comparison to using the {{formula:c7e03f8d-cb94-4a4b-847c-0359c5851691}} loss alone. As discussed above, both the {{formula:f2e0e98c-d81a-44e2-91e5-2c54ea4468ee}} and {{formula:175adb54-538d-47ab-a19d-1ece9f049e40}} losses are based on {{formula:84025a07-3cd9-417c-8309-f747e94ce545}} ; thus, adding the {{formula:b6359d5d-0e51-4cae-bf91-53b5fb0ad89d}} loss (see Eq. REF ) to the overall training objective forces the model to learn from the pixel distribution in the ground-truth patch, leading to a model with more consistent outputs without increasing the inference time of the method. An attractive characteristic of the SSIM loss is that it has proven successful when dealing with complex illumination changes {{cite:070a28281bdd4af0f30dcc9b5b8d9fecf6d3ee10}}. This enabled the proposed approach to improve on the results of the original LMSPEC implementation. {{formula:eddfd14b-8808-4a79-95d9-882ce828ff45}}
m
5116c4a8b9a3dd3ceac6b36268783564
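A single-window sketch of the SSIM computation underlying the loss above; practical implementations (presumably including the one used here) average SSIM over local sliding windows rather than one global window.

```python
import numpy as np

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM between two patches in [0, 1],
    returned as a loss 1 - SSIM. c1, c2 are the usual stabilisers."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
    return 1.0 - ssim
```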
where {{formula:42c39814-4a48-4176-948b-1f148571454d}} is the Frobenius norm and {{formula:042b182b-9293-4b28-9bf2-5499202f60b2}} denotes the smallest singular value of {{formula:4f7a89c7-fe88-45ef-ae83-9e626a5346d1}} . This elegant result has triggered a great deal of research into developing new variants and the corresponding convergence analyses; see {{cite:b21d2970949adbf7154bd434fd8ce057e755f411}}, {{cite:d68f18018e7a7ee39c74ceb732c2054665a4a52e}}, {{cite:48c2b253117adad3cf20f6c506b20b8047b08d3b}}, {{cite:b053b0f4e07190e63dd2503338736d768adbc64b}}, {{cite:becda09815e4982245e01c73f93432cb4af87405}}, {{cite:9a431659d3bc3dba451829253871990778b334af}}. Recently, finding sparse solutions of linear systems has become a popular topic in data science and machine learning. In this paper, we focus on another variant of the Kaczmarz method, namely the randomized sparse Kaczmarz method, which was developed in {{cite:47cdd1184226a57961a38ca2bab839fd01898af9}}, {{cite:47da1ba7d427941129f9efba6fdb80e5eb08e3a6}}, {{cite:f8f12415801ab19559cabaf125541d642145cf9d}}. Specifically, the iterative procedure of the randomized sparse Kaczmarz method can be formulated as {{formula:6f1eb5fa-6ac4-42c1-b18b-877539c773cb}}
i
c43ada3cda86793b0367de6ce31f7852
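A sketch of the randomized sparse Kaczmarz iteration referenced above, with the usual soft-thresholding step; the row-sampling probabilities and shrinkage parameter follow the common convention in this literature and are not taken verbatim from the cited works.

```python
import numpy as np

def randomized_sparse_kaczmarz(A, b, lam=0.1, iters=5000, seed=0):
    """Kaczmarz-type row update on an auxiliary variable z followed by
    soft-thresholding to promote sparsity (lam = 0 recovers the plain
    randomized Kaczmarz method). Row i is sampled with probability
    ||a_i||^2 / ||A||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        z = z + (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # shrinkage
    return x
```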
Agricultural environments, such as orchards, pose many challenges for deploying robots autonomously and safely {{cite:accbf63d6bf5ccb7660587a57b542b688c1d6d0c}}. For example, the authors in {{cite:07fccbd30ca9f8ceaad03011ab524865a189a651}} compared state-of-the-art SLAM methods in a simulated vineyard environment and found that the changing appearance of plants can degrade the localisation module, with the robot drifting from its course. The main reason existing SLAM methods fail to guarantee robust, long-term performance in the agricultural domain is that they are developed and evaluated mainly in urban or indoor environments, which are consistent across time {{cite:84e389dc16f91fe07a7f2ecd3b8a6ce654bedc9e}}. Agricultural environments, on the other hand, are subject to seasonal changes, with plants varying in size and colour, offering less stable localisation features than buildings and road infrastructure. For this reason, achieving long-term autonomy for robots in such environments is both a challenging and interesting research problem {{cite:ff0817a4b8f3f2fbb8906b4457d4803dd8931b19}}. Motivated by this challenge, we deploy an autonomous mobile robot in the Ktima Gerovassiliou vineyard (Greece) at specific time intervals and record sensor data to better capture the map's variability and, possibly, exploit it later. To do so, we propose a long-term, robust deployment of the robot in a vineyard, navigating on a topological map. We planned to record sensory data from March 2022 until the harvesting season in October 2022. Unlike {{cite:f79a2abbebc53ba266a1b5d579e182c3d1fccfee}} and any other dataset in the literature {{cite:bfc86d2a746f45472669553ee0721b37630769ab}}, our objective is not only to create a dataset of robot sensor data in an agricultural environment; we also aim to capture and study the long-term nature of such data.
i
f2433dcfedb253aff3d0fa973f860baa
Off-policy Actor-Critic. In on-policy AC, the data samples are generated in an online manner, always sampling based on the current policy at hand. In contrast, in this paper, we focus on the off-policy AC, where the algorithm updates the policy based on the data collected (possibly in the past) by a fixed policy, called the behavior policy. Off-policy learning is inevitable in high-stakes applications such as healthcare {{cite:714f990958711cc23eb635c9a83c1063fffd977e}}, education {{cite:4e8c5d747daaf4ee20998b6f280e7876056107c1}}, robotics {{cite:44919e40111a7ba6e440971170a98b5e811b0ffa}} and clinical trials {{cite:514bb0022bbddca711848ef4db02678342409fb6}}, {{cite:a2a172f119e726eccb89b10b83f5111838e8bc54}}. The agent there may not have direct access to the environment in order to perform online sampling, and one has to work with limited historical data that is collected under a fixed behavior policy. Moreover, off-policy AC enables off-line learning by decoupling data collection from learning, and is observed to extract the maximum possible utility out of limited available data {{cite:f0a9693bef3672e2e73b2f0984fed3d9929a571f}}.
i
748543839a56faaf9381763e103a2f5d
Quantum simulation remains one of the most promising applications of quantum computation due its potential impact on high-energy physics, cosmology, condensed matter physics, atomic physics, and quantum chemistry. While the vast majority of quantum-simulation-based algorithms have been designed for the fault-tolerant quantum computing era {{cite:586562d8aa3ed18b2e4878165e0cbb998abcc1ae}}, {{cite:74e2f161ba572349433869f3cfa529223557ee1c}}, {{cite:dbc41543f6d2b806e9d693a4296c9dcef7f1f850}}, {{cite:8da9fbac0d08fa2f021f419d6e847be926518ea2}}, {{cite:8dc2dc4d24b4209e3d927bbdf838403d688cd6b8}}, {{cite:ada9f271e4b32ff13e6b9c66edea0fe44faab6c0}}, the current generation of noisy intermediate scale quantum (NISQ) {{cite:5755450cd26da93b41539f88c2a4d44295e2fce7}} computers limit the types of algorithms that could be implemented in the near term {{cite:5b842348cbaa08c8fa6cc8c8e5996c856d3cb1c7}}, {{cite:3347fdfc5de2274ae4d4d022a3d4aeff5612f00f}}, {{cite:dcc0249a9bffc5d7713bedfbf24a915b2faf8318}}, {{cite:85f4831f01b6c4c79f586be9f30f6223f433d9a4}}. Variational quantum algorithms (VQAs) with parameterized quantum circuits have emerged as one of the leading methodologies capable of dealing with these constraints, and within this context, several NISQ-friendly quantum simulation algorithms have been proposed. These include the subspace variational quantum simulator {{cite:1ef75d984a12cde584d3527181466314a426f156}}, iterative approaches {{cite:6c5fb32d8635d54228d89f79da71a89a545fb06e}}, {{cite:bb86b9391af8a80e31bfbd71a81f587825d068a0}}, {{cite:37480acd6243cda9cd587fd5e179b60ed1f701d6}}, and fast-forwarding approaches such as variational fast-forwarding {{cite:4710eb1ad0b70fe8c738d4197798ecf4f61410e0}}, variational Hamiltonian diagonalization {{cite:aa25667a312163a7ed86ef4139f0cd2cb5b20034}}, and fixed-state variational fast-forwarding {{cite:064bfe9837426e8177ac9c6b8cdd3f329f22ac37}}. The overarching idea in all of these methods consists of using a variational wavefunction, {{formula:86bb9048-e8ef-4c88-94ce-9d119b6ebc33}} , defined with respect to a parameterized quantum circuit {{formula:f7ebb951-34d7-424a-ae95-0f389f99cd74}} and using a quantum-classical computer feedback loop to solve the optimization problem. In recent years, however, it has been shown that a wide variety of optimization problems relevant to VQAs can display non-convexity and vanishing gradients which can lead to fundamental optimization challenges {{cite:350334bff5cca8bf0985c988df5ed2d2e7371fac}}, {{cite:f3a67d057e38d16767dad761fec65db8b102be1c}}, {{cite:a8dab02d29255f8d1d740b6403323ef4b1925a3f}}, {{cite:c57c02156e96bdc45d4f3aa6a63d379bdc40edd5}}, {{cite:dfb26631dd7728d5f8aea054363c8614f8982836}}. In addition, these algorithms suffer from large measurement overheads which can lead to long run times {{cite:3347fdfc5de2274ae4d4d022a3d4aeff5612f00f}}, {{cite:dcc0249a9bffc5d7713bedfbf24a915b2faf8318}}.
i
92178abbad4868ba229bf36f947bb7f3
Using the tools described above, we find the allowed/forbidden regions in the {{formula:6c5ab11c-3f20-4cbc-8f34-87cfb246f74c}} plane, as shown in figure REF . (Compared to the results presented e.g. in {{cite:9572875467f69a713dbf2eea2c9f30baeabefcb2}}, {{cite:966bb5bb250c57116680ef4c61f6d49fdc2f0fdb}}, the constraints for negative values of {{formula:c31ada81-b761-4346-856e-2b1dd53f702d}} agree, while we find more stringent constraints for positive values. Note, however, that we use different approaches; for example, HiggsSignals makes use of the most recent constraints as well as STXS information {{cite:b9ed09fd7093d2c5f223253e5c26f66f947b1f73}}. We thank the authors of {{cite:9572875467f69a713dbf2eea2c9f30baeabefcb2}} for useful correspondence concerning this point.) Some points are additionally ruled out by {{formula:9d409173-80c7-49bf-b6f4-5828bd129c24}} {{cite:4a98b640812aac7ed5ac0abdf27eed45d00f8751}}, {{formula:cecb97a8-c31f-4bea-9d3d-124fef52f799}} {{cite:d37623d9ee61957d4f92d257ef816cc9ed27b026}} and {{formula:2275846e-c3c4-49a2-9632-f60eff891f19}} {{cite:1b4abcfff89ee1d59f8445671e809581189bdafa}}, {{cite:fa3d7c03ea5ca233fe1832d769511f8527711431}} searches. Values of {{formula:8110be7e-cc11-4060-9a4e-686c97413cd8}} and {{formula:95a55824-39ca-4961-a520-5b7434e6e697}} are excluded by {{formula:d847c8e9-6dc1-49f2-913f-9acc34a23fe0}} {{cite:6b474533832f35be23ac58adfbb7a7a6c73e7e01}}. Furthermore, the region where {{formula:323d7b91-4db5-41d5-9b06-ae17549eda77}} is mainly ruled out by both perturbative unitarity and perturbativity.
r
21cb035c88c7b76bbb12607660b0d4ca
In this section, we rigorously prove the main results given in Section REF . The technical tool known as backward error analysis {{cite:7c5e2fa310306c95686d7a019537c78fa32300b6}}, which is indispensable for the study of equations or numerical solutions over long times, will be used in the following proofs.
r
c599e7c741d1428e86e9f28d12a89700
When the {{formula:3aaf9da7-7a46-4c6d-853c-7ebabe7f2599}} matrix is learned using our extension of the relative attention proposed in {{cite:fb8b1b790dc7a1814c2d5512dfb0a6675a93aeb3}} (Learned relative), the performance improves further. We think that in this scenario, where a decent amount of data is available for training, making the {{formula:7db9bff0-737a-4124-b1e7-306747c65d54}} matrix trainable gives an advantage over ALiBi, which uses a fixed {{formula:d9a92058-849a-4cd6-9f3a-67e5849f0efb}} matrix. Figure REF c shows that, in contrast to absolute positional embeddings, the relative positional embeddings are able to differentiate time regions in the past, present and future. Moreover, in this model the positional embeddings interact with the queries, which might give the model more capacity.
r
1a89a270924c08a972c8be46afbb4f84
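A sketch of how an additive relative-position matrix enters the attention logits, as in the learned-relative variant discussed above; shapes and parameterisation are illustrative assumptions.

```python
import numpy as np

def attention_with_relative_bias(Q, K, B):
    """Attention weights with an additive position matrix B. In the
    learned-relative variant B is trainable and typically parameterised
    by the offset j - i, so past, present and future positions can be
    treated differently; in ALiBi, B is instead a fixed linear penalty
    on distance. Q, K: (T, d); B: (T, T)."""
    logits = Q @ K.T / np.sqrt(Q.shape[1]) + B
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)
```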
While rating prediction and LFM methods for CF have been widely researched in the early literature {{cite:de057674dd65bad1e6c0ee46da3d1d4e47e489b7}}, {{cite:3772390372f7f7fe034c216d51ce9eca839b9a14}}, {{cite:efd48d51d6a79f71db5b6f58058eb66dc2ac605e}}, {{cite:2f9ca5d5f79487837bea4ac2a89d2e9327758c7d}}, a variety of probabilistic factor models {{cite:26e72e76196aa61cbd73cda173e6d4ef2006412a}}, {{cite:458162b8b182066557ced18bc7f6f92e836c8a94}}, {{cite:958bb2d4d27bbd3da05cc944d438a2acb5eb2320}}, {{cite:d8e6a4d2fad16bd2300bd2e9980ce12375e0813b}} and Bayesian methods {{cite:1d070a8a448ce911f9f5afd06694621a601e2563}}, {{cite:d3ca162e42270beca02ebda70a67b5bf22b5201e}}, {{cite:939b718c1a181377f8104ffd09bd37b66d738572}}, {{cite:96d22a70ca4bdc3c96462274e13c01254d5b7afb}}, {{cite:6872736bc6cfc8a327ed200c2b4cb339cde8e96d}}, {{cite:57f2eeebd159017c166b49ba3bfba0f34f16ff01}} have been proposed to enhance the performance. For estimating the parameters of probabilistic models, Expectation Maximization (EM) {{cite:d7f75c96cc8ad0688646c84cef28f8687c52c3ef}}, {{cite:858a4dc9bc843a2c09bfe9fabb966df7b5cce38b}}, {{cite:8f81a227056e7dddce374e6bc434abb1240f68c7}} and MAP were used in early works. Zhang et al. introduced a novel adaptation of the EM algorithm to learn the parameters of a prediction model for personalised content-based prediction {{cite:a521053de5ff0b300d6e7dc2bb037e982da3f415}}. Stern et al. instead used Expectation Propagation and Variational Message Passing to learn a model using both ratings data and content {{cite:19535b4a85326174018a68c5a850a3047aa351eb}}. Regarding the application of Bayesian methods to probabilistic factor models, the early pioneering works are described as follows:
d
b8a3a39159692da551c74db3afe7e84a
In this section we demonstrate how the GRE framework reveals further connections between Gaussian process regression and the Nyström approximation for kernel ridge regression (KRR). These connections have been known for a while and have received renewed attention recently {{cite:c5b900e306bb6e1660070d6dfee7c725cc0e70b9}}, {{cite:07ef3106efa27971a178fa52d3706fb6d9826f51}}, {{cite:b3d8f75158eebf6699408ca0c740eb59ff9b3667}}, {{cite:7e13e77a35e6152bd2fe03bd46b35623ae30ad11}}; a minimal sketch of the approximation follows.
m
a44cd1abd6b1c45605f3c1a5080b6c79
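For reference, a minimal sketch of the Nyström approximation mentioned above (the landmark selection strategy is left unspecified and is an assumption here):

```python
import numpy as np

def nystrom_approx(kernel_fn, X, landmark_idx):
    """Nystrom approximation of a kernel matrix: with m landmarks,
        K ~ K_nm @ pinv(K_mm) @ K_nm.T,
    replacing the O(n^2) kernel matrix by an O(nm) factorisation.
    kernel_fn(A, B) evaluates the kernel between two sets of points."""
    Xm = X[landmark_idx]
    K_nm = kernel_fn(X, Xm)            # (n, m) cross-kernel
    K_mm = kernel_fn(Xm, Xm)           # (m, m) landmark kernel
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
```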
Furthermore, we find that the separability of two non-adjacent displacements results in the relation {{formula:d5ff719f-223a-4afd-81d1-ee370a1a701f}} for {{formula:6dcfa154-6e7c-4b52-94b7-461686cce7b8}} . Since {{formula:d2c88451-09bc-4aca-95e2-6b0325bde304}} is a multivariate Gaussian with 0 mean, we can apply Isserlis's theorem to solve for all of the moments of the distribution {{cite:b058e4f75b570a05c21a809a2ccd71dfcf2c06b3}} given knowledge of the second moments of the Fourier transform on {{formula:82343753-b16b-47fd-9e20-1252448f21c9}} , which can be expressed as a covariance matrix. This allows us to express Eq. REF as {{formula:6bbf37fc-aab2-4f7d-8c5b-c477e7084b5e}}
m
1ad0d2c5dfc4912c376640d7d75820e6
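For reference, the fourth-moment case of Isserlis's theorem for a zero-mean jointly Gaussian vector, which is the mechanism invoked above (all odd moments vanish, and higher even moments are analogous sums over pairings):

```latex
% Isserlis's theorem, fourth-moment case (zero-mean Gaussian x):
\mathbb{E}[x_1 x_2 x_3 x_4]
  = \mathbb{E}[x_1 x_2]\,\mathbb{E}[x_3 x_4]
  + \mathbb{E}[x_1 x_3]\,\mathbb{E}[x_2 x_4]
  + \mathbb{E}[x_1 x_4]\,\mathbb{E}[x_2 x_3],
% so every moment is determined by the covariance matrix alone.
```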
To address this “data isolation" problem, Federated Learning (FL) was proposed as a decentralized approach that enables collaborative training while keeping the data on the clients, exchanging only gradients/model parameters {{cite:3775ba6a86bc0eff8b28ed00d3f8c06ddc2690db}}. In particular, Federated Averaging (FedAvg {{cite:76067dbd1ac44b8e3c7f670f1b7d1815018d79c4}}) was proposed and has become the de facto method in federated optimization; it averages the gradients from the local clients. However, even sharing gradients may unintentionally lead to information leakage {{cite:eddd2ac7ae5c6aebbb327c8e9d40732d11219448}}, {{cite:a381c2246b663a5a0a7df40d762c90de3c3172d0}}, {{cite:da2242bdec832f6d220b6b6eaced339f7f36b1a2}}, {{cite:52f93153d3bb4620bc5a68e7c63d25552237e5d5}}. In order to protect the gradients of local clients, several approaches have been explored. Cryptographic approaches based on homomorphic encryption and secret sharing to ensure the privacy of local information can be found in {{cite:3b49db29a1d7ed4f06995fba4d125b76a9675cd9}}, {{cite:80e73ac5d1eaa61a8e001523c46a9e75ea2dc5d6}}. Those methods are computationally inefficient for non-linear operations, and thus not practical for large-scale ML models or models trained with frequent communication. Several recent studies address the privacy issue in FL by combining differential privacy with existing federated algorithms to provide privacy guarantees {{cite:0bdfde9dce48639d9f25bda5f0cd39073cb5ea08}}, {{cite:a7609e556628b0d0a039fa6066d66e3d72822376}}, {{cite:84437b50e75aa834e9feae98dfcb832928ff6b61}}, {{cite:f643f6f64a49cd12aee38625ad29c88a0873132d}}.
i
4fc49ed473d4e91e061996b5fd57ee53
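A minimal sketch of the FedAvg server step described above; whether model weights or gradients are averaged, and the exact weighting scheme, vary across implementations, so treat this as one common convention rather than the definitive algorithm.

```python
def fedavg(client_weights, client_sizes):
    """Server-side Federated Averaging: replace the global model with
    the data-size-weighted average of the client models.
    client_weights: list (one per client) of lists of numpy arrays;
    client_sizes: number of local examples per client."""
    total = sum(client_sizes)
    return [sum(n / total * w[k]
                for w, n in zip(client_weights, client_sizes))
            for k in range(len(client_weights[0]))]
```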
The ACADO code generation tool, an open source software package for optimization problems {{cite:aafe85f3af3b2ea8bcdabdbc34e87831a4ec5cda}}, {{cite:23a35f46985f1efde9454c70dd308df78251a132}}, has been used to solve the constrained nonlinear optimization problems in the NMPC and NMHE. First, this software generates C-code, then the auto-generated code is converted into a .dll file to be used in LabVIEW. Detailed information about the ACADO code generation tool can be found in {{cite:e751c223a0bf16657b830f41fd6e5950bb62739f}}.
m
9cf24b433608f0ffed881e04d2b859e7
In this paper, to obtain a well-defined principal symbol, we have shown that the spacetimes in the so-called “Einstein-Gauss-Bonnet gravity in four dimensions" have to be (locally) conformally flat. So, locally, the metric always has the form {{formula:f3ece315-766e-4bb9-a82f-88db4d884726}} . Although the theory is diffeomorphism invariant when {{formula:6f25db26-d446-4d11-8bac-1baa5752510b}} , the final four-dimensional theory has preferred spacetimes, and it cannot be diffeomorphism invariant. This point is similar to the conclusions in {{cite:a95f531acc8af53f804f2b1a8f539590bf8596f5}}, {{cite:095f76c0fe4119134023a2005f52e84473a7ec88}}, {{cite:976853cc72529a2582ab7bef593e97b72b7580af}}, {{cite:21bd6d66ae3619498d7b20411a09acca87b0dcde}}.
d
241372cd6a5736c55c89a45df8b4e06b
Most algorithms on word vectors and related representations are applications of linear models, which perform predictions using linear combinations of feature values weighted by learned parameters. The tasks commonly solved with linear models include regression, classification, ranking and clustering. In text regression, regularized linear regression can be applied {{cite:f3e0ded03cce6f3b9e218c181fe9c1cc85b4be66}}, {{cite:373c25718a76ead534a530106c90f53c2f13856b}}, {{cite:376037d8b870972b3407685e9e400d39f389cc7b}}. In text classification, classifiers such as Centroid {{cite:0056189c491c187bbb1f2c39865e1553bbdd822a}}, {{cite:6ab34c4851ee17ad01fe79ffb1e483b57063525f}}, Bernoulli Naive Bayes (BNB) {{cite:42bfc61038c176ae91ede70de2b5a76f541852ef}}, Logistic Regression (LR) and Support Vector Machines (SVM) {{cite:410113d6e15e2f21f12a67e3f4f5240cd50f6a3e}}, {{cite:dd5d28ed53b68f6096c243a82246b93b875ec052}} are linear models. In text ranking and retrieval, all of the common scoring functions are linear models, including the Vector Space Model (VSM) {{cite:ad3492f19f2e732b8568c01b0a51900de1f0f20c}}, language models {{cite:16920056874fc69f6722eff22d96ac9c294f1c39}}, {{cite:90bcab0f993cb8628d0f3aad303831d167484e15}}, as well as more recent discriminative ranking models {{cite:e010b09d6b781f2750a63a657eb9ff0d3c930eb8}}. Text clustering commonly uses linear models, such as multinomial and cosine distances {{cite:ef663bda69fca7d13532196c61ed2af5c3765498}}, {{cite:b0d80f964190fdc99ccff904bd9da7146539de34}}, {{cite:592c3ad51ea2a92d8af3c1f566285b7e6629038c}}, {{cite:bf3303337c48d43e51b63d8400893361f68fc7d6}}. The following presents a succinct overview of the linear model framework for TM.
m
baed503d9cd518c7c3bb0bd8283aff51
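The common thread of all the linear models listed above is a single scoring rule, a weighted linear combination of features; a minimal illustration follows (all names are ours, not from the cited systems).

```python
import numpy as np

def score(x, w, b=0.0):
    """Linear scoring rule shared by the models above: a dot product
    between a feature vector x and learned weights w (plus a bias).
    Regression uses the score directly; ranking sorts documents by it."""
    return x @ w + b

def classify(x, w, b=0.0):
    """Binary linear classification: threshold the score at zero."""
    return int(score(x, w, b) > 0)
```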
We have evaluated Pattern-Net on the ModelNet40 dataset {{cite:400c7de4fd01fb78218403b22e3412f2a028db32}} for the classification task. It contains 12311 meshed CAD models from 40 categories. As in other works, 9843 models were used for training and the rest for testing, and the models were normalized to the unit sphere. Each model is uniformly sampled from the mesh faces with 1k, 2k or 4k points. During training, the points are augmented by random rotation, scaling and translation to encourage transformation invariance (Property ii above). Quantitative comparisons with state-of-the-art point-based methods are presented in Table REF . Our method is on par with the other methods for 1k and 2k points and gives the best result for 4k points. {{table:5a931467-1496-4625-84e2-65443ac153a4}}
r
1377465ffece0d5588c2664414d88556
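A sketch of the unit-sphere normalisation and the rotation/scaling/translation augmentation described above; the augmentation ranges and the choice of rotation axis are assumptions, not Pattern-Net's exact settings.

```python
import numpy as np

def normalize_unit_sphere(pts):
    """Centre the sampled points and scale them into the unit sphere.
    pts: (N, 3) array of point coordinates."""
    pts = pts - pts.mean(axis=0)
    return pts / np.linalg.norm(pts, axis=1).max()

def augment(pts, rng=np.random.default_rng()):
    """Random rotation about the (assumed) up axis, plus random scaling
    and translation, so training sees transformed copies of each cloud."""
    t = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(t), -np.sin(t), 0],
                  [np.sin(t),  np.cos(t), 0],
                  [0, 0, 1]])
    return (pts @ R.T) * rng.uniform(0.8, 1.2) + rng.uniform(-0.1, 0.1, 3)
```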
We present Smooth-Reduce, an extension of the randomized smoothing approach proposed in {{cite:f9f7b8ac62a2cd6a55cde939bc035ab87104f3f1}}. We empirically and theoretically proved that Smooth-Reduce classifiers improve over standard randomized smoothing in terms of certified radii, as well as abstention rate. Our approach relies on the performance boosting properties of ensemble classifiers, which we emulate by creating an input set using patches. We show better certification performance as compared to other smoothing methods under two different aggregation schemes. A major benefit is that our approach requires no additional re-training.
d
08f8a0648e6569544278dcdbff6c26fc
In this work, we have focused on the spectral properties of the normal state at zero temperature. The next step could be the analysis of superconducting states {{cite:511020b7cf2976797f02a2c49f9b90067bc4ecd5}}, {{cite:d58c29395f4c6ae0d9b1e57ee2ff237135c6bb0c}} where the coupling to longitudinal optical phonons plays a non negligible role {{cite:63b4afe28a5068e9fc2b892a3b97b21f40019b2d}}, {{cite:1c7f6920413c329ff2e4267b3da0f32d2012703b}}. Finally, another interesting aspect could be related to the role of electron-phonon coupling on the temperature behavior of spectral and transport properties {{cite:876d3593d7a070fd8d6d561ad588c76b8f724da2}}, {{cite:84e9b653cc8d8ad32deafc07772cea32c7a5037d}}, {{cite:ba552fa8c3472fd519fe5059e9ba78e98f8da93d}}, for example of the thermoelectric Seebeck effect {{cite:4926d36d95db44821e16163908632209f031ccd2}}, {{cite:81ef4b495dffe604bc80ff3835a367ec2a9e086d}}.
d
dd877c6e5e1ef5d515f31488707deb95
Then {{formula:a23b661d-04f4-40e3-a0d3-e7a610afb251}} is said to be twice epi-differentiable at {{formula:9519b432-2cd6-4f0c-a4c5-81a6c14df597}} for {{formula:de4be383-0d69-4c6b-83d1-844cefd9a693}} if the sets {{formula:8352a355-b326-4bc5-8e82-ae1d69e08d24}} converge (in the sense of the convergence of the corresponding distance functions) to {{formula:54ad3429-c592-438a-bc84-851b2c9ae52e}} as {{formula:69673c23-983d-4a05-919f-fbe5abdff714}} . If in addition the second subderivative is a proper function (i.e., does not take the value {{formula:0b7cd85d-e889-4059-8580-c70c8fa92142}} and is finite at some point), we say that {{formula:9005a2ea-8f2d-482e-bb11-e9f44eedbbc4}} is properly twice epi-differentiable at {{formula:d6acef89-ac33-4b52-9be7-4901e33fccc1}} for {{formula:5a6865e4-d5d8-4097-994a-391715602bf7}} . By {{cite:560f23b9ff377ff4a36c010b1fa1ac2229d9b2a4}}, the twice epi-differentiability of {{formula:23c47040-ac3b-4887-bb15-530fe3ab0ae6}} at {{formula:9951c1f6-024e-432d-bf08-7bbdfb2c05a7}} for {{formula:7872f1ba-becb-4333-855e-345b21613daf}} can be understood equivalently as follows: for every {{formula:aad4a7f6-896f-4d1b-9f26-50cf0285adcc}} and every sequence {{formula:2dcffddc-eb10-4c8a-8663-7d6ed326890b}} , there exists a sequence {{formula:ed08e396-133c-4607-b26b-00c6ee8bae91}} such that {{formula:12f22d5c-0cd4-42be-8e69-033c16172cb1}}
d
97ff67ad21d5ad75606dcdb04cb224c2
Finally, we have proposed a construction method that can be used to find new non-trivial examples and reveal the computational power of the Deutsch-Jozsa algorithm {{cite:105df883f95a24490cc7adca9b970c153e4c391f}}. Most importantly, the construction method paves the way for finding more problems that have quantum advantages. In contrast, the existing problems with quantum advantages were not proposed constructively and are thus difficult to employ.
d
f32582b9fe4f2db8a25f70d2f8fa0fb0
We implement the QBMC method described in Section  as a module within HyST {{cite:ea427b0096d006ba800aae358023de6c99f4675d}}. HyST takes as input a hybrid automaton model in an extended form of the SpaceEx XML format {{cite:932afa90d5fbef6c53d4aeebe20880ad18434cb1}} (supporting, e.g., nonlinear functions instead of only affine ones). It then generates a Python script that expresses the transition relations of the hybrid system as quantified SMT formulas using the Z3 Python API. We evaluate the proposed QBMC method on several instances of the Fischer and Lynch-Shavit mutual exclusion protocols. We compare the results to those of dReach, a state-of-the-art BMC tool for nonlinear hybrid automata {{cite:94e4a175cf2da85d255d43b16c709745ada2467d}}, and of HyComp using the MathSAT SMT solver {{cite:9ab528e3b395792752ff2bc2bec1a54cf678a096}}, in terms of execution time and memory consumption. We note that the comparison considers only the BMC features of dReach and HyComp, and all of their models were generated using HyST. The experiments are performed on an Intel i5 2.4 GHz processor with 4 GB RAM, executing the method described in this paper and dReach in a VirtualBox virtual machine running 64-bit Ubuntu. Z3 version 4.3.2 was used in the evaluation. We collect the execution times in seconds and the peak memory usage in MB for the different examples. The preliminary implementation described in this paper, along with all the examples, is available online at: https://github.com/LuanVietNguyen/QBMC {{table:c7b71725-5bce-4def-a46e-64adc18374fd}}
r
8fc1694b1c6514eed543fcc31f4810ff
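To give a flavour of the generated scripts, here is a toy bounded-model-checking encoding in the Z3 Python API; the real HyST output encodes hybrid-automaton modes, flows and guards rather than this single discrete update, so the dynamics below are purely illustrative.

```python
from z3 import Real, Solver, And, Or, sat

# Unroll a transition relation for k steps and ask Z3 whether an
# "unsafe" state is reachable within the bound.
k = 5
xs = [Real('x_%d' % i) for i in range(k + 1)]
s = Solver()
s.add(xs[0] == 0)                                        # initial set
s.add(And([xs[i + 1] == xs[i] + 1 for i in range(k)]))   # transitions
s.add(Or([xs[i] >= 4 for i in range(k + 1)]))            # unsafe set
print(s.check() == sat)   # True: the unsafe set is reachable in k steps
```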
In Bayesian deep learning, temperature scaling is a practical technique to improve predictive performance {{cite:3cf0edcc73a126b51bf1e8e69d89a3da47a69b33}}, {{cite:0e0898398cfc0d098e965b8ab0636d2e167db150}}, {{cite:08659ddc5d10e836ae2fce1269400b711b7734f9}}. There are two main approaches to tempering the posterior, namely (1) partial tempering and (2) full tempering {{cite:60b2d5cbf5a1b2a03299641b3e0fb24584e79939}}, {{cite:6aa73acc61d8f3935f84c2416a26d4378b9db3bd}}. In this section, we rigorously investigate the posteriors induced by the {{formula:f3863ff4-9407-4b88-bf3a-24044fa3681e}} prior and the optimized prior under different tempering settings. We use the same MNIST setup as in the main paper, with 200 examples for inference. For the optimized prior, we use 100 training samples for learning the prior. For the {{formula:b95ff084-04c1-4aa7-a232-17d0a7fec4a5}} prior, we use for training the union of the 200 training samples and the data used to optimize the prior.
r
0b5d00945d48edbf7388a5125d8c2edf
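For reference, the two tempering conventions referred to above, written at temperature T; this is the standard usage in the tempered-posterior literature, and the paper's notation may differ.

```latex
% Tempered posteriors at temperature T:
\text{partial tempering: } p_T(\theta \mid \mathcal{D}) \propto p(\mathcal{D} \mid \theta)^{1/T}\, p(\theta),
\qquad
\text{full tempering: } p_T(\theta \mid \mathcal{D}) \propto \bigl[\, p(\mathcal{D} \mid \theta)\, p(\theta) \,\bigr]^{1/T}.
```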
1. FedHealth with incremental learning. Incremental learning {{cite:c4e133068cac5342ee4ec9ecdf2b397c4c679a6b}} has the ability to update the model as time, environment, and users gradually change. In contrast to transfer learning, which focuses on model adaptation, incremental learning makes it possible to update the model in real time without much computation.
d
c7308807d7ffa7ba59d3770201d81442
The potential of deep autoencoders in this study opens the way for exploring more complex AE such as denoising or variational autoencoders {{cite:8cd66c3af205eb7875bd1e2f479eee8f3993693f}}. Future work also includes the possibility of considering this problem as a multiclass classification problem rather than a binary one, aiming to distinguish between AMR detected in the first 48 hours, AMR detected later and non-AMR patients.
d
b4bc512ee45d3e49ee35d33a450cbcca
Comparison with State-of-the-Arts. In Table REF , we evaluate our model on the HO3D v2 testing set and compare the results with state-of-the-art methods {{cite:727d5fa77d1211b2323df36cb0b3d1b993dc98da}}, {{cite:8ec68f50b9c40ebcccca7e7a30238990b91d25b5}}, {{cite:dcaa50574c4d9018a279e4f20b1923158b554e09}}, {{cite:d383a4516c9cf1b85d14bfa91eb6b771bd57d7f7}}. All results under the hand metrics are collected from the official HO3D v2 CodaLab Challenge outcomes. From the table, we observe that our method achieves superior results across all hand, object, and interaction metrics. In particular, our method not only produces more accurate hand and object poses, but also generates physically realistic hand-object grasping of higher quality, as we observe lower penetration and a higher contact rate than {{cite:8ec68f50b9c40ebcccca7e7a30238990b91d25b5}}. Meanwhile, our method relies on an efficient feed-forward pipeline from a single image input and does not require computationally expensive optical flows as temporal cues {{cite:727d5fa77d1211b2323df36cb0b3d1b993dc98da}} or an iterative optimization process {{cite:8ec68f50b9c40ebcccca7e7a30238990b91d25b5}}. Furthermore, our method does not rely on sophisticated contact losses as in {{cite:727d5fa77d1211b2323df36cb0b3d1b993dc98da}}, {{cite:8ec68f50b9c40ebcccca7e7a30238990b91d25b5}}, showing its superiority in modeling hand-object interaction. Compared to {{cite:dcaa50574c4d9018a279e4f20b1923158b554e09}}, our model is trained with remarkably less augmented data, yet achieves improved results without introducing much complexity. Finally, thanks to the dense mutual attention, our method improves the performance by a large margin over the sparse keypoints-based method {{cite:d383a4516c9cf1b85d14bfa91eb6b771bd57d7f7}}.
r
adde938373a1edaace731a4503a9d172
Investigation on Other Pre-training Tasks. As mentioned in the main text, we only adopt Masked Language Modeling (MLM) and Video Text Matching (VTM) as pre-training tasks for both the proposed Lavender and the task-specific baseline Lavender-ts. Here we briefly discuss other popular pre-training objectives with Lavender-ts. The first is Frame Order Modeling {{cite:8b116f5856c66926c81256864fc8df064522820e}}, {{cite:2781f060d7a14ff6c27525564816de06704bc9a1}}, where the input video frames are randomly shuffled and the goal is to recover their original order. Different from the video-ASR pairs utilized in these works, the paired text in our pre-training data is not temporally grounded. In most cases, the shuffled frame sequence will probably still be globally aligned with the textual description. Hence, such a fine-grained temporal reasoning objective is not applicable in our case. The second is Masked Visual Modeling (MVM), where the model learns to reconstruct high-level semantics or low-level details for a certain percentage of “masked” visual inputs (i.e., features or patches). Different variants have been proposed and shown to have little-to-no effect in vision-language pre-training, such as predicting the object category of masked image regions {{cite:e4d99498d8cbc50cf5bf42640244a9f4d135072b}} and distilling region/frame features from well-supervised vision encoders {{cite:e4d99498d8cbc50cf5bf42640244a9f4d135072b}}, {{cite:8b116f5856c66926c81256864fc8df064522820e}}. More recently, by taking advantage of pre-trained DALL-E {{cite:e27a505781be4d18ff234ee25215c8235a61971c}}, researchers {{cite:3b4d2eb0d97b577d7173f308abaa83c709003f86}}, {{cite:36880f823466a684aee3e4c26cf16517f6975150}}, {{cite:3dd7c360a4438e287e64b6df0d57f4b7f3097825}} have shown the potential of masked visual token modeling, which asks the model to recover the discrete latent codes of the masked image patches. {{cite:c7b7bbf1f6a62e9be7ce5e7ea4aacdfac88d0b00}} explores image feature descriptors such as Histograms of Oriented Gradients (HOG) as the prediction target for self-supervised visual pre-training. In Table REF , we investigate three different MVM objectives on top of VTM + MLM pre-training for Lavender-ts: ({{formula:6abed385-2b29-4fec-805f-d244c8f56ed0}} ) VQ Token: to recover the discrete codes extracted from pre-trained DALL-E following {{cite:36880f823466a684aee3e4c26cf16517f6975150}}; ({{formula:513be124-ccec-4287-bf96-7816cbc82cb5}} ) Pixel: to regress the RGB colors as in {{cite:c7b7bbf1f6a62e9be7ce5e7ea4aacdfac88d0b00}}; and ({{formula:f60b7355-95b5-457d-bafb-54f9e55905e6}} ) HOG: to regress the HOG values, following {{cite:c7b7bbf1f6a62e9be7ce5e7ea4aacdfac88d0b00}}. Results show that only MVM with HOG achieves a marginal performance improvement of +0.3 on average. Therefore, we adopt a simple recipe for all other pre-training experiments in the paper, that is, with only MLM and VTM. {{table:90c33ec2-7b45-489f-812b-de57ea8beb1b}}{{figure:59a4ac4b-0b96-4c82-b75c-889b15a2ea09}}
r
20c4541dc6b4ea0ddf146a0e954cab59
Generally, the matrix {{formula:67dbf481-258d-4d54-ae3f-2bd96b779470}} need not be invertible (in particular, {{formula:24784fc2-e1c6-4981-975f-b82156cf7191}} is not invertible; see {{cite:cdc34bd46f351564bec7041a1071084d67cddac3}}). {{cite:7d516bfbcfb55aeb70999ce0f051011e0533df0f}} proposed using as a kernel the Moore-Penrose pseudoinverse of {{formula:a78c14a4-afe4-41fb-ac4f-304e58965304}} , which generalizes the matrix inverse to matrices of less than full rank, with applications to collaborative recommendation. The approach and the application domain of {{cite:7d516bfbcfb55aeb70999ce0f051011e0533df0f}} are similar to those of {{cite:cdc34bd46f351564bec7041a1071084d67cddac3}}.
m
384415c1398580a7171fbebadbc9669c
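A small illustration of the pseudoinverse-as-kernel idea described above, using a graph Laplacian (which is always singular) as the example matrix; the specific graph is ours, not from the cited works.

```python
import numpy as np

# Adjacency of a path graph on 4 nodes: 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # Laplacian: rank-deficient (rows sum to 0)
K = np.linalg.pinv(L)            # Moore-Penrose pseudoinverse is defined anyway
# K is symmetric positive semi-definite, hence usable as a kernel for
# measuring node similarity, e.g. in collaborative recommendation.
```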
Baselines. We have seven baselines: (1) Noise-unaware search: the SubCircuits are searched with noise-free simulation; no noise information is involved. (2) Random generation: with the same gate set, we generate random circuits and constrain their #parameters to match the QuantumNAS-searched circuit for a fair comparison. We generate three different circuits and report the best. (3) Human design: we also ensure the same #parameters. For the U3+CU3, RXYZ+U1+CU3 and IBMQ Basis spaces, the human design has full width in the first several blocks. For the ZZ+RY, RXYZ, and ZX+XX spaces, we stack multiple blocks introduced in the original paper. (4) Human design+noise-adaptive mapping: the circuit has the same #parameters as QuantumNAS; the qubit mapping is optimized with a state-of-the-art technique {{cite:e2834977ad45f3bf053b3ed3bf29736b8ee0e306}}. (5) Human design+Sabre mapping: the circuit has the same #parameters as QuantumNAS; the qubit mapping is optimized with Sabre {{cite:42d13dc62b0d179b53f9c253e6e9d73e5c63025c}}. (6) Human design (1/2 #Param)+Sabre mapping: similar to (5) with half the #parameters. (7) For VQE, we have an additional UCCSD {{cite:da1cfafc14f2e5069a1c351be595d7e5980ef158}} baseline. For UCCSD of CH{{formula:9f6a5c1d-6d4e-472d-b7d8-ea836dbde676}} -10Q and BeH{{formula:5e5947c4-cc96-4df0-83b9-868d06a9fcc0}} , the original circuit cannot be successfully run on IBMQ machines because it has too many gates ({{formula:73bb94f1-1b6c-4729-a80c-98472995dc9e}} 10,000), so we only take the first 1,000 gates.
m
c273520442fc719fcb0fd3f82e6705e9
Salakhutdinov et al. {{cite:6872736bc6cfc8a327ed200c2b4cb339cde8e96d}} proposed BPMF based on MCMC and the rating matrix, which we discuss in detail in the experiments. Lim et al. proposed a Bayesian approach to alleviate overfitting in SVD, where priors are introduced and all parameters are integrated out using VI {{cite:96d22a70ca4bdc3c96462274e13c01254d5b7afb}}. For parameter estimation of low-rank matrix decomposition, this work demonstrated that the Bayesian approach is more robust against overfitting than EM and MAP. It also showed that the effectiveness of VI is due not only to the priors introduced in the probabilistic model, but also to the expectation of the rating. Like BPMF, it is trained on a rating matrix with extreme data sparsity; this differs from our work. Harvey et al. presented a Bayesian latent variable model for rating prediction that models ratings over each user's latent interests and each item's latent topics {{cite:57f2eeebd159017c166b49ba3bfba0f34f16ff01}}. This work constructed a probabilistic topic model assuming the user-interest and item-topic variables are drawn from multinomial distributions, which differs from probabilistic factor models that map users and items to a joint latent space. This work used Gibbs sampling {{cite:483050ce829a840843ea62daf5f0de748eb1986a}}, an MCMC method, to estimate its parameters and showed that it is competitive with gradient-descent SVD methods. In effect, the model used the ability of Latent Dirichlet Allocation (LDA) {{cite:765279859ca7c3c689764b66508a3c476203127f}} to extract latent factors from sparse data. By incorporating user-dependent and item-dependent biases into the model, this work made an extension to enhance rating prediction performance. We concede that the ideas of performing rating prediction using a Bayesian method and including user and item biases are motivated by this work. However, the probabilistic model constructed and the Bayesian inference method applied are different.
d
1caad5ae15b4268c07d25be42adeea0f
The previously fastest algorithm maintains the matrix inverse {{formula:a73e9e60-b85e-4108-85d0-2be91355728d}} using subspace-embedding techniques {{cite:664403739ac7fb8a42277a63e95ddb4f01c65678}}, {{cite:88955101cee61169acebf0121b57691421683187}}, {{cite:1dbe4bdf83b803bd2985b3b07d9f978f097f5596}} and leverage-score sampling {{cite:acb47c34e08a06c142ccd97696845451bf54863f}}. In this paper, we instead maintain the projection matrix directly.
m
46f9d926a0a6ed50dcad7e6065cb335a
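As context for the contrast drawn above, a minimal numpy sketch of the classical inverse-maintenance primitive: a rank-one change to the maintained matrix is absorbed into its inverse via the Sherman–Morrison formula (the subspace-embedding and leverage-score machinery of the cited algorithms is omitted entirely):

import numpy as np

def sherman_morrison(M_inv, u, v):
    """Return (M + u v^T)^{-1} given M^{-1}, without refactorizing M."""
    Mu = M_inv @ u
    vM = v @ M_inv
    return M_inv - np.outer(Mu, vM) / (1.0 + v @ Mu)

# Toy check of the update against a freshly computed inverse.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # keep M well-conditioned
u, v = rng.standard_normal(5), rng.standard_normal(5)
assert np.allclose(sherman_morrison(np.linalg.inv(M), u, v),
                   np.linalg.inv(M + np.outer(u, v)))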
Despite the above remark, models {{formula:0c7c9ffe-58ad-49f9-9f3e-60eac558a2ad}} 2 and {{formula:8f09fab7-eb1c-48d4-8eb6-5b00f778e2a9}} 2 fit the data much better than models {{formula:b1abf016-8142-4006-8487-172ee622aead}} 1 and {{formula:b8861804-9cc1-45f0-90f1-1b16ad643cc5}} 1, especially for late-type galaxies. In merger model 2, the {{formula:a5656374-fcb0-4d04-8c52-b8611cd8dc16}} –{{formula:ad1126f8-4fa2-41e6-9c52-3a8a4f0bf1cd}} relation for galaxies with {{formula:75004d06-17de-4623-ae19-2da72bc443eb}} overlaps with the observations of {{cite:c7c2e59480dab9dd33386fdf91fc793ce760ec65}}, while merger model 1 is completely off.
r
fbc215658fab58973134f50704b97bd2
How does the performance of a semi-supervised distillation method compare to CheXseg and fully-supervised methods? We find that the implemented distillation method performs 6.65% worse (mIoU) than CheXseg. That said, we observe that self-distillation can improve learning performance for each fully-supervised model setup (see the sketch after this entry). Previous work shows similar results for natural images {{cite:b98b44417ca6b288c1d2ff0fa018635e8f44e487}}, {{cite:3e609c6a2f073694406c0fc9ca27943e2d3fade8}}. Future work may investigate introducing different sources of noise during training, such as data augmentation, stochastic depth, and dropout, which may further improve the transfer from teacher to student.
d
1c1ddb0c00bf2f339c66e128d5813026
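A minimal PyTorch sketch of one self-distillation update as discussed above, assuming per-pixel logits as in segmentation; the temperature T, mixing weight alpha, and loss form are illustrative choices, not the exact recipe evaluated in the paper:

import torch
import torch.nn.functional as F

def self_distillation_step(student, teacher, x, y, optimizer, T=2.0, alpha=0.5):
    """One update mixing hard labels with softened targets from a frozen teacher."""
    with torch.no_grad():
        t_logits = teacher(x)                       # (N, C, H, W)
    s_logits = student(x)
    hard = F.cross_entropy(s_logits, y)             # y: (N, H, W) class indices
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    loss = alpha * hard + (1.0 - alpha) * soft
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Noise sources such as dropout or augmentation would be injected only on the student's forward pass, keeping the teacher's targets clean.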
Random: All network weights are randomly initialized and no pre-training is used. The ViT backbone initialization follows the code of {{cite:1cf8c6642c79d0b7f60a520968c86aae30d23096}} and the Mask R-CNN initialization uses the defaults in Detectron2 {{cite:6bcfcb2fdcaf2109af9ae728751e7b500f8a2968}}.
m
017afef449ecea3f903c30f27d4a1a10
Training details. We used Faster R-CNN {{cite:b5a7609b7f6f7856137f66233ef12e0b56eeafdd}} and Cascade R-CNN {{cite:8bc9a6b669458460d8eacd0523efe527a05cbc31}} as anchor-based detectors and FCOS {{cite:06fae47dd06c5af0122bfdd0e7df00a9d0a70a8b}} as an anchor-free detector to compare performance on the Epic-Kitchens object detection dataset {{cite:9b2ef4dac7c8a230bce743a4eca4e20d01500a1a}}. As backbone CNNs for training the detectors, we used ResNet-50, ResNet-101, ResNeXt-101, and HRNet-V2p-W32, each pretrained on ImageNet; the training details for each combination of backbone and head structure are shown in Table 1 (an illustrative config sketch follows this entry). All experiments were conducted using the MMDetection library {{cite:e3ca0c4f3a22b91e67dc924da1d21f4801fb136a}}. {{table:6d73d94a-62ec-485f-bb66-732a15c5cfbf}}
r
08546ae3453d735f46b26110fa660958
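For readers unfamiliar with MMDetection, one backbone/head combination is specified by a Python config roughly like the sketch below; the base-file paths, schedule, and class count are generic placeholders (the settings actually used are those in Table 1):

# Hypothetical MMDetection-style (v2.x) config; all values illustrative.
_base_ = ['../_base_/models/faster_rcnn_r50_fpn.py',
          '../_base_/datasets/coco_detection.py']

model = dict(
    backbone=dict(type='ResNet', depth=50),
    roi_head=dict(bbox_head=dict(num_classes=80)))  # set to the dataset's count

optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
lr_config = dict(policy='step', step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)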
where {{formula:c520a19a-5156-457c-8a71-86d985f8b30c}} denotes the conjugate transpose. By standard arguments, one can prove that there exists a unique sequence {{formula:d3f424b4-57ed-468c-98aa-3a605674da19}} of monic matrix-valued orthogonal polynomials (see, for instance, {{cite:ca072a1ba66a4d9473d5ca26552ff714ea77d5ee}}, {{cite:23042654c54edce3651773b5cdfaa4b5fa4a681b}}), i.e. {{formula:6d4a3dbd-526c-4f5b-a76d-494129504a80}} is monic, of degree {{formula:2c0158c2-a811-4e1a-b046-0502d306f768}}, and {{formula:435dee67-2097-4913-b334-d6cb91eaccfd}}
i
d957661de83baafb911f9b27aa714b1e
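To fix ideas, the standard defining relations behind the statement above, written here in generic notation (the paper's weight and normalization may differ):

% Monic matrix-valued orthogonal polynomials w.r.t. a matrix weight W:
P_n(x) = x^n I + \text{(lower-order terms)}, \qquad
\langle P, Q \rangle := \int P(x)\, W(x)\, Q(x)^{*}\, dx, \qquad
\langle P_n, P_m \rangle = \Delta_n\, \delta_{n,m}, \quad \Delta_n \succ 0.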
We first note that some adaptations of the lemmas from the global convergence proof of the Adaptive Regularisation algorithm using Cubics (ARC method) are used to prove Theorem REF; see {{cite:4ef2aae9afb38076fe92f4c341559d82cb70acb2}} and {{cite:e0cb14a3d0fe3260f50116e9bc54d3aae5035fd0}}. We begin the proof by deriving an expression for the predicted model decrease in terms of the gradient. We require an upper bound on {{formula:2b311173-7bfd-41f2-8440-3b1f1887cf83}}, denoted {{formula:35ee5dcf-00e2-4c37-acbc-7275dea42f3f}}, which is derived using a property of Lipschitz-continuous gradients. We show that {{formula:24f8d931-f3e9-4958-9d05-2c244155331f}} for all {{formula:815d133a-45e9-4213-8496-514d1afdcfea}} by first showing that if {{formula:ae61be8a-190e-4d78-8b0d-671c73df2d8b}} is large enough, then the step is successful, so that {{formula:9fed651a-e4ad-4559-8541-4b988346b7bb}} cannot keep increasing through unsuccessful steps in Algorithm REF. Finally, we use the expression for {{formula:aee955e4-e65a-4441-aaff-4f42e2132300}} to prove global convergence of the REG method under the stated assumptions, by showing that the gradient norms converge to zero as we iterate.
m
281dd7505ebaf326f9db4ce803a2e964
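For reference, the cubic-regularised local model that underlies ARC-type arguments has the standard form below (a sketch; the REG method's exact model, ratio test, and constants are as defined in the paper):

% ARC model at iterate x_k with regularisation weight sigma_k, and the
% success ratio used to accept or reject the trial step s_k:
m_k(s) = f(x_k) + \nabla f(x_k)^{\top} s
       + \tfrac{1}{2}\, s^{\top} B_k\, s + \tfrac{\sigma_k}{3}\,\|s\|^{3},
\qquad
\rho_k = \frac{f(x_k) - f(x_k + s_k)}{f(x_k) - m_k(s_k)}.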
We now record estimates due to Lang and Weil {{cite:54cd22cae01a8236e244ad6fe9c57d066369c2c8}} on the number of points on varieties over finite fields. The following is a well-known consequence of Theorem 1 of {{cite:54cd22cae01a8236e244ad6fe9c57d066369c2c8}} (see, for example, {{cite:163861d2e529ae3b5ea4b2593dfa1508aef754eb}}), but we give the short proof for completeness.
r
61d4de2723c3fc5aefcbdb36f29872c5
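For orientation, the estimate being invoked can be stated informally as follows, with the implied constant depending only on the ambient dimension N, the dimension n, and the degree d of the (geometrically irreducible) variety:

% Lang–Weil point count over the finite field F_q:
\#V(\mathbb{F}_q) \;=\; q^{\,n} \;+\; O_{N,\,n,\,d}\!\bigl(q^{\,n-\frac{1}{2}}\bigr).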