Adiabatic quantum algorithms (AQAs), which also belong to the class of quantum computing approaches that employ quantum hardware described by time-dependent Hamiltonians, preceded VQAs {{cite:24f8062d90e98c3de85d24bd77cb2d876d0b871c}}, {{cite:b702388ac0f3c8451251526276d73e18116ba34f}}. AQAs exploit the fact that a quantum system remains in its ground state if the evolution is made sufficiently slow. In this approach, the system evolves adiabatically from the driving Hamiltonian to the problem Hamiltonian, whose ground state encodes the solution of the computational task. However, the time required to keep the adiabatic condition valid can be too long to be useful in practical situations. Several alternatives have been proposed to circumvent this problem, such as local adiabatic evolution or the use of counter-diabatic drivings {{cite:ec7b2f2153c82a47cc8680c8bd5ae39103e3bf12}}, {{cite:ab530773000ccebff338435fefbdd2de7f1bcb2e}}. Related to AQA, the method of quantum annealing (QA) emerged as a quantum version of the classical optimization technique of simulated annealing {{cite:d3411106f81012eea028425be54b30012bf8faad}}, {{cite:c8282c4ecfc1011ac73e780b46ba1733d4462f32}}, {{cite:386a54098a092f2b7f4fbf12dcf19c61eef10663}}, {{cite:64dbf51e7115ab159cac46fd3202168a239893a5}}. As in AQA, the connection of the initial driving Hamiltonian to the problem Hamiltonian in QA is carried out through a smooth continuous function of time, but allowing for faster, nonadiabatic dynamics.
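Schematically, both approaches interpolate between the two Hamiltonians along a schedule; the following standard form is given for orientation only (the cited works may use different conventions):

```latex
% Interpolation between driving and problem Hamiltonians along a
% schedule s(t) that increases from 0 to 1 over the total runtime T.
% In AQA, T is large enough for adiabaticity; in QA it may be shorter.
H(t) = \big(1 - s(t)\big)\, H_{\mathrm{driving}} \;+\; s(t)\, H_{\mathrm{problem}},
\qquad s(0) = 0, \quad s(T) = 1 .
```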
In {{cite:b3d9fe8b86f64eedf5717be23ce26fb5c1bd76af}}, An et al. (team “SJTU EIEE 2-426Lab”) proposed a framework based on the sequential application of three different U-Nets.
The first U-Net coarsely segments the tumor in order to select a bounding box.
Then, the second network performs a finer segmentation on the smaller bounding box.
Finally, the last network takes as input the concatenation of PET, CT and the previous segmentation to refine the predictions.
They trained the three networks with different objectives.
The first one was trained to optimize the recall rate, and the two subsequent ones were trained to optimize the Dice score.
All objectives were implemented with the F-loss, which includes a hyper-parameter that balances recall against the Dice score. The final prediction was obtained through majority voting over three different predictions: an ensemble of five nnU-Nets {{cite:244b43bbeca8f5f55600648d3f4484905486c0e8}} (trained on five different folds), an ensemble of three U-Nets with Squeeze-and-Excitation (SE) normalization {{cite:96b8e37185ba0727e8c41829dbbdffe641b0d806}}, and the predictions made by the proposed model.
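As a hedged illustration of the loss family described above (the exact F-loss formulation used by the authors may differ in details), a soft F-beta loss reduces to the Dice loss for beta = 1 and increasingly favours recall for beta > 1:

```python
import numpy as np

def f_loss(pred, target, beta=1.0, eps=1e-7):
    """Soft F-beta loss on probability maps.

    beta = 1 recovers the (soft) Dice loss; beta > 1 weighs recall
    more heavily, matching the recall-oriented first network.
    pred   : float array in [0, 1], predicted foreground probabilities
    target : {0, 1} array of the same shape, ground-truth mask
    """
    tp = np.sum(pred * target)          # soft true positives
    fp = np.sum(pred * (1.0 - target))  # soft false positives
    fn = np.sum((1.0 - pred) * target)  # soft false negatives
    b2 = beta ** 2
    f_beta = ((1.0 + b2) * tp + eps) / ((1.0 + b2) * tp + b2 * fn + fp + eps)
    return 1.0 - f_beta
```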
We go beyond the so-called scaling regime by computing the time evolution of the string network parameters (long-string mean velocity and correlation length), and thus the loop-production efficiency, during modifications of the equation of state of the universe; see the right panel of Fig. REF . Including these transient effects results in a turning-point frequency smaller by {{formula:dd5ef548-4d5f-4d82-9fbc-21957f68e7d5}} (20) compared to the prediction from the scaling regime. The turning-point frequency can even be smaller by {{formula:86dcaf8a-3db7-43f4-8161-38a6c5144dc5}} (400) if, in the far future, a precision of the order of {{formula:e88b01e3-b62b-408c-8c37-237c2e78a26a}} can be reached in the measurement of the SGWB, cf. Eq. (REF ). As a result, the energy scale of the universe associated with the departure from the standard radiation era that can be probed is correspondingly larger than the one predicted from scaling networks, see e.g. Fig. REF .
We investigate a large variety of non-standard cosmologies, in particular models where a non-standard era can be rather short inside the radiation era, due for instance to some cold particle temporarily dominating the energy density (short matter era, see Fig. REF ) {{cite:d51c44f1f0af815766f7757008494446fcf22d99}} or some very short stage of inflation (for a couple of e-folds) due to a high-scale supercooled confinement phase transition, see Fig. REF .
Such inflationary stages occurring at scales up to {{formula:763228b1-9501-4a5b-b739-31697994b027}} GeV could be probed, see Fig. REF . Even 1 or 2 e-folds could lead to observable features, see Fig. REF .
We also consider longer, low-scale inflation models. For instance, for an intermediate inflationary era lasting {{formula:96a72a60-53fb-4825-953b-126ce4528fda}} e-folds, the SGWB from cosmic strings completely loses its scale-invariant shape and instead develops a peak structure, see Fig. REF . A TeV-scale inflation era can lead to a broad peak either in the LISA or BBO band, or even close to the SKA band, depending on the precise value of the string tension {{formula:cd755bf7-8547-43c3-b269-ec8e1727633f}} and the number of e-folds {{formula:94042d53-8712-423a-9b38-9a096e76841e}} .
We include high-frequency cutoff effects from particle production, which can limit observations for small values of the string tension {{formula:a2cc50c2-70f2-44f4-8132-26ffa4d44f5c}} , and the high-frequency cutoff from thermal friction, see Fig. REF and the top left panel of Fig. REF , as well as the low-frequency cutoff from unstable CS networks, see Fig. REF .
We provide the relations between the observed frequency of a given spectral feature and
the energy scale of the universe for different physical effects, see Fig. REF :
i) the end of a non-standard matter or kination era; ii) the time when particle emission starts to dominate; iii) the time at which the CS network re-enters the horizon after an intermediate inflation era.
We discuss how to read information about the small-scale structure of CS from the high-frequency tail of the GW spectrum, see App. REF .
We discuss the comparison between local and global string networks, see App. .
More recently, there has been significant interest in cross-domain text-to-SQL modeling innovation {{cite:22ea062f873e71b4456fc416d8d78b50a27c952d}}, {{cite:808988db65d43e2fd08fc701d331a9e9e9576232}}, {{cite:953235470e9554c830432c46446220c304cda611}}, {{cite:09d34f18f2021010dbe1f3754e09bbe6f64773b7}}, {{cite:a6b2d91f1d64d4d10e4fcbd913dfebb38430d6a8}}, {{cite:0edfdcb4946b94b466b6f97ccfb53fef53831745}}, and different benchmarks are proposed, such as WikiSQL {{cite:e7f3ffc97d81b62a73746e60ef8b09b1c68edfe5}} and Spider {{cite:27c35fe08f1a04d8d1bed6d378ecef722c6c3c7a}}.
Especially in the Spider dataset, some of the SQL queries are nested, making them more complicated than those in WikiSQL.
Furthermore, the fact that the training and test sets have no overlapping databases makes the task zero-shot.
Therefore, linking entities in the question to the correct schema (columns and tables) of the unseen databases becomes a major problem in the cross-domain text-to-SQL parsing task.
Another tricky problem is that some questions contain database-dependent values in different domains, which requires the model to extract them and fill them into the corresponding positions to compose an executable SQL query. For example, answering “Which singers are older than 30?” requires copying the literal 30 from the question into a predicate such as age > 30. However, this value-filling problem is ignored by much previous work on cross-domain text-to-SQL parsing, making those parsers impractical.
{{figure:92e57992-184c-4e40-81d5-1f452cb00b63}}
We assume that labeled clean data ({{formula:99674db4-8dc9-41af-a148-f42f9060975d}} ) from the source domain ({{formula:7f36d7c3-7581-452f-bd1c-2c62956645b9}} )
and unlabeled weather-affected data from the target domain ({{formula:f2133c4f-2e2a-4007-ae80-22febc62dfa6}} ) are available. Here, {{formula:87f0f27e-80f7-41ef-805a-6d876016639f}} refers to all bounding box annotations and the respective category labels for the corresponding clean image {{formula:8460ab5c-fd0e-4af7-8e85-f65a2ceb3774}} , {{formula:c91024f5-3a4b-4deb-9204-0eaad0711a5f}} refers to the weather-affected image, {{formula:ee81e31b-8f32-4cae-88fc-7ec8b9c1c298}} is the total number of samples in the source domain ({{formula:84645cb9-5dcb-4943-b4eb-52a6b9d23d21}} ) and {{formula:0bca7bd5-cf21-4c1e-9d4f-bf9dd39b3283}} is the total number of samples in the target domain ({{formula:2a159133-5b7d-4394-b051-8c23e9893d38}} ). Our goal is to utilize the available information in both the source and target domains to learn a network that lessens the effect of weather-based conditions on the detector. The proposed method contains three network modules – a detection network, a prior estimation network (PEN), and a residual feature recovery block (RFRB). Fig. REF gives an overview of the proposed model. During source training, a source image (clean image) is passed to the detection network and the weights are learned by minimizing the detection loss, as shown in Fig. REF with the source pipeline.
For target training, a target image (weather-affected image) is forwarded through the network as shown in Fig. REF by the target pipeline.
As discussed earlier, weather-based degradations cause distortions in the feature space for the target images.
In an attempt to de-distort these features, we introduce a set of residual feature recovery blocks in the target pipeline as shown in Fig. REF .
This module is inspired by the residual transfer framework proposed in {{cite:adea2ac48be9e8a6d89a4de41825fa0932ee2faf}} and is used to model the residual features.
The proposed PEN aids the detection network in adapting to the target domain by providing feedback through adversarial training using the proposed prior adversarial loss.
In the following subsections, we briefly review the backbone network, followed by a discussion on the proposed prior-adversarial loss and residual feature recovery blocks.
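A hypothetical sketch of such a recovery block is given below; the layer sizes, placement, and naming are our assumptions for illustration, not the design specified in the paper. The block predicts an additive residual that corrects weather-distorted backbone features in the target pipeline:

```python
import torch.nn as nn

class ResidualFeatureRecoveryBlock(nn.Module):
    """Illustrative RFRB sketch: learns a residual correction that is
    added to the backbone features of weather-affected (target) images."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, channels, kernel_size=3, padding=1),
        )

    def forward(self, feat):
        # de-distortion is modeled additively: feat + residual(feat)
        return feat + self.body(feat)
```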
{{figure:438923de-e494-46f4-8676-9e7383f7a177}}
Although the experiments show promising results, IGSG has some limitations. Mainly, false or failed predictions from the Faster-RCNN object detector and the scene graph generator result in a corrupted image scene graph. This prevents IGSG from achieving accurate grounding and from asking relevant disambiguating questions. With an "ideal" image scene graph generated for the images used for evaluation, IGSG can achieve an average of 0.68 interactions and a success rate of 91.17 percent. Second, the robustness of the language scene graph parser is limited due to its rule-based approach. Since {{cite:89150e0449ba7b0c3f9012fd84386290d8ae4eac}} was first proposed, there have been recent advances in natural language processing with the appearance of LSTMs {{cite:e65374f7d5b8ae5777976a7d2cc73b285d378b04}} and transformers {{cite:e8a134598fdf55aac4bb7609b639883d89474860}}. We believe a data-driven, learning-based approach that incorporates these recent advances in natural language processing can further increase the robustness and accuracy of the language scene graph parser.
The galaxy KK242 also deserves a deeper look as one of the few known dTr objects in a void environment. The great majority of the 30 known dTrs within a distance of 5 Mpc {{cite:b4098d984137cb9cbfd87905343f2f4056add709}} are found in more typical environments such as the Local Group and similar nearby groups. Such a connection may be related to the origin of this type of dwarf. Only two of these 30 dTrs, UGC1703 and KK258, reside within the nearby voids described in {{cite:c0c0aa038caf7a36dba2bfbd0d0571d1f63c632d}}. Two more dTrs, KKs03 and DDO210, are well isolated, although the latter is situated close to the border of the Local Group.
Comparison with competing methods.
To demonstrate the superiority of our proposed multi-label classification optimization objective, we conducted comparative experiments against several alternative methods, all based on ResNet50 {{cite:bd02b3ee7195a01ab48595b9ea7b7f66beed758b}}.
We consider several methods including:
The first process is the formation of primary precursors at the vacuum-medium interface. It depends only on the highest frequency {{formula:0266dec4-589b-4b29-a9d0-60bb280db1a5}} (REF ), which translates into the finest time interval {{formula:2db4b507-6ce3-4378-a9f5-28e81a981c78}} and the shortest distance {{formula:9ddd41f8-2b44-48af-9612-e8b7e38d3d85}} . Its sole parameter is the density of all electrons in the medium. The electric field of primary precursors can be found analytically, since the light electrons begin to radiate and develop collective behavior, forming an index of refraction, almost immediately. Therefore, this stage falls entirely under the jurisdiction of the rigorous theory of dispersion {{cite:5fac7164fc004d4b08e370b24492044266fadb04}}, {{cite:19e1fb7825ca3018e870e9e1273d91bb9dfbff78}}. Our calculations show that, in the vicinity of the interface ({{formula:566b9c4b-df6e-46b0-9839-e1aee4e8f456}} ), the electric field near the leading front oscillates smoothly. But the deeper the front penetrates into the medium, the sharper the first oscillations become. Regardless of how deeply the pulse penetrates the medium, its amplitude at the leading front stays the same as the amplitude of the incident signal.
The single organ segmentation has been the dominant task for decades, where numerous solutions have been developed {{cite:81cada71ba9b7f5e3b01d1018d519f9fb49dcd27}}, {{cite:b13188a2342a5bed890df85f44e7df599f888a02}}, {{cite:bdb690da17a1be63921a68dcbf7c67a05acb4bd6}}, {{cite:8a30659ce19af5696a7c825086b53cf8bd45ea0c}}, {{cite:9cbb627bf6c32844ae2835539b90d98fc26f1e2d}}, {{cite:542e150ffae58ee7fe327578ad6949a9ede71410}}, {{cite:1d09fd0a7a1f1536bd27a38bf2ca1136e2793519}}.
For example, to precisely segment the liver and tumors, H-DenseUNet {{cite:81cada71ba9b7f5e3b01d1018d519f9fb49dcd27}} designs a hybrid 2D/3D network for better feature extraction.
By proposing a self-configuring framework based on the naive UNet {{cite:22e60c30e138c712115680ff5d28bd9c0e9009cd}}, nnUNet {{cite:b13188a2342a5bed890df85f44e7df599f888a02}} achieves superior performance on segmenting the liver, spleen, kidney, and pancreas;
the approach can be easily adapted to multi-organ segmentation tasks.
Besides, to resolve the challenges posed by small target organs, a series of works {{cite:bdb690da17a1be63921a68dcbf7c67a05acb4bd6}}, {{cite:8a30659ce19af5696a7c825086b53cf8bd45ea0c}}, {{cite:9cbb627bf6c32844ae2835539b90d98fc26f1e2d}} adopt cascaded structures, where the networks are designed in a coarse-to-fine manner.
More recent works {{cite:542e150ffae58ee7fe327578ad6949a9ede71410}}, {{cite:1d09fd0a7a1f1536bd27a38bf2ca1136e2793519}} replace manual design with neural architecture search for optimal segmentation architectures, achieving better performance with fewer parameters.
We train the STDP block independently, as a separate neural network, as shown in Fig. REF , on the ImageNet traffic dataset. The ImageNet traffic dataset is a subset of the ImageNet dataset consisting only of traffic signs, cars, and pedestrians. Since STDP is an unsupervised learning method, it need not be trained on the full ImageNet dataset. This is motivated by the observations of Kheradpisheh et al. {{cite:c3306606b563a3e50cfe1dbf82a17222c69f2eea}}.
We use these pre-trained STDP layers as a supplement to the ResNet backbone as described above.
With these STDP layers frozen in place, we apply the DNN-to-SNN conversion process described by Miquel et al. {{cite:a80a2a4ed8e74bcf486b2b26f3b4572fcac1f72d}}.
For the DNN-to-SNN conversion, we use a rate-encoding of the activations. For the conversion process, we assume the firing rate of the spiking neurons over a certain time window approximates the activations of the original analog neurons {{cite:51d901e1ed986a3c9683cfca769fe821d2cb6f50}} and there is a one-to-one correspondence between the analog and spiking neurons. The dynamics of the spiking neurons are modeled using the Leaky Integrate and Fire (LIF) model.
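As a minimal, illustrative sketch of the rate-coding premise (the constants and reset scheme below are assumptions, not the cited implementation): a LIF neuron driven by a constant input fires at a rate that grows with that input, so the spike count over the time window can stand in for the analog activation:

```python
def lif_firing_rate(inp, v_th=1.0, leak=0.95, t_sim=200):
    """Simulate a single LIF neuron with constant input current and
    return its firing rate over the time window, which the DNN-to-SNN
    conversion assumes approximates the analog neuron's activation."""
    v, spikes = 0.0, 0
    for _ in range(t_sim):
        v = leak * v + inp        # leaky integration of the input
        if v >= v_th:             # threshold crossing: emit a spike
            spikes += 1
            v -= v_th             # soft reset (subtract threshold)
    return spikes / t_sim

# Larger inputs yield higher firing rates, e.g.:
print([round(lif_firing_rate(x), 2) for x in (0.1, 0.2, 0.4)])
```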
After this conversion is done, we retrain the model with the Spike Time Dependent Backpropagation (STDB) method to fine-tune the weights and activations obtained. Table III shows the implementation details of the STDB training method for the spiking neural network.
We next consider this problem in {{formula:daeeceaf-eacb-4340-b729-591e500f7f19}} , which is the dual of the projective tensor product space {{formula:b402de4f-fba2-4eb8-9f81-318979d58782}} .
Since we are in a dual space, for {{formula:cf577f94-4801-4442-92d9-a7de862b38f4}} that attains its norm, we use the notations {{formula:609a1211-8608-4c16-b22c-872864074f38}} and {{formula:60d47bc0-b74e-4bfa-b6fe-14dfa67804cb}} for the state spaces in the predual and the dual. The hypothesis assumed in the following theorem is satisfied when {{formula:3b355021-d7d4-47be-ab6c-ecd170e7c8e5}} are reflexive spaces, see {{cite:5dbc815a343e96d43d316a6136e4139390e2b091}}, Chapter VIII. We recall that in any von Neumann algebra, the identity, or any unitary, is a point of norm-weak u.s.c. for the preduality map, as is {{formula:443d791a-f54c-4175-b865-b8cd9280a6c4}} for any dual space {{formula:90f7794c-c9bd-4b7d-8b04-194feac86879}} .
In this section, we compare AdapterBias to other parameter-efficient methods, including Adapters {{cite:a93a65010dc48e4f395f1ca53ad4ca2988e1da4d}}, Diff-pruning {{cite:828350db07348f58c35f174909dfa810a8605380}}, BitFit {{cite:d4e221c4c7530a238271d54c3e5111461d5c26ac}}, and LoRA {{cite:276a976473592b97e28757acda5b6844142e0249}}.
In Table REF , we report the test scores on the GLUE benchmark and the number of new parameters required per task. Here we use BERT-large as the PLM. AdapterBias reaches an average score of 81.2 on the GLUE benchmark with the smallest number of parameters (0.17M) added per task. AdapterBias shows competitive performance while its parameter count is 40{{formula:3cbccdab-1e37-4d76-b7ff-d00c09075693}} smaller than that of {{cite:a93a65010dc48e4f395f1ca53ad4ca2988e1da4d}}.
Although Diff-pruning {{cite:828350db07348f58c35f174909dfa810a8605380}} achieves the best average score among all parameter-efficient methods, their work trains an additional vector whose parameter count is equivalent to the parameters of the whole PLM. Thus, Diff-pruning requires 340M trainable parameters of BERT-large during the training stage, while AdapterBias only trains 0.17M parameters.
Furthermore, AdapterBias achieves performance comparable to BitFit and LoRA with fewer parameters needed per task. This shows that AdapterBias is a worthwhile targeted fine-tuning method.
In this section, we derive an efficient strategy for solving the formulated problem (REF ).
We first rewrite (REF ) and (REF ) by using the equivalence between the WMR and the WMMSE to reformulate the original problem (REF ) into a more tractable form {{cite:e45cf415bf7ea77e199c90093472f4ed7a1b4b03}}, and then optimize the resulting subproblems within the block coordinate descent (BCD) algorithmic framework.
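For context, the classical identity that underlies this rate-WMMSE equivalence is sketched below in generic notation; we state it as an assumption about the form that (REF ) takes in this work:

```latex
% For user k with mean-square error e_k, the achievable rate satisfies
%   log(1 + SINR_k) = max_{w_k > 0} ( log w_k - w_k e_k + 1 ),
% with the maximum attained at w_k = 1 / e_k. Substituting this into a
% weighted-rate objective turns WMR maximization into an equivalent
% weighted-MMSE minimization, which is amenable to BCD updates.
\log\bigl(1 + \mathrm{SINR}_k\bigr)
  \;=\; \max_{w_k > 0}\;\bigl(\log w_k - w_k e_k + 1\bigr),
\qquad w_k^{\star} = e_k^{-1}.
```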
The present work is devoted to the geometry of the Gromov–Hausdorff distance {{cite:976c8f485b09c59f9e707349cd2a1c9038948b10}}, {{cite:368505724145adbd44377da872ee54c69cb8ebda}}, {{cite:3dbe50cf8369cd0a6d6b63942243d003a764add9}}, {{cite:985595558c96e692219b1b3eca5d5993baae6d71}} defined on the collection of all non-empty metric spaces. It is well known that this distance is a generalized pseudometric vanishing on isometric spaces (“generalized” means that infinite distances may occur, and the prefix “pseudo-” means that the distance between different spaces may vanish). Traditionally, the distance is studied on the Gromov–Hausdorff space, in which all metric spaces are compact {{cite:985595558c96e692219b1b3eca5d5993baae6d71}} and considered up to isometry.
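For reference, the standard definition of this distance reads:

```latex
% Gromov--Hausdorff distance between metric spaces X and Y: the infimum
% of Hausdorff distances over all isometric embeddings phi, psi of X
% and Y into a common metric space Z.
d_{GH}(X, Y) \;=\; \inf_{Z,\;\varphi,\;\psi}\; d_{H}^{\,Z}\bigl(\varphi(X), \psi(Y)\bigr).
```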
Fig. 11 shows the symbol error rate (SER) against SNR. We use 16-QAM transmitted symbols.
The curve Random phase represents the SER performance, where the IRS phases are chosen randomly.
The curve Method {{cite:a7b113811125413ef0924ea33c3689bf622c5f97}} and designed phase represents the SER performance when the CSI is estimated by the method proposed in {{cite:a7b113811125413ef0924ea33c3689bf622c5f97}} and the phase is designed using the method of {{cite:c7eccf1edb94cbbc086270e7c01200aa66672462}}.
The curve Designed phase and designed pilots represents the SER performance, where the IRS phase is designed when the CSI is estimated by the proposed TS-OMP algorithm based on our designed pilots.
The curve Designed phase and random pilots represents the SER performance, where the IRS phase is designed when the CSI is estimated by the proposed TS-OMP algorithm based on random pilots.
The curve Designed phase and perfect CSI represents the SER performance, where the IRS phase is based on the perfect CSI.
Comparing the curves Random phase and Designed phase and perfect CSI shows that an appropriate design of the IRS phase substantially reduces the SER. Comparing the curves Designed phase and perfect CSI and Designed phase and designed pilots shows that the proposed TS-OMP algorithm based on our pilot design achieves an SER similar to that of the perfect-CSI scenario.
In Tab. REF , we present the Top-1 classification accuracy of our method and comparison methods. The results of comparison methods are quoted from {{cite:1061022b1c1342a25243106dbb1423ace1e9d869}}.
Teacher-student pairs of the same and different architecture styles are considered.
For pairs of the same architecture style, we use wide residual networks {{cite:b76275800ddebb200b738c51b36a06762ff86db6}} and residual networks {{cite:8058be420171da78232e31ff25828bcddf8bedce}}. For pairs of different architecture styles, residual network and ShuffleNet {{cite:fd3b9579a168c9b54c72bf0ee0c8842e50145013}} pairs are chosen for the experiments.
As shown in the table, for distillation with both same and different architecture styles, our method reaches new state-of-the-art results.
Specifically, our method outperforms standard KD by a large margin, which verifies its effectiveness.
To detect keypoints in blurred images, a straightforward way is to utilize a deblurring algorithm to restore the latent sharp image and then detect keypoints from the restored image. Image/video deblurring methods have been well developed over the last decades; they mainly consist of classic gradient-descent methods {{cite:b74cc07b1e825d9fe881c03f36d9aa3b46c34fc7}}, {{cite:446f320202f4ba1e7f9124fdf94064d4a7dbaf62}}, {{cite:253601b789f29d493578a12e8b1732d25dbb3981}}, {{cite:040136e58f0add00c303b6283b812513a69df331}}, {{cite:34dd1494a04332dd4f0bd49988491762e63a96ee}}, {{cite:ee5e8df0180368ab9f1a1b50a114ad5e602fabf9}}, {{cite:37798335333b0b4cc5ed291e892550fbbf299d07}}, {{cite:c87877a1ac061ccb001940ef7d8ac09d875278f8}}, {{cite:7a0e91ad426df38ee65693c48e00cc0916b94146}}, {{cite:3c1a6f513002c75bdfbe115df8302d966cd5435c}} and learning-based methods {{cite:595132e42c6842a921c4f01b59d6bc836095b971}}, {{cite:16a744b13509e0cd51cc8558cd80e301b2d4be0c}}, {{cite:4b17a1c9a4a60014d6e669072461d3d3bb2fd688}}, {{cite:e1fe44ea453fdbd97b27fe3ebb2cb8b627f9769d}}, {{cite:d39625aa2fe86bcfd668ebedaf45ea8a17decfce}}, {{cite:8b777d3665b724cb3ed2641ce5f6b9ee8c57f692}}, {{cite:2defb3f98f9052dee81ff920ad6793a4ec7edc8f}}, {{cite:ae19e918170cc1bf0ad64a3da09d9b8695c2ca89}}, {{cite:fd5f0c3b4b39ee7f6d59392c1f92321fbe23fd06}}, {{cite:810a5313bc68e00f60909d7524b15a16d18f45ee}}, {{cite:ff215f9dc83fdb25817b202167ef75280a34f3aa}}, {{cite:cb7b715084b829eee082d6cb9d60885de7d757bc}}, {{cite:d555ae5514bd252a917c80e542ef3bc5c51b5c8d}}. Although deblurring algorithms have achieved impressive performance recently, several limitations still exist. For example, existing state-of-the-art methods usually require high computational resources and can hardly run in real time even with a high-end GPU. Another limitation is that current methods still cannot perform very well on severely motion-blurred images and might introduce additional artifacts, due to the limited information preserved in a single blurred image. We thus aim to design a novel, efficient one-stage local feature detector that works on a motion-blurred image directly, without any intermediate deblurring operation, to avoid these drawbacks.
We have
{{formula:8b5c83d0-5be5-4e60-9483-e11e635e68d3}}
According to the monotonicity of {{formula:ab63fab4-0658-4332-99c1-9d7767b5986c}} we have
{{formula:6518b6f5-cd89-40f9-82b2-21c9cb82dde3}}
which implies
{{formula:7d1b183d-d456-4b34-8158-8c62481eadbb}}
After division by {{formula:c0846e59-3a44-4571-b4bc-aec42866d260}} we obtain
{{formula:241920b8-78e8-4a89-87d2-6f59814e8dda}}
We rely on the differentiability properties of the viscosity curve {{formula:62234199-627a-447f-acd1-ab905e3940d3}} .
According to {{cite:528f36376d8970e88c663d744a693d6341f186a7}}, {{cite:8309b003d4a8bdb5013085d6e7f710c10a40c5fe}}, {{cite:a7a81bccfbdba4ad47c61933cad0b0ed1f84f335}}, the viscosity curve is Lipschitz continuous on the compact intervals of {{formula:a500dcfa-5297-4d01-8472-46143c65e9e0}} . So it is absolutely continuous, and almost everywhere differentiable. Therefore, the mapping {{formula:e69df036-e14f-4288-a522-1557f596e901}} satisfies the same differentiability properties.
By letting {{formula:326a84ec-dac7-4a06-a4f5-dc50b5abe942}} in (REF ) we obtain that, for almost every {{formula:a57ab5ea-8b49-4d40-ae8e-2e05c174e60b}}
{{formula:b71c00f2-ecb8-4bd5-96cf-ba0fb8c7f7d1}}
which gives the claim. The last statement follows from the Cauchy-Schwarz inequality. {{formula:afd12975-13fe-4f06-a0b6-40a9966e6bd8}}
The steepest descent with vanishing Tikhonov regularization: Lyapunov analysis
In the Lyapunov analysis of the continuous steepest descent with vanishing Tikhonov regularization (i.e. {{formula:81a65981-4376-47f0-9acd-456bd02abd38}} as {{formula:5af1cb47-eef8-451a-b3ec-2db17d0b8b91}} ) that we recall below
{{formula:98d91b82-396f-47c1-9b81-9f76c64d5118}}
the following function plays a central role
{{formula:a3f10588-1653-4c94-93fb-26fb767f104e}}
together with
{{formula:0290839d-4fbf-4651-a863-146400045524}}
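For orientation, in the literature on Tikhonov-regularized steepest descent these objects typically take the following form; we state it in generic notation as an assumption about the displayed formulas above:

```latex
% Regularized objective, its unique minimizer (the viscosity curve),
% and the anchored energy commonly used as a Lyapunov function:
f_{\varepsilon}(x) := f(x) + \tfrac{\varepsilon}{2}\|x\|^{2},
\qquad x_{\varepsilon} := \operatorname*{arg\,min}_{x} f_{\varepsilon}(x),
\qquad E(t) := f_{\varepsilon(t)}\bigl(x(t)\bigr) - f_{\varepsilon(t)}\bigl(x_{\varepsilon(t)}\bigr).
% Classically, x_eps converges strongly to the minimum norm solution
% as eps -> 0, which is what drives the strong convergence results.
```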
Let us state our main convergence result.
Let {{formula:c22add5d-ddea-4bc1-a0a1-b4e10b663c02}} be a solution trajectory of the system (REF ). Define {{formula:2c1eb0d3-c72d-44cf-b4dc-69744aae7bb4}} and {{formula:a8112f0a-4e26-4ede-b064-c1edef3c1bb5}} respectively by (REF ) and (REF ).
Then, the following properties are satisfied: there exists {{formula:bcf06fc6-dff4-4abe-bad4-4af5f238af94}} such that for all {{formula:9501184b-2e89-46e9-a7e8-1b9fe6c2762c}}
{{formula:c9216331-5df9-4224-b1c7-1698378174fe}}
Suppose that one of the two following conditions is satisfied:
{{formula:f885e8bf-189b-4759-841a-8b3cc38c6088}} {{formula:6af25718-32b5-48af-9b1f-2507cd71c02e}}
or
{{formula:7c7357d7-65d6-43d9-90a6-b8af3c4022b3}} {{formula:460845c3-58c8-464b-9082-f9e5345d43d0}} .
Then {{formula:57a045cc-74fc-4b7e-882c-12ae9e02ec1e}} converges strongly to {{formula:c5310530-75fd-46db-add5-a5ac081afd12}} as {{formula:0af3905a-a566-46b4-ae7e-40b55415e62c}} .
Since the mapping {{formula:cd42c292-9bc9-45c1-a179-d39177a578b9}} is absolutely continuous (indeed locally Lipschitz), the classical derivation chain rule can be applied to compute the derivative of the function {{formula:59cb25d4-4738-4872-8735-7d1922cbe612}} , see {{cite:5ae97cfc3b269431f2ff3983e5af7ee3e6656201}}.
According to Lemma REF {{formula:8da3cba0-2ce2-4147-b052-55d620cff352}} , for almost all {{formula:4d78dc68-4b89-4ba5-9446-1df629dfceda}}
the derivative of {{formula:508f367c-de73-4e34-8dc0-34f351e44bf7}} is given by:
{{formula:d2d1f5c5-2da6-478b-a103-be2f843fbebe}}
According to (REF ) and the {{formula:9755f58d-f56f-4f32-b615-aa9de2e1eed1}} strong convexity of {{formula:83475dc9-8072-462b-b0ec-fc05eddc3ca4}} , we get
{{formula:3d7b182e-39d2-42d7-80a8-2e9e29f20028}}
According to Lemma REF (i) and Lemma REF (ii), we obtain
{{formula:107f90ce-f999-476f-b498-3037a17d9326}}
where {{formula:811cd14d-2237-4abe-b4ef-be9bfabba5f4}} is a continuous and positive real function to be chosen later.
Combining (REF ) with (REF ), we deduce that
{{formula:d0dd7356-95ed-452a-83ab-7301021b31ef}}
Let us now build the differential inequality satisfied by {{formula:48201441-6841-4d6c-9bef-77683f695a7d}} .
{{formula:aee18495-29d0-444e-ad89-247f096d40c8}}
Let us now choose {{formula:6b78d587-1b5d-465e-a519-93c60124bec0}} so as to make the coefficient of {{formula:e66b2cb7-3200-4eb4-b0ba-e369920fabab}} equal to zero, that is
{{formula:c59f49a4-4fbf-4ab1-93c0-159a5d577470}}
Replacing {{formula:52e79591-af2c-4aa2-9735-c5e03ac204fa}} by this expression in (REF ), we obtain
{{formula:37fa8e71-20ae-4d8b-8ecc-42c4dcda1b15}}
By taking {{formula:434f98b3-6ea0-4b2d-957f-6e4911e2a0ca}} , we conclude that
{{formula:7705a3d9-ef87-478c-a139-8cceea733b81}}
Therefore
{{formula:8096ad49-17ea-4309-a337-1d5f4b5dd6ff}}
By integrating (REF ) on {{formula:a403f316-1fc8-4eae-ab56-49877fe1ce06}} , and dividing by {{formula:60f38486-8dc1-45a9-b989-03f0c6189b9a}} we obtain our first claim (REF )
{{formula:9f43e01e-0289-4022-9ab3-78afda3901f6}}
Coming back to Lemma REF , and according to
(REF ), we get, for any {{formula:719f6a6b-f8e0-4883-9698-7ef891241b88}} ,
{{formula:044b0eda-2614-4c18-8f97-f903105df672}}
Similarly, according to Lemma REF and
(REF ), we get, for any {{formula:95ae3b96-3826-46c3-bdee-7cd96f67df28}} ,
{{formula:232fc7c2-9e27-4b8a-87db-f3a97b5c3e2a}}
By Lemma REF (ii), {{formula:1b831ccf-d261-464d-83bc-581e9d62ccee}} converges to zero.
Therefore, {{formula:317a7c25-dd4f-4ada-aac8-3199208906d6}} converges strongly to {{formula:f027e09f-4f31-4d83-a697-f845ca645155}} as soon as
{{formula:d0772ffe-4999-4c72-804a-cf08f1642ce9}} .
Finally, according to Cominetti-Peypouquet-Sorin {{cite:e35f9bce0cfa07ab4068e62e1911c81256f1d5b1}}, we have that {{formula:0d2225d0-b8bd-45b0-b77f-89558ca114e8}} converges strongly to the minimum norm solution {{formula:a47fad84-5ebb-4ed5-99ad-54c82b58fb24}} as soon as {{formula:56fe57dc-ab73-4a4f-a61a-2f5da4b37dc9}} .
{{formula:f88873e8-b689-422c-9d45-ca6ab6ed2474}}
Case {{formula:750b63f8-9616-440d-9421-a60493384f42}} , {{formula:59fcc976-029c-4d51-9429-c39404b12338}}
The convergence rate of the values and the strong convergence to the minimum norm solution will be obtained by particularizing Theorem REF to this situation.
Take {{formula:d3198efc-c71d-45f6-94ef-85bfc41f382f}} and {{formula:24dac7e8-4349-41c1-9c0f-0791de29c89f}} .
Let {{formula:9267990d-1a18-4d1e-a131-78025f054475}} be a solution trajectory of
{{formula:ebf67cb3-ad50-43a4-a7a8-14e373eed3c6}}
Then, we have
{{formula:eb093f15-0fde-42f3-8c7c-c4c061f39d9a}}
{{formula:82722554-9558-40a8-91ff-6ee9dcb29282}} The solution trajectory {{formula:41ad5a6e-7ea8-4392-9102-1c057c7d38c6}} converges strongly to the minimum norm solution {{formula:afd68bf0-94da-4384-b8a4-5aaf222e9e63}} .
a) Take {{formula:e33ef6a9-e281-401a-a90b-5ba30088721c}} with {{formula:39f80517-7270-40fc-b426-0f2a52b67f1c}} . We get
{{formula:a05ea403-3daf-4f36-9e63-8e490fca3df5}}
By Theorem REF , we get, for {{formula:5042fc01-cf0f-43c2-af5c-a9c8ab1b5d75}} large enough
{{formula:6c6b869a-ab3b-4ae0-b7f1-92edadfda48d}}
Since {{formula:cd6bf0c8-4a89-4939-a875-626ad0f13d76}} we deduce that
{{formula:a8f108b8-4b5d-494d-b910-a6e17b34bf18}}
According to Theorem REF we get
{{formula:057adfad-3839-4bf3-a723-f76359b7c91c}}
Since {{formula:8ad9072d-a008-4277-81de-6b2d1a3d8753}} we deduce that
{{formula:eff9f2be-9a2a-461d-9cfe-a78d8aed9f07}}
According to the above estimate and Lemma we immediately obtain the following convergence rate of the gradients towards zero
{{formula:4e2802ed-7265-4200-a643-d4b42ae521b9}}
Finally, since {{formula:52c7e40e-47fb-4aec-9a29-ceaa42436d45}} , according to Cominetti-Peypouquet-Sorin {{cite:e35f9bce0cfa07ab4068e62e1911c81256f1d5b1}}, we have that {{formula:a6533d95-50c5-40ba-9463-7a64e40fa795}} converges strongly to the minimum norm solution {{formula:63fbce24-90cf-4675-b31f-9fe37a0a7f6f}} .
{{formula:25ae64c0-c74e-4a69-9804-d3ec7f54b223}}
Case {{formula:454f7fcf-4aad-4a41-84f0-56a149de4b84}} , {{formula:e9c147eb-3cee-4c71-b72c-6b38327435b7}}
Take {{formula:26f3ca4c-7faf-4052-849c-c9485871ffdd}} , {{formula:dbd67c63-05f3-42de-acba-9d119de2d5e5}} , {{formula:48fb4bb5-6c4d-4ed3-a3f9-2a198dee28a3}} , and consider the system (REF ). The convergence rate of the values and the strong convergence to the minimum norm solution will be obtained by particularizing Theorem REF to this situation.
Take {{formula:3d3f2870-5943-43d0-9b0c-cd7b3d5315ae}} and {{formula:b3a4195a-cab0-4950-a319-d96320619c02}} .
Let {{formula:2c21fa3e-b55b-4faa-ae49-442fc9baea4b}} be a solution trajectory of
{{formula:af28f2fe-2988-4af1-92ec-c51425c15bac}}
Then, we have
{{formula:dfc2b298-0a92-4237-8a7f-24b608678892}}
a) Take {{formula:59ac6d8a-957e-4f14-bf07-83c6b0bc6711}} with {{formula:29aec4c4-4a16-4d80-80de-7e3dd7900d5f}} . We get
{{formula:9bb5f153-8213-4703-92f0-79b6e5173122}}
By Theorem REF , we get, for {{formula:296d6b9b-8928-4625-8387-d65782c0265d}} large enough
{{formula:82108ae4-b5de-4115-a749-eab9f226f5b6}}
To majorize this last expression, we notice that, given a positive parameter {{formula:ff704eb9-359a-4cd6-8d85-19b3b8f98d44}}
{{formula:abc2798e-45c1-4079-aa70-533ea482e467}}
as soon as {{formula:4f2fa9be-8401-40c2-bc17-ee1578d80551}} (for {{formula:bdb749c6-9f4c-4931-96de-556138fcae0f}} sufficiently large).
Therefore
{{formula:ecec3e63-db3a-4b0e-91db-7e9c0d4d6a43}}
Since the term {{formula:ee7b3541-3bf7-4144-9923-1b1ce72ea17f}} converges to zero exponentially, we get
{{formula:13b3c819-b8f9-443b-87b8-ecf60db0a185}}
Similarly, according to Theorem REF
{{formula:ca158b8b-1328-41d0-8059-acaa03f5dbba}}
Since the term {{formula:52980abb-c1f9-4516-a05a-41fa28a8c64e}} converges to zero exponentially, we get
{{formula:0b64201e-b65e-423a-a6af-685be61ab67e}}
According to the above estimate and Lemma we immediately obtain the following convergence rate of the gradients towards zero
{{formula:238223bd-d3ee-4ae9-92c2-75714da1ed64}}
Finally, according to Theorem REF and {{formula:16e53f6d-86b3-4e7e-bc22-e78159ae95cc}} , we get
{{formula:831984cc-2c07-48f0-bfc6-09ffec94143a}}
Since {{formula:6d24956b-d0b7-4d0e-b69a-9aff39b7c6f6}} we conclude that {{formula:b344182d-0a5f-4405-9f5f-674d5b277611}} converges to zero. According to lemma REF we have {{formula:bff8d829-13cb-4417-ab84-72f35d0757d2}} .
Therefore {{formula:3be82ace-e941-44ec-b4af-d98b44de8378}} converges strongly to {{formula:84597f22-da30-4e3e-bd19-b6a974b40cc5}} .
Indeed, this could be obtained directly as a consequence of Cominetti-Peypouquet-Sorin {{cite:e35f9bce0cfa07ab4068e62e1911c81256f1d5b1}}. Our Lyapunov analysis, however, also provides a convergence rate. This completes the proof. {{formula:f04c74e0-6604-4a27-92c4-6a580beccde8}}
Let us provide another convergence rate of the gradients towards zero, of interest when {{formula:354ce04c-3920-41fe-86b2-4b5845420fba}} . Let us observe that
{{formula:3afbc249-fd87-44b2-a0c3-9fc92ea19168}}
Passing from the first-order to the second-order differential equation
With a view to developing rapid optimization methods, our study focuses on the second-order time evolution system obtained by applying the "time scaling and averaging" method of Attouch, Bot and Nguyen {{cite:842d8ee7ba718f5b259d362eae5263b7f7119b9c}} to the dynamic (REF ).
In doing so, we hope to take advantage of both the fast convergence properties attached to the inertial gradient methods with vanishing damping coefficients and the strong convexity properties attached to Tikhonov's regularization.
So, let us make the change of time variable {{formula:9315f269-4980-4e86-a7d8-c9396f607140}} in the damped inertial dynamic
{{formula:81a846cc-2aa1-43b9-adb1-f9f21d165546}}
where we take {{formula:cb08a903-2133-4ee1-b558-5664473a6ab3}} as a state variable and {{formula:08ccbbb7-b3f6-4805-b874-9add3cfd57c8}} as a time variable, which will end up with {{formula:1cf9eebb-409e-4b7d-b764-6ebb299195c5}} and {{formula:27510997-f728-46a0-86aa-9962f008d1eb}} after time scaling and averaging. The time scale {{formula:c8d18ca4-4f42-4f48-89ad-1af5533d04ad}} is a {{formula:a19b3b31-2f2a-4fb5-bb6f-abe19e6449f8}} increasing function from {{formula:9e82ef40-70a5-4636-8034-ce499e984ec0}} to {{formula:763daad8-0773-4ce9-9335-aca5e9997149}} , which satisfies {{formula:02a69500-c7f2-4ba9-8a9f-f3ac910e532c}} . Setting {{formula:09712e01-8091-4d55-a0d5-dfec32b76e95}} and multiplying (REF ) by {{formula:87fae6e7-b3dc-4c7e-a335-f607cf72e265}} , we obtain
{{formula:82994aa5-a64b-41bd-aacd-d020c7db348f}}
Set {{formula:04592f99-6e8d-44c2-9f2f-5a7066dd42ca}} .
By the derivation chain rule, we have
{{formula:d32bdbb0-71c3-426b-9fcf-a5a8c0d14d67}} .
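For the reader's convenience, writing this in generic notation with v(s) = z(τ(s)), the chain rule referred to here reads:

```latex
\dot{v}(s) \;=\; \dot{\tau}(s)\,\dot{z}\bigl(\tau(s)\bigr).
```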
So reformulating (REF ) in terms of {{formula:c5b1e7fd-97f2-42c8-880b-7d3b35af9ca8}} and its derivatives, we obtain
{{formula:9bba6ae2-8523-4497-8a65-b21a8800e8b8}}
Let us now consider the averaging process which consists in passing from {{formula:8431c2ab-34aa-4b7f-9ba2-7e414427c5cf}} to {{formula:81152f90-01d5-41cc-88d0-9157596fa315}} which is defined by means of the first-order linear differential equation
{{formula:9ffcbbdf-8f5f-46c3-a6de-9a7637f19512}}
By temporal derivation of (REF ) we get
{{formula:9d51518d-80f1-4a65-9724-6a1c9807724e}}
Replacing {{formula:f19a96a9-88c1-4f5e-84c6-846ce88822d8}} and {{formula:2820013f-2840-4db7-a28c-6b11da809b21}} in (REF ), we get
{{formula:545ae6f6-6162-4fc6-81cd-d72d97d95c4a}}
Dividing by {{formula:3a5a672c-5ee5-48e0-9e7d-9696807c9d86}} , we finally obtain
{{formula:bc6c1493-478a-488f-8f74-487c1d304c38}}
According to the Su, Boyd and Candès {{cite:e6175997465e6509876b98c65fdb30359a9de3be}} model for the Nesterov accelerated gradient method, we consider the case where the viscous damping coefficient in (REF ) satisfies {{formula:22f2fa13-f598-4430-aea6-c85ec79fc7a1}} for some {{formula:0659f93f-2861-41fa-9597-04756f4984ff}} . Thus {{formula:729eebe4-dbda-4270-8208-1a55735cfc8a}} , and the system (REF ) becomes
{{formula:fba1f0fa-30da-4aad-b5c4-7333a6fa0a02}}
Equivalently
{{formula:bda0e0bd-7747-43ce-ae23-d3ed3a870ded}}
We can observe that the time dependent parameter {{formula:086290ae-db49-4f36-aa16-132b9e41d099}} enters both the damping coefficient and the Tikhonov regularization term.
In parallel with the study of the first-order evolution system in the previous section, we consider the two choices of the function {{formula:b1907d7c-9ea5-4709-a969-dc6a40d3cbc6}} at our disposal, namely {{formula:a3e36667-0363-496e-ba39-394b0b0f958b}} and then {{formula:fe1f3ab1-c355-4123-b784-3ab9729bcbd6}} .
We give two different proofs of convergence results, each relying on a specific technique of independent interest.
{{formula:53cd20ce-b74c-4c53-8f3c-94f02fd7b346}} with {{formula:1b6f713f-248d-4910-8b7b-588884cbd311}}
The system (REF ) becomes
{{formula:e996aef5-18d9-48f3-bedf-ff0d477aeb7e}}
Let us state the main convergence properties of this system.
Take {{formula:9b3af65b-720a-4aa7-8164-7881d4d29fde}} and {{formula:503d99da-65dc-4ee3-a9bd-ed4b4a0d4498}} , which gives {{formula:33c097ca-56b4-4027-9da1-f87e9e2dc8c9}} .
Let {{formula:94c8c118-6dae-4363-9695-95c0a8d97d71}} be a solution trajectory of
(REF ). Then, the following properties are satisfied.
{{formula:57539ce8-bfa2-4640-a7f6-9989fcfcb549}}
According to Theorem REF , the rescaled function {{formula:9c2ce193-d7fa-4434-8b6d-e114ec11bd7f}} satisfies
{{formula:63876a7c-364a-4465-878b-62a5e18097ba}}
Our objective is now to obtain the corresponding convergence rate of the values {{formula:2c816685-85f0-4c58-888a-ccb4d9c79410}} as {{formula:c8294e27-099c-4f4e-86c0-d8ffb92d1aa5}} , and the strong convergence of
{{formula:793f2b39-7c8b-4a59-9642-ee052f9c3047}} to {{formula:5b67398a-694f-4b94-ad3f-d4de2200cf9b}} the minimum norm element of {{formula:571a2b94-2a3d-401d-9dc0-c7a39bbdf960}} .
The following proof is inspired by Attouch, Bot and Nguyen {{cite:842d8ee7ba718f5b259d362eae5263b7f7119b9c}}. It highlights the averaging interpretation of the passage from {{formula:391fe068-9204-4cdb-89ea-6e5e359a94f9}} to {{formula:51558313-7164-4c5f-a5aa-95cbb03b773f}} .
{{formula:8b9337fa-4193-4122-aa90-03045d50dcdb}} Let us rewrite (REF ) as
{{formula:a817dd65-edcf-4b04-b69e-80c1e1c94c19}}
After multiplication of (REF ) by {{formula:b83f5a21-331a-408e-91b2-49916fcf2936}} , we get equivalently
{{formula:73ff9dad-6a75-44ee-b922-4a9faf604e97}}
that is
{{formula:3f70476a-0653-475f-8120-10fecc6cc81a}}
By integrating (REF ) from {{formula:27e5532e-c2a8-492c-aa9f-4b917df806a6}} to {{formula:13c07875-5e73-474d-ad6d-316e08471a2d}} , and according to {{formula:3fe1d39c-dfd7-42b9-bc3e-6a640e46c2dc}} , we obtain
{{formula:e2fa434c-ad5c-4200-8e06-c01a7e7b3128}}
where for simplicity we take {{formula:8080859a-c32a-412f-9c11-d83d0df16234}} .
Then, observe that {{formula:62bd9a10-2d7a-4e9c-a021-437d6f450bb0}} can be simply written as follows
{{formula:a949ade8-48fd-4df2-9c6b-0ae3949b99af}}
where {{formula:f21d8ad8-6544-4adc-9416-6f14b956ee6d}} is the positive Radon measure on {{formula:7e84fa5c-be6f-4019-96f5-fd0bb8b4498b}} defined by
{{formula:f5f79448-35ea-48b3-9900-13507613b549}}
We are therefore led to examining the convergence rate of {{formula:0e28cb2c-a2ec-4d89-b00d-e96a9ae96547}} towards zero as {{formula:77fd95e7-5f65-4ab8-b70a-b00df778baf6}} .
We have that {{formula:264f06b9-f919-45c2-a2fd-788b559e2980}} is a positive Radon measure on {{formula:de8827cc-719a-4072-ae4d-4a3e985a0680}} whose total mass is equal to 1. It is therefore a probability measure, and {{formula:8829fa1e-fdea-4f6b-95ae-f0e9bb6e5659}} is obtained by averaging the trajectory {{formula:082e74ed-b45b-4ec7-b069-481f87e3ab82}} on {{formula:b732fce9-ccf2-4111-854d-05d4b48a1fe9}} with respect to {{formula:4ac74a64-99a6-4cee-b5bc-009760234808}} .
From there, we can deduce fast convergence properties of {{formula:f453f39d-1171-4d1c-b157-0d26a6f48a4e}} .
According to the convexity of {{formula:8a4e1b08-4113-411f-b465-4ed9a7594775}} , and Jensen's inequality, we obtain that
{{formula:da096a5a-525c-4cfa-95d2-d7a0835f40c1}}
where the last inequality above comes from (REF ).
According to the definition of {{formula:c66cc4d4-bcd3-442b-bbc3-149ad5a5fb40}} , it yields
{{formula:317d1590-ad86-4a12-8e9e-165909e3ccf6}}
According to (REF ) we get
{{formula:e647d460-22e2-4a9e-9a14-525e0cdd8f05}} For {{formula:833ed1cb-2fe9-4429-9d20-fa86e7155190}} ,
{{formula:5e8d2d87-681f-49aa-8e0d-a07aebd20b5a}}
{{formula:602fda9d-9146-408d-bd06-08e318bdc81d}} For {{formula:8a4a4e55-3732-410f-8e98-a94e0c818850}} , we have {{formula:7ee94cc8-296e-4af9-8601-71094f8689b0}} for every {{formula:b385aba0-d8c7-46c1-be47-c61ce2057128}} . We therefore obtain
{{formula:f194da1b-2cc1-4457-ae27-0ed3a9eff9a9}}
{{formula:7d73179f-4538-45e0-93af-608934213a64}} According to Theorem REF , the rescaled function {{formula:7725ba7b-47d4-4e3c-980e-af3bee781d60}} strongly converges to the minimum norm solution. From the interpretation of {{formula:661c73ab-482d-4725-81f7-53e38d5d920e}} as an average of {{formula:e9345574-f5f1-4836-b7b1-74be6669918d}} , and using the fact that convergence implies ergodic convergence, we deduce the strong convergence of {{formula:c1d514ca-e845-4274-9abc-c00d09e12e75}} towards the minimum norm solution.
This argument is detailed in the next paragraph, so we omit the details here.
{{formula:558988c0-cf6f-43d0-8451-2077683c9799}} We have obtained in {{formula:42d2e2f5-a425-4957-823f-0516d84d6ebb}} that the trajectory {{formula:361bd01a-ed43-46ee-b3f8-bda7afcd60e2}} converges. Therefore, it is bounded. Let {{formula:c2b222b3-4fe0-47aa-a5de-351154b8ed42}} be the Lipschitz constant of {{formula:ecba1859-f86b-41f2-9d30-35ae2db8c846}} on a ball that contains the trajectory {{formula:49a3c6b9-d49e-40f5-a37a-2eed4fb8c539}} . According to the convergence rate of values and Lemma in the Appendix, we immediately obtain the following convergence rate of the gradients towards zero
{{formula:57766242-3487-4b46-b2b9-b1d50b8e09a2}}
So,
{{formula:9da6d822-dc16-44b8-9c96-9ad334d2eee8}}
This completes the proof. {{formula:9514077e-cd74-4a9e-8850-a88629510ed5}}
{{formula:675749a6-4c02-49c8-b8f0-83e02b51620f}} for {{formula:ee97854a-d2d5-4962-944f-51e2c5a6f258}}
The system (REF ) becomes
{{formula:e5db0d4e-1cde-411a-b5d3-f8cdd2fed57c}}
According to Theorem REF , the rescaled function {{formula:f35bfa53-89ee-425e-afd1-3c6e5fa8527c}} satisfies
{{formula:fb5b302b-93a6-44c1-b1cf-181a9e7353c4}}
Our objective is now to obtain the corresponding convergence rate of the values {{formula:c7d64063-3ea0-4214-b4c1-423e1858e72f}} as {{formula:781749f4-8143-4747-98b6-a63c00eec0b1}} , and strong convergence of
{{formula:f3edbc06-9d9f-4bea-91e3-4e53a225d4a8}} to {{formula:41956568-78ce-4ffc-80a7-d56398140298}} the minimum norm element of {{formula:f86253a1-9ed3-4e39-ac77-cbd287713cdc}} .
Take {{formula:95475f78-7c70-46df-a491-61c9f3f6d497}} and {{formula:0753a8cc-c21e-4d1f-8c9a-d7530e5fcaf7}} .
Let {{formula:ce7167dd-8c9c-4e0f-a492-f61a25d8e1db}} be a solution trajectory of
{{formula:8506bf30-e6ba-483d-a8f7-aa3c99b0b3d8}}
Then, the following properties are satisfied.
{{formula:d6c70dd4-fbfb-4ae0-9f66-a89b7e16cdc5}}
{{formula:3dce8a43-f42a-4444-8350-3e7838462029}} Convergence rates to zero for the values {{formula:df911a5f-cecd-4e75-b5b9-e30bf9cb58a6}}.
By definition of {{formula:1b85b7b0-acce-4462-98d7-b3d6aaabb131}}
{{formula:3835e69e-7a9e-426e-bbfd-f07201f7c019}}
Equivalently
{{formula:6e06b67f-1826-4f9c-8fd2-817cd4a7bea1}}
From this it is easy to verify that {{formula:9a741946-1594-4907-9abe-8472c41ee186}} can be interpreted as an average of {{formula:e874391a-4e3d-4702-b552-1ca28d710ffd}} .
After integration from {{formula:e7af5d06-5271-4117-bb3b-b9c04d98d29f}} to {{formula:3515f165-0af8-4581-9e2c-d62fe273c891}} of (REF ) and division by {{formula:2bb0dbc1-fdf4-478f-9f61-692e0f92d278}} , we get
{{formula:66b38c85-8d89-4b45-9410-3323c470ad45}}
where the last equality comes from the change of time variable {{formula:2d03a027-f7c1-435a-b6ad-a3eac4518a9a}} .
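Explicitly, the substitution invoked here is the standard change of variables (for any integrable g):

```latex
% Change of time variable t = tau(u), dt = tau'(u) du:
\int_{s_{0}}^{s} g\bigl(\tau(u)\bigr)\,\dot{\tau}(u)\, du
  \;=\; \int_{\tau(s_{0})}^{\tau(s)} g(t)\, dt .
```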
According to the convexity of the function {{formula:62fbf796-d7eb-4a97-b82a-57c30fef9f02}} , and using Jensen's inequality, we obtain
{{formula:543782c2-e647-4d96-a687-e4b9b4b57214}}
Using again the change of time variable {{formula:b40c0b75-48bc-4fbc-bdaf-fc28aecb0374}} , we get
{{formula:61adee29-6638-484b-8f24-b0daa930e75f}}
It follows
{{formula:3fe83325-5cfa-4830-a43b-2a7dd3cd0c98}}
Using (REF ), with {{formula:b28c320d-0dfb-4050-9dd4-a5f4b96683ca}} , we obtain the existence of a constant {{formula:b4410c79-cf0a-4d05-8eb3-80021ac9b778}} such that
{{formula:310f3c3a-2d88-4b88-91f7-a29128b8dce1}}
Therefore, multiplying by {{formula:98640bc6-bb44-4cce-b99f-5a69c81b244b}} and rearranging the terms of this inequality, we conclude
{{formula:49a9bdb0-7bdb-4778-b528-a83dee029828}}
By letting {{formula:8c713a0c-13da-4f82-ab33-ccfade4fa0f5}} in the above inequality, we first get that {{formula:99128874-96ff-4031-b76d-11564cad78b3}} where
{{formula:24766635-a10f-4b43-a312-aaeba6838814}}
This implies {{formula:47f235f9-2b63-407b-b0b0-8e15214d9804}} is nonincreasing. Therefore for each {{formula:2b5b5ac1-da1a-46ef-82db-de423547fa23}} and each {{formula:64265946-b641-4996-beaf-2b5f81b04a4a}} , we have {{formula:fddf9d58-ad06-4511-9e1a-c82c9431ab3e}} , which gives
{{formula:7330cd26-c26a-4505-954b-da175a8ded71}}
Consequently,
{{formula:07052701-2d12-4edd-8ab7-b4505890423f}}
We conclude that, as {{formula:0fb93651-ae04-49b0-95f9-5b2a1fde0dbb}}
{{formula:e6b90220-90ad-4560-94fc-0c0a992b211e}}
As a consequence
When {{formula:0f560f80-e7f4-4ef6-8c3e-fbd37c26af72}} we have
{{formula:8cbc6d31-b869-43ca-b173-fa4bf850015a}}
So we can get arbitrarily close to the optimal convergence rate {{formula:164a8bec-0f93-4b6f-9cb1-ab88ac2fc9f8}} .
When {{formula:ea621af2-6148-4756-8be8-44bdb75d29a2}} , by taking {{formula:b7b41085-fdb9-4970-8f90-07bf379c10fe}} , we get
{{formula:3436cc8d-f138-4e53-97ef-eab44f28bb5d}}
{{formula:3dea5c5d-e66b-4a6e-9196-0434ddc1eb5b}} Strong convergence of {{formula:cb2b55fc-d387-41fe-ac07-049020423ff3}} to the minimum norm solution.
Let us go back to (REF ). We have
{{formula:80508be0-2bab-4cfc-bc07-84a98e4babd4}}
Integrating from {{formula:d2f936f6-70eb-4a24-b634-ff87e88dc1e2}} to {{formula:c38529f3-1a02-41ce-8f4c-41e1fbe368a2}} , we get
{{formula:14fce8bc-2d06-4ef7-aed0-671597e86160}}
Therefore,
{{formula:97110685-c437-4a71-8e8e-16a7b142156d}}
By Theorem REF , we have, as {{formula:a8b0bce4-c793-4bea-bc8d-a3a4322caa74}} goes to {{formula:de6f2ffe-bbb2-42db-bff1-0de70ac4219d}} , strong convergence of {{formula:fc64b834-886b-4c2b-a762-12eeee3c861c}} to the minimum norm solution {{formula:2760a3f9-f6b7-4ccf-8fe5-5d799e327198}} .
Since {{formula:36df81dd-22ee-4372-8f5d-880bf0b455ee}} , then given {{formula:e9ea3219-943d-4101-94b7-3ed06c83308c}} , there exists {{formula:f8d59166-b5bc-44c6-bc1e-0b314a942f61}} sufficiently large so that {{formula:917ccce0-a2bb-4fe9-bc43-718ce2d5d62d}} , for {{formula:8912b80d-44b6-46c8-a30c-9d8343e90e34}} .
Then, for {{formula:06193e21-eae2-455b-b980-c1f7db0b4bfc}} , split the integral {{formula:af579a1f-c620-43a3-959e-1ef244f4244e}} into two parts to obtain
{{formula:b275f0f8-b0af-476d-9961-6dc5610d4b10}}
Now let {{formula:b08e92b9-9f17-42f7-8d18-2168d328f34b}} to deduce that
{{formula:c2a6dd75-887a-406b-a2c0-4451e9fa4516}}
Since this is true for any {{formula:cfe8f7d3-7a16-4381-b79e-94ff1eeaa065}} , this ensures
{{formula:9bd13346-aaf5-4389-9691-6a000cffa244}}
Going back to (REF ), we conclude for {{formula:3f29ebad-a3bb-402b-b719-34458472ea74}} that {{formula:79caf42c-d220-416a-8dea-862d2532f85c}} . This means that {{formula:9e6ed3e2-5dd5-4c2f-9267-a13ddb4fdf75}} strongly converges to {{formula:18d6a4d3-c99c-47bf-bdea-7672c2207adb}} as {{formula:3bdc00b5-380f-41a3-8e74-9089826e2af4}} .
{{formula:472eaccc-edc2-4a10-96b8-d4b0dcd23e12}} Convergence of the gradients towards zero. We have obtained in {{formula:848863de-e712-44b8-bfc4-8a90bd1a8cc6}} that the trajectory {{formula:5bac08aa-f47a-440b-8731-42b02b48c2b8}} converges. Therefore, it is bounded. Let {{formula:9b9915cd-1c84-4cc8-8834-0568d9d739af}} be the Lipschitz constant of {{formula:978b1447-1935-4efa-8795-26f4eccd8123}} on a ball that contains the trajectory {{formula:bf84bab6-cf57-4360-be24-1ac4eb4913c0}} . According to the convergence rate of values and Lemma in the Appendix, we immediately obtain the following convergence rate of the gradients towards zero
{{formula:7718fc97-0aa0-422d-9085-05c643a3b987}}
This completes the proof. {{formula:abdd5cfc-2d2d-437e-aaf0-2e903e0875fd}}
Nonsmooth case
In this section, we adapt the approach of {{cite:842d8ee7ba718f5b259d362eae5263b7f7119b9c}} to our context.
The pair of variables {{formula:2e192f4d-5057-4785-a935-ff0c2d60486a}} defined in section satisfies the following differential system, which only involves first order derivatives in time and space
{{formula:e4048271-4c36-48ce-859d-bb04dadfb517}}
This differential system naturally extends to the non-smooth case, replacing the gradient of {{formula:3696fb80-2ce8-4f28-92f8-35d043aadaaa}} by its subdifferential.
Given {{formula:1bb5758a-95a5-41d6-9e78-af42c72cd7d8}} a convex, lower semicontinuous and proper function, this leads to consider the system of differential equation/inclusion
{{formula:cdf9ec95-9a5b-4925-8a5c-8e4cf0fb8e3b}}
Solving this system gives generalized solutions to the second-order differential inclusion
{{formula:683d4174-e2a5-4ed9-9f17-005bdddf626a}}
whose direct study raises several difficulties.
To extend the results of the previous section to this nonsmooth case, we need to avoid the arguments using the Lipschitz continuity of {{formula:ecdd2999-8367-43dc-b459-f7e0693af6b5}} .
So, we are led to consider the Cauchy problem for (REF ) with
initial data {{formula:9330e215-b1c7-43c2-9686-33ac032ad1ba}} and {{formula:faffabcd-141f-464c-9e50-75bbf65c532e}} , that is with initial velocity equal to zero.
In so doing we have {{formula:b5e1f147-0f24-48ef-a6b7-798fdc90a5e5}} , which allows us to interpret {{formula:e8aedfc4-d72b-40d1-acf6-97b68247121f}} as an average of {{formula:11ad1f4d-df84-4db0-90be-032c7140b327}} .
Then, the existence and uniqueness of a strong solution to the associated Cauchy problem relies on the equivalent formulation of (REF ) as a perturbation of the generalized steepest descent dynamical system in the product space {{formula:f2b9c87b-c97b-4a2d-9e25-e4aae9c3065d}} . Precisely, define {{formula:d5d1b4ae-6cca-44b3-aad4-4cba99b33489}} , which is given for any {{formula:9ee39313-c7cd-4185-af48-16de46f7b5b4}} by
{{formula:13ab7426-dfb6-4626-8cad-61bb1df5cc2f}}
and let {{formula:0e29483a-1c3e-45f7-aa4c-473f397a8116}} be the operator defined by
{{formula:d189ee54-a6d5-4bf6-a55d-4c2aed28ed76}}
Then (REF ) is written equivalently as
{{formula:eb9a7749-2ff8-4fd3-aeea-0c2796126331}}
The initial condition becomes {{formula:58b336db-c9e1-459f-a695-aee4f8c0b61d}} which belongs to
{{formula:56e92354-6402-4de3-8b2a-de344b2c6d2d}} .
According to the classical results concerning the Lipschitz perturbation of evolution equations governed by subdifferentials of convex functions, see {{cite:13aa68232254121c8fa682f19ae80ba4fe14ce47}}, we obtain the existence and uniqueness of a global strong solution of the Cauchy problem associated with (REF ).
As a major advantage of the time scaling and averaging techniques, the arguments used in the previous section still work in this more general nonsmooth situation. The rules of differential calculus are still valid for strong solutions, see {{cite:5ae97cfc3b269431f2ff3983e5af7ee3e6656201}}, and Jensen's inequality is still valid for a nonsmooth function {{formula:f6d10490-bfbc-4a2b-a78b-7c21e8a5a2ff}} . Indeed Jensen's inequality is classical for a smooth convex function {{formula:72511299-7f4e-4a71-84b4-e8e9e888fc95}} . Its extension to the nonsmooth convex case can be obtained by first writing it for the Moreau-Yosida regularization {{formula:12368bca-62aa-4dad-ae49-10e5e31482ab}} of {{formula:2ef92ef8-9799-4866-a0e3-9ef1188d5cfb}} , then passing to the limit when {{formula:6bf92fb1-834b-4a49-a602-f37d2084ea1c}} . According to the monotone convergence of {{formula:132801b6-c915-43f1-8281-8b842f901ea8}} towards {{formula:dc3727e4-2cb1-4ba0-80d2-021a517a4a9e}} , we can pass to the limit in the integral term thanks to the Beppo-Levy monotone convergence theorem.
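For completeness, the Moreau-Yosida regularization invoked in this argument is the standard one:

```latex
% Moreau--Yosida envelope of f with parameter lambda > 0: a convex,
% differentiable minorant of f that increases monotonically to f as
% lambda decreases to 0, enabling the monotone convergence argument.
f_{\lambda}(x) \;=\; \min_{y \in \mathcal{H}} \Bigl\{ f(y) + \tfrac{1}{2\lambda}\,\|x - y\|^{2} \Bigr\}.
```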
We so obtain the following theorems corresponding
to the cases {{formula:2f5cfd24-04ef-47e0-b8f0-5fa1875adc55}} , and {{formula:17ebad4c-dcc3-4eac-9bfa-ab9a36e40db6}} .
Let {{formula:9dce2e3b-3e48-4857-a4a2-775d36cec4dc}} be a convex, lower semicontinuous, and proper function such that {{formula:db6d1616-c2b7-48ca-8475-f49d8cc32308}} .
Take {{formula:7dbffc22-47c7-4ae9-a88a-83a0e4f0f83c}} and {{formula:7f5abd0a-a2d2-4bee-9981-d3597933d09a}} , which gives {{formula:1361fdda-9d88-4cfe-ad7f-95d4d931068d}} .
Let {{formula:e8c89254-32ac-45b1-897c-38413fbec206}} be a solution trajectory of
{{formula:9ddc3f21-8872-4328-a2f6-dce9b0b2b7a2}}
which is taken in the sense of (REF ), and which satisfies the initial conditions {{formula:0b2f7be2-6afe-4d9e-be0d-430668fb97a8}} and {{formula:b75afca0-e3d6-454d-bb2a-e9ba9e90b7b4}} . Then, the following properties are satisfied.
{{formula:4dedf6cf-7452-473e-8f50-79534954a1f6}}
Let {{formula:ef9e6280-e0d1-4f80-ab01-120115d4d594}} be a convex, lower semicontinuous, and proper function such that {{formula:86814843-c403-47a8-8e59-c95c0f487e86}} .
Take {{formula:5fd96f6a-c486-429a-bec8-e560a7f4bbd1}} and {{formula:4d635318-ff12-4fc5-bb6f-d678acc944d1}} .
Let {{formula:45bdd273-36ad-4415-aa82-6e581e74c817}} be a solution trajectory of
{{formula:ff708ec6-6796-4a6b-8c2d-584963a6b578}}
which is taken in the sense of (REF ), and which satisfies the initial conditions {{formula:83e48cc8-d378-46ef-a063-54c7633487c5}} and {{formula:7cb8b1d6-2ca2-49b2-a493-668861725fa6}} . Then, the following properties are satisfied.
{{formula:11c4d12a-3a8c-4d5c-bc20-a8cbe80c525f}}
As a consequence of {{formula:44221203-5213-496c-8317-a4ce1a5aba43}} , we have that {{formula:e0130413-a88e-468d-b4fe-69d972f8dddc}} remains in the domain of {{formula:57bb9605-12e9-47d0-af94-8493c89c148b}} for all {{formula:56b33221-bad5-457c-9a5f-552e2eb007cc}} . This viability property strongly depends on the fact that the initial position belongs to the domain of {{formula:f29dbb24-0c81-4003-8de3-577287e59130}} , and that the initial velocity has been taken equal to zero.
Numerical illustrations
The following simple examples illustrate the properties of the trajectories generated by the dynamics studied in the paper. They show that the trajectories exhibit rapid convergence of the values, convergence towards the minimum norm solution, and a notable attenuation of the oscillations.
This highlights the additional properties obtained by the presence of the Tikhonov regularization term.
A detailed numerical study of these aspects should be the subject of further work.
We consider the dynamical systems (REF ), (REF ) and (REF ) in {{formula:69ec6000-76ce-463a-993f-24ed74bbdcc9}} :
{{formula:2323a63d-b6be-458a-8d41-8098da65aa52}}
Let us illustrate our results with the following two examples, where the function {{formula:7b184539-b6ba-407d-8988-226c256f7d31}} is taken to be, respectively, strictly convex and convex with an unbounded solution set.
We choose {{formula:95876df7-438f-4eaf-86d8-44c2cd8b95d4}} , {{formula:accc5c5a-c358-4ba3-96ed-b95203d1e6dc}} and {{formula:b2cacf75-5f7f-4320-a91b-f1a1cbdf6217}} .
Our numerical tests were implemented in Scilab version 6.1, an open-source software package.
Take {{formula:d42adc40-82da-4cc4-8937-9212915fd090}} , which is defined by {{formula:730af671-8cd0-4a9c-b1df-ffe17ba16721}}
The function {{formula:207714b3-524b-4960-8ca3-c43c55bc3442}} is strictly convex, with {{formula:89353d63-5fcd-4c41-93c0-5430e36997cb}} its unique minimizer. The trajectories corresponding to the three systems described above are shown in Fig. REF .
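As an illustration of how such trajectories can be computed, here is a minimal Python sketch (our numerical tests used Scilab 6.1). The inertial dynamic below, with vanishing viscous damping α/t and Tikhonov term (δ/t^q)x, is an assumed generic form of the systems (REF ), and the quadratic objective is a hypothetical stand-in for the strictly convex example above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical strictly convex objective, f(x) = ||x - (1, -1)||^2.
def grad_f(x):
    return 2.0 * (x - np.array([1.0, -1.0]))

alpha, delta, q = 3.1, 1.0, 0.75  # assumed damping / Tikhonov parameters

def dynamics(t, state):
    # Second-order ODE  x'' + (alpha/t) x' + grad f(x) + (delta/t^q) x = 0,
    # rewritten as a first-order system in (x, v).
    x, v = state[:2], state[2:]
    a = -(alpha / t) * v - grad_f(x) - (delta / t**q) * x
    return np.concatenate([v, a])

x0, v0 = np.array([3.0, 2.0]), np.zeros(2)   # zero initial velocity, as in the theorems
sol = solve_ivp(dynamics, (1.0, 50.0), np.concatenate([x0, v0]),
                rtol=1e-8, atol=1e-10)
print("x(T) =", sol.y[:2, -1])  # expected to approach the minimizer (1, -1)
```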
{{figure:709db701-1abe-4518-ba60-a080ef0e0208}}
Consider the non-strictly convex function {{formula:416c137a-8425-4dd8-9444-dcacfdc3f2c5}} defined by
{{formula:57fec355-47fb-4d2a-af08-3a71f5abc025}}
The set of solutions is {{formula:0f94c5ad-1224-43dc-8b32-b32a3cbdc6c6}} , and {{formula:ca395d36-f807-442e-9a63-62a11b9b6843}} is the minimum norm solution. The trajectories corresponding to the systems {{formula:7fa69935-c6c4-49cd-afab-81c7085b6fee}} and {{formula:c7151555-69a0-4d0d-b0a1-b836b81fb005}} are represented in Fig. REF .
{{figure:d890bd02-ab2a-4a24-bd65-9d7666212324}}
Conclusion, perspective
The introduction of a Tikhonov regularization with vanishing coefficient in the optimization algorithms is beneficial in several respects.
For general convex optimization, instead of weak convergence of trajectories/iterations towards an optimal solution which depends on the initial state, it provides strong convergence towards the minimum norm
solution. This is especially important for inverse problems, where one seeks a solution as close as possible to a desired state.
In this paper, we show that this can be achieved while preserving the fast convergence properties attached to the Nesterov accelerated gradient method.
Our approach is based on the Attouch-Bot-Nguyen scaling and time averaging technique, which proves to be flexible and allows us to address these issues in a unified way.
As a striking result, for the first time we obtain simultaneously the rapid convergence of the values {{formula:7cb90583-e9f5-4394-955f-3cdffdc1c535}} and the strong convergence towards the minimum norm solution.
Let us mention some other interesting questions to examine further:
a) Our results open the way to the study of a new class of fast algorithms in convex optimization. A Lyapunov analysis of the continuous dynamics which underlie these algorithms will be of great help. It is probable that such algorithms share the good convergence properties of the continuous dynamics, according to the results obtained in {{cite:842d8ee7ba718f5b259d362eae5263b7f7119b9c}} concerning the time scaling and averaging method, and {{cite:5d79441d8d1a7cb404b1f4868d2e28f013f932b4}} concerning the case without Tikhonov regularization.
b) An interesting and open question is whether a similar analysis can be developed using closed-loop Tikhonov regularization, i.e. the Tikhonov regularization coefficient is taken as a function of the current state of the system, see {{cite:43b6d764563cc520c65550034703ee0d2a11f345}} for a survey on these questions.
c) The Tikhonov regularization and the property of convergence to the minimum norm solution are a particular case of the general hierarchical principle attached to the viscosity method, see {{cite:40061acf2f148f6fe66f3b61c4c383cdd9148aff}}. In the context of numerical optimization, it is therefore natural, as an example, to extend our study to the logarithmic barrier method for linear programming and to the selection of the analytic center.
Appendix
The following lemma provides an extended version of the classical gradient lemma, which is valid for differentiable convex functions; this version was obtained in {{cite:7954220ade148840f63b069912cf45df6e08958a}}, {{cite:9ed472d1778abc54ed3007e6eb65310b5fcdce20}}.
We reproduce its proof for the convenience of the reader.
Let {{formula:6039eaa2-1767-4968-b2f4-63eff5080415}} be a convex function whose gradient is {{formula:bf98d8bd-5c20-4b11-8799-ff5927c3cb0d}} -Lipschitz continuous. Let {{formula:0b321b17-ae14-4365-b21f-45db878c4c32}} . Then for all {{formula:dad41d62-8ce8-4cec-904c-5fa13155ceef}} , we have
{{formula:c440bb95-f55c-4e05-99d7-3336ee9d86ab}}
In particular, when {{formula:b13b4f87-1ffe-4477-81d8-f6d1b748e3c2}} , we obtain that for any {{formula:307aa260-b637-4fe3-8ab0-80bdcfbce8d8}}
{{formula:185e481a-937f-4eb4-836b-c168b125204c}}
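For orientation, the classical gradient lemma being extended here states that, for a convex function f with an L-Lipschitz continuous gradient (a standard fact; the notation f, L, x, y is ours),

```latex
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle
      + \frac{1}{2L}\,\big\| \nabla f(y) - \nabla f(x) \big\|^{2}
\qquad \text{for all } x, y .
```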
Denote {{formula:2afab1ff-bceb-4273-aa6f-767e7c8facbc}} . By the standard descent lemma applied to {{formula:5d2a0689-704d-4f00-8e3e-820f39b40aed}} and {{formula:d553a14a-7e1e-4027-a986-90a0b88a065e}} , and since {{formula:73ae3bcf-f600-409f-86d0-5fd69d27d2f5}} we have
{{formula:00e5243f-5a98-421c-9d27-298378640816}}
We now argue by duality between strong convexity and Lipschitz continuity of the gradient of a convex function. Indeed, using the Fenchel identity, we have
{{formula:732bc3ac-3c49-4f7c-ac28-809bfe3415fc}}
{{formula:3586ad05-e9c2-4688-808e-857e0d3e63ae}} -Lipschitz continuity of the gradient of {{formula:f5ab3f06-7516-4f77-b221-33371281168f}} is equivalent to {{formula:0acd6721-196b-431e-ab85-528edc30d3f9}} -strong convexity of its conjugate {{formula:414526c6-7f23-461f-9237-f52c7dc19339}} . This together with the fact that {{formula:b582e316-5573-446b-a5ec-3700a5b40150}} gives for all {{formula:d2d0005e-c790-463d-884a-435567a6af82}} ,
{{formula:8c5942b0-bdda-4a8d-b0b9-22928833dfda}}
Inserting this inequality into the Fenchel identity above yields
{{formula:59d4b40a-e516-4c06-b7a0-f7cfd3b95f92}}
Inserting the last bound into (REF ) completes the proof.
| r | 44a36b586411d27a2ee3ebee930ac54b |
Despite the importance and ubiquity of non-stationary risk-sensitive RL problems, the literature lacks provably efficient algorithms and theoretical results. In this work, we study risk-sensitive RL with an entropic risk measure {{cite:8d1d9ac7aaccbe1c7d3296a743769d12606305fa}} under episodic Markov decision processes with unknown and time-varying reward functions and state transition kernels.
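For concreteness, the entropic risk measure of a random cumulative reward X with risk parameter β ≠ 0 takes the standard form (notation ours):

```latex
\rho_{\beta}(X) \;=\; \frac{1}{\beta}\,\log \mathbb{E}\!\left[ e^{\beta X} \right],
\qquad \beta \neq 0 .
```

As β → 0 this recovers the risk-neutral expectation E[X]; positive β is risk-seeking and negative β is risk-averse.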
| i | c97509326ac4b106d88aa27920560121 |
We compare several methods: some are designed to work in a linear model, namely Lasso {{cite:0e458741e27de19327cbb812702b6ca0b8eb06ec}}, adLasso {{cite:8a98f873ecba25b7c7451656b8ba80975e01e0f8}} and procbol {{cite:e64ee282585b98dbe76b99fd150de3af49c811ab}}, while others are designed to work in a linear mixed model, namely lmmLasso {{cite:dc22f179566ec68895c3b9ddbf017ac7ecdd7e19}}, Algorithm REF (labelled Lasso+), adLasso+Algorithm REF (labelled adLasso+) and procbol+Algorithm REF (labelled pbol+).
| m | c4796617f28bcd8d0223bb75b331f067 |
The authors highlight that neural networks allow for tremendous freedom with regard to structure, hyper-parameters, regularization, and other factors that may affect performance.
We have tested several architectures and selected the best result for this paper, but acknowledge that our work falls considerably short of testing across all sizes and training procedures.
Doing so would require extensive computational resources, and the results would have no guarantee of generality beyond the problem specifically discussed in this work.
Rather, we sought to investigate the use of various classes of models (non-dynamic, ROM, full order simulation) and representations of data (POD modes, FFNN) for the purpose of forecasting a quantity exhibiting extreme events.
We also note that the work contained in this manuscript has been done without the inclusion of artificial noise, as is common in works applying machine learning to synthetic datasets.
The authors suspect that including noise would not change the results in a meaningful way, since flow reconstruction methods have been shown to be robust to noise {{cite:de3f792b7aaee9d7cda6af1eaf08a0b1a1017de2}}, {{cite:934c1027900be473f8d0654ab13f643bb67c6ddf}} and quantities with dynamic models may be estimated with filtering.
Nonetheless, further work studying the effect of noise would be necessary to confirm this.
| d | 3595773a087fef80d182deec63f636ca |
In this section, we present our active learning framework for object detection. Let {{formula:3f2ee8b3-e5f3-4723-820b-b6248a637749}} be a dataset divided into a labeled set {{formula:65584dc3-e453-4e1a-b83f-3694c8b09ccb}} and a pool of unlabeled data {{formula:0859a931-0ba6-42cc-a86e-f58efb796ecc}} .
We start with a deep object detection network trained on the losses specified in Sec. REF , and then proceed with the active learning cycles. These include mining a subset of samples from the pool of unlabeled data {{formula:1a838388-bc75-48f2-92c5-af87b1ab763b}} and transferring them to the labeled set {{formula:dea75e2a-3dcd-4773-b743-582ad1d67252}} , incurring a labeling cost.
The proposed acquisition function used to select the samples to label is defined in Sec. REF .
Intuitively, we are interested in mining hard samples for which we can rely on supervised learning during training. Nonetheless, arbitrarily augmenting the set {{formula:1d5526c2-2ae6-4cb7-8464-26b8be8cc8c0}} with only hard samples creates a distribution drift in our training data.
Hence, we propose to include in training the easy samples, i.e., objects for which the network's confidence is high, by using pseudo-labeling.
We train our network with our new set of labeled images, and repeat the whole procedure for {{formula:7622346c-0973-49dd-a1b1-0ae45659af90}} active learning cycles. To train the object detection network we combine the standard supervised Multibox loss {{cite:25366587bd59a2839f1bf7c40ac294b1eb9ac351}} for labeled samples with a semi-supervised consistency loss {{cite:613e7232dc98b75d890f1f63785e021f81f3c89e}} for unlabeled samples. Fig. REF and Algorithm REF show a high-level description of the pipeline and the algorithmic steps, respectively. Below, we give the details of every step.
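A high-level sketch of one such cycle is given below; the interfaces `detector`, `acquisition_score`, and `oracle_label` are hypothetical stand-ins for the components described above.

```python
# Schematic sketch of one active learning cycle; all interfaces are
# hypothetical stand-ins for the actual pipeline components.
def active_learning_cycle(detector, labeled, unlabeled, budget, tau_easy):
    scores = {x: acquisition_score(detector, x) for x in unlabeled}
    # 1) Mine hard samples (highest acquisition score) and pay to label them.
    hard = sorted(unlabeled, key=scores.get, reverse=True)[:budget]
    for x in hard:
        labeled.append((x, oracle_label(x)))   # incurs labeling cost
        unlabeled.remove(x)
    # 2) Pseudo-label easy samples (high network confidence) to counteract
    #    the distribution drift caused by mining only hard samples.
    pseudo = [(x, detector.predict(x)) for x in unlabeled
              if detector.confidence(x) > tau_easy]
    # 3) Retrain: supervised (Multibox) loss on labeled + pseudo-labeled
    #    samples, consistency loss on the remaining unlabeled samples.
    detector.train(labeled + pseudo, unlabeled)
    return detector, labeled, unlabeled
```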
| m | f6cd7607fb0bbd2d9d6212c48079178a |
A fundamental concept in quantum information theory is the understanding of
entanglement. Quantum entanglement can be viewed as a crucial resource in
quantum information. The key question is how to quantify and classify
entanglement of quantum states. Polynomial functions in the coefficients of
pure states which are invariant under stochastic local operations and
classical communication (SLOCC) transformations have been studied extensively
{{cite:7abf5889a25cbdb135b97510dac46db9111ce5d8}}, {{cite:b7153a7223ddf1d8bd504cd4ef4c9809b11d4a92}}, {{cite:6b448fa0ffcbb85ccbba48005aece1d0734640f3}}, {{cite:49cce96a54bbb2d0ec9aaf06132bf73471d8a0d2}}, {{cite:a931e9adc878bf4b1b3b58ef5cc25be1d5961ba2}}, {{cite:f2e2850e91c17754314f000e5372386d7831011f}}, {{cite:61fb062af480a59e2ce0c856dad9a598972e4933}}, {{cite:17bbdc4818270220627b3aafb8ebb06f4170e608}}, {{cite:09ba04ebe4deacb795ac927184e7c0a6f1705fec}}, {{cite:0e61a669fb6318b023cbee2e8f7db9fdb478313f}}, {{cite:a3e1026f2cf197a1e2067a4076307ee77267cea6}}, {{cite:02226b01730b3a369ac7ce009acb24b210fa7c73}}, {{cite:c6f2796efffb81410752ee860a504aca8a11f30c}}, {{cite:acdaf204b622315ca9877d2b317dc8ea7e07f119}}, {{cite:2a3d509629e5d531fe8731088c6cf3760da6c50a}}, {{cite:23af33d9c40fad54fee386400eac7b72b7f7c0c1}}
and exploited to construct entanglement measures
{{cite:7abf5889a25cbdb135b97510dac46db9111ce5d8}}, {{cite:b7153a7223ddf1d8bd504cd4ef4c9809b11d4a92}}, {{cite:a931e9adc878bf4b1b3b58ef5cc25be1d5961ba2}}, {{cite:02226b01730b3a369ac7ce009acb24b210fa7c73}}, {{cite:c6f2796efffb81410752ee860a504aca8a11f30c}}, {{cite:acdaf204b622315ca9877d2b317dc8ea7e07f119}}. The concurrence
{{cite:7abf5889a25cbdb135b97510dac46db9111ce5d8}} and three-tangle {{cite:b7153a7223ddf1d8bd504cd4ef4c9809b11d4a92}}, which measure entanglement
of two-qubit and three-qubit states, are polynomial invariants of degrees 2
and 4 respectively. It is known that the concurrence and three-tangle are
the absolute values of hyperdeterminants for two and three qubits
respectively {{cite:6b448fa0ffcbb85ccbba48005aece1d0734640f3}}.
An expression has recently been derived for four-tangle, which is
a polynomial invariant and a measure of genuine entanglement of four-qubit
states {{cite:49cce96a54bbb2d0ec9aaf06132bf73471d8a0d2}}.
Polynomial invariants of degrees 2, 4 and 6 for four and
five qubits have been constructed from classical invariant theory {{cite:a931e9adc878bf4b1b3b58ef5cc25be1d5961ba2}}, {{cite:f2e2850e91c17754314f000e5372386d7831011f}}. The absolute values of the polynomial invariants obtained
in {{cite:a931e9adc878bf4b1b3b58ef5cc25be1d5961ba2}} may be used to construct entanglement measures of
four-qubit states. Further, polynomial invariants of degrees 2, 4, 6, 8, 10
and 12 for four and five qubits have been obtained using local invariant
operators {{cite:61fb062af480a59e2ce0c856dad9a598972e4933}}. Despite these efforts, few attempts have so far
been made towards generalizations to a higher number of qubits.
Three-tangle has been generalized to {{formula:c6f45d53-6aae-4ee3-9a23-ebe6a3e746d7}} -tangle for even {{formula:4983f78d-e694-4a9e-806e-fc806af6e422}} qubits {{cite:17bbdc4818270220627b3aafb8ebb06f4170e608}} and has been shown to be equal to the square of the polynomial
invariant of degree 2 {{cite:09ba04ebe4deacb795ac927184e7c0a6f1705fec}}. A generalization of three-tangle to odd {{formula:12a85ae3-3830-43e4-869f-db18b2f7d994}} qubits has been recently proposed in {{cite:0e61a669fb6318b023cbee2e8f7db9fdb478313f}}. In {{cite:a3e1026f2cf197a1e2067a4076307ee77267cea6}},
polynomial invariants of degree 2 for even {{formula:d42cb423-5e9f-4860-9021-bd0138dde490}} qubits and degree 4 for odd {{formula:0a897660-2b4a-44d5-8ecd-95e06fe1ce39}} qubits have been derived by induction based on the definition of SLOCC.
| i | a48d6e32e3f9ce50ae9e2e3074c96711 |
The biological motivation for our model is the retina, where both lateral and vertical coupling by electric synapses (gap junctions) occur, forming neuronal networks with stacked layers {{cite:fe7b7da9fce22ec3015847ef624e5c18ce027494}}. All these electric synapses are plastic, from the millisecond to the minute time scale {{cite:9cceb4334ed8d9b7b46b4e0c00de7b4f90977e73}}, which opens the possibility of homeostatic tuning to criticality {{cite:28234a5cf899814d7c6242f1c4025c0f22dce187}}, {{cite:305d896145d35dbd6ecd46481403fe13ed83ea6b}} as supposed here. Moreover, there is experimental {{cite:1fb9fcb6f98a19f30d078878dc7e756507ccee2b}}, {{cite:6f72f8cf26cf76952b3a5c139491a15d2185c252}} and computational {{cite:4969db14aadafa65e2b4e6f47d15ce73400dc3b7}} evidence that disruption of electric synapses diminishes the sensitivity and degrades the retina's dynamic range. We emphasize that we worked here with stochastic integrate-and-fire neurons, not cellular automata as done in {{cite:5e45e239b8604c53e6cc1c734b3a82e1693a2969}}, {{cite:3816b300d722022f5f523a1fe228b2e93917fb14}}, {{cite:9e81214d525a8ae732096f2c1617bd03e31b0d23}}, {{cite:2265854bf1a9e76cfc4b7c6a4b261f8cbf862d43}}, {{cite:68efd150b9c25230e0f926d3483479634c02a465}}, {{cite:6dead57d9de82590851e4f38574a8fc7e783fe28}}, {{cite:749f27a3bf8d9b269bacd35e65c7ca43cfd92520}}, thus generalizing these results to biologically more realistic elements.
| d | ff1a72bb5c9cf5e0c415c64d712e10b1 |
More recently, a new method has been proposed where finite-size effects are expressed in terms of the forward Compton amplitude of the pion {{cite:2a477b71da5e1fbff3e537ef17546e86378fe0da}} in an expansion in {{formula:770b6ccd-a15c-4521-9444-d6d9eaa4a39e}} . The first publication was restricted to the dominant {{formula:8f00500a-5bf1-4b4f-a289-89fee19c47cb}} contribution and the sub-dominant contributions were neglected. However, the latter are numerically relevant and this limitation has been overcome in more recent publications {{cite:42790f2b1e923518c63d96528123c934d2b49740}}, {{cite:dc5f3c2094232ffbefbc0516fab51f7b5d1a902d}}, where sub-leading terms {{formula:c5ec734a-2ff2-4f98-a9d2-2b4376dbf00c}} and {{formula:78d093c9-4d4c-4896-8e9c-794b342d82b8}} have been included. Numerically, one observes a nice convergence of this expansion. This method has been employed on lattice data for the first time by the BMW collaboration {{cite:2d469cc488a0c823d73fbbb211b1b6fe8626cb0d}}. The results are found to be numerically compatible with NNLO ChPT and the previous method based on the time-like pion form factor. Interestingly, this method also provides the leading correction for the finite time-extent of the lattice.
| m | 84e2788e31841663bbc775b4a55e5685 |
Optical Character Recognition (OCR) has been deployed in a wide range of environments, from document digitization {{cite:b8a0ae7df2252c67acfb10f58fb9f6d90d7f7c75}} to road sign recognition {{cite:39aae7be4df7b3e0cc4be4930a53791454e74747}}. Egocentric data
presents a number of additional challenges, such as occlusion, motion blur, orientation and text size, particularly when objects are being handled {{cite:bb3b7025ee54cde3c56005c0e395f37204a5c717}}.
| i | 7b2564ec9d9273e7d42501dd7918badb |
Optimality for coarse SNMMs is simpler than optimality for more traditional
SNMMs (see {{cite:87162a3a9937472f69b602a9a6241ae06c1f8e25}}, {{cite:0b9513753df87f31fc87d5f075c52c11ed5736e2}}),
because coarse SNMMs avoid the need for accumulating the effects over
time. This article therefore
includes an accessible illustration of the steps involved in calculating optimal estimators.
Our Web-Appendix explicitly proves all our results for time-dependent coarse SNMMs, thus providing a self-contained example.
| i | 3670d4312c2807b051d78bfa7be37a4d |
Currently, deep learning methods
dominate the field of machine learning
owing to their experimental performance in numerous applications
{{cite:985fb9db6d8685791a95f4efcf1299f87eaa762e}}, {{cite:51d3ee95c7142e7f4d59b9888503f4bf8fe4a02b}}.
However, they typically consist of overly complex models with a great many parameters, which comes at the expense of time, energy, data, memory, and computational resources {{cite:eb089a7adc7e0b6153ac228baf8ec4e770f6160d}}, {{cite:e4fbc2d319c665adfdbcd5c169fd78cc769e6a1c}}. Furthermore, they are inherently uninterpretable and vulnerable to small perturbations {{cite:d055a6932ec11ed6bc3d470b51b15f8c08cb833e}}, {{cite:c738bda6fb29d67852fb27a7957709593d8ab9e3}}, which has led to an emerging hesitation about their usage outside common benchmark datasets, in real-life or security-critical applications {{cite:10686e5570f63d58c404d10c0d1648c0da363695}}.
| i | 511353a5c5658e1251ce6ef4c958391d |
Remark 13 Let {{formula:1ed4511f-32fa-4737-ae8f-fbaa345592f9}} be Banach spaces such that {{formula:53f7a0be-c268-4083-b66c-68856ec6e4bb}} or {{formula:79f8c276-1444-49cd-9b78-3c9b4d45940b}} has the compact metric approximation property. It can be shown that there is a contractive projection {{formula:cc37b270-4a61-40ce-9621-62bf9044737d}} such that {{formula:0902a69d-6236-419f-b3b9-4d4994f530de}} ; see the remarks on page 334 of {{cite:5a7572f29b18d9a5769f385e77536f6234f9f17d}}. It is easy to verify that this projection is the identity on functionals of the form {{formula:84a99394-e9f1-47f8-9586-69e9d7405df7}} . It therefore follows that {{formula:89dad6e2-e3f2-4a91-b298-00d23800bf2b}} , where the inclusion is the canonical embedding. Suppose the projective tensor product space {{formula:c58979f9-32e6-4516-8e19-09923e9f1c35}} has the RNP (hence it is the unique predual of {{formula:4407b3ca-54f3-4c1f-b65a-09debad5a251}} ). Let {{formula:87d20e7f-262a-4943-b43e-217240a58161}} be a point of norm-weak usc in the bidual; we do not know if it is also a point of norm-weak usc in the intermediate space {{formula:348245e7-cb50-4b02-8959-e709c65e601b}} .
A difficulty is that {{formula:0cbd00bf-d32b-4bb9-8c4c-0a01ead739a3}} need not be a weak{{formula:24d8a3ae-f04f-4f71-b3e7-7351f9724060}} -closed subspace of {{formula:fb0aba06-ba30-49cf-a378-bc2de77e6363}} .
| r | 5566bc9cee427d2a9d72a5dd4ff5d04d |
Many typical verb representations, including FrameNet {{cite:e5d75dbc6e20bb11ae697c1faf3c68ae8416e4a0}}, PropBank {{cite:55d2a4c9c4d43a14bde3877834f6bc909c231d90}}, and VerbNet {{cite:693ed01086bd36a876c58bf18a56dadb3958fac3}}, describe verbs' semantic roles (e.g. ingestor and ingestibles for “eat”). However, semantic roles in general are too coarse to differentiate a verb's fine-grained semantics: a verb in different phrases can have different semantics but similar roles. In Example REF , both occurrences of “eat” in “eat breakfast” and “eat apple” take an ingestor, yet their semantics differ.
| i | 5bc41537bf9c8ccfd46751e8201e60d7 |
Experiments were conducted using sagittal slices covering 15 positions on the right liver lobe from 12 volunteers. Each anatomical position comprises 50 dynamics. These cine-phase scans were acquired on a Siemens Skyra 3T scanner using a 2D T2-weighted true FISP sequence. Pixel spacing, slice thickness and temporal resolution are equal to {{formula:7f1ff590-e0a8-4e31-a1d3-bc813bc0e8ac}} mm{{formula:2350c2d8-5123-4e62-921b-a1f4d75bf8c2}} , 3 mm and 320 ms, respectively. The dataset was divided in train, validation and test sets following a leave-one-out scheme on a subject level. Thus, the model was tested on all the slice positions belonging to an unseen subject. We compared the proposed network with statistical modeling {{cite:f783916cd2f5a20ceae01878dde9ec368cf7917d}} and with a similar architecture which uses the traditional feature extraction scheme (Conv-Pool stacking) {{cite:db37c6199427faa2e54ce98774ad3a88a3e49ca1}}. Two blood vessels were manually annotated in each image. We report results when varying the number of extrapolated times {{formula:7ee0558e-928c-48ab-8ae0-a70c17dd86fa}} given 5 input images. Table presents mean Euclidean distances between ground truth and predicted vessel positions. Figure REF shows a comparison based on the Normalized Cross Correlation (NCC) metric. The proposed model outperforms the compared methods for the in-plane motion prediction task. Results show a lower performance when more time steps are predicted. It is natural that, based on the same information, the error increases when extrapolating more time points. Moreover, the model does not cope with out-of-plane motion which also influences the reported values. Figure REF illustrates the vessel trajectory through the target and predicted temporal images. Our multi-scale encoder-decoder model showed the closest alignment with the target trajectory. Finally, Figure REF shows an example of the output sequence obtained by deforming the last input image with the predicted deformations. While the model showed a competitive performance, some limitations should be considered. The first is related to the inherent error introduced by the quantization, which depends directly on the selected number of bins. Also, since the predicted displacement fields are recovered from motion labels, potential misclassification may lead to unrealistic and ambiguous motion. This aspect should be investigated in a future study.
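For reference, the NCC values reported in Fig. REF follow the standard normalized cross correlation; below is a minimal NumPy sketch of a global, unwindowed variant (the paper may use a windowed one):

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross correlation between two images/patches of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)  # zero-mean, unit-variance normalization
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))           # 1.0 for identical patches
```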
{{table:02405f5d-a884-4629-aa41-f22a12bf9669}}{{figure:2ec6ce84-c6fb-45ad-956a-d5928144abe6}}{{figure:ea760a26-4366-4af1-a441-924e8f20002d}} | r | 2337733a469dac9353c3be1ed23c6ad2 |
Finally, transfer learning, in the form of pre-trained embeddings such as fastText, is extremely useful in the sense that the learned representations capture semantic properties of words in an unsupervised fashion; we also allow fine-tuning, which is proven to further enhance the accuracy of the models {{cite:a627af009107e958e6fc63604033b37ae593e91f}}. fastText's ability to implicitly utilize morphological structure in the form of sub-word representations is also proven to significantly improve the overall downstream architecture. We conjecture that this property holds in languages with a rich morphological structure like Greek.
| d | 55ede0d4cd70792e7d7fa69a926b7927 |
We also note that quasiperiodically-driven many-body quantum
systems, with two or more incommensurate drive frequencies, have
received attention only
recently {{cite:9a07a477373545f54949807edf27bf4fc00fb610}}, {{cite:0600ccb7023acf3fbf6ba94bfe40be2fd08fa3c7}}, {{cite:1dba21ea790d54e1861ed051b908b19ca888f2bb}}, {{cite:3f1ab410af61d9150d5371f207e98ff7cc25cfc4}}, {{cite:df445d42c54f03fcd33183f9bc04541d4203574e}}, {{cite:662f47f0891846031331767e1715c502f5ac1486}}, {{cite:97436d2fe214d6a3506e15f0c62d05b632df3658}}, {{cite:bc719a02d857ffb05fc7cebd02547fd6964d89ae}}, {{cite:5d9572ac6069bd1b904717c787d6867891a9d90d}}.
While the steady state is again expected to be described by a
featureless infinite temperature ensemble, much less is known about
possible prethermal phases in such settings. In case such prethermal
phases exist (see Refs. quasi7 and
quasi9), they would likely constitute a much richer
class than their Floquet analogs. Finding reliable perturbative
approaches, particularly beyond the high-frequency regime where a
generalization of the Floquet-Magnus expansion to quasiperiodic
drives exists {{cite:a5b7103d94113e270a06d39e8e47b3f613e8bed7}}, remains an uncharted territory.
| d | f01fcfc073e1034367af5741f403ec3c |
The quantum stochastic drift protocol {{cite:ce12268e96d760aec219f711deb47ae602d68f4c}}, or qDRIFT, is spiritually related to linear combination of unitaries but uses classically controlled evolutions rather than quantum controlled ones. This approach drifts towards the correct unitary time-evolution with high precision and with a gate complexity independent on the number of terms in the Hamiltonian. qDRIFT was later generalized to the continuous qDRIFT protocol for time-dependent Hamiltonians with an {{formula:bb347722-5bf4-4190-8bb0-e55ff5bc33f1}} -norm scaling in the gate complexity {{cite:97837094c95502468147c827b56373646e12a7b2}}. The principal disadvantages of this approach are that it has a larger scaling in the simulation time {{formula:2b2a384e-89cc-4068-a357-f3266cc12234}} compared to other algorithms and does not exploit any commutator structure between the terms of a Hamiltonian.
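To make the sampling rule concrete, here is a minimal sketch of the qDRIFT protocol for a Hamiltonian given as a list of weighted Hermitian terms; dense matrix exponentials stand in for the classically controlled gate applications, and the step count `n_steps` is left as a free parameter.

```python
import numpy as np
from scipy.linalg import expm

def qdrift_circuit(coeffs, terms, t, n_steps, rng=np.random.default_rng(0)):
    """Sample a qDRIFT product formula for H = sum_j coeffs[j] * terms[j].

    coeffs: positive weights h_j; terms: Hermitian matrices (unit norm).
    Each step applies exp(-i * (lam * t / n_steps) * H_j), with j drawn
    with probability h_j / lam, so the channel drifts toward e^{-iHt}.
    """
    lam = float(np.sum(coeffs))
    probs = np.asarray(coeffs) / lam
    tau = lam * t / n_steps
    U = np.eye(terms[0].shape[0], dtype=complex)
    for j in rng.choice(len(terms), size=n_steps, p=probs):
        U = expm(-1j * tau * terms[j]) @ U
    return U
```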
| i | 990747a4fd6ff53681e833c4aa3e5e03 |
The Stein Variational Probabilistic Roadmap proves to be a robust method for efficiently generating graphs well-suited for multi-query planning, outperforming existing biased-sampler PRM approaches.
Future work will investigate how to pick appropriate prior distributions as well as the number of particles, with more emphasis on planning for manipulation, as these results were very promising.
Another exciting result worth investigating further is the tunable optimism parameter for partially explored environments.
This is especially applicable when running the algorithm in an online setting where the particles are updated incrementally as new observations arrive, similar in spirit to the results in {{cite:e422b5f0d10100230ba6ea76020cd91f458a637e}}.
Finally,
this approach would be an excellent complement to learned neural network samplers that leverage experience to guide sampling {{cite:2de6c0afde3a4f9e783e60cdfb0f465fc0904ed9}}, {{cite:ff322723ba45e4983a3fe192d1ca607fc0bc91c0}}, {{cite:393567ba5f676d7da2dc4454d36dc7e483fa38dd}}, {{cite:7ec032fc969e2b673b9bdb3c8bfc672dbd234707}},
where SV-PRMs can continue to optimize these biased samples
or directly incorporate the scoring functions as a probabilistic cost.
| d | a02b4abf233074051f0ef112af75c514 |
In terms of methodology, there are several ways in which the definitions of the post-block trajectories themselves could be refined. While we measure departure as a binary value, in reality community participation occurs on a variable scale and it is possible that blocked users do not depart, but drastically reduce their participation
(or conversely, increase it).
Similarly, defining recidivism in terms of a second block is an imperfect approach because it is possible that some users reoffend without getting blocked.
This is especially true in the case of personal attacks, where each individual's threshold of acceptable behavior may vary.
One possible way of addressing this would be to combine our approach with prior work in automated detection of toxic language {{cite:55752e4147434bc6f4fd90c40032db0f271d8cb0}}, {{cite:9976e5ad817aafbcdcadeda3fe5f2626c523d814}}, {{cite:dab122e35958b9e2df1bb01a8024fe61a7474644}}.
| d | 04ad191e1e461eeadd4e14a40a11e3a7 |
The quantitative boundary estimate (REF ) was established by Chen {{cite:adbd6fd6043f2c5537e556657b6c50c9715903ed}} for the Dirichlet problem of the complex Monge-Ampère equation on {{formula:de287695-4ff3-4174-a708-53b8f59a55fe}} , and further by Phong-Sturm {{cite:ab178453419a9787515472c68b671874693f1745}} for the Dirichlet problem of the complex Monge-Ampère equation on compact Kähler manifolds with real analytic Levi flat boundary.
See also Phong-Song-Sturm's survey {{cite:f1f67a4a75c8e1b615ab61df4caefb9defbe6859}}.
As in the statement of Lemma 7.17 in {{cite:b277ec1ea1349704b0652634587b23eea7effc04}}, (REF ) indeed holds for the complex Monge-Ampère equation on compact Kähler manifolds without assuming the boundary to be real analytic Levi flat. To the best of the author's knowledge, only the literature mentioned above has studied this topic.
| r | ec90b378b38640cd515f0923c27935bf |
Rather, the ablation experiments of Table REF demonstrate the contributions of our methods with respect to a fixed network architecture.
The first row establishes a baseline, using the network trained without the negative depth loss (i.e., {{formula:4e392ca1-95e9-4837-a054-93003f7adb8d}} ) and no occlusion-handling method, explicit or implicit.
Importantly, when the negative depth loss is excluded, we also include in-frame negative-depth points from {{formula:ad5219e5-cf78-47b6-a2f5-7f40e053fd5f}} in the other losses, as in prior work.
The second row demonstrates that incorporating the negative depth loss (and excluding {{formula:cb74825b-1da3-457e-b04d-ee7176b0e788}} from the other losses) yields an improvement across all but one metric.
The next group of rows incorporates the z-buffer for occlusion handling at different points in the training process (i.e., at the beginning, and 25%, 50%, and 75% of the way through).
Having found an optimal time to insert the z-buffer (epoch 11 of 20, 50% of the way through the training), the last two rows test the relative contributions of the negative depth loss and the alternative occlusion-handling heuristic proposed by Gordon et al. {{cite:3f1866ed8beee02049da2a894ffa75b3276ad3bf}}.
| r | cd127961a99a3659dd5ba469d0bb080b |
Multi-agent trajectory prediction aims to predict future trajectories of multiple agents conditioned on their past movements. This task is critical to numerous real-world applications, such as autonomous driving {{cite:0603f27a6fd07fd46e468eed85bc7208f18c30bc}}, {{cite:33b6b03c03bf463db0ce990b2531d5a4111a0617}}, {{cite:3fb7ea28b393bd88cee9ec3f9462cf75ae2649db}}, industrial robotics {{cite:8d1f1e7843ce9b4ea816ddc7c011821a090b74e0}}, {{cite:75840f91d07cd005f48382aea6fb64a2d48cbe85}}, human behavior understanding {{cite:66fac7ce37c080bceea24ac385496387c05c4b6a}}, {{cite:0024453af59b9ac43bac739c0c7de5e7740cca35}} and surveillance {{cite:561c3e7b5542aebbb277ac58e679e6f947fb95ed}}, serving as a foundation to bridge the knowledge of the past and the action for the future.
| i | 0f3b2111f801577ade184b4e964af151 |
In EigenFind, for images {{formula:93eb500b-0506-47ba-bede-d18a822cf204}} classified as {{formula:68d2e33e-5ed9-4a30-afb1-48c64062baa4}} , we calculate how moving their latents in the direction of each of the top {{formula:466ba956-1830-458b-89b5-8d57757dc8d0}} StyleSpace Eigenvectors affects the classifier decisionBoth positive and negative directions are evaluated, but the negative directions are omitted here for brevity.. Next, we follow {{cite:4eeb0fb9d0f3ee1904d81485e57422e9e98a724f}} and estimate the most significant Eigenvectors by calculating the average difference between the classifier logit before and after the change on all input images {{formula:975a24d7-05b0-4905-823a-c401b08a693b}} . Finally, for each image, we find which of the most significant Eigenvectors is able to flip the image label to {{formula:423480f8-c16f-4363-80e7-4b776c1cb1b9}} . The resulting image {{formula:31c8bbdd-fbb5-45aa-a62c-20b04e9f507b}} is the counterfactual.
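A schematic sketch of this selection procedure is given below; the names `generator`, `logit` (the classifier logit for the target class), `alpha` (step size), and the single-latent simplification are hypothetical stand-ins for the actual pipeline, which averages significance over all input images.

```python
import numpy as np

def find_counterfactual(w, eigvecs, logit, generator, alpha):
    """Return a latent moved along the most significant eigenvector that flips the label."""
    base = logit(generator(w))
    # Significance of each direction = logit change after the edit
    # (estimated here from one latent w for brevity).
    gains = np.array([logit(generator(w + alpha * v)) - base for v in eigvecs])
    for idx in np.argsort(gains)[::-1]:       # most significant first
        w_cf = w + alpha * eigvecs[idx]
        if logit(generator(w_cf)) > 0:         # this direction flips the label
            return w_cf                        # -> counterfactual latent
    return None
```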
| m | 0b0e98e72b8b28ac499d60f024c3b9b7 |
More recent observational evidence indicates that the center of our
Milky Way harbours the closest candidate for a supermassive black
hole with strong magnetic fields. Using multi-frequency measurements
with several radio telescopes, {{cite:e75a3f5e45224fa58ca28f9c73f6d44a08fcc9ad}} showed that there is a
dynamically relevant magnetic field near the black hole. If this
field is accreted down to the event horizon, it provides enough
magnetic flux to explain the observed emission from the black hole,
from radio to X-rays. In addition, {{cite:71597e743b630f2a9fa99874b8ee83380111f049}} reported that jet
magnetic field and accretion disk luminosity are tightly correlated
over seven orders of magnitude for a sample of 76 radio-loud active
galaxies. They concluded that the jet-launching regions of these
radio-loud galaxies are threaded by dynamically important magnetic
fields, which will affect the disk properties. In this paper, we
investigate the plausible modifications of the standard models of
quasars and AGN in light of the very recent observational evidence
for the important discovery of a dynamically relevant magnetic field
near the GC. In particular, we focus on the possible origin of the
strong magnetic field in the galactic nucleus and study some of the
important effects of the ultra-strong magnetic field. The very
recent astronomical observations concerning the strongly magnetized
super-massive central black hole are described in considerable detail in Section 2. The key roles played by such observed ultrastrong radial magnetic fields in the standard models are elaborated in Section 3. The possible origin of
these strong magnetic fields near the GC will be considered in
detail in Section 4. Our model of a super-massive star with magnetic monopoles (SMSMM) is given in Section 5. We show explicitly there, by a preliminary estimate in terms of the observed W51 data, that the generally accepted {{formula:9cc4149d-08ca-4198-bdba-3c06314bc810}} -turbulence dynamo mechanism of Parker cannot be used to generate the observed strong radial magnetic field.
However, good agreement with observations may be achieved if the
central black hole of the standard model is replaced by a
supermassive stellar object containing magnetic monopoles
{{cite:c56afccf1c62bbc11c50db603a2a0e305e363950}}. In this model, the production of the strong radial magnetic fields can be naturally explained. We also discuss other evidence against the black hole model of quasars and AGN in Section
6. Finally, in Section 7, we briefly summarize and emphasize our
results.
| i | 398d407e596ae53d03de8e10ca06b5d2 |
In this section, we present numerical results to verify the performance of the proposed transmission scheme based on Bayesian optimization. The BS is equipped with {{formula:b66ac639-e77a-4d74-a341-099f61cf2f5c}} antennas. The number of elements in the RIS is {{formula:4b756003-cb4f-4845-a997-57d49a568a33}} unless otherwise specified. The parameter {{formula:a76615ec-d4ed-4191-b6e7-1a7ad39f6674}} in (REF ) is set to {{formula:3027706f-26ef-425d-bbbe-0357230f6022}} . The window size is {{formula:973334a4-a750-4cb2-b456-a56c40f2b0a0}} . The SE kernel is used in the Bayesian optimization, and the hyper-parameter {{formula:be82a999-b66b-44e5-abf0-39b9df2d125c}} is learned by maximum likelihood estimation as in {{cite:65389ca3d938d329e1a478247ce6c6eb8027a6d1}}. The number of partitions {{formula:a7b20330-47e3-41a9-9684-0d04dc4b6d7d}} is equal to the input dimension {{formula:c121392f-3e19-414e-b9fa-099d509244ac}} . The smoothing parameter is {{formula:7b4c30b9-fd01-4ef6-97db-e94d862b286d}} . The simulation results are obtained by averaging over 1000 random realizations, with the maximum iteration count {{formula:2abacb39-9ecd-4df1-894b-b4770dc94c7c}} .
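For reference, the SE (squared-exponential) kernel mentioned above has the standard form below; the hyper-parameters `ell` and `sigma` are what the maximum likelihood estimation learns (a minimal NumPy sketch):

```python
import numpy as np

def se_kernel(X1, X2, ell=1.0, sigma=1.0):
    """Squared-exponential kernel k(x, x') = sigma^2 * exp(-||x - x'||^2 / (2 ell^2)).

    X1: (n, d) and X2: (m, d) arrays of input points; returns the (n, m) Gram matrix.
    """
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return sigma**2 * np.exp(-0.5 * d2 / ell**2)
```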
| r | 53cd9e03c419cba5682a7eeabeaa4f68 |
The KITTI outdoor dataset {{cite:3c5552d8d1bb5a1e9d6b5d6ae04556f8b45bb9dc}} has a maximum distance of 100 meters, which makes it more challenging compared to an indoor dataset. We compare the results of our approach with existing approaches {{cite:259ec8e602e0328ff7229bf12e2ede7e4b6faa92}}, {{cite:33223d582b3baf8347445bc06ead855273dd7ee4}}, {{cite:49348c5ed2a113a9ed6f5c39d732d3d76e6a66a7}}. Table REF shows the results. Without sparse depth samples, existing approaches have an RMSE larger than 6 meters, while our approach achieves the best accuracy with an RMSE of around 4.5 m. We also compare the results with different amounts of sparse points. Our results consistently outperform Ma et al. {{cite:33223d582b3baf8347445bc06ead855273dd7ee4}} and others, reaching the best accuracy, an RMSE of 3.1 m, with 500 sparse samples.
{{table:00f7d0e8-5a2c-444e-85d7-0022145a7468}} | r | d87addf918fabbfe4e2eee4f47307cde |
On the other hand, fractional calculus is a generalization of classical differentiation and integration to non-integer order. Some fundamental definitions of fractional derivatives were given by Riemann-Liouville, Hadamard, Caputo, Hilfer, Liouville-Caputo, Grünwald-Letnikov, Riesz, Coimbra and Weyl. Fractional derivatives describe the memory and hereditary properties of many materials; see {{cite:242149a6ae56e347220f78b42fa6b7dc53ee7378}}, {{cite:f929bd9a20a78e194bcd06ea40e64413e58998ac}}.
| i | f670f58ec7c082e390a5c27883b4985b |
Curriculum learning for neural network optimization has been implemented in a variety of ways. Approaches in {{cite:2885eb32a576e5a2db2a3208cec2f5314e6b9cab}}, {{cite:36ce8ddda8fb64433b3b1c54b13bf56a6e7f15e3}}, {{cite:94527d9cc359578e5735c2925dfb471e1cbf8e81}} propose a sampling strategy in which a scoring function, obtained either from the properties of the data or from a pre-trained `teacher' model, serves as the source of prior knowledge, and a pacing function adjusts the sampling weights depending on the student's progress. Our approach employs the reinforcement learning paradigm adopted by {{cite:926db326f1485ef4b352d1e4cbe37793a4ace38b}}, {{cite:7f548d4a8003f0aac85a0f3c1cc0ce705a0abd5c}}. Graves et al. {{cite:926db326f1485ef4b352d1e4cbe37793a4ace38b}} experiment with a k-armed bandit algorithm, while {{cite:7f548d4a8003f0aac85a0f3c1cc0ce705a0abd5c}} use more complex deep Q-learning networks.
| m | 5a362a4864c046107f984051c8c3b81b |
where {{formula:36508d7d-3c9d-45e0-a8b6-e504292ece66}}
represents the
desired gap or “margin” between scores for {{formula:bce3102c-11f7-4607-835f-e85cfef84305}} and {{formula:4d3a6940-bda6-4676-baf9-34a128777a17}} .
For suitable {{formula:d8f7fc1d-e9ac-4db6-9fd0-554235296add}} , this loss allows for greater emphasis on rare classes' predictions. For example, {{cite:d1c5666ad6f1244b0d31776bfbf4caf396748e97}} proposed
an “adaptive” loss with {{formula:bf2b810f-3189-4458-b67b-bd084f5301ef}} .
This upweights rare “positive” labels {{formula:e036e3e3-b6e8-4aff-b979-645067c70572}} to encourage a larger gap {{formula:c333b22c-0f48-4fc1-9cd1-a250fec20983}} for such labels.
{{cite:568d5a8d195db2853b1e2a879c8531671c80e869}} proposed
an “equalised” loss with
{{formula:2186ac4c-6d3d-44ad-a9b9-f92d61012a2d}} ,
for increasing {{formula:bd25f13c-c57a-4171-9b59-1f531a6f6fa3}} .
This downweights rare “negatives” {{formula:e4d0f4a3-ccb8-4eaf-a2fa-50f117b92d48}} , thus preventing {{formula:a248043b-9fae-409c-8acf-43c34978d71d}} from being too small. Finally, {{cite:1d49e405c322483ef54b9f5824aeade9fefe925b}} proposed
a “logit adjusted” loss with
{{formula:7dfd9556-192c-4a7c-be6f-52ecada9d968}} .
This encourages larger margin between positives {{formula:b3c9d608-d4e9-44a9-a48b-835dc90245cf}} that are relatively rare compared to negative {{formula:e6174169-e760-4230-b71a-a53f267e312b}} .
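A minimal sketch of how such pairwise-margin losses can be implemented follows; the softmax-margin form below is one common instantiation (the cited works may use different functional forms), and the matrix `delta` of desired margins is what distinguishes the adaptive, equalised, and logit-adjusted variants.

```python
import numpy as np

def margin_softmax_loss(scores, y, delta):
    """Generic margin loss: encourages scores[y] - scores[y'] >= delta[y, y'].

    scores: (C,) logits for one example; y: true label index;
    delta:  (C, C) matrix of desired margins between class pairs.
    """
    z = scores + delta[y] - scores[y]   # margin-shifted score gaps
    z[y] = 0.0                          # true class contributes exp(0) = 1
    return np.log(np.sum(np.exp(z)))    # softmax-margin cross entropy
```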
| m | 54bd521ed3a892efa87fb89b682cadbd |
In this paper, we propose a novel framework, Spatial Content Alignment GAN (SCA-GAN), to enhance the content consistency of garment textures and human facial characteristics. We adopt a two-phase approach to map the misaligned content information to a spatially aligned feature space. Firstly, we employ a Prior Content Transfer Network (PCT-Net) to transfer the edge content separately. Instead of using the part-level parsing labels suggested in {{cite:bbad8b70132b12f57e0f379425f21aabaef59c37}}, we leverage pixel-level edge maps to explicitly highlight high-frequency signals, which dominate a wide range of the spectrum of content information. The prior transferred edge map can serve as an extra constraint from a high-level perspective, providing spatial hints for the characteristics of both person identity and garments. Secondly, we develop a new Image Synthesis Network (IS-Net) to synthesize photo-realistic person images according to the appearance of source images, target pose heatmaps and the prior transferred content maps in the edge domain. To encapsulate the unique pattern statistics from the original image, we exploit the residual-like Style Encoder Blocks (Style EnBlk) to strengthen feature extraction during the encoding stage. After that, we propose the Content-Style SPADE (CS-SPADE) to synthesize fine-grained appearance details by generalizing the input source to accept both aligned content images (pose and edge) and misaligned style feature maps during the normalization. We conduct extensive experiments on the challenging DeepFashion {{cite:f39b1ec3e175bb04183ad079451e9dd8e20e98ac}} benchmark to verify the superiority of the proposed framework compared to some state-of-the-art approaches. In summary, we summarize our contributions as twofold:
| i | ffd885f33e984111b14e7ee339964c7d |
Earlier works {{cite:7cc573ce807e1ce1cbbbd98df90dbfe0198a15b3}}, {{cite:bddef2e20afec2b68de8339154e5204599973c8a}}, {{cite:1705e59ce078eab14953836c8d475d994c7dd9d3}}, {{cite:409b3baf507fb76cdd7e6b4fbcb92c89aea9f2b1}}, {{cite:add815f663c2eac812f236ecf19cb531c2b2d831}}, {{cite:60337ba78681d0bf0514f22501c072bd2e139f08}}, {{cite:e14e5e0dab81da7632eb742f2a2dc613fa59e856}}, {{cite:b1a918a26c05271d3340490f3bb23885fc964eaa}}, {{cite:539ac483c0d3ea60234982818375e35bde8f31ae}}, {{cite:d969df623d60e27858a8d03a5e1b1c53ebf2f95d}}, {{cite:018a4f83a2ce9be841ca2a1ec79322136bfdaed5}} attempt to learn a mapping from input image to the pre-defined kinematic joints via an end-to-end network, and directly regress the keypoint coordinates, which we refer to as the regression-based approaches.
| m | 32a5c649fb482e6d1806ab4e1ac0a88d |
Fractional calculus and related equations have become important
topics in many science and engineering fields. For instance, they
appear in mathematical modeling {{cite:307de33ef8e4cbb4367f85e23370c89535782582}}, {{cite:393bd700f90f9a2f39b62f13c768fe0e27877fdb}}, control engineering
{{cite:a616f5eb04eb685a204ac8380bdbe0f5a5cdf81f}}, {{cite:5cd21b30f615228e60345dd8d169b7e5fbb05e02}}, biophysics
{{cite:d1bc6355aa994ce9eed63179a8667cfe0bc471d2}}, {{cite:627616bac24c025976ad9bb8481a399b1bbb8675}},
electromagnetism {{cite:1ec4f79084264e75332d4c3305bd3bc5333d19e9}}, {{cite:9a1e3ddaf0e4f2584631d39b42afbc32496439fc}},
polymer science {{cite:6b103c1ebd3c81837e4cc7ab1087812ae0e2e8a9}}, {{cite:0263f53e402ae8fe9c343126e3c4daa716baf157}},
hydrology {{cite:537e0b24d9e606051d7d0fbb9af2b2a6b92d2dbb}}, {{cite:762cf74690c488b5fa9ade6a163458470bc36251}}, and even finance
{{cite:94b647e8afc6dfe71c6dc057590879b188207c53}}, {{cite:c896eafe6586c8e2d607662e0de0ac1f922e718f}}.
While the classical heat equation {{formula:d09bcfe4-c261-464b-99e1-52281919c0b2}} describes
the heat propagation in homogeneous mediums, the time-fractional
diffusion equation {{formula:d7465286-14e2-4965-9601-ced2c73dafd0}} , {{formula:d641b6a3-5ce0-46b8-89a3-394deb5c2a15}} , can be used to model the anomalous diffusion exhibiting
subdiffusive behavior, due to particle sticking and trapping
phenomena {{cite:4abdaa6dc123c2e43faa3777f72b07f1f1ac6b61}}, {{cite:c478d241e1c6ee1ae2740f304002db9160d6d057}}. The
fractional wave equation {{formula:7ec0ce84-42f1-4a15-a0c8-b5081a559a3c}} ,
{{formula:be103576-8c3d-49e8-8f72-87681fbc844a}} governs the propagation of mechanical diffusive
waves in viscoelastic media {{cite:143ef003c759a371e5110e203e8ce12fb3a2ca90}}.
Fractional differential equations also play an important role in probability theory, in connection with non-Markovian diffusion processes with memory {{cite:b2324212fee56c2ce0db7c6b4204938ebec0a8d2}}, {{cite:f5d395114c4fb31eee170cdb8e5022b828c8140c}}.
However, so far, the
study of time-fractional partial differential equations is mainly
restricted to deterministic equations.
For the results on deterministic equations, we refer the reader e.g.
to {{cite:f9f045ddac196ebde236df7d0eaa0b1977eb2491}}, {{cite:360c517b74a97fee9f8730e32b2255e0462338bb}} ({{formula:1ab6252b-1451-4fe9-b565-ee950fe302a5}} -theory),
{{cite:358ebebdab1c70dafdbcaf829f37e8d4a6b4149c}} ({{formula:78ca1701-603d-4869-81ac-d56b1e7d6087}} -theory),
and {{cite:42cac8575670fbb60e3e92d18313866c234d3c76}}, {{cite:520ace3dda0f30fb054a234f8e56958769e8315c}}, {{cite:1e009e36ec4d7577dce47466735a93264624d95d}} ({{formula:d91efc9c-83e5-464b-90ad-336cd22a12bd}} -theory).
Also see
{{cite:465e5f1090c224b1e3dd34dbd2293221fb64aa40}} for {{formula:d96e2be4-354f-4a3a-9a66-18bdf8f42c8f}} -type estimates,
{{cite:3fb7e676071f272b85d2c92ef706597eaa4c7891}} for Schauder estimates, {{cite:f7b3d474c45913f3cb77a657b13e7bd08c341fd7}} for DeGirogi-Nash type estimate, and {{cite:fe79793e83fd73b74932319b82a3bfdd24750e76}} for Harnack inequality.
| i | 2cd8ce0042bd468630002fdf79a2a195 |
We evaluate both Private-UCB-PO and Private-UCB-VI under different privacy budget {{formula:7024cffc-d640-4f1b-a21a-3abe34014840}} and also compare them with the corresponding non-private algorithms UCB-VI {{cite:03d3eb7c8669d0758374a88f3980d38636622160}} and OPPO {{cite:68d7d36a578eb561f0f6d8463cfb8f66c4a60720}}, respectively.
We set all the parameters in our proposed algorithms to the same order as in the theoretical results, and tune the learning rate {{formula:63ec60c1-21c5-49e0-b409-8fc11f18ba57}} and the scaling of the confidence interval. We run 20 independent experiments, each consisting of {{formula:199fd6f8-57d6-4c64-9ef7-ab03bb413e6a}} episodes. We plot the average cumulative regret along with its standard deviation for each setting, as shown in Fig. REF .
{{figure:0f37d610-7cfa-4504-b569-8599d7c0a510}} | r | e7c8f489f0440fc918e782e0cb7eb040 |
The comparisons in this paper are made without consideration of the
solar cycle, although the outer heliosheath regions respond to the
variations in the solar wind dynamic pressure and magnetic field that
characterize the solar activity cycle
{{cite:2a84ac326d0e73b25b3013655b66ec2397dd9d55}}, {{cite:6f3966625b794daa4eb4cc4e668a42a92996835e}}, {{cite:b0315b03034e8defa1e2f2a648cfde660d850ec7}}, {{cite:e3424c2d6eb38c1062b9ce162f358ed944478c00}}.
Although the solar cycle will cause the heliosphere to expand and contract as
the solar wind dynamic pressure changes, these pulses travel
only a relatively short distance upstream of the heliopause ({{formula:33e3d396-257b-4872-97ea-f60738ee3f83}} AU). The influence of the solar cycle on ENA production and the
Ribbon phenomenon is not yet understood. The Ribbon intensity may
vary over latitudes due to the ion energy differences and travel
times. The extremely low levels of solar activity during the first
year of IBEX observations suggest that the solar activity cycle variations
must first be understood before reaching a conclusion that we have
entered a new interstellar cloud
{{cite:64c18c8be0e10ce2ea4c126f27bd1eef4728b6ec}}, {{cite:26906d3fb82a419e66256064df5fbeab0eaf6b5d}}. As the
theoretical models of ENA production become increasingly robust, we
expect that studies such as this will yield definitive information on
both the heliosphere boundary conditions and the physical properties
of the interstellar cloud around the Sun. Finally, while this study
has examined only one of the possible sources of the Ribbon currently
under discussion, the other ideas for producing the ribbon
{{cite:bb83967fdbcee08b289d1cbc16e95d22e9bd24c4}} also generally invoke and seek to match up
with the orientation of the external IMF, so even if another
explanation eventually becomes accepted, it may still be possible to
directly detect the interstellar transition with IBEX.
| d | 2f3bf3cc2b55e5c8cd7724dcafe30736 |
Toeplitz methods: Toeplitz methods {{cite:6ad23cfc99c74c3f5f879b45b5bf6888aabc19b6}} work for stationary GPs with equally spaced design points. These methods leverage the Toeplitz structure of the covariance matrices under this setting. To make a prediction in terms of solving (REF ) and (), there are two approaches. The first is to solve the Toeplitz system exactly, using, for example, the Levinson algorithm {{cite:a893374a998fb83f6ea6d9117f38462e1925d581}}. This takes {{formula:0843ed8b-15f8-416b-8036-825509334ab0}} time. A more commonly used approach is based on a conjugate gradient algorithm {{cite:1c603809c85127c87e0e80ffa003938be2e9185e}} to solve the matrix inversion problems in (REF ) and (). Each step takes {{formula:51c17923-1676-45f0-a8db-c00f7c8c5456}} time. For the sake of rapid computation, the number of iterations is chosen to be small. But then the method becomes inexact. Moreover, the conjugate gradient algorithm is unable to find the determinant in (REF ) {{cite:483b16e112605d3b716764a8fc501dd50cc8993c}}. Thus one has to resort to the exact algorithm to compute the likelihood value, which takes {{formula:8de0d4ac-aa01-4d3b-9e2e-9ace05810f32}} time.
Toeplitz methods only work for equally spaced design points. This is a strong restriction for one-dimensional problems. For multi-dimensional problems in a tensor space, this restriction can also be problematic, especially under a sparse grid design: many famous sparse grid designs are not based on equally spaced one-dimensional points, such as the Clenshaw-Curtis sparse grids {{cite:6e4d1524879ae186e06c02d1bbcdf640063867ad}} or the ones suggested by {{cite:1eb62e72f54b3bed82195268ebae5a92765e1902}}.
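As an illustration, an exact Toeplitz solve on an equally spaced grid is a one-liner with SciPy's Levinson-based solver; the SE covariance below is a hypothetical example.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Exact Levinson-type solve of K x = b for a stationary kernel on an
# equally spaced 1-D grid; only the first column of K is needed.
n = 2000
t = np.arange(n) * 0.01
first_col = np.exp(-0.5 * (t / 0.3) ** 2)   # SE covariances k(|t_i - t_0|)
first_col[0] += 1e-6                        # nugget for numerical stability
b = np.random.default_rng(0).normal(size=n)
x = solve_toeplitz(first_col, b)            # solves the Toeplitz system
```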
| m | f3c27f1bfd1e7af0e758d53146c1aae0 |
A new coronavirus designated Covid-19 {{cite:135775482076a2783e98a49eeeae40a348fb3c6d}} was first identified in Wuhan, the capital of China's Hubei province. It was reported that people started developing pneumonia {{cite:1a1124ecf7691a03b33668a6fe2fde6f01057be9}} without a clear cause and for which existing vaccines or treatments were not effective. The virus has shown evidence of human-to-human transmission. As of 24 January 2021, approximately 99 million people had contracted the virus and 2 million had lost their lives. As a ripple effect, many people lost their livelihoods, and about 40% {{cite:a8dd08dd9e00a24cc19f24a44b7665501348c724}} of small businesses closed down. The majority of countries were not prepared for such a pandemic in their hospital systems, which led to many doctors risking their lives and working on multiple cases. With the help of technology, Covid-19 detection through CT scans can be automated, reducing the time to roughly 2 minutes per scan, compared with the 10-15 minutes it generally takes. Many recent research papers {{cite:4d0a3c4d33ab2c11b367e56f6a880b3f5df1b719}}, {{cite:0ab2c04ba0c9bb5f37045931ea5b9a25a8ab7b73}}, {{cite:891e94c0ae6edec668988bbe6098593f5ab9599e}} have suggested tackling this issue with deep learning, but with limited and biased data, it is tough to draw good inferences from their results.
| i | 20fdfd2787d3b3d6779ef43a52fa23be |
Another area where quandles find applications is the theory of the set-theoretical Yang-Baxter equation. The Yang-Baxter equation first appeared in theoretical physics and statistical
mechanics in the works of Yang {{cite:29d4718f36e55d0153949c7c5b70db6fa3ae8877}} and Baxter {{cite:94ab67b2ba4cb3c80fbecd97a2fe354c2f04a728}}, {{cite:1cf689abf90995aa6f10f64b927ed3b57ab2bb28}}. The notion of the set-theoretical Yang-Baxter equation was introduced by V. Drinfel'd in the context of quantum groups (see {{cite:dae2faa218862ebdfa44b52c9bd93abdbdd66d08}}). Recall that a set-theoretic solution of the Yang-Baxter equation is a pair {{formula:8226ad98-80a1-4135-9f51-af096dd38032}} ,
where {{formula:d79cbe8a-447f-4562-8720-e7bef86689b8}} is a set and {{formula:2f15c42e-6fb7-4d76-9254-f6ccc8e9c2bf}} is a bijective map such that
{{formula:15d0f341-eabd-4789-a1b9-8106f5858a69}}
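In the braid-relation convention commonly used for set-theoretic solutions, this condition can be written as (a standard form; the precise convention in the displayed formula may differ):

```latex
(r \times \mathrm{id}) \circ (\mathrm{id} \times r) \circ (r \times \mathrm{id})
\;=\;
(\mathrm{id} \times r) \circ (r \times \mathrm{id}) \circ (\mathrm{id} \times r)
\qquad \text{on } X \times X \times X .
```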
| i | d26aa117661134fd1894294e356cacfd |
Designing machine learning models ({{formula:3b33fc4c-94de-4b65-b2c8-23f18df0a742}} ) that maximize data efficiency is critical to the success of solving real-world tasks. Indeed, breakthroughs in machine learning are often driven by novel architectures such as LeNet {{cite:dcbce7a5d502e5c0dab8c9e882c2739791428bea}}, AlexNet{{cite:763435a39af3d331b92fbc37c9c6de586598a97a}}, Transformer {{cite:c1bda5644b45b5fb733cf29d21b57dfc5f22caa6}}, etc. While some of the inductive biases of these methods are clear (e.g., translation symmetries of CNNs), others tend to build off of prior empirical success and are less well-understood (e.g., the implicit bias of SGD). To build our understanding of these biases and how they affect learning, we conduct a theoretical analysis of them in the infinite-width setting {{cite:9598a82c0cd6df54605efa512d62114e65c0f11b}}, {{cite:a8c46c8940fd072861f82253c249873099fd0d2d}}, {{cite:9bc1e46308bcda7e642316cf9b4699be0272e8dd}}, {{cite:524608458a392598e3c79bf1a10666eb247346dd}}, {{cite:5c7c8f4e790734000e5116ccdc492cbc90a1d1de}}, which preserves the most salient aspects of the architecture while enabling tractable calculations.
| i | c661aaa29424aab8153c48c91053dee2 |
What is the evidence for the supersolidity?
The direct evidence of supersolidity is the observation of
a Nambu-Goldstone mode {{cite:444f62881e0ab71e672ddd4878ccb88ac0de2332}}, {{cite:7dcf8b2a7ea60fe97e88773a566fd27da12d51b3}}, {{cite:0051532cd7f2b45a0254b1fb6b0c31851d044b86}} due to SSB of the global phase as was confirmed very recently for an optical lattice supersolid {{cite:08111cb779f5f56ed2341b8ddb67376fd58fd547}}, {{cite:a341d6b8db2360573c4f8366f7e76bfc4f48864e}}, {{cite:6c5f3c4dc6296cabf2d18870eaa7a2f2f79c8f6d}}.
Since the superfluid density {{formula:d4d94f0d-3c18-409a-942f-2e396d3e613f}} is the order parameter of the SSB of the global phase {{cite:4db858b08e2a39675a7ad68b11f7ef6ff8af45c4}}, the existence of {{formula:10a7051e-4de3-4ef1-b13d-60c00f40cbed}}{{formula:ffc1d5ca-0515-4f54-b03c-3eb8f7951c1e}} in the GCM {{formula:e64e3b49-b215-49f9-804c-fb6f8cc29b2b}} cluster wave function of the ground state due to the duality accompanies a Nambu-Goldstone mode state, which is a very low-lying collective state and is difficult to explain in the shell model.
This logic is the same as that behind the emergence of rotational band states in deformed nuclei.
Quadrupole deformation with the order parameter {{formula:e2d1483e-c671-4dc9-92ca-c4b1b251a501}} is caused by a quadrupole boson condensation in the ground state due to SSB of rotational invariance {{cite:e66565147c0d68bd4b4c8548995e450f9211de2a}}.
The appearance of the intruder collective states at a very low excitation energy near the {{formula:0257ab70-cbed-4b06-b32f-066d8e97a24d}} threshold such as the mysterious {{formula:6f099142-5b3b-487c-81b4-3a7699a4cc12}} states in {{formula:88281508-8e50-42e8-b1f4-bbe7933f062c}} Ca and in {{formula:d29bef08-53c4-4c37-9672-207a20ca953c}} O, analogous to the intruder {{formula:28aa16d7-1912-45c8-8a6b-bcbe1cd3946b}} state in {{formula:a7d304fa-4241-479a-8cf3-bf05734aa36a}} C, which has been understood by the empirical threshold rule of the Ikeda diagram {{cite:a6ea13efbacb779761473621f357e2fbc79b0b88}}, {{cite:3ad16d1e958b5916e4edbd4ac717bea87a37a354}}, is considered to be understood from the viewpoint of the Nambu-Goldstone mode due to SSB of the global phase of the {{formula:36fe8c7f-df1c-4954-90a3-2448a7ab28a1}} cluster structure.
| d | a839f7816cc59eb72769251e737292b4 |
After training the various Places-CNNs, we used the final output layer of each network to classify the test set images of Places205 and SUN205 (see {{cite:0814ae654a2abbe76ad9ee31a75ed33c20abfb9e}}). The classification results for Top-1 accuracy and Top-5 accuracy are listed in Table REF . As a baseline comparison, we show the results of a linear SVM trained on ImageNet-CNN features of 5000 images per category in Places205 and 50 images per category in SUN205, respectively.
| r | 41dc83c70cecb5e48171bcc9471ac2fa |
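The linear-SVM baseline described in the row above can be reproduced schematically with scikit-learn; the feature and label arrays below are random stand-ins for CNN features extracted offline, and the dimensions are scaled down from the actual 4096-dimensional, 205-category setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Stand-ins for frozen CNN features (e.g., penultimate-layer activations).
rng = np.random.default_rng(0)
train_feats, train_labels = rng.normal(size=(500, 512)), rng.integers(0, 10, 500)
test_feats, test_labels = rng.normal(size=(100, 512)), rng.integers(0, 10, 100)

clf = LinearSVC(C=1.0)             # one-vs-rest linear SVM on the fixed features
clf.fit(train_feats, train_labels)
top1 = (clf.predict(test_feats) == test_labels).mean()
print(f"Top-1 accuracy: {top1:.3f}")
```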
Temporal Coherence. We would like to ensure that the generated videos are temporally coherent. That is, a smooth and non-jittery motion is generated. To this end, we apply a temporal regularization on the generated keypoints and minimize the distance between keypoints in adjacent frames:
\mathcal{L}_{\mathrm{tmp}} = \sum_{i=0}^{N_A - 1} \left\Vert k_{a,i} - k_{a,i+1} \right\Vert_2 + \sum_{j=0}^{N_B - 1} \left\Vert k_{b,j} - k_{b,j+1} \right\Vert_2
Since JOKR is encoded for every frame only from its respective image, flickering is sometimes introduced because a keypoint shifts in meaning between frames (e.g., a keypoint describing a back leg suddenly describes the tail). We observe this usually happens when the figures undergo large motion. Hence, similarly to Siarohin et al. {{cite:3fbaa592b6ce8dbdc66a8c97d8b591b8725e4843}}, we ensure that the generated keypoints are equivariant under an arbitrary affine transformation; we apply a random affine transformation to the keypoints and the original frame, and compare the transformed keypoints with the keypoints extracted from the transformed image. This ensures the semantic meaning of each keypoint is consistent and significantly improves coherency, since decoding temporally coherent keypoints results in temporally coherent frames.
For an affine transformation {{formula:2a899ed1-88a7-4b83-9833-877f07b25d03}} , transformation equivariance loss is defined as:
\mathcal{L}_{\mathrm{eq}} = \sum_{i=0}^{N_A - 1} \left\Vert T(E(a_i)) - E(T(a_i)) \right\Vert_1 + \sum_{j=0}^{N_B - 1} \left\Vert T(E(b_j)) - E(T(b_j)) \right\Vert_1
| m | 3b7bfd737305ed1c9f35df498c65a8ff |
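A direct transcription of the two regularizers above into PyTorch might look as follows; the tensor layout (frames x keypoints x 2) and the encoder/warp callables are assumptions, not the authors' code.

```python
import torch

def temporal_loss(k):
    """k: (num_frames, num_keypoints, 2) keypoints for one sequence.
    Sum of L2 distances between keypoints in adjacent frames."""
    return (k[1:] - k[:-1]).norm(dim=-1).sum()

def equivariance_loss(frames, extract, warp_image, warp_points):
    """Compare keypoints of a warped frame with warped keypoints of the frame.

    extract:     frame -> (num_keypoints, 2) keypoints (hypothetical encoder E)
    warp_image:  applies an affine map T to a frame
    warp_points: applies the same T to keypoint coordinates
    """
    loss = 0.0
    for f in frames:
        loss = loss + (warp_points(extract(f)) - extract(warp_image(f))).abs().sum()
    return loss

k = torch.randn(16, 10, 2)
print(temporal_loss(k))
```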
Let {{formula:186b8ac2-828f-48ad-841d-8cb44a60677c}} be a compact (closed and connected) Riemann surface {{formula:44c0eb00-c8ab-4160-a5ec-8abd86836df5}} of genus {{formula:8740e167-384a-4138-a7e6-b1a2fee00bf0}} . Higgs bundles over {{formula:5b3ea3ed-f957-4743-9334-4fdcc5dbb92e}} were introduced by Hitchin in {{cite:8877696fa7046879e408a3732d9beb006aa5c287}}. A Higgs bundle over {{formula:5009b7a3-458b-4ec8-b8fb-e44109ddf6be}} is a pair {{formula:5231d1e7-da53-45a7-9645-cb2c9f2cc5ac}} consisting of a holomorphic vector bundle {{formula:00c288d4-afed-47f3-8667-61a78d52ea1b}} and a Higgs field {{formula:e1008d02-1579-4997-b7ed-5c0ae336814f}} , where {{formula:24631c51-649c-41ec-a6cb-9ffe183095d6}} is the canonical bundle over {{formula:b256034a-2631-4b19-b096-c72eba645ea1}} . The coefficients of the characteristic polynomial of {{formula:a675e767-325b-43d4-95d5-882be4967e72}} define a morphism
{{formula:30f5435b-d1f6-46ec-b251-0426be4dbd0c}} ,
from the moduli space of stable Higgs bundles over {{formula:420884b0-84af-471f-9afa-5bceb4885892}} of fixed rank {{formula:73c32d5f-af3d-49bb-8a64-2f68dd978f44}} and degree {{formula:6205efd1-d516-450b-9d30-679cd6df88cf}} to a vector space {{formula:a68de628-c52b-492a-a91b-e542ff410d1a}} , called the Hitchin map (see {{cite:ba7fd1c8ba1691bd2e576c0a730be19f8148b66d}}). In {{cite:ba7fd1c8ba1691bd2e576c0a730be19f8148b66d}}, Hitchin showed that the generic fibers of {{formula:8d1aa628-6656-46bd-9f80-9918061e00de}} are abelian varieties and that this map gives the moduli space of Higgs bundles the structure of an algebraically completely integrable system. Later, in {{cite:685f858c20a12f1893f40e16f7d845604b2d1fcc}}, Markman generalized this result to the moduli space of {{formula:108687ec-fcb7-46fd-8e2a-b03ba79a9ce7}} -twisted Higgs bundles {{formula:e16ae654-ad0f-4de3-bb23-2fa1d97c96ee}} , where {{formula:75050d22-34ba-4e94-ad67-eced8e43f2a7}} is a line bundle over {{formula:13373c27-cfa9-4219-9637-dc4130fada46}} and {{formula:406671a5-f73a-4df9-a2ed-dda42377a426}} .
| i | 0b4042279036eb0a26860e881ca011e0 |
Proof: According to {{cite:52e8be8bb2ebdbd09f1b34b55fc2e79a1eb901d4}}, if the Hessian matrix of a function is negative semidefinite, then that function is concave. Accordingly, we derive the Hessian matrix of the exp-sum-log function {{formula:5ce14bb5-f8dc-46a3-b4ff-581a1847e842}} as
{{formula:08b56880-6722-4f2b-b4aa-772504b0dd66}}
| m | 2d306a6f24118253419f101e571ab17d |
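The exp-sum-log function itself sits behind a formula placeholder; as a generic stand-in, the snippet below numerically checks negative semidefiniteness of the Hessian for f(x) = exp(sum_i w_i log x_i) with the w_i summing to one (a weighted geometric mean, which is known to be concave on the positive orthant).

```python
import numpy as np

w = np.array([0.3, 0.5, 0.2])                       # weights summing to 1
f = lambda x: np.exp(np.sum(w * np.log(x)))          # exp-sum-log stand-in

def hessian(f, x, h=1e-4):
    """Central-difference Hessian of a scalar function."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

x0 = np.array([1.0, 2.0, 0.5])
eigs = np.linalg.eigvalsh(hessian(f, x0))
print(eigs)   # all eigenvalues <= 0 (up to round-off) => concave at x0
```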
Despite the above advances, to the best of our knowledge, few frameworks exist for privacy-preserving incremental training of end-to-end automatic speech recognition models. Prior work on federated learning for speech-based tasks {{cite:937bc808defc0981314ba640b44917c343c3174d}}, {{cite:fd28d13298e3b6cd2c06c5f2bd13331d7d8298c0}}, {{cite:b8bbb737dcc668f4d05d3ac5d9de43acfff098eb}} and end-to-end ASR {{cite:ba806fef3f41171b69330bcaf32cc94b0f1c7d81}}, {{cite:5eb26da5811b90b81e4985b5d303f4c043284b1c}} focuses on standard benchmarks (e.g., LibriSpeech {{cite:e56d2e2ecd852fb386d9ed69504e5e71799d67fe}}, a small dataset of {{formula:6016db78-5cfc-45ff-bc70-e5976357a398}} hours recorded in a controlled environment) rather than on large-scale production data. Privacy-preserving IL on device for end-to-end ASR poses a number of challenges. Production-sized end-to-end ASR systems {{cite:e7e50ba9258a6f6514a939ce65ae669813a79241}}, {{cite:17ac1475e8ec8277e83745af392d9ac8a23341ee}} are expensive to train even in a traditional distributed setup, and on-device training needs more work {{cite:061fddea5640b4f6f2c1f2d2250f18b6b29977af}} to accommodate restrictive memory and computational constraints. Generating training labels, i.e., speech transcripts, in near real-time on the devices is another challenge. To compensate for the unavailability of near real-time speech transcripts, teacher transcripts can be used in a semi-supervised and/or self-learning fashion. For example, consider the problem of improving models deployed in edge devices that run voice assistants. In such cases, the number of devices is in the millions, which results in a large scale of streaming data being generated. We propose to use large-batch processing for the utterances collected at the edge devices and sent to the cloud for processing, where the data is only stored ephemerally. However, deploying all or part of the above components on resource-constrained speech devices (such as Alexa, Google Assistant and others) is challenging.
| i | dd29e1c6b05129ba3b39f762f52ccf3c |
The baseline method is an improved version of BottleNet++ {{cite:20ea91f4ed57c0e5cc49ba692c500422aef99532}}, which can compress and transmit features of the deep model over a digital channel. To fit this model to the digital channel, the quantization step is placed at the end of the encoder. Since the output is constrained to [0, 1] by the sigmoid function, the quantization is {{formula:13fbc727-5f20-4ce9-9097-edf88792bd0f}} , where the floating-point {{formula:83eef5f3-cd03-486f-a1a3-4ec9b813ee69}} is represented by an {{formula:813ca030-2d3e-49a7-81e0-9fd237a9fc8f}} -bit sequence. Correspondingly, the dequantization is {{formula:73a6b069-e548-4ed9-9d83-8af627120634}} at the decoder. In backpropagation, the quantization process is treated as an identity mapping.
| m | 978be3477057b198daf12fb08f4f6f12 |
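One plausible reading of the quantization scheme above, including the identity (straight-through) backward pass, is sketched below; the exact rounding formula is hidden in the formula placeholders, so treat the scaling as an assumption.

```python
import torch

class STEQuantize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, n_bits=4):
        levels = 2 ** n_bits - 1
        # Quantize the sigmoid-constrained value to n-bit levels and dequantize.
        return torch.round(x * levels) / levels

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through estimator: gradients pass through unchanged.
        return grad_out, None

x = torch.sigmoid(torch.randn(5, requires_grad=True))
y = STEQuantize.apply(x, 4)
y.sum().backward()   # gradients flow as if quantization were the identity
```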
Example 4.3 (hydrogen oxidation reaction)
In this example, we consider a model for the hydrogen oxidation reaction, in which six species H2 (hydrogen), O2 (oxygen), H2O (water), H, O, OH (radicals) are involved in six steps in a closed system under constant volume and temperature {{cite:bcf5a2ba40511501754fae7ce6a9bd8f51000e21}}, {{cite:47cd6510e57eea7314322dc2e4411659a2f3f2d6}}:
{{formula:d78322ac-28e2-4b7b-b6b8-0d772a7d226c}}
| r | 5ea8b103d16ac7fb5518ce54eb5f4f13 |
With the introduction of the renewal process for the background
seismicity, the main-shock rate depends on the elapsed time since the
previous main shock, which is not observable. This makes likelihood
evaluation challenging. However, using a recursive algorithm motivated
by that of {{cite:8916adf47c89ef3d9d25f00295c2b1201e45da68}} for the RHawkes process likelihood
evaluation, we can directly evaluate the likelihood of the RETAS model. We can then fit the RETAS model by minimizing the negative log-likelihood and obtain the variance estimate for the
maximum likelihood estimator (MLE) by inverting the Hessian matrix. For goodness-of-fit (GOF) assessment of the RETAS model,
we propose a novel approach based on the Rosenblatt residuals
{{cite:df435f2ed17dd4038b661c8d44bc787cbaffbe96}}, which can also be applied to the classical
spatiotemporal ETAS model. The approach avoids the simulations
required by the thinning spatial residuals based approach of
{{cite:63d6ad900d55495bd0b5f911177937a452604965}} and can assess the GOF of the temporal and
spatial components of the model either simultaneously or separately.
| i | 6ad3002cdc0a290766678101362e17f1 |
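The RETAS recursion itself is more involved, but the flavor of recursive likelihood evaluation can be seen in the classical O(n) computation for a purely temporal Hawkes process with exponential kernel; the parameters and event times below are made up.

```python
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """Exact log-likelihood of a Hawkes process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta (t - t_i)),
    via the standard recursion A_i = exp(-beta dt) (1 + A_{i-1})."""
    loglik, A, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            A = np.exp(-beta * (t - prev)) * (1.0 + A)
        loglik += np.log(mu + alpha * A)
        prev = t
    # Compensator: integral of the intensity over the observation window [0, T].
    comp = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - np.asarray(times))))
    return loglik - comp

times = [0.5, 1.1, 1.3, 2.7, 4.0]
print(hawkes_loglik(times, T=5.0, mu=0.5, alpha=0.8, beta=1.5))
```

The same idea, evaluating each event's conditional intensity from a running recursion rather than a double loop, is what makes direct maximum-likelihood fitting of such models tractable.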
Despite this progress, the linearly stable nature of the typical mean velocity in wall-bounded turbulent flows implies that the linearised Navier-Stokes equations alone are not able to describe self-sustaining velocity fluctuations. Therefore, some minimal amount of nonlinearity must be retained in the mathematical description of the self-sustaining turbulent fluid motion. Towards this end, various forms of `quasilinear (QL) approximations' have recently been proposed. Common to all variants, this approach introduces a decomposition of the given flow into two groups: one in which all nonlinear terms are kept, and the other in which all self-interactions are ignored or suitably modelled. The resulting equations for the first group are unchanged from the original, while those for the second become equivalent to a linearisation around the first group, often with an additional ad hoc model (e.g. stochastic forcing).
The earliest work may be found in {{cite:834648eed54384acc69d0ab129e845a5326a805d}}, {{cite:496927bc31e3eee6acb5321e3bb2279ee55dbde2}} and {{cite:4ccd46e1cffcaa0298e8138885d2d67dbde6a245}}, {{cite:d844c18f91782ef7e8b16447b24867f9333f5002}}, {{cite:53865dd44f9380e88e3e9f8664b56a102d27fdd5}}, where the `marginal stability' criterion was applied to the second group for the closure of the formulation. Modern variants of the QL framework have been proposed for many different flows with various types of suitable models for the self-interaction term of the second group (e.g., stochastic forcing, eddy viscosity, etc.): for example, stochastic structural stability theory (S3T) {{cite:cf09d86f3996e41815bda53535fbce904477a369}}, {{cite:476238e7c47bc349734c39fee2ad1330b86638c8}}, direct statistical simulation (DSS) {{cite:fcd2b3dab9ef5e635eb6b534404d0ea5dc6b4250}}, {{cite:d04c2031c341e21c319ac0a8b1c6b6206ea2a95a}}, self-consistent approximation for linearly unstable flows {{cite:5feab4affeb1c6ed8148feb7fd9012daa1bea504}}, {{cite:9923026955da6cebc178ba933a6430778b86a573}}, minimal quasilinear approximation augmented with an eddy-viscosity model {{cite:01e82435ad49f66a69b9797f9f48fe36f2fb22c5}}, {{cite:ac805993c85ce7b12ee85e1b352d4e63ffd930b2}}, restricted nonlinear model (RNL) {{cite:8a631177bfea7d60b6ca09d93c056940036b3919}}, {{cite:9069e60974836ad4725ab2587203f03051d9990d}}, {{cite:d5d31e7edbafc4e83d6aa9ccfac217fbd611a245}}, {{cite:1bccd489627763f7bbcdb6ac889a6a305d0787c7}}, {{cite:47d6ec8e618a674a2664a19e6dcd8f88b74ee370}} and generalised quasilinear approximations (GQL) {{cite:b5eea27041eaff9874279fa361e7714958007880}}, {{cite:913f7ba8a81bd8fa917efe9bd0d632383311a765}}.
| i | 798862789fb48f4cdcf86f7326d8d816 |
To examine the behavior of our model over the evolution of the universe, we fit the models to the combination of
the CMB and Hubble-constant data sets. The CMB data include temperature
and polarization angular power spectra from Planck 2018 with TT, TE, EE, low-{{formula:a313b003-bfa1-470b-aaac-984334855ea1}} polarization,
and CMB lensing from SMICA {{cite:d0c79467f1af77676bf6faae263515e13e68b8a6}}, {{cite:38d948a76c232494bf6e33ca2b0b25858b7ef68a}}, {{cite:f5ae01c17891299a17931eb5f7cba851377cccd0}}, {{cite:abf16f90f6a576c4b1d40ff340e83e73affac907}}, while the Hubble constant of {{formula:7e9ee9e8-c805-4210-a3c3-87e3a96de1f8}} {{formula:6f71d405-c361-4ac5-99df-5bf587b075cf}} is from Hubble Space Telescope (HST) {{cite:fd2c768c42c22f989ecf214929f7cf40a0458016}}.
As we set {{formula:1ecbe365-e3e9-4357-a653-c72cf2facbf8}} and let the neutrino mass sum be a free parameter, both our model and {{formula:664eb4c3-a3e9-459d-9705-107a4137be30}} CDM contain seven free parameters, whose priors are listed in Table REF .
To obtain the best-fit values of the cosmological parameters, we use the {{formula:bb1d1bf0-a490-4a2d-badd-cc82a792812b}} method
with
{{formula:dd694201-a725-4053-bc70-7e7ef969a3ff}}
| r | d8297c143070c9867765f855b082e3b6 |
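A toy version of such a fit for the Hubble-constant datum alone (the real analysis uses the full CMB likelihood); the observed value and error below are illustrative placeholders, not the measurement quoted above.

```python
import numpy as np

# Placeholder data: an observed Hubble constant and its 1-sigma error
# (illustrative numbers only, not the quoted HST measurement).
H0_obs, H0_err = 73.0, 1.4

def chi2(H0_model):
    """Gaussian chi-square for a single datum: ((model - obs) / sigma)^2."""
    return ((H0_model - H0_obs) / H0_err) ** 2

grid = np.linspace(65.0, 80.0, 301)     # scan a grid of model H0 values
best = grid[np.argmin(chi2(grid))]
print(best)                              # grid point minimizing chi^2
```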
Fourth, the approach of {{cite:95f841bf42d674a2ace99c8b2574e198e4e37b27}} is corrected in Sections REF and REF .
Assuming that the expansion constants and the aspect ratio of a reference set {{formula:9d1cc141-c921-408a-bcef-58e0bd0f88d0}} are fixed, Corollary REF and Corollary REF rigorously show that the time complexities are linear in the maximum size of {{formula:21530c65-4415-4911-84d3-781647635d22}} and a query set {{formula:4f2f1018-8a2d-4e47-ace0-b06a467ffb28}} , and near-linear {{formula:ec9571fa-0fed-40b4-a093-26971335f146}} in the number {{formula:32ab7fe4-b9f1-47f3-9854-8dbb5913a5b5}} of neighbors.
| d | 45c2cebffec14619b0e6c57171d4c74e |
Let {{formula:7fb996b2-8ded-402b-accb-75335d9c8042}} be a division ring with center {{formula:e225db33-3333-43cd-8058-2ebb3bdfbb19}} and {{formula:0f8393c9-8a5e-4529-9188-e35255401280}} be the multiplicative group of {{formula:6d761af3-6700-4f23-b10f-1c7ce552b912}} . The subgroup structure of {{formula:b2f33ed8-0198-4a27-95d2-08c0629d6ce7}} is a subject that has attracted the attention of many authors in recent years. Of particular interest is the study of maximal subgroups of {{formula:d8efcf1b-5434-4231-8aab-aba02385d4db}} ; see, for example, {{cite:d8c893156035669f6bf2b48312eb4c500b74ba84}}, {{cite:fdda4dd4c3ef9f88bc3b8c7476059e96472d84b8}}, {{cite:8e0fbfef49f4a0ff715b8810f12a7b99d61709e2}}, {{cite:b4a0d87527634e24e3efdc5d836a463455f20822}}, {{cite:0b2517f583733b3794e55d3f6f6c6f4f13e3e8d4}}, {{cite:2cb082135e54654fb0404ede03d92142e6991642}}. In this paper we replace {{formula:1dfe1d59-cd71-4acb-ac68-c8267746826a}} by an arbitrary subnormal subgroup {{formula:5ce0550f-f741-4a90-9e02-f29f017d27e6}} and study the subgroup structure of {{formula:73f4887f-d482-4dd1-b59a-ad7e58e09149}} . Recall that in {{cite:d8c893156035669f6bf2b48312eb4c500b74ba84}}, {{cite:8e0fbfef49f4a0ff715b8810f12a7b99d61709e2}}, {{cite:2cb082135e54654fb0404ede03d92142e6991642}}, Akbari et al. and Mahdavi-Hezavehi studied maximal subgroups of {{formula:8aa03014-a517-4d28-8c07-4ce34f889c0f}} and obtained many nice properties of such subgroups. In the present paper, studying maximal subgroups of {{formula:3ca9302a-a50b-413f-95d1-e577aeb31ebd}} , we obtain in many cases results for these subgroups similar to those obtained in {{cite:d8c893156035669f6bf2b48312eb4c500b74ba84}}, {{cite:8e0fbfef49f4a0ff715b8810f12a7b99d61709e2}}, {{cite:2cb082135e54654fb0404ede03d92142e6991642}} for maximal subgroups of {{formula:e7988cde-f334-4724-924b-f24176767cb0}} . The other problem we study is the existence of noncyclic free groups in subnormal subgroups of a division ring. This problem plays an important role in understanding the structure of division rings. For more information we refer to the works {{cite:5d8f170fb422ed8d20db8c1c9188c73c4228286e}}-{{cite:76c18710b5e60f82de9c1aee3f88dcbd38c8e27f}}, {{cite:37a0d082591519fdd209d22e7c15327701e0100a}}-{{cite:ccc170bb349666449f912bb8e15901a153955ec2}}, and {{cite:5d225aff616844d1d5351895d04b04c0fb0c2fff}}. Section 4 is devoted to this problem, and the new result we obtain in Theorem REF gives an affirmative answer to Conjecture 2 of {{cite:76c18710b5e60f82de9c1aee3f88dcbd38c8e27f}} for the case of locally finite division rings.
| i | 9aa4e85474b16d976b456cd845ec45a9 |
In this work, we have demonstrated a blinking-free source of polarization-entangled photon pairs, based on a GaAs quantum dot operated at a temperature of at least 20 K. The intrinsically low fine structure splitting, owing to the local Al droplet etching technique {{cite:dd2a492ee7e3d03d437f673fcc5ce6fd1a830b00}}, together with the employed p-i-n diode allows us to generate an uninterrupted stream of photon pairs with a fidelity to the {{formula:8cf03115-f766-4e80-b8a8-a1765f21695b}} Bell state of 0.925(2) when using the pulsed two-photon-excitation scheme {{cite:e99a7ba9634a7a97eb8d94f74d0538cb7619438e}}, {{cite:72459c2e29fd734b46ff43abdfa155ccce9f10f9}}. The device also allows the fine-tuning of the emission wavelength within a range of 0.2 nm while keeping the blinking-free operation intact, which is favourable for interconnecting multiple sources to quantum networks {{cite:8a92db0b5c6896958de9a4ab673f20b0208d4db3}}, {{cite:3e150cc399bfab7935ec97e651036ba938cce975}}, {{cite:d9d388ca8c4a7a71395b5efcd76156402b6a24af}}, {{cite:e01babe9c7c9c7dbd878762735142f48200cf0e5}}.
| d | 343559979de48cf38686852258af226f |
The main contributions of this work are: 1) we introduce OP as a way of solving the zero-shot coordination problem, 2) we show that OP is the highest payoff meta-equilibrium for the zero-shot coordination problem, 3) we show how to implement OP using deep reinforcement learning (deep RL) based methods, and 4) we evaluate OP in the cooperative card game Hanabi {{cite:873a14462df4c10f2f17d57e2cf991302ea0d33c}}.
| i | a20d6eec9e817957d5321ce2cc384bc4 |
One has a similar identification for {{formula:5b3aea7c-9f0f-4e21-b0d3-c4e4507edabb}} . These groups are in turn linked via the Tate spectral sequence {{cite:8f776dcf6af4d3129621c4ec25535c257deb4b80}}
{{formula:0cd0f891-e161-478e-bf0b-de487101463a}}
| r | e0c09005681817a9c980dfcc09dbd36a |
Our results appear consistent with the anti-correlation observed between the dark matter densities and pericenter radii of bright MW dwarfs reported by {{cite:f008d5589e80c3799f106c037ca55ce854a82430}}. In their work, the authors found that dwarf galaxies closer to the center of the MW tend to be hosted by denser CDM subhalos. We find that at fixed {{formula:653d1338-ca34-4eda-adef-bb0c9ae85c73}} subhalos with small pericenters are indeed denser (smaller {{formula:db432ce8-c024-4d29-b5a4-1776dc2bf721}} , see Fig. 1), as seems to be the case for MW satellites. At fixed {{formula:a439b144-c9c7-4cf6-bffa-e0e7f44ce415}} we find a similar trend (not shown). An alternative explanation that might drive an even stronger anti-correlation may be that dark matter is self-interacting, which could drive core-collapse in high-concentration, low-mass subhalos {{cite:b66ee1e6a2dde5043be97236fef3052311d194ee}}. This was not observed in the analogous phat-ELVIS SIDM MW simulation in {{cite:2363762ae5693ce71236617d2d00a522bd60ec6c}} for a velocity-independent cross-section over particle mass {{formula:edb12b42-055a-4144-945f-6c8016bd7b99}} . More simulations are required to test the statistical significance of the abundance of core-collapsing halos and whether a larger {{formula:42df0c72-e67f-4b2e-92db-2a806ab25675}} is needed.
| d | 44f2cff90b561f483b230f5fb5a389b9 |
Quantum chromodynamics (QCD) is believed to be the basic theory of the strong interaction. However, the strong interaction is non-perturbative at low energy scales. Thus many phenomena, such as hadron and glueball spectra, the chiral and confinement-deconfinement phase transitions, and the equation of state (EoS) of strong-interaction matter under extreme conditions (high temperature, high baryon number density, rapid rotation, and strong magnetic field), cannot yet be explained from first principles. Therefore much effort has been devoted to developing non-perturbative methods, such as lattice QCD {{cite:35fab5ee705033037eb2d3ae578a5bab4b4d4b48}}, {{cite:ae4e5344e7650a2b89f9fd05622b73772c612039}}, {{cite:15ccb0ed0f4dbbbe8d83faf0691942b86dec1e69}}, chiral effective theory {{cite:3e7c466ae0a30c7b5950e5f6b1ec0f1302558b79}}, the Nambu–Jona-Lasinio (NJL) model {{cite:e519e0fa897a8af9c2e88356e3a16b0bbef8481a}}, {{cite:7b4112d6dd5418f2618f39809f301d238514126a}}, QCD sum rules {{cite:479ce7b48d739289c56cbfc84e4e3e6b15f255f8}}, {{cite:bf8b2b774eeaacbc79cacbc49a638eae936cb3e6}}, {{cite:f7e28a83efa1a2dd4ea14c2da190993016b04a0a}}, {{cite:c9991a28527719d4379ea1cad1b152c68d715e57}}, {{cite:63cd05a798e42f3e574ad3801407e668cb106107}}, {{cite:24710bcfefdebc503e28a5fe060a38a2d0e04122}}, {{cite:5033a3e753cdc5d18f580f059ca5142c46246958}}, {{cite:09be29c5ba4107bd27315f38f28700e04a8a5316}}, Dyson-Schwinger equations (DSE) and the functional renormalization group (FRG) {{cite:aa8912fa76d5cd4868943015955dc4acad19e57b}}, and, more recently, the holographic QCD method {{cite:de865251f1424328ba4eea2e4cabf9a46e416e0c}}, {{cite:0f16f6980a39adfcb7a2419ed226cb598291f297}}, {{cite:7059748a88f614848a24d1296c73135ed2c5e5f6}}, {{cite:44b9ad5ae77314f86669f542085bf0fe6965a73f}}, {{cite:db23be4f37422ff3433bd781854e5743357ad16e}}, {{cite:4b0ed99118ab8fce13c36cadccf6c804b9685235}}, {{cite:649c1b93f80fb1c2f6f07043dda5100759710d78}}, {{cite:4da7ae31bb3ba48c027512dcd5815f0f8cdf2bb4}}, {{cite:44bc62cedbed02a64474229c3d1a9b64424abf79}}.
| i | f9251f6e68e74f954b8799152ecf6aae |
Visual Genome: Table REF shows that GPS-Net outperforms all state-of-the-art methods on various metrics. Specifically, GPS-Net outperforms one very recent one-stage model, named KERN {{cite:fd7550adc8832d4bc1013b10d75b68b3c0b98542}}, by 1.8{{formula:bc11e46c-769f-49c5-abd3-d0723e5cb652}} on average at R@50 and R@100 over the three protocols. In more detail, it outperforms KERN by 1.9{{formula:e1d77911-8ae8-4611-8102-5f00a77c88e6}} , 2.7{{formula:a1d1a665-cbfa-49e1-b6d9-bdf18eb0a098}} and 1.2{{formula:0a25d5eb-8cab-4708-a901-8256ce403e4e}} at R@100 on SGDET, SGCLS, and PRECLS, respectively. Even when compared with the best two-stage model CMAT {{cite:e680bafec01f0ce3eb6a2ed9c90f4bb608b60f59}}, GPS-Net still demonstrates a performance improvement of 0.5{{formula:b626ed62-81ca-4055-b755-a917ca599b53}} on average over the three protocols. Meanwhile, compared with the one-stage versions of VCTREE {{cite:8f8356deb92619f40f112745a73a484bab945a2d}} and CMAT {{cite:e680bafec01f0ce3eb6a2ed9c90f4bb608b60f59}}, GPS-Net respectively achieves 1.5{{formula:ac1919ca-0fcf-4c96-8d41-fd7590ebd1b5}} and 2.5{{formula:39065bdc-6f51-45da-b76d-9fa649145658}} performance gains on SGCLS at Recall@100. Another advantage of GPS-Net over VCTREE and CMAT is that GPS-Net is much more efficient, as the two methods adopt policy gradient for optimization, which is time-consuming {{cite:603a3c8b3278e252dc522ca3ae73424b0f0bb144}}. Moreover, when compared with RelDN using the same backbone, the performance gain by GPS-Net is even more dramatic, namely, a 5.5{{formula:954445d6-88df-47e1-9a4a-792088936905}} improvement on SGCLS at Recall@100 and 2.5{{formula:a84f8133-157a-415f-8ac1-8f8be34a8083}} on average over the three protocols.
| m | 4336c5b589b56be9e1f9ced945d6a6d4 |
As for the remaining models, we show that while our approach performs worse than the best one from the MVTec paper, the student-teacher architecture (7.8% AUROC score difference), it outperforms every other method by a margin ranging from 1% to 37%. One important aspect to notice is that the two top-performing methods presented in the MVTec paper, namely the student-teacher and Feature Dictionary models, which are the only ones exceeding a mean AUROC score of {{formula:b4d848a0-51d9-488c-9791-e6a8dd19c5cc}} , both rely on feature extractors pre-trained on much larger datasets such as the popular ImageNet {{cite:f8f97febbf8d205708e979bb830485e5e7306179}}, whereas our model is trained from scratch on the MVTec dataset. This makes it the best-performing model that does not rely on extra training data, and it shows that transformer-based models, typically considered very heavy, can be adopted even in scenarios where a relatively small amount of data is available.
| r | 73352c7f34f72de7a816a7e6e0510082 |
Chordal graphs are graphs in which every cycle of length at least four has a chord. Gavril {{cite:891e0cc16ca36b7b8a4ca800fd829068f611b320}} proved that chordal graphs are the intersection graphs of a family of subtrees in a clique tree. A tree {{formula:d4acb260-19a8-4550-95c4-8ed50555b9c7}} is a clique tree for a graph {{formula:70d6c5ee-2ce2-4a60-b1aa-f96e9bdc2d16}} if each node in {{formula:789a3d64-db07-4931-aec9-800cb4414407}} corresponds to a maximal clique in {{formula:2b72bb43-92a7-42b6-8e17-be0a5b0b1f7a}} . To avoid confusion with the vertices of graph {{formula:e4259868-e31c-4bda-b53f-02a3cc5a0ac7}} , the vertices of tree {{formula:e00563cd-b32c-4a73-babc-839e7f4415ef}} are called nodes or clique nodes. Let {{formula:871dc0a4-1fc7-43c7-9931-86aaac8567e8}} denote the set of all cliques of {{formula:dc4e5906-867e-462c-8bb2-985972477048}} that contain vertex {{formula:c9de5d84-a482-4c97-974c-424abe5f56cf}} . {{formula:a8f27ab6-1b7a-4f16-897c-a76e532871fa}} is a chordal graph if and only if {{formula:b552d119-39d7-4737-896e-6139794e5efd}} is a subtree in a clique tree {{formula:093bc1f5-123c-4e36-805e-850b04f7b146}} for every vertex {{formula:78eb1481-3c1f-45e7-8c75-68f00f021745}} of {{formula:992b0447-9e61-4b72-ac19-d74a8f2564a0}} . The concept of chordal graphs suggests definitions of some subclasses of chordal graphs. Undirected path graphs are the intersection graphs of a family of undirected subpaths in a clique tree. Directed path graphs are the intersection graphs of a family of directed subpaths in a directed clique tree. Rooted directed path graphs are the intersection graphs of a family of directed subpaths in a rooted directed clique tree. A tree is called a rooted directed tree if one node has been designated as the root, and the edges have a natural orientation away from the root. Interval graphs are rooted directed path graphs in which the clique tree is itself a path. These graph classes are related by the following proper inclusions: interval graphs {{formula:7c8ab234-b8b5-4df6-b0b0-e93a9651aa11}} rooted directed path graphs {{formula:2058478b-be65-44a4-811a-14e5a5f4b6df}} directed path graphs {{formula:6c71ccce-a374-478c-a8cc-cd2e1b2eb172}} undirected path graphs {{formula:3e38e7fe-9352-445b-916e-778f4b9ecfb2}} chordal graphs {{cite:4cf94c483e34061d1dde869a88e0789d522a39d3}}.
| i | 42642da61339903f871d36271df14782 |
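These definitions can be explored directly with networkx: the snippet checks chordality and lists the maximal cliques that would form the nodes of a clique tree.

```python
import networkx as nx

# A 4-cycle plus a chord: every cycle of length >= 4 has a chord, so it is chordal.
G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])
print(nx.is_chordal(G))            # True

# Maximal cliques correspond to the nodes of a clique tree.
print(list(nx.find_cliques(G)))    # e.g. [[1, 2, 3], [1, 3, 4]]

# Removing the chord leaves a chordless 4-cycle, which is not chordal.
G.remove_edge(1, 3)
print(nx.is_chordal(G))            # False
```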
We include the effects of stellar winds, tides and mass exchange on the rotational velocity of the
secondary star {{cite:7f70f62ba659966cee1671c3527d011e94f339a3}}. The internal rotational profile
of the secondary star is approximated as rigid rotation, which is a reasonable approximation for a secondary
star residing on the main sequence {{cite:3de7a0b5d4fcf3b1dad20601d8d2c0d8ede7a74c}}, {{cite:0c5cd0ecdb64f2792db1384be4c98276fccdc531}}. When mass transfer occurs via Roche-lobe overflow,
the rotational evolution of the secondary star is
mainly controlled by the competition between the spin-down effect due to tidal interactions
and the spin-up effect due to mass and angular momentum accretion.
We follow {{cite:0deba11a3ecb9706c1c75d554d967a4c8ce57408}} to treat the spin-orbit coupling between binary components due to tidal interactions.
During the mass-transfer process, the transferred matter may either form an accretion disk around the secondary star
or impact directly on its surface, depending on how the minimum distance {{formula:a83fdff3-8035-4a76-aaa9-28ff853d28e6}} between the mass stream and
the secondary star {{cite:5c6bba68f5832bd4840e7969680cbbdb1c7fdef3}} compares with the radius {{formula:45f071aa-13c5-43f2-a4f4-7469c755ac30}} of the secondary star.
If {{formula:4c64ae8d-914b-4bb0-84a0-65f8ee971f34}} , the mass stream is assumed to collide with itself, after which the viscous process
leads to the formation of an accretion disk. It is assumed that the secondary star
accretes matter from the inner edge of the disk with the specific angular momentum of {{formula:0021b033-1b98-4106-ac4e-0ce9232e820e}} ,
where {{formula:b6f314ef-9158-41b4-8ceb-6663638494f0}} is the gravitational constant and {{formula:93eed457-cb6a-4f5c-9c4a-7d1cbfff4060}} is the secondary mass.
Otherwise, the matter stream impacts directly on the surface of the secondary star and the specific angular
momentum of the impact stream is estimated to be {{formula:2e6fb4ea-2ae1-4854-8cfb-2ca7c0b89ded}} .
| m | 293df1984b75194ad539624b5eea4bc8 |
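A sketch of the disk-versus-direct-impact decision described above. The minimum stream distance is computed with an Ulrich & Burger-type fitting formula common in binary population-synthesis codes; since the paper's exact expressions sit behind formula placeholders, the coefficient and all numbers here are assumptions.

```python
def r_min_over_a(q):
    # Fitting formula for the closest approach of the mass-transfer stream
    # (Ulrich & Burger 1976 form, as used in e.g. Hurley et al. 2002);
    # q is the accretor-to-donor mass ratio. Assumed, not taken from the paper.
    return 0.0425 * (q * (1.0 + q)) ** 0.25

a, R2, q = 10.0, 0.3, 0.8   # separation, accretor radius, mass ratio (toy values)
r_min = r_min_over_a(q) * a
mode = "accretion disk forms" if r_min > R2 else "direct impact on the surface"
print(r_min, mode)
```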
Theoretically, several works have been carried out to interpret the ME
features of n-type cuprates. Adopting the single-band description with experimentally fitted parameters, Krüger et al. claimed that the fermiology random
phase approximation (RPA) approach with a momentum-independent (or weakly
dependent) Coulomb repulsion cannot account for the low-energy commensurability. Their numerical results indicated that the MEs in the n-type
case should be more incommensurate than those in the p-type case{{cite:044270bb11ba790198ee3f3d3f36324ed558409d}}. Such a conclusion may reveal that the single-band description is invalid in the n-type cuprates{{cite:a3154059a072af6250e3154969690ece39275cec}}. Ismer
et al. showed that this may be improved by a strongly momentum-dependent Coulomb repulsion of the form {{formula:1a4e7f48-72e5-4717-9260-b4e0ae20cbea}}{{cite:9aad62bbb5112678cfdc589ea14f922e5e8d6233}}. However, such an improvement more likely originates from
its sharply peaked form at {{formula:cbda67ab-110f-4a65-9d4e-0c9beca6ad46}} , which cannot be understood
physically. Using a slave-boson mean-field approach, Li et al.
showed that the commensurability can be established in the SC state{{cite:20d0a17261bf961205d162b94132463b72449d4d}}. The theoretical works mentioned above are all based on the
belief that the d-wave SC is responsible for the low-energy commensurability.
It is hard to imagine that the small superconducting gap can produce commensurability over a wide energy range.
Most importantly, the commensurate phenomenon is also found in the normal state of
n-type cuprates, where the d-wave SC disappears. The MEs have also been discussed within the framework of coexisting SC and AFM{{cite:476e71cca8ea1d580ec5e2a0e6fd60fe781c7daa}}. A commensurate ME was subsequently obtained{{cite:3397d12308d6519d5e40bd60eaa4713aa98d3f78}}. However, this result clearly contradicts the Stoner criterion{{cite:8cb9b977c1c291cf03135f2a9e681c223af17ad9}}, {{cite:7c0337704ae8b8f17f3392320fdaac627a6bc723}}.
Furthermore, recent INS data reveal a magnetic quantum critical point where the
SC first appears, implying that the coexistence may not occur{{cite:a21bcf871043a1b0e069abc7a205d408f6ceb436}}. Additionally, as mentioned above, the
commensurability also exists in the state without long-range AFM order.
| i | 1b8b61b011a1058b43ff10c7a0576dec |
NC neutrino interactions are of particular importance to {{formula:7a7e20dd-43f2-4725-a9d1-b24ee2e3bd99}} and measurements in the energy range of a few hundred MeV. This is especially true for detectors that cannot perfectly differentiate between photon- and electron-induced electromagnetic showers, and therefore where NC {{formula:bb578bbe-af18-4d04-8ff6-ab8c7429cc5a}} production can be misidentified as {{formula:f28db098-2db4-4937-8f9d-b16c30e4943b}} or CC scattering. Misidentification of photons as electrons complicates the interpretation of {{formula:39d11944-df2e-4345-a12c-a582c696a58a}} appearance measurements aiming to measure subtle signals. These include sterile neutrino oscillation searches with the upcoming Short Baseline Neutrino (SBN) experimental program {{cite:11ffd1a061610003e1cc3b12a7c8a6ab72af1b68}} and CP violation measurements and mass hierarchy determination with the future Deep Underground Neutrino Experiment (DUNE) {{cite:6f984b5ded817d2af024b9d6a6348accf561dc77}}.
| i | bdbd8c3cc2e158e9826ac64331f3c397 |
The important thing to be observed in the above table is that the components of {{formula:539da0ef-b598-4ceb-8eb5-270e2ae1f722}} are such that the boost weights relative to the null vectors {{formula:065ca2cc-1111-473d-b7a6-0f202a54c97a}} are always equal to the boost weights relative to {{formula:06451e9d-2343-4103-958e-583d71f6eb6d}} . So in the language of {{cite:b3e529ae7aa246d949803c31c4eba54266c240d2}}, {{cite:3dfd9627e05cdd03ffc40c755ab39cab980c071f}} the only allowed types for {{formula:bfcf2dac-a569-42b4-b918-38189eab5e28}} are [O,O], [I,I], [D,D], [II,II], [III,III] and [N,N]. These are respectively what in the present article was called types O{{formula:d66702bc-f2cc-45f6-a8b0-9629c7edb63c}} , I{{formula:92218cc8-9c29-49da-80d4-a7ac753b9033}} , D{{formula:1d76e54f-a5aa-41a6-991c-45cd59a7b66c}} , II{{formula:65f4a8b7-c7ce-42ed-9c6b-c2084ec94928}} , III{{formula:d1a23fdf-3b6e-4aec-b148-ab4275fa5890}} and N{{formula:5ac2570c-fb57-4f4f-85fb-012e8ee51a11}} . Obviously the operator {{formula:a60700e6-9856-49b7-944e-921b804478ed}} can have only the same six types. Finally, to apply the boost weight classification to the whole Weyl tensor, the classifications of {{formula:4f3e3ab6-2479-4dff-9229-498802a23223}} and {{formula:ebd924b7-0188-4c7b-bc1a-cce83852bf03}} should be composed. It is then clear that in the Lorentzian and (2,2) signatures the boost weight classification for the Weyl tensor furnishes the same types as the bivector approach presented here. Obviously it must be considered, as was shown in section , that in the Lorentzian case {{formula:32d5fb6e-5c23-49d7-ba5c-071a9cc01a9b}} is the complex conjugate of {{formula:c862d963-e589-4907-b62a-52ae7f56a7b0}} , so that the type of the first operator is the same of the second one. Analogously the boost weight classification for the Weyl tensor for the complex and Euclidean cases, in the form explained above, follows easily and produces the same types as the bivector approach presented here. As an example let us treat explicitly the Euclidean signature.
| m | e6fb433a2a6a7988d56ee8a9e1dd348a |
Water is one of the most important substances on Earth. It is a key player in a vast variety of biological, geological, environmental, engineering, and technological processes. The majority of interpretations of the experimentally observable properties of liquid water have been possible through modelling. Various models of water exist and can be broadly classified into three groups: quantum-chemical, atomistic, and coarse-grained {{cite:826ab478fad4fc5dbb96bc64dfd89eae8a416578}}. In 1975, Stillinger, Lemberg and Rahman proposed an isotropic coarse-grained central force (CF) model of water {{cite:91ae7d75a1f36ef560850b915b52f196e3c284fb}}, {{cite:ea6ac883093afe4482ef432480a4bf54bd77bfad}}, {{cite:1127cd42ce25afb4f862e90747d35be30db443f7}} and later introduced some improvements {{cite:66296e85e0fc1fcadae6f198710ef8d8e8efd63c}}. In this model, water is regarded as a weak electrolyte, where oxygen and two hydrogens spontaneously form ion-triplets, i.e., a H2O molecule. A collection of three pair potentials describing the hydrogen-hydrogen, hydrogen-oxygen, and oxygen-oxygen interactions are responsible for a non-linear geometry of H2O molecule and its dipole moment. In a revised version of the model (CF1) by Haymet et al. {{cite:41ed858fc2782b8656abc84ee732d447e37f8a22}}, the set of potential parameters was adjusted in such a way as to bring the pressure of the model system at 25{{formula:9a72a614-b561-4db4-8969-9202fe9ebd6f}} C closer to the atmospheric. The model has also been used to study solvation of ions {{cite:8cf9ed32067f93a37df5c6d01a2fd349d9a72f0b}}, {{cite:8a411cf2855840a9d425e0d68e204800cd2ede5e}}, {{cite:53c18e315c7d3b22bc5ee02ef8e5bcf7732a36d8}}, {{cite:0339dff7b1003adbcc985eb07244b2c44dfb11b2}} and of hydrophobic solutes {{cite:d0a0c61e79c36c9a5219909ff1ea90adee862926}}, {{cite:90e43a8e72b0ec1693f91b30637e2e07b513f334}} as well as at the interface with a planar wall (electrode) {{cite:5591caaef3673e9df9bb433803edf849c508a055}}, {{cite:6394001e7786e2d3f3a3a689fab63c68ed6a8071}}, {{cite:7ee4029b3e84c9fee01166e8c3e9f86d3e7fab3d}}.
| i | 4a34669a3fee98d7e468c3b3a5a0f957 |
General Relativity (GR) has passed many observational tests, from the weak-gravity regime to the strong-gravity regime {{cite:ad8a0ce8424868567ff474d0d34aba10772cb7f8}}, {{cite:e67cac68230a636cae7ce2542f493b7b79b63335}}. However, GR
with a Universe containing baryonic matter and radiation cannot explain some cosmological observations. The most important among them are cosmological inflation in the early universe {{cite:ea06afb34eb9be9f4bd085b7f3cb3a6832bc2adf}} and the late-time accelerated expansion of the universe {{cite:92c1ff68d8b9b2d8dfe822a649d003107de2827f}}, {{cite:a64456a436d44c5172399e02a973063e239b87f9}}. Even though a solution for an expanding universe exists in the GR framework with baryonic matter/radiation, to explain the accelerated expansion one needs to modify the theory. One option is to introduce exotic forms of energy/matter with negative pressure, commonly known as dark energy {{cite:4071033169a3b79b5c34ea8c2f538a0d7a362dbb}}, {{cite:f8ccbf9a2f193752f914b8813bdb5837988329f7}}, {{cite:75c1072a4c4778abea0a88a2cb6bb198994fd5da}}.
| i | 4a79ab35c7905801181d074860e307e7 |
The results presented in this section were obtained from a standard di-jet sample in Pb+Pb collisions generated with Jewel 2.3.0 (code publicly available from jewel.hepforge.org), using the proton PDF set Cteq6LL {{cite:95011a643e5920b86f92609f6bf4d040a036bf26}} and the Eps09 {{cite:3bd73fa2bc43c450fa9d44b9943f2945ae753dcc}} nuclear PDF set, both provided by Lhapdf {{cite:a9882750357e7746f76a3ae61eec7788b0f13638}}. The QGP is modelled with the simple medium model (with parameters {{formula:fb7bf9b6-c409-4beb-9184-b2f129a1e4b3}} and {{formula:7f0f6d72-f0c0-4ec5-b874-3482aba42f05}} for {{formula:ccf0385a-1683-47c4-9a8f-92f9ab0fe27f}} {{cite:d21289e360832f30ae4bba0f14de3607491fcc57}}, and {{formula:333f349e-8180-4f0d-ae09-886392484109}} and {{formula:cf4bfc8e-ed20-4b62-ade4-78679379f415}} {{cite:631b8fe9159644e4e163448a3af9d2e77ca3f65d}} for {{formula:f8ceeb17-1c5d-48e7-9d97-5e095d07839c}} ). The Jewel events were analysed using the Rivet framework {{cite:7a8f15caa444c0929186005de92d37aff389d8bb}}. Jets were reconstructed using algorithms provided by FastJet {{cite:a1eb8a6fe0ec01ac1651cb035f2a1580738a124c}}.
| r | bb549a8efea32a797a8966257e9c4f97 |
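Jet reconstruction of the kind delegated to FastJet above can be scripted through its Python bindings (assuming the scikit-hep fastjet package); a minimal anti-kt example with invented particle momenta:

```python
import fastjet

# Anti-kt jets with radius R = 0.4 from a handful of toy particles.
jet_def = fastjet.JetDefinition(fastjet.antikt_algorithm, 0.4)
particles = [fastjet.PseudoJet(px, py, pz, e)
             for px, py, pz, e in [(30.0, 1.0, 5.0, 31.0),
                                   (28.0, -1.5, 4.0, 29.0),
                                   (-25.0, 2.0, -3.0, 26.0)]]
cluster = fastjet.ClusterSequence(particles, jet_def)
jets = sorted(cluster.inclusive_jets(20.0),      # keep jets with pT > 20 GeV
              key=lambda j: j.pt(), reverse=True)
for j in jets:
    print(j.pt(), j.eta(), j.phi())
```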
Quantum nonlocality has been significantly generalized by considering complex causal structures beyond the standard LHV models {{cite:ff628598edabac253d7e19fbba0ee899a8d18263}}, {{cite:260d841f4c13f22f6f90f6fe8adb9f5b042e3d35}}, {{cite:bebfc17b2ee033d3a288bd39b7e2c6cf24723966}}, {{cite:955df1fe2f1ecf8bb7703745c20be70cffb8e1ad}}, {{cite:1d377539525e3da26b0f2913ec26f882c0f20827}}, {{cite:7e9a55da315aaa04756db31819b1573194ebf496}}, {{cite:ef12f687bb9ad73f7f7adc3cfe7725a305d8ad40}}. These improvements aim to provide rigorous theoretical frameworks of causal relations and structures {{cite:f92161ba09996e59d17310b541059463add7ad1b}}, {{cite:1b094d2577a656ab5acd02ecc4be3dec64822377}}, {{cite:a1b46687adba78c86a872af338fce897e6de1637}}, {{cite:c746274268c4784aa7bccd87307b5a77692ba69c}} and are useful for deriving Bell inequalities {{cite:ea36f61a44d368c4daacbc315e2bc06eae00aee5}}, {{cite:f92161ba09996e59d17310b541059463add7ad1b}}, {{cite:ed8d38f00129fd32f24b70126ed6aa5d6c4a5d06}}, {{cite:d35eeded62f3d72373b046905af38cd8b1e309e7}}, {{cite:afcae6b312b6f35f373fab71a88ba0709fd1e6e2}}, {{cite:9994e2050c3a0a9f629c894d67e076dd07dbdf1a}}. These inequalities are applicable to networks with a single source. Nonetheless, in general networks there are several independent sources for distributing hidden states to space-like separated parties, in terms of the generalized locally causal model (GLCM) {{cite:f92161ba09996e59d17310b541059463add7ad1b}}, {{cite:1b094d2577a656ab5acd02ecc4be3dec64822377}}, {{cite:a1b46687adba78c86a872af338fce897e6de1637}}, {{cite:c746274268c4784aa7bccd87307b5a77692ba69c}}. As a natural extension of the single-source setting, multipartite correlations should be defined with respect to multiple sources. Meanwhile, a meaningful Bell-type inequality enables the characterization of these correlations across the entire network. Characterizing and verifying the nonlocality of multipartite correlations is not only theoretically important for proving quantum supremacy {{cite:f90caa0a73d7ab31191680bca09169b214290630}}, but also experimentally challenging in the implementation of quantum networks {{cite:265da384ad7d5cd4596fd76e4228788b5d0449b8}}, {{cite:f303b76641b454bc97272c00aea311c8b53bf212}} and quantum repeaters {{cite:554f492e437babe43877a8b350215151b4848805}}.
| i | 788fed571ba0b43a279cf5c18b1e0ca9 |
Cooperative multi-agent reinforcement learning (MARL) is a promising approach to a variety of real-world applications, such as sensor networks {{cite:da41ac4d7d8570f46bb1b3c001dfdfbb8d65e9c3}}, traffic light control {{cite:2879f25b956a2ff905a301166384c65e53ed2064}}, and multi-robot formation {{cite:c26a2f75955a4e2e3c5b51f0e4b3cf7db811fc09}}.
However, "the curse of dimensionality" is one major challenge in cooperative MARL, since the joint state-action space grows exponentially with respect to the number of agents. To achieve higher scalability, factorized structures are employed within the paradigm of centralized training with decentralized execution (CTDE) {{cite:6401b91f96569e11ee3f24052e29537991975e0f}}, which allows agents to learn their local policies in a centralized way while retaining the ability of decentralized execution.
| i | d3abe762b5f1c078036da351a398d2e9 |
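One of the simplest factorized structures compatible with CTDE is additive value decomposition (VDN-style), where the joint action-value is the sum of per-agent utilities; a schematic PyTorch version with invented sizes:

```python
import torch
import torch.nn as nn

class AgentQ(nn.Module):
    """Per-agent utility Q_i(obs_i, .), usable alone at decentralized execution."""
    def __init__(self, obs_dim=8, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_actions))
    def forward(self, obs):
        return self.net(obs)

n_agents, obs_dim = 3, 8
agents = nn.ModuleList(AgentQ(obs_dim) for _ in range(n_agents))
obs = torch.randn(n_agents, obs_dim)
actions = torch.tensor([0, 2, 1])

# Centralized training: Q_tot is the sum of the chosen per-agent values,
# so it never touches the exponentially large joint action space.
q_tot = sum(agents[i](obs[i])[actions[i]] for i in range(n_agents))
print(q_tot)   # scalar regressed against a TD target during training
```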
We can attempt to constrain the magnetic field strength by the polarization and Zeeman splitting of the absorption lines in GRB optical afterglows. {{cite:17edce378acd56355f87d9714a462034fd6e5a63}} performed some numerical simulations to identify the amplified magnetic field behind the relativistic shocks. In such cases, the absorption lines are strongly affected by the magnetic fields. The magnetic field in the GRB afterglows can be simply estimated as {{cite:18e00969b73ca5647e70b489e747ea5c5f53ad51}}:
{{formula:48e92f01-6b2b-412b-bffe-8d837d8d835b}}
| d | b2235430057c81bca2935d6df8d45f69 |
The outer parts of the accretion disk of supermassive black holes (SMBHs) in active galactic nuclei
(AGNs) host many poorly understood, complicated processes. Star formation is unavoidable in these
regions because of
self-gravity {{cite:6b3906dd0e969498912e1c79c7a137b009f4dee7}}, {{cite:e0e73dafd76fdc8486a0838de90d35c45f06f738}}, {{cite:1dc9aaa946d1b531d0585f6eaf1f0c6eb1de5e6a}}, {{cite:d7d016a5267cfcf13096cae0ae608ee4de6a84b9}}, {{cite:b38862d4977f6def68344d41ae2e5571841c46ca}}, {{cite:28805a0cad677009a7e86418b4b950fa7c13ba42}}, {{cite:475c2214396f980f7c223f66855d1b18b7e80282}},
producing compact stellar remnants from the rapid evolution of massive
stars {{cite:fa365b8b39aff2b4d71abd48ecfbc2d02fa21df7}}, {{cite:ab00e336aff448ff40f91a193b4c55b7ad32f678}}, {{cite:8c6a64498cedd7b4259ca54648c3dd0fdc832dbe}}, {{cite:b2b227268f4b3b94ce7b96e0c68078b91e9c6eb3}}, {{cite:c6c307b470cf603b5952a87cc7f173a1c673054b}}, {{cite:f4f57f9273fd9d9cec7ba50c28e804cc517e0c57}}. Stellar
evolution rapidly
releases metals into the outer parts of the self-gravitating (SG) disk {{cite:e365281a6eeadf72e509a6811683a422224ee3d5}}, {{cite:6c1f67060c85520f2d0fe34f5fb1d5c079afc5cf}}, {{cite:ef70140044bcfa93a4f7f5069a56b78894660f37}},
offering an explanation for the super-solar metallicities observed in AGNs across cosmic
time {{cite:3195bee3b8e22743dcc860a1d706d9062a24a987}}, {{cite:5f661eaa37af30f43817050cc98861d34b6ca17e}}, {{cite:f226c4b37ce9583ac7e99963ed9632ab5acfa972}}, {{cite:72746859496e024d67316961cbf0758736e742dd}}, {{cite:6eb37a1b9c965b672757e1231fd1d0286aa26790}}. Interestingly, quasi-periodic
eruptions have been found in normal galaxies by eROSITA {{cite:696f0f55af749a14022eb259a640840e7338d87f}}, implying that
stellar-mass black holes (BHs) do reside around SMBHs in galactic centers. Compact objects form binaries
in the very dense gaseous environment of SMBH disks, leading to {{formula:c3b44ac4-9cd2-41b5-852b-b16d647d0571}} -ray bursts and gravitational wave
(GW) bursts from galactic nuclear regions {{cite:ab00e336aff448ff40f91a193b4c55b7ad32f678}}. The detection by Advanced LIGO/Virgo of
GWs from the mergers of stellar binary BHs (BBHs; e.g., {{cite:a105c71b9473b29ff388d8fb3eff00afa7e79ef2}}, {{cite:85bd356c57395f4a68eef3cfa79864913f854e1c}}, {{cite:2d5ee2f5717ece41e09176db7a5b6a18ae2e72fc}})
has renewed theoretical interest in this
problem {{cite:92660f39210afdf4a3edde3c3f5e5fadfbec09fc}}, {{cite:82fd4fe7605a7088e1801ced22ad4b769df23545}}, {{cite:60a3226f40aa245d2c6a8b179d403db2471f3fd0}}, {{cite:cb60c0874ea5157cb863d3e8facd7406439c8d90}}, {{cite:353da9d3e90e7f4d342eaa7b0dd8e56509abe005}}, {{cite:3f53dfa1d5ef34e7b26db0ad470598d84104f1e8}}, {{cite:8fef836e33146a9cf10b4041768e308c8b2039bc}}, {{cite:489023b98d98c5cdcb5b5415719eda0e39cd2539}}, {{cite:e7e29a2d3488d1176020393c46c9115a91eeb5c1}}, {{cite:12c73e5d945967a91d41e00b643f3ee09971ebf1}}, {{cite:2bbd4c109d69a97cbe4799bb6b1ec54fe9e63578}}. The GW190521 event has garnered special attention,
not only because of the large masses of the two constituent BHs (85 and 66 {{formula:fdde42e0-c437-4c32-ad5e-09ab3001385a}} ; {{cite:79fdee43da7329e7d14ad140e9069c221539acd9}}),
but also because the event was potentially hosted by the quasar SDSS J1249+3449 {{cite:07e809399fbbfb9e7aa1c6491d584cc9cea51642}}, {{cite:4b9581fc81d683c2e63e51e35d019c094fb60c73}}.
AGNs and quasars could be natural factories of high stellar-mass BBHs efficiently formed in situ in
SMBH disks.
| i | 6e73a71b6cc5e6c8c6b3341407028840 |
see {{cite:48f5a3b411ee5089c31996516de90f8dee4b5b62}}, {{cite:b10fe836a124a29c958e74bfb3506aaeaad2e964}}.
Hundreds of papers are devoted to the generalizations of the Selberg
integral formula and its applications, see for example {{cite:b10fe836a124a29c958e74bfb3506aaeaad2e964}}, {{cite:d1299b9051ea6e86fdd3ed411f177aed7153eb81}} and references
therein. There are {{formula:81e501bf-ff26-4c4b-8ca3-f929be3462f9}} -analysis versions of the formula,
generalizations associated with Lie algebras, elliptic versions, and finite field versions;
see some references in {{cite:b10fe836a124a29c958e74bfb3506aaeaad2e964}}, {{cite:d1299b9051ea6e86fdd3ed411f177aed7153eb81}}, {{cite:c23c31764eaacb6bebb218bf8ec3bcf2510f5fdb}}, {{cite:50c7048897d707902aef015d56f5b834dfafd0ae}}, {{cite:3d35a967e988c6a911c9ad746b1c1783b5aadfe4}}, {{cite:281727b6495acc60b359c036ef0689dbe5733ff3}}, {{cite:af5265909af80311e12ecd829b897d6f94a169f6}}, {{cite:e9a5890f0ddb4d350eb36ae77d16da177a5729d9}}, {{cite:171a682b29e21a5cfd0b47943083f5929678767e}}, {{cite:bdbe8a014c7f85e7a620eced1e0bf182e2804ac2}}, {{cite:8ef1e0384c0ef891701085b7cc722818b14ad03f}}, {{cite:8e50d0464bbdae7944ddd46daf21f20d8811d857}}, {{cite:def361764b5fc7ffeaba4e7ad276df54d6d8f904}}, {{cite:5da436695747ff536edc52bec151cf513812e73b}}, {{cite:c81ff9e55f72a7f14929f652ba27d4278ea9ca91}}, {{cite:76a388bef6144b9fc1a5ce8b8c7bd6bb43516c0e}}, {{cite:5c0cfc1cda5718076ff72e5063e5710b64391575}}.
In the finite field versions,
one considers additive and multiplicative characters of a finite field, which
map the field to the field of
complex numbers, and forms an analog of equation (REF ), in which both sides are complex numbers.
The simplest of such formulas is the classical relation between Jacobi and Gauss
sums, see {{cite:b10fe836a124a29c958e74bfb3506aaeaad2e964}}, {{cite:76a388bef6144b9fc1a5ce8b8c7bd6bb43516c0e}}, {{cite:5c0cfc1cda5718076ff72e5063e5710b64391575}}.
| i | 3ca17bf470d5501365915f16c48f9a8b |
We also see implications for quantum black hole physics. When a black hole forms, imploding matter gets compressed against the past event horizon, as seen by a distant observer at later times. Similarly, outgoing Hawking particles line up along the future event horizon, ready to spring to life much later. When matter reaches very high energy densities this way, it may well be that the highest possible energy state is approached near both horizons. This state, which we call the `antivacuum' state, contrasts with the state of lowest possible energy density, the vacuum state. In the classical picture, a symmetry relating vacuum to antivacuum seems to be evident. When we describe stationary black holes, matter appears to be almost absent, as if the antivacuum of compressed imploding particles has been transformed into a vacuum. Since matter is the source of curvature, this replacement of antivacuum with vacuum forces the past and future horizons to change their effects on space and time. This is where the `antipodal identification' is suspected to originate. Antipodal identification is known to be needed if one wants to restore unitarity for the evolution of a stationary black hole.{{cite:0577722a993929e0152213c01933f0f6ec5f3126}}
| d | 6145b01860e19123c3ca231f0d44214e |
The field {{formula:378572f1-b84b-44d0-8a74-82e86e5ad391}} plays a distinguished role in defining the state of quantum fields. The structure of the counterterms suggests the simplest form of the classical action for this field. Its Lagrangian density reads {{cite:c4531bd5601aaa5b4294e593d1073feec271a3db}}, {{cite:3ee58c747fd9f994a2126df71ff7374c2a25bdeb}}, {{cite:9197687a972cfade59e83f15f0ff1bcd16ba214e}}, {{cite:d53c6635a8b0ed9c0153b81c61ffc284789a6349}}
{{formula:e752d48d-3d84-4e31-a519-2bdd51982323}}
| r | e33231c804e96b4afcbd1c73bab1b295 |
In particular, Guo et al. {{cite:a8f164d062dca05b3dfbe9054e7d11091a12bda2}} design a training curriculum based on complexity clusters. The authors show that most examples with noisy labels get assigned to clusters with higher complexities and hence only influence the training process at later stages, at which the model has learned the dominant patterns already. Han et al. {{cite:6b65d3527cdeec69964492a82acaabf12663e206}} build multiple prototypes for each class that are used to produce corrected pseudo labels based on a similarity measure in feature space during the training process. The loss function of the neural network is mutually influenced by the observed and corrected labels. Following the strategy of sample selection, Malach et al. {{cite:09c752e27c5fd5fc5a341552184d0b626f7a60fc}} maintain two networks that are updated only when their predictions disagree. The strategy assumes that correctly labeled hard examples are more likely to produce ambiguous predictions than mislabeled easy examples. Inspired by this work, Wei et al. {{cite:f20f74373d157e518368664acedf300ead669668}} train two networks with small-loss instances only, which are derived from a joint loss to ensure the agreement of both networks. Addressing noise robustness via loss function design, Huang et al. {{cite:17b7ca979025d46433760c52907981a953667e23}} employ a label refurbishment loss that makes use of the exponential moving average of predictions to progressively correct wrong labels. Further, Liu et al. {{cite:5afb5d523916228d403fee70d350a811a5743aec}} exploit prediction values from the early learning phase to add a regularization term to the standard loss in order to neutralize the gradient of examples with false labels. Li et al. {{cite:d21160b2f06e0c6c3929808f4e5c702ffba627f5}} propose to incorporate information on mislabeled examples into the loss function within a meta-learning step. During this phase, multiple mini-batches of synthetic noisy labels are generated by a random neighbor label transfer. Each synthetic mini-batch updates a copy of the current model, enforcing consistent predictions with a self-ensemble teacher model and all meta-updated models at this stage.
| m | 8f5f3c49443c94f06da7f916ee9db603 |
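A minimal version of the small-loss selection idea used by the two-network methods above: keep only the fraction of examples with the lowest per-example loss in each mini-batch; all shapes and the keep ratio are illustrative.

```python
import torch
import torch.nn.functional as F

def small_loss_subset(logits, labels, keep_ratio=0.7):
    """Indices of the keep_ratio fraction of examples with smallest loss,
    assumed more likely to carry clean labels."""
    losses = F.cross_entropy(logits, labels, reduction="none")
    k = max(1, int(keep_ratio * len(labels)))
    return torch.topk(-losses, k).indices

logits = torch.randn(32, 10)
labels = torch.randint(0, 10, (32,))
idx = small_loss_subset(logits, labels)
clean_loss = F.cross_entropy(logits[idx], labels[idx])   # update only on these
print(clean_loss)
```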
Previous research has focused largely on improving calibration and accuracy using label smoothing techniques, primarily with one-hot encoded labels.
However, the existing literature has disregarded the vast amount of information that lies in example difficulty, which could be useful for more precise label smoothing.
Our work thus seeks to expand on existing label smoothing methods by incorporating example difficulty.
Perhaps most related to our work, Peterson et al. {{cite:cfd135d1d56bbc2647c9752f18991507951ac3b0}} found that performance on CIFAR10 can be improved by sampling ground truth labels from a distribution of human annotations, which is equivalent to our linear agreement-aware method.
Our work proposes a superset of methods operationalized via label smoothing.
| d | fe25a591071b76fd8fa8db3137bd1980 |
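A sketch of difficulty-aware smoothing in the spirit of the linear agreement-aware method mentioned above: the smoothing mass per example grows with annotator disagreement. The exact schedule is an assumption.

```python
import numpy as np

def smooth_labels(labels, agreement, n_classes):
    """Per-example label smoothing: eps_i = 1 - agreement_i, spread uniformly.
    agreement_i is the fraction of annotators choosing the majority label."""
    eps = 1.0 - agreement                    # harder examples get more smoothing
    onehot = np.eye(n_classes)[labels]
    return (1.0 - eps)[:, None] * onehot + (eps / n_classes)[:, None]

labels = np.array([0, 2, 1])
agreement = np.array([1.0, 0.6, 0.8])        # human annotator agreement rates
print(smooth_labels(labels, agreement, n_classes=3))
```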
The simplest and most common Bell inequality is the Clauser-Horne-Shimony-Holt (CHSH) inequality {{cite:66fc0a1fbf727dd7ba4b7583c1fd6f5febdda39b}}. It applies to the scenario where two parties – Alice and Bob – share a state and each perform one of two dichotomic (two-outcome) measurements. The experiment is repeated many times to assess the CHSH score, a single scalar computed from the statistics of the outcomes. Whenever the CHSH score is above 2, the underlying correlations between Alice and Bob's results cannot be described by a local causal theory. Furthermore, if the CHSH score attains the maximal quantum value of {{formula:213d4ecd-c28d-4546-a5bd-56ef319757e3}} one can certify that Alice and Bob share a maximally entangled two-qubit state, i.e., a singlet up to local unitaries {{cite:b322ecf361ea01247cb6eb4126cc8db9dc5a7935}}, {{cite:435a98ad2786044e3bbb2e2f6370153e0c50d8f0}}, {{cite:56768de241d866983d403cb6a18b582c0f434ea9}}. Hence CHSH self-tests the singlet.
| i | d1ab2ddd8f7fe31c4ec6b4e48d020bf4 |
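The maximal quantum CHSH score can be verified in a few lines; the code uses the |Phi+> Bell state, which is the singlet up to a local unitary, together with the standard optimal measurement settings.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def E(A, B):
    """Correlator <psi| A (x) B |psi> for the shared Bell state."""
    return float(phi_plus @ np.kron(A, B) @ phi_plus)

A0, A1 = Z, X                                    # Alice's two settings
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)   # Bob's two settings
S = E(A0, B0) + E(A0, B1) + E(A1, B0) - E(A1, B1)
print(S, 2 * np.sqrt(2))                         # both ~2.828, above the local bound 2
```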
This opens up a conceptual framework that is attractive in its own right and so deserves more systematic study: the vision that high-energy supersymmetry survives at low energies dominantly in the (possibly complicated) gravity sector despite supersymmetry being badly broken for all of the ordinary particles described by the Standard Model. This is a vision that often actually does descend from supersymmetric UV completions {{cite:428870150b703e682552f62ccb88e39bdc47e825}}, {{cite:0639b0417ed1089a506267bab91bf43b3119490a}} (though the calculations that show this are usually only done without including quantum corrections). It does so because splittings in any supermultiplet arise proportional to the couplings of that multiplet to the supersymmetry-breaking order parameter, and gravity usually has the weakest couplings of all.
| r | c50c6a91b7bd6862d1eddd357a2ec8cd |
We introduce a new signal (i.e., collaborative information) to support the training procedure of automated image understanding processes.
We propose a general multitask-learning (MTL) {{cite:565b7199c5a81d0425a9b9f327f8b4222b473108}} framework to incorporate collaborative information in the form of latent item representations into existing single-task content understanding models.
We explore several alternative approaches to combining collaborative information in the training procedure of image classification models.
We show that our approach is also particularly effective in typical real-world situations where labels are missing for a fraction of the training images.
| i | b14c4afa6d58be042c8d1fbbc4078c05 |
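A schematic of the MTL combination proposed above: a shared image encoder feeds both a classification head and a head that regresses precomputed collaborative (latent item) representations; all dimensions and the loss weighting are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTLImageModel(nn.Module):
    def __init__(self, feat_dim=128, n_classes=10, cf_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.cls_head = nn.Linear(feat_dim, n_classes)  # content-understanding task
        self.cf_head = nn.Linear(feat_dim, cf_dim)      # predicts latent item vectors

    def forward(self, x):
        h = self.encoder(x)
        return self.cls_head(h), self.cf_head(h)

model = MTLImageModel()
x = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
cf_targets = torch.randn(4, 64)          # e.g. from matrix factorization of interactions
logits, cf_pred = model(x)
loss = F.cross_entropy(logits, labels) + 0.5 * F.mse_loss(cf_pred, cf_targets)
# If a label is missing, the classification term can be dropped while the
# collaborative term still provides a training signal for that image.
print(loss)
```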
The problem of removing malicious nodes from networks has long been of considerable importance, and it
has attracted a great deal of recent attention. In social networks, accounts occupied by malicious parties spread toxic information (e.g., hate speech, fake news, and spam), stirring up controversy and manipulating political views among social network users {{cite:9a3fda17cd86f13faf2703815bd945ad653b6b6a}}, {{cite:b4e1ce135f30c9d5b6f2f665f0e7c83be7bf1c4f}}.
Major social media entities, such as Facebook, have devoted considerable effort to identifying and removing fake or malicious accounts {{cite:d46f9af27fc3c376a5b4a6568828539c95828410}}, {{cite:7c33e93021e3b68c88d894b1ac00dda3d4365c87}}.
Despite these efforts, there is evidence that the problem is as prevalent as ever {{cite:dd6030bcbc109080758770f6bd7ac04e0b46c726}}, {{cite:bc2b8d6462aef755c830ce7818a1f57af76d8aff}}.
A similar challenge obtains in cyber-physical systems (e.g., smart grid infrastructure), where computing nodes compromised by malware can cause catastrophic losses {{cite:70090424a3347ef0bdcde79ba43988f1903dff86}}, but removing non-malicious nodes may cause power failure {{cite:e652c9ba28b6ae35aef500aaf9d145606aa6aa4c}}.
| i | 90ac9b30f694f5e191bd1e53aceda5ba |
While the numerical evaluations presented in {{cite:60bb69197bd987af721eaf1cef0ed59ccc28fe18}}, {{cite:d77f19f43a084941449ce20aac092ae652784e0a}}, {{cite:3dac1e0621c1c20ce765dadc624655ffe27e8b22}}, {{cite:e327fcd3bc0a8bf1aa18695dd9b228ac42943efb}}, {{cite:42181b50fc20f32ea0b5812a02caa68d9d914c22}}, {{cite:961d8094111646ac05c22fb0bb2b7419d34b3887}}, {{cite:b4a800d58de6e0eab93a9a85c350e76f05c1d711}} have shown that adaptive methods using {{formula:a88ab2a9-b3ec-42ab-9769-1224a0ef37aa}} and {{formula:4b5f0476-df10-4077-9fac-562b1f020eb8}} close to 1 are advantageous for training deep neural networks, the theoretical results in Table REF imply that adaptive methods with {{formula:ea6d1b08-9b16-4c1a-a0eb-79fee9222f4a}} and {{formula:eae99bc8-fbbd-4f7e-bb84-1698ebfabf9a}} close to 0 are good for solving nonconvex optimization problems in deep learning. This implies that there is a large gap between theory and practice for adaptive methods. Our results in Table REF show that Adam indeed performs well when {{formula:51175e9e-1d52-4ebd-b527-4161bfebfb1d}} and {{formula:78909cc3-3205-455e-afdd-586773ff4f46}} are set close to 1. Thus, the gap between theory and practice can be bridged for Adam. Our results also show that using a large batch size makes the upper bounds of the performance measures small, which implies that our results match the numerical evaluations in {{cite:d7f8188bef8dac709ce44e304cf725d494ffbf2c}}.
| r | 63189213188557c9cf106f81ecae94d0 |
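The roles of beta1 and beta2 discussed above are easiest to see from the update rule itself; a bare-bones Adam step in numpy:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. beta1/beta2 near 1 give long-memory estimates of the
    first and second moments, the practical regime the row above refers to."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 6):
    grad = 2 * theta - 1.0                      # gradient of a toy quadratic
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)
```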
Artificial Intelligence is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals, including humans. Leading Artificial Intelligence textbooks define the field as the study of artificially intelligent systems, understood as systems that perceive their environment and take decisions and actions that determine their chances of attaining their goals {{cite:d4242f2467fc2331bfdfdc78435c1e756b4a089e}}, {{cite:c3091beca58433a2c3acde46941b0e750ed520a4}}, {{cite:48d6eca296517647eb6e4b6f4da05120fce7accd}}, {{cite:358ffb815c3459ed84751515f40e3e6a6e7e77b7}}, {{cite:5d0f698309c60ba4915fbde60acff62ae655e159}}.
| i | 74570aedd78ce805856e8dea5e419e43 |
which is valid to first order in {{formula:9155805e-8776-4132-b0fe-54cf60daa433}} . (Note that the term {{formula:7f5ae10a-9a19-4821-bba5-d415135279ed}} is negative.) It is easy to see that once one requires the first-order perturbations to be genuinely of first order, it is not possible to make {{formula:bf38cf69-67ca-4be7-95e5-c5c4299486c8}} (defined by SW) negative for the first-order terms. The variational method does not reproduce the previous results due to Hubeny {{cite:c862e0add04a5de9e462264bc7400085930a1e6e}}, Jacobson-Sotiriou {{cite:79534007a897f84912c8e6e52bc97a954d284125}} and Düztaş-Semiz {{cite:ceaf3b60260d44c2db897db33c77a1fe997c99cf}}. For first-order perturbations, the results of SW contradict the previous results when the fact that {{formula:6fc9e9b1-2ce3-40be-8dd4-b3a311491b7c}} is inherently a small quantity is taken into account.
| m | f4832b3ab7bbe45f3ee0a074603c0764 |