| text | label | id_ |
|---|---|---|
Nowadays, various types of space-frequency distributions are successfully used for the analysis of nonstationary signals {{cite:1c768296eece59eb72e080586ae0fac5676bd0f1}}, {{cite:52a7eebcc44ea0aaeb073a0b2da3927bdbb4ca43}}. Many such distributions have both advantages and disadvantages when used in various fields of physics. The Wigner and Weyl distributions are widely used in space-frequency analysis and, in particular, in optical information processing systems. It is well known that these distributions possess characteristics that make them well suited to the description of many optical systems. Their range of applications is extremely wide: they are used, in particular, in the theory of optical lens systems, communication theory, sonar, and other fields {{cite:b3b1140652414cb94f2998ad2aae6c14bbd70344}}, {{cite:bf44aaed3a5cb4301e4c4e2f4b3db09b94ef0216}}, {{cite:5163ab477228110d5b39698883b0fd34a4a52aa8}}. Research in recent years has demonstrated the efficiency of space-frequency distributions in biology and medicine; in particular, the Wigner distribution was successfully used for the reconstruction of the volumetric structure of objects within the framework of optical tomography {{cite:a95ccb8b798a25f45fdfceb4bd7a2a7b51e86dc8}}, {{cite:015b5bc86766ddee96d1ff4017044c0555022373}}, {{cite:a9c6161ea971dd844f0e7b0e40ba5e76cdf37d63}}.
One of the promising research directions within the space-frequency processing of signals is the study of the properties of novel space-frequency representations, with the aim of their further application in different areas of physics and medicine. Unfortunately, it often happens that a given space-frequency distribution does not meet the demands raised by a specific application. For this reason, many of the existing distributions need generalization or improvement when applied to a given problem. During the second half of the past century and the beginning of this one, a clear tendency toward the generalization of different space-frequency distributions has been observed.
The first such generalization was due to L. Cohen {{cite:c36501c19fc9320d779d076be66528095cab398c}} as early as 1966. He introduced a family of quasiprobability distributions that provide the proper quantum mechanical marginal distributions; within this framework the Wigner distribution was examined as a special case. The next step was taken by N. G. de Bruijn in 1973 {{cite:c812edb7856b5af00f621881d71a9104b20c50fb}}, whose work was devoted to the elaboration of a theory of generalized functions with applications to the Wigner and Weyl distributions. Summarizing the results of numerous investigations, L. Cohen {{cite:1c768296eece59eb72e080586ae0fac5676bd0f1}}, {{cite:bf44aaed3a5cb4301e4c4e2f4b3db09b94ef0216}} suggested a generalized distribution involving a certain kernel
{{formula:f5e8e9f2-0588-4488-8ece-bd5a61f5b7ed}}
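As an illustration of how such distributions are computed in practice, the following is a minimal NumPy sketch (our own, not taken from the cited works) of the discrete pseudo Wigner distribution of a complex analytic signal; the chirp test signal and the edge handling of the lag variable are illustrative assumptions.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner distribution of a complex (analytic) signal x."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)             # lags limited by signal edges
        tau = np.arange(-tau_max, tau_max + 1)
        # Instantaneous autocorrelation r[tau] = x[n+tau] * conj(x[n-tau])
        r = np.zeros(N, dtype=complex)
        r[tau % N] = x[n + tau] * np.conj(x[n - tau])
        # Fourier transform over the lag variable; real by Hermitian symmetry
        W[n] = np.fft.fft(r).real
    return W

# Linear chirp: the distribution concentrates its energy along the
# instantaneous-frequency line of the signal.
t = np.arange(256)
chirp = np.exp(1j * 2 * np.pi * (0.05 * t + 0.3 * t**2 / (2 * len(t))))
W = wigner_ville(chirp)
```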
| i | 37e3c954a85d507d3554c1c07cb797e3 |
In this paper, we propose a new practical way to enhance the security of PUFs using quantum communication technology. First, we classify adversaries into adaptive and weak adversaries based on their capability to query a PUF device. In PUF-based authentication protocols, one important issue (for both classical and quantum PUFs) is that an adaptive adversary can query the PUF with arbitrary input challenges. This permits such an adversary to efficiently learn and emulate the input/output behaviour of the targeted PUF. By harnessing the power of quantum information theory, we propose here a construction of a hybrid PUF (HPUF) with classical challenges and quantum responses. The main idea is to encode the output of a classical PUF (CPUF) into non-orthogonal quantum states. In Theorem REF , we show that learning the responses of the underlying classical PUF from the outputs of such a hybrid PUF is at least as hard as learning the classical PUF {{formula:637e8ab7-1d90-4e11-8fe7-5d9a59373c89}} from the random samples of {{formula:1352e711-8efd-4ad3-92e4-d2569851dbd1}} , where {{formula:d196157e-b704-4bb1-b999-a63fc93e91e8}} is a random error term. Note that, in general, learning a function from its noisy samples is a hard problem even if the actual function {{formula:691f9078-1cce-4b4a-9e41-d12d9fc81dc4}} is a simple linear function {{cite:be2cfd45b5e3a83c18d4a82fd0831efcaefadb8e}}, {{cite:7ccbb1a521b630463ced42db5ccdf61bb61746b1}}, {{cite:df64f2a23a325c3bf22b528cb0378367d44b7b94}}, {{cite:928f55d2336cd0f8742224d8f4120fd56f926a7c}}, {{cite:847ccef29bae21b0935083b0c52a31e123dc10af}}. This implies that if learning a CPUF from its noisy samples is hard, then it is also hard to learn the CPUF from our HPUF. Note that here we obtain the noise term {{formula:098ff499-f54c-414f-b42b-9256eff69c5b}} without any pre-shared key: it arises from the impossibility of perfectly distinguishing non-orthogonal quantum states. We further propose a hybrid PUF-based authentication protocol with the lockdown technique and prove its security against adaptive adversaries. The advantage is twofold: on one hand, due to quantum information theory, the probability of extracting information about a quantum state is upper-bounded, in contrast to a classical PUF; on the other hand, the implementation of the hybrid PUF is practical with today's quantum communication technology.
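To make the role of non-orthogonality concrete, the following is a minimal numerical sketch under our own illustrative assumptions (a BB84-style conjugate-basis encoding standing in for the paper's exact construction): each CPUF response bit is encoded in one of two bases unknown to the adversary, so an intercepting adversary observes the response only through an effective noise channel.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(bit, basis):
    """State vector for bit encoded in basis 0 (Z: |0>,|1>) or 1 (X: |+>,|->)."""
    if basis == 0:
        return np.array([1.0, 0.0]) if bit == 0 else np.array([0.0, 1.0])
    s = 1.0 / np.sqrt(2.0)
    return np.array([s, s]) if bit == 0 else np.array([s, -s])

def measure(state, basis):
    """Projective measurement in the chosen basis; returns the decoded bit."""
    if basis == 1:  # rotate the X basis onto the computational basis
        h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
        state = h @ state
    return 0 if rng.random() < abs(state[0]) ** 2 else 1

# The adversary must guess the unknown encoding basis; a wrong-basis
# measurement returns a uniformly random bit, so the CPUF response is
# only observable through roughly 25% bit-flip noise.
n = 100_000
bits = rng.integers(0, 2, n)
bases = rng.integers(0, 2, n)
guesses = rng.integers(0, 2, n)
errors = sum(measure(encode(b, s), g) != b
             for b, s, g in zip(bits, bases, guesses))
print(f"adversary bit-error rate ~ {errors / n:.3f}")  # ~ 0.25
```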
| d | 0f958cabc6ff7b9861217df9a29fd222 |
Understanding how events will affect the world is the essence of intelligence {{cite:85f6e5edc42d994e038ef4cfb5b09af81684cdc9}}.
Procedural text understanding, which aims to track the state changes (e.g., create, move, destroy) and locations (a span in the text) of entities throughout a whole procedure, is a representative task for assessing machine intelligence with respect to this ability {{cite:8372dfb0e2e1b5dba88d42701741fd7877b20dcf}}.
For example, in Figure REF (a), given a narrative describing the procedure of photosynthesis, as well as a pre-specified entity “water”, a procedural text understanding model is asked to predict the corresponding {State, location} sequences: {Move, root}, {Move, leaf}.
Compared with conventional factoid-style reading comprehension tasks {{cite:8d7a67f369c6534b6692612df46e6c2c717d276b}}, {{cite:f24759df75c01f9b3dc55078ed6aed05e45f4870}}, procedural text understanding is more challenging because it requires modeling and reasoning about a dynamically changing world {{cite:8372dfb0e2e1b5dba88d42701741fd7877b20dcf}}, {{cite:4ce2d7a291dc6a39d1f10e5a511bd6b29bcbc551}}.
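For concreteness, a minimal sketch of the task's input/output format; the field names below are illustrative and not tied to any particular dataset.

```python
# Narrative steps of the photosynthesis procedure (paraphrased) and the
# entity to track; the model must emit one (state, location) pair per step.
steps = [
    "Roots absorb water from the soil.",
    "The water flows to the leaf.",
]
entity = "water"
gold = [("Move", "root"), ("Move", "leaf")]  # expected {State, location} sequence
```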
| i | 0fe6b5637275e9d482829d62d64d1107 |
For quantitative evaluation, we use the synthetic stereo videos from the MPI Sintel dataset {{cite:a76a45aaa6f7a7f217d934f1614b13ccce01bdc6}}.
We select 6 videos that show a variety of characteristics in terms of scene motion, camera motion, and the size of moving subjects.
We use the left video to train our models and render them from the right video's viewpoints for ground truth comparisons.
| r | 8b71728c4986c161bdd0ebd99a29ddba |
Considering that the objective of our project is to identify individuals wearing a face mask, we applied a transfer learning technique, in which a model developed for another face recognition task is adapted to our specific project. We apply transfer learning to a convolutional neural network, in this case a ResNet-50 architecture. We selected ResNet-50 based on its performance in several image recognition projects; in particular, it has the best time and memory performance compared to VGGNet19 and DenseNet121 {{cite:aa303fc11a622d9ff0381988fafb6a4b011bdf28}}. We use a pre-trained model obtained from {{cite:4336f3b4f5fc56739dede147c98e8e7e5c743018}} and fine-tuned its parameters on our dataset of faces without masks. Next, we ran the model on the masked faces and fine-tuned it based on the results. The objective was to identify an individual wearing a mask after training on faces without masks.
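A minimal sketch of this fine-tuning setup is shown below; since the face-recognition checkpoint from the cited work is not specified here, the sketch substitutes torchvision's ImageNet weights, and the number of identities is an illustrative assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_IDENTITIES = 100  # illustrative size of the identity set

# Load a pre-trained ResNet-50 and freeze the backbone.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head and fine-tune only its parameters
# (first on unmasked faces, then on masked faces).
model.fc = nn.Linear(model.fc.in_features, NUM_IDENTITIES)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```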
| m | e7a6df58ba7c3a08f25dc8d1508670f4 |
HMC generates a proposal in three steps. Given the current state {{formula:3005d9b2-9225-455d-aedc-8c6ffdf42c88}} , a momentum {{formula:40ef06a9-4e57-4bda-996b-41273f43005b}} is sampled from {{formula:3e39275c-2646-4f0d-9a93-b2672767aebf}} . Then Hamilton's equations {{formula:699141f3-5ae7-4486-910e-c2a4e420978d}} , {{formula:02028212-f718-4ef7-b2a8-9b2cbff23dfc}} , where {{formula:da07f4c9-6308-415f-9e87-5fabb83a17f5}} is the Hamiltonian, are solved over the interval {{formula:54ad3bc6-9eb7-46ef-b171-c1d974d45bce}} ({{formula:a02592a0-00f5-4b86-998f-1177324b15ba}} is commonly known as the trajectory length). Typically an analytic solution is not available, so the dynamics are discretized and a suitable symplectic integrator, commonly the leapfrog integrator, is used with step size parameter {{formula:a9fe8934-ef9f-46e3-83ad-bbd3eb6b0db0}} . The number of leapfrog steps {{formula:96661036-92c4-4cd5-8948-b138eb40933e}} is related to {{formula:8e639928-9dab-40ca-9040-c5551aefd6d9}} as {{formula:d17ed099-9e4b-4a7e-ac0e-b34575419e66}} . The proposed state {{formula:799e3718-e713-4d1f-835f-34b1816518dc}} (the momentum is negated to ensure a reversible proposal) is accepted or rejected using a Metropolis-Hastings step {{cite:8d6ac50473d034ca5127388768a4434498b236e2}}, {{cite:a4756a907fb9ab98f6c4450797fcd42065333203}} that targets the joint {{formula:a22df4c7-af21-45ed-8089-546513f84b38}} distribution.
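A minimal sketch of the three steps (momentum resampling, leapfrog integration, Metropolis-Hastings correction), assuming a unit mass matrix:

```python
import numpy as np

def leapfrog(q, p, grad_logp, eps, L):
    """L leapfrog steps of size eps for the Hamiltonian dynamics."""
    q, p = q.copy(), p.copy()
    p += 0.5 * eps * grad_logp(q)      # initial half step in momentum
    for _ in range(L - 1):
        q += eps * p                   # full step in position
        p += eps * grad_logp(q)        # full step in momentum
    q += eps * p
    p += 0.5 * eps * grad_logp(q)      # final half step in momentum
    return q, -p                       # negate momentum for reversibility

def hmc_step(q, logp, grad_logp, eps, L, rng):
    """One HMC transition targeting exp(logp), with unit mass matrix."""
    p = rng.standard_normal(q.shape)   # resample momentum ~ N(0, I)
    q_new, p_new = leapfrog(q, p, grad_logp, eps, L)
    h_old = -logp(q) + 0.5 * p @ p     # H(q, p) = -log pi(q) + |p|^2 / 2
    h_new = -logp(q_new) + 0.5 * p_new @ p_new
    return q_new if rng.random() < np.exp(h_old - h_new) else q
```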
| i | 853309c1833f37b9864217e879df8616 |
As shown in Fig. REF , our domain adaptation framework works in a teacher-student mode. The teacher network can be any existing source dehazing model (e.g., FFA-Net {{cite:45c7464106457a0a00169efb31366f1c2fda5cc7}} or MSBDN {{cite:539a54ebb71e3436a913936034f2fd46688b792b}}; we choose MSBDN as the source model by default), while the student network consists of our DRN module and the source model. The DRN module is expected to make the representation of real data match the synthetic domain, so as to adapt the frozen source network.
Besides, although the source network performs poorly on real hazy images, it retains the ability to preserve image structure. Thus, we freeze the parameters of the source network during training so that its structural information can provide supervision for the student network. The parameters of the student network consist of two parts: those of the DRN module and those of the source network. Only the parameters of the DRN module are updated during back-propagation, so as to preserve the dehazing ability of the source model on synthetic-domain representations.
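A minimal sketch of this parameter-freezing scheme, with placeholder modules standing in for DRN and the source dehazing network:

```python
import torch.nn as nn

class Student(nn.Module):
    """DRN module followed by a frozen source dehazing network."""
    def __init__(self, drn: nn.Module, source: nn.Module):
        super().__init__()
        self.drn = drn
        self.source = source
        for p in self.source.parameters():
            p.requires_grad = False  # frozen: keeps synthetic-domain dehazing

    def forward(self, real_hazy):
        # DRN maps real images into the synthetic-domain representation,
        # which the frozen source model already knows how to dehaze.
        return self.source(self.drn(real_hazy))
```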
{{figure:a775bf30-c776-4cf8-aa48-7661cf36d014}} | m | 6036eeea6e6b27fc017b35dc9ac9d217 |
The success rate of the attack is the fraction of trials in which Mozilla DeepSpeech {{cite:9ef9f2300a5092069779371a179cf193dd41e69c}} transcribed the recorded adversarial example as the target phrase. The Mozilla DeepSpeech ASR is the current audio adversarial robustness benchmark {{cite:fd150c4ea70648ff5998ba26cd1a05ac388f2096}}, {{cite:24f8dbd98af64c38f97d03a76101540c090b117c}}, {{cite:07803760aa0b68cb4c27712c50c43b76eb8f6d90}} and, under adversarial attacks, shows performance consistent with other systems from previous studies, including Kaldi {{cite:1d6ffebc2c74b1608bf65046993c51bf5a234fa2}}, {{cite:fd150c4ea70648ff5998ba26cd1a05ac388f2096}}. The success rate is non-zero only when Mozilla DeepSpeech transcribes an adversarial example as the target phrase perfectly. The generated adversarial audio maintains an SNR of {{formula:ebff1f65-7fed-40ef-af91-fb2d449b6c36}} dB in all experiments. In Table 4, the model trained on Noisy{{formula:835c5965-afe9-42a0-999d-98eb1969a459}} could not attain a high WER and RoSA under Grad{{formula:fc7535d0-2a9e-4d6e-919d-09e1318a514d}} and Evo{{formula:765d300d-8336-42b0-b6bb-d04ccfdc2935}} without the improved performance from adaptive training with related adversarial examples. Fig. 2 shows the difference in spectrogram detail recovered by speech enhancement and by adversarial training.
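A sketch of the two quantities reported here, assuming aligned waveforms and exact-match transcript strings:

```python
import numpy as np

def success_rate(transcripts, target_phrase):
    """Fraction of trials transcribed exactly as the target phrase."""
    return sum(t == target_phrase for t in transcripts) / len(transcripts)

def snr_db(clean, adversarial):
    """SNR of the adversarial perturbation relative to the clean signal."""
    noise = adversarial - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))
```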
{{table:ec6e290f-793e-4629-85a1-b77fc3d75dea}}{{figure:e78d68cb-ed40-4ad2-9191-26a1c49b7490}}{{figure:1d4983c6-0d7a-45d1-a168-673b059549bf}} | d | 3ef97708c1428a436962b3ce30cf04bc |
We experiment by integrating two sub-tasks, personal health mention detection and emotion detection, in a multitask learning setting. This section gives an overview of our multitask learning framework and its parameter sharing scheme (Multitask Learning), with a graphical illustration of the framework in Fig. REF . We also present a comparison baseline framework (Single Task Learning) for personal health mention detection, built on top of a state-of-the-art model, BERT {{cite:89214046ae0110f1bcc3956375d504ccd3b64bef}}.
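A minimal hard-parameter-sharing sketch of this setup; the label counts and the pooling choice are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn
from transformers import BertModel

class MultitaskBert(nn.Module):
    """One shared BERT encoder with one classification head per task."""
    def __init__(self, n_health_labels=2, n_emotion_labels=6):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.encoder.config.hidden_size
        self.health_head = nn.Linear(hidden, n_health_labels)
        self.emotion_head = nn.Linear(hidden, n_emotion_labels)

    def forward(self, input_ids, attention_mask):
        # The pooled [CLS] representation is shared by both task heads.
        pooled = self.encoder(input_ids,
                              attention_mask=attention_mask).pooler_output
        return self.health_head(pooled), self.emotion_head(pooled)
```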
{{figure:20c38e58-8b2e-4fbb-a266-513a05fd789f}} | m | 1023ffc56fd757bac0176ff4a0ed9b1e |
for all {{formula:42a3a80e-7e17-43e0-8e47-0d264d787655}} and {{formula:ce49abaf-44f7-422f-b4a2-d6eadb14e3af}} . The process {{formula:4f97adf3-ed8e-4db6-9678-2db7d9d46e9e}} is called the common component, and the process {{formula:c48d5e96-4336-419c-93a3-13a945d7465f}} is called the idiosyncratic component. Representation (REF ) is the GDFM. In vector terms, we can write {{formula:7aa62eb9-59fa-4462-889d-06b63112ca7a}} , where {{formula:acb9c3b8-b9a5-4f31-9860-131d782b4ac5}} , {{formula:75e3be6a-a9d2-4702-9825-7a27bfde5ff6}}
and {{formula:aca33bbd-0e09-4cff-8059-d4ad1d0310be}} are {{formula:ab46c5f2-fa51-43e0-8075-82acbe4cdf76}} -dimensional random vectors. The GDFM encompasses the approximate static factor models of {{cite:ebe159d719992453dedc09a50d628ee6646b3585}}, as well as the exact dynamic factor models of {{cite:7cb30b7d51ce3321c66520da97719ea16a05514a}}.
| r | 66797050bd476c4015f377aaf106425a |
The paper is laid out as follows. In Section we recall how we
have obtained the Kähler two-forms on the Siegel–Jacobi disk {{formula:80bf2c9a-ee1d-41a1-a4c2-f34ce93b7638}}
and on the Siegel–Jacobi upper half-plane {{formula:c2df6c10-aa60-4001-aa57-38a8d87cddc7}} , specifying
the FC-transforms. Section describes the real Heisenberg group {{formula:92a2d058-7068-42ea-96f9-07349bc489b5}}
embedded into {{formula:be149c0d-59d8-451c-a18e-378f9164d4dc}} : invariant one-forms, invariant
metrics in the variables {{formula:5443597b-f154-490d-bf30-a2dbee1a4cbd}} . Note that in the formula () the last parenthesis {{formula:d5cf365f-2cbb-426b-872c-7007626eef41}} replaces {{formula:90d3ea00-4c2c-440b-9b8f-bc4f2487bef6}} on the
Euclidean space {{formula:7627c87f-a81a-4414-abb3-56abd8be1914}} and the idea of the paper is to see the
effect of this substitution in the invariant metric of the five-dimensional manifold
{{formula:c3b8a493-3cc0-41f8-b759-71be672776fa}} . Section deals with the {{formula:8d688e5d-8dc5-47f9-b27d-c2e807140423}} group as
subgroup of {{formula:7e5c0ee2-d872-4434-8190-f4bbfe3284bf}}
in the variables {{formula:fb238e22-a45c-4b01-b409-aaf31a987428}} , which describe the Iwasawa
decomposition.
{{formula:9e87a261-b81a-498e-9d14-40a89a809f86}} is treated as a Sasaki manifold, with the invariant
metric written down as a sum of squares of the invariant
one-forms {{formula:79c8a303-eb98-456e-a963-56a179c5875c}} à la Milnor {{cite:03973ab7670703da24aa6c5d16b053b19eccf7c2}}, while the metric on
{{formula:1ab39051-017d-4d4d-965a-d42c468468d8}} is just {{formula:20e36347-d547-4498-b460-2dfdaaedc606}} . Invariant metrics on {{formula:6efc0b0f-a984-4c99-ac1e-361c85e10bed}} in
other coordinates previously obtained by other authors are mentioned in Comment REF . Details on the calculations referring to {{formula:cf7734fd-9d91-4c17-a1be-2b8439dadea6}} are presented also in
Appendix REF . Section presents the real Jacobi
group {{formula:b0bfb9f7-fc12-44b4-90ff-6c059b046083}}
in the EZ and S-coordinates {{cite:98a797d249a1a19d09f6d2b7d10ff7dbf8c7e31c}}. The action of the
reduced Jacobi group {{formula:9748d80b-4e6a-48ff-9179-903104b094bd}} on the
four-dimensional manifold {{formula:a71462e9-da42-49c9-a691-d1512084125b}} is recalled {{cite:c801d70c429533d99a53ef32725fc3fb316ab661}}, {{cite:d38e8870216730467ec96eb7105713269d33e2d4}}
and the fundamental vector fields on it are obtained. Also the action
of {{formula:3e05d5bb-3d88-4540-a759-9327331807e1}} on the 5-dimensional manifold
{{formula:8b45c843-9a91-4e42-b29a-f984910a3433}} , called extended Siegel–Jacobi upper half-plane,
is established in Lemma REF . The
well known
Kählerian balanced metric on the Siegel–Jacobi upper half-plane is
written down as
a sum of squares of four invariant one-forms in Section REF . For this we have
obtained the invariant one-forms on {{formula:ba51a990-b95b-4b7c-9c6b-62933d1ca503}} in (REF ).
In Comment REF we discuss the connection of our previous papers {{cite:c801d70c429533d99a53ef32725fc3fb316ab661}}, {{cite:f5022581091ef5c5626ac39bcd761e25c83c8472}}, {{cite:b2fbd0448c834ae353935377a9e329faffac63cc}}
on {{formula:62093447-6049-4d85-a89b-d5d980e639ba}}
with the
papers of Berndt {{cite:3a3d08c1b986f79c713752cb11f151b232870525}}, {{cite:dbfc4de4fb42654a94657eefa62b0ef4c7b055b2}}, {{cite:98a797d249a1a19d09f6d2b7d10ff7dbf8c7e31c}} and Kähler {{cite:65f2764a024a153d44e416a16db948747c290d7a}}, {{cite:5cebb92add82e52840c7f8c8fbda46406b2adbca}}, developed
by
Yang {{cite:2b2ca6958af43f4420ca2bbd294512dce0eb6100}}, {{cite:4e0bd9c17f47abbe1d67adbfc532a7794cd70171}}, {{cite:8eb0a52b822df0eed35d5dc18609a95c6dddd3f2}}, {{cite:72d3df85c7e6f40aef2ef303d0561541d9111439}}, {{cite:64ff835f6ea944cb339f364d660daeded7d93827}} for {{formula:c0de2626-0502-4427-bee9-cdf4ab12a6b0}} . We have also
determined the Killing vector fields as fundamental
vector fields on the Siegel–Jacobi upper half-plane with the balanced
metric (). The same procedure is used
to establish the invariant metric on the extended Siegel–Jacobi upper
half-plane, which is not a Sasaki manifold. All the results concerning the invariant metrics on
homogeneous manifolds of dimensions 2–6 attached to the real Jacobi
group of degree 1 are summarized in
Theorem REF . As a consequence, we show by direct calculation
that the Siegel–Jacobi upper half-plane is not a naturally reductive
space with respect to the balanced metric, but it is one in the
coordinates furnished by the FC-transform. In fact, this is the
answer to the starting point of our investigation referring to the
natural reductivity of {{formula:3eadaa89-3db4-4ea6-a449-fcc445c932a7}} . We also calculate the
g.o. vectors {{cite:29f965beaaed6e0fc9308249f042d47c3bb753fe}} on {{formula:88c08856-eec3-4c61-bca4-2a4a542cb3ff}} applying the geodesic Lemma REF .
| i | 9bffb66c4e3e1089cc368152d637420f |
The mobile operator's goal is to provide QoS Internet services for large populations of clients, while minimizing the overall computing-plus-communication energy consumption. Hence, a trade-off is required between QoS and energy savings.
Future MNs are expected to learn the diverse characteristics of user behavior, as well as renewable energy variations, in order to autonomously determine good system configurations. Towards this goal, online forecasting using ML techniques together with the LLC method can yield the desired system behavior when taking into account the environmental inputs, i.e., BS traffic load, server workloads, and the energy to be harvested. Next, the mathematical tools used in this research work are reviewed, namely the LLC method {{cite:a4ef1ced5436155f2d30e4f1348deedc45be5efb}}, {{cite:42890ef0fe079a401a2be9658e83dc1971709e8e}}, {{cite:c132f4e6453a38ee061e064cd09e424987df8f56}} and LSTM neural networks {{cite:ff2ad997c7dc424bbe7ac29d072bd4b767016c5c}}, {{cite:3dcf40225dc212efe43c22eaa59925807540cfdf}}.
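As a concrete illustration of the forecasting component, the following is a minimal LSTM sketch that maps a window of past load samples to a one-step-ahead prediction; the layer sizes and window length are our own assumptions.

```python
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    """One-step-ahead forecaster for a scalar time series (e.g. BS load)."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, window):              # window: (batch, steps, 1)
        features, _ = self.lstm(window)
        return self.head(features[:, -1])   # predict the next sample

model = LoadForecaster()
past = torch.randn(8, 24, 1)                # e.g. 24 past load samples
forecast = model(past)                      # shape (8, 1)
```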
| m | ca9b48138e2ebdc67df6876b4739c524 |
Our model aims to achieve better {{formula:ae7c5421-6f67-41a0-997f-a0796740c0ed}} and {{formula:116b4352-f9b7-4127-ab33-21d6a50348ba}} than other state-of-the-art deblurring models, so that real-time deblurring can be deployed on machines with limited computational resources, such as embedded computers on UAVs or in automobiles. We therefore choose lightweight backbone models such as U-Net {{cite:650612e7e9d197c19ec00d67d94aed6302fceda2}}, MobileNetV3 {{cite:9d320b8f5c3538d893c032eb576cd97ec3c75f7b}}, EfficientNet {{cite:c6264d0bf8e602bcde6e9d4fa1ce47e3c41322cd}}, and MNasNet {{cite:d66e12bd6a338f6eb74b83551b8fb2f4cf323183}}.
| m | 6b9d886e53c1fb2475885e69bb1c9b04 |
The integral term on the right-hand side is the nonlocal Laplacian operator, reflecting the non-Gaussian Lévy fluctuations {{cite:2fceb13d858a84288d893819041906637c5f3e3b}}. The equation is subject to the initial condition
{{formula:40f62f82-e711-426a-a951-4ceb3baf8c0a}}
| m | 02eb3bbac1702ac3c821c48775a7d70b |
After some calibration, setting the parameters so as to minimize the RRMS error (see details in {{cite:3ed266e4f697574edd2bf6929aa8fbb7a837d94d}}), HODMD was applied with the following parameters. The number of snapshots {{formula:211b262e-3f5b-42f2-ba6b-5dfddeccf786}} was set to 100 snapshots in all cases except the healthy LAX data, diabetic cardiomyopathy (both LAX and SAX), and obesity (SAX) datasets, where {{formula:cffd7c80-accb-4582-a2ec-ce79be81fb56}} was set to 200 snapshots; the noise level in these datasets obliged us to analyze a larger number of snapshots, so that they cover a sufficient number of cardiac cycles and allow us to capture the relative frequencies in two parallel lines. The thresholds were fixed to {{formula:22ea647b-5332-4888-8551-cf29791db14e}} for all datasets. The time step is given by {{formula:fb25d070-8579-4f4f-a352-238db18c1f08}} , while the timespan {{formula:93b7a9be-1fa3-4fd9-ad61-340798986ad1}} is scaled with the time step {{formula:cac4685e-5332-4cdf-8588-2c9d5aa0ce2f}} in the cases where {{formula:dac48f8b-5238-46e3-805b-907c7bb2addd}} and with {{formula:ba1bed8a-27dc-4fee-8172-1742b773db5f}} in the cases where {{formula:16df4929-d96c-4e9d-b8dd-b08aa96d4195}} . Finally, the index {{formula:afc80015-a579-4b09-8b09-ad1408925895}} varies between 30 and 35 when {{formula:32a2f43d-3598-4d26-aff8-be7fb224b7d4}} and between 60 and 70 in the cases where {{formula:4654d196-a64e-4e76-b9e5-874ae226f995}} , in good agreement with the calibration process described in {{cite:ec8bf1e73fd47f3da7637b85cc6994b9c3cd77d3}} ({{formula:36dbdd82-5ddf-4d00-83ca-a804990b0346}} scales with the number of snapshots).
| r | 34d2a3c9a4ece330f5a80b277f3ca070 |
Remark 3.1
(a) In some scenarios, one cannot compute the Lipschitz constant of
{{formula:80669866-2c8c-4bd2-b521-2cce6b6b7c8e}} , but only an upper estimate of it. In the rest of this paper,
we denote by {{formula:17c0077e-3d62-48b2-b489-9253836c1add}} such an upper estimate.
Next we show that Algorithm is well defined. Indeed, if {{formula:76917a84-d9e9-48c5-8a24-71208ee7c32c}} is a positive integer
such that {{formula:2f7acebc-36c4-4b2d-8d22-6b5cf2d43afc}} , by the descent lemma {{cite:2f171df9d9bd0e402f336398663eef6fcc80f4bf}}
it follows that
{{formula:cf95f9e7-d86a-4d3f-84fd-9b8e5c41f159}}
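A generic backtracking sketch of such a step-size search (our own illustration, not the paper's exact algorithm): a trial Lipschitz estimate is doubled until the descent-lemma inequality holds, which terminates for any smooth objective.

```python
import numpy as np

def backtracking_estimate(f, grad_f, x, L0=1.0):
    """Double the trial Lipschitz estimate until the descent lemma holds."""
    L, g = L0, grad_f(x)
    while True:
        y = x - g / L                       # gradient step with step size 1/L
        # Descent lemma: f(y) <= f(x) + <g, y - x> + (L/2) * ||y - x||^2
        if f(y) <= f(x) + g @ (y - x) + 0.5 * L * np.dot(y - x, y - x):
            return L, y
        L *= 2.0                            # increase the estimate and retry
```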
| m | 85018de71024ee1703c1597aa3df5df8 |
Blockage: The direct BS-UE link is often assumed to be much weaker than the BS-RIS and RIS-UE links, which is motivated by the following observations. On the one hand, coverage extension to areas where the direct link is blocked is an important application use case for RIS deployment. On the other hand, in cases where the direct link is not weak, an excessively large passive RIS is needed for the RIS to have a non-negligible impact on the end-to-end link, which limits this application use case for RIS deployment {{cite:6836d0323eec161266a83adc4b9c22f4177d4ce1}}; see Fig. REF in Section for numerical verification of this claim.
| d | 0b3f40a6130547773644c4eeea27f014 |
Detection methods consist of an input, a backbone, a neck, and a head. The input can be images, patches, or image pyramids. The backbone can be one of various CNN architectures, such as VGG16, ResNet50, ResNeXt-101, or Darknet. The neck aggregates features from the backbone and may consist of FPN, PANet, or Bi-FPN. The head produces the prediction boxes and can be a one-stage detector with dense prediction (e.g., YOLO, RPN, and RetinaNet {{cite:7f5bfdcbaf885008b244a501c1007aea8f4b134d}}) or a two-stage detector with sparse prediction (e.g., Faster R-CNN {{cite:2339703cd8693b806de70693b3c7299f244b6657}} and RFCN {{cite:8ee1c89e9762de7ccf55801e0d172c50bb6c5034}}).
Recently, one-stage methods have attracted much attention due to their speed and ability to reach near-optimal accuracy. This has been possible because recent networks utilise feature pyramid networks or spatial pyramid pooling layers to predict candidate bounding boxes, which are regressed by optimising loss functions (see Figure REF ).
| m | c486e58ee4f33d8e40f2238be785d4f9 |
Recently, Transformers (e.g., the vision Transformer {{cite:918fed72bcbe8d68bb2b6b4bcea5d4b11c9a2fa0}}) became a de facto choice for modelling long-range dependencies in computer vision, inspired by the success of the self-attention mechanism in natural language processing.
Compared to CNN methods, Transformer models have larger receptive fields and excel at learning global information, but they also have drawbacks, e.g., high computation cost, slow convergence, and a lack of the inductive biases of CNNs.
Two types of methods have attempted to reduce their computation cost: (1) limiting self-attention to local windows (e.g., {{cite:23ae54b129277dce1ed86d084a2377ebef064ccf}}, {{cite:1612604fe2aba41722fea94fe921a86eaef3c4f9}}); (2) downsampling the key and value feature maps (e.g., {{cite:96b9857477ca61aa09ef2779e5c0c7fd62b172c1}}). Though effective in capturing global information, these Transformer-based methods still yield unsatisfactory performance due to a deficiency in learning local information.
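A minimal single-head sketch of the first strategy, window-restricted self-attention; projections, multiple heads, and relative position terms are omitted, and a channels-last layout is assumed.

```python
import torch

def window_attention(x, w):
    """Self-attention within non-overlapping w x w windows of a feature map."""
    B, H, W, C = x.shape  # channels-last feature map; H and W divisible by w
    x = x.view(B, H // w, w, W // w, w, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)   # windows as batch
    attn = torch.softmax(x @ x.transpose(1, 2) / C ** 0.5, dim=-1)
    return attn @ x  # attended features per window: (num_windows, w*w, C)

windows = window_attention(torch.randn(2, 8, 8, 16), w=4)   # -> (8, 16, 16)
```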
| i | fdc167d47079e5bf3cbf115206bf9b12 |
Statistics show that a large share of data traffic is generated by a small number of highly popular content files. These popular files are requested by a large number of users, which results in duplicated transmissions of the same content files over the fronthaul and backhaul links. Therefore, content caching in the RAN is a promising solution to significantly reduce the fronthaul/backhaul traffic {{cite:f242ed381e2240205c3e91ede4e73aa8cdc8d56f}}, {{cite:e9f46d172f692eb9cf0a3faba4d3525c8a2040d4}}, {{cite:2c02ec7305ebe82068eb50672de39b824e10e3e9}}. During off-peak times, popular content files can be transferred to cache-enabled access points (macro base stations, small cells, relay nodes, etc.). If a file requested by a mobile user is cached at an access point of the RAN, the file is transmitted directly from the RAN's cache without being fetched from the core network, which significantly reduces the fronthaul/backhaul traffic and shortens the access latency of the file, thus improving the users' quality of experience (QoE). In Cloud-RAN, thanks to the ongoing evolution of fronthaul technology and function splitting between the BBU and RRHs {{cite:f7f9843da35aa079c98324ee5580780e7fcda3db}}, {{cite:3278e05d9250c2f34403ca89f3cb6bdb346e3f7b}}, it becomes possible to realize content caching in RRHs, which allows users to fetch required content files directly from RRHs and can thus further reduce fronthaul traffic.
| i | 21986d31a71840ec50083e0ddf4445c6 |
Corrected images produced by DeshadowGAN did not exhibit the strong artifacts that can often be observed with adaptive compensation, such as inverted shadows, hyper-reflective spots, noise over-amplification at high depth (see examples in Figure REF ), and hypo-reflective retinal layers. For this latter case, we found that compensation can indeed reduce tissue brightness in the anterior retinal layers (while enhancing deeper connective tissue layers) by up to 50%. Brightness is typically not affected by DeshadowGAN. We also believe that compensation artifacts could cause issues for automated segmentation algorithms that rely on the presence of homogeneous pixel intensity values within the same layer {{cite:06e2057ed670de9d2f42b16bd562716c7b922f1a}}, {{cite:5e9f428c30d3cdbef7e246b9efa58141feb487db}}, {{cite:76cecd746b7b8e8676642319ceacb15f471b3380}}. Because DeshadowGAN generates significantly fewer artifacts, it has the potential to be used as an AI pre-processing step for many automated OCT applications in ophthalmology, such as, but not limited to, segmentation, denoising, signal averaging, and disease classification {{cite:8d10390733be62ac24d0239f4a5f78b2d503607e}}, {{cite:2d7413cec39c14157cb6e0cd40717b0ce7598fa0}}, {{cite:fcbb306b1028be47480c230dc40a37ad91ae9d2c}}, {{cite:f36ad0f878919ecad5f401aa916aea64ad50687b}}, {{cite:baab62b9c3845f93c478d08bea8d03dc4cb50165}}.
| d | 979f20951ecf507edb38647ee8dad34b |
We propose Tetrahedral Diffusion Models, a 3D shape generation model that extends denoising diffusion models (DDMs) to tetrahedral meshes. By combining the flexibility and structure of tetrahedral meshes with the generative modelling power of DDMs, our model overcomes some of the limitations of existing 3D DDM frameworks. A key ingredient of our method is the tetrahedralized representation of 3D shapes developed in {{cite:014b16f5e447b7cfd25a3f26fb9bf9f3c99e5d69}}. It imposes a predefined, fixed topology that allows meshes to be extracted directly. Inspired by the U-Net architecture {{cite:570ba7a066e4bee08c10c82a618a1e7891012a64}}, and in the spirit of KPConv {{cite:5046dfe024d3c6ea69f2aa3a17693b426d691c27}}, we develop convolutional, upsampling, and downsampling operators on the deformable tetrahedral structure. Our DDM learns to predict vertex displacements and per-vertex signed distance values from noise, which in turn define a mesh that can be extracted with a differentiable Marching Tetrahedra scheme {{cite:d7e5218ddaabf3a6c946072323f559fc5f732600}}.
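A generic sketch of the forward (noising) process that such a DDM is trained to invert, applied here to per-vertex features; the schedule, feature layout, and sizes are illustrative assumptions rather than the paper's exact settings.

```python
import torch

def q_sample(x0, t, alphas_cumprod):
    """Sample x_t ~ q(x_t | x_0) for the forward diffusion process."""
    a = alphas_cumprod[t].view(-1, 1, 1)               # (batch, 1, 1)
    noise = torch.randn_like(x0)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise, noise

betas = torch.linspace(1e-4, 0.02, 1000)               # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# Per-vertex features: displacement (dx, dy, dz) and signed distance value.
x0 = torch.randn(8, 5000, 4)                           # placeholder batch
t = torch.randint(0, 1000, (8,))
xt, eps = q_sample(x0, t, alphas_cumprod)
# The denoising network is trained to predict eps (or x0) from (xt, t).
```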
| i | ccab0cbde824f5a898b63ff6f13f56ac |
BALD {{cite:626fe7f826e69c008208e663ed1822bc4832bbcd}}. All strategies are evaluated on several image classification tasks, including CIFAR-10/100 {{cite:a036a83532140d96e6af49410cb95049dc647fbe}}, TinyImageNet {{cite:ee358cf1e89e73a2f5219b6835125e27d12a01fd}} and ImageNet-50/100/200, which are subsets of ImageNet {{cite:5fb09ea413e0458d9a3dd2870799d470e385c38a}} containing 50/100/200 classes respectively {{cite:e8c3c9cf5382206d33ddb95a9f958976ac2de42c}}. All code will be published upon acceptance.
{{figure:1839141c-8934-4e15-ab09-0d08eb627832}}Results: Low Budget Regime
The amount of labeled data that should be considered as low-budget is likely to vary between tasks. In the following experiments, unlike many previous works, we focus on scenarios where {{formula:16563a9a-6f13-497e-a641-bb0461277f8e}} of the examples are labeled.
(i) Fully supervised framework.
Fig. REF shows the accuracy of networks trained on CIFAR-10, CIFAR-100, and ImageNet-100, using the labeled examples queried by the different AL strategies. Denoting the number of classes by {{formula:27de6d2f-1c12-4531-b7a3-ecfe06871cca}} , we use a budget of either {{formula:70c472e6-3174-4fb1-8834-c1db7b4a5d11}} or {{formula:55a2e938-0379-4d03-815b-64bfff508244}} labeled examples, with {{formula:44139371-e1b7-41c0-8651-e921fdb0020b}} . See App REF for other budget sizes.
We see that in the low budget regime, both TypiClust variants outperform the baselines by a large margin. Specifically, all other baseline AL methods perform on par with random selection or worse, in accordance with {{cite:f85710839a0fd7bb94364fa3114d75cf4f874607}}. In contrast, the typicality-based strategy achieves a large gain in accuracy.
Noting that most of the baselines are possibly hampered by their use of random initial pool selection when {{formula:90532e98-70c1-4776-8dad-545283ab3fcc}} , our ablation study in Section REF demonstrates that this is not the decisive factor.
(ii) Fully supervised with self-supervised embedding.
As self-supervised embeddings can be semantically meaningful, they are often used as features for a linear classifier. Accordingly, in this framework, we use the extracted features from the representation learning step and train a linear classifier on the queried labeled set {{formula:d8323910-7ecd-4d70-b42d-e3290c089e2e}} . Unlike the fully supervised framework, here we use the unlabeled data while training the classifier, albeit in a basic manner. This framework outperforms the fully supervised framework, but still lags behind the semi-supervised framework. Once again, TypiClust outperforms all baselines by a large margin, as shown in Fig. REF (see App REF for additional datasets).
{{figure:8bf5074d-d811-4ebf-a182-5a8c0e5764d9}}(iii) Semi-supervised framework.
In this framework, we evaluate TypiClust and different AL strategies by examining the performance of FlexMatch when trained on their respective queried examples.
As semi-supervised methods often achieve competitive performance with only a few labeled examples, we focus on the extreme low-budget regime, where only {{formula:2b06bf6e-38b3-4d3c-af94-ef15772655ad}} of the data is labeled. Note that semi-supervised algorithms typically assume a class-balanced labeled set, which is not feasible in active learning. To compare with this scenario which dominates the literature, we add a balanced random baseline for reference.
In Fig. REF , we compare the final performance of FlexMatch using the labeled sets provided by different AL strategies. We show results for a budget of 10 examples in CIFAR-10 (Fig. REF ), 300 examples in CIFAR-100 (Fig. REF ), and 1000 examples in TinyImageNet (Fig. REF ). We see that both TypiClust variants outperform random sampling, whether balanced or not, by a large margin. In contrast, other AL baselines do not improve the results of random sampling. Similar results using additional budgets, baselines, datasets, and semi-supervised algorithms, can be found in App REF .
{{figure:acf37e2c-a385-43eb-bffd-77a01a977067}}
Ablation Study
We now report the results of a set of ablation studies, checking the added value of each step in our suggested strategy.
{{figure:02db3026-ad44-4225-b3af-17a32b53f86d}}Random Initial Pool Selection
As our AL strategies are based on self-supervised learning, they are well suited for the case {{formula:61a8c3ff-0ed2-48fb-9e0b-e8bef05a8af4}} , and can actively query the initial selection of labeled examples. By contrast, the other AL baselines use random initial pool selection when {{formula:da0a0a1a-2c6e-406c-af99-94cdcde318fb}} . To isolate the effect of this difference, we conducted the same experiment as reported in Fig. REF , giving TypiClust a random initial pool selection just like the other baselines. Results are reported in Fig. REF , showing that TypiClust still outperforms all baselines. Importantly, this comparison reveals that non-random initial pool selection yields further generalization gains when combined with active learning, which is useful in real-life problems.
Comparing Class Distribution
With an extremely low budget, covering the support of the distribution comprehensively is challenging. To compare the success of the different AL strategies in this task, we measure the Total Variation (TV) distance between the labeled set class distribution and the ground truth class distribution on each strategy. Fig. REF shows that TypiClust variants achieve a significantly better (lower) score than the alternatives, resulting in queries with a better class balance.
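The score is computed as in the following one-line sketch over class histograms:

```python
import numpy as np

def tv_distance(p, q):
    """Total Variation distance between two class distributions p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * np.abs(p / p.sum() - q / q.sum()).sum()
```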
The Importance of Density and Diversity
TypiClust clusters the dataset and selects the most typical examples from every cluster. To assess the added value of clustering and typicality selection, we consider the following alternative selection criteria:
[(a)]
Select a random example from each cluster (TPC{{formula:a67c3ad0-0230-44c7-8828-c3c95bf8084e}} ).
Select the most atypical example in every cluster (TPC{{formula:8a497643-8d0c-473a-b51c-f5d377ad717e}} ).
Select typical samples greedily, without clustering (TPC{{formula:cdb82a53-a7db-4b77-81f9-80d8252a5a6d}} ).
The results in Fig. REF show that both clustering and high-density sampling are crucial for the success of TypiClust. The low performance of TPC{{formula:2c7bb567-e699-4ae0-a26f-c9dab41b0253}} shows that representation learning and clustering alone cannot account for all the performance gain, while the low performance of TPC{{formula:bc73d46a-b8bb-4cb8-871e-787a9b1d8a9c}} shows that typicality without diversity is not sufficient.
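For concreteness, a minimal sketch of the typicality-based selection compared above; k-means and k = 20 neighbors stand in for the exact clustering and typicality choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def typiclust_query(features, budget, k=20):
    """Cluster the features and pick the densest point of each cluster."""
    labels = KMeans(n_clusters=budget, n_init=10).fit_predict(features)
    dists, _ = (NearestNeighbors(n_neighbors=k + 1)
                .fit(features).kneighbors(features))
    # Typicality: inverse of the mean distance to the k nearest neighbors
    # (column 0 is the point itself and is skipped).
    typicality = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)
    queries = []
    for c in range(budget):
        members = np.where(labels == c)[0]
        queries.append(members[np.argmax(typicality[members])])
    return np.array(queries)
```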
Uncertainty Delivered by an Oracle
When trained on only a few labeled examples, neural networks tend to overfit, which may result in the unreliable estimation of uncertainty. This offers an alternative explanation to our results – uncertain examples may be a good choice in the low-budget regime as well, if only we could compute uncertainty accurately.
To test this hypothesis we first train an "oracle" network (see App. REF ) on the entire CIFAR-10 dataset and use its softmax margin to estimate uncertainty. This "oracle margin" is a more reliable uncertainty measure, which is then used to choose the query examples. Subsequently, another network is trained similarly to the setup of Fig. REF , adding in each iteration the examples with either the highest or lowest softmax response margin according to the oracle.
{{figure:2636e74d-156c-44bc-8672-280de1e9969f}}The results are shown in
Fig. REF . We see that even a reliable measure of uncertainty leads to poor performance in the low-budget regime, even worse than the baseline uncertainty-based methods. This may be because these methods compute the uncertainty in an unreliable way, and thus behave more like the random selection strategy.
Summary and Discussion
We show, theoretically and empirically, that strategies for active learning in the high and low-budget regimes should be based on opposite principles. Initially, in the low budget regime, the most typical examples, which the learner can learn most easily, are the most helpful to the learner. When reaching the high-budget regime, the best examples to query are those that the learner finds most confusing. This is the case both in the fully supervised and semi-supervised settings: we show that semi-supervised algorithms get a significant boost from seeing the labels of typical examples.
Our results are closely related to curriculum learning {{cite:3b5f52bb8128df81382400d16efb941dbb9bdc27}}, {{cite:a17d95c63c625efd753d62e26499aa92665b3a36}}, hard data mining, and self-paced learning {{cite:e9a5799caac8bf2e7f66569d174566f5c81bbd86}}, all of which reflect the added value of typical ('easy') examples when there is little information about the task, as opposed to atypical ('hard') examples, which are more beneficial later on. The point of transition, namely what makes a budget 'small' or 'large', depends on the task and the corresponding data distribution. In complex real-life problems, the low-budget regime may still contain a large number of examples, increasing the usefulness of our method. Determining the range of training-set sizes with 'low budget' characteristics is a challenging problem, which we leave for future work.
Acknowledgements.
This work was supported in part by a grant from the Israeli Ministry of Science and Technology, and by the Gatsby Charitable Foundations.
{{figure:669a6c10-f398-48cd-9cc5-537584ea4f54}}
Appendix
Visualization of Query Selection Method
{{figure:13983ca4-5359-42af-b813-f24e83e2d690}}{{figure:16e1604e-0698-450f-a515-65018a16fbfe}}The Importance of Diversity and Typicality
Fig. REF compares the selection of 80 images according to three selection strategies on CIFAR10:
[(a)]
Clustering the feature space into 80 clusters, and selecting the most typical example from each cluster.
Clustering the feature space into 80 clusters, and selecting the least typical example from each cluster.
Greedily selecting the 80 densest examples.
Specifically, Fig. REF shows the selection of TypiClust, which selects diverse samples that are easy to recognize visually.
Fig. REF shows the selection of the least typical examples from every cluster, which are harder to recognize visually.
Fig. REF shows the most typical examples, selected without clustering. The selected examples are highly correlated, and often appear to be different variants of the same image.
Visualizing the Selected Examples
For visualization, Fig. REF shows the 100 ImageNet examples selected by TypiClust from ImageNet-100. We further visualize our strategy's selection criteria, by demonstrating the selection of 30 examples from CIFAR-10 in greater detail.
TypiClust first clusters the dataset into 30 clusters, using the SCAN clustering algorithm.
Fig. REF plots the tSNE dimensionality reduction of the model's feature space, colored in various ways:
Fig. REF shows the tSNE embedding colored by the GT labels.
Fig. REF shows the tSNE embedding colored by the cluster assignment.
Fig. REF shows the tSNE embedding colored by the log density (for better visualization).
Examples marked with {{formula:343aa2c9-2da2-4f06-8e9b-a8ec5451630c}} are selected for labeling. Fig. REF shows the actual images thus selected.
Mixture Model Lemmas and Proofs
Proof of Thm. REF
Theorem REF .
Let {{formula:a8e5c118-406f-4a3e-954a-3c40deb32b05}} , {{formula:c6839a64-dcae-47ed-8181-1401277f94a9}} denote some constants. Given error score {{formula:4e8d3b2a-f49f-4bc8-baa0-774fb2206ada}} , the following threshold test decreases the expected error for sample
{{formula:4540c9b0-99ea-409a-acee-ec2d502ad459}}
In Section REF we define this biased sampling strategy:
{{formula:3c8213ce-9c0a-4a5b-933c-a6a0d620918c}}
Here, region {{formula:2d2e6fff-f9d6-4778-b4ff-4bd424de308d}} is over-sampled when {{formula:7de6cd93-2025-4370-8974-c14b86e7c2b0}} , and vice versa. Starting from (REF ), we obtain the test whereby the error score {{formula:49540a07-81ca-4f85-ae6b-98edcdbe368a}} decreases when {{formula:01309a32-b82b-40da-81c9-537b7267886e}} :
{{formula:e3504072-8b31-47d9-8614-5e7766bdd4d0}}
Since {{formula:e16ff9ca-c92e-4fb9-a241-ed8daafceae0}} is differentiable and strictly monotonically decreasing with {{formula:bcfcab1b-eaa8-4f43-9ece-2d8c6ea8eedd}} , in the limit of infinitesimal {{formula:a401caff-f9ee-4199-9790-a8d72f31980b}}
{{formula:15afe55a-6b33-45f4-8b56-054dbd4d2416}}
The proof for {{formula:59a21206-edfe-4748-9312-786e6ab4b367}} is similar.
Undulating Error Score: Sufficient Conditions
In this section, we provide proof for Thm. REF , which is stated in Section REF . The theorem provides sufficient conditions for error scores to be undulating (see Def. REF ). We start with a few lemmas which will be used in this proof.
Lemma 1
Let {{formula:28d54a53-0911-4e8d-8a02-12903f230bf1}} denote a differentiable function with {{formula:e41d48e6-1fa2-4d07-b146-4b62cb6522ca}} . Then
{{formula:385a0f69-866f-4deb-8055-6d2e464a25d0}}
Lemma 2
Let {{formula:c781bdce-4112-4e69-bf25-14c20b5e2881}} and {{formula:36df6091-7805-40db-b1f0-8fcbde878ee2}} denote a positive differentiable strictly monotonically decreasing function {{formula:7bdf15b3-1da9-4b42-9f8a-73e4719b5043}} . Assume that {{formula:6b653131-5ded-4b6f-b7b9-662688f40cfb}} , and that the limits {{formula:bb9bfe56-30b6-4351-b5f3-dd704820166f}} exist {{formula:5a8a440f-bf31-4213-84e8-9c6b64fb4761}} . Denote {{formula:0e05ca76-41b7-4a53-a25b-4b849bf54ccb}} . If
{{formula:e33a82f3-560b-4dc4-bc1a-af0cdc765c3d}}
then:
{{formula:9fa30e98-c98b-4cca-a8e8-36e9966db64f}}
We can write {{formula:258823ca-dfcc-4a03-aa1a-f075430f8808}} .
It follows from the mean value theorem that {{formula:6f82121d-c206-42b1-905e-0e487591a1e3}} , such that:
{{formula:39e30ab8-b1be-4036-bcce-b8b14fb2ac13}}
Since {{formula:648418ca-98ee-42ec-a11c-a80d2360233e}} , we get:
{{formula:c97b9254-f225-4742-8225-982d26e8308e}}
As {{formula:52c3f990-14ec-4d4e-a512-70a3265459bf}} , it follows that:
{{formula:a79dd1a3-2b31-4e38-817f-05b0b7e374f9}}
From the assumption that the limits exist, and since {{formula:477ad781-070b-404e-a5f0-35eccf3c662c}} , we can use L'Hôpital's rule and get:
{{formula:346f191d-04b3-4ed9-bc6e-1d89d9358591}}
Lemma 3
Let {{formula:f01cb60f-66dd-48d4-a4a6-e31b6f76ae80}} denote a positive differentiable function {{formula:26618c52-90c4-472f-bef5-d8e8d7782e05}} . Denote {{formula:963b66bb-1aa7-4ea8-8a92-b02c7daa3bca}} . Assume that
{{formula:9e9e510b-4ccf-4bd9-80e3-a1e7992d3404}} exists, and
{{formula:81604e77-4946-4d56-a7aa-d425ffab81ed}}
Then
{{formula:49ab8c82-fc35-4b79-a966-dd0c9dbac7fb}}
{{formula:6e67c065-a9dc-4e06-a762-df355c31a849}} implies that
{{formula:515ab139-e546-41ac-ae3c-dececfa5d4dc}}
We can now use L'Hôpital's rule and get:
{{formula:060b9f5d-ca33-4bff-a0e8-cfb0cd4d5ae4}}
Theorem REF .
Let {{formula:b8970adf-0f3e-4dcd-82d8-e77adb59589d}} , {{formula:6df89fab-3308-4422-a3c1-ba8bc49777c3}} denote some constants. Error score {{formula:e5fc2dbe-c0bb-4935-8b6c-d0e143815250}} is undulating if the following assumptions hold:
{{formula:7c96104a-80df-4706-8edd-1bcf3a5f05fb}} is a proper error score (see Def. REF )
{{formula:ef4577ca-5797-4998-9c71-fabdbbcfb32d}} exist {{formula:d9cf95bf-8587-488e-9a36-c20b5366f238}}
{{formula:66c08dfc-28cd-4eb6-b12e-17d4b9dea43f}}
We define {{formula:fb479b68-aba7-4108-81a2-4b6d94ddb09a}} . From assumption (1) and using Lemma REF , we get:
{{formula:e2691347-658e-41c2-bb47-be3fb32dd9a4}}
Therefore there exists some {{formula:397efb22-7f34-402d-a0db-9b8a230bf10e}} such that {{formula:c1f7ad36-a498-47a1-babd-a17316c623bd}} :
{{formula:bcda679e-9efd-4ea9-8d02-ca224519f287}}
From assumptions (1-3) and using Lemmas REF -REF , we get:
{{formula:34b84dac-31b0-46b4-ab71-0bdad582a34e}}
and therefore there is some {{formula:3fb10f81-237b-49c2-a1dd-7698cd50412c}} such that {{formula:61e727ab-ace2-4590-a878-216b964fc211}} :
{{formula:0943260d-cd75-44bb-9bcf-d7e11236609d}}
From Def. REF we get that {{formula:7512667a-99f9-4997-96ef-76e7c394b667}} is undulating.
SP-undulating Error Score: Sufficient Conditions
In this section, we provide proof for Thm. REF , which is stated in Section REF . It provides sufficient conditions for error scores to be SP-undulating (see Def. REF ), extending Thm. REF . We start with a few lemmas which will be used in this proof.
Lemma 4
Let {{formula:9ae80d08-a528-4dd7-ba66-e6552e13dfb8}} denote a positive differentiable function {{formula:d0cba883-662e-4b60-b47e-1251e4623396}} . Let {{formula:a7483224-0b17-4382-b685-4663cae26fb4}} denote some constant. If
{{formula:e8487b23-8306-4373-9852-2527afdcf926}}
is strictly monotonically increasing, then
{{formula:2ea41bb3-b66d-4a60-a758-0d64f4274db4}}
is strictly monotonically decreasing.
{{formula:5a6dceea-3d90-4ea8-be57-01e1ab2841e2}} is monotonically decreasing iff {{formula:324b9362-8fd3-4f49-8af8-b077097193ad}} , where
{{formula:7501fddf-bcb0-42a3-a956-fc268df7666c}}
This condition translates to:
{{formula:02339f0a-f82c-4abc-a86d-99ab819539e2}}
As by assumption {{formula:93cce84f-7cd4-4717-84fb-9e891a49bafd}} is monotonically increasing and {{formula:f1d4e979-34f1-4612-8614-1f2e998e79cb}} , we get that {{formula:301ccb20-1d13-4772-9308-8841a742a22e}} :
{{formula:8c13cb17-da25-45e5-bcc5-de921fc3e885}}
Lemma 5
Let {{formula:d6e00947-e1b5-4f26-a1b9-665ddd674c21}} denote a positive differentiable log-concave function which is strictly monotonically decreasing {{formula:808ab782-d28e-4979-a900-cafd86817cc0}} . Then the following function is strictly monotonically increasing:
{{formula:51dac211-279d-44a6-99a0-edd625ced71c}}
{{formula:af001b7b-4688-444e-b0ca-393d62de767c}} is strictly monotonically increasing iff {{formula:27bb57da-061d-4d6f-94d6-d446e12105ce}} , which holds iff:
{{formula:9f1f5a77-2a4c-4345-a2c8-9fa32aed73dd}}
Recall that {{formula:bd06b770-be45-4591-a882-02e3d95334cb}} and {{formula:de82e6a5-3efe-4a67-a061-bc4898b59e5a}} . Since {{formula:965e5d00-ac46-4105-8d75-286ad9e692f6}} is log-concave, we also have that {{formula:2640d94d-be7c-4627-a041-1180ba3d34e9}} , which concludes the proof.
Theorem REF . Let {{formula:50572a7a-4bf6-465a-b66c-0d07a15afaab}} , {{formula:ab4363ce-3658-436d-8353-a381bacf8f93}} denote some constants. Error score {{formula:ad7966d9-73e6-47be-8519-8fe8f4b35205}} is SP-undulating if the following assumptions hold:
{{formula:c1f509b8-340f-4c8a-b496-b7e10abe7978}} is an undulating proper error score.
At least one of the following conditions holds:
{{formula:d6320b13-d17b-4bd4-b53f-1d20cff055ec}} is
monotonically increasing with {{formula:0dae507c-c19f-438c-ac4e-4de52aef955e}} .
{{formula:929efd10-7b05-4c77-87ad-0dc5fd19ca16}} is strictly monotonic decreasing and log-concave.
Define the following positive continuous function
{{formula:c945647e-33ff-40bd-86aa-bbf6b05005f4}}
Let {{formula:de2478fe-7a2a-453d-a4e1-a7ed984fff10}} . Note that assumption (REF ) follows from assumption (REF ) and Lemma REF .
Assumption (REF ) therefore implies that {{formula:73bd0511-8186-46c9-952e-26b40f64b01f}} is strictly monotonically increasing, and by Lemma REF we can conclude that {{formula:b4857b13-fa44-46f4-bb32-4524ae3525cc}} is monotonically decreasing. Together with assumption (REF ), {{formula:54607a0e-fb84-426f-aa35-3c13d19310cb}} at a single point, and we may therefore conclude that {{formula:259acac9-2133-4dd0-a082-0033013af244}} is SP-undulating.
Corollary 3 If {{formula:81a478fe-f8b6-4ee4-bab1-8eb3bccc380f}} - the probability of region {{formula:a6bcd79b-b6b0-4949-8624-93f84ded1ee8}} - is sufficiently small so that {{formula:f4030ff8-8ae2-4b64-8c18-2fc2ec9f824b}} , then the conclusions are reversed: it is beneficial to initially favor the hard to learn region {{formula:abff391f-cdec-46fa-bc20-b5aafb5ad3a2}} , and favor {{formula:5e712e88-231a-48a8-9aa1-f6cf0036cd64}} only towards the end of training.
Error Function of Simple Mixture Models
Mixture of Two Linear Classifiers
In this section, we provide proofs for Thm. REF and Thm. REF , which are stated in Section REF . Thm. REF provides a bound on the error score of a single linear classifier, showing that under mild conditions, this score is bounded by an exponentially decreasing function in the number of training examples {{formula:b22fb798-3b69-4452-9d74-e31b30960ee2}} . Thm. REF states conditions under which the error score of a mixture of two linear models {{formula:817049cf-b7cb-4694-b6cb-f17511352fd6}} is undulating, thus depicting the phase-transition behavior.
Bounding the error of each mixture component.
Henceforth we use the notations of Section REF , where for clarity, {{formula:b5fde791-3967-4baf-8e9a-6359a7ddc955}} is replaced by {{formula:5b46cf19-8b67-4a02-a2ee-d9aedcc1a0ae}} while we are discussing the bound on a single component {{formula:26048085-93b1-4d83-894e-7d89726086b2}} . Let {{formula:ec95593e-af1e-44f4-9e36-1ae1e788c167}} denote the {{formula:861a3abb-19b8-4044-99ed-f7c04927ad16}} -th data point and {{formula:e4d49c21-d1e2-4ec6-8a58-8bac6540a221}} -th column of {{formula:7b59d746-894b-49d9-9c67-4a8743ca355a}} . Let {{formula:41c12f6b-7049-422d-98dc-afc88e089150}} and {{formula:7fd8202c-1a04-4c22-bc8f-24b8168aad2e}} denote the respective means of the two classes, and {{formula:e31da643-ca2c-41bb-850b-ddb064818f45}} denote the vector difference between the means.
Assuming that the data is normalized to 0 mean, the maximum likelihood estimators for the covariance matrix of the distribution {{formula:1ee705ea-c6c9-47cc-983f-285444526e98}} and class means, denoted {{formula:2ac59f5c-617c-44dd-8754-8a128e19d003}} and {{formula:b4124cd6-e78b-4189-90d6-ca3e6e0b7061}} respectively, are the following:
{{formula:836efbb2-54c2-4612-8cd0-e1bc3fdd97a0}}
where {{formula:0a118f3e-f5f5-43a3-b119-6bcc42059b16}} denotes the set of points in class {{formula:bad921a6-c2b2-417b-b9f1-308c3ae63d74}} . Thus, the ML linear separator can be written as:
{{formula:7946d400-2515-43f3-b679-dab77e58cbd8}}
where {{formula:fb40680e-b729-42c6-9a56-39174ad340be}} . Note that {{formula:cca84a03-6912-4796-a28e-04ea8851b216}} is the sample mean of vectors {{formula:7e99d14b-1c87-49d5-b916-fcf7318e2b34}} , and {{formula:058ce887-ad45-4673-aa46-ae46b738fad8}} is the sample covariance of {{formula:91e728c0-5aae-441e-88f9-2172cf7416cd}} .
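A small numerical sketch of this estimator (our own illustration of the plug-in separator, with the pseudo-inverse covering the rank-deficient case discussed next):

```python
import numpy as np

def ml_linear_separator(X, y):
    """Plug-in separator: pinv(Sigma_hat) @ delta_hat, X (n, d), y in {0, 1}."""
    delta_hat = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    sigma_hat = np.cov(X, rowvar=False, bias=True)   # ML covariance estimate
    return np.linalg.pinv(sigma_hat) @ delta_hat
```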
When {{formula:8d7a2c5a-31c4-43b8-ba91-7a3a699c4a3f}} (fewer training points than the input space dimension), {{formula:abfe38fd-9478-4fb0-9bc4-c2b449b100da}} is rank deficient and therefore {{formula:098e4941-d19c-47aa-b398-4ca6e9107157}} is not defined. Moreover, the solution is not unique. Nevertheless, it can be shown that the minimal norm solution is the Penrose pseudo-inverse {{formula:826bdbc7-af9e-4299-b4a3-952d67de6dfd}} , where
{{formula:e63b32fb-a2ba-4735-8eca-1b4291583c08}}
and using the notations
{{formula:deed44b1-3c27-43a7-91ca-da560e9eaddc}}
Ignoring the question of uniqueness, estimating {{formula:d5161285-7cb7-41ac-a960-4badbfa20fc9}} is therefore reduced to evaluating the estimators in (REF ) and (). These ML estimators have the following known upper bounds on their error:
Bounding {{formula:7ee8d3dd-b64a-408e-9cf3-f2284178c15e}} : from known results on covariance estimation {{cite:85d2264e0ba29126cff0d89ca340c69e17dd4186}}, using Bernstein matrix inequality
{{formula:48c7283a-9eec-4d68-ac7e-4057332d00d0}}
Constant {{formula:53601a19-3689-40b9-8d18-9b74e6ea93f2}} does not depend on {{formula:b9a54bf3-8438-4a08-afdf-98e91a4a1692}} ; it is determined by the assumed bound on the {{formula:3354a022-aaa5-419f-8ea4-5137fbe1cc5e}} norm of vectors {{formula:6c5531c8-b427-4733-b9ca-8116a9b7f691}} , and the norm of the true covariance matrix {{formula:4f5e2765-201f-45ca-a7ac-c792ff370182}} .
Bounding {{formula:dc7feb07-ffd0-4c96-b7e8-ec55524b62ec}} : by Hoeffding's inequality, we have that {{formula:1750f25e-7eef-4c1e-8da0-96b7d9f287d9}} {{formula:954ffc4c-1856-4365-9c66-da9d1506806c}} . Thus
{{formula:342c8f65-3bf7-4eb9-9052-1f12bdc5d8e3}}
The first inequality follows from the union-bound inequality.
Theorem REF .
Assume: [(i)]
a bounded sample {{formula:f2e04133-0b6f-4339-9bba-b6f80f390d19}} , where {{formula:0d2dfed4-d22c-49b8-acc4-bc0437cd06bc}} is sufficiently far from singular so that its smallest positive singular value is bounded from below by {{formula:20a864d2-6945-45a5-900d-9b06e5624994}} ;
a realizable binary problem where the classes are separable by margin {{formula:d591617a-9c14-4d1a-a87b-bfab3b339953}} ;
full rank data covariance, where {{formula:817ea93e-c2c2-418d-85f2-2df9838aa508}} denotes its smallest singular value. Then there exist some positive constants {{formula:90c9a121-5d7d-4ffc-aa19-9c7571d9e8a9}} , such that {{formula:75f4cf1b-c2ad-4e64-80bb-150dd736493a}} and every sample {{formula:5b66231c-5f47-416d-96ae-135355c0673b}} , the expected error of {{formula:cd1228ed-5c71-4e38-8ddd-7b16a1ead4d8}} obtained using {{formula:809244b0-ab23-4673-8bcd-2a9eff8cff74}} is bounded by:
{{formula:00e33d44-da0d-4935-a1a0-ab3b937c9c46}}
It follows from our assumptions that {{formula:c2b69e31-ac14-44d2-9600-34ed4f55a389}} . An error will occur at {{formula:31641a38-7b5d-4e18-be23-3b13b6a1c92f}} if {{formula:369b8518-3973-46d0-ba8f-946a1026efaf}} and {{formula:811887bb-3308-406e-9958-a62407767876}} , and vice versa for {{formula:c73bc523-68b6-4017-b770-c3ec1b4d7907}} . In either case, the difference between the predictions of {{formula:cfd51c64-e6ac-4eff-92f9-5e154d4f228e}} and {{formula:32386526-99aa-4e19-94e0-15b60ebbb4fc}} deviate by more than {{formula:a1d3b058-6801-403c-9558-361f298fb568}} . Thus
{{formula:46238dd2-ab2f-48b2-8665-cbc6abbb0b6a}}
Invoking the triangular inequality
{{formula:14f4ea8a-3685-40a5-b847-f886dcad412b}}
Before proceeding, we need to get from {{formula:dcd2f005-0ecf-427b-8d84-e97abdc5386d}} to {{formula:827d2d66-a1c2-4ed6-b355-8fa00185abc6}} , for which we have established a bound. Because {{formula:b1eb912c-e478-4ce3-8c77-a72b07306226}} is of rank {{formula:f03a4f04-2f44-4092-bf44-83301de98fe7}} , {{formula:72460394-7bc5-4231-8ca5-e164e122fa53}} is a projection matrix of rank {{formula:5a9b680a-507b-4065-a143-4f6e13d0d256}} , projecting vectors to the subspace spanned by the training set {{formula:304fc511-e499-41d5-afa5-4a182ccff6b6}} . Thus {{formula:ec9da720-b8b9-4578-aa3e-f067edc39798}} . Additionally, by definition, {{formula:35c8ae78-08ac-4dcb-bc9d-525bc9d35e1c}} and {{formula:918177da-ff2e-4aae-86f4-b61426d3e156}} is symmetric. It follows that
{{formula:389c9b52-af7e-4b2f-bb1c-a21e847ae77a}}
while
{{formula:4f0e2c31-ad6e-4f8c-8532-e4a87b16c30d}}
Together
{{formula:bf2d62e3-5615-482d-bc8c-e98d368a2f7d}}
Inserting (REF ) back into (REF )
{{formula:e55db18e-4c90-4355-8854-401438e7cd34}}
It follows that
{{formula:2e2ae91a-052e-48f0-a72b-5093a73091c5}}
The second transition follows from the union bound inequality, and the third from (REF )-(REF ), where:
{{formula:1d0d19ea-ed6c-4af7-81eb-9b0613050e00}}
A mixture classifier.
Assume a mixture of two linear classifiers, and let {{formula:d9015a17-d752-462a-87a9-c608e0d6c4e9}} denote its error score.
Theorem REF .
Keep the assumptions stated in Thm. REF , and assume in addition that {{formula:9b062615-6ce3-4ed5-8f9f-eb9440513cd3}} , {{formula:2f7f1544-a270-40a2-abfb-e891a768c119}} .
Then the error score of a mixture of two linear classifiers is undulating.
In each region {{formula:7256329c-f5c2-4dc8-bd8b-a189942df981}} of the mixture, Thm. REF implies that its corresponding error score as defined in Def. REF
{{formula:e6a3bd75-4ade-4ecb-bbbe-7036747454ba}}
is bounded by an exponentially decreasing function of {{formula:c1057bfa-aabc-4fdc-8d69-f7ac2037fca4}} . Since {{formula:a7dd41d0-60e9-4fa3-8afd-560c71958d3b}} measures the expected error over all samples of size {{formula:bb591ef8-33d4-40b0-bd34-47b738ce6ac4}} , it can also be shown that {{formula:d3bb0c89-c86b-45c8-a8bb-c6c57a3faf41}} is monotonically decreasing with {{formula:a6eea195-77a0-47b9-8102-a68245d50127}} . From the separability assumption, {{formula:07ecb740-8916-4176-aeec-64f00f309b69}} . Finally, since {{formula:7319317d-ac03-4b12-ad92-cf21552e16bf}} is a linear combination of two such functions, it also has these properties. We, therefore, conclude from Cor. REF that the error score of a mixture of two linear classifiers is undulating.
1-NN Classifier
If the training sample size {{formula:77a28358-3fcb-495a-8305-83d3f1a02a70}} is small, our analysis shows that under certain circumstances, it is beneficial to prefer sampling from a region {{formula:12b6f6d0-1894-4a34-9383-3fdbca763425}} where {{formula:edfbb59f-8320-43bf-8b54-f178857ba8f4}} . We now show that the set of densest points has this property.
To this end, we adopt the one-Nearest-Neighbor (1-NN) classification framework. This is a natural framework to address the aforementioned question for two reasons: [(i)]
It involves a general classifier with desirable asymptotic properties.
The computation of both class density and 1-NN is governed by local distances in the input feature space.
To begin with, assume a classification problem with {{formula:cbfa985f-a62a-492e-919f-7ea4043394dd}} classes that are separated by at least {{formula:5f285739-dcd4-418a-a0ca-a1b3327e8ddc}} . More specifically, assume that {{formula:3d15276f-b8cf-4bd4-9b0b-f40e88635498}} : if {{formula:4afef462-b2b5-4a4f-9d43-79a934d9e281}} then {{formula:1f73aba8-4d05-4300-9c98-1754e250ccf1}} . Let {{formula:33d28582-6e0c-4636-a51a-f0f2e86ddd06}} denote a ball centered at {{formula:b8860ac8-b598-4b35-943b-363ac99f9c69}} , with radius smaller than {{formula:6cc584b6-e3ab-49eb-b1ab-be0a03c7dbf3}} and volume {{formula:24cec3cb-b103-47e6-a8f1-fbad9bae37c8}} . For {{formula:7c227205-08c5-4dd3-b1f7-26dabe91716c}} , let {{formula:f9d26b78-3aae-476f-9756-4afe4a738891}} denote the density function from which data is drawn when sampling is random (and specifically at test time).
Assume a 1-NN classifier whose training sample is {{formula:31905d22-4901-4271-968f-618975230c30}} , and whose prediction at {{formula:8eba8c45-46b2-4370-8415-41c95ca9ddb7}} is
{{formula:881ee576-018b-4663-8cd8-07e44a29361c}}
The error probability of this classifier at {{formula:178dfbee-ef64-4542-8413-9ddb421d258a}} is
{{formula:a4b96f46-5f26-4a3d-9ea5-878cb55c5cf8}}
The {{formula:22d4b16d-7071-410d-a16d-8f83f4ba3416}} loss of this classifier is
{{formula:7fc8eb1f-8af0-4a2d-951a-bd53914b4589}}
where {{formula:4908e32d-1368-4b14-89c0-8fbc899983aa}} denotes a ball centered at {{formula:5dc48462-64dd-4fe1-b1f3-8a84b693d1b7}} , with radius smaller than {{formula:dbb5e718-d1e5-4612-b59d-66a2f959f963}} and volume {{formula:6c7f7242-719e-4945-a1d9-1a4e64d4560f}} .
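To make the 1-NN prediction rule and its 0-1 loss concrete, here is a minimal NumPy sketch; the toy data, names, and the Euclidean metric are our illustrative assumptions, not the paper's setup:

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_query):
    """1-NN rule: each query point inherits the label of its nearest training point."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d2.argmin(axis=1)]

# Toy usage: two well-separated blobs; estimate the empirical 0-1 loss.
rng = np.random.default_rng(0)
X_tr = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(4, 0.5, (20, 2))])
y_tr = np.repeat([0, 1], 20)
X_q = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
y_q = np.repeat([0, 1], 50)
print("empirical error:", (one_nn_predict(X_tr, y_tr, X_q) != y_q).mean())
```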
The next theorem states properties of set {{formula:8197332b-6504-4f62-a03a-0922ec6f367c}} which are beneficial to the minimization of this loss:
Theorem 6
Let {{formula:708794eb-76e6-4655-bcf6-84c7309e8249}} denote the event {{formula:1513bffd-003c-4e36-81f6-e612f9f8aa43}} , and assume that these events are independent. Then we have
{{formula:914045ef-164b-423b-ba61-f52a8db1ceae}}
Using the independence assumption and (REF ), and assuming that {{formula:fc238613-0818-4a5d-ab70-1d19fbb75d33}} is sufficiently small
{{formula:6fbd6b88-ac3c-478a-9e47-e1e6584755e7}}
In Thm. REF , we show that if {{formula:1ca0411a-212f-4032-a825-516086284583}} is sufficiently small, the 0-1 loss is minimized when choosing a set of independent points {{formula:547f8fcf-06d9-4aa3-a119-e6efdd4f87b4}} that maximizes {{formula:15fabb64-95e8-419b-8e02-ab1dcef7c240}} . This suggests that the selection of an initial pool of size {{formula:5d5c64ef-66f4-439d-ba19-3fe62e454553}} will benefit from the following heuristics:
Max density: when selecting a point {{formula:dc78214f-09be-4a65-bc1f-87ab1a567094}} , maximize its density {{formula:e7c8b849-1186-4767-b43b-77443359b10b}} .
Diversity: select varied points, for which the events {{formula:5f06b916-0499-4d99-a84c-d2937181966b}} are approximately independent (a code sketch combining both heuristics follows).
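A minimal sketch combining the two heuristics (a kNN density estimate plus a greedy diversity term; the particular way of combining the scores is our illustrative choice, not a rule taken from the paper):

```python
import numpy as np

def select_initial_pool(X, budget, k=20):
    """Greedily pick high-density points that stay far from earlier picks."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    knn_d = np.sort(d, axis=1)[:, 1:k + 1]        # distances to the k nearest neighbors
    density = 1.0 / (knn_d.mean(axis=1) + 1e-12)  # kNN density estimate
    selected = []
    for _ in range(budget):
        score = density.copy()
        if selected:
            score *= d[:, selected].min(axis=1)   # diversity: favor far-away points
            score[selected] = -np.inf
        selected.append(int(score.argmax()))
    return selected
```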
Theoretical Analysis: Visualization
Linear and Concave Error Functions
Fig. REF illustrates that if the error score is concave or linear, oversampling from {{formula:1a7f6d08-e58d-4441-8b85-5b6982a2f254}} is always beneficial, as {{formula:18b94c0e-d416-4c9c-b776-d4423d5b8462}} . Such functions are never undulating.
{{figure:6464f55f-cc4e-452c-b29a-1ebdc6edaeb7}}{{figure:b1804afc-2896-4b69-9dfc-1305c7266be5}}
Mixture of Two Linear Models
We now empirically analyze the error score function of the mixture of two linear classifiers, defined in Section REF . Each linear classifier is trained on a different area of the support. The data is 100-dimensional and linearly separable in each region. The margin is used to determine the {{formula:223110ff-ac79-4c2a-ad20-be5a92bfa877}} of the data. The data is chosen such that {{formula:fa3d557c-31d8-42ed-88c1-2a4d243f04b5}} . The results are shown in Fig. REF .
{{figure:dff8e96e-d7d4-423a-9125-ac73516b106e}}
Error Scores of Deep Neural Networks
Next, we plot the error scores of deep neural networks on image classification tasks. In all datasets we tried, the error of deep networks as a function of the number of examples drops much faster than an exponential function and therefore can be shown to be undulating. In practice, such error functions are also SP-undulating. To see some examples of these error functions in super-classes of CIFAR-100, refer to Fig. REF .
{{figure:232eb780-323e-42e2-b942-1820b3516edd}}
Implementation Details
Code for Our Suggested Strategy
TypiClust initial pooling algorithm
Input: Unlabeled pool {{formula:be988257-f910-4a7c-8f74-91ba18f82a42}} , Budget {{formula:0fa09b00-ae58-4a76-bf43-dcafdffac864}} , Number of neighbors {{formula:453cfbfa-0341-4325-a626-d3d4c797a340}}
Output: {{formula:de651dfc-4c62-4524-a7b2-8a8062200778}} typical and diverse examples to query
Embedding {{formula:49630b07-a3d1-4172-afcf-f3de9bf83b2a}} Representation_Learning({{formula:6a70d0dc-e434-42fd-97d6-fb643061a61a}} )
Data_clusters {{formula:abc6d05e-cf72-41f4-bb39-52ed4131635a}} Clustering_algorithm(Embedding, {{formula:0e3cf528-ee53-4221-add5-d6fe410e0047}} )
Sort Data_clusters by cluster size
Queries {{formula:b77678e7-4ec7-4e46-a29a-1116bc02bcc3}}
{{formula:0b6b859f-de45-41d7-bfea-b9832bc0120c}}
I {{formula:be8298d9-a5d9-4fdf-9b82-89ddb8e69425}}
Typ {{formula:a008c02a-45d1-4baa-9ec2-d4d239b860b2}}
Add {{formula:be105bcc-6fbb-4eef-886e-07db172537f7}} to Queries
return Queries
Alg. REF provides pseudo-code for our suggested initial sampling strategy, as described in Section .
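A minimal Python sketch of the algorithm above, assuming the embeddings were already computed by a self-supervised model. The typicality measure (inverse mean distance to the K nearest neighbors) follows the paper's description, while the use of K-Means and Euclidean distances here is one of the clustering choices discussed below:

```python
import numpy as np
from sklearn.cluster import KMeans

def typicality(points, k=20):
    """Inverse of the mean distance to the k nearest neighbors, capped by cluster size."""
    if len(points) < 2:
        return np.ones(len(points))
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1))
    k = min(k, len(points) - 1)
    return 1.0 / (np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1) + 1e-12)

def typiclust(embeddings, budget, k=20):
    """Cluster the embeddings into `budget` clusters and query each cluster's
    most typical point, visiting larger clusters first."""
    labels = KMeans(n_clusters=budget, n_init=10).fit_predict(embeddings)
    clusters = [np.flatnonzero(labels == c) for c in range(budget)]
    clusters.sort(key=len, reverse=True)
    queries = []
    for idx in clusters:
        t = typicality(embeddings[idx], k)
        queries.append(int(idx[t.argmax()]))
    return queries
```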
Method Implementation Details
Step 1: Representation learning - CIFAR and TinyImageNet.
We trained SimCLR using the code provided by {{cite:e8c3c9cf5382206d33ddb95a9f958976ac2de42c}} for CIFAR-10, CIFAR-100 and TinyImageNet. Specifically, we used a ResNet18 with an MLP projection layer to a 128-dimensional vector, trained for 500 epochs. All the training hyper-parameters were identical to those used by SCAN.
After training, we used the 512-dimensional penultimate layer as the representation space.
As in SCAN, we used an SGD optimizer with 0.9 momentum and an initial learning rate of 0.4 with a cosine scheduler. The batch size was 512 and the weight decay 0.0001.
The augmentations were random resized crops, random horizontal flips, color jittering, and random grayscaling. We refer to {{cite:e8c3c9cf5382206d33ddb95a9f958976ac2de42c}} for additional details. We used the L2-normalized penultimate layer as the embedding.
Step 1: Representation learning - ImageNet.
We extracted embeddings from the official DINO (ViT-S/16) weights pre-trained on ImageNet. We used the L2-normalized penultimate layer as the embedding.
Step 2: Clustering for diversity.
We limit the number of clusters when partitioning the data to {{formula:9bef3f31-61cb-4438-abf3-41f8c4cf314d}} (a hyper-parameter). This parameter was arbitrarily picked as 500 for CIFAR-10 and CIFAR-100 and 1000 for TinyImageNet (other values resulted in similar behavior).
This is done for two reasons: [(a)]
to prevent over-clustering (and clusters that are too small);
to stabilize the clustering algorithms.
The number of clusters chosen is {{formula:fd42245d-9cb0-4f2d-8dda-31a283a087b2}} .
K-Means.
We used scikit-learn KMeans when {{formula:e96629f9-4b72-445b-b9c6-a6d231edcf92}} and MiniBatchKMeans otherwise. This was done to reduce runtime when the number of clusters is large.
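A sketch of this switch; the threshold below is an illustrative placeholder, since the paper's exact cut-off is given by the elided formula:

```python
from sklearn.cluster import KMeans, MiniBatchKMeans

def cluster(embeddings, n_clusters, threshold=50):
    # MiniBatchKMeans trades a little accuracy for a large runtime saving when
    # the number of clusters (and hence the cost of full KMeans) grows large.
    Algo = KMeans if n_clusters <= threshold else MiniBatchKMeans
    return Algo(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
```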
SCAN.
We used the code provided by SCAN and modified the number of clusters to {{formula:09b1a0b4-a38a-41c4-85fc-61322ffb41a3}} . We only trained the first step of SCAN (we did not perform the pseudo labeling step, since it degraded clustering performance).
Step 3: Selecting typical examples.
Since we introduced {{formula:bc0dcb5a-0b6f-4e8a-859a-9e748dc385fa}} , we are no longer guaranteed to have {{formula:2e326b59-830d-4aac-9774-b727ff2a7576}} clusters that don't intersect the labeled set.
Moreover, to estimate typicality, we require {{formula:856c80b1-3371-4c2f-839a-a178e75451c2}} samples in every cluster. To solve this, we use {{formula:aa022a24-5a69-499f-b0d3-124332b65cfb}} nearest neighbors. To avoid inaccurate estimation of the typicality, we drop clusters with fewer than 5 samples (this limiting case was rarely encountered, as clusters are usually balanced).
Therefore, we add points iteratively until the budget is exhausted, in the following way (a code sketch follows the list):
[(1)]
Out of the clusters with the fewest labeled points and of size larger than 5, select the largest cluster.
Compute the Typicality of every point in the selected cluster, using {{formula:19d030c9-a6a4-44ff-8de2-5fe5084c06da}} neighbors.
Add to the query the point with the highest typicality.
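A sketch of this iterative selection (the cluster bookkeeping and the typicality callback are our illustrative framing of the three steps above):

```python
import numpy as np

def select_iteratively(clusters, labeled, typicality_fn, budget, min_size=5):
    """clusters: list of index arrays; labeled: set of labeled indices;
    typicality_fn(indices) returns a typicality score per index (cf. step 2)."""
    queries = []
    for _ in range(budget):
        big = [c for c in clusters if len(c) > min_size]      # drop tiny clusters
        n_lab = [sum(i in labeled for i in c) for c in big]   # labeled points per cluster
        cands = [c for c, n in zip(big, n_lab) if n == min(n_lab)]
        chosen = max(cands, key=len)                          # largest such cluster
        pool = np.array([i for i in chosen if i not in labeled])
        pick = int(pool[typicality_fn(pool).argmax()])
        queries.append(pick)
        labeled.add(pick)
    return queries
```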
Evaluation and Implementation Details
Fully Supervised Evaluation
We used the active learning comparison framework by {{cite:2b9fcdfc06b4a4ce3d03d741012887940b4de871}}.
Specifically, we trained a ResNet18 on the labeled set, optimized with SGD using momentum {{formula:a45af88e-eff9-489e-b391-699add807c8d}} and Nesterov acceleration. The initial learning rate was {{formula:7920d7a2-3f1b-49dc-841f-26f16b58cbac}} and was decayed using a cosine scheduler. The augmentations used are random crops and horizontal flips. Our changes to this framework are listed below.
Re-Initialize weights between iterations
When training with extremely low budgets, networks tend to produce over-confident predictions. As a result, when querying samples and fine-tuning from the existing model, the loss tends to "spike", which leads to optimization issues. We therefore re-initialize the weights between iterations.
TinyImageNet modifications
As training did not converge in the original implementation over Tiny-ImageNet, we increased the number of epochs from 100 to 200 and changed the minimal crop side from {{formula:d4b68bcf-497e-4aef-a033-a202be35fea9}} to {{formula:d719e56d-0cc8-420e-9f56-04383d5db894}} . This ensured stable convergence and a more reliable evaluation in AL experiments.
ImageNet modifications
ImageNet hyper-parameters were identical to TinyImageNet except for the number of epochs, which was set to 100 due to high computational cost, and the batch size, which was set to 50 to fit into a standard GPU virtual memory.
Linear Evaluation on Self-Supervised Embedding
In these experiments, we also used the framework by {{cite:2b9fcdfc06b4a4ce3d03d741012887940b4de871}}.
We extracted an embedding as in App. REF , and trained a single linear layer of size {{formula:758b6dd5-eca4-4d4d-9540-e1826c178f09}} , where {{formula:e762aba3-1682-497f-88f8-f1d0b4ac198c}} is the feature dimension and {{formula:11868a4e-ead6-46ad-9c35-b7d62b2b6fc6}} is the number of classes. To optimize this single layer, we increased the initial learning rate by a factor of 100 to {{formula:e8aa7699-13ae-4940-8728-a95e9c964288}} , and as the training time is much shorter, we multiplied the number of epochs by 2.
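A sketch of this linear-evaluation setup in PyTorch; the dimensions, base learning rate, and momentum are illustrative placeholders (only the factor-100 scaling and the doubled epoch count come from the text):

```python
import torch
import torch.nn as nn

FEAT_DIM, N_CLASSES = 512, 10   # assumed sizes; set to the embedding/class counts
BASE_LR = 0.025                 # placeholder for the base framework's learning rate

probe = nn.Linear(FEAT_DIM, N_CLASSES)
opt = torch.optim.SGD(probe.parameters(), lr=BASE_LR * 100, momentum=0.9)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=200)
loss_fn = nn.CrossEntropyLoss()

def train_step(feats, labels):
    """One optimization step on frozen, precomputed embeddings."""
    opt.zero_grad()
    loss = loss_fn(probe(feats), labels)
    loss.backward()
    opt.step()
    return loss.item()
```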
Semi-Supervised Evaluation
When training FlexMatch, we used the active learning framework by {{cite:25649ca350cb1ace0a7ea9db41ebee9977f0de0d}}. All experiments contained 3 repetitions.
We used the following hyper-parameters when training each experiment:
CIFAR-10. We trained a WideResNet-28 for 400k iterations, using an SGD optimizer with a learning rate of {{formula:5927f9d7-425b-4e6c-b656-30e0a01cb9ac}} , a batch size of 64, momentum {{formula:11c4992d-3124-486a-8d39-bc8c0f6e94c1}} , weight decay {{formula:1cec428f-153f-4861-a46c-285338c1b253}} , widen factor 2, leaky slope {{formula:e359080c-0c95-4d47-9e7f-3055c2a97b44}} , and no dropout.
The augmentations are similar to those used in FlexMatch. The weak augmentations include random crops and horizontal flips, while the strong augmentations are according to RandAugment {{cite:30e1a6b421310196ebe45d3599430db2df5a2479}}.
CIFAR-100. We trained a WideResNet-28 for 400k iterations, using an SGD optimizer with a learning rate of {{formula:eeef97d8-3fff-4d6d-a3eb-3c21bf5f91cf}} , a batch size of 64, momentum {{formula:c5bbbc7e-4ccd-4c47-a090-a575436d1feb}} , weight decay {{formula:cff910d7-16c6-4456-a3de-15d80db44a78}} , widen factor 8, leaky slope {{formula:df25a07c-c0c3-420a-b63a-d7848d135e47}} , and no dropout. The augmentations are similar to those used in FlexMatch.
TinyImageNet. We trained a ResNet-50 for 1.1m iterations, using an SGD optimizer with a learning rate of {{formula:0e98e337-1f5c-4cb5-937b-2a0162341a16}} , a batch size of 32, momentum {{formula:57fcfdc4-cfc2-42d7-a767-50adc942a60f}} , weight decay {{formula:5b7e233f-5433-4ecc-be10-3940396ee4c2}} , leaky slope {{formula:2606fe76-c6cf-48c6-bae9-672dc310b9f0}} , and no dropout. The augmentations are similar to those used in FlexMatch.
Ablation studies
Margin by an "oracle" network. In Section REF , we compute an "oracle" uncertainty measure. When training an oracle, we use a VGG-19 {{cite:3f28815826a63ea3ce3b30f625005733b4ce81da}} trained on CIFAR-10, using the hyper-parameters of the original paper. We calculate the margin of each example according to this network. We note that an AL strategy based on this margin works well in the high-budget regime for both the oracle and the "student" network.
Additional Empirical Results
Supervised Framework
In the main paper, we presented results on 1 and 5 samples per class on average. Fig. REF shows similar results using additional budget sizes.
{{figure:97188860-6fbb-4706-826a-5fdeaf66375e}}
In Fig. REF we present results on additional datasets, which include ImageNet-50, ImageNet-100 and TinyImageNet.
TypiClust outperforms all competing methods on these datasets as well.
{{figure:b38b86f1-0bbf-460b-bb84-90b6f4c1163a}}
Supervised using Self-Supervised embeddings
In Fig. REF , we show the results of a linear evaluation over self-supervised pre-trained embeddings on additional datasets. We see that the initial pool selection results in a very large performance boost, especially on ImageNet-50 and ImageNet-200.
{{figure:8010ca97-f5a3-411e-91d7-9c90563329f7}}
Semi-Supervised Framework
{{figure:9d8bc955-d276-44cf-8851-6781bd5acb4a}}{{figure:2305b7bb-8ec4-4922-8005-a15c698b2f07}}
In this section, we describe experiments additional to those plotted in Fig. REF for the semi-supervised framework.
To test the dependency of our deep clustering variant of TypiClust on SCAN, we evaluated another variant based on RUC {{cite:3b812ae40cc8769d2194e245fb18ac79ed740617}}, denoted {{formula:eb58e779-6981-4812-838c-0beb3fe81caf}} . We plot its performance on CIFAR-10 and CIFAR-100 in Fig. REF . As RUC is computationally demanding, we first cluster with it to the number of classes in the corresponding dataset, and further sub-cluster using K-means to the desired number of clusters. In all the tested settings, {{formula:cc7ed075-545b-4641-9019-3292149fe14f}} surpassed the performance of the random baseline by a large margin, suggesting that SCAN is not crucial for TypiClust and may be swapped with other deep clustering algorithms.
Additionally, we performed experiments with other budgets. In Fig. REF , we plot the same experiment as Fig. REF , with a budget of 40 examples on CIFAR-10. We see similar results in this budget.
To verify that the performance boost is not unique to FlexMatch, we repeat the same experiments with another competitive semi-supervised learning method, Semi-MMDC {{cite:7b504d5c5f5178377f02595388bfd925afb7de4b}}. Using the code provided by {{cite:7b504d5c5f5178377f02595388bfd925afb7de4b}} and following the exact training protocol, we train Semi-MMDC using 20 labels on CIFAR-10. Similarly to the results with FlexMatch, we report a significant increase in performance when training on examples chosen by TypiClust. Results can be seen in Fig. REF .
We note that in all experiments we performed on the low budget regime, TypiClust always surpassed the random baseline by a large margin.
| m | 2799b69fa9fc50158a9bda67c0d1ab0a |
Real-world applications of model predictive control using Gaussian processes (GP-MPC), such as vision-based robot path-tracking {{cite:4e08cb15b82df4ab88219314fe55bcbffc22b377}}, trajectory tracking using a robotic arm {{cite:cf150ccf4529b32cf9070e3d2ab717f156156d94}}, autonomous racing {{cite:9b962859ba37eafbcbef5dde3bbb49eac1d26ae6}}, {{cite:6724fd36b2ef8051577203ec0fdccffe04b99eb0}}, or high-speed quadrotor flight {{cite:dd2103cb47a2cdeeb255411c77ad438d4af0a007}}, have showcased its potential to leverage closed-loop data for constraint-aware online model adaptation.
| i | 49e01137d41153bc8e2bfdffec44d04b |
We now present numerical results for both synthetic and real data to illustrate the proposed approach. In the synthetic-data examples, the ground truth is known, which allows for an assessment of the efficacy of various approaches. In the real-data examples, where the ground truth is unknown, our goal is visualization and exploration of the dependency structures underlying the data, similar to {{cite:2af8d10a276a6c86196692ddc5b65e47ae184977}}, {{cite:3b81003c5f52bf853f634a1e4139c0afe40dc046}}, {{cite:f0a88e8c021c911ae6552ef33d624bfe434f70db}}, {{cite:6b8048c1e38e979852b56a04acacd743c95584b9}}.
| r | 2850670dd7b178f47f7733803ee69597 |
We simulated the model on a complete graph and several regular lattices, and found that the qualitative behavior is similar in all these interaction topologies. As the temperature {{formula:fc562b6d-1806-4171-b079-21a260e40a1f}} is varied, we observed that the system exhibits a quite rich variety of behaviors. For {{formula:a286a48c-2661-41f9-b1eb-1aa0e1d90d22}} the dynamics is only opinion imitation (voter dynamics with zero noise), which eventually leads the system to an absorbing state of consensus in one of the two opinions; a property of the original voter model on all topologies {{cite:cf43211fe80d8173efd8b428b229234f9e3e56c3}}, {{cite:3d3ee9dea6de7aa3064611ee501ff10d03eeeb2a}}. For intermediate temperatures, the system exhibits two different phases separated by a transition temperature {{formula:632affc3-bc71-4fb6-80b0-70274b13243f}} : a bimodal phase for {{formula:201c58af-0e32-430e-b142-a8a4c2de0d2e}} , where the magnetization {{formula:add0ecfe-73e6-46de-b21b-94e34fe22711}} (the population's mean opinion) in a single realization remains close to the extreme values {{formula:1f76595c-7200-4639-9011-76d797aa77b2}} or {{formula:fdb87db6-8114-4f4a-8ef9-cc7f6824cc40}} , describing a population that shifts between ordered states; and an oscillatory phase for {{formula:84266422-42ad-457e-a22d-7bcd0380e311}} , where {{formula:29630c6f-2cbf-406d-b9d9-2dac04d67756}} oscillates around 0, corresponding to a population that is easily influenced by the external propaganda, with individuals' opinions oscillating over time following the mass-media trends. Finally, for {{formula:deebf746-ab6d-4724-a794-e5d0daad1630}} the high level of noise drives the system to complete disorder, with a magnetization that fluctuates around {{formula:fbfcd3dc-eb0c-49be-9593-45214ddf2576}} , corresponding to a stable opinion coexistence with similar fractions of agents holding one and the other opinion.
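A simplified simulation sketch of these dynamics (a mean-field noisy voter model on a complete graph; the paper's full model also includes the external propaganda term, which we omit here):

```python
import numpy as np

def noisy_voter(n=1000, T=0.05, sweeps=200, seed=0):
    """With probability T an agent adopts a random opinion (noise); otherwise it
    imitates a randomly chosen agent (voter dynamics). Returns m(t) per sweep."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)
    mags = []
    for _ in range(sweeps):
        for _ in range(n):                      # one sweep = n update attempts
            i = rng.integers(n)
            if rng.random() < T:
                s[i] = rng.choice([-1, 1])
            else:
                s[i] = s[rng.integers(n)]
        mags.append(s.mean())                   # magnetization
    return np.array(mags)

# T = 0 recovers pure voter dynamics (absorbing consensus at m = +-1), while a
# large T keeps m fluctuating around 0 (disordered opinion coexistence).
```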
| d | ff6162e493686206a8b0459a644da9d8 |
Gradient-based algorithms iteratively update the primal and dual variables, where {{formula:866b3fec-bc49-4073-9e9c-63e82fc2bd7f}} acts as a learned penalty coefficient in the objective, leading to a constraint-satisfying solution {{cite:a4327573f663978607f5a5b875012e16bf808168}}.
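A minimal sketch of such a primal-dual update for a single inequality constraint g(x) <= 0 (a toy instance of gradient descent-ascent on the Lagrangian, not tied to the cited method's specifics):

```python
def primal_dual(f_grad, g, g_grad, x0, eta=1e-2, steps=5000):
    """Descend on x and ascend on the multiplier lam of L(x, lam) = f(x) + lam * g(x);
    lam plays the role of a learned penalty coefficient."""
    x, lam = float(x0), 0.0
    for _ in range(steps):
        x -= eta * (f_grad(x) + lam * g_grad(x))   # primal descent
        lam = max(0.0, lam + eta * g(x))           # dual ascent, projected to lam >= 0
    return x, lam

# Toy usage: minimize x**2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
x, lam = primal_dual(lambda x: 2 * x, lambda x: 1 - x, lambda x: -1.0, 0.0)
# converges toward the KKT point x ~ 1, lam ~ 2
```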
| m | cfcef534b031cbbce1df231540662335 |
Distributed model training plays a key role in the success of modern machine learning systems. Data parallel training, a popular variant of distributed training, has demonstrated massive speedups in real-world machine learning applications and systems {{cite:4497af28f1c3bc4508f325df676c351c33803c34}}, {{cite:db61fb5564e7b07fc36c81276b00ec9cda9e0a15}}, {{cite:44d0ab03a84b9df4da58ef673458a08f7af1d813}}. Several machine learning frameworks such as TensorFlow {{cite:6ca5daaba10998b383eb0af9a623c18325c1a224}} and PyTorch {{cite:9c16ef3a77b0dcb213d966005967fc641df33e3c}} come with distributed implementations of popular training algorithms, such as mini-batch SGD. However, the empirical speed-ups offered by distributed training, often fall short of a best-case linear scaling. It is now widely acknowledged that communication overheads are one of the key sources of this saturation phenomenon {{cite:db61fb5564e7b07fc36c81276b00ec9cda9e0a15}}, {{cite:d6619fb2a950359180e5d5020543828e830d1838}}, {{cite:2f5baffc241fc92582c2522704df8f6a5831ed49}}, {{cite:f521c3d4e4d54a9948519e4098e036587f02ad7e}}, {{cite:05648cc3bfed1b735017773ec5cae8c00042b841}}.
| i | 12cca6e10d2bb696a5a373a5e5a6f825 |
We present cosmological models with dark energy described by the RHDE in the DGP brane-world cosmology, with and without interaction among the cosmic fluids in the universe. Both models accommodate the present accelerating phase and share a lot of common features. It is found that both cosmologies accommodate the present phase of dark energy, which is effectively phantom fluid in nature. For large values of {{formula:956766f5-7ee8-4604-a4c7-a73a6dd79b90}} ({{formula:74b89fe1-3061-4506-8ded-8f0ce60ce4e3}} ), however, the dark energy does not enter into a phantom phase of expansion. It is found that the dark energy density parameter ({{formula:7c38bcef-7a64-461b-9045-d62e62aff45c}} ) increases with time in both cases, but the curves converge in the future epoch. Both cosmologies are found to satisfy the observed value {{formula:c34a7831-d8d8-44b9-88e6-b45bdba41c01}} ({{formula:54cfcf91-6c17-44c9-8d98-658e58baffe2}} ), which is consistent with recent observations such as the Planck {{cite:8b16b22e7bfc099cec683cb82c8c5fbab1ba5926}}. For different {{formula:8662d649-eac8-45fa-92f0-952cd88e4aec}} values, the EOS parameters ({{formula:6b9669fd-2b30-4866-9b51-3df44166f1b8}} ) are found to converge in the future (where {{formula:dabfe16e-cf7b-48a4-a97d-587b52c4ee61}} ), although they differ substantially in the past. We do not find any cosmological model that accommodates a past phase of deceleration for higher {{formula:a4426916-29f0-4b5d-9266-314cc497a106}} values ({{formula:35187edf-4d8b-46c5-abf3-339bbfa32195}} ). Both models suffer from the same problem: it is difficult to envisage an eternally accelerating universe, since an early decelerating phase is better suited to structure formation. Similarly, positive {{formula:3ad59a0c-e4a2-49fd-b96e-87d201b92268}} values correspond to an unphysical evolution of the cosmological parameters. Although the present value of the deceleration parameter is almost the same in the presence or absence of interaction over a wide range of {{formula:7f5dc6fd-9ea3-4653-9d0a-fbf63f3c4b3b}} values, {{formula:6b14bacb-9be2-45cc-8d72-2b3178699b34}} differs significantly in the past and future evolution. Although the nature of the evolution is found to be similar in both models, important phases of the evolution, such as the transition redshift, are found to be different for interacting and non-interacting fluids even for the same {{formula:36f3f4ee-4082-4ce2-a697-b16f162086be}} value (see fig.()). The cosmological models are found to transit to the {{formula:48cf68cd-33bc-4ece-9af5-193b275ad8aa}} CDM ({{formula:5429f2fc-c729-40ef-a3f7-a7a9e3ade733}} ) scenario in the future. This is also evident from the {{formula:bc397eb9-ef96-477e-abf0-4b0ce7bfd4ab}} -diagnostic analysis of the models. The {{formula:53f905b6-bb18-4ff4-80a4-91625147aaf3}} diagnostic also points out that the dark energy evolves through both the phantom and quintessence phases. Neither model, though, is classically stable against small perturbations at present for smaller {{formula:99339120-c297-4479-a321-65d9bfc4136a}} (e.g. {{formula:4d954466-61fb-475a-85fc-02a997f893ff}} ). For relatively larger values of {{formula:a2d97a16-7333-409d-bf20-f60b8b4f2789}} (e.g. {{formula:f33bcf59-a20f-42ef-97f3-37d3b3c5734f}} ), both models are classically stable throughout. It would be interesting to explore what constraints recent observations put on the model parameter {{formula:99e2a2b4-594c-4375-b8b6-4baefa105587}} , thus clarifying the viability of these models; this will be taken up elsewhere.
| d | 4d790377e103b1b35d3998122f5a26e3 |
We close this introduction by discussing the
{{formula:81ebdc68-9798-457c-ac95-61b0d7e63d75}} -completeness of this theory. Gravity theories with a finite number of higher derivatives generically
suffer from various pathologies, such as the presence of unphysical ghost modes in the spectrum. (Counterexamples are the Starobinsky model {{cite:ab169b6e828effad0b5c0da0ab846a4d47c627a7}}, which augments the Einstein-Hilbert term by the square of the Ricci scalar, and `new massive gravity' {{cite:95c63b7325384f2e19a2c6aef54fb3bca4c260ef}}, with a particular curvature-squared modification of 3D gravity.)
For this reason, the usual view is that higher-derivative modifications only make sense
in perturbation theory, with features of the two-derivative theory being corrected by a small parameter
like {{formula:3d81a3cf-70c9-4a45-94a7-58506654fe2b}} and carrying
an infinite number of such higher-derivative corrections. Another perspective showing that a finite number of higher-derivative corrections is problematic is
that of {{formula:d6cafe28-a062-4ccd-b990-6d6fc50f6e25}} invariance for backgrounds with {{formula:fb3b4719-ee01-4213-941b-9cdec53032f1}} Abelian isometries, in the following referred
to as `duality', which string theory must possess to all orders in {{formula:167c2b5b-939f-4510-848e-074e54e9cdac}} {{cite:c83129824f862538a0b6ff6f1bbe0fac8f01b025}}.
While in the two-derivative truncation this duality is easily recognized and realized exactly, once higher-derivative terms are included the
situation becomes more subtle {{cite:a9c3e367ebafa229775e1ca558ae65a2d86a4c69}}.
After adding the known four-derivative (order {{formula:bd504ccc-b120-43bf-a7d5-6055d5bfa8f1}} ) terms in the bosonic string action and performing dimensional reduction along {{formula:b089a621-781d-46a1-afa9-aeb36e4c4690}} dimensions, the resulting theory does not possess the expected duality invariance. It is, however, invariant under higher-derivative
deformations of the duality transformations up to terms of order {{formula:27643bed-5806-4cda-ad94-7604b530ea30}} {{cite:a9c3e367ebafa229775e1ca558ae65a2d86a4c69}}. Thus,
within a theory with infinitely many {{formula:d58d4393-50f0-427a-8d2b-e20cf34b427d}} corrections this shows compatibility with duality,
but a finite higher-derivative modification is generally incompatible with duality.
The HSZ theory to be investigated here is exactly duality invariant and in this sense {{formula:81df56cb-9ab1-4431-a867-f910e9a1da65}} -complete.
| i | a1e41502c6c764adc90545b9fec151d9 |
We implemented the numerical solver for 3D Maxwell's equations with FreeFem++ (see {{cite:b8a03ef9393f472233359747b305db1e5b452078}}). Our test domain {{formula:9717122a-936d-4147-8a1d-a0fb1fe60ac5}} is the unit ball of {{formula:ddf490f6-f0b8-4558-b9c8-abe131b767b2}} . We consider a tetrahedral mesh {{formula:6c261655-2ae9-47d5-912f-66fde9b3cd17}} . For any {{formula:d6d69640-e81e-40bf-b92e-ad3ae1ed549d}} , let {{formula:31409082-ea4e-47d5-9fb6-663a3f72fc25}} be its diameter. Then {{formula:f6726ec1-e318-468c-ab03-9be946ce6ab1}} is the mesh parameter of {{formula:da06ea79-0348-4c97-85a6-b46e82fd5458}} . For any {{formula:469d8496-c873-4b3a-9d46-cd9ff82ba3b9}} , we denote by {{formula:5ecb0cbc-4aeb-4707-8715-2a1e5428692f}} the number of edges. Edge finite elements of order 1 (see {{cite:7622a71a06a972bda2125cb172b3d7edd9dc3860}}, {{cite:3939bf7bfded928a99e996afe4a0c08d6be8925b}}) are used to approximate the respective solutions of the problem (REF ) and of the sensitivity equation (REF ).
| r | dc281e3aa8a2dcc09bb4b50f22baf9e5 |
We first pretrained a DenseNet {{cite:4f146f73708fbf928db7f148418d29198e67c780}} classifier in a Positive vs. Healthy binary setting per pathology in the dataset. Both the Generator and the Discriminator are then conditioned on the class labels {{formula:b52da8d3-230d-4965-aeca-dbc3c8af14c6}} predicted by the classifier, by concatenating an embedding of the image labels to their inputs. The encoder is based on the Discriminator architecture, with the batch-normalization layer removed. The main purpose of the encoder is to allow mapping any kind of image into the latent space, which enables creating counterfactuals for real images.
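A minimal PyTorch sketch of this conditioning mechanism (layer sizes and dimensions are our illustrative assumptions; only the idea of concatenating a label embedding to the inputs comes from the text):

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Condition the generator on class labels by concatenating a learned
    label embedding to the latent code; the discriminator is conditioned
    analogously on its own input."""
    def __init__(self, z_dim=128, n_classes=2, emb_dim=32, out_dim=64 * 64):
        super().__init__()
        self.emb = nn.Embedding(n_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.emb(y)], dim=1))

g = CondGenerator()
fake = g(torch.randn(4, 128), torch.randint(0, 2, (4,)))  # (4, 4096)
```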
| m | 194495d814484c99400c5ca588aff141 |
In Declarative PM, the de-facto standard for expressing process properties
is Declare {{cite:1d4e6b1091ff7072daa4d79b95edaac109b00420}}, a temporal logic consisting of
a set of
template formulas of the Linear-time Temporal Logic over finite
traces (ltl{{formula:076dde7e-a200-4e1e-9881-98b45761f046}} ) {{cite:080e0ef981626a0dce840b7e3ee3f3420f0880de}}; here, we use a strictly more expressive extension,
which we call local ltl{{formula:8e752f78-c88d-4a1e-9ed6-db6ad831062e}} , i.e., l-ltl{{formula:c69db08a-88d7-4d85-8b7c-2544f0ba412c}} .
This logic features a simple automata-based machinery that
facilitates its manipulation, while retaining the ability to express
virtually all the properties of interest in declarative PM.
Specifically, l-ltl{{formula:873fa265-6f95-4fc5-bf25-4e8fe48bc482}} subsumes Declare and even its multi-perspective variant
MP-Declare {{cite:15eb00dbcbae09f49ebce0418a761b62cbb51e07}} without timestamps and
correlation conditions (i.e., conditions that relate the attributes
of some event to those of other events),
but does not subsume full MP-Declare.
Observe that since MP-Declare does not subsume l-ltl{{formula:ed1d4647-6577-47a7-9e04-bc9767d7ef19}} either,
the two logics are incomparable.
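For illustration, one standard Declare template written as an ltlf formula (this is textbook Declare, not this paper's l-ltlf extension): the response constraint demands that every occurrence of activity a is eventually followed by b within the finite trace:

```latex
% Declare "response(a, b)" as a formula of LTL over finite traces:
\mathsf{response}(a, b) \;\equiv\; \Box\,\bigl(a \rightarrow \Diamond\, b\bigr)
```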
| i | 5e7f6e99888a8b029517ca5b4f970d4e |
Here we have only treated pure gravity, and thus to properly address the cosmological constant problem we should understand the situation when matter is coupled to gravity {{cite:cfb7b7419e8b0716fa0c6cf8c7201291209205f3}}. Indeed, arguably there was never a cosmological constant problem in pure gravity: if we adopt dimensional regularisation, only terms proportional to {{formula:e5a4b5e1-ec29-4224-9dfd-e179c0001e15}} would renormalise {{formula:d657c406-5753-47f3-afa8-37a6500aa6b2}} , and we can simply set {{formula:81135d76-5904-4f92-9c90-7b22782efed6}} . What will remain true even in the presence of matter is that there is an inessential coupling related to a rescaling of the spacetime metric. This might shed new light on the cosmological constant problem {{cite:f11bdfe69d2736fe67fc59a2cfc1b9f018d6d8ae}}.
| d | 477e62956f07be1bff5e94b00a7a4920 |
Let {{formula:a9f6262c-b8d2-4d69-8a45-3d066fa69b7e}} be a piercing vector field for {{formula:679fba77-ccde-44c9-8083-8d3f64101e91}} normalized to {{formula:3e8f51dc-02cf-4393-ada8-38ee9818d0f1}} where {{formula:dc545f2c-a138-47b8-9ef7-a14ac8b9b4b4}} is a background complete Riemannian metric on {{formula:a1f48f19-2460-4d3f-b988-8190dc88de73}} . Since maximal integral curves are inextendible as continuous curves, the integral curves of {{formula:c949c751-29c2-4ddd-8b53-e7b04cafe96e}} have domain {{formula:a4b969e1-cd31-4e6d-94ed-39015edd145d}} . Let {{formula:7ffb83ca-0a51-40ce-8eab-ce1324f840f1}} denote the flow of {{formula:6db9cd94-bf8e-47b9-a4ad-325dfb7fe59b}} . Let {{formula:014f3ede-33e8-48ea-9702-a3f2a0df9cda}} denote the restriction of {{formula:dee221a8-505b-4dbc-9692-b6a3124c954c}} to {{formula:d8f026d9-f7de-4b99-b70d-a0d79f2b6a80}} . The piercing assumption and the fact that integral curves don't intersect imply that {{formula:345d3a10-58f3-4351-a1ac-3d6ee4bdf441}} is bijective and hence is a diffeomorphism {{cite:ff1c1ba687365c35f497c7546d9fcddffa63cf07}}. Let {{formula:a6eedfd7-0097-4419-b44d-dc8bb19996f7}} denote the natural projection onto {{formula:97aac723-4629-410a-9bf3-0eb72153bbe0}} . Put {{formula:3a1527e8-6bd4-4ac4-a2b4-b51e826082cd}} . Then {{formula:6692e1e2-07fa-47ef-ad94-6a9801f9b602}} is a smooth retraction of {{formula:ada50fb1-97d0-4601-8451-262119a51451}} onto {{formula:c3a5f01a-04f1-499d-a93d-1d48a22d873a}} . Lastly, let {{formula:afebbc74-f386-4a92-b912-3ee875eab563}} denote the timelike cylinder formed by the image of the integral curves of {{formula:8ec18364-06a5-4517-802f-fbe228a20e2d}} through {{formula:982cbf3a-3100-4c27-a116-610e8aeba373}} . That is,
{{formula:3ad63281-5f8a-4380-93e8-091f5e7ccd35}}
| r | c211e2cdf29894ec4e8d2aa2863a3fb5 |
Few-Shot Segmentation (FSS) comes to the rescue by leveraging approaches like metric learning {{cite:c370a3c5b404b18c8d0b048f8f30a1c3f0ba2c23}}, {{cite:f9e6120cfb40c1063d8b2372e754d656d3b36ec8}}, {{cite:cafa8600a90b6f0128c409baf71a4c3c35982416}}, {{cite:5f58cb40873eec77614ddcf2c4ecab07d522b07d}} (e.g. Prototypical Learning) to segment novel or rare classes/organs with as few as one to five examples per class. An FSS model can segment these novel classes by learning support-query image relationships from a completely disjoint set of base classes with an ample number of examples. During testing on novel classes, the model uses the learnt metric to segment unseen regions present in the query images with the help of a few labeled support images. In the past few years, many FSS methods employing CNN-based feature extractors have been proposed for both natural {{cite:c370a3c5b404b18c8d0b048f8f30a1c3f0ba2c23}}, {{cite:f9e6120cfb40c1063d8b2372e754d656d3b36ec8}} and medical {{cite:cafa8600a90b6f0128c409baf71a4c3c35982416}}, {{cite:6db101178d6e1eccff5a018a530d216e7081b6e1}}, {{cite:d08f9be36df75897f5ddac017f3c2564a13bcf01}}, {{cite:055fde1269ec9e402f1586364169a7546d8fc11a}}, {{cite:f9343587a6ab6d376de9f895a403a3b255391a78}}, {{cite:5f1be713dd8f906ff9cd85b3de9a24b2c3aa7f98}} images. These methods lack the capability to explicitly force the prototypes of support images to lie close to the query image features, which may lead to poor performance. This is further aggravated in medical domains, where the novel/test query images can differ slightly from the images in the base classes due to variation in data modality, texture, tissue appearance and orientation, camera characteristics, color intensity, and the size and shape of the target organs. These additional test query image characteristics can be regarded as perturbations of the query features, and the learnt metric in prototypical FSS fails to capture these subtle variations.
Further, due to the lack of well-annotated data, these models are vulnerable to several kinds of white- and black-box adversarial attacks {{cite:901bc3818ba24085181ec3c3e0739be49fb472a9}}, {{cite:bbd7349eb85218372478e9fa9b69df81f790ae57}}, {{cite:88c73d283c2ca74f62252f8e3c3d3062bcf121ad}}. ML practitioners employ FSS to learn patterns from well-annotated base classes and then transfer the knowledge to scarcely annotated novel classes. This knowledge transfer is severely impacted in the presence of adversarial attacks: when the support and query samples from novel classes are injected with adversarial perturbations, models fail to recognize organs, important clinical landmarks, etc., present in the image, thereby questioning the credibility of these FSS methods for medical image segmentation.
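A minimal sketch of the metric-learning core shared by prototypical FSS methods (masked average pooling of support features into a class prototype, followed by cosine scoring of query locations; the shapes and temperature are our assumptions):

```python
import torch
import torch.nn.functional as F

def prototype(support_feats, support_masks):
    """Masked average pooling: feats (B, C, H, W), masks (B, 1, H, W) in {0, 1}."""
    num = (support_feats * support_masks).sum(dim=(2, 3))
    den = support_masks.sum(dim=(2, 3)).clamp(min=1e-6)
    return (num / den).mean(dim=0)              # (C,) class prototype

def query_scores(query_feats, proto, tau=20.0):
    """Cosine similarity between every query location and the prototype."""
    q = F.normalize(query_feats, dim=1)         # (B, C, H, W)
    p = F.normalize(proto, dim=0)[None, :, None, None]
    return tau * (q * p).sum(dim=1)             # (B, H, W) foreground scores
```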
| i | b5d7413d8d8522a20c2f9295f2008b1a |
It should be emphasized that the total dc current due to the group velocity modification vanishes for a fully occupied band, so it is not of interest for electronic insulators. Instead, here we study an analog of the circular photogalvanic effect for neutral bosonic atoms in an optical lattice, where they occupy the lowest energy state at {{formula:db3f6da8-4246-4c4f-993c-1638211eafae}} at low temperature. An analogue of the electric force {{formula:dc6d9eec-76ff-485f-b6ee-5404b73038c4}} for neutral atoms is obtained by shaking the optical lattice, which produces the force of inertia {{formula:d6cb1ab5-bb19-409a-bd9c-6aca58c722eb}} , as experimentally realized in Ref. {{cite:c69c3ab1aa7fe7fe2dbc456e2974f474033741a8}}. Here {{formula:ed9f98bf-162a-4412-be98-be63e18326af}} is the mass of an atom, and the time-dependent vector {{formula:e35fa341-4f5a-4343-85e4-fda67a90be0b}} represents the displacement of the optical lattice.
| i | cdb5da1104483fa4bfc328d906015494 |
From the experimental perspective, the obtained systems from our {{formula:ae4e5acb-09fe-4ee2-b64d-d2822163033e}} th-rooting procedure are expected to be implementable with the same setups employed for realizing their parent models. To this end, the additional degrees of freedom required in our {{formula:a6e7d542-a2e1-43ee-90bd-4e1eec3d3bad}} th-rooting procedure can be principally implemented by coupling multiple copies of the target system. In the context of the non-Hermitian FTIs and FSOTIs explored in this paper, their corresponding square- and cubic-root systems can thus be realized in principle via setups like photonic quantum walks {{cite:f9ad497b250737a3213dadbbbf455b42cdfb0d9a}}, {{cite:62420e058bb629fcc69b886e7c74ea2339862184}}, {{cite:06304beadb3f0a5cd24ddcff6dd3745bd589e90d}}, {{cite:d30bc6bc0b53b23b5faefff01e4ed72b8e47b955}}, {{cite:8072b8c0c2c5aa8bf1526e0d7d10ed4d78a19af5}}. For example, the anisotropic hopping amplitude and non-Hermitian lattice potential in our parent models can be implemented by introducing controlled optical losses with acousto-optical modulators in coupled optical fibre loops {{cite:f9ad497b250737a3213dadbbbf455b42cdfb0d9a}}. Moreover, the winding number invariants used in characterizing their topology can in principle be experimentally probed via measuring mean chiral displacements {{cite:4149a93c2d3ce7ea72414e16b52390cbfc0cd2e9}}, {{cite:99e9ebfaab5a6b89c53ea81740251f6e624434d5}}, {{cite:499b61767b396858bbfaa3577d52da0dcd2a0d53}} or time-averaged spin textures {{cite:9485f068e26724402977d5fd3c89484e7935f47c}}, {{cite:5041ac8b683e5d01e6ff6e0bba2bfd3123b9ba54}}, which can also be conducted in similar photonic setups {{cite:62420e058bb629fcc69b886e7c74ea2339862184}}, {{cite:06304beadb3f0a5cd24ddcff6dd3745bd589e90d}}, {{cite:d30bc6bc0b53b23b5faefff01e4ed72b8e47b955}}, {{cite:8072b8c0c2c5aa8bf1526e0d7d10ed4d78a19af5}}.
| d | d8ffc1b236fa6d6aadf054855eb899c7 |
Proof of Statement 3: We are now in a position to bound the computation cost of Algorithm REF . Let {{formula:653ac88a-26c9-42bc-b457-9f56fbafb15e}} be the computation cost of implementing the unbiased MCMC subroutine {{formula:4c58dade-3950-4928-b0f7-57758f3ddfe8}} once. It is shown in {{cite:58aacd9c3423623675d09076de21a9ddf6e6a9a5}} that {{formula:c7a5e171-54ae-4c1b-b9d1-6b6e40f0952a}} . The computation cost of implementing Algorithm REF essentially comes from {{formula:93ab8d72-7532-4913-a2f0-e4b677fc5f18}} calls of the subroutine {{formula:6a67836d-8b7b-482e-b74a-aca23e8a1184}} , where {{formula:ea879d48-6eb0-4de5-8aee-1e92ed10e6fe}} . Therefore it suffices to show that {{formula:2bbc2572-8d48-4bec-94dc-34aa96800d8f}} has a finite expectation. We calculate
{{formula:339bf6a8-4acb-49ea-94f0-bc8f5d0ef37f}}
| r | b75efa44e20428147e2bd237104901d4 |
The normalization of this distribution gives {{formula:ce2c3347-202c-49e0-8ff9-b3b8c620552e}} .
We assume that all binaries initially have circular orbits, as the outcome of the interactions of
systems with the same semilatus rectum is almost independent of eccentricity {{cite:0deba11a3ecb9706c1c75d554d967a4c8ce57408}}.
All stars are assumed to be initially in binaries (observations show that most, {{formula:aff6d4c4-7761-404b-95d6-f1bde86ea4e6}} , OB stars are born as members of binary and multiple systems {{cite:fc1f2f6b6470fbb1fe97df7f495e0908230c0d49}}; this can reduce our calculated birthrate and expected number of Galactic Be{{formula:47957231-a494-45be-bcbc-5108335b938b}} He binaries by a factor of less than 2).
The initial metallicity of stars
is taken to be {{formula:f934adc9-549a-488a-b83c-94cd366ffaf2}} . The Galactic thin disk is an active site for ongoing star formation,
which dominates the formation of relatively massive binaries that we are working with.
Observations of HII regions via radio continuum emission indicate that ongoing star
formation occurs with a nearly uniform efficiency over the Galactic thin disk {{cite:1b19255a5b638a13b1b7d2f2fff6c27fe3a9eec9}}.
We simply assume that the Galactic thin disk is infinitesimally thin and the recent star-formation
rate has a uniform distribution over the disk.
We adopt the distance of the Sun with respect to the Galactic center to be 8 kpc {{cite:a0ff5afe08ee84234f56dfd1898afc6f561ae6a0}}.
The Be{{formula:9abedfb6-e7a7-4393-8b61-26fc42d0d7e2}} He binaries are assumed to remain at their birth places without considerable movement,
considering the relatively small velocities and short lifetimes of massive binaries.
| m | f4b4b115d8ae1c6e605ad7a42a5bee06 |
Blind demixing in a single random graph. Fig. REF shows the results of demixing with a single diffusing graph for varying {{formula:76a60a6d-4a01-412c-94c0-4ea7fc569628}} and {{formula:7ab29489-246c-4559-9661-6d6f8bff9e18}} . Random graphs are drawn according to the Erdős-Rényi model {{cite:c9b2be6ad086c43f01469a1db0be29aa407be3e7}} with edge probability {{formula:5d65034f-61f0-45a4-82cc-00b20c9c14f9}} . Four settings with different values of {{formula:cac4dc9f-c1ae-4d79-9dd7-43a1b1623f57}} and {{formula:d6716fcf-1db7-4d06-a03e-2a0bbefbc698}} are presented. As expected, the performance worsens for larger values of {{formula:f9691edc-a276-41c6-8a6e-84c3a2e68aa4}} and {{formula:1cb9d06c-f620-45f0-ac0c-8592708f20a6}} . Interestingly, demixing for {{formula:8647a744-f3e2-49d9-bfe9-be20170cce61}} is less successful than for {{formula:88233f3e-ed12-4956-82de-47bc2000cf02}} . Since the number of signal coefficients to estimate in these two settings is the same ({{formula:87fdbb65-9ee0-45e8-8843-fad4a71d64b7}} ), this suggests that the recovery is more sensitive to {{formula:78b822d7-2a3d-47a2-b651-d980a372c387}} .
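A sketch of the synthetic setup under one plausible reading of the model (two sparse inputs, each diffused by an unknown polynomial filter of the same Erdős-Rényi shift operator; the filter order, sparsity and sizes are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def er_adjacency(n, p):
    """Symmetric Erdos-Renyi adjacency matrix with edge probability p."""
    A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    return A + A.T

n, L, s = 50, 3, 3                       # nodes, filter order, sparsity
S = er_adjacency(n, p=0.25)              # graph shift operator
H = [sum(h * np.linalg.matrix_power(S, k)
         for k, h in enumerate(rng.normal(size=L + 1))) for _ in range(2)]
x = [np.zeros(n), np.zeros(n)]
for xi in x:                             # sparse input signals
    xi[rng.choice(n, s, replace=False)] = rng.normal(size=s)
y = H[0] @ x[0] + H[1] @ x[1]            # observed mixture to be demixed
```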
| r | 27636c810c47ed4b5e9e12a7cbb8fe58 |
Normalization methods are used to improve the trainability of deep neural networks. In this section we will show that four commonly used normalization methods—batch normalization {{cite:dc32fa586a02c99b725d05e9de56245e6eb06717}}, layer normalization {{cite:5f3bb68ebfcbfa7375500e90cf1e1fade9632c60}}, weight normalization {{cite:f66371eee9662ece2a2eb3dfda077ea1ee8f4f01}} and self-normalizing neural networks {{cite:d5d2d8a28bc03d2e7c08865e3f1c84b9267879d0}}—cannot convert untrainable networks into trainable networks.
| m | 6c3305469921b0954ea3acff10ea574b |
The performance of CAN is reported in Table REF , along with those of graph kernel methods: Random Walk Kernel (RWK, {{cite:75598028d04dcf70072577dc90c58579c5663f02}}),
Graph Kernel (GK, {{cite:e6a4eb72da39b50d989eb0eb145f68e5943f5324}}),
Propagation Kernels (PK, {{cite:96668a5ea5978ccc6acecdc6936f01773a16532d}}),
Weisfeiler-Lehman graph kernels (WLK, {{cite:c43dc98b4c9fedc0f2019afb348cd1896d51fc0d}}); other GNNs: Diffusion-Convolutional Neural Networks (DCNN, {{cite:0c9d25f3360cd90e786cbf30c9a89781550b95ca}}), Deep Graph Convolutional Neural Network (DGCNN, {{cite:ef8d440016a500f06ba3573dc2d84f9c19505fac}}), Invariant and Equivariant Graph Networks (IGN, {{cite:4f6e4b56246e2422be33ea3c93f12f20abc9c04d}}), Graph Isomorphism Networks (GIN, {{cite:10d8718f032d353c451e390e983f3f5d7ef85425}}), Provably Powerful Graph Networks (PPGNs, {{cite:35ac93731d965a815683bfc35f51ce714db5f190}}), Natural Graph Networks (NGN, {{cite:ade32a73074984dfb823afc102c2936c9a80ebfc}}), Graph Substructure Network (GSN, {{cite:0f899ac63e58ceb6fa096e7f97a8f885b6788f51}}); and topological networks: Simplicial Isomorphism Network (SIN, {{cite:244623084ae7e9805a47f8c591e469b9383ec51b}}) and
Cell Isomorphism Network (CIN, {{cite:327e6c12fa14f57a719b23527f36c10678a65fc2}}). As we can see from Table REF , CAN achieves the best performance on four out of five benchmarks, while performing very similarly to CIN in the last experiment (i.e., NCI109). Since CAN has a much lower computational complexity than CIN (cf. Appendix C), these results support the validity and performance of the proposed architecture. The tested models have been implemented using PyTorch {{cite:dc10a2250ac9f699b6120dbc6764c37d1c104f16}}. The datasets have been taken from the PyTorch Geometric library {{cite:ae0441dc917712943d39076bf0da3c6a404dbfbf}}. The operations involved in cellular lifting maps use the code provided by {{cite:327e6c12fa14f57a719b23527f36c10678a65fc2}} under the MIT license.
PyTorch, NumPy, and SciPy are made available under the BSD license, Matplotlib under the PSF license, and graph-tool under the GNU LGPL v3 license. PyTorch Geometric is made available under the MIT license. All the experimental results were obtained on NVIDIA® GeForce RTX 3090 GPUs with 10,496 CUDA cores and 24GB GPU memory. The operating system used for the experiments was Ubuntu 22.04 LTS 64-bit. See the Appendix for an extensive description of the tested architectures and an ablation study. The code implementation for the proposed architecture is available at: https://github.com/lrnzgiusti/can.
| r | 5247a34265379ea1ff0d7689d9343564 |
In the first two experiments, we observe better generalization capabilities in policies using iGibson's domain randomization. For PointGoal navigation based on depth images, the performance goes from 0.27 to 0.40 SPL {{cite:21133a369e09daf5207ffbfd8a5c86c8a200139e}} and from 31.25% to 44.75% success rate when using randomization, indicating that the larger variety of shapes observed in the training process generates more robust depth-based policies. For object navigation based on RGB images, the performance goes from 49.75% to 57.5% success rate, indicating that material randomization helps in obtaining RGB-based policies that are more generalizable to unseen scenes and textures. Finally, for PointGoal navigation based on LiDAR signals, the policy achieves a 33% success rate in Rs_int in iGibson, and a 24% success rate in the real-world apartment. With only a 9% drop in performance and the failures mostly occurring in the same episodes (same pairs of initial and goal locations in iGibson and the real world), this experiment indicates that the LiDAR signals generated in iGibson are realistic enough to facilitate zero-shot policy transfer. In summary, as shown in Table REF , iGibson provides unique support to train with realistic virtual sensor signals (e.g. LiDAR) and domain randomization, which leads to more robust robot navigation policies that successfully transfer to novel scenes.
{{figure:d7354178-efc5-4d34-8146-655b392fe7cb}} | r | 902ae734baf057d765e0b302455ca1f9 |
where {{formula:1f3c5192-5f94-4619-a6a3-81fa8451a4d0}} is a generator of an analytic semigroup on a Banach space {{formula:90abf8a1-a3ee-4171-a727-1aad20e8843b}} and {{formula:f96d4b66-6613-43ef-8e42-7ebef537b469}} is a locally Lipschitz function with {{formula:f9337b47-78fb-4af4-9d5c-f78779fdd2c4}} , {{formula:edeb3217-2c07-4fd5-90c4-d379ef6202e1}} , can be an interpolation space between {{formula:f2c4aa3b-e5bd-4997-9cb2-1a53038ffc34}} and {{formula:a322fa40-c2f6-45c2-b999-59cecbf5ec85}} or {{formula:fdb8acd9-326d-4aaf-afe4-10a7f2ad792b}} , is well-studied, see e.g. Lunardi {{cite:85f0cb44afc359b6d216c80428dc764a6aa1a8c3}}. As already discussed in {{cite:85f0cb44afc359b6d216c80428dc764a6aa1a8c3}}, we may have situations where {{formula:649da03c-9afd-4579-97d4-4e8d9acff115}} is set to a domain that is neither an
interpolation space between {{formula:39ddaa58-8d55-47bc-bb93-365e8ca4c9ff}} and {{formula:d8777ea3-89d3-49cb-93f0-b58bfe7a5ae2}} nor {{formula:69316ebb-86ec-4cd1-b7bc-965b49e9b5a5}} .
| i | 13085034d517d08eeb94a9e7a060ab53 |
According to {{formula:3f48c5e1-86f2-4124-be7b-4ce6de8fc6ed}} {{cite:c64fc652bc94e386e21084295c4a72a547476114}}, we obtain {{formula:4d5445bb-cce9-46eb-8fb2-d33541ecf5b5}} and {{formula:9ad66dc6-678c-4563-a537-4217dd145229}} .
| r | 61c7e521336b900c1412ba8b6c856036 |
Heuristic scores are computed from the characteristics of the graph structure; the key is to calculate a similarity score over the neighborhoods of the two target nodes {{cite:9f93d5d8c0b68c08add04a3d131188fd1a38ffd4}}, {{cite:cdc94bbaee2b51ff92a6e8270726506f10235e4c}}. As shown in Figure 1, according to the maximum number of neighbor hops used in the calculation, the heuristic methods can be divided into three groups: first-order, second-order and higher-order heuristics. Zhang et al. {{cite:8785dda3a324ff3a9a77b9f0d1a467a2c8177230}} proposed the Weisfeiler-Lehman Neural Machine (WLNM), which extracts a subgraph around each target link as an adjacency matrix and trains a fully connected neural network on these adjacency matrices to learn a link prediction model. Studies have shown that higher-order heuristics such as rooted PageRank {{cite:cc70301e72392e1a27c3d11657aa161cd81f73d7}} and SimRank {{cite:655a94f62f30f7a0a203bd52fd437a17fef4a84e}} perform better than lower-order heuristics {{cite:a89c4e8c4390db25adef7d4f9de1d848ccf6c8f8}}, {{cite:63f0b33a8ea5dda0ae2b90eb7906b1eaa5af4d2a}}. However, an increase in the number of hops means higher computational cost and memory consumption. Moreover, heuristics have poor applicability across different types of networks {{cite:cdc94bbaee2b51ff92a6e8270726506f10235e4c}}, {{cite:4ccb9f50c5d2f7c2777b8747a6b14a064fcdfedc}}, which means considerable computation is required to find the appropriate heuristic for each network.
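A small sketch of the classic neighborhood-based heuristics mentioned above (first- and second-order scores; networkx is used only for graph bookkeeping):

```python
import math
import networkx as nx

def heuristic_scores(G, u, v):
    """Similarity scores for a candidate link (u, v) from local neighborhoods."""
    nu, nv = set(G[u]), set(G[v])
    common = nu & nv
    return {
        "common_neighbors": len(common),                       # first-order
        "jaccard": len(common) / max(len(nu | nv), 1),
        "adamic_adar": sum(1.0 / math.log(G.degree(w))         # second-order
                           for w in common if G.degree(w) > 1),
    }

G = nx.karate_club_graph()
print(heuristic_scores(G, 0, 33))
```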
| m | 18c66b371e552463daf2e6fe6aa22ed5 |
In recent years, there have been two lines of research on SLU. One aims to improve the prediction accuracy of
ID and SF. These models often learn ID and SF jointly by regarding ID as an utterance classification
problem and SF as a sequence labeling problem. Following this Classify-Label framework, various
joint models have been proposed {{cite:9ec63b821646291136d85d0fee37f799d1a11f41}}, {{cite:138df7cacc8a2b78c74297f5a6703a2f745c3b9c}}, {{cite:ebc9e6e794beb2307c0943d30dc053d0e5396c24}}, {{cite:d28a3882ccbf7c004944d5e2284e9936f3725fd1}}, {{cite:64e8936bf2240f16060af97bd825244281007edb}}. These joint models can utilize the semantic correlation between intent and slot and hence result in higher prediction accuracy than separate models.
Despite its success, the Classify-Label framework lacks domain adaptation ability.
This is because the category label spaces of the source domains and target domains, which are made up of class indexes, are not necessarily equivalent.
| i | a5077478c1ceec5f96c61d6747709b31 |
Finally, a sequence of recent papers characterize the asymptotics of the Bayes-optimal
estimation error in the two models described above {{cite:8f2f2f993439a053cb4cbb51cc18ec1c929ac453}}, {{cite:66bc2beb2dc1d44f19860cef4104ec21295865bb}}.
It was conjectured that, in this context, no polynomial-time algorithm can outperform Bayes AMP, provided these algorithms have access to an arbitrarily small amount of side information. (Concretely, side information can take the form {{formula:7ea79e97-f8ea-4da0-933f-8f889ce12b0a}} for {{formula:a5d28955-dc60-4f4b-8838-11a5bdc7c6a0}} arbitrarily small, {{formula:1eff7f16-ff58-4fd7-9c29-4aa571d92818}} .)
Theorems REF and REF
establish this result within the restricted class of GFOMs.
| d | 5cec1075cd1f1128bf50ce1b1d095bc4 |
The entirety of the theory and algorithms for numerically solving optimization problems like (REF ) cannot be covered in this thesis.
We focus on the essential techniques and point to {{cite:bde5bce9950f1511843826c3b7ca742b2eefeee0}} for further reading.
In the sequel, we review iterative optimization methods for finding a stationary point of a general
optimization task, i.e., finding an optimal point in the input- or control space of a functional {{formula:97dd53c6-44fd-4011-8b41-5ea24bd76ea4}} with arbitrary constraints and complexity:
{{formula:986accf5-9a0c-49c2-a61b-85825fb07196}}
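As a concrete instance of the iterative schemes reviewed next, here is a basic gradient method for finding a stationary point of an unconstrained, differentiable objective (a minimal sketch; constraint handling and step-size rules are deferred to the discussion):

```python
import numpy as np

def gradient_descent(grad, x0, eta=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x_{k+1} = x_k - eta * grad(x_k) until the gradient vanishes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:              # (approximate) stationary point
            break
        x = x - eta * g
    return x

# Toy usage: the stationary point of J(u) = (u - 3)**2 is u = 3.
u_star = gradient_descent(lambda u: 2 * (u - 3), np.array([0.0]))
```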
| m | e4cbe9b3c52717645f721769690ce70f |
Topic eSignature (1.5%) contains discussions about different issues and customizations for the electronic signing of documents, e.g., using DocuSign to collect a user's agreement/permission for sales or account opening. For example, there are questions such as "Auto Add Document to DocuSign [Platform] Using Custom Button" in {{formula:ac48edf4-96f1-43f1-82e3-4a70e938f17d}}, {{formula:bd358011-86bf-4ddd-a041-134c2bc0094d}}.
RQ1. What topics are discussed about LCSD in SO?
We found 40 topics organized into five high-level categories. The Customization category (30%) has the highest number of questions, followed by Data Storage (25%), Platform Adoption (20%), Platform Maintenance (14%), and Third-Party Integration (12%). Window Style Manipulation from the Customization category has the highest number of questions (5.9%), followed by Build Configuration Management (4.1%) from the Platform Maintenance category. Our studies reveal that low-code practitioners struggle with RESTful API integration and with configuration and maintenance of the platforms. We also observed that proper documentation could have mitigated these challenges to a great extent.
How do the LCSD topics evolve over time? (RQ2)
Motivation
Our analysis of RQ1 finds that LCSD topics are diverse. For example, the Customization topic category contains discussions about developing and customizing the application, while the Platform Adoption and Platform Maintenance topic categories contain discussions related to different features provided by the LCSD platform providers. The platforms for LCSD continue to evolve, as do the underlying topics and question types. We study the evolution of these topics and question types to better understand the adoption of LCSD and its community. This analysis provides valuable insights into the LCSD community and helps identify whether any topic needs special attention.
Approach
Following related studies {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, we study the absolute and relative impacts of each of our observed five LCSD topic categories as follows.
Topic Absolute Impact. We apply LDA topic modeling to our corpus {{formula:408ad123-8da8-4044-afa1-9eb7df67aaf6}} and get {{formula:55cbecaf-0157-4f05-95c3-4039ad82a5f9}} topics ({{formula:f472b9bc-78fb-41fd-8ab5-654d7ffe289d}} , {{formula:6709fa83-5954-41ff-87d0-512da17c42be}} , ..., {{formula:34c0c3a6-d574-41d4-a74b-d6896a4955f6}} ). The absolute impact metric for a topic {{formula:0ad20832-8600-4418-9244-8d8de40ae6ce}} in a month ({{formula:7f3c3e85-bb28-42c9-a66d-837af091e92c}} ) is defined as:
{{formula:6941fc8e-1477-4cd5-a331-9f6ceb47d902}}
Here {{formula:567f5327-76c8-477b-bbf4-968c6296842c}} is the total number of SO posts in the month {{formula:85a7a7fd-31e3-42a1-9c45-43f6ba204548}} and {{formula:29bfd4be-981d-492d-8d1a-1e522e90b4c0}} denotes the probability of a post ({{formula:20b34867-2a72-4c6a-9565-61ed4554d6a3}} ) belonging to a topic {{formula:dd319300-3be6-4f22-bcd6-8c1b9ad24464}} .
From our topic modeling, we found 40 topics that were categorized into five high-level topic categories, i.e., Customization, Data Storage, Platform Adoption, Platform Maintenance, and Third-Party Integration. We further refine the absolute impact equation to compute the metric for a topic category ({{formula:c68ae90c-dc93-4e89-bace-57be178b6daf}} ) in a month {{formula:48044fdd-c75d-4a0f-8d7f-1407822dedde}} as follows:
{{formula:e9101783-ee1b-412a-ba13-bdc72ae48401}}
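To make the computation concrete, the following minimal sketch (Python with pandas; column names and values are illustrative, not the paper's actual schema) derives the monthly absolute impact from an LDA document-topic matrix:

\begin{verbatim}
import pandas as pd

# One row per SO question; each theta_* column holds the LDA
# probability that the post belongs to that topic (toy values).
posts = pd.DataFrame({
    "month": ["2020-04", "2020-04", "2020-05"],
    "theta_window_style": [0.7, 0.1, 0.5],
    "theta_sql_crud":     [0.3, 0.9, 0.5],
})
topic_cols = [c for c in posts.columns if c.startswith("theta_")]

# Absolute impact of a topic in a month: sum of the topic-membership
# probabilities over every post created in that month.
absolute = posts.groupby("month")[topic_cols].sum()

# Absolute impact of a topic category: sum over its member topics.
category_topics = {"Customization": ["theta_window_style"],
                   "Data Storage": ["theta_sql_crud"]}
category_absolute = {cat: absolute[cols].sum(axis=1)
                     for cat, cols in category_topics.items()}
\end{verbatim}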
Topic Relative Impact. The relative impact metric signifies the proportion of posts for a particular LCSD topic {{formula:8655cfd8-7696-429f-8ec7-ebddb341785e}} relative to all the posts in our corpus {{formula:61cea266-a60b-4fe3-9613-757f0da67457}} for a particular month {{formula:841f5784-16f0-4d03-acf8-76beb8c78d5c}}. Following related studies {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, we compute the relative impact metric for a topic {{formula:5f1ca10a-0731-452f-ba0b-0d9bedcf3893}} in a month {{formula:1fa09761-42b9-4faa-9a7d-676cd767c736}} as follows:
{{formula:38ee657c-c26f-4548-addb-91980b394df5}}
Here {{formula:0ece4dce-9c9f-4466-91ee-825670a6f051}} denotes the total number of posts that belong to a topic {{formula:a2cb22f9-85b2-464b-8144-5d9abc29065a}} for a particular month {{formula:d4b45958-c391-4b17-9b80-37017026b6c8}}, and {{formula:a9abf668-7d61-4bb7-9011-8012e3300df9}} denotes the probability of a particular post {{formula:fd4e8995-7942-4cfc-b48b-ebddc48cac32}} from our corpus {{formula:e8314a0a-c4e5-4fac-ac3d-28e39f1273e0}} belonging to a particular topic {{formula:64274233-87bd-4c57-a81c-4afe57d2f57d}}.
Similar to the absolute impact, we refine the equation to compute the relative impact of LCSD topic categories as follows:
{{formula:7c1927c6-b58f-49e4-94f6-585921ebb0fb}}
Here {{formula:7b71ae05-ba61-42da-a02b-fc05dd737d18}} denotes one of our five topic categories; the sum is taken over the topics that belong to that category.
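Continuing the sketch above, the relative impact divides each topic's monthly absolute impact by the number of posts in that month, so the shares of all topics within a month sum to one:

\begin{verbatim}
# Number of posts created in each month.
posts_per_month = posts.groupby("month").size()

# Relative impact: each topic's share of that month's discussion.
relative = absolute.div(posts_per_month, axis=0)
assert (relative.sum(axis=1).round(6) == 1.0).all()
\end{verbatim}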
Result
{{figure:d04c8b61-ec21-4b3b-8652-0a750a6a699e}}Figure REF depicts the progression of overall LCSD-related discussion, computed with the absolute impact equation, over our extracted dataset between 2008 and 2021. It demonstrates that LCSD-related discussion peaked in mid-2020 (i.e., around 400 questions per month) and reveals that such discussion has been gaining popularity since 2012.
Below, we provide a more extensive explanation of the minor spikes in Figure REF.
{{figure:e6eb9dc6-bd48-4ed0-bf0f-8fd81e3efedc}}{{figure:da16df04-45d5-4451-a9d1-0eb1b2e2f5ff}}We observe that in the early days (i.e., 2009), the Platform Adoption and Data Storage topic categories had more questions than Customization. The Customization topic category starts to dominate Platform Adoption from the middle (i.e., August) of 2011, and it remains the dominant topic until the end of 2021. The number of questions in the Customization topic category gradually increased over time, from August 2011 (23) to March 2012 (81) to May 2020 (99). The Data Storage topic category briefly exceeds the Customization category during August 2013, and otherwise mostly holds a dominant position over the Platform Adoption topic category. On the other hand, Platform Maintenance and Third-Party Integration exhibit a very similar evolution over the whole period.
We further look into Figure REF and see two significant upticks in the number of posts about LCSD. The first is between August 2011 and May 2012, when there is a sharp increase in the number of questions for almost all topic categories, especially Customization and Data Storage. By this time, discussion related to the Salesforce{{cite:6d7818a420cf1824e285ad5917f3b2de7e5b4ec9}} LCSD platform had become quite popular in SO, and around that time it ranked very high as a CRM platform. The second increase in posts is between February 2020 and August 2020. During this period of the Covid-19 pandemic, many businesses began to operate online, and there is a significant uptick in the number of questions in the Customization category, followed by Data Storage and Platform Adoption.
Moreover, there is an uptick in the number of questions on building simple apps to collect and report data, especially on the Salesforce platform. There is also an increase in the Platform Adoption topic category between mid-2016 and mid-2017. During this time the Oracle Apex platform released version 5.0, and there is an uptick of questions regarding its new features, such as the interactive grid in {{formula:e4668bbf-3cec-4143-94a0-02fd73d1efc3}} and drag-and-drop layout in {{formula:41bfcf1d-1975-41fb-9f0f-ae468a663bcc}}. We now provide a detailed analysis of each of the five topic categories.
Customization
This is the biggest topic category, with 11 topics. From 2008 to mid-2011, all of these topics evolve homogeneously. From mid-2011 to the first quarter of 2012, the Dynamic Page Layout topic becomes dominant, with questions such as “How to get fields in page layout” in {{formula:e18a4353-dc73-4018-900b-aef80e527008}} and issues with page layouts in different LCSD platforms (e.g., {{formula:f8836727-fd5b-44e5-9769-1b19122509dd}}). From the end of 2012 to 2017, the Window Style Manipulation topic remains the most dominant, e.g., “Passing values from child window to parent window which is on another domain?” in {{formula:ab39494d-7bc2-4cf4-8d13-86936772769e}} and view-related issues ({{formula:b53af3f2-884f-4eb7-9ee1-4683be478b5b}}). From the end of 2017 to the end of our dataset, the Dynamic Form Controller topic remains the most dominant.
Data Storage Category
From mid-2015, the Database Setup & Migration topic becomes the most dominant topic in this category, with some high spikes during the pandemic and in mid-2017. For instance, there are queries like “Using Jenkins for OCI database migration” in {{formula:cd3256ce-7714-4dd9-ba8a-af41ffb52579}} and “Almost all the cloud service providers have 99.95% of data available on the cloud. What will happen if the whole region sinks in an earthquake?” in {{formula:97714709-f9de-4518-8a97-23be3d786e36}}. Since 2017, DevOps and database questions such as “Domino Xpage database building automation or continuous integration using Jenkins with maven.” in {{formula:d91f5bde-4a2b-4cd9-9878-ed701103fbd3}} have also appeared. From mid-2011 to mid-2014, the DB Stored Procedure topic remains dominant, e.g., “Oracle APEX: Call stored procedure from javascript” in {{formula:fb1c0c1a-3837-4e25-94be-054e9208dcc1}}.
Platform Adoption Category
From 2008 to mid-2011, Platform Adoption-related topics were the most dominant (e.g., “Suggested platform/tools for rapid game development and game prototyping” in {{formula:16d193d5-94c9-4be8-8ebb-8aa26baedcde}}). Between mid-2011 and mid-2017, the Authentication & Authorization topic becomes dominant (e.g., “Can I implement my own authentication process in force.com or it is against terms of service?” {{formula:cbe5bac2-4265-4e4b-8c5d-35458dd62017}}, {{formula:e0497cdf-e2c4-4f90-a304-938b2edd778e}}). Since the end of 2017, Platform Infrastructure API remains the most dominant, with practitioners asking queries like “VirtualBox VM changes MAC address after imported to Oracle Cloud Infrastructure” in {{formula:c342b94f-9b9d-4c17-8677-fa756d1a9fe5}}, “How to send a classic report as mail body in oracle Apex 19.2” in {{formula:5c28afb9-0105-4364-a9d6-b90c75df822d}}, and questions about report layout ({{formula:07380556-6d11-49ec-a307-2dc692d06b71}}, {{formula:31972d19-1874-4f49-aebd-2bd04bcf3e7f}}).
Platform Maintenance Topic Category
From 2008 to mid-2019, the Build Configuration Management topic remains the most dominant topic, with some high spikes in the number of questions at the beginning of 2012 and in the first quarter of 2014, e.g., build errors ({{formula:569a61e7-7bee-4e55-96ce-22e135f5a3f3}}, {{formula:c6e98038-c1da-4f83-91ec-bfe8d41a777c}}) and building projects automatically ({{formula:67da9bb3-7a1e-42c7-bc4b-d43f5c1bcc95}}). From mid-2019, Library Dependency Management topic-related questions became popular, e.g., library-related issues ({{formula:5a36667a-ae61-46e9-a225-9e2d6bcc1cc5}}, {{formula:096bbb8d-735b-4438-84f3-d0a78b1a6a0a}}) and library not found ({{formula:9f18af51-d154-4fa2-917f-5880d213d86a}}).
Third-Party Integration Topic Category.
The five topics in this category evolve simultaneously. From the beginning of 2015, the External Web Request Processing topic becomes slightly more dominant than the other topics. The External Web Request Processing, Fetch & Process API Response, and eSignature topics become dominant during the pandemic, with queries such as platform support for e-signature in {{formula:81a849bb-3ecd-430f-9323-8458232ebdda}}.
In Figure REF, we now provide more insight into the evolution of LCSD topic categories. It confirms the findings presented in Figure REF and adds some previously unknown insights. For instance, in the last quarter of 2009, Data Storage is clearly the most popular topic category. According to the absolute impact metric, all five categories increase monotonically. The relative impact metric, on the other hand, indicates that the Customization, Platform Maintenance, and Third-Party Integration topic categories evolve in a nearly identical manner. However, the figure also demonstrates that, beginning in 2016, Platform Adoption-related conversation increased and eventually surpassed Data Storage-related discussion. This in-depth examination of evolution is significant because it demonstrates that, while Data Storage is the second-largest topic category, Platform Adoption-related queries are evolving rapidly and require further attention from platform vendors.
[flushleft upper,boxrule=1pt,arc=0pt,left=0pt,right=0pt,top=0pt,bottom=0pt,colback=white,after=
]
RQ2. How does the LCSD-related discussion evolve?
Since 2012, LCSD-related discussions in SO have grown in popularity, and this trend has accelerated since 2020.
Initially, the Customization and Data Storage Topic Categories dominated, but in recent years, Platform Adoption-related inquiries have grown in popularity.
What types of questions are asked across the observed topic categories? (RQ3)
Motivation
This research question aims to provide a deeper understanding of LCSD-related topics based on the types of questions asked about LCSD platforms in SO. For example, “what” types of questions denote that developers are unsure about some specific characteristics of LCSD platforms, while “how” types of questions denote that they do not know how to solve a problem using an LCSD platform. Intuitively, a prevalence of “what”-type questions would suggest that LCSD platforms need to better inform users about the services they offer, while a prevalence of “how”-type questions would suggest that LCSD platforms need better documentation so that developers can easily learn how to use them. In 2011, Treude et al. {{cite:a508ab60d30a239ff6d36884aed86d408a959879}} investigated different types of questions on Stack Overflow. Later, Rosen et al. {{cite:23e6cb31cebf09faefad37db14d8c8dadc5e5a02}} conducted an empirical study like ours on mobile developers' discussions in Stack Overflow using these four types of questions. More recently, similar studies on chatbot development {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}} and IoT developers' discussions on Stack Overflow {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}} also explored this research question to provide more insights about specific domains and complement the findings of topic modeling.
Approach
In order to understand what types of questions LCSD practitioners discuss in SO, we take a statistically significant sample from our extracted dataset and then manually analyze each question and label it as one of four types (How-type, Why-type, What-type, or Other-type), following related studies {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, {{cite:23e6cb31cebf09faefad37db14d8c8dadc5e5a02}}. Our approach is thus divided into two steps: Step 1, we generate a statistically significant sample; Step 2, we manually analyze and label the questions.
Step 1. Generate Sample. As discussed in Section REF, our final dataset has 26,763 questions. A statistically significant sample with a 95% confidence level and a confidence interval of 5 would require at least 379 random questions, while a confidence interval of 10 would require a sample size of 96 questions. A purely random sample represents the entire dataset, and thus could miss questions from the smaller topic categories. For example, as discussed in RQ1, we have 40 topics organized into five categories; as random sampling is not uniform across the topic categories, it might miss important questions from smaller topic categories such as Third-Party Integration. Therefore, following previous empirical studies {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, we draw a statistically significant random sample from each of the five topic categories, with a 95% confidence level and a confidence interval of 10 (as sketched below). The samples are: 95 questions from the Customization category (8,014 questions in total), 95 from Data Storage (6,610), 94 from Platform Adoption (5,285), 94 from Platform Maintenance (3,607), and 93 from Third-Party Integration (3,247). In summary, we sampled a total of 471 questions.
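The per-category sample sizes above are consistent with the standard sample-size calculation (Cochran's formula with a finite-population correction; z = 1.96 for a 95% confidence level and a 10-point confidence interval), as this minimal sketch shows:

\begin{verbatim}
def sample_size(population, z=1.96, interval=0.10, p=0.5):
    """Cochran's formula with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / interval ** 2   # about 96.04
    return round(n0 / (1 + (n0 - 1) / population))

totals = {"Customization": 8014, "Data Storage": 6610,
          "Platform Adoption": 5285, "Platform Maintenance": 3607,
          "Third-Party Integration": 3247}
sizes = {cat: sample_size(n) for cat, n in totals.items()}
print(sizes, sum(sizes.values()))  # 95, 95, 94, 94, 93 -> 471 in total
\end{verbatim}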
Step 2. Label Question Types. We analyze and label each question from our sample into the following four categories. The categories and the coding guide follow previous research {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, {{cite:2b3ac571c972fb05c36e0ffe136779ff6fb84ce4}}.
How-type posts contain discussions about the implementation details of a technical task {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}. These questions primarily focus on the steps required to solve certain issues or complete certain tasks (e.g., “How to create a submit button template in Oracle APEX?” in {{formula:53441584-9d74-4fea-801c-bd5727a85e87}}).
Why-type post is about troubleshooting and attempting to determine the cause/reason for a behavior. These questions help practitioners understand the problem-solving or debugging approach, e.g., in {{formula:8dffc32d-e369-4526-81a9-866543100170}}, a user is trying to find out why an SSL server certificate is rejected.
What-type question asks for more information about a particular architecture/event. The practitioners ask for more information that helps them to make informed decisions. For example, in {{formula:c6878e03-1a28-46b8-96d8-c474bebbbb36}} a practitioner is asking for detailed information about the Oracle Apex platform's secure cookies.
Other-type questions do not fall into any of the above three categories, e.g., “Initiating Salesforce API in Google App Script” in {{formula:283aaff6-3656-463d-8812-80095dd880aa}}.
Three authors (first, third, and fourth) participated together in the labeling process. We assessed our level of agreement using Cohen's kappa {{cite:89630bac3785bf8e38a20f8628aad7353b9c10eb}}. Disagreements and annotation difficulties were resolved through discussion with the first author. Overall, the authors achieved a substantial agreement ({{formula:34c53576-0c9c-4c27-a0b7-a1641cf66acd}} > 0.6) on the 471 classified questions. Our coding guidelines and the final annotated dataset are available in our replication package.
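Agreement between two annotators' label vectors can be computed directly; a minimal sketch with hypothetical labels, using scikit-learn's implementation of Cohen's kappa:

\begin{verbatim}
from sklearn.metrics import cohen_kappa_score

# Hypothetical question-type labels from two annotators on the
# same sampled posts (not the study's actual annotations).
rater_a = ["How", "How", "Why", "What", "Other", "How", "Why", "What"]
rater_b = ["How", "How", "Why", "How",  "Other", "How", "Why", "What"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"kappa = {kappa:.2f}")  # > 0.6 is conventionally "substantial"
\end{verbatim}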
Result
Table REF shows the percentage of each type of question across our five LCSD topic categories.
Similar to related studies {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, {{cite:2b3ac571c972fb05c36e0ffe136779ff6fb84ce4}}, during our labeling process we observed that some questions can have multiple labels, e.g., both What-type and How-type. For example, “How can I get jQuery post to work with a [platform] WebToLead” in {{formula:2f622fa3-8566-40c5-95b0-3e7d46aa2b25}} discusses making an Ajax request using jQuery where the practitioner is getting an error response; at the same time, the practitioner is also asking for detailed information on how Ajax requests work.
Therefore, the percentages of question types reported in this section sum to more than 100% of the 471 classified posts. We now discuss each question type with examples.
{{figure:81765d05-2d65-490f-b336-e4fe65e9fa4b}}{{table:5bed6fc5-637f-4263-9709-9e71dc482c48}}How-type. Around 57% of our annotated questions fall under this type. This type of question is most prevalent in the Third-Party Integration topic category (61%), followed by Data Storage (60%), Platform Adoption (57%), Customization (51%), and Platform Maintenance (49%). This high prevalence is not surprising, given that SO is a Q&A platform and LCSD practitioners ask many questions about how to implement certain features or debug an issue. It also signifies that LCSD practitioners ask a lot of questions while integrating third-party libraries (e.g., {{formula:b5bb28b4-6da0-4178-99f3-064ab7ba21d5}}) and plugins (e.g., {{formula:69b8a498-8c0a-40d6-8d56-66564bff32a7}}) and while managing data with a database management system (e.g., {{formula:ddb7da5b-dae6-4480-ada5-3a0bdbb259ed}}) or file storage (e.g., {{formula:12caa017-46a3-4d5c-974e-588fa78f8736}}). For example, we find questions regarding implementing encryption (e.g., {{formula:9d197d20-a040-49df-b143-b32708efcde3}}), making an HTTP POST request (e.g., {{formula:da78a275-e744-4afb-abc7-c28fc8dda714}}), debugging a script (e.g., {{formula:6aeb460a-e95e-444d-adb0-c1287872986f}}), or implementing a feature (e.g., {{formula:65b0881b-833b-4c30-9c08-9e35661c7fc0}}).
What-type. This is the second biggest question type, with 18% of all annotated questions. It is most dominant in the Platform Maintenance (20.5%), Platform Adoption (20%), and Customization (18%) topic categories. This type of question can be associated with How-type questions where practitioners require further information to implement certain features; for instance, a practitioner asks how to implement circular cascade select-lists in {{formula:8dadb02e-0046-4f99-ae65-7f881f9099ff}}. The questions in this category signify that practitioners fail to find relevant information in official documentation sources (e.g., {{formula:795d1937-4f45-43cb-8da8-2a104028fb43}}). Therefore, as this type of question is prevalent in the Platform Maintenance and Platform Adoption categories, LCSD platform providers might focus more on improving their resources. As examples, we find questions on JavaScript events not working correctly (e.g., {{formula:cdf2d0ba-3bfa-4e64-9f86-2018c54af1fd}}), rolling back changes (e.g., {{formula:429cccb4-7814-4c2e-84d3-9fc26302ad1b}}), and designing workflows (e.g., {{formula:d0aaa219-f7f3-4154-a116-7dca795f3730}}).
Why-type. This is the third most prevalent question type, with 14% of all annotated questions. It is most prevalent in the Customization, Data Storage, and Platform Maintenance topic categories, with around 17% of questions each. These questions are mostly related to troubleshooting, e.g., when LCSD practitioners implement particular features or deploy an application. For instance, “Why does this error happen in [Platform]?” in {{formula:a4c8a772-0a96-4a78-bc7c-7b8b423178b1}}, “Why isn't the document going into edit mode” in {{formula:4773f227-b766-46e3-9f02-04006407e120}}, “Not able to launch Android Hybrid app using [Platform] Mobile SDK” in {{formula:2027442d-8bc6-4a73-8ca8-e443fafea304}}, and “Java code running twice” in {{formula:ec029149-a1be-4841-a877-0a713708ea24}}.
Other-type. Around 14% of our annotated questions fall under this type, and they are almost uniformly distributed across the five topic categories. These questions concern general problems, e.g., “UTF-8 character in attachment name” in {{formula:e658c58e-4686-4437-bb8a-2ddc10ff2a3d}} or “Domino Server is installed on Unix or Windows?” in {{formula:8c34ec53-e001-405b-9a82-17d69e6507c0}}. Some questions of this type also contain multiple or ambiguous questions (e.g., {{formula:1b2df913-d300-4367-9d38-328871cfcce4}}), for example, “How to test an application?” or “Which library is better?”.
[flushleft upper,boxrule=1pt,arc=0pt,left=0pt,right=0pt,top=0pt,bottom=0pt,colback=white,after=
]
RQ3. What types of questions are asked across the observed topic categories?
How-type questions (57%) are the most prevalent across all five topic categories, followed by What-type (18%), Why-type (14%), and Other-type (12%) questions. Practitioners in the Customization and Platform Maintenance topic categories are more interested in troubleshooting (i.e., Why-type and What-type questions). Practitioners generally ask more implementation questions (i.e., How-type) in the Third-Party Integration category. Practitioners in the Data Storage topic category are interested in designing databases (i.e., How-type) and in troubleshooting (i.e., What-type, Why-type). This indicates the necessity of a more robust community for troubleshooting and debugging issues.
How are the observed topic categories discussed across SDLC phases? (RQ4)
Motivation
Having observed the prevalence and evolution of diverse LCSD topics in SO, we also found that the topics contain different types of questions. This diversity may indicate that the topics correspond to the different SDLC phases used in low-code software development (see Section for an overview of the LCSD phases). For example, What-type questions may intuitively arise from the clarification of low-code software requirements during the design phases, while questions and topics related to troubleshooting may be asked during the development, deployment, and maintenance phases. Therefore, an understanding of the SDLC phases of the LCSD questions in SO may offer an idea of the prevalence of those SDLC phases across our observed LCSD topics. This understanding may help LCSD practitioners determine how SO can be used across the various SDLC phases of low-code software development.
Approach
In order to understand the distribution of LCSD topics across agile SDLC phases, we classify a statistically significant sample of questions from our extracted dataset {{formula:79fc3884-d608-4fe4-945c-ac1ef9450054}} into one of the six agile software development methodology {{cite:4dccc426b5f417e29f1c642b903c185031426853}} phases: [(1)]
Requirement Analysis & Planning,
Application Design,
Implementation,
Testing,
Deployment, and
Maintenance. First, we generate a statistically significant sample: we reuse the same set of randomly selected posts (i.e., 471) that we produced for RQ3 (see Section REF), i.e., a statistically significant stratified random sample for each topic category in our dataset with a 95% confidence level and a confidence interval of 10, which ensures a representative sample from each topic category {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}. We then manually annotate each question with one or more SDLC phases.
We followed the same annotation strategy to label SDLC phases as we did for RQ3 (see Section REF). Each question was labeled by at least two authors (second and third/fourth) after extensive group discussion to formalize annotation guidelines and joint labeling sessions. We measured our level of agreement using Cohen's kappa {{cite:89630bac3785bf8e38a20f8628aad7353b9c10eb}}, {{cite:5efc22f5b87c23a13109ed182794e099b991c8aa}}; the authors generally achieved a substantial agreement ({{formula:b21ff5e1-bc26-41e6-933e-59cacbe384cf}} {{formula:9d47f8f0-0bd0-4e30-9d39-e36bda2adece}} 0.70). For example, a new practitioner tasked with finding the right LCSD platform during the planning stage of an LCSD application asks, “Are there any serious pitfalls to Outsystems Agile Platform?” ({{formula:76204854-58b7-41ff-8976-e63df24bd5a8}}); we thus assign the SDLC phase “Requirement Analysis & Planning”. Another question asks, “Google App Maker app not working after deploy” ({{formula:3c5ecbf0-77d4-445c-aa1b-90912b20a613}}); we label its SDLC phase as “Deployment”. For some questions, assigning the appropriate SDLC phase required significant manual assessment, e.g., distinguishing Requirement Analysis & Planning from Application Design, or Application Design from Implementation. We therefore developed a detailed annotation/coding guide to help with the manual assessment, and we constantly updated it during our study to ensure that it remained useful, with all relevant instructions. For example, one question that helped refine our annotation guide is “Can AppMaker be utilized with SQL Server?” in {{formula:04c6d945-a624-4864-8b05-3e520b38f3dd}}, where the user wants to know whether Google App Maker and SQL Server databases can be connected. According to our annotation guideline, this question could be labeled as Requirement Analysis & Planning. However, after discussion, the first and third authors agreed to label it as Application Design, because the problem description mainly focuses on connecting the application to a custom data storage; as data-source design is typically explored during the Application Design phase, we concluded that it should be labeled as such. The labeling of all questions into the precise SDLC phases was conducted by several co-authors in joint discussion sessions spanning over 80 person-hours.
{{figure:ceb7432e-05da-4de6-b2fa-98872caff769}}Results
Figure REF shows the distribution of our LCSD questions across the six agile SDLC phases. We find that the Implementation phase covers 65% of our 471 annotated questions, followed by Application Design (17%) and Requirement Analysis & Planning (9.1%). It is not surprising that the Implementation phase has so many questions, because SO is a technical Q&A platform and practitioners mostly use it to resolve issues they encounter when trying to implement some feature. Though the percentage of questions is not very high (e.g., between 2-3%), practitioners also ask questions regarding the Testing and Deployment phases (e.g., “Automated Testing for Oracle [Platform] Web Application” in {{formula:febcbfc6-f3b9-4b2c-8658-c2a497ed249a}}). This analysis highlights that LCSD practitioners ask questions ranging from the feasibility analysis of a feature, through design decisions, to its implementation and deployment.
We provide an overview of the types of questions asked during these six SDLC phases.
Requirement Analysis & Planning (43, 9.1%).
Requirement analysis is the first and one of the most important stages of software development, because the application largely depends on it. Requirement analysis is the process of establishing what users need from the software. In agile software development methodology, features are implemented incrementally, and requirement and feasibility analysis are crucial when implementing a new feature. During this phase, operational factors such as feasibility, time-frame, potential complexity, and reliability are considered. Requirement management tools are typically included with LCSD systems, allowing developers to collect data, modify checklists, and import user stories into sprint plans. Throughout this stage, developers tend to ask questions regarding the platform's features (e.g., “Does Mendix generates a source code in any particular language, which can be edited and reused?” in {{formula:fbf97b52-9839-4abf-91aa-0592b4eb1853}}), its learning curve (e.g., {{formula:d9d427b9-93c7-4ef2-b832-531856182a35}}, {{formula:c817264f-19c9-4b38-9952-83e25fb9b8ee}}), the LCSD platform's support for faster application development (e.g., {{formula:ea5af0d0-9f90-417f-93f5-c0f2e480c562}}), and general deployment/maintenance support (e.g., {{formula:ca59bb9d-b5b7-42d9-a640-6491a440b348}}), in order to select the best platform for their needs.
For example, in one popular question, a new practitioner asks about potential pitfalls of a particular LCSD platform, e.g., “Are there any serious pitfalls to [Platform] Agile Platform?” ({{formula:4e6277f6-b0ea-4345-ae67-d1b24147882f}}). A developer from that platform provider suggests using the platform to build an application and deciding for oneself, as it is hard to define what someone might consider a pitfall. In another question, a practitioner asks whether it is possible to integrate Selenium with an LCSD platform (e.g., {{formula:f96af897-5ef8-4475-9052-531b4673f045}}).
Application Design (80, 17%).
The design specification is created in this step based on the application's requirements. The application architecture (e.g., {{formula:b8f6c9ea-da64-4a15-85c1-370a66350f95}}), modularity, and extensibility are all reviewed and approved by all critical stakeholders.
LCSD developers face challenges regarding data storage design, drag-and-drop UI design, connecting on-premise data sources with the LCSD platform (e.g., “Can AppMaker be used with SQL Server” in {{formula:57951e56-e35a-42aa-adaa-9347adff1bb1}}), data migration to the LCSD platform ({{formula:fb9c7001-15ce-4b10-8ba0-db1d97b4910a}}), following best practices (e.g., “Salesforce Best Practice To Minimize Data Storage Size” in {{formula:a8b79a11-2d0e-40b4-8d69-cc53f10d1771}}), and designing responsive web pages (e.g., {{formula:02193fa7-ee57-4823-b0fc-2615934584d2}}).
Implementation (306, 65%).
The actual application development begins in this phase. LCSD developers confront a variety of obstacles when they try to customize the application (i.e., personalize the UI (e.g., {{formula:f3d33db3-2237-4e93-ac36-d55be241934b}}) or implement business logic (e.g., {{formula:5cf0634e-161d-4b74-9e11-b8a4db21ec9e}})), integrate third-party plugins (e.g., {{formula:3b143562-25c6-4e1d-91e3-78a927868aca}}), debug (e.g., {{formula:79786be0-beb1-416f-9a21-c99b965eed9f}}), and test the implemented functionality. For example, LCSD practitioners ask customization questions such as how to change the timezone in a platform in {{formula:f5775667-a970-46f0-aaf9-d4498f1abf87}} or how to customize the UI in {{formula:4ff29cde-88e3-419a-a635-e4d17803cc98}}. Many of these challenges arise from incomplete or incorrect documentation. In {{formula:4cef02bd-db78-41ab-959a-829926603264}}, an LCSD developer asks for sample code to convert a web page to a PDF; the official documentation is not sufficient for entry-level practitioners.
Testing (13, 2.7%). LCSD testing differs from standard software testing in some fundamental ways. In LCSD, many features are implemented through graphical interfaces, and they are provided and tested by the LCSD platform providers. As a result, unit testing is less important than in traditional software development. In the LCSD approach, practitioners face difficulties due to the lack of documentation on testing in LCSD platforms (e.g., “How to bypass login for unit-testing [Platform]?” in {{formula:3d28c4a6-e400-42ae-b67d-7939bd112e83}}), test coverage (e.g., {{formula:0d569753-0ae0-4b77-8b84-0d91de348d17}}, {{formula:a1a58227-db8f-468f-bf62-f0bdfa4c932d}}), automated testing (e.g., “[Platform] 20.1 automated testing” {{formula:5215c467-140a-451f-a830-0184185b5ce9}}), testing browser compatibility (e.g., {{formula:0946dd07-46c3-4ce3-ad2e-51ec615d37db}}), and troubleshooting errors while running tests (e.g., {{formula:75ab8f8a-a9d9-4368-944e-8b756f30cbb1}}).
Deployment (16, 3.3%). In this phase, the features of the application are deployed for the targeted users. One of the goals of LCSD is to handle many of the complexities of the deployment and maintenance phases. Many LCSD platform providers offer advanced application life-cycle management tools to deploy and maintain the staging (i.e., testing) and production servers (e.g., {{formula:3c1ade7e-1e6c-4c85-af79-a6e3dba2471e}}). However, LCSD practitioners still face many challenges regarding deployment configuration issues ({{formula:8e785086-dc50-4319-9690-20658bc0016d}}), domain name configuration (e.g., DNS configuration ({{formula:cdb5a7a9-9e70-403b-ab7c-2b9de2567cee}}) and SSL configuration ({{formula:3ac55ae2-89dd-44a6-b361-0406dfa7776c}})), and accessibility issues such as with public URLs ({{formula:91c8db68-b815-4036-a4e1-fcf948426701}}, {{formula:55e1d6df-c34a-41be-8e40-1b25445ff98d}}).
For example, in one post, a practitioner has deployment issues (e.g., “[Platform] app not working after deployment” ({{formula:433fd41c-bf54-42f5-9193-eb1796400888}})). A community member provides a detailed description of how to accomplish this in the answer, highlighting the lack of official documentation for such a critical use-case. There are also a few questions concerning delivering an app with a custom URL or domain name (for example, “How to make friendly custom URL for deployed app” in {{formula:e6a21f94-e8a3-460f-84b6-0e03f0358bb1}}); this scenario was challenging because the platform did not have native support for it.
Maintenance (13, 2.8%).
In this phase, the LCSD application is deployed and requires ongoing maintenance. The development life cycle then continues in an agile (i.e., incremental) fashion, because previously undiscovered issues are reported and users request new features. LCSD practitioners face problems at this phase such as event monitoring (e.g., {{formula:6db4c169-8404-4e62-9bd6-b0b6b234fcc2}}), collaboration and developer role management (e.g., “Role based hierarchy in report access” in {{formula:68fffd0c-14e3-4782-b807-7669c590e030}} or {{formula:91ec86e7-3d0e-4a77-a8f3-354e50fb4469}}), application reuse (e.g., {{formula:5e2a57bb-a811-4579-b165-2a87520ff312}}), and application versioning, e.g., “Do I have the latest version of an [Platform] component?” in {{formula:498d3b91-b607-4c26-8308-b6b3636056a9}} or {{formula:151f01b9-2dd3-47cc-8119-1c8650081f50}}.
{{table:31735aa0-7cf0-4d7d-914f-7a2bc6d30883}}Topic Categories in different SDLC phases.
We find that, for all five topic categories, LCSD practitioners need community support from planning to debugging to deployment (e.g., “How does one deploy after building on [platform]” in {{formula:c8f340fc-de12-4825-acdf-f7adc46173d9}}). We report how LCSD topics and the different types of questions are distributed across the six SDLC phases. Table REF shows the distribution of SDLC phases for each topic category. Our analysis shows that, for the Customization topic category, most questions are asked during the Implementation (75%) and Design (18%) phases. The most dominant SDLC phase, Implementation, is most prevalent in Customization (75%), Data Storage (74%), and Third-Party Integration (73%). The Requirement Analysis phase is dominant in the Platform Adoption (18%) and Platform Maintenance (12%) topic categories, where practitioners ask questions like “Disadvantages of the [platform]” in {{formula:365320af-7870-4d1f-ad61-b01775c8c953}}. Similarly, questions in the Platform Maintenance topic category are also prevalent in the Testing (11%), Deployment (13%), and Maintenance (6%) SDLC phases.
{{table:89de2b71-d61a-4fe9-a35b-1e7334364013}}Types of questions in different SDLC phases.
We report the distribution of question types across SDLC phases in Table REF. It shows that, for the Requirement Analysis & Planning phase, most questions (35%) are What-type. This signifies that in this phase practitioners make inquiries about feature details (e.g., {{formula:d05e5eab-2c04-4cef-b742-9d9e10ce5503}}). In the Application Design, Implementation, and Testing phases, most questions are How-type, i.e., practitioners ask how they can implement a particular feature (e.g., {{formula:afd8a2a8-db1c-4c9f-9aad-08bc00281880}}) or test it (e.g., {{formula:e62ff6c5-e0c1-4317-9ebf-82a3fb958118}}). In the Deployment phase, the most prominent type is Why-type (38%), followed by How-type (31%). We see a similar pattern for the Maintenance phase, where the most significant question type is How-type (46%), followed by Why-type (31%). This pattern arises because, in the Deployment and Maintenance phases, most questions concern server configuration errors (e.g., {{formula:15ddbc0e-7be6-4a01-8251-57701822144e}}), and practitioners inquire how they can set up specific server settings (e.g., {{formula:8eca21c7-37aa-4915-a0f3-3c9dffc8b189}}). Similarly, we find that What-type questions are more prevalent during the Requirement Analysis and Deployment phases.
[flushleft upper,boxrule=1pt,arc=0pt,left=0pt,right=0pt,top=0pt,bottom=0pt,colback=white,after=
]
RQ4. How are the observed topic categories discussed across SDLC phases?
Among six agile SDLC phases, the Implementation phase is the most prevalent (65% questions), followed by Application Design (17%), Requirement Analysis & Planning (9.1%), Deployment (3.3%), Maintenance (2.8%) and Testing (2.7%).
The Implementation phase is the most prevalent in all five topic categories and all four question types. During the Requirement Analysis, Testing, and Deployment phases, the Platform Adoption and Platform Maintenance topic categories are more dominant. The How-type question is most popular in the Application Design phase, the What-type question is prevalent in the Requirement Analysis & Planning phase, and the Why-type question is prevalent in the Deployment and Requirement Analysis phases.
What LCSD topics are the most difficult to get an accepted answer? (RQ5)
Motivation
After reviewing LCSD-related topics and discussions across the agile SDLC stages, we discovered that LCSD practitioners encounter both generic software development problems and challenges specific to LCSD platforms (e.g., Platform Adoption, Platform Maintenance). Some posts come up repeatedly, and some have a lot of community participation (i.e., answers, comments, up-votes). As a result, not all topics and SDLC phases are equally difficult to get a solution for. A thorough examination of the complexity and popularity of the practitioners' conversations can yield valuable information about how to prioritize research and community support. For example, LCSD platform providers and academics can take the required measures to make the architecture, design, features, and tools of LCSD platforms more usable for practitioners, particularly newcomers.
Approach
We compute the difficulty of getting an accepted answer for a group of questions using the following two metrics over the questions in that group: [(1)]
Percentage of questions without an accepted answer,
Average median time needed to get an accepted answer.
Similarly, we use the following three popularity metrics to calculate the popularity of a topic in the SO community: [(1)]
Average number of views,
Average number of favorites (i.e., the number of users who marked each question as a favorite),
Average score.
These five metrics are standard features of a SO question, and many related studies {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:5efc22f5b87c23a13109ed182794e099b991c8aa}}, {{cite:7a39f3cd9d6b11d64a88a30a4d1994c362b64965}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, {{cite:c461e7f25c505314ff6e3a69cb196b976c971a3b}} have used them to analyze the popularity of a question and the difficulty of getting a solution for it. In SO, one question can have multiple answers, and the user who posted the question has the option of marking one as accepted. Hence, the accepted answer is considered correct or of sound quality, and the absence of an accepted answer may indicate that the user did not find a helpful, appropriate answer. The quality of the question (i.e., of the problem description) might be one reason for not getting an acceptable answer; however, the SO community collaboratively edits and improves posts. Therefore, the lack of an accepted answer most likely indicates that the SO community finds those questions challenging to answer. The success and usefulness of a crowd-sourced platform such as SO depend on community members quickly providing relevant, helpful, and correct information. In SO, the median time to get an answer is only around 21 minutes {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, but a complicated or domain-specific question may require additional time to receive an accepted answer.
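A minimal sketch of how the five per-question metrics could be aggregated per topic (column names and values are illustrative, not the actual SO schema):

\begin{verbatim}
import pandas as pd

# One row per SO question, with its assigned topic and standard fields.
questions = pd.DataFrame({
    "topic":           ["Message Queue", "Message Queue",
                        "SQL CRUD", "SQL CRUD"],
    "view_count":      [1500, 900, 2400, 2100],
    "favorite_count":  [1, 0, 2, 1],
    "score":           [3, 1, 5, 2],
    "has_accepted":    [False, True, True, True],
    "hours_to_accept": [None, 21.0, 0.4, 0.6],
})
by_topic = questions.groupby("topic")

# Three popularity metrics: average views, favorites, and score.
popularity = by_topic[["view_count", "favorite_count", "score"]].mean()

# Two difficulty metrics: share of questions without an accepted
# answer and the median time (hours) until an answer is accepted.
difficulty = pd.DataFrame({
    "pct_no_accepted": 100 * (1 - by_topic["has_accepted"].mean()),
    "median_hours_to_accept": by_topic["hours_to_accept"].median(),
})
\end{verbatim}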
It can be non-trivial to assess the popularity and difficulty of getting an accepted answer for the topics using multiple metrics. We thus compute two fused metrics following related works {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}. We describe the two fused metrics below.
Fused Popularity Metrics. First, we compute the popularity metrics for each of the 40 LCSD topics. However, the average view counts can be in the range of hundreds, while the average scores and average favorite counts lie between 0 and 3. Therefore, following a related study {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, we normalize each metric by dividing it by the average of that metric's values across all the groups (e.g., {{formula:5b105a23-f3c2-4dfd-8a66-c57462031021}} = 40 for topics). Thus, we create three new normalized popularity metrics for each topic. For example, the normalized metrics for a group {{formula:1254ac37-9b40-48b0-b1cc-ec42abee1b63}} out of all {{formula:7b984677-f812-4d23-b7d7-6f39a3ced258}} groups can be {{formula:1fe9c546-fb14-425a-9b9f-239c0c50d7a9}}, {{formula:66f96a95-ce70-4a34-8575-26536aaea1ea}}, and {{formula:16bd8120-2b6a-4527-b9b9-43d2c5f1d4d0}} (e.g., {{formula:aede6558-a832-41c8-8bf3-2ae4c05b3001}} = 40 for LCSD topics). Finally, we calculate the fused popularity {{formula:ea695156-70d3-48b1-b7c8-6de1807cf6ba}} of a group {{formula:293951d8-01eb-4217-bbda-ae31a61f9087}} by taking the average of the three normalized metric values.
{{formula:1e246ed6-95f5-48f8-a16c-46cd6f19a907}}
{{formula:fa98b0a8-d74c-4eeb-82e2-28a3daaf2fd5}}
Fused Difficulty Metrics. Similar to the popularity metrics, we first compute the difficulty metrics for each topic. Then we normalize the metric values by dividing them by the average of the metric values across all groups (e.g., 40 for LCSD topics). Thus, we create two new normalized metrics for a given topic {{formula:64405930-a07c-43a1-a62d-a3cc6a29cd29}}. Finally, we calculate the fused difficulty metric {{formula:3abf4db7-32aa-4376-87b6-70cb708b9d03}} of topic {{formula:c4030a0f-fa40-4891-b25a-340638a7fac4}} by taking the average of the normalized metric values.
{{formula:4a163da7-2c91-4089-ba12-39fa7709f8c4}}
{{formula:6abbb9ba-73ff-4573-8c82-e2e4fa23c88c}}
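Both fused metrics apply the same operation (each metric is divided by its cross-group average, then the normalized values are averaged per group), so one helper covers both equations; continuing the sketch above:

\begin{verbatim}
def fuse(metrics):
    """Normalize each metric column by its average over all groups,
    then average the normalized columns per group."""
    return (metrics / metrics.mean(axis=0)).mean(axis=1)

fused_popularity = fuse(popularity)   # FusedP per topic
fused_difficulty = fuse(difficulty)   # FusedD per topic
\end{verbatim}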
In addition, we aim to determine the correlation between the difficulty and the popularity of the topics. We use the Kendall Tau correlation measure {{cite:223d2c7836c103aa73e439fa1dc67ec6150588ec}} to find the correlation between topic popularity and topic difficulty; unlike the Mann-Whitney measure {{cite:153ac88ef864201ef4bea21fd672d7bf7e3f6e71}}, it is not susceptible to outliers in the data. We cannot report the evolution of popularity and difficulty for these topics, because SO does not provide time-series data for all metrics such as view count and score. However, as LCSD-related topics show increasing trends in recent times, our analysis is valid for recent times.
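The correlation test then operates on the two fused per-topic series; a sketch with SciPy, reusing the values from the previous sketch:

\begin{verbatim}
from scipy.stats import kendalltau

# Kendall's tau is rank-based, hence robust to outliers in the
# raw metric values.
tau, p_value = kendalltau(fused_popularity, fused_difficulty)
print(f"tau = {tau:.2f}, p = {p_value:.3f}")
# p > 0.05 means no statistically significant association at the
# 95% confidence level.
\end{verbatim}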
Results
{{figure:9378cd70-5daf-4e67-b6de-7e4581d424e4}}{{table:fef38392-ef77-4f8c-9068-3cec68d6595f}}{{table:9d9b9475-a16a-4db8-9a48-5cf343f17678}}{{table:eea94d9d-b4ee-470a-b542-46464356fad6}}In Figure REF we present an overview of the five high-level topic categories and their popularity and difficulty of getting an accepted answer. In the figure, the bubble size represents the number of questions in each category. The figure shows that Platform Adoption is the most popular and the most challenging topic category for getting an accepted answer, followed by Customization, Data Storage, Platform Maintenance, and Third-Party Integration. We can also see that three topic categories, Platform Maintenance, Data Storage, and Customization, are very similar in terms of difficulty of getting a solution. From our analysis, we find that practitioners consider the Third-Party Integration topic category relatively less difficult, because many questions in this category are also relevant to traditional software development (e.g., integrating Google Maps in {{formula:96e862b8-c8c9-4837-b9e3-6913eaa2648c}} and {{formula:5058989a-adb9-4442-b9bb-d5951530caa7}}) and are thus easier to get community support for. Conversely, we find that questions in the Platform Adoption topic category are quite specific to particular LCSD platforms, and thus sometimes have less community support for finding an acceptable answer quickly.
Topic Popularity. For each of the 40 topics, Table REF shows three popularity metrics: the average number of (1) views, (2) favorites, and (3) scores. It also contains the combined popularity metric (i.e., FusedP), which is based on the above three metrics and computed using Equation REF. In the table, the topics are presented in descending order of the FusedP metric.
The Platform Related Query topic from the Platform Adoption category has the highest FusedP score; it also has the highest average favorite count (0.90) and the highest average score (2.60), and accounts for 1.7% of total questions. This topic contains discussion about the features of different LCSD platforms and about software development methodologies such as Agile and RAD.
The topic Message Queue under the Platform Adoption category has the second highest FusedP value. This topic is about different asynchronous service-to-service data exchange mechanisms, such as using a message queue, and generally contains discussions about popular micro-service design patterns.
The topic Dynamic Page Layout under the Customization category is the third most popular topic, and it has the highest average view count (2447.2). The posts under this topic discuss UI (i.e., page) customization, and hiding or moving elements based on some user action or event (e.g., disabling a button for a dynamic action in {{formula:3e7a3fa4-3005-4057-8cbd-4c94b5d0c78a}}). The eSignature topic from Third-Party Integration is the least popular, with only 1.15% of total questions and a FusedP value of 0.52; it also has the lowest favorite and score counts. It contains discussion about different issues and customization for the electronic signature of documents, e.g., using DocuSign to collect a user's agreement/permission for sales or account opening. This topic is neither very popular nor easy to get an accepted answer for, because the requirement is not general: not all low-code applications require it.
Topic Difficulty. In Table REF we present the two difficulty metrics over all the questions in a topic: (1) the percentage of questions without accepted answers, and (2) the median hours to get an accepted answer. Similar to topic popularity, we also report the combined topic difficulty metric (i.e., FusedD), computed using Equation REF and the above two difficulty metrics. The topics in Table REF are presented in descending order of the FusedD value.
The topic Message Queue under the Platform Adoption category is the most difficult topic in which to get an accepted answer in terms of FusedD value, with the highest median time to get an accepted answer (21 hours). This topic contains discussion about general micro-service architecture (i.e., producers and consumers) as well as LCSD platform-specific support for these architectures. Notably, this topic is also the second most popular.
The Library Dependency Mngmt topic from Platform Maintenance is the second most difficult topic in which to get an accepted answer; around 70% of its questions have no accepted answer. This topic concerns different troubleshooting issues around the libraries and dependencies of the system, server configuration, and version compatibility between libraries.
The Web-Service Communication topic from Platform Adoption is the third most difficult topic, with a long median wait time (around 20 hours) to get an accepted answer. This topic contains discussions about service-to-service communication via the Web Service Description Language, HTTP REST messages, and Windows Communication Foundation.
The topics that contain discussion about general software development (not specific to LCSD platforms) are the least difficult topics in which to get an accepted answer. For example, the topic SQL CRUD under the Data Storage category is the least difficult topic in terms of FusedD value (0.5). It contains database CRUD-related queries as well as advanced queries, such as inner joins, nested joins, and aggregates, and also contains discussion about object query languages, which are high-level wrappers over SQL. The topics SQL CRUD and SQL Syntax Error from the Data Storage category are two of the least difficult topics in terms of median hours to get an accepted answer, while Pattern Matching and SQL CRUD are two of the least difficult in terms of questions without accepted answers.
Alternatively, topics that are specific to LCSD platforms are the most difficult. Four of the five most difficult topics belong to the Platform Adoption category. These questions can be popular as well as difficult. For example, the Third-Party Integration topic eSignature, the least popular topic in Table REF, is the most difficult topic in terms of questions without accepted answers (71%). The topic Platform Related Query is mid-range in terms of difficulty but the most popular.
Correlation between Topic Difficulty and Popularity.
Here we explore whether there is any positive or negative relationship between topic popularity and difficulty. For example, Message Queue is the most difficult topic in which to get an accepted answer and, at the same time, the second most popular, in terms of the FusedD and FusedP metrics, while Platform Related Query is the most popular but only mid-range in difficulty.
Table REF shows six correlation measures between the topic difficulty and popularity values of Tables REF and REF. Three of the six correlation coefficients are negative and three are positive, and none of them is statistically significant at the 95% confidence level. Therefore, we cannot say that the most popular topics are the least difficult to get an accepted answer for, or vice versa. Nonetheless, LCSD platform providers could use this insight to take the necessary steps: the most popular topics should have easily accessible answers (i.e., be the least difficult).
[flushleft upper,boxrule=1pt,arc=0pt,left=0pt,right=0pt,top=0pt,bottom=0pt,colback=white,after=
]
RQ5. What LCSD topics are the most difficult to get an accepted answer?
Platform Adoption is the most popular and most challenging topic category, followed by Customization, Data Storage, Platform Maintenance, and Third-Party Integration. We also find that LCSD practitioners find the Deployment and Maintenance phases the most popular and difficult, and the Testing phase the least difficult, in which to get an accepted answer. This indicates that LCSD platform providers should provide additional support to enable low-code practitioners to understand and utilize their platforms' features.
Discussions
During our analysis, we observed that several LCSD platforms are more popular across the topics than other platforms. We analyze our findings of LCSD topics across the top 10 most prevalent LCSD platforms in the dataset (Section REF ).
Finally, we discuss the implications of our study findings in Section REF .
Issues with unaccepted answers or posts with negative scores.
In this paper, we used only questions and accepted answers for topic modeling. We also did not consider posts with negative scores, because of the following observations: [(1)]
Many other similar empirical studies on topic modeling of SO posts also considered only the questions and accepted answers, e.g., IoT developers' discussions in SO {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, big data-related discussions {{cite:7a39f3cd9d6b11d64a88a30a4d1994c362b64965}}, concurrency-related topics {{cite:c461e7f25c505314ff6e3a69cb196b976c971a3b}}, and mobile app development {{cite:23e6cb31cebf09faefad37db14d8c8dadc5e5a02}}.
A significant number of studies {{cite:6192f20ec2bc40c25b008df891df57a8adec8d77}}, {{cite:0e1cc7c13508512b654c60ba7d245b409a4414dd}}, {{cite:a998e8911948ba65016702714403427542fcf9eb}}, {{cite:72a05c2843ccb2921da5fae74c61e02844373906}} report that the quality of questions and unaccepted answers in SO is questionable, and it is therefore quite standard practice for SE researchers to consider only the accepted answers in SO. For example, in {{formula:98d6395c-b604-466b-a500-47c0acef7949}} (Fig REF) a user asks a question about Python code/packages to connect to Salesforce and retrieve data. The accepted answer {{formula:c5081112-fe88-41e2-bb46-169722ddf1f5}} provides a relevant Python code snippet, while the unaccepted answer {{formula:6503db5e-fbf2-4372-b35b-1b040da10c3d}} provides a resource link for a command-line tool, which may be relevant but is not exactly what the user asked for.
Negatively scored questions are typically incorrectly tagged (e.g., {{formula:c39ebf5b-039e-49b8-9357-5df772e4a591}}, {{formula:c81229a7-9a67-4a27-a68b-b4184844c928}}, {{formula:74dcbc45-348f-46e0-95d8-d597645ce252}}), duplicates (e.g., {{formula:92bde1ea-e2f8-47d3-a703-083e2f085d1e}}, {{formula:e9e424c4-f03e-47d6-ab6a-ededa6112e39}}), lacking a detailed problem description (e.g., {{formula:f12d220b-d6c7-4cd2-ba69-c96b31623805}}, {{formula:99993cf1-3609-48d5-9f87-38dd2a5cb672}}, {{formula:a6eda256-0dc0-47ee-99db-da0848f3b09f}}), or lacking correct formatting (e.g., {{formula:441e2aa6-07f1-4a16-a2a5-2adda86a194b}}). For instance, in {{formula:c761d86f-99f4-419a-842e-ffb682fc61a5}} (Fig REF) a user inquires about an error encountered when attempting to contact a Zoho API, but crucial information such as an issue code or error message is missing from the question description. In {{formula:b541e922-9aa2-4efb-bec5-864dc7e8b931}}, an inexperienced user inadvertently tagged a question about the Oracle Apex platform with the Salesforce tag.
We therefore chose not to include questions with a negative score or unaccepted answers. We discuss the insights potentially missed because of this choice in the threats to validity section (Section ).
{{figure:5044defc-eddf-42b7-9668-2716c0918a9a}}Discontinued low-code platforms and future trends.
From our analysis in Section REF, we see the evolution of LCSD platforms, especially from 2012. According to our data, some low-code platforms have been discontinued, but they are usually soon replaced by new low-code/no-code services. For example, in January 2020, Google announced the discontinuation of Google App Maker {{cite:5710a5f497f045e0cf6d8066c49ebd88b0a980c6}} by 2021 {{cite:d3629ec019c11d4ca02409e3b546e627ddc7dde3}}. Shortly thereafter, however, Google announced a “no-code” platform called “AppSheet” {{cite:2be45e343677e9f9b5ea1b381773f02b72625a2c}} and promoted its fully managed serverless platform AppEngine {{cite:42369d231f03b4356f31c312ef4c04baf74f5d7b}} for creating web applications with a low-code approach. Microsoft and Amazon are also competing for superior low-code/no-code platforms, with the emergence of new low-code service platforms such as Microsoft Power FX {{cite:cd06b292909527d008aa0eeba0384376b7dc8f69}}, Amazon Honeycode {{cite:410a4da63bdd14588069678ae5ab1fc6f09d2566}}, and AWS Amplify Studio {{cite:7610ab4c47f23d13f9c8cb1522fa596e8a6da8d4}}. The low-code approach is attracting increasing interest from traditional businesses, particularly during the pandemic {{cite:0ee35c939ca30c15d7fb7f6bace8d6658188278a}}.
LDA Parameter Analysis.
In this study, we applied LDA topic modeling, which employs the Dirichlet distribution, to identify practitioners' discussions on low-code. As described in detail in Section REF, we followed standard practice to configure the parameters and hyperparameters, and we also followed standard recommendations to manually annotate the topics, as described in Section REF, in order to avoid sub-optimal solutions {{cite:bc5ffd9d283e3f11b43f22c3f97cb6f8411ccce8}}. Following similar studies {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, {{cite:5af5a7a50ccbc56c7d5c035af10ba28901721eb6}}, we use the coherence score of each model for different values of {{formula:33033cb2-d36f-49a1-b6bf-005f653be3e5}}. However, LDA itself is probabilistic in nature {{cite:1c4ea0ab3d3a884aa91b75249babd467a4d6d3ec}} and can produce different results across different runs on the same low-code dataset. In order to mitigate this problem, we run our LDA model three times and compare the optimal number of topics, as sketched below. Fig. REF shows the coherence scores for the different values of {{formula:9422aced-7ca4-46af-83aa-1ac08c9b9ae4}}; after reaching the highest coherence value at {{formula:a6c6f649-3f88-42d7-af69-3cb644c86673}} = 45, the overall coherence score decreases as the value of {{formula:87404750-f0b0-4e57-96d0-775f2c6383f5}} increases.
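A minimal gensim sketch of this coherence-based model selection, with a toy corpus standing in for the preprocessed SO posts (all variable contents are illustrative):

\begin{verbatim}
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

# Toy tokenized posts standing in for the preprocessed SO corpus.
texts = [["page", "layout", "button"], ["database", "query", "join"],
         ["deploy", "server", "error"], ["page", "button", "style"],
         ["query", "database", "index"], ["server", "deploy", "config"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# Train LDA for several candidate topic counts and compare coherence;
# repeating the sweep with different seeds guards against LDA's
# non-determinism across runs.
for seed in (1, 2, 3):
    for k in (2, 3, 4):
        lda = LdaModel(corpus, num_topics=k, id2word=dictionary,
                       random_state=seed, passes=20)
        score = CoherenceModel(model=lda, texts=texts,
                               dictionary=dictionary,
                               coherence="c_v").get_coherence()
        print(seed, k, round(score, 3))
\end{verbatim}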
{{figure:ca1a4c83-b6fb-4789-9341-0dc577ff4f27}}
The Prevalence & Evolution of Top Ten LCSD platforms
Our analysis of the evolution of topic categories (see Section REF ) shows that there is an overall increase in the number of new questions across the topics in SO. Our SO dataset is created by taking into account the LCSD platforms. In Figure REF , we show how the 10 LCSD platforms evolve in our SO dataset over the past decade based on the number of new questions. Salesforce {{cite:6d7818a420cf1824e285ad5917f3b2de7e5b4ec9}} is the biggest and one of the oldest LCSD platforms (released in 1999) in our dataset, with around 30% of all questions, followed by Lotus Software {{cite:e917b428805bf34ca10d6bbfa868897d28802143}}, Oracle Apex {{cite:2c56729e555920459ba13765fd17006c7d5d2cef}}, and Microsoft Powerapps {{cite:653eb6615e944d07c2bb218311b15d7b6563d419}}. Among these platforms, IBM Lotus Software was quite popular around 2014 but gradually lost its popularity, and IBM finally sold it in 2018. The Salesforce platform has been the most popular platform in terms of SO discussions since 2012. The figure shows that the other three platforms, especially Microsoft Powerapps, gained considerable attention during the pandemic, i.e., from early 2020.
{{figure:4510f8c8-b88a-462c-a1ab-1f3b7edd98ab}}{{figure:f663ae80-a911-405c-89ca-7cb66e62c665}}We provide more context for these platforms in Figure REF by illustrating the distribution of our observed five topic categories across the top ten LCSD platforms. We can see that Powerapps has the largest number of queries in the Customization and Platform Adoption categories. This happens because Powerapps is a relatively new LCSD platform (released in 2016) that is gaining more and more attention from the community, leading to more queries on topics such as business logic implementation (i.e., {{formula:ca0ca0cb-aeed-4637-b4cd-147c3f17dba5}}), connecting Powerapps to a database ({{formula:4b9d8138-fc08-4d5c-8974-de35c47d1b6f}}), and user permissions (i.e., {{formula:8402fa7a-4c6d-4570-90e2-824ad5d12c43}}). We can also see that older platforms such as Salesforce and Oracle Apex have more queries in the Platform Maintenance and Third-Party Integration topic categories. Practitioners ask many different questions regarding these platforms, such as deployment-related questions (e.g., “Deploying a salesforce.com flex app without visualforce” in {{formula:0558afb3-d3f6-457d-8c70-2c9d4ade1463}}), third-party API integration (e.g., “Google Map Integrated with Salesforce is Blank” in {{formula:716b22e0-dec0-49ff-b612-9891388490f4}}), maintenance deployment (e.g., “Salesforce deployment error because of test class failure” in {{formula:25c250dd-aba1-4bf4-b26d-57421c83d68b}}), interactive reports ({{formula:2fa8d334-c72e-44a7-8ea5-27dcda4eb2ea}}), customization with JSON ({{formula:b2433871-ac77-4d13-b80d-37512d6eeabd}}), “what is dashboard?” ({{formula:2a3e16a3-df28-4e5f-9459-731f979ca99f}}), and how to use the Oracle Apex platform ({{formula:5b50da9c-4a1f-492a-9e03-5c05c470f535}}). Platform Adoption is a prevalent topic category in the Powerapps, ServiceNow, and Tibco platforms. We also notice that the Data Storage category is quite popular in Filemaker and Lotus Software. Interestingly, for the Zoho Creator LCSD platform, around 60% of questions belong to Third-Party API Integration (especially email configuration, {{formula:681fd0f8-1835-4c8d-b25f-74816cf891f5}}). This data sheds light on the popular discussion topics of more recent and earlier LCSD platforms.
{{figure:6b0958c6-e8a8-4e29-a0e2-c2a59d7bf15a}}
The Case of “aggregating” data across multiple LCSD platforms.
In this study, we analysed 38 LCSD platforms, and these platforms have distinct characteristics and challenges. Our goal is to offer researchers and practitioners a comprehensive overview of the LCSD domain as a whole, as opposed to focusing on a single LCSD platform. Hence, we integrated the data from all of these platforms. For instance, Fig. REF demonstrates that some of the most popular platforms, such as Salesforce, Oracle Apex, and Microsoft Powerapps, have more questions in SO than other LCSD platforms. Fig. REF demonstrates that the distribution of questions across these platforms over different topic categories differs slightly. However, Fig. REF shows that the proportions of questions in the Application Customization and Platform Maintenance topic categories for the top ten platforms vs. the others remain about the same, at around 30% and 13% respectively. Popular platforms have more questions related to Third-Party API Integration (15%) than others (4%). The top ten platforms have relatively fewer questions in the Data Storage (23% vs 29%) and Platform Adoption (19% vs 24%) categories compared to other platforms. Overall, we found that the observed topics occur across all the platforms that we studied. Given the popularity of some platforms over others, it is understandable that those platforms are discussed more and, as such, can have more coverage (in terms of the number of questions) in a topic than other platforms. However, the prevalence of all platforms across each topic shows that the topics are generally well-represented across the platforms.
{{figure:1b03d261-5920-440f-89f3-5fb3ed506ded}}The popularity vs. difficulty bubble chart for 40 LCSD-related topics.
{{table:951a67d3-c926-424c-89f9-f4a5d79b63e2}}
Implications
In Table REF , we summarize the core findings of our study and provide recommendations for each finding. The findings from our study can guide the following three stakeholders:
LCSD platform Providers to improve the documentation, deployment, and maintenance support,
LCSD Practitioners/Developers to gain a better understanding of the trade-offs between rapid development and customization constraints,
Community of LCSD Researchers & Educators to have a deeper understanding of the significant challenges facing the broader research area to make software development more accessible.
We discuss the implications below.
In this empirical study, we infer implications and recommendations based on our observation of practitioners' discussions in SO, so further validation through developer surveys could provide more insight. However, the diversity of low-code platforms and topics makes it non-trivial to design a proper survey with a representative sample of LCSD practitioners. Therefore, our findings can be used to design multiple LCSD-related surveys focusing on different low-code topics and platforms.
LCSD Platform Vendors.
In order to better understand the issues of LCSD, we present bubble charts of the difficulty and popularity of different aspects of LCSD, such as topic categories in Figure REF , types of questions in Figure REF , and agile SDLC phases in Figure REF . These findings, coupled with the evolution of LCSD platforms (REF ) and discussions (REF ), show that Customization and Data Storage related queries are more prevalent, with the majority of these queries occurring during the Implementation SDLC stage. However, one of our interesting findings is that Platform Adoption related queries are increasing in popularity. LCSD practitioners find platform infrastructure and server configuration-related queries difficult, and such queries are popular during the Deployment and Maintenance phases. The top five most challenging topics belong to the Platform Adoption and Maintenance topic categories.
Many new practitioners ask questions regarding LCSD platforms, learning resources, basic application and UI customization, and how to get started with this new emerging technology. Figure REF shows that the Platform Related Query topic is the most popular among LCSD practitioners. We find that Documentation related queries are both top-rated and challenging. Our findings also suggest that many practitioners still face challenges during testing, especially with third-party testing tools like JUnit (in {{formula:6e3dba20-e7df-41c5-af6c-0d018b6b2060}}) and troubleshooting. Consequently, many of the questions on this topic remain unanswered. This reveals that, to ensure smooth adoption of LCSD platforms, providers should offer better and more effective documentation and learning resources to reduce entry-level barriers and smooth out the learning curve.
LCSD Practitioners/Developers.
Gartner {{cite:e6c99c10fd96716a3cd5fd76c096a169f7fce5ed}} estimates that by 2022, more than half of all organizations will adopt LCSD to some extent. Additionally, our analysis reveals a rising trend for LCSD approaches, particularly during the Covid-19 pandemic (Fig. REF ). We can also see that new LCSD platforms such as Microsoft Powerapps are gaining many developers' attention. LCSD platforms enable practitioners with diverse experience to contribute to the development process even without a software development background. However, our findings show that practitioners find debugging, application accessibility, and documentation challenging. Hence, practitioners should take the necessary steps to deeply understand the trade-offs of LCSD platforms' features. Project managers should adopt specific strategies to customize, debug, and test the application. For example, many practitioners struggle with general Third-Party API integration and with database design and queries. We find that DevOps-related tasks such as CI/CD, server configuration, and monitoring are the most challenging for practitioners, so a well-functioning LCSD team should allocate time and resources to them. This provides valuable insights for project managers to better manage resources (i.e., human resources and development time).
{{figure:bdf164dd-8273-4be9-9c09-587286a47e87}}Figure REF shows that Maintenance is the most popular development phase, followed by Deployment, while Testing is the least popular SDLC phase. Similarly, the figure also shows that questions asked in the Deployment phase are the most difficult, followed by Maintenance. The Implementation, Requirement analysis and planning, and Application design phases are in the middle of the popularity and difficulty spectrum. Thus, our analysis indicates that LCSD practitioners face broader and more complex application maintenance and deployment-related challenges, on which LCSD platform vendors should concentrate their efforts. This finding can influence the decision-making process of LCSD developers and practitioners, such as prioritizing their efforts during the design, development, and deployment of software that uses LCSD platforms. For example, if sufficient support or tools are not available for the scalable usage and deployment of an LCSD platform, developers may look for alternatives with better deployment and maintenance support.
One fundamental shortcoming of LCSD platforms is that their abstraction and feature limitations can make customization and debugging extremely difficult.
Additionally, managed cloud platforms make data management and deployability more challenging {{cite:128b4624c598a87cbe733d32d37ce5dfc1accfea}}, {{cite:e1e1464fce5e7fb5d5f8ad61a9cdaab2b93137eb}}. The findings in this study help to present some strengths and limitations of the overall LCSD paradigm, which complements the findings of other studies {{cite:128b4624c598a87cbe733d32d37ce5dfc1accfea}}, {{cite:3cb9c25bdb57e3cac16e497bfd8916072da7fe53}}, {{cite:e1e1464fce5e7fb5d5f8ad61a9cdaab2b93137eb}}, {{cite:9f0f7adeecd64b62d5eb8bed755e122f8d77f566}}, {{cite:8059f48ebc79691c4d3904c8e2d347ec7fd9a305}}, {{cite:50674ce956ee09b987df811493e2d113f6623cd7}}. The analysis could assist LCSD teams in selecting the appropriate LCSD platforms, which is critical for future success.
LCSD Researchers & Educators.
The findings of this study have many implications for researchers and educators of LCSD platforms and for the broader research community aiming to improve the software development process.
We discover that What-type and How-type questions are popular among LCSD practitioners. Practitioners also find them challenging because of the lack of adequate, usable documentation. Thus, practitioners ask questions about certain limits or how to implement certain features, and in the accepted answer, another user simply points to the official documentation page (e.g., “Domino Data Service API Documentation” in {{formula:43626027-36b5-4fbf-a1fb-caf5eafb09a7}} and {{formula:e10f7721-fe28-41d5-b101-e1bcbdef68a2}}). Many of the challenges faced by low-code practitioners are similar to those of traditional software developers. So, researchers from the broader software engineering domain can contribute to improving aspects such as documentation {{cite:2c14211cb82dd0db87eb28534f4a7706dd24ef76}}, {{cite:13264e052a5d29931727709f1ed3f9b8cc5af37c}}, {{cite:5f9e6f6d8f782e62fcb8136b75bc2f0bd0bc381f}} and API description usage {{cite:a4463c8f0b0ca7940bb962dbf556f3cd0ab9555b}}, {{cite:fd9dbb015cb5d2a31d0f7d7567e5d28e8125b55b}}, and make the technology more accessible to general practitioners. In the Customization and Data Storage topic categories, we find practitioners asking for help with generic programming queries, database design, and file management; research on those topics will also help the adoption of LCSD. Some LCSD platforms provide great in-built support for unit and functional testing; however, we find that around 2.1% of questions belong to the Testing topic. Most of these LCSD platforms heavily rely on cloud computing, and thus research on improving server configuration and library management, i.e., DevOps {{cite:fbe367477b2a2bf5a63550187e9a0450ca28912a}} in general, will lead to better platforms. On the other hand, educators can focus their efforts on making learning resources on automatic testing, server configuration, and DevOps practices such as CI/CD more accessible to citizen developers.
{{figure:e2fd6b7d-c349-4026-8d41-46de4f46556e}}Figure REF shows that What-type posts are the most popular, followed by Why-type, How-type, and Others-type. Additionally, it demonstrates that the most challenging question type is Why-type, followed by What-type, How-type, and Others. So, although How-type questions are common, the detail-oriented types (i.e., Why-type, What-type) are more popular and more challenging. This analysis implies that LCSD practitioners have a harder time finding detailed information regarding different platform features. As a result, LCSD platform providers should improve their documentation.
Intuitively, How-type questions can be answered with better documentation for LCSD platforms. Given that official API documentation can often be incomplete and obsolete {{cite:b80b023a482b939076e3d7c99dcfdc9d872179ab}}, {{cite:e65759e2f5179aa63314151c6903678a8b6da58b}}, and given that our research shows that LCSD developers use SO to ask questions about various topics, LCSD researchers can develop techniques and tools to automatically improve the documentation of LCSD platforms by analyzing SO questions and answers. Indeed, existing research efforts show that solutions posted in SO can be used to improve API documentation by supporting diverse development tasks and programming languages {{cite:f72354ab09023d564c69e7fdf49008d70b5cdeef}}, {{cite:bb700f46ce70f40ce080545263a760dba83cdd4c}}, {{cite:c16e9c9ba693f0a168c83d1d8ad634e11faf52b6}}, {{cite:094e9ca4eb3e2dd968d62e0fefa8b67804bbb7cd}}, {{cite:18c41f50d32e00af929a2e87a25cf89c15775fc8}}, {{cite:a5e378ad787a7f739163eb2a48e7c253f89b1107}}, {{cite:10c76ea876914be40f13144a1447618a5c575665}}.
We find that the LCSD paradigm's challenges can differ from those of traditional software development {{cite:128b4624c598a87cbe733d32d37ce5dfc1accfea}}. At the same time, researchers can study how to provide better tools for practitioners to customize their applications. Security is an open research opportunity for such platforms, as a security vulnerability in such platforms or frameworks could compromise millions of applications and users {{cite:097bfae6d7f64c1aefaf9c1e3067af5a6d40a6a2}}. Researchers can develop better testing approaches to ensure faster development and dependability. Educators can also benefit from the results presented in Tables REF and REF and Figure REF to prioritize their focus on different topics such as Library Dependency Mngmt, Web-Service Communication, Asynchronous Batch Jobs, Testing, and Dynamic Form Controller.
Threats to Validity
Internal validity threats in our study relate to the authors' bias while conducting the analysis, as we manually labeled the topics. We mitigate the bias in our manual labeling of topics, types of questions, and LCSD phases by consulting the labels among multiple authors and resolving any conflicts via discussion. Four of the authors actively participated in the labeling process. The first author reviewed the final labels and refined them by consulting with the second author.
Construct Validity threats
relate to errors that may occur during data collection, such as identifying relevant LCSD tags. To mitigate this, we created our initial list of tags, as stated in Section , by analyzing the posts in SO related to the leading LCSD platforms. We then expanded our tag list using state-of-the-art approaches {{cite:7a39f3cd9d6b11d64a88a30a4d1994c362b64965}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, {{cite:c461e7f25c505314ff6e3a69cb196b976c971a3b}}, {{cite:2b3ac571c972fb05c36e0ffe136779ff6fb84ce4}}. Another potential threat is the topic modeling technique, where we choose {{formula:ee8fccdc-325e-47e2-95b9-33345afecb15}} = 45 as the optimal number of topics for our dataset {{formula:187dbd37-4d3b-402e-aa24-9b04a97aab7a}}. This optimal number of topics has a direct impact on the output of LDA. We experimented with different values of {{formula:4124daca-07a6-4a4d-b7b9-d3eb7c12cc81}} following related works {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, {{cite:7a39f3cd9d6b11d64a88a30a4d1994c362b64965}}. We used the coherence score and manual examination to find the optimal value of {{formula:e66feac8-7fc7-4223-9139-b302c65253e0}} that gives us the most relevant and generalizable low-code related topics {{cite:5efc22f5b87c23a13109ed182794e099b991c8aa}}, {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}.
External Validity threats relate to the generalizability of our findings. Our study is based on data from developers' discussions on SO; however, there are other forums LCSD developers may use for discussion. We only considered questions and accepted answers in our topic modeling. We also had the option of choosing the best answer: in SO, the accepted answer and the best answer may differ, as the accepted answer is the one approved by the questioner while the best answer is the one voted for by all viewers. As discussed in Section REF , it is quite difficult to detect whether an answer is relevant to the question. Thus, we chose the accepted answer in this study because we believe that the questioner is the best judge of whether the answer solves the problem. Even without the unaccepted answers, our dataset contains around 38K posts (27K questions + 11K accepted answers). This also conforms with previous works {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, {{cite:5efc22f5b87c23a13109ed182794e099b991c8aa}}, {{cite:72a05c2843ccb2921da5fae74c61e02844373906}}. Some novice practitioners post duplicate questions, assign incorrect tags, and provide inadequate descriptions, which receive an overall negative score from the community. To ensure that topics contain relevant discussions of high quality, we only use posts with non-negative scores. Nevertheless, we believe using SO's data provides us with generalizability because SO is a widely used Q&A platform for developers. However, we also believe this study can be complemented by including the best answers to questions in SO, as discussed earlier, as well as by including discussions from other forums and by surveying and interviewing low-code developers.
Related Work
We previously published a paper at MSR 2021 based on an empirical study of LCSD topics in SO (see {{cite:5efc22f5b87c23a13109ed182794e099b991c8aa}}). We compared the findings of this paper against our previous paper in Section . Other related work can broadly be divided into two categories: SE (Software Engineering) research on/using
low code software development (Section REF ), and
topic modeling (Section REF ).
Research on Low Code Software Development and Methodologies
LCSD is a new technology, with only a handful of research papers published in this field. Some research has been conducted on the potential applications of this developing technology in various software applications {{cite:bb6ce3c9f3fe3d40449ffe506236ceec20a3967c}} or for automating business processes in manufacturing {{cite:9f0f7adeecd64b62d5eb8bed755e122f8d77f566}}, healthcare {{cite:50674ce956ee09b987df811493e2d113f6623cd7}}, {{cite:3488a4ed957b7eaf509d1f89a215cd8c3ef4c085}}, digital transformation {{cite:b3dde23748c0a49b155026555195263a632a6da3}}, industrial engineering education {{cite:8059f48ebc79691c4d3904c8e2d347ec7fd9a305}}, and IoT systems {{cite:702a3791759aef5c4d355e98af88e014a654f996}}.
Sipio et al. {{cite:61cc1b4fed910b316a9b79d2d973c3805bf59994}} present the benefits and future potential of LCSD by sharing their experience of building a custom recommendation system on an LCSD platform. Kourouklidis et al. {{cite:10e77d79be68f0a18afd1c725e4bb9833088cb2f}} discuss a low-code solution for monitoring the performance of machine learning models. Sahay et al. {{cite:128b4624c598a87cbe733d32d37ce5dfc1accfea}} survey LCDPs and compare them based on their helpful features and functionalities. Khorram et al. {{cite:0cd3c3204679e3eceb7a45b1b4ecbd2dd9a1af5b}} analyse commercial LCSD platforms and present a list of features and testing challenges. Zhuang et al. {{cite:52d23a81cdf98331a2b61982871472253447d52c}} created a low-code platform called EasyFL with which researchers and educators can easily build systems for privacy-preserving distributed learning. Ihirwe et al. {{cite:ef2e1a5480e1ec2f9461d83902a8edc29764d521}} analyse 16 LCSD platforms and identify the IoT application-related features and services each platform provides. Each of these studies examines a single LCSD platform and its support and limitations for various sorts of applications {{cite:604c6d30fadade307bac02a99763ed04f80d26e8}}, rather than taking a holistic view of the difficulties that the broader community faces.
There are also some studies in which researchers propose techniques to improve LCSD platforms, such as Overeem et al. {{cite:12351d826486bae39a9f84aa76ec5874649c09d9}} on impact analysis for LCSD platforms and Jacinto et al. {{cite:32d5bc94597cf093856be26d363d56ac118b2317}} on improving testing for LCSD platforms.
Additionally, there are some studies that describe the difficulties faced by LCSD practitioners; the main research methodology and objectives of these studies, however, are significantly different from ours. Yajing et al. {{cite:e1e1464fce5e7fb5d5f8ad61a9cdaab2b93137eb}} analyse the characteristics of LCSD platforms, including the programming languages used, major implementation units, supporting technologies, applications being developed, and domains, along with their benefits, limitations, and challenges, by collecting relevant posts from SO and Reddit. In this study, we use a tag-based approach to find relevant LCSD-related posts, which is much more reliable than text-based searching. Furthermore, the set of SO discussions used in this study is significantly larger, and our research objectives regarding LCSD platform challenges are quite different. Timothy et al. {{cite:f7ba7b57e2caca3ade2bc53b76a8d9d132c348c0}} discuss experiences with several low-code platforms and provide recommendations focusing on low-code platforms enabling scaling, understandability, documentability, testability, vendor-independence, and the overall user experience for developers as end-users who do some development. Danial et al. {{cite:5be1888ea89bcf5bd9c7e7c271f867443ea00f0d}} and ALSAADI et al. {{cite:3cb9c25bdb57e3cac16e497bfd8916072da7fe53}} surveyed factors hindering the widespread adoption of LCSD by interviewing LCSD developers or conducting surveys. To the best of our knowledge, ours is the first empirical study of LCSD platforms based on developers' discussions from Stack Overflow, and hence our findings complement those of other studies.
Topic Modeling in Software Engineering
Our motivation to use topic modeling to understand LCSD discussions stems from
existing research in software engineering that shows that topics generated from
textual contents can be a good approximation of the underlying
themes {{cite:f575bb9694fb8a12c4175b5ea1c26794a6f928e6}}, {{cite:1958393d5c50d07569e95e70deff7e39eac60c88}}, {{cite:e40484b2ab47b4be902746947264af2e51adf102}}.
Topic models have recently been used to understand software
logging {{cite:2d8f7ad890f21f3a4253e76e03e5c9cc357eac61}} and previously for
diverse other tasks, such as concept and feature
location {{cite:f192c8180f1d7212a03795343f2b97a9462ed7a3}}, {{cite:5e54c160a6e188c042eb828c03107cde60a64bcf}},
traceability linking (e.g., bug) {{cite:d49d38acafc67c882ea46bdfbc1b6f16f867668c}}, {{cite:a9fb9af75613196cda3b3f941c8aa39f8a616129}},
to understand software and source code history
evolution {{cite:db42b229319165d953e205d7e8f984c67474cd05}}, {{cite:c995dda6ad97efdad2e15453075feaf9ae35cc81}}, {{cite:f3d3edc869623b2b2a462505d8b39d507e181f9c}}, to facilitate code search by categorizing
software {{cite:8fe65a6adad488f55b46497eaeffca174f87f7fd}}, to refactor software code
base {{cite:f091f45b1921c0f78d38d8a1147a273d9820cbcc}}, as well as to explain software
defect {{cite:8ce213295dc00a4c7ec5c67133e5a169797b06a0}}, and various software maintenance
tasks {{cite:c7bd596a95a087518dde7abc2f2c1915331d5034}}, {{cite:1958393d5c50d07569e95e70deff7e39eac60c88}}.
SO posts have been the subject of several studies on various aspects
of software development using topic modeling, such as what developers are
discussing in general {{cite:78e53662af92f8607d5716c883686c6a6837425c}} or about a
particular aspect, e.g., concurrency {{cite:c461e7f25c505314ff6e3a69cb196b976c971a3b}}, big
data {{cite:7a39f3cd9d6b11d64a88a30a4d1994c362b64965}}, chatbot {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}.
{{table:9af6d85a-f4c6-434f-831b-b3b80e07138d}}{{table:d8d971bd-a47d-40ca-8d92-211d9ceaccab}}In particular, SO posts have been used in various studies where researchers analysed topics for particular domains. For instance, SO posts have been used to study developers' challenges in IoT {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, big data {{cite:7a39f3cd9d6b11d64a88a30a4d1994c362b64965}}, chatbots {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, and so on. The distributions and the nature of these posts differ. As SO is arguably the most popular public forum for developers, the analysis of these domains' characteristics may help us understand the SO community better. Therefore, a systematic comparison across these domains is interesting. Following related studies {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, we use six metrics in this study:
Total #posts,
Avg views,
Avg favorite,
Avg score,
Percentage of questions without accepted answers,
Median hours to get accepted answers per domain.
The first four metrics are popularity metrics and the last two are difficulty metrics; a small computation sketch follows below.
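As a rough illustration (not the original analysis scripts), the six metrics could be computed from a table of questions as follows; all column names are hypothetical placeholders for the SO data dump schema.

```python
# Hedged sketch: popularity/difficulty metrics for one domain's questions.
import pandas as pd

def domain_metrics(q: pd.DataFrame) -> dict:
    no_accepted = q["AcceptedAnswerId"].isna()
    hours_to_accept = (pd.to_datetime(q["AcceptedDate"])
                       - pd.to_datetime(q["CreationDate"])).dt.total_seconds() / 3600
    return {
        "total_posts": len(q),                        # popularity
        "avg_views": q["ViewCount"].mean(),           # popularity
        "avg_favorite": q["FavoriteCount"].mean(),    # popularity
        "avg_score": q["Score"].mean(),               # popularity
        "pct_no_accepted": 100 * no_accepted.mean(),  # difficulty
        "median_hours_to_accept": hours_to_accept[~no_accepted].median(),  # difficulty
    }
```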
In this study, we do not replicate the findings of the original studies on our dataset; rather, we only report the findings from those studies. Following related work {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, we compared our LCSD-related discussions with six other domains: IoT {{cite:4a7f905f21eccfb597e3cd75a7faae24e69de24c}}, big data {{cite:7a39f3cd9d6b11d64a88a30a4d1994c362b64965}}, security {{cite:86c3f499b72dfe721b4c63bdeae18feb80b883f2}}, mobile apps {{cite:23e6cb31cebf09faefad37db14d8c8dadc5e5a02}}, chatbots {{cite:efe39c32e4516876a2146e317820ebf657aa61b8}}, and concurrency {{cite:c461e7f25c505314ff6e3a69cb196b976c971a3b}}.
Table REF provides an overview of the six metrics. We can see that the LCSD domain has a greater number of SO posts than the chatbot domain but fewer than the remaining domains. There are two other studies, on Blockchain {{cite:f6198ab0dd27f8de602969f8f4956f87914e5782}} and deep learning {{cite:5af5a7a50ccbc56c7d5c035af10ba28901721eb6}}, where the total numbers of posts are 32,375 and 25,887, respectively. However, these two studies did not report the other metrics, so they are excluded from the table. Although the LCSD-related discussion may have fewer posts than these other domains, as discussed in RQ3, this number is increasing rapidly.
We can also observe that the LCSD domain shows similarities with the IoT and Concurrency domains in terms of Avg. View count. The Security and Mobile domains seem the most popular in terms of Avg. Favourite count, and LCSD ranks lower on this metric. The LCSD domain most resembles the IoT domain in terms of Avg. View, Avg. Favourite, and Avg. Score. In terms of the difficulty metric percentage of posts without accepted answers, the LCSD domain ranks lowest, which is good. However, it takes much longer to get accepted answers (0.7 hours for Mobile, 2.9 for IoT). Only the chatbot domain requires more time to get accepted answers (i.e., 5.3 hours in LCSD vs 14.8 hours in chatbot).
We further discuss the LCSD, IoT, and Chatbot domains with respect to the distribution of different types of questions in Table REF . We find that the LCSD domain lies between the IoT and Chatbot domains in terms of How-type questions (57%), compared to IoT (47%) and chatbot (62%). This signifies that LCSD domain practitioners ask more implementation-related questions. The Why-type question percentage of the LCSD domain is the lowest (14%), compared with IoT (20%) and chatbot (25%). This suggests that practitioners in the LCSD domain enquire about relatively modest troubleshooting issues. For What-type questions, we notice that the IoT domain dominates with 38% of questions, compared to the LCSD domain's 12%; practitioners of LCSD are less inquisitive about domain architecture and technologies compared to the IoT domain. As a result of these analyses, we can see that LCSD domain practitioners exhibit several traits that distinguish them from practitioners in other domains, which LCSD vendors and educators should take into account.
Conclusions
Low Code Software Development (LCSD) is a novel paradigm for developing software applications utilizing visual programming with minimal hand-coding. We present an empirical study that provides insights into the types of discussions low-code developers have on Stack Overflow (SO). We find 40 low-code topics in our dataset of 33.7K SO posts (questions + accepted answers). We collected these posts based on 64 SO tags belonging to 38 popular LCSD platforms. We categorize the topics into five high-level groups, namely Application Customization (30% Questions, 11 Topics), Data Storage (25% Questions, 9 Topics), Platform Adoption (20% Questions, 9 Topics), Platform Maintenance (14% Questions, 6 Topics), and Third-Party Integration (12% Questions, 5 Topics). We find that the Platform Adoption topic category has gained popularity recently; the Platform Related Query and Message Queue topics from this category are the most popular. On the other hand, we also find that practitioners consider Platform Adoption and Maintenance related topics the most challenging. How-type questions are the most common, but our research reveals that practitioners find What-type and Why-type questions more difficult. Despite extensive support for testing, deployment, and maintenance, our analysis shows that server configuration, data migration, and system module upgrading-related queries are widespread and complex for LCSD practitioners. Our analysis finds that better tutorial-based documentation can help solve many of these problems. We also find that during the Covid-19 pandemic, LCSD platforms were very popular with developers, especially when it came to dynamic form-based applications. We hope that all of these findings will help various LCSD stakeholders (e.g., LCSD platform vendors, practitioners, SE researchers) to take the necessary actions to address the various LCSD challenges. Since the growth indicates that this technology is likely to be widely adopted by various companies for their internal and customer-facing applications, platform providers should address the prevailing developers' challenges. Our future work will focus on
getting developers' feedback on our study findings based on surveys and developer interviews, and
developing tools and techniques to automatically address the challenges in the LCSD platforms that we observed.
| r | c613f4f7ed774aa6ef3f3393cbb9e16f |
We have also shown that both the 1 μm and 2 μm technologies can
realize nearly identical low frequency sensitivities for CE2. While this is
true for high frequencies as well, achieving the specified quantum and thermal
noise performance for both technologies requires further research and
development not discussed in this paper. Additionally, if the arm length of
Cosmic Explorer were significantly shortened, other fundamental noises would
disparately impact the two technologies. Coating Brownian noise scales with
arm length {{formula:6cec67b2-0151-462a-b7ea-f19223b8ff7e}} like {{formula:11858ba9-c467-4b6d-bb36-1c6705495e97}} {{cite:727cc0124fc4f21395d49e6f570f0f5a6918be78}}; because this noise
is already higher in the 1 μm technology than the 2 μm technology
at {{formula:1ecbe368-9a51-46fb-b6da-e643aa521f42}} , the latter technology may become clearly advantageous if
the Cosmic Explorer facility is shrunk.
| d | 546382d858a46f105db7093348d76479 |
Equation (REF ) can be obtained by lightcone bootstrap techniques, which we review and apply to this case in section . In section , we derive a Lorentzian inversion formula for the defect OPE (REF ), analogous to the one obtained in {{cite:031c9dc64a5acd667d331449b0c9dc0ebeadfee6}} for the four-point function of local operators – see also the recent derivation in {{cite:bfcec0152e134188b924bd3675ad3426bacc4110}}.
The formula yields scaling dimensions of defect operators and bulk-to-defect OPE coefficients as analytic functions of the transverse spin, and thus implies the existence of trajectories in the {{formula:27c2b6d1-fad5-4afc-8a42-c242d04fa613}} space. It also resums the results that could be obtained by a systematic analysis of lightcone expansion, as done in {{cite:5da0826486d2d186f3644d9db1a95bc332f61332}} for the case without defects.
However, (REF ), and thus the analytic constraint, is valid only for {{formula:997116ae-0668-4472-b24b-f9e7725ee525}} larger than a certain minimum {{formula:a26f4f93-2b96-4a2b-ba6b-540ca319b6ff}}. Unfortunately, contrary to {{cite:031c9dc64a5acd667d331449b0c9dc0ebeadfee6}}, we are not able to prove a theory-independent upper bound on {{formula:a3557598-cf5d-4bd7-9c77-5ac45a7ca6c2}}. It therefore remains unknown whether there is a universal value of the transverse spin beyond which the spectrum of any defect CFT is constrained by analyticity.
| i | 0983aaa7aa86ed0ac6dbc7b6890ca265 |
On the D4RL benchmarks, we compare RvS learning to (i) value-based methods and (ii) prior supervised learning methods and behavioral cloning baselines. For (i), we include CQL {{cite:30b13a809a96167e739d459b1de259ddb84acdd5}} as well as the more recent TD3+BC {{cite:1537e591a995fb60635aa1815a204b5fff039c0b}} and Onestep RL {{cite:936d10e7c145167becc883468215da641da39b8d}} methods. TD3+BC and Onestep RL both involve elements of behavior cloning.
We use CQL-p to denote the CQL numbers published in {{cite:396cc8128ee6da5a1893cf318d76cf9697dfb5f8}} and CQL-r to denote our best attempt to replicate these results using open-source code and hyperparameters from the CQL authors.
For (ii), we include behavioral cloning (BC), which does not perform any conditioning; Filtered (“Filt.”) BC, a baseline appearing in {{cite:76460c463b8021644a34e9b1c0b6ed5f90c629cb}} which performs BC after filtering for the trajectories with the highest cumulative reward; and Decision Transformer (DT) {{cite:76460c463b8021644a34e9b1c0b6ed5f90c629cb}}, which conditions on rewards and uses a large Transformer sequence model.
Both BC and Filtered BC are our own optimized implementations {{cite:9e0c5e861eb8f7d7c14f8c65839eff0d2cfe8b5c}}, tuned thoroughly in a similar manner as RvS-R and RvS-G. In Kitchen and Gym locomotion, Filtered BC clones the top 10% of trajectories based on cumulative reward. In AntMaze, Filtered BC clones all successful trajectories (those with a reward of 1) and ignores the failing trajectories (those with a reward of 0).
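The filtering step of Filtered BC described above is simple enough to sketch; the trajectory format below is an assumption, not the benchmarked implementation.

```python
# Hedged sketch: select the trajectories Filtered BC clones.
import numpy as np

def filter_top_fraction(trajectories, frac=0.10):
    # Kitchen / Gym locomotion: keep the top `frac` by cumulative reward.
    returns = np.array([t["rewards"].sum() for t in trajectories])
    cutoff = np.quantile(returns, 1.0 - frac)
    return [t for t, r in zip(trajectories, returns) if r >= cutoff]

def filter_successful(trajectories):
    # AntMaze: keep trajectories that receive the success reward of 1.
    return [t for t in trajectories if (t["rewards"] == 1).any()]
```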
On the GCSL benchmarks, we compare to GCSL using numbers reported by the authors {{cite:9ebfdbdef77d6e0e4311e6af0a8be9a0fadca518}}. This gives the GCSL baseline the advantage of online data, whereas RvS uses only offline data. We do not run RvS-G on the Gym locomotion tasks, which are typically expressed as reward-maximization tasks, not goal-reaching tasks.
For additional details about the baselines, see Appendix .
| m | 2fb74fa6a9b8a8884dd2bb8531d7c0a4 |
Background. Expected and almost sure convergence results have been extensively studied for convex optimization; see, e.g., {{cite:c577a317faaf9674f2f30573c4b799e9876224b2}}, {{cite:4e5f076758ef38c14183512fc2a36e4fb763d66a}}, {{cite:0fdc6a78caef527261b98359e8e459757ef0a8ef}}, {{cite:765d1e53f57bc986c1f5f2b7c8cca35bd08773d1}}, {{cite:9f389b683d7345830de8069c6844ffea3c48d6a0}}, {{cite:b499d648ba02300e687082df9108e56d831496f4}}. Almost sure convergence of {{formula:8304980f-a620-41c6-813d-3e65f90bb8e6}} for minimizing a smooth nonconvex function {{formula:50bf9a20-a105-4ccd-9cf1-8469ad8e2183}} was provided in the seminal work {{cite:316a94aa5ab604796577815d411d7ff33f9bd664}} using very standard assumptions, i.e., Lipschitz continuous {{formula:46430388-42eb-4d24-a3f3-6cb561041ec8}} and bounded variance. Under the same conditions, the same almost sure convergence of {{formula:f413b196-8c80-4f13-9394-ca46b707d709}} was established in {{cite:8d3c0acc05d05a3d939509e79823a0ccc5446ff8}} based on a much simpler argument than that of {{cite:316a94aa5ab604796577815d411d7ff33f9bd664}}. A weaker `{{formula:bf113ff3-dd16-4b0a-b3b6-a54eafba8f3c}} '-type almost sure convergence result for {{formula:90d45d77-363a-4c41-a9ef-c7bd1245b7ef}} with AdaGrad step sizes was shown in {{cite:b321cd1191040d2b09e2b712b0c831d4c8672ed4}}. Recently, the work {{cite:d6c8822b427a05018c4de3715bd5d1c0b62d38aa}} derives almost sure convergence of {{formula:db59dfc7-c6bf-4520-93b3-9c7baded9dee}} under the assumptions that {{formula:979562a4-c721-4e87-b103-f8f1f0a32d54}} and {{formula:bb5bc435-185f-43eb-b3da-affb6b28ea2e}} are Lipschitz continuous, {{formula:434ac374-3d75-42cd-898b-53c0cad1dde2}} and {{formula:ec1d1464-7ddd-42ea-bdb2-fdb8a4048e0c}} have bounded sub-level sets, and the second moment of the stochastic error is bounded. This result relies on stronger assumptions than the basic results in {{cite:316a94aa5ab604796577815d411d7ff33f9bd664}}.
In terms of expected convergence, the work {{cite:1e212abc725b5d616e8c0b12d1f4150d3c69ae81}} showed {{formula:62480749-5b6e-46e0-bc58-41e5a9cce8e4}} under the additional assumptions that {{formula:64819dba-9218-4dfc-8d21-aa1dd86145fd}} is twice continuously differentiable and the product of the Hessian and the gradient {{formula:a8b557fa-9aed-468b-9801-050e990602c0}} is Lipschitz continuous.
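For context, results of this type are classically stated under the Robbins–Monro step-size conditions (a standard fact, not specific to any single work cited here):
\[
\sum_{t=1}^{\infty} \eta_t = \infty, \qquad \sum_{t=1}^{\infty} \eta_t^{2} < \infty,
\]
under which, together with a Lipschitz-continuous gradient and bounded variance of the stochastic gradients, SGD satisfies \(\|\nabla f(x_t)\| \rightarrow 0\) almost surely.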
| i | 7812ed8a0463c7e16291ab641cb475ad |
The third finding regards the information-based acquisition functions. Through the range of different simulations, PI and EI failed to find good results for the Ackley function. The reason for this is the large area of the parameter space where all response values are identical and thus the response surface is flat. As discussed in section II., improvement-based acquisition functions propose a point that is most likely to improve upon the previous best point. With a flat function like the Ackley function it is likely that all of the initial starting points fall into the flat area (especially in higher dimensions). The posterior mean of the Gaussian process will then be very flat and will predict that the underlying objective function is flat. This leads to a very flat acquisition function, as most input points will have a small likelihood of improving upon the previous best points. Gramacy et al. {{cite:1c7ebf38e5a1e91098b1ce67693461167f81c654}} mention this problem when optimising a flat EI acquisition function. This results in the evaluation of points that are not optimal, which is especially problematic when the shape of the objective function is not known and flat regions cannot be ruled out a priori. If such properties are possible in a particular applied problem, the results suggest that using a different acquisition function, such as an optimistic policy, achieves better results.
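To make the pathology concrete, the sketch below evaluates the standard closed form of EI (here for minimization) on a flat posterior; the plateau value and the predictive standard deviation are illustrative assumptions.

```python
# Hedged sketch: EI collapses to a near-constant on a flat GP posterior.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, eps=1e-12):
    sigma = np.maximum(sigma, eps)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

mu = np.full(5, 20.0)                 # flat posterior mean on the plateau
ei = expected_improvement(mu, sigma=np.full(5, 1e-3), f_best=20.0)
print(ei)  # ~4e-4 at every candidate: no informative maximizer
```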
| d | b85f21d63ad0d9537d880bc0faa9f640 |
In the more general setting of Markov chain Monte Carlo (MCMC) algorithms
{{cite:243730d7f975bb7e304f322a6456dc6e2847614b}}, further results characterise
the improvement brought by Rao–Blackwellization. Let us briefly recall that the concept behind
MCMC is to create a Markov sequence {{formula:27a25620-3fec-446f-9ee7-5bcc6db1aae5}} of dependent variables that
converge (in distribution) to the distribution of interest (also called target). One of the most
ubiquitous versions of an MCMC algorithm is the Metropolis–Hastings algorithm
{{cite:c0528c2c788aeb61fa73e3ab3435def3b4f594b0}}, {{cite:ffab63648e5c1de4cb4137afd33dde27369e2a50}}, {{cite:b956ab3e36774578f9b96f515753b77516361d7b}}
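A minimal random-walk Metropolis–Hastings sketch, illustrative only and not tied to any of the cited formulations:

```python
# Hedged sketch: random-walk Metropolis-Hastings for a 1-D log-density.
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, lp, chain = x0, log_target(x0), []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        chain.append(x)                           # dependent draws X_1, X_2, ...
    return np.array(chain)

samples = metropolis_hastings(lambda x: -0.5 * x**2, 0.0, 10_000)  # N(0,1) target
```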
| m | 724ad4a43262c362ec3d34d5f45727ab |
Several recent works have investigated the nature of modern Deep Neural Networks (DNNs) past the point of zero training error {{cite:8b78d534e605a22f2784c4654f3f3b1648dd77ad}}, {{cite:41c1692c69c1e2bf6057db4ab4c08fec76978929}}, {{cite:20f7a292eda6fc6a5085096496d326647fd7b36f}}, {{cite:cc2ec48b3c86cc5f003af9303547bef2746a1b71}}. The stage at which the training error reaches zero is called the Interpolation Threshold (IT), since at this point, the learned network function interpolates between training samples. This is not to be confused with zero-loss, but simply the point where all training samples are correctly classified. The stage of training beyond the IT is coined the Terminal Phase of Training (TPT) in {{cite:a6c57f68d5b39c45e924e4c818a1308a0fbec29a}}. It was in this paper that the term Neural Collapse (NC) was introduced to describe four interconnected geometrical phenomena that describe the network behavior past the TPT. Let us briefly describe the properties of NC that are most relevant for this paper:
| i | bb67c74db4af4230d2be1cd1969bdc04 |
Extracting stress mentions from posts. We applied a state-of-the-art NLP deep-learning tool {{cite:2908a5e175b5054a03a21c9da15e092814cf0ed9}}. The tool uses deep recurrent neural networks {{cite:f1836a2b4b5b0d84cd0de1fe0a6bb4d55cda478c}} and contextual embeddings {{cite:0f3d7a49df86a12e42cdd51353f158af2c0b6b19}}, and it was trained to extract any mention in the health context (e.g., medical conditions) from free-form text. The pre-trained MedDL models {{cite:2908a5e175b5054a03a21c9da15e092814cf0ed9}} are provided for three social media datasets: AskAPatient {{cite:bc85579b4b24b0028bc22ef33f275e1ca7c884f5}}, Twitter, and Reddit. We used the model trained on Reddit, as the reviews analyzed here have a similar format: users write sentences in free-form language without any length limitation (as opposed to Twitter) and without a specific focus on health (as opposed to AskAPatient).
{{table:1dd510c3-1d31-4d3e-8ae3-48924ba7fe62}} | m | 0d41e2555646e0b7d7d567e2c4340ab4 |
iWTA is compatible with other models that approximate brain computation through connected areas represented as binary vectors like in {{cite:285819cc35f831ab53ecd08a6293d558d7cc2d38}} and {{cite:c1789a652c045c22c93482b727a31b265cc201d7}}. However, with iWTA, different areas can selectively influence the output encodings by gating, modifying, or predicting the response, which is not possible to do with a simple kWTA model. Also, with iWTA, it is possible to implement ideas from predictive coding, where higher hierarchical areas inhibit the predicted response from lower areas {{cite:daff8f0146d4cca455422e17c10b085b475d27d7}}. Biologically, it leads to saving energy for signal transmission. Computationally, it also can be used to improve the mutual information between areas.
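For reference, the plain kWTA step that iWTA is contrasted with can be sketched as follows (a sketch only; the iWTA inhibitory/excitatory interactions themselves are not modeled here):

```python
# Hedged sketch: k-winners-take-all over a dense activation vector.
import numpy as np

def kwta(activations: np.ndarray, k: int) -> np.ndarray:
    winners = np.argsort(activations)[-k:]   # indices of the k largest units
    out = np.zeros_like(activations, dtype=np.int8)
    out[winners] = 1                         # sparse binary output vector
    return out

y = kwta(np.random.default_rng(0).standard_normal(50), k=5)
```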
| d | 8d75b8bcc34b73aba7fcba324b629e95 |
Techniques.
Computing frequency-based functions is challenging simply because we don't have enough space to store all the exact frequencies. However, there are efficient small-space algorithms (e.g., the Misra-Gries algorithm {{cite:344de7cf58dc01b789d37d9fc75e5f4345726cf9}} and the Count-Median Sketch {{cite:73b66ce0934ae233f5639ed307e894fabc74c113}}) that return reasonably good estimates of the frequencies. We use such an algorithm as a primitive in our schemes. The estimates returned partially solve the problem by helping us identify the “heavy-hitters”, or the most frequent items. There cannot be too many heavy-hitters, and hence the all-powerful Prover can send the Verifier the exact frequencies of these elements (which of course need to be verified) without too much communication. On the other hand, the rest of the elements, though large in number, have relatively small frequencies. We show a way to encode the answer in terms of a low-degree polynomial when the frequencies are small. The Prover can then send the Verifier this polynomial using few bits, enabling us to solve the problem with small communication overall.
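For illustration, a compact version of the Misra-Gries primitive mentioned above; with m-1 counters, every element of frequency greater than n/m survives, and each stored count undercounts the true frequency by at most n/m.

```python
# Hedged sketch: Misra-Gries heavy-hitter summary in O(m) space.
def misra_gries(stream, m):
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < m - 1:
            counters[x] = 1
        else:  # decrement every counter; drop those that reach zero
            counters = {k: v - 1 for k, v in counters.items() if v > 1}
    return counters  # candidate heavy-hitters with frequency lower bounds
```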
| r | d766deae762cc4c3bc3e3f1ce458c457 |
had no bias, and the PAC-Bayesian priors were chosen in a completely data-independent fashion (they coincided with the distribution of the network at initialization, as suggested by {{cite:89bbc421c81989a08a94081983ad063dcadb6880}}, {{cite:ac4bab43908d6aa91b761c1e1daf397a7e96b6e0}}). We leave for future work a more comprehensive experimental study.
| r | d1fd018a967101faf1c8517b489bb80d |
Nowadays an increasing amount of large-scale unlabelled visual data has been made available. However, most existing learning algorithms are either designed for supervised learning, where the ground truth is known, or for unsupervised learning using distance measurements. As for supervised approaches, manual annotation is a time-consuming task and can provide only a temporary solution, since the labels correspond exclusively to a specific dataset {{cite:010fff7e779c77786b531bbd503e73f089932063}}.
| i | d0999dfb5b627bf0c799a9dcee9cc43c |
Example 5
Let {{formula:f4e2e4bc-61a9-40c2-ae2f-132d3e5542af}} and
{{formula:d0ff2176-d6e3-4748-9755-b28853758e21}} , where {{formula:b43ea341-96fd-44de-a231-9bf19ff934e5}} and {{formula:8a3d7e2c-0fc9-41b0-bb78-c87b68cb1b37}} are {{formula:c4e0c5d9-d320-4c01-bf0b-b1fd37b84686}} functions such that, for each {{formula:08fd8d53-b291-4db1-a83a-bec5996bc19a}} , {{formula:bb6d93ed-3690-4394-ba47-4e879dd02732}} for all
{{formula:39d15790-7b4e-42ff-8fca-54bcbfc326c7}} such that {{formula:60976af9-d855-4b9c-b30f-2463f1b2c65f}} . Furthermore, suppose there exists {{formula:3cd2aa60-b97d-4049-89ac-d632b3b4a70c}} such that {{formula:d1d89f16-ff6f-429d-8b38-3dfdb99905a4}} and the vectors
{{formula:8518e50c-5676-460c-8e8b-2ed6c39d0ed5}} and {{formula:561ee4d8-b1ab-42e2-b017-bff3ac6e2d7b}} are linearly independent. For this choice of {{formula:9927bd88-fc95-49bf-bec2-5f29beba2f07}} , we show that it is not possible to find a {{formula:b120f5ec-652f-4221-b317-7a932b522d1a}} barrier function candidate. To arrive at a contradiction, we assume the existence of {{formula:70e36f5d-b470-4a54-9f65-ebab41b7e282}} such that {{formula:59eac440-a726-450f-96ed-aaf880197089}} .
Assume without loss of generality that {{formula:ac6a0a4e-bb51-4d35-8ea3-20fb37836648}} . Hence, using {{cite:191ba702f7d57c5553da71f1f706f7a1beec7088}}, we conclude that {{formula:703f99ec-2195-4798-8c04-4fca12831cc1}} .
Moreover, from the construction of {{formula:dffc57fe-f5c8-416a-b059-f433582675fa}} using {{formula:1b177b7b-b930-477c-a6cf-f4d7ed79dc80}} and {{formula:40ef0655-9584-4c8e-ab68-3d2aa6d2a16d}} , we conclude that
{{formula:482c1f47-4be8-472d-8471-eee9f8bd2968}}
| r | f8f403f3819749ce6b8bbd0bff1e0a7c |
In this section, we evaluate our method on the Bonn dataset and the SSW dataset {{cite:7de623139fd31e00344d00c5041600f297fbfdf3}}. For comparison, we also report experimental results for various existing deep learning models, including the Multi-Layer Perceptron (MLP) {{cite:c07dfbc76ce18a6af37085dea9b8039039782ca5}}, Graph Neural Network (GNN) {{cite:d36f7f4ee4c897162e3a74c99d10cecb2ad1ec74}}, and Convolutional Neural Network (CNN) {{cite:c8cb487b8fbaebe75ed81faf70dcb6cfe0f817ba}}.
| d | e41077f0a18ec90f85085637d00be3a5 |
We report the performance of EQFace on the widely used LFW {{cite:7a7d35e0c9a26421c3a562c164335b9ce719a92d}} dataset, and on the CALFW {{cite:dbed7b1155df64707ad2993f53e6b1711f36a862}}, CPLFW {{cite:77f5e702f1fbe761ccafd368b77c2977c76920b5}}, and CFP-FP {{cite:ab2cdce0c3e7155ee56adfa5e4235dcf539e9c6f}} datasets. Furthermore, we also test EQFace on the template video face dataset YTF {{cite:98e44f7b25c620f68c15e75ecce55ec8bf1a6482}}.
| r | 6888b33a2cdde37a0c298dcaa752ee82 |
There are basically two distinct approaches to SSI, namely
`direct synthesis' and
`recognition-and-synthesis' {{cite:77da433aea571fa5a159271c740f8a5a458d5ed2}}. In the first case,
the speech signal is generated without an intermediate step,
directly from the articulatory data, typically using vocoders
{{cite:1bb4ef3083cc2a3bb9c4cd446aebf8ad6dd52952}}, {{cite:ba3d2cb131192209c5a96a7e437ddba2f1875970}}, {{cite:9a3c524b4d269876cb74d9cd3f3dbc49dbd73405}}, {{cite:8b2bf8e6d8a77ea56194be1b76ab0c7840b5ca48}}, {{cite:37902e46c4a042067b7726bad87e140f75121536}}, {{cite:f6365ae871568d58b05a16aedd1f7453f97bcb12}}, {{cite:8d66a4744b9c429d3600052f0a9994522e37951f}}, {{cite:89bab8ebc0fed8d2637462412cd67d78719e0a87}}, {{cite:bf1510250f6a3a8974ec442a90a47d2e508ad532}}, {{cite:cac258605845b6d7b4895ea3c761bbc978226670}}, {{cite:da0b658233ebf0d3962df0cb5b9e3db866589f0b}}, {{cite:5ec251402b7b06e858de29b00fe04b80f72f0740}}.
In the second case, silent speech recognition (SSR) is applied to
the biosignal, which extracts the content spoken by the person (i.e.,
the result is text). This step is then followed by text-to-speech
(TTS)
synthesis {{cite:437bb8074741a29064f8f6527c144552835108e2}}, {{cite:a545e62cf9a5b179ff60d6af1a7cccc54e64434c}}, {{cite:bb32bd572ade56d2d10bd78c9afe8d0b534b2d1c}}, {{cite:58f9a640a8d5573d479acb4fde942c26449b0556}}, {{cite:fb91a464a140c269acf739e1db1276b2ab7e6750}}, {{cite:1154fb97a6ae82d30b2e55a0741c613644dea344}}, {{cite:67d25ff1b7ecd46a8f69cad49f3a14e054e1375c}}, {{cite:29d3d0f2b4fa8f10334ec6ec2caabd2fa66ba45e}}, {{cite:5afe197cd4e9891f18a54f523b2489745c7f74d7}}.
A drawback of the SSR+TTS approach might be that the errors made by
the SSR component inevitably appear as errors in the final TTS
output {{cite:77da433aea571fa5a159271c740f8a5a458d5ed2}}, and also that it causes a significant
end-to-end delay. Another drawback is that any information related
to speech prosody is totally lost, while several studies have shown
that certain prosodic components may be estimated reasonably well
from the articulatory recordings (e.g.,
energy {{cite:8b2bf8e6d8a77ea56194be1b76ab0c7840b5ca48}} and
pitch {{cite:37902e46c4a042067b7726bad87e140f75121536}}). Also, the smaller delay achieved by
using the direct synthesis approach may enable conversational use
and allow potential research on human-in-the-loop scenarios.
Therefore, state-of-the-art SSI systems mostly prefer the `direct
synthesis' principle.
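Schematically, direct synthesis amounts to frame-level regression from articulatory features to vocoder parameters; the sketch below is purely illustrative (shapes, model, and data are placeholders, not any specific cited system).

```python
# Hedged sketch: map articulatory feature frames to vocoder parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.random.rand(5000, 64)   # articulatory frames (placeholder features)
Y = np.random.rand(5000, 25)   # vocoder params (e.g., spectral feats + F0)

dnn = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200)
dnn.fit(X, Y)                  # no intermediate recognition (SSR) step
Y_hat = dnn.predict(X[:100])   # feed to a vocoder to synthesize a waveform
```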
| i | e897d6a460b9f146e681b40862e781d1 |
We verify that OFA can achieve competitive performance in downstream tasks of language and vision.
On natural language understanding tasks, while {{cite:b0c2dbf8037f034076f165396be3d9d4605900a4}} report the results of the Base-size SimVLM, we also conduct experiments on the Base-size OFA and compare with baselines of the same scale for a fair comparison. Table REF demonstrates the results on GLUE {{cite:2c81317988295aead45c5dd7dfc43dd8da9a2fc7}}.
For models of Base-size, OFA outperforms previous multimodal models by a large margin. Compared with SimVLM, OFA achieves better performance even with a much smaller pretraining dataset. In addition, OFA also outperforms BERT in most tasks, indicating the good natural language understanding capability of OFA.
For Large-size models, OFA performs competitively with {{formula:07da491b-fc27-4bf4-8d50-0b553bdc8ab6}} .
Table REF demonstrates the model performance on Gigaword. The baseline models are all pretrained on text-only datasets. Though pretrained on various tasks, OFA still outperforms most baseline models.
{{table:a54a8d40-2ebb-4695-909c-efa93af89809}} | r | 3b528737145c37065868845578e7c8a0 |
Now, we state a sublinear convergence guarantee that generalizes the original result (Theorem 2 in {{cite:6b1ad8a04caaef70a72f7bf663d483dbd5beb7af}}) for Nesterov's dual extrapolation method and the newly developed result (Theorem 7.4 in {{cite:d23acec64ff23b43d3ccee73decc9608e2a171ae}}) for UMP.
| m | b2b2ed24ef5bb034b92ee14c81ac4d84 |
In order to explain the various points of views we find useful, we start with simple examples and work our way up.
In section we start with a CFT computation to leading order in {{formula:b54664c7-b30a-430d-98b1-615f4ed3e1b1}} of the action of {{formula:955686e9-1ea1-4ec1-86d5-8b7c45522789}} on a scalar bulk operator interacting in {{formula:7d4182d4-6dfa-4b9c-96c2-0e6c7d15b4f0}} with one abelian Chern-Simons gauge field, dual to a CFT current {{formula:6d0544da-097d-4bc2-840c-10138887dff7}} . The result is of the general form
\frac{1}{2\pi i}\,[H_{\mathrm{mod}},\,\Phi] = \xi^{M}\partial_{M}\Phi + \cdots
which we will encounter for each field {{formula:dc52192c-ff64-4cee-a345-bf0e16baed22}} that we consider, i.e. dressed by interactions with gauge fields or gravity. The first term is the one expected for a local bulk operator {{cite:f2393b7b0eedcdc9c6dbf717d942cdb9ee0cd560}}, while the second term carries information about the non-locality of the dressed field.
From the resulting {{formula:6e72b085-4d3e-4f5c-a8cf-726a376895da}} action on {{formula:ff1c26b3-c566-4994-afd4-e68836b49755}} , we show that there are two possible actions of the Casimir operator on the bulk operator. Comparing the two yields the correct bulk equation of motion for the scalar bulk operator. This is then generalized in section REF to all orders in {{formula:e9a929ec-22f5-4dae-b6a4-369456168403}} . In section REF we use a Jacobi identity
to generalize the construction of section REF to the case where the scalar operator is coupled to two Chern-Simons gauge fields such that the dual CFT has both a {{formula:6c7977fd-733d-4c7c-bdc5-12257f0cda6b}} and a {{formula:c59c28fe-2550-439a-ae34-61308a5fe5f4}} conserved current. We then explain that the procedure we used is equivalent to constraining the equation of motion directly by the action of the modular Hamiltonian and reproduce the same results from this new perspective in section REF . In section we explain the relationships between different approaches to construct the bulk operator.
In section we use the point of view of section REF to constrain the equation of motion for a scalar operator coupled to a bulk gauge field in higher dimensions. This is done using the {{formula:5df1b724-f379-429e-9d0a-c69895da81e2}} action on the bulk operator and on the bulk gauge field, and this to all orders in {{formula:66812260-ab36-438e-bea6-788afe340e27}} for both the scalar and the gauge field.
We then argue in section that our success of getting any consistent bulk equation of motion relies on having a compatible pair of {{formula:9771e9b8-1bf7-4fb4-ad51-519e0e141985}} action and bulk equation of motion, and that one can use the existence of a simple bulk equation of motion as a guiding principle to build up order by order in {{formula:b229be77-a029-438e-9d87-61fb9a5d260d}} both the action of {{formula:f88d7513-8896-44d6-85e6-9a875bbe8fad}} on the bulk operator as well as the bulk equation of motion. This is demonstrated in section with the example of a scalar field interacting with gravity in {{formula:2e60a8d2-1f33-4f69-b10f-19148ad11541}} . We end with some comments about this approach. Some computations are presented in Appendix A-F.
| i | 345616cba542297e3e3c4638b8cd8944 |
In this study, we compare the nonlinear dimension reduction methods isometric feature mapping (ISOMAP) {{cite:f1e3445a55920fd3302d2674463fb367386aae47}}, diffusion map (DIFFMAP) {{cite:65e5867231cf8a54d0c8c0880e0ffd50b182a3d0}}, t-distributed stochastic neighborhood embedding (t-SNE) {{cite:a603926c404d17beb144121700b72a3aa7011358}}, and uniform manifold approximation and projection (UMAP) {{cite:1fef7f6927f53c26732d8efe4efdb39d103594e2}} for functional data. All these methods have locality parameters that control whether (rather) local structures or (rather) global structures are considered, that is, how much “context” of the respective data points is taken into account while constructing the embedding. These parameters influence the result strongly and need to be tuned. We apply MDS as a simple tuning-free benchmark reference method.
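A sketch of the locality-parameter sweep such a comparison requires is shown below (it assumes scikit-learn plus the umap-learn package; a diffusion map would need a separate package such as pydiffmap and is omitted).

```python
# Hedged sketch: sweep locality parameters; MDS is the tuning-free reference.
import numpy as np
from sklearn.manifold import MDS, Isomap, TSNE
import umap  # umap-learn

X = np.random.rand(200, 100)   # 200 functional observations on a 100-point grid
reference = MDS(n_components=2).fit_transform(X)   # no locality parameter

embeddings = {}
for k in (5, 15, 50):          # neighbors / perplexity control the "context"
    embeddings["isomap", k] = Isomap(n_neighbors=k).fit_transform(X)
    embeddings["umap", k] = umap.UMAP(n_neighbors=k).fit_transform(X)
    embeddings["tsne", k] = TSNE(perplexity=k).fit_transform(X)
```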
| m | 412bd65fde348a4ab79206e282bdfcc8 |
Multi-agent systems have received significant attention from researchers and have gained increasing interest due to their various applications. One of the significant applications of multi-agent systems is formation control, which has been inspired by nature. In {{cite:9eb53e900bb9ec2e1501703c4886cdb52dea4150}}, different categories of formation control problems were reviewed. Displacement control, one of the main categories of formation control problems, includes the leader-following consensus control problem (also known as the distributed cooperative tracking control problem) as a particular case that has been widely studied in the literature, e.g., {{cite:11652adaa2f6faeb7041f570d203a201f7be8ade}}, {{cite:578c83f9ca5f82c1b1f98d4ffd335aea144d37f4}}, {{cite:503a0ca6e6677ac688b743d410aae825bdfd271a}}, {{cite:351963a828077262be349b572b132f56e19370e2}}.
In the formation control problem of multi-agent systems, it is common to consider single- and double-integrator dynamics {{cite:c41bf22078730fa96b523eb9fdf1b53e366ec034}}, {{cite:503a0ca6e6677ac688b743d410aae825bdfd271a}}, {{cite:66d797beef9f2d898d4d31278fcd4e926fa4ca04}} or nonlinear dynamics {{cite:578c83f9ca5f82c1b1f98d4ffd335aea144d37f4}}, {{cite:11652adaa2f6faeb7041f570d203a201f7be8ade}}, {{cite:29fc2234d92f035387f7e1bc638f863d205e806d}}, {{cite:e2386fc0132f1644ec3a4cb29cd7d6887d1ce00f}} to describe the dynamics of multi-agent systems. Most of the literature that considers unknown nonlinear dynamics utilizes NNs or fuzzy logic controllers to approximate the uncertain nonlinearity in the system dynamics. There are two main reasons for this: first, multi-layer NNs and fuzzy logic controllers are universal approximators that can approximate any continuous and smooth function over a compact set to any desired accuracy {{cite:b517620920ed5726b45489ff1609564e55c7d715}}, {{cite:578c83f9ca5f82c1b1f98d4ffd335aea144d37f4}}; and second, a precise mathematical model of the multi-agent system dynamics is not required, which eliminates the extra effort of acquiring an accurate system model {{cite:cbd2758eb5b0464077a78d98a52b5a7f7c0b06e4}}.
| i | 9bcb6eff72567e9b6bdc0fbec86e989d |
The study of the AdS/CFT correspondence in low dimensions has seen renewed interest in the last few years {{cite:50ff4b5565deab2e37bb2c18312735c8ebfe9187}}-{{cite:0de6555621e19577a5adc425369ef05db47d1cfd}}. On the AdS side of the correspondence, a plethora of new AdS{{formula:1fa4f18b-2341-4251-8bd3-0cf80f4d3fc4}} and AdS{{formula:ca0fec6e-0a7e-4197-babc-f8c26a515311}} solutions of Type II and eleven dimensional supergravities with different amounts of supersymmetries have been constructed. In turn, on the CFT side it has been possible to identify the 2d and 1d CFTs dual to some of these solutions as IR fixed points of explicit quiver field theories, from where it has been possible to explore some of their properties, in particular to compute their central charge. These AdS/CFT pairs thus represent perfect scenarios where the Bekenstein-Hawking entropy of black strings and black holes can be computed microscopically. This is particularly promising for the large classes of black strings and black holes with {{formula:2c7c5afe-10df-404a-9fb1-06449f1cedb8}} and {{formula:968dbda4-7dff-480b-8f74-3bc935d7be6e}} supersymmetries constructed in {{cite:6c4ae9a720e5eb30681cb1e779f4f0fe01004796}}, {{cite:ae27307bf8e2fe679c71d81f043122ea22db5c98}}, {{cite:251e9585fd30d72ec54311f8ce9f9f4fa0fd9499}}, {{cite:c2b40d866b89d563361d5cf2f95cf4ca4d00027d}}, {{cite:799346c038bda2008e118f7fcad370b75bb75c04}}, {{cite:2014ac76fc7cc2050813c8a9fc5131bdd7c5f17a}}, {{cite:74af8f1a322b2dc3099fbf8c1463c7d4e7e3b2ff}}, {{cite:7e0aba3f761fd50fe82273340fcead1394b469c7}}, {{cite:5f621d7bcfdcf3f22ce7f72d7ceb809d87ce7254}}, {{cite:7587af5754f542600720c2597ccb0e95c99422d6}}, {{cite:3d29067fda603d8d3ea033f8c156e3db6b5a2954}}, {{cite:e2f8d6b764e5c7252a952279600807ecbd67af7c}}, {{cite:6c4e61fb50a9abca65e384e7830d24f1d54dd8b5}}, {{cite:8af95cd27da176837258dbe96b587f97e64dfcbf}}, {{cite:a6990b91f5a100ea54595aea38892ad667518520}}, {{cite:51c2faf601ec199e08bbb42f75ede94c25c22baf}}, which enable extensions of the seminal studies in {{cite:ead8b8bd9eab9d173535c873c710e279882fb1c6}}-{{cite:6a31084dae949bca54f9b9e5c4fee9872caa9ba6}}.
| i | ed2e9177e98e5a80c964184d8c6db1db |
We prove the following chain of implications
{{formula:dce031a8-537f-448f-81c8-a9b707da8884}}
The first two
implications are trivial.
(1) {{formula:32c534e5-bf42-4d9f-8fb9-c8ea86864071}} (4) is {{cite:c159d7f0027ffb08111eb75889f82d548fbb29d1}}.
(4) {{formula:3ad6f544-50fe-41c4-85be-4eeb2ee40e94}} (5) is trivial.
(5) {{formula:2d733648-9291-4e26-8dd4-1d682a571749}} (6): by Corollary REF
we have {{formula:2457aedc-4642-43d0-9128-accf6141f16d}} . In the {{formula:3b417884-79f9-4b28-87a4-e94784b78c4a}}
case
we can relate the defect to {{formula:7047a93f-2429-4b13-a784-84fcb033e767}}
using results of {{cite:c159d7f0027ffb08111eb75889f82d548fbb29d1}} as follows.
By Theorem REF
(or using
the diagram (REF ) directly)
we see that {{formula:2be88761-d170-4f9c-8b11-3c0d69faa584}}
(see {{cite:c159d7f0027ffb08111eb75889f82d548fbb29d1}} for {{formula:56c28927-349c-4fa5-8791-e127e817f70e}} of a nodal curve),
hence {{formula:65e56d10-4a48-4858-9848-96b07cc1c6a5}} does not have
maximal defect by {{cite:c159d7f0027ffb08111eb75889f82d548fbb29d1}}.
Finally, if {{formula:34959398-c019-413d-8005-19ad5fc3bc67}} , then {{formula:4243eaf9-58cc-4144-a081-2f098dc6aa9c}} cannot be nodal
by Theorem REF .
(6) {{formula:c7dddf9c-1935-4ab4-b725-6737abe22ef5}} (3) follows from Theorem REF
and Lemma REF for the {{formula:edc56609-b6bc-41eb-aa6a-e2da63f2b828}}
case.
For the {{formula:625007e4-9ffb-4f7f-b74e-337c6013eba0}} case
the result is
{{cite:a1237a59acf4d5ee05fe12d72d0928dfb3320253}} (the variety {{formula:c8b1ad95-21db-4940-8de6-1d6844876072}}
in {{cite:a1237a59acf4d5ee05fe12d72d0928dfb3320253}} is the unique
nodal {{formula:4a2e7b7f-c631-4a51-9255-d75c0467e1cc}} by {{cite:65d196a0cb5ab30b72879b3174bd58a4fdccd1f5}});
see also Remark REF .
| r | a445b59a3c645bc2ad10b864d321079d |
We validate the efficacy of Cascaded-C4Synth and Recurrent-C4Synth by comparing them with GAN-INT-CLS {{cite:a5890aa4a757d854578d0111b637a9c7ddf23351}}, GAWWN {{cite:65b26cb2813f3550d5a3cbdc99ca0dd058cac3ad}}, StackGAN {{cite:728d357caa6497c32ffa57f9407d1c241d62554c}}, StackGAN++ {{cite:22af2c99950986bff7caaf352f69af012ddccc69}} and HD-GAN {{cite:ceab741ddb487214d5f88a55330c3d69bac029cd}} (our future work will include integrating attention into our framework and comparing against attention-based frameworks such as {{cite:a4cd77e4cce11a08bca48529db3a55cc1ff20b1f}}).
| r | ab945bdaf455e7ba4f490f2b7e631f71 |
Given these image pairs, we can train a deep network with an encoder-decoder architecture (see Fig. REF ). In this network, image features are extracted by three dense depth-wise separable convolution (DDSC) modules after the image is down-sampled twice by a factor of 2, which gradually reduces the spatial resolution and extends the feature channels. Note that each DDSC module contains multiple depth-wise separable convolutions cascaded in a densely connected manner for efficient feature extraction. Besides, we adopt the FA block from {{cite:6f48459c914ea25cd8636b140c43decb9c1cf886}} to re-weight different channels and different spatial locations of the feature maps. Finally, two up-sampling layers and one convolution layer are used to generate a dust-free image that has the same size as the input image. In this manner, the reverse image degradation process of Eq. REF is inherently modeled by the network architecture.
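As an illustration of the main building block, below is a minimal PyTorch sketch of a depth-wise separable convolution and a densely connected cascade of such convolutions; the growth rate, layer count, and 1x1 fusion convolution are our own assumptions, and the FA block is omitted:

    import torch
    import torch.nn as nn

    class DSConv(nn.Module):
        # 3x3 depth-wise convolution followed by a 1x1 point-wise convolution
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        def forward(self, x):
            return torch.relu(self.pointwise(self.depthwise(x)))

    class DDSC(nn.Module):
        # dense cascade: each convolution sees the concatenation of all earlier outputs
        def __init__(self, ch, growth=16, n_layers=3):
            super().__init__()
            self.layers = nn.ModuleList(
                DSConv(ch + i * growth, growth) for i in range(n_layers))
            self.fuse = nn.Conv2d(ch + n_layers * growth, ch, 1)  # back to ch channels
        def forward(self, x):
            feats = [x]
            for layer in self.layers:
                feats.append(layer(torch.cat(feats, dim=1)))
            return self.fuse(torch.cat(feats, dim=1))

    out = DDSC(32)(torch.randn(1, 32, 64, 64))  # shape preserved: (1, 32, 64, 64)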
{{figure:63b254d9-c99e-4d27-91ba-4b82438d9b73}} | m | d2ac1e7c68ab88616edc807ea8820f64 |
{{formula:de43f71a-dae9-47a6-b46c-2e5a2953f592}} production is a probe of the hot and dense medium created in ultrarelativistic heavy-ion collisions {{cite:f5fa1857cd4609f5f51e56224adc6904d5cb9c91}}, {{cite:e646d8747e32ae595598e25bdd712df40a2e49d7}}. Moreover, it is also sensitive to nuclear effects not related to the creation of deconfined matter, called cold-nuclear-matter effects, such as modification of the parton distribution functions {{cite:248da975a3fb9bd8265b5ea000ab820a24debf6a}}, {{cite:ce9a84d6b22bc426a8075a9d1867cc6e0035e69b}}. In order to gauge both the hot and cold medium effects, precise knowledge of the {{formula:8931a929-af95-480e-b10b-fface986e93f}} production rates in the absence of a nucleus in the initial state is of paramount importance. The {{formula:183c1601-f2f6-4ed1-b12a-d6ca8181d2aa}} measurement in {{formula:9cdf2ed5-e9b0-42b1-8951-792965a3c326}} collisions constitutes a baseline for the quantification of nuclear effects in both nucleus–nucleus and proton–nucleus collisions.
| i | 1484fb9107aa28dff7d9a64fa3834481 |
In this section, we validate the performance of the proposed method and compare it with recently proposed state-of-the-art denoising methods, including BM3D {{cite:db5ba86a7866d0cbb609bae2515a25798f01a8f8}}, NCSR {{cite:ce177dd8259ebac01253971fc536f32d70675960}}, WNNM {{cite:4178a5b884cd691da38e8c0dde362becfb047b2e}}, AST-NLS {{cite:392d953c2594bb57c794632abd7d45002dda0546}}, and MSEPLL {{cite:8cc74422123f60dcce0cba4e556e610ea0579923}}. The parameter setting of GSRC is as follows: the searching window {{formula:f09bb282-ca3f-4db9-b083-8d43e5089ac8}} for similar patches is set to {{formula:a8782d96-2f7b-4086-a718-6c14b3170894}} and {{formula:6ffac1a4-dcf7-4eb5-8c79-fcb82676048a}} is 0.0001. The size of each patch is set to {{formula:9faa878d-eff7-4cd4-9c08-9122b4d9305c}} , {{formula:cd5b94a5-4a30-441f-90da-eaf2ce498fc1}} , {{formula:f4d96363-f16b-4680-bd9b-47e5d67e22e2}} and {{formula:83ed45ee-1200-46cb-98e1-79ccef89e16b}} for {{formula:87425aab-8250-43c6-92a1-67ec6524bf32}} , {{formula:c51ed714-7e73-4e61-97af-b929704a6f3c}} , {{formula:713c8c60-647a-4d3e-830d-1e0aa6a09bd1}} and {{formula:0820e095-38d5-4953-a7cc-6363b242f1bc}} , respectively. {{formula:b3d40e91-136b-4da5-bce2-cdebf215bf57}} are set to (0.2, 0.18, 0.67) when {{formula:d0f1a02a-3ed5-46b7-9dd0-85b1ba748d3e}} or (0.3, 0.22, 0.67) otherwise. The source code of the proposed method can be downloaded at: http://www.escience.cn/system/file?fileId=85492.
| r | 13539fbaed8493342e65cb2846ba2edd |
The constitutive relation of the NLG permits a different cosmic expansion history as compared to the prediction of the standard {{formula:df890b5d-703c-43e3-ba0d-99f938049a8b}} CDM benchmark model. Consequently, the modified TEGR cosmological models can give rise to a rich phenomenology to be confronted with observational data. It should be mentioned that besides the {{formula:296165c4-7aa2-4fc0-9820-4a0d025db7c0}} tension, there is a less significant discrepancy known as the {{formula:87056f0a-d4e0-44ff-9b2c-aa7556e18c09}} tension. The problem is related to cosmological observations that suggest a smaller amplitude of the matter perturbations in the late-time universe as compared to the prediction of {{formula:4b0fa01a-9075-498b-98f7-727975cf0e93}} CDM on the basis of CMB data. The late-time observations that show this tension are mainly in the domain of weak lensing results {{cite:1e073921b951dca3e667ccc677c3508aa0d84fe4}}, {{cite:c38c53f0d493f54ce92fa881309acb8a4942382e}}, {{cite:ab8f044f82dcebe6836bf01c03f038e940c7f69f}}, and the growth rate measurements {{cite:6c5de0e3193c5d94eb78b44033273a2f1b3616de}}. The fading memory effect of modified TEGR hints that the model may address these tensions as well. In future work, we plan to investigate the cosmological implications of this model at both the background and perturbation levels.
| d | 4fa4d8e3f0a2b88c80e777ac4ee94e7a |
We recall that Theorem REF says that a closed ancient flow must converge to a shrinker after rescaling if {{formula:05bc550f-404b-4556-aec9-882eef4a0bef}} . In the super-affine-critical case {{formula:a5a488ec-fa25-41fd-8ea4-af503ecf8fe8}} , a closed convex {{formula:f2d7e04d-a0f7-4638-82c0-eeef4df40ad3}} -CSF must converge to a round circle after rescaling at the singularity by results in {{cite:ea59d03d70f7d925f83b50c35d1c22228c7fe6d8}}, {{cite:d6edd1a1d0115612c0c4a7b51b72c066f749dc51}}, {{cite:07ec10e9eae43329a1a0697c6f9d021a0c2e48fe}}, {{cite:80d0b47ff48e880b6a89660ae82e65e9776f3968}}, {{cite:ef2d4617302b83a0a35d39978c09a61b23dd9af2}}. In the super-affine-critical case, the analogous theorem in higher dimensions was established in {{cite:ea59d03d70f7d925f83b50c35d1c22228c7fe6d8}}, {{cite:85caed6a3eda97e4586cabd1c2f6bf95a994f0a3}}, {{cite:3d4bf9ab2326e8bf37163270df187dcb93cbf406}}, {{cite:55efdc6a9b2cba1b4c570285653e40f507865276}}, {{cite:539e8644c0a266e21af2b25d1f2e63891731d523}}, {{cite:95f62d70295b4aab43ee6df08f40b076e5f81104}}. Indeed, the convergence results at singularities rely strongly on the fact that the entropy of a closed convex curve is bounded from below if {{formula:9d9d75b4-409f-461d-99a9-022eebecd210}} .
| i | 832d347c28a4e071fbfd269ec1888c6d |
As shown in Table REF , we compare the performance of the proposed model SPFNet against other state-of-the-art models in terms of FLOPS, parameters, and speed. We run the experiments with the training and validation sets. The input resolution of 2048 × 1024 is halved to train our models. We evaluate the model segmentation accuracy on the test set and then submit the results to the Cityscapes online server https://www.cityscapes-dataset.com/submit/ to obtain scores on the Cityscapes benchmark. Here, the network is compared against small networks such as ICNet {{cite:cda894feadf35529448d777904a4d01deaec6f40}} and BiSeNet {{cite:7fc8ab9b489a9e8820f92289b2b21fec9dd26a42}}, and larger networks PSPNet {{cite:a5c2b396159b08f3be9feb325c47155d8bb4b667}}, RefineNet {{cite:cf3cfe5c015aab3b1febd563d977dc40d3ac19de}}, and Deeplab {{cite:55c64cb505a85050fda0de11395821b4f1f65ebc}}. The proposed SPFNet34H achieves 75.7% mean IoU, with 41.9M parameters and an inference speed of 12.7 FPS, while SPFNet18L obtains 71.9% mIoU, with 31.7M parameters and an inference speed of 46.5 FPS. Most of the larger models in Table REF incorporate very deep feature extraction architectures with a larger number of parameters, whereas the proposed model uses ResNet34 and ResNet18, which have fewer parameters but still obtain comparable results. We do not use multi-scale testing or multi-crop evaluation, two techniques that have been used by many practitioners to help improve accuracy. We illustrate some visual examples of SPFNet on the Cityscapes validation set in Figure REF .
{{table:f3fc99bb-65f8-4b4b-b6f4-4ddbf990e68f}}{{figure:fc12f8d8-fa36-4631-9638-34b824229816}} | r | 39a765af21ef5bcd52049df4994d9f1c |
Surrogate methods were studied in depth by {{cite:54ee9e528981fcc41b50227a202195fee7cf9ee2}}, who proposed a generic framework relating the excess of the original risk to the excess of the surrogate risk through an inequality of the type
{{formula:ab38ac78-3358-4568-a73d-824384d83daa}}
| m | ed298ec26c4b06e7c5d1bf38419dd6be |
On the classification side, we note that classifiers are sensitive to GAN reconstructions; accuracy on the reconstructions tends to be lower than that of the dataset images, requiring an ensemble weighting hyperparameter to merge the predictions of the image and the GAN outputs at test time. In most cases, we find that GAN transformations that modify style tend to be more beneficial than those that modify poses. This is in line with previous works that note the benefits of style-based training augmentations for image classification {{cite:d3d8258cfcda6ba88d5b7dd6df30cf43e6f460a0}}, {{cite:b3e5579093acac4cc6ba0afd8b25e1a54b3df9e0}} and related positional sensitivities of classifiers {{cite:6b08a4bd46defccb0bed8d2ec9d74912fd1dc4b4}}, {{cite:49156da3828e060f9f47b0aff260159fad5c9a47}}, {{cite:f9477a646d0789f9dfd36cf830059a8ad18ca428}}.
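Concretely, the test-time merging with the ensemble weighting hyperparameter can be sketched as a convex combination of the two predictive distributions; the function names and the weight value below are our own illustration, not the paper's exact scheme:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def ensemble_predict(logits_image, logits_gan, alpha=0.7):
        # alpha is the ensemble weighting hyperparameter; alpha = 1 ignores the GAN view
        probs = alpha * softmax(logits_image) + (1 - alpha) * softmax(logits_gan)
        return probs.argmax(axis=-1)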
In the more difficult ImageNet classification problem, we find that performance degrades substantially during image projection, and therefore GAN perturbations offer limited benefits. In the supplementary material, we show results using the class-conditional CIFAR10 StyleGAN2, where we also find that the GAN reconstructions are more difficult to classify than the original images, such that adding GAN-reconstructed views does not benefit classification at either training or test time. When training classifiers, we use standard random flip, resize, and crop transformations on the image, but we further find that alternative image augmentation strategies during training can slightly outperform using the GAN-based augmentations at test time, as also shown in the supplement.
| d | c52fea3b625ff06e8c8976ba537564b8 |
We remark that previous data structures handling failures in {{formula:8c3dd3be-4148-4c82-b630-215f8ef1f9c9}} time either
work only for the single-source version of the problem (see dominator trees, or {{cite:d6860fc36ff653af7b5572e3fa21be6d5d985356}} for two failures),
or work only on undirected graphs (see, e.g., {{cite:5eb012b94c7ab9797e63a4681eaabf0b20a189bd}}, {{cite:ecc92c31f17e79688e16d6084b8c06063b0288cc}} for oracles for general graphs, and {{cite:4720b630b2323fb7a846428d46d547b40d57374e}}, {{cite:0c2b3253723c6be07e45b15b4c07cc18a0faa32e}} for planar graphs),
or achieve nearly linear space only for dense graphs {{cite:74f55a7cbceedb91868ca2fc61421f28966762cb}}.
It is worth noting that for planar digraphs vertex failures are generally more challenging than edge failures,
since, whereas one can easily reduce edge failures to vertex failures,
the standard opposite reduction of splitting a vertex into an in- and an out-vertex
does not preserve planarity.
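A small example makes this concrete (a sketch assuming the networkx package): start from a planar digraph whose hub v has in- and out-edges interleaved in the cyclic order around it; after the standard split, the underlying undirected graph contains K_{3,3} (parts {v_in, b, d} and {v_out, a, c}) and is therefore non-planar.

    import networkx as nx

    # 4-cycle a->b->c->d->a plus a hub v with interleaved in/out edges
    G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"),
                    ("a", "v"), ("v", "b"), ("c", "v"), ("v", "d")])
    print(nx.check_planarity(G.to_undirected())[0])   # True (a wheel graph)

    # split v into v_in (all incoming edges) and v_out (all outgoing edges)
    H = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"),
                    ("a", "v_in"), ("c", "v_in"), ("v_in", "v_out"),
                    ("v_out", "b"), ("v_out", "d")])
    print(nx.check_planarity(H.to_undirected())[0])   # False: contains K_{3,3}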
| r | 31cbe890c2d545d2ed07c360238bd685 |
A recent work {{cite:2774fa7c7bb19d38ea3d063b7e593ef71b98b128}} argues that previous WSOL methods in fact do not outperform the pioneering class activation map (CAM) {{cite:dcc1ca584ad247deaa83f2480715407b33a92e91}} method, and further claims that WSOL is an ill-posed problem when it is not given any location annotations. In {{cite:2774fa7c7bb19d38ea3d063b7e593ef71b98b128}}, using only a few images with groundtruth pixel-level annotations (i.e., few-shot) can outperform existing WSOL methods. This few-shot learning setting, however, is not weakly supervised anymore.
{{figure:e362f606-948e-48ae-a586-210c54da02e4}}{{figure:7cb737a4-b334-458c-8af1-48ce6a36c82f}} | i | 80f2f853617d73e8ad4a0a788d33bcbd |
From the first relation in eq. (REF ) and the {{formula:2fe6e8ea-a0dc-4bbc-843e-11db9216048c}}
range of {{formula:dabad573-01a6-48e9-ade8-a29a31022a1a}} from Ref. {{cite:9f6a332e35a4d3939502bf702ba9f4ec83ebe698}}, we deduce that {{formula:d7d7274a-36e3-4115-bfee-01136bc95e8d}} lies in the range {{formula:acf35d56-9804-446f-921d-722acd4c22e8}} . Moreover, by using the {{formula:35c754ee-9c46-4001-b4b4-9f0ac8ee0797}} ranges of {{formula:f6f52a8f-317c-4440-b697-1997947bf527}} in the case of the IH and the neutrino masses in eq. (REF ) as well
as the constraint on the sum of {{formula:3c48f739-f7a7-4437-a7a7-9b04470f5442}} from cosmological
observations {{formula:b56940b0-e8e6-4b9a-a11f-2951e250a372}} {{formula:0c837e1c-ee16-4306-8d72-5302f7f40966}} {{cite:476b5ef9de98e894b7cef8787bbadbee40020516}}, we find that {{formula:87e62ba6-5ab6-4ee7-a749-c2f5e12bf942}} lies in the range {{formula:2f3ab595-dc6a-44d7-8c4e-bdc86e73de7a}} which
indicates that {{formula:93eb4ab9-2145-4020-885e-1a3ada2adf98}} and {{formula:a3d80191-a5b3-4061-8b4a-e6b41015ca5f}} fall far
outside their allowed {{formula:f3443741-c113-4a6b-af8c-960ea95e9ba8}} experimental range. For this reason, the IH
pattern is excluded in our model.
| r | 79b199d053ea8e1584f4717fd919ffe6 |
be a non-zero cuspidal Hecke eigenform of half-integral weight {{formula:8e6ddb00-cc0a-4bb8-b92a-9c21d28489eb}} and level {{formula:890e36a4-fb27-4913-968c-22fb96b3eac6}} with Dirichlet character {{formula:1c3961fd-7463-4b76-b1ff-c00edc74a8fc}} , and let {{formula:996bffca-7c9e-4f5e-9c58-602b669261a3}} be a square-free integer such that {{formula:c7891d4d-e96f-4c6b-aa29-da6de589dd31}} . The Shimura correspondence {{cite:c08be08c64ecd4bf88b17d2030caaa21f0c39c43}} lifts {{formula:0eb276cd-2d8b-441f-8fb1-71aae0a781b8}} to a Hecke eigenform {{formula:f204649b-a24f-4b7f-88c0-3114b4a9d061}} of weight {{formula:86097183-060c-4066-8bdb-6621d9d7bd5d}} for the group {{formula:e77f23c9-c6e7-4567-b414-7b0882e357f5}} with character {{formula:9911858c-11d1-431e-a180-6294c4469613}} . Let us write
{{formula:50b8b3a1-05b8-4217-b612-e891efd81eac}}
| r | 0abad5d7fd93258be659bca03fe9848d |
From tables REF and REF , it can be seen that the branching fractions obtained in S2 are generally much larger than the ones in S1 because a relatively larger decay constant of {{formula:cd57e8c3-43a8-4df7-b733-f5ddfcc1af47}} and larger form factors of the {{formula:1d997ef4-6654-4917-bc1c-3f20a07076fc}} transition are predicted in S2. In addition, most of the predictions based on S2 (S1) are favored (disfavored) by experimental data; the only exception is the {{formula:8dd9bbe6-929a-485d-aff6-a4a8be8ed7c5}} decay. Our result {{formula:bb2a8689-421d-47a4-81fa-cfabcdaeabd7}} in S2 is consistent with the results obtained in previous works, {{formula:cefa974d-bfdb-4e64-bf01-a1a826eeb5a3}} (QCDF) {{cite:164b9dad30cf9b94b68518ed10c46511da27865d}} and {{formula:06650a10-e667-4500-b1a9-ec60b93dccee}} (pQCD) {{cite:61a0b1cd252136cc94ba293e939db96b5fb2e10d}}; however, all of these theoretical predictions are an order of magnitude larger than the experimental datum {{formula:a1fdbd0a-2517-4e8c-85a4-316399be7135}} . The reason will be analyzed in the next item.
{{formula:b50ea4dd-0fe3-4729-b41a-b5523eebeda7}} and {{formula:4f6d0eb0-06f7-4d76-af66-de8783272a23}} decays are penguin dominated. After neglecting the power-suppressed contributions, their simplified amplitudes can be written as
{{formula:60c1e822-c489-46cd-94d1-97b28b89ab53}}
A significant feature of the scalar meson is that its chiral factor is proportional to {{formula:05cb4db6-1662-42c8-b93c-53809ef3ef4a}} , which makes {{formula:99670dbe-dfd4-4195-ad00-ab2ea9e90017}} much larger than {{formula:8e74c934-fbcc-4255-a2cb-56351af45781}} . Numerically, we obtain {{formula:c96d9c6b-981e-49fe-a111-97ae495dfffa}} at {{formula:0c46308c-2d91-4a6c-8fc6-06f9b982ab6d}} . As a result, it is expected that {{formula:b3ce761c-0374-462e-b080-7bad8f855def}} . Moreover, for all of the penguin-dominated {{formula:7a979fd5-86c0-4b8f-af7e-56f1f1d79be3}} and {{formula:f31832ff-bc8a-45a9-8ab6-3699e12d2f35}} decays, the decay modes with {{formula:6510598c-0280-4ad8-8e19-7477bce60aac}} ({{formula:95de5a80-b91b-4127-8117-b71d108ab035}} is the emitted meson) generally have relatively larger branching fractions than the ones with {{formula:c79dc3f1-7be1-44e2-b4cb-f50974195240}} , as can also be seen from the numerical results listed in tables REF and REF .
In addition, the large {{formula:2e846e26-82ec-48f3-864a-807b9f64e1cb}} also results in large theoretical predictions for {{formula:7ffb2bf0-7db6-4fed-add6-374b8b213063}} compared with data, as mentioned in the last item. It should be noted that the significance of the datum, {{formula:99c011ba-8374-4900-914a-830271d39bb8}} , is smaller than {{formula:2054697f-93b8-4879-bd66-9270e37a4002}} ; thus a more precise measurement of {{formula:b2c0d4d5-44f3-457f-858b-5c9e40db8270}} is required to confirm or refute such a possible anomaly.
The {{formula:02cc4839-3f6d-4268-bf29-1e37148c1036}} and {{formula:ec562042-1fcc-4c5d-9c6a-0bea0bd7e064}} decays are pure annihilation processes, and thus are very suitable for probing the effects of annihilation corrections. However, these decays have very small branching fractions {{formula:85c69d09-5196-435c-9901-ceb4b47963ed}} because their amplitudes are power suppressed, and therefore will not be easy to measure precisely in the near future.
The {{formula:7853b4fa-b7d4-4cd7-b631-944248e1b02f}} and {{formula:f3df155a-9f9b-47f1-9348-dafc8c898282}} decays are tree-dominated, while the former are color-allowed and the latter are color-suppressed. As a result, the branching fractions of the former are an order of magnitude larger than those of the latter.
The {{formula:2de712ed-1f97-4c59-9000-8ca4e895f7f2}} flavor symmetry indicates some useful relations between the decays considered in this work. Taking {{formula:5dab4ece-c4c0-43ae-8c0b-092e1f08d184}} decays as examples, after applying {{formula:ab04cfd3-1af4-4158-9c43-8e78d7921e77}} flavor symmetry on the spectator quark, one may expect that
{{formula:9e1195b6-d1e2-41a7-a472-7d528a2abef0}}
which further imply that
{{formula:0a5c4f11-a5fa-4ad1-a083-1edbc53821ba}}
Besides, the {{formula:079222c8-ae59-410d-ad2d-459aea4c2e9c}} flavor symmetry also expects that
{{formula:071c9242-73ea-45db-a19d-31530527cba3}}
It is found in our calculation (S2) that
{{formula:c80723d6-8871-4318-9ae1-774f61e0cfcc}}
which generally agree with the expectations of {{formula:10010968-80cb-4d78-bdbb-5e8f82c88076}} flavor symmetry. The flavor symmetry breaking effect is mainly ascribed to the remaining suppressed contributions. Taking {{formula:88f42010-acb5-447f-bf56-5c9eabad5b7e}} as an example, the amplitude of the {{formula:2ff7b76c-99a4-4f09-a004-5af878a3ffe3}} decay receives additional CKM-suppressed contributions proportional to {{formula:ef286158-0800-49ec-87b3-df38ca8175f9}} , as well as CKM- and color-suppressed {{formula:885562d4-a1f5-4490-a940-25728f105c76}} , compared with the {{formula:9183011c-04cd-4403-a23e-af01ac9f06b9}} decay.
The {{formula:e312b876-deeb-4328-81d9-0df7b5b82abf}} corrections to the amplitudes of non-leptonic two-body {{formula:fe790cd0-7c7c-43e4-a350-dd8e412a7e19}} decays have been evaluated in recent years {{cite:79f62f264617fe46d64f6bc929d70596b756bc0d}}, {{cite:b23a91e44e0d00079dc71ee0fbcf712fbe276425}}, {{cite:ee9217c521d4eab0fd79dbd4dd7a218aa99215ce}}, {{cite:283e74ca4328672cbe0b4897d00e019d25019f21}}, {{cite:172dce655b0bebc1fa3b0218d093e6e094a98deb}}, {{cite:e017f7702bfb7ea26b8c22ab131c30d16a46b17c}}, {{cite:b6e3128629bac0287397f3dd3ac85272f97a390d}}, {{cite:3161d27005b2824df84ec8912275f7707a803bb1}}, {{cite:2ce917b70490bc1252848bfbdaa4cb2065c3768d}}, {{cite:a060315407f7d920ff99f547d900a63950a624e9}}, {{cite:3b8576e67fd342511efe7788d1f520985510e294}}. In Ref. {{cite:79f62f264617fe46d64f6bc929d70596b756bc0d}}, it is found that the next-to-next-to-leading order (NNLO) vertex correction to the color-suppressed amplitude {{formula:c8901ef6-e63f-4927-89e0-34eed2f92b27}} is sizable, but when combined with the {{formula:7ef758d1-f5c6-46b4-8679-3ad0f0d76d3c}} correction to spectator scattering, the overall NNLO correction to the color-allowed and -suppressed tree amplitudes is small due to the large cancellation. Therefore, the {{formula:a65d63c6-968f-474d-beec-4c6bb287f688}} corrections to the tree-dominated {{formula:f9833cd0-2414-49bd-9ae5-474a539df48a}} and {{formula:c8311e67-6074-47c5-84ff-f75f74948b51}} decays would not be significant. The NNLO correction to the penguin amplitude has also been studied in Refs. {{cite:b23a91e44e0d00079dc71ee0fbcf712fbe276425}}, {{cite:ee9217c521d4eab0fd79dbd4dd7a218aa99215ce}}. It is found that the NNLO contributions from current-current and penguin operators are sizable, but there is a strong cancellation between them, which results in a much reduced overall NNLO correction to the penguin amplitude {{formula:e2434468-5f67-492e-a926-0060d849f6b1}} . As a consequence, the full NNLO result for {{formula:22d68be7-abb1-4954-9369-ae34f12e44aa}} is very close to the NLO result {{cite:ee9217c521d4eab0fd79dbd4dd7a218aa99215ce}} (an example, {{formula:49bdeba8-33a2-4312-811c-2164abdb6504}} , is shown in Fig. 8 of Ref. {{cite:ee9217c521d4eab0fd79dbd4dd7a218aa99215ce}} ). Therefore, based on these previous works on the {{formula:ee737180-c949-4a1b-a9f2-b3df119680c4}} correction, we can expect that the {{formula:999aa772-db08-4b17-bcbd-c389eed1185a}} corrections do not affect the main findings of this work.
{{table:5877674e-659e-4892-8066-8edc167cd58b}}{{table:d021e084-0312-4a18-aa20-3052ff8ac8d1}} | r | 233feaac62a6e7b33a28aae804c34f47 |
We have comprehensively searched for the {{formula:0b4ec994-e4b5-46b9-adfa-ff1e8381b88e}} modular invariant lepton and quark models with the lowest possible number of free parameters. After extensive numerical analysis, we find that the minimal lepton models make use of five real couplings together with the modulus {{formula:516ea319-f43e-4950-a23b-ad1c224b2999}} to describe the charged lepton masses, neutrino masses, lepton mixing angles and CP violation phases. Thirteen minimal lepton models are obtained, including nine Majorana neutrino models and four Dirac neutrino models; the classification of the matter fields under the modular symmetry is summarized in table REF and table REF . Notice that the models L2 and D3 were already presented in Ref. {{cite:e40174ce2e57aa9dd73206d4de73bd172258b46f}} and Ref. {{cite:1a9ee349125b43fbe114dd327870fda4b84d73e8}} respectively, while all others are new. The experimental data from neutrino oscillation, neutrinoless double beta decay, tritium beta decay and cosmological bounds on the neutrino mass sum can be well accommodated, as shown in table REF and table REF . Moreover, the predictions of these models are expected to be tested at forthcoming experiments with higher sensitivities. In most modular symmetry models, the right-handed charged leptons are assumed to be singlets of the modular group so that at least one parameter is introduced for each charged lepton, and the hierarchies among the electron, muon and tau masses can be reproduced by tuning the coupling constants. From table REF , we see that other assignments such as the triplet and doublet plus singlet can also be in agreement with the experimental data. The model L1 is particularly interesting: all the lepton fields {{formula:de1ba7f7-58b2-4f85-9aae-7d6dc8352f58}} , {{formula:48678d06-ebcd-4046-8da1-8a8aa2bb0bc9}} and {{formula:1958df18-3f25-4cf5-be6d-c79140fe721c}} transform as the triplet {{formula:65cdd0bb-8a01-4d3f-b649-e99f22b7a56f}} under {{formula:4a751d2f-b21a-4b2c-bb21-7cf707e363cc}} , the light neutrino mass matrix depends only on the modulus {{formula:f9997951-cfd9-44b7-ab30-011a45dd769d}} and an overall scale, the four coupling constants in the charged lepton mass matrices are of the same order of magnitude, and the charged lepton mass hierarchies arise from the deviation from the fixed point {{formula:1af80f48-955f-4d57-8dbc-44e945b9f442}} .
| d | fb673ddd799bc0f785cc4edfeb72835e |
Over-relaxation steps are made of the non-local moves (REF ) and (REF ), which flip the signs of Polyakov lines {{formula:38929ff9-e084-4312-87d1-22a945357368}} and Polyakov planes {{formula:89c0c0e6-625c-4e9b-be6c-b16de257bc14}} , respectively. Since such transformations do not change the value of the action of a given configuration, they are always accepted and hence not subject to the accept/reject step. It is known that the incorporation of such moves between Metropolis moves reduces autocorrelation times significantly {{cite:0ef67b4297d808fa363401f503a31596d173e374}}, {{cite:123012d3f5e1f1d1e79aa53f96271c5d4418bce3}}, {{cite:d5cfea156534903cf3ec1f3e3a12ab25b4ccbb16}}, {{cite:da94ca988d6379301ba1e41d8d7c1295afb3a96a}}, {{cite:cf668951ed1ccc266d35eaad6e6421029fa43db1}}.
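The mechanism is easiest to see in a toy example: for any action with an exact sign symmetry, the flip leaves the action invariant, is accepted with probability one, and moves the chain between modes without a random walk. The sketch below illustrates only this principle; the actual moves (REF ) and (REF ) flip Polyakov lines and planes.

    import numpy as np

    rng = np.random.default_rng(0)
    action = lambda x: (x**2 - 1.0)**2   # symmetric double well: S(x) = S(-x)

    x, chain = 0.9, []
    for step in range(10_000):
        x_new = x + rng.normal(scale=0.3)               # local Metropolis proposal
        if rng.random() < np.exp(action(x) - action(x_new)):
            x = x_new                                    # usual accept/reject step
        if step % 5 == 0:
            x = -x   # over-relaxation move: action unchanged, always accepted
        chain.append(x)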
| m | c2bd4f6e29afe31359fe681a763312ca |
A natural next step would be to produce similar VEB methods for
non-Gaussian (i.e., generalized) linear models
{{cite:c0ee4a6cefeba3116d8f7f7b9f506a49f74a042e}}. Extension of our methods to logistic
regression should be possible via additional approximations that allow
for efficient analytic computations {{cite:7cafd4e7c4439fe0238134d164f831b8d2cf4bbc}}, {{cite:7f754fbb71036cf28e82fbbdb6f83a1e339bf34e}}, {{cite:ad3b5fea2d06b15c9e79262aa64699236081c066}}, {{cite:1964a97303a48b08f4a0d0c3cdaee2a210777f00}}, {{cite:ea27c36ce7f344c02531f88c5f600208e49b3d4d}}, {{cite:9cb52b7071730fd9ac625dec1b6614c3f1db6750}}. Extension to other types of outcome
distributions and link functions should also be possible but may
require more work.
| d | 96cd0154ea20adce160aa8106910627b |
In addition to circuit depth, another metric associated with boosters is the success probability.
In the case of ideal boosters (not the truncated boosters analyzed in Fig. REF ), the more that a booster suppresses high-energy eigenstates, the lower the success probability will be.
In our numerical simulations, we did not design the boosters to accommodate the success probability.
In practice, however, the success probability will lead to a time overhead of {{formula:36cec78c-0572-4acf-ba44-7ec9d4b03f57}} when the booster is used in an algorithm.
One solution, as considered in Section REF , is to design boosters which yield a success probability above a fixed threshold.
In some cases, this approach may not yield boosters with sufficiently high success probabilities.
In such cases, an alternative strategy is to boost the success probability using amplitude amplification {{cite:c3d394d345719186c92e112d94c1ed35e41126f0}}.
Such techniques cost additional circuit depth and we leave it to future work to analyze the cost-benefit trade-off in the setting of state preparation boosters.
In general, there appears to be a trade-off between the booster depth and the success probability. Depending on the constraint on the circuit depth, it may be preferable to apply multiple rounds of shallower boosters in sequence rather than to apply once a deeper booster with a higher overall success probability.
Further studies are necessary to better understand the trade-off and identify when each strategy is preferable.
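As a rough back-of-the-envelope comparison of the two regimes (our own numbers, not results from this work): naive repeat-until-success costs an expected 1/p repetitions, whereas amplitude amplification needs only on the order of 1/sqrt(p) rounds, each adding circuit depth.

    import math

    def expected_repetitions(p):
        # repeat-until-success: mean of a geometric distribution
        return 1.0 / p

    def amplitude_amplification_rounds(p):
        # rounds needed to rotate the success amplitude close to 1
        theta = math.asin(math.sqrt(p))
        return math.ceil(math.pi / (4 * theta))

    for p in (0.5, 0.1, 0.01):
        print(p, expected_repetitions(p), amplitude_amplification_rounds(p))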
| d | b24cc801bf557b75232619703f43c80d |
A theme that has emerged recently in reinforcement learning-based multi-agent control {{cite:007bcfccc504db28d07d23a70a1b49eab9c131c7}}, {{cite:dc6c07a560fa4a9eefd0cd0a3ef517b8fe8a63f4}}, {{cite:688d0f0201f5104adf9305abf2fb6753e518b3d1}} is to obtain decentralized, cooperative policies {{cite:034da0c5def3fd2986b383b89994580a55b1d74f}}, {{cite:a0a2c7373d1684a4ecb7048994bfd7e0970f07f2}}, {{cite:88b20461a60aee09772be96a7a791324e9b4b7e3}} to tackle non-stationarity and limited information. A popular paradigm is to train a centralized policy and execute it in a decentralized fashion {{cite:43dead3f5a64fc02a5564c8992d56598d1fff252}}, {{cite:8c65cd7bd4151254ca011144b5d8c2e871693be0}}, {{cite:ee77f92f94f6007a35079daeacd186ba7b6a78c5}}, {{cite:1e5a076aa6043f62b2f5e0449c643b67111928db}}, {{cite:60b6bf83262f883400340b75a7ac29b577a0ea15}}. Emergence of coordination has also been studied {{cite:1a28952614ecd0df2774d042a95476036532f39e}}, {{cite:243c3a7fec5c4017e270b4ed30e83880b601c9a0}}, {{cite:90670f5d17a82cda26179cf1b2315c71638eccf0}}. These methods typically suffer from poor sample complexity but some have been scaled to large problems using heuristics {{cite:a022a3eca26eb34065892659de77b5f3cfe03f99}}, {{cite:0779a3bc6c39ebde2c2cbd6ceac3a95b49bfaa33}}, {{cite:6ddb346a55a97d679d065d08bbf24485baa29f48}}.
| d | d1669a3d390a3a64ddec61d83f533f79 |
We used the entire 3D lung scan, with both the right and left lungs, to identify clusters within our images. We used a CAE similar to the one in our simulations, but changed the 2D convolutional layers to 3D and changed the stride of the second convolutional layer from 1 to 2 in order to reduce the number of learned parameters, which can become very large in the 3D setting; the 3D input image was of dimension {{formula:eb1bde67-7b7f-436e-a4af-1409ab574fdc}} , with the output of the last convolutional layer at {{formula:ac339f82-7b23-4a2c-a810-6a9001fe3b77}} . We estimated the optimal number of clusters prior to the clustering layer of our framework using the Silhouette method {{cite:ca2334aabefe3e19abd9e1eacfb8f17a53fc6faa}} with sparse k-means {{cite:35a3b1cf93839fe88138d663348d6592a6e44f56}}. Then, setting the number of clusters to the optimal estimate, we identified clusters within these data, using CLAM to localize the discriminative regions. To evaluate the usefulness of the new subtypes identified with our clustering methodology, we computed the linear associations between our clusters and various clinical outcomes. Clinical outcomes included lung function measured by the forced expiratory volume in one second (FEV1), forced vital capacity (FVC), and diffusing capacity for carbon monoxide (DLCO); laboratory testing, including CD4 cell count and C-reactive protein (CRP); and patient-reported outcomes, including the fatigue assessment scale (FAS), gastroesophageal reflux disease questionnaire (GERDQ), cognitive failure questionnaire (CFQ), shortness of breath questionnaire (SOBQ), PROMIS, and the SF-12 questionnaire.
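The cluster-number selection step can be sketched as follows (standard KMeans stands in for sparse k-means, which lacks a scikit-learn implementation; the feature matrix and search range below are placeholders):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def pick_n_clusters(features, k_range=range(2, 9), seed=0):
        # choose the k that maximizes the mean silhouette width
        scores = {k: silhouette_score(
                        features,
                        KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(features))
                  for k in k_range}
        return max(scores, key=scores.get), scores

    # features: e.g. the flattened CAE bottleneck, one row per scan
    features = np.random.default_rng(0).normal(size=(100, 128))
    best_k, scores = pick_n_clusters(features)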
| m | 2e98d92329fc9207dc74ca41a3afa853 |
where {{formula:a73ead71-cd08-4d30-a40c-8f7a2224868a}} are the neighbors of {{formula:77a5b4ea-1224-423d-8eab-c83d3fa420b9}} .
Shortest-path-based heuristics are generally more expensive to calculate than neighborhood similarity, as they require global knowledge of the graph to compute exactly.
The Katz index takes into account multiple paths between two nodes. Each path is given a weight of {{formula:7aa173d7-f2a6-40ac-834d-5c4c1757c7bc}} , where {{formula:7119af2a-a34f-4a4c-a60c-8a4b1644888b}} is a hyperparameter attenuation factor and {{formula:6e292300-f8d4-40f6-8b4e-6f2ae04a150d}} is the path length. Similarly, Personalized PageRank (PPR) estimates the landing probabilities of a random walker from a single source node. For a survey comparing 20 different LP heuristics, see, e.g., {{cite:8e3eb9a5466259763ee338642068d1220f60c363}}.
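For small graphs, both scores can be computed directly from the adjacency matrix; a dense linear-algebra sketch with illustrative parameter values:

    import numpy as np

    def katz_index(A, beta=0.05):
        # K = sum_{l>=1} beta^l A^l = (I - beta A)^{-1} - I,
        # convergent when beta < 1 / lambda_max(A)
        n = A.shape[0]
        return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

    def personalized_pagerank(A, source, alpha=0.85, n_iter=100):
        # landing probabilities of a random walker restarting at `source`
        deg = A.sum(axis=1, keepdims=True)
        P = A / np.where(deg > 0, deg, 1.0)   # row-stochastic transition matrix
        e = np.zeros(A.shape[0]); e[source] = 1.0
        pi = e.copy()
        for _ in range(n_iter):
            pi = alpha * pi @ P + (1 - alpha) * e
        return pi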
| m | a4a443b8a86e8b137aa9fb16ad4baaea |
The precision control available in ultracold atomic gas experiments has opened up a platform where models of condensed matter physics can be simulated in a relatively defect-free environment. In particular, ultracold atoms trapped in optical lattices are described by Hubbard models {{cite:a09270b65f16d9d7b4ac61d77f64743110515a9d}}, {{cite:1bd39c6d0c97bf96c49d6fb1f60ce20109e0010a}}, which are of central importance to the study of solid-state materials {{cite:fc6923d5a2264d8bbdd7447c1e8c8ad66d046f9b}}. The reduction of three-body losses in lattices makes possible the study of strongly-correlated regimes absent in the continuum, such as the paradigmatic Mott insulator (MI) to superfluid (SF) quantum phase transition for single-component bosons {{cite:ae83666fa86ce34b539cc383437559705a28d02d}}. In the two-component Bose-Hubbard (BH) model a richer phase diagram emerges, including the additional possibility of pair (PSF) and counterflow (CFSF) superfluids, supersolidity, charge-density quasiorder, and peculiar magnetic states {{cite:c98ee739423bd4370875a1420eb77e76161dd7f7}}, {{cite:9762d16e869a367791afd5610d43bdb64ec58515}}, {{cite:5241477f91a0e4ff5295b94d7a55107b53f470c9}}, {{cite:dd7588f4430e35fcf5be3ce959bb6c2e7a9dc8e2}}, {{cite:c048dac03f5766559354c76b7f0ae79f872290d4}}, {{cite:7ecefb97622db5b79e0e0184393b30ad318f4a8e}}. Such coupled superfluids can undergo also mutual dissipationless transport with an induced entrainment or counterflow of one component due to a non-zero superfluid velocity of the other. This phenomenon, better known as superfluid drag, was first discussed by Andreev and Bashkin in the context of three-fluid hydrodynamics {{cite:36522df8640c31e28fc01e2ffc9634f7ad0cbc06}}, but is of universal relevance to systems ranging from neutron-star matter {{cite:6ea42ba06c8c15434df00e7ad3f7c2f245f3c323}}, {{cite:04dc6ca51463465ece745ce31b71119a91e33da4}}, {{cite:75a46ffbc51ed43663dd16a167eba7bd4380f166}}, {{cite:f230c17036d758a106b622450d9a86c6c27e63e2}}, {{cite:7d15db4694f3da25189fd9ee7e30e1e576c70388}} to multicomponent superconductors {{cite:42b2876d64a62d686d8c0f255e9b18749d44c691}}, {{cite:aba10c0780569a09d68382f4266ae9f11fa72f7e}}, {{cite:9d87fa39f449a0794da980b7c8717beaf5482931}} and ultracold atomic mixtures {{cite:3978745b227829867797b51cc75dc0c529e91027}}, {{cite:04b951b35731700f80bb31288cf1ae931ada082e}}, {{cite:f9a813b70559d007123355be9270e16f4353e2cf}}, {{cite:7346254f370d26117506b32622547ffcc371aac3}}, {{cite:fb550f94109b89f307d53384d39df01e5e4a009e}}, {{cite:750c1b2144d6861c79b261db044eccc752f1f31a}}, {{cite:31adc23a9372cfc4dbabdc1f8fe09acb1ebe45a3}}, {{cite:fa55503c29be22f4de10dea23a678aaf0899729b}}, {{cite:1e1d8322012970164e5f4b8ed0d26f2fcd5952b2}}, {{cite:c048dac03f5766559354c76b7f0ae79f872290d4}}. Direct measurement of this effect has however remained elusive, due in part to the low miscibility of superfluid {{formula:cd528fd4-493a-4616-a39b-57d8d7e1439e}} He and {{formula:6371275d-22b5-4bb9-9123-020660e622f8}} He, and recombination heating in strongly-interacting ultracold atomic mixtures. Recently, the PSF and CFSF phase transitions of the two-component BH model have emerged as promising candidates where the drag can saturate at its maximum value {{cite:750c1b2144d6861c79b261db044eccc752f1f31a}}, {{cite:31adc23a9372cfc4dbabdc1f8fe09acb1ebe45a3}}. Still, a deeper understanding of the fundamental role played by quantum fluctuations is needed to gain insight into the physics of such strongly-correlated quantum critical regimes of Hubbard models at zero temperature.
| i | 2411b56ec936ceeab268ec95da087e26 |
with {{formula:4d6f9ea2-dda4-453a-805c-a3b7a744eb19}} and a one-to-one correspondence between the cosmic
time {{formula:f88b787d-a0c2-47e7-88b9-dc0ec2858e5c}} and redshift {{formula:bcbe8297-cf85-45a7-b887-1fbe3c17f4ba}} {{cite:03a856d18e491067e4f2c8eca765204a2de24550}}. Such simple relation,
which was first used to obtain model-independent measurements of the
spatial curvature {{cite:0cc0efc625d562478392183ceeea074691e02b82}}, {{cite:031536ec5b2c3c288a700af9d4b4d901d9f6a17c}}, has also been
recently discussed to test the validity of the FLRW metric in the
Universe {{cite:e7c3f14068355ead8b0fbb4cc08e059cebb89f01}} based on different types of gravitational
lensing events. We can now rewrite this fundamental relation so
that the strong lensing observations (from LSST lenses) and
luminosity distances (from DECIGO standard sirens) are encoded
{{formula:eec0d0dd-bc46-40d5-9fbe-178607cb1c1b}}
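A minimal numerical sketch of how such a relation can be evaluated (entirely our own illustration: the dimensionless comoving distance is integrated in flat LCDM, and the distance sum rule d_ls = d_s*sqrt(1 + Ok*d_l^2) - d_l*sqrt(1 + Ok*d_s^2) is used; redshifts and parameter values are placeholders):

    import numpy as np
    from scipy.integrate import quad

    def d_C(z, Om=0.3):
        # dimensionless comoving distance d(z) = (H0/c) D_C(z) in flat LCDM
        E = lambda zp: np.sqrt(Om * (1 + zp)**3 + 1.0 - Om)
        return quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

    def d_ls(z_l, z_s, Ok=0.0):
        # distance sum rule relating lens, source, and lens-source distances
        dl, ds = d_C(z_l), d_C(z_s)
        return ds * np.sqrt(1 + Ok * dl**2) - dl * np.sqrt(1 + Ok * ds**2)

    print(d_ls(0.5, 2.0))   # reduces to d_s - d_l when Ok = 0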
| m | e6fee940faf8d45a45ba9a6e11ab8866 |
Logic programming {{cite:1d95a4d99fe673069d56b32a1e3eea8dd5113cc2}} has been applied to many areas such as fault diagnosis, databases, planning, natural language processing, and knowledge representation and reasoning. During decades of exploration, researchers have developed various semantics for solving different reasoning tasks. Among those semantics, the stable model semantics underlying the answer set programming (ASP) {{cite:be9dd2f0351f5a6cc315f6324a11bdce35227ce1}} paradigm is popular for knowledge representation and reasoning as well as for solving combinatorial problems. Though computing the stable models of an ASP program is NP-hard, there are many ASP solvers (e.g., CLINGO {{cite:ab13ec66e49964b6c118f675b6b9699e3d0673e8}}, DLV {{cite:ed2cf164b72056c252391a3a1662e4e6bfd72a9b}}, s(CASP) {{cite:b32b2a1d915c61b13a7507f5d624c2333ecd14a9}}) that can compute the stable models of an ASP program efficiently. Meanwhile, there are also many approaches for solving programs under the well-founded semantics, such as XSB {{cite:6c21ea735e933b111eba01ddfb18f66aaf79abe3}} and XOLDTNF {{cite:2723437e1db31cb602cd1b9df8f3877fc46dfd30}}. For the co-stable model semantics {{cite:e1bda795445c0562db4715ed7776bc6a24a0d1e8}}, {{cite:d2660d338977e0930f722b92b8276d667b6cab56}}, no specific solving system has been designed yet.
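As a concrete illustration of stable models, CLINGO's Python API (the clingo package, an assumed dependency) enumerates the two stable models {a} and {b} of the classic even-loop program:

    import clingo

    ctl = clingo.Control(["0"])                     # "0": enumerate all stable models
    ctl.add("base", [], "a :- not b.  b :- not a.")
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print("stable model:", m))
    # prints the two stable models, a and b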
| i | 8fde53201afd7115c0557614892a73d7 |
Theorem REF ensures that, given a suitable penalty {{formula:b74e307f-f73b-4016-9637-2e18b582f538}} , one can carry out the valuation procedure with a finite number of basis functions and obtain the same convergence result as {{formula:d3adeb9f-9c38-403e-b984-2c407416f2eb}} increases. Furthermore, the number of basis functions considered in LLSM never exceeds that considered in LSM for the same convergence result based on the same initial set of basis functions. The Irrepresentable Condition is a stronger condition that implies the Compatibility Condition. It depends on the Gram matrix and the signs of the true coefficients; see {{cite:19d200c3134af79f1db8f71bfc9d60595474dfbb}} for more discussion.
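The basis-selection effect can be illustrated with a toy regression step in the spirit of least squares Monte Carlo (entirely our own example; the data, basis, and penalty level are placeholders): the l1 penalty zeroes out part of the initial basis, so LLSM never uses more basis functions than LSM.

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    S = rng.uniform(0.5, 1.5, size=(500, 1))                          # simulated state variable
    y = np.maximum(1.0 - S[:, 0], 0.0) + 0.1 * rng.normal(size=500)   # noisy continuation values

    B = PolynomialFeatures(degree=8).fit_transform(S)   # initial set of basis functions
    ols = LinearRegression().fit(B, y)                  # LSM: uses all columns of B
    lasso = Lasso(alpha=0.01).fit(B, y)                 # LLSM: penalty induces sparsity
    print((np.abs(lasso.coef_) > 1e-8).sum(), "of", B.shape[1], "basis functions kept")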
| r | 63b23399821828dd32f36f086fdd8cc4 |
where {{formula:9f523d0b-b13e-4c7d-9951-9e2cbd816132}} and {{formula:e37f2f15-4bc6-4d7c-827e-050bfcb74136}} are the assimilated solution and the truth, respectively, at time {{formula:03f17a9c-0bc0-43e3-a460-0b747b04ac36}} . The time averages of the assimilated and the true time series are denoted by {{formula:8f735bcf-2165-4f66-8d36-a60212d34b9f}} and {{formula:61207e10-f984-42d9-a722-60ae31f0f46a}} , respectively. The RMSE and the Corr are widely used metrics in practice to quantify the path-wise error. A smaller RMSE and a larger Corr correspond to a more skillful assimilated time series. On the other hand, the relative entropy is an information criterion {{cite:44cfeb239f83c7e4802f93419d377f5bb1dee12d}}, {{cite:c8bddbc21e66bcc0cab591742c639e7b04a0b584}}, {{cite:17b4758d48a553edaa450ba34b446afc292c26fa}}, {{cite:76fea72cdccd437a03ba2a99bdd70c3deb7913dd}}, which is adopted here to quantify the statistical error between the two distributions formed by the assimilated time series and the true signal.
The relative entropy has many attractive features. First, {{formula:2167693f-f31f-473e-be45-d508b2c375d8}} with equality if and only if {{formula:b7573fd3-6880-49b9-8686-b6b09b9577a5}} . Second, {{formula:dd1cdbd2-e320-4e0b-8f50-89665fdc94d0}} is invariant under general nonlinear changes of variables. A smaller relative entropy value corresponds to a smaller statistical error.
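For reference, the three skill scores can be computed along the following lines (a minimal sketch; the relative entropy here is evaluated between Gaussian fits of the two series, a common surrogate when only the first two moments are estimated):

    import numpy as np

    def rmse(u, u_true):
        return np.sqrt(np.mean((u - u_true)**2))

    def corr(u, u_true):
        a, b = u - u.mean(), u_true - u_true.mean()
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def relative_entropy(u, u_true):
        # KL(p || q) between Gaussian fits of the truth (p) and the assimilated series (q)
        mp, vp = u_true.mean(), u_true.var()
        mq, vq = u.mean(), u.var()
        return 0.5 * (np.log(vq / vp) + (vp + (mp - mq)**2) / vq - 1.0)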
Figure REF shows the skill scores using different filters.
First, the larger RMSE and the smaller Corr of the G-ROM EnKBF confirm the intuition from Figures REF and REF that the path-wise error using the G-ROM EnKBF is much larger than that of the filters using the CG-ROMs. Second, the relative entropy, which quantifies the error in the PDFs formed by the posterior mean time series and the true signal, indicates that the G-ROM has a bigger statistical error for all the modes as well. It is also worthwhile to mention that the difference between CG-ROM Closed Form and CG-ROM EnKBF implies that the closed analytic formulae in the CG-ROM are advantageous not only in decreasing the computational time but also in reducing the sampling error.
| r | 7d3081a013cbfbbf206d7a92dee50372 |