| text (string, length 54 to 548k) | label (string, 4 classes) | id_ (string, length 32) |
|---|---|---|
Given the typical velocity of DM particles in the solar neighborhood of {{formula:98743072-0b5b-4a2e-9548-bf060795e958}} , one needs to achieve an energy threshold of less than an eV in order to search for DM particles in the sub-MeV range. Among the proposed materials to achieve this goal are superconductors {{cite:c797e4521c2501cc25c2b32374b11676bfeb85b0}}, {{cite:def17ead5fd46664ec3303b60dc8a648a3dac479}}, {{cite:39f7ceda5e7b037967b7a2a0322dab087baaba2e}}, superfluids {{cite:d64a09a3b71773049d4aeb24bce25d1feb60e1aa}}, {{cite:e5e6c4b4b0b3ae8266f09190318560ccc606dcd9}}, {{cite:12d25eeb9a2ed04734f89fcaf8baa476f7aad3f4}}, polar crystals {{cite:75462dadf086c5d20841c3371ddb6df1f18794bc}}, {{cite:bcc34aea6e99031c9e926119ccd02f4cd52dbc1d}}, {{cite:c10c514f3cceb5156b00c2fdb35552d7a0576a46}}, topological materials {{cite:aca880ee645d93c144d23b62e0410b662710c44b}}, and finally Dirac materials {{cite:63193de56555a78e6ee567513743e32f98209cce}}, {{cite:059ca94f525880fa0bed348ef0a8b27079d83287}}, {{cite:a3f60a3bf4c1487a501895b44f6bea26f06048e2}}, which are the topic of the present work. Dirac materials are defined as materials where the elementary excitations can be effectively described via the Dirac equation {{cite:1af883a191c8207d35d432f375222030ba11ed65}} with the relativistic flat-metric energy momentum relation
{{formula:748118af-76e3-471b-b2cc-72e656e32130}}
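The quoted threshold requirement can be checked with back-of-the-envelope kinematics (an illustrative sketch, not taken from the cited works): a non-relativistic DM particle of mass m moving at v ~ 10^-3 c carries kinetic energy E ~ (1/2) m v^2, so only about one part in a million of its rest energy is available in a scattering event.

```python
def max_kinetic_energy_ev(mass_ev, v_over_c=1e-3):
    """Kinetic energy (in eV) of a non-relativistic particle whose rest
    energy m c^2 equals mass_ev, moving at speed v = v_over_c * c."""
    return 0.5 * mass_ev * v_over_c**2

# Mass scales from keV to GeV: sub-MeV particles deposit well below an eV,
# which is why sub-eV detector thresholds are needed.
for m in (1e3, 1e6, 1e9):
    print(f"m c^2 = {m:.0e} eV  ->  E_kin ~ {max_kinetic_energy_ev(m):.1e} eV")
```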
| i | eadf9d8d75d8d02e825d6a8888d98c04 |
The detection-based methods {{cite:9567ad26e2caa27b9edf97b562c3606dbabf7edc}}, {{cite:711ba6a248c012a747ac119f670fe00d0799d84d}}, {{cite:e32b8fef2de1348866db66dc1e18812f5ccef3e7}} mainly follow the pipeline of Faster R-CNN {{cite:a4c9130a149d0d276690b5047134c9c9c12f3ee9}}. PSDDN {{cite:711ba6a248c012a747ac119f670fe00d0799d84d}} uses the nearest-neighbor distance to initialize pseudo bounding boxes and updates them during training by choosing the smaller of the predicted boxes. LSC-CNN {{cite:9567ad26e2caa27b9edf97b562c3606dbabf7edc}} uses a similar mechanism to generate pseudo bounding boxes and proposes a winner-take-all loss for better training at higher resolutions. These methods {{cite:9567ad26e2caa27b9edf97b562c3606dbabf7edc}}, {{cite:711ba6a248c012a747ac119f670fe00d0799d84d}}, {{cite:e32b8fef2de1348866db66dc1e18812f5ccef3e7}} typically use NMS to filter the predicted boxes, which makes them not end-to-end trainable.
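For context, the NMS step these pipelines rely on (and which prevents end-to-end training) is a greedy, non-differentiable filter. A minimal NumPy sketch, with hypothetical box and score inputs:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; boxes are (N, 4) arrays [x1, y1, x2, y2]."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep
```

The hard `argsort`/threshold operations are exactly what makes this stage non-differentiable.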
| m | fbb99eecaf02068f5de7214e7636e5bd |
we obtain, by Theorem 4.30 of
{{cite:5f47e0d15f8312c0942eb2e88c37f6e029a26c0a}}, that
{{formula:62ea430c-1184-409e-97e6-82e1439f787b}}
| r | 08b895ae635d9d224d8904439fed62ae |
Copula formulations give a clear separation between marginal distributions and the dependence structure. Copulas have found use in a wide variety of applied fields, such as quantitative risk control {{cite:266af87d065c44cac25b6e080fd840fc5fc5013d}} and statistical modelling, and copula-based statistical methods have been the subject of intense research. {{cite:996045cdb122932b86cd54b5ea1687692d5ece19}} first introduced copulas. {{cite:f9774ff67f5822d56e9edf0067bc0945d2b374fd}} gives a theoretical background for copulas, and most classical results on copulas can be found in {{cite:02f8150752391e4b9ce19f6baab3fa98a50b808f}}. {{cite:7cce79b3ab0cba128a8cbdbaff26601913e73d0c}} proposes a copula application to the management of financial derivative products. {{cite:6fb93abee3bc387ee7901bf06c8bbc9b74bfaf46}} and {{cite:c68854ccc1665cf8951199eace2f5c38e8100b17}} propose a regression model based on copulas, which they name copula regression. Copula-based regression methods continue to be proposed in recent years: {{cite:3ff9a06dd77081798c3db8eccb08d214a4bec85d}} and {{cite:c1fac370d9b32eda9bbef7e7f0a6fedb11670efb}} have studied copula regression for binary outcomes.
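The separation of marginals and dependence can be made concrete with a small simulation sketch (illustrative only; the Gaussian copula and the exponential marginals below are arbitrary choices, not taken from the cited works):

```python
from math import erf, sqrt
import numpy as np

def gaussian_copula_sample(n, rho, seed=0):
    """Draw n pairs whose dependence is a Gaussian copula with correlation rho,
    with exponential marginals chosen purely for illustration."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + sqrt(1.0 - rho**2) * rng.standard_normal(n)
    # Probability integral transform: Phi(z) is uniform and carries the dependence
    phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
    u1, u2 = phi(z1), phi(z2)
    x = -2.0 * np.log1p(-u1)   # Exp(mean 2) marginal via its inverse CDF
    y = -0.5 * np.log1p(-u2)   # Exp(mean 0.5) marginal
    return x, y
```

Only the copula parameter `rho` controls the dependence; the exponential quantile functions can be swapped for any other inverse CDFs without touching it.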
| i | acee9743b8461fca02b723c38587cce4 |
In Part I of this two-part paper, we presented the advantages, the reference architecture, and three core applications of wireless federated learning (WFL), the main challenges toward its efficient integration in the sixth generation of wireless networks (6G), and the identified future directions {{cite:c235153d1e937c78709666a6eae025737b29c56b}}. This second part is motivated by the fact that, as thoroughly discussed in Sections II-A and III-A of the first part, the utilization of advanced multiple-access protocols and the joint optimization of communication and computing resources can enable WFL to meet the stringent latency requirements of 6G networks {{cite:c235153d1e937c78709666a6eae025737b29c56b}}, {{cite:2c0d13bd90b6fb90ad25121910837506bdcc181f}}, {{cite:4046a5f95ebcaacd399fc27956d42c4fe5f8098a}}. In this direction, the use of non-orthogonal multiple access (NOMA) was proposed in the first part, since it offers low latency and improved fairness by serving multiple users in the same resource block, in contrast to orthogonal multiple access (OMA), where each user occupies a single resource block {{cite:21fa27fadb6bda8eadbf31cbd45cbe9e0d4c5dee}}.
| i | d8311e28da7af6d72568d7eadf435157 |
In this work, we propose distributionally robust offline model-based policy optimization (DROMO), a model-based offline reinforcement learning algorithm that penalizes the Q-value for OOD state-action pairs as well as its own variance. In particular, compared to previous work, we invoke an identity from {{cite:5984c43ebfe8874ed8cb709375cd98c8bfb666a0}}'s work on distributionally robust optimization (DRO) to obtain a robust expectation of the Q-value evaluated under the rollout-induced distribution. Despite the theoretical advantages of DROMO, our future work will empirically evaluate its performance.
In addition, DROMO, with variance regularization, may exhibit lower sample complexity and a faster rate than COMBO {{cite:86af9cffb513622ca4ae1299c74283cda4ca874e}}, which we also plan to show in the future. There are several further directions for future work. One interesting avenue is to inject or enforce pessimism by applying DRO chance constraints under different ambiguity sets, such as the Wasserstein set. Also, while we present an offline hyperparameter tuning scheme for MOPO in Appendix , it is difficult to use the same method to automatically control {{formula:87836af3-fc87-490c-9d2c-5b3d2dafda31}} , the interpolation factor between rollouts and the dataset. Finally, the following remark, which follows from Theorem REF , suggests that Lipschitz regularization can also improve model-based offline RL.
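As a purely illustrative sketch of the kind of identity involved (the specific DRO identity used for DROMO is not reproduced here), a KL-divergence ambiguity set admits the well-known dual form inf over Q with KL(Q || P) <= delta of E_Q[X] equal to sup over lam > 0 of -lam * log E_P[exp(-X/lam)] - lam * delta, which turns a worst-case expectation over distributions into a one-dimensional maximization:

```python
import numpy as np

def robust_expectation_kl(samples, delta, lambdas=None):
    """Lower bound on E_Q[X] over all Q with KL(Q || P_hat) <= delta,
    via the dual  sup_{lam > 0}  -lam * log E[exp(-X/lam)] - lam * delta.
    Illustrative only: the paper may use a different ambiguity set."""
    x = np.asarray(samples, dtype=float)
    lambdas = np.logspace(-2, 2, 400) if lambdas is None else lambdas
    # log-mean-exp evaluated stably (shift by x.min()) for each dual variable
    vals = [-lam * (np.log(np.mean(np.exp((-x + x.min()) / lam))) - x.min() / lam)
            - lam * delta
            for lam in lambdas]
    return max(vals)
```

As `delta` grows, the robust value moves from the empirical mean toward the worst (smallest) sample, which is the pessimism mechanism discussed above.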
| d | f74f53ebb62dcb32c2dcb0e24a3f3de8 |
Computing the product of two {{formula:f2c6056c-1106-4a8c-9f8e-dfb009846fc3}} matrices using the straightforward algorithm costs {{formula:587ca54d-b948-44e9-a27c-dd95052e1aa2}} operations. Strassen found a multiplication scheme that multiplies two {{formula:a5a14bc9-19a1-47b6-921d-ba9c8d85b1ff}} matrices using only 7 multiplications instead of 8 {{cite:52be7c0b301ffa76fe75f8694734ff351f16dfbf}}. Applied recursively, this scheme computes the product of {{formula:b2dafe73-84cd-4791-8c1e-aa1815df8529}} matrices in {{formula:a2c042ae-4b6a-41ac-b59c-a7ac108500bf}} operations. This discovery led to a large body of research on finding the smallest {{formula:7787fc1c-9e69-4664-9b04-b6dcf9ff8e6d}} such that two {{formula:37549458-0825-4e43-8892-328b3eb5cef8}} matrices can be multiplied using at most {{formula:871825fd-13eb-419c-8829-0ffa80e1bd95}} operations. The currently best known bound is {{formula:1bca1b58-ae07-46ac-8a01-ddc91e8609f9}} and is due to Alman and Williams {{cite:ab9cf3f9d28ddb4bcff56a13cea898c91029c470}}.
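A minimal sketch of the recursive scheme (these are the standard Strassen formulas; the cutoff fallback to naive multiplication is a practical convention, not part of the original result):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen multiplication for n x n matrices with n a power of 2.
    Falls back to ordinary multiplication below `cutoff`, as done in practice."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # Strassen's seven recursive products instead of the naive eight
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C
```

The recurrence T(n) = 7 T(n/2) + O(n^2) gives the O(n^log2(7)) ~ O(n^2.81) bound quoted above.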
| i | 6ca23875c5b40c99db055a439175548d |
Training and validation dataset: We employ the ACDC cardiac MRI segmentation dataset (bSSFP sequence) {{cite:55a5e2552f84541dce0f1b5535c2b05d7490ea40}} for building the segmentation model and the proposed calibration model. Specifically, we take the ES fold of ACDC and split it into training, validation, and (intra-domain) testing sets of 60/20/20 cases. To simulate data-hungry medical image segmentation {{cite:13e7c2e3d543f90e545ec900bc10c8786dbf22aa}}, each time we take 20 cases from the training data for building the segmentation network, and 5 from the validation data for validating the segmentation network and for training the calibration model. We repeat this process three times to cover all the training samples, obtaining three segmentation models. For each segmentation model, we train the calibration model three times.
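The rotation protocol above can be sketched as follows (a hypothetical helper; case IDs, counts, and function names are illustrative, not the paper's code):

```python
import random

def rotation_splits(train_ids, val_ids, n_folds=3, train_take=20, val_take=5, seed=0):
    """Sketch of the rotation described above: each fold takes `train_take`
    cases for the segmentation net and `val_take` validation cases for the
    calibration model, covering all training cases over `n_folds` folds."""
    rng = random.Random(seed)
    shuffled = rng.sample(train_ids, len(train_ids))  # shuffled copy
    folds = []
    for i in range(n_folds):
        seg_train = shuffled[i * train_take:(i + 1) * train_take]
        calib_val = rng.sample(val_ids, val_take)
        folds.append((seg_train, calib_val))
    return folds

folds = rotation_splits(list(range(60)), list(range(60, 80)))
```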
| r | 3cd8b164ad4bb4da170d0d69ddd5dba6 |
Original {{cite:1278855c8625fa366812f913dfabf157b7c43c7b}} is the method where the rewrite is set to be the same as the input query.
AllenNLP Coref {{cite:d3deec9c475be216e62978e1a5cb669a26cb5040}} is a deep learning based NLP tool. We utilize its coreference resolution model to generate the rewrite.
(L/T)-Gen {{cite:142829969682072d6019558a1618010432ccd82e}} denotes the LSTM/Transformer based encoder-decoder generation model. For Transformer-based models, we report the performance of two variants: Early Fusion which utilizes the same encoder to encode image and text features, and Late Fusion which first embeds image and text into vector spaces separately and then performs fusion into a joint embedding.
(L/T)-Ptr {{cite:edcd1f654f37ebe0d2e4c19aee51b0cde1514398}} adds a pointer generator to the (L/T)-Gen model.
VL-(Bart/T5) {{cite:5c61879300c1b3f46662e227e8fbdb7621cc95db}} is the multimodal implementation of the large pre-trained language model Bart/T5.
VL-(Bart/T5)-Ptr is the model depicted in Section REF that adds a pointer generator to the pre-trained model.
| m | 201766abd3bcce0180398ba9685898e5 |
Similar findings are reported in the 13 TeV analysis, in which the production of dielectrons was studied as a function of {{formula:5cadb072-ced5-419e-b234-ba5c2f357715}} and {{formula:1b651796-b1fe-4908-937a-9b1eefb2c065}} for a minimum-bias and a high-multiplicity data sample.
In Fig. REF , the ratio of the high-multiplicity dielectron spectrum to the inelastic one is shown as a function of {{formula:1664a9f8-1d74-4bd4-8c44-58c7f058ee2f}} , integrated over {{formula:fc488f8e-f3fa-4f60-8e40-feb441a6d411}} and for {{formula:a96bd30a-65ba-45d4-8292-0b29a561284f}} GeV/{{formula:162ff7aa-d317-4107-9c89-60efcb99244d}} , left and right, respectively. The ratios are compared with ratios of the expected hadronic contributions. The cocktail ratio reflects modifications measured independently at high multiplicity.
We use a measurement of the multiplicity dependence of D mesons {{cite:b78c5e6efad8cccffb120f74915065b9eb3ff2e5}} to scale the heavy-flavour production, including the B mesons. For the light-flavour part of the cocktail a measurement of the multiplicity dependence of the {{formula:75d146ca-efde-4eb4-b320-6ea73b88cf20}} spectra is used, which shows a hardening with multiplicity {{cite:ca340ce132e4320364b61298d3e75fc655e3f2a1}}.
No significant deviation from the cocktail in both {{formula:2d793274-79de-4057-98d7-d383c0268ffc}} intervals is observed.
The high {{formula:9bc31fba-04b8-402c-8a09-fa83f6f438cf}} part of the spectrum, dominated by the heavy-flavour contribution from beauty quarks, can be described by a cocktail constructed from D-meson measurements. This suggests a similar scaling of charm and beauty production with multiplicity at LHC energies.
| r | c5eb8dfdf98b78fab2e17036b5a7a66b |
One natural class of situations where the maximin and the minimax values coincide is when there exists a “global saddle point” {{formula:4e007831-0f65-4a3c-a572-99a18abb40c6}} such that {{formula:8f4bf206-7976-4dd0-93f4-630c3bab1456}} is {{formula:b0384ef2-92f6-4557-9551-4bdef8a57c5f}} -minimized at {{formula:1e135779-601a-4349-8564-b33577b4bc91}} while {{formula:160fb865-9a36-45ae-83c5-d59624c8e735}} is {{formula:5dba8f65-a020-453e-b36c-42169f032cc8}} -maximized at {{formula:59f97af7-11e8-42cc-b7ec-aa1d8f5ce028}} . More broadly, the criterion for the minimax to equal the maximin is specified by the minimax theorem (originally developed in the context of game theory). In the continuum context, the crucial criterion is for the function to be convex-concave in its respective arguments. Before devising such a function in our context, we first consider a more localized geometric prescription where all the action effectively takes place within a single hypersurface of a specified class, and then optimize over such hypersurfaces. The maximin prescription for holographic EE {{cite:1c983a69a939fc0537c8c5f50764587a89959d29}} is but one example; here we can think of the action as taking place within a single Cauchy slice, and within this slice we can use the Riemannian max-flow/min-cut theorem to convert it to a flow on the slice, so that upon maximizing over all slices we arrive at an alternative, “maximax” prescription. But instead of Cauchy slices, one could equally start with a different class of hypersurfaces. Since, roughly speaking, an HRT surface area increases under spatial deformations and decreases under temporal deformations,
one could first find the maximal-area surface within a timelike hypersurface (which we will dub a “time-sheet”) and then minimize this area over all time-sheets. This is our “minimax” prescription. (This is just a heuristic to build intuition; the separator is not precisely null, and in fact one can typically find spacelike Cauchy slices, which are not maximin slices, along which the HRT surface is not the minimal-area surface. This is possible whenever the expansion of the null normal congruence from the extremal surface becomes negative, which is generically the case.)
| r | 51cd21d69d8ff24f618c40ac9f117c2a |
The concept of Knowledge Graphs (KGs) was proposed by Google in 2012 to utilize semantic information in web search, enhancing search performance and the user experience {{cite:1b2c2337dc6fc3bed63b8661ee4c2b1c83a216bd}}. Knowledge Graphs underpin numerous information retrieval systems that access structured data; they are used to identify and disambiguate entities in text, to enrich query answering with semantically structured summaries, and to provide links to related entities in exploratory search. Leveraging real-world knowledge in information systems is one of the significant advances toward automation {{cite:c85eca1a5cc0e517cc51ac5e564429a95c181479}}. Representations of data and logic inspired by human reasoning, intended to capture information in a form that improves the ability to handle complex queries, have attracted great academic and professional interest {{cite:1b2c2337dc6fc3bed63b8661ee4c2b1c83a216bd}}, {{cite:2e952685a1994c798107749186a235ae8b568c03}}, {{cite:d183e71936c8adfdb2a0d43bf9639e806600e4f1}}, {{cite:866b0e013c0ab5986b4b0f1b35ed2970aef290cc}}.
| i | b9141f6b6ffc78cd678114305d122e37 |
We begin by considering the bias-free case ({{formula:3f3b2c4d-7908-4445-bcb0-bc3a3f3309c2}} ), where the only factors driving evolution are migration, daily movement, and conformity. Figure REF (a) shows the range of model errors in this case. In approximately {{formula:6a3f5fd9-bafc-4135-af9f-1a45032afaf0}} of cases the model generates a final distribution with error ({{formula:215ee343-7dcc-4e3e-9fec-d7e04e20dce5}} ) comparable to the errors obtained by fitting a bias field (Figure REF (b)). These are variants for which spatial processes and conformity play a substantial role in explaining their evolution. Many exhibit one of two phenomena seen in two-dimensional coarsening systems studied by physicists {{cite:feed5edb78e7208ad0627243ee31fcd8c599235c}}: stripe states and shrinking droplets.
{{figure:c0d91cc3-bc73-43b1-95e7-8dd805f7704c}} | r | 4cdc8e5430491db94ecfe122eb575c15 |
It should be noted, on the one hand, that modified gravity is among the approaches used to describe dark matter and the large-scale distribution of matter; on the other hand, observations are used efficiently to constrain deviations from GR (see, e.g., {{cite:c279f62596e11ebccea2dc14660b175f97d4fbf9}}, {{cite:14dfdc2532b219ed2b3112743d0a7a392625ee0d}}, {{cite:74043df49f5e104ab71b707815e6232d692b9bf1}}, {{cite:cd63b8b46f949d56582788139f0771791c066190}}, {{cite:2ad72a9b02dcc92ff0d984e453f90bae4d5d2e46}}).
| i | 9173079109a2f81516b97aa26e3ff5ec |
We refer readers to {{cite:5357a68d3d00a27b5b88cf3ba32c1f0ad83ef69f}} where elaborate convergence results are discussed.
| m | a8e178b879939d3b582de3daae84dce0 |
In summary, Figure REF highlights the potential of
PRISM to probe the parameter space of our leptogenesis model in the
mass range below {{formula:980c0310-509e-472f-99bd-d4cf2143a577}} . On the collider front,
Figure REF shows that high luminosity
{{formula:f137acf2-575f-4095-b328-9981bf4b2f04}} -factories could probe the parameter space in a narrow range of
masses, but for remarkably low values of the light-to-heavy neutrino
mixings. Below the mass range we analyse, an extensive portion of the
parameter space is ruled out by searches for heavy neutrinos that are
produced in fixed target experimental facilities
(e.g., {{cite:02b9e98a0d33078725d27beca7daba0b8b0cf4d3}}, {{cite:abf8e36d3bf8d1030b67bd59cd6460d0e6e5c7e1}}, {{cite:34ba74d1a678424625ac2917fb679d3a7091292d}}, {{cite:ba9fb153f23b74e42859172ca7525aba81e4bb4e}}, {{cite:f3ced5767d1273e49d1cb47b0298470ca3b5d826}}, {{cite:74bedf9e87369faf08be236e6755ce9992459b3c}}, {{cite:f2d9b8e19d1a85272594ffb0da20728d40542eb5}}, {{cite:96117d52fc337d6017adcb28d15ae276e6596efd}}, {{cite:fe59d1260b6b4d527a8faada91fc2c5477896b82}}, {{cite:9eef7c0b056f269bc5a9df010177863f10c019d3}})
or in atmospheric
showers {{cite:fd23a52f9f60d83d79dbc1de0cd1a2d227ecb539}}, {{cite:dce7728d8c7e5411bdd5866a5d74ae218706ec00}}, {{cite:43bf1b17a253b86ee49c3850714b0d65db9c761b}}, while
future upgrades promise a significant gain in sensitivity for the
heavy neutrino parameter space. This further motivates a complete
analysis including heavy neutrino oscillation effects, which could
reveal a viable parameter space within the reach of these
experiments. Above the {{formula:2ea1174c-5575-40bd-9ee9-571867ee9e8e}} pole, the LHC 14 projection lies far
above the region where leptogenesis is successful. In models with two
singlet neutrinos, an analysis searching for LNV lepton-trijet and
dilepton-dijet signatures in future electron-positron, proton-proton,
or electron-proton colliders shows that a sensitivity close to
{{formula:eb24b9c1-1468-4c2f-9986-be71ec4ca73a}} could be achieved in the range of a few hundred {{formula:d2eac418-8b3b-4eb0-b34c-7f784c02ef5b}}
{{cite:a7ada13f822c56b9843a340c8df8752a72b98206}}. However, this is still an order of magnitude
above our prediction for the viable leptogenesis parameter space, and
a potential improvement on the sensitivity by the addition of a third
singlet neutrino would require a dedicated analysis. A recent
extensive review of current bounds and projections, including several
exclusion lines that we omit in the presentation of our results, can
be found in {{cite:02b901aabdf51ff86796202d6557bb67fa48fada}}. For bounds and projections on
multi-TeV heavy neutrinos, the interested reader may consult the
recent results communicated in {{cite:30ea5e99900c68c9a39ed6befa49e9e6a213e19e}}.
| r | b5d647e5bad0b9526a588f94a87ba5e1 |
The proposed method generates an expressive mesh {{formula:afd76e71-1348-4e70-bb7c-5ac75aae69b8}} of a subject, given as input its corresponding mesh in neutral expression {{formula:812cbf5e-63ee-4204-9f59-39a4ac73e623}} and a set of target 3D landmarks {{formula:1dec086e-777f-4775-9a4f-03dc184f3bab}} of some expression. We evaluate the accuracy of our approach by comparing against landmark-based deformable model fitting methods.
To perform this comparison, we first build a 3DMM transforming the training set of 3D faces into a vector space representation based on PCA {{cite:fd3825dbd6d657735b2823ba2f079149be58c381}}.
The resulting {{formula:e2a63894-293e-434f-8296-4de88429edb9}} principal components {{formula:9cc25c1f-54b6-43d5-a343-59b47dd20264}} , with {{formula:2b537a06-90c5-481c-a187-016316b33f98}} , describe the facial shape variations and are used to generate novel shapes {{formula:bcadc47e-cae1-4f10-a75f-c487c1b55ff5}} by deforming an average face shape {{formula:d9d0489b-d1c7-4d8a-8de1-ec8a2e8c0dce}} . This is achieved by linearly combining the components {{formula:0b85f42f-6d7a-4df2-930e-1de7f92c7a43}} :
{{formula:6010f3de-0206-4cb8-b94d-121be9b34ab1}}
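The linear synthesis step, a new shape equal to the average shape plus a weighted combination of the principal components, can be sketched as follows (array shapes and names are illustrative, not taken from the paper's code):

```python
import numpy as np

def generate_shape(mean_shape, components, alphas):
    """Linear 3DMM synthesis: new shape = mean + sum_i alpha_i * U_i.
    mean_shape: (3N,) average face; components: (K, 3N) PCA basis; alphas: (K,)."""
    return mean_shape + np.asarray(alphas) @ np.asarray(components)

# Hypothetical toy usage: a 4-vertex face (12 coordinates) and K = 2 components
rng = np.random.default_rng(0)
mean = rng.standard_normal(12)
U = rng.standard_normal((2, 12))
shape = generate_shape(mean, U, [0.5, -1.0])
```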
| m | dfc19d3a360b5e8c38b0381ab23e9c51 |
Monocular 3D reconstruction of general categories is a task that humans perform with ease, yet remains challenging for computer vision due to its inherently ill-posed nature: the observed 2D image is the result of a confluence of multiple sources of variation, including non-rigid intra-category shape variation, rigid transforms due to camera pose, as well as appearance variation. CNNs can easily learn to discard appearance variation, yet the treatment of the geometric sources of variability remains elusive.
Even though strongly-supervised approaches have delivered compelling results, e.g., for human reconstruction {{cite:3f0206a2e77e425d3d9006f715539826fd7c89ae}}, for general categories we need to rely on weaker forms of supervision, as well as on self-supervision stemming from computer-vision know-how.
| i | 383677bc55fe59cff1bb5442acd86d56 |
Let {{formula:a74ac682-f1cb-45ca-b6a4-845b0bd6f3a5}} and notice that it follows from Lemma REF and the Cauchy-Schwarz inequality that {{formula:10c19428-b5fa-411b-ae02-ea38eca9d276}} and {{formula:75a2b56d-34ae-4a70-b072-2d816bf380e5}} is locally bounded. Use (REF ), (REF ), the compensation formula, {{cite:dc9c2dc8434443633512dc4a01a95098f9124329}}, {{formula:1f767ead-8be7-4ea6-a06e-a3d4fa07bbc3}} , the Itô isometry and REF to get
{{formula:0e02042d-6c37-4c43-9135-3ef6ef512ee4}}
| r | bbc3516e68f686eae4ba979ff5c92de0 |
Remark 1
Hybrid dynamical systems cover a very large class of practical settings. Not surprisingly, researchers in the systems and control domain have proposed various frameworks for modeling hybrid systems for control design; see, e.g., {{cite:3c829aa1edc2ee818380bf0f2145445207bb1394}}, {{cite:41c2ece007e090f0b056f62ee8b46505e85faa13}}, {{cite:b9656b41e8fe74f6b8b5edea8219b739304b591a}}, {{cite:4ed9315b8bcfde7f1d943bc20e5ab024335e693d}}, {{cite:4fbe6cff1918a883324451088382132e020e1a26}}, {{cite:c065ccbde831b6a7e53d1ad23b12e8900bae78a1}}, {{cite:2a948ac1a5d4e0ab68494b69d132c47bd018e65e}}, {{cite:5fe5d0bf95aabc78e6ddae272a28b142ad36033f}}, {{cite:360a13f072d1e8c97f90e59b783408d937151a23}}, {{cite:43165c9cbecd99c7c0d7d7ed20d701e8c89c9393}}, {{cite:bc155a096b5acbaeba1f6f329141494a844d3462}} for detailed discussions. In this paper we restrict our attention to the frameworks of {{cite:b9656b41e8fe74f6b8b5edea8219b739304b591a}} and {{cite:4ed9315b8bcfde7f1d943bc20e5ab024335e693d}} (described in §REF and §REF , respectively) to develop our results.
| d | cea56a1084c3d2707802358f9fe944c0 |
We have proposed a Bayesian linear model that can predict a binary or a continuous outcome using data that are both multi-source and multi-way, with any number of sources or dimensions. Both the simulation and the data analysis results show that the proposed MSMW model can improve classification accuracy and reduce MSE when the underlying data have MSMW structure. However, the performance of any given approach depends on the conditions under which the data were generated, such as the true rank of the underlying signal or whether different sources have different signal variances. Thus, practical data applications of this model may require applying different versions of the method and comparing their performance. In this article we have focused on three-way arrays ({{formula:82e2191a-603b-4a00-89d4-7c7fb4f8265b}} ); however, extensions to higher-order arrays are straightforward, in which case the coefficient array takes the form of a CP decomposition {{cite:c6777fbd0e495adf310c18d0f47acec8951462a2}}, {{cite:bd99812a7bacc92eeec133952a6fb07f692b9007}}.
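For concreteness, a rank-R CP coefficient array for the three-way case is rebuilt from its factor matrices as B[i, j, k] = sum_r A1[i, r] * A2[j, r] * A3[k, r]; a minimal NumPy sketch (names are illustrative, not the paper's code):

```python
import numpy as np

def cp_reconstruct(factors):
    """Rebuild a 3-way array from rank-R CP factors:
    B[i, j, k] = sum_r A1[i, r] * A2[j, r] * A3[k, r]."""
    A1, A2, A3 = factors
    return np.einsum('ir,jr,kr->ijk', A1, A2, A3)

# Toy check: a rank-1 array is the outer product of its three factor vectors
a, b, c = np.arange(3.0), np.arange(4.0), np.arange(2.0)
B = cp_reconstruct((a[:, None], b[:, None], c[:, None]))
```

The same `einsum` pattern extends to higher-order arrays by adding one subscript per mode.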
| d | 3db04d66ee902a58ed87543c0f4de045 |
SMSS J1606{{formula:a8cd9910-668a-4189-b9d3-f47a2855e16a}} 1000 belongs to a sub-class of close binaries consisting of a white dwarf paired with a very low mass companion. Such binaries exhibit a peak in the M dwarf companion distribution near spectral type M3.5 with a very steep decline at spectral types later than M5 {{cite:f6423d47f70eef18d2b4e46146dea8afa681731f}}. The survey of {{cite:cea0b6ed4508fcfd43bee9007b3cbb39a8fb4127}} found, relative to this peak, {{formula:49c177be-5f71-4584-8f9c-ef1fddd4bfd9}} times more L dwarfs and {{formula:f2dcfb01-56ac-499d-a0e7-3b6a0dd93e60}} times more M6–M9 dwarfs in the field than among companions to white dwarfs. Despite more recent surveys with greater sensitivity to late type M dwarfs and brown dwarfs, very few of these systems have been detected {{cite:f65b5a59afb5e252a82c351bf843c3ed1b09ac7e}}. Furthermore, {{cite:746a1b648c9a95c6ec4522edc9a074fd26a7d07b}} have reported that there are only 23 brown dwarfs transiting main-sequence stars with orbits within 10 AU revealing that the `brown dwarf desert' puzzle still persists. This phenomenon observed by {{cite:4bf6dd7881f8348f455eaccc06fa0ed81f421e9f}} consists of a lack of brown dwarfs within 3 AU of a main-sequence star. The descendants of these binaries, a white dwarf with a brown dwarf companion, are even more infrequent. If the white dwarf is magnetic, such a pairing becomes very rare. Only six magnetic systems with very late M type or brown dwarf companions are known (Table REF ).
| d | 3b72b6887f137198c18109d3b33a5d04 |
An extremal configuration related to the parameter {{formula:2a1c458f-2214-4d9f-a42c-d7e9ff45cd66}} arises as a consequence of the RG-improvement in accordance with previous results {{cite:8487fc77295a00ad7afd6b23a62eaf214530d00f}}, {{cite:10299b8168ff14825da6c36e4d448a195ed55187}}. This configuration is reached at the Planck scale, where {{formula:3d870523-f775-41a6-99f6-0cac4de929b3}} with {{formula:c27c8ab6-ee30-498f-ac26-c10bc4002672}} {{formula:f7b69732-d779-4257-9c94-061572a03c38}} , and it must be distinguished from the extremal state at {{formula:78b9c740-1616-4d3e-8a25-35a0000a5e26}} for the classical Reissner-Nordström spacetime, where {{formula:a48690b9-a458-41e9-8fd3-e57acc8a41e6}} , the macroscopic charge of the black hole remains finite. The similarity of figure 3 for {{formula:3235e7c6-4fc1-40e3-a7d9-d583bdbfb24c}} in our present work with figure 6 for {{formula:641190d0-0c77-4485-9721-5c0764b4c081}} in reference {{cite:10299b8168ff14825da6c36e4d448a195ed55187}} is suggestive. In reference {{cite:8487fc77295a00ad7afd6b23a62eaf214530d00f}} the features of a possible remaining state at the Planck scale are explored, where among several results, {{formula:4bf7405b-7328-4885-ab95-77927bc420c3}} tends to a final and unique value {{formula:075c1d88-65e7-45cc-ab94-638472c17520}} and the temperature reaches the zero value after an infinite time interval as prescribed by the third law of black hole thermodynamics. Also the connection to the Reissner-Nordström spacetime for the mentioned extremal state at the quantum level is investigated. 
The result of our present work, for which {{formula:d4564edd-992d-4101-94ed-fccce61f6b0f}} {{formula:91b3e022-fa1d-446a-91ba-d157e1fecebd}} {{formula:33b769c7-4fac-4983-9948-36a1c01956f3}} for {{formula:4dada612-d6d6-4fdf-94ad-880c3122afc2}} , joined with the result previously found in {{cite:10299b8168ff14825da6c36e4d448a195ed55187}}, for which also {{formula:da2081f4-d868-4ee4-89b5-d0dc9e046b4c}} {{formula:0f5644dd-3c08-4755-91aa-27ed531face5}} {{formula:4cfc93fc-d025-41f2-a165-a9adfa2fa723}} for {{formula:3948bc64-22d8-4e2a-aff1-9947ec60c18c}} , adds hints to the possibility of the existence of a “cold” Planck-sized remnant, which would solve the information-loss problem in the evaporation of black holes {{cite:8487fc77295a00ad7afd6b23a62eaf214530d00f}}, {{cite:aa5298e7ff0a6aed299d53047be2293ff661c57b}}.
| d | eba4ad441b8bfc7635675454593d468d |
The current outbreak of the novel coronavirus disease (COVID-19) {{cite:4da9062200d7c72df1cb6db93f3e09ed75bdce9e}}, {{cite:efb95a4b311b6b4419c443a5086105d5d7cb186e}}, {{cite:0c3d1cab5282678d644e0241ebd222efbcc0fb4c}}, {{cite:d4ed94f081595e29a356951c5af3b4af641a10b9}}, {{cite:a218e21da6840d5bb4b8895d3c647052b66f25c3}} has dramatically brought to worldwide attention the crucial importance of epidemiological models for choosing the best strategies and policies to contain disease outbreaks {{cite:a155356f84cc51691a13e7daa27392601fbfa693}}, {{cite:a218e21da6840d5bb4b8895d3c647052b66f25c3}}, {{cite:f4fba7dd9edd3613524e261df0ffb67b632c8a9b}}, {{cite:fda5364c8164712a7217f314acb058a1c0bc36d9}}, {{cite:09509d65661e06d031af93764523f143a1e63b48}}.
Machine-learning approaches have been already proposed to help disease diagnosis {{cite:d89717462475795b7364be768cefbaa4488af99b}} and epidemics handling {{cite:fda5364c8164712a7217f314acb058a1c0bc36d9}}.
In fact, in the last few years, various neural-network architectures have been employed to manage human diseases {{cite:be81e9a5040863d724c7ba56c67c604a39425e6e}}, {{cite:50a8eb488a217f06a1ed586a4995afee98db26d6}}, {{cite:5735915ee069008f733f260978821a50fd618caf}}, {{cite:5e8084651a79a64e8df2a3c7efc36ff49cc089be}}, such as malaria {{cite:083a0b4e9ea17edb17b875792d6589091481497d}}, and animal diseases, such as swine flu {{cite:10528617e0042370851621dca398ae4e77d2a40e}}.
In this work, we have now shown how a neural-network-informed strategy can improve the containment of an epidemic, even when only a small number of specific tests is available and some of the individuals are asymptomatic.
This improvement can be seen in three key aspects.
First, integrating the neural network into the outbreak handling improves the performance of contact tracing, while performing the same number of tests and isolating the same fraction of individuals.
Second, the neural network autonomously tunes its weights to the ongoing outbreak, without needing to explicitly know its underlying model or its parameters, and therefore does not require a priori knowledge of the disease outbreak characteristics.
Third, since the neural network is regularly retrained as new data become available, it can automatically and dynamically adapt itself to the evolution of the outbreak as well as to the changes in the behavior of the population, e.g., due to containment measures or different social habits.
As a striking example, we have shown that, in the case of temporary immunization, the neural-network-informed strategy can prevent a disease outbreak from becoming endemic.
| d | bd6cc34134c774432ce4e8657b19c137 |
For evaluation, we adopt a diverse selection of MARL communication methods which fall under the GDN paradigm. These are shown in Table REF , along with the respective paradigm (whether the method simply falls within GDNs or whether GNNs are explicitly used for communication), the MARL paradigm, and the communication graph structure. We use the code provided by {{cite:3c2a8fcd18f5e67488abd114090f1a5d7a7e82fc}}, {{cite:8aa1e40e219ced80e11a4e5a9e91fe80772e3ca0}} as starting points. The code of {{cite:8aa1e40e219ced80e11a4e5a9e91fe80772e3ca0}} uses an MIT license; the code of {{cite:3c2a8fcd18f5e67488abd114090f1a5d7a7e82fc}} does not have one. All of the implementations are extended to support multiple rounds of message-passing, and the baselines are augmented with the ability for their communication to be masked by the environment (e.g., based on distance or obstacles in the environment).
| m | 320f4b379e3ed6d87e2b19359ffec9fb |
By viewing iterative algorithms applied to a given problem as discrete-time dynamical systems {{cite:8bdf51c2395d3881ef579e5d3d9d5e857e648002}}, we developed a framework for identifying equivalent algorithms via the spectra of the associated Koopman operators. The key to this approach is the fact that two dissipative systems that are conjugate have the same Koopman eigenvalues {{cite:5d9b0bcedb8ed1239e173b16f84278ff9b02e0e3}}. Similarly, for two dissipative systems that are semi-conjugate, the Koopman eigenvalues of one form a subset of those of the other.
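A toy illustration of the invariance underpinning this framework (not the paper's algorithms): two one-dimensional iterations related by a change of coordinates h contract toward their respective fixed points at the same rate, i.e. they share the dominant Koopman eigenvalue.

```python
import numpy as np

# Toy conjugate pair: x_{t+1} = f(x_t) with f(x) = 0.5 x, and
# y_{t+1} = g(y_t) with g = h o f o h^{-1}, where h(x) = 2x + 1.
# Conjugacy preserves the Koopman eigenvalues; here the contraction factor 0.5.
f = lambda x: 0.5 * x
h = lambda x: 2.0 * x + 1.0
h_inv = lambda y: (y - 1.0) / 2.0
g = lambda y: h(f(h_inv(y)))

def contraction_factor(step, x0, fixed_point, n=20):
    """Estimate the dominant |eigenvalue| from the geometric decay
    of the distance to the fixed point along one trajectory."""
    d = [abs(x0 - fixed_point)]
    for _ in range(n):
        x0 = step(x0)
        d.append(abs(x0 - fixed_point))
    return float(np.mean([d[i + 1] / d[i] for i in range(n)]))
```

Both `contraction_factor(f, 3.0, 0.0)` and `contraction_factor(g, 3.0, 1.0)` return the same rate, 0.5, even though the two iterations look different in their own coordinates.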
| d | 3c3fa3a586f879df2c32387529583ddf |
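The spectral-equivalence claim above can be illustrated concretely in the linear case, where conjugate (similar) maps provably share their eigenvalues. The following NumPy sketch checks this numerically; the matrices A and P are arbitrary illustrative choices, not taken from the cited work:

```python
import numpy as np

# Toy check of the conjugacy claim for linear maps: if y = P x conjugates
# x -> A x into y -> B y, then B = P A P^{-1} and the two maps share their
# spectrum (for linear systems these eigenvalues are Koopman eigenvalues).
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])             # dissipative: all |eigenvalues| < 1
P = np.array([[2.0, 1.0],
              [0.5, 3.0]])             # an invertible change of coordinates
B = P @ A @ np.linalg.inv(P)           # the conjugate system

eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
print(np.allclose(eig_A, eig_B))       # the spectra coincide
```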
The circle method was developed by Hardy and Littlewood to handle additive problems in number theory. It was significantly improved by Vinogradov {{cite:2e56b48e3801ef40d7a37ce0307dea70c4d01d63}} in 1928, who used it to obtain the asymptotic formula in the Goldbach ternary representation problem. The main innovation introduced by Vinogradov was the use of finite exponential sums instead of infinite series, which greatly simplified the method. An application of the circle method is usually accompanied by estimates for exponential sums, such as Weyl's or Vinogradov's inequality. Over the years the circle method has become a widely used tool in analytic number theory as well as in harmonic analysis; see {{cite:803788a81f347e459cbd65ca8b5920e4f720ba17}}, {{cite:904ca802260cb5df2ca681c0680cea2f038ff3df}}, {{cite:13dccb5b80551ce7d6003de2366be358a299e5a5}}, {{cite:b5148b0bb443b065a022439972376d9cf24e86a7}} and the references given there. We recommend the excellent treatise on the subject by Vaughan {{cite:ed66d9272e168901a8d4c779a60376d2a05928d5}}. For a more general view of analytic number theory we refer to the monograph by Iwaniec and Kowalski {{cite:e992425a698d17ebea6a3a116d763e48b9cd1cac}}.
| m | d4692bd2b67876494368f935d51aff6d |
Finally, {{cite:a989d7d0f7832ce9ae0d12919e617b01692689e2}} proposed a predictive quantitative model for each participant based on that user's historical step and goal data, as in {{cite:62756d9f4ce4ace3e9c74924ec40ccfb7ae30a40}}. It involves a two-stage RL procedure for selecting the optimal interventions: in the first stage, inverse RL {{cite:ab4b3a5b70e0a1f1b8eea7cb49a46ddac2c28914}} is employed to estimate the parameters of the predictive model for each user, while in the second stage, an RL technique equivalent to a direct policy search {{cite:40947ffa98da31e60a877dc5cbee74f5e19b46eb}}, with the model parameters estimated in the first stage, is used.
| m | 5ba40317accde37d862ba8d8c9836270 |
Since {{formula:1f5c0322-a3c6-46fb-a137-706d98cd40e8}} is not determined by the method, in what follows we take {{formula:c0e6124a-6902-48fe-98e8-98643215e451}} {{cite:bfb69c807f83fa233c6c0b279f8d63e08074adbf}}.
| m | 8b417853cbbfb8e89bf1f6335ade17a5 |
A generic consequence of a new force is the ability for DM particles to scatter off of each other. If the force mediator is light, then the resulting dark sector interactions could leave observable signatures on galactic and sub-galactic scales with a cross section that is generically velocity-dependent (e.g., Refs. {{cite:0968e98b384aee278e5af6f3313f87a8bf706eb5}}, {{cite:d1c036349994b076973675cfcfadcd70794bdb4e}}, {{cite:e588648178d4d2bc255075556245be1880f59423}}, {{cite:b2c5d0b78832a0ff6e8c27668c4bdaa6961f5625}}). However, a light mediator is not a necessary ingredient of DM models with a large self-interaction cross section that is velocity-dependent (e.g., Refs. {{cite:0fa1912e8cbf0c028670a336f54483a50c623d77}}, {{cite:1587ec6563c47374d812898bb037cea6c9138541}}, {{cite:8a904a04a010a03521c239109c2610d4b38e5cea}}).
In this review, we discuss what happens if DM is assumed to have interactions with itself (self-interactions) beyond gravitational interactions.
If dark-matter particles have a non-trivial probability of interacting on {{formula:c2dd6855-b733-4ad8-8aec-270c4d275ddf}} Gyr timescales, this will allow energy and momentum to flow from one part of the dark matter halo to another beyond what is enabled by gravity.
As we will highlight in this review, the introduction of DM scattering has profound implications for the DM distribution within individual halos and in the hierarchical assembly of structure on non-linear scales. Furthermore, the types of particle physics models that admit strong DM self-scattering could also lead to imprints on the DM power spectrum. Thus, self-interacting dark matter (SIDM) phenomenology includes deviations from CDM on scales of individual DM halos and subhalos, as well as their population statistics.
| i | 84d2252e0ff3f5b8b9861673bb583271 |
We compare our method with traditional and recent learning-based MVS methods. The
quantitative results on the DTU evaluation set are summarized in tb:dtucompare,
which show that our method achieves a clear improvement in performance. While Gipuma
{{cite:df99e9cbcbb328d719d55de96394ff27770d63ae}} ranks first in the accuracy metric, our method significantly
outperforms all methods on the other two metrics. Depth map estimation and point
reconstruction of a reflective and low-textured sample are shown in
fig:dtudepth, which demonstrates that our model is more robust on challenging
regions. fig:dtupoint shows some qualitative results compared with
other methods. We can see that our model generates more complete point
clouds with finer details.
{{figure:92b296aa-f8c5-4d02-9782-2438024d23c9}}{{table:e382baef-ee19-48cd-a598-e59c7b5a6707}} | r | 8f5f48c59721e92e50cf32c00d9d9650 |
Though impressive progress has been achieved with fully supervised models, there remain many challenges for SV systems.
For example, fully supervised models trained with a limited amount of labels usually do not perform well under domain-mismatched conditions {{cite:b68016f0cd7a65ad0c1259a5d8b6881fbc6c012b}}, {{cite:d56c4cc54c1d830646a4e28207d5db00ae663ccd}}.
In many practical cases, it is very costly and difficult to obtain sufficient data annotations.
Self-supervised learning (SSL) that only requires a large amount of unlabeled data provides an alternative solution for learning representations from speech.
This study mainly focuses on self-supervised contrastive learning approaches, where the goal is to discriminate between positive and negative examples. Popular contrastive learning frameworks, such as SimCLR or MoCo, are usually built around an InfoNCE loss, neural network-based encoders, and a data augmentation module that provides different views of the same sample {{cite:ed790dfd4256d6943908c487e4c63a0a332a82e1}}, {{cite:c1eb819343f98e496c37fe66c685eb80c34b4dbc}}, {{cite:27e76fc102f561f32f210d78f5597ee1e3025bd7}}, {{cite:e46cadaaf4087fc332c2699b790a38edd12c836b}}. The InfoNCE loss guides the network training so that the distance between a positive pair in the latent space (i.e., between embeddings) is forced to be small, while the distances between negative pairs are large. Recently, in {{cite:3d1cba207b4ccffa52b125b0697a7bc30dce57b5}}, {{cite:7b5a09bff0c82668edd9f286d89cc22c76603de0}}, {{cite:526221f43534afc7f2326bb94da6fe1dba76d9b8}}, {{cite:9d9d52ff74b39ac9dfd847e86369fe0f6429bd0e}}, researchers explored self-supervised contrastive learning for speaker verification, either as an unsupervised learning framework or as a pre-training method for subsequent tasks, and have shown the effectiveness and potential of self-supervised contrastive learning-based speaker embeddings.
| i | b4c55211c02eb2c932505a08ab96dc45 |
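As a rough, framework-agnostic sketch of the InfoNCE objective described above: the encoder and augmentation module are omitted, and the arrays z and noise are stand-in embeddings, not outputs of any real SV system.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Minimal InfoNCE loss for a batch of paired views.

    z1, z2: (N, D) L2-normalised embeddings of two augmentations of the
    same N samples; row i of z1 and row i of z2 form the positive pair,
    all other rows act as negatives. Returns the mean cross-entropy.
    """
    logits = z1 @ z2.T / tau                       # (N, N) similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives sit on the diagonal

# Sanity check: perfectly aligned pairs give a lower loss than random ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
noise = rng.normal(size=(8, 16))
noise /= np.linalg.norm(noise, axis=1, keepdims=True)
print(info_nce(z, z) < info_nce(z, noise))
```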
Constructing optimal higher-order models is currently a timely topic {{cite:bf7edcd20115ec6429164a050fe24287e3986668}}. Our method can be generalised to include even higher-order interactions, such as quadruplet interactions and beyond. The method can be used to indicate higher-order evolutionary mechanisms in a network and to suggest the most likely order of interaction. Our work shows that studying the evolution of small graphs over short time periods can reveal important information and yield predictions about network evolution.
| d | 8ae2b833e887c1e0ac90231c71592378 |
[leftmargin=*]
BPR {{cite:1673e942751f005707c7b6ddc83412df22284d1c}}. Bayesian personalized ranking devises the maximum posterior estimator from the Bayesian perspective. The BPR loss is computed as the relative difference between positive pairs and negative pairs.
LSTM {{cite:17a647e6c6296f508c025790c8e5c97a9c4d18c6}}. An LSTM based sequential recommender. We use the final hidden state as the user interest vector and an MLP predictor for click-through-rate prediction.
CNN. A CNN extracts features from the behavior sequence: we use kernels with various window sizes and apply global pooling on the extracted features to obtain the user interest vector. As above, an MLP predictor is used for click-through-rate prediction.
NCF {{cite:e78885e16475d78d1e22ac34113507f45acded2d}}. NCF is a collaborative filtering recommender equipped with deep neural networks, which offer greater representational power than the traditional inner product.
ATRank {{cite:a4832b7b003dbfceac6eb37846e77e4a605c9833}}. ATRank is a ranking recommender that comprehensively leverages the power of attention mechanisms.
THACIL {{cite:db111d57460ef75765a28a097a4805fa287e3895}}. THACIL divides the historical behaviors into multiple temporal blocks and captures both the intra-block and inter-block correlations. In addition, THACIL considers both category-level and item-level user interests for representing users' interests.
ALPINE {{cite:b44f17a83b208810320ee94be655a1c3c9be92d0}}. The essence of ALPINE lies in the temporal graph-based LSTM module, which captures dynamic interests within the temporal behavior graph, and the multi-level interest modeling layer, which models multi-type behaviors.
MTIN {{cite:36662d01dc28f0e23bf3a9baa2866588d9beb3cb}}. To investigate the multi-scale time effects and item-to-interest grouping problem, MTIN devises the time-aware parallel masks and the group routing algorithm.
| m | e39695ee2d30ed095b444f9f28eb6a68 |
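The pairwise BPR objective described for the first baseline above can be sketched as follows; the score arrays are hypothetical model outputs, not taken from any of the cited recommenders.

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """Bayesian personalized ranking loss: -log sigmoid(pos - neg), averaged.

    pos_scores / neg_scores hold the model's predicted scores for observed
    (positive) items and sampled unobserved (negative) items, respectively.
    """
    x = pos_scores - neg_scores
    # -log(sigmoid(x)) written stably as log(1 + exp(-x))
    return np.mean(np.logaddexp(0.0, -x))

# A ranking that places positives above negatives yields a small loss;
# a tie gives exactly ln 2.
print(bpr_loss(np.array([3.0, 2.5]), np.array([0.5, -1.0])))  # well-ranked, near 0
print(bpr_loss(np.array([0.0]), np.array([0.0])))             # tie -> ln 2
```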
In tab:results, we present development set results for all investigated alternatives, as well as test set results for our top-5 architectures as found on the development set.
In addition to the standard score metric of ExVo-FEW-SHOT, the average CCC of all ten emotions ({{formula:7334de40-42f4-41d5-a1c9-089ca6f5718a}} ), we also show {{formula:2f1c197a-75d3-4b2f-9c61-329333e5fa31}} confidence intervals (CIs) for the development set, obtained via 1000-sample bootstrapping (with replacement), as well as the relative gain over our baseline.
Our first observation is that our baseline, which consists solely of CNN14, vastly outperforms the official ExVo-FEW-SHOT baseline, which relied on 1D-CLSTM networks trained on raw audio inputs {{cite:0106be17a8610d73bea378a612a707d84ae69566}}, with a {{formula:ca673289-9135-4981-9904-be89615f74a6}} of {{formula:a2b3b0d9-0fdc-4523-8393-20040a18c3e7}} on the test set, compared to a baseline of {{formula:b01c3931-f25b-4380-b42f-332eae38b816}} .
This indicates that 2D-CNN architectures based on log-Mel spectrograms might be more suitable for the analysis of emotional vocalisations.
Interestingly, inducing speaker invariance by adding an auxiliary adversarial speaker classification output to the main emotion encoder branch ({{formula:83a40b19-c3fa-43d8-9431-16d97c0166ac}} ) substantially reduces development set performance.
| r | 0d899b568f75254e7f06f7d32983ebd6 |
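The evaluation protocol above (CCC scores with percentile-bootstrap CIs from 1000 resamples with replacement) can be sketched as follows; y_true and y_pred are synthetic stand-ins for a single emotion dimension.

```python
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient between two 1-D arrays."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def bootstrap_ci(x, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for CCC, resampling index pairs with replacement."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.array([ccc(x[idx], y[idx])
                      for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
y_true = rng.normal(size=200)
y_pred = y_true + 0.3 * rng.normal(size=200)   # a reasonably concordant predictor
lo, hi = bootstrap_ci(y_true, y_pred)
print(lo, hi)   # CI well above 0 for concordant predictions
```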
Even though many testing techniques exist {{cite:151c9e42fefc17eca9a14a25dd0f654d3a348e34}}, there has been a dearth of comprehensive ML model testing tools that work across modalities and model types and go beyond generalizability. A few toolkits exist, such as AIF360 {{cite:265c4f165686f2af53fb5fb42b758403a71db82c}} and CHECKLIST {{cite:54ac4f260767aa7d6fa916ffb9381439163c2e47}}, but each concentrates on a single property (e.g., fairness) or a single modality (e.g., text); a comprehensive set of testing algorithms under a common framework is scarce.
| i | 751040406955f83c3950090fb11b52cc |
By studying the critical exponents at the second order phase transition, we
found that 2d lattices of stochastic integrate-and-fire neurons are compatible
with the Directed Percolation universality class. We then proposed the topology
of two coupled square lattices to increase the dynamic range of a retina-like
sensor. The first one receives Poisson inputs at rate {{formula:c655f547-2aff-4094-9db4-8a79847afc31}} , and represents it as
a neuronal activity {{formula:148aa15e-7b78-42b9-a6aa-23cb65760c7d}} , with
{{formula:b5d949d7-ed7d-4623-9922-d6ff1731b113}} . This activity is passed, by a fraction
{{formula:e28e3fae-5ba5-46da-b4f3-d66556e1f7bf}} of neurons, to the second layer which then presents an output activity
{{formula:8c5ca1b6-fdd1-4ee0-bd92-59b271e0faea}} . The final Stevens's exponent of the system is
{{formula:5e626eab-cc81-41b5-9ed5-92c7c0f67a64}} . Thus, the exponent
relation Eq. (REF ) proposed in {{cite:5e45e239b8604c53e6cc1c734b3a82e1693a2969}} seems to be valid,
regardless of topology, as long as the stimulus intensity is moderate: the power
law response holds only before a saturating regime (a Hill-like curve) that is also
found in biological sensors.
| d | ffc2d1292373d64a97b9947acfffa8a9 |
A second quality of minion tests is that their soundness can be checked algebraically, as stated in the next proposition and shown easily using a compactness argument from {{cite:cede99f97e39960ca3e630da7c35cc7747ad59ce}}, cf. {{cite:cea2cfd3408d989f9158cfc4dc5951a7f16c3f0e}}.
| r | 8aa3f20a76bddbd51e097c13c6284afa |
Using the one-dimensional stellar evolution code mesa (section ) we simulated the response of RSG stellar models to the injection of energy into the outer envelope and to mass removal. We list the values of the five cases in Fig. REF . We take these values from the three-dimensional hydrodynamical simulations of {{cite:ee8b397f0ad5243e418d0abb009470684a2e2bf5}}. They simulated CEJSN impostor events where a NS on a highly eccentric orbit enters the RSG envelope for about half a year and then exits, to return after an orbital period of {{formula:e9521841-cafa-424e-9a41-b5b3b569df01}} . Because of numerical limitations we could not simulate the most energetic cases that {{cite:ee8b397f0ad5243e418d0abb009470684a2e2bf5}} simulated. We therefore analysed in detail the simulation S(47,0.03,1), for which we present the density and mass profiles in Figs. REF and REF , respectively.
| d | ca44a6bf44eec8a8797a4dcd4591afe7 |
where {{formula:8e87cf13-22b4-4cef-8da2-1497eab2d738}} indicates the features after applying the attention module, and {{formula:616fd74b-57e1-41f6-b74f-751455ff3903}} is a parameter which is initialized to zero and learnt during training {{cite:a3bebc04219ef457924bf23c56e227d1afaa348b}}. The method in {{cite:a3bebc04219ef457924bf23c56e227d1afaa348b}} used a channel attention model for segmentation, in which the channel weights are obtained from the features themselves via a self-attention approach. In our method, however, the weight {{formula:222c4f33-08b0-4f26-9668-4352ee80ffbb}} is the classification probability computed in the classification branch. This approach is expected to reduce false positives, as the correct classification output {{formula:55dd3294-16ba-4403-b403-2471b883569f}} for a non-fire image is close to zero, so it attenuates the activation of the segmentation output {{formula:395b2da4-29d9-49f1-894b-d6f9bc4f2f72}} . In the case of fire, a value of {{formula:01b6ab8d-ddb3-43cb-a5fb-33b906e55aba}} close to one helps to recognize even small portions of fire in images. This also encourages consistency between the segmentation and classification outputs.
{{figure:c7fea6c7-1d41-40eb-a11d-2393a2f748ed}} | m | bcda7d70816cbaf5bd672dfbafdb6a88 |
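A minimal sketch of the classification-gated segmentation described above, assuming a sigmoid segmentation head and a scalar fire probability from the classification branch; the shapes and values are illustrative, not those of the paper's network.

```python
import numpy as np

def gated_segmentation(seg_logits, cls_prob):
    """Gate a sigmoid segmentation map by the image-level fire probability.

    seg_logits: (H, W) raw segmentation scores; cls_prob: scalar in [0, 1]
    from the classification branch. A near-zero cls_prob suppresses the
    whole mask, reducing false positives on non-fire images.
    """
    seg = 1.0 / (1.0 + np.exp(-seg_logits))   # per-pixel probabilities
    return cls_prob * seg

seg_logits = np.array([[4.0, -4.0], [2.0, -2.0]])
print(gated_segmentation(seg_logits, 0.99).max())  # fire image: mask kept
print(gated_segmentation(seg_logits, 0.01).max())  # non-fire: mask suppressed
```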
Agent-based models (ABMs) are built by describing agents (individual and active subjects), the interrelationships between them, and the environment in which these interactions occur {{cite:0053bb8cc89d47c5a3d1771f8c6ddb7d1fb7bbc6}}. ABMs are artificial simulations run in a computational environment. They mimic the core mechanisms of the phenomenon under examination {{cite:bf9bff4379ecb4a5ddd55b2e390e5309dde274f4}}. ABMs are therefore simulacra: artificial, in silico constructs reduced to the core aspects of the phenomenon, built for a well-defined purpose.
| m | 0305bb2860ad3ca85ac468f978cdfa06 |
These classes of Boolean functions have been introduced by Kauffman {{cite:3699b25fa3e159e420917fcd3f69824f5c783d18}}, {{cite:0985aab16d25068aebb57cdb65c59cad851b3927}} to formalize the “canalizing” behaviour observed in some discrete systems. This idea is also at the basis of Waddington's work in embryology: he described an epigenetic landscape guiding embryogenesis by canalizing configurations {{cite:9fd89af1bee83a038cccbe7859c1e710dd00eef6}}.
| i | 0af7af9b39fa038e6b1b2ee3a939bf2a |
We want to give one possible intuitive way of understanding why the disk+crosscap contributions to two-point correlation functions do not decay over time here. Before doing that we review an intuitive understanding of the handle-disk given by Saad {{cite:1826a3a173e700c9b2e88bf938a87308a65028d6}}. A handle-disk can be viewed as a baby universe being emitted by one Hartle-Hawking state and then being reabsorbed by another Hartle-Hawking state. At late time {{formula:48095459-1a74-44d1-99b4-1a9b069c9dda}} , the Einstein-Rosen bridge (ERB) of a Hartle-Hawking state becomes very long (proportional to {{formula:d29c6dd3-1d4e-4972-8afb-703f7561d10d}} ), so its overlap with a second Hartle-Hawking state is small thus causing a decay with time. If a baby universe is emitted, it takes away most of ERB length and leaves behind a short ERB, so the handle-disk gives a non-decaying contribution to two-point correlation functions. The number of ways to match the two baby universes is proportional to {{formula:f27e0f90-a406-4e2f-bc55-3ddea987b683}} .
| d | 9441426f07a23637a689db95d4b7702a |
These games were studied by Roth et al. in {{cite:9cf3177b2e77fca8090cdda3c0599b71de8fc693}}, and they showed an FPTAS for computing an {{formula:3c80a9b8-55f7-4503-9c5f-737db6eda628}} -equilibrium.
Their result provided a theoretical separation between the alternating move model and the simultaneous move model, since for the latter it is known that an
FPTAS for computing approximate equilibria does not exist for games with {{formula:71798e20-fd53-4f8f-a3ce-7d0743813a68}} players unless P=PPAD.
Their result was obtained by a simple reduction to mean-payoff games on graphs.
These games were presented in {{cite:8b93e05673b539e42fad84d0188a190cf2b652ef}}, and they play an important role in automata theory and in economics.
The computational complexity of finding an exact equilibrium for such games is a long-standing open problem, and despite many efforts {{cite:ad971ae420d81854563a877501b7023298b8452e}}, {{cite:4fa7fd5660650363e21248e629bd04881958d3cc}}, {{cite:899918cd5b6a44cda3782831f8a201f9c2520601}}, {{cite:b19b0632e196060ef18f9f144188039fba0f52e7}}, {{cite:a48f4a02f9454777b17fc6c6afcdef72c000dfbd}}, {{cite:29238b8d2db1daed7549f4e88cc3eee6c5de6436}}, there is no known polynomial-time algorithm for this problem.
| i | 3d15ae61889d50c9a5c8dcc7669ae636 |
In portfolio management, one of the key questions that most investors want to address is how to find an “optimal" asset allocation fraction so that the desired risk-reward objective can be achieved.
To address this, {{cite:8080c50e0f3e42af2e10a04c238841f42c5297c3}} and {{cite:72fed22c87f00040a97cb2031db2f5eebbd4cb6f}} propose the celebrated mean-variance model in a single-period setting.
Since then, many extensions and ramifications have been developed along the lines of portfolio theory and optimization; e.g., see {{cite:e3fc348027e3439757e3afecf3f5ddf92be6409e}}, {{cite:62f835d9306a3542a6e475a0549574e3db215e9a}}, {{cite:6ddc04bf5a068b1427e5cc75a3507122de339f7a}}, {{cite:25a606f03a137a16087cb6567b6a9dd86d6125b0}}, {{cite:5333095bd9ca213cc0013deec23007bd2dbbaf59}}.
A good survey on this topic can be found in {{cite:50d083d6ba6da870c542fdf612f9ab71d1bccf3f}}.
However, the Markowitz-style approach is static in the sense that it only optimizes for the next rebalancing.
| i | 4fac83a36880144d74c8905d081a181e |
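As a small worked instance of the single-period mean-variance family discussed above: the fully invested minimum-variance portfolio has the textbook closed form w = C^{-1} 1 / (1' C^{-1} 1); the covariance matrix below is an illustrative choice.

```python
import numpy as np

def min_variance_weights(cov):
    """Fully invested minimum-variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve C w = 1 instead of inverting C
    return w / w.sum()

# Two uncorrelated assets: the lower-variance asset gets the larger weight.
cov = np.diag([0.04, 0.01])          # 20% and 10% volatility
w = min_variance_weights(cov)
print(w)                             # -> [0.2 0.8]
```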
In the above inequality, the logarithmic factor appears as a residual of the result obtained before sparsification, in contrast to the bounds (REF ) and (REF ), which do not explicitly depend on the size of the initial sample {{formula:b1ec67b9-dae2-4c5d-8f7e-4f4bff6f50c0}} . This results in a gap of
a factor {{formula:9e1f69dd-0705-410e-b9f9-e07366d62b3a}} between (REF ) and known lower bounds for {{formula:2d922484-c516-465f-bd3a-ce64329ee8ec}} , see {{cite:df039b19343df14afef59684938eae5a459b35a2}}.
| r | 574589fe9f7e39220f07d3a0bb7eab36 |
Consequently, to solve this problem, we propose an intermediate supervision mechanism to enhance the small-object learning capability of U-Net based deep segmentation models.
Intuitively, we believe this enhancement can be achieved by adding additional intermediate supervision signals for the coarse small-object features {{cite:7291d31ff8537eef1c67451eb2c323beb340f3c7}}.
We denote the deep model that integrates U-Net with intermediate supervision mechanism as Inter-U-Net.
| i | 57adfced5b56e0ae94555a914836abbd |
Some solutions of the Einstein's equations of General Relativity, the so-called wormholes, initially suggested in the works of Flamm {{cite:01f1f5d5891e0b7150db6cbb615daf2f3330eef9}} and Weyl {{cite:d3e5a3ce94983f68867fcd996f3a2b401b6dd6cc}}, {{cite:5268b18cf6ad58b6fca432fb8d5a89224c09f054}}, represent a tunnel or throat that interconnects two regions of the world {{cite:3eb7251c6de1c33c43c581529ce1d2e5107e5650}}. The first solutions found (Einstein and Rosen {{cite:310e6f0ac306186c92a8ad4978a05b3d3a4f0be4}}, Wheeler {{cite:f3934435415dbdf54075821151ee689db288f1c1}} and Kerr {{cite:cea10a1515aec93c4c4c8a0b2c11a8a35ff55e99}}, {{cite:3796bfe4f5576df6b00d6fc764cfd3fe45179597}}, {{cite:98bed14e4a83ea61d1ad325a061e9ea868f42fc2}}) were, in general, non-traversable. However, considerable progress was made when Michael Morris and Kip Thorne, considering static, spherically symmetric and non-rotating wormholes, found a traversable solution {{cite:b36a15c8e51fd945ba375b7d6cc8e58d865e0309}}. It is worth remarking that the current surge of research on wormholes has its principal motivation in the discovery of the deep connection that exists between these objects and quantum entanglement {{cite:4b7fe61f48fb213a45e91d9bf637c980dfd2404d}}.
| i | 602fa22191d339d3be03458e442d066a |
There is no straightforward method to incorporate these violations in the shower formalism, since they explicitly break unitarity.
We note that, while Bloch-Nordsieck violations are not particularly significant at the LHC {{cite:7808657692d96333b8b1d7ab7c95224def028ae0}}, {{cite:69a5779adbc5e19f56308eacbcf15af0021a6293}}, they will be important at future collider energies and a comprehensive treatment will be necessary.
| d | 3c590407f6f184eae92b87d950eaf240 |
The input to the constraint satisfaction problem (CSP) is a set of
variables, each ranging over a specified domain of values, as well as
a set of constraints, each binding a finite set of variables to take
values in a relation from a specified set of relations. The problem
asks to find an assignment of values to the variables in such a way
that all the constraints are satisfied. It was first pointed out by
Feder and Vardi {{cite:948a7651a15daa3f258eec4432c1c615e2c5a8b9}} that the CSP can be modelled as
the homomorphism problem for relational structures. In this view, the
input to the CSP is a pair of relational structures, the
instance {{formula:a62d0122-98ae-40c0-ac17-2c85a54e41f4}} and the constraint language {{formula:264a5baa-719b-4cf0-805d-359398e83380}} , and we
are asked to find a homomorphism from {{formula:48fff9c0-89fe-42df-ad5d-7533f39e58b3}} to {{formula:0f28abe8-c8a0-474b-bc3d-4afb316f016d}} ; i.e., a map from
the domain of {{formula:b3a09cda-7e0f-4c9b-809a-7de23a515b09}} to the domain of {{formula:9528c051-3895-4690-96e7-2ac5e81c249b}} in such a way that all the
relations are preserved. In the fixed-template variant of the
problem, the constraint language is fixed and part of the definition
of the problem, and the instance is the only input. The fixed-template
CSP with template {{formula:b764de6c-ff52-4601-866c-11cd5ae47d5e}} is denoted by {{formula:86f08aae-e441-42e8-9798-114db12f5992}} .
| i | 907acce06276a7b475edbc1e03a88a3d |
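The homomorphism view of the CSP described above can be made concrete with a brute-force search for a relation-preserving map. In the sketch below the template is the complete graph K3 (with symmetric directed edges), so CSP(K3) is graph 3-colourability; the instance graphs are illustrative.

```python
from itertools import product

def has_homomorphism(instance_edges, template_edges, domain):
    """Brute-force CSP(H): search for a map preserving every (directed) edge."""
    vertices = sorted({v for e in instance_edges for v in e})
    for assignment in product(domain, repeat=len(vertices)):
        h = dict(zip(vertices, assignment))
        if all((h[u], h[v]) in template_edges for u, v in instance_edges):
            return True
    return False

# Template K3 (both edge directions): CSP(K3) is graph 3-colourability.
k3 = {(a, b) for a in range(3) for b in range(3) if a != b}
path = [(0, 1), (1, 2), (2, 3)]                              # 3-colourable
k4 = [(a, b) for a in range(4) for b in range(4) if a != b]  # needs 4 colours
print(has_homomorphism(path, k3, range(3)))   # -> True
print(has_homomorphism(k4, k3, range(3)))     # -> False
```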
We tackle this question by considering {{formula:4cbe0b35-55b7-40ee-9ccd-605ee2ce6bab}} , the category of partial groups (see Section for definitions). Partial groups, defined by Chermak {{cite:88bdabe16c6c2aa74fba628a7288821785d02a8d}}, generalise the concept of group and are introduced as a setting for the study of the {{formula:1bbea40f-5cb9-4bad-94ba-b48f969c002c}} -local structure of finite groups within the framework of fusions systems as defined by Broto-Levi-Oliver {{cite:edab42a548a8a241455c2c738ffb52c8c385b1c6}} (see also the monographies {{cite:f5c135e2805420d469080db5bf2efa517443195a}} and {{cite:9d02ae09522575aeec3af25b500900814446b4a2}}). Every group gives rise to a partial group in a natural way {{cite:88bdabe16c6c2aa74fba628a7288821785d02a8d}} and so {{formula:03e5689e-e984-4873-8dc9-b8cff116d422}} is fully embedded in {{formula:20a8d499-ed7f-464c-a0ab-8639f18a11c9}} . Therefore we answer Question REF in the positive by proving
| i | 75279606848ebcfcdc75c491e67ef153 |
A natural extension of our work is the investigation of Absolute Negativity in sets of quantum channels. An open question for such research would be to determine the practical advantages of non-zero AN channels over channels described by a stochastic transition matrix for a given frame representation (see {{cite:e8c53e790e7e2f26ad60835045384caa6df50f4d}}, {{cite:27bcf0dfd89d1fa72fe8081d585f624d87cfd91e}}). However, we believe that the extension of our study to quantum channels deserves independent investigation, because of the phenomenological richness of quantum channels. For example, an investigation of channel resources can vary dramatically depending on whether its composition is sequential or parallel {{cite:1f8553e631b32cb6a64336ad7339e86a45814358}}. Some fields that could benefit from such a study include magic state computation and shallow quantum circuits whose quantum advantages require contextuality and therefore negativity {{cite:88b80a0b649cdeee3c56d83bbca18ea1cf37ab6d}}, {{cite:8a01c273756780959740344f2a58380c2364c9cf}}.
| d | 277cac618b4886925d7fa91c0fc31b87 |
Models by {{cite:801d6f40d7af80afd03e85086317d3536e9aabcd}} and {{cite:ba8069073734dc237f4006669c01e4dc71a6efd0}} predict higher metallicities than our estimates (see Fig. REF ).
This could be due to the H I density values used as input in the models of these authors, rather than an incorrect
selection of the star formation parameters, which control the enrichment of the ISM. The highest discrepancy is found for the
model evolution by {{cite:d697855ff21053aff4505211e6bf99b91e4f694d}}, which shows higher values of {{formula:21c1ac3b-7bca-4791-b3e7-36dd09f5ebc2}} than the ones derived by us. Interestingly, the results from
all these chemical evolution models infer a solar metallicity for the Local Universe, except for the one by {{cite:d697855ff21053aff4505211e6bf99b91e4f694d}}.
{{figure:131bc21b-dc0b-4ebd-a49c-83ea333c12e1}} | d | 21223831c35b3001416ffb9dd5c0e4a7 |
We empirically evaluated the proposed “multi-pose hypotheses” approach on the recently published Human3.6M dataset {{cite:46741318e6d51934bcfbe3fbdef60d0d7226176f}}.
For evaluation, we used images from all 4 cameras and all 15 actions associated with 7 subjects for whom ground-truth 3D poses were provided, namely subjects S1, S5, S6, S7, S8, S9, and S11.
The original videos (50 fps) were downsampled (in order to reduce the correlation of consecutive frames) to build a dataset of 26385 images. For further evaluation, we also built two rotation datasets by rotating H36M images by 30 and 60 degrees. We evaluated the performance by the mean per-joint error (in millimeters) in 3D by comparing the reconstructed pose hypotheses against the ground truth. The error was calculated up to a similarity transformation obtained by Procrustes alignment. The results are summarized in Table REF for various methods and actions. For a fair comparison, the limb lengths of the reconstructed poses from all methods were scaled to match the limb lengths of the ground-truth pose. The bone length matching obviously lowers the mean joint errors but makes no difference in our comparisons. One can see that the best (lowest Euclidean distance from the ground-truth pose) out of only 5 generated hypotheses by using {{cite:1de5dae93fb403bcea5ad23c6fe674b1a472161b}} as baseline for 3D torso and projection matrix estimation is considerably better than the single 3D pose output by {{cite:1de5dae93fb403bcea5ad23c6fe674b1a472161b}} for all actions. We also used the 2D-to-3D pose estimator by Zhou et al. {{cite:103a348658ab6c9432c762f78a78fdbaa2824bfb}} with convex-relaxation as baseline and observed considerable improvement compared to {{cite:1de5dae93fb403bcea5ad23c6fe674b1a472161b}} in both 3D pose and projection matrix estimation. Using {{cite:103a348658ab6c9432c762f78a78fdbaa2824bfb}} as baseline to estimate the 3D torso and projection matrix we generated multiple 3D pose hypotheses. Since the accuracy of {{cite:103a348658ab6c9432c762f78a78fdbaa2824bfb}} is already high, the best out of 5 pose hypotheses cannot significantly lower the average joint distance from the single 3D pose output by {{cite:103a348658ab6c9432c762f78a78fdbaa2824bfb}}. However, by increasing the number of hypotheses we started to observe improvement.
Table REF also includes the best hypothesis out of conditional samples from only the first diversification stage, i.e., diversifying conditional samples without kmeans++ clustering (shown as No KM++), using {{cite:103a348658ab6c9432c762f78a78fdbaa2824bfb}} as base. This achieves the lowest joint error in comparison to the other baselines. The pose hypotheses can be generated very quickly ({{formula:5c5eed8f-4199-40ae-9ff8-9b508c4ee32d}} 2 seconds) in Matlab on an Intel i7-4790K processor.
{{figure:aa4061c9-2338-47fc-9517-8d2b631da2e8}}{{table:ac7331e2-9828-48f2-b3c3-b7d2e03f4695}} | r | 6a3c9014d81bed5f2619f650254af8bd |
Indeed, capture into the 3:2 resonance becomes certain at {{formula:a3172381-d5f7-44a3-bd75-9d03be41acaa}} for {{formula:eb8f6837-8c76-48d8-a18a-7404fa6abf78}} yr, and {{formula:b3ea521b-a9a6-4eb6-930b-adc3da60991c}}
for {{formula:8d613e89-3d17-4e80-ba2c-66b12a274f80}} yr (Fig. REF ). The simulations by {{cite:a893ea3ad28fab39cfc94e624d40fb177237ab8c}} suggest that the orbital eccentricity
acquires much higher values shortly after the onset of the evection resonance. Furthermore, these probabilities
are computed for the current mean motion of the Moon, whereas the giant impact theory implies much smaller
orbits for the early Moon, down to {{formula:342f99cd-5900-4ca8-9f7b-b2d4faf22f26}} . Somewhat counter-intuitively, the probabilities of capture
into a spin-orbit resonance become smaller for tighter orbits, all other parameters being the same. For
example, the probability of capture into 3:2 is only {{formula:ba0a6a93-bfe3-4112-9f4a-51ab811b8e2e}} for the Moon at {{formula:ec703d5b-b12c-459e-ac68-be0791d863f1}} , {{formula:230bd777-4f54-4e52-9ef3-19ce8ffae27a}} yr
and {{formula:34bc2831-e86e-48db-8276-a0f07484cf1a}} . Could the Moon traverse the 3:2 resonance (and all the higher resonances) while it was
still very close to the Earth? Our calculations show that the Moon is inevitably entrapped in the 3:2
resonance at {{formula:69948602-f860-45ab-8d58-1dc15c3200d1}} , if the eccentricity exceeds {{formula:6cfe62dc-11e4-4813-9c0e-e594786dc1d6}} . But the evection mechanism quickly boosts
the orbital eccentricity to much higher values, up to {{formula:8bd3e204-0a3c-44ce-80a2-68f1d1cfd448}} . Therefore, the only realistic possibility
for the Moon to avoid the 3:2 resonance within the giant impact scenario is to spin down to its present-day
synchronous state before the onset of the perigee precession resonance. This may take, depending on the initial
spin rate, up to 10 Kyr. This scenario also requires that the Moon remains fairly cold and viscous during
this pre-evection stage, which, due to the proximity to the Earth, may prove another hard problem.
| d | 214bd749fdf329b7abab7c9fb30c43c7 |
Finally, as in our previous work, we implement the bounding method of Refs. {{cite:8935f38d25480fe73d7c0281f1c519300a4d8284}}, {{cite:c3fd5d72d246247ea0320434b6f9928e3f2a9645}} to further reduce statistical errors on the extraction of {{formula:dd9c456f-7ec2-4509-9303-576fcda5912e}} .
Here we use the fact that the correlator of Eq. (REF ) has a lower bound of 0 for {{formula:71654217-c05b-4924-847f-f63629d2f19d}} and an upper bound of
{{formula:30a5126c-d47a-4bf2-ae3e-991cdb01e63f}} ,
where {{formula:19262da5-ca47-4644-8e03-bce617bb0586}} is the lowest (two-pion) energy state in the vector channel.
For sufficiently large {{formula:f7dec012-837d-40d7-bacd-3a230da3dd70}} , the two bounds
overlap to give a more precise
result for {{formula:835e0d67-0a94-42ea-ba80-1107816a83a6}} than we would obtain by summing over the long-distance tail.
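As an illustration, the bounding trick can be sketched numerically; the correlator values, the energy `E0`, and the cut `t_star` below are toy placeholders, and the kernel weighting used in the actual analysis is omitted.

```python
import math

def bounded_tail_sum(C, t_star, E0, T):
    """Sum a Euclidean correlator C(t) up to t_star, then replace the
    noisy long-distance tail by its two bounds: 0 from below, and
    C(t_star) * exp(-E0 * (t - t_star)) from above, with E0 the lowest
    (two-pion) energy in the channel."""
    partial = sum(C[t] for t in range(t_star + 1))
    lower = partial  # tail bounded below by 0
    upper = partial + sum(C[t_star] * math.exp(-E0 * (t - t_star))
                          for t in range(t_star + 1, T))
    return lower, upper

# Toy correlator: a single decaying exponential with energy E0, so the
# upper bound reproduces the exact tail and the two bounds bracket the sum.
E0, T = 0.3, 64
C = [math.exp(-E0 * t) for t in range(T)]
lo, hi = bounded_tail_sum(C, t_star=20, E0=E0, T=T)
full = sum(C)
```

For sufficiently large `t_star` the two bounds overlap within statistical errors, which is the criterion described in the text.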
| m | 937c74ddb68d1ae7a3576b0054556e5a |
Basic {{cite:12ec74f405f3816a9cd63fd1ab7c812090543d89}}, {{cite:5c1ce2117a0b1cfefbc7b08a62b8f0507cb6fb4e}}, {{cite:03a23d2a8aa915fc06fc9457fc6cc4aab39a6809}}, {{cite:811d4f5d6f018b6daa068fbbc87d549a89582814}}, {{cite:ec05b3ac62ecf3f313d683742865108ab80d3400}}, {{cite:ec64f02fc142aca2601ae423601dd1ac5272905f}}, {{cite:755fa4d81626cc0465f1e6971d8336679cced7cf}}, {{cite:11b9b2f587b7a620c26f4fd30af8d9f25e66d486}}, {{cite:cc44f8481c2689e6ffa9374df990122caeb2dee4}}
| m | 19ed88cce03a1a79209bc508910788af |
In this part, we demonstrate the GP approach for different surfaces, assuming a Gaussian noise model with a constant parameter ({{formula:32ca2544-e89b-4d93-901a-c0642c11d880}} ) in all cases. In the first two subsections, we simulate ground and honed surfaces by manually designing ACVFs; in the last subsection, we use the method of {{cite:9afe1cfdc8072876ff518177313f7eb2d93c3b33}} to automatically reconstruct ACVFs of turned surfaces from measurement data.
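As a minimal illustration of this setup (the names and the exponential ACVF below are our own choices, not the ones used in the experiments), one can draw a noisy profile from a zero-mean GP with a hand-designed ACVF and a constant Gaussian noise level:

```python
import numpy as np

def simulate_profile(acvf, x, noise_sd, rng):
    """Draw one surface profile from a zero-mean stationary GP whose
    autocovariance function is `acvf`, with i.i.d. Gaussian measurement
    noise of constant standard deviation `noise_sd`."""
    K = acvf(np.abs(x[:, None] - x[None, :]))  # covariance matrix from the ACVF
    K += 1e-9 * np.eye(len(x))                 # numerical jitter
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal(len(x)) + noise_sd * rng.standard_normal(len(x))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
acvf = lambda h: np.exp(-h / 0.05)             # hand-designed (exponential) ACVF
z = simulate_profile(acvf, x, noise_sd=0.01, rng=rng)
```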
| r | e51e1e7e8f73a20539937221867e9f49 |
Reinforcement learning cannot learn anything without encountering a desirable reward signal during the exploration phase {{cite:7385d5dc397de54283e681abd6919fdc4661c87b}}. In our case, because a random agent never scores a goal, our baseline article {{cite:dc110d0e9cfb97b77a265a5c287b8d07a1ff882e}}, which uses a success/failure reward signal, fails to learn the task. To overcome this issue, we use a prediction-based exploration strategy called “Curious Exploration”. However, this exploration strategy only reduces the problem of a completely absent positive reward signal to a sparse one. To deal with the sparsity, we modify the replay memory so that it tends to retain more promising memories; we call this “Return-based Memory Restoration”. The complete source code for our agent is available at https://github.com/SaeedTafazzol/curious_explorer. In the following subsections, we describe the two approaches in more detail.
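A rough sketch of the two ideas, with hypothetical names and a simplified eviction rule (the actual agent and memory mechanism are more involved):

```python
import random

class CuriosityBuffer:
    """Replay memory that preferentially retains high-return episodes
    when full -- a crude stand-in for return-based memory restoration."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.episodes = []  # list of (episode_return, transitions)

    def store(self, transitions, episode_return):
        self.episodes.append((episode_return, transitions))
        if len(self.episodes) > self.capacity:
            # drop the lowest-return episode instead of the oldest one
            self.episodes.sort(key=lambda e: e[0])
            self.episodes.pop(0)

    def sample(self):
        return random.choice(self.episodes)[1]

def intrinsic_reward(pred_next_state, next_state):
    # curiosity bonus: squared prediction error of a forward model
    return sum((p - s) ** 2 for p, s in zip(pred_next_state, next_state))
```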
| m | 01c82f8b6a8bbdfd3a6e96e2b3e40153 |
We additionally present this work as an attempt to renew interest in SSL objectives which operate without multiple inferences of a transformed image, such as Deep InfoMax {{cite:a2169faf5556ac090f6b833ef3d2c8e463449471}} and Greedy InfoMax {{cite:1d409ae367537d19fb1f1c1bba714b29daaa3c06}}, by allowing them to exploit the theoretical foundations developed for multi-view SSL {{cite:5f642b88d316590db8092915ba0b8ff1323df832}}, {{cite:6cb805ab230532d7c6b224597f77790441e464df}}, {{cite:4ef1872cfe1b4f982a6ec11589e76f35bdf51322}}, {{cite:12e48b8d0a447107f915680afa8821563f4a0210}}, {{cite:2eca9119114cfe4b43f031b50a0e69b38d19ca50}}, {{cite:e3b850e9140a99d5a07a9f33ed32c8f57614796a}}. Although DIM-like methods have to-date not yielded the same performance as their A-SSL counterparts, we believe the coupling between objective and network architecture is likely to yield more parallelizable algorithms which are therefore more scalable and biologically plausible {{cite:1d409ae367537d19fb1f1c1bba714b29daaa3c06}}.
| d | b53b0282ec193f932bd333bf69eab1e9 |
In the spherical thermal convection system, multiple unstable modes, originally degenerate under homogeneity, appear simultaneously at a critical Grashof number, which estimates the ratio of the buoyancy to the viscous force.
Various types of highly symmetric steady states, invariant under sets of point-group transformations such as axisymmetric or polyhedral patterns, bifurcate from the static state via nonlinear interactions between these modes; thus, the isotropy inherent in the system is broken first{{cite:19cc79493ceb19661ff9addc29fc1c5db7b08d0b}}, {{cite:b40bf69f8aafe0b9d77c30ee1d881651c09802fe}}, {{cite:c040bfc51e1806afaf171ce337b41c09b6b252a4}}.
An arbitrary incompressible field may be decomposed into toroidal and poloidal components in the radial direction.
The velocity fields realizable over the threshold consist only of the poloidal field, because of the absence of an energy source term that sustains the toroidal component{{cite:4b7d3fc8d7a14b2a577c81764b8f746f790bf660}}.
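Written out, the toroidal-poloidal decomposition of a solenoidal velocity field in terms of two scalar potentials, T (toroidal) and P (poloidal), reads

```latex
\mathbf{u} \;=\; \nabla \times \left( T\,\mathbf{r} \right)
\;+\; \nabla \times \nabla \times \left( P\,\mathbf{r} \right),
\qquad \nabla \cdot \mathbf{u} = 0 ,
```

and the net angular momentum of the fluid resides entirely in the toroidal part.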
This context is analogous to the question of how super-rotation is sustained on Venus, and to the deduction that a velocity field that bifurcates sequentially under a quasistatic increase in the Grashof number never acquires any net angular momentum.
The present study provided a counterexample against such an intuitive deduction, and showed that spherical thermal convection is a realizable system that can sustain a steady state with no momentum exchange between the fluid and boundaries, as presented in Ref. {{cite:7ef707a67bd34b56c7f958f44356059126b1bdc1}}.
| d | 88cec575f46a76d5bebee90bc7726d73 |
The crop row detection algorithm is based on the U-Net {{cite:0460b10d11e4bf3344cd5c01a3ae8626564844c8}} architecture. U-Net is one of the most popular image segmentation algorithms, known for its ability to be trained with smaller amounts of data and to make fast predictions. Being primarily intended for semantic segmentation, U-Net has been used in agricultural applications to identify the pixels belonging to plants in a given image {{cite:cdf13a8097da52616e817171b78003ccdebf3c0b}}, {{cite:c7017e292b025c67d274ba245944675580e9ea61}}.
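Downstream of the segmentation network, crop rows can be extracted from the predicted plant mask; the following is a crude post-processing sketch (column-sum peak picking with a hypothetical threshold), not the paper's actual detection pipeline:

```python
import numpy as np

def crop_row_columns(mask, min_fraction=0.5):
    """Given a binary plant mask (H x W) with roughly vertical crop rows,
    return candidate row centres as local maxima of the column sums.
    Illustrative only; `min_fraction` is a made-up threshold."""
    profile = mask.sum(axis=0).astype(float)
    thresh = min_fraction * profile.max()
    peaks = [c for c in range(1, mask.shape[1] - 1)
             if profile[c] >= thresh
             and profile[c] >= profile[c - 1]
             and profile[c] > profile[c + 1]]
    return peaks

# Synthetic mask with two vertical crop rows at columns 10 and 30.
mask = np.zeros((50, 40), dtype=int)
mask[:, 10] = 1
mask[:, 30] = 1
```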
| m | 0032b5aa6f1f7baad71403b4393ed8f0 |
In this work, we propose a new class of repulsive priors based on the type-III point process.
These are a class of repulsive point processes first studied in {{cite:cefddd9a9b25bf625eb5939925f3befedb78fa51}}, {{cite:09f7b070e771b1b36eeec04350bb82d167eae531}}.
More recently, {{cite:d7043c945b39a43140cc162fa1a3c1810769e2cd}} developed a simple and efficient Markov chain Monte Carlo (MCMC) sampling algorithm for a generalized type-III process (see sec:background).
In this paper, we bring this process to the setting of mixture models, using them as a repulsive prior over the number of components and their locations.
Treating the realization as a latent, rather than a fully observed, point process raises computational challenges that the algorithm from {{cite:d7043c945b39a43140cc162fa1a3c1810769e2cd}} does not handle.
We develop an efficient MCMC sampler for our model and demonstrate the practicality and flexibility of our proposed repulsive mixture model on a variety of datasets.
Our proposed algorithm is also useful in point process applications with missing observations, as well as for mixture models without repulsion, as an alternative to often hard-to-tune reversible jump MCMC methods {{cite:300864c1015554ec510d494427a2248dba7c6519}} to sample the unknown number of components.
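To give the flavor of a repulsive prior on component locations, the sketch below evaluates a generic pairwise-repulsion log-density; it is purely illustrative and is not the type-III construction used in the paper:

```python
import math

def log_repulsion(locations, tau=1.0):
    """Generic pairwise repulsion term: sums log(1 - exp(-d^2 / tau))
    over all pairs of component locations, so nearly coincident
    components get an arbitrarily negative log-density.
    Illustrative only; the paper's type-III construction differs."""
    total = 0.0
    n = len(locations)
    for i in range(n):
        for j in range(i + 1, n):
            d2 = (locations[i] - locations[j]) ** 2
            total += math.log1p(-math.exp(-d2 / tau))
    return total
```

Well-separated components incur almost no penalty, while close pairs are strongly penalized, which is the qualitative behaviour a repulsive mixture prior is meant to encode.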
| i | a419e0e06cac4d0632ee178b4f7e1412 |
In the presented scheme, the atom-cavity system is exploited to construct the CPF gate, which plays an important role in our complete HGSA scheme. Therefore, it is necessary for us to discuss the feasibility of the CPF gate with current technology. In 2004, Duan et al. proposed the CPF gate in theory, and their numerical simulations showed that the CPF gate is robust to practical noise and experimental imperfections in cavity-QED setups {{cite:60779daba2d70ecfd3db2bb8202162fc7b495415}}. Considering a neutral atom trapped in a Fabry-Perot cavity with {{formula:b1733926-0f33-47f2-8f42-3836c55f6391}} , one may calculate the gate fidelity {{formula:7ea78fa3-6f11-4cae-92fe-0bf67d820226}} to be 0.999 if {{formula:802030d8-9554-4e2a-af48-5c9429c8fbb5}} , with the error probability {{formula:480f6d5a-6d13-4b6f-8cae-d8894283dbf1}} due to spontaneous emission {{cite:60779daba2d70ecfd3db2bb8202162fc7b495415}}. In the same year, Xiao et al. proposed a scheme to realize the CPF gate between two rare-earth ions embedded in a cavity, and their numerical simulations showed that the CPF gate is robust and scalable with high fidelity and a low error rate {{cite:5fa47f5e8ae2752ae77bd76c89e140bab6e33fe8}}. In the case of a rare-earth ion embedded in a silica-microsphere cavity with {{formula:e042f57a-70d8-45ee-b27d-a288231b7e81}} and {{formula:4f6cde5c-8c12-4c5c-ba41-bbda00e0e14f}} , {{formula:a0e7d6c4-f021-42fe-b988-85ca499b4c78}} can reach 99.998% and {{formula:09c7cfb5-b1da-4dd4-a574-4944894e04fa}} is about {{formula:4116501b-77c9-4005-80da-dc872e4f6807}} {{cite:5fa47f5e8ae2752ae77bd76c89e140bab6e33fe8}}. In 2006, Lin et al.
presented a scheme for implementing a multiqubit CPF gate in only one step, and showed that {{formula:e69fc3e7-f025-4798-af50-93c8a33ad426}} can attain its minimum value {{formula:cff47ab5-1c31-45ed-a442-2b30ad0883e9}} , and that the quality factor {{formula:5fd86e68-414d-4349-90a6-4e5e7fd0c7d1}} is independent of the variation of the coupling rate {{formula:a2c51988-b02d-42ff-b747-5ee924d40a12}} {{cite:890be85d09bf93e52c42ad77372ff66f6dd8a6ba}}. In 2008, Gao et al. proposed a scheme to realize the CPF gate between two single photons through a single quantum dot in a slow-light photonic crystal waveguide {{cite:95e63899bb6f0625889c16cb5d2bf0e762d3a811}}. In 2015, Wei et al. proposed a scheme for implementing an optical CPF gate by using a single-sided cavity strongly coupled to a single nitrogen-vacancy-center defect in diamond {{cite:576644c3737c780f4ac1a82368a1f04e342035d7}}. In the same year, Hao et al. presented a scheme to realize the CPF gate between a flying optical photon and an atomic ensemble based on the cavity input-output process and the Rydberg blockade {{cite:7c220599c21a549da4b615613eb05debe8f19aa2}}. In 2016, Hacker et al. utilized the strong light–matter coupling provided by a single atom in a high-finesse optical resonator to realize the universal photon–photon CPF gate and achieved an average gate fidelity of ({{formula:bf0e37b2-6c80-443a-937f-2cfbe6732343}} ) percent {{cite:30be1705c487ec34a1a655bc403c62ceba844d1f}}. In 2020, Kimiaee Asadi et al. proposed three schemes to implement the CPF gate mediated by a cavity {{cite:499e4d213b0455439bb8fb2baab9696e393c5443}}. 
Over the past years, the CPF gate has been well studied in theory, numerical simulation, and experiment {{cite:60779daba2d70ecfd3db2bb8202162fc7b495415}}, {{cite:5fa47f5e8ae2752ae77bd76c89e140bab6e33fe8}}, {{cite:890be85d09bf93e52c42ad77372ff66f6dd8a6ba}}, {{cite:95e63899bb6f0625889c16cb5d2bf0e762d3a811}}, {{cite:576644c3737c780f4ac1a82368a1f04e342035d7}}, {{cite:7c220599c21a549da4b615613eb05debe8f19aa2}}, {{cite:30be1705c487ec34a1a655bc403c62ceba844d1f}}, {{cite:499e4d213b0455439bb8fb2baab9696e393c5443}}, {{cite:374d246bf8673e038e5fd5c5746332678bdda431}}, {{cite:1f79639f48b340ee023328df7004bb642b9a45ad}}, which makes our HGSA scheme based on the CPF gate more practical and accessible.
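Concretely, the CPF gate is the controlled phase flip that sends |11⟩ to −|11⟩ and leaves the other computational basis states untouched; a quick numerical check:

```python
import numpy as np

# CPF gate: flips the phase of |11> and leaves |00>, |01>, |10> alone.
CPF = np.diag([1, 1, 1, -1]).astype(complex)

def apply_cpf(state):
    """Apply the CPF gate to a 2-qubit state vector ordered |00>,|01>,|10>,|11>."""
    return CPF @ state

# |11> acquires a pi phase; the gate is unitary.
ket11 = np.array([0, 0, 0, 1], dtype=complex)
```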
| d | c7ab496bd89cb3c490ebd2aff9cce2d5 |
We validate the proposed bounding box correction method, followed by the weakly-supervised segmentation framework, on the liver segmentation dataset provided by the organizers of the Medical Segmentation Decathlon {{cite:a22632bccd89a051e535aa11e9bd4fcf33564f96}}. The dataset consists of 131 3D contrast-enhanced CT images and was divided into training and validation sets in the proportion 100:31. We normalize the CT data as suggested in {{cite:8b47af489a084af8ff2688e41a1ee3a831c2a5a5}}. First, the intensity values of the pixels that fall under the segmentation masks are collected over the whole training set. Second, the intensity values of the entire dataset are clipped to the [0.5, 99.5] percentiles of the collected values. Third, z-score normalization is applied based on the statistics of the collected values.
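The three normalization steps can be sketched as follows (synthetic volumes stand in for the CT data; function and variable names are our own):

```python
import numpy as np

def normalize_ct(volumes, masks):
    """Normalize CT volumes as described: collect foreground intensities
    over the training set, clip every volume to their [0.5, 99.5]
    percentiles, then z-score with the foreground statistics."""
    fg = np.concatenate([v[m > 0] for v, m in zip(volumes, masks)])
    lo, hi = np.percentile(fg, [0.5, 99.5])
    mean, sd = fg.mean(), fg.std()
    return [(np.clip(v, lo, hi) - mean) / sd for v in volumes]

rng = np.random.default_rng(0)
vols = [rng.normal(50, 20, size=(8, 8, 8)) for _ in range(3)]
msks = [(rng.random((8, 8, 8)) > 0.5).astype(int) for _ in range(3)]
normed = normalize_ct(vols, msks)
```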
| d | 254a5dcaf5f6eb50d7b54e936dcb5979 |
Remark 3.3
(a) The algorithm requires, at each iteration, only one
projection onto the feasible set {{formula:caf78623-2fe0-4891-929b-8fed4fc3c034}} and one projection onto the half-space {{formula:ee1b0b0e-440d-4e40-88ad-cef9153f43e4}} (see {{cite:20b85a0dcb24941939ceb10ef89de4fde420eb60}} for the formula for computing the projection onto a half-space), which is less expensive than the extragradient method,
especially when computing the projection onto the feasible set {{formula:9b63f02c-7da9-42d9-8c31-c2a52d90835e}} is the dominating task of an iteration.
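The half-space projection indeed has a cheap closed form; for H = {y : ⟨a, y⟩ ≤ b} it is a single rank-one correction:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto H = {y : <a, y> <= b}: if x already satisfies the
    constraint it is returned unchanged, otherwise it is shifted along a
    by the (normalized) amount of constraint violation."""
    violation = a @ x - b
    if violation <= 0:
        return x.copy()
    return x - (violation / (a @ a)) * a

a = np.array([1.0, 0.0])
b = 1.0
inside = np.array([0.5, 2.0])    # already in H
outside = np.array([3.0, -1.0])  # violates the constraint
```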
| m | 23bb0a6e434cc774bd5681f40c7f44d8 |
Since demonstrator errors are minimized, the success of our approach essentially depends on the data representation and on the model selected for the policy.
We use deep convolutional neural networks (CNNs) to represent our policies because they enable us to represent the demonstration data as camera images.
CNNs are known to perform well on visual classification and detection tasks {{cite:770d29e88141f9ef2a7361d838bdb837c6560888}}.
The dataset we generated can be viewed as a machine learning dataset for a visual regression task.
However, the basic use of a CNN comes with the assumption that the expected action can be inferred from the camera images of the current time step.
Although the problem is not fully observable, we see that the performance of the different policy representations (CNN, H-CNN, Recurrent-CNN) is similar.
This might be due to a possible overlap between the training and test datasets.
We argue that the learned models are able to memorize some of the training cases to reduce the error, but not able to generalize to or memorize the whole task.
| d | 7c02871d5da889a4704e9606e59df0a5 |
In this paper, we studied the lensing effects of charged rotating KN black holes, closely following the work {{cite:350b8a76b344520af235d09486379c2c5d41c8ee}} on lensing by Kerr black holes. Compared with the results in the Kerr case, we found the modifications, induced by the charge {{formula:a26936e5-492c-4093-9467-ad2ecc0a1050}} of a KN black hole, in the integral form of the photon motion and in the three key parameters {{formula:11ef7d06-c42f-49dc-9736-0be68771fffd}} , {{formula:67218ed9-c0c7-4e94-b651-2119665f0510}} and {{formula:e61f6ade-f13f-4f50-861a-621bce5bdd7f}} of critical photon emissions; see (REF ), (REF ), (REF ), (REF ) and (). Then we investigated the optical appearances of luminous sources around KN black holes, and our main results can be seen in Fig. REF to Fig. REF . In particular, we paid special attention to the higher-order images ({{formula:3dac5179-0799-4221-bbd8-6130f9f9d8af}} ) of luminous sources, which have not been discussed carefully in the literature. In {{cite:350b8a76b344520af235d09486379c2c5d41c8ee}}, only the primary ({{formula:3b456036-004e-4be5-8ba9-7f539a99d9c9}} ) and secondary ({{formula:1ce0951f-4e78-43e0-b73d-3e3091047b78}} ) images were considered.
| d | 1f0e2f652b8e9dae93aed2a15db31255 |
Lastly, unlike the quaternions, the octonions are rarely applied in science {{cite:738432b3ae292222c9c1d48f27df0c068782d777}}. Our work provides
a rare practical application of octonion algebra. More generally,
our work points to a potential link between the mathematics of quadratic
forms and the binding problem in cognitive science and machine learning.
| d | c4d20834c53e80a9b043542500ac97be |
During the last decade, on-surface synthesis has become a powerful technique to form atomically precise nanostructures by linking small precursor molecules {{cite:066f6749a9d1d4c5b76efe5558865600f7340e5d}}, {{cite:03bbd82e770eb7dce12c9fda7741e87f1fc8b847}}, {{cite:d6cc4b515670d99a4c728d59e0db381d1389d64c}}, {{cite:9f754607f64d3719be16506216a8dd8f23b31a16}} or porphyrin building blocks {{cite:6491ce25418a844db35c103272867186c01c5fcb}}, {{cite:d86a4112c39df6317ef8d7c53ab436c0ec7997cd}} in a bottom-up approach. This makes the fabrication of quasi-1D Por-GNR hybrids feasible, avoiding problems such as random molecular placement and metal cluster formation {{cite:8a97e8e9bfd7db189c4e8a70372696532f606204}}, {{cite:75e64fadf71341114894ef1dc97608f4b2b4780f}}. Recently, several research groups have attempted to expand the synthetic 1D complexes including porphyrin cores in different ways {{cite:582dc5c30bc6df5436198e3fa8556a55e571dcf3}}, {{cite:ea22c2abc35a6a4d3167e2738086a2c52d7a647a}}. The structures considered so far, in both experimental and theoretical works, have mainly been porphyrin oligomers/polymers {{cite:4fbb39a979afa92b0446d84369d71902bfbd1140}} or porphyrin nanotapes {{cite:4f15620accafb8a450f6c3d40d5574b41db55a22}}, which lack GNR segments as a backbone. However, Mateo et al. have synthesized structures with two metal-free ({{formula:a83df562-6e7b-41c9-a182-9322f04704be}} ) Pors connected by a short GNR segment {{cite:516dd535557fc54d44953bb58e9bfb3ce906e21f}}. This naturally poses the question of how electronic transport takes place in GNRs with incorporated Pors, and especially spin transport for Pors with magnetic centers.
| i | 99df775681521b0c8df9811c697af2e0 |
In this light, our attention is drawn to the Bayesian interpolative decomposition of underlying matrices.
The interpolative decomposition of an observed data matrix {{formula:4c1a49d2-0fa5-4d2f-a996-2bc3954e7a5a}} can be described by {{formula:21bf5f59-d3df-4e48-949e-499e29b98f3b}} , where the data matrix {{formula:586f91c0-1801-4d57-ad22-f09306be3d45}} is approximately factorized into a matrix {{formula:893b29d4-cfcd-457e-a187-c209e570e6f9}} containing {{formula:500f3597-b99e-417c-a3ea-86bb02f82468}} basis columns of {{formula:f28d9db0-17ed-4969-a53a-4f1d788a918f}} and a matrix {{formula:908e72fc-6414-42db-bc4a-35e529c14453}} whose entries are no greater than 1 in magnitude (a weaker construction only assumes that no entry of {{formula:803e44e1-032b-40aa-b68f-f8a5eef7a01e}} has an absolute value greater than 2; see {{cite:0f5dd44551a7b0899894b0e5f3caf36b78418afc}} for a proof of the existence of the decomposition); the noise is captured by the matrix {{formula:bfedc940-eb2c-42d1-9c07-7e52c6ae4c23}} .
Training such models amounts to finding the optimal rank-{{formula:41d91a37-bf66-4de8-9184-04dd3e9daff1}} approximation to the observed {{formula:9efab5a7-a619-4e82-b246-10b0c2846880}} data matrix {{formula:c95a1503-c1fb-4ed6-a267-ac8928372b7f}} under some loss functions.
Let {{formula:96dbdd39-6dcd-4ced-a765-f0ee87d83e94}} be the state vector with each entry indicating the type of the corresponding column, i.e., basis column or interpolated (remaining) column: if {{formula:337827f9-e86e-4a2e-aecd-e7a92e655b0d}} , then the {{formula:571c40a8-e955-470f-bee0-fcd79bce0f78}} -th column {{formula:4a5565da-091e-4292-a5c2-42e2540d6869}} is a basis column; if {{formula:e47b7944-1352-4ad0-8a49-907ceea43bc5}} , then {{formula:16885316-c7bb-4718-adcc-7c17432f09c4}} is interpolated using the basis columns plus some error term. Suppose further {{formula:30c351d0-ec60-43c4-85bf-2648e07ab266}} is the set of the indices of the selected basis columns, {{formula:ddc6893e-cd92-4bfc-af10-75521be83e59}} is the set of the indices of the interpolated columns such that {{formula:555717c8-692f-4519-ba73-6eda9a602348}} , {{formula:b108ea23-23bc-41d3-b06c-ece057b6c7ab}} , and {{formula:3a2166dc-3ce4-44dd-8e3f-6f70ef15531a}} . Then {{formula:e77f6592-5046-4622-ba76-7f12f5ae2be0}} can be described as {{formula:e5d59562-48c7-41ac-8965-fee384e5ba44}} where the colon operator implies all indices. The approximation {{formula:a510fbdd-487b-4542-adfd-48bf5b0d432b}} can be equivalently stated that {{formula:cac63fa1-ea36-4d9b-8231-b3ec4a58a16c}} where {{formula:25209b71-67ba-4a3f-a6f0-6632103c9607}} and {{formula:fbf18468-4261-4c36-a35f-25c717f33f08}} so that {{formula:d8660194-9767-482b-b5bf-047f199d5e67}} , {{formula:24ab034c-d62e-4739-917c-4c5b0c87e5ad}} ; {{formula:b32237f0-da75-432a-9eb7-7beb38e4a53d}} .
We also notice that there exists an identity matrix {{formula:2397547d-6749-41a8-a275-b2fb55bb993c}} in {{formula:1fa96bb7-9c27-4ae0-8881-d0dd0e0c3ada}} and {{formula:83042218-0785-47c2-8f6b-a6695ff8a05d}} :
{{formula:dec7a4a6-001c-4ea9-be2e-2b1ffab8619a}}
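A didactic way to compute such a column interpolative decomposition is greedy column pivoting (a simplified stand-in for the column-pivoted QR used in practice); the rank and matrix sizes below are arbitrary:

```python
import numpy as np

def column_id(A, k):
    """Rank-k column interpolative decomposition A ~ C @ W, where C holds
    k columns of A (greedy pivot selection by residual norm) and W
    interpolates the remaining columns. A didactic sketch only."""
    R = A.astype(float).copy()
    pivots = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # largest residual column
        pivots.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)                        # deflate that direction
    C = A[:, pivots]
    W, *_ = np.linalg.lstsq(C, A, rcond=None)          # W solves C @ W ~ A
    return C, W, pivots

rng = np.random.default_rng(0)
base = rng.standard_normal((6, 3))
A = base @ rng.standard_normal((3, 8))                 # exact rank-3 matrix
C, W, piv = column_id(A, 3)
```

For an exactly rank-k matrix the reconstruction is exact, and the sub-matrix of W at the pivot positions is the identity, matching the structure described above.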
| i | 61acfec802248b713b04663e01700875 |
Code Parsing Issues. These errors are few compared to the other categories and occur due to the presence of code snippets and inline code in the samples. For example, SecBot is unable to determine the security threats related to input injection noted in the code block that is provided as text in this sentence: “In Package tag add ...In Capabilities add ...<rescap:Capability Name="inputInjection" /><rescap:Capability Name="inputInjectionBrokered" />”. This type of parsing error is inevitable for automated models like SecBot, because security-specific context can be buried anywhere in a code snippet.
RQ{{formula:00db6330-5cac-4cf3-92c2-e84d096270d5}} What are topics in the IoT security discussions collected from the entire IoT SO dataset using the best performing model?
We automatically detect security topics in the sentences labeled as security-specific by SecBot in our IoT dataset.
Motivation
The rapid growth in the usage of IoT devices, tools, and technologies has introduced prominent threats to data protection, security, and privacy. IoT developers encounter various security challenges while using these devices and tools, and it is vital to understand those challenges in order to design more secure devices and tools. Understanding the topics in the SO IoT security discussions can thus help to infer those challenges.
Approach
First, we apply SecBot to the entire 672,678 sentences that we collected from our 53K
IoT posts (Section REF ). SecBot labeled 30,595 sentences
as 1, i.e., as containing security discussions. Second, we preprocess this dataset as
follows. We removed stopwords, rare words (e.g., words with fewer than 20 occurrences), overly frequent words (e.g., words with more than 2000 occurrences), short tokens, and letter accents. The stopwords are collected from
NLTK {{cite:a0bf606bb16b9f79d642dbc4bd83c5b60a49b618}}. We applied lemmatization to the sentences to get the root form
of each word. Third, we fed the preprocessed sentences into the topic modeling algorithm Latent Dirichlet Allocation (LDA) {{cite:b3ad25be8c7e11b273a95a3846b68fdb5b4eba5a}}
available via Gensim {{cite:bc20b5a0f075afb5fc8da0f41e5b722c95315916}}. LDA takes a
set of texts as input and outputs a list of topics to group the texts into K
topics. This K is a hyperparameter of the algorithm. To get the optimal value of K,
we used the standard practice originally proposed by Arun et
al. {{cite:f8e8f53e4d8b0b219011873072e1a4cdbb7d849c}}. This approach suggests that the optimal number
of topics has the highest coherence among them, i.e., the more coherent the
topics, the better they can encapsulate the underlying concepts. To determine
this coherence metric, we use the standard c_v metric as originally
proposed by Röder et al. {{cite:1564e98b5deeeb098380c5d772666cffd67b8521}}. We use the topic coherence metric as available in
Gensim package {{cite:bc20b5a0f075afb5fc8da0f41e5b722c95315916}}. To determine our optimal number of topics, we use a grid search technique. We first
select a set of values for K. Then we run our LDA
algorithm for each value of K and measure the coherence metric. Finally, we
choose the value of K that is associated with the highest coherence score. In our
experiment, we set K = 1, 4, 8, 9, 10, 11, 12, 16, 20, 24, 28, 32, 36, 40 and obtained the maximum coherence for K = 9. Thus, we picked 9 as our optimal number of topics. LDA also uses two hyperparameters, {{formula:f4d2b852-df9e-48b2-9f43-a6591cda4fd7}} and {{formula:d3e77738-cf56-49fb-8413-c58412b5e6e5}} , to control the
distribution of words in topics and the distribution of topics in posts following Biggers et al.
{{cite:883e6013839dab3371b1d009d0e1e14c739ed759}}.
Following previous work {{cite:df67e1d72bcf17ff03edddcf06bcdb26bc10887a}}, {{cite:10ca95e1f88387583fa42a51d21d8e3e89a3b535}}, {{cite:f698b8b838c446032ee6635d58cd08bb5347d5d6}}, {{cite:883e6013839dab3371b1d009d0e1e14c739ed759}}, {{cite:179f1e9b30a0fe4f6af166e6b5680f6ea2e3ae1e}}, {{cite:3659b511b8cf70b57b31b1ab12927a9d412ef841}}, we use the standard
values {{formula:73710c51-6e3a-4058-8467-d5ee7a138700}} and 0.01 for these two hyperparameters. Fourth, we assign each topic a name by manually
analyzing a random sample of at least 50 sentences per topic. We do not limit ourselves to a fixed number of sentences per topic during the labeling, to ensure that the assigned label is reliable. The first and second authors met virtually to label the topics manually. First, both authors reviewed a number of sentences for a topic and came up with possible topic labels. Then both authors shared their thoughts on the possible labels. If both authors had the same thoughts, they labeled the topic with a suitable title; otherwise, they went over the sentences again and labeled the topic together. For example, the following sentence is about certificate validation for different users: “...Found the answer as I was trying to import the certificate with the Root and was testing with a different user.” Another sentence assigned to the same topic is “ ...The commands where I input the SSID and Password are the ones causing trouble.”. We thus label the topic as `User/Device Authorization'.
Fifth, we assign each topic a category, where a
category denotes a higher order grouping of the topics. For example, we assign the topics `User/Device Authorization' and `Crypto/Encryption Support' to the category `Software', because
the topics contain discussions about the support of software in IoT devices to ensure security. The topic `Secure Transaction' is assigned to the category `Network', because
it contains discussions about securing transactions/communications between IoT devices/services over the network.
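The vocabulary filtering of the second step can be sketched in a few lines (lemmatization, done with NLTK in our pipeline, is omitted; the thresholds follow the text); the filtered token lists are what is then fed to Gensim's LDA:

```python
from collections import Counter

def preprocess(docs, stopwords, min_count=20, max_count=2000, min_len=3):
    """Vocabulary filtering as described: drop stopwords, rare words
    (frequency < min_count), overly frequent words (frequency > max_count),
    and short tokens. A simplified sketch of the preprocessing step."""
    tokens = [[w.lower() for w in d.split()] for d in docs]
    freq = Counter(w for doc in tokens for w in doc)
    keep = lambda w: (w not in stopwords and len(w) >= min_len
                      and min_count <= freq[w] <= max_count)
    return [[w for w in doc if keep(w)] for doc in tokens]
```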
{{figure:185df4a2-f10c-4603-8a10-5369817316aa}}{{table:1c150c62-aac4-4912-9757-67d34aa45351}}
Results
We found 9 IoT security-related topics, which are grouped into three high-level categories: Software, Network, and Hardware. Figure REF shows the distribution of these three high-level categories. The Software category contains the greatest number of topics (5) and covers more than 57% of all sentences. In Table REF , we show the 9 topics based on the distribution of sentences in the IoT security dataset. The distribution column indicates the percentage of sentences in our IoT security dataset that cover each topic. The topic `IoT Device Config' from the Hardware category covers the lowest number of sentences among all topics (2402 sentences, around 7.9%). It contains discussions of the programming and configuration challenges IoT developers face while using Bluetooth/BLE/Microchip interfaces in their IoT devices to set up a service (e.g., setting up a barcode scanner in rpi2 to verify coupons {{formula:97a95f15-bbd3-45b8-af58-cbd9f5fcf837}}). {{formula:b54cef3e-d6fb-461e-a8d0-6db960baf81a}} denotes a question with an ID {{formula:35d94d03-f371-45f4-bc0f-6f2103f8f1b7}} in SO. The topic `Storage/Database Management' from the Software category covers the highest number of sentences (4463 sentences, around 14.6%). This topic contains discussions about developers' problems with database setup and configuration (e.g., {{formula:a6c02907-28ba-49d8-b9dd-ccc738d9915d}}), and attacks such as SQL injection (e.g., {{formula:9cc6624d-01fd-4f17-990b-6347ed68d76e}}), weak authentication (e.g., {{formula:dd29a5d7-3644-4af0-9cb4-7b5180252588}}), and privilege abuse (e.g., {{formula:6363c5f0-80c0-4c2b-8022-20e15e9cb7f5}}).
The other four topics in the Software category are: mitigation techniques for vulnerabilities/attacks in IoT devices (3808 sentences, around 12.4%); discussions about the vulnerability of specific approaches in frameworks (3169 sentences, around 10.4%); authorization of users and IoT devices (3101 sentences, around 10.1%); and documentation and usage of cryptography/encryption support (3004 sentences, around 9.8%).
The Network category has the second highest number of topics (3) and covers 34.8% of all sentences. Two of the three topics under the Network category are also among the top four topics (each with around 12% of sentences). The first two topics are `Secure Transaction' and `Secure Connection Configuration', which contain discussions about ensuring security for payments and for the copy/transfer of digital files via/among IoT devices. The last network topic is `IoT Hub Federation'. Questions in this topic concern the cloud-based setup of IoT devices, e.g., using Azure/AWS/Google cloud. An IoT hub is a managed service in the cloud that acts as a central messaging service to allow bi-directional communication between an IoT application and the smart devices it manages (e.g., connecting to MQTT with authenticated Cognito credentials {{formula:ee45dc53-6920-48b9-b7de-28f101a26a9e}}).
RQ{{formula:2937626b-3c8b-41c2-9fcc-723d52682cf8}} How do the IoT security topics evolve in SO?
Motivation
We investigate how the popularity of the three IoT security topic categories has changed over time by determining the total number of new sentences created per topic category over time. A new sentence can only be created if a new post is created. Therefore, the evolution of the topics over time offers us information about the relative popularity of the categories and whether a category is discussed more over time than the other categories.
Approach
For each of the 30K security-related sentences in our dataset, we identify the ID of the SO post from where the sentence is detected. We consider the creation time of the SO post to be the sentence creation time. We group the 30K sentences by the three topic categories. For each category, we compute the total number of sentences created every six months based on the creation time. Following Wan et al. {{cite:11c43351afc705f047881ad5ac6a1c16fb11a25e}}, we report the evolution of IoT security topics using the metric Topic Absolute Impact.
Topic absolute impact is calculated by determining the total number of sentences created per topic, among the 9 IoT topics, every six months since 2008. We do this by first assigning each sentence to its most dominant topic, i.e., the topic with which the sentence shows the highest correlation score. We then assign to each sentence the creation time of the post where the sentence is found. Finally, for each topic, we put the sentences into
time-buckets, where each bucket contains all sentences created within a time window (e.g., from January 2009 till June 2009). We further analyze the absolute growth to correlate the growth with real-world events for each topic, since developers might be influenced by the release of new IoT APIs, tools, or devices. We first identify some major releases in the IoT domain during the period from 2009 to 2019. Then, we find all the spikes in the absolute growth chart of each topic and collect the six-month time frames (e.g., January 2010 to July 2010) in which the spikes took place. To identify related real-world events, we list all major updates and releases during that time frame for each topic. Finally, we manually check the availability of discussions for all identified events in our IoT security dataset during that time frame. Based on that, we either count an event as a striking contributor to IoT discussions or discard the event. For example, during the time interval between January 2014 and July 2014, both `vue.js' and `Riot.js' were released. However, we find that our IoT dataset has enough discussion of `Riot.js' only during January 2014 and July 2014. As a result, we discard `vue.js' and count `Riot.js' as a striking event.
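The six-month bucketing behind the topic absolute impact metric can be sketched as follows (topic labels and dates are toy values):

```python
from collections import Counter
from datetime import date

def half_year_bucket(d):
    """Map a date to its six-month window, e.g. 2014-03-10 -> '2014-H1'."""
    return f"{d.year}-H{1 if d.month <= 6 else 2}"

def topic_absolute_impact(sentences):
    """Count sentences per (dominant topic, six-month bucket), where each
    sentence is a (topic_label, post_creation_date) pair."""
    return Counter((topic, half_year_bucket(d)) for topic, d in sentences)

data = [("Software", date(2011, 3, 1)),
        ("Software", date(2011, 5, 9)),
        ("Network",  date(2011, 8, 2))]
impact = topic_absolute_impact(data)
```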
{{figure:9f435197-30a9-4c86-88df-4b045227aa74}}
Results
Figure REF shows the absolute growth of the three IoT security topic categories over time by showing the number of new sentences per category every six months. While the software category has the most new sentences among the three categories, the network category started to catch up from 2017. The number of discussions under Software shows an upswing between January 2011 and July 2011, when Django 1.2.6, a free open-source web framework that encourages rapid development, was launched. Several web security features (e.g., XSS protection, memory-cache-backed sessions, and CSRF tokens) were first introduced in this release. Many discussions in 2014 in our dataset are related to Riot.js. Similarly, the releases of Kali Linux and Django 1.10.8 produced spikes in software related security discussions. Overall, the period between June 2011 and June 2014 showcases the rapid adoption of IoT-based SDKs/frameworks (e.g., rpi2 versions). IoT security concerns were raised at the same time due to the emergence of ransomware like CryptoLocker and banking Trojans.
At the beginning of 2010, the network topic category gained prominence due to the spread of internet connectivity and cloud storage, and a high spike was found at that time. Firebase, a real-time cloud database suited for IoT applications, was released in May 2012. As such, we found many related security discussions across the Network category during this time. Similarly, we found that the releases of IoT hubs like AWS IoT in late 2015 and IBM Watson IoT in 2017 contributed to many network-related discussions during the respective periods. Recently, the emergence of online transactions and IoT cloud storage solutions has also contributed to the increased popularity of the network category (#new sentences).
Hardware related discussions showed continuous growth over the years. Many hardware-supported security modules, like fingerprint locks, smart doors, and biometric devices, emerged during our study period. We found that the release of popular open-source IoT hardware increased the rate of discussion about that hardware. For example, the release of Intel Galileo back in mid-2012 produced a spike between June 2013 and December 2013. Similarly, after the release of `Raspberry Pi 2' in February 2016 and `Arduino Uno WiFi Rev 2' in February 2017, spikes in the curve were visible.
In conclusion, the evolution of IoT security topic categories and subsequent topics can provide insight into new, emerging, or existing threats.
Discussion
For our experiments, we use SO data dumps up to September 2019. However, we also
investigate the most recent data dumps of SO. We use the online Stack Exchange data
archive to get the most recent
SO data dump, which was for January 2022. We collect IoT datasets from this data dump by
following all the steps we described in Section REF .
We found that many IoT-related SO posts that were present in the September 2019 data dump had been deleted by SO and are absent from the
January 2022 data dump. As such, we first analyze the extent of
such deleted posts in the January 2022 dump (in Section REF ).
We then compare the results of our empirical studies for RQ3 (recall Section REF ) and RQ4 (recall Section REF ) between the two data dumps, i.e., Sept 2019 and Jan 2022 in
Sections REF , REF .
The “Case” of Post Deletion by Stack Overflow
As per the data dumps of September 2019, there are around 53K IoT related posts,
whereas the most recent data dumps (January 2022) have only 45K IoT related
posts. This indicates that most recent data dumps have fewer IoT-related posts
even though our trend charts from Figure REF based on Sept 2019 data dump showed
that IoT-related discussions have an upward trend in recent times. We
investigate this drop in the number of posts in the January 2022 data dump. We identified
all the missing posts that were available in the September 2019 data dump. Then,
we checked the recent activity and status of these missing posts and found that
they were deleted between September 2019 and January 2022. As a
result, these posts appear in the 2019 data dump but not in recent
data dumps. For example, `ESP32' tagged SO question {{formula:4be5ada5-4e05-44e8-b90b-a1c50d38dce1}} was created in
May 2019, but later it was deleted. We further apply Secbot to the new IoT data
and find only 6K security-related sentences. Compared to the 30K
security-related sentences in RQ{{formula:b961ba7e-5c9e-4b02-84d0-d93b760c3840}} (Section REF ), this number is
quite low, both because the dataset is smaller and because most of the
posts containing security discussions were removed. For example, an
encryption-related post, {{formula:190cd62e-c611-473e-a92b-80e66f8e0c6a}}, that contains multiple security-related
discussions, has been removed in recent data dumps.
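Identifying the deleted posts amounts to a set difference between the two dumps' post IDs; the IDs below are hypothetical placeholders:

```python
def deleted_posts(old_dump_ids, new_dump_ids):
    """Post IDs present in the older dump but missing from the newer one."""
    return set(old_dump_ids) - set(new_dump_ids)

# Hypothetical post IDs for illustration only.
sept_2019 = {101, 102, 103, 104}
jan_2022 = {102, 104}
print(sorted(deleted_posts(sept_2019, jan_2022)))  # [101, 103]
```

Checking the live status of each ID in the difference then confirms whether a post was actually deleted rather than merely re-labeled.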
{{figure:0e339e03-b4c2-441c-86cb-2f80a2a06625}}
Prevalence of IoT Security Topics After 2019
Following RQREF , we apply topic modeling on the 6K sentences we got
from the January 2022 data dump. We find that the optimal number of topics for this
dataset is 6. Then we manually label these six topics. The topics are: `IoT
Device Config/Set-up', `Secure Connection', `Secure Transaction',
`Framework/SDK-based Security', `Authentication Management', and
`Crypto/Encryption Support'. We further investigate how these topics differ from
previously found topics in RQ{{formula:e9fcbed3-319b-4207-ac55-af8e220eaa6e}} (Section REF ). Although five topics are
present in our previous findings, `Authentication Management' is different from
what we found previously in `User/Device Authentication'. `Authentication
Management' is comprised of discussion related to authentication mechanisms,
devices, techniques, protocols, implementation supports, etc., whereas
`User/Device Authentication' only consists of user or device level
authentication. In contrast, we did not find any discussion related to
`Storage/Database Management', `Vulnerability/Attacks', or `IoT Hub Federation'.
This happens because of the removed posts in recent data dumps. We further
categorize these topics into the Software, Hardware, and Network groups. We find 3
software-related topics (i.e., `Framework/SDK-based Security',
`Crypto/Encryption Support', and `Authentication Management'), 2 network-related
topics (i.e., `Secure Connection' and `Secure Transaction' ), and a hardware
related topic (i.e., `IoT Device Config/Set-up'). There are 22.3% hardware,
37.2% network, and 40.5% software related security discussions in the January
2022 data dump. Finally, we compare the topic distribution in the most recent
data dump (i.e., January 2022) and the September 2019 data dump. We show the
comparison in Figure REF . The relative percentage of
hardware-related discussions is higher in the January 2022 data dump than in
the September 2019 data dump. Although the network topic holds a similar percentage
in both data dumps, the software topic has a lower value in the January 2022
data dump. This is because most of the software-security related IoT posts were
removed in the January 2022 data dump, and in recent times, developers are
discussing hardware security more than they did in the past.
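The per-category percentages reported above come from a simple relative-frequency computation; the demo labels here are hypothetical, not our actual counts:

```python
from collections import Counter

def category_distribution(sentence_categories):
    """Percentage of security sentences per topic category
    (Software/Network/Hardware), rounded to one decimal place."""
    counts = Counter(sentence_categories)
    total = sum(counts.values())
    return {c: round(100 * n / total, 1) for c, n in counts.items()}

# Hypothetical category labels for illustration only.
labels = ["Software"] * 4 + ["Network"] * 4 + ["Hardware"] * 2
print(category_distribution(labels))
# {'Software': 40.0, 'Network': 40.0, 'Hardware': 20.0}
```

Running this once per dump makes the two distributions directly comparable side by side.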
{{figure:16f30fc6-2883-41e4-bb20-ad2ef1463835}}
Evolution of IoT Security Topics
We analyze how the topics found in the January 2022 data dump evolved over time. We first calculate the
absolute growth of the topics following RQ{{formula:7699b868-65ca-421e-a9f8-1ee39bcc580b}} (Section REF ). Then we
generate a trend chart showing how the three topics (i.e., Hardware, Software,
and Network) evolve over the time period between 2011 and 2022. We report the
trend chart in Figure REF . From this chart, we find
that security related discussions follow an upward trend for all three topics
(i.e., hardware, software, and network) before 2019, similar to our previous
findings in Section REF . However, there is a downward slope after 2019
for all three topics. One of the possible reasons for this downward swing could
be the lack of releases of any software, hardware, and network related devices,
tools, technologies, etc., during this period. Another contrasting finding is
that network and software related discussions now have almost similar numbers,
whereas our previous finding showed more software-related discussions than
network ones. This could be because of the posts removed from the January 2022
dump.
Discoverability of IoT security sentences
{{figure:3ab3692f-4aab-4bd8-a72f-14f300a0f018}}Our study data is collected from SO, by identifying questions in SO labeled as one of the 75 IoT related tags. As we noted in Section REF , the 75 tags were previously used to learn IoT developer discussions in SO by Uddin et al. {{cite:771dc0b87a48537fcf90bc9d1c981181abacc22a}}, who found that the most popular and most frequently discussed topics were related to the adoption of security measures in IoT devices (e.g., secure messaging). In addition, the same dataset was also used by Uddin {{cite:8466f66d3d416ae8b59d9a425c97d69a7aa3cbdb}} to determine the prevalence of security-related discussions at the sentence level. Both studies show that the dataset contains diverse discussions about security for IoT. Note that not all the 75 IoT tags have explicit reference to any security measures, but we observed that many of the questions (and their answers) labeled as the tags contained security discussions. In this section, we offer quantitative evidence on the extent of security discussions in questions labeled as non-security tags as follows.
We investigate whether SO security-specific tags cover all IoT security sentences. To this end, we pick 10 frequently used security-related tags in SO, including `security', `ssh', `ssl', `passwords', `authentication', `authorization', `encryption', `cryptography', and `hash'. Then, we collect all posts from our 53K IoT posts with at least one of these tags. In the collected posts, we only manage to find 8% of the 30K sentences, i.e., 92% of our IoT security sentences are not discoverable using these tags (see Figure REF ). We also investigate the missing posts in the latest data dump (January 2022) using these security tags. The result shows similar numbers: only 10% of posts can be discovered with these tags in the 2022 data dump. This indicates that SO security tags do not cover all security discussions. Note, however, that our security dataset does include the sentences from posts labeled with these security tags.
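The coverage check described above can be sketched as follows; the tag list matches the one named in the text, while the sentence and tag records are hypothetical:

```python
SECURITY_TAGS = {"security", "ssh", "ssl", "passwords", "authentication",
                 "authorization", "encryption", "cryptography", "hash"}

def tag_coverage(sentences, post_tags):
    """Fraction of security sentences whose enclosing post carries at least
    one security-specific tag. 'post_id' and the post_tags mapping are
    hypothetical field names for illustration."""
    covered = sum(1 for s in sentences
                  if SECURITY_TAGS & set(post_tags[s["post_id"]]))
    return covered / len(sentences)

# Tiny illustrative example: 1 of 4 sentences sits in a security-tagged post.
post_tags = {1: ["esp32", "ssl"], 2: ["arduino"], 3: ["mqtt"]}
sentences = [{"post_id": 1}, {"post_id": 2}, {"post_id": 2}, {"post_id": 3}]
print(tag_coverage(sentences, post_tags))  # 0.25
```

The complement of this fraction is the share of security sentences that remain undiscoverable via the security tags alone.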
Implications of Findings
The findings from our study can guide the following major stakeholders in the rapidly emerging IoT ecosystems:
IoT Security Enthusiasts to learn about IoT security aspects from IoT developer discussions,
IoT Vendors to offer tools and techniques to support security in IoT devices,
IoT Developers to determine the current trends in IoT security based on developer discussions and to use the information to guide future development needs,
IoT Educators to develop tutorials and documentation to teach security principles to IoT practitioners, and
IoT Researchers to improve the detection of IoT security aspects in developer discussions and to study how to properly address the difficulty in the adoption of security practices for IoT devices and techniques.
We discuss the implications below.
{{figure:031220fc-e64a-45da-b8d9-e6a2d84d19f1}}IoT Security Enthusiasts. IoT enthusiasts can be IoT practitioners or
simply IoT market watchers, but they nevertheless form a crucial part of the IoT
ecosystem in various ways (e.g., buying/analyzing IoT devices/solutions, etc.).
The development and adoption of IoT-based solutions by developers is facilitated
by the exponential growth of IoT devices, software, and platforms. Our
increasingly interconnected digital world relies on smart devices built using
the IoT, which means security for IoT devices is paramount and it is the job of
IoT developers to adopt proper security measures for their solutions. Indeed, if
we compare the number of new security sentences per month to all sentences in
the month in our 53K SO posts, we observe a slow but relative growth of IoT
security-related discussions, especially after 2016 (see Figure REF ). This means that interest in IoT security is
increasing among IoT developers in SO. Therefore, IoT security enthusiasts can
benefit from the security discussions by IoT developers in SO. As we noted in
Section (Figure REF ), security
discussions in SO IoT posts can be buried inside general or non-security IoT
discussions. Therefore, to offer security-specific insights from SO IoT
discussions, we need tools to detect them automatically. The high performance
of our developed tool, SecBot, makes it well suited to meet this need.
{{figure:77a83ce0-cc88-40fb-9141-dc85fe4dc169}}Tradeoff between popularity and difficulty of IoT security topics in our IoT dataset
IoT Vendors. IoT vendors need to support IoT developers with proper and usable secure IoT techniques. To determine what is appropriate and what is working, though, the vendors need to know the problems faced by IoT developers. Forums like SO have become a go-to place for developers to look for solutions to their technical problems, so it has become essential for IoT vendors to keep an eye on this open discussion forum. Our research can help them in two ways. First, the high precision and recall of our tool SecBot will help IoT vendors reliably find security-related issues in SO; they can analyze those discussions and make decisions accordingly. Second, our findings from RQ{{formula:32a972c0-ce78-4324-abcd-b58f6fed5f00}} (Section REF ) show that IoT developers are aware of major releases and immediately discuss them more frequently. Thus, vendors can get quick feedback from developers about their most recent release of any IoT device, tool, technique, or software, which could play a vital role in their upcoming releases. For example, if there are multiple bugs or issues, they can fix those bugs in their next release or offer a quick release fixing them. Overall, such insights can be crucial for vendors to improve their offerings and to compare their solutions against those of their competitors.
In Figure , we show a bubble chart to illustrate the tradeoff between the popularity and difficulty of IoT security posts. The chart is
constructed as follows. Out of the 30K IoT security sentences in our dataset, for each of the 9 IoT security topics,
we pick the sentences that we find in the SO question posts. For the total number of distinct questions thus
found for a given IoT security topic, we analyze two metrics as follows:
[(1)]
Popularity analysis based on the average view counts of the questions associated with the topic.
Difficulty analysis based on the percentage of questions per topic that remain without an accepted answer.
Both metrics have been used previously in the literature to determine the popularity and difficulty of topics generated from SO posts.
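The two metrics can be computed per topic as sketched below; the question records (`views`, `accepted`) and demo numbers are hypothetical:

```python
def topic_metrics(questions_by_topic):
    """questions_by_topic: topic -> list of {'views': int, 'accepted': bool}.
    Returns topic -> (popularity, difficulty), where popularity is the average
    view count and difficulty is the percentage of questions that remain
    without an accepted answer."""
    metrics = {}
    for topic, qs in questions_by_topic.items():
        popularity = sum(q["views"] for q in qs) / len(qs)
        difficulty = 100.0 * sum(1 for q in qs if not q["accepted"]) / len(qs)
        metrics[topic] = (popularity, difficulty)
    return metrics

# Hypothetical questions for two topics, for illustration only.
demo = {
    "IoT Device Config": [{"views": 900, "accepted": False},
                          {"views": 1100, "accepted": False}],
    "Secure Connection": [{"views": 500, "accepted": True},
                          {"views": 300, "accepted": False}],
}
print(topic_metrics(demo))
# {'IoT Device Config': (1000.0, 100.0), 'Secure Connection': (400.0, 50.0)}
```

Each topic's (difficulty, popularity) pair becomes one bubble in the chart, with bubble size given by the topic's question count.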
The x-axis in Figure shows the difficulty score of each IoT security topic, and the y-axis shows the popularity score of the IoT security posts. The size of each topic bubble is based on the number of distinct questions assigned to the topic. Therefore, the further right a topic appears in the chart, the more difficult it is in SO. Topics like `Framework/SDK Based Security', `Storage/Database Management', and `IoT Device Config' are among the most difficult. The topic `IoT Device Config' is among the most difficult as well as the most popular IoT topics. Therefore, IoT vendors can accelerate the adoption of security in IoT Microchip and Bluetooth (e.g., BLE) devices, which will then increase the chance of getting answers to the problems.
IoT Developers. The security of IoT devices and solutions is of paramount importance. As attackers look for any opportunity to breach an IoT system or device, developers should not ignore any security vulnerability that can be exploited. We thus need IoT developers to stay aware of IoT security trends and to adopt prevalent IoT tools. The developer community is one of the best resources to help them keep pace with cutting-edge technologies that have not yet been breached. Developers can benefit from such precise, specific, and automatically mined security knowledge to stay aware of trends related to IoT security. In addition, such knowledge can also help them make better decisions, like picking one popular IoT security technique over another. For example, in Figure , we find that the topic `IoT Hub Federation' is the least difficult, while also not very popular. This means that IoT developers leveraging cloud infrastructure to implement security methods for their IoT devices are not finding enough support from SO developers. The topic `Secure Transaction', covering transactions between IoT devices or between IoT and non-IoT devices, contains a large number of questions, which are also popular. This means that IoT developers can devote time to learning and implementing secure transactions using their IoT devices, and they can also inquire with other IoT developers in SO about the problems they face.
The developer community cannot rely on new technologies every time, as new technologies can be challenging, complex, or incompatible with the current environment. For example, a developer asks in SO question {{formula:61e8c0d6-c24c-4086-8e21-c45ba594c58e}} how to create a digital signature function for PDF from Hardware Security Modules (HSM) using C#. However, the discussion suggests that HSM cannot be used directly from C#; instead, one needs to go through the `CAPI' or `CNG' APIs or `ncryptoki'. Developers could face such challenges in the middle of a project, which could cost more time or, at worst, force them to switch to another framework or tool. This scenario can be avoided if developers gather the necessary information about the required APIs, devices, or software in advance. Developer forums like SO are ideal places to collect that information. Our research provides information about IoT security topics. Further mining of the discussed topics, such as which features developers discuss and what the sentiment of those discussions is, will be helpful for everyone to learn about an API, tool, framework, software, or device. From this informative feedback, the developer community can decide whether or not to adopt a new technology. Thus, they can select their development tools and frameworks wisely and minimize the risk of a project.
IoT Security Educators. Our research findings are promising for IoT educators in many ways. From RQ{{formula:8fd65c46-4b7a-49d1-b558-50a49937472e}} (Section REF ), we learn that IoT topics are getting more attention, and thus the responsibility of IoT educators is also growing. As IoT security demands increase, educators must prepare quality content to motivate newcomers and enlighten current practitioners. To create content, they can follow the topics we find in RQ{{formula:1420aabe-6f8b-4bb1-878e-373fa336939f}} (Section REF ). They should also be selective in choosing the most recent techniques and tools over the old ones; otherwise, learners will lack exposure to the new tools and eventually fall behind instead of moving forward. SO can be a helping hand for them here. As SO contains IoT security discussions about all 9 topics, educators can collect that data to analyze and prepare their lessons. Moreover, as we presented previously, one can find enough discussions related to the most recent releases of IoT security tools, software, and devices. Educators can also incorporate these insights regarding the most recent IoT tools, software, and devices into their lessons, which will also benefit learners.
IoT security educators can use the bubble chart in Figure to prioritize their efforts to develop security tutorials and documentation for IoT developers and practitioners. For example, the topic `Secure Connection Config' has the highest number of questions. It is also among the most popular topics, while also less difficult than most topics in terms of getting an accepted answer. Therefore, IoT security educators can analyze the questions related to user/device authorization in SO to develop comprehensive tutorials for developers. The topic `Crypto/Encryption Support' has the second most questions, while it is the most popular yet one of the most difficult topics. Therefore, the IoT security educators can analyze the questions in SO and consult IoT resources to ensure that IoT developers can learn about the risks associated with leaving IoT ports open and how they can configure them properly to ensure security.
IoT Researchers
This research creates a vast space for researchers to move forward in multiple dimensions. Security aspect detection is a major task that we accomplish in this paper, but it requires more attention in the future. We find that our SecBot has a maximum performance of 0.935, which seems like a good result considering that no prior security detector has achieved comparable performance. However, when we examine the errors made by SecBot, we find multiple errors related to implicit contexts, ambiguous contexts, and ambiguous keywords. These errors can be further researched to extract useful information, such as identifying the key phrases or words that cause ambiguity. Dedicated research can be conducted on how these key words or phrases can be detected automatically. Moreover, these error categories suggest ways of improving the performance of SecBot. Researchers can approach this in three ways. First, they can add IoT contexts to SecBot; for example, they can use rule-based patterns to extract information and then feed it to SecBot. Second, they can add more samples to reduce the sparsity of security aspects in the dataset. For this, one may consider adding only samples that are close to those error category types; they can even try to build a balanced dataset to see how the performance changes. Third, they can train a deep model that discerns only samples related to those error categories, and then ensemble SecBot with that model.
Besides this, we find that the domain-specific BERTOverflow fails to perform as well as the generic model underlying SecBot. Additionally, the correlation coefficient score from Table REF suggests that the results of BERTOverflow differ from those of SecBot. This could be an interesting topic to explore in depth. Researchers can examine the misclassifications made by BERTOverflow to compare the results of SecBot and BERTOverflow; if there is a significant difference in error categories between the two models, they can consider developing an ensemble model. Another direction of exploration could be developing a purely domain-based security aspect detector. Although BERTOverflow has knowledge about Stack Overflow, that knowledge is small compared to its overall pre-training and is not sufficient to make it an intelligent SO expert. Thus, a model trained entirely on SO knowledge may be more suitable for security aspect detection.
IoT researchers can analyze the security discussions to learn about the specific challenges that IoT developers are facing based on their real-world experience. Such insights can be useful for researchers to invent new techniques and tools for IoT security. Research in IoT security should also be informed by emerging trends. As we observed in Section REF (Figure REF ), we see an almost equal number of sentences in the two topic categories Software and Network starting from January 2017. This means that security research in the IoT needs to put equal emphasis on both software and network security. Similarly, discussions about secured hardware for IoT devices are also increasing over time, although not as much as the software and network topics. IoT security researchers need to design and develop innovative techniques to secure IoT software and networks. One of the 9 topics is `Vulnerability/Attack Concerns' in IoT devices, which points to the issues IoT developers are facing with regard to addressing specific vulnerabilities in their IoT-based solutions. As we find in Figure , this topic is also considerably popular and difficult. Therefore, IoT software and hardware security researchers can use the discussions to develop tools and techniques. Data management and storage fall under the topic `Secure Data Management', which can benefit from research in database security.
Additionally, the research community can conduct rigorous research on IoT security topics such as what types of questions developers asked in each topic category, how the new release affects the developers, what factors developers discussed most about any new release, etc.
Threats to Validity
Internal validity threats relate to the authors' bias while conducting the analysis. We mitigated the bias in our benchmark creation and topic labeling processes by computing agreements (security sentences) and labeling together (topic modeling). During topic labeling, the authors communicated over Skype and Google Drive to reduce individual biases. The agreement between the two coders was always above 95%. We use a standard random sampling technique to reduce locality biases in our benchmark dataset. The machine learning models are trained, tested, and reported using standard practices. There was no common data between the training and test sets. We shuffle the training and testing datasets in each iteration to introduce a new scenario every time. We perform cross-validation on the entire dataset to make sure our assessment is not biased towards any subset. We follow standard hyperparameter tuning to minimize the effects of hyperparameters on the final results.
Construct validity threats relate to the selection and creation of the IoT security dataset. As we include all tags related to IoT, security-related tags are also present. However, there may be some security tags that have not been included in the dataset creation process. In the future, SO may include more IoT and security tags, which could challenge our dataset. However, as we discussed in Section REF , we observed that IoT developers in SO did not constrain their security-related discussions to only questions labeled with security-related tags. In fact, more than 90% of our security-related sentences are not covered by the security-specific tags in our dataset. As such, in Section , we discussed the implications of our automated tool to detect security-related sentences in SO, which could find such sentences even under SO IoT tags that do not explicitly refer to any security issues. Besides this, construct validity threats relate to the difficulty of finding data to create our IoT security-related sentences. Our benchmark creation process was exhaustive, as we processed more than 53K posts from SO. The evolution of IoT topic categories considers post-creation time as sentence creation time. A post can be edited after its creation with new sentences, but we observed less than 1% of such cases in our dataset. We find 30K security-specific sentences out of 672K sentences in the IoT dataset using the best performing SecBot model. During the topic modeling phase, we observed only a few non-security related sentences. Further investigation can be carried out; this may give us more insights about the model, which we leave as future work. In addition, we select a topic for each sentence based on the highest coherence score of the topics. During the topic labeling step, we label each topic by analyzing the topic's top 30 words and the posts associated with the label.
This approach is consistent with the related work that also analyzed topics in SO posts {{cite:aca1e7897acbe3b7676398db151e71e4f5eb4d96}}, {{cite:7a395f1b2adecdad3a541347bbbc3dc715fd85a7}}.
External validity threats relate to the generalizability of our findings. Our developed model, SecBot, demonstrates high accuracy, so the model is expected to offer good performance for other developer forums. There is a possibility that IoT developers discuss different types of security topics in other forums. For example, we found that most IoT developers discuss database/storage related security in SO; it is possible that in other forums they discuss encryption more. A detailed evaluation of SecBot on other forum data is our future work. In summary, our model exhibits good performance in this research direction, but the results should not be taken as an automatic implication of the same result in general. An extensive analysis of the diverse nature of challenges and characteristics can validate the transposition of the results to other domains. Besides this, SecBot is designed to work on textual datasets. However, SO contains code snippets, logs, and URLs, which are filtered out during our dataset creation. Therefore, if SecBot is applied to SO posts containing source code, logs, or URLs, the model's performance may drop.
Related Work
Related work can broadly be divided into Studies to understand and Techniques to detect/mitigate IoT security issues.
Studies. Literature in IoT so far has focused on
surveys of IoT techniques and
architectures {{cite:6023be039a42cb63c63814c79b746d7f0caf9df2}}, {{cite:484eed79a6d95c155414fa264e80dfe7784fe257}},
the underlying middleware solutions (e.g.,
Hub) {{cite:f0c094562b01831913a551a042436efe19173359}}, the use of big data analytics
to make smarter devices {{cite:7e27e76e92b1b2f0c0aa5bf78d9821a67532ff6d}}, the
design of secure protocols and
techniques {{cite:484eed79a6d95c155414fa264e80dfe7784fe257}}, {{cite:cbf7fc98ddaacd8e6130fd3251a699b9157d2916}}, {{cite:e5180165a848817fcb13530843e26e9453bbd432}}
and their applications on diverse domains (e.g.,
eHealth {{cite:bf18ea495a9f9d26e4c23df729163e4718b1a27b}}), the Industrial adoption
of
IoT {{cite:c434d20df2bee9133a354fdfc1074ec5d6986f52}},
and the evolution and visions related to IoT
technologies {{cite:14d008ef4dfaa3d74770e0ec160d6959fe7a873c}}, {{cite:2a8ecc33aebbea83629ce8f9c9bf52d58ecc5e8a}}.
The unauthorized inference of sensitive information from/among IoT devices
is a prevalent concern{{cite:32f398b228d063cc3782e9d96ffb1503ea3a416c}}.
We are aware of no previous research that focused on understanding
IoT security discussions in SO.
In SE, topic modeling is used to learn aspects like software logging {{cite:ff533794212cb139e5a7a9d1a10e1d09f3389798}},
feature location {{cite:d2783b96be46c7fa283e9c703810891e84d510d2}}, {{cite:9eb7fbcdc488fd2122006700390031f0ed2a5672}},
traceability linking {{cite:82fc14ce1770b954f3d84403cb4dfe04c5101447}}, {{cite:11f5d2df083a817493933260b9757f26f4b37bb9}},
software and source code evolution {{cite:8d6c6f4f6a593de39bf049db28e695feed6e48f8}}, {{cite:f82aa5dd54d85d77c8cf8aaa4055db25e1d828c3}}, {{cite:0a675d301344c8c70e37622852c9b5fc7d79dc88}},
source code categorization {{cite:ad5aeab4a2e32adeee1c5ec476797884d05b4dfa}}, code refactoring {{cite:8953e3d3f57067ba674f5902a24918d17b3e6395}},
defect analysis {{cite:245c8b8b4206181cfe12bf26f6039cc7fa3eb33f}}, and various software maintenance
tasks {{cite:90dc2a1051e3cae4b041c0c9636d21df32676107}}, {{cite:272d93e7dc2d66ebcb2142f90cce1732d7c2af9a}}.
The SO posts are subject to topic modeling to understand concurrency {{cite:f698b8b838c446032ee6635d58cd08bb5347d5d6}}, big
data {{cite:10ca95e1f88387583fa42a51d21d8e3e89a3b535}} and chatbot issues {{cite:7a395f1b2adecdad3a541347bbbc3dc715fd85a7}}.
Yang et al. {{cite:3659b511b8cf70b57b31b1ab12927a9d412ef841}} studied security topics in SO. They used SO tags to identify mobile security posts, which might miss many security discussions that were not labeled with any security-related tags. Our study resolves this issue by following Uddin et al. {{cite:771dc0b87a48537fcf90bc9d1c981181abacc22a}} to collect all IoT posts. Next, they applied topic modeling to the security posts. However, recent studies have found that larger documents may cover multiple topics, making LDA topic modeling unstable in such cases {{cite:addd14ce6ed0b6ca79713d470c8e19b3b49d1910}}. We thus use sentence-level topic modeling: we develop a precise security detector, SecBot, to identify security related sentences from SO posts and apply LDA topic modeling to our security dataset. Unlike Yang et al. {{cite:3659b511b8cf70b57b31b1ab12927a9d412ef841}}, we focus on IoT security topics.
Techniques. IoT devices
can be easy targets for cyber threats {{cite:e5180165a848817fcb13530843e26e9453bbd432}}, {{cite:30896e02a9ef5fac0a9190a2e3631835e16c86d4}}. As such, significant research efforts are underway to improve IoT security. Automated IoT security and safety measures are
studied in Soteria {{cite:416a17061cbafa1b5e7dddbe2a173b81a031abfa}},
IoTGuard {{cite:f07369d4f234b5843dfdc46be7259c93b54b37f6}}. Encryption and
hashing technologies make communication more secure and certified {{cite:be3db51d789c7b9c8d2eff1e64302304786bd8ac}}. Many authorization
techniques for IoT have been proposed, such as SmartAuth {{cite:bb41a06c43080b9752f7dbd11692c188d4b6524a}}. For smart home security, techniques such as Piano {{cite:655c81f6093954c5cac0b4eaffc54f3bbdc22690}}, smart authentication {{cite:9e4ec70aafba4a0424ca59013142336ba9740981}}, and cross-app interference threat mitigation {{cite:a1d34d84ed3600cdc33785e03c2fc07d328ca1fd}} have been proposed. Session management and token verification are used in web security to prevent intruders from obtaining information. Attacks on Zigbee, an IEEE specification that supports interoperability, can make IoT devices vulnerable {{cite:1371d921edf40e25b21e4218a0e4a499cdf1497d}}. A smart gateway for IoT has been proposed to tackle malicious attacks {{cite:679d947b82618056e516e8bb69461067cef56876}}. To the best of our knowledge, our SecBot+ is the first DL model that can automatically detect IoT security-related developer discussions. IoT researchers can gain insights to
offer increased support/security for a problematic IoT device as observed in the developer discussions.
Conclusions
The rapid adoption of IoT-based solutions has raised concerns about the security of IoT devices and communications. As such, it is important to understand the problems developers face when using IoT security tools and techniques, as discussed in online technical forums like SO. With a view to automatically detecting developers' security discussions at the granularity of sentences, in this paper we have investigated five advanced pre-trained language-based deep learning techniques (e.g., BERT, RoBERTa). The best-performing model, SecBot, based on RoBERTa, offers an F1-score of 0.935 in detecting IoT security discussions. We use SecBot to automatically mine all 30K IoT security-related sentences from the 53K IoT posts on SO. We apply topic modeling to the 30K sentences to find the IoT security topics in developer discussions. We observed 9 topics, grouped into three categories: Software, Network, and Hardware. Our developed tools and study findings can guide the automated collection and analysis of IoT security problems in developer discussions.
A common approach in the literature is the use of backpropagation/stochastic gradient descent {{cite:0d8d0a9f54388af7bb15c2e8e6c982ab310d2559}}. Typically, in such an approach, labels have to be defined. However, in RL-based design, the only feedback from the environment is the reward signal. To work with a definition of error, most DRL schemes, including ours, define synthetic labels in the form of an optimal {{formula:0403bb9f-9c4b-407f-91ab-0caeadb3314a}} function that depends on the reward signal. Since the optimal {{formula:c5a2ee0c-58d5-4a4f-9fcb-4b8eb5665e86}} function is unknown, it must be approximated using samples from the history. The error signal is therefore a function of an approximated optimal {{formula:85c2b81b-51ce-4c5a-9cc2-9cd8ac616b03}} function, which provides imprecise information early in the learning phase. If this imprecise and small error is used directly, as in the stochastic gradient descent (SGD) method, the vanishing-gradient problem stagnates learning. Our approach, in contrast, guarantees non-vanishing learning signals. Furthermore, we introduce an exploratory strategy resulting in improved performance.
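The synthetic-label idea can be made concrete with a minimal tabular Q-learning update, where the TD target r + gamma * max_a' Q(s', a') plays the role of the label built from the reward signal; the table, states, and rewards below are illustrative, not the paper's setting.

```python
# Minimal tabular Q-learning update illustrating the "synthetic label":
# the TD target r + gamma * max_a' Q(s', a') is built from the reward
# signal and plays the role of a label. States, actions, and rewards
# below are illustrative, not the paper's setting.
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One temporal-difference step toward the synthetic target."""
    target = r + gamma * max(Q[s_next])  # approximated optimal Q value
    error = target - Q[s][a]             # the learning signal
    Q[s][a] += alpha * error
    return error

# Toy two-state, two-action Q-table.
Q = {0: [0.0, 0.0], 1: [1.0, 0.0]}
err = q_update(Q, s=0, a=0, r=1.0, s_next=1)
```

Early in training the target itself is built from the still-inaccurate table, which is exactly the imprecision discussed above.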
In our experiments, we use the bert-base-uncased model as the victim model. For BadNet-RW and EP, we randomly select the trigger word from {“mb”, “bb”, “mn”} {{cite:fcfc4025986c2af6b3c7682b7c615bcf8c5d9d73}}. The trigger sentences for BadNet-SL on each dataset are listed in the Appendix. For all three attacking methods, we only poison 10% of the clean training samples whose labels are not the target label. For training clean models and backdoored models with BadNet-RW and BadNet-SL, we choose, via grid search, the learning rate {{formula:701cbdbf-09c7-4c7a-b302-8031807a2e8c}} and the batch size 32 for all datasets, and adopt the Adam {{cite:43872dfd98c6bc82955aab723df03a1ec3b6dad1}} optimizer. The training details for implementing EP are the same as in {{cite:e0252e9c71c85dcd0df25b422f993b4800dcf80c}}.
The existence of a spectral distribution satisfying (REF ) is
a consequence of Hamburger’s theorem, see e.g., Shohat and
Tamarkin [{{cite:72b971a47b3043b0562bafa933bc5c83071a3093}}, Theorem 1.2].
Earlier attempts by Carvalho, Gonçalves, Spiering, and Navarra (CGSN) {{cite:60adb8a080badafc3bdf42b0ee441fbe36fe59d5}} to explain the Feynman scaling observed at HERA in the leading neutron spectrum showed that this scaling is associated with gluon saturation and only exists for small {{formula:ec2501f0-c324-4fa7-a672-507485172d85}} values near the saturation scale {{formula:de76248d-59e1-42ed-823f-343912e9753c}} GeV{{formula:2544a06e-7f84-49cb-99b9-6b49ceb95f74}} . CGSN argue, using the so-called bCGC dipole model {{cite:bab17411872eb6f05673494a450741585f4244bc}}, {{cite:34349b8fa9b5c3369ecb1d656885c12761d9337b}}, that this scaling is due to the saturation of the dipole cross section at higher energies. This is surprising, as saturation is expected to become prominent only at small {{formula:07c6b46d-79f6-4951-b461-e030fa2772d3}} , and the {{formula:63a77a9c-37aa-4809-8ec5-edbd66615e78}} values probed in semi-inclusive measurements of leading neutrons are considerably larger than what has been probed in inclusive DIS, where the latter has exhibited no clear signal of saturation.
Furthermore, in {{cite:0bf0bdc23d89d1275d6f315304e955df834931f5}} the authors showed that the structure function {{formula:63509818-fc61-48ef-80f8-274bb6273fed}} will be insensitive to saturation effects even in the kinematic region of future ep colliders such as the FCC or the LHeC {{cite:a1742491675a3c75a4d474c594dd574a19f9701b}}. Moreover, there is also a scaling with respect to {{formula:6e7b198d-4dd3-4e85-9558-ba5bd5d77f41}} in the leading neutron cross section observed in the HERA measurements, for which the scaling with respect to {{formula:cad0c900-1f7d-43e4-a8ff-dfa57eeecd51}} in the Feynman-{{formula:5652dfcf-f985-4f1a-a236-17386afb0c8c}} spectrum should be present for all {{formula:e5d3ccd7-df8e-4434-9433-d296f078d954}} values. This raises further concerns about whether saturation effects lead to Feynman scaling.
Using standard methods and assumptions (REF )-(REF ), see e.g. {{cite:b7f2e49a2ac1c8df0e856c45b1aec980dc660154}}, Chapter 7, it is not difficult to establish that (REF ) admits a unique solution {{formula:f257a523-278f-4d69-99f6-d8d82c496c6c}} such that {{formula:2925ec8e-763b-4d5d-99b5-eec098e076af}} and {{formula:e2ca5149-c8e8-4a9f-a3e8-4ecfa368a300}} . Our first result is the theorem below, that provides us with a uniform estimate on {{formula:2e0f6ff9-2c08-42f9-a211-c1ee79cb8d63}} . This estimate is used in the second statement of the theorem in order to obtain the optimal refocusing estimate {{formula:7a482e0d-3845-4b98-a144-7756d416dfd2}} .
Finally, we comprehensively evaluate our different implementations of DP-TopDown on real, relevant datasets, and provide plots that illustrate the privacy-accuracy trade-off of our algorithms. Empirically, LocalRNM performs better than NoisyCounts on most datasets. Motivated by our theory, we also show that both the training and testing accuracy of the decision tree increase with more training data. On one dataset, we observe that the private algorithms actually improve the test accuracy, which is similar in spirit to adding noise in non-convex optimization problems to escape local optima {{cite:3c6156db7eb459bbfa785dbb5d949187f2f30eaa}}.
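A NoisyCounts-style split statistic can be sketched with the standard Laplace mechanism; this is a generic illustration under the assumption of sensitivity-1 counting queries, not the paper's exact DP-TopDown implementation.

```python
# Generic Laplace-mechanism sketch for a NoisyCounts-style statistic
# (an assumed mechanism, not the paper's exact DP-TopDown code):
# a counting query has sensitivity 1, so adding Lap(1/eps) noise
# makes the released count eps-differentially private.
import math
import random

def laplace_noise(scale, rng):
    """Inverse-CDF sampling of Laplace(0, scale)."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count, eps, rng):
    """Release a count under the Laplace mechanism with sensitivity 1."""
    return true_count + laplace_noise(1.0 / eps, rng)

rng = random.Random(0)
released = noisy_count(100, eps=1.0, rng=rng)
```

A larger eps (weaker privacy) shrinks the noise scale 1/eps, which is the privacy-accuracy trade-off illustrated in the plots mentioned above.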
Experimental Setup.
To align with {{cite:0c1402785d0154f4778357fa7bf64a804b6dba03}}, {{cite:cc18bcf9ba14c96a1ad76316cc55e5b82c6b4337}}, {{cite:9d2b720da2f5969b967fd408063f49fe6a0aa9bb}}, we perform experiments on the CelebA database {{cite:2dfa81c7dafff9066087bf5399db584d5054b81a}}. CelebA is a large-scale dataset containing more than 200K celebrity facial images. Each image is annotated with 40 binary attributes such as “Smiling”, “Male”, and “Eyeglasses”. These attributes allow us to evaluate counterfactual explanations by determining whether they highlight spurious correlations between attributes such as “lipstick” and “smile”. In this setup, explainability methods are trained on the training set and ML models are explained on the validation set. The hyperparameters of the explainer are selected by cross-validation on the training set. We compare our method with xGEM {{cite:0c1402785d0154f4778357fa7bf64a804b6dba03}} and PE {{cite:9d2b720da2f5969b967fd408063f49fe6a0aa9bb}} as representatives of methods that use an unconditional generative model and a conditional GAN, respectively. We use the same train and validation splits as PE {{cite:9d2b720da2f5969b967fd408063f49fe6a0aa9bb}}. DiVE and xGEM do not have access to the labeled attributes during training.
For testing the non-multilocality of quantum networks consisting of general noisy resources, we provide one sufficient condition: that all coefficients of {{formula:9171529a-43c6-4aac-b2d9-82134d19a009}} and {{formula:5dab7bba-03f7-4e0d-aecc-1d7a2980f6f5}} in quantum states are no smaller than {{formula:3233b016-ca74-4d7b-89b8-574c83c60ebc}} , see Appendix G {{cite:fe2796e70314d67278bdd4c16dea1cf6900e274f}}, where {{formula:1de41012-c350-41d6-b3b3-c27c9c29e6de}} are Pauli matrices. Further investigations of non-multilocality and entanglement witnesses would be valuable {{cite:1036d884441af838219150e0222cbfaf568a0565}}. When multiple outputs and inputs are required for the observers, the linear expansions of dichotomic inputs and outputs {{cite:6635edd8e5ac244c014f12d98e0f9afea152b56e}} are insufficient to characterize all multipartite quantum correlations {{cite:41a170e5d99a6dcdf88dbf15f17b9592cf4eb50c}}. The general representations of {{formula:65f5b888-8689-4855-9282-05710e5a8b55}} and {{formula:67c3e7f0-f744-412f-ba56-4d423dc6bf3d}} are related to the famous Hadamard matrix conjecture {{cite:e810692dab7ae4db1500dfb9daa3d2931e1df817}}.
This raises two interesting problems: (1) how to characterize networks consisting of high-dimensional quantum states; (2) how to characterize cyclic quantum networks without {{formula:39c328ad-b3c3-45c8-b08b-7438731caafd}} independent observers {{cite:6635edd8e5ac244c014f12d98e0f9afea152b56e}}, {{cite:2b98ac46f991e2b57bf51f894ca1e0d86bf6faae}}, {{cite:c16d0b4d9b9e114ca9cf7cd4a610398ed43cb0bf}}, {{cite:c312cd25d7256b245ec83b8332fd4265ef1f9557}}, {{cite:b3bd73e88dddac80cdfb937ab954ff098ae4c121}}, {{cite:45b23bdd0e6517a0e9faf943ec2393d62a02e06a}}, {{cite:c723ad0819c5064c1e02818e1278bc6c5eb091c5}}, {{cite:840a4d801c83672214dc6c9782e302652201d091}}. Several nontrivial examples are the triangle network, the symmetric cyclic network, and the door-type network consisting of EPR states and multipartite GHZ states, see Appendix H {{cite:fe2796e70314d67278bdd4c16dea1cf6900e274f}}, which may be interesting for quantum nonlocal games {{cite:5080b50a84ffe15aaed004b16a9333803204da6b}}, {{cite:b90cc97f0b13cd99c63882d1ddc99d924173b1f4}}.
In Figure REF , we compare the quantum Fisher information with {{formula:cacae124-9e6d-4c79-90b9-74651fc6a9fe}} causal orders as a function of the noise parameter {{formula:e88ebe11-661e-4874-97b5-50ed5d181665}} . As seen in Figure 1(a), for certain combinations of causal orders, the quantum Fisher information is lower for the quantum 3-switch than for the quantum 2-switch in the noise region {{formula:165f1ec6-c761-45f6-9cee-6b5d7db16bf4}} . We also see that the quantum Fisher information {{formula:41893f56-3b59-4b44-8ddb-e8f3b4225264}} is always higher than the quantum Fisher information of the two-channel scenario in any noise region. This allows us to choose the best combination of causal orders with three channels, minimizing the resources needed to implement the quantum switch while exceeding the quantum Fisher information of the two-channel scenario. Figure 1(a) also shows that {{formula:b73badb0-15b3-4a0a-893a-1c5924d8a95f}} is higher than the quantum Fisher information of any other combination of causal orders. This confirms that taking all alternative causal orders in superposition yields the maximum gain in quantum Fisher information. In addition, the quantum Fisher information with three channels {{formula:497bc199-611a-4e14-a5c9-e3fc497e4a49}} is always higher than the quantum Fisher information {{formula:606a9a51-75b0-46af-b2cc-0316861b48c5}} for two channels. This may well indicate that the quantum Fisher information increases as the number of channels increases. However, in Ref. {{cite:878937e1c47efc9d9c767b1b5e85cffbeee5109e}}, it has been shown that the classical capacity increases monotonically with the number of channels but asymptotically reaches a limit for larger numbers of channels. Whether the quantum Fisher information keeps increasing or approaches a limit as the number of channels increases is a fundamental question that future investigations could address.
In Figure 1(b), as the dimension {{formula:26315edd-a4e0-4d3e-933a-00507ac25d41}} of the input state {{formula:2ba72779-f38e-4cf2-b09e-f3af1f6104fb}} tends to infinity, the quantum Fisher information for {{formula:767607fb-19c4-44fa-a250-87e7da9afa89}} causal orders is lower than the quantum Fisher information for the quantum 2-switch in the noisy regions {{formula:3ebd1597-6b42-4152-abfe-fc1d07aa479c}} . Remarkably, {{formula:ba661a25-4da1-4e96-8d88-bbbbbecfbe70}} is always higher than the two-channel scenario at large dimensions of {{formula:1813307f-472e-41c6-9966-7e27941d0727}} for any noise region. Figures 1(c)-1(d) show the relative quantum Fisher information gain defined in equation (REF ). Finally, we notice that the quantum Fisher information {{formula:107bd8af-b5ac-408b-913c-ac76733f0d74}} was also calculated in Ref. {{cite:3b8bcf58eb8a29f18b2724ffb0779164714af9d1}} following a different procedure. Ref. {{cite:3b8bcf58eb8a29f18b2724ffb0779164714af9d1}} only studies three combinations among the 57 possible combinations to superimpose different causal orders. Our procedure allowed us to study the Fisher information for any combination of causal orders through the quantum switch matrix (REF ) and to provide analytical and compact expressions for the quantum Fisher information.
The calculation of the transition density {{formula:fc1d33e7-5b09-41db-8759-6208a22aac69}} itself is a very complex issue as the inner crust may have a very complicated structure. A canonical approach is to search for the density at the point where the uniform fluid starts to become unstable against small-amplitude density fluctuations, signaling the creation of the nuclear clusters. This formalism includes the dynamical {{cite:dcc276853bbe1144ec4f0c7f4612e27a32211d11}}, {{cite:d427675b651d0629e0a70e144df0b20b1950d8e9}}, {{cite:31634a2d37cf920526adaaebd6a0272514e41dfa}}, {{cite:8faf80ca3c32556ee8d5333e45da3656a11d3ac4}}, {{cite:daba7cd4a8d1709de03a77613cf215bd6ea95f99}}, {{cite:f6760718bd6ede5bb466e45fdf25f9d422bc995f}}, {{cite:babb54b8a4836dc9ccf93297e2e39803d538e6ae}}, {{cite:8147902c043fad42ea24e9ac343c0365344b7165}}, {{cite:1a26e64bbbe5be5829215ab195295e85220adaea}}, the thermodynamical {{cite:5ac5c9796faf2ffd5fcddbd6eda56df5492a9434}}, {{cite:2a99ede54396433d30670bd4d3a0ce3a0dddea58}}, {{cite:2490af3ca3ce5e018b770f641a7e1cbbaf51b438}}, {{cite:cb2235295ac0b44ce8ca65f438bc9f8f55e0b9c6}} and the random phase approximation (RPA) {{cite:f31270509b999ab8ef20a001595c3401b163b962}}, {{cite:13fe57ef9fb1b10a50cbf4293a2d0697bc0f8490}} approaches. The different density regions of a compact star are regulated by different EoSs. The density domain can be largely classified into two distinct regions: a crust which is responsible for {{formula:af981f20-751a-483f-8fda-f2ad91a86309}} 0.5{{formula:261929e6-2f0f-4b5d-a33b-4554b3d0d36a}} of mass and {{formula:7e4be03b-3534-42e6-bea2-bbe1b51cb25f}} 10{{formula:1c2740bb-cfff-404c-981d-a089dc0db9cc}} of the radius of a star. The core accounts for the rest of the mass and radius of the star. Except in the outer few meters, the outer layers consist of a solid crust about a km thick comprising a lattice of bare nuclei immersed in a degenerate electron gas. 
When one penetrates deeper into the crust, the nuclear species become progressively more neutron rich because of the rising electron Fermi energy, beginning in principle from {{formula:30802ac7-aa97-42bc-b2a4-e44ef323baeb}} Fe through {{formula:52423369-094a-4d41-a598-fcb1991ff7e4}} Kr at mass density {{formula:73c993df-fc1d-41dc-a424-0c098cd073ef}} 4.3 {{formula:03ffebe8-49bf-48d9-9ef5-3d4041ec7f8b}} 10{{formula:a327d872-e282-424f-8116-b9dee8cdcc76}} g cm{{formula:462da9e5-67b2-454b-9df7-4f392e04c81b}} . This density corresponds to the ‘neutron drip’ point, where the nuclei are so neutron rich that, with any further increase in density, the continuum neutron states start filling up. Consequently, the neutron-rich nuclear lattice becomes permeated by a sea of neutrons.
Zones are interesting for synthesizing properties such as robustness of neural networks used for classifying data. Indeed, classification relies on determining which output neuron has the greatest score, translating immediately into zone-like constraints.
ReLU functions {{formula:9ce81f77-f138-496d-a31f-8c6c6c2fead2}} are tropically linear, hence an abstraction using tropical polyhedra is exact. A direct verification of classification specifications can be done from a tropical polyhedron by computing the enclosing zone, see {{cite:481d3397b944a2d3069ec397cafc6b90860efbaf}} and Section REF . In Figure REF , we picture the graph of the ReLU function {{formula:d67cba36-1d4a-48ae-b2e3-bb9c696c3ee1}} for {{formula:88c26f57-942f-4f87-aed6-b75797a2c41e}} (Figure REF ), and its abstraction by 1-ReLU in DeepPoly {{cite:cfa771116cc1389f22d81a6fb3071fa11fe61737}} (Figure REF ), by a zone (Figure REF ), and by a tropical polyhedron (Figure REF ), which is exactly the graph of the function.
{{figure:3a2d393d-582f-4ad7-8ea0-1caf498fc848}}
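For intuition on abstracting ReLU, the sketch below propagates a one-dimensional box (interval) through ReLU, on which the abstraction is exact per coordinate; this is a box illustration of the idea, not the tropical-polyhedra or zone algorithm of the text.

```python
# Box (interval) illustration of abstracting ReLU, assuming inputs in
# [l, u]; on a box the abstraction is exact per coordinate, in contrast
# to the relaxations needed by relational domains such as 1-ReLU.
# This is not the tropical-polyhedra algorithm discussed in the text.
def relu_interval(l, u):
    """Image of [l, u] under ReLU(x) = max(x, 0)."""
    assert l <= u
    return (max(l, 0.0), max(u, 0.0))
```

For example, the interval [-2, 3] maps exactly to [0, 3], with no over-approximation.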
The Camassa-Holm (CH) equation, rediscovered by Camassa and Holm {{cite:a81ee87021e6540dd38a10b366c2f0bc9c6d11c8}} at the twilight of the last century, added a new member to the zoo of models describing waves in shallow water, such as the KdV {{cite:9625654b2bb888c43f793232d453a67999df6527}} and BBM {{cite:4287ed0d986320643c2c4a797d0db51e67469e06}} equations. It was first reported (but not spotlighted) in the early 1980s by Fokas and Fuchssteiner {{cite:6fc5629957387fd812f6fab6496fe8e31afdb2cb}} and Fuchssteiner {{cite:bf634c4b2ed7e62535bb5b2a2e3f9a91817ecb51}}, see also the comments in {{cite:92b90676eaff54ac01e017a59e4dd2f1f3ebd744}}, in the context of integrable equations.
Therefore, it remains to verify that the middle vertical map is an isomorphism. For this, we need to recall some more terminologies. Denoting by {{formula:b08d7763-faed-46d1-b328-8b68ca4cb169}} the cyclic group generated by a primitive {{formula:b62a277c-0f1e-48dc-9409-325a6b1e268d}} -root of unity, we then write {{formula:7798cefe-a848-4c6d-87e1-5be6b5513fda}} for the direct limit of the groups {{formula:8f5fbdcb-110c-419b-99d7-140bf184e6a0}} . These have natural {{formula:5fe8e7f4-2d55-4bef-8ead-32c2d93e9a08}} -module structures. Furthermore, for an integer {{formula:c6d0efad-2ffe-40ac-a53d-d393e5ade440}} , the {{formula:0c764ddd-ec74-4a79-85a5-5557035b1540}} -fold tensor products {{formula:21b39e1c-55a6-40c7-a7b7-9e477734f1af}} and {{formula:1235a9c6-5839-4dc7-ae8b-d9b142f59140}} can be endowed with {{formula:2825b6ed-fd18-459c-b01d-6ec299d04ad6}} -module structure via the diagonal action. For convenience, we shall sometimes write {{formula:a86108a6-9ef1-48e8-8b8c-f56561c4cdb5}} and {{formula:992b4aaa-67f8-4709-839a-6ec86dc2bbdc}} , where {{formula:8373c436-2f3f-4a04-9f01-037bcd6e9c91}} is understood to have a trivial {{formula:4ca5f923-e525-4dd7-9602-789020ee2652}} -action. Write {{formula:808f1c4c-1270-423e-ab0a-a238b53f0d6e}} for the étale cohomology groups, where {{formula:ebb0285c-8912-40b6-9867-787b72ab374a}} is viewed as an étale sheaf over the scheme {{formula:38957859-b994-4574-b255-b5fbb6cf4ccf}} in the sense of {{cite:5c82f7b209a599c8e09d622d5eb0462a976b5196}}. The direct limit of these groups is then denoted by
{{formula:ac46cfe0-cfb0-44b0-a298-619c32923277}}
To turn this into a white-box adversarial attack, we need an algorithm that, given an image and an initial value of the conditioning random vector, locates the value, in the neighbourhood of the initial value, that causes the loss function of the SISR network to attain a local maximum. State-of-the-art adversarial attacks such as FGSM {{cite:d82d7f1b0aa5737be3b7be26e83d7e799bc90413}} and PGD {{cite:439df5c182c9798a073ed347149030212bf6a354}} fit this description. These methods ascend the gradient of a loss surface to find the perturbation of the input that causes the maximum loss at the output of a network. To ensure preservation of semantic content, the perturbations are norm-bounded, and it is within these bounds that these attacks are optimal. However, learned degradation models are trained to always preserve semantic content irrespective of the value of the conditioning random variables, making the norm bound unnecessary and rendering FGSM and PGD sub-optimal at best. Moreover, PGD, being an iterative procedure, takes a long time to converge to the desired maxima.
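The FGSM step referenced above can be sketched in a few lines; the toy quadratic loss and its gradient below stand in for the SISR network's loss, which is not modeled here.

```python
# Hedged sketch of the FGSM step: move the input by eps in the
# direction of the sign of the loss gradient. The toy quadratic loss
# below stands in for the SISR network's loss, which is not modeled.
def fgsm_step(x, grad, eps):
    """x_adv = x + eps * sign(dL/dx), elementwise."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Toy loss L(x) = ||x - t||^2 with gradient 2 * (x - t).
t = [1.0, -1.0]
x = [0.0, 0.0]
grad = [2.0 * (xi - ti) for xi, ti in zip(x, t)]
x_adv = fgsm_step(x, grad, eps=0.1)  # ascends the loss within the eps box
```

PGD would iterate this step with a projection back onto the eps ball, which is what makes it slower than the single-step FGSM.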
To explain the negative magnetoresistive behavior, we have assumed the clean limit. However, this assumption is not necessarily essential: In general, the sign of the sum of the DOS and MT contributions to the fluctuation conductivity relative to the normal conductivity is negative in low fields even in the moderately dirty case {{cite:98867fda4432b2ea5098f4f39c925d553576550e}}. Then, as far as PPB affects the gradient terms for the SC fluctuation in the way seen in Fig.5, we would arrive again at the conclusion that not the {{formula:98f9c912-4ac8-450c-a5c6-da9a07a0725d}} LL but the {{formula:b1e02e03-bcec-43d7-9c95-bc7cb19fd968}} LL fluctuation becomes the origin of a negative magnetoresistance. Rather, more attention should be paid to the use in Fig.6 of a large {{formula:c95807fd-f5d5-4d1f-9a29-bfca622fe074}} . It might indicate that the present simplest model based on a single-band weak-coupling BCS Hamiltonian is insufficient for explaining the strange resistive behavior seen in FeSe {{cite:5da1260edceef4a1df5c86fd93dae791e4a24356}} and suggest that the corresponding analysis based on a two-band model {{cite:9dc4b4a69eb60d866e8bf4cc6ffa6aeef7d8e2f4}} appropriate for describing FeSe is needed and will be left for our future work.
We define a transformation cost function in Sec. REF to indicate the benefits of transforming one graph type into another in terms of the change in memory capacity, taking the runtime complexity of the transformation into account.
Related Work
The topics related to this paper cover several distinct areas. On the one hand, graphs that serve as models for various theoretical and real-world problems are considered. On the other hand, graph expressivity and graph transformations are introduced, which differ from existing notions of expressivity and from existing transformation techniques.
Graphs –
The concept of graphs is fundamental in the environment of discrete mathematics. Several disciplines deal with graphs from different perspectives, such as graph theory {{cite:fea0caa61f0217c29210246beab00fe5a3354d89}}, {{cite:dd44b4a5460dea4311c6ed9ffeef2941c2ebd440}} and algebraic graph theory {{cite:0542334ca471a3b6c5589c1902b5fe59d82842de}} which work with graph properties and their relations. Furthermore, computer scientific fields like network theory {{cite:45658ac4508f8e0c39754a7559d1817ee9b2fec9}}, data mining and machine learning {{cite:a17a7c7409ae667cc0a4d584b2278b2c0f1cdfb3}} and neural networks {{cite:b46e855d480f495524976c80283c7ec89057c71b}} aim to find patterns and properties in graph-structured data.
Moreover, the range of graph applications is wide, since graphs can model information from various domains. Besides theoretical applications like combinatorial optimization {{cite:461eee437840e10dcf75160234924823e35c2c94}}, graphs also arise naturally from networks such as social networks {{cite:f0b7810f9f2e018866ebcce21a5521ed521fc02e}}, multiagent networks {{cite:deae691991c207b07e65a63a4f95f99c8e42acf0}}, or mobile robot networks {{cite:bcaaeea3590071694690c3f15e00f5ce8d68f384}}, among others. In addition, several areas of the natural sciences, such as chemistry {{cite:a861d3fb9f424aa44723617f296ce4fb369d0c1c}}, (computational) biology {{cite:21b7321ad91abdce5c8be6db142960a6d95f799b}}, and medicine {{cite:ef1e9391845e2fdaf1625ee64e36cc4dbae79793}}, as well as technical applications, e.g., novelty detection {{cite:4c414237751348f72a063c555a5ab929f1aafe41}}, {{cite:2153f7413ee04681039f3d4efc1795bec4278f32}} and manufacturing {{cite:c72dce597e68f8135b6f4b24d325cb3cce1a2338}}, profit from graph models.
Expressivity –
The concept of expressivity plays a significant role in the examination of many different objects and is intended to analyze the expressive strength of the object at hand. For example, one famous expressivity result states that shallow neural networks are universal approximators, i.e., any real-valued function from a wide variety of relevant function sets can be approximated arbitrarily accurately {{cite:593877b02c2be570241f9af96429c8ca4faa4349}}. Expressivity in the context of graph neural networks (GNNs) reflects the consistency of graph embeddings, i.e., how similar the feature embeddings of similar graphs are.
This is closely related to the Weisfeiler-Lehman (WL) isomorphism test on graphs, where expressiveness is measured by the ability to distinguish structurally non-isomorphic graphs, and it has been proven that most GNNs are at most as expressive as the WL test {{cite:e0c1934b2131720873a43a0f0c4a8b8ab7f60a9e}}, {{cite:911d9be470c3e6d49f37d3b354f5fa0100e8ea9f}}. In the following, isomorphism is not taken into account, since equal expressivity is assigned to graphs that exhibit the same properties.
An approach more similar to the following definition of graph type expressivity comes from linguistics and is known as language expressivity. It characterizes to which extent an ontology implements given requirements {{cite:90bae275135c3ec5bd1104c1088d12c5c91eac91}}. Here, the properties of graph types are utilized as requirements to define a binary relation indicating that one graph type is more expressive than another.
Graph Transformations –
Motivated by different applications, there are already several transformation techniques on graph-structured data. Especially in the context of graph learning, graph embeddings provide a basis for most learning methods {{cite:c2319a6c39e1d9d475dd5c54255964a0d1f32cee}}, {{cite:3d0b427dfe4504fc7cf10a950c5334fe96919756}}. Thus, there are many approaches that process an input graph and map it into a metric space {{cite:872e4b765c293051ae3faa87735429fe7c793a3e}}. Instead of mapping graphs into a metric space, the paper at hand focuses on transforming graphs of a specific graph type into another graph type.
There are already some established approaches for these types of transformations.
Some of these approaches are rule-based, have been studied since the 1970s, and have already been successfully employed in many applications {{cite:05166ede2c34dce6e123001927067a275647b610}}. In general, these approaches are also stricter in the sense that they consider transformations based on graph morphisms, e.g., the transformation from an attributed graph into a hypergraph {{cite:95f004cc5e2c76863c850284ea1aa6620ac297bc}}, or transformations between flat and hierarchical hypergraphs {{cite:9bf504189ea05c020606b7c1f6b6eafcca00e105}}.
In the context of graph games and their winning strategies, the authors of {{cite:f4d5de0fbe97df9a8575c8b9974ef51046151d88}} make multigraphs simple by swapping a sequence of doubled edges, i.e., any two edges {{formula:2c5308fe-b931-423a-b04f-dc5b4a7b06de}} and {{formula:dc8cd75c-3371-47dd-8a22-e67274ebb61a}} are replaced by {{formula:f6f79cc6-45e9-40aa-b84b-df152a4f3755}} and {{formula:2371c5ff-48ea-442d-b9ca-114b663c33ee}} until no duplicates exist anymore. (This definition follows the one called double edge swap from {{cite:5561b4fd33904bb83b5e23b5071b58aedd69a0d0}}; it is also known as degree-preserving rewiring, checkerboard swap, tetrad, or alternating rectangle {{cite:c27bcbdc11a1d84c7a8f9359046caeacc92c8ab9}}.) Nevertheless, this type of transformation is lossy, since the previous multiple edges are not encoded but merely re-positioned within the new graph so that they are no longer multiple. Another, more general lossy transformation is given by the graph transformations between directed and undirected graphs in the Python package NetworkX {{cite:485cf6b11bf6ebeed8ef09201df675ecca397898}}. Here, a directed graph is transformed into an undirected one by discarding the direction information of each edge and arbitrarily choosing between the attributes of the directed edges if more than one direction is present.
(networkx.DiGraph.to_undirected is a lossy transformation; networkx.Graph.to_directed is a lossless transformation.)
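The double edge swap can be sketched directly on an edge-list multigraph; the representation and example below are ours, and the degree check illustrates why the swap is degree-preserving yet lossy with respect to edge multiplicities.

```python
# Sketch of the double edge swap on a multigraph stored as an edge
# list; representation and example are ours. One swap replaces
# (x, y), (u, v) by (x, u), (y, v), preserving all node degrees but
# discarding multiplicity information (hence lossy).
from collections import Counter

def double_edge_swap(e1, e2):
    """Degree-preserving swap: (x, y), (u, v) -> (x, u), (y, v)."""
    (x, y), (u, v) = e1, e2
    return (x, u), (y, v)

def degrees(edges):
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

# The edge (1, 2) is doubled; one swap makes the graph simple.
edges = [(1, 2), (1, 2), (3, 4)]
f1, f2 = double_edge_swap(edges[1], edges[2])  # -> (1, 3), (2, 4)
simple = [edges[0], f1, f2]
```

The degrees of all nodes are unchanged, but nothing in `simple` records that (1, 2) used to appear twice, which is exactly the loss of information discussed above.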
In this paper, however, the kinds of transformations considered are slightly different, since they are neither rule-based nor have to follow strict graph isomorphisms.
Furthermore, the transformations listed here should not suffer from any loss of information.
Graphs and Graph Properties
To start with, this section provides the foundations and defines the different elementary graph types together with their possible static or dynamic structural properties.
A graph is a tuple of a set of nodes and a set of edges, each of which can have various constitutions or attributes. In general, the sets of nodes and edges are not further restricted, which opens up many possibilities for modeling with graphs. Only finite graphs are considered in the following, i.e., the node and edge sets are finite. In the following chapters, up to and including Definition REF , the focus lies on static graphs, i.e., graphs with no temporal properties. Afterward, dynamic graphs and their properties are defined.
First, a set of elementary graphs is defined from which more involved graph types can be built.
Definition 3.1 (Static Graphs: Elementary)
A directed graph (digraph) is a tuple {{formula:6258c212-a644-421e-b0bf-209725e6069e}} containing a set of nodes {{formula:b82a49a2-6b73-4114-8c10-7d6c227b8fb7}} and a set of directed edges given as tuples {{formula:6b14a143-b629-4833-ab6d-41dccf6e0e20}} . The set of all digraphs is denoted as {{formula:a76bce96-7fbb-4de8-b686-3933f2e0ba01}} .
A (generalized) directed hypergraph is a tuple {{formula:90cf0a27-2ccf-4ef3-8b78-a2e4f4e22c6f}} with nodes {{formula:5a127b93-50fb-4c30-9061-34583e1a746e}} and hyperedges
{{formula:f9d88dcf-ed7b-41e8-a05d-e1c957eba603}} that include a numbering map {{formula:84bdaa06-294f-463a-862c-6909b0202182}} for the {{formula:07304e95-6b9f-49eb-b06f-a2e94af27c65}} -th edge {{formula:2f3f92b1-7b61-4f63-aa18-a061106323c2}} which indicates the order of the nodes in the (generalized) directed hyperedge. Wlog. it can be assumed that the numbering is gap-free, so if there exists a node {{formula:2fd46faf-021c-45cd-a694-5621b5b3bd53}} with {{formula:b75767e6-b095-469d-a228-9fafc67bf32d}} then there will also exist a node {{formula:6dc4c442-5e80-4274-bb4e-3367c67ebe92}} s.t. {{formula:a2c1ee1e-5dcb-4eda-ba15-3b8b41a21a70}} . The set of all (generalized) directed hypergraphs is denoted as {{formula:8c1cb408-ce3d-4add-a932-1445bece34b3}} .
Remark: The definition of the generalized directed hypergraph differs from the common definition of a directed hypergraph {{cite:dd78cd5477d93f29e9d694d386cf392268aa2c17}} with edges {{formula:c039512e-acd0-4edc-a4c2-8fd8b851b3d6}} . To obtain a more generalized definition, we introduce a numbering mapping for each hyperedge that indicates an ordering of the nodes in a hyperedge.
On a generalized hyperedge we define the ordering by equipping the node set {{formula:bb36324c-8d89-4de9-a517-dda54dd47122}} with a function {{formula:3aca6fdd-5848-469d-934e-7c4ca9a0d581}} such that for {{formula:28b9b43a-3d53-4880-88a5-f85f3744a1a3}} : {{formula:b2762290-d7d1-4ca9-a4e9-b3628277d4bf}}
With this, the common notion of a directed hyperedge {{formula:350f5e8e-5b62-4245-9e0b-ffc3b227d6c2}} can be depicted by mapping the nodes in {{formula:0f2ee879-9e88-4a22-ad6f-7de479929bda}} to 1 and the nodes in {{formula:e25ed575-d018-434b-adce-ce9bcccf54cc}} to 2. Due to this construction, not only binary relations can be considered, but also {{formula:8858d4f9-75d0-4416-9a58-eee2e0d97959}} -ary relations for arbitrary {{formula:7be946fd-9175-405d-93de-1d09c4ac7a87}} , encoding a path in a common directed hypergraph.
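This encoding of a common directed hyperedge via the numbering map can be illustrated with a short sketch. The representation of a hyperedge as a node set plus a node-to-number dict, and the helper names, are assumptions made for illustration, not notation from the paper.

```python
# Sketch of a generalized directed hyperedge as defined above: a set of
# nodes plus a numbering map that orders them. A common directed
# hyperedge e = (T, H) with tail set T and head set H is encoded by
# numbering tail nodes with 1 and head nodes with 2.

def encode_directed_hyperedge(tails, heads):
    nodes = set(tails) | set(heads)
    numbering = {v: 1 for v in tails}
    numbering.update({v: 2 for v in heads})
    return nodes, numbering

def is_gap_free(numbering):
    # The numbering should use the values 1..k without gaps, as the
    # definition assumes w.l.o.g.
    used = sorted(set(numbering.values()))
    return used == list(range(1, len(used) + 1))

nodes, f = encode_directed_hyperedge({"a", "b"}, {"c"})
```

With more than two distinct numbers, the same structure encodes an n-ary relation, e.g., a path through the hyperedge.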
{{figure:3a8d2bdf-9fa0-4c15-a393-cc14ca4f32d9}}Another component for obtaining more graph types is to demand additional properties of the graph structure. These properties are called structural as they relate directly to the graph structure, i.e., the graph representation has to be extended or changed by further data structures.
Definition 3.2 (Static Structural Graph Properties)
An elementary graph {{formula:c1f31b47-8dd6-4e0f-8eae-bc91ec91cad6}} is called
undirected if the directions of the edges are irrelevant, i.e.,
for directed graphs: if {{formula:67e7ccac-2253-4fed-ad8a-12e1a2489abc}} whenever {{formula:6be52bba-1e4a-4fed-a086-b8a139b63225}} for {{formula:5d0b9326-fc00-4100-8c3b-169ea447f81f}} . Then, as an abbreviation, the set data type can be used instead of tuples, namely {{formula:e94dda87-83f7-4266-a8d4-2c5f1129ec49}}
for directed hypergraphs: if {{formula:3d949fc6-8739-4ce8-9922-34ab421815dc}} for all {{formula:85d257a1-be87-48e2-9182-6f36edb706d8}}{{formula:539e06dd-7662-4d36-8df0-3a661f1bf441}} encodes that {{formula:e7ecb59c-e7cc-4989-a0e8-f3c014229b9f}} is an undirected hyperedge. Abbreviated by {{formula:7cad51b5-f669-48f2-9730-98b3f7331ea9}} .
In what follows {{formula:543f8694-4a46-451a-9f1b-86c51181b34a}} is the set of all undirected graphs.
multigraph if it is a multi-edge graph, i.e., the edges {{formula:4a647878-6a74-4c7c-8022-7a24eeb4ea95}} are defined as a multiset, a multi-node graph, i.e., the node set {{formula:549728b7-3dde-4128-9b24-55b9fed5dd12}} is a multiset, or both. All multigraphs are written as the set {{formula:1f3834e2-1b8d-4e23-85f1-3e1875cd5a60}} .
heterogeneous if the nodes or edges can have different types (node- or edge-heterogeneous). Mathematically, the type is appended to the nodes and edges. I.e., the node set is determined by {{formula:8e273f8c-f0be-4a3a-8494-17c93cff2109}} with a node type set {{formula:7b11c26c-9b00-4c9d-83fc-bc21bff7997b}} and thus, a node {{formula:9ba96e8d-3b6e-4d77-969d-2b5e8eac0a9d}} is given by the node {{formula:0db5ad3f-8e48-4769-9ff5-edc235b7f7d8}} itself and its type {{formula:38be7fe1-a1ba-4a17-8871-8d67ad4df48c}} . The edges can be extended by a set {{formula:84ea3537-8266-4406-b404-46003e0e883e}} that describes their types, to {{formula:0c333d1d-2ce6-4851-9b9f-71a761a7c673}} .
attributed if the nodes {{formula:976fb7f2-ab5e-4fd5-b2a6-40f547818a84}} or edges {{formula:059c48f8-baba-48e1-b4c4-312c37f0f18f}} are equipped with node- or edge attributes. These attributes are formally given by a node attribute function and an edge attribute function, respectively, i.e. {{formula:a15563fc-27ca-4ec4-8053-d8e754284fe3}} and {{formula:474fcdca-5b04-4871-b706-7ccd78eedb2f}} ,
where {{formula:ab36ad45-e037-45fd-9cde-0f193d0304d4}} and {{formula:395c9bc9-88b0-4f4c-996f-639627715104}} are arbitrary attribute sets. If there are only node attributes, the graph is called node-attributed; if there are only edge attributes, it is called edge-attributed; and if {{formula:0520fcb4-a91e-45e2-8378-d8817507e791}} , it is called weighted. The set of all attributed graphs is denoted as {{formula:f1db48a8-9401-4156-be01-30b90fbe9c0b}} .
Fig. REF shows examples of graph types that can be obtained from elementary graphs (Def. REF ) and additional static structural graph properties (Def. REF ).
For many theoretical as well as real-world problems, the corresponding graph model arises naturally.
For example, as visualized in Fig. REF [e]), networks in which different entities or relations exist tend to be modeled as heterogeneous graphs. A simple example is a social network in which different groups of people exist, possibly even in different relations {{cite:89f29b0649e7ee6e20ef28e1cd79c3f64f9e261d}}, {{cite:f0b7810f9f2e018866ebcce21a5521ed521fc02e}}.
Problems in which non-binary relations exist are by definition formulated as hypergraphs, as visualized in Fig. REF [b]). See {{cite:db0300201c277cf1d27ef21193d743fa082000d1}} for an example where the nodes of a citation network are authors and hyperedges connect all authors of a publication. Another example are problems that contain different groupings: in the hypergraph model, hyperedges define the different group memberships and thus connect all objects of the respective groups.
For many mathematical problems from graph, group or number theory, simple graph structures such as (un)directed graphs are usually sufficient {{cite:461eee437840e10dcf75160234924823e35c2c94}} {{cite:514c7154e14b080be05edffcaa57c727063a43d4}} {{cite:dbe0ea78926ad407f50debdfe6028a8d93b4b929}}. Such graphs are visualized in Fig. REF [a] and [c].
While much additional information can be encoded in attributes, some information is more naturally expressed by multiple nodes or edges. An example is a road network, where nodes represent locations and multiple roads exist between two locations. Such problems naturally lead to a multigraph (cf. Fig. REF [d])).
{{figure:f6031ba7-c169-493b-b3e0-5ef5849ed9fd}}In contrast to the static structural graph properties, dynamic structural graph properties include temporal dependencies. They are another component for extending to further graph types and are defined in the following.
Definition 3.3 (Dynamic Structural Graph Properties)
An elementary graph is called
dynamic if the graph structure or the graph properties are time-dependent. In the following, the notation
{{formula:93d0b629-36a3-4703-af9e-6a3ecc98603b}} is used, where {{formula:47092f2a-7fce-44b8-acd8-2c9d5c49fab7}} is a set of timestamps, to emphasize the time dependence and therefore the dynamics.
growing if it is dynamic and the node or edge sets evolve w.r.t. addition of new nodes and edges respectively. I.e., {{formula:af0ca159-0a20-41e0-9869-a9346777868d}} , for all {{formula:cb2524d1-8128-4635-8ce7-b1d9502eef4e}} .
shrinking if it is dynamic and the node or edge sets evolve only w.r.t. deletions of nodes and edges, respectively. I.e., {{formula:f5d1e1d0-f3be-4796-8ecb-dd99e791e5d2}} , for all {{formula:2a6ebc88-8dd1-4670-b920-5d312ca96c03}} .
structure-dynamic if it is growing, shrinking or both simultaneously, i.e., in particular, the nodes {{formula:f9a1c98b-8bf5-4411-8b9b-bd0c9c25fc0b}} or edges {{formula:48e767e3-f6c7-4351-94ef-bbf5e9da6202}} evolve over time due to additions or deletions of nodes or edges{{cite:34f9e56c84d3be07652d0b128693da168b0ad71d}} also mentions splits and merges of nodes and edges. Obviously, these events are sequences of additions and deletions..
attribute-dynamic if the node or edge attribute function is time-dependent. Thus, we extend our notions of the attribute functions to {{formula:7dbfdc86-557b-4a19-a4c2-ee941027ee3d}} and {{formula:6979c3af-efaa-47ba-aa85-e569ff5740dd}} , for all {{formula:4f180eaa-1bc7-4b64-80ba-597a6e68cbe5}} .
type-dynamic if the graph type evolves over time. E.g., an undirected graph becomes directed from one to another time step.
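The growing and shrinking conditions above reduce to subset checks between consecutive snapshots. A minimal sketch, assuming a dynamic graph is given as a time-ordered list of (node set, edge set) snapshots (a representation chosen for illustration, not prescribed by the definitions):

```python
# Sketch: a dynamic graph as a time-ordered list of snapshots (V_t, E_t).
# The graph is "growing" if nodes and edges are only ever added, i.e.
# V_t is a subset of V_{t+1} and E_t is a subset of E_{t+1} for all t.

def is_growing(snapshots):
    """snapshots: list of (node_set, edge_set) ordered by timestamp."""
    for (v0, e0), (v1, e1) in zip(snapshots, snapshots[1:]):
        if not (v0 <= v1 and e0 <= e1):   # <= is subset on Python sets
            return False
    return True

g = [({1}, set()),
     ({1, 2}, {(1, 2)}),
     ({1, 2, 3}, {(1, 2), (2, 3)})]
```

The shrinking check is symmetric (superset instead of subset), and a structure-dynamic graph simply drops both restrictions.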
So far, new graph types are obtained by arbitrary composition of an elementary graph (Def. REF ) and other structural properties (Def. REF , Def. REF ). However, in the literature, these compositions are not always mentioned by name. For this reason, in the following, common important compositions are listed and named according to the definitions from above.
Definition 3.4 (Combined Static Graphs)
Knowledge graphs are defined in several ways. In {{cite:5296f5a2714dbda167d031dc177981fc715f6f65}}, they are defined as heterogeneous directed graphs, while in {{cite:76f72c2a31e4608c2e08db84fd984c44515ff84c}} knowledge graphs are the same as heterogeneous graphs. But there are also definitions that do not see a knowledge graph as a graph, combined from the aforementioned types, see for example {{cite:d4404ae72c4c916065aade94389af77b4221de62}} for an overview.
A multi-relational graph {{cite:896de2ca4dc3dadd141c5688920d357ea792c28c}} is an edge-heterogeneous but node-homogeneous graph.
A content-associated heterogeneous graph is a heterogeneous graph with node attributes that correspond to heterogeneous data like, e.g., attributes, text or images {{cite:52304eca7d768d339eb617dd5e46cecd3abc4646}}.
A multiplex graph/multi-channel graph corresponds to an edge-heterogeneous graph with self-loops {{cite:896de2ca4dc3dadd141c5688920d357ea792c28c}}. Here, we have {{formula:eb52c74a-2579-478c-b7ec-2ecc102fd082}} layers, where each layer consists of the same node set {{formula:cd48e2f6-284d-4747-aeb7-fb0db34e312a}} , but different edge sets {{formula:fd978f20-d18e-4a07-9af5-358be352dd6f}} . Additionally, inter-layer edges {{formula:9b02e09d-4ff0-4a7a-8d66-b64d1068785f}} exist between the same nodes across different layers.
A multiscale graph is a multiplex graph with inter-layer edges between different nodes of differing layers {{cite:ad01e8e77c9542490585b7623f66234c319b09d4}}.
A spatio-temporal graph is a multiplex graph where edges per each layer are interpreted as spatial edges and the inter-layer edges indicate temporal steps between a layer at time step {{formula:08c521bf-3090-4c6f-a184-8d627400f1cd}} and {{formula:b59b6c84-90aa-47a4-86a5-260ecaca31b9}} . They are called temporal edges {{cite:edc70c014279e6d6c064fc9839db191cb37912ba}}.
Remark: Note that in Def. REF , only static combined graphs have been defined for the sake of simplicity, which does not mean that there are no dynamic combined graphs. Since the dynamics result only from additional time dependencies, one can list these as additional properties and define dynamic combined graphs. Furthermore, any static graph can be viewed from a dynamic perspective. In Sec. REF , we also give a simple algorithm for this (cf. Algo. REF ). {{formula:0a2f191e-accf-46fc-a9e1-a8436e2db612}}
Remark: Graphs can have further additional semantic properties like connectedness, cyclicity, scale-freeness, etc. However, these are independent of the basic graph structure.
Since we consider graphs syntactically in this paper, we do not go into further graph properties that result from interpretations of the given graph structure. For further readings see {{cite:461eee437840e10dcf75160234924823e35c2c94}}. {{formula:ad53ab21-99ef-4f8b-9fbb-e83598658d7e}}
After having defined elementary graph types and additional properties, the following Sections work through the research questions.
To begin with, an overview of the memory complexities of the different graph types is given in Sec. , indicating their computational efficiency.
Graph Memory Complexity
Here, the memory complexity of the different graph types is used as the basis for determining their computational efficiency (RQ2). The memory is expressed in terms of the number of nodes {{formula:2b8537b6-466d-404e-8413-a1a256a273e3}} and the number of edges as a function of {{formula:76a30f50-fce9-45c6-a89a-b305a4283271}} . The memory complexity of a graph type, arising from the memory needed to store its nodes and edges, is denoted by {{formula:b42b2806-4bfe-4340-b71c-05a39de72afb}} and {{formula:aec629b4-1be5-4dce-85de-6ce648fd8af6}} , respectively. Tab. REF depicts the worst-case memory complexities for different graph types.
Remark: Note that the memory complexities listed in Tab. REF are worst-case estimates for the respective graph types, chosen for reasons of comparability. In particular, it is assumed that all graphs are fully connected. However, if graphs are considered as models for real-world applications, they will in general be sparser.
{{formula:874cedfe-9233-4bb8-89ba-177904e90d70}}
{{table:764cc206-7b13-43a3-9f62-2c5b45a352d7}}Remark: For some of the graph types, there are already more optimized ways to store the data structure. An example of this is the use of the continuous-time dynamic representation, where one static initial graph is stored, and the subsequent snapshots are compressed in a set of events {{formula:c4a5be51-c135-4ab1-81dd-0fa11d164234}} representing only the differences between two successive snapshots. As an example, this would lead to the following storage capacities, compared to those in Tab. REF .
{{table:fd875924-cd64-4fb9-af3c-97f472306b2f}}However, mature graph analysis approaches for dynamic graphs that handle such representations well are still lacking. Therefore, optimized graph representations are not discussed here; only worst-case memory requirements for intuitive graph representations are considered. For further information, see {{cite:34f9e56c84d3be07652d0b128693da168b0ad71d}}. {{formula:7601e980-87cb-4963-8c2b-816f0d118d1b}}
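The continuous-time idea from the remark above — one initial snapshot plus a compressed list of difference events — can be made concrete with a short sketch. The event format ("add"/"del" operations on edges) is an assumption for illustration only.

```python
# Sketch of an event-based (continuous-time) dynamic graph representation:
# store one initial edge set plus a list of events, and reconstruct a
# later snapshot by replaying the events in order.

def replay(initial_edges, events):
    """initial_edges: set of edges; events: list of (op, edge) pairs."""
    edges = set(initial_edges)        # copy, so the input is not mutated
    for op, edge in events:           # op is "add" or "del"
        if op == "add":
            edges.add(edge)
        else:
            edges.discard(edge)
    return edges

e0 = {(1, 2), (2, 3)}
events = [("add", (3, 4)), ("del", (1, 2))]
```

Storing only the events instead of full snapshots is what yields the reduced memory requirements discussed in the remark.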
As can be seen in Tab. REF , directed and undirected graphs require the least memory in the static case, which is polynomial in {{formula:3a99b6e8-8474-4e57-875c-70792dbfda82}} . In contrast, the memory for storing a generalized directed hypergraph grows exponentially in the number of nodes. Whenever another static graph property is included, the memory complexity grows in the worst case by a constant factor (e.g., the number of duplicates in a multigraph, the number of node or edge types in a heterogeneous graph, or the maximum cost of attributes). These costs can grow further when the corresponding graph is considered in a dynamic setting. Then, the memory is multiplied by the number of timestamps {{formula:469bdd57-554d-46b4-977f-5509987574ff}} , which is constant if the entire graph is already available but may grow in the graph-stream case.
Equal Expressivity of Attributed Graph Types
In the following, the graph type expressivity is defined to provide a comparative description of the amount of information a graph type can encode (RQ1). Afterwards, several graph type transformation algorithms are provided that are utilized to show that all attributed graph types are equally expressive.
Expressivity
Graph type expressivity is defined as a binary relation that turns out to be a partial order on the graph types. Utilizing the anti-symmetry property of partial orders, equal expressivity of all graph types is proven in Sec. REF .
Definition 5.1 (Expressivity)
A graph type {{formula:c535fbc9-f7a9-4c8e-8ce3-91db3635f84a}} is at least as expressive as a graph type {{formula:57e35607-c93e-4c7a-b7fe-c4c36cf65a63}} if and only if {{formula:a5fd9ce0-2bd3-4aac-b327-d52a80990606}} encodes at least as many graph properties as {{formula:acc4847e-462e-488d-8f99-7ff4b636cba7}} , denoted as {{formula:bb2dde4b-ceb1-4d44-8e4e-6ba89ea5189c}} . In case both types encode the same graph properties, this is denoted as {{formula:48b2abfb-1927-4558-94ae-b2720540158c}} .
For example, let {{formula:9ff416f3-1f86-4885-8210-7af5bafdd801}} be the set of all directed weighted graphs and {{formula:3fd75e54-7fe8-4637-bc56-1b46e8b6d623}} the set of all undirected attributed multigraphs.
Then the graph type {{formula:41c31710-dcb3-4dcf-a5a2-bb73f6e47af6}} is at least as expressive as {{formula:da490218-76e4-4505-82f6-31b1f84e8189}} , i.e. {{formula:499040d8-1be7-49e6-b754-dda226835753}} . This is justified by the fact that the graph properties of {{formula:b93263e3-fe06-4287-8253-3447a435299d}} can all be modeled by graph properties of {{formula:1a4b56e6-ce09-4a2b-ba70-76b2de690a1a}} :
Undirected edges can be modeled by two directed edges (cf. Def. REF [1]), graph weights are attributes restricted to the real numbers (cf. Def. REF [4]), and every set is by definition a multiset without multiple entries.
Lemma 5.2
The expressivity relation {{formula:cfd2ca3a-e03b-4976-b6bb-6e6847de5cd6}} is a partial order, i.e., {{formula:5b1ef9b9-e7c9-42a0-b99a-502155c9bde7}} is reflexive, antisymmetric and transitive.
Reflexivity: Let {{formula:ceb5be01-e814-49a0-ac05-1706d483793c}} be a graph type. Then obviously {{formula:80caaa18-8f54-4e5f-90ab-bbab1b010dc2}} holds.
Anti-symmetry: Let {{formula:4ae20606-a16b-4b6b-9f57-59292e76264f}} and {{formula:45f17f7f-1b9f-472e-8970-c3e7dd31fc3e}} be two graph types and {{formula:d4d635eb-1332-4308-a56b-11c0d9e3b566}} . Then, {{formula:84db235c-07aa-4c3d-aa7b-37201e1613e5}} encodes at least as many graph properties as {{formula:bf6078dc-a61f-46a2-84eb-85b512f3243d}} and {{formula:a7d63c1f-4a0d-42ef-af68-cd3e8de1bb32}} encodes at least as many graph properties as {{formula:25fcdbb3-30d0-4b6c-b1f9-cc02f5466f8c}} , from which it follows that they encode the same graph properties and thus are equally expressive, {{formula:81adc989-5545-475e-bae2-d0ccd7b2ab3a}} .
Transitivity: Let {{formula:345403e3-8bca-4a3b-b736-728e43bd871b}} and {{formula:4647a910-9b60-45ec-9f85-374d6dd43b60}} be graph types and {{formula:5e088572-5e75-4924-a29d-0f69f41c41af}} . Then, {{formula:53e1e3f4-a383-479f-b880-b86f3cd21f47}} encodes at least as many graph properties as {{formula:7d5dc4ab-2fcb-4838-a3be-6f77e860e34a}} and {{formula:d8fc3a01-35ea-4264-a76e-a4054fdbe8c4}} encodes at least as many graph properties as {{formula:415b212c-3091-4a25-b21c-a7bfb26fccff}} . Therefore, {{formula:afe1f5f7-ac83-4b84-8d92-8a77952bb1ff}} encodes at least as many graph properties as {{formula:bb1f836a-b60c-446c-9f7f-2f6c38525505}} and thus {{formula:c865f2a0-798e-4caf-a23d-735a398b635b}} .
With the aid of the following algorithms, it is possible to make claims about the expressivity of the different graph types, which finally leads to the conclusion that all attributed graph types are equally expressive.
Transformation Algorithms
This section discusses the graph transformations that convert a given graph from one graph type to another without loss of information. The graph type transformations can be interpreted as embedding functions from one graph type to another. In closing, it can be inferred that all discussed attributed graph types are equally expressive.
The transformation functions should fulfill two requirements. First, it should be possible to represent every graph in the new type. Second, the representation of a graph in the new type should be unique in order to prevent information loss. The first requirement means that the function has to be surjective, i.e., every graph in the image has at least one preimage, so there is at least one representation of the graph in the new type. Due to the second requirement, the transformation also has to be injective, i.e., it yields no more than one representation in the new graph type. Consequently, the transformation function has to be bijective.
To keep the overview of the embeddings simple, the following procedure is adopted:
For a transformation {{formula:92447b67-45fb-429a-a6dc-a4e05874c8a2}} of a graph type {{formula:642f8cbc-f90d-4d96-b8dd-b0c68a18eb3c}} to {{formula:5af9814a-da01-478c-bb4b-724a08652dbf}} , only transformations of type {{formula:d4f4bb43-ee5a-4a32-8653-e9b3bd7b2b06}} into a subset {{formula:69ffb9ba-963f-483a-abf3-99748d799b78}} are specified.
This subset corresponds to the image {{formula:c9656d48-9a28-496d-9a8f-c1c3a3d1b421}} and guarantees surjectivity of {{formula:b01985a2-c009-4a28-9d6d-51d493140fca}} on its image.
Since in the worst case the superset graph type {{formula:fc64a8a2-400d-4e2e-b5bd-2d4d0fb5b48c}} is not necessarily reached, a back transformation, independent of {{formula:1c9030b2-9b51-45f1-8f92-d3e9b72dbdf9}} , is given in the same way, which itself must be bijective when restricted to its image. With the embeddings that result from these constructions of {{formula:08da44c1-31fb-4e1a-ba51-a6f3e9c1e176}} , it can be directly inferred that {{formula:c226e37f-9bd4-4f21-a536-3bc08c5c427e}} is at least as expressive as {{formula:b163cc45-d3bb-405b-8522-ecfebdb9e30d}} . In case a back transformation is available, the same holds for the reverse direction. Therefore, and due to Lemma REF , the relation {{formula:9a1de79c-7274-46ca-8637-b2dbcaba2801}} induces a partial order and thus is anti-symmetric. This entails that {{formula:40fafe8f-abae-4650-b239-2ad43d80290b}} and {{formula:14018ab9-179e-4c4a-a256-fc18e8132360}} are equally expressive.
Without loss of generality, it is assumed that all graphs in the algorithms' domains (except for Algo. REF ) are static. However, the dynamic case does not pose a problem, since the algorithms can be applied to the graph snapshots at each fixed time.
Moreover, in each case, the node and edge attribute sets are denoted as {{formula:f9fa09db-76c6-4846-a2e6-4b7a3f2b397f}} and {{formula:b7e5d535-ec83-4fba-9dd2-f0ceb45b5981}} and are not restricted in their data type. This is possible since attribute sets can always be extended arbitrarily, even in the dynamic case.
In the following paragraphs, lossless transformations from and between the different graph types are explained in detail. Each part includes the algorithms in pseudocode. Transformations between combined graph types are examined as well. Afterward, the expressivity relations between the different graph types are discussed.
Attributed and Unattributed Graph Types
For adding attributes to a given unattributed graph {{formula:883f1c38-e15c-4293-a31b-320443224070}} , i.e., for embedding it into the set of attributed graphs {{formula:d28df663-1afe-4f11-82fe-a5e374c54844}} , one natural approach is to introduce a node and an edge attribute function that assign empty attributes to the nodes and edges, as illustrated in Fig. REF . The corresponding algorithm for this procedure can be found in Algo. REF .
{{figure:982fa5c7-0da2-46eb-84e2-303107007070}}Since this type of embedding adds artificially empty information, it is trivially a bijection if the range of values is restricted to the image. For the expressiveness of graphs, this implies that the set of attributed graphs {{formula:f09e0a34-42de-4448-8e64-de42cc0dded4}} is at least as expressive as the set of unattributed graphs {{formula:1a5aa574-445a-4f84-b820-4d3d630595a9}} , i.e.,
{{formula:3507f840-773d-40de-b0e1-b36aa580b58a}}
However, whether the reverse expressive power, i.e., {{formula:3fc77efe-8637-4ce3-b96e-dc189478dc8f}} , holds or not is non-trivial. Various indications (cf. Appendix §) suggest that this direction is also valid, but the implementation is left open for future work.
Since many other transformations in the following benefit from allowing attributes, it is assumed that all graph types under consideration are attributed and only consider embeddings on attributed graphs. Alternatively, unattributed graphs can be transformed into the attributed type in linear time using Algo. REF .
Make Attributed
[1]
MakeAttributed{{formula:a24efa2a-088c-4efe-b757-61307d3b66c2}}
{{formula:82cfc113-4d98-40b4-bc16-6441b08a4a68}}
{{formula:5b3cb3e9-a286-4184-8b8b-0e1cf36e1df5}}
{{formula:ece3ba45-75d9-4724-a12f-a7d046a811f8}}
{{formula:42f4685c-d77e-471d-84bf-e254380d8fc8}}
{{formula:f27004d7-8d9b-4184-a118-773f48cf188b}}
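The idea of MakeAttributed can be sketched in a few lines of Python. The dict-based representation of the attribute functions and the use of an empty tuple as the "empty attribute" are illustrative assumptions, not the paper's exact formalization.

```python
# Sketch of the MakeAttributed embedding described above: an unattributed
# graph (V, E) is mapped into the attributed type by assigning an empty
# attribute to every node and edge. Restricted to its image, this map is
# trivially a bijection, so no information is lost.

def make_attributed(nodes, edges):
    alpha_v = {v: () for v in nodes}   # empty node attributes
    alpha_e = {e: () for e in edges}   # empty edge attributes
    return nodes, edges, alpha_v, alpha_e

V, E = {1, 2}, {(1, 2)}
V2, E2, av, ae = make_attributed(V, E)
```

Since the node and edge sets are returned unchanged, the embedding runs in linear time in the size of the graph, matching the claim in the text.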
Attributed Directed and Undirected Graph Types
Given a graph {{formula:2f16bc4b-1707-4577-a3f2-3abb8129c042}} of undirected type, i.e., {{formula:ca36dc68-c37c-4edb-be85-ed1fdf5ee8a5}} , the following algorithm represents an embedding into the set of directed graphs {{formula:51cd3e67-6017-4c19-8c3d-9bb976f18bf1}} . The idea of the algorithm is illustrated in Fig. REF . Each undirected edge in the input graph is replaced by two directed edges, one in each direction (cf. Algo. REF , line 4). To avoid losing information, each of these directed edges inherits the edge attributes of the previous undirected edges (line 5). At this point by construction, Algo. REF is injective.
{{figure:c788e619-7ca7-4296-9260-aaa893c1a3c4}}If the range of Algo. REF is restricted to its image, the algorithm is bijective. Thus, there is no information loss due to the transformation. In terms of expressivity, this kind of lossless embedding yields
{{formula:0e09bdf6-21e7-4373-8bdc-3ab2feee0874}}
which implies that directed graphs can encode more or equal graph properties than undirected graphs.
Algo. REF turns a directed graph into an undirected one.
The transformation encodes the direction of the graph's edges into the edge attributes. In case both directions exist in the input graph, their attributes are not necessarily the same. To account for this, the algorithm stores the attributes together with their respective directions (cf. Algo. REF , line 8).
Analogous to above, injectivity is given by construction, and bijectivity follows from restricting the algorithm's range to its image. Therefore, there is no information loss caused by the transformation, and the undirected graphs are at least as expressive as the directed ones, i.e.,
{{formula:c788606d-9bc8-4dea-9e9f-41700f3abd1b}}
Note that Algo. REF does not require any attributes. However, since attributes are allowed by prerequisite, the back transformation described in Algo. REF becomes simpler. A comparison with Algo. REF in the appendix shows how much the algorithm benefits from allowing attributes. Algo. REF does not describe the inverse function of Algo. REF . Nevertheless, since the embedding is bijective, an inverse mapping exists. This can best be observed in the illustration in Fig. REF , which captures the core of the transformation idea of Algo. REF .
Considering both embeddings and the results from the equations (REF ) and (REF ), both graph types turn out to be equally expressive, i.e.,
{{formula:fe91d19d-ef99-4ce9-9d5a-2704f898b034}}
{{figure:ab5640d8-2e8e-453c-ae16-c55bf84a088d}}Make Directed
[1]
MakeDirected{{formula:baf9ecda-8622-4ec8-a809-5d3a295a7b70}}
{{formula:40af7b3a-7257-47b8-a2df-d8fe87e482cd}}
{{formula:fd01f822-0651-4856-bb49-9c9fb6f02b78}}
{{formula:9207bafd-9bb4-400d-8633-db7ddd637dab}}
{{formula:4dd03e1e-831e-4684-9d28-d225f9048d8a}}
{{formula:f0ebce6f-f492-4646-9690-8074359600b8}}
Make Undirected
[1]
MakeUndirected{{formula:484472c9-f98c-4943-a086-d05c97770a79}}
{{formula:a2278bb9-8bb3-42b4-a875-2fef6600ab76}}{{formula:259c37bf-a6fb-422b-9d09-8dfd23e6268a}}
{{formula:e2b6c3be-5ead-40d2-8352-a604a828ed64}}
{{formula:fb3c3932-5927-41bb-9e89-36cf8280619a}} check "edge direction"
{{formula:b68ce9ce-da97-465f-88e2-a96c22d76e84}}
{{formula:7f8e7854-7715-4c16-afee-a278679475b0}}
{{formula:c3c82bd8-8915-4431-a638-b1a44142a236}} append attributes of reverse direction
{{formula:a7dcf6f3-c6c7-4a1c-8fdf-d748c86baef3}}
{{formula:43836bee-563d-402a-9013-9bf284f9bbcd}}
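The two embeddings above can be sketched together in Python. The dict-based edge representations are illustrative assumptions; in particular, MakeUndirected here records which direction carried which attribute inside the undirected edge's attribute, mirroring the per-direction encoding in the algorithm.

```python
# Sketch of the directed/undirected embeddings described above.
# make_directed replaces each undirected edge {u, v} by two directed
# edges that both inherit its attribute; make_undirected folds the
# directions of a directed graph into undirected edges whose attribute
# remembers the attribute of each direction, so nothing is lost.

def make_directed(undirected):
    """undirected: dict frozenset({u, v}) -> attribute."""
    directed = {}
    for edge, attr in undirected.items():
        u, v = sorted(edge)
        directed[(u, v)] = attr    # both directions inherit the
        directed[(v, u)] = attr    # undirected edge's attribute
    return directed

def make_undirected(directed):
    """directed: dict (u, v) -> attribute; keeps per-direction attrs."""
    undirected = {}
    for (u, v), attr in directed.items():
        key = frozenset((u, v))    # drop the direction from the key ...
        undirected.setdefault(key, {})[(u, v)] = attr  # ... keep it here
    return undirected

d = {("a", "b"): 1, ("b", "a"): 2, ("b", "c"): 3}
u = make_undirected(d)   # both direction attributes survive
```

Because the per-direction attributes are retained, the directed graph can be reconstructed from `u`, illustrating why the embedding is injective.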
Attributed Multi- and Non-Multi Graph Types
To obtain the property of multi-edges and nodes, Algo. REF modifies an input graph {{formula:15ad0c5d-8171-4200-9000-8bd2b6f62f53}} without multi-nodes or edges by transforming the node and edge sets into multisets through permitting multiples (cf. Algo. REF line 2). This procedure only changes the data type of the graph and thus is an injective function. The type of input graph is irrelevant for this algorithm, i.e., it can be an arbitrary elementary graph (cf. Def. REF ) or can have arbitrary properties (cf. Def. REF , Def. REF ).
If the algorithm is restricted to its image, it is bijective and thus a lossless transformation. As a result, the set of multigraphs {{formula:7d36ce67-f85d-4292-aa7e-3b76bfbb3bd6}} is at least as expressive as the set of non-multigraphs {{formula:ab3375bd-90a5-4b7c-9ee5-00db50d0ead8}} , i.e.,
{{formula:fc95da6d-8393-44d7-9874-4cf09d26a0ce}}
To encode the multi-edges or nodes of a multigraph {{formula:80324f69-2c0a-4947-9896-e5aed303d630}} in a non-multi graph, Algo. REF associates the duplicates of nodes and edges with the corresponding attributes and stores them as a list of attributes (cf. Algo. REF , lines 4, 5). Prior to that, the node and edge sets are converted from multisets to sets by deleting duplicates via MakeSet (cf. Algo. REF ). An example transformation can be found in Fig. REF .
{{figure:63a94af4-542e-46da-a989-ec6deef4b8d0}}Since the modification of the node and edge multisets to sets without duplications is not injective, the storage of the different attributes in lines 3 to 6 guarantees the injectivity of MakeNonMulti. The algorithm is bijective if the range is restricted to the image and thus maintains the input information without loss.
As a result, the set of non-multi graphs {{formula:d90950b6-9670-46b0-95a5-19c7f69a4037}} is at least as expressive as the set of multigraphs {{formula:e10b9175-a3b9-4426-a1bb-0e0bf1ec1b98}} , i.e.,
{{formula:1fe85d13-a230-4980-a34b-fd0d8821d382}}
And combining the results from equations (REF ) and (REF ) it follows that the non-multi graphs and multigraphs are equally expressive, i.e.,
{{formula:1ff42ad4-729e-4f8c-bd04-070a694d4a27}}
Turn into Multigraph
[1]
MakeMulti{{formula:45ff463b-cba1-4a08-b617-f1c95baf583a}}
{{formula:79c3cd1d-696d-4a30-8975-07c0c119743d}} transforms set into multiset
{{formula:bf731b73-9620-45a9-bf7a-96013c3d0ad5}}
Undo Multigraph
[1]
MakeNonMulti{{formula:4bbedf64-a55c-438c-9b8d-6b758ee79f84}}
{{formula:e201914d-706d-4611-b730-e90d2f12532b}} trafo. from multiset to set
{{formula:52f9c6c7-b2a5-41c0-8380-5901bec6f113}}
{{formula:c1609a63-1fb5-413c-8479-76a2b4827a5f}} append attrib. from node and edge duplicates
{{formula:7f8bcf9e-7a9e-4ff0-be17-6347b3c65045}}
{{formula:3382feaf-b2e0-4b59-a076-c822983daf40}}
{{formula:2c341d0d-9508-46a0-85c9-bda76d2ba233}}
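The attribute-list trick of MakeNonMulti can be sketched for the edge multiset as follows. The representation of the multiset as a list of (edge, attribute) pairs is an assumption made for illustration.

```python
# Sketch of the MakeNonMulti idea described above: duplicates in the edge
# multiset are removed, and the attribute of every duplicate is collected
# into a list attached to the one remaining edge, so the multiplicities
# (and their attributes) stay recoverable.

from collections import defaultdict

def make_non_multi(multi_edges):
    """multi_edges: list of (edge, attribute) pairs; edges may repeat."""
    attrs = defaultdict(list)
    for edge, attr in multi_edges:
        attrs[edge].append(attr)   # one list entry per duplicate
    edge_set = set(attrs)          # duplicates removed ("MakeSet")
    return edge_set, dict(attrs)

multi = [((1, 2), "x"), ((1, 2), "y"), ((2, 3), "z")]
edges, attrs = make_non_multi(multi)
```

The length of each attribute list equals the multiplicity of the edge, which is exactly what makes the transformation injective despite the non-injective MakeSet step.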
Attributed Heterogeneous and Homogeneous Graph Types
Heterogeneous graph types can have different types of nodes or edges (cf. Def. REF [3]). For simplicity, it is assumed that they have both different node and edge types. Accordingly, the other graph types consist of exactly one node and edge type. Thus, they are called homogeneous in the literature {{cite:52304eca7d768d339eb617dd5e46cecd3abc4646}}.
From the definitions of these two terms, it is evident that homogeneous graph types describe a subset of the heterogeneous graph types. Hence, the transformation described by Algo. REF is straightforward.
The idea is to introduce a node and edge type 0 and artificially extend the nodes and edges of the input graph {{formula:7da4b8c7-cea6-4834-b7c1-81a15d9ecc20}} by this type (cf. Algo. REF , line 4, 8). Clearly, this transformation also provides a bijective embedding and results in the expressivity relation
{{formula:b616f667-25fa-4298-82cc-e76a0a3c0432}}
Also, the idea of the back transformation in Algo. REF follows immediately using the attributes. The different node and edge types are encoded in an extension of the attributes, analogous to the approach in Algo. REF , where the attributes are extended by the edge directions (cf. Algo. REF , line 6, 8 and 14, 16). Fig. REF illustrates this procedure.
{{figure:014c12c0-80a2-482e-adfc-4529b37570fa}}By construction, Algo. REF describes a bijective embedding, which yields the expressivity relation
{{formula:8ba28fe8-6c26-4927-9666-aef06c97b541}}
Since {{formula:9031aaad-fa8c-42cf-bd37-e39b341ad3ce}} is anti-symmetric according to Lemma REF , equations (REF ) and (REF ) imply the equal expressivity
{{formula:20d827bb-8b51-4614-a8a8-255a47c66081}}
Make Heterogeneous
[1]
MakeHeterogeneous{{formula:10d1bf7e-1c61-440f-947a-94ef80c52910}}
{{formula:ffb585e3-1b39-413b-9653-38f19975b666}}
{{formula:ab88668e-a75c-465a-8673-e35c850f81c1}}
{{formula:86680984-42c6-40e4-bead-800353894e17}} introduce node type 0
{{formula:4d5d2a7d-5347-416e-9031-f5013ca8af92}}
{{formula:0a1a1995-5fcf-48cf-a9d8-875c8fdc03a7}}
{{formula:9d0eebdd-8fb4-4860-8839-a088cd6972c6}} introduce edge type 0
{{formula:09ed6d65-d99a-4be6-b4a4-537fb526ac79}}
{{formula:3c6574e2-b052-46b9-a86f-43f6c3fd5842}}
Make Homogeneous
[1]
MakeHomogeneous{{formula:6c141623-4d10-4c47-b3c0-c8c124f5ae00}}
{{formula:743a94a7-0222-4e87-83d9-a2e2d0805037}}
{{formula:6634493e-a462-4452-9673-9fa6faa3a419}}
{{formula:2ccd475a-a0f6-481e-8c22-68d0d14cbed7}}
{{formula:af29cadb-7a32-4fcc-b014-0d1deb3a6dd0}}
{{formula:57df8893-960e-4959-824a-5e7bdf63d20b}} store type in attribute
{{formula:247a1115-0903-43b3-8b33-8984f53d56f7}} add attribute from type-different node
{{formula:ee088c0a-2164-4b39-b6ac-6eae65faee0f}}
{{formula:51d3f8f1-d3ec-44af-82c2-33639c156a25}}
{{formula:c1726642-f1ea-4cc3-b198-6dfc9ad22c38}}
{{formula:d77b4813-79cd-47bc-8137-8be3721617b4}} store type in attribute
{{formula:cd5991b4-7969-46ec-8eb3-6da4d557c93c}} add attribute from type-different edge
{{formula:abbbcfd2-2f80-430f-a3dc-42ddda2ea9a6}}
Attributed Hyper- and Non-Hyper Graph Types
Similar to the transformations for multigraphs, there is a trivial and a non-trivial transformation algorithm for hypergraphs. By Def. REF , a hypergraph is a generalized version of a non-hypergraph. Consequently, the embedding of a non-hypergraph {{formula:e7c78b41-c976-449c-84e2-09ff18269b81}} into the hypergraph type is executed in Algorithm REF by extending the edges with a numbering function that is constantly 0.
Since only the numbering functions are added to the graph, assigning the value 0 to all nodes between which an edge exists (cf. Algo. REF , line 5), both the bijectivity and the expressivity relation
{{formula:e19a0287-6194-4526-af7c-5b17e778655c}}
follow immediately.
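A minimal Python sketch of this trivial embedding, under our own set-based representation in which a hyperedge is a pair of its member set and its numbering function:

```python
def make_hyper(nodes, edges):
    """Embed a non-hypergraph into the hypergraph type: each edge
    becomes a hyperedge whose numbering function is constantly 0."""
    hyperedges = [(frozenset(e), {v: 0 for v in e}) for e in edges]
    return set(nodes), hyperedges
```

Since nothing but the constant numbering functions is added, the original edge set can be recovered directly from the member sets.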
In the reverse direction, i.e., from the hypergraph type to the non-hypergraph type, edges become fully connected subgraphs (cliques). If an input edge is undirected, it becomes a clique (cf. Algo. REF , lines 6-9), while a directed edge is encoded as several fully connected bipartite subgraphs, i.e., bicliques (cf. Algo. REF , lines 11-14).
The numbering, given by the numbering functions {{formula:23f71e20-9004-4b04-81b0-979c0b0c9225}} of the hyperedges of the input {{formula:ecd1ee07-2499-4d3f-856f-334fd67cb990}} , determines the existence and direction of the corresponding edges in the non-hypergraph (cf. Algo. REF , line 5). In particular, the numbering function is constant for the nodes of undirected edges.
For better understanding, an example is visualized in Fig. REF . The blue edge, whose nodes are all numbered 0, becomes a fully connected clique. Since the hyperedge is undirected, the resulting non-hyperedges are directed both ways.
The lowest node within this blue hyperedge also corresponds to the top node of the red hyperedge, and within this edge, the number 1 is assigned to it. This means that there must be a directed non-hyperedge from it to the lower node of the red hyperedge, which has the numbering 2.
As stated in Def. REF , we assume w.l.o.g. that the numbering is gap-free. Then, in Algo. REF , line 5, we can observe that for all hyperedges {{formula:957abb9b-0a6f-4ccd-8d49-8e2354967d0c}} of {{formula:aab55785-9311-4636-b084-be6fce1b1d59}} , {{formula:73738e75-50ed-40e2-9ea6-d7380d37ba32}} will contain a clique for all {{formula:b573c484-b061-4782-9f08-5ac2a3beb901}} , and a biclique for every {{formula:aea1f95f-bde3-4e16-92d5-4a2a5fb6bc94}} .
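The clique and biclique expansion can be sketched as follows. The representation (a hyperedge as a pair of member set and numbering function) and the gap-free numbering follow the surrounding text; the code itself is our own illustrative assumption, not the paper's algorithm listing.

```python
from itertools import combinations

def make_non_hyper(nodes, hyperedges):
    """Expand hyperedges into directed pairwise edges: a constant
    numbering yields a clique directed both ways, while consecutive
    numbers k and k+1 yield a biclique of directed edges."""
    edges = set()
    for members, numbering in hyperedges:
        if len(set(numbering.values())) == 1:   # undirected hyperedge
            for u, v in combinations(members, 2):
                edges.update({(u, v), (v, u)})
        else:                                   # directed hyperedge
            for u in members:
                for v in members:
                    if numbering[v] == numbering[u] + 1:
                        edges.add((u, v))
    return set(nodes), edges
```

Applied to the blue hyperedge of the figure, every pair of its nodes receives edges in both directions, exactly the bidirectional clique described above.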
{{figure:6460dafb-7ab5-464c-b026-6990c5d4c1e5}}Due to the storage of the edge indices in lines 8 and 13, Algo. REF is injective, and restricted to its image it is also bijective. Therefore, there is no information loss, and it holds that
{{formula:1cf4aa09-1c61-463c-8308-efc31c9516cf}}
Furthermore, it can be concluded that non-hypergraphs and hypergraphs are equally expressive, i.e.,
{{formula:4ba42de4-95ad-4e8c-a4fc-151a211d2b4e}}
Turn into Hypergraph
[1]
MakeHyper{{formula:dfe64fb4-b8ad-4c26-ac07-4040f28dc6fc}}
{{formula:388fe353-6e98-45a9-b946-556c6a06006e}}
{{formula:014a0291-4847-4b02-831b-af39370c742c}}
{{formula:ef9df6c3-8fbf-4222-a04a-cceae7616ebf}}
{{formula:6d3fc84c-ee62-409f-8df1-3da77fc9d6c1}}
{{formula:6307c267-4630-4bc2-892e-a11c03079edd}}
{{formula:88d1166c-6b0e-4a45-af73-10f0a91b7302}}
{{formula:f65d447f-8a5e-4b65-86bd-c8209bf62c7e}}
{{formula:7c4839af-8e45-44f6-a90e-656e0eda35a5}}
Undo Hypergraph
[1]
MakeNonHyper{{formula:a622bfb9-97c2-441d-8ac2-5724e5c82fbd}}
{{formula:4397ef76-6915-4fe6-b750-36471a62bd96}}{{formula:71e1231c-6cb7-434e-93de-1b590bdc3b24}}
{{formula:b17aaa03-257d-4784-bfba-7f23ee648288}}
{{formula:6f908908-b7e7-49f3-9f0d-962272c7c219}}
{{formula:d11c0618-b370-4929-82d9-ddcaeb745cee}}
{{formula:3e9eb5d4-7c4c-4b7e-959d-8dd367d15202}}
{{formula:f5ed1652-8b37-4d17-a5ce-5af1c6ff3670}}
Attributed Dynamic and Static Graph Types
Up to this point, the input graphs have been assumed to be static. Algo. REF likewise takes a static graph as input and embeds it into the set of dynamic graphs. As can be seen in Algo. REF , this is a trivial bijective embedding, since nothing is changed in the static graph {{formula:785a3cfa-eab8-4ea2-bb3e-ffdc8292b71b}} itself. The temporal dependence comes from interpreting the graph as a static snapshot at an initial time stamp {{formula:09311164-06c5-4232-be79-9bc79ec44e92}} . Consequently, in the context of expressivity, the set of dynamic graphs {{formula:1e1ce601-9c7b-4401-849d-6b5fd8ccbaf6}} is at least as expressive as the set of static graphs {{formula:110def1b-6723-41e7-9485-432922af5a9c}} , i.e.,
{{formula:6abc0a7d-e873-4726-8a8f-dc6dfce27a97}}
While making a static graph dynamic is trivial, turning a dynamic graph into a static one is more complicated. For this direction, instead of using the inverse mapping of the previous algorithm, which is bijective only when restricted to its image, the following algorithm processes a dynamic graph by storing its dynamic behavior as time-series attributes.
The main idea of the transformation is illustrated in Fig. REF . The graph structure of the dynamic input graph {{formula:b932394b-371a-4e5c-9682-751723aa5de3}} is accumulated over time, and the dynamic changes, whether of the attributes or of the structure, are stored in the corresponding attributes together with the time stamp.
{{figure:8965e228-e983-486a-9d7a-40d8b4ae0355}}In line 2 of Algo. REF , all nodes and edges of the individual time stamps are collapsed, and their union is defined as the new node and edge sets. Of course, this alone is not enough for an injective embedding as required. However, this is not a problem, because the unique attribute assignments in lines 3-6 directly resolve this issue. Restricting the range of values of the embedding to its image again yields bijectivity and hereby guarantees a transformation without any information loss. In terms of expressivity, this results in the inverse expressivity relationship
{{formula:234bbde3-4647-4a9d-b052-bd03ba057b5b}}
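The accumulation over time described above can be sketched in Python. The snapshot representation (a list of per-time-stamp node sets, edge sets, and node attributes) is our own modelling assumption, not the paper's data structure.

```python
def make_static(snapshots):
    """Collapse a dynamic graph, given as a time-ordered list of
    (nodes, edges, node_attr) snapshots, into one static graph whose
    node attributes are time series of (time stamp, value) pairs."""
    all_nodes, all_edges = set(), set()
    node_series = {}
    for t, (nodes, edges, node_attr) in enumerate(snapshots):
        all_nodes |= set(nodes)
        all_edges |= set(edges)
        for v in nodes:
            # store each attribute together with its time stamp
            node_series.setdefault(v, []).append((t, node_attr.get(v)))
    return all_nodes, all_edges, node_series
```

Because every stored value carries its time stamp, the original snapshots can be reconstructed from the time series, which is the informal counterpart of the bijectivity argument above.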
Taking into account both expressivity relations (REF ) and (REF ) between static and dynamic graphs, equal expressivity is obtained, i.e.,
{{formula:cbd4b125-4d26-444c-a054-d0f93d81d610}}
Make Dynamic
[1]
MakeDynamic{{formula:eab44917-259b-4dcc-8de8-f7676eabf681}}
{{formula:59eee731-0dd7-4322-9722-6bdca83b5069}} graph snapshot at time stamp {{formula:31a6b88b-a72a-4f97-b9d6-e3cd3d53f7d6}}
{{formula:c45b2731-8477-4214-8384-9569a3fe1241}}
Make Static
[1]
MakeStatic{{formula:3ebc23fb-0697-4da9-b124-281a1f0da258}}
{{formula:94cdc557-4427-425e-9518-3d4b876ffe85}}
{{formula:d4c8de1c-08c4-44a7-a1e1-72ba82847e21}}
{{formula:81d6e695-56c3-4ad4-9d57-4b196ece3d7e}}
{{formula:8f8f8740-9008-4827-98cf-411390dfd17e}}
{{formula:885feb0b-90b8-451e-8056-01be73108512}}
{{formula:1241d3a2-e394-4a0f-bd54-f725dbd963a6}}
Remark: To the best of our knowledge, making a dynamic graph static is not common, and no publication can be found that explicitly discusses this kind of procedure. In most cases, studying dynamic graphs is based on looking at the individual static snapshots. Another popular method for dealing with dynamic graphs is to use the resulting spatio-temporal graph introduced in Def. REF . By definition, this is a static graph encoding all the temporal dependencies in its temporal edges. {{formula:d38f2615-a51b-4dab-b7a5-7df4ff7da0d8}}
Transformations between Combined Graph Types
Up to this point, transformations have been considered that add or eliminate a single structural property of a given graph of type {{formula:e4c0834e-aea8-4c3d-ad9f-d7c18b3a93bd}} so that the graph is transformed into another graph type {{formula:29b5bb8b-46d0-4b46-aef5-f3b807ed358a}} . In practice, the graph types from and to which one intends to transform may differ not only in one property but be further apart. In this case, a transformation between multiply combined graph types can be realized as the composition of the individual transformations defined for single structural properties, illustrated in Fig. REF .
Mathematically speaking, let {{formula:4203958a-2d45-427a-8873-495c33b7b47d}} and {{formula:9e35860f-5483-49c4-b36d-9f8c17a84a9d}} be two arbitrary (combined) graph types and {{formula:d0fbfccf-05b2-46a2-ad24-72f3a6add74d}} for all indices {{formula:46e88651-7893-4f87-8c4e-984858a6b497}} the structural graph properties or a property describing an elementary graph, where {{formula:c8926c29-0b9d-4aee-bcb6-e30c73eda83b}} is the number of properties that have to be changed to get from {{formula:653e300d-e6c7-4e86-85b9-0a6afc338122}} to {{formula:f9ba3e9c-e3b4-4dd7-ad38-b1e2910c0b42}} . Further, let {{formula:160546c7-8f25-47f6-8868-041399bd167e}} be the transformation that adds or removes the property {{formula:0b5c7847-4e6a-4e6e-8ec7-32f6a13e415a}} from a graph {{formula:ecdfce71-f5ac-4c5d-954e-984a9fa9ec4f}} .
Then the following composition of single transformations {{formula:8c162503-fae3-4b4e-9a5a-7a248311389f}} describes one possible transformation from graph {{formula:4344aea2-31ca-438b-889e-3db34552d9d3}} into the graph type {{formula:745b9896-414e-4f99-8bc7-2683b0c35387}} , namely
{{formula:931fe036-f54c-4238-8427-d5f751a53e49}}
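The composition of single-property transformations can be expressed generically. The following sketch is our own; the toy transformations in the test merely stand in for the actual graph algorithms of this paper.

```python
from functools import reduce

def compose(*transforms):
    """Chain single-property transformations T_1, ..., T_n into one
    combined transformation, applied left to right."""
    return lambda graph: reduce(lambda g, t: t(g), transforms, graph)
```

With two stand-in functions, `compose(f, g)` applies `f` first and `g` second; swapping the order generally changes the intermediate results, which connects to the observation below that the order of composition affects memory and runtime.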
{{figure:3ab3f959-0196-4ad4-b72c-f07a0a4974a5}}Note that the order within the composition of transformations is not unique. That is, the transformation from {{formula:7dd73f11-265d-47bf-8e70-c1fe74d99622}} to {{formula:334546a8-70e1-4a0c-a892-c67b644a53fa}} is deterministic in its result but not in the sequence of steps. Furthermore, the order of composition directly impacts the required memory and runtime capacity, as illustrated in the following example.
Let {{formula:e5f0f888-6d4c-426f-952b-feb130b7194a}} be an unattributed directed graph and {{formula:1e20aba8-208b-4ec6-b01b-b58614791218}} an attributed undirected graph. For ease of reading, let
{{formula:68f6a923-0580-4b62-a35c-30cd82c10734}} be the use of Algo. REF , {{formula:67897f74-dd7b-45bd-9d4a-e88ff16e859b}} the use of Algo. REF where the use of attributes are allowed.
While the edge-attributes of the resulting graph {{formula:227d3141-beae-41fd-bab9-9814ec3f1487}} are of the form {{formula:b525f76c-a194-4dc3-ba0d-4eb237e8f2df}} and the node-attributes are all set to the empty set {{formula:0669406c-ed26-4a9a-abce-10d23237883e}} , the graph {{formula:8e5447db-2def-4b67-8ded-25ea54ae731c}} has no node-attributes (note that a single node or edge attribute already suffices for the whole graph to count as attributed) and only edge-attributes of the form {{formula:b8abc765-896c-4403-932a-66b8a2238354}} .
Therefore, the first transformation requires slightly more storage capacity.
Admittedly, the difference is almost negligible in this example, but this cannot always be guaranteed.
Another critical point is that the single transformations of the properties are also not unique. The algorithms listed in this paper only describe one possible selection that works and can probably still be optimized in some places. In the appendix, for example, we introduce Algo. REF , which makes directed graphs undirected without any use of attributes. Replacing {{formula:0a8520bb-fcec-478b-9454-f9c3632320da}} by {{formula:aec01423-f9d3-4637-8eeb-0546c858279f}} , i.e., considering {{formula:e06e5526-501b-481e-b16e-f40e32fbbd58}} , results in almost a doubling of the required memory capacity, since the original graph is copied in a certain way (cf. sec. REF ).
Expressivity of Attributed Graph Types
In a nutshell, Fig. REF shows that the lossless embeddings listed here result in a commutative diagram of graph type transformations. This means, in particular, that the different embeddings can be combined, and thus combined graph types can also be transformed losslessly.
Note that at the very beginning of this section it has been required that all graph types {{formula:528b43c8-06ec-421b-9475-df75961d6bf9}} and {{formula:b86421ed-0f0f-45a8-a5c0-9db6be29a6e4}} are attributed. With this assumption and all the pairwise expressivity relations between graph types, Theorem REF follows directly.
{{figure:ce8c58a4-65c9-46bb-9a58-134451f8d067}}Theorem 5.3
All attributed graph types are equally expressive.
Proof: Equation (REF ) proves that the dynamic and static graph types are equally expressive. Since the other transformations do not distinguish between dynamic and static, both graph types are also equally expressive as the (un)directed type, cf. Eq. (REF ), the multigraph type, cf. Eq. (REF ), the heterogeneous type, cf. Eq. (REF ), and the hypergraph type, cf. Eq. (REF ).
$\text{dynamic:}\quad \mathcal{G}_d \overset{\text{(REF)}}{=} \mathcal{G}_{ud} \overset{\text{(REF)}}{=} \mathcal{G}_m \overset{\text{(REF)}}{=} \mathcal{G}_{het} \overset{\text{(REF)}}{=} \mathcal{G}_h$
$\qquad\overset{\text{(REF)}}{=}$
$\text{static:}\quad \mathcal{G}_d \overset{\text{(REF)}}{=} \mathcal{G}_{ud} \overset{\text{(REF)}}{=} \mathcal{G}_m \overset{\text{(REF)}}{=} \mathcal{G}_{het} \overset{\text{(REF)}}{=} \mathcal{G}_h.$
{{formula:687e5682-1ab3-4aa2-a7ab-bd1b9199a6c0}}
The above Thm. REF may look unimpressive, but it carries great power. Since all attributed graph types are equally expressive, any graph type can be used for any theoretical or real-world problem that a graph can model. The graph type does not even have to be attributed, since, by Algo. REF , it can be made attributed with linear time effort, as listed in Tab. REF .
Runtime Analysis and Transformation Cost-Function
Following the results above, the question arises of when and how to choose a graph type transformation (RQ3). In general, it is problem-specific whether a transformation is beneficial. However, a basic guideline can support this decision process. For this purpose, this section defines a transformation cost function (REF ) that takes into account the input and output graph types, the relation of their corresponding memory capacities, and the runtime of the transformations (cf. Tab. REF ).
Let {{formula:8bad56e2-9321-4943-8199-b6df422fc00e}} be the number of nodes and {{formula:28aeee91-9db7-46ee-a349-09b2c97387af}} the number of edges in the corresponding input graph. In addition, let {{formula:4c255727-0ee7-4d0d-a6f8-73820bc73577}} be the number of time stamps of a dynamic input graph. Then, the following runtime complexities ensue.
Algorithm Runtime
makeAttributed {{formula:478fcffd-3b56-470d-88e4-cbb0027500ec}}
makeDirected {{formula:bc0cfdab-1ece-424a-a701-06a2eba8b725}}
makeUndirected {{formula:84ca94f9-752d-4576-a495-a4cfdeb8d045}}
makeMulti {{formula:54ca7f93-4b22-48e2-bff6-47be3c9168cd}}
makeNonMulti {{formula:0a6b3b59-2efb-4212-84d1-bfdae9764c17}}
makeHeterogeneous {{formula:fd40d27e-0665-4d80-944e-a12c110c98ff}}
makeHomogeneous {{formula:50088d07-f078-4580-b0ba-ad71a03e6bf6}}
makeHyper {{formula:d5e28840-aef8-40af-8ce4-9569dbd4fecc}}
makeNonHyper {{formula:869cd8a7-25a0-4a79-950d-e416728ced4f}}
makeDynamic {{formula:54405c30-38dd-4c9e-a52d-b8fed721dc28}}
makeStatic {{formula:8d35b73f-410a-4adb-94dd-e353b7538658}}
Runtime for each graph type transformation w.r.t. {{formula:6b4778d5-788e-49f9-b11f-ec62ea3790f4}} nodes, {{formula:ba340c08-9e8d-424a-9654-a218a46c07f3}} edges and {{formula:e3202182-c1b1-46c5-b1de-db0f8ede380b}} time stamps.
To indicate whether a transformation from one graph type {{formula:f2fa6254-fe4e-4163-a33f-a8f73359d68a}} to another {{formula:9b269197-7b73-4dfc-9ba5-193cc2904cc1}} is beneficial, given the structure complexities {{formula:45e81c61-e95e-4bb5-8c82-2d4e642af0ca}} and {{formula:dadcb1a3-08ff-4d84-870c-2f6200d7bd1f}} and the runtime {{formula:e7c31222-6008-4d2a-9293-bb1e910a0e5b}} of the transformation, the above results from Tab. REF are combined with those from Tab. REF to compose the transformation cost function
{{formula:a36a48b2-c226-4ae8-b636-36b935d8a25d}}
As a guideline, higher values of the cost function represent less profitable transformations. Table REF shows the results for the different selections of graph types {{formula:2aa81ef6-6397-42e1-a694-1eecfc119f61}} and {{formula:70a3b056-e368-40bd-90e7-a5cfcd2bb13b}} for the transformation cost function. The first summand {{formula:b30529ff-d728-4ff4-a4fa-f0b9e0e19fee}} refers to the change in the structure complexity and the second summand {{formula:dcaf35d8-8f19-4da6-9fa2-8a6a3b348cae}} refers to the worst case runtime of the transformation.
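The exact functional form of the cost function is given by Eq. (REF) above; as a purely hypothetical stand-in that only reproduces the two-summand structure just described, one might write:

```python
def transformation_cost(structure_ratio, runtime_bound):
    """Hypothetical two-summand stand-in for the transformation cost
    function: the first summand reflects the change in structure
    complexity between target and source type, the second the
    worst-case runtime of the transformation. Higher values indicate
    a less profitable transformation."""
    return structure_ratio + runtime_bound
```

Comparing a constant-cost transformation against one with quadratic runtime on a moderately sized graph already reproduces the ranking into cheap and costly categories discussed in the text.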
{{formula:63978b49-2e79-49b0-9a49-18a11e026f6e}} {{formula:efa970df-9d6d-4631-92d4-385adbc9f7df}} {{formula:436f9d73-89fa-4e61-b153-53d77d262604}} {{formula:e7596c90-1b54-4012-867f-e58a2bd40413}}
unattributed attributed {{formula:44be4279-0f24-4ed4-828e-2a115336b66c}} –
undirected directed {{formula:58df9f9c-a4c7-4d61-83e0-6e0a8233d3cb}} {{formula:dd58de6d-ac4e-4b85-af61-7ddd7a0d7e91}}
undirected hyper {{formula:00e406a1-7402-4705-b686-44fb81db5ce1}} {{formula:ff2089ef-4224-4709-9c60-2d2f7affcab6}} {{formula:7ac2255a-9368-4bb2-afd5-d45f573e018f}}
directed dir. hyper {{formula:2cd1f76d-9b3e-4992-b6e9-49a725265dfc}} {{formula:a1499f17-2b26-479a-9977-1116244b9bda}}
non-multi multi {{formula:4b3f64f4-7b06-41e9-a26f-211ea8cae4e7}} {{formula:def93c31-6bb6-4b18-9a81-6ab9af229e42}}
homogeneous heterogeneous {{formula:fabbb0d9-1fe5-4587-97b9-b107a1cd8282}} {{formula:32d85724-ee67-4895-8a54-252abd822cc5}}
static struct. dynamic {{formula:936e6919-7f2f-4b0b-93ae-f8a8808f1db7}} {{formula:c578dc4b-ea66-43ec-8b5d-98cffe74670b}}
static attr. dynamic {{formula:031d3735-26b9-4be9-b0c8-a7d497858d57}} {{formula:b89dbbde-cd52-4dde-b510-eafe93400753}}
static type dynamic {{formula:77dfa0f9-9a61-4a12-a3b6-0226e706ecd1}} {{formula:b96d9fba-b80c-4b5f-b125-7ccb82f52743}}
Transformation cost function evaluated for the transformations between all different graph types. {{formula:16753b67-5683-4223-805f-056536fe4fb2}} is the max. number of duplicates in the input multigraph and {{formula:0aad2224-e1e8-4042-8495-66b73a3ec874}} is the number of node or edge types in the heterogeneous input graph.
Looking more closely at the results in Tab. REF , it is noticeable that they can be roughly divided into three categories.
The first category corresponds to transformations that entail constant transformation costs. Transformations from (un)directed graphs to (directed) (hyper)graphs, from non-multigraphs to multigraphs, or from static to dynamic graphs belong to this category. These are particularly cost-effective, since neither the memory requirement changes significantly nor do the transformations require a long runtime. In these cases, one can therefore apply transformations without hesitation.
The second category covers costly transformations that involve at least quadratic effort. These include the transformations from (undirected) hypergraphs to (undirected) non-hypergraphs and from multigraphs to non-multigraphs. Especially for larger graphs, transformations of this type should be avoided; instead, models that learn on the original graph type, e.g., on directed hypergraphs {{cite:ff5ddc04c13d84152af7db9f4cc3b0c5437cab89}}, should be selected.
The remaining transformations have a medium effort. While the structural complexity is mainly reduced, the algorithms incur a transformation time linear in the number of nodes and edges. For these transformations, a threshold has to be defined depending on the application and the given resources that restrict the choice of graph type. While for smaller graphs (low number of nodes and edges) or sparse graphs (far fewer edges than possible), a transformation is comparatively cheap, it can be prohibitively expensive for large-scale data sets from, e.g., social networks {{cite:c531292d1506de78d70e263b796bcd9c09bf60c5}} or similar sources that are often found in the literature {{cite:1d8acf53e01daa657ddcb1fac9dc4a81e3fccab2}}.
Having introduced all transformations between different types of graphs and their transformation costs, one might wonder where the necessity of transforming graphs comes from, although the information content remains unchanged. There are both theoretical and practical reasons for this, which will be illustrated in some examples.
The graph neural network model in {{cite:b46e855d480f495524976c80283c7ec89057c71b}} can analyze directed and undirected attributed graphs. While the authors do not discuss different types of nodes further, they emphasize the need to distinguish between coexisting edges of different types in the same data set.
Since their model cannot be directly applied to this type of heterogeneous graph, the authors propose a solution to encode the different edge types in the edge attributes. This is pretty much equivalent to preprocessing the input graphs by applying Alg. REF .
Since most graph neural networks refer to specific graph types {{cite:ce9d23db62724e601831e855aee40a500a9501dd}}, this example shows how crucial the graph type transformations are when the analysis models and the graph data are already given and are not compatible with each other.
Other examples that show how useful transformations introduced here are, come from graph theory or theoretical computer science. For example, it is well known that the so-called longest path problem is NP-hard, and the decision problem, i.e., investigating whether such a path exists at all, is NP-complete {{cite:5d9762394284f4013f14acf133c69e78f4592b01}}.
Briefly formulated, this means that, given an arbitrary graph, finding a simple path of maximal length cannot be done in polynomial time unless P = NP.
However, it has been shown that the problem can be solved in polynomial time for directed acyclic graphs (acyclic graphs do not contain any cycles). Since most of the transformation algorithms listed here do not change the node set and further edge information is encoded in the attributes (cf. Algo. REF , REF , REF , etc.), results on longest paths in a transformed graph can be traced back to the actual graph. An explicit example is an undirected acyclic multigraph: given an undirected acyclic multigraph {{formula:a873a187-896a-4c28-9b5f-9f95f839b35d}} , the longest paths in the directed acyclic non-multigraph obtained by applying Algo. REF and Algo. REF to {{formula:5887e6cc-116c-4590-9b5d-65f0082c7a55}} correspond to the longest paths in {{formula:8a1f8cb1-1de7-4ecc-a42c-55484e8ab805}} .
This example shows that for any graph property that remains invariant under the transformations, a transformation is worthwhile whenever the general problem lies in class NP but at least one algorithm in complexity class P exists for a particular graph type. This is mainly because the runtimes of the transformation algorithms in this paper are all polynomial, and the composition of polynomial-time algorithms again lies in class P.
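As an illustration of the polynomial special case, the longest path length in a DAG can be computed by dynamic programming over a topological order. This is a standard textbook technique, not an algorithm from this paper:

```python
from graphlib import TopologicalSorter

def longest_path_dag(nodes, edges):
    """Length (number of edges) of a longest path in a DAG via
    dynamic programming over a topological order -- polynomial time,
    in contrast to the NP-hard general longest path problem."""
    preds = {v: [] for v in nodes}
    for u, v in edges:
        preds[v].append(u)
    dist = {}
    # static_order() yields every node after all of its predecessors
    for v in TopologicalSorter(preds).static_order():
        dist[v] = max((dist[u] + 1 for u in preds[v]), default=0)
    return max(dist.values(), default=0)
```

Run on a transformed acyclic graph whose node set is unchanged, the resulting path lengths can be read back on the original graph, as argued above.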
Conclusion
This paper aimed to motivate the usage of diverse and suitable graph types as a model for real-world problems and theoretical graph analyses.
This has led to three research questions that we addressed in this paper: how much information different graph types can encode, how efficient their use is as measured by the required memory complexity, and under which circumstances a transformation from one graph type to another is practically useful.
To ensure a solid basis for answering the research questions, a comprehensive overview of graph types and their properties has been presented in Sec. . In particular, the focus has been on graphs from scientific disciplines, where they serve as models for a variety of problems. In general, the graph types result from extensions of elementary graphs by static or dynamic properties.
In addition to the commonly used graph types in the literature, two new graph types have been presented, namely a generalization of the directed hypergraph (Def. REF [2]) and the type-dynamic graph (Def. REF [6]).
Furthermore, to investigate the first research question on how much information each graph type can encode, the concept of expressivity has been introduced (Sec. REF ). It relates the different graph types to each other depending on how many properties they are able to encode. Lem. REF has proved that this relation is a partial order on the graph types, which serves as one of the arguments to show in Thm. REF that all attributed graph types are equally expressive.
Another significant result for this inference has been the lossless transformations from one graph type to another, presented in Sec. REF . These transformations are bijective embeddings from one graph type into another under which the graph information remains invariant.
In the appendix, partial results are listed that additionally indicate equal expressivity for unattributed graph types, as a direction for future work. However, much graph data is attributed or can easily be made attributed by Algo. REF . Given the equal expressivity of all attributed graph types, the reader can therefore choose a graph model based on efficiency. Additionally, the varying structures and properties of graphs can contribute to a better understanding of the information and the model behaviour. Hence, selecting different graph types for the modeling process may support, e.g., the explainability of learning models on graphs.
Finally, to answer the third question, if and which graph type transformation is practically useful, a transformation cost function has been defined in Sec. REF . The cost function respects the runtime of the algorithms per se (Tab. REF ) and the memory complexities of each graph type (Tab. REF ), resulting from the answer to the second research question in Sec. .
To conclude, due to the plurality and complexity of different graph learning problems and graph analyses, it is not possible to give a general statement about whether a graph type transformation should be applied. However, there are some tools that can be helpful in such a decision based on computational capacities and available algorithmic tools.
Apart from that, the paper implies that due to the equal expressiveness of the different graph types and the lossless transformations listed here, each graph type is equally suitable for each problem if only the information encoded in the model is considered. Computational efficiency can additionally be regarded by taking the transformation cost function into account.
Besides, a graph can be transformed with the given algorithms into another type without loss, e.g., if it is known that a different graph type is advantageous.
Based on this, a variety of new graph analysis methods on less common graph types can be used, and with this, many real-world problems can be solved in the future.
Notation
Throughout the paper we will use the following notation.
{{formula:a21e66ba-c725-4326-b0ed-a0a1b3005a7d}} natural numbers
{{formula:fac05634-4f64-4030-ad31-ae40cd71bfa8}} natural numbers starting at 0
{{formula:a5a1ff9a-2112-4ba1-abdc-7d2ff7c014a4}} absolute value
{{formula:bbc04c60-a9f5-4283-97ea-6e34a5b041dc}} sequence {{formula:c58b64a4-68ae-4042-bb5b-7dbe2f104408}}
{{formula:456037d2-0f9c-4a9f-8d57-bf64ee1cf0ac}} sequence {{formula:0d6043eb-7e1f-4a13-b745-21ca2e620ca1}}
{{formula:55f7651e-d392-426a-a358-bdc0157cc39d}} empty set
{{formula:f6340899-76aa-46b7-b9a1-ba5a1c9e3710}} set
{{formula:cd8e1aae-68e4-4ef6-8f00-cc3eab93844e}} tuple of arbitrary length over {{formula:05f3a8d0-1664-49e6-8a92-fa6d6a96c7ef}}
{{formula:c43dae93-996b-4e26-99fb-206bdbd8cc67}} multiset, i.e. set allowing multiple appearances of entries
{{formula:31d23757-3cad-4670-8eb6-ca811157b335}} power (multi)set
{{formula:48929ade-b3f6-4e19-a0c5-f0cfe3285d35}} conjunction
{{formula:24b8585a-df3d-4950-bbb9-2dadb1293e3d}} union of two (multi)sets
{{formula:d4967d26-2399-4b21-9d9d-8837eb39735d}} disjoint union of two (multi)sets
{{formula:ed9a88a6-a0e2-44d6-89a5-d2bcc4758d62}} sub(multi)set
{{formula:0cd6c269-40b7-42a8-82c3-046a59747428}} proper sub(multi)set
{{formula:2c432254-35a4-4795-9297-db047ca98256}} factor set of two sets
{{formula:b644e172-41cf-441d-bc81-b8e52852dcbf}} listing
{{formula:698e00da-e79f-461a-964b-d1aa504d2d53}} list comprehension
Notation table
Contributions
The first three authors contributed equally. Rüdiger Nather contributed the definition of a (generalized) directed hypergraph in REF and helped to improve the proofs in Section REF by proofreading and comments.
Acknowledgment and Funding
The GAIN project is funded by the German Federal Ministry of Education and Research (BMBF) under the funding code 01IS20047A, according to the 'Policy for the funding of female junior researchers in Artificial Intelligence'.
We would like to thank Abdul Hannan, Franz Götz-Hahn, Jan Schneegans and Bernhard Sick for review of the manuscript and fruitful discussions.
Appendix: Equal Expressivity for Unattributed Graphs
In the following, three partial results are presented that indicate that Thm. REF could also apply to the unattributed graph types occurring here. The results are called partial because the respective target graph type must provide the means to encode the attributes. In particular, the following transformations are discussed:
•
In Sec. REF , unattributed directed and undirected graphs are proven to be equally expressive. Nevertheless, in comparison to Algo. REF , the following algorithm requires the additional storage of a copy of the node set.
•
Sec. REF illustrates an algorithm for encoding edge weights in the graph structure of an unattributed multigraph. The corresponding Algo. REF exploits the permission to store duplicates in order to encode natural-number weights in the structure.
•
In Sec. REF , the idea of encoding attributes in the form of different node and edge types in a heterogeneous graph is described. Thus, the heterogeneity of the output graph type is required.
Unattributed Directed and Undirected Graph Types
The transformation from the undirected graph type to the directed graph type is described in Algo. REF and does not require attributes at any point. If the reverse direction, i.e., the transformation from the directed to the undirected graph type, is likewise to avoid attributes, a higher memory requirement arises in the transformation algorithm Algo. REF itself. While Algo. REF encodes the directions of the edges in the permitted edge attributes, here the directions of the edges must result directly from the graph structure itself.
Note that this algorithm does not describe the inverse function of Algo. REF . Nevertheless, the embedding is bijective onto its image and therefore possesses an inverse mapping. This can best be observed in the illustration in Fig. REF , which gets to the core of the transformation idea of Algo. REF .
{{figure:4f109dcd-460b-4ed2-8310-c6276e062134}}The main idea behind the embedding is to first decompose the edge set into two sets of edges of different directions. Mathematically, the individual directions are determined by the order on the indices in the edges (Algo. REF , line 8). In particular, the new edge set is given by the disjoint union
{{formula:85493af5-a011-42bf-a4e9-d3a1fc4cb77f}}
Each subset describes one direction and induces one undirected graph component.
Formally, however, this is a single graph that consists of two subgraphs that are not connected to each other. While the nodes of the input graph are used for one subgraph, i.e., for one direction, a copy of the nodes encodes the other direction in the other subgraph (cf. Algo. REF , line 3). Additionally, the node and edge attributes of the new graph are taken from the initial graph (cf. Algo. REF , lines 5, 10, 13). This construction guarantees that the algorithm is injective.
Restricting the range of Algo. REF to its image makes the algorithm bijective, so there is no information loss. In terms of expressivity, this kind of lossless embedding yields
{{formula:6159a6c0-724b-4086-a6e7-37ba836277b9}}
Considering both embeddings, i.e., Algo. REF and Algo. REF , together with the results from equations (REF ) and (REF ), both graph types turn out to be equally expressive, i.e.,
{{formula:eab2f0ac-feca-48f8-a685-83cee0cc4e80}}
Make Undirected
[1]
MakeUndirected{{formula:83b37cc3-eda3-46a8-acef-5378182bbf5d}}
{{formula:0a2e86c0-aea0-487d-8e0d-c372271c3014}}
{{formula:f9c2398c-f034-4d44-90ad-b5b1f4350b20}} double the node set
{{formula:268c24fb-1d33-43a0-9382-bce08667994d}} copy the node attributes
{{formula:8d9e1d36-7f06-44ff-b80f-055d0c75992c}}
{{formula:143cf8a6-00c4-4fef-aa2a-7822fdf5af0e}} insert undirected edges
{{formula:e1704503-51b2-4d4e-a626-cbde86828b03}} forward edges stay between original nodes
{{formula:a06d0a92-d053-4dd1-9370-a5ebe1b7e32b}}
{{formula:f858f627-32e5-4a86-afe8-fe87b5cde2ff}}
backward edges are set between the new nodes
{{formula:538c071a-7fe1-4f40-a054-14ecd67625b5}}
{{formula:f3be8696-cef2-4f38-b9c6-d6d8e571a5eb}}
{{formula:c2e2e2a4-3983-4a2b-82b1-cecbdb3b179e}}
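The construction can be sketched in Python as follows. This is an illustrative reimplementation, not the paper's code; nodes are assumed to be the indices 0 .. n-1, and self-loops are ignored for simplicity.

```python
def make_undirected(nodes, edges):
    """Encode a directed graph as an undirected one by doubling the nodes.

    "Forward" edges (u < v) become undirected edges between the original
    nodes, "backward" edges (u > v) become undirected edges between the
    copied nodes, so the direction of every edge can be recovered from
    the structure alone.
    """
    n = len(nodes)
    new_nodes = list(nodes) + [v + n for v in nodes]    # copy of the node set
    new_edges = []
    for u, v in edges:
        if u < v:                                       # forward direction
            new_edges.append(frozenset((u, v)))
        elif u > v:                                     # backward, on the copies
            new_edges.append(frozenset((u + n, v + n)))
    return new_nodes, new_edges
```

Since each undirected edge lies either entirely among the originals or entirely among the copies, the mapping is injective and hence invertible on its image, which is exactly the lossless-embedding argument above.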
Integer Attributed Graph Types and Unweighted Multigraphs
Restricting the attributes from Def. REF [4] to the natural numbers including zero, the following algorithm encodes the attributes as a corresponding number of duplicates of the edges. Since Algo. REF already provides a transformation from unattributed to attributed graphs, only the reverse direction is given in Algo. REF .
Interpreting the natural numbers in the attributes as the count of duplicates of the nodes and edges, the algorithm is obviously injective. Together with Algo. REF as its inverse function, the algorithm is bijective and the set of multigraph types is at least as expressive as the set of integer-attributed graph types {{formula:740a9933-d991-4f95-ba08-d70d7ffe73e2}} , i.e.,
{{formula:bb818402-6cbd-4fbb-8f7f-e65d93eb019d}}
It can be seen directly that the set of attributed graphs {{formula:52a03ebb-de28-41a3-ac72-3c9f839ec119}} is at least as expressive as {{formula:6aad873c-11a6-4798-8199-626c31443d27}} , since the attributes are merely restricted. Thus, together with Algo. REF , it can be concluded that both sets are equally expressive:
{{formula:dc693f5d-cb8e-4cfa-bf29-385cea192003}}
Make Unattributed using duplicates
[1]
MakeUnattributedMulti{{formula:7f907085-4193-452c-a05c-da1562061e89}}
{{formula:619b3997-449e-4490-91de-26604de98898}} introduce node- and edge- multisets
{{formula:81402367-3001-4dbe-ae51-35c45c95570b}}
{{formula:91517373-34d0-4a04-b643-014bec33d079}}addKTimes{{formula:416dbd0e-0a84-4bb4-8991-df14703731ac}} add {{formula:34105cdf-52ad-4b0b-b1ff-f9856a3dca49}} duplicates of {{formula:d60e54d7-c95e-48e8-a7de-0593c74d8ef8}} to {{formula:b565a33b-4946-4fbe-bb3a-9d9338c79bfe}}
{{formula:90483dc4-1be3-4fb9-9b23-11166f43921f}}addKTimes{{formula:6ba6f47e-7bd9-4a84-b824-97369627602b}} add {{formula:090c2e8a-137b-4307-8139-f945f24c853f}} duplicates of {{formula:74cb8a98-9f28-4189-9b07-766c6c1aa22c}} to {{formula:24695e63-cb07-4b2c-8712-f9e051a91752}}
{{formula:dcf2da3d-93e2-416e-9b58-07ec6b1ce4d1}}
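A minimal Python sketch of this multiplicity encoding and its inverse (illustrative names, not the paper's implementation) could look as follows:

```python
from collections import Counter

def make_unattributed_multi(node_attrs, edge_attrs):
    """Encode natural-number attributes as multiplicities of a multigraph.

    A node (edge) with attribute k is added k times to the node (edge)
    multiset, so the attribute can later be read back as the multiplicity.
    """
    node_multiset = Counter()
    edge_multiset = Counter()
    for v, k in node_attrs.items():
        node_multiset[v] += k      # add k duplicates of node v
    for e, k in edge_attrs.items():
        edge_multiset[e] += k      # add k duplicates of edge e
    return node_multiset, edge_multiset

def recover_attributes(node_multiset, edge_multiset):
    """Inverse mapping: read the multiplicities back as integer attributes."""
    return dict(node_multiset), dict(edge_multiset)
```

Because the pair of functions composes to the identity, the encoding is bijective onto its image, which is the injectivity argument used above.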
Attributed to Heterogeneous Graphs
Another idea for encoding attributes in the structure of a graph type is to introduce one node type for each unique node attribute and one edge type for each unique edge attribute. The output graph is then assumed to be heterogeneous, and each node or edge is of the type corresponding to its original attribute. Algo. REF defines the disjoint types from the attributes and thereby builds the unattributed heterogeneous graph.
Assuming that the {{formula:1048ca80-5fca-475d-98e1-5b5051f3fb80}} operator is bijective, i.e., assigns a different type to each unique attribute, Algo. REF is injective. Restricting it to its image again makes the algorithm bijective, and the unattributed heterogeneous graph types {{formula:5f28a5e2-3e57-45d8-985c-b5657f246e5a}} are at least as expressive as the attributed graph types:
{{formula:b73d71b7-c865-4ad6-868b-cbb99987a72c}}
Combining (REF ) and Thm. REF , it follows that
{{formula:8f9c02a2-53e6-4d74-b3b0-932f1df42da0}}
Make Unattributed with the aid of heterogeneity
[1]
MakeUnattributedHeterogeneous{{formula:96ae5fa9-91b7-451d-a7e9-4252d576744b}}
{{formula:791ca309-b353-41ea-895a-36b4cfdfe5d3}}
{{formula:de63d97e-8ae6-4727-9552-731aceed4216}}
{{formula:283ac1a6-5501-4047-ae95-fef062995b35}} add {{formula:678e3072-408d-4b55-8772-74cfe7c91e70}} with type {{formula:bd1724fa-2957-45d5-baf2-1d76a84b40fe}} to {{formula:14b41462-ccb4-4df2-ac15-fc662d684c91}}
{{formula:e6cf7bc9-aa1c-4531-8c8f-0372a65c3a3c}} add {{formula:95d4a556-91d6-4bb1-9edb-0370d237d65d}} with type {{formula:197085c4-a181-4e00-9bd8-75e86efd4573}} to {{formula:f1214e67-052f-4ab1-a7e2-e7411ae6df2b}}
{{formula:7acf8dc0-d52c-48df-b6d4-d14d11f84efe}}
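The type-assignment idea can be sketched as follows. This is our own illustration of the construction, assuming a bijective attribute-to-type operator as in the text:

```python
def make_unattributed_heterogeneous(node_attrs, edge_attrs):
    """Replace attributes by disjoint node and edge types.

    Each unique attribute is mapped bijectively to its own type, so the
    attributed graph can be recovered from the typed, unattributed one.
    """
    node_type_of = {a: t for t, a in enumerate(sorted(set(node_attrs.values())))}
    edge_type_of = {a: t for t, a in enumerate(sorted(set(edge_attrs.values())))}
    typed_nodes = {v: node_type_of[a] for v, a in node_attrs.items()}
    typed_edges = {e: edge_type_of[a] for e, a in edge_attrs.items()}
    return typed_nodes, typed_edges
```

Sorting the unique attributes merely fixes one concrete bijection; any other bijective type assignment would serve equally well for the expressivity argument.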
Baseline: We use the proposed RNN model with the user profile attention mechanism and the exercise profile temporal attention mechanism (described in {{formula:fc326f9a-e652-4a73-9b26-4fd7f813c15c}} REF ) as our baseline method. The model is trained with the cross-entropy loss function and the Adam optimizer {{cite:accf32e45017c23c677efc5c0c8ec2eeeba616ed}} for 30 epochs. We use a k-fold approach for evaluating the performance of the recommender: we keep one participant out of the training dataset and use the remaining 71 participants' data for training. The left-out participant is then treated as a new participant, and the trained recommender is used to recommend exercises to them. The actual data from the left-out participant serves as the ground truth to calculate the accuracy of the recommendation system. We calculate the top-1, top-5, and top-10 accuracy for evaluating the recommendation system. The experiment is repeated 5 times and the averages of the top-k accuracies are reported in Table REF . In the first experiment, we only use the demographic information of users in the attention mechanism of the proposed recommendation system; Table REF row 1 shows these accuracies. In the second experiment, we used all information extracted from the questionnaires in addition to the demographic information for the attention mechanism, and observed a slight decline in accuracy (Table REF row 2). We hypothesize that most people do not have an accurate evaluation of their own ability. Thus, the answers to the questionnaires may not accurately represent their profiles, leading to a conflict between the exercises they performed and what they answered. For example, two persons may give exactly the same (possibly inaccurate) answers to the questions but differ in their ability and interest to perform the exercises. In the rest of this paper, we only use demographic information to represent the user's profile.
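The top-k accuracy metric used here can be computed as in the following sketch (our own helper, not the authors' code): for each test case, the prediction counts as correct if the ground-truth item appears among the k highest-scored recommendations.

```python
import numpy as np

def top_k_accuracy(scores, targets, k):
    """Fraction of rows whose target index is among the k highest scores.

    scores  : (n_cases, n_items) array of recommendation scores
    targets : length-n_cases sequence of ground-truth item indices
    """
    top_k = np.argsort(scores, axis=1)[:, -k:]          # k best items per row
    hits = [t in row for t, row in zip(targets, top_k)]
    return float(np.mean(hits))
```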
This work studies the task of learning a Gaussian Bayesian network given its structure.
The problem is obviously quite fundamental and has been subject to extensive prior work.
The usual formulation of this problem is in terms of parameter estimation, where one wants a consistent estimator that exactly recovers the parameters of the Bayesian network in the limit, as the number of samples approaches infinity.
In contrast, we consider the problem from the viewpoint of distribution learning {{cite:fc429c3c18996467724dca2802f3951939268353}}.
Analogous to Valiant's famous PAC model for learning Boolean functions {{cite:e457a558111d408dd84cd3803f5e886dff117854}}, the goal here is to learn, with high probability, a distribution {{formula:75c6de1a-5406-44c2-a082-bfd80339de56}} that is close to the ground-truth distribution {{formula:d2d74cca-8513-483e-88a3-95a345311e60}} , using an efficient algorithm.
In this setting, pointwise convergence of the parameters is no longer a requirement; the aim is rather to approximately learn the induced distribution.
Indeed, this relaxed objective may be achievable when the former may not be (e.g., for ill-conditioned systems) and can be the more relevant requirement for downstream inference tasks.
Diakonikolas {{cite:554b4baf58838a3cdae36226a682a46cdfdf3a6d}} surveys the current state-of-the-art in distribution learning from an algorithmic perspective.
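To make the setting concrete, a linear-Gaussian Bayesian network with known structure can be sampled as follows (an illustrative sketch with our own names; each node is a linear function of its parents plus Gaussian noise):

```python
import numpy as np

def sample_linear_gaussian_bn(parents, weights, noise_std, n, seed=0):
    """Draw n samples from a linear-Gaussian Bayesian network.

    parents[i]   : parent indices of node i (nodes in topological order)
    weights[i]   : edge weight for each parent of node i
    noise_std[i] : standard deviation of the Gaussian noise at node i
    """
    rng = np.random.default_rng(seed)
    d = len(parents)
    x = np.zeros((n, d))
    for i in range(d):                     # topological order of the DAG
        mean = sum(w * x[:, p] for p, w in zip(parents[i], weights[i]))
        x[:, i] = mean + noise_std[i] * rng.standard_normal(n)
    return x
```

Distribution learning asks for a model whose induced joint distribution over such samples is close to the ground truth, rather than for pointwise recovery of the `weights` and `noise_std` parameters themselves.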
We present the baseline methods which are the inverse probability of treatment weighting
(IPTW) {{cite:d7fe5459cf361bcaa668a05e58ad44776314f1db}}, marginal structural model (MSM)
{{cite:22ff816c0eca7da739cc156c67cc450086d2658c}}, {{cite:f2643775a5af17691ca2a92969b395986677746c}}, sequential g-formula (Seq)
{{cite:3689beb4aed38aa70e9b48c9148b1a026932f2e8}}, and longitudinal targeted maximum likelihood estimation
(LTMLE) {{cite:f251be280b1d7cdcc7945ffcdcf71db8e2ef6df8}}, {{cite:7ed8e5c0dce7a529f2a0d15d37316b181acdb5a4}}. The comparative methods and publication references are
listed in Table REF . IPTW and MSM only apply treatment models,
sequential g-formula only applies outcome regression models, and LTMLE and our
proposed method apply both the treatment models and outcome regression models.
We describe the general implementation
procedure for these comparative methods as follows.
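As a point of reference, a single-time-point IPTW estimate of the mean outcome under treatment can be sketched as follows. This is a deliberate simplification of the longitudinal setting above, with our own names:

```python
import numpy as np

def iptw_mean(y, a, propensity):
    """Hajek-style IPTW estimate of E[Y(1)].

    y          : observed outcomes
    a          : binary treatment indicator (1 = treated)
    propensity : estimated P(A = 1 | X) for each unit
    """
    w = a / propensity                    # weights vanish for untreated units
    return float(np.sum(w * y) / np.sum(w))
```

The longitudinal methods in the table extend this idea by modeling treatment (IPTW, MSM), outcome regressions (sequential g-formula), or both (LTMLE) across time points.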
If {{formula:3e4a867f-0e62-4a82-b4ae-434097159449}} decays only to invisible particles, such as dark matter, bounds on the coupling parameter ({{formula:a3f4dfb5-8be0-4cce-9fac-7bb7961f56a1}} or {{formula:d2465358-ed02-4bda-a371-7dd7fa635d22}} for the scalar and ALP models, respectively) are directly derived from its relationship with the branching ratio, with results shown in the right-hand panels of Figs. REF and REF .
If {{formula:9803a2ac-58c0-4df5-9730-87b888931715}} decays only to visible SM particles, {{formula:5f5a5503-e681-4449-9f3b-8aa5be9401b1}} is inversely proportional to the coupling parameters {{cite:526c44b6b510c07d76d44e547eccce52e3d43983}}, {{cite:d6879c61a10321d8a56ed4bb92c282aee61200eb}}, limiting the reach of this analysis for large coupling because of lower acceptance for shorter lifetimes.
The {{formula:46fd3dcd-0026-4b39-b11d-34f6cbcb2fe8}} decays dominate the visible decay width up to the di-muon threshold beyond which an additional channel opens and {{formula:8d49bc60-d42d-4330-a9dd-213a799db821}} decreases, limiting the sensitivity of this search.
The model-dependent relationship between the lifetime and coupling therefore determines the shape of the exclusion regions shown in the left-hand panels of Figs. REF and REF .
{{figure:a793dc9c-eae5-45db-a22d-26f25355d64d}}{{figure:3265bdcb-6d41-4c49-8c38-684ee715aced}}
The RDCBs are sequentially connected, each having a residual connection with the block input, followed by a new convolutional layer whose output has a residual connection with the output of the first convolutional layer. The residual connections, identified by the symbol {{formula:f3520dc1-2092-4a7e-a07a-eb2b7539c2fc}} in the model, were used for the following purposes: they avoid the vanishing-gradient problem by introducing shorter paths that can carry the gradient over the entire length of very deep networks, as demonstrated by Veit et al. {{cite:8eae8fc74aa59abd6d0ea777ea4e44ec7607fb5c}}; and they seem to greatly improve training speed, as observed by Szegedy et al. {{cite:5c2d69c2690be650e5213fae38690d96f8e36cb5}}.
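The vanishing-gradient argument can be illustrated numerically: for a residual block y = f(x) + x, the derivative is f'(x) + 1, so the gradient cannot collapse to zero even where f is flat (an illustrative sketch, not the authors' model):

```python
def residual(x, f):
    """Residual connection: y = f(x) + x."""
    return f(x) + x

def numeric_grad(g, x, eps=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + eps) - g(x - eps)) / (2 * eps)

# Even where f is completely flat (f' = 0), the shortcut keeps the
# gradient at 1, which is what prevents vanishing gradients in deep
# stacks of such blocks.
flat = lambda x: 0.0 * x
grad = numeric_grad(lambda x: residual(x, flat), 1.0)
```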
Over the past fifteen years, the calculation of the scattering amplitudes relevant
to NLO corrections both in QCD and
the electroweak theory has been automated {{cite:6cd027dcf3b11cfe3770061ba246fc0ef7c41c8c}}, and can be
performed in a reliable and efficient manner up to high multiplicities. Calculations to this order
are now done routinely in the framework of multi-purpose event simulation programs, such
as SHERPA {{cite:f56827ab2d22c33be8f46bcda3d0ede61675db73}}, HERWIG {{cite:330273d49b3adc6e50dd1ca095d4b8d7b4b1c96b}}, POWHEG {{cite:d470662a0e7ef049a3eca5436e8c9b9913cfa6ce}} or mg5_aMCatNLO {{cite:cf090da42e68c051e3efd82f45ba0471355f5c6d}}, which read in the
tree-level and one-loop scattering amplitudes from automated tools {{cite:014c12865c68b89283bd9e678f73042aee5982f6}}
through standardised interfaces {{cite:6f73072b24c7d406832d0dd608ac78979d494eff}}.
These generic tools are complemented by
the dedicated libraries of NLO processes MCFM {{cite:add9a5896c225cb504e294b712967720e279b198}} and VBFNLO {{cite:922eef8fc1a5d5c004151d8db7a0ee1a48c61598}}.
Alternative dynamics such as the death-birth {{cite:5661521a129251ba3e455f5d30e2001ddd8eefce}} and the pairwise comparison {{cite:943a6119205938e1905b9ac22d7b34b62feedf1e}} could potentially hold similar fixation functions to the ones reported here. Additionally, considering multiplayer social dilemmas in well mixed populations would allow transition probability ratios {{formula:65778775-8fbc-49dc-ab01-af49c7bb41de}} to have a non-monotonic dependence on {{formula:a4a16b16-c425-4ab4-adbd-a0248c310cf0}} {{cite:c0f00b4a20c1ab5d3bde5ce75b80727ab63b565d}}, {{cite:b62a1445128704329305766a62cd462c92ab8ec1}}, which could lead to the emergence of new fixation probability functions of population size. Finally, fixation probabilities have been studied in the context of structured populations as well, often holding distinctive dependences on population size {{cite:b9b2bdbf3630866846a875d79a4e6a20f212e980}}, {{cite:5c4fc9d52d70571458f7080cbd23b64fdd83f77f}}, {{cite:12c951fa811b42713579dbeafeb5d8e3d0901fde}}, {{cite:5038e733e81d16f673f6c34c29de55f57553bc45}}. Expanding our analysis to structured populations could hold new effects and lead to a more general theory of fixation probability functions.
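For reference, the textbook fixation probability of a single mutant in a one-dimensional birth-death process with transition-probability ratios γ_j = T⁻_j / T⁺_j can be computed as follows (a standard result, not specific to this work):

```python
import numpy as np

def fixation_probability(gamma):
    """rho = 1 / (1 + sum_{k=1}^{N-1} prod_{j=1}^{k} gamma_j).

    gamma[j-1] = T^-_j / T^+_j for j = 1 .. N-1; the neutral case
    (all ratios equal to 1) recovers rho = 1/N.
    """
    return 1.0 / (1.0 + np.sum(np.cumprod(gamma)))
```

Non-monotonic dependence of the ratios on the mutant count, as in the multiplayer dilemmas cited above, enters this formula only through `gamma`, which is why it can reshape the dependence of the fixation probability on population size.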
Based on the referenced literature, it can be inferred that DL techniques have been used extensively to detect various skin diseases caused mainly by various infections. Therefore, a traditional CNN is a good approach for developing deep learning-based models for Monkeypox disease diagnosis. In this work, we proposed and tested six distinct improved deep CNN models by adapting existing architectures, namely VGG16 {{cite:b9987bc6d184a7bc2fb615a0c7b3e646e2d357da}}, InceptionResNetV2 {{cite:0536f37d241a86a9e4b0e11f7a435ec03209e843}}, ResNet50 {{cite:3aedb3fab96a13c5224aa9ff7d6d36b4166dbe52}}, {{cite:023e14ee44f6c84ad79ca7146435d00f261680ce}}, {{cite:467a1666f6b939c81d9e15d4fc2b8b782c1ef98e}}, ResNet101 {{cite:5b7064f2c7264390b10c4837fd320a792116a425}}, MobileNetV2 {{cite:c3d28af552dfdeb7fc7f97af71e23e6aba5632e0}}, and VGG19 {{cite:b9987bc6d184a7bc2fb615a0c7b3e646e2d357da}}, using transfer learning approaches {{cite:f9e04619f66dfa83e6696789d63ed41a39d68e9a}}. The following is a summary of our contributions:
In a traditional centralized architecture of machine learning, the data collected by mobile devices is uploaded to and processed in a cloud-based server to produce inference models {{cite:fb020718fb991585332ed561041315284cf808bd}}. With a potentially large number of autonomous vehicles, where real-time decisions have to be made within a restricted time period, a cloud-centric approach cannot offer acceptable latency and scalability. A centralized architecture also requires full connectivity, which is challenging for vehicular networks. Federated learning (FL) is a distributed machine learning approach in which mobile devices collect data and train their individual machine learning or deep learning models, called local models. They send their local models (i.e., the models' weights) to an aggregator, which averages the local models to produce a global model. The mobile devices then further train the global model individually to create updated local models and submit these to the aggregator. The steps are repeated over multiple iterations until a desired accuracy of the global model is achieved {{cite:1bc62b3e3ab0cdf21ab36491b339a5722714279f}}. FL is considered a feasible solution for safety- and time-critical applications involving autonomous vehicles {{cite:d09a16dbd2ef696e07572664e35ca124c26979ee}}.
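The aggregation step described here can be sketched as a dataset-size-weighted average of the client models (a minimal FedAvg-style sketch with illustrative names):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: average client models weighted by dataset size.

    client_weights : list of per-client weight arrays (same shape)
    client_sizes   : number of local training samples per client
    """
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

In a full FL loop this aggregate would be broadcast back to the vehicles, locally fine-tuned, and re-aggregated until the global model reaches the desired accuracy.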
In this section, we reflect on MiCE's shortcomings. Foremost, MiCE is computationally expensive. Stage 1 requires fine-tuning a large pretrained generation model as the Editor.
More significantly, Stage 2 requires multiple rounds of forward and backward passes to find a minimal edit: Each edit round in Stage 2 requires {{formula:03ef99bb-ead2-480a-842e-7aed6c2ad5a3}} decoded sequences with the Editor, as well as {{formula:24d8c215-1ddd-40b5-8cf7-66d17b120265}} forward passes and {{formula:f832688d-3657-4c30-a9b7-f6ae248b838f}} backward passes with the Predictor (with {{formula:c5cf4969-39e6-4c3a-8b83-4eef67c6cbd8}} the first edit round), where {{formula:978d097a-1bfa-4cae-be6b-27215b2fc18d}} is the beam width, {{formula:9a6de771-7183-4e24-a0f8-220ce448ee0c}} is the number of search levels in binary search over the masking percentages, and {{formula:9d754923-6789-413f-8a99-cad686d10a56}} is the number of generations sampled for each masking percentage.
Our experiments required 180 forward passes, 180 decoded sequences, and 3 backward passes for edit rounds after the first.
While efficient search for targeted edits is an open challenge in other fields of machine learning {{cite:37116825aeab554d707afd06d9d195f5fc223dcf}}, {{cite:dddc25a1858e91f646f0cadf157517582ab31aae}}, this problem is even more challenging for language data, as the space of possible perturbations is much larger than for tabular data. An important future direction is to develop more efficient methods of finding edits.
This shortcoming prevents us from finding edits that are minimal in a precise sense. In particular, we may be interested in a constrained notion of minimality that defines an edit {{formula:dd6d12d8-beee-4e14-ab26-6c2e09ca7ba9}} as minimal if there exists no subset of {{formula:7846be4e-2ef1-44b6-bbe7-f06e491126e6}} that results in the contrast prediction. Future work might consider creating methods to produce edits with this property.
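The binary search over masking percentages can be sketched as follows, assuming (as an illustration, not the authors' exact procedure) a predicate flips(p) that reports whether masking fraction p yields the contrast prediction and that is monotone in p:

```python
def minimal_mask_fraction(flips, levels=10, lo=0.0, hi=1.0):
    """Binary search for the smallest masking fraction that still flips
    the Predictor's output; flips(p) -> bool is assumed monotone in p.
    """
    best = hi
    for _ in range(levels):
        mid = (lo + hi) / 2.0
        if flips(mid):
            best, hi = mid, mid       # flip achieved: try a smaller mask
        else:
            lo = mid                  # no flip: need to mask more
    return best
```

Each probe of flips(p) stands in for the expensive decode-and-predict round described above, which is why the number of search levels multiplies directly into the total cost.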
Image augmentations
Since the contrastive loss is sensitive to the choice of augmentation technique and the learned representations can get controlled by the specific set of distortions {{cite:59bd78b3748537b7b989829af580f447a5a5eb86}}, we also examined how robust our method is to removing some of the data augmentations. Figure REF presents the decrease in top-1 accuracy (in percentage points) of our method and SimCLR at 300 epochs under linear evaluation on ImageNet (SimCLR numbers are extracted from {{cite:59bd78b3748537b7b989829af580f447a5a5eb86}}). This figure shows that the representations learned by our proposed contrastive divergence are more robust to removing certain augmentations than those of SimCLR. For instance, SimCLR does not work well when image cropping is removed from its transformation set.
The goal of this section is to review well-known facts about some of the above tau-functions and to put all of them in a more general context. Following {{cite:86526307fbac04a9f305f911ec6083aa4a72cb4f}}, {{cite:dcced58988458441159a4f0ad2ceb958ccd5bb46}}, {{cite:326e4002b4e7d33e038856f9edbd20227f35e145}}, {{cite:64b816064e69785729596b7e879e3e639d1dff2a}}, among others, one can show that all of them can be represented as matrix integrals of the form
{{formula:832ea19b-34d4-4ee0-b6ee-33d3bb2a5809}}