Palm vitamin E reduces catecholamines, xanthine oxidase activity and gastric lesions in rats exposed to water-immersion restraint stress

Background: This study examined the effects of palm vitamin E (PVE) and α-tocopherol (α-TF) supplementation on adrenalin, noradrenalin, xanthine oxidase plus dehydrogenase (XO + XD) activities, and gastric lesions in rats exposed to water-immersion restraint stress (WIRS).

Methods: Sixty male Sprague-Dawley rats (200-250 g) were randomly divided into three equal-sized groups. The control group was given a normal diet, while the treated groups received the same diet with oral supplementation of PVE or α-TF at 60 mg/kg body weight. After the 28-day treatment period, each group was further subdivided into two groups of 10 rats each: one subgroup was not exposed to stress, while the other was subjected to WIRS for 3.5 hours. Blood samples were taken to measure adrenalin and noradrenalin levels. The rats were then sacrificed, after which the stomach was excised, opened along the greater curvature, and examined for lesions and XO + XD activities.

Results: The rats exposed to WIRS developed lesions in the stomach mucosa. Dietary supplementation with PVE or α-TF reduced gastric lesions significantly in comparison to the stressed control group. WIRS increased plasma adrenalin and noradrenalin significantly; PVE and α-TF treatments reduced these parameters significantly compared to the stressed control.

Conclusions: Supplementation with either PVE or α-TF reduces the formation of gastric lesions. Their protective effect was related to their ability to inhibit the stress-induced elevation of adrenalin and noradrenalin levels, as well as to a reduction in xanthine oxidase and dehydrogenase activities.

Background

Stress affects psychological and physiological balances, which can lead to various pathological changes. One known stress-induced pathological condition is the formation of gastric lesions, and studies have shown that its pathogenesis is multifactorial. It includes factors that disrupt gastric mucosal integrity, such as changes in gastric acid, mucus and bicarbonate secretion, inhibition of gastric mucosal prostaglandin synthesis [1], reduction of gastric mucosal blood flow [2,3], as well as changes in stress hormones [4-6] and gastric motility [7,8]. It is also known that an increase in catecholamine levels during stress causes vasoconstriction [6]. These changes can ultimately result in the formation of gastric lesions. Recent studies have also shown the involvement of oxidative stress in the pathogenesis of stress-induced gastric ulcers [9,10]. One particular type of oxidant injury is reoxygenation injury following reperfusion of ischemic tissues [11].
Xanthine oxidoreductase exists in two interconvertible forms: xanthine dehydrogenase and the oxygen-dependent xanthine oxidase. Some studies have shown that allopurinol reduced gastrointestinal injury in rats exposed to a xanthine/hypoxanthine + xanthine oxidase system [12,13]. Previous literature has addressed the role and ability of vitamin E and its derivatives in reducing stress and gastric lesions. Our previous studies found that both tocopherol and tocotrienol reduced the formation of stress-induced gastric lesions in rats [7,14]. Although tocopherol is well known as the most available and active form of vitamin E, the role of tocotrienols has recently received renewed attention. The present study was designed to compare the effects of palm vitamin E, which mainly contains tocotrienols, and α-tocopherol supplementation on catecholamines and gastric xanthine oxidase activity, which are involved in stress-induced gastric lesions in rats.

Methods

Sixty male Sprague-Dawley rats (200-250 g) were divided into three equal-sized groups. The first and second groups were given palm vitamin E (PVE) or α-tocopherol (α-TF), respectively, at a dose of 60 mg/kg body weight orally for 28 days, while the control group was given olive oil as the vehicle, administered using a 4-inch, 18 G needle. The palm vitamin E used in this study contained a mixture of 22% tocopherol and 78% tocotrienols and was obtained from the Malaysian Palm Oil Board (MPOB). The vitamin E dose was chosen based on our previous study, which showed the ability of this dose to reduce the occurrence of gastric lesions [6]. At the end of the treatment period, blood was withdrawn and each group was subdivided into two further groups: one was subjected to WIRS for 3.5 hours and the other was not subjected to any stress (non-stress group). The rats were deprived of food overnight before exposure to stress. Stress was induced by placing each rat individually in a plastic restrainer, after which the rats were immersed neck-deep in water at room temperature (23°C) for 3.5 hours, following the method of Nishida et al. (1997) [15]. After exposure to stress, the rats were anesthetized with ketamine (5 mg/100 g body weight) and xylazine (1 mg/100 g body weight) before blood was withdrawn for determination of catecholamine levels. The rats were then sacrificed, after which the stomach was removed. The experimental design was approved by the Universiti Kebangsaan Malaysia Animal Ethics Committee (UKMAEC).

Assessment of gastric lesions

Gastric lesions were measured under 3× magnification using light microscopy. Lesion size in mm was determined by measuring each lesion along its greatest diameter; every five petechial lesions were counted as a 1 mm lesion. The total lengths in each group of rats were averaged and expressed as the lesion index, following the method previously described by Wong et al. [16].

Gastric xanthine oxidase and xanthine dehydrogenase activities

Tissue preparation for the measurement of xanthine oxidase and xanthine dehydrogenase was done following a method previously described by Qu et al. [17]. The measurement of xanthine oxidase and xanthine dehydrogenase activities followed the method described by Terao et al. [18].

Statistical analysis

Statistical analysis was carried out using the SPSS statistical package version 12 (SPSS Inc., USA). Normal distribution of all variables was examined with the Kolmogorov-Smirnov test. The results are expressed as means ± standard errors of the mean (SEM). Statistical significance (P < 0.05) was determined by ANOVA followed by Tukey's post-hoc test.
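For readers who want to reproduce this pipeline outside SPSS, the sketch below runs the same sequence (Kolmogorov-Smirnov normality check, one-way ANOVA, Tukey's post-hoc test) in Python with SciPy and statsmodels; the group names and lesion-index values are invented for illustration and are not the study's data.

```python
# Minimal sketch of the paper's statistical pipeline (SPSS 12 in the original);
# the group values below are hypothetical, for illustration only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "CN+WIRS": rng.normal(10.0, 2.0, 10),   # hypothetical lesion indices
    "PVE+WIRS": rng.normal(4.8, 1.5, 10),
    "aTF+WIRS": rng.normal(6.0, 1.5, 10),
}

# Kolmogorov-Smirnov check of normality for each (standardized) variable
for name, x in groups.items():
    d, p = stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")
    print(f"{name}: KS D={d:.3f}, p={p:.3f}")

# One-way ANOVA across the three groups
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

# Tukey's post-hoc test for all pairwise comparisons (alpha = 0.05)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```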
Results

Effects of PVE and α-TF on gastric lesions

Non-stressed rats showed no focal lesions in the gastric mucosa. However, gastric mucosal lesions developed in rats subjected to water-immersion restraint stress (WIRS) for 3.5 hours; the area of involvement was confined to the glandular part of the stomach. In rats exposed to stress, pretreatment with either palm vitamin E (PVE) or α-tocopherol (α-TF) significantly reduced the number of gastric lesions, by 52% (P = 0.001) and 40% (P = 0.001), respectively (Figure 1). Macroscopic observation showed either lesions, most often 1-2 mm in size, or petechial bleeding (Figure 2).

Effects of PVE and α-TF on noradrenalin

Figure 3 shows that exposure to WIRS for 3.5 hours increased the plasma noradrenalin level significantly (by about 92%, P = 0.001). The plasma noradrenalin levels of the stressed PVE-treated (lower by about 59%, P = 0.025) and α-TF-treated groups (lower by about 70%, P = 0.022) were decreased significantly compared to the stressed control group. However, no significant difference in the plasma noradrenalin level was observed between the stressed PVE and α-TF groups. Exposure to WIRS for 3.5 hours increased the plasma noradrenalin level significantly in the PVE-treated (P = 0.001) and α-TF-treated groups (P = 0.001) in comparison to their respective non-stressed groups. No significant difference (P > 0.05) in the plasma noradrenalin level between the non-stressed groups was observed.

Effects of PVE and α-TF on adrenalin

Figure 4 shows that immobilization stress increased the adrenalin level significantly compared to the non-stressed group (by about 89%, P = 0.003). There was a significant reduction in the adrenalin level of the stressed PVE-treated (about 18.7%, P = 0.002) and α-TF-treated groups (about 20%, P = 0.001) compared to the stressed controls. However, no significant difference in the adrenalin level between the stressed PVE- and α-TF-treated groups was seen. In addition, exposure to WIRS increased the plasma adrenalin level significantly in the PVE- and α-TF-treated groups compared to their respective non-stressed groups. No significant difference in the adrenalin levels between the non-stressed groups was observed.

Figure 1. The gastric lesion number (in millimetres) of rats pretreated with palm vitamin E (PVE) or α-tocopherol (α-TF) for 28 days and exposed to water-immersion restraint stress for 3.5 hours. Bars represent means ± SEM (n = 7). a: significantly different from the non-stressed group (CN + NS); b: significantly different from the stressed control (CN + WIRS) (ANOVA followed by Tukey's test, p < 0.05).

Figure 3. The plasma noradrenalin level in rats pretreated with palm vitamin E (PVE) or α-tocopherol (α-TF) for 28 days and exposed to water-immersion restraint stress for 3.5 hours. Bars represent means ± SEM (n = 7). a: significantly different from the non-stressed group (CN + NS); b: significantly different from the stressed control (CN + WIRS) (ANOVA followed by Tukey's test, p < 0.05).

Figure 4. The plasma adrenalin level in rats pretreated with palm vitamin E (PVE) or α-tocopherol (α-TF) for 28 days and exposed to water-immersion restraint stress for 3.5 hours. Bars represent means ± SEM (n = 7). a: significantly different from the non-stressed group (CN + NS); b: significantly different from the stressed control (CN + WIRS) (ANOVA followed by Tukey's test, p < 0.05).
Effects of PVE and α-TF on (XO + XD) activity

Exposure to WIRS for 3.5 hours significantly increased the activities of XO + XD, by 76% (P = 0.003), compared to the non-stressed control (Figure 5). The activities of XO + XD in the stressed PVE- and α-TF-treated groups were reduced significantly compared to the stressed control. However, there was no significant difference in the activities of XO + XD between the stressed PVE- and α-TF-treated groups. In addition, no significant differences in the activities of XO + XD were seen in the stressed PVE and α-TF groups compared to their respective non-stressed groups.

Discussion

The increases in noradrenalin and adrenalin levels due to stress are well documented [19-21]. The present study showed that exposure to water-immersion restraint stress (WIRS) for 3.5 hours was enough to increase the levels of these catecholamines significantly: noradrenalin by 92% and adrenalin by 89%. These observations support the hypothesis that adrenal catecholamines play a physiological role in the response to stressful situations. Hamada et al. found that rats exposed to stress developed gastric lesions associated with reduced brain noradrenalin content and increased plasma catecholamine and corticosterone levels [22]. Similarly, we previously showed that rats exposed to repeated restraint stress had higher levels of plasma noradrenalin and corticosterone than non-stressed rats [6]. The underlying mechanisms during stress involve activation of the hypothalamic-pituitary-adrenal (HPA) axis and the sympatho-adrenal-medullary (SAM) system, causing the release of corticosterone together with noradrenalin and adrenalin [23]. Furthermore, the elevation in catecholamine levels may generate free radicals [24], which may be cytotoxic and mediate tissue damage by injuring cellular membranes and releasing intracellular components. It is widely accepted that the pathogenesis of gastric mucosal lesions involves oxygen-derived free radicals. In the present study, the noradrenalin and adrenalin levels of the stressed PVE and α-TF groups were significantly reduced in comparison to the stressed control. In parallel with its ability to block noradrenalin, vitamin E also blocked the formation of gastric lesions in rats exposed to stress. Moreover, the noradrenalin and adrenalin levels in the stressed PVE- and α-TF-treated groups were not different from those in their respective non-stressed groups. This suggests that vitamin E plays an important role in reducing the elevated catecholamine levels induced by stress. We previously reported that the increase in the noradrenalin level was blocked in rats given tocotrienol supplementation but not in rats receiving α-TF [6]. Those findings suggested that tocotrienols are more potent than α-TF in blocking the effects of stress. However, here we found no significant difference between the stressed PVE- and α-TF-treated groups; both treatments mitigated the effects of stress by reducing the levels of noradrenalin and adrenalin. The discrepancy could be due to the different stress models used: acute versus repeated stress. In 2007, Campese and Shaohua showed that rats fed a vitamin-E-fortified diet manifested a significant reduction in noradrenalin secretion from the posterior hypothalamus [25].
A vitamin-E-fortified diet mitigated the formation of reactive oxygen species in the brain, and this was associated with reduced sympathetic nervous system activity and blood pressure in rats with phenol-induced renal injury. Lipid peroxidation mediated by free radicals is considered a primary mechanism of cell membrane destruction [26]. Gastric lesions caused by stress, alcohol, Helicobacter pylori infection and non-steroidal anti-inflammatory drugs have been shown to be mediated largely through the generation of reactive oxygen species (ROS), which appear to play an important role in producing lipid peroxides [3,14,27,28]. The damage to the gastric mucosa due to WIRS has been attributed to impaired gastric microcirculation, which results in ischemia followed by reperfusion, a process that generates free radicals. This indicates that reactive oxygen species and lipid peroxidation are important in the pathogenesis of stress-induced gastric mucosal injury [10]. The present finding is consistent with the elevation of XO activity after stress, which produces ROS. A previous study indicated that exposing rats to 3.5 hours of WIRS increased xanthine metabolism to a level comparable to that observed in an ischaemia-reperfusion model of gastric injury [2]. Xanthine oxidase activity is a major source of ROS, such as the superoxide anion (O2•−) and hydrogen peroxide (H2O2), in the pathogenesis of disease in various biological systems, including the gastrointestinal tract [29-31]. The increase in ROS would then increase gastric lipid peroxidation and subsequent gastric lesion development. This supports the hypothesis that stress-induced injury is mediated by lipid peroxidation.

Figure 5. The gastric xanthine oxidase + xanthine dehydrogenase (XO + XD) activity in the stomach of rats pretreated with palm vitamin E (PVE) or α-tocopherol (α-TF) for 28 days and exposed to water-immersion restraint stress for 3.5 hours. Bars represent means ± SEM (n = 7). a: significantly different from the non-stressed group (CN + NS); b: significantly different from the stressed control (CN + WIRS) (ANOVA followed by Tukey's test, p < 0.05).

In the present study, PVE and α-TF significantly prevented the increase in XO + XD activities after WIRS. It could be that both PVE and α-TF improved the gastric mucosal blood flow that was impaired during WIRS [2,32]; improved gastric blood flow would further suppress the conversion of XD to XO. Raghuvanshi et al. showed that administration of 400 mg of vitamin E for six days along with 80 mg of aspirin produced an excellent antioxidant effect, as evidenced by reduced platelet xanthine oxidase activity [33]. Vitamin E is a lipid-soluble antioxidant and a well-accepted first-line defence against lipid peroxidation. It functions as a chain-breaking antioxidant for lipid peroxidation in cell membranes and as a scavenger of ROS such as the superoxide anion, hydrogen peroxide and singlet oxygen [34]. Yoshikawa et al. reported a decrease in the gastric mucosal vitamin E level and an increase in gastric mucosal lipid peroxidation in ischemia-reperfusion-induced gastric mucosal injury, and the severity of the injury was enhanced in vitamin E-deficient rats [35].
Naito et al. showed that, in nitric oxide-depleted rats, vitamin E played an important protective role against ischemia-reperfusion-induced gastric mucosal injury, and suggested that this gastroprotective effect was due not only to its antioxidant action but also to its inhibitory action on neutrophil infiltration into the gastric mucosa [36]. Al-Tuwaijri and Al-Dhohyan reported that a single oral pre-administration of α-tocopherol acetate to rats prevented ischemia-reperfusion-induced gastric mucosal injury [37]. As mentioned earlier, stress can impair gastric blood flow and cause ischemia-like conditions, which can lead to reperfusion injury and, finally, the development of gastric lesions. During ischemia-reperfusion, lipid peroxidation increased due to the production of ROS; supplementation with PVE or α-TF was able to reduce this increase. It can be concluded that PVE and α-TF have gastroprotective effects against WIRS, possibly via their antioxidant properties. As shown in this study, animals exposed to WIRS for 3.5 hours developed gastric mucosal lesions, confirming the reproducibility of this model. Supplementation with PVE or α-TF at 60 mg/kg for 28 days prior to exposure to stress reduced the gastric mucosal injury. However, no difference between the two agents was observed, showing equal effectiveness in preventing stress-induced gastric injury. Similarly, exposure to WIRS has been shown to increase the incidence of gastric mucosal lesions, and the increase was lowered by the administration of various antioxidants [1,38]. A study by Ohta et al. demonstrated that WIRS for 6 hours reduced the gastric α-tocopherol concentration, but pre-administration of ascorbic acid partially reversed this reduction. In the present study, the prevention of the harmful effects of stress on the gastric mucosa may be mediated by the antioxidant activity of PVE and α-TF, which reduces the formation of free radicals either directly or indirectly, leading to attenuation of lesion formation. The protective mechanism of vitamin E and its role in human health are still not well understood. The antioxidant characteristics of vitamin E, especially its effect on polyunsaturated fatty acids (PUFA), may improve cell membrane integrity. There is a possibility that the gastric tissues thereby become more resistant to aggressive factors such as acid and pepsin.

Conclusions

Our data suggest that the protective effect of vitamin E is related to decreased xanthine oxidase and dehydrogenase activities, which result in a reduction in the formation of free radicals. There is also a possibility that both PVE and α-tocopherol block stress-induced damage at a higher level, namely by blocking the increases in adrenalin and noradrenalin, known mediators of stress.
The brittle boulders of dwarf planet Ceres

We mapped all boulders larger than 105 m on the surface of dwarf planet Ceres using images of the Dawn framing camera acquired in the Low Altitude Mapping Orbit (LAMO). We find that boulders on Ceres are more numerous towards high latitudes and have a maximum lifetime of $150 \pm 50$ Ma, based on crater counts. These characteristics are distinctly different from those of boulders on asteroid (4) Vesta, an earlier target of Dawn, which implies that Ceres boulders are mechanically weaker. Clues to their properties can be found in the composition of Ceres' complex crust, which is rich in phyllosilicates and salts. As water ice is thought to be present only meters below the surface, we suggest that boulders also harbor ice. Furthermore, the boulder size-frequency distribution is best fit by a Weibull distribution rather than the customary power law, just as for Vesta boulders. This finding is robust in light of possible types of size measurement error.

INTRODUCTION

Boulders on planetary bodies bear information on past and present surface processes. In particular, boulder properties and spatial distribution are related to the bulk properties of the parent body and the surface environmental conditions. On the terrestrial planets, processes like impact cratering, volcanism, and mass wasting are typically responsible for boulder formation. Degradation of boulders may result from processes like comminution by impacts and weathering, which on bodies with water and/or an atmosphere can include chemical weathering. Over the last decades, observations by spacecraft have revealed the existence of boulder populations on small airless Solar System bodies such as comets (Pajola et al. 2015, 2016), asteroids (Lee et al. 1996; Thomas et al. 2001; Michikami et al. 2008; Küppers et al. 2012; Jiang et al. 2015; Michikami et al. 2019; DellaGiustina et al. 2019), icy satellites (Pajola et al. 2021), and the protoplanet (4) Vesta (Schröder et al. 2020). In the absence of an atmosphere and volatiles like water, only a few processes can produce and destroy boulders. The most important formation mechanisms are the destruction of a parent body (Michel et al. 2020) and spallation during large impacts (Krishna & Kumar 2016). The former is thought to be responsible for the boulder-dominated surfaces of rubble-pile asteroids (Fujiwara et al. 2006; Michikami et al. 2019; DellaGiustina et al. 2019), whereas the latter process dominates on asteroids suspected to be more monolithic (Lee et al. 1996; Thomas et al. 2001; Küppers et al. 2012). Destruction by small impacts and thermal stress weathering (Delbo et al. 2014; Molaro et al. 2017) are the most important degradational processes. Dwarf planet (1) Ceres occupies a position somewhere between the small bodies and the terrestrial planets, in the sense that it is a large, volatile-rich world, yet without an atmosphere (Russell et al. 2016). As such, boulders on its surface may be affected by more processes than on small bodies, but by fewer than on the larger, more complex terrestrial planets. Here, we investigate the boulder population of Ceres and compare it with that of Vesta (Schröder et al. 2020). Both bodies were imaged by the same camera aboard the Dawn spacecraft (Russell & Raymond 2011). Such a comparison benefits from the fact that Vesta and Ceres have comparable distances to the Sun and very similar surface gravities (Basilevsky et al. 2013).
Any differences between the respective boulder populations may therefore relate to compositional differences, with Ceres' crust harboring water ice, phyllosilicates, and salts (De Sanctis et al. 2016; Prettyman et al. 2017), and Vesta's crust being basaltic (De Sanctis et al. 2012). The limited spatial resolution of the global Dawn image data set restricts our study to clasts larger than 100 m, for which Bruno & Ruban (2017) suggested the term megaclasts. But "boulder" has typically been used for clasts on small airless bodies irrespective of their size, and for consistency with the Vesta study we retain the term boulder. Here, we study the global boulder population of Ceres using methods similar to those used for the Vesta boulder population (Schröder et al. 2020), as described in Sec. 2. The results of our analysis are reported in Sec. 3. We searched all Dawn images acquired in the Low Altitude Mapping Orbit (LAMO) for boulders and determine general statistics of the global population related to boulder sizes and numbers (Sec. 3.1). We determine the size-frequency distribution (SFD) of the boulder populations of individual craters and that of the global population (Sec. 3.2). The boulder SFD is traditionally fit with a power law, but Vesta boulders rather follow a Weibull distribution; we evaluate whether the same holds true for Ceres boulders. We investigate the spatial distribution of boulders in and around individual craters, as well as the distribution of craters with boulders across the globe (Sec. 3.3). Furthermore, we estimate the average boulder lifetime by comparing the boulder density around craters for which an age estimate is available, and assess the Basilevsky et al. (2015) prediction that meter-sized boulders on Ceres have the same lifetime as on Vesta (Sec. 3.4). In Sec. 4 we discuss our results and the implications of the observed differences with the Vesta boulder population.

Boulder mapping

Boulders on Ceres can only be distinguished in framing camera images acquired in the Low Altitude Mapping Orbit (LAMO) at an altitude of around 400 km and in lower orbits of the extended mission (Russell et al. 2007). The framing camera is a narrow-angle camera with a field of view of 5.5° × 5.5° (Sierks et al. 2011). LAMO coverage of the illuminated surface was near-complete for the camera's clear filter, but color imaging was sparse. The clear filter (F1) is a polychromatic filter with 98% transmission in the 450 to 920 nm wavelength range (Sierks et al. 2011). LAMO images were acquired between 16 December 2015 and 27 August 2016 and have a typical scale of 35 m per pixel (Roatsch et al. 2017). The average scale of the 59 LAMO images that we used in our analysis (at least one for each crater with boulders) is 35.8 ± 1.3 m per pixel. The boulder-finding procedure was identical to that followed for Vesta boulders (Schröder et al. 2020). In summary, the second author browsed the entire data set of LAMO clear filter images and identified, measured, and mapped all boulders using the J-Ceres GIS program, which is a version of JMARS (Christensen et al. 2009), after which the first author reviewed the results for accuracy and completeness. Boulders were identified as positive relief features in projected images at a zoom level of 1024 pixels per degree. The LAMO resolution is about 230 pixels per degree at the equator, so this represents a zoom factor of about 4.
Boulder size was determined using the J-Ceres crater measuring tool, which draws a circle around a boulder fitted to 3 points selected by the user on the visible boulder outline. The measurement uncertainty is about a single pixel. The limited accuracy of pointing information for LAMO images leads to mismatches between projected images. We used small craters inside and outside the crater as tie points to align the projected images with a Ceres background mosaic and relative to each other. All this leads to an uncertainty in the location of boulders on the order of 500 m. We are confident that we could reliably identify boulders with a size of at least 3 pixels (105 m), although a criterion of 4 pixels (140 m) is more likely to ensure that mapping is complete (Schröder et al. 2020; Pajola et al. 2021). We did not distinguish between boulders located inside or outside the crater rim, a choice that we justify in Sec. 3.3.

The illumination conditions at the time of imaging affect the visibility of a boulder, mainly through the strength of its shadow. The photometric angles at the center of the LAMO images, calculated for an ellipsoidal Ceres, are plotted as a function of latitude L in Fig. 1. The illumination conditions for imaging, and thereby boulder visibility, systematically changed with latitude. The spacecraft looked at nadir most of the time (low emission angle), but the incidence and phase angles increased with L. If we distinguish three latitudinal zones as "low" (|L| < 30°), "mid" (30° < |L| < 60°), and "high" (|L| > 60°), the average incidence angle at the image center is ι = 48° ± 4° for low latitudes, ι = 62° ± 5° for mid-latitudes, and ι = 75° ± 5° for high latitudes. For a spherical boulder that is half-buried in a plane surface, the maximum length of the shadow is $l = r(1/\cos\iota - \cos\iota)$, with boulder radius r and incidence angle ι. A boulder at low latitudes casts the shortest shadow, with l = 0.83r; boulders with diameters of 3 and 4 pixels cast shadows of 1.2 and 1.6 pixels, respectively. Especially for the larger diameter, the shadow is long enough to be well visible. Thus, although boulders are more easily recognized in high-latitude images, we are confident that all boulders with a diameter of 4 pixels can be recognized in low-latitude images. Nevertheless, it is likely that we missed boulders with a 3-pixel diameter in low-latitude images due to their short shadows.

In Fig. 2 we investigate how the increase of ι with latitude affects our mapping. The figure shows LAMO images of two craters with abundant boulders: the high-latitude crater Jacheongbi (69°S, ι = 78°) and the low-latitude crater Unnamed17 (10°S, ι = 42°). Both craters appear fresh, with boulders that are easily recognized by the shadows they cast on the ejecta blankets. At LAMO resolution, the blankets appear equally smooth at either incidence angle. The rightmost panels show the distribution of boulders as mapped using J-Ceres. The stronger shadows of the Jacheongbi boulders did not lead us to recognize them in higher numbers compared to Unnamed17, which confirms that differences in visibility may only be consequential at the most extreme incidence angles (Wilcox et al. 2005). The figure also shows that half of Jacheongbi's interior is in shadow. Boulders are abundant on the sunlit half of the crater floor, which suggests that boulder numbers in high-latitude craters are severely underestimated, with potentially important consequences for the SFD.
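As a quick numerical check of the shadow-length argument, the sketch below evaluates l = r(1/cos ι − cos ι) for the average incidence angle of each latitude zone; the zone angles are those quoted above, and the printed values reproduce the 1.2 and 1.6 pixel shadows for 3- and 4-pixel boulders at low latitudes.

```python
import numpy as np

def shadow_length(r_px, inc_deg):
    """Maximum shadow length (pixels) of a half-buried spherical boulder
    of radius r_px under incidence angle inc_deg: l = r(1/cos i - cos i)."""
    i = np.radians(inc_deg)
    return r_px * (1.0 / np.cos(i) - np.cos(i))

for zone, inc in [("low", 48.0), ("mid", 62.0), ("high", 75.0)]:
    for d_px in (3, 4):  # boulder diameters of 3 and 4 pixels
        l = shadow_length(d_px / 2.0, inc)
        print(f"{zone:4s} (i = {inc:.0f} deg), d = {d_px} px: shadow = {l:.1f} px")
```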
Power law SFD

Various practices for displaying boulder SFDs in the literature were discussed by Schröder et al. (2020). In this paper, we display the SFD in both cumulative and differential format, using the incremental (binned, histogram) version for the latter (Colwell 1993). The cumulative distribution of boulders on Solar System bodies is often assumed to follow a power law, for which the number of boulders with a size larger than d is

$$N(>d) = N_{\rm tot} \left( \frac{d}{d_{\rm min}} \right)^{\alpha}, \qquad (1)$$

with α < 0 the power law exponent and N_tot the total number of boulders larger than d_min. The exponent of a cumulative distribution of a quantity that follows a power law is identical to that of the associated incremental differential distribution with a constant bin size on a logarithmic scale, if the logarithmic bins are chosen wide enough (Hartmann 1969; Colwell 1993). The power law exponent is best estimated from the SFD by means of the maximum likelihood (ML) method (Newman 2005; Clauset et al. 2009). The ML power law exponent (α < 0) is estimated directly from the boulder size measurements as

$$\hat\alpha = -N \left[ \sum_{i=1}^{N} \ln \frac{d_i}{d_{\rm min}} \right]^{-1}, \qquad (2)$$

with d_i the size of boulder i and N the total number of boulders with a size larger than d_min. The standard error of $\hat\alpha$ is

$$\sigma_{\hat\alpha} = \frac{|\hat\alpha|}{\sqrt{N}}, \qquad (3)$$

plus higher-order terms, which we ignore. The estimator in Eq. 2 is unbiased only for sufficiently large sample sizes. Clauset et al. (2009) also provide the details of a statistical test that evaluates whether a power law is an appropriate model for the data. The test randomly generates a large number of synthetic data sets according to the best-fit power law model (specified by $\hat\alpha$ and d_min), and calculates for each the Kolmogorov-Smirnov statistic, which is a measure of how well the synthetic data agree with the model. A p-value, defined as the fraction of synthetic data sets that have a larger statistic than the real data set, quantifies how well the power law performs. The authors adopt p < 0.1 to mean rejection of the power law model. We fitted power laws to the global boulder population, but also to the populations associated with individual craters to investigate possible variations of the exponent over the surface. To evaluate whether any such variations are meaningful, we simulate the Ceres boulder population on the basis of the power law, using the observed population sizes of individual craters as input. For all craters we adopt the same exponent, namely that of the power law that best fits the global population. The continuous power law probability distribution is also known as the Pareto distribution (Newman 2005). To simulate a size distribution of boulders associated with impact craters, we draw a random variate U from a uniform distribution on (0, 1) using the randomu routine in IDL with an undefined seed. The boulder diameter then follows a Pareto distribution,

$$d = d_{\rm min} U^{1/\alpha}, \qquad (4)$$

with α the power law index (associated with the cumulative distribution function, α < 0). We adopted a minimum boulder diameter of d_min = 140 m (4 pixels). We simulated populations for all craters and estimated the power law exponent for each using the ML method. We then compared the resulting distribution of exponents with the observed one. The power law exponent estimated according to Eq. 2 is biased for low boulder numbers (Clauset et al. 2009), which was illustrated for simulated boulder populations by Schröder et al. (2020).
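A minimal sketch of the ML estimator of Eqs. 2 and 3 (the paper used IDL; this version is Python, and the boulder sizes are synthetic stand-ins drawn from Eq. 4 rather than the actual catalog):

```python
import numpy as np

def fit_power_law(d, d_min):
    """ML estimate of the cumulative power-law exponent (Eq. 2) and its
    standard error (Eq. 3) for boulder sizes d larger than d_min."""
    d = np.asarray(d, dtype=float)
    d = d[d > d_min]
    n = d.size
    alpha = -n / np.sum(np.log(d / d_min))
    sigma = abs(alpha) / np.sqrt(n)
    return alpha, sigma, n

# Synthetic population drawn via the inverse transform of Eq. 4
rng = np.random.default_rng(42)
alpha_true, d_min = -5.8, 140.0
d = d_min * rng.uniform(size=1000) ** (1.0 / alpha_true)

alpha_hat, sigma, n = fit_power_law(d, d_min)
print(f"alpha = {alpha_hat:.2f} +/- {sigma:.2f} (n = {n}, true value {alpha_true})")
```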
Another potential source of bias is measurement error of the boulder size. While the boulders described in this paper are large in absolute terms, they typically measure only a few pixels across, and measurement errors on the order of a pixel can be expected. Here we investigate the consequences of measurement error using boulder populations simulated according to Eq. 4. First, we consider a group of "craters", each with a simulated boulder population of a different size. The SFD of all boulder populations follows a power law with exponent −4, with a minimum boulder size of 40 m. We modified the boulder sizes according to three different definitions of measurement error, distinguishing between systematic and random errors. Measurement errors are sized one pixel of 35 m, equal to the spatial resolution of the Ceres LAMO images. We estimated the power law exponent for each crater, including only boulders larger than 4 pixels (140 m) in the fit. Note that we can only assess how many boulders meet this requirement after performing the simulation. Figure 3 shows the results. In (a), the boulder sizes are all measured correctly, and the power law exponent is retrieved reliably for craters with large boulder numbers. The negative bias at small boulder numbers inherent in Eq. 2 can be recognized clearly. In (b), boulder sizes are affected by measurement errors of a random character: sizes are either decreased by 1 pixel (size underestimated), increased by 1 pixel (size overestimated), or left unchanged, each with an equal probability of 1/3. Panels (c) and (d) explore the consequences of systematic measurement errors, by either under- or overestimating all boulder sizes by 1 pixel. Systematic errors can be introduced by the method of measuring. Figure 3 shows that all types of measurement error lead to biased power law exponents. Underestimating the boulder sizes increases the exponent by a little less than unity (shallower power law), while overestimating the sizes decreases the exponent by unity (steeper power law). The bias is stronger for overestimated sizes than for underestimated ones, which is why the random measurement errors in (b) decrease the exponent by a little less than unity (steeper power law). Another consequence of random measurement errors is that the exponent converges only at larger boulder numbers than without errors, which is not reflected in the formal uncertainty of the exponent (Eq. 3). Another aspect of this problem is the shape of the cumulative SFD. Figure 4 investigates how the shape changes for the same three cases of random and systematic measurement errors. For each case we generated four populations of the same size, again with a power law exponent of −4. In (a), the boulder sizes are all measured correctly, and the SFD follows the straight line of the power law up to a diameter of about 200 m; beyond this size, the simulated curves diverge considerably due to chance. In (b), the boulder sizes are affected by measurement errors of a random character, which steepens the SFD slightly. In (c), boulder sizes are systematically underestimated, which makes the SFD shallower and introduces a slightly convex curvature. In (d), boulder sizes are systematically overestimated, which steepens the SFD considerably and introduces a slightly concave curvature. Our simulations demonstrate that bias due to measurement error is unavoidable when typical boulder sizes are on the order of a few image pixels. The results in Figs. 3 and 4 may allow us to identify or predict such bias for the Ceres boulder population.
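The measurement-error experiment can be reproduced in outline as follows, under the same assumptions as above (Pareto population from Eq. 4, exponent −4, 35 m pixels, 4-pixel fit cutoff); this is a simplified stand-in for the paper's simulations, not the original code.

```python
import numpy as np

rng = np.random.default_rng(1)
PIXEL = 35.0  # LAMO image scale, m/px

def simulate(n, alpha=-4.0, d_min=40.0):
    """Draw n boulder diameters from the Pareto distribution of Eq. 4."""
    return d_min * rng.uniform(size=n) ** (1.0 / alpha)

def ml_alpha(d, d_fit=140.0):
    """Eq. 2 applied to boulders larger than the 4-pixel (140 m) cutoff."""
    d = d[d > d_fit]
    return -d.size / np.sum(np.log(d / d_fit))

d = simulate(20000)
cases = {
    "(a) no error": d,
    "(b) random +/- 1 px": d + PIXEL * rng.choice([-1, 0, 1], size=d.size),
    "(c) underestimated by 1 px": d - PIXEL,
    "(d) overestimated by 1 px": d + PIXEL,
}
# (a) should recover -4; (c) comes out shallower, (d) steeper by about unity
for label, dd in cases.items():
    print(f"{label:28s} alpha = {ml_alpha(dd):.2f}")
```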
Weibull SFD

The SFD of the Vesta boulder population is better described by a Weibull distribution than by a power law (Schröder et al. 2020). We investigate whether the same holds true for the Ceres boulder population. The Weibull distribution was initially derived empirically, and is often used to describe the particle distribution resulting from grinding experiments (Rosin & Rammler 1933). Where the power law follows naturally from a single-event fragmentation that leads to a branching tree of cracks of fractal character, the Weibull distribution results from sequential fragmentation (Brown & Wohletz 1995). Because we only include boulders larger than a certain size in the fit, we employ a left-truncated Weibull distribution with the cumulative form (Wingo 1989)

$$N(>d) = N \exp\left[ \left( \frac{d_{\rm min}}{\alpha} \right)^{\beta} - \left( \frac{d}{\alpha} \right)^{\beta} \right], \qquad (5)$$

where N is the number of boulders larger than d_min, α is the scale parameter, and β is the shape parameter, which relates to the Brown & Wohletz (1995) fragmentation parameter γ as β = 3(γ + 1). We estimate the Weibull parameters α and β from the boulder sizes d_i > d_min using the ML method. To maximize the log-likelihood function, these two equations must be satisfied:

$$\hat\alpha = \left[ \frac{1}{N} \sum_{i=1}^{N} \left( d_i^{\hat\beta} - d_{\rm min}^{\hat\beta} \right) \right]^{1/\hat\beta}, \qquad (6)$$

$$\frac{N}{\hat\beta} + \sum_{i=1}^{N} \ln\frac{d_i}{\hat\alpha} - \sum_{i=1}^{N} \left[ \left(\frac{d_i}{\hat\alpha}\right)^{\hat\beta} \ln\frac{d_i}{\hat\alpha} - \left(\frac{d_{\rm min}}{\hat\alpha}\right)^{\hat\beta} \ln\frac{d_{\rm min}}{\hat\alpha} \right] = 0. \qquad (7)$$

We find $\hat\beta$ from a simple grid search, and $\hat\alpha$ by inserting $\hat\beta$ into Eq. 6.

General statistics

We identified a total of 4423 boulders on the surface of Ceres with a diameter larger than 3 image pixels (105 m), of which 1092 were larger than 4 pixels (140 m). All boulders are associated with impact craters. The details of all craters with at least one boulder larger than 4 pixels (n = 58) are listed in Table 1. First, we summarize some general statistics of the global boulder population. Figure 5a shows a (weak) correlation between the number of boulders per crater and crater size. The largest craters in the sample have far fewer boulders than expected. For example, Occator is the largest crater, with a diameter of 90 km, yet it has only 26 boulders larger than 4 pixels; from the general trend in the figure, we would expect a number at least an order of magnitude larger for a crater of its size. But many of its boulders may have been destroyed or hidden from view by the large-scale flows present in and around the crater (Schenk et al. 2019). In fact, three of the four largest craters in our sample show evidence of such flows: Azacca, Ikapati, and Occator (Buczkowski et al. 2018a; Krohn et al. 2018; Schenk et al. 2019). Boulders around these craters are also difficult to distinguish from features resulting from partly submerged topography. The fourth crater in this group, Gaue, is very old (Pasckert et al. 2018). Age must affect the number of boulders, as they are destroyed over time. The figure also shows the number of boulders associated with craters on Vesta (Schröder et al. 2020), where we adopted the same minimum size (140 m) as for the Ceres boulders. Clearly, the number of boulders produced on average by impacts on Ceres is larger than on Vesta for craters of the same size. We investigate the relation between the size of the largest boulder (L) and the size of the crater (D) in Fig. 5b. Being a single measurement, the largest boulder size is a poor statistic, but it is nevertheless often used to characterize boulder populations (Thomas et al. 2001; Küppers et al. 2012; Jiang et al. 2015; Schulzeck et al. 2018a). We compare the Ceres distribution with the relation provided by Lee et al. (1996) for craters formed in rocky targets ($L = 0.25 D^{0.7}$, with L and D in m) and with the empirical range established by Moore (1971) for a selection of lunar and terrestrial craters ($L = 0.01^{1/3} K D^{2/3}$, with K ranging from 0.5 to 1.5).
The former relation represents more or less the upper limit of the latter range. The largest boulders on Ceres do not agree well with either relation from the literature, having sizes that are almost independent of crater size. Again, it is mostly the largest craters that break the trend, probably because of the aforementioned flow features and old age. The largest boulder we found on Ceres is a 500 m block on the rim of Jacheongbi (figure inset), a relatively large crater (27 km). The figure also shows the largest boulders of Vesta craters. While the Vesta data also do not perfectly agree with either relation from the literature, most fall within the range of Moore (1971). On average, the largest boulders on Vesta are somewhat smaller than those on Ceres.

Size-frequency distribution

We aggregate all boulders counted on the surface of Ceres to find the cumulative power law exponent of the global boulder population. We note that the resulting global SFD is biased, as the boulder populations of the largest craters were almost certainly decimated by large-scale flows (see Sec. 3.1). Figure 6 shows the SFD in both cumulative and differential representation. At the top of the differential plot we show the uncertainty in size resulting from a 1 pixel measurement error. We chose a logarithmic bin size of 0.07, with the boulder size in meters, ensuring that the bin size is on the order of the measurement error at the larger end of the scale. As the error is larger than the bin size at the smaller end of the scale, we can expect boulders to end up in adjacent bins merely by chance. We recognize the characteristic roll-over of the distributions towards smaller diameters, caused by the limited spatial resolution and the measurement error. We fit two power laws to the data with the ML method, one with the minimum boulder size (d_min) fixed, and the other with d_min estimated by the ML algorithm. When fixing d_min at 4 pixels (140 m), we find a power law exponent of α = −5.8 ± 0.2 (n = 1092, black dashed lines in Fig. 6). By extrapolating the power law to smaller diameters, we find that the number of boulders with a diameter around 3 pixels may be severely underestimated; the observed number in the bin closest to the 3 pixel limit is 2328, while the extrapolated, expected number is about 4000. The counts for boulders larger than 4 pixels are probably close to complete. We note that the counts at the largest diameters do not match the power law well, in either the cumulative or the differential representation. The statistical test provided by Clauset et al. (2009) confirms that this power law is not a good model for the data (p = 0). When we let the ML algorithm itself choose the minimum boulder size, we find a larger d_min = 169 m and a steeper power law with α = −6.7 ± 0.3 (n = 400, red dashed lines in Fig. 6). The statistical test indicates that this power law is a good model for the data (p = 0.37). We also estimated the power law exponent for individual craters with at least 6 boulders larger than 4 pixels (Table 1), and plot these as a function of the number of boulders in the population in Fig. 7. The figure also includes three simulations of the power law exponent distribution of individual craters. The simulations use the observed population sizes and adopt the best-fit power law exponent of the global boulder population (α = −5.8). The observations and simulations agree in showing a large scatter in the exponents for smaller population sizes and the expected negative bias (Clauset et al. 2009; Schröder et al. 2020). The degree of scatter is similar for simulated and observed data, indicating that the observed variety in power law exponents is merely due to differences in population size and not to some physical property of boulders or craters. However, the observed exponents of craters with small boulder populations are typically more negative than in the simulations. Additionally, the power law exponent of the crater with the largest number of boulders, Jacheongbi, is further from −5.8 than that of the simulated "Jacheongbi's". It is almost as if the observed distribution of exponents is skewed with respect to the simulated distribution. This suggests that the power law model does not correctly describe the boulder SFD.
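The goodness-of-fit test of Clauset et al. (2009) described in Sec. 2.2 can be sketched as follows. This simplified version keeps the fitted parameters fixed when generating the synthetic data sets (the full recipe refits each one), so it should be read as an illustration of the p-value logic rather than the exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

def ks_distance(d, alpha, d_min):
    """KS distance between a sample and the power-law CDF
    F(x) = 1 - (x/d_min)**alpha (alpha < 0)."""
    x = np.sort(d)
    cdf = 1.0 - (x / d_min) ** alpha
    emp_hi = np.arange(1, x.size + 1) / x.size
    emp_lo = np.arange(0, x.size) / x.size
    return max(np.abs(emp_hi - cdf).max(), np.abs(emp_lo - cdf).max())

def power_law_p_value(d, alpha, d_min, n_boot=1000):
    """Fraction of synthetic data sets that fit worse than the real one."""
    d_obs = ks_distance(d, alpha, d_min)
    worse = 0
    for _ in range(n_boot):
        synth = d_min * rng.uniform(size=d.size) ** (1.0 / alpha)  # Eq. 4
        if ks_distance(synth, alpha, d_min) > d_obs:
            worse += 1
    return worse / n_boot

# A sample actually drawn from the model should yield a high p-value
sample = 140.0 * rng.uniform(size=1092) ** (1.0 / -5.8)
print("p =", power_law_p_value(sample, -5.8, 140.0))
```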
Schulzeck et al. (2018a) independently counted boulders around several Ceres craters and fitted power laws to the SFDs, using ML to estimate both the exponent and the minimum diameter. We compare their exponents with ours in Fig. 8. The exponents agree within the error bars for the craters with more than 70 boulders (Jacheongbi, Nunghui, and Unnamed11). The exponents for the other three craters (Juling, Ratumaibulu, and Unnamed17) agree less well, which is not surprising given the small size of their boulder populations. The exponents for the crater with the largest number of boulders (Jacheongbi, with 160 boulders) match closely (−4.5 ± 0.4 versus −4.4 ± 0.7). This suggests that our counts are consistent with those of Schulzeck et al. (2018a). In Sec. 2.2 we uncovered evidence for bias resulting from measurement errors in the boulder sizes, which were probably similar in magnitude for Schulzeck et al. (2018a). Can this bias be responsible for the unusual steepness and the convex shape of the SFD of the global boulder population? Given that the boulder size measurements were almost certainly subject to errors of at least one image pixel, the "true" SFD is probably less steep (see Figs. 3 and 4). The power law exponent may be smaller in magnitude by about unity, but with a value somewhere between −4.8 and −5.7, the SFD would still be unusually steep. Measurement errors also affect the shape of the SFD (Fig. 4). Random errors would actually lead to a slightly more concave shape. Only systematically underestimating the boulder sizes would lead to a convex SFD, but this would also tend to make it shallower. Thus, we cannot attribute the downturn of the SFD towards large sizes to measurement error, and it must be an intrinsic property of the Ceres boulder population. We conclude that the power law is not a good model for the SFD of boulders larger than 4 pixels. The ML algorithm was able to find an acceptable power law for a larger minimum size (d_min = 169 m), but there is no reason to exclude the well-resolved boulders larger than 4 pixels but smaller than 169 m (more than half of the total) from the fit. This is the same situation as for the Vesta boulder SFD, for which the Weibull distribution proved to be a better model than the power law (Schröder et al. 2020). The best-fit Weibull distribution for the Ceres global boulder population has N = 1092, α = 1.32, and β = 0.45 (Fig. 9). The fractal dimension D_f = 3 − β of the cracks in the rock is then 2.5.
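A sketch of how the Weibull parameters can be estimated as described above: a grid search over the shape parameter β, with the scale α obtained from Eq. 6 at each grid point by maximizing the truncated-Weibull log-likelihood. The synthetic sample stands in for the boulder catalog, and the grid range is an assumption.

```python
import numpy as np

def fit_truncated_weibull(d, d_min, betas=np.linspace(0.05, 3.0, 600)):
    """Grid search for the shape beta; the scale alpha follows from Eq. 6.
    Maximizes the log-likelihood of the left-truncated Weibull of Eq. 5."""
    d = np.asarray(d, dtype=float)
    d = d[d > d_min]
    n = d.size
    best = (-np.inf, None, None)
    for beta in betas:
        alpha = np.mean(d ** beta - d_min ** beta) ** (1.0 / beta)  # Eq. 6
        # log-likelihood of the truncated Weibull pdf
        ll = (n * np.log(beta) - n * beta * np.log(alpha)
              + (beta - 1.0) * np.log(d).sum()
              + n * (d_min / alpha) ** beta - ((d / alpha) ** beta).sum())
        if ll > best[0]:
            best = (ll, alpha, beta)
    return best[1], best[2]

# Synthetic test: inverse-transform sampling of the truncated Weibull
rng = np.random.default_rng(3)
a_true, b_true, d_min = 1.32, 0.45, 140.0
u = rng.uniform(size=1092)
d = a_true * ((d_min / a_true) ** b_true - np.log(1.0 - u)) ** (1.0 / b_true)

alpha_hat, beta_hat = fit_truncated_weibull(d, d_min)
print(f"alpha = {alpha_hat:.2f} (true {a_true}), beta = {beta_hat:.2f} (true {b_true})")
```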
The Weibull distribution fits the SFD better than the power law and, contrary to the latter, does not predict that the number of boulders with a size of 3 pixels is massively underestimated. Figure 9 also shows Weibull distributions for the Vesta global boulder population (Schröder et al. 2020). There are two best-fit curves, one including and the other excluding the boulders of Marcia, the largest crater on Vesta. Just like the largest Ceres craters, Marcia shows evidence of flows, which may have destroyed many of its boulders. As Vesta is smaller than Ceres, its Weibull distributions plot below the Ceres distribution. Given the uncertainty surrounding the boulder populations of the largest craters on both worlds, a detailed comparison of the Weibull parameters provides little insight.

Spatial distribution

In Fig. 10, we plot the distribution of all boulders with a size of at least 3 pixels (105 m) on a color composite map of Ceres. In the same figure and on the same scale, we also show the distribution of boulders larger than 105 m on Vesta, using counts from Schröder et al. (2020). With a surface area 3.2 times that of Vesta, the Ceres map is much larger. On both bodies, boulders are confined to craters. On Vesta, boulders are mostly absent from a large area of low albedo that appears to be enriched in carbonaceous chondrite material (Denevi et al. 2016; Schröder et al. 2020). The situation is different on Ceres. Several distinctly blue craters (Occator, Haulani, Kupalo) have boulders, consistent with blue being a marker of youth (Schmedemann et al. 2014). Large areas are devoid of boulders, especially at lower latitudes, but these do not stand out in terms of color or albedo as on Vesta. Even though the poles were partly in shadow during LAMO, we find many boulders there. We quantify the boulder abundance in three latitude zones (low, mid, and high) on both Ceres and Vesta in Fig. 11. We calculated both the density of boulders and the density of craters with at least one boulder larger than 105 m by dividing the total number of boulders/craters in a latitudinal zone by the total surface area of the zone (including shadowed terrain), calculated under the assumption that Ceres and Vesta are spheres with radii of 469 and 263 km, respectively. Adopting Poisson error bars allows us to assess whether any differences are the result of chance. The boulder density graph (Fig. 11a) confirms our visual impression that the density is higher at the high latitudes of Ceres, despite the polar terrain being partly in shadow. The small (Poisson) error bars indicate that this is very likely not due to chance. The Ceres boulder density at mid-latitudes is also a little higher than that at low latitudes. The boulder density at low latitudes is more uncertain than the error bar indicates: first, boulder numbers may be underestimated because of the limited visibility of the shadows cast by smaller boulders (see Sec. 2.1); second, the boulder counts for Occator and Haulani, with their large-scale flows, are uncertain (see Sec. 3.1). In contrast to Ceres, the Vesta boulder density does not vary with latitude within the Poisson error margins. It is a little lower than the Ceres boulder density at low and mid-latitudes and much lower at high latitudes. The density of craters-with-boulders (Fig. 11b) also increases on Ceres toward high latitudes, although the correlation is not as strong due to the larger error bars.
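The zone densities and their Poisson error bars can be computed as in the sketch below; the radius is the spherical approximation quoted above, while the per-zone boulder counts are placeholders, not the paper's values.

```python
import numpy as np

R_CERES = 469.0  # km, spherical approximation used in the text

def zone_area(r_km, lat1_deg, lat2_deg):
    """Area (km^2) of the two bands with |latitude| in [lat1, lat2] on a
    sphere: A = 2 * 2*pi*R^2 * (sin(lat2) - sin(lat1)), one band per
    hemisphere."""
    s1, s2 = np.sin(np.radians([lat1_deg, lat2_deg]))
    return 2.0 * 2.0 * np.pi * r_km**2 * (s2 - s1)

# Placeholder boulder counts per zone (illustrative, not the catalog)
counts = {"low": 900, "mid": 1400, "high": 2100}
zones = {"low": (0, 30), "mid": (30, 60), "high": (60, 90)}

for name, (l1, l2) in zones.items():
    n = counts[name]
    area = zone_area(R_CERES, l1, l2)
    dens = n / area
    err = np.sqrt(n) / area  # Poisson error bar on the density
    print(f"{name:4s}: {dens:.2e} +/- {err:.2e} boulders per km^2")
```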
The density of craters-with-boulders on Vesta does not vary with latitude within the error margins. The crater density may be similar at high latitudes on both bodies, although the error bars are large. Interestingly, the density of craters-with-boulders on Vesta is significantly higher than that on Ceres at low and mid-latitudes, despite the boulder density being lower. This implies that, on average, craters on Ceres have more boulders than craters on Vesta, consistent with Fig. 5a. Examples of the spatial distribution of boulders around individual craters are shown in Fig. 12. Boulders are located both inside the crater and outside the rim, typically all within one crater radius. The figure distinguishes between boulders in two different size classes, but sorting of boulders according to size is not evident, consistent with the findings of Schulzeck et al. (2018a). There are no accumulations of boulders in the size range considered in our global study at the foot of steep slopes, which argues against the formation of such boulders by post-impact weathering. High-resolution images (scales of 3-10 m per pixel) acquired in the Dawn extended mission show evidence of boulder transport on crater walls, such as bounce marks on unconsolidated talus and boulders at the downslope end of tracks. Figure 13a shows an example of boulders collected at the foot of a crater wall; such boulders are consistently smaller than those we consider here. As very high-resolution images are available for only a very small fraction of the surface, we do not consider them in our global study. We therefore decided to group boulders inside and outside the craters together and treat them as a single population. Another image from the extended mission, in Fig. 13b, shows clusters of boulders that appear to derive from former, larger boulders. Such fields of debris may originate either from the impact of the larger boulder on the surface or from weathering and/or erosion in place, demonstrating that boulders disintegrate by fracturing.

Boulder lifetime

Boulders degrade over time and eventually disappear from the surface. Basilevsky et al. (2015) predicted that the survival time of meter-sized boulders on Ceres is very similar to that on Vesta, based on estimates of the potential impactor flux and the expected impact velocities. Schröder et al. (2020) determined a survival time of about 300 Ma for Vesta boulders, much longer than the ∼10 Ma predicted by Basilevsky et al. (2015). The authors attributed this apparent discrepancy to the fact that the boulders in their sample were one to two orders of magnitude larger than the meter scale associated with the prediction. The boulders in our sample are even larger than the Vesta boulders studied by Schröder et al. (2020) because of the lower LAMO image resolution at Ceres, but typically by a factor of two rather than an order of magnitude. It should therefore be possible to test the Basilevsky et al. (2015) prediction of similar survival times on Vesta and Ceres, if accurate age estimates are available for our Ceres craters. Estimating the age of a crater is typically done by counting smaller craters in a selected area on the ejecta blanket and modeling the resulting SFD.
Just as for Vesta, two alternative chronologies have been used to model crater SFDs for Ceres: the lunar-derived model (LDM) adapts the lunar production and chronology functions to impact conditions on Ceres, whereas the asteroid-derived model (ADM) derives a production function by scaling the SFD observed in the main asteroid belt to the SFD of Ceres craters (Hiesinger et al. 2016). Most papers on the topic of Ceres dating employ both chronologies and provide two age estimates for a particular crater. Table 2 lists the craters for which age estimates are available. The uncertainty associated with the age is typically large, as the two chronologies can yield widely different values. Additional sources of uncertainty are the choice of counting area and the assumed strength of the target surface. The tabulated ages were estimated assuming an impact into hard rock; Williams et al. (2018) found that the ADM (and, presumably, the LDM) ages are much larger for a rubble surface (e.g., Cacaguat: 14.4 instead of 3.3 Ma; Rao: 133 instead of 30.4 Ma). We define "areal density" as the total number of boulders identified in and around a crater divided by the crater equivalent area, calculated as the area of a circle with the diameter of that crater (Table 1). We determined the areal density of boulders larger than 105 m in and around the craters in Table 2. Some areal densities are unreliable: boulder numbers are underestimated for craters at high latitudes, which were partly in shadow during LAMO (Shennong, Unnamed4/26/28/36/44), and boulder numbers are uncertain for craters that show post-impact modification in the form of flows (Haulani, Ikapati, Occator). The distribution of boulders around craters with a reliable density was shown in Fig. 12. We relate the boulder density to crater age in Fig. 14a, where the age ranges are spanned by the LDM and ADM estimates. The two variables are anti-correlated. There is a large degree of scatter in the data, which may be due to the fact that the boulder density also depends on latitude (see Sec. 3.3). The data suggest that the maximum boulder survival time is around 150 Ma, where we note that the age of the oldest crater in the figure (Gaue) is very uncertain. Support for this maximum age comes from craters of this age for which we did not find boulders: one such, unnamed, crater at (162°E, +78°) has an estimated age of 89-252 Ma, and two others, Messor at (234°E, +50°) and an unnamed crater at (186°E, +23°), have estimated ages of 96-192 Ma and 88-205 Ma, respectively (Scully et al. 2018). All craters estimated to be younger than 150 Ma in the papers referenced in Table 2 have boulders. Given the uncertainty in the crater age estimates due to the different models, we adopt an uncertainty of 50 Ma for the boulder survival time. Whereas the correlation with crater age is expected, we also find that the boulder density is correlated with crater size (Fig. 14b). This may at least partly be explained by the fact that large craters are, on average, older than small craters. Another explanation might be that larger craters sample different, deeper crustal layers in a stratified crust, a concept that we discuss in the next section. Basilevsky et al. (2015) estimated the survival time of meter-sized boulders on Vesta, Ceres, and the Moon by predicting the impactor velocity and density distributions: boulders on Vesta and Ceres should live equally long, but 30 times shorter than on the Moon.
(2020) found that the maximum boulder survival time on Vesta is 300 Ma, the same as that of lunar boulders (Basilevsky et al. 2013). The authors attributed this apparent contradiction to the fact that the Vesta boulders in their study are larger than 60 m, rather than meter-sized. In other words, large boulders live longer than small ones. At 150 ± 50 Ma, the survival time of the Ceres boulders in our sample is only half that of Vesta boulders, despite being larger (> 105 m). Thus, the lifetime of Ceres boulders may be less than half that of Vesta boulders of the same size.
DISCUSSION
In the previous section we described the properties of the population of large boulders on Ceres and compared them to those of the Vesta population. Our major findings are: (1) Ceres craters have, on average, more boulders than Vesta craters of the same size, (2) the largest boulders are, on average, somewhat larger for Ceres craters than Vesta craters of the same size, (3) the SFD of the global boulder population is better described by a Weibull distribution than a power law for both Ceres and Vesta, (4) boulders on Ceres are more numerous at high latitudes than at mid- and low latitudes, in contrast to Vesta, and (5) boulders have shorter lifetimes on Ceres than on Vesta. How can we reconcile these findings? Let us start with finding (3), which supports the idea that the SFD of particles on the surface of small bodies follows a Weibull distribution rather than a power law (Schröder et al. 2020). Unfortunately, uncertainties regarding the boulder populations of the largest craters on both Ceres and Vesta prevent a meaningful comparison of the Weibull parameters. To address the other findings, we need to consider how boulders degrade. The dominant mechanisms responsible for degradation are weathering due to thermal stress (Delbo et al. 2014; Molaro et al. 2017; El Mir et al. 2019) and meteorite impacts. The efficacy of weathering due to diurnal thermal cycling or thermal shock correlates with the rate of surface temperature change (dT/dt). Rapid temperature changes occur at sunrise and sunset and during daytime shadowing. On quickly rotating bodies such as Vesta and Ceres, sunrise and sunset are the main drivers of dT/dt (Molaro & Byrne 2012). Vesta and Ceres have rotation periods of 5.34 h and 9.07 h, respectively, so dT/dt at the terminator should be larger on Vesta. Moreover, the thermal cycling rate is higher on Vesta, and a boulder of the same age will have experienced more thermal cycles than on Ceres. Therefore, boulders of identical lithology would degrade faster on Vesta through thermal stress weathering. This is inconsistent with the shorter boulder lifetime on Ceres (finding 5), but consistent with the fact that boulders for a given crater diameter are larger on Ceres (finding 2). It is a different story for boulder degradation due to meteorite impacts. Because of their similar locations in the main asteroid belt, Ceres and Vesta experience similar impact regimes, in terms of impactor size distribution, flux, and velocity (Basilevsky et al. 2015). Therefore, boulders of identical lithology would degrade equally fast on both worlds through meteorite impacts. The crusts of Vesta and Ceres are of different composition, leading to differences in the compressive and tensile strengths that control the resistance to stress. Vesta's crust is mostly an assemblage of eucritic basalts and pyroxene cumulates (Prettyman et al. 2017; Schmidt et al. 2017).
On average, the materials in Ceres' crust are mechanically weaker than those in Vesta's crust, so its boulders are less resistant to stress. This applies to both thermal stress and stress caused by meteorite impacts. Given the considerable uncertainties in quantifying the effects of thermal stresses and determining thermal strain thresholds (e.g., Boelhouwers & Jonsson 2013), it is impossible to predict which of these competing effects dominates (higher thermal stress experienced on Vesta or lower resistance to stress on Ceres). Nevertheless, our finding (5) that boulders on Ceres have a shorter lifetime suggests that the lower mechanical strength of the Ceres crust is primarily responsible. The lower crustal strength would also lead to the formation of a larger crater on Ceres than on Vesta for an identical impactor. So a crater of the same diameter is, on average, younger on Ceres than on Vesta. Boulders disappear over time, and younger craters have, on average, more boulders than older craters of the same size. Therefore, craters of the same size are expected to have more boulders on Ceres than Vesta, explaining finding (1). The prevalence of large boulders at high latitudes may be explained by a higher rate of physical weathering and boulder breakdown at lower latitudes as compared to higher latitudes. Ceres has an obliquity of only 4° at present (Ermakov et al. 2017), and therefore the diurnal temperature waves are expected to be larger at lower latitudes (Hayne & Aharonson 2015). The duration of sunrise and sunset would also be shorter at lower latitudes, increasing dT/dt. Both effects would lead to relatively faster boulder breakdown by thermal stresses at equatorial latitudes, consistent with finding (4). As water ice is likely abundant in the subsurface, Ceres boulders may harbor a significant fraction of ice. Ice would be relatively stable inside these large boulders, just as it is stable just meters below the surface (Fanale & Salvail 1989), yet ice-rich boulders could be more prone to degradation by thermal stress, as fractures may be widened by sublimation, further weakening the boulder structure. The hypothesis that Ceres' boulders are rich in water ice is consistent with Rivkin et al. (2014), who considered the question why Ceres does not have a dynamical family. Large impacts on Ceres would produce escaping fragment asteroids (essentially liberated boulders), but the absence of a dynamical family led the authors to suggest that such escaped fragments are ice-rich and prematurely destroyed by sublimation. To identify other factors that may contribute to latitudinal differences in the areal densities of boulders and craters with boulders, we consider the independent global data set of floor-fractured craters. Craters with fractured floors are indicators of several possible processes, including updoming due to cryomagmatic activity, as for instance beneath Occator crater (Buczkowski et al. 2018a). Cryomagmatism may indicate the presence of a crustal column beneath floor-fractured craters that is enriched in volatiles and therefore mechanically weaker. A volatile-rich and weak target material is expected to eject boulders with these properties, which would be more susceptible to degradation. Twenty-one floor-fractured craters have been identified on Ceres (Buczkowski et al. 2018b), seven of which are marked in Table 2.
Floor-fractured craters tend to have low boulder densities, consistent with the expectation that boulders ejected from floor-fractured craters degrade faster, resulting, on average, in a lower boulder density. However, the floor-fracturing is likely related to post-impact processes, and it is not clear whether the target substrate was already weaker before the impact. Support for the hypothesis of a crust with a mechanical strength that depends on latitude comes from the observation that five of the seven floor-fractured craters in Table 2 (Azacca, Haulani, Ikapati, Occator, and Tupo) display concentric fracturing beyond the crater rim, suggestive of creep of a low-viscosity, and therefore mechanically weak, subsurface layer (Otto et al. 2019). Such craters with concentric fractures are mostly located between latitudes 46°S and 34°N. This is consistent with landslide morphology that suggests the presence of a relatively weak layer at low- to mid-latitudes (Chilton et al. 2019). This layer thins towards the poles and overlies a stronger layer, in agreement with lower temperatures at high latitudes increasing crustal viscosity and strength (Bland et al. 2016). Boulders excavated from the weaker layer would tend to degrade faster, and more boulders would be retained in the polar regions. Moreover, the subsurface ice content is lower in low- to mid-latitude regions (Schmidt et al. 2017), hence boulders there would harbor more impurities like phyllosilicates, which tend to lower the albedo, raise temperatures, and enhance sublimation (Rivkin et al. 2014). This effect would lead to faster degradation of low- and mid-latitude, less ice-rich boulders compared with polar boulders with possibly a higher ice content. Therefore, there may be a causal relationship between the boulder density and the pre-impact properties of the crust. So, while boulders may harbor water ice, the complexity of Ceres' crust, with its laterally and vertically varying properties (Bland et al. 2016; Otto et al. 2019; Park et al. 2020), precludes any definite conclusion on the observed distribution of boulder and boulder crater densities across the surface.
DATA AVAILABILITY
Dawn framing camera images are available from NASA's Planetary Data System at https://pds.nasa.gov/. Our Ceres boulder data, including maps of the boulder distribution around all craters, are available for download at https://doi.org/10.5281/zenodo.4715154.
ACKNOWLEDGMENTS
We are grateful for technical support provided by J-Ceres developer Dale Noss and his team at ASU. We thank Maurizio Pajola and an anonymous reviewer for helpful suggestions to improve the manuscript.
Table 1. All craters on Ceres with at least one boulder larger than 4 pixels (d > 140 m). Crater and boulder diameters are D and d, respectively, and α is the power law exponent of the (cumulative) boulder SFD as derived with the ML method (only for craters with n(d > 4 px) > 5).
Figure (unnumbered): Investigating the effect of measurement error on the shape of the cumulative distribution when the true power law exponent is −4. The pixel size is 35 m, and the adopted power law is only shown for sizes "measured" larger than 4 pixels (140 m, dashed line). The four curves of different color represent repeated simulations of the same population. a. Boulder sizes measured correctly. b. Sizes either correctly measured, overestimated by 1 pixel, or underestimated by 1 pixel with equal probability. c. Sizes underestimated by 1 pixel. d. Sizes overestimated by 1 pixel.
The population sizes for the four different scenarios were adjusted to achieve a cumulative number of boulders of around 1000 at 140 m.
Figure 5. General statistics of boulders associated with craters on Ceres (this paper) and Vesta (Schröder et al. 2020). a. Number of boulders larger than 140 m as a function of crater diameter. b. Diameter of the largest boulder as a function of crater diameter. The uncertainty of the Ceres boulder size derives from a measurement error of 1 pixel (Vesta boulder error bars are omitted for clarity). The empirical range given by Moore (1971) for selected lunar and terrestrial craters is shown in gray. The dashed line is the relation given by Lee et al. (1996). The inset shows the largest boulder identified on Ceres, a 500 m block on the rim of Jacheongbi crater.
Figure (unnumbered): Power law exponents for all craters with a population of at least 6 boulders larger than 4 pixels (n = 39). The observed exponents were derived by fitting a power law to the data of each crater. The best fit power law index for the observed global boulder distribution is α = −5.8 ± 0.2 (dashed line with gray confidence interval). The crater with the largest number of boulders (160) is Jacheongbi. We compare the observations to three simulations. The simulated exponents were derived by fitting randomly generated boulder distributions, assuming a Pareto distribution with α = −5.8 (dashed line), using the number of boulders in the population of each crater as input.
Figure 8. Comparison of the power law exponents for the craters Juling, Jacheongbi, Nunghui, Ratumaibulu, Unnamed11, and Unnamed17 as determined in this paper and by Schulzeck et al. (2018a). Filled symbols refer to exponents that are more reliable, being associated with populations of more than 70 boulders.
Figure (caption fragment): ...and the Vesta map has filters centered at 650 nm, 555 nm, and 438 nm in the RGB channels (Schröder et al. 2013). The center of the maps is at (lat, lon) = (0°, 0°).
Figure 11. The distribution of boulders larger than 105 m on Vesta and Ceres along latitude L. We aggregate the boulders in three latitude ranges: "low" (|L| < 30°), "mid" (30° < |L| < 60°), and "high" (|L| > 60°). a. Number density of boulders. b. Number density of craters with at least one boulder larger than 105 m. The error bars derive from Poisson statistics.
Figure 12. Spatial boulder distribution for several craters for which an age estimate is available. Green, small dots represent boulders with a size between 3 and 4 pixels (105 m < d < 140 m). Red, large dots represent boulders larger than 4 pixels (d > 140 m). The FC2 image number is indicated in the top right.
Figure (caption fragment): ...(Table 2). a. Density versus crater age. b. Density versus crater diameter. The error bars on the density were calculated assuming the number of boulders follows a Poisson distribution. The open symbols represent craters whose boulder density is unreliable, either because of uncertain boulder identifications or because the associated craters were partly in the shadow (underestimate).
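The measurement-error experiment described in the unnumbered caption above lends itself to a short Monte Carlo sketch: draw boulder diameters from a power law (Pareto) distribution whose cumulative exponent is −4, perturb the "measured" sizes by ±1 pixel, and compare cumulative counts. The pixel scale and the perturbation scheme follow the caption (scenario b); the population size is an assumption, tuned so that on the order of 1000 boulders exceed 140 m.

```python
import numpy as np

rng = np.random.default_rng(42)
PIX = 35.0    # pixel size in meters (from the caption)
ALPHA = 4.0   # cumulative power law exponent -4 -> Pareto shape 4
N = 200_000   # assumed population size (roughly 1000 boulders > 140 m)

# True sizes from a classical Pareto distribution with minimum 1 pixel:
# numpy's pareto() is the Lomax form, so add 1 to recover Pareto(xm=1).
true_d = PIX * (1.0 + rng.pareto(ALPHA, N))

# Scenario b of the caption: each size is measured correctly,
# overestimated by 1 pixel, or underestimated by 1 pixel,
# with equal probability.
err = rng.choice([-PIX, 0.0, PIX], size=N)
measured_d = true_d + err

# Cumulative number of boulders larger than each threshold.
for t in PIX * np.arange(2, 16):  # thresholds from 70 m to 525 m
    n_true = np.sum(true_d > t)
    n_meas = np.sum(measured_d > t)
    print(f"d > {t:5.0f} m: true {n_true:7d}  measured {n_meas:7d}")
```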
11,960.2
2021-05-25T00:00:00.000
[ "Physics", "Geology" ]
Chaotic Behaviors in a Nonlinear Game of Two-Level Green Supply Chain with Government Subsidies
In this paper, a two-level green supply chain composed of a manufacturer and a retailer is taken as the background. Considering the consumer's double consumption preference and the manufacturer's green product R&D investment, a differential game model of the green supply chain under the government cost subsidy strategy is constructed. Firstly, the equilibrium points of the system are solved and their stability is discussed and analyzed. Secondly, the dynamic evolution process of the Nash equilibrium under the parameters of green degree, green preference coefficient, retail channel preference coefficient, price sensitivity coefficient, and adjustment speed is described by numerical simulation. The results show that the two routes by which the system enters chaos are Flip bifurcation and Neimark-Sacker (N-S) bifurcation, respectively, as shown in the 2D bifurcation diagrams and verified in the 1D bifurcation diagrams. When the bifurcation parameters are small, the system maintains Nash equilibrium stability. If the green degree of products is increased, the green preference coefficient will also increase; on the contrary, the retail preference coefficient will decrease. The research and development cost subsidy policy can effectively improve the green degree of products and increase the sales volume of products, so as to improve the profit of supply chain members.
Introduction
In recent years, more and more attention has been paid to environmental protection in the world, and corresponding environmental protection laws and regulations have been formulated. On the other hand, consumers' awareness of environmental protection is also strengthening. These behaviors urge suppliers to implement green production, so as to reduce the environmental damage caused by people's production activities [1-3]. The concepts of green economy and ecological protection have gradually become deeply rooted in the hearts of the people, which has changed the production and business models of enterprises. Green production capacity has become an important indicator for enterprises to survive in the industry for a long time. Manufacturing enterprises improve the quality of products by using green and environment-friendly materials. Green products can not only attract the attention of consumers, so as to expand product demand and improve product competitiveness, but also reduce environmental pollution and create a good social image [4-6]. The State Council has also encouraged the green production of enterprises, guided the green consumption of the masses, and proposed to build a green manufacturing system. However, manufacturing enterprises face many obstacles in the process of green production, such as insufficient funds, difficulties in technological innovation, and insufficient support for product research and development, which dampen the enthusiasm for producing green products. Therefore, the government has formulated corresponding green subsidy policies in order to subsidize the enterprises producing green products and promote the green development of the supply chain. Many scholars have done in-depth research on the green supply chain from many angles and achieved satisfactory research results. Ghosh and Shah [7] studied a two-level supply chain composed of a manufacturer and a retailer, in which market demand is determined by the price of the product and the green degree of the product.
The relationship between the price of the product and the green degree of the product depends on the pricing between them. Li et al. [8] built a green supply chain game model under a price strategy based on the Stackelberg game. The paper mainly analyzed the advantages and disadvantages of competition between decentralized and centralized decision-making. The research results showed that when the green cost exceeds a certain critical value, manufacturers will open direct sales channels, and the price of green supply chain products under the centralized strategy is higher than that under decentralized decision-making. The literature [9-11] discussed and analyzed the impact of consumers' green preference and environmental protection awareness on green supply chain decision-making. Huang et al. [2,12] considered the pricing decision of a closed-loop supply chain with product green degree under fairness preference, and they found that green degree and fairness preference will change the retail price, wholesale price, and waste recovery rate of products and also affect the total profit of the supply chain. Many scholars have also studied the green supply chain system under government intervention [4]. For example, Sheng and other scholars [13-20] studied the impact of government subsidies on enterprise innovation. Madani et al. [21-24] considered the game model under green consumption subsidies and nongreen taxation and analyzed the influence of government fiscal strategy on the optimal strategy of supply chain members under centralized and decentralized decision-making. In this paper, we consider increasing innovation subsidies and construct effective measures that can make innovation subsidies incentives for enterprises to produce green products. In summary, the existing literature on the green supply chain has mainly focused on the green degree of products, government subsidies, and preferential policies for supply chain members. Little of this literature has established a green supply chain model based on consumers' double consumption preference, and few scholars have studied the impact of product greenness and government subsidy policies on enterprise profits from the perspective of dynamics. Therefore, this paper introduces green preference and channel preference into green supply chain management under the government subsidy strategy and constructs a green supply chain differential game model under the government R&D cost subsidy. This paper discusses the impact of double consumption preference and the government subsidy policy on product green degree, product price, and supply chain member profit.
Establishment of Model
Consider a green supply chain system consisting of a manufacturer and a retailer. The manufacturer sells through two channels: one is to use traditional channels for wholesale, and the other is to sell directly online. After comprehensive consideration of market factors, the manufacturer decides that the direct selling price and wholesale price of the product are p_d and w, respectively, and the retailer determines the retail price p_r according to the manufacturer's decision information and market demand. In addition, in order to encourage manufacturers to produce green products, the government decided to adopt the R&D cost subsidy strategy to intervene with manufacturers.
In order to facilitate the solution and the establishment of the model, we make the following assumptions about the model: ① To produce green products, manufacturers not only pay fixed production costs but also make additional investments in new equipment and technological innovation. We assume that the R&D cost of green products produced by manufacturers with greenness e (e > 0) is ke²/2, where k (k > 0) is the R&D cost coefficient of green products. In addition, it is assumed that the unit cost of the product produced by the manufacturer is c. ② Suppose that consumers have both a channel preference and a green preference. Following the description of the demand function in [7,25], we assume that the demands of the retail channel and the direct channel are linear functions and that the cross price elasticity coefficients are symmetric. Thus, the demand functions of the retail channel and the direct channel are, respectively,
D_r = θξ − αp_r + βp_d + γe, (1)
D_d = (1 − θ)ξ − αp_d + βp_r + γe. (2)
Among them, ξ represents the potential market capacity, θ (0 < θ < 1) represents the retail channel preference coefficient, γ represents the green preference coefficient, α is the price sensitivity coefficient, and β is the cross price sensitivity coefficient, with α > β > 0. In order to better analyze the characteristics of the green supply chain, referring to [26], γ²/k is defined as the efficiency coefficient of product greening. The higher the value of γ²/k, the more the public likes green products; alternatively, it can be interpreted as meaning that products of the same green quality require lower research and development costs. In order to encourage manufacturers to improve the green quality of their products, the government considers subsidy strategies for green products [7,27]. The government subsidizes the R&D cost of the manufacturer's products. Assuming that the R&D cost subsidy coefficient is ε (0 < ε < 1), the government subsidy expenditure under the R&D cost subsidy strategy is εke²/2. In order to ensure the theoretical and practical significance of the supply chain, the model also needs to meet a set of nonnegativity constraints. Therefore, when the manufacturing enterprise considers the public's green preference and channel preference and the government subsidizes the manufacturer according to the product R&D cost, the profit functions of the retailer and the manufacturer are, respectively,
π_r = (p_r − w)D_r, (4)
π_m = (w − c)D_r + (p_d − c)D_d − (1 − ε)ke²/2. (5)
By substituting equations (1) and (2) into (4) and (5), the profit functions of the retailer and the manufacturer under the dual preference and government subsidy strategy are obtained as functions of the prices alone (equations (6) and (7)). By taking partial derivatives of equations (6) and (7) with respect to p_r and p_d, respectively, we can obtain the optimal first-order conditions for the profits of the retailer and the manufacturer under dual preferences. If both the retailer and the manufacturer are boundedly rational, that is, the two sides of the game do not fully understand each other's relevant information, then they determine the price of products in period t according to the local estimate of the marginal profit ∂π_i/∂p_i (i = r, d). In other words, in period t, if ∂π_i/∂p_i > 0, then the manufacturer or retailer will increase the product price in period t + 1; on the contrary, if ∂π_i/∂p_i < 0, then the manufacturer or retailer will decrease the product price in period t + 1.
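The first-order conditions above can be checked symbolically. The sketch below uses sympy and assumes the demand and profit forms just stated; it verifies that collecting terms reproduces the shorthand A and B introduced in the next paragraph.

```python
import sympy as sp

p_r, p_d, w, c, e, k, eps = sp.symbols('p_r p_d w c e k epsilon', positive=True)
theta, xi, alpha, beta, gamma = sp.symbols('theta xi alpha beta gamma', positive=True)

# Linear demand in the retail and direct channels (equations (1) and (2)).
D_r = theta * xi - alpha * p_r + beta * p_d + gamma * e
D_d = (1 - theta) * xi - alpha * p_d + beta * p_r + gamma * e

# Profits of the retailer and the manufacturer under the R&D cost subsidy
# (the manufacturer pays only the unsubsidized share of k*e^2/2).
pi_r = (p_r - w) * D_r
pi_m = (w - c) * D_r + (p_d - c) * D_d - (1 - eps) * k * e**2 / 2

# First-order conditions; collecting terms reproduces the shorthand
# A = theta*xi + gamma*e + alpha*w and
# B = (1-theta)*xi + gamma*e + beta*w + c*(alpha - beta).
foc_r = sp.expand(sp.diff(pi_r, p_r))
foc_d = sp.expand(sp.diff(pi_m, p_d))
A = theta * xi + gamma * e + alpha * w
B = (1 - theta) * xi + gamma * e + beta * w + c * (alpha - beta)
print(sp.simplify(foc_r - (A - 2 * alpha * p_r + beta * p_d)))  # -> 0
print(sp.simplify(foc_d - (B - 2 * alpha * p_d + beta * p_r)))  # -> 0
```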
Following this marginal-profit adjustment idea, the dynamic adjustment mechanism [28,29] is introduced as p_i(t + 1) = p_i(t) + v_i p_i(t) ∂π_i/∂p_i (i = r, d), where v_i > 0 is the price adjustment speed of the retailer (i = r) and the manufacturer (i = d), respectively. Combining equations (8), (9), and (10), a two-dimensional nonlinear discrete difference equation is obtained. For the convenience of calculation, let A = θξ + γe + αw and B = (1 − θ)ξ + γe + βw + c(α − β); then equation (11) can be written in the following form:
p_r(t + 1) = p_r(t) + v_r p_r(t)[A − 2αp_r(t) + βp_d(t)],
p_d(t + 1) = p_d(t) + v_d p_d(t)[B − 2αp_d(t) + βp_r(t)]. (12)
Stability Analysis of the Model
Let p_i(t + 1) = p_i(t) (i = r, d); then we directly obtain the four equilibrium solutions of mapping (12): E_0 = (0, 0), E_1 = (A/2α, 0), E_2 = (0, B/2α), and E* = ((2αA + βB)/(4α² − β²), (2αB + βA)/(4α² − β²)). Notice that E_0 is the trivial fixed point, E_1 and E_2 are boundary fixed points, and E* is the internal fixed point. In order to guarantee that all the equilibrium points of system (12) have practical economic significance, the four equilibrium points must be nonnegative. From the definition of the parameters, we know that 0 < θ < 1, α > β > 0, ξ, γ, e > 0, and w > c > 0. Therefore, we get A > 0, B > 0, 2α + β > 0, and 2α − β > 0. Thus, it is concluded that all four equilibrium points are nonnegative, and all equilibrium points are of economic significance. For the sake of obtaining the stability conditions of the fixed points of mapping (12), the Jacobian matrix of mapping (12) at any point (p_r, p_d) is given by
J(p_r, p_d) = [ 1 + v_r(A − 4αp_r + βp_d)    v_r βp_r ; v_d βp_d    1 + v_d(B − 4αp_d + βp_r) ]. (13)
Obviously, J(E_0) is a diagonal matrix, and the eigenvalues of J(E_0) are the elements located on the diagonal, that is, λ_1 = 1 + v_r A and λ_2 = 1 + v_d B.
Proof. By substituting the equilibrium point E_1 into equation (13), the Jacobian matrix J(E_1) can be expressed as matrix (15). The eigenvalues of matrix (15) are λ_1 = 1 − Av_r and λ_2 = 1 + v_d[B + βA/(2α)]. From the definition of the parameters, we know that v_r, v_d > 0. The analysis of E_2 is analogous; in order to avoid redundancy, the proof process is not described.
Proposition 4. The Nash equilibrium E* is stable if and only if condition (19) holds. The specific forms of the trace Tr(J) and the determinant Det(J) of J(E*) lead, via the Jury conditions, to the local stability conditions (18) for the Nash equilibrium point E*. By the definition of the parameters, we know that v_r, v_d > 0, α > β > 0, and A, B > 0, so 2α − β > 0. Obviously, the second condition in the inequality system (18) holds, and the inequality group (18) can then be rewritten as the inequality set (19). Through the above analysis, the Nash equilibrium is stable if and only if the inequality set (19) is satisfied.
Fix the initial value as (0.6169, 1.4878) and select parameters θ = 0.4183, ξ = 1.4896, β = 0.0519, α = 0.597, γ = 0.5171, e = 0.4019, w = 1.1462, and c = 0.8133. Under this set of parameters, the stable region of the equilibrium point E* with respect to the price adjustment speeds (v_r, v_d) is shown as the blue area in Figure 1. In this stable region, the prices p_d and p_r of the manufacturer and the retailer converge to the Nash equilibrium after many rounds of the game. If the value of the adjustment speed v_r or v_d is increased, the point (v_r, v_d) will jump out of the stable region, and complex dynamic phenomena will occur.
Numerical Bifurcation Analysis
The complex dynamic behavior of a nonlinear dynamic system is mainly studied by numerical simulation, and the tools used in the process of numerical simulation mainly include 1D bifurcation diagrams, 2D bifurcation diagrams, Lyapunov exponent diagrams [30,31], and phase diagrams. In this section, the influence of parameter changes on the stability of the Nash equilibrium of system (12) is discussed by numerical analysis.
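A minimal numerical sketch of system (12) is given below, using the parameter values quoted above. The adjustment speeds v_r = v_d = 0.5 are an assumption chosen to lie inside the stable (blue) region of Figure 1; for such values the orbit should converge to the Nash equilibrium E*.

```python
import numpy as np

# Parameters from the stability example in the text.
theta, xi, beta, alpha = 0.4183, 1.4896, 0.0519, 0.597
gamma, e, w, c = 0.5171, 0.4019, 1.1462, 0.8133
A = theta * xi + gamma * e + alpha * w
B = (1 - theta) * xi + gamma * e + beta * w + c * (alpha - beta)

def step(p_r, p_d, v_r, v_d):
    """One iteration of map (12): marginal-profit price adjustment."""
    new_r = p_r + v_r * p_r * (A - 2 * alpha * p_r + beta * p_d)
    new_d = p_d + v_d * p_d * (B - 2 * alpha * p_d + beta * p_r)
    return new_r, new_d

# Nash equilibrium E* from the two first-order conditions.
det = 4 * alpha**2 - beta**2
E_star = ((2 * alpha * A + beta * B) / det,
          (2 * alpha * B + beta * A) / det)

# Iterate from the initial value used in the paper; for adjustment
# speeds inside the stable region the orbit converges to E*.
p_r, p_d = 0.6169, 1.4878
for _ in range(500):
    p_r, p_d = step(p_r, p_d, v_r=0.5, v_d=0.5)
print("orbit:", (round(p_r, 4), round(p_d, 4)),
      "E*:", tuple(round(x, 4) for x in E_star))
```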
The Influence of the Price Sensitivity Coefficient on the System. In order to study the effect of the price sensitivity coefficient α on the system, we select a group of parameters: θ = 0.4183, ξ = 1.4896, β = 0.0519, γ = 0.5171, e = 0.4019, w = 1.1462, and c = 0.8133. When the value of the parameter α increases gradually, 2D bifurcation diagrams of system (12) with respect to the adjustment speeds v_r and v_d are obtained, as shown in Figure 2. The different colors in the 2D bifurcation diagram represent different periods (indicated by the color bar on the right side of the bifurcation diagram). The brown region represents the Nash equilibrium stable region, namely 1-period; light green represents 2-period; dark green represents 3-period; orange represents 4-period; and so on. Black represents a quasiperiodic state or chaos, and the white area represents the escape state. These three states cannot be calculated at present. The stable region of the Nash equilibrium point with respect to the adjustment speeds v_r and v_d is shown as the brown area in Figure 2. In the parameter plane (v_r, v_d), the manufacturer's direct selling price p_d and the retailer's selling price p_r converge to the Nash equilibrium point E* after many rounds of the game. If we increase the value of v_r or v_d, it will cause the point (v_r, v_d) to jump out of the stable region. From Figures 2(a), 2(b), 2(c), and 2(d), we observe that the system has two different paths into chaos. One is that, with the increase of the parameter values v_r and v_d, system (12) successively passes through the brown region (1-cycle), green region (2-cycle), orange region (4-cycle), and light green region (8-cycle), and finally enters the black region; in other words, the system enters the chaotic state by Flip bifurcation. When the manufacturer and retailer set their respective adjustment speeds along this path, the system enters chaos through period-doubling, which means that a supply chain system composed of a manufacturer and a retailer will end up in chaos. With ever-increasing adjustment speed, the manufacturer or retailer may not survive and the supply chain will collapse. The other path is that, when the parameters (v_r, v_d) enter the black region from the brown region (1-cycle) through the green region (2-cycle), the system enters a quasiperiodic state and finally enters chaos through the Neimark-Sacker bifurcation. From these four pictures, we find that the route of the system into chaos is the same; different values of the price sensitivity coefficient α produce distinct bifurcation diagrams and rich dynamic phenomena. Combined with the 2D bifurcation diagrams and the above analysis, it is found that the price sensitivity coefficient α does not affect the bifurcation route of the system, but does change the details of the bifurcation structure. At the same time, we find that, in order for the manufacturer and retailer to obtain larger profits, the value of the adjustment speed should not be too large. Once the threshold value is exceeded, the system may collapse. Therefore, choosing a smaller adjustment speed can promote the stable development of the green supply chain system composed of the manufacturer and the retailer.
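A 2D bifurcation diagram of the kind shown in Figure 2 can be computed by iterating system (12) on a grid of adjustment speeds and classifying the attractor of each cell by its period. The sketch below is a simplified version of that procedure: period detection by recurrence within a tolerance, with escape flagged on divergence; the grid ranges, iteration counts, and tolerance are assumptions, not the exact settings used for Figure 2.

```python
import numpy as np

theta, xi, beta, alpha = 0.4183, 1.4896, 0.0519, 0.597
gamma, e, w, c = 0.5171, 0.4019, 1.1462, 0.8133
A = theta * xi + gamma * e + alpha * w
B = (1 - theta) * xi + gamma * e + beta * w + c * (alpha - beta)

def classify(v_r, v_d, n_trans=800, n_test=64, tol=1e-6):
    """Attractor period at (v_r, v_d): 1, 2, 4, ... for cycles,
    0 for quasiperiodic/chaotic motion, -1 for escape."""
    v = np.array([v_r, v_d])
    p = np.array([0.6169, 1.4878])
    for _ in range(n_trans):
        f = np.array([A - 2*alpha*p[0] + beta*p[1],
                      B - 2*alpha*p[1] + beta*p[0]])
        p = p + v * p * f
        if not np.all(np.isfinite(p)) or np.max(np.abs(p)) > 1e6:
            return -1                       # escape
    orbit = [p.copy()]
    for _ in range(n_test):
        f = np.array([A - 2*alpha*p[0] + beta*p[1],
                      B - 2*alpha*p[1] + beta*p[0]])
        p = p + v * p * f
        orbit.append(p.copy())
    for k in range(1, n_test // 2 + 1):     # smallest recurrence = period
        if np.allclose(orbit[-1], orbit[-1 - k], atol=tol):
            return k
    return 0                                # quasiperiodic or chaotic

grid = np.linspace(0.05, 1.6, 60)
diagram = np.array([[classify(vr, vd) for vr in grid] for vd in grid])
print(np.unique(diagram))  # periods present on the grid
```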
The Influence of Green Degree on the Retail Channel Preference Coefficient and the Green Preference Coefficient. In order to better analyze the influence of the green degree on the retail channel preference coefficient θ and the green preference coefficient γ, we choose a set of parameter values: v_r = 0.8587, v_d = 0.849, ξ = 1.4896, β = 0.0519, α = 0.597, w = 1.1462, c = 0.8133, and θ = 0.4183. Taking γ as the bifurcation variable, the 1D bifurcation diagrams and their corresponding maximum Lyapunov exponents are obtained as shown in Figures 3(a), 3(b), 3(c), and 3(d). When e = 0.3019, the 1D bifurcation diagram shown in Figure 3(a) is obtained. When 0 < γ < 2.007, the system is in the Nash equilibrium steady state, which indicates that the green supply chain system develops steadily. When γ ≈ 2.007, the system begins to bifurcate. When 2.007 < γ < 3.198, the system is in a periodic state. When 3.198 < γ < 3.489, the system is in a chaotic state, which indicates that the green supply chain system composed of a manufacturer and a retailer is in chaotic competition. When 3.489 < γ < 4.5, the system experiences three bifurcations from period into chaos, which indicates that the manufacturer and retailer experience three rounds of the price game. Similarly, when e = 0.4019, the 1D bifurcation diagram is shown in Figure 3(c). When 0 < γ < 2.676, the system is in the Nash equilibrium stable state. When γ ≈ 2.676, the system bifurcates. When 2.676 < γ < 4.239, the system is in a periodic state. When 4.239 < γ < 6, the system alternates between chaos and periodic bifurcation. [Figure 3 caption: bifurcation diagrams and maximum Lyapunov exponents for v_r = 0.8587, v_d = 0.849, ξ = 1.4896, β = 0.0519, α = 0.597, w = 1.1462, and c = 0.8133, where (a) e = 0.3019 and θ = 0.4183, (c) e = 0.4019 and θ = 0.4183; Figures 3(b) and 3(d) are the maximum Lyapunov exponent diagrams corresponding to Figures 3(a) and 3(c); (e) e = 2.1019 and γ = 0.5171, (g) e = 2.5019 and γ = 0.5171; Figures 3(f) and 3(h) are the maximum Lyapunov exponent graphs corresponding to Figures 3(e) and 3(g).] Figures 3(b) and 3(d) are the largest Lyapunov exponent graphs corresponding to Figures 3(a) and 3(c). The Lyapunov exponent diagram is an important tool for studying nonlinear dynamic behavior. In a nonlinear dynamical system, the Lyapunov exponents describe the convergence or divergence of adjacent trajectories and distinguish regular attractors from singular attractors. When the largest Lyapunov exponent is less than zero, we can judge that the attractor is a regular attractor. If at least one of the Lyapunov exponents is positive, then we can infer that there are singular or chaotic attractors in the system. When e = 2.1019 and e = 2.5019 (Figures 3(e) and 3(g), respectively), it can be seen that an inverse N-S bifurcation occurs first in the system, followed by a positive N-S bifurcation. It is obvious from these two diagrams that the stability interval of Figure 3(g) is smaller than that of Figure 3(e); Figures 3(f) and 3(h) are the maximum Lyapunov exponent diagrams corresponding to Figures 3(e) and 3(g), respectively. The system experiences three bifurcations from period into chaos, which indicates that the manufacturer and retailer experience three rounds of the price game. When Lyp < 0, the system is in a stable state; when Lyp = 0, the system bifurcates at this point; when Lyp > 0, the system enters a chaotic state. It can be seen from the diagrams that when the green preference coefficient γ is small and the value of the retail channel preference coefficient θ takes an intermediate value, the system will be in a stable state.
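The maximum Lyapunov exponent curves in Figure 3 can be estimated numerically by propagating a tangent vector with the Jacobian of system (12) and averaging the logarithmic growth rates. A sketch follows; the sample values of γ are assumptions chosen to span the stable, periodic, and chaotic ranges reported above.

```python
import numpy as np

theta, xi, beta, alpha = 0.4183, 1.4896, 0.0519, 0.597
e, w, c = 0.4019, 1.1462, 0.8133
v_r, v_d = 0.8587, 0.849  # adjustment speeds used for Figure 3

def lyapunov_max(gamma, n=5000, burn=500):
    """Largest Lyapunov exponent of map (12) at green preference gamma,
    estimated by propagating and renormalizing a tangent vector."""
    A = theta * xi + gamma * e + alpha * w
    B = (1 - theta) * xi + gamma * e + beta * w + c * (alpha - beta)
    p_r, p_d = 0.6169, 1.4878
    u = np.array([1.0, 0.0])
    total = 0.0
    for i in range(n):
        # Jacobian at the current point, before updating it.
        J = np.array([
            [1 + v_r*(A - 4*alpha*p_r + beta*p_d), v_r*beta*p_r],
            [v_d*beta*p_d, 1 + v_d*(B - 4*alpha*p_d + beta*p_r)],
        ])
        new_r = p_r + v_r * p_r * (A - 2*alpha*p_r + beta*p_d)
        new_d = p_d + v_d * p_d * (B - 2*alpha*p_d + beta*p_r)
        p_r, p_d = new_r, new_d
        if not np.isfinite(p_r + p_d):
            return float('nan')  # orbit escaped
        u = J @ u
        norm = np.linalg.norm(u)
        u /= norm
        if i >= burn:
            total += np.log(norm)
    return total / (n - burn)

# gamma values spanning the stable, periodic, and chaotic ranges.
for g in (0.5, 2.0, 3.3, 4.5):
    print(f"gamma = {g:.1f}: largest Lyapunov exponent ~ {lyapunov_max(g):+.3f}")
```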
We also find that if the green degree of the product is increased, the green preference coefficient will also increase; on the contrary, when the green degree of the product increases, the value of the retail preference coefficient will decrease.
The Influence of Other Parameters on the System. In order to further analyze the influence of other parameters on system (12), we select v_r, v_d, γ, w, e, and ξ, respectively, as the bifurcation variables and obtain the 1D bifurcation diagrams and the maximum Lyapunov exponent diagrams shown in Figure 4. In Figures 4(a) and 4(b), we can see that the internal equilibrium point E* is stable when the values of the parameters v_r and v_d are relatively small. However, as the value of the parameter v_r continues to increase, the internal equilibrium E* is no longer stable, and irregular dynamic behaviors suddenly appear, including N-S bifurcation, periodic windows, and chaos. In Figure 4(b), the system experiences 2-period, 4-period, and 2-period states and finally enters chaos; that is, the system enters chaos through Flip bifurcation. It can be seen from Figures 4(c), 4(d), and 4(f) that the stable 2-periodic loop loses its stability through the N-S bifurcation, and the system finally enters chaos. In Figure 4(e), the bifurcation diagram with respect to the green degree e is depicted. We can see that period-doubling bifurcation and N-S bifurcation appear simultaneously in the process of bifurcation. In addition, we also find that periodic windows appear in the bifurcation processes of Figures 3(c), 3(d), 3(e), and 3(f). In fact, a periodic window corresponds to the part of the Lyapunov exponent curve close to zero. When the Lyapunov exponent is equal to zero, the system bifurcates at this point; when the Lyapunov exponent is greater than zero, the system enters a chaotic state. It is found that when the bifurcation parameters are small, the system is in a steady state; in this case, the supply chain system composed of a single manufacturer and a single retailer develops stably. At the same time, it is found that increasing the green degree of products can make the system more stable. However, due to the limitation of research and development costs, manufacturers will not increase the green degree of products without bound to meet the green preference of consumers. In other words, increasing the green degree of products can improve the sales of products and the profits of supply chain members.
Global Dynamics Analysis of the System
In nonlinear dynamics, attention often focuses on the final state of the system, which can be expressed by its attractor. In the process of studying a nonlinear dynamic system, it is found that changes of parameters cause variation in the number and structure of attractors. Here, the influence of the cross price sensitivity coefficient on the final behavior of system (12) is discussed through the evolution process of the attractor. The parameters v_r = 0.3584, v_d = 0.4005, θ = 0.7568, ξ = 1.079, α = 1.6181, γ = 0.6934, e = 0.5, w = 2.4113, and c = 1.5383 are fixed, and β is taken as the variable to obtain the attractor evolution diagrams shown in Figure 5. When β = 0.5827, a 2-periodic attractor is observed in Figure 5(a). With the increase of the cross price sensitivity coefficient β, the 2-period attractor evolves into periodic cycles with rough edges, as shown in Figure 5(b).
When β = 0.6357, the original 2-period ring with rough edges recedes and gradually evolves into a larger 2-period ring. When the value of the parameter β continues to increase, the attractor becomes larger and larger. Until β = 0.7247, the 2-period cycles break up to form many very small attractors, a phenomenon called phase locking, as shown in Figure 5(e). The broken attractors then recombine to form two attractors that look like rings, as shown in Figure 5(f). It is found that the number and shape of the attractors change significantly when the cross price sensitivity coefficient β increases. When β = 0.7747, two strange attractors join together, as shown in Figure 5(g). With the further increase of the parameter β, the two interlocking chaotic attractors evolve into a single chaotic attractor, as shown in Figure 5(h). Finally, the system is in a chaotic state, which means that the supply chain composed of a manufacturer and a retailer is in a chaotic competition scenario. Therefore, increasing the value of the cross price sensitivity coefficient β will make the market more and more chaotic, which works against the win-win intention of the manufacturer and retailer. Choosing a smaller cross price sensitivity coefficient is therefore conducive to the common development of manufacturers and retailers.
Conclusion
In this paper, a two-level green supply chain composed of a single manufacturer and a single retailer is taken as the background. Firstly, considering the consumer's double consumption preference and the manufacturer's green product R&D investment, the game model of the green supply chain under the government cost subsidy strategy is constructed. Secondly, the dynamic evolution process of the Nash equilibrium under the parameters of green degree, green preference coefficient, retail channel preference coefficient, price sensitivity coefficient, and adjustment speed is described by numerical simulation. The results show that the two routes by which the system enters chaos are Flip bifurcation and N-S bifurcation; the N-S bifurcation is mainly discussed in this paper. When the bifurcation parameters are small, the system maintains Nash equilibrium stability, and the stability of the system is destroyed if they exceed a certain threshold value. Increasing the green degree of the product will increase the green preference coefficient, but the retail preference coefficient will decrease. The subsidy policy for R&D costs plays an irreplaceable role in encouraging the manufacturer to produce green products. Green sales can effectively improve the sales volume of products and realize win-win benefits between manufacturers and retailers.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that the research was conducted in the absence of any conflicts of interest.
6,259.4
2020-12-15T00:00:00.000
[ "Economics", "Environmental Science" ]
Human ACF1 Alters the Remodeling Strategy of SNF2h*
The human ACF chromatin-remodeling complex (hACF) contains the ATPase motor protein SNF2h and the non-catalytic hACF1 subunit. Here, we have compared the ability of SNF2h and a reconstituted hACF complex containing both SNF2h and hACF1 to remodel a series of nucleosomes containing different lengths of DNA overhang. Both SNF2h and hACF functioned in a manner consistent with sliding a canonical nucleosome. However, the non-catalytic subunit, hACF1, altered the remodeling properties of SNF2h by changing the nature of the requirement for a DNA overhang in the nucleosomal substrate and altering the DNA accessibility profile of the remodeled products. Surprisingly, addition of hACF1 to SNF2h increased the amount of DNA overhang needed to observe measurable amounts of DNA accessibility, but decreased the amount of overhang needed for a measurable binding interaction. We propose that these hACF1 functions might contribute to making the hACF complex more efficient at nucleosome spacing compared with SNF2h. In contrast, the SWI/SNF complex and its ATPase subunit BRG1 generated DNA accessibility profiles that were similar to each other, but differed significantly from those of hACF and SNF2h. Thus, we observed divergent remodeling behaviors in these two remodeling families and found that the manner in which hACF1 alters the remodeling behavior of the ATPase is not shared by SWI/SNF subunits.
Regulation of chromatin structure is crucial to many fundamental cellular processes such as replication and transcription activation and repression (1,2). The two classes of mechanisms that are believed to regulate structure are covalent and noncovalent modifications (1,3-6). Covalent modifications to chromatin include acetylation, deacetylation, methylation, ubiquitylation, and phosphorylation (4-6). Noncovalent modifications encompass several mechanisms, the most prominent of which involves ATP hydrolysis-driven, multisubunit complexes whose central subunits are ATPases of the SNF2 superfamily (1,7).
These complexes are divided into at least four subfamilies defined by the ATPase catalytic subunits Swi2/Snf2, ISWI, CHD, and Swr1. The ISWI family has the most identified members among remodeling complexes, each of which contains a small number of noncatalytic subunits. In contrast, the members of the SWI/SNF family, the second most abundant remodeling family, contain a total of 10-15 subunits. Human complexes that contain the human ISWI homolog SNF2h include RSF (remodeling and spacing factor; contains SNF2h and RSF1), WICH (WSTF-ISWI chromatin remodeling; contains WSTF and SNF2h), hACF (ATP-utilizing chromatin assembly and remodeling factor; contains ACF1 and SNF2h), hCHRAC (chromatin accessibility complex; contains SNF2h, hACF1, p15, and p17), NoRC (nucleolar remodeling complex; contains SNF2h and TIP5), and the SNF2h/cohesin/NuRD complex (8-13). This diversity in SNF2h-based complexes raises the question of the mechanistic role played by these different subunits in the process of nucleosome remodeling. The ISWI-based complexes have varied functions. For example, studies have suggested that ISWI complexes are involved in various aspects of transcription (14-19). In addition, mouse NoRC is involved in the formation of heterochromatin and silencing of rDNA (12,20), and the human WICH complex is targeted to heterochromatic replication foci (9). Furthermore, ISWI has been implicated in the maintenance of higher order chromatin structure and the facilitation of DNA replication through heterochromatin (14,21), consistent with structural data suggesting that regular spacing of nucleosomes facilitates formation of higher order nucleosomal structures (22). The ability of the Drosophila dACF and CHRAC complexes to create regular spacing of nucleosomes suggests that this is an important property of these complexes in vivo (10,23-25). Although both ISWI and the dACF complex can space nucleosomes, dACF is more efficient (23,24,26). Other data suggest a more intricate interaction between ISWI and the dACF1 subunit in dACF than simple enhancement of ISWI activity. ISWI by itself prefers to move centrally located nucleosomes to the end of the template, whereas dACF prefers to move end-located nucleosomes to the central position (27-31). The underlying mechanisms that determine these differences in function have not yet been elucidated. Differences in remodeling outcome catalyzed by the motor proteins in the ISWI and SWI/SNF complexes have been demonstrated previously (3,7,32-34). ISWI family complexes can efficiently slide nucleosomes, and their remodeling products have characteristics of canonical nucleosomes. In contrast, SWI/SNF family complexes create products with characteristics distinct from those of canonical nucleosomes and are able to efficiently create access to sites near the center of the nucleosome. These observations have led to the hypothesis that these remodeling complexes function by distinct mechanisms. Studies to date show that additional subunits in the SWI/SNF family of complexes can alter biological targeting of the complex and can increase its specific activity (35-38), but these studies have not shown significant changes caused by the non-catalytic subunits in the nature of the remodeling function of the complex. In contrast, initial data on directionality of movement have raised the possibility that partner proteins of ISWI might cause more substantive changes in remodeling outcome (27,28,30).
The major goal of this work was to measure the remodeling activity of SNF2h as an isolated protein and to compare its activity with that of the hACF complex. Previous work implied that ISWI family proteins require a nucleosomal DNA overhang for remodeling and that the length of the overhang might affect the activity of the complex in binding to nucleosomes (31,32,39-41). Therefore, we set out to investigate the remodeling efficiencies of SNF2h and the hACF complex at a series of restriction enzyme sites throughout the nucleosome using an extensive set of templates with different lengths of DNA overhang. In addition, we also compared their activities in spacing nucleosomes. Our data suggest that, although SNF2h and hACF share a dependence on the presence of a nucleosomal DNA overhang, the nature of that dependence is changed in that hACF prefers substrates with longer overhangs. Furthermore, both SNF2h and hACF activities created regularly spaced nucleosomes on chromatin that had been assembled on supercoiled plasmid. In contrast, consistent with previous data, BRG1 and BRG1-based SWI/SNF complex remodeling did not require extranucleosomal DNA overhangs. Additional SWI/SNF subunits did not appear to significantly alter the characteristics of BRG1 function, revealing an intriguing difference between the two families of complexes in the interaction between the motor protein and its binding partners.
MATERIALS AND METHODS
Construction of DNA Templates—A PstI site was engineered into different positions of the 601 template using the Stratagene QuikChange kit.
Mononucleosome Assembly and Purification—DNA fragments containing the 601 nucleosome positioning sequence (provided by the laboratory of J. Widom) were generated by PCR and body-labeled with [α-32P]dATP as required. The templates were assembled into mononucleosomes with HeLa core histones by step gradient salt dialysis, followed by purification on a 10-30% glycerol gradient (32,33,42).
Protein Purification—C-terminally FLAG-tagged SNF2h and BRG1 were expressed in Sf9 cells using a baculovirus overexpression system and purified by M2 affinity chromatography (32,38). C-terminally FLAG-tagged hACF1 (cDNA provided by P. Varga-Weisz) and untagged SNF2h were coexpressed in Sf9 cells and purified using the same system. Human SWI/SNF was affinity-purified from HeLa cells with INI1-FLAG stably integrated into the genome as described previously (44).
Nucleosome Mobility Assay—All reactions were performed in 12 mM HEPES (pH 7.9), 10 mM Tris-HCl (pH 7.5), 60 mM KCl, 8% glycerol, 4 mM MgCl2, 2 mM ATP·Mg, and 0.02% Nonidet P-40. Reactions were incubated at 30 °C for 20 min. Time course experiments were performed to show that these reactions were complete within 5 min. Reactions were stopped by addition of 157 nM ADP and 1.5 μg of salmon sperm DNA. The samples were run on 0.5× Tris acetate/EDTA 5% gels (33).
Determination of the Specific Activities of hACF and SNF2h—The specific activities of SNF2h and hACF were determined under the conditions described previously (33). 1 unit is defined as the amount of enzyme required to generate 1 pmol of PstI-accessible mononucleosomes (substrate C91-25)/min at 30 °C. Rate constants were obtained from initial rates determined by linear fits of data for the first 15% of cut substrates.
ATPase Assay—ATPase assays were performed under Michaelis-Menten conditions as described previously (32,42).
To determine the K_m of the remodelers for nucleosomal and naked DNA substrates, increasing amounts of substrate were titrated into reactions containing limiting amounts of remodeler. To assay the turnover rate of ATP by the remodelers, saturating concentrations of remodelers and nucleosomes were used.
Array Assembly—2-kilobase-pair supercoiled plasmid DNA G1E10 (10,23) was assembled into arrays with HeLa core histones by salt dialysis (45). The ratio of DNA to histone was ~1:1, and the final concentration of the array was 0.12 μg/μl.
Spacing Assay—All reactions were performed in 8 mM HEPES (pH 7.9), 8 mM KCl, 3 mM MgCl2, 3 mM ATP·Mg, 8% glycerol, 30 mM creatine phosphate, 6 ng/μl creatine kinase, and 0.4 mM EGTA at 30 °C for 3-4.5 h. The concentration of the array used was 5 μg/ml, and the concentration of the remodelers used was 1 μg/ml. After the array was incubated with each remodeler in the presence or absence of ATP, all the reactions were digested with micrococcal nuclease (Sigma) at two different concentrations (0.001 μg/μl and 0.5 ng/μl) for 5 min at 25 °C. The concentration of SNF2h and hACF used was 0.001 μg/ml each. After the reactions were stopped with 2% SDS, DNA was extracted from each reaction by proteinase K digestion (0.5 μg/μl; Sigma), followed by phenol/chloroform purification. Ethanol-precipitated DNA samples were resuspended in Tris/EDTA and resolved by 1.2% agarose gel electrophoresis in 1× Tris acetate/EDTA buffer against a 123-bp DNA ladder (Invitrogen).
RESULTS
To examine how non-catalytic subunits in a remodeling complex might alter the function of the central ATP-dependent subunit, we compared the ATPase SNF2h in isolation with the hACF complex, which contains both SNF2h and the non-catalytic subunit hACF1. We used two protocols to examine the activity of these proteins following their purification from baculovirus-infected cells (Fig. 1A). The first protocol monitors the shift in nucleosome position upon remodeling using native gel electrophoresis. This protocol has been used previously to demonstrate that mammalian complexes containing SNF2h and Drosophila complexes containing SNF2h homologs change the translational position of the nucleosome (also referred to as "sliding" of the nucleosome) (27-31). The preparations used here were able to move end-positioned nucleosomes that contained 120 bp of DNA in addition to the 147 bp that formed the nucleosome. Both SNF2h and hACF moved the nucleosome away from the end position toward the center, thus slowing the mobility of the nucleosomal fragment. As anticipated from previous work, the hACF complex had higher activity than SNF2h alone in that hACF could move nucleosomes away from the starting position at lower concentrations than SNF2h alone (Fig. 1B). The remodeling activities of SNF2h and hACF can be more readily quantified using a second protocol that measures restriction enzyme accessibility. Previously occluded restriction enzyme sites in nucleosomal DNA are made accessible when a productive remodeling event occurs (42,46).
FIGURE 1. SNF2h and hACF are comparably active, but differ in their remodeling behaviors. A, shown is a Coomassie stain of purified SNF2h and hACF (left panel) and BRG1 and human SWI/SNF (right panel). B, both hACF and SNF2h remodeled the C120 nucleosome. In the mobility shift assay, we used a titration of 50, 25, and 10 nM SNF2h and a titration of 15, 7.5, and 3.8 nM hACF to remodel <1 nM labeled C120-25 nucleosome. C, SNF2h had slightly lower specific activity compared with hACF.
In the restriction enzyme accessibility assay, 50 nM SNF2h and 15 nM hACF were used. Different amounts of unlabeled C91 mononucleosome were mixed with labeled C91-25 nucleosomes to achieve the intended nucleosome concentrations. D, SNF2h and hACF generated different distributions of products on longer templates. In the mobility shift assay, we used 50 nM SNF2h, 15 nM hACF, and <1 nM labeled substrate in all the reactions. The substrates had the following lengths of overhang: 20 bp (C20), 45 bp (C45), 91 bp (C91), and 120 bp (C120).
Therefore, the rate of remodeling can be determined by measuring the rate of restriction enzyme cleavage in the presence of a remodeler and ATP. In describing these restriction enzyme accessibility experiments, we refer to the "short end" of the mononucleosome as the end of the template that lacks an extranucleosomal DNA overhang, whereas the "long end" has a DNA overhang. The mononucleosome used here had a 91-bp overhang at the long end and a PstI site that was 25 bp away from the short end of the nucleosomal DNA and thus was occluded from cleavage upon assembly into nucleosomes. Access to the PstI site is believed to be created by moving the nucleosome by at least 25 bp onto free DNA. Both the SNF2h and hACF preparations were able to efficiently create access to the PstI site on this template, with the hACF preparation displaying 3-fold higher activity compared with the SNF2h preparation (Fig. 1C).
SNF2h and hACF Remodeling Requires Different Lengths of Nucleosomal DNA Overhang—We compared the ability of SNF2h and hACF to remodel a series of defined substrates. SNF2h is unable to open a PstI site located at position 50 of mononucleosomes (158 bp) that lack a long DNA overhang (32), but it can open this site in substrates with a 55-bp overhang (33). In addition, ISWI-based remodelers from various organisms can reposition nucleosomes on substrates whose templates are longer than those of the core mononucleosome (11,19,40,41,47,48). These previous results indicated that the nucleosomal DNA overhang might be essential for remodeling by SNF2h and hACF, prompting us to examine this issue using a wide variety of substrates. A crucial role for a DNA overhang in SNF2h and hACF remodeling might be due to any of the following mutually compatible possibilities: 1) the DNA overhang might be required as a place to reposition the remodeled nucleosome; 2) it might enhance the productive binding of remodelers to substrate; and 3) it might increase the efficiency of SNF2h and hACF in hydrolyzing ATP. To test the above possibilities and to compare the impact of overhang length on SNF2h and hACF remodeling, we used templates based upon the 601 nucleosome positioning sequence (49). This template produces a defined nucleosome position, as measured by micrococcal nuclease mapping, with each of the different DNA overhang lengths and engineered restriction sites used in this study (supplemental Fig. 1). We initially characterized the impact of overhang length on remodeling using native gel electrophoresis. Templates with 20-, 45-, 91-, and 120-bp overhangs (referred to as C20, C45, C91, and C120, respectively) were all visibly remodeled by SNF2h and hACF in an ATP-dependent manner as measured using this protocol (Fig. 1D). The remodeled species created on the C45 template was similar for both preparations and had a mobility consistent with a centrally localized nucleosome.
Despite the limited potential for change in mobility of the smallest C20 template, we observed a slightly slower mobility of this fragment following SNF2h and hACF remodeling that was also consistent with movement to a central position (Fig. 1D). Remodeling with both proteins on the C91 and C120 templates also moved the nucleosome away from an end position. Similar to previous studies comparing ISWI and Drosophila ISWI complexes (27, 28), the distribution of remodeled products differed when SNF2h and hACF were compared. On these longer templates, the predominant remodeled product of the hACF reaction was the centrally positioned and therefore slowest moving nucleosome, whereas SNF2h created multiple species of products. We concluded that both SNF2h and hACF could move nucleosomes on each of the templates tested. Although movement on the C20 and C45 overhangs appeared to be similar between the two, there were significant differences on templates with longer overhangs. One possible explanation for these observations, which is investigated further below, is that SNF2h and hACF differ in the way that overhang length affects their ability to remodel.

To determine the impact of overhang length on the rate of remodeling, we used a restriction enzyme accessibility assay to measure rates of remodeling on a series of defined substrates (Fig. 2). Experiments were done using excess remodeling enzyme over substrate, and remodeling rate constants were determined by fitting the amount of cutting observed during a time course. These experiments were performed using an excess of restriction enzyme so that restriction enzyme cutting would not be the rate-limiting step of the reaction (42, 46, 49, 50). Control experiments demonstrated that both SNF2h and hACF released nucleosomes more rapidly than remodeling occurred (see "Materials and Methods" and supplemental Fig. 2), demonstrating that substrate release is not limiting. We constructed four nucleosomal substrates that had an identically engineered PstI restriction site 18 bp from the short end of the nucleosomal DNA ("-18"). The respective lengths of the DNA overhang at the long end were 0 bp (with the substrate referred to as C-18), 20 bp (C20-18), 45 bp (C45-18), and 91 bp (C91-18). If the DNA overhang simply provides a place for nucleosomes to slide onto, then both SNF2h and hACF should be able to expose the PstI site at position 18 on all nucleosomes with overhangs of 20 bp or longer. Consistent with previous findings, SNF2h could not create access to the PstI site in a core mononucleosome; however, it could expose the PstI site in substrates C20-18, C45-18, and C91-18 (Fig. 2A). Similarly, hACF remodeling could not expose the PstI site in C-18, but surprisingly, it was also unable to create access to the PstI site in C20-18. Recall from above that both hACF and SNF2h were able to remodel the C20 template as measured by a shift in mobility on native gels (Fig. 1D). The inability of hACF to expose the PstI site on C20 might therefore be caused by an inability of hACF to slide the nucleosome sufficiently away from the central position to expose the site at position 18. The hACF complex could create access to the site in C45-18 and C91-18 (Fig. 2B). To further probe the different overhang length requirements for hACF and SNF2h, we used the restriction enzyme accessibility protocol to measure the accessibility of additional sites as a function of overhang length.
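To make the fitting step concrete, the sketch below fits a single-exponential model, fraction cut = A(1 − e^(−k_obs·t)), to a restriction enzyme cleavage time course. This is a minimal illustration only: the time points and cut fractions are invented for the example, and the single-exponential form is an assumption about the shape of such time courses, not the paper's actual data or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cleavage time course: time (min) vs. fraction of DNA cut
t = np.array([0.5, 1, 2, 4, 8, 16, 32])
cut = np.array([0.08, 0.15, 0.28, 0.47, 0.68, 0.82, 0.88])

def single_exp(t, A, k_obs):
    """Single-exponential approach to plateau A with observed rate constant k_obs."""
    return A * (1.0 - np.exp(-k_obs * t))

(A, k_obs), _ = curve_fit(single_exp, t, cut, p0=(1.0, 0.1))
print(f"plateau = {A:.2f}, k_obs = {k_obs:.3f} min^-1")
```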
We constructed five sets of templates that were defined by the lengths of their DNA overhang at the long end of the template: 0, 20, 45, 91, and 120 bp. Within each of the C, C20, and C45 sets, we made four distinct templates that contained unique PstI sites positioned 18, 25, 55, or 75 bp from the short end of the nucleosome (Fig. 2C). We also created six templates of C91 (with restriction sites at positions 18, 25, 40, 55, 64, and 75) and nine templates of C120 (with restriction sites at positions 18, 25, 40, 55, 64, 75, 94, 109, and 118) (Fig. 2C). In this manner, we created a panel of 27 templates with varying overhang lengths and distinctly positioned PstI sites. By measuring the rates of SNF2h and hACF remodeling on these templates, we confirmed as well as expanded the analysis of their differential substrate requirements (Fig. 2, D and E). Consistent with the hypothesis that these proteins create access by sliding the nucleosome, neither SNF2h nor hACF could open up the sites unless there was sufficient DNA for the repositioned octamer to form a canonical nucleosome; in addition, they both opened up sites closer to the entry point more quickly than the sites near the dyad. If we assume that the nucleosome slid toward the center of the fragment in these experiments, then we predict 0, 10, 23, 47, and 60 bp of flanking linker DNA on the remodeled products of the five substrates that were tested. This prediction is in agreement with the observed rate of PstI site exposure created by SNF2h and hACF on these substrates (Fig. 2, D and E). Their different patterns of site exposure on C20 and C45, relative to templates with longer DNA overhangs, suggest that hACF requires a longer DNA overhang compared with SNF2h for productive binding and/or remodeling.

hACF1 Alters the Interaction Interface between SNF2h and Substrates-It is possible that hACF remodels nucleosomes with a 20-bp (C20) or 45-bp (C45) overhang more slowly than nucleosomes with longer overhangs because it binds more weakly to nucleosomes with shorter overhangs. To test this possibility, we used the ATPase activities of SNF2h and hACF to examine how these enzymes compare in their ability to interact with different nucleosomal substrates. By varying the concentration of the nucleosomal substrate under Michaelis-Menten conditions, we were able to measure the apparent Km of each enzyme for each substrate, which is likely to reflect the ability of the enzyme to bind to each substrate. We found that SNF2h interacted significantly more strongly with nucleosomal substrates with an overhang of 45 bp or longer, as indicated by a much smaller Km (Table 1). This dependence on longer DNA overhangs in the SNF2h/substrate interaction was not template-specific, as we saw similar results using nucleosomes assembled on a different nucleosome positioning template (data not shown). In comparison, we found that hACF interacted strongly with all substrates in a manner that was largely independent of the DNA overhang, as indicated by a Km in each reaction that was lower than the Km seen with SNF2h (Table 1). Finally, both SNF2h and hACF interacted strongly with assembled nucleosomal arrays. The ability of SNF2h and hACF to interact with nucleosomes having different overhang lengths was mimicked by their interactions with naked DNA of lengths similar to the linkers (Table 1). Both SNF2h and hACF interacted weakly with 10-bp DNA.
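The apparent Km measurements described above can be pictured with a standard Michaelis-Menten fit of ATPase rate against substrate concentration. The sketch below uses invented rate data purely to illustrate the analysis; the numbers are not taken from Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ATPase saturation data: nucleosome concentration vs. hydrolysis rate
S = np.array([5.0, 10, 20, 40, 80, 160, 320])   # nM nucleosomes
v = np.array([11, 20, 32, 46, 58, 67, 73])      # ATP min^-1 per enzyme

def michaelis_menten(S, Vmax, Km):
    """Standard Michaelis-Menten rate law."""
    return Vmax * S / (Km + S)

(Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=(80.0, 30.0))
print(f"Vmax ~ {Vmax:.0f} ATP/min, apparent Km ~ {Km:.0f} nM")
```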
As expected from the nucleosomal data, hACF exhibited comparably strong affinity for all the other, longer substrates, whereas the affinity of SNF2h for double-stranded DNA substrates correlated with their lengths (Table 1). This is also consistent with the previous finding of stronger association between ISWI and naked DNA in the presence of non-catalytic subunits (24, 41). Taken together, it appears that hACF1 enhances the interaction between SNF2h and substrates in a way that largely abrogates the need for a DNA overhang for a stable interaction. On the basis of these results, we infer that the slower remodeling of C20 and C45 by hACF, relative to templates with longer overhangs, is not because of weak binding. This implies a role for the DNA overhang distinct from facilitating binding.

The DNA Overhang Does Not Alter the Ability of SNF2h and hACF to Hydrolyze ATP-We next determined whether the DNA overhang affected the ATPase activities of SNF2h and hACF by measuring the maximal rates of ATP hydrolysis in the presence of the different nucleosomal substrates. Consistent with previous findings (27, 28), hACF1 did not appear to significantly increase the ATPase activity of SNF2h (Table 2). Furthermore, longer DNA overhangs did not appear to increase the maximal ATPase rates for hACF and SNF2h. We calculated that SNF2h hydrolyzed 200 ATP molecules for each successful exposure and cleavage of the previously occluded PstI site on the C91-18 template and that hACF hydrolyzed 100 ATP molecules for the same event. This is similar to the figures calculated for dACF (25, 28).

hACF Exhibits More Acute Substrate Preference Compared with SNF2h-To help verify the differences in SNF2h and hACF function described above and to further compare the ability of SNF2h and hACF to productively interact with templates with differing lengths of DNA overhang, we performed a series of experiments in which the two proteins were challenged with a mixture of templates. As described above, SNF2h and hACF differed in their relative abilities to create access to the PstI sites in C45-25 and C91-25. Although SNF2h remodeling opened up the two PstI sites with comparable efficiency, hACF exposed the PstI site in C91-25 about five times more efficiently than that in C45-25 (Fig. 2, D and E). To extend this analysis, we examined what would happen if, under conditions of substrate excess, we mixed equal amounts of templates with different lengths of overhang and measured the rate of remodeling of the mixed templates by a single remodeler in the same reaction. This assay was possible because restriction enzyme cleavage of the different substrates created products of different and distinguishable sizes on a gel. We first compared SNF2h and hACF discrimination between C45 and C120. When SNF2h was titrated into an equal mixture of C45-25 and C120-25, both templates were remodeled with similar efficiency (Fig. 3A, right panel). When hACF was titrated into the reaction, we observed very little remodeling of the C45 template relative to the C120 template (Fig. 3A, left panel). To provide a quantifiable measure of the ability of SNF2h and hACF to discriminate between two mixed substrates, we performed a series of experiments in which equal amounts of unlabeled templates with two different overhang lengths were mixed and then added to a small amount of one labeled template with a restriction site at position 25.
For example, when we mixed unlabeled C45 and C91, the labeled C45-25 that was added to the reaction was remodeled by SNF2h at a rate similar to that of labeled C91-25 added to a parallel reaction (Fig. 3B, left panel). Thus, SNF2h did not discriminate between these substrates. Similar results were seen when SNF2h was tested using mixtures of C45 and C120 and of C91 and C120 (Fig. 3B, middle and right panels). hACF also remodeled C91 and C120 comparably in a mixture of C91 and C120 (Fig. 3C, right panel). However, hACF remodeled the C45 substrate ~100-fold less efficiently than either C91 or C120 in mixed reactions (Fig. 3C, left and middle panels). These experiments extend the previous results by demonstrating that hACF can discriminate between templates with different overhang lengths, favoring the template with a 91-bp overhang. This behavior differs from that of SNF2h.

SNF2h and hACF Exhibit Different Activities in Nucleosome Spacing-Creating regularly spaced nucleosomes has been proposed to be an important function for the ISWI-based remodeling family. The ability of hACF to discriminate between substrates with short overhangs and those with longer overhangs might allow hACF to create long, regularly spaced arrays. The DNA overhang might mimic the linker DNA in nucleosome spacing, causing hACF to favor remodeling near long stretches of linker DNA and to disfavor remodeling near short stretches, thereby promoting regular spacing. We tested this hypothesis using salt dialysis to assemble a given amount of nucleosomes on a supercoiled array template, thereby obtaining a population of randomly assembled nucleosomal arrays with various linker DNA lengths. We then investigated whether SNF2h and/or hACF activities could make the linker DNA lengths more uniform, therefore making the arrays more evenly spaced. After incubation of the remodeling proteins with the nucleosomes, we used micrococcal nuclease digestion to ascertain the regularity of spacing in the products. We saw that, in an ATP-dependent manner, the hACF reaction yielded a ladder of cleanly cut DNA that contained higher molecular weight bands than that of the starting product, whereas the SNF2h reaction yielded a ladder of cut DNA that was similar to that of the starting product and that had a less distinct banding pattern than that obtained in the reaction performed with hACF (Fig. 4, *). Thus, SNF2h was less able than hACF to create uniformly spaced arrays.

DNA Overhang Lengths Do Not Significantly Affect BRG1-based Remodeling-The data presented above suggest that the length of DNA overhang is critical for the ability of SNF2h and hACF to function on mononucleosomal templates and further indicate that the hACF1 subunit alters the requirement for an overhang. Two considerations prompted us to use the same set of templates to profile the pattern of remodeling of BRG1 and the BRG1-containing SWI/SNF complex. First, we wished to determine whether the ability of a remodeling subunit to alter the interaction of the core remodeling protein with DNA overhangs is a phenomenon shared between ISWI and SWI/SNF remodelers. Second, we were interested in determining whether the amount of DNA overhang had effects on BRG1 and SWI/SNF similar to those seen with SNF2h and hACF. Several previous studies have demonstrated differences in function between BRG1- and SNF2h-containing complexes.
It has been proposed that sliding of the histone octamer is the main outcome of SNF2h-based remodeling, whereas sliding is one of many outcomes of BRG1-based remodeling (3, 27, 32, 33). If BRG1-based complexes use sliding as a primary mechanism, then a simple prediction is that the extent of DNA overhang will significantly alter the rate of remodeling of centrally positioned sites, as was observed with SNF2h and hACF, because it is energetically favorable for the repositioned histone octamer to maintain contact with DNA. Using a different positioning sequence, we had previously shown that BRG1 and SWI/SNF can open sites on nucleosomes with a 55-bp overhang in a position-independent manner (33). This raised the possibility that the activity of these enzymes is not significantly affected by the length of the DNA overhang. To test this hypothesis, we measured the rates of site opening as a function of overhang length.

FIGURE 3. hACF, but not SNF2h, can discriminate between mononucleosomes with shorter DNA overhangs and those with longer overhangs. A, in the restriction enzyme accessibility assay, equal amounts of C120-25 and C45-25 were used with PstI continuously present. Three different concentrations of hACF and SNF2h were used. Fractions of the reactions were terminated at different times, deproteinized, and resolved by 8% PAGE in 1× Tris borate/EDTA. B, in the restriction enzyme accessibility assay, equal amounts of C45 and C91 (left panel), C45 and C120 (middle panel), and C91 and C120 (right panel) were used with three concentrations of SNF2h. Radiolabeled C45-25 (indicated with asterisks) was added to the C45/C91 mixture to monitor remodeling of C45 templates. In parallel, radiolabeled C91-25 (denoted with asterisks) was added to the C45/C91 mixture to monitor remodeling of C91 templates (left panel). In the next set, radiolabeled C45-25 and radiolabeled C120-25 were each added to different mixtures of C45 and C120 to monitor remodeling of C45 and C120 templates, respectively (middle panel). In the final set, radiolabeled C91-25 and C120-25 were each added to a different mixture of C91 and C120 to monitor remodeling of C91 and C120 templates, respectively (right panel). C, in the restriction enzyme accessibility assay, equal amounts of C45 and C91 (left panel), C45 and C120 (middle panel), and C91 and C120 (right panel) were used with three concentrations of hACF. The scheme used to add radiolabeled substrates to different mixtures of templates was as described for B.

We saw that BRG1 and SWI/SNF could expose all the PstI sites in C-18, C20-18, C45-18, and C91-18 at comparable rates (Fig. 5, A and B). We next proceeded to examine BRG1 and SWI/SNF remodeling at positions 18, 25, 55, and 75 on the C, C20, C45, and C91 substrates (Fig. 5, C-E). Both BRG1 and SWI/SNF were able to create access to centrally located sites on a core nucleosome based upon the 601 nucleosome positioning sequence (the C series) (Fig. 5, D and E). When we investigated remodeling rates using the full series of templates, we found that BRG1 and SWI/SNF had similar remodeling profiles, which differed from those observed with SNF2h and hACF. For example, both BRG1 and SWI/SNF opened up the PstI site at position 55 in C, C20, C45, and C91 nucleosomes at comparable rates that varied within 3-fold (Fig. 5, D and E).
This was in marked contrast to the significant increase in the rate of remodeling at position 55 seen with SNF2h and hACF as the overhang length increased (compare Fig. 5 (D and E) with Fig. 2 (D and E)). As seen previously, BRG1 and SWI/SNF could expose sites on a core mononucleosome, demonstrating a characteristic different from that of SNF2h-based remodeling. This further stresses the difference in substrate requirements, and possibly in remodeling strategies, between these two families of remodelers (32, 33). The lack of a distinct difference between the remodeling profiles of BRG1 and SWI/SNF suggested that the other subunits do not fundamentally alter the outcome of BRG1 remodeling. Taken together, the ISWI family of remodelers appears to differ from the SWI/SNF family of remodelers not only in substrate requirement, but also in the manner in which additional subunit(s) alter remodeling function.

DISCUSSION

This study has demonstrated two distinguishing characteristics of SNF2h that might pertain to the in vivo function of the ISWI family of remodeling complexes. First, SNF2h requires a DNA overhang to function, as anticipated from previous work (see below); surprisingly, addition of hACF1 changes this requirement such that a longer overhang is needed for optimal activity of the complex (Figs. 2-5). The DNA overhang is expected to be functionally related to the linker DNA between histone octamers in an array, so this enhanced sensitivity toward DNA linker length might be germane to the ability of this particular SNF2h-based complex, hACF, to space nucleosomes. Second, both SNF2h and the hACF complex have significantly different requirements for a DNA overhang compared with BRG1 and the SWI/SNF complex (Figs. 2 and 4). In addition, hACF1 appeared to alter the remodeling pattern of SNF2h more significantly than subunits in the SWI/SNF complex altered the remodeling pattern of BRG1. The latter findings further highlight the functional differences between these two families of remodeling complexes. The non-catalytic hACF1 protein interacts with the catalytic subunit SNF2h to change the requirement for a DNA overhang. On templates with 20- or 45-bp overhangs, SNF2h is more active than hACF as measured by the restriction enzyme cleavage assay (Fig. 2), and both remodelers show similar behavior as judged by changes in mobility measured by native gel electrophoresis (Fig. 1D). On templates with longer overhang lengths, hACF displays more activity in both assays. It is interesting that hACF is able to remodel the C20 template as measured by native gel electrophoresis, but not as measured by restriction enzyme access. One explanation for this is that hACF moves the C20 template sufficiently to result in changes in mobility, but not extensively enough to expose the site at position 18. This hypothesis is consistent with the pronounced ability of hACF to move nucleosomes to the center of fragments with 91- and 120-bp overhangs (Fig. 1). It is apparent from our measurement of binding affinities that a longer nucleosomal DNA overhang is essential for SNF2h to form a stable interaction with the substrates, whereas hACF1 abrogates such dependence on the DNA overhang for hACF. This suggests that the inability of hACF to remodel templates with no overhang or a short overhang is not a defect in binding substrate, but would appear to be a defect in forming a productive interaction with substrate.
This observation raises the possibility that hACF1 changes the interaction interface between SNF2h and the substrates. These experiments are consistent with and expand previous work on the ISWI family of remodelers done mainly with the Drosophila and yeast homologs. The requirement for a DNA overhang is consistent with the finding that the yeast Isw2 complex is more likely to slide a nucleosome on a template that has one or two DNA overhangs (41). The differential requirement of hACF and SNF2h for an overhang might be functionally related to the predominance of end-located nucleosomal products resulting from ISWI remodeling as opposed to the centrally located nucleosomal products resulting from dACF (and hACF, as seen here) remodeling (27, 28). The finding that SNF2h moves nucleosomes away from the ends of the templates used in the present study might indicate a functional difference between ISWI and SNF2h or might instead reflect the different templates used in this study and in the previous work investigating ISWI function (25, 27, 28). Finally, these findings are consistent with the observation that dACF increases spacer lengths in closely packed chromatin (10, 23). It is possible that the increased ability of hACF to space nucleosomes (Fig. 4) is related to the preference of this complex for longer stretches of adjacent DNA. This latter observation is consistent with previous experiments performed with ISWI family members in different organisms (32, 33, 41). Both SNF2h and hACF display low template commitment, suggesting that they release substrate quickly after binding. Because only hACF can discriminate between substrates with different overhang lengths, this quick release may allow multiple rounds of target sampling before a successful remodeling event takes place. Both SNF2h and the hACF complex show a dramatic dependence upon overhang length that is not seen with either BRG1 or the SWI/SNF complex. It has been known for over a decade that SWI/SNF family complexes are able to remodel nucleosomal substrates with no overhang (32, 51-53). The data presented here extend these studies by showing that both SWI/SNF and BRG1 do not display a dramatic change in activity at either internal sites or sites near the entry/exit point as the length of the overhang changes. This is consistent with previous hypotheses that the SWI/SNF family of remodeling proteins does not use a sliding mechanism as a primary means to open a site. These data extend the differences in behavior between the SWI/SNF and ISWI families of remodeling proteins and are consistent with the hypothesis that these two families use distinct strategies to make nucleosomal DNA accessible. In addition, we observed that SWI/SNF and BRG1 remodeling had very similar activity profiles on substrates with different lengths of DNA, suggesting that the other subunits in SWI/SNF do not fundamentally alter the remodeling outcome produced by BRG1. Our data further buttress the proposal that the SWI/SNF and ISWI families of remodelers work in fundamentally different ways, not only at the level of motor protein activity, but also at the higher level of interplay between subunits and the motor protein.
Structural Stability Monitoring of Model Test on Highway Tunnel with Lining Backside Voids Using Dynamic and Static Strain Testing Sensors Voids behind a lining may develop due to insufficient backfilling, poor workmanship, water erosion or gravity. They affect the interaction between the surrounding rock and the lining and can even cause instability of the lining structure. To ensure the safe operation of tunnels, it is very important to study the influence of voids behind the lining on the lining structure. In this paper, a laboratory model of a tunnel lining was established, taking the voids behind the lining of the Wushan Tunnel as an example. By changing the position and size of the voids, the corresponding stress variation law of the lining was obtained, and the influence of the voids behind the lining on the structural stability of the highway tunnel was analyzed. The experimental results showed that the voids behind the lining led to an increase in the stress near the voids, especially for voids at the vault. The circumferential stress and axial stress increased with increasing void depth and length, and the increase was greater with increasing void depth than with increasing length; that is, the void depth had a greater effect on the lining stress. When the vault void depth was 30 mm, the axial tensile stress of the vault was 0.281 MPa, a maximum increase of 178.2% compared with the no-void condition. The safety factors at the different lining positions, from largest to smallest, are: arch foot > arch shoulder > vault > arch waist. During lining operation and maintenance, special attention should be given to the treatment of voids behind the lining, especially deep voids.

Introduction

In recent years, as the state has strengthened its investment in infrastructure construction, China's transportation construction has undergone rapid development. At present, China is the country with the largest number of tunnel projects, the most complex structures and the fastest development speed in the world [1]. However, with the use of highway tunnels, various diseases often occur in older tunnels, among which the void behind the tunnel lining is one of the most common [2]. Zhang et al. [3] investigated about 100 railway tunnels in China and found that nearly 11.56% of the tunnels had contact loosening and cavities behind the lining. The existence of lining voids reduces the stability of the lining structure, threatens driving safety in the tunnel and shortens the maintenance cycle and service life of a highway tunnel. Therefore, it is important to analyze the influence of voids behind the lining on the structural stability of operational highway tunnels and to evaluate their safety. A lot of research on the void disease behind tunnel linings has been conducted by scholars at home and abroad. Some scholars have conducted theoretical analyses of the stratum void problem and obtained calculation formulas for the surrounding rock stress and the lining internal force when there is a void behind the lining [4-6]. In addition, some scholars have used finite element analysis software, such as ABAQUS, ANSYS and MIDAS-GTS, to study the void disease behind the lining [7-12]. Zhang et al. [13] and Bao et al. [14] established three-dimensional numerical models to study the influence of the geometric size of the tunnel void on the internal force and safety of the lining structure.
Ye [15] used numerical simulation to study the influence of voids and a loose contact state between the support and the surrounding rock on the safety of the lining structure. Li et al. used ANSYS software to study the mechanical behavior of the tunnel structure when there are cavities of different shapes and sizes behind the vault lining [16-18]. Min et al. [19] studied the mechanical characteristics of a double-arch tunnel under the action of a void at the top of the middle wall through numerical simulation. Due to the complexity of tunnel engineering and the operating environment [20], the interaction between the surrounding rock and the lining is not yet clear. If only theoretical analysis, numerical simulation and other such technical means are used, there are bound to be shortcomings because of the limitations of numerical simulation itself; for example, the assumed theoretical framework and boundary conditions may not fit reality. Therefore, some scholars have used indoor model tests to conduct further research on the voids behind the lining [21-25]. Zhang et al. [26] studied the evolution law of tunnel structure cracks under the condition of double cavities at the tunnel vault and the back of the arch through a 1:70 indoor model test. In recent years, some scholars have studied the influence of void defects behind shield tunnel composite linings on the structural mechanical characteristics and the contact pressure with the stratum [27-31]. Leung [32] simulated the initial pressure of a shield tunnel with a test device that separated the lining from the surrounding soil at different positions to simulate a void, and studied the influence of the void behind the lining on the earth pressure distribution on the tunnel lining. Some scholars have combined theoretical analysis, numerical simulation and model tests to carry out related research [2, 33-36]. Zhang et al. [37] used numerical simulations and model tests to study the safety state of the tunnel structure under the condition of double cavities behind the vault and the arch shoulder, respectively. Some scholars have also improved tunnel disease detection methods; for example, a multi-layer SAFT high-precision ultrasonic imaging method was proposed for void disease detection [38], and Yue et al. [39] put forward a method to calculate the displacements of the full cross-section of a shield tunnel. Many scholars have thus studied the stability of tunnels with engineering defects, and much of that research has investigated the influence of voids behind the lining on the safety of the lining structure. However, most of those studies have focused on numerical simulations and shield tunnel research. Too many assumptions have been made in those studies, which makes it difficult to accurately describe the development of tunnel defects under actual working conditions, and more studies are needed to verify the universality and rationality of their results. Moreover, the small size of most laboratory tests makes it difficult to prevent the influence of size effects. Therefore, in light of the complex original working conditions of the tunnel, a horizontal loading test device was designed in which jacks simulate the surrounding rock pressure. To avoid the influence of size effects, a large-scale model (1:10) was selected in this paper to simulate a typical mountain highway tunnel damage project.
The influence of the voids behind the lining on the stability of the lining structure is systematically simulated for different sizes and positions of the cavity and analyzed experimentally, providing references for the maintenance and reinforcement of tunnels with void disease.

General Information of the Tunnel Project

The Wushan Tunnel, located in the southeastern part of Gansu Province, is an important part of the Tianshui to Dingxi section of the National Highway G30, as shown in Figure 1. The upper line of the tunnel is 2.5 km long, with a maximum depth of 270 m. The tunnel mainly passes through surrounding rock of grades IV and V, and the engineering geological conditions are quite complex. As shown in Figure 2, the secondary lining prototype section of the tunnel is a two-lane four-center circle. The lining is a uniform-section concrete structure 1186 cm wide, 963 cm high and 50 cm thick. The lining section is symmetrical left and right, so only the radii and radians of the arcs on the right side of the section are marked in the figure. According to the inspection, there are 57 voids behind the secondary lining of the upper tunnel, with a cumulative length of 244.0 m, accounting for 9.8% of the total length.

Similarity-Scaling Relationship

In this paper, the geometric similarity ratio of the prototype and model was set as CL = 10.
Using this as the basic similarity ratio, we derived the similarity ratios of the prototype and model for each of the physical and mechanical parameters according to similarity theory: unit weight similarity ratio Cγ = 1, Poisson's ratio Cμ = 1, internal friction angle Cφ = 1, elastic modulus similarity ratio CE = 10 and cohesive force similarity ratio Cc = 10.

Similar Materials and Similar Models

In the safety model tests of the lining structure, gypsum was used as a material similar to plain concrete. Gypsum, as a common brittle material, is similar to concrete in its fracture mechanics, so it is an ideal elastic model material. The secondary lining of the tunnel prototype is a C25 concrete structure, and the mechanical parameters were set according to the actual engineering values: the elastic modulus was 28 GPa, the ultimate compressive strength was 16.7 MPa, and the ultimate tensile strength was 1.78 MPa. In this paper, a mixture of gypsum and water was used to simulate the lining structure. To obtain the physical and mechanical parameters of the model lining material, direct shear tests and compression tests were carried out, and the proportioning of water and gypsum was adjusted according to the similarity ratios to a final mixture of water:gypsum = 0.6:1. The physical and mechanical parameters of the model are shown in Table 1. Except that the bulk density of the specimens was larger than that of an ideal similar material for C25 concrete, a difference that has little impact on the content of this study, the other physical and mechanical parameters of the model material basically meet the test needs.
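Because all stress-like quantities scale by CE = 10 and all lengths by CL = 10, the target model parameters can be read directly from the prototype values quoted above. The short sketch below simply applies those ratios; the dictionary keys are illustrative names, not entries from Table 1.

```python
# Scale prototype C25 lining properties to the 1:10 model using the similarity
# ratios given above (CL = CE = 10; unit weight, Poisson's ratio and internal
# friction angle are unchanged).
C_L, C_E = 10.0, 10.0

prototype = {
    "elastic_modulus_GPa": 28.0,
    "compressive_strength_MPa": 16.7,
    "tensile_strength_MPa": 1.78,
}

model = {name: value / C_E for name, value in prototype.items()}
print(model)  # e.g. elastic modulus 2.8 GPa, compressive strength 1.67 MPa

thickness_model_m = 0.50 / C_L  # prototype lining thickness 50 cm -> 5 cm
print(f"model lining thickness = {thickness_model_m:.2f} m")
```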
The material ratio of the tunnel secondary lining structure similarity model was water:gypsum = 0.6:1, and the specific parameters are shown in Table 1. As shown in Figure 3, the lining was prefabricated in a mold and cured under controlled temperature and humidity conditions after demolding. All model section sizes were 1/10 of the prototype according to Figure 2; that is, the model geometry was 1/10 of the prototype. The thickness of the lining model was 0.05 m, the span was 1.19 m, the height was 0.96 m, and the axial length was 0.45 m.

Tunnel Similarity Model

The self-made indoor model platform consists of the similar lining model, a loading system and a data monitoring system. The tunnel lining-soil complex was used to simulate the actual working conditions, and a horizontal loading mode was adopted. The test system could simulate the dead-weight stress field of the tunnel lining, and the surrounding rock was simulated by filling with clay so that the lining was under uniform stress. The outer boundary of the soil layer was enclosed by a 1 cm thick steel plate, the pressurizing system was composed of jacks, and the counter-force frame was welded from I-beam steel to simulate the surrounding rock and provide the reaction force. The entire model diagram is shown in Figure 4. An actual picture of the model is shown in Figure 5.

Loading System

The loading system consists of pressure jacks and a reaction steel frame, as shown in Figure 4. The test jacks were FCY-10100 hydraulic jacks, with a specification of 10 t horizontal loading. Each jack was equipped with a CP-180 manual pump, a 1.5-m oil pipe and a pressure gauge. The vertical uniform pressure on the lining structure was simulated by jacks J3 and J4 located at the vault of the tunnel lining model. The horizontal distributed pressure on the tunnel lining structure was simulated by J1 and J2 on the left side of the tunnel lining and J5 and J6 on the right side of the tunnel lining.
The jacks converted the point loads into a uniform load acting on the lining structure through the 1 cm thick steel plate at the front end and the silty clay medium between the steel plate and the lining structure. The earth pressure sensors P1, P2, P3 and P4 were fixed, respectively, at the left arch waist, the left side of the vault, the right side of the vault and the right arch waist on the surface of the lining structure and connected to the DH5956 dynamic signal test and analysis system. Accurate loading of the tunnel lining structure was achieved based on the readings of the earth pressure sensors. The lower parts of the jacks were in contact with the reaction frame, and the model was loaded by the reaction force provided by the frame.

Data Monitoring System

The data monitoring system consists of several strain gauges, earth pressure gauges, two DH3817 dynamic and static strain testing systems, and one DH5956 strain collection analyzer. This test mainly monitors the stress change of the tunnel lining under the action of ground stress and uses 120-50AA resistive strain gauges for measurement. As shown in Figure 4, strain gauges S1-S6 were pasted on the left wall, left arch waist, left arch shoulder, vault, right arch shoulder, right arch waist and right wall on the inside of the lining structure. A group of strain rosettes was added at the void location when a void behind the lining was tested. Figure 6 shows the working principle of the DH3817. The system can perform sampling, transmission, storage and display at the same time and can use a large-capacity computer hard disk to record multi-channel signals for a long time without interruption. In this paper, a DH3817 dynamic and static stress and strain measurement and analysis system (Taizhou, Jiangsu Province, China) was used to collect the strain gauge data.
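For a reading taken with orthogonal gauges, strains can be converted to circumferential and axial stresses under a plane-stress, linear-elastic assumption. The sketch below is illustrative only: the modulus follows from the similarity relation (28 GPa / 10 = 2.8 GPa), Poisson's ratio 0.2 is an assumed value for the gypsum material, and the microstrain readings are invented.

```python
E = 2800.0   # MPa, model lining modulus assumed from the similarity relation
mu = 0.2     # assumed Poisson's ratio of the gypsum model material

def biaxial_stress(eps_circ, eps_axial):
    """Plane-stress conversion of two orthogonal strains to stresses (MPa)."""
    factor = E / (1.0 - mu ** 2)
    sigma_circ = factor * (eps_circ + mu * eps_axial)
    sigma_axial = factor * (eps_axial + mu * eps_circ)
    return sigma_circ, sigma_axial

# Hypothetical gauge readings at the vault (dimensionless strain)
print(biaxial_stress(170e-6, 20e-6))  # -> approximately (0.51, 0.16) MPa
```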
Pressure of Model Test

The Wushan Tunnel is a deeply buried long tunnel in the mountains, and there is no bias pressure or swelling force in the surrounding rock. This test therefore only considers the simulation of the ground stress conditions of the tunnel under the deeply buried condition. According to the Highway Tunnel Design Code Volume I Civil Engineering (JTG 3370.1-2018) [40], when the surrounding rock shows no significant bias pressure or swelling force, the vertical and horizontal uniform pressures of the loose load in a deep tunnel can be calculated as follows. The vertically distributed pressure is

q = γh, h = 0.45 × 2^(s−1) × ω

where q is the vertical uniform pressure, kN/m²; γ is the unit weight of the surrounding rock, kN/m³; h is the calculated height of the surrounding rock pressure, m; s is the grade of the surrounding rock, with integer values of 1, 2, 3, 4, 5 and 6; and ω is the width influence coefficient, calculated as ω = 1 + i(B − 5), where B is the tunnel width, m, and i is the rate of increase or decrease of the surrounding rock pressure when the tunnel width increases or decreases by 1 m, as shown in Table 2. From the design data of the upper line of the Wushan Tunnel, i = 0.12. The horizontal surrounding rock pressure of a deep tunnel can be taken according to Table 3, and the horizontally distributed pressure in this test is set as e = 0.5q. According to the geological conditions of the Wushan Tunnel, the surrounding rock unit weight is 20 kN/m³, the vertically distributed pressure of the surrounding rock is q = 0.263 MPa, and the horizontally distributed pressure is e = 0.5q = 0.132 MPa.

Test Loading Scheme

This model test was mainly based on the diseases of the Wushan Tunnel. According to the possible locations and sizes of the voids behind the tunnel lining, a step-by-step loading method was adopted to simulate the following cases: voids at different positions behind the lining, voids of different depths behind the vault and voids of different lengths behind the vault. Nine test conditions were set up to study the stress characteristics of the lining with void defects, as shown in Table 4.
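The surrounding rock pressure used for loading can be checked numerically from the code formula above. In the sketch below, the grade s = 5 is an assumption consistent with the grade IV-V rock described; the other inputs are the values quoted in the text.

```python
gamma = 20.0   # kN/m^3, unit weight of surrounding rock
s = 5          # surrounding rock grade (assumed grade V)
i = 0.12       # pressure change rate per 1 m of width (Table 2)
B = 11.86      # m, tunnel span

omega = 1 + i * (B - 5)          # width influence coefficient
h = 0.45 * 2 ** (s - 1) * omega  # m, calculated load height
q = gamma * h / 1000             # MPa, vertical distributed pressure
e = 0.5 * q                      # MPa, horizontal distributed pressure
print(f"omega = {omega:.3f}, h = {h:.2f} m, q = {q:.3f} MPa, e = {e:.3f} MPa")
# -> q ~ 0.263 MPa and e ~ 0.131 MPa (rounded to 0.132 MPa in the text)
```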
Voids of Different Depths behind the Vault

In working conditions 2, 3 and 4, voids with depths of 1 cm, 2 cm and 3 cm were successively set on the outer wall of the vault, as shown in Figure 7a-c. In addition, seven groups of strain gauges were affixed to the left wall, left arch waist, left arch shoulder, vault, right arch shoulder, right arch waist and right wall on the inner side of the lining structure according to the scheme, and one group of strain gauges was affixed to the void on the outer wall. The pressure system was controlled to slowly pressurize the lining until the vault pressures P2 and P3 both reached the vertically distributed pressure q of 0.263 MPa and the arch waist pressures P1 and P4 on both sides reached the horizontally distributed pressure e of 0.132 MPa, at which point loading was stopped.

Analysis of Test Results

By controlling the loading system, the lining was slowly pressurized until P2 and P3 reached the vertically distributed pressure q = 0.263 MPa and P1 and P4 reached the horizontally distributed pressure e = 0.132 MPa. The stress-strain data of the lining under the different working conditions were thus obtained. From the stress-strain data and morphological changes of the tunnel lining under the nine groups of test conditions, the influence of voids on the stability of the lining structure under different working conditions was analyzed.

Stress Analysis of Tunnel Lining without Void

When there was no void behind the lining, the external surface of the lining was under compression, and the internal surface was under tension, taken as positive (the same below). Figure 9 shows that under the condition of no void defect, the stress values at the lining vault and the arch shoulder are positive tensile stresses, while the stress values at the arch waist and the arch foot are negative compressive stresses. Whether in tension or compression, the circumferential stress was generally greater than the axial stress at the same monitoring point. The inner circumferential tensile stress of the vault was 0.506 MPa, and the inner axial tensile stress was 0.101 MPa. The peak values of circumferential and axial tensile stress appeared at the vault, while the peak values of circumferential and axial compressive stress appeared at the arch waist, which can explain why the inner wall of the vault is mainly damaged by tension and the inner wall of the arch waist is mainly damaged by extrusion.
Stress Analysis of Cavities at Different Positions

The size of the void was controlled to be 50 mm × 20 mm × 10 mm (length × width × depth), and the position of the void behind the lining was changed in turn to the vault, arch shoulder, arch waist and arch foot in order to obtain the circumferential and axial stress values at different positions of the lining, as shown in Figures 10 and 11. (1) When the void was located at the vault, the circumferential tensile stress of the vault was 0.604 MPa, which increased by 19.37% compared with the no-void condition; the axial stress of the inner wall was 0.142 MPa, an increase of 40.59% compared with the no-void condition. (2) When the void was located at the arch shoulder, the arch waist or the arch foot, the circumferential and axial stress values of the lining did not change much at the same monitoring positions compared with the lining without a void. This shows that the void has the greatest influence on the stress of the lining structure when it is located at the vault but has little influence when it is located at the other positions.
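The percentage increases quoted above follow directly from the measured stresses; the check below reproduces them from the no-void baseline (0.506 and 0.101 MPa) and the vault-void values (0.604 and 0.142 MPa).

```python
baseline = {"circumferential": 0.506, "axial": 0.101}     # MPa, no void
vault_void = {"circumferential": 0.604, "axial": 0.142}   # MPa, void at vault

for component, base in baseline.items():
    increase = (vault_void[component] - base) / base * 100
    print(f"{component}: +{increase:.2f}%")
# -> circumferential: +19.37%, axial: +40.59%, as reported above
```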
Stress Analysis of Vaults with Different Void Depth

During the test, the length × width of the void behind the lining was kept unchanged at 50 mm × 20 mm, and the depth was changed to 10 mm, 20 mm and 30 mm, respectively. As shown in Figures 12 and 13: (1) When the depth of the vault void was 20 mm, the circumferential tensile stress of the inner wall of the vault was 0.715 MPa, which was 41.3% higher than that without a void; the axial tensile stress of the inner wall of the vault was 0.165 MPa, an increase of 63.4% compared with that without a void. (2) When the depth of the vault void was 30 mm, the circumferential tensile stress of the vault was 0.802 MPa, an increase of 58.5% compared with that without a void; the axial tensile stress of the vault was 0.281 MPa, an increase of 178.2% compared with that without a void. Both the circumferential stress and the axial stress of the inner wall of the lining vault increased with increasing void depth, and the increase in the axial stress was larger. (3) As the depth increased to 10 mm, 20 mm and 30 mm, the circumferential stresses at the void on the outer wall of the corresponding lining vault were −0.613 MPa, −0.787 MPa and −0.862 MPa, respectively, and the axial stresses at the void were −0.0564 MPa, −0.0712 MPa and −0.13 MPa, respectively; that is, both the circumferential stress and the axial stress at the void on the outer wall of the lining vault increased in magnitude with increasing void depth. In summary, an increase in the void depth at the vault has a pronounced impact on the stresses of both the inner and outer walls of the lining vault.
Stress Analysis of the Vault with Different Void Lengths

During the test, the width × depth of the void behind the lining was kept unchanged at 20 mm × 10 mm, and the length of the void was changed to 50 mm, 100 mm and 150 mm, respectively. As shown in Figures 14 and 15: (1) When the vault void length was 100 mm, the circumferential tensile stress of the vault was 0.655 MPa, an increase of 29.4% compared with that without a void; the axial tensile stress of the vault was 0.152 MPa, an increase of 50.5% compared with that without a void. (2) When the vault void length was 150 mm, the circumferential tensile stress of the vault was 0.708 MPa, 39.9% higher than without a void; the axial tensile stress of the vault was 0.182 MPa, an increase of 80.2% compared with the no-void case. Both the circumferential and axial stresses on the inner wall of the lining vault increased with the length of the void, with the axial stress increasing faster. (3) As the length increased to 50 mm, 100 mm and 150 mm, the circumferential stresses at the void behind the lining vault were −0.613 MPa, −0.63 MPa and −0.662 MPa, respectively, and the axial stresses at the void were −0.0564 MPa, −0.047 MPa and −0.0508 MPa, respectively. The circumferential and axial stresses at the void behind the lining vault changed little as the void length increased; that is, the stress at the void behind the lining vault is not sensitive to the void length. In addition, the stresses at other locations of the lining were not greatly affected by changes in the void length.

Figure 14. Variation of circumferential stress with vault void length.
Figure 15. Variation of axial stress with vault void length.
Variation Law of the Axial Force and Bending Moment of the Lining

According to the inner stress value σ1 and the outer stress value σ2 of the lining section, the bending moment M and axial force N per unit length of section can be calculated as [41]:

M = bh²(σ1 − σ2)/12  (3)
N = bh(σ1 + σ2)/2  (4)

where σ1 is the stress on the inner wall of the lining; σ2 is the stress on the outer wall of the lining; b is the unit length, taken as 1000 mm; and h is the lining thickness, taken as 50 mm.

China's Code for Design of Highway Tunnels (JTG 3370) and Code for Design of Railway Tunnels (TB 10003-2005) both provide clear calculation formulas and measurement standards for the safety factor of tunnel linings. When the calculated safety factor K ≥ 2, the steel bar does not reach its ultimate strength and the concrete does not reach its ultimate compressive or shear strength, so the structure is relatively safe. In contrast, K < 2 means that the steel bar has reached its ultimate strength or the concrete has reached its ultimate compressive or shear strength; the safety and stability of the steel bar and concrete are then insufficient, and the secondary lining structure needs to be further strengthened. For a section in compression, the safety state of the concrete is judged by

KN ≤ φαRa bh  (5)

where K is the safety factor; N is the axial pressure, kN; b is the width of the section, m; h is the thickness of the section, m; Ra is the ultimate compressive strength of the concrete or masonry, Ra = 19 MPa; φ is the longitudinal bending coefficient of the member, φ = 1; and α is the eccentric influence coefficient of the axial force, where, because the eccentricity is 0, α = 1.
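To make the bookkeeping concrete, the sketch below evaluates Formulas (3)-(5). The stress inputs are example values of the magnitude measured in the tests, not a specific reported case, and the formulas are as reconstructed above.

```python
# Illustrative evaluation of Formulas (3)-(5); inputs are example values only.
b  = 1.0      # section width, m (unit length)
h  = 0.05     # lining thickness, m
Ra = 19e6     # ultimate compressive strength of concrete, Pa
phi, alpha = 1.0, 1.0  # longitudinal bending and eccentricity coefficients

sigma1 = -0.50e6  # inner-wall stress, Pa (compression negative)
sigma2 = -0.80e6  # outer-wall stress, Pa

M = b * h**2 * (sigma1 - sigma2) / 12.0  # bending moment per unit length, N*m  (3)
N = b * h * (sigma1 + sigma2) / 2.0      # axial force per unit length, N       (4)

# Safety factor from K*N <= phi*alpha*Ra*b*h, using the axial pressure |N|:    (5)
K = phi * alpha * Ra * b * h / abs(N)

print(f"M = {M:.1f} N*m, N = {N/1e3:.1f} kN, K = {K:.1f}")
# -> M = 62.5 N*m, N = -32.5 kN, K = 29.2
```

For stresses of this magnitude K is far above 2, consistent with the observation below that all measured safety factors meet the K ≥ 2 requirement.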
The Variation of Stress

The void size was controlled to be 50 mm × 20 mm × 10 mm (length × width × depth), and the void behind the lining was set at the vault, spandrel, hance and arch foot in turn. The stress values of the inner and outer walls of the lining at different positions were obtained, as shown in Figure 16. As can be seen from the figure, except for the tensile stress on the inside of the vault and the spandrel, the other positions are subject to compressive stress, which is negative. Numerically, the outer stress is greater than the inner stress at the same monitoring point because of the stress concentration at the void behind the lining. When the void was at the vault, the inner and outer stress values were the largest.

The Variation of Axial Force and Bending Moment

The variation laws of the axial force and bending moment of the structure can be obtained using Formulas (3) and (4), as shown in Figure 17. It can be seen from the figure that the axial force of the lining structure varies similarly to the stress of the outer lining wall at the void, with the maximum value located at the arch waist. The bending moments in descending order are vault > spandrel > hance > arch foot, so the maximum bending moment of the lining structure was located at the vault.
Variation of the Lining Safety Factor

The lining safety factor can be obtained using Formula (5). The variation law of the lining safety factor with void position is shown in Figure 18, and it is similar to that of the lining axial force. All safety factors are greater than two, meeting the requirements of the safety standards; in descending order they are: arch foot > spandrel > vault > hance.

Conclusions

In this paper, a large-scale 1:10 indoor lining model was made using a self-made horizontal test loading device to simulate the influence of voids of different locations and sizes behind the lining on the internal forces of the lining structure. The main conclusions are as follows: (1) In terms of the stress law of the tunnel lining structure, the circumferential stress was generally greater than the axial stress at the same monitoring point; the peaks of the circumferential and axial tensile stresses appear at the vault, and the peak of the compressive stress appears at the arch waist. This explains why the vault position was mainly damaged by stretching, while the arch waist position was mainly damaged by extrusion. (2) In terms of the influence of the void position on the lining structure, when the void was at the vault the stress change was more obvious, whereas when the void was at the arch shoulder, the arch waist or the arch foot, the circumferential and axial stress values at the same monitoring position changed little compared with the lining without a void. That is, a void at the vault has the greatest impact on the stress of the lining structure, while a void at other positions has little effect. The presence of void defects can lead to varying degrees of increased stress near the void location, and as the void position changes, the stress values at the void in descending order are: vault > arch waist > arch shoulder > arch foot.
11,662
2023-01-26T00:00:00.000
[ "Engineering" ]
Electro-Quasistatic Animal Body Communication for Untethered Rodent Biopotential Recording

Continuous multi-channel monitoring of biopotential signals is vital in understanding the body as a whole, facilitating accurate models and predictions in neural research. The current state of the art in wireless technologies for untethered biopotential recordings relies on radiative electromagnetic (EM) fields. In such transmissions, only a small fraction of the energy is received since the EM fields are widely radiated, resulting in lossy, inefficient systems. Using the body as a communication medium (similar to a 'wire') allows for the containment of the energy within the body, yielding order(s) of magnitude lower energy than radiative EM communication. In this work, we introduce Animal Body Communication (ABC), which brings the concept of using the body as a medium into the domain of untethered animal biopotential recording. This work, for the first time, develops the theory and models for animal body communication circuitry and channel loss. Using this theoretical model, a sub-inch³ [1″ × 1″ × 0.4″], custom-designed sensor node is built using off-the-shelf components, capable of sensing and transmitting biopotential signals through the body of the rat at significantly lower power compared to traditional wireless transmissions. In-vivo experimental analysis proves that ABC successfully transmits acquired electrocardiogram (EKG) signals through the body with correlation > 99% when compared to traditional wireless communication modalities, with a 50× reduction in power consumption.

Multi-channel recording with wireless power transfer is being implemented; this, coupled with smart devices and experimental arenas, permits in-sensor analytics. The evolution and a detailed comparison of the state of the art in biopotential recording are described in a later section. Biopotential signals, both non-invasive (skin surface) and invasive, have been studied as a means of building bio-electronic medical devices. The central nervous system controls the body, and this control can be observed by studying changes in peripheral physiological factors such as heart rate, muscle activity, and breathing. To study these changes, long-term monitoring of these physiological signals is necessary 7 . EKG is one of the most widespread diagnostic tools in medicine, and the similarity between human and rat EKG 8 has permitted the study of various physiological conditions and cardiac diseases 9,10 . Along with EKG signals, other surface biopotentials such as sEMG and EEG are studied in rats; analysis of these signals is used in sleep studies, epilepsy, locomotive analysis, and the study of the effects of spinal cord injuries 11,12 .
The study of the brain along with the body is essential in understanding the control mechanisms of the brain over physiology. Sican Liu described a novel neural interface system for simultaneous stimulation and recording of EEG/EMG and ENG (electroneurogram) signals 13 . Along with surface biopotential signals, invasive recording allows for localized, high-fidelity signal analysis. Neural biopotential signal analysis is a topic of extensive research in experimental neuroscience, with the aim of improving the quality of life of people with severe sensory and motor disabilities. Wireless neural recording systems have been described in insects, rodents and non-human primates. In rodents particularly, various neural interface systems, including systems with bidirectional communication, have been explored 14,15 . Application-specific integrated circuits (ASICs) for neuro-sensing applications have been described for implantable neurosensors 4,16-18 . Chronic multi-channel neural recording is a powerful tool in studying dynamic brain function. Multi-electrode arrays permit recording of more than one channel simultaneously, enabling neuroscientists to explore different regions of the brain in response to a particular stimulus. Bandwidth constraints limit the number of channels that can be recorded simultaneously, resulting in a trade-off between the number of channels, the power requirements, and the form factor of the device. For example, Borton et al. designed an implantable hermetically sealed device capable of sending neural signal information via a wireless data link to a receiver placed 1 m away; this system permitted 7 h of continuous operation 17 . Chae et al. describe a 128-channel, 6 mW wireless neural recording IC with on-the-fly spike detection for one selected channel; a sequential turn-on method is used to minimize the power requirement 19 . Similarly, Miranda et al. developed a 32-channel system that can be used for 33 h continuously but requires two 1200 mAh batteries 20 . To achieve a meaningful experimental duration, the power consumption is often > 10 mW, generally dominated by the communication (radio) power. Thus, it is evident that wireless neural interfaces are power-hungry, and there is a need for constant replacement of the batteries or for selective channel recording in a chronic setting. To overcome these constraints, wirelessly powered neural interfaces were developed, which eliminate the need for constant battery replacement. Implantable devices, in particular, need wireless powering to remove the need for a battery at the implant site. Enriched experimental arenas allow for the constant transmission of power, facilitating chronic recordings. Yeager et al. developed a wireless neural interface, NeuralWISP, capable of sending neural information over a 1-m range 21 . Lee et al. describe an EnerCage-HC2 to inductively transfer power to a 32-channel implantable neural interface 4 . Though wireless power transfer ensures a longer experimental duration, one has to take into account the exposure to high electromagnetic fields along with concerns regarding excessive heat dissipation. Thus, it is evident that neural recordings are limited by size constraints and overall power consumption. This leads to the next advancement in wireless biopotential recording: electro-quasistatic animal body communication, which aims to use the animal body as the transmitting medium, similar to the concept of human body communication.
In the following section we describe the concept of body communication and also describe how ABC differs from HBC while retaining similar advantages.

Body Communication Basics. Body-communication-based wearable technology has gained prominence over recent times as a communication modality for sending real-time information. Recent advances in using the human body as a channel for bio-physical communication have resulted in an energy-efficient, secure information exchange modality 23 . HBC was first proposed as a method to connect devices on a Personal Area Network (PAN) by Zimmerman 24 , using a capacitively coupled HBC model where the return path is formed by the electrode-to-ground capacitance. The transmitter capacitively couples the signal into the human body, which is then picked up at the receiver end. In galvanic-coupling-based HBC, introduced by Wegmueller et al. 25 , the signal is applied and received differentially by pairs of electrodes at the transmitter and receiver, respectively. HBC utilizes the conductivity of the human body for a low-transmission-loss, high-efficiency transmission modality, making it ideal for energy-constrained devices. Traditional wireless body area networks (WBAN) use EM signals that radiate outside the body all around us, resulting in only a fraction of the energy being received. Because of this radiative nature and the high frequencies involved, WBAN transmissions are typically energy-hungry, of the order of 10 nJ/bit 26 . Recent advances have shown impulse-radio ultra-wideband (IR-UWB) to be more energy efficient than traditional WBANs, with an energy of 1 nJ/bit 27 . If instead the body's conductivity is used, it provides a low-loss broadband channel that is private (the full bandwidth is available for communication). This low loss and wide bandwidth, along with low-frequency operation, enable ultra-low-power body communication at 415 nW 28 as well as very low-energy communication at 6.3 pJ/bit 29 . Low-frequency HBC was not widely adopted due to the high loss at these frequencies caused by resistive (50 Ω) termination 30 . Recently we demonstrated that, by using capacitive termination, the loss in the EQS region is reduced by a factor of > 100, making it usable 29,31 . The first bio-physical model for EQS-HBC was developed by Maity et al. 22 , and a detailed understanding of the forward path 32 and return path 33 was described. Datta et al. 34 describe an advanced biophysical model to capture channel variability. EQS-HBC is presently the most promising low-power, low-frequency communication alternative for WBAN. It has also been shown that EQS-HBC adheres to the set safety standards 35 . The state of the art in body communication has been restricted to the human body. In this work we propose to utilize the recent developments in the concept of body communication and apply them to the animal body for biopotential and neural recordings, reducing the size, weight, area, and power of the device. We propose capacitive-termination EQS communication from a sensing node on the rat's body and also devise an experimental arena to pick up these EQS signals most efficiently. This form of communication utilizes electro-quasistatic transmission through the conductive layers of the rat below the skin surface. The skin is a high-impedance surface, while the inner tissue layers are conductive. The transmission of electro-quasistatic signals through the body with a capacitive return path at frequencies below 1 MHz ensures that the signal is contained within the body.
Animal Body Communication - Biophysical Theoretical Model. As already established, human body communication has been explored as a viable communication model; extending it to an animal body allows for a low-loss, efficient channel model compared to the traditional wireless modalities currently used. Figure 2a,b depicts the concept of Animal Body Communication: the rat body capacitively couples with the signal plane, and the transmitter placed on the body of the rat modulates this electric field to transmit OOK (on-off keying) sequences corresponding to the sensed biopotential signal. The experimental arena is designed such that the animal moves around on a conductive surface, which is isolated from the earth's ground. This surface picks up the EQS signals coupled onto the animal's body, which are received through a ground-referenced receiver. The received voltage is inversely proportional to the capacitance of the signal plane to ground (the lower this capacitance, the easier it is for the wearable device on the animal to modulate the potential of the animal body and the surface). The circuit model for Animal Body Communication is described in Fig. 2c. At low frequencies, the skin impedance and the series body and foot impedance are negligible compared to the impedance of the capacitance between the signal plane and ground. Given the operation of ABC in the electro-quasistatic regime, these impedances can be neglected in the computation of the channel loss; the simplified circuit model then relates the output voltage V_o to the input voltage V_In through a capacitive division set by the return-path and signal-plane-to-ground capacitances. The human body has a much larger surface area when compared to an animal. In this ABC setup, the sensor node is placed on the body of the rat, while the receiver is a large conductive plane. This large conductive plane ensures that the movement of the rat is not restricted and data can be continuously recorded. In contrast, in human body communication, the body is on the earth's ground and there exists a trunk path to ground. Due to this, the output voltage is affected by the body capacitance, unlike in the animal body setup. Figure 3 illustrates the key components of HBC and ABC. The capacitance of the body differs between ABC and HBC because the ABC channel model includes the additional conductive surface on which the rat is free to move. Another important component is the rat foot impedance: in ABC, the rat's feet rest on the conductive surface, and C_Foot and R_Foot change depending on the position of the rat's foot on the conductive plane. In the human model, the received signal is collected from the body surface itself, so the output voltage depends on the capacitive return paths of both the transmitter and the receiver. In the ABC model, the conductive surface is ground-isolated and connected to an oscilloscope, which acts as the receiving unit. The transmitter couples to the floating body, and the return-path capacitance C_G_TX from the earth's ground plane to the transmitter ground plane completes the loop, allowing signal transmission. The receiver in ABC is the oscilloscope signal probe, which can be modeled as the load capacitance C_L in parallel with the load resistance R_L. This oscilloscope is earth-ground referenced and hence eliminates the capacitive return path of HBC. The low loss in ABC, coupled with low-carrier-frequency communication (as over a wire), enables ABC power consumption to be much lower when compared to wireless communication modalities such as Bluetooth.
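To build intuition for this capacitive division, the sketch below evaluates an idealized two-capacitor divider. The topology and all component values here are illustrative assumptions of ours, not extracted parameters from the paper's Fig. 2c.

```python
# Minimal sketch of the EQS capacitive-divider intuition: the received voltage
# falls as the signal-plane-to-ground capacitance grows. Topology and values
# are illustrative assumptions, not the paper's measured circuit parameters.
C_return = 1e-12  # transmitter return-path capacitance C_G_TX, farads (assumed)

def received_fraction(C_plane_to_ground: float) -> float:
    """V_o / V_In for an ideal series capacitive divider."""
    return C_return / (C_return + C_plane_to_ground)

for C_pg in (1e-12, 10e-12, 100e-12):  # plane-to-ground capacitance sweep
    print(f"C_plane_to_ground = {C_pg*1e12:5.0f} pF -> V_o/V_In = {received_fraction(C_pg):.3f}")
```

In this toy sweep V_o/V_In drops from 0.5 to about 0.01 as the plane-to-ground capacitance rises, matching the qualitative statement above that a smaller plane-to-ground capacitance yields a larger received signal.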
This reduced power enables longer-duration experiments with small-form-factor devices.

Results

Animal Body Communication was explored as a new modality for the transmission of biopotential signals. The sensing and transmitting devices are built using off-the-shelf components and consist of a communication module, a processing module, a power source, and an interface to connect to the rat body. Surface electrodes are placed on the skin surface of the rat, after employing appropriate skin preparation techniques, and are then connected to the front end of the device. Biopotential information is sensed and modulated for transmission, with the signal transmitted simultaneously over Bluetooth and through the body of the rat as Animal Body Communication. Bluetooth has long been used as a wireless communication modality and is widely cited in the literature as a means to transmit biopotential information. In this work, we use this gold standard of communication to compare the biopotential information received from the ABC transmitter and the Bluetooth module. In an ideal situation, a tethered system would act as the gold standard; however, body communication cannot be achieved when the system is ground-connected (as in the case of a tethered system), and with a ground-connected system the results would be optimistic and incorrect 22 . A correlation analysis is performed to compare both signals. Experiments were performed on rats to prove the feasibility of Animal Body Communication.

Animal Body Communication Experimental Setup. The Animal Body Communication setup was tested on Sprague Dawley rats; experiments were performed on anesthetized rats. In this study, capacitive coupling is used as the means to achieve Animal Body Communication. The details of the sensor node are described in the Methods section. Anesthetized rats are placed on a non-conductive surface; the sensor node, in a casing, is placed on the rat's skin surface, and patch connectors are used to connect to the surface electrodes. The feet of the rat are placed on a conductive copper plate; signals are acquired using the sensing unit and then transmitted via Bluetooth to a receiver connected to a computer, as shown in Fig. 4. The device is capable of transmitting both over Bluetooth and through ABC simultaneously. Only the feet are connected to the conductive plane, while the body rests on a non-conductive surface. This represents the case where the rat moves in a cage with only its feet on the bottom plane. ABC happens through the transmission of OOK sequences from the node, through the body, to the conductive copper plate. These signals are picked up using an oscilloscope connected to the conductive plane; the oscilloscope signal probe is connected to the conductive plane while the ground probe is left floating. EKG signals are acquired using a three-electrode setup with the electrodes placed on the Right Arm (RA), Left Arm (LA), and Right Leg (RL). The RL serves as the right-leg drive, common in EKG recording systems. Additional monitoring systems, such as the anesthetizing setup and body-vital measurement systems, are present in this experimental setup but are not part of the communication setup. This setup aims to mimic the setup described in Fig. 1: the copper plates act as the conductive surface which, in an awake recording setup, would form the base on which the rat is free to move.

Time-division multiplexing. Biopotential signal measurements require the body to be grounded to improve the CMR of the entire system.
Grounding the body eliminates the floating nature of the body that is essential for body communication. Thus, to sense and transmit biopotential signals, time-division multiplexing is used. Such multiplexing between each sensing cycle and transmission cycle ensures that surface biopotentials can be sensed accurately and also transmitted via body communication. With simultaneous sensing and transmission, given that the transmitter is placed on the surface of the body, the sensing electrodes would pick up the OOK sequences used in the transmission, resulting in a corrupted sensed signal. To avoid this, sensing and transmission are time-multiplexed; this technique is critical for body communication. Figure 5a describes the time-multiplexing cycles: data is sensed for a period of 5 s, followed by transmission for 10 s. The transmission over ABC and Bluetooth occurs simultaneously; however, Bluetooth sequences take longer to transmit due to packet constraints, resulting in a transmission time longer than the sensing time. Following the transmission cycle, the sensing cycle repeats. ABC data is sent as OOK sequences which are then demodulated and decoded to retrieve the EKG samples, as shown in Fig. 5b. Bluetooth samples are transmitted as characters corresponding to the ADC codes, which are then converted to the corresponding sample values to compare with the transmitted ABC signal.

Time-Domain Correlation Analysis of the Acquired EKG Signal. EKG signals were chosen for testing the animal body communication setup. The experiment was conducted on a total of 8 rats over 2 months. The current setup ensures continuous, synchronized transmission of the biopotential signal from both the Bluetooth module and the ABC transmitter. As mentioned before, the signals are time-multiplexed, allowing Animal Body Communication. The EKG signal is sensed for a period of 5 s, followed by simultaneous transmission over ABC and Bluetooth. Figure 6 shows the EKG sample comparison: Fig. 6a shows the Bluetooth and ABC EKG data for a period of 0.6 s; these two signals are overlaid in Fig. 6b, where the PQRST peaks of the characteristic EKG signal align. The data were compared in the same way for all 8 rats, and the correlation coefficients across the trials are depicted in Fig. 6e; the correlation coefficient for all rats was > 99%. In Fig. 6c, the complete overlaid 5 s sample can be seen. Time multiplexing results in an ABC transmission period followed by a wait time for the completion of the Bluetooth transmission and sensing; Fig. 6d depicts this time-multiplexed ABC data, with the cycle being continuous. Figure 6f depicts the variation of the correlation across the entire 5 s window: the correlation between Bluetooth and ABC is approximately 1 throughout, depicting a reliable transmission system. Since we are using commercial off-the-shelf (COTS) components not designed for body communication, the sensitivity is significantly worse than what can be achieved with custom-designed transceivers. Due to this, the BER (bit error rate) of the system is higher. We are able to show that, even with a high BER, good correlations between the Bluetooth-transmitted and ABC-transmitted data are obtained. We have further elaborated on this in the Discussion section.
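The per-window comparison described above amounts to a Pearson correlation between the two reconstructed sample streams. A minimal sketch of such a check is shown below; the two waveforms are synthetic placeholders standing in for the actual recordings.

```python
import numpy as np

# Minimal sketch of the correlation check between the ABC-decoded and the
# Bluetooth-decoded sample streams. The waveforms are synthetic placeholders
# (a noisy copy of one trace), not recorded rat EKG.
fs = 500                                   # sampling rate, Hz (as in the Methods)
t = np.arange(0, 5.0, 1.0 / fs)            # one 5 s sensing window
ekg_bluetooth = np.sin(2 * np.pi * 6 * t)  # stand-in "gold standard" trace
ekg_abc = ekg_bluetooth + 0.02 * np.random.randn(t.size)  # ABC copy with channel noise

r = np.corrcoef(ekg_bluetooth, ekg_abc)[0, 1]
print(f"correlation = {r:.4f}")  # ~0.999 for this noise level, i.e., > 99%
```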
Effect of the Distance of the Foot from the Conductive Surface on the Received ABC Signal. A key component of animal body communication is the dependence of the received signal on the body resistance and capacitance. Varying the distance of the foot from the conductive surface changes the magnitude of the received signal, which tests the robustness of the system. Experimental analysis with only one foot on a conductive surface at varying distances shows that even with the foot raised, OOK sequences can be picked up from the conductive strip. The distance from the conductive surface was varied from 3 cm (foot raised) to a negligible distance with the rat foot taped to the conductive surface (foot completely on surface). It is evident that as the distance from the conductive surface reduces, the amplitude of the coupled signal increases. However, even at large distances, though the signal amplitude is lower, the received Bluetooth and ABC signals can be decoded and display > 97% correlation. This case was evaluated with only one foot coupling to the conductive surface; in reality, the entire rat body would couple to the conductive surface, increasing the received signal. When the rat foot is raised above the conductive surface, the foot resistance R_Foot becomes infinite; however, even in that case C_Foot and C_B_CS exist, as shown in Fig. 2, and the body as a whole couples to the signal plane. Here C_Foot and C_B_CS are the capacitances of the foot to the conductive surface and of the body to the conductive surface, respectively. This ensures the necessary path for transmission of the signal. Since body communication works on capacitive coupling, even without complete contact with the conductive surface, the OOK sequences couple to the conductive surface. It is highly unlikely that the rat would have all feet raised above the conductive plane for a long time. In the event of improper contact with the conductive surface, or when the rat jumps, it is shown that the signals can still be received on the conductive plane and successfully decoded. In the event that the rat has all of its feet and its body away from the conductive surface, which is not a common occurrence, C_Foot and C_B_CS would reduce and the signal may be lost. For such cases, bi-modular redundancy can be introduced into the system, in which case the lost data could be retrieved by transmitting it at a later instance. This form of error correction, used for short burst errors, can ensure robust transmission. Figure 7 shows the variation of the distance of the rat foot from the conductive surface: position 1 is furthest away from the conductive strip, while in position 6 the rat foot is completely taped onto the conductive surface. The amplitude of the received signal increases with the reduction in distance for a set transmitter voltage of 3.3 V. It can be seen that in all cases the sequences can be decoded, and all show high correlations with Bluetooth.

Discussions

Capacitive coupling from the transmitter ground plane to the earth's ground provides the return path necessary for animal body communication. The presence of a large conductive signal plate prevents the existence of such a capacitive path to ground. The addition of a conductive plane connected to the receiver ground, placed above the rat body, provides the necessary return path. The transmitter ground plane, along with this floating ground plane, forms the capacitance C_G_TX.
In the setup with a rat cage, as shown in Fig. 8b, the top and bottom surfaces of the rat cage are made conductive, with the top plate connected to the receiver ground, while the bottom plate, which acts as the signal plane, is connected to the signal probe of the receiver. During in-vivo tests, the ground plane consisted of a hand-held conductive plane above the anesthetized rat body. Only the feet of the rat are connected to the signal plane, with a slot in the conductive plane to allow for the placement of the rat. Figure 8a describes the need for the addition of the conductive ground plane in a model rat cage. Similar to Fig. 2b, the capacitive coupling from the device ground plane to the external ground plane provides the necessary return path. The sensor node placed on the body of the rat has the transmitter ground plane on the top surface, and the signal electrode touches the body of the rat. The addition of this floating ground plane allows for the use of a large signal plane, providing a larger experimental arena for the rat to move on without being limited by the loss of the signal return path.

Limitations and scope for future work. Some neuroscientific behavioral experiments involve swimming or require the animal to walk on treadmills and mazes. Due to the nature of the signal path, a modified setup would be needed to accommodate a conductive plane; in some cases, such as swimming, this may not be possible. However, there is a possibility to extend this setup to cases involving mazes, with a special setup where the maze bottom surface is made conductive to receive the ABC-transmitted signals. Similarly, in the case of treadmills, a copper strip can be stuck on the belt and connected to the receiver, or a conductive textile could be used as the plane through which ABC signals are received. Also, given that the signal electrode needs to be interfaced with the body of the animal, this could act as a limitation in certain applications. For this setup, we consider Bluetooth the gold standard and compare the ABC signal with the Bluetooth signal. For the receiver, we use an oscilloscope-based system to recover the data. The sensitivity of this system is low, similar to traditional oscilloscopes, which results in low SNR and higher BER. The SNR of the received signal was computed to be in the range of 7-8 dB. Based on this SNR, the BER of the system for OOK modulation is in the range of 10⁻³ to 10⁻², as stated by Salehi and Proakis 36 ; our system has a similar BER of 10⁻². Even with a high BER, the system achieves good correlations between the two communicated signals. With a custom-designed receiver with higher sensitivity, it is possible to achieve a much lower BER, of the order of 10⁻⁴ for a 500 kHz carrier 28 . In this system, we use time-division multiplexing to achieve ABC communication. There is a need for simultaneous sensing and monitoring in many neuroscientific studies. The basic physics of body communication does not change, and we have evaluated that body communication does not affect the actual electrophysiological signal. Given this, there is a path to simultaneous sensing and transmission, which will involve a change in the engineering design of the current system. It is also possible to extend this system to support data recordings from multiple animals.
For animals that are not singly housed, Frequency-Division Multiple Access (FDMA) can be used, which allows different carrier frequencies to be assigned per animal and separated at the receiver end, allowing simultaneous recording from multiple animals.

Figure 9. Evolution of animal biopotential recordings:
• First neural recording: Luigi Galvani, electrical activity in the nervous system of a frog [32]
• 1842, first EKG signal recording: Carlo Matteucci discovered the electrical activity with each heart beat in a frog [33]
• 1875, first EEG signal recording: Richard Caton, EEG in rabbits and monkeys [34]
• 1948, wireless radio telemetry: Fuller and Gordon, radio inductograph for recording physiological activity in unrestrained animals [5]
• 1970, wireless power transfer: inductive power transfer for biomedical devices and implantables [35]
• 1986 to present, multi-channel recording: radio-based wireless telemetry, wireless power transfer, and in-sensor analytics for smart devices [36,37]

Conclusion

To conclude, in this work we demonstrate a novel communication modality in the animal-studies domain and show how the advances in Electro-Quasistatic Human Body Communication (EQS-HBC) can be adapted to animal biopotential recording. Biopotential signals were acquired from the rat and transmitted using Animal Body Communication. The theory and channel model for animal body communication were developed, and a custom-designed sensor node was built and tested in vivo. The correlation between standard wireless transmission systems and ABC was found to be > 99% in these tests. The power consumption for Bluetooth transmission was observed to be 29.5 mW, while the power consumption for ABC transmission was found to be 0.5 mW; this represents a > 50× reduction in power. If a custom-designed IC were built with only ABC transmission, the device size and power could be further significantly reduced, along with the possibility of making these systems high-bandwidth. The effect of the variation of the distance of the rat's foot from the receiver signal plane was observed, and it is clear that reliable signals can be received even with improper contact or raised feet, adding to the reliability of this communication channel. A modified test setup was explored as an additional technique to ensure robust communication. While in this study EKG was the chosen biopotential, the approach can be extended to neural signal acquisition and transmission, where low-power communication modalities are essential. In Fig. 9 the evolution of animal biopotential recording is summarized; the key differences between tethered, wireless and EQS-ABC recording were compared, and EQS-ABC can prove to be the next advancement in this domain, allowing for an ultra-low-power, efficient channel model.
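As a back-of-the-envelope check on the power figures above, the energy per bit can be estimated from the measured powers and the data rates reported in the Methods (25 kbps for ABC; 45 kbps is the minimum Bluetooth payload rate cited there, so the Bluetooth per-bit number is an order-of-magnitude figure):

```python
# Rough energy-per-bit comparison from the reported figures. Data rates are
# taken from the Methods; actual Bluetooth air rates are higher, so these are
# order-of-magnitude numbers.
p_bt, p_abc = 29.5e-3, 0.5e-3   # measured transmit power, W
r_bt, r_abc = 45e3, 25e3        # data rate, bits/s

print(f"power ratio      : {p_bt / p_abc:.0f}x")             # ~59x
print(f"Bluetooth energy : {p_bt / r_bt * 1e9:.0f} nJ/bit")  # ~656 nJ/bit
print(f"ABC energy       : {p_abc / r_abc * 1e9:.0f} nJ/bit")  # ~20 nJ/bit
```

The roughly 59× power ratio is consistent with the > 50× reduction quoted above, and with custom transceivers (pJ/bit class, as cited in the Body Communication Basics section) the gap would widen further.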
Methods

System Architecture. Size, weight, area, and power consumption of wireless recording devices have the potential to significantly affect animal behavior and compromise the quality and length of recordings, thereby hindering scientific studies. Overcoming these obstacles formed the core design objective for the custom node for the acquisition of biopotential signals and wireless transmission of data, and resulted in the following initial specifications. Physical dimensions were constrained to one cubic inch, which is sufficiently small to be placed on a rodent and large enough to house the various components. The net weight and power consumption were capped at 50 g and 50 mW, respectively. This posed a significant challenge, since the analog front end for sensing, the microcontroller for computing, the wireless communication for comparison purposes, the power management, and the animal body communication had to be miniaturized and integrated into the device while meeting the power budget. The system architecture, as shown in Fig. 10a, can be broadly divided into three blocks: the custom wireless signal-acquisition node, the Bluetooth receiver connected to the data-logging system (computer), and the animal body communication receiver. The custom node consisted of two vertically stacked custom-designed printed circuit boards (PCBs) populated with commercially available integrated circuits and discrete components, shown in Fig. 10b,c respectively. A System on Chip (NRF52840, Nordic Semiconductor), which integrates an ARM Cortex-M4F microcontroller and a Bluetooth 5.0 transceiver, was selected to form the core of the node, since it minimizes the device footprint and power consumption. The SoC utilizes Bluetooth 5.0 Low Energy (BLE), the latest version of Bluetooth wireless communication. The on-board 1 MB flash memory and 256 KB RAM are sufficiently large to store the sampled signals and implement in-sensor analytics in the future. Power efficiency was further improved by utilizing the on-chip DC-DC converters. A 3.7 V, 150 mAh lithium-polymer rechargeable battery is directly soldered onto the board, along with the battery-management circuitry; we use a battery-management integrated circuit, the MCP73831 by Microchip Technology, a linear charge-management controller selected for its small physical size and low external component count. The custom node collected the EKG signals through a zero-insertion-force connector placed on the PCB. Signal conditioning and sampling of the EKG signal were performed by another SoC (ADS1298, Texas Instruments). This analog front-end chip incorporates a programmable-gain differential amplifier and right-leg-drive generation for conditioning EKG signals, which were subsequently sampled at 500 Hz by a 24-bit analog-to-digital converter. The SoC was programmed to optimize signal-acquisition quality and power consumption. The sampled signals were sent to the microcontroller through an on-chip Serial Peripheral Interface and stored in a buffer until the transmission window started. The samples were then converted to characters and transmitted as a string over Bluetooth after adding delimiters to differentiate between subsequent samples. For Animal Body Communication, each sample was transmitted in its original 24-bit binary integer form after creating packets by adding two bits (binary 1) at the start and end of the sample. Each bit in ABC was represented by on-off keying, wherein a 500 kHz, 50% duty-cycle square wave was turned on (binary 1) or off (binary 0). The amplitude of each bit is 3.3 V, which is the output of the microcontroller. ABC data was transmitted at 25 kbps, which is significantly lower than the minimum required Bluetooth bandwidth of 45 kbps, excluding the overhead added by the Bluetooth stack. The custom-designed node was packaged in a 3D-printed housing of dimensions 25 mm × 25 mm × 10 mm, equivalent to 0.39 cubic inches. It had a net weight of 20 g and an average power consumption of 29.5 mW (with Bluetooth transmission for data-comparison purposes), which resulted in approximately 20 h of battery life. This is 19 times smaller, with more than twice the battery life, compared to a commercial wireless unit (Bio-Radio). We expect a much longer lifetime when the Bluetooth transmission is turned off and only ABC transmission is on. The power required for sensing is typically orders of magnitude lower than the power required for communication, so the system power is dominated by the communication power. The ABC transmission power is 50× lower than the Bluetooth transmission power, and this translates into an order-of-magnitude improvement in the device lifetime and a reduction in the battery size.
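The packet and modulation scheme described above can be summarized in a few lines. The sketch below is our illustrative reconstruction of the described framing, not the device firmware; function names are ours.

```python
# Illustrative reconstruction of the described ABC framing: a 24-bit ADC sample
# is wrapped with start/stop bits (binary 1), and each bit gates a 500 kHz OOK
# carrier at 25 kbps. This mirrors the text; it is not the actual firmware.
CARRIER_HZ = 500_000
BIT_RATE = 25_000
CYCLES_PER_BIT = CARRIER_HZ // BIT_RATE  # 20 carrier cycles per bit slot

def frame_sample(adc_code: int) -> list[int]:
    """Pack a 24-bit sample as [1, b23 ... b0, 1]."""
    bits = [(adc_code >> i) & 1 for i in range(23, -1, -1)]
    return [1] + bits + [1]

def ook_gate(bits: list[int]) -> list[int]:
    """Expand each bit into per-cycle carrier on/off gating."""
    return [g for bit in bits for g in [bit] * CYCLES_PER_BIT]

packet = frame_sample(0xABCDEF)
print(len(packet), "bits ->", len(ook_gate(packet)), "carrier cycles")
# 26 bits -> 520 carrier cycles, i.e., 1.04 ms of air time per sample
```

At the 500 Hz sampling rate this framing needs 26 × 500 = 13 kbps of payload, comfortably inside the 25 kbps ABC link.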
The Bluetooth receiver was essentially another NRF52840 SoC connected via USB to the data-logging system, which in this case was a computer. This setup was used instead of the inbuilt Bluetooth device of the computer since it makes it easier to collate data from multiple transmitters. The conductive signal plane is connected to the high-impedance receiver probe. A computer-based oscilloscope by Pico Technology was used as the ABC receiver. The OOK sequences are sampled at 3.9 MSamples/s and collected for post-processing.

Signal Processing. OOK sequences collected from the ABC receiver are sent to a computer for processing. Signals are first band-pass filtered between 400 and 600 kHz with 80 dB attenuation software filters. Filtered sequences are demodulated using envelope detection and thresholding. Sequences are then decoded using the start and stop bits, followed by software error correction. Bluetooth sequences, received in the form of ADC codes, are converted to the corresponding voltage values and compared with the received ABC signals.

Communication Protocols.
• Time-Multiplexed Data: As discussed earlier, a requisite for animal body communication, especially while recording surface biopotential signals, is the need to time-multiplex the sensing and transmission periods.
• Error-Correcting Algorithms: There is a possibility to bring redundancy into the communication channel to ensure the robustness of this communication modality. We have shown that if the rat's foot is lifted from the conductive surface, the received signal can still be picked up by the receiver. The goal of this paper is to ensure that long-term recordings of freely moving animals can be obtained; to ensure successful transmission of data, error-correcting algorithms become a necessity. Bi-modular redundancy can be introduced by repeating packets over time: in the event of a jump or signal drop, the repeated packets ensure that the signal information is faithfully transmitted, at the cost of a reduced data rate due to the added redundancy. Block codes are a common error-correcting technique of encoding the data in blocks, such that, in a linear block code, the codeword is a linear combination of the message and parity bits.
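A minimal sketch of that receive chain (bandpass, envelope, threshold, bit slicing) is shown below. The filter order and the threshold rule are illustrative choices of ours; only the 400-600 kHz band, the envelope detection, and the start/stop-bit framing come from the text.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

# Sketch of the described receive chain: bandpass around the 500 kHz carrier,
# envelope detection, thresholding, then bit slicing. Filter order and the
# midpoint threshold are illustrative assumptions.
FS = 3.9e6                 # receiver sampling rate, samples/s
BIT_RATE = 25_000
SPB = int(FS / BIT_RATE)   # samples per bit slot (156)

def decode_ook(raw: np.ndarray) -> np.ndarray:
    sos = butter(6, [400e3, 600e3], btype="bandpass", fs=FS, output="sos")
    band = sosfiltfilt(sos, raw)        # isolate the OOK carrier
    env = np.abs(hilbert(band))         # envelope detection
    thr = 0.5 * env.max()               # simple midpoint threshold
    n_bits = env.size // SPB
    slots = env[: n_bits * SPB].reshape(n_bits, SPB) > thr
    return slots.mean(axis=1) > 0.5     # majority vote per bit slot

# bits = decode_ook(scope_trace)  # then locate the start/stop '1' bits and
#                                 # unpack the 24-bit sample in between
```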
Surgery. All procedures were performed under aseptic conditions at the Purdue Animal Facility. 5% isoflurane gas with oxygen was used to anesthetize the rat in an induction chamber, followed by a continuous flow of 2.5% isoflurane gas with oxygen delivered through a nose cone. The dosage of isoflurane and the flow of oxygen were continuously monitored to ensure that the rat did not respond to a toe pinch while still maintaining a steady breathing rhythm and observably pink extremities. A heating pad was placed below the rat to maintain body temperature, and lubricating drops were added to the eyes of the rat to prevent drying. The skin surface was shaved and cleaned for the placement of the surface electrodes. The device was placed on a shaved surface on the belly of the rat with the signal plane touching the skin surface, and the surface electrodes were connected to the device using patch connectors. The experiments were performed on 8 Sprague Dawley rats, which is sufficient to demonstrate the science and working of ABC. All procedures were approved by the Institutional Animal Care and Use Committee (IACUC), and all experiments were performed in accordance with the Guide for the Care and Use of Laboratory Animals. The experiments were closely monitored and reviewed by the Purdue Animal Care and Use Committee (PACUC).
8,477
2021-02-08T00:00:00.000
[ "Materials Science" ]
Goldstone Inflation

Identifying the inflaton with a pseudo-Goldstone boson explains the flatness of its potential. Successful Goldstone Inflation should also be robust against UV corrections, such as from quantum gravity: in the language of the effective field theory this implies that all scales are sub-Planckian. In this paper we present scenarios which realise both requirements by examining the structure of Goldstone potentials arising from Coleman-Weinberg contributions. We focus on single-field models, for which we notice that both bosonic and fermionic contributions are required and that spinorial fermion representations can generate the right potential shape. We then evaluate the constraints on non-Gaussianity from higher-derivative interactions, finding that axiomatic constraints on Goldstone boson scattering prevail over the current CMB measurements. The fit to CMB data can be connected to the UV completions for Goldstone Inflation, finding relations in the spectrum of new resonances. Finally, we show how hybrid inflation can be realised in the same context, where both the inflaton and the waterfall fields share a common origin as Goldstones.

Introduction

The empirically well supported paradigm of cosmic inflation [1] has a hierarchy problem from the perspective of particle physics. Parameterised in terms of a slowly rolling scalar field, the scale of inflation (from CMB data [2]) is exceeded by the field excursion (given by the Lyth bound [3]) by roughly two orders of magnitude:

Λ⁴ = (1.88 × 10¹⁶ GeV)⁴ (r/0.10)  and  Δφ ≥ M_p √(r/4π),  (1.1)

where r is the ratio of the tensor to the scalar power spectrum, and where M_p = 2.435 × 10¹⁸ GeV is the reduced Planck mass. Meeting both these conditions implies an exceptionally flat potential for the inflaton, which generically is radiatively unstable. Natural Inflation (NI) [4] offers a solution to this hierarchy problem by imposing a shift symmetry on the inflaton: the inflaton potential exhibits a shift symmetry φ → φ + C, with C a constant, and therefore could be protected from higher order corrections. The shift symmetry is realised by identifying the inflaton with the Goldstone boson (GB) φ of a global symmetry G broken to its subgroup H (φ ∈ G/H). In turn, the GB obtains a potential through effects that render G inexact. The resulting degree of freedom is therefore not an exact Goldstone boson, but a pseudo-Goldstone boson (pGB). Different effects can lead to an inexact global symmetry; we reviewed the relevant mechanisms in [5]. The original and most popular NI model has an axion as the inflaton, the GB of the spontaneously broken Peccei-Quinn symmetry [4]. The axion gets a potential through nonperturbative (instanton) effects. As shown in Ref. [6], these effects lead to the characteristic cos(φ/f) potential across models, where f is the scale at which G is broken. To obtain the famous NI model one adds a cosmological constant term to impose the phenomenological constraint V(φ_min) = 0, obtaining

V(φ) = Λ⁴ [1 + cos(φ/f)].  (1.2)

Alas, the original NI model can only be successfully reconciled with the data from CMB missions for super-Planckian values of the decay constant: f = O(10 M_p). This is evidently a problem, because above the Planck scale one should expect a theory of Quantum Gravity (QG), and it is known that theories of QG in general do not conserve global symmetries [7]. Therefore one generically expects large contributions to the simple potential (1.2), as was shown recently in [8]. Thus, one may conclude that vanilla NI is not a good effective theory.
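The super-Planckian requirement can be seen directly from the slow-roll parameters of (1.2). The short check below uses the textbook definitions ε = (M_p²/2)(V′/V)² and η = M_p² V″/V; it is a standard computation, not one taken from this paper.

```python
import numpy as np

# Slow-roll check for V = Lambda^4 (1 + cos(phi/f)). Lambda drops out of the
# slow-roll parameters; everything depends only on f/Mp and phi/f.
def slow_roll(phi_over_f: float, f_over_Mp: float):
    x = phi_over_f
    V = 1 + np.cos(x)
    dV = -np.sin(x) / f_over_Mp        # derivatives in units of Mp
    d2V = -np.cos(x) / f_over_Mp**2
    return 0.5 * (dV / V) ** 2, d2V / V  # (epsilon, eta)

for f in (0.5, 1.0, 10.0):             # f in units of Mp
    eps, eta = slow_roll(np.pi / 4, f)  # a point partway down the hill
    print(f"f = {f:4.1f} Mp: eps = {eps:.4f}, eta = {eta:.4f}")
```

For f ≲ M_p the slow-roll parameters are O(1), while f ≈ 10 M_p brings both well below unity over a wide field range, which is exactly the super-Planckian decay-constant problem described above.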
Different proposals have been made to explain the super-Planckian decay constant while maintaining the simple potential (1.2) and the explanatory power of the model. Among these are Extra-Natural Inflation [10], hybrid axion models [11,12], N-flation [13], axion monodromy [14] and other pseudo-natural inflation models in supersymmetry [15]. These proposals usually focus on generating an effective decay constant f_eff in terms of model parameters, such that f_eff = O(10 M_p) is no longer problematic. Some of these models rely on a large amount of tuning or on the existence of extra dimensions, as 4D dual theories suffer from the same problems as the vanilla model. In [5] we recognised that pGB inflation does not have to have an axion as the inflaton. There are other models which generate a natural inflaton potential, protected from radiative corrections by the same mechanism. In particular, we showed that one can find models that fit the CMB constraints for a sub-Planckian symmetry breaking scale f. For example, if the pGB field is coupled to external gauge bosons and fermions, a Coleman-Weinberg potential is generated for the inflaton. We demonstrated the general mechanism and gave a specific successful example inspired by the minimal Composite Higgs model MCHM_5 [16]. Here we develop a comprehensive approach to Goldstone Inflation. In Sec. 2, we give a full analysis of the potentials that can be generated, and motivate that the potential that is uniquely expected to give successful single-field inflation is given by

V(φ) = Λ⁴ [α cos(φ/f) + β sin²(φ/f)].

In Sec. 3, we compare its predictions against the CMB data and find that the latter singles out a specific region in the parameter space. We comment on the fine-tuning necessary and show that one obtains a successful model with f < M_p at marginal tuning. As the Goldstone inflaton is expected to have non-canonical kinetic terms, we give an analysis of the non-Gaussianity predictions. We show that the current bounds are comfortably evaded. In Sec. 4, we further explore the region of parameter space that leads to successful inflation. The relations that we find by comparison with the Planck data give information about the form factors that parameterise the UV theory. We comment on the scaling with momentum we expect from theoretical considerations. We finish with an analysis of the UV theory, in which we use QCD tools to compute the relevant parameters and give a specific example in the approximation of light resonance dominance in Sec. 5. Finally, in the Appendices we give specific examples of single-field and hybrid inflation coming from Goldstone Inflation.

2 Most general single field potential

Here we provide an argument about what types of potentials we can expect for Goldstone inflatons, and which can lead to successful inflation. We describe a generic model in which the strong sector has a global SO(N) symmetry which breaks to SO(N−1). This symmetry breaking gives rise to N−1 massless Goldstone fields, one linear combination of which will play the role of the inflaton. We can parameterise these fields with Σ(x), which transforms as a fundamental of SO(N):

Σ(x) = exp(i φâ(x) Tâ / f) Σ₀,  (2.1)

where Tâ are the broken SO(N)/SO(N−1) generators, φâ(x) are the Goldstone fields, f is some energy scale (analogous to the pion decay constant), and Σ₀ = (0, …, 0, f)ᵀ is the symmetry-breaking VEV [17], with f the scale of spontaneous symmetry breaking. This parameterisation is motivated by the transformation properties of Goldstone fields.
Under a spontaneously broken global symmetry, Goldstone fields should transform nonlinearly. Since Σ is a fundamental, it transforms as Σ → OΣ, for some O = exp(iTâαâ). Thus the Goldstone fields transform as φâ → φâ + f αâ. This non-linear shift symmetry prevents the Goldstone fields from acquiring a tree level potential. Only if the strong sector has couplings which violate the global symmetry may the inflaton acquire a potential through loop corrections. If we take the unbroken symmetry SO(N − 1) to be a gauge symmetry, we can gauge away N − 2 of the Goldstone fields (they give mass to N − 2 gauge bosons), as we show pictorially in Fig. 1. This will leave us with one physical Goldstone field, which we identify with the inflaton. The same mechanism gives masses to the W ± and Z bosons in models in which the Higgs doublet arises as a set of Goldstone bosons (see for example [19], [20]). We now attempt to write down an effective Lagrangian containing couplings of the Goldstone fields to the SO(N − 1) gauge bosons. A useful trick is to take the whole SO(N ) 2 Here we assume the CCZW formalism. A different proposal relying on quark seesaw has been made recently (see for instance [18] and references therein); however, in this setup the periodicity of the Goldstone field is disguised and therefore we will stick to CCZW. global symmetry to be gauged, and only at the end of the calculation setting the unphysical SO(N )/SO(N − 1) gauge fields to zero [21]. The most general effective Lagrangian involving couplings between Σ and SO(N ) gauge bosons, in momentum space and up to quadratic order in the gauge fields, is where A µ = A a µ T a (a = 1, ..., N ) are the SO(N ) gauge fields, P µν T = η µν − q µ q ν /q 2 is the transverse projector, and Π A 0,1 (p 2 ) are scale-dependent form factors, parameterising the integrated-out dynamics of the strong sector. Taking an appropriate choice for the SO(N ) generators and expanding out the matrix exponential in (2.1), we obtain: where φ = φâφâ. With an SO(N − 1) gauge transformation we can rotate the φâ fields along the φ 1 direction, so that (2.5) The remaining N − 2 degrees of freedom give masses to as many gauge bosons. Expanding out all the terms in (2.3) and setting the SO(N )/SO(N −1) gauge fields to zero as promised, we obtain: Using this Lagrangian we can derive a Coleman-Weinberg potential for the inflaton [22]: where p 2 E = −p 2 is the Wick-rotated Euclidean momentum. This result can be understood as the sum over the series of diagrams: (2.8) in which the inflaton field is treated as a constant, classical background. The factor of 3(N − 2) comes from the 3 degrees of freedom of each of the massive SO(N − 1)/SO(N − 2) gauge bosons, any of which may propagate around the loop. Provided the ratio Π A 1 /Π A 0 decreases fast enough at high momentum, we can approximate the potential by expanding the logarithm at leading order. This gives (2.10) Now we introduce a set of external fermions. Just as with the gauge case, the easiest way to write down a general effective Lagrangian is to assume that the fermions are embedded within representations of the full symmetry group SO(N ). First we try embedding two Dirac fermions (one left and one right handed) in fundamental SO(N ) representations: The reader will note that fermions placed anywhere other than the first and N th entries of these fundamentals will not contribute to the inflaton potential, since they will not couple to the rotated Σ (2.5). 
We place ψ L and ψ R in two separate fundamentals for the sake of generality -this arrangement will avoid cancellations between terms that would occur if we used the embedding  The most general SO(N ) invariant effective Lagrangian we can write down, up to quadratic order in the fermion fields, is which can be rewritten: We can derive the Coleman-Weinberg potential using the formula which is correct up to terms independent of φ. Here N c is the number of fermion colours and for all fermions ψ i . We obtain, up to terms independent of φ: The presence of higher order trigonometric functions inside the logarithm is due to the fact that we have more than one fermion that can propagate around the loop. We have, among other diagrams, the series: This series includes diagrams with only an even number of vertices, and so its summation leads to higher order terms in the argument of the logarithm. Again we can expand the logarithm at first order to get a potential of the form: This potential has a very flat region for α β, the flat region being a maximum (minimum) for β > 0 (β < 0). For realistic inflation we require the flat region to be a local maximum, so that the inflaton can roll slowly down the potential. However, since we expect the Π 0 form factors to be positive (see, for example [23]), the expansion of the log gives a negative value for β. Note that the (Π L term cancels other terms at next order in the expansion, so does not contribute to the potential. The gauge contribution -being of the form sin 2 (φ/f ) -will not help matters. Therefore we turn to the next simplest option: embedding the fermions in spinorial representations of SO(N ). Spinors of SO(N ), for odd N , have the same number of components as spinors of SO(N − 1). The extra gamma matrix Γ N is the chiral matrix, which in the Weyl representation is the only diagonal gamma matrix. Spinors of SO(N ) are built from two spinors of SO(N − 2) in the same way that Dirac spinors are constructed using two Weyl spinors. We denote these SO(N − 2) spinors χ L,R , and embed the fermions as follows: and construct the full SO(N ) spinors thus: This embedding is chosen so as to ultimately give a coupling between ψ L and ψ R -other embeddings that achieve this will lead to the same eventual result. The SO(N ) invariant effective Lagrangian takes the form where Γ a are the Gamma matrices of SO(N ). If we take this can be expanded to give: Combined with the gauge contribution, this will lead to the potential: (2.26) This potential has a flat maximum for α 2β, β > 0. The gauge contribution can now give us a positive value for β. Thus, for a region of parameter space, this is a viable inflationary potential. Including more fermions in our model will lead to a wider class of diagrams contributing to the Coleman-Weinberg potential. If we expand consistently to first order in Π 1 /Π 0 and (M/Π 0 ) 2 however, the only terms that appear at leading order will be those coming from diagrams in which only a single fermion, or an alternating pair of fermions, propagates around the loop. Equation (2.25) will therefore be the generic leading order result, although the coefficients will be modified. In particular, α will be given generally by where a i = 0 if ψ i is embedded in the upper half of an SO(N ) spinor, and a i = 1 if ψ i is embedded in the lower half. 
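A quick check of the flatness condition quoted above: for V(φ) = Λ⁴[α cos(φ/f) + β sin²(φ/f)] one finds

    V'(\phi) \;=\; \frac{\Lambda^4}{f}\,\sin\frac{\phi}{f}\left( 2\beta\cos\frac{\phi}{f} - \alpha \right),
    \qquad
    V''(0) \;=\; \frac{\Lambda^4}{f^2}\,\left( 2\beta - \alpha \right),

so φ = 0 is always an extremum, and for β > 0 with α slightly above 2β it becomes a maximum with arbitrarily small curvature: the flat hilltop needed for slow roll.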
To satisfy the phenomenological constraint that the inflaton potential should be zero at its minimum V (φ min ) = 0, we now insert a constant term C Λ by hand: The result that fermions in fundamental representations cannot induce a satisfactory inflation potential holds generically for any group, for precisely the reasons outlined above. It is for this reason that we did not consider SU (N ) symmetries, since the only single-index representations of SU (N ) are fundamental (or anti-fundamental) representations. Embedding fermions in spinorial representations will generally lead, at first order, to a potential of the form (2.25). Since spinorial representations only exist in SO(N ), we conclude that an SO(N ) symmetry of the strong sector is the simplest and most natural way to generate a realistic inflaton potential. Constraints from Inflation After our discussion of the general structure of the inflaton potential, let us discuss the restrictions coming from inflation. We list some potentials that can give rise to inflation in Table 3. We parameterise the flatness of the potential as usual in the slow roll approximation (SRA). That is, we require 1 and η 1, where and η are here given by To simplify our expressions, in this section we work in units of reduced Planck mass M p ; that is, we will rescale our parameters φ → φ Mp and f → f Mp . The number of e-foldings in the slow-roll approximation is then given by where φ E is fixed as the field value for which either = 1 or η = 1, in other words, the field value for which the SRA breaks down. Here and in the following we conservatively choose N = 60 for our predictions. We compare the predictions of our model and the CMB data for the spectral tilt and the tensor-to-scalar ratio, which can be expressed in the SRA as respectively. A generic potential for a pseudo-Goldstone boson would contain powers of periodic functions, c φ = cos φ/f and s φ = sin φ/f , which we parametrize as The derivatives of this potential are again proportional to the same periodic functions. Roughly speaking, the flatness of the potential can be achieved in two ways. One possibility is setting the argument, φ/f , to be very small (modulo 2π) as in the Natural Inflation scenario. As the fluctuations of the inflaton can be large, this condition typically implies f M p , hence spoiling the predictivity of the model. Another possibility, and that is what we pursue here, is to look for models with f < M p , which in turn implies that two oscillating terms contribute to the flatness of the potential. This may seem like it would introduce fine-tuning in the model, but in the next section we quantify that tuning, finding it is milder than e.g. Supersymmetry with TeV scale superpartners. Note that different models are equivalent from a cosmological perspective and can be transformed into each other by a rotation in parameter space. We list these redefinitions of the parameters and the cosmological constant in Table 3 as well. Model |β| = |β/α| β/|β| C Λ (pheno) In the limit that the ratioβ = β/α is ±1/2, the potential is exactly flat at the origin and the spectrum is scale-invariant, i.e. n s = 1 as shown in Fig. 2. As the Planck data indicates a small deviation from scale invariance, we expect a small deviation ofβ with respect to 1/2. We find that the smaller f compared to M p , the closer β must be to the values in the table. The deviation δβ = 1/2 − β is then for all models in the table, but most importantly the model motivated in the previous section (2.28). 
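For completeness, the standard slow-roll expressions used in this section are, in reduced Planck units,

    \epsilon = \frac{1}{2}\left(\frac{V'}{V}\right)^{2}, \qquad
    \eta = \frac{V''}{V}, \qquad
    N = \int_{\phi_*}^{\phi_E} \frac{V}{|V'|}\, d\phi, \qquad
    n_s \simeq 1 - 6\epsilon + 2\eta, \qquad
    r \simeq 16\,\epsilon ,

with φ_* the field value at horizon exit of the CMB scales and φ_E the value at which the slow-roll approximation breaks down.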
This is the range ofβ for which the model is compatible with the Planck data, as we plot in Fig. 3. for the well motivated example V = Λ 4 C Λ + α cos φ/f + β sin 2 φ/f . Our models predict negligible tensors, so the measurement of r imposes no constraint onβ. We expect f to be between the scale of inflation, Λ inf ≈ M GU T and the Planck scale, the domain of quantum gravity. For example, for f ≈ M GU T we would have δβ ≈ 10 −6 . However, the deviation can be larger if we allow f to be closer to the Planck scale. Fine-tuning One may note that the specific relationship between α and β in the model described above requires one to fine-tune it. Here we quantify the amount of fine-tuning that one will typically expect. Defining tuning as is customary in Particle Physics [24], we have This relation is not unexpected because for large f > M p the potential will very flat over a large field range ∆φ, and this flatness is not sensitive to the specific value ofβ. For f < M p one needs a (partial) cancelation in α and β, at the cost of fine-tuning. Then we can define the percentage of tuning as It is seen in particular that if we take the upper bound f = M p seriously, the minimal tuning is at 95%. In Fig. 4 we plot the tuning ∆ as defined in (3.7) for the model at hand, (1.3). It is seen that for M p /10 f < M p one expects no tuning below the percent level. One should note that f < 10 −2 M p ≈ M GU T is not expected, as the symmetry breaking pattern should occur before the onset of inflation. One can compare this amount of tuning with the one required to avoid the de-stabilization of the electroweak scale in Supersymmetry. For example, stops at 1 TeV require a much worse fine-tuning, at the level of 1% [25]. It is also noteworthy that the tuning necessary in the other models in Table 3 will be very similar to the tuning in V = Λ 4 C Λ + α cos φ/f + β sin 2 φ/f . The parameter ∆ as defined above for V = Λ 4 C Λ + α cos φ/f + β sin 2 φ/f . Outside of the pink zone the spectral index n s predicted by the model is incompatible with the Planck data (n s < .948 above the region, n s > .982 below). Non-Gaussianity and its relation to Goldstone scattering Even before switching on the Coleman Weinberg potential, Goldstone bosons interact with themselves through higher-order derivative terms. Indeed, consistent with the shift symmetry, one can write terms containing a number of derivatives of the field, The first order term (n = 1) is the usual kinetic term, whereas any other term (n 2) would involve interactions of 2n pions. This expansion is called in the context of Chiral Perturbation Theory [26] as order O(p n ) in reference to the number of derivatives involved. Goldstone self-interactions appear at order O(p 4 ). Alongside the Coleman-Weinberg potential we derived in the previous section, the derivative self-interactions are relevant for inflation as well, as a nontrivial speed of sound arises from a non-canonical kinetic term. Specifically, the sound speed is a parameterisation of the difference of the coefficients of the spatial and temporal propagation terms for the Goldstone bosons φ: This difference arises from higher dimensional kinetic terms X n and the fact that inflation breaks Lorentz invariance. This can of course already be seen from the metric, The speed of sound is then given by where L X and L XX denote the first and the second derivative of the Lagrangian with respect to X respectively, and where c s is expressed in units of the speed of light. 
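The speed of sound referred to in the last sentence is the standard k-inflation expression,

    c_s^2 \;=\; \frac{\mathcal{L}_X}{\mathcal{L}_X + 2X\,\mathcal{L}_{XX}} ,

with X the kinetic invariant (X = ½ \dot\phi^2 on the homogeneous background).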
It is immediately seen that models with a canonical kinetic term predict c s = 1. The background equations of motion can be used to relate coefficients to the Hubble expansion parameter, To second order, the kinetic term will have the form 3 Canonically normalising the kinetic term thus implies, These higher order derivatives are also constrained by arguments of unitarity, analyticity and crossing symmetry of Goldstone scattering amplitudes such as shown in Fig. 5, This scattering amplitude must be a function of the Mandelstam parameters s, t and u, e.g. s = (p 1 + p 2 ) 2 = (p 3 + p 4 ) 2 . This amplitude A(s, t, u) must be analytical in the complex s plane, except for branch cuts (due to unitarity) and isolated points (due to the possible exchange of a resonance) [27]. Unitarity then implies the existence of a branch at some position s s 0 . Similarly, other branch crossings can be obtained by using crossing symmetry. Using these arguments, one can show that the amplitude would be non-analytical for s > 4m 2 φ , where m φ is the mass of the pseudo-Goldstone. Moreover, analiticity restricts the dependence of the amplitude on s, namely where s, t and u are restricted to the physical region, e.g. s 4m 2 φ . This translates into bounds for the coefficients of the Lagrangian in (3.8). At leading order in the Goldstone interactions, the aforementioned conditions lead to a bound for c 2 . In particular, c 2 must be positive and larger than some function of the Goldstone mass. 4 The positivity of c 2 constrains possible deviations from the speed of sound in the model with Goldstone inflatons. Indeed, Where we have defined the dimensionless parameter x = X/f 2 . As X ∼ p 2 , we expect the effective theory to be valid up to The current bound by Planck is c s > .024 [2]. In Fig. 5 one can see how for positive c 2 the speed of sound is in agreement with Planck for any value of c 2 x. As mentioned above, the sound speed is also constrained by arguments of (perturbative) unitarity. The scale at which violation of perturbative unitarity occurs was computed by Ref. [30] (and corrected in [31]) from imposing partial wave unitarity in the quartic interaction, and reads, We are in particular concerned with how Λ u relates to the symmetry breaking scale f . If Λ u < f , the action needs a completion below the symmetry breaking scale, possibly in terms of strongly coupled dynamics or new low-energy physics. The effective theory is therefore no longer a good description. One may thus consider a critical sound speed (c s ) * , defined by [31] For c s > (c s ) * our model predicts Λ u > f . Canonically normalising using (3.14), we have This theoretical lower bound is also shown in Fig. 5 for different values of x (subject to (3.19)). One can see how, once axiomatic conditions from Goldstone scattering are imposed, the inflaton evades both bounds. The speed of sound is related to non-Gaussianity by One does not expect significant contributions to non-Gaussianity from the non-derivative terms in the potential, as they will be slow-roll suppressed. It is worth noting that a deviation from one in the speed of sound will modify the tensor to scalar ratio r = 16 c s (3.24) The predictions for r will in this case be lowered, but as the Planck bound is consistent with r = 0, this is only to the merit of models with a pGB inflaton. Link to UV models We saw above that the model (1.3) gives inflation compatible with the CMB data for particular relations between the coefficients. 
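For orientation before the UV discussion: the relations invoked in the non-Gaussianity passage above are, in their standard form,

    f_{\rm NL} \;\sim\; \frac{1}{c_s^2} - 1 \quad (\text{up to an } \mathcal{O}(1) \text{ coefficient}),
    \qquad
    r \;=\; 16\,\epsilon\, c_s ,

so a sound speed below unity enhances equilateral non-Gaussianity and suppresses the tensor-to-scalar ratio, consistent with the statements in the text.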
Here we discuss what these relations indicate for the UV theory. Firstly, we noticed that to have the right shape of the potential, we should require β to be positive, that is Then we saw in Table table that the requirement of a sufficiently flat potential gives the condition α ≈ 2β, which will give a relation between the form factors of the form Lastly we have that the phenomenological condition V (φ min ) = 0 gives a preferred value of the constant C Λ in terms of the model parameters. In explicit models this will give a condition of the form 5 where C Λ is a cosmological constant during inflation. To obtain explicit expressions for the form factors Π X one would need a UV-complete theory. However, using the relations above we can make some general remarks about their large momentum behaviour. First, we can use an operator product expansion to find the scaling of Π 1 . This implies that Π 1 scales as O /p d−2 , where O is the lowest operator responsible for the breaking G → H, with mass dimension d. In our case, we expect O to be a fermion condensate with d = 6. Secondly we can require finiteness of the fermion Lagrangian (2.22). The scaling of the other form factors can be found by consideration of the kinetic terms in the high momentum limit. We will discuss this in the next section. We summarise our conclusions in Table 4. Form factor Large momentum behaviour Argument Recovering the bosonic Lagrangian Recovering the fermion Lagrangian M r ∼ 1/p 2 OPE coupling In the next section we will assume a light resonance connection to derive more specific conclusions in this approximation. Light resonance connection In this section we attempt to derive some of the properties of the UV theory, assuming that the integrated-out dynamics is dominated by the lightest resonances of the strong sector. To simplify what follows, we note that the form factor M in equation (2.24) is 'naturally' small in the 't Hooft sense [32]. This is because in the limit M → 0 we have an enhanced U (1) L ×U (1) R global symmetry under which ψ L and ψ R transform with independent phaserotations. Therefore in the following we will assume that the dominant contributions to α and β come from the Π i 0,1 form factors. Note that this observation makes it very plausible that condition (4.1) is satisfied. In the large-N limit one can express form factors as infinite sums over narrow resonances of the strong dynamics [33,34]. We assume that the Π i 1 form factors can be well approximated by considering only the contribution from the lightest of these resonances. We expect that Π i 1 has a pole at the mass of the lightest resonance m 2 i , and that the residue of this pole is equal to the square of the amplitude to create the resonance from the vacuum. This amplitude, f i , is equivalent to the decay constant of the resonance. This leads us to the following approximation for the fermionic Π i 1 : In the gauge case, this expression is modified to which now has a pole at p 2 = 0, since the broken SO(N )/SO(N − 1) currents can excite the Goldstones from the vacuum [21]. We approximate the Π 0 form factors with their tree level values. By inspecting (2.3) and (2.22), we see that to recover the tree level fermion and gauge Lagrangians we must have Π 0 = 1 in the fermionic case, and Π 0 = p 2 /g 2 in the gauge case, where g is the gauge coupling. Let us study the minimal model we can construct that leads to successful inflation. We will only need one external fermion -in this case we take the ψ R of Sec. 2. 
Then α and β will be given by Now we assume that Π R 1 and Π A 1 are given respectively by (5.1) and (5.2). With a single resonance, we cannot guarantee convergence of the integrals in (5.3) -generally this can be done by introducing more resonances and demanding that the form factors satisfy Weinberg sum rules [35,36]. However we can argue that, since our effective theory is only expected to be valid up to a scale Λ U V = 4πf , we should cut off the momentum integrals at p 2 = Λ 2 U V . Putting all this together, we find: where a = 2N c , and The approximate relation α 2β then implies a relationship between the parameters of the UV theory. If we demand that the quadratic cutoff dependence cancels, we obtain the relation and Inserting (5.6) into (5.7) we obtain , (5.8) which implies that m R < m A . If f A f , one finds that m R m A , i.e. there would be a degeneracy between fermionic and bosonic resonances. Note that this condition will be satisfied no matter the scale factor between α and β is, as long as they are proportional, α ∝ β. This kind of mass-matching situation [40] where resonances from different sectors acquire the same mass is reminiscent of what had been found in trying to build successful Technicolor models, namely Cured Higgsless [41] and Holographic Technicolor [42] models. Discussion and Conclusions The framework of slow-roll inflation has been corroborated to a good precision by the Planck data. This framework, however, suffers from an inflationary hierarchy problem, namely the strain of providing sufficient inflation while still satisfying the amplitude of the CMB anisotropy measurements. This balancing act requires a specific type of potential, with a width much larger than its height. This tuning is generically unstable unless some symmetry protects the form of the potential. In this paper we explored the idea that this potential could be related to the inflaton as a Goldstone boson, arising from the spontaneous breaking of a global symmetry. Another issue for inflationary potentials, including Goldstone Inflation, is that they are only effective descriptions of the inflaton physics. With the inflationary scale relatively close to the scale of Quantum Gravity, one expects higher-dimensional corrections to the inflationary potential. These corrections would de-stabilise the inflationary potential unless the model is small-field [43]. In other words, as the inflaton field value approaches M p the Effective Theory approach breaks down. We found out that in Goldstone Inflation a predictive effective theory is indeed possible, and it leads to specific predictions. For example, in single-field inflation, we computed the most general Coleman-Weinberg inflaton potential and learnt that 1.) Only the breaking of SO(N ) groups provide successful inflation and 2.) fermionic and bosonic contributions to the potential must be present and 3.) for fermions in single-index representations, a successful inflaton potential is given uniquely by V = Λ 4 (C Λ + α cos(φ/f ) + β sin 2 (φ/f )), with α ≈ 2β. When linking to UV completions of Goldstone Inflation, we have been able to show how relations among the fermionic and bosonic resonances are linked to the flatness of the potential. As we have developed a specific model for inflation, we were able to address the amount of tuning required to make it work, and found that it is not dramatic. Indeed, we found that the tuning is milder than that found in Supersymmetric models nowadays. 
Another advantage of this framework is the ability to examine the higher-order derivative terms in the Goldstone Lagrangian from several different points of view: modifications of the CMB speed of sound, constraints from unitarity and also axiomatic principles from Goldstone scattering. We have presented results in a rather generic fashion and for single-field inflation, and delegated to the appendices a discussion of a specific model of single-field inflation, and few examples of hybrid inflation which originate from this framework. There are other aspects of Goldstone Inflation which deserve further study. For example, in these models, hybrid inflation and reheating are quite predictive as the inflaton and waterfall fields come from the same object and naturally the inflaton can decay to other, lighter pseudo-Goldstones. Moreover, there may be interesting features of the phase transition causing the spontaneous breaking of the global symmetry, which we plan to investigate. then T 1 and T 2 are the broken generators. T 3 remains unbroken, and will generate the SO(2) gauge symmetry. A suitable gauge transformation then allows us to set φ 1 = φ, φ 2 = 0, and we can write Following (2.3) the effective Lagrangian for the SO(2) gauge boson is This leads to the Coleman-Weinberg potential Now we embed a fermion in an SO(3) spinor: The gamma matrices of SO (3) can be taken to be the Pauli matrices σ a . Thus the most general effective Lagrangian for the fermion is We can also construct models in which more than one physical Goldstone degree of freedom is left in the spectrum. This can be done by only gauging a subgroup of the unbroken SO(N − 1) symmetry. Let us look briefly at a simple example of such a model, in which we take the global symmetry breaking to be SO(5) → SO(4). In such a case we have four Goldstone bosons, and Σ is given by where we have φ = φâφâ, as before. If we gauge only SO(2) ∈ SO(4), taking for instance the gauged generator to be then the gauge freedom allows us to set φ 4 = 0. Following the same steps as before, the effective Lagrangian for the gauge field will be If we, as in Appendix A, consider the contribution from a single left-handed fermion, now embedded in an SO(5) spinor like so: then in fact the effective fermion Lagrangian will still be given by (A.11). Thus the Coleman-Weinberg potential will be given by V (φ) = α cos(φ/f ) + β φ 1 φ 2 sin 2 (φ/f ), (B.5) with α and β given by If we expand the trigonometric functions for small field excursions, we obtain, up to constant terms: We see that the three Goldstones have masses then the potential will be exactly as in (B.7), with φ 3 set to zero. We must also replace β → 2β, since the potential now receives contributions from two gauge bosons. We note further that if instead we gauged the generator which is symmetric in φ 1 and φ 2 .
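For concreteness, the structure of these appendix examples can be summarised as follows (a reconstruction consistent with the text; normalisations may differ from the paper). In the SO(3) → SO(2) example, after gauge-fixing φ₁ = φ, φ₂ = 0 the vector takes the form

    \Sigma \;=\; f \begin{pmatrix} \sin(\phi/f) \\ 0 \\ \cos(\phi/f) \end{pmatrix},

and inserting it into the gauge and fermion effective Lagrangians yields a Coleman-Weinberg potential of the same α cos(φ/f) + β sin²(φ/f) form discussed in Sec. 2.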
8,534.4
2015-03-26T00:00:00.000
[ "Physics" ]
Research on the application of GIS technology in the spatial status grooming of village areas The preparation of five levels and three categories of territorial spatial planning includes many aspects, and this article mainly deals with one of the levels and categories: detailed territorial spatial planning of village areas; it focuses on the preliminary work of planning preparation: the current situation combing of village territorial spatial areas. The article firstly outlines the basic application of GIS technology in the current situation combing; then introduces the relevant contents of the Third National Land Survey. Finally, with actual cases, we focus on the significant role played by GIS technology in the integration of land use data in the base period of village territorial space, the location of permanent basic farmland indicators, the location of ecological protection red line, and the statistics of conflicting patches of ecological environmental protection and basic farmland protection. Introduction The practice of five levels and three categories of territorial spatial planning is now in full swing in China, which contains a lot of work, so I will not repeat it here. Geddes, a Western humanist planning master, creatively proposed the general method of regional planning: "investigation-analysis-planning", which was later regarded as the general procedure of planning;The author believes that this point coincides with the words of a great Chinese man, Mao Zedong: "Without investigation, there is no right to speak". The author is convinced that without investigation, one cannot even speak, let alone carry out subsequent planning work. Any level of planning work can not engage in one-size-fits-all, not just to meet the laws and regulations and technical specifications can be; also cannot be because of time constraints, heavy tasks will only focus on the final planning results, while ignoring the decision to plan the results can be achieved "tailored", "into the countryside with the customs The key part of "tailor-made" and "customizable" planning results: the overall grasp of the current situation of the project and resource conditions. This paper mainly focuses on the use of GIS technology in the completion of the current situation of land use in the village, the location of permanent basic farmland indicators, the ecological protection red line, the statistics of conflicting patches, etc. In this paper, we would like to make some limited discussion on the use of GIS technology in the village area. The concept of GIS GIS is an evolving concept, and the father of GIS, Roger Tomlinson (1966), first proposed GIS as a digital system for analysing and manipulating geographic data in a comprehensive manner. At present, experts and scholars worldwide prefer the definition of GIS by the Federal Institute for the Coordination of Digital Maps (FICCDC): "a system consisting of computer hardware, software, and different methods designed to support the acquisition, management, processing, analysis, modelling, and display of spatial data in order to solve complex planning and management problems. " . Specific uses of GIS in status quo research and analysis (1) Status quo research stage: GIS can be used to manage the status quo data (e.g., land use status quo data, road data, municipal facilities data, etc.). Use handheld GIS equipment to assist in site investigation. 
Handheld devices integrating GPS, RS and GIS can tell the planner the location and surrounding geographic environment, as well as relevant geographic data, so that the planner can grasp the site situation faster and more accurately. (2) Current situation analysis stage: GIS overlay analysis function can be used to evaluate the suitability of the site; make various types of current situation drawings; use the spatial statistics function to explore the spatial distribution pattern of geographic things; analyse the spatial structure; simulate three-dimensional terrain and landscape, virtual city scenes; analyse the landscape view domain to produce urban evolution animation, etc. The Third National Land Survey Land resources are the important material basis for national economic construction and the fundamental of land spatial planning. The basic survey and special survey of national land resources is the basis for the evaluation of the bearing capacity of resources and environment and is of great significance to the good spatial planning of the national land. The main task of the survey of national land resources is to find out the distribution and scope of various land use patterns projected on the surface as well as the basic situation of development, utilization and protection, and to grasp the most basic national background situation of national land resources and common features. Overview of the "Three Surveys" As a major national survey, the Third National Land Survey (NLS-3) aims to comprehensively refine and improve the national basic land use data based on the results of the Second National Land Survey (NLS-2). The purpose is to comprehensively refine and improve the basic data of national land use based on the results of the Second National Land Survey (referred to as "Second Survey"). The "Three Surveys" is an extremely important basic national survey of China's development into a new era, which is related to the overall situation of ecological civilization construction; it is related to a series of the most basic natural resource conditions, national conditions and national strength after the first century goal is achieved and towards the second century goal of modernization; it is also the most complex and important survey of the entire natural resource system. It is also the most complex and important basic work of the whole natural resource system [1] . "Three tuning" important features (1) The results of the "three surveys" are more realistic and reliable. Since the launch of the "three surveys" in 2017, the initial survey results were formed at the end of 2019, and the "three surveys" results were finally formed with December 31, 2019 as the unified point of time through the unified point of time update work. The whole survey process makes full use of satellite remote sensing technology and "Internet evidence" technology, adopts the integration of internal and external database construction technology, takes counties (districts) as the survey unit, and the results pass county-level self-inspection, provincial pre-inspection, national verification and multiple rounds of inspection, so the results are true, accurate and reliable. (2) The survey results cover the whole area: the classification of land use status quo is a classification of land use types from the perspective of resource development and utilization, which is a classification with relatively wide coverage and the largest number of resource types. 
The "three surveys" work classification is in line with the current land use classification system. The working classification is based on GB /T 21010-2017 "Land Use Status Classification", and some of the land types have been refined and consolidated, with 12 primary and 73 secondary classifications. The primary classification includes wetland, arable land, plantation land, forest land, grassland, commercial and service land, industrial and mining land, residential land, public administration and public service land, special land, transportation land, water and water conservancy facilities land, and other land. The survey results formed based on this cover the whole area and all elements. (3) More detailed survey content: the Second National Land Survey (abbreviated as "Second Survey") and the change surveys in the past years did not carry out the detailed work of land use within the urban and village areas , in order to improve the level of natural resource management and achieve accurate management, the Third National Land Survey (abbreviated as In order to improve natural resources management and achieve accurate management, the Third National Land Survey (the "Survey") explicitly proposes to conduct "survey on the current status of land use in urban and village areas", and to find out the land use status of commercial services, industry, storage and other land types in urban and village areas. In addition, the "three surveys" labelled some land attributes, such as cultivated land with planting attributes and plantation attributes [2] . (4) The importance of agriculture and ecosystems: the "three surveys" pay more attention to the ecosystem, adding a class of wetlands; and improve the accuracy of agricultural applications of the survey. The "three surveys" are oriented to the fine management of natural resources and the evaluation of land conservation and intensification, and expand the content of special surveys, including the detailed survey of arable land, the survey and evaluation of the quality level of arable land and the evaluation survey of the grading of arable land, so as to provide support for the "trinity" of arable land quantity, quality and ecology It provides support for the "three-inone" protection and management of arable land in terms of quantity, quality and ecology. This is especially important for the in-depth development of village territorial spatial planning to lay an important foundation for the current situation. GIS system has natural advantages in data collection, processing, analysis and visualization expression, which becomes an important tool for the current state of the land resources investigation. In the initial stage of the current situation investigation of national land resources, the powerful data organization and management capability of GIS system can be used to establish the spatial database of national land resources and provide data basis for national land spatial planning. Sorting out the current situation of land and space utilization in the village area The main vector data covered by the territorial spatial planning include the current status data of land use and planning data. 
The specific datasets are as follows: (1) Third National Land Survey data; (2) ecological protection red line; (3) permanent basic farmland; (4) arable land use data; (5) forestry sub-compartment (small class) polygons; (6) land use control data; (7) land change database; (8) land supply data; (9) land use planning data; (10) urban master plan data; (11) urban regulatory plan data; (12) urban flexible development zones; (13) mineral resources distribution data; and (14) other special planning data. This paper mainly uses and analyses the first three status quo databases [3]. Sorting out the current state of territorial space utilisation at the village level covers the following tasks: integrating the base-period land data of the village territory, locating the permanent basic farmland indicators, locating the ecological protection red line, and producing statistics on patches that conflict with ecological protection and basic farmland protection. All of these tasks rely on GIS technology. The author takes Fengsheng Village in Xiaochang County, Hubei Province and Panlong Village in Shennongjia Forestry District, Hubei Province as examples in the discussion that follows.
Village land use in the base period
To sort out the land use categories and scale of the village area, the "three surveys" data must be retrieved and integrated on the GIS platform. Because the "three surveys" data use the current land use classification system (12 primary and 73 secondary classes), while territorial spatial planning uses its own land use classification (24 primary, 106 secondary and 39 tertiary classes), a one-to-one correspondence of all classes is impossible. Therefore, after importing the "three surveys" data into the GIS platform, base conversion rules are applied first: (1) direct conversion (one-to-one or many-to-one); (2) refinement survey (one-to-many); and (3) no correspondence (a refinement survey is required). Classes in the first case are converted directly, while the other two cases are converted in combination with field surveys. On this basis, the base-period land data required for spatial planning are formed from the "three surveys" data, and the GIS platform is then used to classify the current land use of the village territory and complete the statistical table and the land use status map [4].
Table 1. Statistics of land use classification in Fengsheng Village in the base period of the territorial space.
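A minimal geopandas sketch of this conversion and base-period statistics step might look as follows; the layer paths, field names and the class mapping entries are placeholders for illustration, not an official crosswalk between the "three surveys" codes and the planning classification.

import geopandas as gpd

# Illustrative sketch: convert "three surveys" land-use codes to planning classes
# for the village base period, then summarise areas. All names below are assumed.
DIRECT_MAP = {            # case (1): one-to-one / many-to-one classes convert directly
    "0101": "arable_land",
    "0201": "garden_land",
    "0301": "forest_land",
}
REFINE_PREFIXES = {"07", "08"}   # cases (2)/(3): classes needing a field refinement survey

survey = gpd.read_file("fengsheng_three_surveys.gpkg", layer="landuse")
survey["plan_class"] = survey["DLBM"].map(DIRECT_MAP)            # DLBM: survey land-class code
survey["needs_field_check"] = (
    survey["DLBM"].str[:2].isin(REFINE_PREFIXES) | survey["plan_class"].isna()
)

# area statistics for the base-period land-use table (use a projected CRS in metres)
survey = survey.to_crs(epsg=4547)
stats_ha = (survey.assign(area_ha=survey.geometry.area / 10_000)
                  .groupby("plan_class", dropna=False)["area_ha"].sum().round(2))
print(stats_ha)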
Permanent basic farmland
After integrating the base-period land data of Fengsheng Village, the permanent basic farmland indicator data are retrieved from the higher-level planning database, which shows that the basic farmland control total for Fengsheng Village is 192.29 hectares. The GIS layer of permanent basic farmland in Fengsheng Village is then overlaid on the base-period land use map. Wherever the overlapping base-period land is not of the arable land type, the patch is a land conflict patch, i.e. a place where permanent basic farmland has been occupied. Finally, the conflict patches are classified and counted on the GIS platform to obtain the classified statistics of occupied permanent basic farmland in Fengsheng Village. As shown in Table 2, about 20.33 hectares of the permanent basic farmland designated in the higher-level plan have been occupied (10.57% of the total), and most of the occupation (about 90%) is by garden land and forest land, i.e. non-construction land; according to the basic principle that "the total amount of basic farmland remains unchanged", direct reclamation back to basic farmland can be considered at the later planning stage [5]. The remainder consists of rural homesteads and other construction land occupying basic farmland. For these indicators, the later planning stage can consider moving them out of the basic farmland indicators and designating other sites as basic farmland, so as to finally achieve the "balance" of basic farmland.
Ecological protection red line
After integrating the base-period land data of Panlong Village, the ecological protection red line data are retrieved from the higher-level planning database, which shows that the total area of the ecological protection zone of Panlong Village is 12,022.65 hectares. The GIS layer of the ecological protection red line of Panlong Village is then overlaid on the base-period land use map. Wherever the overlapping base-period land is of the dry land, rural residential land or mining land type, the patch is a land conflict patch, i.e. a place where the ecological protection zone has been occupied. Finally, the conflict patches are classified and counted on the GIS platform to obtain the classified statistics of the occupied ecological protection red line in Panlong Village. As shown in Table 3, about 90,616 square metres of the ecological protection zone designated in the higher-level plan are occupied (0.75% of the total); according to the basic principle that "the ecological protection red line may only increase and never decrease", all of these sites should be withdrawn at the later planning stage.
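A sketch of the overlay step used for both checks, again with geopandas and placeholder file and field names; the same pattern serves the permanent basic farmland check (keep non-arable overlaps) and the ecological red line check (keep dry land, rural residential and mining overlaps).

import geopandas as gpd

# Illustrative overlay: base-period land use vs. permanent basic farmland boundary.
landuse  = gpd.read_file("fengsheng_base_period.gpkg").to_crs(epsg=4547)
farmland = gpd.read_file("fengsheng_basic_farmland.gpkg").to_crs(epsg=4547)

overlap = gpd.overlay(landuse, farmland, how="intersection")
# overlapping patches whose current use is not arable land occupy basic farmland
conflicts = overlap[overlap["plan_class"] != "arable_land"].copy()
conflicts["area_ha"] = conflicts.geometry.area / 10_000

summary = conflicts.groupby("plan_class")["area_ha"].sum().sort_values(ascending=False)
print(summary)                                   # classified statistics of occupied farmland
print(f"total occupied: {summary.sum():.2f} ha")

# for the ecological protection red line, intersect with the red line layer instead and
# keep only patches classified as dry land, rural residential land or mining land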
Conclusion
Village territorial spatial planning bears on rural revitalisation and on the personal interests of the majority of farmers, so understanding the existing conditions thoroughly is the essential prerequisite for good planning. Current village planning also differs from past practice in its stronger emphasis on "multi-plan integration", which covers the entire village territory and therefore requires a firm grasp of its current situation. To sort out the current state of village territorial space properly, GIS technology must be fully combined with the Third National Land Survey data, the permanent basic farmland indicator requirements and the ecological protection red line requirements; overlay processing on the GIS platform then yields the locations and areas of basic farmland and ecological red line occupied in the base year, producing an accurate and detailed working base map that lays an important foundation for the subsequent village territorial spatial planning work.
3,848.6
2021-01-01T00:00:00.000
[ "Computer Science", "Environmental Science", "Geography" ]
Edge Guided Context Aggregation Network for Semantic Segmentation of Remote Sensing Imagery : Semantic segmentation of remote sensing imagery (RSI) has obtained great success with the development of deep convolutional neural networks (DCNNs). However, most of the existing algorithms focus on designing end-to-end DCNNs, but neglecting to consider the difficulty of segmentation in imbalance categories, especially for minority categories in RSI, which limits the performance of RSI semantic segmentation. In this paper, a novel edge guided context aggregation network (EGCAN) is proposed for the semantic segmentation of RSI. The Unet is employed as backbone. Meanwhile, an edge guided context aggregation branch and minority categories extraction branch are designed for a comprehensive enhancement of semantic modeling. Specifically, the edge guided context aggregation branch is proposed to promote entire semantic comprehension of RSI and further emphasize the representation of edge information, which consists of three modules: edge extraction module (EEM), dual expectation maximization attention module (DEMA), and edge guided module (EGM). EEM is created primarily for accurate edge tracking. According to that, DEMA aggregates global contextual features with different scales and the edge features along spatial and channel dimensions. Subsequently, EGM cascades the aggregated features into the decoder process to capture long-range dependencies and further emphasize the error-prone pixels in the edge region to acquire better semantic labels. Besides this, the exploited minority categories extraction branch is presented to acquire rich multi-scale contextual information through an elaborate hybrid spatial pyramid pooling module (HSPP) to distinguish categories taking a small percentage and background. On the Tianzhi Cup dataset, the proposed algorithm EGCAN achieved an overall accuracy of 84.1% and an average cross-merge ratio of 68.1%, with an accuracy improvement of 0.4% and 1.3% respectively compared to the classical Deeplabv3+ model. Extensive experimental results on the dataset released in ISPRS Vaihingen and Potsdam benchmarks also demonstrate the effectiveness of the proposed EGCAN over other state-of-the-art approaches. Introduction Semantic segmentation is a typical computer vision problem that processes raw data such as RGB images, to be specific, converting them into masks with different highlighted regions of interest where each pixel of the image is assigned as a unique category label. In recent years, semantic segmentation has become one of the key issues in remote sensing imagery parsing for its widespread applications, including road extraction [1,2], urban planning [3,4], object detection [5,6], and change detection [7], to name a few. Traditional segmentation methods mainly applied handcrafted features to assign pixelwise category labels, ranging from classic approaches such as logistic regression [8], distance measures [9] and clustering [10], to more superior models based on machine learning such as the support vector machine (SVM) [11], random forest (RF) [12], conditional random fields (CRFs) [13], and multi layer perceptron (MLP) [14]. Nevertheless, due to the restricted dependency extraction and expressive ability of the above mentioned models based on handcrafted descriptors, these methods failed to achieve considerable performance in challenging cases. 
For the past few years, DCNNs have been successful in natural image semantic segmentation and achieved excellent performance [15]. The CNN-based methods formulate the trainable tasks as an end-to-end paradigm and contain the powerful feature representation. One solution concentrates on designing an encoder-decoder architecture [16][17][18][19], which can keep more detailed information and obtain clearer object edges by gradually fusing low-level and high-level semantic features. Another solution is to exploit the elaborate contextual information. For instance, different-scale dilated convolutional layers or pooling functions are appended to the top of the network to incorporate multi-scale contexts and features in certain works [20][21][22]. There are several studies [23][24][25] to aggregate richer context information to invent large-size kernels or explore a context encoding module. To enhance the discriminant ability of feature representations, the attention mechanism was introduced into semantic segmentation [26,27]. The attention mechanism models the internal process of biological observation, a mechanism that aligns internal experience and external sensation to increase the fineness of observation in some salient areas [28,29]. It is also well known that attention plays a vital role in human perception [30,31]. Attention not only tells where to focus, but also improves the representation of meaningful areas. With the help of the powerful semantic feature expression ability brought by the attention mechanism, the accuracy of semantic segmentation has been further improved [32,33]. Based on the success of CNNs in processing natural image semantic segmentation, they have been widely explored for semantic segmentation of RSI [34,35]. In general, compared with natural images, remote sensing images are featured by complex data attributes, and various types of ground objects are diverse and easy to mix. Due to diverse topological shapes and variable scales, the semantic segmentation of RSI encounters barricades to some extent. Although the existing DCNN models perform well, semantic labeling on RSI is still challenging and difficult. Several solutions like multi attention network [36], adaptive tree CNN [37], and multi-source data fusion [38] for semantic segmentation of RSI are proposed in several research works. However, most of the current algorithms focus on learning a complicated mapping through an end-to-end DCNN, neglecting to consider and analyze the segmentation of the categories taking a small percentage of pixels in RSI, limiting the performance of RSI semantic segmentation. For example, road elements in RSI usually account for a relatively low proportion and the scale of road distribution is variable, making it difficult to effectively extract road features through exploiting CNNs and resulting in low road segmentation accuracy. Meanwhile, most of the current algorithms show a similar problem in modeling contextual information. To solve this issue, many researches have been conducted to analyze contextual dependencies, and the existing solutions are generally classified into two types. One approach is to utilize a pyramid module that integrates multi-scales feature information just like atrous spatial pyramid pooling (ASPP) in Deeplab. Another approach is to express long-interdependence from a channel or spatial aspect, such as the Non Local module. However, these current methods lack specific prior along the edge areas to aggregate contextual information. 
In this paper, a novel edge guided context aggregation network (EGCAN) is proposed for semantic segmentation of RSI to address the aforementioned issues. The Unet is adopted as backbone network to generate a dense prediction containing features of all object categories. Meanwhile, an edge guided context aggregation branch and minority categories extraction branch are presented in the proposed framework EGCAN according to their roles respectively. Specifically, the edge guided context aggregation branch contains three modules: edge extraction module (EEM), dual expectation maximization attention module (DEMA), and edge guided module (EGM). EEM estimates the binary edge information of remote sensing images, then the edge information and the global semantic features with different scales extracted from encoder part of the backbone are incorporated and fed into DEMA for sufficient context aggregation. Based on that, the edge area attention map generated by DEMA is fed back to EGM embedding in the different parts of decoder process to emphasize those error-prone pixels in the edge regions. Thus, the edge guided context aggregation branch can keep global semantic comprehension and enhances the representation of the edge features along the spatial and channel dimensions. Meanwhile, the minority categories extraction branch contains a hybrid spatial pyramid pooling module (HSPP), which is presented to acquire rich multi-scale contextual information to distinguish categories which take a small percentage and background; thus, a better segmentation result is achieved on the minority categories. Extensive experiments on the dataset released in the TianZhi Cup Artificial Intelligence Challenge, ISPRS Vaihingen, and Potsdam benchmarks demonstrate that the proposed algorithm can effectively improve the accuracy of semantic segmentation of RSI. The main contributions of this paper are summarized as follows: 1. A novel architecture named edge guided context aggregation network (EGCAN) is proposed for RSI semantic segmentation. The advantage of the proposed network is that the edge information is employed as a priori knowledge to guide remote sensing image segmentation. The edge information is beneficial for effectively distinguishing background and different categories, especially the categories occupying a small percentage. 2. A novel edge guided context aggregation branch is invented containing three modules, edge extraction module (EEM), dual expectation maximization attention module (DEMA) and edge guided module (EGM) to promote the accuracy of edge predictions, which enhances edge feature interdependencies and representation ability of the network along the spatial and channel directions. 3. A hybrid spatial pyramid pooling (HSPP) module is investigated in minority categories segmentation branch, which is comprised of different-scale dilated convolutions and pooling operations to capture rich multi-scale contextual information for improving the proposed model's discriminative capability of minority categories. 4. Extensive experimental results on the dataset released in the TianZhi Cup Artificial Intelligence Challenge, ISPRS Vaihingen, and Potsdam benchmarks demonstrate the superiority of the proposed EGCAN over other state-of-the-art approaches. Related Work Semantic segmentation is a fundamental and challenging task in the field of computer vision involving a deep semantic understanding of various types of images. 
In this section, methods regarding semantic segmentation of nature scenes and remote sensing images and attention mechanism relevant to our proposed method are reviewed. Semantic Segmentation As an extension of classic CNN, the fully convolutional neural network (FCN) that can learn the mapping relationship between pixels without extracting region suggestions aims to make classic CNN accept images of any size as input. Long et al. [15] built the first FCN in semantic segmentation. Utilizing the powerful representation learning ability of CNNs, FCN greatly surpassed the traditional methods based on hand-crafted features. Subsequently, several model variants were proposed to boost contextual extraction. For example, PSPNet [21] designed a pyramid pooling module (PPM) to exploit the global context information and produced a superior pixel-level prediction result. DeeplabV2 [20] aggregated contextual information via an astrous spatial pyramid pooling (ASPP) module constituted of parallel dilated convolutions with different dilated rates. Deeplabv3 [22] extended ASPP with image-level feature to further obtain more contexts. Meanwhile, to reduce computational complexity, FastFCN [39] further introduced the joint pyramid up sampling (JPU) module as a substitute for extended convolution. Typically, the encoder-decoder networks, such as convolutional networks for biomedical image segmentation (Unet) [17], encoder-decoder with atrous separable convolution for semantic image segmentation (DeepLabv3+) [40], a deep convolutional encoder-decoder architecture for image segmentation (SegNet) [18], and semantic prediction guidance for scene parsing (SPGNet) [41], established skip-connection, explicitly connecting encoder layers with decoder layers to gradually recover the spatial information, thus improving the models' accuracies and addressed the problem of vanishing gradients. Similarly, Yu et al. [42] preserved rich spatial information and obtained a larger receiving field by proposing spatial and context paths, which solved the high computational cost associated with high-resolution feature maps in the U-shaped architecture. Attention Mechanism The attention mechanism intended to elevate the effectiveness of certain models is widely applied to machine translation, image classification, semantic segmentation, etc. The attention-based networks and their variants have been proposed to tackle the challenge in semantic segmentation [43,44]. Inspired by the outstanding performance of the attention mechanism in machine translation originally proposed by Bahdanau et al. [43], a Squeeze-and-Excitation Network (SENet) was proposed by Hu et al. [44], introducing global average pooling to aggregate the feature maps. Then, the feature maps were simplified into a single channel descriptor, thus highlighting the most distinguishing features. Inspired by the self-attention mechanism, to explore the long-range dependency encouraged by attention-based networks utilizing Non Local module in semantic segmentation, the Double Attention Networks (AA2-Net) [45], Dual Attention Network (DANet) [27], Point-wise Spatial Attention Network (PSANet), Object Context Network (OCNet) [46], and Co-occurrent Feature Network (CFNet) [47] were proposed. Later on, Li et al. [48] further enhanced the attention mechanism's efficiency by combining self-attention and EM algorithm [49]. RSI Semantic Segmentation The development of remote sensing technology has made it easy to obtain a large number of high-quality remote sensing images. 
Meanwhile, encouraged by the progress of deep learning (DL) in natural image processing, a variety of DL based methods show promise when applied to RSI, improving contextual understanding. To address the varying orientations of RSI, Marcos et al. [50] developed the Rotation Equivariant Vector Field Network (RotEqNet), which encodes rotation equivariance. TreeUNet combines adaptive hierarchies and a deep neural network in a unified deep learning structure; likewise, a similar unified structure that combines decision trees and CNNs was proposed in the work of ANT [52]. Liu et al. [51] proposed a novel end-to-end self-cascaded network (ScasNet), in which multiscale contexts captured by the CNN encoder are aggregated in a sequential global-to-local manner to improve labeling coherence, especially for confusing artificial objects, while low-level features from the shallow layers of the CNN help refine the objects. The Superpixel-enhanced Deep Neural Forest (SDNF) [53] was proposed to tackle the difficulty of distinguishing ground object categories caused by the complexity of their spectra. Furthermore, semantic segmentation and semantically informed edge detection were combined to clarify class boundaries in the work of Marmanis et al. [54].

Overview

As shown in Figure 1, the proposed edge guided context aggregation network (EGCAN) consists of three parts according to their respective roles: the mainstream Unet, which combines encoder parts {E1, E2, E3, E4} and decoder parts {D1, D2, D3, D4}; the edge guided context aggregation branch, which contains the edge extraction module (EEM), the dual expectation maximization attention module (DEMA), and the edge guided module (EGM); and the minority categories extraction branch. In the edge guided context aggregation branch, the EEM is first employed to obtain the edge feature map. The edge information I1, derived from a Canny edge map refined by a morphological dilation operation, is combined with the multi-scale semantic information {I2, I3, I4} generated from the four stages of the backbone Resnet101 [55]. The proposed DEMA is then utilized to aggregate the context of the edge feature map. The result of DEMA is fed into the decoder parts via the EGMs, which identify the object edge regions and relearn them in a gradual manner. In the minority categories extraction branch, the hybrid spatial pyramid pooling (HSPP) module is exploited to obtain multi-scale spatial information by adjusting its scale and rate parameters. The branch then shares the same decoder structure as the mainstream, except for the edge guided modules, to obtain the minority categories feature map. Lastly, the features of the mainstream and the minority categories extraction branch are fused in an ensemble manner to obtain a better segmentation result. In summary, EGCAN contains three parts: the mainstream, the edge guided context aggregation branch, and the minority categories extraction branch; the input of EGCAN is an RGB image and the output is the corresponding segmentation result.
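The overall data flow just described can be summarized in a short sketch. The following PyTorch skeleton is only an illustration of the wiring between the three parts, not the authors' released implementation; all submodule constructors and their call signatures are hypothetical assumptions.

```python
import torch
import torch.nn as nn

class EGCANSketch(nn.Module):
    """High-level EGCAN forward pass (all submodules are injected callables)."""
    def __init__(self, backbone, eem, dema, main_decoder, minority_branch):
        super().__init__()
        self.backbone = backbone                # e.g. ResNet101 returning stage features
        self.eem = eem                          # edge extraction module
        self.dema = dema                        # dual EM attention module
        self.main_decoder = main_decoder        # Unet decoder with embedded EGMs
        self.minority_branch = minority_branch  # HSPP + shared decoder structure

    def forward(self, rgb, edge_map):
        feats = self.backbone(rgb)              # multi-scale features I2, I3, I4, ...
        x = self.eem(edge_map, feats)           # fuse binary edge map I1 with features
        edge_attn = self.dema(x)                # edge area attention map
        main_logits = self.main_decoder(feats, edge_attn)  # EGMs reweight edge pixels
        minority_logits = self.minority_branch(feats)      # low-percentage categories
        return main_logits, minority_logits     # fused by ensemble voting at the end
```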
Edge Extraction Module (EEM)

Since the edge feature information is utilized to drive the edge context aggregation procedure and is connected with the mainstream semantic features, the EEM directly adopts the intermediate representations from the backbone as its input. This is beneficial for fully utilizing low-level features and high-level semantic information, as the connections between the mainstream and the edge stream allow different levels of information to flow through the network. As shown in Figure 1, feature maps are obtained from every stage of the backbone Resnet101 by a 3 × 3 convolution, followed by bilinear upsampling to the same spatial size as the input feature maps; the edge extraction module thereby obtains the feature maps {I2, I3, I4}. To obtain cleaner edge information and feed it into the decoder part of the network, Canny edge detection and a dilation operation are used to produce the binary edge map I1. As shown in Figure 1, the EEM concatenates the binary edge map with the feature maps from the backbone:

X = ψ(I1, I2, I3, I4),   (1)

where X denotes the result of the EEM and ψ denotes a series of operations: concatenation, 1 × 1 convolution, BatchNorm, and Sigmoid.

Dual Expectation Maximization Attention Module (DEMA)

The edge guided context aggregation branch introduces the expectation maximization (EM) algorithm into the self-attention mechanism, which runs the attention computation over a compact set of bases instead of every pixel position of the whole image. The EM algorithm finds the maximum likelihood estimate (or the maximum a posteriori estimate) of the parameters of a probabilistic model that depends on hidden variables that cannot be observed directly, alternating between an E step and an M step [48]. As illustrated in Figure 2, the E step computes the spatial attention map Z, using the existing estimate of the hidden variables to evaluate the likelihood; the M step updates the bases µ by maximizing the likelihood obtained in the E step. The parameter estimates gained at the M step are used in the following E step, and the algorithm alternates the two steps until a convergence criterion is satisfied.

As illustrated in Figure 2, the dual expectation maximization attention (DEMA) module was developed to explore the feature correlations along both the spatial and channel dimensions. The feature map X, the output of the edge extraction module, has size C × H × W, where C is the number of channels and H and W denote the height and width, respectively. First, the SEMA module reshapes X into the form N × C, where N = H × W. The bases µ are initialized as K vectors of length C, and the EM iteration is then executed (the E step generates the attention map, the M step updates µ). In the t-th iteration, the spatial attention map Z is expressed in the form of an exponentiated inner product as

Z_{nk}^{(t)} = exp(λ x_n^T µ_k^{(t−1)}) / Σ_{j=1}^{K} exp(λ x_n^T µ_j^{(t−1)}),   (2)

where 1 ≤ n ≤ N, 1 ≤ k ≤ K, and λ is a hyper-parameter controlling Z. Next, µ is updated through Z as

µ_k^{(t)} = Σ_{n=1}^{N} Z_{nk}^{(t)} x_n / Σ_{n=1}^{N} Z_{nk}^{(t)}.   (3)

In order to ensure that the learning of µ is stable, the L2 norm is adopted to normalize µ in each update. After T iterations, the spatial feature X̃_s is obtained by reconstructing from the final Z and µ as

X̃_s = Z^{(T)} µ^{(T)}.   (4)
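The E/M alternation in Equations (2) to (4) is compact in code. Below is a minimal PyTorch sketch of the spatial EM attention step (SEMA), following the EM attention formulation of Li et al. [48]; the function name, the fixed base initialization, and the hyper-parameter values are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sema(x, mu, T=3, lam=1.0):
    """Spatial EM attention sketch.
    x:  (N, C) pixel features with N = H*W; mu: (K, C) bases.
    Returns the reconstructed features (N, C) and the updated bases."""
    for _ in range(T):
        # E step, Eq. (2): responsibilities via softmax of scaled inner products
        z = F.softmax(lam * x @ mu.t(), dim=1)                 # (N, K)
        # M step, Eq. (3): weighted mean of pixel features per base
        mu = (z.t() @ x) / (z.sum(dim=0).unsqueeze(1) + 1e-6)  # (K, C)
        # L2-normalize the bases for stable learning
        mu = F.normalize(mu, dim=1)
    # Reconstruction, Eq. (4): X_tilde_s = Z mu
    x_tilde = z @ mu                                           # (N, C)
    return x_tilde, mu

# Usage sketch: a 64x64 feature map with 256 channels and K = 32 bases
x = torch.randn(64 * 64, 256)
mu0 = F.normalize(torch.randn(32, 256), dim=1)
x_tilde, mu = sema(x, mu0)
```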
Similar to SEMA, the CEMA module reshapes X̃_s to R^{C×N}. The bases ν are initialized in R^{N×J}, and the EM iteration is then performed. In the t-th iteration, the channel attention map F is represented in the form of an exponentiated inner product as

F_{cj}^{(t)} = exp(θ x̃_c^T ν_j^{(t−1)}) / Σ_{i=1}^{J} exp(θ x̃_c^T ν_i^{(t−1)}),   (5)

where 1 ≤ c ≤ C, 1 ≤ j ≤ J, and θ is a hyper-parameter controlling F. Next, ν is renewed according to F as

ν_j^{(t)} = Σ_{c=1}^{C} F_{cj}^{(t)} x̃_c / Σ_{c=1}^{C} F_{cj}^{(t)}.   (6)

To make sure that the learning of ν is stable, the L2 norm is likewise adopted to normalize ν in each iteration. After T iterations, the channel feature X̃_c is obtained by reconstructing from the final F and ν as

X̃_c = F^{(T)} (ν^{(T)})^T.   (7)

Ultimately, the refined features X̃_s and X̃_c are reshaped to R^{C×H×W} and combined with X to generate the edge area attention map.

Edge Guided Module (EGM)

As previously stated, the edges of remote sensing imagery contain pixels that are hard to distinguish in the semantic segmentation task. Thus, the results of DEMA are sent into the EGMs in the decoder part of the mainstream. The EGM retrains the pixels along the edge regions and remodels the edge feature space, which helps to improve the discriminative ability for edge pixels. As shown in Figure 1, the decoder feature of EGCAN, denoted D_n, is fed into the EGM, which computes τ(D_n), where τ denotes a 1 × 1 convolution, BatchNorm, and ReLU. An upsampling step follows; to maintain consistent dimensions, the upsampling ratio n × n is predetermined (set to 16, 8, and 2 in EGM1, EGM2, and EGM3, respectively). The edge feature is then used for reconstruction as

f(D_n) = τ(D_n) ⊗ X̃,   (8)

where ⊗ denotes element-wise multiplication and X̃ is the result of DEMA. Guided by the edge area attention map, f(D_n) accentuates the pixels along the edges, which are hard to distinguish. Next, the edge guided module combines f(D_n) with D_n by means of a downsampling step and a concatenation; the downsampling ratio m × m is set equal to that of the upsampling operation. To effectively utilize the advantages of the edge area attention map, the decoder part of EGCAN gradually enhances the representation ability of the features:

f_n = ξ(Cat(f(D_n), D_n)),   (9)

where ξ is formed by convolution layers (i.e., a 1 × 1 convolution, BatchNorm, and ReLU) used to reduce the number of channels while maintaining resolution, "Cat" refers to the concatenation operation used to fuse f(D_n) and D_n, and f_n is the output of EGM_n.
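Equations (8) and (9) translate into a few lines of code. The following PyTorch sketch of one EGM stage is an illustration under stated assumptions (the channel sizes, the scale factor, and the use of bilinear interpolation for resizing are hypothetical), not the paper's reference code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EGMSketch(nn.Module):
    """One edge guided module stage (illustrative)."""
    def __init__(self, dec_ch, attn_ch, scale):
        super().__init__()
        self.scale = scale  # e.g. 16, 8, or 2 for EGM1..EGM3
        # tau: 1x1 conv + BN + ReLU applied to the decoder feature D_n
        self.tau = nn.Sequential(nn.Conv2d(dec_ch, attn_ch, 1),
                                 nn.BatchNorm2d(attn_ch), nn.ReLU(inplace=True))
        # xi: 1x1 conv + BN + ReLU fusing Cat(f(D_n), D_n) back to dec_ch channels
        self.xi = nn.Sequential(nn.Conv2d(attn_ch + dec_ch, dec_ch, 1),
                                nn.BatchNorm2d(dec_ch), nn.ReLU(inplace=True))

    def forward(self, d_n, x_tilde):
        # Upsample tau(D_n) to the attention-map resolution, then apply Eq. (8)
        t = F.interpolate(self.tau(d_n), scale_factor=self.scale,
                          mode="bilinear", align_corners=False)
        f_dn = t * x_tilde  # element-wise product with the DEMA output
        # Downsample back to D_n's resolution and fuse with D_n, Eq. (9)
        f_dn = F.interpolate(f_dn, size=d_n.shape[-2:],
                             mode="bilinear", align_corners=False)
        return self.xi(torch.cat([f_dn, d_n], dim=1))
```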
Minority Categories Extraction Branch

As stated in the introduction, current algorithms neglect the segmentation of categories that occupy a small percentage of remote sensing imagery, which makes it difficult for existing CNN-based methods to extract contextual features effectively. At the same time, the diverse topological shapes and varying distribution scales of minority category elements further limit their segmentation accuracy. Global average pooling and dilated convolution have proven to be powerful tools for capturing contextual characteristics, and multi-scale spatial information can be obtained by adjusting their scale and rate parameters. Accordingly, a hybrid spatial pyramid pooling (HSPP) module is investigated in the minority categories extraction branch of the proposed EGCAN network. Figure 3 depicts the HSPP module, which is comprised of two parallel different-scale dilated convolutions and global average pooling operations. Its input is the output feature of the dilated Resnet101 backbone. To reduce computational complexity, the HSPP employs a 1 × 1 convolution to reduce the channel dimension of the feature for each pooling operation, while the dilated convolutions apply a smaller number of filters. The subsequent upsampling uses bilinear interpolation to restore the same spatial size as the input feature map. The features at different scales and rates are then concatenated into the final hybrid spatial pyramid feature. At the end of the proposed network, EGCAN fuses the results of the mainstream and the minority categories extraction branch via an ensemble voting strategy.

Experiments and Results

In this section, the effectiveness of the proposed method is validated on a variety of datasets. Section 4.1 introduces the basic experimental conditions and settings; Section 4.2 briefly describes the benchmark datasets; Section 4.3 introduces the evaluation metrics; and Section 4.4 evaluates EGCAN on the Tianzhi Cup AI Challenge Dataset and the ISPRS Vaihingen and Potsdam datasets, including comparisons between the proposed method and other classic methods.

Experimental Settings

The hardware and system configuration of the laboratory server used in our experiments is shown in Table 1. Essential packages include Python 3.6, CUDA 9.0, and Pytorch 1.1.0, among others. Random image operations such as rotation, flipping, brightness adjustment, and noise addition were adopted for data augmentation to improve the generalization performance of the proposed method. Furthermore, owing to sample imbalance in certain datasets, sample equalization was introduced to enhance the effectiveness of the training process. To avoid problems caused by hardware resource limitations, the images were cropped into patches of 512 × 512 resolution with a stride of 256 pixels in both rows and columns. For the Tianzhi Cup AI Challenge Dataset, 6422 images are used for training; for the ISPRS Vaihingen and Potsdam Challenge Datasets, the training sets contain 864 and 9580 images, respectively. Adam optimization was adopted with a batch size of 8. The learning rate is initialized at 0.00002 with a polynomial decay policy of power = 1.5, and the total number of training epochs is set to 100. Under these settings, the training time for the proposed method is approximately 24.3 min on the Tianzhi Cup AI Challenge Dataset, 3.5 min on the ISPRS Vaihingen Challenge Dataset, and 38.8 min on the ISPRS Potsdam Challenge Dataset.
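As a concrete illustration of the HSPP module described above, the following PyTorch sketch renders "parallel different-scale dilated convolutions plus global average pooling". The branch count, channel widths, and dilation rates are illustrative assumptions, since the text does not fix them here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HSPPSketch(nn.Module):
    """Hybrid spatial pyramid pooling sketch (illustrative parameters)."""
    def __init__(self, in_ch=2048, mid_ch=256, rates=(6, 12)):
        super().__init__()
        # Parallel dilated-convolution branches with different rates
        self.dilated = nn.ModuleList(
            nn.Sequential(nn.Conv2d(in_ch, mid_ch, 3, padding=r, dilation=r),
                          nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True))
            for r in rates)
        # Global average pooling branch; a 1x1 conv cuts the channel dimension
        self.gap = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                 nn.Conv2d(in_ch, mid_ch, 1),
                                 nn.ReLU(inplace=True))

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.dilated]
        # Bilinearly upsample the pooled feature back to the input resolution
        feats.append(F.interpolate(self.gap(x), size=(h, w),
                                   mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)  # the hybrid spatial pyramid feature

# Usage sketch on a dilated-ResNet101-like output
y = HSPPSketch()(torch.randn(2, 2048, 32, 32))  # -> (2, 3*256, 32, 32)
```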
Dataset Description

The proposed method was validated on the dataset released in the TianZhi Cup Artificial Intelligence Challenge and on the ISPRS Vaihingen and Potsdam benchmarks. The ground truth of the TianZhi dataset comprises five common land cover categories: farmland, roads, water, vegetation, and background, the last of which denotes all categories other than the first four. The ISPRS datasets include the six most common land cover classes: impervious surfaces (imp_surf), buildings (building), low vegetation (low_veg), trees (tree), cars (car), and clutter/background (clutter).

The Tianzhi Cup AI Challenge Dataset consists of 23 RSIs of 7400 × 4950 resolution with corresponding ground truth semantic labels. Each image contains three channels: red (R), green (G), and blue (B). Following the contest instructions, 12 of them are used for training, 6 as validation data, and the remaining RSIs for testing.

The ISPRS Vaihingen Challenge Dataset contains a variety of detached buildings and small multi-storey buildings, comprising 33 orthorectified patches of different sizes acquired by a near-infrared-green (G)-red (R) aerial camera over the town of Vaihingen (Germany). Each image is accompanied by a corresponding digital surface model (DSM) representing the absolute heights of pixels. The average tile size is 2494 × 2064 pixels with a spatial resolution of 9 cm; the DSM is not used in these experiments. The challenge organizer has recently released the ground truths of all the images. Of these, 12 annotated images were used to train the networks, 4 images (ID 5, 7, 23, and 30) were used to validate performance, and the remaining 17 images were used as a test set to evaluate segmentation generalization accuracy.

The ISPRS Potsdam Challenge Dataset contains 38 orthorectified patches of identical size, 6000 × 6000 pixels, with a spatial resolution of 5 cm over the town of Potsdam (Germany). This dataset offers near-infrared, red, green, and blue channels together with the DSM and the normalized DSM (NDSM). There are 20 images in the training set, 4 images (ID 2_11, 4_10, 5_11, and 7_8) in the validation set, and 14 images in the test set.

Evaluation Metrics

The performance of the proposed method was evaluated by overall accuracy (OA), mean intersection over union (mIoU), and F1 score. Assume there are k + 1 categories in total (from 0 to k, where 0 represents the background), and let p_ij denote the number of pixels belonging to category i that are predicted as category j.

The overall accuracy is an intuitive metric computing the ratio of correctly classified pixels to the total number of pixels, giving a general assessment over all pixels:

OA = Σ_{i=0}^{k} p_ii / Σ_{i=0}^{k} Σ_{j=0}^{k} p_ij.

The intersection over union (IoU) of a category is the ratio of the intersection of the pixels predicted as that category and the ground truth pixels of that category to their union. The mIoU is derived by averaging the IoU over all label categories besides the background:

mIoU = (1/k) Σ_{i=1}^{k} p_ii / (Σ_{j=0}^{k} p_ij + Σ_{j=0}^{k} p_ji − p_ii).

The F1 score is defined as the harmonic mean of recall and precision:

F1 = 2 · Precision · Recall / (Precision + Recall).

Recall and precision represent completeness and correctness, respectively. True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN) are the four basic quantities, in which Positive and Negative denote whether pixels are predicted to belong to a given class, while True and False denote whether that prediction is correct; for example, TP is the number of pixels predicted as a class that indeed belong to it. For class i they can be derived as

TP_i = p_ii,  FP_i = Σ_{j≠i} p_ji,  FN_i = Σ_{j≠i} p_ij,

so that Precision_i = TP_i / (TP_i + FP_i) and Recall_i = TP_i / (TP_i + FN_i).
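The metric definitions above map directly onto a confusion matrix. The following NumPy sketch is an independent illustration (with class 0 assumed to be the background, as in the text), not the evaluation code used in the paper.

```python
import numpy as np

def metrics_from_confusion(p):
    """p[i, j]: number of pixels of true class i predicted as class j."""
    tp = np.diag(p).astype(float)       # TP_i = p_ii
    fp = p.sum(axis=0) - tp             # predicted as i but truly another class
    fn = p.sum(axis=1) - tp             # truly i but predicted as another class
    oa = tp.sum() / p.sum()             # overall accuracy
    iou = tp / (tp + fp + fn + 1e-12)   # per-class IoU
    miou = iou[1:].mean()               # average over classes, background (0) excluded
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return oa, miou, f1

# Usage sketch with a random 6-class confusion matrix
rng = np.random.default_rng(0)
oa, miou, f1 = metrics_from_confusion(rng.integers(0, 100, size=(6, 6)))
```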
Experimental Results

Quantitative comparisons between the proposed method and other approaches on the Tianzhi testing dataset were conducted using OA and mIoU. In the TianZhi Cup dataset, the road class has the lowest proportion of pixels, so the minority categories extraction branch of the proposed network is mainly used to distinguish road elements. As shown in Table 2, the proposed method ranks first in OA and mIoU among six other classic methods, achieving 84.1% OA and 68.1% mIoU on the Tianzhi testing dataset. Moreover, the proposed method performs best on the farmland and road categories by a relatively large margin compared to the other networks, demonstrating its effectiveness, especially for the minority category. Figure 4 shows that our method achieves the best visual results, consistent with the numerical results. For the road class, the proposed method provides a more complete and accurate segmentation along the road edges, and misclassified pixels along object edges account for a smaller proportion.

To further test the effectiveness of the proposed EGCAN, comparisons with competitors' methods on the two challenging Vaihingen and Potsdam benchmarks were carried out. In these benchmarks the car class has the lowest proportion of pixels, so the minority categories extraction branch is mainly used to distinguish car elements. The competitors' methods comprise SVL_1, SVL_3, DST_2, DST_5, UZ_1, RIT_L7, ONE_7, ADL_3, DLR_10, CASIA_2, BKHN_10, TreeUNet, and SWJ_2; Cascade denotes the Cascade-Edge-FCN and Correct denotes the Correct-Edge-FCN [56], while EDENet denotes the edge distribution-enhanced semantic segmentation neural network [57]. Tables 3 and 4 show the quantitative comparisons on the Vaihingen and Potsdam testing datasets, with the corresponding visual comparisons in Figures 5 and 6. In Table 3, the proposed EGCAN obtains an OA of 91.0% and a mean F1 of 89.7%; its mean F1 ranks third, showing that our method provides a very competitive result. On the Potsdam testing dataset, the proposed method achieves 93.0% mean F1, exceeding all the comparison methods listed in Table 4, and its OA is second only to SWJ_2. As seen in Figures 5 and 6, our method retains significant advantages when dealing with images of complex ground features.

Ablation of Edge Extraction Module

As shown in Table 5, four experiments were designed to evaluate the performance of the edge extraction module. In experiment (a), Unet is adopted as the baseline for semantic segmentation of remote sensing imagery. In experiment (b), sEEM is introduced into the baseline, where sEEM denotes a single-scale edge extraction module that utilizes only one intermediate feature from the decoder part of the backbone Resnet101; that is, I4 is used instead of I2 or I3 in the subsequent ablation experiments. In experiment (c), all the information {I2, I3, I4} from each stage of Resnet101 is used to aggregate context from multi-scale features. Based on experiment (c), experiment (d) additionally applies Canny and dilation operations to obtain cleaner edge features.
These cleaner edge features are then fed into the decoder part of EGCAN. In view of the results, the improvement from sEEM alone is not significant, only a 2.1% increase in mIoU; however, EEM (without Canny and dilation) achieves a 4.2% improvement, proving the effectiveness of multi-scale context extraction. Furthermore, a better result of 63.2% mIoU is obtained by adding the Canny and dilation operations in experiment (d), which clearly validates their significance. From experiment (a) to experiment (d), the network acquires more and more edge information as the baseline Unet is gradually extended with the other modules, which enhances the representation ability of the network along the edges of the different categories. Visual comparisons are shown in Figure 7: the first column is the input, the second column is the input's label, and the third to sixth columns show the results of experiments (a) to (d). According to the visual results, the EEM effectively extracts rich edge information, especially with the help of the Canny and dilation operations. The first and second rows show the improvement in edge extraction, especially for the road class, while the second and third rows show that the proposed method still performs well as the context of the remote sensing images becomes increasingly complicated. (The category legend of Figure 7 comprises background, farmland, road, water, and vegetation.)

Ablation of Dual Expectation Maximization Attention Module

To evaluate the effect of each component of the proposed approach, an ablation study was conducted on the Tianzhi dataset. As shown in Table 6, the baseline network utilizing only the dilated Resnet101 backbone achieves an mIoU of 59.4%. The individual SEMA or CEMA modules were then added to the backbone network to explore multi-category segmentation. The SEMA or CEMA module alone yields 60.6% or 61.1%, corresponding to relative improvements of 2.0% or 2.8% over the baseline, respectively, proving the effectiveness of a single SEMA or CEMA module. Subsequently, three different arrangements of these two attention modules were compared. As shown in Table 6, 'SEMA || CEMA' denotes SEMA and CEMA arranged in parallel, 'CEMA → SEMA' denotes the structure in which a SEMA module follows a CEMA module, and 'SEMA → CEMA' denotes the structure in which a CEMA module follows a SEMA module. Compared with the baseline, involving SEMA and CEMA simultaneously in the parallel structure brings a 5.7% improvement. The cascade arrangements perform better: when the SEMA module follows the CEMA module, an 8.9% improvement is achieved, and when the CEMA module follows the SEMA module, the improvement reaches 9.9%, yielding 65.3% mIoU. To obtain the best result, the DEMA module of the EGCAN network adopts this arrangement, which enhances edge feature interdependencies and the representation ability of the network along the spatial and channel directions. Qualitative visualization results are shown in Figure 8; evidently, DEMA enhances our model's sensitivity to edges at various scales and enables pixels of the same class to achieve similar gains.
Influence of Edge Guided Module

As shown in Table 7, the proposed EGMs enhance the model's mIoU by 0.4%, 1.3%, and 2.5%, respectively, confirming their efficiency in retraining the pixels along the edge regions and remodeling the edge feature space. The improvement brought by EGM3 is more distinct than the others, since the feature map in this module has the largest resolution for preserving edge information. Qualitative visualization results are shown in Figure 9: from the third column to the sixth column, the semantic segmentation results improve progressively as the EGMs relearn error-prone pixels, enhancing the model's ability to distinguish different classes.

Effects of the Hybrid Spatial Pyramid Pooling Module

According to statistics over all the categories, the proportion of the road category is usually small, owing to its inherently narrow and elongated distribution. Meanwhile, the scale of the road distribution is variable, making it difficult to adequately extract road characteristics and keeping segmentation accuracy low. The proposed EGCAN therefore employs a separate branch with the HSPP block to fulfill the road segmentation. The results are shown in Table 8. In particular, ASPP uses parallel convolution operations to extract multi-scale semantic features, but its ability to drive mainstream context aggregation is not fully realized; our model outperforms ASPP by 1.39% in accuracy. In comparison to prior self-attention-based strategies such as Non-local, RCCA, and DNL, HSPP successfully helps to reduce the negative effect of intra-class inconsistency, resulting in significant mIoU improvements of 4.6%, 5.0%, and 1.4%, respectively. The experimental results show that considering both the mainstream segmentation and the road extraction segmentation produces better predictions: the mIoU score reaches 68.1%, which is 1.4% higher than the second best result. Qualitative visualization results are shown in Figure 10.

Conclusions

By considering abundant edge information and the segmentation of low-percentage categories, the proposed edge guided context aggregation network (EGCAN) advances the performance of RSI semantic segmentation, demonstrating that the proposed structure works well. Specifically, an edge guided context aggregation branch was developed to promote the accuracy of edge predictions, and a hybrid spatial pyramid pooling (HSPP) module was investigated in the minority categories segmentation branch to capture rich multi-scale contextual information and improve EGCAN's discriminative capability for minority categories. As a result, our proposed method performed best on the Tianzhi Cup AI Challenge Dataset and is among the best on the ISPRS Vaihingen and Potsdam Challenge Datasets. Nevertheless, several challenging issues remain. First, the annotations of the datasets need to be more precise to improve semantic segmentation performance. Moreover, computing power consumption grows as the model becomes larger, so balancing segmentation performance against computational cost is an important direction for future research. Furthermore, whether a smaller edge extraction module can replace the edge guided context aggregation branch while obtaining better segmentation results is also worth studying.
Conflicts of Interest: The authors declare no conflict of interest.
3D printing technology will eventually eliminate the need of purchasing commercial phantoms for clinical medical physics QA procedures 3D printing is not a new concept. Recent advances in printing speed, technology, and material selection are promoting its significant impact on several industries, including health care. Within the medical physics field, researchers are likewise finding applications in various clinical areas. However, interest still remains confined to a few academic centers that have the luxury of owning such an unconventional device in the radiation oncology department or of collaborating with a local 3D printing lab. As 3D printing becomes an unstoppable driving force in the manufacturing revolution, are we also envisioning a future in which 3D printing becomes as common as a block-cutting machine in a radiation oncology department? In this debate, we invited two researchers who are experienced in studying the clinical use of 3D printing in the medical physics field. Dr. Eric Ehler is arguing for the proposition that "3D printing technology will eventually eliminate the need of purchasing commercial phantoms for clinical medical physics QA procedures" and Dr. Daniel Craft is arguing against. Dr. Eric Ehler is an Assistant Professor in the Department of Radiation Oncology at the University of Minnesota. He is the medical physics residency program director at the University of Minnesota Medical Center. His education and research interests are 3D printing, pediatric radiotherapy, radiation dosimetry, and machine learning. Dr. Daniel Craft is currently a medical physics resident at The Mayo Clinic in Phoenix, AZ. Prior to the beginning of his residency, Dr. Craft was a graduate research assistant and PhD student at the University of Texas MD Anderson Cancer Center in Houston, Texas, where he studied techniques to deliver postmastectomy radiation therapy using 3D printed patient-specific tissue compensators. He completed his Ph.D. in Medical Physics in May 2018, and also holds an undergraduate degree in Physics from Brigham Young University.

2.A | Eric Ehler, PhD

Phantoms provide medical physicists a means to assess the performance of medical devices in imaging, nuclear medicine, and radiation therapy.1 Historically, phantoms were designed and constructed by clinical staff and/or hospital engineers using materials and formulations available to them at the time.2 Currently, many vendors in the medical physics market provide a wide array of phantoms for clinical use. The reason for this shift can reasonably be attributed to convenience and to the interest in standardizing quality control (QC) procedures and quality assurance (QA) programs. 3D printing has been around since the 1980s.3,4 The expiration of patents related to 3D printing has lowered the cost of 3D printers. 3D printing technology has been described as the democratization of manufacturing; it is shifting the means of manufacture from a centralized system to a distributed network. Increased access to manufacturing capability will reduce the convenience advantage of commercial phantoms, as clinicians can custom design and print phantoms as needed. The proposition that "3D printing technology will eventually eliminate the need of purchasing commercial phantoms for clinical medical physics QA procedures" is already becoming reality.
In most clinics, the Linac morning QA is performed with a commercial image guidance radiotherapy (IGRT) phantom, a cubic phantom with marks on the faces for laser alignment and embedded features for x-ray imaging. An IGRT phantom with submillimeter accuracy was fabricated and reported by Woods et al.5 using computer-aided design freeware and a relatively low cost 3D printer (commercially available for $3150 USD). In our clinic, rather than purchasing multiple identical IGRT phantoms, our team designed our own phantom in a similar manner to Woods et al. The phantom was 3D printed in PET-G plastic for a cost of $10, using a 3D printer in the $900 price range. The 3D printed phantom did not have the full capabilities of our commercial IGRT phantom, but it fit our clinical needs, as we did not fully use the features of the commercial phantom during morning QA. Additionally, when compared to a commercial small animal PET/CT imaging phantom, a 3D printed phantom was described as "functionally equivalent to commercially available phantoms".6 3D printed phantoms have also been described for MRI7 and PET/MRI6 systems, as well as for vascular imaging10 and molecular imaging.11 A feature of these phantoms is that they can be customized and produced by the end users at low cost. For IMRT QA, 3D printing a patient specific phantom for every patient treated with IMRT is not currently clinically feasible, mostly due to time constraints. However, for commissioning new procedures or for a periodic QA schedule, using a 3D printed phantom is warranted. The use of patient specific phantoms allows for a true end-to-end test on a per-patient basis at a reduced cost compared to commercial, non-patient-specific, anthropomorphic phantoms. Beyond phantoms, 3D printing has been investigated for radiation therapy immobilization devices,12 bolus,13-16 electron blocks,17 and other treatment aids. In fact, the strongest argument for clinical acquisition of 3D printing technology is the fabrication of treatment devices, owing to the unique nature of patient anatomy and the high frequency of use of treatment devices. If clinics possess 3D printers for the purpose of treatment device fabrication, the convenience of 3D printing phantoms will increase greatly. A word of caution: 3D printing materials are not tightly controlled by all 3D printing material suppliers. For example, slight differences in the formulation of 3D printing materials may affect the radiographic or other physical properties of the material. This variation could arise between one material supplier and another, or even from batch to batch of the same supplier. Also, 3D printers can introduce defects in the printed object, such as small unintended air voids or warping during printing. Air voids can occur from imperfect material deposition during printing. Warping is an issue for fused deposition modeling (FDM), where a plastic filament is melted, extruded out of a nozzle, deposited, and then cooled. Cooling can cause contraction, which may cause the FDM 3D printed object to warp. For charged particle radiation beams especially, this can negatively impact the performance of the 3D printed device or phantom.18 Therefore, QC of the manufacturing process will need to be performed by 3D printing staff or clinicians, whereas for commercial phantoms, QC is performed by the vendor and verified by the clinicians.
For example, commercial water equivalent plastic blocks are usually supplied with a certificate stating the dimensional accuracy, uniformity, and attenuation properties of the plastic. If the blocks are 3D printed by clinic staff, these tests will need to be performed in-house. In summary, I believe there is already a market advantage for the clinical use of 3D printed phantoms. As 3D printers gain use in routine clinical device fabrication, their utilization in other clinical areas, such as phantom fabrication, will expand. In the long term, as 3D printing capabilities increase and 3D printing materials are designed specifically for medical physics use, 3D printed phantoms will increasingly replace commercial phantoms for clinical QA procedures.

2.B | Daniel Craft, PhD

3D printing is a transformative technology that allows users to physically manufacture anything they can model with a computer. Over the last several years there has been enthusiastic and rapid adoption of 3D printing technology in medical physics to create a wide spectrum of custom, patient-specific devices. 3D printers are well-suited to manufacture a number of devices that are currently much more expensive, or much more inconvenient, to procure from commercial vendors. These include customized, patient-specific bolus and customized phantoms that may only be used once, or for a single patient. However, despite the interesting applications and enormous potential of 3D printing technology for some radiotherapy applications, there are presently several limitations that will prevent it from being uniformly adopted as the preferred phantom fabrication technique in hospitals across the country. The first major limitation of 3D printing is the material properties of 3D printed parts. 3D printable materials must have specific properties; they have to be either a thermoplastic with a glass transition temperature near 200°C or a photopolymerizing resin. This effectively limits the number of potential materials to thermoplastics and things that can be mixed with them. If a material cannot be melted and turned into a filament, it generally cannot be 3D printed. There are some creative materials that mix other substances, like wood shavings or copper powder, with thermoplastic bases to create materials with slightly different densities and HU values, but these material differences are mostly cosmetic and intended for hobbyist 3D printing. Importantly, there are currently no commercially available materials that can replicate either bone or lung tissues. Most current 3D printed phantoms either ignore bone entirely8,19 or mimic bone using custom in-house mixed materials that require custom filament-making equipment.20 The first solution reduces the usefulness of the phantom, and the second dramatically reduces the convenience that 3D printing was supposed to provide in the first place. Similarly, the lungs are usually left open, or printed with "low infill" that matches lung density but is highly variable depending on the direction of an incident radiation beam.21,22 Contrast this 3D printed phantom with a common commercial anthropomorphic phantom, which comes with several different tissue types, including bone, cartilage, brain, soft tissue, and lung (Computerized Imaging Reference Systems, Inc., a Castleray company, Norfolk, VA).
Additionally, these phantoms' low density material properties do not depend on the direction of incident radiation, unlike low density 3D printed phantoms. Even if a full range of perfectly matched 3D printable materials were to be found, there would still be large variations between identical 3D printed parts. We have previously shown that identically printed blocks of material can vary in density from each other by up to 7%,23 and that is using the same printer, the same model, and the same roll of filament. There are currently dozens of different kinds of 3D printers in use in clinics around the country, using many different materials and printer settings. If 3D printing QA devices becomes commonplace, it will be difficult to make meaningful comparisons of measurements across institutions that are using different 3D printers to produce phantoms based on their own specific materials and printing protocols. Another problem with wide adoption of 3D printing is increased cost and complexity. To be clear, the actual material costs to 3D print a simple phantom are almost certainly less than the cost to purchase a similar commercial phantom. The cost of 3D printers, however, can range anywhere from several hundred dollars to several hundred thousand dollars, with a commensurately huge range in printer complexity, print quality, available features, material compatibility, and reliability. For example, the cheapest 3D printers available on Amazon.com cost less than $200, but can only print using PLA filament, have minimum layer resolutions of approximately 200 microns, and have a build volume of only a few centimeters in any direction. On the other end of the spectrum, the HP Jet Fusion 3D 3200 uses multi-jet fusion technology to dynamically blend plastics, creating parts up to 30 cm in each dimension with multiple colors and material properties and a minimum layer resolution of 70 microns; however, its cost starts at $155,000. It is important to remember that in-house phantom production will require in-house 3D printing expertise, so will it be the medical physicist's responsibility to be proficient in 3D design as well as the mechanical operation and maintenance of a 3D printer? Whose responsibility will it be if the 3D printer jams during a print and patient QA cannot be performed? 3D printers mostly operate in the background, but they do require operators to plan and start print jobs, as well as to change out materials and occasionally replace parts. Especially with less expensive printers, the user must be able to troubleshoot and fix errors. This may be feasible in larger academic centers, but I do not think it is a reasonable expectation that the many small or nonacademic clinics that make up the majority of cancer care will embrace this unnecessary increased workload. In conclusion, 3D printing is not currently a mature enough technology to become the primary technique for fabricating important QA devices in radiotherapy clinics. Conventionally fabricated commercial phantoms are more uniform, reliable, and simple than 3D printed ones. It is definitely true that 3D printing has a place in radiation oncology, and an exciting one at that! The question that 3D printing must address is: what additional value does it provide over conventional phantom fabrication that outweighs the previously mentioned limitations?
In my opinion, that value is in creating highly customized or unique phantoms for research and development in major academic centers, not in creating routine QA phantoms that every clinic needs. I am confident that 3D printing will eventually replace some commercial phantoms for clinical medical physics QA procedures at some clinics, but definitely not for all, or even most, of them.

3.A | Eric Ehler, PhD

I agree with Dr. Craft that there are currently many difficulties to overcome. However, taking the long-term view, I maintain the argument that all QA phantoms will eventually be fabricated with 3D printing. It is true that currently available 3D printing materials are not equivalent to human tissues. Given the complexity of designing a material that is compatible with 3D printing and is tissue or water equivalent, materials science developments are needed. In the meantime, there is an alternative to fully 3D printing a phantom when tissue or water equivalence is desired: using 3D printing to create a mold that is filled with an equivalent material or materials. This strategy can be used for phantoms9 as well as radiotherapy bolus.15 It can reduce 3D printing times and bypass deficiencies in the radiologic properties of current 3D printing materials, such as those demonstrated by Dr. Craft.23 Regarding 3D printer QA and additional workload, monitoring printers for jams or other print failures can be performed with software packages such as OctoPrint. The software can be used to monitor printing progress via webcam and, if necessary, the print job can be aborted remotely. Updates on the printing progress can even be sent to mobile devices. To lend perspective on the frequency of print failures, one of our printers (Lulzbot Taz 6) has over 200 print hours with only one failed part in that time, while a previously used printer failed quite regularly; thus, the choice of 3D printer is important. In addition, it is true that QA will be required for 3D printed phantoms or devices. However, as physicists, we are responsible for the materials and devices used clinically. Regardless of whether a phantom is fabricated in-house or purchased from an established vendor, validation of the phantom and implementation into clinical use is required. There may be additional considerations in the QA of 3D printed phantoms or devices, but the advantages offset the additional workload. Finally, I contest the statement that 3D printing may be feasible for large academic centers but not for smaller clinics. In fact, I believe that the greatest benefit will be to smaller clinics. At a large academic center, there are likely engineers within the hospital and engineering machine shops nearby to fabricate phantoms and devices. Smaller clinics likely lack these resources, and 3D printing can fill that gap at a reasonable cost.

3.B | Daniel Craft, PhD

There are several points upon which Dr. Ehler and I agree. First, and most importantly, we share a concern about the variable material properties that 3D printed objects can have. As he notes, different material suppliers are not held to strict material standards, which can lead to various imperfections and inconsistencies in 3D printed parts. Objects printed from different suppliers using an equivalently labeled material could have different densities and radiological properties.23,24 This is, however, not the only potential source of uncertainty.
I would add that the quality of a printed object will depend just as much on the 3D printer used and on the model that has been designed. There are many 3D printers with slightly different properties that could affect print quality, such as how stably they maintain the nozzle and bed temperatures, how fast the extruder moves, and more. Additionally, unless 3D models of useful phantoms are shared across all institutions, there will be additional variation between clinics in the actual characteristics of the phantoms used for QA. This leads to the second point on which we have common ground: if phantoms are printed in house, calibration and standardization tests of dimensional accuracy, material uniformity, and material attenuation properties will also have to be performed in house. As Dr. Ehler notes, these certifications currently come with phantoms from commercial suppliers. While larger research institutions may have the additional resources and time to make this in-house testing feasible, having to perform these tests for every printed object is an unnecessary workload for most smaller clinics. This increased workload for physicists in designing objects to be printed, maintaining a 3D printer, and validating 3D printed objects is, in my opinion, a major limiting factor in the widespread adoption of clinical 3D printing. As Dr. Ehler has mentioned, another use for 3D printing aside from creating clinical phantoms is the creation of patient-specific treatment devices. This is a very interesting application of 3D printing, because many of these devices are currently difficult, time-consuming, or expensive to acquire through conventional fabrication. With 3D printing, however, patient-specific bolus13,15,25 can be produced rapidly and inexpensively, reducing air gaps and improving dosimetric plan characteristics relative to less conformal bolus. In fact, I agree with Dr. Ehler that "the strongest argument for clinical acquisition of 3D printing technology is for the fabrication of treatment devices." I disagree, however, with his assertion that this technology can be applied equally to creating phantoms for every clinical need. Although 3D printed bolus is in many ways more convenient than, and superior to, conventional bolus, 3D printed phantoms are generally harder to manufacture and have inferior material properties relative to conventional phantoms. Ultimately, the debate around 3D printing taking over from conventional commercial phantoms is an argument of magnitude. It is clear that 3D printing is currently being used in clinics around the country for a variety of interesting purposes, including phantom development,9,11 treatment device fabrication,13,16 and more.7,26,27 As the technology matures and continues to develop, I am sure that it will improve and more use cases will be found. However, it is my opinion that 3D printing will remain a supplemental technology used to fabricate a few special things, and will not ever completely replace conventionally fabricated commercial phantoms. Eric Ehler1,* Daniel Craft2,*
Syndecan-3 is selectively pro-inflammatory in the joint and contributes to antigen-induced arthritis in mice Introduction Syndecans are heparan sulphate proteoglycans expressed by endothelial cells. Syndecan-3 is expressed by synovial endothelial cells of rheumatoid arthritis (RA) patients, where it binds chemokines, suggesting a role in leukocyte trafficking. The objective of the current study was to examine the function of syndecan-3 in joint inflammation by genetic deletion in mice and to compare it with other tissues. Methods Chemokine C-X-C ligand 1 (CXCL1) was injected in the joints of syndecan-3−/− and wild-type mice, and antigen-induced arthritis was performed. For comparison, the chemokine was administered in the skin and cremaster muscle. Intravital microscopy was performed in the cremaster muscle. Results Administration of CXCL1 in knee joints of syndecan-3−/− mice resulted in reduced neutrophil accumulation compared to wild type. This was associated with a diminished presence of CXCL1 at the luminal surface of synovial endothelial cells, where this chemokine clustered and bound to heparan sulphate. Furthermore, in the arthritis model, syndecan-3 deletion led to reduced joint swelling, leukocyte accumulation, cartilage degradation and overall disease severity. Conversely, CXCL1 administration in the skin of syndecan-3 null mice provoked increased neutrophil recruitment and was associated with elevated luminal expression of E-selectin by dermal endothelial cells. Similarly, in the cremaster, intravital microscopy showed increased numbers of leukocytes adhering and rolling in venules in syndecan-3−/− mice in response to CXCL1 or tumour necrosis factor alpha. Conclusions This study shows a novel role for syndecan-3 in inflammation. In the joint it is selectively pro-inflammatory, functioning in endothelial chemokine presentation, leukocyte recruitment and cartilage damage in an RA model. Conversely, in the skin and cremaster it is anti-inflammatory. Introduction Syndecans (sdcs) are heparan sulphate proteoglycans (HSPG) composed of a core protein to which heparan sulphate (HS) glycosaminoglycan chains are covalently attached. These molecules form part of the glycocalyx, which comprises a network of membrane-bound proteoglycans and glycoproteins at the cell surface of endothelial cells [1-3]. There are four mammalian syndecans, designated syndecan-1 (sdc-1), -2, -3, and -4, which have protein cores with characteristic structural domains [4,5]. The variable ectodomain, which is exposed to the extracellular environment, contains three to five HS and in some cases chondroitin sulphate chains, and is attached to the cell membrane via a hydrophobic transmembrane segment [6,7]. In addition, there is an intracellular domain containing peptide sequences that serve as substrates for cellular kinases, enabling syndecans to act as signaling molecules [8]. HSPGs have been shown to play a pro-inflammatory role [9-11]. For example, on endothelial cells they bind and present chemokines to blood leukocytes, which leads to leukocyte integrin activation, crawling on the endothelial cell surface and extravasation [12-15]. This interaction involves chemokine immobilisation and concentration at the endothelial surface and stimulation of leukocyte migration into the tissue [16]. Evidence also suggests that HS functions in chemokine transcytosis, which relays chemokines from the basal to the luminal surface of endothelial cells for presentation to blood leukocytes [12,17-19].
Furthermore, endothelial HS may act as an adhesion molecule, for example binding L-selectin during neutrophil rolling [19]. In contrast, data also indicate that HSPGs may be anti-inflammatory, for example in disease models of nephritis and lung inflammation using sdc-1 and sdc-4 knockout mice [20-25]. Furthermore, removal of HS by heparanase leads to increased leukocyte adhesion to the cremaster endothelium as observed by intravital microscopy, suggesting an anti-inflammatory function [26]. Further work is needed to address these apparently contradictory roles of HSPGs in inflammation. Whether sdcs are pro- or anti-inflammatory may relate to the particular tissue where they are expressed or to the inflammatory state. Inflammation is a central feature of rheumatoid arthritis (RA), which affects around 1% of the population and can result in disability and morbidity. In RA, inflammation of the joint synovium is characterised by the infiltration and activation of leukocytes, which can lead to progressive destruction of cartilage and bone. Chemokines are involved in stimulating the infiltration of leukocytes into inflamed tissue, and there is substantial evidence showing an involvement of these mediators and their receptors in RA [27]. For example, chemokine C-X-C ligand 1 (CXCL1) and CXCL8 are abundant in the sera, synovial fluid and synovium in human RA [27-32]. They are produced by synovial macrophages and other cells and attract primarily neutrophils. Furthermore, sdcs have been shown to be expressed in arthritic joints, and sdc-4 functions in joint destruction [33-36]. A CXCL8 binding site on endothelial HSPG has been demonstrated in the synovium of RA patients [33]. In order to clarify which HSPG bound the chemokine, immunolocalisation of syndecans and glypicans revealed particularly strong expression of sdc-3 on RA synovial endothelial cells, with quantitative PCR confirming endothelial expression. Furthermore, anti-sdc-3 antibody and heparanase reduced CXCL8 binding to the endothelium. These data suggest a role for sdc-3 in synovial inflammation. Sdc-3 is the predominant syndecan in the nervous system, where it was first identified, and has been associated with the control of feeding behaviour and the generation of cerebellar fibrillar plaques in Alzheimer's disease [37,38]. Sdc-3 is also an HSPG of the musculoskeletal system. It has been found in the synovium of adult human joints [33] and is expressed by chondrocytes [39,40]. In addition, sdc-3 is involved in limb morphogenesis and skeletal development and regeneration [41,42]. Several studies have shown that it is expressed by endothelial cells in the synovium, lymph nodes and liver [33,43,44]. Nothing is currently known about the role of sdc-3 in inflammation, unlike sdc-1 and -4 [20-25]. The expression of a CXCL8 binding site on endothelial sdc-3 in human RA suggests a role for this HSPG in inflammatory disease [33], although in vivo studies are needed to substantiate this hypothesis. The current study addresses this question: whether genetic deletion of sdc-3 in mice alters leukocyte trafficking in response to murine CXCL1. This chemokine is the functional homologue of CXCL8, which is absent in rodents. The study also addresses whether deletion of sdc-3 alters the severity and progression of disease in an RA model. The involvement of sdc-3 in leukocyte recruitment in the synovium was compared to that in the skin and cremaster muscle.
This was to find out whether sdc-3 can play a different role in different tissues, which may help explain its apparently contradictory function in inflammation. We show that sdc-3 plays a dual role in inflammation depending on the tissue and vascular bed. In the joint it is pro-inflammatory, since its deletion leads to reduced leukocyte recruitment and reduced arthritis severity. However, in the skin and cremaster it is anti-inflammatory, since its deletion leads to enhanced leukocyte interaction with the endothelium and enhanced recruitment. This is the first study to show a role for sdc-3 in inflammation, and it reveals that this function is tissue-selective.

Chemokine-driven leukocyte migration into the skin and joints

Mice were injected intradermally or intra-articularly in the knee joint space with recombinant murine CXCL1 (KC) (PeproTech, London, UK) at 3 μg/site in phosphate-buffered saline (PBS) [45]. PBS administration was used as a control. After four hours, the animals were sacrificed and skin biopsies or joints were processed for light microscopy. Leukocyte recruitment into the dermis and synovium was observed by light microscopy, neutrophils being identified by their lobed nuclear morphology. To quantitate leukocyte recruitment, the number of neutrophils in the synovium was randomly counted in 10 fields of view at ×780 magnification per section from sdc-3−/− (n = 8) and sdc-3+/+ (n = 8) mice.

Myeloperoxidase (MPO) assay

The MPO assay was used as a surrogate marker for the presence of neutrophils in skin tissue and was carried out as described [23]. Briefly, excised pieces of skin from mice were snap frozen in liquid nitrogen and homogenized on ice in 500 μl of PBS with 0.01 M EDTA and a proteinase inhibitor mix (Sigma-Aldrich, Poole, UK) plus 1 ml of 1.5% Triton X-100 in PBS. Samples were placed on a rotary shaker at 300 rpm on ice for 30 min and centrifuged at 12,000 × g for 10 min, and the supernatants were collected. The total protein concentration of each sample was quantified by BCA Lowry assay (Thermo Scientific Pierce, Cramlington, UK). The protein concentration in all tissue extracts was adjusted to 0.9 mg/ml. MPO activity was determined using the EnzChek MPO Activity Assay Kit (Invitrogen, Paisley, UK) according to the manufacturer's instructions.

Immunofluorescence

For CXCL1 (KC) and E-selectin detection in skin and joint samples, we used a tyramide signal amplification kit [46] (Molecular Probes, Invitrogen). Briefly, formalin-fixed, wax-embedded sections of skin and joints were de-waxed, rehydrated and washed in PBS; skin sections were subjected to antigen retrieval in Tris-HCl buffer, pH 9.0, at 100°C in a water-bath for 20 min, while for joints antigen retrieval was performed in 10 mM Tris-HCl buffer, 1 mM EDTA and 0.05% Tween 20, pH 9.0, overnight at 65°C. Endogenous peroxidase was blocked by incubation for 10 min with 3% H2O2, followed by incubation with 1% blocking reagent for 60 min at room temperature. Sections were incubated for 60 min with rabbit anti-murine CXCL1 polyclonal antibody (PeproTech) at 2 μg/ml or rat anti-murine E-selectin monoclonal antibody at 5 μg/ml (kindly supplied by Dr Alexander Zarbock, University of Munster, Germany). Sections were then treated with HRP-conjugated goat anti-rabbit or goat anti-rat secondary antibodies for 60 min, then Alexa Fluor 488 tyramide for 10 min. For sdc immunofluorescence, sections were treated with affinity-purified rabbit anti-mouse sdc-3 (1:500) [37], followed by goat anti-rabbit Alexa 594 antibody (Invitrogen) containing 10% mouse serum.
Immunofluorescence

For CXCL1 (KC) and E-selectin detection in skin and joint samples we used a tyramide signal amplification kit [46] (Molecular Probes, Invitrogen). Briefly, formalin-fixed, wax-embedded sections of skin and joints were de-waxed, rehydrated and washed in PBS; skin sections were subjected to antigen retrieval in Tris-HCl buffer, pH 9.0, at 100°C in a water-bath for 20 min, whereas for joints antigen retrieval was in 10 mM Tris-HCl buffer, 1 mM EDTA and 0.05% Tween 20, pH 9.0, overnight at 65°C. Endogenous peroxidase was blocked by incubation for 10 min with 3% H2O2, followed by incubation with 1% blocking reagent for 60 min at room temperature. Sections were incubated for 60 min with rabbit anti-murine CXCL1 polyclonal antibody (PeproTech) at 2 μg/ml or rat anti-murine E-selectin monoclonal antibody at 5 μg/ml (kindly supplied by Dr Alexander Zarbock, University of Münster, Germany). Sections were then treated with HRP-conjugated goat anti-rabbit or goat anti-rat secondary antibodies for 60 min, then Alexa Fluor™ 488 tyramide for 10 min. For sdc immunofluorescence, sections were treated with affinity-purified rabbit anti-mouse sdc-3 (1:500) [37], followed by goat anti-rabbit Alexa 594 antibody (Invitrogen) containing 10% mouse serum. Tissue sections were stained with DAPI for cell nuclei and analysed using a Leica IX51 microscope (Leica, Wetzlar, Germany). Control sections were negative when treated with rabbit or rat immunoglobulin (Ig)G instead of primary antibodies (added at the same concentrations) or when the primary antibodies were omitted. Heparanase treatment of sections was performed using a previously described method [33]. Briefly, formalin-fixed, wax-embedded sections of CXCL1-injected joints were de-waxed, rehydrated, washed in PBS, and subjected to antigen retrieval as above. The sections were treated with 20 units/ml of heparanase I and 4 units/ml heparanase III (both Sigma-Aldrich, UK) in HBSS, or HBSS alone, for 1.5 hours at 37°C. After enzymatic treatment, the samples were rinsed twice with HBSS before CXCL1 immunolocalisation as described above.

Intravital microscopy

The effect of sdc-3 deletion on leukocyte rolling and stationary adhesion was also measured in vivo in the cremaster muscle microcirculation using intravital microscopy (PPL 40/2747) [47]. Briefly, in anaesthetized (ketamine/xylazine; intraperitoneally (ip)) mice, the testis was exposed through a small scrotal incision and the cremaster muscle exteriorised, cleared of connective tissue and pinned across a glass coverslip on a specialised microscope stage. The muscle was continuously superfused with bicarbonate-buffered saline (131.7 mM NaCl, 4.69 mM KCl, 2.7 mM CaCl2, 2.1 mM MgCl2 and 14.44 mM NaHCO3, pH 7.4), equilibrated with 5% CO2 in N2 and maintained at 37°C. Prior to intravital observations, mice were either pretreated with an intrascrotal injection of TNFα (500 ng in 200 μl; R&D Systems, Abingdon, UK) for three hours or the cremaster was superfused with CXCL1 (5 nM in 500 ml; PeproTech) for 1 hour. Control mice received PBS vehicle. Leukocyte-endothelial cell interactions were observed in single unbranched post-capillary venules (PCV; 20 to 50 μm diameter). Leukocyte rolling was determined by counting the number of cells rolling along a 100 μm PCV segment within 60 seconds. A leukocyte was considered firmly adherent if it remained stationary for ≥30 seconds.

Induction of murine antigen-induced arthritis (AIA)

Experiments were performed in 7- to 8-week-old male mice. Murine AIA was induced as described [48]. Briefly, mice were immunised subcutaneously with 1 mg/ml of methylated bovine serum albumin (mBSA) emulsified with an equal volume of Freund's complete adjuvant and injected intraperitoneally with 100 μl heat-inactivated Bordetella pertussis toxin (all reagents from Sigma-Aldrich). The immune response was boosted one week later. Twenty-one days after the initial immunisation, murine AIA was induced by intra-articular injection of 10 mg/ml mBSA in PBS into the right knee (stifle) joint. As a control, the same volume of PBS was injected into the left knee joint. Animals were inspected daily for arthritis development by measuring knee joint diameters using a digital micrometer. The difference in joint diameter between the arthritic (right) and non-arthritic control (left) knee in each animal gave a quantitative measure of swelling (in mm).

Histological assessment

Animals were killed at the indicated times after induction of arthritis. Joints were fixed in neutral buffered formal saline and decalcified with formic acid at 4°C before embedding in paraffin. Mid-sagittal serial sections (7 μm thickness) were cut and stained with haematoxylin and eosin (H&E). Two independent observers blinded to the experimental groups scored the sections. Synovial hyperplasia, cellular exudate and cartilage depletion were scored from 0 (normal) to 3 (severe); synovial infiltrate was scored from 0 to 5 [48,49]. Cartilage damage was scored on serial haematoxylin/safranin O-stained sections. All parameters were subsequently summed to give an arthritis index (mean ± SEM).

Statistics

Differences between groups were compared by Mann-Whitney U or unpaired t tests, with P <0.05 deemed significant.
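As a worked illustration of this scoring and testing scheme, the following sketch (Python, with hypothetical per-animal scores) sums the individual histological parameters into an arthritis index and compares genotypes by Mann-Whitney U test:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-animal scores: hyperplasia, exudate, cartilage depletion
# (each 0-3) and synovial infiltrate (0-5); the arthritis index is their sum.
def arthritis_index(scores):
    return np.sum(scores, axis=1)

wt = np.array([[2, 2, 2, 4], [3, 2, 2, 4], [2, 3, 2, 5], [3, 2, 3, 4]])
ko = np.array([[1, 2, 1, 2], [2, 1, 1, 3], [1, 1, 2, 2], [2, 2, 1, 3]])

idx_wt, idx_ko = arthritis_index(wt), arthritis_index(ko)
stat, p = mannwhitneyu(idx_wt, idx_ko, alternative="two-sided")
sem_wt = idx_wt.std(ddof=1) / np.sqrt(len(idx_wt))
print(f"index wt: {idx_wt.mean():.1f} +/- {sem_wt:.1f} (SEM)")
print(f"index ko: {idx_ko.mean():.1f}, Mann-Whitney U = {stat:.0f}, P = {p:.3f}")
```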
Sdc-3 deletion reduces neutrophil recruitment in CXCL1-injected joints

To examine the effects of sdc-3 on inflammation we first studied chemokine-driven leukocyte migration into the knee joint. Intra-articular injection of murine CXCL1 stimulated the influx of neutrophils into the synovium of the joints of sdc-3−/− and wild-type mice (Figure 1A), whereas PBS-injected controls were negative for neutrophils. To compare and quantitate leukocyte recruitment, the number of migrated neutrophils in the synovia was counted. A significant decrease (P <0.0001; t test) in the number of neutrophils recruited in sdc-3−/− mice was observed compared to wild type after CXCL1 injection (Figure 1B). In PBS-injected controls there was no neutrophil recruitment in the synovia of sdc-3−/− and sdc-3+/+ mice.

Reduced chemokine presentation by synovial endothelial cells in sdc-3−/− mice

Experiments were performed to examine whether murine synovial endothelial cells expressed sdc-3, similar to human synovial endothelial cells [33]. Immunoreactive sdc-3 was demonstrated in blood vessels in normal and AIA synovia of wild-type mice but not in sdc-3 null control mice (Figure 1C-F). Further controls in the absence of sdc-3 antibody (Figure 1G and H), or when it was substituted with rabbit control Ig, were also negative. Chemokines may be produced extravascularly, and a transcytosis mechanism allows these chemokines to be transported to the luminal surface of the endothelium [12,45,50]. At this interface HS is then involved in presenting the bound chemokines to signalling receptors on the surface of blood leukocytes [51]. We wanted to test whether deletion of sdc-3 would alter chemokine binding and presentation at the endothelial surface, which might explain the reduced neutrophil recruitment in sdc-3 null mice in response to CXCL1 (Figure 1B). Sections of the same CXCL1-injected joints of wild-type (n = 8) and sdc-3−/− (n = 9) mice as used in Figure 1 were immunostained with a CXCL1 antibody using tyramide amplification and observed by confocal immunofluorescence. CXCL1 appeared as discrete clusters associated with synovial endothelial cells of sdc-3 null and wild-type joints (Figure 2A and B). In PBS-injected control joints there were no CXCL1 clusters in synovial endothelial cells of knockout and wild-type mice (Figure 2C). Quantification revealed a three-fold reduction in the number of endothelial CXCL1 clusters per blood vessel in sdc-3−/− compared to sdc-3+/+ joints (P = 0.0003; t test) (Figure 2F). These data were further separated into luminal or intracellular/abluminal distribution of endothelial CXCL1. The number of luminal CXCL1 clusters was reduced six-fold in sdc-3−/− mice compared to wild type (P = 0.0002; t test) (Figure 2G). However, there was no significant difference in the intracellular/abluminal numbers of CXCL1 clusters between sdc-3 null and wild-type mice (Figure 2G). Serial sections of CXCL1-injected joints were treated with heparanase I and III to degrade heparan sulphate prior to CXCL1 immunolocalisation. Use of these enzymes resulted in a lack of CXCL1 immunofluorescence in endothelial cells of wild-type and sdc-3−/− synovial blood vessels (Figure 2D). Quantitation revealed that for wild type the mean number of endothelial CXCL1 clusters per blood vessel after heparanase digestion was 2.8 ± 0.7 (mean ± SE, n = 6), which was significantly lower than without heparanase (28.7 ± 3.2, see Figure 2F) (P <0.0001; t test). For sdc-3 null mice the mean number of endothelial CXCL1 clusters per blood vessel was 2.7 ± 1.8 after heparanase and 8.7 ± 1.8 without heparanase (both mean ± SE, n = 5) (Figure 2F), and these values did not differ significantly. These heparanase data suggest that the heparan sulphate chains of endothelial sdc-3 bind CXCL1 clusters. Controls in the absence of anti-CXCL1 were negative (Figure 2E). Sections were also immunostained for E-selectin. Although E-selectin was detected in synovial endothelial cells, it was less abundant than in skin endothelial cells, and there was no significant difference in E-selectin distribution between sdc-3−/− and sdc-3+/+ mice in the presence or absence of CXCL1 (data not shown).
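The per-vessel cluster quantitation and its unpaired t test can be illustrated with the minimal sketch below; the per-mouse counts are hypothetical and merely chosen to resemble the reported means:

```python
import numpy as np
from scipy.stats import ttest_ind, sem

# Hypothetical per-mouse mean CXCL1 clusters per vessel, with and without
# heparanase digestion (cf. 28.7 +/- 3.2 vs 2.8 +/- 0.7 reported for wild type).
untreated = np.array([25.0, 31.2, 27.5, 36.0, 22.4, 30.1])
heparanase = np.array([2.1, 4.5, 1.8, 3.0, 2.6, 2.9])

t, p = ttest_ind(untreated, heparanase)  # unpaired t test
print(f"untreated:  {untreated.mean():.1f} +/- {sem(untreated):.1f} clusters/vessel")
print(f"heparanase: {heparanase.mean():.1f} +/- {sem(heparanase):.1f} clusters/vessel")
print(f"t = {t:.2f}, P = {p:.2g}")
```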
Less severe AIA in sdc-3−/− mice

Since the above data suggested that sdc-3 is pro-inflammatory in the joint, the role of this HSPG in a model of inflammatory disease, namely RA, was assessed. AIA was induced in the knee joints of sdc-3−/− and wild-type mice, and joint swelling, synovial inflammation and cartilage destruction were measured. Knee joint diameter (swelling), a clinical indication of joint inflammation, was significantly less in sdc-3−/− mice compared to wild type 24 hours after arthritis induction (0.63 ± 0.06 mm versus 0.96 ± 0.05 mm; P <0.0001, ANOVA and Tukey post hoc tests) (Figure 3A). This difference continued for approximately seven days post-intra-articular mBSA administration (P <0.001). Histologically, AIA was characterised by hyperplasia of the synovial lining layer, infiltration of the synovial sublining by leukocytes, exudate in the joint cavity, and loss of proteoglycan from the articular cartilage, as observed in haematoxylin/eosin- and haematoxylin/safranin-O-stained sections (Figure 3B and D). These changes did not occur in contralateral knee joints, which were injected with PBS instead of mBSA and appeared histologically normal. The degree of leukocyte infiltration and cartilage destruction (proteoglycan loss) appeared less severe in sdc-3 null mice compared to wild type (Figure 3B versus C, D versus E). To quantitate these changes, parameters were scored as a measure of disease severity, and differences between sdc-3 null and wild-type mice were apparent (Table 1). There was a significant reduction in synovial leukocyte infiltrate, comprising mainly neutrophils (P <0.01), in cartilage depletion (P <0.05) and in the arthritic index representing overall disease severity (P <0.01) in sdc-3−/− compared to sdc-3+/+ mice at day 3 (Table 1) (all comparisons Mann-Whitney test). At days 14 and 21 post intra-articular injection of mBSA, there were no significant differences between wild-type and sdc-3−/− mice for any parameter except exudate, which was significantly reduced in sdc-3 null mice (P <0.02, Mann-Whitney test) at day 21. In addition, the infiltrate was less in sdc-3−/− mice compared to sdc-3+/+, although this only approached significance at day 21 (P = 0.06, Mann-Whitney test).

Table 1 legend: Synovial hyperplasia of the lining layer, synovial infiltration of the sublining by leukocytes, exudate in the joint cavity, and loss of proteoglycan from the articular cartilage were observed in haematoxylin/eosin- and haematoxylin/safranin-O-stained sections. Sections were scored blind by two independent observers from 0 to 3 (0 to 5 for synovial infiltrate).
Sdc-3 deletion provokes enhanced neutrophil recruitment in CXCL1-injected skin

To compare the inflammatory role of sdc-3 in the joint with that in other tissues, skin was injected intradermally with murine CXCL1 or PBS as control (as in Figure 1). Four hours after CXCL1 injection, histological staining revealed an influx of leukocytes into the dermis in sdc-3 null and wild-type mice (Figure 4A). These leukocytes were identified histologically as neutrophils. To quantitate differences in neutrophil recruitment, we examined MPO activity as a marker for the presence of these cells in skin extracts. There was an increase in MPO levels following CXCL1 injection compared to PBS in sdc-3 null (P <0.005; t test) and wild-type (P <0.005; t test) mice (Figure 4B). Interestingly, a significant 30% increase (P <0.03; t test) in MPO activity over wild type was observed in sdc-3−/− mice after CXCL1 administration (Figure 4B). Baseline MPO activity in PBS-injected control samples did not differ significantly between sdc-3 null and wild-type mice. Immunofluorescence using anti-murine sdc-3 showed that this HSPG was expressed in the endothelium of the dermis in wild-type mice (Figure 4C and D). To further investigate the potential mechanism of increased neutrophil recruitment after CXCL1 challenge, adhesion molecule expression was examined. E-selectin immunolocalisation was performed in skin tissue sections (Figure 4E and F). This adhesion molecule is expressed by dermal endothelial cells and is involved in the rolling stage of leukocyte adhesion to the endothelium [52][53][54][55]. Using dual labelling, E-selectin co-localised with von Willebrand factor as a marker of endothelial cells (Additional file 1C to E), and the proportion of von Willebrand factor-positive dermal blood vessels that expressed E-selectin was >95% (n >15 vessels per wild-type and sdc-3−/− mouse). E-selectin exhibited a predominantly luminal or intracellular distribution in the endothelial cells of the dermis in sdc-3−/− and sdc-3+/+ mice (Figure 4E and F; Additional file 1A and B). Quantification revealed a two-fold increase in the number of vessels with a luminal E-selectin distribution in sdc-3−/− mice compared to wild type (P <0.008, Mann-Whitney test) following CXCL1 administration (Figure 4G). When PBS was administered instead of CXCL1, as a vehicle-injected control, there were also significantly more vessels with a luminal E-selectin distribution in sdc-3 null mice compared to wild type (Figure 4G). In sdc-3−/− mice there was no significant difference in luminal E-selectin between CXCL1- and PBS-injected skin (Figure 4G), suggesting that this chemokine was not affecting E-selectin distribution. After injection of CXCL1, this chemokine could be detected as a uniform distribution in endothelial cells of dermal venules by immunofluorescence; however, there was no significant difference in the number or percentage of these cells positive for CXCL1 in sdc-3−/− (n = 8) and wild-type (n = 9) mice (data not shown). This suggests that CXCL1 presentation in skin may occur via a proteoglycan other than sdc-3. Control sections treated in the absence of E-selectin, von Willebrand factor or CXCL1 antibodies were negative.

Figure 2 Reduced chemokine presentation by synovial endothelial cells in sdc-3−/− mice. In the same samples as in Figure 1, sections of CXCL1-injected joints of wild-type and sdc-3−/− mice were stained with a CXCL1 antibody using tyramide amplification, and DAPI, and viewed by immunofluorescence. (A and B) Synovial endothelial cells of a wild-type mouse joint; CXCL1 occurs as clusters, with white arrows showing examples of luminal chemokine and red arrows intracellular or abluminal chemokine. The endothelial cell layer is labelled (e) and the lumen of the blood vessel (L) contains red blood cells. (C) No CXCL1 clusters are present in the endothelial cells of PBS-injected controls (arrows); this image is from a wild-type mouse. (D) There are fewer CXCL1 clusters in endothelial cells following treatment with heparanase I and III to degrade heparan sulphate prior to immunostaining. (E) Negative control of the synovium of a wild-type mouse in the absence of CXCL1 antibody. Bar = 30 μm in A to E. (F) Quantification of CXCL1 staining shows a decrease in the number of endothelial CXCL1 clusters per blood vessel in sdc-3−/− (n = 6) compared to wild-type (n = 6) joints. Data are mean ± SEM. *** P = 0.0003. (G) The data in (F) expressed as the number of CXCL1 clusters at the luminal surface or intracellularly/abluminally in synovial endothelial cells. There is a reduction in the number of luminal CXCL1 clusters in sdc-3−/− mice compared to wild type, *** P = 0.0002. Data are means ± SEM. CXCL1, chemokine C-X-C ligand 1; DAPI, 4',6-diamidino-2-phenylindole; PBS, phosphate-buffered saline.

Increased rolling and adhesion of leukocytes in cremaster venules in sdc-3 null mice

Intravital microscopy was used to examine the effects of sdc-3 gene deletion on the rolling and firm adhesion of leukocytes to venular endothelial cells in the cremaster muscle. The basal number of rolling leukocytes was not significantly different between unstimulated (PBS-treated) sdc-3−/− mice and wild-type mice (Figure 5A). Although TNFα stimulation increased leukocyte rolling in wild-type mice, this did not reach significance. However, a significant increase in rolling was observed in TNFα-stimulated sdc-3−/− mice when compared to either unstimulated sdc-3−/− (P <0.01) or TNFα-stimulated wild-type (P <0.05) mice (Figure 5A). Indeed, compared to unstimulated sdc-3−/−, an almost four-fold increase in rolling was observed. Similarly, CXCL1 stimulation did not increase leukocyte rolling in wild-type mice, but it was associated with a significant increase in rolling in sdc-3−/− mice when compared to unstimulated sdc-3−/− (P <0.01) or CXCL1-stimulated wild-type (P <0.05) mice (Figure 5A). Interestingly, the basal number of adherent leukocytes was significantly (P <0.05) increased in unstimulated sdc-3−/− mice when compared to wild-type mice, with more than double the number of adherent cells observed (Figure 5B). As expected, TNFα stimulation significantly (P <0.05) increased leukocyte adhesion in wild-type mice when compared to unstimulated wild-type mice. However, this effect was more dramatic in the sdc-3−/− mice, with significantly increased leukocyte adhesion observed when compared to either unstimulated sdc-3−/− (P <0.05) or TNFα-stimulated wild-type (P <0.05) mice (Figure 5B). Indeed, compared to unstimulated sdc-3−/−, a 2.2-fold increase in adhesion was observed. Although CXCL1 stimulation did not increase leukocyte adhesion in wild-type mice, it was associated with a significant increase in adhesion in sdc-3−/− mice when compared to CXCL1-stimulated wild type (P <0.05). This did not reach significance when compared to unstimulated sdc-3−/−, presumably reflecting the increased basal adhesion in the sdc-3−/− mice (Figure 5B). All statistical comparisons for intravital microscopy were made by ANOVA followed by Tukey's pairwise tests.
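A minimal sketch of this ANOVA/Tukey comparison across the four treatment groups is given below; the adherent-cell counts are hypothetical, and the statsmodels implementation of Tukey's test is used for convenience:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical adherent-leukocyte counts per 100 um venule segment for the
# four groups compared by ANOVA with Tukey's pairwise tests.
counts = np.array([2, 3, 2, 4,       # wild type, PBS
                   5, 6, 7, 6,       # sdc-3 -/-, PBS
                   8, 9, 7, 10,      # wild type, TNFa
                   14, 16, 13, 17])  # sdc-3 -/-, TNFa
groups = (["wt_PBS"] * 4 + ["ko_PBS"] * 4 +
          ["wt_TNFa"] * 4 + ["ko_TNFa"] * 4)

# Prints a summary table of all pairwise comparisons with adjusted P values.
print(pairwise_tukeyhsd(counts, groups, alpha=0.05))
```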
Discussion

The current study demonstrated that sdc-3 plays a role in inflammation but, interestingly, highlighted both pro- and anti-inflammatory properties for this proteoglycan depending upon the tissue and the nature of the inflammatory insult. In the joint, chemokine administration resulted in reduced neutrophil influx in the synovium of sdc-3 null mice, indicating that this HSPG plays a pro-inflammatory role. This effect may be attributed to chemokine presentation by sdc-3 on synovial endothelial cells, since deletion of this HSPG reduced the presence of the chemokine CXCL1 on these cells. Furthermore, heparanase reduced the amount of endothelial CXCL1, suggesting the involvement of HS chains in binding CXCL1. The chemokine in synovial endothelial cells was not uniformly distributed but appeared to be bound to sdc-3 in clusters. Thus CXCL1 may be concentrated and immobilised into clusters at the endothelial surface for presentation to blood leukocytes. This is in agreement with Hardy et al. [15], who found a focal distribution of CCL2 bound to HS at the apical endothelial surface during leukocyte transendothelial migration in vitro. CXCL1 clusters were particularly reduced at the endothelial surface in sdc-3−/− mice, whereas in the remainder of the cell, in intracellular/abluminal locations, this was not the case. This suggests that sdc-3 may be particularly involved in chemokine presentation, whereas other molecules, such as the Duffy antigen/receptor for chemokines, may play a more dominant role in transcytosis [45,56]. The finding of sdc-3 binding and presenting CXCL1 in the current study is in agreement with our previous study [33]. In human RA there is induction of a CXCL8 binding site on the sdc-3 HS chains of synovial endothelial cells. Mice lack CXCL8, and CXCL1 is its functional equivalent in the murine system. Taken together, therefore, these two studies suggest that sdc-3 may be involved in binding CXC chemokines and stimulating leukocyte trafficking into the RA synovium. A pro-inflammatory function of sdc-3 was also apparent in a murine model of RA. Induction of AIA in the knee joint resulted in reduced joint swelling in sdc-3 knockout mice, suggesting that sdc-3 contributes to the clinical manifestation of the disease. This HSPG is also involved in underlying inflammatory changes such as leukocyte accumulation in the synovium, which was reduced in sdc-3 null mice, as was the overall histological severity of disease. The pro-inflammatory function of sdc-3 in AIA may be due to chemokine presentation by synovial endothelial cells. Furthermore, a role for sdc-3 in joint damage, which is a major feature of RA, is implicated by the inhibitory effect of sdc-3 deletion on cartilage damage. This involvement of sdc-3 in cartilage damage may be related to its pro-inflammatory function in the synovium, via leukocyte recruitment leading to cytokine or degradative enzyme release. However, the effects of loss of sdc-3 in the arthritis model may be mediated, at least in part, by cells other than endothelial cells, since sdc-3 is also expressed by chondrocytes [39,40].
Further studies involving conditional deletion of sdc-3 in selected cell types and examining the effect on arthritis severity would be of interest in this respect. Recent data suggest a role for sdc-4 in inflammatory arthritis [34,35]. In the human TNF transgenic (hTNFtg) mouse model of RA, sdc-4 was involved in the attachment and invasion of synovial fibroblasts into cartilage, contributing to cartilage destruction. Sdc-4 also regulates ADAMTS-5 activation and cartilage breakdown [36]. This suggests that sdcs may be involved in various aspects of joint inflammation and damage in arthritis, with endothelial sdc-3 functioning in leukocyte recruitment and fibroblast sdc-4 in cartilage destruction. Deletion of sdc-3 in the skin had the opposite effect to that in the joint. When CXCL1 was injected into the skin, neutrophil recruitment was enhanced in sdc-3−/− mice compared to wild type, suggesting that this HSPG plays an anti-inflammatory role in this tissue. The effect may be mediated, at least in part, by the adhesion molecule E-selectin, since the luminal distribution of E-selectin increased in knockout animals, suggesting increased expression of this adhesion molecule at the endothelial surface. This may lead to elevated neutrophil recruitment in the presence of CXCL1. E-selectin is expressed in normal skin venules, where it is upregulated in skin inflammation [52][53][54][55]. Sdc-3 is part of the glycocalyx, which can form an anti-adhesive layer to blood leukocytes at the endothelial surface, and it has been proposed that this may mask endothelial adhesion molecules, inhibiting leukocyte-endothelial interactions [1,10]. Steric hindrance may play a role in this process, since the glycocalyx can reach microns in thickness whereas selectins extend only <50 nm from the endothelial surface [1,57]. Therefore, loss of sdc-3 in knockout mice may lead to the unmasking or altered expression of E-selectin at the luminal endothelial surface, leading to increased leukocyte recruitment. This is in agreement with other studies showing that stimuli that degrade the glycocalyx or induce a more open mesh, such as enzymes, cytokines, or ischaemia and reperfusion, appear to uncover adhesion molecules, thereby allowing leukocytes to interact with the endothelium [1][2][3][26]. For example, heparanase, a glycosidase that removes HS, causes increased leukocyte adherence at the endothelial surface in the cremaster venules of mice by intravital microscopy [26]. In the present study, endothelial sdc-3 does not appear to present the chemokine CXCL1 in the skin, since there was no difference in the presence of this chemokine on dermal venules between wild-type and knockout mice, and other HSPGs may be more involved in this mechanism. Thus, in the skin, sdc-3 may be involved in regulating leukocyte adhesion via altering the distribution or expression of the adhesion molecule E-selectin. Since the data obtained from the skin demonstrated an anti-inflammatory role for sdc-3, we further investigated its role using the more direct approach of intravital microscopy, which allowed real-time dynamic images of leukocyte adhesion to be monitored in anaesthetised mice in vivo. Furthermore, the effects of sdc-3 deletion on leukocyte rolling could also be assessed, which was not possible on static sections. Increased numbers of rolling and adherent leukocytes in the venules of sdc-3−/− mice in response to either CXCL1 or TNFα were observed compared to wild type.
These results suggest that sdc-3 has an inhibitory effect on leukocyte-endothelial interactions in response to inflammatory stimuli, and they are in accord with those in skin. Intravital microscopy has been performed in sdc-1 null mice following TNFα treatment, where there is increased adhesion of leukocytes to endothelial cells of the mesentery venules [11,58]. These intravital data, together with ours, indicate that sdc-3 and sdc-1 play similar roles in cremaster and mesenteric venules in similar inflammation models. In these tissues sdc-3 and sdc-1 appear to be negative regulators of leukocyte-endothelial interactions. The anti-inflammatory role of sdc-3 in our skin and cremaster models is similar to that of sdc-1 and -4 in inflammatory disease models. Sdc-1 gene deletion in mice exacerbates inflammation in models of allergic contact dermatitis, allergic lung disease, colitis and nephritis, with increased leukocyte recruitment and more severe disease [20,21,23,24]. Similarly, sdc-4 null mice exhibit increased inflammation and neutrophil recruitment in a model of pulmonary inflammation and lung injury [25]. Thus our finding of sdc-3 having a pro-inflammatory role in the synovium in a mouse model of RA is more unusual amongst the different sdc knockout models. Whether these HSPGs have pro- or anti-inflammatory functions may depend on the sdc, the tissue or cell type where they are expressed and/or the type of inflammation. Furthermore, specific targeting of sdcs tailored to particular inflammatory diseases is called for if they are to be exploited therapeutically in human diseases. For example, blocking sdc-3 or sdc-4 in human RA would be of potential interest for reducing inflammation and joint destruction, whereas this strategy may have opposite effects in certain inflammatory conditions of the skin, lung, gut and kidney. In the current study, sdc-3 was found to be expressed by endothelial cells in murine synovium and skin. This is in agreement with human tissues, where endothelial sdc-3 was found to be particularly expressed in the endothelial cells of RA synovium [33]. Interestingly, sdc-3 is also found in lymphoid tissue, where this HSPG perfectly delineates some of the high endothelial venules [44]. These venules are the preferred sites of lymphocyte extravasation, which, taken with the findings of the current study, suggests a role for this HSPG in lymphocyte trafficking in the lymph nodes. Sdc-3 is also expressed by the endothelial cells in human liver [43].

Conclusions

Sdc-3 appears to have a tissue-selective role in inflammation, being pro-inflammatory in the joint, which may be mediated by endothelial chemokine presentation. It is also involved in leukocyte accumulation and cartilage damage in joints with AIA. In the skin and cremaster it may be anti-inflammatory, contributing to the anti-adhesive properties of the endothelial glycocalyx. This study helps clarify the contradictory reports of HSPGs as pro- and anti-inflammatory and, in the case of sdc-3, suggests the importance of tissue-dependent endothelial cell functions. Furthermore, it suggests that targeting sdc-3 in the joint in inflammatory arthritis could be a therapeutic strategy.
Introduction to Set Shaping Theory

In this article, we define Set Shaping Theory, whose goal is the study of the bijection functions that transform a set of strings into a set of equal size made up of strings of greater length. The functions that meet this condition are many, but since the goal of this theory is the transmission of data, we have analyzed the function that minimizes the average information content. The results obtained show how this type of function can be useful in data compression.

Introduction

In this article, we introduce Set Shaping Theory, whose objective is the study of the bijection functions that transform a set $X^N$ of strings of length $N$ into a set $X^{N+K}_f$ of strings of length $N+K$, with $K, N \in \mathbb{N}^+$, $|X^N| = |X^{N+K}_f|$ and $X^{N+K}_f \subset X^{N+K}$. In particular, we will analyze the functions in which the set $X^{N+K}_f$ contains the strings with the least information content belonging to the set $X^{N+K}$. The analysis of the results shows how this type of function can be useful in data compression.

Methods

In this article, we use the concepts and functions developed by C. E. Shannon [1] that represent the basis of information theory. Consider a source defined by an ensemble $X = (x; A_X; P_X)$, where $x$ is the value of the random variable, $A_X = \{a_1, a_2, \ldots, a_n\}$ are the possible values of $x$ (states) and $P_X = \{p_1, p_2, \ldots, p_n\}$ is the probability distribution of the states, with $P(x = a_i) = p_i$ and $\sum_{i=1}^{n} p_i = 1$. The entropy of $X$, denoted $H(X)$, is defined as:

$$H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i.$$

We call $X^N$ the set that contains all possible strings $x = \{x_1, \ldots, x_i, \ldots, x_N\}$ generated by $X$.

Definition 1: We call $f$ the bijection function on the set $X^N$ defined as:

$$f : X^N \rightarrow X^{N+K}_f, \qquad X^{N+K}_f \subset X^{N+K}, \qquad |X^{N+K}_f| = |X^N|.$$

The function $f$ defines from the set $X^{N+K}$ a subset of size equal to $|X^N|$. This operation is called "shaping of the source", because what is done is to make null the probability of generating some sequences belonging to the set $X^{N+K}$.

Definition 2: The parameter $K$ is called the shaping order of the source and represents the difference in length between the sequences belonging to $X^N$ and the transformed sequences belonging to $X^{N+K}_f$. Given a source $X = (x; A_X; P_X)$ and a string $x = \{x_1, \ldots, x_i, \ldots, x_N\}$, we define its information content:

$$I(x) = -\log_2 P(x).$$

The probability $P(x)$ that the source $X$ generates the sequence $x$ is:

$$P(x) = \prod_{i=1}^{N} p(x_i).$$

Definition 3: We call the average information content of a sequence generated by a source $X = (x; A_X; P_X)$ the summation of the products of the information content of the sequences belonging to $X^N$ and their probability:

$$I(X^N) = \sum_{x \in X^N} P(x)\, I(x). \qquad (1)$$

Remark 1: As $N$ tends to infinity, $I(X^N)$ tends to $NH(X)$. Indeed, when $N$ becomes large, the contribution to the value of function (1) derives almost exclusively from the strings belonging to the typical set [2]. By typical set we mean the set of strings whose information content is close to $NH(X)$.

This function is essential to understand the advantages of applying $f$, because this function transforms the strings $x \in X^N$ into the strings $y \in X^{N+K}_f$; consequently, the average information content changes as follows:

$$I(Y) = \sum_{x \in X^N} P(x)\, I(f(x)),$$

where $P(x)$ remains unchanged but the information content of the string changes.
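To make Definition 3 concrete, the sketch below enumerates $X^N$ for a small uniform source and computes the average information content two ways. One point worth noting: with the true distribution $P$, the average of $I(x)$ equals $NH(X)$ exactly for every $N$ by linearity of expectation, so the gradual convergence described in Remark 1 (and the sub-$NH(X)$ values reported in the tables below) is consistent with $I(x)$ instead being computed from each string's empirical symbol frequencies. That empirical reading is an assumption on our part, not something the text states explicitly; the sketch computes both versions.

```python
import itertools, math
from collections import Counter

A = "abc"                       # |A| = 3 states
p = {s: 1 / len(A) for s in A}  # uniform source
N = 6

H = -sum(q * math.log2(q) for q in p.values())  # entropy H(X)

avg_true = avg_emp = 0.0
for x in itertools.product(A, repeat=N):
    Px = math.prod(p[s] for s in x)             # P(x) = prod_i p(x_i)
    avg_true += Px * (-math.log2(Px))           # I(x) from the true P
    freq = Counter(x)
    I_emp = -sum(c * math.log2(c / N) for c in freq.values())
    avg_emp += Px * I_emp                       # I(x) from empirical freqs

print(f"N*H(X)               = {N * H:.3f} bits")
print(f"avg I(x), true P     = {avg_true:.3f} bits")  # equals N*H exactly
print(f"avg I(x), empirical  = {avg_emp:.3f} bits")   # approaches N*H as N grows
```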
Definition 4: We call $f_{\min}$ the bijection function on the set $X^N$ defined as the function $f$ for which the average information content $I(Y)$ is minimal.

Remark 2: The function $f_{\min}$ transforms the set $X^N$ into the set $X^{N+K}_f$ composed of the $|X^N|$ strings with the least information content belonging to $X^{N+K}$. Consequently, each string belonging to the complementary set of $X^{N+K}_f$ has a greater information content than any string belonging to $X^{N+K}_f$. Wanting to apply this type of function to problems concerning data compression, the functions $f_{\min}$ are the most interesting to analyze.

Given a source defined by an ensemble $X = (x; A_X; P_X)$ with a uniform probability distribution, $p_i = 1/|A|$, we will apply the function $f_{\min}$ to the set $X^N$, which contains all possible strings of length $N$ produced by $X$, and compare the values of $I(x)$ and $I(y)$. We start by analyzing strings of length equal to $N = |A|$, with $K = 1$ and $|A|$ variable between 2 and 7. Consequently, the strings $x \in X^{|A|}$ have length $|A|$ and, having chosen $K = 1$, the strings $y \in X^{|A|+1}$ have length $|A| + 1$. For example, if $|A| = 2$ the strings $x \in X^2$ have length 2 while the strings $y \in X^3$ have length 3. $N$ being very small, it is possible to calculate the values of $I(x)$ and $I(y)$ exactly. In Table 1, the first column shows the cardinality of $A$, the second column the value of $I(x)$, the third the value of $I(y)$, and the fourth the difference $I(x) - I(y)$.

Table 1: The average information content $I(x)$ ($N = |A|$) and $I(y)$ ($N = |A| + 1$) in bits, calculated for $K = 1$.

Having chosen such short string lengths, we have a value of $I(x)$ that differs greatly from $NH(X)$. This result is normal, since for these values of $N$ the value calculated with formula (1) depends very much on strings with information content less than $NH(X)$. Observing the data in Table 1, we notice an unexpected result: for values of $|A| > 2$ the average information content $I(y)$ is less than $I(x)$.

Now, let us increase the length of the strings to 100 and keep the value of $K$ at 1. Thus, the strings $x \in X^{100}$ have length 100 and, having chosen $K = 1$, the strings $y \in X^{101}$ have length 101. In this case, given the length of the strings, the exact calculation of $I(x)$ and $I(y)$ is very complex, so we estimate these values using the Monte Carlo method [3,4]. The data reported in Table 2 concern the simulation of 1,000,000 strings of length 100 generated by a source $X = (x; A_X; P_X)$ with a uniform probability distribution and $|A|$ variable between 2 and 10. As in Table 1, the columns show the cardinality of $A$, the value of $I(x)$ ($N = 100$), the value of $I(y)$ ($N = 101$) and the difference $I(x) - I(y)$.

Table 2: The average information content $I(x)$ and $I(y)$ in bits, calculated for $N = 100$ and $K = 1$.

Analyzing the data in Table 2, we note that for this value of $N$ the value of $I(x)$ approximates $NH(X)$. Indeed, as mentioned, increasing the length of the strings makes the contribution to formula (1) depend almost exclusively on the strings with information content close to $NH(X)$. Also in this case, for $|A| > 2$ the average information content $I(y)$ is less than $I(x)$. Hence, this result does not depend on the length of the strings but persists as $N$ increases.

Now, to try to understand this result, let us compare the single values $I(x)$ and $I(y)$, with $p_i = 1/|A|$, $|A| = 3$, $K = 1$ and $N = 10$. Therefore, the strings $x \in X^{10}$ have length 10 and, having chosen $K = 1$, the strings $y \in X^{11}$ have length 11. In this situation, the set $X^{10}$ contains $3^{10} = 59049$ strings. We have chosen this value of $N$ because it allows us to calculate the single values $I(x)$ and $I(y)$ and, at the same time, strings with information content much lower than $NH(X)$ contribute negligibly to $I(x)$ and $I(y)$.

In Figure 1, the solid line shows the $I(x)$ values and the dashed line the $I(y)$ values in bits. The strings were sorted according to their information content in ascending order. Analyzing Figure 1, we can note that despite $I(y) < I(x)$ ($I(x) = 14.263$ bits, $I(y) = 14.136$ bits), this inequality is true only on average. Indeed, the single values of $I(x)$ and $I(y)$ oscillate relative to each other. This result is interesting because it tells us that the usefulness of this technique depends on the probability distribution $P$ and consequently on the information content of the typical set. Since the information content of the typical set can be approximated by $NH(X)$, the use of the function $f_{\min}$ can only be useful when this value falls in a region where $I(y) < I(x)$.
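The Table-1-style computation can be reproduced exactly for small $N$. The sketch below enumerates $X^{N+K}$, keeps the $|A|^N$ strings of least information content (the image of $f_{\min}$), and compares the averages. As noted earlier, it assumes that $I(x)$ is evaluated from each string's empirical symbol frequencies, and it uses $N = 4$, $|A| = 3$, $K = 1$ (smaller than the paper's settings) so that full enumeration replaces the Monte Carlo estimate; with these settings the run reproduces the qualitative result $I(y) < I(x)$ for $|A| = 3$.

```python
import itertools, math
from collections import Counter

def I_emp(x):
    """Information content of a string from its own empirical symbol
    frequencies (the reading of I(x) assumed throughout this sketch)."""
    n = len(x)
    return -sum(c * math.log2(c / n) for c in Counter(x).values())

A, N, K = "abc", 4, 1
X_N  = ["".join(s) for s in itertools.product(A, repeat=N)]
X_N1 = ["".join(s) for s in itertools.product(A, repeat=N + K)]

# f_min: keep the |A|^N strings of length N+K with the least information
# content; for a uniform source, any bijection onto this subset gives the
# same average I(y), since every x in X^N is equiprobable.
shaped = sorted(X_N1, key=I_emp)[: len(X_N)]

Ix = sum(map(I_emp, X_N)) / len(X_N)
Iy = sum(map(I_emp, shaped)) / len(shaped)
print(f"I(x) = {Ix:.3f} bits, I(y) = {Iy:.3f} bits, I(x)-I(y) = {Ix - Iy:.3f}")
```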
Conclusion

In this article, we have defined Set Shaping Theory, whose goal is the study of the bijection functions that transform a set of strings into a set of equal size made up of strings of greater length. The functions that respect this condition are many, but since the goal of this theory is the transmission of data, we have analyzed the function $f_{\min}$, which transforms the set $X^N$ into the set $X^{N+K}_f$ composed of the $|X^N|$ strings with the least information content belonging to $X^{N+K}$.

Analyzing the data, we find an unexpected result: the average information content $I(y)$ turns out to be less than $I(x)$ when the cardinality of $A$ is greater than 2. This result is present both for minimal lengths, such as those reported in Table 1, and for longer lengths, like the one shown in Table 2. Therefore, this result does not seem to depend on the length of the string. However, this is only a preliminary analysis; to reach a conclusion, it is essential to study the asymptotic behavior.

Figure 1 shows another interesting result: the single values of $I(x)$ and $I(y)$ oscillate relative to each other, and neither of them is continuously greater or less than the other. Consequently, the usefulness of this technique depends on the information content of the typical set.

For these reasons, we believe that this theory is particularly interesting for data compression. However, as mentioned, the consequences of this type of transform on the average information content are particularly complex, and this analysis therefore requires further study.
BMTVDS2: a novel hybrid bioinspired model for task-and-VM-dependency and deadline aware scheduling via dual service level agreements

This paper discusses the design of a novel hybrid bioinspired model for task-and-VM-dependency and deadline aware scheduling via dual service level agreements. The model uses a combination of grey wolf optimization with the league championship algorithm to perform efficient scheduling operations. These optimization techniques model a fitness function that incorporates task make-span, task deadline, mutual dependencies with other tasks, the capacity of VMs, and the energy needed for scheduling operations. This assists in improving its scheduling performance for multiple use cases. To perform these tasks, the model initially deploys a task-based service level agreement (SLA) method, which assists in enhancing task and requesting-user diversity. This is followed by the design of a VM-based SLA model, which reconfigures the VM's internal characteristics to incorporate multiple task types. The model also integrates deadline awareness along with task-level and VM-level dependency awareness, which assists in improving its scheduling performance under real-time task and cloud scenarios. The proposed model is able to improve cloud utilization by 8.5%, increase task diversity by 8.3%, reduce the delay needed for resource provisioning by 16.5%, and reduce energy consumption by 9.1%, making it suitable for a wide variety of real-time cloud deployments.

Introduction

Scheduling tasks on cloud-based VMs for efficient client-level service performance is a multidomain task that involves the design of task pattern analysers, capacity optimization units, task mapping units, correlation evaluation layers, task dependency analysis models, etc. A typical task-to-VM scheduling model evaluates the correlation between a task's requirements and the capacity of the VM [1] and uses Eq. (1) to identify a matching VM for a given task, where $N(VM)$ and $N(T)$ represent the number of VMs and the number of tasks, while $f_c$ and $f_r$ represent the capacity evaluation function for VMs and the requirement evaluation function for individual tasks. These functions are modeled such that the capacity of the VM has a close correlation with the respective requirements of the underlying tasks. A CNN-based task mapping model is depicted in Fig. 1, wherein an optimized dynamic scheduler (ODS) is optimized via a CNN-based classification process. The process utilizes resource monitoring statuses, which include the capacity of VMs, deadlines of tasks, make-spans of tasks, and mutual dependencies of tasks, to estimate a mapping plan between the underlying VM and task configurations. The capacity of VMs is evaluated via Eq. (2), where $BW$, $Mem$ and $MIPS$ represent bandwidth, memory availability, and processing capacity in millions of instructions per second, while $Cap(VM)$ represents the capacity of individual virtual machines. Similarly, the task requirements are evaluated via Eq. (3), where $MS$, $D$ and $MS(D)$ represent the make-span, deadline, and make-span dependency delays for each of the tasks, while $T_r$ represents its computational requirement levels.

This paper proposes a task scheduling model for multiple virtual machine (VM) based cloud systems that combines the league championship algorithm (LCA) and grey wolf optimization (GWO). This model aims to address the complexity-related limitations of existing models as well as their failure to take task and VM dependencies into account when scheduling tasks.
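Since only the variable definitions of Eqs. (1)-(3) survive here, the following sketch illustrates one plausible form of the capacity and requirement evaluations and of a matching rule; the weighted-sum forms, weights and normalisations are assumptions for illustration, not the paper's actual equations:

```python
from dataclasses import dataclass

@dataclass
class VM:
    bw: float    # bandwidth (Mbps)
    mem: float   # available memory (GB)
    mips: float  # processing capacity (MIPS)

@dataclass
class Task:
    ms: float       # estimated make-span (s)
    deadline: float # deadline (s)
    ms_dep: float   # make-span dependency delay from other tasks (s)

# Assumed stand-ins for Eqs. (2)-(3): capacity as a weighted sum of
# normalized VM resources; requirement as deadline pressure on the task.
def cap(vm, w=(0.3, 0.3, 0.4)):
    return w[0] * vm.bw / 1000 + w[1] * vm.mem / 64 + w[2] * vm.mips / 10000

def req(t):
    return (t.ms + t.ms_dep) / max(t.deadline, 1e-9)

def match_vm(task, vms):
    # One plausible reading of Eq. (1): pick the VM whose capacity level
    # correlates best with the task's requirement level.
    return min(range(len(vms)), key=lambda j: abs(cap(vms[j]) - req(task)))

vms = [VM(100, 8, 2000), VM(500, 32, 8000)]
print(match_vm(Task(ms=40, deadline=60, ms_dep=10), vms))
```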
Based on these evaluations, the CNN model can classify each task requirement into a relevant mapping VM type, which assists in improving scheduling performance. Similar models [2][3][4] are briefly reviewed in the next section of this text, wherein their contextual nuances, applicative advantages, functional limitations, and deployment-specific future scopes are discussed under various scenarios. Based on this discussion, it was observed that existing models are either highly complex or do not consider task and VM dependencies while performing the mapping process. Moreover, most of these models are highly context-sensitive and cannot be applied to large-scale scheduling applications. To overcome these issues, Sect. 3 discusses the design of a novel hybrid bioinspired model for task-and-VM-dependency and deadline aware scheduling via dual service level agreements. This model was evaluated in terms of cloud utilization, task diversity, resource provisioning delay, and energy consumption levels in Sect. 4, where it is compared with various state-of-the-art methods. This article concludes with some thought-provoking comments on the proposed model, as well as some ideas for enhancing the model's overall effectiveness across a range of usage scenarios.

Core contributions

• Hybrid bioinspired model: The proposed model aims to enhance task scheduling in cloud systems by combining the strengths of LCA and GWO. The LCA component handles task scheduling by considering task dependencies and assigning tasks to suitable VMs. The GWO component then optimizes the scheduling results obtained from the LCA by exploring the search space further and refining the schedule.

• Deadline-aware and task-and-VM-dependency-aware scheduling: The suggested model considers several variables, including task duration, deadline, task dependencies, VM capacity, and energy consumption. By including these factors in the fitness function used by the optimization algorithms, the model aims to enhance scheduling performance across various use cases as well as under real-time task and cloud settings.

• Dual service level agreements (SLAs): To increase task and requesting-user diversity, the method makes use of a task-based SLA mechanism. Additionally, it builds a VM-based SLA model that alters the VM's internal characteristics to support various task types. The model aims to improve scheduling effectiveness in real-time cloud environments by taking deadline awareness, task-level dependency awareness, and VM-level dependency awareness into account for different scenarios.

• Performance evaluation: Using diverse task datasets from different parallel workload archives, the paper compares the performance of the proposed model to that of other existing models. According to the evaluation results, the suggested model performs better than other models in terms of task diversity, resource provisioning speed, cloud utilization, and energy usage levels. This demonstrates the proposed model's potential applicability to real-time cloud deployments.

Organization of the paper

The paper begins with an abstract that provides a concise summary of the study's objectives, methodology, and key results. The introduction section establishes the context and motivation for the research, highlighting the limitations of existing models for task scheduling in VM-based cloud systems. It presents the research problem and outlines the objectives of the study.
Following that, the review section offers a comprehensive analysis of related literature, discussing existing approaches and their strengths and weaknesses. This section also introduces bioinspired optimization algorithms and their relevance to task scheduling. The proposed model section describes in detail the hybrid bioinspired model that addresses task and VM dependencies while considering deadlines. It explains the integration of the league championship algorithm (LCA) and grey wolf optimization (GWO), along with the formulation of the fitness function that incorporates various factors. The results section presents the experimental findings, including the datasets used, performance metrics, and a comparison with other models. It discusses the implications of the results and highlights the advantages of the proposed model. Finally, the conclusion and future scope section summarizes the contributions of the research, reflects on the strengths and limitations of the proposed model, and suggests areas for future research and improvement. It concludes with a statement on the significance of the proposed model in real-time cloud deployments.

Literature review

A wide variety of task scheduling models have been proposed by researchers, and each of them varies in terms of its internal operating characteristics. For instance, the work in [5,6] proposes the use of geo-distributed data analytics and a self-adapting task scheduling model for the estimation of high-density data patterns while mapping tasks to different cloud configurations. But these models are not scalable and thus cannot be used for heterogeneous task types. To overcome this limitation, the work in [7] proposes the use of multiple-device co-processing of data-parallel kernels, which assists in deploying the model for task scheduling under distributed scenarios. This model is capable of predicting task patterns, which assists in improving capacity pre-emption for different VM types. Similar models are discussed in [8][9][10], which propose the use of joint task scheduling and containerizing (JTSC), a genetic algorithm with mobility-aware task scheduling (GAMTS), and deep neural network scheduling (DNNS) for the estimation of multiple task types under real-time environments. These models are useful for deploying scheduling techniques for large-scale use cases. Extensions to these models are discussed in [11][12][13], which propose the use of non-preemptive stochastic co-flow scheduling (NPSCS), energy, time, and rent cost (ETRC) optimization, and the whale optimization algorithm (WOA), which assist in improving performance for inter-related task sets. These models are highly functional when applied to low-complexity scenarios and thus can be used to mitigate issues in the presence of scheduling faults. Due to this characteristic, they can be deployed for large-scale scheduling applications. Other models propose the use of energy-efficient scheduling [14], profit-sensitive spatial scheduling (PS3) [15], multi-task deep reinforcement learning (MTDRL) [16], a novel multi-objective evolutionary algorithm based on decomposition (NMOEA) [17], dynamic voltage and frequency scaling (DVFS) [18], and an elastic task scheduling scheme (ETSS) [19], which assist in task pattern analysis and the deployment of SLA-specific operations, thereby enforcing application-level constraints.
But these models are not useful for dynamic task sets; thus they are further extended via the work in [20], which proposes the use of a dynamic and resource-aware load-balanced scheduling model (DRALBM) that assists in improving performance under continuously changing task scenarios. To further optimize their performance, the work in [21][22][23] proposes the use of an energy-efficient dynamic scheduling scheme (EDSS), deep neural networks (DNN), and task scheduling and microservices-based computational offloading (TSMCO), which aim at incorporating high-density feature extraction under real-time scheduling scenarios. These models are highly complex and thus cannot be used for real-time applications. To overcome this issue, the work in [24][25][26] proposes the use of parallel processing, deep reinforcement learning (DRL), and earliest deadline first (EDF) scheduling models, which can be used for high-speed applications. These models are capable of incorporating low-complexity processing techniques, which assists in improving their real-time performance. This work is further extended in [27,28], which discusses the integration of task duplication and particle swarm optimization (PSO) with idle-time-slot-aware rules, assisting in the minimization of execution costs under mutually dependent task sets. A cost-effective model is proposed in [30,31], which uses replication. But these models are either highly complex to deploy for real-time cloud tasks or do not consider task and VM dependencies while performing the mapping process. These models also showcase a high level of context sensitivity and thus cannot be used for large-scale scheduling applications. To overcome these issues, the next section discusses the design of a novel hybrid bioinspired model for task-and-VM-dependency and deadline aware scheduling via dual service level agreements. The model was validated on multiple real-time datasets and was compared with various state-of-the-art methods, which assists in validating its performance under real-time deployments (Table 1).

Based on this review, the following are the gaps in existing methods:

1. Taking task and VM dependencies into account: Current models for task scheduling in VM-based cloud systems frequently fail to take task and VM dependencies into account. The proposed model incorporates task and VM dependency awareness in an effort to close this gap. Further study may examine more sophisticated methods to model and optimize task and VM dependencies, taking into account complex scenarios and interactions between various tasks and VMs.

2. Scalability and real-time scheduling: The proposed model shows improvements in scheduling performance; however, its scalability in massive cloud environments and real-time scheduling scenarios needs more research. To ensure effective and timely scheduling decisions, research could concentrate on improving the model's scalability and responsiveness to dynamic workload changes.

3. Energy efficiency and sustainability: The suggested model takes energy use into account when scheduling operations are performed, which helps to lower energy consumption. The sustainability and environmental impact of the model could be improved, though, by further research into energy-efficient scheduling techniques such as dynamic power management and workload consolidation. The hybrid bioinspired model combines the league championship algorithm (LCA) and grey wolf optimization (GWO), two optimization algorithms and strategies.
While these algorithms are used, it may be possible to explore additional bioinspired algorithms or optimization strategies to improve scheduling performance. The efficacy of various optimization algorithms and their combinations for task scheduling in VM-based cloud systems could be further investigated.

4. Evaluation and benchmarking: The proposed model was compared to other models using diverse task datasets. Detailed comparison methodologies, benchmark datasets, and evaluation criteria are not always provided. By taking into account additional performance metrics, utilizing standardized benchmark datasets, and contrasting it against a wider range of existing models, future research could concentrate on providing a more thorough evaluation of the proposed model.

5. Real-world implementation and deployment: Although the proposed model exhibits encouraging results, additional study could examine its real-world implementation and deployment. This would entail taking into account the constraints and difficulties faced by real-world cloud systems, assessing the model's viability and efficiency across various cloud platforms, and resolving any implementation issues that may arise in real-time scenarios.

Table 1 Summary of reviewed models and their limitations

| References | Contribution | Limitation |
|---|---|---|
| | | Limited evaluation on a small-scale testbed |
| [4][5][6] | Developed a genetic algorithm for task allocation in cloud environments | Assumed independent tasks without considering interdependencies |
| [7,8] | Introduced a machine learning-based approach for VM scheduling | Focused primarily on VM allocation rather than task dependencies |
| [9][10][11] | Proposed a task clustering method for efficient VM scheduling | Did not consider deadline-aware scheduling or energy consumption |
| [13][14][15] | Presented an ant colony optimization algorithm for task scheduling | Limited scalability when applied to large-scale cloud environments |
| [16][17][18] | Introduced a particle swarm optimization algorithm for VM scheduling | Lacked consideration for VM-level dependency awareness |
| [19,20] | Developed a reinforcement learning-based approach for task scheduling | Evaluation limited to a specific cloud platform and task characteristics |
| [21][22][23] | Proposed a hybrid cuckoo search algorithm for VM allocation | Did not explicitly consider task-level and VM-level dependency awareness |
| [24][25][26] | Presented a fuzzy logic-based approach for VM and task allocation | Limited explanation of the fuzzification and defuzzification processes |
| [27][28][29] | Introduced a multi-objective optimization algorithm for task scheduling | Evaluation focused on a limited set of performance metrics |
| [30,31] | Proposed a cost-effective model based on replication | Evaluation limited to cost-based parameters |

Research methodology

After reviewing existing scheduling models, it was observed that they are either highly complex to deploy for real-time cloud tasks or do not consider task and VM dependencies while performing the mapping process. Most of these models also showcase a high level of context sensitivity and thus cannot be used for large-scale scheduling applications. To overcome these issues, this section discusses the design of a novel hybrid bioinspired model for task-and-VM-dependency and deadline aware scheduling via dual service level agreements. The flow of the model is depicted in Fig. 2, wherein it can be observed that the proposed model uses a combination of grey wolf optimization (GWO) with the league championship algorithm (LCA) to perform efficient scheduling operations. These optimization techniques model a fitness function that incorporates task make-span, task deadline, mutual dependencies with other tasks, the capacity of VMs, and the energy needed for scheduling operations, which assists in improving its scheduling performance for multiple use cases.
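As a hedged illustration of such a fitness function (the paper's exact equations are not reproduced here), the following sketch combines the named factors, make-span, deadline hits, dependency delays, VM capacity utilization and energy, with assumed weights and forms:

```python
def fitness(assign, tasks, vms, w=(0.25, 0.25, 0.2, 0.15, 0.15)):
    """Illustrative fitness for a task->VM assignment (higher is better).
    Mirrors the factors named in the text with assumed weights and forms."""
    ms = [t["mi"] / vms[assign[i]]["mips"] for i, t in enumerate(tasks)]
    makespan = max(ms)
    dhr = sum(m <= tasks[i]["deadline"] for i, m in enumerate(ms)) / len(tasks)
    # Penalize dependent tasks split across VMs (one reading of dependency cost).
    dep_pen = sum(1 for i, t in enumerate(tasks)
                  for d in t["deps"] if assign[d] != assign[i]) / max(len(tasks), 1)
    util = sum(ms) / (makespan * len(vms))            # capacity utilization
    energy = sum(m * vms[assign[i]]["power"] for i, m in enumerate(ms))
    e_norm = energy / (makespan * sum(v["power"] for v in vms))
    return (w[0] * (1 / (1 + makespan)) + w[1] * dhr
            - w[2] * dep_pen + w[3] * util - w[4] * e_norm)

tasks = [{"mi": 4000, "deadline": 5.0, "deps": []},
         {"mi": 9000, "deadline": 6.0, "deps": [0]}]
vms = [{"mips": 2000, "power": 100.0}, {"mips": 8000, "power": 250.0}]
print(fitness([0, 1], tasks, vms))
```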
To perform these tasks, the model initially deploys a task-based service level agreement (SLA) method, which assists in enhancing task and requesting-user diversity. This is followed by the design of a VM-based SLA model, which reconfigures the VM's internal characteristics for the incorporation of multiple task types. The model also integrates deadline awareness along with task-level and VM-level dependency awareness, which assists in improving its scheduling performance under real-time task and cloud scenarios. Based on the model design depicted in Fig. 2, it can be observed that the model initially collects task and VM information from their respective sources and then applies a task-based service level agreement (SLA) model, which assists in sequencing tasks based on a context-based criterion. The following process is used to perform this task:

• Arrange all tasks chronologically as per their request timestamps, and track the user (client) from which each task request has arrived for scheduling purposes.
• Define a task SLA time (T_SLA) and a user SLA time (U_SLA), and perform the following process:
• For each set of tasks, evaluate the SLA interval (I_SLA) via Eq. (4), where t represents the timestamp of the task, i represents the number of the task that is to be scheduled, and t(i) | U represents the task timestamp for the U-th user that has requested scheduling operations.

This process is repeated for all tasks and all users, which assists in enforcing task-level and user-level SLAs for input scheduling requests. Based on the resulting task sets, a GWO model is activated, which assists in deciding the configuration of VMs for input task requests. This model works via the following process:

• To initialize the process, mark all the optimizer wolves as belonging to the 'Delta' category.
• Evaluate each iteration, and scan all wolves via the following process:
• Check if the wolf's current category is not 'Delta'; if so, skip it and go to the next wolf in sequence.
• Else, modify the wolf's configuration via the following process:
• Stochastically modify the capacity of each VM via Eq. (6), where STOCH denotes a stochastic process used to generate numbers within the given range sets for the VMs, while CR(T) represents the computational requirements of the task, evaluated via Eq. (9), where MS and DL represent the make-span and deadline of the task respectively, which are combined for the evaluation of task scheduling optimizations.
• Evaluate this fitness value for each wolf, and then estimate the iteration threshold via Eq. (10).
• At the end of each iteration, reconfigure the wolves' categories based on their fitness relative to this threshold.

This process is repeated for all iterations, and the final configuration of the VMs is selected based on the wolf with the highest fitness and is used for the scheduling process.
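A loose sketch of this Delta-category search is given below; since Eqs. (6)-(10) are not given explicitly here, the fitness form, perturbation range and threshold rule are all assumptions chosen only to mirror the steps listed above:

```python
import random

def gwo_vm_config(n_vms, cap_range, task_load, iters=50, wolves=20):
    """Each 'wolf' is a candidate VM-capacity configuration; below-threshold
    wolves (the 'Delta' category) are stochastically perturbed each iteration,
    and the highest-fitness wolf's configuration is returned."""
    lo, hi = cap_range
    pack = [[random.uniform(lo, hi) for _ in range(n_vms)] for _ in range(wolves)]

    def fit(cfg):
        # Assumed fitness: reward covering the task load without gross
        # over-provisioning (total capacity stands in for energy cost).
        total = sum(cfg)
        return min(total / task_load, 1.0) - 0.2 * total / (hi * n_vms)

    for _ in range(iters):
        scores = [fit(w) for w in pack]
        thresh = sum(scores) / len(scores)   # assumed iteration threshold
        for i, w in enumerate(pack):
            if scores[i] < thresh:           # 'Delta' wolves keep exploring
                pack[i] = [min(hi, max(lo, c * random.uniform(0.75, 1.25)))
                           for c in w]
    return max(pack, key=fit)

print([round(c) for c in gwo_vm_config(4, (500, 4000), task_load=9000)])
```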
This LCA model works via the following process, where S_task, BW_task, RAM_task, and DL_task represent the task size, the bandwidth needed to schedule the task, the RAM needed for scheduling, and the deadline of the task, while R and B represent the RAM and bandwidth of the VMs used for the scheduling process.
• Perform this task twice, and select the VM that has the minimum score via Eq. (12).
• Repeat this process for all leagues, and identify configurations with minimum score levels.
• Now, iterate through all seasons, and check all leagues via the following process:
• Select two stochastic leagues, and compare their score levels.
• Mark the league with the higher score level as 'Winner', while modifying the other leagues via the following process:
• Stochastically replace L_r × N_VM VM sequences in the underlying league with VM sequences from the 'Winner' league via Eq. (13), where S(New) and S(Winner) represent the VM sequences stochastically copied from the 'Winner' league into the target league, thereby assisting in the deployment of an incremental learning process.
• Repeat this process for all seasons, and continuously modify the league configurations.
At the end of the final season, identify the league with the minimum score, and use its VM sequences for scheduling the tasks. Due to this, the scheduling model can map VMs to tasks with deadline and dependency awareness. The performance of this model was evaluated in terms of different statistical parameters under real-time VM and task configurations. This performance was compared with various state-of-the-art models, as described in the next section of this text.
Result evaluation and comparative analysis
From the discussion of the proposed model, it can be observed that a combination of GWO with LCA is capable of incorporating deadline awareness, SLA enforcement, and incremental optimization under dependent task types. To validate this, the proposed model was evaluated in terms of its task execution delay (D), scheduling efficiency (SE), deadline hit ratio (DHR), and energy efficiency (E) under different VM and task configurations. This performance was compared with WOA [13], MT DRL [16], and DNN [22]. To perform this comparison, configurations for different VM and task types were extracted from the Parallel Workloads Archive (PWA), which can be accessed via https://www.cs.huji.ac.il/labs/parallel/workload and can be used under open-source licenses. These logs consist of a large number of task configurations, out of which the Sandia Ross cluster logs, the San Diego Supercomputer Center (SDSC) Blue Horizon logs, the Lawrence Livermore National Lab Linux cluster logs, the Potsdam Institute for Climate Impact Research (PIK) IBM iDataPlex cluster logs, and the Intel Netbatch logs were used for the evaluation process. The proposed model was evaluated using the CloudSim simulator, which assisted in the formation of virtual machines (VMs), the task scheduling processes, and the performance evaluation of different model parameters. These datasets were combined to form a total of 500 k tasks, which were evaluated on 400 VMs with standard configurations. Based on this evaluation strategy, the mean delay of execution (D) for different numbers of tasks (NT) can be observed in Table 2. Based on this evaluation and Fig. 3, it can be observed that the proposed model showcases 8.5% faster execution performance when compared with WOA [13], 5.3% faster than MT DRL [16], and 6.5% faster than DNN [22] under multiple evaluation scenarios.
This is possible due to the integration of the low-complexity LCA and GWO models, which assist in the high-speed mapping of tasks onto VMs of different configurations. Based on a similar evaluation strategy, the deadline hit ratio (DHR) was evaluated via Eq. (14) as DHR = N_t^d / T_t, where N_t^d represents the number of tasks executed within the required deadline, while T_t represents the total number of tasks executed by the VMs of different configurations. The values of DHR are tabulated in Table 3. Based on this evaluation and Fig. 4, it can be observed that the proposed model showcases 2.5% higher DHR when compared with WOA [13], 2.4% higher DHR than MT DRL [16], and 2.6% higher DHR than DNN [22] under multiple evaluation scenarios. This is possible due to the integration of deadline awareness in both the LCA and GWO models, which assists in mapping VMs with better DHR performance levels. Similarly, the scheduling efficiency (SE) is evaluated via Eq. (15) as SE = NCC_opt / NCC and is a measure of the computational efficiency of the scheduling model under different configurations, where NCC_opt represents the optimum number of computational cycles that must be used for scheduling the tasks, while NCC represents the total computational cycles required to execute the tasks using the given solutions. Based on this evaluation, the scheduling efficiency is tabulated in Table 4. Based on this evaluation and Fig. 5, it can be observed that the proposed model showcases 9.5% higher scheduling efficiency when compared with WOA [13], 12.4% higher scheduling efficiency than MT DRL [16], and 10.5% higher scheduling efficiency than DNN [22] under multiple evaluation scenarios. This is possible due to the integration of deadline awareness along with make-span, bandwidth, and other task- and VM-specific parameters in both the LCA and GWO models, which assist in mapping VMs with better scheduling efficiency performance levels. Similarly, the energy needed for mapping tasks to VMs was evaluated and is tabulated in Table 5. Based on this evaluation and Fig. 6, it can be observed that the proposed model showcases 24.8% lower energy consumption when compared with WOA [13], 19.4% lower energy consumption than MT DRL [16], and 3.5% lower energy consumption than DNN [22] under multiple evaluation scenarios. This is possible due to the use of the low-complexity LCA and GWO models, which reduce energy requirements when evaluated for heterogeneous task scenarios. Due to these improvements, the proposed model is capable of deployment for large-scale task scheduling application use cases.
Conclusion and future scope
The proposed model fuses GWO and LCA to enforce service level agreements (SLAs) at both the task level and the VM level, which assists in the identification of an optimum mapping between dependent tasks and heterogeneous VM types. The model also incorporates deadline awareness along with dependency awareness, which assists in improving the efficiency of mapping for large-scale task sets. Due to these optimizations, the proposed model showcases 8.5% faster execution performance when compared with WOA [13], 5.3% faster than MT DRL [16], and 6.5% faster than DNN [22] under multiple evaluation scenarios. This allows the model to be deployed for high-speed scheduling use cases. The model was also observed to achieve 2.5% higher DHR when compared with WOA [13], 2.4% higher DHR than MT DRL [16], and 2.6% higher DHR than DNN [22], while it was also observed to have 9.5% higher scheduling efficiency when compared with WOA [13], 12.4% higher scheduling efficiency than MT DRL [16], and 10.5% higher scheduling efficiency than DNN [22] under multiple evaluation scenarios.
This is possible due to the integration of deadline awareness along with make-span, bandwidth, and other task- and VM-specific parameters in both the LCA and GWO models, which assist in mapping VMs with better DHR and scheduling efficiency performance levels. In terms of energy consumption, the proposed model was observed to consume 24.8% lower energy when compared with WOA [13], 19.4% lower energy than MT DRL [16], and 3.5% lower energy than DNN [22] under multiple evaluation scenarios. This is possible due to the use of the low-complexity LCA and GWO models, which reduce energy requirements when evaluated for heterogeneous task scenarios. Due to these improvements, the proposed model is capable of deployment for large-scale task scheduling application use cases. In the future, the model's performance must be validated on large-scale datasets and can be improved via the integration of deep learning techniques that can pre-empt task requests and modify VM performance under large-scale scenarios. The model's performance can also be extended via the use of a hybrid fusion of different Q-Learning and incremental learning models, which can assist in tuning its performance for multiple task sets under heterogeneous cloud configurations.
Limitations of this work
1. Simplified model assumptions
The paper does not explicitly mention the assumptions made in the proposed model. It is important to consider the simplifications or assumptions made regarding the task characteristics, VM capabilities, and system dynamics. These assumptions might not capture the full complexity of real-world cloud systems, potentially limiting the generalizability of the model.
2. Lack of real-world implementation and validation
While the paper presents promising results based on evaluations using heterogeneous task datasets, it does not mention whether the proposed model has been implemented and validated in real-world cloud environments. The absence of real-world implementation and validation might raise questions about the practical feasibility and effectiveness of the model in actual cloud deployments.
3. Limited comparison with state-of-the-art models
The paper briefly mentions the evaluation of the proposed model against other models. However, it does not provide a detailed comparison with state-of-the-art models or existing state-of-the-art techniques in task scheduling. A more comprehensive comparison with a wider range of existing models would provide a better understanding of the advancements and contributions offered by the proposed model.
4. Lack of sensitivity analysis
The paper does not mention conducting sensitivity analyses to assess the robustness and stability of the proposed model. Sensitivity analyses could help identify the impact of different parameters, variations in workload characteristics, and changes in system configurations on the model's performance. The absence of sensitivity analysis limits the understanding of the model's behavior under various scenarios.
5. Limited discussion on model complexity and overhead
The paper does not explicitly discuss the computational complexity or overhead associated with implementing and executing the proposed model. The potential impact of the model's complexity on the overall system performance, scalability, and resource utilization is not addressed. Understanding the computational requirements and potential overhead is essential for assessing the practical viability of the model.
6. Lack of open-source implementation or reproducibility
The paper does not mention the availability of an open-source implementation of the proposed model or provide detailed information on how to reproduce the experiments and results. This can hinder the reproducibility and transparency of the research, making it challenging for other researchers to validate or build upon the proposed model.
6,683.6
2023-07-28T00:00:00.000
[ "Computer Science" ]
Effect of Interfacial Bonds on the Morphology of InAs QDs Grown on GaAs (311) B and (100) Substrates
The morphology and transition thickness (t_c) of InAs quantum dots (QDs) grown on GaAs (311) B and (100) substrates were investigated. The morphology varies with the composition of the buffer layer and the substrate orientation, and t_c decreased when a thin InGaAs layer was used as the buffer instead of a GaAs layer on (311) B substrates. For InAs/(In)GaAs QDs grown on high Miller index surfaces, both the morphology and t_c can be influenced by the interfacial bond configuration. This indicates that buffer layer design with appropriate interfacial bonds provides an approach for adjusting the morphologies of QDs grown on high Miller index surfaces.
Introduction
Self-assembled quantum dots (QDs) have been intensively studied over the past decades in both fundamental and applied fields. To date, several systems have exhibited excellent optical properties and found applications such as laser diodes [1] and optical detectors [2]. InAs/GaAs is undoubtedly the most widely studied among these systems. In recent years, room-temperature emission of InAs QD lasers around 1.3 μm for the fiber optical communication waveband [3] and optical absorption at 8-12 μm for long-wavelength infrared detection [4] have been achieved by employing a so-called dots-in-a-well (DWELL) structure. In this structure, the QDs are first grown on a thin InGaAs buffer layer and finally covered with an InGaAs capping layer. The nucleation and growth dynamics of InAs QDs grown on the alloy layer are therefore of central importance, and much attention has been paid to these research fields [5][6][7]. However, most of the studies focused on structures grown on GaAs (100) substrates. Recently, many high-index polarized surfaces, such as GaAs (311) A [8] and (311) B [9][10][11][12][13], GaAs (411) A [14], and (411) B [15], have drawn greater attention because QDs grown on these surfaces have some unique properties, such as narrow size distribution and high QD density. These structural properties can in turn help improve device performance. However, the growth mechanism of QDs is still a controversial subject, especially with regard to high-index surfaces. Given the advantages of QDs grown on these high-index surfaces, deeper research into QDs grown on them is clearly needed. In this work, we have conducted a comparative study of the effect of the buffer layer and the substrate orientation on the equilibrium structure and the critical transition thickness (t_c) of InAs QDs grown on both GaAs (311) B and (100) substrates by molecular beam epitaxy (MBE).
Experiments
The samples were grown in a conventional MBE system equipped with 12-keV reflection high-energy electron diffraction (RHEED). GaAs (311) B and (100) substrates were held side by side with indium on the same molybdenum holder. For the InAs/InGaAs samples, after deoxidizing the surface oxide at 630 °C, a 500-nm GaAs buffer layer was grown, and then a 2.3-ML InAs QD layer was grown on top of a 2-nm In0.15Ga0.85As layer at a rate of 0.022 ML/s. Both the QD layer and the buffer layer were grown at 530 °C. For the InAs/GaAs samples, only the 2-nm In0.15Ga0.85As layers were changed to a GaAs buffer layer, and the coverage of InAs was 2.1 ML.
As2 was used during the whole growth process, and the As2/In beam effective pressure (flux) ratio was fixed at 40; the growth rates were determined by the RHEED oscillation technique on the (100) plane. The RHEED pattern was imaged by a charge-coupled device camera, then digitized and analyzed by software. When the streak pattern turned into the spots of three-dimensional (3D) QDs, which demonstrated the transition of the 2D-3D growth mode, the intensity of one spotty pattern was recorded. The atomic force microscopy (AFM) tests were conducted in contact mode in air.
Results and Discussion
The surface morphology of self-assembled QDs is a key factor in determining their optical properties, and it is very sensitive to the sample structure, for example, the composition of the buffer layer [16], the surface reconstruction, and the substrate orientation [17]. Figure 1 shows AFM images of InAs QDs grown on (In)GaAs buffer layers on GaAs (311) B and (100) substrates. The morphology of the QDs varies considerably with the buffer layer and the substrate orientation. Note that very few QDs can be observed in Fig. 1c. This is because we reduced the InAs coverage of the InAs/GaAs samples to 2.1 ML. The purpose of this was to make sure that the InAs coverage of the QDs grown on the GaAs (311) B sample was just over the transition thickness (we had measured the transition thicknesses before this experiment). At the same time, the QDs on GaAs (100) had already developed for a certain time. Thus, the 2.1-ML coverage made the difference in morphology between these two samples clearer. For the InAs/InGaAs structures, while the QDs grown on GaAs (311) B substrates were mature, those grown on GaAs (100) substrates were clearly underdeveloped. Most of the QDs grown on GaAs (100) substrates were very small, and only a few QDs can be clearly observed. The average density, height, lateral size, and the standard statistical errors of height and lateral size of these two samples are 4.4 × 10^10 cm^-2 and 3.6 × 10^10 cm^-2; 10.3 (±2.58) nm and 6.2 (±0.46) nm; and 145 (±6.58) nm and 130 (±5.8) nm for the QDs on GaAs (311) B and (100), respectively. Nevertheless, for the InAs/GaAs QDs, the QDs were all of larger size on the GaAs (100) substrates than those on GaAs (311) B substrates. The average density, height, and lateral size for these two samples are 4.8 × 10^8 cm^-2 and (100), respectively. These facts suggested that an earlier 2D-3D growth mode transition may occur for InAs/GaAs on GaAs (100) than on (311) B; however, if the buffer layer is an InGaAs layer instead of a GaAs layer, the transition starts later on GaAs (100) than on (311) B. In other words, for the InAs/In0.15Ga0.85As samples, t_c311 is smaller than t_c100; however, for the InAs/GaAs samples, t_c311 is larger than t_c100. For self-assembled QDs, t_c is an important parameter, for it determines when the islands form during growth, which therefore has a great impact on the morphology of the QDs at a given coverage. It had been confirmed that the growth parameters have very little influence on t_c, but t_c is rather sensitive to the substrate orientation, as shown by many studies that have been conducted to check the effect of substrate orientation on t_c [18,19]. Besides, it had been found that the effect of interfacial (IF) bonds can greatly influence t_c of non-common-anion heteroepitaxy systems (III_1V_1/III_2V_2, such as InAs/GaSb and InP/GaAs).
Take the InAs/GaSb superlattice for example: t_c of this system was much smaller when the IF bonds consisted of In-Sb bonds rather than Ga-As bonds [20,21]. This is due to the additional IF strain arising from the larger atomic sizes of In and Sb compared with Ga and As. However, one cannot observe this effect for common-anion systems (III_1V/III_2V, such as InAs/GaAs and InAs/InGaAs) because the GaAs (100) surface is As-terminated under common growth conditions, and the IF bond configuration is no different from that of the film [21]. So one cannot find the effect of IF bonds in the InAs/GaAs or InAs/InGaAs system grown on (100) surfaces, which is the case for our InAs/GaAs QDs grown on GaAs (100). Since the In0.15Ga0.85As layers we grew were so thin (2 nm) that they should be fully strained, t_c should show no difference between the InAs/GaAs and InAs/In0.15Ga0.85As samples grown on the GaAs (100) substrates [20,21]. We then turn to t_c of the InAs/GaAs and InAs/In0.15Ga0.85As structures grown on GaAs (311) B substrates. We monitored the difference in t_c of these two types of structures, both grown on GaAs (311) B substrates, by recording the dependence of the intensity of one spotty pattern on the InAs coverage. The results can be seen in Fig. 2. A clear delay of the growth-mode transition can be found for the InAs/GaAs sample: for example, at a thickness of 1.5 ML, the InAs/In0.15Ga0.85As structure had finished the sharp rise in intensity, whereas for the InAs/GaAs structure the transition had not even started. This result shows that t_c varies considerably with the composition of the buffer layer on the GaAs (311) B surface. The higher t_c of the InAs/GaAs sample compared with the InAs/In0.15Ga0.85As sample grown on GaAs (311) B can be understood by introducing the effect of IF bonds on t_c. The GaAs (311) B surface has two types of atomic positions, namely twofold coordinated (100)-like Ga atoms at the topmost layer (two dangling bonds) and threefold coordinated (111) B-like As atoms at the second layer (one dangling bond); the numbers of these two types of positions are exactly the same, as can be seen from Fig. 3 [22,23]. If a heterointerface is formed on this surface, the IF bond configuration is different from that of the film because there are mixed In-As and Ga-As bonds in the IF layer; however, only Ga-As bonds can be found in the buffer and only In-As bonds can be found in the film. So the bond configuration differs from both the film and the buffer, and accordingly one may see the effect of IF bonds. When we grew the InAs/GaAs sample, the twofold coordinated (100)-like positions were all occupied by Ga atoms, the IF bonds consisted of both Ga-As and In-As types, and the ratio between them was 2:1. In contrast, when we grew the InAs/In0.15Ga0.85As sample, these twofold coordinated (100)-like positions were occupied by both In and Ga atoms, and nearly 15% of the Ga dangling bonds were replaced by In dangling bonds. Accordingly, the ratio of Ga-As to In-As dangling bonds became lower than 2:1. Compared with the InAs/GaAs case, the accumulated IF strain was larger because more In-As IF bonds were present, and the additional IF strain provided by the In atoms in the buffer layer made the transition start earlier. So, if the epitaxy is performed on a high Miller index surface, the effect of IF bonds on t_c can be observed, even for common-anion systems.
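A rough back-of-envelope count illustrates the bond-ratio argument above. This is a sketch only, under the assumptions that In atoms occupy the twofold coordinated (100)-like sites randomly in proportion to the alloy composition x, that each such site contributes two IF bonds, and that each threefold coordinated (111) B-like As site contributes one bond to an In atom of the film; the function name and exact numbers are illustrative and not taken from the authors.

```python
# Illustrative bond counting per pair of surface sites on GaAs (311) B
# (one twofold coordinated (100)-like site + one threefold coordinated (111) B-like As site).
# Assumption: In substitutes randomly on the twofold sites with the alloy fraction x_in.

def interface_bond_ratio(x_in):
    """Return (Ga-As bonds, In-As bonds, Ga-As/In-As ratio) per site pair for an In(x)Ga(1-x)As buffer."""
    ga_as = 2.0 * (1.0 - x_in)        # twofold sites still occupied by Ga bond to film As
    in_as = 2.0 * x_in + 1.0          # twofold sites occupied by In, plus the As-site bond to film In
    return ga_as, in_as, ga_as / in_as

print(interface_bond_ratio(0.0))      # GaAs buffer: (2.0, 1.0, 2.0) -> the 2:1 ratio quoted in the text
print(interface_bond_ratio(0.15))     # In0.15Ga0.85As buffer: (1.7, 1.3, ~1.31) -> below 2:1
```

Under these assumptions, the Ga-As : In-As ratio falls from 2:1 to roughly 1.3:1 for the In0.15Ga0.85As buffer, consistent with the paper's statement that the ratio drops below 2:1 and that the extra In-As IF bonds supply the additional strain invoked above.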
Thus, when the InGaAs buffer layer was used instead of the GaAs buffer layer, t_c decreased on the GaAs (311) B substrates but remained constant on the GaAs (100) substrates. One thing that should be noted in conclusion is that the morphologies of the InAs/GaAs and InAs/InGaAs QDs grown on GaAs (100) substrates are clearly very different despite the difference in InAs coverage being negligible (2.1-2.3 ML). This may partly be due to the change in growth environment; after all, these two samples were not grown at the same time. Besides, this difference suggests that there may be other factors that contribute to the equilibrium shape of QDs grown on GaAs and InGaAs buffer layers: for example, the morphology differences between the buffer layers may modify the migration length of adatoms. However, we argue that the difference in t_c still at least partly induced the different equilibrium morphologies of the QDs measured by AFM. This result shows that t_c of InAs/GaAs QDs grown on high Miller index surfaces, i.e., GaAs (311) B, can be adjusted by modifying the type and amount of IF bonds, and the equilibrium structures can thereby be modified as well. These structural characteristics would surely induce different properties. So this effect offers one parameter for the design and fabrication of self-assembled QDs, and it should be regarded as an advantage of InAs QDs grown on high Miller index surfaces compared with the conventional GaAs (100) surfaces. Also, given the morphology instability often observed in highly mismatched epitaxy, this study indicates that the effect of IF bonds should be taken into consideration in that field [24].
Conclusion
In conclusion, the morphology and t_c of self-assembled InAs QDs grown on GaAs (311) B and GaAs (100) substrates with (In)GaAs buffer layers were investigated. It was found that the configuration of IF bonds plays an important role in the morphology and t_c of InAs QDs. For common-anion systems, such as InAs/(In)GaAs, this effect can only be observed on high Miller index surfaces, where it can be used to adjust the morphology of the QDs grown on those surfaces.
3,000.2
2009-04-05T00:00:00.000
[ "Materials Science" ]
The Role of Incubators and Accelerators in the Fourth Agricultural Revolution: A Case Study of Canada
The fourth agricultural revolution has resulted in technologies that could significantly support global efforts toward food security and environmental sustainability. A potential means for accelerating the development of these technologies is through business accelerator and incubator (BAI) programs. Using Canada as a case study, this study examines considerations around building agritech BAI capacity for supporting transitions to sustainable, resilient food systems. The research employs expert stakeholder interviews and thematic coding methodology to identify opportunities, success factors, challenges/barriers, and actions/approaches for increasing agritech BAIs in a region/country. The study also identifies findings that are broadly applicable to BAIs in general and those that are specific to sectoral (i.e., agritech) and place-specific (i.e., Canada) contexts. The analysis identified four opportunities themes, seven success factors themes, eight challenges/barriers themes, and eight actions/approaches themes. Of the four thematic areas, success factors were the most broadly applicable to different sectoral and place contexts, and challenges/barriers were most specific to the agritech and (to a lesser degree) Canadian contexts. The study elucidates roles, challenges, and ways forward for building agritech BAI capacity in regions and countries for harnessing the opportunities presented by the fourth agricultural revolution and transitioning to sustainable and resilient food systems.
Introduction
A series of technologies has emerged (and is emerging) through the so-called 'fourth agricultural revolution', which have the potential to reshape food production and distribution across the world. Such technologies include those that support new approaches to farming such as digital, precision, vertical, and cellular agriculture, and involve advancements in tools and techniques in multiple fields such as robotics, blockchain, gene editing, drones, and synthetic proteins [1][2][3]. These new approaches and technologies present promising opportunities for transitioning toward sustainable food systems by enabling sufficient food production to feed a growing global population while minimizing environmental impact, arguably one of the most pressing challenges of the modern Anthropocene epoch [4]. For example, the use of robotics, artificial intelligence, and machine learning in controlled indoor agriculture can be implemented to optimize farm management and crop harvesting, while minimizing land and water use [5,6]. As another example, new cellular agriculture methods for manufacturing animal proteins using tissue culturing and fermentation techniques have garnered attention for their potential to produce foods that are equivalent or near-equivalent to their animal counterparts, but with a much lower environmental footprint [7,8]. Such technologies could form essential components of efforts toward optimizing food security and environmental sustainability objectives worldwide. This study considers the role agritech BAIs could potentially play in transitions to better food production systems in the fourth agricultural revolution and Anthropocene.
Characterizing Business Incubators and Accelerators
No standard definition exists for BAIs [19]; however, some reports and scholars have provided useful descriptions for understanding what these programs are and how they operate.
In a 2019 BAI world ranking report, UBI Global provides classifications of incubator, accelerator, and hybrid programs, describing incubators as programs that support early-stage start-ups, accelerators as programs that focus on growing later-stage start-ups, and hybrids as programs that possess characteristics of both incubators and accelerators [20]. Other descriptions of BAIs include Schwartz's [21] discussion of the key characteristics of effective incubators, these being networking, subsidized rental space, credibility, business assistance, and collectively shared facilities. Descriptions of accelerators note that they differ from incubators in that they are geared towards providing seed money for investment and equity [22], and Del Sarto et al. [23] regard these programs as extensions of the business incubator model. However, common characteristics are shared by incubators and accelerators. Wise and Valliere [24] note that effective accelerator programs provide office space, networking, and legitimization, while also discussing other supports such as investment, branding, and mentorship. Relationships can be seen among the key features of BAIs, such as networks enabling opportunities for investment through connecting start-ups with venture capitalists [21], and affiliation with a credible BAI can lend to the credibility, branding, and legitimization of participant start-ups. Although no single definition of BAIs exists, a common understanding of what these programs do is observed in the literature. Succinctly stated, BAIs support start-ups and facilitate their growth through the provision of workspace, networking opportunities, mentorship, business assistance, and access to funding, while also benefiting these companies through their association with a respectable incubator or accelerator program. Figure 1 illustrates the key features of BAIs, drawing upon the characteristics of effective programs provided by Schwartz [21] and Wise and Valliere [24].
Figure 1. Key features of BAIs as per Schwartz [21] and Wise and Valliere [24].
The specific number of BAIs operating throughout the world is difficult to determine due to informal programs that are not labelled as BAIs but serve similar functions [25]; however, estimates indicate that it is in the magnitude of thousands. The first incubator was established in the late 1950s in New York [26], and over half a century the total number of these programs grew to exceed 7000 globally [27]. Accelerators emerged later than incubators; the first accelerator, Y Combinator, was launched in 2005, and over the following decade these grew to over 3000 worldwide [28]. The growth of BAIs worldwide has been beneficial for technological advancement; however, many BAIs have sectoral focuses, and thus some sectors may have benefited more than others. Some surveys indicate that digital and information technologies may be stronger focuses for BAIs; for example, Bone et al. [29] found this sectoral focus comprises 29% of the incubators in the UK, the second highest category with the highest being 'no particular focus'. Similarly, through a global survey of over 300 accelerators, the Global Accelerator Learning Initiative (GALI) identified information and communication technology to be the largest sector-focused category of accelerator programs, consisting of 17% of the total sample [30].
In contrast, BAIs operating in the agricultural sector only comprised approximately 2% and 9% (respectively) of the sampled incubators and accelerators. Such findings indicate that there is potential room to grow agritech BAI capacity across the world; doing so could help advance agricultural technologies and innovation at a pace similar to that seen with information and digital technologies in recent decades.
Case Study
Canada is the case study for this research, and it serves as a useful focus for an investigation of agritech BAIs in part due to how active the agricultural industry is in the country. Canada has a strong agricultural sector, with over CAD 72 billion in farm gate receipts reported for 2020 [31]. The country also has a large land base, being the second largest country in the world; accordingly, it produces a wide range of agricultural goods. Agriculture and Agri-Food Canada [32] reports that in 2018, Canadian farm receipts amounted to CAD 23.2 billion for grains and oilseeds, CAD 13.4 billion for red meat, CAD 6.6 billion for dairy, CAD 5.7 billion for fruits and vegetables, CAD 4.3 billion for poultry and eggs, and CAD 4.8 billion for other crops. Such figures illustrate the variety of different agricultural products that are economically significant in Canada, ultimately speaking to the strength and diversity of the country's sector in terms of production and sales. Canada also serves as a useful case study for this research because it has room to grow agritech BAI capacity. Canada lags behind countries such as Singapore and the Netherlands, which have effectively stimulated and accelerated technological development and innovation in the agricultural sector through BAI programs [18]. This is not to say that agritech BAIs (or programs that operate in a similar manner to BAIs) are absent in Canada; this research collects data from such programs. Rather, this paper argues that there is room to grow agritech BAI capacity in the country and that efforts to do so would greatly enhance fourth agricultural revolution technologies in the country. Supporting this position is Williams's [33] report on the BAI Performance Measurement Framework pilot project conducted in Canada, noting that from a sample of over 500 Canadian companies which have participated in BAI programs, 76% reported these programs to be either 'vital' or 'significant' to their success. Canada therefore serves as a useful case study for this research because it currently has a strong and diverse agricultural sector but lags in agritech BAI capacity, even though BAIs have been demonstrated to be valuable for supporting and advancing entrepreneurial innovation in the country.
Data
This study is part of a larger project that specifically focused on accelerating technological development and innovation in Canada's agriculture sector, and the research involved literature reviews, an environmental scan of BAIs operating in agritech, and semi-structured interviews with expert stakeholders that operate within Canada's agricultural sector. The data collected through the latter activity produced insights that were applicable beyond the Canadian context, and the current study conducts analysis on these interview data to capture these insights and develop this contribution to the agritech BAI literature. Semi-structured interviews were conducted with 10 participants, following other research that uses small, purposive samples of expert stakeholders with specialized knowledge (e.g., [34,35]).
Participants included people working in incubator or accelerator programs (or BAI-like/BAI-related programs) that operate in the agricultural sector, BAI participants that work in the agritech space, and advisors and affiliates of these programs. Three interviewees were academics, and this group included people who sat on boards of an agritech accelerator program (n = 1), were involved in government commissions on BAI policy (n = 1), or sat on advisory boards for agritech start-ups and food production innovation granting programs (n = 1). Three interviewees were involved in BAI or BAI-like programs, and affiliations included an accelerator that focuses on agritech companies (n = 1), an accelerator that focuses on sustainable technologies and supports agritech start-ups (n = 1), and a program that supports and aims to build the capacity of small- to medium-enterprises in the agrifood sector through activities such as food innovation grants and network building (n = 1). One interviewee worked in a program that focuses on mobilizing knowledge in, developing networks around, and increasing uptake of agricultural innovations (n = 1). Two interviewees were founders of start-ups that operate in the agritech space and have participated in BAI programs (n = 2). One interviewee was an angel investor in and advisor for early-stage agricultural companies (n = 1). Demographic data were not collected for this group; however, it was inferred from participant biographies that the sample consisted of more male (n = 7) than female (n = 3) participants and more mid-/late-career (n = 7) than early-career (n = 3) participants. Interviews were conducted in early 2021 by Zoom video conferencing software, and each interview lasted approximately 1 h. The interview protocol was designed to solicit information about the stakeholder's program/organization, the success factors for an incubator/accelerator (including key partnerships and networks), challenges and barriers, approaches for overcoming challenges/barriers, and potential benefits and societal impacts of agritech BAIs. Accordingly, interviews contained five sections: (1) details about the organization/initiative and the interviewee's role and involvement with BAIs, (2) successes, both in terms of where a program or initiative has been successful and what contributed to its successes, (3) challenges and barriers for achieving successes, (4) networks and partnerships associated with a program or initiative, and (5) societal impact of a program or initiative and future plans (or aims/hopes) for increasing this impact. Interviews were semi-structured, meaning interview questions were prepared for each of the five sections (3 to 6 per section), and other questions were asked to probe further into an insight or topic of interest (i.e., insights/topics which emerged through the interviews that were particularly pertinent to the research objectives). The interview protocol was tailored to different participants in a manner that allowed them to speak from their own experiences; for example, the wording of the questions differed depending on whether the participant worked in an incubator/accelerator or participated in one of these programs. This study was approved by the University of the Fraser Valley's Human Research Ethics Board (file number: 100619). Participants were provided with letters of consent, which were signed and returned to the researchers via e-mail.
Prior to starting interviews, researchers provided a brief summary of the project's objectives, and participants were given an opportunity to ask questions about the project and/or letter of consent. Analysis The analysis employed thematic coding methodology [36,37]. Audio data were transcribed to text (stored in Word document format), and the transcripts were subsequently imported into NVivo (release 1.5.1). An inductive coding method was employed in a process that involved both applying and revising the coding framework as data were further reviewed and analyzed [38]. This method first involved open coding, where themes were identified in the interview transcripts as these data were reviewed, and codes were applied to the text accordingly using NVivo's coding function. The interview data and codes were reviewed again to refine the coding framework by aggregating codes to create a more concise list/collection of themes. This aggregation occurred when there were few references in the data for a particular code (e.g., 1 or 2) and when a code could be included within a common theme with another code. Altogether, 42 codes were applied through this process. After the inductive coding process, a deductive coding approach was used to specifically identify which aspects of the data referred to opportunities, success factors, challenges/barriers, and actions/approaches for addressing the challenges and for increasing agritech BAI programs and capacity. This process involved creating a coding framework prior to coding (i.e., rather than open coding and identifying themes as data were reviewed), and this framework consisted of the four aforementioned areas of interest (i.e., opportunities, success factors, challenges/barriers, and actions/approaches). Using NVivo, these codes were applied to the text in the appropriate places, and subsequently, a coding matrix was created to reveal overlap between the inductively-and deductively-applied codes. Such overlap was examined to group data coded through the inductive coding work within the four deductive coding categories. After this grouping was done, an axial coding process was performed [39], which involved further grouping coded data within the deductive coding categories to identify coherent, emergent themes related to opportunities, success factors, challenges/barriers, and actions/approaches. Although this research project uses Canada as a case study, it produced findings that are broadly applicable to other regions and countries. Similarly, the research focuses on BAIs operating in the agricultural sector, but some of the findings are applicable to BAIs in general. To highlight the relevance of the findings to narrower and broader contexts, the emergent themes were examined to determine whether they relate to (1) agritech BAIs or BAIs in general, and (2) BAIs in Canada or BAIs in general. The results of this work are presented with thematic analysis to provide a greater understanding of the utility and applicability of the insights produced through this research. It is important to recognize that although some findings relate to a Canadian context, other regions/jurisdictions may have similar geographical, social, political, cultural, and/or economic features; thus, some seemingly Canadian-specific findings could have broader applicability. Results In total, 27 coherent themes emerged through the analysis. 
The distribution of themes among the deductive coding categories consisted of four relating to opportunities, seven relating to success factors, eight relating to challenges/barriers, and eight relating to actions/approaches. With respect to relevance and applicability, the analysis found that challenges and barriers were proportionally highest in terms of their specific relevance to building BAI capacity in the agritech sector and in Canada, whereas findings around success factors were the most broadly applicable to different sectoral and place-based contexts. Table 1 provides a summary of these results.
Opportunities
Four themes emerged from the analysis of opportunities. Each theme was generally applicable beyond the Canadian context, but half were particularly relevant to building BAI capacity specifically in the agricultural sector. Table 2 displays a summary of the opportunities analysis results, and descriptions of the themes are provided below. An opportunities theme that clearly illustrated the potential role of BAIs in transitioning to sustainable food production relates to how BAIs can guide the direction of food systems technology and innovation through aligning their programs' admission criteria with sustainability goals. Interviewees gave examples such as creating programs for clean technologies and innovations, which can reduce agriculture-related greenhouse gases. Other examples included innovations that contribute to the production of predictable and high crop yields, while reducing land and resource uses. In addition, the data analysis elucidated opportunities for agritech BAIs to help enhance and strengthen food value chains, as these programs can help identify value chain gaps or challenges where technology and innovation would best be directed. In addition, BAIs can develop their networks to create business-to-business relationships that address such gaps. These findings illustrate how agritech BAIs can play a strategic role in developing resilient food production systems. Other opportunities include the potential for capitalizing on the increased interest in entrepreneurship among students (i.e., skilled workers with specialized knowledge). An interviewee noted that modern student culture is conducive toward the proliferation of start-ups, as (from their observations and long experience working in academia) an increasing number of graduate students are attracted to this career path. Although not specific to the agriculture sector, opportunities exist to harness students' creativity and entrepreneurial spirit for agritech by establishing more university-based (or university-affiliated) agritech BAI programs. Other opportunities that are broadly applicable to BAIs observed through this analysis include virtual engagement opportunities. Interviewees noted that people have become increasingly comfortable with online engagement and tools, particularly during the COVID-19 pandemic. Although it was clearly expressed that agritech BAIs need physical space and infrastructure (e.g., labs, greenhouses), it was also articulated that these programs could benefit from virtual engagement complements to increase network development, especially in geographically vast areas such as Canada.
Success Factors
Seven themes were identified through the analysis of factors for improving the success of agritech BAIs. Most of the success factors themes were broadly applicable to BAIs in general, with only one that was specific to programs operating in the agricultural sector.
In addition, all themes were applicable to place contexts beyond Canada. Table 3 displays a summary of the results of the success factors analysis, and more detail on the themes is provided below. Interviewees identified several success factors that were relevant to BAIs in general. Such factors include a BAI's ability to form and connect start-ups to supportive networks of diverse actors, such as investors, industry advisors, and different companies operating within the value chain (e.g., potential business-to-business opportunities). Mentorship and support/guidance in areas such as administration, business, and legal considerations were also noted to be key features of BAIs, particularly those that harness and foster the talent of people working/studying in STEM disciplines with little knowledge and experience in other aspects of running a company. A theme related to affiliation and visibility was also identified, referring to the value of a start-up associating with a credible BAI, or a BAI affiliated with a credible institution (e.g., a respected university), for increasing their profile and the visibility of their company, services, and/or products. Other themes that emerged through the success factors analysis include selecting an advisory committee that captures the diverse expertise needed to support agritech start-ups, that is, technical, scientific, and business expertise. It was also noted that BAI programs (and their participants) would be more successful when tailoring toward technologies and innovations that align with policies and market needs, an example being technologies that help industry players meet greenhouse gas emissions reductions mandated or incentivized by government agencies. Another success factor involved determining effective ways of measuring BAI 'success'; for example, the number of successful businesses (e.g., survival over a multi-year period) and the value returned to the agricultural sector were noted to be better success metrics for agritech BAIs than simply the number of companies that completed the programs. A success factor theme with specific relevance to the agriculture sector was the need for obtaining significant seed funding. It can be argued that such funding is needed for any BAI; however, agritech research and development in particular is confronted with the challenge of needing large amounts of space (e.g., test farms) and expensive infrastructure (e.g., greenhouses, labs). Such resource needs are particularly pronounced in the agricultural sector, arguably more so than in the development of (for example) software and online applications. Accordingly, sufficient and significant seed funding was identified as a success factor particular to BAIs centered on agricultural technology and innovation.
Challenges and Barriers
Eight themes were identified through the challenges and barriers analysis. Three quarters of these themes (75%) capture challenges and barriers that face BAIs specifically in the agricultural sector, and half of the themes are particularly relevant to the Canadian context. Table 4 provides a summary of the results, and further discussion on the themes is given below. Half of the challenges and barriers themes are particularly relevant to both the agritech and Canadian contexts. One such theme relates to the venture capital culture in Canada, noted by an interviewee to be more risk-averse than in other countries such as the United States.
This is particularly problematic for attracting funds to agritech start-ups, as the development of agricultural technology is expensive, with long returns on investment and sometimes unclear profitability (such as with farm data collection platforms). Another Canadian agritech BAI challenge concerns the perspective on agriculture commonly held by government, investors, and citizens in the country. An interviewee noted that agriculture is perceived through a welfare lens (i.e., a social good supported by farm subsidies), rather than as a wealth creator. Similarly, it was expressed that a commonly held view of agriculture is that it is a rural activity, rather than part of a dynamic technology environment. Interviewees noted a significant Canadian challenge for some types of agritech to be the country's difficult and unclear regulatory pathways for getting food products to market; this could discourage companies and talent from establishing themselves in the country. The challenge can be regarded as particularly problematic in light of the fact that some emerging agricultural methods, such as cellular agriculture, require highly specialized expertise that is sourced from small talent pools. Interviewees also discussed the issue of silos, both in terms of departments and jurisdictions. For the former, it can be challenging to determine which governmental agencies have the mandate to support building agritech BAI capacity, as it relates to a variety of different areas such as agriculture, economic development, and science and technology, all of which have different governmental departments. In terms of jurisdiction, Canada is a geographically vast country with agricultural systems that differ by province, presenting challenges for BAIs with respect to forming cohesive agricultural innovation networks and coordinated agritech development pathways nationwide. Challenges and barriers that were broadly applicable to agritech BAIs (i.e., outside the Canadian context) include the need for creating products that are priced competitively to meet consumer expectations for affordable food. Such affordability considerations present related challenges around how to scale up production in a manner that benefits from economies of scale and results in commercially viable goods. Another agritech-related challenge relates to the diverse nature of technologies emerging through the fourth agricultural revolution. 'Agritech' is highly varied, consisting of technologies for supporting cellular agriculture, vertical farming, digital agriculture, etc. Each of these areas of technology requires its own set of specialized, expensive equipment, thereby creating challenges for establishing agritech BAIs with the appropriate resources and infrastructure for supporting a range of emerging agricultural approaches. Challenges for BAIs that are not specific to agriculture or Canada relate to the university affiliation of some of these programs. Although such affiliations are valuable for the credibility of BAI programs and their participants, they can also tie BAI programming to a single university's expertise and agenda, reducing the diversity of academic partnerships and potentially limiting the types of program participants (i.e., college versus university students). In addition, interviewees indicated that some BAIs affiliated with universities can have the tendency of adopting university-style programming, with inflexible schedules that do not align with the irregular and busy schedules typical of start-up entrepreneurs.
Actions and Approaches Eight themes were identified through the actions and approaches analysis. Half of the themes are particularly relevant to agritech, and a quarter of the themes relate to the Canadian context. Table 5 displays a summary of the results, and descriptions of the themes are detailed below. Half of the actions and approaches themes refer to ways of stimulating growth specifically in the agritech sector. One theme involves developing policies that create markets for agritech products (such as the clean technology examples discussed above), as well as providing tax breaks and incentives for agritech start-ups. Such actions would encourage entrepreneurship in agritech and potentially increase participation in agritech BAIs, as well as attract funding from investors. Agritech BAIs would also benefit from partnerships with regulators that can provide participant start-ups with expert guidance on pathways from bench experiments to products entering the market. Furthermore, as agritech is a highly diverse field, BAIs could consider specialization and focusing on building infrastructure for particular areas of fourth agricultural revolution technologies, such as cellular agriculture or vertical farming. Interviewees also expressed the importance of understanding the complete food value chains when developing agritech BAIs to identify gaps, strategically direct programming, and build networks and mutually beneficial business-to-business relationships within these value chains. An actions/approaches theme that relates to national research hubs was identified. This theme is particularly relevant to the Canadian context, but also broadly relevant to BAIs and technological development and innovation in fields outside of the agriculture sector. A national hub was discussed as important for supporting agricultural research and forming research networks across a geographically vast and agriculturally diverse country. An interviewee also noted that a hub would help attract and direct investment, as it would provide a clear point of contact for agritech investors. Three of the actions/approaches themes were found to be applicable beyond the agricultural and Canadian contexts. Interviewees noted that BAIs can have a tendency toward maintaining a domestic focus (as they often aim to encourage the growth of domestic businesses); however, extending BAI networks internationally is valuable to give start-ups access to diverse knowledge, larger investor pools, and potential export market opportunities. Interviewees also expressed the importance of increasing public funding for agritech and agritech BAIs to diversify from private investment sources. One interviewee noted that public funding in this space could be regarded as akin to infrastructure investment, in this case, food infrastructure that supports national food security objectives. The analysis also highlighted the importance of diverse networks and a variety of mentors for BAI programs. The analysis indicated that BAIs should partner with multiple post-secondary institutions to encourage participation from and harness the talents of students from a variety of college and university programs. Discussion This research explores considerations around building agritech BAI capacity for supporting transitions to sustainable and resilient food production systems, and to this end, it specifically studied the opportunities, success factors, challenges/barriers, and actions/approaches for establishing and growing agritech BAIs. 
Among these analytical categories, the opportunities findings most explicitly illustrated the potential role BAIs can play in transitions to sustainable food production. BAIs have the ability to define the criteria for admitting start-ups, and in the case of agritech BAIs, such criteria could include technologies and innovations that support food security objectives, while also contributing to goals in climate change, land and habitat conservation, pollution reduction, water conservation, etc. In addition, as found through examples provided by interviewees, agritech BAIs could focus on supporting start-up technologies that enhance the economic viability of farming by increasing crop yield and predictability. In this way, BAIs can set agendas for supporting and scaling innovations that operate in the intersection between the three sustainability pillars, that is, the environmental, social, and economic imperatives of sustainable development [40]. This is significant, as start-ups that participate in BAIs experience higher success and fewer economic challenges than those that do not [12][13][14]. Together, the ability for BAIs to define participant criteria and the advantage they provide to start-ups suggest that they could play a potentially important role in shaping a country's agritech landscape. When considering the potential role of agritech BAIs in transitions to sustainable food production, it is important to recognize the dangers of technological optimism and that technological advancements alone do not comprise sustainability solutions [41]. Technologies that initially show promise for contributing to sustainability objectives could ultimately result in minimal benefits, or even greater adverse effects, if implemented without effective supporting policies and programs. For example, Newell et al. [42] found the use of hydroponic systems in livestock feed supply chains can result in greenhouse gas reductions, but only when the hydroponic systems are powered by low-carbon energy sources. Similarly, Lynch and Pierrehumbert [43] estimate that cellular beef could result in greater contributions to anthropogenic climate change than livestock beef over a long-term period due to greenhouse gas emissions associated with the energy and transportation involved in cellular meat production. Furthermore, Newman et al. [44] raised questions about the land and habitat conservation potential of cellular agriculture when crops used as production inputs (such as sugar for fermentation-derived dairy) are obtained through agricultural expansion. These examples highlight the importance of complementing the scaling and widespread adoption of technologies with supporting policies that ensure maximization of benefits, such as policies related to energy (e.g., [42,43]) and feedstock sourcing (e.g., [44]). Relatedly, this research found policy alignment to be a success factor for agritech BAIs and policy incentives to be a potentially effective approach for stimulating agritech. The findings in this study relate more to the economic viability and success of agritech start-ups; however, they also demonstrate the interplay between BAI programs and government agencies, as well as the potential roles BAIs can play as an intermediary between sustainable food policy objectives and the growth of agritech industries. The findings of this study indicate that BAIs experience challenges that are particular to the agricultural sector.
For example, the diversity of different forms of food production technologies and methods emerging through the fourth agricultural revolution (e.g., vertical farming, cellular agriculture) presents requirements for specialized, expensive infrastructure. This challenge is arguably more pronounced for agritech BAIs than (for instance) information technology BAIs. Another challenge involves government, investor, and public perspectives on what agriculture should 'look like' and how it should develop. Interviewees noted that agriculture is often viewed as being part of a rural domain rather than as part of a field of dynamic technology. These findings relate to the concept of the 'rural idyll,' that is, a commonly held image of agriculture farming being an activity done in rural countryside and consisting of old technologies such as scythes, antique tractors, and wooden barrels [45]. The rural idyll fosters conceptions and imagery of food production places as a part of a romanticized past [46], positioning the role of farmers as actors in idyllic rural landscapes [47], rather than innovators and adopters of cutting-edge technologies. This study identified the challenge as especially relevant to the Canadian context, but it ultimately links to a broader discourse in agriculture. The finding speaks to the dichotomization of the conventional (i.e., large-scale, industrial) and alternative (i.e., small-scale, organic) agricultural paradigms, the former being defined by high-yield, technology-driven production and the latter consisting of labor intensive, environmentally friendly approaches [48]. Alternative agriculture emerged as popular paradigm in response to the negative social and environmental effects of technology-driven conventional agriculture [49], and in many ways, it relates or even contributes to the agritech challenge of shifting perspectives toward understanding of the role and value of emerging, novel technologies in sustainable food production systems. Unlike the challenges and barriers, most of the success factors were broadly applicable to BAIs of all kinds. In fact, the findings of the study indicate that a number of the factors that contribute to successful BAIs are also essential features of effective incubators and accelerators, as described by Schwartz [21] and Wise and Valliere [24]. For example, this study found a key success factor for agritech BAIs to be an ability to form and connect start-ups to supportive networks of diverse actors, such as investors, industry advisors, and different companies operating within the value chain (e.g., potential business-to-business opportunities). As another example, mentorship and support/guidance in areas such as administration, business, and legal considerations were noted to be key features of BAIs for harnessing the talent of those who have technological expertise but no knowledge and experience in the business aspects of running a company. As a final example, the findings on the value of BAI affiliation relate strongly to the credibility component of effective incubators presented by Schwartz [21] and legitimization features/benefits of accelerators discussed by Wise and Valliere [24]. Such findings suggest that many of the critical components for the success of agritech BAIs are universal among BAIs operating in all sectors. Similar to the success factors, the actions and approaches analysis produced findings that relate to sustainability practices done in a variety of sectors and fields. 
In particular, the importance of forming networks of diverse actors to address complex sustainability issues have been discussed as critical components of efforts toward other challenges such as climate change (e.g., [50]) and biodiversity conservation (e.g., [51]). In addition, this research identified importance of knowledge sharing programs and mechanisms, such as the development of a national agritech hub (or hubs), and in a similar vein, effective tools, platforms, and methods for knowledge sharing have been regarded as valuable for supporting a range of sustainable development efforts (e.g., [52,53]). In this manner, the 'ways forward' for effectively building agritech and agritech BAI capacity mirror (or perhaps draw upon) common aspects and practices for addressing sustainability challenges and advancing sustainable development goals. Conclusions Current dominant agricultural paradigms, namely conventional and alternative agriculture, are not sufficient for sustainably feeding the growing global population, as the former carries a large environment footprint while the latter is labor intensive and does not produce sufficient yields to feed the majority of the population [48,49]. It is clear that new paradigms and agricultural approaches are needed to transition toward sustainable food production systems in the Anthropocene. The technologies and innovations of the fourth agricultural revolution could potentially help in this pursuit, which presents a potential role for agritech BAIs. However, technological advancement alone does not equate to progress toward sustainability; thus, it is important to recognize and promote the specific roles BAIs can play in shaping the agricultural technology landscape and guiding alignment between policy and start-up innovation. Recognition is also needed of the unique challenges agritech BAIs face in order to overcome these issues so that growth of BAIs working toward agriculture solutions can occur at a pace seen in other sectors, such as information technology and health. Ultimately, by recognizing the roles, challenges, and ways forward for agritech BAI growth, regions and countries can harness the opportunities presented by the fourth agricultural revolution to transition toward better, sustainable, and resilient systems for feeding the world.
8,138.2
2021-10-29T00:00:00.000
[ "Agricultural and Food Sciences", "Business", "Environmental Science", "Economics" ]
Genetically predicted lipids mediate the association between intrahepatic cholestasis of pregnancy and cardiovascular disease Introduction Intrahepatic cholestasis of pregnancy (ICP), the most prevalent liver disorder specific to pregnancy, affects approximately 1.5%-4% of pregnancies. However, the influence of ICP on cardiovascular disease (CVD), including hypertension (HTN) and coronary artery disease (CAD), has not been thoroughly investigated. Methods This study explores the causal relationship between ICP and CVD (HTN, CAD) using Mendelian Randomization (MR). Utilizing summary-level data from Genome-Wide Association Studies (GWAS), we applied the inverse-variance weighted (IVW) method, supplemented by sensitivity and reverse MR analyses, to ascertain robustness. Results Our findings reveal significant causal links, indicating ICP notably increases the risk of CVD (P = 0.001), hypertension (HTN, P = 0.024), and coronary artery disease (CAD, P = 0.039). A two-step MR analysis highlighted the mediation role of lipid profiles, with LDL, TC, and Apo-B contributing to increased CVD risk by 25.5%, 12.2%, and 21.3%, respectively. Additionally, HTN was identified as a mediator in the ICP-CAD association, accounting for a 14.5% mediation effect. Discussion The results underscore the genetic predisposition of ICP to elevate CVD risk and the critical mediating role of lipid levels, emphasizing the need for vigilant lipid monitoring and early intervention in individuals with ICP. Introduction Intrahepatic cholestasis of pregnancy (ICP), the most prevalent liver disorder specific to pregnancy, affects approximately 1.5%-4% of pregnancies (1,2).The incidence of ICP varies according to geographic location and ethnicity, with a higher incidence in South American populations and a lower incidence in European populations (3).Its clinical manifestations include pruritus and elevated levels of serum bile acids and transaminases (4).Typical symptoms of ICP begin to appear around the third trimester of pregnancy and the condition worsens as the pregnancy progresses.Typically, ICP symptoms subside within 48 h postpartum, and biochemical irregularities normalize within 2-8 weeks (5). Notably, ICP is linked to several adverse perinatal outcomes, including preterm labor, unexplained stillbirth, and postpartum hemorrhage (6,7).The pathogenesis of sudden intrauterine death related to ICP remains unclear, but it is hypothesized to involve disruptions in fetal circulation due to abnormal bile acid concentrations.Recent studies of ICP fetal outcomes have shown that the risk of adverse fetal outcomes increases with increasing maternal serum bile acid levels (8).The most effective pharmacologic treatment to improve clinical symptoms and biochemical abnormalities in patients with ICP is Ursodeoxycholic acid (UDCA), and this has also been shown to reduce placental abnormalities and improve placental bile acid transport in in vitro studies (9,10). 
The pathogenesis of ICP is multifactorial, involving environmental factors, hormonal changes, and genetic mutations (11).ICP is more likely to occur in winter and in populations in areas with lower dietary selenium intake, suggesting that environmental factors play a role in the development of ICP (12).Reproductive hormones also play a role in ICP, and women with higher levels of estrogen and progesterone are more likely to experience ICP symptoms (13).In terms of genetic factors, the more widely studied gene is ABCB4, and mutations at loci such as ABCB11 and ABCC2 have also been reported in ICP (14). Associations between ICP and various conditions, such as hepatitis C, non-alcoholic fatty liver disease, cholecystitis, pancreatitis, and autoimmune diseases, have been documented (15).Several studies have shown that ICP may coexist with other pregnancy-related conditions such as pre-eclampsia, acute fatty liver of pregnancy and gestational diabetes (16).Previous studies have shown that ICP patients have a threefold increased risk of gestational diabetes and pre-eclampsia compared to normal pregnant women (17).Both gestational diabetes and pre-eclampsia are recognized as risk factors for cardiovascular disease (18).However, the impact of ICP on cardiovascular disease (CVD) remains underexplored, with most research focusing on fetal cardiac implications (19).Observational studies have indicated an elevated risk of preeclampsia in women with a history of ICP (20).Yet, the influence of ICP on adult cardiovascular event risk, including hypertension (HTN) and coronary artery disease (CAD), requires further investigation.Currently, only a few observational studies have reported on the relationship between ICP and cardiovascular disease, and they have not obtained uniform results.Traditional observational studies in this context are often subject to limitations like residual confounding and reverse causality bias. Mendelian randomization (MR) refers to an analytical method for assessing causal relationships between observed modifiable exposures or risk factors and clinically relevant outcomes.It provides a valuable tool, especially when randomized controlled trials examining causality are not feasible and when observational studies provide biased associations due to confounding or reverse causality.These issues are addressed by using genetic variants as instrumental variables (IVs) for testing exposure.Because alleles of exposurerelated genetic variants are randomly assigned, the results obtained by MR are not affected by confounders and reverse causation (21).Large-scale genome-wide association studies (GWAS) conducted over the past decade have identified many genetic variants associated with cardiometabolic traits and risk factors.These findings have enabled the design of MR, which has been increasingly applied to predict cardiovascular risk factors in recent years.In this study, we used two-sample and two-step MR to explore the relationship between ICP and cardiovascular disease and to elucidate potential mediators in the pathway linking ICP with cardiovascular disease. 
Study design In this study, we used two-sample MR analysis (TSMR) and twostep MR to investigate the causal associations between ICP and cardiovascular disease, using summary statistics from genome-wide association studies (GWAS).This study adhered to the key principles outlined in the Strengthening the Reporting of Observational Studies in Epidemiology Mendelian randomization (STROBE-MR) guidelines (22).To ensure the accuracy and reliability of the results, this MR study strictly followed the three basic assumptions of Mendelian randomization.First, IVs must be strongly correlated with exposure factors.Second, IVs cannot be directly correlated with the outcome.Finally, IVs must be excluded from being associated with any confounding factors.Figure 1 illustrates the basic assumptions of MR and our study design. Data sources The exposure in this MR study was ICP, and the outcome factors were CVD, HTN, and CAD.This study categorizes the potential mediator factors into two major groups, as follows: ( (25).There is no significant sample overlap between the GWAS data.More information about the GWAS summary-level data used in this study is presented in Table 1. The selection of IVs In MR analyses, single nucleotide polymorphisms (SNPs) were used as instrumental variables to represent exposures and outcomes to explore causal relationships between them.We screened for SNPs that were strongly associated with exposure at a genome-wide significance level (P < 5 × 10 −8 ).A total of 11 SNPs were associated with ICP at the genome-wide significant threshold.All of them were not in linkage disequilibrium (LD, R 2 ≥ 0.001 and within 10 mb) and not overlapped with the known risk of CVD.Furthermore, to assess the strength of the screened IVs, this study introduced the F statistic to reflect the ability of the IVs to represent the phenotype.The F statistic was calculated from the sample size, the number of IVs, the minor allele frequency (MAF), and the β-value (26).IVs with F-statistics <10 were regarded as weak genetic instruments.In this study, SNPs with F statistic less than 10 should be removed (27).The details of instrumental SNPs in this study are shown in Table 2.All the F-statistics in this study are greater than 10, indicating that the IVs used satisfy the requirement of a strong association with exposures. Statistical analysis In this study, we utilized four MR methods-Inverse Variance Weighted (IVW), Weighted Median (WM), MR-Egger, and Weighted Mode-to assess the causal impact of ICP on CVD. Each method is based on distinct assumptions regarding horizontal pleiotropy.Primarily, the IVW approach synthesizes Wald ratios (the ratio of the SNP-associated outcome effect to the SNP-associated exposure effect) through meta-analysis to deduce the aggregate causal relationship between the exposure and the outcome (28).To mitigate reverse causation, we conducted a reverse MR analysis, which swaps the roles of exposure and outcome.The IVW model's heterogeneity was evaluated using Cochran's Q test, with a p-value less than 0.05 signifying heterogeneity (29).Furthermore, the Mendelian Randomization Pleiotropy Residual Sum and Outlier (MR-PRESSO) test was applied to ascertain the degree of horizontal pleiotropy among the IVs (30).A leave-one-out analysis was also conducted to determine the influence of individual SNPs on the overall results, leading to the immediate exclusion of any identified outliers.Subsequent analyses were carried out post-outlier removal. 
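As a concrete illustration of the estimation principle described above, the following minimal sketch (in Python/NumPy, whereas the study itself reports using R with the TwoSampleMR package) combines per-SNP Wald ratios by fixed-effect inverse-variance weighting and also returns Cochran's Q and approximate per-SNP F statistics. The function name and the toy summary statistics are hypothetical and are not taken from the study's data.

```python
import numpy as np

def ivw_estimate(beta_exp, se_exp, beta_out, se_out):
    """Fixed-effect inverse-variance-weighted (IVW) MR estimate.

    Each SNP contributes a Wald ratio beta_out / beta_exp; the ratios are
    combined with weights proportional to the inverse variance of the
    SNP-outcome association (the usual first-order approximation).
    """
    beta_exp, se_exp = np.asarray(beta_exp, float), np.asarray(se_exp, float)
    beta_out, se_out = np.asarray(beta_out, float), np.asarray(se_out, float)

    wald = beta_out / beta_exp                   # per-SNP causal estimates
    wald_se = se_out / np.abs(beta_exp)          # first-order delta-method SE
    w = 1.0 / wald_se**2                         # inverse-variance weights

    beta_ivw = np.sum(w * wald) / np.sum(w)
    se_ivw = np.sqrt(1.0 / np.sum(w))

    # Cochran's Q statistic for heterogeneity across instruments
    q_stat = np.sum(w * (wald - beta_ivw) ** 2)

    # Approximate per-SNP instrument-strength F statistics
    f_stats = (beta_exp / se_exp) ** 2

    return {
        "beta": beta_ivw,
        "se": se_ivw,
        "or": np.exp(beta_ivw),
        "or_95ci": (np.exp(beta_ivw - 1.96 * se_ivw),
                    np.exp(beta_ivw + 1.96 * se_ivw)),
        "cochran_q": q_stat,
        "f_statistics": f_stats,
    }

# Toy numbers for three hypothetical instruments (not the study's SNPs)
print(ivw_estimate(beta_exp=[0.10, 0.12, 0.08],
                   se_exp=[0.010, 0.012, 0.009],
                   beta_out=[0.0004, 0.0006, 0.0003],
                   se_out=[0.0002, 0.0002, 0.0002]))
```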
A two-step MR analysis was employed to investigate potential mediation effects by lipid profiles (LDL, HDL, TC, TG, Apo-A and Apo-B) and liver function markers (ALT, AST, ALP, γ-GGT, and bile acid) in the relationship between ICP and VCD.The principle of two-step MR analysis was shown in Figure 2. In the two-step MR analysis, βa is the effect of ICP on the mediator and βb is the 3 Results Causal effect between ICP and CVD (CAD, HTN) via TSMR In this investigation, three causal associations were established using the IVW method (P < 0.05, Figure 3).It was determined that ICP elevates the risk of CVD, as evidenced by an Odds Ratio (OR) of 1.004 and a 95% CI ranging from 1.002 to 1.007 (P = 0.001).Comparable causal relationships were observed in the WM analysis, yielding an OR of 1.005 with a 95% CI from 1.003-1.008(P = 7.760 × 10 −6 ).Furthermore, a significant correlation between ICP and an augmented risk of HTN was identified through the IVW method, with an OR of 1.002 and a 95% CI between 1.000 and 1.005 (P = 0.024).Consistent risk estimates were also derived from MR-Egger, WM, and Weighted Mode analyses.Additionally, a notable link between ICP and CAD was detected via IVW analysis, indicated by an OR of 1.039 and a 95% CI from 1.002 to 1.079 (P = 0.039).Reverse MR analysis revealed that CVD, CAD, and HTN do not influence the likelihood of developing ICP.Further details of MR analyses and reverse MR analyses are presented in Supplementary Material Table S1 and S2. After removing outlier SNP, the causal relationship between ICP and CVD still remained.The robustness of the results after removing outlier SNP was assessed through Cochran's Q test, the MR-Egger intercept test and MR-PRESSO, as detailed in Supplementary Material Table S3.The MR-Egger intercept tests yielded P-values greater than 0.05, indicating an absence of horizontal pleiotropy.Moreover, the lack of detected pleiotropy by the Egger intercept implies that the MR estimates remained unbiased by pleiotropy, notwithstanding observed heterogeneity.The results of the leave-one-out analyses are depicted in Figure 4. 
Causal effects of ICP, lipid traits, liver function, and CVD (CAD, HTN) via two-step MR mediation analysis To investigate the potential mediation of six lipid profiles (LDL, HDL, TC, TG, Apo-A, Apo-B) and five liver function indices (ALT, AST, ALP, γ-GGT, Bile acid) in the relationship between ICP and CVD, CAD, and HTN, a two-step MR analysis was employed (Figure 2).This analysis identified lipids, notably LDL, TC, and Apo-B, as significant mediators in the impact of ICP on CVD, CAD, and HTN (P < 0.05).Detailed results of the screening of mediators are shown in Supplementary Material Table S4.However, the liver function indices did not exhibit a substantial mediating effect on the association between ICP and CVD (P > 0.05).Specifically, the analysis demonstrated that ICP indirectly increased the risk of CVD by elevating levels of LDL, TC, and Apo-B, with their respective mediation percentages (IE/TE) being 25.5%, 12.2%, and 21.3%.Further exploration into the mediation effect between ICP and CAD indicated that LDL alone contributed a 23.7% mediation effect, while TC and Apo-B showed no significant mediation.Moreover, LDL and Apo-B were found to mediate the relationship between ICP and HTN, with mediation effects of 11.0% and 9.3%, respectively, but no The principle of two-step MR analysis.SNP, single nucleotide polymorphisms; ICP, intrahepatic cholestasis of pregnancy; CVD, cardiovascular disease; HTN, hypertension; CAD, coronary artery disease.Three causal associations between ICP and CVD, HTN and CAD.ICP, intrahepatic cholestasis of pregnancy; CVD, cardiovascular disease; HTN, hypertension; CAD, coronary artery disease. The results of the leave-one-out analyses about ICP, CVD and HTN.ICP, intrahepatic cholestasis of pregnancy; CVD, cardiovascular disease; HTN, hypertension.mediating role for TC was observed.Additionally, the analysis extended to the interplay between ICP, CAD, and HTN, revealing that HTN could act as a mediator in the increased risk of CAD attributable to ICP, with a mediation effect of 14.5%.Details of the two-step MR analysis were provided in Table 3. In conclusion, our findings delineate two primary mediating pathways linking ICP with CVD: Firstly, ICP exerts an indirect causal influence on the risk of CVD, including CAD and HTN, by modulating the concentrations of LDL, TC, and Apo-B.Secondly, ICP indirectly impacts the risk of CAD through its effect on the likelihood of developing HTN. Discussion While the causal link between ICP and CVD remains ambiguous, our MR analysis supports a causal association of ICP with an enhanced risk of CVD, corroborating the observational findings of Shemer et al. (32) Their study reported a marginally increased risk of CVD in later life stages among women with ICP [Hazard Ratio (HR) 1.12, 95% CI 1.06-1.19].Conversely, the research conducted by Suvi-Tuulia Hämäläinen et al. 
indicated a lower incidence of CVD-related mortality in women with ICP compared to a control group, a variance potentially attributed to factors such as age at enrollment and follow-up duration (33).Additionally, this analysis extended to explore the causal connections of ICP with CAD and HTN, suggesting that ICP elevates the risks of both conditions.While studies focusing on the ICP-CAD nexus are scant, limited cohort research has hinted at a divergent risk profile for CAD in women with a history of ICP compared to those without.Furthermore, it is widely acknowledged that ICP predisposes to hypertensive complications during pregnancy, and our findings affirm that this risk persists into later life.This study is the first research to find a causal relationship between ICP and CVD at the genetic level using MR methods, which is important for deepening the understanding of ICP complications and guiding follow-up protocols. Factors such as lipid dysregulation, insulin resistance, endothelial damage, and an enhanced systemic inflammatory response may contribute to the increased CVD risk associated with ICP (34).This study focused on the role of dyslipidemia in abnormal pathophysiologic processes.Research by Chen Y et al. demonstrated a higher prevalence of hyperlipidemia in individuals with ICP compared to those without (5.96% vs. 3.80%), identifying ICP as an independent risk factor for dyslipidemia (35).Building on this, our study employed mediator MR analysis to substantiate the significant indirect effect of ICP on CVD via lipid levels.This mediational MR analysis revealed that LDL, TC, and Apo-B predominantly mediate the indirect impact of ICP on CVD, aligning with the meta-analysis findings of Zhan Y et al.Zhang's research further indicated that severe maternal dyslipidemia was more common in the severe ICP cohort, hinting at a potential link between the severity of ICP and dyslipidemia (36).LDL plays a key role in the development of atherosclerosis (37).LDL is converted from very low-density lipoprotein (VLDL).LDL particles contain about 50% cholesterol and are the most cholesterol-rich lipoproteins in the blood, so they are called cholesterol-rich lipoproteins (38).95% or more of the Apo in LDL is Apo-B (39).LDL carries cholesterol to peripheral tissues, and most of it is metabolized through the catabolism of the LDL receptor (LDLR) in the hepatocytes and extrahepatic tissues.The pathways through which ICP disrupts lipid metabolism remain intricate, with some evidence pointing to anomalies in farnesoid X receptor (FXR) activity (40).Activating FXR leads to a suppression of endogenous bile acid synthesis while concurrently decreasing levels of triglycerides, total cholesterol, and glucose in the plasma (41).Increased levels of the 3β-sulfated progesterone metabolite epiallopregnanolone sulfate in ICP pregnancies antagonized the FXR (42,43).Hence, the diminished activity of FXR could play a role in pregnancies with ICP, potentially affecting maternal metabolic processes (44).Although the mechanism is not yet clear, even the detection and intervention of dyslipidemia has important clinical value for the treatment of the disease itself and related complications in patients with ICP.In addition, it has been shown that gut microbiota may also play a role in lipid metabolism during ICP pregnancies (45).Although high bile acid levels have been implicated in cardiovascular toxicity, our MR analysis did not find substantial evidence to support the mediating role of liver function markers, 
including bile acids, in the association between ICP and CVD. Interestingly, this analysis revealed a mediating role of HTN in the relationship between ICP and CAD.The connection between HTN and CAD is well-established and unequivocal.Chronic hypertension induces hemodynamic alterations that activate blood This study explored the association between ICP and CVD at the genetic level.Because of the strong gene-phenotype association, the results of this study are limited to a causal relationship between a single disease (ICP) and CVD and cannot be extrapolated to other types of cholestasis.However, a review of the relevant literature shows that multiple causes of chronic cholestasis (biliary obstruction, PBC, and other diseases) seem to be associated with an increased risk of CVD and that abnormal lipid levels are involved in an important role (48)(49)(50).This suggests that a variety of cholestasis, including ICP, may modify the risk of CVD, but the causal relationship between them requires further subsequent validation. This study boasts several notable strengths.Primarily, it is the inaugural MR analysis investigating the causal link between ICP and CVD, marking a significant advancement in this research domain.Additionally, we conducted a thorough examination of potential mediators within the causal pathway from ICP to CVD, enhancing the depth and breadth of our analysis.Nonetheless, the study is not without its limitations.A primary constraint is the exclusive manifestation of ICP in one sex, coupled with the absence of large-scale, sex-specific GWAS, necessitating the use of sex-combined population data.However, the selected SNPs for MR analysis exhibited no significant sex-based genetic effect differences, thereby minimally influencing the outcomes.Another concern is the potential for sample overlap between the lipid profiles and liver function markers employed in the mediation analysis.Furthermore, the reliance on data predominantly from European populations constrains the generalizability of our conclusions across diverse ethnic groups.Finally, despite the rigorous quality control adopted in this study to minimize the influence of confounding factors on the results, it is possible that the results may still be affected to some extent due to the extremely strong correlation between lipids and CVD itself. In conclusion, it is important for individuals diagnosed with ICP to be made aware of their elevated risk for future CVD and the importance of monitoring lipid profiles for early intervention.Furthermore, there is a pressing need for additional research to elucidate the precise mechanisms through which ICP influences CVD risk. Conclusion In summary, this study is the first comprehensive MR analysis to explore the causal link between ICP and CVD using genome-wide data.Bidirectional MR analyses revealed that genetically predicted ICP is causally linked to an increased risk of CVD, with no evidence supporting a causal effect of genetically predicted CVD on ICP risk.Furthermore, MR-mediated analyses have substantiated both a direct causal impact of ICP on CVD risk and a notable indirect effect mediated through lipid profiles, specifically LDL, TC, and Apo-B.These results underscore the critical mediating role of lipids in the causal pathway from ICP to CVD. their affiliated organizations, or those of the publisher, the editors and the reviewers.Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. 
effect of the mediator on CVD. βc denotes the total effect (TE) of ICP on CVD and βc' denotes the direct effect (DE) of ICP on CVD (βc' = βc − βa·βb). We estimated the proportion of individual mediation by dividing the individual mediated effect (IE) (βa·βb) by the total effect (βc) (31). The indirect effects of ICP on CVD through mediators were quantified using the product of coefficients and the multivariate delta method. All analyses were performed in R software (version 4.2.0) using the TwoSampleMR and MVMR packages. In this study, P < 0.05 was considered to indicate a causal relationship. MR estimates are shown as odds ratios (OR) with 95% confidence intervals (CI) or as β estimates with 95% CI. TABLE 1 GWAS summary-level data used in this study. TABLE 2 The details of instrumental SNPs used in this study. TABLE 3 The results of the two-step MR analysis.
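The product-of-coefficients bookkeeping described above can be made concrete with a short sketch. The following Python function computes the indirect effect βa·βb, a first-order delta-method standard error, the direct effect βc − βa·βb, and the mediated proportion IE/TE; the function name and the numbers in the usage example are illustrative only (they merely mimic a mediated proportion of about 25%) and are not the study's estimates.

```python
import numpy as np

def mediation_proportion(beta_a, se_a, beta_b, se_b, beta_c):
    """Two-step MR mediation bookkeeping (product-of-coefficients method).

    beta_a : effect of the exposure (e.g. ICP) on the mediator (e.g. LDL)
    beta_b : effect of the mediator on the outcome (e.g. CVD)
    beta_c : total effect of the exposure on the outcome
    Returns the indirect effect, its delta-method SE, the direct effect,
    and the mediated proportion IE/TE.
    """
    indirect = beta_a * beta_b                    # IE = beta_a * beta_b
    direct = beta_c - indirect                    # DE = beta_c - IE
    # First-order multivariate delta method (the two steps assumed independent)
    se_indirect = np.sqrt(beta_a**2 * se_b**2 + beta_b**2 * se_a**2)
    proportion = indirect / beta_c                # mediated proportion IE/TE
    return {"IE": indirect, "IE_se": se_indirect,
            "DE": direct, "IE_over_TE": proportion}

# Illustrative numbers only, chosen to give IE/TE = 0.25
print(mediation_proportion(beta_a=0.05, se_a=0.01,
                           beta_b=0.02, se_b=0.004,
                           beta_c=0.004))
```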
4,639.6
2024-04-30T00:00:00.000
[ "Medicine", "Biology" ]
Chemical Ordering in Na-Pb and Na-Hg Binary Liquid Alloys A simple model has been used to investigate the nature of chemical order in Na-Pb and Na-Hg liquid binary alloy at 700K and 673K respectively. The energy parameter obtained from the model was used to calculate the concentration dependent mixing properties such as Gibb’s free energy of mixing, Concentration fluctuations in the long wavelength limit and the Warren-Cowley chemical short range order parameter. Results obtained showed that both alloys are heterocoordinated throughout the entire concentration and there is tendency for segregation and demixing to take place in the liquid alloys. We observed that Na-Hg liquid alloy is more strongly interacting binary alloy and chemically ordered than Na-Pb liquid alloy. Introduction A number of theories have been developed by theoreticians to explain the temperature dependence of the thermodynamic properties of binary liquid alloys with a purpose of obtaining valuable microscopic information on them [1][2][3].Accurate thermodynamic knowledge of the alloy mixing properties and phase diagrams of alloy systems is crucial to having a reliable theoretical result.The mixing behavior of two metals forming binary alloys is as a result of the interplay between energetic and structural adjustments of constituent elemental atoms [4].Hetero-coordination known as the preference of unlike atoms to pair as nearest neighbors in the alloy system, form A-B pair while segregation is known as the situation where the constituent atoms in the alloy becomes selfcoordinated forming A-A and B-B pairs where A and B are the constituent atoms within the binary alloy [5].Alloys are mixture of metals and a binary alloy can be represented as AxB1-x where A and B are metals and subscripts x and 1-x are concentration of the respective metals.Na-Pb, Na-Hg, K-Pb and K-Hg are examples of binary alloys.Classification of binary alloys can be done according to the deviation of their thermodynamic functions such as chemical activity from raoulatian ideality, therefore binary alloys can be classified into two main groups; segregating (positive deviating) and heteoro-coordinating (negative deviating) alloys [6,7].The interest in the energetic and its effects on the alloying behavior of Na-Pb and Na-Hg stemmed from the understanding that, despite the hazards involved in the unsafe handling of Na-Pb alloy, it may still be relevant in qualitative inorganic analysis, both as reductant in acidic and alkaline media, and also as a source of sodium hydroxide for hydroxide precipitations [8].Na-Hg has been used in organic chemistry as a powerful reducing agent, which is safer to handle than sodium itself.Na-Hg has been found to be useful in fabricating pressure-based sodium lamp [9].It is necessary to mention that there has been a previous attempt to understand the alloying behavior of these alloys [10].Consequently, this paper also has the purpose of complimenting earlier studies on each of Na-Pb and Na-Hg liquid alloy by studying the structural behaviour of these two Sodium-based alloy systems, mostly with, studies of concentration fluctuations and the Warren-Cowley chemical short-range order parameter (CSRO).The calculation of the structural properties using Flory's model [11] is presented. 
Theoretical Concepts

In the framework of Flory's model, the expression for the Gibbs free energy of mixing, G_M, of a binary liquid alloy is given by Eqn. (1), where x and y are the bulk concentrations of the constituent A and B atoms in the binary alloy respectively, such that y = 1 − x, and the volume parameter appearing in Eqn. (1) is defined from V_A and V_B, the atomic volumes of constituents A and B respectively. The parameter w is the ordering energy, whose value gives information on the alloying behaviour of the alloy, and R is the universal gas constant.

The concentration fluctuations in the long-wavelength limit, S_cc(0), can be calculated from the standard thermodynamic relationship in terms of the Gibbs free energy of mixing,

S_cc(0) = RT (∂²G_M/∂x²)^{-1}, evaluated at constant temperature, pressure and number of atoms.   (2)

Using Eqns. (1) and (2), S_cc(0) takes the form of Eqn. (3), with the accompanying definitions given in Eqn. (4). For ideal mixing, the energy parameters w and δ are zero and Eqn. (3) reduces to

S_cc^id(0) = x(1 − x).   (5)

Free energy of mixing

Table 1 shows the values of the fitted interaction parameter for liquid Na-Pb and Na-Hg. Eqn. (1) has been used to obtain the optimal value of the interaction parameter that gives agreement between the experimental and theoretical Gibbs free energy. Eqn. (1) was also used to calculate G_M/RT for both systems, while the experimental data were taken from the work reported by Hultgren et al. [13]. Figures 1 and 2 show the computed and experimental values as a function of concentration (solid lines, theoretical values; squares, experimental values [13]; x_Na and x_Hg denote the concentrations of Na and Hg in the respective alloys). The negative values of w in Table 1 show that both alloy systems are chemically ordered, which means pairing of unlike atoms. From the results in Figures 1 and 2, it can be seen that Na-Pb and Na-Hg have minimum G_M/RT values of −2.797 and −3.125 respectively. This indicates that Na-Hg is the more heterocoordinated, strongly interacting binary alloy (G_M/RT ≤ −3.0).

Concentration fluctuations and the Warren-Cowley CSRO parameter

The fitted parameters were kept invariant when computing S_cc(0) and the short-range order parameter α1. In principle, S_cc(0) can be obtained directly from small-angle diffraction experiments, but the experimental procedure involved is very tedious and has not been accomplished successfully. S_cc(0) is an essential microscopic parameter which has been widely used in the study of the nature of atomic order in binary liquid alloys [13,14], and its deviation from the ideal value also forms a basis for explaining the energetics of liquid alloys. Figures 3 and 4 show the computed values of the concentration fluctuations in the long-wavelength limit. Eqns. (3) and (5) have been used to compute S_cc(0) and S_cc^id(0) respectively. The mixing behavior of liquid binary alloys can be deduced from the deviation of S_cc(0) from the ideal value: S_cc(0) < S_cc^id(0) is evidence of chemical ordering and heterocoordination; otherwise, there is a tendency for segregation and demixing to take place in the liquid alloy. Figures 3 and 4 show heterocoordination in both alloys. The nature of ordering in binary liquid alloys can also be investigated by calculating the Warren-Cowley short-range order (CSRO) parameter, α1 [4,5]. Experimental values are not available for comparison with our theoretical results, unlike the case of the Gibbs free energy of mixing. The computed α1 values reveal that Na-Hg has more heterocoordination and chemical order than Na-Pb.
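For readers who wish to reproduce the last step numerically, the sketch below evaluates the ideal value S_cc^id(0) = x(1 − x) and converts a computed S_cc(0) into the Warren-Cowley parameter using the relation α1 = (S − 1)/[S(z − 1) + 1] with S = S_cc(0)/S_cc^id(0), the expression commonly used in this literature. The coordination number z = 10 and the sample S_cc(0) value are assumptions made purely for illustration, not values fitted in this work.

```python
import numpy as np

def ideal_scc(x):
    """Ideal-mixing concentration fluctuations, Scc_id(0) = x(1 - x)."""
    return x * (1.0 - x)

def warren_cowley_alpha1(scc0, x, z=10):
    """Warren-Cowley chemical short-range order parameter from Scc(0).

    Uses the commonly quoted relation alpha_1 = (S - 1) / (S (z - 1) + 1)
    with S = Scc(0) / Scc_id(0); z is an assumed nearest-neighbour
    coordination number (z = 10 is a typical choice for liquid alloys).
    Negative alpha_1 signals heterocoordination, positive alpha_1 segregation.
    """
    s = scc0 / ideal_scc(x)
    return (s - 1.0) / (s * (z - 1.0) + 1.0)

# Toy check: an Scc(0) below the ideal value must give a negative alpha_1
x = 0.41                          # Na concentration where maximum order is reported
scc0 = 0.6 * ideal_scc(x)         # illustrative value only, not a computed result
print(ideal_scc(x), warren_cowley_alpha1(scc0, x))
```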
A value of α1 = −1 indicates complete ordering (unlike-atom pairing) while α1 = +1 represents segregation. The degree of chemical order may be deduced from the negative values of α1. Maximum heterocoordination is observed at x_Na = 0.41 in Na-Pb and at x_Hg = 0.72 in Na-Hg, as depicted in Figures 5 and 6: α1 for Na-Pb at x_Na = 0.41 is −0.30, and for Na-Hg at x_Hg = 0.72 it is −0.34.

Conclusion

The energetics of Na-Pb and Na-Hg liquid binary alloys have been analyzed in the present study at 700 K and 673 K respectively. Attention has been given to their thermodynamic functions, namely the Gibbs free energy of mixing, the concentration fluctuations in the long-wavelength limit and the Warren-Cowley CSRO parameter. The theoretical study of the alloying behavior of the two liquid alloys reveals hetero-coordination in both alloys throughout the entire concentration range of Na in Na-Pb and of Hg in Na-Hg. It has also been shown that Na-Hg is a more strongly interacting binary alloy than Na-Pb.

Table 1. Fitted parameters for Na-Pb and Na-Hg liquid alloys. Figure 1. Concentration dependence of G_M/RT for Na-Pb liquid alloys at 700 K, computed from Eqn. (1); x_Na denotes the concentration of Na in the alloy.
1,724.8
2017-05-09T00:00:00.000
[ "Materials Science" ]
A viscoelastic flow model of Maxwell-type with a symmetric-hyperbolic formulation Maxwell models for viscoelastic flows are famous for their potential to unify elastic motions of solids with viscous motions of liquids in the continuum mechanics perspective. But the usual Maxwell models allow one to define well motions mostly for one-dimensional flows only. To define unequivocal multi-dimensional viscoelastic flows (as solutions to well-posed initial-value problems) we advocated in [ESAIM:M2AN 55 (2021) 807-831] an upper-convected Maxwell model for compressible flows with a symmetrichyperbolic formulation. Here, that model is derived again, with new details. Elastic and viscous motions in the continuum perspective First, let us recall seminal systems of PDEs that unequivocally model the motions φ t : B →⊂ R 3 of continuum bodies B on a time range t ∈ [0, T ).PDEs governing elastic flows are a starting point for all continuum bodies.PDEs governing viscoelastic flows, for liquid bodies in particular, shall come next in Section 2. In the sequel, assuming ρ ∈ R + * constant, we recall how one standardly defines e(F ) for solids and fluids dynamics, on considering the determinant |F i α | of the deformation gradient (also denoted |F | hereafter) and the cofactor matrix C i α of F i α (C in tensor notation) as variables independent of F .Next, in Section 2, we recall with much details the function e(F ) that we proposed in [2] so as to properly define a viscoelastic dynamics of Maxwell type that unifies solids and fluids. ) enters the framework of symmetric-hyperbolic systems.In particular, a unique time-continuous solution can be built in . The latter solution, associated with a unique mapping φ t , is equivalently defined by [17] where ∀α .Indeed, with the Eulerian description (7-10) of the body motions (i.e. in spatial coordinates, as opposed to the Lagrangian description (1-5) in material coordinates) where C T is the dual (matrix transpose) of C, and with Piola's identity (11) (16) div(ρF T ) = 0 = ∇ × (ρC T ) , one can show that, when e(F ) is polyconvex, the symmetric-hyperbolic framework applies to (12)(13)(14)(15)(16) insofar as smooth solutions also satisfy the conservation law for ρ 2 |u|2 + ρe, a functional convex in a set of independent conserved variables [7]. A first example of a physically-meaningful internal energy is the neo-Hookean with c 2 1 > 0.Then, the quasilinear system (1-3) is symmetric-hyperbolic insofar as smooth solutions additionally satisfy a conservation law for |u| 2 /2+e strictly convex in (u, F ). Unequivocal motions can be defined1 , equivalently by (12)(13).The latter neo-Hookean model satisfyingly predicts the small motions of some solids. However, (18) is oversimplistic : it does not model the deformations that are often observed orthogonally to a stress applied unidirectionally, see e.g.[16] regarding rubber.Many observations are better fitted when the Cauchy stress σ contains an additional spheric term −pI, with a pressure p(ρ) function of volume changes. 
Next, instead of (18), one can rather assume a compressible neo-Hookean energy The functional (19) is polyconvex as soon as γ > 1 [7].Thus, using either (1-5) or (7-10) one can define unequivocal smooth motions with where an additional pressure term arises2 in comparison with (18).Precisely, one can build unique solutions to a symmetric reformulation of a system of conservation laws for conserved variables They combine with (20) using Ξ(U ) T = pu y −pu x to yield a symmetric system after premultiplication by The symmetric formulation allows one to establish the key energy estimates in the existence proof of smooth solutions [7], as well as weak self-similar solutions to the 1D Riemann problem using generalized eigenvectors R solutions to For application to real materials3 , one important question remains: how to choose c 2 1 and d 2 1 . In most real applications of elastrodynamics, the material parameters c 2 1 and d 2 1 should vary, as functions of F e.g., but also as functions of an additional temperature variable so as to take into account microscopic processes not described by the macroscopic elastodynamics system.For instance, the deformations endured by stressed elastic solids increase with temperature, until the materials become viscous liquids.Then, one natural question arises: could (19) remain useful for liquids which are mostly incompressible (i.e.div u ≈ 0 holds) and much less elastic than solids ?In Sec.1.2, we recall the limit case when the volumic term dominates the internal energy, and p = C 0 ρ γ dominates σ, which coincides with seminal PDEs for perfect fluids (fluids without viscosity).In Section 2, we next consider how to rigorously connect fluids like liquids to solids using an enriched elastodynamics system.1.2.Fluid dynamics.Consider the general Eulerian description (12-15) for continuum body motions.It is noteworthy that given u, each kinematic equation ( 10), ( 8) and ( 9) is autonomous.As a consequence, in spatial coordinates, motions can be defined by reduced versions of the full Eulerian description (7-10), with an internal energy e strictly convex in ρ but not in F !One famous case is the polytropic law with C 0 > 0.Then, one obtains Euler's system for perfect (inviscid) fluids (31) with a pressure p := −∂ ρ −1 e = C 0 ρ γ characterizing spheric stresses: (32) The system (31) is symmetric-hyperbolic.It is useful to define unequivocal timeevolutions of Eulerian fields (on finite time ranges) [7], although multi-dimensional solutions are then not equivalently described by one well-posed Lagrangian description [8].In fact, for application to real fluids, the system (31) is better understood as the limit of a kinetic model based on Boltzmann's statistical description of molecules [9], and the model indeed describes gaseous fluids better than condensed fluids (liquids).In any case, the fluid model (31) still lacks viscosity.One classical approach adds viscous stresses as an extra-stress term τ in (32) i.e. ( The extra-stress is required symmetric (to preserve angular momentum), objective (for the sake of Galilean invariance), and "dissipative" (to satisfy thermodynamics principles) [5].Precisely, introducing the entropy η as an additional state variable for heat exchanges at temperature θ = ∂ s e > 0, thermodynamics requires with a dissipation term D ≥ 0. Usually, denoting D(u) ij := 1 2 ∂ i u j + ∂ j u i , one then postulates a Newtonian extra-stress with two constant parameters ℓ, μ > 0 (34) . 
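For definiteness, a standard Newtonian law consistent with the description above (symmetric, objective, dissipative, with two constant parameters ℓ, μ > 0) reads, as a hedged reconstruction rather than necessarily the exact form of (34),
\[
\tau \;=\; \ell\,(\operatorname{div} u)\, I \;+\; 2\mu\, D(u),
\qquad
D(u)_{ij} := \tfrac12\bigl(\partial_i u_j + \partial_j u_i\bigr),
\]
for which the dissipation is manifestly non-negative,
\[
\mathcal{D} \;=\; \tau : D(u) \;=\; \ell\,(\operatorname{div} u)^{2} \;+\; 2\mu\,\lvert D(u)\rvert^{2} \;\geq\; 0 .
\]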
The Newtonian model allows for the definition of causal motions through the resulting Navier-Stokes equations.But it is not obviously unified with elastodynamics; and letting alone that (34) is far from some real "non-Newtonian" materials, it implies that shear waves propagate infinitelyfast, an idealization that is also a difficulty for the unification with elastodynamics. By contrast, Maxwell's viscoelastic fluid models for τ possess well-defined shear waves of finite-speed, and they can be connected with elastodynamics with a view to unifying solids and fluids (liquids) in a single continuum description. 2.1.Viscoelastic 1D shear waves for solids and fluids.Some particular solutions to ( 12)-( 14)-( 33)-( 35)-(36) unequivocally model viscoelastic flows, and rigorously link solids to fluids.Shear waves e.g. for a 2D body moving along e x ≡ e x 1 following b = y ≡ x 2 , a = x − X(t, y), X(0, y) = 0 are well-defined by (7) 38) are well defined given initial conditions plus possibly boundary conditions when the body has finite dimension along e y ≡ e x 2 , such as y ≡ x 2 > 0 in Stokes first problem see e.g.[15].Moreover, the latter 1D shear waves rigorously unify solids and fluids insofar as they are structurally stable [14,4] yy u like elastic solids, and when λ → 0, they satisfy yy u like viscous liquids.So the 1D shear waves illustrate well the structural capability of Maxwell's model to unify solid and Newtonian fluid motions. But a problem arises with multi-dimensional motions: solutions to ( 12)-( 14)-( 33)-( 35)-( 36) are not well-defined in general. 4Other objective derivatives than UC can be used, which also allow symmetric-hyperbolic reformulations.They will not be considered here for the sake of simplicity. Proof.We will show (3) in material coordinates (the Lagrangian description).On one hand, computing ∂ t |u| 2 = 2u • ∂ t u is straightforward.One the other hand, using Interestingly, notice that our free energy (41) is not useful for well-posedness: it is not strictly convex in conserved variables.Morover, our formulation ( 12)-( 13)-( 14)-(39) for a sound Maxwell model admits the 1D shear waves examined in Sec.2.1 as solutions, so it preserves some well-established interesting properties of the standard (incompressible) formulation of Maxwell model. 
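To make the solid/fluid limits of the 1D shear waves tangible, the following minimal finite-difference sketch (Python/NumPy) integrates the generic one-dimensional Maxwell shear system ρ ∂_t u = ∂_y τ, λ ∂_t τ + τ = μ ∂_y u, which propagates elastic shear waves for large λ and relaxes to Newtonian momentum diffusion for small λ. It is an illustration under assumed parameters, grid and initial data, not a transcription of the paper's own shear-wave equations or of the compressible formulation.

```python
import numpy as np

def maxwell_shear_wave(rho=1.0, mu=1.0, lam=1.0, L=1.0, ny=200, t_end=0.5):
    """Explicit staggered finite-difference sketch of 1D Maxwell shear waves:

        rho * du/dt          = d(tau)/dy
        lam * d(tau)/dt + tau = mu * du/dy

    For large lam this behaves like an elastic solid (finite-speed shear
    waves of speed ~ sqrt(mu/(rho*lam))); for small lam the stress relaxes
    and the momentum diffuses as in a Newtonian fluid, mirroring the two
    limits discussed in the text.  Grid, initial data and parameters are
    illustrative assumptions, not the paper's setup.
    """
    dy = L / ny
    c = np.sqrt(mu / (rho * lam))               # elastic shear-wave speed
    dt = 0.4 * min(dy / c, lam)                 # crude explicit stability restriction
    y = (np.arange(ny) + 0.5) * dy
    u = np.exp(-200.0 * (y - 0.5 * L) ** 2)     # initial velocity bump
    tau = np.zeros(ny + 1)                      # shear stress on cell faces

    t = 0.0
    while t < t_end:
        # update stress from the velocity gradient (interior faces only)
        dudy = (u[1:] - u[:-1]) / dy
        tau[1:-1] += dt * (mu * dudy - tau[1:-1]) / lam
        # update velocity from the stress divergence
        u += dt * (tau[1:] - tau[:-1]) / (rho * dy)
        t += dt
    return y, u, tau

y, u, tau = maxwell_shear_wave(lam=10.0)        # nearly elastic regime
print(float(u.max()), float(np.abs(tau).max()))
```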
Let us finally detail the symmetric structure of our hyperbolic formulation for (compressible) viscoelastic flows of Maxwell type, with its Lagrangian description. To that aim, we consider a 2D system in the limit λ → ∞, namely the conservation laws (48–55). Rewriting the system above as ∂_t U + ∂_α G^α(U) = 0, involutions M^α ∂_α U = 0 hold with M^a = (0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0) and M^b = (0 0 0 −1 0 0 0 0 0 0 0 0 0 0 0 −1 0 0 0 0). A symmetric formulation is obtained for our quasilinear formulation of Maxwell (compressible) viscoelastic flows similarly to the standard compressible elastodynamics case: on premultiplying the system (48–55) by D²η(U), insofar as the matrix (D²η(U) DG^α(U) + DΞ(U)^T M^α) ν_α is symmetric given a unit vector ν = (ν_a, ν_b) ∈ R². We do not detail the symmetric matrix (D²η(U) DG^α(U) + DΞ(U)^T M^α) ν_α here: its upper-left block coincides with (28), but the other blocks are complicated and depend on the choice of the variable Y = A^{-1/2} (key to exhibiting the symmetric-hyperbolic structure using a fundamental convexity result from [10], Theorem 2 p. 276 with r = 1/2 and p = 0), a choice which is not unique (ours may not be optimal). In any case, the symmetric structure yields a key energy estimate for the construction of unique smooth solutions, and it also allows one to construct 1D waves similarly from (29) when λ → ∞ (otherwise one has to take into account the source term of relaxation type).

Conclusion and Perspectives

Our symmetric-hyperbolic formulation of viscoelastic flows of Maxwell type [2] allows one to rigorously establish multidimensional motions, within the same continuum perspective as elastodynamics and Newtonian fluid models. It remains to exploit that mathematically sound framework, e.g. to establish the structural stability of the model and rigorously unify (liquid) fluid and solid motions through parameter variations in our model: see [4] regarding the nonsingular limit toward elastodynamics. Another step in that direction is to drive the transition between (liquid) fluid and solid motions more physically, e.g. by taking into account heat transfers: see [3] for a model of Cattaneo type for the heat flux, which preserves the symmetric-hyperbolic structure. Last, one may want to add physical effects for particular applications: the purely Hookean internal energy in (41) can be modified to include finite-extensibility effects as in the FENE-P or Gent models, or to use another measure of strain, with a lower-convected time-rate for instance, see [3].
2,627.8
2022-12-05T00:00:00.000
[ "Physics", "Engineering" ]
Remarks on the non-Riemannian sector in Double Field Theory Taking $\mathbf{O}(D,D)$ covariant field variables as its truly fundamental constituents, Double Field Theory can accommodate not only conventional supergravity but also non-Riemannian gravities that may be classified by two non-negative integers, $(n,\bar{n})$. Such non-Riemannian backgrounds render a propagating string chiral and anti-chiral over $n$ and $\bar{n}$ dimensions respectively. Examples include, but are not limited to, Newton--Cartan, Carroll, or Gomis--Ooguri. Here we analyze the variational principle with care for a generic $(n,\bar{n})$ non-Riemannian sector. We recognize a nontrivial subtlety for ${n\bar{n}\neq 0}$ that infinitesimal variations generically include those which change $(n,\bar{n})$. This seems to suggest that the various non-Riemannian gravities should better be identified as different solution sectors of Double Field Theory rather than viewed as independent theories. Separate verification of our results as string worldsheet beta-functions may enlarge the scope of the string landscape far beyond Riemann. Introduction This paper is a sequel to [1] which proposed to classify all the possible geometries of Double Field Theory (DFT) [2][3][4][5][6][7] by two non-negative integers, (n,n). The outcome -which we shall review in section 2is that only the case of (0, 0) corresponds to conventional supergravity based on Riemannian geometry. Other generic cases of (n,n) = (0, 0) do not admit any invertible Riemannian metric and hence are non-Riemannian by nature. Strings propagating on these backgrounds become chiral and anti-chiral over n and n dimensions respectively. It is noteworthy and relevant to this work that, all the geometrical notation of the covariant derivative, Doubled but at the same time gauged string action One of the characteristics of DFT is the imposition of the 'section condition': acting on arbitrary functions in DFT, say Φ r , and their products like Φ s Φ t , the O(D, D) invariant Laplacian should vanish We remind the reader that the O(D, D) indices are raised with J AB . Upon imposing the section condition, the generalized Lie derivative (1.4) is closed by commutators [3,7], The section condition is mathematically equivalent to the following translational invariance [8,83], where the shift parameter, ∆ A , is derivative-index-valued, meaning that its superscript index should be identifiable as a derivative index, for example ∆ A = Φ s ∂ A Φ t . This insight on the section condition may suggest that the doubled coordinates of DFT are in fact gauged by an equivalence relation, The expression of SAB in Table 1 is newly derived from [63] using ΓACDΓ CBD = Γ BCD ΓCAD = 1 2 ΓACDΓ BCD and ΓCADΓ DBC = ΓCADΓ CBD − 1 2 ΓACDΓ BCD which hold due to the symmetric properties, Γ [ABC] = 0 and Γ A(BC) = 0. Each gauge orbit, i.e. equivalence class, represents a single physical point. As a matter of fact in DFT, the usual infinitesimal one-form of coordinates, dx A , is not DFT-diffeomorphism covariant, However, if we gauge the one-form by introducing a derivative-index-valued connection, we can have a DFT-diffeomorphism covariant one-form, provided that the gauge potential transforms appropriately, (1.10) It is also a singlet of the coordinate gauge symmetry (1.8): The gauged one-form then naturally allows to construct a perfectly symmetric doubled string action [84], [8], .8) and similarly , we note that the tilde coordinates are indeed gauged: . 
With respect to this choice of the section, the well-known parametrization of the DFT-metric and the DFT-dilaton in terms of the conventional massless NS-NS field variables [88,89], DFT and the doubled-yet-gauged string action work well, provided these conditions are fulfilled. For example, instead of (2.1), we may let the DFT-metric coincide with the O(D, D) invariant metric, This is a vacuum solution to DFT, or to the 'matter-free' EDFEs, which is very special in several aspects. Firstly, compared with (2.1), there cannot be any associated Riemannian metric g µν and hence it does not allow any conventional or Riemannian interpretation at all. It is maximally non-Riemannian. Secondly, it is fully O(D, D) symmetric, being one of the two most symmetric vacua of DFT, H AB = ±J AB . Thirdly, it is moduli-free since it does not admit any infinitesimal fluctuation, δH AB = 0 [75]. 4 And lastly but not leastly, upon this background, the auxiliary gauge potential, A µ , appears linearly rather than quadratically in the doubled-yet-gauged string action (1.11). Consequently it serves as a Lagrange multiplier to prescribe that all the untilde target spacetime coordinates should be chiral [8] (c.f. [90,91]), An intriguing insight from [11] is then that, the usual supergravity fields in (2.1) would be the Nambu- about the existence of more generic non-Riemannian geometries (c.f. [8,10] for other examples and also [22] for 'timelike' duality rotations). This question was answered in [1]: the most general solutions to the defining properties of the DFT-metric (2.2) can be classified by two non-negative integers, (n,n), where i, j = 1, 2, · · · , n,ī, = 1, 2, · · · ,n and 0 ≤ n +n ≤ D. 4 Put HA B = δA B in (3.5). (i) While the B-field is skew-symmetric as usual, H µν and K µν are symmetric tensors whose kernels are spanned by linearly independent vectors, X i µ ,Xī ν and Y µ j ,Ȳ ν  , respectively, (ii) A completeness relation must be satisfied From the linear independency of the zero-eigenvectors, X i µ ,Xī ν , orthogonal/algebraic relations follow Intriguingly, the B-field (hence 'Courant algebra') is universally present regardless of the values of (n,n), and contributes to the DFT-metric through an O(D, D) adjoint action: whereH corresponds to the 'B-field-free' DFT-metric, and B is an O(D, D) element containing the B-field, It is also worth while to note the 'vielbeins' or 'square-roots' of K µν and H µν : where a, b are (D − n −n)-dimensional indices subject to a flat metric, say η ab , whose signature is arbitrary. Essentially, K µ a , X i µ ,Xī µ form a D × D invertible square matrix whose inverse is given by In fact, the analysis of the DFT-vielbeins corresponding to the (n,n) DFT-metric (2.5) carried out in [1] shows that the local Lorentz symmetry group, i.e. spin group is Spin(t + n, s + n) × Spin(s +n, t +n) . (2.14) Here (t, s) is the arbitrary signature of η ab or the nontrivial signature of H µν and K µν satisfying t + s + n + n = D. Of course, once the spin group of any given theory is specified, it is fixed once and for all. Thus, each sum, t + n, s + n, s +n, and t +n, should be constant. For example, the Minkowskian D = 10 maximally supersymmetric DFT [85] and the doubled-yet-gauged Green-Schwarz superstring action [79], both having the local Lorentz group of Spin(1, 9) × Spin(9, 1), can accommodate (0, 0) Riemannian and (1, 1) non-Riemannian sectors only (see [12] for examples of supersymmetric non-Riemannian backgrounds). 
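As a small numerical cross-check of the algebra recalled above, the NumPy sketch below builds the Riemannian (0,0) parametrization of the DFT-metric from a random metric g and skew-symmetric B-field, and verifies the defining properties (symmetry and H_A{}^C H_C{}^B = δ_A^B), the vanishing trace H^A{}_A = 2(n − n̄) = 0 of the (0,0) sector, and the fact that the maximally non-Riemannian choice H_AB = J_AB satisfies the same conditions. The helper names and the random test data are illustrative; the sketch checks the stated algebraic properties rather than reproducing any computation from the paper.

```python
import numpy as np

def odd_metric(d):
    """O(D,D) invariant metric J_{AB} in the off-diagonal convention."""
    J = np.zeros((2 * d, 2 * d))
    J[:d, d:] = np.eye(d)
    J[d:, :d] = np.eye(d)
    return J

def riemannian_dft_metric(g, B):
    """Standard (0,0) generalized-metric parametrization of H_{AB}
    in terms of a Riemannian metric g and a skew-symmetric B-field."""
    ginv = np.linalg.inv(g)
    top = np.hstack([ginv, -ginv @ B])
    bot = np.hstack([B @ ginv, g - B @ ginv @ B])
    return np.vstack([top, bot])

d = 3
rng = np.random.default_rng(0)
A = rng.normal(size=(d, d))
g = A @ A.T + d * np.eye(d)                 # a positive-definite metric
B = rng.normal(size=(d, d)); B = B - B.T    # a skew-symmetric B-field

J = odd_metric(d)
H = riemannian_dft_metric(g, B)

# Defining properties of a DFT-metric: symmetry and H_A^C H_C^B = delta_A^B
assert np.allclose(H, H.T)
assert np.allclose(H @ J @ H, J)            # equivalent to (H J)^2 = identity since J^2 = 1
# Trace classification: H^A_A = 2(n - nbar) vanishes for the Riemannian (0,0) sector
assert abs(np.trace(H @ J)) < 1e-9
# The maximally non-Riemannian choice H = J also satisfies the defining conditions
assert np.allclose(J, J.T) and np.allclose(J @ J @ J, J)
print("checks passed")
```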
Nevertheless, we may readily relax the Majorana-Weyl condition therein [79,85] and impose the Weyl condition only on spinors, such that the local Lorentz group can take any of Spin(t,ŝ) × Spin(ŝ,t) witĥ t +ŝ = 10. The allowed non-Riemannian geometries will be then (n, n) types with n =n running from zero to min(t,ŝ) [1]. On the other hand, bosonic DFT does not care about spin groups and hence should be free from such constraints. It can admit more generic (n,n) non-Riemannian geometries. Upon the (n,n) background, the doubled-yet-gauged worldsheet string action (1.11) reduces to 1 2πα which should be supplemented by the chiral and anti-chiral constraints over the n andn directions, These constraints are prescribed by the integrated-out auxiliary gauge potential A A (1.10). Comment 1. Matching with the content of the non-Riemannian component fields, 20) and the undoubled string worldsheet action resulting from (1.11), one can identify the original Newton-Cartan [33][34][35] as (1, 0), Stringy Newton-Cartan [36] as (1, 1), Carroll [37,38] as (D−1, 0), and Gomis-Ooguri [39] as (1, 1): see [1,11,57] for the details of the identifications. Further, the isometry of the (1, 1) 5 Through exponentiations, finite Milne-shift transformations can be achieved, which turn out to get truncated at finite orders, flat DFT-metric matches with the non-relativistic symmetry algebra such as Bargmann algebra [10], while the notion of T-duality persists to make sense in the non-relativistic string theory [47]. These all seem to suggest that DFT may be the home, i.e. the unifying framework, to describe various known as well as yetunknown non-Riemannian gravities. 6 Having said that there are also a few novel ingredients from DFT, such as the local GL(n) × GL(n) symmetry (2.15), the notion of 'Milne-shift covariance' as we shall discuss below (2.24), (2.26), and the very existence of the DFT-dilaton of which the exponentiation, e −2d , gives the integral measure in DFT being a scalar density with weight one, It is worth while to generalize the decomposition (2.9) to an arbitrary DFT tensor, Under diffeomorphisms, while the DFT tensor T A 1 ···An is surely subject to the generalized Lie derivative (1.4), the circled quantity,T A 1 ···An , is now governed by the undoubled ordinary Lie derivative which can be conveniently obtained as the truncation of the generalized Lie derivative by choosing the section, ∂ µ ≡ 0, and setting the parameter, ξ A = (0, ξ µ ) asξ ν ≡ 0: (2.23) Further, by construction, a DFT tensor is Milne-shift invariant. Yet, the circled one is Milne-shift covariant in the following manner, Explicitly, for a DFT vector, [76,93]) (2.25) That is to say, the circled quantities,T A 1 ···An ,V A , are 'B-field free', subject to the ordinary Lie derivative, For consistency, we also note for the O(D, D) invariant metric, Here we revisit with care the variational principle for a general DFT action coupled to matter (1.1) especially around non-Riemannian backgrounds. While the variations of the matter fields lead to their own Euler-Lagrange equations of motion, the variations of the DFT-metric and the DFT-dilaton give [65] δˆ1 Here G AB and T AB are respectively the stringy or O(D, D) completions of the Einstein curvature [81] and the Energy-Momentum tensor [65], as summarized in Table 1. 
The above result is easy to obtain once we neglect a boundary contribution arising from a total derivative [63]: and take into account a well-known identity which the infinitesimal variation of the DFT-metric should satisfy [7,62,94], Table 1. , as the two variations, δH AB and δd, give the projected part and the trace part separately, .12). We shall confirm that the full Einstein Double Field Equations are still valid for non-Riemannian sectors, either trivially due to projection properties or nontrivially from the genuine variational principle. Variations of the DFT-metric around a generic (n,n) background Here we shall identify the most general form of the infinitesimal fluctuations around a generic (n,n) DFTmetric (2.5). The fluctuations must respect the defining properties of the DFT-metric (2.2) and hence satisfy That is to say, the trace of the DFT-metric, H A A = 2(n −n), is invariant under continuous deformations. Without loss of generality, like (2.9), we put With this ansatz, the former condition in (3.5) is met and the latter gives We need to solve these three constraints. For this, we utilize the completeness relation (2.13), and decompose each of {α, β, γ} into mutually orthogonal pieces, where, since α, β are symmetric, We remind the readers that, using the (D − n −n)-dimensional flat metric, η ab , we freely raise or lower the indices, a, b. Now, with the decomposition (3.9), it is straightforward to see that (3.8) implies Thus, the independent degrees of freedom for the fluctuations consist of (3.12) In total, as counted sequently as Einstein Double Field Equations still hold for non-Riemannian sectors Now, we proceed to organize the variation of the action induced by that of the (n,n) DFT-metric (3.1) in terms of the independent degrees of freedom for the fluctuations (3.12). 7 The only required property of the DFT-vielbeins is VApVB p +VApVBp = JAB. See [75] for a related discussion. We apply the prescription (2.22) and write a pair of circled 'B-field-free' symmetric projectors, We also introduce a shorthand notation for the Einstein Double Field Equations, Hereafter, hatted quantities contain generically the H-flux, but, like the circled ones, there is no apparent bare B-field in them. It is now straightforward to compute the variation in (3.1), In total, as counted sequently as, there is D 2 − (n −n) 2 number of independent on-shell relations, or EDFEs, in consistent with (3.13). Up to the completeness relations (2.7), (2.13), and the identities (3.17), the first and the seventh in (3.21), the first and the eighth, the third and the fifth, the third and the sixth, the second and the last, the fourth and the last imply respectively, Finally, the first and the last, the second and the fifth, the third and the last, the fourth and the fifth give In this way, all the components of (P EP ) AB vanish and the full EDFEs persist to be valid universally for arbitrary (n,n) backgrounds. Comment. From (3.17), off-shell relations hold among the components of the EDFEs, such that the full EDFEs are satisfied if 4 What if we keep (n,n) fixed once and for all ? As it is a outstandingly hard problem to construct an action principle for non-Riemannian gravity (c.f. [45,46,48] for recent proposals), we may ask if the DFT action restricted to a fixed (n,n) sector might serve as the desired target spacetime gravitational action, c.f. (4.21). In this section, seeking for the answer to this question, we reanalyze the variational principle of DFT, crucially keeping (n,n) fixed. 
To our surprise, we obtain a subtle discrepancy with the previous section where the most general variations of the DFTmetric were analyzed. We shall see that, when the values of (n,n) are kept fixed and nn = 0, not all the components of the EDFEs (3.26) are implied by the variational principle. Variational principle with fixed (n,n) We start with (3.1) which gives the variation of the general DFT action induced by the DFT-metric. With fixed (n,n), the variation of the DFT-metric therein should comprise the variations of the (n,n) component fields: The variational principle implies either from the second line of (4.6), or alternatively from the third line of (4.6), Although (4.7) and (4.8) appear seemingly different, they are -as should be-equivalent. In fact, they are both equivalent to which are, from (3.25), further equivalent to more concise ones, Appendix A carries our proof. The surprise which is manifest in (4.9) is that, when nn = 0 the variational principle with fixed (n,n) does not imply the full EDFEs (3.26): it does not constrain Y ρ i (P EP ) ρσȲ σ ı . However, as we have shown in the previous section, within the DFT frame they should vanish on-shell, Y ρ i (P EP ) ρσȲ σ ı = 0, and the full EDFEs should hold. We shall continue to discuss and conclude in the final section 5. To identify the significance of the α iī parameter, we focus on the induced transformation of H µν , Geometrically the deformation of 2Y Without loss of generality, utilizing the completeness relation, decompose the zero-eigenvector, 17) substitute this ansatz into (4.16), and acquire the conditions the coefficients should satisfy, This shows that there are in total (n − rank [α iī ]) + (n − rank [α iī ]) = n +n − 2 × rank [α iī ] number of zero-eigenvectors. Moreover, from the invariance, δH A A = 0 (3.6), we note that the deformation by the α iī parameter actually changes the type of the 'non-Riemannianity' as This essentially explains why α iī vanishes in (4.13) where the (n,n) component field variables are varied with fixed values of (n,n), or fixed 'non-Riemannianity'. It is intriguing to note that the deformation makes the DFT-metric always less non-Riemannian. 8 Non-Riemannian differential geometry as bookkeeping device This subsection is the last one before Conclusion, and is somewhat out of context. At first reading, readers may glimpse (4.21) in comparison with (4.20), and skip to the final section 5. While the various (n,n) non-Riemannian geometries are universally well described by DFT through O(D, D) covariant tensors -as summarized in Table 1 Here in this last subsection, we propose an undoubled non-Riemannian differential tool kit, such as covariant derivative and curvature, for an arbitrary (n,n) sector. It descends from the DFT geometry, or the so-called "semi-covariant formalism" [63], and generalizes the standard Riemannian geometry underlying In particular, it enables us to extend the Riemannian expression of (4.20) in a way 'continuously' to the generic (n,n) non-Riemannian case, (4.21) We commence our explanation. 
First of all, D µ is our proposed 'upper-indexed' covariant derivative: which preserves both the undoubled diffeomorphisms (2.23) and the GL(n)×GL(n) local symmetries (2.15) as is equipped with proper connections: for undoubled ordinary diffeomorphisms, and for GL(n) × GL(n) rotations, We also denote a diffeomorphism-only preserving covariant derivative by and write for (4.22) and (4.24), Taking care of both spacetime and GL(n)×GL(n) indices, D µ acts on general tensor densities in a standard manner: On the other hand, D µ cares only the spacetime indices and ignores any GL(n) × GL(n) indices, For example, we have explicitly (4.29) It is instructive to see that the far right resulting expressions in (4.29) are clearly covariant under both diffeomorphisms and GL(n) × GL(n) local rotations, as the ρ, σ indices therein are skew-symmetrized and also contracted with H µρ , (KH) ν σ . However, without the GL(n) × GL(n) connections, we note and this breaks the GL(n) × GL(n) local symmetry. Further, for the DFT-dilaton we should have where we have explicitly Because H µν and K ρσ are generically degenerate, the conventional relation (2.1) between the DFT-dilaton, d, and the string dilaton, φ, cannot hold. We stick to use the DFT-dilaton all the way. 9 The connections do the job as they transform properly under the diffeomorphisms (2.23), (2.25) and the GL(n) × GL(n) local rotations (2.15), (4.33) In particular, X i µ Ω µν λ ,Xī µ Ω µν λ , and H ρ[λ Ω µ]ν ρ are covariant tensors which might be viewed as "torsions". Finally, we define an upper-indexed Ricci curvature, which is diffeomorphism and GL(n) × GL(n) covariant, as it comes from the following commutator relation that is clearly also covariant, A scalar curvature follows naturally, which debuted in (4.21). Our covariant derivative is "compatible" with the (n,n) component fields in a generalized fashion: (4.37) 9 We tend to believe that the conventional string dilaton, φ, is an artifact of the (0, 0) Riemannian geometry and the DFT-dilaton, d, is more fundamental as being an O(D, D) singlet. the hatted new connection becomes Milne-shift covariant as well, in the sense of (2.16), (2.25), (2.26), where H λµν is a diffeomorphism covariant, GL(n) × GL(n) invariant, and Milne-shift invariant H-flux, The GL(n) × GL(n) connections (4.26) are inert to the addition of the H-flux-valued-torsion (4.38) as After all, in terms of a hatted covariant derivative, we can dismantle the DFT curvatures into a H-flux-free (circled) term and evidently H-flux-valued ones: where, as it should be obvious from our notation, we setŜ AB := (B −1 ) A C (B −1 ) B D S CD , and the circled quantities are all H-flux free: from Table 1 or [63,65], and, with (3.16), (4.45) While we organize the H-flux-valued parts in terms of the hatted covariant derivative, like (4.41), we have The only nontrivial distinction lies in As advertised in (4.21), we may further dismantleS (0) as well as (PSP ) µν into more elementary modules:S Comment 1. It is worth while to note and rewrite the 'kinetic term' of the DFT-dilaton in (4.21), Consequently, the proposed covariant derivative (4.25) and Ricci curvature (4.34) reduce to the standard covariant derivative and Ricci curvature in Riemannian geometry, The actual computation of the variations of the action, even with (n,n) fixed, are still powered by the semicovariant formalism, specifically (3.2). 
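Before the concluding section, the counting used throughout — D² − (n − n̄)² independent fluctuations, said to match the dimension of a coset of O(D,D) by the local Lorentz-type group of (2.14) — can be cross-checked numerically. The sketch below is not from the paper; it assumes the subgroup factors are O(t+n, s+n) × O(s+n̄, t+n̄), read off from the spin-group statement (2.14), which appears garbled in this extraction, so that pairing should be treated as an assumption (the dimension count is insensitive to how t and s are split between the two factors in any case).

```python
def dim_O(p, q):
    """Dimension of O(p, q): N(N-1)/2 with N = p + q."""
    N = p + q
    return N * (N - 1) // 2

def coset_dim(D, t, s, n, nbar):
    """dim O(D,D) minus dim [O(t+n, s+n) x O(s+nbar, t+nbar)], with t+s+n+nbar = D."""
    assert t + s + n + nbar == D
    return dim_O(D, D) - dim_O(t + n, s + n) - dim_O(s + nbar, t + nbar)

cases = [(10, 1, 9, 0, 0),   # (0,0) Riemannian, Spin(1,9) x Spin(9,1)
         (10, 0, 8, 1, 1),   # (1,1) non-Riemannian with the same spin group
         (10, 0, 8, 2, 0),   # a generic bosonic example
         (4, 1, 1, 1, 1)]
for (D, t, s, n, nbar) in cases:
    assert coset_dim(D, t, s, n, nbar) == D**2 - (n - nbar)**2
print("coset dimension equals D^2 - (n - nbar)^2 in all test cases")
```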
Conclusion The very gravitational theory string theory predicts may be the Double Field Theory with non-Riemannian surprises, rather than General Relativity based on Riemannian geometry. The underlying mathematical structure of DFT unifies supergravity with various non-Riemannian gravities including (stringy) Newton-Cartan geometry, ultra-relativistic Carroll geometry, and non-relativistic Gomis-Ooguri string theory. The non-Riemannian geometries of DFT can be classified by two non-negative integers, (n,n) [1]. We have analyzed with care the variational principle. We have shown that the most general infinitesimal variations of an arbitrary (n,n) DFT-metric have D 2 − (n −n) 2 number of degrees of freedom, which matches with the dimension of the underlying coset [11], O(t+n,s+n)×O(s+n,t+n) (3.14). Through action principle, these variations imply the full Einstein Double Field Equations (3.22), (3.24). However, nn number of them change the value of (n,n), i.e. the type of non-Riemannianity (4.19). Consequently, if we keep (n,n) fixed once and for all, the variational principle gets restricted and fails to reproduce the full EDFEs: the specific part, Y µ i (P EP ) µνȲ ν ı , does not have to vanish on-shell (4.9). 10 The EDFEs are supposed to arise as the string worldsheet beta-functions [98,99]. For the doubled-yetgauged string action (1.11) upon an arbitrarily chosen (n,n) background, the (n,n)-changing variations of the DFT-metric would correspond to marginal deformations. We must stress that these deformations could not be realized by merely varying the background component fields with fixed (n,n) (4.13), c.f. [52,54,56]. Nevertheless, it is natural to expect that nn number of Y µ i (P EP ) µνȲ ν ı arise as the corresponding betafunctions too. That is to say, at least for nn = 0, the quantum consistency with the worldsheet string theory seems to forbid us to fix (n,n) rigidly. We conclude that the various non-Riemannian gravities should be identified as different solution sectors of Double Field Theory rather than viewed as independent theories. Quantum consistency of the non-Riemannian geometries calls for thorough investigation, which may enlarge the scope of the string theory landscape far beyond Riemann. 10 As can be seen from (4.44), Y ρ i (P EP )ρσȲ σ ı contains a second order derivative of the DFT-dilaton along the Y µ i andȲī ν directions, i.e. Y µ iȲ ν ı ∂µ∂ν d . Consequently, with the completeness relation (2.7), the identities from (3.17), and (A.2), we note Similarly we get with (A.3), B Derivation of the non-Riemannian differential tool kit from DFT The non-Riemannian differential geometry we have proposed in section 4.3, in particular the hatted Ω connection (4.38), descends from the known covariant derivatives in the DFT semi-covariant formalism [63]: In order to convert these into undoubled ordinary covariant quantities -or to get rid of the bare B-field in them-we multiply B −1 as in (2.22) and write Here we set∇ andΓ ABC is a naturally induced -or 'twisted' [101], c.f. Alternative combination of (B.1), rather than (B.5), can give different type of covariant derivatives, However, these can act only on one-form fields, and appear not so useful.
5,406.8
2019-09-24T00:00:00.000
[ "Mathematics" ]
An overview of the second-previous memory effect in the strictly alternating donation game Game theory delves into the examination of strategic behaviour across diverse domains such as insurance, business, military, biology, and more, with the aim of deriving optimal decisions. Recent research focusing on the alteration of memory in the donation game with simultaneous iterated rounds has spurred our interest in investigating this phenomenon within the realm of the strictly alternating donation game. This study proposes a novel decision-making approach, utilizing the pre-previous unit instead of the most recent one. The scope narrows down to 16 employed strategies, each defined by finite two-state automata, while accounting for potential implementation errors in the computation of strategy payoffs. Dominant strategies are determined by assessing the interaction payoffs among strategy pairs. This article centers on the calculation of equilibrium points among heteroclinic three-cycles, as there is no single strategy that is unequivocally dominant. Among the strategy … Introduction Operations research, also known as the study of optimization strategies, is often referred to as a scientific method of decision-making [1]. The mental processes that result in choosing a course of action from a variety of options are referred to as decision-making. Every decision-making process ends with a final choice. In many real-world situations, choices must be made in a setting of conflict between two or more opposing parties, each of whose actions depends on the others. Such a competitive environment is referred to as a 'game' [2]. The game pits players against one another in a race to achieve goals. There are two primary categories of games: games of chance, like roulette, and games of strategy, like poker. Games of strategy will be studied in this paper. Finding the strategy that maximizes the gain and minimizes each player's loss is the key objective [3]. The history of game theory began with the solution of a two-player card game by Waldegrave using the minimax mixed strategy in 1713 [4]. Then Von Neumann and Morgenstern used game theory in economics [5]. John Nash also explored non-cooperative games [2]. Game theory concepts have been used to solve numerous applications in politics, business, biology, the military, and many other fields [6,7]. These studies urged researchers to study well-known games in greater depth. One of the most interesting games in evolutionary game theory is the so-called donation game [8,9], which can be applied to such fields as political science and environmental problems [10]. According to several studies, the donation game is a contest between two players. The two options available to each player are to defect (D) or cooperate (C). In the case of cooperation, the donor pays a cost (c) while the recipient gains a benefit (b). In the case of defection, there is no cost or benefit. If the two players cooperate, both will receive (b-c). If both defect, they will both receive (0). If the decisions differ, the cooperator will receive (-c) and the defector will receive (b) [11][12][13]. The payoff matrix that follows serves as a good representation of the donation game. It is assumed that b > c > 0. Taking this assumption into account, the payoff matrix is an instance of the well-known symmetric 2 × 2 game [9]. The studies cited above proceed under these traditional conditions.
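To make the setup concrete, here is a minimal sketch (not from the paper) of the one-shot donation game payoff matrix, using the standard identifications R = b − c, S = −c, T = b and P = 0 implied by the description above; the function and variable names are illustrative only.

```python
from fractions import Fraction

def donation_payoffs(b, c):
    """One-shot donation game: cooperating costs the donor c and gives the recipient b."""
    assert b > c > 0
    R = b - c   # both cooperate
    S = -c      # cooperate against a defector
    T = b       # defect against a cooperator
    P = 0       # both defect
    return R, S, T, P

R, S, T, P = donation_payoffs(b=Fraction(2), c=Fraction(1))

# Row player's payoff matrix, rows and columns ordered (C, D).
payoff_matrix = [[R, S],
                 [T, P]]

assert T > R > P > S        # prisoner's-dilemma-type ordering
assert T + S == R + P       # holds with equality in the donation game
print(payoff_matrix)
```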
Defection is the best choice for players if only one round is played, to avoid the lowest payoff (S). As a result, both players will receive (P) instead of the payoff of mutual cooperation (R). On the other hand, defection is not the preferable strategy if the donation game is repeated several times (repeated donation game). Each player will develop his strategy based on previous rounds to increase his payoff. So, the decisions taken by every player will affect the other player's reaction in the coming rounds, which in turn affects the player's payoff [14]. This pushes players toward mutual cooperation rather than mutual defection [15,16]. Repeated game research has a lengthy history. The repeated donation game has been the subject of many prior studies, most of which have focused on creating a new state utilizing the results of the previous unit [17][18][19][20][21][22][23]. One of the most frequently cited issues with memorizing is ignorance or delay of the preceding round [3]; this delay occurs in real-time situations when rounds occur at the same time and the decision of some rounds isn't known. In 2016, EL-Seidy et al [24,25] investigated the process of creating a new state from the pre-previous one in a simultaneous iterated prisoner's dilemma game. So, the present research explores the generation of the new state from the second-previous unit (a unit consists of two sequential rounds) in an alternating repeated donation game. It is possible to categorize the game as either a simultaneous or an alternating game. The two players may participate in the same round in the simultaneous game. In the alternating game, every player plays in a separate round, as in chess [26,27]. This research will investigate the alternating repeated donation game. The two players in an alternating game are not permitted to decide during the same round. Each player decides alone in a separate round and the other player reacts alone in a subsequent round [18]. In any round, the person who makes the decision is referred to as the leader (donor), and the other player is referred to as the recipient. Each unit consists of two rounds. The two players have an equal chance of being the leader in the alternating game [28,29]. This game is known as the strictly alternating game when the two players switch roles as the recipient for each round [30]. The random alternating game depends on the irregular flipping of the leader role [31]. In this work, we focus on the strictly alternating game. The two available selections for the leader are C and D. The leader and recipient gain a and b, respectively, if the leader chooses C. But if the leader selects D, the leader and the recipient gain c and d, respectively. It is assumed that [32]. In the same unit, the two players each receive a + b if both play C, while both receive c + d when both play D. If the two players take opposite decisions within a unit, the defector and the cooperator receive c + b and a + d, respectively. The previous outcomes reproduce the outcomes of the simultaneous donation game through the equations given in [32], in particular T + S = R + P. These equations, together with the inequalities (3), imply that T > R > P > S and S + T < 2R. These are the simultaneous donation game conditions. The behaviour of strategies will be studied using domination, which shows that there is no absolutely dominant strategy, so mixed strategies are studied [33].
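The reduction of per-unit payoffs to the simultaneous-game quantities can be made explicit with a short sketch. The defining equations cited to [32] are not reproduced in this extraction, so the identification of T with the defector's unit total and S with the cooperator's (c + b and a + d, respectively) is an assumption consistent with the definitions of a, b, c, d above; note that T + S = R + P holds regardless, since every assignment sums to a + b + c + d.

```python
def unit_payoffs(a, b, c, d):
    """Per-unit (two-round) payoffs in the strictly alternating game.

    a, b: payoffs of the leader and the recipient when the leader cooperates.
    c, d: payoffs of the leader and the recipient when the leader defects.
    """
    R = a + b   # both players cooperate in the round they lead
    P = c + d   # both defect
    T = c + b   # the defector leads with D (gets c) and then receives b as recipient
    S = a + d   # the cooperator leads with C (gets a) and then receives d as recipient
    return R, S, T, P

R, S, T, P = unit_payoffs(a=-1, b=3, c=0, d=0)   # donation-game-like numbers, for illustration
assert T + S == R + P == (-1) + 3 + 0 + 0        # the identity T + S = R + P quoted above
```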
The study of equilibrium between more than three strategies is difficult and time-intensive.So, heteroclinic three-cycles will only be studied.Three different strategies-A, B, and C-are known as heteroclinic three-cycles, characterized by A invading B, B invading C, and C returning to invading A [34,35].Every heteroclinic threecycle equilibrium point will be determined using values (T = 4, R = 3, P = 1, S = 0) (these values are used because Axelrods values dont satisfy the equation (T + S = R + P) [36].Game dynamics will be used to implement this.The use of mixed techniques across all heteroclinic three cycles was studied. This paper attempts to study the behaviour of strategies using the second-previous unit in strictly alternating repeated donation game under the noise effect, which clarifies that defective strategies have superior performance and crush cooperative ones.Strategy S 2 shows the best performance because it satisfies the behaviour of rival strategies [37,38] with spiteful behaviour [39,40] which is not defeated by any other strategy.Strategy S 0 and S 8 show good performance but are defeated by some strategies.Unfortunately, the partner strategy S 10 shows moderate performance in this case study.Interestingly, some moderate performance memory-one strategies perform better when using the second-previous unit instead of the most recent one.But there is no absolutely dominating strategy, so the heteroclinic three cycles are also studied.Strategies S 10 and S 11 appear frequently in the majority of heteroclinic three-cycles. States generation technique This paper's technique is based on using the second-previous unit (two consecutive rounds) decisions instead of the immediately previous one to generate the new round.Thus, the fourth round will be generated using the decisions of the first unit (the first round of each player) and the fifth round will be generated state using the second unit (the first round of the second player with the second round of the first player).The second-previous unit is used since the immediately preceding round was unknown.The reward is obtained by averaging the payouts from each iteration of this process, which is performed continuously. Each player can employ an endless number of strategies.It will be difficult to study all of them.We will only look at calculation-saving strategies developed by two-state automata.To transform a current state into a new one, a two-state automaton is utilized.The two edges C and D that exit from each node make up the two-state automata.An additional arrow is included to show how the first state is imposed.There are only sixteen strategies that a two-state automaton may create. 
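The sixteen two-state-automaton strategies can be enumerated directly. A minimal sketch follows (not the authors' code); it assumes the digital convention spelled out in the next paragraph — the quadruple lists the response (1 = C, 0 = D) to the outcomes (CC, CD, DC, DD) of the unit being consulted, read as a binary number with the most significant bit first — which reproduces the examples S 10 = (1, 0, 1, 0), S 8 = (1, 0, 0, 0) and S 11 = (1, 0, 1, 1) given below.

```python
OUTCOMES = ("CC", "CD", "DC", "DD")   # focal player's move listed first; payoffs R, S, T, P

def quadruple(k):
    """Digital form of strategy S_k: response to each outcome, most significant bit first."""
    return tuple((k >> (3 - i)) & 1 for i in range(4))

def next_move(k, outcome):
    """Move prescribed by S_k after the given outcome of the consulted unit."""
    return "C" if quadruple(k)[OUTCOMES.index(outcome)] else "D"

strategies = {k: quadruple(k) for k in range(16)}
assert strategies[10] == (1, 0, 1, 0)   # Tit-For-Tat
assert strategies[8]  == (1, 0, 0, 0)   # Grim
assert strategies[11] == (1, 0, 1, 1)
assert strategies[2]  == (0, 0, 1, 0)   # the rival/spiteful strategy discussed later
assert next_move(10, "CD") == "D"       # Tit-For-Tat defects after being exploited
```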
Each units alternative outcomes are the pairs (C, C), (C, D), (D, C), and (D, D).These pairs are made up of the decisions taken by the players and result in these payoffs R, S, T, and P, respectively.In the binary system, the strategies used are represented as quadruples of 0s and 1s.Each digit depicts the competitor's response to one of the four potential outputs of the used round (CC, CD, DC, and DD).The subsequent move for the player will either be D for a value of 0 or C for a value of 1.The digital form of the Tit-For-Tat strategy S 10 is (1, 0, 1, 0) as shown in figure 1 while the grim strategy S 8 is (1, 0, 0, 0), and the tweedled strategy S 11 is (1, 0, 1, 1).As a result, there are sixteen distinct strategies denoted by S 0 , S 1 , S 15 .The contention between S 8 versus S 11 will be discussed as an example to illustrate the memory-two states generation. Firstly we will illustrate the difference between memory-one and memory-two in states generation. Following the imposition of the first three rounds, new rounds will be produced.In the first sequence, the S 8 -player is assumed to play (C) in his first two rounds, and the S 11 -player is assumed to play (D) in his first round, then the new states are generated according to the states generation technique described previously, resulting in player S 8 playing (D) in each round and player S 11 switches between (C) and (D).The repetition period of this sequence is four rounds with payoffs (T, T, P, P).The value of the average payoff for this sequence will be (P+T)/2.Thus, any sequence that has a payoff (P+T)/2 will be called approach B. The second sequence has the payoff of approach B. In the third sequence, the two players are assumed to play (C) in the first three rounds, and then the new states are generated according to the states generation technique described previously, resulting in players playing (C) in each round.The repetition period of this sequence is one unit.This produces an average payoff with a value of (R) and is called approach A. The fourth sequence has the payoff of approach B. The fifth sequence has the payoff of approach B. The sixth sequence has the payoff of approach B. The seventh sequence has the payoff of approach B. The eighth sequence has the payoff of approach B. The appeared approaches and their corresponding payoffs are shown in table 1.The payoff may be affected by some types of errors.This study will only look at implementation flaws. Perturbed payoff (noise effect) If any player makes an incorrect movement or decision (plays C when the transition rule specifies D or plays D when the transition rule specifies C) which contradicts the player strategy transition rule (implementation error), this may affect the generation of the new states and in turn, may affect the player payoff.The transition rule will be represented in a quadruple, like how strategies are represented digitally, but zero is replaced by ò and one is replaced by 1 − ò and ò represents the probability of making an erroneous movement.These numbers represent the probabilities to play C after R, S, T, and P. To illustrate the effect of erroneous movement, approach A will be checked when an error occurs. Every decision in the repetition period will be changed individually, and the generation of the new states will be tracked to specify the approach and payoff. If the first decision of the repetition period (red D) changed, approach A will be changed to approach B. 
Also, if the second element of the repetition period (red D) changes, approach A will change to approach B. Consider a scenario in which two players compete using the transition rules P = (p 1 , p 2 , p 3 , p 4 ) and Q = (q 1 , q 2 , q 3 , q 4 ), respectively.Pi and qi reflect the chance of playing C following the outcome of the preprevious unit and range in value from 0 to 1.This results in a Markov process in which matrix (7) determines the transitions between the four possible states R, S, T, and P. R S T P R S T P p q p q p q p q p q p q p q p q p q p q p q p q p q p q p q p q If the strategy cube's interior contains p and q, then this stochastic matrix's entries must be strictly positive, consequently, there is a unique stationary distribution π = (π 1 , π 2 , π 3 , π 4 ) where the probability of being in the state i in the n-th round is P i n ( ) and when n → ∞ it converges to π for (i = 1,2,3,4).The sum of positive components π is one.They represent R, S, T, and P asymptotic frequencies.π is the left eigenvector of matrix (7) where eigenvalue 1 is π = (π 1 , π 2 , π 3 , π 4 ).Equation (8) provides the payoff for player P playing against player Q.The payoff is unaffected by the first imposed three rounds. The payoff can be obtained for any level of noise ò > 0 for a player with transition rule S i against another player with transition rule S j .But, if the limit value of the payoff is computed when ò → 0 the stochastic matrix (7) will be irreducible containing many zeroes because p i and q i are zeroes and ones.This makes the vector π not uniquely defined.This pushed us to directly calculate this vector for every contention by mutations. There are eight sequences with two approaches in the contention between S 8 against S 11 .Approach A arises in one sequence when the two players play C in the imposed rounds, while B arises in the other seven sequences.Every decision in the repetition period will be tested under probable error (playing C instead of D or vice versa).Firstly, Approach A has only two rounds on its repetition period, if one of them changed, Approach A will be converted to Approach B. Secondly, Approach B has four rounds on its repetition period, and all mutations will not change the approach if perturbation occurred.The contention of S 8 against S 11 will be as follows.The following will be the relevant transition matrix between different approaches in the contention between S 8 versus S 11 . Every element in the previous matrix represents the probability that each approach (row approaches) may be changed to another approach (column approaches) or remain the same when a wrong decision occurs.The first row in this matrix represents the probabilities of mutations of approach A. By studying the two possible mutations in approach A, it is evident that when a wrong decision occurs in approach A, approach A will be changed to (approach B) in all possible mutations with a probability of 100%, so the value of the element in the intersection between the first row (approach A) and second column (approach B) in the previous matrix is one and the other element in the same row is zero.Approach B (in the second row) has four possible mutations, approach B will not be converted to approach A in all mutations, with probability zero in the first column and probability one in the second column.Then, the corresponding stationary distribution of contention can be calculated using the following equation. 
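The "following equation" referred to here is not reproduced in this extraction, but the computation it describes — extracting the stationary distribution of the approach-level mutation matrix and averaging the approach payoffs — can be sketched as follows. This is an illustrative snippet, not the authors' code; the same eigenvector routine applies to the full four-state matrix (7), whose explicit entries are garbled above.

```python
import numpy as np

def stationary(M):
    """Left eigenvector of the row-stochastic matrix M for eigenvalue 1, normalised to sum 1."""
    w, v = np.linalg.eig(M.T)
    pi = np.abs(np.real(v[:, np.argmin(np.abs(w - 1.0))]))
    return pi / pi.sum()

# Approach-level mutation matrix for the S_8 vs S_11 contention described above:
# a single implementation error always sends approach A to B, and B always stays in B.
M_approach = np.array([[0.0, 1.0],
                       [0.0, 1.0]])

T, R, P, S = 4, 3, 1, 0                           # the specific values used in the text
approach_payoff = np.array([R, (P + T) / 2])      # approach A pays R, approach B pays (P+T)/2

pi = stationary(M_approach)
print(pi)                                                              # (0, 1): regime B
print("average payoff of S_8 against S_11:", pi @ approach_payoff)    # 2.5 = (P+T)/2
```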
The corresponding stationary distribution of contention between S 8 and S 11 is (0, 1).This means that asymptotically, an iterated game between S 8 player and S 11 -player is in all the time in regime B. The S 8 -player receives an average payoff. This procedure will be repeated for every contention between every two strategies and then put into table 2 which represents the conflict payoff between any two strategies used in this paper.Strategies behaviour can be studied using domination.To study the behaviour of any two strategies (S i × S j ) with each other, the four jointed entries between the two strategies in table 2 must be extracted and reused in the next matrix. S S S S a a a a 14 S i and S j are equivalent if a ii = a ji and a ij = a jj .But S i dominates S j when one of these two inequalities a ii a ji and a ij a jj is attained.Table 3 is created by substituting specific values (T = 4, R = 3, P = 1, S = 0) in table 2. Table 4 shows the behaviour of the strategies using domination when these specific values are substituted. According to table 4, all strategies can be invaded, except for Strategy S 2 , which cannot outcompete strategies S 4 , S 7 , S 10 , and S 11 .This indicates that there is no absolutely evolutionary stable strategy that invades all other strategies.So, this research stimulates players to use several pure strategies at different rates in the same game (mixed strategy). Similar to the Rock-Scissors-Paper game, if there are three strategies, S i , S j , and S k , where S i invades S j , S j invades S k , and S k then returns to invade S i , this is referred to as a heteroclinic three-cycle.The values in the intersection between these three strategies must be extracted from table 3 and reused in a new matrix to compute the equilibrium point between any heteroclinic three-cycles and to determine the type of this cycle.Below is the construction of the payoff matrix for the interaction of strategies S 0 , S 15 and S 10 .A system of linear equations must be constructed from the values of this matrix. S S S S S S The preceding system and the following equation will be solved in order to identify the equilibrium point. The values of the diagonal entries of the preceding matrix must be changed to zeros to obtain the kind of the heteroclinic three-cycles (attractors, center, and repellors) using the matrix determinant.The diagonal entry of each column will be made zero by subtracting the value of this column's diagonal entry from each entry in the same column and then getting the determinant of the matrix.By applying this procedure to matrix (14) we will subtract (1) from each entry of the first column, (3) from each entry of the second column, and (2) from the third column to construct matrix (20).The cycle type can be categorized using the determinant of the matrix (20).If the determinant equals zero, the cycle type is center and if the determinant is less than zero, the cycle type is attractor otherwise the cycle type is repellor. -- The determinant of matrix (20) is zero.As a result, the heteroclinic three-cycle has a centre type.For every three heteroclinic three-cycles, this procedure will be run again to obtain all equilibrium points.Table 5 will contain these equilibrium points. Table 1.Approaches and their corresponding payoffs. 
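The two computations just described — the mixed-strategy equilibrium of a heteroclinic three-cycle and its classification by the determinant of the column-wise diagonal-subtracted matrix — can be sketched as follows. This is an illustrative snippet rather than the authors' code: it assumes the "game dynamics" meant above is the standard replicator dynamics, whose interior rest point satisfies (Ax)_1 = (Ax)_2 = (Ax)_3 with x summing to one, and the 3 × 3 matrix used in the demo is hypothetical (the entries of table 3 are not reproduced here).

```python
import numpy as np

def interior_equilibrium(A):
    """Interior rest point x (x_i > 0, sum 1) of the three-strategy game with payoff matrix A,
    from (Ax)_1 = (Ax)_2 = (Ax)_3 together with x_1 + x_2 + x_3 = 1."""
    M = np.vstack([A[0] - A[1], A[1] - A[2], np.ones(3)])
    return np.linalg.solve(M, np.array([0.0, 0.0, 1.0]))

def cycle_type(A, tol=1e-9):
    """Classify a heteroclinic three-cycle: subtract each column's diagonal entry from every
    entry of that column, then inspect the sign of the determinant."""
    B = A - np.diag(A).reshape(1, 3)
    det = np.linalg.det(B)
    if abs(det) < tol:
        return "center"
    return "attractor" if det < 0 else "repellor"

# Hypothetical payoff matrix for three strategies forming a cycle (illustration only).
A = np.array([[1.0, 4.0, 0.0],
              [0.0, 3.0, 4.0],
              [2.5, 1.0, 2.0]])

print(interior_equilibrium(A))   # approx. (0.444, 0.356, 0.200)
print(cycle_type(A))             # "attractor" for this example
```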
Results and discussion This research studies the impact of each strategy using a different technique to generate new units with a probability of implementation error.General conditions (table 2) and specific values (T = 4, R = 3, P = 1, S = 0) (table 3) are used to determine which strategy will dominate others.Unusually, strategies behave in the same way regardless of whether general conditions or the other used values. It is obvious from the results of applying domination between all strategies that no strategy could defeat the rival strategy S 2 , and it could also overcome eleven strategies including the All-D S 0 , the Grim S 8 , and the Win-Stay, Lose-Shift (WSLS) S 9 strategies.Therefore, the spitefull strategy S 2 forced us to say that it has a powerful performance in this new rounds generation technique.There are further strategies that have a superior performance like S 0 , S 4 , and S 8 which crush eleven strategies at least and are only outcompeted by two other strategies. Figure 2 and domination clarify that weak strategies allow many other strategies to invade them and are unable to defeat a large number of strategies.Unexpectedly, Strategy S 6 performs satisfactorily, outperforming five strategies and only five other strategies outperform S 6 .Poor strategies S 13 , S 14 and S 15 only succeeded in defeating three strategies at most and were defeated by eleven other strategies.Given these findings, we can say that these cooperative strategies cannot hold against defective ones.Table 4 results pointed out that the two strong strategies S 0 , and S 8 were invaded by only the same two strategies S 2 , and S 10 .The Tit-For-Tat strategy S 10 cannot invade any strategy except only the three strong strategies S 0 , S 1 and S 8 .On the other hand, S 10 was invaded by four cooperative strategies S 7 , S 11 , S 14 , and S 15 .Therefore, in this study, the partner strategy S 10 can be considered a moderately well-performing strategy. Table 5 specified that strategies S 10 and S 11 are the most frequent strategies in heteroclinic three-cycles because they appeared in ten and eleven out of twenty-two cycles respectively.This occurs for S 10 because it attacks the strong strategies S 0 and S 8 which in turn attack the majority of the sixteen used strategies and S 11 beats four strategies while being outcompeted by eight others.Also, each of S 14 and S 15 takes part in several heteroclinic cycles which mostly involve S 10 or S 11 .This happens because these strategies attack strategies S 10 and S 11 .According to the findings of this study, any player who chooses to use mixed strategies will mostly use strategies S 10 or S 11 . Conclusion The primary objective of this study is to investigate the behaviour of strategies in the strictly alternating repeated donation game wherein the two players switch the rule of the leader and donor in each round under the influence of memory changes.Research focuses on utilizing a set of sixteen finite two-state automata strategies to provide an accurate characterization of the strategies that were examined in the study. 
Instead of relying on the most recent unit that is commonly employed in the generation of subsequent rounds, a novel approach is employed, replacing the most recent unit by the pre-previous one to determine the new states. This choice is made due to potential delays or inadequate information associated with the most recent unit, commonly encountered in real-time scenarios. The existence of implementation errors contributes to the modification of state generation; this factor is considered when evaluating perturbed payoffs to enhance the accuracy of the obtained results. Throughout this research, starting with the creation of states and subsequently employing the domination technique, the strategy labelled S 2 emerges as a particularly robust contender. Specifically, strategy S 2 (0, 0, 1, 0) demonstrates a non-cooperative, spiteful behaviour, cooperating only if it can deceive its opponent, thereby falling under the category of a defective rival strategy. In contrast, out of all the tactics examined, strategy S 14 performs the worst, as illustrated in figure 2. This cooperative strategy (S 14 ), with the configuration (1, 1, 1, 0), opts for defection only when both players defect in the current unit of generation. Notably, strategies S 10 and S 11 emerge as the most prevalent tactics within the context of heteroclinic cycles because they appear in the majority of heteroclinic three-cycles, along with strategies S 14 and S 15 , as shown in table 5. Data availability statement All data that support the findings of this study are included within the article (and any supplementary files). a) Approach A: it has two mutations because its repetition period is one unit. • If S 8 plays D instead of C: A → B. • If S 11 plays D instead of C: A → B. b) Approach B: it has four mutations because its repetition period is two units. • If S 8 plays C instead of D when S 11 plays C: B → B. • If S 8 plays C instead of D when S 11 plays D: B → B. • If S 11 plays D instead of C when S 8 plays D: B → B. • If S 11 plays C instead of D when S 8 plays D: B → B. Figure 2. The payoff of every strategy against itself and all other strategies. The S 2 strategy is included along with three other different strategies in each of the five sub-figures. Each subfigure includes the effective strategy S 2 to highlight how it behaves better than other strategies. (a) Contains strategy S 2 besides defective strategies S 0 , S 4 , and S 8 , which have the highest payoffs against most strategies but are unable to outcompete strategy S 2 because they are not stable and have high and rapid payoff fluctuations. (e) Includes weak strategies S 13 , S 14 , and S 15 , which gain the lowest payoffs, and strategy S 2 easily defeats them. (b-d) Involve the rest of the strategies, which gain moderate payoffs, but S 2 's payoff stability gives it the upper hand.
6,195
2024-02-02T00:00:00.000
[ "Economics" ]
Cholinergic neurodegeneration and cholesterol metabolism dysregulation by constitutive p75NTR signaling in the p75exonIII-KO mice Degeneration of basal forebrain cholinergic neurons (BFCNs) is a hallmark of Alzheimer’s disease (AD). However, few mouse models of AD recapitulate the neurodegeneration of the cholinergic system. The p75 neurotrophin receptor, p75NTR, has been associated with the degeneration of BFCNs in AD. The senescence-accelerated mouse prone number 8 (SAMP8) is a well-accepted model of accelerated and pathological aging. To gain a better understanding of the role of p75NTR in the basal forebrain during aging, we generated a new mouse line, the SAMP8-p75exonIII−/−. Deletion of p75NTR in the SAMP8 background induces an increase in the number of BFCNs at birth, followed by a rapid decline during aging compared to the C57/BL6 background. This decrease in the number of BFCNs correlates with a worsening in the Y-maze memory test at 6 months in the SAMP8-p75exonIII−/−. We found that SAMP8-p75exonIII−/− and C57/BL6-p75exonIII−/− mice expressed constitutively a short isoform of p75NTR that correlates with an upregulation of the protein levels of SREBP2 and its targets, HMGCR and LDLR, in the BF of both SAMP8-p75exonIII−/− and C57/BL6-p75exonIII−/− mice. As the neurodegeneration of the cholinergic system and the dysregulation of cholesterol metabolism are implicated in AD, we postulate that the generated SAMP8-p75exonIII−/− mouse strain might constitute a good model to study long-term cholinergic neurodegeneration in the CNS. In addition, our results support the role of p75NTR signaling in cholesterol biosynthesis regulation. Introduction Degeneration of basal forebrain cholinergic neurons (BFCNs) is a hallmark of Alzheimer's disease (AD).BFCNs regulate a wide array of brain functions, including learning, memory, and attention (Niewiadomska et al., 2011;Ballinger et al., 2016;Allaway and Machold, 2017).BFCNs release acetylcholine (ACh) that plays an important role in memory function and it has been implicated in aging-related dementia.Loss of BFCNs is playing a significant role in cognitive Comaposada-Baró et al. 10.3389/fnmol.2023.1237458Frontiers in Molecular Neuroscience 02 frontiersin.orgdysfunction in AD (Mufson et al., 2003;Counts and Mufson, 2005;Marcello et al., 2012).Recent reports suggested that degeneration of cholinergic neurons precedes the cortical neurodegeneration observed in AD patients (Schliebs and Arendt, 2011;Fernández-Cabello et al., 2020).However, the degeneration mechanism of the cholinergic system is still unknown in part due to a lack of good animal models. 
Neurotrophins regulate the survival of BFCNs through the activation of their receptors, p75 NTR and Trks (Boissiere et al., 1997;Boskovic et al., 2019).p75 neurotrophin receptor, p75 NTR , is highly expressed in the BFCNs during all stages of their development.The normal function of p75 NTR within these neurons in the adult brain remains unclear (Coulson, 2006;Coulson et al., 2009;Qian et al., 2019).p75 NTR is a member of the tumor necrosis factor (TNF) receptor superfamily that regulates key biological processes in the nervous system (Ibáñez and Simi, 2012;Bothwell, 2014;Meeker and Williams, 2014) and plays several functions during the development and in the adult nervous system (Kraemer et al., 2014b).p75 NTR is best known for its role in programmed neuronal death during embryonic development or in response to injury (Ibáñez and Simi, 2012).However, it also regulates axonal growth and synaptic plasticity, as well as cell proliferation, migration, and survival (Kraemer et al., 2014b).These functions can be elicited by the association of p75 NTR with different ligands and co-receptors and the activation of various signaling pathways (Roux and Barker, 2002;Vilar, 2017).The role of p75 NTR in the survival of BFCNs has been studied in several models.In general, p75 NTR knock-out mice models showed that the number of choline acetyltransferase (ChAT)-positive neurons in the BF is increased at birth (Yeo et al., 1997;Van der Zee and Hagg, 1998;Ward and Hagg, 1999;Barrett et al., 2010;Boskovic et al., 2014), suggesting that during embryonic development p75 NTR might cause apoptosis in these neurons. The senescence-accelerated mouse (SAM) strains are mouse models used for investigating the biochemical and physiological basis of pathological aging (Chiba et al., 2009;Akiguchi et al., 2017).The SAM models were established through phenotypic selection from a common genetic pool of AKR/J mice (Chiba et al., 2009;Liu et al., 2020).Among its prone sub-strains, the SAMP8 (SAM Prone 8) shows accelerated aging and features typical of age-related cognitive impairments, like increased oxidative stress, memory impairment, an increase of phospho-tau and soluble Amyloid beta peptide, Aβ (Morley, 2002).At the same time, the SAM Resistant (SAMR) mouse models were generated as aging-resistant controls. Here we describe the generation of a new mouse model, the SAMP8-p75 exonIII−/− mouse, which exhibits BFCN neurodegeneration, cognitive deficiencies and cholesterol biosynthesis genes upregulation. A previous work showed a reduction in adult hippocampal neurogenesis in the C57/BL6-p75 exonIII−/− strain (Catts et al., 2008) that was related to a reduction in the width of the hippocampal dentate gyrus granule cell layer, indicating a role of the p75 NTR in neurogenesis.We decided to characterize adult hippocampal neurogenesis in our newly generated model.In the case of the SAMP8 strain, there were no significant differences in the width of the granule cell layer (at 2 months SAMP8-p75 exonIII+/+ 49.2 ± 2.5 μm, N = 5 vs. 
SAMP8-p75 exonIII−/− 46.2 ± 1.9 μm, N = 4; at 10 months SAMP8-p75 exonIII+/+ 50.2 ± 9.2 μm N = 4 vs.SAMP8-p75 exonIII−/− 49.5 ± 0.3 μm N = 4), however, quantification of the number of BrdU + cells showed that in the SAMP8-p75 exonIII−/− mice at the age of 2 months there was a slight decrease in the proliferative activity of the neural stem cell niche that did not reach statistical significance (Figures 1F-I).In addition, there was a reduction in the number of BrdU + /DCX + newly born neurons in the subgranular zone (SGZ) of the dentate gyrus of the hippocampus during aging (from 2 to 10 months) and in the total number of immature DCX + neurons irrespective of the mouse genotype (Figures 1H,I), supporting previous findings indicating that in aged SAMP8 mice, neurogenesis is impaired (Díaz-Moreno et al., 2013). Altered number of BFCNs in the SAMP8-p75 −/− mice As p75 NTR is highly expressed in the BFCNs, we focused on the study of this region (Figure 2).p75 NTR immunohistochemistry using an antibody against the intracellular domain was undertaken on BF sections.We quantified approximately 40% more ChAT + neurons in SAMP8-p75 exonIII−/− than SAMP8-p75 exonIII+/+ animals in the basal forebrain [medial septum (MS) and vertical diagonal band (VDB)] (Figures 2A-C).The number of the BFCNs in the SAMP8-p75 exonIII−/− is highest at 2 months of age, nevertheless, at 10 months of age, the two mouse genotypes showed the same number of BFCNs (Figure 2C), suggesting that the increased number of BFCNs in the SAMP8-p75 exonIII−/− degenerate or become ChAT-negative during this time interval. To study if the accelerated decrease in the number of BFCNs in the SAMP8-p75 exonIII−/− is due to the mouse background strain, we compared the number of BFCNs in the C57/BL6-p75 exonIII−/− mouse strain (Figure 3).As the SAMP8 mice have accelerated aging, we quantified the number of BFCNs in C57/BL6-p75 exonIII+/+ and C57/ BL6-p75 exonIII−/− until geriatric ages (ca.30 months old).We found that the number of BFCNs at birth is higher in the C57/BL6-p75 exonIII−/− similar to the SAMP8 background and to previous reports (Figures 3A,B) (Naumann et al., 2002), and the number of BFCNs slowly decreased and approached the same number than C57/ BL6-p75 exonIII+/+ at around 12 months old and reaching lower levels at 24 months of age (Figure 3B).Altogether this indicates that the deletion of p75 NTR induces an increase in the number of BFCNs at birth, but in the long term, there is a decrease in the survival of BFCNs independent of the mouse strain (SAMP8 or C57/BL6).The loss of BFCNs in the SAMP8 background in comparison to the C57/BL6 background could be mediated by the specific characteristics of this accelerated aging mouse model strain. 
Deletion of p75 NTR in the SAMP8 mice has an impact on behavior To determine whether p75 NTR deficiency in the SAMP8 mice affected anxiety and cognitive ability, a cohort of mice were subjected to the following behavioral tests: open-field (OF), Y-maze (YM) and novel object recognition (NOR).In order to assess the role of BFCNs, mice were tested at two different ages, 2 and 6 months.In the OF test, mice are tested for anxiety-related parameters measured as time spent in the center of the box.As animals display a natural aversion to brightly open areas (central zone) but the also have a dive to explore new environments (Kovacsics and Gould, 2010).A significant difference in the % of time spent in the center in the OF test was observed at both 2 and 6 months of age between SAMP8-p75 exonIII−/− and SAMP8-p75 exonIII+/+ animals (Figures 4A,B), showing an increase of time the SAMP8-p75 exonIII−/− animals.The behavior of SAMP8-p75 exonIII−/− in the OF is similar to the control mice SAMR1, indicating that the deletion of p75 NTR in the SAMP8 rescues the behavior of the SAMP8-p75 exonIII+/+ mice in this test.The SAMR1 has more BFCNs than the SAMP8 mice at this age and is more similar in number to the SAMP8-p75 exonIII−/− (Figure 2C), suggesting that the increased cholinergic innervation from the BFCNs may play a role in this phenotype.During the YM test, mice are positioned within the central point of a maze comprising three opaque arms, arranged in the configuration of the capital letter Y. Mice are allowed to freely explore their surroundings and due to their innate inclination to discover novel environments, the rodents exhibit a preference for exploring new arms of the maze.The primary objective of this test is to test their episodic memory, which is quantified by observing how often a mouse selects a previously unexplored arm of the maze (considered as the correct behavior).This parameter is referred to as SAB (spontaneous alternation behavior), a measure of the mouse's tendency to alternate between arms without any external cues (Kraeuter et al., 2019).At 2 months Frontiers in Molecular Neuroscience 06 frontiersin.org of age, SAMP8-p75 exonIII+/+ and SAMP8-p75 exonIII−/− mice were not behaving significantly differently (Figure 4C).However, at 6 months of age, the SAMP8-p75 exonIII−/− performed significantly worse than at 2 months of age, quantified with a lower % of correct alternations suggesting a worsening of these cognitive abilities.The decrease in the number of BFCNs from 2 to 6 months (around 50%) in SAMP8-p75 exonIII−/− may be responsible for this worsening in cognitive ability.This difference is not observed in the SAMP8-p75 exonIII+/+ mice during aging from 2 to 6 months (Figure 4C), which did not display a reduction in the number of BFCNs during this time frame.In the NOR test, the preference for exploring new objects over familiar ones evaluates short and long memory.The test requires a training phase, where familiar objects are presented.This study specifically focused on evaluating long-term memory as the test phase is performed 24 h after the training phase.When tested in NOR (Figure 4D) there were no differences between SAMP8-p75 exonIII+/+ and SAMP8-p75 exonIII−/− in the time exploring the novel object, but these two genotypes clearly performed worse than the control mice SAMR1 (Figure 4D).The decrease in the number of cholinergic neurons may impact on the release of the acetylcholine neurotransmitter.We measured the levels of acetylcholine in the different mice genotypes at 
the two ages of the study (Figure 4E). As shown, acetylcholine levels decrease in the SAMP8-p75 exonIII−/− from 2 to 6 months but not in the other genotypes (SAMR1 or SAMP8-p75 exonIII+/+ ), suggesting that the decrease in the number of cholinergic neurons impacts the synthesis of acetylcholine and, in turn, behavior. Altogether, the deletion of p75 NTR in the pathological strain SAMP8 induces differences in some of the studied tests, probably depending on the neuronal circuit involved. SAMP8-p75 exonIII−/− constitutively expresses a short isoform of p75 NTR The phenotype described in the SAMP8-p75 exonIII−/− suggests a deleterious effect of the deletion of p75 NTR on the survival of the BFCNs. Although these data suggest a pro-survival role of p75 NTR , it has been described that, depending on the mouse strain, the p75 exonIII−/− mice express a short isoform of p75 NTR that may have a negative impact on the neurons where it is expressed (von Schack et al., 2001). We analyzed this possibility in the SAMP8 background and found that in the SAMP8-p75 exonIII−/− there was a significant signal in the BFCNs when stained using a specific antibody against the intracellular domain (ICD) of p75 NTR (Figure 5A). To confirm this observation, we performed immunoprecipitation using a p75 NTR -ICD antibody and western blot analysis of basal forebrain tissue from 2- and 6-month-old mice. Figure 5B shows the presence of a short isoform of p75 NTR in the SAMP8-p75 exonIII−/− . This short isoform migrates at a similar size to the p75 NTR C-terminal fragment (p75 NTR -CTF) construct analyzed in the same blot (Figure 5B) and is reminiscent of a short isoform previously described in von Schack et al. (2001). Interestingly, the expression of this short isoform is also observed in the C57/BL6-p75 exonIII−/− mice by immunostaining and co-immunoprecipitation (Figures 5A,B), indicating that the expression of the short isoform is due to the p75 exonIII genetic cassette construct and not dependent on the mouse strain. As p75 NTR undergoes regulated intramembrane proteolysis (RIP) under physiological conditions, the p75 NTR -CTF is also observed in the wild-type mice (both SAMP8 and C57/BL6 strains) together with the p75 NTR full-length (p75-FL) protein (Figure 5B). Quantification of the ratio p75-CTF/p75-FL (Figure 5C) indicates that there is a significant increase of this ratio in the SAMP8-p75 exonIII−/− and C57/BL6-p75 exonIII−/− mice that might drive aberrant signaling in these cells.
Increase of the cholesterol biosynthesis genes in the basal forebrain of SAMP8-p75 exonIII−/− It has been described that p75 NTR regulates the metabolism of cholesterol in the nervous system (Yan et al., 2005;Korade et al., 2007;Follis et al., 2021) among other tissues (Pham et al., 2019).We quantified the levels of Sterol Regulatory Element-binding Protein-2, SREBP2, a transcription factor involved in the upregulation of key genes for cholesterol biosynthesis and cholesterol uptake (Figure 6A).As it is shown in the Figure 6B analysis by western blot of basal forebrain tissue showed a significant increase in the total expression of SREBP2 in the p75 exonIII−/− with respect to p75 exonIII+/+ in both SAMP8 and C57/BL6 mouse strain.SREBP2 plays an important role in the homeostasis of cholesterol by regulating the 3-hydroxy-3methylglutaryl-coenzyme A, HMGCR, a key enzyme in the synthesis of cholesterol and the low-density lipoprotein receptor, LDLR, that mediates the uptake of extracellular cholesterol.As shown in Figure 6B, the protein levels of HMGCR and LDLR increase significantly at 2 and 6 months in the in the SAMP8-p75 exonIII−/− versus SAMP8-p75 exonIII+/+ .In the case of the C57/BL6 mice, there is a significant increase at 6 months but not at 2 months.This difference could be related to the fact that the SAMP8 mice have an accelerated aging in comparison to the normal aging of the C57/Bl6.In any case at 6 months old, a similar increase of SREBP2, HMGCR and LDLR is observed indicating that is a general mechanism of the p75 exonIII−/− mice and not of the mouse background. The cellular type responsible of HMGCR expression is important, as it is described that in the adult brain the synthesis of cholesterol takes place mainly in the astrocytes.Immunofluorescence instead showed an increase of the HMGRC staining in the neurons of the basal forebrain almost exclusively in the ChAT+ neurons of the BF of both SAMP8-p75 exonIII−/− and in the C57/BL6-p75 exonIII−/− mice (Figure 6F), supporting the western blot data.We then analyzed the total levels of cholesterol in basal forebrain extracts from SAMR1, SAMP8-p75 exonIII−/− , and SAMP8-p75 exonIII+/+ at 2 and 6 months (Figure 6H) and found an increase in the total levels of cholesterol, although not reaching statistical significant values, in the SAMP8-p75 exonIII−/− at 2 and a 6 months of age respect to SAMP8-p75 exonIII+/+ . 
The axis NGF/TrkA and p75 NTR regulates the expression of cholesterol biosynthesis genes So far, we have described a correlation between the presence of a short isoform of p75 NTR in the p75 exonIII mice (independent of the mice strain) and an increase in the cholesterol biosynthetic proteins in the BFCNs.To demonstrate that the neurotrophin signaling is able to activate these genes, we used a heterologous system like the PC12 cells that express endogenous levels of p75 NTR and TrkA, stimulated with NGF for 24-48 h (Figure 7).Stimulation of NGF induces an increase in the expression of SREBP2 and HMGRC in 24 h but not of LDLR (Figures 7A,B) and an increase in the cholesterol content, as shown by filipin staining (Figures 7C,D).NGF/TrkA stimulation also induces p75 NTR upregulation (Figure 7A).Interestingly NGF stimulation in the presence of a TrkA kinase activity inhibitor, K252a, prevents the activation of these genes (Figures 7A,B) and the increase of cholesterol content measured by filipin staining (Figure 7C).As TrkA activation by NGF induces the regulated intramembrane proteolysis (RIP) of p75 NTR , we incubated the PC12 cells in the presence of an α-secretase inhibitor (TAPI-1) or a γ-secretase inhibitor (Compound E) (Figures 7E,F).Inhibition of p75 NTR shedding with TAPI-1 reduces the expression of HMGCR induced by NGF/TrkA, suggesting that shedding of p75 NTR is required for the upregulation of HMGCR.The presence of a γ-secretase inhibitor partially impairs the upregulation of HMGCR, indicating that the RIP of p75 NTR plays a role in HMGCR upregulation.Activation of p75 NTR independently of TrkA with BDNF (Figures 7E,F), is not able to upregulate the expression of HMGCR, supporting the data that TrkA activity is required.These experiments suggested that the activation of TrkA by NGF is required for the shedding of p75 NTR , which is the main driver of the upregulation of the cholesterol biosynthesis genes.In order to demonstrate the direct role of p75 NTR -CTF in this process we overexpressed p75 NTR , p75 NTR -CTF and p75-ICD in PC12TrkA/p75DKO cells (Testa et al., 2022) and found and increase in SREBP2 and HMGCR expression independently of NGF and TrkA (Figures 7G,H) in the cells overexpressing p75-CTF suggesting that accumulation of p75-CTF is a causative agent and pointing to the findings observed in vivo in the SAMP8-p75 exonIII−/− and C57/BL6-p75 exonIII−/− mice (Figure 6). 
Discussion
BFCNs are involved in several cognitive tasks (Ballinger et al., 2016). In humans, it has been described that a reduction of the cholinergic area is a general hallmark of Alzheimer's disease patients (Mufson et al., 2008; Schliebs and Arendt, 2011; Kerbler et al., 2015; Fernández-Cabello et al., 2020). Here we found that the SAMP8-p75 exonIII−/− mouse is a good model for studying cholinergic neurodegeneration. The role of p75 NTR in the BFCNs has been the focus of intense research since the initial observation that p75 NTR is highly expressed in these neurons (Ward and Hagg, 1999). Analysis of the p75 NTR knock-out mice showed an increase in the number of BFCNs at birth. This finding has been observed regardless of the deletion strategy used: in the initial p75 exonIII−/− and p75 exonIV−/− deleted mice and in the more recent conditional mice with p75 deleted in the ChAT-expressing cells (Peterson et al., 1997; Yeo et al., 1997; Greferath et al., 2000; Naumann et al., 2002; Boskovic et al., 2014). These findings suggest that p75 NTR plays a critical role in determining the total number of BFCNs at birth. Although it has been proposed that this is indicative of a pro-apoptotic role of p75 NTR in this neuronal population during embryonic development, it has not been fully demonstrated, for instance with the use of an apoptosis-impaired conditional mouse. The increase of BFCNs in the p75 NTR knock-out mice could also be the result of an increase in proliferation or of a positive role of p75 NTR in the mitotic exit of neuronal precursors. This could be similar to the role of p75 NTR in mitotic exit found in cerebellar granule cell progenitors (GCPs), where in the absence of p75 NTR GCPs continue to proliferate beyond their normal period, resulting in a larger cerebellum that persists into adulthood (Zanin et al., 2016). Here we have observed that deletion of p75 NTR in the SAMP8 background, a pathological aging mouse model, also results in an increase in the number of BFCNs at birth, as was reported in other mouse strains (Peterson et al., 1997; Yeo et al., 1997; Greferath et al., 2000; Naumann et al., 2002; Boskovic et al., 2014). However, and in contrast to the study reported by Boskovic et al. (2014), we observed a significant decrease in the number of BFCNs in aging mice of both the SAMP8 and C57BL/6 strains. Quantification of the rate of BFCN loss reveals that these neurons are lost at a faster rate in the SAMP8-p75 exonIII−/− mice, indicating an accelerated cell death of BFCNs. This decrease of cholinergic neurons correlates with an impairment in the Y-maze from 2 to 6 months.
To investigate the mechanism of BFCN loss in the long term, we reasoned that the loss of BFCNs might be due to the constitutive expression of a short isoform of p75 NTR previously described in the 129v background strain (von Schack et al., 2001). When analyzed by immunofluorescence using a specific antibody against the intracellular domain of p75 NTR, we observed significant labeling in the basal forebrain of the SAMP8-p75 exonIII−/− mice (Figure 5). Immunoprecipitation of total lysates from the basal forebrain supported the presence of a short isoform of p75 NTR, with a size similar to that of a p75-CTF construct used as a migration marker (Figure 5). As p75-CTF is produced by the activation of TrkA signaling by NGF (Urra et al., 2007), we could consider the constitutive expression of p75-CTF in the SAMP8-p75 exonIII−/− as a gain-of-function of the TrkA/NGF and p75 signaling. It has been described that p75-CTF induces cell death of several neuronal types (Underwood et al., 2008; Skeldal et al., 2011; Vicario et al., 2015). Recently, we described that in the absence of a pro-survival signal emanating from TrkA, p75-CTF induces the cell death of BFCNs in culture by activating the p38, JNK, and caspase-3 pathway (Franco et al., 2021).
The SAMP8-p75 exonIII−/− mice could be a good model to study the consequences of cholinergic neurodegeneration in the context of pathological aging and high oxidative stress. Oxidative damage has been universally linked to AD (Guglielmotto et al., 2010; Cai et al., 2011) and is also observed in the SAMP8 mice (Morley, 2002; Pallas et al., 2008). p75 NTR has been implicated in the cell death of sympathetic neurons upon oxidative stress (Kraemer et al., 2014a). Our results showed an increase in cell death in the SAMP8-p75 exonIII−/− mediated by the short isoform of p75 NTR, supporting a role for p75 NTR signaling in cell death under conditions of high oxidative stress. This could explain why these BFCNs degenerate faster, or at a higher rate, than the BFCNs of the C57/BL6 mice, which are not subject to elevated oxidative stress.
Here we found an increase in the total expression of SREBP2 in the SAMP8-p75 exonIII−/− with respect to the SAMP8-p75 exonIII+/+, and also in the C57BL/6-p75 exonIII−/− mice but only at a later time point (6 months rather than 2 months) (Figure 6). This increase in the levels of SREBP2 parallels the increase of two of its main targets, LDLR and HMGCR, suggesting an increase in the uptake and biosynthesis of cholesterol, respectively. Measurements of the total cholesterol levels in the basal forebrain suggested that the p75 exonIII−/− mice are prone to higher levels of free cholesterol in the basal forebrain. The brain is highly dependent on cholesterol (Petrov et al., 2016). The intact blood brain barrier (BBB) prevents the uptake of lipoproteins from the circulation in vertebrates. Unlike cholesterol in peripheral organs, brain cholesterol is primarily derived from de novo synthesis. During brain development, neurons have the capacity to synthesize their own cholesterol. In the adult state, however, cholesterol is synthesized by glia, mainly astrocytes, and transported bound to ApoE from the astrocytes to the neurons, which contain ApoE receptors such as LDLR and LRP1 (Staurenghi et al., 2021; Li et al., 2022). The finding that the cholinergic neurons from the p75 exonIII-KO mice re-express HMGCR suggests that cholesterol homeostasis is somehow disrupted in these mice. An increase in the neuronal cholesterol content has been associated with some neurodegenerative diseases and cell death. In Niemann-Pick disease Type-C (NPC), the impaired transport of cholesterol from the ER to the plasma membrane caused by defects in the Npc1 gene induces an accumulation of intracellular cholesterol, endosomal alterations, and cell death (Cabeza et al., 2012). Previous reports described that excessive uptake, as well as synthesis, of cholesterol underlies neuronal cell death by a necroptosis-like mechanism (Funakoshi et al., 2016). These results suggest that the increased biosynthesis of cholesterol in the cholinergic neurons of the p75 exonIII−/− could be one of the mechanisms of BFCN loss, although this hypothesis needs further research. Although we focus here on cholinergic neurons, we cannot rule out that a similar phenotype occurs in other neuronal populations that express p75 endogenously. However, as cholinergic neurons express high levels of p75 throughout life, the levels of the short isoform of p75 in the p75 exonIII-KO might be higher in BFCNs than in cortical or hippocampal neurons, which express much lower levels of p75 in the adult brain.
Previous data might suggest a cell-autonomous regulation of cholesterol synthesis genes by p75 NTR. p75 NTR has been involved in the regulation of cholesterol synthesis in the forebrain (Yan et al., 2005; Korade et al., 2007) and in the liver (Baeza-Raja et al., 2016; Pham et al., 2019). Korade et al.
(2007) described that the levels of p75 NTR positively correlate with the expression of cholesterol synthesis enzymes in both neuroblastoma cell lines and primary cerebellar neurons, and ligand-activated p75 NTR mediates the activation of SREBP2 via p38 MAPK and caspase-2 in liver cell lines (Pham et al., 2019). Also, NGF, pro-NGF, and pro-BDNF induce the expression of LDLR in PC6.3 cells and in septal neurons in a TrkA- and p75 NTR-dependent manner (Do et al., 2016). Furthermore, in melanoma, metastasis is promoted by the upregulation of cholesterol synthesis genes by the NGF/TrkA/p75 NTR axis (Restivo et al., 2017). Our results suggest that the main role of TrkA is to facilitate the shedding of p75 to generate a short p75 isoform similar to p75-CTF, the actual inducer of SREBP2 and HMGCR, as we showed by overexpression of p75-CTF in TrkA/p75-DKO PC12 cells. The upregulation of LDLR observed in vivo (Figure 6) was not observed in the PC12 cell system, suggesting that in vivo a more complex mechanism involving the regulation of cholesterol internalization is taking place.
All these data suggest that the constitutive expression of p75-CTF is behind the rapid cell death of BFCNs in the SAMP8 mice. A decrease in the number of BFCNs in the C57/BL6-p75 exonIII−/− mouse background is also observed and correlates with the presence of p75-CTF. Interestingly, in the conditional p75 flox/flox/ChAT-Cre mice, which do not express any domain of p75 NTR, BFCN cell death during aging was not observed (Boskovic et al., 2014). One important implication of these results is that the p75 exonIII−/− mice should be used with caution, as the expression of a pro-apoptotic short isoform may lead to misinterpretation of the results and to wrong conclusions regarding p75 NTR signaling, at least in the basal forebrain.
In summary, the generation of the SAMP8-p75 exonIII−/− mice described in this work uncovered a direct regulation of cholesterol synthesis genes by the TrkA/p75 axis and may facilitate the study of the degeneration of cholinergic neurons by cholesterol dysregulation, a phenomenon also observed in several neurodegenerative diseases.
Methods
SAMP8 p75 exonIII−/− generation
SAMP8 mice were backcrossed with C57BL6 p75 NTR exonIII+/− mice (Lee et al., 1992) (Jackson Laboratories) for 12 generations to create a new SAMP8-p75 exonIII−/− strain. The animals were housed on a 12 h light/12 h dark cycle with food and water provided ad libitum, under specific pathogen-free (SPF) conditions at a constant temperature of 24°C. All animal experimentation was carried out following the recommendations of the Federation of European Laboratory Animal Science Associations on health monitoring, European Community Law (2010/63/UE), and Spanish law (R.D. 53/2013), with approval of the Ethics Committee of the Spanish National Research Council (1,246/2022) and the local Government (2022-VSC-PEA-0139 type 2).
Brain fixation and tissue processing
Animals at the indicated age were perfused with 4% PFA; the brains were removed and postfixed from 2 h to overnight with 4% PFA, then washed several times with phosphate buffer (PB) 0.1 M pH 7.4 and cryoprotected overnight at 4°C with 30% sucrose in PB. Afterwards, the brains were frozen in Tissue-Tek OCT compound (Sakura) and cut into 10 μm coronal sections with a Leica CM1900 cryostat. Alternatively, brains were washed and sliced into 40-μm sections with a Leica VT1200 vibratome and kept in PB 0.1 M with 0.005% sodium azide at 4°C until used.
Hippocampal astrogliosis
Immunohistochemistry (IHC) of the astrocytic marker GFAP was carried out in the hippocampus. Cryostat sections were washed with PB 0.1 M and blocked with PB 0.1 M, 0.1% Triton X-100 and 3% Fetal Bovine Serum (FBS) for 60 min at room temperature. Sections were incubated 16 h at 4°C with rabbit α-GFAP antibody (DAKO, Z0334) 1:300 in blocking buffer. Next, slices were washed three times with PB 0.1 M and incubated with the secondary antibody Cy3 donkey α-rabbit (Jackson, 711-165-152) 1:500 for 2 h. Nuclei were stained with the nuclear marker 4,6-diamidino-2-phenylindole dihydrochloride (DAPI; Sigma) 1:1000, and the slices were washed again and coverslipped with Mowiol and DABCO (50 μL/mL). Images were captured with an SP8 confocal microscope (Leica), and GFAP-positive astrocytes of the CA1 area of the HC were analyzed by measuring the mean signal intensity per μm3. At least 5 slices per animal were measured. For this purpose, eight animals per condition were analyzed.
Basal forebrain cholinergic counting
To count BFCNs, slices corresponding to the basal forebrain and hippocampus were collected (Bregma 1.4 to −2.5 mm). Thirty slices per animal, separated by 100 μm, were observed under a fluorescence microscope (Leica), and the positive neurons were counted. For cholinergic neuron detection, Choline Acetyltransferase (ChAT) was used as the cholinergic marker. The sections were blocked with blocking buffer (PB 0.1 M, 1% Triton X-100, 3% FBS) for 60 min at room temperature and incubated for three nights at 4°C with primary goat α-ChAT antibody (Millipore, AB144P) 1:200. After 3 days, the antibodies were removed, the slides were washed three times with PB and incubated with biotin rabbit α-goat (Jackson, 305-065-046) 1:200 at room temperature for 1 h, followed by Cy2 streptavidin (Jackson, 016-220-084) 1:200. Nuclei were stained and the slices mounted as described.
Immunohistochemical detection of p75, HMGCR and ChAT
Immunodetection was performed on vibratome slices. The chosen sections (around Bregma 0.86 mm) were treated with 10 mM Sodium Citrate pH 6.5 for 20 min at 85°C. Slices were then cooled down at room temperature and blocked with 0.1 M PB, 0.5% Triton X-100, 10% FBS for 1 h. After the blocking step, slices were incubated for 3 days in 120 μL of the primary antibody: mouse α-HMGCR (Abcam, 242315).
Mice behavior tests
Groups of mice of 2 and 6 months performed 3 different tests in the following order: open field, Y-maze test, and Novel Object Recognition test (NOR). Before performing the behavioral tests, the mice were moved to the behavioral room for habituation. In addition, each mouse was handled 5 min per day for 4 consecutive days by the experimenter. Tests were separated by 3 to 5 days to let the mice rest.
Open field test
Mice were placed individually into the periphery of a squared black box of 50×50 cm, elevated 85 cm from the floor, for 5 min. They were free to explore and were recorded with an automatic activity monitoring system (Smart Video Tracking Software, PanLab). The area of the open field was divided into a 42×42 cm central zone (40% of the total surface) and a surrounding periphery zone. The following anxiety-related parameters were recorded: time spent and distance traveled in the center zone (and periphery). Total distance and mean velocity were used to assess general locomotion.
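To make the open-field read-outs above concrete, the following is a minimal sketch of how time in the center zone, distance traveled, and mean velocity can be computed from a tracked trajectory. This is not the PanLab/Smart software used in the study; the 50×50 cm arena and 42×42 cm center zone follow the dimensions given above, while the sampling rate, array names, and simulated trajectory are illustrative assumptions.

```python
import numpy as np

# Illustrative tracked trajectory: x, y positions (cm) sampled at 25 Hz for 5 min.
# In the study these come from the video-tracking software; here they are simulated.
fs = 25.0                       # frames per second (assumed)
n = int(5 * 60 * fs)            # 5-minute session
rng = np.random.default_rng(0)
xy = np.clip(np.cumsum(rng.normal(0, 0.3, size=(n, 2)), axis=0) + 25.0, 0.0, 50.0)

# Center zone: 42x42 cm square centered in the 50x50 cm arena (as described above).
half_arena, half_center = 25.0, 21.0
in_center = np.all(np.abs(xy - half_arena) <= half_center, axis=1)
time_in_center = in_center.sum() / fs                # seconds spent in the center zone

# Distance traveled (total and within the center zone) and mean velocity.
steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # cm moved between frames
total_distance = steps.sum()
center_distance = steps[in_center[1:]].sum()
mean_velocity = total_distance / (n / fs)            # cm/s over the whole session

print(f"time in center: {time_in_center:.1f} s, "
      f"distance: {total_distance:.0f} cm (center {center_distance:.0f} cm), "
      f"mean velocity: {mean_velocity:.2f} cm/s")
```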
Spontaneous alternation Y-maze
Each arm of the Y-maze measured 32.5×8 cm. The mice were placed in the center and allowed to explore freely for 8 min. Every time a mouse placed all four paws in a new arm, it was counted as a new entry. The correct alternations were counted as the spatial memory parameter (a worked scoring example is given after the immunoprecipitation protocol below). The total number of entries was used to assess general locomotion.
Novel object memory test
Two days prior to the test, every mouse was placed in a 40×40 cm empty squared box for 10 min for habituation. 24 h later, training was conducted: the animals were placed in the same box containing 2 identical objects that they could explore for 10 min. The next day the test was conducted, in which one of the objects was changed for a novel one. The mice were recorded with a camera, and the time that the animal spent exploring the novel and the familiar object was quantified.
Filipin assay
A total of 2 × 10^4 PC12 cells/well were seeded onto a sterile slide placed in a 24-well plate and incubated with the different inhibitors (see above). After 24 h the cells were fixed with 4% PFA for 1 h, then stained for 2 h with filipin (50 μg/mL, Sigma) in 10% PBS, and propidium iodide was used for nuclear staining. A UV filter was required to view the filipin staining (340-380 nm excitation, 40 nm dichroic, 430-nm long-pass filter). The cells were protected from light during the procedure because filipin fluorescence photobleaches very rapidly. An SP8 confocal microscope (Leica) was used to scan and record the fluorescence. For filipin intensity quantification, regions of interest (ROIs) were defined by thresholding total projections of each condition, and signal intensity was measured using ImageJ software.
p75 NTR immunoprecipitation assay
Endogenous p75 NTR-CTF was detected by immunoprecipitation. For that purpose, BF extracts were incubated with 500 μL TNE lysis buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1 mM EDTA, 0.1% SDS, 0.1% Triton X-100, 1 mM PMSF, 10 mM NaF, 1 mM Na2VO3, and protease inhibitor mixture) and disrupted with a dounce homogenizer. Samples were centrifuged and the supernatant was subjected to immunoprecipitation. Extracts were incubated overnight with 1.5 μL of p75 antibody (Millipore, 07-476) at 4°C on an orbital shaker. The following day, 10 μL of Protein G Agarose Beads (ABT, 4RRPG-5), previously washed in TNE, were added and incubated for 2 h. Samples were then centrifuged at 100 x g for 2 min to pellet the agarose beads bound to the antibody and remove non-bound proteins, and washed three times in TNE lysis buffer with 0.2% Triton X-100. Finally, for SDS-PAGE analysis, 30 μL of reducing 2X sample buffer was added and the samples were boiled for 5 min at 96°C. Samples were centrifuged at 100 x g for 2 min and the agarose beads were removed. Immunoprecipitated samples were then subjected to western blot analysis.
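As referenced in the Y-maze paragraph above, here is a minimal sketch of how spontaneous alternation can be scored from the ordered sequence of arm entries. The percentage formula (successful alternations divided by total entries minus 2) and the triplet-based definition of an alternation are common conventions assumed here, since the text above only states that correct alternations and total entries were counted.

```python
def spontaneous_alternation(entries):
    """Score a Y-maze session from the ordered list of arm entries, e.g. ['A', 'B', 'C', ...].

    An alternation is counted when three consecutive entries visit three different arms
    (assumed convention). Returns the number of alternations and the alternation percentage,
    computed as alternations / (total entries - 2) * 100.
    """
    alternations = sum(
        1 for i in range(len(entries) - 2)
        if len({entries[i], entries[i + 1], entries[i + 2]}) == 3
    )
    possible = max(len(entries) - 2, 1)          # avoid division by zero on very short sessions
    return alternations, 100.0 * alternations / possible

# Example session: 8 entries, of which 4 of the 6 consecutive triplets are alternations.
arms = ["A", "B", "C", "A", "C", "B", "A", "A"]
count, percent = spontaneous_alternation(arms)
print(count, round(percent, 1))  # 4 alternations, 66.7% alternation
```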
Statistical analysis
All statistical analyses were performed with GraphPad Prism software (a minimal Python equivalent is sketched after the figure legend below). The results are represented as mean ± standard error of the mean (SEM). The normal distribution of all data sets was confirmed with the D'Agostino & Pearson test. To determine whether the differences between 2 groups were significant, the unpaired Student's t-test was performed. For data presented as a fold increase, the one-sample t-test was employed. For multiple comparisons, one- or two-way analysis of variance (ANOVA) was used: it was first evaluated whether there were significant differences between the groups, and Tukey's post-hoc test was then used to determine the specific differences between groups. In the plots, "N" indicates the number of independent mice of each strain and age used for each experiment. In all analyses a p value <0.05 was considered statistically significant, represented as: *p < 0.05; **p < 0.01; ***p < 0.001 and ****p < 0.0001.
Figure legend (panels D-I): (D) Quantification of GFAP intensity in the CA1 region of the hippocampus at 2, 6, and 10 months. The bars represent the standard error of the mean, N > 3. Two-way ANOVA followed by Tukey's post-hoc analysis, **p < 0.01. (E) Quantification of GFAP intensity in the dentate gyrus region of the hippocampus at 2, 6, and 10 months. The bars represent the standard error of the mean, N > 3. Two-way ANOVA followed by Tukey's post-hoc analysis, **p < 0.01. (F) Hippocampal neurogenesis. Representative images of the staining of BrdU+/DCX+ cells in the SGZ of the dentate gyrus of SAMP8-p75exonIII+/+ and SAMP8-p75exonIII−/− mice; N = 4, four brain sections per animal. (G) Quantification of the number of BrdU+ cells in the SGZ of the dentate gyrus at 2, 6 and 10 months of age. (H) Quantification of the number of DCX+ cells in the SGZ of the dentate gyrus at 2 and 10 months of age. (I) Quantification of the number of BrdU+/DCX+ cells in the SGZ of the dentate gyrus at 2 and 10 months of age. Two-way ANOVA followed by Tukey's post-hoc analysis, *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
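As noted above, the original analyses were run in GraphPad Prism; the following is a rough equivalent sketch in Python (SciPy/statsmodels), shown only to make the workflow explicit. The group arrays are made-up placeholders, and the D'Agostino & Pearson omnibus test corresponds to scipy.stats.normaltest.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Placeholder data standing in for one read-out (e.g., GFAP intensity) in three groups.
groups = {
    "p75_wt": rng.normal(1.0, 0.15, 20),
    "p75_ko": rng.normal(1.4, 0.15, 20),
    "samr1":  rng.normal(1.1, 0.15, 20),
}

# 1) Normality check (D'Agostino & Pearson omnibus test).
for name, values in groups.items():
    _, p = stats.normaltest(values)
    print(f"{name}: normality p = {p:.3f}")

# 2) Two-group comparison: unpaired Student's t-test.
_, p = stats.ttest_ind(groups["p75_wt"], groups["p75_ko"])
print(f"t-test wt vs ko: p = {p:.4f}")

# 3) Fold-increase data: one-sample t-test against the reference value of 1.
_, p1 = stats.ttest_1samp(groups["p75_ko"] / groups["p75_wt"].mean(), popmean=1.0)
print(f"one-sample t-test on fold change: p = {p1:.4f}")

# 4) More than two groups: one-way ANOVA followed by Tukey's post-hoc test.
_, p_anova = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(f"ANOVA p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```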
Activation of NLRP3 Inflammasome in Liver of Long Evans Lactating Rats and Its Perinatal Effects in the Offspring after Bisphenol F Exposure
The liver is the organ responsible for the metabolism and detoxification of BPF, the BPA analogue that is replacing it in plastic-based products. It is not known whether BPF can trigger inflammatory responses via the NLRP3 inflammasome, which plays a major role in the development of liver disease. The aim of this study was to evaluate nitrosative stress species (RNS) and NLRP3 inflammasome activation in the liver of lactating dams after BPF exposure. Moreover, it was studied whether this effect could also be observed in the liver of female and male offspring at postnatal day 6 (PND6). 36 Long Evans rats were randomly distributed according to oral treatment into three groups: a control group, a BPF low-dose group (LBPF; 0.0365 mg/kg b.w./day) and a BPF high-dose group (HBPF; 3.65 mg/kg b.w./day). The levels of nitrosative stress-inducing proteins (eNOS, iNOS, HO-1d), NLRP3 inflammasome components (NLRP3, PyCARD, CASP1) and proinflammatory cytokines (IL-1β, IL-18, IFN-γ and TNF-α) were measured by gene and protein expression in the liver of lactating dams and in female and male PND6 offspring. Lactating dams treated with LBPF showed a significant increase in iNOS and HO-1d, activation of NLRP3 components (NLRP3, PyCARD, CASP1) and release of proinflammatory cytokines such as IL-1β, IL-18, IFN-γ and TNF-α. Similar effects were found in female and male PND6 offspring after perinatal exposure. LBPF oral administration and perinatal exposure caused an increase of nitrosative stress markers and proinflammatory cytokines. Also, NLRP3 inflammasome activation was significantly increased in the liver of lactating dams and PND6 offspring.
Introduction
Nowadays it is well documented that bisphenol A (BPA) exposure can cause liver tissue remodeling and fibrosis due to the generation of reactive oxygen species (ROS) and an uncontrolled inflammatory cascade [1]. This liver injury can lead to diseases such as hepatic steatosis, tumors, and metabolic syndrome. An important role of the NLRP3 inflammasome has been described in liver diseases [2,3]. Inflammasomes are key components of the innate immune system that largely protect normal liver functions against pathogenic infections, metabolic diseases, and cellular stress [4]. The NLRP3 inflammasome is a multiprotein scaffold that responds to damage-associated molecular patterns (DAMPs) and can mediate the catalytic activation of caspase-1 (CASP1), promoting the cleavage and release of IL-1β and IL-18 [5]. However, an excessive inflammatory response regulated by the NLRP3 inflammasome triggers liver disease progression [4].
Previous studies showed that BPA promoted inflammation and fibrosis progression, with a key role of the NLRP3 inflammasome, in the liver of obese mice after BPA and high-fat diet administration [6]. Knockout mouse models suggested that inhibition of the NLRP3 inflammasome reduced liver inflammation, indicating that the NLRP3 inflammasome is involved in the progression of non-alcoholic fatty liver disease (NAFLD) [7,8]. Furthermore, NLRP3 upregulation and increased gene and protein expression of IL-1β, IL-18, NLRP3, and CASP1 were observed in laying hens after high doses of BPA [9].
Due to the large number of studies demonstrating the health risks of BPA, the development and production of alternatives to this endocrine-disrupting chemical (EDC) has been stimulated to replace it in a myriad of applications [10]. Some of the new alternatives to BPA are the bisphenol analogues, such as bisphenol F (BPF). BPF is a diphenylalkane with two phenol rings linked through a methylene group. BPF is replacing BPA in the manufacture of plastic-based products [11]. Also, BPF is the predominant bisphenol found in foodstuffs, representing 17% of total bisphenols.
After oral absorption, BPF is mainly metabolized in the liver to BPF-glucuronide and BPF-sulfate. Most BPF is excreted in the urine as a sulfate conjugate. Nonetheless, between 7 and 9% remains in rat tissues 96 h after BPF exposure [12]. The liver seems to be more vulnerable to the effect of lower doses of bisphenols, as it is responsible for the metabolism and detoxification of compounds to maintain homeostasis in the whole organism. It also plays an indispensable role in mediating inflammatory responses [13]. It is particularly interesting to investigate and understand how exposure to different EDCs can affect the developmental period. This is because an unborn fetus, as well as the placenta, is vulnerable owing to the lack of the proper enzymatic machinery, which makes gestation and the perinatal period the most vulnerable times for EDC toxicity in human life [14]. In addition, effects may manifest differently in males and females due to differences in the metabolism, storage, and elimination of xenobiotics [15].
Previous studies by our research group showed that low-dose BPF increased oxidative stress by reducing antioxidant enzyme activities and altering the glutathione system in lactating rats and their offspring [16]. However, it is unknown whether BPF triggers NLRP3 inflammasome-mediated inflammatory responses in the liver.
The aim of this study was to evaluate nitrosative stress after BPF exposure, and whether reactive nitrogen species (RNS) could serve as a stimulus for NLRP3 inflammasome activation and the generation of inflammation and apoptosis in the liver of lactating dams. Moreover, it was studied whether this effect could also be observed in the liver of female and male offspring at postnatal day 6 (PND6).
Results
In addition to reactive oxygen species (ROS), reactive nitrogen species (RNS) are also produced physiologically. However, an imbalance between the production and neutralization of these RNS is known as nitrosative stress.
When lactating dams were treated with LBPF, gene and protein levels of oxidative stress-inducing proteins such as iNOS and HO-1d were significantly increased compared to the control group. In addition, iNOS and HO-1d mRNA, and HO-1d protein, levels were higher in the LBPF group as compared with the HBPF-treated dams. No significant changes were shown in the physiological eNOS isoform after the administration of either dose of BPF in the liver of lactating dams (Figure 1a,b). To further investigate the role of BPF on hepatic inflammation, we measured the mRNA and protein levels of NLRP3 inflammasome components. The mRNAs of NLRP3, PyCARD (ASC adaptor), and CASP1 were upregulated in LBPF-treated dams when compared to the control group. Higher PyCARD mRNA levels were also shown after LBPF administration as compared to HBPF in the liver of lactating dams (Figure 1d). Higher protein expression of NLRP3, CASP1, and IL-18 was obtained after LBPF administration when compared to control dams (Figure 1e-g). CASP1 and IL-18 protein expression levels were also higher in HBPF when compared to control dams (Figure 1f,g). Regarding the proinflammatory cytokines IL-1β, IL-18, IFN-γ, and TNF-α, they were considerably upregulated in LBPF-treated dams, whereas no significant change was observed in the HBPF group as compared with the control group, except for IL-18 mRNA levels. Significant differences were also observed between both treatment groups, resulting in higher gene and protein levels of IL-1β and TNF-α in the LBPF group (Figure 1h,i). Representative protein blots for each tested marker are shown in Figure 1c,j.
Hence, LBPF increased nitrosative stress levels, which could be the stimulus to activate the NLRP3 inflammasome and to promote inflammatory responses in the liver of lactating dams.
To study whether perinatal administration of BPF generated an alteration of the nitrosative balance in the liver of female and male offspring, we evaluated the same NOS isoforms and HO-1d. When female PND6 offspring were pre- and perinatally exposed to LBPF, the mRNA and protein levels of iNOS and HO-1d were increased in the LBPF group as compared to the control group (Figure 2a,c). Also, higher levels of HO-1d mRNA and protein expression were observed in LBPF-exposed female offspring compared to the HBPF group (Figure 2a). Notably, the eNOS isoform showed no differences between groups (Figure 2a,c). In males exposed pre- and perinatally to BPF, the same results were obtained as in females. Thus, iNOS and HO-1d gene and protein levels increased in the LBPF-treated males compared to the control group, and no significant changes in the eNOS isoform between groups were found (Figure 2b,d). Also, higher levels of HO-1d mRNA were observed in LBPF-exposed male offspring as compared to HBPF (Figure 2b). HO-1d protein levels were higher in HBPF compared to the control group. Figure 2e shows representative eNOS, iNOS, and HO-1d blots analyzed in both PND6 females and males. In both sexes, there was also an enhanced expression of the inducible HO-1d and iNOS isoforms, which increased nitrosative stress levels (Figure 2).
Regarding NLRP3 inflammasome pathway activation, an increase in NLRP3 gene expression and the subsequent up-regulation of the adaptor ASC (PyCARD) and CASP1 mRNAs were shown after LBPF administration in female offspring (Figure 3a). In addition, increased PyCARD mRNA levels were observed in HBPF-exposed female offspring when compared to the control group (Figure 3a). Higher levels of NLRP3 and CASP1 mRNA and protein expression were observed in LBPF-exposed female offspring compared to the HBPF group (Figure 3a,c,d).
When PND6 male offspring were pre- and perinatally exposed to LBPF, an increase in NLRP3, PyCARD, and CASP1 was observed as compared to the control group (Figure 3b). This was also observed with respect to the protein expression of NLRP3 and CASP1 (Figure 3c,d). NLRP3 protein expression was also upregulated in HBPF-treated offspring when compared to control male offspring (Figure 3c). Notably, NLRP3 pathway activation occurred in both sexes, allowing binding to the adaptor molecule and promoting CASP1 gene expression after pre- and perinatal exposure to LBPF (Figure 3).
When female PND6 offspring were pre- and perinatally exposed to LBPF, the mRNA and protein levels of IL-1β, IL-18, IFN-γ, and TNF-α were increased when compared to the control group (Figure 4a,c,e). Also, higher mRNA and protein levels of IL-1β and IFN-γ were observed in LBPF-exposed female offspring when compared to the HBPF group (Figure 4a,c). TNF-α mRNA levels were upregulated in HBPF-exposed female offspring compared to the control group (Figure 4a). In males exposed pre- and perinatally to BPF, up-regulated mRNA levels of IL-1β, IL-18 and TNF-α were observed as compared to the control group (Figure 4b). Protein levels of IFN-γ, IL-1β, and TNF-α were also higher in LBPF-exposed male offspring as compared with the control group (Figure 4d). IL-18 protein levels were also higher in LBPF-exposed animals when compared to control and HBPF-exposed male offspring (Figure 4e). Figure 4f shows representative blots of pro-inflammatory cytokines in PND6 females and males.
Regarding the histological study of the liver of lactating dams, no changes were yet observed in the cellular structure of the livers of BPF-treated dams compared to control hepatocyte images (Figure 5a). However, in both sexes of offspring, BPF administration induced nuclei aggregation and inflammatory cell infiltration in the liver of PND6 offspring compared to control pups, with more noticeable effects at LBPF (Figure 5b,c).
After BPF exposure, NLRP3 inflammasome activation and pro-inflammatory cytokine release were observed in the offspring of both sexes. The same effects were observed in the liver of lactating dams, with more noticeable effects after LBPF exposure.
Discussion
Oxidative stress and inflammation in the liver are closely correlated, as they occur simultaneously and interact with each other, and they are crucial in the initiation and development of liver disease [13].
In a previous study by our research group, antioxidant enzyme activities were decreased and oxidized glutathione levels were increased after low doses of BPF in lactating Long Evans rats and their offspring, in addition to increased lipid peroxidation. Thus, LBPF increases oxidative stress [16]. However, it was unknown whether BPF could increase nitrosative stress and serve as a stimulus to trigger inflammatory responses after administration of two doses of BPF, a low dose of 0.0365 mg/kg b.w./day (LBPF) and a high dose of 3.65 mg/kg b.w./day (HBPF), in the liver of lactating dams and of PND6 offspring after pre- and perinatal BPF exposure.
Among the reactive nitrogen species (RNS), nitric oxide (NO) is a signaling molecule involved in many biological processes (blood pressure regulation, inhibition of platelet aggregation, and neurotransmission) and is synthesized by at least three isoforms: neuronal nNOS, endothelial eNOS, and inducible iNOS. NO overproduction is associated with enhanced RNS production, which is able to induce structural damage to biomolecules, including proteins, lipids, and DNA [17].
No significant changes were found in the constitutive eNOS isoform, but increased gene and protein expression of inducible iNOS was observed in LBPF-treated dams. Excess NO from increased iNOS activity can cause liver cell injury due to nitrosylation of thiol residues of many cellular enzymes, as well as triggering of innate and adaptive immune responses [18]. Increased gene and protein expression of inducible HO-1d was also observed in LBPF-treated dams. HO-1d responds to transcriptional induction by alterations in oxygen tension, inflammatory mediators, heat shock, oxidative stress, and NO levels. Therefore, HO-1d induction is elevated after nitrosative stress in order to prevent further injury [19].
Increased mitochondrial reactive oxygen species (ROS) and RNS are able to influence several physiological and pathological processes, including inflammation. Inflammation may be triggered by several different processes, the activation of the inflammasome being one of the most important. The NLRP3 inflammasome can be activated in response to a wide range of stimuli such as infection, tissue damage, or metabolic stress (via different pathways: ATP, damaged mitochondria, lysosomal breakdown, changes in Ca2+ and K+, and also increases in mitochondrial and non-mitochondrial ROS concentrations). Once NLRP3 is activated, it binds to the adaptor molecule PyCARD (ASC; apoptosis-associated speck-like protein containing a CARD), which recruits and activates procaspase-1 into caspase-1 (CASP1), which in turn promotes the maturation of proinflammatory cytokines such as IL-1β and IL-18. In addition, CASP1 is able to cleave protein precursors that affect the cell cytoskeleton, glycolysis, mitochondrial function, and inflammation [20]. It also induces pyroptosis, an inflammatory form of programmed cell death [21].
An increase in the gene expression of the NLRP3 sensor, its adaptor molecule PyCARD, and CASP1, the three components of the NLRP3 inflammasome, was observed in LBPF-treated dams. In turn, a release of proinflammatory cytokines such as IL-1β, IL-18, IFN-γ, and TNF-α occurred after exposure to LBPF, as measured by gene and protein expression in the liver of lactating dams.
IL-1β and IL-18, members of the IL-1 superfamily of cytokines, promote processes associated with infection, inflammation, and autoimmunity. IL-1β is key in the activation of hepatic stellate cells (HSC) and promotes the recruitment of inflammatory cells, contributing to fibrosis and triglyceride accumulation in hepatocytes and, together with TNF-α, to their death [3]. TNF-α causes hepatic inflammation, proliferation, and apoptosis, as well as changes in HSC morphology [22]. TNF-α can also promote the recruitment of proinflammatory neutrophils and macrophages and the activation of fibrogenic pathways leading to the development of liver fibrosis [23].
IL-18 induces IFN-γ synthesis, in addition to activating NK cells and cytotoxic T lymphocytes, and seems to be involved in modulating the gut microbiota [3]. IFN-γ is a regulatory mechanism of the NLRP3 inflammasome and has a dual role: it activates effector cells such as NK lymphocytes and also tends to decrease activation through iNOS, because NO induces nitrosylation of the NLRP3 protein and can inhibit its activity after a prolonged time [24]. The results obtained in the liver of lactating dams are consistent with a study that showed a significant increase in the levels of TNF-α and other inflammatory molecules in zebrafish after administration of BPF at 10-1000 µg/L [25].
Therefore, oral administration of LBPF to lactating dams led to an increase in liver RNS, which could stimulate the NLRP3 inflammasome and promote the release of proinflammatory cytokines.
There are no previous studies showing the influence of BPF on the activation or inhibition of inflammasomes, their components, or the release of their products, but there are already data on the effects of BPA administration, as previously mentioned [6,9]. In a recent study by our research group, it was shown that after administration of low doses of BPA, oxidative stress and NO levels increased, with a decrease in the endogenous antioxidant enzyme system (CAT, SOD, GST, GR, and GST) and the glutathione system (GSSG/GSH ratio) in lactating dams as well as in female offspring [26]. Therefore, understanding how BPF exposure can affect the developmental period is very important, as it is the most critical and vulnerable period in human life. This exposure could confer a higher risk of developing diseases in adulthood due to the limited ability, in this period of life, to metabolize and process these chemicals [14,27]. Also, it is the moment in which the brain, as well as other organs, is in the phase of development.
Furthermore, human placental cells incubated with BPA and BPF have been shown to activate the P2X7 receptor, promoting the NLRP3 inflammasome and increasing the activity of several caspases, indicating a toxic effect. This could trigger preterm birth and pre-eclampsia in humans [28]. BPF administration also increases spontaneous abortions in pregnant dams in a dose-dependent manner [29].
Our results in PND6 offspring showed an increase in gene and protein expression of iNOS and no change in the eNOS isoform in both males and females, as well as an increase in inducible HO-1d mRNA and protein levels in both sexes. In a previous study [16], higher levels of the GSSG/GSH ratio were found in females than in males, but antioxidant enzymes were decreased in both sexes.
Regarding the components of the inflammasome, in both female and male offspring an increase in NLRP3, PyCARD, and CASP1 was observed after pre- and perinatal exposure to LBPF, together with the consequent release of the proinflammatory cytokines IL-1β, IL-18, IFN-γ and TNF-α. Therefore, one of the stimuli responsible for the activation of NLRP3 components and the release of inflammation-promoting cytokines may be the excess of RNS after exposure to this chemical.
In addition, inflammatory cell infiltration and aggregation were observed, more noticeably after LBPF, in both female and male offspring. However, no notable morphological changes were observed in lactating dams during exposure. Liver damage following perinatal exposure to LBPF was also observed in other studies [30,31]. This may be due to the fact that after perinatal exposure the fetus is in the process of tissue ontogeny and is therefore much more vulnerable to such chemical exposure, so that at postnatal day 6 (PND6) structural alterations are already observed, with aggregation of nuclei and infiltration of inflammatory cells in the liver. Therefore, the fetus is much more sensitive and vulnerable to the effect of BPF on the liver than the adult dams.
Finally, the administration of LBPF had more noticeable effects than HBPF in the liver of lactating dams and their offspring. This might be due to the particular behavior of bisphenols in dose-response curves, so it may also be interesting to evaluate and analyze the effects of BPF, as well as of other BPA analogues, at very low concentrations typical of environmental exposure [32], and on other organs apart from the liver. However, further research on the effect of BPF on inflammation and its mechanisms of inflammasome activation is needed.
Animals and Treatments
After 10 days of acclimatization, 36 female (8 weeks of age) and 18 male (10 weeks of age) Long Evans rats (Janvier Labs, Le Genest-Saint-Isle, France) were randomly divided into three groups: a control group (non-treated), a low-dose BPF group (0.0365 mg/kg body weight/day; LBPF) and a high-dose BPF group (3.65 mg/kg body weight/day; HBPF). In each experimental group there were 12 females and 6 males. Except for the control group, which received chow with a corresponding concentration of corn oil, all groups were fed their corresponding diet with BPF, and the experiment lasted 60 days. Food and water were provided ad libitum. The doses of BPF were chosen according to previous studies on BPA [26,33] and the large existing literature, in which BPA in the dose range of 2.5-50 mg/kg induced learning impairment and memory loss in rodents when administered in the perinatal period. Thus, the high dose of 3.65 mg/kg is above 2.5 mg/kg, while the low dose was 100 times lower, to investigate whether any effects could be observed even with such a small dose.
All experimental procedures in this study were in accordance with the Guidelines for Ethical Care of Experimental Animals of the European Union (2010/63/UE) and approved by the Ethical Committee of the Complutense University of Madrid (Madrid, Spain). This research is part of a European project entitled "Novel Testing Strategies for Endocrine Disruptors in the Context of Developmental NeuroToxicity", supported by the European Union's Horizon 2020 Research and Innovation Programme (ENDpoiNTs project; grant number: 825759).
Chemicals and Experimental Design
BPF with purity > 99% was purchased from Sigma Aldrich (Buchs, Switzerland) (CAS Number 620-92-8; article number: 239658). It was dissolved in ethanol and then in corn oil at a ratio of 10% ethanol and 90% corn oil. The chosen rat chow was purchased from Granovit AG (Kaiseraugst, Switzerland) and corresponds to a diet with natural ingredients low in phytoestrogens.
Rats were housed in special polypropylene cages (Sodispan Research, Coslada, Madrid, Spain), water bottles were made of glass, and a cylindrical environmental enrichment element was included. The in vivo experimental design consisted of five phases: premating (2 weeks), mating (10 days), pregnancy (23 days), lactation (6 days) and dissections. During premating, female and male rats were treated with the control diet or the corresponding dose of BPF in the diet for 2 weeks. After checking that the female was in the estrus phase, the mating phase took place between a male and a female from the same group. The following morning, a check for a sperm-positive vaginal smear or sperm plug was carried out, and the process was repeated every morning for 10 days. Diet treatment was maintained during the whole pregnancy period. Six females were pregnant in the control and LBPF groups, and 10 females were pregnant in the HBPF group. Before the birth of the offspring, pregnant dams were separated into individual cages for lactation, and dietary treatment was maintained until postnatal day 6 (PND6). During all phases of the in vivo experiment, the cages of the control group were kept separate from the BPF-treated groups to avoid any possibility of spreading chow containing BPF.
Lactating dams were sacrificed by decapitation using a guillotine. Female and male offspring were sacrificed at PND6 by decapitation using scissors. The livers were collected, immediately frozen in liquid nitrogen, and stored at −80 °C until analysis (Figure 6).
Protein detection was performed using the Clarity Western ECL Substrate assay kit (Bio-Rad Laboratories, Richmond, CA, USA) by chemiluminescence with the BioRad® ChemiDoc MP Imaging System to determine the relative optical densities. Pre-stained protein markers were used for molecular weight determinations. The intensity of the bands present in each lane was analyzed using BioRad® Image Lab software (Bio-Rad Laboratories, Richmond, CA, USA), normalizing all measurements to the amount of total protein loaded in each well (using the Stain-Free technology of the precast acrylamide gels); a worked example of this normalization is sketched after the statistical analysis subsection below.
Histological Staining
Liver tissues were fixed in a 10% formalin buffer solution for 24 h and samples were processed for embedding in paraffin. Serial sections (5 µm) were prepared using a Leica RM2125 RTS rotary microtome (Leica Biosystems, Wetzlar, Germany) for hematoxylin and eosin (H&E) staining. The sections were stained with 0.1% hematoxylin (Ciba, Basel, Switzerland) for 5 min. Then slides were washed with tap water for 15 min, with a quick wash in hydrochloric alcohol (0.5% HCl in absolute ethanol) to remove excess staining from the sample (differentiation). The acid was neutralized by immersing the sections in tap water for 5 min, followed by a final wash with distilled water. They were then immersed in 0.1% eosin (Ciba, Basel, Switzerland) for 5 min. After washing with distilled water, tissue sections were dehydrated using ascending ethanol passages and finished in xylol for 30 s. Images were captured with a Leica Microscope (Leica Biosystems, Wetzlar, Germany).
Statistical Analysis
Results were presented as mean ± SD. Means from more than two experimental groups were compared by one-way analysis of variance (ANOVA). To account for multiple comparisons, the Tukey-Kramer multiple comparison test was applied after testing for normal distribution. All statistical analyses were carried out with Prism v8 (GraphPad Software Inc., San Diego, CA, USA). Statistical significance was set at p < 0.05 in all the statistical analyses.
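As referenced above, the densitometry step (band intensity normalized to total lane protein, then expressed relative to controls) was done in Image Lab; the snippet below is only a hedged illustration of that arithmetic on made-up numbers, not the vendor software or the study data.

```python
import numpy as np

# Hypothetical band intensities for one target protein (e.g., NLRP3) and the
# total-protein (Stain-Free) signal of each lane, in arbitrary densitometry units.
band_intensity = np.array([1200., 1100., 1900., 2050., 1400., 1500.])
total_protein  = np.array([5000., 4800., 5100., 5200., 4900., 5050.])
group          = np.array(["C", "C", "LBPF", "LBPF", "HBPF", "HBPF"])

# 1) Normalize each band to the total protein loaded in its lane.
normalized = band_intensity / total_protein

# 2) Express every lane as fold change over the mean of the control lanes.
control_mean = normalized[group == "C"].mean()
fold_change = normalized / control_mean

for g in ("C", "LBPF", "HBPF"):
    values = fold_change[group == g]
    print(f"{g}: {values.mean():.2f} ± {values.std(ddof=1):.2f} (fold over control)")
```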
Conclusions
One of the BPA analogues that is replacing its use in plastic products is BPF. In this study, lactating dams treated with LBPF showed an increase in iNOS and HO-1d, activation of NLRP3 components, and release of proinflammatory cytokines. Similar effects were found in the offspring after perinatal exposure. The study found that BPF exposure caused an increase in nitrosative stress markers and proinflammatory cytokines. The activation of the NLRP3 inflammasome was significantly increased in the liver of lactating dams and PND6 offspring. These findings suggest that BPF exposure can cause liver inflammation and may contribute to the development of liver disease.
Figure 2. BPF pre- and perinatal effect on nitrosative stress markers in the liver of female and male PND6 offspring. (a) mRNA levels of eNOS, iNOS and HO-1d in female offspring; (b) mRNA levels of eNOS, iNOS and HO-1d in male offspring; (c) protein expression of eNOS, iNOS and HO-1d in female offspring; (d) protein expression of eNOS, iNOS and HO-1d in male offspring; and (e) representative eNOS, iNOS and HO-1d protein blots measured by Western blotting in both sexes. Data represent mean ± SD. For mRNA analysis, n = 12 female PND6 pups and n = 12 male PND6 pups for each experimental group, control (C), low-dose BPF (LBPF) and high-dose BPF (HBPF), were evaluated, with three replicates for each sample; for protein analysis, n = 5 females and n = 5 males per experimental group. Statistical significance was determined by one-way ANOVA. * p < 0.05; ** p < 0.01 compared to control group. ∇ p < 0.05; ∇∇ p < 0.01, LBPF vs. HBPF.
Figure 4. BPF pre- and perinatal effect on the release of pro-inflammatory cytokines in the liver of female and male PND6 offspring. (a) mRNA levels of IL-1β, IL-18, IFN-γ and TNF-α in female offspring; (b) mRNA levels of IL-1β, IL-18, IFN-γ and TNF-α in male offspring; (c) protein expression of IL-1β, IFN-γ and TNF-α in female offspring; (d) protein expression of IL-1β, IFN-γ and TNF-α in male offspring; (e) IL-18 protein levels in male and female offspring measured by ELISA and (f) representative IL-1β, IFN-γ and TNF-α protein blots measured by Western blotting in both sexes.
Figure 5.
Histological study after BPF exposure of liver from (a) dams, (b) female and (c) male PND6 offspring stained with H&E. Representative images from control, LBPF and HBPF liver (10×) and magnified image of the specific tissue section (20×) indicating the aggregation of nuclei.
Figure 6. Experimental design. The parental generation (F0) was exposed to a diet containing a low dose (LBPF; 0.0365 mg/kg body weight/day) or a high dose (HBPF; 3.65 mg/kg body weight/day) of BPF, or received a control diet (C), during the entire experiment. The levels of nitrosative stress and the NLRP3 inflammasome pathway in the liver of lactating dams and their offspring after BPF administration were studied. Activation of the NLRP3 inflammasome ultimately resulted in the release of the interleukins IL-1β, IL-18, IFN-γ and TNF-α, and could be triggered by different stimuli, including the generation of reactive oxygen and nitrogen species (ROS/RNS). Figure created with Prism v7 (GraphPad Software Inc., San Diego, CA, USA).
ACP-ADA: A Boosting Method with Data Augmentation for Improved Prediction of Anticancer Peptides Cancer is the second-leading cause of death worldwide, and therapeutic peptides that target and destroy cancer cells have received a great deal of interest in recent years. Traditional wet experiments are expensive and inefficient for identifying novel anticancer peptides; therefore, the development of an effective computational approach is essential to recognize ACP candidates before experimental methods are used. In this study, we proposed an AdaBoost algorithm with random forest as the base learner, called ACP-ADA, which integrates binary profile features, the amino acid index, and amino acid composition into a 210-dimensional feature vector to represent the peptides. Training samples in the feature space were augmented to increase the sample size and further improve the performance of the model in the case of insufficient samples. Furthermore, we used five-fold cross-validation to find model parameters, and the cross-validation results showed that ACP-ADA outperforms existing methods for this feature combination with data augmentation in terms of performance metrics. Specifically, ACP-ADA recorded an average accuracy of 86.4% and a Matthews correlation coefficient of 74.01% for dataset ACP740, and 90.83% and 81.65% for dataset ACP240; consequently, it can be a very useful tool in drug development and biomedical research. Introduction Cancer is currently the second most common cause of death and a leading cause of morbidity worldwide [1]. Rather than being a single disease, cancer is a heterogeneous set of complex disorders marked by unchecked cell proliferation and the ability to quickly spread or invade other parts of the body [2]. Chemotherapy and radiotherapy are two common conventional cancer treatments that are costly and frequently have negative side effects on healthy cells. Additionally, resistance to the existing anticancer chemotherapeutic medicines can develop in cancer cells [3]. Therefore, new anticancer drugs must be developed regularly to slow cancer cell proliferation. Peptide-based therapy provides significant benefits over small-molecule therapies due to the high selectivity, improved tumor penetration capabilities, and minimal toxicity of peptides under normal physiological settings [4,5]. Anticancer Peptides (ACPs) do not interfere with healthy bodily processes; rather, they provide new therapeutic options. The discovery of ACPs has opened new avenues for cancer treatment. ACPs are made up of 10 to 60 amino acids and feature an amphipathic cationic structure [6] that can interact with the anionic lipid membranes of cancer cells, enabling targeted treatment. Therefore, the discovery of new ACPs is critical for successful clinical applications. Experiments have identified and validated an increasing number of ACPs from protein sequences; however, using the experimental method to identify ACPs is time-consuming, laborious, and costly [1,7]. As a result, computational methods for ACP recognition based on robust composition feature vectors with physicochemical properties using boosting algorithms are urgently required. Numerous computational techniques in the domain of bioinformatics are utilized to solve various types of issues [8]. In particular, machine learning-based computational methods are used for the identification of ACPs.
Based on a support vector machine, Anti-CP was the first computational tool to utilize binary profiles and sequence-based features [9]. Chou's pseudo-amino acid composition (PseAAC) and a local alignment kernel have been introduced for ACP prediction [10]. A computational model based on the optimization of a 400-dimensional feature vector of dipeptide residue components called g-gap features, representing the order and dipeptide composition of amino acids in peptide sequences, was proposed for the prediction of ACPs in [11]. An SVM was used to depict ACPs using amino acid composition, average chemical shifts, and reduced amino acid composition [12]. In [13], the authors developed a feature representation learning method using a two-step feature selection method to enhance the prediction of ACPs. In [14], the authors developed a generalized chaos game feature representation method for ACP prediction. An ensemble learning model applied for the identification of ACPs used different features and classifiers, with the classifier output used as input to an SVM for the prediction of ACPs [14,15]. In [15], the authors proposed a novel computational approach for the accurate identification of ACPs using a deep learning algorithm. The authors of [16] developed a novel method called DRACP, using sequence and chemical characteristics for the identification of ACPs. In [17], the authors proposed a deep learning long short-term memory (LSTM) model called ACP-DL to forecast ACPs using high-efficiency feature representation. AntiCP 2.0, an updated model for the prediction of ACPs using various features and different classes of machine learning classifiers on two datasets, ACP740 and ACP240, has also been proposed [18]. A data augmentation method named ACP-DA, which uses sequential physicochemical features and a multi-layer perceptron (MLP) classifier, has been proposed to predict ACPs as well [19]. The number of ACPs used in the above strategies has not surpassed 1000 cases, which is a small sample size; the prediction performance of these strategies can be further improved if additional ACPs are included [20]. In the method proposed in this paper, we use concatenated features with data augmentation through a boosting classifier called the Adaptive Boosting Classifier (ADA) with a random forest base learner, and further improve the performance of ACP prediction via machine learning. In this method, the binary profile feature (BPF), amino acid index (AAINDEX), and amino acid composition (AAC), which describe the order and composition of the peptides along with their physicochemical properties, are concatenated to represent the peptides; the training set is then augmented in the 210-dimensional feature space. The augmented training samples are then used to train a machine learning model for ACP prediction. There are four steps involved in the proposed method, as shown in Figure 1. First, the given peptide sequences are input and each peptide sequence is preprocessed to an equal length. Second, we calculate the BPF (140-dimensional feature vector), AAINDEX (50-dimensional feature vector, with features selected based on minimum redundancy maximum relevance (mRMR)), and AAC (20-dimensional feature vector) of the peptides, contributing a 210-dimensional feature vector. Third, the training samples are augmented based on the contributing feature vector, and the augmented training samples are used to train the boosting classifier.
Finally, to test the performance of the proposed technique, we apply five-fold cross-validation to evaluate ACP-ADA on two benchmark datasets, ACP740 and ACP240. We assess the effectiveness of this strategy using several classification metrics and examine the outcome of augmentation using different classifiers. The results obtained from the experiment demonstrate that data augmentation based on the concatenated hybrid feature vector, that is, BPF, AAINDEX, and AAC, can improve the prediction of ACPs given a suitable choice of classifier. Thus, the proposed ACP-ADA method is suitable for prediction. Figure 1. Step flow diagram of ACP-ADA: binary profile features (BPFs), amino acid index (AAINDEX) features after feature selection, and amino acid composition (AAC) were integrated to represent peptides, and the samples in the training set were augmented in the feature space. After data augmentation, the samples were used to train a machine learning model for the prediction of anticancer peptides (ACPs). Results In this section, we illustrate the effects of the concatenated features (BPF+AAINDEX+AAC) on the performance of the proposed method when using different classifiers with and without data augmentation. Finally, we compare the proposed method with existing methods using different classifiers. Parameter Discussion One parameter affecting the performance of the model is Lx, the peptide length after pre-processing, which was selected as a length of 40, 50, or 60. In the data augmentation stage, N is an additional parameter corresponding to the number of new positive (negative) samples in the model; N can be set to 100, 200, or 300 percent of the initial positive (negative) sample number. The prediction performance of the model for different values of the parameter Lx (the peptide length) and N (the percentage of augmentation) on the datasets ACP740 and ACP240 is presented in Tables 1 and 2. MCC is a balanced performance evaluation metric that generates a high score only if the classifier correctly predicts most of the positive and negative data instances. Therefore, we chose the best parameters, namely, Lx = 50 and N = 100% for ACP740 and Lx = 50 and N = 300% for ACP240, according to the maximum MCC value. Because ACP240 has fewer samples than ACP740, the value of N is larger for ACP240 than for ACP740, implying that more pseudo-samples are required for ACP240. In addition, the performance of the model was evaluated on the ACP214 test dataset; the results for ACP-ADA on the independent test dataset are explained in the Supplementary Materials Section S2 (Figure S2, Section S2.1). Comparison of Different Feature Performance BPF and the k-mer sparse matrix have proven to be effective in ACP-DL [17], and a physicochemical property feature descriptor called AAINDEX was introduced in the therapeutic peptide predictor PPTPP [21]. AAC features were introduced to identify anticancer peptides through an improved hybrid composition using BPF and physicochemical properties [22]. BPF, AAINDEX, and AAC are introduced in this methodology to build a model with robust and explainable features.
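The parameter search just described can be sketched as follows. This is a minimal illustration rather than the authors' published code: `featurize` and `augment` are hypothetical helpers standing in for the feature extraction and augmentation steps described later in the Methods, and the `estimator` keyword assumes scikit-learn 1.2 or newer.

```python
from itertools import product

import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import StratifiedKFold

def select_parameters(featurize, augment, sequences, labels):
    """Pick (Lx, N) by mean five-fold CV MCC, as described in the text.

    featurize(sequences, Lx) -> (n_samples, 210) matrix and
    augment(X, y, N) -> augmented copies of a training fold are assumed
    helpers, not the authors' published API.
    """
    best, best_mcc = None, -1.0
    for Lx, N in product([40, 50, 60], [1.0, 2.0, 3.0]):  # N as 100/200/300%
        X, y = featurize(sequences, Lx), np.asarray(labels)
        scores = []
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        for tr, te in cv.split(X, y):
            # Only the training folds are augmented; test folds stay untouched.
            X_tr, y_tr = augment(X[tr], y[tr], N)
            clf = AdaBoostClassifier(estimator=RandomForestClassifier())
            clf.fit(X_tr, y_tr)
            scores.append(matthews_corrcoef(y[te], clf.predict(X[te])))
        if np.mean(scores) > best_mcc:
            best, best_mcc = (Lx, N), float(np.mean(scores))
    return best, best_mcc
```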
To obtain a more effective feature combination, we used the AdaBoost classifier with random forest as a base learner to build an ACP prediction model, evaluated the performance of models based on each of the three features and their pairwise and full concatenations, both with and without data augmentation, in different peptide models, and then chose the best-performing combination for the anticancer peptide predictor [23,24]. The three individual features were BPF, AAINDEX, and AAC; the concatenations were BPF+AAINDEX, BPF+AAC, AAINDEX+AAC, and BPF+AAINDEX+AAC. The performance of the models for the individual features and their concatenations is depicted in Figure 2. When the three features were applied separately, BPF and AAC performed the best. Based on the MCC value, the BPF+AAINDEX+AAC feature combination produced the best results for ACP740 and ACP240 among the four feature concatenations, as shown in Figure 2. We therefore chose the BPF+AAINDEX+AAC concatenation to represent the peptide sequences. The feature importance for anticancer peptide prediction is explained in the Supplementary Materials Section S2. Classifier Discussion We used the concatenated BPF+AAINDEX+AAC feature to represent peptides. It was then necessary to determine which classifier worked best with our strategy. In Figure 3, the horizontal axis represents the classifier and the vertical axis represents the MCC value for each classifier with and without data augmentation. We analyzed the performance of the prediction model with and without data augmentation on seven selected models: Multi-Layer Perceptron (MLP), a neural network-based model for prediction; Support Vector Machine (SVM), which classifies peptides using a hyperplane; Random Forest (RF), a tree-based model which classifies peptides based on if-then rules; k-Nearest Neighbours (KN), which separates two classes using the labels of the nearest neighbours; Extremely Randomized Trees (ET), a tree-based ensemble built from decision trees; Gradient Boosting Classifier (GB), a boosting method that focuses on samples previously misclassified by a weak learner and tries to improve the prediction; and AdaBoost (base learner = RF), an adaptive boosting method constructed using random forest as the weak learner. We utilized MCC to assess and test the models' performance because it is a comprehensive metric. The performance of the selected models on the ACP740 and ACP240 datasets is shown in Figure 3 (comparison of the prediction models with and without data augmentation on the ACP740 and ACP240 datasets). Figure 3 confirms that, on the ACP740 dataset, the prediction models built using MLP, RF, ET, GB, and ADA show performance improvements in terms of the MCC value used to evaluate the prediction models, whereas data augmentation causes performance degradation in the models based on SVM and KN. On the ACP240 dataset, data augmentation enhances the performance of the prediction models developed based on RF, KN, ET, GB, and ADA, while the relative prediction performance of the models based on MLP and SVM decreases. Thus, when using RF, ET, GB, and ADA, data augmentation improved the performance of the ACP prediction model. This finding indicates that the effectiveness of data augmentation is linked to the classifier selected for prediction. Therefore, MLP, SVM, and KN were not suitable for our prediction model.
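A minimal sketch of this classifier comparison, mirroring Figure 3; the seven models are instantiated here with mostly default scikit-learn hyperparameters (our choice, not the paper's), and `augment` is again an assumed helper applied to training folds only:

```python
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

CLASSIFIERS = {
    "MLP": MLPClassifier(max_iter=500),
    "SVM": SVC(),
    "RF": RandomForestClassifier(),
    "KN": KNeighborsClassifier(),
    "ET": ExtraTreesClassifier(),
    "GB": GradientBoostingClassifier(),
    "ADA": AdaBoostClassifier(estimator=RandomForestClassifier()),
}

def compare_classifiers(X, y, augment=None):
    """Mean five-fold CV MCC per classifier.

    augment(X, y) is an assumed helper returning an augmented training fold;
    pass None to reproduce the no-augmentation setting.
    """
    results = {}
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for name, clf in CLASSIFIERS.items():
        scores = []
        for tr, te in cv.split(X, y):
            X_tr, y_tr = augment(X[tr], y[tr]) if augment else (X[tr], y[tr])
            clf.fit(X_tr, y_tr)
            scores.append(matthews_corrcoef(y[te], clf.predict(X[te])))
        results[name] = float(np.mean(scores))
    return results
```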
Based on MCC as the comprehensive metric for evaluating the performance of the model, we chose the AdaBoost classifier (ADA) to build the final predictive model. Though the GB method achieved the best performance on ACP740, its classification performance on the ACP240 dataset after data augmentation, which consists of relatively fewer samples, was much weaker. Therefore, the ADA method was selected as a more robust alternative for classifying ACPs and non-ACPs on both datasets. ADA shows a better performance improvement on both datasets after data augmentation compared to the other classifiers. The resulting method built around the AdaBoost classifier, which has exhibited outstanding performance in various fields in recent years, is called ACP-ADA. The results of our Adaptive Boosting classifier on the ACP740 and ACP240 peptide datasets show significant improvement compared with previous state-of-the-art models. It achieves a better performance based on both ACC and MCC, which indicates that the proposed ACP-ADA model can be used as an anticancer peptide model for investigating ACPs and non-ACPs. Comparison with Existing Methods To ensure the effectiveness and efficiency of the proposed method, we compared the performance of ACP-ADA with ACP-DA [19], ACP-DL [17], AntiCP 2.0 [18], and DeepACP [15] while relying on the same benchmark datasets and corresponding classification evaluation metrics. Compared with ACP-DA, our method has a distinct advantage: it is accompanied by a concatenated feature vector (BPF+AAINDEX+AAC) representing the order, composition, and physicochemical properties of the peptides, together with data augmentation and a boosting classifier, which is an ensemble learner that focuses on incorrectly classified samples. The proposed method with concatenated hybrid feature vectors and data augmentation outperforms ACP-DA in most metrics, especially the two most important performance metrics, ACC and MCC. As shown in Figure 4, the performance of the proposed method on the ACP740 and ACP240 datasets was better than that of ACP-DA, ACP-DL, DeepACP, and AntiCP 2.0. Compared to ACP-DA, the current leading model, our method improved ACC by 5%, PRE by 5%, SPE by 6%, and MCC by 9% for the ACP740 dataset. For ACP240, the number of samples is lower than for ACP740; nonetheless, our method improved ACC by 3%, PRE by 1%, SPE by 2%, and MCC by almost 6%. The proposed method outperformed the alternatives on the ACP240 dataset in terms of both the ACC and MCC evaluation metrics, indicating that our strategy is well suited to datasets with a lower fraction of samples. This method applies Gaussian-noise oversampling together with the AdaBoost classifier using random forest as a base learner and a feature vector representing the order and composition with physicochemical properties, which improves the prediction of ACPs. In addition, the performance of ACP-ADA and all control methods was evaluated on the ACP214 test dataset; details are provided in the Supplementary Materials Section S2. Discussion Tracing the etiology of cancer remains challenging because of its ambiguous mechanisms. According to a systematic examination, individual feature vectors do not offer viable biomarkers for predicting peptide activity.
Therefore, in order to investigate a suitable feature vector, we used BPF, AAINDEX, AAC, and their combinations to represent the order, composition, and physicochemical properties of peptides and obtain a suitable feature representation. From the feature-comparison experiment based on the maximum MCC value for the ACP740 and ACP240 datasets, we selected the concatenation of BPF, AAINDEX, and AAC to represent the peptides. We extracted 210-dimensional feature vectors from this feature combination to represent peptides in the feature space. Here, we propose an ACP prediction method called ACP-ADA which uses a boosting method along with data augmentation of the training samples. According to the results on the two datasets, the proposed model has good overall performance. Compared with existing methods, ACP-ADA had better results in classifying whether peptides were ACPs or non-ACPs; its accuracy may be attributed to the following reasons. First, we used effective feature representation methods to characterize peptide sequences. To find the feature combinations, we concatenated three feature representation methods, BPF, AAINDEX, and AAC, to form robust features. Experiments on the ACP740 and ACP240 datasets show that the concatenated features obtain the best performance; therefore, we used the three-feature combination to represent the peptide sequences. Second, to compensate for the lack of samples in the training set, data augmentation was applied to generate pseudo-samples. We generated pseudo-samples by adding perturbations to the training samples in the 210-dimensional feature space of the original samples. The feature space of the samples was formed by the concatenation of BPF, AAINDEX, and AAC as a hybrid feature, resulting in a 210-dimensional numerical feature vector. BPF is composed of vectors of 1s and 0s, which are incompatible with the addition of noise; thus, we only added noise to AAINDEX and AAC to generate pseudo-samples. The augmented training samples were used to train the machine learning model to further improve the performance of the prediction model, and their impact depended significantly on the choice of classifier. Finally, various models have shown good performance in many bioinformatics classification tasks; however, it was unclear whether data augmentation could improve the performance of prediction models using different classifiers. Therefore, we analyzed the effect of this methodology using seven different classifiers. The results show that data augmentation is effective when using the RF, ET, GB, and ADA (with RF as the base learner) classifiers. Therefore, we selected ADA, a boosting classifier, as the final classifier with the best overall performance. In summary, the proposed method for the identification of ACPs showed improved performance; it is our hope that ACP-ADA can play an important role in biomedical research and the development of new anticancer drugs. Furthermore, a comparative analysis with other methods showed that ACP-ADA was better than the other methods in most cases. To accurately and quickly identify ACPs, a boosting classifier was applied to discriminate peptide sequences using a 210-dimensional feature vector; the classifier prioritizes incorrectly classified samples while constructing random forests to form the complete AdaBoost classifier. As an ensemble learning method, boosting effectively prevents overfitting; it performed well on test data and achieved a comparative improvement in the prediction of ACPs.
In addition, the secondary and tertiary structure prediction characteristics of peptides could be added to this model as feature descriptors, which may improve the performance of the model with the data augmentation method. Furthermore, neural network methods can be used for the identification of ACPs as dataset sizes increase. Because of the successful result with data augmentation for the dataset with a low sample proportion (the ACP240 dataset) using machine learning boosting methods, we conclude that this methodology for peptide data augmentation can be applied to training deep learning models such as convolutional neural networks, recurrent neural networks, Transformers, and several language models. Based on our predictive performance improvement for the dataset with a lower number of positive and negative samples, we assert that this method of peptide data augmentation can enhance predictive performance on datasets with fewer samples using advanced deep learning models, which can be further explored for peptide-based research using data augmentation to improve model performance. Data Acquisition In this study, a machine learning model called the boosting method is proposed to predict ACPs. Called ACP-ADA, the proposed method uses concatenated features provided by BPF, AAINDEX, and AAC. We evaluated the predictive performance of ACP-ADA on the ACP740 and ACP240 benchmark datasets. Furthermore, using the common tool CD-HIT [20], sequences with a similarity of more than 90 percent were eliminated [20,25]; we used a similar configuration to previous works for a fair comparison on the two benchmark datasets. Therefore, there were no duplicate sequences between the datasets, and both were unique and non-redundant. These datasets can be publicly accessed. In addition, we built datasets with a CD-HIT cutoff of 0.35, named ACP614 and ACP214. A description of the datasets and experimental results is provided in the Supplementary Materials Section S2. Preprocessing The iLearn Python package [26] can encode peptides of the same length. The lengths of the peptides in the ACP740 and ACP240 datasets were statistically analyzed in order to establish the optimal sequence lengths, which we then used to preprocess the original peptide sequences. As shown in Figure 5, the majority of the peptides were less than 60 amino acids in length. To retrieve peptides of the same length, each peptide was processed as follows. Sequences shorter than Lx amino acids were padded with "X" until Lx amino acids were reached; for sequences longer than Lx amino acids, the extra amino acids after Lx were removed and only the first Lx amino acids were retained. Lx was set to 40, 50, or 60 [12,19], as we believe that the best representative peptide length for the calculation of BPF, AAINDEX, and AAC lies among these values. Feature Extraction First, the physicochemical characteristics of each amino acid sequence were determined using the AAINDEX function in the iLearn Python package [27]. Because AAINDEX results in a high-dimensional feature vector, mRMR was then used for feature selection. Similarly, the AAC feature descriptor in the iLearn Python package was used to calculate AAC features for the entire peptide sequences [28,29].
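A minimal sketch of this length-normalization rule; the function name is ours, not from the paper:

```python
def normalize_length(seq: str, Lx: int = 50) -> str:
    """Truncate to the first Lx residues, or right-pad with 'X' up to Lx."""
    return seq[:Lx].ljust(Lx, "X")

# Example: a 5-residue peptide padded to Lx = 8 yields 'FLKAAXXX'.
assert normalize_length("FLKAA", 8) == "FLKAAXXX"
```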
The BPFs, AAINDEX, and AAC for each sequence were concatenated to represent the order, physicochemical characteristics, and composition of the peptides. The integrated feature for the prediction can be represented as

Feature = BPF + AAINDEX + AAC    (1)

BPF represents the residue order; AAINDEX represents the peptides in terms of the physicochemical properties (activity-based features) of the 20 amino acid residues; and AAC represents the proportion of residues dominant in ACPs and non-ACPs. Thus, the combination collectively represents the residue order, activity, and percentage of each residue for each peptide. Combining these features can capture the local residue-level order information, structural sequence features, and the proportion of amino acids highly represented in ACPs and non-ACPs as explainable parameters for the sequence and model. Because of this, we selected and extracted BPF, AAINDEX, and AAC as the predictive features in our proposed method. Each individual feature, along with the combined features, was used as a predictor for the machine learning model. Finally, the training samples were augmented in the feature space and used to train the machine learning model, with the trained model assigning the class label to the test sets. The newly constructed datasets ACP614 and ACP214 (with a CD-HIT cutoff of 0.35) were featurized based on PSSM; the details are explained in the Supplementary Materials Section S2. Representation of Peptides Converting peptides of various lengths into feature vectors of a fixed length is the primary goal of feature representation. The unprocessed peptide sequence P of length L can be modeled as

P = [P(1), P(2), P(3), . . . , P(L)]    (2)

where P(1), P(2), P(3), and P(L) represent the first, second, third, and terminating residues of the peptide of length L, respectively. To train the machine learning model, residue P(i) served as a general representation of an amino acid in the peptide and a component of the standard amino acid alphabet. The primary step was converting the variable-length peptides into a fixed length in order to calculate the binary profile feature, amino acid index, and amino acid composition to represent the peptide sequences. In this study, we introduced three feature representation methods through the concatenation of BPF and AAC with the physicochemical properties called the AAINDEX, as described below; after preprocessing, the peptides can be expressed in terms of a fixed length Lx, formulated in Equation (3) as

P = [P(1), P(2), P(3), . . . , P(Lx)]    (3)

BPF The binary profile has the advantage of providing the order of residues in the peptides, which is not feasible with composition-based characteristics [30,31]. As a result, binary profile traits can distinguish peptides that are chemically similar but functionally distinct. It was difficult to build a fixed-length pattern because the lengths of the peptides employed in this investigation differed. To solve this problem and generate a fixed-length pattern, we isolated fixed-length segments from the N-terminus to represent the peptide, with each amino acid type represented using a 0/1 feature vector. The first amino acid type in the alphabet was encoded as f(A) = (1,0,0,0, . . . ,0), whereas the second type was encoded as f(C) = (0,1,0,0, . . . ,0). The N-terminus of a particular peptide sequence P, taking the first k amino acids, was encoded as the feature vector

BPF(P) = [f(P(1)), f(P(2)), . . . , f(P(k))]    (4)

where k represents the length of the N-terminal segment used.
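Under these definitions, the BPF and AAC blocks are straightforward to compute. The sketch below is our own illustration, assuming k = 7 (the value adopted next) and the alphabetical 20-letter amino acid alphabet; the 50-dimensional AAINDEX block is taken as a precomputed input because it relies on the iLearn package and mRMR selection, and AAC is defined formally in the AAC subsection below.

```python
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues, alphabetical
AA_INDEX = {aa: i for i, aa in enumerate(ALPHABET)}

def bpf(seq: str, k: int = 7) -> np.ndarray:
    """One-hot encode the first k residues -> 20*k = 140-dim vector (Eq. 4).
    Padding characters such as 'X' map to an all-zero column."""
    out = np.zeros((k, 20))
    for i, aa in enumerate(seq[:k]):
        if aa in AA_INDEX:
            out[i, AA_INDEX[aa]] = 1.0
    return out.ravel()

def aac(seq: str) -> np.ndarray:
    """Residue frequencies over the 20 natural amino acids -> 20-dim vector
    (Eq. 5 below); computed over the entire sequence, as in the paper."""
    counts = np.zeros(20)
    residues = [aa for aa in seq if aa in AA_INDEX]
    for aa in residues:
        counts[AA_INDEX[aa]] += 1.0
    return counts / max(len(residues), 1)

def featurize(seq: str, aaindex_50d: np.ndarray) -> np.ndarray:
    """Concatenate BPF (140) + AAINDEX (50, assumed precomputed) + AAC (20)
    into the 210-dimensional representation."""
    return np.concatenate([bpf(seq), aaindex_50d, aac(seq)])
```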
The experiments suggest that setting k to 7 produces the best results [17,19]. As a result, the BPF vector encodes a particular peptide sequence into a 20 × 7 = 140-dimensional feature vector. AAINDEX The most useful qualities for representing biological reactions are the physicochemical characteristics of amino acids, which have been widely employed in bioinformatics studies. Numerous published indices representing the physicochemical characteristics of amino acids can be found in the AAINDEX database [4,10,30], including a set of 20 numerical values for each physicochemical property across all amino acids. The AAINDEX database's 544 physicochemical attributes were retrieved, and properties marked "NA" for any of the amino acids were removed, leaving a total of 531 physicochemical characteristics to represent each residue in the peptide sequence. The AAINDEX descriptor can be used to encode peptides of the same length [32]. When Lx is set to 40, the AAINDEX descriptor for a peptide of length 40 produces a feature vector with a dimension of 21,240, which is excessively high and results in a dimension disaster. After the physicochemical properties of the peptides (AAINDEX) were extracted using the iLearn platform, we chose the best 50 feature vectors to represent the peptide sequences and reduce the dimensionality issue using the mRMR approach. AAC The frequency of each residue in the peptide sequence was determined using AAC encoding. AAC, which demonstrates that particular residues are more prevalent in ACPs than in non-ACPs, can be used to discriminate between ACPs and non-ACPs. As a result, the AAC feature was added to represent the peptide and extracted into a fixed-dimensional feature vector using the iLearn Python tool. The frequencies of all 20 natural amino acids (i.e., "ACDEFGHIKLMNPQRSTVWY") can be described by Equation (5):

F(t) = N(t)/N,  t ∈ {A, C, D, . . . , Y}    (5)

Here, N(t) is the number of occurrences of an amino acid of type t, and N is the length of the protein or peptide sequence; the result is a 20-dimensional feature vector representing the AAC of the peptide sequence. A conjoint feature vector was formed to represent the peptides using BPF (140), AAINDEX (50), and AAC (20); the new feature vector dimension was 140 + 50 + 20 = 210. In addition to sequential order information features and sequential composition features, we calculated PSSM features for the newly constructed datasets; a detailed description is provided in the Supplementary Materials Section S2. Data Augmentation When solving scientific problems, data imbalance and insufficient data are common issues in machine learning and deep learning technologies [30]. Historically, data augmentation has been employed in the field of computer vision to handle this challenge, using operations such as flipping, scaling, zooming, translating, and cropping the original sample [13,18]. Data augmentation can help to solve data imbalance issues, and the problem of a small sample size can be fixed by enhancing the data. Noise-added oversampling techniques, which produce pseudo-samples by perturbing the original samples in the feature space, can be used to create new samples. To enhance the effectiveness of the ACP prediction model, the number of positive and negative samples in the datasets was increased using peptide data augmentation techniques. The characteristics of the peptides were divided into three sections, namely, BPFs, AAINDEX, and AAC.
BPFs are binary codes consisting of 0 and 1, and as such are not suitable for adding perturbations, as adding a noise value to these bits results in a loss of the order information. Only the AAINDEX and AAC are suitable for perturbation. The mathematical method for generating new samples F_new for training the model is described by Equation (6):

F_new = F(i) + a × V    (6)

where F(i) is a random sample from the training samples of peptide sequences, i = 1, . . . , N, N is the total number of positive (negative) samples, and V is a 210-dimensional vector used to generate the perturbation corresponding to F(i). In order to improve model learning, we performed peptide augmentation by adding noise to the training samples following the Gaussian distribution and left the test set without data augmentation; because the test sets are used for the evaluation of model performance, they are not suitable for data augmentation. Here, V is composed of three parts: the first, corresponding to the BPFs, leaves the 140-dimensional vector of zeros and ones unchanged, while the others are a 50-dimensional random vector and a 20-dimensional random vector with values between 0 and 1, corresponding to the AAINDEX and AAC, respectively. Thus, perturbation was added to the AAINDEX and AAC while the BPFs were kept unchanged in the pseudo-sample set F_new, where 'a' is the coefficient of perturbation, set to 0.02 for ACP740 and 0.005 for ACP240. We tried adding different values of perturbation, usually preferring a range of 0 to 1 so that the perturbed features remained close to their original distribution. After training and testing with different values, we found 0.02 and 0.005 to be the best values for the feature distributions of ACP740 and ACP240, respectively, as the perturbed vectors closely resemble the original AAINDEX and AAC features. Augmenting the samples with these values led to improved prediction performance; therefore, these fixed noise values were considered standard for augmenting the samples in ACP740 and ACP240. To obtain N new samples, the sampling process was repeated N times using these noise values for the ACP740 and ACP240 datasets. AdaBoost Random Forest Model When adaptive boosting is used in conjunction with the random forest approach, there are two options. The first is "boost in the forest", in which an AdaBoost classifier is generated for each random vector k (i.e., a set of variables); a series of 'simple' AdaBoost classifiers, each with a limited number of variables, is then used to arrive at a final result [33]. Here, we instead use a different approach in which a random forest is used as the weak learner. It is clear from a numerical standpoint that AdaBoost works faster with simple weak-learner algorithms than with forests of trees, which is important for real-time applications; the philosophical idea behind weak-learner algorithms is to find weak hypotheses quickly with a moderate error rate [34,35]. An AdaBoost classifier is a meta-estimator that starts with the original dataset and then fits new copies on the same dataset while adjusting the weights of poorly classified instances, ensuring that succeeding classifiers focus on more difficult cases. Owing to its excellent performance, this classifier has gained popularity in many fields of bioinformatics [36,37]. To build the model, we used the scikit-learn Python package; we developed the AdaBoost model with random state = 121, number of estimators = 406, and learning rate = 0.04; the other parameters were set to the default values shown in Table 3.
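The augmentation rule in Equation (6) can be sketched as below, assuming the 210-dimensional layout described earlier (BPF in coordinates 0-139, AAINDEX in 140-189, AAC in 190-209). The uniform draw for V follows the description above; for simplicity this sketch samples from the whole training set, whereas the paper augments the positive and negative classes by the same percentage. This is our illustration rather than the authors' code.

```python
import numpy as np

def augment(X, y, a=0.02, n_new=None, rng=None):
    """Generate pseudo-samples per Eq. (6): F_new = F(i) + a * V.

    V is zero on the 140 BPF coordinates and uniform in [0, 1) on the 70
    AAINDEX + AAC coordinates, so the binary order information is untouched.
    a = 0.02 (ACP740) or 0.005 (ACP240) per the text; n_new defaults to
    100% of the training set.
    """
    rng = rng or np.random.default_rng(0)
    n_new = n_new or len(X)
    idx = rng.integers(0, len(X), size=n_new)            # random samples F(i)
    V = np.zeros((n_new, X.shape[1]))
    V[:, 140:] = rng.random((n_new, X.shape[1] - 140))   # perturb AAINDEX + AAC only
    X_new, y_new = X[idx] + a * V, y[idx]
    return np.vstack([X, X_new]), np.concatenate([y, y_new])
```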
For the random forest base learner, the model uses the parameters random state = 120, number of estimators = 300, minimum number of data points placed in a node before the node is split = 10, minimum number of data points allowed in a leaf node = 1, maximum number of features considered for splitting a node = auto, and bootstrap sampling of data points = False, which were identified as the best parameters for the model using five-fold cross-validation. In addition, we evaluated the performance of other classifiers, including MLP (Multi-Layer Perceptron), SVM (Support Vector Machine), RF (Random Forest), KN (k-Nearest Neighbors), ET (Extremely Randomized Trees), GB (Gradient Boosting Classifier), and ADA (AdaBoost classifier with random forest base learner), to build prediction models based on the non-augmented and augmented training data. Among these classifiers, the ADA classifier works best according to the experimental results obtained from the comparisons across features, both with and without data augmentation. Evaluation Metrics of the Model To evaluate the performance of ACP-ADA, we used a five-fold cross-validation strategy. Five performance metrics were used to evaluate the strength of the binary classification tasks: accuracy (ACC), precision (PRE), sensitivity (SEN), specificity (SPE), and the Matthews correlation coefficient (MCC) [21][22][23][24]. Mathematically, these metrics can be computed as follows:

ACC = (TP + TN)/(TP + TN + FP + FN)    (7)
PRE = TP/(TP + FP)    (8)
SEN = TP/(TP + FN)    (9)
SPE = TN/(TN + FP)    (10)
MCC = (TP × TN - FP × FN)/sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))    (11)

where FP stands for false positive predictions, FN stands for false negative predictions, TP stands for true positive predictions, and TN stands for true negative predictions. In addition to these metrics, we used the F1-score to evaluate the performance of the classifiers; the detailed results are provided in the Supplementary Materials Section S2. Conclusions The proposed ACP-ADA method can be used to determine whether peptides are anticancer or non-anticancer based solely on the concatenation of hybrid sequence feature vectors representing the order, composition, and physicochemical properties, with data augmentation. The predicted results obtained by ACP-ADA via five-fold cross-validation on the benchmark datasets ACP740 and ACP240 indicate that the proposed method is comparably better than, or at the very least capable of complementing, future computational models in this area. Because of its success rate on the smaller ACP240 dataset with a lower number of (positive/negative) samples, ACP-ADA is expected to become a useful high-throughput tool that is widely used in drug development and biomedical research. This confirms the data augmentation method as an alternative to over-sampling techniques, as it can boost the performance of various sequence-based peptide and non-peptide models depending on the choice of features and classifier. In the future, we intend to consider more complex feature extraction methods and machine learning algorithms to further improve the performance of ACP prediction models. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
8,016.4
2022-10-01T00:00:00.000
[ "Computer Science" ]
Network component analysis provides quantitative insights on an Arabidopsis transcription factor-gene regulatory network Background Gene regulatory networks (GRNs) are models of molecule-gene interactions instrumental in the coordination of gene expression. Transcription factor (TF)-GRNs are an important subset of GRNs that characterize gene expression as the effect of TFs acting on their target genes. Although such networks can qualitatively summarize TF-gene interactions, it is highly desirable to quantitatively determine the strengths of the interactions in a TF-GRN as well as the magnitudes of TF activities. To our knowledge, such analysis is rare in plant biology. A computational methodology developed for this purpose is network component analysis (NCA), which has been used for studying large-scale microbial TF-GRNs to obtain nontrivial, mechanistic insights. In this work, we employed NCA to quantitatively analyze a plant TF-GRN important in floral development using available regulatory information from AGRIS, by processing previously reported gene expression data from four shoot apical meristem cell types. Results The NCA model satisfactorily accounted for gene expression measurements in a TF-GRN of seven TFs (LFY, AG, SEPALLATA3 [SEP3], AP2, AGL15, HY5 and AP3/PI) and 55 genes. NCA found strong interactions between certain TF-gene pairs including LFY → MYB17, AG → CRC, AP2 → RD20, AGL15 → RAV2 and HY5 → HLH1, and the direction of the interaction (activation or repression) for some AGL15 targets for which this information was not previously available. The activity trends of four TFs (LFY, AG, HY5 and AP3/PI) as deduced by NCA correlated well with the changes in expression levels of the genes encoding these TFs across all four cell types; such a correlation was not observed for SEP3, AP2 and AGL15. Conclusions For the first time, we have reported the use of NCA to quantitatively analyze a plant TF-GRN important in floral development for obtaining nontrivial information about connectivity strengths between TFs and their target genes as well as TF activity. However, since NCA relies on documented connectivity information about the underlying TF-GRN, it is currently limited in its application to larger plant networks because of the lack of documented connectivities. In the future, the identification of interactions between plant TFs and their target genes on a genome scale would allow the use of NCA to provide quantitative regulatory information about plant TF-GRNs, leading to improved insights on cellular regulatory programs. Background Gene expression is a complex process regulated by the interactions of proteins and other molecules with genes.
This regulation occurs at multiple levels, giving rise to gene regulatory networks (GRNs) that define the regulatory programs for the expression of specific genes in response to specific cues [1]. One of the biggest challenges of systems biology is deciphering the organization of GRNs [2,3]. This task is further complicated by feedback- and feedforward-type interactions of a multitude of genes and their protein products upon themselves and others. GRNs are usually modeled as graphs with nodes representing system components (e.g. molecules) and edges indicating interactions between components [1,4,5]. Various methodologies have been developed for the analysis of GRNs including directed graphs, Boolean networks, Bayesian networks and differential equations [2,[6][7][8][9][10][11]. An important subset of GRNs models gene expression as a result of the action of transcription factors (TFs) upon their target genes. In these models, directed edges from TFs to their target genes represent transcriptional regulation, and constitute a hierarchical network governing gene expression [2,12]. The reconstruction of TF-GRNs involves the identification of genes that encode the TFs and the identification of the target genes of the TFs. There is a considerable amount of information available on TF-gene interactions in microbes, which is housed in databases. For example, RegulonDB and DBTBS are extensively curated databases containing information on transcriptional regulation in the bacteria Escherichia coli and Bacillus subtilis respectively [13,14]. The RegPrecise database contains similar information for many other prokaryotes [15], as does the YEASTRACT database for Saccharomyces cerevisiae [16]. The availability of such resources permits accurate reconstruction of TF-GRNs, and consequent network analyses to obtain insights on the regulatory capabilities of the organism of interest. For plants, such information is comparatively sparse, with most regulatory studies directed at inferring GRNs in isolated organs such as roots or leaves, or processes such as development or abiotic stress response [9,17,18]. Large-scale TF-gene interaction data are only available for Arabidopsis thaliana and housed in the Arabidopsis Gene Regulatory Information Server (AGRIS) [19]. Although the establishment of TF-GRN connectivity (i.e. which TF regulates which gene) is very useful, the information contained in such connectivity maps is binary and not quantitative. Understanding quantitative changes in gene expression would provide deeper insights into gene regulation and perhaps even enable predictive modeling of cellular regulatory programs. This would, however, require significant mathematical processing of high-throughput gene expression datasets [20]. Under a given condition, gene expression would depend on the strength of the interaction between a TF and its target gene as well as the activity of the TF under that condition. Therefore, given the connectivity of a TF-GRN and gene expression values under a set of conditions, the next set of questions that need to be answered are: (i) is it possible to obtain the connectivity strengths (CS) of TF-gene interactions for the network, and (ii) can we quantify how TF activity varies across conditions? Estimating the CS between a TF and its target gene may be possible computationally by determining the decrease in free energy for binding between the TF and the DNA region of the target gene it binds to [21,22].
A higher free energy change would indicate stronger binding and a lower free energy change weaker binding [21,23]. However, thermodynamic calculations for determining changes in free energy are nontrivial and would require knowledge of the binding thermodynamics of many TFs and their target genes. The CS between a TF and a gene can also be determined experimentally by using binding assays to determine parameters such as the dissociation constant or changes in free energy and enthalpy [24]. Although parameters derived from such TF-gene binding assays are available in some databases, it would be a laborious exercise to obtain these values for every TF-gene pair [25]. For estimating changes in TF activity, experimental assays may be employed based on the binding of the active form of the TF with a target reporter molecule. However, such assays are only available for a limited number of TFs and would have to be conducted for each condition. Additionally, the experimental approaches for determining TF-gene CS and TF activities suffer from the drawback of being in vitro studies. Consequently, the values determined may not represent the in vivo interactions of the TFs and genes, wherein multiple TFs can act on a single gene. It may appear that changes in the expression levels of the genes corresponding to the TFs could be used as surrogates for TF activities. However, a shortcoming of this approach is that TF activity can be considerably affected by post-transcriptional and post-translational modifications such as phosphorylation and acetylation, and can therefore differ substantially from the expression levels of the corresponding genes. To deduce such quantitative information about TF-GRNs, researchers have developed methodologies like network component analysis (NCA) and regulatory element detection using correlation with expression (REDUCE) [26][27][28][29]. NCA, in particular, models gene expression to be the result of the connectivity strength between TF-gene pairs and TF activity [26]. The strength of the TF-gene interaction indicates the extent of the control of a TF over the transcription of a target gene, whereas the TF activity quantifies how active the TF is in regulating its target genes, either via activation or repression. NCA uses connectivity information about the underlying network and gene expression data to obtain nontrivial information about TF activity and TF-gene connectivity strength. Because the TF activity provides a measure of the TF in its final state, it includes information about the post-transcriptional and post-translational modifications. Compared to experimental approaches for obtaining similar information, NCA allows the deduction of such important regulatory information by a much simpler approach involving the measurement of gene expression for the set of genes in a network. The other input for NCA, the connectivity between TFs and genes, is available for many organisms in databases. Consequently, NCA provides an additional layer of regulatory information without the use of sophisticated experimental measurements [28]. Given the connectivity map underlying a TF-GRN, the NCA framework allows the decomposition of gene expression data into TF activities and connectivity strengths (CS) between each TF and its target genes.
NCA models TF regulation of gene expression by the matrix equation [26,27]:

[log G]m×n = [CS]m×p [log TFA]p×n    {1}

Here, [G]m×n is a matrix representing an experimental gene expression dataset consisting of the expression of m genes across n conditions, and [log G]m×n is its log-transformed version. Similarly, [TFA]p×n is a matrix of the activities of p TFs across the n conditions, and [log TFA]p×n is its log-transformed version. These two matrices are linked by [CS]m×p, which consists of the CS of the p TFs on the m genes. The log-linear relationship used in NCA allows the benefits of linearization during the decomposition while capturing non-linear network behavior to a limited extent. Besides, since high-throughput gene expression data are usually expressed relative to a control condition, the log-linear relationship is convenient while working with relative gene expression data [26]. The NCA decomposition is unique up to a scaling factor when the [CS] and [TFA] matrices satisfy a set of criteria termed "NCA-compliance" criteria [26]. The originally reported NCA algorithm [26] required the presence of as many gene expression data points as regulators for the decomposition. However, a more recent modification of that algorithm [30] permits the analysis of limited microarray datasets, thus widening the applicability of NCA. A detailed analysis of the original NCA algorithm and the modified algorithm is provided in the respective publications [26,30]. NCA has been previously applied for the analysis of microbial and mammalian transcriptional networks. Liao et al. [26] first used NCA to study cell cycle regulation in S. cerevisiae, and specifically to quantify the activities of different TFs during various stages of the cell cycle, thus gaining insight on the regulatory roles of specific TFs at each stage. Kao et al. [27] investigated the effect of a glucose-to-acetate carbon source transition on the activity of TFs in E. coli. They observed specific trends in the changes in activities of several TFs (CRP, FadR, IclR, and Cra) important during this transition. In a further extension of this study, they investigated the growth lag that resulted from the deletion of the ppsA gene in E. coli during this carbon source transition [28]. By using NCA, they deduced the activities of TFs that were affected by the deletion and proposed a mechanism for explaining the growth lag. A set of twin studies investigating the effect of the reactive nitrogen species nitric oxide and S-nitrosoglutathione on E. coli identified important TFs involved in the response to the respective treatments [31,32]. The first study identified 13 important TFs, of which ten had not been previously documented to be involved in the response to nitric oxide [31]. The subsequent study with S-nitrosoglutathione identified four novel TFs (CysB, SF, FlhDC, and TTA) involved in the response to the treatment [32]. The use of NCA in combination with transcriptome data allowed the construction of models depicting the response process for both studies. Brynildsen et al. investigated the isobutanol response network in E. coli and identified the ArcA-ArcB system to be a major regulator of the response via a loss of quinone function [33]. They also compared differences in TF activities in response to isobutanol with those seen for butanol and ethanol, and identified 6 TFs with differing activities for butanol and 19 TFs with differing activities for ethanol compared to isobutanol. In another study [34], Buescher et al. performed a genome-wide TF-gene analysis of B. subtilis
during a change in carbon substrate from glucose to malate and vice versa, and determined the CS for 2900 TF-gene interactions. They deduced TF activities for 154 TFs, out of which 127 TFs were found to change their activities significantly. Interestingly, many of these changes in TF activity were not seen at the mRNA level, thus implicating post-translational modifications in the changes in TF activities. In mammalian systems, Sriram et al. studied the effect of overexpressing the glycerol kinase gene in rat hepatoma cells using a network of 62 genes and 9 TFs [35]. They found an increase in the TF activity for 7 of the TFs (ChREBP, Sp1, HNF1α, HNF4α, PPARα, LXRα, and glucocorticoid receptor [GR]) and a decrease in activity for the remaining 2 TFs (SREBP1a and CEBPβ). The increased activity of GR was hypothesized to be a result of the moonlighting nature of the glycerol kinase enzyme [36]. Sriram et al. experimentally verified the NCA-deduced change in TF activity of GR in the glycerol kinase-overexpressing cell line, thus demonstrating the power of NCA for deducing TF activities from gene expression data in a mammalian network. In a recent study [37], Tran et al. studied the TFs directly downstream of PTEN (phosphatase and tensin homologue deleted on chromosome 10), which is an important tumor suppressor gene. They identified 20 TFs whose activities were altered significantly by the expression of PTEN even when the mRNA levels of the corresponding genes did not change significantly. They found that the activities of many of the identified TFs varied in murine and human cancer models, and provided a signature for identifying the status of PTEN in cancers caused by PTEN loss. In this article, we report the application of NCA to a plant TF-GRN using available regulatory information from AGRIS. Starting with a set of TFs known to be important in floral development, we mined AGRIS to establish a network consisting of confirmed TF-gene connectivities in this developmental event. We used previously published gene expression data [38] for four types of cells isolated from the shoot apical meristem, which is known to initiate the growth of floral organs. By using the connectivity information and gene expression datasets, we employed NCA to deduce activities for the NCA-compliant TFs and numerical values of the CS between the TFs and their target genes. To the best of our knowledge, this is the first study to apply NCA to dissect a plant TF-GRN. Results In this work, we tested the ability of NCA to quantitatively deduce nontrivial information about a plant TF-GRN solely from gene expression data and previously documented TF-gene connectivities. Toward this, we established a TF-GRN consisting of ten TFs: LEAFY (LFY), AGAMOUS (AG), SEPALLATA3 (SEP3), APETALA2 (AP2), AGAMOUS-LIKE 15 (AGL15), ELONGATED HYPOCOTYL 5 (HY5), APETALA3/PISTILLATA (AP3/PI), ATBZIP14 (FD), WUSCHEL (WUS) and BEL1-LIKE HOMEODOMAIN 9 (BLR), using regulatory information available in AGRIS. The network included 57 genes known to be regulated by these TFs, as listed in the AtRegNet database from AGRIS [19]. On the basis of the interaction information obtained from AGRIS (Additional file 1, sheet: AGRIS TF-gene verification), we constructed an initial connectivity matrix for this network for use in NCA (Additional file 1, sheet: Initial connectivity matrix). We screened the Botany Array Resource [39] to locate pertinent gene expression data for the TFs under consideration.
From this database, we selected microarray data from a study [38] that sampled four distinct types of shoot apical meristematic cells (denoted as CLV3n, CLV3p, FILp and WUSp) and that showed expression of the genes encoding LFY and the other TFs included in our network (Additional file 1, sheet: Original microarray data). We then employed the NCA toolbox [26,30] to analyze the network using the gene expression data and the initial connectivity matrix, assuming that the CS was the same across all four cell types. Initial networks constructed for NCA have to be pruned to make them NCA-compliant [26,30]. Along these lines, a subnetwork of 55 genes and 7 TFs (Figure 1) was found to be NCA-compliant (Additional file 2, sheet: NCA-compliant network). The entire NCA output, along with comparisons between deduced TF activities and the expression levels of the genes encoding the TFs, is included in Additional file 2. NCA deduces the strengths of TF-gene interactions NCA decomposes the gene expression matrix into two components: a matrix [CS] signifying interactions between TFs and their target genes, and a matrix [log TFA] of TF activities (Eq. {1}). The matrix decomposition applies specific scaling factors to the activity of a given TF as well as the CS between that TF and its target genes. If negative, this scaling factor can invert the sign of the TF activity and CS pertaining to a given TF. Consequently, the CS and TF activity for each TF may need to be corrected by comparing the CS with the initial connectivity matrix, specifically looking at the connectivity between a TF and a gene that is convincingly known from experimental evidence. Based on this comparison, we corrected the CS and corresponding TF activity for AG, SEP3, AP2 and HY5 (Additional file 2, sheet: TFA and mRNA). Figure 2 depicts the deduced CS values in the analyzed network. The CS between a TF and its target gene determines how strongly the TF activates or represses the corresponding target gene. We used two criteria for defining strong interactions: (i) a CS of more than +1 (activation) or less than −1 (repression), and (ii) low variability across multiple NCA replicate runs. The CS cutoff used for distinguishing strong from non-strong interactions is arbitrary but provides a means of distinguishing interactions between TFs and genes. For example, LFY is strongly connected to ACR7, HB51, GRA1, UNK3, MYB17 and TLP8, and weakly connected to ASN1, BGLU15, BZIP, LEA, UNK2 and SUS4 among its target genes. Other sets of strong interactions include additional TF-gene pairs shown in Figure 2. Gene expression levels simulated by NCA agree well with the originally measured gene expression levels We obtained the gene expression values simulated by NCA by multiplying the [CS] matrix with the [log10 TFA] matrix for each of the four cell types (Eq. {1}). A comparison of the NCA-simulated gene expression levels with the original measurements obtained by Yadav et al. [38] by microarray analysis shows a good agreement between the two sets (Figure 3). Some discrepancies were seen in the NCA-simulated gene expression levels, which may be attributable to residuals arising in the least-squares minimization during the NCA decomposition. TF activities deduced for LFY, AG, HY5 and AP3/PI agree well with expression levels of genes encoding these TFs NCA provides log-fold changes of the TF activities with respect to a control condition.
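The sign correction and the reconstruction of simulated expression described above can be sketched as follows. This is our illustration (the variable names and the sign-fixing rule are assumptions), operating on the [CS] and [log TFA] matrices produced by an NCA run:

```python
import numpy as np

def fix_signs(CS, logTFA, known_sign):
    """Resolve the per-TF scaling ambiguity of the NCA decomposition.

    known_sign[j] = (gene_row, +1 or -1): the experimentally established
    direction of one interaction for TF column j. If the deduced CS entry
    disagrees, flip that TF's CS column and TFA row; rescaling both by -1
    leaves the product CS @ logTFA unchanged.
    """
    CS, logTFA = CS.copy(), logTFA.copy()
    for j, (gene, sign) in known_sign.items():
        if np.sign(CS[gene, j]) != sign:
            CS[:, j] *= -1.0
            logTFA[j, :] *= -1.0
    return CS, logTFA

# Simulated (log) gene expression across the four cell types, per Eq. {1}:
# logG_sim = CS @ logTFA, to be compared against the measured log ratios.
```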
We compared changes in the TF activity across the four cell types with respect to a control by plotting the activities of the seven TFs against the corresponding gene expression values (Figure 4). For instance, the consistent gene expression level of LFY across all four cell types agreed with the deduced TF activity for LFY, which was also consistent across the four cell types (Figure 4a). AG exhibited a decreasing trend of TF activity across the four cell types, with CLV3n showing the highest activity. This trend also appeared in its gene expression values (Figure 4b). For HY5, the TF activity remained nearly unchanged across all four cell types, while the gene expression showed smaller changes for CLV3n and FILp compared to CLV3p and WUSp (Figure 4f). The AP3/PI TF had higher activity in the CLV3n cells and a lower change in activity in the other three cell types. Because the AP3 and PI proteins co-regulate the activity of some genes, we compared the activity of the AP3/PI TF separately with the AP3 and PI genes (Figure 4g & 4h). Interestingly, the TF activity trend of AP3/PI agreed better with the gene expression of PI, whereas AP3 expression showed an opposite trend for the FILp cell type. The TF activity of SEP3 showed agreement with its gene expression levels for two cell types (CLV3n and CLV3p), and a discrepancy for the other two cell types (FILp and WUSp) (Figure 4d). Two TFs, AP2 and AGL15, had differing trends in their TF activities and gene expression levels (Figure 4c & 4e). This may be explained by the large biological errors of the gene expression levels of both AP2 and AGL15, which were comparable in magnitude to the measurements themselves. Further, we analyzed the changes in TF activities across the cell types statistically by comparing individual pairs of cell types using a p-value cutoff of 0.05. The TF activities deduced by NCA for AG and SEP3 showed variation across multiple cell type pairs, while SEP3 and AP3 showed similar variation in their mRNA levels. Normalized plots of TF activities and gene expression values showed a good fit for LFY, AG, HY5 and AP3 Our comparison of NCA-simulated TF activities and expression levels of the genes encoding the TFs allowed a qualitative comparison between the trends shown by the computational NCA and the experimental transcriptome analysis. To provide a better comparison between the TF activity and gene expression values for the corresponding TFs, we normalized the values across all four cell types and prepared a parity plot using the maximum and minimum values across each set as the basis for normalization (Figure 5). This plot shows that the TF activities deduced by NCA agreed well with the expression levels of the TF-encoding genes, with only AP2 and AGL15 being exceptions. Discussion TF-GRNs, which model interactions between TFs and their target genes, are an important class of cellular networks that define regulatory programs leading to gene expression [2,12]. TF-GRNs provide Boolean information about the regulation of genes by TFs, with meticulously compiled data available in databases like RegulonDB, YEASTRACT and AGRIS [13,16,19]. To deduce further quantitative information about the connectivities between TFs and their target genes, methodologies such as NCA and REDUCE have been developed [26,29]. Given the underlying network connectivity information, NCA can provide information on the connectivity strength between a TF and its target gene as well as the TF activity by using gene expression data [26,30,40].
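To make the decomposition concrete, here is a minimal masked alternating least squares sketch of the idea in Eq. {1}: given the binary connectivity pattern, it estimates CS (restricted to documented TF-gene edges) and log TFA from log expression ratios. This is a simplified illustration of the principle, not the published NCA toolbox algorithm, which additionally enforces the NCA-compliance criteria:

```python
import numpy as np

def nca_als(logG, mask, n_iter=200, rng=None):
    """logG: (m genes x n conditions) log-ratio data; mask: (m x p) 0/1
    connectivity pattern from, e.g., AGRIS. Returns CS (m x p, zero outside
    the pattern) and logTFA (p x n) minimizing ||logG - CS @ logTFA||_F."""
    rng = rng or np.random.default_rng(0)
    m, n = logG.shape
    p = mask.shape[1]
    logTFA = rng.standard_normal((p, n))
    CS = np.zeros((m, p))
    for _ in range(n_iter):
        # Update CS row by row, using only the TFs documented to act on gene i.
        for i in range(m):
            idx = np.flatnonzero(mask[i])
            if idx.size:
                CS[i, idx] = np.linalg.lstsq(logTFA[idx].T, logG[i], rcond=None)[0]
        # Update the TF activities jointly, given the current CS.
        logTFA = np.linalg.lstsq(CS, logG, rcond=None)[0]
    return CS, logTFA
```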
Through such nontrivial, quantitative information, NCA can provide important parameters about a TF-GRN. In this study, we sought to apply the NCA approach to analyze a network comprising TFs important for floral development and their targets, using underlying connectivity information available in the AGRIS database. Floral development is one of the best characterized processes in plants, with multiple studies providing much information at the molecular genetic level [41][42][43]. The most widely used model for explaining the initial development of the organs of a flower is the ABC model and its variants [42]. The model predicts floral development to result from the concerted action of multiple TF-encoding genes. For this study, we constructed a plant TF-GRN consisting of ten TFs known to be involved in floral development (LFY, AG, SEPALLATA3 (SEP3), AP2, AGL15, HY5, AP3/PI, FD, WUS and BLR) and 57 target genes with verified interactions obtained from AGRIS. LFY is known to be a master TF that regulates important events in the transition from vegetative to reproductive growth, and has another important role in the activation of floral homeotic genes [44][45][46]. Some of its downstream targets are known to be TFs that are important in flower morphogenesis. The other TFs included in our original network are important factors in floral development: AG, SEP3 and AGL15 are MADS domain TFs; AP2 belongs to the AP2/EREBP (ethylene responsive element binding protein) class of TFs; HY5 and FD are basic leucine zipper TFs that regulate flower development; AP3 and PI are MADS domain TFs that act as a complex expressed in floral primordia; and WUS and BLR are homeobox TFs [47]. We were unable to include some of the other TFs (AP1, FT and AGL20) important in the process due to a lack of sufficient confirmed targets for them in AGRIS for NCA compliance. We used gene expression data from a study by Yadav et al. [38]; three of the ten TFs had to be removed as they were not NCA-compliant. The final NCA-compliant network consisted of the remaining 7 TFs and 55 genes. For the NCA, we assumed the same connectivity strengths between TFs and their target genes across all cell types, which is a reasonable assumption. NCA provided CS for all TF-gene pairs. However, after NCA decomposition, the CS needed to be checked for their signs (a positive sign signifies activation and a negative sign signifies repression). This is done by comparing the CS with the initial connectivity matrix, and especially the connectivity directions of well-established TF-gene pairs. We found that the TF activities and CS for the AG, HY5, SEP3 and AP2 TFs needed to be corrected for their signs. The TF-gene pairs showing strong CS represent strong binding between a TF and its target. However, many TF-gene pairs showed very low CS, so that their documented regulatory connection would be worth re-examining [26]. Interestingly, AGRIS did not list the direction of interaction between AGL15 and four of the genes regulated by it (AGL22, AGL25, EDF4 and RAV2). NCA deduced AGL15 to be a strong repressor of AGL22, a strong activator of RAV2, a moderate activator of AGL25 and a very weak repressor of EDF4.
Thus, given verified information about the sign of a TF-gene interaction, NCA can deduce whether the TF is an activator or repressor of other target genes based on gene expression data. We should point out though that the strength of NCA is the deduction of quantitative information about a TF-GRN based on verified information about the underlying connections and gene expression data for the network. AGL22, also known as Short Vegetative Phase (SVP) encodes a TF that can repress flowering time in addition to other genes AGL15, AGL18 and FLM [48][49][50]. Based on our NCA, we determined that AGL22 is repressed much more strongly by AGL15 compared to SEP3. Interestingly, though, the gene expression of AGL22 increased several-fold compared to the control across all four cell types. This might be explained by the observation that even though the TF activity of SEP3 increases relative to the control, the TF activity of AGL15 is reduced compared to the control by a similar extent. As AGL15 controls the repression of AGL22 more strongly compared to SEP3, the gene expression of AGL22 compared to the control increases. Two other genes, HLH1 and RD20, are regulated by the same TFs, HY5 (activation) and AP2 (repression). NCA determined HLH1 to have similar connectivity strengths to both HY5 and AP2 but of opposite signs while HLH1 gene expression was found to be slightly higher compared to the control strain. This could be because of the slightly higher TF activity of HY5 compared to AP2 as deduced by NCA. RD20, on the other hand, was found to be mildly repressed across the four cell types compared to the control. This could be because it is more strongly repressed by AP2 compared to activation by HY5. Of the different TFs included in our study, LFY plays the role of master regulator during floral development. Out of the direct targets of LFY included in our network, MYB17 or late meristem identity 2 is very important in meristem identity transition [51]. MYB17 was found to be very strongly activated by LFY. This, combined with high TF activity of LFY would explain the high expression levels seen for the MYB17 gene from mRNA analysis. We were unable to include AP1, which is another important TF in the meristem identity pathway that is known to interact in a positive feedback network with LFY and MYB17. We can, however, deduce that the AP1 TF would have higher activity across the four cell types compared to the control based on strong activities of LFY and MYB17. In fact, the reproductive phase in Arabidopsis involves the transition of the SAM to an inflorescence meristem and then to a floral meristem [44]. The floral meristem identity proteins in Arabidopsis [44] include the TFs that were found to be upregulated from our analysis (LFY and SEP3) which seems to indicate that the cells were isolated from a floral and not a vegetative meristem. We compared the TF activities obtained by NCA with the expression values for their corresponding genes. TF activities can in general be expected to be proportional to the expression levels of the corresponding genes. However, TFs that need to undergo extensive posttranslational modification to be active can be exceptions to this expected trend. Our analysis showed that the profiles of TF activities obtained from NCA compared well with the expression levels of the genes coding for these TFs in the case of the majority of TFs (LFY, AG, HY5, AP3/PI and SEP3 (in two out of four cell types). However AP2 and AGL15 are exceptions. 
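The reasoning above about co-regulated genes is simply the row-wise form of Eq. {1}: a target gene's simulated log expression is the sum of CS multiplied by TF activity over its regulators. A toy calculation with hypothetical numbers (not the study's fitted values) makes the balance between an activator and a repressor explicit.

```python
# Hypothetical contributions to one target gene's log10 fold change (row-wise Eq. {1})
cs = {"HY5": 1.2, "AP2": -1.1}      # connectivity strengths to the gene (activation / repression)
tfa = {"HY5": 0.30, "AP2": 0.25}    # log10 TF activities in one cell type vs. control

log_expr = sum(cs[tf] * tfa[tf] for tf in cs)
print(round(log_expr, 3))           # 0.085: activation by HY5 slightly outweighs repression by AP2
```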
The discrepancy for AP2 and AGL15 could quite possibly be because of the large error in the measurement of the microarray replicates leading to problems with the NCA. A repeat of the gene expression analysis with better control on the replicates may provide a better answer to this. If a discrepancy is still observed, this would indicate a change in TFs due to post-transcriptional and posttranslational modifications. NCA thus allows the generation of newer hypotheses relating to the conversion of a gene product to an active TF based on how well the gene expression results agree with the deduced activities of their corresponding TFs. As a further step, we compared normalized values for both, using maximum or minimum values for TF activity or gene expression across the four cell types to allow better comparison between them. We found a very good correlation for LFY; decent matches for AG, SEP3, HY5 and AP3/PI; and poor matches for AP2 and AGL15 from this analysis. The application of NCA to microbial and mammalian systems has provided interesting insights into gene regulation by TFs. As previously described, the applications of NCA to microbial systems include the following: (i) investigation of TF changes during cell cycle regulation in S. cerevisiae [26] (ii) analysis of changes in TF activities in E. coli during the change from a glycolytic carbon source (glucose) to a gluconeogenic carbon source (acetate) [27] (iii) studying the effects of reactive nitrogen species on a TF network in E. coli [31,32] (iv) identification of TFs important in the isobutanol response network in E. coli [33] and (v) determining TFgene interactions in B. subtilis during a carbon source transition from glucose to malate and vice-versa [34], Applications of NCA to mammalian systems are more recent (i) studying the effects of overexpression of the glycerol kinase gene in rat hepatoma cells [35] and (ii) identifying TFs with altered activity in response to PTEN expression [37]. These studies of TF-GRNs have revealed the strengths of NCA in providing insights about the regulatory aspects of a system given the basic structural information about the underlying network. In the case of plants, there is lesser information available about TF-gene interactions. The AtRegNet database from AGRIS, which is the most comprehensive resource for such information, contains 768 confirmed TF-gene interactions for 46 TFs in A. thaliana, which is estimated to contain more than 1700 TFs [52]. In our NCA of a network derived from AGRIS, the original network consisting of 10 TFs and 57 genes reduced to 7 TFs and 55 genes for NCA compliance. This is because of the absence of sufficient regulatory information about the three TFs that had to be removed. NCA requires that any TF in a network regulate at least two genes. The availability of more information about TF-gene interactions would overcome this issue of NCA non-compliant TFs. NCA uses gene expression data and underlying network connectivity during its analysis; consequently, the quantitative measures provided by NCA are dependent on the accuracy of the underlying network. For example, many of the genes considered in this study have unconfirmed interactions with other TFs. If any of these interactions were confirmed, the current NCA could be rerun to account for the effect of additional TFs on expression of the target genes. Thus, having correct prior connectivity information about a network would increase the accuracy of NCA substantially. 
Such information on TF-gene interactions is obtained mainly through ChIP-CHIP or ChIP-SEQ experiments that allow the detection of binding patterns of TFs with DNA sequences. In fact, a lot of the confirmed interactions between TFs and genes listed on AGRIS are derived from such papers investigating binding targets for particular TFs [19]. Another limitation of NCA is its inability to model feedback and feedforward regulations between TFs. TF-GRNs are cascades of TFs regulating genes where the product of many genes are TFs that regulate downstream genes. However, for NCA, if a TF is included as a regulator in a network, the gene corresponding to it cannot be included in the network. As a result, NCA cannot determine how strongly other TFs influence the expression of the corresponding gene. In our original network, AG was included as a TF and also present as a gene regulated by LFY, AG, SEP3, AP2, WUS and BLR. We had to remove the AG gene during the NCA because of the presence of AG as a regulatory TF. This limits the application of NCA to non TF target genes in many instances. Additionally, the NCA decomposition suffers from some variability in estimating CS and TF activity from gene expression data. This is because the NCA decomposition is unique to a scaling factor which can be different for each TF and vary during different data decomposition of the same set of gene expression values and initial connectivity matrix. NCA uses a two-step least squares approach to minimize the difference between experimental and NCA reconstructed gene expression data. As a result, based on the scaling factor chosen, the same gene expression data and initial connectivity matrix could give slightly differing TF activities and CS. In addition, the decomposition process might introduce some variability in estimating TF activities and CS. For the NCA decomposition of the floral TF-GRN used in this study, we found differences in TF activities and CS during repeat runs (Additional file 3). For this network, the LFY TF shows very little variability across the different runs while the other TFs have greater degree of variability. Thus, while the TF activity and CS obtained from NCA decomposition provide quantitative measures for the underlying network, they should be treated not as absolute but relative parameters. Another drawback that all approaches for modeling gene expression of eukaryotic organisms suffer from, is the inability to include all the factors that regulate gene expression [53]. Most of the current modeling approaches depict gene expression to result from the effect of some of these factors alone, which is not the case [5]. For example, microRNAs play a very important role in gene regulation at the post-transcriptional level similar to the TF regulation at the transcriptional level [54][55][56]. In humans, microRNAs have been found to use two modes for gene regulationthe first mode is rapid and modulated by homoclusters; the second is delayed and mediated by heteroclusters of microRNAs. Of the two, heteroclusters have been found to indirectly influence gene regulation in tandem with TFs [54]. In addition to microRNAs, other factors including chromatin structure and nucleosome sliding would affect gene expression especially in eukaryotes [53]. Consequently, an accurate model for depicting gene regulation in eukaryotes would have to include all these interactions to capture the true picture of genetic regulation. 
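Returning to the identifiability point above, the scaling freedom can be seen directly: for any invertible diagonal matrix D, the factorizations CS·TFA and (CS·D)(D⁻¹·TFA) reproduce the data equally well, which is why per-TF activities and CS are defined only up to scale and sign. A minimal numpy check (names and numbers ours):

```python
import numpy as np

rng = np.random.default_rng(1)
CS = rng.normal(size=(6, 3))       # 6 genes x 3 TFs
TFA = rng.normal(size=(3, 4))      # 3 TFs x 4 cell types
D = np.diag([2.0, -0.5, 3.0])      # arbitrary per-TF scaling; a negative entry flips signs

E1 = CS @ TFA
E2 = (CS @ D) @ (np.linalg.inv(D) @ TFA)

print(np.allclose(E1, E2))         # True: both factorizations explain the data equally well
```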
Despite these limitations, NCA can provide very interesting hypotheses and insights about regulatory signals in a TF-GRN. Previous applications have shown its utility in understanding microbial systems, whose regulatory networks are well characterized, and mammalian systems to some extent. Plants, like other higher eukaryotes, operate more complex regulatory mechanisms. Additionally, complicated post-translational modifications can alter the activity of a TF compared to its mRNA transcript level. Consequently, the application of NCA to plant systems would provide interesting insights about these processes. Hence, significant effort is needed to obtain information about interactions between TFs and genes in plants for constructing TF-GRNs. Such information, coupled with NCA, would allow the determination of underlying properties of the system and establish paradigms for predicting cellular behavior. Conclusions In this work, we constructed a plant TF-GRN important in flower development using regulatory information from the AGRIS database. The initial network consisting of 10 TFs and 57 genes was found to be NCA-compliant for 7 TFs and 55 genes. We applied NCA to the reduced network to obtain CS between TF-gene pairs and TF activities. The CS showed strong connectivity between certain TF-gene pairs, including LFY → MYB17, LFY → TLP8, AP2 → HLH1, AP2 → RD20, AGL15 → AGL22, AGL15 → RAV2, HY5 → HLH1 and HY5 → RD20, among others. For some of the co-regulated genes, we were able to determine the extent of transcriptional control of different TFs on a target gene using the CS. Additionally, we were able to determine TF activities for all TFs. Good agreement was seen between the changes in TF activities for multiple TFs and their corresponding gene expression levels. However, for some of the TFs (AP2, SEP3 and AGL15), the change in TF activities did not match the changes in gene expression levels. There could be multiple reasons for this discrepancy, including post-translational modifications that significantly alter the activity of a TF, noisy data, or the small size of the network, among others. Our study is the first application of NCA to a plant TF-GRN and demonstrates the power of NCA for determining nontrivial information about a network based solely on gene expression data and underlying network connectivity. NCA has been widely used to decipher interesting insights about microbial TF-GRNs. However, since NCA relies on underlying network connectivity, incomplete information about the network hinders the accuracy of NCA. Plant TF-GRNs are poorly documented, with sparse data about specific sets of TFs and processes. As more information about TF-GRNs is uncovered in plants, similar analysis using NCA would provide profound insights regarding the role of TFs in various cellular processes. TF-gene network reconstruction We obtained TF-gene connectivity information from AGRIS (http://arabidopsis.med.ohio-state.edu) [19]. For the GRN analysis, we selected 10 TFs known to be important in floral development and listed in AGRIS. We selected 57 genes that were documented in AGRIS to be the targets of these TFs (Additional file 1, Sheet: AGRIS TF-gene verification). We constructed an initial connectivity matrix to map the TF-gene interactions documented in AGRIS (Additional file 1, Sheet: Initial connectivity matrix). Entries in this matrix were 1 (indicating a documented activation interaction), -1 (indicating a documented repression interaction) or 0 (indicating no documented interaction).
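A minimal sketch of how such a matrix could be assembled from an AGRIS-style edge list is shown below; the TF and gene names are placeholders rather than the study's network, and interactions whose direction is not listed are assigned 1, as described next.

```python
import numpy as np

# Hypothetical AGRIS-style edge list: (TF, target gene, interaction type)
edges = [
    ("LFY", "GENE_A", "activation"),
    ("AP2", "GENE_B", "repression"),
    ("HY5", "GENE_B", "activation"),
    ("AGL15", "GENE_C", "unknown"),   # direction not listed in the database -> entry of 1
]

tfs = sorted({tf for tf, _, _ in edges})
genes = sorted({g for _, g, _ in edges})
sign = {"activation": 1, "repression": -1, "unknown": 1}

A0 = np.zeros((len(genes), len(tfs)), dtype=int)   # rows: genes, columns: TFs
for tf, gene, kind in edges:
    A0[genes.index(gene), tfs.index(tf)] = sign[kind]

print(genes, tfs)
print(A0)   # 1 = documented activation, -1 = documented repression, 0 = no documented interaction
```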
Documented TF-gene interactions for which the type of interaction (activation or repression) were not known were assigned an entry of 1 (highlighted cells). Gene expression data We used the Botany Array Resource (http://www.bar. utoronto.ca) [39] for obtaining gene expression data pertinent to the TFs and genes in our network during floral development. This database provided gene expression data from the study by Yadav et al. [38] that provided expression levels of the genes of interest across four SAM cell types. The original and log transformed gene expression data are summarized in Additional file 1 (Sheet: Original microarray data, and Sheet: Log transformed microarray data, respectively). NCA We used the NCA toolbox (http://www.seas.ucla.edu/ liaoj/downloads.html) [26,30] in conjunction with the initial TF-gene connectivity matrix (Additional file 1, Sheet: Initial connectivity matrix) for decomposing the gene expression data. We independently analyzed the gene expression dataset corresponding to each biological replicate of each cell line. On completion, NCA provided TF activities for each replicate of each cell line (Additional file 2, Sheet: TFA and mRNA) as well as TF-gene CS common to all cell lines (Additional file 2, Sheet: Connectivity strengths). Additional files Additional file 1: Input data for NCA. Gene reference sheet: Gene models for the genes analyzed in this study, their common names and the number used to represent them in Figures 1 and 2. Initial connectivity matrix sheet: Matrix of connectivity information obtained between TFs and target genes from AGRIS. AGRIS TF-gene verification sheet: Data retrieved from AGRIS for constructing initial connectivity matrix. Original microarray data sheet: Microarray data retrieved for all the genes in this study across four different cell types (named CLV3n, CLV3p, FILp and WUSp) derived from shoot apical meristems of A. thaliana using the Botany Array Resource. Additional file 2: Output data from NCA. NCA-compliant network sheet: TFs and genes compliant for NCA obtained by initial NCA feasibility analysis. Connectivity strengths sheet: CS obtained by NCA. As NCA may invert the sign for the CS during the decomposition, CS for some of the TFs had to be corrected based on well-established TF-gene connectivity information. Gene expression sheet: Log 10 fold expression changes of genes obtained from microarray data and NCA simulated expression data. TFA and mRNA sheet: Log 10 fold changes in TF activities compared to control obtained by NCA and corresponding changes in mRNA values for all four cell types included in the study. Activities for some of the TFs had to be corrected in their sign based on the changes for the CS previously mentioned. Normalized TFA and mRNA sheet: Calculation of normalized TF activity and mRNA levels from the average TF activities and mRNA levels across all four cell types (expressed as log 10 fold changes compared to control). Additional file 3: Identifiability of NCA results: variability in estimating TF and CS from same gene expression data and initial connectivity strengths. TF activities and CS obtained in five independent executions of NCA from the same gene expression data and initial connectivity matrix used in this study.
9,851.8
2013-11-14T00:00:00.000
[ "Biology", "Computer Science" ]
Posterior probabilities of membership of repertoires in acoustic clades Recordings of calls may be used to assess population structure for acoustic species. This can be particularly effective if there are identity calls, produced nearly exclusively by just one population segment. The identity call method, IDcall, classifies calls into types using contaminated mixture models, and then clusters repertoires of calls into identity clades (potential population segments) using identity calls that are characteristic of the repertoires in each identity clade. We show how to calculate the Bayesian posterior probabilities that each repertoire is a member of each identity clade, and display this information as a stacked bar graph. This methodology (IDcallPP) is introduced using the output of IDcall but could easily be adapted to estimate posterior probabilities of clade membership when acoustic clades are delineated using other methods. This output is similar to that of the STRUCTURE software which uses molecular genetic data to assess population structure and has become a standard in conservation genetics. The technique introduced here should be a valuable asset to those who use acoustic data to address evolution, ecology, or conservation, and creates a methodological and conceptual bridge between geneticists and acousticians who aim to assess population structure. Introduction Many animals communicate or sense their environment using sound [1]. It is often logistically easier to record acoustic signals than to collect genetic, morphological, or other phenotypic data. Thus, the characteristics of animal calls have been used to examine a range of issues in biology, including evolution [e.g. 2], population structure [e.g. 3,4] and conservation [e.g . 5]. Call attributes can be genetically or culturally inherited [e.g. 6,7]. In either case, if there is drift or selection, variation in these attributes may signal population structure. This will especially be the case if the calls themselves structure populations, for instance if song attributes proscribe mate choice [e.g . 8]. Additionally, if the animals themselves use call attributes to identify segments of a population ("us versus them"), and this population structure circumscribes social interactions, and so social learning opportunities, the acoustically-distinguished population segments will tend to have distinct cultural behaviour in various contexts, including nonacoustic behaviour, such as foraging techniques [9]. Thus, there is increasing interest in using acoustic data to examine population structure. This is, however, dwarfed by molecular genetic methodologies. The majority of population structures inferred for animal species are based on genetic data, which are processed using a range of analytical methods [10]. Of these, the STRUCTURE package is particularly popular and influential [11]. STRUCTURE uses a Bayesian approach to calculate, from genetic data, posterior probabilities that individuals belong to each of K source populations, or, in the admixture option, to have a proportional assignment to each of the populations [12,13]. The results are displayed as stacked bar plots of posterior probabilities that each individual is a member of each population segment, or the estimated mixture proportions of source populations for a given individual. 
STRUCTURE thus gives direct estimates of the number of population segments, their distributions (in space, time, or along other axes), and confidence in allocations of individuals to the different population segments. An analogous method of analyzing and displaying acoustic data has the potential to be similarly useful for calling animals [14]. The IDcall routine (summarized in Fig 1) uses multivariate information on calls that are grouped into repertoires. It classifies the calls into types using contaminated mixture models. Each call has a probability of being a member of each type. The repertoires of calls are then clustered into identity clades by identity call types: identity clades are marked by one or more identity calls that are made frequently by the repertoires in the identity clade and rarely by those outside it. The IDcall framework is then somewhat analogous to the initial steps of STRUCTURE: the repertoires (from individuals or groups of individuals) are classified into population segments (identity clades), with the number of population segments being determined by the routine. However, only some call types are identity calls, and some repertoires may not be assigned to an identity clade. Here we show how the output of IDcall can be used to calculate the posterior probabilities that each repertoire is a member of each identity clade, and then display these posterior probabilities as a stacked bar graph using a routine that we call IDcallPP. A similar approach could be used with other methods of clustering acoustic repertoires to ascertain confidence in the assignment of acoustic repertoires to clusters. These outputs, especially the stacked bar graphs, parallel those of STRUCTURE. Theory In IDcall (see Fig 1), the contaminated mixture model algorithm estimates the probability that each call, i, belongs to each call type, j, as u(i,j) (where u(i,j) = 0 if call i is characterized by a different set of variables than the calls in j). The usage, U, of each call type, j, for each repertoire, r, is calculated by summing the probability of call type membership for all calls {i} in the repertoire and dividing by the total number of calls in the repertoire, n(r):

U(r, j) = (1/n(r)) Σ_{i in r} u(i, j)    (1)

The following procedures in IDcallPP are summarized in Fig 2. Once repertoires are assigned to an identity clade, c, we can estimate the probability distribution of call types in the identity clade as

P(j | c) = (1/N_c) Σ_{r in c} U(r, j)    (2)

where N_c is the number of repertoires assigned to identity clade c and r runs over those repertoires. This is somewhat circular: if repertoire R was assigned membership of identity clade c, the call type distribution within R is used to estimate the distribution of call types of c, which will then be used to calculate the likelihood that repertoire R is from identity clade c. In other words, the calls heard in a repertoire are used to delineate identity clades, the very information that is used to calculate the posterior probability that the repertoire is a member of an identity clade. To remove this circularity, we omit repertoire R from the calculation of the call type distribution of identity clade c:

P_{-R}(j | c) = (1/(N_c - 1)) Σ_{r in c, r ≠ R} U(r, j)    (3)

Then, using the multinomial distribution, the likelihood of the distribution of call types in a repertoire R, with call type counts n(R, j) = n(R) U(R, j), given that the repertoire is a member of identity clade c, is:

Pr({n(R, j)} | R ∈ c) = [n(R)! / Π_j n(R, j)!] Π_j P_{-R}(j | c)^{n(R, j)}    (4)

Bayes' theorem gives the posterior probability that repertoire R is a member of identity clade c as:

Pr(R ∈ c | {n(R, j)}) = Pr({n(R, j)} | R ∈ c) Pr(R ∈ c) / Σ_{c'} Pr({n(R, j)} | R ∈ c') Pr(R ∈ c')    (5)

where Pr(R ∈ c) is the prior probability that repertoire R is a member of identity clade c. In IDcallPP, these posterior probabilities are displayed as stacked barplots for each repertoire, as well as being output in a .csv spreadsheet file.
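A compact sketch of Eqs 1-5 in Python follows (the published implementation is in R). The leave-one-out averaging and the handling of fractional counts reflect our reading of the description above, so the sketch is illustrative rather than a reimplementation of IDcallPP.

```python
import numpy as np
from math import lgamma

def usage(u, n_r):
    """Eq 1: usage of each call type in one repertoire.
    u: (n_calls x n_types) membership probabilities for the repertoire's calls."""
    return u.sum(axis=0) / n_r

def log_multinomial(counts, p, eps=1e-12):
    """Eq 4: log multinomial likelihood, allowing fractional counts."""
    n = counts.sum()
    return (lgamma(n + 1) - sum(lgamma(k + 1) for k in counts)
            + float(np.sum(counts * np.log(np.asarray(p) + eps))))

def posteriors(U, n, clades, priors):
    """Eq 5: posterior that each repertoire belongs to each identity clade.
    U: (n_repertoires x n_types) usages; n: calls per repertoire;
    clades: dict clade -> list of repertoire indices; priors: dict clade -> prior."""
    out = {}
    for R in range(U.shape[0]):
        logp = {}
        for c, members in clades.items():
            others = [r for r in members if r != R]          # leave-one-out (Eq 3)
            p_c = U[others].mean(axis=0) if others else U[members].mean(axis=0)
            p_c = p_c / p_c.sum()
            logp[c] = np.log(priors[c]) + log_multinomial(n[R] * U[R], p_c)
        m = max(logp.values())
        w = {c: np.exp(v - m) for c, v in logp.items()}      # stable normalization
        z = sum(w.values())
        out[R] = {c: w[c] / z for c in w}
    return out
```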
Priors There are two simple formulations for prior probabilities: A. Equal prior probabilities of each identity clade. This is analogous to the "no admixture model" of STRUCTURE [13]. B. Prior probabilities for each identity clade are the proportion of repertoires assigned to the identity clade. This might make sense if sampling was sufficiently random or uniform (over space, time, or other relevant axes) so that the number of assignations to each identity clade was roughly proportional to its incidence in the population being considered. However, if sampling or assignation might be biased, then this option is likely inappropriate. Other types of priors might be sensible. For instance, in the admixture model of STRUC-TURE, the priors for membership of population segments are estimated using Bayesian techniques from the data itself [13]. Such formulations have yet to be implemented for IDcallPP but are a promising avenue for future development. Options IdcallPP has the following options: Priors. The prior probabilities of identity clade membership are either A (equal) or B (proportion of assigned repertoires), as described above. The default is A. Call types used. Eq 4 can use all call types or just those found to be identity calls. In our explorations with real data (see below), we found that the "all call types" option produced clearer posterior probability plots, presumably because, in our example data sets, the non-identity calls were distributed differently among identity clades, and so provided useful information when assigning identity clade membership. This need not necessarily be the case for all data sets. However, using all call types is the default. Repertoire order. The order in which the repertoires are displayed in the stacked bar plot is, by default, the order in the dendrogram plus heat map plot output from IDcall, so that the two plots can be displayed directly above one another with the repertoires lining up (see . Alternatively, the input order of repertoires may be used. This could be useful if the distribution of identity clades across some axis of interest (such space or time) is desired. Colors of identity clades: By default, the stacked bar plot of posterior probabilities uses the same color for each identity clade as in the dendrogram plus heat map plot output from IDcall (see . However, these can be changed. Application examples We use the same four example acoustic data sets from three taxa as in [14]: Australian field crickets (Teleogryllus spp.; hereafter crickets), grey-breasted wood-wrens (Henicorhina leucophrys; hereafter wrens) and sperm whales (Physeter macrocephalus; Atlantic/Mediterranean and Pacific datasets). These examples investigate population structure within species (sperm whales), among subspecies (wrens), and among species (crickets). For details of call variables, repertoire definitions, etc., see [14]. In each of Figs 3-6, we show the dendrogram plus heat map output from IDcall [14] above the stacked bar plot of posterior probabilities of identity clade membership from IDcallPP (using the default options listed above). In all four example data sets, the posterior assignment probability plots from IDcallPP generally support the identity clade assignations of IDcall. The posterior probabilities for the Output from IDcall (top; taken from [14]) depicts similarity among male Teleogryllus cricket calling songs recorded from individuals derived from 16 field sites in Australia (data from [17]). 
'Oce' indicates song repertoires recorded from crickets belonging to the oceanicus species (in teal). 'Com' denotes song repertoires recorded from crickets belonging to the commodus species (in brown). The letters in parentheses denote field sites (see [17] for site abbreviations). For each song, we created an interval vector comprised of four traits: chirp pulse length, chirp interpulse interval, chirp-trill interval, and trill pulse length (see [17] for details on how song traits were measured). Each repertoire (i.e. branch in the dendrogram) contains all the songs recorded from first-generation crickets that were derived from wild-caught individuals from each field site. (A) The average linkage hierarchical clustering dendrogram thus depicts similarity among song interval vectors of male crickets from the 16 sites. (B) The heatmap shows identity song type usage (rows) for each field site (columns) in shades of grey, with usage calculated based on probabilistic assignment of songs to types. Identity song type codes are on the left of the heat map and centroid song interval vector plots are on the right (with the spaces between the dots representing chirp pulse length, chirp interpulse interval, chirptrill interval, and trill pulse length, and the scale bar in seconds). (C) The output from IDcallPP shows the posterior assignment probabilities of each repertoire belonging to each identity clade (i.e. species) as a stacked bar plot. See [14] and [17] for additional details. https://doi.org/10.1371/journal.pone.0267501.g003 cricket data set (Fig 3) shows almost perfect assignation to identity clades. It is also very good for the wren data set (Fig 4) with almost all posterior probabilities to the assigned clade greater than 0.7. The Atlantic/Mediterranean sperm whale data (Fig 5) is also very "clean" with only two repertoires having posterior probabilities to an identity clade of less than 0.7. One is a repertoire (leftmost arrow in Fig 5) that was not assigned to an identity clade; the other (rightmost arrow in Fig 5) was a repertoire that appeared on initial annotation to be a mixture of the codas from two previously described sperm whale clans (identity clades), Eastern Caribbean 1 and Eastern Caribbean 2 [15]. The posterior probabilities for the Pacific sperm whale repertoires are somewhat less clear (Fig 6). The great majority of the repertoires assigned to four of the identity clades (putative new, Four-Plus, Plus-One, and Regular clans) had posterior probabilities of >0.7 for their assigned clans. A few of the exceptions echo previous analyses. For instance, the recordings of a repertoire without a clearly dominant posterior probability (arrow in Fig 6) were from a day when photoidentification evidence indicated that there might [14]) depicts similarity among male songs (data from [18]) from two subspecies of greybreasted wood-wren: Henicorhina leucophrys hilaris (salmon) and Henicorhina leucophrys leucophrys (navy). Genotyping abbreviations are: Hil, parental H. l. hilaris; Leu, parental H. l. leucophrys; F1, first-generation hybrid; BChil, backcross between Hil and F1; and BC-leu, backcross between Leu and F1. For each song, we created an interval vector comprised of three traits: averaged note peak frequency, minimum song frequency, and maximum song frequency (see [18] for details on how song traits were measured). Each repertoire (i.e. branch in the dendrogram) contains all the songs recorded from a single individual. 
(A) The average linkage hierarchical clustering dendrogram thus depicts similarity among song interval vectors of 41 male wrens. (B) The heatmap shows identity song type usage (rows) for each wren (columns) in shades of grey, with usage calculated based on probabilistic assignment of songs to types. Identity song type codes are on the left of the heat map and centroid song interval vector plots are on the right (with the spaces between the dots representing averaged note peak frequency, minimum song frequency, and maximum song frequency, and the scale bar in Hertz). (C) The output from IDcallPP shows the posterior assignment probabilities of each repertoire belonging to each identity clade (i.e. subpecies) as a stacked bar plot. See [14] and [18] for additional details. https://doi.org/10.1371/journal.pone.0267501.g004 be two clans present. Additionally, the repertoires assigned to the Short clan generally have much lower posterior support, which agrees with conclusions from the original IDcall analysis that the nature and structure of this identity clade were much less certain [14]. For all four of these data sets, the posterior probabilities output using the "all call types" option was clearer than when just identity calls were used (Fig 7). This indicates that while the identity calls are the primary delineators of population structure in these data sets, the other, non-identity, call types also differ somewhat in their usage among population segments. Discussion IDcallPP estimates posterior probabilities that each repertoire is a member of each identity clade and provides a range of useful information. It can suggest that the population structure predicted by IDcall is extremely robust (e.g. Fig 3), robust (e.g. Fig 4), robust with an occasional, potentially interesting, outlier (e.g. Fig 5), or that parts of the population structure are The heatmap shows identity coda type usage (rows) for each repertoire (columns) in shades of grey, with usage calculated based on probabilistic assignment of codas to types. Identity coda type codes are on the left of the heat map and centroid coda interval vector plots are on the right (with the spaces between the dots representing the inter-click intervals and the scale bar in seconds). (C) The output from IDcallPP shows the posterior assignment probabilities of each repertoire belonging to each identity clade (i.e. vocal clan) as a stacked bar plot. See [14] for additional details. https://doi.org/10.1371/journal.pone.0267501.g005 well described while others remain unclear (e.g. Fig 6). In cases where different parameter settings for IDcall produce different population structures, it may help guide the choice of parameters. The output stacked barplot of posterior probabilities should provide good guidance for evolutionary biologists, resource managers and conservation biologists as to the structure of their target population, in a similar way to that provided by STRUCTURE [e.g. 16]. However, the output may also address other questions of a species' biology. For instance, the relative clarity of the posterior probability plots using identity calls versus those using all calls (e.g. Fig 7) might suggest whether the acoustic signatures of identity clades are restricted to identity calls, or manifest more broadly through repertoires. There are important differences between IDcall+IDcallPP and STRUCTURE, in addition to the different data sources (acoustic vs. genetic). 
Although IDcallPP calculates posterior probabilities of identity clade membership using Bayes' theorem (Eq 5), the delineation of the shows identity coda type usage (rows) for each repertoire (columns) in shades of grey, with usage calculated based on probabilistic assignment of codas to types. Identity coda type codes are on the left of the heat map and centroid coda interval vector plots are on the right (with the spaces between the dots representing the inter-click intervals and the scale bar in seconds). (C) The output from IDcallPP shows the posterior assignment probabilities of each repertoire belonging to each identity clade (i.e. vocal clan) as a stacked bar plot. See [14] for additional details. https://doi.org/10.1371/journal.pone.0267501.g006 identity clades by IDcall uses a non-Bayesian, and generally more conservative, method for determining the number of population segments, and allows some repertoires not to be assigned to identity clades. It should, thus, be less prone to overestimation of the number of population segments and the misassignment of repertoires. An issue which may affect the posterior probabilities is possible non-independence among the calls of a repertoire, thus theoretically invalidating Eq 4. We investigated the resulting biases by calculating how posterior probabilities were affected when the {n(R)} in Eq 4 were divided by a variance inflation factor v, where v>1 indicates lack of independence in count data [17]. With two identity clades, v = 1.2, and a true posterior probability of 0.8 for membership of one of the identity clades this was inflated to 0.84, and with five identity clades this became 0.87. When the variance inflation factor was raised to v = 2.0 (indicating substantial non-independence) these posterior probabilities were raised to 0.94 and 0.98, a considerable bias upwards from 0.8. Thus, non-independence of calls may be an important issue for some data sets. A correction could be applied in situations where the variance inflation factor can be estimated. IDcallPP employs only the no-admixture model in which a repertoire must be from only one identity clade or no identity clade at all, so the y-axes in Figs 3-6 are the posterior assignment probabilities. In the current implementation, there is no theoretical possibility that a repertoire contains elements of two or more identity clades: the posterior probabilities are that a repertoire is from a particular identity clade. However, as suggested above for the sperm whale populations, a repertoire could sometimes include calls from two, or possibly more, population segments. Thus, a useful future development would be an admixture model option in IDcallPP. We have developed this procedure of obtaining posterior membership of population segments using the output of IDcall which delineates clades using identity calls, made often by one population segment and rarely by the others. However, posterior probabilities can be calculated whenever an acoustic data set is divided into repertoires, the elements of each repertoire can be separated into calls in a manner so that each call can be categorized or at least quantified, and then some technique is used to cluster the repertoires into population segments. The trickiest part of this will often be calculating the likelihoods that each repertoire is a member of each population segment. 
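For the two-clade case quoted above, dividing the counts in Eq 4 by v is equivalent to scaling the log-likelihood differences by 1/v, because the multinomial coefficient is the same for every clade and cancels in the posterior. Ignoring non-independence therefore raises the corrected posterior odds to the power v. A minimal check of our own, assuming equal priors, reproduces the quoted figures:

```python
def uncorrected_posterior(p_true, v):
    """Two clades, equal priors: if the posterior corrected for non-independence
    (counts divided by v) is p_true, the uncorrected posterior has odds**v."""
    odds = (p_true / (1.0 - p_true)) ** v
    return odds / (1.0 + odds)

for v in (1.2, 2.0):
    print(v, round(uncorrected_posterior(0.8, v), 2))   # 1.2 -> ~0.84, 2.0 -> ~0.94, as in the text
```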
If the calls can be categorized, or at least assigned probabilities of belonging to different categories (as in IDcall), and can be considered independent, then this is accomplished using Eqs 1-4. When calls are only defined by continuous measures (and not allocated to categories), one would need to obtain probability distributions for each population segment in multivariate space, perhaps using mixture models, and then assess the overlap of the calls of each repertoire with the probability distributions of each population segment. Some of these steps could be simple. For instance calls could be allocated to call types subjectively by humans [e.g. 4] or using a simple clustering method such as K-means [18]. Population segments could be delineated geographically, or by weighting equally all the calls in each repertoire (not emphasizing identity calls as in IDcall). Compared with molecular genetic methods for detecting, assigning, and evaluating population structure, techniques using acoustic data are much more rudimentary. They have mostly been ad-hoc methods developed or appropriated for a particular data set [e.g. 3,18]. However, although IDcallPP has been developed to work with the output of IDcall, we have outlined a generic methodology that should be generally useful in studies of population structure using acoustic data. The collection and analysis of acoustic data to study population structure will often be less costly, and usually less invasive, than comparable genetic studies. Sometimes, as with our wren and cricket examples, the genetic and acoustic data can tell similar stories. In contrast, when acoustic repertoires are socially learned, as with sperm whales, the contrasting patterns of genetic and cultural inheritance may lead to complex population structures [19]. Thus, the analysis of acoustic data may be effective and/or essential if we are to understand population structures. The IDcall and IDcallPP codes (in program language R) are under active development by the authors and can be accessed, along with the sperm whale datasets, through the Open Science Framework (https://osf.io/5fter/).
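As noted above, a simpler pipeline could allocate calls to types with a generic clusterer such as K-means and then tabulate per-repertoire usages. A minimal scikit-learn sketch with synthetic feature vectors (ours, purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
calls = rng.normal(size=(200, 4))            # hypothetical call feature vectors (e.g., interval measures)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(calls)
call_types = km.labels_                      # hard call-type assignments

# Per-repertoire usage (analogous to Eq 1, but with hard assignments)
repertoire_ids = rng.integers(0, 10, size=200)
usage = np.zeros((10, 5))
for r, t in zip(repertoire_ids, call_types):
    usage[r, t] += 1
usage = usage / usage.sum(axis=1, keepdims=True)
```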
4,891.6
2022-04-22T00:00:00.000
[ "Biology" ]
Investigation of the Dosimetry Characteristics of the GAFCHROMIC® EBT3 Film Response to Alpha Particle Irradiation The purpose of this study was to investigate the dosimetric characteristics of the GAFCHROMIC® EBT3 film responding to alpha particle irradiation. Unlaminated GAFCHROMIC® EBT3 film pieces were irradiated with a 30.055 kBq 241Am alpha source at eight different dose levels between 0 and 509 Gy. The irradiations were performed inside an enclosed box. An Epson Expression 10000 XL scanner in transmission mode was used to digitize irradiated films 24 hours post-irradiation as 16-bit RGB images in tagged image file format (TIFF). Optical density (OD) values were obtained by following the OD theorem. Raw and normalized pixel values (PVnorm) from the red, green, and blue colour channels were sampled from a 3 × 3 mm2 region of interest. Calibration curves were created for both data sets (OD and PVnorm) and were fit accordingly. Monte Carlo simulations with the Geant4 toolkit were performed to establish the dose rate at a point within the sensitive layer of the film. An alpha dosimetry protocol for EBT3 films was obtained from the Monte Carlo calculated dose rate, and dose calibration curves for alpha radiation were created. It is necessary to extend this study to different film types and to compare with photon dosimetry calibration curves. Introduction GAFCHROMIC ® EBT3 films are designed and used for the measurement of absorbed doses of ionizing radiation and are especially suited to high-energy photons. This film model has an optimal dose range of 0.2 Gy to 10 Gy and can be developed in real time without post-exposure treatment [1]. EBT3 films need a color photo scanner to acquire 16 bits per channel for the red, green, and blue color components of light transmitted through the film. The EBT3 film model offers high spatial resolution and allows for depth dose measurements [2]. The EBT3 film has a structure comprising an active layer (28 µm thick) sandwiched between two matte-polyester substrates (125 µm). The active layer contains an active component, a marker dye, stabilizers, and other elements [1]. The change in optical properties of GAFCHROMIC ® films depends on the absorbed dose and the linear energy transfer (LET) of the ionizing radiation. The current experience of GAFCHROMIC ® film dosimetry is based on low LET photon beams. Lately, the use of alpha-emitting radionuclides for the treatment of cancer, such as in the Alpha Tau Diffusing Alpha-emitters Radiation Therapy (DaRT) [3], has increased. This novel treatment modality relies on temporary or permanently implantable seeds impregnated with a small activity of 224 Ra, which are placed inside the tumours. Short-lived alpha-particle emitting atoms are released in the decay chain of 224 Ra and diffuse inside the tumour [3]. In this study, we investigate the GAFCHROMIC ® EBT3 film model response to eight different alpha particle doses from an 241 Am source. Since the range of alpha particles is very short (up to 100 micrometers), unlaminated films were used to directly expose the active layer to the alpha particles [4].
Materials and Methods In this study, unlaminated GAFCHROMIC ® EBT3 films were used for alpha particle dosimetry. The film was cut into 6.35 × 5.08 cm2 pieces using a sharp cutter with a board for paper trimming. The films were cut longitudinally, and a landscape orientation was followed throughout the entirety of the study. The radioactive source emitting alpha particles, 241 Am, was placed inside a metal box, mimicking a dark box environment. The source has a gold cover of approximately 51.8 nm to protect anything from being in direct contact with the source. The films were placed face down on the 241 Am source, with the active layer in contact with the gold cover, for varying exposure times corresponding to absorbed doses in the range of 0 Gy to 509 Gy. Figure 1 presents the experimental design. The absorbed doses were determined by calculating the dose rate of the experimental setup using Monte Carlo simulations performed with the Geant4 simulation toolkit. The dose rate was calculated to be 28.26 Gy/h for irradiation at the level of the active layer of the film [5]. The film was scanned using an Epson Expression 10000 XL scanner. The image was saved in Tagged Image File Format (TIFF). All scanned images were analysed using an in-house Python script in terms of pixel value (PV) for the red, green, and blue channels. The PVs of the exposed region of the films were converted into average net optical densities (ODnet); Equation 1 gives the conversion of the 16-bit pixel values of each channel into ODnet. The film irradiations were repeated to lower error probabilities. The calibration curves, i.e., the dependence of delivered dose on the normalized PVs (Equation 2) and on ODnet, were fitted to a power function (Equation 3) and an exponential function (Equation 4), respectively, where the normalized PV is defined in Equation 2. In Equation 3, b, c, and n are fitting parameters, acting as a polynomial correction to a linear form. Finally, as mentioned, the ODnet data were fit to an exponential function (Equation 4) with three fitting parameters. The plotted uncertainties were calculated from the relative fit parameter uncertainties and experimental uncertainties added in quadrature.
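The paper's Equations 1-4 are not reproduced above, so the sketch below uses forms commonly adopted for radiochromic film calibration: a net optical density computed from unexposed and exposed pixel values, a power-type dose response for normalized PV, and an exponential form for ODnet. The specific expressions and all numbers are assumptions for illustration with scipy, not the authors' fitted results.

```python
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_exposed, pv_unexposed):
    """Net optical density from 16-bit pixel values (common definition, assumed here)."""
    return np.log10(pv_unexposed / pv_exposed)

def dose_from_pvnorm(pv_norm, b, c, n):
    """Assumed power-type calibration for normalized PV."""
    return b * (1.0 - pv_norm) ** n + c

def dose_from_od(od, a, k, d0):
    """Assumed exponential-type calibration for net OD."""
    return a * (np.exp(k * od) - 1.0) + d0

# Hypothetical red-channel scan data (16-bit PV) at a few dose points, for illustration only
dose     = np.array([0.0, 2.0, 5.0, 10.0, 25.0, 50.0])   # Gy
pv_unexp = 43000.0
pv_exp   = np.array([42900.0, 38000.0, 33000.0, 28000.0, 21000.0, 16000.0])

od      = net_od(pv_exp, pv_unexp)
pv_norm = pv_exp / pv_unexp

popt_pv, _ = curve_fit(dose_from_pvnorm, pv_norm, dose, p0=(100.0, 0.0, 1.5), maxfev=20000)
popt_od, _ = curve_fit(dose_from_od, od, dose, p0=(10.0, 5.0, 0.0), maxfev=20000)
print(popt_pv, popt_od)   # fitted calibration parameters for the two response functions
```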
Results and Discussion The dependence of the normalized PV and ODnet on the absorbed dose was investigated. The results for the normalized PV show that the red colour channel has higher sensitivity than the green and blue channels, but is only useful for small doses (up to ~8 Gy). The higher sensitivity of the red channel is due to this channel having the highest absorption at low dose ranges up to approximately 8 Gy; at higher dose ranges, up to 100 Gy, the red channel sensitivity curve saturates. The green channel can provide a smaller uncertainty than the red channel when normalized PV is used as a response function for higher doses. The absorbed dose (Gy) as a function of normalized PV for all three colour channels was fit to the function given as Equation 3 above. The absorbed dose (Gy) as a function of ODnet for all three colour channels was fit to an exponential function. The error bars associated with each plot are the total uncertainties calculated from the relative fit parameter uncertainties and experimental uncertainties, added in quadrature. Through further analysis, caution must be taken when looking at particular colour channels where the raw PV is less than 10 000. In this case, the capabilities of the flatbed optical scanner are being interpreted rather than the response of the film. The measured response value is dependent on the light transmission through the film, but for response values less than 10 000 it is corrupted by a signal from scattered light in the measurement system. This signal is independent of the transmission of the film. Therefore, doses above 50 Gy cannot be represented appropriately by response values for EBT3 film. Conclusion EBT3 film can measure alpha radiation doses with high accuracy up to 50 Gy. This film model cannot be used for dosimetry above 50 Gy; above this dose limit, another film model such as HD-V2 film (high-dose) must be used. Further analysis is currently being completed with HD-V2 film to be used in combination with EBT3. In future studies, unlaminated EBT3 film will be used to benchmark autoradiography measurements for novel animal studies involving alpha-emitting sources used for the treatment of cancer.
1,691.2
2023-11-01T00:00:00.000
[ "Physics", "Medicine" ]
Clinical and Economic Implications of Hydroxyurea Intolerance in Polycythemia Vera in Routine Clinical Practice Background/Objectives: Polycythemia vera (PV) is a chronic hematologic neoplasm commonly treated with hydroxyurea (HU). We utilized the advanced digitalized database of Maccabi Healthcare Services to retrospectively investigate the clinical and economic implications of HU intolerance in the routine clinical care of PV patients in Israel. Methods: We collected data on demographics, physician visits, hospitalizations, laboratory results, medication purchases, cardiovascular and thrombotic events, mental health, economic outcomes, and mortality. Outcomes included cardiovascular and other thrombotic events, disease progression, mental health events, economic outcomes, and overall mortality. Results: Of the 830 patients studied, 3 (0.4%) were resistant to HU treatment, 318 (38.3%) were intolerant to HU treatment, and 509 (61.3%) were stable on HU treatment. The venous thrombosis rate was significantly higher among HU-intolerant compared to HU-stable patients (1.58 vs. 0.47 per 100 person-years [PY], respectively; p < 0.001). The rate of progression to myelofibrosis was 6 vs. 0.9 per 100 PY in HU-intolerant patients vs. HU-stable patients, respectively (p < 0.001), and the rate of progression to acute myeloid leukemia (AML) was 1.16 vs. 0.2 per 100 PY in HU-intolerant patients vs. HU-stable patients, respectively (p < 0.001). The phlebotomy requirement, mortality rate, and total hospitalization days among HU-intolerant patients were significantly higher than in HU-stable patients (p = 0.049, p < 0.001, p < 0.001, respectively). More mental health-related events were noted in HU-intolerant patients vs. HU-stable patients (p = 0.007), and the total healthcare cost ratio was 2.65 for the HU-intolerant patients compared with HU-stable patients. Conclusions: This study suggests that HU-intolerant patients are more likely to have worse outcomes than HU-stable patients, highlighting the need for the close monitoring of these patients for disease-related complications or progression. Introduction Polycythemia vera (PV) is the most common myeloproliferative neoplasm (MPN) disease [1,2].PV prevalence is 22-30 per 100,000 individuals [3] , and its incidence increases with age.Disease manifestations include elevated blood cell counts, a predisposition to thrombosis and hemorrhage, symptoms of hyperviscosity, constitutional symptoms, and, in a proportion of patients, progression to myelofibrosis (MF) and/or transformation to acute myeloid leukemia (AML) [4]. Treatment is based on the maintenance of a hematocrit of <45%, and the administration of low-dose aspirin.In patients with a low risk of thrombosis (age < 60 years and no previous thrombosis), the hematocrit is controlled by phlebotomy, with cytoreductive treatment added in patients resistant or intolerant to phlebotomy.In patients at high risk of thrombosis (age > 60 years and/or previous thrombosis), cytoreductive treatment is indicated, with hydroxyurea (HU) being the most frequently used agent for this purpose [2,5,6]. Even though Israel has a well-established digital medical records system, providing a unique opportunity to study the epidemiology and treatment outcomes of PV patients, there were no available data pertaining to the epidemiology of PV, treatment paradigm, or outcomes. 
While previous studies have shown that HU resistance or intolerance may be associated with disease progression and adverse outcomes [7], the implications of stopping HU because of intolerance are not fully characterized.Filling this knowledge gap may be valuable for clinical and health policy considerations.In this study, we report the clinical and economic implications of HU intolerance in adult PV patients in routine clinical practice. Study Design This retrospective study used the central database of Maccabi Healthcare Services (MHS).MHS is a nationwide health plan (payer-provider) covering a quarter of the Israeli population.The database contains longitudinal data on a stable population of ~2.5 million people.Data collected include demographic details, physician visits, laboratory results from a single central laboratory, imaging results, and prescription and drug purchase data.The database includes several automatically formulated registries, including a cardiovascular registry [13].The MHS ethics committee reviewed the study design and approved it. Study Population The study population included adult patients > 21 years of age to whom HU was dispensed for at least three consecutive months between the years 2000 and 2015.The time of first HU purchase was defined as the index date.Eligible patients had either a recorded diagnosis of PV (ICD-9 Code 238.4) at any time from 2 years before the index date and through treatment, or blood counts indicative of PV (i.e., hematocrit > 45%, platelets > 400 × 10 9 /L, and white blood cells > 10 × 10 9 /L) for 6 months before or after the index date.Study outcomes were documented from the index date until 31 December 2018, death, or leaving MHS.This retrospective, non-interventional study was reviewed and approved by the MHS Institutional Review Committee and all methods were performed in accordance with the relevant guidelines and regulations.The need for informed consent was waived by the MHS Institutional Review Committee. Patient Groups All patients in the study started as "HU-treated" patients.Thereafter, they were categorized into three groups.The first group included patients who were resistant to HU based on standard European LeukemiaNet (ELN) criteria [14].HU-resistant patients had to have been prescribed HU at a dose of 2 g/day for at least 3 consecutive months (and continue without dose reduction), and their hematocrit to have remained above 45% in at least 80% of tests in the first year after the day of first purchase, OR their blood counts to have indicated platelets > 400 × 10 9 /L AND white blood cells (WBC) > 10 × 10 9 /L in at least 80% of tests in the first year after the first purchase.As phlebotomies were not consistently recorded in the Electronic Medical Records (EMRs), resistance was not based on the need for phlebotomies. 
The second group included patients who were intolerant to HU. For the purposes of this study, intolerance to HU was determined as meeting one or more of the following criteria: (i) ELN-based criteria for hematologic toxicity [9], i.e., blood counts performed at least 3 months after the first purchase of HU and during treatment indicating either neutrophil count < 1 × 10⁹/L, platelets < 100 × 10⁹/L, or hemoglobin < 10 g/dL; (ii) ceased purchase of HU or dose reduction from 2 g/day; (iii) prescription of busulfan, IFN-α, or ruxolitinib. Additionally, all patients in the intolerance group must not have met the above-mentioned resistance criteria. Non-hematologic toxicity intolerance, such as the development of leg ulcers, was not captured in our analysis since this is not consistently recorded in the EMRs. The third group comprised patients on HU who did not meet the criteria for resistance or intolerance and remained on continuous HU treatment ("stable" group). Demographic and Outcome Variables The following were collected from EMRs: demographic data, physician visits, laboratory results, imaging results, drug purchase data, and events captured in the cardiovascular registry, including myocardial infarction (MI), non-MI ischemic heart disease (IHD), peripheral vascular disease (PVD), and stroke/transient ischemic attack (TIA) [13]. Data regarding additional outcomes were also collected: venous thrombotic events, myelofibrosis, AML, mental health events (defined as at least two visits to a psychiatrist or psychologist, or two or more dispensed psychiatric drugs [antipsychotics, anxiolytics, benzodiazepine derivatives or antidepressants]), overall patient treatment costs (data in the EMRs were available for this outcome only from 2010), and mortality. Outcome Comparison For outcome comparison, a transition date was defined for HU-intolerant patients as the earliest date of cytopenia, stopping HU treatment, or receiving second-line treatment (see Patient Groups, Table 1). To compensate for the time passing from index date to transition date in intolerant patients and to allow a meaningful comparison of HU-stable vs. HU-intolerant patients regarding outcomes that occurred during follow-up, the median time to transition date (~2 years) was added to the index date of stable patients (i.e., stable patients had a new adjusted index date), while patients with a shorter follow-up time than the average time to transition (2 years) were not included in the outcome analysis. Only patients transitioning up to 5 years from the index date were included in the outcome analyses, to reduce variability (see illustration in Figure 1). The HU-resistant group was not included in the outcome analysis as only three patients met the resistance criteria (see Section 3). Statistical Analyses Descriptive statistics are presented as number and percent for categorical variables, and as mean ± standard deviation (SD) for continuous variables. Outcomes are reported as number per patient-years, and comparisons of proportions and means across groups were performed using chi-square and Student's t-tests, respectively. Kaplan-Meier survival curves were computed for time to the event and the log-rank test was used to assess between-group differences. A multivariate Cox proportional hazards model was used to identify the risk factors for outcomes.
All tests were two-tailed and a p-value of 5% or less was considered statistically significant. All analyses were conducted using IBM-SPSS version 25 (IBM-SPSS Statistics for Windows, Version 25, Armonk, NY, USA). Study Population A total of 1620 patients who purchased HU for at least three consecutive months were identified, of whom 785 did not have a PV diagnosis or PV-indicating blood counts. Of the remaining 835 patients, 733 (88%) had a diagnosis of PV and an additional 102 (12%) patients had blood counts indicative of PV (with no documented PV diagnosis). Five patients (0.6%) were excluded due to less than 3-year continuous membership in MHS after the index date. Thus, 830 patients remained in the study (Figure 2). Among 406 patients who underwent JAK2 V617F mutational analysis (48.9% of total study patients), 372 (91.6%) were positive (JAK2 V617F testing began in 2007). The annual incidence of PV patients requiring HU treatment during the study period was 2.37 to 5.94 per 100,000 patients.
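As a rough illustration of how the outcome rates and the adjusted index date described in the Methods can be derived from such a cohort, the following sketch computes events per 100 person-years by group and shifts the index date of stable patients by the median transition time. The column names and the example values are invented for illustration only and do not reproduce the study's data.

```python
import pandas as pd

# Hypothetical per-patient records: group, follow-up (years), event flag.
cohort = pd.DataFrame({
    "group": ["intolerant", "intolerant", "stable", "stable"],
    "followup_years": [4.9, 3.2, 5.5, 6.1],
    "venous_event": [1, 0, 0, 0],
})

def rate_per_100_py(events: pd.Series, followup: pd.Series) -> float:
    """Number of events per 100 person-years of follow-up."""
    return 100.0 * events.sum() / followup.sum()

rates = cohort.groupby("group").apply(
    lambda g: rate_per_100_py(g["venous_event"], g["followup_years"]))
print(rates)

# Adjusted index date for stable patients: shift the original index date
# by the median time from index to transition seen in intolerant patients (~2 years).
stable = pd.DataFrame({"index_date": pd.to_datetime(["2010-03-01", "2012-07-15"])})
stable["adjusted_index_date"] = stable["index_date"] + pd.DateOffset(years=2)
```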
Of the 318 HU-intolerant patients, 144 (45.3%) patients developed cytopenia, 52 (16.3%) patients switched to an alternative PV treatment, and 122 (38.3%) patients stopped HU treatment for unknown reasons, with no record to indicate that they received any subsequent PV treatment (Table 1). Of the HU-intolerant patients, 173 (54.4%) transitioned within 5 years from the index date and were compared for outcomes with the HU-stable patients. After compensating for the time interval between the index date and transition date (by adding the median transition time of 2 years to the index date of the HU-stable patients; see Section 2, Figure 1), a total of 487 (95.7%) HU-stable patients with sufficient follow-up time were included. The median follow-up period was 4.9 years for the HU-intolerant group and 5.5 years for the HU-stable group. Patient Characteristics at Baseline after Time Adjustment For outcome comparison, we present baseline characteristics for patients who were included in the study analysis after time adjustment. The number of non-MI IHD events was significantly higher in the HU-intolerant patients vs. the HU-stable patients (22 [12.7%] vs. 28 [5.7%], respectively; p = 0.003). None of the other baseline characteristics differed between the two patient groups (Table 3). Clinical Outcomes The results of clinical events are presented in Table 4. Arterial Cardiovascular Events No differences were seen in cardiovascular events: the MI event rate was 0.63 and 0.57 per 100 person-years (PY) among HU-intolerant and HU-stable patients, respectively (p = 0.836); the non-MI IHD event rate was 1.42 and 0.92 per 100 PY among HU-intolerant and HU-stable patients, respectively (p = 0.185); the CVA or TIA event rate was 1.31 and 1.40 per 100 PY among HU-intolerant and HU-stable patients, respectively (p = 0.841); and the PVD event rate was 1.42 vs. 0.92 per 100 PY among HU-intolerant and HU-stable patients, respectively (p = 0.098). Progression to Myelofibrosis or AML The rate of progression to MF was 6 per 100 PY in the HU-intolerant group vs. 0.9 per 100 PY in the HU-stable group (p < 0.001), and progression to AML occurred in 1.16 per 100 PY in the HU-intolerant group vs. 0.2 per 100 PY in the HU-stable group (p < 0.001). Phlebotomies during Follow-up Period During the follow-up period, 148 patients (30.4%) in the HU-stable group and 54 patients (31.2%) in the HU-intolerant group underwent phlebotomy. The mean number of phlebotomies among HU-intolerant patients was significantly higher than in the HU-stable group (9.8 ± 9.5 vs. 6.9 ± 8.0, respectively; p = 0.049). Mortality Death occurred in 58% of HU-intolerant patients vs. 30% of HU-stable patients. The mortality rate was significantly different between the groups: 10.3 per 100 PY in the HU-intolerant group vs. 4.8 per 100 PY in the HU-stable group (p < 0.001) (Table 4). The time to death was significantly different between the groups (log-rank test, p < 0.001) (Figure 4).
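The time-to-event comparison reported above (Kaplan-Meier curves with a log-rank test, as in Figure 4) can be carried out on any such dataset with standard survival tooling. The sketch below uses the lifelines library; the follow-up times and death indicators are invented purely for illustration and are not the study's data.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented follow-up times (years) and death indicators (1 = died) per group.
t_intolerant, died_intolerant = [1.2, 3.4, 4.9, 2.5], [1, 1, 0, 1]
t_stable, died_stable = [5.5, 4.8, 6.0, 5.1], [0, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(t_intolerant, event_observed=died_intolerant, label="HU-intolerant")
ax = kmf.plot_survival_function()
kmf.fit(t_stable, event_observed=died_stable, label="HU-stable")
kmf.plot_survival_function(ax=ax)

# Log-rank test for the between-group difference in time to death.
result = logrank_test(t_intolerant, t_stable,
                      event_observed_A=died_intolerant,
                      event_observed_B=died_stable)
print(result.p_value)
```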
Mental Health-Related Outcomes A total of 133 HU-intolerant patients (76.9%) vs. 320 HU-stable patients (65.7%) (p = 0.007) had at least two purchases of psychiatric medications or two visits to a psychiatrist/psychologist (Table 4). Economic Outcomes The limited availability of expenditure data resulted in a total of 82 HU-intolerant patients and 280 HU-stable patients suitable for this analysis. The total healthcare cost over one year from the adjusted index date (HU-stable patients) or transition date (HU-intolerant patients) was measured as the total cost ratio. This was defined as the total healthcare cost of HU-intolerant patients divided by the total healthcare cost of HU-stable patients, and resulted in a total cost ratio of 2.65. When the total cost ratio was allocated into four categories of physician visit costs, hospitalization costs, laboratory test costs, and the cost of medications, the highest cost ratio was 3.61, in the hospitalization category (Table 5). Discussion In this study we show that in routine clinical practice, HU intolerance substantially increases the risk of adverse outcomes in PV patients. These include thrombotic events, progression to MF and AML, the mean number of phlebotomies, hospitalizations, mental health-related events, and mortality. Furthermore, overall financial costs are greater in this patient group.
The prevalence of HU resistance or intolerance in PV patients has been reported to be ~11-24% [5,7-9]. In our study, however, we found a higher prevalence of intolerance (38.3%). The ELN guidelines require receiving 2 g/day of HU to define HU resistance [14]. In this study, we identified very few patients receiving HU at this dose, which limited our ability to study this important group of patients. The reason for this finding is not fully addressed by our study, although it may suggest that many patients reach clinical intolerance before reaching the maximal guideline-recommended dose of HU, demonstrating what could be regarded as "relative resistance" at the patient's maximal tolerated dose. Our definition of HU intolerance differed somewhat from that of the ELN because we were unable to capture patients with leg ulcers or other HU-related non-hematological toxicities in MHS data. Of note, some patients were defined as intolerant after several months of ceasing HU purchase. We cannot rule out that some of these patients restarted treatment with HU later on. In our cohort, only 16.3% of the HU-intolerant patients switched to substitute PV treatments (busulfan, IFN-α, or ruxolitinib), while the remainder either continued HU or stopped HU treatment without receiving any substitute treatment. This reflects the limited treatment options that were available to PV patients intolerant of HU during the study period. An additional "follow up" analysis to compare the outcomes of the patients who stopped treatment following HU intolerance with those who switched to a subsequent line of therapy and to those who remained on HU treatment is warranted. In addition, to better understand the differential impact of HU intolerance based on baseline risk categories, future analyses should stratify patients into low-risk and high-risk groups (e.g., aged > 60 years and/or with a history of thrombosis). This stratification was beyond the scope of the current analysis but could provide additional insights into the management and outcomes of HU-intolerant PV patients. Patient Characteristics While most baseline parameters of stable and intolerant patients were similar, two exceptions are notable: platelet count and RDW. Platelet count was significantly lower in patients who subsequently developed HU intolerance, while RDW was higher in these patients (Table 2). Interestingly, the PV-AIM study (based on the Optum database, utilizing machine-learning technology) identified RDW > 15% as a predictor of HU treatment failure within 3 months in phlebotomy-dependent patients [15], and RDW > 17% as a predictor of HU resistance within 6-9 months of starting HU [16]. Our findings, if validated, may contribute to the development of models to predict HU intolerance. Clinical Outcomes We observed a six-fold higher risk of MF and a five-fold higher risk of AML in HU-intolerant patients. This was comparable to the results in the study by Alvarez-Larran et al. [5], in which HU intolerance or resistance was associated with a 6.8-fold increased risk of transformation to MF or AML. We did not find any significant difference in arterial thrombotic complications, although this may be because of the relatively short follow-up time of five years. However, we found that the venous thrombosis event rate was significantly higher in HU-intolerant patients, suggesting an increased risk of thrombosis in HU-intolerant patients. Interestingly, in the PV-AIM study, the incidence of thromboembolic events was 20% vs. 15% (restrictive event definition) and 50% vs.
35% (extensive event definition), respectively, in PV patients who continued HU treatment compared to those who switched to ruxolitinib [15]. In our study, more HU-intolerant patients than HU-stable patients were hospitalized, and hospitalizations were prolonged among these patients. Treatment Costs The total healthcare cost of HU-intolerant patients was considerably higher than the total healthcare cost of HU-stable patients, most of which was attributable to hospitalization and medication. Previously, Parasuraman et al. reported higher healthcare costs in HU-treated PV patients with the occurrence of thromboembolic events [9], and our study may reconfirm these findings. Limitations Our study has important limitations. It is an observational, retrospective cohort analysis of real-world data. Therefore, major components relevant to the clinical course of PV treatment may be underreported or missing, such as leg ulcers and other cutaneous toxicities, phlebotomy frequency, and other potential components of comprehensive documentation. These limitations prevented the implementation of the ELN guideline criteria for determining HU resistance/intolerance in their full form, thus impeding the direct comparison of our findings with those of prospective observational studies. In our study, we identified cytopenia occurring at least 3 months after starting HU. This may reflect a transient situation in some patients until a stable dose of HU was achieved, and thus cytopenia may have resolved with longer follow-up. Our study also required a methodological solution to address the fact that HU-intolerant patients had previously been HU-stable by definition, and that stable patients might have developed resistance/intolerance after the study period. We therefore added a 2-year mid-time point to the index date of the HU-stable patients, to enable a meaningful comparison to the HU-intolerant patients, and included in the outcome analysis only patients transitioning up to 5 years from the index date, to reduce variability. While adding the mid-time point to the index date of HU-stable patients is in line with previously published studies, we did not conduct propensity score matching [17]. Finally, we did not have information regarding the JAK2 mutational status of some of the patients included, as they were diagnosed and treated prior to the availability of the JAK2 V617F mutation test. Among those tested for the JAK2 V617F mutation, 91.6% were positive. This is a little lower than the WHO guideline-referenced percentage of 95% [18], but higher than that reported in other published studies [19]. It is possible that some of the patients with a PV diagnosis in their MHS EMR who did not have a documented JAK2 mutation test status were actually tested for JAK2 at a hospital and found positive as part of their overall diagnostic workup, e.g., prior to the availability of this test in MHS. However, while our findings are in line with previously published data on the adverse outcomes of HU-intolerant PV patients, further studies may be required to fully validate them.
Conclusions In conclusion, this study suggests that in PV, HU intolerance is a risk factor for thromboembolic events, transformation to MF and to AML, and mortality. It also represents a significant economic burden. Thus, surveillance for HU intolerance through the regular monitoring of blood counts and clinical signs is crucial. Signs of HU intolerance to watch out for may include hematologic toxicity such as cytopenia (i.e., neutrophil count < 1 × 10⁹/L, platelet count < 100 × 10⁹/L, or hemoglobin < 10 g/dL); non-hematologic toxicity such as the development of leg ulcers, gastrointestinal symptoms, and mucocutaneous toxicities; and a lack of efficacy that may present, for example, as a persistent elevation of hematocrit or platelet count despite tolerable HU dosing. Predisposing factors such as RDW or platelet count at baseline should also be monitored and evaluated. Recent data show that IFN-α may lead to better overall survival compared to HU [20], and that ruxolitinib is associated with improved event-free survival and other outcomes compared to best available therapy in HU-resistant/intolerant patients [12,21]. The early and close monitoring of HU-treated patients for signs of intolerance may allow the timely identification of patients in whom the initiation of these drugs may be appropriate and consequently lead to better patient outcomes as well as reduced costs for the healthcare system. Figure 1. Definition of adjusted index date for the HU-stable group. (a) Timepoint of index date for HU-stable patients and timepoint of transition date for HU-intolerant patients. (b) Timepoint of adjusted index date for HU-stable patients. Figure 3. Kaplan-Meier curve for time to venous thrombosis event by group. Figure 4. Kaplan-Meier curve for time to death by group. Table 1. Study population by study group. Table 2. Patient characteristics at baseline. Table 3. Baseline characteristics at time of index date adjustment (for patients who were included in the study analysis). Table 4. Summary of clinical outcomes. Table 5. Healthcare cost ratio in study groups. † After one year of adjusted index date/transition date; HU-intolerant group n = 82; HU-stable group n = 280.
6,022.8
2024-06-01T00:00:00.000
[ "Medicine", "Economics" ]
A Class of Solutions for the Hybrid Kinetic Model in the Tumor-Immune System Competition In this paper, the hybrid kinetic models of tumor-immune system competition are studied under the assumption of pure competition.The solution of the coupled hybrid system depends on the symmetry of the state transition density which characterizes the probability of successful occurrences. Thus by defining a proper transition density function, the solutions of the hybrid system are explicitly computed and applied to a classical (realistic) model of competing populations. Introduction In this paper, the two-scale tumor immune-system competition hybrid model [1][2][3][4][5][6] is studied under the assumption that the transition density function is a symmetric and separable function.The competition between tumor and immune-system can be modeled at different scales.Cells of different populations are characterized by biological functions heterogeneously distributed, and they are represented by some probability distributions.The interacting system is characterized at a macroscopic scale by a density distribution function which describes the cells activity during the interaction proliferation.At this level, the distribution of cells fulfills some partial differential equations taken from the classical kinetic theory.In this case, the more general model consists in a nonlinear system of partial differential equations.From the solution of this system, one can define a parameter which defines the time evolving distance between the two distributions, and this parameter is the charactering coefficient of the microscopic equations, typically an ordinary differential system for the competition of two populations. This parameter has been considered [4,5] as a random coefficient whose probability density distribution is modeled by the hiding-learning dynamics referred to biological events where tumor cells attempt to escape from immune cells which, conversely, attempt to learn about their presence.Therefore, when the coupling parameter is obtained by solving the kinetic equations for the distribution functions, then it will be included in the classical Lotka-Volterra competition equations.We will analyze on a concrete example the influence of this stochastic parameter on the evolution.This method can be easily extended to more realistic competition models (see, e.g., [7][8][9][10][11][12][13][14][15][16][17][18][19][20]). The Hybrid Model for the Tumor-Immune System Competition Let us consider a physical system of two interacting populations, each one constituted by a large number of active particles with sizes: for = 1, 2 and R + def = [0, +∞).Particles are homogeneously distributed in space, while each population is characterized by a microscopic state, called activity, denoted by the variable .The physical meaning of the microscopic state may differ for each population.We assume that the competition model depends on the activity through a function of the overall distribution: = [ (, )] , ( [ (, )] : R + → R + ) . 
( The description of the overall distribution over the microscopic state within each population is given by the probability density function: for = 1, 2, such that (, ) denotes the probability that the activity of particles of the th population, at the time , is in the interval [, + ]: Moreover, it is We consider, in this section, the competition between two cell populations.The first one with uncontrolled proliferating ability and with hiding ability; the second one with higher destructive ability, but with the need of learning about the presence of the first population.The analysis developed in what follows refers to a specific case where the second population attempts to learn about the first population which escapes by modifying its appearance.The hybrid evolution equations specifically can be formally written as follows [4,5]: where , for = 1, 2, is a function of = { 1 , 2 } and acts over = { 1 , 2 }, while A , for = 1, 2, is a nonlinear operator acting on and [] is a functional (0 ≤ ≤ 1) which describes the ability of the second population to identify the first one.Then, (6) denotes a hybrid system of a deterministic system coupled with a microscopic system statistically described by a kinetic theory approach.In the following the evolution of density distribution will be taken within the kinetic theory.The derivation of (6) 2 can be obtained starting from a detailed analysis of microscopic interactions.Consider binary interactions specifically between a test, or candidate, particle with state * belonging to the ith population and field particle with state * belonging to the jth population.The modelling of microscopic interactions is supposed to lead to the following quantities. (i) The encounter rate, which depends for each pair of interacting populations on a suitable average of the relative velocity , with , = 1, 2. (ii) The state transition follows from the mutual action of the field particle (F) of the th population on the test particle (T) of the th population and vice versa so that → ⇐⇒ * () With respect to this mutual action, we can assume that this function depends on the biological model, as follows. (1) Competition within the first group and with others: particles of the th population interact with any other particle both from its own th population and from the th population so that ( * , * , ) ̸ = 0, ( fixed, ∀) . In this case, each particle of the th population can change its state not only due to the competition with the th population but also by interacting with particles of its own population.Instead, the individuals of the th population change their state only due to the interaction with the other th populations.They do not interfere with each other within their th group.(2) Competition within the second group and with others: particles of the th population interact with any other particles both from its own th population and from the th population so that (3) Full competition within a group and with others: particles of each population interact with any other particles both from its own population and from the other population so that ( * , * , ) ̸ = 0, (∀, ∀) . (4) Competition of two groups: particles of each population interact only with particles from the other population so that ( * , * , ) = 0, ( = ) . We can assume that this kind of competition arises when the dynamics in each population are stable and each population behaves as a unique individual. 
Then, by using the mathematical approach, developed in [1,2], it yields the following class of evolution equations: which can be formally written as (6) 2 . Transition Density Function Based on Separable Functions In this section, we give the solution of ( 15) under some simple assumptions on the form of the transition density (7). On the Symmetries of the State Transition Density. We assume that the integrability condition on , holds true.As a consequence, if we write the transition density as a linear combination of separable functions, this definition implies some symmetries which will be useful for the following computations, in particular. Theorem 1. If one defines the transition density as with ( * , ), ( * , ) > 0 (, = 1, 2), the following symmetry holds true: Proof.From ( 7), ( 17), we have There follows, with = 1, = 2 and = 2, = 1, so that by a comparison of to be valid for all 1 , 2 , that is, as a consequence of the definition (17), In particular, to fulfill (20), we can assume from which, by taking into account (22), we get so that, by a difference, Thus, according to (25), the mutual action of the state transition given by the definition (7) can be summarized by (18). In the following, we will consider a special choice for the transition density (17) as so that (18) is fulfilled. Preliminary is solved by where () is the solution of the second kind homogeneous Fredholm integral equation with being the eigenvalue of the integral equation, and when = 0, () is any arbitrary function fulfilling (32) 2 . Proof.Let us first notice that in the trivial case of = 0, there is no dependence on the function but this equation is also solved by (30) being In the more general case, (31) 2 , (32) 2 are direct consequence of the condition (5).By a simple computation, (29) can be transformed into the Fredholm integral equations (31), (32). When = 0, from the r.h.s, we have so that () cannot be univocally determined. The proof of all cases above is followed by solving these two equations in (), () with respect to the initial condition (0, ). For instance, for the first case (1), there follows that is However, if ̸ = 0, the integral of the right side of the second equation is , while the integral of the first side must be zero. With similar reasonings, we get the proof of the remaining cases.(15) In this section, we will give the explicit solution of the system (15) under some suitable hypotheses on both the encounter rate and the transition density ( * , * , ).Let us assume the symmetry of so that Solution of the System Thanks to the previous theorems, and the symmetry of ( * , * , ) as given by (18), system (15) simplifies, the following. Theorem 4. Let the transition density ( * , * , ) be defined as which fulfills (7) and the symmetries conditions (18) Thus, we obtain, by a variable change, so that (56) follows. 
4.1.Pure Competition Model.We will consider the solution of (56) when, together with the hypotheses (53), (54) 2 , some more conditions are given on the parameters.According to (26), let us assume together with the symmetries conditions (18).If we define we will discuss only the following hypotheses: which seem to have some biological interpretations, being the pure encounter-competition model.This happens when the transition of state arises only when particles of one population interact only with an individual of the other population.In this case, individuals of one population do not interact with individuals of the same population.with as given by (64), (65).This definition of the transition density fulfills (7) and the symmetries conditions (18).The density function (, ) is such that By assuming and for 0 , 0 , the condition with , Kronecker symbol. Application to Lotka-Volterra Model In this section, we will study a coupled system (6) where the macroscopic equations are the Lotka-Volterra equations (6) 1 .Concerning the coupling stochastic parameter [], we have to define the functional in (2), (6) depending on the "distance" between distributions; that is, with where the maximum learning result is obtained when the second population is able to reproduce the distribution of the first one: 1 = 2 , while the minimum learning is achieved when one distribution is vanishing.In some recent papers; it has been assumed [4,5] that In this case, it is = 1, when 1 = 2 ; otherwise ̸ = 1 with ↓ 0, depending on the time evolution of the distance between 1 and 2 . Let us notice that is the coupling term which links the macroscopic model (6) 1 to the microscopic model (6) 2 .There follows that the solution of the hybrid system (6) depends on the coupling parameter (80) which follows from the solution of (15).System (15) is a system of two nonlinear integrodifferential equations constrained by the conditions (7), (5).Moreover, its solution depends also the constant encounter rate , on the transition density function , and the initial conditions (0, ).In the following section, we will study the solution of ( 6), under some suitable, but not restrictive, hypotheses on . Under the hypotheses of Theorem 5 and the solution (72), we have In the last case we have the usual Lotka-Volterra system, therefore, we will investigate the first case.Thus, according to (6), we have the system with ≥ 0, ≥ 0, ≥ 0. The numerical solution of this system depends on both the parameters , , , and on the initial conditions 1 (0), 2 (0).We can see from Figures 1 and 2 that albeit the initial aggressive population 2 is greater than 1 , the first population can increase and keep nearly always over 2 in the quasilinear case in Figure 1(a) or always under 2 in presence of a strong nonlinearity in Figure 1(b). If we invert the initial conditions so that the initial population of 1 is greater than 2 , we can see that in case of quasilinear conditions (see Figure 2(a)) the population 1 after some short time becomes lower than 2 .For a strongnonlinearity, instead after an initial growth 1 , it tends to zero in a short time, while the second population grows very fast and becomes the prevalent population in Figure 2(b). Conclusion In this paper, the hybrid competition model has been solved under some assumptions on the transition density.In the simple case of Lotka-Volterra, the numerical solution gives some significant and realistic insights on the evolution of competing populations.
3,005.8
2013-05-08T00:00:00.000
[ "Mathematics" ]
Augmented play: An analysis of augmented reality features in location-based games As well as popularising location-based games, Pokémon GO helped connect location-based play with augmented reality (AR), bringing this still-nascent technology into the mainstream. Despite growing use of AR, its long-promised revolutionary potential remains stifled by limited innovation, technical barriers and lack of uptake by users. To explore how AR figures into location-based games, we analysed 11 location-based games with AR features. We identify four overarching ways these games incorporate the physical environment into gameplay: through superimposition, blending, immersivity and materiality. Our findings show that AR is most commonly a gimmick rather than a central element of the game experience and remains substantially hindered by technical glitches and limitations. While more advanced and deeply integrated AR mechanics are emerging, its use in location-based games remains far from the 'technological imaginaries' that have accompanied its development as AR continually oscillates between its status as a 'mundane' and 'always-imminent' technology. Introduction Pokémon GO (Niantic, 2016) was one of the first mainstream mobile games to successfully integrate augmented reality (AR). Players use their smartphone's camera to take photos of virtual Pokémon in physical locations, for example playing in their backyard, standing on their kitchen table, or posing next to themselves or others. This feature, used in the game's promotional material, was a successful experiences (see Carmigniani et al., 2011: 342-3; Vaughan-Nichols, 2009). It was not until the 1990s, however, that AR became popularised and more concretely defined (Azuma, 1997). It is often defined as the enhancement or augmentation of a physical environment or space, in real-time, through virtual, computer-generated information. This definition usually makes a clear distinction from virtual reality (VR), which immerses the user in a completely virtual environment. AR, in contrast, supplements, enhances, blends or even obscures an actual physical location with virtual content or data (Carmigniani et al., 2011: 342). This definition is seemingly established, and the technology has become mainstream, perhaps even mundane, since its incorporation into smartphones, as evidenced by its inclusion in various photo filters embedded in social media platforms (see Javornik et al., 2022). But this definition also overlaps and intersects with numerous other technologies and practices that have emerged since the 1990s that similarly blend the physical and virtual realms. These include mixed-reality games and experiences, ubiquitous computing, wearable computers, internet of things (IoT) devices, ambient intelligence and responsive architecture, among many others. As Lev Manovich (2006: 225) argues, AR is simply one strand of a broader shift towards 'augmented space', as the spaces people live in and move through, from the home to the public spaces of the street, museum or shopping centre, are increasingly overlaid with digital interfaces and content. This approach positions AR away from the technology that makes it possible, such as devices, interfaces and computer-generated information, reframing it as cultural practice. It widens the scope of what AR is, but at the same time makes distinctions between it, mixed-reality, ubiquitous computing, IoT and other practices even looser.
Since its popularisation as a concept in the 1990s, AR has gradually been incorporated into both commercial technologies and artistic/experimental practices (see Liao and Iliadis, 2021; Geroimenko, 2019). These developments have subsequently given rise to both utopian promises and dystopian anxieties about its social impact. As Liao and Iliadis (2021) note in their analysis of the discourse around AR, corporations, start-ups and futurists frequently linked AR to science fiction imagery, like the figure of the cyborg or superhero, with humans becoming empowered and enhanced as their surroundings are enhanced by virtual information. Google Glass's announcement in 2012 fuelled expectations that these possibilities would become reality. But they also gave rise to anxieties about privacy and data tracking, as reflected in parodies of its concept video and various dystopian-themed short films that followed its release, that ultimately led to Glass's failure. 1 AR continues to be shaped by such imaginaries, but these visions and anxieties remain largely unrealised. Both the development of AR and its uptake by consumers have been hindered by numerous technological and social barriers. The most notable of these remains what Azuma (1997: 18-9) calls the 'registration problem': achieving realism when blending virtual content with the physical environment, without clipping, distortion and other effects that break immersion. Other barriers include usability, battery life and privacy concerns around facial recognition, surveillance and data tracking. Nonetheless, dreams of an AR-driven society have not been abandoned, with Apple, Google and other companies continually improving and investing in the technology over both the long- and short-term (see Kastrenakes, 2021; Stein, 2021). Even as it becomes more familiar and mundane, then, AR perpetually inhabits what Liao and Iliadis deem 'a future so close': an always imminent and emergent state that never quite seems to fully materialise. Although the AR revolution has yet to arrive, location-based games have increasingly incorporated AR into the gameplay experience, making the technology tangible and accessible for a wider audience. Pokémon GO's success contributed to location-based games' uptake of AR, but videogames have been experimenting with AR since the late 1990s. 2 Following the advent of smartphones, location-based game apps began to incorporate AR elements, such as Argh (Augmented Reality Ghost Hunter) (see Gazzard, 2011) and Niantic's Ingress (2012). Although the latter does not use the phone's camera to augment the player's surroundings like Argh, Ingress is often nonetheless described as an 'augmented reality game' (see e.g. Metz, 2012; Winegarner, 2016) despite being better categorised as a location-based game or even an 'alternate reality' game (ARG). This reinforces the elusiveness of definitions of AR and its conflation with other technologies and practices like 'mixed-reality', 'locative media' and 'ubiquitous computing', all of which imply the blending of physical and virtual space. Similarly, the term location-based gaming is also difficult to define and overlaps with many other terms, such as mixed-reality games, pervasive games and locative games (Leorke, 2018: 36-7). Like the discourse around AR more broadly, the incorporation of AR in location-based gaming has been met with both optimistic and sceptical visions of its social impact.
Niantic CEO John Hanke (2017) invokes earlier promises made about AR rooted in utopian science fiction fantasies when he speculates about AR's potential to bring about 'buildings, offices, homes, cities and transportation with live, dynamic interfaces customised to you and what you want to do' (n.p.). Niantic has led the way with implementing these visions of AR in gaming through its 'occlusion' technology, 'buddy adventure' feature and 'AR mapping' research tasks in Pokémon GO. 3 Niantic has also developed a platform called Lightship for embedded AR in everyday environments, described as a 'planet-scale augmented reality platform' and 'operating system for the world' (Niantic, 2021a: n.p.). While these developments have been met with enthusiasm by players, they have also fuelled familiar fears about pervasive surveillance, threats to privacy and disconnection from the real world (see e.g. Barbé, 2017; Carter and Egliston, 2020). While various theorists have sought to examine the use of AR in gaming, these studies remain largely design-focused. Wetzel et al. (2008) have proposed design guidelines for mixed-reality games, offering an early perspective on what AR games should be, while Kalalahti (2015) has proposed heuristics for the usability of AR, which can be used as a lens to study how contemporary AR games support them. Meanwhile, in their work Laato et al. (2019, 2021b) examine the use of AR specifically in location-based games, including players' motivations for using it, finding that around 25% of all location-based games now include AR elements. Yet there is a growing need for further research into AR and location-based gaming that specifically unpacks how AR is used in these games and what impact it has on players' experiences. In turn, this approach can shed light on the extent to which AR's practical implementation fulfils the long-held visions around it, as it shifts from a 'speculative' or 'emergent' technology towards a more familiar and mundane one. Methods and data To address this gap in the literature, we created a list of current commercial location-based game apps and analysed their AR features. To create this list we searched the Google Play Store and Apple App Store with three different search terms: 'location-based game', 'GPS game' and 'geolocation game'. From the search results, we included the apps in our analysis that contained both location-based elements and were considered to be games based on their descriptions. The search from Google Play resulted in 65 location-based games and from the App Store in 46 location-based games. When duplicates were removed, 98 total games remained. From this full list of location-based games, we chose games that either had over one million downloads, an average rating of a minimum of 4 stars, or more than $5000 in revenue in February 2021 according to Sensor Tower. 4 With these criteria, we sought to predominantly capture successful games, since these games would be more likely to include higher quality or more widely used AR features. Four games were excluded from analysis: two for not being available in the local stores of the authors, one due to being unplayable without a larger group of people, and one due to the location-based elements having been removed from the current version. This left 42 games. Each of these games was played to check whether the game included any AR elements. We defined AR as the use of the device's camera to alter or augment the physical environment in some way.
This definition excluded games without this feature and arguably overlooks broader conceptualisations of AR mentioned earlier, which consider AR under the broader rubric of 'augmented space' or 'augmented reality'. This approach focused our analysis more on the technical use of AR in location-based games than the conceptual use, although we revisit this broader conceptual approach to AR in our Discussion. Eleven of the 42 games analysed featured AR elements (see Table 1). In addition, one game, The Walking Dead: Our World, used to include AR, but the feature had since been quietly removed. The games with AR elements were more closely inspected to analyse these elements and their integration in the game. The analysis was done utilising the formal analysis of gameplay (Lankoski and Björk, 2015), examining game elements and their interactions with chosen focus points. All authors participated in the analysis process, and each game was analysed by two of the authors. As the chosen games and their AR features varied greatly, no strict time limit was agreed for the analysis; however, the analysis of each game was conducted in multiple sessions to find the different ways AR was included and implemented. Each individual use of AR was documented by describing how it functioned and how the feature was integrated in the gameplay. In addition, the analyser focused on the technical aspects of the implementation. One exception to this approach was Men in Black: Global Invasion (Ludare Group Inc, 2019), which could only briefly be analysed by one author before it was suddenly and without notice shut down. Despite this interruption we include it in our discussion, supplementing our notes with gameplay videos from YouTube. Augmented reality mechanics The games with AR elements (N = 11) included various ways of implementing them, summarised in Table 2. Table 2. AR mechanics found in the analysed games (Type: Description: Games). Alternative view: the game offers an alternative AR view of the same content as in the regular view: Landlord GO. Catching: the game shows a creature on the game screen against the player's camera view, and the player must aim and catch/shoot it. In its simplest form, AR was used to show an alternative view of the game content. This was the case with Landlord GO (Reality Games Ltd, 2020), which offered a special AR map, changing how the surrounding points of interest were shown. Instead of a top-down map view, the player could move their phone to see in which direction the different locations were in the physical world. This did not add any new content or gameplay but merely provided an alternative map view. Several games included a mechanic where the player can catch the in-game creatures in AR mode (Pokémon GO, Draconius GO (Elyland LLC, 2017), Monster Ball GO (Playfox Games World, 2016), Men in Black). In this mechanic, the creature appears against a real-world background when trying to catch it. In most games, the creature moves around to various degrees to evade the player as they attempt to 'net' it or deplete its health. The use of AR in these games is mostly optional, but in Pokémon GO, for example, catching special Pokémon in specific situations requires the use of AR. In many of these games, the player can take photos of the creatures against the real-world background (Pokémon GO, Jurassic World Alive, GPS Monster Scouter (Tankenka, 2016), Five Nights (Illumix Inc, 2019)).
In this mode the game offers in-game photo taking opportunities, and might also give other tools, such as resizing or turning the content and adding frames or filters. In Jurassic World Alive, the player can also record video of their captured dinosaurs interacting with the environment. In GPS Monster Scouter, the player can take pictures simultaneously of several of the game creatures, which are randomly placed on the screen. In Harry Potter: Wizards Unite (Niantic, 2019), the player can also take a picture of themselves for the 'Ministry ID' with different digital effects, like humorous accessories and makeup. Players can share these photos, which can function as viral marketing for the game. While this feature is often freely available, the player can sometimes buy new frames or filters for it, and in the case of Five Nights, accessing the feature requires in-game currency. This suggests that the photoshoot AR features can also be a way to monetise the game. Some games allow the player to interact with their creatures (Pokémon GO, Jurassic World Alive) against a real-world background. In Jurassic World Alive, the player can feed and play with all the dinosaurs they have in their collection, while in Pokémon GO, the player can only feed and play with the Pokémon they currently have as their 'buddy'. The buddy Pokémon also appears on the game map alongside the player's avatar after being fed. Interacting with the buddy increases its 'friendship' level and unlocks new abilities, such as bringing gifts for the player. In both of these games, interaction with the creature requires the use of AR, although in Pokémon GO players can use the 'quick treat' feature to quickly feed their Pokémon against a blurred camera background. In a shared AR feature, up to three players in the same location can have their buddy Pokémon appear in a shared area and play, feed and take pictures of them all together. In Wizards Unite, the player can use AR in 'trace encounters'. Traces appear on map and can be encountered with the AR mode on, starting the encounter against a real-world background. In the AR mode, the player locks a magical encounter or an enemy by aligning a pattern on the player's UI with the object and tapping the screen, which will then lock the object to be interacted with (see Figure 1). In the encounter, the actual fighting continues in a regular mode without AR. With the AR turned off, the encounter simply skips the beginning. AR is also used when unlocking 'portkeys' in Wizards Unite. In this mode, the player finds a suitable spot from their environment by moving the device, and once found, they tap the screen to set the portkey on the ground. A portal appears, and the player must move and walk through it. A new environment appears after this, and while it can be viewed by moving the camera, the actual surroundings of the player are no longer shown in the background. The player then moves their phone around them to locate collectibles (see Figure 2). Five Nights included two similar AR mechanics where players have to defend themselves. In one, the player receives a visit from a monster, who stalks them before attacking, while in another, the player collects light orbs. When attacked, the player must constantly turn around to search where the monster is hiding, which is hinted at by a static distortion effect of the camera view. When found, the player has to wait for the monster to sprint towards them and shock it at the right moment. 
In the other mechanic, players must search and collect light orbs, luring them with a torchlight. After some time, a shadow monster appears, indicated by a sound. The player again must quickly find it by turning around and dissolving it with light before it attacks. Munzee (Freeze Tag Games, 2011) utilises physical content placed in the environment by other players. There are both virtual and physical 'Munzees', spots that can be deployed on the real-world map for other players to find. While the virtual Munzees can be collected simply by tapping the screen when near enough, the physical Munzees, small QR code stickers, need to be found and scanned. The AR element is simple, as the player can see the physical sticker through the in-game user interface and scan it, after which it is collected (see Figure 3). Some games allow more complex interaction with the in-game elements through the AR view. This was most advanced in Minecraft Earth (Mojang Studios, 2019), where the main mechanics focused on crafting: building, modifying and adventuring in small Minecraft worlds. These mechanics were divided into two different playstyles: buildplates that focused on building and modifying the world, and adventures that have a limited time to find a treasure within them. These worlds were placed on a chosen spot in the player's environment, after which the player could point their phone towards it and see it from different angles by circling the world. The player could interact in similar ways as in the Minecraft game: use a pickaxe to break and collect blocks, a sword to kill enemies, an axe to hack down trees and build with the collected blocks and items. Effect on gameplay The inclusion of AR features did not typically affect the gameplay significantly. AR functioned as a technical gimmick or a marketing tool, since game content superimposed on the player's surroundings could easily be shown, captured and shared for instance on social media. The photo taking and sharing aspect has been further utilised in the different photo shooting mechanics where the player can better place the creatures into the surroundings or add filters or accessories to the photos, which utilises the AR functionalities more even if it would not be a part of the core gameplay. As the games have evolved, some games have moved from merely showing the player's surroundings as the background to incorporating more interactive content. For instance, in Pokémon GO, capturing the creature in AR mode starts with a minigame where the Pokémon has to first be located and tapped to start the actual capture. The buddy feature, where players can interact with their collected Pokémon, also involves a minigame to scan and situate the Pokémon in the physical environment. These make the AR element more integral to the gameplay, but they can also become laborious, as previous research shows (Laato et al., 2021a; Paavilainen et al., 2017; Rapp et al., 2018). In most games AR either features as a minigame outside the core gameplay or otherwise plays a small role. In Five Nights, though, AR is included in the core mechanics of the game, and the horror theme of the game is emphasised and partly utilised through the AR features. The screen darkens the player's surroundings and uses different audio and visual cues to tell when the enemies are near or approaching (see Figure 4), including jump scares especially when the player fails.
The game utilises the full surroundings of the player and may convey a feeling of urgency to locate the content within it, seeking to be more immersive. Importantly, this immersion is achieved not just visually but also through sound effects, as the monsters arrive at the player's 'door' and laugh maniacally. The analysed games varied in whether the AR features could be turned off or were otherwise a voluntary part of the game. AR features still often make the gameplay more challenging, and not all mobile phones have the technology to run them properly or at all, while in some games, the AR features can feel like extra labour for the player. As a solution, many games include a simple on-off switch to toggle between AR and normal mode. This option was used in situations where the player tries to catch something from the screen, either having the real world or the game world as the background. As the player loses resources if the catch attempt fails, the more challenging AR mode might be the less optimal choice from the gameplay point of view. Wizards Unite solves this by separating the AR part from the capturing part. However, because switching off the AR mode simply skips the beginning and the game does not reward using AR, it makes capturing slower and not beneficial purely from the point of view of progression, and thus might be less motivating for some players. The mandatory AR features were found in varying mechanics in the games, ranging from nonessential parts of the game to core mechanics. For instance, the photoshoot mode in GPS Monster Scouter did not give an option to turn off AR but played a minor role and was not needed to progress in the game. When the AR features were both mandatory and essential for the game, their implementation became more critical. In Munzee, the game could be played without AR if one concentrates solely on collecting virtual Munzees, but physical codes can be considered a core mechanic of the game and require AR. In Minecraft Earth and Five Nights, the gameplay revolved around mechanics that included AR, and the games could not be played in full without them. As noted, these two games also had the most complex AR mechanics. Technical aspects The technical implementation of AR features is still lacking, which was visible in various crashes and bugs that occurred during the testing. Sometimes the crashes led to losses; for instance, the adventure mode in Minecraft Earth sometimes crashed when the timer ran out, leading to the loss of all resources collected during it. Some of the issues were tied to a suboptimal connection between the game content and the camera view. Most frequently, the content shown on the player's screen jittered when the player moved their phone, making it feel less integrated in the surroundings. In some cases, the game content was static, and moved as the player moved their phone. While this would make the content feel even more detached from the surroundings, it made it easier, for instance, to place the content in a suitable spot for an artistic or funny photo. The slight movement of the virtual characters or objects on the screen made it more difficult to interact with them, especially when the player had to make more complicated gestures on the phone screen while trying to hold the phone steady. This was evident in the Pokémon GO-type catching situations, while in Minecraft Earth, reaching the correct tiles inside a slightly moving 3D environment was sometimes challenging.
The player might also have to adopt uncomfortable physical positions to reach otherwise inaccessible spots. Most of the AR technology in these games did not take the environment into consideration, but merely showed content on top of the camera view. When environment recognition was used, it could most frequently detect flat areas such as floors and the ground, first asking the player to slowly move or wave their phone and then showing suitable spots where the player could place the game content. There were a couple of exceptions to this. In Five Nights, the camera view was altered with different filters, adding static or distortion to different parts of the player's surroundings. In this solution, the game did not have to implement more complex technology to recognise elements from the player's surroundings but could still deliver a functioning effect. Pokémon GO's AR+ feature uses occlusion technology and tries to recognise and use objects in the player's surroundings. However, the AR 'registration problem' remains and the creature might in some cases end up seeming to be inside walls and other objects (see Figure 5). Categories for augmented reality implementation in location-based games After identifying the different AR mechanics and analysing their use, we divided the different ways to implement AR as a part of a location-based game into four broad categories: content being superimposed on the physical environment; content blending with the physical environment; players being immersed within a 3D world; and content utilising material objects (see Table 3). While most of our games fit into only one of these categories, Pokémon GO's option to switch between types of AR makes it overlap with two different categories, and Wizards Unite includes a few different AR mechanics which belong to different categories. Superimposing content onto the physical environment is a simple way to add AR elements to a game and is often used in Pokémon GO 'clones' when capturing or attacking game characters. In these cases, the player can see the game content on top of their real-world surroundings. However, the game does not recognise elements in the player's surroundings and the content merely floats on top of the camera view. Blending content with the physical environment is a slightly more advanced use of AR and enables the game to be more responsive to the player's surroundings. In this category, the game typically recognises suitable areas for the content from the player's environment, such as flat surfaces. As the technology advances, the content can take the environment into account in more detail, for instance having a character disappear behind the furniture. The content might appear as a 3D object that the player can then approach or circle and see from different angles. Instead of merely adding content to the player's surroundings, some games immerse players in a 3D world, transforming their entire surroundings into a part of the game and offering a 360-degree view of the fictional world. This might be done by transforming the camera view with different effects or filling the whole view with game content. In this technique, the player might imagine being inside the game world, possibly bringing a stronger sense of immersion. In the last category, the game utilises material objects from the physical environment, such as QR codes, and transforms them into game content, as in Munzee, where physical objects can be recognised with the in-game camera view and brought into the game. 
These objects can be found in different locations, and the player needs to scan them with the in-game camera view. If the physical elements are specifically made for the game, the game needs to be locally organised or be popular enough to have enough active players to spread the content around. Discussion Across our analysis of location-based games with AR features, we would like to highlight two overarching findings. First, AR was rarely integral to the gameplay experience of these games, nor did it substantially impact the gameplay. Second, the AR features were often hindered by technical flaws and constraints: the more sophisticated the game's use of AR, the more obvious its technical limitations became. Both these findings have significant implications for the growing integration between location-based games and AR, as well as for AR as a technology itself, and for the discourses and imaginaries around it that we discussed at the start of this article. As seen from the results, several of the games merely superimposed content over the camera view, rather than blending or integrating that content into the physical environment. Of the games that did take the environment into account (through blending, immersion and/or utilising material objects), most typically only appropriated the environment as a flat 'canvas' on which to place virtual content. In this sense, most of the games we analysed used AR in a relatively minor way, perhaps to distinguish themselves from the competition and/or to build on Pokémon GO's use of AR in its viral marketing, rather than actually pushing the boundaries of AR's capabilities. For all but three of the games we analysed (Five Nights, Minecraft Earth and Munzee) the AR features could be considered optional or part of mini-games, and their removal would not substantially impact the gameplay experience. In fact, it would make the game more streamlined and less laborious. This strongly echoes similar findings by Koskinen et al. (2019) and Laato et al. (2021a: 7), whose respondents felt that AR hindered their ability to progress more rapidly and predominantly used it only to take and share photos. In almost all the games we analysed, AR presented technical problems and glitches despite the fact that none of these games, with the possible exception of Pokémon GO's occlusion technology, pushed the technological boundaries of AR in any substantial way. From a design perspective, this shows that AR can potentially help attract interest in the game as a 'gimmick' or distinguishing feature, but its inclusion can also introduce bugs and design problems that might eventually put players off the game. This is one explanation for why AR figures prominently in these games' marketing material but is most commonly 'downplayed' in the game itself, existing as an optional feature that is not essential for gameplay and can easily be toggled off. This conflict between the value of AR features for marketing and the actual game is highlighted in The Walking Dead: Our World. As we noted above, the game used AR features in its release version (superimposing zombies onto the players' surroundings). These AR features were removed from the game in 2020, signalling a lack of importance and, likely, a lack of use from the players. Nonetheless, its developer continues to show the AR features in some of its marketing materials for the game, two years after their removal. 
Location-based games already present designers and players with additional technical challenges that hinder gameplay: most notably inaccurate, bouncing GPS signals and battery drain. On top of this, they ask players to physically move as part of the gameplay, which is often a drawcard for many players in terms of exercise, mental wellbeing and sightseeing, but can also require players to specifically set aside time for this if it is not part of their daily routine. Adding AR into the mix as well can likewise be both a drawcard and a burden. It can provide added value for the player (Paavilainen et al., 2018) and spur imagination and foster a closer connection between in-game and real-world content (Rauschnabel, 2021), but due to current technical limitations, these objectives can be difficult for developers to meet. As Laato et al. (2021b: 7) observe, the current technical limitations of AR indicate that it 'should be used to support, not replace human imagination.' This observation has additional implications when we consider the 'high turnover rate' for location-based games. As Leorke (2018: 113-118) notes, with few exceptions, location-based games have struggled to reach mainstream audiences and retain enough players to remain commercially viable over the long term, often closing their servers within a few years of release. Indeed, two games within our sample with AR elements were closed down within 3 months of each other: Minecraft Earth did not move beyond its beta phase and was shut down in June 2021, while Men in Black disappeared from stores in the middle of our investigation. Niantic subsequently shut down Wizards Unite in early 2022. This suggests that location-based games with AR elements are similarly high-risk. They enable designers to break through and attract attention in a crowded mobile game market, but the expense of running their servers, designing new content and patching technical problems means they are likely to be short-lived if they struggle to retain players. Nonetheless, location-based games with AR elements continue to be released. After we conducted our study, at least two more location-based games based on successful franchises arrived, The Witcher: Monster Slayer and Pikmin Bloom (Niantic, 2021b), both prominently including AR elements. Niantic's Lightship platform also showcases original AR games. This continued interest suggests that location-based games and AR will continue to co-develop in the future, despite the challenges and risks associated with them. More broadly, the technical limitations we identified in our study also indicate that AR is still far from realising either the utopian or dystopian visions and promises that have long accompanied it. As our overview of AR at the beginning of this paper indicates, the potential of AR always seems to be 'just on the horizon', couched in hyperbolic claims, science fiction metaphors and concept demos. But the actual technology itself has only incrementally advanced and is yet to overcome most of the technical barriers identified by Azuma (1997) more than two decades ago, including the 'registration problem' as an ongoing usability and user experience dilemma (Kalalahti, 2015). These limitations were evident in our analysis of location-based games with AR elements, with our sample games either avoiding them by minimising AR features, making them optional, or 'living with' the glitches and crashes they produced. 
It is telling that Niantic, which has invested the most heavily in AR, both financially and discursively, has yet to achieve even the most basic level of realism in its AR features. Our analysis showed that even placing a Pokémon in a furnished room without clipping or registration problems could not be consistently achieved. This will potentially change as smartphone hardware and software continue to improve. But it also reinforces that AR in location-based gaming (and, we would argue, AR as a technology more broadly) is simultaneously 'mundane' and 'always imminent'. As Richardson et al. (2021: 4) note in relation to Pokémon GO as 'mundane media', the impact of new technologies is often 'most interesting when they become mundane, receding from the spotlight and absorbed as part of everyday and habitual rituals of mobility and communication'. Pokémon GO popularised the use of AR in commercial location-based games, but it has also settled into a mundane and 'safe' feature of these games, as exemplified by the standardised and largely uninventive use of the technology in the games we analysed. This mundaneness presents two possible trajectories for AR in location-based gaming. It can remain a gimmick, the equivalent of a Snapchat filter: providing players with a moment of amusement or producing a viral, sharable image, but not substantially adding to the gameplay or pushing the boundaries of AR. Or it can spur new, innovative uses of AR as both commercial and artistic location-based game designers seek to create the next 'breakthrough' app that recaptures and reinvigorates the technological fascination that accompanied Pokémon GO's release. Our analysis shows that the former trajectory is currently dominant. But the lack of success that subsequent location-based games have achieved compared to Pokémon GO and the shutting down of several games in our sample signal that the time is ripe for a new wave of innovation that once again revives interest in the 'always-imminent' possibilities of AR. Limitations and future research The games we analysed are predominantly commercial location-based gaming apps designed to make a profit through microtransactions, advertisements, and/or players' data. There are also location-based games with AR features that are not released on app stores but are playable through specialised devices; we have not included these in our analysis. Further, there are lesser-known, artistic, experimental or publicly funded location-based game apps that use AR but did not show up in our search because they are not available in all countries or are simply not recognised as location-based or AR games by the app stores' search algorithms (see e.g. Innocent and Leorke, 2020). These games might present an alternative to our sample's gimmicky and unimaginative use of AR, since they are more likely to experiment and innovate with AR without the commercial imperatives that constrain our sample. In this article, we sought to examine the use of AR in the most popular and commercially successful location-based games, but a larger, more comprehensive study could also examine artistic location-based games. Furthermore, we chose to focus solely on location-based games that incorporated AR features through the device's camera. As we noted above, this excludes games like Ingress, which Niantic considers part of a wider 'shared alternate reality' by connecting information with place (Hanke, 2017: n.p.). 
We took this approach to avoid the definitional overlap between location-based gaming, augmented reality, augmented space, mixed reality and pervasive games, and to focus more specifically on AR as a technology rather than a cultural practice (Manovich, 2006). If we instead defined AR as the broader connection of virtual information to physical space in real time, every location-based game could potentially be considered to be 'augmenting reality'. Our approach provided us with a smaller sample and narrower lens in our game analysis, but we also sought to connect our findings to this broader discourse around AR through our discussion above. Nonetheless, we acknowledge that other scholars may adopt a wider definitional approach, which may produce different results from those we have presented here. Furthermore, while our analysis has reflected on subjective experiences of AR, such as players' affective and sensorial connection to place, further ethnographic research involving player interviews and observation could shed more light on the experiential nature of AR in location-based gaming. As we have argued, AR remains challenging to concretely define and is constantly in flux, and scholarly approaches to AR in location-based games similarly remain fluid and contextual. Conclusions AR has long been shaped by hyperbolic claims from futurists and technology companies about its potential, as well as concerns from scholars and commentators about its impact on privacy, social interaction and communication. Games have played an important role in these visions through experimentation with AR to create more immersive and realistic entertainment experiences, gamify elements of everyday life, and connect people with each other through playful augmented interfaces. To explore the impact of AR on location-based gaming and the growing synergy between these distinct, but overlapping, technologies, we examined the integration of AR features in one generation of location-based games. Our findings revealed four main ways that location-based games used AR to augment the physical environment: variously superimposing content onto it, blending content with it, immersing players within it, or utilising material objects from it. These categories can be further utilised in future research when analysing or developing AR features in location-based games as they continue to evolve. Our findings also showed that in many cases, despite figuring prominently in the marketing campaigns, the AR features were shallow or did not substantially impact the gameplay experience. We also found that the AR features of these games remain entangled in familiar technical issues, indicating that AR as a technology remains far from realising either the utopian claims or dystopian anxieties that continually accompany it. Non-commercial, artistic location-based games might be better able to explore new, emergent uses of AR for gaming. And existing games may continue to evolve through updates, patches, and new content, which can sometimes substantially alter their gameplay or features. They might also be shut down, as several of our sample already have been. As such, this study represents a snapshot of location-based AR games from early 2021, which we hope future research can build on. We argue that, as a simultaneously 'mundane' and 'always imminent' technology, AR's incorporation into location-based games and other technologies and practices ensures it will continue to be an important site for ongoing scholarly analysis. 
Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was partially funded by the Academy of Finland project Centre of Excellence in Game Culture Studies (CoE-GameCult, 312395).
9,300.2
2023-02-13T00:00:00.000
[ "Computer Science" ]
Transposon mutagenesis of Rickettsia felis sca1 confers a distinct phenotype during flea infection Since its recognition in 1994 as the causative agent of human flea-borne spotted fever, Rickettsia felis has been detected worldwide in over 40 different arthropod species. The cat flea, Ctenocephalides felis, is a well-described biological vector of R. felis. Unique to insect-borne rickettsiae, R. felis can employ multiple routes of infection, including inoculation of salivary secretions and potentially infectious flea feces into the skin of vertebrate hosts. Yet, little is known of the molecular interactions governing flea infection and subsequent transmission of R. felis. While the obligate intracellular nature of rickettsiae has hampered the application of large-scale mutagenesis strategies, studies have shown the efficiency of mariner-based transposon systems in Rickettsiales. Thus, this study aimed to assess R. felis genetic mutants in a flea transmission model to elucidate genes involved in vector infection. A Himar1 transposase was used to generate R. felis transformants, and subsequent genome sequencing revealed a transposon insertion near the 3' end of sca1. Alterations in sca1 expression resulted in unique infection phenotypes. While the R. felis sca1::tn mutant displayed enhanced growth kinetics compared to R. felis wild-type during in vitro culture, rickettsial loads were significantly reduced during flea infection. As a consequence of decreased rickettsial loads within infected donor fleas, R. felis sca1::tn exhibited limited transmission potential. Thus, the use of a biologically relevant model provides evidence of a defective phenotype associated with R. felis sca1::tn during flea infection. Introduction Rickettsial pathogens are obligate intracellular bacteria spread by hematophagous arthropods and are associated with a spectrum of emerging and reemerging vector-borne diseases worldwide. In the United States, there has been a resurgence of flea-borne rickettsioses within endemic areas, including California, Texas, and Hawaii [1]. Among flea-borne rickettsiae, Rickettsia felis, the causative agent of flea-borne spotted fever (FBSF), has been detected worldwide in over 40 different arthropod species [2,3]. Since R. felis was first associated with human infection in 1994 [4], further evidence implicating it as a widely distributed human pathogen has been building. A common cause of febrile illness in sub-Saharan Africa, FBSF is found in 3-15% of hospitalized patients diagnosed with fevers of unknown origin, but the risk is likely underestimated because its clinical signs (fever, headache, myalgia) are shared with other endemic febrile illnesses [3,5]. Moreover, a recent study demonstrated successful transmission from R. felis-infected fleas to canine hosts, resulting in a rickettsemic infection [6]. The cat flea, Ctenocephalides felis, which is a predominant ectoparasite found on domestic and wild animals, is a well-described biological vector of R. felis [7,8]. Notably, R. felis can utilize multiple routes to infect vertebrate hosts, including inoculation of infectious salivary secretions and potentially infectious flea feces [9-13]. While recognized as an emerging pathogen, little is known of the molecular interactions governing flea infection and subsequent transmission of R. felis. 
Genetic modification of vector-borne pathogens, such as Yersinia pestis and Bartonella henselae, has identified bacteria-derived factors essential for infection or transmission in fleas [14-20]. In contrast to these extracellular pathogens, the fastidious nature of rickettsiae requires direct interaction with host cells for propagation, complicating the development of applicable molecular tools. Utilizing genetic manipulation, studies have implicated several rickettsial determinants, including surface cell antigen-0 (Sca0), Sca1, Sca2, Sca4, Sca5, RickA, and RalF, in adhesion, invasion, cell-to-cell spread, and/or avoidance of the immune response in a mammalian host system [21-26]. While rickettsial factors vital for infection are being elucidated in vertebrates, less is known for arthropod vectors. For example, Sca1 is known to be expressed on the surface of rickettsiae and to facilitate attachment to non-phagocytic mammalian cells during in vitro culture [21]; however, its role during vector infection remains unknown. Arthropod-borne pathogens undergo complex changes in their host environment as they traverse between vector and vertebrate hosts. It is known that R. felis utilizes host-specific gene regulation during infection and transmission by the arthropod vector [27]. Likewise, fleas mount an immune response against invading rickettsiae [13,28,29]. The kinetics of R. felis infection in the flea have been detailed, with rickettsiae observed throughout the midgut, excretory system, salivary glands, and ovarian tissues as early as 7 days post-exposure, indicating mechanisms of immune evasion have evolved [30,31]. However, the rickettsial determinants driving flea infection remain to be elucidated. Therefore, the objective of this study was to characterize the phenotype of a R. felis transformant during flea infection. The establishment of an intracellular niche is crucial for rickettsial survival; therefore, it is hypothesized that if sca1 is essential in the vector, then disruption will result in an altered infection phenotype. In the current study, a R. felis sca1::tn mutant was generated and utilized in an arthropod host system to determine its contribution to infection and transmission. While the R. felis sca1::tn mutant displayed enhanced growth kinetics compared to R. felis wild-type during in vitro culture, rickettsial loads were significantly reduced during flea infection. Therefore, the use of a biologically relevant model implicates sca1 as an essential factor facilitating R. felis infection in the flea. Himar1 transposon mutants Using a modified pCis-mCherry-SS Himar A7 plasmid [32], rickettsial mutants were generated from R. felis str. LSU. Whole genome sequencing identified Himar1 insertion sites for 5 mutants, with the remaining 3 detected by semi-random nested PCR alone. Sequencing results were confirmed by PCR amplification, cloning, and Sanger sequencing. Results indicate 8 insertion sites, with representatives in both the R. felis chromosomal DNA and the pRF plasmid (Fig 1A and Table 1). By microscopy, mCherry fluorescence was observed for some R. felis transformants (Fig 1B). To achieve clonal populations, the limiting dilution method was used and R. felis mutants were grown under selective culture in ISE6 cells using L15B medium supplemented with spectinomycin and streptomycin [33]. For further characterization of growth phenotypes, a single mutant, R. felis sca1::tn, was selected. 
Sanger sequencing results indicated Himar1 insertion near the 3' end of sca1 (Fig 2A). Clonality was confirmed by PCR amplification of the flanking regions of the transposon (Fig 2B). Himar1 transposon interrupts normal gene transcription of sca1 To assess the effect of the transposon insertion on sca1 gene transcription, total RNA was isolated from R. felis sca1::tn and R. felis WT and analyzed by reverse transcriptase PCR (RT-PCR). Primers designed to amplify upstream of the Himar1 insertion site showed a reduction in sca1 expression in R. felis sca1::tn compared to R. felis WT (Fig 2C). Furthermore, amplification of the downstream region of sca1 revealed ablated gene expression for R. felis sca1::tn (Fig 2C). These data suggest that the transposon impacted normal sca1 mRNA synthesis at the 3' end of the gene. As it is known that transposon insertions can have polar effects on adjacent genes [32,34,35], primers were designed to amplify cDNA of portions of 3 genes located downstream of sca1: RF_0023, RF_0024, and RF_0025 (S1A Fig). Of interest, of these genes only RF_0023 transcripts were detected in R. felis WT, whereas they were absent in R. felis sca1::tn (S1B Fig). No amplification of rickettsial DNA was detected in the no-reverse-transcriptase controls. Thus, the data suggest the Himar1 insertion has altered sca1 gene transcription and subsequently affected transcription of a neighboring gene. The results identify a significant decrease in the ability of the R. felis sca1::tn mutant to attach to host cells across all time points examined (Fig 3A). However, the R. felis mutant was internalized to a similar degree as R. felis WT (Fig 3B). As previously described for the role of R. conorii Sca1 [21], the results presented are consistent with established redundant mechanisms of cell entry and suggest that attachment and invasion occur through independent mechanisms. Enhanced growth of R. felis sca1::tn in tick cells To assess the R. felis sca1::tn mutant's growth kinetics during arthropod cell culture, both rickettsial strains were independently cultivated in ISE6 cells and genomic equivalents were quantified temporally over a 7-day period by qPCR. Although R. felis sca1::tn was altered in its ability to attach to host cells, its growth was significantly enhanced beginning at 3 dpi when compared to R. felis WT (Fig 4A). Additionally, upon microscopy analysis, R. felis sca1::tn displayed distinct dense foci of infection, whereas R. felis WT presented a disseminated state of infection with few rickettsiae per cell at 7 dpi (Fig 4B). Overall, temporal examination of the growth kinetics following sca1 mutation indicates enhanced infection within tick cells, with a unique phenotype resulting in reduced dissemination. R. felis sca1::tn demonstrates a deficient growth phenotype during flea infection To confirm there was not a loss in fitness of the mutant during bloodmeal acquisition assays, rickettsiae were re-isolated from blood following a 48-hour incubation period. Although overall rickettsial growth kinetics were reduced after re-isolation from blood, rickettsiae remained infectious. Specifically, there was no statistically significant reduction in R. felis sca1::tn growth compared to R. felis WT (S2 Fig), suggesting the resulting flea infection phenotype is due to factors encountered following bloodmeal ingestion. 
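To make this kind of comparison concrete, the following is a minimal sketch, not the study's code, of how qPCR genome equivalents could be expressed as fold change over the 1 hpi input and compared between strains with a two-tailed t-test at each time point, as described for the growth-curve analyses. All numbers, array shapes and function names below are illustrative assumptions rather than values taken from the paper.

```python
# Illustrative sketch: fold-change growth from qPCR genome equivalents,
# normalised to the 1 hpi input, with a two-tailed t-test per time point.
import numpy as np
from scipy import stats

def fold_change(genome_equivalents, baseline_index=0):
    """Express each replicate's genome equivalents as fold change over the baseline column."""
    ge = np.asarray(genome_equivalents, dtype=float)
    return ge / ge[:, baseline_index][:, None]

# rows = technical replicates, columns = time points (1 hpi, 1, 3, 5, 7 dpi); made-up values
wt      = np.array([[1.0e4, 2.1e4, 9.5e4, 4.0e5, 1.1e6],
                    [1.2e4, 2.4e4, 1.0e5, 4.4e5, 1.3e6],
                    [0.9e4, 1.9e4, 8.8e4, 3.7e5, 1.0e6]])
sca1_tn = np.array([[1.1e4, 2.6e4, 1.8e5, 9.0e5, 2.9e6],
                    [1.0e4, 2.3e4, 1.6e5, 8.1e5, 2.6e6],
                    [1.2e4, 2.8e4, 2.0e5, 9.6e5, 3.1e6]])

wt_fc, mut_fc = fold_change(wt), fold_change(sca1_tn)
labels = ["1 hpi", "1 dpi", "3 dpi", "5 dpi", "7 dpi"]
for t in range(1, len(labels)):          # skip the baseline, which is 1.0 by construction
    t_stat, p = stats.ttest_ind(mut_fc[:, t], wt_fc[:, t])
    ratio = mut_fc[:, t].mean() / wt_fc[:, t].mean()
    print(f"{labels[t]}: mutant/WT mean fold change = {ratio:.2f}, p = {p:.3f}")
```

The same normalisation to input bacteria underlies the re-isolation growth curves (S2 Fig); only the comparison groups and numbers of replicates differ.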
Due to the inability to generate a rickettsemic model under laboratory settings, the rickettsemia levels in infected hosts remain unknown. To assess the mutant's phenotype during flea infection, cat fleas were exposed to an absolute number of 1.5 x 10^10 R. felis sca1::tn or R. felis WT in 600 μl of blood for 48 hours, providing a Rickettsia-rich meal and a higher probability of rickettsial infection among feeding fleas [11-13,31]. Following exposure, a subset of fleas was assessed for rickettsial burden, revealing that both flea cohorts acquired comparable loads of rickettsiae. However, weekly assessments over a 28-day period identified significantly lower R. felis sca1::tn loads in individual fleas when compared to R. felis WT (Fig 5A). Additionally, R. felis WT was detected at increasing levels in flea feces over time, with the highest loads at 28 dpe (Fig 5B). Failure of R. felis sca1::tn to replicate to levels comparable to R. felis WT coincided with the lack of detection in flea feces. However, detection of R. felis sca1::tn in flea feces could be induced, as fleas exposed to a higher dose of rickettsiae (5 x 10^10 total rickettsiae) sustained higher levels of R. felis sca1::tn over time (Fig 5C and 5D). Additionally, R. felis sca1::tn was observed at lower rickettsial densities when compared to R. felis WT by microscopy of whole flea sections, supporting the qPCR results (Fig 6). As comparable loads of both strains were detected in fleas at the time of acquisition, the data suggest that R. felis sca1::tn had a deficiency in initiating early flea infection. However, detection of R. felis sca1::tn 28 days after exposure implies persistence over time within the flea vector.

Fig 5. Rickettsial loads during flea infection. A) Fleas were exposed, independently, to R. felis WT- (black) or R. felis sca1::tn-infected (teal) bloodmeals at an infectious dose of 1.5 x 10^10 rickettsiae for 48 hours. Data are representative of mean ± SEM from three experiments for a total of 60 fleas with 3 technical replicates. B) Feces collected from exposed fleas were assessed for rickettsial enumeration by qPCR and standardized to 1 mg of feces. C) Fleas were exposed, independently, to R. felis WT- (black) or R. felis sca1::tn-infected (teal) bloodmeals at an infectious dose of 5 x 10^10 rickettsiae for 48 hours. Data are representative of mean ± SEM from two experiments for a total of 20 fleas with 3 technical replicates. D) Rickettsial loads per 1 mg of feces collected from fleas exposed to a high dose of rickettsiae. Significance was assessed at a 95% confidence interval (*p<0.05; **p<0.01) by two-way ANOVA with Bonferroni's multiple-comparison test to assess variation in the means from wild-type over time. https://doi.org/10.1371/journal.ppat.1011045.g005

R. felis sca1::tn is transmissible during flea cofeeding To determine if R. felis sca1::tn disseminated to flea tissues necessary for transmission, such as the salivary glands, fluorescent microscopy was employed. Both rickettsial strains were observed in salivary glands that were recovered from exposed female fleas within 48 hpe and at 7 dpe (Fig 7). Detection of the R. felis sca1::tn mutant in the salivary glands of exposed fleas warranted further investigation into its ability to be transmitted by fleas while feeding on a murine host. 
Subsequently, a cofeeding bioassay was employed as a means of tracking transmission to proximal feeding arthropods in the presence of a vertebrate host [10,36]. Donor (infected) fleas were allowed to feed with recipient (naïve) fleas for 3 days. Naïve fleas were labeled with the fluorescent biomarker, RhoB, to allow for distinction between the flea cohorts using microscopy (Fig 8). Although not significantly different, the cofeeding bioassay generated R. felis WT infection in 20% of recipient fleas, whereas only 10% of recipient fleas were positive for R. felis sca1::tn by qPCR (Table 2). Additionally, rickettsial DNA was identified in the skin of two R. felis WT-exposed mice and a single R. felis sca1::tn-exposed mouse (Table 2), suggesting a mechanism of transmission to naïve fleas. The presence of R. felis in the mouse skin was further validated by Sanger sequencing of PCR-amplified R. felis ompB. Although a comparable prevalence of infected donor fleas (77% and 80% for R. felis WT and R. felis sca1::tn, respectively) was observed, rickettsial loads were significantly lower in the R. felis sca1::tn-exposed fleas (Table 2). Thus, the data suggest that the R. felis sca1::tn mutant has a decreased capacity to infect fleas, ultimately lowering its transmission potential.

Fig 7. Rickettsial detection within flea salivary glands. Fleas were exposed to a R. felis WT- (top panels) or R. felis sca1::tn-infected (bottom panels) bloodmeal for 48 hours, and female salivary glands were microdissected at 2 dpe and 7 dpe. Samples were stained for rickettsiae (green), nuclei (blue), and Evans blue (red). White arrows indicate rickettsiae. Image is representative of salivary glands dissected from 5 fleas collected from independently exposed flea cohorts. Scale bar = 100 μm. https://doi.org/10.1371/journal.ppat.1011045.g007

Fig 8. Fleas were independently exposed to a R. felis WT- or R. felis sca1::tn-infected bloodmeal for 48 hours. At 5 dpe, 10 donor (circled) and 10 recipient (yellow) fleas were allowed to feed on a murine host for 12-hour increments, totaling 36 hours. Post-feeding, fleas were individually assessed for rickettsiae by qPCR. Figure created with BioRender.com. https://doi.org/10.1371/journal.ppat.1011045.g008

Discussion The genetic manipulation of rickettsiae has historically been challenging. The advent of transposon mutagenesis has allowed for the discovery of rickettsia-specific virulence factors. Phenotypes associated with genetic mutants observed during in vitro culture have elucidated functionality [22-24,37,38] or their crucial involvement in causing disease in vivo [24,38,39]. Although informative, these studies have primarily occurred in tick-associated SFG Rickettsia spp. However, distinct genetic and biological differences between tick- and insect-borne rickettsiae exist (reviewed in [40-42]); therefore, the role of these molecules cannot be generalized across the entire Rickettsia genus. Moreover, most molecules have been characterized in a mammalian system [37,43,44]. 
As the Scas belong to a class of immunodominant outer membrane proteins (Omps) that are known to be involved in recognition of, attachment to, and protrusion through host cells [21,26,45,46], they are highly conserved among Rickettsia spp., with a core set of 5 sca genes encoded in most rickettsial genomes [47,48]. Scas are characterized as type-V secretion systems, or autotransporters, which are defined by the presence of (1) an N-terminal signal sequence; (2) a central passenger domain; and (3) a C-terminal β-peptide domain [47,49]. As rickettsiae are Gram-negative bacteria, the rickettsial cell envelope consists of an inner membrane (IM), periplasmic space (PP), and outer membrane (OM). Thus, to achieve implantation into the OM, proteins must be secreted across the IM. In bacterial species such as Escherichia coli, the structural integrity of the β-peptide domain is essential in the recognition process during translocation and OM anchoring (reviewed in [47]). The observation of several Scas anchored to the surface of rickettsiae [21,23,24] indicates the essential proteins are present to achieve translocation from the cytosol to the OM. Importantly, the β-peptide domain of another dominant Omp has been associated with cell surface expression and direct host interactions [50]. However, to study the exact function of the β-peptide domain and its role in translocation of the Sca1 peptide to the surface of R. felis, the R. felis sca1::tn mutant will require further investigation at the protein level. Protein expression was not examined in the current study due to the lack of an available antibody specific to R. felis Sca1. Transposons are known to have polar effects on the expression of neighboring genes [32,34]. To determine whether this occurred in R. felis sca1::tn, transcripts from three genes downstream of sca1 were analyzed. The mutant exhibited a loss of gene expression for RF_0023 when compared to R. felis WT. However, genes RF_0024 and RF_0025 were undetectable by RT-PCR in either R. felis WT or R. felis sca1::tn during tick cell culture. The results suggest that while RF_0024 and RF_0025 may not be expressed by R. felis during tick cell infection, the introduction of the transposon in the mutant induced polar effects on an adjacent gene. Due to the lack of comprehensive gene annotation, the influence of RF_0023 on the R. felis sca1::tn mutant's phenotype during infection cannot be assigned. However, the ability of the R. felis sca1::tn mutant to infect cells under these experimental conditions suggests that the gene is dispensable for tick cell infection. Future studies detailing transcript activity during R. felis infection in arthropod cells would provide additional insight into currently uncharacterized genes. While sca1 is annotated in all validated Rickettsia spp., little is known of the functional role of Sca1 during infection and transmission. In a non-phagocytic mammalian cell model, pretreatment with Sca1 antibodies reduced the attachment efficiency of R. conorii to host cells [21]. Due to the current lack of an available flea-derived cell line, a surrogate in vitro arthropod system was utilized. ISE6 cells have been routinely used to isolate and propagate R. felis [13,51,52]. In the current study, the R. felis sca1::tn mutant was limited in its ability to attach to tick cells, suggesting a universal role of Sca1 across both vertebrate and arthropod hosts. Genetic mutagenesis of Rickettsia spp. 
has revealed unique phenotypes during in vitro culture, yet these mutants remain competent in overall growth when compared to wild-type strains [22-24,53,54]. In this study, the R. felis sca1::tn mutant exhibited enhanced growth in tick cells compared to R. felis WT. The phenotype is consistent with the tick-borne rickettsial mutants Rickettsia parkeri sca2::tn and R. parkeri rickA::tn, where altered phenotypes were not observed during in vitro culture. Although these genes are essential for the ability to polymerize host cell actin during early and late stages of infection for spotted fever group (SFG) Rickettsia, overall growth kinetics in cells were augmented compared to R. parkeri WT [23]. As multiple rickettsial molecules have recognized involvement in attachment to and invasion of host cells, compensatory factors necessary for survival are likely present. Due to the fast intermittent feeding biology of fleas, the initiation of bloodmeal digestion can occur as early as 6 hours post-feeding [55]. Additionally, the digestive process occurs within the midgut lumen, and fleas are armed with the capacity to elicit an immune response to invading pathogens (reviewed in [40]). Thus, to avoid rapid excretion or detection by immune mechanisms, flea-borne rickettsiae must quickly attach to and invade midgut epithelial cells. However, the factors facilitating rickettsial colonization in fleas remain undefined. In its biological vector, the R. felis sca1::tn mutant's growth was significantly reduced compared to R. felis WT, suggesting an impaired ability to colonize the flea at early stages of infection (e.g., host cell attachment or evasion of the flea's immune response). It has been shown that fleas mount a transcriptional response against invading Rickettsia typhi during midgut infection [28,29] and R. felis during salivary gland infection [13]. However, the rickettsia-specific molecules involved in evading the arthropod's natural immune response remain to be elucidated. Additionally, while protein expression analysis of the 5 most prevalent Scas in R. typhi revealed differential expression between vertebrate and vector hosts, R. typhi Sca1 was not shown to be expressed during flea infection [56]. Differences from the phenotype observed in the current study may be due to the Rickettsia sp. examined (R. typhi versus R. felis). More likely, the examination in the current study has identified a temporal necessity for sca1 during flea infection. Temporal expression of pathogen determinants has been identified for other vector-borne pathogens, such as Borrelia burgdorferi and Y. pestis (reviewed in [19,57]). If Sca1 is implicated in the recognition of and adherence to host cells, then expression may be below the limit of detection at later stages of flea infection. Thus, a temporal assessment of rickettsial Sca expression profiles during flea infection is warranted to gain a thorough understanding of the factors essential for initial colonization, replication, and transmission. Insect-borne typhus group Rickettsia, such as R. typhi and Rickettsia prowazekii, are known to colonize the insect's midgut epithelium, where exponential growth causes host cell lysis, subsequently releasing rickettsiae back into the midgut lumen [58-60]. Extracellular rickettsiae are then excreted into insect feces, which is a primary mechanism of horizontal transmission to vertebrate hosts. The detection of R. felis WT in flea feces throughout the 28-day time course suggests a similar transmission mechanism [12]. 
In the current study, the inability of the R. felis sca1::tn mutant to be detected in flea feces at any time point may be a reflection of rickettsial load. Thus, if rickettsial loads within the flea correlate with detection within feces, a defect in R. felis sca1::tn may prevent the prerequisite steps essential for fecal transmission of insect-borne rickettsiae. With the current lack of appropriate vertebrate models to assess the contribution of R. felis transmission through feces, the implications of reduced rickettsial loads for this route of exposure require further investigation. Although R. felis sca1::tn had a reduced rickettsial load during flea infection compared to R. felis WT, it persisted within the flea cohort throughout the 28-day time course of this study, which is representative of the average adult flea lifespan. Detection of R. felis within salivary glands as early as 1 dpe is known [13]. Comparatively, R. felis sca1::tn was observed in flea salivary glands for at least 7 days, suggesting its ability to disseminate to other flea tissues was not fully impaired. Due to the difficulty of inducing systemic R. felis infection in mouse models, cofeeding bioassays were used to assess transmission of rickettsiae between vectors. While deposition of infectious salivary secretions into the skin of vertebrate hosts does not guarantee transmission of the agent and progression to a systemic infection, detection of rickettsiae in the host skin at the arthropod feeding site is associated with transmission to proximal feeding arthropods [10,36]. Indeed, R. felis sca1::tn can be inoculated into murine skin and acquired by cofeeding naïve fleas. Although R. felis sca1::tn was acquired orally during feeding on the artificial host, migrated to the flea salivary glands, was secreted during subsequent feeding events on vertebrate hosts, and was acquired by neighboring fleas, a diminished transmission phenotype was observed compared to R. felis WT. Similar to the observations for fecal detection, reduced rickettsial loads in infected donor fleas likely influence transmission efficiency. Recent advances in rickettsial genetics have provided several mutants of tick-borne rickettsiae. Yet, few insect-associated rickettsial mutants have been developed [54,61,62]. To fully understand the complex biology of these obligate intracellular bacteria, the interplay of rickettsial factors during lifestyle changes between both mammalian and vector hosts must be examined. In the current study, transposon mutagenesis was employed to obtain randomized insertions within the R. felis genome, providing new resources to elucidate the function of rickettsial molecules involved in host infection. A clonal R. felis mutant with an insertion in the sca1 gene was further examined to identify novel phenotypes associated with culture conditions and flea infection. Cell culture revealed an enhanced growth phenotype, yet dissemination was limited compared to the wild-type strain. Interestingly, a reduced rickettsial load observed in the flea vector exposed to R. felis sca1::tn correlated with decreased transmission potential. While several factors can contribute to the differences observed, the data suggest that R. felis sca1 is associated with early infection of the vector and efficient transmission. Future studies utilizing complementation techniques and molecular reagents specific to R. felis Sca1 in the biological system presented here will facilitate full elucidation of Sca1's role in transmission by the vector. 
Ethics statement All animal research was performed under the approval of the University of South Alabama Institutional Animal Care and Use Committee (protocol number: 1489181). Rickettsial transformation The R. felis str. LSU transposon mutants were generated using a modified pCis-mCherry-SS Himar A7 plasmid [32]. The plasmid carries sequences encoding an mCherry fluorescence marker and the aadA gene, which confers resistance to spectinomycin and streptomycin, with expression driven by the Anaplasma marginale transcriptional regulator 1 (Am-Tr1) promoter [32]. The transposon is flanked by nine base pair inverted repeats recognizable by the Himar1 transposase, through which 1,833 base pairs were inserted into the rickettsial genome. Rickettsial transformants were serially passed on ISE6 cells using L15B medium supplemented with 100 μg/ml spectinomycin and streptomycin until clonal populations were achieved by the limiting dilution method [33]. Determination of Himar1 insertion sites To determine the Himar1 insertion site, rickettsial stocks were sucrose purified by needle lysis as previously described [63]. Genomic DNA (gDNA) was extracted using the DNeasy Blood and Tissue Kit, according to the manufacturer's protocol (Qiagen). Prior to whole genome sequencing, the integrity of DNA fragments was visualized by agarose gel electrophoresis. The sequencing was carried out with an Ion Torrent Personal Genome Machine (PGM) System on a 316 chip. Sequences were aligned to the annotated R. felis URRWXCal2 genome as a reference database (NCBI GenBank accession number: CP000053.1). Insertions identified by genome sequencing were confirmed by PCR followed by Sanger sequencing, using both a semi-random nested PCR method [37] and PCR-amplified products cloned into the pCR4-TOPO vector (Invitrogen) (S1 Table). Amplicons were Sanger sequenced following Azenta Life Sciences specifications and aligned to the R. felis reference genome in the GenBank database using BLAST, with further analyses performed in SnapGene (Version 6.0.4) software. All whole genome sequences are deposited in the NCBI Sequence Read Archive under the BioProject accession PRJNA896619. Characterization of R. felis sca1::tn To determine clonality of the R. felis sca1::tn mutant population, primers were designed to amplify flanking regions of the confirmed transposon insertion site (S2 Table). Amplicon specificity was validated by Sanger sequencing and aligned to the R. felis reference genome using NCBI BLAST. Bacterial populations were screened for clonality prior to each experiment. To determine alterations in sca1 expression, primers were designed to amplify upstream and downstream of the known transposon insertion site (S3 Table). Semi-purified rickettsiae were harvested through needle lysis followed by 2 μm filtration to remove large host cell debris and stored in TRIzol reagent (Invitrogen). Total RNA was isolated using the Zymo mini-RNA kit, cleaned with the Zymo Clean and Concentrator kit, and any residual DNA was depleted using TurboDNase treatment (Ambion). cDNA was synthesized using iScript (Bio-Rad) with random hexamers. A no-reverse-transcriptase control was used to confirm the absence of gDNA. 
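As an illustration of the insertion-site mapping step described above, the sketch below shows one simple way a transposon-genome junction can be located from sequencing reads: search for a transposon end sequence in each read and map the flanking read sequence to the reference by exact string matching. The sequences, read structure and function names are hypothetical placeholders rather than the actual Himar1 or R. felis sequences, and real data would normally be handled with a dedicated read aligner.

```python
# Illustrative sketch: locate a transposon-genome junction by string matching.
# All sequences below are fabricated toy data, not Himar1 or R. felis sequences.

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_insertion_sites(reads, tn_end, reference, min_flank=20):
    """Return reference positions where sequence flanking the transposon end maps exactly."""
    sites = set()
    for read in reads:
        for tn in (tn_end, revcomp(tn_end)):
            idx = read.find(tn)
            if idx == -1:
                continue
            # genomic sequence adjacent to the transposon end within this read
            flank = read[idx + len(tn):] if tn == tn_end else read[:idx]
            if len(flank) < min_flank:
                continue
            for query in (flank, revcomp(flank)):
                pos = reference.find(query)
                if pos != -1:
                    sites.add(pos)
    return sorted(sites)

# toy example: a 60-bp "reference" and one read spanning the junction
reference = "ATGGCTAGCTTAGGCTTACGATCGATCGGATCCTTAAGGCATGCATGCTAGCTAACCGGT"
tn_end = "TTAACCTGTTA"                    # placeholder transposon end sequence
read = tn_end + reference[25:55]          # simulated junction-spanning read
print(find_insertion_sites([read], tn_end, reference))   # -> [25]
```

In practice the flanking sequence would be aligned with a genome aligner and cross-checked against the semi-random nested PCR products, but the junction logic is the same.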
In vitro infection assays For analysis of rickettsial cell attachment and growth kinetics by immunofluorescence, ISE6 cells were seeded onto glass coverslips in 24-well plates at a density of 8 x 10^5 cells/well or in 8-well chamber slides at a density of 5 x 10^5 cells/well and incubated at 32°C for 48 hours. To detect genomic equivalents during growth curve analysis, ISE6 cells were seeded in 48-well plates at a density of 5 x 10^5 cells/well. Rickettsiae were enumerated using the BacLight viability stain kit [64] to determine a multiplicity of infection (MOI) of 10 rickettsiae/cell (a worked example of this calculation is sketched after the staining methods below). Host cell contact was induced by centrifugation at 300 x g for 5 minutes, and unbound bacteria were removed following the specific incubation times described below. For the cell attachment assay, unbound rickettsiae were removed and infected cells were washed with PBS and fixed for immunofluorescence staining at 10-minute intervals for the first 30 minutes of infection. Microscopy images from 2 experiments with 10 fields of view were quantified. For growth curve analysis, rickettsiae were added to ISE6 cells in tissue culture plates, centrifuged, and incubated at 32°C for 1 hour to allow rickettsiae to attach to host cells. At 1 hour post-infection (hpi), unbound rickettsiae were removed and samples were considered time point 0, from which rickettsial growth was calculated as a change over time. Media in the remaining wells was replaced with maintenance media for the duration of the experiment (7 days), and entire well volumes (both intracellular and extracellular rickettsiae) were collected every other day for enumeration of rickettsial genomic equivalents by qPCR (S4 Table). A total of 3 independent experiments were performed with 3 technical replicates per experiment for both R. felis WT and R. felis sca1::tn. In vitro immunofluorescence staining Cells were fixed using 4% paraformaldehyde (PFA) for 20 minutes and washed thoroughly with PBS. To distinguish between cell attachment and invasion, extracellular bacteria were stained with rabbit anti-Rickettsia I1789 antibody (provided by Ted Hackstadt) followed by the secondary antibody, Alexa 594 goat anti-rabbit (Invitrogen, A11005; 1:1000 dilution). To subsequently stain all bacteria (intracellular and extracellular), cells were permeabilized and again stained with rabbit anti-Rickettsia I1789 antibody followed by the secondary antibody, Alexa 488 goat anti-rabbit (Invitrogen, A11008; 1:1000 dilution). Coverslips were mounted with VectaShield HardSet antifade mounting medium with DAPI (Vector Laboratories Inc.) for nuclear staining. Samples were visualized and quantified using a Nikon A1 microscope (S10RR027535). For growth curves, cells were permeabilized using 0.5% Triton X-100 for 15 minutes and blocked with 3% BSA for 1 hour. Coverslips were then probed for rickettsiae using rabbit anti-Rickettsia I1789 antibody diluted 1:1000, followed by the secondary antibody, Alexa 488 goat anti-rabbit (Invitrogen, A11008; 1:1000 dilution). For all immunofluorescence assays, samples with secondary antibody only served as a control for non-specific binding of the Alexa Fluor antibodies. Rickettsial isolation from blood To re-isolate rickettsiae from prepared bloodmeals, 1 mL of R. felis WT- and R. felis sca1::tn-infected ISE6 cells was prepared as previously described [10]. Cell pellets were incubated in microcentrifuge tubes in an artificial dog unit for 48 hours to mimic the flea infection temperature range. 
Host cells were then lysed, and cell debris was pelleted at 300 x g for 5 minutes. The supernatant was collected and filtered through a 2 μm filter. Rickettsiae were enumerated using the BacLight viability stain kit [64] to determine a multiplicity of infection (MOI) of 50 bacteria/cell. Cells were infected in the same manner as for the growth curve analyses, in which whole well contents were collected every other day for 1 week. Similarly, rickettsial genome equivalents were calculated as a change over time by qPCR (S4 Table). A total of 2 experiments with 3 technical replicates/experiment were completed for both R. felis WT and R. felis sca1::tn. Flea infection Cat fleas were purchased from Elward II Laboratory (Soquel, CA) and maintained using an artificial dog system [65]. Prior to use in each bioassay, a subset of fleas was confirmed to be Rickettsia-free using qPCR protocols amplifying the R. felis ompB gene [13]. For flea infections, cages were prepared with approximately 200 mixed-sex cat fleas and prefed heat-inactivated bovine blood (HemoStat Laboratories) for 24 hours. Following prefeeding, fleas were starved for 6 hours and exposed to a R. felis WT- or R. felis sca1::tn-infected bloodmeal at a low (1.5 x 10^10 rickettsiae) or high dose (5 x 10^10 rickettsiae) prepared as previously described [10,12]. Fleas were allowed continuous access to the infectious bloodmeal for 48 hours, after which it was replaced with uninfected, defibrinated blood for the remainder of the study. A total of 20 fleas (10 male and 10 female) were collected weekly, surface sterilized [13], and homogenized using stainless steel beads in a TissueLyser II (Qiagen) prior to gDNA extraction and subsequent qPCR analysis with primers listed in S4 Table to determine rickettsial loads within individual fleas. To assess rickettsial loads per 1 mg of flea feces, feces were collected weekly [12] and subjected to gDNA extraction following the manufacturer's instructions for blood isolation. A total of 3 independent replicates (60 fleas per time point) over a 28-day period were analyzed for both the R. felis WT and R. felis sca1::tn mutant experimental groups. Flea microdissections, sections, and immunofluorescence staining For salivary gland dissections, fleas were collected 48 hours post-exposure and 7 days post-exposure (dpe). Female fleas were surface sterilized and microdissected in sterile phosphate-buffered saline (PBS) using a stereo microscope. Rinsed salivary glands were placed onto slides, fixed with 4% PFA, and stored at 4°C until IFA staining was performed. For flea sections, whole fleas were submerged in 10% neutral buffered formalin for a minimum of 24 hours. Slides containing formalin-fixed paraffin-embedded flea sections (5 μm) were heated at 65°C for 15 minutes and deparaffinized by repeated immersions in Hemo-De (Electron Microscopy Sciences). Slides were rinsed with PBS, and antigen retrieval and immunofluorescence staining were performed as previously described [30,31,52]. Briefly, slides were incubated with mouse polyclonal antisera to R. felis [52,64] at a dilution of 1:100 for 1 hour and subsequently incubated with Alexa 488 goat anti-mouse (Invitrogen, A11001; 1:1000 dilution). Fleas were counterstained with 0.1% Evans blue in PBS at 37°C for 30 minutes and mounted with VectaShield HardSet antifade mounting medium containing DAPI (Vector Laboratories Inc.) for nuclear staining. Samples were visualized using a Nikon A1 microscope (S10RR027535). 
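As the worked MOI example referred to in the in vitro infection assays above, the following minimal sketch shows how an inoculum volume for a target MOI could be computed from a live-bacteria count (e.g., a BacLight-style enumeration) and the number of seeded host cells. The suspension concentration and function name are illustrative assumptions, not values reported in the study.

```python
# Illustrative sketch: inoculum volume for a target multiplicity of infection (MOI).

def inoculum_volume_ul(target_moi, cells_per_well, live_bacteria_per_ml):
    """Volume (µl) of bacterial suspension needed per well to reach the requested MOI."""
    bacteria_needed = target_moi * cells_per_well
    return bacteria_needed / live_bacteria_per_ml * 1_000   # convert ml to µl

# e.g. MOI 10 on 5 x 10^5 ISE6 cells/well with an assumed 2 x 10^8 live bacteria/ml
print(f"{inoculum_volume_ul(10, 5e5, 2e8):.1f} µl per well")   # -> 25.0 µl per well
```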
Flea cofeeding Five-week-old male C3H/HeJ mice (Jackson Laboratory) were used as a murine model for transmission studies. For cofeeding bioassays, donor fleas were exposed to either R. felis WT- or R. felis sca1::tn-infected bloodmeals, independently, for 48 hours at an infectious dose of 5 x 10^10 rickettsiae. In parallel, recipient (naïve) fleas were exposed to a bloodmeal supplemented with the fluorescent biomarker rhodamine B (RhoB), at a working concentration of 0.025%, for 48 hours [10,31] (Fig 8). Five days post-exposure to the infectious bloodmeal, 10 donor and 10 recipient mixed-sex fleas were combined into feeding capsules made from modified 1.7 ml microcentrifuge tubes [10]. Capsules were attached to the shaved flank of mice, and fleas were allowed to feed for three 12-hour increments. Fleas and skin biopsies at the site of flea feeding were collected after the final feeding time point (7 dpe). Fleas were prepared for qPCR analyses as described for the infection bioassay. A total of 3 mice were used per experimental group, along with a control mouse exposed to uninfected fleas only. Rickettsiae were not detected in control samples by qPCR. DNA extraction and qPCR To determine genome equivalents of rickettsiae, gDNA was extracted using the DNeasy Blood and Tissue Kit (Qiagen) following the manufacturer's instructions. Rickettsial and host gene copies were quantified by qPCR with the appropriate primers and probes (S4 Table) using iTaq Universal Probes Supermix (Bio-Rad) on a LightCycler 480 II (Roche Life Sciences). Standard curves were generated by creating 10-fold serial dilutions of pCR4-TOPO plasmids containing the R. felis ompB, C. felis 18S rRNA, or ISE6 calreticulin genes to quantify each target sequence (a worked example of this conversion is sketched below). Amplification conditions were as follows: an initial denaturation step at 95°C for 3 minutes, followed by 45 cycles of denaturation at 95°C for 15 seconds and annealing and elongation at 60°C for 60 seconds, with fluorescence acquisition in single mode. Statistical analyses To compare growth kinetics and cell association of R. felis sca1::tn to R. felis WT in ISE6 cells, a two-tailed t-test was performed to determine differences between the means at a given time point. A two-way analysis of variance (ANOVA) was performed for flea infections to compare differences in variance between R. felis sca1::tn and R. felis WT over time. For the cofeeding bioassay, a Fisher's exact test was used to compare the proportions of R. felis-infected recipient fleas between the two rickettsial strains. To determine significant differences between flea donor loads, a Mann-Whitney test was performed. A p value ≤ 0.05 was considered statistically significant. All statistical analyses were performed using Prism 8 software (GraphPad Software). Data used for figures are displayed in S1 Data. Supporting information S1 Fig. (lanes 5, 9). Rickettsial DNA samples were used as controls for gene amplification (lanes 13-18). cDNA samples lacking reverse transcriptase were used as a negative control (lanes 3, 4, 7, 8, 11, 12). (TIF) S2 Fig. Re-isolation of rickettsiae from blood. Rickettsiae were lysed from host cells, and ISE6 cells were infected with semi-purified R. felis WT or R. felis sca1::tn after a 48-hour incubation period in bovine blood. The growth curve measures rickettsial genome equivalents by qPCR. 
Data represent mean ± SEM from two experiments with 3 technical replicates each, normalized to input bacteria at 1 hpi. Significance was assessed at a 95% confidence interval by unpaired t-test comparing the means to wild-type at each time point.
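The standard-curve quantification and unpaired t-test described in the methods above can be sketched as follows; the Ct values, curve parameters, and the use of scipy are illustrative assumptions rather than the study's actual pipeline (which used Prism 8).

```python
# Illustrative sketch of qPCR absolute quantification against a plasmid
# standard curve, plus an unpaired t-test comparing strains at one time
# point. All numbers and variable names are hypothetical.
import numpy as np
from scipy import stats

# Standard curve: Ct values measured for 10-fold serial dilutions of a
# plasmid carrying the target gene (e.g., R. felis ompB).
log10_copies = np.array([7, 6, 5, 4, 3, 2])          # known copies/reaction
ct = np.array([14.1, 17.5, 20.9, 24.3, 27.8, 31.2])  # measured Ct values
slope, intercept, r, _, _ = stats.linregress(log10_copies, ct)
print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r**2:.3f}")

def copies_from_ct(ct_value: float) -> float:
    """Invert the standard curve: Ct -> genome equivalents per reaction."""
    return 10 ** ((ct_value - intercept) / slope)

# Unpaired two-tailed t-test comparing WT and sca1::tn loads at a time point
wt_loads = [copies_from_ct(c) for c in (22.1, 22.4, 21.9)]
tn_loads = [copies_from_ct(c) for c in (24.0, 23.6, 24.3)]
t, p = stats.ttest_ind(wt_loads, tn_loads)  # Welch's: add equal_var=False
print(f"t={t:.2f}, p={p:.4f}")
```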
8,717.8
2022-12-01T00:00:00.000
[ "Biology", "Medicine", "Environmental Science" ]
Development and Evaluation of Collaborative Embedded Systems using Simulation Embedded systems are increasingly equipped with open interfaces that enable communication and collaboration with other embedded systems, thus forming collaborative embedded systems (CESs). This new class of embedded systems, capable of collaborating with each other, is planned at design time and forms collaborative system groups (CSGs) at runtime. When they are part of a collaboration, systems can negotiate tactical goals, with the aim of achieving higher-level strategic goals that cannot be achieved otherwise. The design and operation of CESs face specific challenges, such as operation in an open context that dynamically changes in ways that cannot be predicted at design time, and collaborations with systems that dynamically change their behavior during runtime. Introduction Modeling and simulation are established scientific and industrial methods to support system designers, system architects, engineers, and operators of several disciplines in their work during the system life cycle. Simulation methods can be used to address the specific challenges that arise with the development and operation of collaborative embedded systems (CESs). In particular, the evaluation of collaborative system behavior in multiple, complex contexts, most of them unknown at design time, can benefit from simulation. In this chapter, after a short motivation, we exemplify scenarios where simulation methods can support the design and the operation of CESs and we summarize specific simulation challenges. We then describe some core simulation techniques that form the basis for further enhancements addressed in the individual chapters of this book. Motivation Simulation is a technique that supports the overall design, evaluation, and trustworthy operation of systems in general. CESs are a special class of embedded systems that, although individually designed and developed, can form collaborations to achieve collaborative goals during runtime. This new class of systems faces specific design and development challenges (cf. Chapter 3) that can be addressed with the use of simulation methods. At design time, a suitable simulation allows verification and exploration of the system behavior and the required architecture based on a virtual integration. At runtime, when systems operate in open contexts, interact with unknown systems, or activate new system functions, the aspect of trust becomes of crucial importance. Building on further research and technology advancements, we foresee the possibility of computing trust scores of CESs directly at runtime based on the evaluation results of system behavior in multiple simulated scenarios. The core simulation techniques presented in this chapter form the basis for enhanced testing and evaluation techniques. Benefits of Using Simulation Regardless of the domain, the use of simulation methods for behavioral evaluation of systems and system components has multiple benefits. For a concrete scenario of complex interactions, simulation methods are more exploratory than analytical methods. The effectiveness of the exploration is achieved through the coupling of detailed simulation models, while the efficiency of the exploration is achieved by exercising a system or system group behavior in a multitude of scenarios, including scenarios that contain failures. Through the collaboration of CESs, collaborative system groups (CSGs) that did not exist before are formed dynamically at runtime.
Moreover, the exact configuration of those CSGs is not known at design time. In such situations, when systems operate in groups that never existed before, there is insufficient knowledge about the collaborative behavior and its effects. In this case, simulation can help to discover the effects of different function interactions. As a third benefit, the use of closed-loop simulation (X-in-the-loop simulation) is a suitable approach for testing embedded systems (e.g., control units of collaborative assistant systems). The independence of the simulated test environment from the implementation and realization of the embedded system (system under test) generates advantages, such as reusability of the simulations and cost savings in system testing. One example is the testing of different control units, for which the simulation environment can be reused without major adaptations, independently of the implementation and realization concept of the control unit. Only the interfaces of the realized functionality of the system under test have to be the same to enable coupling of the simulation and testing environment; a minimal sketch of such a closed loop is given at the end of this overview of benefits. A fourth major benefit is that the risk for the system user (e.g., car passenger) can be reduced by using simulations during the system testing process for virtual evaluation. The test execution in virtual environments enables discovery of harmful behavior in a virtual world, where only virtual and not real entities are harmed. Real hazards can thus be avoided. In addition, the risk during the operation of collaborative systems can be reduced by using predictive risk assessment by means of simulation. Additionally, the use of simulations for testing at system design time can be used to make tests virtual, with an associated reduction in hardware and prototypes. In particular, the costs for the production of these real components can be reduced. In addition, making tests virtual leads to early error detection and correction and thus to a further reduction in development costs. This is especially useful as the exact configuration of CSGs is not known at design time. Here, simulation gives the opportunity to simulate sets of possible (most likely) scenarios. Furthermore, the independence of simulation models that reflect the behavior of real components results in efficient development, because in some use cases, simulations are not bound to real-time conditions. Therefore, they can be executed much faster than in real time and thus be used to reduce development time. It is also easier to explore many more scenarios and variations of scenarios to gain a better overview and trust in the systems. As a seventh benefit, the use of simulation environments for testing embedded systems is especially independent of external influences of the environment and ensures that tests can be reproduced. This allows efficient tracking and resolution of problems exposed by the simulation and reproduction of the absence of the problems in the updated system configuration. The last benefit is that the internal behavior of the simulated systems can be exposed and visualized in a broad way. The traceability of the execution of a real system is limited due to hardware and time restrictions. In the simulation, it is easier to log relevant internal system execution and therefore to identify the causes of problems and unexpected behavior.
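A minimal sketch of the closed-loop (X-in-the-loop) coupling described above, assuming a trivial longitudinal vehicle model and a placeholder proportional controller; the interface, gain, and limit values are hypothetical, but they illustrate how the simulated environment stays independent of the system under test.

```python
# Minimal closed-loop (X-in-the-loop) sketch: a simulated plant is coupled
# to a system under test (SUT) only through a narrow interface, so different
# controller implementations can be tested against the same environment.
from typing import Protocol

class Controller(Protocol):
    def act(self, measured_speed: float, target_speed: float) -> float:
        """Return an acceleration command."""

class PController:
    """Hypothetical SUT: a simple proportional speed controller."""
    def __init__(self, gain: float) -> None:
        self.gain = gain
    def act(self, measured_speed: float, target_speed: float) -> float:
        return self.gain * (target_speed - measured_speed)

def run_closed_loop(sut: Controller, target: float = 25.0,
                    dt: float = 0.1, steps: int = 600) -> float:
    """Step a trivial longitudinal vehicle model against the SUT."""
    speed = 0.0
    for _ in range(steps):
        accel = sut.act(speed, target)       # SUT sees only the interface
        accel = max(-3.0, min(accel, 2.0))   # simulated actuator limits
        speed += accel * dt                  # plant update
    return speed

print(run_closed_loop(PController(gain=0.5)))  # reusable for other SUTs
```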
In the context of developing and evaluating CESs, the use and benefit of simulation, as described above, lie mainly in the first phases of the entire life cycle. In addition, simulation is also used during operation and service, that is, during the runtime of the system. Thus, simulation represents a methodology that can be used seamlessly across all life cycle phases. Accordingly, there are different challenges for simulation as a development methodology and as a validation technique. Challenges in Simulating Collaborative Embedded Systems Even though there are multiple benefits from using simulation, the aspect of simulation for CESs and CSGs poses particular challenges. In this section, we describe the design time and runtime challenges. Design Time Challenges To support the use of simulation during the design of collaborative systems, as presented in Chapter 3, multiple challenges must be addressed, as detailed in the following. One challenge is the evaluation of function interaction at design time, because in a simulation of CESs, functions of multiple embedded systems, developed independently, must be integrated to allow evaluation of the resulting system. This is necessary to discover and fix unwanted side effects before the systems are deployed in the real world. Also, the other relevant aspects for the simulation scenario, such as the context or the dynamic behavior of the systems, must be covered. To support this activity, the integration of different models and tools is also important. Development of collaborative system behavior relies on simulating models of different embedded systems that are often developed with different tools. Furthermore, the integration of different simulation models, sometimes at different levels of detail, represents an important design engineering challenge. This is because the design of CESs relies on the evaluation of collaborative system behavior that can be expressed at different levels of abstraction. Another challenge is the integration of different aspects of the simulation scenario. The comprehensive simulation of collaboration scenarios must cover several aspects to achieve a broad coverage of scenarios. Examples are the context of the CSG and the execution platform of the systems and the system group, including the functional behavior, the timing behavior, and the physical behavior of the systems and the system group. The different aspects can require dedicated models and must therefore be covered by specialized simulation tools. For a comprehensive simulation of the whole scenario, these models and tools must interact with each other and must be integrated via a co-simulation platform. The use of simulation methods pursues specific strategic goals as well. One of these methods is the virtual functional test, which uses simulation to test a certain collaboration functionality or a certain functionality of one system in the collaborating context. The models of the other parts (systems, context, etc.) must include only those details relevant for the functionality being tested. Another purpose of the simulation is the virtual integration test. Here, simulation tests the correct collaboration of the different systems or parts of the systems in a virtual environment. The exact structure of the CSG may not be available at design time and can be subject to dynamic changes. Simulation can test multiple scenarios for this structure for a multitude of situations.
An early application of such tests in the design process, before the different systems are fully designed and implemented, will allow early detection of potential problems and hazards for the collaboration behavior. One strategic goal for the application of simulation, especially in early design phases, is to support a design-space exploration. The possibility to support the evaluation of many design alternatives and to identify hazards and failures in the different simulation models allows a strategic evolutionary search for a system variant that fulfills the desired goals and requirements. The determination of fulfilled requirements allows the simulations to serve as automation tools for test cases. The results must then be linked to the requirements to determine the coverage. Besides the degree of coverage, additional system behavior can be investigated in relation to the requirements. Due to the great complexity of collaborative systems, automated algorithms must be increasingly used. In Section 12.3, we present a possible approach to help developers and testers meet this challenge. Runtime Challenges Even though properly tested during design time, CESs face multiple challenges at runtime, and the simulation techniques deployed at runtime face particular challenges as well. In this subsection, we list the challenges of CESs and CSGs as introduced in Chapter 2. We then detail the challenges of using simulation to solve these runtime challenges. One particular challenge CESs face at runtime is operation in open contexts. The external context may change in unpredictable ways during the runtime operation of CESs. In particular, the environment changes and the context of collaboration may change as well. For example, in the automotive domain, a vehicle that is part of a platoon may need to adapt its behavior when the platoon has to reduce the speed due to high traffic. If the vehicle has a strong goal of reaching the target destination at a specific time, it may decide to leave the platoon that is driving at a lower speed and select another route to its destination. For the remaining vehicles within the platoon, the operational context has changed because the vehicle is now no longer part of the platoon and instead becomes part of the operational context. The operational context of a CSG may change dynamically as well, either because a CES joins the group or because the CSG has to operate in an environment that was not foreseen at design time. The CSG has to adapt its behavior in order to cope with the new environmental conditions. For example, a vehicle under the control of a system function in charge of maintaining a certain speed limit within a platoon has difficulty maintaining the speed after it starts raining. When CESs form at runtime, the runtime activation of system functions poses additional challenges. When the behavior of CESs is coordinated by the collaboration functions that negotiate the goals of the systems and activate system functions, multiple challenges arise when these system functions are activated for the first time. One example is scheduling: the timing behavior of system functions activated for the first time can influence the scheduling behavior of (a) the interacting system functions, (b) the collaboration functions, and (c) the whole system.
In this case, the functional interaction must be evaluated because when system functions are activated for the first time, the way in which they interact with other system functions in specific situations can be faulty. Moreover, changing goals at runtime can also have consequences for the CSG or the CESs. In order to form a valid system group, CESs and/or the CSG may need to change their goals at runtime dynamically, which may have a significant impact on the system behavior. The overall dynamic change of internal structures within a CSG is impossible to foresee at design time. When a CES leaves a CSG, the roles of the remaining participants and their operational context may change as well. The same happens when a new vehicle joins the platoon as a platoon participant that later on may take the role of platoon leader. In turn, this leads to a dynamic change of system borders of a CSG, which may change the overall functionality of the CSG. For example, a vehicle ahead of the platoon is considered a context object that influences the speed adjustments of the approaching platoon. If the vehicle in front of the platoon decided to join the platoon, then the borders of the initial platoon would be extended. Addressing the challenges mentioned above by using simulation may even require using simulation at runtime, which, in turn, puts further requirements on the simulation method. Firstly, when simulation is used to control the behavior of safety-critical systems, the real-time deadlines must be achieved. When system behavior is evaluated at runtime, in a simulated environment, then the simulation must deliver the results on time. This is necessary in order to give the system the chance of executing a safe failover. Secondly, predictive evaluation of system behavior is possible only by achieving efficient simulation models. When system behavior is evaluated at runtime, in a simulated environment, it must execute faster than the wall clock. This imposes a high degree of efficiency on the simulation models that are executed. For example, it may not be feasible to execute detailed simulation models as parts of the interacting platform because this may take too much time. Instead of executing the detailed models, abstractions of the system behavior can be executed. These abstractions must be directed towards the scope of the evaluation. If scheduling behavior needs runtime evaluation in a simulated environment, then the parts of the platform that influence or are influenced by the scheduling will be executed. However, in order to have accurate evaluation, the efficiency of simulation must balance with the effectiveness of simulation models. In order to perform a trustworthy system evaluation in a simulation environment during runtime, the models must accurately reflect the parts of the system under evaluation. However, because simulation also needs to be efficient, effective simulation can be achieved by using the abstraction models (for efficiency reasons) directed towards the scope of the evaluation. This in turn requires extensive effort during the design time of the system to create accurate models that reflect selected parts (abstraction) of the internal system architecture. For example, to enable evaluation of scheduling at runtime, systems engineers must design meaningful simulation models of the platform that will be executed during scheduling analysis.
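A minimal sketch of the "faster than the wall clock" requirement discussed above, assuming a generic model step function and hypothetical horizon and deadline values; it only illustrates the idea of measuring the real-time factor and falling back to a safe failover when the deadline is missed.

```python
# Illustrative sketch: a runtime prediction must simulate its horizon and
# deliver a result before a real-time deadline, otherwise the caller must
# choose a safe failover. Model, horizon, and deadline are hypothetical.
import time

def simulate_horizon(step_fn, horizon_s: float, dt: float) -> float:
    """Run an abstract model over a prediction horizon; return final state."""
    state, t = 0.0, 0.0
    while t < horizon_s:
        state = step_fn(state, dt)
        t += dt
    return state

def evaluate_with_deadline(step_fn, horizon_s: float, dt: float,
                           deadline_s: float):
    start = time.perf_counter()
    result = simulate_horizon(step_fn, horizon_s, dt)
    elapsed = time.perf_counter() - start
    rtf = horizon_s / elapsed  # real-time factor; must be >> 1 at runtime
    if elapsed > deadline_s:
        return None, rtf       # too slow: caller falls back to safe failover
    return result, rtf

result, rtf = evaluate_with_deadline(lambda s, dt: s + 0.1 * dt,
                                     horizon_s=5.0, dt=0.01, deadline_s=0.05)
print(result, f"real-time factor ~{rtf:.0f}x")
```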
Simulation Methods Simulation is a universal solution approach and is based on the application and use of a few basic concepts from numerical mathematics. In our case, simulation models are implemented in software and use numerical algorithms for calculation. We speak of time-discrete, discrete-event, or continuous simulation (continuous time) depending on the mathematical concepts used, which characterize the different handling of time behavior. Simulation tools usually realize a combined strategy. The fact that simulation covers several disciplines, combines different elements of a system, or addresses the system and its context leads to approaches for a cooperation of different simulations, also called co-simulation. From a practical point of view, data and result management are important for supporting the simulation activities. In the area of testing software functions, the three approaches Model-in-the-Loop (MIL), Software-in-the-Loop (SIL), and Hardware-in-the-Loop (HIL) are relevant [VDI 3693 2016]. MIL simulation describes the testing of software algorithms implemented prototypically during the engineering phase. These algorithms are implemented in a simulation modeling language, mostly in the same simulation tool that is also used to simulate the physical system (understood here as the dynamic behavior with its multidisciplinary functions) itself. The SIL simulation describes a subsequent step. The software is realized in the original programming or automation language and is executed on emulated hardware and coupled with a simulation model of the physical system. The third step is a HIL simulation. Here, the program (or automation) code compiled or interpreted and executed on the target hardware is tested against the simulation of the physical system. Simulation of technical systems usually consists of three steps: model generation (including data collection), the execution of simulation models, and the use of the results for a specific purpose. In the following, we describe the methodology of simulation for these three process steps. In general, the data collection and generation of the models take a lot of effort and time. For virtual commissioning, it has been reported that up to two-thirds of the total time is spent on these activities [Meyer et al. 2018]. As a consequence, especially for CESs and CSGs in partially unknown contexts, efficient methods for setting up the model must be provided. Integrating the model generation directly into the development process in order to generate up-to-date models at any time is a good approach, as shown in Chapter 6. The most common concept for seamless integration of all information relevant in the entire life cycle of a product is product lifecycle management (PLM). It integrates all data, models, processes, and further business information and forms a backbone for companies and their value chains. PLM systems are, therefore, an important source for the creation of simulation models. With the technical vision of a digital twins approach, the importance of different kinds of models is increased. Digital twins are abstract simulation models of processes within a system fed with real-time data. For more information on supporting the creation of digital twins for CESs, see Chapter 14. Semantic technologies are used to realize the interconnectedness of all information and to guarantee the openness of the approach to add further artifacts at any time [Rosen et al.
2019]. These semantic connections, frequently realized by knowledge graphs, can be used in the future to generate executable simulation models that are up to date with all available information more efficiently. Furthermore, existing models must be combined to form an overall model of different aspects of the system and context. This requires an exchange of models between different tools, which can be solved via co-simulation [Gomes et al. 2017]. The FMI standard [FMI 2019] describes two approaches towards co-simulation. With model exchange, only those models that can be solved with one single solver are combined to form an overall mathematical simulation model, whereas FMI for co-simulation uses units (consisting of models, solvers, etc.) that are orchestrated by a master. On the one hand, this master must match the exchange variables described in the interfaces. On the other hand, it must orchestrate the different time schemes of the different simulators, from discrete-event through time-discrete up to continuous simulation [Smirnov et al. 2018]. For efficient simulation of CSGs, the simulation chains must therefore be set up and modified quickly and efficiently, as they can change quite often depending on the situation. In order to set up an integrated development and modeling approach, two aspects must be covered: firstly, different methods must be assembled into an integrated methodology; and secondly, interoperability and integration between different tools must be established in order to set up an integrated tool chain (see Chapter 17). A special focus of co-simulation lies in HIL simulation, which uses real control hardware. The remaining simulation models, with their inherent simulation time, must be executed faster than real-world time to ensure that the results are always available at the synchronization time points with the physical HIL system. Thus, both the slowest model and the orchestration process must be executed faster than real-world time. One key goal of simulation is validation and testing of the system behavior. This requires the definition of test cases, the setup of the simulation model, execution of the test cases, and finally, the evaluation of the test. For context-aware CESs and CSGs in particular, this may be a highly complex task with exponentially increasing combinations. Finally, the test results must be compared with the requirements. In Chapter 15, we therefore develop exhaustive testing methods to cope with these challenges. One way to support the tester is to mark system-relevant information in the requirements and link it to simulation events. A markup language can be used to mark software functions and context conditions within a document. After important text passages in the requirements have been marked, they can be extracted automatically. When the extraction process is completed, the information is linked to the specific signals of the system. This results in a mapping table. Since many simulators, models, and interfaces are used in the simulation of CESs, a central point is created to combine them. In the simulation phase, all signals of the function under test are recorded and stored in log data. These log data contain all signal names and their values for each simulation step. Once the simulation run is complete, the log data can be processed further and linked to the original requirements using the mapping table from the previous phase, as sketched below.
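A minimal sketch of such a mapping table and its evaluation against log data, assuming hypothetical requirement phrases, signal names, and condition formats; it is meant only to illustrate the linking step, not a concrete tool.

```python
# Illustrative sketch of linking marked requirement phrases to simulation
# signals via a mapping table and checking them against logged values.
# Requirement IDs, signal names, and conditions are hypothetical.

# Mapping table produced after extracting marked phrases from requirements
mapping = {
    "REQ-12: ego speed stays below limit": ("ego.speed", lambda v: v <= 30.0),
    "REQ-27: minimum gap to leader":       ("ego.gap",   lambda v: v >= 5.0),
}

# Log data: one dict of signal values per simulation step
log = [
    {"ego.speed": 28.4, "ego.gap": 7.1},
    {"ego.speed": 29.9, "ego.gap": 6.0},
    {"ego.speed": 30.2, "ego.gap": 5.5},
]

def evaluate(mapping, log):
    """Return, per requirement, whether its condition held at every step."""
    results = {}
    for req, (signal, condition) in mapping.items():
        results[req] = all(condition(step[signal]) for step in log)
    return results

for req, passed in evaluate(mapping, log).items():
    print(f"{'PASS' if passed else 'FAIL'}  {req}")
```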
The marked text phrases in the requirements can then be evaluated and displayed to the user. Simulation methods are increasingly integrated into the design and development process and used in all phases of the system life cycle [GMA FA 6.11 2020]. Beyond development, validation, and testing, simulation is used during operation with an increasing benefit [Schlegner et al. 2017]. Specific applications include simulations in parallel to operation in order to monitor, predict, and forecast the behavior of the CESs. This means that simulation models must be updated regarding the current state of the systems collaborating in a CSG [Rosen et al. 2019]. Chapter 3 introduces a flexible architecture for the integration of simulation into the systems architecture to support the decisions of the system or the operator. For complex scenarios, the simulation has to cover not only the functional behavior of a single system, but also the combined behavior of the CSG and all relevant aspects, including, for example, the resulting collaboration behavior, the context of the collaborative system, the timing of the systems, and the communication between the systems and with the context. The collaboration functions result from the interaction between the functions of the different systems. All these aspects must be addressed by simulation as early as possible in the design process. It may not be sufficient to test them in a HIL simulation when the implementation of the system has already widely progressed. The MIL and SIL simulations must also address those aspects. Application The methods described above have several applications. First of all, they support development, testing, and virtual integration, especially in early phases of the system design. They also support the development of extended simulation methods such as the ones used for runtime evaluation of system trustworthiness, as presented in Chapter 10; they support the generation of simulation models based on a step-by-step approach, as presented in Chapter 6; and they support the operator during system operation, as presented in Chapter 3. Furthermore, they support system evaluation in real-world scenarios. During the design of CESs in particular, simulation methods can help to check the current state of development, verify the correctness and completeness of the current design, and explore the applicability of the next steps and extensions. For collaborative systems, virtual integration of different systems is a special challenge, especially in early and incomplete stages of development. The purpose is to explore the collaborative behavior as early as possible, detect possible hazards and failures when they are much easier to change, and adapt the design of the systems for the solution to these hazards and failures. Simulating the collaborative behavior in the early stages of development, especially for applications like autonomous driving, should include all relevant aspects of the underlying scenarios, especially context and physical system behavior. Co-simulation approaches can address the challenges involved in such a comprehensive simulation. Chapter 13 provides more details on the possibilities and tools for realizing such simulation approaches. Building trust into collaborative embedded systems requires a sustained evaluation and testing effort that spans from design time to runtime.
As detailed in the sections above, simulation is an important technique that enables system and software testing at design time and behavior evaluation during runtime. Within CrESt, as presented in Chapter 10, an extension of existing simulation methods has been realized. These methods either address runtime challenges at design time or enable runtime evaluation of system behavior. Addressing runtime challenges at design time is enabled by extending the co-simulation method described in this chapter towards integrating the real world (in which collaboration functions and system functions execute on real hardware) with the virtual world (formed by purely virtual entities). This allows the runtime activation of system functions, for example, to be validated in an extended set of scenarios that are easier and cheaper to explore within a virtual environment. Building on the challenges and methods described in this chapter, simulation techniques deployable at runtime have been developed. Coupled with monitoring components, simulation can be used for runtime prediction of system behavior emerging from the runtime activation of system functions. When simulation platforms are deployed on CESs, the functional and timing interaction of a collaboration function with system functions and the functional and timing interactions between system functions can be predicted at runtime. For details on how the simulated prediction is performed, see Chapter 10 of this book. Conclusion Simulation methods support the development of CESs and the verification and validation of their continuous development: from the conceptual phase, when abstract behavioral models can be coupled through co-simulation, through verification of system behavior after detailed models are integrated, up to the final testing of systems before deployment. We have analyzed the benefits and challenges of CESs and of simulation methods that support their development and testing. We have set the basis for future extensions beyond the current state of the art and practice. In order to realize these technological visions, it is important to consider the economic benefits. This means that the effort and ultimately the cost of deployment must not exceed the benefits. One approach will be a step-by-step realization. This will ensure that advanced simulation methods will be a success factor for validation and testing of CESs.
6,451.2
2020-12-15T00:00:00.000
[ "Computer Science" ]
Issues Surrounding Behavior towards Discarded Textiles and Garments in Ljubljana: In recent years, post-consumer textile waste has become an important issue that attracts attention from activists, scientists and the media. The production and use of clothing have more than doubled in the last fifteen years due to declining costs, streamlined operations and rising consumption under the influence of fast fashion. According to research, the average European buys as much as 26 kg of textiles each year and discards 11 kg, while a very small share of post-consumer textile waste is recycled. This article presents the findings of a study on household textile waste in the capital of Ljubljana. The research showed that despite the significant declarative environmental awareness of people for sustainable behavior in the field of textile waste, this share decreases when respondents are asked about actual behavior. However, there are few people who are completely uninterested in reducing textile waste, as most people are aware of the problem and pay more and more attention to it. The authors study the management of textile waste and its creation by the inhabitants of Ljubljana in the broader context of the influences of fast fashion, as well as the cultural specifics of the Slovenian society. Introduction Globally, the amount of waste is increasing rapidly [1]. In this article, waste in general is understood as a substance or object that the holder discards, intends to discard, or must discard [2]. With increasing urbanization, economic growth and population growth in the world, the World Bank predicts that the amount of waste generated will increase from 2.01 billion tons in 2016 to 3.40 billion tons in 2050 [3]. An individual produces on average 600 times as much waste as they weigh during their lifetime. However, the amounts of waste produced around the world vary considerably. Thus, the 16% of the world's population living in the most developed countries produces as much as 34% of the world's waste [3]. In Slovenia, for example, in 2019, each inhabitant produced an average of 509 kg of municipal waste, which is 10% more than in 2018 [4]. The amount of waste produced is directly related to wealth, production, and consumption [1,5]. People have more and more choice in consumption, and products have a shorter lifespan [1]. With the development of advertising, rapidly changing trends, and ever lower product prices, providers are inviting consumers to more intensive consumption and shopping. The changes in shopping habits and lifestyle may have increased the quality of our lives at first glance, but they also mean that we generate more waste than ever before [1]. What about textile waste in households? Textile waste in this article is defined as a material that is deemed unusable for its original purpose by the owner. It can include fashion and textile industry waste, created during fiber, textile, and clothing production, and consumer waste, created during consumer use and disposal [6]. According to the European Environment Agency, the production of clothing, footwear, and home textiles for personal consumption by Europeans is the fifth largest source of CO2 emissions [7]. The current system of production, distribution, and use is almost entirely linear and is linked to negative environmental and social impacts [8]. The production and use of clothing have more than doubled in the last fifteen years, due to declining costs, streamlined operations, and rising consumption under the influence of fast fashion [8].
At the European Union level, 9.35 million tons of textile waste are disposed of or incinerated each year [9]. Thus, the average European buys 26 kg of textiles every year and discards 11 kg [10]. In Slovenia, according to the Statistical Office for 2015 [4], the annual amount of textile and clothing waste ending up in mixed municipal waste from households is 37,180 tons. Thus, each inhabitant discards as much as 18 kg of textile waste per year. In this paper, we focus on people's attitudes towards textile waste in Slovenia and, in more detail, in its capital Ljubljana, and on the analysis of habits related to textile waste in households. We are interested in the behavior of the inhabitants of Ljubljana in relation to their awareness and handling of textiles. The theoretical framework explains how and why people dispose of textiles and what the reasons for their awareness, habits, and behavior related to textile waste are. Theoretical Background The production of clothing and textiles and their high consumption have a strong negative impact on people and the environment [11]. In 2016, per capita emissions related to the estimated global textile consumption were 442 kg of CO2eq, or 6.7% of global climate impacts [12,13]. The textile industry is one of the more environmentally burdensome and labor-intensive industries [11,[14][15][16]. The production phase of textiles represents one of the largest environmental impacts from the life-cycle perspective, among other things in terms of climate change, toxic pollutants, and contribution to water scarcity [13]. In Sweden, for example, the production phase accounts for approximately 80% of the total climate impact of the full life cycle of a textile. Similarly, for the average European, production accounts for 70% and use-phase laundering for 10% of the total climate impact of the full life cycle of a textile [13,17]. However, the negative environmental impact is not only created by the industry itself, but also by consumer behavior, which generates large amounts of consumer textile waste. The growing quantities of textiles in the global market, which are usually represented by lower-quality and consequently cheaper fashion pieces, are associated with changes in consumer behavior. Although consumers spend a smaller share of their income on clothing, they buy fashion pieces more often and in greater quantities [18,19]. These changes began in the late 1980s, and even more intensely in the late 1990s and early 21st century, when some brands, such as Zara and H&M, established a shorter time chain from design, through production, to the customer. This market model is called fast fashion [20,21]. Fast fashion is a business model where it is important that clothes are made quickly and cheaply, most often in third world countries (in Asia and Africa), based on the latest trends, allowing people to expand and refresh their wardrobe cheaply. In addition, many fashion retailers encourage more frequent purchases through impulse buying strategies when offering new collections every few weeks [20,22,23]. However, the concept of fast fashion has also led to the fact that, as research shows, consumers keep clothes only half as long as they did 15 years ago. Consumers treat the cheapest clothes almost as disposable and discard them after only seven or eight uses [24]. During this time, the number of garments purchased by the average consumer per year increased by 60% [24].
Although as much as 70% of the negative environmental impact of clothing is generated in the course of production, the average European user creates another 10% of emissions in the use phase of the clothing life cycle. However, one of the biggest challenges in the clothing life cycle, next to searching for cleaner models of production, is building up the infrastructure around collecting and sorting textiles for reuse and materials recycling [17]. Although textiles have a high recycling potential (85% to 90%), only between 1% and 43% are expected to be recycled [7,25]. At the EU level, between 15% and 25% of textile waste, both industrial and consumer, is collected separately and can then be reused or recycled [25]. The textile industry has been very late with its transition to a circular economy, in comparison to sectors such as plastics, glass, and metals [13]. Globally, it only uses 3% of recycled materials. Although there are many initiatives under way, at the moment there is still no extended producer responsibility for the collection of textiles for reuse and recycling in most member states [17]. There is extremely little reuse of textiles in Slovenia, which is shown by the fact that there is practically no system for the separate collection of consumer textile waste. At the level of the city of Ljubljana, there are containers for separate collection of textile waste from the resale company Humana at 56 locations [26]. However, this is an entrepreneurial activity and not primarily a separate collection of textile waste or waste separation. The number of containers is decreasing from year to year, as the quality of the textiles is deteriorating, and their recycling technology is so demanding that collection centers across Europe limit their purchases due to the high costs of textile processing or disposal [27]. At the household level, the utility company Snaga d. o. o. does not yet offer the separate collection of textile waste, so residents often dispose of unusable textile waste as mixed waste [28]. Similar challenges are being encountered elsewhere in Europe. For example, in Sweden, according to 2016 data, only 1% of the total volume of textile consumed four years earlier was recycled. Also, most textiles ended up in unsorted household waste [17]. However, changes are expected soon, as the European Union has adopted the EU Directive 2018/851 as part of a package of measures on the circular economy, which requires EU member states to establish the separate collection of textiles in households by 1 January 2025 [29]. In addition to the issue of disposal, sales practices that promote consumerism and the rapid change of clothing are a major problem, as indicated by an increasing number of studies on the management of post-consumer textile waste [16,[30][31][32]. Consumer behavior in terms of clothing maintenance, that is, washing and drying, also has a significant environmental impact [33]. Until a few years ago, consumer behavior [34,35] was studied mainly in the context of marketing research, and less attention was paid to the processes and phases that lead to waste disposal, but this segment of research has also been on the rise recently [16,30,36,37].
It is important to study consumer behavior from an environmental point of view, as consumers decide when, where, and in what way they will discard used clothes, thus determining their lifespan, the amount of waste generated, and the possibility of reuse and recycling [16], but also how they will handle the clothes during their lifespan: how many times the clothes will be worn, washed, and dried, and thus how much electricity and water will be consumed for their maintenance. Only the sum of all this information about the production, transport, use, and disposal of clothes, that is, the life-cycle assessment (LCA), gives us information about their overall environmental impact [33]. Methods Decisions about the selection and consumption of raw materials and their disposal in modern households take place behind closed doors, as noted by Daniel Miller [38]. Since it makes sense to research what is happening inside homes when researching modern society, households are a key link in the waste production and consumption chain. Household members determine the choice of services and consumption of raw materials by their lifestyle, values, thinking, and behavior [39], while at the same time gross domestic product also plays an important role [40]. Compared to energy production and industry, an individual household has a relatively insignificant role in environmental impact, but if households in the whole country or municipality are considered, their combined impact is large [41]. Thus, the research was based on Miller's [38] recommendations and focused on observing and researching the behavior of individuals in their basic cells and households, namely, in the area of the City of Ljubljana. In the spring of 2020, we conducted an online survey as the primary form of insight into textile and clothing-related practices. We used an open source application, www.1ka.si (accessed on 1 February 2021) [42]. The online survey lasted for four months and was intended for the inhabitants of the whole of Slovenia, whereby we also separated the respondents according to the municipality they come from. In this study, we present an analysis of the habits of the inhabitants of the city of Ljubljana. Although we are aware of the limitations, as we did not include a representative sample in the survey, this is the first survey of this kind in the Ljubljana area. We chose non-probability sampling because in recent years we have found it increasingly difficult to obtain a representative sample of the population, as people are less and less inclined to complete surveys. In addition, the study was conducted at a time of restrictions due to the COVID-19 epidemic, which made it impossible to address people directly, and thus contributed to the limitations by limiting access to representative samples. So, we decided to do the research online, with the widest possible public involved. On the other hand, by obtaining the answers of those who are interested in the topic, a deeper insight into the studied topic is gained, while non-interested parties are more likely to give socially desirable answers. A very similar sample structure was later obtained in the all-Slovenia survey on discarded clothing, which was also conducted online in the autumn of 2020, with as many as 1371 answers [43]. Our survey was completed by 120 respondents. Among them, 81% were women and only 19% were men.
The reason for the significant discrepancy in women's responses may be that women are the ones who handle textiles and related waste more often in households than men. In terms of age structure, most respondents were between 35 and 44 years old, namely, 34%, followed by those aged between 25 and 34 at 24%, aged 55 to 64 at 20%, aged 45 to 54 at 18%, and 2% were aged 18 to 24 or older than 65 (see Figure 1). According to the educational structure, highly educated people predominate, namely, as many as 43% have a master's degree or a doctorate, 49% have a higher or university education, and only 8% have a secondary education. The survey was mainly responded to by an environmentally aware population (see Figure 2). The survey consisted of five systematic sets and 31 closed-ended and open-ended questions. The introductory part was intended to study the state of general environmental awareness of respondents and awareness of textile waste and its disposal. For these questions, we used the Likert scale from 1 to 5, with 1 meaning "I am not aware at all" and 5 "I am very aware". This was followed by a set of questions on handling clothing, shoes, and home textiles. This part was intended to study the actions related to the purchase, disposal, separate collection, or use of worn-out clothing. We were also interested in the feelings of discarding textiles, and at the end, a set of demographic questions followed. Results The results showed a high level of declarative general environmental awareness of the respondents, with the mean value of the answers being 4.2. However, when we concretized the topic and asked the respondents about their awareness of textile waste, they showed a lower level of awareness, with an average value of 3.6 (see Table 1). The reason for the gap between general environmental awareness and awareness of textile waste is that the issue of textile waste has come to the fore only in recent years and people are not yet sufficiently aware of it. In addition, neither at the state nor the municipality level is textile waste collected separately, and people usually do not even think about it. Most respondents (41%) buy 11 to 20 pieces of new clothing per year, followed by those who buy 5 to 10 pieces (23%), 21 to 50 pieces (19%), and fewer than 5 pieces (12%); only 5% buy more than 50 pieces (see Figure 3). They most often buy underwear and socks (81%) and cotton T-shirts (49%). As many as 45% of the respondents buy most of their clothes new, but they are also given some or buy used ones. It should be noted that the majority of respondents live in a multi-member household where children are present, and in this regard accept or buy used clothes. This is also indicated by the answer that 17% of the respondents rarely buy new clothes, as most of their clothes are donated. On the other hand, 36% of the respondents always buy new clothes. The need of the majority of Slovenes to buy new clothes was also observed by the ethnologist Mateja Habinc [44], who studied the reasons why there are so few second-hand clothing shops in Slovenia. She found that new clothing in Slovenia is still ahead of used clothing, where the focus is on originality, uniqueness, nostalgia, or environmental awareness, which are common in Western Europe and North America [44,45].
She saw the reason for such an attitude towards buying second-hand clothes in the cult of the new, which was spread by mass socialist consumerism and which, among other things, equated buying second-hand clothes with poverty [44]. On a positive note, the respondents rarely throw clothes into mixed waste; only 14% do so. Clothes are most often taken to a Humana container (63%), donated to friends or acquaintances (56%) or to the Red Cross or Caritas (49%), processed or otherwise recycled (26%), or taken to the collection center of the utility company (17%) (see Figure 4). Among mixed waste, the respondents, although very rarely, most often discard socks and underwear. Those who re-use textiles most often use them for cleaning cloths, mending children's clothes, gardening, and the like. The respondents on average throw away 5 to 10 garments a year, most commonly cotton T-shirts, underwear, and socks. That the inhabitants of Ljubljana are quite rational in creating consumer textile waste is shown by the fact that as many as 52% of the respondents themselves repair minor damage to clothing (e.g., small holes, tears, dropped buttons), and thus extend its lifespan. Accordingly, the most common reason for discarding clothes is wear and tear (86%). Other clothing and textile items, such as shoes and home textiles, account for a very small share of post-consumer textile waste. The respondents buy an average of one to three pairs of shoes per year. Shoes that are no longer worn, most often due to age and wear and tear, are usually dumped into mixed waste. Very rarely do the respondents discard shoes because they no longer like them, because they are out of fashion, or because of the need for space. It has also been shown that people take shoes to a shoemaker quite often: 28% always do so, and 23% often, but not always. The respondents rarely buy home textiles: 53% buy them less than once a year, and 37% buy 1 to 5 pieces a year, most often buying bedding, followed by bathroom towels and kitchen towels. Home textiles are very rarely discarded, less than one piece per year. They are discarded due to wear and tear, and are most often reused for cleaning cloths. From the point of view of textile purchases, the respondents show modern trends in behavior, and quite often buy new pieces of clothing, shoes, and other forms of textiles. One of the reasons can be seen in the fact that today's product promotion is extremely strong, and often the desire to buy prevails over the rationality of the decision. However, they are also attracted by the low prices of the textile items, as consumers feel that they have made a good bargain by buying cheap clothing [46]. Although one of the drivers of fast fashion is the rapid adaptation to trends [20,23], it is interesting that only a very small proportion (only 5%) of the respondents expressed an intention to buy new clothes due to fashion motives, which is extremely encouraging from the environmental point of view. Regarding feelings when discarding textiles, the respondents did not express excessive bad conscience, as they discard them only when they are already completely useless. As many as 40% of the respondents expressed mixed feelings, noting that today's textiles are of much lower quality compared to textiles in the past. They are saddened by the thought that quality textiles are practically impossible to buy nowadays, which is also one of the effects of the spread of fast fashion, where other aspects such as following trends are more important than product quality [47].
On the other hand, the respondents expressed a high level of responsibility in terms of disposing of textile waste after its use. Namely, the respondents throw away textiles only after they are completely useless, and most of the time they reuse them for cleaning cloths. The reason for this may be that we are dealing with the interested public, who are well aware of the handling of textiles and are motivated to solve this type of problem. On the other hand, the cause can also be found in the roots of past clothing management practices, when clothing was treated economically: its life was extended by patching and alteration, and clothes were worn for as long as they were still usable [48,49]. Discussion In Ljubljana, despite the considerable declarative environmental awareness of people for sustainable behavior in the field of textile waste, we found that the share of environmentally aware responses decreases when respondents are asked about actual behavior in practice. A similar conclusion was reached in other studies. It was found that personal norms and the moral nature of motivations to reduce clothing consumption are those that otherwise have a significant impact on behavioral intentions related to reducing clothing consumption. On the other hand, because clothing shapes how people visually perceive each other, it is particularly exposed to and influenced by social norms. Also, due to the prevailing marketing techniques and advertising that promote the purchase of new clothes and the following of changing trends, otherwise good intentions of reducing the consumption of clothes are often more difficult to achieve in practice [50]. A sense that awareness of the importance of reducing textile waste is increasing comes from the fact that few people are completely uninterested in reducing textile waste, as people are mostly aware of the problem and pay more and more attention to it. This is especially important, as research shows that most of us do not wear as much as 50% of the clothes we have at home [32]. This suggests that consumption in the clothing segment is much higher than the actual needs of individuals, and consumer textile waste creates major environmental problems as very few textiles are reused [25]. Wider social progress in the field of textile waste management can be increased through information provision, that is, education and training of consumers, so that their choices will influence the decisions of retailers in the direction of reducing textile waste, as well as creating a more socially and environmentally ethical textile industry. On the other hand, a successful shift to a circular economy could also be one of the key solutions to meet the sustainable development goals. At the moment, there are many efforts to optimize the current system of production of textiles and garments, but the shift to a circular system for textiles needs a systematic change throughout the whole textile chain, and also changes in policies. To improve efficiency and reduce environmental degradation, technological innovation alone is not enough; complex social innovation and advancement in terms of working conditions, equality, and social justice are also needed. There is also a need for a shift to new business models and policies that can extend product lifetimes, which could be the easiest environmental gain [51].
In recent years, we have witnessed radical changes in the field of waste management in the city of Ljubljana, both in terms of systemic changes and changes in people's minds [52]. The residents have reduced the amount of landfill waste and are also separating it to a greater extent. Although this is still largely driven by financial savings and by infrastructural changes, the share of those who separate for environmental reasons is growing. However, as our research has shown, very little or almost nothing has been done at the national and city level in relation to consumer textile waste, unlike other types of waste. For example, Snaga d. o. o. offers no separate collection of textile waste at the household level, and the population is not encouraged to separate it. The problems related to waste can be completely specific, so we cannot solve them only with universal approaches; the solutions must be adapted to the socio-cultural environment and the actual needs of people. Research also shows that the efficacy of communication about environmental issues is better if the information is content-specific, that is, if it addresses, for example, how much water and energy can be saved by reducing one's personal clothing consumption. So, more impact is achieved when consumers are directly addressed with information about the impact of their behavior on the environment [50]. Although our research has some weaknesses, such as a small sample of respondents, reliance on self-reported purchasing and disposal behaviors, which are subject to several potential biases, including memory bias (e.g., inaccurate reporting of purchase amounts) and social desirability bias (e.g., deliberate underreporting of purchases), or the lack of in-depth research of the behavior of Slovenian consumers, this research represents an important step in the research of consumer behavior in connection with textile waste in Slovenia. In further research, it is therefore important to further address consumer behavior, explore the importance of other variables that influence purchase and use behavior that are not assessed in the current study, and compare them with research results from other countries. In any case, this research already reveals some aspects that help us better understand the specifics of Slovenian consumers. Conclusions Our research has shown that there is much room for improvement in the sustainable management of consumer textile waste in Ljubljana, especially in the direction of its reduction and prevention, and especially in the direction of raising consumer awareness of the social and environmental consequences of affordable clothing and textile waste in general. Thus, based on the conducted research, we recommend promoting a bottom-up approach through a policy of small steps. With practical methods adapted to the individual or community, we can achieve better understanding and awareness, which leads to appropriate changes in practice [53]. One of the effective ways to have a long-term positive impact on behavior change is the implementation of targeted information and education through the education system and curricula, while research shows that young people are more receptive to innovations and represent the most effective medium for transmitting such ideas and practices to the elderly [52,54].
On the other hand, research shows that young people are the group most affected by the fast fashion market model: teenagers, whose money is generally quite limited, embrace the concept of very cheap clothes and are proud that their fashionable garments cost so little. The low clothing prices that manufacturers achieve through various mechanisms allow them to change styles quickly despite limited financial resources [46,47]. As a result, almost the entire fast fashion market, along with its marketing support, has become youth-oriented [11]. It is therefore crucial to begin raising awareness among young people about the generation and management of consumer textile waste.
6,363
2021-06-07T00:00:00.000
[ "Economics" ]
A Hierarchical Resource Allocation Scheme Based on Nash Bargaining Game in VANET : Due to the selfishness of vehicles and the scarcity of spectrum resources, how to realize fair and effective spectrum resources allocation has become one of the primary tasks in VANET. In this paper, we propose a hierarchical resource allocation scheme based on Nash bargaining game. Firstly, we analyze the spectrum resource allocation problem between different Road Side Units (RSUs), which obtain resources from the central cloud. Thereafter, considering the difference of vehicular users (VUEs), we construct the matching degree index between VUEs and RSUs. Then, we deal with the spectrum resource allocation problem between VUEs and RSUs. To reduce computational overhead, we transform the original problem into two sub-problems: power allocation and slot allocation, according to the time division multiplexing mechanism. The simulation results show that the proposed scheme can fairly and effectively allocate resources in VANET according to VUEs’ demand. Introduction With the rapid increase of the number of vehicles, traffic accidents and congestion are becoming more and more serious, which has attracted worldwide attention [1]. The concept of Vehicular Ad-hoc Networks (VANET) is considered to be an effective way to solve this problem. Through modern information and communications technology, the vehicles can access the network and interact with the cloud resource pool in time [2]. In addition, edge cloud computing has been applied to the VANET for data acquisition considering the simple deployment of edge cloud infrastructure [3]. For example, Autonomous Vehicular Clouds was proposed to offer potential applications to VUEs, which ensures the safe operation of autonomous vehicles and provides the services required in the running process of vehicles [4]. VANET clouds usually comprises three types of clouds, which are Central cloud, RSU cloud, and Vehicular cloud [5]. One of the main advantages of VANET clouds is that no additional infrastructure is required. The central cloud is a centralized resource pool that provides the resources needed for RSUs and vehicles. The RSU cloud provides services for vehicle access networks. The vehicular cloud consists of vehicles on the road, providing services for data transmission between vehicles. As seen in Figure 1, the role of central cloud is to achieve centralized scheduling of spectrum resources. Multiple RSUs are connected to the central cloud through the forward link and the management interface, different RSUs' wireless resources are abstracted into a virtual resource pool. VUEs request resources from RSUs within the communication range of each RSU. However, due to the high mobility of vehicles, the topological structure of the VANET is constantly changing, and the number of vehicles within the coverage area of RSU is constantly changing, which increases the difficulty of resource allocation and results in tidal phenomena in VANET [6]. On the other hand, the selfishness of VUEs brings new challenges and difficulties to the rational resource allocation in vehicular networks [7]. For example, the uncertainty of vehicle motion and the selfish behaviors of vehicles may lead to the cloud resource competition among the VUEs and result in network congestion, which occurs in the data access through the RSUs cloud and central cloud. 
Considering the sparse deployment of RSU, when the number of vehicle users increases rapidly within the coverage of an RSU, the spectrum resources of the RSU can not meet the needs of all vehicle users. Therefore, how to fairly and efficiently allocate spectrum resources to VUEs has become an urgent problem to be solved. At this stage, some scholars have put forward some resources allocation mechanisms in VANET. A spectrum resource acquisition method based on game theory is proposed, which is modeled as a non-cooperative congestion game to ensure the fair resource allocation [8]. Some research has transformed the problem of spectrum resource allocation into a semi-Markov decision-making process to achieve reasonable and effective resource allocation [9]. In [10], a repeated game scheme based on Gauss-Seidel (G-S) iteration method is proposed to solve the resource allocation problem in VANET. Considering the selfishness of the vehicle, a punishment strategy is proposed to avoid the irrational behavior of the vehicles [11]. However, the above research has some limitations and only considers the spectrum resource allocation within the coverage of a single RSU. Considering the tidal phenomenon caused by vehicle movement [12], the above research fails to realize the reasonable allocation of spectrum resources in the central cloud, resulting in the waste of spectrum resources. In non-hot spots of cities, considering the deployment cost, RSUs cannot be deployed on a large scale, which means they cannot form a seamless coverage network [13]. When the traffic tends to be congested during rush hours, the network load of a single RSU is bound to increase sharply, resulting in that the Qos of VUEs cannot be guaranteed. In addition, the services that the VUEs request from the RSUs can be divided into safety services and nonsafety services [14]. Safety services are used to send safety messages, for example, various warning messages the assist vehicles to prevent accidents. Nonsafety services are used for entertainment purposes. Obviously, safety services have a higher priority than nonsafety services. Hence, considering the limited resource of RSU, we must first satisfy the requirements of safety services. On the other hand, the services requested by VUEs have different requirements for transmission characteristics provided by RSUs, such as delay, jitter and packet loss rate [15]. Therefore, in order to solve the resource allocation problem in the VANET in a more reasonable way, we must take the differences of services requested by VUEs into account, while ensuring the fair allocation of resources. In this paper, we proposes a hierarchical resource allocation scheme based on Nash bargaining game to achieve a fair and efficient resource allocation in VANET. In the first layer, we study the resource allocation between the central cloud and the RSUs' cloud. In the second layer, we study resource allocation between RSUs and VUEs. The major contributions of this paper are as follows: • Considering the characteristics of VANET, we propose a hierarchical resource allocation architecture based on Nash bargaining game to ensure the proportional fairness and effectiveness of resources allocation in VANET. • Considering the tidal phenomenon caused by the random movement of vehicles and the limited resources of a single RSU, we study the resource allocation strategy of the central cloud by establishing the Nash bargaining model between RSUs. 
• Considering the selfishness of VUEs and the difference of services requested by VUEs, we study the resource allocation of RSUs, by establishing the Nash bargaining game model of VUEs and constructing the matching degree index between VUEs and RSUs. • To reduce computational overhead, we transform the resources allocation problem between VUEs and RSUs into two sub-problems: power allocation and slot allocation, according to the time division multiplexing mechanism. The rest of this paper is organized as follows. We formulate this hierarchical resource allocation scheme based on Nash bargaining game in Section 2. Simulations are performed and results are analyzed in Section 3. Finally, Section 4 concludes this paper. Hierarchical Resource Allocation Scheme Based on Nash Bargaining Game In this section, we briefly introduce the bargaining game and Nash bargaining solution. Thereafter, we propose the hierarchical mathematical model of resource allocation in VANET based on Nash bargaining game. Finally, we give the solution of this hierarchical model. Bargaining Game and Nash Bargaining Solution The game theory can effectively solve the problem of competition among the VUEs in the process of resource allocation. Although the selfish characteristics of VUEs in non-cooperative games conform to the actual situation, the rational behaviors of VUEs will damage the overall benefit of the system and restrict the benefit of other VUEs. Therefore, relevant constraints need to be established among the VUEs to enable them to cooperate with each other and to improve the overall efficiency of the system as much as possible. Bargaining game [16] is a kind of cooperative game in which both players have the opportunity to reach a win-win situation. In this game, there is a conflict of interest among the participants. If either party cannot accept the bargaining scheme, the overall agreement will not be reached. Before the specific definition is given, a simple example of two-person bargaining game is given firstly. We assume participant 1 and participant 2 share the benefits of the system, which is X. Participant 1 first proposes a share of the proceeds, which is x, x ∈ [0, X]. In addition, then, participant 2 judges whether to accept the agreement based on the proposal and decision of participant 1. If the agreement is accepted, participant 2 receives the remaining proceeds, which is X − x. Otherwise, participant 2 rejects the agreement, both parties fail to reach an agreement, the benefits will be zero. The definition of a Nash bargaining game is as follows. G = (K, S, U , u 0 ) represents a game problem with K players. S represents the set of all participants' policies. The set of benefits that participants can obtain in the game is U . u 0 i ∈ u 0 = {u 0 1 , u 0 2 , ..., u 0 K } represents the benefit of the ith participant under the failure agreement. Clearly, participants do not participate in the collaboration when the benefits allocated to the collaboration are less than u 0 i . Then, game G is called the bargaining game of K participants. The Nash Bargaining Solution (NBS) is the function mapping that can make the above bargaining game G has the unique optimal return vector, u * = {u * 1 , u * 2 , ..., u * K } = f (U , u 0 ). However, there may be multiple bargaining functions, therefore, the most widely used bargaining function is selected to realize effective and reasonable allocation of resources in VANET. This bargaining solution is subject to the following conditions [17]. 
As for any linear mapping g, it should satisfy Symmetry. Conditions 1 and 2 guarantee the validity and rationality of the existence of NBS, condition 2 means the ideal state of resource allocation, in which the bargaining scheme is optimal. Conditions 3 to conditions 5 guarantee the proportional fairness of NBS, and conditions 3 indicates that NBS has zoom invariance. It means that if the benefit function changes linearly, the final NBS stays the same. The condition 4 indicates that the expansion of the domain will not affect the final result of the bargaining game. The bargaining function of NBS meeting the above five conditions is shown below is a strictly quasiconcave function on a nonempty closed set U . This function has a unique maximum value in the maximum solution problem. Considering the selfishness of VUEs and the architectural characteristics of VANET, Nash bargaining game is suitable for solving the resource allocation problem among the central cloud, RSU cloud and VUEs based on the analysis above. A hierarchical resource allocation scheme based on Nash bargaining games will be described in the following paragraphs. Table 1 summarizes the important notations in this paper for easy reference. Table 1. The description of variables in hierarchical resource allocation scheme. K The number of RSUs b k The resources allocated to the kth RSU B TOTAL The total spectrum resources in the central cloud R k Transmission rate of the kth RSU R min k The lowest transmission rate expected by the kth RSU η k The modulation factor for the kth RSU C k The number of subcarriers of the kth RSU N k The number of vehicles requested services from the kth RSU ω 0 k Subcarrier bandwidth of the kth RSU h ij Channel gain of the jth subcarrier for the ith VUE P ij Transmission power of the jth subcarrier for the ith VUE P MAX The sum of the transmitted power for all subcarriers α k i The matching degree between the kth RSU and the ith VUE τ i The size of time slot assigned to the ith VUE for requesting services r min i The minimum transmission rate expected for the ith VUE The Formulation of Hierarchical Resource Allocation Scheme in VANET In this section, a hierarchical resource allocation scheme based on Nash bargaining game is proposed. In the first layer, we analyze the resource allocation between central cloud and RSU cloud. Thereafter, based on the matching degree between VUEs and RSUs, We study the resources allocations problem between them in the second layer. Resources Allocation Scheme for RSUs in the First Layer Different RSUs share the centralized spectrum resources of central cloud resources pool, the first layer of the hierarchical resource allocation scheme is the problem of allocating spectrum resources to RSUs from the central cloud. We assume that K = {1, 2, ..., k} is the set of RSUs, the spectrum resource requirements of the kth RSU is b k . According to Shannon theory [18], we use the transmission rate R k to represent the benefit function of each RSU where σ 2 is the white Gaussian noise,P k andḠ k represent the transmission power and channel gain of the kth RSU, respectively. c 1 is the modulation factor of orthogonal amplitude modulation [19], which can be denoted as where c 2 = 1.5, c 3 = 0.2. BER k is the maximum error rate that the kth RSU can tolerate. According to the definition of NBS, the cooperative game problem of the first layer centralized resource allocation can be expressed as F(R, R min ). R is a set of benefits that each RSU gets when participating in the game. 
The minimum R min = (R min 1 , R min 2 , ..., R min k ) is the disagreement point. We assume the total spectrum resources of the central cloud is B sum , the Nash bargaining game problem can be formulated as is the target function for resource allocation at the first layer. b is the spectrum resources vector assigned to each RSU, which is (b 1 , b 2 , ..., b k ). The first constraint C 1 in Formula (4) is used to guarantee the existence of the optimal allocation vector. The second constraint C 2 indicates that the number of spectrum resources shared by RSUs cannot exceed the total spectrum resources in central cloud. According to the properties of logarithmic functions and Lagrange multiplier method, the Lagrange function form of this spectrum allocation problem can be formulated as λ is lagrangian multiplier, according to the definition of the KKT condition [20], Formula (5) should satisfy the following three conditions After formula derivation, we can get b k and λ (8) in Formula (7), γ k = log 2 . From Formula (6), due to λ is a nonnegative variable, we can find that ∑ K k=1 b k − B sum = 0. Hence, through formula derivation and calculation, the optimal spectrum resources allocated to the kth RSU can be obtained After completing the spectrum resources allocation for RSUs in the first layer, we need to continue to study the allocation of spectrum resources for VUEs within the coverage of RSUs. Resources Allocation for VUEs in the Second Layer According to the spectrum resources allocated in the first layer, the spectrum resources obtained by the kth RSU is b * k . In order to improve the utilization of spectrum resources, each RSU network adopts the multiuser orthogonal frequency-division multiplexing (OFDM) systems [21]. Hence, continuous spectrum resources are divided into several slices of virtual resources in the form of orthogonal subcarriers in the RSUs cloud. Meanwhile, transmission power is distributed to each VUEs. We assume that each RSU's subcarrier has the same bandwidth, the subcarrier bandwidth of the kth RSU is ω 0 k . The number of subchannel resources in the kth RSU is · means the integer down. We use N k = {1, 2, ..., N k } to represent the set of VUEs in the kth RSU. We assume that the benefits set of VUEs in bargaining cooperation is r. r min = {r min 1 , r min 2 , ..., r min N k } is the disagreement point, which is the minimum benefit set that VUEs are willing to accept. Therefore, the Nash bargaining game of the second layer resource allocation issue can be represented by (N k , r, r min ). In order to guarantee the efficiency and fairness of the resources allocation at the VUEs layer, we formulate the resource allocation optimization problem as Formula (11) represents the optimization problem of resource allocation to VUEs at the second layer. c k = {c k 1 , c k 2 , ..., c k N k } is the strategy vectors of VUEs in the kth RSU for subcarrier allocation. p k = {p k 1 , p k 2 , ..., p k N k } is the strategy vector of VUEs in the kth RSU for transmission power allocation. a i,j ∈ {0, 1} denotes whether subchannel j is assigned to VUE i or not (i.e., if subchannel j is assigned to VUE i, a i,j = 1; otherwise, a i,j = 0). |g i,k | 2 and d −β i,j represent the channel gain and transmission distance between RSUs and VUEs, respectively. β is the decay coefficient of distance. η is the modulation coefficients of subcarriers, we assume that this coefficient is equal to c 1 in Formula (2). 
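As an illustration of the first-layer bargaining problem derived above, the spectrum split among RSUs can also be obtained numerically by maximizing the product of each RSU's rate gain over its disagreement point. The short sketch below does this in Python; the Shannon-style rate expression and all parameter values are assumptions made only for illustration, since the paper's displayed rate equation and its simulation constants are not reproduced here.

```python
"""Numerical sketch of the first-layer Nash bargaining spectrum split.

Assumptions (not taken from the paper's lost display equations):
- R_k(b_k) = eta_k * b_k * log2(1 + snr_k), a Shannon-style rate with a
  per-RSU modulation factor eta_k and effective SNR snr_k.
- All parameter values are illustrative only.
"""
import numpy as np
from scipy.optimize import minimize

eta = np.array([0.8, 0.9, 0.85])     # assumed modulation factors per RSU
snr = np.array([40.0, 25.0, 30.0])   # assumed effective SNR (linear) per RSU
r_min = np.array([4.0, 6.0, 5.0])    # minimum rates R_k^min (Mbps), illustrative
B_sum = 15.0                         # total spectrum in the central cloud (MHz)

def rate(b):
    """Per-RSU transmission rate for a spectrum allocation b (MHz -> Mbps)."""
    return eta * b * np.log2(1.0 + snr)

def neg_nash_product(b):
    """Negative log Nash product over the gains above the disagreement point."""
    gain = rate(b) - r_min
    if np.any(gain <= 0):            # outside the feasible bargaining set
        return 1e9
    return -np.sum(np.log(gain))

cons = ({"type": "eq", "fun": lambda b: np.sum(b) - B_sum},)
bounds = [(1e-3, B_sum)] * len(eta)
b0 = np.full(len(eta), B_sum / len(eta))   # start from an equal split

res = minimize(neg_nash_product, b0, bounds=bounds,
               constraints=cons, method="SLSQP")
print("spectrum split b_k* (MHz):", np.round(res.x, 3))
print("rates R_k (Mbps):         ", np.round(rate(res.x), 3))
```

In this sketch the optimizer plays the role of the closed-form KKT solution derived in the text: each RSU first receives enough spectrum to clear its minimum rate, and the surplus is then shared so that the log-gains are balanced, which is the proportional-fairness property of the NBS.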
According to the above analysis, we can achieve a proportional fairness resources allocation in VANET. However, the above resource allocation analysis does not take into account the differences of VUEs, that means the services requested from RSUs by VUEs are different from each other. On the other hand, the performance of RSUs is different with each other. Different VUEs should have different performance requirements in different RSUs. For example, latency and jitter may be more important than throughput for a VUE requested the online game. A low-latency RSU is more suitable for transmitting voice services, and the low jitter RSU is more suitable for video stream transmission services. However, in most previous studies, the benefit function of VUEs just depend on the power and the number of subcarriers allocated. As shown in formula 11, VUEs obtaining the same amount of resources have the same benefits, which does not take into account differences of VUEs. It is not consistent with the actual situation. In order to solve the above problems, we firstly construct the matching degree index between VUEs and RSUs. The evaluation of matching degree index is based on computing the L-dimensional Euclidean distance between services features offers and demands, where L is the number of services features. We use binary variables SF k,i l,o to represent whether the lth services feature can be provided to ith VUEs by the kth RSU, such as low-latency, low-jitter and high-throughput. The value of SF k,i l,o is 1, which indicates that the kth RSU can provide the ith VUEs with the lth services feature. The value of SF k,i l,o is 0, which indicates that the kth RSU can not provide the ith VUEs with the lth services feature. On the other hand, we use binary variables SF k,i l,d to represent whether the lth services feature will be demanded by the ith VUEs from the kth RSU. The value of SF k,i l,d is 1, which indicates that the lth services feature will be demanded by ith VUEs. To facilitate analysis, we use the variable D k i to represent the ability of the kth RSU providing service features for the ith VUE Obviously, as the value of D k i gradually increases, the ability of the kth RSU providing service features for the ith VUEs becomes worse. Therefore, we use exponential function to build the matching degree index between RSUs and VUEs. The matching degree index between the ith VUE and the kth RSU can be formulated as where ω l is the normalized weight, β k ∈ [0, 1] is a reputation parameter of the kth RSU, which is related to services features provided in the past. Thereafter, considering the matching degree index between VUEs and RSUs, Equation (11) can be transformed into The NBS optimization problem in formulation 14 is a mixed integer nonlinear programming problem (MINLP), which is NP-hard. In order to reduce the computation complexity, we adopt the time-sharing relaxation [22] to transform the MINLP problem into a nonlinear real-number programming problem. We introduce allocation time variable τ k i , which means the fraction of time when VUE i occupies the all subchannels of the kth RSU. Now, the optimization problem 14 can be transformed into two sub-problems: optimal power allocation and optimal time allocation. we assume that τ k i ∈ [0, 1] represents the time length occupied by the ith VUE in the kth RSU. 
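To make the matching-degree index concrete, the following sketch computes it from binary service-feature offer and demand vectors. Because the displayed Equations (12) and (13) are not reproduced here, the specific form used, α = β_k · exp(−Σ_l ω_l · mismatch_l), is an assumption that preserves only the stated qualitative behaviour: the index shrinks as the number of demanded-but-unavailable features D_i^k grows, and it is scaled by the RSU's reputation parameter.

```python
"""Sketch of the matching-degree index between a VUE and an RSU.

The concrete form alpha = beta_k * exp(-sum_l w_l * mismatch_l) is an
assumption standing in for the paper's Eq. (13); it only reproduces the
stated behaviour that alpha decreases as D_i^k (Eq. (12)) grows.
"""
import numpy as np

def matching_degree(offered, demanded, weights, reputation):
    """offered, demanded: binary vectors SF_{l,o}, SF_{l,d} over L service
    features (e.g. low latency, low jitter, high throughput);
    weights: normalized feature weights; reputation: beta_k in [0, 1]."""
    offered = np.asarray(offered, dtype=float)
    demanded = np.asarray(demanded, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mismatch = np.clip(demanded - offered, 0.0, None)   # demanded but not provided
    return reputation * np.exp(-np.dot(weights, mismatch))

# Illustrative example: a voice-service VUE demanding low latency and low jitter.
demand_voice = [1, 1, 0]     # [low latency, low jitter, high throughput]
rsu1_offer   = [1, 1, 0]     # RSU1 provides both demanded features
rsu2_offer   = [0, 1, 1]     # RSU2 cannot provide low latency
w = [0.5, 0.3, 0.2]          # assumed normalized feature weights

print(matching_degree(rsu1_offer, demand_voice, w, reputation=0.9))  # higher match
print(matching_degree(rsu2_offer, demand_voice, w, reputation=0.9))  # lower match
```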
We assume Then, the optimization problem can be formulated as From [23], we know that necessary and sufficient condition for the optimal allocation solution p * i,j exists, the sub-problem of the optimal transmission power can be converted to Obviously, optimization function 16 is log-concave with respect to p i,j , and constraint C12 is linear. Thus the optimization power allocation problem is convex. According to the Lagrangian multiplier method, the Lagrangian function can be written as The Lagrangian multiplier vector is θ = {θ 1 , θ 2 , ..., θ N k }. According to KKT conditions, the following conditions must be satisfied Then, we can obtain the optimal transmission power p * i,j for VUE i on subchannel j, which is given as According to the value of the optimal transmission power p * i,j , we can get the optimal transmission rate Then, the sub-problems of optimal time allocation can be formulated as The Lagrange function is formulated as is the time vector allocated to VUEs. ν is the lagrangian multiplier. According to KKT conditions, the following conditions must be satisfied (23) and then, the optimal transmission time allocation for the ith VUE in the kth RSU is obtained Thus, we present the algorithm in Algorithm 1. In Algorithm 1, we firstly calculate the spectrum resources obtained by the kth RSU. Then, we obtain the power resource and time slot resource of the VUE i in the kth RSU. Therefore, the time complexity of Algorithm 1 is O(N 2 ). Algorithm 1 Hierarchical Resource allocation based on Nash bargaining game Input: total spectrum resources B sum , the parameters of RSUη k ,P k ,Ḡ k , R min k , subchannel bandwidth ω 0 k , VUE's minimum transmission rate requirement r min i . Calculated b * k , which is the optimal spectrum resources allocated for the kth RSU based on Equation (9). 3: for i ∈ N do 4: Calculated α k i , which is the matching degree between the ith VUE and the kth RSU. 5: According to the allocated spectrum resources b * k , calculated C k based on Equation (10), which is the number of subchannel allocated to the kth RSU. 6: Calculated p * i,j based on Equation (19), which is the optimal transmission power for VUE i on the subchannel j. 7: Calculated τ * i,k based on Equation (23), which is the optimal transmission time allocation for the ith VUE in the kth RSU. 8: end for 9: end for 10: Algorithm Simulation and Results Analysis In this section, we use Python 3.5 to evaluate the performance of the proposed method in terms of proportional fairness and effectiveness. We firstly introduced the simulation setup. Then a lot of simulation results are provided and analyzed. Simulation Setup We assume that there are three RSUs in the simulation scenario, namely RSU1, RSU2, RSU3. Then we consider a group of VUEs that are randomly deployed in the OFDMA wireless network within the coverage of RSUs. The total spectrum resources of the resource pool B sum is 15 MHz. The minimum transmission rate requirement R min for RSUs are R min For analysis purposes, we assume that there are three types of VUEs within the coverage of RSUs. VUE1 primarily requests voice service from the RSU. VUE2 primarily requests a text service from the RSU. VUE3 mainly requests video services from the RSU. Obviously, different VUEs have different requirements for the service characteristics provided by RSU. Latency and jitter may be more important than throughput for a VUE requesting online game services. 
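Before working through the concrete performance figures of the RSUs and VUEs below, it may help to see the two-step decomposition of Algorithm 1 (power allocation, then slot allocation) as executable code. The sketch is a hedged stand-in, not the paper's method: the closed forms p*_{i,j} (Eq. (19)) and τ*_{i,k} (Eq. (23)) are not reproduced here, so both steps are solved as small convex programs, the matching degrees are used simply as bargaining weights (one plausible reading of Equation (14)), and all channel and rate parameters are illustrative.

```python
"""Hedged numerical stand-in for the second-layer decomposition (Algorithm 1).

Step 1: per-VUE water-filling of a power budget over subchannels.
Step 2: a Nash-bargaining split of the frame time, with matching degrees
        alpha_i used as bargaining weights.
All numbers are illustrative, not the paper's simulation values.
"""
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_vue, n_sub = 3, 8
bw = 20e3                                   # subchannel bandwidth (Hz), as in the setup
noise = 1e-13                               # assumed noise power
gains = rng.uniform(1e-10, 1e-9, size=(n_vue, n_sub))   # assumed channel gains
p_max = 1.0                                 # total transmit power per VUE (W), assumed
alpha = np.array([0.9, 0.7, 0.8])           # matching degrees (illustrative)
r_min = np.array([0.1e6, 0.15e6, 0.3e6])    # minimum rates (bit/s), video highest

def waterfill_rate(g):
    """Rate of one VUE when it occupies all subchannels (power water-filling)."""
    def neg_rate(p):
        return -np.sum(bw * np.log2(1.0 + p * g / noise))
    cons = ({"type": "eq", "fun": lambda p: np.sum(p) - p_max},)
    res = minimize(neg_rate, np.full(n_sub, p_max / n_sub),
                   bounds=[(0.0, p_max)] * n_sub, constraints=cons, method="SLSQP")
    return -res.fun

rates = np.array([waterfill_rate(gains[i]) for i in range(n_vue)])

def neg_weighted_nash(tau):
    gain = tau * rates - r_min                # rate gain above each disagreement point
    if np.any(gain <= 0):
        return 1e12
    return -np.sum(alpha * np.log(gain))

cons = ({"type": "eq", "fun": lambda t: np.sum(t) - 1.0},)
res = minimize(neg_weighted_nash, np.full(n_vue, 1.0 / n_vue),
               bounds=[(1e-4, 1.0)] * n_vue, constraints=cons, method="SLSQP")
print("full-band rates (Mbit/s):", np.round(rates / 1e6, 3))
print("time-slot shares tau_i:  ", np.round(res.x, 3))
```

As in the analysis above, the slot step only redistributes time left over after each VUE's minimum rate is covered, so a VUE with a larger minimum requirement or a better matching degree receives a larger share.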
In addition, in order to facilitate the calculation, we use the delay, jitter and packet loss rate to measure the performance of RSUs. For example, the minimum requirements for delay, jitter and packet loss rate for VUE1 are 100 ms, 25 ms and 4% respectively, while the best performance of RSU1 for delay, jitter and packet loss rate is 98 ms, 27 ms and 5%, respectively. Hence, according to the analysis above, we can find that Then, we can calculate the matching degree between VUEs and RSUs. The performance parameter values of each RSU are shown in Table 2. As we know, RSU with low latency is better suited to provide voice services, text services should prefer RSU with low packet loss rates, video services should prefer RSU with less jittery. The minimum performance requirements of VUEs for delay, jitter and packet loss rate are shown in Table 3. According to Equation (14), we can get the matching degree between each RSUs and VUEs, as shown in Table 4. Then we can find that RSU1 is suitable for voice transmission services, RSU2 and RSU3 are suitable for text services. After the spectrum resources allocation to RSUs is completed, we consider the allocation of resources for VUEs in the second layer, including power allocation and subchannel allocation. Consider the space limitation of this paper, we just study the resources allocation problems in RSU2. We assume that there are three types of VUEs within the coverage of RSU2, which are VUE1, VUE2 and VUE3, respectively. The value of minimum rate requirement r min The subchannel bandwidth ω k 0 = 20 kHz. The maximum bit error rate that VUEs can tolerate is 10 −3 . Simulation Results The proposed resources allocation scheme is evaluated through by two measurements: effectiveness and fairness. The effectiveness means that the minimum rate requirements of any VUE can be met by negotiating with other VUEs. The fairness means the proportional fairness considering the individual minimum rate. Figures 2 and 3 show the impact of BER 1 (the maximum bit error rate that RSU1 can tolerate) on the spectrum allocation. As shown in Figure 2, the spectrum allocated of RSU1 decreases with the increase of BER 1 . It is because that the increase of BER 1 increases the modulation coefficient according to Equation (3). With the minimum transmission rate been satisfied (denoted in Figure 3), when the spectrum resources required by RSU1 are reduced, the additional spectrum resources will be distributed to RSU2 and RSU3 in proportion, so the spectrum resources obtained by the two RSUs will increase with the increase of BER 1 . Then, as for RSU1, the increase of BER 1 eventually leads to the increase of transmission rate for each RSU, which can be concluded from Figure 3. Figure 4 shows the impact of RSU1's minimum rate requirements on spectrum resource allocation. After the minimum spectrum resources requirements of each RSU are satisfied, the rest of the spectrum resources is redistributed fairly to all RSUs in proportion to demand. Therefore, even if the minimum transmission rate requirements of RSU1 is 0 Mbps, RSU1 can still be allocated 4 MHz spectrum resources. To meet the increasing demand for RSU1's minimum transmission rate, more spectrum resources must be allocated to RSU1, resulting in gradual reduction in spectrum resources allocated to RSU2 and RSU3. Figure 5 show the spectrum resources obtained by RSUs under different distribution schemes. 
After the minimum rate requirements of RSU1 and RSU2 are met, the allocation scheme based on maximum system throughput assigns the remaining spectrum resources to RSU3, which has better average performance than RSU1 and RSU2. In contrast, after meeting the minimum requirements of each RSU, the Nash bargaining scheme distributes the remaining spectrum resources to all RSUs in proportion to demand. By further comparing the spectrum resources obtained by RSU2 and RSU3, we find that RSU2 obtains fewer spectrum resources under the maximum-throughput scheme because of its poorer average performance, whereas under the Nash bargaining scheme it obtains spectrum resources in proportion to its demand. Thereafter, we analyze the resource allocation for VUEs in the second layer. If the differences among VUEs and the matching degree between VUEs and RSUs are ignored, every VUE with the same resource requirements receives the same amount of resources from the RSU, which is clearly unreasonable. Therefore, when designing the resource allocation scheme at the VUE layer, the matching degree between VUEs and RSUs must be considered. Figure 6 shows the effect of the communication distance between VUEs and RSUs. When the communication distance between each VUE and the RSU is the same, the transmission time slot obtained by each VUE is determined by its minimum transmission rate requirement. Therefore, VUE3, which has the largest minimum rate requirement because it requests video services, is allocated the largest transmission time slot. In addition, as the distance between VUE3 and the RSU gradually increases, the time slot allocated to VUE3 must grow in order to meet its minimum rate demand. Correspondingly, the transmission time slots allocated to VUE1 and VUE2 are reduced. This rate reduction is acceptable to these VUEs because their rates remain above their minimum rate requirements. Therefore, the proposed scheme achieves a fair and reasonable allocation of resources according to the minimum needs of the VUEs. We then compare the effectiveness of the proposed hierarchical resource allocation scheme with other schemes, namely the equal distribution scheme and the maximum-throughput scheme. To demonstrate the advantages of our proposed scheme, we study the trend of VUE3's transmission rate as the distance between VUE3 and RSU3 increases. In Figure 7, as this distance increases, the path loss of data transmission also increases, so the transmission rate of VUE3 decreases gradually under all three resource allocation schemes. When the distance between VUE3 and RSU3 exceeds 180 m, the transmission rate of VUE3 falls below 1 Mbps under both the equal distribution scheme and the maximum-throughput scheme, which means that VUE3's minimum rate requirement can no longer be guaranteed. In our proposed scheme, however, the minimum rate demand of VUE3 can still be met even when the distance is greater than 180 m. We also evaluate the performance of the proposed method in guaranteeing fairness among VUEs. The fairness index φ is defined as in [24], with φ ∈ (0, 1]. If φ = 1, the ratios of the VUEs' rates to their corresponding minimum rates are all equal, which implies that the resource allocation is perfectly fair. 
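The displayed formula for φ is cited from [24] and is not reproduced here; a Jain-style index computed over the ratios of achieved rate to minimum rate is consistent with the stated property that φ = 1 when all ratios are equal. Under that assumption, the index can be computed as follows.

```python
"""Sketch of the fairness index phi over rate-to-minimum-rate ratios.

A Jain-style index over x_i = r_i / r_i^min is assumed here; it matches the
stated property that phi = 1 when all ratios are equal and phi tends toward
1/N as the allocation becomes more skewed.
"""
import numpy as np

def fairness_index(rates, min_rates):
    x = np.asarray(rates, dtype=float) / np.asarray(min_rates, dtype=float)
    return (x.sum() ** 2) / (len(x) * np.sum(x ** 2))

print(fairness_index([2.0, 3.0, 4.0], [1.0, 1.5, 2.0]))   # 1.0, perfectly fair
print(fairness_index([5.0, 1.0, 1.0], [1.0, 1.0, 1.0]))   # ~0.60, skewed allocation
```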
From Figure 8, we can see that the proposed scheme outperforms the equal distribution scheme and the maximum-throughput scheme in terms of fairness. This is because the proposed scheme realizes dynamic resource allocation based on the Nash bargaining game. In the equal distribution scheme, each VUE is assigned the same amount of resources; as the number of VUEs increases, the resources allocated to each VUE decrease, and the minimum rate requirements of some VUEs can no longer be met. As a result, the fairness index decreases with the number of VUEs. The maximum-throughput scheme first satisfies the minimum demand of the VUEs with poor communication conditions and then allocates more resources to the VUEs with better communication conditions. However, when the number of VUEs becomes large enough, maximizing system throughput means that the minimum demand of the VUEs with poor communication conditions can no longer be met, and the fairness index begins to drop. The proposed hierarchical resource allocation scheme provides a better fairness index thanks to its proportional allocation based on the Nash bargaining game. Conclusions In this paper, a hierarchical resource allocation scheme is proposed to realize fair and effective resource allocation in VANET. In the first layer, the random movement of vehicles makes it difficult for the central cloud to allocate resources to RSUs; we therefore use a Nash bargaining game to achieve fair and efficient resource allocation among RSUs. In the second layer, considering the selfishness of VUEs and the differences among the services they request, we construct a matching degree index between RSUs and VUEs and then establish a Nash bargaining game model to achieve fair and effective resource allocation to VUEs. To reduce the computational complexity, we transform the original problem into two sub-problems: power allocation and slot allocation. Simulation results show that the proposed hierarchical resource allocation scheme achieves both fairness and effectiveness of resource allocation in VANET.
7,761.6
2019-06-04T00:00:00.000
[ "Computer Science", "Engineering" ]
Factors Affecting Students’ Desire to Take Upcoming Online Courses after E-learning Experience During COVID-19 — Since 2020, COVID-19 has completely changed the daily activities of almost all nations, and education has been heavily affected. Because of school closures, face-to-face classrooms were halted or replaced with online classes in which both lecturers and learners had to adjust their teaching and learning styles to cope with unexpected situations. The ‘new normal’ of learning from homes, spending hours staring at screens, and struggling with piles of online tasks has somehow demotivated students to continue learning. This study explores factors affecting students’ desire to take online courses after experiencing e-learning during COVID-19. Nine hundred fifty-five students of Vietnam National University took part in the survey via an online questionnaire. Data were analyzed using SPSS 20; correlation, hierarchical regression was employed to examine how online factors influence students’ decision. The research results showed that skill enhancement, self-regulated learning, lecturer interaction during the course were among the most important predictors of students’ desire to take more online courses. In contrast, student interaction imposed no significant influence. This study gives the theoretical background for other studies in the same field and suggests practical implications for governments and universities to implement online training better to cope with the pandemic. Introduction Although mobile learning has been a new phenomenon in the past few decades, its benefits in high-quality education and learning processes are numerous. Several studies on mobile learning have been carried out to understand better how mobile devices are used in educational contexts [1]. Mobile learning is defined as learning that involves using a mobile device, either alone or in conjunction with other forms of information and communication technology, to allow students to learn at any time and anywhere [2]. This possibility shows that mobile learning could be beneficial to both students and teachers [3]. In general, mobile learning assists students in developing technical skills, conversing skills, finding answers, developing a sense of teamwork, allowing information exchange, and thus maximizing their learning results [4]. Mobile technology has expanded faster than any other technology in history, and developing countries have seen the highest growth rate in mobile technology acquisition [5]. It has led developing nations to skip some intermediate development stages that developed countries had to go through, such as erecting extensive electricity power infrastructures and constructing several computer rooms in educational institutions [6]. Though mobile learning experience in developing countries is limited, it offers good prospects for cost-effective ways of delivering quality learning through open and distance learning provisions, as Lamptey & Boateng [7] point out, and has emerged as a new trend in these countries' education systems including Vietnam. Due to COVID-19, a severe disease caused by Coronavirus Sars-Cov-2, routine activities in almost all countries have been badly affected. The consequence of the lockdown and social distancing has led to a dramatic change in education. Thousands of universities and colleges have been closed to foster social distancing and limit the virus's spread. 
This catastrophic scenario raises many problems, including the decline in educational quality and the student's prospects [8]. Therefore, all educational institutions have placed a premium on adopting innovative teaching techniques and approaches to maintain the quality and continuity of student learning [9]. Mobile learning is one of the most acceptable options in this case. Under the same situation of COVID-19, the Ministry of Education and Training (MOET) of Vietnam released a decision to formally switch education mode from traditional face-to-face learning to e-learning in the time of school closures. No sooner had the Vietnamese government released the order of social distancing at the end of March 2020 than Vietnam National University (VNU) promulgated Documentary No. 944/ ĐHQGHN-ĐT to give guidelines on how to carry out online teaching and learning among its university members. Although lecturers and students are familiar with basic ICT, the sudden change has caused challenges in teaching and learning. The lack of e-learning course design format, physical interaction learning environment, teaching methodologies of lecturers, interaction, and students' motivation can be obstacles for both lecturers and students to achieve their education goals. Many students realized that they had no other choice but to use modern online technologies to fulfill their learning tasks and safely keep in touch with their instructors to preserve social distance [10]. While ICT application is one of the most potent tools to speed the growth of online learning, the swift switch to fully online learning may cause some negative impacts on students. As a result, various roadblocks may hinder students' learning processes, causing them to be hesitant to enroll in future online courses [11]. For such reasons, it is essential to study the critical factors affecting students' intention to continue using future online courses since few studies have been related to this topic and online mobile applications learning [11]. The results of this research will contribute to the theoretical framework of mobile learning or online learning acceptance and set the ground rules for school managers, teachers, and policymakers in carrying out the necessary training and supports to enhance online learning quality and student's continuity with online courses. The lesson learned from the circumstances of confinement caused by the coronavirus will also force a generation of new laws, regulations, platforms, and solutions for future cases [12]. The primary purpose of this research was to explore factors affecting students' desire to take upcoming online courses after experiencing tailored online courses in an emergency to cope with the pandemic. For this purpose, we collected data when the semester ended, and students could have insights into their online learning to clarify the motivating factors for their future online course decision. This study also applied SPSS 20 to test hypotheses and path connections among factors in the model developed. Towards that end, the study posed a research question and six hypotheses: -How do six variables (interaction with lecturers, interaction with students, peer support, self-regulated learning, technical support, skill enhancement) influence students' desire to take the upcoming online course after their learning experience during COVID-19? H1. Interaction between students and lecturers significantly affects students' desire to take upcoming online courses. H2. 
Interaction between students significantly affects students' desire to take upcoming online courses. H3. Peer support significantly affects students' desire to take upcoming online courses. H4. Self-regulated learning significantly affects students' desire to take upcoming online courses. H5. Technical support significantly affects students' desire to take upcoming online courses. H6. Skill enhancement significantly affects students' desire to take upcoming online courses. Apart from the introduction, the rest of the paper is organized as follows. In Section 2, some literature is reviewed about mobile learning, online learning mode, the role of students' satisfaction in defining the quality of online training courses. Section 3 presents the research method of the study. In the next part, the authors showed the main findings of the research, and the final section presents concluding remarks and suggestions for further study. Modern technological advancements enable the creation of low-cost, inventive, portable, and digital technologies [13], which assist students in developing the capacity to overcome learning difficulties [14]. Because of its portability, ICT has improved at an incredible rate over the last decade, and mobile devices have expanded in popularity and importance in our daily lives. By 2023, about 70% of the world's population will own a cell phone, according to Cisco's annual Internet Report (2018-2023) [15]. With the help of current technologies, online learning (learning that takes place over the internet) has become more popular even at the schools that formerly only offered faceto-face learning. Mobile learning is a subset of online learning or e-learning. It refers to the process of learning in a variety of situations via social and information exchanges using mobile devices such as laptops, cellphones, and wearable technologies. It is a type of distance learning where students employ instructional tools on mobile devices at their convenience [16]. Before the pandemic, schools mainly taught in person. When facing the COVID-19 epidemic, virtually every educational institution, from kindergarten to university, has shifted to online learning. These caused the entire world to rely on mobile learning to remain teaching and learning in the context of social distancing [17]. Due to the COVID-19 epidemic, about 1.6 billion students could not attend physical classes, resulting in over 91 percent of all students enrolling online [18]. In the context of the pandemic, mobile learning, online learning, and e-learning are brought much closer in the concepts. They all refer to the mode of learning with the internet connection and portable device support. Learning mode Two predominant educational modes are typically used: face-to-face instruction and online instruction. According to Kasser et al. [19], there are three types of learning environments: synchronous, asynchronous, and blended learning. In detail, Gazan [20] defined face-to-face learning as a typical physical or live virtual classroom where the teachers and students interact in real-time, as a synchronous environment. On the other hand, it is an asynchronous workplace when teachers and students work in various time zones. The combination of synchronous and asynchronous learning is referred to as blended learning. The degree to which asynchronous and synchronous learning is expanded varies in a hybrid learning environment. These possibilities are depicted in Figure 1 and are referred to as the synchronicity spectrum. 
Because of the COVID-19 pandemic, many educational institutions worldwide had to switch to online synchronous or asynchronous mode. Universities combined nonreal-time learning activities (asynchronous) and live virtual classrooms where students and instructor gather at the same time through several platforms such as Google Meets or Zoom meetings (synchronous). Role of students' satisfaction in defining quality of online courses Following the viewpoint that quality is the appropriateness and level of objective achievement, more and more universities are applying the student-centered approach [21]. According to Papadakis [22], to achieve more effective learning at higher levels, we need instructional strategies focused on the students, which allow them to learn by doing. The key idea of this approach is to consider students as customers. Universities must try to offer the best educational services for their learners, as stated in [21], which will make them satisfied and retain strong motivation to continue their learning at the university [23]. Moreover, in the context of Industrial Revolution 4.0, educational institutions have to gradually integrate online learning into their traditional training to meet their students' needs and societal demands. As a result, blended courses combining online and onsite learning are increased so that students can choose to learn at their suitable time and place. With an online platform, students can work with friends via an online forum, receive feedback from lecturers and peers, do tasks and submit their work with a click. The experience, therefore, tends to shape students' views on blended or online learning and affect their satisfaction level and decision to continue their study with this learning mode, [24]. The satisfaction rate strongly links students' commitment to complete their course, motivation, determination, and drop rates [25]. Previous studies have suggested several determinant factors influencing online student satisfaction, for example, instructor, technology, interactivity, course constituents, and course management as shown in Figure 2 below [26]. This view is further developed by Kuo et al. [27], showing that interaction is the main factor affecting students' satisfaction, persistence, and success in distance education. Other study by Alqurashi [25] also affirm the central role of interaction in online learning satisfaction. The limited interaction and other online learning obstacles have negatively affected students' performance and satisfaction with the course [26]. The term "interaction" can be generated as the connection or direct involvement between learners and learners, learners and instructors, learners and content, and more. According to Prohorets & Plekhanova [28], there are different types of interactions in a blended learning environment which can be included as in Table 1: student-content student-instructor student-student human interactions non-human interaction learner-content learner-learner learner-instructor learner-self learner-interface student-to-student student-to-teacher student-to-community student-to-material student-to-technology Besides interaction, there are other factors affecting students' satisfaction and desire to take upcoming online courses, such as self-regulated learning, assessment scheme, supports from others [32], [33]. Self-reflection and self-reaction activities such as writing reflection on the course forum, giving comments or feedbacks are common forms of self-regulated learning. 
Besides, other elements involved in the online learning process, such as support from peers and schools, technology skills, technical instruction and support, and course design, also play critical roles in students' satisfaction [34]. In addition, [35] revealed that students' perception of the e-learning environment and their skills affect their overall satisfaction. Research method A quantitative approach with a convenience sampling method was used for this study. Because of the lockdown, an online survey was designed using Google Forms and distributed to students via email, and the data were then analyzed using SPSS 20. The research had two main stages of data collection. At stage one, a pilot survey was sent to 90 students; after checking exploratory factor analysis (EFA) and reliability (Cronbach's Alpha), two items were removed and two items were revised. All students in the pilot test were excluded from the primary survey. At stage two, an email explaining the survey purpose was sent to students' registered email addresses to invite them to participate in the study. A survey link and a QR code were attached to the email so that students could access the questionnaire online. Questionnaire items were anchored on a 4-point Likert scale running from 'don't agree', 'somewhat don't agree', 'quite agree' to 'agree' in order to obtain specific responses and avoid a safe 'neutral' choice regarding the new learning style students had experienced. The questionnaire had two main parts: the first part asked about the demographic characteristics of respondents, while the second part was organized into seven sections: (1) interaction with lecturers, (2) interaction with peers, (3) self-regulated learning, (4) technical support, (5) skill enhancement, (6) peer support, and (7) desire to take upcoming online courses in their programs. In total, 955 students took part in the study through the Google Forms link. Regarding students' majors, 5% were from Science, 17.5% from Languages and International Studies, 12.4% from Engineering and Technology, 19.5% from Economics and Business, 24.7% from Social Sciences and Humanities, and 20.9% from Education. Data collected from the survey were analyzed with SPSS 20. Table 2 reports the summary of the survey sample, including gender, academic year, time spent on online tasks, devices used for learning, and the online learning platform used. As shown in Table 2, female respondents outnumbered male respondents, accounting for 73.7% of the sample compared with 26.3% male students. More than half of the respondents (59%) were first-year students, while 41% were in their second, third, or fourth year. Because students could attend classes anywhere and anytime as long as they had a stable internet connection, four popular devices were examined. Students used laptops (66.6%) and smartphones (27.3%) more than desktops for their online learning, and very few students used tablets. About 28% of students spent less than 10 hours on their online assignments, including reading materials, watching videos, and participating in discussion forums. Students spending 10 to 30 hours each week on online tasks made up 57.2% of the sample, and 14.8% spent more than 30 hours. Since all subjects were taught online in real time and students had to attend classes as usual, the results indicate that students spent a considerable amount of time on online tasks to fulfill course requirements in addition to their class time. 
According to the data, out of 955 respondents, the number of Zoom users was outstanding with 823 choices (86.1%), the second favorite online platform was Google classroom with 504 choices (52.7%) which nearly doubled Microsoft Teams with 286 (29.9%). UPM, an online learning platform created by a Vietnamese company, was introduced and used by 247 students accounting for 25.8%. Google hangout and Skype, which initially were not created for educational purposes, stood at the last line with 77 choices (0.8%) and 44 choices (0.46%). Results This study used Cronbach's Alpha value to assess the internal consistency of each multi-item within the scale. All calculated Alpha values were above 0.77, indicating that the scales were reliable. The Principal Component Analysis was performed to test the construct validity with the cut-off point of 0.5, and Varimax with Kaiser Normalization was used for the rotation method. The results are presented in Table 3. Having examined the overall reliability of the instrument, we gathered the items measuring the same construct into the same group, the mean score was calculated for each construct. Lecturer-interaction was computed by taking the average score of 4 items of lecturer involvement. The computation continued with self-regulated learning from 5 items, technology support (4 items), skill enhancement (3 items), peer active interaction (4 items), and peer support (4 items). The results are summarized below: -LECT1, LECT2, LECT3, LECT4 measure the same construct; hence, are grouped into LECTURER INTERACTION -SELFREG1, SELFREG2, SELFREG3, SELFREG4, SELFREG5 measure the same construct, hence, are grouped into SELF-REGULATED LEARNING -TECH5, TECH6, TECH7, TECH8 measure the same construct, hence, are grouped into TECHNICAL SUPPORT -SKILL1, SKILL2, SKILL3 measure the same construct, hence, are grouped into SKILL ENHANCEMENT -PEER1, PEEER2, PEER3, PEER4 measure the same construct of interaction with peers were grouped into PEER INTERACTION -PEER5, PEER6, PEER7, PEER8 measure the same construct of peer collaboration were grouped into PEER SUPPORT Table 4 presents the mean, standard deviations, and correlations for each pair of constructs. The correlation analysis with alpha = 0.01 as the level of significance indicated a significant correlation among factors ranging from 0.391 to 0.599, and the results are reliable. The correlations demonstrate that students' desire to take upcoming online courses correlates with lecturer interaction, self-regulated learning, technical support, peer interaction, peer support, and skill enhancement. Skill enhancement is stably correlated with lecturer interaction (r = 0.529, p < 0.01), self-regulated learning (r = 0.583, p < 0.01), and technical support (r = 0.563, p < 0.01). Also, the moderate positive correlation between the desire to take upcoming online courses and the interaction with lecturers, peers, self-regulated learning, technical support, and skill enhancement means that as these variables increase, the desire to take other online courses also increases moderately. Notes: N = 955, *p < 0.05, **p < 0.01, ***p < 0.001. Hypothesis testing. Hierarchical multiple regression analysis is used to examine the influence of controlling variables (lecturer interaction, peer interaction, peer support, skill enhancement, technical support, and self-regulated learning) on students' desire to take other online courses. Table 5 presents the summary of a four-step hierarchical regression model. 
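The analysis itself was run in SPSS 20; for readers who want to reproduce the four-block structure, the sketch below shows an equivalent hierarchical regression with the R² change per block in Python using statsmodels. The composite-score column names and the DataFrame `df` are hypothetical placeholders, since the survey data are not distributed with the paper.

```python
"""Sketch of the four-block hierarchical regression summarized in Table 5.

The original analysis used SPSS 20; this is a Python/statsmodels analogue.
Column names (DESIRE, LECT, PEER_INT, PEER_SUP, SELFREG, TECH, SKILL) are
hypothetical composite scores (item means), and `df` stands in for the
survey data, which are not available here.
"""
import pandas as pd
import statsmodels.formula.api as smf

blocks = [
    ["LECT"],                      # Model 1: lecturer interaction
    ["PEER_INT", "PEER_SUP"],      # Model 2: + peer interaction, peer support
    ["SELFREG"],                   # Model 3: + self-regulated learning
    ["TECH", "SKILL"],             # Model 4: + technical support, skill enhancement
]

def hierarchical_regression(df: pd.DataFrame, outcome: str = "DESIRE"):
    predictors, prev_r2, model = [], 0.0, None
    for step, block in enumerate(blocks, start=1):
        predictors += block
        formula = f"{outcome} ~ " + " + ".join(predictors)
        model = smf.ols(formula, data=df).fit()
        print(f"Model {step}: R2={model.rsquared:.3f} "
              f"(change {model.rsquared - prev_r2:.3f}), "
              f"F={model.fvalue:.2f}, p={model.f_pvalue:.4g}")
        prev_r2 = model.rsquared
    return model   # the final model holds the coefficients reported below

# Usage (with a DataFrame of composite scores):
# final = hierarchical_regression(df)
# print(final.summary())
```

Reading the R² change at each step is what allows the contribution of every added block of predictors to be isolated, which is the logic followed in the results that follow.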
Lecturer interaction is entered at stage one of the regressions to control students' desire to take other online courses. Peer interaction and Peer support are entered at stage two, Self-regulated learning at stage three, Technical support, and Skill enhancement at stage four. The overall model with four blocks and six variables is statistically significant and explains 29.9% of the variance in students' desire to take other online courses with F (6, 948) = 67.45, p < 0.001. The hierarchical regression reveals that at Model 1, Lecturer interaction contributes significantly to the regression model F (1,953) = 207.24, p < 0.001 and accounts for 17.9% (R 2 = 0.179) of the variation in students' desire to take other online courses. For the second model, R 2 value increases to 0.234 or 23.4% of the variance. When Peer interaction and Peer support enter model 2, they account for an extra (23.4-17.9) 5.5% of the variance, which is statistically significant with F (3,951) = 96.82, p < 0.001. In the third model, the R 2 value is added up to 0.262 or 26.2%, self-regulated learning explains for an extra 2.8% after controlling for individual variables (lecturer, peer interaction, peer support) in Model 2, and this change in R 2 is significant with F (4,950) = 84.25, p < 0.001. For the final model (Model 4), its R 2 value stands at 0.299 or 29.9% of the variance; the appearance of two variables (technical support and skill enhancement) accounts for an extra 3.7% with F (6,948), p < 0.001. When all six independent variables are included in model 4 of the regression model, Lecturer interaction has a positive and significant impact (B = 0.185, p < 0.005, β = 0.213), thus Hypothesis 1 (H1) is supported. Peer interaction is not a statistically significant predictor of students' desire to take other online courses (B = 0.067, p = 0.26, β = 0.043). As a result, Hypothesis 2 is not supported. Regarding other factors, the result shows that they have a positive and statistically significant impact on students' desire to take other online courses. Peer support (B = 0.134, p < 0.05, β = 0.97) has a positive and significant impact which supports Hypothesis 3. The impact of Self-regulated learning is statistically significant (B = 0.216, p < 0.005, β = 0.213) and supports Hypothesis 4. Technical support also shows its positive and significant impact (B = 0.134, p < 0.01, β = 0.91), which supports Hypothesis 5. Lastly, Skill enhancement (B = 0.302, p < 0.001, β = 0.220) supports Hypothesis 6 that skill enhancement has significant impact on students' desire to take other online courses. From the results, Hypotheses 1,3,4,5 and 6 are supported, and Hypothesis 2 is rejected. The most influential predictor is Skill enhancement followed by Self-regulated learning, Lecturer interaction, Peer Support, and Technical support. The factors contributing to students' desire to take other online courses can be demonstrated as in Figure 3. Discussion The current study investigated the factors that contributed to students' desire to take upcoming online courses. This study provides some significant findings. Firstly, the result supports the findings on the interaction between lecturers and students, and this interaction is the essential factor to a student's learning and satisfaction [36]. The findings strongly confirm the role of lecturers in e-learning which has a strong positive effect on students' desire to take forthcoming online courses. 
The result is confirmed by the findings of [37], who found that if online courses are based on student responses and proposals and supported by regular smart instructors' help, students may make a significant qualitative jump forward in their studies. The interaction and exchange of information between lecturer and students asynchronously and synchronously regarding learning content or social information develop and strengthen students' knowledge construction and set up and increase social relationships and motivation. Interaction and information exchange are crucial in learning [38]. This study result endorses previous research that the timely feedback of lecturers [39], well-designed and prepared material [40] are among the factors affecting students' satisfaction with the course. Lecturers should provide a course content structure, give feedback, stimulate students' motivation, assign suitable assignments, and advance their ICT skills [33]. Also, participation scores can be applied in online learning courses to boost students' interaction with lecturers and peers. Secondly, technical support and skill enhancement play essential roles in students' desire to learn more online courses. Switching from face-to-face learning to online learning requires support and effort. For students, it is their first learning experience working 100% with laptops and smartphones without physical interaction with lecturers and peers. Technical support through the online help desk and online training are vital for students in the learning process. In addition, students develop and enhance their IT skills and soft skills when doing online tasks. This study indicates that the technical support and the skills students acquired from the new learning mode have heightened students' satisfaction and decision to learn more courses. Thirdly, self-regulated learning and peer support are also among the key factors; students were on the way to finding a better way to manage their learning, such as finding other resources, self-assessment, planning study, and regulating study pace. Students value their peers' help when they have trouble working in a new learning environment. The support from peers and the ability to organize their learning will increase students' satisfaction with the course. The result suggests that peer-support groups should be established so that students can connect to seek and give support both in the onsite and online environment. Perhaps the most surprising finding of this study is the insignificant effect of the student interaction variable. The result contradicts early studies but supports the findings by Kuo et al. [27] that student-student interaction did not significantly predict students' satisfaction in online courses. This finding implies that the online learning model has prevented students from active group work and reduced interaction chances. It is suggested that lecturers have clear guidance and set a reasonable time for group work online and outside class hours to become familiar with the new interaction model. Conclusion This study has presented the analysis of a survey conducted among Vietnamese students investigating factors that influence students' decision to take upcoming online courses after their COVID-19 learning experiences. The findings of this present study indicate that skill enhancement, lecturer interaction, self-regulated learning, technical support, and peer support are the determinants of students' desire to take more online courses. 
From the results, the study suggests that the role of lecturers in e-learning is vital. Responsive feedback and interaction from lecturers can enhance students' learning process leading to students' satisfaction. Besides, peer support should be encouraged and facilitated so that students can take full advantage of this resource. To make the learning process meaningful and satisfying, technical support, skill enhancement, and self-regulated learning should receive good attention to enhance learning. The research results provide insights for leaders, researchers, and educational policymakers about an effective teaching and learning mode to use in an emergency and in a long-term plan to live with the COVID-19 and exploit technological advances. It also helps determine how to leverage motivation and satisfaction to optimize learning based on experiences gained from the pandemic and provides empirical evidence for studies related to satisfaction theories regarding online learning. The study results suggest several further studies for future work. Similar studies should be conducted in various school settings to investigate the influence of culture, socio-economic status, geographical area. The effects of computer self-efficacy, technology anxiety should be examined to prepare teachers and students in their teaching and learning. Authors Nga Thuy Nguyen is an Associate Professor and senior lecturer at the Faculty of Quality Management, VNU University of Education, Hanoi, Vietnam. She gained her Ph.D. from The University of Queensland, Australia. Her research interests include language development, educational assessment, quality assurance, and educational technology. Huong Thi Thu Tran is a researcher, lecturer, teacher trainer working fulltime at the Department of Accreditation and Quality management, the Faculty of Quality Management, VNU University of Education, Hanoi, Vietnam. Her research interests are Measurement and Assessment in Education, Quality Management and Education Accreditation.
6,216
2022-01-18T00:00:00.000
[ "Education", "Computer Science" ]
AXIAL FLUX PROFILE IN THE ADVANCED TEST REACTOR This work demonstrates an approach to determine probability of perturbation of the axial profile of the thermal neutron flux in the Advanced Test Reactor. The axial flux profile is expected to follow a theoretical cosine shape, due to the minimal use of vertically-withdrawn shims. Reactivity is normally controlled by rotation of Outer Shim Control Cylinders, uniformly affecting neutron flux at all axial locations. The Advanced Test Reactor routinely accepts for irradiation experiments of a variety of designs. Among the analyses required by the safety basis approved by the United States Department of Energy is the characterization of a new experiment’s potential for perturbing the axial flux, which could exacerbate power peaking in the driver fuel. However, this perturbation can be more or less severe in different locations within the fuel. Therefore, the best characterization of axial flux perturbation requires knowledge of baseline axial flux. Such information is obtained by measuring decay in activated uranium flux wires irradiated at known positions in cooling channels in plate-type fuel elements. Due to variability in measured axial flux, it is not usually clear whether a given anomalous measurement is caused by an actual perturbation. Assuming normality in random measurement errors, the probability of an actual perturbation is quantified. INTRODUCTION The ATR core (see Figure 1) consists of forty (40) plate-type fuel elements, of aluminum-clad highlyenriched uranium. Pressurized water is both the moderator and the primary coolant and flows downward through the fuel element channels. The forty elements are organized around nine (9) flux traps, and power is shifted radially and azimuthally by means of control elements such as outer shim control cylinders, in order to obtain desired powers for irradiation of flux trap experiments. The purpose of localized power control is to simultaneously irradiate multiple experiments in the various flux traps, at programmable power levels. Most of the flux traps include the additional advantage of an in-pile tube, isolated from the reactor primary coolant system, which allows a given experiment to be irradiated at temperature, pressure, and chemistry conditions selected by the sponsor. For convenience, fuel elements are grouped into five lobes, named for the Center (C) and for the cardinal direction of each corner (Northwest (NW), Northeast (NE), Southwest (SW), and Southeast (SE)). The C lobe is in some contexts divided into fourths, allowing the ATR core to be grouped into four quadrants (NW, NE, SW, and SE). Each lobe contains a flux trap. The four other flux traps are known as outer flux traps, for being outside the closed fuel serpentine, and are designated by cardinal directions (North (N), West (W), East (E), and South (S)). Power is said to be produced by each lobe or quadrant and is indicated to operators. Outer flux trap powers are also indicated, as the average of the C lobe and the two adjacent corner lobes. As indicated in Figure 1, Outer Shim Control Cylinders are withdrawn by rotating such that the hafnium plate is moved closer to or further from the fuel. In addition to providing localized control for each corner lobe, the axial shape of the neutron flux is largely unperturbed. Prior to irradiation in ATR, experiments are measured in the companion ATR Critical facility (ATRC). ATRC is a pool-type reactor of the same size as ATR. 
Reactivity impacts of various experiments can be determined, usually for validation of models. ATRC has been used extensively at Idaho National Lab and has also been used for published benchmarks [1]. The axial flux profile can also be measured, as described below. By taking these measurements in ATRC, it can be shown what effect a given experiment will have on ATR. THEORY The ATR or ATRC core is contained within a right circular cylinder. Due to leakage, the axial flux will take a basic cosine shape. Any measurement of flux at a given axial height will have a normal probability distribution, centered about the true value. Unperturbed Axial Profile The ATR safety basis requires a certain margin to critical heat flux, which is verified by calculating the amount of subcooling at each axial elevation in the hottest coolant channel. When the peak fission rate is shifted from the core centerline or when heat flux is too high at a given location, assumptions regarding the coolant flow and heat transfer regimes can be challenged. Furthermore, verifying an unperturbed axial flux profile ensures that fuel elements will deplete evenly, avoiding the formation of localized pockets of high fission density. For each new experiment destined for ATR, representative measurements can be made in ATRC to characterize the axial flux profile in adjacent fuel elements. The unperturbed fission profile in ATR or ATRC is described by Equation 1, shown graphically in Figure 2. Fa in Equation 1 is the axial peaking factor: the ratio of the flux (or the fission rate) at elevation z inches to the channel average. The height of the active region of ATR fuel is 48". Probability Distribution of Measurements Measurements of fission rate are assumed to be random variables, distributed normally about the true value. This use of the normal distribution dates to the distribution's initial formulation by Gauss [3]. Figure 3 shows a theoretical probability distribution function and cumulative distribution function. If there is sufficient probability that the true value of the heat generation rate is higher than the value analyzed for the amount of subcooling, additional thermal-hydraulic analyses are indicated. This must be taken with the perspective that if the results of the measurement exactly match the unperturbed profile, there is a 50% probability that the true fission rate at a given height is above the measurement. With some knowledge of the standard error in the measurement process, the normal distribution can be used to find the probability that the true fission rate is above an arbitrary measurement. To decrease this probability, additional measurements must be taken at each height and show lower fission rates. Propagation of Uncertainty Uncertainties in the measured fission rate at various locations all impact the computation of the axial peaking factor Fa. For each parameter computed in this study, the standard error is assumed to be a combination of independent, random errors in the input parameters. Therefore, the square of the combined standard error is assumed to be the sum of the squared standard errors, scaled by their effect on the computed parameter, according to Equation 2. Comparison to Unperturbed Profile The ATR safety basis requires that the axial peaking factor be determined for each of the 5 axial nodes at which thermal safety margins have been characterized. The cumulative sum of power fractions must also be determined for each of these nodes.
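Equations 1 and 2 are not reproduced in this excerpt, so the following Python sketch is hedged: it assumes a conventional chopped-cosine shape for Fa(z) (the extrapolated height H_EXT is an illustrative guess, not a design value) and standard root-sum-square propagation of independent standard errors, together with the normal-distribution exceedance probability discussed above. It is a conceptual sketch, not the licensed ATR analysis.

```python
# Hedged sketch, not the published analysis: (i) an assumed chopped-cosine
# form for the unperturbed axial peaking factor Fa(z), (ii) root-sum-square
# propagation of independent standard errors (in the spirit of Equation 2),
# and (iii) exceedance probabilities based on the normal distribution.
import numpy as np
from scipy.stats import norm

H = 48.0        # active fuel height, inches (from the text)
H_EXT = 54.0    # assumed extrapolated height; illustrative only

def fa(z):
    """Axial peaking factor at elevation z (inches from core midplane)."""
    z_grid = np.linspace(-H / 2, H / 2, 1001)
    avg = np.mean(np.cos(np.pi * z_grid / H_EXT))
    return np.cos(np.pi * z / H_EXT) / avg   # normalized so channel average is 1

def propagated_sigma(sensitivities, sigmas):
    """Root-sum-square: sigma_f^2 = sum((df/dx_i * sigma_i)^2)."""
    s = np.asarray(sensitivities)
    e = np.asarray(sigmas)
    return np.sqrt(np.sum((s * e) ** 2))

def prob_true_above(analyzed_value, measured_value, sigma):
    """Probability that the true fission rate exceeds the analyzed value,
    given a measurement with normally distributed error."""
    return 1.0 - norm.cdf(analyzed_value, loc=measured_value, scale=sigma)

# Node-wise quantities mentioned in the text: peaking factor and cumulative
# power fraction over 5 axial nodes (node boundaries here are illustrative).
nodes = np.linspace(-H / 2, H / 2, 6)
z_fine = np.linspace(-H / 2, H / 2, 1001)
profile = fa(z_fine)
node_fa = [profile[(z_fine >= lo) & (z_fine < hi)].max()
           for lo, hi in zip(nodes[:-1], nodes[1:])]
node_power = np.array([profile[(z_fine >= lo) & (z_fine < hi)].sum()
                       for lo, hi in zip(nodes[:-1], nodes[1:])])
cumulative_fraction = np.cumsum(node_power / node_power.sum())

# Probability that a profile exactly matching the unperturbed case is
# perturbed non-conservatively in at least one of the 10 node-wise checks:
p_any = 1.0 - 0.5 ** 10   # ~0.999
```

With analyzed_value equal to the measured value, prob_true_above returns 0.5, matching the 50% observation above.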
Both calculated parameters are required to be conservatively lower than established criteria. Assuming that measurements are independent of each other, this is essentially a 10-stage experiment, because a measured flux profile may be shown to fail either of the two criteria at any one or more nodes. If a measurement of a profile exactly matched the unperturbed case, the probability that it is actually perturbed in the non-conservative direction is the complement of the probability of not being perturbed in all 10 experiment stages: 1 − (1/2)^10 ≈ 99.9%. METHODS The axial flux profile in ATR and ATRC is characterized routinely, as follows. Monte Carlo Simulation Nearly all new experiments are analyzed using some Monte Carlo method prior to irradiation in ATR. In general, this modeling work takes place prior to measurement in ATRC; therefore, ATRC measurements are used to validate models. Axial Profile Measurement Flux wands are loaded into ATRC fuel elements, with Uranium-235 (235U)-bearing flux wires distributed at each 2" of height [4]. Use of 235U flux wires is described in [5]. After 20 minutes of irradiation at about 600 watts (W) total core power, the flux wands are removed, and decays in the flux wires are counted. Obtaining this data by irradiating flux wires is performed in ATRC approximately four or five times per calendar year, depending on experiment needs. For ATR, this process is done only upon a major core reconfiguration; therefore, it is done as a set of three or so measurements approximately every ten years. A sample of such measurements in ATRC is presented in Section 4 (RESULTS). The present work discusses only the probability that the axial fission profile is perturbed in a non-conservative way, not addressing whether a perturbation is beyond the safety basis. Uncertainty in Flux Wire Counting Uncertainty in flux-wire counting results is computed as the standard error of a set of counting data of a single core configuration. The most commonly used core loading in ATRC is core loading 12-13, which was first established in 2012. Power distribution has been measured in this core loading seven times, once each year 2013-2019 for annual calibration of nuclear instruments. Thus a set of seven measurements exists for each flux wire position. Three hundred forty positions (20 elements measured × 17 positions at core midplane) were measured in each of these flux runs, resulting in 340 standard deviation calculations. These standard deviations are independent estimates of the true standard error of the measurement method at each location. RESULTS The following results were obtained regarding uncertainty in the measurement process and its implications for completed axial measurements. Uncertainty in Flux Wire Counting Each power distribution measurement requires a flux run, and counting of flux wires reveals that the total core power during these flux runs was near the intended 600 W but varied by as much as 25%. The standard deviations of the measurements of each of the 340 flux wires are illustrated in Figure 4. Each standard deviation shown in Figure 4 is taken as an estimate of the true uncertainty of the measurement process. The set of 340 measurements is characterized by an average (9.808×10^7) and a standard deviation (3.677×10^7). The value 1.586×10^8 is 1.645 standard deviations above the average and is therefore the value below which 95% of randomly varying sample standard deviations will be found. However, any assumed standard error will affect all propagated uncertainties approximately equally.
Therefore, the qualitative conclusion of the present work is not impacted by uncertainty in this standard error. Implication for Analyzed Axial Fission Profiles Two hundred forty-eight axial profiles were analyzed, a few of which are shown in Figure 5 with comparison to the unperturbed profile. The standard error from Section 4.1 is propagated from each measured axial point, as described in Section 2.3, to determine the standard deviation used for comparison with the normal distribution shown in Figure 3. Figure 6. Probability of a True Perturbation in Each Evaluated Axial Profile. If a given profile measurement exactly matched the unperturbed profile, it would indicate a 0% increase in relative probability that the actual fission profile is perturbed high (non-conservatively). It is seen in Figure 6 that some of the measured profiles are actually less probable to be perturbed than is one exactly matching the unperturbed profile. This situation is possible whenever a given measurement shows a large number of data points with fission rates below the unperturbed profile. Monte Carlo Simulations The experiments that gave rise to the ATRC axial fission profiles shown as perturbed in Figure 6 were modeled prior to insertion in ATRC. Results from ATRC confirmed a priori calculations, except in a few cases where the measurement data is suspected of error. Figure 7 shows an example of an erroneous measurement. Profiles numbered 1 through 4 and 6 in Figure 6 are from an experiment previously reported by Nielsen [2] and are examples where measurement validated Monte Carlo calculations. CONCLUSIONS Using basic principles of probability theory, ATRC measurements of flux profiles can be rigorously compared against the ATR safety basis, and the probability of perturbation can be meaningfully quantified. Outliers in probability are easily observed and can be quantitatively compared.
2,762.2
2021-01-01T00:00:00.000
[ "Physics" ]
Towards a Model-Driven Datacube Analytics Language —Datacubes form an accepted cornerstone for analysis (and visualization) ready spatio-temporal data offerings. Geo datacubes have long been standardized under the umbrella concept of coverages, and such data structures are well understood in concept and practice. This, however, is not matched by a similar understanding of coverage analytics. We present a formal model for datacube analytics which is based on Linear Algebra, incorporates space and time semantics, and allows a wide range of common datacube operations, up to, say, the Discrete Fourier Transform. For convenience, the formalism is based on a language allowing expressions of any complexity. The specification is currently in the advanced adoption process of ISO for becoming the future 19123-3 standard. Datacubes introduce function-rich services on spatiotemporally aligned, homogenized raster data assets typically coming as sensor, image (timeseries), simulation, and statistics data. Actionable datacubes support analytics through the paradigm of "any query, any time, on any size" [6] originally coined by research on Array Databases [7]. Today datacubes are an accepted cornerstone for providing data ready for analysis, fusion, and visualization. In particular this is due to the homogenization of the zillions of scenes and other data into spatio-temporal units where each one has its common coordinate system, pixel type, etc. Thanks to the flexibility of the OGC / ISO / EU INSPIRE coverage standards [8] there is still a sufficient degree of freedom to accommodate missing data areas in datacubes, regular as well as irregular grids, and a wide variety of data formats while retaining interoperability to allow, for example, simple fusion of datacubes with different dimensions and coordinate systems. Given the ever-increasing importance of coverage services going far beyond mere subsetting, there is a strong need to standardize not only data representation and exchange, but also the capabilities of services in an interoperable manner, not least for the emerging vision of interoperable automated service mashups. Datacube query languages establish actionable datacubes enabling users to ask "any query, any time" without programming, and independent from internal storage and external ingest/delivery data formats. Following this philosophy, 19123-3 is independent from any concrete coverage encoding format (such as GeoTIFF, NetCDF, etc.) and concrete processing language (such as python, R, SQL/MDA, etc.). Likewise, it is not a service, but rather provides the basis for service concretizations such as those existing in OGC: WCS GET/KVP, WCS POST/XML, WCS SOAP, OAPI-Coverages, and OAPI-Processing. In the future, ISO TC211 might establish concrete coverage services in a forthcoming 19123-4. However, due to the highly dynamic nature of the field it is not easy to understand terms, trends, and technologies, and how they relate or differ. It helps to look at standards where, centered around the notion of a "coverage", the main geo standardization bodies of OGC, ISO, and INSPIRE have established a mature, agreed, modular data and service model allowing implementations to differ in the extent of support while still remaining interoperable. For example, in the Web Coverage Service (WCS) suite, WCS-Core just offers subsetting and format encoding while on the high end the Web Coverage Processing Service (WCPS) offers a datacube analytics language.
In this paper, we report about our work on establishing an ISO standard for the foundations of datacube processing, based on a language for extraction, filtering, processing, analytics, and fusion of multi-dimensional geospatial datacubes representing, for example, spatio-temporal sensor, image, simulation, or statistics datacubes.Expressions in this language accept any number of input datacubes (together with further common inputs like numbers) to generate any number of output datacubes or scalar results. The data model used for geo datacubes is given by the ISO coverage model as defined in the twin draft standard 19123-1 [5].A coverage describes mathematical fields through several -practically induced -techniques, specifically: regular and irregular grids, point clouds, and general meshes. The language is functionally defined and free of any side effects.It has a formal semantics foundation and is minimal: only two constructs establish all coverage processing: A coverage constructor to build (or derive) a coverage and an aggregation operator (called condenser) deriving summary information.Further convenience functions are derived from those. The language does not define a service API -it is independent from any particular request and response encoding, as no concrete request/response protocol is assumed.Hence, this standard rather acts as the foundation for defining service standards functionality.Currently such concrete service definitions exist with OGC Web Coverage Service (WCS) [4] via GET/KVP, POST/XML, SOAP protocol bindings, as well as with emerging specifications like OAPI-Coverages [13] and OAPI-Processes [12]. Supported by European Commission H2020 CENTURION. The datacube analytics specification has been finalized and submitted for voting to the national delegations under its identifier 19123-3.This work is part of ISO plans on further populating the coverage ecosystem.Fig. 1 shows possible evolution paths and the position of 19123-3 in it.In its current version CPF supports grid coverages with index, regular, and irregular axes.In the future it is foreseen that the standard gets extended so as to address all CF types.The remainder of this contribution is organised as follows.In the next section we provide an overview of the datacube part of the coverage model as a background.After that, the CPF processing language is introduced in Section 3. A comparison against the state of the art is provided in Section 4. Section 5 gives a summary and an outlook. II. COVERAGES AND DATACUBES For the reader's convenience this section gives a brief informal recap of the CF model, under adoption by ISO as 19123-1.A detailed description is provided in [5]. Following the mathematical notion of a function that maps elements of a domain (here: spatio-temporal coordinates) to a range (here: "pixel", "voxel", etc. values), a coverage consists of: • an identifier which uniquely identifies a coverage in some context (here: the context of an expression); • a domain set of coordinate points (expressed in a common Coordinate Reference System, CRS): "where in the multi-dimensional space can I find values?" • a probing function which answers for each coverage coordinate in the domain set ("direct position"): "what is the value here?" • a range type: "what do those values mean?" • optional metadata: "what else should I know about these data?" 
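To make these five constituents tangible, here is a minimal, hedged Python sketch of a grid-coverage object. The class and attribute names are illustrative assumptions and are not the 19123-1 UML interfaces referred to below.

```python
# Hedged illustration only: a minimal Python model of the five coverage
# constituents listed above (identifier, domain set, probing function,
# range type, metadata) for a regular grid coverage. Names are assumptions
# for illustration, not the 19123-1 UML.
from dataclasses import dataclass
from typing import Dict, Tuple, Optional

@dataclass
class GridCoverage:
    identifier: str
    crs: str                                  # e.g. an EPSG identifier, treated as opaque
    axes: Tuple[str, ...]                     # axis names, e.g. ("Lat", "Long")
    lower: Tuple[float, ...]                  # lower bound per axis
    upper: Tuple[float, ...]                  # upper bound per axis
    resolution: Tuple[float, ...]             # spacing per (regular) axis
    range_type: Dict[str, str]                # range component name -> data type
    range_set: Dict[Tuple[float, ...], dict]  # direct position -> record of values
    metadata: Optional[str] = None            # opaque metadata string

    def probe(self, position: Tuple[float, ...]) -> dict:
        """Probing function: 'what is the value here?' for a direct position."""
        if position not in self.range_set:
            raise KeyError(f"{position} is not a direct position of {self.identifier}")
        return self.range_set[position]
```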
The coverage concept encompasses regular and irregular grids, point clouds, and meshes. In our context, we only consider multi-dimensional grid coverages, which are used to represent datacubes. CF supports a very general grid concept where any kind of regular and irregular grid is supported; Fig. 2 shows some examples. In an interface-oriented UML specification such a coverage can, on an abstract level, be described as in Fig. 3. III. A LANGUAGE FOR COVERAGE ANALYTICS Being on the same conceptual abstraction level as CF, CPF likewise is not at implementation level (in particular, it does not define a concrete service), but rather establishes coverage processing concepts that may serve to conceptualize concretizations, such as the OGC Web Coverage Service (WCS) API. Actually, core concepts draw on the OGC Web Coverage Processing Service (WCPS) language; however, CPF has been stripped of several OGC legacy elements and lifted to the abstract CF data model. In CPF, the following CF axis types are supported: • A Cartesian ("index") axis just requires lower and upper bounds (which are of type integer). • A regular axis which can be described by lower and upper bounds together with a constant distance, the resolution. • An irregular axis which has individual distances, described by a sequence of coordinates. The coverage domain set with its axes has a single native CRS which may allow georeferencing. Additionally, the underlying grid structure is defined through a Cartesian grid CRS. Both CRSs have the same dimension, i.e., number of axes. In CPF, CRSs are addressed by name in expressions. Neither CF nor CPF makes any assumptions about the nature of identifying CRSs; both rather treat them as opaque identifiers. These concepts are formalized in CPF through so-called probing functions which extract information from an otherwise abstract, encapsulated object following the theory of Abstract Data Types [1]. Table I summarizes representative probing functions. A. Processing Expressions The CPF coverage processing language defines expressions on coverages which evaluate to ordered lists of either coverages or scalars (whereby "scalar" here is used as a summary term for all data structures that are not coverages). In the sequel, the terms processing expression and query are used interchangeably. A CPF coverage processing expression consists of the CPF primitives plus nesting capabilities, altogether forming an expression language which is independent from any particular encoding and service protocol. The semantics of a CPF expression is defined recursively by indicating, for all admissible expressions, the semantics which is given by the probing function output when applied to a coverage-valued expression. The basic shape of a CPF query is a for clause binding variables to lists of coverages, an optional let clause, an optional where clause, and a return clause: for v1 in L1, v2 in L2, ... [ let ... ] [ where P ] return E. The Li in the for clause are lists of coverages to which the corresponding variables vi are bound in sequence. This establishes an iteration over these coverages. Having several variables establishes a nested loop where any number of coverages can be combined for fusion. The optional let clause is just for convenience: it allows abbreviation of sub-expressions. In the return clause, coverage expression E performs the analytics. These expressions may contain occurrences of the variables defined in the for and let clauses. If the result is scalar it will be returned as ASCII; coverage-valued results need to be encoded in some suitable data format.
The optional where clause performs filtering: only those coverages where the filter predicate P evaluates to true are forwarded for processing the return clause instantiation. For example, assume availability of coverages A, B, and C.Then, the following CPF query for $c in ( A, B, C ) return encode( $c, "image/tiff" ) will produce a result list containing three TIFF-encoded coverages.In the next example, assume availability of satellite images A, B, and C and a coverage M acting as a mask (i.e., with range values of 0 and 1 and the same extent as A, B, and C).Then, masking each satellite image can be performed with this query: The operations available for building filter and processing expressions are based on only two primitives, a coverage constructor and an aggregation operation, called condenser. B. Coverage Constructor A coverage constructor creates a d-dimensional grid coverage for some d≥1 by defining the coverage's domain set, range type, range set, and metadata through expressions.This allows deriving entirely new shapes, dimensions, and values. The coverage domain set is built from a CRS defining the multi-dimensional axes and the meaning of coordinates, including units of measure; indicating the coordinates of the direct positions, i.e., the points where values sit.Axis names can be chosen according to the rules of CF; it is recommended to keep native and grid CRS axis names disjoint. A range type expression optionally creates the coverage range type.In the scope of the embedding CPF condensers this expression defines the range component names as known (immutable) variables.Values derived for some such range component will automatically be cast to the target type of that range component. A range set expression creates the coverage range set.A corresponding scalar expression is evaluated at every direct position of the coverage's domain set. An optional metadata expression creates the coverage metadata component.As such metadata are not interpreted by the coverage they are represented as a string which may contain any character, depending on the character set supported. Syntactically, a coverage is built as with the following constituents.The id parameter is the new name of the coverage.In a concrete service, the name may be required to not exist in the service data pool yet.On this level of abstraction, however, no such requirement is in order. 
The D parameter defines the domain set, through its constituents CRS, each axis with its type -regular or irregular -and its constituents such as lower and upper bound, resolution (if regular), etc.For example, the following clause defines a 2-D grid with axes Lat and Long.Both axes are regular with the indicated extent and resolution.The common CRS defining them is WGS84 which has the EPSG code 4326.Further, interpolation along both axes is set to linear: The range set definition provides an expression which is evaluated for each cell ("direct position" in CF terminology) for determining the value of each cell in the datacube.An example for such an expression is range set (integer) $c / 2 Every cell value of the output coverage is given by taking the corresponding value of the $c coverage, dividing it by 2, and casting the result to an integer value.This implicit iteration over all elements defined in the domain set, i.e., its direct positions, allows for operations that sometimes are called "embarrassingly parallel", or local operations in Tomlin's Map Algebra [14].However, the mechanics of the constructor provides not only the iteration over all the direct positions, but also the coordinate iteration variables, available through the axis names.In the following example the result value for each direct position is determined from subtracting two adjacent time slices in coverage $c.In a 3-D x/y/t timeseries datacube this would effectively determine change: Note that the axis name serves to identify the axis addressed and also the variable contents.The syntactic position guarantees differentiating both positions.Beyond local operations this allows expressing also Tomlin's zonal, focal, and global operations. The metadata element M, finally, allows associating a metadata string.As the syntax and semantics of the metadata remains opaque and unknown in this framework there are no particular rules on this string -it might be XML, JSON, or any other structure defined by some concrete standard based on CPF.In CPF, this might simply be written as metadata "any metadata contents" In a fully fledged coverage constructor all these clauses are present.However, in particular where other coverages are referenced some details often can be inferred -for example, if for some given datacube all values simply are halved then obviously the domain set of the derived cube is that of the original cube.In many cases this yields very compact expressions, such as coverage LogOfCube range set log( $c ) This gives rise to induced expressions: For cell-wise operations the result coverage is trivially defined through the input coverage constituents.We simply write the range set expression and define its semantics through the corresponding coverage constructor.All common arithmetic, comparison, trigonometric, and exponential become immediately available this way, and additionally "if" statements, records access, etc. become defined naturally. 
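As a side note, the mechanics of these induced, cell-wise operations, together with the condenser, can be mimicked in a few lines of ordinary array code. The following Python/NumPy toy model is purely illustrative and is not part of the standard.

```python
# Minimal toy sketch (assumption: NumPy arrays stand in for grid coverages;
# this is not part of 19123-3) showing the semantics of induced cell-wise
# operations and of a simple condenser over a shared domain set.
import numpy as np

def induced(op, *coverages):
    """Apply a scalar operation cell-wise over coverages with identical domain sets."""
    shapes = {c.shape for c in coverages}
    if len(shapes) != 1:
        raise ValueError("induced operations require matching domain sets")
    return op(*coverages)

def condense(op, coverage):
    """Collapse a coverage into a scalar with a commutative, associative operation."""
    reducers = {"+": np.sum, "*": np.prod, "max": np.max, "min": np.min}
    return reducers[op](coverage)

# Example: halving all values (a local, 'embarrassingly parallel' operation)
# and summing the result, mirroring the 'range set (integer) $c / 2' example.
c = np.arange(12, dtype=float).reshape(3, 4)
halved = induced(lambda x: x / 2, c)
total = condense("+", halved)
```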
We illustrate these unary, binary, and n-ary induced operations by a binary addition of two coverages $c and $d, which both need to agree in their domain set (say, axes x and y) and need to have compatible range types. Conversely, by manipulating the domain set while retaining the original coverage values we can define extraction of sub-coverages. Following the WCS standard [4], we differentiate subsetting into trimming (which reduces the domain footprint, but keeps the dimension) and slicing (which reduces the dimension). Then, subsetting can be expressed by taking the range values of the input coverage at the reduced domain set of coordinates. For some coverage $c with unknown dimension, but with an axis date, a time slice at a particular date can be written as a slice on that axis. In practice, combinations of constructor and condenser are common. C. Coverage Condenser The second fundamental operator is the condenser. It collapses a coverage into a scalar. In principle, any binary operation forming a monoid applies; CPF defines addition, multiplication, max, min, and, and or. D. Further Functions In addition, common convenience functions relevant for geo applications are available in the language, such as scaling and reprojection. The encode function specifies encoding of a coverage-valued query result by means of a data format and possible extra encoding parameters. The decode function, conversely, evaluates a byte stream passed as parameter to a coverage by decoding the byte stream. This byte stream must represent a coverage encoding following CIS 1.1 [8] and its coverage encoding profiles. A. Standardization Geo Web services for raster data started with 2D map rendering with the OGC Web Map Service (WMS). This delivers images suitable for human interpretation, but not data results that may be perused, e.g., by a GIS analysis tool. Further, no processing is supported, just selecting predefined coloring styles. The emerging OGC Environmental Data Retrieval (EDR) standard [9] defines some fixed functionality, in the spirit of (but not to the extent of) OGC WCS. There is only static functionality, but no flexibility for composing requests of arbitrary complexity as in CPF. OGC WCS, conversely, is a data-oriented service, however with requests of fixed, static functionality. One extension, WCPS, offers a datacube analytics language based on the concrete coverage data model CIS; CPF is derived from WCPS and reshaped to be the processing counterpart of the abstract CF data model of ISO 19123-1. Table II. Condenser shorthands. OGC Web Processing Service (WPS) defines a Web API for remote function invocation, i.e., Remote Procedure Calls (RPC) [11]. This principle has existed with C since the 1980s and later on with SOAP. Any process to be invoked in the server must be defined by the server administrator. The invocation syntax in WPS is described through XML documents; the actual code executed remains opaque to the invoker. Hence, WPS realizes syntactic interoperability: the invocation syntax (function name, parameter number and types) is defined whereas the execution semantics is not. CPF, on the other hand, establishes semantic interoperability: clients and services based on CPF share the same understanding of the filtering and processing. B.
Technology Image processing has a strong history. After programming languages were used natively, libraries emerged encapsulating advanced imaging functionality. With the recent proliferation of Python, libraries in this language have become popular, such as xarray for n-D arrays. These are usually limited to main-memory processing and are not directly usable for Web services. Likewise, they require concrete programming and, additionally, do not support space and time semantics directly. In several languages specific support has been added for built-in array handling, from APL over Matlab to R. CPF is suitable for describing the datacube-related parts and defining interoperability, up to possibly automatic translation across languages and services. V. CONCLUSION We presented a language for expressing geo datacube operations, specifically tailored in its operations to the ISO abstract grid coverage model. This allows manipulating datacubes of any dimension and with space, time, and other axes in a uniform manner, including combination of heterogeneous objects for data fusion. The first innovation is that, to the best of our knowledge, it is currently the only formalized processing model that strictly relies on the coverage standards. Further, this approach is novel as it abstracts away from the usual procedural APIs, and rather offers a high-level, declarative language allowing open-ended complexity in the requests while focusing on the "what" rather than on the "how". Similar languages are known for vector data, such as SQL Simple Features, so our proposal can be seen as closing a gap, thereby making datacubes first-class citizens in the conceptual framework world of geographic data. The syntax is tentatively shaped along the XQuery language: the vision is to integrate data and metadata analytics, and many of today's metadata are in XML. Even when changing to JSON, or any other structured metadata description model, XQuery still works. Given this generality of XQuery we have shaped the CPF syntax to prepare for an integration which ultimately should overcome the data / metadata divide. At the time of this writing the specification has been sent out to the voting delegations of the participating nations for the Draft International Standard (DIS) ballot. Should it be accepted, only editorial changes will be possible thereafter. An implementation of CPF is possible in principle, in a slightly different syntax. One mapping to a concrete standard is exemplified in the 19123-3 specification: the OGC WCS Core and some extensions are described through the CPF language, demonstrating how it can be used to unambiguously describe functionality. Our hope is that the concepts of this language will help to better communicate algorithms and ideas. Different coverage processing standards might define their semantics through 19123-3, making them comparable, possibly even enabling cross-translation between them. Further, the systematics of coverage processing might guide software implementers in the design of their functionality, using whatever interface style, be it function libraries, different languages, etc. Future work includes extending the datacube analytics expressiveness with AI methods, based on the common basis of tensor algebra. Another research direction is to extend support for further coverage types, specifically point clouds and meshes.
Fig. 1. Possible evolution of coverage standards in ISO. For the purpose of this paper we refer to the 19123-1 draft specification as Coverage Fundamentals (CF) and the 19123-3 draft text as Coverage Processing Fundamentals (CPF). Table I. Selection of coverage probing functions; columns: coverage characteristic, probing function for some coverage C as per CF, comment.
4,757.2
2021-12-15T00:00:00.000
[ "Computer Science", "Geography" ]
NT-seq: a chemical-based sequencing method for genomic methylome profiling DNA methylation plays vital roles in both prokaryotes and eukaryotes. There are three forms of DNA methylation in prokaryotes: N6-methyladenine (6mA), N4-methylcytosine (4mC), and 5-methylcytosine (5mC). Although many sequencing methods have been developed to sequence specific types of methylation, few technologies can be used for efficiently mapping multiple types of methylation. Here, we present NT-seq for mapping all three types of methylation simultaneously. NT-seq reliably detects all known methylation motifs in two bacterial genomes and can be used for identifying de novo methylation motifs. NT-seq provides a simple and efficient solution for detecting multiple types of DNA methylation. Background Although epigenetic regulation has been reported in all domains of life, most studies focused on eukaryotes. However, mounting evidence for the crucial function of epigenetic regulatory pathways in prokaryotes has been reported. Three forms of DNA methylation, N 6 -methyladenine (6mA), N 4 -methylcytosine (4mC), and 5-methylcytosine (5mC), are prevalent and play essential roles in viral defense [1], mismatch repair [2], gene regulation [3,4], and pathogenesis [4,5] in prokaryotes. DNA methylation occurs in a motif-dependent manner in bacteria, and methylation motifs vary among different bacterial strains [6]. While emerging evidence has shown the functional role of bacterial methylation in transcriptional regulation, how DNA methylation and methyltransferases orchestrate the gene expression to determine the phenotype is still elusive [7]. One of the major challenges is the lack of efficient and straightforward methods for comprehensive genomic methylome profiling. Most of the next-generation genomic sequencing (NGS) methods for DNA methylation mapping have been developed for 5mC, such as bisulfite sequencing [8], but 6mA is the most prevalent form of methylation in prokaryotes [9]. Multiple antibody-based or enzyme-based approaches have been developed [10][11][12], yet these methods are either complicated [10], low resolution [11], or restricted to particular enzyme-cutting motifs [12]. While the 3rd-generation single-molecule real-time sequencing (SMRTseq) has been utilized to detect DNA methylation motifs in bacterial genomes [13,14], the current SMRT-seq lacks open-source bioinformatic tools or independent methods that could cross-validate the results. Moreover, although SMRT-seq has been widely used to detect 6mA and 4mC in bacterial genomes, recent results from 4mC-TAB-seq [15] and mass spectrometry [16] indicated that SMRT-seq might overestimate 4mC in bacterial genomes. The Oxford Nanopore sequencing has also shown the ability to detect multiple types of DNA methylation in bacteria [17,18], but the signal of Nanopore sequencing in detecting bacterial DNA methylation, especially the 6mA, is still noisy, and the machine-learning analysis methods need more training datasets [17]. Furthermore, since SMRT-seq and Nanopore sequencing can only detect methylation from unamplified genomic DNA, the required amount of input DNA is the limitation for applying single-molecule methods on restricted clinical samples. Therefore, to help fully understand microbial epigenomics, we must develop an efficient chemical-based NGS strategy to detect all three types of DNA methylation (whole methylome profiling). 
DNA base deamination is a well-known chemical strategy to detect DNA methylation; for example, bisulfite sequencing, in which unmethylated cytosine is efficiently deaminated and converted to thymine during PCR amplification, while the modified cytosines such as 5mC and 5hmC are not converted [8]. For adenine methylation, although it was reported more than 60 years ago [19,20], such a condition that could be applied for sequencing was not clarified until recently. Deamination induced by nitrous acid has been shown only to deaminate unmethylated adenine but not 6mA, which was utilized to develop nitrite sequencing [21] and NOseq [22] for DNA 6mA or RNA m6A detection in oligos or targeted sequencing settings. As far as we know, nitrite treatment has not been applied for genomic methylome profiling because generating the genomic sequencing library in such conditions and following bioinformatic analysis are still challenging [23]. Based on the previous studies, we developed NT-seq (nitrite treatment followed by next-generation sequencing), a sequencing method for detecting multiple types of DNA methylation genome-wide. We demonstrate that NT-seq can detect not only 6mA but also 4mC and 5mC. NT-seq can identify methylation motifs of all three types of methylation in Escherichia coli and Helicobacter pylori genomes. We also show that NT-seq can be used for methylation motif de novo discovery in a microbial community standard sample. Thus, NT-seq provides an efficient, cost-effective, and high-resolution method for methylation motif detection in both single bacterial species and metagenomic settings. Of particular note, 6mA has also been reported in lower eukaryotes and mammals [24][25][26][27]. The 6mA methylated DNA immunoprecipitation followed by sequencing (6mA DIP-seq) is the primary approach for profiling 6mA in eukaryotic genomes, but the specificity has been debated [28,29]. Since our method can efficiently recognize 6mA, coupled with DIP-Seq, we present that DIP-NT-seq can detect 6mA at single-base resolution with high fidelity and eliminate the false-positive 6mA sites in DIP-seq. Thus, this method can be used independently or coupled with other protocols for methylome profiling, which will pave the way for DNA modification studies in different contexts. Experimental design of NT-seq Nitrite treatment has been reported to deaminate adenine (A), cytosine (C), and guanine (G) for decades [30,31]. The deamination of A or C changes the bases and is read by polymerases as G or T, respectively, in PCR amplification (Fig. 1a) [30,31]. As the methyl groups of 6mA and 4mC are located at the amino groups of adenine and cytosine, 6mA and 4mC can block the deamination of adenine and cytosine under nitrite conditions. Recently, liquid chromatography mass spectrometry (LC-MS) results showed that N 6 -methyladenosine (m 6 A) was converted to N 6 -nitroso-m6A (m 6 A-NO) but not deaminated inosine by nitrite treatment [21,22]. While the nitrite treatment has been adapted to detect DNA 6mA and RNA m 6 A, it has been only tested in oligos or targeted RNA locus as a proof-of-concept for methylation sequencing [21,22]. Moreover, it can also be used to distinguish 5mC from cytosine because the deamination rate of 5mC in nitrite treatment has been reported to be up to 4.5-fold higher than cytosine [32] (Fig. 1a). However, whether this approach could be applied for whole-genome sequencing (WGS) with genomic DNA libraries remains unknown. 
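For orientation, the conversion logic just described can be summarized as a small lookup table. The following Python snippet is purely illustrative and not part of any published NT-seq pipeline; it simply encodes which base call is expected for each input base after nitrite treatment and PCR.

```python
# Hedged illustration of the readout logic described above: how each base is
# expected to be read after nitrite treatment followed by PCR amplification.
# This is a plain summary table in code form, not a published pipeline.
NITRITE_READOUT = {
    "A":   "G",   # adenine -> inosine, read as G
    "C":   "T",   # cytosine -> uracil, read as T
    "5mC": "T",   # 5mC deaminates (faster than C) and is read as T
    "6mA": "A",   # 6mA is nitrosylated (6mA-NO), not deaminated; still read as A
    "4mC": "C",   # 4mC is nitrosylated (4mC-NO), not deaminated; still read as C
}

def expected_read(base: str) -> str:
    """Return the base expected in sequencing reads after nitrite treatment."""
    return NITRITE_READOUT.get(base, base)
```

In practice deamination is incomplete, so methylation is called by comparing per-position A-to-G and C-to-T frequencies between native DNA and a PCR-amplified (methylation-erased) control, as laid out in the workflow below.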
Here, we hypothesize that DNA 4mC, like 6mA, can also be detected with nitrite treatment, and that an NGS library can be built from the treated DNA to define the whole methylome profile (Fig. 1a). Fig. 1 Principle and workflow of NT-seq. a Schematic illustration of nitrite treatment. Nitrite treatment induces deamination of adenine, cytosine, and 5mC at different frequencies, producing inosine, uracil, and thymine, respectively. Meanwhile, nitrite treatment nitrosylates 6mA and 4mC, producing nitrosylated 6mA (6mA-NO) and nitrosylated 4mC (4mC-NO). During PCR amplification and sequencing, base pairing and reading for each product are labeled in the right column. b The workflow of NT-seq. Single-stranded DNA is first annealed with protective oligos to protect PCR primer regions. Annealed DNA is treated with nitrite and then amplified to construct the sequencing library. Sequencing data from native DNA and PCR control are used to calculate the A to G or C to T mutation ratio and to call methylation. We first characterized the products formed from the reactions between nitrite and 2′-deoxyadenosine/2′-deoxycytidine by HPLC separation followed by mass spectrometric analyses. We found that the two nucleosides can indeed be deaminated upon being subjected to nitrite treatment (Additional file 1: Fig. S1-S4). More importantly, when subjecting 6mA and 4mC to the same nitrite treatment conditions, 6mA-NO and 4mC-NO were the dominant products and there were minimal side products formed from nitrite treatment (Additional file 1: Fig. S5-S8). Additionally, we performed time-dependent nitrite treatment to investigate the reaction dynamics of these four nucleosides. Consistent with published m6A results [21], we found that 6mA and 4mC are converted to 6mA-NO and 4mC-NO more rapidly than the deamination of their unmethylated counterparts (Additional file 1: Fig. S9). We then designed an experimental workflow to investigate whether nitrite treatment can be used to develop a sequencing method to simultaneously detect all three types of methylation genome-wide (Fig. 1b). In the workflow, we hybridized two protective oligos, which are reverse complementary to the primer regions of the single-stranded DNA (ssDNA), since Watson-Crick base pairing has been shown to protect DNA bases from deamination [33]. The quantitative PCR (qPCR) results showed that the protected DNA could be more efficiently amplified than unprotected DNA, decreasing the cycle threshold (Ct) value by up to 7.5 (Additional file 2: Table S1). Since the deamination by nitrite treatment is completed through nitrite-mediated diazotization followed by hydrolysis, which requires acidic conditions and relatively high temperature, the deamination efficiency is positively correlated to acid concentration, incubation temperature, and treatment duration [21]. However, intensifying these conditions, such as the treatment duration, can also result in an increased level of DNA degradation (Additional file 1: Fig. S10). Therefore, it is crucial to optimize the treatment condition to achieve a high deamination rate while preserving enough DNA for library preparation when applying it to genomic DNA. Accordingly, we treated six pmol (~250 ng) of protected DNA oligos with different concentrations of acetic acid, different temperatures, and different durations of treatment and performed qPCR to estimate the amount of remaining amplifiable fragments.
Considering a Ct around 15 as the required amount of amplifiable DNA for downstream library preparation, we determined 2.3% acetic acid, 1 M sodium nitrite (pH = 4.187), and incubation at 37°C as our optimal condition (Additional file 3: Table S2). We also evaluated the damage level caused by nitrite treatment using 293T genomic DNA and found that nitrite-treated DNA is less fragmented than bisulfite-treated DNA. However, the DNA degradation from nitrite treatment is more severe than from bisulfite treatment (Additional file 1: Fig. S10). To enable methylation detection in genomic DNA, it is also essential to efficiently align NT-seq reads to the reference genome. Therefore, we developed an NT-seq analysis pipeline that can tolerate all deamination-elicited base substitutions (Additional file 1: Fig. S11). Since base deamination will only cause transition mutations (A to G, C to T, or G to A) but not transversion mutations, we degenerate A/C/G/T bases in both reads and reference to purine/pyrimidine bases. Using whole-genome sequencing data from E. coli MG1655 as a mock dataset, we artificially introduced A to G, C to T, or G to A changes to mimic the base changes after nitrite treatment. The unique alignment rate is very similar to that of the original unchanged reads, indicating the NT-seq analysis pipeline can tolerate all possible base changes introduced by nitrite treatment (Additional file 1: Fig. S11). Detection of 6mA, 4mC, and 5mC in oligonucleotides using NT-seq To investigate the feasibility of NT-seq for DNA 6mA methylation detection, we performed pilot experiments with a 6mA modified oligo and an unmodified oligo. Consistent with previous work [21], we found that at the 6mA position, the A to G ratio (the ratio between the A to G frequency in the modified/native sample and the A to G frequency in the unmodified/amplified sample) is about 18-fold lower than that at other adenine positions (Fig. 2a). We further performed NT-seq on oligos mixed with different percentages of 6mA modified oligo and found that the normalized A to G frequency at the 6mA position is linearly correlated to the 6mA percentage (r = −0.968, Fig. 2b), indicating that NT-seq can quantify the 6mA frequency precisely. We then used BamHI methyltransferase to methylate a double-stranded DNA oligo with a GGATCC motif for 4mC modification, as 4mC modified DNA oligos are not commercially available. The BamHI restriction enzyme was used to cleave unmethylated DNA oligo before nitrite treatment (Additional file 1: Fig. S12). The NT-seq result showed that the C to T ratio at the BamHI methylated sites is about 4-fold lower than at other cytosine positions (Fig. 2c). Additionally, we performed NT-seq on oligos mixed with different percentages of 4mC modified oligo and demonstrated that the C to T ratio at both 4mC positions is linearly correlated to the 4mC percentage (r = −0.988 for position 39 and −0.992 for position 42), indicating that NT-seq can also quantify 4mC frequency (Fig. 2d, e). Fig. 2 NT-seq detects both adenine and cytosine methylation in oligonucleotides. a The inverse of the A to G ratio at adenine sites between unmodified control and 6mA modified oligo. b Linear regression between A to G frequency at the 6mA site and the percentage of 6mA modified oligo. c The inverse of the C to T ratio at cytosine sites between unmodified oligo and oligo modified by BamHI methyltransferase. d Linear regression between C to T frequency at 4mC position 39 and the percentage of 4mC modified oligo. e Linear regression between C to T frequency at 4mC position 42 and the percentage of 4mC modified oligo. f C to T ratio at cytosine sites between unmodified control and 5mC modified oligo. Modified adenine or cytosine sites are labeled in red. Adenine or cytosine sites inside the primer regions are not included. Dots represent the mean and error bars represent the standard deviation. All samples were replicated three times. One replicate for the 25%, 50%, and 100% 4mC modified oligo samples was not used due to library prep and sequencing depth issues. These results demonstrated that NT-seq indeed could detect 4mC and 6mA in parallel. Of note, the fold change of the 4mC oligo is lower than that of the 6mA oligo, which is likely caused by a small proportion of hemimethylated/unmethylated 4mC oligo that escaped restriction enzyme cleavage. As for detecting 5mC using NT-seq, we performed NT-seq on 5mC modified and unmodified oligos. The C to T ratio at the 5mC position is about 40% higher than at other cytosine positions (Fig. 2f), consistent with a previous study showing that 5mC is easier to deaminate than C [32]. We also performed NT-seq on oligo mixtures with different percentages of 5mC oligo and found that the C to T frequency is also linearly correlated to the percentage of 5mC (r = 0.972, Additional file 1: Fig. S12). This result further demonstrated that NT-seq could also detect 5mC quantitatively. Notably, unlike bisulfite sequencing, the impacts of 4mC and 5mC are opposite during nitrite treatment, making NT-seq capable of distinguishing 4mC from 5mC. Taken together, we demonstrated that NT-seq could detect all three types of DNA methylation in DNA oligos. Detection of methylation motifs in bacteria by NT-seq To determine the capability of NT-seq in detecting the three types of methylation motif in genomic DNA, we applied NT-seq to the E. coli MG1655 genome. Firstly, we found that the A to G frequency at known 6mA sites (Dam (G6mATC) and M.EcoKI (A6mACN6GTGC and GC6mACN6GTT) motifs) was significantly decreased compared to unmethylated adenine sites, while no difference was observed after PCR amplification (Fig. 3a). Thus, the A to G ratio at Dam and M.EcoKI motifs was significantly decreased compared to unmethylated adenine positions (Fig. 3c). The M.EcoKII motif was used as a negative control because the methyltransferase M.EcoKII is known not to be expressed under standard laboratory conditions [34]. As expected, the A to G ratio at the M.EcoKII motif is not different from other unmethylated adenine sites (Fig. 3c). To further demonstrate that NT-seq detects bona fide 6mA motifs, we applied NT-seq to an hsdM (the gene encoding the M.EcoKI protein) KO strain and a dam/dcm mutated strain. In contrast to the WT E. coli strain, the M.EcoKI motif in the hsdM KO strain shows no difference in A to G ratio from other unmethylated adenine sites (Fig. 3d). Similarly, the A to G ratio difference between the Dam motif and unmethylated adenine is also lost in the dam/dcm mutated strain (Fig. 3e). Next, we performed NT-seq on the H. pylori JP26 genome, which contains two known 4mC motifs: 4mCCGG and T4mCTTC [17]. Consistent with the 6mA results, the C to T frequency at these two 4mC motifs was decreased compared to unmethylated cytosine positions, while no difference was observed after PCR amplification (Fig. 3b). The C to T ratio at these two motifs decreased around 4-fold compared to unmethylated cytosine positions (Fig. 3f). As there is no known 4mC motif in E. coli MG1655, we analyzed the C to T ratio at 4mC sites identified by SMRT-seq [35].
As there is no known 4mC motif in E. coli MG1655, we analyzed the C to T ratio at 4mC sites identified by SMRT-seq [35]. Consistent with the previous report, the C to T ratio at these sites showed no difference from other cytosine sites (Fig. 3g), indicating that NT-seq detects only true 4mC-induced differences. In contrast, the A to G ratio at 6mA sites identified by SMRT-seq was decreased 4-fold on average (Additional file 1: Fig. S13). These results are consistent with previous reports [15,16] that SMRT-seq may overestimate the total level of 4mC or falsely detect some 4mC motifs in bacterial genomes. To examine whether NT-seq also detects 5mC motifs, we compared the C to T ratio at two known 5mC motifs in the H. pylori genome. Consistent with the 5mC oligo results, the C to T ratio at these two known 5mC motifs is significantly increased compared to unmethylated cytosine sites (Fig. 3h). Similarly, the Dcm motif in E. coli also shows a significantly increased C to T ratio between the native and PCR samples (Additional file 1: Fig. S13). This increase in C to T ratio is diminished in the dam/dcm mutant, further indicating that 5mC can be detected by NT-seq (Additional file 1: Fig. S13). Overall, we demonstrated that NT-seq can be used to simultaneously detect multiple types of DNA methylation motifs in bacterial genomes.

Fig. 3 NT-seq simultaneously detects three types of methylation motifs in bacteria. a A to G frequency at known 6mA sites (G6mATC, A6mACN6GTGC, and GC6mACN6GTT) and unmethylated A sites in the E. coli MG1655 genome from native and PCR-amplified DNA. b C to T frequency at known 4mC sites (T4mCTTC and 4mCCGG) and unmethylated C sites in the H. pylori JP26 genome from native and PCR-amplified DNA. c-e Negative log normalized A to G ratio of different 6mA motifs in the E. coli WT strain MG1655 (c), the hsdM deleted strain (d), and the dam/dcm/hsdR mutated strain (e); 6mA positions are underlined. f Negative log normalized C to T ratio of different 4mC motifs in H. pylori JP26. g Negative log normalized C to T ratio of 4mC sites identified by SMRT-seq in E. coli strain MG1655. h Negative log normalized C to T ratio of different 5mC motifs in H. pylori JP26. Only motifs with sequencing depth larger than 25× were considered for the violin plots. Statistical tests were performed by the two-sided Mann-Whitney-Wilcoxon (MWW) test with Bonferroni correction (ns: P > 1.0e-3; ****: P ≤ 1.0e−6).

De novo discovery of methylation motifs in bacteria using NT-seq

Having shown that NT-seq can detect known methylation motifs in bacteria, we further explored whether we could discover novel methylation motifs using NT-seq. We traversed all possible 4mer, 5mer, and 6mer adenine motifs in the H. pylori JP26 genome and found that the average A to G ratios of these motifs can be clearly divided into two groups (Additional file 1: Fig. S14). Most of the upper group are known 6mA motifs, and the lower group consists mainly of unmethylated motifs, with a few exceptions. We investigated these exceptions further and found that one sub-motif (GAGG6mA) of a previously reported motif (GMRG6mA) showed no difference in A to G ratio from other adenine motifs, whereas the other three sub-motifs are significantly different from other adenine motifs (Fig. 4c). The same pattern was observed in the SMRT-seq results (Additional file 1: Fig. S14), demonstrating that the previous SMRT-seq workflow imprecisely combined GAAG6mA and GCRG6mA into GMRG6mA.

Fig. 4 De novo discovery of methylation motifs in the H. pylori genome using NT-seq. a-b Scatter plots of the median difference of −Log2FC between any 4mer-6mer A motif and the remaining A sites (a) and the median difference of −Log2FC between any 4mer-6mer C motif and the remaining C sites (b); previously reported, corrected, and novel 6mA motifs are labeled in color. c Violin plot of −Log2(normalized A to G ratio) for the four sub-motifs of the previously reported GMRG6mA motif. d Violin plot of −Log2(normalized A to G ratio) for the eight sub-motifs of the GGWCN6mA motif. e Violin plot of −Log2(normalized A to G ratio) for all previously reported and newly discovered 6mA motifs. Statistical tests were performed by the two-sided Mann-Whitney-Wilcoxon (MWW) test with Bonferroni correction (ns: P > 1.0e-3; ****: P ≤ 1.0e−6).
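The motif traversal described above can be sketched as follows in Python: for every candidate k-mer, average the per-position normalized ratios over its genomic occurrences and keep motifs with enough covered loci. This naive scan is purely illustrative (a practical implementation would index all k-mer positions in a single pass), and the function names here are our own.

from itertools import product
from statistics import mean

def motif_a_ratios(genome: str, motif: str, a_offset: int, ratio_by_pos: dict):
    """Normalized A>G ratios at the adenine (at a_offset) of each motif hit;
    ratio_by_pos maps genomic coordinates of reference-A sites to ratios."""
    hits = []
    for i in range(len(genome) - len(motif) + 1):
        if genome[i:i + len(motif)] == motif:
            r = ratio_by_pos.get(i + a_offset)
            if r is not None:
                hits.append(r)
    return hits

def traverse_kmers(genome: str, k: int, ratio_by_pos: dict, min_loci: int = 50):
    """Average ratio for every k-mer ending in A (one motif family only;
    the full analysis also scans other A offsets and bipartite motifs)."""
    averages = {}
    for kmer in ("".join(p) + "A" for p in product("ACGT", repeat=k - 1)):
        vals = motif_a_ratios(genome, kmer, k - 1, ratio_by_pos)
        if len(vals) >= min_loci:
            averages[kmer] = mean(vals)
    return averages  # a bimodal distribution of averages separates the two groups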
Moreover, using NT-seq we identified a novel 6mA motif, GGWCN6mA (Fig. 4d), which had not been identified by SMRT-seq in the previous study. We investigated the sub-motifs of GGWCN6mA and found that all of them showed a significant decrease in A to G ratio compared to unmethylated motifs (Fig. 4d). A similar trend was also observed in the SMRT-seq IPD ratio quantification, demonstrating that GGWCN6mA is a genuine 6mA motif (Additional file 1: Fig. S14). The complete annotation of the 4-6mer 6mA motif scatter plot and the detailed distribution of the A to G ratio of all 6mA motifs in the H. pylori genome are shown in Fig. 4a and e. By contrast, when we performed the same analysis for cytosine motifs, we failed to find novel 4mC or 5mC motifs in the H. pylori JP26 genome, indicating that there is likely no cytosine methylation motif in addition to the previously reported 4mCCGG, T4mCTTC, G5mCGC, and GG5mCC (Fig. 4b, Additional file 1: Fig. S14). These results demonstrated that NT-seq can not only validate reported methylation motifs but also identify novel methylation motifs in bacterial genomes. Therefore, NT-seq provides a simple "all-in-one" solution for accurately profiling the methylome of bacterial genomes, which will facilitate the discovery of unknown methylation motifs and epigenetic regulators in bacteria.

Methylation motif identification in a microbial community reference using NT-seq

Meta-epigenomic analysis based on SMRT-seq has recently revealed diverse DNA methylation in an environmental microbial community [36]. To determine whether NT-seq can be used to detect DNA methylation patterns in microbial community samples, we applied NT-seq to a commercial microbial community standard comprising eight bacterial species and two fungal species. Since the two fungal genomes make up about six times less of the community than the bacterial genomes, and there is no reported methylation motif in either fungal species, we focused our 6mA analysis on the eight bacterial species. We traversed all possible 4mer, 5mer, and 6mer motifs as well as common type I RM bipartite methylated adenine motifs. We confirmed that all 6mA motifs with a high IPD ratio in the previous SMRT-seq study [18], across seven bacterial strains, are significantly different from unmethylated motifs in NT-seq (Fig. 5a, Additional file 1: Fig. S15, S16). Interestingly, putative 6mA motifs (CTKV6mAG and CTCC6mAG in E. faecalis) with only a small difference in IPD ratio in SMRT-seq showed no difference in NT-seq, indicating that these two motifs might not be 6mA methylated (Fig. 5a). Consistently, the current signals from Nanopore sequencing showed only minor changes at every position of CTKVAG and CTCCAG [18]. For Lactobacillus fermentum, which has no available SMRT-seq data, a putative 6mA motif, AG6mAGG, showed a decrease in A to G ratio similar in extent to that of known 6mA motifs in other species (Fig. 5a).
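The significance calls behind these motif-versus-background comparisons (as in Figs. 3, 4, and 5) can be reproduced in outline with a two-sided Mann-Whitney-Wilcoxon test and a Bonferroni correction, as in the following Python sketch; scipy is assumed, and the choice of n_tests shown in the comment is only an example.

from scipy.stats import mannwhitneyu

def motif_vs_background(motif_ratios, background_ratios, n_tests):
    """Two-sided MWW test of a motif's normalized ratios against the
    remaining (background) A or C sites, Bonferroni-corrected for the
    number of motifs tested."""
    stat, p = mannwhitneyu(motif_ratios, background_ratios,
                           alternative="two-sided")
    return stat, min(1.0, p * n_tests)

# Example: test one 6mA motif against all remaining adenine sites, correcting
# for all 6-mers scanned (4**6 candidate motifs is an illustrative count).
# stat, p_adj = motif_vs_background(gatc_ratios, other_a_ratios, n_tests=4**6)
# Significance would then be read against the figure thresholds (e.g., P <= 1.0e-6).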
The A to G ratio distribution of most sub-motifs of AG6mAGG displayed a clear separation from unmethylated motifs, suggesting that it is a true 6mA motif (Fig. 5c, Additional file 1: Fig. S15). A previous study using 6mA DIP-seq failed to detect any 6mA peaks in the L. fermentum genome [18]. We therefore reanalyzed their 6mA DIP-seq data and found that we were able to identify 1557 6mA peaks by keeping all "duplicate" reads at the exact same location, but failed to detect any peaks after default deduplication. The read "duplication" here can be caused by the excessive sequencing depth (15.7 million reads on average for a 1.9-Mb genome) rather than by true PCR duplicates. Further investigation of the 1557 identified 6mA peaks revealed that 75% of all AG6mAGG motifs intersected with 6mA peaks (Fig. 5d). Motif discovery analysis also identified AGAGG as the top significantly enriched motif inside these 6mA peaks (Fig. 5d). These results clearly demonstrated that the AG6mAGG motif detected by NT-seq is indeed a true 6mA motif in the L. fermentum genome. For cytosine methylation, we were not able to identify any 4mC methylation motif, indicating that there is likely no 4mC motif in these bacterial genomes. We also analyzed available bisulfite sequencing data for all eight bacterial species and identified six 4- to 6-mer cytosine methylation motifs in Bacillus subtilis, E. coli, Staphylococcus aureus, and Salmonella enterica (Additional file 4: Table S3). The C to T ratios of NT-seq at these motifs were all higher than those of unmethylated motif controls (Fig. 5b), indicating that these cytosine methylation motifs are 5mC methylated rather than 4mC methylated.

Profiling 6mA at single-base resolution in the E. coli and Oxytricha genomes by DIP-NT-seq

DNA 6mA is the most common methylation in prokaryotes. Recently, 6mA has also been identified in eukaryotes, including mammals, in which DNA 6mA methylation is motif independent. To evaluate the performance of NT-seq in detecting the different types of DNA methylation at single-base resolution, we generated a receiver operating characteristic (ROC) curve for each methylation type in the H. pylori genome (Additional file 1: Fig. S17). Consistent with the oligo results, the performance of NT-seq in detecting 6mA and 4mC at single-base resolution is similar (AUC = 0.934 for 6mA and 0.948 for 4mC at positions with more than 25× coverage). The performance is significantly decreased for 5mC detection (AUC = 0.832 at positions with more than 25× coverage). To explore whether the performance of NT-seq in detecting methylation at single-base resolution can be further improved, we coupled 6mA DNA immunoprecipitation (6mA-DIP) with NT-seq and tested it in the E. coli genome (Fig. 6a). We sequenced both the unenriched whole-genome sample and the 6mA-DIP-enriched sample to near-saturation depth, as indicated by the PCR duplication level (Additional file 5: Table S4). We found that, without enrichment, NT-seq is unable to cover the whole set of 6mA sites at high sequencing depth (25×, Fig. 6b). With 6mA-DIP enrichment, however, NT-seq covers roughly 3-fold more 6mA sites at the same sequencing depth threshold (25×, Fig. 6b). This might be due to the DNA damage induced by nitrite treatment under acidic conditions: previous studies showed that depurination of xanthine (generated from guanine deamination) is more rapid than that of other bases under acidic conditions [37,38].

Fig. 6 6mA profiling at single-base resolution in bacterial and eukaryotic genomes. a Workflow of DNA 6mA immunoprecipitation followed by NT-seq (6mA DIP-NT-seq). The 6mA modified DNA fragment is labeled in green. b Bar plot of the percentage of 6mA sites passing the filter at different sequencing depth thresholds; 100% at threshold 0 means no filtering. c Violin plot showing that 6mA DIP increases the A to G ratio difference between 6mA and A. d Receiver operating characteristic (ROC) curves showing that 6mA DIP significantly improves the performance of 6mA detection at single-base resolution. e Pie charts of DIP-NT-seq results at 6mA sites correctly and incorrectly classified by SMRT-seq; the normalized A to G ratio threshold was determined by maximizing the F1 score. f 6mA DIP-seq peak numbers in the WT and MTA1 mutant Oxytricha strains. g SMRT-seq 6mA percentages in the WT and MTA1 mutant Oxytricha strains. h DIP-NT-seq 6mA percentages in the WT and MTA1 mutant Oxytricha strains. i IPD ratio of 6mA sites undetected and detected by DIP-NT-seq. Statistical tests were performed by the two-sided Mann-Whitney-Wilcoxon (MWW) test with Bonferroni correction (****: P ≤ 1.0e−6).
Additionally, xanthine was reported to induce polymerase arrest [37], which may lead to a PCR amplification bias toward xanthine-free DNA fragments. Consistently, compared to untreated samples, nitrite treatment causes the final sequencing result to be more biased toward G-poor regions (Additional file 1: Fig. S18). Therefore, 6mA-DIP can eliminate reads from unmethylated G-poor regions and concentrate sequencing reads on methylation loci. Additionally, 6mA-DIP also enriched 6mA, as shown by the significantly lower A to G ratio at the Dam/M.EcoKI motifs in the enriched sample compared to the unenriched sample (Fig. 6c). To comprehensively evaluate the performance of NT-seq in detecting 6mA sites, we generated ROC curves for the unenriched and DIP-enriched samples using the Dam/M.EcoKI motifs as the gold standard for 6mA positions. Consistently, DIP enrichment greatly improves the AUC scores at different sequencing depth thresholds (Fig. 6d). Similar results were also observed for the PR curves and AP scores (Additional file 1: Fig. S18). By comparing DIP-NT-seq with the well-established SMRT-seq method for 6mA single-base detection, we found that the DIP-NT-seq and SMRT-seq results are consistent (Additional file 1: Fig. S18). Furthermore, using Dam/M.EcoKI-mediated 6mA as the ground truth, DIP-NT-seq detected up to 76.8% of the false-negative sites from SMRT-seq and only 0.3% of the false-positive sites from SMRT-seq, indicating that DIP-NT-seq is able to efficiently filter out false-positive 6mA calls from SMRT-seq. In addition to SMRT-seq, we also compared DIP-NT-seq to other next-generation sequencing-based methods for E. coli 6mA detection, such as 6mA DIP-seq [18] and 6mACE-seq [10]. We showed that DIP-NT-seq can detect 11.7% more 6mA sites within the Dam (G6mATC) motif, indicating that DIP-NT-seq can not only raise traditional DIP-seq to single-base resolution but also increase the sensitivity of 6mA detection (Additional file 1: Fig. S19). When compared to 6mACE-seq, DIP-NT-seq detected 7% more 6mA sites but generated more false-positive sites (Additional file 1: Fig. S19). Overall, the F1 score indicates that DIP-NT-seq performs slightly better than 6mACE-seq (0.843 for DIP-NT-seq and 0.833 for 6mACE-seq). It is also worth noting that the performance of NT-seq in identifying true 6mA can be improved by using a higher sequencing depth threshold (25×, F1 score: 0.911; 50×, F1 score: 0.953) (Additional file 1: Fig. S18), indicating that the incorrectly classified 6mA sites were likely caused by low sequencing coverage at these sites, which can be addressed by increasing the sequencing depth.
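The single-base benchmarking described above (AUC, and an F1-maximizing threshold as used for Fig. 6e) can be outlined with scikit-learn as follows; the ground-truth labels are taken to be membership in the Dam/M.EcoKI motifs, and the score is the negative log normalized A to G ratio. This is a sketch under those assumptions, not the exact evaluation code.

import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve

def benchmark_6ma(scores, labels):
    """scores: -log2(normalized A>G ratio) per adenine site (higher = more
    6mA-like); labels: 1 if the site lies in a known 6mA motif, else 0."""
    auc = roc_auc_score(labels, scores)
    prec, rec, thr = precision_recall_curve(labels, scores)
    f1 = 2 * prec * rec / np.clip(prec + rec, 1e-12, None)
    best = int(np.nanargmax(f1[:-1]))      # last point has no threshold
    return auc, float(f1[best]), float(thr[best])

# auc, f1, cutoff = benchmark_6ma(scores, labels)
# Sites with score >= cutoff are called 6mA; restricting to sites with >=25x
# or >=50x coverage before scoring raises the F1 score, as reported above.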
6mA has recently been described in various eukaryotic organisms, including unicellular eukaryotes [24,39], metazoans [25][26][27], and plants [40,41], and is generally more abundant in unicellular eukaryotes than in metazoans. In Oxytricha trifallax, the abundance of 6mA is around 7000 ppm (parts per million dA), and MTA1c has been identified as a 6mA methyltransferase complex [39]. Although 6mA DIP-seq is the primary approach for profiling 6mA in eukaryotic genomes, the specificity of this antibody-based method has been debated [28,29]. To determine whether NT-seq can improve the specificity and resolution of DIP-seq, we performed DIP-NT-seq in WT and MTA1 mutated Oxytricha. In agreement with the 6mA DIP-seq and SMRT-seq results [39], the 6mA level in the MTA1 mutant is about 50% lower than in WT Oxytricha (Fig. 6f-h), suggesting that DIP-NT-seq can robustly detect 6mA changes at single-base resolution in eukaryotic genomes. The SMRT-seq IPD ratio of 6mA sites detected by DIP-NT-seq is significantly higher than that of 6mA sites not detected by DIP-NT-seq (Fig. 6i), indicating that 6mA sites detected by DIP-NT-seq can be cross-validated by SMRT-seq.

Discussion

In this study, we developed NT-seq to map multiple types of DNA methylation genome-wide. We demonstrated that NT-seq can be used to detect all three major types of methylation motifs in bacterial genomes simultaneously. Notably, NT-seq can not only detect known methylation motifs but can also be used to discover novel methylation motifs, and we demonstrated de novo methylation motif discovery both in single bacterial species and in microbial communities. Furthermore, our results indicate that coupling methyl DNA immunoprecipitation (DIP) with NT-seq can profile 6mA at single-base resolution in both prokaryotes and eukaryotes. We demonstrated that the performance of DIP-NT-seq in the E. coli genome is comparable to that of SMRT-seq. Moreover, we demonstrated that DIP-NT-seq can filter out false-positive 6mA sites from DIP-seq or SMRT sequencing, methods that have raised concerns about 6mA detection in the field. Therefore, NT-seq improves the accuracy of 6mA detection and supplies an independent method that can be used for cross-validation with SMRT-seq. While 6mA has been identified in eukaryotes, including mammals [24][25][26][27], current 6mA genomic profiling in eukaryotes is mainly dependent on 6mA antibody-based DIP-seq. However, 6mA antibodies have recently been shown to generate nonspecific signals when the abundance of 6mA is low [28,29]. In fact, Wu TP et al. reported the sensitivity limitation of 6mA DIP-seq as early as 2016. By coupling 6mA DIP with NT-seq, non-6mA fragments (the source of false positives) will not generate A to G ratios different from the PCR control after nitrite treatment. Thus, NT-seq can improve the specificity and resolution of 6mA DIP-seq by filtering out false-positive signals. Here, we would also like to discuss a few limitations of NT-seq. First, the current nitrite treatment in NT-seq causes DNA damage, which complicates library generation and prevents us from completely deaminating A and C to reach a better performance (Fig. 3a, b; Additional file 1: Fig. S13). Second, we found that NT-seq reads are biased toward G-poor regions
(Additional file 1: Fig. S18), which may limit the application of NT-seq to GC-rich bacterial genomes, although we took advantage of annealing two protective oligos to the PCR primer regions before nitrite treatment to preserve amplifiable DNA fragments, so that NGS libraries can be generated from most of the genomic DNA. Among the bacterial genomes we tested, Pseudomonas aeruginosa has the highest GC content (66.2%), and we did observe a slightly noisier distribution of A to G ratios across motifs compared to the genomes with lower GC content (Additional file 1: Fig. S15). Therefore, bacterial genomes with high GC content may require more input DNA to achieve the same performance as genomes with low GC content. Lastly, 5mC detection by NT-seq is less efficient than by bisulfite sequencing. We therefore recommend performing shallow bisulfite sequencing to cross-validate NT-seq-detected 5mC motifs in unknown genomes or microbial community samples. These limitations might be overcome by identifying a milder nitrite treatment condition that achieves a near-complete deamination rate while decreasing DNA degradation. Collectively, NT-seq provides an efficient chemical-based method to reliably detect multiple types of DNA methylation in single bacterial species, microbial communities, and eukaryotic genomes. Importantly, since NGS is much more affordable and easier to access than current third-generation sequencing platforms, NT-seq makes bacterial methylation analysis more accessible and cost-efficient for epigenetics researchers. Recently, Nanopore sequencing was shown to be able to detect multiple types of DNA methylation motifs in bacteria and microbiomes. However, accurate methylation detection from Nanopore sequencing requires computational training on sequencing data for known methylation motifs in different contexts, which are currently generated with high-cost SMRT-seq. For mammalian methylome profiling with Nanopore sequencing, more training datasets will be necessary for the successful development of analysis models. Recently developed computational tools such as nanodisco [17] are still insufficient to reliably detect 6mA motifs in mouse gut microbiome samples (Additional file 1: Fig. S20), largely limited by the small training dataset (only 28 known 6mA motifs trained versus hundreds of thousands of possible 6mA motifs in microbial genomes). Since we have shown that NT-seq can reliably detect multiple types of DNA methylation motifs, NT-seq can be used to generate more accurate and larger-scale training datasets at minimal cost and thus help develop machine-learning-based computational tools for methylation detection from Nanopore and other single-molecule sequencing.

Conclusions

We developed a method (NT-seq) to simultaneously map all three major types of DNA methylation in prokaryotic genomes. NT-seq allows accurate detection of methylation motifs in both single species and microbial community samples. Compared to SMRT-seq, NT-seq provides a cost-efficient solution for bacterial methylation mapping, which will boost the study of bacterial epigenetics. By coupling methyl DNA immunoprecipitation (DIP) with NT-seq, we showed that NT-seq can accurately profile 6mA at single-base resolution in bacterial genomes; this approach can also be applied to eukaryotic genomes to help eliminate the non-specific signals of 6mA DIP-seq in eukaryotes. NT-seq can cross-validate SMRT-seq results and generate more training datasets for developing machine-learning tools for methylation analysis.
This method paves the way for further epigenetic studies of genomic DNA 6mA in eukaryotes.

Methods

Bacterial strains, cell lines, and culture conditions

The WT E. coli strain MG1655 and the hsdM deletion K12 strain were kindly provided by Dr. Susan M Rosenberg's lab (Baylor College of Medicine). The dam/dcm mutant E. coli strain JM110 was purchased from Addgene (bacterial strain #49763). All E. coli strains were cultured in standard LB liquid medium at 37°C.

Characterization of reaction products formed from the reactions between nitrite and nucleosides in vitro

Briefly, we incubated a 90-μL reaction mixture containing 66.7 μM of an individual nucleoside (dA, dC, 6mA, or 4mC), 1.0 M sodium nitrite, and 2.3% (v/v) glacial acetic acid at 37°C. Aliquots (10 μL each) were taken from the reaction mixtures at the indicated time points. To the aliquots were added 10 μL of 2.0 M TEAA, and the mixture was subjected immediately to HPLC analysis on a Beckman Gold HPLC system. A reversed-phase C18 column (4.6 × 250 mm, 5 μm particle size) was used for the separation. Mobile phases A and B were 50 mM triethylammonium acetate (pH 6.8) and 30% acetonitrile in mobile phase A, respectively, and the flow rate was 0.8 mL/min. The following gradients were used for the separation: 33-55.5% B in 25 min for the 2′-deoxyadenosine reaction mixture, 45-80% B in 30 min for the 6mA reaction mixture, 30-45% B in 22.5 min for the 2′-deoxycytidine reaction mixture, and 35-80% B in 30 min for the 4mC reaction mixture. The LC fractions were identified using ESI-MS and MS/MS on an LCQ Deca XP mass spectrometer (Thermo) operated in positive-ion mode (Fig. S2, S4, S6, and S8).

NT-seq for oligonucleotides

The sequences of all oligonucleotides used in this study are available in Additional file 6: Table S5. The 6mA modified oligo (156 nt, with one 6mA at position 60) and the unmodified control oligo were synthesized at GeneLink, Inc. All other oligos, including the 5mC modified oligo (100 nt, with one 5mC at position 52), were synthesized at Sigma. The 4mC modified oligo (91 bp) was generated by treating a dsDNA oligo with BamHI methyltransferase (which methylates the GGATCC motif at the first cytosine base) according to the manufacturer's instructions (NEB, M0223S). The BamHI methyltransferase-treated dsDNA was further treated with the BamHI-HF restriction enzyme according to the manufacturer's instructions to eliminate DNA with incomplete 4mC methylation. The DNA oligo was first annealed to an equal amount of protective oligos at both ends in 0.2 M NaCl (95°C for 2 min, 25°C for 5 min, then held at 4°C, with a cooling ramp rate of 0.1°C/s). In total, 20 pmol of annealed DNA was treated with 1 M sodium nitrite (NaNO2) and 2.3% (v/v) acetic acid (AcOH) at 37°C for 4-5 h. Nitrite-treated DNA was purified using the Oligo Clean & Concentrator kit (Zymo Research, D4060). The Illumina TruSeq adaptor was added to the oligo by PCR amplification using Taq DNA polymerase (NEB, M0490S); the cycle number was determined by qPCR with iTaq polymerase (Bio-Rad, 1725121). The indexed library was constructed with the NEBNext Ultra™ II DNA Library Prep Kit for Illumina (NEB, E7645S). Samples were sequenced on a NextSeq 500.

NT-seq for genomic DNA

Helicobacter pylori JP26 genomic DNA was kindly provided by Dr. Gang Fang's lab (Icahn School of Medicine at Mount Sinai). E. coli genomic DNA was isolated using the DNeasy Blood and Tissue Kit according to the manufacturer's instructions (Qiagen, 69506). Microbial community standard DNA was purchased from Zymo Research (D6306).
One microgram of genomic DNA was first fragmented to 100-300 bp using a Covaris S220 focused-ultrasonicator. Fragmented gDNA was ligated to the TruSeq adaptor using the NEBNext Ultra™ II DNA Library Prep Kit for Illumina (NEB, E7645S). Unmodified control DNA was made by amplifying the adaptor-ligated DNA with Q5 DNA polymerase (NEB, M0492S). Both native and amplified DNA were first annealed to an excess of protective adaptor sequences, using 5 mM adaptor oligos (F-RC: AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT, R: CTGGAGTTCAGACGTGTGCTCTTCCGATCT) in 0.2 M NaCl (95°C for 2 min, then held at 37°C, with a cooling ramp rate of 1°C/s), and then treated with 1 M NaNO2 and 2.3% (v/v) AcOH at 37°C for 4 h. Nitrite-treated DNA was purified using the Oligo Clean & Concentrator kit (Zymo Research, D4060). The indexed library was constructed using Taq DNA polymerase (NEB, M0490S); the cycle number was determined by qPCR with iTaq polymerase (Bio-Rad, 1725121). Samples were sequenced on a NextSeq 500.

6mA DIP-NT-seq

One microgram of adaptor-ligated E. coli genomic DNA was denatured at 95°C for 10 min and immediately placed on ice for 10 min. Single-stranded DNA fragments were immunoprecipitated with a 6mA antibody (CST, D9D9W) overnight at 4°C. Methylated DNA capture, washing, and elution were performed using the hMeDIP kit according to the manufacturer's instructions (Active Motif, 55010). Eluted DNA was treated with 2 μL of 1 mg/mL proteinase K at 50°C for 30 min and purified with the Oligo Clean & Concentrator kit (Zymo Research, D4060) to remove the remaining antibodies. A small proportion of the input and enriched DNA was used to construct traditional 6mA DIP-seq libraries. The remaining enriched DNA was used to perform NT-seq as described above.

NT-seq data analysis

Preprocessing

The analysis workflow is shown in Additional file 1: Fig. S11. The scripts used for analysis are available at GitHub (https://github.com/TaoLabBCM/NT-seq) [42] and Zenodo (https://zenodo.org/record/6540299#.Ynwdt5PML_s) [43]. Briefly, sequencing reads were trimmed using Cutadapt (v1.18) [44] to remove adaptors. Duplicated reads were removed using the Clumpify package of BBMap (v38.84) [45] (this step is omitted for oligo data analysis). To align all nitrite-treatment-converted reads (carrying A to G and C to T mutations) to the reference genome, FASTQ reads and the FASTA reference were converted to AT-only format (all purines (A/G) converted to A and all pyrimidines (C/T) converted to T). Converted FASTQ reads were aligned to the converted reference using Bowtie2 (v2.3.5.1) [46]. AT-only SAM files were converted back to SAM files with the original reads using custom Python scripts. The NM (number of mismatches between the sequence and the reference) and MD (string encoding mismatched reference bases) tags in the SAM files were recalculated using the Samtools (v1.9) [47] calmd command to obtain the mutation pattern of the original reads. The alignments in the SAM files were further filtered using the recalculated NM and MD tags to remove unconverted reads and reads with unwanted base mutations (reads were retained only if the number of mismatches was ≥ 2 and the fraction of mismatches at A or C positions over the total number of mismatches was ≥ 0.8). Base counts at each genomic location were generated with the IGVtools (v2.5.3) [48] count command from the filtered SAM files, and the base count files were used to calculate the A to G ratio and C to T ratio of the native and amplified samples at each position using custom Python scripts.
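The NM/MD-based read filter described above can be approximated with pysam as in the following sketch; the retention criteria paraphrase the text (at least two mismatches, with at least 80% of them at reference A or C positions), and the exact behaviour of the released scripts may differ.

import pysam

def conversion_consistent(read, min_mismatches=2, min_ac_fraction=0.8):
    """Keep reads that look nitrite-converted. Requires NM and MD tags,
    e.g. recalculated with `samtools calmd`."""
    if read.is_unmapped or not read.has_tag("NM"):
        return False
    if read.get_tag("NM") < min_mismatches:
        return False
    # With MD present, pysam reports the reference base in lower case at
    # mismatched positions via get_aligned_pairs(with_seq=True).
    ref_at_mismatch = [
        ref for qpos, _rpos, ref in read.get_aligned_pairs(with_seq=True)
        if qpos is not None and ref is not None and ref.islower()
    ]
    if not ref_at_mismatch:
        return False
    at_ac = sum(1 for ref in ref_at_mismatch if ref.upper() in "AC")
    return at_ac / len(ref_at_mismatch) >= min_ac_fraction

with pysam.AlignmentFile("aligned.bam", "rb") as bam, \
     pysam.AlignmentFile("filtered.bam", "wb", template=bam) as out:
    for read in bam:
        if conversion_consistent(read):
            out.write(read)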
Methylation motif identification

For all possible methylation motif combinations (4mer, 5mer, and 6mer motifs for 6mA, 5mC, and 4mC; bipartite type I RM system motifs for 6mA), an average A to G or C to T ratio was calculated. The A to G and C to T ratios were first normalized by dividing the average ratio in the native sample by that in the amplified sample and then log-transformed. Motifs were filtered by requiring at least 50 motif loci covered by at least 200 reads. Motifs passing the filters were then used to generate the scatter plots for motif identification.

Bisulfite sequencing analysis

Bisulfite sequencing data for the bacterial species in the ZymoBIOMICS microbial community reference were downloaded from the NCBI SRA under BioProject number PRJNA477598. Raw reads were first trimmed using Cutadapt (v1.18) [44] to remove adaptors. Downstream alignment, deduplication, and methylation extraction for each cytosine position were performed using Bismark (v0.23.1) [50]. Cytosine methylation motifs were identified using custom Python scripts.
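The normalization and coverage filter in this subsection can be expressed compactly in Python; the per-locus input format below is our own convention, and the reading of the 200-read cutoff as a per-locus depth requirement is an assumption.

import math

def motif_summary(loci, min_loci=50, min_depth=200):
    """loci: one (native_ratio, amplified_ratio, depth) tuple per genomic
    occurrence of a motif. Returns the mean -log2 normalized ratio used in
    the scatter plots, or None if the motif fails the coverage filter."""
    covered = [(n, a) for n, a, depth in loci if depth >= min_depth]
    if len(covered) < min_loci:
        return None
    vals = [-math.log2(n / a) for n, a in covered if n > 0 and a > 0]
    return sum(vals) / len(vals) if vals else None

# Motifs whose summary is clearly above the background cloud of values are
# candidate 6mA/4mC motifs; values below the background indicate 5mC.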
9,973.6
2022-05-30T00:00:00.000
[ "Biology" ]
Infinitely many solutions for elliptic equations with non-symmetric nonlinearities We deal with the existence of infinitely many solutions for a class of elliptic problems with non-symmetric nonlinearities. Our result, which is motivated by a well-known conjecture formulated by A. Bahri and P.L. Lions, suggests a new approach to tackle these problems. The proof is based on a method which does not require the use of techniques of deformation from the symmetry and may be applied to more general non-symmetric problems. Introduction Let us consider the problem
$$-\Delta u = |u|^{p-1}u + w \quad \text{in } \Omega, \tag{1.1}$$
where $\Omega$ is a smooth bounded domain of $\mathbb{R}^n$, with $n \ge 1$, $w \in L^2(\Omega)$, $p > 1$ and $p < \frac{n+2}{n-2}$ when $n \ge 3$. If $w \not\equiv 0$ in $\Omega$, the corresponding energy functional $E : H^1_0(\Omega) \to \mathbb{R}$, defined by
$$E(u) = \frac{1}{2}\int_\Omega |\nabla u|^2\,dx - \frac{1}{p+1}\int_\Omega |u|^{p+1}\,dx - \int_\Omega wu\,dx,$$
is not even, so the equivariant Lusternik-Schnirelmann theory for $\mathbb{Z}_2$-symmetric sets cannot be applied to find infinitely many solutions as in the case $w \equiv 0$ (see for instance [1,3,18,19,27,28,32,34] and also [9,17] for a more general framework). In the case $w \not\equiv 0$ in $\Omega$, a natural question (which goes back to the beginning of the eighties) is whether the infinite number of solutions still persists under perturbation. A detailed analysis was originally carried out in [2, 3, 5-8, 25, 29, 30, 33, 35, 39] by Ambrosetti, Bahri, Berestycki, Ekeland, Ghoussoub, Krasnoselskii, Lions, Marino, Prodi, Rabinowitz, Struwe and Tanaka by introducing new perturbation methods. In particular, this question was brought to attention by Rabinowitz also in his monograph on minimax methods (see [34, Remark 10.58]). In [4] Bahri proved that, if $n \ge 3$ and $1 < p < \frac{n+2}{n-2}$, then there exists an open dense set of $w$ in $L^2(\Omega)$ such that problem (1.1) admits infinitely many solutions. In [8] Bahri and Lions proved that, if $n \ge 3$ and $1 < p < \frac{n}{n-2}$, then problem (1.1) admits infinitely many solutions for every $w \in L^2(\Omega)$. These results suggest the following conjecture, proposed by Bahri and Lions in [8]: the multiplicity result obtained in [8] holds also under the more general assumption $1 < p < \frac{n+2}{n-2}$. More recently, a new approach to tackle the break of symmetry in elliptic problems has been developed by Bolle, Chambers, Ghoussoub and Tehrani (see [10,11,15], which include also applications to more general nonlinear problems). However, that approach did not allow to solve the Bahri-Lions conjecture. In the present paper we describe a new possible method to approach this problem. The idea is to try to piece together solutions of Dirichlet problems in subdomains of $\Omega$ chosen in a suitable way. This idea was first used by Struwe in earlier papers (see [35][36][37] and references therein). In the present paper we consider as nodal regions subdomains of $\Omega$ that are deformations of cubes by suitable Lipschitz maps (we say that nodal solutions of this type have a check structure). It is interesting to observe that such a class of Lipschitz maps also appeared in some recent works by Rabinowitz and Byeon (see [13,14] and the references therein) concerning a rather different problem: constructing solutions having certain prescribed patterns for an Allen-Cahn model equation. Also in those papers, as in the present one, the Lipschitz condition is combined with the structure of $\mathbb{Z}^n$ and the covering of $\mathbb{R}^n$ by cubes with vertices in $\mathbb{Z}^n$.
The main multiplicity result of this paper is stated in Theorem 2.7 and says that if $\Omega$ satisfies a suitable geometric condition (condition (2.53)), then problem (1.1) admits infinitely many nodal solutions whose nodal structure is given by suitable partitions of $\Omega$ into subdomains that are Lipschitz deformations of arbitrarily small cubes. In order to explain the meaning of the geometric condition (2.53), which plays a crucial role in this paper, we need some more details on the method we use in the proof. Let $P_k$ denote the union of all the cubes with sides of length $\frac{1}{k}$ and vertices in $\frac{1}{k}\mathbb{Z}^n$ that are enclosed in $\Omega$ and, for $L > 1$, consider the class $C_L(P_k, \Omega)$ consisting of the bilipschitz maps between $P_k$ and $\Omega$ with Lipschitz constants in $[\frac{1}{L}, L]$. For all $T$ in $C_L(P_k, \Omega)$ and $k$ in $\mathbb{N}$, we construct by a minimax argument a nodal function $u^T_k$ in $H^1_0(\Omega)$ whose nodal regions are the deformations of these cubes by the map $T$. For $k$ large enough, $u^T_k$ satisfies equation (1.1) in each nodal region (by Proposition 2.4), and it is a solution of the Dirichlet problem (1.1) when, in addition, it satisfies the assumptions of Proposition 2.5 (a kind of stationarity property). Then, for all $k$, we minimize the energy functional $E$ in the set $\{u^T_k : T \in C_L(P_k, \Omega)\}$ and show that, if the minimum is achieved by a bilipschitz map $T^L_k$ with Lipschitz constants in the interior of $[\frac{1}{L}, L]$, then the corresponding function $u^{T^L_k}_k$ satisfies the stationarity condition of Proposition 2.5, so it is a solution of problem (1.1) for $k$ large enough. Condition (2.53) requires just that, for a suitable choice of $L > 1$, there exist $\bar k$ in $\mathbb{N}$ and $\bar L$ in $]1, L[$ such that the minimum is achieved by a map $T^L_k$ in $C_{\bar L}(P_k, \Omega)$ for every positive integer $k \ge \bar k$. Thus, if $\Omega$ satisfies condition (2.53), we obtain infinitely many nodal solutions $u^{T^L_k}_k$ having check structure. Moreover, the number of nodal regions of $u^{T^L_k}_k$ and its energy $E(u^{T^L_k}_k)$ tend to infinity as $k \to \infty$, while the size of the nodal regions tends to zero. Lemma 2.9 and Corollary 2.10 show that condition (2.53) holds true, for example, when $n = 1$ (the proof may also be adapted to deal with radial solutions in domains having radial symmetry). Indeed, in dimension $n = 1$, a more general result was obtained by Ehrmann in [23] (see also [24,26] for related results). There it is proved that the corresponding ordinary differential equation has infinitely many distinct solutions when $f$ is a function with superlinear growth satisfying quite general assumptions. However, the method used there relies on a shooting argument, typical of ordinary differential equations, combined with counting the oscillations of the solutions in the interval $(0, 1)$. Therefore, this method, which gives the existence of solutions having a sufficiently large number of zeroes in dimension $n = 1$, cannot be extended to higher dimensions. On the contrary, in the present paper we use a method which is more similar to the one introduced by Nehari in [31], and which extends in a natural way to the case $n > 1$. In fact, for example, Nehari's work was followed up by Coffman, who studied an analogous problem for partial differential equations (see [18,19]). Independently, this problem was also studied by Hempel (see [27,28]).
More recently, the method introduced by Nehari has also been used by Conti, Terracini and Verzini to study optimal partition problems in $n$-dimensional domains and related problems: in particular, the existence of minimal partitions and extremality conditions, the behaviour of competing species systems with large interactions, the existence of sign-changing solutions for superlinear elliptic equations, etc. (see [20][21][22][40]). Notice that Nehari's work deals with an odd differential operator, so the corresponding energy functional is even. Moreover, Nehari proves that for every positive integer $k$ there exists a solution having exactly $k$ zeroes. On the contrary, in the present paper (as Ehrmann in [23]) we find only solutions with a large number of zeroes; moreover, we prove that, for all $w$ in $L^2(\Omega)$, the zeroes tend to be uniformly distributed in all of the domain as their number tends to infinity (see Lemmas 2.9 and 3.2). The reason is that, when $w \not\equiv 0$, the Nehari type argument we use in the proof works only when the sizes of all the nodal regions are small enough, so that their number is sufficiently large. In order to show that our existence result is sharp, we also prove that the term $w$ in problem (1.1) can be chosen in such a way that the problem does not have solutions with a small number of nodal regions. More precisely, in the case $n = 1$ we show that for every positive integer $h$ there exists $w_h$ in $L^2(\Omega)$ such that every solution of problem (1.1) with $w = w_h$ has at least $h$ zeroes (see Corollary 3.6). Indeed, we show that for all $n \ge 1$ and for every eigenfunction $e_k$ of the Laplace operator $-\Delta$ in $H^1_0(\Omega)$ there exists $\bar w_k$ in $L^2(\Omega)$ such that every solution $u$ of problem (1.1) with $w = \bar w_k$ must have its sign related to the sign of $e_k$, in the sense that every nodal region of $e_k$ has a subset where $u$ and $e_k$ have the same sign (see Proposition 3.5). In the case $n > 1$, condition (2.53) seems more difficult to verify because the class $C_L(P_k, \Omega)$ of admissible deformations of the nodal structure might turn out to be too large. In fact, as we point out in Remark 3.1, the minimality of the map $T^L_k$ implies that as $k \to \infty$ the nodal regions of $u^{T^L_k}_k$ tend to have all the same size. In the case $n = 1$, this property is sufficient to control the Lipschitz constant of $T^L_k$, for $k$ large enough, in order to prove condition (2.53). On the contrary, if $n > 1$ the Lipschitz constant of $T^L_k$ might be very large, even if the nodal regions of $u^{T^L_k}_k$ tend as $k \to \infty$ to have all the same size, because their shape might be very different from the cubes of $\mathbb{R}^n$. Therefore, a natural idea is to restrict the class of admissible deformations $C_L(P_k, \Omega)$, taking also into account that our method of constructing nodal solutions with many small nodal regions having check structure can easily be adapted to deal with other classes of bilipschitz maps, different from $C_L(P_k, \Omega)$. For example, we can fix a bilipschitz map $T_0 : \Omega \to \Omega$ and consider as a class of admissible deformations a suitable neighbourhood of $T_0$ in $C_L(P_k, \Omega)$, that is, the class of all the maps in $C_L(P_k, \Omega)$ that are close to $T_0$ in a suitable sense (see Remark 3.1). Then, the geometric condition (2.53) has to be replaced by a similar condition that holds or fails depending on the choice of $T_0$ and of its neighbourhood (condition (3.7)), whose meaning is again that, for $k$ large enough, the minimization problem posed in this new class of admissible deformations is achieved by a map which is, in some sense, in the interior of this new class.
Thus, if $n > 1$, the problem is to choose carefully a suitable class of admissible deformations. In a similar way, for example, we can prove that if $\Omega$ is a cube of $\mathbb{R}^n$ with $n > 1$, $p > 1$, $p < \frac{n+2}{n-2}$ if $n > 2$, then for all $w$ in $L^2(\Omega)$ there exist infinitely many solutions $u_k(x)$ of problem (1.1) such that the nodal regions of the function $u_k(\frac{x}{k})$, after translations, tend to the cube as $k \to \infty$ (the proof will be reported in a paper in preparation). Notice that, in particular, this result shows that the Bahri-Lions conjecture is true when $\Omega$ is a cube of $\mathbb{R}^n$ with $n \ge 3$. Let us point out that our method does not require techniques of deformation from the symmetry and may be applied to more general problems: for example, when the nonlinear term $|u|^{p-1}u$ is replaced by $c_+(u^+)^p - c_-(u^-)^p$ with $c_+$ and $c_-$ two positive constants (see Lemma 3.2), in the case of different, nonhomogeneous boundary conditions, and even in the case of nonlinear elliptic equations involving critical Sobolev exponents.

Existence of infinitely many nodal solutions

In order to find infinitely many solutions with an arbitrarily large number of nodal regions, we proceed as follows. Let us set
Notice that there exists $k^* \in \mathbb{N}$ such that $Z_k \neq \emptyset$ for all $k \ge k^*$. For all subsets $P, Q$ of $\mathbb{R}^n$ and for all $L \ge 1$, let us denote by $C_L(P, Q)$ the set of all the functions $T : P \to Q$ such that
For all $k \ge k^*$, $z \in Z_k$, $L \ge 1$, $T \in C_L(P_k, \Omega)$, let us set
Since $p < \frac{n+2}{n-2}$ when $n \ge 3$, one can easily verify that the infimum in (2.4) is achieved. Moreover, for all $L \ge 1$ and $k \ge k^*$, also the infimum
is achieved (as one can prove by standard arguments using the Ascoli-Arzelà theorem), and the following lemma holds. (2.7) We say that
In fact, arguing by contradiction, assume that
It follows that (up to a subsequence) $(\bar u_k)_k$ is bounded in $H^1_0(\Omega)$ and there exists a function $\bar u \in H^1_0(\Omega)$ such that $\bar u_k \to \bar u$, as $k \to \infty$, weakly in $H^1_0(\Omega)$, in $L^{p+1}(\Omega)$, and almost everywhere in $\Omega$ (here $\bar u_k$ is extended by the value $0$ in $\Omega \setminus T_k(\frac{1}{k} C_{z_k})$). Since $\operatorname{meas} T_k(\frac{1}{k} C_{z_k}) \to 0$ as $k \to \infty$, from the almost everywhere convergence we obtain $\bar u \equiv 0$ in $\Omega$, which is a contradiction because $\bar u_k \to \bar u$ in $L^{p+1}(\Omega)$ and (2.7) holds for all $k \ge k^*$. Thus (2.8) is proved. Notice that (2.11)
As a consequence, for all $k \ge k^*$ we obtain (2.12) and, as $k \to \infty$,
which completes the proof.
is achieved by a unique minimizing function $\tilde u^T_{k,z}$. Moreover, we have
On the other hand, since $p < \frac{n+2}{n-2}$ when $n \ge 3$, one can prove by standard arguments that (up to a subsequence) it converges to a function $\tilde u^T_{k,z} \in H^1_0(T(\frac{1}{k} C_z))$ such that
In order to prove (2.15) we argue by contradiction and assume that
Then, for all $k \ge k(L)$ there exist $z_k \in Z_k$ and $T_k \in C_L(P_k, \Omega)$ such that (up to a subsequence)
Since $E(\tilde u^{T_k}_{k,z_k}) \le 0$ and the sequence $\tilde u^{T_k}_{k,z_k}$ (extended by the value zero outside $\frac{1}{k} C_{z_k}$) is bounded in $L^{p+1}(\Omega)$, we infer that it is bounded also in $H^1_0(\Omega)$. We say that, as a consequence,
Thus, we can conclude that (2.15) holds. Finally, notice that $\tilde u^T_{k,z}$ is the unique minimizing function for (2.14) because the functional $E$ is strictly convex in a suitable neighborhood of zero. So the proof is complete. Taking into account Corollary 2.2, for all $k \ge k(L)$, $z \in Z_k$ and $T \in C_L(P_k, \Omega)$ we can consider a minimizing function $\tilde u^T_{k,z}$ for the minimum (2.14). Moreover, since $p > 1$, for all $u \in H^1_0(\frac{1}{k} C_z)$ there exists the maximum
Proof. Let us consider a minimizing sequence $(u_i)_{i\in\mathbb{N}}$ for the minimum (2.23).
Without any loss of generality, we can assume that
It follows that this sequence is bounded in $H^1_0(T(\frac{1}{k} C_z))$. Therefore, since $p < \frac{n+2}{n-2}$ when $n \ge 3$, up to a subsequence it converges weakly in $H^1_0$, in $L^{p+1}$ and a.e. to a function
Notice that the $L^{p+1}$ convergence and (2.24) imply
We say that, indeed, the convergence is strong in $H^1_0(T(\frac{1}{k} C_z))$. In fact, arguing by contradiction, assume that (up to a subsequence)
Therefore, we can conclude that $u_i \to \hat u$ in
Proof. It is clear that the function $\tilde u^T_{k,z}$ (a local minimum of the functional $E$) is a solution of the Dirichlet problem (2.28). In order to prove that, for $k$ large enough, also $u^T_{k,z}$ is a solution of the same problem, let us consider the function $G$:
Let us assume, for example, $\sigma(z) = 1$ (in a similar way one can argue when $\sigma(z) = -1$). One can verify by direct computation that for all $u \not\equiv \tilde u^T_{k,z}$ there exists a unique $t_u > 0$ such that
Taking into account that $\tilde u^T_{k,z}$ is a solution of problem (2.28), we obtain by direct computation
Notice that
and, by (2.15),
Moreover, we have
where, for all $k \in \mathbb{N}$,
As a consequence, since $p < \frac{n+2}{n-2}$ when $n \ge 3$, there exists $\bar\psi$ in $H^1_0(\Omega)$ such that (up to a subsequence) $\psi_i \to \bar\psi$ as $i \to \infty$ weakly in $H^1_0(\Omega)$, in $L^{p+1}(\Omega)$ and a.e. in $\Omega$. Moreover, since $\lim_{i\to\infty} \operatorname{meas} T_i(\frac{1}{k_i} C_{z_i}) = 0$, the a.e. convergence implies that $\bar\psi \equiv 0$ in $\Omega$, which is in contradiction with the convergence in $L^{p+1}(\Omega)$ because $\int_\Omega |\psi_i|^{p+1}\,dx = 1$ for all $i \in \mathbb{N}$. Thus, we can conclude that $\lim_{k\to\infty} \lambda_k = \infty$. It follows that, for $k$ large enough,
As a consequence, if we denote by
the set defined by
Therefore, there exists a Lagrange multiplier $\mu \in \mathbb{R}$ such that
On the other hand, since $u^T_{k,z} - \tilde u^T_{k,z} \ge 0$ in $T(\frac{1}{k} C_z)$, we have
so $u^T_{k,z}$ is a solution of problem (2.28). When the function $u^T_k = \sum_{z\in Z_k} u^T_{k,z}$ satisfies a suitable stationarity property, then it is a solution of problem (1.1) (here the function $u^T_{k,z}$ is extended by the value zero outside $T(\frac{1}{k} C_z)$). In fact, the following proposition holds.
Proposition 2.5. Assume that $k \ge k_1(L)$ and $T \in C_L(P_k, \Omega)$. Moreover, assume that the function $u^T_k = \sum_{z\in Z_k} u^T_{k,z}$ satisfies the following condition:
where $\nu_{k,z}$ denotes the outward normal on $\partial T(\frac{1}{k} C_z)$. Thus, in order to obtain $E'(u^T_k)[\varphi] = 0$, we have to prove that if $z_1, z_2 \in Z_k$ and $|z_1 - z_2| = 1$ (that is, $T(\frac{1}{k} C_{z_1})$ and $T(\frac{1}{k} C_{z_2})$ are adjacent subdomains of $\Omega$), then
Taking into account that $u^T_{k,z}$ satisfies problem (2.28) for all $z \in Z_k$, for all vector fields
In order to obtain a function $u^T_k$ which is stationary in the sense of Proposition 2.5, we can, for example, minimize $E(u^T_k)$ with respect to $T$ for $k$ large enough. First notice that, since $\Omega$ is a smooth bounded domain, there exist $k' \ge k^*$ and $L' \ge 1$ such that, for all $k \ge k'$ and $L \ge L'$, we have (2.49)
Moreover, using the Ascoli-Arzelà theorem, one can show the following lemma. For all $L \ge 1$ and $T \in C_L(P_k, \Omega)$, let us set (2.51)
Using again the Ascoli-Arzelà theorem, we infer that, for all $L \ge L'$ and $k \ge k'$, there exists (2.52)
Notice that $T^L_k$ depends only on the geometric properties of the subdomains $T^L_k(\frac{1}{k} C_z)$ with $z \in Z_k$. A large $L(T^L_k)$ means that there are large differences in the sizes and in the shapes of these subdomains. We can now state the following multiplicity result. Thus, taking into account Proposition 2.5, we have to prove that $E'(u_k)[v \cdot Du_k] = 0$ for all vector fields $v \in C^1(\bar\Omega, \mathbb{R}^n)$ such that $v \cdot \nu = 0$ on $\partial\Omega$.
Therefore, for all vector fields $v \in C^1(\bar\Omega, \mathbb{R}^n)$ such that $v \cdot \nu = 0$ on $\partial\Omega$ and for all $\tau \in \mathbb{R}$, let us consider the function $D_\tau : \Omega \to \Omega$ defined by the Cauchy problem (2.60)
Thus, we have to prove that
For the proof, we argue by contradiction and assume that (2.61) does not hold. For example, we assume that
As a consequence, there exists a sequence of positive numbers
From Corollary 2.2 we infer that, if we choose $\bar k$ large enough, for all $k \ge \bar k$, $z \in Z_k$ and $i \in \mathbb{N}$ there exists a unique minimizing function $\tilde u$
As in the proof of Proposition 2.4, let us consider the functions
for $i$ large enough. In fact, arguing by contradiction, assume that (up to a subsequence, still denoted by $(\tau_i)_{i\in\mathbb{N}}$) the inequality (2.66) does not hold. Then, for all $i \in \mathbb{N}$ and $z \in Z_k$, there exists $t_{z,i} \ge 0$ such that
It follows that $\lim_{i\to\infty} t_{z,i} = 1$ for all $z \in Z_k$ and
in contradiction with (2.62). Thus, (2.66) holds. Notice that, if $\bar k$ is chosen large enough,
and tends to $-\infty$ as $t \to \infty$. As a consequence, there exists $t^i_{k,z} \in \mathbb{R}$ such that
Therefore, from (2.66) and (2.71) we obtain
for $i$ large enough, in contradiction with (2.59). Thus, we can conclude.
Let us point out that, if $n = 1$, condition (2.53) in Theorem 2.7 is satisfied. In fact, it is a consequence of the following lemma (see also Remark 3.1 concerning the case $n > 1$).
Lemma 2.9. Assume $n = 1$, $p > 1$ and $w \in L^2(\Omega)$. Then, for all $L > 1$ there exists $\bar k(L) \in \mathbb{N}$ such that
which implies (2.74), as one can easily verify. Also, notice that in this case we have
Taking into account that
for all $w \in L^2(\Omega)$, we obtain
and, as a consequence,
Then, taking into account that
and that $\lim_{i\to\infty} J(k_i)$
As one can easily verify, for all $i \in \mathbb{N}$ there exists a function $T_i \in C_L(P_{k_i}, \Omega)$ such that
Taking into account that
On the other hand,
which is a contradiction because
When the case (2.85) occurs, we argue in an analogous way. In this case, for all $i \in \mathbb{N}$ we choose $\tilde z_i$ in $Z_{k_i}$ such that
Moreover, we can consider a function $\tilde T_i \in C_L(P_{k_i}, \Omega)$ satisfying all the properties of $T_i$, with $z_i$ and $\tilde z_i$ in place of the previous pair of indices. Then, we can repeat for $\tilde T_i$, $z_i$ and $\tilde z_i$ the same arguments as before. In particular, the property (analogous to (2.94)) now follows from (2.85) and (2.98). Thus, also in this case we obtain a contradiction with the minimality property of
So the proof is complete. As a direct consequence of Theorem 2.7, Proposition 2.8 and Lemma 2.9 we obtain the following corollary.

Final remarks

Notice that the method we used in Sect. 2 to find infinitely many solutions of problem (1.1) with a large number of nodal regions having a prescribed structure (a check structure) may also be used in other elliptic problems, as we show in this section. It is clear that in this method condition (2.53) plays a crucial role. In Sect. 2 this condition is proved only in the case $n = 1$. In the next remark, we discuss the case $n > 1$. As a consequence, we can construct a sequence $(k_i)_{i\in\mathbb{N}}$ such that
Notice that $L(T^{L_i}_{k_i})$ is large, for example, when there are large differences in the sizes or in the shapes of the subdomains $T^{L_i}_{k_i}(\frac{1}{k_i} C_z)$ with $z \in Z_{k_i}$. For $k_i$ large enough, too large differences seem to be incompatible with the minimality property
This fact explains why condition (2.53) holds in the case $n = 1$.
In the case $n > 1$, on the contrary, even if the subdomains $T^{L_i}_{k_i}(\frac{1}{k_i} C_z)$ with $z \in Z_{k_i}$ all have the same shape and the same size, we cannot exclude that $L(T^{L_i}_{k_i})$ is large as a consequence of the fact that the shape of these subdomains is very different from that of the cubes of $\mathbb{R}^n$. This explains why it is difficult to prove that condition (2.53) holds also for $n > 1$. Therefore, in the case $n > 1$, the natural idea is to restrict the class of admissible deformations of the nodal regions. For example, we can fix $\bar L \ge L'$, $T_0 \in C_{\bar L}(\Omega, \Omega)$, $r > 0$ and consider the corresponding set of deformations, requiring that condition (3.7) (analogous to condition (2.53)) is satisfied. It is clear that condition (3.7) holds or fails depending on the choice of $\bar L$, $T_0$ and $r$, which have to be chosen in a suitable way. For example, in the case $n = 1$, if we choose $T_0(x) = x$ for all $x \in \Omega$, then (3.7) holds for all $\bar L > 1$ and $r > 0$, as follows from Lemma 2.9. In the case $n > 1$, condition (3.7) seems to have more chances than condition (2.53) of being satisfied. In fact, as we show in a paper in preparation, a variant of this method works, for example, when $\Omega$ is a cube of $\mathbb{R}^n$ with $n > 1$, $p > 1$, $p < \frac{n+2}{n-2}$ if $n > 2$, and, for all $w \in L^2(\Omega)$, allows us to find infinitely many solutions $u_k(x)$ such that the nodal regions of $u_k(\frac{x}{k})$, after translations, tend to the cube as $k \to \infty$. Therefore, it seems quite natural to expect that, by a suitable choice of $\bar L$, $T_0$ and $r$, for every bounded domain $\Omega$ in $\mathbb{R}^n$ with $n > 1$ and for all $w \in L^2(\Omega)$ one can find infinitely many nodal solutions of problem (1.1) with $p > 1$ and $p < \frac{n+2}{n-2}$ if $n > 2$. Notice that this method of constructing solutions with nodal regions having this check structure works for more general nonlinearities, even when they are not perturbations of symmetric nonlinearities: for example, when in problem (1.1) the term $|u|^{p-1}u + w$ is replaced by $c_+(u^+)^p - c_-(u^-)^p + w$. In fact, this method does not require any technique of deformation from the symmetry. For example, let us show how Lemma 2.9 has to be modified in this case. In this case the energy functional is
We denote by $F_0$ the functional $F$ when $w = 0$. Now, consider the number $\hat L \ge 1$ defined by $\hat L = \frac{1}{\min\{\hat t,\, 2-\hat t\}}$, where $\hat t \in\, ]0, 2[$ is the unique number such that
Notice that $\hat t = 1$ (and so $\hat L = 1$) if and only if $c_+ = c_-$. Then, we have the following lemma, which extends Lemma 2.9.
cannot happen. In fact, for all $i \in \mathbb{N}$ we can choose $\hat z_i$ and $\hat z_i + 1$ in $Z_{k_i}$ such that (up to a subsequence)
Taking into account the minimality property
As a consequence, we obtain
In order to prove that $\lim_{i\to\infty} L(T^L_{k_i}) = \hat L$, arguing by contradiction, assume that $\lim_{i\to\infty} L(T^L_{k_i}) > \hat L$. As a consequence, either there exists a sequence $(z_i)_{i\in\mathbb{N}}$ such that $z_i \in Z_{k_i}$ for all $i \in \mathbb{N}$ and (3.18) holds, or there exists a sequence $(z_i)_{i\in\mathbb{N}}$ such that $z_i \in Z_{k_i}$ for all $i \in \mathbb{N}$ and (3.19) holds. Assume, for example, that $\hat t \le 1$ (otherwise we argue in a similar way but with $\hat t$ replaced by $2 - \hat t$). Then $\hat L = \frac{1}{\hat t}$ and, if $\hat t = 1$, Lemma 2.9 applies. Thus, it remains to consider the case $\hat t \in\, ]0, 1[$. Consider first the case where (3.18) holds. Notice that there exists a sequence $(\zeta_i)_{i\in\mathbb{N}}$ such that $\zeta_i \in Z_{k_i}$ and $|z_i - \zeta_i| = 1$ for all $i \in \mathbb{N}$. Then, the minimality property (3.13) implies
and, arguing as in the proof of Lemma 2.9,
As a consequence of (3.20) and (3.21), we obtain
which is a contradiction because $\frac{1}{\hat t} > 2 - \hat t$ for $\hat t \in\, ]0, 1[$. Thus, we can conclude that the case (3.18) cannot happen. In a similar way we argue in order to obtain a contradiction in the case (3.19). In fact, assume that (3.19) holds.
Notice that there exists a sequence $(\zeta_i)_{i\in\mathbb{N}}$ such that $\zeta_i \in Z_{k_i}$ and $|z_i - \zeta_i| = 1$ for all $i \in \mathbb{N}$. As before, the minimality property (3.13) implies that
As a consequence, we infer that
Thus, we can conclude that $\lim_{i\to\infty} L(T^L_{k_i}) = \hat L$, so the proof is complete.
Remark 3.3. The results we present in this paper concern the existence of solutions with a large number of nodal regions. In particular, when $\Omega \subset \mathbb{R}^n$ with $n = 1$, these solutions must have, as a consequence, a large number of zeroes. In the next propositions we show that the term $w$ can be chosen in such a way that the sign of the solutions is related to the nodal regions of the eigenfunctions of the Laplace operator $-\Delta$ in $H^1_0(\Omega)$. In particular, if $n = 1$ we show that for suitable terms $w$ in $L^2(\Omega)$, problem (1.1) does not have solutions with a small number of zeroes: more precisely, we show that for every positive integer $h$ there exists $w_h \in L^2(\Omega)$ such that every solution of problem (1.1) has at least $h$ zeroes (this follows from Corollary 3.6). (3.28)
Let $u \in H^1(D)$ be a weak solution of the equation
then $\sup_D u > 0$.
Proof. Let $e_1$ be a positive eigenfunction corresponding to the eigenvalue $\lambda_1$, that is,
$$\Delta e_1 + \lambda_1 e_1 = 0, \quad e_1 > 0 \text{ in } D, \quad e_1 \in H^1_0(D). \tag{3.31}$$
Arguing by contradiction, assume that (3.28) holds and $u \ge 0$ in $D$. Then, from (3.29) we infer that
where $\nu$ denotes the outward normal on $\partial D$, so that
and $g(x, t) \ge \lambda_1 t + c$ for all $x \in D$ and all $t \in \mathbb{R}$, for a suitable constant $c > 0$. It follows that
which implies $c \int_D e_1\,dx \le 0$, which is a contradiction. Thus, the function $u$ cannot be a.e. nonnegative in $D$. In a similar way one can show that we cannot have $u \le 0$ a.e. in $D$ when (3.30) holds, so the proof is complete. In particular, Lemma 3.4 may be used to obtain information on the effect of the term $w$ on the sign changes of the solutions of problem (1.1), as we describe in the following proposition.
Proposition 3.5. Let $\Omega \subset \mathbb{R}^n$ with $n \ge 1$ and let $e_k \in H^1_0(\Omega)$ be an eigenfunction of the Laplace operator $-\Delta$ with eigenvalue $\lambda_k$, that is, $\Delta e_k + \lambda_k e_k = 0$ in $\Omega$. Assume that $w \in L^2(\Omega)$ satisfies
Proof. Notice that $\lambda_k$ is the first eigenvalue of the Laplace operator $-\Delta$ in $H^1_0(\Omega_k)$ and $|e_k|$ is a corresponding positive eigenfunction. Moreover, if we set $g(x, t) = |t|^{p-1} t + w(x)$, we infer from (3.36) that, if $w(x) > 0$,
$$g(x, t) \ge \lambda_k t + \bar c \quad \forall t \ge 0 \tag{3.39}$$
and, if $w(x) < 0$,
$$g(x, t) \le \lambda_k t - \bar c \quad \forall t \le 0, \tag{3.40}$$
where
$$\bar c = \inf |w| - \max\{\lambda_k t - t^p : t \ge 0\} > 0. \tag{3.41}$$
Since $u e_k \ge 0$ and $w e_k \ge 0$ in $\Omega_k$, and $e_k$ has constant sign in $\Omega_k$, we have $u \ge 0$ and $w > 0$ in $\Omega_k$ if $e_k > 0$ in $\Omega_k$, and $u \le 0$, $w < 0$ in $\Omega_k$ in the opposite case. Thus, our assertion follows from Lemma 3.4. In fact, for example, in the case $e_k > 0$ in $\Omega_k$ we cannot have
$$-\Delta u = |u|^{p-1}u + w \quad \text{in } \Omega_k, \tag{3.42}$$
otherwise $\inf_{\Omega_k} u < 0$ by Lemma 3.4, while $u \ge 0$ in $\Omega_k$. In the opposite case, when $e_k < 0$ in $\Omega_k$, one can argue in a similar way, so the proof is complete. Assume, for example, that $e_k > 0$ on $I_1$ (in a similar way one can argue if $e_k < 0$ in $I_1$). Then, from Proposition 3.5 we infer that for every solution $u$ of problem (1.1) we have $\inf_{I_i} u < 0$ for $i$ odd and $\sup_{I_i} u > 0$ for $i$ even.
Remark 3.7. Notice that all the assertions in Proposition 3.5 and Corollary 3.6 still hold when the nonlinear term $|u|^{p-1}u$ is replaced by $c_+(u^+)^p - c_-(u^-)^p$, where $c_+$ and $c_-$ are two positive constants. In this case we have only to replace $\max\{\lambda_k t - t^p : t \ge 0\}$ by $\max\{\lambda_k t - \bar c\, t^p : t \ge 0\}$, where $\bar c = \min\{c_+, c_-\} > 0$.
Notice that this method of constructing solutions with nodal regions having a check structure may be used for nonlinear elliptic problems with different boundary conditions, for systems, and also when the nonlinear term has critical growth. For example, for all $\lambda \in \mathbb{R}$ consider the Dirichlet problem
$$-\Delta u = |u|^{\frac{4}{n-2}}u + \lambda u + w \quad \text{in } \Omega, \qquad u \in H^1_0(\Omega). \tag{3.44}$$
Using this method, if the functional $F$ satisfies condition (2.53), one can prove that for $n \ge 4$ and $\lambda > 0$ the functional $F$ has an unbounded sequence of critical levels. More precisely, the following theorem can be proved.
Theorem 3.8. Let $n \ge 4$, $\lambda > 0$, $w \in L^2(\Omega)$ and assume that condition (2.53) holds for the functional $F$. Then, there exists $\bar k \ge k^*$ such that for all $k \ge \bar k$ there exist $T^{\bar L}_k \in C_{\bar L}(P_k, \Omega)$ and a solution $u_k$ of problem (3.44) such that $T^{\bar L}_k(P_k) = \Omega$ and, if for all $z \in Z_k$ we set $u^z_k(x) = u_k(x)$ when $x \in T^{\bar L}_k(\frac{1}{k} C_z)$,
Let us point out that Theorem 3.8 gives a new result also when $w \equiv 0$ in $\Omega$. In fact, in this case the functional $F$ is even, but well-known results (see [12,16,38]) guarantee only the existence of a finite number of solutions (because some compactness conditions hold only at suitable levels of $F$). On the contrary, our method, combined with some estimates as in [12] and in [16], allows us to construct infinitely many solutions with many nodal regions and arbitrarily large energy levels.
8,442
2022-04-22T00:00:00.000
[ "Mathematics" ]
Conditional lower bounds on the distribution of central values in families of $L$-functions We establish a general principle that any lower bound on the non-vanishing of central $L$-values obtained through studying the one-level density of low-lying zeros can be refined to show that most such $L$-values have the typical size conjectured by Keating and Snaith. We illustrate this technique in the case of quadratic twists of a given elliptic curve, and similar results would hold for the many examples studied by Iwaniec, Luo, and Sarnak in their pioneering work on $1$-level densities. Introduction Selberg [11,12] (see [8] for a recent treatment) established that if t is chosen uniformly from [0, T] then the values log |ζ(1/2 + it)| are distributed approximately like a Gaussian random variable with mean 0 and variance (1/2) log log T. More recently, Keating and Snaith [6] have conjectured that central values in families of L-functions have an analogous log-normal distribution with a prescribed mean and variance depending on the "symmetry type" of the family. This is a powerful conjecture which gives more precise versions of conjectures on the non-vanishing of L-values; for example, it refines Goldfeld's conjecture (towards which remarkable progress has been made with the work of Smith [13]) that the rank in families of quadratic twists of an elliptic curve is 0 for almost all twists with even sign of the functional equation. In [7] we enunciated a general principle which establishes the upper bound part (in a sense to be made precise below) of the Keating-Snaith conjecture in any family where somewhat more than the first moment can be computed. In this paper, we consider the complementary problem of obtaining lower bounds in the Keating-Snaith conjecture, which is intimately tied up with questions on the non-vanishing of L-values. One analytic approach, conditional on the Generalized Riemann Hypothesis, towards such non-vanishing results is based on computing the 1-level density for low-lying zeros in families of L-functions, and our goal in this paper is to show how this approach (in the situations where it succeeds in producing a positive proportion of non-vanishing) may be refined to give corresponding lower bounds towards the Keating-Snaith conjectures. In a later paper, we shall consider similar refinements of the mollifier method, which is another analytic approach that in many cases establishes non-vanishing results unconditionally. Algebraic approaches such as Smith's work [13] on Goldfeld's conjecture are capable of establishing definitive non-vanishing results (or, for other examples, see Rohrlich [9,10] and Chinta [2]), but we are unable to refine these methods to show that the non-zero values that are produced in fact have the typical size predicted by the Keating-Snaith conjectures.
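For reference, Selberg's theorem quoted above can be stated explicitly as follows (a standard formulation, written here with the normalization implied by the stated mean and variance):

```latex
% Selberg's central limit theorem for log|zeta(1/2+it)|,
% with mean 0 and variance (1/2) log log T as stated above.
\[
  \frac{1}{T}\,\mathrm{meas}\Bigl\{\, t \in [0,T] :
    \frac{\log\bigl|\zeta(\tfrac12 + it)\bigr|}{\sqrt{\tfrac12 \log\log T}} \in (\alpha,\beta) \Bigr\}
  \;\longrightarrow\; \frac{1}{\sqrt{2\pi}} \int_{\alpha}^{\beta} e^{-u^{2}/2}\, du ,
  \qquad T \to \infty .
\]
```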
To illustrate our method, we treat the family of quadratic twists of an elliptic curve E defined over Q with conductor N, where the 1-level density of low lying zeros has been studied by many authors, notably Heath-Brown [3].Let the associated L-function be where the coefficients a(n) are normalized such that |a(n)| ≤ d(n).Since elliptic curves are known to be modular, L(s, E) has an analytic continuation to the entire complex plane and satisfies the functional equation where ǫ E , the root number, is ±1 and Throughout the paper, let d denote a fundamental discriminant coprime to 2N, and let χ d = ( d • ) denote the associated primitive quadratic character.Let E d denote the quadratic twist of E by d, and let its associated L-function be is entire and satisfies the functional equation Note that, by Waldspurger's theorem, and in this paper, we shall restrict attention to those twists with root number 1. Put therefore The Keating-Snaith conjectures predict that for d ∈ E, the quantity log L( 12 , E d ) has an approximately normal distribution with mean − 1 2 log log |d| and variance log log |d|.To state this precisely, let α < β be real numbers, and for any X ≥ 20, let us define (1) Then the Keating-Snaith conjecture states that, for fixed intervals (α, β) and as X → ∞, (2) Here we interpret log L( 1 2 , E d ) to be negative infinity if L( 1 2 , E d ) = 0, and the conjecture implies in particular that L( 12 , E d ) = 0 for almost all d ∈ E. Towards this conjecture, we established in [7] that N (X; α, ∞) is bounded above by the right hand side of the conjectured relation (2).Complementing this, we now establish a conditional lower bound for N (X; α, β). Theorem 1. Assume the Generalized Riemann Hypothesis for the family of twisted Lfunctions L(s, E × χ) for all Dirichlet characters χ.Then for fixed intervals (α, β) and as X → ∞ we have Above we have assumed GRH for all character twists of L(s, E); this is largely for convenience, and would allow us to restrict d in progressions.With more effort one could relax the assumption to GRH for the family of quadratic twists L(s, E d ).Note that the factor 1 4 in our theorem matches the proportion of quadratic twists with non-zero L-value obtained in Heath-Brown's work [3]. 
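The displayed definitions (1) and (2) did not survive in this copy. A standard formulation consistent with the surrounding text (a normal distribution for log L(1/2, E_d) with mean −(1/2) log log |d| and variance log log |d|, with E denoting the family of twists with root number +1 introduced above) would read as follows; the exact normalization used by the authors may differ in detail:

```latex
% Hedged reconstruction of (1) and (2): the count of twists whose normalized
% central value falls in (alpha, beta), and the conjectured Gaussian law.
\[
  N(X;\alpha,\beta) \;=\; \#\Bigl\{\, d \in \mathcal{E},\ |d| \le X :\
    \frac{\log L(\tfrac12, E_d) + \tfrac12 \log\log|d|}{\sqrt{\log\log|d|}} \in (\alpha,\beta) \Bigr\},
\]
\[
  N(X;\alpha,\beta) \;\sim\; \#\{\, d \in \mathcal{E} : |d| \le X \,\}
    \cdot \frac{1}{\sqrt{2\pi}} \int_{\alpha}^{\beta} e^{-t^{2}/2}\, dt ,
  \qquad X \to \infty .
\]
```

On this reading, Theorem 1 gives a lower bound of one quarter of the conjectured quantity, matching the Heath-Brown proportion mentioned above.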
While we have described results for the family of quadratic twists of an elliptic curve, the method is very general and applies to many situations where 1-level densities of low lying zeros in families have been analyzed and yield a positive proportion of non-vanishing for the central values.The work of Iwaniec, Luo, and Sarnak [5] gives many such examples, and the technique described here refines their non-vanishing corollaries, showing that the non-zero L-values that are produced have the typical size conjectured by Keating and Snaith.For instance, consider the family of symmetric square L-functions L(s, sym 2 f ) where f ranges over Hecke eigenforms of weight k for the full modular group (denote the set of such eigenforms by H k ), with k ≤ K (thus there are about K 2 /48 such L-values).Assuming GRH in this family, Iwaniec, Luo, and Sarnak (see Corollary 1.8 of [5]), showed that at least a proportion 8 9 of these L-values are non-zero.We may refine this to say that for any fixed interval (α, β) and as We end the introduction by mentioning the recent work of Bui, Evans, Lester, and Pratt [1] who establish "weighted" (where the weight is a mollified central value) analogues of the Keating-Snaith conjecture.This amounts to a form of conditioning on non-zero value since central values that are zero are assigned a weight equal to zero.The use of such a weighted measure allows [1] to establish a full asymptotic, however as a side effect they have little control over the nature of the weight.Acknowledgments.We are grateful to Emmanuel Kowalski for a careful reading of the paper, and helpful comments.The first author was partially supported by DMS-1902063.The second author is partially supported by an NSF grant, and a Simons Investigator award from the Simons Foundation.The paper was completed while KS was a Senior Fellow at the Institute for Theoretical Studies, ETH Zürich, whom he thanks for their excellent working conditions, and warm hospitality. Notation and statements of the key propositions We begin by introducing some notation, as in our paper [7], and then describing three key propositions which underlie the proof of the main theorem.Let N 0 denote the lcm of 8 and N. Let κ be ±1, and let a mod N 0 denote a residue class with a ≡ 1 or 5 mod 8.We assume that κ and a are such that for any fundamental discriminant d with sign κ and with We write below where we may write a(p) = α p + α p for a complex number α p of magnitude 1 (unique up to complex conjugation), and then For fundamental discriminants d ∈ E with |d| ≤ 3X, and a parameter 3 ≤ x define (3) Let h denote a smooth function with compactly supported Fourier transform and such that |h(x)| ≪ (1 + x 2 ) −1 for all x ∈ R. For concreteness, one could simply consider h to be the Fejer kernel given by ( 4) Lastly, let Φ denote a smooth, non-negative function compactly supported in [ 1 2 , 5 2 ] with Φ(x) = 1 for x ∈ [1, 2], and we put Φ(s) = ∞ 0 Φ(x)x s dx.Below all implied constants will be allowed to depend on N, h, and Φ, which are considered fixed. Our first proposition connects log L( To analyze sums over the zeros we shall use the following proposition, whose proof is based on the explicit formula.The ideas behind this proposition are also familiar, and in this setting (and in the case ℓ = 1 below) may be traced back to the work of Heath-Brown [3]. Proposition 2. 
Let h be a smooth function with h(x) ≪ (1 + x 2 ) −1 and whose Fourier transform is compactly supported in [−1, 1].Let L ≥ 1 be a real number, and ℓ be a positive integer coprime to N 0 , and assume that e L ℓ 2 ≤ X 2 .If ℓ is neither a square, nor a prime times a square, then If ℓ is a square then Finally if ℓ is q times a square, for a prime number q, then (7) Finally, to understand the distribution of P(d; x) both when d is chosen uniformly over discriminants d ∈ E, and when d ∈ E is weighted by contributions from low-lying zeros, we shall use the method of moments, drawing upon the following proposition.Proposition 3. Let k be any fixed non-negative integer.Let X be large, and put x = X 1/ log log log X .Then (8) d∈E(κ,a) where M k denotes the k-th Gaussian moment: Further, for any parameter L ≥ 1 with e L ≤ X 2 we have, Deducing the Theorem from the main propositions We keep the notations introduced in Section 2. Let X be large, and put x = X 1/ log log log X . and such that there are no zeros Proof.Take Φ to be a smooth approximation to the indicator function of the interval [1,2], and let κ and a mod N 0 be as in Section 2. The first part of Proposition 3 (namely ( 8)) together with the method of moments shows that (10) d∈E(κ,a) Next, take h to be the Fejer kernel given in (4), and L = (2 − δ/2) log X.Then the second part of Proposition 3 together with the method of moments shows that d∈E(κ,a) P(d;x)/ √ log log X∈(α,β) Note that the weights γ d h(γ d L/(2π)) are always non-negative, and if L(s, E d ) has a zero with |γ d | ≤ (log X log log X) −1 then the weight is ≥ 2 + o(1) (since there would be a complex conjugate pair of such zeros, or a double zero at 1 2 ).Combining this with (10), and summing over all the possibilities for κ and a, we obtain the lemma. Lemma 2. The number of discriminants Proof.Applying Proposition 2 with ℓ = 1, h given as in (4), and 1 ≤ L ≤ (2 − δ) log X, we obtain (after summing over the possibilities for κ and a) Integrate both sides of this estimate over L in the range log x ≤ L ≤ 2 log x.Since, for any y > 0 and t = 0, 1 y 2y y sin(πtu) πtu 2 , and therefore we may conclude that The lemma follows at once.With these results in place, it is now a simple matter to deduce the main theorem.By Proposition 1 1 we know that for Lemma 1 tells us that for d ∈ G X (α, β) we may arrange for P(d; x)/ √ log log X to lie in the interval (α, β) and for there to be no zeros with |γ d | ≤ (log X log log X) −1 .Lemma 2 now allows us to discard ≪ X/ log log log X elements of G X (α, β) so as to ensure that the contribution of zeros with |γ d | ≥ (log X log log X) −1 is O((log log log X) 3 ).Thus there are which completes the proof. 
Proof of Proposition 1 A straight-forward adaptation of Lemma 1 from [14] (itself based on an identity of Selberg) shows that for any σ ≥ 1 2 with L(σ, E d ) = 0, and any x ≥ 3 one has (11) Here ρ d runs over the non-trivial zeros of L(s, E d ), and this identity in fact holds unconditionally.Now assume GRH for L(s, E d ) and write We may restrict attention to the real part of the integral above since all the other terms involved are real, or noting that the zeros ρ d appear in conjugate pairs.Consider first the sum over n in (12).The contribution from prime powers n = p k with k ≥ 3 is plainly O(1).The contribution of the terms n = p is P(d; x) + O(1), where the error term O(1) arises from the primes dividing N 0 .Finally, by Rankin-Selberg theory (see for instance [4]) it follows that (13) Thus the contribution of the sum over n in ( 12) is ( 14) 1).Next we turn to the sum over zeros in (12).If x and larger values of σ.The first range contributes Thus in all cases the sum over zeros in ( 12) is Finally, taking logarithmic derivatives in the functional equation we find that The proposition follows upon combining this with ( 12), (14), and (15). Proof of Proposition 2 The proof of Proposition 2 is based on the explicit formula, which we first recall in our context.Lemma 3. Let h be a function with h(x) ≪ (1 + x 2 ) −1 and with compactly supported Fourier transform h where the sum is over all ordinates of non-trivial zeros 1/2 + iγ d of L(s, E d ). Applying the explicit formula to the dilated function h L (x) = h(xL) whose Fourier transform is 1 L h(x/L), we obtain We multiply this expression by χ d (ℓ) and sum over d with suitable weights.Thus we find (17) where The term S 1 is relatively easy to handle.If ℓ is a square, it amounts to counting square-free integers d lying in a suitable progression mod N 0 and coprime to ℓ.While if ℓ is not a square, the resulting sum is a non-trivial character sum, which exhibits substantial cancellation.A more general term of this type is handled in Proposition 1 of [7], which we refer to for a detailed proof.Thus when ℓ is not a square we find while if ℓ is a square We now turn to the more difficult term S 2 .First we dispose of terms n (which we may suppose is a prime power) that have a common factor with N 0 .Note that since d is fixed in a residue class mod N 0 , if n is the power of a prime dividing N 0 then χ d (n) is determined by the congruence condition on d.Thus the contribution of these terms is where δ(ℓ = ) denotes 1 when ℓ is a square, and 0 otherwise.Henceforth we restrict attention to the terms in S 2 where (n, N 0 ) = 1.Note that if d ≡ a mod N 0 then d is automatically 1 mod 4, and the condition that d is a fundamental discriminant amounts to d being square-free.We express the square-free condition by Möbius inversion α 2 |d µ(α), and then split the sum into the cases where α > A is large, and when α ≤ A is small, for a suitable parameter A ≤ X.We first handle the case when α > A is large.These terms give α>A µ(α) upon using GRH to estimate the sum over n and then estimating the sum over d trivially. 
We are left with the terms with α ≤ A, and writing d = kα 2 we may express these terms as We now apply the Poisson summation formula to the sum over k above, as in Lemma 7 of [7].This transforms the sum over k above to where τ v (nℓ) is a Gauss sum given by The Gauss sum τ v (nℓ) can be described explicitly, see Lemma 6 of [7] which gives an evaluation of from which τ v (nℓ) may be obtained via The term v = 0 in (25) leads to a main term; we postpone its treatment, and first consider the contribution of terms v = 0. Since h is supported in [−1, 1], we may suppose that n ≤ e L .The rapid decay of the Fourier transform Φ(ξ) allows us to restrict attention to the range |v| ≤ ℓe L A 2 X −1+ǫ , with the total contribution to S 2 of terms with larger |v| being estimated by O(1).For the smaller values of v, we interchange the sums over v, performing first the sum over n using GRH.Thus these terms contribute We now claim that (on GRH) the sum over n above is X|v| X ǫ , so that the contribution of the terms with v = 0 is To minimize the combined contributions of the error terms in (28) and (23), we shall choose A = (X/ℓ) 4 , so that the effect of both these error terms is To justify the claim (27) we first use (26) to replace τ v (nℓ) by G v (nℓ) so that we must bound (for both choices of ±) First consider the generic case when n is a prime power with (n, v) = 1.Here (using Lemma 6 of [7]) The rapid decay of Φ(ξ) implies that we may restrict attention above to the range p > X 1−ǫ |v|/(ℓα 2 N 0 ).Then splitting p into progressions modN 0 and using GRH (it is here that we need GRH for twists of L(s, E) by quadratic characters, as well as all Dirichlet characters modulo N 0 ) we obtain the bound which is in keeping with (27).Now consider the non-generic case when n is the power of some prime dividing v.We may assume that n|v 2 (else G v (nℓ) = 0 by Lemma 6 of [7]) and also that n ≥ X 1−ǫ |v|/(ℓα 2 N 0 ) else the Fourier transform Φ is negligible.Using that |G v (nℓ)| ≤ (v, nℓ) 2 (which again follows from Lemma 6 of [7]) we may bound the contribution of these terms by since log v ≪ log X ≪ X ǫ and α ≤ A ≤ √ X.Thus these terms also satisfy the claimed bound (27).Now we handle the main term contribution from v = 0, noting that τ 0 (nℓ) = 0 unless nℓ is a square, in which case it equals φ(nℓ).Thus the main term contribution from v = 0 is Thus this main term only exists if ℓ is a square (so that n is a square), or if ℓ is q times a square for a unique prime q (so that n is an odd power of q).In the case ℓ is a square, writing n = m 2 and performing the sum over α, we obtain that the main term is Using (13) and partial summation we conclude that the main term when ℓ is a square is (30) p) 2 − 2) log p p = − log y + O(1), so that, by partial summation, the contribution of the terms n = p 2 equals p≤ √ x p∤N 0 [14] E d ) with the sum over primes P(d; x) (for suitable x) with an error term given in terms of the zeros of L(s, E d ).Such formulae have a long history, going back to Selberg, and the work here complements an upper bound version that played a key role in[14].Proposition1.Let d be a fundamental discriminant in E, and let 3 ≤ x ≤ |d|.Assume GRH for L(s, E d ), and suppose that L( 1 2 , E d ) is not zero.Let γ d run over the ordinates of the non-trivial zeros of L(s, E d
4,779.6
2023-07-31T00:00:00.000
[ "Mathematics" ]
A simple refined DNA minimizer operator enables 2-fold faster computation Abstract Motivation The minimizer concept is a data structure for sequence sketching. The standard canonical minimizer selects a subset of k-mers from the given DNA sequence by comparing the forward and reverse k-mers in a window simultaneously according to a predefined selection scheme. It is widely employed by sequence analysis such as read mapping and assembly. k-mer density, k-mer repetitiveness (e.g. k-mer bias), and computational efficiency are three critical measurements for minimizer selection schemes. However, there exist trade-offs between kinds of minimizer variants. Generic, effective, and efficient are always the requirements for high-performance minimizer algorithms. Results We propose a simple minimizer operator as a refinement of the standard canonical minimizer. It takes only a few operations to compute. However, it can improve the k-mer repetitiveness, especially for the lexicographic order. It applies to other selection schemes of total orders (e.g. random orders). Moreover, it is computationally efficient and the density is close to that of the standard minimizer. The refined minimizer may benefit high-performance applications like binning and read mapping. Availability and implementation The source code of the benchmark in this work is available at the github repository https://github.com/xp3i4/mini_benchmark Introduction The minimizer concept is a data structure for sequence sketching.It is firstly introduced to the sequence analysis by Roberts et al. (2004) to reduce the storage requirements of biological sequence data.Then it was applied by many other applications in the field, such as sequence binning (Deorowicz et al. 2015), sequence compaction (Chikhi et al. 2016), sequence classification (Wood and Salzberg 2014), and read mapping (Li 2016, Jain et al. 2020, B€ uchler et al. 2023). Given the sequence, the minimizer is the minimum k-mer of a predefined ordering scheme in a window of w consecutive k-mers.The minimizer performance relates to several key measurements.Schleimer et al.'s (2003) study defined the density of a k-mer selection scheme as the fraction of selected k-mers.Formally, denote < the ordering scheme and X the selected k-mers in the sequence S, whose size jSj � wþk.The density of the selection scheme is given by where jXj, jSj are the size of X and S. Since it was first introduced to measure the storage requirements, the selection schemes are supposed to select a set of k-mers that is as sparse as possible such that the storage requirements can be largely reduced.Novel selection schemes, such as Orenstein et al. (2016), Marc¸ais et al. (2017), Jain et al. (2020), and (Zheng et al. 2021), are proposed to improve the minimizer density. The k-mer repetitiveness is another minimizer measurement.It is measured by the k-mer frequency in practice.Formally, the frequency of a k-mer X ¼ x in S is defined as its average occurrences in the sequence, where nðxÞ is the occurrence of k-mer X ¼ x.Let V denote the random variable over possible k-mer frequencies.It relates to the performance of applications such as 1) Read mapping: Consider the anchoring (seeding) problem, where we need to find all matched pairs of minimizers in the reference and the read.2) Binning: Similar to the read mapping, we need to find matched minimizers and cluster them into bins. For the two problems, we prefer selection schemes that can generate minimizers of lower repetitiveness (Deorowicz et al. 
2015), because highly repetitive minimizers would significantly decrease the matching accuracy and computational efficiency. As with many other fundamental data structures, computational efficiency is the third performance measurement. Although the time complexity of computing minimizers is commonly linear, optimizations of density or k-mer repetitiveness may significantly increase the runtime. For high-performance applications, such as population-scale read mapping, drops in computational efficiency may be non-negligible. In general, there exist performance trade-offs between minimizer variants. For instance, the random ordering scheme (Chikhi et al. 2014) generates more uniformly and sparsely distributed minimizers than the lexicographic ordering scheme at the expense of increased runtime. In contrast, lexicographic minimizers are less affected by nearby mutations or sequencing errors than random minimizers, a property sometimes called "conservation" (Edgar 2021). Thus, they are beneficial to some matching applications, but the trade-off is less random sampling. Here, we propose an operator as a refinement of the standard (canonical) minimizer. It has the following features. 1) It improves the k-mer repetitiveness of the standard minimizer: it is less biased toward small k-mers and distributes more uniformly. 2) It applies to any selection scheme based on a total order (Davey et al. 2002) (e.g., lexicographic or random order). 3) Its density converges toward that of the standard minimizer. 4) It is commonly faster to compute than the standard minimizer, with a speedup of up to two times. It is worth noting that the operator does not apply to non-canonical minimizers of single-strand sequences, such as RNA minimizers. However, canonical minimizers are essential to most sequence analysis applications, such as read mapping and genome assembly. In the following sections, we first define the refined minimizer. Next, we prove three properties that are essential to the refined minimizer's performance. In the results, we compare the algorithmic complexity of computing the standard and refined minimizers. Then, we evaluate the statistics (e.g., repetitiveness, density) of standard and refined minimizers in real sequences. Finally, we analyze the statistics and discuss potential limitations and improvements. Definitions Operations: For high-performance applications, a preferable minimizer function should be simple and effective. The core idea of the refined minimizer is to define an appropriate decision function that makes the ordering scheme compute minimizers on only one strand of the sequence, such that the smallest k-mers are less likely to be selected repetitively. Provided |s| ≡ 1 (mod 2), we define an operator d(s) in expression (3), where p_A, p_C, p_G, p_T are the occurrences of the characters A, C, G, T in s; the condition |s| ≡ 1 (mod 2) guarantees d(s) ≠ 0, which is discussed later in the properties. The refined minimizer h_r(s) is then defined in expression (4). Table 2 is an example comparing refined and standard minimizers. The lexicographic order of a given k-mer can be computed as Σ_i a_i · 4^i, where a_i is the order of the ith (counted from right to left) character of the k-mer and a_i equals 0, 1, 2, 3 for A, C, G, T.
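The displayed expressions (3) and (4) are not legible in this copy, so the following Python sketch is a hedged reconstruction rather than the authors' exact definition. It assumes d(s) = (p_A + p_C) − (p_G + p_T), which is consistent with the stated properties (an odd window length forces d(s) to be odd, hence nonzero, and reverse complementation flips its sign) and with the q-mer generalization discussed later, and it assumes the refined minimizer is simply the standard minimizer computed on the single strand selected by the sign of d. All function names and the toy window are illustrative only.

```python
# Hedged sketch of the refined canonical minimizer described above.
# Assumption: d(s) = (#A + #C) - (#G + #T); with |s| odd, d(s) is odd and hence
# nonzero, and d(reverse_complement(s)) = -d(s), so its sign picks a strand.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(s: str) -> str:
    return s.translate(COMPLEMENT)[::-1]

def d_operator(s: str) -> int:
    """Assumed operator d(s) = (p_A + p_C) - (p_G + p_T); nonzero when len(s) is odd."""
    return (s.count("A") + s.count("C")) - (s.count("G") + s.count("T"))

def standard_canonical_minimizer(s: str, k: int) -> str:
    """Standard scheme: compare forward and reverse-complement k-mers together."""
    kmers = [s[i:i + k] for i in range(len(s) - k + 1)]
    kmers += [reverse_complement(x) for x in kmers]
    return min(kmers)  # lexicographic order stands in for the total order

def refined_minimizer(s: str, k: int) -> str:
    """Refined scheme: pick one strand by the sign of d(s), then minimize on it only."""
    strand = s if d_operator(s) > 0 else reverse_complement(s)
    return min(strand[i:i + k] for i in range(len(strand) - k + 1))

if __name__ == "__main__":
    window = "ACGTGCATCGA"  # |s| = 11 (odd), k = 5, mirroring the Table 2 setting
    print(standard_canonical_minimizer(window, 5))
    print(refined_minimizer(window, 5))
```

Because d of a window and d of its reverse complement have opposite signs, both orientations select the same strand, which is the strand-symmetry property claimed in the first property below.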
Properties Here, we discuss three refined minimizer properties that are essential to the applications.They hold for all ordering schemes ðw; k; <Þ defined above.The first one guarantees the strand symmetry, such that the computation of the refined minimizer is independent of the strand.The second one guarantees that the refined minimizer is always not smaller than the standard one.The third one guarantees that the refined minimizers have a reasonable density that is close to that of the standard one. according to the definition in expression 4. 2) For any total order < of R k , h s ðsÞ 6 h r ðsÞ.Proof: It implies that h r ðsÞ would be less biased to small k-mers than h s ðsÞ. 3) Denote s n ¼ a 0 a 1 ; ::; a jsj−1 and s nþ1 ¼ a 1 a 2 ; ::; a jsj the nth and nþ1th subsequences, where a i is the ith base.Denote d n ¼ dðs n Þ the operator of s n defined in expression 3. Provided the sequence is random, then the following expression of probabilities holds Because there exist two cases that h s ðs n Þ 6 ¼ h s ðs nþ1 Þ, namely the minimizer of s n is its leftmost k-mer or the minimizer of s nþ1 is its rightmost one, otherwise s n and s nþ1 share the same minimizer.The probability of each case is We then prove the limit of the refined minimizer in expression 5. Since s nþ1 can be iterated from s n by removing the first character of s n , namely a 0 , and append the last character of s nþ1 , namely a jsj , at the end, we have d nþ1 ¼ d n þd n , where It is worth noting that d n d nþ1 6 ¼ 0, since d 6 ¼ 0 has been proved in the first property.Then we have the following two cases: Then according to the definition in expression 4 According to expressions (3) and ( 6), we know that characters in a 1 ; a 2 ; . . .; a jsj−1 are 2 fA; Cg and a jsj 2 fG; Tg.Therefore, where p is the probability of an random character 2 fA; Cg.Analogously, The limits of the two probabilities above equal 0. Therefore, lim jsj!þ1 Pðd n d nþ1 < 0Þ ¼ 0 Therefore, the limit in expression 5 is Based on the discussion above, we have the expected k-mer density of refined minimizers where q s is the expected density of standard minimizers.Therefore, lim jsj!þ1 q r ¼ q s . Heuristics Expression ( 7) suggests that we can improve the k-mer density without significantly impacting the selected minimizers by simply skipping the nþ1th window if d n d nþ1 < 0. The core idea of the heuristic is to skip the "solo" windows, whose signs of d are different from those of predecessor and successor windows.Solo windows are minority especially for large jsj, while they significantly increases Pðd n d nþ1 < 0Þ in expression ( 7).The heuristic skips minimizers of solo windows while preserving minimizers of "non-solo" ones.For instance, if d 1 ; d 2 ; d 3 ; ¼ −1; 1; −1, then skipping the solo window 2 will also drop its minimizer.However, if d 1 ; d 2 ; d 3 ; ¼ −1; 1; 1, then skipping window 2, which is non-solo, may not affect its minimizer, since window 3 may preserve it. Runtime Arbitrary windows: We compared the CPU cycles of computing the refined and standard minimizer in algorithms 1 and 2. The loops in the pseudocodes apply to arbitrary windows and ordering schemes induced by the random hash function R, such as ntHash (Mohamadi et al. 2016), which directly computes random rolling hash values.CPU cycles for each step are listed in the comments of algorithms 1 and 2. 
Algorithm 1 takes o r ¼ 10o 1 þ2o 2 þo 3 þwð3o 1 þo R þo 3 Þ operations in sum and algorithm 2 takes The expected speedup of the refined minimizer is Hence, T r 2 ½0:949; 2Þ, where T r is minimized when w ¼ 1 and o R ¼ 0 (lexicographic ordering).T r is maximized when w � 1 or o R � 0. Therefore, the refined minimizer can be two times faster at most.Applications may apply heuristics to further improve the minimizer performance.For instance, a more practical way to break ties (when the smallest k-mer appears multiple times) is to skip ties in adjacent windows.This creates optimal spread in poly-X regions (e.g.repetitive AA.).Such heuristics will introduce additional CPU cycles.However, heuristics for standard minimizers commonly apply to refined minimizers and can be integrated into function R. Hence the speedup upper bound can be preserved in such cases. Consecutive windows: Applications may use buffers to reduce the times of computing k-mers when computing minimizers in consecutive windows.The refined minimizer preserves the speedup upper bound in such a case.They are discussed in Supplementary Notes.However, the speedup in practice can be washed out to some extent by additional buffer operations, such as reading, writing, traversing, etc.The exact trade-offs depend on w, k, ordering schemes, CPU architectures, etc. Optimizations of buffers can substantially improve the practical runtime in such cases. Distributions As discussed above, we ideally prefer selection schemes that can generate k-mers of lower frequency for the read mapping and binning problem.Correspondingly, we prefer more uniformly distributed minimizers.We evaluated key statistics shown in Table 3 as a sketch of the distribution of selected minimizers X, which are computed in consecutive windows by streaming GRCH38 (chr 1-22, X, Y).Runtime (i.e.T in the table) is the corresponding time of computing minimizers in consecutive windows with buffers rather than the runtime of algorithms 1 and 2. Results for additional groups of jsj 6 45 and k 6 30 are presented in Supplementary Tables S1 and S2.It is worth noting that the tables only show statistics for even ks to simplify the results.The refined minimizer concept also applies to odd ks, and the corresponding results have no significant difference compared to those of even ks.Supplementary Table S3 shows statistics of minimizers of minimap2 (Li 2018).We evaluated 25-95% percentiles of minimizer frequency V, as shown in the table.For instance, P 0:25 ¼ 9:97 per megabases for standard lexicographical minimizer with jsj ¼ 15; k ¼ 4 means 25% minimizer frequencies are lower than this value. The column D KL ðXjjUÞ is the Kullback-Leibler (KL) divergence of the distribution of X and the uniform k-mer distribution U.It is given by since there exist 4 3 types of 3-mers and each type has the same chance of being selected.A lower KL divergence implies that X is more uniformly distributed, and thus the scheme is less biased to specific minimizers.As expected, the results reveal that refined minimizers have lower KL divergence.Therefore, we would expect refined minimizers to generate less biased k-mers. The column E-hits is the expected number of hits introduced by research (Sahlin 2022).A lower E-hits may benefit applications such as read mapping.It is computed as follows in the assessment. 
Therefore, it is a comprehensive metric of density q and frequency vðx i Þ.Since the refined minimizers improve the kmer frequency V at the cost of limited increased density q, we expect refined minimizers to improve E-hits, while the improvement is relatively lower than those of percentiles and D KL .E-hits for minimizers in GRCH38 are in line with expectations, as shown in Table 3 and Supplementary Tables S1-S3. Figure 1 illustrates the empirical distribution of minimizer frequency V discussed above.It is log-scaled since the distribution is right-skewed, namely a long tail on the right side.Supplementary Fig. S1 shows the histogram version of the same data as a complement.As discussed above, we prefer small V for anchoring and binning problems, since large ones in the long tails would be the performance bottleneck.The figure reveals that for different jsj; k, standard minimizers have heavier tails, indicating larger V than refined minimizers.Therefore, refined minimizers generate more uniformly distributed k-mers.Figures for additional settings of jsj 6 45 and k 6 30 are presented in Supplementary Fig. S2. Overall, statistics including the percentiles, D KL , E-hits and the distribution figures suggest refined lexicographical minimizers are less repetitive than standard lexicographical or random minimizers.Since the refined minimizer is also computationally efficient, it is expected to be more friendly to high-performance minimizer applications. Potential limitations We can observe a drop in benefits for frequency-related statistics of refined minimizers for larger k and jsj (i.e.P 0:95 , D KL , E-hits, and distributions in Supplementary Fig. S2).However, it is worth noting that the benefits depend on a latent factor, the sequence size.We use a coefficient, the average minimizer occurrences in the sequence denoted by EðX; kÞ to describe the latent performance impact. For instance, if we assess 20-mers in GRCH38 references of approximately 3Gbps in size, then EðX; kÞ ¼ q � 3Gbps=4 20 � 0. It means that most types of 20-mers never occur in the minimizer set of GRCH38.As a result, the empirical distribution of minimizer frequency will not be close to the expected one due to insufficient minimizers (i.e. law of large numbers).Specifically, EðX; kÞ drops exponentially or linearly as k or jsj increases.Therefore, given the sequence of fixed size (e.g.GRCH38), we expect to observe significant or moderate drops in the statistics for large k or jsj.For validation, we assessed the empirical distributions of minimizer frequency V for jsj ¼ 25; k ¼ 10 in 6 sequences, whose sizes jSj are 1; 4; 16; 64; 256; 1024Mbps, as shown in Supplementary Fig. S3.We can observe that the difference between the standard and refined minimizer distributions is insignificant in short sequences (e.g.1Mbps; 4Mbps).However, distributions become significantly different as the sequence size jSj increases exponentially.Therefore, the empirical distributions depend on the sequence size and the practical benefits will increase as the sequence size grows. Potential improvements We have discussed the heuristic to improve the refined minimizer density in Section 2.3.There potentially exist other heuristics that can improve the refined minimizers in practice. 
For instance, refined minimizers can possibly be improved for specific sequences, such as A, T or C, G enriched ones, where d signs are likely to be frequently changed.A potential improvement is to extend d as follows: where weights x 1 ; x 2 � 1 ðmod2Þ.Additionally, we extend d based on the occurrences of 2-mers p AA ; p AC ; . . .; p TT or q-mers (i.e.q characters).Generally, d based on the occurrences of q-mers can be defined as x i ðp qi −p q 0 i Þ where q i ; q 0 i are the ith q-mer and its reverse complement.Weights x i can be optimized, provided distributions of q-mers in the sequences are known.In practice, the distributions can be approximated by sampling q-mers in the subsequences.Such heuristics may further improve the performance of refined minimizers. Conclusion In this work, we proposed a refined DNA minimizer operator.We discussed basic properties that are essential to applications.The refined minimize is generic, computationally efficient, and can improve the k-mer repetitiveness, especially for the lexicographic order at the cost of limited increased density.However, simple heuristics, such as skipping "solo" windows, can further improve the performance.Assessments based on the GRCH38 are in line with expectations.We expect the performance can be potentially improved with additional heuristics in practice. <Þ selects the minimum k-mer in w consecutive k-mers 2 R k , where R is the character set and order < is commonly induced by a hash function h, which is an injection from R k Table 1 . CPU cycles for operations used to compute minimizers.Operations such as traversing an array will probably trigger L 1 cache read. Table 2 . Comparison of standard (Std)and refined (Rfd) minimizers in a DNA sequence s and reverse complement s 0 , where jsj ¼ 11, k ¼ 5.K is the minimizer.hðKÞ is the lexicographic order of the minimizer.Q 2 ðhÞ is the median of hðKÞ.Values with bold text imply that h is less biased to small ones. where o 1 , … ,o 3 are defined in Table 1, o R is CPU cycles for function R. Assuming o 1 ¼ 1, o 2 ¼ 3 and o 3 takes 10 cycles on average, then Table 3 . Statistics of standard (Std) and refined (Rfd) minimizer sampled consecutively in GRCH38: P 0:25 -P 0:95 are percentiles of minimizer frequency per megabases.D KL ðXjjUÞ is the Kullback-Leibler (KL) divergence of the distribution of X and the uniform k-mer distribution U. Large values such as E-hits are expressed by scientific notation.T is the runtime.Better values are in bold text.Empirical distributions of V for k ¼ 4; 8; 12 in rows and jsj ¼ 15; 25 in columns.Rfd and Std are refined and standard minimizer.The vertical axis equals the frequency of V ¼ v, namely the empirical probability PðV ¼ vÞ.The horizontal and vertical axes are in log 10 scale.
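As a concrete illustration of the q-mer extension suggested in the "Potential improvements" subsection above, the sketch below implements an operator based on q-mer occurrences, d_q(s) = Σ_i ω_i (p_{q_i} − p_{q'_i}), with user-chosen odd integer weights. The base (q = 1) operator is again assumed to be (p_A + p_C) − (p_G + p_T), since the displayed definition is not legible in this copy, and the weight dictionary is purely hypothetical.

```python
# Hedged sketch of the generalized operator suggested above: d extended to q-mers,
# d_q(s) = sum_i w_i * (count(q_i) - count(revcomp(q_i))), with odd integer weights.
# For q = 1 and all weights 1 this reduces to the assumed base operator
# (#A + #C) - (#G + #T).

from itertools import product

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(s: str) -> str:
    return s.translate(COMPLEMENT)[::-1]

def count_qmer(s: str, q_mer: str) -> int:
    q = len(q_mer)
    return sum(1 for i in range(len(s) - q + 1) if s[i:i + q] == q_mer)

def d_qmer(s: str, q: int, weights: dict) -> int:
    """weights maps a q-mer to an odd integer weight (hypothetical values)."""
    total = 0
    for q_mer in ("".join(t) for t in product("ACGT", repeat=q)):
        rc = revcomp(q_mer)
        if q_mer < rc:  # count each (q-mer, reverse-complement) pair only once
            w = weights.get(q_mer, 1)
            total += w * (count_qmer(s, q_mer) - count_qmer(s, rc))
    return total

print(d_qmer("ACGTGCATCGA", 1, {}))  # reduces to #A + #C - #G - #T = 1 for this window
```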
4,463.2
2024-01-25T00:00:00.000
[ "Computer Science", "Biology" ]
THE ANALYSIS OF THE TRAFFIC SIGNS VISIBILITY DURING NIGHT DRIVING Road safety plays an extremely important role in existing transportation systems. Drivers on the road are influenced by various factors (light and temperature conditions, visual smog, the surrounding environment, etc.), and driver distraction represents the most common cause of road traffic accidents. According to our previous research, visual smog has a negative influence on drivers on the road. The purpose of placing traffic signs on roads, however, is to increase road safety, thus positively influencing the driver while driving. The main objective of this article is to measure the visibility of traffic signs on selected roads in specific light conditions (at night). The secondary objective is to measure the visibility of roadside advertisements (billboards) near roads, which distract drivers in a negative way. Mobile ETG technology (eye tracking glasses) was used as the method for measuring the driver's gaze on traffic signs in night conditions. We compared the results from our previous research (daylight conditions) with the results obtained in the current research. On the basis of the comparison of both measurements, we can identify differences (positive and negative) in the influence of traffic signs and roadside advertisements on drivers under various light conditions. INTRODUCTION In recent years, scientific disciplines that measure biometric data, commonly grouped under the term neuroscience, have been coming more and more to the foreground. Neuroscience is a natural science that examines the workings of the nervous system of animals and humans and how it develops during life; it examines individual neurons, parts of the nervous system, their connections, the ways neural networks are formed, and their interaction with and relation to the environment. Neuroscience also includes cognitive neuroscience, which examines what happens in the human brain during cognitive processes. These processes include, for example, perception, thinking, remembering, recall from memory, learning, etc. [2]. It searches for relationships between individual levels of the brain and tries to uncover causal laws, and it also tries to explain cognition, i.e. knowledge and learning about the world in humans and animals. There are several types of methods that can identify and evaluate changes in the biometric data of test subjects. The most important include eye tracking (ET), galvanic skin response (GSR), electrocardiogram (ECG), electroencephalogram (EEG) and facial expression analysis (FEA) [5]. An overview of the options for each method is given in Table 1. We perceive approximately 90% of all information about the surrounding world through our eyes. Therefore, from the viewpoint of neuroscience, the ET (eye tracking) method is one of the most important methods for determining the interaction between the environment and the human factor.
The science of eye tracking has been around as early as the 1800s.Although the technology was not where it is today, people conducted eye movement studies for centuries using direct observations.In 1879, a French ophthalmologist named Louis Émile Javal made an observation when reading.He realized reading didn't involve smooth sweeping across the text, but rather the reader's eyes would have a series of short stops throughout with rapid eye movement [12].These short stops are referred to as eye fixations.From the time, he made these observations and through the 1900s, people continued to conduct eye tracking studies to make more sense of these eye fixations.Even today, people ask themselves why test subject's eyes stop on certain areas and why they fixate on certain areas more than others. In the early 1900s, an educational psychologist named Edmund Burke Huey built an early eye tracker.He used contact lenses with holes for the pupils.The contact lenses were connected to aluminum pointers, which would move along with the eyes to track a test subject's eye movement [13].After Huey's early eye tracking technology, an experimental education psychologist named Guy Thomas Buswell from Chicago built the first non-intrusive eye tracker [10].Unlike Huey, Buswell used beams of light that were reflected on the test subject's eyes, and then recorded on film.It was still an early form of eye tracking technology, but again it was much less intrusive compared to Huey's eye tracking methods. In the 1950s, a Russian psychologist named Alfred Lukyanovich Yarbus conducted several eye tracking studies that resulted in important eye tracking research.His research showed the relationship between eye fixations and the test subject's interest [9]. Moving into the 1970s, eye tracking studies and research continued to grow rapidly.Just like in the 1800s and early 1900s-1950s, the eye tracking research focused mainly on studying how people read.In the 1980s, Just and Carpenter came up with the Strong eye-mind hypothesis.This hypothesis states that when a subject is viewing a word or object, he or she is also processing it cognitively (thinking about it) for exactly the same amount of time he or she is fixating on it [1].During this time, the Strong eye-mind hypothesis was questioned because of the idea of covert attention, which is the attention to something that one is not looking at. The 1980s also saw the first use of eye tracking technology to help answer questions related to human-computer interaction [6].Researchers analyzed how users navigated through and interacted with computer command windows [11].These researchers also made advancements in the technology by using real time eye tracking results to help disabled people.There are two types of eye tracker: remote (also called screen-based or desktop) and headmounted (also called mobile) [2].Remote eye trackers record eye movement at a distance, there is no attachments to respondent (tested subject).It is mounted below or placed close to a computer or screen and the respondent is is seated in front of the eye tracker (see Fig. 1).This type of eye tracker is recommended for observations of any screen-based stimulus material in laboratory settings such as pictures, videos and websites, offline stimuli (magazines, books etc.) and other small settings (small shelf studies etc.). Head-mounted eye trackers record eye activity from a close range.It is mounted onto lightweight eyeglass frames and respondent is able to walk around freely (see Fig. 
2).This equipment is recommended for observations of objects and task performance in any real-life or virtual environments (usability studies, product testing etc.). The progressive and broadly applicable ET tool that is used in several research areas is the head-mounted eye tracker (eyetracking glasses).Eyetrack glasses now offer several options of use in practice.The possibility of immediate movement in real life introduces unlimited number of research focused on the impact of sight.Tracking the impact of sight of the driver when driving a vehicle using eyetrack glasses is commonplace today [3,8,14].They are used mainly in research focused on the impact of the environment (e.g.billboards) on the driver, but also on tracking time the driver spends on e.g.observing traffic signs when driving a vehicle [4,7]. ANALYSIS OF THE SITUATION The visibility of traffic signs has been realized on a 14 km road section located on the road between the towns of Žilina and Martin.Almost the entire section is outside of any town.Given it is a first-class road, the maximum permitted speed is 90 km/h., which is permitted on approx.50% of the road.The road section is characterized by several speed restrictions and also narrowed down lanes in certain part of the road due to change in the number of lanes, specifically from two lanes to three lanes.The Transport Inspectorate in Žilina provided traffic accident statistics on the selected road.Between 1. January 2016 and 30.December 2016, there were 20 traffic accidents in the road section no. 1, registered by the Transport Inspectorate in Žilina.The most common cause of the traffic accident was improper driving by the driver, which was recorded in 18 of the cases.A more detailed analysis of the traffic accidents is described in Table 2. Based on the results of the analysis of visual smog done using a GoPro camera we found that there are a total of 174 billboards in the selected road section, which is 348 advertisement surfaces in both direction that can capture the driver's attention.Since the distance of the road section is 14 km, it means that one billboard is placed every 80 meters.Majority of the billboards are 510 x 240 centimeters big and their average distance from the road is two meters (see Fig. 4). OBJECTIVE AND METHODOLOGY The main research goal was to identify the level of visibility of road signs in the selected road section under low light conditions, specifically at night.The analysis was conducted on a road near Žilina, which is characterized by a varied articulation and speed profile.The secondary research goal was to measure the effect of visual smog on the driver during driving under low light conditions.The results of the measurements were compared with the measurements conducted in the same section in the past.The following techniques and tools were used to achieve the goals: • Statistics on traffic accidents on the selected road were obtained based on consultation with the county traffic engineer.• Eyetrack glasses were used to collect data on driver's sight.These glasses from the SMI Company serve to capture and record the sight of a person in real time (see Fig 2). • The BeGaze software was used in the final part to evaluate data collected using the Eyetrack glasses (see Fig. 5). 
RESULTS The main goal of the experiment was to assess the tracking of vertical traffic signs by drivers during driving in low light conditions (at night) and to compare the data obtained with the data obtained while driving under optimum light conditions (during daylight). The first part of the experiment (night conditions) was carried out in January 2017 with 5 test subjects. The second part of the experiment (daylight conditions) was conducted in May 2017 with the same test subjects as in January. The results obtained during driving are listed in Table 3. The table shows the average fixation times of drivers on advertising equipment and road signs in optimum and low light conditions. During the whole drive under low light conditions, the tested subjects looked at on average 32 road signs out of a total of 150. This means that at night the drivers saw 21.3% of road signs, i.e. on average every fifth road sign (see Fig. 6). In comparison, when driving during the day the drivers focused their attention on 52 road signs, which is 34.6% of all road signs (see Fig. 7). The rate of tracking road signs (gaze fixed on them) during the night drive is thus 38.5% lower than the rate of tracking road signs under good light conditions. This is interesting despite the fact that the majority of road signs contain reflective elements. Based on an analysis of the eyetrack camera videos, this can be explained by the drivers being blinded by oncoming cars, which caused their gaze to be directed higher (away from the road signs) compared to testing under good light conditions. The secondary goal of testing was to measure the effect of visual smog on the driver under low light conditions. The goal of this measurement was to determine whether the driver also fixates on advertisements under low light conditions. The measurement took place on the same road between Žilina and Martin, where there are 174 billboards, more than 97% of which are not illuminated. The results of the measurement show that the time the drivers spent observing visual smog at night is more than five times lower than the average time the drivers spent watching billboards during the day under good light conditions. In absolute terms this means that the driver looked at only 15 billboards during the whole drive. The duration of fixation on a billboard was short, usually lasting less than 0.5 s, which amounts to a total of approximately 7.5 seconds. The drivers paid the most attention to billboards with a reflective element. Compared to classic billboards these were more prominent and the drivers noticed them from a greater distance. Equally, the duration of the drivers' fixation was longer in this case than with non-illuminated billboards without a reflective element.
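For transparency, the percentages quoted above follow directly from the reported counts; a quick arithmetic check (assuming 150 signs on the route, 32 fixated at night, 52 fixated by day, and 174 billboards over the 14 km section) is sketched below.

```python
# Arithmetic check of the figures quoted above, using only the reported counts.
signs_total, signs_night, signs_day = 150, 32, 52

print(round(100 * signs_night / signs_total, 1))              # 21.3 % of signs fixated at night
print(round(100 * signs_day / signs_total, 1))                # ~34.7 % by day (reported as 34.6 %)
print(round(100 * (signs_day - signs_night) / signs_day, 1))  # 38.5 % relative drop at night

print(round(14_000 / 174))                                    # ~80 m average spacing between billboards
```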
CONCLUSIONS Road signs have a key role in ensuring road safety. Their visibility to drivers during the drive is the main prerequisite for preventing the occurrence of non-standard situations and limiting the likelihood of traffic accidents. Therefore, we set out to identify how the visibility of vertical road signs changes depending on the light conditions. Based on experimental testing using an eyetracking camera in the real conditions of a selected road, we came to the conclusion that under low light conditions (night driving) the number of fixations on traffic signs was 38.5% lower than when driving in good light conditions. This result is all the more interesting because road signs are equipped with reflective elements that should ensure the same visibility during the night as during the day. Through testing we found that the level of fixation on road signs is lowered by the intensity of road traffic (especially by glare from oncoming cars). In the secondary measurement, we focused on the visibility of negative elements found around the road (visual smog). The number of gaze fixations on billboards in the measured section during the day was higher than the number of fixations on road signs, but the change in light conditions decreased the number of fixations on visual smog by 81.7%. This decrease is a consequence of the absence of reflective elements on billboards. Finally, by comparing the results of the experimental measurements carried out under good light conditions (during the day) and under low light conditions (at night), it can be concluded that there is a significant reduction in road sign visibility during night driving. Figure 3 depicts the selected road section. Fig. 6. Driver's view of a vertical traffic sign during night driving. Table 1. Overview of neuroscience methods used for biometric measurement. Table 3. Comparison of drivers' gaze fixation depending on light conditions.
3,186.4
2018-06-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Environmental Drivers of Diversification and Hybridization in Neotropical Butterflies Studying how the environment shapes current biodiversity patterns in species rich regions is a fundamental issue in biogeography, ecology, and conservation. However, in the Neotropics, the study of the forces driving species distribution and richness, is mostly based on vertebrates and plants. In this study, we used 54,392 georeferenced records for 46 species and 1,012 georeferenced records for 38 interspecific hybrids of the Neotropical Heliconius butterflies to investigate the role of the environment in shaping their distribution and richness, as well as their geographic patterns of phylogenetic diversity and phylogenetic endemism. We also evaluated whether niche similarity promotes hybridization in Heliconius. We found that these insects display five general distribution patterns mostly explained by precipitation and isothermality, and to a lesser extent, by altitude. Interestingly, altitude plays a major role as a predictor of species richness and phylogenetic diversity, while precipitation explains patterns of phylogenetic endemism. We did not find evidence supporting the role of the environment in facilitating hybridization because hybridizing species do not necessarily share the same climatic niche despite some of them having largely overlapping geographic distributions. Overall, we confirmed that, as in other organisms, high annual temperature, a constant supply of water, and spatio-topographic complexity are the main predictors of diversity in Heliconius. However, future studies at large scale need to investigate the effect of microclimate variables and ecological interactions. INTRODUCTION Understanding how the environment shapes species distribution and affects patterns of biological diversity is still a challenging task, especially in species rich regions, such as the Neotropics (Hawkins et al., 2003;Gotelli et al., 2009;Brown et al., 2020). To date, information on this topic is mostly based on vertebrates and plants, and suggest that the combination of high annual temperature with a constant supply of water and spatio-topographic complexity are the main predictors of species distribution, richness, and endemism (Hawkins et al., 2003;Kreft and Jetz, 2007;Qian, 2010;Vasconcelos et al., 2019). Within the Neotropics, the Amazon and the foothills of the North-eastern Andes are examples of regions that combine these conditions, and consequently, they exhibit high levels of species richness and phylogenetic diversity in monkeys, snakes, birds, amphibians, palms, and vascular plants (Kreft and Jetz, 2007;Fenker et al., 2014;Vallejos-Garrido et al., 2017;Velazco et al., 2021). Similarly, regions such as the Biogeographic Choco, Costa Rica, and the Amazon show high levels of phylogenetic endemism (e.g., Rosauer and Jetz, 2014;López-Aguirre et al., 2019;Varzinczak et al., 2020). However, these patterns have not been deeply evaluated in Neotropical invertebrates, and particularly butterflies (Pearson and Carroll, 2001;Mullen et al., 2011). The environment, and especially climatic niche, has also been suggested to have an effect on gene flow. 
For example, phylogenetic discordance in multiple loci in beetles of the genus Mesocarabus seems to be the result of hybridization between species sharing the same climatic niche (Andújar et al., 2014), while in armadillos of the genus Dasypus, asymmetric gene flow appears to be facilitated by niche conservatisms at both sides of a geographic barrier (Arteaga et al., 2011). Additionally, climaticbased selection likely plays a role in maintaining mosaic hybrid zones in Quercus oaks, where climatic heterogeneity favors the co-occurrence of parental species and their hybrids (Swenson et al., 2008;Ortego et al., 2014). Heliconius butterflies are a diverse insect group found across southern United States, Central, and South America, where they occupy divergent habitats (Jiggins, 2017). Due to the recent radiation of this butterfly genus, species pairs have different levels of reproductive isolation, which are used as proxies for different stages of speciation (Kronforst et al., 2013;Martin et al., 2013). In total, ∼25% of Heliconius species are known to hybridize in nature (Mallet et al., 1998(Mallet et al., , 2007, but the role of abiotic variables in facilitating or hampering such hybridization has been poorly studied (Mallet et al., 1990;Rosser et al., 2014). In this study, we combined an extensive database of occurrences of species and hybrids in Heliconius as well as environmental data to investigate: (1) how the environment shapes the distribution of Heliconius at a regional scale, (2) how the environment molds species richness, phylogenetic diversity, and phylogenetic endemism in these butterflies, and (3) whether niche similarity promotes hybridization. Species Data and Environmental Variables We included occurrence data of 46 species of Heliconius and generated a database of the localities where these butterflies have been collected across their entire distribution range. The data were obtained from: (1) entomological collections and (2) the Heliconiinae checklist of Rosser et al. (2012). For those regions in Colombia that we identified as undersampled, we conducted field trips to improve our geographic coverage. The nomenclature of all records was updated to the most recent taxonomic checklist when needed (Lamas and Jiggins, 2017). We also included occurrence data for all interspecific hybrids documented in Heliconius. All individuals were photographed and identified based on their color pattern. We used the point-radius method to georeference specimens with missing coordinates following Wieczorek et al. (2004). Although Heliconius is widely represented in databases, such as global biodiversity information Facility (GBIF), we did not include such records to ensure the use of data that have been curated by specialists both in terms of georeference and taxonomy, or that have images of each specimen that would allow us to confirm the taxonomy. We used the 19 climatological variables from climatologies at high resolution for the earth's land surface areas (CHELSA) at spatial resolution of 1 km (Karger et al., 2017) to characterize climatic variation across the occurrence range of Heliconius, and altitude was obtained from Jarvis et al. (2008). Collinearity between variables was avoided by estimating the Pearson correlation coefficient among all 20 variables, and the absolute value of this correlation was used to create a dissimilarity matrix (1-correlation values). We used this matrix to perform a hierarchical clustering analysis with the hclust function in R (R Core Team, 2021). 
We then chose one variable per cluster that had a pairwise distance <0.5. Using the selected variables, we calculated the variance inflation factor (VIF) (Dormann et al., 2013) with the HH package in R (Heiberger, 2020) and chose those variables with VIF <5 (Kubota et al., 2015). Species Distribution Modeling and Environmental Variables Importance First, we used R pipelines (Assis, 2020) to reduce sampling bias and spatial autocorrelation among occurrences in our species distribution models using the variables that passed the filters mentioned before. The minimum non-significant autocorrelated distances were used to prune species databases. H. nattereri and H. tristero were not modeled because they had <32 occurrence records. Then, we generated a second database that included pseudoabsences data following Phillips et al. (2009), Soberón and Nakamura (2009), Barbet-Massin et al. (2012), and Lake et al. (2020). Because Heliconius is a very well-sampled genus we had enough information to select pseudo-absences points for each species in places where: (i) Heliconius other than the focal species have been collected, (ii) environmental conditions may not be optimal for its occurrence, and (iii) absence is not caused by dispersal limitation. Using these criteria, we defined a minimum convex polygon with a 50 km buffer area for each species and selected 10,000 pseudo-absences only in this buffer. Then, we estimated the ensemble species distribution models (ESDMs) of Heliconius with the R package stacked species distribution models (SSDM) (Schmitt et al., 2017), equally weighting presences and pseudo-absences (prevalence weights = 0.5). Individual species distribution models (SDM) were implemented using four algorithms that optimize the use of pseudo-absences in a similar way (Barbet-Massin et al., 2012): (1) Generalized Linear Models (GLMs) (McCullagh and Nelder, 1989), (2) Generalized Boosting Models (GBMs) (Friedman et al., 2000), (3) Maximum Entropy Models (MAXENT) (Phillips et al., 2006), and (4) Generalized additive model (GAM) (Hastie and Tibshirani, 1990). Each algorithm was run 10 times. In each run, models were calibrated using 75% of the occurrence data and their accuracy was evaluated with the remaining 25%; the "holdout" method was used to ensure independence between training and evaluation sets. The data set randomly changes between runs. An ensemble model (ESDM) was obtained for each species by averaging the best SDM outputs (highest Area Under the Curve-AUC-score), and the ensemble models were evaluated with the AUC score and the Cohen's Kappa coefficient (k). Following Smith and Santos (2020), we did not model species with n < 32 or that occupy >70% of the background region (i.e., entire distribution range for the genus). We used the relative importance values of the variables provided by SSDM to evaluate the influence of each of them within all models. The importance is estimated with a randomization process, where SSDM calculates the correlation between a prediction using all variables and a prediction where the independent variable being tested is randomly removed; this is repeated for each variable. The calculation of the relative importance is made by subtracting this correlation from one, therefore higher values are the best variables for the model (Schmitt et al., 2017). 
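The randomization procedure described above is simple to reproduce outside SSDM. The sketch below is a Python illustration (the study itself used R packages); the model object and variable names are placeholders, and importance is scored, as described, as one minus the correlation between the full prediction and the prediction with one predictor shuffled.

```python
import numpy as np

def permutation_importance(model, X, n_repeats=5, rng=None):
    """SSDM-style randomization importance:
    1 - corr(prediction with all variables, prediction with variable j shuffled).
    `model` is any fitted object exposing predict(X); X is (n_samples, n_vars)."""
    if rng is None:
        rng = np.random.default_rng(42)
    baseline = model.predict(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the association with variable j
            r = np.corrcoef(baseline, model.predict(X_perm))[0, 1]
            scores.append(1.0 - r)                          # higher value -> more important variable
        importances[j] = np.mean(scores)
    return importances
```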
Diversity Metrics: Species Richness, Diversity, Endemism Phylogenetic Maps, and Environmental Variables Importance Species richness, phylogenetic diversity, and phylogenetic endemism were calculated by superimposing the distribution maps of all species using the R package phyloregion (Daru et al., 2020b). In order to avoid overestimation of the diversity metrics, we created alpha hulls with the R package rangeBuilder (Davis Rabosky et al., 2016) and following (Paz et al., 2021). Briefly, we used occurrence data available for all species (54,392 georeferenced records) that had more than 10 locality points, a dynamic selection of alpha for each species, and an alpha that varied in steps of 1 (Meyer et al., 2017). We next generated a community matrix using the alpha hulls of all species with the function polys2comm in the R package phyloregion (Daru et al., 2020b). We used the community matrix to calculate species richness by summing all species present in each cell, and also, with this matrix and the best Maximum Likelihood tree estimated with 20 nuclear and 2 mitochondrial loci for Heliconius (Kozak et al., 2015), we estimated phylogenetic diversity and phylogenetic endemism (Faith, 1992;Rosauer et al., 2009), with the functions phylogenetic diversity (PD) and phylo_endemism of the R package phyloregion (Daru et al., 2020b). To investigate whether these metrics are scale dependent, we performed the above analyses at three consecutive grain sizes (5, 10, and 20 km). We performed a linear regression model using phylogenetic diversity as response variable and species richness as predictor variable to investigate their relationship and plotted the residuals to highlight areas where these metrics are different. We also used four machine learning algorithms to generate correlative models and then we created an ensemble prediction of each diversity metric to identify the environmental variables that best explain them (Paz et al., 2021). The algorithms used were: Random Forests (Liaw and Wiener, 2002), Neural Network (Venables and Ripley, 2002), Support Vector Machines (Karatzoglou et al., 2004), andGLM (McCullagh andNelder, 1989). The models were built with the R package caret 6.0-86 (Kuhn, 2008), and we used the varImp function to compute the weighted average of the contribution of each variable. Evaluating the Environmental Effect in the Hybridization on Heliconius Butterflies We estimated the Schoener's niche equivalency test (D) and Warren's niche background test (I) between pairs of hybridizing species to determine if they share environmental niches. We used the R package humboldt (Brown and Carnaval, 2019) and we followed the concept of environmental niche sensu (Phillips et al., 2006;Soberón and Nakamura, 2009), where the niche consists of the subset of conditions currently occupied and where environmental conditions at the occurrence localities constitute samples from the realized niche. The niche overlap metric Schoener's D ranges between 0 and 1, meaning no overlap and complete overlap, respectively (Rödder and Engler, 2011). The environmental overlap was visualized with a principal component analysis (PCA). We tested the significance of this metric by comparing the realized niche overlap against a null distribution of 1,000 overlaps randomly generated from the reshuffled occurrence dataset and tested whether niche background and niche equivalency were different from those expected by chance at α = 0.05 (Brown and Carnaval, 2019). 
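As a simplified, self-contained illustration of the overlap statistic and its permutation test (Python; the actual analysis used the humboldt R package, which computes overlap on kernel-smoothed densities in environmental PCA space rather than on raw histograms of a single variable):

```python
import numpy as np

def schoeners_d(x1, x2, bins):
    """Schoener's D between two occurrence samples of an environmental variable,
    computed on a shared histogram grid: D = 1 - 0.5 * sum|p1 - p2|."""
    p1, _ = np.histogram(x1, bins=bins)
    p2, _ = np.histogram(x2, bins=bins)
    p1, p2 = p1 / p1.sum(), p2 / p2.sum()
    return 1.0 - 0.5 * np.abs(p1 - p2).sum()

def equivalency_null(x1, x2, bins, n_perm=1000, seed=1):
    """Null distribution of D built by reshuffling pooled occurrences between the
    two species; equivalency is rejected when the observed D falls in the lower
    tail of this distribution."""
    rng = np.random.default_rng(seed)
    pooled, n1 = np.concatenate([x1, x2]), len(x1)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = schoeners_d(perm[:n1], perm[n1:], bins)
    return null

# Toy example: annual precipitation at occurrence points of two species.
rng = np.random.default_rng(0)
sp1 = rng.normal(2500, 300, 200)   # wetter niche
sp2 = rng.normal(1800, 300, 150)   # drier niche
bins = np.linspace(500, 4000, 30)
d_obs = schoeners_d(sp1, sp2, bins)
null = equivalency_null(sp1, sp2, bins)
print(f"observed D = {d_obs:.2f}, 5th percentile of null = {np.percentile(null, 5):.2f}")
```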
This was done using the entire distribution of the entities under comparison (niche overlap test = NOT) and using only the area where they overlap (niche divergence test = NDT) (Brown and Carnaval, 2019). From the species records we discarded 13,476 records as they could not be reliably georeferenced, thus leaving us with 54,392 records. For species modeling, these were further subject to pruning, which left a total of 13,671 records (Supplementary Table 3). There was considerable variation in the sampling effort across the phylogeny. For example, species of the erato and silvaniform clades are well-represented, whereas species from the aoede clade had the lowest number of records (Supplementary Figure 1). The variables retained and used to model species distributions and diversity metrics were: (i) minimum temperature of coldest month, (ii) altitude, (iii) precipitation of coldest quarter, (iv) isothermality, and (v) precipitation seasonality (Supplementary Figure 2). The maximum absolute pairwise correlation between minimum temperature of coldest month and precipitation of coldest quarter was 0.436. The four algorithms we implemented were accurate in predicting the distribution of species, but their combination (ensemble) was the most accurate (Supplementary Figure 3). In total, we generated 44 species distribution models for Heliconius species. These are deposited in ZENODO. 1 We found that environmental variables are better predictors of the distribution of Heliconius compared to topography. For instance, current temperature (isothermality) explains the distribution of 14 species (Figures 1A,B) and precipitation explains the distribution of 24 species (Figures 1C,D). In contrast, altitude explains the distribution of only five species ( Figure 1E). No single variable was correlated with the entire distribution of the genus (Figure 1F), but we observed some general patterns. For example, isothermality explained the distribution of widely distributed species and trans-Andean species (i.e., west of the Andes; Figures 1A,B). Also, precipitation of the coldest quarter explains the distribution of species that occur in the biogeographic Choco + Costa Rica while precipitation seasonality explains the distribution of cis-Andean species (i.e., east of the Andes) + the Pacific of Ecuador (Figures 1C,D). Altitude explains the distribution of species restricted to the eastern foothills of the Andes and highland Andean species (Figure 1E). Interestingly, we did not find a single variable that was better correlated with the distribution of H. charitonia (Supplementary Table 4). Diversity Metrics: Species Richness, Diversity, Endemism Phylogenetic Maps, and Environmental Variables Importance We found that higher values of Heliconius species richness are concentrated in the foothills of the eastern Andes from Colombia to Ecuador, and into the Amazon basin mainly along the course of the Amazon River (Figure 2A). These results were consistent but more striking in the phylogenetic diversity maps ( Figure 2B). Also, species richness has a strong and significant effect on phylogenetic diversity (adjusted R 2 0.9887, p ≤ 2e-16; Supplementary Figure 4). Interestingly, the residuals map showed values of phylogenetic diversity below those expected from species richness in the same regions, indicating that phylogenetic diversity, although high, is underestimated (blue grids; Figure 2C). 
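The residual map referred to above follows directly from an ordinary least-squares fit of phylogenetic diversity on richness. A minimal sketch (Python, with toy per-cell values standing in for the 20 km grid):

```python
import numpy as np

def pd_sr_residuals(pdiv, sr):
    """Fit PD = a + b*SR by ordinary least squares and return the residuals.
    Negative residuals flag cells where phylogenetic diversity is lower than
    expected from richness alone; positive residuals flag the opposite."""
    b, a = np.polyfit(sr, pdiv, 1)        # slope, intercept
    return pdiv - (a + b * sr)

# Illustrative values for five grid cells (not real data):
sr   = np.array([3, 8, 15, 22, 30])
pdiv = np.array([0.9, 2.1, 3.8, 5.0, 7.9])
print(pd_sr_residuals(pdiv, sr))
```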
In contrast, this metric was overestimated mainly in the Central Andes, the southern Amazon in Brazil, and the northern Chaco in Bolivia (red grids; Figure 2C). The highest values of phylogenetic endemism were concentrated in: (i) the Pacific coast of Costa Rica and Panama, (ii) the central foothills of the Eastern Cordillera in Colombia, and (iii) the biogeographic Choco of Colombia ( Figure 2D). The pattern of these metrics was not scale dependent, and the results were highly congruent at 5, 10 (Supplementary Figures 5, 6, respectively), and 20 km ( Figure 2C). The ability of the machine learning models to predict species richness, phylogenetic diversity, and phylogenetic endemism varied between algorithms (Supplementary Figure 7). The best algorithms for all diversity metrics were the ensemble model followed by random forest, while the GLM algorithm had the lowest predictive accuracy in all metrics (Supplementary Figure 7). The best models predicted that altitude and isothermality were the most important variables for species richness and phylogenetic diversity (Figures 3A,B). In contrast, the most important variable for phylogenetic endemism was precipitation seasonality, followed by isothermality ( Figure 3C). Finally, the residuals from the spatial regression between phylogenetic diversity (response variable) and species richness (predictor variable) were explained by isothermality ( Figure 3D). Evaluating the Environmental Effect on Hybridization in Heliconius We found 18 pairs of hybridizing species in Heliconius. The results of the NOT and NDT tests based on Schoener's D revealed that the niches of three of these pairs (H. melpomene/H. cydno, H. melpomene/H. hecale, and H. hecalesia/H. hortense) are equivalent (Figure 4 and Table 1) and overlap climatically (D > 0.40). In contrast, 12 of these pairs did not show evidence of niche equivalency. These included both pairs that have extensive geographic overlap (such as H. ethilla and H. numata) (Supplementary Figure 8) and pairs with a narrow overlap (such as H. erato and H. himera) (Figure 5). The remaining three pairs (H. beskei/H. ethila, H. timareta/H. melpomene, and H. charitonia/H. peruvianus) showed inconclusive results (Figure 4 and Table 1). The results of these analyses were deposited in ZENODO (see text footnote 1). DISCUSSION We found that Heliconius butterflies display five general distribution patterns, namely: (i) wide distribution, (ii) trans-Andes, (iii) biogeographic Choco + Costa Rica, (iv) cis-Andes + Pacific of Ecuador, and (v) highland Andes. We also found that three variables (isothermality, precipitation and altitude) explain these patterns. Isothermality is a variable that quantifies how daily temperatures oscillate relative to the annual oscillations (O'Donnell and Ignizio, 2012), and its importance as one of the most explanatory variables of species distribution is not without precedent. For example, this variable explains the distribution of frugivorous bats (Chattopadhyay et al., 2019), mealybugs (Heya et al., 2020), Opiliones (Simó et al., 2014), and American monkeys (Vallejos-Garrido et al., 2017). Although all Heliconius species are strongly affected by isothermality, its effect seems to be stronger for widely distributed species and those with trans-Andean distribution. 
Interestingly, these species occur in regions with high and medium isothermality (>460%), that is, in regions that experience temperature changes throughout the day but keep a constant temperature throughout the year (O'Donnell and Ignizio, 2012). This suggests that these butterflies are particularly sensitive to long-term changes in temperature, thus limiting their range to tropical areas. The distribution of species occurring in the biogeographic Choco of Colombia, Costa Rica, cis-Andes and the Pacific of Ecuador is also strongly limited by precipitation. Consistently, these regions have either rainforest, monsoon, or savanna climate, and they are the Neotropical regions with the highest precipitation [precipitation in the driest month (Pdry) > 60 mm] (Beck et al., 2018).
[Figure 2 caption, fragment: phylogeny from Kozak et al. (2015); branches that contribute the most to phylogenetic endemism are labeled H1-H5 in both the phylogeny and the map; all maps were plotted in grid cells of 20 km × 20 km.]
Previous studies have suggested that cloudiness and precipitation decrease flying bout duration in butterflies and, consequently, limit their dispersal (Cormont et al., 2011). Therefore, exceptionally high levels of precipitation in such regions may act as population traps, preventing butterflies from flying over longer distances and keeping them in a single region (Rosser et al., 2014). This finding agrees with previous studies in South America, where precipitation shapes the distribution of multiple vertebrates and invertebrates (Atauchi et al., 2017;Amundrud et al., 2018;Schivo et al., 2019;de Oliveira da Conceição et al., 2020). In addition, altitude was the best predictor for the distribution of Heliconius species that can reach elevations up to 2,600 masl, which is considerably higher than the elevational range occupied by other members of the genus (<2,200 masl) (Rosser et al., 2012). Therefore, it is likely that these highland species have morphological or physiological modifications that allow them to expand their elevational range and occupy new niches. In fact, highland Heliconius are known to have rounder wings compared to lowland species, and this has been suggested to aid them in flying through dense cloud forest or to compensate for the lower air pressure found at higher altitudes (Montejo-Kovacevich et al., 2019). Also, comparisons among different populations of Heliconius have revealed that highland populations are less tolerant to heat (Montejo-Kovacevich et al., 2020), which may limit their distribution range. The foothills of the eastern Andes and the Amazon basin appeared as the regions with the highest Heliconius species richness, which confirms the findings of a previous study done for the subfamily Heliconiinae at a coarser scale (50 km) (Rosser et al., 2012). Interestingly, both of these regions are known to present unusual concentrations of contact zones and hybrid zones (i.e., suture zones) (Dasmahapatra et al., 2010;Rosser et al., 2021), which may explain the richness they exhibit. Also, altitude, isothermality, and precipitation were the variables best correlated with this metric. This may be due to the elevational gradient found at the foothills of the eastern Andes, which offers multiple ecological niches, thus favoring diversification rates (Rahbek and Graves, 2001;Jetz and Rahbek, 2002;Davies et al., 2007;Keppel et al., 2016).
Additionally, there are several climate-based hypotheses that seek to explain broad-scale diversity patterns, and water and energy have emerged as crucial influencers of species richness (Silva-Flores et al., 2014). In particular, the water-energy dynamics hypothesis argues that species richness increases in places where liquid water and optimal energy conditions provide the greatest capacity for biotic dynamics (Svenning et al., 2008).
[FIGURE 4 caption: Co-occurring and hybridizing species of Heliconius. Green: species pairs with equivalent environmental niches; blue: species pairs with divergent environmental niches; salmon: species pairs with inconclusive results. Numbers indicate the pairs of species falling into each category.]
The Amazon and foothills of the eastern Andes are regions with near constant hot-warm temperature throughout the year and have a permanent liquid water supply (Rosser et al., 2014;Vallejos-Garrido et al., 2017), thus ensuring an optimal water-energy dynamic. The latter translates into constant availability of plants for butterflies, including host plants for immatures and pollen for adults, and continual interactions between individuals, which may be correlated with the high species richness we detected. As in other studies, patterns of phylogenetic diversity were similar (although not identical) to those of richness (Davies and Buckley, 2011;Fenker et al., 2014;Mendoza and Arita, 2014;Guedes et al., 2018). Interestingly, areas with the highest species richness showed lower phylogenetic diversity than expected (Figure 2C, blue grids), which may be a consequence of the recent increase in diversification rate in Heliconius (4.5 Ma) and the consequent co-occurrence of multiple young species in the Amazon and foothills of the eastern Andes (Rosser et al., 2012;Kozak et al., 2015). In agreement with this observation, previous research in both animals and plants has found high phylogenetic diversity in the eastern Andes of Colombia, Peru, and Ecuador (Fenker et al., 2014;Mendoza and Arita, 2014;Guedes et al., 2018;Arango et al., 2021;Velazco et al., 2021). The highest phylogenetic endemism was found in the central eastern Andes of Colombia, and this result is possibly due to the restricted range of the species Heliconius heurippa (Figure 2D, area H1). However, we cannot rule out this result as an overestimation since the phylogenetic tree that we used (Kozak et al., 2015) considers this taxon as a separate species and not as part of H. timareta (as recently hypothesized). If H. heurippa had been included within H. timareta, which has a wider distribution range, it is likely that this phylogenetic endemism result would not hold. Additionally, the Pacific region of Costa Rica, Panama, and Colombia shows intermediate values of phylogenetic endemism that resulted from the presence of species that have reduced geographic range and are either long-branch species (e.g., Heliconius godmani) or species for which no close relatives are known (e.g., Heliconius hewitsoni) (Figure 2D, areas H2 and H3, respectively). These regions were previously described as highly endemic phylogenetically for plants (Sandel et al., 2020), terrestrial mammals (Rosauer and Jetz, 2014), birds and amphibians (Daru et al., 2020a). Interestingly, there were several species that, although considered geographically endemic within Heliconius, exhibited low values of phylogenetic endemism.
However, it is important to acknowledge that phylogenetic endemism is a concept based on lineages rather than species, and thus, if an endemic species has a narrow range but is closely related to a widespread species, its phylogenetic endemism will not necessarily be high (Rosauer et al., 2009). An example of this is Heliconius nattereri, an endemic species from Brazil's Atlantic Forest that, despite having a narrow distribution, is sister to the widely distributed Heliconius ethilla (Figure 2D, area H4). Similarly, Heliconius atthis is restricted to the Ecuadorian and Peruvian Pacific, but it is sister to the widely distributed Heliconius hecale (Figure 2D, area H5). In our study we found that high precipitation and near constant hot-warm temperature throughout the year are strongly correlated with phylogenetic endemism, which agrees with studies that point to a role for temperature in promoting endemism by reducing extinction rates and increasing population sizes in small areas (Jetz et al., 2004;Rosauer and Jetz, 2014;Varzinczak et al., 2020). Our environmental niche analysis showed that hybridizing species do not necessarily share the same climatic space despite some of them having largely overlapping geographic distributions. This is the case of H. ethilla and H. numata, which frequently co-occur throughout their distribution, but there are some regions with an extreme climate, such as the Pacific coast of Colombia (a humid jungle) and the Colombian Magdalena valley (which has a marked precipitation gradient, being humid in the north and dry in the south), where H. ethilla but not H. numata occurs (Supplementary Figure 9). This suggests that the former species has a broader climatic tolerance. We also detected differences in the environmental niche between pairs of hybridizing species that rarely overlap geographically, but when they do, they hybridize. For example, H. erato and H. himera occupy contrasting environmental niches in Ecuador (Jiggins et al., 1997), where H. himera lives in dry forests while H. erato inhabits wet forests of the Andes (Figure 5). Similarly, the hybridizing H. erato (H. e. venus) and H. chestertonii meet in an environmental transition zone between wet and dry forest in the Colombian Andes (Muñoz et al., 2010; Supplementary Figure 8). In summary, we confirmed that, at large scales, the distribution of Heliconius, its richness, diversity, and phylogenetic endemism are mainly shaped by a combination of high annual energy (i.e., hot-warm temperature), constant water supply, and an extraordinary topographic complexity. However, species distributions are thought to result from dynamics occurring at multiple spatial scales. Therefore, including microclimate variables and ecological interactions would provide an in-depth understanding of the multiscale drivers of distribution, niche range and phylogenetic processes (Montejo-Kovacevich et al., 2020;Paz and Guarnizo, 2020). Our study confirms the richness and diversity of areas already identified in other taxa, thus strengthening the case for their conservation as strategic biodiversity hotspots. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://doi.org/10.5281/zenodo.5149294.
6,277.6
2021-10-22T00:00:00.000
[ "Environmental Science", "Biology" ]
Gamma Factory Searches for Extremely Weakly-Interacting Particles The Gamma Factory is a proposal to back-scatter laser photons off a beam of partially-stripped ions at the LHC, producing a beam of $\sim 10$ MeV to $1$ GeV photons with intensities of $10^{16}$ to $10^{18}~\text{s}^{-1}$. This implies $\sim 10^{23}$ to $10^{25}$ photons on target per year, many orders of magnitude greater than existing accelerator light sources and also far greater than all current and planned electron and proton fixed target experiments. We determine the Gamma Factory's discovery potential through"dark Compton scattering,"$\gamma e \to e X$, where $X$ is a new, weakly-interacting particle. For dark photons and other new gauge bosons with masses in the 1~to~100 MeV range, the Gamma Factory has the potential to discover extremely weakly-interacting particles with just a few hours of data and will probe couplings as low as $\sim 10^{-9}$ with a year of running. The Gamma Factory therefore may probe couplings lower than all other terrestrial experiments and is highly complementary to astrophysical probes. We outline the requirements of an experiment to realize this potential and determine the sensitivity reach for various experimental configurations. I. Introduction The search for new light and weakly-interacting particles is currently an area of great interest [1,2]. If new particles have masses in the MeV to GeV range, like most of the known particles, they cannot be coupled to the known particles with O(1) couplings. However, loopsuppressed interactions with Standard Model (SM) particles are expected in theories with a dark sector [3], and the requirement that such dark sectors contain dark matter particles with the desired thermal relic density also motivates such small couplings [4,5]. In fact, frameworks have been identified in which the couplings are first generated by anywhere from 1-loop to 6-loop interactions, resulting in couplings in the broad range of ε ∼ 10 −3 to 10 −13 [6]. Clearly the existence of such particles is an open experimental question, and novel searches for such particles should be explored, particularly if they exploit existing facilities (see, e.g., Refs. [7,8]). The Gamma Factory (GF) is such an initiative, which exploits the Large Hadron Collider (LHC) [9][10][11]. In this proposal, laser light with energy E laser ∼ 10 eV is back-scattered off partially-stripped ions that are accelerated in the LHC to Lorentz factors γ ∼ 200 to 3000. Using the same principle that governs radar guns, the laser light is Doppler shifted twice to energies These energies are well-matched to the MeV to GeV mass range for new, weakly-interacting particles. Just as remarkable, the expected intensities of Φ GF ∼ 10 16 to 10 18 s −1 are far greater than any other existing or proposed accelerator light source, and the resulting number of GF photons per year, N GF ∼ 10 23 to 10 25 , is significantly greater than the protons on target and electrons on target of all fixed target experiments used to search for new MeV to GeV particles to date. The GF, then, has the potential to explore models with light, weakly-interacting particles in regions of parameter space inaccessible to other experiments. In this paper, we determine the GF's discovery potential for a variety of new, weaklyinteracting particles X produced through dark Compton scattering, γe → eX. 
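As a rough numerical check of the quoted energy range (a sketch assuming the standard double Doppler relation E_max ≈ 4γ²E_laser for head-on back-scattering; the precise maximum depends on the atomic transition used, which is not specified here):

```python
# Maximum back-scattered photon energy from the double Doppler boost:
# the laser photon is boosted into the ion frame (~2*gamma) and the re-emitted
# photon is boosted again into the lab frame (~2*gamma), so E_max ~ 4*gamma^2*E_laser.
E_laser_eV = 10.0   # ~10 eV laser photons, as quoted above

for gamma in (500, 2250, 5000):
    E_max_MeV = 4 * gamma**2 * E_laser_eV / 1e6
    print(f"gamma = {gamma:5d}  ->  E_max ~ {E_max_MeV:7.0f} MeV")

# Lorentz factors of a few hundred to a few thousand therefore map ~10 eV laser
# light onto the ~10 MeV to ~1 GeV range discussed in the text; the exact
# gamma needed also depends on the laser wavelength and ion species chosen.
```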
Dark Compton scattering has been considered previously for existing photon beam facilities, which have been shown to provide new sensitivity in regions of parameter space with relatively large couplings ε ∼ 10 −5 to 10 −3 [12]. Here we focus on the GF's potential and consider dark photons, "anomaly-free" (B − L, L e − L µ , L e − L τ ) gauge bosons, dark Higgs bosons, and dark pseudoscalars. For the last two cases, where couplings are Yukawa-suppressed, dark Compton scattering is not promising; nuclear scattering may be more sensitive, but we will not consider this here. However, in all of the gauge boson cases, we find that dark Compton scattering at the GF has significant discovery prospects, probing regions of parameter space with masses m X ∼ 1 to 100 MeV and couplings ε ∼ 10 −9 to 10 −4 , where the low-ε part of the range extends to values far lower than all other terrestrial experiments. The GF is therefore complementary to other ongoing and proposed experiments that make use of the LHC to search for weakly-interacting particles [13][14][15][16][17][18][19][20], and our results provide a significant new physics case for the GF, supplementing existing SM and beyond the SM motivations [10,[21][22][23]. The experiment consists of a (graphite) target with thickness L target = 1 m, followed by a (lead) shield with thickness L shield = 2 m, an open air decay region with length L decay , and a tracking detector, centered on the beam axis, which we take to be a circular disk with diameter L det . The GF photon beam enters from the left and produces an X particle through dark Compton scattering γe → eX. The X particle is produced with an angle θ relative to the GF beamline and decays to an e + e − pair, which is detected in the tracking detector. II. A Fixed Target Experiment The fixed target experiment we propose is simple, compact, and not particularly remarkable; it is shown schematically in Fig. 1. A GF photon beam collides with a target material, producing new particles X through dark Compton scattering γe → eX. The target is followed immediately by a shield, a large block of matter that stops all SM particles. The X particles are extremely weakly interacting, however, and so they may pass through the shield and then decay to e + e − pairs, which may be detected in a particle detector. The detection of coincident e + and e − particles that point back to the target provides a striking signal of the production of a new fundamental particle. In this section, we discuss the SM background and the required materials and thickness of the target and shield. We also discuss, in general, the signal rate and its dependence on the X production cross section and decay width. In the following sections, we will consider specific candidate X particles and determine the sensitivity reach for each of these particles, as well as its dependence on the length of the decay volume L decay and the transverse size of the detector L det . As discussed in Sec. I, the GF will produce a beam of ∼ 10 MeV − GeV photons at intensities that are many orders of magnitude beyond current accelerator light sources. Taking the photon intensity to be Φ GF = 10 17 s −1 [9,10] at 200 MeV and assuming that the back-scattered photon power is fixed by the radio frequency power [9] resulting in the flux being inversely proportional to the photon energy (see, for example, Eq. (10) of Ref. 
[24]), we consider three sets of parameters: where the lowest photon energy is based on a longer laser wavelength or lower ion energy, the highest photon energy would be possible with the HE-LHC project [25,26], and, in each case, N GF is simply the number of photons produced in a full year at the corresponding intensity. The photon energies of Eq. (2) are maximal energies, and the energy distribution may be quite broad; see, e.g., Refs. [27,28]. In detail, however, the distribution depends on the particular atomic transition being used [29]. To highlight the dependence of our results on the new physics scenarios being probed and minimize the dependence on particular realizations of the GF, we will assume a monoenergetic photon beam with the energies given in Eq. (2) in determining sensitivity reaches. The actual sensitivities will be degraded by the energy spread, but this effect will be small away from threshold, and even for X masses near threshold, the degradation will not greatly compromise the discovery prospects of the GF. For example, if the effective GF intensity is reduced by a factor of 10, given the strong ε 4 dependence of the event rates (see Eq. (12)), the reach in ε will only be reduced by a factor of 1.8. As we will see, even with such a reduction, the GF's sensitivity reaches extend far beyond existing constraints. Of course, once the GF is precisely defined, the effect of beam energy spread should be included in a more refined analysis. These GF photons can then produce X particles through dark Compton scattering in a target material with cross section σ X ≡ σ(γe → eX). This competes with the far stronger SM processes, which, at these photon energies, are dominated by pair production in the target's nuclear electromagnetic field, with a small component from SM Compton scattering [30]. The probability of producing an X particle is where Z is the number of electrons per target atom, and σ SM is the SM cross section per target atom. We neglect secondary production of X particles from subsequent processes. Our analysis is therefore conservative, but these additional sources of X particles are unlikely to enhance significantly the sensitivity reaches we derive. Clearly the signal rate is optimized for target materials with low σ SM /Z. Since σ SM is very roughly proportional to Z 2 , this is minimized for low-Z materials. For H, Be, and C and the photon energies of interest, the SM cross sections are [31] For the photons to interact in the target, the target thickness should be a few mean free paths. At these photon energies, the mean free path is approximately 10 m in liquid hydrogen, 50 cm in beryllium, and 30 cm in graphite [30,32]. To choose a concrete and practical example for the rest of this analysis, we will assume a graphite target of thickness L target = 1 m. As we will see, L target L decay in the parameter regions of greatest interest, and so for simplicity, we assume that X particles are created with the probability given in Eq. (3) with a production point uniformly distributed within L target . For a background-free experiment, it is ideal, although not necessarily required, for the shield to stop all particles produced by the GF photon beam. A high-Z material is best, and lead (Pb) is an obvious choice. The mean free path in Pb for photons with energy E γ ∼ 20 − 1600 MeV is λ ∼ 1 − 2 cm [30]. Given an initial number of photons N 0 , the number remaining after traversing a thickness L shield of Pb is therefore N = N 0 e −L shield /λ . 
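Evaluating this exponential for the numbers quoted in the text gives a quick check of how thick the shield needs to be (simple arithmetic, not part of the original analysis):

```python
import numpy as np

# Photon attenuation in the Pb shield: N = N0 * exp(-L_shield / lambda).
N0 = 1e26                 # photons accumulated over several years of GF running
lam_cm_values = (1.0, 2.0)  # approximate mean free path in Pb for 20-1600 MeV photons
L_shield_cm = 200.0       # 2 m of lead

for lam in lam_cm_values:
    N = N0 * np.exp(-L_shield_cm / lam)
    print(f"lambda = {lam:.0f} cm : {L_shield_cm/lam:.0f} mean free paths, surviving photons ~ {N:.1e}")

# Even for the longer mean free path (100 mean free paths), the surviving number
# is ~1e-17, i.e. effectively no beam photons emerge from the 2 m shield.
```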
Thus, even with an initial number of photons N 0 = 10 26 , corresponding to several years of GF running, the number of photons can be reduced to negligible levels for a shield of thickness L shield ∼ 60λ ∼ 0.6 − 1.2 m. We therefore expect that a 2 m thick Pb shield will be sufficient to remove the SM background. 1 The approximate power of the very high photon flux on the target will be (200 MeV)(1.6022 × 10 −19 J/eV)(10 17 s −1 ) ∼ 3 MW. This is comparable to the average beam power of 18 MW for the 250 GeV ILC beam dumps [33] and 5.3 MW for the 125 GeV ILC beam dumps [34]. In addition, the photon beams are narrowly collimated and cannot be spread out to reduce the energy density by magnets, as shown for the photonphoton collider configuration of the ILC with 10-15 MW of power [35]. Therefore, detailed design of cooling systems for the target will be required (see Refs. [33,35]). Finally, we must determine the decay volume length L decay and detector size L det . As we will see, for all models considered, in the region of parameter space that can be probed for the first time at the GF, the X decay length d X = γ X v X cτ X is far greater than any reasonable L decay . The probability of decay in the decay volume is therefore The number of signal events scales linearly with L decay , but larger L decay requires a detector with larger L det to capture the produced e + e − pairs. We will explore how the sensitivity depends on L decay and L det in the following sections, but as a preview of these results, we will find that parameters L decay ∼ 10 m and L det ∼ 1 m will be sufficient to probe large swaths of new parameter space. III. Dark Photons We first consider the case where the new, weakly-interacting particle is the dark photon A [3,36,37]. The dark photon's properties are determined by two parameters, its mass m A and its coupling ε (in units of e), which enter the Lagrangian through where q f is the SM electric charge of fermion f . The cross section for dark Compton scattering γe → eA and the angular distribution of the produced dark photons are shown in Fig. 2. (See the Appendix for further details.) The cross section is maximal not far above threshold, then drops for increasing E γ , but remains within an order of magnitude of the maximum for all GF photon energies. The angular distribution of the produced dark photons is also highly peaked in the forward direction. This is clearly true at threshold, since there is no excess energy to support components of the A momentum transverse to the beam, but we see that it is even true for light dark photons when the beam energy is far above threshold, at least for the beam energy shown. Once produced, the dark photon dominantly decays to pairs of SM particles, assuming m A > 2m e . For m A > 2m µ , decays to muons and a number of hadronic states are possible, but, given the available GF energies of Eq. (2), m A 40 MeV, and so only the decay channel A → e + e − is open. We assume that there are no non-SM decays. In this case, the dark photon decay width is where in the last expression, we have assumed m A m e . If the A is produced relativistically, with v A ≈ 1 and γ A ≡ E A /m A 1, its decay length is We see that in the region of parameter space where the GF will probe new parameter space, d A L decay , as anticipated in Eq. (7). The probability of decay within the decay volume is very small, and this must be compensated by producing an extraordinarily large number of dark photons. 
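For orientation, the sketch below evaluates these decay-length numbers, assuming the standard A' → e+e- width Γ = (α ε² m_A/3)(1 + 2m_e²/m_A²)√(1 − 4m_e²/m_A²) and the long-lifetime approximation P_decay ≈ L_decay/d_A; the parameter values are illustrative, not those of a specific benchmark in the text:

```python
import numpy as np

alpha  = 1 / 137.036
m_e    = 0.511        # MeV
hbar_c = 1.9733e-13   # MeV * m

def decay_length_m(m_A, eps, E_A):
    """Lab-frame decay length gamma*beta*c*tau of a dark photon decaying to e+e-."""
    r = (m_e / m_A) ** 2
    Gamma = alpha * eps**2 * m_A / 3 * (1 + 2 * r) * np.sqrt(1 - 4 * r)   # width in MeV
    ctau = hbar_c / Gamma                                                 # c*tau in metres
    gamma_beta = np.sqrt((E_A / m_A) ** 2 - 1)
    return gamma_beta * ctau

m_A, eps, E_A = 10.0, 1e-8, 100.0   # mass (MeV), coupling, dark photon energy (MeV)
d_A = decay_length_m(m_A, eps, E_A)
P_decay = 12.0 / d_A                # L_decay = 12 m << d_A, so P ~ L_decay / d_A
print(f"d_A ~ {d_A:.2e} m, decay probability in 12 m ~ {P_decay:.2e}")
# Roughly 1e6 m and 1e-5 for these inputs: only a tiny fraction of dark photons
# decays inside the decay volume, which is why the enormous photon flux matters.
```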
To determine the sensitivity reach, for any parameters (m A , ε), we simulate dark photon production by dark Compton scattering, including the correct cos θ distribution. In particular, using a Monte Carlo approach, we sample X particle momenta, weighted by the matrix element of the production process. We then decay the dark photon to e + e − pairs, according to the probability distribution given in Eq. (7), with the approximation that the decays are isotropic in the A rest frame. Practically, for a given point in parameter space, i.e., for a fixed pair of X mass and coupling, we randomly extract 10 5 values of cos θ from the inverse of the cumulative distribution function: where |M| 2 denotes the spin-averaged matrix element of the dark Compton scattering process. From the distribution of cos θ so obtained, we eventually derive the distribution of the signal events, P(N S ). In particular, after checking that the simulated e ± pairs pass through the detector, we can compute the mean of events N S . If N S ≥ 3 events, we accept the chosen point in parameter space as one within the GF sensitivity. In any other case, we discard it. A signal event is indeed defined to be an event where both the e + and the e − pass through the tracking detector shown in Fig. 1. The coincident detection of two oppositely-charged particles, each pointing back to the target, will be a striking signal, and we will assume zero background. If the e + and e − energies can be measured, for example, by placing the tracker in a strong magnetic field or adding a calorimeter, the invariant mass of the e + e − pair can be determined, providing a further kinematic constraint to differentiate signal from background, as well as a measurement of the A mass. The sensitivity reach is shown in Fig. 3. These results may be understood as follows: The sensitivity regions are bounded at low mass by the requirement that the e + e − decay is open (m A > 2m e ) and at high mass by the requirement that dark Compton scattering γe → eX is kinematically accessible (m A 2m e E γ ). The regions are further bounded at large ε by the requirement that the dark photons travel through the target and shield before decaying (d A 3 m), and at small ε by the requirement that a sufficient number of dark photons decay in the decay volume. It is instructive to understand the bound at small ε by estimating the number of signal events in the limit of long decay lengths. We parametrize σ X ∼ ε 2 (1 mb) (10 MeV/m A ) 2 , assume E γ = 200 MeV and a typical dark photon energy E A ∼ 100 MeV, and let L decay = 12 m and P det ∼ 1 be the probability that a dark photon that decays in the decay volume is captured in the detector. The signal event rate is, then, roughly We see that, provided the beam energy is above threshold, the number of events is approximately independent of m A , but is highly sensitive to ε. One also expects to probe ε as low as 10 −9 , given the extraordinary number of GF photons on target. All of these features are confirmed by the simulation results shown in Fig. 3. The GF probes new parameter space at low values of ε between 10 −9 and 10 −7 . Such low values are inaccessible to all other terrestrial experiments investigated to date, because the signal rate is suppressed by low production rates and the long A decay length. At the GF, however, this suppression is compensated by the extraordinary number of photons on target. 
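A schematic version of the inverse-CDF sampling step used in the simulation is sketched below (Python); the angular weight here is a toy forward-peaked function standing in for the actual spin-averaged matrix element of dark Compton scattering, which is derived in the Appendix:

```python
import numpy as np

def sample_costheta(weight_fn, n_samples=100_000, n_grid=2001, seed=0):
    """Inverse-transform sampling of cos(theta) from an unnormalized angular weight."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-1.0, 1.0, n_grid)
    w = weight_fn(grid)
    cdf = np.cumsum(w)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalize the cumulative distribution to [0, 1]
    u = rng.random(n_samples)
    return np.interp(u, cdf, grid)              # numerical inverse of the CDF

# Toy forward-peaked weight, for illustration only (NOT the real |M|^2):
toy_weight = lambda c: 1.0 / (1.05 - c) ** 2
cth = sample_costheta(toy_weight)
print("fraction within 5 degrees of the beam axis:",
      np.mean(cth > np.cos(np.radians(5.0))))
```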
Such low values of ε are subject to astrophysical constraints, for example, from supernova [38][39][40][41][42][43][44][45][46][47][48][49][50] (for further details, see also [51,52]), from (g − 2) e [53], and the dashed gray line encloses the region probed by supernova cooling, as determined in Ref. [54]. cooling [54][55][56][57][58][59][60]. However, such constraints are dependent on a number of astrophysical assumptions, which may weaken the constraints or possibly even remove them altogether; see, e.g., Ref. [61]. The GF therefore probes a significant new region of parameter space that cannot be probed by other particle experiments, and it is highly complementary to astrophysical probes. In the left panel of Fig. 4, we show signal event rate contours for the GF parameters (E γ , N GF ) = (20 MeV, 3 × 10 25 ) (yellow) and (E γ , N GF ) = (200 MeV, 3 × 10 24 ) (orange). Given the strong ε dependence of Eq. (12), we see that there are uncharted regions of parameter space where as many as 3 × 10 4 dark photons could be produced in a year. Assuming a background-free experiment, a dark photon discovery could be achieved with just a few hours of running. Alternatively, if there is background, one can see that requiring, say, 10 or 100 signal events does not reduce the sensitivity region much, given the dependence of the signal rate on ε 4 . In the right panel of Fig. 4, we show the dependence of the sensitivity reach on the size of the detector L det . For L decay = 12 m, and L det = 3 m, the detector is large enough to catch all signal events, and so is effectively infinite in size. For L det = 1.5 m and 0.75 m, however, events may be lost. This degrades the reach primarily at low m A : for (E γ , N GF ) = (20 MeV, 3 × 10 25 ), the low m A coverage is degraded significantly for L det = 1.5 m and almost all coverage is lost for L det = 0.75 m, while for (E γ , N GF ) = (200 MeV, 3 × 10 24 ), the degradation is minimal for L det = 1.5 m, but again becomes significant for L det = 0.75 m. This may be understood as follows: for low masses, there is sufficient energy for the dark photon to be produced with significant transverse momentum, and so one or both of the e + and e − particles produced escape detection. On the other hand, for large m A near threshold, the dark photons are produced in the direction of the photon beam. When they decay, the e + e − pairs are produced with some transverse momentum, but this is typically small enough so that no events are lost. For example, for m A = 10 MeV and E A ∼ 100 MeV, the typical angle of the e ± relative to the beamline is m A /(2E A ) ∼ 0.05, and so these particles are detected in a detector with size L det ∼ 0.1L decay . Finally, in Fig. 5, we show the distribution of distances between the e + and e − when they pass through the detector for several representative E γ and dark photon parameters. For m A = 10 MeV, the separations are ∼ 10 cm−1 m; for m A = 2 MeV, the e + and e − are more collimated, as expected, and their separations are reduced to ∼ 1 − 10 cm. Nevertheless, in all cases shown, the typical separations are large compared to the position resolution of typical trackers, and so the e + and e − are easily distinguished in a tracker. With 2 or more tracking layers, one can also verify that the e + and e − are coming from the direction of the GF photon beam. Although we do not discuss a detailed detector design here, such kinematic constraints can be powerfully exploited to differentiate signal from background. IV. 
Anomaly-Free Gauge Bosons The GF also has significant potential to discover other light gauge bosons. We will consider the three cases of gauge bosons that mediate the "anomaly-free" U(1) gauge in- where j X µ is the appropriate current. We simulate the production of these anomaly-free gauge bosons through dark Compton scattering γe → eX, following the same procedure used for dark photons in Sec. III. Unlike in the case of dark photons, in the anomaly-free gauge boson cases, decays to neutrinos are open, reducing the decay lengths, but otherwise the analysis is very similar 3 . In the parameter space of greatest interest, the results for L e − L µ and L e − L τ bosons are identical. The sensitivity reaches for the B − L and L e − L µ,τ cases are shown in Fig. 6. As in the case of dark photons, the GF is able to probe new parameter space for couplings g X that are far below the reach of all other terrestrial experiments, and the GF's sensitivity is complementary to supernovae probes. V. Dark Higgs Bosons and Pseudoscalars For completeness, we consider two spin-0 dark mediator particles (see, e.g., Refs. [62,63]): the dark Higgs boson φ, with Lagrangian terms and the dark pseudoscalar a, with Lagrangian terms where v 246 GeV is the SM Higgs vacuum expectation value. The dark Compton scattering production cross sections of spin-0 bosons is detailed in the Appendix. The cross sections are shown in Fig. 7. As in the spin-1 cases, the cross sections peak near threshold and then drop as E γ increases, but for all GF energies, the cross sections remain within roughly an order of magnitude of their maximum values. In Fig. 8, we show the GF sensitivity to these two spin-0 candidates. Unfortunately, the couplings of both spin-0 candidates considered here are Yukawa-suppressed. This implies that the dark mediator's decays to electrons are extremely suppressed and the decay length is extremely long, which suppresses the rate. Competing constraints, many of which use processes where the dark mediator interacts with a 2nd or 3rd generation particle and so is not as Yukawa-suppressed, are typically stronger, and the GF with one year of running does not probe new parameter space in these models. VI. Conclusions The proposed GF will be able to provide 10 23 to 10 25 photons on target per year, a remarkable leap in light source intensity. By exploiting the LHC's ability to accelerate partially-stripped ions to Lorentz factors of γ ∼ 200 − 3000, ∼ 10 eV photons can be backscattered to 10 MeV to GeV energies, sufficient to search for new particles with masses in the 1 − 100 MeV mass range. In this paper, we have investigated for the first time the potential of the GF to discover new particles through dark Compton scattering, γe → eX, where X is a dark photon, anomaly-free gauge boson, dark Higgs boson, or dark pseudoscalar. In the cases of the spin-1 gauge bosons, we have found that the extraordinary intensities of the GF allow it to probe couplings as low as ε ∼ 10 −9 , over an order of magnitude lower than existing bounds from terrestrial experiments. The ε 4 dependence of the signal event rate implies that as many as 10 4 new gauge bosons may be produced in a year at the GF, or, in other words, the GF may start probing new models with just a few hours of running. 
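To put the "few hours" statement in numbers (simple arithmetic on the event-rate figures quoted above, using the same 3-event threshold assumed for a background-free search):

```python
hours_per_year = 365.25 * 24

for events_per_year in (3e4, 1e4):
    hours_for_3_events = 3 / events_per_year * hours_per_year
    print(f"{events_per_year:.0e} events/yr -> {hours_for_3_events:.1f} hours for 3 events")

# ~0.9 hours at 3e4 events/year and ~2.6 hours at 1e4 events/year, i.e. "a few
# hours" of running for the most favorable uncharted parameter points.
```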
The region of parameter space with ε ∼ 10 −9 can be probed by bounds from supernova cooling [54][55][56][57][58][59][60], but such constraints depend on astrophysical assumptions that have been argued to weaken or possibly even remove them altogether [61]. The GF therefore provides a highly complementary probe. The fixed target experiment proposed here is shown in Fig. 1. It consists of a low-Z target to enhance the new physics event rate, followed by a high-Z shield to eliminate SM background, followed by ∼ 10 m-long decay volume and a tracking detector with a cross sectional area of ∼ 1 − 10 m 2 . We have assumed that the detection of coincident e + and e − particles that point back toward the GF photon beam, with an invariant mass equal to the X boson's mass, will provide a spectacular and essentially background-free signal. For the spin-0 candidates, with Yukawa-suppressed couplings to SM fermions, we have found poor discovery prospects, since the signal rates are highly suppressed by the GF's dependence on X couplings to electrons. For such models, GF photons scattering off not electrons, but nucleons and nuclei may provide significantly improved prospects. Finally, we have considered only a small sample of the many possible new light, weakly-interacting particles. Axion-like particles have recently been considered [65], and evaluations of the GF's sensitivity reaches for other particles, such as sterile neutrinos, may also be enlightening. A. Production Cross Section Calculations In this Appendix, we derive the production cross sections entering the analysis. The diagrams contributing to the "dark Compton scattering" processes γe → eX, where X is a vector A , a scalar φ, or a pseudoscalar a, are shown in Fig. 9. Following the momentum assignments of Fig. 9, the amplitude for the vector boson case is where, for dark photons, B − L gauge bosons, and L e − L µ,τ gauge bosons, the coupling g X is εe, g B−L , and g Le−Lµ,τ , respectively. The spin-averaged amplitude squared is . (A2) The amplitude squared has also been derived in Ref. [12], and the above expression matches a similar expression found in Ref. [66], once one accounts for the different metric used. On integrating the differential cross section in the CM frame, over the entire range of the angle θ * between the incoming photon and the vector boson, one finds that the total cross section in the CM frame is where In the lab frame, where the photon is scattered off a static electron, the differential cross section can be obtained from the expression of Eq. (A3) by applying a Lorentz boost along the opposite direction to the incoming electron in the CM frame to bring it to rest. Therefore, in the lab frame, the differential cross section for vector boson production will be where β and β A are, respectively, the velocity of the lab frame with respect to the CM frame and the velocity of the scattered vector boson along the direction of its scattering angle θ in the lab frame. As usual, γ = 1/ 1 − β 2 . In principle, the total cross section in the lab frame can be derived by integrating the above differential cross section over the entire range of the scattering angle θ. However, for a massive vector boson, the integration can be non-trivial. On the other hand, since the total cross section is boost-invariant, we can safely bypass the intricacies of such integration by simply substituting in Eq. (A4) to find that the total cross section in the lab frame is where E γ is the energy of the incident photon. From Eq. 
(A7), we can also find that the threshold photon energy for X production is In a similar way as above, we can derive the corresponding expressions for the dark Higgs boson and dark pseudoscalar cases. The corresponding amplitudes are where, for dark Higgs bosons and dark pseudoscalars, the coupling g X is sin α m e /v and g Y m e /(2v), respectively. The spin-averaged matrix elements squared are the same as in Refs. [66,67] (with the appropriate choice of metric): Finally, the expressions for the total cross sections in the lab frame are . (A18)
7,078.4
2021-05-21T00:00:00.000
[ "Physics" ]
Deep neural model with enhanced embeddings for pharmaceutical and chemical entities recognition in Spanish clinical text In this work, we introduce a Deep Learning architecture for pharmaceutical and chemical Named Entity Recognition in Spanish clinical cases texts. We propose a hybrid model approach based on two Bidirectional Long Short-Term Memory (Bi-LSTM) network and Conditional Random Field (CRF) network using character, word, concept and sense embeddings to deal with the extraction of semantic, syntactic and morphological features. The approach was evaluated on the PharmaCoNER Corpus obtaining an F-measure of 85.24% for subtask 1 and 49.36% for subtask2. These results prove that deep learning methods with specific domain embedding representations can outperform the state-of-the-art approaches. Introduction Currently, the number of biomedical literature is growing at an exponential rate. Therefore, the efficient access to information on biological, chemical, and biomedical data described in scientific articles, patents, or e-health reports is a growing interest in biomedical research, industrial medicine manufacturing, and so forth. In this context, improved access to chemical and drug name mentions in biomedical texts is a crucial step downstream tasks such as drug and protein interactions, chemical compounds, adverse drug reactions, among others. Named Entity Recognition (NER) is one of the fundamental tasks of biomedical text mining, intending to automatically extract and identify mentions of entities of interest in running text, typically through their mention offsets or by classifying individual tokens whether they belong to entity mentions or not. There are different approaches to address the NER task. Dictionary-based methods, which are limited by the size of the dictio-nary, spelling errors, the use of synonyms, and the constant growth of vocabulary. Rule-based methods and Machine Learning methods usually require both syntactic and semantic features as well as specific language and domain features. One of the most effective methods is Conditional Random Fields (CRF) (Lafferty et al., 2001) since CRF is one of the most reliable sequence labeling methods. Recently, deep learning-based methods have also demonstrated state-of-the-art performance for English (Hemati and Mehler, 2019;Pérez-Pérez et al., 2017;Suárez-Paniagua et al., 2019) texts by automatically learning relevant patterns from corpora, which allows language and domain independence. However, until now, to the best of our knowledge, there is only one work that addresses the generation of Spanish biomedical word embeddings (Armengol-Estapé Jordi, 2019; Soares et al., 2019). In this paper, we propose a hybrid model combining two Bi-LSTM layers with a CRF layer. To do this, we adapt the NeuroNER model proposed in (Dernoncourt et al., 2017) for track 1 (NER offset and entity classification) of the Phar-maCoNER task . Specifically, we have extended NeuroNER by adding context information, Part-of-Speech (PoS) tags, and information about overlapping or nested entities. Moreover, in this work, we use existing pre-trained as well as our trained word embedding models: i) a word2vec/FastText Spanish Billion Word Embeddings models (Cardellino, 2016), which were trained on the 2014 dump of Wikipedia ii) our medical word embeddings for Spanish trained using the FastText model and iii) a sense-disambiguation embedding model (Trask et al., 2015). 
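A minimal sketch of the exact-then-fuzzy lookup used for concept indexing is given below (Python, standard library only; the real system queried the SNOMED-CT Spanish Edition via PyMedTermino and a FuzzyDict implementation, and the dictionary entries and concept identifiers here are placeholders):

```python
import difflib

# Toy SNOMED-CT-style dictionary: normalized term -> placeholder concept identifier.
snomed_es = {
    "calcio": "SCTID-0001",
    "magnesio": "SCTID-0002",
    "creatinina": "SCTID-0003",
}

def index_concept(mention, dictionary, cutoff=0.85):
    """Exact lookup after normalization, then fuzzy matching above a similarity threshold."""
    key = mention.lower().strip()
    if key in dictionary:                       # full-text / exact match
        return dictionary[key]
    close = difflib.get_close_matches(key, dictionary.keys(), n=1, cutoff=cutoff)
    return dictionary[close[0]] if close else None   # fuzzy match, or unresolved

print(index_concept("Magnesio", snomed_es))    # resolved by exact match after normalization
print(index_concept("creatinna", snomed_es))   # misspelling recovered by fuzzy matching
```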
For track 2 (concept indexing) based on the output of the previous step, we use fulltext search and fuzzy matching on the SNOMED-CT Spanish Edition dictionary to obtain the corre-sponding index. Experiment results on PharmarCoNER tasks showed that our features representation improved each of separate representations, implying that LSTM-based compositions play different roles in capturing token-level features for NER tasks, thus making improvements in their combination. Moreover, the use of specific domain word vector representations (word embeddings) outperform general domain word vector and concept vector representations (concept embeddings). Materials and Methods In this section, we first describe the corpora, the training procedure and the word, concept, and sense embedding models used in our study. Then, we describe our system architecture for offset and entity classification. Corpora The corpus was gathered from Spanish biomedical texts from different multilingual biomedical sources: Source corpus details are described in Table 1. All the corpora are in XML (Dublin core format) and TXT format files. XML files were processed for extract only raw text from specific XML tags such as "title" and "description" from Spanish labels, based on the Dublin Core format as shown in Figure 1. TXT files were not processed. Raw texts from all files were compiled in a single TXT file. Texts were processed, setting all to lower, removing punctuations, trailing spaces and stop words and used as input to generate our word embeddings. Sentences pre-processing (split and tokenized) were made using Spacy 1 , an open-source python library for advanced multi-language natural language processing. Transfer Learning Transfer learning aims to perform a task on a dataset using knowledge learned from a previous dataset (Giorgi and Bader, 2018). As shown in many works, such as speech recognition (Wang and Zheng, 2015), sentence classification (Mou et al., 2016) and Named Entity Recognition (Giorgi and Bader, 2018), transfer learning improves generalization of the model, reduces training times on the target dataset, and reduces the amount of labeled data needed to obtain high performance. In this work we used an existing generic word embedding (Word2Vec embedding trained on Spanish Wikipedia), a trained medical embedding model, and a medical/generic sensedisambiguation embedding. Word embedding is an approach to represent words as vectors of real numbers. Word embedding models have gained much popularity among the NLP community because they are able to capture syntactic and semantic information among words. In this work, we used the Spanish Billion Words Corpora (SBWC) (Cardellino, 2016) (W2V-SBWC), which is a pre-trained model of word embeddings trained on different general domain text corpora written in Spanish (such Ancora Corpus (Martí et al., 2007) and Wikipedia) using the word2vec (Mikolov et al., 2013) implementation. The FastText-SBWC pre-trained word embeddings model was trained on the SBWC using the FastText implementation. Furthermore, we used the sense2vec (Trask et al., 2015) model, which provides multiple dense vector representations for each word based on the sense of the word. This model is able to analyze the context of a word based on the lexical and grammatical properties of words and then assigns its more adequate vector. We used the Reddit Vector, a pre-trained model of sense-disambiguation representation vectors presented by (Trask et al., 2015). 
This model was trained on a collection of general-domain comments published on Reddit (corresponding to the year 2015) written in Spanish and English. Medical word and concept embeddings We used the FastText (Bojanowski et al., 2016) implementation to train our word embeddings on the Spanish Biomedical Corpora (SBC) described in section 2.1 (FastText-SBC). Moreover, we trained a concept embedding model by replacing biomedical concepts in the SBC with their unique SNOMED-CT Spanish Edition identifiers (SNOMED-SBC). We used the PyMedTermino library (Lamy et al., 2015) for concept indexing. A full-text search with the Levenshtein distance algorithm (Miller et al., 2009) was applied in the first instance for concept indexing, and a thresholded fuzzy search using the FuzzyDict implementation (Hemati and Mehler, 2019) was applied as a second approach for concepts not found by partial matching. The FastText model uses a combination of several subcomponents to produce high-quality embeddings: standard CBOW or skip-gram models, with position-dependent weighting, phrase representations, and sub-word information combined. The training parameters for each model are shown in Table 2. Our pre-trained models can be found on GitHub together with the corpus sources, text preprocessing, and training information. System Description Our approach is based on a deep learning network with a preprocessing step, transfer learning, two recurrent neural network layers and a final CRF layer (see Figure 2), as proposed in (Dernoncourt et al., 2017). The input to the first Bi-LSTM layer is the character embeddings. In the second layer, we concatenate the character-based representations from the first layer with word, concept, and sense-disambiguation embeddings as input to the second Bi-LSTM layer. Finally, the last CRF layer obtains the most suitable label for each token using a tag encoding format. For more details about NeuroNER, please refer to (Dernoncourt et al., 2017). Our contribution consists of extending the NeuroNER system with additional features. In particular, sense embeddings (obtained using POS tags), concept embeddings (obtained using semantic features) and the extended BMEWO-V encoding format have been added to the network and are applied as a preprocessing step. POS tags are concatenated to each token in order to create dense vector representations containing word/POS information (sense embeddings), which are included in the token embedding layer of the network. Furthermore, concept features are dense vector representations generated by replacing concepts with their unique SNOMED concept identifiers (concept embeddings); these are likewise included in the token embedding layer of the network. The BMEWO-V encoding format distinguishes the B tag for entity start, the M tag for entity continuity, the E tag for entity end, the W tag for a single-token entity, and the O tag for other tokens that do not belong to any entity. The V tag allows us to represent nested entities. BMEWO-V is similar to previous encoding formats (Borthwick et al., 1998); however, it allows the representation of nested and discontinuous entities. As a result, we obtain our sentences annotated in the CoNLL-2003 format (Tjong Kim Sang and De Meulder, 2003). An example of the BMEWO-V encoding format applied to the sentence "calcio iónico corregido 1,16 mmol/l y magnesio 1,9 mg/dl." ("ionic calcium corrected 1.16 mmol/l and magnesium 1.9 mg/dl.") can be seen in Figure 3 and Table 3.
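A hedged sketch of how the FastText-SBC embeddings could be trained with the gensim FastText implementation; the hyperparameters shown are placeholders (the actual values are those listed in Table 2), and the SNOMED identifier in the final comment is illustrative only.

```python
# Hedged sketch of training the FastText-SBC word embeddings with gensim.
from gensim.models import FastText

# `sentences` is the preprocessed Spanish Biomedical Corpus: a list of token lists.
ft_sbc = FastText(
    sentences=sentences,
    vector_size=300,   # embedding dimensionality (assumed)
    window=5,
    min_count=5,
    sg=1,              # skip-gram variant
    min_n=3, max_n=6,  # character n-gram range used for sub-word vectors
    epochs=10,
)
ft_sbc.save("fasttext-sbc.model")

# For the SNOMED-SBC concept embeddings, the same training is run after
# replacing recognised concept mentions with their SNOMED-CT identifiers,
# e.g. ["paciente", "con", "sct_302497006"] instead of the surface form
# (identifier shown is illustrative only).
```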
First Bi-LSTM layer using character embeddings Word embedding models are able to capture syntactic and semantic information. However, other linguistic information such as morphological information, orthographic transcription, or part-of-speech (POS) tags is not exploited. According to (Ling et al., 2015), the use of character embeddings improves learning for specific domains and is useful for morphologically rich languages. For this reason, we decided to include character-level representations to obtain morphological and orthographic information from words. Each word is decomposed into its character n-grams, which are initialized with random dense vectors that are then learned. We used a 25-feature vector to represent each character. In this way, tokens in sentences are represented by their corresponding character embeddings, which are the input to the first Bi-LSTM network. Second Bi-LSTM layer using word and sense embeddings The input to the second Bi-LSTM layer is the concatenation of the character-based representations from the first layer with the pre-trained word or concept embeddings and sense-disambiguation embeddings (described in sections 2.2 and 2.3) of the tokens in a given input sentence. The goal of the second layer is to obtain a sequence of probabilities for each tag in the BMEWO-V encoding format. In this way, for each input token, this layer returns six probabilities (one for each tag in BMEWO-V); the predicted tag for a token would simply be the one with the highest probability if tokens were decoded independently. Last layer based on Conditional Random Fields (CRF) To improve the accuracy of predictions, a Conditional Random Field (CRF) (Lafferty et al., 2001) model is trained, which takes as input the label probabilities for each individual token from the previous layer and obtains the most probable sequence of predicted labels based on the correlations between labels and their context. Handling labels for each word independently ignores sequence constraints: for example, in the drug sequence labeling problem, an "I-NORMALIZABLES" tag cannot appear before a "B-NORMALIZABLES" tag has opened the entity mention. Finally, once tokens have been annotated with their corresponding labels in the BMEWO-V encoding format, the entity mentions must be transformed into the BRAT format. V tags, which identify nested or overlapping entities, are generated as new annotations within the scope of other mentions. Evaluation As described above, our system is based on a deep network with two Bi-LSTM layers and a final CRF layer. We evaluate our NER system using the train, validation, and test datasets (SPACCC) provided by the PharmaCoNER task organizers (Gonzalez-Agirre et al., 2019). Detailed information for each dataset can be seen in Table 4. The PharmaCoNER dataset is a manually annotated corpus of 1,000 clinical cases written in Spanish and annotated with mentions of chemical compounds, drugs, genes, and proteins. The dataset consists of Normalizables (4,398), No Normalizables (50), Proteins (3,009), and Unclear (167) labels. Further details can be found in (Gonzalez-Agirre et al., 2019). The PharmaCoNER task comprises two subtasks. Track 1 considers offset recognition and entity classification of pharmacological substances, compounds, and proteins. Track 2 considers concept indexing, where for each entity the list of unique SNOMED concept identifiers must be generated.
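The following is a simplified sketch of the two-level Bi-LSTM + CRF tagger described above, written in PyTorch with the pytorch-crf package rather than the TensorFlow-based NeuroNER used in the paper; the 25-dimensional character vectors and the six BMEWO-V tags follow the text, while every other dimension is an assumption.

```python
# Hedged sketch of the character-level Bi-LSTM, token-level Bi-LSTM and CRF stack.
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRFTagger(nn.Module):
    def __init__(self, n_chars, n_tags=6, char_dim=25, token_feat_dim=900, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # First Bi-LSTM: runs over the characters of each word.
        self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True, batch_first=True)
        # Second Bi-LSTM: runs over tokens; its input is the concatenation of the
        # char-LSTM summary with pre-trained word/concept/sense embeddings
        # (token_feat_dim is the size of that concatenation and is an assumption).
        self.token_lstm = nn.LSTM(2 * char_dim + token_feat_dim, hidden,
                                  bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(2 * hidden, n_tags)
        self.crf = CRF(n_tags, batch_first=True)

    def _char_features(self, char_ids):
        # char_ids: (batch, seq_len, max_word_len) integer character ids
        b, s, w = char_ids.shape
        out, _ = self.char_lstm(self.char_emb(char_ids.view(b * s, w)))
        return out[:, -1, :].view(b, s, -1)            # last state as word summary

    def forward(self, char_ids, token_feats, tags=None, mask=None):
        feats = torch.cat([self._char_features(char_ids), token_feats], dim=-1)
        emissions = self.emissions(self.token_lstm(feats)[0])
        if tags is not None:                           # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)   # inference: best tag sequence
```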
Scope-level F-measure is used as the main metric, where true positives are entities which match the gold standard clue words and the scope boundaries assigned to the clue word. A detailed description of the evaluation can be found on the PharmaCoNER website. Track 1 - Offset detection and Entity Classification The NER task is addressed as a sequence labeling task. For track 1 we tested different configurations with various pre-trained embedding models. The embedding models and their parameters are summarized in Table 5. Table 6 describes our different experiment configurations. In Table 8, we compare the different pre-trained models for Spanish on the validation dataset. As shown in Table 8, domain-specific word embeddings outperform general-domain models by almost 5 points. For the test dataset, we applied our best system configuration, FastText-SBC + Reddit (see Table 8), obtaining an F-score of 85.24% for offset detection and entity classification. Furthermore, Table 7 shows the classification results obtained by our best system configuration for track 1, with a micro average of 88.10% on the validation dataset. Moreover, we compared our best system configuration (FastText-SBC + Reddit) with the baseline system (NeuroNER without POS and the BMEWO-V encoding format) using the same pre-trained models and configuration. Table 9 shows that our extended system outperforms the baseline system, which indicates that POS tags and the BMEWO-V format are an additional source of information that can be leveraged by neural networks while keeping our model domain agnostic. Furthermore, the use of domain-specific word embeddings substantially improves performance, as shown in Table 8. Track 2 - Concept Indexing For track 2, we applied the same approach described for SNOMED-SBC model training in section 2.3 to the entities obtained in the previous task. We used the PyMedTermino library, employing a two-stage search consisting of a full-text search followed by a fuzzy search for concepts not found by partial matching. Our results for track 2 are low due to a large number of misspellings that exceed the similarity threshold, such as "diacepam" ("diazepam"); drug names where the identifier corresponds to the active substance, as with "durogesic" ("Duragesic"), whose active ingredient is "fentanyl"; and identifiers that do not exist in SNOMED CT, such as CHEBI:135810. Conclusions In this work, we propose a system for the detection of chemical compounds, drugs, genes, and proteins in clinical narrative written in Spanish. We address the named entity recognition task as a sequence labeling task. Our hybrid model, based on machine learning and deep learning approaches, uses only dense vector representations as features instead of hand-crafted word-based features. We showed that the use of dense representations of words, such as word-level embeddings, character-level embeddings, and sense embeddings, is helpful for named entity recognition. The hybrid system achieves satisfactory performance, with an F-score over 85%. The extension of the NeuroNER network is domain-independent and could be used in other fields; although generic pre-built word embeddings can be used, new medical Spanish word and concept embeddings have been generated for this work. As future work, we plan to enhance the SNOMED-CT concept embeddings and analyze why their performance is lower than that of the medical word embeddings. We plan to test whether other supervised classifiers such as Markov Random Fields, Optimum-Path Forest, or CRF-as-RNN would obtain more benefit from dense vector representations.
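A simplified stand-in for the two-stage concept indexing used in track 2, written with rapidfuzz instead of PyMedTermino and FuzzyDict; the SNOMED identifiers and the similarity cutoff are illustrative assumptions.

```python
# Hedged sketch of the two-stage (exact then fuzzy) SNOMED-CT concept lookup.
from rapidfuzz import fuzz, process

# term (lower-cased Spanish description) -> SNOMED-CT identifier
snomed_terms = {
    "diazepam": "387264003",       # identifiers shown here are illustrative only
    "fentanilo": "373492002",
}

def index_concept(mention, score_cutoff=90):
    key = mention.lower().strip()
    if key in snomed_terms:                      # stage 1: exact / full-text match
        return snomed_terms[key]
    hit = process.extractOne(                    # stage 2: fuzzy match with threshold
        key, snomed_terms.keys(), scorer=fuzz.ratio, score_cutoff=score_cutoff
    )
    return snomed_terms[hit[0]] if hit else None

print(index_concept("diacepam"))   # the misspelling still maps to the diazepam concept
```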
That is to say, we would use the same continuous representations with the aforementioned classifiers. Apart from that, we could train word embeddings on multiple multilingual biomedical corpora to obtain multilingual word representations, and test other word representation algorithms such as concept embeddings based on UMLS or another dictionary of unique biomedical concept identifiers. The motivation would be to see whether word embeddings generated from multilingual biomedical domain texts can help to improve the results and provide a deep learning model that is both language- and domain-independent. Funding This work was supported by the Research Program of the Ministry of Economy and Competitiveness - Government of Spain (DeepEMR project TIN2017-87548-C2-1-R).
3,801
2019-11-01T00:00:00.000
[ "Chemistry", "Computer Science", "Medicine" ]
RADIAL VARIATION OF FUNCTIONS IN BESOV SPACES This paper considers the radial variation function F (r, t) of an analytic function f(z) on the disc D. We examine F (r, t) when f belongs to a Besov space Apq and look for ways in which F imitates the behaviour of f . Regarded as a function of position (r, t) in D, we show that F obeys a certain integral growth condition which is the real variable analogue of that satisfied by f . We consider also the radial limit F (t) of F as a function on the circle. Again, F ∈ B pq whenever f ∈ A s pq, where B s pq is the corresponding real Besov space. Some properties of F are pointed out along the way, in particular that F (r, t) is real analytic in D except on a small set. The exceptional set E on the circle at which limr→1 f(re) fails to exist, is also considered; it is shown to have capacity zero in the appropriate sense. Equivalent descriptions of E are also given for certain restricted values of p, q, s. 2000 Mathematics Subject Classification. 30H05, 31A05, 46E15. Introduction In [4] A. Beurling considered functions f on the unit circle T which belong to a certain Besov space B 1/2 2 , and described the set E of points e ix for which lim r→1 f (re ix ), the radial limit of the Poisson integral of f on the unit disc D, fails to exist.He showed that the set E coincides with the set of points on T for which the Fourier series of f diverges, and that this in turn coincides with the set of points for which the symmetric derivative lim He proved that F has certain properties closely resembling those of f and made decisive use of these properties in deriving the result on capacity mentioned above.He showed that It is clear from the definition, that the boundary function F (t) = lim r→1 F (r, t) exists, finite or infinite, for all t ∈ [−π, π].Beurling showed further that F ∈ B 1/2 2 , and that its norm in that space satisfies He applied these results to show that the set of points on T for which F (t) = ∞ has logarithmic capacity zero; it is well known that this capacity is naturally associated with the space B 1/2 2 or with the Dirichlet space A 1/2 2 , the subspace of B 1/2 2 consisting of analytic functions.Since then, these results have been extended in various directions.In [10] Nagel, Rudin and Shapiro considered Bessel potential spaces L p s .They showed that for w ∈ T , lim z→w f (z) exists a.e. for wider approach regions than the non-tangential.In fact for the case of the Dirichlet space L 2 1/2 , the approach region can have exponential order of contact with T .Ahern and Cohn [3] considered similar spaces of holomorphic functions on the ball of C n called Hardy-Sobolev spaces.Certain admissible approach regions are first defined and the exceptional set E(f ) is the set of points w on the boundary for which lim z→w f (z) fails to exist as z → w within this region.They showed that the exceptional set has capacity zero in the appropriate sense.Here the capacities are Bessel capacities. 
Efforts have also been made to show that the estimate on the exceptional set is sharp, that is given a compact subset K of capacity zero, there is a function f in the space for which K = E(f ).In this regard, [3] showed that the sets of Bessel capacity zero completely characterize the exceptional sets for the case n = 1.Further results of this type were proved by Cohn and Verbitsky [6].A great impetus to developments in this area was given by Carleson's book [5].In particular this demonstrated how sets of Cantor type can be used to prove that certain statements about exceptional sets are sharp. More recently, for a function f in the Dirichlet space, Twomey [16] has exhibited tangential approach regions A γ such that f has A γ -limits at all boundary points outside of a set of logarithmic capacity zero.Moreover he showed that such approach regions are in a certain sense optimal.In yet another direction the result on the exceptional set for the radial variation F (t) has been extended to certain weighted Dirichlet spaces.See [17] and the references cited therein.Indeed all the works cited above contain other pertinent references. Our aim is to try to extend in so far as we can the results of Beurling on the radial variation function F (r, t) to a class of Besov spaces B s pq , for which is the special case s = 1/2, p = 2 = q.In Section 2 some properties of F (r, t) are set forth which are needed later.It turns out that F (r, t) is an analytic function in D outside of a small set H which is determined by the zeros of f ′ of odd order. In Section 3 we explore the consequences for F (r, t) as a function on D, of the assumption that f ∈ A s pq .For 0 < s < 1, it is shown in Theorem 1 that F obeys a certain integral growth condition which is the real variable analogue of that satisfied by f .For s = 1, we obtain a limited result (Theorem 3), whereby we require p = 1, 1 ≤ q < 2. We consider the boundary function pq ; Theorem 1 is used in the proof.For s = 1, 1 ≤ q < 2, we are able to show in a like manner, by means of Theorem 3, that f ∈ A 1 1q implies that F ∈ B 1 1q .In Section 5 we consider the exceptional subset E ′ of T , for which An immediate application follows.Let E(f ) be the set of points for which the radial limit, lim r→1 f (re it ) , fails to exist; then C(E; B s pq ) = 0.It should be noted that for s > 1/p, B s pq is a space of continuous functions and therefore this last result is significant only for s ≤ 1/p.For the special case s = 1/p, q = p, 1 < p ≤ 2, we show further that the alternative characterizations of E obtained by Beurling, also hold. It is not to be expected that the very strong results last cited hold without restriction on s, p, q.Certainly nothing of this kind can be expected for s < 1/p, since the diagonal Besov spaces B 1/p p with s = 1/p, p = q, are well known as the interface between the smoother spaces where s > 1/p and the less tractable class with s < 1/p, where many results break down. Preliminaries. 
Let D denote the unit disc, T the unit circle in the complex plane.For convenience we shall let m denote normalised Lebesgue measure on the circle T , and m 2 the normalised area measure on the disc.Given a function f on T , let ∆ t f (e ix ) = f (e i(x+t) )− f (e ix ), and ), denote the differences of order one and order k respectively, of f at e ix .Let be the Poisson integral of f on the disc, where P r (t) is the Poisson kernel.Suppose now that f is analytic in denote the integral mean of f of order p.It is well known that M p (f, r) is an increasing function of r on [0, 1) and that the class of functions f for which sup r<1 M p (f, r) < ∞, is the familiar Hardy space H p [7].Given f (e it ) ∼ ∞ −∞ a n e int , we write for the partial sums of the Fourier series of f .For 1 ≤ p, q < ∞, s > 0, and an arbitrary integer m > s, we define the Besov space B s pq by It is well known that the definition is independent of m.For a discussion of these spaces see [1], [11], [14], [15].It is known that the Riesz projection is a bounded operator from B s pq to itself.Let A s pq denote the subspace of B s pq consisting of analytic functions.The space A s pq may be characterized as follows: the analytic function f ∈ A s pq if and only if Once again the definition is independent of m for m > s.Each function f ∈ A s pq is in H p and has a boundary function, also denoted by f , on T .This boundary function is in B s pq and we denote its norm in that space by f B .Of course the two norms are equivalent. Properties of F 2.1.F (r, t) a real analytic function. Recall that for f analytic, (1) and F (r, t) is a majorant for f .The function F (r, t) represents the length of the image of the radius vector [0, re it ] under the mapping f , and is known as the radial variation.An immediate property of F is that if Proof: For, for any h = 0. Taking limits as h → 0, the result follows. Lemma 2. Suppose that f is an analytic function in the disc, 0 < r < 1, We look next at some partial derivatives of F .It is obvious that ∂F ∂r = |f ′ (re it )| for all points (r, t) in the disc.Although ∂|f ′ | ∂t (re it ) does not exist at points where f ′ (z) has a zero of odd order, these points are at most countably infinite and the following result still holds: Lemma 3.For all points (r, t) in the disc, Proof: Consider the difference quotient for some s between t and t + h.Since |f ′′ (ue is )| is uniformly bounded on compact subsets of D, dominated convergence applies and we may take the limit as h → 0 under the integral sign.The result follows. We note that ∂F ∂t (r, t) exists at all points and is continuous there.However, the second derivative ∂ 2 F ∂t 2 (r, t), need not exist at all points.First, observe as before that ∂ 2 |f ′ | ∂t 2 (re it ) exists at all points except those where f ′ has a zero of odd order.But now this function need not be summable over a radial segment [0, r 0 e it ] which contains a zero of f ′ .This is unlike the situation for ∂|f ′ | ∂t (ue it ).To see this, consider the case f ′ (z) = (z − a)g(z) where g is analytic and g(a) = 0 and 0 < a < 1.We may assume that z = a is the first zero of f ′ on the ray t = 0. Then The last two terms are summable over the interval [0, a], so the first term is the crucial one.Let Putting t = 0, the right hand side (R.H.S.) 
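The displayed formulas in this preliminaries section did not survive extraction. The block below restates, in LaTeX, the standard definitions the prose refers to (the k-th order difference, the integral mean, the Besov norm, the analytic Besov characterization, and the radial variation); these are the usual textbook formulations consistent with the surrounding text, not a verbatim recovery of the paper's own equations, so the exact normalizations are assumptions.

```latex
% Standard formulations assumed from the surrounding text; the paper's own
% displayed equations were lost in extraction.
\[
  \Delta_t^{k} f(e^{ix}) = \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} f\!\left(e^{i(x+jt)}\right),
  \qquad
  M_p(f,r) = \left( \int_{-\pi}^{\pi} |f(re^{it})|^{p}\, dm(t) \right)^{1/p}.
\]
\[
  \|f\|_{B^{s}_{pq}} = \|f\|_{L^{p}(T)}
  + \left( \int_{0}^{\pi} \Big( t^{-s}\, \|\Delta_t^{m} f\|_{L^{p}(T)} \Big)^{q}\, \frac{dt}{t} \right)^{1/q},
  \qquad m > s,\; 1 \le p,q < \infty .
\]
\[
  f \in A^{s}_{pq} \iff f \text{ analytic in } D \text{ and }
  \int_{0}^{1} (1-r)^{(m-s)q-1}\, M_p\!\left(f^{(m)},r\right)^{q}\, dr < \infty ,
\]
\[
  F(r,t) = \int_{0}^{r} \big|f'(ue^{it})\big|\, du , \qquad F(t) = \lim_{r\to 1} F(r,t).
\]
```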
equals ar/(a − r) for r < a, and ar/(r − a) for r > a.But this means that it is not summable over [0, b] for any b ≥ a.It follows that ∂ 2 F ∂t 2 (r, t) does not exist whenever t = 0 and r ≥ a for this particular f .Nevertheless, it remains true that if we exclude those rays [0, e it ) along which f ′ has zeros of odd order, a similar result to Lemma 3 holds.The formal statement is Lemma 4. If e it is chosen as above, and 0 < r < 1, then ∂s 2 (ue is ) is a continuous function of both u and s provided |s − t| is sufficiently small.The argument of Lemma 3 invoking the Mean Value Theorem, can now be applied to ∂F ∂t (r, t) just as before. If f ′ has a zero of arbitrary odd order at a point, it will be found in a similar fashion that a sufficiently high order derivative of F will not exist along a part of the corresponding ray.It is seen that these are the only possibilities whereby F fails to be infinitely differentiable.The zeros of even order of f ′ on the other hand, do not cause problems for F .At all other points, F has derivatives of all orders.Indeed, it turns out that F has the further property of being (real) analytic on the complement of the segments identified above. Recall a few facts about such analytic functions.A real function g(x, y) is analytic at a point (x 0 , y 0 ) if there is a neighbourhood U of the point (x 0 , y 0 ) such that where the series is absolutely convergent [9]. If f is an analytic function of z, then |f (z)| is analytic in the real sense except at the odd zeros of f .It follows that r = x 2 + y 2 is analytic except at r = 0. Also, cos t = x/r, sin t = y/r are both analytic except at the origin.While ∂g ∂x , ∂g ∂y are both analytic wherever g is, we note that ∂g ∂r (r, t) = ∂g ∂x cos t + ∂g ∂y sin t is not analytic at the origin even if g is analytic there.On the other hand, ∂g ∂t (r, t) = ∂g ∂x (−r sin t) + ∂g ∂y r cos t is analytic wherever g is. These facts may now be applied to our situation.If f is an analytic function of z in D, then |f ′ (re it | is analytic except at zeros of f ′ of odd order.It is also true that F (r, t) = r 0 |f ′ (ue it )| du is analytic at (r, t) if f ′ has no zeros of odd order on the line segment.However, F may not be analytic at (0, 0).Indeed, if f ′ is constant on the segment, then F (r, t) = Cr.We have seen that F cannot be analytic on [be it , e it ), if f ′ has a zero of odd order at be it , 0 < b < 1.There are at most countably many such segments in the disc.Let H be the set on which F fails to be analytic; H is therefore a small set and we can say that F is analytic on D \ H.The same is true for ∂F ∂t . 2.2.Integrability of ∂ 2 F ∂t 2 (r, t).It will be necessary later to consider the integral of ∂ 2 F ∂t 2 (r, t) over [−π, π], and to show that this exists.As the analysis above shows, the problem arises near a ray which contains an odd zero of f ′ .It is enough to consider f ′ replaced by z − a where a is fixed, 0 < a < 1 as before.It suffices to prove the following: Lemma 5.For each r < 1, ∂F ∂t (r, t) is absolutely continuous as a function of t. 
Proof: With f ′ as above we have as before.Suppose π/2 > x > y > 0 as we may, 0 < r ≤ a, and consider Since L(u, t) = (a − u) 2 + 4au sin 2 t/2, it is easy to see that The numerator above is in absolute value less than or equal to au L(y) 1/2 (x − y) + a(x − y) sin y = au(x − y)L(y) 1/2 + a 2 u(x − y) sin y, which gives rise to two terms in the integrand.If a − u is small then L(u, t) is small for small t and so the denominator is also small.Let us take the worst possible case r = a and x, y both small.We split the integral into one over (0, a − ǫ) and one over (a − ǫ, a), where ǫ = min{ax, a/2}.Over (0, a − ǫ) , L(u, t) ≥ (a − u) 2 , and the first term satisfies The second term is bounded by since sin y/x ≤ 1. Passing to the integral over (a − ǫ, a), consider the second term first.Observe that for such u there is a constant C such that L(u, t) ≥ C 2 (at) 2 and therefore a sin y/L(y) 1/2 ≤ 1/C.The integral is bounded by Clearly the first term contributes a like amount.The case r > a is handled in the same way.We have shown that Since the function t → t ln(1/t) is absolutely continuous for all t, the result follows. f ∈ A s pq We now inquire what are the consequences for F of the assumption that f ∈ A s pq .Hereafter p ′ will denote the conjugate index of p. Theorem 1. Suppose that 0 < s < 1, 1 ≤ p, q.There is a constant C = C(s, p, q) such that if f ∈ A s pq then (2) Proof: It is clear that if we replace ∇F in (2) by its first component, the inequality is trivially satisfied.Therefore to prove our theorem it suffices to show that for any r 0 , 0 < r 0 < 1, ( The idea is to apply integration by parts with respect to r on the left hand side.This is valid if the inner integral is an absolutely continuous function of r which in turn is true if ∂F ∂t (r, t) is absolutely continuous in r, r ≤ r 0 , and this follows from Lemma 3. We shall proceed formally and justify the operations later.The integrated term is q/p r0 0 and we first show this vanishes as r 0 → 1. At the lower limit we may suppose that f ′ (0) = 0. Then for small r, as r → 0, uniformly in t, by Lemma 2. Consider the case as r → 1. From Lemmas 2 and 3 we know that whence by Minkowski's inequality, From the definition it follows that and the claim now follows from (4).Returning to the main thread, the left hand side (L.H.S.) of (3) becomes r dr, which in turn equals We notice that T 2 < 0 whereas the sum is positive.We can therefore discard T 2 and the L.H.S. is bounded by |T 1 | which is less than Write K = 1 2(1−s) .Using Lemmas 3, 1 and 2 again, we can replace the R.H.S. by r|f ′′ (re it )| dm r dr. We apply Hölder's inequality with indices p ′ , p to the second of the inner integrals to obtain which allows us to replace (5) on the R.H.S. by which in turn is equal to M p (f ′′ , r)r dr. Next we write (1 − r 2 ) q(1−s) = (1 − r 2 ) (q−1)(1−s)−1/q ′ (1 − r 2 ) 2−s−1/q , and apply Hölder's Inequality again, this time with indices q ′ , q, to replace the R.H.S. by K times r0 0 Summing up, we can say that the L.H.S. of ( 3) is bounded by this last expression.However, a cancellation is now possible after which we conclude that r0 0 The R.H.S. is bounded by a constant times f A .Since this holds for all r 0 < 1, the desired inequality follows and proves that F (r, t) satisfies the stipulated condition. Remarks. (1) The first point to be validated here is the differentiation under the integral sign in the T 1 term.For x ≥ 0, y ≥ 0, p ≥ 1, the inequality |x p − y p | ≤ p|x − y|(x p−1 + y p−1 ), holds; see Section 41 of [13]. This . 
Divide across by r − s and let H(r, s, t) = 1 r−s r s u|f ′′ (ue it )| du.It is clear that B(t) = sup{H(r, s, t); r, s ≤ r 0 } is a bounded function on (−π, π).Equally, the term in curly brackets is also a bounded function of (r, t) for r ≤ r 0 .An appeal to dominated convergence is therefore valid, the limit may be taken as s → r, and differentiation under the integral sign is justified. (2) A further consideration needs to be dealt with.Since ∂F ∂t (r, t) is absolutely continuous as a function of r (see Lemma 3), it follows that ∂ ∂r ∂F ∂t (r, t) exists a.e.r for all t. The case s = 1. For the case s = 1, we introduce the Laplacian of F , namely In this case our results are less general; we comment on this further below.We shall require that p = 1 in order to get a result similar to that above.To set the scene, we first present a very simple special case.Proof: We know that F is analytic except on the set H.An important property of F is that it is subharmonic [4], and therefore ∆F (r, t) ≥ 0 where this exists.Fix r < 1, let [be it , e it ) be a line segment which intersects H ∩ D(0, r).Enclose this segment in a narrow strip of width ǫ. These strips are at most finite in number.If necessary, enclose the origin also in a disc of radius ǫ.Let G(ǫ) be that part of D(0, r) with these subsets excluded.Green's Theorem [9] may be applied to F over the domain G(ǫ): On the side of each strip the outward normal derivative is ± ∂F ∂t .By continuity, the integrals over these sides cancel pairwise when ǫ → 0. In the limit we get We now let r → 1 and deduce that there is a constant C such that This completes the proof. We can progress beyond this case to a limited extent.Theorem 3. Suppose that 1 ≤ q < 2, and that f ∈ A 1 1q .Then there exists C = C q such that 1 0 Proof: Since ∆F (r, t) ≥ 0, it is enough to consider each of the three terms in turn.From Lemma 5 Next we have, using Lemmas 1 and 2, Above, we used the fact that 1 0 (1 − u) q−1 u −q/2 du = B(q, 1 − q/2), is finite.Putting all these together the result follows. Remarks.(1) To see that the restriction on q is necessary, take f (z) = z so that |f ′ (z)| ≡ 1.Then F (r, t) = r everywhere in the disc.It follows that ∆F (r, t) = 1/r and the integral on the left above diverges if q ≥ 2. (2) We needed p = 1 in order to be able to use the subharmonicity of F which was crucial to our argument. (3) The limitation on q noted above, holds for all p.We cannot expect a general theorem along the lines of Theorem 1 to hold if we replace ∇F by ∆F , as the example f (z) = z above shows. We recall that F (t), the boundary function of F (r, t), exists, finite or infinite, for all t.We have shown that if 0 < s < 1, 1 ≤ p, q < ∞, and if f ∈ A s pq , then Apart from its intrinsic interest, this result can be used to show that F (t) is in the space B s pq on the circle; that in fact This is a direct analogue of Beurling's result which he used to prove his main result on capacity.The method is standard and has been used by Hardy and Littlewood; see e.g.[14]. where C is independent of r.Letting r → 1, the conclusion follows from monotone convergence.We write where r, 0 < r < 1, is at our disposal.Choose r such that 1 − r = t/π.Observe that Taking the L p -norm with respect to the variable v we have Here and hereafter a dot over a variable signifies integration with respect to that variable. We proceed to estimate the || • || B of each term on R.H.S. of (8).It suffices to consider the integral from 0 to π only, and we shall use dt here rather than dm(t). 
Since 0 ≤ u ≤ π, let H(u) = 0 outside this range.We now apply the following inequality of Hardy [14], [8]: We apply this to the case where g = H/π, l = sq, and obtain Next, consider the second term D. Walsh , the last observation gives by Theorem 1, and the equivalence of the norms on f .The third term on the R.H.S. of (8), F (r, v) − F (v), is dealt with in exactly the same way as the first term, and (7) now follows. We now consider the case s = 1 and ask which if any of the results above hold for F .Theorem 5. Suppose 1 ≤ q < 2. There exists a constant C = C q , such that if f ∈ A 1 1q then F ∈ B 1 1q , and Proof: We recall that ||f || q A = ||f || q 1 + 1 0 (1−r 2 ) q−1 |f ′′ (re it )| dm q r dr < ∞.Since A 1 1q ⊂ A s 1q for any s < 1, the fact that F ∈ L 1 (T ) follows from the proof in Theorem 4. It is enough to show that (r, v).This allows us to write The first three terms are all of the same kind, and the last two terms are also of the same kind.We choose 1 − r = t/π.Take a typical term in the first group, L r (v), and write it as f ′′ (we iv ) dw du f ′′ (we iv ) dw du. The region of integration is a parallelogram in the uw plane.Let us now switch the order of integration.If 2r − 1 < w < r, then the limits for u are 0 < u < w − (2r − 1), while if r < w < 1, they are w − r < u < 1 − r. The last displayed double integral now becomes First taking the L 1 norm with respect to v, we get Next we calculate the outer norm of L r ( v) and apply Minkowski's Inequality to get Starting with T 1 , we change the variable.Let w = 2r − 1 + y/π and note that y = 0 when w = 2r − 1 and y = t when w = r.Then . Applying Hardy's Inequality to the R.H.S. with g(y) = yM 1 (f ′′ , 2r − 1 + y/π), l = q, gives for some absolute constant C. A similar argument applies to T 2 .This time we let w = 1 − y/π and note that y = t when w = r.We have This takes care of the three terms of the first type.Next consider a term of the second type.Lemma 5 allows us to apply the Fundamental Theorem of Calculus: due to the fact that differentiation with respect to x is the same as differentiation with respect to y.From this it follows that The last step is to take the outer norm involving integration with respect to t.In doing this we invoke Theorem 3 which was a result about the A-norm of F .We claim that Using the fact that 1 − r k ≤ k(1 − r) we have, by Hölder's Inequality and Lemma 8, Next, consider the second term and use Hölder's Inequality again, Both terms can be made small provided that Comparing the index sp with 1 + 1/p − s, we see that if (1 − r)n sp = 1 then (a) is satisfied if and only if sp ≥ 1.It follows therefore, that if sp ≥ 1, then the difference can be made arbitrarily small as n → ∞, or, equivalently, as r → 1.Consequently, the first statement of the theorem follows.But the same argument applies equally to the second statement, and the theorem is proved. It is a pleasant fact that the same argument can be readily adapted to prove the second equivalence, that the symmetric derivative exists at a point if and only if the Fourier series of f converges at that point.then B s pp is not contained in L r for any r > 1 such that 1/r < 1/p − s, [12, p. 321]. 
(2) In the proofs above we made essential use of Hölder's Inequality and the fact that p > 1.Further, we required s ≥ 1/p.For the case s = p = 1, f ∈ B 1 11 , the Fourier series of f is uniformly absolutely convergent on the circle.To see this, let us assume f ∈ A 1 11 , as we may.We know that this implies that f ′ ∈ H 1 or f (e ix ) is absolutely continuous.But in that case ∞ 1 |a n | < ∞ [7], and the uniform convergence follows.It is also immediate from the preceding and (10), that the symmetric derivative exists at every point on the circle. An application. Suppose that f is any summable function on T .Let us define the exceptional set E(f ) to be the set of points e ix on T for which lim r→1 f (re ix ), the radial limit of the Poisson integral of f on the unit disc D, fails to exist. Our object here is to look at the size of the set E(f ) in case f belongs to some Besov space.This will require a notion of capacity associated with the space B s pq , denoted by C(•; B s pq ).We assume henceforth that 1 < p, q < ∞.Consider the dual pairing f, g = π −π f (e ix )g(e ix ) dm. It is well known that with this dual pairing, the dual space of B = B s pq is B * = B −s p ′ q ′ which is a space of distributions and we refer to [1] and [12] for the definition.We state the following (dual) definition of capacity [1].For a closed set K ⊂ T , the capacity of K is C(K; B s pq ) = sup µ(K) : µ B * ≤ 1 , where the sup is taken over all positive measures µ on the circle which belong to the dual space B * = B −s p ′ q ′ , and the norm is the norm in the dual space.The definition may now be extended to an arbitrary set G by means of a standard procedure; see Chapter 2 of [1].If p = q = 2, s = 1/2, this capacity is equivalent to the logarithmic capacity.First we consider the exceptional set E ′ for F , E ′ = {e ix : F (x) = ∞}.Theorem 8. Let 1 < p, q < ∞, 0 < s < 1 and f ∈ A s pq .With E ′ defined as above, C(E ′ ; B s pq ) = 0. Proof: Letting E ′ m = {e ix : F (x) > m}, it is clear that E ′ = m E ′ m .Since F is lower semi-continuous [4], it follows that E ′ is a G δ set.Let µ be a positive measure in B * .The dual pairing of F with µ, namely But this implies that µ(E ′ ) = 0 since F (x) = ∞ on E ′ .Since this holds for all µ, it follows that C(E ′ ; B s pq ) = 0 and the proof is complete. The following corollary is significant only for s ≤ 1/p by an earlier remark. Proof: It suffices to take f ∈ A s pq .Suppose that lim r→1 f (re ix ) does not exist; it is readily verified that the radial variation of f at e ix is infinite, F (x) = ∞.Consequently E ⊂ E ′ , and the result follows from the theorem. Remark.For the special case in which f ∈ B s pp with s ≥ 1/p, 1 < p ≤ 2, we know from Theorems 6 and 7 that much more can be said: the exceptional set E(f ) coincides with the set of points e it at which the Fourier series of f , ∞ −∞ a n e int , fails to converge, and also at which the symmetric derivative of f fails to exist. F f (t) dt, fails to exist.A consequence of his approach is that E has logarithmic capacity zero.Beurling associated with an analytic function f ∈ B (r, t) = r 0 |f ′ (ue it )| du, r < 1. Theorem 2 . There is a constant C such that for all f ∈ A (r, t)| dm r dr ≤ C||f || A . inequality together with Lemma 2 gives for all r, s ≤ r 0 , r = s and
7,803.6
2006-07-01T00:00:00.000
[ "Mathematics" ]
PHOTOGRAMMETRIC PROCESSING OF HEXAGON STEREO DATA FOR CHANGE DETECTION STUDIES Hexagon satellite data acquired as a part of USA Corona program has been declassified and is accessible to general public. This image data was acquired in high resolution much before the launch of civilian satellites. However the non availability of interior and exterior orientation parameters is the main bottle neck in photogrammetric processing of this data. In the present study, an attempt was made to orient and adjust Hexagon stereo pair through Rigorous Sensor Model (RSM) and Rational Function Models (RFM). The study area is part of Western Ghats in India. For rigorous sensor modelling an arbitrary camera file is generated based on the information available in the literature and few assumptions. A terrain dependent RFM was generated for the stereo data using Cartosat-1 reference data. The model accuracy achieved for both RSM and RFM was better than one pixel. DEM and orthoimage were generated with a spacing of 50 m and Ground Sampling Distance (GSD) of 6 m to carry out the change detection with a special emphasis on water bodies with reference to recent Cartosat-1 data. About 72 new water bodies covering an area of 2300 hectares (23 sq. km) were identified in Cartosat-1 orthoimage that were not present in Hexagon data. The image data from various Corona programs like Hexagon provide a rich source of information for temporal studies. However photogrammetric processing of the data is a bit tedious due to lack of information about internal sensor geometry. INTRODUCTION Change detection is the process of ascertaining specific changes among the features of interest within certain period of time.Change detection analysis of spatial features is an important process in understanding the growth patterns and for planning the future development of rural and urban areas of a developing nation.Remote sensing technology has been effectively used in the recent past for change detection analysis for various projects in India.The process requires spatial information of the area of interest for different periods of time.The temporal resolution depends on the availability of the archival data for different time periods.A very high temporal resolution may lead to redundant information and may also put unnecessary load on the processing system.Presently there are many remote sensing satellites acquiring data with various spatial and spectral resolutions.At present the temporal frequency of remote sensing data from various sensors is high and automation in processing is possible as the data is in digital format (Hussain et al., 2013).The main problem is the availability of historical data for the area of interest and acquired at the desired point of time.If the data sets are available with required specifications as per the project demand, remote sensing technology can do wonders in detecting even the subtle changes both in terms of quality and quantity.Remote sensing data acquired at regular intervals of time can provide a better understanding of the characteristics and distribution of the changes either natural or manmade (Shaoqing and Lu, 2008).This helps administrators and policy makers in monitoring the change for future planning and making better decisions.In view of these merits, the remote sensing technology has been widely accepted for change detection analysis of both rural and urban areas for planning and development. * Corresponding author. 
The first remote sensing satellite Landsat-1 (80 m GSD) was launched in 1972 by NASA followed by a series of Landsat satellites.First high spatial resolution, remote sensing satellite was launched in 1986 by SPOT with a GSD of 10 m (Campbell and Wyne, 2011).Indian Space Research Organization (ISRO) launched its fist operational remote sensing satellite IRS-1A in 1988 followed by IRS-1B and IRS-1C (Kasturi Rangan et al., 1996).Subsequently ISRO launched a series of remote sensing satellites viz.Oceansat, Resourcesat, Cartosat etc. Presently ISRO has a fleet of remote sensing satellites with various spatial resolutions and spectral bands (Navalgund et al., 2007;www.isro.org).It is very important to acquire and store data from all the possible satellite missions so that this becomes a rich source of information for future studies and analysis.Earlier this was a costly affair due to the limitations of onboard storage, data reception and storage systems.Now with the advancements in the technology, the data reception and storage capability has improved and available at a reasonable cost.Till the declassification of data acquired through Corona program, change detection was limited to the last couple of decades and for only selected areas.Now there is a huge archival of data for change detection studies which aids in modelling various natural or human induced spatial phenomena (Jianya et al., 2008). Corona Program Corona was a photoreconnaissance satellite program jointly launched, operated and managed by Central Intelligence Agency (CIA) and United States Air Force.This program includes a number of satellites with onboard film cameras, recovery vehicles to collect the exposed film from mid air.The cameras and the films used have undergone much technological advancement as the program spanned for over two decades.The main objective of this program was to acquire photographic intelligence regarding the arms proliferation from different parts ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume II-8, 2014ISPRS Technical Commission VIII Symposium, 09 -12 December 2014, Hyderabad, India This contribution has been peer-reviewed.The double-blind peer-review was conducted on the basis of the full paper. doi:10.5194/isprsannals-II-8-151-2014 of the world primarily from communist controlled nations like USSR and China (Anderson, 2005).This data was acquired during the cold war period, post the World War II for over two decades in 60s and 70s.The satellites launched in this program were designated as KH1, KH2, KH3 and KH4 where KH stands for Key Hole.As part of this program many satellite missions were launched with code names Argon, Lanyard, Gambit, Hexagon etc. Initially this data was classified and had restricted access to the authorized defence personnel.Hence the civilian community was not even aware of its existence.Some of the data acquired through this program was declassified partly in 1995 and 2002 in phases.(Dashora et al., 2007;Surazakov and Aizen, 2010). 
Hexagon Satellite After 2011;Surazakov and Aizen, 2010).Image data acquired as part of these photoreconnaissance missions like Corona and Hexagon contribute significantly to the change detection studies (Narama et al., 2010).These data sets significantly improve the limited archival data available for temporal studies as they were acquired long before the launch of the first civilian remote sensing satellite i.e., Landsat-1 in 1972.The data is available for many parts of the world in high resolution at an affordable cost (Dashora et al., 2006;Surazakov and Aizen, 2010). SENSOR MODELING A sensor model defines the mathematical relationship between 2D image coordinates and the 3D object coordinates (Hu et al., 2004).In photogrammetry orientation of a stereo pair of images can be done either using a rigorous sensor model or a generic sensor model.The choice of the model depends on availability of sensor geometry parameters, ground control, processing software etc. RSM require the knowledge of sensor and platform geometry and are very accurate as they represent the true physical geometry of an imaging system.But the interior and exterior geometry parameters of an image may not be available every time especially with the historical data sets.The generic sensor model on the other hand is independent of the sensor or platform geometry.Rational function model which is the ratio of polynomial models is the most popular one among the generic sensor models (Tao et al., 2000;Di et al., 2003;Liu and Tong, 2008).The coefficients of these polynomial equations are called Rational Polynomial Coefficients (RPC).The RPCs can be computed from a grid of reference points for different elevation levels across the range derived from the RSM or from GCPs collected directly from the ground or from any other source of topographic information.The former procedure is called terrain independent approach and the later one is called terrain dependent approach (Hu et al., 2004). In the present study both the approaches have been attempted to understand the advantages and limitations in processing of historical data like Hexagon. Rigorous Sensor Modelling The RSM is based on the fundamental principle of collinearity condition (Liu and Tong, 2008).The collinearity is an imaging condition where in the exposure station, an object point and its corresponding image point all lie along a straight line in the three dimensional object space (Wolf and Dewitt, 2004).For rigorous sensor modelling the information regarding the sensor internal geometry (focal length, principal point, format size etc) and external geometry (Exposure station coordinates and sensor attitude) should be available (Di et al., 2003).These parameters carry physical significance and can be refined by incorporating the calibration information (Tao and Hu, 2001).The internal geometry parameters can be obtained through sensor calibration.The external geometry parameters can be observed using a Global Navigation Satellite System (GNSS) and an Inertial Measurement Unit (IMU) after due consideration of lever arms and misalignment angles.The collinearity condition is expressed by the following equations from 1 to 4. Where, f is the focal length of the sensor. 
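The displayed collinearity equations (1) and (2) referred to here were lost in extraction. The block below gives the standard frame-camera form (as in Wolf and Dewitt, 2004) that the subsequent variable definitions describe; it is a reconstruction, not the paper's own typesetting.

```latex
% Standard collinearity condition equations (principal point at the origin).
\[
  x_a = -f\,
  \frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}
       {m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}
  \tag{1}
\]
\[
  y_a = -f\,
  \frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}
       {m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}
  \tag{2}
\]
```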
In equations 1 and 2, x_a and y_a are the photo coordinates of an image point a; X_A, Y_A, Z_A are the object space coordinates of point A; X_L, Y_L, Z_L are the object space coordinates of the exposure station; and m11…m33 are functions of the three rotation angles. Equations 1 and 2 are non-linear and hence need to be linearized using Taylor's theorem. The linearized forms of the collinearity condition equations are

b11·dω + b12·dφ + b13·dκ − b14·dXL − b15·dYL − b16·dZL + b14·dXA + b15·dYA + b16·dZA = J + Vx (3)
b21·dω + b22·dφ + b23·dκ − b24·dXL − b25·dYL − b26·dZL + b24·dXA + b25·dYA + b26·dZA = K + Vy (4)

where Vx and Vy are the residual errors in the measured image coordinates; dω, dφ, dκ are corrections to the initial approximations for the orientation angles of the photo; dXL, dYL, dZL are corrections to the initial approximations for the exposure station coordinates; dXA, dYA, dZA are corrections to the initial values of the object space coordinates; the b's are coefficients equal to the partial derivatives; and J and K are equal to x_a − F0 and y_a − G0. These equations have to be solved iteratively until the corrections to the initial approximations become negligible (Wolf and Dewitt, 2004). The sensor and platform geometry information is not available in all cases, or is sometimes intentionally not shared. Moreover, the RSM is not simple and needs to be changed with the type of sensor and platform (Liu and Tong, 2008). Modelling a linear pushbroom sensor through physical parameters is complex because of the requirement of EO parameters for each line, unlike a frame sensor (Dowman and Dolloff, 2000; Grodecki and Dial, 2003). Rational Function Modelling In situations where the sensor geometry and attitude information is not available or accessible, generic sensor modelling is very useful in defining the mathematical relationship between image and object space. The advantage of the RFM is that it is independent of the physical geometric relations of the sensor, platform and ground. The RPCs do not convey any physical sensor information and are interoperable across software packages with a standard format (Hu et al., 2004). Rational function modelling has been the most popular method over the last decade, especially since the launch of the Ikonos satellite with a GSD of 1 m (Grodecki and Dial, 2003; Liu and Tong, 2008). This popularity continued and grew further with the launch of other high resolution satellite sensors like Quickbird, Cartosat, Worldview, Geoeye etc. Now all the above mentioned data sets are supplied with the coefficients of the RFM without disclosing the sensor model (Dowman and Dolloff, 2000; Tao and Hu, 2001; Di et al., 2003). The RFM can be used with any coordinate system and is not specific to any software. The rational functions are ratios of polynomial models, one for the sample and one for the line, as given by equations 5 and 6:

s = P1(X, Y, Z) / P2(X, Y, Z) (5)
l = P3(X, Y, Z) / P4(X, Y, Z) (6)

Pn(X, Y, Z) is a polynomial function, generally a third-order polynomial equation as given below:

Pn(X,Y,Z) = a1 + a2·X + a3·Y + a4·Z + a5·XY + a6·YZ + a7·ZX + a8·X² + a9·Y² + a10·Z² + a11·X²Y + a12·X²Z + a13·Y²Z + a14·Y²X + a15·Z²Y + a16·Z²X + a17·XYZ + a18·X³ + a19·Y³ + a20·Z³ (7)

where s is the sample (scan) coordinate, l is the line coordinate, and X, Y, Z are the object point coordinates.
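To make terrain-dependent estimation of the RPCs concrete, the sketch below fits a first-order rational function by linear least squares. The paper estimates the full third-order model of equation 7 (with the denominator constants fixed to one) from the reseau-grid control points in MATLAB, so this is a simplified illustration, not the code actually used.

```python
# Hedged sketch of terrain-dependent RPC estimation, reduced to a first-order
# rational function (a1 + a2X + a3Y + a4Z) / (1 + b2X + b3Y + b4Z) per image
# coordinate to keep it short.
import numpy as np

def fit_rfm_1st_order(img_coord, X, Y, Z):
    """img_coord: sample (or line) values of the GCPs; X, Y, Z: ground coordinates.
    Returns (a, b): 4 numerator and 3 denominator coefficients."""
    ones = np.ones_like(X)
    num_terms = np.column_stack([ones, X, Y, Z])   # numerator basis
    den_terms = np.column_stack([X, Y, Z])         # denominator basis (constant fixed to 1)
    # Rearranged model:  a·t  -  img_coord·(b·t')  =  img_coord
    A = np.hstack([num_terms, -img_coord[:, None] * den_terms])
    coeffs, *_ = np.linalg.lstsq(A, img_coord, rcond=None)
    return coeffs[:4], coeffs[4:]

def apply_rfm(a, b, X, Y, Z):
    num = a[0] + a[1] * X + a[2] * Y + a[3] * Z
    den = 1.0 + b[0] * X + b[1] * Y + b[2] * Z
    return num / den

# In practice, ground and image coordinates are normalised to [-1, 1] before the
# fit so that the normal equations stay well conditioned.
```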
In total there would be 78 coefficients to be estimated for a stereo pair, as the constant term in the denominator is normally taken as unity. The accuracy that can be achieved with RPCs depends on how well they represent the geometric relationship between the image and the object space. In terrain-dependent modelling the distribution of GCPs plays an important role: the GCPs should be well distributed over the whole image and should cover the entire elevation range. Any deviation from this criterion would affect the accuracy of the model. The RPCs are difficult to interpret as they do not have any physical meaning (Liu and Tong, 2008). There is also the possibility of failure due to a zero denominator (Madani, 1999). STUDY AREA AND DATA USED The study area chosen for this exercise is part of the Western Ghats comprising the state of Goa and parts of the Karnataka and Maharashtra states of India. This area falls in UTM zone 43 in the northern hemisphere. The exact area of interest is shown as a polygon in Figure 1. The total area is 17862 sq. km and is mostly covered by the Western Ghats or Sahyadri mountain range. The average elevation of the area is around 1500 m with reference to MSL. This mountain range has a dense forest cover with many streams flowing through it, which form the source of a few major rivers of India like the Godavari, Krishna and Kaveri. In this part of the country many reservoirs and dams were constructed over various streams in the last 3-4 decades. The historical data set used for the study is a Hexagon KH9 stereo pair from the mapping camera with a spatial resolution of 20 feet, acquired on 19 November 1973. The reference data include CartoDEM with 10 m spacing (Muralikrishnan et al., 2013) for height control and Cartosat-1 orthoimages with a spatial resolution of 2.5 m for planimetric control in the photogrammetric processing of the Hexagon data; this reference data was generated from Cartosat-1 stereo pairs acquired in December 2011. PHOTOGRAMMETRIC PROCESSING Photogrammetric processing of Hexagon stereo data is not simple and straightforward due to the non-availability of the interior orientation parameters of the mapping camera. Since the Hexagon and other photoreconnaissance missions were classified, sensor details like the principal point, fiducial marks and calibrated focal length were not known. Because of this, interior orientation of these images for photogrammetric processing is considered to be difficult (Altmaier and Kany, 2002; Galiatsatos et al., 2008). As per the hypothesis of Surazakov and Aizen (2010), the KH9 mapping camera of Hexagon was similar to the Large Format Camera (LFC) flown onboard Space Shuttle mission STS 41-G (1984). This hypothesis was based on the fact that both cameras were developed for space-based topographic mapping by Itek Corporation. Based on this information, a film-based camera file was generated with a focal length of 304.8 mm and a format size of 230 mm x 460 mm. Each raw Hexagon image was supplied in two pieces with suffixes a and b. The two parts of each frame were precisely registered one to one to make a single frame image. On visual inspection of the stereo pair images it was inferred that the flying direction is along the longer dimension of the mapping camera format. The principal point coordinates were assumed to be (0, 0). The four corners of the scanned images were taken as the fiducial marks for interior orientation, and the fiducial coordinates were calculated based on the format size of the sensor. Thus the interior orientation of the raw images was carried out by measuring the four corners of the image, which were treated as fiducials in order to recreate the internal sensor geometry. Figure 2 shows the resulting KH9 Hexagon stereo pair with the GCPs used.
KH9 Hexagon stereo pair with GCPs The exterior orientation tool of the Inpho 5.7 photogrammetry software was used to estimate initial values of the exterior orientation (EO) parameters for both images. For this process, the orthoimage and DEM generated from Cartosat-1 stereo data were used as the reference. These approximate EO parameters were imported into the project as GNSS and IMU values, and the Hexagon KH9 images were initialized with them. The automatic point matching tool of Inpho 5.7 was used to generate tie points automatically. Prior to that, a few tie points were added manually to aid automatic point matching, because mismatches were possible due to the presence of reseau grid marks on the images. A few sharp and common points were identified on both the Hexagon data and the Cartosat-1 orthoimage to be used as ground control. Triangulation of the stereo pair was carried out with 50 GCPs, as shown in Figure 2, which were extracted from the Cartosat-1 derived orthoimage and DEM. The stereo images were triangulated with a Root Mean Square Error (RMSE) of 2.5 m in X, 2.3 m in Y and 5.0 m in Z.

For Rational Function modelling, a grid of well-distributed GCPs covering the entire image is required to compute the RPCs. For this, the reseau grid marks available on the Hexagon images were taken as the reference. First, the Hexagon raw images were orthorectified with reference to the Cartosat-1 orthoimage and DEM through projective transformation using the AutoSync tool of Erdas Imagine 2014 software. The ground coordinates of each grid mark were then extracted from the orthorectified Hexagon data. The ground coordinates of 98 grid points, as shown in Figure 3, together with the corresponding image coordinates of each reseau grid mark, were used to compute the RPCs. This computation was done using program code written for the purpose in MATLAB 14a. These RPCs were used to orient the Hexagon stereo model in the Imagine 2014 photogrammetry software. The accuracy of this stereo data was ascertained using independent, well-distributed check points extracted from the reference data. The methodology adopted for orientation of the Hexagon stereo data is depicted in the form of flowcharts in Figures 4 and 5.

The automatically generated DEM contained mismatched conjugate points at some places; hence, the DEM points were edited manually in 3D for further refinement. This refined DEM was used to generate an orthoimage of the study area with a resolution of 6 m.

Historical data is a prime input for temporal studies using remote sensing technology. For most of the country, historical data in high resolution is not available for periods before the 1980s. Declassified images from programs such as Corona and Hexagon therefore hold great potential for change detection studies. These data provide an invaluable historical record of spatial features and their environs, which can be of special interest to various disciplines of science. However, photogrammetric processing of these data sets is tedious due to the non-availability of sensor interior geometry parameters. From the study it can be inferred that the hypothesis that the mapping camera of Hexagon KH9 is similar in design to the Large Format Camera built by the same manufacturer holds good. Based on the information available and a few assumptions, the internal geometry could be successfully recreated. The presence of reseau grid marks on the Hexagon data can create problems in automatic point matching; care should be taken to identify and correct these mismatches in order to avoid blunders. In spite of these issues, an accuracy of better than one pixel has been achieved in the adjustment of the Hexagon stereo data using both RSM and RFM. The advantage of RSM is that it represents the true physical geometry of the sensor: the parameters in RSM have physical meaning and are easy to interpret. But the physical sensor geometry information is not always available. For historical data sets where sensor- and platform-related information is not available, a terrain-dependent RFM can be a good option. The main advantage of RFM is that it is independent of sensor geometry and attitude; where sufficient GCPs are available, RFM can provide accuracy similar to RSM.

Comparison of Cartosat-1 data with KH9 Hexagon data provides an idea of the kind of changes that have taken place during the long period between 1973 and 2011. In the present study area, a total of 72 water bodies have been built over various natural flowing streams, covering an area of 2300 hectares, after 1973. The implications of such changes need to be studied and understood in order to make better decisions for future growth.

In the collinearity condition equations, x_a and y_a are the photo coordinates of an image point a; X_A, Y_A, Z_A are the object space coordinates of a point A; X_L, Y_L, Z_L are the object space coordinates of the exposure station; and m_11 ... m_33 are functions of the three rotation angles. Equations 1 and 2 are non-linear and hence need to be linearized using Taylor's theorem. The linearized form of the collinearity condition equations is

b_11 dω + b_12 dφ + b_13 dκ - b_14 dX_L - b_15 dY_L - b_16 dZ_L + b_14 dX_A + b_15 dY_A + b_16 dZ_A = J + V   (3)
b_21 dω + b_22 dφ + b_23 dκ - b_24 dX_L - b_25 dY_L - b_26 dZ_L + b_24 dX_A + b_25 dY_A + b_26 dZ_A = K + V   (4)

Figure 1. Study area pertaining to part of the Western Ghats, India (sourced from http://bhuvan.nrsc.gov.in). The historical data set used for the study is a Hexagon KH9 stereo pair from the mapping camera with a spatial resolution of 20 feet, acquired on 19 November 1973. The reference data include CartoDEM with 10 m spacing (Muralikrishnan et al., 2013) for height control and Cartosat-1 orthoimages with a spatial resolution of 2.5 m for planimetric control in the photogrammetric processing of the Hexagon data. This reference data was generated from Cartosat-1 stereo pairs acquired in the month of December 2011. The model error values for the Hexagon stereo data by both the rigorous sensor model and the RFM are given in Table 1.

Figure 3. Grid of GCPs for RPC generation. Figure 5. Orientation of Hexagon data through RFM. Figure 6. Hexagon image with water bodies shown in red colour (digitized from Hexagon KH9 data of 1973) and blue polygons (new water bodies digitized from Cartosat-1 data). Table 2. Number of water bodies and the area covered.
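To make the rational-function step concrete, the following is a minimal illustrative sketch of how rational-function coefficients can be estimated by linear least squares from ground and image coordinates of GCPs. It is not the authors' MATLAB 14a code: for brevity it fits a first-order rational model in unnormalized coordinates, whereas operational RPCs use third-order polynomials with normalized coordinates, and all function and variable names here are assumptions.

```python
# Illustrative sketch only: first-order rational function fitted by least squares.
import numpy as np

def fit_rfm_first_order(ground, image_coord):
    """ground: (n, 3) array of (X, Y, Z) GCP coordinates;
    image_coord: (n,) array of the corresponding row (or column) values."""
    X, Y, Z = ground.T
    r = image_coord
    # Model: r = (a0 + a1*X + a2*Y + a3*Z) / (1 + b1*X + b2*Y + b3*Z)
    # Rearranged linearly: a0 + a1*X + a2*Y + a3*Z - r*(b1*X + b2*Y + b3*Z) = r
    A = np.column_stack([np.ones_like(r), X, Y, Z, -r * X, -r * Y, -r * Z])
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs  # [a0, a1, a2, a3, b1, b2, b3]

def apply_rfm(coeffs, ground):
    X, Y, Z = ground.T
    a0, a1, a2, a3, b1, b2, b3 = coeffs
    return (a0 + a1 * X + a2 * Y + a3 * Z) / (1 + b1 * X + b2 * Y + b3 * Z)
```

In such a scheme, row and column coefficients would be fitted separately from the 98 reseau-grid GCPs, and independent check points would then quantify the achieved accuracy, as described above.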
5,292
2014-11-27T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Human Genome Polymorphisms and Computational Intelligence Approach Revealed a Complex Genomic Signature for COVID-19 Severity in Brazilian Patients We present a genome polymorphisms/machine learning approach for severe COVID-19 prognosis. Ninety-six Brazilian severe COVID-19 patients and controls were genotyped for 296 innate immunity loci. Our model used a feature selection algorithm, namely recursive feature elimination coupled with a support vector machine, to find the optimal loci classification subset, followed by a support vector machine with the linear kernel (SVM-LK) to classify patients into the severe COVID-19 group. The best features that were selected by the SVM-RFE method included 12 SNPs in 12 genes: PD-L1, PD-L2, IL10RA, JAK2, STAT1, IFIT1, IFIH1, DC-SIGNR, IFNB1, IRAK4, IRF1, and IL10. During the COVID-19 prognosis step by SVM-LK, the metrics were: 85% accuracy, 80% sensitivity, and 90% specificity. In comparison, univariate analysis under the 12 selected SNPs showed some highlights for individual variant alleles that represented risk (PD-L1 and IFIT1) or protection (JAK2 and IFIH1). Variant genotypes carrying risk effects were represented by PD-L2 and IFIT1 genes. The proposed complex classification method can be used to identify individuals who are at a high risk of developing severe COVID-19 outcomes even in uninfected conditions, which is a disruptive concept in COVID-19 prognosis. Our results suggest that the genetic context is an important factor in the development of severe COVID-19. Introduction In recent years, humanity has faced the COVID-19 pandemic, an infectious disease caused by the SARS-CoV-2 virus. The first COVID-19 patients were documented in December of 2019 in China, and now the virus has spread around the world, causing a pandemic with over 668,000,000 cases and over 6.7 million deaths reported by January 2023 [1]. The SARS-CoV-2 infection ranges from asymptomatic to life-threatening, wherein the most revealed that immune genes, such as TLR7, TLR3, TICAM1, TLR8, IRAK, and RnaseL, were associated with COVID-19 severity [21]. For this paper, a LASSO logistic regression model was used to identify the most informative variants for severe or mild cases. These data suggest a broad complexity in the genetic markers of patients with COVID-19 disease. COVID-19 disease is a multi-causal/multi-loci complex disease. We believe that these kinds of genomic rearrangements can only be observed under mass genotyping and machine learning interpretation. Therefore, we present a human genome polymorphism data/machine learning approach that can be used to produce a potential genome prognosis tool for several COVID-19 phenotypes. Our classifier was developed using a panel of 191 SNPs from 96 Brazilian patients with mild and severe COVID-19. The purpose was to assess a set of SNPs that were capable of identifying patients at risk of severe disease. We found an optimal 12 SNPs/genes panel that was able to predict severe COVID-19 with high accuracy, sensitivity, and specificity, using SNPs/genes from viral recognition and antiviral responses. Material and Methods Our method using human genome polymorphisms/machine learning techniques to determine severe COVID-19 prognosis could be separated into three steps: 1. data acquisition, 2. data preprocessing, and 3. data analysis and prognosis. Patient Group Ninety-six COVID-19-positive patients were enrolled in this study: forty-eight of them with mild and the other forty-eight with severe COVID-19 symptoms. 
Two hospitals were used for patient recruitment in the city of Recife, Brazil: Hospital dos Servidores do Estado de Pernambuco and Real Hospital Português. The patients were invited to participate in this research, and after a thorough explanation of the project, the individuals who consented to participate were enrolled in our study (ethics committee approbation n • CAAE: 36403820.2.0000.5190 and CAAE: 38435120.5.0000.5190). Patients were categorized as having mild COVID-19 when they had a positive qRT-PCR without the severe symptoms described below. Patients with severe COVID-19 were those who had a positive qRT-PCR with at least one of the following phenotypes: hospital care and mechanical support ventilation (non-invasive ventilation, high-flow oxygen, intubation, and mechanical ventilation, ECMO-extracorporeal membrane oxygenation, RRT-renal replacement therapy, etc.), oxygen saturation under 96%, or death. Vaccination against SARS-CoV-2 was considered an exclusion criterion. Whole blood collected from the patients was processed and submitted to cryopreservation under −80 • C. Genomic DNA Extraction The genomic DNA extraction was performed using the whole blood of patients through the illustra blood genomicPrep Mini Spin Kit (GE Healthcare, Chicago, IL, USA) and the PureLink ® Genomic DNA Kit (Invitrogen, Waltham, MA, USA) following the manufacturer's protocols. NanoDrop 2000/2000c Spectrophotometer (Thermo Scientific, Waltham, MA, USA) and Qubit ® 3.0 Fluorometer (Life Technologies, Carlsbad, CA, USA) were utilized to measure the DNA concentration and purity based on 260 nm/280 nm absorbance ratio. The best sample from each patient was chosen to proceed to the next steps. already been related in the literature, at some time and population, with the development of some viral diseases. These SNPs are usually from genes that are involved with the innate immune system or that act directly on antiviral responses. This panel has been used by our group in studies with other viral diseases, such as dengue [23]. A total of 15 ng of DNA per sample was used for target enrichment by a multiplex PCR reaction, which was designed for 283 amplicons targeting 296 SNPs in one pool. After 17 PCR cycles, the FuPa reagent was used to digest primer dimers and partially digest PCR amplicons. The unique index combination for dual-index-tagged libraries was generated for each sample using the AmpliSeq library preparation kit with 96 CD-indexes according to the manufacturer's instructions (see AmpliSeq for Illumina on-demand, custom, and community panels' reference guide; document #1000000036408, v09). The barcoded libraries were quantified with the Qubit ® 3.0 Fluorometer (Life Technologies) and normalized for DNA concentration to 12 multiplexed library pools. To determine the Molar concentration of the 12 multiplexed pools, a qPCR standard library quantification was performed using the ProNex ® NGS Library Quant Kit (Promega catalog number NG1201). Agarose gel was used to determine the size of PCR products. Each multiplexed pool and a phiX spike-in were combined to a final loading concentration of 19 pM, which was sequenced on Illumina MiSeq using the MiSeq Reagent v3 for 600 cycles in a single 2 × 150 base pair run. Data Preprocessing The 296 polymorphisms for the 96 patient samples were sequenced. To analyze the data, first, all sequencing results were assessed with the FASTQC tool version 0.11.8 (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/, accessed on 1 April 2021). 
The quality results were compiled using the MultiQC tool version 1.7 [24]. Afterward, the results from the quality control step were used to guide the trimming and filtering step by applying the Trimmomatic tool version 0.38 [25]. Following the sequencing quality evaluation step, the sequencing data were mapped against the human genome (GRCh38) using the bwa tool version 0.7.12 along with the BWA-MEM algorithm [26,27]. After the mapping stage, the variant discovering analysis was performed using the Genome Analysis ToolKit (GATK) version 4.2.2.0 [27]. For this, the best practice workflow for germline variant discovery was applied. At the end of the previous step, we had 96 VCF files with high-quality variants, which were combined using the GATK command called CombineGVCFs, and then, to perform joint genotyping on the samples pre-called with HaplotypeCaller, we used the Genotype-GVCFs command, which was also from GATK. Finally, the variants were annotated using BCFTOOLS version 1.14 (http://samtools.github.io/bcftools/ (accessed on 10 October 2021)) together with the annotate command and the file with the known human variants cited earlier. The last step was to produce a table of genotypes for the loci of interest, which was performed using the VCF tools version 0.1.13 [28] together with the parameter and argument-extract FORMAT-info and GT, respectively. Data Analysis and Prognosis In this step, each locus data were labeled and encoded into integer values considering a categorical scheme of genotypes comprising reference homozygous, heterozygous, or variant homozygous. A total of 105 SNPs were removed from the genome data for having more than 10% missing data. This threshold was considered adequate for balancing the amount of required data imputation and the remaining SNPs for analysis. The remaining 191 SNPs were subject to the missing data imputation most frequently [29]. This approach was selected due to its simplicity and the fact that, biologically, it might make the separation of classes harder while preserving a reasonable number of SNPs to be analyzed. After the preprocessing phase, the dataset kept 191 SNPs and the same initial 96 samples. Feature Selection The feature selection phase was necessary because, after the preprocessing phase, the number of samples was still smaller than the number of SNPs. This kind of scenario, in most cases, does not allow for the proper training of machine learning algorithms, which is called the curse of dimensionality [30]. This phase was conducted according to the bootstrap sampling method: 1-For each of the 1000 rounds, the full dataset was resampled with repositioning to generate a training dataset with 96 samples. The test dataset was composed of out-of-bag samples and those not included in the training dataset. 2-For each round, the training and test datasets were submitted to the SVM-RFE algorithm for feature selection. The selected SNPs for each round were registered. 3-The SNPs selected in more than 500 rounds were considered good discriminators of mild vs. severe COVID-19 cases and were used in the patient prognosis phase. Recursive feature elimination (RFE) is a consolidated technique that is used in feature selection tasks. The main idea behind SVM-RFE [31] is to train the SVM, evaluate feature importance according to this classifier, and recursively remove the least important feature. 
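The bootstrap plus SVM-RFE selection loop described above can be outlined with scikit-learn, which the authors state they used. The sketch below is only an outline under assumptions: `genotypes` and `labels` are synthetic stand-ins for the encoded 96 x 191 genotype matrix and the mild/severe labels, and the number of features retained per round (`n_keep`) is not stated in the text.

```python
# Minimal sketch of the bootstrap + SVM-RFE feature selection (assumptions noted above).
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(96, 191))   # stand-in: 0/1/2 encoded genotypes
labels = rng.integers(0, 2, size=96)             # stand-in: 0 = mild, 1 = severe

n_rounds = 1000                                  # as in the text (reduce for a quick run)
n_keep = 12                                      # features retained per round (assumed)
votes = np.zeros(genotypes.shape[1], dtype=int)

for _ in range(n_rounds):
    # Resample patients with replacement; out-of-bag patients would form this round's test set.
    idx = rng.integers(0, len(labels), size=len(labels))
    X_train, y_train = genotypes[idx], labels[idx]

    # SVM-RFE: a linear SVM ranks features by weight magnitude and the least
    # important feature is removed recursively.
    selector = RFE(SVC(kernel="linear", C=1), n_features_to_select=n_keep, step=1)
    selector.fit(X_train, y_train)
    votes += selector.support_.astype(int)

# SNPs selected in more than half of the rounds are kept for the prognosis step.
stable_snps = np.where(votes > n_rounds // 2)[0]
```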
We decided to use this technique because it is well consolidated (launched in 2002), created to be used in similar cases, and because of previous works of the group [30]. All experiment scripts were implemented using the Python language, and the employed SVM-RFE algorithm was part of the freely downloadable scikit-learn library provided by Pedregosa et al. [32]. Patient Prognosis To quantitatively evaluate the discriminatory potential of the selected SNPs in the previous phase, five machine-learning techniques with different learning strategies were evaluated. The selected techniques were logistic regression [33], K nearest neighbors [34], decision trees [35], and support vector machines [36], with linear and radial basis functions as kernels, all available in the scikit-learn library. The hold-out validation strategy was used so that each model was trained with 76 samples and tested with 20 distinct samples, which were not seen during the training phase. SHAP (SHapley Additive exPlanations) [37] graphics were developed to quantify the contribution that each feature brought to the prediction made by an ML model. In the current study, that meant the impact of each SNP on the tendency of protection or on the risk of severe COVID-19. For the conventional association tests, allelic/genomic frequencies were estimated with the software pLINK v. 1.07 for the SNPs previously selected in the complex analysis. The existence of associations between groups was evaluated by Chi-square tests, or Fisher's exact test when appropriate. The differences were considered significant for p < 0.05. The magnitude of these associations was estimated as the odds ratio (OR) using 95% confidence intervals. COVID-19 Patient Group The patient group comprised Brazilian individuals who tested positive for SARS-CoV-2 and were admitted to two hospitals in the city of Recife, State of Pernambuco, Brazil, before vaccination efforts in Brazil, between 22 August 2020 and 25 August 2021. In this period, the main local SARS-CoV-2 circulating strains were the P1 (Gamma), the AY.99.2 (Delta), the BA.1 (Omicron), and the BA.2 (Omicron). The age of the patients ranged from 11 to 90 years, with 48 non-severe COVID-19 patients and 48 patients with severe clinical phenotypes (Table 1). Genomic Aspects A total of 283 amplicons were sequenced for 96 patient samples. The sequencing data had a mean GC content of 41.14%. At the end of the sequencing run, 19.1 million paired-end reads were produced, 72.9% of the data had a base quality equal to or higher than Phred 30, and the mean quality was 31.3. The number of reads per sample ranged from 13,145 to 742,829. The sequencing depth for each locus ranged from 2 to 5075 times. COVID-19 Genomic Classifier Our genome polymorphisms/machine learning COVID-19 prognosis classifier, whose features were selected by the SVM-RFE method, includes 12 SNPs in 12 innate immune genes (Figure 1). The selected SNPs include rs1990760 (IFIH1) and rs2161525, together with the other loci shown in Figure 1. In the conventional analyses, an individual OR was analyzed for each variant allele/genotype that was present in the genome complex classifier (Figure 2).
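For reference, the univariate allelic test reported below can be reproduced in outline as follows. The 2 x 2 counts in the sketch are hypothetical and only illustrate how an odds ratio, a 95% confidence interval (Woolf's log method is assumed here, as the exact CI method is not stated) and a Fisher's exact p-value are obtained.

```python
# Hypothetical allelic 2x2 table: rows = severe, mild; columns = variant, reference allele counts.
import numpy as np
from scipy.stats import fisher_exact

table = np.array([[40, 56],
                  [26, 70]])
odds_ratio, p_value = fisher_exact(table)

# Woolf (log) method for an approximate 95% CI of the odds ratio.
log_or = np.log(odds_ratio)
se = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.3f}")
```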
For the variant allele effect, significant risk associations for severe COVID-19 development were observed at the PD-L1/rs17804441 (variant allele C, OR = 1.92, CI 1.12-3.45, p = 0.045) and IFIT1/rs303215 (variant allele C, OR = 3.34, CI 1.55-7.20, p = 0.009) loci. On the other hand, a variant allele protection effect was identified for JAK2/rs12340866 (variant allele A, OR = 0.48, CI 0.26-0.89, p = 0.048) and IFIH1/rs1990760 (variant allele T, OR = 0.55, CI 0.34-0.89, p = 0.04). For the variant genotype analyses, significant risk effects were identified for the PD-L2/rs17804441 genotype CC/TT (OR 2.6, CI 1.08-6.15, p = 0.03) under the dominant model and for IFIT1/rs303215 under the genotype model (p = 0.04). It was observed that some variant genotypes (rs17622656, rs2508450, and rs303215) were out of Hardy-Weinberg equilibrium in all COVID-19 groups (n = 96).

Figure 1. The genome polymorphisms/machine learning prognosis classifier under SHAP (SHapley Additive exPlanations) analysis: the impact of each selected genotype (feature) on the model output (mild or severe COVID-19 cases). In blue, reference homozygotes; in purple, heterozygotes; in pink, alternative homozygotes. The figure depicts the SHAP analysis over the test dataset only, so some alleles may not appear with all three values.

For the complex approach, among the five tested machine-learning techniques (Supplementary material S2 and S3), the best classifier was produced by the SVM with the linear kernel (SVM-LK). The best-performing SVM used the hyperparameters C = 1 and kernel = 'linear'. All other hyperparameters kept the default scikit-learn values. The performance indicators of this best complex classifier are shown in Figure 2, with a high sensitivity (80%), specificity (90%), and accuracy (85%) for the complex (combined) genome classifier. Since the best-performing complex classifier was produced by an SVM with a linear kernel, it is not possible to directly understand how each input led to each prediction, nor the impact that each feature had on the general classification. Therefore, to properly understand the inner functioning of the complex classifier, a state-of-the-art ML explainability technique was used: the SHAP technique [37]. Briefly, SHAP quantifies the contribution that each feature brings to the prediction made by an ML model. The graph contained in Figure 1 is an adapted version of a violin plot produced by the SHAP API compatible with scikit-learn. In this figure, it is possible to observe that (i) each one of the 12 selected SNPs is sorted with the most important ones for model classification at the top; (ii) reference homozygotes are represented as blue dots, heterozygotes as purple dots, and alternative homozygotes as pink dots; (iii) each dot represents the genotype of a given patient for the SNP contained within each horizontal line; and (iv) at the bottom, it is possible to see the impact of each SNP on the prediction of mild or severe cases by the complex classifier. It is possible to observe the high relevance of IFNB1, DC-SIGNR, JAK2, PD-L1, IFIH1, STAT1, and IL10RA polymorphisms.
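A compact sketch of the hold-out evaluation and SHAP inspection just described is given below. The C = 1 linear-kernel SVM and the 76/20 split follow the text; the data arrays are synthetic stand-ins, and the stratified split and the use of the decision function as the explained output are assumptions.

```python
# Sketch of the hold-out evaluation of the SVM-LK classifier plus a SHAP summary plot.
import numpy as np
import shap
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(96, 12)).astype(float)   # stand-in: 12 encoded genotypes per patient
y = rng.integers(0, 2, size=96)                        # stand-in: 0 = mild, 1 = severe

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=76, test_size=20, stratify=y, random_state=0)

clf = SVC(kernel="linear", C=1).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # severe cases correctly flagged
specificity = tn / (tn + fp)   # mild cases correctly flagged

# SHAP values show how each genotype pushes an individual prediction towards
# the mild or severe class (cf. the violin plot of Figure 1).
explainer = shap.KernelExplainer(clf.decision_function, X_train)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```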
It is also interesting to note the "individual" effect of each polymorphic locus on the classifier: for some SNPs, the reference homozygous genotype contributed as a risk element (IFNB1, rs1051922) and, for others, the variant homozygous genotypes were the risk factors (STAT1, rs3771300). Curiously, some heterozygous genotypes worked on both sides of the classification, depending on the genomic context of each patient (i.e., purple heterozygous genotype in IFIH1, STAT1, and IL10RA): a probable codominance effect. COVID-19 Genomic Prognosis Classifier The SARS-CoV-2 infection promotes broad asymptomatic to life-threatening symptoms that can culminate in acute respiratory distress syndrome and death as COVID-19 progresses [2,3]. The severe COVID-19 disease especially affects patients with comorbidities such as diabetes and cardiovascular illnesses, which are both commonplace in older populations [2]. In addition to comorbidities and age, studies have described important factors that relate to the progression of severe COVID-19, such as virus genetics [5], sex [6,7], blood type [38], and host genetics [8]. However, the prognosis of COVID-19 has still been a challenge due to the complex nature of the disease. Early prediction of COVID-19 severity is fundamental for better management, patient care, and therapy. In clinical routines, the optimal medical practice would be to identify, as soon as possible, patients that would develop severe forms of the disease and to provide specific and individual treatment to avoid disease progression. The most proposed COVID-19 classification methods use a wide variety of clinical and laboratory data (disease stage-dependent, tissue-dependent, etc.) and/or subjective data-dependent, sometimes based on clinician experience [22,39]. However, some genetic studies have proposed classification methods for early COVID-19 prediction, avoiding the imprecision of clinical interpretation. Some of these works were based on conventional genetic analysis (i.e., considering individual genome markers effect) [14,40,41], but more recent efforts have attempted new alternatives based on massive genome data and computational intelligence tools. These new ways consider some complex genome signatures that are impossible to be observed under traditional perspectives, which is an emergent field in genetics (see [8]). In our present work, however, we solely compare human innate immune genome marker (SNPs) data by (i) statistical genetic analysis with (ii) a complex machine learning genome polymorphisms classifier. In the univariate analysis, we observed that variant alleles in PD-L1("C", rs17804441, OR = 1.92) and in IFIT1 ("C", rs303215, OR = 3.34) genes had significant risk effects on the development of severe COVID-19, whereas polymorphism in JAK2 ("A", rs12340866, OR = 0.48) and in IFIH1 ("T", rs1990760, OR = 0.53) were related to protection. For genotype associations, we observed that PD-L2 CC/TT (rs17804441, OR = 2.6) in the dominant model and IFIT1 (rs303215) under the genotype model were highlighted as key elements for severe phenotype development. Some of these SNPs were observed as important elements in SARS-CoV-2 infection or other respiratory viral infections, as discussed below. It is important to note that some non-significant alleles/genotypes in the univariate analysis were considered significant members in the complex classifier (Figure 1), showing the importance of taking a complex view to better understand complex diseases. 
Our proposed genome polymorphisms/machine learning approach showed an accuracy greater than 85%, alongside a sensitivity and specificity of over 80% and 90%, respectively. The results revealed complex multi-loci human genome signatures of severe COVID-19 predisposition using data from 12 key SNPs in 12 innate immune genes, under different genotype compositions, particularly from the IFN pathway (Table 2). These findings support previous observations that suggest the importance of IFN and interferon-stimulated genes (ISGs) on COVID-19 outcomes. They also show that the key element for genetically influenced diseases can be the "genome context," instead of the commonly used single genetic markers, as observed in dengue complex genetics [23] and COVID-19 [8]. The study by Asteris and colleagues (2021) was based on machine learning (ANN) and human genome and biological data for COVID-19 prognosis. The authors, by means of a next-generation sequencing approach, identified, among 381 SNVs and 133 patients, five critical polymorphisms associated with severe COVID-19 in the C3 (2), THBD (1), CFH (1), and CFHR1 (1) genes. They used the genome data in combination with gender and patient age data to develop an ANN-predicted mortality architecture (~90%) for severe COVID-19 [8]. However, it is important to note that our prognosis method still has some limitations, especially considering that we used hold-out validation, which was implemented with a view to later use of the SHAP method, and the lack of a methodology to estimate ancestry. Nevertheless, the population studied came from the same geographical area, which contributed to decreasing the influence of ancestry as a bias [42].

Table 2. Selected SNPs, genes, and gene functions:
SNP | Change | Consequence | Gene | Gene function
rs16923189 | A>G | 5′ UTR variant | PD-L2 | Negative regulation of interleukin-10 production [49]
rs17804441 | T>C | intron variant | PD-L1 | Inhibitory receptor ligand expressed by T cells, B cells, and various types of tumor cells [49]
rs1051922 | G>A | stop gained, coding sequence | IFNB1 | A cytokine that belongs to the interferon family of signaling proteins [50,51]
rs12340866 | G>A | intron variant | JAK2 | The non-receptor tyrosine kinase of the JAK/STAT pathway [52]
rs3771300 | T>G | genic downstream transcript | STAT1 | Member of the STAT protein family [52]
Viral replication:
rs17804441 | T>C | intron variant | PD-L1 | Positive regulator of ISG expression [23]
rs303215 | T>C | intron variant | IFIT1 | An interferon-induced protein that inhibits viral replication and translational initiation [53,54]
rs17622656 | G>A | intron variant | IRF1 | A transcriptional regulator that activates the genes involved in both innate and acquired immune responses [55,56]

DC-SIGNR is a DC-SIGN homolog that is encoded by the CLEC4M gene with 77% amino acid identity and features polymorphism in its extracellular neck region, which is encoded by a tandem repeat domain in exon 4 [44,45].
Together with ACE2, DC-SIGNR is expressed in the lung and small bowel of patients who are fatally infected with SARS-CoV [58]. A genetic risk association study conducted by Chan et al., 2006 [59] on SARSinfected patients during the outbreak in 2003 showed that individuals homozygous for CLEC4M tandem repeats were less susceptible to SARS infection. They also showed that homozygous DC-SIGNR cells had a higher capacity to bind SARS-CoV with little dissociation, leading to viral degradation in a proteasome-dependent manner and a lower capacity for trans infection. Incidentally, Interleukin-1 receptor-associated kinase-4 (IRAK-4) is a kinase that activates NF-kappaB in both the Toll-like receptor (TLR) and the T-cell receptor (TCR) signaling pathways [46]. IL10, IFNB and JAK-STAT Pathways PD-L1 (CD274) and PD-L2 (PDCD1LG2), also known as programmed cell death-1 ligands 1 and 2, are cell-surface receptors that are found on hematopoietic cells. Along with PD-L1, PD-L2 binds to PD-1 (programmed cell death-1 receptor): an inhibitory receptor that acts as an immune checkpoint and plays a role in suppressing the adaptive immune system. Both PD-L1 and PD-L2 inhibit T-cell proliferation and cytokine production to ensure homeostasis and reduce the damage caused by the host immune response [49]. When both PD-L1 and PD-L2 were blocked, dendritic cells enhanced T-cell proliferation and cytokine production, including that of IFN-and IL-10, which showed that they were the negative regulators of the IL-10 pathway [60]. PD-L1 is also a positive regulator of Interferon stimulated genes (ISGs), activating their expression [23]. Interleukin-10 (IL-10) and other anti-inflammatory cytokines have a central role in infection, preventing host damage by limiting immune response, and can be produced because of virus replication. An understanding of how the IL10 expression is complexly regulated was related by Saraiva and O'Garra, 2010 [47], including upregulation by TLRindependent stimuli, such as DC-SIGN, another target identified in our study. Lu, 2021 [61] related that cytokine storm is similar to COVID-19 and SARS-CoV patients, but in severely ill COVID-19 patients, IL-10 is incredibly elevated, which suggests that IL-10 could be a putative biomarker. The engagement of the JAK-STAT signaling circuit by the ligation of the IL-10 receptor complex occurs mainly through STAT3 [48,52]. It has been shown that individuals with severe immune-mediated diseases, such as very early onset inflammatory bowel disease (VEOIBD) and autoimmune thyroid diseases, have polymorphisms in IL-10 and its receptors, IL-10RA and IL-10RB. IL-10RA mutation at the 3 ends of exon 4 (c.537G → A) reflected an increased risk of severe IAE (influenza-associated encephalopathy) [62]. The IFNβ expression, in turn, was the result of the PRRs activation, for example, MDA5 and IRAK4, by a specific PAMP. The signaling begins when it binds with the IFNA receptor and triggers the JAK-STAT pathway, activating interferon-stimulated genes (ISGs), such as IFIT1 and IRF1. The upregulation of ISGs simulates an antiviral state, which inhibits viral entry, replication, and translation in both non-infected and infected cells, respectively [50,51]. Virus Replication The interferon-induced protein with tetratricopeptide repeats 1 (IFIT1) inhibits virus replication by binding and regulating the functions of cellular and viral proteins and RNAs [54]. 
For example, IFIT1 impedes JEV replication by inhibiting mRNA translation through direct binding to mRNA 5′ structures [53]. This is the first report associating rs303215 T>C with a viral disease. Finally, IRF1 is a transcriptional regulator that was originally characterized in the type I interferon pathway [55] but is now known as an activator of genes involved in both innate and acquired immune responses [56]. In the early phase of a virus infection, IRF1 is highly expressed, stimulating the production of IFNs and ISGs. Then, in the later phase, secreted IFN triggers the STAT1-STAT2-IRF9 pathway to induce IRF1, which acts as a feedback loop [63]. Together and in combination, all these factors must be important to the progression of the severe COVID-19 phenotype. Conclusions Here, we presented a genome polymorphisms/machine learning approach for predicting the severe COVID-19 phenotype based solely on complex human innate immune genome marker data and machine learning techniques: a robust approach for a potential severe COVID-19 prognosis tool. This method has some key novelties: it can be applied to any genetically influenced disease, at any development stage, even before infection (in the case of infectious diseases), using a broad human sample. Moreover, it is a method free of clinical and laboratory data and of medical interpretation (which depends on medical experience). The presented tool was able to select the optimal loci combination and accurately predict those patients who would develop COVID-19 disease based on their genome background, including key elements of the host antiviral response and innate immune system (i.e., avoiding clinical routine limitations). However, our method presents some limitations that need to be considered: a single-center patient group and ancestry considerations, data validation aspects, etc. Despite this, we consider that our method is a preliminary approach with potential application in the future clinical routine. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: All data generated in this study are available in the Supplementary Materials section.
6,307
2023-02-28T00:00:00.000
[ "Medicine", "Computer Science" ]
Testing embedded system through optimal mining technique (OMT) based on multi-input domain Testing embedded systems must be done carefully, particularly in the significant regions of the embedded system. Inputs to an embedded system can occur in multiple orders, and many relationships can exist among the input sequences. Considering these sequences and the relationships among them is one of the most important aspects that must be tested to find the expected behavior of the embedded system. On the other hand, combinatorial approaches help in determining fewer test cases that are quite enough to test the embedded system exhaustively. In this paper, an Optimal Mining Technique that considers the multi-input domain and is based on built-in combinatorial approaches is presented. The method exploits multi-input sequences and the relationships that exist among multi-input vectors. The technique has been used for testing an embedded system that monitors and controls the temperature within nuclear reactors. INTRODUCTION 1. Background Testing plays a significant role in the development of any system; it is a systematized process to verify the reliability, behaviour and performance of a system against the considered stipulations. It enables a device or a system to be as defect-free as possible: testing acts as one of the detective measures of quality, while verification is one of the corrective measures. Black-box testing inspects the functionality of an application without looking into its internal structures or workings. It mainly concentrates on the functional requirements of the embedded system without considering the internal working of the system. The main aim of this testing is to select acceptable test cases and detect as many faults as possible, based on the requirements specification, at the least cost and time. Testing embedded systems involves testing software, hardware, or both. Testing of hardware and software, however, can be conducted independently, and then testing has to be undertaken after the software is migrated into the hardware. Embedded systems are a mixture of various computing devices, such as microcontrollers, application-specific integrated circuits, and digital signal processors. Widely used real-world systems such as routers, power plant systems, medical instrument systems, home appliances, air traffic control stations, firewalls, telecommunication exchanges, robotics, industrial automation and smart cards are examples of embedded systems. Failures in hardware systems may be described in terms of defects, errors and faults. Combinatorial testing is a commonly utilized black-box practice that can dramatically diminish the number of test cases, as it is a highly competent technique to detect software faults. This method derives test cases from the input domain of the system under test. But when the input domain is large and the output domain is much smaller, it is desirable to test the output domain either exhaustively [1] or as much as possible.
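As a simple illustration of how combinatorial (pairwise) testing reduces the number of test cases, the sketch below greedily builds an all-pairs suite for a small hypothetical input model. It is not the OMT algorithm proposed in this paper, and the parameter names and values are invented for the example.

```python
# Greedy all-pairs (pairwise) test suite construction: illustrative sketch only.
from itertools import combinations, product

def pairwise_suite(parameters):
    """parameters: dict mapping parameter name -> list of possible values."""
    names = list(parameters)
    # Every value pair across every pair of parameters must be covered at least once.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}
    candidates = [dict(zip(names, vals)) for vals in product(*parameters.values())]
    suite = []
    while uncovered:
        # Pick the candidate test that covers the most still-uncovered pairs.
        best = max(candidates, key=lambda t: sum(
            ((a, t[a]), (b, t[b])) in uncovered for a, b in combinations(names, 2)))
        suite.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return suite

# Hypothetical example: 3 inputs with 3 values each -> 27 exhaustive combinations,
# but the pairwise suite needs far fewer tests while still covering every value pair.
tests = pairwise_suite({"temp_sensor": ["low", "normal", "high"],
                        "coolant_flow": ["off", "half", "full"],
                        "alarm_mode": ["silent", "audible", "remote"]})
print(len(tests))
```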
For a few safety-critical embedded systems, building test cases drawn from the multi-input domain is a necessity, as multiple inputs can occur at the same time. However, an embedded system must also be tested from other perspectives that include the input, output, input-output and multi-output domains. Generation of test cases from the multi-input perspective is more suitable than the other perspectives, as it guarantees that all, or as many as possible, input combinations are comprehensively tested. Exhaustive testing [2] considering non-multi-input domains is out of the question when many input variables exist and they act in several combinations. Pseudo-exhaustive testing aims at considering only those combinations that will most likely result in failure conditions. The Optimal Mining Technique (OMT) derives test cases by choosing certain combinations of either the inputs or the outputs, based on the possibility of occurrence in the multi-output or multi-input domain of an embedded system such as TMCNRS, which monitors and controls temperatures within nuclear reactor systems. In the case of TMCNRS, the temperature occurrences within the nuclear reactor can happen simultaneously or independently. In an embedded system, processing tasks are designed for handling multi-inputs which occur simultaneously. These tasks must be tested thoroughly to guarantee the proper working of the embedded system. Test cases should be generated to verify that the proper outputs occur based on the occurrence of multi-inputs. Problem In the case of embedded systems, inputs occur as a set in addition to the occurrence of independent inputs. The behaviour of an embedded system when multi-inputs occur must be tested to find whether the system has been properly developed to process multi-inputs that occur simultaneously. Proposed solution An improved Optimal Mining Technique is presented in this paper; the range of values that must be used for generating the test data has also been pre-identified and mapped with the output variables. The multi-input relationships and also the input-output relationships can be used as a database for mining relationship patterns for further modelling. The pattern of occurrence of the input variables can be determined by using a mining algorithm or through manual inspection. RELATED WORKS Lakshmi Prasad, et al., [3] had presented a comprehensive survey on combinatorial testing. In [4] a method is proposed that deals with the generation of test cases based on the input domain, with special consideration to testing standalone embedded systems. The algorithm can successfully generate pairs for the related input parameters and eliminate non-related input pairs, thereby reducing the size of the test suite to a minimum. Lakshmi Prasad, et al., [5] had proposed several combinatorial methods for testing an embedded system. Gray, D. M. Cohen, et al., [6] introduced the combinatorial design approach for generating test cases automatically. They have described an application developed using the method presented by them. They have shown that the time required for the development of the test plan is reduced considerably, and they have also shown that the entire code is covered using the test plan that has been used to generate the test cases. Cohen D. M., et al., [7] have presented a system called AETG. The approach presented by them considers all combinations of input parameters, including pair-wise, triple-wise and n-wise combinations.
The approaches presented by them will breed all the valid test pairs ignoring the invalid test pairs. The numbers of test cases generally are of the logarithmic order of the number of input variables used. The AETG has been used for undertaking different types of testing that include unit, functional, acceptance, system, integration, regression and inter-operability testing. Cohen D. M., [8] have presented a method and a system for enumerating a minimal number of test cases for systems with interacting elements that have relationships between the elements and the number of characteristics evaluated for each element. In the method, the user enters values for each of the elements and then defines relationships between the elements. Our method then enumerates a table of test cases for each relationship between elements using deterministic procedures, when applicable, and random procedures The significant expansion of autonomous control and information processing capabilities in the coming generation of mission software systems results in a qualitatively larger space of behaviours that needs to be "covered" during testing, not only at the system level but also at subsystem and unit levels Tung, [9]. A major challenge in this area is to automatically generate a relatively small set of test cases that, collectively, guarantees a selected degree of coverage of the behaviour space. They described an algorithm for a parametric test case generation tool that applies a combinatorial design approach to the selection of candidate test cases. Evaluation of this algorithm on test parameters from the Deep Space One mission reveals a valuable reduction in the number of test cases, when compared to an earlier home-brewed generator. Lei Y., et al., [10] have used a criterion which is test specification based. The criterion considers each pair of input variables, and every value pair of the input pair considered and every value selected covered through a test case. They have evolved this strategy for carrying pair-wise testing. Covering arrays have been augmented further by Cohen M. B., et al., [11] through inclusion of the concept called Annealing. This has led to special array containing several sub-arrays which all together contain all the t-tuples, each tuple appearing at least once. The strength of the array is measured through t number of tuples contained in the array considering all the sub-array contained in it. They have analyzed all the arrays that have strength of 3 tuples using recursive combinatorial construction and using search techniques. The technique used by them leveraged optimality and efficiency of the size through use of combinatorial construction and heuristic search. Further addition to the recursive combinatorial generation added with heuristic search has been made through detection of interaction of multiple components that lead to different kinds of failures. R. Kuhn, et al., [12] have applied this approach to the real world applications and obtained the analytical results. The way a system is tested depended on the type of the system. Choice of an appropriate testing method is crucial for making testing effective and rational. The application related to flow of water inside a carbon Nano tube that is single walled and in which several temperature gradients exist has been tested by Shiomi J., et al., [13] by using combinatorial method. Ochoa, et al., [14] had introduced the box-fusion which is an approach to improve pair wise testing. 
Box-Fusion approach was guessed and a case study was carried out by using two software implementations: the Simple LTL Generator that builds Linear Temporal Logic (LTL) formulae with atomic propositions and the prospect algorithm that can produce LTL formulae from more than 31,000 possible input combinations. They have presented evaluation of Box-Fusion approach which considers, pair wise testing approach, annotated control flow graphs approach and regression testing approach. M. Lakshmi Prasad, et al., [15]- [19] had built test cases by particle swarm optimization (PSO) for multi output domain embedded systems using combinatorial techniques. They also used neural network based strategy for automated construction of test cases for testing an embedded system using combinatorial techniques. They also developed generating test cases for testing web sites through neural networks and input pairs. They also generated test cases using combinatorial methods based multi-output domain of an embedded system through the process of optimal selection. Abdul Rahman, et al., [20] had presented a survey on input-output relationship relation to test data generation strategies. They reviewed the existing combinatorial test data generation strategies supporting the IOR features specifically taking the nature inspired algorithm as the main basis. Benchmarking results illustrate the comparative performance of existing nature inspired algorithm based strategies supporting IOR. Combinatorial methods can also be used for testing software that predominantly uses logical expressions based on Boolean or binary inputs. In the software used related to most of the safety critical applications, Boolean expressions are used extensively. S. Vilkomir, [21] has steadied effectiveness of combinatorial testing when binary inputs are used. Deepa Gupta, et al., [22] had proposed a sequence generation of test cases using pair wise approach. They presented an approach which uses the series origination approach for pair wise test case origination. This approach makes certain to disseminate the required intent of trial run cases which cover all available relations between all instructions pairs at least once. Trial run selection specification is this approach is based on combinatorial testing. Jose Torres-Jimenez, et al., [23] Covering arrays are combinatorial structures which have applications in fields like software testing and hardware Trojan detection. In this paper we proposed a two-stage simulated annealing algorithm to construct covering arrays. The proposed algorithm is instanced in this paper through the construction of ternary covering arrays of strength three. We were able to get 579 new upper bounds. In order to show the generality of our proposal, we defined a new benchmark composed of 25 instances of MCAs taken from the literature, all instances were improved. [24] Clients today want more for less and the IBM test mantra of Test Less Test Right helps address this ask by placing Combinatorial Test Design (CTD) at the heart of the solution. This document presents two case studies of CTD implementation in client engagements and focuses on the approach, process and challenges addressed to scale up the implementation and make CTD a mainstream activity. The IBM Focus tool was used in both cases to implement Combinatorial Test Design for optimization of tests and for reducing test effort while increasing test coverage. P. S., M. 
B., et al., [25] Combinatorial Testing is a test design methodology that aims to detect the interaction failures existing in the software under test. The combinatorial input space model comprises of the parameters and the values it can take. Building this input space model is a domain knowledge and experience intensive task. The objective of the paper is to assist test designer in building this test model. A rule based semi-automatic approach is proposed to derive the input space model elements from Use case specifications and UML use case diagrams. A natural language processing based parser and an XMI based parser are implemented. The rules formulated are applied on synthetic case studies and the output model is evaluated using precision and recall metrics. The results are promising and this approach will be of good use to the test designer. Y. Yao, et al., [26] as an effective software testing technique, combinatorial testing has been gradually applied in various types of test practice. In this case, it is necessary to provide useful combinatorial testing tools to support the application of combinatorial testing technique on industrial scenarios, as well as the academic research for combinatorial testing technique. To this end, on the basis of the research results of this group, a suite of combinatorial testing tools has been developed, whose functions include test case generation, test case optimization, and etc. For the requirements from both industrial and academic scenarios, the tools should be configurable, scalable, modular, and etc. This paper gives a brief introduction to the design and implementation of these tools. Keywords-combinatorial testing, combinatorial testing tools, test generation, test prioritization. APPLICATION OF OPTIMAL MINING TECHNIQUE TO PILOT PROJECT The modified OMT algorithm and its application to the pilot project are presented below: Steps of optimal MINING technique 3.1.1. Step-1 Determine the regular and embedded system specific input variables from the test requirements specification. Input variables that are of continuous nature have been selected. In this OMT method only, the multi-input variables that are of continuous in nature have been considered. The details of input variables i.e. regular and ES specific selected are shown in Table 1 and Table 2. The details of output variables i.e. regular and ES specific selected are shown in Table 3 and Table 4. The range of values that must be used for generating the test data have also been pre-identified and mapped with input variables. The relationships that exist between the input variables and its corresponding related input variables can be used as the basis for generating the test cases. Step-2 Determine the input-input relationships and also the relationships with the output variable which can be used as a database for mining relationship pattern for further modelling. Sample associativity and the relationships among the input and output variables are shown in the Table 5. INIT-MESSAGE -LCD-STAT LCD-WRITE -----2. MSG-PW-ENTRY -LCD-STAT LCD-WRITE -----3. Step-3 A set of input variables occurs in union. The set of input variables behave in a pattern. The pattern of occurrence of the input variable can be determined by using a mining algorithm or through manual inspection. Following are the input sets and the pattern of occurrence of those sets which can be mined or manually determined. Step-5 Trace out the output vectors having variables of similar nature and domain. 
Following are the output vectors related to example application. Step 8: Stop ()  ISSN: 2088-8708 COMPARATIVE ANALYSIS Three methods exist in literature which can be used for generation of test cases that can be used for testing the embedded systems. The methods include generation of test cases using input domain, output domain, and generation of test cases for semi or pseudo exhaustive testing using genetic algorithms. All these methods do not take into account interrelation ships between input variables. The association between the variables is only limited to adjacency. The methods are compared considering the testing requirements of the embedded systems and the techniques that must be used for undertaking the testing of the embedded systems. Table 7 shows comparison based on the suitability to generate the test cases that can be used for testing different features of the embedded system. From the table it can be seen that OMT is made for testing the embedded systems considering all the features that are related to the embedded systems. Table 7. Comparison of the Test case generation methods (Combinatorial Methods) based on the suitability of the same for testing the embedded systems
4,016.6
2019-06-01T00:00:00.000
[ "Computer Science" ]
Plastic Instability in Medium-Carbon Tempered Martensite Steel Inhomogeneous plastic deformation damages the surface quality of a product in the metal forming process. Therefore, it is necessary to investigate the plastic instability of a metal. Tempered martensite is a common microstructure of medium-carbon steel. Plastic instability (Lüders phenomenon, Portevin-Le Châtelier phenomenon) in this phase was investigated by a uniaxial tension test performed at room temperature. The formation and propagation of a plastic band were analyzed via two-dimensional digital image correlation, and the strain and strain-rate fields were experimentally evaluated. The results obtained are as follows: (1) there was no clear yield plateau on the stress–strain curve; (2) Lüders phenomenon was present, but the Portevin-Le Châtelier phenomenon was not found; (3) in the Lüders deformation process, local strain distribution in tempered martensite is more complicated than that in ferrite. Introduction Plastic instability occurs in the deformation process of some crystalline materials in the form of single or multiple plastic bands. Piobert [1] and Lüders [2] first reported that plastic instability took place when mild steel transited from an elastic to a plastic state. Portevin and Le Châtelier [3] found that plastic localization existed in a certain range of a plastic deformation process in aluminum-based alloys and low-carbon steels. Their results showed that plastic instability could take place not only in the elastic-to-plastic transition region but also during the process of plastic deformation. The plastic instability occurring in the former period is referred to as the Lüders phenomenon, and that in the latter period is the Portevin-Le Châtelier (PLC) phenomenon. Some materials have only one of the two types of plastic instabilities [4,5], and some have both [6]. The type of plastic instability can be identified from the typical characteristic on the tensile curve: a yield plateau for the Lüders deformation [4,5] or a jerky flow (a series of serrations) for the PLC effect [7][8][9][10][11]. Lüders deformation is dependent on the applied stress [12,13], grain size [14,15], strain rate [16][17][18], specimen size [18,19], and temperature [20]. The PLC effect is strongly influenced by the temperature [21] and strain rate [22]. Although the micro-mechanisms of the Lüders and PLC phenomena have been widely studied, they have not been clearly explained nor confirmed. Cottrell [23,24] first proposed a dislocation model that assumes that the two types of plastic instability are related to the interactions between solute interstitials, such as C and N, and mobile dislocations. The dislocations are initially locked by solute interstitials, which tend to form Cottrell atmospheres around them. The pinning of dislocations is associated with the increase in yield strength (i.e., hardening). When the stress threshold for unlocking or multiplying these dislocations is exceeded, the dislocations are unpinned, and a rapid multiplication of mobile dislocations occurs; as a result, yield strength decreases (i.e., softening). The pinning and the unpinning of dislocations result in strain aging [25]. It is generally believed that the Lüders phenomenon is caused by static strain aging (SSA) [26,27], and the PLC effect is attributed to dynamic strain aging (DSA) [10,[28][29][30]. Onodera et al. found that the Cottrell atmosphere did not agree with the Lüders deformation in an alloy, Al-4Cu-0.5Mg-0.5Mn [31]. 
Hahn [32] proposed another model in which the dominant mechanism of Lüders band formation is attributed to rapid dislocation multiplication. Steel has several elementary microstructures, such as ferrite, austenite, bainite, martensite, and pearlite. The Lüders phenomenon is known to occur in the ferrite [15,16] or austenite [33] phase, and the strain-induced phase transformation from metastable austenite to martensite leads to the PLC effect [26,34]. Due to the presence of ferrite or austenite, plastic instability can also take place in multi-phase steels containing either phase, e.g., ferrite/pearlite steel [4,5], ferrite/austenite steel [2,6], and ferrite/austenite/martensite steel [34]. However, the possible occurrence of plastic instability in phases other than ferrite and austenite was not reported in the literature. Tempered martensite, which has an excellent balance of strength and toughness, is the most common microstructure used in medium-carbon steel. However, plastic instability in this phase is unknown. In this study, the evolution of strain and strain rate in the tempered martensite of medium-carbon steel in a tension test was analyzed via digital image correlation (DIC), and the possibility of the occurrence of plastic instability was investigated by the obtained experimental results. Experimental Methods Hot-rolled steel plates 120 mm long × 60 mm wide × 1.2 mm thick were used as as-received material whose chemical composition is 0.3 C, 1.5 Mn, and the balance Fe (in wt%). The as-received plates were heat-treated to obtain the desired microstructures in the present study: (1) Tempered martensite: Tempered martensite is the microstructure in focus for the present study. The as-received plates were heated at 850 • C for 5 min, and then were water quenched to produce martensite. The water-quenched plates were heated in a furnace (600 • C) for 20 min, followed by furnace cooling to room temperature. The tempered martensite steel plates were denoted as QT steel, which is composed of fully tempered martensite. (2) Ferrite: The Lüders phenomenon in ferrite was well investigated in the literature [2,4,5]. We selected ferritic steel as a reference for comparing the characteristic of plastic instability between the tempered martensite steel and ferritic steel. The as-received steel plates were heated at 850 • C for 5 min, and then the furnace cooled to room temperature. At 850 • C, the microstructure of the as-received steel transformed into austenite. The austenite transformed into ferrite and pearlite during furnace cooling, and the volume fraction of the ferrite and pearlite was 95.4% and 4.6%, respectively. The obtained plates were denoted as F steel. Dog-bone-type specimens (c.f. Figure 1) with a parallel part 30 mm long, 8 mm wide, and 0.8 mm thick were machined from QT steel (two specimens, and their numbers: QT-1 and QT-2) and F steel (two specimens, and their numbers: F-1 and F-2). Their front surfaces were sprayed with white and black paint to make speckles for DIC analysis. An extensometer with a gauge length (GL) of 30 mm (equal to the length of the parallel part of the specimen) was attached to the back surface. Tension tests were performed on the four specimens at room temperature and at a crosshead speed of 0.01 mm/s. The deformation process on the front surface was recorded successively at a time interval of 0.5 s using a camera. 
The digital images (area: 30 mm × 8 mm) obtained were processed using VIC-2D software with a subset size of 9 pixels × 9 pixels (246 µm × 246 µm) and a step of 5 pixels (137 µm) to produce the displacement field, strain field, and strain-rate field. In the DIC operation, the displacement uncertainty is 0.02 pixels. The DIC measurement area covers the whole GL. In the present study, two results were obtained from the tension tests: (1) macroscopic stress-strain curves showing the global image of the tensile property and (2) the evolution of plastic deformation in terms of the strain field and strain-rate field.

Macroscopic Stress-Strain Curves Two tension tests were carried out for each steel. The obtained macroscopic stress-strain curves, which show the average deformation behavior over a gauge length of 30 mm, are provided in Figure 2. If the two stress-strain curves for one steel are plotted in the same coordinate system, the curves overlap, and it is difficult to distinguish the individual curves. To clearly identify the individual curves of one steel, the two curves (F-2 and QT-2) were intentionally shifted 0.05 along the macroscopic strain axis. The optical microstructures of F steel and QT steel are shown in Figure 2.

Figure 2. Macroscopic stress-strain curves of (a) ferrite/pearlite steel (F steel) and (b) tempered martensite steel (QT steel). Two tests were performed for each steel (F-1 and F-2 for F steel; QT-1 and QT-2 for QT steel).

The stress-strain curve of F steel shows a typical characteristic of Lüders deformation, i.e., a yield plateau. This means that the Lüders phenomenon inevitably occurs in F steel. In contrast to F steel, the stress-strain curve of QT steel does not show clear evidence of Lüders deformation or the PLC effect. This indicates that the occurrence of plastic instability in QT steel cannot be identified only from the stress-strain curve. Digital image correlation was used to analyze the plastic deformation behavior in the following section.
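As a rough illustration of how the strain and strain-rate fields used in the following analysis can be derived from successive DIC displacement fields, a small numpy sketch is given below. It is not the VIC-2D processing chain: the displacement arrays are synthetic stand-ins sampled on the stated 137 µm grid at the 0.5 s image interval, and the field sizes are assumptions.

```python
# Minimal sketch of deriving strain and strain-rate fields from two DIC displacement fields.
import numpy as np

step_mm = 0.137          # DIC grid spacing (5 pixels, about 137 um)
dt = 0.5                 # time between successive images (s)
rows, cols = 220, 58     # roughly a 30 mm x 8 mm area on that grid (assumed)

# Synthetic axial displacement fields (mm) standing in for the DIC output at two instants.
y = np.linspace(0.0, 30.0, rows)[:, None]
u_t0 = 1.0e-3 * y * np.ones((rows, cols))
u_t1 = 1.2e-3 * y * np.ones((rows, cols))

def axial_strain(u, spacing):
    """Engineering strain e_yy: gradient of axial displacement along the loading axis."""
    return np.gradient(u, spacing, axis=0)

# Strain-rate field: finite difference of the strain field between frames (1/s);
# localized plastic bands show up as narrow regions of high strain rate.
strain_rate = (axial_strain(u_t1, step_mm) - axial_strain(u_t0, step_mm)) / dt
```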
Macroscopic Stress-Strain Curves Two tension tests were carried out for each steel. The obtained macroscopic stress-strain curves, which show the average deformation behavior over a gauge length of 30 mm, are provided in Figure 2. If the two stress-strain curves for one steel were plotted in the same coordinate system, the curves would overlap, and it would be difficult to distinguish the individual curves. To clearly identify the individual curves of each steel, two curves (F-2 and QT-2) were intentionally shifted by 0.05 along the macroscopic strain axis. The optical microstructures of F steel and QT steel are shown in Figure 2. Figure 2. Macroscopic stress-strain curves of (a) ferrite/pearlite steel (F steel) and (b) tempered martensite steel (QT steel). Two tests were performed for each steel (F-1 and F-2 for F steel; QT-1 and QT-2 for QT steel). The stress-strain curve of F steel shows a typical characteristic of Lüders deformation, i.e., a yield plateau. This means that the Lüders phenomenon inevitably occurs in F steel. In contrast, the stress-strain curve of QT steel does not show clear evidence of Lüders deformation or the PLC effect. This indicates that the occurrence of plastic instability in QT steel cannot be identified from the stress-strain curve alone; digital image correlation was therefore used to analyze the plastic deformation behavior, as described in the following section. Note that a region of the stress-strain curve of QT steel is enclosed by a dotted rectangle; plastic instability was found to take place within this region, as described in detail later. Plastic Instability Lüders deformation or the PLC effect is characterized by the plastic band. Previous studies [4,5] showed that the strain-rate field can effectively identify moving plastic bands. In the present study, the deformation process on the front surface of the tension specimens was digitized by a camera, and the digital images obtained over the whole tension process were used to analyze the strain and strain-rate fields using two-dimensional DIC. The analysis of the strain-rate field for the F steel and QT steel shows that plastic bands occur in two regions: (1) the elastic-to-plastic transition region (i.e., the Lüders phenomenon), and (2) the region after the onset of necking of the tension specimen. This means that before the onset of necking, only the Lüders phenomenon exists in both steels. It is well known that necking of the specimen induces plastic instability; this kind of plastic instability is not our concern and will not be discussed in the present study. For the F steel, the Lüders phenomenon occurs mainly on the yield plateau. Eleven images (image 1 to image 11) on the yield plateau of the F-1 specimen (cf. Figure 3) were selected, and their strain-rate fields over a gauge length of 30 mm are shown in Figure 3. It is generally believed that the pinning and unpinning of dislocations cause the formation of a Lüders band. To form a Lüders band, a certain level of stress is required [35]. At a given applied stress level, a site with a high stress concentration meets this critical stress condition more easily than sites with low stress concentrations. The shoulder of a specimen produces a high stress concentration, providing an appropriate site for Lüders band nucleation. As shown in image 1, two plastic bands (B-1 and B-2) formed near the right and left shoulders of the specimen. The position of image 1 on the stress-strain curve is ahead of the yield point. This indicates that plastic bands formed ahead of the yield point, which agrees with the observations of previous studies [4,5]. After band nucleation, B-1 propagates from right to left, while B-2 propagates toward the right. Lüders band propagation is characterized by the movement of a leading band front into the adjacent elastic region. It is also a repeated process of pinning and unpinning of dislocations, so the applied stress is required to exceed a critical level. Band propagation velocity has been reported to be related to the magnitude of the applied stress [4]. As shown in Figure 3, the applied stress on the yield plateau remains essentially constant, but it fluctuates significantly at some points, where it decreases markedly. The applied stress corresponding to image 3 is too low to exceed the critical stress required for unlocking the dislocations, resulting in band disappearance. The strain-rate field of image 3 verifies the disappearance of the two plastic bands. When the applied stress recovers to its original value, the two bands reappear at their original sites. In Figure 3, four arrows indicate four low-stress levels; the two bands also disappear around these stress levels.
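As a rough illustration of how moving bands can be read off a strain-rate field, the following sketch thresholds a one-dimensional strain-rate profile along the gauge length and reports the extent of each active region. The profile, threshold, and band positions are synthetic assumptions, not data from Figure 3.

```python
# A hedged sketch: locate plastic bands as contiguous regions where the
# strain-rate profile exceeds a detection threshold.
import numpy as np

x = np.linspace(0.0, 30.0, 220)                  # position along GL, mm
profile = np.exp(-((x - 6.0) / 0.8) ** 2) \
        + np.exp(-((x - 24.0) / 0.8) ** 2)       # two synthetic bands

threshold = 0.5 * profile.max()                  # assumed detection level
active = profile > threshold

# Group contiguous active points into bands and report their spans.
edges = np.flatnonzero(np.diff(active.astype(int)))
starts, ends = edges[::2] + 1, edges[1::2] + 1
for s, e in zip(starts, ends):
    print(f"band from {x[s]:.1f} mm to {x[e - 1]:.1f} mm")
```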
B-1 and B-2 propagate in opposite directions until coalescing with each other (cf. image 9). The position of image 11 is nearly the end of the yield plateau, and the coalesced band almost disappears at this point. The deformation process of QT steel over the whole stress-strain curve was examined via DIC. It was found that plastic instability occurred only within a certain range of the stress-strain curve; this range is enclosed in Figure 2 by a dotted rectangle. The QT-1 specimen is used to show in detail the evolution of the plastic instability in QT steel. Twenty typical points on the macroscopic stress-strain curve of the QT-1 specimen were selected (cf. Figure 4). The strain-rate fields corresponding to the 20 points (image 1 to image 20) are given in Figure 4. Two bands (B-1 and B-2) formed near the left and right shoulders of the specimen in image 1, respectively. The position of image 1 on the stress-strain curve indicates that a band nucleated ahead of the yield point. The variation in the strain-rate value within B-1 from image 1 to image 6 shows the evolution of B-1: the band first grows (1 → 2), then decays, and finally disappears completely (2 → 6). The position of B-1 in images 1 → 5 is almost unchanged. This indicates that the formation, growth, and disappearance of B-1 took place at almost the same site; apparently, band propagation did not occur. After band formation, B-2 grows along one direction from image 1 to image 2 and then extends along another direction, shown by an arrow in image 2, instead of in its original direction. Image 3 shows the appearance of B-2 after this change in the direction of extension. B-2 propagates from right to left from image 3 to image 4. In image 4, a new band (B-6) formed on the left side of B-2. B-6 and B-2 coalesced and extended across the width of the specimen to produce a new band with several branches (B-7) in image 5. The position of B-7 is almost unchanged from image 5 to image 15, indicating that B-7 did not propagate after its formation. A band (B-3) formed in image 4, which is after the upper yield point. This band extends across the width of the specimen from image 4 to image 6 and then propagates from left to right. In image 8, a new band (B-5) split off from B-3, and B-3 continued to propagate until coalescing with B-7 in image 16. The coalesced band gradually decays and finally disappears. The split band (B-5) gradually decays without propagating and finally disappears in image 14. B-4 undergoes a process of band formation, growth, and disappearance from image 7 to image 13 that is similar to that of B-1 and B-5.
It can be seen from Figures 3 and 4 that the Lüders deformation process in QT steel is more complicated than that in F steel. Because Lüders deformation occurs in the elastic-to-plastic transition region, the plastic region and the elastic region are simultaneously present during the Lüders deformation process. This indicates that the strain distribution over the specimen is heterogeneous. In Figure 5, one point on the stress-strain curve of (a) F steel (F-1 specimen) and (b) QT steel (QT-1 specimen), located nearly in the middle of the Lüders deformation process, is selected. The strain heterogeneity in the two steels is revealed by analyzing the strain distribution at this point. The experimental data of the F-1 specimen and the QT-1 specimen at this point are summarized in columns (a) and (b), respectively. The strain-rate (ε̇_x) field of the F-1 specimen (cf. Figure 5(a2)) shows that two moving plastic bands exist. The corresponding strain (ε_x) field is described in terms of a two-dimensional contour (Figure 5(a3)). The strain field is roughly divided into three regions: the middle is the elastic region, and the others are plastic regions. To quantitatively describe the strain distribution, a center line (Line AB) is drawn in the two-dimensional strain contour, and the strain (ε_x) along the line is extracted and shown in Figure 5(a4). Three reference points, derived directly from the macroscopic stress-strain curve, are also provided: the Lüders strain (ε_L), the average strain over the gauge length (GL), and the elastic limit. The Lüders deformation process is characterized by the formation of Lüders bands (single or multiple bands), followed by band propagation over the whole specimen. The strain corresponding to the ending point of the Lüders deformation process is referred to as the Lüders strain. For a steel with a clear yield plateau, the ending point of the Lüders deformation process is generally around the ending point of the yield plateau. This point in the F-1 and QT-1 specimens was determined directly from the strain-rate field in the present study. The average strain over the GL is the macroscopic strain of the stress-strain curve of the F-1 specimen corresponding to the point of interest. Careful examination of the macroscopic stress-strain curve in the macroscopic elastic region (i.e., from point zero to the upper yield point) shows that stress increases linearly with strain only within the initial region (from point zero to a certain stress level); beyond this region, stress gradually deviates from this straight line. The maximum macroscopic strain of the linear range is denoted as the elastic limit. Figure 5(a4) shows that the middle region is in the elastic state, and the corresponding strain is almost equal to the elastic limit. The width of the two moving plastic bands with respect to Line AB is marked by an arrow (↔) in Figure 5(a2). The range of Line AB enclosed by the band width is shown by two vertical dotted lines in red in Figure 5(a3,a4). Figure 5(a4) shows that the strain varies significantly within the band width, from an elastic strain to a large plastic strain (close to ε_L). The average strain over the GL is the average value of the elastic and plastic regions.
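The elastic-limit definition introduced above (the largest strain of the initial linear range of the macroscopic stress-strain curve) can be sketched numerically. The curve, modulus, and tolerance below are assumed toy values for illustration only, not the measured data.

```python
# A minimal sketch, under assumed synthetic data, of finding the elastic
# limit: fit a line to the initial region and take the largest strain at
# which the measured stress stays within a tolerance of that line.
import numpy as np

strain = np.linspace(0.0, 0.01, 500)
E = 200e3                                        # assumed modulus, MPa
stress = np.where(strain < 0.004, E * strain,
                  E * 0.004 + 0.3 * E * (strain - 0.004))  # toy curve

fit = np.polyfit(strain[:50], stress[:50], 1)    # fit the initial region
line = np.polyval(fit, strain)

tol = 1.0                                        # MPa, assumed tolerance
within = np.abs(stress - line) < tol
elastic_limit = strain[within].max()
print(f"elastic limit ~ {elastic_limit:.4f}")
```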
Figure 5. Local strain (ε_x) distributions in (a) F steel (F-1 specimen) and (b) QT steel (QT-1 specimen) over the gauge length at a point on the macroscopic stress-strain curve located at almost the middle of the Lüders deformation process. The strain-rate field shows the status of the moving bands, and the strain field shows the two-dimensional strain distribution at the point of interest. The local strain (ε_x) distributions along the AB and CD lines were extracted. The average strain over the GL is the macroscopic strain of the stress-strain curve corresponding to the point of interest. ε_L, Lüders strain. The strain heterogeneity in the QT-1 specimen was analyzed in a similar way in Figure 5b. As shown in Figure 5(b2), the shape of the moving bands is more complicated than in the F-1 specimen. The local strain along Line CD (cf. Figure 5(b3)) is shown in Figure 5(b4). The strain in the elastic region (middle region) is close to the elastic limit. The width of the three moving bands with respect to Line CD is shown by an arrow (↔) in Figure 5(b2). The strain variation within the band width is similar to that in the F-1 specimen; the maximum plastic strain within the band width also approaches ε_L. Overall, the strain distribution in the QT-1 specimen is more complicated than that in the F-1 specimen. Conclusions The plastic deformation behavior of medium-carbon tempered martensite steel at room temperature was investigated. The results obtained regarding the plasticity of tempered martensite were as follows. (1) In the elastic-to-plastic transition region, there is no clear yield plateau on the stress-strain curve of medium-carbon tempered martensite steel; (2) The Lüders deformation phenomenon is present, but the Portevin-Le Châtelier phenomenon is not found; (3) The elastic and plastic regions are simultaneously present during the Lüders deformation process. The local strain in the elastic region is close to the elastic limit. The variation in strain within a Lüders band is significant, increasing monotonically from an elastic strain to a large plastic strain that is close to the Lüders strain.
5,285
2021-08-01T00:00:00.000
[ "Materials Science" ]
The pull-out test on knit bamboo reinforcement embedded into concrete beam The pull-out test is generally conducted to accurately obtain the load-carrying capacity of the flexural strength of knit bamboo reinforced concrete beams, which is determined more by the bonding strength than by the tensile strength of the reinforcement in concrete. A bamboo bar with braid knit, coated with Sikadur (a bonding agent based on selected epoxy resin), was expected to develop good friction with concrete. In the pull-out test method, a hydraulic jack was applied to load the bamboo embedded into a pair of concrete blocks, each sized 15 cm × 30 cm × 40 cm. The experimental variables of the specimens were the type of knitted bamboo and the type of coating. Based on the test results, either the bond strength or the tensile strength, calculated according to the failure mechanism, increased with the concrete quality. The compressive strength of the concrete averaged 25.97 MPa. The use of the outer skin surface on the cut braid knit bamboo (type 1), coated with Sikadur, experimentally increased the pull-out load. In the pull-out test, bond failure occurred with the plain bamboo bar, with a bond stress of 1.18 MPa, while tensile failure occurred with the knit bar type 1, with a peak tensile strength of 85.84 MPa. Introduction Various studies have been conducted to reduce the shrinkage of bamboo reinforcement in structural concrete, which causes bonding failure. Coating knit bamboo with varnish was found to increase the bonding stress in concrete [1,2]. Bond strength is caused by the shear interlock between the reinforcement and the concrete. It reflects the combined ability of the reinforcement and the concrete that covers it to resist forces that would cause loss of bonding between the rebars and the concrete [3]. The bond stress between bamboo and concrete can be determined experimentally by the pull-out test. In this research, variations of knit bamboo and coating were used to examine the failure mechanism and the bond strength. Literature review Bond stress is the shear stress on the surface of the concrete, where load transfer occurs between the reinforcing steel and the surrounding concrete so as to modify the stress in the reinforcing steel. This bonding is effectively distributed and allows the two materials to form a composite structure, as shown in Fig. 1, which illustrates the behavior of the bonding along the reinforcement. The embedment length determines the resistance to bar slip. The main basis of the embedment length theory is to take into account a reinforcement embedded in concrete. In order to transfer the force completely through the bond, the bars must be embedded into the concrete to a certain depth, expressed as the embedment length. The tensile force acting on the reinforcement can be resisted by the bonding between the surrounding concrete and the reinforcement. When this bond stress acts evenly over the entire embedded bar, the total force to be resisted before the rod pulls out of the concrete will be equal to the product of the length of the embedded bar, its circumference, and the bond stress. Calculating the amount of reinforcement embedded in the concrete requires the bond stress (μ). This means that the tension is related to the embedded length of the reinforcement in the concrete.
Thus, the pull-out load of bars embedded in concrete can be formulated as follows [4]:

$P_{max} = \mu \, l_d \, p$

where $P_{max}$ is the maximum pull-out load, $\mu$ is the bond stress, $l_d$ is the embedment length, and $p$ is the circumference of the bar. The bonding mechanism between concrete and reinforcement can be examined by a pull-out test on a reinforcement embedded in concrete. In the pull-out test, the pull-out load, and thus the bond stress, can be obtained. The factors influencing the bond strength between concrete and reinforcement are as follows [5]: (1) the gripping (holding) effect resulting from drying shrinkage of the concrete around the reinforcement and friction between the reinforcement and the surrounding concrete; (2) friction resistance against slip and mutual interlock when the reinforcement experiences tensile stress; (3) the diameter of the reinforcement; (4) the coating material; and (5) the concrete cover. The bonding of reinforcement to concrete arises from several factors, including chemical adhesion between the two materials, friction due to the natural roughness of the reinforcement, and the bearing of the ribs of deformed bars against the surrounding concrete [6]. The relatively low bonding stress on plain round rebars causes a slip which removes the adhesion at locations directly adjacent to the slip region, so that the relative movement between the reinforcement and the surrounding concrete is resisted only by friction along the slip region. Deformed bars exhibit a behavior that depends less on surface friction and adhesion and more on the bearing resistance of their ribs against the concrete. The bond stress between the concrete and the reinforcement can be reduced as the stress increases, so that cracks arise in the concrete; the development of cracks then results in deflection.
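As a worked illustration of the relation above, the sketch below predicts a pull-out load from an assumed geometry together with the bond stress reported later for plain bamboo, and shows the companion tensile-stress check used when tensile failure governs. The embedment length and bar dimensions are illustrative assumptions, not the exact test geometry.

```python
# A hedged worked example of P_max = mu * l_d * p and of the tensile
# stress T = P / A; geometry values are assumed for illustration.
side = 7.0                # bar side, mm (assuming a 0.7 cm square bar)
p = 4 * side              # circumference of the square bar, mm
l_d = 300.0               # assumed embedment length, mm
mu = 1.18                 # bond stress, MPa (plain bamboo, from the results)

P_max = mu * l_d * p      # N, since MPa * mm^2 = N
print(f"predicted pull-out load ~ {P_max / 1000:.1f} kN")

# For tensile failure the governing quantity is instead P / A:
A = side * side           # cross-section, mm^2
P = 4.2e3                 # hypothetical pull-out load at failure, N
print(f"tensile stress ~ {P / A:.1f} MPa")
```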
Specimens In the pull-out test, three types of knit bamboo with the outer skin surface, shown in Fig. 2, were compared against a control of plain bamboo sized 0.7 cm × 0.7 cm × 124 cm. Each knit bamboo contained three slices of bamboo including the outer skin surface. To reduce the shrinkage of the bamboo, a coating of either varnish or Sikadur was used. Sikadur is a bonding agent based on selected epoxy resin; varnish is a transparent, hard, protective finish or film primarily used in wood finishing but also for other materials. Two bamboo reinforcements were embedded into a pair of concrete block specimens, each sized 40 cm × 15 cm × 30 cm, as shown in Fig. 4. The variables of the pull-out specimens are tabulated in Table 1. The types of bamboo and coating were described in subsection 3.1. In the preliminary test, both bonding agents were coated on the surface of bamboo knit with the braid configuration (knit 1). The better of the two was selected for the further variables with different bamboo types in the pull-out test. For preservation, the bamboo bars were previously immersed in NaOH and dried naturally before the knitting and coating process. An accelerator-type admixture additive was added to all concrete mixtures. To measure the characteristic compressive strength of 20 MPa, concrete cylinders were cast. The pull-out and cylinder specimens were cured for 10 days (equivalent to 28 days for normal concrete) and then subjected to the pull-out test and compressive test, respectively, as explained in subsection 3.2. Tensile specimens with a flat shoulder made from bamboo were designed to be used with serrated grips. Each sample had two shoulders larger than the gauge section, where failure was expected to occur. The dimensions of the flat test specimens, both actual and according to the ASTM B557M standard, are shown in Table 2 and described in Fig. 3. The load-carrying capacity of bamboo bars is more commonly determined by the pull-out test than by the tensile test. In this research, the tensile strength of almost all bamboo specimens could not be obtained with the Universal Testing Machine due to slip failure at the grip side, even though the bamboo was formed into flat tensile test specimens. The installation of the tensile test can be seen in Fig. 4. Fig. 5. Outline of the pull-out test. In this pull-out test method, a hydraulic jack, piston, and load cell were set up to test the specimen, as seen in Fig. 5. Bamboo was identified as a weak material compared with steel, so that it was easily broken under the tensile test by the Universal Testing Machine [7]. Therefore, the proposed method was usable in the pull-out test. The compressive strength of the cylinder concrete mix for the pull-out specimens was also measured with a Compressive Testing Machine, as seen in Fig. 6. Fig. 3. The dimension of the flat tensile test specimen. Testing device Testing of materials was initially conducted to identify their characteristics. Testing of aggregates, such as density and absorption, was conducted beforehand to determine the appropriate proportions of the concrete mix. Since the bond stress between bamboo and concrete is influenced by the concrete quality, cylinder specimens of each pull-out concrete mixture were prepared and tested. The process of the pull-out test is briefly explained in Fig. 7. The failure mechanism of the tensile test specimens, shown in Table 3 and Fig. 8, was not satisfactory because of slip failure at the grip side under testing by the Universal Testing Machine, as explained before. Compressive test The averaged compressive strength of the cylinder specimens can be seen in Table 4. Pull-out test The peak load from the pull-out test can be used to calculate the bond stress between the bamboo bars and the concrete. Fig. 9a shows the relationship between pull-out load and slip for the specimens reinforced with knit bamboo type 1 coated with Sikadur. From the test results, these specimens have the greatest pull-out load, about 23 kN on average, with a slip of about 5 mm on average, because bamboo knit 1 has a high friction force with concrete. The use of Sikadur can increase the pull-out load, which is why it was chosen as the coating for the other knit variations. The pull-out loads of the specimens with knit bamboo types 2 and 3 are almost similar, about 13 kN. The slip between bamboo and concrete in the specimen with bamboo type 3 is smaller than in the specimen with bamboo type 2 (Figs. 9b and 9c). Fig. 9c. The pull-out load and slip curve of specimens with knit bamboo type 3 coated with Sikadur. Fig. 9d shows that the ability of plain bamboo to carry the pull-out load is slightly higher than that of knit bamboo. However, plain bamboo undergoes a greater slip than knit bamboo. As shown in Fig. 9e, the specimen with knit bamboo type 1 coated with varnish carried a pull-out load about 5 kN lower than that coated with Sikadur. However, the slip behavior in this case cannot be compared due to the inconsistent results of the repeated samples (Fig. 9e). The use of either plain or knitted bamboo bars served to identify the failure mechanism in the pull-out test but could not clearly show the maximum load that each could carry.
The maximum load of plain bamboo is 13.25 kN, higher than that of knit bamboo. This may be caused by the determination of the cross-section area and by the twisting effect on the fiber hardness. Nevertheless, among the knitted bamboo variations, braid type 1 (A1B1) carried the highest average maximum load, about 11.13 kN. The results show that the specimen reinforced with knit bamboo type 1 improves the pull-out load more than the others. Specimens coated with Sikadur can carry the pull-out load better than varnished and uncoated ones. It is found that knit bamboo can improve the bonding stress but decreases the load-carrying capacity in comparison with plain bamboo. The pull-out load in this research is also used to calculate the bond stress or tensile stress, depending on the failure pattern. Failure mechanism Slip failure generally occurs in the pull-out test when the bonding strength of the bamboo bars is less than the yield strength. Table 5. Bond stress analysis of specimens. Bond stress analysis based on pull-out test The pull-out results take into account either the bond stress or the tensile stress, depending on whether tensile failure occurs. The tensile stress T obtained in the pull-out test is the ratio between the pull-out load and the cross-section area of the bamboo. As seen in Table 5, knit bamboo embedded in concrete can undergo tensile failure in the pull-out test. Fig. 9e. The pull-out load and slip curve of specimens with knit bamboo type 1 coated with varnish. Slip failure is shown by shear failure on the contact surface between the concrete and the reinforcement. According to the American Concrete Institute standard code, there are three types of collapse in the pull-out test: tensile failure, concrete collapse followed by pulling out of the reinforcement, and collapse by pulling out of the reinforcement from the concrete. The bond stress can be analyzed in the case of the plain bamboo bars embedded in concrete, which underwent pull-out failure, as seen in Fig. 10. On the other hand, the pull-out tests of specimens reinforced with the knit bamboo resulted in tensile failure, for which the tensile stress of the bamboo is taken into account, as seen in Fig. 11. The roughness between knit bamboo and concrete improves because of interlocking between the bars' voids and the concrete. The bonding between knitted bamboo and concrete is also influenced by the improvement of the concrete quality. The bond between bamboo and concrete increases so that tensile failure of the reinforcement occurs. On the other hand, plain bamboo has less bonding with concrete, so that bond failure, or pull-out failure, occurs; the bond stress can then be obtained. Fig. 12. The ratio between either the bond stress or the tensile stress and the concrete compressive strength. Figure 12 shows the ratio between either the bond stress or the tensile stress and the concrete compressive strength in the pull-out specimens. Knit bamboo embedded in the pull-out specimens increased this ratio more than plain bamboo. This means that knit bamboo has good friction with concrete, which prevents bond failure. Conclusions Based on the results and discussion of this research, it can be concluded as follows: a. Knit bamboo affects the pull-out load improvement, because it contributes to the friction force between the reinforcement and the concrete. It can result in tensile failure in the pull-out test, which means it has great friction, or bonding, with concrete.
Thus, it is found that the bond stress is influenced by the friction force between bamboo and concrete. b. Sikadur, as a bonding agent, can be recommended as a good coating compared with varnish, because it increases the bond stress as well as the pull-out load. c. Plain bamboo results in pull-out failure, which means there is no good bonding between plain bamboo and concrete.
3,304
2019-01-01T00:00:00.000
[ "Materials Science", "Engineering" ]
The role of semaphorin 3A on chondrogenic differentiation Osteoblast-derived semaphorin 3A (Sema3A) has been reported to be involved in bone protection, and Sema3A knockout mice have been reported to exhibit chondrodysplasia. From these reports, Sema3A is considered to be involved in chondrogenic differentiation and skeletal formation, but much remains unclear about its function and mechanism in chondrogenic differentiation. This study investigated the pharmacological effects of Sema3A on chondrogenic differentiation. The amount of Sema3A secreted into the culture supernatant was measured using an enzyme-linked immunosorbent assay. The expression of chondrogenic differentiation-related factors, such as Type II collagen (COL2A1), Aggrecan (ACAN), hyaluronan synthase 2 (HAS2), SRY-box transcription factor 9 (Sox9), Runt-related transcription factor 2 (Runx2), and Type X collagen (COL10A1), in ATDC5 cells treated with Sema3A (1, 10, and 100 ng/mL) was examined using real-time reverse transcription polymerase chain reaction. Further, to assess the deposition of total glycosaminoglycans during chondrogenic differentiation, ATDC5 cells were stained with Alcian Blue. Moreover, the amount of hyaluronan in the culture supernatant was measured by enzyme-linked immunosorbent assay. The addition of Sema3A to cultured ATDC5 cells increased the expression of Sox9, Runx2, COL2A1, ACAN, HAS2, and COL10A1 during chondrogenic differentiation. Moreover, it enhanced total proteoglycan and hyaluronan synthesis. Further, Sema3A was upregulated in the early stages of chondrogenic differentiation, and its secretion decreased later. Sema3A increases extracellular matrix production and promotes chondrogenic differentiation. To the best of our knowledge, this is the first study to demonstrate the role of Sema3A in chondrogenic differentiation. Introduction The vertical growth of the mandibular bone depends mainly on the growth of the mandibular ramus, which in turn depends mainly on endochondral ossification occurring in the mandibular condyle. Endochondral ossification is the differentiation of undifferentiated mesenchymal cells into cartilage cells after the formation of cartilage primordia, followed by the differentiation of the cartilage cells into hypertrophic cartilage cells and gradual calcification, with the surrounding osteoblasts replacing the tissue; the cartilage tissue is thus replaced with bone tissue through ossification (Goldring et al. 2006; Mackie et al. 2008). Semaphorin, which we focused on in this study, is a group of proteins identified as repulsive nerve guidance factors that determine the direction of nerve axons (Kolodkin et al. 1993). More than 20 types of semaphorin molecules have been discovered, functioning not only in the nervous system but also in various physiological processes, such as the skeletal, vascular, endocrine, and immune systems (Behar et al. 1996; Sekido et al. 1996; Kumanogoh et al. 2002; Serini et al.
2003). The semaphorin family has the unique structural feature of a region (sema domain) consisting of approximately 500 amino acids outside the cell and is classified into eight subgroups based on differences in the structure on the C-terminal side following the sema domain. In addition, semaphorins are classified into three types depending on their mode of binding to the cell membrane: cell membrane-bound, glycosylphosphatidylinositol-anchored, and secretory. Among the eight subgroups, class 3 semaphorins are secretory semaphorins typical of vertebrates, with seven members, from semaphorin 3A (Sema3A) to semaphorin 3G, classified according to differences in their base sequences (Yazdani and Terman 2006). Sema3A has a high affinity for Neuropilin-1 (NRP-1), and NRP-1 performs intracellular signaling by forming a receptor complex with PlexinA1 (PLXNA1) (Chen et al. 1997; Kolodkin et al. 1997; Winberg et al. 1998; Takahashi and Strittmatter 2001). In a co-culture experiment of a chicken dorsal root ganglion cell mass and Sema3A, the neurites on the side of the Sema3A-expressing cells regressed; therefore, Sema3A was initially identified as a repulsive axonal guidance factor (Kaneko et al. 2006). Recently, osteoblast-derived Sema3A has been reported to be involved in bone protection by increasing bone mass, simultaneously promoting osteoblast differentiation and suppressing osteoclast differentiation via estrogen regulation (Hayashi et al. 2012, 2019). In addition, Sema3A knockout mice have been reported to exhibit chondrodysplasia, such as dyscoupling of the costal cartilage or sternum (Behar et al. 1996). The expression of Sema3A and its receptors in cartilage has also been reported (Sumi et al. 2018). Thus, Sema3A may play an important role in cartilage growth. Therefore, this study aimed to elucidate the pharmacological effects of Sema3A on chondrogenic differentiation. Cell line and culture conditions All experiments were performed using the mouse chondrogenic cell line ATDC5 (RIKEN Cell Bank, Tsukuba, Japan), which is derived from the embryonal carcinoma cell line AT805 (Atsumi et al. 1990). ATDC5 cells provide an in vitro differentiation model that faithfully replicates the complex differentiation stages of chondrocytes, from undifferentiated mesenchymal cells to mineralization, under consistent culture conditions (Shukunami et al. 1997). Therefore, ATDC5 cells are widely utilized in research concerning the proliferation and differentiation of chondrocytes (Nakatani et al. 2007; Yoshioka et al. 2015; Yamaguchi et al. 2018). Cells were seeded and cultured in 6-well plates (FALCON, Franklin Lakes, NJ) at a density of 6.0 × 10⁴ cells/well. The culture was maintained in Dulbecco's Modified Eagle's Medium/Nutrient Mixture F-12 Ham (DMEM/Ham's F12; Sigma Aldrich, St. Louis, MO) supplemented with 5% fetal bovine serum (FBS; Biological Industries, Cromwell, CT), 10 μg/mL human transferrin (Sigma Aldrich), and 3 × 10⁻⁸ M sodium selenite (Sigma Aldrich) under an atmosphere of 5% CO₂ in a humidified incubator at 37 °C. The medium was changed every other day. When the density of the cells on the plates reached 50% confluence, the cells were switched to a differentiation medium, DMEM/Ham's F12 containing 5% FBS (Biological Industries, Kibbutz Beit Haemek, Israel), 10 μg/mL bovine insulin (Sigma Aldrich), and 37.5% ascorbic acid 2-phosphate (Sigma Aldrich), to induce chondrogenic differentiation. The cells were cultured in the differentiation medium for 24 d.
Measurements of Sema3A protein concentrations in the supernatant of ATDC5 cultures The cell culture supernatants were collected. Particulates were removed by centrifugation for 15 min at 1000 × g and 2-8 °C. The concentration of Sema3A in the supernatant was measured using a mouse Sema3A enzyme-linked immunosorbent assay (ELISA) kit (Cusabio, Wuhan, China). This assay employs a quantitative sandwich enzyme immunoassay technique, performed according to the manufacturer's guidelines. Standard curves were generated using a standard process. The experiments were performed in triplicate. Sema3A application To examine the effect of Sema3A on chondrogenic differentiation, ATDC5 cells were treated with Sema3A (1, 10, and 100 ng/mL; recombinant mouse Sema3A Fc chimera; R&D Systems Inc., Minneapolis, MN) at the time of medium change on the seventh day, and the cells were cultured for 24 h. Representative data were obtained from four samples from each group. To investigate the long-term effects of Sema3A on Aggrecan gene expression, 10 ng/mL Sema3A was added continuously by changing the medium every two days, and the cells were cultured for 7 d. To investigate the effects of Sema3A on the synthesis of proteoglycans and the amount of hyaluronan synthesized in the extracellular matrix, 10 ng/mL Sema3A was added continuously during medium changes, and the cells were cultured for 13 d. Real-time qPCR Total RNA was isolated from cell cultures using the TRIzol reagent (Invitrogen Life Technologies Inc.) according to the manufacturer's instructions. cDNA was generated using the ReverTra Ace qPCR RT Master Mix (Toyobo, Osaka, Japan). Real-time qPCR was performed using the THUNDERBIRD SYBR qPCR Mix (Toyobo) and a Light Cycler System (Roche Diagnostics, Mannheim, Germany) to quantify target gene expression. The primer sets used are listed in Table 1. Relative gene expression levels were calculated using S29 as an internal control. Normalized cycle threshold (Ct) values were compared with those of controls. Data were calculated as relative expression by 2^(−ΔCt), where the cycle threshold is the beginning of logarithmic amplification and ΔCt is the reference gene Ct subtracted from the target gene Ct. A minimum of four independent measurements were obtained.
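A minimal sketch of the 2^(−ΔCt) calculation described in the qPCR subsection above; the Ct values are invented for illustration, and S29 is the reference gene named in the text.

```python
# Relative expression = 2^(-dCt), with dCt = Ct(target) - Ct(reference).
ct_target = 24.8          # hypothetical Ct for a target gene (e.g., COL2A1)
ct_s29 = 18.2             # hypothetical Ct for the S29 reference gene

d_ct = ct_target - ct_s29
relative_expression = 2.0 ** (-d_ct)
print(f"relative expression = {relative_expression:.4g}")
```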
Alcian blue staining To assess the deposition of total glycosaminoglycans during chondrogenic differentiation, ATDC5 cells were stained with Alcian Blue. 10 ng/mL Sema3A was added during medium changes every two d, and Alcian Blue staining was performed on the 13th day of differentiation. The cells were fixed with a 4% paraformaldehyde solution and stained with 1% Alcian blue staining solution (Sigma) for 1 h at 25 °C. After washing the wells with pure water, the plates were photographed. The Alcian blue dye was extracted using a 6 M guanidine hydrochloride solution. The absorbance was measured at 450 nm using a microplate reader (Multiskan FC, Thermo Scientific, Waltham, MA). Measurement of hyaluronan content in culture supernatant 10 ng/mL Sema3A was added during medium changes every two d, and the culture supernatants were collected on days 7, 9, 11, and 13 of differentiation. The concentration of hyaluronan in the supernatant was measured using a Hyaluronan quantification kit (Cosmo Bio, Tokyo, Japan). This assay employs a quantitative sandwich enzyme immunoassay technique, performed according to the manufacturer's guidelines. Standard curves were generated using a standard process. The experiments were performed in triplicate. Statistical analysis All experiments were repeated at least three times. Statistical analyses were performed using one-way analysis of variance. Subsequently, Scheffe's multiple comparison test or the Mann-Whitney U test (Statcel 4 software; OMS Publishing Inc., Saitama, Japan) was performed when necessary. A p-value < 0.05 was regarded as indicative of a statistically significant difference, and a p-value < 0.01 as indicative of a highly significant difference.
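The comparison scheme in the statistical analysis subsection can be sketched with SciPy standing in for the Statcel 4 package named above; the group measurements below are synthetic placeholders, not the study's data.

```python
# A hedged sketch: one-way ANOVA across dose groups, followed by a
# Mann-Whitney U test for a pairwise comparison.
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.1, 6)
sema_10 = rng.normal(1.4, 0.1, 6)     # hypothetical 10 ng/mL group
sema_100 = rng.normal(1.5, 0.1, 6)    # hypothetical 100 ng/mL group

f_stat, p_anova = stats.f_oneway(control, sema_10, sema_100)
u_stat, p_mwu = stats.mannwhitneyu(control, sema_10)

print(f"ANOVA p = {p_anova:.3g}; Mann-Whitney p = {p_mwu:.3g}")
```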
Results Expression of Sema3A at each differentiation stage of ATDC5 cells To confirm the expression of Sema3A at each differentiation stage in ATDC5 cells, we examined the mRNA expression levels using qPCR. Sema3A gene expression significantly increased from day 7 after the onset of differentiation, reaching a maximum on day 14, consistent with Type II collagen (COL2A1) gene expression (Fig. 1A and B). From day 17, Sema3A gene expression decreased, and Type X collagen (COL10A1) gene expression increased (Fig. 1A and C). When the amount of Sema3A protein secreted into the culture supernatant of ATDC5 cells was measured using ELISA, it reached a maximum on day 14 of differentiation and decreased by day 21, but both values were significantly larger than that on day 7 (Fig. 1D). Effect of Sema3A on extracellular matrix synthesis To evaluate the effect of Sema3A on the ability to synthesize proteoglycans, ATDC5 cells were stained with Alcian blue (Fig. 3A). The absorbance of the proteoglycans in the Sema3A group was significantly higher than that in the control group (Fig. 3B). Additionally, when 10 ng/mL Sema3A was added continuously every two d for 7 d, an increasing trend in ACAN gene expression was confirmed (Fig. 3C). To examine the effect of Sema3A on the amount of hyaluronan constituting the extracellular matrix, the amount of hyaluronan was measured. When Sema3A was added, the amount of hyaluronan tended to increase (Fig. 4A). Discussion In this study, we clarified the role of Sema3A in chondrogenic differentiation. First, we analyzed the gene expression of Sema3A in ATDC5 cells. Sema3A expression showed a trend consistent with that of COL2A1, which significantly increased from day 7 after the onset of differentiation and reached a maximum on day 14. From day 17, Sema3A gene expression decreased and COL10A1 gene expression increased. Sema3A secretion into the culture supernatant peaked on day 14 and decreased by day 21. This trend revealed that Sema3A is abundantly secreted in the early stage of chondrogenic differentiation, and its secretion decreases in the late stages. To the best of our knowledge, this is the first study to clarify Sema3A expression during chondrogenic differentiation. Next, we examined the effects of exogenous Sema3A administration on cartilage differentiation. We added Sema3A on the 7th day of differentiation, when Sema3A secretion capacity was lowest. The dose of Sema3A in this experiment was determined with reference to previous studies and the amount of Sema3A protein expressed in the culture supernatant. To investigate the effect of Sema3A on human dental pulp stem cells, Yoshida et al. added recombinant Sema3A at a concentration of 10 ng/mL to the culture medium (Yoshida et al. 2016). Kajii et al. also added Sema3A at a concentration of 1 ng/mL or 100 ng/mL to the culture medium of human chondrocytes to investigate the functions of Sema3A and PLXNA2 in human chondrocytes (Kajii et al. 2018). Based on these reports, the concentrations of Sema3A added were set at 1, 10, and 100 ng/mL. Since the Sema3A protein expression level on day 14, which showed the maximum expression, was about 0.12 ng/mL, the concentrations of Sema3A added in this experiment were 10, 100, and 1000 times the expression level. In the Sema3A addition experiment, Sox9 and Runx2 gene expression was significantly enhanced by the addition of Sema3A on the seventh day. Although the regulation of Sox9 and Runx2 expression remains unclear, recent studies on ATDC5 have shown that the small G protein Rac1 may be a positive regulator of Sox9-mediated chondrogenesis (Woods et al. 2007). When Sema3A binds to NRP-1 and stimulates PLXNA1, FARP2 binds to PLXNA1 immediately below the cell membrane, dissociates, and activates Rac1 (Zhou et al. 2008). Therefore, Sema3A may regulate Sox9 expression via Rac1. Sox9 has been reported to induce the expression of Aggrecan and Type II collagen (Yonashiro et al. 2009). In the present study, the addition of Sema3A enhanced the gene expression of COL2A1, suggesting that this was secondarily induced by Sox9, whose expression was enhanced by the addition of Sema3A. It has been reported that Runx2 expression is low in proliferating chondrocytes and high in pre-hypertrophic chondrocytes, promoting chondrocyte hypertrophy (Nishimura et al. 2012). On the other hand, Runx2 maintains the expression of COL2A1 through the intron 6 enhancer and is involved in the regulation of early chondrocyte differentiation markers (Nagata et al. 2022). Additionally, Runx2 directly regulates Gpr132, Sfn, c-Myb, and cyclin A1, controlling chondrocyte proliferation (Chen et al. 2014). The transcription factor Dmrt2 promotes chondrocyte hypertrophy by binding to Sox9 and Runx2 (Ono et al. 2021). These reports suggest that Runx2 plays multiple roles in the proliferation, differentiation, and hypertrophy of chondrocytes. The present research suggests that Sema3A increases not only Sox9 but also Runx2 gene expression, indicating its potential to promote chondrocyte differentiation. COL10A1 expression is regulated by Runx2, and the increase in Runx2 induced by Sema3A may contribute to the promotion of COL10A1 gene expression. The synthesis and degradation of hyaluronan are regulated according to the chondrocyte differentiation stage, and the expression of the hyaluronan synthase HAS2 is upregulated in hypertrophic chondrocytes in the growth plate (Magee et al. 2001; Suzuki et al. 2005). Sema3A addition increased HAS2 and ACAN expression. Furthermore, the addition of Sema3A up to day 13 of differentiation significantly increased proteoglycan synthesis, indicating that Sema3A promotes extracellular matrix production. Conclusions In conclusion, Sema3A is secreted during the early stages of chondrogenic differentiation, and its secretion decreases during the late stages. Sema3A increased extracellular matrix production and promoted chondrogenic differentiation. This study suggests Sema3A as one of the factors controlling endochondral ossification in the mandibular condyle, and it may be used as a therapeutic method for promoting the growth of the mandible in the future.
Figure 3. Effects of Sema3A on the ability to synthesize proteoglycans. For Alcian blue staining, Sema3A was added at a concentration of 10 ng/mL for 13 d. (A) To assess the deposition of total glycosaminoglycans during chondrogenic differentiation supplemented with Sema3A, ATDC5 cells on day 14 were stained with Alcian blue. (Scale bar: 200 μm) (B) Absorbance was measured at 450 nm using a microplate reader. (*p < 0.05, N = 8) (C) ATDC5 cells were treated with 10 ng/mL Sema3A continuously every two d for 7 d. The gene expression levels of ACAN were determined using qPCR. Figure 4. Effects of Sema3A on the ability to synthesize hyaluronan. To evaluate the amount of hyaluronan synthesized during differentiation of ATDC5 supplemented with Sema3A, culture supernatants on days 7, 9, 11, and 13 were collected and the amount of hyaluronan was measured. Negative control (n.c.); no statistical difference (n.s.) (N = 12). Table 1. The primer sequences for qPCR analysis.
3,768.8
2024-05-10T00:00:00.000
[ "Medicine", "Biology" ]
Interorganizational Cost Management Study on Inhibitors Strategic cost management in supply chains is not a new concept. Coordinated actions between companies of the same chain, in order to reduce costs and the end consumer price, offer opportunities for improved results. Interorganizational Cost Management (IOCM) is a structured approach with a broad vision, beyond the borders of the organization, which aims to reduce costs at the internal and external levels. Indeed, cost management is a complex issue that permeates all areas of the organization and may pose a number of difficulties to be implemented and sustained. Thus, this work has the overall goal of identifying, in the literature, the factors and conditions that inhibit the applicability of the Interorganizational Cost Management approach. To achieve this goal, an analysis was made of 35 academic research studies available in the literature that reported the difficulties faced by companies in cooperative cost management. The analysis of the studies showed the perceptions of different companies and described the difficulties they face; therefore, the present research is qualitative and exploratory. Factors that inhibit IOCM were grouped into: (i) corporate strategy; (ii) integration of companies; (iii) people; (iv) intra- and interorganizational processes; (v) corporate training and education; (vi) disputes between companies; and (vii) lack of trust between companies. Introduction The evolution of markets and the increasing complexity of supply chains led to the emergence of new management techniques and new information exchange systems between companies, extending from the internal environment to interorganizational relations (Kulmala, Paranko, & Uusi-Rauva, 2002). Wincent (2008) stated that interorganizational networks emerged as an alternative to meet the needs of companies, and that inter-relationship is the future trend. For Drucker (1997), it is up to companies to position themselves and articulate relationships that help them in their activities in order to face challenges (such as globalization) as opportunities. Companies can gain competitive advantage through Interorganizational Cost Management (IOCM), whose goal is to find solutions that have lower costs when compared to the sum of the costs of companies acting individually (Kulmala et al., 2002). Cooper and Slagmulder (1999) explained that Interorganizational Cost Management is a structured approach to coordinate companies' activities in a supply chain in order to reduce total network costs. Studies on inter-relationships claim that this approach is a tool for companies to grow in the market where they operate, generating benefits for all parties involved (Borin & Farris, 1990; Ellram, 1994; Ellram & Siferd, 1998; Cooper & Slagmulder, 1999; Ferrin & Plank, 2002; Lalonde, 2003). However, from an empirical point of view, a number of companies fail to participate in cooperative processes, and many networks are unable to consolidate their structures and management models (Pereira, Alves, & Silva, 2010). The analysis of the opinions of the cited authors shows that IOCM is an opportunity in this scenario, but its application can be a difficult task. Additionally, given the advantages it offers, most companies would be expected to apply IOCM, but paradoxically, the opposite occurs (Kulmala et al., 2002).
Thus, as inter-relationships can provide benefits and competitiveness in the market, one may wonder why companies abandon partnerships, what reasons lead to difficulties, and what subsequently causes companies to break off these relationships. Bastl, Grubic, Templar, Harrison, and Fan (2010) explained that managers are facing new challenges in the search for competitive advantage outside their organizations; however, they have little guidance about the potential challenges related to the implementation of approaches to Interorganizational Cost Management. The authors explain that managers find it difficult to deal with the complexity of the problems that prevent organizations from succeeding in their cooperative actions. Traditional Approaches of Accounting to Interorganizational Cost Management The functions that accounting should exercise in an interorganizational context are different from those carried out for a single company. Bastl et al. (2010) stated that traditional accounting practices often do not fulfill the role of managing interorganizational relationships. Seal, Cullen, Dunlop, Berry, and Ahmed (1999) and Tomkins (2001) argued that accounting focused on inter-relationships should generate information that helps managers make decisions in this scenario. The information generated by accounting should cover not just one company but should consider the members of the chain that such a company is a part of. The Standard Cost, for example, can be recognized as an efficient accounting practice when applied by a company in its home environment. However, Gupta and Gunasekaran (2004) claimed that the Standard Cost does not encourage improvements in the supply chain. Forming Interorganizational Networks What motivates companies to form interorganizational networks? Amato Neto (2000) stated that companies seek competitive advantages, e.g., the possibility of combining skills, using the know-how of other enterprises, sharing the workload of carrying out technological research, sharing the risks and costs of exploring new opportunities, sharing resources, and strengthening purchasing power. In fact, the interest in forming strategic alliances must be accompanied by initial concerns about the protection of the enterprises. Pereira et al. (2010) argued that the process of forming interorganizational networks requires, in most cases, enforcement mechanisms to assist in regulating the relationship between companies. Janowicz-Panjaitan and Noorderhaven (2009) explained that these mechanisms are intended, among other purposes, to punish opportunistic behavior, and that the network must have rules, a code of ethics, and a committee that monitors participants' compliance with the rules. Abbade (2005) pointed out that companies must take some actions to protect themselves from opportunistic behavior; for example, they should determine the role of each member of the chain, specifying their rights and obligations; they should protect their strategic resources, which cannot be violated or disclosed; and they should learn about the history of possible chain partners.
Inter-Relationship Problems In transaction cost theory, Williamson (1985) explained that in a perfect (and unrealistic) scenario, in which there was no opportunism among the members of a chain and information was available to all parties at all times, failure in operations would not be very likely. Thus, the theory states that the fear of opportunistic behavior and the absence or lateness of information for the members of the chain create the need for the network to establish strong control mechanisms, increasing the bureaucracy and complexity of transactions between its members. Williamson (1985) noted that uncertainty as to the benefits and obligations allocated to each company of the network is a factor that inhibits the interest of companies in entering into cooperative relations. In fact, the literature states that companies can gain many benefits by coordinating efforts and acting collaboratively with members of the chain, and that the benefits offered are sufficient to justify the formation of interorganizational networks (Cooper & Slagmulder, 1999). However, many problems can occur during this process, which can make it difficult or impossible to achieve the goals pursued by the companies. Methodological Aspects First, the field of exploration was delimited, and the databases were selected through the following steps: (i) the databases listed in the Portal de Periódicos Capes (501 databases) were selected; (ii) in the field "Knowledge Areas", the "Social Sciences" option was specified, and the sub-area "Business Administration-Public Administration-Accounting" was chosen, which resulted in 73 databases; (iii) the databases that provide "Full Text" were selected (24 databases); and (iv) a selection was made of the databases that allowed searches on "All text fields" and the use of at least two axes through Boolean expressions. The process resulted in 13 databases: ANNUAL REVIEWS; CAMBRIDGE JOURNALS ONLINE; EMERALD INSIGHT; JSTOR; OECD LIBRARY; SAGE JOURNALS ONLINE; APA PSYCARTICLES; WILEY ONLINE LIBRARY; PROQUEST; WEB OF SCIENCE; SCIENCEDIRECT; SCOPUS; and EBSCO. In the selected databases, searches were conducted with terms in English, using two axes. Axis 1 contained words whose meaning is similar to "inhibitor", i.e., words that convey the idea of a "barrier", something that gets in the way or hinders. Axis 2 contained terms related to "inter-organizational cost management". The searches were conducted in January 2015, selecting the option "all text fields", using the Boolean operator "AND" between the axes and the Boolean operator "OR" between terms, without delimitation of the time period. The search was conducted in the 13 selected databases; however, only seven databases returned results (at least one article). The databases that provided articles were EMERALD INSIGHT; WILEY ONLINE LIBRARY; PROQUEST; WEB OF SCIENCE; SCIENCEDIRECT; SCOPUS; and EBSCO. The search yielded a total of 418 results.
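The two-axis query construction described above can be sketched as follows; the term lists are abbreviated examples, not the full sets used in the study.

```python
# A minimal sketch of assembling the search string: OR within each axis,
# AND between axes, with multi-word terms quoted.
axis1 = ["inhibitor", "barrier", "obstacle"]
axis2 = ["interorganizational cost management",
         "inter-organizational cost management"]

def or_group(terms):
    # Quote multi-word terms and join the axis with OR.
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(or_group(axis) for axis in (axis1, axis2))
print(query)
# -> (inhibitor OR barrier OR obstacle) AND
#    ("interorganizational cost management" OR
#     "inter-organizational cost management")
```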
In order to ensure the relevance of the search results, the content was filtered through the following steps: (i) the 418 articles arising from the search were downloaded; however, only 225 articles were fully available and free; (ii) the 225 studies were imported into a bibliographic management software, Mendeley®, which was used to delete 134 repeated or misaligned studies, thus yielding 91 works; (iii) the titles and abstracts were read, and the articles that were not aligned with the search criteria were excluded. In order to be included, a study should: a) address the issue of cost management in the interorganizational context; and b) contribute to the debate about inhibitors and barriers to Interorganizational Cost Management. After filtering, 35 studies were selected to compose the bibliographic portfolio to be analyzed. Then, the articles were read in full. The reading of each article identified the factors that hampered or hindered Interorganizational Cost Management in each environment, enabling the creation of a report with all the identified factors. Twenty-five factors inhibiting IOCM practices were identified and, in order to facilitate the discussion, they were grouped into related inhibiting factors: (i) corporate strategy; (ii) integration of companies; (iii) people; (iv) intra- and interorganizational processes; (v) training and corporate education; (vi) disputes between companies; and (vii) lack of trust between companies. Corporate Strategy Porter (1980) explained that "strategy" can be seen as building defenses against the competitive forces or as determining positions for the company to reach its goals. According to Porter (1980), strategy describes what is absolutely necessary to reach a goal, where the ultimate goal of an organization is fulfilling its mission to ensure business continuity. The way organizations determine and execute their strategies decisively influences the way the parties consider the balance and efficiency of cooperation, and it influences the motivation of organizations to continue or terminate cooperation over time (Guth, Schmittberger, & Schwarze, 1982). Table 1 shows the inhibitors related to corporate strategy. One of the barriers to the success of partnerships between companies is the difference between the partners' strategies, that is, the different reasons that led the companies to enter into alliances. Understanding partners' goals, and one's compatibility with them, can improve the performance of partnerships, in that companies' characteristics and their individual goals play a significant role in building relationships (Ojala & Hallikas, 2007; Hitt et al., 2000). Ferrin and Plank (2002) explained that costs should be examined from a long-term perspective and argued that they should include elements beyond the initial purchase price. Partner companies must signal, in a credible manner, their intention to continue the relationship in the long run when deployment of IOCM is intended (Möller et al., 2011; Ferrin & Plank, 2002). The fact that partnerships are aimed at the long term does not mean that the strategy of earning profits should be a long-term one. Pereira et al. (2010) explained that the imbalance between short-term and long-term gains can inhibit inter-relationships and be a point of tension in many relationships. Lin et al.
(2001) explained that companies cannot lose focus on their customers. According to the authors, companies should design their relationships with customers in mind while also improving profit potential. However, managers may become overly attentive to cost-related issues and possibly lose focus on customers. Himme (2012) stated that a strategic cost management plan requires a holistic vision in order to maintain focus on the customer, since, according to the author, the work becomes useless when the customer-oriented vision is lost. Integration between Companies In the process of cooperation between companies, new features arise, such as increasing complexity and the need to work within and across organizational boundaries. Thus, the dynamics of the market requires companies to quickly share and integrate information. Companies should be able to dynamically communicate and analyze the processes taking place between them. Table 2 shows the inhibitors relative to integration between companies. One factor that inhibits the success of cooperative relationships is the lack of cross-functional teams in companies. Procurement management (from buyers) and supply management (from suppliers) need specific support from cost management experts who are assigned to support these processes (Ellram, 2002). The management of close relations between suppliers and buyers requires additional attention to supply chain issues and the inclusion of more data about the organization and the external environment. The relationship between the partner companies can hardly flow without the help of certain management technologies that, however imperfectly, help managers manage the relationships between partners (Ramos, 2004; Mouritsen & Thrane, 2006). People The determination of corporate strategies should take into account the characteristics of the people involved in the change, the period during which this change is to be implemented, the depth of change and the desired duration of its effects (Lopes, Stadler, & Kovaleski, 2003). In fact, people may not be prepared or able to make the changes that will occur when companies implement Interorganizational Cost Management. Table 3 shows the people-related inhibitors. Source: Survey data (2015). Axelsson et al. (2002) explained that the change in organizational behavior refers not only to the question of designing management systems, but is a systemic effort. In order to achieve the organizational changes needed to allow companies to expand inter-relationships, they should observe how interested people are in the changes, and whether or not this can be a barrier. Ellram (1994) pointed out that the flexibility to change is an important factor for the successful implementation of management programs. Thus, changes in organizational culture can pose a double challenge, depending on the level of resistance of employees and the complexity of the strategic cost management approach. Ellram and Siferd (1998) pointed out that when an organization attempts to make a change in its operations, the nature and extent of this change need to be clearly defined.
The internal culture and organizational structure of the company should be characterized by the support of senior management for cost management purposes. In addition to resources that can support chain cost management, cross-functional teams should be encouraged to identify and implement cost management approaches. Those responsible for cost management are hampered when they do not receive support from other members of the company (Seal et al., 1999; Ellram, 2002). Himme (2012) explained that it is not enough for a company to have only its managers committed to reducing costs in the organization. The company should seek to develop and encourage its employees so that everyone adopts the cost-reduction philosophy. Himme (2012) also explained that the development of a corporate culture is a long-term process, and persistence is a prerequisite. Intra- and Interorganizational Processes The processes to be executed by the companies in order to act collaboratively can be a determining factor for the success of IOCM. The literature shows some factors that can inhibit Interorganizational Cost Management with regard to processes. Table 4 shows the inhibiting factors relative to intra- and interorganizational processes. Waeytens and Bruggeman (1994) pointed out that problems in the design of cost management models can prevent companies from determining their costs correctly, and they may then have difficulty in collaborating with members of their chain. A complex model with poorly designed activities, which does not provide information on the actual cost of a company's activities, is seen as a barrier to IOCM. Kaplan and Anderson (2004) warned that highly complex cost management models may inhibit their own applicability. The authors explained that the models implemented by companies tend to evolve as they learn about the variety and complexity of their processes, applications, suppliers and customers. According to Kaplan and Anderson (2004), with a view to increasing the precision and detail of information, the models may become overly complex, which can be a barrier to the operation of the chain. According to Dubois (2003), when cost management is subject to interaction between buyers and suppliers, it is not possible to predetermine, at the beginning of the relationship, what should be included in the "total cost" of products or services. The author explains that a predefined total cost could unnecessarily limit cost reduction opportunities. Dubois (2003) explained that as costs are managed jointly, learning about cost behavior can occur, thus favoring continuous improvement. Thus, the idea of a default total cost for a model can bring problems due to the dynamic and changing needs of both partners. Gareth (2005) stated that alliances formed by companies that are too rigid or too flexible cannot achieve success. For the author, balance is the key to success; thus, partners should develop a relationship in which they are able to accommodate the changes occurring in the environment where they are situated without losing the rigidity necessary to keep the structure as agreed between the parties. Gareth (2005) explained that relations between companies must undergo regular assessments, in order to make the adjustments necessary for companies to overcome the difficulties that will appear. Caker (2008) pointed out that the ability of the dominant company to manage the chain is an issue for the development of interorganizational relationships, a point of view shared by other authors (Carr & Ng, 1995; Dekker, 2003).
Corporate Training and Education Bohlander, Snell, and Sherman (2003) explained that "training" is the effort made by the company to encourage the learning of its employees. Corporate training and education are part of an organization's activities to enable the development of skills that make the individual able to perform their current and future roles (Borges-Andrade, 2002). Table 5 shows the inhibiting factors relative to corporate training and education. LaLonde and Pohlen (1996) explained that expertise can overcome many problems associated with cost-related activities across the boundaries of companies. For the authors, expertise can help in the process of Interorganizational Cost Management by identifying the "ineffective" activities carried out by other companies. Ramos (2004) explained that one of the barriers to the application of Interorganizational Cost Management approaches is the absence of competence on the part of business cost managers. The author stated that close relations between suppliers and buyers require additional reports on the problems in supply chains and more data on the organizations and the external environment. Kulmala et al. (2002) stated that companies must assess whether or not supplier relationships are beneficial for their business. In this way, companies should be able to calculate the amount of cost reduction that relationships will offer; thus, there is a need to understand the cost behavior of their products. Education has been one of the most valuable tools in improving the performance of supply chain and cost management. Companies can create training programs for those involved, such as employees and suppliers, so that they can learn the concepts and principles of the inter-relationship (Ellram, 2002; Thomson & Gurowka, 2005). Himme (2012) explained that in order to reduce resistance to change and prepare employees for cost reduction programs, companies must provide training and education to describe the changes that may occur. For the author, it is critical that instructions are given before the changes arise. Thomson and Gurowka (2005) emphasized the importance of the taxonomy of terms used in cost management, as inconsistent or nonexistent terminology and standards can be a barrier to everyday relations between companies. The authors explained that partner companies should avoid noise in communication, and that training and education are fundamental enablers of the success of any strategic cost methodology. Nicolini et al. (2000) suggested that one of the barriers to Interorganizational Cost Management appears because companies often operate without a full understanding of costs along the supply chain. In general, companies develop projects and only then request price quotes from suppliers who were not involved in the project design. Companies analyze whether or not relationships will continue by observing the ability of such relationships to generate demonstrable value for the participants (Cannon & Homburg, 2001). Indeed, the reduction of the total cost of the product available to the end consumer is an indicator of the efficiency of inter-organizational relationships, but it is not the only one. The lack of perceived value generated by the relationships between the companies is identified by Cannon and Homburg (2001) as a factor that can impair the continuity of long-term relationships.
The measurement of the value generated, or of the intangible value resulting from the relations between the companies, is a difficult task. Cannon and Homburg (2001) explained that, in order to create models to assess the generation of value, researchers will need to resort to a variety of perspectives. Conflicts between Companies Cooper and Slagmulder (1999) explained that conflicts arising from relationships may be an impediment to achieving the goals established between network partners. Indeed, conflicts and differences are factors that, according to the literature, inhibit the IOCM practice. Table 6 shows the inhibiting factors relative to conflicts between companies. Source: Survey data (2015). Kulmala (2004) explained that one of the obstacles to interorganizational relations is disagreement over the sharing of benefits. Companies tend not to wish to cooperate and share cost information when the benefits are not shared fairly. Kajüter and Kulmala (2005) stated that there is no general rule on how the savings generated by inter-relationships should be shared, which is justified by the fact that situations vary from case to case. Christopher (1998) pointed out that this does not mean that the benefits should always be shared equally, but that those involved should be in agreement with, and satisfied by, the benefits accruing to each of them. Nicolini et al. (2000) emphasized that market prices of products arising from inter-relationships can be difficult to determine. According to the authors, for some products, such as commodities, it is easier to determine the prices charged to consumers, because prices are set by the market; however, companies may have difficulty in determining the prices of other products. Problems and difficulties are expected in interorganizational networks; thus, conflicts between parties are a normal component of the development of the network. However, according to Arino and De La Torre (1998), constant conflicts can lead to the termination of the relationship. Lui (2009) explained that relationships can end after the stress caused by conflicts, even if the relationship started on the basis of trust, good will of the parties, financial resources and commitment. Agndal and Nilsson (2008) stated that an open-book accounting policy is not necessarily something implemented by the buyer exclusively for the benefit of the buyer. The authors explained that open books can be found in a collaborative environment, where they are recognized as beneficial to both parties. Thus, suppliers should be aware that sharing information in order to conduct strategic cost management can bring benefits to the whole chain. If this benefit is not perceived by suppliers, it can lead to a lack of interest in entering into inter-relationships, a fact seen as a barrier to Interorganizational Cost Management. Lack of Trust between Companies Cooper and Slagmulder (1999) explained that trust is the basis of IOCM, as it allows greater interaction between the agents involved in the network. The authors agree that factors may favor or inhibit the formation of partnerships between companies, and trust is one of the most important of these factors (Cooper & Slagmulder, 1999; Kajüter & Kulmala, 2005; Souza & Rocha, 2009). Lack of trust can trigger a number of factors that inhibit the IOCM practice. Table 7 shows the inhibiting factors relative to lack of trust between companies.
Table 7. Inhibitors relative to the lack of trust between companies (Source: Survey data, 2015):
- Lack of interest on the part of the partner companies in sharing information: Cooper and Slagmulder (2003); LaLonde and Pohlen (1996); Kulmala et al. (2002); Kulmala (2004)
- Lack of interest from suppliers in using Open Book Accounting: Caker (2008); Windolph and Moeller (2012); Mcivor (2001); Munday (1992)
- Uncertainty about the relationship with partners: Barney and Hesterly (1996)
- Lack of trust among the partners in IOCM: Dekker (2004)
Cooper and Slagmulder (2003) explained that cooperation between companies must seek the reduction of the total cost and, thus, seek to increase corporate profits. The networks of buyers and suppliers must be governed on the basis of trust and extensive sharing of information. However, companies may be reluctant to share information, mainly driven by uncertainties. LaLonde and Pohlen (1996) explained that the reluctance to share cost information can be a significant obstacle to the determination of supply chain costs. As a result, companies may continue to behave independently, and thus inadvertently increase costs throughout the supply chain. Information sharing should be bidirectional, and a lack of interest in sharing information by a member of the chain can be a barrier to interorganizational relationships (Kulmala et al., 2002). Windolph and Moeller (2012) explained that there may be a lack of interest from suppliers in sharing information with members of the chain. According to the authors, vendors may refuse the idea of using open book accounting for fear that buyers will use the data to increase the pressure on their profit margins. Barney and Hesterly (1996) stated that the greater the uncertainty in the agreements and partnerships between companies, the more control mechanisms, such as the use of contracts, are required, which increases the complexity of relationships. The authors explained that companies can adopt governance measures and establish hierarchical structures in order to settle possible disagreements. Dekker (2004) pointed out that control mechanisms help to reduce uncertainties and risks, so that they enhance trust between the companies. Finally, the author stated that companies should look for ways to ensure the stability and continuity of the alliance in the future in order to prevent lack of trust among companies from becoming a barrier to strategic cost management. Conclusions and Recommendations By acting collaboratively with members of the supply chain in order to reduce the total cost of the product, significant advantages can be gained over competing chains. In fact, the relationship network of companies can play a key role in their survival and development. There are many advantages to forming partnerships between companies; however, there are numerous difficulties and factors that hinder and inhibit relations aimed at reducing costs in the chain. Nicolini et al. (2000) and Norek and Pohlen (2001) explained that, for approaches to Interorganizational Cost Management to be successful, internal development of the companies has to take place, such as knowledge and understanding of costs, understanding of organizational culture, and education and training of employees. For the authors, companies must first overcome internal barriers and only later make efforts to take advantage of external cost management.
When entering into partnerships, companies expect to develop and strengthen in the market.However, when a partnership is unsuccessful, sometimes because managers are not prepared, the involved companies can be disappointed.Pereira et al. (2010) explained that the process of removing or excluding a corporate network can cause discomfort between the parties, among other reasons, due to the cancellation of contracts that may have been signed at the beginning of the partnerships. As for the identified inhibiting factors, it can be seen that Interorganizational Cost Management is an interdisciplinary phenomenon that involves people, corporate culture, technology and processes.It is argued, based on the findings of the present research, that the application of Interorganizational Cost Management practices cannot be seen as a technical approach, only guided by technology and management programs. As a limitation of the research, the use of works from different sources and in different contexts has allowed the identification of overly generic and comprehensive factors, without a specific look into a particular branch or economic sector. Finally, suggestions for further research include: a) analysing whether the strategic focus of companies interferes in the decision to enter into strategic alliances; b) conducting research with an interdisciplinary focus, seeking to mitigate the inhibiting factors when they arise in the relationship between companies.In fact, the study related to Interorganizational Cost Management and the challenges posed by this approach are a great opportunity for future research aimed at helping managers to deal with the complex task of managing companies' external costs. Table 4 . Inhibitors relative to intra-and interorganizational processes Table 5 . Inhibitors relative to corporate training and education Table 6 . Inhibitors relative to conflicts between companies
6,451.2
2016-02-25T00:00:00.000
[ "Business", "Economics" ]
Control of protein activity by photoinduced spin polarized charge reorganization Significance The role of well-placed charges within proteins in mediating biological functions, from protein-protein association to enzyme kinetics, is well documented. Here, we go beyond this static picture and show that charge motions can exert significant effects on protein function. Injecting charge from a photosensitizer, we demonstrate a threefold decrease in enzymatic activity and a twofold increase of antibody-antigen binding. These effects depend on the specific position of the photosensitizer on the protein. Our results point to charge reorganization as a form of allostery that complements known allosteric mechanisms such as conformational changes and dynamics. Introduction Biomolecules within the living cell are subject to extensive electrical fields, particularly next to membranes(1). Indeed, a role for bioelectricity has been well established at the organismal level (2). While the importance of electrostatics in protein functions such as protein-protein association and enzymatic activity has been well documented (3), very little is known on how biomolecules respond to external electric fields, or in other words, what may be the potential contribution of polarizability to protein function. Multiple protein activities involve electrostatic effects (3). For example, it is recognized that the association kinetics of proteins can be accelerated by charged residues positioned close to the interaction sites on their surfaces (4). Recent work on enzyme catalysis has given rise to a picture of pre-organized charges at catalytic sites, directly influencing substrate molecules and lowering enzymatic reaction barriers in this manner (3,5,6). These mechanisms for charge influence on protein function invoke essentially fixed charge distributions, and do not take into account the potential role of charge regulation and reorganization due to external electric fields (7). Yet, it is important to appreciate that any interaction between two proteins, as well as between a protein and other molecular species, involves the formation of an effective electric field that results from the difference in electrochemical potentials of the two interacting bodies. (8,9) and experimental work from our labs (10,11) have indeed hinted at a role for charge reorganization as an allosteric signal in proteins. Here we decisively establish this role by studying the effect of phototriggered charge injection on both protein-protein association kinetics and enzyme kinetics. We find a rich spectrum of responses that depends on the position of the photoexcited group as well as on the spin polarization of the rearranging charges. The spin dependence is likely associated with the chiral induced spin selectivity effect (12). Modulating protein-protein association We site-specifically labeled phosphoglycerate kinase (PGK), a 415-residue protein, with the photosensitizer ([Ru(2,2′-bipyridine)2(5-iodoacetamido 1,10-phenanthroline)] 2+ (Ru) (13). In particular, we created the mutant C97S/Q9C, in which the native cysteine at position 97 was changed to a serine, and a cysteine residue was inserted at position 9 ( Fig. 1). Ru can inject either an electron or a hole into the protein, potentially modulating the charge distribution (i.e., the electric polarization) within the protein. We first studied the binding of an anti-His antibody to a polyhistidine tag at the C terminus of PGK (Fig 2A). 
The Ru-PGK construct was attached to a gold surface to facilitate uniform illumination and readout of the antibody-antigen interaction. The antibody molecules were labeled with the dye Alexa 647, which allowed counting individual events of protein-protein association at the surface at different times, following the addition of the antibody to the solution. The experiment was performed either under illumination with a linearly polarized (LP) 470 nm laser or in the dark (Fig. 2B-E). The kinetics traces in Fig. 2F demonstrate that under illumination association was significantly enhanced at early times. In particular, at 2 s, illumination increased the association rate by a factor of 2.25±0.05. At longer times the difference between the two sets decreased, reaching a similar value at 8 s, due to saturation of the binding of antibody molecules to the surface. In the rest of the article, we will therefore report only rate differences at 2 s. The experiment was repeated with PGK molecules that were not labeled with Ru, and no effect of illumination was observed (Fig. 2G). We further repeated the same experiment on glass to rule out any potential contribution of the gold surface, and the results were similar (Fig. 2H). As it is known that electron transport through a protein may be spin selective, due to the chirality of the protein and its secondary structure (12,14,15), we asked whether illumination with circularly polarized light can modulate the observed effect. The experiment on the gold surface was therefore repeated with either right or left circularly polarized light. Circularly polarized light is likely to generate excitations with one spin state (16), so that the charge injected into the protein (either positive or negative) would be spin polarized. Remarkably, the enhancement of the association kinetics was observed only with left circularly polarized (LCP) light, and not with right circularly polarized (RCP) light (Fig. 2I). These results indicate, within the experimental uncertainty, that the whole photoinduced effect is an outcome of essentially a single spin polarization, suggesting in turn that the charge reorganization within the protein is spin selective. [Figure 2 caption, continued: only a minor illumination effect was observed when Ru was moved to residue 290; in G-J, molecules were counted 2 s following the initiation of the reaction; at least 9 regions were counted in each sample; experiments were repeated three times (see Supporting Table 1 for all values); error bars represent standard errors of the mean.] To test the position dependence of the charge reorganization effect on association kinetics, the Ru complex was moved to residue 290, using the mutant C97S/S290C (Fig. 1). At this position, the photosensitizer is much further away from the His-tag at the C terminus compared to the previous position; the distance from residue 290 to the C terminus, residue 415, is 55 Å, based on the crystal structure 3PGK, while from residue 9 it is only ~10 Å. Repeating the same experiment, it was found that illumination (either LP, LCP or RCP) had only a minor effect on the association reaction (Fig. 2J), pointing to a significant position dependence of the effect. Controlling enzymatic activity We then turned to measure the effect of photosensitization on the catalytic reaction of PGK. The enzyme catalyzes the transfer of a phosphate group from ATP to 3-phosphoglycerate (3-PG), producing ADP and 1,3-bisphosphoglycerate (1,3-BPG) (Figure 3A).
To observe a robust reaction on a surface, the His-tag at the C terminus of PGK was used to attach protein molecules to a supported lipid bilayer formed on a glass substrate (Fig. 3A). The turnover of surface-bound enzyme molecules was measured using a coupled assay, and the kinetics were gauged through the change in NADH absorbance (17). Based on the slopes of the kinetics curves in Fig. 3B-E (Supporting Table 2), and assuming a surface density of PGK molecules of ~5⋅10^11/cm^2 (somewhat lower than expected for a close-packed layer of the protein), we calculated a turnover rate of ~200 s^-1 for Q9C PGK and S290C PGK in the dark. This turnover rate is quite close to the value measured in solution with C97S PGK (226.9±7.3 s^-1). Remarkably, with Ru at position 290, the enzymatic rate decreased under illumination by a factor of 3.3±0.2 (Fig. 3B). As above, this rate reduction was induced by either LP or LCP illumination, but not under RCP illumination. The effect could be observed in a single experiment: when the light was turned off, the slope of the kinetics curve increased (Figure 3C). In the absence of Ru, no effect of illumination was observed (Fig. 3C and Supporting Fig. 1A). When the Ru complex was moved to position 9, an illumination effect was still observed, but it was significantly reduced, to a factor of only 1.8±0.1 (Fig. 3D); as above, the slope increased when the light was turned off, and no illumination effect was observed in the absence of Ru (Fig. 3E and Supporting Fig. 1B). [Figure 3 caption: A. Schematic of the experimental setup to study the effect of illumination on enzymatic kinetics of Ru-modified PGK. PGK molecules were attached to a lipid layer supported on a glass surface through their His-tags. The enzymatic reaction of PGK is depicted in the cartoon. Enzymatic activity was measured at 25 °C using a coupled assay (see Methods) and the absorbance of NADH at 340 nm was monitored. B. A strong reduction in enzyme kinetics was observed upon either LP or LCP illumination of PGK modified with Ru at position 290, but not under RCP illumination, as compared to no light (NL). C. The slope of reaction progression changed when the initial LP illumination was stopped after 5 min (full symbols). In the absence of Ru, no effect of light was observed (empty symbols). D. The effect of light was smaller when Ru was at position 9. As in B, the effect was observed under LP or LCP illumination, but not under RCP illumination. E. As in C, but with Ru at position 9. Only the linear regions of the activity curves were fitted. Experiments were repeated three times, and this figure shows only one set. For values obtained from all experimental sets, see Supporting Table 2.] Discussion When a protein interacts with a charged molecule or protein, charge rearrangement occurs within the protein, which may affect the interaction between the protein and the other species. The extent of charge rearrangement depends on the polarizability of the protein, and therefore polarizability may affect both the interaction between proteins and enzymatic activity. Upon excitation of a photosensitizer, charge can be injected into the protein, hence affecting its polarizability and thereby modulating the effect discussed above. Charge injection can involve either an electron or a hole, and might potentially be only partial, leading in either way to an effect on the charge distribution within the protein. However, since the protein is chiral, any charge injection would be spin dependent due to the chiral induced spin selectivity (CISS) effect, as shown by Naaman and coworkers in multiple studies (12). Exciting the dye with circularly polarized light causes one spin to be preferentially excited. Due to the CISS effect, one specific spin can be injected more efficiently into the protein. Therefore, one circular polarization is more effective than the other. Excitation with the 'correct' circular polarization would lead to charge injection into the protein and to a charge-separated species that would typically have a much longer lifetime than the usual excitation lifetime of the molecule. On the other hand, excitation with the 'wrong' circular polarization would not lead to charge injection, and the excited state would relax quickly, either radiatively or non-radiatively. Our results indeed indicate a significant effect of charge injection from the photosensitizer Ru into the protein, both on association with an antibody and on its enzymatic reaction. Notably, the effects we measure depend on the polarization of light and, in particular, respond to only one circular polarization. Our findings strongly support the notion that charge reorganization is involved, as it has been established that the motion of charge through a chiral potential is spin selective and is affected by the protein secondary structure (15). This spin dependence is due to the chiral-induced spin selectivity effect (12). Importantly, it has been shown previously that spin polarization enables long range charge transfer through chiral systems (18). Specifically for our systems, we can only speculate on the exact effect of charge injection and in which direction charge is transferred. In the case of the antibody-protein interaction, since the antibody is directed to the His-tag, we observe the C-terminal region of the protein (where the His-tag is connected) and find that it is in general more negative. Clearly, the protein-protein association reaction would benefit from this region being even more negative, meaning that an electron would likely be injected from the photosensitizer. In the case of the enzymatic reaction, charge reorganization may affect substrate binding by making the active site more negatively charged and changing its interaction with the negatively charged substrate molecules. Additionally, charge reorganization may affect the catalytic mechanism itself. We cannot be more specific about this aspect at this time. In any case, since charge reorganization is found to be sensitive to circularly polarized light, it is likely that α-helical structures of the protein are involved, as α-helices have been implicated as good spin filters (15). The photoinduced charge injection effect we observe here depends on the distance from the active site involved, rather than on the sequence separation. Thus, for protein-protein association at the C terminus of PGK, Ru at position 9 had a strong effect, while Ru at position 290 had no effect. A similar picture arose also for the enzymatic activity of PGK, though now Ru at position 290 (close to the ATP binding site) showed double the effect of Ru at position 9. The findings here, combined with previous studies (10,11), point to a new role of charge reorganization, or of polarizability, in modulating protein activities. Surprisingly, not much is known about the involvement of polarizability in protein function, though the development of polarizable force fields for molecular dynamics simulations of biomolecules in recent years may change this situation (19).
The role of charged protein residues in enzymatic catalysis has been discussed extensively by Warshel and coworkers (3), who emphasized the contribution of charges that are pre-organized to reduce the free energy of the transition state. Recent work from the Boxer lab has experimentally demonstrated that charges at the active site of the enzyme ketosteroid isomerase exert an electric field that contributes significantly to the catalytic effect (20). However, these charges are considered to be static. We suggest instead that the electric field at the active site of an enzyme may be modulated through the binding of charged groups at distant sites or by the presence of bioelectric fields. Indeed, our current results indicate that this is the case. The excitation of the Ru moiety likely leads to a propagation of a polarization signal through the protein, reaching and affecting the active site. A significant effect is demonstrated here on both the binding of an antibody to the His-tag of PGK and, most remarkably, on its enzymatic activity. The effect on the activity of PGK might be due either to modulation of the binding of substrates or to an effect on the catalytic step itself-this remains to be determined. In any case, these findings point to a so-far unappreciated role of electric fields in the regulation of biological activity at the molecular level. Within the cellular environment, electric fields abound particularly near membranes, and it is possible that membrane proteins and also proteins that interact with membranes are susceptible to control mediated by charge reorganization. This discovery also suggests a novel method for generating photo-controlled enzymes and sensors, based on photoexcitation of an attached group. Currently, all proposed methods to photo-control bioactivity have relied on various conformational changes induced by photoexcitation (21,22). Photo-controlling bioactivity through charge injection might be easier to implement. Future work will allow us to optimize the location of the photosensitizer and enhance the effect of light on activity even further and will teach us more about pathways of charge rearrangement in relation to protein function. For that purpose, we plan to identify biological systems that might be particularly susceptible to this type of activity regulation in proteins. Methods Protein expression and purification. Yeast phosphoglycerate kinase (PGK) DNA was cloned into a pET28b vector, fused to a C-terminal 6xHis tag. For site-specific labeling of PGK, the natural cysteine (C97) was replaced by a serine. A single cysteine residue was introduced using sitedirected mutagenesis, resulting in either a Q9C or a S290C PGK mutant. Single-cysteine PGK plasmids were transformed into E. Coli BL21 pLysS (DE3) cells (Invitrogen), which were grown in LB media at 37 °C up to an optical density of 0.8-1. Rinsing with PBS removed the unattached protein molecules. Labeling of anti-His tag antibody. In order to study the antigen-antibody reaction kinetics by observing the fluorescence of attached antibody molecules, anti-His tag antibody molecules were tagged with the dye Alexa Fluor® 647 NHS Ester (Succinimidyl Ester, ThermoFisher SCIENTIFIC, Catalog number: A20006) using the same procedure as followed in our previous paper. (11) In brief, unlabeled antibody molecules in PBS buffer were reacted with the NHS ester of the dye in a 1:1.5 ratio in presence of 0.1 M sodium bicarbonate buffer for 1 h at room temperature in the dark. 
Micro Bio-Spin columns with Bio-Gel P-30 (Bio-Rad) were used to remove the unreacted dye molecules. We verified, using circular dichroism spectroscopy, that the labeled protein did not show any optical activity at the wavelength of absorption of the Ru group (Supporting Fig. 2). Interaction between His-tagged PGK and anti-His antibodies with and without illumination To study the antibody-antigen reaction kinetics, His-PGK-Ru modified gold surfaces were [...]. To test the potential contribution of the gold surface to the antigen-antibody reaction, the above experiment was repeated using a glass surface coated with His-PGK-Ru, with and without illumination. Microscopy measurements & data analysis. Fluorescence imaging of the samples following reaction with antibody molecules was carried out following the same procedure used in our previous work (11). A home-built total internal reflection fluorescence microscope (TIRFM) was used for the imaging. In each experiment, 10 different TIRFM movies were recorded on 10 different regions of 101 × 101 pixels (6.73 μm × 6.73 μm). On each region, 100 ms frames were recorded until all molecules in the designated area were photo-bleached. TIRFM movies were analyzed using custom-written Matlab (MathWorks) routines. Individual spots, corresponding to individual antigen-antibody complexes, were identified in the first frame of a movie using a combination of thresholding and center-of-mass (CM) analysis, as described previously (23). The intensity at the center of mass of each individual spot was plotted as a function of time, and change-point analysis was performed to identify photobleaching steps and hence the number of emitters in each spot. Some examples are shown in Figure S3 of ref (11). [...] introduced at the appropriate angle just before the sample chamber. We verified that at the sample the light was circularly polarized to within ~10% by rotating a polarizer and measuring the power. The laser intensity (~5 mW/cm^2) at the sample was kept constant for linear as well as circular polarization by tuning the laser power at the source. While the relatively low laser intensity implies a low efficiency of excitation, potentially long charge recombination times are likely to lead to a significant fraction of charge-separated protein molecules. Supporting Information, Supporting Table 3. All slopes are given in units of absorbance change per minute. Note that slopes vary between different samples due to differences in the surface densities of the proteins; however, the ratios are similar within experimental error.
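Since the raw enzymatic data are reported as absorbance change per minute, the conversion to a per-enzyme turnover rate (quoted in the Results as roughly 200 s^-1) can be sketched as follows. Only the surface density of ~5·10^11 molecules/cm^2 is taken from the text; the slope, reaction volume, optical path length and enzyme-covered area below are illustrative assumptions, and the NADH extinction coefficient is the standard literature value.

# A minimal sketch of converting an NADH absorbance slope into a per-enzyme
# turnover rate. Only the surface density (~5e11 cm^-2) comes from the text;
# the slope, volume, path length and area are assumed example values.

N_A   = 6.022e23          # Avogadro's number, mol^-1
eps   = 6220.0            # NADH extinction coefficient at 340 nm, M^-1 cm^-1
path  = 1.0               # optical path length, cm (assumed)
slope = 0.05              # absorbance change per minute (assumed example value)

volume_mL   = 1.0         # reaction volume, mL (assumed)
area_cm2    = 1.0         # enzyme-covered surface area, cm^2 (assumed)
density_cm2 = 5e11        # PGK molecules per cm^2 (from the text)

# Concentration change of NADH per second (Beer-Lambert law)
dC_dt = slope / 60.0 / (eps * path)                 # M s^-1

# Molecules of NADH converted per second in the whole reaction volume
molecules_per_s = dC_dt * (volume_mL / 1000.0) * N_A

# Number of surface-bound enzymes
n_enzymes = density_cm2 * area_cm2

turnover = molecules_per_s / n_enzymes              # s^-1 per enzyme
print(f"estimated turnover ≈ {turnover:.0f} s^-1")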
4,368
2021-10-13T00:00:00.000
[ "Chemistry", "Physics" ]
The Magnetoresistance of Nanostructured Co-ZnO Films with ZnO Buffer-Layers Co-ZnO films were prepared on oxidised silicon by magnetron sputtering at room temperature, both with and without a ZnO buffer-layer. The Co-ZnO films consisted of Co particles dispersed in a semiconductor matrix. The combination of a Co-ZnO layer and a ZnO buffer-layer has a higher magnetoresistance than the Co-ZnO layer alone on an insulating Si substrate. The causes of this effect were investigated using X-ray photoelectron spectroscopy, depth profiling using Auger electron spectroscopy, and electrical resistance measurements, as well as measurements of the change in the saturation magnetisation and the field cooled- and zero field cooled-magnetisation. This work has shown clearly what criteria are needed to optimise the magnetoresistance and how these conditions may be met by adding a buffer-layer, thus making granular films based on ZnO more suitable for applications as field sensors. Introduction The discovery of the giant magnetoresistive effect (GMR) [1] [2] is considered to have been the beginning of spintronics as an active research field [3]. Extensive studies of the magnetoresistance (MR) of heterogeneous structures have been performed with a view towards spintronic applications. MR is observed in layered magnetic structures composed of alternating ferromagnetic and non-magnetic layers [4]-[7] and can also be observed in granular films, in which small ferromagnetic (FM) particles are dispersed in a non-magnetic matrix [8] [9]. The non-magnetic component (or matrix) may be either a metal or an insulator; GMR or tunnelling MR is produced accordingly. Although a high spin-injection efficiency from a ferromagnetic metal into a metal or an insulator has been achieved, spin-injection from a FM metal into a semiconductor is still a challenging task, due to their large conductivity mismatch [10]. Dilute magnetic semiconductors offer an effective way to produce spin-injection into a semiconductor, whereby transition metal (TM) ions are uniformly substituted as cations into the semiconductor host lattice. Homogeneous, dilute magnetic semiconductors may be formed, of which doped ZnO is a typical example. Moreover, ZnO barrier-based magnetic tunnel junctions [11]-[13] have previously been investigated, providing evidence of spin-injection into a semiconductor. On account of their potential application in magnetic sensors, MR read heads and MR random access memory, inhomogeneous TM-ZnO semiconductor systems have been specifically designed to maximize granular MR [14]-[17]. Compared with multilayer films, magnetic granular films have some advantages, such as simple preparation, good thermal stability, and tunable grain size and structure after deposition; this helps to maximize the use of the MR [9]. Moreover, MR in granular films can work with a magnetic field applied in any direction, whereas GMR tends to work only for a magnetic field in the plane. Many reports show that an oxygen deficiency, which is also present in granular oxide films, is necessary to obtain a high magnetization in doped oxide semiconductor-based films [18] [19]. Co-ZnO granular systems have the highest MR at room temperature (RT) and hence it is productive to look for ways of further improving these samples [20].
Experimental parameters, including the thicknesses of the Co and ZnO layers and also the post-annealing process, have been optimised in order to obtain large MR values, as previously reported [15] [21]. However, to our knowledge, so far there has been no report on the influence on the MR of a ZnO buffer-layer between a Co-ZnO granular layer and the substrate. In this work, we investigate the influence of a ZnO buffer-layer on the MR of Co-ZnO samples deposited on thermally oxidised silicon. Experiments Co-ZnO films of varying thickness, with or without a buffer-layer of ZnO, were prepared on thermally oxidised Si(100) by magnetron sputtering at RT. The thicknesses of the ZnO buffer-layers were chosen to be 5 nm, 10 nm, 15 nm, 20 nm, 35 nm, 50 nm, 75 nm, 100 nm and 150 nm. The nominal structure of the Co-ZnO films in all the samples is [Co(0.6 nm)/ZnO(0.7 nm)]10; this was achieved by sequentially depositing an ultra-thin 0.6 nm Co layer and a 0.7 nm ZnO layer for 10 periods. The Co and ZnO deposition rates are 0.041 nm/sec and 0.056 nm/sec, respectively. For these thin layers, granular films (GFs) are formed, rather than a multilayer [16]. Therefore, for convenience, in this paper we shall denote [Co(0.6 nm)/ZnO(0.7 nm)]10 as GF; all the samples reported here contain the same amount of Co. The structures of the samples were investigated by X-ray diffraction (XRD) and transmission electron microscopy (TEM). Auger electron spectroscopy (AES) depth profiling was performed in order to obtain the composition of the samples and to observe the diffusion at the interface between a GF layer and a buffer-layer or a substrate. X-ray photoelectron spectroscopy (XPS) was also performed to investigate the composition and the chemical state of Co in the samples. The magnetic field dependence of the MR at RT was measured by using a four-probe method with the current in the plane. The maximum applied magnetic field was 18 kOe. Zero-field-cooled and field-cooled (ZFC/FC) magnetic moments of the samples were measured from 2 K to 300 K in 100 Oe using a SQUID magnetometer with the field applied parallel to the film plane. The magnetic properties of the thin films were measured using a SQUID magnetometer at 5 K. Results and Discussion The observed MR for the GF-based samples in the maximum applied magnetic field of 18 kOe at RT is shown in Figure 1(a) and Figure 1(b). The MR ratio is defined as [R(H)-R(0)]/R(0)×100%, where R(H) and R(0) are the resistances in an external magnetic field and in zero field, respectively. The GF sample without a ZnO buffer-layer shows a negative MR ratio of 8.3% at RT and 18 kOe. Adding ZnO buffer-layers to the substrates before depositing the GF increases the MR ratios of the GF-based samples. Initially, the MR of the samples increases strongly with the increase of the ZnO buffer-layer thickness from 0 nm to 20 nm, and thereafter it remains nearly constant as the ZnO buffer-layer is increased from 20 nm to 150 nm. The negative MR ratio rises to a maximum of 11.9% for [Co(0.6 nm)/ZnO(0.7 nm)]10 films with only 10 periods. This is also an enhancement compared with the MR of about 11% previously reported for [Co(0.6 nm)/ZnO]60 samples with 60 periods [14] [15]. The origin of the higher MR ratio in the system combining a Co-ZnO layer plus a ZnO buffer-layer will subsequently be investigated in detail in terms of microstructure, conductivity, and magnetic properties.
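A minimal sketch of how the MR ratio defined above can be evaluated from a measured field sweep is given below; the R(H) curve used here is a synthetic placeholder shaped to reproduce a negative MR of roughly 11.9% at 18 kOe, not measured data.

import numpy as np

# Sketch of the MR-ratio evaluation defined in the text,
# MR(H) = [R(H) - R(0)] / R(0) * 100 %. The field sweep and resistance
# values below are synthetic placeholders, not measured data.

H = np.linspace(-18e3, 18e3, 181)                     # applied field, Oe
R = 1.0e4 * (1.0 - 0.119 * np.abs(H) / 18e3)          # toy R(H) mimicking a negative MR of ~11.9%

R0 = R[np.argmin(np.abs(H))]                          # zero-field resistance
MR = (R - R0) / R0 * 100.0                            # MR ratio in percent

print(f"MR at 18 kOe: {MR[-1]:.1f} %")                # about -11.9 % for this toy curve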
Figure 2(a) shows the XRD patterns of the GF samples with and without a ZnO (50 nm) buffer-layer.The XRD pattern of GF sample without a ZnO buffer-layer does not show any peaks, which means that this sample is in an amorphous state or the crystal grains are too small to be detected by XRD [15].The sample of the GF with a ZnO (50 nm) buffer-layer gives rise to a barely-discernible broad ZnO (002) peak.This data shows that our samples have small grain size and poor crystallinity because of the low substrate temperature, RT, which can also be evidenced from the TEM micrograph as shown in Figure 2(b).No lattice fringes are clearly observed, which implies poor crystallinity of the sample.It can be seen that the sample consists of Co particles (dark regions) dispersed in a semiconducting matrix (light regions), which is similar to that found in our previous studies of Co/ZnAlO films [17].The interface between Co particles and semiconductor matrix is not distinct, suggesting that there is a concentration gradient of Co in the ZnO.The gradient boundary layer between the Co and the ZnO was considered in detail previously [17]. The metallic Co fraction in the GF layer was obtained from XPS.The Co 2p core-level XPS spectrum of the GF sample with a ZnO (20 nm) buffer-layer is shown in Figure 3(a).The peaks with the binding energy at 778.6 and 793.6 eV originate from Co metal and the peaks at 780.3 and 795.6 eV originate from the Co 2p 3/2 and Co 2p 1/2 of Co 2+ ions.Their respective weak satellite peaks occur at higher binding energies.This means that Co in the sample exists as metallic Co and as Co 2+ ions, as occurred in our Co/ZnAlO samples [17].The ratio of me- The gradual decrease of the Co signal at the interface is significantly longer for the film with the buffer-layer than for the GF on oxidised Si, ~1.1 min compared with ~0.7 min, which translate roughly into lengths of ~11 nm and ~7 nm.This indicates that there is a more diffuse interface when the GF is grown on a buffer-layer possibly caused because the Co in the GF layer diffuses more easily into a ZnO buffer-layer than into the thermal oxidised Si substrates.Any diffusion of Co particles of the GF layer into the ZnO buffer-layer will act to dilute the Co particles in GF layer.Since the same nominal GF [Co(0.6 nm)/ZnO(0.7 nm)] 10 was deposited, a thicker GF layer may be formed in the sample with a ZnO buffer-layer. The composition of the GF layer was obtained by both AES and XPS.The concentrations of the three main elements Zn, Co, and O were obtained by averaging the results from AES and XPS, in consideration of the cer-tain deviation from different methods.It is found that the concentrations of the three main elements Co, Zn, and O in GF layer are 39.3 ± 2.0 at.%, 23.1 ± 1.2 at.%, and 37.6 ± 1.9 at.%, respectively; this also implies that the microstructure of the GF layer may be a mix of metallic Co particles and semiconducting Zn 1-x Co x O 1-δ grains.The above discussion of the XRD, TEM, XPS, and AES of the samples indicates that the GF layer has a granular structure, in which the Co particles are dispersed in a semiconductor matrix and that Co diffusion may occur at the interface between GF and ZnO buffer-layer. 
Further information is also obtained from the dependence of the resistance on the thickness of the buffer-layer which is shown in Figure 4.The resistance of the GF is increased significantly by growing the GF on a thick buffer-layer.We note that if the buffer layer was providing a resistance in parallel with that of the GF, the measured resistance would have fallen.The rise in the measured resistance indicates that both the resistivity and the structure of the whole GF are affected by a buffer-layer that is significantly thicker than the roughness/diffusion length.The resistivity of the whole film depends on the size of the metallic nanoparticles, because of the Coulomb blockade, and also the resistivity of the semiconductor matrix Zn 1-x Co x O 1-δ , which is dependent on the oxygen deficiency, δ.Any migration of the Co atoms out of the GF and into the buffer-layer will reduce the oxygen deficiency in the GF and hence raise the overall resistance. Figure 5(a) shows the hysteresis loops of the GF samples with the same measured area that were taken at 5 K after the background diamagnetic signal has been subtracted.This indicates that the presence of a buffer-layer has reduced the saturation magnetisation (M s ) by ~20%. ZFC/FC magnetic moment measurements were also performed, in a field of 100 Oe, in order to study the magnetic behavior of the Co particles in the different GF-based samples, as shown in Figure 5(b).At low temperatures, a large bifurcation is observed between the ZFC and FC curves for the two samples, as is normally exhibited by superparamagnetic particles [22] [23].The ZFC magnetizations show a maximum centered at 140 K for GF and 100 K for the GF plus a ZnO buffer-layer, which means that their blocking temperatures (T b s) are below RT.It can be clearly seen from Figure 5(b) that the width of the peak in the ZFC curve of the GF is larger for the film grown without a buffer-layer.This implies that the width of the size distribution of the Co nanoparticles has been reduced by the inclusion of the buffer layer.The variation in the size of the Co particles can also be observed from its TEM plane view image in Figure 2(b). 
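The blocking and bifurcation temperatures discussed above can be read off ZFC/FC curves in a straightforward way; the sketch below takes T_b as the position of the ZFC maximum and the bifurcation temperature as the highest temperature at which FC and ZFC still differ noticeably. The magnetisation curves are synthetic placeholders and the 2% splitting threshold is an arbitrary illustrative choice.

import numpy as np

# Sketch of extracting the blocking temperature (position of the ZFC maximum)
# and the ZFC/FC bifurcation temperature from magnetisation curves measured
# in 100 Oe. The arrays below are synthetic placeholders, not measured data.

T   = np.linspace(2, 300, 150)                 # temperature, K
zfc = T * np.exp(-T / 60.0)                    # toy ZFC curve with a maximum
fc  = zfc + 30.0 * np.exp(-T / 40.0)           # toy FC curve lying above ZFC at low T

T_b = T[np.argmax(zfc)]                        # blocking temperature estimate

# Bifurcation: highest temperature at which FC and ZFC still differ noticeably
split = (fc - zfc) > 0.02 * np.max(fc)         # 2% threshold, illustrative choice
if split.any():
    T_irr = T[split][-1]
    print(f"T_b ≈ {T_b:.0f} K, FC/ZFC splitting persists up to ≈ {T_irr:.0f} K")
else:
    print(f"T_b ≈ {T_b:.0f} K, no FC/ZFC splitting above the threshold")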
The difference between the Co particles in the GF grown with and without a buffer-layer is most pronounced in the Curie-Weiss plots made at temperatures well above the blocking temperatures. The susceptibilities follow the Curie-Weiss relation χ = C/(T + θ), where C_GF = 533.51 emu Oe^-1 deg^-1, C_buffer/GF = 158.52 emu Oe^-1 deg^-1, θ_GF = 75.85 K and θ_buffer/GF = 2.90 K. A positive value of the Curie-Weiss constant θ characterises the presence of an antiferromagnetic interaction between the nanoparticles. A good MR material has very little coupling between the magnetic clusters, so that the magnetisation of each cluster is free to respond to an external field; hence the reduction in θ caused by the buffer-layer is very beneficial to an increased MR. This is evidenced by the larger MR of the GF with a buffer-layer compared with that of the GF without a buffer-layer. The change in the Curie constant, C, by a factor of ~3 indicates that the mean size of the nanoparticles has been reduced strongly by the inclusion of a ZnO buffer-layer. The T_b and the M_s were smaller in the film grown with a buffer-layer, by 40% and 20% respectively. This indicates that the inclusion of the buffer-layer has resulted in a modest decrease in the mean size of the nanoparticles but a much larger decrease in the average of the mean square size, due to a reduction in the width of the size distribution caused by the elimination of some of the largest nanoparticles. According to the above discussion of the microstructures and magnetic properties of the GF samples, there are two effects that act together to increase the MR in the films grown with a buffer-layer. These are the reduction of θ from 75.85 K to 2.9 K and the reduction of the size of the nanoparticles, which increases the efficiency of the Coulomb blockade [24].
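The Curie-Weiss analysis above can be reproduced with a simple least-squares fit of χ = C/(T + θ) to the high-temperature susceptibility; the sketch below uses synthetic data generated from the parameters reported for the plain GF, so the fit should recover approximately C ≈ 533.5 and θ ≈ 75.9 K.

import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta):
    """Curie-Weiss susceptibility, chi = C / (T + theta)."""
    return C / (T + theta)

# Synthetic susceptibility data built from the parameters quoted for the GF,
# with 1% noise added; these are placeholders for the measured curves.
rng = np.random.default_rng(0)
T = np.linspace(180, 300, 25)                         # temperatures well above T_b, K
chi = curie_weiss(T, 533.51, 75.85) * (1.0 + 0.01 * rng.standard_normal(T.size))

(C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi, p0=(500.0, 50.0))
print(f"C ≈ {C_fit:.1f}, theta ≈ {theta_fit:.1f} K  (units as reported in the text)")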
There is evidence for interdiffusion at the interface between the GF and the buffer from the AES data which monitors the width of the interface which increases to ~11 nm for a thick buffer.The MR and the resistance depend strongly on the thickness of the buffer layer for thicknesses less than ~20 nm which is the range over which diffusion is likely to occur.The diffusion of Co out of the GF will reduce both the size of the nanoclusters as well as the amount of Co 2+ in the semiconductor.This will reduce the magnetisation of the film and the size of the clusters.The diffusion of oxygen into the GF will reduce the concentration of oxygen vacancies which will increase the resistance and decrease the magnetism [19] of the semiconducting film.The larger density of oxygen vacancies in the semiconducting fraction of the GF on silicon will increase the magnetic interactions and, in this case, give rise to the higher θ value.The strain between the silicon oxide and the GF may induce the formation of large cobalt nanoparticles which are removed after the introduction of the buffer layer.It is the elimination of these large nanoparticles by the buffer which is reducing the mean square particle size so strongly. Conclusions In this work, the influence of ZnO buffer-layer on microstructures and MR properties of Co-ZnO samples was studied.Co-ZnO GF layer consisted of Co particles dispersed in a semiconductor matrix.Moreover, the interface between the GF and the buffer-layer is more diffuse than the GF on oxidised Si, which may be due to the diffusion of Co and O ions between the GF and the ZnO buffer-layer.The combination of GF and a buffer-layer has a higher RT MR than the GF on an insulating Si substrate.The GF sample without a ZnO buffer-layer shows negative MR ratio of 8.3% at RT. Adding ZnO buffer-layers increases the negative MR ratio of GF-based sample to 11.9%.The similar maximum of their ZFC magnetizations implies the similar average size of the Co particles; the only difference is the wider distribution of Co particle size in Co-ZnO sample without a buffer-layer. Therefore, these phenomena may be related with the difference of Co intergrain interactions, the distribution and the sizes of the Co nanoparticles, the number of oxygen vacancies in GF layer and the thickness of GF layer due to the diffusion at the interface in the sample with and without a ZnO buffer layer. This paper has shown the combination of GF and a ZnO buffer-layer has a higher MR than the GF on an insulating Si substrate and has shown why this occurs.ZnO-based GF with a ZnO buffer-layer may be more suitable for applications as field sensors. Figure 1 . Figure 1.(a) The field dependence of MR ratios of GF-based samples at RT; (b) The dependence of the RT MR ratio in the maximum applied magnetic field of 18 kOe for GF samples on a ZnO buffer-layer of thickness of 0 -150 nm. Figure 2 .Figure 3 . Figure 2. (a) XRD patterns of GFs without a buffer-layer and with a ZnO(50 nm) buffer-layer; (b) TEM plane view image for the ZnO(50 nm)/GF sample. Figure 4 . Figure 4.The dependence of the resistances for GF samples on a ZnO a buffer-layer of thickness of 0 -150 nm.The solid line is a guide for the eye. Figure 5 . Figure 5. 
Magnetic data for GFs with and without a buffer-layer (50 nm). (a) The hysteresis loops obtained at 5 K after the diamagnetic background has been subtracted; (b) the temperature dependence of the susceptibilities found from the FC and ZFC magnetizations taken with a magnetic field of 100 Oe; the solid curves are fits to a Curie-Weiss law.
4,221
2014-12-03T00:00:00.000
[ "Materials Science", "Physics" ]
On spinning loop amplitudes in Anti-de Sitter space In this work we present a systematic study of AdS$_{d+1}$ loop amplitudes for gluons and gravitons using momentum space techniques. Inspired by the recent progress in tree level computation, we construct a differential operator that can act on a scalar factor in order to generate gluon and graviton loop integrands: this systematizes the computation for any given loop level Witten diagram. We then give a general prescription in this formalism, and discuss it for bubble, triangle, and box diagrams. Introduction The gauge gravity duality, or AdS/CFT, is the correspondence between weakly coupled theories of gravity in Anti-de Sitter space and conformal field theories with large N. This correspondence provides a powerful framework to study quantum gravity in Anti-de Sitter space [1-3]. Given the importance of this duality, a lot of effort has been invested in computing tree level AdS scattering amplitudes in configuration space and Mellin space [4-18]. In recent years, there has been renewed interest in computing CFT correlators in momentum space [19-43]. (There have also been recent results in p-adic space [44-46]. Additionally, because of translation invariance, momentum space is a natural choice for cosmological correlators; for some related recent papers, see [47-55].) However, most of the progress is largely focused on tree level results. AdS loop amplitudes pose difficult technical problems. (It is interesting to note that de Sitter loops are also conceptually difficult; for instance, it was pointed out in [56] that the scale factor a(t) enters the logarithmic divergence. For some recent progress in de Sitter loops, see [57].) In addition to the standard loop integrals, one performs bulk integrals whose complexity is already comparable to loop integrals in flat space. For a long time, there were very few loop-level results; however, some progress has occurred in the last few years. In [11,58], Mellin amplitudes corresponding to loop Witten diagrams in AdS were used to study analytical properties of such amplitudes. These papers inspired the usage of CFT crossing symmetry [59], which led to progress in computing loops in AdS5 × S5 [60-63]. Progress in the computation of scalar loop diagrams was made recently in [26,64-66]. Some progress in studying unitarity in the context of AdS was carried out in [58] and more recently in [67-69]. In [70], it was shown that higher-point diagrams at one loop may be written in terms of the 6j symbols of the conformal group. Similarly, Mellin space pre-amplitudes and the pole structure of the result were investigated in [71,72]. In [73,74], the 1-loop bubble diagram for a φ4 scalar was computed in the spectral representation. An algorithm which computes the one-loop Mellin amplitudes for AdS supergravity was demonstrated in [75]. Similarly, Cutkosky rules in CFTs at both strong and weak coupling are studied in [77]. Despite the aforementioned progress, work on loop amplitudes is still at a developing stage. It was shown in [23-25] that higher-point gravity and gauge theory tree amplitudes take a simplified form with the judicious use of the momentum space formalism. We view our work as the natural extension of tree level results in gauge and gravity theory with the usage of momentum space.
We are inspired by the stunning progress in the study of flat space S-matrix at loop-level which has revealed powerful mathematical structures and remarkable physical insight. Many of the results in flat space loop calculations have shown the connection between trees and loops [78,79] and gravitational theories to gauge theories [80], and the loop amplitudes also correspond to geometric structures [81]. Many of these deep connections and powerful mathematical structures have occurred in the context of gauge and gravity theory and with the usage of momentum space. We initiate this investigation as we are interested in exploring whether the AdS loop level gauge and gravity theory scattering amplitudes encodes analogous rich structures to flat space scattering amplitudes. Here is the organization of the paper. In section 2, we review the AdS momentum space formalism on tree-level amplitudes for gauge and gravity theory and discuss the necessary modifications to extend them beyond tree level computations. In particular, we manage to write any loop-level Witten diagram as a differential operator acting on a scalar factor. In section 3, we further discuss these scalar factors by providing implicit results for gluon triangle and box diagrams and by going over the explicit computation of gluon bubble diagram. We then conclude with future directions. Many technical details are collected in appendices. 2 Momentum space formalism: review of tree level technology and extension to loops We start by defining the bulk to boundary propagators 4 where i labels different external legs and where we define for convenience. We note that all propagators in this paper are in axial gauge, similar to our previous work [23][24][25]. The bulk to bulk propagators read as where we define the shorthand notation for brevity and where Π are projectors that depend on the vector k µ and the boundary metric η µν : we refer the reader to Appendix A.1 for the explicit form of any object without definition in this section. We also note that we are working in the Poincaré patch of the AdS with the metric ds 2 = z −2 (dz 2 + η µν dx µ dx ν ). The relevant three and four point vertex factors for gluons and three point vertex factor for graviton are as follows 5 where the permutations in the graviton vertex are generated by the permutation group element (k 1 k 2 k 3 )(ikm)(j n) in cycle notation. 6 At tree level, the expression for a gluon/graviton Witten diagram of m-external, n- In [24,25], one insight to simplify the computation was to rewrite the propagators as differential operators acting on simpler propagators. Indeed, we observe that At tree level, these quantities are not all independent and satisfy the equality m + 2n − 3r − 4s = 0. 8 One can modify the graviton Witten diagram by adding higher point interactions as well, yet in this paper we stick to three point graviton interactions only. with which eqn. (2.6) become The operator D above consists of contraction of tensor structures in the Witten diagram but its details are not really important. The real importance of this form of the Witten diagram is that it drastically reduces the number of integrations because it generates the full Witten diagram by acting on a scalar factor with a differential operator whose action simply consists of derivatives, limits, and contractions, all of which can be easily automated in a computer algebra program. 
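The statement that the operator only involves derivatives, limits, and contractions is what makes automation possible. The snippet below is a toy illustration of that workflow in sympy: the placeholder operator (a k-derivative followed by a regulator limit) and the placeholder scalar factor are illustrative assumptions, not the actual D or M defined in the paper.

```python
import sympy as sp

k, z, eps = sp.symbols('k z epsilon', positive=True)

# Placeholder scalar factor: a simple regulated stand-in (illustrative only, not eqn. (2.14))
M = sp.exp(-k * z) * z**eps

def apply_operator(expr):
    """Toy 'D': differentiate w.r.t. k, then take the regulator limit eps -> 0."""
    return sp.limit(sp.diff(expr, k), eps, 0)

result = sp.simplify(apply_operator(M))
print(result)   # -> -z*exp(-k*z)
```

In an actual implementation the same pattern applies, just with the paper's tensorial operator and the bulk-integral scalar factor in place of these stand-ins.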
In contrast, symbolic integrations of interest here are computationally costly and reducing the total number of integrations enables the computations of higher order Witten diagrams in practice (see [23][24][25][26] for further details with explicit results). Once we move beyond tree level, the momenta q dependence of D q µν and D q µνρσ spoils the nice separation of the scalar factor from the rest because we cannot take the differential operator outside the loop momenta integral due to dependence of q. To circumvent this problem, we present here an alternative representation for the propagators: for auxiliary polarization vectors v, where we define in terms of the modified projectors Π. Likewise, we use these auxiliary vectors to rewrite the tensor structure of three point vertex factors to be independent of k: With these ingredients, we can rewrite eqn. where D carries all tensor structure information and where M is simply a scalar factor. As D consists of derivatives, limits, and algebraic manipulations, it can be straightforwardly and efficiently applied once the scalar factor is known. On the other hand, scalar factor has all the integrations which are particularly challenging for symbolic arguments unless carried out at specific conditions (such as gluons in AdS 4 ). Therefore, in the rest of the paper, we will focus on scalar factors. Scalar factors for spinning Witten diagrams The scalar factors for loop level Witten diagrams defined in eqn. (2.14) read as for gluons and for gravitons, where q a (or q a ) is the momenta of the propagator a whose dependence on the external momenta k b and the loop momenta c is determined by the topology of the diagram at hand. Likewise, z a , z a , and z a are one of bulk points z i , where topology determines which one they are. which can be reorganized as where we can take Similarly, we can write down the scalar factors associated with the triangle and box diagrams as follows. Computing bubble diagram Let us recall the scalar factor for bubble diagram from eqn. (3.3): for The first piece in eqn. (3.7) can be computed analytically in terms of Appell's hypergeometric functions: 9 9 Please see section A.2 for further details. which we can rewrite using the definition of q above as where we have defined for convenience. In Appendix A.3 we go over how to do such volume integrals in great generality via standard QFT tricks; the final result in eqn. (A.29) reduces such involved integrals into various products, summations, 1d definite integrals of rational functions, and set-partitioning, all of which can be efficiently implemented in an algorithmic way in any computer computation software such as Mathematica. Indeed, we can rewrite eqn. (3.14) with eqn. (A.33) as is the overall tensor structure. 10 The other terms in the equation above are of similar form as well: they will simply have different overall-tensor-structure, and they may bring additional p dependent terms inside the integration; however all of them can be computed using the same equation, that is eqn. (A.29). The remaining computation in eqn. (3.15) is intricate which involves integrating products of hypergeometric functions, hence it is not sagacious to insist to work in non-specific dimensions. However, the expression is very simple for specific d values; for example, with which the integration becomes doable with an appropriate regularization at any given n. 
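The bulk radial integrals inside these scalar factors are moments of products of Bessel functions (cf. Appendix A.2), and such integrals can be evaluated numerically to arbitrary precision as a cross-check. The sketch below uses mpmath on a representative triple-Bessel moment; the exponent, order, and momenta are assumed values for illustration, not the specific integral of eqn. (3.7).

```python
import mpmath as mp

mp.mp.dps = 30  # working precision in decimal digits

def bessel_moment(lam, nu, p1, p2, k):
    """Numerically evaluate I = int_0^inf dz z^(lam-1) K_nu(p1 z) K_nu(p2 z) J_nu(k z)."""
    integrand = lambda z: (z**(lam - 1) * mp.besselk(nu, p1 * z)
                           * mp.besselk(nu, p2 * z) * mp.besselj(nu, k * z))
    return mp.quad(integrand, [0, mp.inf])

# Near z = 0 the integrand behaves like z^(lam - 1 - nu), so lam > nu guarantees convergence;
# the two Bessel-K factors give exponential damping at large z.
print(bessel_moment(lam=3, nu=1, p1=1.0, p2=1.5, k=0.7))
```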
10 Its explicit form reads as 16) In summary, we observe that the loop-level computations become tractable in momentum space in AdS d+1 . Although we only illustrated the case for the gluons, the situation is similar for gravitons as well; what is common in both cases though is the very technical nature of the formalism that we unpacked above. However, the key point is that the computations in each and every step is algorithmic and can be efficiently implemented in a computer computation software. In particular, momentum space formalism along with the way we decompose the Witten diagrams into differential operators and scalar factors effectively converts a mathematically hard problem into technical yet computer-friendly computation as the final result is simply derivatives and limits acting on a scalar factor which itself is computed via products, sums, and list partitioning, and all of these can be efficiently computed unlike a convoluted volume integral! The main result of the paper is therefore the following prescription: 3. Rewrite the scalar factor such that it becomes of the form M = dp 1 dp 2 . . . dp m dz 1 · · · · · · dz n · · · d d 1 · · · · · · d d r · · · which can always be done in the current formalism (see eqn. Conclusion In this paper, we have studied a formalism to compute loop amplitudes in Anti-de Sitter space in Fourier space for gauge theory and gravity loops in AdS d+1 . In particular, we have constructed a differential operator which can act on a scalar factor to yield both Yang Mills and gravity loop correlators. In addition, we have presented a prescription which can be automated in order to perform tensorial loop computations in Anti-de Sitter space. There are myriad of interesting directions that one can pursue and we will list a few. One of the main motivation of our work is to take the first step to connect AdS loops with cascading number of new ideas and techniques that are emerging in flat space. For instance, in [82], it was shown that n-particle massive Feynman integrals in arbitrary dimensions of spacetime have nice geometric properties such as the connections with hyperbolic simplicial geometry and the answer respects dual conformal symmetry. This method can be directly applied to the computation of the above-mentioned AdS scale factor. Furthermore, we want to stress that we are motivated to study gluons and gravitons in AdS as many of the extremely powerful physical insights and mathematical structures in the last decade have occurred in the study of the flat space S-matrix of gauge theory and gravity [81]. It is tempting to contemplate if there are analogous geometric structures like the amplituhedron that exist for loop amplitudes in Anti-de Sitter space. Similarly, as in the context of Minkowski space, AdS loops can also be expressed in terms of the special classes of multiple polylogarithms. In the context of flat space, there has been progress in demonstrating that these complicated polylogs can admit a much simpler analytic expression. The technology used is called the symbol map and this map can capture combinatorial and analytical properties of the complicated Feynman integrals [83]. In a related work [84], symbols were used to compute loop amplitudes in de Sitter space. It would be natural to use these methods in the context of AdS loops. Likewise, it would be intriguing to incorporate cutting rules in momentum space AdS in the study of gluons and gravitons, and we are hoping to address it in a future work. 
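The "list partitioning" step used in the prescription above is the combinatorial backbone of the tensor reduction: factors of the loop momentum are paired up before being replaced by metric contractions. A minimal, self-contained helper that enumerates these pairings is sketched below; the function name and the string indices are purely illustrative.

```python
def pairings(indices):
    """Yield every perfect pairing of a list of Lorentz indices (none if the count is odd)."""
    if len(indices) % 2 == 1:
        return                      # integrands odd in the loop momentum integrate to zero
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

# Four indices admit 3 pairings, matching the symmetric eta-eta replacement structure
print(list(pairings(['mu', 'nu', 'rho', 'sigma'])))
print(len(list(pairings(['mu', 'nu', 'rho', 'sigma']))))  # -> 3
print(list(pairings(['mu', 'nu', 'rho'])))                # -> [] (odd number of loop momenta)
```

An odd number of indices yields no pairings at all, which mirrors the statement in Appendix A.3 that integrands odd in the loop momentum vanish.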
A.1 Projectors and differential operators In this appendix, we collect some of the technical details we skipped in main body. We first note the definition of the projectors Π used in eqn. (2.3): and We likewise note the definition of the differential operators in eqn. and the modified projectors for gravitons are defined in terms of them: where we use these modified projectors in eqn. (2.12). We finally note the tensor structure of vertex factors given in eqn. (2.13): with which one can define the full modified differential operator D: (A.7) with which we write down the Witten diagrams in terms of the scalar factors in eqn. (2.14). A.2 On integration of products of Bessel-type functions We know in momentum space formalism that the bulk point integrals we need to compute take the form for three point interactions, where E a (x) ∈ {J a (x), K a (x)}. In [85] Rice uses contour manipulations to compute such integrals in terms of Appell's hypergeometric function if E = J, for which the result reads as Same result has been computed independently by Bailey in [86] who first uses hypergeometric identities to derive and then uses analytic continuation from BesselJ to BesselK to get eqn. (A.9). The identity he uses is and he argues that the transition is valid as the the integrand still converges. As z a K a (z) better converges for z → ∞ and is still convergent for z → 0, we can replace z a J a (z) with z a K a (z) where we can use the identity for Re (λ + µ ± ν) > |Re (ρ)| , c > b > 0 , a > 0 (A.13) A.3 Computing loop integrals via standard QFT tricks In this appendix we will review the solution of loop integrals via Feynman parametrization, a standard trick known from QFT. The general form of integrals of interest are 14) which can be parameterized with the Feynman trick as We can then use 17) and shift the integration parameter to obtain which we can rewrite as We note that the integrand is a function of 2 only except for where the exponents are integers, hence the Lorentz symmetry allows us to make the replacements where P α i 1 ...im is the list which has the element v a i a times, and the element n i=1 u i b i α times; for example Note that the partitioning of p ∈ P α i 1 ...im is only possible if P has even number of elements, hence 26) This is just the realization of the fact that integration volume is invariant under → − , hence integrands with odd number of vanish. We are now left with the −integration in eqn. (A.23). To proceed, we first use the well-known identity which can be generalized as We can now write down the final result: · · · ( · v m ) (a 1 + (b 1 + ) 2 ) . . . (a n + (b n + ) 2 ) 2j = iπ d/2 (−1) n Γ n − d 2 for where the set P α i 1 ...im is defined and detailed around eqn. (A.24). As an example, we see that which then becomes
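The scalar loop integrals produced by the Feynman parametrization in Appendix A.3 ultimately reduce to the standard master formula for the integral of (ℓ² + Δ)^(−n). The sketch below numerically cross-checks the Euclidean version of that formula (the Minkowski result quoted in eqn. (A.29) carries the extra i(−1)^n from Wick rotation); the particular d, n, and Δ are arbitrary choices.

```python
import math
from scipy.integrate import quad
from scipy.special import gamma

def loop_integral_numeric(d, n, Delta):
    """Euclidean int d^d l 1/(l^2 + Delta)^n, done as a radial integral (valid for 2n > d)."""
    surface = 2 * math.pi**(d / 2) / gamma(d / 2)      # area of the unit (d-1)-sphere
    radial, _ = quad(lambda r: r**(d - 1) / (r**2 + Delta)**n, 0, math.inf)
    return surface * radial

def loop_integral_formula(d, n, Delta):
    """Closed form pi^(d/2) Gamma(n - d/2)/Gamma(n) * Delta^(d/2 - n)."""
    return math.pi**(d / 2) * gamma(n - d / 2) / gamma(n) * Delta**(d / 2 - n)

d, n, Delta = 3, 2, 1.7
print(loop_integral_numeric(d, n, Delta))
print(loop_integral_formula(d, n, Delta))   # matches to numerical precision
```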
3,943.8
2020-06-22T00:00:00.000
[ "Computer Science" ]
ML-Based Trojan Classification: Repercussions of Toxic Boundary Nets Machine learning (ML) algorithms were recently adapted for testing integrated circuits and detecting potential design backdoors. Such testing mechanisms mainly rely on the available training dataset and the extracted features of the Trojan circuit. In this letter, we demonstrate that this method is attackable by exploiting a structural problem of classifiers for hardware Trojan (HT) detection in gate-level netlists, called the boundary net (BN) problem. There, an adversary modifies the labels of those BNs, connecting the original logic to the Trojan circuit. We show that the proposed adversarial label-flipping attacks (ALFAs) are potentially highly toxic to the accuracy of supervised ML-based Trojan detection approaches. The experimental results indicate that an adversary needs to flip only 0.09% of all labels to achieve an accuracy drop of over 9%, demonstrating one of the most efficient ALFAs in the HT detection research domain. I. INTRODUCTION O VER recent years, the domain of hardware Trojan (HT) insertion and detection has received increasing attention.Such a threat is considerable and pivotal with the modern semiconductor supply chain increasingly relying on outsourced elements developed by third parties, spanning all design-process steps, including tools and intellectual property (IP) cores.Therefore, trustworthy testing of integrated circuits (ICs) or electrical/electronic (E/E) products should be paramount for every designer.The key idea is to test and evaluate the trustworthiness of ICs already inside the design house, not later [1].Nevertheless, a Trojan circuit is essentially designed to evade detection by automated testing tools [2].A Trojan circuit, consisting of trigger and payload parts, can be injected into pre-and post-silicon IC production stages.HTs inserted into netlists are a severe issue [2], and IC verification engineers must develop efficient detection techniques as countermeasures.Machine learning (ML) methods have shown high accuracy in detecting HT [3].For instance, support vector machines (SVMs) [4], random forest classifiers (RFCs) [5], and further algorithms [3] have been employed to detect HTs.In these approaches, the ML model of choice is trained to classify a given wire as either Trojan-free (Normal) or as part of a Trojan circuit.Multiple metrics have been proposed to evaluate the quality of such ML models [3].In [6], the best results of all reviewed MLs for HT detection exhibit a true positive rate (TPR) between 72.5% and 85.3% while the TPR of supervised MLs ranges from 68.2% to 99.9%. Most of the proposed works on HT classifiers1 in the netlist-testing domain do not cover and evaluate the classifiers' security.Recently, an adversarial example attack (AE) was performed successfully in [7] and [8] against multilayer neural networks for HT detection at gate-level netlists proposed in [9].AE aims to add some noise, so-called perturbation, to evade the HT classifier.The results show that the TPR drops by at most 30.15%.An adversarial label-flipping attack (ALFA) was performed against the categorical boosting (CatBoost)based HT classifier in [10].ALFA is considered a causative attack, where the attacker aims at degrading the accuracy of the HT classifier by influencing and altering the training process [11].The attack results show that the classifier's average accuracy drop is 58.5% if the attacker flips 20% of the sample label. A. 
Article Contributions and Organization This letter investigates a structural problem of HT classifiers called the boundary trojan net labeling problem (BTP).In particular, we demonstrate that labeling boundary nets (BN) is practically toxic to the HT classifier quality, and an adversary can exploit BTP's impact to perform ALFA against HT classifiers.Thus, we carry out four different experiments to prove the attack's efficiency by using the benchmarks introduced in [12] and [13] and made available in Trust-Hub for research use [14].To the best of our knowledge, this work is the first proposal exploiting the BTP to perform successful ALFAs against HT classifiers. The remainder of this letter is organized as follows.Section II outlines the structural problem of supervised ML classifiers to verify the integrity of gate-level netlists when BN labels are applied incorrectly.Four attack scenarios are presented in Section III, followed by an analysis of the attack's impact on multiple classifiers in Section IV.Section V provides a conclusion with a short result discussion. A. Construction of HT Classifier The construction of an HT classifier consists of the training and inference phases.The training phase comprises several stages: a verification engineer first prepares a set of infected netlists.All features of every net in each netlist are extracted in the feature extraction stage.Then, every net is labeled as either Trojan or Normal in the labeling stage.An ML is trained based on all extracted features resulting in an HT classifier. When a new netlist needs to be tested during the inference phase, the verification engineer uses this HT classifier to verify whether the provided netlist is Trojan-free. B. Boundary Trojan Net Problem: Definition All nets only connected to the Trojan cells (logic gates and registers) are called Internal Trojan Nets.All nets only connected to the Trojan-free cells are called Normal Nets.The fuzzy area at the edge of a Trojan circuit is called the boundary area.There, all nets are attached to both, the Trojan circuit and the Normal circuit.Such nets are called BNs [15], [16].Fig. 1 shows those BNs (green) that lay between the Trojan (red) and Normal (black) circuit. In the context of supervised ML, the location of a BN determines its according label, i.e., whether it is a Trojan or Normal net, formalizing the boundary trojan net problem (BTP).Correct labeling of BNs is very ambiguous [15].For instance, if the BNs are neglected and considered Normal nets as in [5], this labeling procedure leads to BN misclassification.Therefore, a correction mechanism of the classification is needed as an extra step [16] to overcome the BTP.This makes the HT classifier inefficient in terms of time complexity.Another labeling procedure considers all BNs as Trojan nets [15], which results in better HT classification accuracy.However, it significantly increases the size of the Trojan circuit and imprecisely labels some Normal nets as Trojan nets.Consequently, the BTP is a structural problem of HT classifiers due to the unclear and not-obvious way to label BNs.We therefore exploit the BTP to introduce and perform ALFAs on HT classifiers. III. BTP EXPOSING ALFAS AGAINST HT CLASSIFIERS This section introduces several ALFA scenarios against HT classifiers. A. Threat Model In this work, the adversary has complete knowledge of the labeling stage.We assume the adversary attacks the HT classifier in this stage by flipping only BN labels, as shown in Fig. 2. 
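Before detailing the concrete scenarios, a small sketch makes the label-flipping threat tangible. It trains a random-forest net classifier on synthetic, imbalanced "net features" and flips the labels of a small training subset, in the spirit of an ALFA; the dataset, feature count, and flip fraction are illustrative assumptions, not the Trust-Hub benchmarks or the 51-feature extraction used later in this letter.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Synthetic stand-in for net features (Trojan = 1, Normal = 0), highly imbalanced like real netlists
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

def train_and_score(y_train):
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_train)
    pred = clf.predict(X_te)
    return accuracy_score(y_te, pred), matthews_corrcoef(y_te, pred)

# ALFA-style poisoning: flip a small fraction of training labels (stand-in for boundary nets)
rng = np.random.default_rng(0)
flip_idx = rng.choice(len(y_tr), size=int(0.001 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

print("clean   :", train_and_score(y_tr))
print("poisoned:", train_and_score(y_poisoned))
```

On such random data the drop from flipping 0.1% of arbitrary labels is typically small; the point of this letter is precisely that flipping the right nets, the BNs, is far more damaging than flipping the same number of arbitrary nets.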
The adversary hence relabels the BNs manually, by malicious software [10], or even via an untrusted EDA tool [3], [17] to cause a misclassification of some nets in the later inference phase. A verification engineer could then mistakenly classify Trojan nets as Normal, so that the malicious circuit will not be removed from the netlist. Consequently, the resulting IC will have severe security issues. B. Possible ALFA Scenarios Based on BTP We propose four BN label-flipping scenarios: two can be perceived as deterministic procedures, while the third and fourth are random. In particular, these label-flipping scenarios are equivalent to performing ALFAs on HT classifiers. 1) ALFA.1: The attacker flips all BNs to be Trojan nets. 2) ALFA.2: The attacker flips all BNs to be Normal nets. 3) ALFA.3: The attacker deploys some random rules of BN labeling, e.g., every BN linked to a flip-flop from the trigger part is labeled as Normal. 4) ALFA.4: The attacker introduces plain random BN labeling without any rules. This attack will be used to compare the performance of the proposed labeling strategies against random labeling. In the following section, we evaluate the impact of the proposed ALFAs on several HT classifiers. IV. EXPERIMENTAL RESULTS AND ANALYSIS In the following, we quickly draft the most-used procedure of net-feature extraction proposed in [5] and introduce baseline HT classifiers that do not aim at overcoming the BTP but will be used to compare the HT classification quality before and after being attacked. A. Net-Feature Extraction and Basic Labeling Procedure In the following, we deploy a well-established net-feature extraction procedure introduced in [5]. Following this work, 51 features of every net n can be extracted, whereby k denotes the number of logic levels, as shown in Table I (net-feature extraction as proposed in [5], where 1 ≤ k ≤ 5). A table of extracted net features characterizes the netlist. For a set of netlists with their corresponding tables, the construction of the HT classifier consists of training and inference phases based on all tables, deploying leave-one-out cross-validation [5]. 1) Basic Labeling Procedure: The HT detection procedure introduced in [16] consists of two stages: 1) net classification and 2) classification result correction. Here, we modify this procedure and combine both stages in a single labeling procedure, called the basic labeling (BL) procedure, illustrated in Procedure 1. 2) Baseline Four HT Classifiers: A set of netlists from [14] is randomly selected. Then, we apply RFC, SVM, decision tree (DT), and decision stump (DS) classifiers to this set of netlists. Typically, the so-called F-measure is employed to evaluate the quality of almost all HT classifiers in this domain. However, the F-measure can dangerously show over-optimistic, inflated results [18]. Therefore, we additionally use the Matthews correlation coefficient (MCC), as MCC depends on all basic ML-evaluation metrics (true negative (TN), true positive (TP), false negative (FN), and false positive (FP)) and exhibits advantages over the F-measure for binary classifiers [18]. B. Reliability Analysis of HT Classifiers Fig.
3 shows a comparison of the TPR, TN Rate (TNR), Precision (PRE), F-measure, and MCC of all proposed classifiers.RFC and DT exhibit high TNR and Precision, indicating a reliable Trojan-free verification.Both classifiers perform well in Trojan detection with acceptable TPR and F-measure.In the SVM case, Trojan detection is slightly worse in comparison, and TNR indicates less reliable accuracy in identifying Trojanfree nets.Furthermore, DS wrongly predicts most of the nets as Trojan, which causes a high TPR but a low TNR.According to Fig. 3, the DS classifier is unreliable due to the huge difference between its F-measure and MCC.This indicates a case when the F-measure provides over-optimistic results, which is misleading.Since RFC and DT classifiers exhibit similar metrics, we choose RFC and SVM only as examples to study the impacts of our proposed attacks on HT classifiers in the following.impact on SVM, especially its TNR since BL is not the optimal labeling strategy. D. Boundary Nets as Toxin: Discussion Table II compares our results to the state of the art.The presented prior works on different applications and even on HT classification such as CatBoost [10] show that the adversary targets 20% instances in the dataset by ALFA to achieve 58.5% misclassification ratio of ML.Our work indicates that ALFAs exploiting BTP only target 0.09% of the sample size to cause 9.19% SVM misclassification ratio and 9.15% RFC misclassification, respectively.This reflects the severity of BN impact on HT classifiers, where flipping less than 0.1% of labels in the dataset causes almost 10% accuracy drop.This table, together with the results of ALFAs on RFC and SVM explains and demonstrates the toxic nature of the BTP to HT classifiers. Further, when a verification engineer uses an ML-based HT detection tool, the BTP will generally be toxic for the tool, and the verification engineer will not be able to remove Trojan nets completely with high confidence.Consequently, a verification engineer should apply several combinations of labeling scenarios and HT classifiers to achieve an acceptable classification outcome, though the number of such combinations is enormous: considering a practical example with just four classifiers n c = 4 and a set of benchmarks with a total number of n BN = 254 BNs where the verification engineer wants to find one classifier out of four, the number of all possible combinations of labeling scenarios and HT classifiers is n c × 2 n BN = 4 × 2 254 = 2 256 combinations. V. CONCLUSION This letter investigated the labeling impact of the BTP on supervised ML-based HT-classification methods for gatelevel netlists.BNs are a structural problem for these HT classifiers where labeling a small number of nets greatly impacts HT classification.Therefore, BN labeling can be considered toxic for any HT classifier.This toxicity we illustrated with four labeling scenarios attacking HT classifiers where we clearly see strong effects on classification quality: while the true-negative ratio remains high in all cases, the true-positive ratio shows large deviations with according effects on MCC and F-measure.Theoretically, this can be mitigated by applying several labeling scenarios; practically, this is, however, impossible due to the large number of tobe-covered combinations.This calls for novel approaches, as with established ones proper detection and removal cannot be guaranteed anymore.We will hence target a different class of ML and evaluation methods to identify and correct maliciously flipped labels. 
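The gap between the F-measure and MCC observed for the DS classifier above reflects a general property: the F-measure ignores true negatives, while MCC uses all four confusion-matrix entries [18]. The toy example below reproduces the effect with sklearn metrics; the class ratio is an illustrative assumption chosen to keep the example short.

```python
import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef

# 90% positive-labelled nets, 10% negative (illustrative imbalance)
y_true = np.array([1] * 900 + [0] * 100)
y_pred = np.ones_like(y_true)          # degenerate classifier: everything flagged positive

print("F1 :", f1_score(y_true, y_pred))           # ~0.947, looks respectable
print("MCC:", matthews_corrcoef(y_true, y_pred))  # 0.0, exposes the useless classifier
```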
ML-Based Trojan Classification: Repercussions of Toxic Boundary Nets. Saleh Mulhem, Felix Muuss, Christian Ewert, Rainer Buchty, and Mladen Berekovic. Index Terms: Gate-level netlist, hardware Trojan (HT), integrated circuit (IC) testing, machine learning (ML). December 2023; date of current version 30 August 2024. This work was supported in part by the German Ministry of Education and Research (BMBF) via the Project VE-Jupiter under Grant 16ME0234. This manuscript was recommended for publication by F. Merchant. (Corresponding author: Saleh Mulhem.) The authors are with the Institute of Computer Engineering, Universität zu Lübeck, 23562 Lübeck, Germany (e-mail: saleh.mulhem@uni-luebeck.de). Procedure 1: Basic labeling (BL) procedure. Input: table of unlabeled net features. Output: table of fully labeled nets. 1) Every internal net of a Trojan is labeled as a Trojan net. 2) Every boundary net linked to the payload is labeled as follows: (a) Trojan net, if the wire is linked to the payload output; (b) Trojan net, if the wire is connected to a logic gate or complex gate*; (c) all other boundary nets linked to the payload are Normal nets. 3) Every boundary net linked to the trigger part is labeled as follows: (a) Trojan net, if the wire is connected to a logic gate or complex gate* with four or more input ports and is linked somehow to a flip-flop; (b) all other boundary nets linked to the trigger are marked as Normal nets. 4) Every other net is labeled as Normal. (* A complex gate indicates a combination of basic gates integrated into a single cell by CMOS vendors.) TABLE II: Comparison between our work and the state of the art.
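A direct transcription of the labeling rules in Procedure 1 above might look like the following sketch. The net representation (a small dict with connectivity flags) is a hypothetical stand-in for a real netlist parser, so all field names are assumptions.

```python
def basic_label(net):
    """Assign 'trojan' or 'normal' to a net following the BL procedure (Procedure 1).

    `net` is a hypothetical dict, e.g.:
      {'internal_trojan': False, 'boundary': True, 'side': 'trigger',
       'at_payload_output': False, 'to_logic_or_complex_gate': True,
       'gate_inputs': 5, 'linked_to_flipflop': True}
    """
    if net.get('internal_trojan'):
        return 'trojan'                                   # rule 1: internal Trojan nets
    if net.get('boundary'):
        if net.get('side') == 'payload':                  # rule 2: payload-side boundary nets
            if net.get('at_payload_output') or net.get('to_logic_or_complex_gate'):
                return 'trojan'
            return 'normal'
        if net.get('side') == 'trigger':                  # rule 3: trigger-side boundary nets
            if (net.get('to_logic_or_complex_gate')
                    and net.get('gate_inputs', 0) >= 4
                    and net.get('linked_to_flipflop')):
                return 'trojan'
            return 'normal'
    return 'normal'                                       # rule 4: everything else

print(basic_label({'boundary': True, 'side': 'trigger', 'to_logic_or_complex_gate': True,
                   'gate_inputs': 5, 'linked_to_flipflop': True}))   # -> 'trojan'
```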
3,479
2024-09-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Machine Learning for Detection of Muscular Activity from Surface EMG Signals Background: Muscular-activity timing is useful information that is extractable from surface EMG signals (sEMG). However, a reference method is not available yet. The aim of this study is to investigate the reliability of a novel machine-learning-based approach (DEMANN) in detecting the onset/offset timing of muscle activation from sEMG signals. Methods: A dataset of 2880 simulated sEMG signals, stratified for signal-to-noise ratio (SNR) and time support, was generated to train a hidden single-layer fully-connected neural network. DEMANN’s performance was evaluated on simulated sEMG signals and two different datasets of real sEMG signals. DEMANN was validated against different reference algorithms, including the acknowledged double-threshold statistical algorithm (DT). Results: DEMANN provided a reliable prediction of muscle onset/offset in simulated and real sEMG signals, being minimally affected by SNR variability. When directly compared with state-of-the-art algorithms, DEMANN introduced relevant improvements in prediction performances. Conclusions: These outcomes support DEMANN’s reliability in assessing onset/offset events in different motor tasks and the condition of signal quality (different SNR), improving reference-algorithm performances. Unlike other works, DEMANN’s adopts a machine learning approach where a neural network is trained by only simulated sEMG signals, avoiding the possible complications and costs associated with a typical experimental procedure, making this approach suitable to clinical practice. Introduction Assessing muscle-recruitment timing is relevant in different fields, including clinical gait analysis and electromyography-driven assistive devices [1,2]. Traditionally, onset/offset events are detected by visual inspection of surface electromyographic (sEMG) signals by trained experts [3]. However, visual inspection may be time-consuming, not completely reproducible/repeatable, and not suitable for large datasets [4]. A further classical approach is represented by threshold-based automatic methods [5]. Among these, the double-threshold statistical algorithm (DT) is a robust approach, and nowadays it is still widely adopted for clinical and research purposes [6,7]. Further approaches are typically developed based on time-frequency analysis [8][9][10][11] and signal filtering by a Teager-Kaiser energy operator (TKEO) [12]. As reported [4,13], performances of the above-mentioned approaches could be significantly affected by the relative amount of background noise compared to the magnitude of the actual sEMG signal, i.e., low values of the signal-to-noise ratio (SNR). A further issue to consider is that the majority of these approaches do not take into account those conditions where SNR is not constant throughout the signal acquisition, such as during prolonged tasks (walking, running, cycling). Intra-signal variability of SNR during sEMG recording could be ascribed mainly to the change of noise power, due to the alteration of electrode-skin contact characteristics or to the changes in the ground reference level [7]. This could strongly affect the onset-offset event detection in those portions of the sEMG signal where SNR deteriorates. Machine/deep learning has proven to be effective in interpreting sEMG signals for different purposes [14], such as to classify gestures [15], to detect muscle fatigue [16], and to investigate human-machine interaction [17]. 
Different models were adopted: convolutional and recurrent neural networks for muscle force estimation [18], unsupervised competitive learning for assessing muscle recruitment during pregnancy [19], and multi-layer perceptron to classify neuromuscular disorders [20,21]. Support vector machines were largely applied to the sEMG signal for classification purposes [22,23] and for the detection of physiological patterns and parameters [24,25]. Attempts were made even for characterizing the walking task, with particular focus on classifying gait phases and assessing gait [26][27][28][29]. In spite of the presence of a large literature on the machine-learning based interpretation of sEMG signals, this approach is scarcely adopted to face the challenge of assessing the timing of muscle activation. The problem to solve is essentially an sEMG-based prediction of a transition between the period when the muscle is silent and the period when the muscle is active, i.e., to discriminate between actual sEMG activity and noise. Given that, the possibility of adopting a machine learning approach that learns to interpret the shape of the sEMG signals for assessing muscle-activation onset and offset seems to be a feasible solution. A very recent study proposed by Ghisleri et al. adopted long short-term memory (LSTM) recurrent neural networks (RNN) for detecting muscle activity [30]. Very encouraging outcomes were achieved in this study by using a very diversified dataset of sEMG signals to train the network, including simulated signals, signals from able-bodied subjects, and signals from patients affected by neurological or orthopedic pathologies. To run this approach, a large dataset of real sEMG signals from many different subjects is needed. However, recruiting an adequate number of subjects to build the dataset could be a challenging task. This is particularly true if patients affected by different pathologies are included, as in this case. Thus, an alternative way that considers a less demanding approach to neural-network training could be valuable. A first preliminary (and at the moment the only) attempt to provide a different approach to the training phase was proposed, based on the idea of including only simulated sEMG signals in the training procedure [31]. This study used the wavelet spectrogram of sEMG signals as the input to the network. Model performances were provided only in terms of absolute latency of onset-timing detection. Validation performed against two literature methods [32] showed promising results in terms of latency, encouraging research to continue along this path. However, the prediction of the offset event is not provided, and the model performances are tested only in a single subject, questioning the clinical impact of this approach and the reliability of the validation procedure. The goal of the present study was to investigate the suitability of a novel machinelearning-based approach in assessing the onset-offset timing of muscle activation, i.e., the Detector of Muscular Activity by Neural Networks (DEMANN). Specifically, the present approach aimed to predict both onset and offset timing using only simulated sEMG signals with a large range of SNR values for neural network training in order to explore a large range of SNRs without deterioration, which is often encountered in clinical environments. 
This aspect, together with the simple architecture of the neural network (based on a multi layer perceptron), should help to provide fast training and prediction, making this approach very suitable for clinical purposes. Thus, the main contributions that the present study would like to provide could be summarized as follows: • To develop a novel high-performance approach (DEMANN) that contributes to support the use of machine learning for muscle activity detection; • To highlight the advantages of the proposed machine-learning approach, such as the possibility of real-time applications, achieved without loss of accuracy and with respect to existing, non-machine-learning-based systems; • To limit the deterioration of event assessment associated with low SNRs and the large inter-signal variability of SNR, typical of clinical environments, by training the model with simulated sEMG signals with a large range of SNR values; • To reduce the complexity of the experimental protocol associated with model training, since no signal acquisition is needed to provide real time activation predictions. Materials and Methods The robustness of DEMANN was evaluated by a test bench of simulated sEMG signals and two datasets of real sEMG signals. Simulated and real sEMG signals underwent the same procedure described in the following sections. DEMANN was validated by a direct comparison with reference approaches on both simulated and real data. Simulated sEMG Signals A simulation study, using a test bench of signals, was carried out for assessing the performance of the DEMANN approach in predicting onset and offset events of muscular activity. sEMG signals acquired during cyclic movements could be modeled as the superimposition of the actual signal produced by muscle contraction and the background noise [33]. In this study, a Gaussian process with zero mean and variance σ 2 noise was adopted to model the sEMG-signal where the muscle was silent and only background noise was acknowledged. To simulate the sEMG-signal portion where the muscle is recruited, the background uncorrelated noise was added to a band-limited stochastic process with zero-mean Gaussian distribution of amplitude and a fixed power level [6]. This distribution was achieved by band-pass filtering (80-120 Hz) a Gaussian series of uncorrelated samples, according to [6]. This Gaussian distribution was truncated to simulate the sEMG activity due to muscle activation. Each simulated sEMG signal was generated with a sampling frequency fs = 2000 Hz, a time window = 1 s, and a variable value of the Gaussian-distribution median, µ, ranging from 0 to 1. Different simulated sEMG signals were created varying the standard deviation, σ, and the time support, 2 × α × σ, of the Gaussian distribution, in order to simulate the physiological variability associated with the recruitment of different muscles. The variation of σ was achieved according to the desired value of SNR, where: Simulated sEMG signals were generated from all the different combinations of the values adopted for σ (50, 100, and 150 ms), for α (1, 1.5, 2, and 2.4), and SNR values from 1 dB to 30 dB, with step = 1. In [30], Ghisleri et al. trained LSTM recurrent neural networks by means of simulated sEMG signals, with SNR ranging from 3 dB to 30 dB. In the present paper, this SNR range was slightly expanded to consider even worse conditions. Real sEMG Signals Two different datasets of real sEMG signals were considered. 
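Before turning to the real datasets, here is a minimal sketch of the simulated-signal model described above: unit-variance Gaussian background noise plus an 80-120 Hz band-limited Gaussian burst scaled to a target SNR on a time support of width 2ασ. The filter order, the rectangular gating of the burst, and the exact way the SNR is imposed are assumptions where the text leaves details open.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def simulate_semg(fs=2000, duration=1.0, snr_db=10.0, sigma=0.1, alpha=2.0, mu=0.5, seed=0):
    """Simulated sEMG: white Gaussian background noise + an 80-120 Hz Gaussian burst."""
    rng = np.random.default_rng(seed)
    n = int(fs * duration)
    t = np.arange(n) / fs

    noise = rng.standard_normal(n)                      # unit-variance background noise
    b, a = butter(4, [80, 120], btype='bandpass', fs=fs)
    burst = filtfilt(b, a, rng.standard_normal(n))      # band-limited stochastic process

    active = np.abs(t - mu) <= alpha * sigma            # time support of width 2*alpha*sigma
    burst = burst / burst[active].std() * 10 ** (snr_db / 20)   # impose the target SNR

    semg = noise.copy()
    semg[active] += burst[active]
    ground_truth = active.astype(int)                   # 1 = muscle active, 0 = silent
    return t, semg, ground_truth

t, x, gt = simulate_semg(snr_db=6)
print(x.shape, gt.sum() / len(gt))   # fraction of samples labeled "active"
```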
The first dataset is available in [3] (https://github.com/TenanATC/EMG, accessed on 23 April 2021), including the ground truth. The experimental protocol consisted of acquiring sEMG signals from 18 participants performing knee extension and elbow flexion. Knee extension was performed in subjects seated in a stationary chair, with a mass (2.3 kg) applied to the right ankle. Elbow flexion was performed with a mass (2.3 kg) applied to the right wrist. sEMG probes were applied over vastus lateralis (VL) for monitoring knee extension and over biceps brachii (BB) for elbow flexion. A total of 103 sEMG signals were acquired with 0 dB < SNR < 13 dB. Three experts visually analyzed the signals and noted down the activation onsets in a randomized and double-blind fashion. Every trial was inspected twice by each expert. The average over the six onset values was the ground truth for the experiments in [3] and it was adopted also here. Further details can be found in [3]. The second dataset consisted of foot-floor contact and the sEMG data collected during 30 healthy adults walking, retrospectively taken from the database built at the Movement Analysis Lab, Università Politecnica delle Marche, Ancona, Italy and used for previous studies [28,29]. Data are freely available, consulting the public repository of medical research data PhysioNet [29,34,35]. Overweight and obese people (body mass index, BMI > 25) and subjects affected by any pathological condition, joint pain, or undergone orthopedic surgery were not considered. Gait data were captured (sampling rate: 2 kHz; resolution: 12 bit) by the multichannel recording system Step32 (Medical Technology, Torino, Italy). sEMG signals were acquired in each leg by single differential probes placed over gastrocnemius lateralis (GL), tibialis anterior (TA), and vastus lateralis (VL). SNR values ranged between 3 dB and 30 dB. SENIAM guidelines for sEMG-sensor positioning were respected [36]. Foot-floor contact signals were measured by three footswitches placed under the heel and the first and the fifth metatarsal heads of the foot. Subjects walked barefoot at a self-selected pace for about 5 min, following an eight-shaped path, which involved natural deceleration, acceleration, and reversing. Further details are reported in [28]. The research was undertaken following the ethical principles of the Helsinki Declaration and was approved by the local ethical committee. Signal Pre-Processing Simulated and real sEMG signals were band-pass filtered (2nd-order Butterworth filter, cut-off frequency 10-500 Hz). Then, signals were pre-processed to extract the linear envelope (LE), the root mean square (RMS), and the wavelet scalogram, which were concomitantly used as input to the neural network. LE was extracted by low-pass filtering of the signal (2nd-order Butterworth filter; cut-off frequency 5 Hz). RMS was extracted by computing the following formula over overlapping sliding 60-sample windows that scan the whole signal: Continuous wavelet transform (CWT) was used for providing energy localization in the time-frequency domain of sEMG signals in terms of CWT scalogram function, P sEMG , defined as the square of the absolute value of CWT coefficients, W sEMG : Wavelet transform was implemented by adopting Morse of order 4 with 6 levels of decomposition as mother wavelet. 
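A sketch of the three pre-processing streams just described follows. The Morse wavelet of the original pipeline is not available in pywt, so a Morlet wavelet is substituted here purely for illustration, and rectifying the signal before the 5 Hz envelope filter is an assumption.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def preprocess(semg, fs=2000):
    """Return the three DEMANN input streams: linear envelope, sliding RMS, CWT scalogram."""
    # Band-pass conditioning (2nd-order Butterworth, 10-500 Hz)
    b, a = butter(2, [10, 500], btype='bandpass', fs=fs)
    x = filtfilt(b, a, semg)

    # Linear envelope: 5 Hz low-pass filter (rectification before filtering is an assumption)
    b_le, a_le = butter(2, 5, btype='lowpass', fs=fs)
    le = filtfilt(b_le, a_le, np.abs(x))

    # RMS over overlapping 60-sample windows (stride 1, same length as the signal)
    pad = np.pad(x, (30, 29), mode='edge')
    rms = np.sqrt(np.convolve(pad**2, np.ones(60) / 60, mode='valid'))

    # CWT scalogram; Morlet used here as a stand-in for the Morse wavelet of the paper
    coefs, _ = pywt.cwt(x, scales=np.arange(1, 7), wavelet='morl', sampling_period=1 / fs)
    scalogram = np.abs(coefs) ** 2

    return le, rms, scalogram

le, rms, scal = preprocess(np.random.default_rng(0).standard_normal(2000))
print(le.shape, rms.shape, scal.shape)   # (2000,) (2000,) (6, 2000)
```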
Data Preparation To adopt the most suitable input to the neural network, preliminary experiments were performed, evaluating four different alternatives: LE, RMS, CWT scalogram, and their concatenation (LE + RMS + CWT). The concatenation consisted of a min-max normalization of the outputs of the different processing procedures, thus mapping the values in a [0, 1] range, and a concatenation of outputs of the different processing procedures (Figure 1). These choices were motivated by the related literature, where LE and RMS of the sEMG proved to be suitable signals to train the neural network for gait analysis [27][28][29], even if the prediction tasks were different from the one addressed here. Outputs of time-frequency analysis (spectrograms, scalograms) were also features often used in sEMG analysis, as for example in [31] to predict muscle activations. Before training the classifier, the concatenated vector was segmented in overlapping sliding windows of 10 samples, where each window was shifted of one sample with respect to the previous window. Each window was used to label that single sample, according to the value of the related ground truth in the window. The single sample was labeled as 1 (muscle activity) or 0 (no muscle activity), according to the most frequent ground truth value identified in the window. The size of the processed windows, the simple neural network architecture, and the use of sliding windows provided a very low latency of 3-4 milliseconds, which could be suitable for real-time applications. The single sample was labeled as 1 (muscle activity) or 0 (no muscle activity), according to the most frequent ground truth value identified in the window. The size of the processed windows, the simple neural network architecture, and the use of sliding windows provided a very low latency of 3-4 milliseconds, which could be suitable for real-time applications. Training the Classifier The classifier was a hidden single-layer (32 units) fully-connected neural network. A Rectified Linear Unit (ReLU) activation function was used, and a sigmoid function was adopted to map the network output to a 0-1 interval. The binary output was achieved by using a standard threshold of 0.5. The model was trained with a learning rate of 0.001, a batch size of 512 for 40 epochs using the standard stochastic gradient descent (SGD) optimization algorithm, and by adding a L2 regularization penalty set to 0.0001. The training set was composed of only simulated sEMG signals: 8 signals for each combination of σ (50, 100, and 150 ms) and α (1, 1.5, 2, and 2.4) were chosen, for a total of 96 signals for each SNR. Considering 30 SNR values (from 1 dB to 30 dB, step = 1), a total of 2880 simulated signals were included. The classifier performances were evaluated on three different testing sets. The first one was composed of only simulated sEMG signals. Eight signals were generated for each combination of σ, α, and SNR. Nine different SNRs were considered, specifically 3, 6, 10, 13, 16, 20, 23, 26, and 30 dB, as suggested in [6]. A total of 864 simulated signals were achieved. No overlapping occurred between the training and testing set, i.e., none of the simulated signals generated to train the model were used during testing. The ground truth of muscle activity was the vector composed of the same number of samples of the simulated sEMG signal, where samples can assume only two values: "0" and "1". The ground truth was "1" if the truncated Gaussian distribution assumed values > 0, "0" otherwise. 
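The classifier itself is small enough to write down directly. The sketch below mirrors the stated architecture and hyper-parameters in PyTorch, with the input dimension left as a placeholder (it depends on the concatenated LE + RMS + CWT window) and weight decay used as a stand-in for the L2 penalty.

```python
import torch
import torch.nn as nn

INPUT_DIM = 80  # placeholder: depends on the concatenated LE + RMS + CWT window

class Demann(nn.Module):
    """Single hidden layer (32 units), ReLU, sigmoid output, as described in the text."""
    def __init__(self, input_dim=INPUT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = Demann()
opt = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=1e-4)  # SGD, lr 0.001, L2 1e-4
loss_fn = nn.BCELoss()

# One illustrative training step on random data (batch size 512; threshold 0.5 at inference)
x = torch.randn(512, INPUT_DIM)
y = torch.randint(0, 2, (512,)).float()
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss), (model(x) > 0.5).float().mean().item())
```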
The DEMANN performance was provided in terms of precision, recall, F1-score, and mean absolute error (MAE), assessed in true positives as defined in Section 2.6. MAE was the average time distance between the predicted event and the one of the same kind in the ground truth signal. A comparison of the results achieved in the first test set was reported in Table 1, in terms of the mean F1-score (±SD) of classification. The overall best F1-score was achieved by LE + RMS + CWT (Table 1). Thus, this input was adopted to feed the neural network. The second test set was composed of 103 real sEMG signals proposed in [3]. The performance of the DEMANN approach was provided in terms of prediction accuracy and MAE, assessed in all 103 signals of the dataset. The third test set included foot-floor contact and sEMG data collected during 30 healthy adults walking, as described in Section 2.2. Sequences of five consecutive gait cycles were selected randomly. Two experts analyzed three different versions of the same signal: raw sEMG signal, rectified band-pass-filtered sEMG signal, and RMS of the sEMG signal. Then, the experts identified onset-offset instants of muscular activity by visual inspection. The mean over the six onset values represented the ground truth for the experiments. A total of 538 events were identified (269 onsets and 269 offsets).
The reference chosen for validation was the acknowledged double thresholding algorithm (DT) [5,6]. The performances were reported in terms of precision, recall, and F1-score of the event prediction. For all the three test sets, model validation and performance were computed in signals never used during the training of the model. Identification of sEMG Onset-Offset To achieve the model output, segmented sEMG signals were provided as input to the trained model. Thus, the model output was composed of sequences of 0 (no muscle activity) alternating with sequences of 1 (muscle activity). This signal was chronologically scanned to identify the transitions between the two conditions: the transition from 0 to 1 identified the onset event and the transition from 1 to 0 detected the offset event. This was achieved by the following procedure: a time tolerance T of 100 ms was adopted, as suggested in [10]. Then, we acknowledged as true positive each predicted event at time t p if an event of the same kind occurred in the ground-truth signal at time t g , such that t g − t p < T. Otherwise, the predicted event was acknowledged as false positive. Moreover, a post-processing procedure was performed, consisting of cleaning the signal by discarding those sequences of samples that were too short to be physiologically plausible; it was acknowledged, indeed, that muscle recruitments lasting less than 30 ms had no effect in controlling joint motion [6]. Thus, sequences of 0 (or sequences of 1) shorter than 60 samples were removed. Statistics The Shapiro-Wilk test was adopted to appraise the normality of data distribution. A two-tailed, non-paired Student's t-test was applied to verify the significance of difference between the normally-distributed samples. The Mann-Whitney test was applied to verify the significance of difference between the non-normally-distributed samples. Statistical significance was established at 5%. Simulated sEMG Signals The mean classification accuracy computed in the testing set stratified for different SNR is shown in Table 2. The accuracy on the simulated test set increased with increasing SNR from 3 dB (accuracy = 95.3%) to 23 dB (accuracy = 99.2%), and then it remained practically unaltered. Likewise, SD decreased with increasing SNR (from 4.8 to 0.7%). Table 3 reports the mean classification performances in the testing set computed separately in the portions of sEMG signals where muscle activity was acknowledged (activity area) and where it was not (silent area). The effect of SNR on the classification performances was preserved. While in the present study, a shallow neural network was used as a classifier, the DEMANN approach can be flexibly modified to embed a different machine-learning model. Support vector machines (SVM) are identified in literature as suitable modeling tools [22][23][24][25]. Thus, a direct comparison was performed, with results achieving replacing the neural network with a linear kernel SVM classifier on the same dataset of simulated sEMG signals. The SVM model was trained with the Stocastic Gradient Descent optimizer on a Hinge loss function and by applying a L2 regularization with coefficient 0.0001. The results of this comparison are shown in the following Table 4. A significantly lower mean MAE (p < 0.05) was provided by the DEMANN approach for both onset and offset timing. No significant differences were detected in precision, recall, or F1-score between the performances of the two models. 
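Stepping back to the event-extraction procedure of Section 2.6, the sketch below turns a binary activation sequence into onset/offset samples, discards bursts shorter than 30 ms (60 samples at 2 kHz), and matches predictions to ground-truth events within a 100 ms tolerance. It is a minimal reading of the procedure described above, not the authors' exact code.

```python
import numpy as np

def runs(x):
    """Return (start, length, value) for each constant run in a binary sequence."""
    x = np.asarray(x).astype(int)
    change = np.flatnonzero(np.diff(x)) + 1
    starts = np.r_[0, change]
    lengths = np.diff(np.r_[starts, len(x)])
    return [(s, l, x[s]) for s, l in zip(starts, lengths)]

def clean_and_detect(binary, min_len=60):
    """Flip runs shorter than min_len (30 ms at 2 kHz), then return onset/offset indices."""
    x = np.asarray(binary).astype(int).copy()
    for start, length, value in runs(x):
        if length < min_len:
            x[start:start + length] = 1 - value
    onsets = np.flatnonzero(np.diff(x) == 1) + 1     # 0 -> 1 transition
    offsets = np.flatnonzero(np.diff(x) == -1) + 1   # 1 -> 0 transition
    return onsets, offsets

def match_events(pred, truth, fs=2000, tol_s=0.1):
    """Count predicted events with a same-kind ground-truth event within the tolerance."""
    tol = tol_s * fs
    return sum(int(np.any(np.abs(np.asarray(truth) - p) < tol)) for p in pred)

gt = np.zeros(2000, dtype=int); gt[600:1400] = 1          # one activation, 300 ms to 700 ms
pred = gt.copy(); pred[605:630] = 0; pred[1500:1520] = 1  # noisy prediction with short spurious runs
on, off = clean_and_detect(pred)
print(on, off, match_events(on, clean_and_detect(gt)[0]))
```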
Figure 2 reports an example of a simulated sEMG signal, where the onset and offset events predicted by the DEMANN and DT approaches (rectangular lines) are highlighted and compared with the ground truth, i.e., the truncated Gaussian function used to model the simulated signal. The average performances of the onset-offset prediction over the simulated-signal dataset provided by the DEMANN and DT approaches are reported in Table 5 (* indicates that the difference between the two mean onset values is statistically significant, p < 0.05; § indicates that the difference between the two mean offset values is statistically significant, p < 0.05). The variability of MAE as a function of α, σ, and SNR is quantified in Table 6. A color-level coded representation was adopted to allow a visual interpretation of the results. The direct comparison of the performances achieved by DEMANN and DT is depicted in Figure 3, stratified for different SNR. An improvement of the F1-score of offset prediction was introduced by DEMANN for signals with SNR ≤ 6 dB (p < 0.05, Figure 3B). No significant differences were detected for SNR > 6 dB. The F1-score was comparable for onset prediction over the whole SNR range (p > 0.05, Figure 3A). Lower MAEs in onset-offset prediction were provided by DEMANN. Details of statistical significance are reported in Figure 3C,D.

Real sEMG Signals
A first validation was performed on the sEMG dataset available in [3]. In [13], four onset-detection algorithms and two filtering approaches were tested on this dataset, characterized by SNR ≤ 8 dB. The same 52 sEMG signals were considered here (first four lines, Table 7). Table 7. Absolute error of onset prediction as a function of SNR range, in terms of mean, standard deviation (SD), median, 25th percentile, and 75th percentile. As in [13], the 52-signal dataset was split according to four ranges of increasing SNR values (step = 2 dB) to facilitate the comparison of results. The absolute error of the onset prediction provided by DEMANN is reported in Table 7. Validation was performed against the four algorithms tested in [13]: the double-threshold statistical algorithm (DT) [6]; the wavelet-based approach (WLT) [9]; the method grounded on CUSUM logic [37]; and the technique based on profile-likelihood maximization employing discrete Fibonacci search (PROLIFIC) [38]. DEMANN provided the lowest values of absolute error for all the metrics (Table 8), except for SD (best value = 114.8 ms; DEMANN value = 120.3 ms).
Similar considerations hold for signals with 6 dB < SNR < 8 dB. For lower SNR (<6 dB), DEMANN provided performances comparable to those of the other algorithms (Table 8). The results for signals with 8 dB < SNR < 12 dB are also reported in Table 7. Precision, recall, and F1-score depended on the choice of the tolerance used to identify true positives. In this case, all the events were detected within the tolerance range, leading to a precision, recall, and F1-score of 100% for DEMANN and for all the algorithms chosen for validation. A second validation was performed on the sEMG dataset acquired during walking (Section 2.2), with a direct comparison to the DT algorithm. Outcomes are reported in Figure 4. A significant mean increase over the whole population (p < 0.05) of recall and F1-score was provided by DEMANN, for both onset and offset prediction. This improvement (p < 0.05) was preserved also when considering signals from a single muscle, for both TA and GL. No significant differences (p > 0.05) were identified in the VL signals, for any of the prediction parameters.

Discussion
The present study was designed to test the capability of a novel machine-learning-based approach of estimating the onset and offset timing of muscle activation. One of the main advantages of the present DEMANN approach is that the neural network was trained by means of only simulated sEMG signals (no real signal was needed to train the neural network), thus avoiding all the possible complications and costs associated with a typical experimental procedure. A further advantage was the running time. Without considering the processing time, which depends on the processing capability of the running device (in the case of the present neural network, it was less than 1 ms on an i7 processor), once the model was trained, the maximum delay of activation prediction was 10 ms (the size of the windows). Although this paper did not explicitly target real-time applications, such a delay can be acceptable even under real-time constraints [26], making DEMANN suitable for the detection of muscle activity in sEMG-driven assistive devices, such as orthoses and exoskeletons. This could instead be an issue for the algorithmic (non-machine-learning) approaches. For example, the recent literature proposed a novel algorithm for detecting muscle activation in the time-frequency domain, based on the Continuous Wavelet Transform (CWT) [11].
That study focused on quantifying the frequency content of the muscle activations and needed to detect muscle activation in the time domain in order to properly compute the frequency range (maximum and minimum). This approach could be very useful for specific aims and could open a new way to deepen the knowledge of neuromotor disorders. However, like most of the algorithm-based approaches, it was based on the computation of a threshold value in order to identify the activation onset and offset [5][6][7][8][9][10][11]. Thus, a portion of the sEMG signal must be processed to compute the threshold. This introduces a time delay of at least the duration of the chosen portion, increasing the running time. In cyclic tasks such as walking, such a portion corresponds to a complete gait cycle. This would introduce a delay of at least 1 s, limiting the application of the approach in environments where real-time operation is required, such as in sEMG-driven exoskeletons. This is not needed in the DEMANN approach, where activations are predicted on subsequent 10 ms windows. Moreover, to identify each single gait cycle, kinematic or dynamic data are needed, such as signals from foot-switch sensors, pressure mats, stereo-photogrammetric systems, and inertial measurement units. This introduces further complexity in experimental settings, potentially raising the costs, the time consumption, and the intrusiveness on patients. DEMANN does not suffer from these limitations, as it is based on a "blind" segmentation in short time segments. In the present study, DEMANN proved to provide high performances in three different datasets: (1) a test bench of 864 simulated sEMG signals; (2) 103 real sEMG signals acquired in the vastus lateralis during knee extension and in the biceps brachii during elbow flexion; and (3) real sEMG signals from the gastrocnemius lateralis, tibialis anterior, and vastus lateralis collected from 30 subjects during walking. Details are reported in the following two sections.
Simulated sEMG Signals
DEMANN provided a high classification performance, quantified by a mean accuracy (±SD) of 97.8 ± 3.0% and supported by an accuracy of 95.3% in the worst-case scenario (SNR = 3 dB, Table 2). Differences due to increasing SNR values were very small (<4%), suggesting a good robustness to SNR variability. The classification performances in the activity vs. silent areas confirmed these findings (Table 3). The effective classification capability and the efficient post-processing of the model output provided mean prediction performances very close to 100% (Table 5). The variability of MAE as a function of α, σ, and SNR is reported in Table 6. Independently of the SNR effect, MAE increased where α and σ assumed the highest values. This means that the quality of prediction worsened as the activation time-duration increased, the time support (i.e., the duration of a single activation) being defined as 2 × α × σ. However, for activations lasting up to 45% of the simulated-signal duration (450 ms), MAE was <15 ms for both onset and offset predictions, except for sporadic low-SNR situations (<6 dB). MAE > 50 ms was reported mainly for those activations characterized by the concomitant conditions of time duration > 60% of the simulated-signal duration (600 ms) and SNR < 10 dB (red areas, Table 6). It is worth noticing that, in cyclic tasks such as walking, a single muscle activation longer than 50% of the signal period (the gait cycle, for walking) is rare. Continuous muscular recruitment longer than 60% of the gait cycle is practically unrealistic during walking.
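A small sketch of how a ground-truth activation mask with time support 2·α·σ (as used for the simulated signals discussed above) could be generated from a truncated Gaussian envelope is given below; the 1 s signal duration, 2 kHz sampling rate, and activation centering are assumptions for illustration only.

```python
import numpy as np

def activation_mask(sigma_ms, alpha, fs=2000, duration_s=1.0, center_s=0.5):
    """Ground truth is 1 where the Gaussian envelope, truncated at +/- alpha*sigma, is > 0."""
    t = np.arange(int(fs * duration_s)) / fs
    sigma = sigma_ms / 1000.0
    envelope = np.exp(-0.5 * ((t - center_s) / sigma) ** 2)
    envelope[np.abs(t - center_s) > alpha * sigma] = 0.0   # time support = 2 * alpha * sigma
    return (envelope > 0).astype(int)

mask = activation_mask(sigma_ms=100, alpha=2)   # support = 2 * 2 * 100 ms = 400 ms (40% of 1 s)
```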
Muscle groups such as ankle plantar flexors (gastrocnemius, soleus, peroneus) and knee extensors and flexors (vastii, rectus femoris, biceps femoris) are typically recruited for short periods, covering up to 35% of the gait cycle [39]. Only ankle dorsi flexors (tibialis anterior, extensor digitorum longus) may rarely present activations that last up to 50% of the gait cycle. Thus, for most practical applications, DEMANN can provide onset-offset estimation affected by MAE < 20 ms for a wide SNR range (3-30 dB), confirming a good classifier robustness for SNR variability. The efficiency of the DEMANN approach was firstly proved versus a different machinelearning model. The support vector machine (SVM) was chosen among the models proposed in the literature as a suitable tool for this purpose [22][23][24][25]. A comparison, in the whole dataset of 864 simulated sEMG signals, specifically generated for the current experiments, showed DEMANN outperforming SVM, in terms of both onset and offset MAE (Table 4). Moreover, the DEMANN robustness was supported by comparison with the DT algorithm on the same simulated data (Table 5). DEMANN predicted offset values with better accuracy for the lowest SNR values (SNR < 6; Figure 3B). Moreover, DEMANN provided F1-score = 100% in offset prediction for SNR ≥ 10 dB; DT only for SNR ≥ 13. Likewise, mean offset MAE over the whole dataset was reduced in the DEMANN prediction, compared to DT (Table 5, p < 0.05). This was true also considering each single SNR value ( Figure 3D); the reduction was significant (p < 0.05) for SNR = 3 and for SNR ≥ 16. An absence of statistical significance for 6 ≤ SNR ≤ 13 was likely due to the very large mean SD (28.8 ms) associated with the mean MAE computed over DT predictions in this range. Particularly relevant was the 47% reduction of MAE for SNR = 3 dB, suggesting that DEMANN improved DT performances especially in the lowest SNR values. Although an overall reduction of onset-MAE was visible in the DEMANN prediction ( Figure 3C), no significant difference was detected. One of the most reliable sEMG timing detectors reported in the literature is the waveletbased approach described in [10]. In that study, the robustness of algorithm performances was also tested on simulated sEMG signals. However, a suitable comparison of the results of the current study with those reported in [10] was hard to accomplish because of the many differences in the generation of the simulated signals (different values of α, σ, and SNR) and in the metrics used to evaluate the algorithm performances (MAE in the present study and bias in [10]). Nevertheless, in the attempt of giving the readers further tools to evaluate the robustness of the present approach, the bias has been computed also in the present data as the relative (with sign) value of the time distance between the predicted and the ground-truth value. Results computed in the signals characterized by SNR = 20 dB (the only value in common between the present study and the one reported in [10]) were compared with those reported in [10]: mean bias was 1.7 ms for DEMANN vs. 7.1 ms in [10] for the onset and −2.8 ms for DEMANN vs. 4.1 ms in [10] for the offset. Signs "−" and "+" were adopted to indicate that the predicted event occurred earlier and later than the corresponding value in the ground-truth signal, respectively. 
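For completeness, the bias defined above reduces to a signed mean difference; a two-line sketch follows, assuming the predicted and ground-truth event times have already been matched one-to-one (an assumption of this illustration).

```python
import numpy as np

def signed_bias(pred_times, true_times):
    """Mean signed distance in seconds: negative when predictions occur earlier than the ground truth."""
    return float(np.mean(np.asarray(pred_times) - np.asarray(true_times)))
```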
Real sEMG Signals The dataset introduced in [3] was mainly chosen for the specific characteristics of the motor tasks (knee extension and elbow flexion), which allow for achieving a reliable detection of the onset event and consequently a trustworthy ground truth. Only onset events were tested, because the ground truth for offset events was not available in [3]. Outcomes of the application of DEMANN to this dataset are shown in Table 7. At first glance, it seems that a substantial difference exists between MAE values obtained for the simulated (Table 5) and real sEMG signals when using DEMANN. However, considering the same SNR range (3 dB ≤ SNR ≤ 12 dB), the distance between the two MAE values was strongly reduced (MAE-simulated = 19.1 ± 25.5 ms vs. MAE-real = 38.5 ± 56.4 ms); MAE and SD are about twice as many in real signals. This difference may be mainly due to a couple of reasons: (1) the neural network was trained with only simulated signals; (2) the larger variability of real sEMG signals due to the eight-shaped path followed by subjects during the experimental procedure that introduced further sEMG variability (caused by curves, reversing, deceleration, and acceleration [40]) and thus affected the performance of classification and prediction. Table 8 highlights that the DEMANN approach globally outperformed the performance of the algorithms tested in [13], providing: (1) the lowest absolute error values over the whole 52-signal dataset (SNR ≤ 8 dB) for all considered metrics; (2) a relevant reduction of mean and median values over the whole 52-signal dataset of absolute error compared to the best value (ETKEO) reported for DT (mean 31.4%; median 21.8%), WLT (mean 28.7%; median 31.0%), CUSUM (mean 20.3%; median 31.0%), and PROLIFIC (mean 24.6%; median comparable); (3) the same result also for the signals with 6 dB < SNR < 8 dB; and 4) performances comparable with those achieved by the four algorithms, for SNR < 4 dB. As conducted in [13], this dataset was adopted to evaluate the performance of the proposed approach on sEMG signals characterized by a range of low SNR (≤12 dB). For 6 dB ≤ SNR ≤ 12 dB, absolute error was practically not affected by SNR variability (Table 7). It was reported that, in limb movement studies, time differences from stimulus to sEMG onset with neurological diseases, aging, and postural sets may be as low as 20 ms [41]. The performances of DEMANN in the SNR range from 6 dB to 12 dB complied with these requirements. For lower SNR values (<6 dB), the absolute error was proportionally increasing with decreasing SNR, up to 200 ms for SNR < 2 dB. For this SNR range, and for these specific motor tasks (knee extension and elbow flexion), all the algorithms considered in Table 8 reported high values of absolute error, not complying with the abovementioned clinical needs. However, for these very low SNR values, the identification of onset timing by visual inspection could be very hard also when performed by actual experts, as shown in [13]. Thus, onset prediction is affected not only by the reduction of algorithm performances but also by the uncertainty associated with ground truth identification. In our opinion, this consideration may contribute to explain the high values of absolute error, especially for SNR < 4 dB. This would contribute to also explain the fact that, for similar SNR (=3 dB), the mean MAE provided by DEMANN in the simulated signals was around 20 ms ( Figure 3C). 
Since walking is one of the most useful tasks to obtain insights on human movement, DEMANN was tested also on a dataset of sEMG data collected during 30 healthy adults walking. Despite the high data variability due to curves, reversing, deceleration, and acceleration during the eight-shaped path, prediction performances were >90% for both the onset and offset prediction (Figure 4). Performances provided by DEMANN were validated vs. the DT algorithm. Significantly higher values (p < 0.05, Figure 4) of recall and F1-score for onset and offset prediction showed that DEMANN outperformed the DT algorithm in correctly identifying these events. This was true (p < 0.05) also considering the mean values over the signals from the same muscle, in the case of TA and GL. Otherwise, for VL, no significant difference was detected between the two approaches. TA and GL are mainly ankle flexor muscles and VL is a knee extensor; it is acknowledged that ankle muscles are typically more involved in the walking task [39]. Given that differences between DEMANN and DT were significant for TA and GL but not VL, one interesting direction to follow in the future studies could be the analysis of possible muscle specificity of the present approach. Conclusions The present outcomes suggest the feasibility of predicting onset-offset timing of muscular recruitment of the proposed machine-learning-based method, which was able to provide high performances also in condition of large variability of the sEMG signal. The adoption of DEMANN introduced several further advantages, such as a running time compatible with real time applications, a small deterioration of event detection due to low SNR values and to a large within-signal variability of SNR, and reduced complexity of the experimental protocol associated with model training, since no real signal is needed. All these advantages make this approach suitable for clinical practice and for being included in the procedure for controlling sEMG-driven assistive devices, such as orthoses and exoskeletons. The DEMANN approach was validated in simulated sEMG signals and in real sEMG signals acquired in young able-bodied subjects, but not in elderly and pathological populations. This is acknowledged as a limitation of the present study. Future studies will be focused on assessing the reliability of the DEMANN approach to provide a robust prediction of activation events also in these populations and on the possible improvements to implement for adapting the model to different conditions and environments. While the present study showed that relatively simple supervised methods, such as shallow neural networks, can be suitable for muscle activation detection, further experiments should be made to determine an optimal classifier to embed in the detecting system. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki. Ethical review and approval were waived for this study because only simulated data and data available in the literature and already employed in previous studies were used. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patients to publish this paper. Data Availability Statement: Data supporting reported results can be found by contacting the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
Space-Time Transmit-Receive Design for Colocated MIMO Radar
This chapter deals with the design of the multiple-input multiple-output (MIMO) radar space-time transmit code (STTC) and space-time receive filter (STRF) to enhance moving-target detection in the presence of signal-dependent interference, where we assume that some knowledge of the target and clutter statistics is available to the MIMO radar system according to a cognitive paradigm, by using a site-specific (possibly dynamic) environment database. An iterative sequential optimization algorithm with guaranteed convergence is proposed to maximize the signal-to-interference-plus-noise ratio (SINR) under similarity and constant-modulus constraints on the probing waveform. In particular, each iteration of the proposed algorithm requires the solution of hidden convex problems. The computational complexity is linear with the number of iterations and polynomial with the sizes of the STTC and the STRF. Finally, the gain and the computation time of the proposed algorithm are evaluated and compared with available methods.

Introduction
Multiple-input multiple-output (MIMO) radar emits multiple probing signals via its transmit antennas, which provides greater flexibility for the design of the whole radar system and boosts the development of more sophisticated signal processing algorithms [1]. On the basis of the configuration of the transmitter/receiver antennas, MIMO radar systems can be classified into two categories: widely distributed [2,3] and colocated [4,5]. The former has different angles of view on the target owing to widely separated antennas, and this feature can be used to improve the performance of target detection and angle estimation, as well as the capabilities of target identification and classification [6]. The latter shares the same aspect angle of the target by using tightly spaced antennas. However, colocated MIMO radar exploits waveform diversity to form a long virtual array, thus providing better results concerning spatial resolution, target localization, and interference rejection, as well as additional degrees of freedom for the design of the transmit beampattern [1,7,8]. Colocated MIMO radar waveform design has recently become a hot and challenging topic and has received significant attention. In general, these works can be divided into two categories. The first category focuses on fast-time waveform design exploiting some a priori information. In particular, in [6], by using a priori knowledge of the target power spectral density, minimax robust waveforms are designed based on mutual information (MI) and minimum mean-square error (MMSE) criteria. In [9], MIMO waveforms for the case of an extended target are devised based on the maximization of the signal-to-interference-plus-noise ratio (SINR) through a gradient-based algorithm, assuming knowledge of both the target and signal-dependent clutter statistics. In [10], by considering the MMSE as figure of merit, MIMO radar waveforms are synthesized under signal-dependent clutter. The joint design of the transmit waveform and the receive filter is addressed for improving extended-target detectability in the presence of signal-dependent clutter, by employing a cyclic iteration algorithm with guaranteed convergence [11].
In [12], by designing the transmit waveform and the receive filter, two sequential optimization algorithms are proposed to maximize the SINR subject to constant-modulus and similarity constraints. Based on the worst-case output SINR in the presence of an unknown target angle, the robust joint design of the transmit waveform and the receive filter is considered in [13]. Further related works can be found in [7,8,14,15]. The second category addresses MIMO radar space-(slow-)time code design for moving-target scenarios. In particular, in [16], the MIMO radar slow-time code is shown to improve the resolution of angle-Doppler images and to enhance moving-target detection performance. In [17], the signal-dependent interference is alleviated by a space-time coding framework based on beamspace space-time adaptive processing (STAP). In [18], based on a max-min SINR optimization criterion, the time-division beamforming signal is designed for a multiple-target scenario. For moving point-like target detection, based on the worst-case SINR over the actual and signal-dependent clutter statistics, the robust joint design of the space-time transmit code (STTC) satisfying energy and similarity constraints and the space-time receive filter (STRF) is addressed in [19]. This chapter handles the joint design of the STTC and STRF with the aim of enhancing moving-target detectability under signal-dependent interference and white Gaussian noise. Unlike [19,20], some knowledge of the target and clutter statistics is assumed to be available. In particular, the SINR is considered as the figure of merit to maximize, subject to a constant-modulus constraint on the transmit signal in addition to a similarity constraint. To deal with the resulting nonconvex design problem, an iterative algorithm ensuring convergence is proposed. Each iteration of the proposed algorithm involves the solution of hidden convex problems. Specifically, both a convex problem with a closed-form solution and a set of fractional programming problems, which can be globally solved through Dinkelbach's algorithm, are solved. The resulting computational complexity is linear with the number of iterations and polynomial with the sizes of the STTC and the STRF. The remainder of the chapter is organized as follows. In Section 2, the system model is formalized. In Section 3, the constrained optimization problem under constant-modulus and similarity constraints is formulated. In Section 4, the new optimization algorithm is presented. In Section 5, the performance of the new procedure is evaluated. Finally, in Section 6, concluding remarks and possible future research tracks are provided.

System model
We focus on a colocated narrowband MIMO radar system consisting of N_T transmit antennas and N_R receivers. Each transmitter emits a slow-time phase-coded coherent pulse train of length K. Let s(k) = [s_1(k), ..., s_{N_T}(k)]^T ∈ C^{N_T} denote the transmitted space code vector at the kth transmission interval, k = 1, 2, ..., K, where s_{n_t}(k) denotes the kth transmitted phase-code pulse of the n_t-th transmitting antenna, for n_t = 1, 2, ..., N_T, (·)^T stands for the transpose, and C^N is the set of N-dimensional vectors of complex numbers. At each receiver, the received waveform is downconverted to baseband, undergoes a pulse matched filtering operation, and is then sampled.
Hence, the observations of the kth slow-time sample for a farfield moving target at the azimuth angle θ 0 can be expressed as [21] x where • α 0 is a complex parameter taking into account the target radar cross section (RCS), channel propagation effects, and other terms involved into the radar range equation. • v d0 denotes the normalized target Doppler frequency, which is related to the radial velocity v r via the equation v d0 ¼ 2v r T=λ with λ being the carrier wavelength and T being the pulse repetition time (PRT). in which a t θ ð Þ and a r θ ð Þ denote the transmit spatial steering vector and the receive spatial steering vector at the azimuth angle θ, respectively, and Á ð Þ * and Á ð Þ † are the conjugate and the conjugate transpose operators, respectively. In particular, for the uniform linear arrays (ULAs), they are given by with d T and d R being the array interelement spacing of the transmitter and the receiver, respectively. • d k ð Þ ∈ C NR , k ¼ 1, 2, ⋯, K, considering M signal-dependent uncorrelated point-like interfering scatterers. Specifically, as shown in Figure 1, the angle space is discretized as , l m ∈ 0; 1; ⋯; L f g , the received interfering vector d k ð Þ can be expressed as the superposition of the returns from M interference sources, i.e., with r m , v dm , and θ m , respectively, the complex amplitude, the normalized Doppler frequency, and the look angle, given by θ m ¼ 2π Lþ1 ð Þ l m , of the mth interferences. Furthermore, M is nominally equal to K Lþ1 ð Þ. (1) can be expressed in a compact form as à T being the temporal steering vector, ⊗ denotes the Kronecker product, and Diag Á ð Þ denotes the diagonal matrix formed by the entries of the vector argument. Additionally, we assume that the noise vector v is a zero-mean circular complex Gaussian random vector with covariance matrix Finally, interference vector d can be expressed as d ¼ where P rm is given by in which J r denotes the shift matrix [23], whose k 1 ; k 2 ð Þth entry is defined as 1 , r ∈ 0; 1; ⋯; KÀ1 f g and k 1 ; k 2 ð Þ∈ 1; 2; ⋯; K f g 2 . In particular, we assume that r m , m ¼ 1, 2, ⋯, M, and α 0 are a zero-mean uncorrelated random variables with, respectively, σ 2 m ¼ E jr 2 m j  à and As to the normalized Doppler frequency of the interfering signals, we model v dm as a random variable uniformly distributed around a mean Doppler frequency v dm , i.e., where ε m accounts for the uncertainty on v dm . Basing on the previous assumptions, the interference vector d has zero mean and covariance matrix where in which and vector, ⊙ and E Á ½ denote the Hadamard product and the statistical expectation, respectively. This expression, for the covariance matrix Σ d s ð Þ, follows from the results obtained in ( [19], Appendix 1). Inspection of (11) and (12) reveals that the interference covariance matrix Σ d s ð Þ requires the knowledge of θ m and σ 2 m as well as v dm and ε m , for m ¼ 1, 2, ⋯, M. These information can be obtained according to a cognitive paradigm [22][23][24] through exploiting a site-specific (possible dynamic) environment database, which involves a geographical information system (GIS), digital terrain maps, previous scans, tracking files, clutter models (in terms of electromagnetic reflectivity and spectral density), and meteorological information. Problem formulation This section formulates the joint design problem of the STTC and STRF based on the maximization of the output SINR considering practical constraints. 
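To make the system model above more tangible, the following sketch builds the transmit/receive spatial steering vectors and the slow-time (Doppler) steering vector for the ULA geometry, assuming the conventional steering-vector form and half-wavelength spacing; the ordering of the Kronecker factors in the space-time vector is an illustrative assumption rather than the chapter's exact convention.

```python
import numpy as np

def ula_steering(n_elements, d_over_lambda, theta):
    """Conventional ULA spatial steering vector at azimuth theta (radians)."""
    n = np.arange(n_elements)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))

def temporal_steering(K, nu):
    """Slow-time steering vector for normalized Doppler frequency nu."""
    return np.exp(2j * np.pi * nu * np.arange(K))

# Example with N_T = 4 transmitters, N_R = 8 receivers, half-wavelength spacing, K = 13 pulses.
a_t = ula_steering(4, 0.5, theta=0.0)
a_r = ula_steering(8, 0.5, theta=0.0)
p = temporal_steering(13, nu=0.4)
v_rx = np.kron(p, a_r)          # space-time receive steering vector, length N_R * K
```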
Output SINR Letting the observations x be processed via the STRF w ∈ C NRK , the SINR b r s; w ð Þat the output of the receiver can be expressed as where we exploit and and assume w6 ¼0 and the independence between the disturbance and the noise random processes. In particular, the numerator in (14) denotes the useful energy at the output of the STRF, w † Σ d s ð Þw and σ 2 v w † w represent the clutter energy and noise energy, respectively, at the output of w . Observe that the clutter energy w † Σ d s ð Þw functionally relies on the STTC w and the STRF s through Σ d s ð Þ as well as the useful energy. Furthermore, we note that the objective function b r s; w ð Þ requires that the exact angle θ 0 and normalized Doppler frequency v d0 are known. However, from a practical point of view, the explicit knowledge of θ 0 and v d0 cannot be available. To circumvent this drawback, the averaged SINR defined as r s; w ð Þ¼E b r s; w ð Þ ½ as figure of merit is exploited. More specifically, we suppose that v d0 and θ 0 are independent random variables uniformly distributed around a mean Doppler frequency v d0 and a mean , where $ means "distribute" and U represents uniform distribution and ε 0 and ϑ 0 accounts for the uncertainty on v d0 and θ 0 , respectively. Interestingly, after some algebraic manipulations, the objective function r s; w ð Þshares the following two equivalent expressions, While S ¼ ss † ∈ H KNT and W ¼ ww † ∈ H KNR , Ξ m is given by (12), E denotes the energy of s , where Γ m1m2 ∈ C NRÂNR and Θ i1i2 ∈ C NT ÂNT can be computed by (38) and (46) respectively, ∀ m 1 ; m 2 ; i 1 ; i 2 ð Þ ∈ 1; 2; ⋯; K f g 4 , as shown in Appendix A. Constant modulus and similarity constraints In practical applications, the designed STTC is enforced to be unimodular (i.e., constant modulus) since the nonlinear property of radar amplifiers [24,25]. To this end, we limit the modulus of each element of the code s as a constant. Precisely, the ith element s i of s can be written as with φ i denoting the phase of s i . Furthermore, K different similarity constraints are enforced on the N T transmitting waveforms, namely where s 0 k ð Þ ∈ C NT is the reference code vector at the kth transmission interval, ξ k is a real parameter ruling the extent of the similarity, and ∥x ∥ ∞ denotes the infinite norm. Without loss of generality, we assume the same similarity parameter ξ 0 (i.e., ξ 0 ¼ ξ [12,26,[28][29][30] on the sought STTC. Thus, Eq. (24) can be written as ∥sÀs 0 ∥ ∞ ≤ ξ 0 , where à T is the reference code vector. Several reasons are presented to show the motivation to exploit the similarity constraints on radar codes. Actually, an arbitrary optimization of SINR via designing an STTC does not offer any kind of control on the shape of the resulting designed waveforms. Specifically, an pure optimization of the SINR can cause signals sharing high peak sidelobe levels and, in general, with an undesired ambiguity function feature. To this end, by exploiting the similarity constraint, when s 0 possesses suitable properties, such as low peak sidelobe levels, and reasonable Doppler resolutions, the designed STTC can enjoy some of the good ambiguity function feature of s 0 . In other words, the similarity constraint compromises the performance between SINR improvement and suitable waveform features [31]. 
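The SINR objective and the two waveform constraints discussed above can be evaluated numerically along the following lines. The target space-time response matrix A and the clutter covariance Σ_d(s) are passed in as generic arguments because their detailed construction is not reproduced in this sketch; their structure here is an assumption, not the chapter's exact expressions.

```python
import numpy as np

def output_sinr(s, w, A, Sigma_d, sigma_v2, sigma_0_2=1.0):
    """Useful energy over clutter-plus-noise energy at the output of the receive filter w."""
    useful = sigma_0_2 * np.abs(np.vdot(w, A @ s)) ** 2        # np.vdot conjugates its first argument
    clutter = np.real(np.vdot(w, Sigma_d @ w))
    noise = sigma_v2 * np.real(np.vdot(w, w))
    return useful / (clutter + noise)

def feasible(s, s0, xi, tol=1e-9):
    """Constant-modulus and similarity (infinity-norm) constraints on the transmit code."""
    constant_modulus = np.allclose(np.abs(s), np.abs(s[0]), atol=tol)
    similar = np.max(np.abs(s - s0)) <= xi + tol
    return constant_modulus and similar
```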
Design problem Summarizing, the joint design of the STTC and the STRF can be formulated as the following constrained optimization problem: where |Á| and ∥Á∥, respectively, represent the modulus and the Euclidean norm. Without loss of generality, we add the constraint ∥w ∥ 2 ¼ 1. P 1 is a NP-hard problem [12,28] whose optimal solution cannot be found in polynomial time. Next, we develop a new iterative algorithm to offer high-quality solution to the NP-hard problem (25). STTC and STRF design procedure This section focuses on the design of an iterative algorithm ensuring convergence properties, which is capable of offering high-quality solutions to the NP-hard problem P 1 by sequentially improving the SINR. In particular, we exploit the pattern search framework to cyclically optimize the design variables w; s 1 ; s 2 ; ⋯; s NT K ð Þ . STRF optimization In this subsection, we deal with the STRF optimization for a fixed STTC s . Specifically, we handle the optimization problem We observe that the optimal solution w o to P w is the maximum eigenvector of the matrix i.e., to a generalized eigenvector of the matrices Γ s s † À Á and Σ dv s s † À Á corresponding to the maximum generalized eigenvalue. Thus, a closed-form solution to P w can be obtained by normalizing w o . STTC optimization This subsection is devoted to the optimization of the STTC under a fixed STRF. Precisely, each code element in s is sequentially optimized under the fixed remaining N T KÀ1 elements. Performing some algebraic manipulations to similarity constraints [26], the optimization problem P si with respect to the ith STTC variable, i ¼ 1, …, N T K, is written by, where and s 0i is the ith element of s 0 . Notice that for ξ ¼ 0, the code s is equal to the reference code s 0 , whereas the similarity constraint would become the constant modulus constraint with ξ ¼ 2. Remark: This procedure by resorting to pattern search framework offers a new strategy to address the code design problem under a fixed filter. In addition, this STTC optimization problem can be efficiently but approximatively settled by semidefinite relaxation (SDR) and randomization procedure with the computational complexity of O N T K ð where L is the number of randomization trials. However, the SDR technique usually shares a huge computational complexity, especially in large dimension N T K, thus limiting its applications in real-time systems; moreover, the existing approach also needs the reasonable selection of L. On the other hand, it is shown that a higher quality solution can be further obtained via a sequential iteration optimization algorithm, which is capable of monotonically increasing the SINR value and achieving a stationary point of the formulated NP-hard problem [27]. Next, we focus on the proposed iteration algorithm to solve problem (27) in a polynomial time. In particular, performing some algebraic manipulations to the objective function in (27), P si can be equivalently rewritten as a fractional programming optimization problem by the following proposition. Proposition 4.1 The problem P s i is equivalent to where and a k, i , b k, i are constants for k ¼ 0, 1, 2, ℜ x ð Þ denotes the real part of x. Proof. See Appendix B. Problem (28) is solvable [32] since the objective function is continuous with ℜ b 1, i s i ð Þþb 3, i >0 and the constraint is a compact set (closed and bounded set of C). 
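Since problem (28) is a single-ratio fractional program over a compact set, it lends itself to a Dinkelbach-type iteration, formalized below as Algorithm 1. A generic sketch is given here; the callables f, g, and solve_parametric stand for the numerator, the (positive) denominator, and the closed-form solver of the parametric subproblem, and are placeholders rather than the chapter's exact quantities.

```python
def dinkelbach(f, g, solve_parametric, x0, tol=1e-6, max_iter=100):
    """Maximize f(x)/g(x), with g(x) > 0, by repeatedly solving argmax f(x) - mu*g(x)."""
    x = x0
    mu = f(x) / g(x)
    for _ in range(max_iter):
        x = solve_parametric(mu)            # closed form in the present setting (problem (31))
        if abs(f(x) - mu * g(x)) <= tol:    # rho(mu) ~ 0 at the optimum
            break
        mu = f(x) / g(x)
    return x, mu
```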
Thus, we consider the following parametric problem [32], After some simple manipulations, problem (30) can be rewritten as max si ℜ c i s i ð Þ s:t: where c i ¼ a 1, i Àμb 1, i and the constant a 3, i Àμb 3, i do not affect the optimal value. Interestingly, problem (31) shares a closed-form solution whose phase φ * is given by, where φ ci is the phase of c i ; otherwise, the optimal solution φ * is given by, We observe that problems (28) and (30) are relevant in each other via Lemma 2.1 of [32]. Specifically, we can find a solution to problem (28) by obtaining a solution of the equation ϱ μ ð Þ ¼ 0 concerning s i . To this end, the Dinkelbach-type procedure [32,33] summarized in Algorithm 1 is introduced to solve problem (27). Algorithm 1. Dinkelbach-type algorithm for solving P si Input: a 1, i , a 3, i , b 1, i , b 3, i , γ i and δ; Output: An optimal solution b s i to P si ; 1. Randomly generate s i, 0 within the feasible sets; Þþb3,i and let k ≔ 1; 3. Find the optimal solution s i, k by solving problem (30), 4. If ϱ μ k À Á ¼ 0, then s i, k is an optimal solution of P si with optimal value μ k and stop. Otherwise, go to step 5; Þþb3,i and k ≔ kþ1; Then go to step 2. Algorithm 1 sharing a linear convergence rate [34] is needed to handle the problem (30) in each iteration. The objective value of the generated sequence of points has a monotonic convergence property, and the optimal value of (28) can be achieved eventually. We set the exit condition ϱ μ ð Þ ¼ 0, actually, which can be replaced by ϱ μ ð Þ ≤ ς, with ς being a prescribed accuracy. Transmit-receive system design This subsection reports the iteration optimization procedure for the STTC and STRF in Algorithm 2. In particular, Algorithm 2 guarantees that the SINR monotonically increases 2 . Furthermore, we need to point out that the maximum block improvement (MBI) [24] framework could be used to ensure the convergence to a stationary point of problem P 1 . The global computation consume of the Algorithm 2 is linear to the number of iterations and polynomial with the sizes of the STTC and the STRF. More specifically, each iteration of the proposed algorithm involves the computational cost associated with the solution to problems (26) and P si , for i ¼ 1, 2, ⋯, N T K. The former requires to solve the generalized eigenvalue decomposition with the order of O N R K ð Þ 3 (see [35], p. 500). Similarly, the latter is linear to polynomial with the size of the STTC, while each iteration needs the solution of a generalized fractional programming problem with the computational complexity of O N T K ð Þ 2 . We need to point out that SOA2, based on the SDR and randomization method, can also be used to the solution of problem (25). However, it cannot guarantee the convergence to a stationary point due to the use of randomized approximations. Moreover, from computational complexity, 2 Notice that the similar convergence analysis can be obtained in [23]. Output: An optimal solution s * ; w * ð Þto P 1 ; (20) and (22) 14. If |r n Àr nÀ1 | ≤ κ, where κ is a user selected parameter to control convergence, output s * ¼ s n ð Þ and w * ¼ w n ð Þ ; Otherwise, repeat step 5 until convergence. Numerical results This section focuses on assessing the capability of the proposed algorithm for designing optimized STTC and STRF in signal-dependent interference for both a nonuniform and an uniform point-like clutter environment. 
In particular, for both scenarios, we consider an L-band radar with operating frequency f c ¼ 1:4 GHz, which is equipped with an ULA of N T ¼ 4 transmit elements and N R ¼ 8 receive elements under an interelement spacing d t ¼ d r ¼ λ=2. We set the code length K ¼ 13 for each transmitter and the orthogonal linear frequency modulation (LFM 3 ) is used as the reference waveform s 0 [12] with the n t ; k ð Þth entry of the reference S 0 ð Þ given by, where n t ¼ 1, 2, ⋯, N T and k ¼ 1, 2, ⋯, K. Hence, the reference code is derived as Moreover, we assume the target located at range-azimuth bin of interest (0,0) with power σ 2 0 ¼ 10 dB. In addition, we set a mean azimuth θ 0 ¼ 0 ∘ with azimuth uncertainty ϑ=2 ¼ 1 ∘ , and a normalized mean Doppler frequency v d0 ¼ 0:4 with Doppler uncertainty ε 0 =2 ¼ 0:04 for the presence of target. We set the noise variance to σ 2 v ¼ 0 dB. Finally, the exit condition 4 ς ¼ 10 À3 for Algorithms 1 and 2 is κ ¼ 10 À3 , i.e., |r n Àr nÀ1 | ≤ 10 À3 : All simulations are performed using Matlab 2010a version, running on a standard PC (with a 3.3 GHz Core i5 CPU and 8 GB RAM). Nonuniform point-like clutter environment This subsection focuses on a scenario where three disturbances, respectively, are located at the spatial angles θ 1 ¼ À55 For comparison purpose, we also perform simulations for the SOA2 with constant modulus and similarity constraints as well as the algorithm in [19] with energy constraint (i.e., ∥s ∥ 2 ¼ 1), respectively. In particular, Figure 2 shows the SINR versus the iteration number for different ξ by also comparing the results obtained via Algorithm 2 and SOA2 considering L = 100 and exploiting the CVX toolbox [36] to handle the semidefinite programming (SDP) involved in SOA2. The results exhibit that the SINR values achieved using Algorithm 2 and SOA2 increase as the iteration number increases. In addition, the SINR increases as ξ increases owing to the higher degrees of freedom available at the design stage. Precisely, Algorithm 2 is superior to SOA2 for ξ ¼ 0:1, 0:5, 1:3. It is interesting to note that Algorithm 2 and SOA2 share almost the same SINR for ξ ¼ 2, whereas both obtain lower SINR than the case considering energy constraint. Finally, it is worth pointing out that a loss of SINR caused by constant constraint can be observed since the gap of SINR between ξ ¼ 2 and energy constraint is about 1 dB. Table 1 reports the achieved SINR values, iterations number, and global computation time of Algorithm 2 and SOA2 supposing a target with Àπ=180 ≤ θ 0 ≤ π=180, 0:36 ≤ v d0 ≤ 0:44 for ξ ¼ 0:1, 0:5, 1:3, 2 and setting the same exit condition for SOA2. We observe that Algorithm 2 and SOA2 both converge very fast. Additionally, Algorithm 2 is superior to SOA2 concerning the achieved SINR value for ξ ¼ 0:1, 0:5, 1:3 and concerning the required computational cost for ξ ¼ 0:1, 0:5, 1:3, 2. In the following, the joint frequency and azimuth behavior of STTC and STRF are considered corresponding to ξ ¼ 2 supposing Àπ=180 ≤ θ 0 ≤ π=180, 0:36 ≤ v d0 ≤ 0:44 for different iteration numbers, by using the contour map of the slow-time cross ambiguity function (CAF) [19], where b A v; θ ð Þand P r are obtained by exploiting Eqs. (6) and (8), respectively. Figure 3 plots the contour map of the Doppler-azimuth plane of CAF at r ¼ 0 versus the iteration number n ¼ 0; 1; 4; 15 ½ for Algorithm 2. As expected, the lower and lower values in the regions of (highlighted by black ellipses) θ 1 ¼ À55 Table 1. 
SINR values (in dB), iterations number, and global computation time (in seconds) of Algorithm 2 and SOA2 assuming a target with Àπ=180 ≤ θ 0 ≤ π=180, 0:36 ≤ v d0 ≤ 0:44 for ξ ¼ 0:1, 0:5, 1:3, 2, s 0 as the initial point. For the uniform distribution, we define both standard deviations σ v d 0 and σ θ0 of target Doppler and azimuth as, respectively, We also observe that the higher σ v d 0 and σ θ0 and the lower SINR can be obtained due to the larger inaccuracies on the knowledge of Doppler and azimuth of the actual target. Finally, we need to point out that the proposed design procedure still has the better robustness against a large uncertain set in comparison with SOA2. Uniform clutter environment This subsection focuses on a scenario where we consider a homogeneous range-azimuth ground clutter interfering with the range-azimuth bin of interest (0,0). Specifically, for each range-azimuth ground clutter bin, a clutter to noise ratio (CNR) of 25 dB and a normalized Doppler frequency v ¼ 0 with Doppler uncertainty ε=2 ¼ 0:04 are considered. We suppose M ¼ 50 range-azimuth ground clutter bins located within the azimuth angular sector Àπ=2; π=2 ½ . Moreover, we set the range ring r i ¼ 0 for all range-azimuth ground clutter bins. In Figure 5, we show the SINR of Algorithm 2 and SOA2 for ξ ¼ 0:1, 0:5, 1:3, 2 supposing a target Àπ=180 ≤ θ 0 ≤ π=180, 0:36 ≤ v d0 ≤ 0:44. The SINR values increases both for Algorithm 2 and SOA2 with the increasing iteration number n. Furthermore, we observe the higher ξ, the better SINR values reflecting the larger and larger feasible set. Interestingly, Algorithm 2 significantly outperforms SOA2 for all the considered ξ, except for ξ ¼ 2 where they both achieve the same SINR value. In particular, we see that the gap between ξ ¼ 2 and energy constraint is about 1.1 dB because of the introduction of constant modulus constraint. We also observe that in this scenario, Algorithm 2 needs a higher number of iterations to achieve convergence compared with that in Figure 2. For instance, for ξ ¼ 0:1, Algorithm 2 converges with about 12 iterations in Figure 5, whereas in Figure 2 after about 2 iterations. In Table 2, we summarize the SINR values, iterations number, and the global computation time of Algorithm 2 and SOA2. In particular, Algorithm 2 shows a lower computational time for ξ ¼ 0:1, 2. Furthermore, it is observed that the gains of 2.3 and 3 dB are achieved using Algorithm 2 with a slightly higher computational cost for ξ ¼ 0:5, 1:3, respectively. Figure 6 shows the joint frequency and azimuth behavior of STTC and STRF concerning CAF. Specifically, the contour map of the Doppler-azimuth plane of CAF at r ¼ 0 against the Table 2. SINR values (in dB), iterations number, and global computation time (in seconds) of Algorithm 2 and SOA2 assuming a target withÀπ=180 ≤ θ 0 ≤ π=180, 0:36 ≤ v d0 ≤ 0:44 in uniform clutter environment for ξ ¼ 0:1, 0:5, 1:3, 2, s 0 as the initial point. Figure 6. Doppler-azimuth plane of CAF at r ¼ 0 for ξ ¼ 2 of Algorithm 2 for n ¼ 0; 10; 30; 82 ½ assuming a target withÀπ=180 ≤ θ 0 ≤ π=180, 0:36 ≤ v d0 ≤ 0:44 in uniform clutter environment (black rectangles represent the locations of uniform clutter), s 0 as the initial point of Algorithm 2 and SOA2. iteration number (n ¼ 0; 10; 30; 82 ½ ) considering ξ ¼ 2 for Algorithm 2 is plotted. We observe that g n ð Þ s n ð Þ ; w n ð Þ ; r; v; θ À Á obtains lower and lower values in the region of Àπ=2 ≤ θ ≤ π=2, À0:04 ≤ v ≤ 0:04 (highlighted by black rectangles) with the increase of iteration number n. 
This performance behavior highlights that the proposed algorithm of joint design STTC and STRF possesses the ability of sequentially refining the shape of the CAF to achieve better and better clutter suppression levels. , v d0 ¼ 0:4, respectively. Again, we see that Algorithm 2 obtains a higher SINR gain than SOA2 for ξ ¼ 0:1, 0:5, 1:3, whereas they both fulfill the near same gain at ξ ¼ 2. Interestingly, we also observe that a decreasing trend in gain with the increase in standard deviation. This is reasonable due to that the larger standard deviation results in the larger uncertainty on the knowledge of target. Conclusions This chapter has considered the joint STTC and STRF design for MIMO radar under signaldependent interference. We focus on a narrow band colocated MIMO radar with a moving point-like target considering imprecise a prior knowledge including Doppler and azimuth. Summarizing, • We have devised an iterative algorithm to maximize the SINR accounting for both a similarity constraint and constant modulus requirements on the probing waveform. Each iteration of the algorithm requires the solution of hidden convex problems. The consequent computational complexity is linear with the number of iterations and polynomial with the sizes of the STTC and the STRF. • We have assessed the performance of the proposed iteration algorithm through numerical simulations. The results have manifested that the larger the similarity parameter (i.e., the weaker the similarity constraint), the larger the output SINR due to the expanded feasible set. Moreover, we observed that the devised iteration procedure can provide a monotonic improvement of SINR and ensuring convergence to a stationary point, which possesses excellent superiority in computation complexity and performance gain compared with the related SOA2. The numerical examples also have revealed the capability of the developed procedure to sequentially refine the shape of the CAF both in nonuniform point-like clutter environment and uniform clutter environment. Possible future work tracks might extend the proposed framework to consider spectral constraint [37] and MIMO radar beampattern design by optimizing integrated sidelobe level (ISL) with practical constraints.
Travelling Corners for Spatially Discrete Reaction-Diffusion System We consider reaction-diffusion equations on the planar square lattice that admit spectrally stable planar travelling wave solutions. We show that these solutions can be continued into a branch of travelling corners. As an example, we consider the monochromatic and bichromatic Nagumo lattice differential equation and show that both systems exhibit interior and exterior corners. Our result is valid in the setting where the group velocity is zero. In this case, the equations for the corner can be written as a difference equation posed on an appropriate Hilbert space. Using a non-standard global center manifold reduction, we recover a two-component difference equation that describes the behaviour of solutions that bifurcate off the planar travelling wave. The main technical complication is the lack of regularity caused by the spatial discreteness, which prevents the symmetry group from being factored out in a standard fashion. Introduction In this paper we construct travelling corner solutions to a class of planar lattice differential equations (LDEs) that includes the Nagumo LDĖ u i,j = u i+1,j + u i−1,j + u i,j+1 + u i,j−1 − 4u i,j + g cub (u; ρ) (1.1) posed on the two-dimensional square lattice (i, j) ∈ Z 2 , in which the nonlinearity is given by the bistable cubic g cub (u; ρ) = (u 2 − 1)(ρ − u), −1 < ρ < 1. (1.2) Such corners can be seen as interfaces that connect planar waves travelling in slightly different directions. In particular, our analysis does not require the use of the comparison principle, but merely requires a number of spectral and geometric conditions to hold for the underlying planar travelling waves. This allows our results to be applied to a wide range of LDEs, highlighting the important role that anisotropy and topology play in spatially discrete settings. Reaction-diffusion systems The LDE (1.1) can be seen as a nearest-neighbour spatial discretization of the Nagumo PDE u t = u xx + u yy + g cub (u; ρ). (1. 3) In modelling contexts one often uses the two stable equilibria of the nonlinearity g to represent material phases or biological species that compete for dominance in a spatial domain. Indeed, the diffusion term tends to attenuate high frequency oscillations, while the bistable nonlinearity promotes these. The balance between these two dynamical features leads to interesting pattern forming behaviour. As a consequence, the PDE (1.3) has served as a prototype system for the understanding of many basic concepts at the heart of dynamical systems theory, including the existence and stability of planar travelling waves, the expansion of localized structures and the study of obstacles. Multicomponent versions of (1.3) such as the Gray-Scott model [19] play an important role in the formation of patterns, generating spatially periodic structures from equilibria that destabilize through Turing bifurcations. Memory devices have been designed using FitzHugh-Nagumo-type systems with two components [31], which support stable stationary radially symmetric spot patterns. Similarly, one can find stable travelling spots [46] for three-component FitzHugh-Nagumo systems, which have been used to describe gas discharges [38,42]. At present, a major effort is underway to understand the impact that non-local effects can have on reaction-diffusion systems. 
Non-local effects arise, for example, in many neural field models, which include infinite-range convolution terms to describe the dynamics of large networks of neurons [10,11,39,43] that interact with each other over long distances. The description of phase transitions in Ising models [3,4] features non-local interactions that can be both attractive and repulsive depending on the length scale involved. It is well known by now that the topology of the underlying spatial domain can have a major impact on the dynamical behaviour exhibited by such non-local systems. For example, nerve fibers have a myelin coating that admits gaps at regular intervals [40], which can block signals from propagating through the fiber [15,30,34]. In order to study the growth of plants, one must take into account that cells divide and grow in a fashion that is influenced heavily by the spatial configuration of their neighbours [20]. Finally, the periodic structure inherent in many meta-materials strongly influences the phase transitions that can occur [13,14,44] as a consequence of the visco-elastic interactions between their building blocks. We view the planar LDE (1.1) as a prototype model that allows the impact of such non-local, spatially-discrete effects to be explored. Indeed, the spatial $\mathbb{R}^2 \to \mathbb{Z}^2$ transition breaks not only the locality but also the translational and rotational symmetry of (1.3), leading to several interesting phenomena and mathematical challenges.
Existence of planar waves
It is well known that the balance between the diffusion and reaction terms in the PDE (1.3) is resolved through the formation of planar travelling wave solutions
$$u(x,y,t) = \Phi(x\cos\zeta + y\sin\zeta + ct); \qquad \Phi(-\infty) = -1, \quad \Phi(+\infty) = 1, \qquad (1.4)$$
which connect the two stable equilibria $u = \pm 1$. When $c \neq 0$, these waves can be thought of as a mechanism by which the fitter species or more energetically favourable phase invades the spatial domain. The existence of these waves can be obtained by applying a phase-plane analysis [18] to the travelling wave ODE
$$c\Phi' = \Phi'' + g_{\mathrm{cub}}(\Phi;\rho), \qquad \Phi(\pm\infty) = \pm 1, \qquad (1.5)$$
which results after substituting (1.4) into (1.3). Substituting the same Ansatz into the LDE (1.1) instead leads to the MFDE
$$c\Phi'(\xi) = \Phi(\xi+\cos\zeta) + \Phi(\xi-\cos\zeta) + \Phi(\xi+\sin\zeta) + \Phi(\xi-\sin\zeta) - 4\Phi(\xi) + g_{\mathrm{cub}}(\Phi(\xi);\rho). \qquad (1.7)$$
The broken rotational invariance in the transition from (1.3) to (1.1) is manifested by the explicit presence of the propagation direction in (1.7). The broken translational invariance causes the wavespeed $c$ to appear in (1.7) as a singular parameter. A comprehensive existence theory for solutions to (1.7) was obtained in [36]. In particular, for every $\zeta \in [0,2\pi]$ and $\rho \in [-1,1]$ there exists a unique wavespeed $c = c_{\rho,\zeta}$ for which (1.7) admits a solution. However, it is a delicate question to decide whether $c = 0$ or $c \neq 0$. Indeed, a sufficient energy difference between the two stable equilibrium states is needed for the propagation of waves [4,6,17,33]; the absence of propagation is referred to as propagation failure. In fact, due to the angular dependence in (1.7), planar waves can fail to propagate in certain directions that resonate with the lattice, whilst travelling freely in others [12,25,37].
Linearization
It is well known that planar travelling waves can be used as a skeleton to describe the global dynamics of the PDE (1.3) [1]. In particular, they have been used as building blocks to construct other, more complicated types of solutions. A key ingredient in such constructions is to understand the dynamics of the system that arises after linearizing (1.3) around the planar waves (1.4).
Performing this linearization for $\zeta = 0$, we obtain the system
$$\partial_t v(x,y,t) = \partial_{xx} v(x,y,t) + \partial_{yy} v(x,y,t) + g_{\mathrm{cub}}'\big(\Phi(x+ct);\rho\big) v(x,y,t), \qquad (1.8)$$
which can be transformed into the temporally autonomous system
$$\partial_t v(x,y,t) = \partial_{xx} v(x,y,t) + \partial_{yy} v(x,y,t) - c\,\partial_x v(x,y,t) + g_{\mathrm{cub}}'\big(\Phi(x);\rho\big) v(x,y,t) \qquad (1.9)$$
by the variable transformation $x \mapsto x + ct$. Since this system is also autonomous with respect to the $y$-coordinate, which is transverse to the motion of the wave, it is convenient to apply a Fourier transform in this direction. Upon introducing the symbol
$$[L_z p](x) = p''(x) + z^2 p(x) - c\,p'(x) + g_{\mathrm{cub}}'\big(\Phi(x);\rho\big) p(x), \qquad (1.10)$$
we readily find
$$\partial_t \hat{v}_\omega(x,t) = [L_{i\omega} \hat{v}_\omega(\cdot,t)](x). \qquad (1.11)$$
Inspecting (1.10), we readily see that the spectrum of $L_z$ can be obtained by rigidly shifting the spectrum of $L_0$ by $z^2$. In particular, writing $\lambda_z = z^2$, we find that
$$L_z \Phi' = \lambda_z \Phi'. \qquad (1.12)$$
Noting that $\lambda_{i\omega} = -\omega^2$, we hence see that perturbations of the form $v(x,y,0) = \theta(y,0)\Phi'(x)$ evolve under (1.9) according to the heat semiflow $\theta_t = \theta_{yy}$. These perturbations are important because they correspond at the linear level with transverse deformations of the planar wave interface. An analogous computation can be performed for the linearization (1.14) of the LDE (1.1) around the horizontal wave $\zeta = 0$; applying a discrete Fourier transform in the transverse direction leads to the operator
$$[L^{\mathrm{hor}}_z p](\xi) = -c\,p'(\xi) + p(\xi+1) + p(\xi-1) + \big(2\cosh(z) - 4\big)p(\xi) + g_{\mathrm{cub}}'\big(\Phi(\xi);\rho\big) p(\xi). \qquad (1.17)$$
The theory developed in [5,7] essentially justifies this formal calculation and confirms that the spectral properties of $L_z$ can be used to understand the dynamics of the time-dependent problem (1.14). Upon writing $\lambda_z = 2(\cosh(z) - 1)$, we again have $L^{\mathrm{hor}}_z \Phi' = \lambda_z \Phi'$. To find the evolution of perturbations of the form $v_{ij}(0) = \theta_j \Phi'$ under (1.14), we must now solve the discrete heat equation
$$\dot{\theta}_j = \theta_{j+1} + \theta_{j-1} - 2\theta_j. \qquad (1.19)$$
The situation is hence similar to that encountered for the PDE (1.3). More material changes arise, however, when considering the diagonal direction $\zeta = \pi/4$. Following a similar procedure as above, one arrives at the linear operator
$$[L^{\mathrm{diag}}_z p](\xi) = -c\,p'(\xi) + 2\cosh(z)\big(p(\xi+1) + p(\xi-1)\big) - 4p(\xi) + g_{\mathrm{cub}}'\big(\Phi_*(\xi);\rho\big) p(\xi), \qquad (1.20)$$
which has a spectrum that can no longer be directly related to that of $L^{\mathrm{diag}}_0$. It is hence no longer clear how to formulate an analogue of (1.19) to describe the linear evolution of interface deformations. However, it is still the case that $\lambda_z = O(z^2)$ as $z \to 0$ for the curve of eigenvalues that bifurcates from the zero eigenvalue $L^{\mathrm{diag}}_0 \Phi_*' = 0$. For general rational angles $\zeta$ this quadratic behaviour need no longer be true. In fact, we obtain the relation $\lambda_z = -c_g z + O(z^2)$ for the quantity $c_g$ that is often referred to as the group velocity. A similar relation was found in [21] for planar PDEs with direction-dependent diffusion coefficients. However, in that case it is always possible to change the coordinate system in such a way that $\lambda_z = O(z^2)$ holds again. Such a transformation is not possible in the spatially discrete setting (1.1), since this would require the transverse spatial coordinate to become continuous. However, we do remark here that the map $\zeta \mapsto c_{\rho,\zeta}$ can behave rather wildly in the critical regime where $\rho$ is small, allowing the group velocity to vanish at specific values of $\rho$ even if $\zeta \notin \frac{\pi}{4}\mathbb{Z}$.
Stability of planar waves
The realization that transverse interfacial deformations are governed by a heat equation led to the development of two main approaches to establish the nonlinear stability of the planar waves (1.4). Both approaches exploit the coordinate system
$$u(x,y,t) = \Phi\big(x + ct + \theta(y,t)\big) + v(x,y,t) \qquad (1.22)$$
in the neighbourhood of the planar travelling wave and require the initial perturbations $\theta(y,0)$ and $v(x,y,0)$ to be localized in a suitable sense.
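Before describing these two approaches, we pause for a brief numerical aside on the rigid spectral shift in (1.10)-(1.12). The sketch below uses our own discretization (Dirichlet truncation; all grid parameters are arbitrary choices) around the explicit PDE front $\Phi(x) = \tanh(x/\sqrt{2})$ with speed $c = -\sqrt{2}\rho$, and verifies that $\Phi'$ is approximately a kernel eigenfunction of $L_0$; the transverse modes then satisfy $\lambda_{i\omega} = -\omega^2$ exactly, since $L_{i\omega} = L_0 - \omega^2$.

import numpy as np

rho = -0.3
c = -np.sqrt(2.0) * rho                  # PDE front speed for the cubic (1.2)
L, n = 20.0, 400
x = np.linspace(-L, L, n)
h = x[1] - x[0]
Phi = np.tanh(x / np.sqrt(2.0))          # explicit Nagumo front profile
dg = 2.0 * rho * Phi - 3.0 * Phi ** 2 + 1.0   # g_cub'(Phi)

# Finite-difference matrix for L_0 = d_xx - c d_x + g_cub'(Phi).
e = np.ones(n - 1)
D2 = (np.diag(e, 1) - 2.0 * np.eye(n) + np.diag(e, -1)) / h ** 2
D1 = (np.diag(e, 1) - np.diag(e, -1)) / (2.0 * h)
L0 = D2 - c * D1 + np.diag(dg)

dPhi = (1.0 / np.sqrt(2.0)) / np.cosh(x / np.sqrt(2.0)) ** 2   # Phi'
residual = np.abs(L0 @ dPhi)[10:-10].max()   # L_0 Phi' = 0 up to truncation error
top0 = np.linalg.eigvals(L0).real.max()      # translational eigenvalue, ~ 0
print("||L0 Phi'||_inf ~", residual, "  top eigenvalue ~", top0)
# Since L_z = L_0 + z^2, the transverse Fourier modes z = i*omega inherit
# lambda_{i omega} = -omega^2, i.e. interface deformations obey theta_t = theta_yy.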
The first of these two approaches was pioneered by Kapitula in [32], where he used semigroup methods and fixed-point arguments to show that θ tends algebraically to zero, while v decays exponentially fast. The advantage of this approach is that only weak spectral assumptions need to be imposed on the underlying system. However, the crude estimates on the nonlinear terms lead to rather weak estimates for the basin of attraction. The second approach leverages the comparison principle to obtain stability for a much larger class of initial perturbations. By slowing down the natural decay rate of the fundamental solution of the heat equation, the authors of the landmark paper [8] were able to construct explicit super- and sub-solutions to (1.3) that trap perturbations that can be arbitrarily large (but localized). In fact, the authors use their construction to show that these planar waves can pass around large compact obstacles and still eventually recover their shape. In [23,24] these approaches were generalized to the discrete setting of (1.1), thereby continuing the early work by Bates and Chen [2] featuring a related four-dimensional non-local problem. In both cases the key technical challenge was the analysis of troublesome non-selfadjoint terms spawned by the anisotropy of the lattice, especially in situations where the group velocity does not vanish. These terms have slower decay rates than their PDE counterparts and hence require special care to close the nonlinear bootstrapping procedure. For example, the sub-solutions in [8] consist of only two terms, while 33 terms were required in [23] to correct for the slower decay.
Spreading phenomena
The classic result [1, Thm. 5.3] obtained by Weinberger for the PDE (1.3) states that large compact blobs with u ≈ 1 inside and u ≈ −1 outside can expand throughout the plane. The proof of this result relies on the construction of radially expanding sub- and super-solutions by gluing together planar travelling waves. In [23] a weak version of this expansion result was established for the LDE (1.1) in the special case that no direction is pinned. However, the underlying sub- and super-solutions expand at the speeds $\min_{0\le\zeta\le 2\pi} c_{\rho,\zeta}$ and $\max_{0\le\zeta\le 2\pi} c_{\rho,\zeta}$ respectively, which still leaves a considerable hole in our knowledge of the expansion process. Indeed, the numerical results in [45] provide strong evidence that the limiting shape of the expanding blob can be found by applying the Wulff construction [41] to the polar plot of the $\zeta \mapsto c_{\rho,\zeta}$ relation. For a large subset of parameters ρ this limiting shape resembles a polygon. The main motivation behind the current paper is to take a step towards understanding this expansion process by looking at the evolution of a single corner. Indeed, when the expanding blob is sufficiently large, it seems very reasonable to assume that the corners of the polygon behave in an almost independent fashion.
Corners for PDEs
Assuming for concreteness that ρ < 0, the horizontal planar wave (c, Φ) given by (1.4) with ζ = 0 satisfies c > 0, which means that it travels towards the left. In [22] Haragus and Scheel construct travelling corner solutions to (1.3) by 'bending' this planar wave to the left in the spatial limits y → ±∞, so that the interface resembles a > sign. In particular, for any small opening angle $\varphi > 0$, the authors establish the existence and stability of solutions of the form
$$u(x,y,t) = \Phi\big(x + \tilde{c}t + \theta(y)\big) + v(x,y), \qquad \tilde{c} = c/\cos\varphi. \qquad (1.23)$$
Here $\|v(\cdot,y)\|_{H^2} = O(\varphi^2)$ uniformly in y, while the phase θ satisfies the limits
$$\lim_{y\to\pm\infty} \theta'(y) = \pm\tan\varphi. \qquad (1.24)$$
Notice that the horizontal speed $c/\cos\varphi$ of these corners is faster than the original speed of the planar wave. The result is obtained by using the change of variable $\tilde{x} = x + \tilde{c}t$ to recast (1.3) in the co-moving frame and subsequently demanding $u_t = 0$. The resulting system can be written in the first-order form (1.26), which admits a family of $y$-independent equilibria (1.27). The linearization of (1.26) around $(\tilde{c}, u) = (\tilde{c}, \Phi)$ can be written in the form (1.28). This system admits the $y$-independent solutions $(\Phi', 0)$ caused by the translational invariance, together with the linearly growing solution $(y\Phi', \Phi')$. In particular, the desired corner (1.23) lives on the two-dimensional global center manifold associated to the family (1.27). The solutions on this manifold can be represented in a form involving two scalar functions κ and θ, for which one can subsequently obtain two skew-coupled ODEs. A relatively straightforward analysis shows that these ODEs have solutions for which θ satisfies the limits (1.24), while κ remains small. This suffices to establish the existence of the corners (1.23). In addition, in [21] anisotropic effects were introduced into the problem by allowing the nonlinearity g to depend on the gradient of u and by considering non-diagonal diffusion coefficients. In such cases the group velocity $c_g$ defined by the quantities (1.21) need not vanish, but it can be removed by applying a coordinate transformation $\tilde{y} = y - c_g t$ in the transverse direction. By restricting their attention to small opening angles $\varphi$ and using center manifold arguments, Haragus and Scheel were also able to apply their techniques to multi-component reaction-diffusion PDEs such as the FitzHugh-Nagumo and Gray-Scott equations [21,22]. However, it is also possible to consider large opening angles for equations that admit a comparison principle. Indeed, in [9] explicit sub- and super-solutions are used to construct corners for the Nagumo PDE (1.3) that can be arbitrarily sharp.
Corners for LDEs
The crucial point in the analysis outlined above for the corners (1.23) is that the phase shift θ(y) can be completely factored out from the system. This implies that the ODE for κ does not depend on θ. In addition, it allows the center manifold to be constructed by a standard fixed-point argument analogous to the local case. This is possible because the right-hand side of (1.28) maps $H^2 \times H^1$ into $H^1 \times L^2$, which roughly means that its inverse gains an order of regularity in both components. This precisely compensates for the loss of regularity that arises by factoring out the phase shift. However, when attempting to mimic this procedure for the LDE (1.1) one runs into a fundamental difficulty. Indeed, the analogue of (1.28) now has a right-hand side that maps $H^1 \times H^1$ into $H^1 \times L^2$, due to the lack of second derivatives in the equation. This forces us to construct a full two-dimensional global center manifold that takes into account the dynamics of θ and v simultaneously. A similar situation was encountered by one of the authors in [27], where modulated travelling wave solutions were constructed for a class of non-local systems. In our setting, however, the analogue of (1.28) is a difference equation rather than a differential equation. The order of this difference equation can become arbitrarily large depending on the height of the fraction tan ζ, which we require to be rational. Nevertheless, the final step in our analysis requires us to uncover a first-order difference equation for the center variables.
The main technical contribution of this paper is that we adapt the spirit of the approach in [27] to construct global center manifolds for the differential-difference systems that we encounter here. This approach uses two intertwined fixed-point procedures to separate the flow problem for the two center variables from the task of capturing the shape of the remainder function $h_*$. The underlying linear problems have non-autonomous, slowly varying coefficients, for which we develop appropriate solution operators. In this paper we do require the group velocity (1.21) to vanish. Unlike in the spatially continuous setting, this cannot always be arranged by a simple variable transformation. Indeed, such a transformation would force the spatial variable transverse to the propagation direction to become continuous, destroying the difference structure of the system. This prevents us from exploiting the $\omega \mapsto \omega + 2\pi$ periodicity in the Fourier variable. As a result, resonances start to appear in the spectrum that are very hard to control. A similar situation was encountered in [27], which forced the authors to add a smoothing term to the underlying system. We emphasize that the group velocity for (1.1) vanishes automatically in the directions ζ = 0 and ζ = π/4. In addition, directions where the wavespeed is minimal (and hence the group velocity is zero) play an important role in the Wulff construction, which is the primary motivation for our analysis here. In any case, the delicate behaviour of the $c_{\rho,\zeta}$ map for the Nagumo LDE (1.1) leads to a much richer class of behaviour than that displayed by its continuous counterpart (1.3). For example, the latter only features interior corners, while the former can also admit exterior corners. The former also allows for so-called bichromatic corners, which connect spatially homogeneous equilibria to checkerboard patterns. While we are confident that our center manifold construction will also allow us to establish the (linearized) stability of the corners constructed here along the lines of the approach in [21], we do not pursue this in the present paper. The main reason is that there is no coordinate transformation that can freeze our corners and also leave the discrete structure of the equation intact. One would need to generalize the approach developed in [5,7,24] to accommodate solutions that vary in two directions instead of just one, which we expect to be a tedious task.
Organization
Our main results are formulated in §2 and applied to the Nagumo LDE (1.1) in §2.1-2.2. In §3 we derive the differential-difference system that the pair (θ, v) must satisfy and formulate the global center manifold result. We proceed in §4 by deriving a representation formula for solutions to the linearized problem with constant phase. This requires us to compute a convoluted spectral projection operator that arises from the second-order pole that the operator $L_z^{-1}$ has at $z = 0$. In §5-§6 we combine this representation formula with Fourier analysis to construct a solution operator for the linearized problem in which the phase is allowed to vary slowly. Finally, in §7 we set up the fixed-point problems required to build the global center manifold, appealing at times to the results in [27] for overlapping parts of the program. Our results concern the general LDE
$$\dot{u}_{i,j}(t) = f\big(u_{i,j}(t),\, u_{i+1,j}(t),\, u_{i-1,j}(t),\, u_{i,j+1}(t),\, u_{i,j-1}(t)\big) \qquad (2.1)$$
posed on the planar lattice $(i,j) \in \mathbb{Z}^2$, in which u takes values in $\mathbb{R}^d$.
For convenience, we introduce the operator π + ij : ∞ (Z 2 ; R d ) → (R d ) 5 that acts as for any (i, j) ∈ Z 2 , which allows us to rewrite (2.1) in the condensed forṁ 3) The plus sign corresponds with the fact that a "+"-shaped stencil is used to sample u. The conditions we impose on the nonlinearity f are summarized in the following assumption. (Hf) The nonlinearity f : (R d ) 5 → R d is C r -smooth for some r ≥ 2 and there exist two points We emphasize that the two points u ± are allowed to be equal. These two equilibria are required to be connected by a planar travelling wave solution to (2.1). In particular, we pick an arbitrary rational direction (σ A , σ B ) ∈ Z 2 with gcd(σ A , σ B ) = 1 and impose the following condition. (HΦ) There exists a wave speed c * = 0 and a wave profile Φ * ∈ C r+1 (R, R d ) so that the function In addition, we have the limits Upon introducing the operator τ : C(R; R d ) → C(R; (R d ) 5 ) that acts as we note that the pair (c * , Φ * ) must satisfy the functional differential equation of mixed type (MFDE) (2.8) In particular, the C r+1 -continuity mentioned in (HΦ) is automatic upon assuming that Φ * is merely continuous. For convenience, we now introduce the new coordinates which are parallel respectively orthogonal to the direction of motion of the wave (2.5). Upon introducing the notation which admits the travelling wave solution u nl (t) = Φ * (n + c * t). (2.12) A standard approach towards establishing the stability of the wave (2.12) under the nonlinear dynamics of the LDE (2.11) is to consider the linear variational probleṁ Looking for a solution of the form v nl (t) = e λt e zl p(n + c * t), (2.14) we readily find that p must satisfy the eigenvalue problem Here the linear operator L z : with shifts r j and functions A z,j that are given by (2.17) Since Φ * (ξ) approaches u ± as ξ → ±∞, it is possible to define the characteristic C d×d -valued functions Our first spectral assumption states that these characteristic functions cannot have roots on the imaginary axis whenever z is purely imaginary. (HS2) For any ω = 0 the operator L iω is invertible as a map from Since the Fredholm index varies continuously, (HS1) and (HS2) together imply that the Fredholm index of L 0 is zero. The translational invariance of the problem implies that L 0 Φ * = 0, which means that zero is an eigenvalue for L 0 . Our next assumption states that this eigenvalue is in fact algebraically simple. For any z ∈ C we now introduce the linear operator that acts as An easy computation shows that holds for all pairs p, q ∈ W 1,∞ (R, C d ). For these reason, we refer to this operator as the formal adjoint of L z . Using [35,Thm. A] together with (HS3), one sees that the kernel of L adj 0 must also be onedimensional. In particular, it is spanned by a function ψ * ∈ W 1,∞ (R, R d ) that can be uniquely fixed by the identity on account of (2.21). We note that (HS1) implies that both Φ * (ξ) and ψ * (ξ) decay exponentially as ξ → ±∞. We now explore two important consequences of the algebraic simplicity condition (HS3). The first of these states that the zero eigenvalue can be extended to a branch of eigenvalues λ z for L z when |z| is small. (2.26) defined for each z ∈ C with |z| < δ z , such that the following hold true. (i) The characterization together with the algebraic simplicity condition hold for each z ∈ C with |z| < δ z . 
The second consequence is that the wave $(c_*, \Phi_*)$ travelling in the rational direction $(\sigma_A, \sigma_B)$ can be perturbed to yield waves travelling in nearby directions. In particular, we introduce the constants $(\zeta_*, \sigma_*)$ associated to this direction. Looking for solutions to the LDE (2.11) that travel in a direction rotated over a small angle $\varphi$, a short computation shows that the pair $(c_\varphi, \Phi_\varphi)$ must satisfy the MFDE (2.32), in which we have introduced the notation (2.33). In order to translate these waves back to the original coordinates, we remark that any solution to (2.32) yields a solution to the original LDE (2.1), with the rescaled quantities (2.35).
Lemma 2.2. Assume that (Hf), (HΦ) and (HS1)-(HS3) are all satisfied. Then there exists a constant $\delta_\varphi > 0$ together with pairs $(c_\varphi, \Phi_\varphi)$, defined for each $\varphi \in (-\delta_\varphi, \delta_\varphi)$, such that the following hold true: the associated profile satisfies the LDE (2.11) for all $t \in \mathbb{R}$, while item (iv) provides the identities (2.39).
We remark that the first quantities in (2.39) can be interpreted as a so-called group velocity, which represents the speed at which long-wavelength perturbations travel in the transverse direction. Indeed, expanding (2.14) with $p = \phi_z$ and $\lambda = \lambda_z$ we find (2.40). Our final condition requires $\lambda_z$ to depend quadratically on z, which means that this group velocity has to vanish. We emphasize that the inequality $[\partial_z^2 \lambda_z]_{z=0} > 0$ was required in [24] to obtain the nonlinear stability of the planar wave $(c_*, \Phi_*)$.
(HM) We have the identities (2.41); in particular, the group velocity appearing in (2.39)-(2.40) vanishes.
As a final preparation, we introduce the directional dispersion $\varphi \mapsto d_\varphi$. Assuming that the original wave travels in the horizontal direction $\zeta_* = 0$, the quantity $d_\varphi$ represents the speed at which level-sets of the wave $(c_\varphi, \Phi_\varphi)$ travel along the horizontal axis; see Figure 1. An easy calculation using (2.41) shows that $[\partial_\varphi d_\varphi]_{\varphi=0} = 0$. Our main result establishes the existence of travelling corners in the setting where $[\partial^2_\varphi d_\varphi]_{\varphi=0} \neq 0$. Assuming again that $\zeta_* = 0$ and that $[\partial_z^2 \lambda_z]_{z=0}$ and $[\partial^2_\varphi d_\varphi]_{\varphi=0}$ are both strictly positive, the level-sets resemble a > sign. In particular, when $c_* > 0$ this resembles an interior corner travelling to the left. Our main result, Theorem 2.3, yields such corner solutions together with two angles $\varphi_- < 0 < \varphi_+$ that satisfy the following properties: the corresponding profile satisfies the LDE (2.11) for all $t \in \mathbb{R}$; item (iii) provides the associated identities; if $[\partial_z^2 \lambda_z]_{z=0}$ and $[\partial^2_\varphi d_\varphi]_{\varphi=0}$ have the same sign, then one set of limits holds, while if these quantities have opposing signs, the complementary set of limits holds.
The Nagumo LDE
As an example, we return to the Nagumo LDE (1.1), in which the nonlinearity is given by the scaled cubic $g_{\mathrm{cub}}(u;\rho) = (u^2-1)(\rho-u)$ for some detuning parameter $\rho \in (-1,1)$. In the terminology of (2.1), this shows that (Hf) is satisfied upon picking $u_\pm = \pm 1$. Turning to (HΦ), we note that the results in [36] show that for each $\zeta \in [0,2\pi]$ and $\rho \in (-1,1)$ there is a unique wavespeed $c = c_{\rho,\zeta}$ for which the travelling wave system (2.55) admits a monotonic solution $\Phi = \Phi_{\rho,\zeta}$ that also satisfies the limits (2.6). Figure 2 contains polar plots of the $\zeta \mapsto c_{\rho,\zeta}$ relation, which can be very delicate whenever $|\rho|$ is small. By symmetry, we have $c_{\rho,\zeta} = -c_{-\rho,\zeta}$ and hence $c_{0,\zeta} = 0$ for all angles $\zeta \in [0,2\pi]$. Upon writing
$$\rho_*(\zeta) = \sup\{\rho : c_{\rho,\zeta} = 0\}, \qquad (2.56)$$
the results in [36] show that $0 \le \rho_*(\zeta) < 1$ for all $\zeta \in [0,2\pi]$. In particular, this means that (HΦ) is satisfied whenever $\tan\zeta$ is rational (or infinite) and $\rho_*(\zeta) < |\rho| < 1$. Under the same conditions, the discussion in [24, §6] uses arguments based on the comparison principle to show that (HS1)-(HS3) are also valid. The verification of the conditions in (HM) is much more subtle.
In order to make the angular dependence fully explicit, we first pick a suitable parametrization and consider the associated operators $\tilde{L}_z$. Writing $\tilde\lambda_z$ for the branch of eigenvalues of $\tilde{L}_z$ bifurcating from $\tilde\lambda_0 = 0$ and comparing this to the branch $\lambda_z$ defined in Lemma 2.1, the two agree up to a rescaling involving the smallest admissible integer $\sigma_* > 0$. In view of the similar rescalings (2.35) and the fact that the statements in Theorem 2.3 merely depend on the signs of the quantities $[\partial_z^2 \lambda_z]_{z=0}$ and $[\partial_\varphi^2 d_\varphi]_{\varphi=0}$, we may drop these rescalings and focus on the eigenvalues $\lambda_{z;\rho,\zeta}$ and eigenfunctions $\phi_{z;\rho,\zeta}$ bifurcating from $(0, \Phi_{\rho,\zeta})$ for $\rho_*(\zeta) < |\rho| < 1$. We write $\psi_{\rho,\zeta}$ for the solution to the adjoint equation (2.64). In our context, the operators $A_1$, $A_2$ and $B_1$ defined in §4 act as (2.65). In particular, Lemma 4.2 allows us to compute $[\partial_z \lambda_{z;\rho,\zeta}]_{z=0}$, which in turn allows us to find $[\partial_z \phi_{z;\rho,\zeta}]_{z=0}$ by solving the MFDE (2.67). In addition, item (iv) of Lemma 2.2 relates these $z$-derivatives to the corresponding derivatives with respect to the angle $\varphi$. Turning to the second derivatives, we again use Lemma 4.2 to compute (2.69). We remark that the last line of (2.69) vanishes in principle if the normalization (2.29) is imposed. However, numerically it is convenient to be free to utilize a different normalization, in which case this term should be included.
[Figure 2 caption: notice the extra minima that start to form in the directions tan ζ = 1 and subsequently tan ζ = 2/3 as ρ is decreased.]
Finally, in view of the fact that $D^2 f$ acts only on its fifth argument, we can use Lemma 4.5 to obtain (2.70). The last line can be ignored if indeed $[\partial_\zeta c_{\rho,\zeta}] = 0$. In the special cases where $\zeta = k\pi/4$ for some $k \in \mathbb{Z}$, we have $A_1 = 0$, and hence the group velocity vanishes automatically in these directions. For $\zeta = 0$ this allows us to write down the expression (2.72), while for $\zeta = \pi/4$ we obtain (2.73). Since $\psi_{\rho,\zeta}$ and $\Phi'_{\rho,\zeta}$ are strictly positive, we hence see that $[\partial_z^2 \lambda_{z;\rho,\zeta}]_{z=0} > 0$ for all $k \in \mathbb{Z}$.
[Figure 3 caption: the sharp spikes occur at the critical value ρ*(ζ) where pinning sets in. Sign changes appear for ζ = π/4 but not for ζ = 0; in particular, the identity $c_g \equiv 0$ for these directions implies that interior and exterior corners can both occur for ζ = π/4, while the horizontal direction ζ = 0 features interior corners only. The right panel contains numerically computed values for $c_g(\rho)$; notice the zero-crossings for tan ζ = 3/4 and tan ζ = 4/5, which indicate the presence of interior corners at these two critical values of ρ.]
The numerical results in [24, §6] suggest that this inequality extends to a wide range of (ρ, ζ), and we take this for granted for the remainder of our discussion. However, even for the straightforward expressions (2.72)-(2.73), it is not clear whether the quantity $c_{\rho,\zeta} + \partial^2_\zeta c_{\rho,\zeta}$ has a sign. For any fixed $\zeta_*$, we now introduce the notation (2.75) for the group velocity $c_g(\rho)$ and the second derivative $\kappa_d(\rho)$ of the directional dispersion that play a role in Theorem 2.3. In particular, to apply this result we need $c_g(\rho) = 0$ and $\kappa_d(\rho) \neq 0$. Since $c_{\rho,\zeta} \le 0$ whenever $\rho \ge 0$, we have an interior corner for $\kappa_d(\rho) < 0$ and an exterior corner for $\kappa_d(\rho) > 0$. In both cases the corner travels in the rightward direction (provided $|\zeta| < \pi/2$). In Figure 3 we have numerically computed the quantities (2.75) for a range of rational directions. In all cases, we also confirmed numerically that $[\partial_z^2 \lambda_{z;\rho,\zeta}]_{z=0} > 0$. In particular, the results predict interior corners travelling in the horizontal direction $\zeta_* = 0$, while both types of corners can travel in the diagonal direction $\zeta_* = \pi/4$. In addition, for two critical values of $\rho > 0$ there are interior corners that travel in the directions $\zeta_* = \arctan(3/4)$ and $\zeta_* = \arctan(4/5)$, respectively. To obtain these results, we simultaneously solved the systems (2.55), (2.64) and (2.67).
For well-posedness reasons, we added the extra terms $\gamma\Phi'_{\rho,\zeta}$, $\gamma\psi_{\rho,\zeta}$ and $\gamma[\partial_z \phi_{z;\rho,\zeta}]$, respectively, to the right-hand side of each equation, taking $\gamma = 10^{-6}$. For the precise procedure, we refer to [24, §6].
Bichromatic Nagumo LDE
[Figure 4 caption: directional dispersion $d(\zeta) = c_{\rho,\alpha,\zeta}/\cos(\zeta - \zeta_*)$, with $\zeta_* = 0$ for the left column and $\zeta_* = \pi/4$ for the right column, again with ρ = 0. These results strongly suggest that $[\partial^2_\zeta d(\zeta)]_{\zeta=\zeta_*}$ can take both signs as the diffusion coefficient α is varied. In particular, both the horizontal and diagonal directions can feature interior and exterior corners.]
We here reconsider the Nagumo LDE (2.79), but are now interested in so-called bichromatic planar travelling wave solutions. Such solutions can be written in terms of some wavespeed $c \in \mathbb{R}$ and an $\mathbb{R}^2$-valued waveprofile. These waves fit into the framework of this paper, since they can be seen as travelling wave solutions for the 'doubled' LDE (2.83). The results in [26] show that there exists an open set of values (ρ, α) with $\rho \in (-1,1)$ and $\alpha > 0$ for which (2.84) admits solutions $(u_{\mathrm{bc}}, v_{\mathrm{bc}}) \in (0,1)^2$ that are stable spatially homogeneous equilibria for (2.79). By applying the theory in [28], we hence obtain the existence of solutions to (2.83) that satisfy the limits (2.85), together with similar solutions that connect $(u_{\mathrm{bc}}, v_{\mathrm{bc}})$ with (1,1). We remark that the existence theory in [28] does not prescribe whether $c = 0$ or $c \neq 0$. However, in the special case ζ = π/4 the travelling wave system can be rewritten in a form for which there exists an open set of values (ρ, α) where the wavespeed does not vanish, allowing us to verify (HΦ). By continuity in ζ, this hence also holds for nearby angles. In addition, our numerical results suggest that the diagonal direction is the first to become pinned as α is decreased; see Figure 4. This contrasts with the situation encountered in the monochromatic case, where the diagonal waves satisfy the same travelling wave MFDE as the horizontal waves, but with a doubled diffusion coefficient. As a result, the monochromatic horizontal waves pin earlier than their diagonal counterparts. Since the spectral conditions (HS1)-(HS3) can be verified with techniques similar to those used for the monochromatic case, we now turn our attention to (HM). In particular, writing $(c_{\rho,\alpha,\zeta}, \Phi_{\rho,\alpha,\zeta})$ for the solution to (2.83) that satisfies the limits (2.85), we introduce the associated linearized operator. We now write $A^{\mathrm{mc}}_1$, $A^{\mathrm{mc}}_2$ and $B^{\mathrm{mc}}_1$ for the operators (2.65) defined for the monochromatic equation. In addition, we write $A^{\mathrm{bc}}_1$, $A^{\mathrm{bc}}_2$ and $B^{\mathrm{bc}}_1$ for the operators defined in §4 associated to the bichromatic problem (2.83). It is not hard to verify the relations between these two families. We write $(\lambda_{z;\rho,\alpha,\zeta}, \phi_{z;\rho,\alpha,\zeta})$ and $\psi_{\rho,\alpha,\zeta}$ for the analogues of the similarly named expressions defined in §2.1. Since $A^{\mathrm{bc}}_1 = 0$ for ζ = 0 and ζ = π/4, the group velocity again vanishes automatically in these directions. For ζ = 0 this allows us to write down an analogue of (2.72), while for ζ = π/4 we obtain (2.91). Since both components of $\psi_{\rho,\alpha,\zeta}$ and $\Phi'_{\rho,\alpha,\zeta}$ are strictly positive, we again see that the relevant inequality holds for all $k \in \mathbb{Z}$. As before, it is unclear whether the derivatives of c have a sign. As a consequence of the increased number of components in the MFDEs, our numerical method is at present unable to compute the desired derivatives in the same fashion as above. Instead, we simply compute the directional dispersion relation directly and determine by inspection whether it is concave or convex; see Figure 4.
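Since this concavity check is performed by inspection, a small helper of the following kind may clarify what is being computed; the wavespeed samples fed into it would come from an MFDE solver, and the profile used below is a mock placeholder rather than actual data.

import numpy as np

def dispersion_and_curvature(zeta, c, zeta_star):
    # Directional dispersion d(zeta) = c(zeta) / cos(zeta - zeta_star) and a
    # central-difference estimate of its second derivative at zeta_star.
    d = c / np.cos(zeta - zeta_star)
    i = int(np.argmin(np.abs(zeta - zeta_star)))
    h = zeta[i + 1] - zeta[i]
    kappa = (d[i + 1] - 2.0 * d[i] + d[i - 1]) / h ** 2
    return d, kappa

# Illustrative usage with a mock anisotropic wavespeed profile standing in
# for the computed values c_{rho,alpha,zeta}:
zeta = np.linspace(-0.3, 0.3, 61)
c = -0.20 - 0.05 * np.cos(4.0 * zeta)   # placeholder, not real data
d, kappa = dispersion_and_curvature(zeta, c, 0.0)
print("estimated d''(zeta*):", kappa)

Per the sign conventions discussed in §2.1, a negative curvature signals an interior corner and a positive one an exterior corner whenever the wavespeed itself is nonpositive.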
Interestingly enough, we find that this characterization flips at least twice as α is decreased, both for the horizontal direction ζ = 0 and the diagonal direction ζ = π 4 . In contrast to the monochromatic case, we hence see that interior and exterior bichromatic corners can both travel in the horizontal direction. Problem setup In this section we setup a differential-algebraic equation to describe solutions to the LDĖ that can be written in the form u nl (t) = Ξ l (n + ct) (3.2) for some sequence Ξ : The elements Ξ l will be required to lie in the orbital vicinity of the waveprofile Φ * . In particular, we formulate a global center manifold reduction that allows us to find an equivalent two component difference equation of skew-product form. For any sequence Ξ of the form (3.3), we introduce the notation to refer to the expanded sequence In addition, for any p = (p 1 , . . . , we introduce the function τ p that is given by This allows us to write π × nl u(t) = τ Ξ l (n + ct) + τ Ξ l (n + ct) holds for all l ∈ Z and ξ ∈ R. From now on, we often drop the explicit ξ-dependence. For instance, we simply write instead of the longer form (3.9). For technical reasons it is advantageous to recast (3.10) as a 2d-component system. To this end, we introduce the first differences In addition, we introduce the notation for the summed sequence in which we make the convention that sums where the lower index is strictly larger than the upper index are set to zero. For example, in the special case (σ A , σ B ) = (1, 0) we have These definitions allow us to observe that . (3.15) In particular, the system (3.10) can now be rewritten in the equivalent form (3. 16) In the special case c = c * , the travelling wave solution (2.12) gives rise to l-independent solutions to (3.16) in which ϑ ∈ R can be chosen arbitrarily. Here we have introduced the left-shift operator for any p ∈ C(R; R d ). We now look for a branch of solutions to (3.16) that bifurcates from the travelling waves (3.17) for c = c * . In particular, we consider the Ansatz for three sequences (θ, v, w) : (3.21) In order to close the system, we augment (3.21) by demanding that for all l ∈ Z. We now set out to isolate the linear and nonlinear parts of the system (3.21). For anyṽ ∈ C(R; (R d ) 5 ) and ϑ ∈ R we therefore introduce the function N f ;ϑ (ṽ) ∈ C(R; R d ) that is given by (3.23) In addition, for any phase ϑ ∈ R and difference θ d ∈ R we introduce the function Using these functions, the system (3.21) can be recast as For any v ∈ L 2 (R; R d ), we now introduce the notation Applying a difference to (3.22), we obtain Substituting the equation for v, we arrive at In order to formulate this in a more compact fashion, we write together with ev l p = p l−σ * +1 , . . . , p l+σ * −1 , p l+σ * (3.31) for any sequence p. In addition, we introduce the shorthand notation With a slight abuse of notation, we introduce the function S : R 2σ * × H 1 × H 1 → R that acts as This allows us to rewrite (3.29) in the form For any triplet (θ, v, w) : Z → R × H 1 × H 1 , we now introduce the sequences In addition, we introduce the nonlinear function that acts as (3.39) Finally, we introduce the operators and the associated projections ϑ (v, w) = (0, P ϑ w). (3.41) This allows us to represent the full problem as and consider the pair (Ξ, Υ) defined by (3.19). Then the differential-algebraic system (3.16) and the identity are both satisfied for all l ∈ Z, if and only if (3.42) holds for all l ∈ Z. Proof. 
The computations above shows that indeed (3.42) holds whenever (3.16) and (3.45) are satisfied. The converse implication can be checked by using (3.35) to compute We now proceed to obtain estimates on the nonlinear terms. These are mostly standard bounds for quadratic nonlinearities that will be used in §7 for the center manifold construction. Notice however that any dependencies on the phase θ always involve differences in θ. (3.47) In addition, for any pair (ϑ A , ϑ B ) ∈ R 2 and any pair we readily see that Since f is at least C 2 -smooth, there exists C 1 > 0 so that the pointwise bound This yields for some C 2 > 0, from which (3.47) follows. Upon writing a short computation shows that Under the assumption that (3.53) holds for both (Φ A , v A ) and (Φ B , v B ), we hence find for some C 3 > 0. The bound (3.49) now follows from the fact that for some C 4 > 0. Lemma 3.3. Assume that (Hf ), (HΦ) and (HS1) hold and pick a sufficiently large constant K > 0. Then for any pair ϑ ∈ R and θ d ∈ R we have the bound In addition, for any pair (ϑ A , ϑ B ) ∈ R 2 and any pair (θ A d , θ B d ) ∈ R 2 we have the bound Proof. Assumptions (Hf), (HΦ) and (HS1) imply that Φ * is a continuous function that decays exponentially, from which (3.59) follows. Writing (3.62) The inequality (3.60) follows directly from this representation. for all l ∈ Z, we have the bounds 65) for all l ∈ Z. In addition, for any pair of triplets for all l ∈ Z, the quantities satisfy the bounds Proof. This follows from Lemma's 3.2-3.3 upon inspecting the definitions of S and R. We are now in a position to state our main center manifold result. The fact that this manifold is two dimensional is related to the observation that the linear problem has the constant solutions (Φ * , 0) and ([∂ z φ z ] z=0 , Φ * ), as we will see in §5. such that the following properties are satisfied. for κ → 0 and c → c * , uniformly for ϑ ∈ R. (ii) The functions f θ and f κ are C r−1 -smooth. In addition, we have the behaviour as κ → 0 and c → c * . (iv) Pick a c ∈ (c * − δ, c * + δ) and consider a triplet of sequences that satisfies (3.42) and admits the bound for all l ∈ Z. Then upon writing together with the difference equation are both satisfied for all l ∈ Z. Proof of Theorem 2.3. For explicitness, we assume that [∂ 2 ϕ d ϕ ] ϕ=0 > 0 and [∂ 2 z λ z ] z=0 > 0. Whenever c − c * > 0 is sufficiently small, the identity [∂ ϕ d ϕ ] ϕ=0 = 0 allows us to use a Taylor expansion to show that there exist ϕ − < 0 < ϕ + for which d ϕ− = d ϕ+ = c. Noting that we use (iii) of Proposition 3.5 to define two quantities κ ± = κ ϕ± for which we have the identity In addition, possibly after further restricting the size of c − c * , we can ensure that the inequalities hold for all κ ∈ (κ − , κ + ). As a consequence, for any such κ we have the bounds (3.85) In particular, for any κ ∈ (κ − , κ + ) one can apply the contraction mapping principle to the fixed point problemκ and obtain a unique solution inκ ∈ (κ − , κ). For any choice ofκ 0 ∈ (κ − , κ + ), the problem can therefore be iterated backwards and forwards with respect to l to yield a solution κ : Z → (κ − , κ + ). This solution is strictly increasing and satisfies the limits lim l→±∞ κ l = κ ± . Preliminaries In this section we obtain a number of preliminary results related to the constant-coefficient linear system In particular, we study the Fourier symbol ∆(z) associated to this system and obtain a representation formula for solutions that are allowed to grow at a small exponential rate. 
As a preparation, we introduce the notation for the linearization of the travelling wave MFDE (2.8) around Φ * and the corresponding projection onto the kernel element Φ * . In addition, for any z ∈ C we define the vector recalling the convention that sums where the lower index is strictly larger than the upper index are set to zero. By construction, this allows us to write s [e z· w] = e z· s z w (4.4) for any w ∈ H 1 . Upon introducing the linear operators ∆(z) : is invertible and satisfies the bound In order to gain insight regarding solutions to the homogeneous linear system we briefly discuss the maximal Jordan chain associated to ∆(z) at z = 0. In particular, we set out to construct an analytic function z → J (z) ∈ H 1 × H 1 with J (0) = 0 so that ∆(z)J (z) = O(z m ) (4.10) for the largest possible value of m. As a preparation, we note that whenever z = 0. Recalling the definition (2.16), we hence see that (4.12) Upon introducing the notation A k = ∂ k z L z z=0 (4.13) for any integer k ≥ 1, we may differentiate (4.12) to find (4.14) In particular, we may write Using (HS3) we immediately see which allows us to pick J (0) = (Φ * , 0). This chain can be extended by exploiting the following preliminary identities. (4.18) Proof. Differentiating the definition L z φ z = λ z φ z , we obtain the identities (4.19) Evaluating these expressions at z = 0 we find (4.17). The deriatives (4.18) can then be obtained by recalling the normalization ψ * , φ z = 1 and using the fact that ψ * , L * y = 0 for all y ∈ H 1 . Indeed, we now write for a pair of analytic functions z → v(z), w(z) ∈ H 1 × H 1 . Using the identity we may exploit (4.17) to compute (4.22) In particular, we have achieved m = 2 in (4.10). This corresponds with the presence of two solutions to the linear homogeneous problem (4.9). However, it is not possible to achieve m = 3. Indeed, setting the O(z 2 ) term in (4.22) to zero, we obtain and hence Taking the inner product with ψ * , we may use (HM) to obtain the contradiction Our second main result confirms that there are no other linearly independent solutions to (4.9) that are bounded by e ηmax|l| . In addition, it provides a representation formula for solutions to the inhomogeneous system (4.1) that share such an exponential bound. Whenever L z : H 1 → L 2 is invertible, a short calculation shows that the same is true for ∆(z) with It is hence crucial to understand the behaviour of L −1 z for small |z| > 0, which we set out to do by exploiting the Fredholm properties of L * . As a preparation, we introduce the notation for the unique v ∈ H 1 that has ψ * , v = 0 and satisfies the problem This allows us to rephrase the identities (4.17) in a more explicit form. Proof. These expressions follow from (4.17), noting that L qinv Φ * = 0. At this point, it is natural to briefly turn our attention to the angular dependence of the waves (c ϕ , Φ ϕ ), which can be analyzed using techniques that are similar to those used above. Unfortunately, the expressions for the second derivatives are somewhat more involved. In order to accomodate this, we introduce the notation For any p ∈ H 1 , we can compute Based on (4.14), we may hence write (4.47) Using these identities to evaluate the expressions (4.43)-(4.44) at ϕ = 0 readily leads to the identities (4.41). The deriatives (4.42) can then be obtained by using the fact that ψ * , L * y = 0 for all y ∈ H 1 . Item (iv) of Lemma 2.2 follows directly by comparing the first identities in (4.17) and (4.18) with those in (4.41) and (4.42). 
We now construct a preliminary inverse for L z that behaves as z −2 as z → 0. As a preparation, we implicitly define the remainder expressions R L;i by writing whenever h ∈ L 2 and 0 < |z| < δ z . Proof. We set out to seek a solution a solution to (4.50) of the form for some κ ∈ R and w ∈ H 1 that satisfies ψ * , w = 0. Writing we may use (4.17) to compute The identity (4.50) is hence equivalent to the system (4.54) Whenever |z| is sufficiently small, we may use the quantity to rewrite the first line of (4.54) in the form Substituting this into the second line of (4.54), we find The desired properties now follow from the fact that I +zM (z) is invertible whenever |z| is sufficiently small. In the following result we explicitly identify the singular O(z −2 ) and O(z −1 ) terms in the expansion of L −1 z . In order to express these in a convenient fashion, we introduce the operator Γ * : L 2 → R that acts as so that we have Proof. Fix z = 0 together with h ∈ L 2 and consider the functions Performing the expansion L z v # = E #;0 + zE #;1 + z 2 R #;2 (z) (4.63) and demanding that R #;2 (z) is analytic in zero for # ∈ {A, B, C}, we can use Lemma 4.2 to compute together with and finally Summing these expressions we see that where we used (4.59) to simplify the second expression. In particular, Lemma 4.6 allows us to write from which the desired expansion follows. Proof of Proposition 4.1. For any δ z > 0 and sufficiently small η max > 0, (HS1) implies that L z is invertible for | z| ≤ η max and δ z ≤ | z| ≤ π. The result now follows from the expansion obtained for L −1 z in Lemma 4.7. We now proceed towards establishing the representation formula (4.32). To this end, we introduce the left-shift operator S that acts on sequences as (4.69) Lemma 4.8. Consider any sequence y ∈ BX µ,ν (H) and pick an integer k ≥ 1. Then we have the identities Proof. Upon computing j≥0 e −zj y j+k = j ≥k e zk e −zj y j = e zk j ≥0 e −zj y j − e zk k−1 j =0 e −zj y j (4.72) together with j≥0 e −zj y j−k = j ≥−k e −zk e −zj y j = e −zk j ≥0 e −zj y j + e −zk k j=1 e zj y −j , the identities (4.70) readily follow. In addition, we compute j≥1 e zj y −j+k = j ≥1−k e zk e zj y −j = e zk j ≥1 e zj y −j + e zk k−1 j=0 e −zj y j from which (4.71) follows. Proof. For the first compoment, we compute (4.79) For the second component, we note that The desired expression follows directly from these computations, noting that the third and fourth component can be obtained by flipping σ A and σ B in the expressions for the second respectively first component. Proof. The expressions follow from the direct computation we introduce the notation This allows us to take the two discrete Laplace transforms of the main linear system (4.1). Proof. Writing V = (v, w) and H = (g, h), we may use Lemma 4.8 and Corollary 4.9 to compute which is equivalent to (4.87). The remaining identity (4.88) follows in a similar fashion. For convenience, we introduce the linear operators together with the projections Proof. Pick any z ∈ Z for which L z is invertible. Upon introducing the representation Pick a sufficiently small η max > 0 together with a sufficiently large K > 0. Then for every 0 < η < η max there exists a C r−1 -smooth map that satisfies the following properties. (i) Pick any 0 < η < η max and ϑ ∈ R. For any H ∈ BS η (H 1 × L 2 ), the function V = K cc η (ϑ)H satisfies (5.1) and admits the orthogonality conditions (ii) Pick any 0 < η < η max and assume that V ∈ BS η (H 1 × H 1 ) satisfies (5.1) with H = 0 for some ϑ ∈ R. 
Then there exists a pair (a 1 , a 2 ) ∈ R 2 for which we have (iii) For any 0 < η < η max and ϑ ∈ R we have the bound (iv) Pick any 0 < η < η max . Then for any pair (ϑ 1 , ϑ 2 ) ∈ R 2 , we have the identity (v) Consider a pair (η 1 , η 2 ) ∈ (0, η max ) 2 together with a function Then for any ϑ ∈ R we have K cc η1 (ϑ)H = K cc η2 (ϑ)H. (5.9) Our strategy is to exploit the representation formula derived in §4 for the unprojected problem In particular, we first use the Fourier symbols ∆(z) defined in (4.5) to construct an inverse in the sequence spaces where again H is a Hilbert space. This can subsequently be used to obtain an inverse in the spaces by exploiting the fact that interactions between lattice sites decay exponentially with respect to the separation distance. (ii) We have the bound Λ inv (5.14) (iii) We have the explicit expression Proof. This follows directly from Proposition 4.1 and standard properties of the Fourier transform; see for example [29, §3]. Pick a sufficiently small η max > 0 together with a sufficiently large K > 0. Then for every pair 0 < η 1 < η 2 < η max and any In addition, for any Proof. Following the approach in [27,Lem. 5.8], we introduce the sequences In view of the convergence k∈Z H (k) = H ∈ 2 η2 (H 1 × L 2 ), (5.22) the boundedness of Λ inv η2 implies that also for all l ∈ Z. We now pick two constants η ± in such a way that By construction, we have Recalling the left-shift operator S defined in (4.69), we note that In particular, we have Here the last identity follows from Proposition 4.3, since the sequence together with the homogeneous problem We are now able to use item (ii) of Lemma 5.2 to compute for some C 1 > 0. Summing over k, we hence find 33) The result follows directly from this bound, possibly after decreasing the size of η max > 0. For any H ∈ BS η (H 1 × L 2 ) we now introduce the splitting for some small > 0, which by construction implies that V = K up η;I H satisfies the unprojected problem (5.10) with ϑ = 0. In addition, Lemma 5.3 implies that V ∈ BS η (H 1 × H 1 ). In order to allow for any ϑ ∈ R, we introduce the operator In view of the orthogonality conditions (5.4), we finally write Pick a sufficiently small η max > 0 together with a sufficiently large K > 0. Then for any 0 < η < η max , any ϑ ∈ R and any H ∈ BS η (H 1 × L 2 ), the function V = K up η (ϑ)H satisfies the unprojected problem (5.10) and admits the orthogonality conditions In addition, properties (iii) -(v) from Proposition 5.1 are satisfied after replacing K cc η by K up η . Proof. In view of the discussion above, the statements follow directly from the fact that the set of solutions to the homogeneous problem (4.9) in BS η (H 1 × H 1 ) is two-dimensional as a consequence of Proposition 4.3. A detailed discussion can be found in the proof of [27,Prop. 5.1]. We now set out to lift the results above from the unprojected system (5.10) to the full system (5.1). A key role is reserved for the summation operator J that acts on a sequence W as (5.40) with the usual remark that sums are set to zero when the lower bound is strictly larger than the upper bound. for the set of solutions to the homogenous version of (5.1). By relating this set to its counterpart for (4.9) we show that N cc η is also two-dimensional for small η > 0. Lemma 5.6. Assume that (Hf ), (HΦ), (HS1)-(HS3) and (HM) are all satisfied. Then for all sufficiently small η > 0, we have the identification Proof. 
A direct computation shows that In particular, we find (I − P 46) which verifies that the right-hand-side of (5.43) is indeed contained in N cc η . Conversely, let us consider a sequence W ∈ N cc η , which implies that Slowly varying coefficients In this section we study the properties of the bounded linear operator that for any sequence θ : Z → R acts as We are specially interested in cases where the sequence θ varies slowly with respect to l ∈ Z, which means (S − I)θ ∞ < δ θ (6.3) for some small δ θ > 0. Our first main result states that the kernel of Λ(θ) is again two-dimensional, provided that (6.3) holds. For technical reasons, we also extend the two basis functions for the kernel to situations where (6.3) fails to hold. Proposition 6.1. Assume that (Hf ), (HΦ), (HS1)-(HS3) and (HM) are all satisfied. Pick a sufficiently small constant δ θ > 0 together with a sufficiently small η > 0. Then for every θ : Z → R there exist two functions that satisfy the following properties. (iii) Pick any 0 < η < η max and suppose that Λ(θ)V = 0 for some V ∈ BS η (H 1 × H 1 ) and θ : Z → R for which (S − I)θ ∞ < δ θ . Then we have the identity Our second main result constructs operators K η (θ) that can be seen as an inverse for Λ(θ) whenever (6.3) holds. Naturally, the kernel elements above obtained in the result above need to be projected out, which is performed in (6.9). Special care needs to be taken when considering the smoothness with respect to θ. Indeed, the smoothness criteria below are based on the arguments involving nested Banach spaces argument that are traditionally used to establish the smoothness of center manifolds; see for example [16,§IX.7]. We remark that the notation L (p) stands for bounded p-linear maps. Proposition 6.2. Assume that (Hf ), (HΦ), (HS1)-(HS3) and (HM) are all satisfied. Recall the integer r appearing in (Hf ) and pick two sufficiently small constants 0 < η min < 2rη min < η max . Then for every η min < η < η max , there exists a map that satisfies the following properties. Our strategy is to exploit the inverses K cc η (ϑ) for the constant-coefficient problem (5.1) to introduce an approximate inverse for Λ(θ) by writing [K apx η (θ)H] j = pev j K cc η (θ j )H. (6.19) In order to turn this into an actual inverse, we need to establish bounds for the remainder term To this end, we introduce the coordinate projection π 2 [v, w] = w together with the sequence A short computation shows that In a similar spirit, we introduce the sequences and compute These computations allows us to obtain the identity (6.25) In order to formulate appropriate bounds for this expression, we introduce the notation cev l θ = ev l θ − 1θ l = θ l−σ * +1 − θ l , . . . , θ l+σ * − θ l . Pick a sufficently small constant η max > 0 together with a sufficiently large K > 0. Then for any 0 < η < η max and any H ∈ BS η (H 1 × L 2 ), the following estimates hold. (i) For any sequence θ : Z → R we have the bound Proof. As a consequence of the bound (5.6) and the smoothness of the map ϑ → K cc η (ϑ), there exists C 1 > 0 for which In particular, we see that for all |j| ≤ σ * we have The bound (6.27) hence follows immediately from the representations (6.22), (6.24) and (6.25). In addition, (6.29) also implies that The second bound (6.28) readily follows from this. 
(i) For any sequence θ : Z → R we have the bound (ii) For any η 1 > 0 and any pair of sequences θ A , θ B ∈ BS η1 (R), we have the bound After pickingδ θ > 0 to be sufficiently small, the bound (6.35) allows us to define the full inverse We remark that the normalization conditions (6.9) hold as a direct consequence of (5.4) and the construction of K apx η . Turning to identify the kernel of Λ(θ), we introduce the operator for any sequence θ : Z → R. We now write Proof of Proposition 6.1. Item (i) follows from the fact that together with a similar identity for V B hom (θ). Item (ii) follows directly from the normalization (6.9). Finally, (iii) can be established by following the proof of [27, Lem. 6.4]. The center manifold Our goal here is to construct and analyze a global center manifold for the system (3.42) that captures all the solutions where the pair (v, w) remains small. In particular, we set out to establish Proposition 3.5. While the main spirit of the ideas in [27, §7] can be used to establish the existence of the manifold, we need to take special care to identify the reduced equation that is satisfied on the center space. The key issue is that we wish to recover a first order difference equation from a differential-difference system of order 2σ * . Proposition 7.1. Assume that (Hf ), (HΦ), (HS1)-(HS3) and (HM) are all satisfied and pick two sufficiently small constants 0 < η min < η max . In addition, pick a sufficiently large constant K > 0 together with a sufficiently small constant δ v > 0 and write Then there exist C r−1 -smooth maps so that the following properties hold true.
15,132.6
2019-01-08T00:00:00.000
[ "Mathematics", "Physics" ]
Ampere force based magnetic field sensor using dual-polarization fiber laser
A magnetic field sensor is proposed by placing a dual-polarization fiber grating laser under a copper wire. With a perpendicular magnetic field applied, an electrical current flowing through the copper wire generates an Ampere force that squeezes the fiber grating laser, changing the birefringence inside the laser cavity and hence the beat note frequency. When an alternating current is injected into the copper wire, the magnetic field induced beat note frequency change can be discriminated from environment disturbances. A novel fiber-optic magnetic field sensor with high sensitivity and inherent immunity to disturbances is thereby demonstrated. ©2013 Optical Society of America
OCIS codes: (060.2370) Fiber optics sensors; (280.3420) Laser sensors.
References and links
1. L. Cheng, J. Han, Z. Guo, L. Jin, and B.-O. Guan, “Faraday-rotation-based miniature magnetic field sensor using polarimetric heterodyning fiber grating laser,” Opt. Lett. 38(5), 688–690 (2013).
2. M. Yang, J. Dai, C. Zhou, and D. Jiang, “Optical fiber magnetic field sensors with TbDyFe magnetostrictive thin films as sensing materials,” Opt. Express 17(23), 20777–20782 (2009).
3. B.-O. Guan and S.-N. Wang, “Fiber grating laser current sensor based on magnetic force,” IEEE Photon. Technol. Lett. 22(4), 230–232 (2010).
4. G. A. Cranch, G. M. H. Flockhart, and C. K. Kirkendall, “High-resolution distributed-feedback fiber laser dc magnetometer based on the Lorentzian force,” Meas. Sci. Technol. 20(3), 034023 (2009).
5. J. Noda, T. Hosaka, Y. Sasaki, and R. Ulrich, “Dispersion of Verdet constant in stress-birefringent silica fibre,” Electron. Lett. 20(22), 906–907 (1984).
6. T. Yoshino, T. Hashimoto, M. Nara, and K. Kurosawa, “Common path heterodyne optical fiber sensors,” J. Lightwave Technol. 10(4), 503–513 (1992).
7. C.-L. Tien, C.-C. Hwang, H.-W. Chen, W. F. Liu, and S.-W. Lin, “Magnetic sensor based on side-polished fiber Bragg grating coated with iron film,” IEEE Trans. Magn. 42(10), 3285–3287 (2006).
8. A. Dandridge, A. B. Tveten, and T. G. Giallorenzi, “Interferometric current sensors using optical fibers,” Electron. Lett. 17(15), 523–525 (1981).
9. C. T. Shyu and L. Wang, “Sensitive linear electric current measurement using two metal-coated single-mode fibers,” J. Lightwave Technol. 12(11), 2040–2048 (1994).
10. S. Jin, H. Mavoori, R. P. Espindola, L. E. Adams, and T. A. Strasser, “Magnetically tunable fiber Bragg gratings,” in Proc. OFC’99, San Diego, CA, ThJ2, 135–137 (1999).
11. J. Gong, C. C. Chan, M. Zhang, W. Jin, J. M. K. MacAlpine, and Y. B. Liao, “Fiber Bragg grating current sensor using linear magnetic actuator,” Opt. Eng. 41(3), 557–558 (2002).
12. B.-O. Guan, L. Jin, Y. Zhang, and H.-Y. Tam, “Polarimetric heterodyning fiber grating laser sensors,” J. Lightwave Technol. 30(8), 1097–1112 (2012).
13. Y. Zhang, B.-O. Guan, and H. Y. Tam, “Ultra-short distributed Bragg reflector fiber laser for sensing applications,” Opt. Express 17(12), 10050–10055 (2009).
14. K. S. Chiang, R. Kancheti, and V. Rastogi, “Temperature-compensated fiber-Bragg-grating-based magnetostrictive sensor for dc and ac currents,” Opt. Eng. 42(7), 1906–1909 (2003).
15. R. Gafsi and M. A. El-Sherif, “Analysis of induced-birefringence effects on fiber Bragg gratings,” Opt. Fiber Technol. 6(3), 299–323 (2000).
16. Y.-N. Tan, L. Jin, L. Cheng, Z. Quan, M. Li, and B.-O.
Guan, "Multi-octave tunable RF signal generation based on a dual-polarization fiber grating laser," Opt. Express 20(7), 6961–6967 (2012).
Introduction
Fiber-optic magnetic field sensors have been actively studied over the years because of their advantages over electronic counterparts: immunity to electromagnetic interference, light weight, compact size and large bandwidth. Many mechanisms have been explored, including the Faraday effect, the magnetostrictive effect, magnetic force and the Lorentzian force [1][2][3][4]. Among them, Faraday-effect-based schemes can measure magnetic fields directly. However, because the Verdet constant of silica fibers is quite small [5], their sensitivities are fairly low; magneto-optic crystals are therefore frequently employed to enhance the sensitivity [6]. Schemes based on the other mechanisms normally measure magnetic fields indirectly and need external transducers, which usually enhance their sensitivities. However, such external transducers may also introduce disadvantages. For example, magnetic materials such as magnets and magnetostrictive materials are popular in various indirect magnetic field sensing schemes [2,3,7], but the inherent magnetic saturation and hysteresis of magnetic materials may greatly reduce the dynamic range and cause inaccurate measurement.
Various configurations of fiber-optic magnetic field sensors have been proposed, such as those based on optical fibers with interferometric detection [8,9] and fiber Bragg gratings with wavelength interrogation [10,11]. Dual-polarization fiber grating laser based sensors have attracted much attention in recent years [12,13]. The two orthogonally polarized lasing modes generate a radio-frequency (RF) beat signal after a polarizer. The frequency of the beat signal changes according to the variation of the birefringence inside the laser cavity. An extremely weak cavity birefringence change of 10⁻⁸ can result in a beat-frequency variation of megahertz order. Therefore, dual-polarization fiber grating laser based sensors exhibit high sensitivities and allow much easier and simpler signal extraction by electronic signal processing.
Immunity to environmental disturbances is always a major concern for fiber-optic sensors, because the refractive index of silica fiber can easily be changed by temperature variation, bending and other environmental disturbances, which interfere with the measurands and make the measurements unreliable. Normally, passive methods such as disturbance-insensitive packaging, external references and compensation [14] have to be adopted to protect the measurands from disturbances. However, such passive methods are not sufficient, as their long-term reliability is questionable. Methods with inherent immunity to environmental disturbances are highly desired.
In this paper, we propose a novel magnetic field sensor based on a dual-polarization fiber grating laser and magnetic-field-induced Ampere force. The Ampere force is generated by an electric current in a perpendicular magnetic field, which presses the dual-polarization fiber grating laser and shifts the beat signal frequency. We demonstrate that, when an alternating electric current is employed, the proposed scheme can discriminate the magnetic field from environmental disturbances very well, showing an inherent anti-disturbance capability.
Principle
The schematic diagram of the proposed magnetic field sensor is shown in Fig. 1.
A dual-polarization fiber grating laser is placed under a copper wire to sense the Ampere force generated by the electric current flowing through the wire. The fiber grating laser operates in a single longitudinal mode with two orthogonally polarized states. The inherent birefringence inside the laser cavity results in a frequency difference between the two orthogonally polarized states, which generates a beat signal upon photodetecting the laser output, with the beat frequency Δν given by

Δν = cB/(n₀λ₀),  (1)

where c is the speed of light in vacuum, λ₀ is the laser wavelength, and n₀ and B are the average refractive index and the birefringence of the optical fiber, respectively. With a transversal force applied to the laser cavity, a linear birefringence is induced, expressed by [15]

B_f = [2n₀³(p₁₁ − p₁₂)(1 + ν_p)/(πrE)] f cos2θ,  (2)

where p₁₁ and p₁₂ are the components of the strain-optic tensor of the fiber material, ν_p is Poisson's ratio, f denotes the linear force per unit length, r is the fiber radius, θ is the angle of the applied force with respect to the fast axis, and E is the Young's modulus of the silica fiber. Therefore, with Eq. (1), the beat frequency change due to the applied lateral force is given by

Δν_f = [2cn₀²(p₁₁ − p₁₂)(1 + ν_p)/(πλ₀rE)] f cos2θ,  (3)

which suggests a linear relationship between the beat frequency change and the lateral force.
It is well known that an electrical current in a perpendicular magnetic field experiences a lateral force F_H, named the Ampere force, expressed by

F_H = HIL_H,  (4)

where H is the magnetic field, I is the electric current and L_H is the length of the current path exposed to the magnetic field. If all the force is applied onto the laser cavity, the Ampere force per unit length is

f = F_H/L_C = HIL_H/L_C,  (5)

where L_C is the laser cavity length. With Eqs. (3) and (5), the perpendicular magnetic field induced beat frequency change can be presented as

Δν_H = [2cn₀²(p₁₁ − p₁₂)(1 + ν_p)L_H/(πλ₀rEL_C)] HI cos2θ.  (6)

It then shows that the magnetic field induced beat frequency change is linearly related to the magnetic field and the electric current, with the sensitivity maximized when the Ampere force is applied along one of the fiber polarization axes. Environmental disturbances can also induce linear birefringence in the laser cavity; normally, these disturbances manifest as low frequency perturbations. If the electric current is a direct current, the magnetic field induces a stationary birefringence which is difficult to discriminate from the disturbance-induced birefringence. However, the proposed scheme permits injecting an alternating electric current into the copper wire. If a sinusoidal alternating current I(t) = I₀ sin(ω_ac t) with a frequency of ω_ac is injected, the cavity birefringence can be written as

B(t) = B_I + B_D + B_H sin(ω_ac t),  (7)

where B_I is the inherent birefringence of the fiber, B_D is the disturbance-induced birefringence, and B_H is the amplitude of the Ampere-force-induced birefringence given by Eqs. (2) and (5). The magnetic field induced frequency change is then moved to a band centered at ω_ac. The frequency change due to environmental disturbances remains in the low frequency band, as the disturbances are independent of the magnetic field and electric current. Therefore, the disturbances can be greatly suppressed by bandpass filtering, providing a quiet detection of the magnetic field. The proposed scheme hence shows an inherent capability to combat environmental disturbances.
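To make the scaling in Eqs. (1)-(6) concrete, the short sketch below evaluates the expected beat-frequency response for nominal fused-silica constants and an invented geometry. Every number here is an illustrative assumption (textbook values for silica; guessed L_H, L_C and wavelength), not a parameter of the actual device.

```python
# Back-of-envelope evaluation of Eqs. (3)-(6); all parameter values are
# nominal assumptions for fused-silica fiber, not the measured ones.
import math

c    = 3.0e8              # speed of light in vacuum (m/s)
lam0 = 1.55e-6            # laser wavelength (m), assumed
n0   = 1.45               # average refractive index of silica
p11, p12 = 0.121, 0.270   # strain-optic tensor components of silica
nu_p = 0.17               # Poisson's ratio of silica
E    = 7.2e10             # Young's modulus of silica (Pa)
r    = 62.5e-6            # fiber radius (m)

def dnu_per_force(theta=0.0):
    """Beat-frequency change per unit line force, Eq. (3), in Hz/(N/m)."""
    return (2 * c * n0**2 * abs(p11 - p12) * (1 + nu_p)
            / (math.pi * lam0 * r * E)) * math.cos(2 * theta)

def dnu_field(H_tesla, I_amp, L_H=0.25, L_C=0.019, theta=0.0):
    """Field-induced beat change, Eqs. (4)-(6); L_H and L_C are guesses (m).
    H is taken in tesla (1 T = 10^4 G)."""
    f = H_tesla * I_amp * L_H / L_C   # Ampere force per unit length, Eq. (5)
    return dnu_per_force(theta) * f

# Example: a 197 G (= 0.0197 T) field and a 160 mA current
print(f"{dnu_per_force():.3e} Hz per N/m")
print(f"{dnu_field(0.0197, 0.16):.3e} Hz")
```

Under these assumptions the response is of order 10 MHz per N/m of line force, i.e. a few hundred kilohertz of beat shift for the field and current levels used in the experiments below, comfortably resolvable on an RF spectrum analyzer.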
Experiment and Results
The experiment setup is shown in Fig. 1. The proposed fiber-optic magnetic field sensor was placed in a perpendicular magnetic field generated by two permanent magnets, whose spacing was varied to tune the magnetic field magnitude. The fiber grating laser was a dual-polarization distributed Bragg reflector (DBR) fiber laser inscribed in an Er-doped fiber (Fibercore M-12) with grating lengths of 7.5 mm and 5.5 mm, respectively, and a grating spacing of 6 mm. The absorption coefficient of the fiber is 11.3 dB/m at 979 nm. The Ampere force was generated by an alternating electric current in a copper wire, controlled by a function generator through a voltage-to-current converter. The copper wire was glued to a large glass plate of 760 × 250 mm. To ensure the Ampere force was completely exerted onto the fiber grating laser, the large glass plate was also glued to a smaller glass plate of 180 × 250 mm placed on the fiber grating laser and a dummy fiber arranged parallel to the laser. The fiber grating laser was supported by another glass plate and positioned with the fiber polarization axis aligned to the force direction to maximize the response sensitivity. The output of the fiber grating laser was photodetected after a polarizer to generate an RF signal monitored by an RF spectrum analyzer. To ensure the fiber grating laser was firmly squeezed by the glass plates, a preload of 200 g was placed on the large glass plate, which also provided a bias force shifting the original beat frequency from about 390 MHz to about 630 MHz.
The measured waveform of the beat signal frequency variation versus time is shown in Fig. 2. The magnetic field magnitude was 197 G and the electrical current injected into the copper wire was alternating at 1 kHz with an amplitude of 160 mA. A 1 kHz sinusoidal waveform is clearly observed, showing that the beat frequency of the fiber grating laser varied at 1 kHz due to the Ampere force applied by the alternating current. It is also observed that the average beat frequency drifted slowly due to environmental disturbances, such as vibration and air flow [16]. However, the amplitude and the frequency of the beat frequency variation do not change with these disturbances. By detecting the amplitude at 1 kHz, the Ampere force and hence the magnetic field can be measured while the environmental disturbances are significantly suppressed; this can be achieved effectively, for example, by correlating the beat frequency variation with the alternating current signal driving the copper wire. Figure 3 shows the measured beat frequency variation amplitude at 1 kHz for different magnetic field magnitudes with an alternating current amplitude of 240 mA. Figure 4 shows the measured beat frequency variation amplitude at a magnetic field magnitude of 110 G for various amplitudes of the alternating current at 1 kHz. The results confirm that the beat frequency variation amplitude is linearly related to the magnetic field magnitude and the alternating current amplitude, as already shown by Eq. (7). Moreover, the results suggest that the sensitivity of the proposed sensor can easily be tuned by varying the alternating current amplitude and will be enhanced at larger amplitudes provided the power dissipation is tolerable.
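The correlation-based amplitude extraction mentioned above can be illustrated numerically. The following sketch uses entirely synthetic data (the drift, noise and modulation values are invented for the demonstration) to show how correlating a slowly drifting beat-frequency record with in-phase and quadrature references at the drive frequency recovers the Ampere-force-induced amplitude while rejecting low-frequency disturbances.

```python
# Minimal sketch of lock-in style amplitude extraction; the beat-frequency
# trace is synthetic and every number here is illustrative only.
import numpy as np

fs, f_ac, T = 100_000, 1_000.0, 0.5      # sample rate (Hz), drive (Hz), span (s)
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)

A_true = 0.4e6                            # assumed 0.4 MHz modulation amplitude
drift  = 2e6 * np.sin(2 * np.pi * 3 * t)  # slow environmental drift (3 Hz)
noise  = 5e4 * rng.standard_normal(t.size)
beat   = 630e6 + drift + A_true * np.sin(2 * np.pi * f_ac * t) + noise

# Correlate with in-phase/quadrature references at the drive frequency.
ref_i = np.sin(2 * np.pi * f_ac * t)
ref_q = np.cos(2 * np.pi * f_ac * t)
A_est = 2 * np.hypot(np.mean(beat * ref_i), np.mean(beat * ref_q))
print(f"recovered amplitude: {A_est/1e6:.3f} MHz (true {A_true/1e6:.3f} MHz)")
```

Repeating such an extraction at several field strengths and fitting the recovered amplitude against the field (e.g. with numpy.polyfit) would reproduce calibration lines of the kind shown in Figs. 3 and 4.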
As the proposed sensor translates a magnetic field into an Ampere force, the mechanical structure has a great impact on the response of the sensor. For the structure employed in the experiments, the measured response in a 197 G magnetic field for an alternating current of 320 mA amplitude at different frequencies is shown in Fig. 5. The structure appears most sensitive at frequencies between 1 kHz and 1.3 kHz, which should correspond to a resonance peak of the structure. Environmental disturbances seldom appear in such a high frequency band. Therefore, it is possible to greatly improve the sensitivity with a mechanical structure deliberately designed to maximize its response in a particular frequency band in which the alternating current operates and very few environmental disturbances are present. Moreover, a bandpass filter with an extremely narrow passband can be implemented by correlating the detected frequency variation with the driving signal of the alternating current to significantly suppress noise. A performance much better than that of wavelength-discrimination-based schemes is therefore very promising. A minimal detectable magnetic field of much less than 1 G should be achievable with a proper design of the electronic signal processing, which makes the proposed sensor potentially suitable for applications requiring weak magnetic field detection, such as navigation and spatial and geophysical research.
Conclusion
A novel fiber-optic magnetic field sensor is proposed and demonstrated based on magnetic field induced Ampere force and a dual-polarization fiber grating laser. The Ampere force is generated by an alternating current inside a copper wire in the magnetic field and is applied to the dual-polarization fiber grating laser to change its beat frequency for magnetic field measurement. The proposed sensor shows an inherent capability to combat environmental disturbances by performing the measurement in a quiet high frequency band. The experimental results validate the theoretical analysis and demonstrate a novel fiber-optic magnetic field sensor with high sensitivity and good performance.
Fig. 1. Schematic and experiment setup for the magnetic field sensor based on a dual-polarization fiber grating laser and magnetic field induced Ampere force. ISO: isolator; WDM: wavelength division multiplexer; PC: polarization controller; PD: photodetector.
Fig. 2. The measured waveform of the beat signal frequency variation versus time with an alternating current of 160 mA amplitude at 1 kHz and a magnetic field strength of 197 G.
Fig. 3. The beat frequency variation amplitude for various magnetic field magnitudes. The current was alternating at 1 kHz with an amplitude of 240 mA.
Fig. 4. The beat frequency variation amplitude at 1 kHz for various alternating current amplitudes and a magnetic field magnitude of 110 G.
Fig. 5. The measured beat frequency variation amplitude at various current alternating frequencies. The amplitude of the alternating current is 320 mA and the magnetic field magnitude is 197 G.
3,411.8
2013-06-03T00:00:00.000
[ "Physics" ]
The Potential for Using Video Games to Teach Geoscience: Learning About the Geology and Geomorphology of Hokkaido (Japan) from Playing Pokémon Legends: Arceus
In recent years, video games have gained momentum as a geoscience communication tool. Popular commercial video games see millions of people around the world immersed in wondrous landscapes, many filled with real geological features including volcanoes, mineral deposits, and dinosaurs. Even though these features can be overlooked by many players as simple video game tropes, if utilised in educational environments or scientific outreach events, video games have the potential to encourage and stimulate the teaching of geoscientific concepts, both in the classroom and in players' own time. Here, we focus on the geo-educational potential of Pokémon Legends: Arceus, the latest game in the popular Pocket-Monster franchise, Pokémon. Pokémon Legends: Arceus is set in a fictional landscape, Hisui, that is directly based on the real-world island of Hokkaido, northern Japan. Both formal (peer-reviewed literature) and informal (online websites) resources are used to explore in-game and real-world geological feature comparisons and assess the game's educational potential. This paper demonstrates that a single commercial video game can be used to explore a variety of geological and geomorphological concepts, including volcanology, economic geology and hazard mitigation, with direct real-world examples to support the geoscientific understanding. Applications of this study could be extremely useful not only for increasing interest in and facilitating the self-learning of geoscience worldwide, but also for teaching in educational environments. From an educational standpoint, Pokémon Legends: Arceus could be used as a powerful tool to help students engage more in their learning by utilising their natural affinity to the popular game and showcasing the many geological and geomorphological features found across Hisui.
Learning via Video Games
Video games are commonly used to teach primary subjects to younger audiences (e.g. basic arithmetic and simple logic-based skills); however, video games have also been explored in various advanced educational topics for several years (Adams, 1998; Squire, 2005; Pew Research, 2008; Squire et al., 2008; de Freitas, 2008). In many cases, specifically designed games were developed to teach players about particular topics, focussing the gameplay on presenting players with the information required to pass tasks and progress within the game (Shute et al., 2013; Mani et al., 2016; Kerlow et al., 2020). However, the teaching potential of such 'educational' or 'serious' games may be nullified by failing to hold players' attention through sufficiently engaging gameplay (Kerawalla and Crook, 2005; Van Eck, 2006; Floyd and Portnow, 2012). 'Commercial' or 'entertainment' video games, on the other hand, prioritise engaging and entertaining gameplay over educational learning. This may lead players to miss the educational potential by perceiving the content as fictional (Floyd and Portnow, 2012; Brown et al., 2014). As a result, the prioritisation of entertainment over educational value is a deterrent for those wishing to use video games as educational tools.
The lines between educational- and entertainment-focused gaming are increasingly blurred as real-world events and locations more frequently form the basis of new games (Brown et al., 2014). Video games provide exposure to and greater appreciation of the base subject matter, with players exploring the real-world implications of the gaming subject (Brown et al., 2014). Because commercial video games capture the voluntary and undivided attention of millions immersed in rich landscapes for extended hours (Mayo, 2009), they are a logical tool for boosting geoscience communication and education efforts.
Video games can be used to achieve educational goals via four different means: (1) using game mechanics to teach specific skills, such as map reading; (2) expanding vocabulary with game narratives; (3) improving social skills such as teamwork and communication; and (4) promoting tangential learning, i.e. self-directed learning inspired by exposure to a topic one already enjoys (Floyd and Portnow, 2012; Turkay and Adinolf, 2012). This study examines only areas 1, 2 and 4, as area 3 belongs to the realm of multiplayer or forum-based games, which Pokémon Legends: Arceus is not. Recent work by Hut et al. (2019), McGowan and Scarlett (2021) and Clements et al. (2022) illustrates how popular commercial games (including Legend of Zelda: Breath of the Wild and Minecraft) could be used as a form of geoscience communication to promote and educate the wider public, covering topics such as volcanology and palaeontology. If effectively used, commercial video games can become a powerful tool in educational settings, and at outreach events, to stimulate geoscientific education and engagement in students. However, despite the previously mentioned work, both on the use of video games in education in general and on those directly applied to geoscience, video games remain a rarely used resource for teaching geological concepts (Jolley et al., 2022).
Video games also have further benefits for those with learning difficulties (for example, attention deficit hyperactivity disorders (ADD/ADHD) or dyslexia), who may struggle to maintain focus using more conventional educational methods (Griffiths, 2002; Marino and Beecher, 2010; García-Redondo et al., 2019). In most cases, studies have shown video games improve a student's measured attention, as tested using the d2 test of attention, and motivation towards formal learning (García-Redondo et al., 2019). Additional benefits include improved language comprehension and mathematics skills (Franceschini et al., 2013), mental agility, strategic reasoning (García-Redondo et al., 2019), time management, and planning and organization (Bul et al., 2016).
Background of Pokémon Legends: Arceus
Released worldwide on 28 January 2022, Pokémon Legends: Arceus is part of the eighth generation of Pokémon games, a franchise spanning over 25 years. The game was extremely popular, selling over 6.5 million copies worldwide during the first week of release, making it the fastest selling game of the franchise at the time of writing (Knezevic, 2022).
Each series of video games in the Pokémon franchise is set in a unique region based on a real-world location. This not only inspires the design of the explorable game map (including layout, geography and environments), but also the Pokémon (based on real and mythological animals associated with that region), clothing, culture, food, and architecture. The first four generations are set in fictional versions of Japan, while later generations are based on other countries and states, including New York, USA (Pokémon Black/White) and the United Kingdom (Pokémon Sword/Shield; O'Farrell, 2018). The fictional region of Hisui in Pokémon Legends: Arceus is directly based on the island of Hokkaido, Japan (Nintendo, 2022). Hokkaido also inspired the Pokémon Diamond/Pearl games, which are set in the modern day, meaning that Pokémon Legends: Arceus is set in the past of the same region (Wikipedia, 2022).
Part of Pokémon Legends: Arceus' popularity lies in the game's graphics, which provide some of the most modern and realistic visuals seen in the franchise to date. Additionally, the gameplay has shifted dramatically from a fixed formulaic style with set paths for players to follow to several open-world biomes that players can freely explore to research Pokémon in their natural habitats. This combination of improved three-dimensional graphics and real-world inspiration makes Pokémon Legends: Arceus an excellent choice for exploring the educational potential of video games regarding geographic and geological features. It is important to note that even though much of the player base is likely to be classified as non-geoscientists, players are still likely to be able to identify differences between fake and realistic landscapes to inform their learning (Hut et al., 2019).
By comparing real-world and in-game features, this paper aims to explore and test whether a single video game can be used for a variety of educational topics. In doing so, the apparent 'realness' of the features can be assessed. This paper is intended as an example -in addition to the other 'geo-gaming' literature -of how commercial video games could be applied in an educational setting (facilitated learning) and encourage players' own self-learning (tangential learning; Floyd and Portnow, 2012; Brown et al., 2014) of geoscientific topics (e.g. McGowan and Scarlett, 2021; Clements et al., 2022).
Methods
The authors identified geological and geomorphological features, including active volcanoes, crater lakes and peninsulas, which were tied to key moments within the game's main narrative. This approach is inspired by McGowan and Scarlett (2021), where geoscientific features are identified in popular commercial video games and then compared to real-world examples. Features and areas that are a necessity for progression, and therefore guarantee player interaction, are particularly addressed. The selected features encompass highly visible landmarks, including volcanoes, and frequently referenced locations that carry geological context in their name.
Real-world counterparts of the in-game features were identified based on geographical location and physical characteristics. Comparisons between the literature content and in-game appearance were made to determine whether they form suitable explanations for the inspiration behind each feature.
It should be noted that Pokémon Legends: Arceus was developed to be played by the general population and not specifically by academic specialists. Therefore, informal sources (for example, Wikipedia and online magazines) are also used alongside peer-reviewed literature, as players may prefer this type of resource (Nisbet and Scheufele, 2009) or may be unable to access scientific papers behind paywalls.
In-Game Features
When comparing the in-game map of Hisui with that of Hokkaido, Japan, including topographic and geological maps (Ayalew et al., 2011), striking similarities in the topography and coastal outline are seen (Fig 1). Therefore, players can identify locations within Pokémon Legends: Arceus based on their relative geographic location and similarities (e.g. volcanic craters identifiable in topographic maps), and the literature reviews can provide additional geological understanding of the features (Table 1).
Obsidian Fieldlands
The first open area players may explore is the Obsidian Fieldlands: a lush grassland with hilly ground in the centre, a large, forked river cutting northeast to southwest and a dense forest in the south. The locality's name suggests obsidian naturally occurs in this part of the island. Indeed, obsidian is a common volcanic material found on Hokkaido, with at least 21 confirmed primary sources of the glass across the island (Izuho and Sato, 2007). In contrast to Hisui, however, the majority of sites are located in the northeast of Hokkaido, around the Kitami Mountains, over 100 km from the Ishikari Lowland (Fig 2a; Izuho and Sato, 2007; Akai, 2008) -where the Obsidian Fieldlands are paralleled in Pokémon Legends: Arceus (Fig 1 and 2b).
The obsidian of Hokkaido was an important resource to Palaeolithic inhabitants of the island, who shaped it into microblade tools. Such tools were created between 26-10 ka (Akai, 2008; Yakushige and Sato, 2014) and were widely transported across the island, including to the Ishikari Lowland and to Honshu, Japan's main island (Yakushige and Sato, 2014). X-ray fluorescence analysis of obsidian microblades from the Ishikari Lowland allows individual tools to be traced back to their primary origin, including Akaigawa, ~40 km to the west, and Shirataki, over 170 km to the northeast (Akai, 2008). An additional homage to Hokkaido obsidian is the newly released Pokémon, Kleavor. It can be obtained using black augurite (a fictional mineral) or caught in the wild. Despite black augurite being fictional, its item design and Kleavor's appearance mirror obsidian. Furthermore, the official description of Kleavor states that Hisuians used the chipped pieces of stone that fell off Kleavor as tools (Pokémon Legends, 2022), evoking the use of obsidian tools by Hokkaido's indigenous inhabitants.
Whilst the name Obsidian Fieldlands suggests obsidian would be naturally present in this region, this is not the case. Instead, obsidian was likely transported there from elsewhere on the island, suggesting the name is more of a homage to this once important resource of the Palaeolithic inhabitants.
Cobalt Coastlands
The Cobalt Coastlands, found on the east coast of Hisui, is another open access area (Fig 1a). As with the Obsidian Fieldlands, one might expect cobalt to be found in this coastal region. However, cobalt is mined in the central regions of Hokkaido, not on the east coast (Khoeurn et al., 2019). This draws into question the use of 'cobalt' in the area's name. Is it purely a catchy use of alliteration, or is there greater geological influence?
The area's name could be related to the popular tourist destination known as the Blue Pond (Fig 1b), a man-made pond famous for its "cobalt" blue waters (Biei Tourist Association, 2017; Smart Magazine, 2018). Following the 1988 eruption of Tokachi-Dake volcano, concrete dams were built to divert volcanic mudflows (lahars) away from populated areas (Ministry of Land, Infrastructure, Transport & Tourism, 2016; Smart Magazine, 2018). Lahars are amongst the deadliest volcanic hazards, ranking third (primary lahars) and fourth (secondary lahars) out of thirteen, based on total number of fatalities (Brown et al., 2017). Not only can they flow tens to hundreds of kilometres from the flanks of a volcano, but secondary lahars can occur years after the primary event (Brown et al., 2017). An unexpected result of the hazard mitigation was that aluminium-rich spring water from the volcano was also diverted, leading to the formation of a pond with a distinctively blue hue (Smart Magazine, 2018).
While the Blue Pond is in central Hokkaido, not near the east coast where the Cobalt Coastlands lie in Pokémon Legends: Arceus, a number of larch trees were drowned by the pond, turning silvery-white as they died (Smart Magazine, 2018). Such trees are found within the southern part of the Cobalt Coastlands in the area named Deadwood Haunt (Fig 1a), which contains numerous ghost-type Pokémon, possibly a tribute to the drowned trees of the Blue Pond (Fig 3). This adds further merit to the idea that the Cobalt Coastlands are based upon the popular tourist destination.
Veilstone Cape -Volcanic Chains, Arches and Caves
One of the most prominent geomorphic features in the Cobalt Coastlands is the Veilstone Cape, a tall, narrow rocky headland (Fig 1a). On Hokkaido, the comparable feature is the Shiretoko Peninsula (Fig 1b). The real-world peninsula is the result of several overlapping volcanic complexes (Neogene to Holocene in age) that form the Kuril Volcanic Chain, running ENE-WSW from central Hokkaido to the eastern end of the Shiretoko Peninsula (Minato et al., 1972). The volcanic chain constitutes part of the Kuril Island-arc System -a 1175 km arc system produced by the subduction of the Pacific Plate along the Kuril Trench (Khomich et al., 2018) -and, through submarine volcanism, uplift and continued terrestrial volcanism, resulted in the steep topography along the Shiretoko Peninsula (Chakraborty, 2018). Along the Veilstone Cape in Pokémon Legends: Arceus, caves and arches cut through the coastal cliff (Fig 4a). While comparable erosional features in Hokkaido are not as well reported as other elements mentioned in this paper, the cause may be that the Shiretoko Peninsula is much wider and less steep than the in-game Veilstone Cape. As the fictional cape (Fig 4a) is taller and narrower than its real-world counterpart, it would be easier for coastal erosion to create the prominent arches seen at the end of the peninsula. Travelling inland, the arches decrease in size, eventually forming only sea caves where the coastal waters have yet to erode through and connect both sides, cleverly demonstrating the progressive evolution and formation of natural sea arches (BBC, 2022; Fig 4b).
The major inaccuracy of Veilstone Cape is the size of the headland. In the real world, the Shiretoko Peninsula is much longer and wider and has a gentler profile. However, this is likely a calculated resizing by the developers to ensure the headland remains visually impressive without making it feel like a chore for players to traverse, something for which games with large maps can receive bad reviews (Tassi, 2018).
Firespit Island -Active Volcano
Off the coast of the Cobalt Coastlands, in the northeast of the region, is Firespit Island (Fig 1a and 5a). This is a fictional location without a real-world equivalent on Hokkaido. Firespit Island is a large volcanic edifice, likely a stratovolcano given its steep, conical slopes, its tectonic setting and this being the most common type of video game volcano (McGowan and Scarlett, 2021). It has a distinguishable crater rim that is taller in the east, presumably the product of a violent explosive eruption that destroyed the rest of the cone (Fig 5a). To the west is a gap in the outer slopes and a shallow fan reaching into the sea. These pieces of evidence suggest a sector collapse and/or lateral blast modified the morphology of the main edifice and produced a debris avalanche (Romero et al., 2021).
Lava pours out of the vent of a new volcanic cone within the centre of the collapsed edifice (Fig 5b), one of the most common volcanic attributes seen in video games (McGowan and Scarlett, 2021). Post-collapse volcanism is common in volcanoes around the world, including Anak Krakatoa (Indonesia), Mt St Helens (USA), Soufrière Hills (Montserrat) and Bezymianny (Russia) (Girina, 2013; Watt et al., 2012; Watt, 2019). However, the lava produced in such post-collapse craters is typically highly viscous and does not 'pour out' of the vents (Carr et al., 2022). After progressing further through the storyline of the game, the lava ceases and solidifies into a mass within the vent, forming a plug (Fig 5c).
It is typical for mafic stratovolcanoes in arc settings, like that of Hokkaido, to build rapidly upwards, producing steep slopes of typically 21°-40° (Romero et al., 2021). The old edifice and central vent on Firespit Island exceed even this, producing an unrealistically steep slope and cone (Fig 5). This is another common trope of video game volcanoes, with other overly steep stratovolcanoes seen in Legend of Zelda: Breath of the Wild and Monster Hunter: Generations Ultimate (McGowan and Scarlett, 2021).
Spirit Lakes -Flooded Calderas
The storyline takes players to three lakes found across Hisui (Fig 1a). Upon reaching the islands in the centre of Lake Verity in the Obsidian Fieldlands (Fig 6a) and Lake Valor in the Crimson Mirelands (Fig 6c), a character named Volo explains that many believe these lakes formed after volcanoes erupted and the craters later filled; the geographical locations of the two Hisuian lakes suggest they are the in-game versions of Lake Tōya (Fig 6b) and Lake Kussharo (Fig 6d), respectively. The description of Lake Verity's formation (in the game) mirrors the series of six continuous rhyolitic caldera-forming eruptions which produced Lake Tōya and the <80 m-thick Tōya Ignimbrite around 110 ka (Fig 6b). The first five events were phreatomagmatic, suggesting the presence of a pre-caldera lake (Machida et al., 1987; Goto et al., 2018). Post-caldera volcanism (around 40-45 ka) produced Nakajima, an andesitic to dacitic dome complex in the centre of the lake (Goto et al., 2018). Lake Kussharo (the Lake Valor equivalent) is also situated within a caldera, the Kussharo Caldera (Fig 6d). The last major caldera-forming eruption is estimated at around 30 ka (Fujiwara et al., 2017). Like Tōya Caldera, a post-caldera dome complex formed, producing a dacitic to rhyolitic island (Smithsonian, 2013a), alongside an additional caldera complex, the Atosanupuri Caldera, within the eastern half of Kussharo Caldera during the Holocene (Fujiwara et al., 2017).
In both scenarios, the geomorphology of the Spirit Lakes and the descriptive dialogue in Pokémon Legends: Arceus accurately portray features of real-world caldera lakes and post-caldera lava domes on Hokkaido.
Coronet Highlands -Volcanic Peaks
The centre of Hisui houses a large mountainous area known as the Coronet Highlands, where the tallest mountain on the island, Mount Coronet, is located (Fig 1a). It can be presumed that the real-world equivalent is Mount Asahi, a 2,291 m stratovolcano within the Daisetsuzan Mountain Range, part of the Daisetsuzan volcano group, a complex of numerous stratovolcanoes and lava domes (Smithsonian, 2013b).
The Coronet Highlands are a barrier to progress in the modern-day setting of Pokémon Diamond/Pearl/Platinum and likely represent the roughly north-south trending Hidaka Mountains on Hokkaido (Fig 1b). The Hidaka Mountains were initially formed through the collision of the Eurasian and North American plate boundaries approximately 13 Ma within the Hidaka collision zone (Niida, 2010; Ichihara et al., 2019).
The Coronet Highlands also contain a "special magnetic field" that allows the evolution of certain Pokémon, such as Nosepass. Lodestones, a rare form of magnetite (Mills, 2004), are thought to acquire their magnetism through lightning remanent magnetization, allowing them to be found at the Earth's surface as opposed to at depth (Wasilewski and Kletetschka, 1999), and were previously used in compasses. They are therefore a likely inspiration for Nosepass, which is noted to always point north and is checked by travellers to get their bearings (Bulbapedia, 2022).
Lake Acuity -Lagoon
Lake Acuity is the third Spirit Lake found within Hisui (Fig 1a and 7a). Unlike the two previously mentioned flooded-caldera Spirit Lakes (Section 3.5), Volo does not say this lake formed from a volcanic eruption. Instead, the character states it contains seawater, but does not know whether this is related to its geography or to a Pokémon. This hints at a different origin for Lake Acuity.
The origin of Lake Acuity is difficult to determine from in-game visuals alone because they are similar to those of the previously mentioned lakes (a topographically circular lake with an island in the middle), so it could be assumed it is also a flooded caldera with a central lava dome complex (Fig 6a, c and 7a). However, when consulting a geological map (Fig 1c), no volcanic features are found in the corresponding real-world region, supporting the hint that Lake Acuity did not form in the same way as the other two Spirit Lakes and instead has a non-volcanic origin.
The lake is the most northern in Hisui, and it can therefore be assumed that its real-world equivalent is Lake Onuma, Wakkanai (Fig 7b), the most northern lake in Hokkaido (Fig 1b). Due to Lake Onuma's proximity to the ocean at Soya Bay, tidal inflows can bring seawater into the lake (Ministry of the Environment, 2015). Although the literature does not directly state the lake's origin, it is more akin to a coastal lagoon than a volcanic lake, which explains the change of descriptive dialogue.
Tangential Learning about Hokkaido
Pokémon Legends: Arceus utilises a wide range of resources to communicate geological features to the player, including maps, physical structures/graphics, and dialogue from characters. These details may be used to facilitate learning and stimulate curiosity about the geology of Hokkaido. From the topics covered in this paper, Pokémon Legends: Arceus can be used to teach volcanology, hazard mitigation, economic geology and more (Table 1). Whilst this knowledge was mostly applied to Hokkaido, the general principles could also be transferred to other similar geological settings around the world.
While not every topic covered here was explored in depth, this reflects the realistic expectation that a player will use quick online searches to understand more about features they have seen in the game. At the same time, such searches appear sufficient to gain a basic understanding of this region's geology and geomorphology.
It is not logical to expect every player to share enough interest in geoscience-related topics to stimulate any desire for tangential learning. However, as noted by Floyd and Portnow (2012), even if only 0.1% of players conducted online investigations into a single feature mentioned herein, Pokémon Legends: Arceus would have facilitated a modicum of geoscience learning for >6,500 people worldwide.
Even in situations such as understanding the use of 'cobalt' in the name of the Cobalt Coastlands, where the outcome was not as conclusive as others (e.g. the flooded caldera lakes with direct real-world equivalents), players are presented with the opportunity to learn about both mining on Hokkaido and lahar risk management, while critically analysing the in-game evidence to draw a conclusion.
There is also the possibility that the opportunity to learn about the real-world equivalents of game features could stimulate further tangential learning. For example, having learnt that Lake Verity/Lake Tōya formed through a caldera-forming eruption, players could continue to research the volcanism of Hokkaido by investigating Firespit Island, with its very prominent volcanic features (crater, active vent, molten lava, etc.), or the similar-looking Lake Acuity, and discover its non-volcanic origins. This could even expand into players conducting tangential learning on features not specifically found in the game, or on a larger scale (e.g. the plate tectonics and island-arc formation that resulted in the formation of Hokkaido).
Caution is warranted when using video games in educational settings, as the potential for learning misinformation is high. For example, players are informed that two of the three Spirit Lakes formed via volcanic activity, while the third is suggested to have formed through different, unmentioned means. A caldera lake is defined by the volcanic activity that led to its formation; however, given the lack of volcanic activity in northern Hokkaido, there is merit to the change in descriptive dialogue.
In addition, over-exaggeration is often found in popular media, including video games, to provide a more entertaining experience through 'speculative fiction' and artistic liberty (Shaw, 2014; Politopoulos et al., 2019). Such evidence was found in Pokémon Legends: Arceus in the overly steep volcanic slopes of Firespit Island; unrealistically steep volcanic slopes are a common over-exaggerated feature of video game volcanoes (McGowan and Scarlett, 2021). It is efforts such as those demonstrated here that allow the apparent accuracy and authenticity of features found within a medium to be assessed and then utilised in educational settings, as opposed to simply assuming learning will take place regardless of the quality of representation. Furthermore, tangential learning through commercial gameplay can also be conducted using other games. For example, numerous mineralogical items are treated as resources in video games, which can ultimately lead to players better understanding the real world. A case of this was presented by Robb (2013) when interpreting mineral deposits in Elder Scrolls V: Skyrim, and by Clements et al. (2022) for palaeontological topics across numerous games.
Using Video Games in Geosciences
Despite professional instructors rarely utilising video games to teach geological concepts (Jolley et al., 2022), this example illustrates how they can be used to teach a wide range of topics in an engaging way. Compared to other literature on the subject that investigates a single topic across numerous commercial video games (McGowan and Scarlett, 2021; Clements et al., 2022), the focus of this paper shows how one game can introduce several geoscientific topics and potentially spark additional interest. This should reassure geoscience educators that they do not require access to multiple different video games to provide sufficient examples for their course.
The shift to online-based and hybrid learning following the COVID-19 pandemic has led to increasing reliance on newly developed teaching methods, including virtual field trips (MacKay, 2020; Bond et al., 2021) and other digital resources (Pringle et al., 2017; Jeffery et al., 2021). Video games can augment this new education paradigm. The use of virtual learning, including video games, holds numerous benefits, including increased accessibility for students who cannot attend field-based teaching due to costs or physical disabilities, as well as the ability to visit high-risk locations (Stainfield et al., 2000; Pringle et al., 2017).
The high standards of graphics, gameplay and internal functions of commercial video games take considerable time and funding to produce (Mayo, 2009), which educators cannot be expected to invest themselves. However, specific areas or features can require a significant amount of gameplay to reach, meaning that alternatives should be considered. YouTube and Twitch give access to thousands of video game walkthroughs, so one could select an appropriate video that covers the desired location or feature to show students in the classroom, without needing to own or play the game. The downside is reduced control over what is shown and no opportunity for students to engage directly in gameplay. Educators could also set homework to investigate the geology observed in a video game (either through direct gameplay or via videos), with further prompts and questions to help guide the students' learning and promote tangential learning at home. Pokémon Legends: Arceus also provides players with opportunities to develop other skills, for example, context in which to practise map reading and exposure to the utility of topographic maps.
Pokémon Legends: Arceus could also prove useful for geoscience communication to the wider public at outreach events. The game has a generally relaxed gameplay style and quick-to-understand controls; it could therefore be offered to non-geoscientists at outreach events, allowing them to play casually under the tutelage of a geoscientist. Pokémon Legends: Arceus is also rated PEGI 7, meaning it is appropriate for everyone over the age of seven, and so is accessible to a wide range of people.
Conclusion
Pokémon Legends: Arceus includes a wide range of geological and geomorphological aspects within its design, drawing direct inspiration from features found on Hokkaido. The ability to directly compare virtual and real-world counterparts could stimulate tangential learning in players, should they be curious enough. Whilst an entire curriculum cannot be covered using Pokémon Legends: Arceus, it offers an additional way of communicating the science behind numerous geoscientific topics to the player/student, though care must be taken, either by using resources such as this one or through a prior demonstration of the game, to ensure appropriate information is being taught. The exposure potential of geological and geomorphological concepts through video games could be widespread. Even if only a small fraction of the player base conducts such learning, because the game is so popular and sold millions of copies, Pokémon Legends: Arceus can potentially facilitate learning about Hokkaido's geoscience for thousands of players worldwide. This reach can be extended by using the game as a prompt in classrooms to increase student engagement.
Figures
Figure 2: (A) Map of Hokkaido showing the source locations of all 21 recorded obsidian sites (red triangles) across the island (Izuho and Sato, 2007) and the location of the Ishikari Lowland that the Obsidian Fieldlands (red box) are based on in Pokémon Legends: Arceus. (B) A zoomed-in game map of the Obsidian Fieldlands (zoomed-out version in Fig 1a) from Pokémon Legends: Arceus © The Pokémon Company (2022).
Figure 5: Images of the volcano, Firespit Island, located in the Cobalt Coastlands in Pokémon Legends: Arceus (Fig 1a). (A) Annotated schematic of Firespit Island showing a hypothetical pre-sector-collapse form of the volcano and highlighting the resulting debris avalanche. (B) Close-up of the steep, central active vent with lava flowing out. (C) The volcanic plug that forms after the lava eruption ceases. © The Pokémon Company (2022). There is no direct comparison for this volcano on Hokkaido, Japan (Fig 1b).
Table 1: Summary of the authors' interpretations of the geological and geomorphological features selected within Pokémon Legends: Arceus, based on in-game visuals or prompts, versus the geological understanding of the features after literature review.
6,921.6
2022-10-11T00:00:00.000
[ "Geology" ]
High Resolution Synchrotron X-Radiation Diffraction Imaging of Crystals Grown in Microgravity and Closely Related Terrestrial Crystals
Irregularities in three crystals grown in space and in four terrestrial crystals grown under otherwise comparable conditions have been observed by high resolution diffraction imaging. The images provide important new clues to the nature and origins of irregularities in each crystal. For two of the materials, mercuric iodide and lead tin telluride, more than one phase (an array of non-diffracting inclusions) was observed in terrestrial samples; but the formation of these multiple phases appears to have been suppressed in directly comparable crystals grown in microgravity. The terrestrial seed crystal of triglycine sulfate displayed an unexpected layered structure, which propagated during directly comparable space growth. Terrestrial Bridgman regrowth of gallium arsenide revealed a mesoscopic structure substantially different from that of the original Czochralski material. A directly comparable crystal is to be grown shortly in space.
Introduction
The performance of electro-optic devices varies according to the crystal growth and fabrication procedures used, and increases in the ability to control these procedures now promise substantial improvement in such devices. In particular, the reduction of convection in the microgravity found in space now offers control of one very important parameter in crystal growth. Nevertheless, the absence of comprehensive knowledge of the principal structural defects engendered during the various stages of crystal growth and device fabrication, and of the roles played in device performance by the various defects, has severely restricted device improvement to date. In order to begin to shed light on the principal irregularities found in various electro-optic detector materials and their influence on device performance, we have observed and compared irregularities found in three crystals grown in space and in four directly comparable crystals grown on the ground. These irregularities, observed in mercuric iodide, lead tin telluride, triglycine sulfate, and gallium arsenide by high resolution synchrotron x-radiation diffraction imaging, provide new clues to the nature and origins of the principal irregularities in these important materials and to their respective influence on detector performance.
Detector materials are of special interest because the performance of detectors made from space-grown mercuric iodide has been reported to be far superior to that of similar devices made from similar ground-grown material. The charge carrier mobility of x- and gamma-ray detectors made from space-grown crystals was at least six times higher than for similar detectors made from ground-grown crystals [1,2]. This is expected to lead to increased energy resolution in radiation detectors made from this material. Determination of the principal irregularities in these materials is of interest: 1) for the scientific insight to which it can lead, 2) for the optimization of expensive space growth, and ultimately 3) for the establishment of desirable growth conditions in far less extreme and expensive environments on the basis of the insight achieved. Establishment of the specific nature of the mesoscopic irregularities in these materials, determination of their level of incidence, and observation of their distribution in space-grown and comparable terrestrial materials are all important to these goals. Fortunately, recent technical advances in diffraction imaging with highly parallel monochromatic synchrotron x-radiation present the first opportunity for crystal growers to obtain all of these parameters simultaneously [3,4,5].
Imaging Goals
General Considerations
X-ray topography alone has long promised to provide this information simultaneously on the nature, prevalence, and distribution of structural irregularities over the macroscopic areas important to integration with crystal growth parameters. However, this information clearly has not been available. A principal impediment to the fulfillment of this promise has been x-ray beam divergence. Individual irregularities and the immediately surrounding matrix in a typical crystal are illuminated by laboratory x-ray sources over an angular range measured in arc minutes. The divergence of this incoming beam (at a given point on the sample) unfortunately supports diffraction simultaneously from many irregularities along with that from the surrounding regular regions. The spread in wavelength in white beam synchrotron radiation also permits diffraction simultaneously from irregular and regular regions. In both instances, contrast that would otherwise be present is severely reduced or eliminated entirely. Even where some contrast remains, the spatial information contained is convoluted by the differing Bragg angles in a way that will not permit unfolding and subsequent detailed analysis. Lattice deviations influencing diffraction by seconds of arc, which are frequently critical to satisfactory interpretation and understanding of mesoscopic irregularity, are rendered visible in diffraction only by a source of monochromatic radiation parallel to within an arc second. In such a beam, spatial fidelity also is preserved at the micrometer level. The recent availability of such a source of x-radiation thus now permits the realization of the long-term promise of x-ray topography. Among the irregularities that can now be observed over areas large enough to interpret in terms of crystal growth are the following.
Lattice Orientation and Strain
Of the various types of irregularity in high-quality crystals, perhaps the most pervasive is gradual change in the lattice. The orientation of the lattice or the magnitude of the lattice parameter, or both, may vary.
For any one orientation of the crystal with respect to the incident beam, such variation results in diffraction from only a portion of a single grain. Diffraction from grains whose lattice orientation or parameter varies monotonically or aperiodically yields images of restricted regions of a single grain. The part of such a grain that is in diffraction shifts gradually as the crystal is rotated; the moving edge of this image is characteristically soft and relatively indistinct in high resolution diffraction images. In other systems, such as the Czochralski growth of doped material, lattice variation may be oscillatory, leading to striations in diffraction images of crystals cut obliquely to the local growth direction and oriented slightly off the Bragg condition. Contrast is inverted on opposite sides of the Bragg diffraction peak. These striations record, like tree rings, not only variation in chemical composition but, taken together, also the shape of the crystal at various stages of its growth; and they can be deciphered in a somewhat similar if more complex and sophisticated manner [5,6].
Grain and Subgrain Boundaries
Sharp contrast in the image of a crystal can delineate homogeneous grains or subgrains. In contrast to the preceding case, the boundaries of such an image do not move as the crystal is rotated in the Bragg direction. Where the lattice orientation of a pair of such homogeneous grains differs by rotation in the diffraction plane by more than the acceptance angle, only one of these grains (or subgrains) will diffract at a time; and, if it is not strained, it does so in its entirety. Variations in real-time images of such a crystal permit rapid and detailed assessment of the relative misorientation (in the Bragg direction) of the various grains and subgrains with respect to one another. Where the lattice orientation of a pair of such contiguous grains differs in a direction orthogonal to the plane of diffraction, both grains may appear in diffraction simultaneously, but the resulting images are displaced with respect to one another in this direction. The pair of images is either separated or overlapped, depending on the relative inclination of the two lattices.
Dislocations
Dislocations typically appear in diffraction images taken in Laue geometry (transmission) as linear features that are broader at one end than the other. The broadening of one end of such a feature arises from scattering deep within the crystal, while the sharp end locates the intersection of the dislocation with the x-ray exit surface of the crystal. The orientation of a dislocation can be determined with high precision in those cases in which the intersection of the dislocation with both entrance and exit faces is distinct in the diffraction image. Variation in the visibility of such a line feature in successive diffraction images indicates the direction of atomic displacement associated with the dislocation, which is parallel to its Burgers vector. However, since the visibility of such a dislocation varies relatively slowly, that is, as the cosine of the angle between the Burgers vector and the diffraction vector, the determination of this direction is most precise when contrast can be observed to disappear at one unique angle. When the direction of diffraction is oriented normal to the atomic displacement, such a feature vanishes from the diffraction image.
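The visibility rule just described is the familiar g·b extinction criterion. As a minimal numerical illustration (the Burgers vector and the set of diffraction vectors below are invented examples, not measurements from these crystals), relative contrast can be modeled as the normalized |g·b|, vanishing when the diffraction vector is normal to the atomic displacement:

```python
# Minimal illustration of the g.b visibility criterion for dislocations;
# the vectors below are invented examples, not data from these crystals.
import numpy as np

def visibility(g, b):
    """Relative dislocation contrast ~ |cos(angle between g and b)|."""
    g, b = np.asarray(g, float), np.asarray(b, float)
    return abs(g @ b) / (np.linalg.norm(g) * np.linalg.norm(b))

b = [1, 1, 0]                      # hypothetical Burgers vector
for g in ([1, 1, 0], [1, -1, 0], [0, 0, 2], [1, 1, 2]):
    print(g, f"visibility ~ {visibility(g, b):.2f}")
# g = [1,-1,0] and g = [0,0,2] give zero: the dislocation goes invisible,
# pinning the displacement direction as described in the text.
```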
Phase Domain Boundaries
Twins are normally distinguished by the absence of diffraction from regions between sharp parallel boundaries that are visible in some diffraction directions but not in others. The contrast in the latter, when observed with high-resolution beams, may be affected by very slight differences in lattice alignment. With angular collimation of the order of an arc second, the images of other boundaries recorded in Laue geometry may also become visible when the diffraction vector falls along the boundary. Such boundaries are visible even under these restrictive conditions only when they separate atomically coherent regions differing by an atomic phase shift [7,8]. Those boundaries that have been observed to date to fulfill these conditions appear to separate antiphase domains. Radiation with a divergence of the order of a second of arc or less is necessary to image such boundaries.
Additional Phases
The absence of diffraction from particular regions of a crystal under all diffraction conditions supporting diffraction from the rest of the crystal strongly suggests the presence of a second phase, although in principle the non-diffracting regions may simply be misoriented with respect to the rest of the crystal. In stoichiometric materials, the boundaries of two phases are sharply delineated. In alloys, this sharpness is vitiated by the gradual changes in composition that may be permitted.
Surface Scratches
The strain associated with surface scratches is linear, sometimes gently curved, and typically non-crystallographic in orientation. Scratches are typically distinguished also by three other characteristics: 1) uniform width, 2) sharp edges, and 3) contrast reversal either laterally, longitudinally, or both, particularly as the Bragg peak is scanned. The latter two characteristics are evident both in observation in Bragg geometry and when scratches are present on the exit surface in Laue geometry.
Current Imaging Capability
Suitable synchrotron radiation sources now offer opportunities to fulfill the long-awaited promise of x-ray topography, but the degree of success in their realization depends upon the particular parameters of the storage ring and its beam lines. Since the precise orientation of the x-radiation at individual points on the sample is crucial to the analysis, the vertical source size, together with the distance of the sample from the tangent point on the ring, may limit the utility of the images produced. The x-ray storage ring of the National Synchrotron Light Source at Brookhaven National Laboratory offers the most suitable combination of characteristics of any existing storage ring, providing an unusually bright beam whose vertical divergence at a point on a sample mounted on beam line X-23A3 is 1.5 arc seconds. Although this 1.5 arc second beam provides a considerable improvement over other sources, it is not yet sufficient for diffraction imaging with optimum sensitivity to defects. Useful sensitivity to irregularities requires a further improvement in beam divergence by another order of magnitude, i.e., to 0.1 arc second, for which the optics of the monochromator are crucial. Such a beam is necessary for rendering critical features visible, for preservation of the spatial information in the image within the plane of diffraction, and for displaying essential clues to the strains upon which the success or failure of detailed analysis can depend [6].
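To put these collimation figures in perspective, differentiating Bragg's law λ = 2d sin θ gives |Δd/d| = cot θ_B · Δθ, so the lattice-strain sensitivity implied by a given beam divergence follows directly. The short sketch below works this out for 1.5 and 0.1 arc second beams at an assumed Bragg angle of 15° (an illustrative choice, not a specific reflection from the text):

```python
# Strain sensitivity implied by beam divergence, from differentiating
# Bragg's law: |delta_d / d| = cot(theta_B) * delta_theta.
# The 15-degree Bragg angle is an illustrative assumption.
import math

ARCSEC = math.pi / (180 * 3600)    # one arc second in radians

def strain_sensitivity(divergence_arcsec, bragg_deg=15.0):
    dtheta = divergence_arcsec * ARCSEC
    return dtheta / math.tan(math.radians(bragg_deg))

for div in (1.5, 0.1):
    print(f"{div:>4} arcsec beam -> delta d/d ~ {strain_sensitivity(div):.1e}")
```

On this estimate, the order-of-magnitude improvement in collimation translates into sensitivity to fractional lattice-parameter variations near 10⁻⁶ rather than 10⁻⁵.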
With such a dedicated 0.1 arc second monochromatic capability, however, which is available on a routine basis only on Beam Line X-23A3 at the National Synchrotron Light Source, the detection and interpretation of irregularities are limited principally by their density. Irregularities can be recorded photographically with a spatial resolution of 1 µm. Observation with an x-ray vidicon and charge-coupled device (CCD) cameras readily provides complementary information with a spatial resolution of 35 µm with intermediate sensitivity in real time, and 20 µm with shot-noise-limited sensitivity in quasi real time. The Crystals Three crystals of mercuric iodide, two of lead tin telluride, one of triglycine sulfate, and one of gallium arsenide are included in the present study. The three mercuric iodide crystals were grown by identical physical vapor transport procedures, one on Spacelab III, a second from identical source material under full gravity, and a third also terrestrially from more highly purified material recently available. The resulting crystals were state of the art for each material, as demonstrated by the performance of detectors made from directly comparable material and by the diffraction images. The two lead tin telluride crystals were grown by identical Bridgman techniques from identical source material, one on Space Shuttle STS 61A and the second terrestrially. The images strongly suggest that this material also is state of the art for such a ternary crystal. The triglycine sulfate crystal consisted of a terrestrial seed and additional growth by identical techniques from identical solution on Spacelab III. The images indicate that this material contains relatively few irregularities. The gallium arsenide crystal consisted of a Czochralski seed and Bridgman regrowth, all terrestrial, but carried out under procedures identical to those due to be employed shortly in a space experiment. The purpose of this particular regrowth experiment is to examine various aspects of the growth rather than to grow immediately the most regular material. The terrestrial material is also of interest in its own right for diffraction imaging just now because it is the first Bridgman material to be observed by high-resolution diffraction imaging. Terrestrial Crystal Compared with Spacelab III Crystal While the terrestrial mercuric iodide crystal grown from source material identical to that used for a crystal grown on Spacelab III diffracts over a range of one-half degree, a large central portion is sufficiently regular to diffract only within a few minutes of arc. Full high-resolution diffraction images of this terrestrial crystal appear in figures 1 and 2, and an enlarged portion of the first in figure 3. Most of the central portion of the crystal is in diffraction in the (1 1 12) image in figure 1, indicating lattice regularity with respect to rotation around a [110] axis of the order of a few arc seconds. However, the absence of diffraction in a wide [110] (vertical) stripe in the center of this figure indicates that the lattice is deformed by a sharp twist of about 10 minutes of arc around the orthogonal axis defined by this stripe. The extent of this twist is determined from the 100 µm width of the stripe, and the knowledge that the photographic plate was located about 3.5 cm from the crystal. This twist of the crystal lattice is evident also in the (0 1 11) diffraction image, figure 2, for which the crystal was rotated azimuthally 45°.
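As an aside, the roughly 10 minutes of arc quoted for the twist is consistent with a simple small-angle estimate from the stripe width and plate distance given two sentences above, assuming the stripe width directly measures the angular separation of the two subgrain images at the plate:

$$\Delta\theta \approx \frac{w}{L} = \frac{100\ \mu\mathrm{m}}{3.5\ \mathrm{cm}} \approx 2.9\times10^{-3}\ \mathrm{rad} \approx 10\ \text{minutes of arc}.$$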
In this orientation, the misalignment of the two parts of the crystal precludes bringing them simultaneously into diffraction. Examination of a number of full images and of a sequence of real-time images indicates that the principal lattice twist axis itself bends through several minutes of arc, differing slightly in the two subgrains. The other principal large feature of the full images of this crystal is a set of textural stripes, which are oriented in the [110] direction. Enlargements such as that in figure 3 show these stripes to consist of a relatively high density of discrete features, typically out of diffraction in these images and therefore ascribable to one or more additional phases. Some of these features take the form of thin {100}-oriented stripes a few micrometers wide; they are sometimes crossed. The others are more irregular, globular features, 1–60 µm in diameter. These may differ completely from the stripes, but may simply represent similar stripes normal to the (001) image and projected on it. In those regions characterized by a high density of discrete features, diffraction appears to be restricted to small (≈5 µm) cells of the type observed in scanning cathodoluminescence microscopy [9]. The other areas of the crystal contain similar features that are out of diffraction, but with a much lower density. In addition, however, these regions contain thin, curved features marked by varying sections of higher diffraction, lower diffraction, or alternating regions of higher and lower diffraction in tandem. The inability to observe diffraction in Laue geometry in the current series of experiments, because of the sample thickness, precludes firm identification now of these features as dislocations. The nature and arrangement of these various features in this sample make this crystal a Rosetta stone for understanding the evolution of irregularity in mercuric iodide. One principal question that arises is associated with the origin of the lattice twist. Does it appear during growth or only later during subsequent handling of this very soft crystal? Six distinct observations all indicate that this lattice twist occurred during growth and indeed indicate the growth direction. The first two observations are that the twist axis does not extend across the entire crystal, and that once started, the magnitude of the apparent separation of the two parts of the images does not increase. It is difficult to conceive of such a partial lattice twist, one lying precisely in the (001) plane, developing through inadvertent mishandling. The third observation is that this twist axis is normal to the [110] layered texture formed by a high density of additional phase features. These layers appear to be broad striations formed during growth and to indicate its direction; the [110] lattice twist axis appears to be aligned with the crystal growth direction. Fourth, the gradually curved nature of some of the linear multiple-phase configurations in the vicinity of the lattice rotation is more consistent with growth than with post-growth bending. Fifth, the onset of the lattice twist immediately precedes a major textural change that appears to be growth-related. The final observation is the bending that has been noted in the lattice twist axis, bending that differs in the two resulting subgrains. Examination of the interfaces between the widest stratum of high-density features and the adjacent strata of low feature density confirms the growth orientation and the origin of the lattice twist.
The linear additional phase features in the low-density layer immediately adjacent to the high-density region near the center of the crystal appear correlated closely with individual features in the high-density region. Growth thus took place in this part of the crystal from the high-density stratum to the low-density stratum, i.e., in a direction projecting onto the (001) crystal surface in the [110] direction. Figure 3. Enlargement of central portion of figure 1, (1 1 12) diffraction. Darker areas diffract more strongly. Moreover, as just noted, the sharp lattice twist appears to begin immediately preceding the onset of the broad textural stratum of high density of (precipitate) features, where it joins the preceding textural stratum of low feature density. All of these observations are consistent with a growth model in which growth begins in the extreme [110] corner of this crystal (in fig. 1 this is the top corner) and proceeds relatively uneventfully in the [110] direction (downward in fig. 1), or in a direction projected onto this direction in the images, until just before the onset of the wide swath of a high density of additional phase features, one of which initiated the sharp lattice twist. The twist then propagated for the remainder of the growth. During this subsequent growth, briefer periods of relatively high additional phase density alternate with periods of relatively low additional phase density. The nature of the additional phase material is suggested by evaporation of such crystals. As material is removed, small specks of foreign material similar in size to the additional phase features observed in this study accumulate on the surface, at an irregular rate. Chemical analysis indicates these specks are neither mercury nor iodine precipitates but rather consist of organic and metallic impurities with a 70% carbon content and a wide variety of metals. It is tempting to associate these observed impurity formations with the additional phase features observed in diffraction and therefore to conclude that these impurities reside in such crystals in discrete form. The morphology of the diffraction images permits us to develop two alternative growth models, which tie together all of these observations. Growth over a region of a few micrometers forms a crystal with a relatively high degree of purity and crystal perfection, creating small regions that diffract strongly. Impurities are rejected from the crystal during this stage of the growth process, in a manner similar to constitutional supercooling, and accumulate near the growth surface. In one model, the level of impurities after growth of a few micrometers accumulates to such an extent that they precipitate out, marking the local growth surface in {100} directions. At reentrant corners of such {100} growth surfaces a globular precipitate possibly forms. In a second model, the rejection of impurity stimulates dendritic growth, which leaves the linear features observed in {100} directions. In this case, the globular form of the precipitate may form in the reentrant dendritic locations. Alternatively, the features that appear to be globular may simply represent the cross section of dendrites normal to the image surface. None of our observations to date permit us to distinguish absolutely between these two models.
Either model involves modulation of the general impurity level by an as yet unidentified process that produces textural stripes or striations delineated by changes in the density of precipitates. The resulting composite formed in either model resists deformation. It consists of relatively pure and thus relatively strain-free components. Spacelab III Crystal A crystal grown in Spacelab III from material identical to that used for the terrestrial growth of the crystal shown in the preceding section diffracts over a wider angular range, about one and one-half degrees. A full high-resolution diffraction image appears in figure 4 and an enlarged region of this in figure 5. It is clear from the appearance of the full images as well as from the one and one-half degree acceptance angle for diffraction that the lattice orientation or parameter of the space crystal in its entirety is less uniform than the comparable terrestrial crystal shown in figures 1-3: that is, less of the space crystal appears in diffraction at a given angle of incidence than does the comparable terrestrial crystal, indicating gradual variation either in lattice parameter or lattice orientation, or both. Perhaps closely related, but potentially far more important, is the substantial reduction, seen in the enlarged images of the Spacelab III crystal, of the arrays of out-of-diffraction features, the textural arrays characteristic of the comparable terrestrial crystal. A few irregular regions of the order of 50 µm across that are out of diffraction are observed, but they are much less pervasive and sharply delineated than are those in the terrestrial crystal. None of the crystallographically oriented regular regions that are typically out of diffraction in the images of the comparable terrestrial crystal are observed in the Spacelab III crystal. The formation of regions of additional phase thus appears to be almost completely suppressed in the crystal grown in microgravity. The Spacelab III sample differed from the terrestrial sample not only by its growth in microgravity but also by the superposition of graphite electrodes, so that its performance as a neutron and x-ray detector could be measured. Since graphite is relatively transparent to x rays, these electrodes were not expected to interfere with the imaging process itself. While they could in principle have affected the surface strain, we found no evidence for this. However, this crystal was not encapsulated. With the passage of the 5 years since the growth of this crystal, some deterioration in electronic performance of the device made from it has actually been observed, as is characteristic also of unencapsulated devices made in terrestrial environments. The observed gradual variation in lattice is consistent with varying retention within the lattice of some foreign material. This leads to increased interest in the results of the growth of mercuric iodide from the much purer material on a future flight. 5.1.3 Terrestrial Crystal to Be Compared to a Future Flight Crystal Models ascribing a central role in the structure and properties of mercuric iodide detectors to impurities are reinforced by observation of a third mercuric iodide crystal, grown terrestrially in an identical manner from higher purity material similar to that to be used on a future flight. It diffracts over a full two degrees. A full high-resolution diffraction image appears in figure 6, with an enlarged region in figure 7.
The extent and character of the diffraction in these images, reflecting the general lattice uniformity, resembles much more the diffraction from the Spacelab III crystal than that from its terrestrially-grown counterpart. Moreover, the absence of an array of small features that are out of diffraction also gives these images much more the character of those from the Spacelab III crystal than those from the terrestrial crystal grown about the same time from similar material. The performance of devices made from the new high purity material approaches the original performance of the device made from the Spacelab III crystal. The improved performance of the Spacelab III crystal is traceable to the higher mobility of its charge carriers. By contrast, however, the purified terrestrial crystal here is characterized by improved carrier lifetime. Although the electronic improvements are quite distinct in these two cases, in neither crystal do we find the additional phase features that we have observed in the first terrestrial crystal. Thus absence of additional phase precipitates appears to be much more important to device performance than the generally higher level of lattice uniformity that we observe in the first terrestrial crystal. The stiffening provided by additional phase precipitates apparently comes at too high a price in terms of charge carrier trapping. Future space growth of this high purity material now assumes particular interest. Will space growth achieve incorporation of residual impurities in the final crystal at levels even below their currently low level in the charge material? And, if so, will these lower impurity levels lead to greater general lattice uniformity? And finally, will this new level of regularity result in still further improvement in device performance, improvements both in carrier mobility and in carrier lifetime? Terrestrial Crystal Comparable to Space Shuttle STS 61A Crystal Various regions of the terrestrially grown sample of lead tin telluride similar to one grown on Space Shuttle STS 61A diffract as the crystal is rocked over a full two degrees. Full high-resolution diffraction images of two distinct grains are shown in figures 8 and 9. Growth was in the [001] direction, which is oriented "down" in all figures. The sample was a regular half cylinder. The sharply delineated irregular outlines of the image in figure 8 thus indicate immediately that several grains are present: the curvature of the [110] (right-hand) edge indicates that a subsidiary grain started to grow almost simultaneously with the main grain. Then, after 1 cm of growth, a third grain started to grow between the center of the boule and the opposite edge of the main grain. It grew laterally more rapidly than the nucleating grain, however, displacing and, after another 2 cm, completely overtaking the growth of the nucleating grain. The new grain is brought into diffraction in figure 9 simply by rotation of the sample about the boule (growth) axis. Subgrains within each of the main grains are clearly visible through terraced variation in contrast and can be studied in real-time images as the crystal is rotated. The generally strong diffraction from a 1.5 cm length of each of the two principal grains observed is notable, however, in light of the increase in tin level from 14 to 18% during the first 3 cm of growth visible in these images. The fractional change in lattice constant over the 1.5 cm length of the grains is 4 × 10⁻⁴, which changes the Bragg angle by 90 arc seconds.
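The 90 arc second figure follows from differentiating Bragg's law; as a sketch of the estimate, assuming a Bragg angle in the neighborhood of 45° (not stated in the text),

$$\lambda = 2d\sin\theta \;\Rightarrow\; |\Delta\theta| = \tan\theta\,\frac{\Delta d}{d} \approx \tan 45^{\circ}\times 4\times10^{-4}\ \mathrm{rad} \approx 80\text{-}90\ \text{arc seconds},$$

with the quoted 90 arc seconds corresponding to a Bragg angle slightly above 45°.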
Nevertheless, diffraction is observed in a single image of one of the grains through broadening by kinematic scattering, which is difficult to quantify, as well as by local compositional variation. Because of the mixture of these two broadening mechanisms, unfortunately we cannot use the broadening to evaluate the degree of local compositional variation. Other aspects of this variation are evident in enlargements such as figure 10. Cellular regions of high diffraction varying in size from ten to several hundred µm are observed. They are separated by lines of reduced diffraction that are 10–50 µm wide. Many of these lines at first glance appear to be scratches because of their curvature and random orientation. However, three characteristics typical of surface scratches, such as those visible for example in the gallium arsenide images to which we turn later, are not observed in these linear features. First, the lines vary in width, both from line to line, and even over the length of a given line. In reality these lines separate cellular regions of high diffraction. Second, the boundaries of the lines are very indistinct. And third, contrast reversal is never observed in them. They are invariably out of diffraction over their entire length, even as the crystal is rotated while it is observed in real time. Thus, while we cannot rule out scratches, the linear features here differ markedly in several respects from those of typical scratches in other materials. Moreover, they are not observed in the image of the space-grown sample, whose images follow. We are thus left with the postulate that the highly diffracting cells are separated by material of another phase. The indistinctness of the boundaries between these regions of differing phase strongly suggests gradual change in chemical composition on a scale of 1–10 µm or so, in contrast to the sharp delineation between diffracting and non-diffracting features in the images of mercuric iodide discussed above. The pseudobinary phase diagram along the lead–tin axis predicts complete miscibility [11]. However, the observation of similar structure following electrolytic etching led earlier to a series of experiments on the metal/tellurium ratio, which delineated its importance in the growth of this material. This earlier work provides a satisfactory model for the current observations as well [12,13]. While the metal constituents are widely recognized to be interchangeable, a single phase is preserved only with tellurium concentration in excess of 51%. Below this value, two phases are formed, differing in metal/tellurium ratio. Since the tellurium concentration of the current crystals is 50.1%, two phases are actually to be expected. Constitutional supercooling may also play an important role, depending on the temperature gradients imposed [14]. Space Shuttle STS 61A Crystal A full image of a crystal grown on flight STS 61A appears in figure 11 and an enlargement of the central portion of this in figure 12. The multigrain nature of the STS 61A crystal is generally similar to that of the terrestrial crystal. But, while these images appear qualitatively similar to the full images for the corresponding terrestrial crystal, they differ in important ways. Each grain is more generally uniform than those of the terrestrial crystal. This uniformity follows a drastic reduction in the incidence of linear features and subgrains. As a result, variation in diffraction on a scale of 10–100 µm is greatly reduced.
Thus, while the granular structure resembles that for the terrestrial crystal, variation within individual grains from the intrusion of a distinct second phase appears to be suppressed in microgravity. The absence of thermo-solutal instability for this system in microgravity was noted in the preceding section. Triglycine Sulfate A normal slice from a disc-shaped terrestrial seed crystal with additional growth achieved on Spacelab III diffracts into images, each of which appears over less than half of an arc minute. Full high-resolution diffraction images appear in figures 13-16. The character of the diffraction from this crystal is very different from that of the others. This crystal was thin enough and low enough in atomic number to allow diffraction in Laue geometry. Moreover, superimposed images of this crystal taken as it was rotated about its [100] and [001] axes appear in closely spaced groups, each associated with the diffraction directions expected for diffraction from one set of (h00) or (00l) planes, respectively. The various images have similar, but not identical, shapes. Subsequent work, summarized in table 1, indicates that the members of a given group of images appearing at nearly similar diffraction angles come into diffraction at differing sample orientations. The appearance of images in groups indicates that this crystal consists of layered grains whose lattices are similar but rotated with respect to one another by rotation about the [100] and [001] axes. Since each image is ostensibly nearly "complete," the grain boundaries are roughly parallel to the (010) crystal surface. In an optically thick material, transmission through such a layered crystal would be precluded by the misalignment of the successive grains. However, this crystal is optically thin, permitting the observation of symmetrical diffraction from each of the grains in turn. From the occurrence of similar features in pairs of images, which can be ascribed to features shared by adjacent grains at their interface, and from the degree of clarity, we can assign a tentative order to the various grains as intersected by the x-ray beam. This is the order in which their images are presented in the figures and in table 1. Most of the features thus appear to be associated with irregularities at the granular interfaces, although radiographic effects from each layer are present. The seed portion of this crystal takes up most of each image. Space growth was in the [001] direction along that one edge of the seed. The absence of a clear demarcation between the seed and new growth in this region in most of the images is in contrast to the terrestrial growth of comparable material. The interface between the seed and the new growth is visible in figure 14, but irregularities in this grain do not appear to propagate into the new growth in the central portion of the disc. In the other grains, irregularities from the seed indeed appear to have propagated into the part grown in microgravity. Toward the edge of the disc, the irregularity observed in all of the grains is consistent with rapid growth anticipated from the defects observed near the edge of the seed. Irregularities near the edge in such systems before faceting becomes fully developed later in growth are observed optically as well. The last of these images, figure 16, differs in shape along the growth edge from the others. It thus appears that new growth did not occur uniformly on all layers.
On this one layer, growth appears to have been much slower than on the others, although this may represent initial etching of the seed crystal associated with premature contact with the solution. Gallium Arsenide A terrestrial crystal of lightly selenium-doped Bridgman-grown gallium arsenide diffracts over several degrees. Growth of identical material by identical techniques is scheduled for an early space flight. A low-resolution diffraction image of the terrestrial crystal, achieved by rocking the crystal 4° around a [112] axis during diffraction, is shown in figure 17. An infrared image of the same crystal is shown in figure 18. The similarity of these two images is striking. The additional information in a full high-resolution diffraction image of this crystal, figure 19, is evident. An enlargement of a portion of this image is shown in figure 20. The demarcation of the Czochralski seed from the new Bridgman growth is very clear in those images in which this region is in diffraction. The seed/growth boundary is delineated in two ways. First, toward the periphery of the boule it marks a smooth limit to diffraction, past which the lattice does not diffract under the same conditions. Thus, either the lattice constant, or orientation, or both differs in the new growth. Second, in the one region of the seed interface supporting diffraction from both sides, the mesoscopic structure of the growth is observed to be transformed at the interface. The cellular structure of the seed is characteristic of diffraction images of Czochralski-grown undoped gallium arsenide. In the new Bridgman growth, the formation of cells appears to be completely suppressed. Freedom from other demarcation, however, indicates that, to the extent permitted by the lattice parameter match, the two lattices continue uninterrupted. The lattice parameter mismatch appears to set up a gradual warping of the crystal lattice. Further analysis of the features observed is precluded by the inability to observe diffraction in Laue geometry. Summary of Initial Observations These results are summarized in the following sections. The three crystals of mercuric iodide, two of lead tin telluride, one triglycine sulfate crystal, and one gallium arsenide crystal show remarkable differences in their irregularities. General Effects of Microgravity Seven very different crystals do not provide a sample that is sufficiently large to form definitive conclusions. Nevertheless, our observations provide guidance for further evaluation and crystal growth. The formation of pervasive multiple phases observed in two terrestrial crystals appears to have been greatly suppressed on growth of two comparable crystals in microgravity. Mercuric Iodide A terrestrial specimen comparable in growth procedure and source material composition to one grown on Spacelab III displays more than one phase. The features containing the additional phase appear to be globular in part and partly in the form of thin layers oriented along the {100} crystallographic directions of the matrix. One of these features appears to have initiated a sharp lattice twist of 10 minutes of arc around an axis aligned with the growth direction and to have stiffened the two resulting subgrains. Formation of the additional phase material is suppressed both in a comparable Spacelab III crystal and a recently grown high purity terrestrial crystal. At the same time, the general regularity of the lattice of these latter crystals is lower than in the earlier terrestrial crystal.
Superior performance of detectors made from these materials thus appears to be limited far more by sharp discontinuities associated with additional phase(s) than by slow variation in the lattice. Lead Tin Telluride The mesoscopic structure of lead tin telluride appears also to be influenced strongly by the intrusion of additional phase material. But in this instance all available evidence points to identification of the additional phase with the major constituents. Indeed, the presence of two phases has been predicted for systems with tellurium concentration very close to 50%. Although the STS 61A crystal has grain structure that appears to be similar to that of the comparable terrestrial crystal, the formation of the subgrain variation characteristic of the terrestrial sample is suppressed in microgravity. This suppression is correlated with the predicted thermo-solutal stability in microgravity. Triglycine Sulfate Interpretation of the space growth of triglycine sulfate has been complicated by the layered structure that we observe in the seed. Defects in one of these layers appear not to have propagated in the central portion of the disc in microgravity, while defects in the other seed layers appear indeed to have propagated into the new growth. In addition, one of the seed layers appears to have grown at a rate slower than the others, but this may simply represent inadvertent contact between the seed and the solution prior to space growth. In the scheduled IML-1 mission, the use of multifaceted natural seeds is planned. They will be characterized not only for various physical properties but also for defects and structural properties prior to flight. In this way, definitive information should be obtained on the generation and propagation of defects during growth. Gallium Arsenide Although gallium arsenide has not yet been grown in space in the NASA program, space growth directly comparable to that used for our terrestrial sample is scheduled shortly. Meanwhile, the mesoscopic structure of our terrestrial Bridgman regrowth has been observed to differ from that of the original Czochralski-grown material. The Bridgman-grown lattice also appears to be warped more than that of the Czochralski-grown seed. Figure 19. High-resolution (220) stationary diffraction image from the approximately (220) surface of terrestrial GaAs crystal in Bragg geometry. The growth direction is [111]. Lighter areas diffract more strongly.
9,576
1991-05-01T00:00:00.000
[ "Materials Science", "Physics" ]
Assessment of the Results and Methodology of the Sustainable Development Index for Spanish Cities In 2017, the United Nations adopted a global Sustainable Development Goals (SDG) indicator framework, calling on member countries to collect complementary national and regional indicators. Cities are crucial to channelling efforts towards sustainability through the use of these indicators. They provide an integrated approach to monitoring a city's sustainability situation. However, more research is needed to understand how to adapt the goals, targets and indicators to specific municipal contexts. In 2020, the Spanish Sustainable Development Solutions Network launched the 2nd edition of the Spanish Cities Index. A set of 106 indicators allows for monitoring the implementation of the SDGs at the local level for Spanish cities. The objective is to perform a statistical audit to evaluate the consistency of the indicators and the impact of modelling assumptions on the result. The methodology used is an adaptation of the Handbook on Constructing Composite Indicators prepared by the European Commission. The indicator system is well balanced and covers the essential areas of the Sustainable Development Goals. The Spanish ranking is sufficiently robust across the alternatives evaluated. However, some improvements are possible in the selection of indicators, e.g., removing redundant indicators and regional data. Finally, it is recommended to weight goals based on municipal responsibility to adjust the results to the Spanish municipal context. Introduction Based on the experience of the Millennium Development Goals (MDGs) [1,2], in 2015, the UN adopted the 2030 Agenda for Sustainable Development and its 17 Sustainable Development Goals (SDGs). They aim to guide the achievement of sustainable development [3] and rank highly on the agenda of most countries in the world. The SDGs comprise 17 goals that cover different aspects of sustainable development under a holistic approach. These objectives, in turn, are further specified in 169 targets. The evaluation and monitoring of sustainability through indicators are considered effective ways to condense complex system dynamics into a manageable amount of information used to evaluate progress against the declared results [4]. Solid metrics and indicators are a practical sustainability measurement tool to assess progress and ensure achievement [5,6]. In 2017, the UN adopted a global framework of 247 indicators to assess progress in meeting the SDGs [7]. In this framework, member countries are asked to compile complementary national and regional indicators. The implementation and success of this universal agenda require all levels of administration, the academic environment, civil society and the private sector [8,9]. In addition, each country is left free to establish its implementation strategies. Governments must be able to tailor targets and their indicators to fit national contexts and priorities. In this way, countries establish benchmarks against which they can evaluate their performance and measure their progress. In addition, these metrics should serve as a management tool for all the parties involved to carry out the necessary transformations to achieve the targets of the SDGs in 2030. For example, one of the first steps that countries must take is to establish voluntary monitoring evaluations of the progress made in each of the 17 SDGs [9]. The UN High-Level Political Forum plays a central role in monitoring progress globally [10].
It is estimated that more than two-thirds of the world's population will reside in urban areas by 2050, adding another 2.5 billion people to the current 4 billion urban residents [11]. Meeting the basic needs of growing urban populations while ensuring the integrity of their ecosystems, addressing climate change, and promoting economic productivity and social inclusion are the main challenges facing the cities of our time. They are considered places of critical importance for understanding and solving sustainability problems [12,13]. They are the primary consumers of energy [14], the largest generators of waste [14], and they produce the majority of global greenhouse gas emissions [15]. Urban planning decisions will play a critical role in achieving the SDGs [16]. In this sense, UN-Habitat has also developed an action framework of indicators specifically to assess the sustainability of cities [17]. This document examines the extent to which UN indicators will help cities assess their efforts to achieve results towards their sustainability. However, it does not provide either a policy roadmap for action or a data or policy monitoring system [18]. The recognition of the role of municipalities and local governments in facilitating sustainable development has led to a specific goal dedicated to cities and communities [19]. However, there are urban issues among the other 16 goals [20][21][22], and many cities already have their own sustainability goals. In particular, SDG 11 relates to sustainable and resilient cities and human settlements; given rapid urbanization, cities are generally recognized as key actors to implement the entire SDG agenda [17,19] successfully. It has thus become increasingly important to monitor their performance [23]. As urban systems are complex, a common way to simplify monitoring is by using indicators and their metrics [24]. Although they have been using indicators for a long time, it is only in the last few decades that an attempt has been made to compile sustainability indicators into sets that reflect the many different aspects required to assess their performance [25]. In this sense, the SDG indicators offer the possibility of a more balanced and integrated approach for monitoring urban sustainability [26,27]. To help countries in the annual balance of SDG progress, the Sustainable Development Solutions Network (SDSN) has been conducting yearly evaluations since 2016 through indices and dashboards of the Sustainable Development Goals [28]. Its evaluation report for countries, the SDG Index, presents a composite index that analyzes the 17 SDGs of the 2030 Agenda with 85 indicators. In its latest edition in 2020, it has included the analysis of 193 countries [29]. Likewise, the SDSN promotes evaluation reports and dashboards through its national and regional chapters evaluating progress in achieving the 2030 Agenda by measuring a series of indicators. It is an unofficial monitoring tool whose objective is to complement official efforts to monitor the 2030 Agenda implementation. Table 1 shows the evaluation reports promoted by the SDSN and their distribution of indicators at the national, regional and local levels. SDSN reports distribute the aggregated indicators across the 17 SDGs to help countries and cities assess their degree of achievement and level of progress directly with the 2030 global political agenda [3]. 
Analyzing the distribution of indicators in these international and national reports reveals a significant imbalance in the number of indicators per goal. On the one hand, SDG 3 and SDG 16 have the highest number of indicators, followed by SDG 4 and SDG 9. On the other hand, SDG 10 and SDG 17 have the fewest indicators. In addition, significant differences can be observed between the distribution of indicators by SDG for country-level reports versus city-level reports. There are fewer indicators for each SDG compared to country-level reports due to the difficulties in finding data [13,50]. In the particular case of Spain, the Red Española para el Desarrollo Sostenible (REDS-SDSN) presented the second edition of the Spanish Cities Report (SCR) in 2020 [49], in which more than 100 cities are evaluated and which is the object of analysis of this article. The study includes all the Spanish cities with more than 80,000 inhabitants and the regional capitals, covering over 50% of the total population in Spain. For this purpose, all the indicators selected were identified considering the national context and data availability from official statistical sources. The SCR maintains alignment with the global SDG framework similar to the SDSN's methodology for the SDG index. In this way, as with the countries, it is intended to help local Spanish entities to diagnose and evaluate their progress in each of the 17 SDGs. It presents a selection of indicators aggregated across the 17 goals to link them with the 2030 Agenda. It has followed a rigorous selection and validation process run by representatives of the academic environment. It has also had the support of local entities and the Spanish Federation of Municipalities and Provinces. This report has become the benchmark for monitoring the progress of the objectives in the cities in the Spanish context. In its latest edition of 2020, 106 indicators have been selected starting from the previous edition's indicator set and following the SDSN methodology [51]. The indicators have been selected based on relevance, statistical adequacy, timeliness, quality and percentage of coverage. In addition, there has been a validation by experts for each SDG and a final public consultation to confirm or rule out their suitability. These are essential aspects that contributed to increasing the transparency of the SCR. The researchers of this article are also coauthors of the SCR. To continue with the research process, they have considered it necessary to evaluate and analyze the results obtained in greater depth. The main objective is to assess the robustness of the results and methodology of the Sustainable Development Index for the Spanish Cities. This could identify improvements for future editions by studying the impact of different alternatives in the calculation methodology and in the selection of indicators. For this, the methodology used by the Joint Research Center (JRC) of the European Commission's Competence Center for the audit of the SDG index in 2019 [52] has been taken as a reference. In addition, based on the results obtained, the following complementary objectives are pursued: (i) consolidate its system of city indicators, (ii) analyze the results of different alternatives and (iii) validate their reliability. This article does not intend to question the conceptual relevance of the indicator system.
The aim is to analytically and objectively identify its main features and the improvement options that could be implemented based on the results obtained in its database. Materials and Methods In 2019, the SDSN requested an audit of the 2019 SDG Index from the Joint Research Center (JRC) of the European Commission's Competence Center on Composite Indicators and Scoreboards (COIN) [52]. This statistical audit focuses on two main issues: the statistical coherence of the structure of indicators and the impact of crucial modelling assumptions on the SDG Index ranking. This analysis was carried out in three stages: (i) Descriptive statistics of the data and data analysis to detect missing values and potential outliers; (ii) Multilevel analysis testing the statistical coherence of the structure and correlations between indicators and each SDG Index; (iii) Analysis of the index robustness and testing of the impact of crucial modelling assumptions on the SDG Index ranking. The JRC report also supplemented the country rankings of the SDG index with confidence intervals to better understand their robustness to the calculation method. This JRC analysis has been taken by the authors as a methodological reference to achieve the objectives of this research, but applied to the SCR. This is possible because both reports use the SDSN methodology [51]. However, the Monte Carlo experiment was not performed to investigate the impact of varying the assumptions. Instead, to evaluate the effect of the weighting assumption, the survey published in the SCR report has been used. Thus, the analysis presented in this article follows these steps: 1. Description and analysis of the indicators The objective is to identify potentially problematic indicators that could bias the overall index results. The authors used the same JRC rule to analyze the distributions [52]. An indicator should be considered for mathematical treatment if it has an absolute skewness greater than 2.0 and a kurtosis greater than 3.5. In those cases, further analysis of their data distribution would be developed [53]. The formula for skewness is the Fisher-Pearson coefficient of skewness: $g_1 = \frac{\tfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^3}{s^3}$, where $\bar{x}$ and $s$ are the mean and standard deviation of the $n$ city values of an indicator. The authors use the following (non-excess) definition of kurtosis: $k = \frac{\tfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^4}{s^4}$, so that a normal distribution has $k = 3$. 2. Analysis of the sensitivity of the rankings The objective is to test how changes in the calculation methodology affect the position of cities in the ranking. The selection of indicators and their targets can be considered two central points for defining the SDGs' performance metric [54]. This study tests the sensitivity of the rankings by comparing an Initial set (I_s) versus an Alternative set (A_s) of the SCR indicators. The assumptions raised in this study are the following: i. Aggregation: arithmetic mean and geometric mean The geometric mean is usually used to aggregate heterogeneous variables and when the focus of the analysis is on percentage changes rather than absolute changes. For example, this method is used in the Human Development Index [55]. Its aggregation method across its three dimensions was changed from the arithmetic mean to the geometric mean in 2010. Compared with the geometric mean, the arithmetic average has the advantage of the simplicity of interpretation: an index score between 0 and 100 reflects the average initial placement of the country between worst and best on the average of the 17 goals [51]. The study proposes the calculation of the geometric mean for the Alternative set (A_s). ii. Weighting of the SDGs The method for aggregating and weighting different variables into a single index can profoundly impact the overall ranking [56].
In the Initial set (I_s), each indicator was weighted equally. As a result, the relative weight of each indicator in a goal was inversely proportional to the number of indicators considered under that goal [51]. Different weightings of individual SDGs can have important implications for a city's performance and relative ranking in the composite index [57]. For this reason, the authors propose to use, for the Alternative set (A_s), the expert weighting approach at the goal level [51], taking advantage of the survey on municipal competencies carried out by sustainability experts and members of local Spanish entities included in the report. iii. Reduction of the indicator set To evaluate the statistical consistency of SCR indicators, a cross-sectional analysis is employed. The correspondence between the SCR index and real-world phenomena needs to be analyzed because correlations do not necessarily represent the real influence of the individual indicators on the phenomenon being measured [58]. The correlation aims to quantify the strength of the link joining two different indicators or goals [59]. Non-parametric correlation methods are commonly applied to those pairs of variables whose distribution is unknown a priori. This is the case of Spearman's analysis [60]. In contrast to Pearson's, the most commonly used correlation coefficient, Spearman's does not assume normally distributed and same-scaled variables [61]. This is why Spearman's rank correlation has been used in several disciplines and previous studies [62,63]. The authors propose using the cross-correlation analysis to preliminarily address the extent to which the data supports the conceptual framework [51]. The 1% significance level is used to determine whether the correlation between two variables is statistically significant [58]. To optimize and reduce the number of indicators, the Alternative set (A_s) will not include the correlated ones. 3. Analysis of the impact of assumptions: iv. Principal component analysis Principal Component Analysis (PCA) is commonly used to assign weights to individual variables that are correlated and measured by a common underlying factor. In addition, PCA reduces the effects of multicollinearity by using a subset of the principal components in the model [51]. To analyze the impact of the previous assumptions using the Alternative set, the authors propose to use principal component analysis (PCA) to summarize each goal and interactions in the SCR. Applying PCA allows mapping trends, synergies and trade-offs at the level of goals for all SDGs while using all available information on each indicator [64]. v. Analysis of the variation of positions This analysis aims to evaluate the shifts in the positions between the Initial set (I_s) and the Alternative set (A_s). Shifts of fewer than three positions cannot be considered significant, whereas differences of 10 places can indicate a meaningful difference [52]. The variation in the rankings, considering the previous assumptions, allows us to identify which cities show a particular sensitivity to changes. Description and Analysis of the Indicators The SCR identifies a total of 106 indicators based on the ones from the 2018 edition. Of these indicators, 84% (89 out of 106) have data at the municipal level, 47% (49 out of 106) are new to this edition or present improvements in their level of detail, and 60% (64 out of 106) have just been updated.
The quality and reliability of the data stand out because they all come from official National and European statistical repositories or research centers and non-governmental organizations of recognized prestige. Data provided individually by the entities evaluated have not been accepted in the SCR to guarantee comparability and reliability. A full list of indicators can be found in Table A1 in Appendix A. The distribution of indicators for each SDG is balanced compared to other similar reports, as shown in Table 1. SDG 3 and SDG 11, with 13 and 11 indicators, respectively, stand out for their significantly higher number of indicators, while SDG 7, SDG 13, SDG 15 and SDG 17 have a minimum of four. In general, data coverage for the indicators included in the index is suitable for all SDGs and all cities observed. In the particular case of the cities of the Basque Country region, some of the economic data is not available. SDG 14 is a particular case since the cities without coastal areas have no indicators for this goal. Regarding the dataset provided, no specific issues have been found. Data did not require imputation for the index calculation because the selection of indicators already excluded those not reaching at least 80% coverage. Complete data can be found in Appendix B. The SDSN methodology identifies the sustainability thresholds for each indicator based on the explicit/implicit goals of the SDGs, scientific goals or the average performance of the best actors and the specific criterion of the goal expert. At the same time, to eliminate the effect of extreme values and facilitate the comparability of results, the report authors have capped the data at the lower 2.5th percentile, which is used as the minimum value for normalization. The details of the specific maximum/minimum values used and the chosen thresholds are described in Annex I of the SCR [49]. Indicator values are normalized using the minimum/maximum method from the dataset of all cities for any given indicator. The normalized value is then transformed into a value ranging from 0 to 100, which is directly comparable with the rest of the indicators. In other words, the city with the highest value of the raw data obtains a score of 100, while the lowest value will have a score of 0. This normalization operation guarantees that all the variables are ascending and, therefore, the highest values indicate positive performance in achieving each goal. It also eliminates outliers at both ends of the distribution because those cities that exceed the average of the best or worst values are assigned the same score, as recommended by the OECD manual for constructing composite indicators [58]. This improves their understanding and facilitates the communication of results. The methodology used by JRC [52] for the SDG index analyzes skewness and kurtosis to assess the data distribution's shape and identify potentially problematic indicators. The rule applied by the JRC is that an indicator is considered for treatment if it has an absolute skewness greater than 2.0 and a kurtosis greater than 3.5. Table A2 in Appendix B shows potentially problematic indicators: 6 indicators with abnormal distributions (2b, 2c, 9d, 11h, 11i and 17a) and 11 indicators with negative skewness (7d, 8h, 10f, 11c, 11d, 12d, 13a, 14b, 16a, 16i and 17d).
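To make the screening rule concrete, the following sketch computes the Fisher-Pearson skewness and non-excess kurtosis for each indicator and flags those exceeding the stated thresholds. The column names and data below are placeholders, not the actual SCR dataset.

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen_indicators(df, skew_limit=2.0, kurt_limit=3.5):
    """Flag indicators whose distribution shape may bias the index.
    Skewness: Fisher-Pearson coefficient; kurtosis: non-excess (normal = 3),
    matching the thresholds quoted from the JRC audit rule."""
    rows = []
    for col in df.columns:
        x = df[col].dropna().to_numpy()
        skew = stats.skew(x)                      # Fisher-Pearson skewness
        kurt = stats.kurtosis(x, fisher=False)    # raw (non-excess) kurtosis
        rows.append({"indicator": col,
                     "skewness": skew,
                     "kurtosis": kurt,
                     "flagged": abs(skew) > skew_limit and kurt > kurt_limit})
    return pd.DataFrame(rows)

# Hypothetical example with random data standing in for city-level indicators.
rng = np.random.default_rng(0)
demo = pd.DataFrame({"ind_2b": rng.lognormal(size=103),   # skewed stand-in
                     "ind_7d": rng.normal(size=103)})     # roughly symmetric
print(screen_indicators(demo))
```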
Like the JRC, the authors applied different techniques to improve the distributions, such as logarithmic transformations, and their scatter plots have been analyzed in detail, but no significant improvements were observed. Finally, it has been decided to keep them in the calculation set due to their alignment with the official UN indicators [65] and to guarantee a minimum number of indicators per goal. However, 16.04% (17 out of 106) of the indicators come from data at the provincial or regional level (see Appendix B). These indicators could alter the results because they do not reflect the particular reality of each city but rather a regional average. The cities analyzed are very different from each other: some are in single-province regions or areas with high depopulation rates, while others belong to large, highly populated metropolitan areas. Only 9 of the 17 SDGs present indicators with regional data, and there are never more than two indicators per goal. Therefore, their impact on the SCR index is relevant but limited. Aggregation: Arithmetic vs. Geometric Average In the SCR, according to the SDSN methodology, the arithmetic average has been used as a two-stage aggregation method, at the indicator level for each goal and at the goal level for the general index. An alternative aggregation method is proposed based on the geometric instead of the arithmetic mean to limit compensation between very different values in various areas of sustainable development [51]. Table 2 shows the position shifts in the SCR index obtained by changing from arithmetic to geometric average across SDG scores. The two methods yield results that are almost the same and thus a nearly identical ranking. The volatility between ranks is minimal. These differences are due to the geometric average, which, unlike the arithmetic mean, penalizes significantly poor scores on specific goals. The maximum shift in positions is 10 and only occurs in two cities in the Madrid metropolitan area (1.94% of the total). Most cities, 74 out of 103 (71.84% of the total), change from zero to two positions. The cities that are most affected by the change in the aggregation method are located in the metropolitan areas of Madrid, Catalonia, Basque Country and Andalusia. Weighting of the SDGs The SDSN reports are calculated without using any type of weighting because all targets and SDGs are equally crucial for the 2030 Agenda by definition. Only the number of indicators per SDG skews their representativeness. However, assigning the same weight to the indicators and targets does not necessarily guarantee an equal contribution of the indicators or targets to the index results [58,66]. For example, each of the 13 indicators from SDG 3 and the 11 indicators from SDG 11 has less weight in the overall aggregation than each of the 4 indicators from SDG 7, SDG 13, SDG 15 and SDG 17 (see Table 1). In conclusion, the greater the number of indicators in an SDG, the lower the relative weight of each of them compared with the indicators of SDGs that have fewer. The SCR [49] publishes an assessment of municipal competencies carried out by sustainability experts and members of local Spanish entities. The authors propose using these results to create alternative weighting coefficients by normalizing the assessment values; these vary from 1.5 for the best result to 0.5 for the worst (Table A3 in Appendix C). Table 3 shows the shifts in the position of the cities in the SCR index using this alternative weighting method.
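The aggregation and weighting alternatives compared in these two subsections can be prototyped in a few lines. The sketch below, under the assumption of a pandas DataFrame of city-by-indicator values where higher is better, applies the capped min-max normalization described earlier, aggregates in two stages with either the arithmetic or geometric mean, supports optional goal weights such as the 0.5 to 1.5 expert coefficients, and compares the resulting rankings; all names, data and the indicator-to-goal mapping are invented for illustration, not the actual SCR pipeline.

```python
import numpy as np
import pandas as pd

def normalize(raw, lower=None, upper=None):
    """Capped min-max normalization of one indicator to a 0-100 scale
    (higher = better). lower/upper default to the 2.5th percentile and the
    maximum of the city values, loosely following the report's description."""
    x = raw.astype(float)
    lo = x.quantile(0.025) if lower is None else lower
    hi = x.max() if upper is None else upper
    return (x.clip(lo, hi) - lo) / (hi - lo) * 100

def aggregate(scores, goal_of, method="arithmetic", goal_weights=None):
    """Two-stage aggregation of 0-100 indicator scores into goal scores and an
    overall index; returns the goal scores and the resulting city ranking."""
    mean = (lambda d: d.mean(axis=1)) if method == "arithmetic" \
        else (lambda d: np.exp(np.log(d.clip(lower=1e-6)).mean(axis=1)))
    goals = pd.DataFrame({g: mean(scores[[c for c in scores if goal_of[c] == g]])
                          for g in sorted(set(goal_of.values()))})
    if goal_weights:                      # e.g. expert coefficients 0.5 .. 1.5
        w = pd.Series(goal_weights).reindex(goals.columns).fillna(1.0)
        index = (goals * w).sum(axis=1) / w.sum()
    else:
        index = mean(goals)
    return goals, index.rank(ascending=False)

# Hypothetical toy data standing in for the SCR dataset (names are invented).
raw = pd.DataFrame({"i1": [5.0, 9.0, 2.0, 7.0], "i2": [40, 55, 80, 20],
                    "i3": [1.2, 3.4, 2.2, 2.9]}, index=["A", "B", "C", "D"])
scores = raw.apply(normalize)
goal_of = {"i1": "SDG1", "i2": "SDG1", "i3": "SDG11"}
_, rank_equal = aggregate(scores, goal_of)
_, rank_expert = aggregate(scores, goal_of, goal_weights={"SDG1": 1.5, "SDG11": 0.5})
print((rank_equal - rank_expert).abs())   # position shifts between weighting schemes
```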
Of the total cities, 37.86% only change a maximum of two positions. Most cities, 59 out of 103 (57.28% of the total), change a maximum of four positions. The cities most affected by the alternative weighting method are sparsely populated southern cities of the peninsula that do not belong to any metropolitan area. However, unlike the application of the alternative aggregation method, this method significantly affects all cities and alters the results of the SCR index. Reduction of the Set of Indicators The methodology proposed by the JRC [52] and the SDSN methodological paper [51] performs a correlation analysis to evaluate the statistical coherence of the SCR, aiming to reduce the set of indicators initially proposed. Determining the relationship and degree of dependency between the quantitative variables in the report evaluates the extent to which the data supports the index's conceptual framework. The analysis of the correlations (both positive and negative) between indicators makes it possible to identify redundancies, avoid an overvaluation of the same event, and, finally, reduce the model's complexity. The authors have analyzed the correlations of the indicators (with their SDG and with the general index) and the correlations of the SDGs (with each other and with the general index). Table A4 in Appendix C shows the correlations of the indicators with their respective SDG and with the general index. Ideally, each indicator should correlate positively with its SDG and with the overall index. A significance level of 1% has been taken to determine if the correlation between two variables is statistically significant. Indicators 4e and 11i (in red) show negative correlations with their SDG, but their coefficients are very low and not significant. These results are similar to, or even better than, those obtained by the JRC analysis for the SDG Index in 2019. Only two indicators (1e-poverty line and 13d-covenant of mayors) present a Pearson correlation coefficient higher than 0.92. It makes sense because they are of particular relevance to the achievement of the SDG. Furthermore, 21 of the 106 indicators present correlation coefficients higher than 0.70 and an acceptable significance level. Values greater than 0.70 are desirable as they imply that the index captures at least 50% (≈0.70 × 0.70) of the variation in the underlying goals and vice versa [52]. In total, eight of the SDGs present two or more indicators with correlation values greater than 0.70, only three SDGs present a single indicator, and six of the SDGs do not show any indicator with a correlation value greater than 0.70. This finding suggests that the selection of the indicators has been adequate because there is a low redundancy in the results [66]. Regarding the correlation with the general index, on the one hand, 20 negative correlations have been identified. They all have a very low correlation coefficient (<0.5), and only indicators 8g and 10d present acceptable levels of significance (<0.01). On the other hand, 10 indicators are identified with a positive correlation with a Pearson coefficient (>0.5) and an acceptable level of significance (<0.01). Therefore, SDG 1, SDG 4 and SDG 17 present a higher number of indicators with a better positive correlation, which corresponds to the highest scores in the city index. On the contrary, SDG 3 and SDG 13 present indicators with negative correlations and the worst scores of the cities that top the index.
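A sketch of the redundancy screening just described (Pearson correlation above 0.70 at the 1% significance level), assuming normalized indicator scores held in a pandas DataFrame; column names are placeholders, and the final removal decision, which in the report also involved conceptual judgement, lies outside this snippet.

```python
from scipy import stats

def redundancy_pairs(scores, r_limit=0.70, alpha=0.01):
    """List indicator pairs whose Pearson correlation exceeds r_limit with a
    p-value below alpha (the 1% significance level used in the text); such
    pairs are candidates for removal from a reduced indicator set."""
    pairs = []
    cols = list(scores.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            sub = scores[[a, b]].dropna()
            r, p = stats.pearsonr(sub[a], sub[b])
            if abs(r) > r_limit and p < alpha:
                pairs.append((a, b, round(r, 3), p))
    return pairs

# Usage on a hypothetical DataFrame `scores` of normalized city scores:
# for a, b, r, p in redundancy_pairs(scores):
#     print(f"{a} ~ {b}: r = {r}, p = {p:.1e}")
```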
Table A5 in Appendix C presents the Pearson coefficients of the 17 SDGs regarding the correlations at the SDG level. All of them correlate positively with the overall index. In addition, SDG 1, SDG 7, SDG 16 and SDG 17 show high positive and significant correlations with the index. Cities well positioned on these SDGs rank equally well in the SCR index. Furthermore, most of them (12 out of 17) present an excellent significance (<0.01). On the contrary, SDG 3 and SDG 14, with very low correlation coefficients, are identified as the worst-ranked cities in the SCR overall index. Regarding the correlations between the Goals, only three of them have been identified with a high Pearson correlation coefficient (>0.50) and an acceptable level of significance (<0.01): (SDG 1 vs. SDG 4, SDG 7 vs. SDG 16, SDG 12 vs. SDG 17). Moreover, SDG 3 presents several negative correlations with other SDGs, but none have an acceptable significance level (<0.01). SDG 14 shows negative correlations with SDG 1 and SDG 4 with a low coefficient but a high significance level. Similarly, SDG 17 shows negative correlations with SDG 2 and SDG 14 with a low correlation coefficient but a high significance level. Pearson correlation coefficients greater than 0.70 and significant values under 0.01 have been identified regarding the correlations between the indicators themselves. It shows a very high significant correlation which may suggest redundancy. The main values of the Pearson correlation analysis are summarized in Table A6 in Appendix C. Only a negative correlation has been identified between indicators 15a and 11j. The rest of the significant correlations are positive. To obtain a reduced set of indicators, those highly correlated with each other have been further analyzed to remove them [52]. Finally, the indicators removed for a reduced set are: 1d, 1e, 3f, 4d, 5a, 6e, 8e, 10a, 10f, 11d and 16h. Analysis of the Impact of Assumptions Based on the precedent results, variations in the methodology for calculating the SCR index can be proposed to evaluate their impact within a range of improvement alternatives. The objective is to quantify the uncertainty based on the difference in the position of the cities considered in the SCR index in each result. Table 4 shows three particular assumptions that have been identified in this uncertainty analysis. They are alternatives for the construction of the SCR index and can be easily investigated. According to the SDSN methodology, the arithmetic average has been used as an aggregation method in two stages in the SCR report: at the indicator level for each goal and the goal level for the overall index. In Section 3.2.1, the change in the aggregation method by geometric instead of arithmetic mean has been analyzed. It concludes that it does not significantly impact, so it has been ruled out as a suitable alternative for this study. Consequently, the improvement alternatives to be analyzed are using a weighting method for the SDGs and using a reduction method for the indicators. Their impact analysis is carried out by comparing the initial set (I s ) of the SCR and the new alternative set (A s ) resulting from applying these alternatives. The evaluation of their results is carried out with two approaches: principal components analysis and an analysis of variation of positions in the index of cities. Principal Component Analysis Principal Component Analysis (PCA) aims to assess the extent to which statistical approaches confirm the conceptual framework [67]. 
It explores the correlation of all indicators simultaneously, highlighting, if present, common trends that describe a shared concept among the indicators [68,69]. The objective is to transform a set of original variables into a new set of variables that are linear combinations of the original ones, called Principal Components. These components or factors are uncorrelated with each other and successively explain as much of the total variance as possible. Ideally, one principal component should explain at least 70-80% of the total variance in order to claim a single latent phenomenon behind the data. As shown in Table A7 in Appendix C, this is not the case for the SCR Index: the results show that six principal components are needed to explain almost 70% of the variance. Eighty-two indicators are available for each city in the sample, together with seventeen intermediate indices (one per SDG) and the overall index. Based on the 17 goal-level variables, a reduction of dimensions is carried out through a PCA. Table A7 shows the PCA results for I_s and A_s. The eigenvalues represent the amount of variance explained by each factor; therefore, the higher the eigenvalue, the more variance the factor explains. The Kaiser-Guttman rule [70] has been used in this study due to its strict criterion: it suggests keeping those factors with eigenvalues greater than 1.0. For I_s, this retains a total of six factors, representing 66.94% of the explained variance. In A_s, the number of factors becomes seven, with an explained variance of 71.08%. In addition, the total explained variance increases and its distribution across factors becomes more uniform in A_s than in I_s. Consequently, the modification of indicators from one set to the other has increased the sample's representativeness, at least in terms of the latent relationships within the dataset. In addition, Table A8 in Appendix C shows the rotated component matrix for both sets. The two sets differ considerably in the composition of their factors, and none of the components exhibits a clear or logical arrangement with respect to the subject discussed. Figure A1 in Appendix C shows a heterogeneous arrangement with respect to the issue analyzed. For factor 1, a group formed by SDG 12, SDG 16, SDG 17, SDG 4 and SDG 7 stands opposed to a group consisting of SDG 14 and SDG 10 (as suggested by negative correlations). For factor 2, SDG 1 is diametrically opposed to SDG 14 and SDG 15. Consequently, it is necessary to examine the composition of the intermediate indices and analyze them individually, as has been done for the index as a whole. Table A9 in Appendix C shows the principal component analysis of the indicators that comprise each SDG. Factors indicates the number of factors generated by the factor analysis; Indicators indicates the number of indicators collected in the database for each SDG; % Variance is the total percentage of variance explained by these factors. The last column shows the difference in explained variance between the alternatives. Accordingly, in I_s, 14 of the 17 SDGs have a total explained variance greater than 60%, exhibiting high representativeness of the subject to be observed. In A_s, the number of SDGs with a total explained variance above 60% falls to five. The elimination of indicators in several SDGs therefore has a severely negative impact.
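A minimal sketch of the dimensionality check described above, assuming a hypothetical cities-by-SDG-scores matrix and using scikit-learn (an assumption made here purely for illustration, not the authors' tooling): it standardizes the goal scores, runs a PCA, applies the Kaiser-Guttman rule (keep components with eigenvalue greater than 1), and reports the cumulative explained variance of the retained factors.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: 103 cities x 17 SDG scores on a 0-100 scale.
rng = np.random.default_rng(2)
sdg_scores = rng.uniform(20, 95, size=(103, 17))

X = StandardScaler().fit_transform(sdg_scores)
pca = PCA().fit(X)

eigenvalues = pca.explained_variance_
keep = eigenvalues > 1.0                      # Kaiser-Guttman rule
n_factors = int(keep.sum())
cum_var = pca.explained_variance_ratio_[:n_factors].sum()

print(f"factors retained: {n_factors}, cumulative explained variance: {cum_var:.2%}")

Repeating the same computation on the initial set I_s and on the alternative set A_s gives the comparison of retained factors and explained variance discussed in the text.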
However, the cases that remain above the established criterion, i.e., SDGs 2, 3, 4, 8 and 11 that maintain a total explained variance above 60%, manage to describe and monitor the central theme of each of the goals. Therefore, the analytical loss of reducing an average of two indicators for each group of indicators per SDG does not compensate or enrich the analysis in some of them. As can be observed, up to 34% of the analytical information losses for some SDGs do not interact well with reducing indicators. Looking individually at the rotated component matrices of each of the SDGs, it is possible to observe groupings that explain different aspects within each goal. SDGs 3, 4, 8 and 11 are worthy of analysis because they obtain high total explained variances, and they present exciting relationships within their matrix: SDG 3 is a goal with a large number of indicators and factors. Table A10 in Appendix C shows the rotated component matrix, filtering out those relationships equal to or greater than 0.3 in absolute value. Values represented only in one factor, being absolute relationships with their associated factor, are highlighted in bold. It can be observed that these relationships coincide with the most robust relationships in the matrix (except one existing in indicator n_sdg03_alcohol) and have a positive relationship with the measurement of SDG 3. Figure A2 in Appendix C represents a rotated space component graph for SDG 4. It visually shows the type of relationship that the indicators that make up the SDG maintain with the calculated factors. It can be seen that there are two types of indicators or aspects within the SDG itself. Thus, factor 1, which contains most of the variance, is highly related to education expenditure per capita, an explicit effort that transversally influences the SDG. On the other hand, those outcome indicators that would give us a picture of the situation in the territory are grouped in factor 2, suggesting that they are different dimensions to be considered but not contrary or exclusive. The analysis of SDG 8 (Table A11 in Appendix C) shows a first factor that explains 24.41% of the variance and represents a positive dimension for the SDG. Their indicators that have high relationships connect with the economic progress and productivity of the territories. In contrast, factor 2, which explains 23.06% of the total variance, is the one that represents the negative weighting of the SDG. These are the variables that have an inverse influence on the goal's progress, all referring to the unemployment data. Finally, Table A12 in Appendix C presents the analysis of SDG 11. It does not exhibit explicit specializations or differentiated aspects within each factor and groups the variables that measure air quality in the same factor, which provides logic to the composition of this factor but does not present a valuable interpretation within the analysis. Table 5 summarizes the changes in the position of the cities in the SCR index between I s and A s . Appendix C includes the complete list of cities and the index score for each variation of the calculation. Table A14 shows the bottom 10 positions of cities on each calculation alternative. The full list of results is in Table A15 in Appendix C. Final Conclusions and Discussions The SCR, like the SDG Index, proposes a one-of-a-kind composite measure to track the progress of the SDGs at the city level. A deep understanding of their underlying components and the relationships between them must accompany the results. 
The effort of cities, strategic territories for their contribution to the national socioeconomic and environmental performance [25], is essential to achieve compliance with the SDGs since the municipal level is closest to the daily lives of people and companies. Therefore, the adaptation of its policies to the 2030 Agenda and the measurement of its progress is urgent and necessary for the country's progress towards meeting the SDGs. The SCR ranking is robust enough among the alternatives evaluated based on the previous evaluation of the results and the methodology. The sensitivity analyses performed confirm that the uncertainty is manageable. For this reason, it can be concluded that the system of city indicators is consolidated. However, according to [71], many indicator initiatives are driven by the availability of relevant and reliable data [72][73][74]. The limitation in the data availability conditions the use of the appropriate indicators [75]. In the case of SCR, the sets of indicators are biased and incomplete to measure sustainability. This situation jeopardizes the reliability of the results. Therefore, developing further scientific research and expanding the data collection at the city level is necessary. It is also hopeful that, as the availability of data increases to measure some of the goals, implicit weighting would be reduced across goals. Regarding selecting indicators, two aspects should be improved to reduce the complexity of the evaluation system. On the one hand, redundancy between collinear indicators should be avoided because it is equivalent to double-counting the same urban phenomenon. This target seems to have been accomplished for the SCR. However, the indicators selected for the SCR should be positively correlated with each of the objectives they represent. The results in this aspect are slightly better than those obtained by the JRC analysis for the global SDG Index. These suggest that there is little redundancy in the indicators' data, and their selection has been correct because they measure different aspects of the city. Therefore, its representativeness is adequate and, from a statistical point of view, with low levels of uncertainty. On the other hand, whenever possible, regional data should be omitted [50]. By repeating the data of cities in the same province, the singularities of each city are neglected. There are single regions with high depopulation in the cities considered, and others belong to large, highly populated metropolitan areas. In this way, very different city realities are mixed due to the chosen population and representativeness bias. In addition, the results of very different realities are simplified, and their comparability is difficult. It would be advisable to carry out specific analyses by regions or similar urban areas to deepen and broaden the results. Furthermore, it would be desirable to use compliance thresholds based on other criteria for selecting and grouping cities to complement the SCR. For instance, in addition to the number of inhabitants and representativeness, economic biases or population density could be used to make groupings between equals and improve comparability [76,77]. Regarding the calculation methodology, it can be concluded that the use of an alternative aggregation method from the geometric mean instead of the arithmetic does not significantly affect the index positions [51]. However, using an alternative weighting method has been shown to affect index positions significantly. 
The first and last positions of the index are not affected by this change in the weighting method, but the intermediate positions are. There is no specific pattern identifying which cities are more sensitive to this change; further investigation considering other variables would be necessary. The SCR index is based on the 2030 Agenda for Sustainable Development adopted by all UN member states and rigorously follows the same structure of 17 goals. The indicator system is well balanced and covers the essential areas of the SDGs. However, as it is a framework designed at the country level, its application to cities requires an adaptation process. Therefore, the indicators must be adjusted to the competence frameworks distributed among the different administrative levels and to eminently urban phenomena. According to [78], this recent research contributes to a strong grounding for the successful implementation of the SDGs in Spain at both the national and city levels. Corroborating previous research, our findings show that no single SDG can by itself make a country progress towards compliance with the 2030 Agenda, but working with the SDGs as a whole can create a virtuous cycle of SDG progress. Once the datasets and indicators are consolidated and improved, it would be advisable to investigate the synergies and trade-offs between the results at the country level and the results of their main cities. Acknowledgments: The authors would like to give special thanks to the Spanish SDSN Network (REDS) for organizational and technical support, and to Ana Justel (Autonomous University of Madrid) and Alberto Quintanilla (Smart & City Solutions) for programming assistance. We are grateful for the support provided by the project "Hacia la consolidación de ciudades inclusivas, un desafío para Madrid" (H2019/HUM-5744), and to the anonymous reviewers for their feedback to improve this paper. Conflicts of Interest: The authors declare no conflict of interest. Figure A1. Factor map of the 17 goals of the SCR Index for A_s. Figure A2. Rotated space component plot for SDG 4.
9,697.8
2021-06-07T00:00:00.000
[ "Environmental Science", "Economics" ]
%HPGLIMMIX: A High-Performance SAS Macro for GLMM Estimation Generalized linear mixed models (GLMMs) comprise a class of widely used statistical tools for data analysis with fixed and random effects when the response variable has a conditional distribution in the exponential family. GLMM analysis also has a close relationship with actuarial credibility theory. While readily available programs such as the GLIMMIX procedure in SAS and the lme4 package in R are powerful tools for using this class of models, these programs are not able to handle models with thousands of levels of fixed and random effects. By using sparse-matrix and other high-performance techniques, procedures such as HPMIXED in SAS can easily fit models with thousands of factor levels, but only for normally distributed response variables. In this paper, we present the %HPGLIMMIX SAS macro that fits GLMMs with large, sparsely populated design matrices using the doubly-iterative linearization (pseudo-likelihood) method, in which the sparse-matrix-based HPMIXED is used for the inner iterations with the pseudo-variable constructed from the inverse-link function and the chosen model. Although the macro does not have the full functionality of the GLIMMIX procedure, time and memory savings can be large with the new macro. In applications in which design matrices contain many zeros and there are hundreds or thousands of factor levels, models can be fitted without exhausting computer memory, and a 90% or better reduction in running time can be observed. Examples with a Poisson, binomial, and gamma conditional distribution are presented to demonstrate the usage and efficiency of this macro. Introduction Mixed models comprise a class of important statistical tools to estimate variance and covariance parameters, account for repeated measurements and other features of experimental designs, and adjust for over-dispersed data (Stroup 2012). Mixed models extend classic fixed-effect models by including random effects and best linear unbiased predictors for subjects. The random effect represents a random sample from a hypothetical distribution, and serves as a mechanism to link observations with the same level of the random effect via a covariance matrix, so that information from similar observations can be utilized in estimation. Mixed modeling also has a close relationship with actuarial credibility theory. The generalized linear mixed model (GLMM) has attracted considerable attention during the past two decades, because it extends the linear mixed model to a general framework that accommodates a rich set of distributions from the exponential family, so that non-normally distributed data such as counts and binary observations can be modeled appropriately. Readily available commercial or free software packages, such as the GLIMMIX procedure from SAS Institute Inc. (2011a) and the lme4 package (Bates, Maechler, Bolker, and Walker 2014) in R (R Core Team 2014), make GLMMs increasingly popular with the research community. GLMMs have been widely applied in areas such as biology (Vergara, Aguirre I, and Fernandez-Cruz 2007), ecology (Milsom, Langton, Parkin, Peel, Bishop, Hart, and Moore 2000), small area estimation (Maiti 2001), genetic research (Kerr, Martin, and Churchill 2000), and actuarial science (Antonio and Beirlant 2007; Frees, Young, and Lou 1999; Kaas, Dannenburg, and Goovaerts 1997), to name a few.
In many of these applications, however, model fitting is a challenging task because the fixed and random effects may have a large number of levels. This is especially true with molecular biology studies, as indicated by Wolfinger et al. (2001). Here, we present a macro in SAS for fitting GLMMs to data with large numbers of fixed and random effect levels using sparse-matrix techniques, and compare results with the output of the GLIMMIX procedure in SAS (which does not use sparse-matrix or other high-performance techniques). To understand the computational challenge, it is necessary to review the estimation techniques, which fall into two categories: 1. Linearization of the model based on a Taylor series, as described in Breslow and Clayton (1993), Wolfinger and O'Connell (1993), and Schall (1991). 2. Integral approximation, in which the marginal likelihood obtained by integrating over the random effects is approximated directly (for example, by the Laplace method or quadrature). The linearization method is more general than the integral-approximation method (in terms of the diversity of models that can be fitted), but may produce more biased variance-covariance and other parameter estimates than found with integral-approximation methods. Stroup (2012) shows, however, that the bias problem is usually of concern only under extreme situations, such as when the number of Bernoulli trials per sampling unit is very small, especially if the number of subjects is small. The linearization method is the focus of this paper because of its generality and because this approach can be incorporated into a high-performance computational algorithm. Schall (1991) and Breslow and Clayton (1993) proposed a method based on the first-order Taylor-series expansion of the inverse link function around the current estimates of the fixed and random effects, which is known as a quasi-likelihood-based method; it is also known as the penalized quasi-likelihood (PQL) method. Wolfinger and O'Connell (1993) expanded the Taylor-series approach by incorporating a probabilistic approximation based on the Gaussian distribution. This results in a so-called pseudo-likelihood approach because the marginal log-likelihood for the approximating function mimics the structure of a Gaussian log-likelihood. Within this structure, iterative mixed-model estimation is achieved using likelihood- or restricted-likelihood-based methods and iteratively reweighted least squares, essentially coupling and generalizing linear mixed model (LMM) and generalized linear model (GLM) algorithms (Schabenberger and Pierce 2002). This section basically follows Wolfinger and O'Connell (1993) and Stroup (2012). A GLMM can be expressed as: µ = E(y | γ) = h(η), with η = Xβ + Zγ, where y is the response vector, X is the fixed effects design matrix, Z is the random effects design matrix, β is the vector of fixed effects parameters, γ is the vector of random effects, µ is the vector of expected values, η is the vector of linear predictors conditional on the random effects, and h() is the inverse link function g −1 (). It is assumed that γ ∼ N(0, G), where G is the variance-covariance matrix of the random effects. We are mostly concerned with variance-component models, which correspond to a diagonal G matrix, but the approach is applicable to a wider class of models. The variance (or variance-covariance) of y conditional on the random effects is defined through two matrices as VAR(y | γ) = A^(1/2) R A^(1/2), where A is a diagonal matrix whose elements represent the variance function for h(η) (dependent on the assumed conditional distribution, and calculated at µ), and R is a scaling matrix.
In the nominal situation, R is a diagonal matrix with elements φ, a "residual-type" scaling term; for some conditional distributions in the exponential family, such as the binomial and Poisson, φ ≡ 1. For other conditional distributions (e.g., gamma, normal, negative binomial), φ is unknown and must be estimated. Over-dispersion with the binomial and the Poisson distribution can be accounted for by allowing φ to be an unknown parameter that is estimated; this is equivalent to holding φ fixed at the theoretical value for the conditional distribution and multiplying it by an over-dispersion parameter. In this over-dispersion situation with the binomial and Poisson distribution, the estimation becomes a quasi-likelihood method, because the "likelihood" no longer corresponds to a known distribution (Stroup 2012). R can also be generalized to a non-diagonal matrix as one approach to account for correlations of the observations within subjects. Define the first-order derivative of the inverse link function h(), evaluated at given estimates β̂, γ̂ of the linear predictor effects, as the diagonal matrix D̂ = ∂h(η)/∂η evaluated at η̂ = Xβ̂ + Zγ̂. Then the first-order Taylor-series expansion of the GLMM at these estimates is h(η) ≈ h(η̂) + D̂X(β − β̂) + D̂Z(γ − γ̂). Here, the hats refer to the current estimate of the parameter (or parameter vector) in an iterative process. Rearranging terms, we have D̂⁻¹(µ − µ̂) + η̂ ≈ Xβ + Zγ, where µ̂ = h(η̂). Following the idea of the iteratively reweighted least squares algorithm for a GLM, the pseudo-variable is defined as y* = D̂⁻¹(y − µ̂) + η̂, where η̂ = Xβ̂ + Zγ̂. It follows that the conditional expected value and variance are given by (Stroup 2012): E(y* | γ) = Xβ + Zγ and VAR(y* | γ) = D̂⁻¹ A^(1/2) R A^(1/2) D̂⁻¹. With the pseudo-likelihood approach, it is assumed that y* | γ has a normal distribution. Using y* as the response variable, pseudo-likelihood estimation of a GLMM is achieved within the framework of a linear mixed model, with weights defined by Ŵ, a diagonal matrix with elements A⁻¹D̂². Under the canonical link function, D̂ equals A, so that Ŵ reduces to A. Estimates of β and predictions of γ are obtained by solving the GLMM (mixed model) equations: X'ŴX β̂ + X'ŴZ γ̂ = X'Ŵ y* and Z'ŴX β̂ + (Z'ŴZ + G⁻¹) γ̂ = Z'Ŵ y*. Given the probability approximation above, the objective function is the Gaussian log-likelihood function for the pseudo-variable y*: l(θ; y*) = −(1/2) log|V| − (1/2) r'V⁻¹r − (n/2) log(2π) (Equation 2), and the restricted pseudo-log-likelihood function is: l_R(θ; y*) = −(1/2) log|V| − (1/2) r'V⁻¹r − (1/2) log|X'V⁻¹X| − ((n − p)/2) log(2π) (Equation 3), where V = VAR(y*) = ZGZ' + D̂⁻¹A^(1/2)RA^(1/2)D̂⁻¹ is the marginal variance of the pseudo-variable, r = (I − X(X'V⁻¹X)⁻X'V⁻¹)y*, n denotes the sum of the frequencies used in the analysis, and p is the rank of X. In Wolfinger and O'Connell (1993), the uses of Equations 2 and 3 (with the GLMM equations) were originally called the PL and REPL algorithms, respectively. In the GLIMMIX procedure of SAS, the PL algorithm is referred to as MSPL (maximum subject-specific pseudo-likelihood), while the REPL algorithm is called RSPL (restricted subject-specific pseudo-likelihood). Note that the elements of R can either be held constant based on the conditional distribution (consistent with Breslow and Clayton 1993), or be estimated (consistent with Wolfinger and O'Connell 1993). The GLIMMIX procedure uses the MSPL and RSPL labels for the linearization estimation methods whether or not the R matrix is estimated. This is discussed further below. A general label for all of these approaches in this paragraph is linearization. Estimation of a linearized GLMM follows a doubly iterative algorithm, as indicated by SAS Institute Inc. (2011a). In a doubly iterative algorithm, a simpler model, an LMM, is derived from the original, more complex GLMM; here the pseudo-variable (y*) is calculated and fitted to the data as an LMM using the above-described pseudo-log-likelihood and mixed-model equations.
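To make the pseudo-variable construction concrete, the following minimal Python sketch performs the outer (linearization) updates for a Poisson GLMM with a log link and a single variance-component random effect. It is a sketch of the general method, not of the SAS macro: the variance component is held fixed (the REML updates of the inner iterations are omitted), and all data and dimensions are hypothetical.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical design: 200 observations, 3 fixed effects, 10 random-effect levels.
n, p, q = 200, 3, 10
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
Z = np.zeros((n, q))
Z[np.arange(n), rng.integers(0, q, n)] = 1.0
y = rng.poisson(lam=np.exp(X @ np.array([0.5, 0.2, -0.3]) + Z @ rng.normal(0, 0.4, q)))

beta, gamma = np.zeros(p), np.zeros(q)
sigma2_u = 0.25          # variance component held fixed in this sketch (no REML step)

for it in range(20):                     # outer (linearization) iterations
    eta = X @ beta + Z @ gamma
    mu = np.exp(eta)                     # inverse link h(eta)
    D = mu                               # dh/deta for the log link
    y_star = (y - mu) / D + eta          # pseudo-variable
    W = D**2 / mu                        # A^{-1} D^2; equals mu for the canonical log link

    # Mixed model equations for the LMM fitted to y_star (G = sigma2_u * I held fixed).
    XtW, ZtW = X.T * W, Z.T * W
    lhs = np.block([[XtW @ X, XtW @ Z],
                    [ZtW @ X, ZtW @ Z + np.eye(q) / sigma2_u]])
    rhs = np.concatenate([XtW @ y_star, ZtW @ y_star])
    sol = np.linalg.solve(lhs, rhs)
    new_beta, new_gamma = sol[:p], sol[p:]

    converged = np.max(np.abs(new_beta - beta)) < 1e-8   # crude convergence check
    beta, gamma = new_beta, new_gamma
    if converged:
        break

print("fixed-effect estimates:", np.round(beta, 3))

In the real algorithm, the inner REML iterations would also update the variance components before each outer re-linearization.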
For most LMMs, this is an iterative process, known as the inner iteration. Using the parameter estimates and predictions of the random effects obtained after convergence, y * is re-calculated (the outer iteration) and a LMM is again fitted to the data. The outer iterations continue (with the corresponding inner iterations at each step) until a preset convergence criterion is met or the maximum number of iterations is attained. The deviance based on the assumed conditional distribution is then calculated. The algorithm is outlined in Section 2. When either the fixed effect design matrix X or the random effect design matrix Z has many columns, solving the mixed model equations will be extremely time consuming and memory intensive, especially when determining the generalized inverse of the matrices, see SAS Institute Inc. (2011a). The time complexity will roughly be about O(k 3 ) and the space complexity will roughly be about O(k 2 ), where k is the rank of the matrix H. What makes the new macro outperform the GLIMMIX procedure in terms of speed and memory consumption is the use of sparse-matrix techniques and special optimization methods in the inner iterations of the doubly iterative algorithm. This is accomplished by calling the new HPMIXED procedure for the inner-iteration calculations instead of using a more traditional LMM algorithm not adapted for large scale problems. HPMIXED is specifically developed for linear mixed models with large numbers of sparsely populated columns in the X and Z matrices. In a mixed model with fixed and/or random effects that have large number of levels, the resulting mixed model equation matrices are very large, but often extremely sparse in the sense that most of the elements are 0. For a typical variance-component mixed model with many factor levels, close to 99% of the elements may be 0. Sparse-matrix techniques exploit this fact by representing a matrix not as a complete two dimensional array, but as a set of nonzero elements and their location (row and column) within the matrix. The HPMIXED procedure in SAS, in particular, employs the compressed sparse row (CSR) representation of a sparse matrix, where nonzero elements are stored row by row in (value, col_ind, row_ptr) format where value is an array of the (left-to-right, then top-to-bottom) non-zero values of the matrix; col_ind is the column index corresponding to the values; and row_ptr is the list of value indexes where each row starts. CSR is efficient for row-wise arithmetic operations which is exactly how the likelihood calculations for mixed model are conducted. Several optimization methods are possible for the linear mixed model fit, and the default in HPMIXED is dual quasi-Newton, which only requires first derivatives of the (restricted) pseudolog-likelihood. HPMIXED also provides several optional optimization techniques to choose from when solving the pseudo-log-likelihood function, some of which require the calculation of the second derivatives. As the default for the %HPGLIMMIX macro, the HPMIXED default is replaced with the Newton-Raphson with ridging optimization method. Table 1 shows available optimization techniques and whether second-order derivatives are required. These are chosen with the TECH= option in the macro (see last example for a demonstration). However, HPMIXED does not actually calculate the true second derivative (or the observed information matrix). 
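As an illustration of the CSR layout described above (the general storage scheme, not SAS internals), the following Python sketch builds a small sparse matrix with SciPy and prints the three CSR arrays; SciPy names them data, indices, and indptr, corresponding to value, col_ind, and row_ptr in the description above.

import numpy as np
from scipy.sparse import csr_matrix

# A small, mostly-zero design-matrix-like block.
dense = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 0.0],
    [0.0, 3.0, 0.0, 4.0],
])
A = csr_matrix(dense)

print("value  :", A.data)      # nonzero values, row by row -> [1. 2. 3. 4.]
print("col_ind:", A.indices)   # column index of each value -> [0 2 1 3]
print("row_ptr:", A.indptr)    # index where each row starts in 'value' -> [0 1 2 4]

# Row-wise products (the pattern used in mixed-model likelihood computations) stay sparse:
x = np.array([1.0, 2.0, 3.0, 4.0])
print("A @ x  :", A @ x)

Only the nonzero entries and their positions are stored, which is why memory use grows with the number of nonzeros rather than with the full dimension of the mixed model equations.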
Instead, the so-called average information matrix is calculated, which is much less computationally demanding and can be more stable (Gilmour, Thompson, and Cullis 1995). Outline of linearization algorithm for GLMMs The specific algorithm implemented in this macro, as well as the description below, follows that of Wolfinger and O'Connell (1993): 1. Set up the link function according to Table 2 for the specified conditional distribution, and set up the variance function and deviance function based on Table 2. 2. Use the original data as the initial estimate of µ, µ̂. An adjustment (correction factor), as in Table 3, may be applied to y in order to apply the link function (e.g., to avoid the log of 0); the correction factor is set to 0.5 by default. 3. Compute the pseudo-variable y* from µ̂ and the current estimates of the fixed and random effects. 4. Compute the corresponding weights and set up the weighted linear mixed model for y*. 5. In the inner iterations, use REML to estimate the components of the covariance matrices G and R (or just G, if there is not a free scale parameter for the specified distribution and one does not wish to adjust for over-dispersion after random effects are in the model) and solve the mixed model equations for the fixed and random effects. 6. Obtain the maximum difference of the estimates of the covariance parameters and the fixed effect parameters between the current and previous outer iteration. Convert the difference to a relative scale by dividing by the magnitude of the corresponding parameter estimate. 7. If the maximum (relative) difference is larger than a threshold (e.g., 1E-8), update µ using the inverse link function with the newly estimated fixed and random effects, then go back to Step 3. 8. If the maximum (relative) difference is smaller than the threshold, claim convergence, calculate the deviance, and assemble the requested statistics. Table 2 lists the variance function and deviance function for each conditional distribution. Table 3 gives the correction-factor (CF) adjustments used to obtain the initial estimate of µ: for the binomial or binary distribution, (Response + CF)/(1 + 2 · CF); for the binomial specified as events/trials, (Events + CF)/(Trials + 2 · CF); for all other distributions, Response + CF. %HPGLIMMIX macro program %HPGLIMMIX largely follows the structure of the now obsolete %GLIMMIX macro (SAS Institute Inc. 2007), from which it is derived, and has almost the same set of input parameters. Some parameters that do not apply to the HPMIXED procedure are dropped (see below), and some are added (such as one for the optimization method in the inner iterations, TECH=). All of the items listed in Section 2 above are automatically carried out by the macro. The %GLIMMIX macro does pseudo-likelihood or restricted pseudo-likelihood estimation of GLMMs, operating by repeatedly calling the MIXED procedure with the pseudo-variable and weights updated at each call. Later, SAS Institute Inc. put the functionality of this macro, together with many other features, into the GLIMMIX procedure. However, sparse-matrix techniques are not incorporated into the GLIMMIX procedure. Thus, we used %GLIMMIX as a template for the development of a new macro that repeatedly calls the HPMIXED procedure instead of the MIXED procedure. In addition, many segments of the data processing code in the macro have been rewritten to achieve maximum efficiency in data processing and updating when using big data. HPMIXED only supports REML estimation; thus, the new macro can only perform the restricted pseudo-likelihood method (RSPL) to fit GLMMs using the linearization approach. Additionally, the computational part of the new macro has been significantly modified both to accommodate the syntax differences between the HPMIXED and MIXED procedures and to improve efficiency, especially for larger data sets with many observations.
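The correction-factor adjustments of Table 3 can be mirrored in a small Python helper; the function name and the example values are illustrative only, while the formulas and the 0.5 default come from the text above.

import numpy as np

def initial_mu(response, trials=None, distribution="poisson", cf=0.5):
    # Correction-factor adjustment (Table 3) so the link can be applied to the raw data,
    # e.g., avoiding log(0) for counts or logit(0)/logit(1) for proportions.
    y = np.asarray(response, dtype=float)
    if distribution in ("binomial", "binary") and trials is None:
        return (y + cf) / (1.0 + 2.0 * cf)                            # proportions in (0, 1)
    if distribution == "binomial" and trials is not None:
        return (y + cf) / (np.asarray(trials, dtype=float) + 2.0 * cf)  # events/trials form
    return y + cf                                                      # all other distributions

print(initial_mu([0, 1, 1, 0], distribution="binary"))     # -> [0.25 0.75 0.75 0.25]
print(initial_mu([0, 3, 10], [20, 20, 20], "binomial"))    # events/trials form
print(initial_mu([0, 2, 7], distribution="poisson"))       # -> [0.5 2.5 7.5]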
The HPMIXED procedure has only a subset of the options available in the more general MIXED procedure, and default settings are different in some cases (see below). It is assumed that the user has general familiarity with the syntax of the GLIMMIX procedure for fititing GLMMs and the MIXED or HPMIXED procedures of SAS for fitting linear mixed models. Users can invoke the macro by calling %HPGLIMMIX. The list below explains key parameters in the syntax that will be used most often, while explanation of the full list of parameters is in the .sas program file and basically follows the instructions for the %GLIMMIX macro in SAS Sample 25030. 1. DATA= specifies the data set you are using. It can either be a regular input data set or the _DS data set from a previous call to %HPGLIMMIX. The latter is used to specify starting values for %HPGLIMMIX and should be accompanied by the INITIAL keyword option in the OPTIONS= option (see below for description of OPTIONS). 2. STMTS= specifies HPMIXED procedure statements for the analysis, separated by semicolons and listed as a single argument to the %str() macro function. Statements may include any of the following: CLASS, MODEL, RANDOM, REPEATED, PARMS, ID, TEST. Syntax and options for each statement are exactly as in the HPMIXED procedure documentation. Most aspects of the GLMM specification (in terms of fixed and random effects, continuous versus categorical [dummy] variables, and over-dispersion) are given with these statements. Unlike with the GLIMMIX procedure, the link function and conditional distribution are not given in the MODEL statement but are specified with separate options. The TEST statement is explained below. 3. ERROR= specifies the distribution of y conditional on the random effects (sometimes known as the error distribution). When you specify ERROR=USER, you must also provide the ERRVAR= and ERRDEV= options. The default conditional distribution is binomial. Valid types and their abbreviations are listed in Table 4. 4. LINK= specifies the link function. Valid types are logit, probit, cloglog, loglog, identity, power(), log, exp, reciprocal, nlin, and user. The default link function for each error distribution is listed in Table 4. The user should see the .sas program for more details. 5. OPTIONS= specifies %HPGLIMMIX macro options separated by spaces. For example, key word INITIAL specifies that the input data set is actually the _DS data set from a previous call to %HPGLIMMIX. This allows you to restart a problem that stopped or to specify starting values. For a full list of available keywords, refer to the .sas program. 6. PROCOPT= specifies the options used by the HPMIXED procedure statement. Refer to the HPMIXED procedure documentation for more information. 7. TECH= specifies the optimization algorithm for covariance component estimation, default is NRRIDG (Newton-Raphson ridge). Available algorithms are listed in Table 1. There are some important differences between the %HPGLIMMIX macro and the GLIMMIX procedure, even if the same linearization (pseudo-likelihood) algorithm is used for both procedures, mostly due to the difference between the HPMIXED and the GLIMMIX procedures. Because of differences between the HPMIXED and the MIXED procedure, there are also a few differences between the %GLIMMIX and %HPGLIMMIX macros. First, the syntax between HPMIXED and GLIMMIX procedures has some differences. 
For example, the statements COVTEST, LSMESTIMATES, SLICE and FREQ in GLIMMIX are not supported in HPMIXED (making them, therefore, unavailable in the %HPGLIMMIX macro), while the LSMEANS and CONTRAST statements in HPMIXED do not provide the same level of functionality as those in GLIMMIX. On the other hand, HPMIXED does not automatically produce global tests of fixed effects (main effects or interactions) in order to reduce computational time and memory usage for big data problems; for situations with huge numbers of factor levels, overall F tests are often not of value. F tests of main effects and interactions, when desired, are specified with TEST statements in HPMIXED and in the %HPGLIMMIX macro; these tests are automatically obtained with GLIMMIX procedure. An example of the TEST statement is given in Section 4.3. Second, %HPGLIMMIX supports the REPEATED statement in HPMIXED procedure for modeling the so-called R-side (residual) variation; in contrast, the GLIMMIX procedure uses the RANDOM _RESIDUAL_ statement for the same or similar purpose (depending on the type of GLMM). However, there are some important differences that must be kept in mind between the macro and the procedure in this regard, depending on the selected conditional distribution; %HPGLIMMIX follows the same convention as the obsolete %GLIMMIX in this regard. If one fits a model without a free scale parameter, such as the Poisson, binary, or binomial conditional distribution, there is no residual variance term in the GLIMMIX procedure (because the conditional residual variance is fully defined as a function of the mean). In terms of the GLMM, the R matrix has no unknown parameters, as discussed in the introduction. But with the HPMIXED (or MIXED) procedure, there is always a residual term. So, in essence, there is one more variance (or variance-covariance) parameter with the macro than with the procedure for these conditional distributions. With the macro, one must force the "last" variance term (residual variance) to equal 1 in order to perform a pseudo-likelihood analysis and duplicate the model (and the results) of the GLIMMIX procedure (for those conditional distributions without a free scale parameter). This difference is demonstrated in Sections 4.1 and 4.2. In Section 4.1, it is shown that if an extra scale parameter is desired with the models to deal with over-dispersion that is not accounted for with (conditional) random effects, the statement: RANDOM _RESIDUAL_ has to be explicitly specified in GLIMMIX procedure. In contrast, with the %HPGLIMMIX macro, this scale parameter is automatically estimated. In Section 4.2, the opposite case is demonstrated, where there are four variance parameters with the %HPGLIMMIX macro but with the 4th variance-covariance parameter held at 1 by specifying HOLD=4 option in the RANDOM statement, and only three explicit variance parameters with the GLIMMIX procedure with the scale parameter automatically hold at 1. For other conditional distributions which have a free (residual) scale parameter (e.g., gamma, inverse Gaussian), nothing special has to be done with the macro (or with the procedure); that is, the number of variance-covariance parameters match up naturally between the macro and the procedure. Third, HPMIXED uses the residual denominator degrees of freedom (df) for tests of fixed effects. The only other option is to use an infinite df, which means that t and F tests become z and chi-square tests, respectively. 
In the GLIMMIX and MIXED procedures, several df calculation or estimation methods are allowed, and the residual method is not the default. Thus, for direct compatibility in denominator df between the new macro and the procedure (or the %GLIMMIX macro), one needs to use the ddfm=residual option in the GLIMMIX procedure (or in the %GLIMMIX macro), as shown in the first example below. Fourth, it is routine in data analysis for models to be fitted with an over-parameterized fixed effect component (Xβ), which means that there is an infinite number of fixed effect parameters with the same model fit; only certain linear combinations of the parameters are estimable and are unique. This happens typically when classification variables (factors) are in the model. In MIXED, HPMIXED, and GLIMMIX, the generalized inverse used in the mixed-model equations results in one of the factor levels (the reference level) being "estimated" as 0. With MIXED and GLIMMIX procedures, by default, the 0 is obtained for the last factor level, but with HPMIXED, the order of 0 estimates is almost random and cannot be controlled by the user. Thus, for an over-parameterized model, the estimates of β from HPMIXED may differ from those in GLIMMIX or MIXED, although the estimatable functions will be the same (e.g., least squares means, contrasts). Examples In this section, several examples are used to demonstrate key features of the new high performance macro. First, in Section 4.1 the new macro is shown to be in agreement with the now obsolete %GLIMMIX macro for the same model. In Section 4.2, we show how one needs to fix the residual variance at 1 in the %HPGLIMMIX macro code when fitting a model with a conditional binomial distribution. In comparison, this is automatically determined in the procedure. In the third example, we show that the new macro saves tremendous amount of time when fitting a large-scale GLMM. In this example, a mixed model with a gamma conditional distribution is used where the fixed effect design matrix has 4513 columns and the random effect design matrix has 3054 columns, ending up with mixed-model equations with more than 7500 columns in total. The total running time using the macro is less than 2.5% compared to the GLIMMIX procedure (67 minutes vs. 2714 minutes). Memory consumption using the new macro is also a tiny fraction of the procedure in this case. title2 "Using PROC GLIMMIX"; proc glimmix data=work.ship order=data; class type year period; model y = type / solution d=poisson link=log offset=service ddfm=residual; random year|period; estimate 'E vs. Others' type 4 -1 -1 -1 -1 / divisor=4 cl; random _residual_; run; title; title2; The PROCOPT statement can have many purposes for controlling options in the HPMIXED statement called by the macro (see the HPMIXED procedure documentation). For the %HPGLIMMIX macro, the PROCOPT=ORDER=INTERNAL option is used to specify the order in which to sort the levels of the classification variables listed in the CLASS statement. The sorted order of the classification variable levels from the %HPGLIMMIX macro may be different from that from the GLIMMIX procedure depending on which option you choose. With the %GLIMMIX macro, one uses the PROCOPT=ORDER=DATA option (because MIXED has a different default ordering compared to HPMIXED). The ORDER=DATA option is also used with the GLIMMIX procedure, because the GLIMMIX and MIXED procedures use the same convention for ordering factor levels. 
It should be pointed out that the ordering of factor levels is often of concern only when the investigator needs the individual parameter estimates for the over-parameterized model. Often, only linear combinations of parameters (such as least squares means or contrasts) are required, and these will not be affected by the parameterization and reference level chosen. The OFFSET option is used for defining an offset variable in the fixed effect linear predictor (a predictor variable with a parameter equal to 1). Note that the %HPGLIMMIX macro only specified one RANDOM statement corresponding to the factors of interests, but GLIMMIX added another RANDOM statement: RANDOM _RESIDUAL_. This is because, as mentioned previously, the HPMIXED procedure automatically estimates a scale parameter, but for a Poisson conditional distribution, the scale parameter is fixed at 1 by default for the GLIMMIX procedure, and in order to make the GLIMMIX procedure estimate the same statistical model, this second RANDOM statement is required. Note that the code for the %GLIMMIX macro is given at http://support.sas.com/kb/25/030. html. To obtain the same denominator df as with the new macro, one uses ddfm=residual for the model statement. The results are identical with the results shown below, but are not shown to save space. For this small data set, %HPGLIMMIX takes more outer iterations to converge compared to the GLIMMIX procedure and compared to the %GLIMMIX macro from SAS (latter output or log not shown here). This may simply reflect different default starting values for HPMIXED, GLIMMIX and MIXED. GLIMMIX and MIXED use the MIVQUE0 algorithm for starting values for random effects, and GLIMMIX uses the GLM solution for the starting values of the fixed effect parameters; HPMIXED uses the EM-REML algorithm instead for starting values, see SAS Institute Inc. (2011a). Also, the sparse-matrix methods may not be efficient for small data sets with small numbers of fixed or random effects. That is, the increased computational load of producing the sparse-matrix formulation of the matrices may not be offset until the number of levels of fixed or random effects reaches a certain minimum value (depending on the sparseness of the matrices), relative to the calculations made directly with the original matrices. Examining the results below, we are assured that the estimates of parameters are identical as reported by SAS on-line for the now obsolete %GLIMMIX macro, as well as from GLIMMIX procedure. The section below shows parameter estimates output from %HPGLIMMIX macro: The parameter estimates output from GLIMMIX is shown below, which is the same as the one from the macro. Because we selected the residual degrees of freedom method with GLIMMIX, the significance levels from the macro and procedure are also the same. As can be seen above and verified on the SAS website, %HPGLIMMIX obtains exactly the same results as both the GLIMMIX and the %GLIMMIX macro. However, with an estimated 0 for the year × period variance, it is probably advisable to refit the model without this interaction random effect. That is, one can use the following statement in the above code: random year period; Conditional binomial mixed model We have already seen that the macro produces the same results as the GLIMMIX procedure using the linearization RSPL method when the same model structure is used. 
In this example, we fit a hierarchical GLMM to data with an assumed conditional binomial distribution, based on the data sets analyzed in Kriss, Paul, and Madden (2012). The incidence of diseased wheat spikes (heads) in a three-level hierarchy was analyzed: counties, fields nested within counties, and sites nested within fields within counties. The number of diseased (y) and total (n) wheat spikes was determined at each site within each field within each county, and all effects were assumed to be random. A complementary log-log (CLL) link function was used and it was assumed that y had a conditional binomial distribution. The number of counties is set to 62; this is larger than the number used in the original study, but is useful for showing the advantage of the macro. We used the linearization method to fit a hierarchical GLMM to a simulated data set that is based on typical data, and set of results, in Kriss et al. (2012). Here we emphasize the second key difference between the %HPGLIMMIX macro and the GLIMMIX procedure mentioned in Section 3. When using the %HPGLIMMIX macro to fit a mixed model for a conditional distribution without free scale parameter, there is an extra variance term, the residual variance, that must be fixed at 1. GLIMMIX, however, automatically handles this. So, with this example, there are three variance terms with the procedure and four with the macro (although the last one is held at 1). data work.plant; CALL STREAMINIT(9873123); do sim = 1 to 1; inter = -2; n = 50; do county = 1 to 62; varc = 0.65; uc = rand('normal')*sqrt(varc); do field = 1 to 10; varf = .50; uf = rand('normal')*sqrt(varf); do site = 1 to 20; vars = .07; us = rand('normal')*sqrt(vars); eta = inter + uc + uf + us; p = (1-exp(-exp(eta))); y = rand('binomial',p,n); output; end; end; end; end; run; The following log pieces show the code and resource usage from the GLIMMIX procedure and the %HPGLIMMIX macro, respectively, for estimation and comparison purposes. Note that in the macro, the PARMS statement is used to not only specify the starting value for variance (or more generally, the variance-covariance) parameters, but also fixes the last (4th in this example) variance to be 1 using the HOLD=4 option. Because random effects have a nested structure, we specified the SUBJECT= option in the RANDOM statement to process the data by subject in order to make the computing more efficient. Note that both GLIMMIX and %HPGLIMMIX support this option in the RANDOM statement. Although the output is not shown, the same variance parameter estimates were obtained with the macro and the procedure. The GLIMMIX procedure took slightly more than 5 minutes 51 seconds to converge and the %HPGLIMMIX macro took about 2 minutes 3 seconds. For random effects with nested structure, using the SUBJECT= option to enable processing by subject is highly encouraged. If the random statement is specified as RANDOM county field(county) site(field county) , the GLIMMIX procedure would take several hours to finish, whereas the macro would take less than an hour to finish. In situations where the GLIMMIX procedure and the %HPGLIMMIX macro have a similar performance, the GLIMMIX procedure should be preferred since it provides a much wider range of features and covariance types. Reduction in running time and memory requirement for large scale GLMM In this example, we fit a GLMM to the simulated microarray data from Example 45.4 in SAS Institute Inc. 
(2011b) for the HPMIXED procedure, but assume a gamma distribution (conditional on the random effects) instead of a normal conditional distribution as in SAS Institute Inc. (2011b). The purpose is to push the scale of model to a higher limit and demonstrate the great advantage of using the macro instead of the procedure for such big data problems. The data set simulates a so-called loop microarray design structure, which is commonly used in such studies. There are 500 genes and 6 treatments, each gene occurs in 6 arrays, and each array has 2 dyes; so-called pins and dips on the arrays give multiple observations. The model assumes the same structure as in SAS Institute Inc. (2011b), which is also described as case study 16.12 in Littell, Milliken, Stroup, Wolfinger, and Schabenberger (2006). Fixed effects are: gene, treatment, dye, gene-treatment interaction, dye-gene interaction, and array pin; random effects are array, array-gene interaction, dip-within-array, array-pin interaction. This is a large model with 4513 columns in the design matrix of fixed effects and 3054 columns in the design matrix of random effects, which makes the mixed model equation having more than 7500 columns in total, with a sparsity of only 0.14537%. The data generation is given in the SAS program, and is the same as found in SAS Institute Inc. (2011b), except that η is a linear function of the fixed and random effects, and that the response variable has a conditional gamma distribution with expected value exp(η) and scale parameter of 0.5 (which was estimated in the model fitting). The following example used both the GLIMMIX procedure and the %HPGLIMMIX macro to estimate the same model and showed the difference in time consumption and memory usage. Results were stored using the ODS output system, and additional code for a data step were written to compare key results side-by-side in the SAS log (which is displayed). As shown below, on a Windows PC equipped with Intel i5-3570K CPU running at 3.8GHz, the macro took a total of about 67 minutes to finish, while the GLIMMIX procedure took more than 45 hours, a 40-plus folds saving in time. As an aside, we attempted to fit the GLMM using the Laplace (likelihood approximation) method of the GLIMMIX procedure, but convergence could not be obtained after 5 days (unpublished).The following log shows the input program code and information on the model fitting with the linearization method, as well as the execution time, and a comparison ot the estimates of variance parameters from both the macro and the procedure. Note that the TEST statement is used in the macro to perform an F test of the treatment effect with the macro. Also note that the Newton-Raphson with ridging (NRRIDG) was explicitly chosen for the inner-iteration optimization technique; although it is the default with the macro, it is shown here to demonstrate its use. As can be seen, the variance parameter results are identical up to 8 decimal places. The actual ODS output in the results window are not shown to save space. Conclusion The %HPGLIMMIX macro, based on the %GLIMMIX macro of SAS, provides a convenient way to fit GLMMs to large-scale data sets with large numbers of fixed or random effects. Depending on the size and sparseness of the design matrices, considerable time and memory savings can result, relative to the use of the GLIMMIX procedure. The macro is based strictly on the use of the doubly iterative linearization method, which is a very general method that can be applied to a wide range of GLMMs. 
Although the parameter estimates may be more biased than found with the likelihood approximation methods, these latter approaches are not computationally well suited to large-scale problems at this time. The bias problem has been shown by Stroup (2012) to be an issue with discrete data only under extreme conditions, such as with very small number of trials per sampling unit. On the other hand, the %HPGLIMMIX macro is built on the HPMIXED procedure, hence the limitations of this procedure apply. It is designed for special cases of a mixed model with large but sparse design matrix and only a few distributions are supported. For GLMMs with large but dense design matrices, the performance of this macro will be worse than that of the GLIMMIX procedure. In addition, only a subset of covariance structures of the GLIMMIX procedure are available as of this writing. Some other limitations include type 3 test results are not provided by default because dense matrix computation is involved, and degrees of freedom methods such as the Kenward-Roger method and the Satterthwaite method are not supported because they require to store and operate on the dense mixed model equation. Therefore, users that need those features will have to use the GLIMMIX procedure. However, they can use the %HPGLIMMIX macro for large scale (big data) problems and to accelerate the GLIMMIX procedure analyses for very large problems. The idea is to maximize the likelihood and produce parameter estimates more quickly using the %HPGLIMMIX macro, and then to pass these parameter estimates to the GLIMMIX procedure for some further analysis that is not available within the %HPGLIMMIX macro, see Example 45.3 in SAS Institute Inc. (2011a) for details.
8,800.4
2014-06-30T00:00:00.000
[ "Computer Science", "Mathematics" ]
Environmental Impacts of Biomass Energy Sources in Rwanda Rwanda is adopting a new approach of using alternative energy sources as cooking fuel, in a country where the majority of the population lives in rural areas and uses wood for all heating needs. Biomass in the form of firewood and charcoal plays a significant part in Rwanda's economy; it accounted for 83 percent of Rwanda's energy consumption in 2020. Biomass can be converted into fuel through several different processes, including solid fuel combustion, digestion, pyrolysis, fermentation and catalyzed reactions. With the government engaged in improving public health and protecting the environment, it becomes necessary to look for less harmful alternative fuels or for ways to improve the methodology and the quality of the stoves used in the country. In this study, the impacts of using biomass energy were assessed and mitigation measures were proposed. The results support reducing reliance on the unsustainable use of wood fuel and add to ongoing efforts in Rwanda to transition from the traditional use of biomass to Liquefied Petroleum Gas (LPG) or other improved cooking technologies in a sustainable way. At the same time, the use of biomass for fuel is having harmful effects through health impacts and emissions. The article fills an important gap in the energy literature on Rwanda, as it gives detailed information on the cooking sector. Introduction Rwanda relies on fuelwood for heating and cooking. Fuelwood in Rwanda accounts for at least 80.4 percent of energy consumption and, as a result, there is significant deforestation across the country. Furthermore, population growth is intensifying deforestation and causing more environmental degradation. For this reason, in 2020 the government of Rwanda, through the Ministry of Land and Forestry, began a campaign to reduce the use of firewood for cooking while promoting other technologies such as gas and energy-saving stoves to limit deforestation. The campaign includes the planting of more trees that will be cared for until they mature before being felled. Rwanda launched a national forest planting season in 2017/2018, in which more than 45,000 hectares of agroforests were to be planted, especially in eastern Rwanda. According to the ministry, the country's current forest cover stands at 704,997 hectares, equivalent to 29.6 percent of the land area, of which planted forests constitute 17.7 percent and natural forests 11.9 percent [1]. Biomass is used in the form of firewood, charcoal and agricultural residues, mainly for cooking by both rural and urban populations as well as by some industries. Biomass consumption is putting pressure on existing resources, with an estimated woody biomass deficit of 870,000 tons in 2009. Rwanda relies heavily on traditional biomass (wood, charcoal and dung), with more than 83 percent of households using firewood, and demand for biomass energy continues to be a major driver of deforestation. An increase in demand for cooking fuel has exerted immense pressure on forest resources, and the country aims to reach a potential net reduction in wood use to 5,770,000 tons by 2030. Rwandans still rely heavily on biomass, and the government of Rwanda is looking for investors capable of replacing firewood, the traditional cooking fuel that is putting the country's forests under pressure.
Rwanda's energy balance shows that about 85 percent of its overall primary energy consumption is based on biomass (99% of all households use biomass for cooking), with 11% from petroleum products (transport, electricity generation and industrial use) and 4% from hydro sources for electricity, as shown in figure 1. Rwanda Energy Sector Structure The mission of the Rwanda energy sector is to create conditions for the provision of safe, reliable, efficient, cost-effective and environmentally appropriate energy services to households and all economic sectors on a sustainable basis. The management of energy systems in Rwanda involves various ministries and government agencies as well as private entities and individuals. The main parties involved in energy in the country include the Ministry of Infrastructure (MININFRA), which is genuinely interested in biomass issues but is mainly concerned with end-user aspects and with energy conversion, transformation and efficiency; the Ministry of Natural Resources (MINIRENA), which focuses on the silvicultural aspects and productivity of plantations; and the Ministry of Agriculture and Animal Resources (MINAGRI), which deals with the agroforestry aspects of biomass. Other ministries, such as the Ministry of Finance and Economic Planning (MINECOFIN), the Ministry of Local Government (MINALOC) and the Ministry of Trade and Industry (MINICOM), have an interest in parts of the technical and regulatory aspects of the biomass supply and use chain. MININFRA is responsible for the development of national policies and strategies related to energy generation in the country, while regulation of the sector is the preserve of the Rwanda Utilities Regulatory Authority (RURA). Rwanda Energy Group (REG) is a private company established in 2014 and wholly owned by the government. It carries out operations through two subsidiaries, the Energy Development Corporation Limited (EDCL) and the Energy Utility Corporation Limited (EUCL). The EDCL is responsible for developing both generation and transmission projects, exploiting new energy resources, and executing a least-cost power development plan together with Independent Power Producers (IPPs) [18]. The EUCL is in charge of the day-to-day operations of power generation, transmission, distribution and sales to final customers. The utility will also play a key role in the execution of power purchase/sales agreements with IPPs and other regional utilities for import and export. The institutional framework of the energy sector in Rwanda is shown in figure 2. Current Status of Biomass Energy in Rwanda The analysis of energy supply and demand in Rwanda indicates that today approximately 83 percent of primary energy still comes from biomass, in the form of wood that is used directly as a fuel or converted into charcoal, together with smaller amounts of agricultural waste. Biomass is largely consumed for cooking, with wood used by rural households and charcoal by urban households. This has serious negative impacts on forests, environmental degradation and people's health, which makes the government determined to cut the use of firewood for cooking by institutions such as hotels, schools, hospitals, prisons, the police and the army. The government of Rwanda has embarked on strategies aimed at reducing its dependence on biomass as a source of energy by 2024. The current national energy consumption is shown in figure 1.
Firewood is the most common cooking fuel in Rwanda and is used in various types of woodstoves; 93% of rural households utilize firewood as it is considered in most cases still freely available [4]. More than half of the firewood stoves operating nationwide are three-stone stoves, as shown in figure 4. Approximately 65% of households living in major urban areas like Kigali, Huye and Rwamagana use charcoal to meet most of their cooking needs, through both traditional and improved cookstoves. Most charcoal is produced locally. In urban areas, charcoal is the preferred fuel due to its long storage life and low-cost transportation, as it is smaller in volume and weight and has a higher heat content compared to firewood [5]. In Rwanda, most charcoal (86%) is produced in a rather inefficient way, using traditional earth mound kilns with an average thermal efficiency of about 12% (air-dry kg of charcoal per air-dry kg of wood) [6]. Agricultural residues used to constitute only a small percentage of the fuels used by households, but their use has increased year after year as a substitute due to wood scarcity, particularly among the poorest households of rural semi-arid areas. In Rwanda, the most used agricultural residues at the household level are cereal residues (maize and sorghum stalks and rachis), wheat and rice straws and husks, stalks of tuber crops such as cassava, banana leaves, coffee husks, vegetable wastes (beans, groundnuts, soya and coffee pulps) and dried cow dung. Rice husks are used as a fuel mainly for brick firing in the major rice-growing areas. Sugar bagasse, coffee husks, rice husks and wheat husks are also used in brick-making industries. In urban areas, poor households use sawdust and other end-cuts from wood-processing industries without using appropriate cookstoves [5]. Exposure to polluting cooking energy differs between rural and urban areas. We observe a slow but increasing use of clean energy in the metropolitan area, where 2% of households use LPG cooking gas and 63.9% use an improved stove, while in rural areas only 22.4% of households use an improved stove and the use of clean fuel stoves is negligible [16]. The transition from Tier 0 to Tier 5 will take time and will pass through intermediate cooking fuels and improved stoves at Tiers 3 and 4 before reaching Tier 5, as presented in table 1. Based on a review of recent data, five key market segments, shown in table 2, have been defined for biomass energy used for cooking, heating and drying processes in Rwanda: the household sector (rural and urban), the commercial food industry, public institutions, and the processing and production sectors. The use of firewood by rural households is an attractive option as it is freely available to most households. In urban areas, charcoal is the preferred fuel. This is due to its long storage life and relatively low-cost transportation, given its smaller volume and weight compared to firewood. The classification of biomass energy sources is shown in figure 3. Table 2 describes the segments as follows: urban households are consumers who rely heavily on charcoal for cooking and heating purposes; the commercial food industry comprises hotels, bakeries and restaurants that rely on charcoal and firewood for cooking; public institutions are schools, prisons, the military and refugee camps relying on charcoal and firewood for cooking purposes; and processing and production covers tea factories that utilize firewood for tea curing and brick-making processes that utilize firewood.
Improved Cooking Technologies in Rwanda Rwanda Energy Group (REG), in partnership with its stakeholders, is carrying out a countrywide awareness campaign on the use of safe, effective and clean cooking technologies to ensure that Rwanda meets its targets to reduce the use of biomass energy for cooking in households. Currently, around 83 percent of Rwandans still use firewood for cooking, but by 2024 Rwanda is targeting to have reduced the figure to 42 percent, as shown in table 3. The performance of a cook stove is characterized by three processes: heat-transfer efficiency depends primarily on the geometry of the cook stove and the flow of hot gases around the bottom and sides of the pot; combustion efficiency, by contrast, depends primarily on the temperature in the cook stove and the characteristics of the combustion chamber that affect the circulation of air; and overall thermal efficiency can be raised by improving either combustion efficiency or heat-transfer efficiency [3]. Improved cook stoves are up to three times more efficient than the traditional three-stone stove and can reduce biomass consumption by anywhere between 68% and 94%. The transition from traditional cooking to modern energy cooking solutions is shown in figure 4. As well as the fuel used, the type of stove has a significant impact on the amount of fuel required and on the health of households. Most households (66%) use three-stone cookstoves (the simplest cookstove, made by placing a pot on three stones positioned around a fire) or traditional cooking stoves. These are normally used with firewood. The average household uses around 1.8 tonnes of firewood each year to satisfy its cooking needs with this type of cookstove. The average monthly expenditure per household on firewood is RWF 1,930 ($2.27) [10]. A government program to support the use of improved cooking technologies has run since the 1980s, achieving 30% household penetration. Private-sector-led efforts are also distributing cook stoves that are up to three times more efficient than the traditional three-stone stove and can reduce biomass consumption by between 68% and 94%. The consumption of charcoal for cooking can cost up to 36,000 francs per month for a family in Rwanda, when a single 24 kg bottle costing 28,000 francs would be sufficient for the same family over the same period and be less harmful. However, most families do not realize that charcoal is more expensive, as they buy it in small quantities daily [16]. The different types of stoves and their combustion performance are shown in figure 5. The potential of biomass has not been effectively used in the provision of modern energy for a variety of reasons. One is the failure to exploit the opportunities for transforming wastes from agricultural production and processing into locally produced modern energy. Continued over-dependence on unsustainable wood fuel, biomass residues and other forms of biomass as the primary sources of energy to meet household energy needs has contributed to uncontrolled harvesting of trees and shrubs, with negative environmental impacts. Besides, continued consumption of traditional biomass fuels contributes to poor health among users due to excessive products of incomplete combustion and smoke emissions in the poorly ventilated houses common in rural areas [2].
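A rough back-of-the-envelope calculation, using only the figures quoted above (about 1.8 tonnes of firewood per household per year with a three-stone stove and a reported 68-94% reduction from improved stoves), illustrates the scale of the potential savings. The sketch below is a simplified illustration under the assumption that the reduction applies directly to annual household consumption; the variable names are ours, not from the study.

```python
# Rough illustration of potential per-household firewood savings from improved
# cookstoves, based on figures quoted in the text. Assumes the reported
# 68-94% reduction applies directly to annual consumption (a simplification).

BASELINE_TONNES_PER_YEAR = 1.8        # average firewood use with a three-stone stove
REDUCTION_RANGE = (0.68, 0.94)        # reported reduction from improved cook stoves

def annual_savings(baseline_tonnes: float, reduction: float) -> float:
    """Firewood saved per household per year (tonnes) for a given reduction."""
    return baseline_tonnes * reduction

for reduction in REDUCTION_RANGE:
    saved = annual_savings(BASELINE_TONNES_PER_YEAR, reduction)
    remaining = BASELINE_TONNES_PER_YEAR - saved
    print(f"{reduction:.0%} reduction: ~{saved:.2f} t saved, ~{remaining:.2f} t still burned per year")
```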
Proper impact studies are needed to assess the genuine social, environmental and economic benefits of biomass energy sources for households; these impact categories are shown in figure 6. Environmental Impacts As a developing country, Rwanda uses wood and charcoal as cooking and heating fuels. Burning biomass releases carbon dioxide (CO2), a greenhouse gas. However, the plants that are the source of biomass for energy capture almost the same amount of CO2 through photosynthesis while growing as is released when the biomass is burned, which can make biomass a carbon-neutral energy source. The environmental impacts of biomass energy in Rwanda include changes in forest area, degradation, biodiversity, regeneration, ecosystem services and greenhouse gases. Beyond the availability of firewood and charcoal, the most significant impact of wood and charcoal cooking is the effect on people's health and on the environment in general. According to the ministry of health, more than three million Rwandans suffer from respiratory problems every year, of which 13 percent are caused by air pollution. In 2017, deaths linked to poor air quality reached 12,000; of these, 9,040 were due to indoor air pollution and 2,960 to ambient air pollution [12]. The use of alternative or improved cooking technologies brings benefits such as a reduction in respiratory illnesses caused by indoor and outdoor air pollution and in injuries occurring in unsafe kitchen environments, such as burns from contact with the stove's hot surface, scalds from moving pots from a stove that has raised obstructions along its edges, or cuts from contact with sharp edges. The environmental impacts and proposed mitigation measures are summarized in table 4. Social Impacts Charcoal is often blamed for the destruction of the forests, and this is both true and false. In the past, charcoal was one of the factors that contributed to deforestation, although it was not the main factor: land clearing for agriculture, for habitation and for creating tea plantations contributed more to the destruction of the natural forests than did the demand for charcoal. If charcoal were simply abandoned, rural employment and income opportunities would be lost, and urban and low-income households would also have trouble finding cooking fuels. If rural and urban households continue to cut trees for firewood, timber and other uses, the country risks desertification, which is why everybody should be aware of the consequences, embrace safe cooking systems and improve environmental protection. Indoor pollution and illness are the health impacts of the use of biomass energy (charcoal, firewood). Many households across rural Rwanda look to forests as a source of income, cutting down trees to supply growing markets for charcoal and timber; they are, however, urged to embrace sustainable charcoal production to protect the country's forest cover. Environmental protection must give high priority to forest conservation while subsidizing households to use alternative energy sources [11]. Uncontrolled tree cutting has contributed to deforestation in some parts of the country, as shown in figure 7. The changes are meant to reduce the cutting down of trees as well as the respiratory diseases that kill an estimated 12,000 people annually. Economic Impacts By displacing the use of biomass (firewood or charcoal) energy sources, biogas and LPG can help to reduce households' energy expenses.
The use of biomass energy has potentially serious environmental implications unless forests and woodlots are managed more productively and charcoal is used more efficiently countrywide. In this regard, deforestation increases as the demand for energy grows with the population. The energy policy proposes more efficient production and use of biomass energy by households and encourages the shift to non-biomass modern energy and highly efficient biomass technologies by promoting the use of other sources of energy, including cooking gas, biogas, pellets and briquettes. To ensure environmental protection, all households in urban and rural areas are encouraged to use modern gas and stoves for cooking. The distribution of fuel-efficient cookstoves reduces the amount of wood burned in households, which means less harmful smoke, less indoor air pollution and fewer greenhouse gas emissions. The socio-economic impacts for the community and beneficiaries are changes in income, employment, assets, equity, costs and profits. Table 4 can be summarized as follows. Air pollution: biomass contains less than 0.1% sulphur, so the SO2 problem does not arise when biomass is burned. Deforestation: if people harvest wood faster than trees can grow, deforestation results; planting fast-growing trees for fuel and using fuel-efficient cooking stoves can help slow deforestation and improve the environment, and the Government of Rwanda, through REMA, has to set out a strategy to reduce reliance on wood and charcoal. Carbon dioxide balance: CO2 produced when biomass is burned, or when biomass is converted into gas or liquid fuels, does not disturb the CO2 balance in the atmosphere; the strategies undertaken by the government to reduce dependence on biomass include encouraging the use of institutional biogas and liquefied petroleum gas and ensuring affordable prices. Ash from firewood or charcoal production: burnt biomass leaves ash that is rich in plant nutrient minerals and can be used as fertiliser (concerned people). Ash from waste biomass plants: some landfills use ash that is considered safe as a cover layer, and it is also used to make concrete blocks and bricks; biomass waste should be separated before burning (plant managers). Ecological conditions: land becomes green; all urban, rural and refugee households are encouraged to use gas or other improved cooking technologies (local administration authorities and REMA). 6. Discussions The Transition from Cooking with Charcoal to LPG Gas in Kigali The transition from cooking with charcoal to an alternative energy source, particularly LPG, is still at the beginning and faces some difficulties, like cost and cultural habits, but the trend shows that more people, especially in urban areas, are attracted by LPG and are willing to change. Through the government of Rwanda's continuous sensitization campaigns, the population is becoming aware of the ecological risk of cooking with wood and charcoal and understands the need to adopt clean fuels for cooking. The ministry of environment is set to ban the use and supply of charcoal in Kigali City as it steps up efforts to protect the environment by reducing the use of wood fuel. The use of charcoal has been cited as the main driver of deforestation and indoor air pollution, and the move could help reduce Rwanda's reliance on wood fuel from 80 percent to 42 percent by 2024.
The government has realized that many households in Kigali city consume a large share of charcoal, which is a threat to forests across the country, even though they can afford cooking gas. The government has developed an initiative dubbed "Pay as You Cook" that will make cooking affordable to the public. The intensive use of charcoal was undermining Rwanda's efforts to achieve 30 percent national forest cover and to protect forests, and the use of cooking gas is promoted as a way of saving forests. The master plan suggests that the demand for LPG is set to rise to more than 240,000 tonnes by 2024, from the current 10,000 tonnes, which will also reduce respiratory diseases caused by overreliance on wood fuel [12]. Kigali has the most important woodfuels market in Rwanda and draws its supply from all parts of the country. Since the overall supply/demand balance is negative (assuming the medium productivity variant), there is no supply zone for Kigali's demand that can be considered entirely sustainable [14]. Figure 8 provides an outline of the origins of the charcoal and, tentatively, of the fuelwood sold in Kigali. How did the people of Kigali react to the government's decision to stop the use of charcoal in cooking and heating? Most likely people think that transitioning to gas in the kitchen will be more expensive, and they should be made aware of all the government strategies and tools to help them make a smooth transition. Global Climate Change Impacts First, a significant portion of the wood used for charcoal production is unsustainably harvested. Second, emissions during charcoal production are significant compared to those from charcoal burning. Charcoal is produced via pyrolysis, or thermal degradation, of biomass. This partial combustion, in an oxygen-poor environment, results in the formation of products of incomplete combustion (PICs), such as CH4, CO, alkanes, alkenes, oxygenated compounds and particulate matter. In ideal biomass combustion, only CO2 and H2O would be formed, as shown in figure 9. In practice, however, various amounts of PICs are produced, depending upon operating conditions. An aerosol is a suspension of fine solid particles or liquid droplets in air or another gas [13]. Aerosols can be natural, such as fog, mist, dust, forest exudates and geyser steam, or anthropogenic, such as particulate air pollutants and smoke. Non-methane hydrocarbons (NMHCs) are emitted from biomass burning and are important reactive gases in the atmosphere, since they provide a sink for hydroxyl radicals and play key roles in the production and destruction of ozone in the troposphere. Energy is needed for cooking as well as for lighting. Most of the energy sources used are biomass-based (fuelwood, charcoal, grass, dung, crop residues) or are extracted from the natural environment (peat, vegetable oils, beeswax). Usually, there are more energy options in urban areas (e.g. LPG and grid-supplied electricity) than in rural areas. A major determinant of a household's decision on fuel use is the price of fuel. Over time, a household may either add to its energy sources or appliances or replace one by another. The determinants of this change, as well as the process of change, are reflected in the concept shown in figure 10. Conclusions Rwanda relies heavily on traditional biomass, such as wood, charcoal and dung (for biogas), with more than 83 percent of households using firewood for daily cooking activities, as do industries such as tea factories.
The increase in demand for cooking fuel has exerted immense pressure on forest resources, and the country aims to reach a potential net reduction in wood use to 5,770,000 tonnes by 2030 through mitigation measures and policies, including developing a modern and efficient charcoal value chain and the use of Liquefied Petroleum Gas (LPG) by middle-class households, especially in urban areas. The government of Rwanda envisages reducing reliance on wood fuel for cooking from 83 percent of households to 42 percent by 2024. The high dependency on inefficient and unclean biomass cooking energy sources has resulted in many adverse environmental, socio-economic and health effects on the population. The government of Rwanda has to put more effort into promoting cooking technologies that use less fuel, such as improved stoves and biogas, and into making Liquefied Petroleum Gas (LPG) available to people according to their income. This would reduce the use of wood for cooking, save trees from being harvested before maturity and protect the forests. Historically, wood and charcoal were among the factors that led to the country's deforestation, but they were not the main factor: land clearing for agriculture, habitation and the creation of tea plantations contributed more to the destruction of the natural forests than did the demand for charcoal. The use of briquette technology to replace fuelwood, together with other alternative energy sources, is a potential solution to address climate change and deforestation while improving the livelihoods of the community and reducing CO2 emissions. Biogas technology can be used in rural areas as an alternative energy source for cooking. The results show that, for biogas generated from agricultural wastes, development of the process has been quite slow because of the initial investment cost required to start production. The direct causes of deforestation come mainly from the rapidly growing population in rural and urban areas and the economic interests of the country. As the population expands, the needs for food (agricultural expansion), wood extraction (logging or wood harvest for domestic fuel or charcoal), housing, infrastructure expansion such as road construction and urbanization, and energy will increase accordingly. Fuel shortages in rural areas have led to wood being used for both heating and cooking, but traditional rural stoves have low combustion efficiency. Moreover, much of the fuelwood used, harvested directly from the forest, is of poor quality. This phenomenon initially contributed to the country's deforestation and economic instability. Rwanda's government must continue mobilizing and raising awareness among the citizens of Kigali city and the country as a whole so that energy supplies shift from wood to improved cooking technologies and later to Liquefied Petroleum Gas (LPG) or electricity.
5,976.6
2020-09-01T00:00:00.000
[ "Engineering" ]
Subshifts of finite type which have completely positive entropy Domino tilings have been studied extensively for both their statistical properties and their dynamical properties. We construct a subshift of finite type using matching rules for several types of dominos. We combine the previous results about domino tilings to show that our subshift of finite type has a measure of maximal entropy with which the subshift has completely positive entropy but is not isomorphic to a Bernoulli shift. Introduction Subshifts of finite type are a fundamental object of study in dynamics. A Z d subshift of finite type is defined by a finite set A and a finite list of forbidden words Forbidden ⊂ A [−n,n] d . The state space is S ⊂ A Z d such that S = s ∈ A Z d : T w (s) ∈ Forbidden ∀w ∈ Z d where the shift maps T w : The topological entropy, k, of a subshift of finite type is defined to be where Admissible(k) be the number of words in A [−k,k] d that do not contain a forbidden word. Subshifts of finite type are fundamentally topological objects. However the study of subshifts of finite type often includes measure-theoretic question. This is possible because for every subshift of finite type there exists an invariant measure with measure-theoretic entropy equal to the topological entropy [17]. Using these measures we can study the ergodic theoretic properties of a subshift of finite type with respect to its measures of maximal entropy. The ergodic theoretic properties of one dimensional subshifts of finite type are well understood. The state space S is non-empty if and only if it contains periodic points. Also there is an algorithm to calculate the topological entropy of a one dimensional subshift of finite type. Under very mild conditions a one dimensional subshifts of finite type has a unique measure of maximal entropy. Finally if a one dimensional subshift of finite type is mixing with respect to its measure of maximal entropy then it is measurably isomorphic to a Bernoulli shift. See [17] for more details about subshifts of finite type. In contrast two (and higher) dimensional subshifts of finite type may have very different behaviors. In fact none of the properties listed above necessarily apply to all two dimensional subshifts of finite type. For instance given an alphabet and a list of forbidden words it may be a difficult problem to determine if the state space S is empty or not. In fact there exists subshifts of finite type for which S is not empty but S contains no periodic points. Because of this there is no algorithm which can determine whether a subshift of finite type has nonempty state space [1]. It can also be difficult to calculate the topological entropy of a subshift of finite type even for some of the simplest subshifts of finite type (such as the hard sphere model). The measure theoretic properties of two dimensional subshifts of finite type can also be quite complicated. Ledrappier showed that there are Z 2 subshifts of finite type which are mixing but not mixing of all orders [16]. It remains a long standing open question as to whether there are actions of Z which are mixing but not mixing of all orders. Burton and Steif used ideas from statistical physics to show that there are strongly irreducible subshifts of finite type with multiple measures of maximal entropy, and these measures of maximal entropy are not weak mixing [4]. One particular subshift of finite type that has been very well studied is the domino tiling of the plane [3] [12]. 
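The displayed formulas in the passage above (the state space of a Z^d subshift of finite type and its topological entropy) are garbled by extraction. The block below gives the standard textbook formulation that the text is describing; the notation is a reconstruction and may differ slightly from the author's.

```latex
% Standard definitions matching the description above (reconstructed, not verbatim).
\[
  S = \bigl\{\, s \in A^{\mathbb{Z}^d} : \bigl(T^{w}s\bigr)\big|_{[-n,n]^d} \notin \mathrm{Forbidden}
        \ \text{for all } w \in \mathbb{Z}^d \,\bigr\},
  \qquad (T^{w}s)(v) = s(v+w).
\]
\[
  h_{\mathrm{top}}(S) = \lim_{k \to \infty} \frac{\log \mathrm{Admissible}(k)}{(2k+1)^{d}},
  \quad \text{where } \mathrm{Admissible}(k) \text{ is the number of words in } A^{[-k,k]^d}
  \text{ containing no forbidden word.}
\]
```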
We will construct a subshift of finite type that is a variant of the domino tiling of the plane which we call the colored domino tiling. We make use of some of the results about the domino tiling of the plane to analyze the ergodic theoretic properties of the colored domino tiling. A transformation has completely positive entropy if every nontrivial factor of the transformation has positive entropy. We will show that the colored domino tiling has completely positive entropy but is not isomorphic to a Bernoulli shift. The rest of this paper is organized as follows. In the next section we review some of the results about the domino tiling. In Section 3 we construct a zero entropy extension of the domino tiling which we call the colored domino tiling. In Section 4 we construct the subshift of finite type. In Section 6 we calculate the entropy of the subshift of finite type and identify a measure of maximal entropy. In Section 5 we show the connection between the two processes. Then in Section 7 we show that our subshift of finite type has completely positive entropy. Finally in Section 8 we show that it is not isomorphic to a Bernoulli shift. We conclude this section with an open question. The subshift of finite type that we construct has multiple measures of maximal entropy. We show that with respect to one of them the subshift has completely positive entropy but is not isomorphic to a Bernoulli shift. With other measures of maximal entropy the subshift is isomorphic to a Bernoulli shift. This leads us to the question: Does there exist a subshift of finite type which has a unique measure of maximal entropy and with respect to that measure the subshift has completely positive entropy but is not isomorphic to a Bernoulli shift? We believe the answer to be yes and that the techniques in this paper could be extended to construct such an example. Domino Tilings and the Height Function A domino tiling is a map x from (Z + 1 2 ) 2 → (Z + 1 2 ) 2 such that 1. ||x(u) − u|| 1 = 1 for all u and We call this a domino tiling because we can think of this as for each u ∈ (Z + 1 2 ) 2 there is a 2 by 1 domino whose two squares are centered at u and x(u). We let X be the space of all domino tilings. There are two natural shift operations on X given by for all x ∈ X and u ∈ (Z+ 1 2 ) 2 . Burton and Pemantle studied the ergodic theoretic properties of the domino tiling [3]. There is a unique measure of maximal entropy µ on X. The action (X, µ, T left , T down ) is isomorphic to a Z 2 Bernoulli shift. The height function h x of a domino tiling x is an integer valued function on Z 2 . It changes by 1 along each edge of the graph that is on the boundary of a domino and changes by 3 along each edge of the graph that bisects a domino. More precisely for a tiling x the height function h x : Z 2 → Z such that for any z, z ′ ∈ Z 2 with ||z − z ′ || 1 = 1 If we further require that h x (0, 0) = 0 then there are exactly two choices for h. We pick one arbitrarily by first putting a checkerboard pattern on the plane with a white in the square between (0,0) and (1,1). Then we say pick the height function so that if v, w ∈ Z 2 and the edge between v and w bisects a domino in x then h x (v)−h x (w) = 3 if moving from v to w there is a white square on your left and h x (v)−h x (w) = −3 if moving from v to w there is a white square on your right. The height function has been extensively studied [2] [11] [12] [13] [19] [20]. In Figure 1 we show an example of a domino tiling y and its corresponding height function. 
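Since the displayed height-function rule is lost in the extraction above, the following restates it in the standard form that the surrounding text describes in words; the sign convention on bisecting edges is the one given in the text (determined by the checkerboard colouring), and the signs on boundary edges are fixed by requiring the increments around each unit square to sum to zero. This is a reconstruction of a well-known definition, not a verbatim quotation.

```latex
% Height-function increments for a tiling x, along a lattice edge from z to z'
% with ||z - z'||_1 = 1 (reconstructed standard rule):
\[
  h_x(z') - h_x(z) =
    \begin{cases}
      \pm 1, & \text{if the edge } (z,z') \text{ lies on the boundary of a domino of } x,\\[2pt]
      \pm 3, & \text{if the edge } (z,z') \text{ bisects a domino of } x,
    \end{cases}
\]
```

with, on bisecting edges, the value +3 when a white square lies to the left of the direction of travel and -3 when it lies to the right, exactly as stated in the text.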
One useful way to think of a domino tiling as a graph. It has vertices (Z + 1 2 ) 2 and an edge between u and x(u) for all u. The second condition of a domino tiling implies that every vertex in this graph has degree one. With this interpretation we can consider the union of two domino tilings. This interpretation will be very useful for studying the height function. For any domino tiling x and any N ∈ N we definẽ The following theorem of Kenyon shows that cycles of y ∪y ′ are critical to understanding the difference in the height functions. Theorem 2.2. [12] For all N , x and y, y ′ ∈x N 1. If there exists a path from u to v which does not cross a cycle of 2. h y − h y ′ increases or decreases by 4 every time you cross a cycle of y ∪ y ′ and 3. conditioned on y, y ′ ∈x N and the cycles in y ∪y ′ , the increases or decreases of h y − h y ′ are mutually independent for all the cycles in y ∪ y ′ . To illustrate this theorem in Figures 2 and 3 we show the previous domino tiling y and another domino tiling y ′ such that y ∪ y ′ has a cycle. The height functions for both x and y are shown. Note that the height function in the second tiling agrees with the height function in the first outside the cycle and is four greater than the height function for the first tiling inside the cycle. In [12] Kenyon proved that asymptotically the fluctuations in the height function are conformally invariant. The precise version of this theorem that we use is as follows. Define Square ′ l ⊂ R 2 be the boundary of the square with vertices at (±l, ±l) and let Square l = Z 2 ∩ Square ′ l . Define the annulus Annulus l to be the region between Square ′ l and Square ′ 2l . Lemma 2.3. [12] There exists δ > 0, N 0 and p > 0 such that for all N > N 0 and all height functions H and H ′ with Proof. This version of the conformal invariance for height functions is stated in the discussion after Theorem 1 in [12]. The colored domino process The height function h x is defined on Z 2 so we can easily extend h x by linearity to the wireframe, (Z × R) ∪ (R × Z). Then we define A colored domino tiling of the plane consists of a domino tiling of the plane x and a function C : Colored Points x → {1, 2} which satisfies the following coloring rule. satisfies the coloring rule if for every u, v ∈ Colored Points x with ||u − v|| < 10 and g The coloring rule is a global rule as it applies to all u, v ∈ Colored Points x . In Section 5 we will show that we can define a "local coloring rule," that says if u and v are sufficiently close and g x (u) = g x (v) then C(u) = C(v) which implies the coloring rule. The fact that there is a local coloring rule which is equivalent to Definition 3.1 will allow us to show the space X * is isomorphic to the state space of a subshift of finite type. Let We consider the measure µ * = µ × P on X * . There are two natural actions T * left and T * down on X * that preserve µ * . One is given by and the other is defined in the analogous manner. and h x is not so simple. Also g x and Colored Points x do not behave nicely under T left and T down but do behave well under T 2 left and T 2 down . If we tried to construct a subshift of finite type which has an isomorphic state space and the shifts T left and T down , then we would need different tiles for the white squares and the black squares in the underlying checkerboard coloring of the plane. This would result in a two point factor. 
We conclude this section by proving some facts about g x which will be useful (in Section 5) to show that the coloring rule can be generated by a local coloring rule. Combining these two proves the lemma. Thus for each x ∈ X and i, k ∈ Z there exists a unique j ∈ R such that g x (i, j) = k. Proof. Across any edge in the lattice h x can change by at most 3 so A similar argument gives us the following. The last equation is true because you can't move in a straight line past two squares and have a white square on your left (or right) both times. Thus h x can not increase (or decrease) by 3 on two consecutive edges and can only increase or decrease by 4 over any segment of length two. Thus Another version of this principle is the following. Proof. Both h x and g x can change by at most 3 across any horizontal edge and by Lemma 3.4 g x can change by at most 7 across any vertical edge. As g x (0, 0) = 0 this proves the lemma. The subshift of finite type First we describe the set of tiles that we will use in our subshift of finite type. Let Tiles be the set of all possible ways to tile [0, 2] × [0, 2] with dominoes whose corners are at points in Z 2 . We show a representation of the set Tiles in Figure 4. By Lemma 3.3 for each element D ∈ Tiles we can define h D : {0, 1, 2} 2 → Z in the natural way and extend it by linearity to g D where Then let Colored Points D be the set of (x, y) in We can extend the map c ∈ {1, 2} Colors(D) to a map We let the state space S ⊂ A Z 2 be the set of all points that satisfy the adjacency rule. As we will see in Lemma 5.3 the adjacency rule forces a complete domino tiling of the plane. We illustrate the alphabet A and the adjacency rule with the following figures. We show a representation of the set Tiles in In the upper pictures we chose a coloring of A and B. The adjacency rule for the subshift of finite type allow us to place these colored tiles to be placed next to each other horizontally but not vertically. The two processes are the same In Section 3 we defined X * and in Section 4 we defined S, the state space for our shift of finite type. Now we will construct a natural bijection between these two spaces. The definition of (x, c) i,j defines a map from Our goal for this section is to prove the following. Lemma 5.3. M is a shift invariant bijection from X * to S. To prove this we will use the following lemmas. Proof. Since g x is defined by linearity both min g D and max g D are achieved on {0, 1, 2} × {0, 1, 2}. By Lemma 3.5 g x (i, j) ≤ g x (i + 1, j + 2) and g x (i, j) ≥ g x (i + 1, j − 2). Combined with the first statement these imply that These next two lemmas show how the local adjacency rule implies the global coloring rule. It does so because for any to u, v with g x (u) = g x (v) there exists an intermediate sequence {w i } connecting u to v with g x (w i ) = g x (u) = g x (v) that are all colored the same. then |j − j ′ | < 2 and there exists m, n ∈ Z such that either and the other is in Also there exists with g x (i ′′ , j ′′ ) = g x (i, j). there exists a sequence w 0 , . . . , w k such that Proof. By Lemma 5.4 we can find an appropriate w 1 = (a, b) ∈ Z × R. By Lemma 3.4 we can find w ′ i ∈ (a+i−1)×R such that g x (w ′ i ) = g x (v). If for some i the points w ′ i and w i ′ +1 do not satisfy the last condition, then Lemma Proof of Lemma 5.3. To show that M is a shift invariant bijection we need to establish the following properties. The first three follow straight from the definition of M , the coloring rule and the adjacency rule. 
To show that M is invertible we first show that any element of s ∈ S generates a complete domino tiling x s of the plane. Then we show that the coloring of s satisfies the coloring rule of Definition 3.1. Any horizontal line segment from (i, j) to (i + 1, j) on the boundary of a tile D which bisects a domino has three elements of Colored Points D in its interior (as g changes by 3 across such an edge), while a horizontal line segment which does not bisect the boundary of a domino has no elements of Colored Points A in its interior(as g changes by 1 across such an edge). Likewise any vertical line segment from (i, j) to (i, j + 1) on the boundary of a tile A which bisects a domino has zero or six elements of Colored Points A in its interior (as g changes by 1 or 7 across such an edge) while a vertical line segment which does not bisect the boundary of a domino has two or four elements of Colored Points A in its interior (as g changes by 3 or 5 across such an edge). Thus if two tiles can be placed next to each other under the adjacency rule then their domino tilings are consistent and every element of s generates a domino tiling x s of the plane. Now we verify the coloring rule. For every v, v ′ ∈ (Z × R) ∪ (R × Z) with g xs (v) = g xs (v ′ ) by Lemma 5.6 there is a sequence w i ∈ Z 2 with 1. v = w 0 , 2. v ′ = w k and 3. g xs (w i ) = g xs (w i+1 ) for all i = 0, 1, . . . , k − 1 and 4. for all i = 0, 1, . . . , k − 1 there exist m, n ∈ Z such that Then the last two conditions along with the adjacency rule imply that C(w i ) = C(w i+1 ) for every i = 0, . . . , k − 1. Thus C(v) = C(v ′ ) and the coloring rule of Definition 3.1 is satisfied. Thus M −1 (s) exists and M is a shift invariant bijection. Entropy Up until now we have treated S as just a topological object. Using M we can put a measure on S by looking at the pushforward of µ * . To minimize notation we refer to the pushforward of µ * as µ * as well. We will show that µ * is a measure of maximal entropy for our subshift of finite type. Let The first statement implies that the entropy of the colored domino tiling is at least four times the entropy of the domino tiling. The second implies which proves the lemma. By Lemma 6.1 the measure µ * is a measure of maximal entropy. As we could change µ * to µ× Bernoulli(1/3,2/3) and not change the entropy of the colored domino tiling, the subshift of finite type does not have a unique measure of maximal entropy. But µ * is in some sense the most natural measure of maximal entropy. It is the measure we get by generating measuresμ N by putting equal mass on all colored domino tilings of [−N, N ] 2 and the taking the limit as N goes to ∞. We can show that this limit converges by the uniqueness of the measure of maximal entropy for the domino tiling. Let µ ′ be any weak limit of the sequence of measures above. Then µ ′ is a measure of maximal entropy on S. Then project µ ′ onto X to get a measureμ on X. The entropy ofμ is the same as the entropy of µ ′ . Thusμ is a measure of maximal entropy on X. As there is a unique such measure we have thatμ = µ. In every of our sequence of measures conditioned on a domino tiling the colors are independent with all colorings equally likely. We say B ⊂ X is a cylinder set determined by [−n, n] 2 if x ∈ B is determined by x(u) for all u ∈ [−n, n] 2 . We write B ∈ Cyl [−n,n] 2 . Let B be a cylinder set for dominoes and c 1 and c 2 be colorings of the colored points in B. Thenμ N (B, c 1 ) =μ N (B, c 2 ). This carries over to any weak limit. Thus µ ′ = µ * . 
Completely positive entropy In order to show that our subshift has completely positive entropy we take the following steps. • We take a domino configuration x ∈ X and a cylinder set B ∈ Cyl [−n,n] 2 and condition on B ∩x 2 k n . • Then we show in Lemma 7.1 that the conformal invariance of the height function implies that for any n and m if k is sufficiently large then for all B and most x the conditional probability that the union of two domino tilings in B ∩x 2 k n have at least m cycles that surround Square n and are inside Square 2 k n is close to 1. • Next in Lemmas 7.2 and 7.3 we show that this implies that for all ǫ > 0 if k is sufficiently large then for all j ∈ Z • Finally we show how (2) implies that the colored domino process has completely positive entropy. When we defined the height function we arbitrarily chose to set h x (0, 0) = 0 for all x. Sometimes it is more convenient to chose to set the function to be zero at a point of the form (−l, −l). For this reason we define Now define B * x,m,n,k = (x ′ , y ′ ) : x ′ ∪y ′ has ≥ m cycles surrounding Square n and x ′ , y ′ ∈ B∩x 2 k n Here is the key fact about the fluctuations of the height function that we need in this paper. Proof. For a point x ′ ∈ B ∩x 2 k n we get a sequence of functions h x ′ ,n | Square n , h x ′ ,2n | Square 2n , . . . , h x ′ ,2 k n | Square 2 k n . Thus the setx 2 k n generates a measure on sequences of functions. To independently sample x ′ and y ′ from B ∩x 2 k n we first independently sample h i : Square 2 i n → Z for i = 0, . . . , k and h ′ i : Square 2 i n → Z for i = 0, . . . , k according to this measure. Then we sample x ′ and y ′ such that h x ′ ,n = h 0 , . . . , h x ′ ,2 k n = h k and h y ′ ,n = h ′ 0 , . . . , h y ′ ,2 k n = h ′ k . Fix δ > 0 smaller than the δ in Theorem 2.3. The difference is an ergodic sum. Repeated uses of the ergodic theorem show that for most x for all large l the probability ||h l || ∞ < δ2 l n is high. This implies that for all k sufficiently large for a set of x of measure at least 1 − δ the sequences h 0 , . . . , h k and h ′ 0 , . . . , h ′ k have the property that with probability 1 − δ/2 For l with ||h l || ∞ , ||h l−1 || ∞ , ||h ′ l || ∞ , ||h ′ l−1 || ∞ < δ2 l n and l large enough for Corollary 2.3 to hold this implies that there exists p > 0 such that the probability that x∪y has a cycle in Annulus l is at least p. This happens independently for all such l. Thus for any m we can choose K such that for k > K the set of x such that µ × µ B * x,m,n,k |x ′ , y ′ ∈ B ∩x 2 k n is greater than 1 − δ. Proof. Let Cycles be any set of at least m cycles that surround Square n and are inside Square 2 k n . Let B x,2 k n,Cycles be the set of pairs (x ′ , y ′ ) such that (x ′ , y ′ ) ∈ B ∩x 2 k n and that the set of cycles in x ′ ∪ y ′ that surround Square n is Cycles. Then by Theorem 2.2 we have that the conditional probability that µ × µ h x ′ ,2 k n (0, 0) = h y ′ ,2 k n (0, 0) B x,2 k n,Cycles < 1/ √ m. We have that B * x,m,n,k = Cycles B x,2 k n,Cycles so by Bayes' rule the conditional probability on the left hand side of (3) is a weighted average of conditional probabilities of the form that are on the left hand side of (4). As all of these are less than 1/ √ m, this proves the lemma. Lemma 7.3. Given n ∈ N, cylinder set B ∈ Cyl [−n,n] 2 and ǫ > 0 there exists K and a set G ⊂ X with µ(G) > 1 − ǫ such that for all k > K, all x ∈ G and all j ∈ Z µ h x ′ ,2 k n (0, 0) = j x ′ ∈ B ∩x 2 k n < ǫ. Proof. 
Choose m ∈ N and δ such that Then we get where (7) is an application of Bayes' rule and (8) follows from Lemmas 7.2 and 7.1. Taking square roots completes the proof. Now that we have established this lemma we can combine this with Theorem 2.1 to prove that our subshift of finite type has completely positive entropy. By [6] to prove the theorem it suffices to show that for all n and ǫ > 0 there exists N and a set of (x, c) of measure at least 1 − ǫ such that for all E ∈ {1, 2} [−20n,20n+2] and cylinder sets B ∈ Cyl [−n,n] 2 We have that By Theorem 2.1 the domino tiling is isomorphic to a Bernoulli shift thus it has completely positive entropy. Thus by [10] for any δ > 0 we can chose N large enough so that there exists a set of h of measure at least 1 − δ such that for all B. Thus we need to show that for all δ > 0 there exists N large enough so that for most x ∈ X and c ∈ {1, 2} Z and all E ∈ {1, 2} [−20n,20n+2] and cylinder sets B By Lemma 7.3 for all δ ′ > 0 we can make k large enough so that for most x and all B max j µ h x,2 k n (0, 0) = j x 2 k n ∩ B < δ ′ . Thus we can choose δ ′ small enough so that the weak law of large numbers implies (10) is satisfied for most c. For a sufficiently small choice of δ combining (9) and (10) proves the theorem. Not Bernoulli For c, d ∈ {1, 2} N we definē where j is the largest number such that there exists subsequences 1 ≤ n 1 < n 2 < · · · < n j ≤ N and 1 ≤ m 1 < m 2 < · · · < m j ≤ N such that c ni = d mi for all i = 1, . . . , j. The same definition holds in the case that one string has length less than N . A simple combinatorial argument proves the following standard lemma. The lemma is well known but we include the proof here for the sake of completeness. Then for r ′ < .1 and N > N 0 Since this holds for all N sufficiently large there exists r > 0 such that (11) is true for all N . A slight generalization is that there exists r > 0 such that for all N sufficiently large P×P (c, d) : ∃a, b ∈ [−100N, 100N ] and K > N such thatf K (σ a c, σ b d) < .01 < e −rN . (12) The main tool that we use in this lemma is that a goodd [−N,N ] 2 matching generates a goodf matching of the colorings. We make that precise in the following lemma. Before we prove this lemma we first introduce a few definitions and then prove a few quick lemmas. Note that any x, y and i generate a natural bijection F from R to R by F (r) = F x,y,i (r) = g −1 y (2i, g x (2i, r)). Lemma 8.3. F is increasing and Proof. By Lemma 3.4 both g x (i, r) and g y (i, r) are piecewise linear increasing in r and have derivatives between 1 and 7. This implies that F is piecewise linear and increasing with derivative between 1/7 and 7. Let Colors that Agree x ′ ,i,N = g x ′ (2i, l) : l ∈ 2Agree i,N + [0, 2] ∩ Z and Colors that Agree y ′ ,i,N = g y ′ (2i, l) : l ∈ 2Agree i,N + [0, 2] ∩ Z. This next lemma is the reason for our use of the notation Colors that Agree x ′ ,i,N . Lemma 8.6. If t ∈ Colors that Agree x ′ ,i,N then F (t) ∈ Colors that Agree y ′ ,i,N and c t = d F (t) . Combined with (14) this implies that F (t) ∈ Colors that Agree y ′ ,i,N if t ∈ Colors that Agree x ′ ,i,N . The definition of Agree i,N also implies that c t = d F (t) . Proof of Lemma 8. Then choose the sequence n 1 , . . . , n k to be elements of Colors that Agree x ′ ,i,N in increasing order and chose the sequence m 1 , . . . , m k to be elements of Colors that Agree y ′ ,i,N in increasing order. Then by Lemma 8.6 F (n j ) = m j and c a+nj = d b+mj for all j. for all a, b and K such that −100N ≤ a, b ≤ 100N and K > N . 
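The displayed definition of the matching distance f̄ is missing above. A plausible reconstruction, consistent with how f̄ is used later in the argument (a small value of f̄ corresponds to a good monotone matching of the two colour strings), is the following; the exact normalisation is an assumption rather than a quotation from the paper.

```latex
% Assumed form of the matching distance between colour strings c, d of length N:
\[
  \bar{f}_N(c,d) = 1 - \frac{j}{N},
\]
% where j is the largest integer for which there exist indices
% 1 <= n_1 < ... < n_j <= N and 1 <= m_1 < ... < m_j <= N with c_{n_i} = d_{m_i}
% for all i, exactly as described in the text.
```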
By Lemma 8.2, for those quadruples the integrand is bigger than .0001. Thus the colored domino tiling is not isomorphic to a Bernoulli shift. Theorem 8.8. The colored domino tiling (X*, T*_left, T*_down) is a subshift of finite type. The measure µ* on X* is a measure of maximal entropy for which the subshift has completely positive entropy but is not isomorphic to a Bernoulli shift. Proof. This is a combination of Lemma 6.1 and Theorems 7 and 8.
7,586.6
2010-12-01T00:00:00.000
[ "Mathematics" ]
A review on the use of large language models as virtual tutors Transformer architectures contribute to managing long-term dependencies for Natural Language Processing, representing one of the most recent changes in the field. These architectures are the basis of the innovative, cutting-edge Large Language Models (llms) that have produced a huge buzz in several fields and industrial sectors, among which education stands out. Accordingly, these generative Artificial Intelligence-based solutions have directed the change in techniques and the evolution in educational methods and contents, along with network infrastructure, towards high-quality learning. Given the popularity of llms, this review seeks to provide a comprehensive overview of those solutions designed specifically to generate and evaluate educational materials and which involve students and teachers in their design or experimental plan. To the best of our knowledge, this is the first review of educational applications (e.g., student assessment) of llms. As expected, the most common role of these systems is as virtual tutors for automatic question generation. Moreover, the most popular models are gpt-3 and bert. However, due to the continuous launch of new generative models, new works are expected to be published shortly. Introduction Artificial Intelligence (ai) refers to the synthetic capabilities of computer science applications to perform tasks that usually require human intelligence (e.g., adaptation, learning, reasoning, etc.) (Sarker, 2022; Cooper, 2023). The recent technological advancements within the ai field have led to relevant changes in business and research, the economy, and society (i.e., mega-trends) that are predicted to continue (Estigarribia et al, 2022; Haluza and Jungwirth, 2023; Rasa et al, 2023). The most relevant change that perfectly exemplifies the impact of the mega-trends above is the transformer architecture, which contributes to managing long-term dependencies for Natural Language Processing (nlp) (Tay et al, 2023). Transformers are the basis of the innovative, cutting-edge Large Language Models (llms) that have produced a huge buzz in several fields and industrial sectors (MacNeil et al, 2022a). In this line, Chatgpt achieved more than 1 million users within the first five days of its release 1. Accordingly, llms have been used in economy and finance (Alshater, 2022), journalism (Pavlik, 2023), medicine (O'Connor and ChatGPT, 2023), and education (Sallam, 2023), among others. However, as with other technological advancements, llms have experienced resistance from the community, a common evolutionary and social psychology phenomenon (Tobore, 2019).
Regarding the learning field, during the 21st century education has experienced a profound change in methods and content. Specifically, a flexible and multidisciplinary environment is sought, where the student can actively participate in their learning process, promoting more autonomous and ubiquitous studying thanks to ai advancements (Baidoo-Anu and Ansah, 2023; Li et al, 2023). The scientific community has researched the use of ai techniques like Machine Learning (ml) models for training purposes ever since their inception (Hochberg et al, 2018; Talan, 2021; Huang and Qiao, 2022), causing progressive advances towards autonomous high-quality learning (Han, 2018; Demircioglu et al, 2022). Mainly, ai has driven the technological transition in this field regarding instructional applications, contents, platforms, resources, techniques, tools and network infrastructure (Roll and Wylie, 2016). This transition also involves changes in the leading roles of the education systems, teachers and students, since this new digital education environment requires new digital competencies and reasoning patterns (Jensen et al, 2018; Zhou et al, 2023). However, although promising, the advances offered by ai are still far from becoming a standardized tool in the educational field due to its early state and the need for training in the use of these solutions to take the most advantage of them. Of particular interest is the impact of those applications that leverage llms, framed within the generative ai field and based on ml techniques. They enable hands-on learning and are common practice in the classroom nowadays. Compared to previous ai solutions and traditional methodologies, which focused primarily on modifying the textual input using correction, paraphrasing and sentence completion techniques, llms generate on-the-fly human-like utterances, hence their popularity, especially among students and teachers (Rudolph et al, 2023). Current advanced llms can enhance pedagogical practice and provide personalized assessment and tutoring (Sok and Heng, 2023). Consideration should be given to the cooperation between llm-based systems and humans, given the experience and scientific knowledge of human agents along with their capabilities for creativity and emotional intelligence (Zhang et al, 2020; Korteling et al, 2021). Note that these ai-based systems present advantages in specific educational tasks as self-learning tools and virtual tutors. Specifically, they enable automatic answer grading (Ahmed et al, 2022), explanation generation (Humphry and Fuller, 2023), question generation (Bhat et al, 2022) and problem resolution (Zong and Krishnamachari, 2022). Furthermore, when used for text summarization (Phillips et al, 2022), they help synthesize content and improve the student's abstraction capabilities. Their use as learning software in virtual assistants is highly relevant to flexible learning (Wang et al, 2022; Yamaoka et al, 2022). Furthermore, their language intelligence capabilities make them an appropriate tool for code correction (MacNeil et al, 2022b). 1 Available at https://twitter.com/gdb/status/1599683104142430208, April 2024.
More in detail, llms are trained with massive textual data sets to create human-like utterances. They perform a wide variety of nlp tasks, taking advantage of pre-training and fine-tuning pipelines (Kasneci et al, 2023). Note the relevance of both the pre-training and the prompt engineering stages. The first concept refers to training llms with miscellaneous large data sets, while the second refers to adapting the model to a particular task (Kasneci et al, 2023). Consequently, the quality of the llms' output highly depends on the input data and on the prompt design, i.e., prompt engineering (Cooper, 2023). The latter technique ranges from zero-shot learning, widely popular when applied to llms (Russe et al, 2024), to few-shot learning. Note that in zero-shot learning the model follows task instructions only, since the end user provides no examples. In contrast, in few-shot learning, the model learns from the demonstrations available (i.e., few-shot text prompts). Among the most popular llms, bert (Devlin et al, 2019), gpt-3 (Brown et al, 2020), gpt-3.5, gpt-4 and t5 (Raffel et al, 2020) deserve attention. bert (Bidirectional Encoder Representations from Transformers) was released by Google in October 2018 (slightly after gpt-1, dated June 2018). It is a pre-trained transformer-based encoder model that can be fine-tuned on specific nlp tasks such as Named Entity Recognition (ner), question answering and sentence classification. Moreover, the gpt-3, gpt-3.5 and gpt-4 (Generative Pre-trained Transformer) models were released by OpenAI in 2020, 2022 and 2023, respectively. More in detail, gpt-4 is already deployed in the Chatgpt application, which, compared to other llms, can generate context-aligned responses and interact naturally with end users as a peer. This model goes beyond producing reports and translating assessments by creating source code (Haleem et al, 2022) and responding to complex questions posed by students in real time (George et al, 2023). It can also show creativity to some extent in writing (Baidoo-Anu and Ansah, 2023). The t5 (Text-to-text Transfer Transformer) model was released by Google in 2020, following the encoder-decoder transformer architecture. Even though its configuration is similar to bert, it differs in some steps of the pipeline, like pre-normalization (Pipalia et al, 2020).
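As a minimal illustration of the zero-shot versus few-shot distinction described above, the sketch below assembles two prompt strings for a hypothetical question-generation task. The instruction wording, the example items and the build_prompt helper are illustrative assumptions, not taken from any of the reviewed systems, and a real system would still need to send the resulting prompt to an llm.

```python
# Minimal sketch of zero-shot vs. few-shot prompt construction for a
# question-generation task. All strings are invented for illustration.

INSTRUCTION = "Generate one multiple-choice question about the following passage."

FEW_SHOT_EXAMPLES = [
    ("Water boils at 100 degrees Celsius at sea level.",
     "Q: At what temperature does water boil at sea level? A) 90 C  B) 100 C  C) 110 C"),
]

def build_prompt(passage: str, examples=None) -> str:
    """Return a zero-shot prompt if examples is None, otherwise a few-shot prompt."""
    parts = [INSTRUCTION]
    for source_text, question in (examples or []):
        parts.append(f"Passage: {source_text}\n{question}")  # demonstrations (few-shot)
    parts.append(f"Passage: {passage}\nQ:")                  # the actual task input
    return "\n\n".join(parts)

zero_shot = build_prompt("Photosynthesis converts light energy into chemical energy.")
few_shot = build_prompt("Photosynthesis converts light energy into chemical energy.",
                        examples=FEW_SHOT_EXAMPLES)
print(zero_shot)
print("---")
print(few_shot)
```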
The rest of this paper is organized as follows. Section 2 describes the methods and materials used in the review. Section 3 presents the discussion on the selected relevant works. Finally, Section 4 concludes the article and details future research.

Methodology

The review methodology followed is composed of two steps: (i) data gathering (Section 2.1), and (ii) screening and eligibility criteria (Section 2.2). Figure 1 details the methods and materials used. More in detail, this review aims to gather knowledge to answer the following research questions:

• RQ1: Which solutions based on llms are being developed (e.g., for assessment tasks)? (i.e., excluding multidisciplinary solutions that were not specifically intended for learning assistance)
• RQ2: Which educational solutions based on llms involved students or teachers at any level of the development process (e.g., design, evaluation)?

Data gathering

The data were extracted using Google Scholar with two search queries, specially designed to gather works within the educational field that leverage llms:

1. "education" AND "student" AND ("large language model" OR "GPT-3" OR "GPT-3.5" OR "GPT-4" OR "ChatGPT") -"review"
2. "education" OR "student" AND ("large language model" OR "GPT-3" OR "GPT-3.5" OR "GPT-4" OR "ChatGPT") -"review"

Both queries were restricted temporally to works from 2020 onwards, and the second query was applied to the title content exclusively. Note that duplicated elements and works that do not use llms or do not indicate which model is exploited were not considered. The same applies to the works that merely assess the performance of llms. In the end, 342 records were identified.

Screening and eligibility criteria

This process was designed to identify works within the field of study that were written in English, while at the same time discarding theoretical and review contributions (i.e., those that do not propose an llm-based solution but review existing solutions or hypothesize on the impact of llms for educational purposes). The manual screening based on the above eligibility criteria resulted in 29 records. Note that this process distinguishes published articles and conference papers from pre-printed and non-peer-reviewed records. The criteria for selection and exclusion are presented in Table 1.
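A minimal sketch of how the screening stage can be encoded is given below; the record fields and the conditions are simplified stand-ins inspired by the criteria in Table 1, not a literal transcription of it.

```python
# Simplified sketch of the screening stage: keep English-language works that
# propose an LLM-based educational solution, drop reviews and theoretical pieces.
# Field names and conditions are illustrative, not a transcription of Table 1.

records = [
    {"title": "...", "language": "en", "type": "article", "uses_llm": True, "is_review": False},
    {"title": "...", "language": "en", "type": "preprint", "uses_llm": True, "is_review": True},
]

def passes_screening(rec: dict) -> bool:
    return (
        rec["language"] == "en"
        and rec["uses_llm"]
        and not rec["is_review"]
        and rec["type"] in {"article", "conference", "preprint"}
    )

selected = [r for r in records if passes_screening(r)]
print(f"{len(selected)} of {len(records)} records pass the screening stage")
```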
Figure 2 and Figure 3 detail the distribution of the llms used and the applications in the works selected. Firstly, the most popular model is bert, followed by gpt-3, t5, and gpt-3.5. The low representativeness of the latest gpt model contrasts with its popularity; this is due to the fact that the data gathering corresponds to the first quarter of 2023, that is, shortly after it was released. Thus, new works exploiting it are expected to be published shortly. Furthermore, the most common tasks these models perform in the selected works are as virtual assistants and question generation, as shown in Figure 3, followed by answer grading and code explanation/correction. Note that most works were published in 2022, with few records in 2021, showing a growth trend in 2023.

Regarding answer grading applications, Ahmed et al (2022) used the bert model. They exploited a modified version of the model based on triplets and the Siamese network, specially designed to produce semantically meaningful sentence embeddings. The data set used is the one presented by Mohler and Mihalcea (2009). The authors applied the question demoting technique as part of the preprocessing, thus removing from the answer those words also contained in the question. The authors performed the experiments with two different combinations of input data: (i) the reference and student answers, and (ii) the concatenation of the question and the reference answer, plus the answer provided by the student. Evaluation metrics include the Pearson correlation coefficient (pcc) and the root mean square error (rmse). The results are approximately 0.8 pcc and 0.7 rmse. Moore et al (2022) presented another answer grading solution based on gpt-3. Unlike Ahmed et al (2022), the input data were gathered from an introductory chemistry course at the university level with almost 150 students. Moreover, the gpt-3 model was trained with the learningq data set (Chen et al, 2018), as in Bhat et al (2022). Based on the assessment of the questions posed to experts in the chemistry field, the model was able to correctly evaluate 32 % of the questions.
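As a rough illustration of the embedding-similarity approach to answer grading described above, the following sketch uses a generic sentence-embedding model rather than the authors' modified bert; the model name, the toy answers, and the crude score rescaling are all assumptions.

```python
# Grading short answers by semantic similarity to a reference answer, then
# checking agreement with human grades via the Pearson correlation coefficient.
# Requires: pip install sentence-transformers scipy
from sentence_transformers import SentenceTransformer, util
from scipy.stats import pearsonr

model = SentenceTransformer("all-MiniLM-L6-v2")   # generic model, not the authors' fine-tuned bert

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
student_answers = [
    "Plants turn sunlight into chemical energy stored as sugar.",
    "It happens in the leaves and involves light somehow.",
    "It is the process of breathing in animals.",
]
human_grades = [5.0, 2.5, 0.5]                     # toy gold grades on a 0-5 scale

ref_emb = model.encode(reference, convert_to_tensor=True)
ans_emb = model.encode(student_answers, convert_to_tensor=True)
cosines = util.cos_sim(ans_emb, ref_emb).squeeze(1)          # values in [-1, 1]
predicted = [(float(c) + 1.0) / 2.0 * 5.0 for c in cosines]  # crude rescaling to 0-5

pcc, _ = pearsonr(predicted, human_grades)
print(predicted, round(pcc, 3))
```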
Analysis & discussion Few works exist on code explanation and general explanation generation, learning software, and problem resolution.Firstly, MacNeil et al (2022b) proposed a gpt-3-based solution for code explanation based on 700 prompts.Note that it does not identify or correct errors.The main functionalities of the system encompass (i ) execution tracing, (ii ) identifying and explaining common bugs, and (iii ) output prediction.However, no results were provided.Humphry and Fuller (2023) proposed a solution based on gpt-3.5 to write conclusion statements about chemistry laboratory experiments.The evaluation of the solution relied on a discussion of features like readability and orthographic correctness of the generated text.Unlike the works above, which focused on textual input data, Yamaoka et al (2022) used the gpt-3 model to exploit social media data, particularly from Instagram, for learning purposes.The proposed pipeline comprises (i ) detecting the relevant objects in the images, (ii ) extracting keywords to generate sentences related to those keywords, and (iii ) providing linguist information about the words that composed the sentence.The ultimate objective was to acquire new vocabulary.The experiments consisted of a small pilot study with three students from Osaka Metropolitan University.The only results reported were the average of unknown words, 2.2 in the generated sentences.Finally, Zong and Krishnamachari (2022) used gpt-3 to identify and generate math problems involving systems of two linear equations.The experiments consisted of (i ) problem classification into five categories, (ii ) equation extraction from word problems, and (iii ) generation of similar exercises.The authors prepared the input data ad hoc.The accuracy of the results obtained in each of the three tasks above was 75 % (averaging the five categories); 80 % (with fine-tuning), and 60 % (also averaging the five categories), respectively. 
Regarding question generation, several representative examples were found in the literature. Bhat et al (2022) used both the gpt-3 and t5 models: gpt-3 for question generation combined with a concept hierarchy extraction model, and t5 for the evaluation of the generated questions in terms of learning usefulness. The input data consisted of textual learning materials from a university data science course. More in detail, the concept hierarchy extraction method exploited the mooccubex pipeline (Yu et al, 2021), which extracts key concepts following a semi-supervised approach. Note that the evaluation also involved computing the information score metric and manual assessment by human annotators. The experimental results obtained with the learningq data set (Chen et al, 2018) show that almost 75 % of the generated questions were considered useful by the gpt-3 model, with an agreement slightly higher than 65 % when compared to manual evaluation. Similarly, Dijkstra et al (2022) created EduQuiz with gpt-3, a multi-choice quiz generator for reading comprehension exploiting the eqg-race data set (Jia et al, 2021). The authors evaluated the performance of EduQuiz using standard metrics: bleu-4, rouge-l, and meteor. Results attained 36.11, 11.61, and 25.42 for these metrics, respectively. Additionally, Sharma et al (2022) proposed a fine-tuning pipeline composed of context recognition and paraphrasing, filtering of irrelevant output, and translation to other languages for question generation at different levels using the t5 model. The authors used the data set by Mohler et al (2011) (an updated version of the data set used in Ahmed et al (2022)). The evaluation metrics computed were bleu (Papineni et al, 2002) and meteor (Lavie and Agarwal, 2007). The results for the two metrics above were 0.52 and 57.66, respectively. Thus, compared with the question generation solution by Dijkstra et al (2022), Sharma et al (2022) obtained a more competitive meteor value. Ultimately, Nasution (2023) used gpt-3.5 for question generation. To assess the generated questions' reliability or internal consistency, Cronbach's alpha coefficient (Taber, 2018) was computed, resulting in 0.65. Answers from a survey of almost 300 students show that 79 % of the generated questions were relevant, 72 % were moderately clear, and 71 % were of enough depth.

In contrast, Phillips et al (2022) used gpt-3 to create summaries of students' chats in collaborative learning. Moreover, this solution detected confusion and frustration in the students' utterances. Input data were gathered from secondary school students in an ecosystem game. The authors briefly discussed how the system could provide advantageous knowledge to teachers about the students' interaction in a collaborative learning environment, but no further analysis or results were provided. Conversely, Prihar et al (2022) proposed a learning assistant based on the bert model and its variations (i.e., sbert and Mathbert) to generate support messages from chat logs obtained from fundamental interactions between a live upchieve tutor available at the assistments learning platform and the students. Even though 75 % of the generated messages were identified as relevant by manual human evaluation, these messages had a negative impact on the students' learning process, as the authors explained.
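Several of the question generation works above report n-gram overlap metrics such as bleu-4 and meteor. The following sketch shows how these are typically computed with nltk on a toy reference/candidate pair; the sentences are invented, not drawn from the reviewed data sets, and this is only one common way of computing these scores.

```python
# Computing BLEU-4 and METEOR for one generated question against a reference.
# Requires: pip install nltk   (plus nltk.download("wordnet") for METEOR)
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score

reference = "what causes the seasons to change on earth".split()
candidate = "what makes the seasons change on earth".split()

bleu4 = sentence_bleu(
    [reference], candidate,
    weights=(0.25, 0.25, 0.25, 0.25),          # standard BLEU-4 weights
    smoothing_function=SmoothingFunction().method1,
)
meteor = meteor_score([reference], candidate)   # expects tokenised reference(s) and candidate

print(f"BLEU-4: {bleu4:.3f}  METEOR: {meteor:.3f}")
```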
The most common application uses llms as virtual assistants. Sophia and Jacob (2021) created edubot exploiting Dialogflow. Its main limitation lies in its basic language understanding capabilities (i.e., low variability in the responses provided), particularly regarding the user's emotions. Baha et al (2022) developed Edu-Chatbot exploiting the Xatkit framework. The system comprises an encoder based on Camembert and a decoding module for student intent recognition. Unfortunately, the intent classification decoder is based on a pre-defined set of recognized actions (e.g., simple questions, animations, videos, and quizzes). Thus, the language intelligence of the solution is limited. Furthermore, no evaluation was performed. Calabrese et al (2022) presented a virtual assistant prototype for Massive Online Open Courses (moocs). Their objective was to reduce the teaching load and maintain the quality of learning. Thus, its architecture allows the teacher to intervene in those questions that have not been resolved satisfactorily. More in detail, they used a personalized version of bert. The questions answered by the teacher are included in an additional document and allow the bert model to be improved. In contrast, Essel et al (2022) involved 68 undergraduate students in evaluating the solution developed using Flowxo and integrated into WhatsApp. Qualitative evaluation of the end-users' preference for the virtual assistant instead of traditional interaction approaches with the teachers reached 58.8 %. Additionally, Liu et al (2022) presented a virtual assistant for online courses to resolve general and repetitive doubts about content and teaching materials. This system incorporates a sentiment analysis module to analyze the response's satisfaction based on the student's dialogue. They used two fine-tuned versions of the bert model with an accuracy of 82 % and 90 % for the correct detection of the content and the student's sentiment, respectively. Moreover, Mahajan (2022) created a system for students to improve their knowledge of the English language that allows them to obtain information on the meaning of words, make translations, resolve pronunciation doubts, etc. The authors exploited the Roberta model with an accuracy greater than 98 % in communication intent detection. Similarly, Mendoza et al (2022) created a virtual assistant intended for academic and administrative tasks but exploiting Dialogflow. The Cronbach's alpha coefficients during the evaluation of the system exceeded 0.7. In contrast, Topsakal and Topsakal (2022) presented a foreign language virtual assistant based on the gpt-3.5 model combined with augmented reality. The authors claim that this combination attracted students' attention and motivated them through entertaining learning thanks to gamification. In this case, the language model was used to establish a dialogue with the end users. Unfortunately, no results were discussed. Moreover, Tyen et al (2022) proposed a virtual assistant for second language learning with difficulty level adjustment in the decoder module, evaluated by experienced teachers. The system exploits Roberta fine-tuned with a Cambridge exams data set. The system attained Spearman and Pearson coefficients of 0.755 and 0.731, respectively. Finally, Wang et al (2022) developed an educational domain-specific chatbot. Its goal is to reduce pressure on teachers in virtual environments and improve response times by easing communication between students and teachers. They used Natural Language Understanding (nlu) techniques on variations of
the bert model for the classification of intents and response generation. It presented an accuracy of 88 % in detecting intents. However, its values are lower than 50 % regarding semantic analysis.

Table 3 lists the selected pre-printed or non-peer-reviewed works, taking into account their application, the model used, and their reproducibility. In this case, da Silva et al (2022), Zhang et al (2022) and Christ (2023) provide enough information for reproducibility, while Zhang et al (2022) involved either teachers or students in the design or experimental plan. The distribution of applications is similar to the peer-reviewed records. Hardy (2021) developed an automatic evaluation system for reading and writing exercises. The system uses the sbert model, among others, to capture semantic data and provide valuable insights related to the student's skills, using the asap-aes and asap-sas data sets. Particularly, they exploited the passage-dependent sentence-bert model trained using curriculum learning (Graves et al, 2016). The results from the Quadratic Weighted Kappa (qwk) metric reached 0.76 on average.

Regarding code correction, Zhang et al (2022) presented mmapr, an error identification and correction system for code development based on the OpenAI Codex model. The system fixes semantic and syntax errors by combining iterative querying, multi-modal prompts, program chunking, and test-case-based selection of few-shot examples. Results obtained with almost 300 students reached 96.50 % in corrected code rate with the few-shot-based approach. Phung et al (2023) presented a similar solution to mmapr for code correction, named pyfixv. The main difference is that the Codex model, combined with prompt engineering, explains the detected errors. Moreover, the explanations are also validated in terms of suitability for the students. The system has been tested with the TigerJython (Kohn and Manaris, 2020) and Codeforces data sets. The precision attained 76 % in the most favorable scenario with the TigerJython data set.

Cobbe et al (2021) elaborated a data set of 8.5k elementary school mathematical problems called gsm8k. Then, the gpt-3 model was used to generate comprehensible explanations of these problems, combining natural language and mathematical expressions. The authors trained verifiers to enhance the performance of the model beyond fine-tuning. Ultimately, they concluded that this approach enhanced the overall performance.
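Hardy (2021) reports Quadratic Weighted Kappa, a standard agreement metric for automated scoring. A minimal sketch of how it is usually computed follows; the score vectors shown are invented toy values, not data from the reviewed work.

```python
# Quadratic Weighted Kappa (QWK) between human and predicted essay scores.
# Requires: pip install scikit-learn
from sklearn.metrics import cohen_kappa_score

human_scores = [2, 3, 4, 4, 1, 3, 2, 4]      # toy gold scores
model_scores = [2, 3, 3, 4, 1, 2, 2, 4]      # toy system predictions

qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK: {qwk:.3f}")
```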
Subsequently, da Silva et al (2022) developed an automatic questionnaire generation system named querai, using the t5 model and applying the fine-tuning technique. The t5 model was evaluated with the Skip Thought Vectors (stv), Embedding Average Cosine Similarity (eacs), Vector Extrema Cosine Similarity (vecs), and Greedy Matching Score (gms) metrics, with results higher than 0.8 except for vecs. Summing up, the accuracy of the pay-per-subscription solution is 91 %. Similar to da Silva et al (2022), Raina and Gales (2022) developed a multiple-choice question generation solution to generate both questions and the set of possible answers using, apart from t5, the gpt-3 model, both trained with the race++ (Liang et al, 2019) data set composed of middle, high and college level questions. The results obtained are similar between the two models, with an accuracy of 80 % (11 percentage points lower than the solution by da Silva et al (2022)). Note that the authors also measured the number of grammatical errors and other features like diversity and complexity. The lowest values are related to the diversity of the questions generated. In the best scenario, the t5 model attained approximately 60 % accuracy. More recently, Christ (2023) used bert to generate sql-query exercises automatically. Experiments with knowledge graphs and natural language building were also performed as a baseline. The authors concluded that the Distilbert-based approach generates descriptions that are, on average, almost 50 % shorter and with a 20 % decrease in term frequency compared to the nlp baseline.

gpt-3 and the different adaptations of the bert model are the most popular alternatives in the sample regarding answer grading, code explanation and general explanation generation, learning software, problem resolution, question generation, and text summarization. When it comes to their use as virtual assistants, the variety of models used increases. The current lower costs of the gpt-3.5 model will motivate a rapid increase in its use in the coming years. However, bert's robustness as an entity detector, with evaluation metrics above 80 % in several of the discussed works, made it a reference for developing educational software tools. Unfortunately, most works reviewed do not provide the code or data used for their analysis, making reproducibility difficult. Finally, the risks of exploiting llms for educational tasks are transversal (e.g., for automatic question generation and as virtual assistants, the two most popular applications identified). The lack of transparency of the models (i.e., the rationale behind their functioning, such as difficulty adjustment in question generation) could negatively impact the end-users. Regarding their use as virtual tutors, reinforcement learning from human feedback is essential to gain control over their operation and ensure fairness. Ultimately, the risk of poor accuracy must be palliated by including the probabilistic confidence of their responses.
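As a generic illustration of the t5-based question generation pipelines discussed in this section (not querai or any specific reviewed system), the following sketch runs a stock checkpoint through the HuggingFace transformers library; the checkpoint name and the "generate question:" prefix are assumptions, since a fine-tuned question-generation model defines its own input format.

```python
# Generating a question from a context passage with a T5 checkpoint.
# Requires: pip install transformers sentencepiece torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-base"                         # generic checkpoint, not a fine-tuned QG model
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

context = "Gradient descent updates model parameters in the direction that reduces the loss."
prompt = f"generate question: {context}"       # assumed prefix; a fine-tuned QG model defines its own

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=48, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```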
Conclusion

llms represent an undeniable mega-trend of the current century in many fields and industrial sectors. In the particular case of learning, these generative ai-based solutions have produced a considerable buzz. Accordingly, they enable hands-on learning and are commonly used in classrooms nowadays. Compared to previous ai solutions and traditional methodologies, which focused primarily on modifying the textual input, advanced llms can generate on-the-fly human-like utterances, enhancing pedagogical practice and providing personalized assessment and tutoring.

Given the popularity of llms, this work is the first to contribute a comprehensive overview of their application within the educational field, paying particular attention to those solutions that involved students or teachers in the design or experimental plan. From the 342 records obtained during data gathering, 29 works passed the screening stage by meeting the eligibility criteria. They were discussed taking into account their application within the educational field, the model used, and code and data availability. Results show that the most common applications are as virtual assistants and for question generation, followed by answer grading and code correction and explanation. Moreover, the most popular model continues to be bert, followed by the gpt-3, t5, and gpt-3.5 models. In the end, this review identified 9 reproducible works and 8 solutions that involved either teachers or students in the design or experimental plan.

Due to the recent launch of the gpt-4 model within the Chatgpt application, new works are expected to be published soon and will be analyzed as part of future work. Moreover, as future work, we will study the ethical implications of llms (i.e., their transparency and fairness behavior caused by the training data, and privacy) and how the solutions discussed can be integrated into education curricula, as well as their shortcomings and risks to academic integrity (e.g., plagiarism concerns). Finally, attention will be paid to those works that propose innovative teaching practices with llms and explore the use of ad hoc solutions through personal language models in the field.

Fig. 2: Distribution of the llms in the records selected.
Table 1: Criteria for selection and exclusion.
Table 2: Selected articles published in journals or presented at conferences, taking into account their application, the model used, and code and data availability.
Table 3: Selected pre-printed or non-peer-reviewed records.
6,178.2
2024-05-18T00:00:00.000
[ "Computer Science", "Education", "Linguistics" ]
Real Application for a Low-Cost Low-Power Self-Driving 1/10 Scale Car In this paper, we discuss a real-life application for a low-power and low-cost self-driving car on a 1/10 scale platform. We present algorithms developed to achieve driving autonomy on a Low Power Single Board Computer using an ARM-based processor in a controlled environment. The authors provide insight into the usability of this technology for gait and performance running analyses. We perform walking and running analyses on an indoor athletic track using an autonomous follow-cam configuration. Through testing, we demonstrate how the vehicle can produce reliable and clinically valuable data. We discuss possible improvements and present recommendations for future works. Introduction In early 2019, Tesla's CEO, Elon Musk, presented the company's first computer and chip to the world, named Full Self-Driving or FSD, capable of processing up to 2.5 billion pixels per second [1], an astonishing engineering feat. Although very impressive, many researchers and labs cannot afford or access this kind of computing power on portable devices for self-driving development. However, there are now many available options on the market capable of decent computing power at the lower end of the price range. For instance, let us think of the popular Nvidia Jetson Nano embedded computer or the even more popular, less brain-powered option, the Raspberry Pi 3B+. Though these devices are capable of self-driving on paper, very few studies present models under real-life conditions [2], with reliability, scalability or specific applications other than following lane markings behind closed lab doors [3,4]. This paper presents the full development of a 1/10 scale car equipped with a Raspberry Pi 4, capable of what would be considered level 3 autonomous driving [5], used for the clinical analysis of locomotion. Objective The basis of this research is to perform quantitative motion analysis such as running and walking [6] through a moving visual referential based on the subject's position. An indoor athletic track and field was selected as a control environment. The research takes place at a University featuring an atypical 168-meter 4-lane indoor track. Each lane has a common width of 1 meter with 5 cm white lane line markings. The single radius geometry conforms to the standards where the straight segment is twice the length of the inner radius, more precisely, a 16-meter span in this case. The secondary objective was to develop such an instrument using only low-cost devices capable of performing onboard computation, thus reducing the need for extra material. The limiting cost factor is of interest to compete with the currently used evaluation methods that are both affordable and well-established (e.g., a paper-based checklist or a timed evaluation using a stopwatch) [7,8]. Also, the system is developed to alleviate the need for wearable devices when conducting gait analysis, since such systems are often hard to use for a layperson in ecological settings [9]. Table 1 lists the libraries and the versions used. The entire code for this project was written in Python3, version 3.7.4. Many open-source libraries were needed to achieve this level of driving autonomy. Apart from the libraries needed for the integration of the sensor suite, Opencv, numpy and scipy were mainly used to complete this project. Project architecture The project follows a rather simple architecture to reduce calculations and computing time.
Information flows through the node sequentially without threads, as shown in Figure 3. Communication protocols are identified and represented by dashed lines. The robot performs its tasks by first reading the RGB camera input and generating an output value through a controller. It then acquires data from the LiDAR sensor, providing the necessary values to perform a throttle output evaluation through another distinct controller. Values are then transferred to the PCA 9685 module by the RP4 via the I²C protocol, which then outputs the signals to the ESC and to the steering servomotor, thus inducing trajectory modifications on the vehicle. The loop is repeated as long as needed, but can be stopped at any time by the evaluator or if the algorithm raises any safety concerns.

The vehicle platform is an R/C dedicated scale chassis (Figure 1) that can reach speeds of up to 48 km/h (30 mph) and features, as stated by its name, a four-wheel drive configuration with differentials on both the front and rear of the car. The vehicle is equipped with two ¼ inch (6 mm) steel rods functioning as mounts for the electronics. The assembly is mainly achieved with various 3D printed brackets and supports.

Electronic components. The computer module selected is the Raspberry Pi 4 (RP4) with 4 GB of RAM (Cambridge, England). It has been equipped with a 32 GB Kingston Class 10 micro SD card (Fountain Valley, California, United States) for storage. The car comes stock with a TRAXXAS XL-5 Electronic Speed Controller (ESC), a Titan 12T brushed DC motor and a robust 2075 waterproof servomotor. Both apparatus, the servomotor and the ESC, can be controlled via pulse-width modulation (PWM). Though the RP4 is capable of producing software PWM through its general-purpose inputs and outputs (GPIO), the signal is too noisy to generate precise inputs for the ESC and the steering servomotor. To overcome this issue, an Adafruit PCA-9685 PWM driver board (New York, New York, United States) was added to the vehicle assembly. This device allows the RP4 to generate precise PWM via the I²C protocol and can be powered directly by the computer. To conduct reliable subject-following experiments, the robot was fitted with the Garmin LiDAR-LITE V3 (Olathe, Kansas, United States), a high-precision laser distance sensor compatible with the RP4 I²C protocol. Lastly, the system uses a Logitech c905 (Lausanne, Switzerland) webcam as an RGB input directly connected to the RP4 USB port.

Power. With simplicity in mind, the robot assembly has been rigged with two independent power sources. A 7.4 V 5800 mAh 2s battery pack from the car manufacturer was used for propulsion, whereas a RAVPower 5 V 26 800 mAh USB type-C power bank was used to power the main computer and peripheral devices. This decision was also made to avoid the need of a …

Computer Vision and Control Systems. Lane line detection. One of the key elements of the complete project is the use of a Low Power computer to perform a self-driving task. Though powerful enough to perform small artificial intelligence tasks, the RP4 is not optimized for neural network computations or machine learning training, which led to the decision to produce a computer vision algorithm instead. As a matter of fact, the analysis environment of the project is sufficiently well-known and standardized to not require a more general approach like a convolutional neural network. Many open-source lane detection algorithms are already available from a vast selection of suppliers, but none seemed to fit the needs of the project. Most of the codes available offer self-driving abilities with restricted capabilities. Projects often tend to use non-standard markings and scaled lane widths while not offering the speed capabilities anticipated by the team. To overcome these issues, it was decided that a computer vision algorithm would be developed, thus enabling self-driving in real-life conditions and applications, in our case within athletic track markings.

Most of the codes available over the Internet regarding lane detection tend to use Canny edge detection over an area of interest (lane marking areas, generally the lower part of the input image) and then identify linear patterns with the Hough transform algorithm [10]. These patterns are returned as line extremums, then sorted as left and right line components and finally added to the evaluated frame for visualization. Though visually pleasing, this method does not provide a good approximation for curved lines, nor a mathematically valuable tool for trajectory evaluation. The lane detection code written for this project is in part based on the Ross Kippenbrock algorithm [11]. An RGB 960 × 720 resolution camera image is first fed into the program, trimmed to remove the over-horizon area and resized to a 300 × 170 size to lower computational costs. The image is then modified via a warp function to create a synthetic bird's eye view. Using a color mask function, the lane line pixels are then selected according to their color and shade. An eroding function is applied to filter noise, thus removing undesired pixel agglomerates. To extract a discrete output from the pixel array, the image is then resized to about fifty percent of its original size. Selecting only the brightest pixels (closest to [255,255,255]), an array of around 200 points is created from the original image. Points are then rotated according to the vehicle coordinates (forward corresponding to the +x axis and left to the +y axis, as seen in Figure 1). The origin is at the center of the vehicle.

To achieve reliable line sorting, we developed a robust algorithm. A vertical pixel sum of the first half of the resulting image is performed, generating a pseudo signal according to the predominance of white pixels. Since the distance between the two lines is constant throughout the track length, we can perform a signal analysis on the sum array using the scipy.signal library. By considering the lane width as the predicted wavelength, a parameter of the library's find_peaks function, we can identify the apexes caused by the line markings. If more than two vertices are detected, we compare these values to previous results to avoid false line detection and select the two best options.
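A minimal sketch of the column-sum and peak-detection step just described follows; the image shape, brightness threshold, and expected lane width in pixels are assumed values rather than the project's actual parameters.

```python
# Locate the two lane lines in a bird's-eye binary mask by summing white pixels
# per column and picking peaks roughly one lane-width apart.
# Image shape, threshold, and lane width (in pixels) are assumed values.
import numpy as np
from scipy.signal import find_peaks

def locate_lane_lines(birdseye_gray: np.ndarray, lane_width_px: int = 80):
    binary = (birdseye_gray > 200).astype(np.uint8)     # keep only bright (line) pixels
    lower_half = binary[binary.shape[0] // 2 :, :]      # lines are most reliable near the car
    column_sum = lower_half.sum(axis=0)                 # pseudo-signal over image columns
    peaks, _ = find_peaks(column_sum, distance=lane_width_px)
    best = peaks[np.argsort(column_sum[peaks])[-2:]]    # keep the two strongest peaks
    return np.sort(best)

# Example with a synthetic mask containing two vertical lines
img = np.zeros((170, 300), dtype=np.uint8)
img[:, 100] = 255
img[:, 180] = 255
print(locate_lane_lines(img))                            # -> [100 180]
```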
By using the peaks as starting points for each line, we can define the slope angle of the segment formed by our initial point to the closest one. If this point is within a threshold, in the case of this study 15 degrees, the next point is considered as being part of the line and added to a list. This threshold is derived from the maximum possible lane deviation (in degrees) of the innermost line of the track described as part of this research, at a 2.5-meter distance (camera horizon), with an added error value of 5 degrees. The algorithm is repeated for each point, based on the last point added to the list and the previous slope angle, as well as for each peak selected. This algorithm helps to remove perpendicular values and "off the chart" data. Two 2nd degree polynomial equations representing the lane lines are computed from the two separate data sets created. These two functions are then averaged to evaluate the estimated lane center.

Steering control. As shown in Figure 4, the computer vision algorithm produces a smooth and reliable ideal trajectory for the car to use as a goal. This 2nd degree polynomial equation can be used to approximate the car's actual cross track error (CTE) by evaluating the function's value at the vertical intercept, the car being conveniently placed at the origin. The evaluated CTE is fed into a proportional-integral-derivative (PID) controller which then outputs a pulse width value converted into torque change by the motor. The robot was initially implemented with a simple model predictive control (MPC) using an in-house-developed mathematical steering model considering the four-wheel drive nature of the car. However, this method proved to be too time-consuming in terms of computing and was dropped early in the project.

Throttle control. While steering control is always performed according to the lane line markings, the robot can carry out different assessments requiring a wide range of speed control patterns. Training and evaluation with predefined velocities only require a user speed profile input. However, many analyses are based on the subject's speed. To achieve a constant distance with the subject, the vehicle utilizes an optical distance measurement sensor. Control of speed and distance is achieved through a PID controller, using the current subject/robot distance as an input and computing a throttling output for the ESC. As of right now, the PID parameters have been tuned using the Ziegler-Nichols method [12].

Data collection. The main goal of this research is to produce results similar to what would be possible for an in-lab visual locomotion analysis. Such evaluations are generally performed on a treadmill or over a controlled area and use a so-called global referential where the room is the spatial reference. In this project, using such a system would induce the need for real-time position estimation as well as complex computations to correctly evaluate the subject's locomotion patterns. As a bypass, it was decided that the spatial reference would be fixed to the subject, thus nulling the need for accurate position estimations, as shown in Figure 5. Although many evaluation types can be performed using the robot, this paper will focus on a following evaluation (the robot is placed behind the subject being evaluated). The basics of this evaluation are simple. A distance or time is first selected by the evaluator to conduct the desired evaluation. A track lane must then be selected, independently of its color or length, to perform the evaluation according to the test requirements. To ensure a good visual frame for image capture, the evaluator is encouraged to modify the goal distance between the subject and the vehicle. The set distance for this paper was 2.5 meters (≈100 inches). To engage an evaluation procedure, the vehicle is placed on the selected lane. Though the robot does not have to be perfectly placed in the center of the lane, positioning it on a fairly straight alignment and close to the middle will help the vehicle perform better from the beginning. The vehicle will recover the initial placement error within the first 5 meters. Critical positions include a plus/minus 45 cm offset from the lane center and a plus/minus 30 degree offset from the lane center tangent.
Once the car is placed as desired, the subject can take their place in front of the vehicle. We recommend placing the subject somewhere between 25 and 50 percent closer than the set distance to avoid premature correction from the robot. The evaluator can then proceed with the car's program start-up. This can be achieved via the remote desktop application available on the Windows platform (Windows 10, Microsoft, USA). The robot's main computer has been programmed to generate a secure Wi-Fi network from which a remote computer can be connected to perform the assessment. A starting countdown leads to the beginning of the test. The subject can then proceed to walk and/or run.

Results. Trials were conducted through different tests, which include a 180 m (600 ft) walk (1.25 m/s or 4.5 km/h), a 180 m run (3.44 m/s or 12.4 km/h) and a trial at variable paces over 540 m (1800 ft) (1.3, 1.5 and 2.5 m/s, or 4.7, 5.4 and 9 km/h, respectively). All tests were carried out on an indoor athletic track, as described in the objective. Experiments were conducted on two different subjects presenting no functional limitations or pathologies. Tests were conducted to better understand controller tuning and project limitations. Figure 6 presents the plotted results of the experiment, i.e. the CTE and steering output (δ) through time, and the throttle output (APW) compared to the perceived distance (D) through time, from the 555 m evaluation. All analyses were performed at an average rate of 10.1 Hz, a value similar to the 10.3 frames per second achieved by the main computer when only the camera is in use with an external webcam app (Guvcviewer, http://guvcview.sourceforge.net/) at a 960 × 720 resolution.

Discussion. Overall, current results show that this platform is effective at providing quantitative data on gait parameters [6] using a self-driven vehicle in a quasi-controlled environment. Data show that further improvements are needed on the tuning of the PID values. As an example, even if gait induces a sinusoidal kinematic response, the output extracted from this analysis is too rough to represent it graphically. A 0.15 Hz frequency on the distance control can easily be observed in Figure 6a, a number too small to be expressed as a gait pattern [13]. Moreover, some analysis segments, as seen at 100 and 240 seconds, present more dramatic PID corrections over a period of 20 seconds, with an amplitude variation of 20 centimeters. These variations could be explained by the subject's torso movements, which are currently considered as steady by the software. Regarding the autonomous lane driving control, Figure 6 depicts adjustments made by the vehicle based on what is perceived and computed by the camera. A variability in the steering output can be observed, proportional to the input from the lane line detection, giving it a sawtooth appearance. We can observe that even small lane detection errors are also reflected in the vehicle's direction. Further data filtering could help prevent steering overcompensation, thus providing the system with a smoother drive. A more restrictive line evaluation protocol could also be implemented to better discard "off the chart" line detections. Such results are promising, since we were able to perform basic motion analysis from a steady frame on a moving subject. Although further testing and tuning will be needed to demonstrate the robot's real capabilities, the results obtained throughout this experiment demonstrate the feasibility of such a tool.

Safety. Safety is a major concern when working with humans and any living beings. To ensure the subject's safety throughout our testing, the robot was developed with some preprogrammed safety measures. A programmed safety switch was integrated into the code, making it possible to terminate any evaluation at any time from the remote computer, thus completely stopping the car. Another safety measure comes from the LiDAR coding.
The code was implemented with a proximity warning that will ultimately stop the car and terminate the program if the robot comes too close to, or goes too far from, a test subject or any other obstacle. Moreover, lane lines must be identified at all times to ensure that the car remains running. To overcome glitches and inaccuracies, the code can raise a maximum of 3 consecutive undetected line warnings before terminating, thus completely stopping the vehicle. If the wireless connection fails between the remote computer and the robot, the vehicle will also come to a complete stop.

Cost. The estimated cost of the required equipment for the current system is approximately $1,000 CAD ($750). As presented by Müller & Koltun [2], this project is comparable to other low-cost vehicles developed for automated driving and racing. Moreover, the current system is a stand-alone platform for navigation in quasi-unrestrained environments, since other systems are typically designed to track lines on a laboratory floor where the aim of the project is to "train" the system to go faster using machine learning or neural networks. Therefore, the developed system responds to our second objective, which was to provide a low-cost device for the analysis of human movement.

Conclusion. The current work demonstrates that a low-cost low-power self-driving 1/10 scale car can be used to assess walking and running in a controlled environment. The data gathered is reliable and can be clinically valuable for health specialists when assessing one's mobility. Future studies will focus on implementing more complex analyses of human movement for the automatization and production of quantitative data on limb movements [14]. To produce more reliable analyses in the future, the vehicle could be equipped with a less restrictive distance-measuring device, such as a rotating LiDAR, or even rely on visual estimations. More improvements on lane line detection should also provide the robot with steadier control. The actual processing speed of 10 frames per second seems to be a limiting factor for the team in view of the maximum speed that can be safely reached by the robot. Further analysis and code modifications will be carried out to achieve a more efficient goal of 20 frames per second on the same hardware. Since the camera feed was identified as a limiting speed factor, a better, smarter, and faster USB 3.0 webcam could be implemented in parallel with a smaller input frame resolution. Future work will include onboard kinematic motion analysis to better understand human locomotion and further document motion patterns from this new perspective. Also, induced vibrations produced by the track deformities should be considered to better capture motion. Modern software can produce reliable video capture by numerical stabilization, which could be implemented in this project.
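As a closing illustration of the control loops described in this paper (steering driven by the cross-track error and throttle driven by the measured subject distance), a minimal PID sketch is given below; the gains, output limits, and example values are placeholders, not the values tuned on the actual vehicle.

```python
# Minimal PID loop used conceptually for both steering (input: cross-track error)
# and throttle (input: distance error to the followed subject).
# Gains, output limits, and example inputs are placeholder values.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, out_min: float, out_max: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, output))

steering_pid = PID(kp=0.8, ki=0.0, kd=0.2, out_min=-1.0, out_max=1.0)   # CTE (m) -> steering command
throttle_pid = PID(kp=0.5, ki=0.05, kd=0.1, out_min=0.0, out_max=1.0)   # distance error (m) -> throttle

cte = 0.12                      # example cross-track error from the lane-centre polynomial
distance_error = 2.7 - 2.5      # measured LiDAR distance minus the 2.5 m set-point
print(steering_pid.update(cte, dt=0.1), throttle_pid.update(distance_error, dt=0.1))
```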
4,685.8
2020-12-31T00:00:00.000
[ "Computer Science" ]
Pedestrian Detection System using YOLOv5 for Advanced Driver Assistance System (ADAS) Transportation technology is continuously developing on the way towards the self-driving vehicle. The need to detect the situation around vehicles is a must to prevent accidents. This is not only limited to the conventional vehicle, in which accidents commonly happen, but also applies to the autonomous vehicle. In this paper, we propose a detection system for recognizing pedestrians using a camera and a minicomputer. The approach to pedestrian detection is applied using an object detection method (YOLOv5), which is based on the Convolutional Neural Network. The model that we propose in this paper is trained using various numbers of epochs to find the optimum training configuration for detecting pedestrians. The lowest value of object and bounding box loss is found when it is trained using 2000 epochs, but it needs at least 3 hours to build the model. Meanwhile, the optimum model configuration is trained using 1000 epochs, which has the biggest object loss reduction (1.49 points) and a moderate bounding box loss reduction (1.5 points) compared to the other numbers of epochs. This proposed system is implemented using a Raspberry Pi 4 and a monocular camera, and it is only able to detect objects at 0.9 frames per second. As further development, a more advanced computing device is needed to reach real-time pedestrian detection.

Introduction. In the study of intelligent transportation, the existence of pedestrians is the most important thing that must be considered. The Indonesian Highway Capacity Manual [1] states that, whenever pedestrians are crossing the road, every motorized vehicle must stop and let them pass. In a conventional vehicle, which is fully controlled by humans, there are lots of accidents caused by the carelessness of drivers when facing pedestrians who cross the road [2]. Nowadays, developments in transportation technology allow a computer to control a vehicle, commonly called an autonomous vehicle [3]. Based on this situation, a pedestrian detection system is required. It can work as an early warning system for pedestrians who cross the road. This system is needed especially in Indonesia, which has lots of pedestrians crossing the road not at the right time and place [4]. Accidents occur not only with human-controlled vehicles, but also with autonomous vehicles [5]-[7]. This condition happens because full autonomy has not yet been reached; furthermore, such vehicles still need control from the drivers to prevent some unpredicted events (such as accidents) [8]. This paper proposes a simple solution for preventing accidents between vehicles and pedestrians by using object detection and a minicomputer as the main processing unit. Our proposed framework contributes to preventing accidents involving pedestrians. As a further purpose, we try to reduce the number of accidents between vehicles and pedestrians. The proposed framework is the first compact dashcam built to detect pedestrians in order to prevent such accidents. In road segments, the probability of pedestrians crossing the road is almost predictable. Their intention depends on the distance and speed of vehicles near the pedestrians [9]. Pedestrians can be detected by using a Convolutional Neural Network (CNN). This method applies deep learning that uses convolution by sliding a convolutional kernel over the input [10]. Redmon et al. improved on the CNN approach to speed up object detection [11].
Our proposed system recognizes pedestrians by using the fifth version of You Only Look Once (YOLOv5), which runs based on the CNN method [12]. Adopting YOLOv5 in a pedestrian detection system for an advanced driver assistance system (ADAS) has … The contents of this paper are structured as follows. The first section discusses the background of the pedestrian detection system using object detection. Section 2 discusses the proposed research methods for detecting pedestrians using YOLOv5 and the Raspberry Pi 4. Section 3 contains the results of training the model using various numbers of epochs and the detection results. At last, Section 4 presents the conclusions of this research.

Research Methods. In the autonomous vehicle, lots of sensors are placed in order to be aware of the situation around it [21]. One of the ways to understand its nearby situation is by placing cameras. They can capture images that can later be processed by detecting any object using computer vision techniques. This method allows the computer to work like human eyes, seeing every object that appears in front of the cameras [22]. A requirement analysis was carried out in order to create the pedestrian detection system. At a minimum, it needs a standard-resolution camera, an edge computer (Raspberry Pi 4) which will be used as the system's main computing unit, and an object detection method for estimating the pedestrians in front of a vehicle. In general, this research consists of two stages. The first stage collects data on pedestrians using the monocular camera, and the second stage detects pedestrians using YOLOv5. Figure 1 shows the proposed method for detecting pedestrians crossing the streets. In the early stage, a monocular camera captures an image or video from its point of view. Every image that is collected is preprocessed in order to simplify the images, make them easier to process, and adjust them to the needs of the convolutional stage. Whenever the preprocessing step is done, the object detection process begins. It tries to understand every object that appears in an image. When it detects and recognizes any object, a bounding box is drawn around the detected object. A bounding box carries information such as its position and its size.

Data Collection with Monocular Camera. The pedestrian detection system proposed in this paper uses a minicomputer that is connected to a monocular camera. The term monocular camera in this paper refers to a single camera directed in line with the vehicle's direction. The dataset used in this study was manually assembled using the image search feature of search engines. The dataset was searched by tracing the shape of the pedestrian's form in accordance with the camera settings of the proposed system, according to the needs of this study. The dataset distribution technique uses a ratio of 6:4. This was chosen since it produces a somewhat larger fraction of training data, hence optimizing the training. The dataset of pedestrian image samples contains 21610 photos. Therefore, the quantity of training data utilized is 12966 photos, whereas the amount of test data utilized is 8644 images. The computer that is connected to the camera processes the collected images. It tries to collect all the information on events that occur in front of the vehicle. The placement of the camera and minicomputer is illustrated in Figure 2.
In order to build the pedestrian detection system, there are several requirements, namely the hardware and the development environment. Table 1 shows the system requirements used to build the pedestrian detection system. As shown in Table 1, the Raspberry Pi 4 works as the main computer placed in the vehicle. It runs the Raspbian Buster OS and receives images from a monocular camera. Meanwhile, the pedestrian detection software is first developed on a different computer, as specified in the table. Whenever it is ready to run, it is customized and moved to the Raspberry Pi 4 to detect pedestrians.

Pedestrian Detection System. The pedestrian detection system is built on an object detection method. One of the top methods for detecting objects is You Only Look Once (YOLO). Over its development, various versions of YOLO have been proposed by several researchers. Redmon et al. proposed the first version of YOLO in 2015, and it works based on the Convolutional Neural Network [11] to detect objects. Figure 3 shows the architecture of YOLO as developed by Redmon. According to Deshpande et al. [23], YOLO is the fastest object detection method compared with Fast R-CNN and Faster R-CNN. Nowadays, YOLO is the most common method applied in intelligent transportation systems [24]-[26], or specifically in the development of autonomous vehicles [27]. Redmon et al. developed their method into the next version, which improves both the accuracy and the speed of the algorithm [28]. In YOLOv3, the capability of detecting objects is further improved [29]. It works for detecting common objects and applies to general cases. YOLO's latest versions are not developed directly by Redmon, but by other researchers for specific cases such as intelligent transportation systems [30]-[32]. To detect pedestrians, the proposed system needs an image or video of the road situation. An additional preprocessing step is needed when the system receives videos as input: the system reads and stores all frames in the video input. Whenever the frames are stored, the objects are detected immediately. Figure 4 illustrates the road situation based on a simple dashcam (using a smartphone) proposed by Nasution et al. [33], which detects other vehicles that appear on the road. The pedestrian detection system that we propose has a similar concept, but we focus on detecting pedestrians using a minicomputer and a monocular camera.

The Scenario of Experiment. We simulate the pedestrian detection system using a minicomputer to which a monocular camera is already attached. In the simulation, we only try to detect pedestrians who are standing in front of the camera. As shown in Figure 5, the pedestrians are detected by the proposed system. In order to get a better detection system, several model trainings with different numbers of epochs are performed. According to Afaq and Rao [34], an epoch in machine learning is one pass of model training over the whole dataset. We validate the training by finding the minimum error among the various numbers of epochs. The loss functions examined in this test are the bounding box loss (Mean Squared Error) and the object loss. There is no classification loss since the only class to detect is the pedestrian (person).
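Before turning to the individual loss terms, a typical way to launch such epoch experiments with the ultralytics yolov5 repository is sketched below; the dataset yaml file, image size, and batch size shown are assumptions, not the exact configuration used in this paper.

```python
# Launching YOLOv5 training runs with different epoch counts from Python.
# Assumes the ultralytics/yolov5 repository is cloned and that a pedestrian.yaml
# dataset definition exists; image size, batch size, and file names are assumptions.
import subprocess

for epochs in (200, 500, 1000, 2000):
    subprocess.run(
        [
            "python", "train.py",
            "--img", "640",
            "--batch", "16",
            "--epochs", str(epochs),
            "--data", "pedestrian.yaml",       # hypothetical dataset definition
            "--weights", "yolov5s.pt",
            "--name", f"pedestrian_{epochs}ep",
        ],
        check=True,
    )
```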
Bounding box loss is calculated by measuring the differences between the predicted bounding box and the actual object's bounding box. It represents the capability of the method to find the center of an object and to cover the object with its bounding box. Meanwhile, the object loss tends to show the confidence level of every detected object.

Results and Discussions. As mentioned in the previous section, we first tried to find the minimum loss by training the model with various numbers of epochs. The loss functions calculated in the experiment cover the bounding box and object loss. Later, we tried to detect pedestrians by simulating our system outside the vehicle.

Training Model. The model is trained using various numbers of epochs, namely 200, 500, 1000, and 2000 epochs. As mentioned before, this is conducted in order to find the minimum loss with the most optimal number of epochs. The loss functions measured in the training are the bounding box and object loss. The classification loss is not measured in this paper since the category of object is limited to pedestrians (person).

Training Model with 200 Epochs: The 200-epoch training takes around 30 minutes, with the average confidence level of the detected objects around 79%. In several epochs, there are object loss values of less than 15%, which means the confidence level reached 85% for understanding the objects. The training result for the object loss with 200 epochs is shown in Figure 6. At first, the loss reached 11% and it decreased to 9% at the 11th epoch. In the end, the loss reduction is less significant, and it reached 8.75%.

Training Model with 500 Epochs: The result of the loss function testing in this training model is slightly better compared to the previous training. The object loss in detecting objects is around 21.69%, which means that the confidence level in detecting objects is 78.31%. In this test, the training takes 75 minutes to finish its epochs. Figure 8 shows the training result for the object loss with 500 epochs. The bounding box loss in the model training with 500 epochs is better than in the previous test. Almost similar to the training model using 200 epochs, its loss is reduced significantly at the beginning of the epochs. In the end, the bounding box loss reached less than 0.08, as shown in Figure 9.

… epochs tends to be greater than in the previous model training. As shown in Figure 10, the average object loss reaches 20.2%, which means the object is detected with a confidence level of almost 80%. As well as the object loss in this model training, the bounding box loss reached its lowest point. It reached 4%, which means the system has 96% accuracy in the object's (pedestrian) detection area. Training with this number of epochs takes 180 minutes to build the model. Figure 13 shows the bounding box loss for the model training with 2000 epochs. The most optimized number of epochs, however, is around 1000. It not only has the maximum object loss reduction (1.49 points) between epochs and a moderate reduction in the bounding box loss (1.5 points), but also needs only moderate training time to build a pedestrian detection model. As seen in Table 2, doubling the number of epochs does not mean the loss is also reduced twice over.

Discussions. Pedestrian Detection Result. The testing of the pedestrian detection system is done using the Raspberry Pi integrated with a monocular camera.
The system tried to detect pedestrians using several samples of recorded images as a testing method. Figure 14 shows the result of the pedestrian detection system built in this research. As seen in the figure, there is a pedestrian standing in front of the view of the module (Raspberry Pi and camera). Based on the testing results, it successfully detects a pedestrian at various distances. The pedestrian detection system can also work on live video streams, but it is only able to detect objects at 0.9 frames per second. The system needs an improvement in the computing module to deliver real-time pedestrian detection. According to the experiments that have been done, the pedestrian detection system using object detection (YOLOv5) works well on images. In order to detect objects in the camera's stream, the computer used must be improved to speed up the frame rate (near real-time detection). Once it is able to detect objects in real time, it can be relied on as a pedestrian detection system in autonomous vehicles.

Conclusion. Based on the testing results in this paper, we conclude that the number of epochs in training a model can reduce the loss. However, doubling the number of epochs does not mean the loss is also reduced twofold. The maximum number of epochs that we tried is 2000, and while it has an object loss of 20.11% and a bounding box loss of 4.48%, it takes at least three hours to develop the model. Meanwhile, the optimal number of epochs is 1000; this has the highest object loss reduction (1.49 points) and only a moderate reduction in bounding box loss (1.5 points). In comparison to the other numbers of epochs, its training time is also reasonably short. On the other hand, the implementation of the pedestrian detection system on the Raspberry Pi as the main processing unit is only reliable for recorded images, since it is only capable of delivering 0.9 frames per second. Due to the necessity of real-time detection from the camera's stream, the main processing unit must be upgraded to better hardware such as an NVIDIA Jetson or any other minicomputer that is specialized for object detection.
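As a rough illustration of the kind of inference loop whose throughput is being reported here (about 0.9 frames per second on the Raspberry Pi 4), the following sketch loads a YOLOv5 model through torch.hub and measures frames per second on a camera stream; the camera index, model size, and frame count are assumptions, not the paper's setup.

```python
# Measure detection throughput (frames per second) of YOLOv5 on a camera stream.
# Requires: pip install torch opencv-python   (torch.hub pulls ultralytics/yolov5)
import time
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # small pretrained model
cap = cv2.VideoCapture(0)                                  # camera index is an assumption

frames, start = 0, time.time()
while frames < 30:                                          # time a short burst of frames
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)            # model expects RGB input
    results = model(rgb)
    detections = results.xyxy[0]                             # tensor: x1, y1, x2, y2, conf, class
    frames += 1

cap.release()
print(f"{frames / (time.time() - start):.2f} frames per second")
```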
3,701.6
2023-06-05T00:00:00.000
[ "Computer Science" ]
The Irony in Ang Lee’s Life of Pi This paper investigates and analyses the irony in Ang Lee’s movie: Life of Pi (2012). Irony is a language (or pictorial in film) style used as a subtle insinuation as well as contains statements that are very contradictory or inversely proportional to the existing reality. There has not been very much work devoted to exploring what it means either to discover irony in movies or to best interpret it, and that is what this article attempts. Conventionally, there are three most commonly used irony in literature, which are verbal (or communicative), situational, and dramatic. The research method applied is descriptive analytic to describe the meaning and context of irony found in Life of Pi. The conclusion is that Ang Lee’s Life of Pi is full of situational irony, thus makes it fascinating to watch and to discuss. Besides, dramatic irony was also found, and it was applied in order to preserve the audience interest’s, provoke curiosity, and make a contrast between the situation of the characters and the later scene that ultimately unfolded. This paper also shows that the cinematographic aspect related to the analysed narration could dramatically add insightful meanings to the irony. INTRODUCTION MacDowell (2016) argued that American narrative cinema depicts a productive foundation for the discussion of irony in film. In the 85 th Academy Awards (commonly referred to as Oscars), Life of Pi, directed by Ang Lee, received four notable awards for best direction, cinematography, visual effect and original score (music) due to its abundant aural and visual resources. Many discussions about the film are stimulated by its plot on the relationship between the main character Pi and the tiger Richard Parker, and it comes as no surprise that the story about Indian Tamil boy from Pondicherry exploring themes of spirituality emerged as a critical and commercial success (Brooks, 2013). In fact, the original writer Yann Martel considered the movie as a pleasant adaptation, stating he was happy the written work could be adapted so well as a film. Even though the conclusion of the movie is not as puzzling as the novel's, the alternate version of Pi's story surprisingly arrives at the audience and raises the same important questions about truth, acknowledgement and faith (Quill, 2012). Considering its notable facts, Ang Lee's Life of Pi has started to gain scholars and researchers' interest by focusing upon its strong spiritual themes. The novel itself has been extensively studied by researchers, but the movie adaptation has not yet gained the same achievement. Stephens (2010) examined the original novel's narrative from both religious and science perspectives. In the end, he concluded that there are two alternate stories that could inspire two different types of reader: secular and religious one. In addition, Cheng and Liu (2014) set their research as a film discourse using semiotic modalities. They argued the relationship between the two protagonists are mostly determined by Pi's attitude toward Richard Parker. Despite the fact that the movie adaptation has also been widely investigated, in-depth analysis of irony concept equipped with cinematography approach in the movie is still rarely studied. Generally speaking, writers may utilise irony to make their readers break and ponder what happens and what has just been stated in the story, or to highlight a main theme (Cadden, 2000). 
Last but not least, Thorn (2015) used some approaches from multiculturalism to psychoanalysis in analysing the content of the movie. He summed up that Pi's cannibalism is sublimated into a story of a boy and four animals on a lifeboat. From the given studies above, none employed irony in analysing the movie. It is possible that irony as one of many language styles in literature is variously used in a film in order to obtain certain effects. Whether realised or not, the presence and the interpretation of irony in a narrative is expected to impact on the overall meaning of a particular scene or the film itself as a whole in which thoughts and feelings from the narrative could be uniquely conveyed. Furthermore, a much-debated subject in other disciplines, in film scholarship irony is habitually referred to but too seldom explored (MacDowell, 2016). Thus, this study primarily aims to fill the gap by investigating the ironic potential in a movie, which is Life of Pi (2012), and the following question guide this study: what ironies are presented in Ang Lee's Life of Pi, and what their meanings are. This research question aims to capture the irony and its meaning in the story with the use of cinematography approach like mise-en-scene. Further, this study is expected to be able to contribute to a better understanding of how irony as literary device impacts audience in perceiving a movie through different lenses. RESEARCH METHOD The method applied in this research was descriptive analytic as well as dialogue analysis to describe the irony context along with the meaning in Ang Lee's Life of Pi. The concept of irony (verbal or communicative, situational, and dramatic) analysed in this paper is based on (MacDowell, 2016), Abrams and Harpham (2011) as well as Arp and Johnson (2008). This research was conducted in two steps: first, discussion of the irony presented in the film based on chronological order and second, examining the irony found in the film from cinematographic aspects. To complete and provide additional lenses to irony analyses in this research, cinematography approach was essentially equipped, for the medium of the research object is a movie. To limit the scope, this study only focuses on the film's mise-en-scene. Mise-en-scene refers to all visual elements of a film production within the space provided (Lathrop & Sutton, 2014). Among important aspects of mise-enscene are setting, lighting, and movement of actors. Mise-en-scene is often recognised as an essential part of the creative process, and in order to control it, the director must stage the event for the camera (Stam, 2017). It is expected that through the mentioned methods above the irony and its meaning can be thoroughly explained. RESULTS AND DISCUSSION The author chronologically describes the scenes that have been analysed using the concept of irony that was mentioned earlier. Each scene will be first given an explanation of the irony type through the narrative or story aspect; then the intended image is added and ends with a discussion of the cinematographic aspect to obtain more insightful meaning. Piscine got his nickname "Pi" Pi was tired of having his name Piscine mocked and teased with "pissing" (urinating) by both his classmates and schoolmates. Therefore, the first irony occurred when Pi stood up and wrote his name on the blackboard in the beginning of a semester at school. This was an example of situational irony, for this incident was unexpected, and this is the story of how Pi eventually got his nickname "Pi". 
In this scene Pi's actions produced effects that are different from what the viewers might have expected. Normally, after what had happened earlier viewers may think that the teacher would try to stop Pi's actions or that other students once again would make fun of him, but neither did happen. In terms of cinematographic aspect, the placement of the camera was arranged in such a way that it felt as if Pi was acting as a teacher in the class and the audience carefully listened to the explanation he was giving. The camera is in the Eye level position-that is, the camera is the same height as the subject level or if the subject is standing/sitting the camera is in the same axis as the subject position (Pi). This is essential to strengthen Pi's position in the classroom and ultimately to pave his way as a "school legend" as he was momentarily taking in charge of the brief introduction after the prior bullies by his friends. The type of shot used was medium shot, which is a general, all-purpose shot (Ablan, 2002). The medium shot was used for a series of dialogue involving Pi and his classmates, and it allowed the audience to pick up on Pi's movements and gestures. His body language was essential to convey his self-assurance, and the shot remained close enough to capture that emotion. Practicing three faiths Pi constantly struggled in finding his way to find God. At first, he was Hindu due to parents' connection, and later on he also found God's love in Christianity. His parents did not know about Pi's journey in searching of God nor they knew when it would end. However, the audience know the secret that Pi was constantly practicing different religious beliefs, thus, creating a dramatic irony. This dramatic irony occurred when the audience knew the three religions that Pi followed that other characters (Pi's family) did not. This very moment was necessarily important in the story because the irony preserved the audience interest's, provoked curiosity, and made a contrast between the situation of the characters and the later scene that ultimately unfolded. Pi was believing in his ancestor religion Hinduism and then baptized into a Christian while practicing Islam. When his father finally discovered his son's religious doings, Pi was advised by his father's memorable quote: "Because believing in everything at the same time is the same as not believing in anything." The storm and sinking of Tsimtsum The thunder and lightning presence in the film indeed feels cliché, but the real sea storm is one of natural phenomena that will definitely make the audience shudder. Pi was seen sliding to the bottom of the deck, holding the railing handrails, then swimming along the flooded corridor to try to save his family. Having been forced to abandon the ship with a lifeboat, the subsequent scene when Pi despite the middle of the big ocean waves kept jumping into the water and seeing the ship sinking from under the sea is something ironic. This is another situational irony because the audience is expected to feel the grief knowing that Pi lost all his family members who drowned in the ship. Yet, viewers would surprisingly feel amazed and stunned by the cinematographic visual beauty. The camera is set to the position of the point of view in which Pi acts as a subject looking at the sinking of the Japanese cargo ship Tsimtsum due to the sea storm. Any audience will be amazed when they see this stunning scene. 
The type of shot used is long shot to stress the setting under the sea as well as to show Pi in shocked silence. Additionally, the lighting used is tungsten or room lighting to provide a dramatic effect in the story. From the cinematographic aspect, it goes without saying that it is ironically one of the best scenes in the film. Richard Parker in the lifeboat was what made Pi survive The biggest irony of this story is probably the presence of a feral and wild tiger in the lifeboat is exactly the reason why Pi remained alive for 7 months afloat. Pi was thrown into a lifeboat by the crew along with a hyena and a zebra, but then he rescued Richard Parker from drowning before he noticed what he was doing. Having a powerful carnivore like tiger in the lifeboat has made the hyena not to prey on Pi and killed the zebra, and later orangutan, instead. Richard Parker also gave Pi a cause to live, for he had to feed Richard Parker, and he could not just give up. Adult Pi remarked, "without Richard Parker, I would have died by now. My fear of him keeps me alert. Tending to his needs gives my life purpose." Thus, this scene can be considered as a situational irony, because living with the tiger was exactly what made Pi a survivor instead of losing his life just like adult Pi later stated: "Richard Parker, my fierce companion, the terrible one who kept me alive…" The scenes of Pi living together with Richard Parker began after the sinking of Tsimtsum and ended when they parted on the Mexican coast. These are the dominant scenes throughout the narrative, and varied camera placements were utilised ranging from high angle to low angle to show the tough journey they both made through the ocean. In the first image of figure 5, initially the camera is in the low angle with Richard Parker as the subject trying to pounce on Pi to show that he is the most powerful creature in the lifeboat. As the story went on, Pi successfully utilised existing device to tame Richard Parker like in the second image. Conversely, the camera is in a high angle position to show that Pi was then in more superior position in giving orders from this moment onwards. Most of the shots used in these notable scenes are close-up shots in order to tightly frame Richard Parker's face, showing his shifted reaction from being savage to submissive as the main focus in the two frames. Lighting used in the scenes mostly is daylight to depict open competition between Pi and Richard Parker in dominating the lifeboat. The same lighting also allows viewers to clearly see Richard Parker's facial expression that ultimately, the beast was forced to accept Pi as another ruler of the rescue boat, creating an uneasy truce between them. The lifeboat is spacious, yet it was awfully 'crowded' Another situational irony is when realising Pi's life in the lifeboat with Richard Parker, the tiger. It can be clearly seen that the lifeboat was designed to accommodate dozens of people. It would have been merrier had Pi shared it with other passengers, yet there were ultimately only two of them, and ironically it was awfully crowded. It is a sarcastic statement that the lifeboat could have accommodated more because it seems that Pi was feeling like he did not have enough room as it should be: "It's time to settle this. If we're going to live together, we have to learn to communicate" (Pi's voice-over for the tiger). The unexpected twist to the discovery that dozens of individuals could have been on the boat with Pi is indeed ironic. 
On the above figure, the camera is positioned on eye-level angle. It is regarded to be sentimentally unbiased and is chiefly employed for direct, realistic presentation between Pi-Parker relationship in the boat. The natural light helps the audience in watching Pi's necessity in constantly fighting for his place over the boat with Richard Parker despite its roomy state. Landing on the Algae Island This part could be seen as Pi's and Richard Parker's last journey in the story. When Pi was so desperate that he thought of giving up, suddenly an island appeared. At first, Pi was uncertain whether the island was real or not, and later he discovered that the mysterious island was actually made of algae in high density instead of soil. When Pi initially decided to land on the algae island, he was thinking of settling and resting from their journey on this island for good. This can ben seen when he tied the precious bracelet given by Anandi, Pi's girlfriend, to a tree root upon his landing on the island. However, Pi found the truth that the island was too good to be true for a paradise and realised that at night the island turned into an acidic carnivorous island devouring anything in the water. Pi then came to a sense that staying at the island mean imminent death to him, so he refused to set the island as his final resting place and headed to the lifeboat to continue the journey with the tiger. It turns out that the island itself is a situational irony since it was not a paradise like he thought at first. Despite being resourceful and peaceful with the abundant meerkats and scarce fresh water in the island, Pi saw the island as a coffin waiting for him to enter. Upon discovering human tooth wrapped in a fruit-like leaves, a wide long shot was used in the movie to show the audience what the island actually looked like. Apparently, the algae island was shaped like a human body. This strengthens the danger lies within the island itself as a carnivorous and deadly one. It was such an irony that the island of algae which nourished Pi and Richard Parker were also trying to consume them in return. Despite its artistically attractiveness, the bitter truth behind the island made Pi continue his journey, or else he would have been like the island itself: lying down dead. Pi's goodbye to Richard Parker After being adrift in the ocean for 227 days, Pi finally arrived at the coast of Mexico. His body was so thin and shabby that he collapsed from the long journey upon his long-awaited arrival on the beach. When his consciousness began to disappear, at that very moment, he saw Richard Parker leaving the boat and walking along the shore to the jungle. Just before entering the woods at the edge of the jungle, Richard Parker stopped. Pi and the audience should have been certain that Richard Parker was going to look back at his former master, had his ears flattened, and growl-that the tiger would bring their bond to an end ceremoniously or in some way. Yet he just stared ahead into the jungle and disappeared for good from Pi's life. Interestingly, two ironies emerged in this scene. First, that Pi was broken hearted after Richard Parker left him is a situational irony. This is primarily because most viewers including Pi himself thought that Richard Parker would at least turn towards Pi as a sign of parting after their epic survival; it did not happen, so this event was really unexpected, making it as an irony. 
Secondly, Richard Parker who looked bony and limp also presented a situational irony in which his dignity and pride as a tiger, the ruler of the jungle, became invisible. Similarly, in the end of the journey Pi once again looked malnourished despite his recovery in the algae island. The placement of the camera on above scene is set to the eye level position because in the initial narration it is told that superior Richard Parker eventually became Pi's companion. They became two creatures that have a symbiotic relationship of mutualism. The types of shots used are long shot on Richard Parker and close-up on Pi. In this scene, the long shot was used to show Richard Parker in relation to the surroundings, returning to his original habitat from so much time spent at sea. The close-up shot on Pi was to clearly show him crying over the parting with his fierce companion. In addition, lighting used on both frames was daylight to emphasize the dramatic effect presented in the film. CONCLUSION Undoubtedly, one of literary devices frequently used in Ang Lee's Life of Pi is irony. This irony brings some additional meanings to particular situation or scene in the film. Ironical scenes and situations in film could grow audiences' curiosity for a better understanding in appreciating the screen work. Ironies made literary works more dramatic, and bring the audience to imagine as well as to comprehend the implied meanings of the story from the motion picture. The irony in Ang Lee's Life of Pi is clearly seen when referring to the definition of the concept of irony itself. Of three types of irony, situational irony is dominantly presented in each discussed scene. In fact, two situational ironies could even emerge simultaneously in a scene. From the technical perspective, cinematographic aspect of mise-en-scene such as lighting and camera angle are essential elements in strengthening the irony and its meaning through the story of the movie. Despite these research merits, there are still gaps to fill for further research like how other literary devices like symbolism could be analysed in Ang Lee's Life of Pi. Alternatively, scholars can conduct deeper and further study concerning the irony concept with other cinematic approach in the movie.
4,657.8
2021-03-18T00:00:00.000
[ "Linguistics" ]
Chip Formation in the Machining of Al-Si / 10 % AlN Metal Matrix Composite by using a TiN-coated Carbide Tool This paper presents a study on chip formation in the milling process of Al-Si/10% AlN Metal Matrix Composite (MMC). It focuses on the effect of cutting parameters on the formation of the chip. Al-Si/10% AlN MMC reinforced with 10% AlN particle is a new-generation material that is suitable for manufacturing automotive and aerospace components. Several advantageous characteristics of this material include low density, light weight, high strength, high hardness and high stiffness. The milling process was carried out at dry cutting conditions by using TiN-coated carbide tool insert, which was developed by Standards and Industrial Research Institute of Malaysia (SIRIM). The machining parameters were as follows: a constant cutting speed of 230 m/min, feed rates of 0.4, 0.6 and 0.8 mm/tooth and cutting depths of 0.3, 0.4 and 0.5 mm. The analysis of the chip formation was performed using a video microscope (Sometech, SV-35). The chips were formed because of the shear between the work pieces and the cutting chips during dry milling of Al-Si/10% AlN MMC. These chips were small, short and discontinuous with outer face cracks. INTRODUCTION AlSi alloy is a Metal Matrix Composite (MMC) that is widely used invarious industrial sectors, such as transportation, domestic equipment, aerospace, military and construction.It is a matrix composite reinforced with AlN particle and transformed into a newgeneration material for automotive and aerospace applications (Said et al., 2014).In general, Al/Si MMCs consist of two chemically and physically distinct materials that are suitably distributed to provide properties that are not obtainable from either the individual phase or fibrous or particulate phase in the form of continuous or discontinuous fiber, whiskers and particles.They are distributed in a metallic matrix containing light metals, such as aluminum, magnesium, titanium and copper and their alloys (Said et al., 2014;Sahin and Sur, 2003;Patel and Patel, 2012;Chawla and Chawla, 2006).Al-Si MMCs materials have increasingly replaced conventional materials in many applications.MMC has a combination of metal and ceramic properties (Abdullah, 2009;Said et al., 2013). Chip formation is an important index of machining because it directly or indirectly indicates the nature and behavior of work at machining conditions as well as the nature and degree of interaction at the chip-tool interfaces (Radhika et al., 2013).MMC has very good mechanical properties, which are caused by the combination of hard reinforcement, such as SiC and elastic matrix material, such as aluminum or magnesium (Shetty et al., 2008). 
To date, only a few reports exist on the use of AlN as a reinforcement to the composite Al alloy. To achieve a longer tool life in current production practices, as well as to enhance our knowledge of tools that can withstand high cutting temperatures, understanding the mechanism of chip formation is a fundamental element that influences tool performance (Said et al., 2014). Models are used to understand the orthogonal cutting chip formation mechanism. The formation mechanism of the chips depends on the nature of the machined material and the machining parameters. The three types of chips produced in machining (Oxley, 1989; Said et al., 2014) are discontinuous, continuous and continuous with built-up edge. Discontinuous chips are chips formed with multiple segments and produced when machining brittle materials at a low cutting speed. Continuous chips are produced when machining ductile materials at a high cutting speed and at a low feed rate (Groover, 1996). Continuous with built-up edge chips are produced when machining ductile materials at a low cutting speed. The study of chip formation is the cheapest and most effective approach to understand the machining characteristics of a material (Radhika et al., 2013).

Chips produced can be divided into two categories, namely, acceptable and unacceptable chips (Ghani and Yong, 2006). Acceptable chips do not disturb the work or machine tool and do not cause problems in chip removal, whereas unacceptable chips disrupt manufacturing operations because they have a tendency to shrink around tools and the work piece as well as pose safety problems to workers (Ghani and Yong, 2006). Figure 1 shows the deformation zone during the machining process. A low shear zone extends along the shear plane and represents the boundary between the chip and the work piece material, which is subjected to shear deformation. The secondary shear zone is located along the tool rake surface and is subjected to additional shear to form the chip. The second area includes the interface between the chip and the tool rake face. Some shear caused by rubbing of the side face of the tool against the newly generated surface can also be observed.

This study presents chip formation during Al-Si/10%AlN MMC machining at different feed rates and cutting depths and a constant cutting speed by using the TiN-coated insert. The factors that influence chip formation were identified and their effect on improving the machinability of new materials was proposed. Figure 1 shows the major deformation zones in metal cutting. The friction and wear characteristics of the tool and work piece combination are important in this zone.

METHODOLOGY

Experimental machining: Al-Si/10%AlN MMC was produced via the stir casting process in block form with a size of 120 mm long × 100 mm wide × 50 mm thick. Table 1 shows its composition. The material is reinforced with 10% of small AlN particles with a size of <10 µm and a purity of >98%. The Al-Si/AlN MMC was fabricated via the stir casting method. The AlSi alloy ingot was heated in a graphite crucible at 750°C and held for 30 min until the material was completely melted. The preheated AlN particles were added to the molten metal, stirred for 5 min (Fig.
2) and immediately cast into a permanent mold via the bottom pour technique. The solidified Al-Si/10%AlN metal matrix composite underwent a heat treatment process to improve its mechanical properties, such as strength and hardness (Tomadi et al., 2013). The three stages of the heat treatment were solution treatment (0 to 540°C for 30 min and 540°C for 4 h), water quenching (60°C) and continuous aging (0 to 180°C for 30 min and 180°C for 4 h). Figure 3 shows the microstructure of Al-Si/10%AlN MMC at 10x magnification after heat treatment, and Table 2 shows the mechanical properties of the 10 wt.% Al-Si/AlN MMC material.

The milling process was carried out at dry cutting conditions by using the TiN-coated carbide tool insert, which was developed by SIRIM (Advanced Materials Research Centre). The TiN films were deposited on the carbide insert at similar physical conditions by using the Hauzer Techno Coating 9HTC 625/2 ARC coating system. The substrates were cleaned for 30 min in an ultrasonic bath by using a mild, solvent-free detergent. The substrates were blow-dried using high-pressure nitrogen gas to remove any dust contaminants from the surface and then placed into the coating chamber. Film deposition was carried out with the substrate biased with a DC power source to introduce proper ion bombardment on the growing surface, assisting in obtaining a desirable structure, grain size and film density. The chamber was evacuated to a pressure of approximately 4×10⁻⁶ mbar and back-filled with nitrogen gas at approximately 10⁻³ mbar. The complete deposition procedure has been described previously (Mubarak et al., 2008; Mubarak et al., 2005). Table 3 shows a summary of the deposition parameters. This study used a CoroMill tool holder R390-020C4-11L and an uncoated carbide cutting tool insert with a diameter of Ø20 mm and a nose radius of 0.2 mm (Fig. 4). Table 4 shows the geometry of the carbide tools used for milling Al-Si/10%AlN MMC.

RESULTS AND DISCUSSION

Chip formation at different feed rates and cutting depths: The chips were formed because of the shear between the work piece and the cutting chips (Ghani and Yong, 2006). Figures 5 to 7 show the chip shapes formed during the dry milling of Al-Si/10%AlN MMC by using the TiN-coated carbide insert developed by SIRIM. The chips were small, short and discontinuous with outer face cracks. Ozcatalbas (2003) and Said et al. (2013) also observed this form when machining Al composite materials.

As shown in Fig. 7, the resulting chip was thick with cracks at a cutting speed of 230 m/min, a feed rate of 0.8 mm/tooth and a cutting depth of 0.5 mm. This result was due to the fact that an increase in feed rate increases the tool-chip contact length, which increases the temperature at the surface of the work piece (Radhika et al., 2013). The band lines also affected the three types of body wear during cutting, which caused the cutting tools to wear easily and quickly. These chips were also observed by Ozcatalbas (2003), who found that chip volume increases with the gross thickness of the segment. He added that these phenomena occurred because of the sheet-like structure and the low hardness of the material. According to Lin et al. (1997), low hardness and high ductility are known to cause an adhesion period but reduce the slip period in segment formation. The chip formation changed at a cutting speed of 230 m/min, and different results were obtained for different feed rates and cutting depths. Figure 5 illustrates the chip formation at a feed rate of 0.4 mm/tooth and a cutting depth of 0.4 mm, and Fig. 6 illustrates the formation at a feed rate of 0.6 mm/tooth and a cutting depth of 0.3 mm. The chip formations in Fig. 5 and 6 seemed shorter compared with that in Fig. 7, which shows the formation at a feed rate of 0.8 mm/tooth and a cutting depth of 0.5 mm. According to Ghani and Yong (2006), microstructure changes in the chip are also affected by the cutting speed, chip thickness, cutting depth and other factors such as the hardness of the work piece (Said et al., 2013).

CONCLUSION

The chips formed during the machining process of Al-Si/10%AlN MMC by using the TiN-coated cutting tool insert developed by SIRIM were small, short and discontinuous with outer face cracks. The main mechanism of chip formation involved the initiation of cracking of the outer surfaces, which were chip-free because of the high shear stress. The chip lengths had different sizes, which were based on the feed rate and cutting depth.

Fig. 2: Fabrication of the metal matrix composite using the stir casting technique.
Fig. 5: Elemental chips at cutting conditions V = 230 m/min, F = 0.4 mm/tooth, D.O.C. = 0.4 mm.
Table 1: Chemical composition of the materials.
Table 4: Geometry of the carbide tools used for milling Al-Si/10%AlN MMC.
3,099.6
2016-09-15T00:00:00.000
[ "Materials Science" ]
Coconut (Cocos nucifera) tree disease dataset: A dataset for disease detection and classification for machine learning applications The ``Coconut (Cocos nucifera) Tree Disease Dataset'' comprises 5,798 images across five disease categories: ``Bud Root Dropping,'' ``Bud Rot,'' ``Gray Leaf Spot,'' ``Leaf Rot,'' and ``Stem Bleeding.'' This dataset is intended for machine learning applications, facilitating disease detection and classification in coconut trees. The dataset's diversity and size make it suitable for training and evaluating disease detection models. The availability of this dataset will support advancements in plant pathology and aid in the sustainable management of coconut plantations. By providing a valuable resource for researchers, this dataset contributes to improved disease management and sustainable coconut plantation practices. Value of the Data • The "Coconut Tree Disease Dataset" is highly relevant to the agricultural and plant pathology research communities.It addresses an important real-world problem -coconut tree diseases -which can have severe implications for the coconut industry and food security.• First Open-Access Dataset: This dataset is the first openly accessible collection of coconut tree diseases samples.It facilitates collaboration among researchers, accelerating advancements in disease detection, monitoring, and management in coconut cultivation.• With 5,798 images representing five distinct disease categories, the dataset exhibits diversity and comprehensiveness.Such diversity is crucial for training and evaluating machine learning algorithms, ensuring their effectiveness in detecting and classifying different coconut tree diseases.. • Machine Learning Applications: The availability of this dataset opens avenues for researchers to develop and compare advanced machine learning models for coconut tree disease detection and classification.It encourages innovative approaches and fosters collaborations in the field of plant pathology. Data Description The image datasets play a crucial role in various fields, ranging from computer vision and machine learning to medical research and social sciences [ 1-6 , 17 ].The objective of the "Coconut ( Cocos nucifera ) Tree Disease Dataset'' is to provide a high-quality and diverse collection of images for facilitating machine learning applications in automated disease detection and classification in coconut ( Cocos nucifera ) trees.The "Coconut Tree Disease Dataset'' contains a collection of high-resolution images, each with dimensions of 768 pixels in width and 1024 pixels in height.The images have a resolution of 72 dots per inch (dpi), ensuring clarity and detail in the visual representation of the coconut tree diseases. 
Each image in the dataset is associated with one of the five disease classes, providing a wellbalanced representation of coconut tree diseases ( Table 1 ).The study of "Bud Root Dropping,'' "Bud Rot,'' "Gray Leaf Spot,'' "Leaf Rot,'' and "Stem Bleeding'' in coconut trees holds paramount importance due to their collective impact on the overall health and productivity of coconut plantations.Each of these diseases presents unique challenges and potential threats to the coconut industry."Bud Root Dropping'' affects early growth stages, potentially leading to stunted development and reduced yield."Bud Rot,'' a fungal disease, can swiftly devastate entire groves if not promptly identified and managed."Gray Leaf Spot'' poses a significant threat, as it spreads rapidly and can lead to widespread defoliation."Leaf Rot'' compromises the photosynthetic capacity of the tree, directly impacting its vitality."Stem Bleeding'' is a chronic condition, progressively weakening the tree's structural integrity.A comprehensive understanding of these diseases is essential for implementing targeted control measures, preventing widespread outbreaks, and ultimately ensuring the long-term viability of coconut plantations.The dataset we present here is a crucial resource for researchers and practitioners dedicated to combatting these specific diseases, enabling them to develop more effective and tailored solutions for disease management in coconut trees [ 18 , 19 ]. The dataset was captured using the rear cameras of Samsung F23 5G Mobile, which boasts high-resolution imaging capabilities at Kendur, Maharashtra (18 °47 06.4"N 74 °01 19.5"E).The use of mobile cameras for data collection allows for flexibility and ease of capturing images directly in the coconut plantations, ensuring real-world representations of the diseases in their natural settings.The dataset's high-resolution and mobile camera origin contribute to the dataset's quality and authenticity, making it a valuable resource for researchers and practitioners interested in developing advanced disease detection and classification models.Each category is labelled and organized in separate folders, ensuring easy access and identification of specific disease samples.Fig. 1 shows directory structure of the coconut tree disease dataset. Experimental design The dataset for Coconut Tree Disease was obtained by taking pictures using the highresolution rear cameras of a Samsung F23 5G Mobile.The data acquisition process consisted of two main steps, as summarized in Table 2 .Image Pre-processing July The images appropriate for dataset were selected from gathered images and were pre-processed. Step 1: Image Acquisition (Duration: April to July): During this phase, we conducted field visits in daylight to capture images related to various coconut tree diseases.The main goal was to create a comprehensive collection of disease-related images. Step 2: Image Pre-processing (Duration: July): In this step, we carefully reviewed the collected images and selected appropriate ones for the dataset.The selected images underwent pre-processing, which involved resizing, cropping, and enhancing them as needed using IrfanView Software.Pre-processing images in the dataset is important because it ensures uniformity in size, focuses on key features, and enhances image quality.This consistency helps machine learning models to better detect and classify diseases accurately. 
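To illustrate the resizing step just described (the authors used IrfanView), a minimal Python sketch is shown below; Pillow, the folder names and the JPEG output are illustrative assumptions, while the 768 × 1024 target size follows the dataset description.

```python
# Minimal sketch: resize raw field images to the dataset's 768 x 1024 size.
# Pillow is used here for illustration; the authors used IrfanView.
from pathlib import Path
from PIL import Image

RAW_DIR = Path("raw_images")        # hypothetical folder of field photos
OUT_DIR = Path("dataset/Bud_Rot")   # hypothetical per-disease output folder
OUT_DIR.mkdir(parents=True, exist_ok=True)

for img_path in RAW_DIR.glob("*.jpg"):
    with Image.open(img_path) as im:
        resized = im.resize((768, 1024))   # width x height per the dataset spec
        resized.save(OUT_DIR / img_path.name, "JPEG")
```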
Overall, the data acquisition process involved capturing images during field visits and subsequently preparing them through pre-processing to include in the dataset. Materials or specification of image acquisition system The cameras used in the data acquisition process and the specifications of the captured images: Samsung Galaxy F 23 5G Android Mobile: • Make and Model: Samsung Galaxy F 23 5G (SM-E236B) Android Mobile. During the data collection process, efforts were made to adhere to standardized image acquisition practices, capturing each image using the rear cameras of a Samsung F23 5G Mobile known for its high-resolution imaging capabilities.This maintained consistency and quality throughout the dataset.Subsequently, the images were pre-processed using IrfanView Software, involving resizing, cropping, and contrast adjustments to enhance uniformity and highlight disease symptoms.The captured images were saved in JPG format and resized to a resolution of 768 × 1024 pixels.To ensure accurate labelling, a plant pathologists and agricultural specialists, from Rashtrapita Mahatma Gandhi Art's, Science College Nagbhir meticulously categorized each image with its respective disease category.This multi-step labelling process involved rigorous scrutiny and cross-validation by multiple experts to mitigate errors and enhance reliability.The validation of disease categories by domain experts not only added credibility to the dataset but also ensured the accurate representation of each disease's visual characteristics. Method The data for the coconut tree disease dataset was collected by visiting a farm in Kendur, Taluka-Shirur, District-Pune, India.Images were captured in various scenarios, including leaves in their natural environment and after being cut or separated from the plant.This allowed for a comprehensive representation of coconut tree diseases in different conditions. Table 3 presents the distribution of images by various categories of coconut tree diseases in the dataset.The dataset consists of a total of 5798 images, with each category containing a different number of images.The categories include Bud Root Dropping, Bud Rot, Gray Leaf spot, Leaf Rot, and Stem Bleeding.These are relatively common diseases that can affect coconut trees.These diseases are known to occur in coconut cultivation regions and can lead to varying degrees of damage if not managed properly.The original format of the images is now accessible to the public through Mendeley [7] .The dataset empowers machine learning models to recognize and categorize diseases in coconut trees, streamlining disease monitoring and intervention efforts for improved plantation health.Coconut tree diseases can cause substantial financial losses by decreasing crop yield and necessitating costly disease management measures, affecting both revenue and expenses for coconut growers.Coconut tree diseases pose significant challenges to coconut cultivation regions worldwide, threatening livelihoods and economies.Machine learning-driven early detection using datasets like this can mitigate these challenges and promote sustainable coconut farming practices. 
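Because each disease category is stored in its own folder, the per-class distribution summarized in Table 3 can be re-derived with a short script such as the sketch below; the dataset root path is a placeholder.

```python
# Minimal sketch: count images per disease-category folder.
from pathlib import Path

DATASET_ROOT = Path("Coconut_Tree_Disease_Dataset")  # placeholder path

total = 0
for class_dir in sorted(p for p in DATASET_ROOT.iterdir() if p.is_dir()):
    n = sum(1 for f in class_dir.iterdir()
            if f.suffix.lower() in {".jpg", ".jpeg", ".png"})
    total += n
    print(f"{class_dir.name:20s} {n:5d} images")
print(f"{'Total':20s} {total:5d} images")  # should come to 5,798 for this dataset
```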
Evaluation framework A robust evaluation is crucial to ascertain the dataset's efficacy in training accurate and reliable models for disease detection in coconut trees.We incorporate key metrics such as accuracy, precision, recall, and the F1-score to provide a holistic understanding of the models' capabilities.We utilized a dataset of coconut trees images categorized into five classes, representing different disease types.Prior to training, we employed the VGG16, ResNet50, and MobileNetV2 architectures, which are renowned for their capabilities in image recognition tasks [16] .We preprocessed the dataset, performed data augmentation to enhance model generalization, and finetuned the models for the specific classification task.Before training, the initial performance of the models was limited, with VGG16 achieving 0.2 % accuracy, ResNet50 achieving 0.4% accuracy, and MobileNetV2 achieving 0.25% accuracy.However, after training, substantial improvements were observed, with VGG16 achieving an accuracy of 88%, ResNet50 achieving 94% accuracy, and MobileNetV2 achieving 92% accuracy ( Tables 4 and 5 ).The primary focus was directed towards the meticulous creation of a representative dataset, with the intention of mitigating class imbalance through specialized sampling techniques during subsequent model training.It is imperative to highlight that the reported results emanate from a comprehensive assessment encompassing 20 independent runs, a deliberate choice to bolster the robustness and reliability of our findings.Pertaining to the phenomenon of overfitting, it is noteworthy that the confusion matrix exhibited a discernible diagonal principal, indicative of a substantial proportion of correct predictions executed with high confidence.Furthermore, an indepth analysis unveiled a disparity of less than 10% in recognition rates among distinct classes, providing further evidence to support the absence of overfitting. Despite the notable improvements in accuracy post-training, the models exhibited signs of overfitting during testing.Overfitting occurs when a model performs well on the training data but struggles to generalize to unseen data, leading to diminished performance on the test set.This phenomenon was evident when evaluating the models on a separate test dataset.The models displayed higher accuracy on the training set compared to the test set, indicative of overfitting.The results indicate that while the models are capable of achieving high accuracy on the training data, their performance on unseen data remains suboptimal.Addressing overfitting is crucial to ensure that the models can make reliable predictions in real-world scenarios.Strategies such as incorporating regularization techniques, collecting additional diverse data, and exploring transfer learning approaches should be considered to mitigate overfitting and enhance model generalization.The dataset demonstrated the potential of deep CNNs for coconut leaf disease classification, achieving significant performance improvements through training.However, the challenge of overfitting during testing underscores the importance of further research to enhance the models' ability to generalize to new, unseen data.Addressing this challenge will contribute to the development of robust and reliable models for coconut disease detection, benefiting farmers and agriculture stakeholders. 
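As a minimal sketch of the kind of transfer-learning pipeline described above (not the authors' exact configuration), the snippet below fine-tunes a MobileNetV2 backbone on the five disease classes; the image size, validation split, optimizer and number of epochs are assumptions made for illustration.

```python
# Minimal sketch: fine-tune MobileNetV2 on the five coconut-disease classes.
# Assumes the dataset is arranged as one sub-folder per class.
import tensorflow as tf

DATA_DIR = "Coconut_Tree_Disease_Dataset"   # placeholder path
IMG_SIZE, BATCH = (224, 224), 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False                       # freeze the pretrained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(5, activation="softmax"),      # five disease classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```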
Dataset applications Through the application of advanced machine learning techniques, this dataset can markedly enhance disease diagnosis and management strategies.By training machine learning models on the dataset's diverse range of images depicting diseases like Bud Root Dropping, Bud Rot, Gray Leaf Spot, Leaf Rot, and Stem Bleeding, we can empower automated systems to rapidly and accurately identify disease symptoms.This capability enables timely intervention, as affected trees can be promptly treated or isolated, curbing the spread of diseases across plantations.Consequently, the dataset facilitates a paradigm shift towards proactive disease management, reducing the reliance on broad-spectrum pesticides and minimizing ecological impact [8][9][10][11] . The dataset's utility extends beyond mere disease detection.Machine learning models, fuelled by the dataset's rich visual information, can predict disease outbreaks based on historical data and prevailing environmental conditions.This predictive capability allows farmers and plantation managers to make informed decisions regarding disease control measures, resource allocation, and crop rotation, ultimately optimizing agricultural practices and minimizing losses. Machine learning-powered systems, utilizing the dataset's annotations, can guide targeted application of fertilizers, pesticides, and water resources.This optimization minimizes waste and environmental pollution, while ensuring that plants receive the necessary care tailored to their specific needs.This integration of data-driven insights and sustainable farming practices not only safeguards coconut plantations from disease-related challenges but also cultivates a resilient and ecologically conscious agricultural landscape [ 12 , 13 ]. The dataset holds promise for a range of practical applications beyond academic research, particularly within the agricultural sector and coconut plantation management.One such avenue involves its integration into automated monitoring systems, where machine learning algorithms harness the dataset to enable real-time disease detection and alerts for plantation owners and farmers.Collaborations with agricultural extension services could yield user-friendly platforms, allowing farmers to upload images of their coconut trees for swift disease diagnoses and tailored treatment recommendations.Additionally, machine learning models trained on the dataset could generate personalized disease management strategies, accounting for local conditions and best practices.The dataset's potential extends further to partnerships with technology companies, spurring the development of specialized hardware or software solutions for disease detection in coconut plantations.In essence, the "Coconut Tree Disease Dataset'' stands poised to revolution-ize disease monitoring, management, and knowledge dissemination, transcending academia to cultivate healthier and more sustainable coconut plantations.By harnessing the dataset's comprehensive images and disease annotations, machine learning can enhance precision agriculture approaches, optimizing irrigation, nutrient application, and crop protection.This synergy of advanced technology and agricultural expertise paves the way for a sustainable future, where datadriven decision-making mitigates disease impact, enhances productivity, and promotes environmentally conscious coconut plantation management [ 14 , 15 ]. 
Limitations: The dataset is collected from a specific region, potentially limiting its applicability to other geographical areas with different disease prevalence or manifestations.

Ethics Statement: Our study does not involve animals or humans. Therefore, we confirm that our research strictly adheres to the guidelines for authors provided by Data in Brief in terms of ethical considerations.

The data collection process for the "Coconut Tree Disease Dataset" was meticulously carried out in the Kendur region, located in Taluka-Shirur, Pune district, Maharashtra, India. To ensure the dataset's relevance and diversity, images were collected from various coconut plantations in the region, considering different growth stages, environmental conditions, and disease manifestations.

Fig. 1: Directory structure of the coconut tree disease dataset.
Table 1: Sample images of the coconut disease dataset.
Table 2: Data acquisition steps.
Table 3: Total number of images per category in the coconut tree disease dataset.
Table 4: Accuracy values of the disease detection models.
Table 5: Average performance values of the machine learning models using the 5-fold cross-validation technique.
3,276
2023-10-01T00:00:00.000
[ "Computer Science", "Environmental Science" ]
Identification and Quantification of Explosives in Nanolitre Solution Volumes by Raman Spectroscopy in Suspended Core Optical Fibers A novel approach for identifying explosive species is reported, using Raman spectroscopy in suspended core optical fibers. Numerical simulations are presented that predict the strength of the observed signal as a function of fiber geometry, with the calculated trends verified experimentally and used to optimize the sensors. This technique is used to identify hydrogen peroxide in water solutions at volumes less than 60 nL and to quantify microgram amounts of material using the solvent's Raman signature as an internal calibration standard. The same system, without further modifications, is also used to detect 1,4-dinitrobenzene, a model molecule for nitrobenzene-based explosives such as 2,4,6-trinitrotoluene (TNT). Introduction Explosives detection is an area of chemical sensing of particular interest as a result of the advent of improvised explosive devices (IEDs) and terrorist activities that cause concerns for national security both home and abroad. As explosive devices move away from incorporating metal shells and parts, the need to detect the actual explosive molecules becomes imperative [1,2]. Amongst the various detection schemes, optics-based detectors show great promise as they allow non-invasive interrogation of species, can be applied across different analyte phases and are based on robust optoelectronics technology. Photoluminescence and chemoluminescence sensors in particular have shown potential for explosives detection, albeit with their own limitations [3]. While photoluminescence-based explosives detection has been shown to enable very low detection limits, false-positives can affect the selectivity of this technique. Chemoluminescence deals with this challenge by using analyte-specific binding sites and reactions to produce the detected optical signal; however this approach requires individual detector preparation depending on the analyte under investigation [4][5][6]. An alternative set of optical detection techniques are based on Raman and infrared (IR) spectroscopy, whereby the interactions between excitation light and molecular species result in unique, fingerprint-like spectra that can be compared against tables of materials to uniquely identify the explosive molecules [7][8][9][10]. However, these techniques suffer from relatively low signal intensities compared to luminescence detection [11], thus requiring additional signal enhancement techniques such as surface-enhanced Raman scattering (SERS) [12]. Across all the optical detection techniques, optical fibers have been successfully used in sensing chemical and explosive species due to their robustness, ability to access difficult-to-reach areas and their immunity to electromagnetic interference that allows them to separate the detection area from the controlling electronics [13,14]. Suspended-core optical fibers are a category of microstructured optical fibers in which a light-guiding core is suspended inside a capillary-like fiber jacket by a number of struts, essentially creating a micro-or nano-wire waveguide suspended inside a protective shell [15,16]. These small core dimensions result in a significant overlap of the guided light with any liquid or gas that is loaded into the holes in the fiber, which in turn creates a response signal along the entire length of the fiber that is subsequently waveguided by the core to a detector [17]. 
Sensing platforms based on suspended-core fibers are of particular interest to small-volume chemical and biological sensing as the signal generated by the analyte is integrated along the length of the fiber, resulting in low detection limits while their filling volumes are typically in the range of 10 s of nanoliters [17,18]. Prior experimental demonstrations of Raman and surface-enhanced Raman sensing in microstructured fibers have been focused on short lengths of hollow-and solid-core photonic crystal optical fibers [19][20][21][22][23][24]. Suspended-core fiber Raman sensors have been proposed [25], but surface functionalization to achieve SERS effect is required to observe any signal, increasing the sensor complexity and preparation time [26]. In this work, we use suspended-core optical fibers as an active dip sensor platform for Raman sensing of explosives without need for modifications when switching between species. This platform is used for detection of hydrogen peroxide (H 2 O 2 ), a material used in preparing homemade explosives that lacks the nitroaromatic groups that 2,4,6-trinitrotoluene (TNT)-based explosives use and therefore is more difficult to detect with traditional techniques based on specific chemical group interactions [27,28]. Numerical modeling is performed for these sensors that includes coupling of both the excitation and Raman fields to high order optical modes as a guide for fiber core radius selection, guiding the selection of a 0.85 μm radius fiber as the most effective Raman sensor. The small sampling volume (60 nL), combined with long interaction lengths inside the fiber, result in detected, quantifiable amounts of less than 1 μg of hydrogen peroxide in aqueous solutions. The same unmodified system is shown to also detect comparable amounts of 1,4-dinitrobenzene (DNB), a substitute for TNT-like molecules, highlighting the flexibility and unique identification capability of this technique. Experimental Section The experimental setup used in these experiments is shown in Figure 1. Continuous wave light from an Ar + laser (Melles Griot Series 43) at 488 nm is reflected off a Raman filter (Semrock 488 nm long-pass RazorEdge Ultrasteep) and is coupled into a suspended core fiber using a 60 microscope objective (focal spot radius 0.85 μm) to deliver 70 mW to the fiber front face, with 22 mW measured at the far end of the fiber due to coupling efficiency and fiber loss (31% throughput). Initially a 20 cm length of suspended core fiber was used, made in-house from silica glass (F300, Heraeus), with a core radius of 0.85 μm (based on the radius of a circle that has the same area as a triangle that completely fits within the core region) and a hole radius of 6.3 μm, also shown in Figure 1. The far end of the fiber is dipped into a glass vial containing an aqueous solution of hydrogen peroxide and the fiber is allowed to fill by capillary force action to a total filled length of 16 cm to avoid droplet formation at the coupling end, resulting in a total sampling volume of 60 nL. The Raman signal from the fiber is collected in backscattering mode through the Raman filter and is analyzed using a cooled-CCD spectrometer (Horiba Jobin Yvon iHR320) with an 1800 lines/mm holographic grating, resulting in a spectral resolution of 0.03 nm. One of the factors that can affect the behavior of a suspended core fiber used for Raman sensing is the Raman signal originating in the glass core itself. 
The degree to which this occurs depends on the core material, the molecules detected, and the geometry of the fiber. The excitation light travelling down the fiber core will induce a Raman signal from the glass that will be overlaid on the signal created by the analyte molecules around the core. Silica glass is known to have a Raman signature originating in the vibrational movements of the Si-O lattice [29]. Figure 2 shows a measured Raman signature of the unfilled silica suspended core fiber under 488 nm excitation from an Ar+ laser, where the units for the x axis are inverse centimeters, expressing the difference in energy between the laser light and the observed spectral features. The same figure also shows the Raman signature of hydrogen peroxide for comparison, taken using an Agiltron H-PeakSeeker Pro-785 desktop Raman spectrometer. The main peak from hydrogen peroxide appears at 876 cm −1 and is attributed to the O-O stretching mode [30,31]. As seen in the graph there is a significant degree of overlap between the two spectra in the 800 to 900 cm −1 region, meaning that the results from the experiments will be mostly a function of the peroxide-to-silica (signal-to-background) ratio of Raman signals, rather than the net Raman signal of hydrogen peroxide. To this end it is essential to study the effects of changing the fiber sensor parameters that affect the ratio between the background Raman signal from the core and the Raman signal from the analyte sample in the holes surrounding the core. For a given fiber core material, the most influential parameter that can change this ratio is the size of the light-guiding core, and therefore numerical simulations are required to determine the optical fiber with the best sensing performance. Numerical Modeling of Suspended Core Optical Fibers as Raman Sensors In evanescent field sensing experiments using suspended core fibers, it has been shown that smaller core radii result in enhanced signals when background signals are present [32]. Much like in the fluorescence measurements, the signal-to-background ratio between the Raman signals from hydrogen peroxide in the holes of the suspended core fiber and the silica core is a direct result of the geometry of the fiber, which supports a small number of optical modes at the wavelengths and core sizes used here (19 modes for a 1 μm core radius fiber, 67 modes for a 2 μm core radius). In order to guide the fiber design process by identifying which fiber core dimensions will produce the highest signal-to-background ratio for Raman sensing experiments, numerical modeling of the suspended core fibers was performed. The model assumes a circular suspended silica core of a given diameter, surrounded by water, the solvent used in the experiments, to investigate the effect of core radius on the performance of the fiber sensor. Initially, excitation light E in is coupled into different optical modes within the fiber with a given coupling efficiency (CE j ) for each mode, j, as shown in Figure 3a and defined in Equation (1) [33]. E in and H in are the electric and magnetic field intensities of the input beam, e j and h j are the electric and magnetic field distributions of the fiber mode j, and z is the direction of propagation of the mode along the length of the fiber.
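As an illustration of the overlap calculation behind Equation (1), the short sketch below estimates a coupling efficiency in a simplified scalar approximation, replacing the vector fields E in , H in , e j and h j with scalar transverse profiles; the Gaussian beam and mode widths used here are placeholder assumptions rather than solved fiber modes.

import numpy as np

# Scalar-approximation sketch of the modal coupling efficiency (the paper's
# Equation (1) uses the full vector fields; the Gaussian profiles below are
# placeholder assumptions, not solved modes of the suspended core fiber).
def coupling_efficiency(e_in, e_mode, dA):
    """Fractional power coupled from an input field into one fiber mode."""
    overlap = np.sum(e_in * np.conj(e_mode)) * dA
    p_in = np.sum(np.abs(e_in) ** 2) * dA
    p_mode = np.sum(np.abs(e_mode) ** 2) * dA
    return np.abs(overlap) ** 2 / (p_in * p_mode)

# Transverse grid (microns) spanning the core region
x = np.linspace(-3, 3, 301)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2
R2 = X ** 2 + Y ** 2

w_spot = 0.85   # focal spot radius of the launch beam (um, from the setup)
w_mode = 0.7    # assumed fundamental-mode field radius (um)
e_in = np.exp(-R2 / w_spot ** 2)
e_fund = np.exp(-R2 / w_mode ** 2)

print(f"CE into fundamental mode: {coupling_efficiency(e_in, e_fund, dA):.2f}")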
The guided light has components both in the core and in the cladding (in this case the solution-filled holes around the core), and therefore the corresponding power fractions PF j of the guided light for each mode will determine the strength of interaction with the silica core and the analyte molecules in the holes, respectively [34]. The power fraction for each excitation mode j is given by Equation (2). The capture fraction (CF jv ), defined as the fraction of the total Raman scattered signal generated by each excitation mode j across the entire area of the hole H and collected by each Raman wavelength mode v, is given by Equation (3). In Equation (3), λ is the Raman wavelength, n v is the solvent refractive index at the Raman wavelength, ε 0 is the electric permittivity of free space, µ 0 is the magnetic permeability of free space, and s j is the component of the Poynting vector along the fiber axis (z direction) for the mode j. This definition assumes that the Raman scattering is uniform in all directions and has random polarization. Confinement loss (CL) is also considered, which becomes important for microstructured optical fibers because the cladding index and core index are the same and thus the core supports leaky modes [33]. Thus, the capture fraction as a result of a given excitation mode j that scatters into all Raman wavelength modes v is expressed as Equation (4). The figure of merit (FOM x ) for the Raman signal generated either in the core (x = C) or in the holes (x = H) is then a function of the power fraction (PF j ) available in each excitation mode and the corresponding coupling efficiency (CE j ), the confinement loss (CL j ) of the mode and finally the capture fraction (CF j ) as defined above, across all excitation and Raman optical modes, and is given by Equation (5). The total figure of merit (FOM T ) for the system, which takes into account both the analyte Raman signal generated in the holes and the background signal generated in the core, is given by Equation (6); this parameter is expected to correlate with experimentally observed signal-to-background quantities, thus guiding the choice of fiber. Numerical simulations were performed for core radii between 0.1 and 2.2 μm, with coupling of both excitation and Raman signals into both the fundamental and higher order modes considered. The model was based on vector solutions of a step-index fiber, building on the work previously developed for fluorescence sensing [16,32,34], but with all excitation modes considered and the coupling efficiency into these modes defined as given in Equation (1). Note that material and scattering losses were not included in the model, as the proximity of the excitation and Raman wavelengths results in practically identical values of material loss at both wavelengths.
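To make the bookkeeping of Equations (5) and (6) concrete, the following sketch combines per-mode coupling efficiencies, power fractions, confinement-loss factors and capture fractions into the two figures of merit. The per-mode arrays are random placeholders rather than solved modes, and FOM T is taken here as the holes-to-core ratio, an assumption consistent with its use as a signal-to-background predictor but not confirmed by the text, since Equation (6) itself is not reproduced.

import numpy as np

# Sketch of the figure-of-merit bookkeeping in Equations (5)-(6).
# Per-mode arrays are placeholders; in the actual model they come from
# vector mode solutions of the step-index fiber at each core radius.
def fom(ce, pf, cl, cf):
    """FOM_x = sum over modes j of CE_j * PF_j^x * CL_j * CF_j (Eq. (5))."""
    return np.sum(ce * pf * cl * cf)

n_modes = 19                                      # e.g., a 1 um radius core
ce = np.random.dirichlet(np.ones(n_modes))        # coupling efficiencies
pf_holes = np.random.uniform(0.05, 0.3, n_modes)  # power fraction in holes
pf_core = 1.0 - pf_holes                          # remainder in the core
cl = np.exp(-np.random.uniform(0, 0.5, n_modes))  # confinement-loss factors
cf = np.random.uniform(0.01, 0.05, n_modes)       # capture fractions (Eq. (4))

fom_h = fom(ce, pf_holes, cl, cf)  # analyte Raman signal from the holes
fom_c = fom(ce, pf_core, cl, cf)   # background Raman signal from the core
# Assumed reading of Eq. (6): FOM_T as a holes-to-core (S/B) ratio
print(f"FOM_T (signal-to-background) = {fom_h / fom_c:.2f}")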
Figure 4 shows the results from the numerical simulations performed across a range of different core sizes. The power fraction for the laser excitation follows the trend shown in previous work on suspended core fibers, whereby as the core becomes larger the fraction of light travelling in the holes around the core becomes smaller. As the core increases in size, a larger number of optical modes becomes available, resulting in apparent jumps in the power fractions guided through the fiber. The figure of merit, as defined here, becomes larger for Raman signal originating in the holes around the core as the core radius is reduced, with the opposite trend for Raman signal from the core of the fiber. Similarly to the power fraction results, deviations from the general trend are observed when more high order modes become available at different core radii. The total figure of merit calculations predict that there is a clear advantage in using small radius suspended cores, as the expected signal-to-background ratio increases with reducing core sizes. The behavior for very small core sizes (smaller than 0.5 μm) differs from that calculated for fluorescence measurements [32] in that there is no drop in the total figure of merit as the core radius becomes exceedingly small. This is due to the close proximity of the excitation and Raman wavelengths (22 nm apart for 488 nm excitation), which results in very similar confinement losses. In reality, very small core sizes can exhibit reduced coupling efficiency of the excitation light and increased scattering losses from surface imperfections [36], and are more difficult to handle; our results, though, clearly show that small core radii are better suited for Raman experiments when the silica Raman background overlaps with the analyte Raman signal. To verify these simulation results, silica suspended core fibers of different core radii (0.85, 1.15 and 2 μm) were fabricated and identical lengths were filled with an aqueous solution of hydrogen peroxide (30% w.t.). As an indication of total energy, the area under the curve in the wavenumber region where the Raman peak for hydrogen peroxide appears (876 cm −1 ) was integrated across its width for both empty and filled fibers, and the ratio between the hydrogen peroxide signal and the silica background was compared against the numerical results for the total figure of merit as a function of the core radius. These two quantities, the experimental signal-to-background ratio and the total figure of merit, are not directly comparable in terms of absolute values, as the model does not consider the total signal produced by the fiber, but they should be qualitatively comparable. To make a direct comparison between them, both ratios were normalized at their respective values for the minimum experimental core radius of 0.85 μm, as seen in Figure 5. The trend predicted across this range of core radii from the numerical calculations is in good agreement with the observed signal-to-background ratio, verifying the choice of the smallest available silica suspended core for Raman measurements of hydrogen peroxide. The graph also highlights future gains in signal-to-background ratios by fabricating and using even smaller core radius fibers, although the absolute value of enhancement is expected to vary somewhat for smaller size cores [36]. Based on these results, the 0.85 μm core radius suspended core fiber was chosen for further experiments and analysis. Figure 5. Total figure of merit (squares) and experimental signal-to-background (S/B) ratio (circles), normalized at a core radius of 0.85 μm for ease of comparison, as a function of core radius. The graph also shows the expected improvement in signal-to-background ratio for smaller core radii. Raman Sensing of Explosives in Suspended Core Fibers The Raman signal from an unfilled 20 cm piece of suspended core silica fiber with a core radius of 0.85 μm can be seen in Figure 6. When the fiber is filled up to 16 cm with a hydrogen peroxide solution (30% w.t. in water) to avoid droplet formation on the collection end (loading time 4.7 min), the sample vial is removed from the end of the fiber.
When the fiber is full the water peak is easily observable, from 3,200 to 4,000 cm −1 , originating in the O-H stretching [37,38]. This broad peak arises from both water and hydrogen peroxide Raman signals and is therefore representative of the total number of OH-containing molecules. The smaller sharp features at 1,050 cm −1 and 3,650 cm −1 appear for both empty and filled fibers and are likely due to light pollution from parasitic lines of the argon-ion laser used. Figure 6. Raman signal collected from an empty silica suspended core fiber (blue, dashed line) and the same fiber filled with a hydrogen peroxide aqueous solution (red, solid line) for 488 nm CW excitation. The region from 2,000 to 2,600 cm −1 is not shown as it contains no useful information. The inset shows the region of interest for hydrogen peroxide, between 700 and 950 cm −1 for clarity. Upon closer inspection of the area where the hydrogen peroxide Raman signature is expected, a peak is visible above the silica background at 876 cm −1 (inset of Figure 6), corresponding to the hydrogen peroxide O-O stretching Raman peak [29]. The fact that this signal is visible in the absence of surface-enhancing processes is a direct result of the ability of the suspended-core architecture to create, collect, and guide Raman scattered signal throughout the entire length of the fiber. To remove the silica background, the collected spectra are normalized at the first silica peak, which is not expected to change in shape as the silica core is unaffected by the filling process, and the signal for the empty fiber is then subtracted. The resulting spectra for hydrogen peroxide and water are easily distinguishable and their evolution with filling time can be studied, as shown in Figure 7 for 30% w.t. hydrogen peroxide in water. Both signatures increase in intensity as the fiber fills up with hydrogen peroxide solution by capillary force action. As an indication of the total intensity of the Raman scattering for each molecule, the area under the spectral curve is integrated after background subtraction. As the fiber geometry and filling time are well known, this integrated Raman signal can be plotted as a function of the amount of hydrogen peroxide, as seen in Figure 7b. This allows the minimum amount of hydrogen peroxide that gives a clearly identifiable signal to be determined as 1 μg, for a 30% w.t. hydrogen peroxide aqueous solution inside a 16 cm piece of fiber, limited by the intensity of the silica Raman background signal. The integrated intensities of the peroxide and water Raman peaks increase in parallel as the fiber fills up, reflecting a proportional increase in the number of molecules for both the hydrogen peroxide and the water. The ratio between the two peak intensities is dictated by the ratio of molecules between the two (i.e., the hydrogen peroxide concentration in water) and the relative intensities of the Raman scattering. By measuring solutions of different concentrations of hydrogen peroxide, 3%, 10%, 20%, and 30% w.t., it is possible to correlate the ratio between the two Raman peaks to the weight percentage of hydrogen peroxide in water, as seen in Figure 8, allowing the water amount inside the fiber to be used as an internal calibration for the amount of hydrogen peroxide [39].
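The normalization, background-subtraction and internal-calibration chain described above can be summarized in a short sketch. The integration windows and the silica normalization region are illustrative assumptions, and the calibration ratios below are hypothetical numbers standing in for the measured standards of Figure 8.

import numpy as np

# Sketch of the analysis chain: scale the empty-fiber spectrum to the filled
# one at the first silica peak, subtract it, then integrate the analyte and
# solvent bands. Window limits below are assumptions, not the paper's values.
def integrate_band(shift, counts, lo, hi):
    """Area under the spectrum between two Raman shifts (cm^-1)."""
    m = (shift >= lo) & (shift <= hi)
    return np.trapz(counts[m], shift[m])

def background_subtract(shift, filled, empty, silica_lo=430, silica_hi=490):
    """Normalize both spectra at an assumed silica-peak window, subtract."""
    scale = integrate_band(shift, filled, silica_lo, silica_hi) / \
            integrate_band(shift, empty, silica_lo, silica_hi)
    return filled - scale * empty

def peroxide_ratio(shift, filled, empty):
    """Ratio of the H2O2 O-O band (~876 cm^-1) to the water O-H band."""
    net = background_subtract(shift, filled, empty)
    return integrate_band(shift, net, 820, 930) / \
           integrate_band(shift, net, 3200, 4000)

# With ratios measured for known standards (3-30% w.t.), a linear fit maps
# the Raman ratio to concentration, as in Figure 8 (values here hypothetical).
ratios = np.array([0.011, 0.036, 0.074, 0.112])
conc = np.array([3.0, 10.0, 20.0, 30.0])          # % w.t. standards
slope, intercept = np.polyfit(ratios, conc, 1)
print(f"unknown at ratio 0.05 -> {slope * 0.05 + intercept:.1f}% w.t.")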
The ratio of the peak intensities is calculated throughout the measurement at each sampling interval, and the average value of the Raman ratio is plotted in Figure 8 against the known sample concentration, with vertical error bars indicating the standard deviation of each ratio. The observed linear relationship between the Raman intensity ratio and the hydrogen peroxide concentration allows the use of this internal calibration technique to add quantitative information to the measurements in a way that does not depend on input laser power fluctuations, coupling instabilities and other changes in the fiber environment. Raman sensing has the advantage of not requiring any tagging molecules (such as fluorophores) or surface modification to work across different species. As a demonstration of this in suspended core fibers, measurements were performed to sense 1,4-dinitrobenzene (DNB), a substitute molecule for 2,4,6-trinitrotoluene (TNT), dissolved in acetone at 2% w.t. In the region of 1,000 to 2,000 cm −1 a number of peaks are observed, seen in Figure 9a for a filled fiber in comparison to an empty one, originating in acetone and DNB [40]. The peaks at 1,362 cm −1 and 1,596 cm −1 are signature peaks for DNB, while the peaks at 1,430 and 1,710 cm −1 come from acetone. As for hydrogen peroxide, quantification of DNB can be achieved by comparing the ratio of the DNB Raman peak to the Raman peak of the solvent, in this case acetone. Quantification of DNB is shown in Figure 9b, demonstrating a linear response for sub-microgram detection quantities. These results demonstrate the ability of the suspended core Raman sensing platform to uniquely identify different explosive species without further modifications or requirements at comparable detection limits. Figure 9. (a) Comparison of Raman spectra collected from an empty fiber (dashed line) and a fiber filled with a 2% w.t. 1,4-dinitrobenzene (DNB) acetone solution after subtraction of the silica background signal using a suspended core fiber; (b) Integrated intensity for DNB (squares) and acetone (circles) Raman spectra as a function of the amount of DNB inside a suspended core fiber. Conclusions In this work we have successfully demonstrated a liquid-phase explosives sensing platform combining silica suspended core fibers with Raman spectroscopy to detect microgram amounts of explosive species in nanolitre sampling volumes. This is based on an unmodified suspended core fiber as a Raman sensing platform, making use of the relatively large power fractions of excitation light available in this geometry to interact with analyte molecules along long lengths of the fiber. Results from numerical modeling that includes higher order excitation and Raman optical modes within the fiber to study the effect of the core radius on the signal-to-background ratio compare well against experimental observations, guiding the choice of a small core suspended core fiber to optimize the fiber's sensing performance. Using silica suspended core fibers with a 0.85 μm core radius, sub-microgram quantities of hydrogen peroxide in 60 nL sampling volumes of aqueous solution are identified on the basis of their unique Raman fingerprint. In addition, by using the Raman signature of water as an internal calibration standard, quantification of the hydrogen peroxide content is possible. The same system can be used without any further modifications to detect 1,4-dinitrobenzene, a member of the nitroaromatic explosives group, at similar concentrations.
These results highlight the potential for small-volume, real-time identification and quantification of explosives in solutions by using suspended core fibers as active dip sensor elements.
An IoT-Oriented Offloading Method with Privacy Preservation for Cloudlet-Enabled Wireless Metropolitan Area Networks With the development of the Internet of Things (IoT) technology, a vast amount of IoT data is generated by mobile applications on mobile devices. Cloudlets provide a paradigm that allows the mobile applications and the generated IoT data to be offloaded from the mobile devices to the cloudlets for processing and storage through the access points (APs) in Wireless Metropolitan Area Networks (WMANs). Since most of the IoT data relates to personal privacy, it is necessary to pay attention to data transmission security. However, it is still a challenge to optimize the data transmission time, energy consumption and resource utilization while taking privacy preservation into account in the cloudlet-enabled WMAN. In this paper, an IoT-oriented offloading method, named IOM, with privacy preservation is proposed to solve this problem. The task-offloading strategy with privacy preservation in WMANs is analyzed and modeled as a constrained multi-objective optimization problem. Then, the Dijkstra algorithm is employed to evaluate the shortest path between APs in WMANs, and the nondominated sorting differential evolution algorithm (NSDE) is adopted to optimize the proposed multi-objective problem. Finally, the experimental results demonstrate that the proposed method is both effective and efficient. Background A Wireless Metropolitan Area Network (WMAN) is a kind of mobile broadband wireless network, launched as a computer communication network within a city, which provides users with more convenient wireless services [1]. Metropolitan areas have high-density populations, where intensive data are produced by the mobile devices in people's daily lives. Mobile cloud computing provides a novel paradigm that allows the computing tasks and the data from the mobile devices to be offloaded to the remote cloud for processing and storage through access points (APs) in the WMAN [2]. Motivation Recently, a QoS (Quality of Service)-aware cloudlet load balancing method for IoT data in the WMAN has been proposed [29], but it does not take the data privacy preservation into consideration. As a result, some important IoT data leak easily, causing great loss to users. Therefore, the privacy preservation of the IoT data in cloudlet-based WMAN environments has become an urgent problem. Hence, we consider a separated data offloading for the IoT data which has privacy conflicts. Meanwhile, an IoT-oriented offloading method (IOM) with privacy preservation is proposed to lower the transmission time, save energy consumption and improve the resource utilization. Paper Contributions The main contributions of this paper include the following: • Construct a systematic model of the resource utilization, the energy consumption and the data transmission time of the cloudlets when offloading the IoT data to the cloudlets. • Adopt the Dijkstra algorithm to calculate the shortest path between APs in a WMAN in order to reduce the transmission time of data. • Optimize the multi-objective problem model by the nondominated sorting differential evolution (NSDE) algorithm with privacy preservation considered, and finally output the optimal offloading strategies. • Conduct extensive experimental evaluations and comparison analysis to demonstrate the efficiency and effectiveness of the proposed method.
The rest of the paper is organized as follows. Section 2 introduces the system model and the problem definition. Section 3 proposes an IoT-oriented offloading method with privacy preservation. In Section 4, simulation experiments and a comparison analysis are presented. Section 5 reviews related works. Finally, conclusions and future work are drawn in Section 6. System Model and Problem Formulation In this section, we first present a system model that closely approximates the cloudlet environment in the WMAN. Then, three optimized models of IoT data, that is, the transmission time model, the energy consumption model and the resource utilization model, are formed. Some key notations and descriptions used in the paper are listed in Table 1.
Table 1. Key notations and descriptions.
C: the cloudlet collection
A: the AP (Access Point) collection
T: the computing task collection
D: the dataset collection of the computing tasks
X: the data offloading policy collection for T
P: the number of computing tasks
l n (X): the resource utilization rate of the cloudlet c n
σ(X): the number of the occupied cloudlets
ψ(X): the average of l n (X)
TT(X): the propagation delay time of the computing tasks
T(X): the average propagation delay time
β p,n (X): the execution time of the task x p in the cloudlet c n
ST(X): the maximal execution time of the tasks in the cloudlet c n
E idle VM (X): the energy consumption of the idle VMs (Virtual Machines)
E active VM (X): the energy consumption of the active VMs
E c (X): the energy consumption of the cloudlets
E(X): the total energy consumption
Resource Model In this paper, we focus on the IoT-oriented data offloading with privacy preservation for the cloudlet-enabled WMAN. We consider a separated data offloading for the IoT data with privacy conflicts to optimize the transmission time, the energy consumption and the resource utilization. Suppose that there are N cloudlets, denoted as C = {c 1 , c 2 , . . . , c N } (each cloudlet has one host), which are deployed in the WMAN. There are M APs, denoted as A = {a 1 , a 2 , . . . , a M }. The cloudlets are connected through the APs, and M > N. P computing tasks that should be offloaded to the cloudlets for processing are denoted as T = {t 1 , t 2 , . . . , t P }. The datasets of the computing tasks are denoted as D = {d 1 , d 2 , . . . , d P }. Let X = {x 1 , x 2 , . . . , x P } be the offloading policy for the IoT data of the computing task set T, where x p ∈ C (p = 1, 2, . . . , P) is the cloudlet that the computing task t p is offloaded to. Figure 1 shows an example of cloudlet layout in the WMAN. The cloudlets are connected by the APs and deployed in the WMAN. There are seven APs, three physical machines, three cloudlets and three computing tasks in the example. The cloudlet c 1 is connected to the cloudlet c 2 through four APs, named a 1 , a 2 , a 3 and a 4 . If the IoT data of the computing task t 1 has privacy conflicts with the other data, t 1 will be migrated to the cloudlet c 2 for processing, through a 1 , a 2 , a 3 and a 4 . In addition, the cloudlet c 1 is connected to the cloudlet c 3 through three APs, named a 1 , a 6 and a 7 , or through the APs a 1 , a 5 , a 6 and a 7 . Resource Utilization Model Compared to the mobile devices, the cloudlets have much greater physical resources, including storage resources, computing resources and communication resources. When allocating the cloudlet resources to accommodate the IoT data, the resources are provided in the form of VMs. Assume that a cloudlet owns one physical machine. Let h n be the capacity of the n-th cloudlet c n and let u p,n be the requirements of the computing task t p . The resource utilization is an important part of evaluating the efficiency of the cloudlets. According to the data offloading policy in X, the resource utilization of the cloudlet c n is l n (X), which is calculated by Equation (1), where θ p,n is a binary variable that judges whether t p is offloaded on c n , as measured by Equation (2). Then, the number of the occupied hosts, denoted as σ(X), is calculated by Equation (3). Finally, the average resource utilization of the cloudlets, ψ(X), is calculated by Equation (4). Data Transmission Model The IoT data needs to be offloaded to the cloudlet for processing, or the conflicting data needs to be offloaded to another cloudlet. During the offloading, there is some transmission delay, which may influence the efficiency of the cloudlets. The transmission time for the IoT applications to offload datasets should be taken into account. When the computing tasks are in the current cloudlet without offloading, the transmission delay is neglected. When the computing tasks are offloaded from one cloudlet to another, they pass through multiple APs, so the transmission delay is the accumulated transmission time among the APs. Let N p,q be the number of halfway APs when the computing tasks are offloaded from the cloudlet c p to c q . Then, the transmission delay TT(X) is calculated by Equation (5), where D p is the data scale of t p , and the average data transmission time T(X) is calculated by Equation (6).
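For concreteness, the following minimal sketch evaluates these metrics for a candidate offloading policy under one plausible reading of the definitions above; since Equations (1)-(6) are not reproduced in this text, the exact forms, the hop-count-based delay and the task-source mapping are assumptions.

# Sketch of the resource-utilization and transmission-time metrics
# (assumed forms, not the paper's exact Equations (1)-(6)).
def utilization(policy, demands, capacity):
    """l_n(X): fraction of cloudlet n's VM capacity h_n used by its tasks."""
    used = {}
    for task, cloudlet in enumerate(policy):
        used[cloudlet] = used.get(cloudlet, 0) + demands[task]
    return {n: used[n] / capacity[n] for n in used}

def avg_utilization(policy, demands, capacity):
    """psi(X): mean of l_n(X) over the occupied cloudlets (sigma(X) of them)."""
    l = utilization(policy, demands, capacity)
    return sum(l.values()) / len(l)        # len(l) plays the role of sigma(X)

def avg_transmission_time(policy, source, data, hops, rate=1200.0):
    """T(X): average per-task delay, taken proportional to the AP hop count
    N_{p,q} between source and target cloudlets; rate is 1200 M/s as in the
    simulation setup. The source mapping is an assumption."""
    per_task = [hops[source[p]][policy[p]] * data[p] / rate
                for p in range(len(policy))]
    return sum(per_task) / len(per_task)

# Example: 3 tasks offloaded onto cloudlets 0 and 1.
policy = [0, 1, 1]                  # x_p: target cloudlet of task p
demands = [2, 1, 3]                 # u_p: VMs requested by task p
capacity = {0: 8, 1: 8, 2: 8}       # h_n: VM capacity of cloudlet n
print(avg_utilization(policy, demands, capacity))   # 0.375 = (2/8 + 4/8) / 2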
Energy Consumption Model In this paper, the energy consumption is generated by the cloudlets, the active VMs and the idle VMs, and it is associated with the execution time of the computing tasks. Let β p,n (X) be the execution time of x p in c n , which can be calculated as β p,n (X) = I p,n / (µ p,n · CPU n ) (7), where I p,n is the instruction length of x p in c n , µ p,n is the number of VMs occupied by x p in c n , and CPU n is the running power of the physical machine in the cloudlet c n . The execution time of c n is denoted as ST(X), the maximum of β p,n (X) · θ p,n (X) over all the tasks offloaded to c n (Equation (8)). During the tracked execution period ST(X), the energy consumption of the active VMs in the cloudlet is denoted as E active VM (X), which is calculated by Equation (9), where η is the power rate for the running VM instances. Before the data offloading, all the VMs are assumed to be idle VMs. When the IoT data are offloaded to the cloudlets for execution, some idle VMs are switched into active VMs. This switch also generates a certain amount of energy consumption. Hence the energy consumption of the initial idle VMs is denoted as E initial VM (X), which is calculated by Equation (10), where τ is the power rate for the idle VM instances. The process by which an idle VM is changed into an active one also consumes some energy. The energy consumption to switch the state of the VMs is denoted as E switch VM (X), which is calculated by E switch VM (X) = (h n − ∑ P p=1 u p,n · θ p,n (X)) · ST(X) · τ. (11) In this way, the energy consumption of the idle VMs in the cloudlet is denoted as E idle VM (X), which is calculated by Equation (12). All the running cloudlets consume the baseline energy during the tracked execution period ST(X). Such baseline energy consumption for the cloudlets is denoted as E C (X), which can be calculated by Equation (13), where ξ is the power rate for the cloudlets. Then, the total energy consumption, denoted as E(X), is calculated by Equation (14). Data Privacy Preservation Model of the Computing Tasks The IoT data contained in the computing tasks have different attributes, which may generate privacy conflicts. Putting the IoT data with privacy conflicts on the same cloudlet may make the IoT data easily attacked by hackers and cause data leakage. Hence, the IoT data with privacy conflicts under this condition are offloaded to other cloudlets for processing. The privacy conflicts of the datasets are modeled by a graph G = (D, E), where D is the set of datasets and E represents the conflicting relations between two datasets in D. Additionally, (d i , d j ) ∈ E represents that there is a privacy conflict between the datasets d i and d j , which should be offloaded to different cloudlets for processing. Then, the conflicting datasets of d p are denoted as CD p , which is obtained by Equation (15). Hence, the conflicting collection X̄ = {x̄ i | x̄ i ∈ CD p , i = 1, 2, . . . , |CD p |} is obtained.
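An offloading policy then satisfies the constraint of Equation (15) exactly when no conflicting pair of datasets shares a cloudlet, which can be checked directly, as in this small sketch (task indices and edges are illustrative):

# Sketch of the privacy constraint: datasets that conflict (an edge in
# G = (D, E)) must not be offloaded to the same cloudlet.
def satisfies_privacy(policy, conflict_edges):
    """True iff no conflicting pair of tasks shares a cloudlet under policy X."""
    return all(policy[i] != policy[j] for i, j in conflict_edges)

conflicts = [(0, 1)]                             # datasets d_0 and d_1 conflict
print(satisfies_privacy([2, 2, 0], conflicts))   # False: both on cloudlet c_2
print(satisfies_privacy([2, 1, 0], conflicts))   # True: separated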
An IoT-Oriented Offloading Method with Privacy Preservation In this section, we encode the IoT-oriented offloading model with data privacy conflicts in the WMAN. We aim to maximize the resource utilization in (4), minimize the data transmission time in (6) and minimize the energy consumption in (14) while satisfying the privacy constraints in (15). The formalized multi-objective problem is optimized by NSDE, and the diversity and convergence of the population are ensured through the mutation and crossover operations. In the individual selection phase, NSDE uses the fast nondominated sorting approach and the crowded-comparison operator to ensure that the individuals with the relatively best fitness values in the current population are preserved for the next generation. Shortest Path Acquisition of APs in the WMAN Based on the Dijkstra Algorithm To estimate the transmission time among the APs, we adopt the Dijkstra algorithm to calculate the shortest path between the APs. In Figure 1, all the computing tasks and the IoT data are uploaded to the APs closest to them and offloaded to the appropriate cloudlet for processing. However, in the WMAN, there may be multiple transmission paths between two APs. In order to reduce the transmission time of the IoT data, each AP selects the shortest path for transmission. Assuming that the transmission rate between the APs is the same, the WMAN can be regarded as an undirected, unweighted graph in which each AP is a node, and the Dijkstra algorithm is used to calculate the shortest path between the nodes, as sketched below.
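A minimal sketch of this step follows; with unit edge weights, Dijkstra's algorithm effectively reduces to a breadth-first search, and the adjacency list here is an assumption inferred from the Figure 1 description (the a 1 -a 2 -a 3 -a 4 chain plus the a 1 -a 5 -a 6 -a 7 paths).

import heapq

# Dijkstra on the AP graph with unit edge weights: the result is the minimum
# AP hop count N_{p,q} used by the transmission-time model.
def dijkstra_hops(adj, src):
    """Minimum number of AP-to-AP hops from src to every reachable AP."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v in adj[u]:
            if d + 1 < dist.get(v, float("inf")):
                dist[v] = d + 1
                heapq.heappush(heap, (d + 1, v))
    return dist

# Seven-AP topology assumed from Figure 1 (a1..a7 as nodes 0..6).
adj = {0: [1, 4, 5], 1: [0, 2], 2: [1, 3], 3: [2],
       4: [0, 5], 5: [0, 4, 6], 6: [5]}
print(dijkstra_hops(adj, 0))   # e.g., a1 -> a7 takes 2 hops via a6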
Optimization Problem Model by NSDE The problem model proposed in this paper is summarized as a constrained multi-objective optimization problem. NSDE is a multi-objective optimization algorithm using real-number coding, whose mutation vector is generated from the difference vector of parent individuals and is crossed with a parent individual vector to generate a new individual vector. In the parent population and the offspring population, the fast nondominated sorting approach and the crowded-comparison operator are performed, and the individuals with better target values are preserved for the next generation. Compared with other algorithms, it is more effective in approximating the global optimal solution set in multidimensional space. In this section, the problem model is real-coded, performing crossover, mutation and selection operations; the fast nondominated sorting approach and the crowded-comparison operator are used in the selection phase to preserve the individuals with better fitness for the next generation, and through continuous iteration the population constantly approaches the optimal solution set. (1) Encoding: A distribution strategy for all the computing tasks uploaded from each AP is represented by a chromosome, and each gene in the chromosome represents the execution location of a task, which means that the task will be assigned to the corresponding cloudlet for execution. Therefore, the range of values for each gene depends on the number of the cloudlets that are used to perform the computing tasks. Figure 2 shows the value of each gene in a chromosome. The length of a chromosome X j equals the total number of computing tasks, and each gene takes a real value between 0 and the number of cloudlets. In the calculation, each real value is converted into an integer that represents the position of the execution cloudlet. For example, in Figure 2, the task t 1 has a gene value of 3.2; adopting the "down rounding" method, the task t 1 will be assigned to cloudlet 3 for execution. (2) Fitness functions and constraints: A chromosome is an individual which represents an offloading strategy for all the computing tasks in the optimization problem. Multiple individuals constitute a population, and NSDE is used to optimize the population. The fitness functions are the criteria for evaluating each individual in the population, and the constraints are the conditions that each individual needs to satisfy during the problem optimization process. There are three fitness functions for this optimization problem: the average resource utilization of the cloudlets, the data transmission time and the energy consumption, which are calculated by fitness functions (4), (6) and (14), respectively. In this optimization problem, a larger average resource utilization of the cloudlets together with a smaller data transmission time and a lower energy consumption contributes to a better individual. Hence, the NSDE comprehensively evaluates all individuals in the population through these three fitness functions, not one or two of them.
However, during the evolution of the population, each individual also needs to meet two constraints: the total number of VMs requested by all the computing tasks on each cloudlet cannot be greater than the maximum number of VMs in the cloudlet, and the privacy preservation of the data must be satisfied, which means that conflicting data cannot be offloaded to the same cloudlet. (3) Initialization: Before the population is initialized, there are several algorithm parameters that need to be determined: the individual length N, which equals the total number of computing tasks from all APs (each gene in the individual represents the position of the cloudlet that the task is executed on, as introduced in the "Encoding" step); the size of the population NP, which is usually set between 5N and 10N, but not less than 4N; additionally, in the evolution of the NSDE, three parameters are used, including the crossover factor CR, the mutation factor F and the mutation strategy, where CR and F mainly determine the optimization ability and the convergence speed of the NSDE. The mutation strategy selected in this paper is "DE/rand/1". After the parameters have been determined, the NSDE generates a parent population P of size NP by initialization. The length of each individual is N, and the value of each gene in the individual is between 0 and M, where N and M here denote the number of computing tasks and the number of cloudlets in the WMAN, respectively. (4) Mutation and crossover: The mutation operation is performed by randomly selecting three individuals X a , X b and X c from the parent population P, and generating a mutated individual H i by adding to the base individual X a the difference vector of X b and X c , scaled by the mutation factor F (a code sketch of this variation step is given after the list). The crossover operation generates every gene V i,j of the offspring individual V i by crossing X i,j of the parent individual X i and H i,j of the mutated individual H i , where X i,j , H i,j and V i,j respectively represent the j-th gene of the parent individual X i , the mutated individual H i and the offspring individual V i . (5) Selection: In the selection phase, based on the three fitness functions (4), (6) and (14), the NSDE performs the fast nondominated sorting approach and the crowded-comparison operator on the population O, which is composed of the parent population P and the offspring individuals V i . Multiple nondominated layers L i (i = 0, 1, 2, . . . ) are generated by the fast nondominated sorting approach, and the individuals in the nondominated layer with the lower nondominated level, or the individuals with a better crowding distance in the same nondominated layer, are preferentially populated into the parent population P of the next generation until the size of the population P is exactly equal to NP. The crowding distance is calculated by Equation (16) as D j = D j U + D j T + D j E , where D j represents the crowding distance of the j-th offloading strategy X j , and D j U , D j T and D j E represent the crowding distances of the average resource utilization, the data transmission time and the energy consumption, respectively, each computed in (16) from the objective function values U j , T j and E j of the neighboring solutions.
(6) Iteration: The NSDE takes the population P generated by performing the selection operations as the parent population of the next generation, and combines the population P and the mutation population Q generated by performing the mutation and crossover operations into a population O whose size is 2NP. The parent population P of the next generation is regenerated by performing the selection operations on the population O. This process iterates until the termination condition is met, and finally the set S of the best solutions to the optimization problem is obtained.
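The variation step (4) and the "down rounding" decoding of step (1) can be sketched as follows; the F and CR values and the boundary handling are assumptions, and cloudlet indices here simply run from 0 for simplicity.

import random

F, CR = 0.5, 0.9          # mutation and crossover factors (assumed values)

def mutate(pop, i, n_cloudlets):
    """DE/rand/1: base vector X_a plus the scaled difference of X_b and X_c."""
    a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
    return [min(max(ai + F * (bi - ci), 0.0), n_cloudlets - 1e-9)
            for ai, bi, ci in zip(a, b, c)]   # keep genes in the valid range

def crossover(parent, mutant):
    """Binomial crossover: take each gene from the mutant with probability CR,
    forcing at least one mutant gene via j_rand (standard DE practice)."""
    j_rand = random.randrange(len(parent))
    return [m if (random.random() < CR or j == j_rand) else p
            for j, (p, m) in enumerate(zip(parent, mutant))]

def decode(individual):
    """'Down rounding' as in the paper's example (a gene of 3.2 maps the task
    to cloudlet 3); indices here run from 0 for simplicity."""
    return [int(g) for g in individual]

pop = [[random.uniform(0, 4) for _ in range(5)] for _ in range(20)]  # 5 tasks
trial = crossover(pop[0], mutate(pop, 0, n_cloudlets=4))
print(decode(trial))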
Experimental Evaluation In this section, a set of comprehensive simulations and experiments is conducted to evaluate the performance of the proposed IOM method. Specifically, we first introduce the simulation setup, including the simulation parameter settings and the statement of the comparative method. Then, the influence of different task scales on the performance of the optimization metrics is evaluated. Simulation Setup In our simulation, three datasets with different scales of computing tasks are applied in our experiments, with the number of computing tasks set to 100, 150 and 200, respectively. The transmission speed of the cloudlets and the power rate of the cloudlets are set to 1200 M/s and 300 W according to [30]. The system decides which data items have conflicts according to the requirements of information security defined by the users or the processing records, which are assumed to be known information in our simulation. The specific parameter settings in this experiment are illustrated in Table 2. To conduct the comparison analysis, we employ a basic offloading method as a baseline, briefly expounded as follows. Benchmark: The task is offloaded to the nearest cloudlet first. If the task to be offloaded requires more resources than the current cloudlet owns, or it has data conflicts with the computing tasks already offloaded to the current cloudlet, this task is offloaded to the cloudlet near the current one according to the Dijkstra algorithm. This process is repeated until all the computing tasks are offloaded to the cloudlets. The methods are implemented with the CloudSim simulation tools on a PC with an Intel Core i7-5500U 2.40 GHz processor (two cores) and 8 GB RAM. The corresponding evaluation results are depicted in detail in the following sections. Performance Evaluation of IOM The proposed IOM is intended to achieve a trade-off between optimizing the resource utilization, shortening the data transmission time and reducing the energy consumption while taking privacy preservation into consideration. We conducted 50 replicates of the experiment in the case of convergence for each task scale, and multiple sets of results were obtained. To identify a relatively optimal solution, simple additive weighting (SAW) and multiple-criteria decision making (MCDM) were used, with the utility function defined in Equation (17), where Ψ(X), T(X) and E(X) represent the fitness of the data offloading strategy X j regarding the three objective functions mentioned above, respectively. Ψ max and Ψ min represent the maximum and minimum fitness values for the resource utilization; if Ψ max = Ψ min , let Ψ max − Ψ min = 1. Analogously, T max and T min represent the maximum and minimum fitness for the data transmission time, and if T max = T min , let T max − T min = 1; E max and E min represent the maximum and minimum fitness for the energy consumption, and if E max = E min , let E max − E min = 1. Figure 3 shows the comparison of the utility values of the solutions generated by IOM with different task scales. It is illustrated that when the task scale is 100, 150 or 200, four solutions are generated by IOM. For the solutions generated by IOM, we attempt to obtain the most balanced data offloading strategy by evaluating the utility value given in (17). After statistics and analysis, the solution with the maximum utility value is considered the most balanced strategy. For instance, in Figure 3a, the final selected strategy is solution 3 because it achieves the highest utility value. Comparison Analysis In this subsection, the comparisons of Benchmark and IOM under the same experimental context are analyzed in detail. The resource utilization, the data transmission time and the energy consumption are the main metrics for evaluating the performance of the data offloading methods. In addition, the number of employed cloudlets is presented to show the resource usage of all the cloudlets for offloading the computing tasks. The corresponding results are shown in Figures 4-8. (1) Comparison of the number of employed cloudlets: Figure 4 illustrates the number of cloudlets employed by the data offloading methods. The total number of the cloudlets in our experiment is set to 50. As shown in Figure 4, IOM employs fewer cloudlets compared with Benchmark. Furthermore, as the number of the computing tasks increases, the number of the cloudlets employed by IOM increases. (2) Comparison of the resource utilization: After offloading all the computing tasks to the cloudlets via the data offloading methods, the occupation of the VMs is achieved. Figure 5 shows the comparison of the resource utilization of the cloudlets by using Benchmark and IOM with different task scales. The resource utilization is calculated according to the number of occupied cloudlets and the employed VMs in each cloudlet. Fewer employed cloudlets with more employed VMs contribute to a higher resource utilization. It is intuitive from Figure 5 that IOM achieves higher and more stable resource utilization. That is, IOM reduces the number of unemployed VMs and wastes fewer resources than Benchmark. (3) Comparison of the data transmission time: In Figure 6, we compare the data transmission time of the different data offloading methods. It is intuitive that our proposed method IOM costs more time than Benchmark. With the increase of the task scales, the data transmission time is enlarged. This may be because our proposed method needs more transmissions to realize the goal of optimizing the resource utilization and the energy consumption, which may sacrifice some transmission time on the other hand. (4) Comparison of the energy consumption: As outlined in Section 2, the energy consumption is composed of the energy consumption of the active VMs, the energy consumption of the idle VMs, and the energy consumption of the cloudlets. In Figure 7, we compare these three aspects respectively with different task scales. As shown in Figure 7a, both methods achieve the same energy consumption of the active VMs at the same task scale because the same number of VMs is employed by Benchmark and IOM.
Figure 7b shows that as the number of computing tasks increases, both methods increase the energy consumption of the idle VMs, but IOM generates less energy of the idle VMs due to less unemployed VMs used compared with Benchmark by occupying fewer cloudlets. Figure 7c indicates that IOM consumes less energy of the cloudlets than the Benchmark. The comparison of energy consumption in Figure 8 shows that IOM has better performance. For example, when the number of computing tasks is 100, IOM achieves a power consumption of less than 3000 W.s, whereas Benchmark generates more than 5000 W.s energy. (4) Comparison of the energy consumption: As outlined in Section 2, the energy consumption is composed of the energy consumption of the active VMs, the energy consumption of the idle VMs, and the energy consumption of the cloudlets. In Figure 7, we compare these three aspects respectively with different task scales. As shown in Figure 7a, both methods achieve the same energy consumption of the active VMs at the same task scale because the same number of VMs are employed by Benchmark and IOM. Figure 7b shows that as the number of computing tasks increases, both methods increase the energy consumption of the idle VMs, but IOM generates less energy of the idle VMs due to less unemployed VMs used compared with Benchmark by occupying fewer cloudlets. Figure 7c indicates that IOM consumes less energy of the cloudlets than the Benchmark. The comparison of energy consumption in Figure 8 shows that IOM has better performance. For example, when the number of computing tasks is 100, IOM achieves a power consumption of less than 3000 W.s, whereas Benchmark generates more than 5000 W.s energy. Related Work With the development of the IoT technology, more IoT data is produced by mobile devices in daily life. Edge cloud computing developed rapidly to solve the transmission delay of the IoT data, providing high-speed processing in cloud service [31]. One of the hot technologies of edge cloud computing is the cloudlet, which is applied to get a shorter response time and reduce the energy consumption of mobile devices by alternating the offloading destinations, compared to the traditional mobile cloud computing paradigm [32][33][34][35]. There have been many studies about cloudlets, which were fully investigated in [36][37][38][39][40][41][42], to name a few. In [36], the author studied the placement of the cloudlets in a large WMAN, consisting of many wireless APs. In order to realize the resource sharing of mobile users, Hoang et al. [37] used a cloudlet as a semi-Markov decision process (SMDP) to formalize a dynamic optimization problem. The SMDP is converted into a linear programming model to get the best solution. In the optimization model, mobile users need to consider different types of service quality under resource constraints. In [38], the author proposes a Performance-Enhancement Framework of Cloudlet (PEFC) to enhance the service performance of a cloudlet with limited resources. That paper aims to enhance the performance of the cloudlet and improve the experience of cloud service with limited resources. Artail et al. [39] proposed a general solution based on a mobile intelligent device to solve the service delay of the remote cloud. The author considered a cloud network, which distributes within a region and connects to the root server, to ensure resource availability. The framework is applicable to the environment where the cloudlet clients can sense networks and software services. Ciobanu et al. 
[40] introduced the drop computing paradigm, which proposes the concept of decentralized computing over multilayered networks, combining cloud and wireless technologies over a social crowd formed between mobile and edge devices. Mao et al. [41] jointly optimized task offload scheduling and transmission power allocation for mobile edge computing systems to reduce execution latency and device power consumption. The author proposed a low-complexity suboptimal algorithm to minimize the weighted sum of execution delay and device energy consumption, based on alternating minimization. Although the research on cloudlets is increasing, people often overlook the optimization of resource utilization, transmission delay and energy consumption when taking the privacy protection into account [43][44][45][46]. Current research mainly focuses on the capacitated cloudlets' placement to save energy or encrypt data to prevent data leakage. In [47], the author studied the cloudlet placement and mobile user allocation to the cloudlets in the WMAN. The author also designed a cloudlet placement algorithm, which placed the cloudlet in a user-intensive area of the wireless metropolitan area networks to balance the workload of WMAN. Mahadev et al. [48] introduced GigaSight, which is an internet-scale crowdsourced video content repository with powerful privacy preferences and access Related Work With the development of the IoT technology, more IoT data is produced by mobile devices in daily life. Edge cloud computing developed rapidly to solve the transmission delay of the IoT data, providing high-speed processing in cloud service [31]. One of the hot technologies of edge cloud computing is the cloudlet, which is applied to get a shorter response time and reduce the energy consumption of mobile devices by alternating the offloading destinations, compared to the traditional mobile cloud computing paradigm [32][33][34][35]. There have been many studies about cloudlets, which were fully investigated in [36][37][38][39][40][41][42], to name a few. In [36], the author studied the placement of the cloudlets in a large WMAN, consisting of many wireless APs. In order to realize the resource sharing of mobile users, Hoang et al. [37] used a cloudlet as a semi-Markov decision process (SMDP) to formalize a dynamic optimization problem. The SMDP is converted into a linear programming model to get the best solution. In the optimization model, mobile users need to consider different types of service quality under resource constraints. In [38], the author proposes a Performance-Enhancement Framework of Cloudlet (PEFC) to enhance the service performance of a cloudlet with limited resources. That paper aims to enhance the performance of the cloudlet and improve the experience of cloud service with limited resources. Artail et al. [39] proposed a general solution based on a mobile intelligent device to solve the service delay of the remote cloud. The author considered a cloud network, which distributes within a region and connects to the root server, to ensure resource availability. The framework is applicable to the environment where the cloudlet clients can sense networks and software services. Ciobanu et al. [40] introduced the drop computing paradigm, which proposes the concept of decentralized computing over multilayered networks, combining cloud and wireless technologies over a social crowd formed between mobile and edge devices. Mao et al. 
control features. The GigaSight architecture is a joint system of VM-based cloudlets that performs video analytics at the edge of the internet, reducing the need for cloudlet ingress bandwidth. Rahman et al. [49] proposed a mobile edge computing framework that provides real-time and location-aware personalized services for a large number of users; following a new privacy-policy paradigm, it enables secure sharing of location data. The framework uses server-side cloud blending and crowd edge fog computing terminals (FCTs) to switch tasks between FCTs and the cloud, based on network conditions, geographic location and available resources. Chen et al. [50] used the flexibility of cloudlets to create a novel healthcare system whose cloudlet features include privacy protection, data sharing and intrusion detection. In the data collection phase, the authors used the Number Theory Research Unit (NTRU) method to encrypt data collected by wearable devices. They then proposed a new trust model to help users choose trusted partners with whom to share data stored in the cloudlet and to help similar patients communicate with each other. Finally, to protect the medical system from malicious attacks, they developed a new collaborative intrusion detection system (IDS) method based on cloud networks. Generally speaking, researchers either do not take data privacy preservation into consideration when optimizing the energy consumption of cloudlets in the WMAN, or do not improve transmission time, energy consumption and resource utilization when encrypting data [51][52][53]. Thus, an IoT-oriented offloading method with privacy preservation is proposed in this paper to optimize the transmission time, the energy consumption and the resource utilization while taking data privacy preservation into account.

Conclusions and Future Work With the rapid development of IoT technology, the computing tasks of mobile applications have become so complex that it is necessary to offload them to the remote cloud. For some applications with low latency requirements, it is necessary to offload the computing tasks to nearby cloudlets for execution. Meanwhile, data conflicts must be taken into account to realize privacy preservation.
To tackle these problems in the cloudlet-based WMAN environment, an IoT-oriented offloading method with privacy preservation is proposed in this paper to optimize the transmission time, the energy consumption and the resource utilization. Concretely, the task-offloading strategy with privacy preservation in the WMAN is modeled as a constrained multi-objective optimization problem. To reduce the transmission time, the Dijkstra algorithm is adopted to calculate the shortest paths among the APs in the WMAN. The multi-objective optimization problem is solved by an NSDE algorithm, and finally the best task-offloading strategy in the WMAN is obtained. In future work, we will attempt to adapt and extend our proposed method to a real-world scenario for cloudlet services in the WMAN environment. Additionally, the privacy preservation strategy will be updated on the basis of the IoT data. At the same time, more attributes of the real-world scenario will be incorporated to validate our experiments. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
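As a rough illustration of the shortest-path step described above, the sketch below runs Dijkstra's algorithm over a toy AP graph; the graph topology, the per-hop delays and the function name are hypothetical and not taken from the paper.

import heapq

def dijkstra(ap_graph, source):
    """Shortest cumulative delay from `source` to every AP.

    ap_graph: dict mapping AP id -> list of (neighbor AP id, link delay).
    Returns dict AP id -> minimal cumulative delay from `source`.
    """
    dist = {ap: float("inf") for ap in ap_graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, ap = heapq.heappop(heap)
        if d > dist[ap]:
            continue  # stale heap entry, a shorter path was already found
        for neighbor, delay in ap_graph[ap]:
            nd = d + delay
            if nd < dist[neighbor]:
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical WMAN with four APs; weights are per-hop transmission delays (ms).
wman = {
    "AP1": [("AP2", 2.0), ("AP3", 5.0)],
    "AP2": [("AP1", 2.0), ("AP3", 1.5), ("AP4", 4.0)],
    "AP3": [("AP1", 5.0), ("AP2", 1.5), ("AP4", 2.5)],
    "AP4": [("AP2", 4.0), ("AP3", 2.5)],
}
print(dijkstra(wman, "AP1"))  # delays from AP1 to every candidate offloading AP

In an offloading strategy, such per-pair delays would feed the transmission-time term of the multi-objective model before the NSDE search selects the final assignment.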
10,518.2
2018-09-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Quijote-PNG: Quasi-maximum likelihood estimation of Primordial Non-Gaussianity in the non-linear halo density field We study primordial non-Gaussian signatures in the redshift-space halo field on non-linear scales, using a quasi-maximum likelihood estimator based on optimally compressed power spectrum and modal bispectrum statistics. We train and validate the estimator on a suite of halo catalogues constructed from the Quijote-PNG N-body simulations, which we release to accompany this paper. We verify its unbiasedness and near optimality, for the three main types of primordial non-Gaussianity (PNG): local, equilateral, and orthogonal. We compare the modal bispectrum expansion with a $k$-binning approach, showing that the former allows for faster convergence of numerical derivatives in the computation of the score-function, thus leading to better final constraints. We find, in agreement with previous studies, that the local PNG signal in the halo-field is dominated by the scale-dependent bias signature on large scales and saturates at $k \sim 0.2~h\,\mathrm{Mpc}^{-1}$, whereas the small-scale bispectrum is the main source of information for equilateral and orthogonal PNG. Combining power spectrum and bispectrum on non-linear scales plays an important role in breaking degeneracies between cosmological and PNG parameters; such degeneracies remain however strong for equilateral PNG. We forecast that PNG parameters can be constrained with $\Delta f_\mathrm{NL}^\mathrm{local} = 45$, $\Delta f_\mathrm{NL}^\mathrm{equil} = 570$, $\Delta f_\mathrm{NL}^\mathrm{ortho} = 110$, on a cubic volume of $1 \left({ {\rm Gpc}/{ {\rm h}}} \right)^3$, at $z = 1$, considering scales up to $k_\mathrm{max} = 0.5~h\,\mathrm{Mpc}^{-1}$. INTRODUCTION The coming generation of spectroscopic and photometric galaxy surveys -e.g., Euclid, DESI, Spherex, Rubin Observatory, Roman (Laureijs et al. 2011;DESI Collaboration et al. 2016;Doré et al. 2014;LSST Science Collaboration et al. 2009) -will allow us to study galaxy clustering with an unprecedented level of accuracy and precision, shedding further light on many open questions in cosmology. Among the many exciting possibilities, an interesting prospect, which we mainly focus on in this work, will be that of improving our under-standing of Early Universe physics, via high precision tests of Primordial non-Gaussianity (PNG). Cosmic Microwave Background (CMB) measurements (Akrami et al. 2020), in agreement with theoretical expectations, have constrained the primordial cosmological perturbation field to be at most weakly non-Gaussian. This implies, for a large majority of Early Universe scenarios, that most of the PNG information is contained in the primordial bispectrum. For this reason, the bispectrum of dark matter tracers (e.g., galaxies) in Large Scale Structure (LSS) can be a powerful probe of PNG. Crucially, the 3D galaxy bispectrum gives us also access, in principle, to a larger number of modes with respect to the 2D (angular) bispectrum of CMB anisotropies. Therefore, LSS bispectrum analyses can potentially lead to significant improvements in PNG constraints over current, CMB-based, results. Achieving such improvements will require however to include non-linear scales in the analysis, carrying strong non-Gaussian (NG) signatures which are not primordial, but arise from late-time, non-linear evolution of cosmic structures. Disentangling the NG late time component from the subdominant primordial one is therefore a crucial challenge in this kind of studies. 
It can be addressed either by analytical modeling of the bispectrum -via a suitable perturbative approach at mildly non-linear scales (Cabass et al. 2022a,b;D'Amico et al. 2022) -or by relying on fully numerical approaches, which evaluate the bispectrum (and/or other summary statistics, see, e.g., Biagetti et al. 2021;Friedrich et al. 2020; Valogiannis & Dvorkin 2022) using large mock datasets; field-level inference on large scales, not relying on specific statistical summaries, has also been recently considered, see Andrews et al. (2023). In this work, which is the fourth in a series of papers, following Jung et al. (2022); Coulton et al. (2023a,b), we base our study on the Quijote-png simulation suite, recently presented in Coulton et al. (2023a). Our main goal is to quantify the accuracy with which f NL can be constrained using both the power spectrum and bispectrum of the dark matter halo field, up to strongly non-linear scales (k max = 0.5 h Mpc −1 ), using simulations with different kinds of PNG. More precisely, we consider the three main PNG bispectrum shapes, namely, the local, equilateral, and orthogonal shapes, which are predicted in a large variety of inflationary scenarios. This work extends our initial analysis presented in Jung et al. (2022), where we worked at the level of the matter field. As in the previous analysis, we derive forecasts for PNG and standard cosmological parameters, by combining power spectrum and bispectrum measurements at non-linear scales; our main focus is then on building an unbiased and nearly optimal quasi-maximum likelihood estimator, based on applying a MOPED-like compression algorithm to a modal decomposition of the data bispectrum. In our companion paper (Coulton et al. 2023b) we independently perform a similar analysis at a different redshift (z = 0 in Coulton et al. 2023b, vs. z = 1 in this work), but we employed a binned decomposition in k-space instead of the modal approach adopted here; in that work we studied in detail the PNG information content in the halo field while focusing on important numerical convergence issues. Therefore, the two analyses are comple-mentary to study the robustness of our approach and to cover a full range of crucial issues, from numerical stability, to the precise quantification of the information gain obtained from different observables at different scales and the demonstration of nearly optimal and unbiased bispectrum data compression for parameter estimation. Taken together, we think these works represent an important development in the effort to build a data analysis pipeline to be applied to observations. We release the halo catalogues of the Quijote-png 1 suite used in these works. Since in the current work we consider tracers of the underlying density field, an additional signature of PNG arises -in comparison to the previous matter field analysis -in the form of a scale-dependency in the tracer bias. Such feature has a power law behaviour, with degree determined by the squeezed limit of the PNG bispectrum shape under study: it is most prominent for local NG, with a ∼ 1/k 2 behaviour and absent in the equilateral case. Scale-dependent bias has been the object of significant study in the literature, see, e.g., Dalal et al. (2008) Giri et al. (2023), and was used to extract local PNG constraints from BOSS data Slosar et al. (2008); Ross et al. (2013); Leistedt et al. (2014); Mueller et al. (2021); Cabass et al. (2022a);D'Amico et al. (2022). 
Recently, it has been however pointed out that accurate modeling of scale-dependent bias from PNG also depends on details of galaxy formation, making its use as a tool to measure the PNG amplitude f NL significantly more challenging than previously thought Barreira (2020Barreira ( , 2022a. The effect of scale-dependent bias is automatically incorporated in our analysis, where we are mostly concerned with assessing its relative constraining power on different NG shapes, as compared to the bispectrum, and verifying agreement with both our analysis in Coulton et al. (2023b) and theoretical expectations (see, e.g. de Putter 2018; Karagiannis et al. 2018). The paper is structured as follows. In section 2 we briefly review the NG models considered in the analysis. In section 3 we illustrate our methodology for data compression and parameter estimation. In section 4 we discuss our Fisher matrix analysis, showing expected parameter constraints on different scales, and describe the application of our quasi-maximum likelihood, joint power spectrum and bispectrum estimator to simulated data. In section 5 we summarize our main results and draw our final conclusions. Finally, in appendix A we provide more details about the implementation of shotnoise modes in the bispectrum estimator. In appendix B we show a comparison between the modal and binned approaches to bispectrum estimation, and in appendix C we discuss the results of a preliminary study aimed at the application of the CARPool technique to the evaluation of covariances and numerical derivatives. BISPECTRUM SHAPES Violating any condition of the standard inflationary model induces a deviation from the perfect Gaussian initial conditions, which leads to non-zero high-order correlators. The largest of them, in most inflationary models, is the bispectrum, i.e. the three-point correlation function of Fourier modes, defined as: (1) The primordial bispectrum is generally written as where f NL is the dimensionless amplitude parameter corresponding to a given primordial bispectrum shape F (k 1 , k 2 , k 3 ), which encompasses the dependence of the bispectrum on triplets of Fourier space modes. In this work, we focus on building estimators to measure f NL of three of the most common primordial shapes, namely the local, equilateral and orthogonal 2 bispectra (see Coulton et al. 2023a, and references therein for the complete description of these templates). For dark matter tracers (e.g. halos), the presence of PNG has a significant impact, due to the introduced coupling between large and small scale modes, with the most known example being that of the local type. In this case, the halo overdensity on large scales will no longer depend only on the matter overdensity, but also on the primordial gravitational potential (see Desjacques et al. 2018, for a review). This results in a scale-dependent term that introduces an important PNG signature on the large scales of a correlator. This is of particular importance to the power spectrum of the observed dark matter tracers, since it enhances the PNG signal within the two-point correlation function, which otherwise would have been very limited, e.g. in the case of a dark matter field analysis (see e.g. Coulton et al. 2023a;Jung et al. 2022). The effect of the scale dependent term on the power spectrum has been extensively studied in the literature, especially for the local PNG type. 
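The explicit expressions labelled eqs. (1) and (2) above, and the scale-dependent bias term just discussed, did not survive extraction. As a hedged reconstruction in standard conventions (not copied from the paper; the symbols Phi, delta_D, T(k), D(z), delta_c and b_1 follow common usage and may differ in detail from the authors' definitions):

% Bispectrum of the primordial potential Phi (three-point function of Fourier modes)
\langle \Phi(\mathbf{k}_1)\,\Phi(\mathbf{k}_2)\,\Phi(\mathbf{k}_3) \rangle
  = (2\pi)^3\, \delta_D(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)\, B_\Phi(k_1,k_2,k_3),
\qquad
B_\Phi(k_1,k_2,k_3) = f_{\rm NL}\, F(k_1,k_2,k_3).

% Standard scale-dependent halo bias induced by local-type PNG (Dalal et al. 2008)
\Delta b(k) \simeq 3 f_{\rm NL}^{\rm local}\,(b_1 - 1)\,\delta_c\,
  \frac{\Omega_m H_0^2}{c^2\, k^2\, T(k)\, D(z)}.

The 1/k^2 growth of Delta b(k) on large scales is the scale-dependent bias signature referred to in the text; since the equilateral shape has a vanishing squeezed limit, no such term arises in that case.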
However, recent developments have made the measurement of f NL , by such a term in the power spectrum, challenging, due to the perfect degeneracy between f NL and the scale dependent bias coefficient b φ (Barreira 2020(Barreira , 2022b. For the halo bispectrum, a significant amount of the PNG signal is located within the primordial part (eq. 2), while the scale dependent terms, studied at a theoretical level e.g. in Karagiannis et al. (2018), that could carry a notable amount of information on local PNG, suffer from the same limitations as the power spectrum (Barreira 2022a). The effect of the scale-dependent bias will be taken into account within the framework of the forwardmodeling. In a simulation-based approach we assume tight priors on the scale dependent bias coefficient b φ , in order to focus on the f NL constraints (see also Coulton et al. 2023b, for an extensive discussion). METHOD In this section we review the main aspects of our methodology for data compression and quasi-maximum likelihood estimation of cosmological and PNG parameters, starting from the evaluation of power spectrum and modal bispectrum summary statistics. Quasi maximum-likelihood estimator Starting from a given data vector d (a given set of summary statistics, like the power spectrum and/or the bispectrum) that depends on some parameters of interest denoted θ (e.g. f NL ), one can write the following quasi maximum-likelihood estimator for the value of the parameters (see Alsing & Wandelt 2018, for details): where the subscript * denotes that the quantities are evaluated at some chosen fiducial point, and µ and C are, respectively, the mean and the covariance of d. The two key ingredients of this estimator, which are the Fisher information F and the compressed score statistic t, will be detailed below. Note also that in this expression we assume a Gaussian likelihood and a dependence on parameters through the mean only, a reasonable assumption as verified in Jung et al. (2022). The Fisher matrix, a standard method to evaluate the information content of some observables, is given by: This requires knowledge of the derivatives ∇ θ µ and the covariance C, which can be both evaluated from a large set of simulations, as we do in this work (see section 4). However, reaching numerical convergence for the joint analysis of multiple parameters may be very challenging and therefore prone to wrong results. In Jung et al. (2022) we checked, by studying the matter field, that if the covariance matrix is not converged, it would typically induce suboptimal error bars, and that non-converged derivatives could bias the estimated parameters. In Coulton et al. (2023b) we showed that noisy derivatives could lead to overconfident error bars when working with the halo field. To tackle this problem, the alternative compressed Fisher method described in Coulton et al. (2023b) (see also Coulton & Wandelt 2023, for details) can provide conservative bounds. It consists of two steps, the first of which is to compress the data to the score function using (see Alsing & Wandelt 2018) that is equivalent to the MOPED compression scheme of Heavens et al. (2000). This operation reduces the data vector d of size n down to only p numbers, where p is the number of parameters of interest, while keeping all relevant information about these parameters. Then, to compute the compressed Fisher matrix one has only to apply the standard expression of eq. (4) to the compressed data. 
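Equations (3)-(5) referenced above were lost in extraction; in the standard score-compression form of Alsing & Wandelt (2018) and the MOPED scheme of Heavens et al. (2000), which the surrounding text describes, they read theta_hat = theta_* + F_*^{-1} t_*, F = (grad_theta mu)^T C^{-1} (grad_theta mu), and t = (grad_theta mu)^T C^{-1} (d - mu_*). The minimal sketch below applies these expressions to toy Gaussian "simulations"; the synthetic data, the linear response model and all variable names are illustrative assumptions, not the paper's pipeline.

import numpy as np

rng = np.random.default_rng(0)
n_stat, n_par = 50, 2          # size of the summary-statistic vector, number of parameters

def simulate(theta, n_sims):
    """Toy 'simulations': Gaussian summaries whose mean depends linearly on theta."""
    A = np.vander(np.linspace(0, 1, n_stat), n_par, increasing=True)  # fixed response matrix
    mean = A @ theta
    return mean + rng.normal(scale=0.5, size=(n_sims, n_stat))

theta_fid = np.array([1.0, 0.0])
fid_sims = simulate(theta_fid, 5000)          # ensemble at the fiducial point
mu_fid = fid_sims.mean(axis=0)
C = np.cov(fid_sims, rowvar=False)
Cinv = np.linalg.inv(C)

# Numerical derivatives of the mean, d mu / d theta_i, from paired +/- simulations
eps = 0.05
dmu = np.zeros((n_par, n_stat))
for i in range(n_par):
    step = np.zeros(n_par); step[i] = eps
    dmu[i] = (simulate(theta_fid + step, 500).mean(axis=0)
              - simulate(theta_fid - step, 500).mean(axis=0)) / (2 * eps)

F = dmu @ Cinv @ dmu.T                         # Fisher matrix, the analogue of eq. (4)
def score(d):                                  # MOPED-compressed score, analogue of eq. (5)
    return dmu @ Cinv @ (d - mu_fid)

d_obs = simulate(np.array([1.2, -0.1]), 1).ravel()          # one mock 'observation'
theta_hat = theta_fid + np.linalg.solve(F, score(d_obs))    # quasi-ML estimate, analogue of eq. (3)
print(theta_hat)

The two-step conservative variant described next simply splits the simulation ensemble: one part builds dmu and the compression, the other part is compressed and used to re-estimate F.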
An important subtlety of this scheme is that it requires two separate sets of simulations for the two different steps. The first set is used for the compression step, to build a new summary statistic, which will be suboptimal if the derivatives are noisy. The second set is then compressed and used to estimate the Fisher matrix from the compressed statistics, which is suboptimal if the compression step is suboptimal, but is also a lot less noisy due to the much lower dimensionality of the compressed statistics.

Summary statistics In this work, we use the same observables as in Jung et al. (2022), based on the power spectrum and bispectrum statistics, as they contain significant and complementary information about both the ΛCDM cosmological parameters and the PNG amplitudes f NL . The standard power spectrum estimator of a field δ(k), defined on a grid of fundamental mode k f , is given by eq. (6), where V is the survey volume and a binning of the k-range has been introduced, with each bin ∆ i having a width k f and containing N i independent vectors of k. As was initially shown for CMB NG analysis in Fergusson et al. (2010, 2012a), and extended later to LSS in Fergusson et al. (2012b); Regan et al. (2012); Schmittfull et al. (2013) (see also Lazanu et al. 2016, 2017; Hung et al. 2019a,b; Byun et al. 2021; Byun & Krause 2022), the bispectrum information can be efficiently extracted from data by measuring the modal coefficients defined in eq. (7), for a well-chosen basis of one-dimensional functions q p (k) and mode triplets n ↔ (p, q, r). We refer the reader to Jung et al. (2022) for the details of the exact setup we use for the analyses presented in section 4, as they are almost identical (the only change is the addition of two special modes, introduced in Byun et al. (2021) and recalled in appendix A, describing the shot-noise component of the bispectrum expected from halos).

Specifications For our analysis we use the publicly available Quijote and Quijote-png suites of N-body simulations (Coulton et al. 2023a). Each simulation represents a periodic cubic box of length 1 h −1 Gpc, containing 512^3 particles, run with the Gadget-III code (Springel 2005). Initial conditions are generated at z i = 127 with the codes 2LPTIC (Crocce et al. 2006) in the Gaussian case and 2LPTPNG in the non-Gaussian case (Scoccimarro et al. 2012; Coulton et al. 2023a); linear matter power spectra and transfer functions are obtained from CAMB (Lewis et al. 2000). Finally, dark matter halos are identified using the Friends-of-Friends algorithm (Davis et al. 1985) with a linking length of b = 0.2; we select those with a mass larger than M min = 3.2 × 10^13 M⊙/h, corresponding to a number density n̄ ∼ 5.1 × 10^−5 h^3 Mpc^−3 at z = 1 (see Hahn et al. 2020, for a power spectrum and bispectrum analysis of these halo catalogues focused on cosmological parameters). We construct the halo density field in redshift-space at z = 1 by depositing the halo positions, displaced radially by the velocity, on a grid of size N grid = 256, using a fourth-order interpolation scheme implemented in the Pylians3 code (https://github.com/franciscovillaescusa/Pylians3; Villaescusa-Navarro 2018). We then measure the power spectrum and modal bispectrum monopoles using the estimators (6) and (7), including modes up to k max = 0.5 h Mpc −1 .
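Equation (6), the binned power spectrum estimator referred to above, is missing from the extracted text. The sketch below implements one common FFT-based convention, P̂(k_i) = V ⟨|δ_k|²⟩ averaged over the modes in bin Δ_i; the normalization, the binning choices and the white-noise test field are assumptions and may differ from the exact setup of Jung et al. (2022).

import numpy as np

def power_spectrum(delta, box_size, n_bins=32):
    """Binned P(k) monopole of an overdensity field `delta` on a cubic grid.

    Assumes P(k) = V * <|delta_k|^2> with delta_k the grid-normalized Fourier
    transform; other codes may differ by factors of the volume V.
    """
    n = delta.shape[0]
    volume = box_size ** 3
    delta_k = np.fft.rfftn(delta) / n ** 3          # dimensionless Fourier modes
    kf = 2 * np.pi / box_size                       # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf          # integer multiples of kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2 + kz[None, None, :] ** 2)

    edges = np.linspace(kf, kmag.max(), n_bins + 1)
    kcen, pk = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (kmag >= lo) & (kmag < hi)
        if mask.any():
            kcen.append(0.5 * (lo + hi))
            pk.append(volume * np.mean(np.abs(delta_k[mask]) ** 2))
    return np.array(kcen), np.array(pk)

# Example: white-noise field in a (1 Gpc/h)^3 box
field = np.random.default_rng(1).normal(size=(64, 64, 64))
k, pk = power_spectrum(field, box_size=1000.0)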
For the numerical computation of the covariance matrix, we have 15,000 simulations at fiducial cosmology, whereas smaller sets of 500 realizations, with varying input parameters, are used to evaluate the derivatives in eq. 5. In particular, the analysis is focused on the PNG amplitudes of the three shapes considered here, f local NL , f equil NL , f orth NL ; four cosmological parameters Ω m , n s , σ 8 and h; and one parameter related to the halo bias, M min . The variation of the minimum halo mass generates distinct catalogs with M fid min ± ∆M min (see table 1), which consequently leads to a variation of the halo number density. This is roughly equivalent to a variation of the linear bias contribution (see e.g. Desjacques et al. 2018, for details), while it propagates, to a minor extent, to higher order terms. Although this bias model is quite simplistic, it is still useful within the framework of the first-order analysis presented in this work. A thorough investigation of the impact of the bias parameters, within a simulation-based approach, requires populating the halos with a HOD and varying the HOD parameters, which is left for future work. More details about the specifications of these simulations, concerning all the parameters considered in our analyses, can be found in table 1.

Fisher constraints We aim to evaluate the information content on ΛCDM parameters and PNG amplitudes contained in the power spectrum and bispectrum of the halo field at redshift z = 1 using a Fisher matrix formalism. This analysis complements the work of Coulton et al. (2023a), as we focus on a different redshift and make use of a different bispectrum estimator. We explore the dependence on the number of simulations used, on the chosen k max , and on the role of the different summary statistics. We show the results in figures 1, 2, 3 and table 2. As highlighted in Coulton et al. (2023a), a main difficulty of this simulation-based approach is to accurately compute numerical derivatives of both the power spectrum and bispectrum with respect to the different parameters considered. This is illustrated in figure 1, where we show that using smaller subsets of the 500 available pairs of simulations per parameter leads to spuriously smaller 1-σ uncertainties when analyzing all parameters jointly. Instead of the computationally intensive possibility of producing many more simulations, we use here an alternative method, described briefly in section 3.1, to compute conservative constraints from a lower number of simulations. As expected, the resulting 1-σ error bars decrease when we use more simulations to calculate numerical derivatives, and using the full set they are only between 5% and 25% larger than the unconverged standard Fisher constraints. The results are stable when using 250 pairs of simulations or more to compute each derivative. In the simpler situation where we consider only the three following parameters {σ 8 , Ω m , f local NL } in the analysis, the two methods give very similar results when numerical convergence is reached (using at least 100 pairs of simulations per derivative). This is why in the rest of this work we always use the conservative approach, knowing it is equivalent to the standard Fisher approach in the cases where numerical accuracy can be reached with the available simulations, and otherwise only leads to a reasonable overestimation of order 10%, as verified.
Note that we manage to keep this overestimation small here due to the use of the modal bispectrum, rather than a standard "binned" approach, because it compresses the original data more efficiently leading to more stable numerical derivatives (we need less than 50 modes to extract the full information of the bispectrum up to k max = 0.5 h Mpc −1 ). This is shown explicitly in appendix B. In figure 2 we study the dependence of the constraints on k max , considering values from 0.1 to 0.5 h Mpc −1 . The largest improvement (for both ΛCDM cosmological and PNG parameters) is obtained between k max = 0.1 and 0.2 h Mpc −1 , at which point error bars on f local NL become saturated (as well as for h). However, for the equilateral and orthogonal shapes considering smaller scales yields better constraints (a few percent for each additional increase of 0.1 h Mpc −1 ). For other parameters, the gain can even be larger, justifying the need to probe these nonlinear scales. Note also that all these improvements are computed using the conservative error bars obtained from the compressed summary statistics. Including smaller scales in the analysis typically leads to less converged numerical derivatives, and the less converged these derivatives are the more suboptimal the conservative approach becomes. This means that we may be underestimating slightly the constraining power of the small scales (in any case, this effect should not be large, as for k max = 0.5 h Mpc −1 this overestimation is ∼ 10% as can be seen in figure1). In figure 3 we show the information content of the halo power spectrum, the halo bispectrum, and their combination. The bispectrum is a much more efficient probe of the equilateral and orthogonal shapes than the power spectrum, while for other parameters they yield constraints of the same order separately. Their combination always helps to reduce degeneracies, although to a lesser extent than for the matter field studied previously in Coulton et al. (2023a); Jung et al. (2022). In table 2, we present the 1-σ conservative constraints on ΛCDM parameters and PNG amplitudes using jointly the power spectrum and bispectrum and including small scales up to k max = 0.5 h Mpc −1 . Unlike the matter field case discussed in Coulton et al. (2023a); Jung et al. (2022), including PNG shapes in the analysis increases slightly error bars on ΛCDM cosmological parameters. The different PNG shapes are also less correlated, as analyzing them jointly increases only slightly their own error bars. Parameter estimation As was shown in Jung et al. (2022) for the matter field, the simple quasi maximum-likelihood estimator (see eq. 3) built from the Fisher matrix at some chosen fiducial cosmology is very efficient to measure ΛCDM cosmological parameters and PNG amplitudes using the power spectrum and bispectrum information. Here we extend this conclusion to the halo field. The key ingredient of the estimator is the Fisher matrix, which in this work is fully evaluated from a very large set of simulations. As discussed in the previous section, we use a two-step conservative approach for its computation leading to slightly suboptimal results, because numerical convergence is difficult to reach with the standard method. We verify that this leads nonetheless to unbiased and near-to-optimal measurements of parameters, by estimating jointly σ 8 , Ω m , n s , h, f local NL , f equil NL and f ortho NL in the Quijote simulations using both the power spectrum and the bispectrum. 
In figure 4, we study the effect of varying k max on the 1-σ error bars of the quasi-maximum likelihood estimator.

Table 2. Joint 1-σ error bars on cosmological parameters and PNG from the power spectrum and the modal bispectrum of the halo field at z = 1, at kmax = 0.5 h Mpc −1 . In the first part, we report the Fisher constraints described in section 4.2 and in the second part the corresponding error bars of the quasi-maximum likelihood estimator used in section 4.3. We analyzed 15000 Quijote halo catalogues of 1 (Gpc/h) 3 volume at fiducial cosmology, and sets of 500 simulations with one adjusted parameter.

To compute these error bars, we use a set of 1000 simulations at fiducial cosmology and analyze it with the estimator calibrated using all other simulations (using 14000 simulations instead of 15000 to calculate the covariance has been verified to have no impact on the results). We repeat the procedure for different sets of 1000 simulations, and compute the standard deviation of the results. As expected, this shows a very similar behaviour to the Fisher constraints discussed in the previous section. Concerning PNG, there is no improvement for f local NL above k max = 0.2 h Mpc −1 , while for the other two shapes there is no clear saturation yet (although the gain between k max = 0.4 and 0.5 h Mpc −1 is only a few percent). For every other parameter considered (except h), the decrease of the error bars is significant up to k max = 0.5 h Mpc −1 . In table 2, we report the corresponding error bars at k max = 0.5 h Mpc −1 , considering cosmological parameters only or jointly with the PNG shapes. For all parameters, the error bars of the quasi-maximum likelihood estimator are close to, or slightly larger than, the Fisher constraints reported in the same table (less than 10% difference). In figure 5, we compare the estimated parameters to their input values for different cases, focusing here on changes of PNG amplitudes. We first study the mildly nonlinear regime (k max = 0.2 h Mpc −1 ) and then include also nonlinear scales (k max = 0.5 h Mpc −1 ). The measured parameters match their expected values for both ranges of scales when studying datasets at fiducial cosmology or having PNG of the equilateral or orthogonal types (f equil NL = +100 or f ortho NL = +100). There are however large statistical deviations on several parameters for the simulations with local NG (in particular f local NL , several datasets giving a value more than 5-σ away from the expected one).

Figure 5. Relative difference of measured cosmological parameters and PNG amplitudes using the quasi maximum-likelihood estimator (eq. 3) with respect to their expected value. We use the power spectrum and the bispectrum of the halo field jointly, for kmax = 0.2 h Mpc −1 in the top row and kmax = 0.5 h Mpc −1 below. Each column corresponds to a given parameter (cosmological or PNG). Each panel corresponds to a different input cosmology of the data samples (i.e. one with Gaussian initial conditions and the three types of PNG). For each input cosmology, we analyze five independent datasets of 100 realizations, each being indicated by its own colour and marker. The dark and light grey bands represent, respectively, the 2 and 1-σ intervals around the expected deviation (0).

This difference of behaviour between this specific set and the others can be explained by the fact that f local NL = 100 is more than 2-σ away from the fiducial value of f NL = 0 (based
on error bars given in table 2) while f ortho NL = 100 and f equil NL = 100 are respectively smaller and a few times smaller than a 1-σ deviation from f NL = 0. The NG simulations of the three shapes correspond to different regimes where a parameter is more or less displaced from the model we use to calibrate the estimator. This is confirmed in figure 6, where we check that simulations with f local NL = 50 (thus roughly a 1-σ deviation) give this time the expected results. These tests confirm the unbiasedness of the quasi maximum-likelihood estimator, with the caveat that the estimator must be calibrated relatively close to the actual parameter values. This, of course, is due to the fact that the entire method is based on a linear approxi- mation of the likelihood around the fiducial parameters. For the same reason, however, it is clear that the issue can be immediately addressed -at the computational cost of producing new sets of simulations -by implementing a standard recursive procedure, in which the estimated parameters at the previous step generate the new fiducial model for the following step, until convergence. Note that this scenario is not bound to occur in practice, since current cosmological parameter constraints from, e.g., CMB datasets such as Planck produce already quite narrow priors. While it was shown in section 4.2 that using a lower number of simulations to compute derivatives leads to more suboptimal Fisher matrices, it is also important to verify the effect of changes in the number of simulations used to compute the covariance matrix. We explore this in figure 7, where we show the increase of error bars due to using fewer simulations. Above 1000 simulations, error bars of the quasi-maximum likelihood estimator are stable (variations at the percent level) and close to the Fisher estimates (10% difference at most). CONCLUSIONS In this paper, we have developed a joint power spectrum and bispectrum quasi-maximum likelihood estimator of cosmological and PNG parameters and applied it to the study of the halo field in the Quijote-png simulation suite. The data analysis pipeline applies the optimal data compression methodology developed in Alsing & Wandelt (2018); Heavens et al. (2000) to a set of power spectrum and modal bispectrum summary statistics, efficiently extracted from the input mock realizations. In this way, we extended our previous analysis (Jung et al. 2022), which considered the matter field in the same dataset. The main arising technical complication was related to the convergence of numerical derivatives that are used to compute the Fisher information and to perform the final compression step. This turns out to be much slower now, with respect to the previous matter field analysis, now leading to potential problems such as spurious "superoptimal" error bars in the final estimator. Interestingly, though, we have also found that our modal decomposition of the bispectrum makes derivative convergence much faster with respect to the binning approach we implemented in Coulton et al. (2023b). Although still not sufficient for a brute force computation with the available realizations, such faster convergence suggests that more investigation should be done in the future to find the optimal bispectrum decomposition scheme, for the best numerical stability. In the meantime, to circumvent the issue, we have implemented the method first described in Coulton et al. 
(2023b), which is based on computing the Fisher matrix of MOPED-compressed statistics, extracted from an independent simulation set. This approach leads to stable, robust results, at the price of slight suboptimality in the final estimator. Despite such small suboptimality, we have verified that the forecasted errors significantly improve after including nonlinear scales up to k max = 0.5 h Mpc −1 (see figure 2 as a summary of our main results), in agreement with our findings in Coulton et al. (2023b). Given the significant contribution provided by small scale, shot-noise dominated, bispectrum triangles, further improvements could be in principle achieved in a future galaxy density analysis, by selecting higher-density tracers. In contrast to other parameters, we have observed a saturation of the f local NL error at a scale k ∼ 0.2 h Mpc −1 ; this is again consistent with our previous findings and with other forecasts, such as those in Karagiannis et al. (2018), where it was shown that the f local NL signal is dominated by the scale-dependent bias signature, on large scales, both in the power spectrum and in squeezed bispectrum configurations. After investigating the power spectrum and bispectrum information content on non-linear scales, the final step of our analysis consisted in testing our quasimaximum likelihood estimator on the simulated dataset. We have verified that we can recover unbiased results, deep into the non-linear regime, up to k max = 0.5 h Mpc −1 (see figures 5 and 6). Unbiasedness is of course verified only provided the starting fiducial parameter values in the estimator are close enough to the real ones. We studied this in more detail by varying the input value of f local NL in the analyzed simulations and verifying that biased results are obtained when the true f local NL in the data is ∼ 2σ away from the fiducial choice in the estimator. In a realistic observational scenario, this issue can of course always be addressed by implementing a recursive estimation procedure, which however becomes more and more expensive, by requiring new mock realizations and re-calibration of the estimator weights at each step. This suggests to investigate the possibility to reduce the overall computational cost of simulations. We have started a preliminary analysis in this direction, using the CARPool method , which is further discussed in Appendix C. Another possibility is to use machine-learning-augmented simulations, see Kaushal et al. (2022); Jamieson et al. (2022); Piras et al. (2022) for examples. Making use of these different techniques will play a key role in enabling simulationbased inference with the upcoming generation of galaxy surveys, which will have a much higher tracer density. The recovered error bars are, as expected, slightly larger than the optimal Fisher bound. This is a direct consequence of the fact that, to secure unbiasedness and robustness of the results, we have calibrated the estimator weights using the stable, yet conservative approximation of the Fisher matrix described above. Also in this case though, the slight suboptimality does not prevent us from obtaining large improvements in precision for the final parameter estimates, when we include nonlinear scales in the analysis (see figure 4). 
By extending our previous analysis to the halo field in redshift space, we have made a significant step forward toward the final development of an efficient, joint power spectrum and bispectrum estimation pipeline, able to extract cosmological and PNG parameters at strongly non-linear scales from actual observations. In a follow-up work we will further extend the current analysis, by looking at the galaxy density field, simulated via a suitable Halo Occupation Distribution (HOD), following Hahn & Villaescusa-Navarro (2021). Marginalization over HOD parameters will also allow us to significantly improve the accuracy of our bias model, which is currently defined by a single parameter that describes the leading order contribution and only to a minor extent captures higher order effects. Our conclusions are in full agreement with those in our companion work, Coulton et al. (2023b), where we performed an independent analysis at a different redshift (z = 0 in Coulton et al. 2023b, vs. z = 1 in this paper) and used a standard binning scheme for the bispectrum, rather than the modal approach developed here. Besides increasing the robustness of our conclusions via cross-validation of independent data analysis pipelines, the two works complement each other in several ways and together cover a significant range of crucial issues: Coulton et al. (2023b) focused on addressing numerical stability issues, on assessing the information content of our observables at different scales and on evaluating in detail all possible contributions to the error budget (such as, e.g., shot noise and super-sample covariance effects), whereas the present study, while cross-checking the previous Fisher matrix results, is more centered on optimal data compression and on the development and testing of related statistical estimators. LV acknowledges ERC (BePreSySe, grant agreement 725327), PGC2018-098866-B-I00 MCIN/AEI/10.13039/501100011033 y FEDER "Una manera de hacer Europa", and the "Center of

Figure 8. The left column is obtained with the modal bispectrum estimator used throughout this paper, while the other two use a standard "binned" approach with different bin widths (3k f in the middle panels and 2k f in the right panels). Otherwise, this figure is similar to figure 1, for kmax = 0.2 h Mpc −1 .

A. SHOT NOISE MODAL MODES The shot-noise contribution to the matter bispectrum at tree-level is given, for Poisson sampling, by B shot (k 1 , k 2 , k 3 ) = [P L (k 1 ) + P L (k 2 ) + P L (k 3 )]/n̄ + 1/n̄ 2 , where n̄ is the halo number density and P L (k) is the linear matter power spectrum. As introduced in Byun et al. (2021), this can be fully described in the modal way by using the two triplets (0, 0, 1) and (0, 0, 0), combining appropriate one-dimensional basis functions.

B. COMPARISON WITH THE STANDARD "BINNED" BISPECTRUM ESTIMATOR A key ingredient to compute the Fisher matrix (eq. 4), which is used both for constraint forecasts and to build estimators, is to have accurate derivatives of the summary statistics with respect to the different parameters considered. As discussed in section 4.2, even the large sets of 500 paired simulations for each parameter of the Quijote and Quijote-png collections are not sufficient to reach the necessary numerical convergence for the power spectrum and bispectrum derivatives. This typically leads to an underestimation of 1-σ error bars. On the other hand, a two-step computation, consisting first of compressing the data optimally, and then computing the Fisher matrix from this compressed data (using different datasets for the two steps), yields slightly overestimated error bars.
Combining the power spectrum and the bispectrum information of the halo field at z = 1, we have verified that even when we include nonlinear scales up to k max = 0.5 h Mpc −1 , the difference between the lower and upper bounds of the constraints is at most of order 20% on the different parameters, a very reasonable difference. In this appendix, we show that the modal estimator, which by construction compresses the bispectrum information in the data, is a necessary ingredient for the efficiency of the method. In figure 8, we compare the convergence of standard and conservative Fisher 1-σ uncertainties obtained with the modal bispectrum (as in the rest of this paper) and a standard "binned" bispectrum estimator. A main result is that the constraints obtained with the modal estimator are the most stringent, with a difference of order 10% with respect to the standard estimator with bins of width 3k f , and even more with smaller bins of width 2k f . Indeed the estimator using the smallest bins gives here the largest error bars, despite the fact that in principle it should keep more information, due to the greater difficulty of computing sufficiently accurate numerical derivatives. This lack of convergence is also very clear when we compare the lower and upper bounds on the error bars for all three methods. Using the full sets of simulations, the lower bounds are 10-20% smaller than the upper limits for the modal estimator, 30% for bins of width 3k f , and as much as two times smaller for bins of width 2k f . The modal estimator gives more stringent constraints, which are proven to be closer to the actual Fisher uncertainties, and should converge fully with a smaller number of simulations, as shown in the first row for the simple situation where the modal estimator has fully converged and the other two have not.

Figure 9. The CARPool method applied to the power spectrum (left column) and modal bispectrum (right column) of the halo field, at z = 1. On the top row, the black dashed lines correspond to the averages from the 15000 Quijote simulations at fiducial cosmology with f local NL = 0 (note that in the bispectrum case, all modal coefficients are normalized by dividing by the modes from these 15000 simulations at fiducial cosmology). The blue lines correspond to the average from 500 simulations with f local NL = +100. The red dotted lines have been computed using the CARPool method (see eq. C5), using 10 simulations at f local NL = +100 as the high-fidelity simulations and the 15000 simulations at fiducial cosmology as surrogates. The blue areas and red vertical lines show the respective error bars from the two cases (they correspond to standard errors for sets of 10 simulations, and have been multiplied by a factor 10 for visibility in the power spectrum case). In the bottom row, we show the difference between the CARPool estimates and averages from 500 simulations, normalized by the standard deviation. The blue areas correspond to the standard error for 10 simulations. Error bars on the CARPool estimates, shown in red, are computed by applying the CARPool method to many different sets of 10 simulations at f local NL = +100.

C. APPLICATION OF CARPOOL As we have verified in this paper, the quasi-maximum likelihood estimator is a powerful method to infer cosmological parameters and PNG amplitudes from halo catalogues using information beyond the mildly non-linear regime, which, however, like other simulation-based methods, can require a large number of costly forward simulations.
Therefore, a key component of future applications will be to include the variance-reduction CARPool technique into the full analysis pipeline. The basic idea behind CARPool is to use a relatively small number of high-fidelity simulations combined with a large number of less accurate simulations, or surrogates, to measure some chosen summary statistics with much smaller error bars. In the original applications of the method, these surrogates were computed using much faster, but less precise, N-body solvers like COLA (Tassev et al. 2013). This could for example be applied to the case of numerical derivatives, for which reaching numerical convergence typically requires thousands of costly simulations. We leave this application for future work, and instead focus here on the use of CARPool to speed up the iteration process of quasi-maximum likelihood estimation. To obtain unbiased estimates of cosmological parameters or f NL 's, it is important that the fiducial cosmology where we evaluate the covariance and numerical derivatives is not too far from the actual parameter values. For example, in section 4, we have seen that with a fiducial cosmology at f local NL = 0, the estimator is unbiased in measuring f local NL in simulations with an input of f local NL = 50, but not for f local NL = 100. Note that even if the measured bias were large when averaging from hundreds of simulations, it would still be smaller than the 1-σ error bar, making it a good first estimate. Then, working by iteration and choosing a new fiducial cosmology at these roughly measured parameters should yield unbiased results. To avoid producing a completely new large set of simulations at the new fiducial cosmology, we can consider the original simulations as the surrogates of the CARPool method (the idea of combining simulations at different cosmologies was also explored in Ding et al. 2022). The main ingredients of the CARPool method are:
• A set of N paired high-fidelity simulations and surrogates, sharing the same random seeds to produce their initial conditions, from which we measure some chosen summary statistic denoted y or c (simulation or surrogate respectively) and the corresponding sample covariances Σ yc and Σ cc .
• A separate set of M surrogates, to compute the mean of c, μ c , with the standard sample average.
Then, the key quantity to compute is x = y − β(c − μ c ), which by construction has the same ensemble average as y (i.e. x̄ = ȳ). The variance of x is minimized when the control matrix β is given by β = Σ yc Σ cc −1 . It was further shown that a very efficient choice, using only the diagonal elements of Σ yc and Σ cc , is the following diagonal control matrix: β diag = diag( cov(y 1 , c 1 )/σ(c 1 ) 2 , cov(y 2 , c 2 )/σ(c 2 ) 2 , ..., cov(y n , c n )/σ(c n ) 2 ), where n is the size of the vectors y and c. In figure 9, we show the results obtained with the CARPool technique applied to the Quijote-png set of halo catalogues. We use the large set of 15000 Quijote simulations at fiducial cosmology and with no PNG in their initial conditions as the surrogates, and a small set of 10 non-Gaussian simulations (f local NL = +100) with the same ΛCDM cosmological parameters as the high-fidelity simulations, the goal being to predict the power spectrum and bispectrum more accurately outside of the fiducial cosmology. We compare the CARPool results to the 500 simulations with f local NL = +100 at our disposal and verify that they are indeed unbiased, as expected.
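To make the CARPool combination above concrete, the sketch below computes the surrogate mean, the diagonal control matrix β_diag and the combined estimate x = y − β(c − μ_c) on synthetic stand-ins for the measured statistics; the array sizes and the toy correlation structure are assumptions, not the Quijote measurements.

import numpy as np

rng = np.random.default_rng(2)
n_stat = 40                      # e.g. number of P(k) bins or modal coefficients

# Paired high-fidelity simulations y and surrogates c (shared seeds -> strongly correlated),
# plus a large independent surrogate-only set used for the surrogate mean.
common = rng.normal(size=(10, n_stat))
y = 1.0 + common + 0.1 * rng.normal(size=(10, n_stat))        # 10 expensive simulations
c = 0.8 + common + 0.1 * rng.normal(size=(10, n_stat))        # their paired surrogates
c_large = 0.8 + rng.normal(size=(15000, n_stat))              # many cheap surrogates

mu_c = c_large.mean(axis=0)                                    # surrogate mean

# Diagonal control matrix: beta_i = cov(y_i, c_i) / var(c_i), estimated from the pairs
dy = y - y.mean(axis=0)
dc = c - c.mean(axis=0)
beta = (dy * dc).mean(axis=0) / dc.var(axis=0)

x = y - beta * (c - mu_c)        # CARPool estimates: same ensemble mean as y, reduced variance
print(x.mean(axis=0)[:5])        # compare with y.mean(axis=0)[:5]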
We repeat the procedure to many 10 simulation subsets among the 500 to check that the result is not spurious, and to derive error bars on the CARPool averages. For all power spectrum and bispectrum modes, the error bars are significantly smaller than the standard errors on the average from 10 simulations alone. The effect is the strongest on linear scales (small k for the power spectrum, and the first few bispectrum modes which describes the tree-level matter bispectrum), but is also present in the non-linear regime. In figure 10, we follow a similar procedure to study the derivatives of the power spectrum and bispectrum with respect to f local NL . We use the derivatives evaluated at f local NL = 0 by finite difference applied to the 500 f local NL = ±100 simulations as surrogates, to compute the derivatives at f local NL = 50 using only a few simulations with f local NL = 0 or 100. For the power spectrum the improvement is small, or even negligible in some cases, outside of the largest scales. For the bispectrum, there is a significant improvement of the first few modes describing the tree-level matter bispectrum. All error bars are reduced by the CARPool method, although the improvement is very small for some modal coefficients. One issue here is the small number of surrogates compared to the previous application (only 500 instead of 15000), adding to the fact that we know that the surrogate derivatives are not even fully converged numerically (as discussed thoroughly in section 4). These examples illustrate briefly the possibilities of the CARPool technique. We leave its full implementation in the pipeline for future works, where we will also include the powerful "CARPool Bayes" introduced in Chartier & Wandelt (2022) for the fast and accurate estimation of the covariance matrix and its inverse.
10,731
2022-11-14T00:00:00.000
[ "Physics" ]
Human CD4−CD8− Invariant Natural Killer T Cells Promote IgG Secretion from B Cells Stimulated by Cross-Linking of Their Antigen Receptors Immunoglobulin (Ig) M production can be induced by the interaction of thymus-independent type-2 (TI-2) antigen (Ag) with B cell Ag receptors (BCRs) without the involvement of conventional T cells; for IgG production through the same process, however, a second signal is required. Previous studies have reported that invariant natural killer T (iNKT) cells may be responsible for the second signal involved in IgG production. In the present study, we addressed whether human iNKT cells could participate in the production of Ig against TI-2 Ag in vitro. Two major distinct subsets of human iNKT cells, CD4+ CD8β− (CD4) and CD4− CD8β− [double negative (DN)] cells, were generated from peripheral blood monocytes from a healthy volunteer. BCR engagement, triggered by anti-IgM antibody stimulation, examined here as a model of BCR engagement triggered by TI-2 Ag, induced abundant IgM production by B cells. Both CD4 and DN iNKT cells reduced IgM production and conversely enhanced IgG production in a dose-dependent manner. In addition, IgG production by CD19+CD27− (naïve) and CD19+CD27+ (memory) B cells was predominantly promoted by DN iNKT cells. Introduction The main causative bacteria of invasive infection, including Streptococcus pneumoniae and Haemophilus influenzae, possess thick polysaccharide capsules which confer the ability to resist phagocytosis by polymorphonuclear leukocytes [1]. Immunoglobulins specific for these polysaccharide capsules enhance opsonophagocytic killing (OPK) activity, which plays an important role in host protection against infections caused by these encapsulated bacteria [2] [3]. Recently, we reported that immunization with pneumococcal polysaccharide vaccine led to an increase in the serum level of serotype 3-specific IgG3, which facilitates survival after pneumococcal infection in mice [4]. Immunization with pneumococcal polysaccharide vaccine also generates a polysaccharide-specific IgG response in humans [5] [6]. The polysaccharide capsule, a thymus-independent type 2 (TI-2) antigen (Ag), has highly repetitive structures that simultaneously cross-link B cell receptors (BCRs), inducing B cell proliferation and IgM production [7]. The antibody response induced by TI-2 Ag is smaller than that induced by thymus-dependent (TD) Ag, and consists largely of the production of low-affinity IgM by B cells without the conventionally necessary T cell involvement. In addition, however, IFN-γ induces T cell-independent IgG production in response to TI-2 Ag [8], as this cytokine triggers the secondary stimulatory signals for T cell-independent B cell activation and isotype switching to produce IgG [9] [10].
Human invariant natural killer T (iNKT) cells express only two αβ T cell Ag receptors, namely, Vα24-Jα18 and Vβ11, and have been identified as a unique lymphocyte population playing a critical role in both innate and adaptive immune responses [11] [12].Although Vα24 + Vβ11 + iNKT cells are present only in very small proportions (<0.01%-1%) in human blood [13], these cells recognize glycolipids from bacteria and/or self in context with CD1d, a nonpolymorphic MHC class I-like molecule, which leads to the production of large quantities of cytokines such as IFN-γ, IL-4, IL-10 and IL-17A [14].Human Vα24 + iNKT cells comprise two distinct major subpopulations, one expressing CD4 + CD8β − (CD4) and the other CD4 − CD8 − [double negative (DN)] [13].These two subsets of iNKT cells differ in terms of the cytokines they produce to regulate various immune responses [15].Mice lacking iNKT cells exhibit defective IgG response to pneumococcal polysaccharide Ags, intact response to TD Ags [16] and impaired host defense against pneumococcal infection [17].These previous observations suggest that iNKT cells may secrete the IFN-γ that triggers isotype switching in TI-2-induced IgG production. In the present study, we examined the in-vitro effect of human iNKT cells on Ig production by human B cells stimulated via cross-linking of their Ag receptors, which mimics BCR engagement by TI-2 Ags.We found that co-culture with iNKT cells reduced IgM production but increased IgG production by B cells stimulated via cross-linking of BCRs, and that this activity was higher in DN iNKT cells than in CD4 iNKT cells.These findings suggest that iNKT cells may contribute to the class-switching from IgM to IgG that occurs upon stimulation with TI-2 Ags. Ethical Statement All experimental protocols described in this study were reviewed and approved by the Ethics Committee for Human Experimentation at Tohoku University, Sendai, Japan (approval numbers: 2012-1-20, 2013-1-496). Generation of Vα24Jα18 + Invariant NKT Cells Human iNKT cells were separated from peripheral blood mononuclear cells (PBMCs) obtained from peripheral blood of healthy volunteers as described previously [18].After 15 days of expansion, CD4 + CD8 − (CD4) and CD4 − CD8 − [double negative (DN)] iNKT subsets were sorted using a FACSAria cell sorter (Becton Dickinson, San Diego, CA, USA).The CD4 and DN iNKT cells (2 × 10 6 cells/well) were stimulated with irradiated allogenic PBMC (1 × 10 7 cells/well) prepulsed for 5 h with α-GalCer (100 ng/ml) in RPMI 1640 medium supplemented with 10% human serum, 100 U/ml penicillin G, 100 μg/ml streptomycin, 2 mM L-glutamine, and 25 mM HEPES containing 20 U/ml rhIL-2.From day 3 to day 9, cells were split into two fractions once or twice daily.The cultures were expanded by adding medium containing rhIL-2.On day 11 or 12, expanded cells were collected and used as iNKT cells.The surface phenotypes of expanded iNKT cells were identified by flow cytometry (FACSCant II; BD Biosciences).The presence of dead cells was excluded by running parallel 7-AAD-stained samples.After one or two passages of each primary cell culture, the remaining cells were used in the experiments. 
Human Peripheral Blood B Cells PBMCs were isolated from heparinized blood of one healthy adult volunteer by standard density gradient concentration over Ficoll-Paque PLUS (GE Healthcare Life Sciences, Piscataway, NJ, USA).Interface PBMCs were pelleted, washed twice, and resuspended in MACS buffer (Miltenyi Biotec).The naïve (CD19 + CD27 − ) and memory phenotype (CD19 + CD27 + ) B cells were isolated from PBMCs by a memory B cell isolation kit according to the manufacturer's protocol. NKT and B Cell Cultures The B cells used in the current assays were derived from a single donor.Initially, CD27 − and CD27 + B cells (2.5 × 10 4 cells/well) were stimulated with goat F(ab')2 anti-human IgM (1 μg/ml) for 15 min on ice.The B cells were washed three times with culture medium, and then co-cultured with rabbit F(ab')2 anti-goat IgG (3 μg/ml) in the presence or absence of CD4 or DN iNKT cells (2.5× 10 3 -2.5× 10 4 cells/well) for five days.The culture supernatants were stored at −80˚C until assayed for immunoglobulins by ELISA. Measurement of Total IgG and IgM The quantities of IgM and IgG in the culture supernatants were measured by enzyme-linked immunosorbent assay (ELISA).Microtiter plates (Nunc A/S, Roskilde, Denmark) were coated with 250 ng/ml of anti-human IgM or 192 ng/ml of anti-human IgG Ab in PBS for 1 h at 37˚C, and blocked with 1% FCS PBS at 4˚C overnight.Prior to testing, samples were diluted with culture medium supplemented with 0.05% Tween 20.Next, serial two-fold dilution of hIgM or hIgG to 1:1024 was performed arbitrarily; resulting solutions were added to the wells and incubated at room temperature for 2 h.HRP-conjugated anti-human IgM or IgG antibodies diluted with 1:4000 were used as detection Ab.The concentrations of IgM and IgG were determined based on the absorbance at 450 nm.The detection limit was 0.2 ng/ml. Statistical Analysis Data are presented as mean values ± standard deviation (SD).Differences between the two groups were tested using two-tail analysis in an unpaired Student's t-test.Differences among three or more groups were tested using ANOVA with post-hoc analysis (Student-Newman-Keuls test). 
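As a rough illustration of the statistical workflow described above (unpaired two-tailed Student's t-test for two groups, one-way ANOVA with a post-hoc test for three or more groups), the following Python sketch uses SciPy and statsmodels. The group names and concentration values are hypothetical, and Tukey's HSD is shown purely as a readily available stand-in for the Student-Newman-Keuls post-hoc procedure used in the paper.

```python
# Sketch of the statistical comparisons described above (hypothetical data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical IgG concentrations (ng/ml) from ELISA, one array per culture condition.
b_only     = np.array([12.1, 10.8, 13.5, 11.9])   # B cells + anti-IgM only
b_plus_cd4 = np.array([25.3, 27.8, 24.1, 26.6])   # + CD4 iNKT cells
b_plus_dn  = np.array([41.2, 39.5, 44.0, 42.7])   # + DN iNKT cells

# Two-group comparison: unpaired, two-tailed Student's t-test.
t_stat, p_two_groups = stats.ttest_ind(b_only, b_plus_dn)
print(f"t = {t_stat:.2f}, p = {p_two_groups:.4f}")

# Three or more groups: one-way ANOVA.
f_stat, p_anova = stats.f_oneway(b_only, b_plus_cd4, b_plus_dn)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")

# Post-hoc pairwise comparisons. The paper uses Student-Newman-Keuls;
# Tukey's HSD is shown here only as a substitute for illustration.
values = np.concatenate([b_only, b_plus_cd4, b_plus_dn])
groups = (["B"] * len(b_only) + ["B+CD4"] * len(b_plus_cd4) + ["B+DN"] * len(b_plus_dn))
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```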
iNKT Cell-Induced Enhancement of Immunoglobulin Production by B Cells upon Stimulation with Antigen Receptor-Engagement To Role of iNKT Cells in Immunoglobulin Production by Naïve and Memory B Cells upon Stimulation with Antigen Receptor-Engagement IgM + CD27 + memory B cells in PBMCs play an important role in anti-pneumococcal polysaccharide IgG pro- duction in humans [19].Therefore, to address the effect of co-culture with iNKT cells on Ig production by naïve (CD27 − ) and memory (CD27 + ) B cells, we separated B cells into two subsets according to the expression of CD27 and stimulated each subset with anti-IgM Ab in the presence or absence of CD4 or DN iNKT cells.As shown in Figure 2(a) and Figure 2(c), both CD27 + and CD27 − B cells produced similar levels of IgM upon BCR cross-linking, and IgM production by both subsets was significantly reduced when co-cultured with either CD4 or DN iNKT cells.The cross-linking of BCRs induced low levels of IgG production by naïve and memory B cells in the absence of iNKT cells, and IgG production by CD27 + B cells was significantly higher than that by CD27 − B cells.In addition, IgG production by naïve and memory B cells was significantly enhanced when they were co-cultured with either CD4 iNKT cells or DN iNKT cells; this enhancement effect was much stronger with DN iNKT cells than with CD4 iNKT cells (Figure 2 Discussion In the present study, we evaluated the effect of co-culture with iNKT cells on Ig production by B cells upon stimulation via crosslinking of BCRs.Our data demonstrated that CD4 iNKT cells and DN iNKT cells accelerated the isotype switching from IgM to IgG, as shown by decreased IgM and increased IgG, in both naïve and memory B cells.DN iNKT cells accelerated this response even further than CD4 iNKT cells did.These results suggest that activation of iNKT cells may serve as a potent adjuvant, eliciting TI-2 Ag-induced IgG production in the development of more effective vaccination strategies for prevention of the infectious diseases caused by encapsulated bacteria.Kobrynski and coworkers [16] demonstrated that TI-2 Ag-specific IgG production was completely abrogated in CD1d-or β2-microglobulin-deficient mice, suggesting that NKT cells may potentially promote IgG production in response to TI-2 Ags.In addition, we previously reported that activation of iNKT cells by α-GalCer increased IFN-γ-producing NKT cells, and that this increase was correlated with enhanced production of the poly- saccharide-specific IgG3 after immunization with pneumococcal polysaccharide vaccine in mice [4].In our clinical study, immunization with anti-pneumococcal polysaccharide vaccine led to the production of serotype-specific IgG, which was correlated with an increase in DN iNKT cells in peripheral blood [5].Thus iNKT cells are suggested to play an important role in the production of serotype-specific IgG after immunization with TI-2 Ags. Co-culture with iNKT cells accelerated the production of IgG by B cells upon stimulation with BCR cross-linking without any exogenously added agonists of iNKT cells.In addition, α-GalCer did not induce any further increase in IgG production promoted by iNKT cells alone (data not shown).In earlier studies by Galli et al. 
[20], iNKT cell-induced promotion of B cell activation was demonstrated to depend on CD1d expressed on a variety of B cell subsets and to be delivered in the absence of α-GalCer [21].These findings suggest that iNKT cells may recognize some endogenous ligand presented in context with CD1d on B cells, although the responsible molecule remains to be identified. IgG production by B cells was more dramatically accelerated by DN iNKT cells than by CD4 iNKT cells.While CD4 iNKT cells have the potential to produce large amounts of Th2 cytokines such as IL-4 and IL-13, DN iNKT cells have a Th1-biased profile, enabling increased IFN-γ production and prominent expression of NK lineage receptors [15] [22]- [24].In addition, chemokine receptors such as CCR6 and CXCR6 are preferentially expressed on DN iNKT cells rather than CD4 iNKT cells [22] [25], though the latter are the more common type among Th1 cells [26].CD4 co-receptor potentiates the activation of human CD4 iNKT cells by engaging CD1d molecules [24].Previously, Galli and co-workers [20] demonstrated that, compared to DN iNKT cells, human CD4 iNKT cells induced higher levels of IgM and IgG production in α-GalCer-pulsed B cells.In that study, B cells were considered to receive activation signals during cognate interaction with activated iNKT cells without BCR cross-linking.The current study differed from theirs in terms of the primary B cell activation method.Thus the phenotypic and functional properties of iNKT cells may be associated with IgG production by B cells stimulated with TI-2 Ags, though further investigation is required to define the molecular mechanism mediating the functional difference between CD4 iNKT cells and DN iNKT cells in T cell-independent Ig production. TI-2 Ags are reported to generate memory B cells, although they do not elicit the Ab booster response or the germinal center formation following secondary immunization [27].Moens and coworkers [19] reported that CD19 + CD27 + IgM + B cells were predominantly associated with an anti-polysaccharide IgG response after pneumococcal polysaccharide vaccination.In keeping with these previous observations, in the current study, cross-linking of BCRs induced the production of IgG by CD27 + (memory type) B cells at a higher level than that by CD27 − (naïve) B cells in the absence of iNKT cells.Yet iNKT cells promoted IgG production not only by memory B cells but also by naïve B cells.In a previous study by Bai and co-workers [28], iNKT cell activation through cognate interaction with dendritic cells induced isotype switching by B cells and promoted long-term memory response to pneumococcal capsular polysaccharides.In addition, CD4 iNKT cells and DN iNKT cells are reported to promote the proliferation of naïve and memory B cells derived from peripheral blood in vitro [20].Thus, iNKT cells might be involved in the potentiation of IgG production by naïve and memory B cells upon stimulation with TI-2 Ags. Maddur and coworkers demonstrated that B cells activated via BCR cross-linking enhanced the expression of OX-40L and co-stimulatory molecules such as CD80, CD86 and CD40 on DCs [29] [30].In our previous study, DCs with increased expression of OX-40L caused NKT cells to produce substantial levels of IFN-γ [31].Considered collectively, B cells activated by TI-2 Ags may amplify IFN-γ production by iNKT cells through the enhanced Th1 response induced by DCs in vivo. 
Conclusion In conclusion, we demonstrated that iNKT cells promoted the production of IgG by human CD27 + and CD27 − B cells upon stimulation via cross-linking of BCRs and that IgG production was more strongly promoted by DN iNKT cells than by CD4 iNKT cells.The present study provides important implications for understanding the contribution of iNKT cells to IgG production by TI-2 Ag-stimulated B cells, which is expected to be helpful in the development of more effective vaccination strategies for prevention of pneumococcal infection. investigate the effect of co-culture with iNKT cells on Ig production by B cells activated via cross-linking of BCRs, B cells were stimulated with anti-IgM Ab in the presence or absence of CD4 or DN iNKT cells, and the production of IgM and IgG in the culture supernatants was analyzed.As shown in Figure 1(a), B cells produced large quantities of IgM under BCR cross-linking alone, whereas the addition of CD4 iNKT cells resulted in the reduction of IgM production in a dose-dependent manner.A similar pattern was observed upon co-culture with DN iNKT cells (Figure 1(c)).By contrast, IgG production by B cells was not clearly increased when activated via BCR cross-linking alone (Figure 1(b) and Figure 1(d)).The synthesis of IgG by B cells stimulated via cross-linking of BCRs was significantly enhanced by co-culture with CD4 and DN iNKT cells in a dose-dependent manner.In addition, this activity was much higher in DN iNKT cells than in CD4 iNKT cells (Figure 1(b) and Figure 1(d)). Figure 1 . Figure 1.Effect of co-culture with iNKT cells on IgM and IgG production by B cells stimulated with anti-IgM Ab.B cells were stimulated with anti-IgM Ab in the presence or absence of CD4 iNKT cells or DN iNKT cells for five days, and the concentrations of Ig in the culture supernatants were measured.IgM (a) and IgG (b) production by B cells co-cultured with CD4 iNKT cells; IgM (c) and IgG (d) production by B cells co-cultured with DN iNKT cells.Similar results were obtained in two independent experiments.α-μ, anti-IgM Ab. **, p < 0.01. Figure 2 . Figure 2. Effect of co-culture of iNKT cells on IgM and IgG production by naïve and memory B cells stimulated with anti-IgM Ab.CD27 + or CD27 − B cells were stimulated with anti-IgM Ab in the presence or absence of CD4 iNKT cells or DN iNKT cells for five days, and the concentrations of Ig in the culture supernatants were measured.IgM (a) and IgG (b) production by B cells co-cultured with CD4 iNKT cells; IgM (c) and IgG (d) production by B cells co-cultured with DN iNKT cells.Similar results were obtained in two independent experiments.α-μ, anti-IgM Ab. *, p < 0.05; **, p < 0.01; NS, not significant.
3,872.4
2016-05-12T00:00:00.000
[ "Biology", "Medicine" ]
Variational solutions to fermion-to-qubit mappings in two spatial dimensions Through the introduction of auxiliary fermions, or an enlarged spin space, one can map local fermion Hamiltonians onto local spin Hamiltonians, at the expense of introducing a set of additional constraints. We present a variational Monte-Carlo framework to study fermionic systems through higher-dimensional (>1D) Jordan-Wigner transformations. We provide exact solutions to the parity and Gauss-law constraints that are encountered in bosonization procedures. We study the $t$-$V$ model in 2D and demonstrate how both the ground state and the low-energy excitation spectra can be retrieved in combination with neural network quantum state ansatze. Introduction Studying the mapping between fermionic operators and bosonic operators (and vice versa) is interesting both from a theory perspective, as well as for computational studies. As examples of the former, the 1D transverse-field Ising model and the Kitaev honeycomb model can be diagonalized after reformulating the Hamiltonian in terms of fermionic degrees of freedom [1]. Furthermore, transforming fermionic Hamiltonians into a set of spin operators is necessary to compute properties of fermionic systems using digital quantum devices. Especially in the NISQ era [2] of intermediate scale devices, it is highly advantageous to study efficient mappings that require fewer qubit resources. The most natural and commonly used mapping between fermionic and spin degrees of freedom is the Jordan-Wigner transformation (JWT), which Jannes Nys<EMAIL_ADDRESS>follows as a natural consequence of the second quantization formalism of fermions [3]. After we have chosen a fermion ordering in the second quantization formalism, the JWT maps each fermion operator f † i onto spin operators as fol- are Pauli matrices applied to spin i. The operator chain S i = ⊗ j<i Z j is necessary to maintain the anti-commutation relations on the fermionic side using a set of Pauli matrix operators that themselves follow commutation relations, and are commonly referred to as Jordan-Wigner strings. Physical fermionic Hamiltonians that describe closed quantum systems consist of bilinear and quadratic terms in the creation/annihilation operators. Such Hamiltonians conserve the fermion parity P f = (−1) N f (with N f the number of fermions), and are referred to as even parity operators. When the original fermionic operators are both spatially local and have even parity (i.e. conserve P f ), the locality is trivially preserved in the resulting spin Hamiltonian in 1D, since the Jordan-Wigner strings S i of local fermionic operator pairs cancel each other. In higher dimensions (>1D), however, the chosen ordering of the fermions in the JWT becomes increasingly important. Local evenparity fermionic operators are no longer mapped onto a set of local products of Pauli matrices since the Jordan-Wigner strings S i of spatially local fermion operator pairs no longer cancel each other. When the dimensionality of the system increases, the Jordan-Wigner strings in the spin Hamiltonian become increasingly non-local and it therefore increasingly difficult to study these systems numerically [4]. The mapping of fermion operators onto quantum spin operators is not unique. One can use this freedom in order to generalizations to fermion-spin mappings in higher dimensions with the main aim to maintain locality in the operators, and thereby reducing the size of the Jordan-Wigner strings. 
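As a concrete, minimal illustration of the 1D Jordan-Wigner string S_i = ⊗_{j<i} Z_j discussed above, the following Python/NumPy sketch builds the qubit representation f_i = (⊗_{j<i} Z_j) ⊗ σ⁻_i on a small chain and checks the canonical anticommutation relations numerically. The sign and ordering conventions here are one common choice and need not coincide with those adopted later in the paper.

```python
# Minimal numerical check of the 1D Jordan-Wigner transformation on n qubits.
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
# Lowering operator sigma^- = |0><1| in the convention |0> = empty mode, |1> = occupied.
SM = np.array([[0.0, 1.0], [0.0, 0.0]])

def kron_all(ops):
    return reduce(np.kron, ops)

def jw_annihilation(i, n):
    """f_i = Z_0 ... Z_{i-1} (sigma^-)_i acting on n qubits (one common convention)."""
    return kron_all([Z] * i + [SM] + [I2] * (n - i - 1))

n = 4
f = [jw_annihilation(i, n) for i in range(n)]

def anticomm(a, b):
    return a @ b + b @ a

# {f_i, f_j^dag} = delta_ij and {f_i, f_j} = 0 should hold exactly.
for i in range(n):
    for j in range(n):
        assert np.allclose(anticomm(f[i], f[j].conj().T), np.eye(2**n) * (i == j))
        assert np.allclose(anticomm(f[i], f[j]), 0.0)
print("Jordan-Wigner operators satisfy the canonical anticommutation relations.")
```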
One of the first studies that derived higher-dimensional generalizations to the Jordan-Wigner transformation dates back to the work of Wosiek [5]. In this work, Wosiek described a mapping of fermions moving on a 2D and 3D square lattice onto a set of local generalized Euclidean Dirac matrices. Thereby, he found the need to impose additional constraints on the system in order to remove redundant and unphysical sectors of the new Hilbert space. The constraints generated in this bosonization procedure were studied numerically only recently in Ref. [6]. Similar ideas were later explored by Bravyi and Kitaev [7] in the simulation of fermionic systems through local qubit gates. As in Ref. [8], they explored methods that increase the Hilbert space, while afterwards restricting the reachable quantum states to a physical sector of Hilbert space through a set of gauge conditions. Ball [9] demonstrated how these gauge conditions can be made local as well. Ball [9] and Verstraete-Cirac [8] both suggested the more explicit introduction of auxiliary fermionic modes to counteract the Jordan-Wigner strings. These auxiliary modes effectively store the parity nearby the interaction terms, which is otherwise captured by the Jordan-Wigner string [4]. The auxiliary Majorana fermions are subject to local interaction terms that commute with the physical Hamiltonian, which is necessary in order to keep the eigenspectrum of the original problem identifiable in the spectrum of the transformed Hamiltonian. In recent years, we have witnessed a renewed interest in methods for simulating fermionic systems through a set of local qubit gates [10,11,12,13,14]. This more recent theoretical activity is again driven by two main motives. On one hand, local mappings are of practical interest in the implementation of quantum algorithms to simulate fermionic matter on increasingly available qubit-based digital quantum computers. On the other hand, a recent theoretical advance has been made in connecting bosonization in 2D and Z 2 lattice gauge theories with Chern-Simons-like Gauss laws [13] (later generalized to three and arbitrary spatial dimensions in Ref. [15,16]). Various followup works have built on this connection to derive new fermion mappings in higher dimensions [17,10,18,19]. Compared to earlier methods (such as Refs. [9,8]) where JWT were carried out explicitly, recent techniques [13,18,6] take a different approach where one first defines bosonic operators from the fermionic ones, which can then be mapped directly onto quantum spin operators without the need to order the fermions. The equivalence of these bosonization procedures in 2D was proven by Chen et al. [14]. Despite this recent theoretical progress, the practical application of these techniques remains elusive, both in the context of classical computational methods, and as a basis for quantum algorithms. The main difficulty lies in the fact that the auxiliary degrees of freedom must satisfy stringent constraints in order to correctly represent fermionic degrees of freedom. Efficiently satisfying these constraints is a particularly important task especially in applications involving variational searches of many-body fermionic states. This is for example relevant for both classical variational methods based on spin/qubit degrees of freedom and for variational quantum algorithms tailored to qubits. 
In this work, we specifically focus on the variational simulation of fermionic systems on classical computers, with suitable many-body quantum states of spins degrees of freedom. Specifically, we demonstrate that we can factorize the wave function into an exact solution to the constraint, and a physical wave function that can be determined variationally. Furthermore, maintaining spatial locality in the transformation can allow us to map the fermionic Hamiltonian onto a spin Hamiltonian which features the same symmetries [17,10], and therefore gives us access to the low-energy excitation spectrum. We then show that there is substantial variational freedom in parameterizing the resulting many-body state. In this context, we concentrate on on neural-network-based parameterizations of the many-body wave function, known as neural-network quantum states (NQS) [20]. We show how this approach can be used to approximate the energy eigenstates of the fermionic system, using methods that are available to approximate quantum spin states [20, 21]. Bosonization We define an L x × L y 2D lattice with square cells and such that all edges point either along the lattice basis vectors x or y. The resulting set of edges reads E = {(r, r + x)|r ∈ V} ∪ {(r, r + y)|r ∈ V}. Here, x and y represent the lattice vectors and r = (r x , r y ), where r x ∈ {0, ..., L x −1} and r x = L x maps onto r x = 0 due to periodicity (similar for r y ). We study the t-V model (also called spinless Fermi-Hubbard model) where f † r are fermionic operators, and we have introduced the usual number operator n r = f † r f r . On each site, the physical Hilbert space is 2 dimensional {|0 , f † r |0 }, and the Hamiltonian in Eq. (1) contains only even parity fermionic operators. We carry out the bosonization procedure defined in Refs. [17,10], since it keeps the symmetries of the Hamiltonian manifest. For the sake of completeness, we describe the procedure in Appendix B, and summarize the results here. Bosonizing the Hamiltonian in Eq. (1) results in: On the resulting square lattice, each site hosts a physical (1) and auxiliary qubit/spin (2), as demonstrated in Fig. 1. As shown by Ref. [14], other bosonisation procedures are equivalent in 2D. Local constraints In order to reduce the number of degrees of freedom, the auxiliary system is subject to a Gausslaw constraint of the form In our notation, constraints such as the ones in Eqs. |Ψ . In terms of Pauli operators, G r in Eq. (3) takes the form We can separate the physical (1) and auxiliary (2) system and rewrite Eq. The operator on the left-hand side of this constraint also appears in Wen's plaquette model [22], and hence, Eq. (5) can be interpreted as a dynamical version this model, due to the right-hand side which depends on the physical system. The constraint in Eq. (5) resembles a Chern-Simons Gauss law, or flux attachment [13]. The constraint is diagonal in the physical system, and therefore, for each configuration of the physical system, the auxiliary system is in an eigenstate of the exactly solvable Wen's plaquette model (which is known to have robust topologically degenerate ground states), with different signs for the terms in the Hamiltonian. It is important to emphasize that the constraint in Eq. (5) is 'kinematic', meaning it only depends on the lattice topology, not on the Hamiltonian under consideration. Parity constraints We restrict the system to square even-by-even tori (i.e. L = L x = L y is even) with N = L 2 sites. 
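Stepping back briefly to the Gauss law in Eq. (5): because it is built from Wen-plaquette-type operators, it is useful to see explicitly that such plaquette operators mutually commute, so that all of them can be fixed simultaneously. The sketch below verifies this on a small periodic lattice; the specific assignment of X and Y to the plaquette corners is one common convention for Wen's model and may differ from the convention used in the paper.

```python
# Check that Wen-plaquette-type operators on a small periodic lattice mutually commute.
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

Lx, Ly = 3, 2
n = Lx * Ly

def site(x, y):
    return (x % Lx) + Lx * (y % Ly)

def operator(factors):
    """Tensor product over all n sites, given a {site: 2x2 matrix} dict."""
    return reduce(np.kron, [factors.get(i, I2) for i in range(n)])

def wen_plaquette(x, y):
    """X Y X Y around the four corners of the plaquette at (x, y) (one convention)."""
    return operator({
        site(x, y): X,
        site(x + 1, y): Y,
        site(x + 1, y + 1): X,
        site(x, y + 1): Y,
    })

plaquettes = [wen_plaquette(x, y) for x in range(Lx) for y in range(Ly)]
for i, A in enumerate(plaquettes):
    for B in plaquettes[i + 1:]:
        assert np.allclose(A @ B, B @ A)
print("All plaquette operators commute and can be fixed simultaneously.")
```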
Imposing the boundary conditions in the fermionic system, we obtain the additional constraints introduced by non-contractable Wilson loops. After bosonization, we obtain the following spin operator identities that need to be satisfied Furthermore, we fix the number of fermions to be N f , which is enforced through the constraint We can now set the chemical potential µ = 0, since it only adds a constant energy. Variational Monte Carlo approach The goal of this paper is to obtain variational solutions to the bosonized fermionic system. Hereby, we rely on Variational Monte Carlo (VMC). We briefly recap the concepts of VMC within our notation. For a more elaborate and pedagogical introduction, we refer to e.g. Ref. [23]. After a short recap, we will introduce our novel approaches to solving the gauge constraints described above. We will study the system in Eq. (1) using the bosonized Hamiltonian in Eq. (2). Our aim is to obtain the ground and low-lying excited states |Ψ using the decomposition In this notation |σ represent basis states in the S z basis. The systems consists of a physical sys- r N ) and auxiliary system We will take σ r ∈ {−1, +1} for simplicity. The probability amplitudes in Eq. (10) are given by a parametrized function Ψ θ with parameters θ. These parameters are determined by minimizing the variational energy We will evaluate the expectation value in Eq. (11) by sampling spin configurations using Markov-Chain Monte Carlo (MCMC). However, the constraints in Eqs. (3), and (7)-(9) restrict the allowed Hilbert space, and hence we must enforce these restrictions either by imposing constraints on Ψ θ , and/or by designing Markov-Chain samplers that only generate samples within the allowed Hilbert space. Solving constraints in VMC We now describe our novel approach to satisfy all constraints. To the best of our knowledge, there have so far been no attempts to solve local fermion-to-qubit transformed Hamiltonians in the VMC framework. The constraints in Eq. (7), (8) and (9) can be fulfilled exactly through a suitable sampling procedure since the constraints are diagonal in the S z basis. More specifically, MCMC samples can be generated through a set of sample updates based on the (free-fermion) Hamiltonian. Hence, given a sample, we can make the following Markov transitions r+x ): flip two neighboring qubits of system (1) along the x-axis: X r+y ): flip four qubits (2 physical, 2 auxiliary) on two neighboring sites along the y-axis: X It is also straightforward to generate initial random samples that fulfill the parity and number constraints, which are necessary to initiate the Markov chains. First, a physical system (1) fulfilling the N f constraint in Eq. (9) can be constructed. Next, we generate (L x − 1) × (L y − 1) random auxiliary qubit states, and infer the remaining auxiliary qubits by imposing the periodicity constraints in Eqs. (7) and (8). The resulting scheme is summarized in Algorithm 1. Alternatively, these parity constraints can be captured by a Restricted Boltzmann Machine (RBM) quantum state by introducing a hidden neuron with specific weight function [24]. The Gauss law in Eq. (3) cannot be satisfied through a sampling procedure alone, since the operators are not diagonal in the computational basis, and they therefore impose stringent constraints on the wave function itself. Within the basis that fulfills the constraints in Eq. 
(7) and (8) one can obtain the eigenspectrum of the original Hamiltonian through eigendecomposition of P −1 G HP G , where P G is the projection operator to the Gaussian constraint in Eq. (3) While the representation of the P G operator in the S z basis is block diagonal, it is not sparse, since the size of each block scales exponentially with the system size. Furthermore, since constraint Eq. (3) must be satisfied for all r ∈ V, the projection generates non-local effects in the auxiliary system, even though the individual constraints are local. Therefore, we must find an analytical form for the probability amplitude of a general quantum state that lies within the manifold of the Gauss law. Another approach is to implement the constraint in the Hamiltonian by adding the terms When the coupling K is taken to be sufficiently large, G r = 1 can in principle be satisfied by minimizing the total energy of the augmented Hamiltonian H = H + H c (this procedure is also suggested in Ref. [8]). In practice, however, we find that a soft constraint does not result in quantum states lying on the manifold dictated by the Gauss law, and therefore the spectrum does not reliably represent the physics of the fermionic system. As mentioned, exactly respecting Eq. (3) is essential in order to restrict our solution to the physical Hilbert space representing fermionic degrees of freedom. Since the r.h.s. operator in Eq. (5) is diagonal in the physical states, each Gauss constraint can be regarded as a dynamical constraint on the auxiliary system. It is important to obtain variational ansatzes which obey the Gauss law constraints by construction. Our (non-symmetrized) variational ansatz in general assumes a factorized form where θ represent a set of variational parameters, and ξ(σ) is purely a sign factor with |ξ| = 1. In this representation, we choose ξ(σ) to be the only factor with an explicit dependence on the auxiliary system. Notice that there are no constraints on Φ θ with respect to anti-symmetry, since this property is entirely covered by the ξ parity factor. Hence, ξ must be chosen in such a way that Eq. (3) is exactly fulfilled. However, we point out that many solutions are feasible due to the freedom of extracting additional sign structure from Φ θ , and absorbing it into ξ. Indeed, since G r is diagonal in the physical system (1), we obtain independent Gauss-law constraints for each physical configuration σ (1) . After defining the sign-generating function ξ, the sign structure of Φ(σ (1) ) is determined by the local spin Hamiltonian in Eq. (2). Furthermore, the main challenge is to find an exact solution that satisfies all constraints, including the parity constraints in Eqs. (7)-(8). Canonical sample reduction As a first solution to the Gauss law constraint, we define a consistent approach which uses a repeated application of G r to a quantum state to transform each auxiliary configuration σ (2) to a reduced auxiliary sample α (2) . One can verify that the following ansatz is an eigenstate of the constraint operators G r : Here, m i ∈ {0, 1} is determined by the iterative procedure followed to obtain the reduced sample. Therefore, we define an ordering for the lattice sites (r 1 , ..., r N ) and iteratively set m i = 1 if the auxiliary qubit at position r i is in the |1 state, and m i = 0 otherwise. When m i = 1, we apply operator C r i to the auxiliary plaquette attached to r i and continue with the next site in the sequence r i+1 . 
Algorithm 2 describes the sequential steps to obtain either directly ξ(σ), or (m 1 , ..., m N ) to evaluate Eq. (14). Also note that to each physical system configuration σ (1) , there corresponds only a single α (2) . The time complexity of this approach to satisfy the Gauss constraint is O (N ) for each sample σ. Doubly canonical sample reduction In the abovementioned solution, the symmetry properties of Φ(σ (1) ) are elusive. Alternatively, we can choose to optimize the sign structure of Φ by incorporating knowledge from the relevant symmetry group. Therefore, the abovementioned procedure can also be carried out on the reduced samples obtained by listing the samples obtained by applying all symmetry elements g(σ) in the symmetry group g ∈ G. We then take the sample σ red with the smallest lexicographic encoding of the physical system and apply the above-mentioned sequence of reductions on σ red to obtain α (2) . The representation of symmetry operations will be discussed in more detail in Section 4. Vacuum reduction A final method approaches the problem differently by defining the correct sign structure ξ only explicitly on the vacuum state |0 (1) σ (2) , and inferring the signs of other configurations by consistently relating them to the vacuum. The latter is defined in terms of spin states as where c σ (2) = ξ(0 (1) , σ (2) ) = ±1 is obtained via the approach in Section 3.2, and thus satisfies Eq. (3). In the vacuum, the constraint in Eq. (4) reduces to C r = 1. In order to obtain ξ(σ) from the definition of |Ω q , we take the set of occupation numbers (n 1 , ..., n N ) corresponding to σ (1) using n i = (1 − σ (1) i )/2), and rewrite these in terms of fermion operators in the usual way |n 1 , ..., can be transformed into a set of spin operator through the same bosonisation procedures that led to the spin Hamiltonian in Eq. (2). Our ansatz is now inspired by the above-mentioned mapping, and hence we assume where B represents the bosonisation procedure from Ref. [10], using the mappings in Eq. (31). Notice that to consistently bosonize non-local fermionic operators f † r N f ...f † r 1 in Eq. (17), this procedure requires an ordering of the sites, similarly to JWT. The above-mentioned solution also forms a solution to tackle the standard JWT Hamiltonian. The main conceptual difference that in the current formalism, the ansatz is varied to optimize a local Hamiltonian with manifest symmetries. However, as pointed out in Ref. [18], despite the fact that the constraints in Eq. (3) are local, they can introduce long-range correlations when they are satisfied for all plaquettes (which shows up in the sign structure). It is important to point out that the variational part of our ansatz Φ θ (σ (1) ) is a function of the σ (1) , which is equivalent to an occupation configuration. The resulting ansatz exactly fulfills the Gauss law in Eq. (3), and hence, the physical part of the wave function Φ(σ (1) ) is constraint free, meaning it does not require anti-symmetrization and therefore our method is determinant free. Hence, we may use a universal function approximator such as a Neural Network, to represent Φ θ (σ (1) ). In this work, we adopt a simple Restricted Boltzmann Machine (RBM) ansatz with complex weights and N hidden spins (thus with a hidden-spin density of α = 1) [20]. Quantum state symmetries It is expected that imposing symmetries will result in more reliable and accurate predictions of the ground state. 
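Before turning to symmetries, here is a minimal sketch of the factorized ansatz described above: a ±1 sign factor ξ(σ) multiplying a complex-weight RBM amplitude Φ_θ(σ^(1)) for the physical spins, with hidden-spin density α = 1. The sign factor is left as a placeholder, since its concrete form depends on which of the reduction schemes above (canonical, doubly canonical, or vacuum) is chosen.

```python
# Sketch of the factorized ansatz Psi(sigma) = xi(sigma) * Phi_theta(sigma_phys),
# with Phi_theta a complex-weight RBM of hidden-spin density alpha = 1.
import numpy as np

rng = np.random.default_rng(0)
n_phys = 16          # physical spins sigma^(1), each +/-1
n_hidden = n_phys    # alpha = 1

# Complex variational parameters theta = (a, b, W), small random initialization.
a = 0.01 * (rng.standard_normal(n_phys) + 1j * rng.standard_normal(n_phys))
b = 0.01 * (rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden))
W = 0.01 * (rng.standard_normal((n_hidden, n_phys))
            + 1j * rng.standard_normal((n_hidden, n_phys)))

def log_phi(sigma_phys):
    """log Phi_theta(sigma^(1)) for an RBM after tracing out the hidden spins."""
    theta_h = b + W @ sigma_phys
    return a @ sigma_phys + np.sum(np.log(2.0 * np.cosh(theta_h)))

def xi(sigma_phys, sigma_aux):
    """Placeholder for the exact +/-1 sign factor enforcing the Gauss law."""
    return 1.0   # to be replaced by the canonical or vacuum-reduction sign

sigma_phys = rng.choice([-1.0, 1.0], size=n_phys)
sigma_aux = rng.choice([-1.0, 1.0], size=n_phys)
log_psi = np.log(xi(sigma_phys, sigma_aux) + 0j) + log_phi(sigma_phys)
print("log Psi(sigma) =", log_psi)
```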
We therefore turn to the representation of symmetry transformations within the bosonisation framework. A representation T g of an element g of a symmetry group G, can be decomposed into two components: a 'bare' transformation T b , and an auxiliary-mode transformation V T , such that U T = V T T b . The V T are tensor products of local single-site operators: V T = r∈V V T,r . These effectively replace the (non-local) parity factor one would encounter in the Jordan-Wigner transformation. Once we are able to determine V T,r for all elements in the C 4v group, we can obtain a symmetric quantum state ansatz (lying withing a chosen irrep I of the group), using where χ g represents the character of the chosen irrep. For translations, we have More details on symmetry operations and the V T,r for translation, rotation and reflection symmetry are deferred to Appendix C. Notice that the factor ξ(g(σ)) inside Ψ is the alternative for a factor which determines the parity of the T g operator on σ. Hence, no sorting is required, and the parity is determined through local operators, since parity information is captured by the auxiliary modes. The above-mentioned methods can therefore be carried out in O (N ) time, contrary to other determinant-free methods Ref. [25]. The latter would also be necessary when carrying out 1D Jordan-Wigner procedures, in which one must compute the parity of the translation operator in terms of fermion operators. Results We carry out numerical studies of the bosonization procedures by relying on the abovementioned solutions to all constraints. Hereby, we first investigate their ability to represent the ground state and the effect of different choices of ξ(σ) on the complexity of learning Φ θ (σ (1) ). Through these studies, we mainly probe the sensitivity of numerical and variational approaches to the dynamical character of the Gauss law constraint in Eq. (3), which was interpreted as a dynamical Wen plaquette model. Consider first an unsymmetrized variational ansatz. In Fig. 2, we show how the different methods to satisfy the Gauss law perform for a 2 × L lattices. We compare the results to the ones obtained through a Jordan-Wigner transformation mapping the two dimensional lattice onto a one-dimensional one through "snaking" along the L direction. By increasing L, we increase the degree of non-locality in the Jordan-Wignertransformed Hamiltonian, where Jordan-Wigner strings appear explicitly in the kinetic terms. Indeed, hopping operators transform under JWT as f † r f r+y → Q − r r∈P i→j Z r Q + r+y , where P i→j represents all sites on the path along the snake direction connecting site r and r + y. In the extreme case of r = (0, 0), the length of this path is 2L − 2. Although the canonical and doubly canonical methods are valid solutions to the Gauss law constraints, the results demonstrate that the corresponding physical wave function factor Φ θ (σ (1) ) are challenging to optimize. The underlying reason is that the ξ factors impose a non-trivial sign structure on Φ θ . Although the doubly canonical method should simplify the sign structure to be learned for samples that are connected though the considered symmetry operations, the ground state does not necessarily correspond to trivial irrep characters. Furthermore, the variational wave function Φ θ must still capture the signs of samples that are not connected through symmetry operators, an effect that is reflected in the inaccurate ground-state representations with this method. 
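As an aside on the symmetrization introduced in the previous section, the sketch below projects a trial amplitude onto a chosen momentum sector by summing characters e^{-i k·t} over lattice translations. The variational factor and the behavior of the sign factor ξ under translations are placeholders here, since in the paper the latter is supplied by the local V_T operators.

```python
# Projecting a trial amplitude onto a translation (momentum) sector.
import numpy as np

Lx, Ly = 4, 4

def translate(sigma, tx, ty):
    """Translate a spin configuration stored as an (Lx, Ly) array of +/-1."""
    return np.roll(np.roll(sigma, tx, axis=0), ty, axis=1)

def xi(sigma):
    """Placeholder for the +/-1 parity/sign factor of the translated sample."""
    return 1.0

def log_phi(sigma):
    """Placeholder variational amplitude (e.g. the RBM sketched earlier)."""
    return 0.1 * np.sum(sigma * np.roll(sigma, 1, axis=0))

def psi_momentum(sigma, kx, ky):
    """Character-weighted sum over all translations, k in units of 2*pi/L."""
    total = 0.0j
    for tx in range(Lx):
        for ty in range(Ly):
            character = np.exp(-2j * np.pi * (kx * tx / Lx + ky * ty / Ly))
            shifted = translate(sigma, tx, ty)
            total += character * xi(shifted) * np.exp(log_phi(shifted))
    return total

sigma = np.random.default_rng(1).choice([-1.0, 1.0], size=(Lx, Ly))
for k in [(0, 0), (1, 0), (2, 2)]:
    print(f"k = {k}: |Psi_k(sigma)| = {abs(psi_momentum(sigma, *k)):.4f}")
```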
The vacuum reduction, however, only uses the non-local reduction method in Section 3.2 on the vacuum state (by solving the corresponding Wen plaquette model [22]). This results in energies that closely match both the Exact Diagonalization (ED) results and those obtained with the snaked Jordan-Wigner approach. Similar conclusions can be drawn from Fig. 2b, where we increased the short dimension of the lattice, as well as the interaction strength. A difference between the canonical and vacuum methods becomes increasingly visible for the 4×L lattice at large L, where the vacuum reduction method tends to result in lower energies. Notice that the canonical and vacuum methods result in similar energies for 2 × L lattices since the canonicalization procedure operates on a single chain of plaquettes. On the other hand, for 4 × L lattices, we follow a snaking procedure to canonicalize the plaquettes. Next, we consider a 4 × 4 system, and investigate the effect of embedding symmetries in the variational ansatz. We focus on the vacuum reduction method from Section 3.4, which generated superior results in the previous experiment. We compare the results to the ones obtainable through ED in Fig. 3. Note that the Hamiltonian is diagonalized on the manifold determined by the total projector P_G P_x P_y P_{N_f}, where the projectors P_x, P_y, P_{N_f} enforce the parity constraints in Eqs. (7), (8) and (9), respectively. The unsymmetrized ansatz generates accurate ground-state energies, except for N_f = 6. In the latter case, the ground state has a challenging sign structure, since the corresponding translation-symmetry sectors are (π/2, π/2) and (0, π). On the other hand, N_f = 4 has an isolated ground state, and the ground state for N_f = 8 contains the simpler (0, 0) sector. Therefore, the ground states of N_f = 4, 8 are more accessible during the VMC optimization. Hence, the variational ansatz can accurately represent the ground state and is only hindered by the optimization procedure. Symmetrizing the ansatz removes the local minimum encountered in the optimization, and therefore returns significantly better ground-state estimates. When the energy gap is small, the unsymmetrized ansatz does not accurately represent the ground state. The latter is not necessarily due to a limited representational power, but is directly related to the optimization procedure via Stochastic Reconfiguration (SR), which heavily depends on the size of the gap. Symmetrizing the ansatz and restricting to a given irrep opens the gap and therefore generates more accurate results. Conclusions We showed that bosonization procedures which allow one to transform local fermionic operators into local qubit operators result in Gauss law constraints that must be fulfilled in order to reduce the Hilbert space. Fulfilling all constraints at the same time is challenging, and we introduced multiple approaches that solve these constraints exactly, without the need to apply projection operators. We applied our method to the t-V model on a square lattice in 2D. We argued that constraints related to parity and boundary conditions can be fulfilled through the sampling procedure. Our experiments demonstrate that satisfying the Gauss constraints directly affects our ability to reliably represent the ground state of a given Hamiltonian. We found that imposing a sign structure on the vacuum state simplifies this challenge.
However, the gauge constraint introduces non-local effects, as would also be expected from a standard Jordan-Wigner transformation. By following the bosonization procedure from Ref. [10], we keep the symmetries of the fermionic system manifest, which allows us to restrict the variational ansatz to a chosen symmetry irrep using techniques that are commonly used to study quantum spin systems. We demonstrate that this allows us to study the low-energy spectrum using neuralnetwork quantum state ansatze. Since the Gauss constraint depends only on the lattice topology, our approach can in the future directly be applied to other Hamiltonians on square lattices. We foresee extensions of our work to other lattices and higher dimensions, as well as studies of the bosonic systems that are equivalent to fermionic systems. and auxiliary majorana fermions". Phys. Rev. B 106, 115109 (2022). [20] Giuseppe Carleo and Matthias Troyer. "Solving the quantum many-body problem with artificial neural networks". Figure 4: Conventions for the ordering of the auxiliary χ i r modes, and the directions of the edges that determine the sign of the Λ iµ r Λ jν r terms. Blue boxes correspond to a single location vector r. B.2 Parton decomposition In a last step, we again move to fermionic (parton) operators. For the physical modes, we have and for the auxiliary modes Notice that our parton Hilbert space is now 2 3 dimensional at each site. The on-site Hilbert space reads where n i=a,b,c ∈ {0, 1} and |Ω represents the vacuum. B.3 On-site Jordan-Wigner transformation The bosonic operators defined in Eq. where we identified |0 = |↑ and |1 = |↓ and As we will show in the next section, using the abovementioned mapping, one can rewrite the Hamiltonian in Eq. (27) in terms of Pauli operators using the definitions in Eq. (22). However, we first demonstrate how to eliminate the effect of one of the auxiliary qubits in Eq. (31). B.4 Constraints The Hilbert space is enlarged due to the b † and c † auxiliary fermion modes. The on-site parity operator Γ r for any site commutes with the Hamiltonian in Eq. (2). To reduce the Hilbert space, we restrict the on-site parton parity operator (which commutes with the Hamiltonian). Hence, the third spin is redundant and we can remove it by absorbing its effect in the other spins Using Eq. (35) to eliminate the third qubit, we obtain the expression for the Hamiltonian in Eq. (2). (36) While the above reduces to the identity operator in the fermionic formalism, it introduces a (gauge) constraint on the bosonic side. We can rewrite the above in terms of bosonic operators using Eq. (24)-(25), and ultimately in terms of the gauge operators Υ using Eq. (22). After some algebra, we obtain the constraint Υ 24 r Υ 32 r+x Υ 13 r+x+y Υ 41 r+y c = −1 (37) Hence, the auxiliary system is subject to a Gauss-law constraint of the form Imposing the boundary conditions in the fermionic system, we obtain the additional constraints introduced by non-contractable Wilson loops W x,y : W x,y = (−1) Lx,y . To see this, we carry out the same procedure as in Eq. (37) on these loops (for even-by-even tori). After bosonisation, we obtain the following spin operator identities that need to be satisfied
7,218.4
2022-05-02T00:00:00.000
[ "Physics" ]
Jet fragmentation functions in proton-proton collisions using soft-collinear effective theory The jet fragmentation function describes the longitudinal momentum distribution of hadrons inside a reconstructed jet. We study the jet fragmentation function in proton-proton collisions in the framework of soft-collinear effective theory (SCET). We find that, up to power corrections, the jet fragmentation function can be expressed as the ratio of the fragmenting jet function and the unmeasured jet function. Using renormalization group techniques, we are able to resum large logarithms of jet radii R in the perturbative expansion of the cross section. We use our theoretical formalism to describe the jet fragmentation functions for light hadron and heavy meson production measured at the Large Hadron Collider (LHC). Our calculations agree very well with the experimental data for the light hadron production. On the other hand, although our calculations for the heavy meson production inside jets are consistent with the PYTHIA simulation, they fail to describe the LHC data. We find that the jet fragmentation function for heavy meson production is very sensitive to the gluon-to-heavy-meson fragmentation function. Introduction Collimated jets of hadrons are a dominant feature of high energy particle interactions, especially at the current highest energy hadron collider, the Large Hadron Collider (LHC), where jets are abundantly produced. The internal structure of these jets has become an important tool to test the fundamental properties of Quantum Chromodynamics (QCD), and to search for new physics beyond the Standard Model [1,2]. Needless to say, a good understanding of jet substructure allows deeper insights into QCD dynamics and serves as a prerequisite for further progress. One of the jet substructure observables proposed and explored in more detail recently is the jet fragmentation function, which describes the longitudinal momentum distribution of hadrons inside a reconstructed jet [3][4][5][6][7][8][9][10][11][12][13]. Experimental studies on hadron distribution inside jets have been pioneered at the Tevatron [14] in the 1990s. More recently, both the ATLAS and the CMS collaborations have measured the distributions of light hadron [15][16][17][18] and heavy meson [19] production inside jets at the LHC. The jet fragmentation function is an interesting and important observable: since it probes the hadron fragmentation at a more differential level, it can reveal detailed information about the jet dynamics involved in producing the identified hadron. At the same time, it can provide further information about the non-perturbative hadronization encoded in the standard fragmentation functions. One might even gain insight into the nontrivial spin correlation through the study of azimuthal distribution of the hadron inside jets [20][21][22][23][24]. JHEP05(2016)125 Since gluon jets are much more abundant in proton-proton collisions at high energy hadron colliders, jet fragmentation functions should be more sensitive to gluon fragmentation. We will show that this is the case especially for heavy meson production inside jets. This situation is very different from the e + e − → h X and e p → e h X processes, where the gluon fragmentation function does not enter at leading-order in the perturbative calculation and, thus, can only be probed through QCD evolution or higher-order radiative corrections. 
There is also strong motivation to study the jet fragmentation function in heavy ion collisions at high energies, where hot and dense QCD medium -the quark-gluon plasma -is produced. By comparing the jet fragmentation function measured in ultra-relativistic nucleus collisions and the one in proton-proton collisions, one can understand how the presence of the strongly interacting medium produced in heavy ion collisions modifies the hadron distributions inside jets. Understanding the light and heavy flavor dynamics in the medium will help further determine the precise properties of the QGP. For recent experimental measurements of the jet fragmentation function in heavy ion collisions at the LHC, see [16][17][18]. For some theoretical work along this direction, see [25][26][27]. In this paper, we study the jet fragmentation function in proton-proton collisions using soft-collinear effective theory (SCET) [28][29][30][31][32]. Previously, in [10,13] a full next-to-leading order (NLO) calculation was performed. Closely related work with emphasis on heavy flavor was also recently presented in [33,34]. As we will show below, within SCET the hadron distribution inside jets is governed by the ratio of two quantities: the fragmenting jet function (FJF) G h i (ω, R, z, µ) introduced and studied in [3][4][5][6][7][8], and the unmeasured jet function J i (ω, R, µ) introduced in [35]. Here, i is the parton that initiates the jet with energy ω and radius R, while z is the fraction of the jet momentum carried by the identified hadron h. The FJF G h i (ω, R, z, µ) can be further written as a convolution of perturbatively calculable Wilson coefficients J ij and the fragmentation functions D h j (z, µ). Using the renormalization group techniques, we are able to simultaneously resum logarithms of the form ln R and ln(1−z), which have a significant numerical impact. Such resummations were not addressed previously in the fixed NLO calculation of [13]. We use the formalism to describe the experimental data at the LHC for the distribution of light hadron and heavy meson production inside jets. The study of the jet fragmentation function in heavy ion collisions using SCET will be performed in a forthcoming paper [36]. Some of the input for this calculation, such as the final-state in-medium splitting functions [37] and medium-modified fragmentation functions applied to leading hadron production [38,39], are already available. Here, we would like to remind the readers that, although the jet fragmentation function and the fragmenting jet function look very similar, they have different meanings. It is important to understand their differences and relations since they appear throughout the entire paper. The jet fragmentation function is an experimental observable describing the distribution of hadrons inside jets. On the other hand, the fragmenting jet function is a theoretical quantity which enters the factorized expression in the calculation of the jet fragmentation function. See section 2 for more details. The rest of the paper is organized as follows. In section 2, we first provide the definition of the jet fragmentation function. We then derive a factorized expression for the JHEP05(2016)125 jet fragmentation function, which involves the FJF and the unmeasured jet function. We give the matching coefficients for the FJF to be convolved with the standard fragmentation functions, and in particular for jets reconstructed using the anti-k T jet algorithm, which is used in almost all jet reconstruction at the LHC. 
We collect the detailed derivations of the matching coefficients in the appendix A. In section 3, we present the numerical results of our calculations for light hadron and heavy meson production inside jets and compare with the experimental data at the LHC. We also explore the theoretical uncertainty, the sensitivity of the observable to the jet algorithm (either cone or anti-k T ), and the radius dependence. We summarize our paper in section 4. Jet fragmentation function In this section we give the definition of the jet fragmentation function and calculate it using the factorized expression in SCET. The evaluation involves the fragmenting jet function G h i (ω, z, R, µ), and we provide the Wilson coefficients J ij to be convolved with the fragmentation function D h j (z, µ). We give the results for jets reconstructed using cone and anti-k T algorithms, as J ij depends on the jet algorithm. The results for cone jets are available in [7], while those for anti-k T jets were first written down in the appendix of [40]. We provide the detailed derivations of J ij for anti-k T jets in the appendix, and the results are consistent with [40]. Observable and factorized expression The jet fragmentation function F (z, p T ) describes the longitudinal momentum distribution of hadrons inside a reconstructed jet. We will compare our calculations with the jet fragmentation functions measured in proton-proton collisions, p + p → (jet with h) + X. Here, F (z, p T ) is defined as follows, where dσ h /dydp T dz and dσ/dydp T are the differential cross sections of jets with and without the reconstruction of the hadron h in the jet. Here, y and p T are the jet rapidity and transverse momentum. z is the fraction of the jet transverse momentum carried by the hadron, z ≡ p h T /p T , with p h T the transverse momentum of the hadron. Jets are reconstructed using either the cone or the anti-k T algorithm with the jet radius R, and the R-dependence is suppressed in the expression for F (z, p T ). As we will see, jet fragmentation functions will be different for jets reconstructed using different jet algorithms. Because the contribution from the soft radiation to the longitudinal momentum is power suppressed [41], it suffices to illustrate the SCET factorized expression for the jet fragmentation function in e + e − collisions (figure 1). Following [3,7,12,35,41,42], the differential cross section for N -jet production with the jet p T i and y i , the hadron h inside one jet (labeled by 1), and the energy cutoff Λ outside all the jets can be written as follows, hadron : z Λ Figure 1. Illustration of the N -jet production in e + e − collisions, where a hadron is measured in the jet labeled by J 1 with rapidity y and transverse momentum p T . z is the fraction of the jet momentum carried by the hadron. Jets are reconstructed using a jet algorithm with radius R. We impose an energy cutoff Λ outside the jets to ensure the N -jet configuration. Λ is a low energy scale constraining the soft radiation (red lines). The green lines represent the collinear splittings. where H(y i , p T i , µ) is the hard function describing the short-distance production of the N jets with rapidities y i and momenta p T i . S n 1 n 2 ···n N (Λ, µ) is the soft function with N soft Wilson lines along the jet directions. The energy cutoff Λ outside the jets is imposed to ensure the N -jet configuration. The hadron h measured inside jet 1 is described by the FJF G h ω 1 (z, µ), with the jet radius R suppressed. 
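Operationally, the defining ratio above reduces to per-jet bookkeeping: for every reconstructed jet passing the (y, p_T) cuts one fills z = p_T^h/p_T for each hadron h in the jet, and divides by the number of accepted jets, giving the hadron yield per jet per unit z. The following Python sketch illustrates this on a hypothetical list of jets; it is a measurement-side illustration only, not part of the SCET calculation.

```python
# Building F(z, pT) from a (hypothetical) list of reconstructed jets.
import numpy as np

# Each jet: (jet_pT, jet_y, [hadron_pT, ...]) -- toy numbers for illustration.
jets = [
    (63.0, 0.3, [28.1, 14.0, 6.3, 3.2, 1.1]),
    (71.5, -0.8, [40.2, 12.5, 5.0, 2.4]),
    (66.2, 1.0, [22.7, 18.3, 9.1, 4.4, 2.0, 0.9]),
]

pT_min, pT_max, y_max = 60.0, 80.0, 1.2   # acceptance cuts on the jet
z_edges = np.logspace(-2, 0, 21)          # logarithmic z binning

counts = np.zeros(len(z_edges) - 1)
n_jets = 0
for jet_pT, jet_y, hadron_pTs in jets:
    if not (pT_min < jet_pT < pT_max and abs(jet_y) < y_max):
        continue
    n_jets += 1
    z = np.asarray(hadron_pTs) / jet_pT   # z = pT^h / pT of the jet
    counts += np.histogram(z, bins=z_edges)[0]

# F(z, pT): hadron yield per jet, differential in z.
dz = np.diff(z_edges)
F = counts / (n_jets * dz)
for lo, hi, val in zip(z_edges[:-1], z_edges[1:], F):
    print(f"z in [{lo:.3f}, {hi:.3f}): F = {val:.2f}")
```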
J ω i (µ) (for i = 2, · · · , N ) are the unmeasured jet functions introduced in [35], 1 with ω i representing the large light-cone component of the jet momentum and ω i = 2p T i in the frame where the jet is in the transverse direction. The factorized expression is valid for collimated jets up to power corrections of the type Λ/Q or R. On the other hand, the differential cross section for N -jet production is given by with the same hard function H(y i , p T i , µ), soft function S n 1 n 2 ···n N (Λ, µ), and unmeasured jet functions J ω i (µ) with i = 2, · · · , N . The only difference is that G h ω 1 (z, µ) in eq. (2.2) is replaced by the unmeasured jet function J ω 1 (µ) in eq. (2.3) since we do not measure the hadron. The distribution of the hadron h inside jet 1 then becomes, All the hard, soft and unmeasured jet functions (except for jet 1) cancel in the ratio. Taking the average over the jet production cross section, with proper phase space (PS) cuts on both jet rapidity y and transverse momentum p T , e.g. the rapidity interval and the width JHEP05(2016)125 of the p T bin, the jet fragmentation function F (z, p T ) becomes F (z, p T ) = 1 σ total i=q,g PS dy dp T dσ i dy dp T G h i (ω, R, z, µ) J i (ω, R, µ) , (2.5) where dσ i /dy dp T is the cross section to produce the jet initiated by parton i, and we have written out explicitly the arguments for both the FJF G h i (ω, R, z, µ) and the unmeasured jet function J i (ω, R, µ). In the next subsection we will provide explicit expressions for the fragmenting jet function G h i (ω, R, z, µ) and the unmeasured jet function J i (ω, R, µ). Here it is instructive to point out that G h i (ω, R, z, µ) and J i (ω, R, µ) have the same renormalization group (RG) evolution [5,7,35] and the ratio G h i (ω, R, z, µ)/J i (ω, R, µ) is renormalization group invariant, with possibly different characteristic scales for G h i and J i . Unmeasured jet function For convenience, we provide all the relevant results for the unmeasured jet function J i (ω, R, µ). At O(α s ) [35], where L = ln ω tan (R/2) µ , (2.8) and d q/g,alg J represents the algorithm-dependent pieces, The unmeasured jet function J i (ω, R, µ) satisfies the RG equation with the anomalous dimension given as follows: (2.14) JHEP05(2016)125 Here, Γ i cusp and γ i are the cusp and non-cusp anomalous dimensions, with the perturbative with T F = 1 2 , n f the number of active quark flavors, and The solution of the RG equation for the unmeasured jet function is where µ J is the characteristic scale of J i (ω, R, µ), which eliminates the large logarithms in the fixed-order calculation. From eqs. (2.6) and (2.7), the choice of µ J ∼ ω tan (R/2) ≡ p T R eliminates the logarithm L. We denote this scale as "p T R " for later convenience. Fragmenting jet function The fragmenting jet functions G h i (ω, R, z, µ) [5,7,11] are closely related to the fragmentation functions D h j through matching coefficients J ij where D h j (z, µ) is the fragmentation function of a parton j fragmenting into a hadron h. Eq. (2.19) for a light hadron h is valid up to power corrections of order Λ 2 QCD /ω 2 tan 2 (R/2). Thus, to avoid large non-perturbative power corrections, R should not be too small. On the other hand, for heavy meson fragmenting jet junction Λ QCD should be replaced by the heavy quark mass m Q in the above equation [12]. The Wilson coefficients J ij depend on the jet algorithm. The results for cone jets were given in [7], while those for anti-k T jets were first written down in the appendix of [40]. 
We provide the detailed derivations of J ij for anti-k T jets in the appendix, and the results are consistent with [40]. Here we only list the final results: JHEP05(2016)125 where the functionsP ji have the following expressions [6] P qq (z) = 1 + z 2 J alg ij (z) represent pieces that depend on the jet algorithm. For cone jets [7], The fragmenting jet function G h i (ω, R, z, µ) satisfies the following RG equation where the anomalous dimension γ i G (µ) = γ i J (µ) is the same as that of the unmeasured jet function J i (ω, R, µ) [5,7,35] in eq. (2.14). The solution to the RG equation is where the scale µ G should be the characteristic scale that eliminates the large logarithms in the fixed-order perturbative calculations. In the large z region, the scale choice JHEP05(2016)125 [7] both ln R and ln (1 − z). However, for consistency, this would require extracted fragmentation functions D h j with a built-in resummation of logarithms in (1 − z), which is currently not available. It might be instructive to point out that with such a scale, the power corrections in eq. (2.19) will be of the order of Λ 2 QCD / ω 2 tan 2 (R/2)(1 − z) 2 , similar to the usual threshold resummation, see, e.g. ref. [44]. For the numerical calculations presented in the next section, we will choose µ G = ω tan (R/2) to resum ln R and comment on the effect of ln (1 − z) resummation. Let us make a few comments about our resummation formalism. As we have pointed out already at the end of section 2.1, since G h i (ω, R, z, µ) and J i (ω, R, µ) follow the same RG evolution equations, as given in eqs. (2.37) and (2.18), respectively, the ratio as given in the factorized formalism eq. (2.5) is thus RG invariant. In other words, this ratio does not depend on the scale µ. Choosing µ G = µ J = ω tan(R/2), the whole RG exponential forms cancel in the ratio. However, this does not mean that resummation effects disappear in our framework. On the contrary, the resummation effect is shifted entirely into the scale µ G -dependence of the standard fragmentation function D h i (z, µ G ) through eq. (2.19). In other words, we are resumming ln(R) logarithms in this case through the DGLAP evolution equations of the fragmentation functions. This type of resummation was not achieved previously in the fixed NLO calculation of [13]. It will be very interesting to explore the exact relation between our work and the previous NLO calculation [13], which we are going to address in a future publication. Phenomenology In this section, we present the numerical results of our theoretical formalism and we compare our calculations with the experimental data for both light hadron and heavy meson production at the LHC. We will also explore the theoretical uncertainties of our formalism. Light hadron jet fragmentation function We first study the distribution of light hadrons inside jets in proton-proton collisions. Both ATLAS and CMS collaborations at the LHC have measured the distribution of light, charged hadrons h = h + + h − inside jets. We perform the numerical calculations using the CT14 NLO parton distribution functions [48] and the DSS07 NLO fragmentation functions [49,50]. We keep the Γ i 0,1 and γ i 0 terms in the series expansion of the anomalous dimension γ i J,G with i = q, g. Therefore the calculation is at next-to-leading logarithmic accuracy. In figure 2, we compare our calculations with the experimental data from ATLAS [15] in proton-proton collisions at the center-of-mass (CM) energy of √ s = 7 TeV. 
Jets are reconstructed using the anti-k T algorithm with R = 0.6 within the rapidity range |y| < 1.2. The transverse momenta p T of jets are measured across a wide range, from 25 GeV to 500 GeV. The numbers in square brackets correspond to different jet transverse momentum bins, e.g. [25,40] for all three scales µ, µ G , µ J by a factor of 2 around the above central values. See detailed discussions in section 3.2 below. Note that the DSS07 fragmentation function parameterizations for D h i (z, µ) are only valid for 0.05 < z < 1 and 1 < µ 2 < 10 5 GeV 2 . Thus, all the calculations outside these regions are based on the extrapolations of the DSS07 parameterizations provided by the distributed package from the authors [49,50]. As we have expected, the theoretical uncertainties from the scale variations are relatively small, due to the fact that G h i and J i follow the same RG running as discussed in section 2.1. At the same time, as one can see, there is good agreement between our theoretical calculations and the ATLAS data. Our calculations slightly overshoot the experimental data at large z for jets with low p T . Since there are large uncertainties for fragmentation functions in the large z region [51,52], jet fragmentation function measurements in proton-proton collisions can help constrain them in this region. [53], while the magenta triangles are the CMS data [17]. The blue solid curves are the "nominal" theoretical calculations, with the green bands representing the theoretical uncertainties estimated from scale variations. In figure 3, we compare our calculations with the preliminary ATLAS data [53], as well as the CMS measurements [17] in proton-proton collisions at the CM energy √ s = 2.76 TeV. Here, jets are reconstructed using the anti-k T algorithm with R = 0.4 within the rapidity range |y| < 1.6 for ATLAS, whereas for CMS R = 0.3 and 0.3 < |y| < 2. The solid red circles are the ATLAS data, while the magenta solid triangles are the CMS data. As one can see, our calculations agree with the data rather well. Note that the CMS data has a very different trend for low z 0.05 compared to the ATLAS data. Our theoretical predictions in figures 2 and 3 also agree with the results in [13] that use the full NLO calculation. Algorithm and radius dependence, and theoretical uncertainty Here, we study the dependence of the jet fragmentation function on the jet algorithm and the jet radius. We will also estimate the theoretical uncertainty by varying the characteristic scales in our formalism. We choose the scales µ = p T and µ G = µ J = p T R . The solid red curve is for anti-k T jets, while the dashed blue curve is for cone jets. Lower panel: the ratio of the jet fragmentation functions F (z, p T ) cone /F (z, p T ) kT for cone and anti-k T jets. We will first explore the jet algorithm dependence. In the upper panel of figure 4, we plot the jet fragmentation function F (z, p T ) for light charge hadrons as a function of z inside jets with 60 < p T < 80 GeV, |y| < 1.2, R = 0.6 at √ s = 7 TeV as an example. We choose the scales µ = p T and µ G = µ J = p T R . The solid red curve is for anti-k T jets, while the dashed blue curve is for cone jets. As we can see from this plot, F (z, p T ) for cone jets is smaller (larger) than that for anti-k T jets at large (small) z. This is a consequence of two combined effects: in the low z region, the FJF G h i for cone jets is larger than that for an anti-k T jet. 
As z gets closer to 1, the FJF G h i for cone and anti-k T jets approach the same value because there is little radiation left in the jet to distinguish between jet algorithms. Also, the unmeasured jet function J i for cone jets is larger than that for anti-k T jets. To see the difference more clearly, we plot the ratio F (z, p T ) cone /F (z, p T ) k T between the jet fragmentation functions for cone and anti-k T jets in the lower panel of figure 4. We now study the jet radius R dependence. We choose the scales µ = p T and µ G = µ J = p T R . In figure 5, we plot as an example the jet fragmentation functions F (z, p T ) as a function of z for four different jet radii R = 0.2 (solid red), R = 0.4 (dashed blue), R = 0.6 (dotted black) R = 0.8 (dash-dotted magenta) for jets with 60 < p T < 80 GeV, |y| < 1.2 at √ s = 7 TeV. We find that in the large z 0.1 region F (z, p T ) gets smaller as R increases. Whereas in the small z 0.1 region, F (z, p T ) becomes larger as R increases because of the normalization of F (z, p T ). This is related to the scale dependence of D h i (z, µ G ), which is governed by the DGLAP evolution equations: D h i (z, µ G ) increases (decreases) as µ G increases for small (large) z [54]. Since µ G = p T R = 2p T tan(R/2), increasing R will increase µ G . Finally, we estimate the uncertainty of our theoretical calculations by varying the scales µ, µ J , and µ G around We independently vary the scales by a factor of 2 around their central values, i.e., our calculations is generally small for the moderate z region, and it is compatible with the results based on the full NLO calculation in [13], where only the variation of the scale µ is implemented. This gives us confidence that the RG evolutions for both the FJF G h i and the unmeasured jet function J i indeed improve the convergence of the theoretical calculation. When z gets closer to 1, one can see that the scale uncertainty band becomes larger. As we have shown in the last section, there is an explicit dependence in the FJF G h i on ln(1 − z). These logarithms become large as z approaches 1, i.e. in the hadronic threshold limit. We may [7] simultaneously resum logarithms of the jet radius R and (1 − z) by choosing the scale µ G ∼ 2p T tan (R/2) (1 − z) ≡ p T RZ . We plot this by independently varying the scales as follows, Such scale variations correspond to the green band in the upper panel of figure 6, while the red dashed central curve represents the calculation with µ = p T , µ J = p T R , µ G = p T RZ . JHEP05(2016)125 As one can clearly see, the uncertainty of the calculation with ln(1 − z) resummation is largely reduced in the large z region. In order to see the effect of ln(1 − z) resummation more clearly, in the lower panel of figure 6, we plot the ratio R (p T RZ , p T R ) = F (z, p T )| µ G =p T RZ / F (z, p T )| µ G =p T R as a function of z and we set µ = p T and µ J = p T R . As one can see, resumming ln (1 − z) leads to an enhancement of the jet fragmentation function F (z, p T ) in the large z region. For z 0.8, the enhancement is about a factor of 2. Even though the theoretical uncertainty is reduced with the scale choice µ G = p T RZ , we do not use this scale when comparing to data in figures 2, 3 above and figure 7 below. This is due to the fact that the fragmentation functions that we use in our numerical studies are extracted using fixed-order calculations [49][50][51][52]. In order to be consistent, we have to adopt the conventional scale choice µ G = p T R . 
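For reference, the two characteristic scales discussed above, p_T^R = 2 p_T tan(R/2) and p_T^{RZ} = 2 p_T tan(R/2)(1 − z), together with the factor-of-two variations used to build the uncertainty bands, can be summarized in a few lines. The envelope construction sketched here (scanning all combinations of the independently varied scales) is one common prescription and is an assumption; the prediction function itself is not reproduced.

```python
import numpy as np
from itertools import product

def mu_ptR(pt, R):
    """Characteristic jet scale p_T^R = omega tan(R/2) = 2 pT tan(R/2)."""
    return 2.0 * pt * np.tan(R / 2.0)

def mu_ptRZ(pt, R, z):
    """Threshold-improved scale p_T^RZ = 2 pT tan(R/2) (1 - z)."""
    return mu_ptR(pt, R) * (1.0 - z)

def scale_sets(pt, R, factor=2.0):
    """All factor-of-2 variations of (mu, mu_J, mu_G) about their central values;
    the envelope of the predictions over these sets gives the uncertainty band."""
    central = {"mu": pt, "mu_J": mu_ptR(pt, R), "mu_G": mu_ptR(pt, R)}
    grids = {k: (v / factor, v, v * factor) for k, v in central.items()}
    return [dict(zip(grids, combo)) for combo in product(*grids.values())]

pt, R = 70.0, 0.6
print(mu_ptR(pt, R), mu_ptRZ(pt, R, z=0.9))
print(len(scale_sets(pt, R)))   # 27 scale combinations
```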
However, we want to make an important point. If one performs a fit for fragmentation functions using the F (z, p T ) data, the extracted functions would differ significantly in the large z region when the more accurate calculation with ln (1 − z) resummation is used. Our conclusions here are similar to the observations made in [55] in the context of threshold resummation. Heavy meson jet fragmentation function Our theoretical result in eq. (2.5) was derived for light hadron production inside jets. However, it can also be applied to describe heavy meson production inside jets using the Zero Mass Variable-Flavor Number Scheme (ZMVFNS) [56,57]. Such a scheme applies when the perturbative scales Q are much larger than the heavy quark mass m Q : Q 2 m 2 Q . In this kinematic regime, the heavy quarks are expected to behave like massless partons. One can, thus, treat heavy quarks as the other light partons, and logarithms associated with m Q are resummed using the DGLAP evolution. Power corrections of O(m 2 Q /Q 2 ) are neglected in this formalism. In our case, the ZMVFNS applies in the kinematic regime where µ, µ J , µ G m Q . The ATLAS collaboration has recently measured the distribution of D * ± mesons in jets with p T > 25 GeV and R = 0.6 [19]. Given the fact that the charm mass is relatively small m c ∼ 1.3 GeV [58], the jet transverse momentum is large and the radius is moderate, this satisfies the requirement for using the ZMVFNS. Within the ZMVFNS, the only change in our theoretical formalism is to also include the charm production in eq. (2.5): i=q,g,c with q and c representing light and charm flavor, respectively. Like in light hadron calculations, we make the scale choices µ = p T , µ G = µ J = p T R for the "nominal" calculations. We follow section 3.2 to calculate the theoretical uncertainties from the scale variations. We use the charm-meson fragmentation functions extracted from the inclusive production of a single charm-meson D in e + e − collisions: e + e − → D X. The parameterizations for D h i (z, µ) with i = q, g, c and h = D are available in [59], which yield a good description of the inclusive D-meson production in proton-proton collisions at the LHC [60]. Thus, we will use this parametrization in our calculations. In figure 7, we compare our calculations for the D * ± jet fragmentation function with the ATLAS experimental data at CM energy √ s = 7 TeV [19]. Jets are reconstructed using the anti-k T algorithm with R = 0.6, and the jet rapidity is within |y| < 2.5. We . The calculation of jet fragmentation functions for D * ± meson production compared to the experimental data from the ATLAS collaboration at √ s = 7 TeV [19]. Jets are reconstructed using the anti-k T algorithm with R = 0.6, and the jet rapidity is within |y| < 2.5. We show 6 different panels which correspond to different jet p T ranges. The solid blue circles are the experimental data measured by ATLAS [19], while the empty red circles are the PYTHIA simulations provided in the ATLAS paper [19]. The solid red curves are our default theoretical calculations using the ZMVFNS. The green bands are the estimated theoretical uncertainties from the scale variations. The dashed blue curves are our calculations using an enhanced gluon-to-D meson fragmentation function: show 6 different panels which correspond to different jet p T ranges covering 25 < p T < 70 GeV. 
The solid blue circles are the experimental data measured by ATLAS [19] and the empty red circles are the PYTHIA simulations provided in the ATLAS paper [19]. The solid red curves are our default theoretical calculations, which use the central values of the D-meson fragmentation functions D h i (z, µ) from [59]. The green bands are the theoretical uncertainties estimated from the scale variations. As one can clearly see, our theoretical calculations are consistent with the PYTHIA simulations for all different jet JHEP05(2016)125 p T bins. However, they are significantly below the experimental measurements from the ATLAS collaboration. As we have mentioned, the D-meson fragmentation functions are extracted in e + e − collisions, where the gluon fragmentation function D D g (z, µ) does not enter at leadingorder in the theoretical formalism. Thus, gluon fragmentation is only indirectly probed through QCD evolution and/or higher-order corrections. This leads to a large uncertainty of the extracted gluon-to-D meson fragmentation function. Note that ref. [59] does not provide the uncertainty of the extracted charmed-meson fragmentation functions. However, comparing different extractions from the same sets of e + e − data [59,61,62], we find that the gluon-to-D meson fragmentation function D D g (z, µ) can differ by a factor of 3, while quark-to-D meson fragmentation functions D D q,c (z, µ) do not vary so dramatically [59]. Other than that, the various extractions [59,61,62] differ only by the initial scales for the QCD evolution or by the treatment of the heavy quark mass. This provides a strong hint that the current extraction of the gluon-to-D meson fragmentation function could have a very large uncertainty. To explore the uncertainty of the gluon-to-D meson fragmentation function, we reperform our calculations of the jet fragmentation functions for D * ± meson with the gluonto-D meson fragmentation function enhanced by a factor of 2, i.e. D D g (z, µ) → 2 D D g (z, µ). These calculations are shown by the dashed blue curves in figure 7. They lead to much better agreement with the ATLAS data. We have also tried enhancing other quark-to-D meson fragmentation functions D D q,c (z, µ) by a similar factor, but none of them could lead to such an efficient enhancement in the jet fragmentation function. We conclude that jet fragmentation functions of heavy mesons in proton-proton collisions have great potential to constrain the gluon-to-heavy meson fragmentation functions. Summary In this paper we studied jet fragmentation functions for light hadrons and heavy mesons inside reconstructed jets. We wrote down a factorized expression in SCET for the jet fragmentation function in proton-proton collisions. We found that, up to power corrections, the jet fragmentation function can be expressed as the ratio of the fragmenting jet function and the unmeasured jet function. These two functions satisfy the same renormalization group equation, and the fragmenting jet function can be further expressed as a convolution between the fragmentation functions and the matching coefficients. Using SCET, we were able to simultaneously resum large logarithms of the jet radius R and (1 − z), which has a significant impact on the phenomenology considered in this work. We used the theoretical formalism to describe the jet fragmentation functions for light hadron and heavy meson production measured at the LHC. We found that our calculations agree very well with the experimental data for light hadron production. 
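In code form, the heavy-meson exercise above amounts to two small modifications of the light-hadron setup: the flavour sum of eq. (2.5) is extended to charm within the ZMVFNS, subject to all characteristic scales lying well above m_c, and the gluon-to-D fragmentation function is rescaled by a factor of two with everything else held fixed. The sketch below uses toy fragmentation functions in place of the parameterizations of [59], and the applicability margin is an illustrative assumption.

```python
import numpy as np

M_CHARM = 1.3                       # GeV, charm mass quoted in the text
FLAVOURS_ZMVFNS = ("q", "g", "c")   # flavour sum of eq. (2.5) extended to charm

def zmvfns_ok(pt, R, margin=3.0):
    """Heuristic check that mu, mu_J, mu_G sit well above m_c (margin is an assumption)."""
    scales = (pt, 2.0 * pt * np.tan(R / 2.0))
    return all(s > margin * M_CHARM for s in scales)

def D_to_D_meson(j, z, gluon_factor=1.0):
    """Toy D-meson fragmentation functions; gluon_factor = 2 reproduces the test above."""
    base = {"q": 0.05 * (1.0 - z) ** 4 / z,
            "g": 0.02 * (1.0 - z) ** 6 / z,
            "c": 1.5 * z ** 3 * (1.0 - z)}   # charm fragmentation dominates at large z
    out = base[j]
    return gluon_factor * out if j == "g" else out

print(zmvfns_ok(25.0, 0.6))                                    # ZMVFNS regime for pT > 25 GeV
print(D_to_D_meson("g", 0.2), D_to_D_meson("g", 0.2, gluon_factor=2.0))
```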
We explored the jet algorithm and the R dependence of the jet fragmentation functions, and we estimated the theoretical uncertainty by scale variation. For heavy meson production inside jets, although our calculations are consistent with PYTHIA simulations, they fail to describe the corresponding LHC data. We found that enhancing the gluon-to-heavy meson fragmentation function JHEP05(2016)125 leads to much better agreement with the experimental data. We emphasize that the jet fragmentation function for heavy meson production in proton-proton collisions is very sensitive to the gluon-to-heavy meson fragmentation function. In the future, we plan to extend our calculations to describe jet fragmentation functions in heavy ion collisions in order to understand nuclear modifications of hadron production inside jets. JHEP05(2016)125 where δ alg = δ cone or δ anti-k T are the constraints given in eqs. (A.3) and (A.4). The FJF G h i (ω, R, z, µ) can be matched onto the fragmentation function D h i (z, µ): and J ij are the matching coefficients. The FJF G j i (m 2 J , z, µ) with i, j = q, g has been extensively studied in [5,11]. Using pure dimensional regularization with 4 − 2 dimensions in the MS scheme, the bare results at O(α s ) can be written in the following compact form [11,63]: where the functions P ji (z, ) are Substituting eq. (A.7) into eq. (A.5) and performing the integration over m 2 J with the constraints imposed by the jet algorithm δ alg , one obtains the bare FJF G j i,bare (ω, R, z). We present the results for anti-k T jets here, as the explicit expressions are not available in the literature: +P qq (z) (L + ln z) (A.14) JHEP05(2016)125 +P gg (z) (L + ln z) where, as a reminder, β 0 and L are given by 16) andP ji have the expressions [6] given in eq. where i is not summed over on the right hand side. The corresponding renormalization group (RG) equation is given by where the anomalous dimension γ i G (µ) is The solution to eq. (A.18) is then where the scale µ G should be the characteristic scale chosen such that large logarithms in the fixed-order calculation are eliminated. The counter terms Z i G (µ) are given by Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Bosonic excitation spectra of superconducting Bi2Sr2CaCu2O8+δ and YBa2Cu3O6+x extracted from scanning tunneling spectra A detailed interpretation of scanning tunneling spectra obtained on unconventional superconductors enables one to gain information on the pairing boson. Decisive for this approach are inelastic tunneling events. Due to the lack of momentum conservation in tunneling from or to the sharp tip, those are enhanced in the geometry of a scanning tunneling microscope compared to planar tunnel junctions. This work extends the method of obtaining the bosonic excitation spectrum by deconvolution from tunneling spectra to nodal d-wave superconductors. In particular, scanning tunneling spectra of slightly underdoped Bi2Sr2CaCu2O8+δ with a Tc of 82 K and optimally doped YBa2Cu3O6+x with a Tc of 92 K reveal a resonance mode in their bosonic excitation spectrum at Ωres≈63 meV and Ωres≈61 meV respectively. In both cases, the overall shape of the bosonic excitation spectrum is indicative of predominant spin scattering with a resonant mode at Ωres<2Δ and overdamped spin fluctuations for energies larger than 2Δ. To perform the deconvolution of the experimental data, we implemented an efficient iterative algorithm that significantly enhances the reliability of our analysis. I. INTRODUCTION With the intention to unravel the unconventional pairing mechanism in high-temperature superconductors, extensive effort has been put into extracting the spectral density of the pairing boson from experimental data [1][2][3][4][5][6][7][8][9][10].An ever-present contender for this "bosonic glue" are antiferromagnetic spin fluctuations which have been extensively studied in the family of cuprate superconductors [11][12][13][14][15].Such an electronic pairing mechanism leads to a heavy renormalization of the boson spectrum when entering the superconducting state.In the normal state, overdamped spin excitations form a broad and gapless continuum.In the superconducting state, they develop a spin gap of 2∆, the minimum energy needed to create a particle-hole excitation, plus a rather longlived resonance mode at Ω res < 2∆ inside the spin gap [16][17][18][19][20][21][22][23][24][25][26].This resonance is made possible by the signchanging (d-wave) symmetry of the superconducting gap and identified as a spin exciton [18].The above mentioned behavior of the spin excitation spectrum has been directly observed in inelastic neutron scattering (INS) experiments [11,12,[27][28][29][30][31] yielding strong evidence for spin-fluctuation mediated pairing.Since signatures of this resonance mode are also expected to be visible in optical, photoemission and tunneling spectra, a considerable number of studies tried to complete the picture using these techniques [2][3][4][5][6][7][8][9][32][33][34][35], all probing a slightly different boson spectrum and facing complicated inversion techniques.Recently, machine learning algorithms entered the scene and their application to angle-resolved photoemission (ARPES) data proved to be a powerful concept to reverse-model the spin-spectrum, but this happens at the cost of a number of free parameters which cannot be easily mapped onto physical quantities [10,34].In this work, we extracted the bosonic spectrum from the inelastic part of scanning tunneling spectra which we obtained on the cuprate superconductors Bi 2 Sr 2 CaCu 2 O 8+δ (Bi2212) and YBa 2 Cu 3 O 6+x (Y123).In contrast to previous scanning tunneling spectroscopy (STS) [3,35] and break junction experiments [36] that focused on 
Bi2212, we obtain the boson spectrum without a functional prescription and over a wide energy range for both materials.It naturally exhibits the sharp resonance mode and overdamped continuum that are characteristic for the spin spectrum measured in INS. Inelastic electron tunneling spectroscopy experiments using the tip of a scanning tunneling microscope (IETS-STM) have proven to be a powerful tool in the study of bosonic excitations of vibrational [5,[37][38][39][40], magnetic [41][42][43][44][45][46] or plasmonic [37,47] character in metals, single molecules and also superconductors.Due to the spatial confinement of the electrons in the apex of the STM tip, the wave vector of the tunneling electrons is widely spread and the local density of states (LDOS) of the tip becomes rather flat and featureless.Consequently, the generally momentum dependent tunnel matrix element can be considered momentum independent in the STM geometry [48].As a result, the elastic contribution to the tunneling conductance σ el becomes directly proportional to the LDOS of the sample, as has been shown by Tersoff and Hamann [49].Similarly, the inelastic contribution σ inel to the tunneling conductance is given by a momentum integrated scattering probability of tunneling electrons sharing their initial state energy with a final state electron and a bosonic excitation.The absence of strict momentum conservation opens the phase space for the excited boson and as a consequence, the inelastic contributions to the tunneling current can be a magnitude larger than in planar junctions [40], in which the lateral momentum is conserved. Previous IETS-STM experiments used this effect to determine the Eliashberg function α 2 F (Ω) of the strongcoupling conventional superconductor Pb [40,50], which contains the momentum integrated spectral density of the pairing phonon F (Ω), as well as the electron-phonon coupling (EPC) constant α(Ω) [51][52][53].While for conventional superconductors, the Migdal theorem [54] allows to treat the electronic and phononic degrees of freedom to lowest order separately, largely simplifying the analysis of IETS spectra, the situation is less clear for unconventional superconductors with electronic pairing mechanism.Nevertheless, the theoretical description of IETS-STM spectra could be extended to the fully gapped Fe-based unconventional superconductors of s ± character [55,56].Strong coupling of electrons and spin fluctuations manifests in IETS-STM spectra as a characteristically lower differential conductance in the superconducting state compared to the normal state for energies slightly below 3∆ [55].Also for these systems, the boson spectrum could be reconstructed from IETS-STM spectra by deconvolution [56].In this work, we investigate, in how far this method can be extended to nodal d-wave superconductors.To do this, we follow the path of a deconvolution of scanning tunneling data, using a priori band structure for the normal state model and the inelastic scanning tunneling theory of unconventional superconductors derived by Hlobil et al. [55]. II. 
OUTLINE OF THE EXTRACTION PROCEDURE The total tunneling conductance σ tot between a normal conducting tip and a superconductor is comprised of the elastic tunneling contributions σ el , but also significant inelastic contributions σ inel [40,50,55,56].While the second derivative of the tunneling current d 2 I/dU 2 obtained on conventional superconductors in the normal state is directly proportional to the Eliashberg function α 2 F (Ω) [50], the bosonic glue in unconventional superconductors is drastically renormalized upon entering the superconducting phase. In the presence of strong inelastic contributions to the tunneling current an inversion procedure à la McMillan and Rowell [51] cannot be used to extract the Eliashberg function from the superconducting spectrum.As was shown in Ref. [55], the function g 2 χ ′′ (Ω) acts as the "generalized glue function" and analog to the Eliashberg function in superconductors driven by electronic interactions.As both, phonons and spin fluctuations, may couple to the tunneling quasiparticles, we define the bosonic spectrum where g is the spin-fermion coupling constant and χ ′′ is the dimensionless, momentum integrated spin spectrum. The inelastic differential conductance for positive voltage at zero temperature is given by While the explicit momentum dependence of the bosonic spectrum is lost in this form, it still contains the spin resonance at the antiferromagnetic ordering vector if antinodal points on the Fermi surface contribute significantly to the tunneling spectrum.As can be seen from Eq. ( 2), B is the source function, the DOS in the superconducting state ν s is the kernel and the inelastic tunneling conductance σ inel is the signal.Θ denotes the Heaviside step function.The general aim in this work is to extract the function B(Ω) as accurately as we can from scanning tunneling spectra, which we do by deconvolution of Eq. ( 2).We follow the following step-by-step procedure: 1. Determination of the superconducting density of states ν s 2. Determination of the inelastic tunneling conductance σ inel 3. Extraction of B(Ω) by deconvolution of Eq. ( 2) Assumptions and limitations of our extraction procedure are discussed in Section II D. A. 
Step 1: Determining νs From a scanning tunneling spectrum below T c we obtain the differential conductance dI/dU (eU ) ≡ σ tot (eU ).This function consists of the purely elastic part σ el and the inelastic part σ inel .The elastic part is directly proportional to the superconducting density of states in the sample ν s .In this step, we determine the functional form of the elastic contribution by fitting a model function to the low bias region of the dI/dU spectrum that keeps the complexity as low as possible while still capturing the relevant features of the band structure and pairing strength.We opt for a generalized Dynes function [57] with a momentum dependent gap: Here, is the angle-dependent DOS at the Fermi energy in the normal conducting phase, λ the band index and C λ the relative tunneling sensitivity for the band.The function ν F n (φ) weighs gap distributions for different (k x , k y ) by their abundance along the Fermi surface (FS).It is derived from a microscopic tight-binding approach that models the dispersion relation.In the case of Bi2212 we used a single-band model whereas for Y123 we considered two CuO 2 plane bands and one CuO chain band (see Appendix A).It should be noted that, unlike in fully gapped superconductors, the inelastic spectrum can be non-zero down to vanishing bias voltage because ∆(k) has a nodal structure.This prevents us from directly assigning the differential conductance for e|U | ≲ ∆ to the purely elastic tunneling contribution as was possible for the s ± superconductor monolayer FeSe [55,56].We will, however, start from here and refine σ el in the next step. B. Step 2: Determining σ inel We use the fact that σ tot (ω) = σ el (ω)+σ inel (ω) and the physical constraints σ inel (0) = 0 and σ inel (e|U | > 0) > 0. For a slowly varying bosonic function, which we expect in the range 0 < e|U | < ∆ due to the quick reopening of the gap near the nodal parts of the Fermi surface, σ inel is essentially given by the elastic contribution ν s (ω) times some scalar, real factor.We thus assume, that, in the range 0 < e|U | < ∆, σ el is well guessed by our Dynes fit times a factor η < 1.We approximated η using a boundary condition for the total number of states (see Appendix B) and in order to keep the condition σ el (∞) = σ 0 , we scale up our experimental curve by 1/η instead of scaling down our fitted curve.We took care that the choice of this numerical factor, that simply helps to perform our deconvolution algorithm and paint a more realistic picture of σ el , does not influence the extracted boson spectrum in a qualitative manner (see Appendix B). C. Step 3: Extracting B(Ω) We compare two methods by which the boson spectrum was determined: The direct deconvolution in Fourier space and the iterative Gold algorithm [58,59] to perform the deconvolution.The advantage of the Gold algorithm is that for positive kernel and signal, the result of this deconvolution method is always positive.This is in agreement with our physical constraint that the bosonic excitation spectrum is strictly positive.We used the implementation of the one-fold Gold algorithm in the TSpectrum class of ROOT system [60,61] in C language wrapped in a small python module. The bosonic function from direct deconvolution in Fourier space is obtained from where F denotes a Fourier Transform and F −1 the inverse transform.The abrupt change in elastic conduc-tance at zero energy (multiplication with Heaviside distribution in Eq. 
( 2)) leads to heavy oscillations in the Fourier components.Therefore the result of this deconvolution procedure can contain non-zero contributions for E < 0 and negative contributions for 0 < E < ∆.They are exact solutions to Eq. ( 2) but from a subset of nonphysical solutions that we are not interested in.Because the solutions obtained in this way are highly oscillatory we show the result after Gaussian smoothing.In order to obtain a positive solution to Eq. ( 2), the result of the direct deconvolution method is used as a first guess to the Gold algorithm.The results shown in this work are obtained after 2,000,000 iterations at which point convergence has been reached. D. Assumptions and limitations In order for our extraction method to be applicable, several simplifying assumptions were made: 1. Quasiparticles with energy ω couple to bosonic excitations of energy Ω and effective integrated density of states B(Ω).The k-dependence of the interaction is thus disregarded. 2. σ inel has a simple relation to σ el for 0 < e|U | < ∆ (see Section II B) which is generally oversimplified for d-wave superconductors, especially near ∆. In general, retrieving the source function from a convolution integral is an ill-posed problem which means that we only obtain one solution from a large set of valid solutions to the convolution equation.Additional aspects that complicate the problem are the following: 1.The kernel function ν s (ω) is a guess which is very dependent on the modeling of the superconducting density of states and is further questioned by lack of energetic regions of purely elastic processes in the scanning tunneling spectrum.We are in fact on the verge of a necessity for blind deconvolution algorithms. 2. Strong electron-boson coupling leads to spectral features of σ el outside the gap that are neglected here as the contribution of σ inel is expected to be much larger.They can in principle be reconstructed within an Eliashberg theory using the extracted boson spectrum.Using this refined σ el and repeating the procedure until B(Ω) leads to the correct σ el and σ inel could further improve our result. 3. Electronic noise in the recorded spectra is ignored and consequently ends up in either ν s or B(Ω) 4. With B(Ω) we obtain only an "effective tunnel Eliashberg function" which includes all bosonic excitations that are accessible to the tunneling quasiparticles.Hence, no disentanglement into lattice and spin degrees of freedom is possible. III. EXPERIMENTAL METHODS We performed inelastic tunneling spectroscopy on a slightly underdoped Bi2212 sample with a T c of 82 K (UD82) and an optimally doped Y123 sample with a T c of 92 K (OP92) using a home-built STM with Joule-Thomson cooling [62].The samples were cleaved at a temperature of 78 K in ultra-high vacuum (UHV) and immediately transferred into the STM.All spectra were recorded with a tungsten tip.The set-up also allows to vary the temperature of the STM in order to record spectra above T c .Due to the large inhomogeneity of scanning tunneling spectra on Bi2212 [63], we show averaged spectra recorded at positions, where the dip-hump feature can be clearly seen.In the case of Y123, the overall spectral inhomogeneity was lower (see Appendix C) and we show spectra which are averaged over a 50×50 nm 2 area where the dip-hump feature was ubiquitous. A. Experimental results Fig. 
1 shows experimental dI/dU and d 2 I/dU 2 spectra recorded at 0.7 K and 84 K.In order to remove a tilt in the spectra that stems from a slope in the density of states (DOS) due to hole doping [64,65], we followed the standard procedure and symmetrized the spectra in Fig. 1.The dI/dU spectrum for superconducting Bi2212 in Fig. 1(a) shows a single but smeared gap with residual zero-bias conductance due to the nodal d x 2 −y 2 gap symmetry.Similarly, the coherence peaks are smeared due to the gap symmetry and possibly also due to short quasiparticle lifetimes.This is typical for the underdoped regime and may be caused by its proximity to the insulating phase [63].Outside the gap, a clear dip of the superconducting spectrum below the normal conducting spectrum, followed by a hump reapproaching it, are visible.The V-shaped conductance in the normal state hints towards strong inelastic contributions to the tunneling current from overdamped electronic excitations that become partly gapped in the superconducting state as discussed in Ref. [55,66].The hump shows as a peak in the second derivative of the tunneling current that exceeds the curve of the normal state at ≈ 120 mV in Fig. 1(b).The relatively round shape of the superconducting gap in Bi2212 is atypical for a classic d-wave superconductor, in which the naive expectation is a V-shaped conductance minimum.As will be shown later on, the round shape of the gap can be generated without admixture of an s-wave pairing term by respecting the anisotropy of the Fermi surface in the normal state.The Fermi surface and gap anisotropy are summarized schematically in Fig. 2(a). B. Extraction of the bosonic spectrum We followed the step-by-step extraction procedure outlined in Section II starting from the determination of the superconducting density of states ν s .The optimal Dynes fit to our experimental spectrum in the superconducting state is shown in Fig. 2(b) with ∆ 0 = 63.31meV (∆ max = 59.14 meV and ∆ = 49.60 meV) and γ = 0.19.The resulting (in)elastic contribution is shown in red(grey) in Fig. 2(c).Here, a numerical scaling factor of η = 0.6 was used.The value for ∆ lies within the range of previously reported gap values on the Bi2212 surface [63], especially in the slightly underdoped regime, where variations of the local gap from the average gap tend to be larger [68]. The regularized bosonic function from direct deconvolution in Fourier space is shown in Fig. 2(d) in orange.The contributions at low energies are an artefact from the scaling with factor η. Despite our uncompromising simplifications, the bosonic spectrum recovers well the tendency of the total conductance in the forward convolution (Fig. 2(c) orange) and shows the expected behaviour for coupling to spin degrees of freedom at medium and high energies, i.e. a resonance mode at ∆ < E < 2∆ and approach of the normal state bosonic function B for E ≳ 3∆ [55], that, in contrast to the Eliashberg function in the case of phonon-mediated pairing, remains finite for energies well above 2∆.While the long-lived resonance mode is associated with a spin resonance due to the signchanging gap function, the broad high-energy tail of the bosonic spectrum is due to the coupling to overdamped spin fluctuations, or paramagnons [18].Resonant inelas-FIG.2. Bosonic Spectrum Extraction for Bi2212: (a) Schematic perspective view of the Fermi surface (FS) in the first Brillouin zone (BZ) in the normal/superconducting state (blue/red) (adapted from Ref. 
[67]).The color lightness depicts the relative density of states: the higher the lightness, the lower the density of states.(b) A generalized Dynes model (Eq.( 3)) with ∆0 = 63.30meV and γ = 0.15 (red) was fitted to the experimental differential conductance (black).(c) The total conductance (black line) has been scaled up by the factor η −1 = 1.67.The inelastic part of the conductance (vertical width of grey area) is given by the difference between the total (black) and the elastic part (red) of the conductance.Forward convolved conductance with the obtained boson spectral functions from the direct FFT method/Gold algorithm are shown in orange/green dashed lines.(d) Boson spectral function determined by direct FFT method/Gold algorithm (orange/green).The thin dark green line shows the boson spectral function for a different Dynes fit than in (b,c) with ∆0 = 65 meV (not least square minimum).The result of the direct FFT method has been regularized for clarity.A clear resonance mode at Ωres ≈ 63 meV is visible.Zero-energy contributions are an artefact from the scaling procedure.tic x-ray scattering (RIXS) studies have shown that these paramagnons dominate the bosonic spectrum for energies larger than ≈ 100 meV in several families of cuprates as almost all other contributors, e.g.phonons, lie lower in energy [69][70][71][72]. By application of the Gold algorithm we obtained the bosonic spectrum shown in green in Fig. 2(d).Again, the high value at E = 0 is a consequence of the scaling with factor η. The sharp peak at 10 meV is due to inadequacies of our elastic fit in the region of the coherence peak.It is e.g.not present in our analysis of Y123 (see Section V) and vanishes once one takes an elastic DOS with a larger gap (here 65 meV, not leastsquare minimum) as we show in the thin dark green line in Fig. 2(d).We could get rid of negative contributions and find a bosonic function that recovers well the total conductance (Fig. 2(c) green), especially the dip-hump structure, and shows a very clear resonance at The resonance mode extracted in this work is higher in energy than reported in inelastic neutron scattering experiments (Ω res ≈ 43 meV at the antiferromagnetic ordering vector) [12] and closer to the resonance determined by optical scattering (Ω res ≈ 60 meV) [32].Due to the loss of momentum information in tunneling, the centre of the resonance is expected to be shifted to higher energies compared to the INS results [55].Due to the large inhomogeneity of ∆ on the surface of Bi2212 [63], it is more instructive to compare the ratio Ω res /∆ to other works rather than the absolute value of Ω res .The ratio Ω res / ∆ = 1.3 lies within the current range of error of Ω res /∆ = 1.28 ± 8 by Yu et al. [25].In most other extraction methods of the bosonic mode energy, the normal state DOS is not respected, which is why, depending on the method, the ∆ 0 used there is most similar to what is here called ∆ max or ∆. ∆ max is the largest gap value that contributes to the elastic conductance spectrum and ∆ the momentum averaged and DOS weighted gap. A. Experimental results The dI/dU spectrum for superconducting Y123 in Fig. 
3(a) is qualitatively in excellent agreement with previous STM measurements [73] and shows three lowenergy features: (i) a superconducting coherence peak at ≈ 25 meV that is sharper than in Bi2212, (ii) a highenergy shoulder of the coherence peak and (iii) a lowenergy peak at ≈ 10 meV.The high-energy shoulder as well as the sub-gap peak are believed to arise from the proximity-induced superconductivity in BaO planes and CuO chains [74][75][76][77].This would certainly account for the fact that these states are missing in the Bi- The inelastic part of the conductance (vertical width of grey area) is given by the difference between the total (black) and the elastic part (red) of the conductance.Forward convolved conductance with the obtained boson spectral functions from the direct FFT method/Gold algorithm are shown in orange/green dashed lines.(c) Boson spectral function determined by direct FFT method/Gold algorithm (orange/green).The result of the direct FFT method has been regularized for clarity.A clear resonance mode at Ωres ≈ 61 meV is visible.Zero-energy contributions are an artefact from the scaling procedure. based compounds and that the sub-gap peak shows a direction-dependent dispersion in ARPES data [78,79].At energies larger than ∆, we again find a clear diphump feature, similar as in Bi2212.The hump lies at ≈ 60 meV.The V-shaped background conductance in the high-energy regime of the superconducting spectrum is in agreement with the predicted inelastic contribution by magnetic scattering in the spin-fermion model [55,66]. B. Extraction of the bosonic spectrum We proceeded as in the case for Bi2212, but incorporated the one-dimensional band from the CuO chains as well as the bonding and anti-bonding band from the CuO 2 planes into the calculation of the normal state DOS to remodel the sub-gap peak and coherence peak shoulder in the estimated σ el of the superconducting state.The optimal Dynes fit with gaps ∆ AB 0 = 20.62 meV for the anti-bonding (AB), ∆ BB 0 = 25.77meV for the bonding (BB) and ∆ CHSS 0 = 5.66 meV for the chain band is shown in red in Fig. 3(a).Because vacuum-cleaved surfaces favour tunneling into states of the CuO chain plane [74,80,81], the sub-gap peak is pronounced and the contribution to the total DOS of the CH SS band is, in our analysis, roughly five times higher than for the AB and BB band.The size of ∆ BB is in good agreement with other scanning tunneling spectroscopy results spanning around 20 experiments, in which the extracted gap value lies between ∆ = 18 − 30 meV for optimally doped samples [63].For comparison: From Raman spectra, ∆ 0 , i.e. 
the gap in the antinodal direction, is frequently found to be 34 meV for optimally doped Y123 samples [82][83][84][85].It should be noted that vacuum cleaved surfaces of Y123 tend to be overdoped [80] which goes hand in hand with a steep decline of ∆.The reason for discrepancy between the gap measured in STS and ARPES [86,87] (also yielding ∆ 0 ∼ 34 meV) is expected due to two factors: (i) Although less influenced by a local gap variation than Bi2212, the gap of Y123 is expected to be inhomogeneous on a wider scale of > 100 nm [86].While ARPES yields an average gap over several of these domains, STS yields a more local gap.(ii) The measurement of a k-averaged gap value in STS naturally tends to give smaller values for a d-wave superconductor than the maximum gap size measured in ARPES.We try to eliminate this last effect by respecting the k-dependence of ∆ and ν F n in our fit.Nevertheless, despite the large T c , the spectroscopic results on Y123 in this work do not support an effective gap value of > 30 meV because the total conductivity is already on the decrease at this energy. Analogous to the case of Bi2212, the elastic part was, as a first guess, approximated by the Dynes fit to the total conductance times a scalar factor η. Here, η = 0.9 was chosen in order to secure the constraint σ inel (e|U | > 0) > 0. The (in)elastic parts to the total conductance are shown in red(grey) in Fig. 3(b). We compare the extracted bosonic DOS obtained from direct deconvolution and Gold algorithm for Y123 in Fig. 3(c).The resonance mode at Ω res ≈ 61 meV is significantly higher in energy than experimentally found by INS in (nearly) optimally doped samples with Ω res ≈ 41 meV [11,[88][89][90] and even lies at the onset of the spin scattering continuum at Ω c ≈ 60 meV [90].Apart from the k-space integration, which shifts the peak centre to higher energies, several other factors can play a key role: (i) The well-studied 41 meV odd-parity mode is paired with an even-parity mode at Ω e res ≈ 53 − 55 meV [90][91][92] which may be of the same origin as it vanishes at T c .This mode appears with a ≈ 3 − 20 times lower intensity in INS than the odd-parity mode, but this does not necessarily have to hold for a tunneling experiment.(ii) The bosonic spectrum extracted here is essentially poisoned by phononic contributions from every k-space angle.A disentanglement of phononic and electronic contributions to the total bosonic function by non-equilibrium optical spectroscopy showed that for Ω > 100 meV the bosonic function is purely electronic, yet in the energy range of the spin resonance, the contribution of strong-coupling phonons is almost equal to that of electronic origin [8]. (iii) Apart from physical arguments, there can also be made sceptical remarks on the deconvolution procedure: Evidently, it heavily depends on the guess of the elastic tunneling conductance, which in this case does not contain strong-coupling features from an Eliashberg theory.(iv) The pronounced contribution of the CuO chains to the total conductance essentially causes the resonance mode to appear at roughly ω hump − ∆ CHSS instead of ω hump − ∆ BB .Correcting for the 20 meV difference between the two gaps, it is likely that without sensitivity to the CuO chain gap, our extraction procedure will yield Ω res ≈ 41 meV ≈ 1.6 ∆ BB 0 ≈ 1.8 ∆ BB max ≈ 2.4 ∆BB . VI. 
CONCLUSION We recorded scanning tunneling spectra on superconducting Bi2212 (UD82) and Y123 (OP92) at 0.7 K and revealed a clear dip-hump structure outside the superconducting gap in both cases.The origin of this spectral feature can be traced back to a sharp resonance in the effective tunnel Eliashberg function.A careful separation of elastic and inelastic tunneling contributions enabled us to extract the bosonic excitation spectrum including this resonance.Comparing the obtained bosonic spectrum with inelastic neutron scattering data yields good agreement with the observed resonance mode and supports that magnetic fluctuations play an important role in the pairing mechanism of the cuprate superconductors. Our extraction method of the bosonic spectrum from scanning tunneling spectra paves a way to complement glue functions determined from optical spectroscopy or ARPES and has several advantageous features: The usage of scanning tunneling spectra yields the option for atomic resolution of the bosonic modes on the superconductor surface [5,56,93] as well as easy access of both occupied and unoccupied quasiparticle states with the high energy resolution of cryogenic STM setups. with chemical potential µ and hopping parameters t i as proposed in Ref. [94].For Bi2212, we used the set of parameters from Ref. [94] for a near optimally doped crystal and for Y123 we started from the parameters proposed in Ref. [75] for the optimally doped case and adjusted chemical potential, as well as t 2 to fit recently obtained Fermi surface contours measured by ARPES [95].While we only consider the binding band (BB) for Bi2212, for Y123, we take the binding (BB), anti-binding (AB) and the chain band (CH SS ) into consideration.The latter is modeled by a dispersion of the form The used tight binding parameters are summarized in Tab.I.The calculated Fermi surface in the first Brillouin zone (BZ) is shown in Fig. 4(a,c) for Bi2212 and Y123.An analytic expression for the Fermi wave vector k F (φ) is retrieved from the solution of ϵ(k, φ) = 0 where ϵ(k, φ) is the polar representation of Eq. ( 5).The normal DOS is then given by to approximate η, i.e. the total number of electronic states is conserved in the phase transition from the normal to the superconducting phase.This procedure is depicted in Fig. 5(a).In order to make sure that the introduction of the numerical scaling factor η has no poisoning effect on our extracted bosonic spectrum, the deconvolution of the Bi2212 spectrum by Gold's algorithm was performed for four different values of η.The results shown in Fig. 5(b) are comforting in the sense that the overall shape of the bosonic spectrum is unchanged.The only major difference lies in the magnitude of the zero-energy peak which is to be expected from a scalar multiplication, but since this peak is anyhow out of the bounds of physical contributions it does not harm the analysis. C. LDOS inhomogeneity As reported by Fischer et al., Bi2212 tends to show a large inhomegeneity of its LDOS in the superconducting state [63].This can be confirmed in our experiment by direct comparison of the conductance inhomegeneity measured on Bi2212 and Y123, shown in Fig. 
6.The heat maps of the conductance variation δσ/σ(x, y) = 1 N Ut U =−Ut σ(eU, x, y) − σ(eU ) σ(eU ) (8) show that it is about three times higher on the Bi2212 surface than on the Y123 surface.As a consequence, a position averaged spectrum over a 50 × 50 nm 2 area can preserve detailed gap features better for Y123 than for Bi2212.Especially the dip-hump (dip marked by blue, hump marked by red arrows in Fig. 6) feature is still clearly visible in the position averaged spectrum of Y123 at ϵ ≈ 60 meV but is invisible in Bi2212.The preservation of this feature in the spectrum is crucial for our ITS analysis.Therefore, in the case of Bi2212, an average spectrum at one specific location, at which the dip-hump spectral feature was clearly visible, was chosen for this FIG.6. LDOS Inhomogeneity: Position averaged bias spectra on a 50 × 50 nm 2 area at T = 0.7 K for Bi2212 (a) and Y123 (b).Heat maps in the inset show the variation of the differential conductance within the averaging area.The higher inhomogeneity of the Bi2212 surface is reflected in both the conductance variation map and the blurred position averaged spectrum.The characteristic dip and hump are marked by blue and red arrows. FIG. 1 . FIG. 1. Tunneling Spectra on Bi2212: (a) Experimental dI/dU spectra in the superconducting/normal state (black/blue) recorded at 0.7/84 K after Gaussian smoothing, symmetrization and normalization to the differential conductance in the normal state at 200 mV.(b) Numerical derivative of spectra in (a). 6 FIG. 3 .= 5 . FIG. 3. Bosonic Spectrum Extraction for Y123: (a) A generalized Dynes model (Eq.(3)) with ∆ AB 0 = 20.62 meV for the anti-bonding (AB), ∆ BB 0 = 25.77meV for the bonding (BB) and ∆ CHSS 0 = 5.66 meV for the chain (CHSS) band (red) was fitted to the experimental differential conductance (black).(b) The total conductance (black line) has been scaled up by the factor η −1 = 1.11.The inelastic part of the conductance (vertical width of grey area) is given by the difference between the total (black) and the elastic part (red) of the conductance.Forward convolved conductance with the obtained boson spectral functions from the direct FFT method/Gold algorithm are shown in orange/green dashed lines.(c) Boson spectral function determined by direct FFT method/Gold algorithm (orange/green).The result of the direct FFT method has been regularized for clarity.A clear resonance mode at Ωres ≈ 61 meV is visible.Zero-energy contributions are an artefact from the scaling procedure. FIG. 4 . FIG. 4. Normal State Electrons: (a,c) Calculated Fermi surface in the first 2D BZ of Bi2212 (a) and Y123 (c).(b,d) Calculated Fermi wave vector kF (φ) (solid line) and normal conducting DOS along the Fermi surface contour ν F n (φ) (dashed line) as function of polar angle in the first BZ quadrant (sketched in a) for Bi2212 (b) and Y123 (d).Colors match the Fermi surface contours of the individual bands (AB, BB, CHSS) from (a,c). FIG. 5 . FIG. 5. Numerical Scaling Factor: (a) Determination of η through the boundary condition ∞ −∞ dωσ el n = ∞ −∞ dωσ el s .(b) Variation of η leaves the general shape of the extracted bosonic spectrum unaffected except for the magnitude of its zero-energy peak. TABLE I . Tight binding parameters: Chemical potential µ and hopping parameters ti used in the dispersion relation for the Bi2212 and Y123 bands (Eqs.(5) and (6)).
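As a compact illustration of the three-step procedure of Section II, the following self-contained sketch builds a toy angle-averaged d-wave Dynes density of states, generates an inelastic conductance through the schematic convolution implied by Eq. (2), and recovers the boson spectrum with a positivity-preserving Gold-type multiplicative update. Everything here is a simplification: the density of states omits the band-structure weighting of Eq. (3), the kernel is an idealized one-sided convolution, and the iteration is a stand-in for the ROOT TSpectrum implementation, which in the analysis above is additionally seeded with the Fourier-space deconvolution result.

```python
import numpy as np

# Energy grid (meV, positive bias only) and toy parameters
E = np.linspace(0.0, 300.0, 600)
dE = E[1] - E[0]

def nu_s(w, delta0=50.0, gamma=2.0):
    """Angle-averaged d-wave Dynes DOS (toy stand-in for Eq. (3), no FS weighting)."""
    phi = np.linspace(0.0, 2.0 * np.pi, 400)
    gap = delta0 * np.cos(2.0 * phi)
    z = w + 1j * gamma
    return np.mean(np.real(z / np.sqrt(z ** 2 - gap ** 2 + 0j)))

nus = np.array([nu_s(w) for w in E])

# Toy boson spectrum: resonance mode below 2*Delta plus an overdamped continuum
B_true = np.exp(-((E - 63.0) / 8.0) ** 2) + 0.3 / (1.0 + np.exp(-(E - 100.0) / 20.0))

# Kernel matrix of the schematic relation sigma_inel(eU) ~ int_0^eU dOmega B(Omega) nu_s(eU - Omega)
K = np.zeros((E.size, E.size))
for k in range(E.size):
    K[k, : k + 1] = np.interp(E[k] - E[: k + 1], E, nus) * dE

sigma_inel = K @ B_true          # "measured" inelastic conductance of the toy model

# Gold-type multiplicative update: stays positive and converges towards a solution of K B = sigma_inel
ATy = K.T @ sigma_inel
B = np.ones_like(E)
for _ in range(2000):
    B *= ATy / np.maximum(K.T @ (K @ B), 1e-12)

print("maximum of the recovered spectrum above 20 meV at", E[np.argmax(B * (E > 20.0))], "meV")
```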
A Procedure to Map Subsidence at the Regional Scale Using the Persistent Scatterer Interferometry ( PSI ) Technique In this paper, we present a procedure to map subsidence at the regional scale by means of persistent scatterer interferometry (PSI). Subsidence analysis is usually restricted to plain areas and where the presence of this phenomenon is already known. The proposed procedure allows a fast identification of subsidences in large and hilly-mountainous areas. The test area is the Tuscany region, in Central Italy, where several areas are affected by natural and anthropogenic subsidence and where PSI data acquired by the Envisat satellite are available both in ascending and descending orbit. The procedure consists of the definition of the vertical and horizontal components of the deformation measured by satellite at first, then of the calculation of the “real” displacement direction, so that mainly vertical deformations can be individuated and mapped. Introduction Subsidence can be defined as the progressive lowering of the ground; it can be a rapid process, such as a sudden collapse (i.e., sinkholes), due to karst processes, or a slow process; in the latter case, this phenomenon can be due both to natural causes, such as active faults (e.g., [1]) or volcanic processes, and anthropic causes, such as settlements, induced by new buildings [2], mining [3] and fluid pumping from underground [4,5]. Subsidence can be successfully analyzed by means of the PSI (persistent scatterer interferometry) technique. The PSI technique has shown its capability to provide information about ground deformations over wide areas with millimetric precision, making this approach suitable for both regional- [21] and local-scale [22] subsidence investigations.Through a statistical analysis of the signals backscattered from a network of individual, phase coherent targets, this approach retrieves estimates of the displacements occurring between different acquisitions by distinguishing the phase shift related to ground motions from the phase component, due to the atmosphere, topography and noise [23,24]. The application of the PSI technique to the analysis of subsidence is somehow facilitated by the fast data processing.Furthermore, subsidence often occurs in urban areas, where the high density of radar targets allows high accuracy measurements.Several applications of PSI to subsidence mapping can be found in the literature, such as [25][26][27][28]. The analysis of these phenomena using PSI is usually performed in plain areas and where their presence is known [6,29], leading to a lack of knowledge about the natural hazards affecting whole regions. Data acquired by various kinds of instruments are suitable for subsidence mapping (e.g., GPS [30] or leveling [31]); however, most of them are useful only to study local phenomena, and they are usually not available at the regional scale. This limitation has been overcome using radar satellite data, which allow measurements over large areas and, under suitable conditions, spatially dense information on slow ground surface deformations. At the same time, the PSI technique provides high accuracy that ranges from 1 to 3 mm on single measurements in correspondence to each SAR acquisition and between 0.1 and 1 mm/y for the line of sight (LOS) deformation rate [24]. 
The proposed method has been tested in Tuscany, where several areas affected by subsidence have been recognized and analyzed by a number of authors (i.e., [6,29,32]), but nowadays, a unique archive of the subsidences present in the regional territory still does not exist. A complete review of the existing works highlighted the lack of information for most part of the region, since almost all of the examined works were focused on the phenomena affecting the main alluvial plains of the region, where the major urban areas of Tuscany are located. To perform a mapping of the subsidence phenomena of an entire region, the main problem is to discriminate areas where ground movements can be due to subsidence or to other causes (i.e., landslides), since, at this scale, the morphology of the territory may vary significantly and the assumption that subsidences occur only in plain areas cannot and should not be considered valid (e.g., [33,34]).Overcoming this problem requires data and an analysis procedure adequate to the scale of the work. This work wants to provide a procedure to efficiently map subsidence at the regional scale using the PSI technique.The test area is the Tuscany region in Central Italy where several areas of different extensions have been affected by natural and anthropogenic subsidence.The procedure makes use of ascending and descending datasets of the Envisat satellite ranging in time from 2002 to 2010.This procedure can be reproducible also in other test sites and also with different datasets of PSI data, given the availability of ascending and descending data. Study Area The proposed procedure has been tested in the Tuscany region (Central Italy), which is about 23,000 km 2 wide.The territory is mainly hilly (66.5%), but it includes also mountainous areas (25.1%) and limited plains (8.4%) [35]. The geology and morphology of this region vary greatly from zone to zone: the main reliefs lie in the northern and eastern part of the region. The northern part is characterized by high mountains (up to 2000 m a.s.l.) made up of metamorphic rocks (Apuan Alps) and by steep valleys filled up by alluvial deposits; the eastern part is characterized by mountains made up of sedimentary rocks and by intermontane basins filled up by alluvial deposits, where the main cities of the region are located. The central and southern parts are characterized by hilly morphology with isolated volcanic reliefs (e.g., Monte Amiata) and spread alluvial plains (Figure 1). Methodology By the use of the PSI technique, a simple procedure to perform rapid mapping of subsidence phenomena has been developed.This methodology has been applied and tested in the Tuscany region (Italy), where PSI data acquired by the Envisat Satellite are available from 2002 to 2010, both in ascending (41 images, 1,142,707 PS) and descending orbit (41 images, 1,353,746 PS). The PSI data have been processed following the PSInSAR™ approach. The acquisition of the SAR satellite occurs along two different polar orbits, descending from north to south and ascending from south to north.Displacements are measured along a unit vector codirectional to the satellite, defined as the line of sight.Being that the orbit of SAR satellites is polar, it is impossible to estimate the component of displacement along the N-S direction on the azimuth plane. 
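To make the acquisition geometry explicit, the LOS measurement can be written as the projection of the ground displacement vector onto the satellite look direction. The sketch below uses an idealized geometry, with the 23° incidence angle of the text, a look direction lying exactly in the east-west vertical plane and a sign convention chosen only for illustration, so the numerical signs must be adapted to the convention of any real dataset.

```python
import numpy as np

THETA = np.radians(23.0)   # Envisat incidence angle used in the text

def los_unit_vector(orbit):
    """Idealised LOS unit vector in (E, N, U) coordinates; the signs are an assumed
    convention (right-looking SAR on a near-polar orbit), not the processed dataset's."""
    sign = -1.0 if orbit == "ascending" else +1.0
    return np.array([sign * np.sin(THETA), 0.0, np.cos(THETA)])   # no N-S component

def v_los(v_enu, orbit):
    """LOS velocity measured by the PSI technique for a ground velocity (vE, vN, vU)."""
    return float(np.dot(v_enu, los_unit_vector(orbit)))

subsidence = np.array([0.0, 0.0, -5.0])   # mm/yr, purely vertical lowering
northward = np.array([0.0, -5.0, 0.0])    # mm/yr, pure N-S motion: invisible in both orbits
print(v_los(subsidence, "ascending"), v_los(subsidence, "descending"))
print(v_los(northward, "ascending"), v_los(northward, "descending"))
```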
Data acquired in both orbits are necessary to perform subsidence analysis with PSI data over areas with different terrain altitudes, because the deformations detected by this technique are measured along the line of sight of the satellite, so the real value of the deformation strongly depends on the orientation of the terrain with respect to the satellite. If deformations are detected in plain areas, landslide phenomena can easily be excluded, but this is not true if deformations are detected in hilly-mountainous areas, which can lead to misinterpretation of the results. If data acquired in both orbits are available, this problem can be overcome, since the velocity (or the rate) of vertical and horizontal (E-W only) deformations can be calculated by solving the following equation system [26,36]: where Va and Vd represent the velocity of deformation in ascending and descending orbit, VV and VE the velocity in the vertical and horizontal (E-W) planes, and θa and θd the incidence angles (both 23°) in ascending and descending orbit (Figure 2). The solution of this system requires that a single measurement point be recognized as a valid target by the satellite in both orbits, but this condition rarely occurs, so artificial persistent scatterers (called synthetic PS) have to be created. This operation can be performed by dividing the space with a sampling grid of square cells and calculating the mean deformation velocity for each of the two orbits; the result is two spatially regular series of synthetic PS, each one corresponding to the centroid of a cell, which can then be combined to define the components of the deformation vector. This procedure is shown in the flow chart reported in Figure 3: a sampling grid is applied to the same scene for both orbits, and the mean deformation velocity is calculated for each cell. These velocities (Va and Vd) can then be used to calculate the vertical (VV) and horizontal (VE) velocities through Equation (1). The definition of the cell size is not obvious, since the cell must be sized appropriately. Cells with too few points should be avoided, but so should cells whose points are so far from each other that they may measure deformations due to different causes. Furthermore, using a mesh with too large a cell could merge phenomena of the same type that should be considered separate from a geological point of view (e.g., subsidence affecting two distinct river basins could be mapped as one). The scale of the work [37] is another factor to be considered when dimensioning the grid: using small cells (e.g., 1 hm²) for regional-scale analyses could be excessively expensive in terms of time and computational capability. On the other hand, small cells can be useful for a detailed analysis of local phenomena, where high detail is necessary. While in local analyses the cell size can easily be defined considering the range of autocorrelation given by the semivariogram of the PS, in regional-scale works the experience of the operator plays a fundamental role, since a unique geometric criterion is not identifiable and other factors, mainly the geology and geomorphology of the area, have to be considered.
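The equation system referenced as Equation (1) above did not survive extraction. A minimal sketch of the cell-averaging and decomposition steps is given below; the sign convention in the decomposition, the function names and the parameter values are assumptions for illustration, not the authors' implementation, and the exact signs depend on the look direction adopted by the processing chain.

```python
import numpy as np

def synthetic_ps_grid(x, y, v_los, cell_size):
    """Average the LOS velocities of all PS falling in each square cell.

    x, y      : PS coordinates in a projected CRS (metres)
    v_los     : LOS velocity of each PS (mm/yr)
    cell_size : grid spacing in metres (e.g. 1000 for regional, 100 for local work)
    Returns {(row, col): mean LOS velocity} for the synthetic PS at each cell centroid.
    """
    cols = np.floor(np.asarray(x) / cell_size).astype(int)
    rows = np.floor(np.asarray(y) / cell_size).astype(int)
    cells = {}
    for r, c, v in zip(rows, cols, v_los):
        cells.setdefault((r, c), []).append(v)
    return {rc: float(np.mean(vals)) for rc, vals in cells.items()}

def decompose(v_asc, v_desc, theta_asc=23.0, theta_desc=23.0):
    """Solve the ascending/descending system for the vertical (Vv) and E-W (Ve) components.

    Assumed form of the system (signs to be checked against the processing convention):
        v_asc  = Vv*cos(theta_a) - Ve*sin(theta_a)
        v_desc = Vv*cos(theta_d) + Ve*sin(theta_d)
    """
    ta, td = np.radians(theta_asc), np.radians(theta_desc)
    A = np.array([[np.cos(ta), -np.sin(ta)],
                  [np.cos(td),  np.sin(td)]])
    vv, ve = np.linalg.solve(A, np.array([v_asc, v_desc]))
    return vv, ve
```

With equal incidence angles of 23° this reduces, under the assumed signs, to VV = (Va + Vd)/(2 cos 23°) and VE = (Vd − Va)/(2 sin 23°); the decomposition is applied only to cells that contain synthetic PS from both orbits.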
Once the vertical and horizontal components of the deformation vector are known (positive velocities mean uplift or eastward deformation; negative velocities mean subsidence or westward deformation), it is possible to calculate the main direction of ground deformation (Vr), so as to discriminate areas affected by subsidence from areas affected by landslides. The modulus of Vr can easily be calculated by Pythagoras' theorem, since the angle between VV and VE is 90°. The direction of Vr has been computed by means of trigonometric rules (Figure 4), first solving the following equation (Equation 2) for both α and β to define the angle between VV (or VE) and Vr: where α is the angle between VE and Vr, β is the angle between VV and Vr, and δ is the angle between VV and VE, which is 90° (sin δ = 1). Subsequently, the signs of the VV and VE moduli have been used to define the direction of Vr. Solving Equation (1), upward or eastward movements have positive moduli, and conversely westward or downward movements have negative moduli. These values have been used to define in which quarter Vr lies (Table 1).

Table 1. Definition of the Vr direction on the basis of the VV and VE values.
VV and VE values | Quarter
both VV and VE positive | Z-E quarter
VV positive and VE negative | Z-W quarter
both VV and VE negative | N-W quarter
VV negative and VE positive | N-E quarter

In order to discriminate areas affected by subsidence from areas affected by landslides, the areas with mainly vertical deformations have been selected by filtering all the Vr values that lie within a range of ±45° of the vertical axis (Figure 5), since subsidence (or uplift) can be considered a mainly vertical movement. The filtered values can then be used to identify the main subsidence areas (Figure 6), but it must be borne in mind that the radar satellites used to perform PS analysis cannot evaluate N-S-oriented displacements [24]. This means that northward (or southward) landslides may look like subsidence, since the satellites can detect only the vertical component of such displacements. Figure 7. Comparison between the aspect map and synthetic PS for the subsiding area highlighted in Figure 6. The analysis of deformation has been refined by the use of a grid with 1-hm² cells. Zone 1: two PS seem to show only vertical deformations, but they lie on a northward slope, so horizontal deformations cannot be excluded. Zone 2: almost all PS show only vertical deformations, regardless of the aspect of the slope; a subsidence can be recognized. A 100-m side sampling grid has been used. To avoid misinterpretation of the results, the Vr (or VV and VE) values can be compared with a map showing the orientation of the slopes (namely an aspect map), easily obtainable from a DTM of the study area (in this case, a 10 × 10 m DTM). This comparison allows excluding the values that fall on northern or southern slopes (Figure 7). It is worth noting that this comparison is especially useful for subsiding areas that are not too large (a few km²) in hilly-mountainous areas: there, slope angles can vary from low (~15°) to moderate (~30°), and confusion between landslides and subsidence can be frequent; consequently, the definition of the Vr direction can be useful to avoid misinterpretations.
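The expression referenced as Equation (2) is missing from the text; given that δ = 90°, a law-of-sines reading of the description (sin α = VV/Vr, sin β = VE/Vr) is presumably equivalent to the arctangent used in the sketch below. The sketch, with hypothetical function names, combines the Vr modulus, the quadrant convention of Table 1 and the ±45° filter; the threshold is the one stated in the text.

```python
import numpy as np

def real_displacement(vv, ve):
    """Modulus and orientation of the deformation vector in the vertical/E-W plane.

    vv : vertical velocity  (positive = uplift,  negative = subsidence)
    ve : east-west velocity (positive = eastward, negative = westward)
    Returns (modulus, angle from the vertical axis in degrees, quarter label).
    """
    vr = np.hypot(vv, ve)                              # Pythagoras, since VV is perpendicular to VE
    angle = np.degrees(np.arctan2(abs(ve), abs(vv)))   # 0 deg = purely vertical deformation
    quarter = ("Z" if vv >= 0 else "N") + "-" + ("E" if ve >= 0 else "W")  # as in Table 1
    return vr, angle, quarter

def mainly_vertical(vv, ve, threshold_deg=45.0):
    """Keep only synthetic PS whose Vr lies within +/-45 deg of the vertical axis."""
    _, angle, _ = real_displacement(vv, ve)
    return angle <= threshold_deg
```

A cell passing `mainly_vertical` with negative VV would then be mapped as a candidate subsiding area, subject to the aspect-map check on N-S-facing slopes described above.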
In large subsiding areas, a comparison with the aspect map is not strictly necessary, since a large number of mutually consistent PS is expected, regardless of the morphology of the territory. Discussion The presented procedure allows subsidence phenomena to be quickly identified and mapped at all scales, from regional to local. The main advantage of this procedure is that it performs subsidence mapping in areas characterized by high topographic variability, where misinterpretation of the results is easy. Since each synthetic PS is representative of a certain area, which may have different dimensions depending on the scale, the procedure is most powerful when used in successive phases, i.e., using a sampling grid with broad cells (e.g., 1 km²) to identify the main subsiding areas, followed by a new analysis with smaller cells (e.g., 1 hm²) over those areas. In this way, it is possible to initially identify the areas affected by subsidence and then to perform an analysis and precise mapping of the phenomena, taking advantage of the previous small-scale analysis. Obviously, each of these phases can be applied singly, depending on the goal of the work; i.e., if a subsidence phenomenon is already well known, it would be a waste of time to perform a double analysis to locate and then map it. This procedure has some limitations and uncertainties related to the PSI technique. The main limitation is the inability of radar satellites to measure N-S-oriented deformation: in such a case, no analysis can be performed. Some uncertainties can be present when the analysis is performed in hilly-mountainous areas, where subsidence and landslides could be mixed up, but this problem can be overcome if a sufficient number of PS with concordant velocities and directions of deformation is available; the majority of these PS should be located on slopes not oriented in the N-S direction, otherwise their number alone would be useless for identifying the cause of the deformation. A further limitation is the availability of data: if satellite data are not available, a long time is required for their acquisition. Similarly, if data acquired in only one orbit (ascending or descending) are available, the whole procedure cannot be performed. Likewise, if few PS targets are available, the synthetic PS may not be representative of its cell, since it would be calculated from only a few PS (e.g., 2-3 targets per km²). The minimum number of required PS cannot be defined a priori, since it varies with the scale of the work and the extent and morphology of the study area. In this work, we used Envisat data, but the procedure can also be applied to data acquired by other SAR satellites, such as TerraSAR-X or COSMO-SkyMed. The temporal resolution of different satellites can influence the results of the procedure: for instance, data acquired by satellites with a shorter revisiting time (e.g., COSMO-SkyMed) can be used to analyze faster subsidence phenomena than those identifiable with Envisat data.
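The coarse-to-fine workflow described above can be summarized in a short sketch that reuses the hypothetical helpers introduced earlier (`synthetic_ps_grid`, `decompose`, `mainly_vertical`); the subsidence-rate threshold is an illustrative assumption, not a value given by the authors.

```python
def two_stage_mapping(ps_asc, ps_desc, coarse=1000.0, fine=100.0, rate_mm_yr=-2.0):
    """Coarse-to-fine subsidence mapping (sketch under stated assumptions).

    ps_asc / ps_desc : (x, y, v_los) arrays for ascending and descending PS
    coarse, fine     : cell sizes in metres (e.g. 1 km2 cells, then 1 hm2 cells)
    rate_mm_yr       : vertical rate below which a cell is flagged as subsiding
    """
    asc = synthetic_ps_grid(*ps_asc, coarse)
    desc = synthetic_ps_grid(*ps_desc, coarse)
    flagged = []
    for cell in set(asc) & set(desc):              # keep cells seen in both orbits
        vv, ve = decompose(asc[cell], desc[cell])
        if mainly_vertical(vv, ve) and vv < rate_mm_yr:
            flagged.append(cell)
    # A second pass would repeat the same steps with `fine` cells restricted to the
    # flagged areas, then cross-check N-S-facing slopes against the DTM aspect map.
    return flagged
```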
The spatial resolution of the satellite (e.g., Envisat, 5 × 20 m) must be considered when dimensioning the grid, to avoid meaningless analyses such as defining a cell size smaller than the spatial resolution of the satellite. Furthermore, the ground resolution of the satellite (pixel size) influences the mapping precision when a detailed analysis is performed, while it has a lower influence in regional-scale analyses. Conclusions Subsidence mapping and analysis by PSI are usually performed in areas where this phenomenon is already known. Furthermore, these analyses are rarely performed in hilly or mountainous areas. This paper presents a procedure to identify and map subsidence using the PSInSAR technique at the regional scale and/or in areas characterized by high topographic variability, where the identification of subsidence is usually difficult. The discrimination of subsidence from landslides can be a useful tool for spatial planning and for risk management strategies, since good knowledge of the natural hazards involving a territory is essential to providing proper risk reduction measures. The procedure is based on the comparison and combination of interferometric data acquired in different satellite orbits, so as to discriminate subsiding areas from other areas. The first step of the procedure is the calculation of the vertical and horizontal (E-W) components of the deformation. This operation requires the definition of synthetic PS (for both orbits), which can be obtained by dividing the space through a sampling grid and assigning to each cell the mean deformation velocity of its PS. Once the synthetic PS are defined, the two components can be calculated, and consequently the main direction of ground deformation can be defined. Until now, the traditional methods of combining ascending and descending orbits have allowed one to obtain two different maps related to VE and VV, which have to be compared to identify subsiding areas (e.g., [24]). The proposed approach goes further, since it allows one to calculate the displacement direction (Vr direction) from the values of VE and VV, in order to have a unique parameter to distinguish subsidence from other ground deformations. This discrimination is based on the principle that subsidence is characterized mainly (or only) by vertical deformation, so that, by filtering out all the deformations with mainly horizontal components, subsidence can be isolated and mapped. The proposed technique is quite flexible, since it can be applied in various geological and geomorphological settings and with different PSI datasets. The procedure works at the regional scale, and it can be applied to the fast detection of subsidence phenomena for land planning and risk management purposes. Figure 1. Location of the test area. Figure 3. Flow chart of the procedure to calculate the vertical and E-W components of the deformation recorded by the satellite. PS, persistent scatterers. Vel, velocity. Figure 4. Schema illustrating the angles between Vr, VV and VE. α is the angle between VE and Vr; β is the angle between VV and Vr; δ is the angle between VV and VE. Figure 5. Filtering of the data on the basis of the synthetic PS. The direction of Vr is calculated on the basis of the values of VV and VE (positive or negative).
Figure 6. Comparison between unfiltered and filtered PS. This image is an example of the result of a regional-scale analysis, performed with a grid of 1-km² cells. It is possible to notice the presence of a subsiding area (red box).
4,576.6
2014-10-30T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Charge Carrier Relaxation in Colloidal FAPbI3 Nanostructures Using Global Analysis We study the hot charge carrier relaxation process in weakly confined hybrid lead iodide perovskite colloidal nanostructures, FAPbI3 (FA = formamidinium), using femtosecond transient absorption (TA). We compare the conventional analysis method, based on the extraction of the carrier temperature (Tc) by fitting the high-energy tail of the band-edge bleach, with a global analysis method modeling the continuous evolution of the spectral lineshape in time using a simple sequential kinetic model. This practical approach results in a more accurate way to determine the charge carrier relaxation dynamics. At high excitation fluence (density of charge carriers above 10¹⁸ cm⁻³), the cooling time increases up to almost 1 ps in thick nanoplates (NPs) and cubic nanocrystals (NCs), indicating the hot phonon bottleneck effect. Furthermore, Auger heating resulting from the multi-charge-carrier recombination process slows down the relaxation even further, to tens and hundreds of picoseconds. These two processes could only be well disentangled by analyzing simultaneously the spectral lineshape and amplitude evolution. Introduction Thanks to their outstanding properties, lead halide perovskites have emerged as extremely promising low-cost processing materials for several optoelectronic applications such as solar cells [1,2], photo-detectors [3,4], light-emitting diodes [5,6] and lasers [7,8]. For all these applications, it is of major importance to characterize the rate at which hot charge carriers relax to the band-edge (cooling). For example, slow charge carrier cooling is advantageous in single-junction solar cells, where the extraction of hot charge carriers could allow the Shockley-Queisser limit to be surpassed [5]. In particular, the confinement effect in perovskite nanostructures could potentially slow down the cooling process by orders of magnitude through the intrinsic phonon bottleneck effect [6]. The most widespread technique to investigate hot charge carrier relaxation in perovskite materials is transient absorption (TA) spectroscopy [7][8][9]. The high-energy tail of the band-edge bleach resulting from the Burstein-Moss effect reflects the population of the continuous high-energy levels above the bandgap and can be described by a Fermi-Dirac distribution with a characteristic carrier temperature, Tc. The investigation of the full cooling dynamics using this conventional method is based on the extraction of a large number of Tc values by tail fitting of the TA spectra over a wide range of times t (several orders of magnitude). Typical fs-TA spectroscopy gives access to the relaxation dynamics from sub-ps timescales to several nanoseconds with a well-defined excitation fluence and energy. However, the Tc values obtained from this physically motivated approach are not necessarily accurate. In particular, they have been found to be strongly dependent on the energy range used for the tail fitting [9,10]. For instance, in the region of interest, the bleach signal will be more or less superposed with the large broadband photo-induced absorption signal at energies above the band-edge [11], which can affect the fit. Moreover, nanocrystal samples are prone to inhomogeneous spectral lineshapes due to size dispersion, which can artificially induce higher apparent carrier temperatures.
All of those imprecisions in the determination of Tc have a direct impact on the energy-loss rate values, proportional to dTc/dt, used to compare one sample to another in terms of composition or confinement effects [9]. In spite of such concerns, most of the reports use this conventional approach to extract carrier cooling dynamics not only in bulk hybrid halide perovskite materials (thin films) [7,8,12], but also in halide perovskite nanocrystals (NCs) [10,12,13]. In all these cases, this method can be applied as long as the energy level spacing remains below the thermal energy. In such polar semiconductors, the relaxation is governed by fast carrier-optical phonon scattering [14]. Reported carrier cooling times range from 210 to 600 fs in MAPbI3 (MA = methylammonium) thin films [7] or from 100 to 800 fs in MAPbBr3 thin films [13] with carrier densities from about 10¹⁷ to 10¹⁹ cm⁻³ and a similar excess of energy. This increase in cooling times with the initial excitation fluence is known as the hot phonon bottleneck effect [15]. It is worth pointing out that in such bulk perovskite materials or weakly confined nanocrystals, in particular for iodide-based materials, the main photo-generated species are free carriers and not excitons [13][14][15]. In more strongly confined perovskite nanostructures, the conventional approach of extracting Tc is invalid. Alternatively, researchers use the buildup of the band-edge bleach over time to follow the cooling dynamics [16,17]. However, single-wavelength trace analysis can be strongly complicated by excitonic effects such as Stark effects or coupled optical transitions [18,19]. For weakly confined MAPbBr3 NCs, slightly slower carrier cooling has been observed than in the bulk counterpart when excited at 400 nm, while not much difference could be observed when comparing the energy-loss rate [13]. In a subsequent work, the cooling times at the same excitation wavelength and low carrier density (~10¹⁷ cm⁻³) were found to be slightly dependent on the composition of lead bromide perovskite NCs with different cations, Cs, MA and FA: 310, 235 and 180 fs, respectively [16]. At high carrier densities (~10¹⁹ cm⁻³), the decay of Tc with t presents an additional component on a timescale an order of magnitude longer: 5 ps for CsPbBr3 and 3 ps for MAPbBr3 and FAPbBr3 NCs [16], and even 10-30 ps in MAPbBr3 NCs [13]. The picosecond component was also reported in FAPbI3 NCs, together with another of a few hundred ps [10] that could not be observed in the shorter time range experiments of the previous authors. These substantially longer cooling times can be understood in terms of enhanced multi-particle processes in confined systems, such as Auger recombination, which leads to a re-heating effect [13,20] and thus further increases the apparent cooling times. Here, we present a study of hot carrier relaxation dynamics in weakly confined FAPbI3 nanoplates (NPs) and nanocrystals (NCs) using femtosecond transient absorption spectroscopy at different excitation fluences and photon energies. Special attention is given to the way of extracting carrier cooling times during the first few picoseconds. We compare the conventional method, i.e., the fitting of the high-energy tail of the band-edge bleach, with a global analysis method using singular value decomposition (SVD). This latter method allows us to link the evolution in time of the charge carrier temperature and population (density).
Additionally, the versatility of the global method allows us to extend the cooling analysis to a few nanoseconds, thereby covering three distinct characteristic time regimes. Material Synthesis and Characterization Colloidal FAPbI3 perovskite nanocrystals (NCs) and thick nanoplates (NPs) were synthesized by employing two different synthetic approaches. Cubic-shaped NCs were synthesized following a protocol based on the "hot-injection" (HI) crystallization method [21]. Thick FAPbI3 nanoplates were synthesized at room temperature based on the "ligand-assisted re-precipitation" (LARP) method [22]. The shape and size of the prepared colloidal perovskite nanostructures were determined by transmission electron microscopy (TEM) using a JEOL 2100 equipped with a LaB6 filament and operating at 200 kV. Detailed synthetic methods are described in the Supplementary Materials with the list of chemicals used, from Sigma Aldrich, Alfa Aesar and Acros Organics. Steady-State Spectroscopy Absorption measurements were carried out with a UV/Vis Lambda 850 spectrophotometer (Perkin Elmer) covering the 175-900 nm spectral range. Photoluminescence measurements were carried out with a Fluorolog 3-22 spectrofluorometer (HORIBA JOBIN-YVON, Chilly Mazarin, France) equipped with an R928P photomultiplier tube detector (200-870 nm, Hamamatsu, Massy, France) and a continuous-wave xenon arc lamp (450 W, 250-2500 nm). An ultraviolet-enhanced silicon photodiode reference detector monitored and compensated for variation in the xenon lamp intensity. Time-Resolved Spectroscopy Femtosecond transient absorption (TA) experiments were performed at room temperature using a home-built setup. Briefly, this uses a fundamental beam from an amplified Ti:Sa laser source (3 kHz, 40 fs, 2 W) split into two parts, one to generate the tunable excitation pump pulses from a home-made visible non-collinear optical parametric amplifier (NOPA) and the other to produce the delayed white light continuum probe pulses. A more detailed description can be found in the Supplementary Materials. Different excitation wavelengths were used: 630 nm (1.97 eV) and 520 nm (2.38 eV) for the NPs and 650 nm (1.91 eV) for the cubic NCs. The excitation fluence was chosen between 6 and 60 µJ/cm² depending on the experiment. The maximum pump-probe time delay that could be reached was about 3.2 ns and the temporal resolution was about 130 fs, estimated from cross-phase modulation. Post-acquisition data treatments are also described in the Supplementary Materials (e.g., chirp correction). Experiments were performed in transmission at room temperature in a 1-mm-thick circulating cell connected to a peristaltic pump to prevent photo-degradation at the focus position and photo-charging effects. The colloidal NCs and NPs were dispersed in anhydrous toluene or chloroform, with an optical density of less than 0.3 at the excitation wavelength and above. Conventional Method: Tail-Fitting The high-energy tail reflects the distribution of the thermalized charge carriers occupying the quasi-continuous high-energy levels, since the energy spacing remains small in comparison with kBT at room temperature in our weakly confined nanostructures [10,13]. This hot population follows a Fermi-Dirac distribution, which can be approximated by a Maxwell-Boltzmann function [15]: where ∆A is the transient absorption signal, A0 is a constant, hν is the detected energy, kB is the Boltzmann constant and Tc is the charge carrier temperature.
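The expression referenced here as Equation (1) did not survive extraction. Based on the symbol definitions just given, a common form of the Maxwell-Boltzmann approximation used for such tail fits is the following; the exact expression used by the authors is an assumption here:

```latex
\Delta A(h\nu) \;\simeq\; A_0 \exp\!\left(-\frac{h\nu}{k_{\mathrm{B}} T_c}\right)
```

In practice, Tc is then obtained from the slope of ln|ΔA| versus hν over the chosen high-energy fitting window.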
The PIA corresponds to the broadband photo-induced absorption signal observed at energies above the band-edge. Global Analysis The relaxation dynamics were accessed using a global analysis method based on singular value decomposition (SVD) to follow the evolution in time of the complex TA signal with several overlapping spectral components [16,23,24]. The open-source software Glotaran [25] was used to perform the global analysis of the TA data. The sequential kinetic model, with spectral components known as Evolution Associated Spectra (EASi) related by successively decreasing rate constants (ki = 1/τi), is given by the following equation: with the associated time-dependent amplitude Mi(t): assuming a Gaussian instrument response function (IRF) with full width at half maximum ∆, centered at t0 (time zero of the pump-probe overlap) [26], and with ∆̃ = ∆/(2√(2 ln 2)). In this first-order approximation, Mi is a mono-exponential decay function (first term) that is convoluted with the IRF (giving the second term). The last term is a step function, more or less smooth depending on ∆, which generates the signal starting around t0. Here, the EASi represent the different spectral contributions of the system state. This independent analysis of spectral and temporal characteristics facilitates the identification of the corresponding processes. In the simplest case, with only an initial (EASi) and a final (EASf) state during the first relaxation stage, the time-dependent carrier temperatures can be extracted from the corresponding spectral components assuming a thermalized distribution of the charge carriers. Furthermore, the time constant τ1 = 1/k1 obtained from the global data analysis can be used to simulate the evolution of the carrier temperature Tc using the following equation: where Ti and Tf are the initial and final carrier temperatures obtained from the EASi and EASf by the conventional method explained above. Sample Characterization The steady-state absorption and emission spectra of the cubic-shaped FAPbI3 nanocrystals (NCs) and the thick FAPbI3 nanoplates (NPs) are shown in Figure 1a. The absorption spectra of both samples extend to the near-infrared and present no well-defined excitonic structure, as expected in these weakly confined materials characterized by a small exciton binding energy. In the absence of a well-defined feature in the absorption spectrum of the two samples, the bandgap was estimated from the central energy of the photo-induced band-edge bleach of the TA data on a long time scale (away from the Burstein-Moss shift effects). It is 1.66 eV (745 nm) for the cubic-shaped NCs and 1.62 eV (765 nm) for the thick NPs. The cubic NCs exhibit a photoluminescence (PL) peak centered around 1.65 eV (752 nm), while the PL maximum of the thick nanoplates is at 1.61 eV (770 nm). The blue-shifted PL suggests some degree of confinement when compared with the typical 820-840 nm PL emission of bulk FAPbI3 thin films [27] and macrocrystals [28].
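The expressions referenced as Equations (2)-(4) in the Global Analysis subsection above did not survive extraction either. Forms consistent with the verbal description (a sum of EAS weighted by amplitudes that are mono-exponential decays convoluted with a Gaussian IRF, ending in a smooth step) are written below under those assumptions, not as the authors' exact formulas:

```latex
\Delta A(h\nu, t) \;=\; \sum_i M_i(t)\,\mathrm{EAS}_i(h\nu) \qquad (2)
M_i(t) \;=\; e^{-k_i (t - t_0)}\; e^{k_i^2 \tilde{\Delta}^2 / 2}\;
\tfrac{1}{2}\!\left\{ 1 + \operatorname{erf}\!\left[ \frac{t - t_0 - k_i \tilde{\Delta}^2}{\sqrt{2}\,\tilde{\Delta}} \right] \right\},
\qquad \tilde{\Delta} = \frac{\Delta}{2\sqrt{2\ln 2}} \qquad (3)
T_c(t) \;\approx\; T_f + \left(T_i - T_f\right) e^{-t/\tau_1} \qquad (4)
```

Equation (4), so written, simply interpolates the carrier temperature between the values extracted from the initial and final EAS with the single rate constant obtained from the global fit.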
The NP thickness is similar to the average size of the cubic NCs, while the optical spectrum is slightly red-shifted in comparison due to the larger lateral dimensions of the NPs. The histograms of the measured sizes (extracted using the ImageJ program) are displayed in Figure 1b with representative TEM pictures. The average size of the cubic FAPbI3 NCs is 12 ± 2 nm, while the NP lateral dimensions were found to be 70 ± 20 nm and their thickness about 11 nm (Figure 1b, inset). This average thickness corresponds to approximately 18 monolayers considering a thickness along the <100> crystallographic direction (0.6 nm each). The sizes and linear spectra of both samples are in agreement with weak confinement when compared with the ≈5 nm exciton Bohr radius reported for FAPbI3 [29]. Carrier Relaxation Dynamics In order to make sure that the full cooling dynamics are captured, we performed femtosecond TA experiments over five decades of time, from hundreds of femtoseconds up to a few nanoseconds. The TA spectra during the first 3 ps of the thick colloidal FAPbI3 nanoplates (NPs) excited at 1.97 eV (630 nm) are shown in Figure 2a. This excitation above the bandgap corresponds to an excess energy of about 350 meV. The TA spectra present the same characteristic features as hybrid perovskite thin films [30]: (1) a large bleach signal at about 1.7 eV corresponding to band-edge state filling and extending to higher energies at early times (Burstein-Moss shift), (2) a second bleach signal at higher energy (>2.6 eV, out of scale) involving a higher-energy transition and (3) a broad photo-induced absorption (PIA). It should be noted that the sign of the TA spectra in Figure 2a was inverted before normalization, so that negative features appear positive and vice versa. As discussed in the Introduction, the high-energy tail reflects the distribution of the thermalized charge carriers occupying the quasi-continuous high-energy levels, since the energy spacing in these nanostructures remains small in comparison with kBT at room temperature [10,13].
Figure 2. (b) Time-dependent carrier temperatures Tc of FAPbI3 NPs extracted over five decades of time from the TA spectra for a 630 nm pump excitation at 6 µJ/cm² (black dots) and 60 µJ/cm² (wine-colored dots) using the conventional tail-fitting method. The multiexponential fits of these decays are displayed with solid lines (parameters in Table 1). Using Equation (1) to fit the high-energy tail (1.9-2.1 eV) of the band-edge bleach of each TA spectrum (see for example Figure 2a), time-dependent carrier temperatures were extracted as described in the Materials and Methods section. Typical Tc curves obtained for moderate (6 µJ/cm²) and high excitation fluence (60 µJ/cm²), ranging from 300 fs to 3 ns, are displayed in Figure 2b for the FAPbI3 NPs. The average densities of electron-hole pairs created per pulse for these two fluences are estimated to be 5.6 × 10¹⁷ and 5.6 × 10¹⁸ cm⁻³, respectively (see calculation in the Supplementary Materials). These curves can be fitted satisfactorily using a multi-exponential decay function (R² factors of 0.924 and 0.984, respectively), with the resulting parameters given in Table 1.

Table 1. Fit parameters for a tri-exponential decay of the charge carrier temperature (Tc) at moderate and high excitation fluence. We use the following equation:
Fluence (µJ/cm²) | A1 (%) | τ1 (ps) | A2 (%) | τ2 (ps) | A3 (%) | τ3 (ps) | T0 (K)
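The tri-exponential expression referenced in the Table 1 caption, and the table's numerical entries, did not survive extraction. A minimal fitting sketch is given below under the assumption that the decay has the form Tc(t) = T0 + ΔT[A1 exp(−t/τ1) + A2 exp(−t/τ2) + A3 exp(−t/τ3)] with fractional amplitudes (the amplitudes in Table 1 are quoted as %); the functional form, the function names and the starting guesses are illustrative assumptions, not the authors' values.

```python
import numpy as np
from scipy.optimize import curve_fit

def tc_model(t, dT, a1, tau1, a2, tau2, a3, tau3, T0):
    """Assumed tri-exponential carrier-temperature decay:
    Tc(t) = T0 + dT * (a1*exp(-t/tau1) + a2*exp(-t/tau2) + a3*exp(-t/tau3))."""
    return T0 + dT * (a1 * np.exp(-t / tau1)
                      + a2 * np.exp(-t / tau2)
                      + a3 * np.exp(-t / tau3))

def fit_tc(t_ps, tc_K):
    """Fit Tc(t) extracted from the tail fits; t in ps, Tc in K.
    Rough starting guesses: hot carriers relaxing with sub-ps, tens-of-ps and
    hundreds-of-ps components toward a final temperature near room temperature."""
    p0 = [700.0, 0.7, 0.5, 0.2, 30.0, 0.1, 400.0, 350.0]
    popt, pcov = curve_fit(tc_model, t_ps, tc_K, p0=p0, maxfev=20000)
    return popt, np.sqrt(np.diag(pcov))
```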
We first focused on the short picosecond time range (0-3 ps), where the amplitude of the associated time constant represents 90% of the total relaxation in terms of temperature decay at moderate excitation fluence and 70% at high fluence, as shown by the amplitudes of the tri-exponential fit in Table 1. As outlined in the Introduction, the extraction of the carrier temperature by the tail-fitting method can be relatively inaccurate, so we turned to the global analysis (see the Materials and Methods section above) in order to follow the evolution of the full lineshape in time. For that, we used two spectral components, EAS1 and EAS2, with corresponding kinetic rates k1 and k2, such that EAS1 rises during the interaction with the pump pulse and decays exponentially with k1, while EAS2 rises with k1 and then decays exponentially with k2 (see Equations (2) and (3)). This approach satisfactorily reproduces the experimental TA data for all the excitation fluences, as can be seen in selected decay traces in Figure A1c,d in Appendix A. Moreover, no significant improvement was obtained in the Glotaran root mean square (RMS) deviation when using more than two components for the global fit. An example of the EAS spectral components obtained with this sequential model is given for high fluence (60 µJ/cm²) in Figure 3a. The spectral components at different excitation fluences are displayed in Appendix A. It should be noted that during the first picosecond time range, EAS1 evolves into EAS2, presenting a much reduced high-energy tail.
A comparison between the carrier temperature decays for the FAPbI3 NPs obtained by the conventional high-energy tail-fitting method (dots) and the global analysis (lines) is shown in Figure 3b. The resulting kinetic parameters from the global analysis are given in Table 2. We consider the evolution from EAS1 to EAS2 with the rate constant k1 as the early-stage relaxation of the hot charge carriers to the band-edge (hot carrier cooling). The resulting associated lifetime τ1 increases with the excitation fluence from 360 ± 20 fs at 6 µJ/cm² to 970 ± 80 fs at 60 µJ/cm², which is, as we will discuss later, typical of the hot phonon bottleneck effect. In order to study the influence of the excitation photon energy on the relaxation dynamics of the FAPbI3 nanostructures, we performed similar experiments by exciting the NPs at higher photon energy (520 nm, excess energy of about 760 meV). Moreover, to see the influence of the morphology and/or surface ligands on this early stage of relaxation, additional measurements were performed on the cubic-shaped NCs excited at 650 nm (about 240 meV above their bandgap energy of 1.67 eV). The experimental data were analyzed using the global analysis method described above. All the relaxation rates k1 obtained at different excitation fluences are plotted in Figure 4a.
At the lowest fluences, the cooling is relatively fast and depends on the excitation photon energy: τ1 = 1/k1 ≈ 411 ± 1 fs for the thick FAPbI3 nanoplates excited at 520 nm and 350 ± 10 fs when excited at 630 nm. While physically it takes a longer time to fully relax to the band-edge in the case of a higher excess energy, the "global energy-loss rate", defined as the initial excess energy (∆E = hν − Eg) divided by the cooling time (τ1), is 1.9 and 1.0 eV/ps, respectively (Figure 4b). It is thus higher for excitation at 520 nm than at 630 nm, as previously reported in bulk thin-film perovskites [14]. For the cubic-shaped FAPbI3 NCs excited at 650 nm, we obtained τ1 ≈ 240 fs, corresponding to a global energy-loss rate of about 0.99 eV ps⁻¹. At higher carrier densities, the initial cooling time τ1 increases up to almost one picosecond for the two FAPbI3 samples, similar to previously reported values for FAPbI3 NCs [10]. This corresponds to an effective energy-loss rate of about 0.33 eV ps⁻¹ for the NPs excited at 630 nm. Returning to the thick NPs excited at 630 nm, after applying the global analysis method to study the early stage of the cooling dynamics in the short time range, we applied it to longer time ranges. Indeed, the carrier temperature still evolves over tens and hundreds of ps, in particular at high excitation fluence (cf. Figure 2b). In order to cover the full nanosecond time range with appropriate time steps, we divided the experiments into a middle time range, up to 200 ps with a step of 1 ps, and a long time range, up to 3.2 ns with steps of 20 ps. For these two longer time ranges, three kinetic components were needed to reproduce the data well (the Glotaran RMS deviations were significantly lower for three components than for two). The three EAS components at high excitation fluence (60 µJ/cm²) are shown in Figure 5a for the middle time range up to 200 ps and in Figure 5b for the long time range up to 3.2 ns. Table 3. Fit parameters for a tri-exponential decay extracted from global analysis at different excitation fluences. For the middle time range analysis, the characteristic time τ1,m corresponding to the evolution from EAS1 to EAS2 ranges from sub-ps to a few ps and thus corresponds well to the initial stage of the relaxation discussed above. However, we should note that the time step of 1 ps does not allow this time constant to be determined with accuracy, especially at low excitation fluence. On the other hand, the second and third time constants are about 25-40 ps (EAS2 to EAS3) and 280-510 ps (decay of EAS3), respectively. Here, a clear decrease with the excitation fluence can be observed in both time constants. In the case of the long time range extending up to about 3 ns, the characteristic times are about 60-80 ps (EAS1 to EAS2) and 500-800 ps (EAS2 to EAS3). As in the analysis of the middle time range, the first time constant τ1,l decreases with the excitation fluence. Curiously, an opposite trend was observed for the second time constant τ2,l. In addition, a time constant τ3,l on the order of several ns was obtained from the fit, but since it is outside the measured time range we do not take it into account.
Limitations of the High-Energy Tail Fitting Method Although using the classical tail-fitting method to extract the carrier temperature Tc from the TA spectra, and then plotting the energy-loss rate versus this temperature, allows the cooling dynamics to be compared between different samples or between different initial excess energies for a given sample [13,14], we found it quite sensitive to the energy range used for the fit (Figure 2a). While some authors suggest a minimum range of 0.2 eV to ensure a mono-exponential decay for the fit with Equation (1) [10], an extended energy range leads to an overlap of the band-edge bleach with the PIA signal, which may introduce errors in the values of the extracted Tc. Moreover, for a high density of photo-generated charge carriers, a non-negligible bandgap renormalization occurs, causing an important red-shift of the bleach [23]. This Coulomb screening effect has the opposite trend to the Burstein-Moss shift [31]. Even if the effective bandgap energy Eeff is modified by this bandgap renormalization, its influence on the Tc value remains unclear. Finally, in the case of confined systems, the extracted carrier temperature can be artificially overestimated owing to a broad size distribution. The high-energy tail of the PL spectrum (Figure 1a) can be understood as the presence of smaller-sized nanoplates produced by the "LARP" method. This can also result in a tail in the absorption band and can thus be confused with a higher temperature of the thick NPs. Experimentally, the final carrier temperature obtained for the NPs was higher than the lattice temperature expected at room conditions, T ≈ 300 K (Figure 2b). In spite of this, the overall time dependence of the carrier temperature can be effectively described by a multi-exponential decay function, in line with previously reported relaxation dynamics in other FAPbI3 NCs [10]. In our case, the time constants reproduce the extracted Tc values satisfactorily over the full nanosecond time range (Figure 2b). The resulting fit parameters presented in Table 1 correspond to a first component (τ1) of hundreds of femtoseconds to picoseconds, a second component (τ2) of tens of ps and a last component (τ3) of several hundreds of ps to ns. We will see that these characteristic time components are retrieved in the global analysis when applied to different time ranges. Global Analysis Versus High-Energy Tail Fitting over Short Times Focusing on the time evolution during the first few picoseconds, the evolution from a specific "hot" state of the system, represented by EAS1, to another specific state, EAS2, can be considered a phenomenological approximation to determine the cooling dynamics. In this sense, the system physically evolves through a continuum of states, each of them described by its own Tc. However, as EAS1 and EAS2 overlap over an important spectral range (Figure 3a), a sequence of linear combinations of them produces an effective continuous shift of the band-edge bleach and a successive decrease in the high-energy tail over the whole time range of the experiment. Finally, the two resulting curves obtained by the tail-fitting method and the global analysis are in very good agreement during this first picosecond time range (see Figure 3b). The drawback of the conventional approach in terms of characteristic cooling time is absent in the global analysis method, as it does not rely on a subjective tail fit but results from the analysis of the full spectral evolution.
The sequential model, where EAS1 decays with a characteristic rate k1 to the EAS2 state, gives good agreement between the experimental and fitted data, as can be seen in the decay traces at the band-edge and in the PIA region for all the excitation fluences (see Figure A1c,d). On the Hot Carrier Relaxation Mechanisms Several mechanisms can contribute to the charge carrier relaxation and recombination dynamics in these weakly confined samples: hot charge carrier cooling through carrier-longitudinal-optical (LO) phonon scattering, Auger recombination (Auger heating) and electron-hole recombination. The associated processes can lead to different recombination orders (e.g., mono-, bi- or tri-molecular) depending on their physical nature. They appear in different time ranges, as they are dominant under specific conditions such as the charge carrier density. LO-Phonon Scattering as the First Relaxation Stage Using both the conventional and the global analysis methods, the fast sub-ps to ps component is strongly dependent on the excitation fluence. The characteristic time of this first decay increases from 0.25 to 2.1 ps with the tail fitting and from 0.36 to 0.97 ps with the global analysis when going from 6 to 60 µJ/cm². We attribute the evolution from EAS1 to EAS2, and the corresponding rate constant k1, to the early-stage relaxation of a hot charge carrier population to a much more relaxed one through carrier-LO-phonon scattering, in agreement with previous work reported in bulk and bulk-like hybrid lead halide perovskites [7][8][9][14]. The slowdown of the relaxation at high charge carrier density is characteristic of the hot phonon bottleneck effect. Although the exact mechanism of the hot phonon bottleneck effect is still under debate [30], the optical phonon mode(s) involved can be assigned to the Pb-I inorganic lattice with no (or weak) contribution from the organic cation vibrational modes [32,33]. The good agreement between the experimental data and a simple mono-exponential decay in this short time range is rather surprising, since a more complex Tc evolution in time was expected [9,13,14]. Auger Recombination Further Slows Down the Relaxation in a Longer Time Range In the tens to hundreds of picoseconds time range, the high-energy tail of the bleach is still evolving, in particular at high excitation fluence, indicating that some additional, slower mechanism is involved in the hot charge carrier relaxation. It can be seen by comparing EAS1 and EAS2 of the middle time range in Figure 5a that the total amplitude of the band-edge bleach does not evolve much during the first 10 ps. This means that the population (i.e., the number of charge carriers) remains approximately constant during this initial stage of relaxation. The first time constant τ1,m, covering the range from sub-ps to a few picoseconds in the middle time range experiments, corresponds well to the initial stage of relaxation described above by τ1 in the short time range. While the total areas of the EAS1 and EAS2 bleach signals in the middle time range experiments are rather similar, a strong diminution of the signal was observed during the second stage, covering several tens of ps. This can be observed by comparing EAS2 to EAS3 in the middle time range experiments, associated with time constant τ2,m (Figure 5a), or EAS1 to EAS2 in the long time range data, associated with τ1,l (Figure 5b).
At longer times, we found almost no evolution of the spectral lineshapes, while the amplitude still evolves with characteristic times τ2,l of a few hundreds of picoseconds and τ3,l of several nanoseconds (Figure 5b, inset). The change in time of the bleach amplitude ∆A gives information on the evolution of the carrier concentration due to mono-, bi- and tri-molecular recombinations, depending on the processes taking place [34,35]. Neither charge carrier trapping [36] nor geminate (or non-geminate) electron-hole recombination should occur on this timescale considering the moderate excitation fluence. From the initial number of electron-hole pairs created per NC volume (i.e., the electron-hole pair density), 5.6 × 10¹⁷ and 5.6 × 10¹⁸ cm⁻³ at excitation fluences of 6 and 60 µJ/cm² respectively, we can calculate the average distance between these pairs to be about 8.5 and 4 nm, respectively. These values are consistent with a fast, three-body Auger recombination [34]. Thus, we attribute the further diminution of the high-energy tail taking place on the time scale of tens of ps to the effect of the non-radiative Auger recombination (i.e., Auger re-heating), as previously discussed for hybrid perovskite nanocrystals [10,12,13]. In agreement with this interpretation, the evaluated Auger recombination time constants τ2,m or τ1,l decreased with the excitation fluence (see Table 3). We note that the Auger time constants τ1,l extracted from the analysis of the long time range are about twice as large as the values τ2,m extracted from the middle time range. This shows that the non-radiative Auger recombination rate is time-dependent and cannot be assigned a well-defined time constant. Indeed, rather than mono-molecular dynamics leading to a single exponential decay, Auger recombination in weakly confined NCs is a tri-molecular process which effectively leads to highly time-dependent exponential decays. That is why the time constants in the tens-of-picoseconds range are correlated with the ones in the hundreds-of-picoseconds range, where the high-energy tail is still evolving, but in a much reduced manner. This also leads us to assign the longer time constants τ3,m and possibly τ2,l to the end of the Auger recombination process. Finally, at the longest measured times, the non-geminate electron-hole recombination leads to the disappearance of the bandgap bleach following bimolecular kinetics not related to cooling dynamics. This might thus partially affect τ3,m and τ2,l, in addition to the τ3,l time component of several nanoseconds assigned to this recombination. Conclusions In conclusion, we have investigated the hot charge carrier cooling dynamics of colloidal FAPbI3 nanostructures in the weak confinement regime. Overall, these results show that the evolution of the lineshape and signal amplitude in time, as obtained from global TA data analysis, allows us to disentangle the different processes behind the charge carrier relaxation and recombination in these samples. The extracted kinetic parameters over short times (the first few picoseconds) give a good description of the main cooling dynamics through LO-phonon emission, with an important hot phonon bottleneck effect slowing down the relaxation to almost 1 ps for a charge carrier density of about 6 × 10¹⁸ cm⁻³.
While more sophisticated analyses going beyond a simple exponential behavior could be applied over longer times, the sequential kinetic model used here successfully shows the involvement of Auger re-heating in both the cooling and the recombination mechanisms from a few tens to hundreds of picoseconds, for a charge carrier density of 10¹⁸-10¹⁹ cm⁻³. Finally, we emphasize the importance of employing the global analysis method, without which independent observations of the time evolution of the lineshape and the amplitude of the very complex TA spectra would have been impossible. Supplementary Materials: The following sections/figures are available online at http://www.mdpi.com/2079-4991/10/10/1897/s1, Material synthesis and characterizations. TA data treatments. Figure S1: TA map of the solvent response (toluene) gives an estimation of the temporal resolution, Figure S2: Plot of the TA maps of FAPbI3 NPs excited at 630 nm, before (left) and after (right) chirp correction. Calculations of the initial electron-hole pair density. Appendix A The EAS spectra are shown in Figure A1 for four different excitation densities. EAS1 displays a characteristic high-energy tail and a pronounced blue-shift compared to the bandgap energy of 1.62 eV.
9,801.4
2020-09-23T00:00:00.000
[ "Materials Science", "Physics" ]
THE PERIGEO PROJECT: INERTIAL AND IMAGING SENSORS PROCESSING, INTEGRATION AND VALIDATION ON UAV PLATFORMS FOR SPACE NAVIGATION The PERIGEO R&D project aims at developing, testing and validating algorithms and/or methods for space missions in various fields of research. This paper focuses on one of the scenarios considered in PERIGEO: navigation for atmospheric flights. Space missions heavily rely on navigation to reach success, and autonomy of on-board navigation systems and sensors is desired to reach new frontiers of space exploration. From the technology side, optical frame cameras, LiDAR and inertial technologies are selected to cover the requirements of such missions. From the processing side, image processing techniques are developed for vision-based relative and absolute navigation, based on point extraction and matching from camera images, and on crater detection and matching in camera and LiDAR images. The current paper addresses the challenges of space navigation, presents the current developments and preliminary results, and describes the payload elements to be integrated in an Unmanned Aerial Vehicle (UAV) for in-flight testing of systems and algorithms. Again, UAVs are key enablers of scientific capabilities, in this case to bridge the gap between laboratory simulation and expensive, real space missions. INTRODUCTION This paper is framed within the PERIGEO project and focuses on a specific goal: space navigation using inertial and imaging sensors for Earth observation and atmospheric flight. Its organization is the following: firstly, a description of the project, its testing and validation facilities and a history review of space navigation is provided. Secondly, specific developments in the simulated environment are presented, including preliminary results, and the integration of sensors and systems within the UAV payload is described. The PERIGEO project The PERIGEO¹ project aims to provide a framework for research and validation of space technology and science by means of Earth-analogue environments. Within PERIGEO, tools and methodologies are designed and implemented to continue the efforts performed in laboratory development, exposing the technology to representative and [to a certain extent] realistic environmental and dynamic conditions, similar to those of space missions. This corresponds to increasing the Technological Readiness Level² (TRL) from 3-4 to TRL 5-6 or higher.
Space engineering is probably one of the most challenging and risky technological disciplines due to the extreme conditions encountered in space (temperature, radiation, gravity), which are hard to reproduce on Earth. Within a mission, space vehicles face several operation phases before reaching their final destination to perform a particular scientific task, e.g. planetary surface exploration. From space shuttle launch to orbiting, rendez-vous, or entry, descent and landing, a single set of on-board systems has to face manifold challenges. Therefore, the specific goals of PERIGEO consist of the following: defining new mission and vehicle designs by multidisciplinary optimization of their characteristics; developing a robust and failure-tolerant control system; creating an integrated research and design process to mature the project developments through a logical work sequence (research / laboratory testing / real-flight testing); and exploring new navigation methods based on imaging and inertial sensors, including their hybridization. PERIGEO thus paves the way for tests and validation of algorithms and/or methods for space missions.

Four different scenarios are considered within the project: Earth Observation, related to GNC and observation data acquisition and processing from artificial satellites; Interplanetary Flight, focusing on the exploration of celestial bodies such as planets or asteroids, aiming at a better understanding of space characteristics and planetary evolution; Atmospheric Flight, related to the GNC aspects of guiding a platform through a planet's atmosphere to obtain in-situ measurements from a celestial body, including the return to the Earth or other bodies after the exploration; and Entry, Descent and Landing (EDL), in which high precision in navigation and environment characterization is required to safely place the vehicle on the surface of the explored body.

Testing and validation of space systems The project defines two testing and validation environments. On one hand, the so-called Dual Synthetic Environment (DSE) facility is devised as a set of hardware and software tools to support the maturation of different methods and technologies related to GNC (Attitude and Orbit Control Systems (AOCS), Global Navigation Satellite Systems (GNSS), image processing, etc.). This facility shall act as a demonstrator to test new, particular solutions within real environments, proving their validity and adequacy for final implementation in a production or market phase. With the DSE, validating a new algorithm or testing a sensor becomes practical and feasible without developing ad-hoc validation frameworks. On the other hand, the use of Unmanned Aerial Vehicles (UAVs) has been identified as a goal-achieving enabler, particularly for in-flight validation of space-related technologies. The high versatility and accessibility of such platforms permit continuous testing of navigation and control algorithms in close-to-real flight conditions. At this point, a new item might be added to the [exhaustive] list of application niches of UAVs: space technology and science validation.
Many scenarios might then be materialized using these two environments. For example, one scenario that has tailored project developments is that of an atmospheric flight on Titan, the largest moon of Saturn. A typical profile for that mission is illustrated in Figure 1. In a hypothetical exploration mission, an airplane would be deployed from a space shuttle at around 40 kilometers of altitude to begin exploration, i.e. upper-troposphere analysis, while descending to a nominal altitude at which surface observation is feasible (one centimeter of Ground Sampling Distance (GSD) is usually considered). Typically, climb-and-glide maneuvers would be performed while transmitting data to Earth ground stations, during continuous observation campaigns that can be as long as months or years, up to the final landing of the vehicle to reach the 'in-surface science phase'. This scenario clearly exemplifies a set of challenges, which are drivers for the developments further explained in this paper, namely:

• Autonomous and real-time navigation. Autonomy in navigation is needed to drive the vehicle along the atmosphere (ground communication to control the aircraft is unaffordable); thus, the navigation solution must be computed and supplied in real time.

• Unavailability of absolute location beacons. In outer space, the use of GNSS or other types of 'absolute navigation beacons' is simply unfeasible. In this case, navigation is uniquely performed through estimation methods fusing information obtained independently of external emitters or aiding sources. [Earth re-entry missions, precisely within the 24,000-kilometer height layer, are also considered as atmospheric flight, and exceptionally GNSS measurements might still be used for navigation.]

• Vision-aided INS navigation. In view of the latter, the navigation approach in atmospheric flights shall be based on sensors such as Inertial Measurement Units (IMU), or [passive] imaging systems such as visible-spectrum cameras, to deliver time-position-velocity-attitude (tPVA). The use of camera-based measurements aims at replacing the complementary nature of GNSS: compensating for IMU-based solution drifts by calibrating sensor errors.

• Relative and absolute navigation. Although no radiobeacons might be considered in place, absolute navigation, i.e. providing a navigation solution referenced to a surface-attached reference frame, is necessary at some point. For that purpose, the identification of [geo-referenced] surface landmarks, e.g. craters, valleys, etc., and the extraction and use of this geo-information for navigation is a common technique in space navigation, and becomes then a goal of the project. Indeed, whenever those landmarks are not available or simply not identified, relative navigation is performed, i.e. orientation of consecutive images through exploiting common image features; as a matter of fact, this is 'the only thing to do' when no absolute information is available.

The use of UAVs is of the utmost convenience in relation to the achievement of such goals. The described mission profile is highly replicable using unmanned platforms, and the technology of interest (sensors, systems and algorithms) can be tested on board, in real payload integration conditions. UAVs bridge the gap between exhaustive laboratory simulation and expensive, ultimate space missions.
Review of space navigation: sensors, systems and algorithms When considering interplanetary missions, the combination of [on-board] autonomy and [ground-based] automation offers significant advantages in providing the necessary Guidance, Navigation and Control (GNC) and mission science capabilities. The need for autonomy is driven both by mission safety (i.e., quick decisions must be taken for successfully performing the mission) and by cost reduction (i.e. reducing ground-operator work hours). Missions requiring long periods without base contact, and the existence of critical phases requiring immediate response, i.e. EDL, make communication-dependent GNC simply unfeasible. This fact dictates a high level of on-board autonomy of the spacecraft, covering system, subsystem and instrument level, without forgetting minimum levels of accuracy, precision and robustness. Nonetheless, and particularly for GNC, Earth-based radio-assistance has traditionally been implemented as the main driver of space platforms to reach solar system bodies (Bernhardt et al., 2011). At the end of the 1950s, a world-wide network of large antennas and communication facilities called the Deep Space Network was established for deep-space vehicle tracking. In addition to radio-assistance, Inertial Measurement Units (IMUs) have also been present since the rise of space exploration: inertial technology was on board the first time mankind left the gravitational influence of the Earth, i.e. on Apollo 8. Inherited from ballistic rockets, IMUs have been and still are used in top-level missions (the Mars Reconnaissance Orbiter, Lunar Reconnaissance Orbiter and Solar TErrestrial RElations Observatory (STEREO) missions included a Miniaturized IMU (MIMU) from Honeywell, and Northrop Grumman's LN200S IMU is present in a wide range of missions, including Mars rovers such as Curiosity).
Increasingly, the use of other on-board autonomous technology has been fostered. NASA's Deep Space 1 mission, back in 1998, introduced the Autonomous Optical Navigation (AutoNav) system complementing its on-board IMU. This system, based on triangulation from images of celestial bodies, was independent from ground signals, as it basically relied on optical cameras as a means of providing measurements for navigation at an affordable cost, size and power consumption. Currently, vision-based systems are essential components of GNC systems in space missions, during all phases of operation. (Massaro, 2013) provides a recent comprehensive review of vision-based spacecraft navigation literature and techniques, with special focus on terrain-relative navigation, demonstrating wide adoption of vision-based solutions in actual systems. Recently, LiDAR technology has also been introduced for space missions, attractive for its superior precision and its independence from lighting conditions, particularly as a means for final approach and landing on an asteroid or planet. As a pioneering example, the Hayabusa mission by the Japan Aerospace Exploration Agency used LiDAR technology to successfully land on the Itokawa asteroid in 2005. Current research is available on the use of LiDAR for Terrain Relative Navigation (TRN) and Hazard Detection and Avoidance (HDA) (de Lafontaine et al., 2008), and the current technology state-of-the-art is based on 3D Flash LiDAR technology (Advanced Scientific Concepts, 2013) providing 2D+1D range images. In parallel to technology, navigation algorithms have had their own evolution in space navigation, all starting from a common origin: the Kalman filter. Following a visit by R. E. Kalman to the NASA office back in 1960 (Grewal and Andrews, 2010), the Apollo mission designers chose a Kalman filter for flying to and back from the Moon, due to its tailoring towards real-time operation and its ability to cope with non-stationary problems. However, the need to deal with non-linearity and non-Gaussianity conditions, which hold for example in low-fuel missions where nonlinearities are exploited to reduce fuel consumption (Grover and Sato, 2012), has pushed researchers to investigate the use of advanced filters, e.g. particle filters, for space navigation (Ning and Fang, 2008), (Ke et al., 2013).

Despite the criticality of space missions, on-board technology is (sometimes surprisingly) primitive due to limitations on the power available on board, and due to other primary design concerns such as radiation resiliency, fault-tolerance in electronics, etc. As an illustrating fact, the Voyager-I platform, launched in 1977 and the first man-made object to reach interstellar space, carried three computers able to process 8 kilo-instructions per second; yet a regular smartphone processes 14 billion instructions per second. More modern architectures, such as the UT699 LEON3FT SPARC V8 core, achieve approximately 53 MIPS of throughput using a 66 MHz base clock frequency; this performance is comparable to an Intel 486DX released in 1992. Thus, space developments shall always be constrained to severe hardware limitations, and on-board algorithms and methods shall be remarkably 'light'.

NAVIGATION CONCEPT IN PERIGEO In view of the context described along the paper, the navigation concept proposed in PERIGEO accounts for a manifold of functionalities:

• The navigation sensor set shall consist of inertial and imaging sensors, i.e. an optical frame camera. The use of GNSS is restricted to Earth operations, e.g. satellite observation, Earth re-entry, etc.

• Absolute and relative navigation modes shall be operational.

• Fault-resiliency mechanisms, including hardware and software, shall be in place to deliver robust navigation.

• Algorithms and methods for sensor integration shall be austere in terms of computational burden, mirroring on-board GNC systems, and the scalability of the system shall be characterized to propose upgrades of technology TRL.
Coarsely, relative navigation refers to the propagation of the navigation states (within this paper, time-position-velocity-attitude, tPVA), expressed in a convenient reference frame, e.g. global, local surface-attached, or local instrumental, between two epochs in which the navigation states have been estimated using external absolute information, i.e. absolute navigation, such as previously geo-referenced ground landmarks or GNSS measurements, if available. Inertial technology provides inherently relative information: acceleration and rotation-rate measurements are differential magnitudes over time. Nonetheless, absolute inertial navigation is achieved simply by numerical integration of the well-known mechanization equations, when expressed in a global reference frame (Rosales and Colomina, 2008). It is also well known that the inherently relative nature of the measurements translates directly into drifts of the navigation solution over time; this is why absolute updates, e.g. GNSS, suitably complement IMU-based navigation. Now, when using camera images, absolute visual-based navigation is achieved by relating 2D measurements, i.e. extracted from images, with 3D measurements, i.e. ground elements. In contrast, when establishing 2D-2D relations between measurements, i.e. image-to-image, relative navigation is performed, accounting for a particular issue: the scale of the model cannot be retrieved (Horn, 1990). In other words, one degree of freedom of the translation vector between two images is unobservable.

This section presents the two proposed scenarios for testing the PERIGEO developments: the simulated environment and the representative environment. For the first scenario, and in order to tailor algorithm development towards real space environments, a tool named Planet and Asteroid Natural Scene Generation Utility (PANGU) has been used to obtain simulated optical images and digital elevation models of the Moon (Rowell et al., 2012). In addition, the OpenCV image processing library (Bradski, 2000) has been used to benefit from low-level handling and processing mechanisms for images. Preliminary results are presented in the simulated environment. For the second scenario, hardware and software developments are integrated into a UAV for real, in-flight testing in representative environments. The first flight campaign will take place during the second half of 2014.

Simulated environment: Moon images Vision-based relative navigation using point extraction and matching. Relative navigation is approached through the extraction and matching of particular features of interest in camera images. Every image acquired by the camera is processed in search of distinctive points, i.e. pixels that have a particular response in terms of intensity. Once a set of points is extracted from the current image, it is matched to the set of points extracted from a previous image. A large body of literature is available on this topic, resulting in a myriad of methods to perform such a task, as reviewed in (Leutenegger et al., 2010). In view of the computational burden restrictions, the use of binary descriptors was considered, resulting in a BRISK-based implementation.
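To make this step concrete, the following minimal sketch reproduces the described pipeline with OpenCV's BRISK detector and Hamming-norm brute-force matching. The image file names are placeholders, and the actual PERIGEO implementation is not public, so this is an illustration rather than the project's code.

```python
# Minimal sketch: BRISK point extraction and Hamming-norm matching with OpenCV.
# Image file names are placeholders, not PERIGEO data.
import cv2

img_prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
img_curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

brisk = cv2.BRISK_create()                      # binary descriptor (low computational burden)
kp1, des1 = brisk.detectAndCompute(img_prev, None)
kp2, des2 = brisk.detectAndCompute(img_curr, None)

# Brute-force matching with the Hamming norm; cross-check discards asymmetric matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Matched image coordinates, ready for the coplanarity-based relative orientation.
pts_prev = [kp1[m.queryIdx].pt for m in matches]
pts_curr = [kp2[m.trainIdx].pt for m in matches]
```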
Once the image measurements are available, the well-known coplanarity model (Luhmann et al., 2006) is used to solve five degrees of freedom among the set of position and attitude states. More specifically, given the parameters t (the translation vector) and R (the relative attitude) between the two perspective centres of two different, overlapping images, and given the observations (pairs of matched points), a non-linear least-squares problem is formulated using the 'coplanarity' observation equation. Via an iterative approach, estimates of the relative orientation, i.e. (t, R), are provided. [Note that, as one degree of freedom corresponding to the translation vector cannot be estimated, one shall work with conveniently normalized vectors.] The initialization of the least-squares scheme can be solved by accounting for particular motion characteristics of the platform, i.e. forward motion and small attitude variations between epochs.

At this point, robustness is an issue to be dealt with. Automatic image measurement and matching techniques are prone to deliver [a large amount of] outliers, i.e. image points that are incorrectly matched. Indeed, environments featuring highly repetitive patterns (urban scenarios, dense forests) or low-textured scenes (planetary surfaces) are conflictive, and shall implement robustness mechanisms. In PERIGEO, the envisioned strategy is the following: after extracting and matching image points, a prediction of the orientation of the second image is performed (the first image is already oriented). Then, the parallax for each matched pair of points, i.e. the difference between their ground projections, is compared to a chosen threshold. By doing so, outliers and inliers are clearly separated. The aforementioned prediction might be achieved by propagating the navigation states through a low-complexity dynamic model, e.g. 'naïve' motion, or more complex models, e.g. the inertial mechanization equations using IMU observables. Figure 2 depicts matching results using BRISK and descriptor matching by vector comparison using the Hamming norm, based on the previously described approach, i.e. parallax-based outlier removal. These particular results show that 144 out of 386 matched points were actually 'correct' (green lines in the figure, outliers not drawn for clarity), where correctness was defined by a threshold of 0.5 meters in parallax.
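The parallax test described above can be sketched as follows, under simplifying assumptions that are ours, not the paper's: a pinhole camera with known intrinsics and flat terrain at zero height; ground_projection and parallax_inliers are hypothetical helper names.

```python
import numpy as np

def ground_projection(pt, K, R, C):
    """Intersect the viewing ray of pixel pt with the plane z = 0.
    K: 3x3 intrinsics, R: world-to-camera rotation, C: camera centre in the world frame."""
    ray_cam = np.linalg.inv(K) @ np.array([pt[0], pt[1], 1.0])
    ray_world = R.T @ ray_cam                  # rotate the ray into the world frame
    s = -C[2] / ray_world[2]                   # scale so the ray reaches z = 0
    return C + s * ray_world

def parallax_inliers(pts1, pts2, K, R1, C1, R2, C2, threshold=0.5):
    """Keep pairs whose ground projections from both images agree within `threshold` metres."""
    inliers = []
    for p1, p2 in zip(pts1, pts2):
        g1 = ground_projection(p1, K, R1, C1)  # image 1: already oriented
        g2 = ground_projection(p2, K, R2, C2)  # image 2: predicted orientation
        if np.linalg.norm(g1 - g2) < threshold:
            inliers.append((p1, p2))
    return inliers
```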
Crater-based absolute navigation. An important image processing task within the project is crater detection in camera images to approach absolute navigation, but also the extraction of craters in LiDAR images, aiming at camera/LiDAR co-registration. Craters are common features on outer space bodies, and have been considered as navigation landmarks in the literature (van Pham et al., 2010). An algorithm based on contour extraction has been developed to identify crater borders or rims, implemented in two modes: processing camera images and LiDAR images. [Note that we refer to the LiDAR-based digital elevation models as 'LiDAR images', considering 'height' as 'pixel intensity'.] For camera images, the algorithm combines the illuminated and non-illuminated parts of the image to identify craters. The current version of the algorithm presents small mis-association issues (Figure 3, leftmost), and few positive detections occur at low Sun elevation (Figure 3, left-center); yet, crater detection is feasible regardless of illumination conditions. For the LiDAR images, Sobel gradient operators are applied in the horizontal and vertical directions to extract the variations of the terrain (Figure 3, right-center). The key fact is that a crater responds to the Sobel operator in all directions, thus enabling its detection (Figure 3, rightmost).

In order to perform absolute navigation, 2D information, i.e. extracted craters, shall be associated to 3D information, i.e. coordinates of the craters in a global frame. For that purpose, a procedure has been developed to describe and match craters: firstly, a descriptor is built for each group of three craters, encapsulating its local morphology, i.e. normalized distances between crater centroids and normalized areas; secondly, the descriptors are compared and, thanks to the ordination of each group of three craters, a figure of merit for each pair of matched craters is provided. Note that this procedure is independent of the imaging source from which the craters have been extracted. Thus, in the case of absolute navigation, the extracted craters are matched to craters pre-extracted from geo-referenced imagery. By doing so, 3D coordinates are associated to camera-image crater centroids. At this point, the exterior orientation problem, alternatively known as the Perspective-n-Point (PnP) problem, is solved using the approach described in (Garro et al., 2012). The same crater-matching approach is applied to camera and LiDAR craters to perform co-registration.

Preliminary results for relative and absolute navigation. For testing purposes, a trajectory was simulated consisting of forward motion towards North with 'descend-ascend' manoeuvres, similar to the Titan flight, with occasional roll and heading turns of up to ten and thirty degrees, respectively. A set of 19 simulated image pairs was processed to perform relative and absolute navigation. That is, on one side, point extraction and matching was performed, and known image orientations were used to remove outliers with a parallax threshold of 0.5 meters; on the other side, craters were extracted from the images and, in image pairs 15 and 16, the extracted craters were provided with 3D coordinates in a surface-attached reference frame. Results in position and attitude are provided in Figure 4, comparing a relative navigation solution (green) and a relative/absolute navigation solution (blue), in the terms previously described.
In position estimation for relative navigation, the accumulated along-track and height errors are 25% and 15%, respectively. However, the two absolute updates in the relative/absolute trajectory compensate for the drifts. Note that the across-track component is correlated with the roll and heading angles, leading to large errors in the presence of angle variations for relative navigation. Yet, again due to the absolute updates, this effect is also mitigated for the relative/absolute navigation. On the other hand, attitude is well estimated in both approaches, except for the pitch angle, which is strongly correlated to heading and roll variations. Again, the absolute updates are able to decorrelate the parameters and mitigate this effect. The reader shall keep in mind that these preliminary results are obtained with a camera stand-alone solution, including outlier removal via ideal orientation propagation and a fixed parallax threshold. Further research will focus on the integration of inertial measurements for orientation propagation and on adaptive thresholding techniques.

Multi-sensor integration for navigation in space processing architectures. Another development within PERIGEO is the migration and evaluation of robust estimation algorithms for navigation on a space-like processing architecture. The so-called Fault-Tolerant Processing Architecture (FTPA) is a single-core 32-bit LEON3 processor, widely used in space missions, running the RTEMS real-time operating system. Within this processor, the goal of the project is to migrate a multi-sensor, multi-scenario navigation filter, developed by CTTC (Fernández et al., 2010), capable of processing IMU/GNSS or IMU/GNSS/camera sensor configurations, including robustness capabilities. The FTPA is fed through serial ports by other systems providing the navigation measurements. However, the FTPA features low computation capabilities (a processor speed of 50 MHz and 512 KB of RAM), posing a real challenge for algorithm migration. (de Florio et al., 2009) presents a comparative analysis between space processing architectures in an orbit propagation scenario for a satellite in Low Earth Orbit (LEO), including the LEON3. The results show that a single update of a Kalman filter implementation, including propagation of the position-velocity states and their covariances, takes roughly 1.4 seconds (the interested reader might check further details on the force models or integration times applied in this exercise). Currently, the migration and adaptation of the navigation filter is being performed through an intermediate processor simulation environment, leading to FTPA-ready software. After the migration, the FTPA will be fed with space-simulated measurements (IMU and camera) within the DSE to reproduce realistic space conditions.

Representative environment: integration in the UAV Project testing is a key component of PERIGEO, as the project seeks to increase the TRL of the developed technologies using UAVs as a means for this purpose. Table 1 shows the subsystems and sensors to be integrated in the UAV payload.
Table 1. Subsystems and sensors integrated in the UAV:
• GRIP: in-house multi-frequency GNSS receiver, featuring Galileo's AltBOC signal reception
• Low-cost miniaturized IMU
• Javad TR-G3T: geodetic-grade, multi-frequency GNSS receiver

Genuine space sensors and systems are not available within the project; thus, the scalability and restrictions of the previously discussed requirements have been analyzed and appropriate technologies have been selected to be tested in-flight. As an example, the short scanning range of the LiDAR sensor is regarded as a 'scale problem', in which cost and performance are clearly correlated. In this case, modulating the mission flying altitude enables the use of LiDAR within the project. Figure 5 shows particular developments of the CTTC within PERIGEO, and sensors to be integrated. The UAV to be used for project testing is the SIVA (Figure 6), a fixed-wing platform developed by the Instituto Nacional de Técnica Aeroespacial (INTA), featuring 300 kg of MTOW, 6 hours of endurance, a maximum speed of 170 km/h, and the ability to carry 40 kg of payload. Project testing shall be conducted within a controlled and segregated area around a take-off and landing site in which the SIVA is authorized to manoeuvre, e.g. regional airports. Depending on the final testing site, the analogies with outer space scenarios are not obvious, and therefore preparation of the test site is mandatory to include space-like elements, i.e. craters. Further testing shall consider areas with a high degree of likeness to the Moon or other bodies of interest.

SUMMARY AND WAY FORWARD This paper presents the current status of developments in relation to navigation tasks within the PERIGEO project. The paper has presented the context of space navigation, highlighting the limitations and requirements of sensors and systems for such purpose. After that, the two testing environments of the project have been presented. Firstly, within the simulation environment, Moon images have been simulated and preliminary results have been presented for relative and absolute navigation, based on point and crater extraction from images, respectively. Secondly, the representative environment consists of hardware/software integration into a UAV payload, including cameras, LiDAR and IMU, to perform in-flight testing of the developments during the current year. Further research in PERIGEO will focus on robustness for vision-based navigation, by analyzing the potential of parallax-based outlier rejection with relative orientation propagation using various dynamic models. The final phase of the project will also cover extensive result generation for the developed algorithms.

Figure 1: Atmospheric Flight on Titan Mission Profile.
Figure 2: Point extraction, description and matching: correctly matched pairs of points in consecutive overlapping images.
Figure 3: Crater extraction in camera images with Sun elevation around 45º (leftmost) and around 1º (left-center); LiDAR image after the Sobel operator in the horizontal direction (right-center), and crater identification in the LiDAR image (rightmost).
Figure 6: The SIVA unmanned aerial vehicle (INTA).
6,013
2014-03-05T00:00:00.000
[ "Computer Science" ]
Dynamic Decision-Making Process in the Opportunistic Spectrum Access [...] are able to learn channels' qualities and availabilities and further enhance the QoS.

Introduction Game theory represents a decision-making mathematical tool that attracts much attention when it comes to networks, for resource sharing, congestion control, transmission-rate adaptation, etc. This theory was originally and exclusively proposed for economics before being applied to many other topics, such as finance, regulation, military and political science, and also biology. The main objective of using game theory is to study and analyze cooperative or competitive situations among rational players in order to find an equilibrium among them. When players reach the equilibrium point, none of them can gain more by changing its action. Game theory is widely applied in Cognitive Radio (CR) in order to enhance the spectrum efficiency of the licensed frequency bands. Indeed, according to many recent studies, the frequency bands are not well used. On the one hand, the demand for high-data-rate applications and wireless devices has experienced unprecedented advancement since the 1990s, which makes the frequency bands more and more crowded. On the other hand, several simulations conducted in the United States showed that 60% of the frequency bands are not used [1]. Several solutions have been recommended by the Federal Communications Commission (FCC) in order to enhance the usage of the spectrum. Opportunistic Spectrum Access (OSA) in CR represents one of the proposed solutions, where users are categorized into two groups, namely licensed users (Primary Users, PUs), who have the right to access the frequency bands at any time, and unlicensed users (Secondary Users, SUs), who can access the frequency bands in an opportunistic manner. Usually, SUs can coexist with PUs in the same frequency bands as long as they don't cause any harmful interference to the latter. Indeed, SUs are able to access the frequency bands currently unused by PUs. SUs in OSA face several challenges in order to reduce the interference with PUs:

• Spectrum Sensing: A SU should sense the frequency bands and identify the available spectrum holes before making any decision. The main challenge is to gather accurate information about the status of the spectrum (free or busy) in order to access only the unused channels without causing any harmful interference to PUs. Due to hardware constraints, delay and high energy consumption, a SU may only be able to sense a portion of the frequency bands (e.g. one channel at each time slot) and decide whether the selected channel is free for transmission.

• Spectrum Decision: At each time slot, a SU should decide which channel to access based on past success or failure decisions. As a result, a SU can gather some information about the availability and quality of the channels and build a database of the spectrum access environment. This database is used in order to make good decisions and enhance the future actions of the SU.

• Spectrum Sharing: In order to share the available spectrum among SUs, two main models exist: cooperative or competitive access. In cooperative behaviors, the users need to exchange information with each other in order to maximize their opportunities and thus decrease the interference among themselves. Despite these benefits of the cooperative access, each user should be informed about the others' decisions before making any action, which may increase the complexity of the secondary network.
In the competitive access, by contrast, each SU takes an action based on its local observation. However, this lack of information exchange can increase the number of collisions among users. To solve this issue, a specific policy is required to learn the vacancy probabilities of the available channels and decrease the number of collisions among users; accordingly, a user should vacate its current channel once it identifies its targeted channel. This paper is an extension of our original work presented in [2], in which a novel policy called All-Powerful Learning (APL) is proposed in order to maximize the opportunities of SUs, share the available spectrum among them, and limit the interference between PUs and SUs. Instead of only considering availability, this paper takes into account a quality-information metric, where the priority users should access only the best channels, with the highest availability and quality.

Multi-Armed Bandit Problem The Multi-Armed Bandit (MAB) model represents one of the famous models in game theory that is adopted to enhance the efficiency of the licensed frequency bands. Moreover, the MAB problem represents a simple case of Reinforcement Learning (RL). In RL, the agent should enhance its behavior from the feedback (e.g. reward). Indeed, RL may allow an agent to adapt to its environment by finding a suitable action to reach the best reward. The agent can maximize its reward without any prior information about its environment. However, by memorizing the states of the environment and the actions it took, the agent can make better decisions in the future. The reward feedback, also called the reinforcement signal, plays an important role in helping an agent learn from its environment. RL is widely used in several domains: robotics, aircraft control, self-driving cars, business strategy planning, etc. It was first developed for a single agent who should find an optimal policy that maximizes his expected reward, knowing that the optimal policy depends on the environment. Unlike the case of a single agent, for multiple agents the optimal policy depends not only on the environment but also on the policies selected by the other agents. Moreover, when multiple agents apply the same policy, their approaches often fail because each agent tries individually to reach a desired result. In other words, it is impossible for all agents in a certain system to simultaneously maximize their personal rewards, although finding an equilibrium for the system represents a point of interest. Subsequently, it is important to find a policy for each agent in order to guarantee the convergence to an equilibrium state in which no agent can gain more by modifying its own action. In RL, the Exploration-Exploitation dilemma represents an attractive problem. In order to maximize his performance (exploitation), the agent should gather some information about his environment (exploration). This is known as the Exploration-Exploitation dilemma in reinforcement learning. If the agent spends a lot of time on the exploration phase, then he cannot maximize his reward. Similarly, when the agent focuses on the exploitation phase by exploiting his current information, then he may miss the best action that leads to the highest reward. Thus, the agent needs to balance the tradeoff between exploration and exploitation in order to obtain an appropriate result. Due to its generic nature, the MAB model is widely adopted in many fields, such as wireless channel access, jamming communication or object tracking.
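To make the OSA-to-MAB mapping concrete, here is a toy sketch (ours, not the paper's code) of a Bernoulli channel model in Python, where sensing channel i returns state 1 with vacancy probability µ_i; the availability vector shown mirrors the nine-channel scenario used later in the simulations.

```python
import random

class BernoulliChannels:
    """Toy OSA environment: channel i is free (reward 1) with probability mu[i]."""
    def __init__(self, mu):
        self.mu = mu                      # unknown to the secondary user

    def sense(self, i):
        """Sense channel i for one slot; returns its state S_i(t) in {0, 1}."""
        return 1 if random.random() < self.mu[i] else 0

# Example: nine channels; the agent must discover that channel 0 is the best.
env = BernoulliChannels([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])
```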
In such a model, an agent can play a single arm at each time step, trying to maximize its long-term reward. To reach its goal, the agent needs to find the best arm in terms of expected reward. At each time slot, the agent can choose the current best arm (exploitation) or play other arms trying to obtain a robust estimation of their rewards (exploration). Generally, an optimal policy used by the agent should balance between the exploitation and the exploration phases while pulling the arms. [Fig. 2: example of a multi-armed bandit with arms of expected rewards 50%, 30%, 60% and 70%.] Like most RL frameworks, the agent starts the game without any a priori knowledge about the expected rewards of the arms. The main goal of the agent is to find the arm with the highest expected reward. Here, we should define two classes of arms. Optimal arm: this arm has the highest expected reward and is represented by arm 2 in Fig. 2; the agent tries to reach this arm in order to maximize his expected reward. Suboptimal arms: all other arms, considered as non-optimal; efficient MAB algorithms should be able to limit the play of suboptimal arms. To solve the MAB problem, several algorithms have been proposed, such as Thompson Sampling [4], Upper Confidence Bound (UCB) [5], ε-greedy [6], and Exponential weights for Exploration and Exploitation (EXP3) [7]. The performance of a given MAB algorithm is usually measured by a regret that represents the gap between the reward obtained in the ideal scenario, where the user knows the expected reward of each arm and always pulls the best one, and that obtained using the given MAB algorithm. It is worth mentioning that these algorithms have been suggested for a single SU in the context of OSA, where the SU is considered as an agent and the channels become equivalent to the different arms. Then, it is assumed that each channel is associated with a distinct availability probability that the SU should estimate after a finite number of time slots. In this work, we first formulate the classical OSA as a MAB problem, in which we consider a single Secondary User (SU) that needs to access the frequency band opportunistically. Later on, we will consider more realistic conditions that arise in OSA (e.g. multiple users, Quality of Service, collisions among users, dynamic access).

Thompson Sampling Thompson Sampling (TS), a randomized algorithm with a Bayesian spirit, represents one of the earliest algorithms proposed to tackle the MAB problem. In TS, each arm is assigned an index B_i(t, T_i(t)) that contains information based on the past success and failure observations. After a finite number of time slots, the index B_i(t, T_i(t)) will be very close to the mean reward of each arm. By selecting the arm with the highest index at each time slot, the agent often selects the best arm with the highest reward. This index achieves a trade-off between the exploration and the exploitation phases and can be defined as a sample drawn from the posterior distribution: B_i(t, T_i(t)) ~ Beta(W_i(t, T_i(t)) + a, Z_i(t, T_i(t)) + b), where W_i(t, T_i(t)) and Z_i(t, T_i(t)) represent respectively the success and failure access counts of arm i, and a and b are constant numbers. Despite its excellent performance, which can exceed state-of-the-art MAB algorithms [8,9,10], TS is widely ignored in the literature. This neglect is due to the fact that the algorithm was proposed with a lack of proof and a slight mathematical background, unlike other MAB algorithms such as UCB or ε-greedy. Recently, TS has attracted more attention and is being used in several fields [11,12,13].
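Assuming the standard Beta-Bernoulli formulation of TS sketched by the index above (the constants a and b acting as prior parameters), a minimal implementation could look as follows; env is the toy channel model from the previous sketch.

```python
import random

def thompson_sampling(env, C, n, a=1, b=1):
    """Minimal Thompson Sampling sketch with a Beta(a, b) prior per channel.
    W[i]/Z[i] count success/failure observations of channel i."""
    W = [0] * C
    Z = [0] * C
    choices = []
    for t in range(n):
        # Index B_i: one sample from the posterior Beta(W_i + a, Z_i + b).
        B = [random.betavariate(W[i] + a, Z[i] + b) for i in range(C)]
        i = max(range(C), key=lambda k: B[k])   # play the arm with the highest index
        r = env.sense(i)
        W[i] += r
        Z[i] += 1 - r
        choices.append(i)
    return choices
```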
Recent studies have found a theoretical upper bound for its convergence to the best choice [14,15,16].

Upper Confidence Bound Upper Confidence Bound (UCB) represents one of the famous MAB algorithms, first proposed in [5]. Like TS, the index B_i(t, T_i(t)) of UCB combines two terms, for exploration and exploitation, in order to estimate the vacancy probabilities of the channels and then access the best one. In the literature, several variants of UCB have been proposed to enhance the performance of the classical UCB, such as UCB1, UCB2, UCB-tuned, Bayes-UCB and KL-UCB [8,17,18,19]. UCB1 [17] represents the simplest version that balances between complexity and optimality.

Algorithm 1: Thompson Sampling
Input: C: number of channels; n: total number of slots.
Parameters: S_i(t): state of the selected channel, equal to 1 if the channel is free and 0 otherwise; T_i(t): number of times the i-th channel has been sensed by the SU; W_i(t, T_i(t)): success count of the i-th channel; Z_i(t, T_i(t)): failure count of the i-th channel; B_i(t, T_i(t)): index assigned to the i-th channel; a_t = i: equal to 1 if the user selects the i-th channel at slot t and 0 otherwise.

Algorithm 2: UCB1
Input: α: exploration-exploitation factor; C: number of channels; n: total number of slots.
Parameters: T_i(t); B_i(t, T_i(t)): index assigned to the i-th channel; S_i(τ) = 1 if channel i is vacant at slot τ and 0 otherwise.
Initialization: for t = 1 to C, the SU senses each channel once and updates its index B_i(t, T_i(t)).

For this reason, UCB1 is the most widely adopted version in the context of CR to help a SU make an optimal decision [20,21,22,23,24,25]. In UCB1, the index essentially comprises two important factors, X_i(T_i(t)) and A_i(t, T_i(t)), that represent respectively the exploitation (or expected reward) and the exploration terms: B_i(t, T_i(t)) = X_i(T_i(t)) + A_i(t, T_i(t)), where the exploitation and exploration factors can be expressed as: X_i(T_i(t)) = (1/T_i(t)) Σ_{τ=1..t} r_i(τ) 1{a_τ = i} and A_i(t, T_i(t)) = sqrt(α ln(t) / T_i(t)). The factor A_i(t, T_i(t)) plays an important role in learning the availability probabilities of the channels by pushing the algorithm to examine the state of all available channels. Thus, after a finite time t, X_i(T_i(t)) of the i-th channel will approximately equal its availability probability µ_i. In [17], the authors found an upper bound on the cumulated regret (i.e. the loss of reward from selecting the worst channels) for a single agent and C arms. It has been shown that the upper bound of the regret achieves a logarithmic asymptotic behavior, which means that after a finite number of time slots, the agent will be able to identify the best arm and always select it.

ε-greedy One of the simplest MAB algorithms to tackle the MAB problem is ε-greedy, first proposed in [6]. A recent version of this algorithm is proposed in [17] in order to achieve a better performance compared to several previous versions (see Algorithm 3). Like several MAB algorithms, ε-greedy contains two completely separated phases: exploration and exploitation. During the exploration phase, the user chooses a random channel in order to learn the vacancy probabilities of the channels. In the exploitation phase, the user selects the channel with the highest expected reward X_i(T_i(t)). The authors of [17] have also investigated the analytical convergence of ε-greedy and proved that the regret (i.e. the loss of reward from selecting the worst channels) achieves a logarithmic asymptotic behavior.
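For comparison, minimal sketches of UCB1 and of a decaying ε_t-greedy in the spirit of [17] (with ε_t = min(1, H/t), consistent with H = c·K/d²) are given below, again against the toy channel model; both return the sequence of selected channels so that the regret can be computed afterwards.

```python
import math
import random

def ucb1(env, C, n, alpha=2.0):
    """UCB1 sketch: index B_i = X_i + sqrt(alpha * ln(t) / T_i)."""
    T = [0] * C            # T_i: times channel i was sensed
    X = [0.0] * C          # X_i: empirical mean reward (exploitation term)
    choices = []
    for i in range(C):     # initialisation: sense each channel once
        T[i], X[i] = 1, float(env.sense(i))
        choices.append(i)
    for t in range(C + 1, n + 1):
        B = [X[i] + math.sqrt(alpha * math.log(t) / T[i]) for i in range(C)]
        i = max(range(C), key=lambda k: B[k])
        r = env.sense(i)
        T[i] += 1
        X[i] += (r - X[i]) / T[i]              # incremental mean update
        choices.append(i)
    return choices

def eps_greedy(env, C, n, H=90.0):
    """Decaying epsilon_t-greedy sketch with eps_t = min(1, H / t)."""
    T = [0] * C
    X = [0.0] * C
    choices = []
    for t in range(1, n + 1):
        if random.random() < min(1.0, H / t):
            i = random.randrange(C)                 # exploration: random channel
        else:
            i = max(range(C), key=lambda k: X[k])   # exploitation: best empirical mean
        r = env.sense(i)
        T[i] += 1
        X[i] += (r - X[i]) / T[i]
        choices.append(i)
    return choices
```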
Problem Formulation In the previous section, we introduced the well-known MAB algorithms that help a MAB agent make a good decision. In this section, we present the classical OSA for a single SU in order to formulate it as a MAB problem. MAB algorithms can represent an optimal solution for the classical OSA, as can be seen in Section 5. On the other hand, we consider scenarios more developed than the classical OSA, such as multiple SUs, decreasing the collisions among users, and also estimating the quality of the available channels. We first present the OSA for multiple SUs in the next section and, hereinafter, we propose the new APL policy to manage a secondary network.

Algorithm 3: ε-greedy
Input: C: number of channels; H: exploration constant; n: total number of slots.
Parameters: T_i(t): number of times the channel has been sensed up to time t; χ: a uniform random variable in [0, 1].
Output: the SU makes a random action a_t (exploration) or selects the channel with the highest expected reward (exploitation).

Single User Case Let us consider a SU accessing C channels, each associated with a vacancy probability µ_i ∈ [0, 1]. Let the channels be ordered by their availability probabilities, µ_1 > µ_2 > ... > µ_C, which are initially unknown to the secondary user. The most important objective of the SU is to estimate the vacancy probabilities of the channels after a finite time in order to access the best channel, which has vacancy probability µ_1. At each time slot, the user can select one channel and transmit its data if the channel is available; otherwise, it should wait for the next slot to sense another channel. Let the state of the i-th channel at slot t be denoted S_i(t): S_i(t) equals 1 if the i-th channel is free and 0 otherwise. Hereinafter, we consider that the reward obtained from the i-th channel at slot t is equal to its state: r_i(t) = S_i(t). Let T_i(t) represent the number of times the i-th channel has been accessed up to slot t. The user should be rational, adopting a given policy in order to quickly identify the best channel. A policy selected by the SU may not be optimal in terms of the accuracy of the channels' vacancy estimation or the convergence speed towards the best channel. Finally, let us introduce the regret, which represents the gap between the reward obtained in an ideal scenario and that obtained using a given policy β: R^β(n) = n·µ_1 − E[Σ_{t=1..n} µ^β(t)], where n represents the total number of time slots, µ^β(t) stands for the vacancy probability of the channel selected at slot t under the policy β, and E(.) is the mathematical expectation.

Multi-User Case In this section, we consider U SUs trying to learn the vacancy probabilities of the C channels and then access only the U best ones (C > U). When several SUs exist in the spectrum, their main challenge is to learn, collectively or separately, the vacancy probabilities of the channels as accurately as possible in order to access the best ones. Therefore, a policy selected by the users should estimate the vacancy of the channels as accurately as possible, and should also be able to decrease the number of collisions among users.
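Before moving to the multi-user regret, the single-user regret defined above can be estimated empirically by Monte Carlo simulation; the following sketch (ours) reuses the BernoulliChannels model and the policies from the previous snippets.

```python
def empirical_regret(mu, choices):
    """Regret of one run: sum over slots of (mu_1 - mu_selected), following the
    single-user regret definition above."""
    best = max(mu)
    return sum(best - mu[i] for i in choices)

def average_regret(mu, policy, runs=100, n=5000):
    """Monte Carlo estimate of the expected regret of a policy(env, C, n) -> choices."""
    total = 0.0
    for _ in range(runs):
        total += empirical_regret(mu, policy(BernoulliChannels(mu), len(mu), n))
    return total / runs
```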
Therefore, let us define the regret for multiple users, which takes into account both the convergence speed to the U best channels and the number of collisions among users, as follows: R^β(n) = n·Σ_{k=1..U} µ_k − E[Σ_{t=1..n} S^β(t)], where µ_k stands for the vacancy probability of the k-th best channel and S^β(t) represents the global reward obtained by all users at time t using the policy β, defined as follows: S^β(t) = Σ_{i=1..C} Σ_{j=1..U} S_i(t) I_{i,j}(t), where S_i(t) represents the state of the i-th channel at time t (S_i(t) = 1 if the i-th channel is available and 0 otherwise), and I_{i,j}(t) indicates that no collision has occurred on the i-th channel for the j-th user at slot t: I_{i,j}(t) = 1 if the j-th user is the sole occupant of channel i and 0 otherwise. Finally, the regret that takes into consideration the channels' occupancy and the number of collisions among users can be expressed by: R^β(n) = n·Σ_{k=1..U} µ_k − Σ_{i=1..C} Σ_{j=1..U} µ_i P_{i,j}(n), where P_{i,j}(n) = Σ_{t=1..n} E[I_{i,j}(t)] represents the expected number of times that the j-th user is the only occupant of the i-th channel up to slot n, and the mean of the global reward is given by: E[S^β(t)] = Σ_{i=1..C} Σ_{j=1..U} µ_i E[I_{i,j}(t)].

Multi-Priority Access In the existing models of OSA where several SUs exist in the network, the main challenge is to learn, collectively (via cooperative learning) or separately (via competitive learning), the available channels while decreasing the number of collisions with each other. In our work, we focus on the competitive priority access, where the k-th user should selfishly estimate the vacancy probabilities of the channels in order to access the k-th best one. Our proposed policy for the priority access takes into account the dynamic access, where the priority users can enter or leave the network at any time. To the best of our knowledge, only the priority access or the random access is considered, without the dynamic access, in several proposed MAB policies [24,25,26,27] (a simple example of the priority dynamic access is shown in Fig. 3). To formulate the OSA as a MAB problem, recent works extend the simple case of MAB (i.e. the case of a single agent) to consider several agents [20,25,26,28,29]. In our work, we are interested in OSA for multiple priority access, in which SUs should access the spectrum according to their ranks. Moreover, decreasing the number of collisions among SUs represents a point of interest to enhance the global performance of the secondary network. In general, when two SUs access the same channel to transmit, their data cannot be correctly received because of the interference between them. When a collision occurs among users, several proposals can be found in the literature to enhance their behavior in the next slots. We present below two well-known collision models in the literature that are widely used in OSA:

• ALOHA-like model: if a collision occurs between two or more users, then none of them receives a reward, even if the selected channel is free. This model may ensure fairness among users, and no collision avoidance mechanism is used.

• Reward-sharing model: if two or more users select the same channel at the same time, the colliding users share the reward obtained from the selected channel (each of them receives the same reward).

The above models affect the methodology used to collect the reward from the target channel, while the learning phase is not affected. In our work, we consider the most widely used model, the ALOHA-like one. Based on the ALOHA-like model, the works of [2,20,21,25,26,27,28,30] proposed semi-distributed and distributed algorithms in which users cannot exchange information with each other.
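As an illustration of the ALOHA-like model and of the indicator I_{i,j}(t) used in the regret above, the following sketch computes the global reward S(t) of one slot; it is a toy illustration of ours, not code from the cited works.

```python
from collections import Counter

def aloha_slot_reward(selections, states):
    """Global reward S(t) of one slot under the ALOHA-like model: user j scores
    S_i(t) only when it is the sole occupant of its chosen channel i."""
    occupancy = Counter(selections)            # selections[j] = channel chosen by user j
    return sum(states[i] for i in selections if occupancy[i] == 1)

# Example: channels 0 and 2 are free; users 0 and 1 collide on channel 0.
print(aloha_slot_reward([0, 0, 2], [1, 0, 1]))   # -> 1 (only the third user scores)
```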
Liu and Zhao in [28] proposed the Time-Division Fair Share (TDFS) policy and showed that the proposed algorithm may achieve an asymptotic logarithmic behavior. In this algorithm, the users access the channels with different offsets. TDFS also ensures fairness among users, while in our work we are interested in the priority access, where users access the channels based on their prior ranks. In [28], the TDFS policy was used to extend the UCB1 algorithm to consider multiple users. Besides TDFS, the authors of [20] proposed the Random Rank policy, based on UCB1, to manage the secondary network. Random Rank represents a distributed policy (i.e. with no information exchange among users) in which each user achieves a different throughput. The authors of [24] proposed the Selective Learning of the k-th largest expected rewards (SLK) policy, based on UCB1, which represents an efficient policy for the priority access. However, SLK allows only a fixed number of users to access the available channels. Thus, the dynamic access cannot be considered under SLK, since the latter restricts the access. Similarly to SLK, the authors of [25] proposed the k-th MAB policy for the priority access, based on UCB1 and ε-greedy. In k-th MAB, the time is slotted and each slot is divided into multiple sub-slots depending on the users' priority ranks. For instance, the slot of SU_U is divided into U sub-slots in order to find the U-th best channel and transmit data over this channel. Therefore, the main limitation of this policy remains the unsatisfactory transmission time of high-ranked users. For the random access, several learning policies can be found in the literature, where the SU selects its channel randomly. The authors of [26] proposed the Musical Chairs policy, as well as the Dynamic Musical Chairs (DMC) policy for a dynamic access. In both policies, the SU selects a random channel up to time T_0 in order to estimate the vacancy of the channels and the number of users, U, in the network. After T_0, the SU chooses a random channel among the U best ones, {1, ..., U}. The main drawback of Musical Chairs and DMC is that the users should know the total number of transmission time slots as well as the number of available channels. Moreover, in DMC a restricted access is considered, where the users cannot leave the network during the time T_0. To find the U best channels, the authors of [27] proposed the Multi-user ε-greedy collision Avoiding (MEGA) algorithm, based on the ε-greedy algorithm proposed in [17]. However, their algorithm suffers from the same drawbacks as Musical Chairs and Dynamic Musical Chairs, and it does not consider the priority access. In order to overcome all these limitations, we propose in Section 4.1 a novel policy called APL for the priority dynamic access.

APL for the Priority Access In this section, we propose a new policy for the priority access. This policy enables a secondary user to learn the vacancy probabilities of the channels and ensures the convergence of each user to his dedicated channel. Moreover, it can be used with any learning MAB algorithm, such as Thompson Sampling (TS), Upper Confidence Bound (UCB), AUCB, e-UCB, ε-greedy, etc. We should highlight that our proposed policy does not require prior knowledge about the channels, as is the case for other policies such as Musical Chairs [26], SLK [24], k-th MAB [25], MEGA [27], etc. Indeed, existing policies to manage a secondary network suffer from one or more of the following disadvantages: 1. The number of users should be fixed and known to all users.
2. SUs should have prior information about the number of channels. 3. The expected transmission time should be known. 4. The dynamic access is not supported; to recall, in a dynamic access the users can enter or leave the network at any given time. 5. Some algorithms consider only a restricted dynamic access, where a SU can't leave the network during the learning or exploration phases. 6. The vacancy probabilities of the channels should be static; otherwise, users cannot adapt to their environment. 7. The priority access is seldom considered in the literature, while the random access represents the most used model. Unlike SLK and k-th MAB, our proposed policy for the priority access, called the All-Powerful Learning (APL) policy, doesn't suffer from the above-mentioned drawbacks. As a matter of fact, the SLK and k-th MAB policies suffer from the 1st, 2nd and 4th mentioned drawbacks. In a classical priority access, each channel is assigned an index B_i(t), and the highest-priority user SU_1 should sense and access the channel with the highest index B_i(t) at each time slot. Indeed, the best channel, after a finite number of time slots, will have the highest index B_i(t). The second-priority user SU_2 should avoid the first best channel and try to access the second best one. To reach his goal, SU_2 should sense the first and second best channels at each time slot in order to estimate their vacancy probabilities and then access the second best channel if available. In this case, the complexity of the hardware is increased, and we conclude that a classical priority access represents a costly and impractical method to settle each user on his dedicated channel. In the case of APL, at each time slot the user senses a channel and transmits his data if the channel is available (see Algorithm 4).

Algorithm 4: APL for the priority dynamic access
Input: k: indicates the k-th user or the k-th best channel; ξ_k(t): indicates the presence of a collision for the k-th user at instant t; r_i(t): indicates the state of the i-th channel at instant t (r_i(t) = 1 if the channel is free and 0 otherwise).
Initialization: k = 1; for t = 1 to C, SU_k senses each channel once.

In our policy, each SU_k has a prior rank, k ∈ {1, ..., U}, and his target is to access the k-th best channel. The major problem of the competitive priority access is that each user should selfishly estimate the vacancy probabilities of the available channels. Our policy solves this issue by making each user generate a rank around his prior rank to gather information about the channels' availability. For instance, if the rank generated by the k-th user equals 3 (considering that k > 3), then he should access the channel with the third-highest index, i.e. B_3(t). In this way, SU_k can examine the states of the k best channels while his target remains the k-th best one. However, if the rank generated by SU_k is different from k, then he selects a channel with one of the following vacancy probabilities: {µ_1, µ_2, ..., µ_{k−1}}, and he may collide with a higher-priority user, i.e. SU_1, SU_2, ..., SU_{k−1}. Therefore, SU_k should avoid regenerating his rank at each time slot; otherwise, a large number of collisions may occur among users and transmitted data can be lost. Instead, after each collision, SU_k should regenerate his rank from the set {1, ..., k}. Thus, after a finite number of slots, each user settles on his dedicated channel.
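The rank-regeneration rule just described can be sketched as follows; the learned indices B_i(t) would come from whichever MAB algorithm the user runs (TS, UCB1, ε-greedy, ...), and the class name is ours, not the paper's.

```python
import random

class APLUser:
    """Sketch of APL's rank rule for priority user k: keep a working rank in
    {1, ..., k}, and redraw it uniformly from that set after every collision."""
    def __init__(self, k):
        self.k = k
        self.rank = k                  # start by targeting the k-th best channel

    def target_channel(self, indices):
        """Pick the channel whose learned index B_i(t) is `rank`-th largest."""
        order = sorted(range(len(indices)), key=lambda i: indices[i], reverse=True)
        return order[self.rank - 1]

    def on_collision(self):
        """After a collision, regenerate the rank from the set {1, ..., k}."""
        self.rank = random.randint(1, self.k)
```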
It remains to investigate the analytical convergence of APL in order to verify its performance in a real radio environment.

Quality of Service As mentioned before, UCB represents one of the popular MAB algorithms that is widely suggested in the literature, where several variants have been proposed. In [23], we proposed a new variant of UCB called the Quality of Service UCB1 (QoS-UCB1) for a single SU, where the latter is able to learn the channels' vacancy and quality. To consider multiple SUs, this version of UCB was extended using the Random Rank policy proposed in [20] to manage a secondary network. It has been shown that the Random Rank policy with QoS-UCB1 represents an optimal solution to allow users to learn channels' vacancy and quality separately. In this paper, however, we evaluate the performance of our APL policy with QoS-UCB1 for the priority access. Suppose that each channel has a binary quality represented by q_i(t) at slot t: q_i(t) = 1 if the channel has a good quality and 0 otherwise. Then, the expected quality collected from channel i up to time n, G_i(T_i(n)), is the empirical mean of the quality observations q_i(t) gathered over the T_i(n) slots in which channel i was selected. The global mean reward, which takes into account both the channels' vacancy and quality, can be expressed as follows [23]: µ^Q_i = µ_i G_i. The index assigned to the i-th channel that considers both vacancy and quality, B^Q_i(t, T_i(t)), combines the exploitation term X_i(T_i(t)), the exploration term A_i(t, T_i(t)) and a quality factor Q_i(t, T_i(t)) entering with a negative sign: B^Q_i(t, T_i(t)) = X_i(T_i(t)) − Q_i(t, T_i(t)) + A_i(t, T_i(t)). According to [23], the quality factor Q_i(t, T_i(t)) is driven by the quality gap M_i(t, T_i(t)) = G_max(t) − G_i(T_i(t)), weighted by a parameter γ, where γ stands for the weight of the quality factor and M_i(t, T_i(t)) is the difference between the maximum expected quality over the channels at time t, i.e. G_max(t), and the one collected from channel i up to time slot t, i.e. G_i(T_i(t)). When the i-th channel has a good quality G_i(T_i(t)) as well as a good availability X_i(T_i(t)) at time t, the quality factor Q_i(t, T_i(t)) decreases while X_i(T_i(t)) increases. Subsequently, by selecting the maximum of the index B^Q_i(t, T_i(t)), the user has a large chance of accessing a channel with a high quality and availability.

Simulations and Results In our simulations, we consider three main scenarios. In the first one, a SU tries to learn the vacancy of the channels using the MAB algorithms TS, UCB1 and ε-greedy in order to access the best channel with the highest vacancy probability. We also compare the performance of these MAB algorithms to show which one offers more opportunities to the SU. In the second scenario, we consider 4 SUs trying to learn the vacancy of the channels with a low number of collisions. In this scenario, we show that, based on our APL policy, users reach their dedicated channels faster than with several existing policies. In the last scenario, using APL with QoS-UCB1, users should learn both the vacancy and the quality of the channels and then converge towards the channels that have a good vacancy and quality. In our algorithm, two factors can affect the convergence, α or H, while the convergence of UCB1 and ε-greedy is affected by α and H respectively. We consider the values of α and H for which UCB1 and ε-greedy achieve their best performance. According to [17], with H = c·K/d² (where c = 0.1 is a constant number, K = 9 and d = min_i(µ_1 − µ_i) = 0.1), the best values of H and α are 90 and 2 respectively, in order to ensure a balance between the exploration and exploitation phases. Let us initially consider a SU trying to access 9 channels associated with the following vacancy probabilities: µ = [0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1].
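Tying the earlier sketches together, this first scenario could be reproduced roughly as follows; the µ vector is the nine-channel setup above and the parameters match α = 2 and H = 90, while the paper's actual results average 1000 runs with shaded envelopes.

```python
# Sketch of the first simulation scenario, reusing the earlier snippets:
# one SU, nine channels, alpha = 2 (UCB1) and H = 90 (epsilon-greedy).
mu = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
for name, policy in [("TS", thompson_sampling),
                     ("UCB1", ucb1),
                     ("eps-greedy", eps_greedy)]:
    print(name, average_regret(mu, policy, runs=100, n=5000))
```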
Fig. 4 compares the regret of the SU using the three MAB algorithms TS, UCB1 and ε-greedy over 1000 Monte Carlo runs. The simulation outcomes are presented with a shaded region enveloping the average regret. As we can see, the regrets of the three MAB algorithms have a logarithmic asymptotic behavior with respect to the number of slots, while TS produces a lower regret in all simulations. This means that the SU can quickly reach the best channel, which offers more opportunities to the user than the other channels. In the second scenario of our simulation, we evaluate the performance of APL and its ability to make each user select his dedicated channel after a finite number of time slots. We compare the performance of APL to existing learning policies such as Musical Chair and SLK. To make this comparison, we use two main performance indexes: the regret related to the access of worst channels, and the percentage of times each user accesses the best channels. A collision may occur when two or more users try to access the same channel. In our simulations we adopt the ALOHA model, widely used in OSA, in which none of the collided users receives a reward. After each collision, and based on our policy APL, the collided users regenerate their rank. First, we consider a static setting of users; then we investigate the dynamic access in which the priority users can enter or leave the network.
Figure 6: The percentage of times where each SU_k selects its optimal channel using the proposed approach.
In Fig. 5, we compare the regret of APL to those of SLK and Musical Chair. APL and SLK take the priority access into consideration, while Musical Chair was proposed for the random access. Although the regrets of APL and SLK have a logarithmic asymptotic behavior, the regret of Musical Chair has two parts:
• A linear part at the beginning, during the learning period, due to the large number of collisions resulting from the random selection.
• A constant part in which the users exploit the U best channels.
As we can see from Fig. 5, APL using TS outperforms Musical Chair and SLK by achieving the lowest regret. Fig. 6 shows the percentage of times P_k(n) that the k-th user accesses his dedicated channel under our APL policy up to slot n, i.e. the fraction of slots in which the channel β^l_APL(t) selected at time t under APL, using the learning algorithm l (TS, UCB1 or ε-greedy), equals his target channel. As we can see, based on our policy APL, the users are able to converge to their targeted channels: the first priority user SU_1 converges towards the best channel µ_1 = 0.9, followed by SU_2, SU_3 and SU_4 towards the channels µ_2 = 0.8, µ_3 = 0.7 and µ_4 = 0.6 respectively. In addition, we can see that the users reach their dedicated channels quickly using TS, and more slowly under UCB1 and ε-greedy. Fig. 7 compares the regret of APL and DMC for the dynamic access, where the dotted line indicates the entering and leaving of users on the network. Figures (7a) and (7b) represent respectively the cumulative and average regrets of APL, where at each entering or leaving of users a significant increase in the regret is observed.
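The metric P_k(n) plotted in Fig. 6 is, under the natural reading of the description above, the fraction of slots in which the user's selected channel equals its target; a minimal sketch (with names of our own choosing) is given below.

```python
def percentage_on_target(selected, target_channel):
    """Fraction of slots in which the user's selected channel equals its target.

    selected       -- list of channels beta_APL(t) chosen at slots t = 1..n
    target_channel -- index of the k-th best channel for this user
    """
    n = len(selected)
    hits = sum(1 for channel in selected if channel == target_channel)
    return hits / n

# Example with hypothetical selections: a user targeting channel 0 picks it in 3 of 4 slots.
print(percentage_on_target([0, 0, 1, 0], 0))  # 0.75
```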
It is worth mentioning that, in the dynamic scenario and under APL, a user can change its current channel for two reasons:
1. When a collision occurs, SU_k should generate a new random rank from the set {1, ..., k}.
2. When a PU reappears in the network and occupies the current channel used by SU_k, the index of this channel decreases, and it may be overtaken by another channel whose index becomes higher.
To the best of our knowledge, two policies in the literature consider the dynamic access, but without considering the priority access: DMC [26] and MEGA [27]. The authors of [26] show that DMC achieves better performance than the MEGA policy. In Figures (7c) and (7d), we can see that APL outperforms DMC and achieves a lower regret. Moreover, after the dynamic access interval, our algorithm achieves a logarithmic regret while the regret of DMC keeps growing with time. This is because the access under the DMC algorithm is realized in epochs, each composed of a learning phase with enough rounds of random exploration to learn the U best channels and the number of users under the dynamic access. The lengths of an epoch and of the learning phase are T_1 and T_0 respectively; these two parameters depend on the number of channels C and the total number of slots n. Let us now consider the last scenario, in which users are able to learn both the channels' vacancy and quality using our APL policy, where the empirical mean of the quality collected from the channels is as follows: G = [0.7 0.9 0.2 0.8 0.8 0.7 0.7 0.8 0.8]. Thus, the global mean reward that takes into consideration both quality and vacancy is µ^Q = [0.63 0.72 0.14 0.48 0.4 0.28 0.21 0.16 0.08]. After estimating the channels' availability and quality (i.e. µ^Q) and based on our APL policy with QoS-UCB1, the first priority user SU_1 should converge towards the channel that has the highest global mean, i.e. channel 2, while the targets of SU_2, SU_3 and SU_4 should be channels 1, 4 and 5 respectively. On the other hand, in the case of APL with UCB1, the targets of the priority users SU_1, SU_2, SU_3 and SU_4 should be channels 1, 2, 3 and 4 respectively. This result is confirmed in Fig. 8, where the priority users access their dedicated channels using APL with QoS-UCB1 or UCB1. Fig. 9 displays the achievable regret of APL with QoS-UCB1 and UCB1 in the multi-user case. Although the two curves have a logarithmic asymptotic behavior, we notice an improvement in the regret of APL with QoS-UCB1 compared to UCB1. Conclusion This paper deals with the Opportunistic Spectrum Access (OSA) problem in the context of Cognitive Radio (CR) for a single or multiple Secondary Users (SUs). Recently, several Multi-Armed Bandit (MAB) algorithms have been suggested to help a single SU make a good decision. To tackle the problem of OSA with several SUs, we proposed a novel policy for the priority access, called All-Powerful Learning (APL), that allows several SUs to learn separately the channels' vacancy without any cooperation or prior knowledge about the available channels. Moreover, APL considers the priority dynamic access, while only the priority access or the dynamic access is considered separately in several recent works, such as Selective Learning of the k-th largest expected rewards (SLK), Musical Chairs, Multi-user ε-greedy collision Avoiding (MEGA) and k-th MAB.
In our work, the Quality of Service (QoS) has also been investigated, where the SU is able to learn both the quality and the availability of channels and then make an optimal decision with respect to its prior rank. Like most significant works in OSA, this work focuses on the Independent and Identically Distributed (IID) model, in which the state of each channel is assumed to be drawn from an IID process. In future work, we will consider the Markov process as a dynamic memory model to describe the state of the available channels, although it is more complex than the IID model.
Characterization of Mycoplasma gallisepticum pyruvate dehydrogenase alpha and beta subunits and their roles in cytoadherence Mycoplasma gallisepticum is a causative agent of chronic respiratory disease in chickens, typically causing great economic losses. Cytoadherence is the critical stage for mycoplasma infection, and the associated proteins are important for mycoplasma pathogenesis. Many glycolytic enzymes are localized on the cell surface and can bind the extracellular matrix of host cells. In this study, the M. gallisepticum pyruvate dehydrogenase E1 alpha subunit (PDHA) and beta subunit (PDHB) were expressed in Escherichia coli, and their enzymatic activities were identified based on 2,6-dichlorophenol indophenol reduction. When recombinant PDHA (rPDHA) and recombinant PDHB (rPDHB) were mixed at a 1:1 molar ratio, they exhibited strong enzymatic activity. Alone, rPDHA and rPDHB exhibited no or weak enzymatic activity. Further experiments indicated that both PDHA and PDHB were surface-exposed immunogenic proteins of M. gallisepticum. Bactericidal assays showed that the mouse anti-rPDHA and anti-rPDHB sera killed 48.0% and 75.1% of mycoplasmas respectively. A combination of rPDHA and rPDHB antisera had a mean bactericidal rate of 65.2%, indicating that rPDHA and rPDHB were protective antigens, and combining the two sera did not interfere with bactericidal activity. Indirect immunofluorescence and surface display assays showed that both PDHA and PDHB adhered to DF-1 chicken embryo fibroblast cells and adherence was significantly inhibited by antisera against PDHA and PDHB. Adherence inhibition of M. gallisepticum to DF-1 chicken embryo fibroblast cells was 30.2% for mouse anti-rPDHA serum, 45.1% for mouse anti-rPDHB serum and 72.5% for a combination of rPDHA and rPDHB antisera, suggesting that rPDHA and rPDHB antisera may have synergistically interfered with M. gallisepticum cytoadherence. Plasminogen (Plg)-binding assays further demonstrated that both PDHA and PDHB were Plg-binding proteins, which may have contributed to bacterial colonization. Our results clarified the enzymatic activity of M. gallisepticum PDHA and PDHB and demonstrated these compounds as Plg-binding proteins involved in cytoadherence. Introduction Mycoplasma gallisepticum is one of the most important avian pathogens. It is the primary agent of chronic respiratory disease in chickens and infectious sinusitis in turkeys, causing great economic losses in the poultry industry worldwide [1]. Previous studies showed that M. gallisepticum invades DF-1 chicken embryo fibroblast cells (named DF-1 cells thereafter) in vitro [2], passes through the respiratory mucosal barrier, enters the bloodstream and spreads throughout the body [3]. Cytoadherence is the initial stage for mycoplasmas to colonize and infect host cells [4] and is essential for mycoplasma virulence [3]. Investigation of proteins involved in cytoadherence is important for a better understanding of mycoplasma pathogenesis. Several cytoadherence-related proteins of M. gallisepticum have been reported, such as GapA and CrmA [5][6][7], MGC2 [8], PvpA [9], and the OsmC-like protein MG1142 [10]. In addition, studies revealed that some glycolytic enzymes, including glyceraldehyde-3-phosphate dehydrogenase (GAPDH) [11][12][13][14] and α-enolase (Eno) [15][16][17][18], are on the surface of mycoplasmas. 
They are involved in cytoadherence through interactions with host components such as plasminogen (Plg) [17][18][19][20], fibronectin (Fn) [11,18], mucin [12] and β-actin [14,16]. In M. gallisepticum, some metabolic enzymes are also cytoadhesins including triosephosphate isomerase [21], Eno [22], and pyruvate kinase [23]. These multifunctional glycolytic enzymes are called moonlighting proteins [24]; whether other moonlighting proteins exist in M. gallisepticum is not clear. The multiple functions of proteins may be a way to compensate for limited genetic resources. Based on this hypothesis, further investigation of moonlighting proteins is helpful for enrichment of the adhesion-related proteins in M. gallisepticum and for better understanding the organization of the parasitic lifestyle of mycoplasmas. In bacteria, the pyruvate dehydrogenase complex (PDHc) is an important glycolytic enzyme complex that converts pyruvic acid to acetyl-CoA and catalyses the reduction of NAD+ to NADH. In addition to enzyme activity, the pyruvate dehydrogenase E1 subunit (PDH E1) is an immunogenic protein from many mycoplasma species such as M. agalactiae [25], M. bovis [26], M. pneumoniae [27], M. hyopneumoniae [28], M. mycoides subsp. mycoides SC [29] and M. mycoides subsp. capri. [30,31]. The PDH E1 alpha subunit (PDHA) of M. pneumoniae is a membrane-associated protein [32] and was confirmed as an P1 adhesin-complexed protein [33]. The PDH E1 beta subunit (PDHB) of M. pneumoniae is a cell surfacelocated protein that binds human Fn and plg [27,34]. Both PDHA and PDHB of M. pneumoniae bind to human Plg [35,36] and other human extracellular matrix (ECM) proteins [37]. In addition, PDHA and PDHB of M. pneumoniae bind to human lung epithelial cells and binding is reduced significantly by anti-plasminogen. This result indicates that PDH E1 might be involved in colonization of the respiratory tract [36]. However, PDH E1-mediated cell adherence is rarely reported in other mycoplasmas. Here, we investigated the enzyme activities, subcellular localization, immunogenicity, cytoadherence and Plg-binding ability of M. gallisepticum PDHA and PDHB. The results may provide a molecular basis for further study of their function in M. gallisepticum pathogenesis. Materials and methods Bacterial strains, vectors, sera, cell lines, and cell culture M. gallisepticum strain R low (CVCC 1651) was obtained from the China Veterinary Culture Collection Center (CVCC, Beijing, China) and cultured in mycoplasma broth base (Haibo, Qingdao, China) with 10% horse serum (Thermo Fisher Scientific, Waltham, MA, USA) at 37˚C in a 5% CO 2 atmosphere. The His-tag vector pET-28a (+) (Novagen, Madison, WI, USA) was used for DNA manipulation. The vector pET28a-InaZN-EGFP was used for the surface display system and named as pIGN. The pIGN vector contains nucleotide sequences for the N-terminal domain of ice nucleation (InaZN) and enhanced green fluorescent protein (EGFP) and was previously constructed in our lab [38]. The M. gallisepticum-infected and M. gallisepticum-negative chicken sera were from CVCC. Escherichia coli strains DH5α and BL21 (DE3) (Tiangen, Beijing, China) were used as host strains for gene cloning and recombinant protein expression; after transformation with recombinant expression vectors, E. coli were grown in Luria-Bertani (LB) broth or on LB agar plates supplemented with 50 μg mL −1 kanamycin at 37˚C. 
DF-1 cells were from American Type Culture Collection (ATCC, Manassas, VA, USA) and cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM; GE Healthcare Life Science, HyClone, Logan, UT, USA) containing 10% foetal bovine serum (FBS; Gibco, Carlsbad, CA, USA), 100 IU mL −1 penicillin, and 100 μg mL −1 streptomycin, at 37˚C in a 5% CO 2 atmosphere. DF-1 cells used in this study were confirmed to be mycoplasma-free using PCR PromoKine Mycoplasma Test kits I/C (PromoCell, Heidelberg, Germany) according to the manufacturer's protocol. Cloning and expression of M. gallisepticum pdhA and pdhB genes M. gallisepticum strain R low cultures at logarithmic growth were harvested by centrifugation at 12,000 ×g for 10 min, and genomic DNA was extracted using TIANamp bacterial genomic DNA kits (centrifugal column type; Tiangen, Beijing, China). Using the M. gallisepticum strain R low genome sequence (NC_004829.2), gene sequences for pdhA (MGA_RS02765) and pdhB (MGA_RS02760) were extracted and analysed. Five TGA sites were found in the pdhA gene, and two in the pdhB gene; this sequence encodes tryptophan in mycoplasmas but is a stop codon in E. coli. To change TGA to TGG, overlapping PCR was conducted for site-directed mutagenesis using primers in Table 1 (A1F to A6R for pdhA; B1F to B3R for pdhB). Full-length pdhA and pdhB gene fragments were subcloned into pET-28a (+) at the BamH I/Xho I sites for Table 1. Primers for overlapping PCR of pdhA and pdhB. Mouse polyclonal antisera against recombinant proteins and M. gallisepticum whole cells Purified rPDHA and rPDHB proteins, and M. gallisepticum whole cells (inactivated with 0.4% formalin for 16 h at 37˚C) were emulsified with an equal volume of Freund's complete adjuvant (Sigma) and used to immunize 6-week-old female BALB/c mice (SLAC, Shanghai, China) via multipoint subcutaneous injection respectively (100 μg purified protein or 10 10 colony forming units [CFUs] M. gallisepticum whole cells per mouse). After the first immunization, two boosters were given at 2-week intervals. Two non-immunized mice were used as negative controls. Tail vein blood from immunized and non-immunized mice was collected and the titres of polyclonal antisera measured by indirect enzyme-linked immunosorbent assay (iELISA) as previously described [22], with 96-well plates coated with coating buffer (16 mM Na 2 CO 3 , 34 mM NaHCO 3 , pH 9.6) containing purified protein (0.5 μg per well) or M. gallisepticum total protein prepared by sonic disruption of M. gallisepticum bacteria (0.5 μg per well) overnight at 4˚C. When titres significantly increased, blood samples were collected from the infraorbital sinuses of mice and serum samples were separated and stored at −20˚C. Surface localization, distribution and immunogenicity analyses Surface localization of rPDHA and rPDHB on M. gallisepticum cells was determined using suspension immunofluorescence assays. M. gallisepticum strain R low cells were collected at midlogarithmic phase by centrifugation and washed three times with phosphate buffer saline (PBS, 137 mM NaCl, 2.7 mM KCl, 10 mM Na 2 HPO 4 , 2 mM KH 2 PO 4 ; pH 7.4). Mycoplasma pellets were resuspended in PBS buffer containing 5% (w/v) skim milk and incubated for 1 h at 37˚C. After centrifugation, cells were re-suspended in mouse anti-rPDHA (1:100) or anti-rPDHB serum (1:100) and incubated for 1 h at 37˚C. Cells were washed three times with centrifugation and incubated with fluorescein isothiocyanate (FITC)-conjugated goat anti-mouse IgG (Sigma; 1:200) for 1 h at 4˚C. 
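Returning to the cloning step described earlier in this section: mycoplasma TGA (tryptophan) codons must be recoded to TGG before expression in E. coli. The sketch below only illustrates how such in-frame TGA codons can be located and recoded in silico; it is not the authors' primer-design or overlapping-PCR procedure, and the example sequence is hypothetical.

```python
def recode_tga_to_tgg(orf):
    """Recode in-frame TGA codons (tryptophan in mycoplasmas) to TGG for E. coli expression.

    orf -- coding sequence as a string, 5'->3', length a multiple of 3.
    Returns the recoded sequence and the 0-based nucleotide positions of the changed codons.
    """
    codons = [orf[i:i + 3].upper() for i in range(0, len(orf), 3)]
    changed = []
    for idx, codon in enumerate(codons[:-1]):   # leave the final stop codon untouched
        if codon == "TGA":
            codons[idx] = "TGG"
            changed.append(idx * 3)
    return "".join(codons), changed

# Hypothetical mini-ORF: one internal TGA followed by a TAA stop codon.
seq, positions = recode_tga_to_tgg("ATGTGACTTTAA")
print(seq, positions)  # ATGTGGCTTTAA [3]
```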
After washing three times, pellets were re-suspended in PBS, spread onto glass slides, and observed by fluorescence microscope (Ti-S; Nikon, Tokyo, Japan). Mouse antiserum against M. gallisepticum rEno was used as positive control [22]. We previously found that M. synoviae fructose-bisphosphate aldolase (FBA) is a cytoplasmic protein and is not on the membrane surface [40]. M. gallisepticum FBA had the same distribution in M. gallisepticum cells as in M. synoviae (data not shown). Therefore, mouse antiserum against M. gallisepticum recombinant FBA (rFBA) and non-immunized mouse serum were used as negative controls. To detect the subcellular localization of PDHA and PDHB in M. gallisepticum, membrane and cytoplasmic protein fractions of M. gallisepticum were extracted using ReadyPrep protein extraction kits (Membrane I; Bio-Rad, Hercules, CA, USA) according to the manufacturer's instruction. Total cell proteins were prepared by sonic disruption of M. gallisepticum bacteria using an ultrasonic cell disruptor (Jingxin, Shanghai, China). Proteins were quantified using BCA protein assay kits (Pierce). Western blots were performed using 8 μg total cell proteins, membrane proteins, or cytoplasmic proteins as previously described [18]. Proteins were separated by SDS-PAGE and transferred to nitrocellulose (NC) membranes (Bio-rad). NC membranes were incubated with mouse anti-rPDHA or anti-rPDHB serum at 1:1,000 dilution. After washing three times with PBST (PBS buffer containing 0.05% Tween-20), NC membranes were incubated in horseradish peroxidase (HRP)-conjugated goat anti-mouse immunoglobulin G (IgG; 1:8,000; Sigma-Aldrich) and visualized with chemiluminescence (ECL) substrate kits (Thermo Fisher). Mouse anti-rEno or anti-rFBA sera were used as controls. Complement-dependent bactericidal assays Bactericidal assays were performed using mouse anti-rPDHA or anti-rPDHB serum as previously described [23]. Each antiserum was diluted to an ELISA titre of 1:5,000, 40 μL of anti-rPDHA or anti-rPDHB mouse serum, or a combination of each 20 μL anti-rPDHA and anti-rPDHB mouse serum were added with 120 μL M. gallisepticum suspension (5×10 3 CFU mL −1 ) and 40 μL complement serum (CVCC; 1:5), and incubated for 1 h at 37˚C. Subsequently, each 100 μL reaction mixture was spread onto mycoplasma agar plates in 60-mm dishes and cultured for colony counting. Mouse antiserum against M. gallisepticum whole cells was the positive control and non-immunized mouse serum was the negative control. Experiments were independently repeated three times. All mouse serum samples were inactivated by incubation for 30 min at 56˚C before use. Bactericidal percentages were calculated as: (1-CFU from antiserum treatment / CFU from non-immunized serum treatment) × 100%. Indirect immunofluorescence and inhibition assays To detect adherence of rPDHA and rPDHB to DF-1 cells, indirect immunofluorescence assays were performed. DF-1 cells were propagated on glass coverslips in 6-well cell culture plates (Corning) in DMEM with 10% FBS for 24 h. DF-1 cell monolayers were washed three times with PBS and incubated with rPDHA or rPDHB at 10 μg per well in DMEM for 1 h at 37˚C with 5% CO 2 for adhesion assays. DMEM without addition of recombinant protein was used as a negative control. For inhibition tests, each 10 μg rPDHA or rPDHB in DMEM was preincubated with respective mouse anti-rPDHA serum (1:100) or anti-rPDHB serum (1:100) at 37˚C for 1 h. The rPDHA and rPDHB treated with non-immunized mouse serum were used as negative controls. 
After incubation with proteins, cells were washed five times with PBS-1% BSA, fixed with 4% paraformaldehyde (pH 7.4, Sigma) for 20 min at room temperature and blocked with 1% (w/v) bovine serum albumin (BSA) in PBS for 2 h at 37˚C. Mouse anti-rPDHA or anti-rPDHB serum was diluted (1:1,000) in PBS-1% BSA buffer and added to the dishes for overnight incubation at 4˚C. After washing three times, dishes were overlaid with goat anti-mouse IgG (H+L)-DyLight 488 (Abbkine, 1:400) at 37˚C for 1 h and 10 μM 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate (DilC18 (3); Beyotime, Shanghai, China) was used to label cell membranes at room temperature for 10 min. To label cell nuclei, 0.1 μg/mL 4',6-diamidino-2-phenylindole (DAPI; Beyotime) was added at room temperature for 10 min. Cells were observed by laser scanning confocal microscope (LSM800; Zeiss, Oberkochen, German). All experiments were repeated in triplicate. Surface display assays To examine the effects of M. gallisepticum PDHA and PDHB on bacterial adhesion, we used a previously constructed surface display system for pIGN that contained InaZN as the anchoring motif and EGFP as the reporter [38]. With the pIGN system, the protein was expressed as a fusion with InaZN and EGFP, and displayed on the surface of E. coli BL21 (DE3) cells. This system was demonstrated for the detection of mycoplasma adhesion proteins. Full-length pdhA and pdhB gene fragments were inserted into pIGN at the BamH I/Xho I and BamH I/ EcoR I sites, respectively, and fusion proteins were expressed in E. coli BL21 (DE3) with 1 mM IPTG for 12 h at 37˚C. For adhesion assays, induced E. coli BL21 (DE3) cells containing pIGN-PDHA or pIGN-PDHB were washed with PBS, re-suspended in DMEM, and used to infect a monolayer of DF-1 cells for 2 h at 37˚C at 50 MOI. After removing non-adherent E. coli cells by washing with PBS, cells were fixed with 4% paraformaldehyde for 15 min and stained with DilC18(3) (Beyotime) and DAPI (Beyotime) as described above. Induced E. coli BL21 (DE3) cells harbouring pIGN were used as negative controls. For adhesion inhibition assays, induced E. coli BL21 (DE3) cells containing pIGN-PDHA or pIGN-PDHB were preincubated with mouse antisera against rPDHA or rPDHB for 1 h at 37˚C before adding to DF-1 cells. E. coli BL21 (DE3) cells treated with non-immunized rabbit serum were used as negative controls. After staining with DilC18 (3) and DAPI, cells were observed using a fluorescence microscope (Ti-S; Nikon). All experiments were repeated in triplicate. Adherence inhibition assay on colony counting Inhibition of M. gallisepticum adhesion to DF-1 cells by mouse anti-rPDHA or anti-rPDHB serum was identified by colony counting assays in plates as previously described, with some modifications [22]. M. gallisepticum strain R low at mid-logarithmic phase was collected by centrifugation at 4000 ×g for 10 min at 4˚C, and treated with 100 μL mouse anti-rPDHA or anti-rPDHB serum at the same antibody titre (1:500), or a combination of each 50 μL mouse antiserum (1:500 antibody titre) for 1 h at 37˚C. Mouse anti-M. gallisepticum serum and nonimmunized mouse serum were used as positive and negative controls. Monolayer DF-1 cells in 24-well plates (Corning) were washed three times with PBS and infected for 2 h with serumpre-treated M. gallisepticum bacteria at 200 MOI. M. gallisepticum treated with non-immunized serum was used as the negative control. 
After infection, cells were washed three times with PBS and dissociated with 0.05% trypsin for 10 min at 37˚C. Cell lysates were serially diluted and plated onto mycoplasma solid medium for 5-7 days at 37˚C with 5% CO 2 . Mycoplasma colonies were counted and inhibition of mycoplasma adherence to DF-1 cells by antisera was calculated as: (1-CFU from antiserum treatment/CFU from non-immunized serum treatment) × 100%. All experiments were done in triplicate. Binding of rPDHA and rPDHB to chicken plasminogen To test the binding activity of rPDHA and rPDHB to chicken plasminogen (cPlg), Western blots and ELISA assays were conducted as previously described [18]. For Western blots, 4 μg of M. gallisepticum total proteins, purified rPDHA, and rPDHB were subjected to SDS-PAGE respectively and transferred to NC membranes. Membranes were blocked with 5% skim milk in PBST at room temperature for 40 min and washed three times with PBST. Membranes were incubated with 10 μg/mL cPlg (Cell Sciences, Canton, MA, USA) in PBST and stored for 2 h at 37˚C. Membranes not incubated with cPlg were used as negative controls. After washing, membranes were incubated with rabbit anti-cPlg IgG fraction polyclonal antibody (1:1000; Cell Sciences) for 2 h at 37˚C. After additional washing, membranes were incubated with goat anti-rabbit IgG-HRP (1:8000; Sigma-Aldrich) for 1 h at 37˚C and visualized with an ECL substrate kit (Thermo Fisher). For ELISA assays, ELISA plates were coated with 1 μg per well purified rPDHA or rPDHB protein, or a mixture of rPDHA and rPDHB in equal quantities of 0.5 μg per well and incubated overnight at 4˚C. M. gallisepticum total proteins (1 μg per well) and BSA (1 μg per well) were used as positive and negative controls respectively. After washing three times with PBST, wells were blocked with PBST containing 5% skim milk for 2 h at 37˚C. Serially diluted cPlg (0, 0.015, 0.03, 0.06, 0.125, 0.25, 0.5 μg or 1.0 μg per well) in PBST was added to wells and incubated for 2 h at 37˚C. After washing, plates were incubated with rabbit anti-cPlg polyclonal antibody (1:2000 for 1.5 h; Cell Sciences) at 37˚C. After washing, plates were incubated with HRP-conjugated goat anti-rabbit IgG (1:5,000; Sigma-Aldrich) for 1 h at 37˚C. Plates were incubated with TMB substrate solution (Tiangen) and the absorbance at OD 450 was measured. All experiments were done in triplicate. Statistical analysis Data are expressed as means ± SD for adhesion and adhesion inhibition assays. Student's t-test and two-way ANOVA were performed with the software package in GraphPad Prism version 6 (La Jolla, CA, USA). Differences were considered statistically significant at p < 0.05 or very significant at p < 0.01, p < 0.001 or p < 0.0001. Expression, purification and antibody production of M. gallisepticum rPDHA and rPDHB Full-length pdhA (1080 bp) and pdhB (978 bp) gene fragments were obtained from overlapping PCR amplification (Fig 1A), cloned into pET-28a (+) and transformed into E. coli BL21 (DE3) cells. IPTG was used to induce expression of rPDHA and rPDHB, which were subjected to SDS-PAGE and showed approximate molecular masses of 42 kDa for rPDHA and 38 kDa for rPDHB (Fig 1B, lanes 2 and 3). Purified rPDHA and rPDHB proteins were also obtained ( Fig 1B, lanes 4 and 5). After three immunizations, mouse antisera against rPDHA or rPDHB were collected and the antibody titres were determined as 1:51,200 for rPDHA and 1:204,800 for rPDHB in an ELISA assay. 
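Both the bactericidal rate and the adherence-inhibition rate defined in the Methods above reduce to the same CFU arithmetic, (1 − CFU_antiserum / CFU_non-immunized) × 100%. The sketch below illustrates that calculation, together with the Student's t-test step from the statistical analysis, on hypothetical triplicate counts; the numbers are not the study's data.

```python
from statistics import mean
from scipy import stats  # used only to illustrate the Student's t-test step

def inhibition_percent(cfu_antiserum, cfu_control):
    """Percentage reduction in colony-forming units relative to the non-immunized control."""
    return (1 - cfu_antiserum / cfu_control) * 100

# Hypothetical triplicate CFU counts (not the study's data).
antiserum_cfu = [55, 60, 52]
control_cfu = [110, 105, 98]

rates = [inhibition_percent(a, c) for a, c in zip(antiserum_cfu, control_cfu)]
print(f"mean inhibition: {mean(rates):.1f}%")

# Two-sample Student's t-test of the raw CFU counts.
t, p = stats.ttest_ind(antiserum_cfu, control_cfu)
print(f"t = {t:.2f}, p = {p:.4f}")
```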
Enzymatic activities of rPDHA and rPDHB The enzymatic activities of rPDHA and rPDHB were measured by detecting reduction of 2,6-DCPIP at OD 600 . The rPDHA, rPDHB and a mixture of rPDHA and rPDHB at a 1:1 molar ratio were tested for enzymatic activity. PDHc from porcine heart and reaction buffer without additions were the positive and negative controls, respectively. Alone, rPDHA had no detectable catalytic activity and rPDHB displayed weak catalytic activity. High activity was noted when rPDHA was mixed with rPDHB at a 1:1 molar ratio, similar to the positive control porcine heart PDHc (Fig 2). The distribution of PDHA and PDHB in M. gallisepticum was determined by Western blots. PDHA and PDHB were mainly distributed in the cytoplasmic component of M. gallisepticum, with a small amount in the membrane component (Fig 3B). The results suggested that Immunogenicity analysis of rPDHA and rPDHB was determined by Western blot and ELISA assays. Western blot analyses with M. gallisepticum-infected chicken sera showed positive bands for all detected concentrations of rPDHA (Fig 3C, lanes 1 to 4) and rPDHB (Fig 3C, lanes 6 to 9) with band densities in a dose-dependent pattern. No band was present after interaction with M. gallisepticum-negative chicken serum. In an ELISA assay, mouse anti-rPDHA or anti-rPDHB sera were diluted to the same antibody titre of 1:51,200 and then subjected to two-fold serial dilution for immunogenicity analyses. Anti-rPDHA and anti-rPDHB mouse sera with the same antibody titre had similar reactivity against M. gallisepticum whole antigen ( Fig 3D). These results suggested that both M. gallisepticum PDHA and PDHB were immunogenic proteins. Antisera bactericidal assays To evaluate the protective abilities of M. gallisepticum PDHA and PDHB, we performed complement-dependent antisera bactericidal assays. As shown in Table 2, the positive control of mouse anti-M. gallisepticum serum had a bactericidal rate of 91.0% (Table 2), mouse anti-rPDHA serum alone had a bactericidal rate of 48.0% and anti-rPDHB alone had a rate of 75.1%. A combination of rPDHA and rPDHB antisera had a bactericidal rate of 65.2%, similar to the means of 48.0% and 75.1%, respectively. Compared with non-immunized mouse serum, both mouse anti-rPDHA and anti-rPDHB sera showed significant bactericidal activity (p< 0.001). Adherence of rPDHA and rPDHB to DF-1 cells by indirect immunofluorescence assays Adhesion of rPDHA and rPDHB to DF-1 cells was detected by indirect immunofluorescence assays. Both rPDHA ( Fig 4A) and rPDHB (Fig 4E) adhered to DF-1 cells and adherence was effectively inhibited by the corresponding mouse anti-rPDHA or anti-rPDHB sera (Fig 4B and 4F). Negative serum from non-immunized mice did not affect adherence of rPDHA and rPDHB to DF-1 cells (Fig 4C and 4G) and cells incubated with antiserum only showed no green fluorescence (Fig 4D and 4H). These results indicated that M. gallisepticum PDHA and PDHB were potential adhesion-related proteins. In adhesion and inhibition assays, recombinant E. coli containing pIGN-PDHA or pIGN-PDHB adhered to DF-1 cells (Fig 5A and 5D), and adherence of recombinant E. coli was significantly restrained by mouse antiserum against rPDHA or rPDHB (Fig 5B and 5E). E. coli containing pIGN showed no adhesion ability (Fig 5G). These results suggested that M. gallisepticum PDHA and PDHB mediated adhesion of bacteria to host cells. Adherence inhibition of M. 
gallisepticum to DF-1 cells by mouse antisera against rPDHA or rPDHB Adherence inhibition assays were performed using Mycoplasma-free DF-1 cells. M. gallisepticum pre-treated with mouse anti-M. gallisepticum serum or non-immunized mouse serum were used as controls. Adherence inhibition rates (%) by mouse anti-rPDHA or mouse anti-rPDHB serum of M. gallisepticum adhering to DF-1 cells were 30.2% (p < 0.001) and 45.1% (p <0.001) respectively (Table 3). The adherence inhibition rate of a combination of rPDHA and rPDHB antisera (1:250 antibody titre of each serum) was 72.5% (p< 0.001), which was higher than that of rPDHA or rPDHB antiserum treatment alone. This result suggested synergistic effects of rPDHA and rPDHB antisera on adherence inhibition capacity. Binding activity of rPDHA and rPDHB to cPlg Western blots demonstrated that M. gallisepticum total proteins, purified rPDHA, and rPDHB ( Fig 6A, lanes 1, 2 and 3) interacted with cPlg but not with the rabbit polyclonal antibody against cPlg alone (Fig 6A, lanes 4, 5 and 6). In addition, there were many positive bands including the bands for PDHA (39 kDa) and PDHB (34 kDa) when M. gallisepticum total proteins interacted with cPlg. In an ELISA binding assays (Fig 6B), OD 450 values from coating with M. gallisepticum total proteins, rPDHA, rPDHB, or a mixture of rPDHA and rPDHB were significantly higher than that from coating with BSA (p < 0.0001). This result indicated that M. gallisepticum total proteins, rPDHA and rPDHB bound to cPlg in a dose-dependent manner. In addition, the binding ability of rPDHB was significantly higher than that of rPDHA (p< 0.0001). The mixture of rPDHA and rPDHB showed moderate Plg-binding ability compared with rPDHA or rPDHB alone. Discussion Due to limited genome sizes, mycoplasmas possess limited biosynthetic and metabolic capabilities and are parasites, using infected host cells for their nutrition [41,42]. To use limited genomic resources more effectively, many mycoplasma glycolytic enzymes are typically expressed on the surface. They function as multifunctional enzymes, in adhesion to host epithelia, binding to host components, colonization, persistence and invasion of the host tissues [24,43]. PDHA and PDHB of M. gallisepticum were confirmed as surfaced-localized proteins, which makes it possible for them to participate in cytoadherence. However, most of the surfaceexposed multifunctional enzymes including Eno, PDHA and PDHB, do not have typical signal peptides or membrane-anchoring mechanisms and are therefore described as "anchorless" or "non-classically secreted" proteins [44]. The mechanism of surface display for these enzymes remains unclear. A study showed that some cytoplasmic proteins including GroEL, DnaK, Eno, PDHB and PDHD are secreted in large amounts during the late stationary phase of Bacillus subtilis, and release of these proteins is not due to gross cell lysis but rather a process in which the protein domain structure is a contributing factor [44]. Another study indicated that Streptococcus pneumoniae Eno re-associates on the bacterial surface after secretion [45], suggesting a potential mechanism for its surface localization. However, whether M. gallisepticum PDHA or PDHB are secreted or re-associated on the bacterial cell-surface remains unknown. We collected supernatant proteins from M. gallisepticum liquid culture as described for B. subtilis with some modifications [44]. 
The supernatant was filtered (0.1 μm Millex; Millipore) to remove residual cells and concentrated by TCA-acetone precipitation. A mixture of secreted and medium proteins was obtained. Western blots with mouse anti-PDHA, anti-PDHB or anti-Eno serum showed no positive bands. Detecting secreted proteins from mycoplasma is not easy. A number of medium proteins (from horse serum, yeast extract or bouillon) are in M. gallisepticum supernatants from conventional cultures, which may make mycoplasma secreted proteins hard to be detected. Therefore, finding a low or no-protein medium that allows M. gallisepticum to grow is necessary, which needs to be further investigated. Adhesion of mycoplasmas to host cell surfaces is a necessary stage for infection and parasitization. Through adhesion, mycoplasmas obtain essential nutrients such as carbohydrates, lipids and proteins from the host. Searching for adhesion-related proteins and elucidating their functions in cytoadherence are of great significance. In M. pneumoniae, PDHA and PDHB bind to HeLa [35] and A549 cells (a human lung carcinoma cell line) [36] in ELISA assays with HeLa cell-coated or A549 cell-coated plates. Adherence is significantly reduced by corresponding antisera. In M. gallisepticum, although some cytoadherence-associated proteins have been found, PDHA and PDHB have not been reported. In our study, adhesion assays were conducted by three different methods, indirect immunofluorescence assay, pIGN surface-displaying system, and colony counting assays. All methods confirmed that PDHA and PDHB were cytoadherence-associated proteins of M. gallisepticum and adherence was inhibited by corresponding antisera. We visualized the cytoadherence of PDHA and PDHB in M. gallisepticum using indirect immunofluorescence assays and a pIGN surface-displaying system. By these two methods, the negative controls of His-tagged FBA protein and E. coli containing pIGN (InaZ-EGFP) exhibited no adhesion, supporting the positive results for PDHA and PDHB. In colony counting assays, anti-rPDHA serum alone caused 30.2% of adhesion inhibition rates and anti-rPDHB serum alone caused 45.1%, indicating that both PDHA and PDHB might be Column FITC, rPDHA or rPDHB protein labelled with antisera and goat anti-mouse FITC conjugate. Column Dil, cell membrane labelled using DilC18 (3). Column DAPI, cell nuclei labelled by DAPI; Column merge, merge of fluorescent images. https://doi.org/10.1371/journal.pone.0208745.g004 Characterization of Mycoplasma gallisepticum PDHA and PDHB Mixtures of PDHA and PDHB antisera caused higher adhesion inhibition rates of 72.5%, suggesting the combination of PDHA and PDHB antisera might have higher protective ability. In addition, disruption of M. agalactiae PDHB significantly reduces invasiveness in HeLa cells [46]. The absence of M. agalactiae PDHB influences initial colonization and systemic spreading of M. agalactiae during experimental infection of sheep [47], suggesting that PDHB may help microorganisms colonize and invade host cells and cross host tissue barriers. M. gallisepticum is confirmed to adhere to and invade cultured human epithelial cells (HeLa-229) and chicken embryo fibroblasts [48,49]. Since cytoadherence is the first step for pathogens to invade host cells, M. gallisepticum PDHA and PDHB may also be involved in invasion into host cells. However, whether M. gallisepticum PDHA or PDHB is related to the pathogen invading the host remains to be further explored. 
Host ECM proteins including Plg, Fn, β-actin, lactoferrin, laminin, and vitronectin were confirmed to be glycolytic enzyme-binding-related host proteins. Interactions of microbial proteins with host ECM proteins contributes to colonization [50][51][52] and dissemination [53,54] and are important factors for virulence strategies [55]. Plg is a 92-kDa plasma protein and is the zymogen of plasmin, which is a serine protease that dissolves fibrin blood clots [56]. In M. gallisepticum and some other bacteria, Plg-binding is reported to markedly increase the cytoadherence to and invasion of host cells by bacteria [20,49,50]. Moonlighting proteins are hypothesized to help microorganisms cross host tissue barriers by binding to host cell Plg. Our study confirmed the binding ability of M. gallisepticum PDHA and PDHB to cPlg by both Western blot and ELISA. Whether Plg-binding is involved in cytoadherence and invasion of PDHA and PDHB to DF-1 cells, and whether PDHA and PDHB bind to other ECM proteins remain to be further explored. When M. gallisepticum total proteins interacted with cPlg, many positive bands appeared (Fig 6A, lane 1), implying many proteins in M. gallisepticum were Plg-binding proteins. In addition, a band between 55 and 70 kDa showed strong reaction with cPlg, while the PDHA (39 kDa) and PDHB (34 kDa) in M. gallisepticum showed weak band, which gave us the information that PDHA and PDHB may not be the major Plg-binding proteins in M. gallisepticum, and there are some other more important Plg-binding proteins need to be further investigated. Multifunctional enzymes were confirmed as immunogenic proteins involved in modulating the host immune system. Vibrio parahaemolyticus Eno is reported to be a protective antigen [57] and FbaA and GAPDH of virulent pneumococci were identified as cross protective antigens, protecting mice from respiratory challenges [58]. Immunogenicity analysis showed that both rPDHA and rPDHB were immunogenic proteins in reactions with M. galliseticuminfected chicken serum. Complement-mediated bactericidal assays showed bactericidal rates of 48.0% for anti-rPDHA serum and 75.1% for anti-rPDHB serum, indicating that both PDHA and PDHB were protective antigens of M. gallisepticum. A combination of rPDHA and rPDHB antisera had a bactericidal rate of 65.2%, similar to the mean of 48.0% and 75.1%, suggesting that in combination, the two sera did not interfere with each other for bactericidal activity. From prokaryotes to mammals, PDHc E1 typically contains two structural forms: one of two identical α subunits and another of four subunits of α2 and β2. Since the catalytic action relies on the combination of homodimers or heterotetramers, the phosphorylation or mutation of specific amino acid residues of subunits or disruption or deletion of PDHc E1 subunits is sufficient to inactivate the enzyme [39,59]. In our study, rPDHA and rPDHB enzymatic activities were detected using 2,6-DCPIP assays as previously described [39]. For rPDHA or rPDHB alone, no or very weak catalytic activity was detected. However, when rPDHA and rPDHB were combined, strong enzymatic activity was observed. The results indicated that the two subunits of PDH E1 were indispensable and worked together. In conclusion, we characterized the enzymatic activity of rPDHA and rPDHB, and found that both M. gallisepticum PDHA and PDHB were surface localized and immunogenic proteins. 
Furthermore, both PDHA and PDHB were identified as Plg-binding proteins involved in cytoadherence, suggesting they may be involved in bacterial colonization and dissemination in host cells and may contribute to mycoplasma virulence. Ethics approval and consent to participate The animal experiments were performed in accordance with the Institutional Animal Care and Use Committee (IACUC) guidelines set by Shanghai Veterinary Research Institute, the Supporting information S1 Fig. SDS-PAGE of recombinant E. coli containing pIGN, pIGN-
GDSC SMLM: Single-molecule localisation microscopy software for ImageJ Single-molecule localisation microscopy (SMLM) uses software to extract super-resolved positions from microscope images of fluorescent molecules. These localisations can then be used to render super-resolution images or analysed to extract information about molecular behaviour. The GDSC SMLM software provides a set of tools for analysing SMLM data in a single cross-platform environment. The software identifies fluorescent molecules in raw microscope images and localises their positions using stages of spot detection, spot fitting and spot rejection. The resulting localisation data set can then be visualised, cropped and filtered. A suite of downstream analysis tools enable the user to perform single-particle tracking, cluster analysis and drift correction. In addition, GDSC SMLM also provides utility tools that enable modelling of EM-CCD and sCMOS cameras as well as point spread functions (PSFs) for data simulation. The software is written in Java and runs as a collection of plugins for the ImageJ software. Introduction Single-molecule localisation microscopy (SMLM) uses image processing software to extract super-resolved positions of individual fluorescent molecules from diffraction-limited time series of microscope images [1][2][3] . Depending on the sample type, single-molecule localisation data sets can be used to reconstruct pointillist super-resolution images of cellular structures or extract information about molecular diffusion. A range of SMLM-based techniques now exist, each with differing strategies for temporally separating fluorescence emission from closely spaced fluorescent molecules 4 . The application of these techniques relies heavily on the availability and usability of SMLM analysis software. Over the past decade, many research groups have sought to develop their own custom software solutions for analysis of single-molecule data to maximise the flexibility and clarity of analyses which is otherwise not achievable with proprietary software. In a similar vein, we aspired to create an all-in-one solution for our data analysis that required no programming experience from the end user and could be easily expanded as new techniques and methodologies emerged. Here, we describe the resulting Genome Damage and Stability Centre (GDSC) SMLM software for single-molecule localisation and analysis, available in a single cross-platform software environment as a set of plugins for ImageJ 5 (RRID:SCR_003070). The GDSC SMLM [40] software encompasses a single-molecule fitting plugin, Peak Fit, which can determine the position of fluorescent molecules appearing as spots in raw localisation microscopy image sequences. The performance of this plugin was ranked as one of the best-in-class for the 2D data sets in the second localisation microscopy software challenge 6 . It uses a hybrid approach to fit spot candidates that combines simultaneous multi-emitter fitting [7][8][9] and single-emitter fitting 10 . Data sets of single molecule positions and associated metrics (e.g. localisation precision) can then be visualised in table format or rendered into super-resolution images. A wide range of supplementary plugins are also available for quantitative analysis of SMLM data. For example, subsets of the localisation data sets can be produced using filters or by cropping using regions of interest on rendered images. 
Plugins for more in-depth analyses are also provided for techniques such as single-particle tracking, clustering and cluster visualisation [11][12][13] , pair-correlation photoactivated localisation microscopy (PC-PALM) [14][15][16][17] , time-correlated photoactivated localisation microscopy (tcPALM) 18 , cross-talk activation analysis 19 and Fourier image resolution 20 . Fitting and analyses capabilities are supported by a suite of calibration and modelling plugins which allow analysis of the noise and gain of electron multiplying charge-coupled device (EM-CCD) and scientific complementary metal-oxide-semiconductor (sCMOS) cameras for use in maximum likelihood fitting models, as well as construction of point spread function (PSF) models from fluorescent bead calibration images. Finally, a group of simulation plugins allow users to create SMLM-like camera images for quantitative testing of models and predictions. The GDSC SMLM software has been successfully used in recent single-molecule studies to quantify chromatin association of DNA binding proteins in fission yeast 21-23 , visualise clustering of glucose receptors in adipocytes 24 and calculating single-molecule dwell times of EB3 on microtubules in vitro 25 . In this paper we provide examples of elementary use cases that describe fitting localisation image data, handling localisation data, image rendering and data analysis for single molecule tracking experiments. The software is supported by an online user manual of the available functionalities, providing comprehensive documentation including a workflow for the optimisation of fitting parameters for typical imaging conditions. Analysis methods Single molecule image data consists of single point sources of light which are then subjected to the point spread function (PSF) of the microscope. The Peak Fit plugin uses a 2D Gaussian function to model the PSF and is suitable for PSFs that appear as spots on the image. Figure 1 shows an overview of the image processing pipeline. Fitting the image data involves identification of candidate spots; fitting the spots using the PSF; and filtering the results to reject poor fits. Image frames are processed independently allowing parallel processing. The identification stage finds spot candidates within a box region using non-maximum suppression 26 . Typically the region is a square of edge length 2n + 1 where n = ⌊Aσ 0 ⌋, σ 0 is the initial Gaussian width and A is the search parameter. Noise can be reduced using a smoothing filter prior to identification, for example a mean or Gaussian filter. Fitting uses a Gaussian function as described in Smith et al., 2010 27 to model the signal for each pixel as with: x and y the centre of the k th pixel; u k (x, y) the expected value in the k th pixel; B the background level; Signal the total volume of the Gaussian; ∆E x (x, y) the integral of the Gaussian 2D function over the x-dimension; and ∆E y (x, y) the integral of the Gaussian 2D function over the y-dimension. The fitting stage is a single pass algorithm which visits each candidate only once. The spot candidates are ranked by intensity and processed in order during fitting. Fitting uses a box region around the candidate, typically the region is a square of edge length 2n+1 where n = ⌊Bσ 0 ⌋ and B is the fitting parameter. For each candidate the algorithm selects from several possible fitting options depending on whether other candidates are within and/or adjacent to the fitting region. 
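The pixel model described above, u_k(x, y) = B + Signal · ΔE_x(x, y) · ΔE_y(x, y), with ΔE the per-axis integral of the Gaussian over the pixel, can be written with error functions as in Smith et al., 2010. The sketch below is an illustration of that model in Python, not the GDSC SMLM implementation itself (which is written in Java).

```python
import math

def delta_e(coord, centre, sigma):
    """Integral of a unit 1D Gaussian over the pixel [coord - 0.5, coord + 0.5]."""
    s = math.sqrt(2.0) * sigma
    return 0.5 * (math.erf((coord - centre + 0.5) / s) - math.erf((coord - centre - 0.5) / s))

def expected_pixel_value(x, y, background, signal, x0, y0, sx, sy):
    """Expected photon count u_k(x, y) for the pixel centred at integer coordinates (x, y)."""
    return background + signal * delta_e(x, x0, sx) * delta_e(y, y0, sy)

# Example: a 1000-photon spot at (3.2, 4.7) with sigma = 1.1 px on a background of 5 photons/px.
print(expected_pixel_value(3, 5, 5.0, 1000.0, 3.2, 4.7, 1.1, 1.1))
```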
The target candidate is always fit and the XY position is allowed to freely move. In high density single-molecule data, it is possible for multiple emitters to be present in the fit region -these candidates are known as neighbours. These are included in the fit if their intensity is within a fraction of the intensity of the target; typically the neighbour height is 30%. Neighbour candidates have their XY position bounded by a shift of ±1 pixel. Any candidates that have previously been visited use their known fit parameters to initialise fitting; unprocessed candidates use an estimation routine to initialise the fit parameters using the peak height and expected PSF width. The fit region is expanded by 50% to define an area of pixels outside the region. If these contain previously fitted spots the PSF of each spot is subtracted from the data (these are precomputed neighbours) to remove bright pixels at the border of the fit region. If fitting using multiple PSFs fails any previously fitted neighbours have their PSF subtracted from the data before a second fit of only the target spot. In the event of low density data with no neighbour candidates the algorithm defaults to fitting only the target candidate. The Levenberg-Marquardt algorithm (LMA 28,29 ) is used to fit the PSF. A camera calibration allows converting the input data to photons for fitting using maximum likelihood estimation (MLE) for Poisson distributed data 30 . If calibration is not available then fitting uses a least-squares estimator (LSE). If the fitting successfully converges the target spot may be refit as a pair of spots (doublet). This is only performed if the fit residuals (the difference between fitted function and the actual data) are asymmetric. Asymmetry analysis is performed using an adaption of the method detailed in the rapidSTORM 10 user documentation and is redescribed here for clarity. The residuals are divided into four quadrants surrounding the fit centre labelled clockwise A to D. Opposing quadrants are summed and the absolute difference divided by the total sum Analysis is performed using axes XY centred on the fit location to define the quadrants, and repeated with the axes rotated 45 degrees. The candidate is refit as two spots if the maximum residuals score is above a threshold. The pair of spots are accepted if they pass the configured spot filter and the fit score is improved. Improvement is measured for least squares estimation using the adjusted coefficient of determination 31 ; maximum likelihood estimation methods use the Bayesian Information Criterion 32 . The fitting stage can perform up to four fits per target: candidate fit with neighbours (multi); candidate fit as doublet with neighbours (multi doublet); candidate fit (single); and candidate fit as doublet (single doublet). Fits are performed as required. If the fit with neighbours is accepted then the single fit is not performed. If there are no neighbours or the neighbour fit failed then the single fit is performed. Success for either the multi or single fit may trigger a doublet fit depending on the residuals analysis. The filtering stage uses various filters based on the fit parameters to reject or accept the spot. The initial standard deviation σ 0 is compared to the fitted standard deviation and assessed using a minimum and maximum width factor. The initial target position is compared to the fitted position and assessed using a shift factor. 
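Returning to the residual asymmetry test used to trigger doublet fitting: the quadrant bookkeeping can be sketched as below, assuming the residual image and the fitted centre are already available. The normalisation and masking details are simplifications of ours, not the software's exact code.

```python
import numpy as np

def asymmetry_score(residuals, cx, cy):
    """Quadrant asymmetry of fit residuals around the fitted centre (cx, cy).

    Opposing quadrants are summed and the absolute difference is divided by the total;
    the analysis is repeated with the axes rotated 45 degrees and the maximum returned.
    """
    ys, xs = np.indices(residuals.shape)
    dx, dy = xs - cx, ys - cy

    def score(pair_a, pair_b):
        a, b = residuals[pair_a].sum(), residuals[pair_b].sum()
        total = abs(residuals).sum()
        return abs(a - b) / total if total > 0 else 0.0

    # Axis-aligned quadrants: (A + C) versus (B + D).
    s1 = score((dx >= 0) == (dy >= 0), (dx >= 0) != (dy >= 0))
    # Quadrants rotated by 45 degrees: split on the signs of (dx + dy) and (dx - dy).
    s2 = score((dx + dy >= 0) == (dx - dy >= 0), (dx + dy >= 0) != (dx - dy >= 0))
    return max(s1, s2)

# A candidate would be refit as a doublet when this score exceeds the residuals threshold.
```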
The signal-to-noise ratio (SNR) is computed using the mean signal of the Gaussian within the region defined by the peak width at half maxima (PWHM), and the noise estimated from the fitted background assuming a Poisson noise model with added Gaussian read noise of the camera with: I the mean Gaussian intensity; I the Gaussian intensity in photons; σ the X or Y standard deviation; r the Mahalanobis distance for a 2D normal distribution that contains 50 percent of the integral (r = 2 ln(1 ) p − − ; p = 0.5); πσ x σ y r 2 the area of the Gaussian containing half the signal; B the local background in photons computed in a local background region; var i the variance at pixel i in counts; g i the gain at pixel i; and n the number of pixels in the background region. The background region size is 2w + 1 defined as w = ⌈rσ⌉ clipped to [1,3] in each dimension. For single spots the local background is the fitted background plus the contribution to the local region from precomputed neighbours. For multi spots the background is the mean of the input data in the local background region with the candidate spot subtracted. The SNR must pass a minimum threshold. The XY localisation precision is computed using the Mortensen formula 33 , or derived from inversion of the Fisher information matrix for a Poisson process (see Smith et al., 2010, SI eq. 9 27 ). The individual filters are combined to create a composite selection criteria used to accept the spot parameters. Candidates are processed per frame in order of intensity and processing is halted based on stopping criteria. The fail limit specifies the number of consecutive failures that are allowed before stopping. The pass rate specifies the fraction of fits that must be successful otherwise processing is stopped. If the stopping criteria is reached no further unvisited candidates will be fit. However any low ranking candidates that were fit as neighbours will be processed as the main fitting target to refine the parameters generated when the fitting region was not centred on the spot. Implementation The GDSC SMLM software (RRID:SCR_022717) is written in Java and structured into two components: GDSC SMLM contains all the code for single molecule analysis; and GDSC Core [41] contains general utilities and is used by software other than the SMLM code 34 . Each component is divided into two modules: a base module contains the analysis functionality and can be used directly as a library; and a module that requires the ImageJ 5 library as a dependency and is intended to be executed by ImageJ in a graphical environment. The GDSC SMLM ImageJ module contains plugins that function to collect input parameters, execute the library routines and present results. The software uses a data model of localisation microscopy results. The model contains the XYZ coordinates of each molecule and the associated data generated when processing raw image data such as signal intensity, noise and localisation precision. The model also contains metadata describing the microscope used in the data acquisition such as the image bounds, pixel magnification, camera specification and PSF information. The calibration is used to map the raw image data such as pixel position and camera counts to physical units such as position in nm and intensity in photons. The data model provides an application programming interface (API) to access data in specified units allowing storage-agnostic data analysis. Operation The GDSC SMLM software requires Java 1.8. 
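The SNR filter above compares the mean Gaussian signal inside the region holding half the integral with the background noise; the garbled radius expression in the text is the Mahalanobis distance r = sqrt(−2 ln(1 − p)) with p = 0.5. The sketch below shows that arithmetic; the software's full noise model additionally uses the per-pixel camera variance and gain, which is only approximated here by Poisson background noise.

```python
import math

def mean_signal_in_half_region(total_photons, sigma_x, sigma_y, p=0.5):
    """Mean Gaussian intensity inside the elliptical region containing a fraction p of the signal."""
    r = math.sqrt(-2.0 * math.log(1.0 - p))      # Mahalanobis distance enclosing fraction p
    area = math.pi * sigma_x * sigma_y * r * r   # area of the region containing fraction p
    return (p * total_photons) / area

def snr(total_photons, sigma_x, sigma_y, local_background):
    """Illustrative SNR: mean signal in the half-integral region over Poisson background noise."""
    signal = mean_signal_in_half_region(total_photons, sigma_x, sigma_y)
    noise = math.sqrt(max(local_background, 0.0))
    return signal / noise if noise > 0 else float("inf")

print(snr(total_photons=800.0, sigma_x=1.2, sigma_y=1.2, local_background=10.0))
```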
There are no platform requirements beyond those required to run ImageJ and the software has been tested on Windows, Linux and Mac OS. The software is packaged into Java archives (jars) for the GDSC Core and GDSC SMLM components. There are a number of dependencies that are required at runtime. The software is distributed using an ImageJ update site which hosts all the required files to install the software into an instance of ImageJ. For example a user of Fiji 35 (RRID:SCR_002285) should run Help > Update and add the GDSC SMLM2 update site. This will install and regularly update the software to the latest version. The software can be installed manually by downloading the latest jar files from the update site here and placing them in the ImageJ plugins and jars directories. Install instructions are available in the online manual available here. The plugins are under the Plugins > GDSC SMLM menu and grouped by general functionality (see Table 1). A tools window can be opened that provides buttons to execute each of the plugins. This can be customised to change the order and available plugins by editing a configuration text file to allow grouping common plugins. The plugins have been designed to support the ImageJ macro recorder and batch execution in macros. Settings are collected using dialogs and a Help button will open a web page for the user manual describing all the parameters for the plugin. Dialogs may collect additional options for currently configured settings using context sensitive buttons. Analysis is performed on images or previously generated localisation data sets. The Peak Fit plugin is used to fit a 2D Gaussian PSF to single molecule imaging data. The plugin can be executed on the current image or against an image series in a specified folder. Precomputed results can be loaded from file. Custom file formats can be loaded using the Load Localisations plugin which reads any delimited text file using a configurable text parser. The Results Manager is used to load files to memory, display results and save analysis results to file. The software provides text and binary file formats supporting all localisation data and metadata. The analysis plugins operate on localisation data, without assumptions on the original image PSF, and may create graphic output, files or new data sets. Data sets may be exported in various formats for analysis in external software. Use cases The following use cases provide an introduction to functionality in the GDSC SMLM software. The sections detail fitting single molecule localisation data; and loading, displaying, analysing and saving localisation datasets. A data set containing example use case data is available here 36 . The data set provides an SMLM image, a fit settings template, and the results of fitting the image using the template settings (see the Data availability section for details). Fitting single molecule localisation data Fitting single molecule localisation data requires a series of input image frames. This can be a stack image open in ImageJ or a file series loaded from a folder. OME TIFF images too large to fit in memory can be opened using the TIFF Series Viewer plugin from the GDSC SMLM Tools menu. The image was opened in ImageJ and the Peak Fit plugin was run. The dialog contains settings for calibration of the input image, spot filtering, spot fitting, fit result filtering, results output and results preview. 
The preview option displays the results for the current frame and allows the settings and the image frame to be changed interactively. Calibration of the input image pixel size and exposure time is required to generate results in physical units. Details of the camera used to capture the image are required for maximum likelihood fitting. If the camera type is unknown then fitting is limited to least-squares estimation. CCD cameras require the camera bias, gain and read noise. sCMOS cameras require a camera model containing per-pixel calibration; a model can be generated from calibration images using the SCMOS Analysis plugin.

The localisations are fit using a 2D Gaussian function to model the PSF. The type of function can be selected and the PSF width parameters provided. The width can be estimated from observations of fixed fluorophores imaged at various z-depths. The value should represent the width of the in-focus PSF, i.e. the minimum of the width against z-depth profile. An approximate value, typically around 1 pixel, can be used to generate results, and the average width of high quality spots can then be used to refine the PSF width.

The spot filtering settings control identification of candidate spots. A wider smoothing filter will reduce the number of candidates by eliminating noise but may also merge close neighbours into a single candidate; a wider search width will reduce the number of candidates in noisy regions but may eliminate neighbours in dense regions; the border width prevents fitting of candidates near the edge of the image; and the fitting width controls the extent of the fit window around the spot. A wider fitting width will improve accuracy for isolated spots at the expense of speed; however, high density regions may be very slow if neighbours are included in the fit. Ideally the width should cover most of the PSF through the entire depth of field where spots appear as Gaussian peaks. The effect of changing the spot filter parameters can be explored dynamically on an image using the Spot Finder (Preview) plugin. This uses the same spot filter configuration and previews the candidates on the image.

Table 1. Plugin groups in the GDSC SMLM menu.
Fitting — Identification of localisations on an image.
Results — Loading, saving and management of localisation results sets.
Analysis — Analysis on localisations, for example single particle tracking, clustering and Fourier image resolution.
PC PALM — Pair correlation (PC) analysis.
Model — Simulate single-molecule images.
Calibration — Estimate point spread function widths and allow calibration of the imaging camera noise and gain.
Tools — Utility plugins for image manipulation.
Toolset — Install of the SMLM Toolset and configuration of the SMLM Tools window.

The fitting settings specify the fit solver and the fit engine configuration. The fit solver chooses the method used to fit the data. Least-squares fitting can be used without any camera calibration. The other methods require the camera information to create the probability model for fitting. There are several maximum likelihood estimation (MLE) methods available; the authors find the Levenberg-Marquardt method for Poisson distributed data 30 is suitable for most images as a compromise between speed and robustness. Further details of the fit solvers and their suitability for different data can be found in the user documentation. Fit solver configuration is collected using an additional options dialog, including parameters controlling the convergence criteria.
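As a rough illustration of the spot fitting stage, the following Python sketch performs least-squares fitting of a 2D Gaussian to a candidate region using SciPy. It is a simplified stand-in for the library's fit solvers (plain least squares rather than the Poisson MLE methods described above), and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian2d(params, xx, yy):
    """Elliptical 2D Gaussian with a flat background."""
    bg, amp, x0, y0, sx, sy = params
    return bg + amp * np.exp(-0.5 * (((xx - x0) / sx) ** 2 + ((yy - y0) / sy) ** 2))

def fit_spot(region, x0, y0, psf_sigma=1.0):
    """Fit one candidate spot in a small pixel region (2D numpy array).
    Initial estimates come from the candidate position and an approximate
    PSF width, mirroring the initialisation strategy described above."""
    yy, xx = np.mgrid[0:region.shape[0], 0:region.shape[1]]
    p0 = [region.min(), region.max() - region.min(), x0, y0, psf_sigma, psf_sigma]
    residuals = lambda p: (gaussian2d(p, xx, yy) - region).ravel()
    return least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt least squares
```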
Increasing the number of iterations can improve the number of fits that converge. High density data benefits from higher iteration counts at the expense of speed. Details of the parameters for each fit solver are in the user manual accessed from the Help button. The fail limit specifies how many candidates are allowed to be rejected before stopping fitting of the image frame. Processing also stops when the fraction of successful candidates is below the pass rate. Neighbours can optionally be included in the fit if they are above a height threshold relative to the candidate. A threshold set too low can include candidates that are image noise. Low neighbours typically do not affect the fit of a peak, whereas higher neighbours contain most of the signal in the fit region. The residuals threshold is for high density data. It controls how asymmetric a spot must be to refit as two spots; any spot with residuals above this threshold is refit as a doublet. Doublet fits are only accepted if they pass the results filter and the fit score is an improvement over the single fit. Lowering the neighbour height and residuals threshold impacts runtime, and these parameters can be adjusted by repeatedly fitting data and monitoring runtime and fitting performance. The duplicate distance is used to exclude any fit result close to an existing result in the frame, to eliminate drift from a candidate location to another spot in the fit region.

The result filter settings control selection of the fit results. Results must pass the configured criteria using measures such as how far the fit shifted from the candidate location, the signal-to-noise ratio (SNR), the fitted width compared to the initial width, and the estimated localisation precision. A simple filter rejects the fit result if any of the configured criteria are not satisfied. Alternatively it is possible to specify a smart filter that supports logical combinations (And, Or) to create complex filter logic (for details see the user manual). The SNR and precision filters use the signal and width of the fitted Gaussian and the background noise, and are the best filters to exclude poor fit results. The minimum width filter can be used to exclude fits that are too narrow to be a PSF and are false positive candidates from image noise. The maximum width filter can be used to limit the depth of field, since out-of-focus spots will have a wider PSF. If fitting diffusing molecules the spot may be wider due to motion blur, and the width filter should be configured wider to accommodate the PSF blur.
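A hedged sketch of the 'simple filter' logic described above, in which every configured criterion must pass; the field names and thresholds are illustrative and are not the software's defaults.

```python
def accept_fit(fit, max_shift=1.0, min_snr=10.0, width_range=(0.7, 2.0),
               max_precision_nm=40.0):
    """Composite accept/reject filter in the spirit of the simple filter:
    reject the fit result if any configured criterion is not satisfied."""
    checks = [
        fit["shift"] <= max_shift,                                 # shift from the candidate position (pixels)
        fit["snr"] >= min_snr,                                     # signal-to-noise ratio
        width_range[0] <= fit["width_factor"] <= width_range[1],   # fitted width / initial width
        fit["precision_nm"] <= max_precision_nm,                   # estimated localisation precision
    ]
    return all(checks)
```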
The results settings control the results output. The Log progress option will output verbose fitting information on each candidate; it can be used to gain information on fitting a specific target in an example frame, including why a fit failed or was rejected. This information assists in setting the parameters. Results may be output to a table, rendered into an image, and saved to file or memory. If no output options are selected, the default is to save to memory. In-memory results can be output to the other formats using the Results Manager plugin. When the settings are configured, the OK button will start fitting on the image. The fitting progress is reported to the ImageJ progress bar and fitting can be stopped using the Escape key. If results were saved to memory, the localisations can be viewed on the input image using the Overlay Results plugin. Renaming results can be performed using the Rename Results plugin, allowing repeat fitting with different settings to be compared using the Results Match Calculator plugin.

Template settings
Templates provide reusable settings for localisation fitting. Templates can be used to pre-configure the software for different microscope equipment or imaging conditions. A template can be created using the Fit Configuration plugin. This presents the current settings used in localisation fitting. These can be adjusted if required, including using any current template as a starting point, and then saved as a template file. The template is registered with the software and available for use when fitting an image. The template will be reloaded for the next session in ImageJ. Templates are managed using the Template Manager plugin. This allows the current templates to be viewed, new templates to be registered and existing templates to be deregistered. Templates are divided into two classes: standard templates are built into the software and provide default settings that are suitable for a range of input images; custom templates are stored as files and registered. When viewing a custom template the file path will be shown, allowing the template file to be transferred to and registered with another ImageJ instance.

Loading and saving localisation data
Localisation results can be read from and written to supported formats using the Results Manager. The GDSC SMLM text file format uses tab delimited fields that can be read by other software. The file contains header information describing the results, such as the calibration, coordinate bounds and, if applicable, the fit configuration used to generate the results. A binary format can be used to support faster I/O (input/output) of large data sets. Plain text localisation files in any delimited format can be read using the Load Localisations plugin. The field delimiter can be configured and the columns in the data assigned to the required localisation fields of time frame and coordinates. Optional fields such as signal intensity, estimated localisation precision and molecule IDs can be read. When loading a localisation file the calibration can be specified for the distance and intensity units, and information on the camera can be provided. This information is used by analysis plugins to interpret the localisation data in meaningful physical units. A data set loaded into memory can have its calibration updated using the Calibrate Results plugin. Results can be written in a custom delimited text format using the Save Localisations plugin, which writes any of the available fields to file in a user-specified format.

Results display
Localisation data sets can be displayed in a results table or rendered as an image. Images are rendered using a scaling of the localisation coordinates to output pixels. The reconstruction maps each localisation to a pixel and assigns the chosen magnitude to the single pixel or weighted to the 2x2 surrounding neighbours. The magnitude can be assigned as a single count, or using localisation data such as the localisation intensity, frame, z depth or ID. Optionally, localisations with PSF information can be rendered using a 2D Gaussian to approximate the spot. Each additional localisation mapped to the same pixel creates an update that is an addition for intensity data or a replacement for non-intensity data such as frame or ID. The histogram equalisation option performs contrast enhancement to improve visibility of low intensity pixels.
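The following Python sketch illustrates the count/intensity rendering mode described above, mapping each localisation to a single output pixel and accumulating; the 2x2 weighted mode, Gaussian rendering and replacement-mode updates are omitted, and all names are illustrative.

```python
import numpy as np

def render(xs, ys, values, bounds, scale=10):
    """Render localisations to a super-resolution image by mapping each
    coordinate to an output pixel and summing the chosen magnitude
    (e.g. 1 for counts, or the localisation intensity)."""
    width, height = bounds                      # source image bounds in pixels
    image = np.zeros((height * scale, width * scale), dtype=float)
    px = np.clip((np.asarray(xs) * scale).astype(int), 0, image.shape[1] - 1)
    py = np.clip((np.asarray(ys) * scale).astype(int), 0, image.shape[0] - 1)
    np.add.at(image, (py, px), values)          # accumulate overlapping localisations
    return image
```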
The final image is created as an ImageJ image. Tracking can be performed on existing data sets. A simple tracking algorithm joins localisations if the distance is within the configured time and distance thresholds. Ties are resolved using nearest-neighbour variations which rank with time or distance priority. This is suitable for low density data with short-lived tracks. Alternatively, dynamic multiple target tracking 39 uses a model to assign the probability that a localisation should connect to a track based on the current diffusion rate and intensity of the molecule. New tracks are created as required, and existing tracks expire if no localisations have been assigned to them for a set number of frames. The algorithm is suitable for long-lived tracks as the probability model is constructed using a temporal window of the most recent localisations in the track.
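A minimal sketch of one linking step of the simple nearest-neighbour tracker described above, assuming distance-priority tie resolution and a single-frame time gap; this is an illustration, not the software's implementation.

```python
import numpy as np

def link_frames(tracks, frame_points, max_distance):
    """Connect each point of the new frame to the nearest open track end
    within max_distance; unmatched points start new tracks."""
    ends = np.array([t[-1] for t in tracks]) if tracks else np.empty((0, 2))
    unclaimed = set(range(len(tracks)))        # track ends not yet extended this frame
    for p in frame_points:
        match = None
        if len(ends):
            d = np.linalg.norm(ends - np.asarray(p), axis=1)
            order = np.argsort(d)              # distance priority
            match = next((i for i in order if i in unclaimed and d[i] <= max_distance), None)
        if match is None:
            tracks.append([p])                 # start a new track
        else:
            tracks[match].append(p)            # extend the matched track
            unclaimed.discard(match)
    return tracks
```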
Conclusions
The GDSC SMLM software provides a wide range of functionality for working with single-molecule localisation microscopy data. Microscope images of fluorescent spots can be processed to super-resolved positions of molecules using the Peak Fit plugin. The fitting engine uses the stages of spot identification, localisation and rejection. Each stage is configurable, and settings can be saved as templates for repeatable analysis of images from different microscopes and reproducible analysis across software platforms. Analysis plugins act on localisation data sets that are created by fitting data or loaded from external sources. The ImageJ graphical environment allows data sets to be viewed as images and tables and interactively modified, for example by cropping, selecting sub-sets or filtering based on properties of the localisations. Data sets can be saved with all associated data using the GDSC file formats or exported to selected formats for analysis by external tools. A wide range of plugins are available for analysis such as single-particle tracking, clustering and cluster visualisation, drift correction, tcPALM 18, and Fourier image resolution 20. Tools are provided for analysis of EM-CCD and sCMOS cameras and construction of PSF models from bead calibration images for use in simulations. The GDSC SMLM software is distributed as a collection of plugins for ImageJ with a single-click install process using the ImageJ update site. Plugins support recording and playback via the ImageJ macro language and context-sensitive help links to the online documentation. Further details of all the functionality are described in the online user manual.

Open Peer Review
In the Java world (ImageJ/Fiji) alone, several options are available. ThunderSTORM, for example, is despite its age (published in 2014, thereby lacking ways to correctly analyse sCMOS data) still widely used as it offers easy and robust access to 2D and especially 3D SMLM (astigmatism!). Unfortunately, ThunderSTORM is not actively maintained anymore. Another promising and more recent package is MARS (https://elifesciences.org/articles/75899), which focuses more on handling of single-molecule data (including, for example, smFRET) rather than providing extensive options to perform high accuracy and precision analysis of SMLM data. GDSC SMLM is therefore a very welcome addition, as the inclusion of many tools such as fitting options, camera calibrations, tracking, simulations etc. into a single package promises to empower SMLM data analysis. The manuscript is well written and provides a good overview of the available modalities. Further information is available via an excellent documentation site (https://gdscsmlm.readthedocs.io/en/v1.0.2/index.html) and raw data is provided for users to test the software (https://doi.org/10.6084/m9.figshare.20795677). Overall, I fully approve the article to be further indexed on online academic databases.

Additional comments
Personally, I am a bit confused when the word "image" refers to an image/movie consisting of many frames (e.g. the documentation chapter 3 refers to "Finding and fitting spots on the image"), especially if "image" is then combined with a temporal (!) "median filter" that moves along frames in a movie. Maybe the authors could screen the text in the manuscript and the documentation to clean up potential issues or at least provide a clear definition.
1. As it stands, GDSC SMLM seems to target the advanced user, as the variety of options is somewhat overwhelming (and could be better introduced or exemplified). The authors opted to include shortcuts here, e.g. the module "SimpleFit" which is supposed to "just" fit some data. When I run SimpleFit, however, choosing an sCMOS camera, the analysis failed as sCMOS requires a camera profile. Again, this is explained somewhere in the documentation, but an easy solution could be to default to the NA option in case no profile is available. (The NA camera option is again easily mixed up with N.A. referring to numerical aperture.)
2. The 3D data analysis part (astigmatism) is currently marked as not being robust ("This option is experimental and should not be used for 3D analysis"). What is the current status? Are the authors planning to add additional 3D functionalities (e.g. biplane/DH, ...)?
3. Assuming that users of ThunderSTORM form a large group of potential users, how does GDSC SMLM perform performance-wise (e.g. time to analyse the provided data versus achievable recall/precision etc.)?
4. Along the same line, a set of step-by-step tutorials (maybe even complemented by a written macro) for beginner and advanced level could help to draw and keep additional users to the software. Do the authors plan to support/extend the software in the near future?
7,408.8
2022-09-29T00:00:00.000
[ "Computer Science", "Biology" ]
Line Beam Scanning-Based Ultra-Fast THz Imaging Platform
In order to realize rapid THz detecting and imaging, a line beam scanning-based ultra-fast THz imaging platform is designed combining simple optical components and a lightweight mechanical system. The designed THz imaging platform has a resolution of 12 mm, a scanning angle range of ±10.5°, a scanning speed of 0.17 s/frame, and a scanning range of 2 m × 0.8 m; moreover, it can realize rapid human body THz imaging and distinguish metallic objects. Considering its high-quality performance in THz imaging and detecting, it is believed the proposed line beam scanning-based ultra-fast THz imaging platform can be used in the future in various security screening applications.

Introduction
In order to contain the increasing threat of terrorism, public security screening systems, especially at airports and railway stations, are becoming increasingly significant. Classical security screening systems, including metal scanners for people and X-ray detectors for luggage, have been widely used [1][2][3][4]. Unfortunately, these traditional techniques are no longer capable of fulfilling the demands due to their low detection efficiency and high false alarm rate; therefore, developing rapid and precise security screening systems is required [5,6].

Recently developed X-ray imaging can easily detect contraband, even concealed under clothes, due to the strong X-ray absorption by metals [7][8][9]. However, considering its high photon energy and ionizing potential, it is harmful to human bodies and only suited for luggage detection. Different from X-ray imaging, THz imaging can also provide target details from another perspective [10][11][12][13][14][15][16]: after penetrating the clothes, the THz wave is often absorbed by the human body but strongly reflected by metals; therefore, the received reflected THz wave marks the metallic objects [17]. Compared to X-ray, the THz wave has a much lower frequency band, which is nonionizing and poses no known health risks [18][19][20][21][22], indicating that THz imaging is especially suitable for security screening of both people and luggage, though there is a lack of efficient sources and detectors [23][24][25]. The currently proposed THz imaging systems include two fundamental tactics based on synthetic aperture radar (SAR) [26,27] and multiple-input multiple-output (MIMO) [28][29][30][31], respectively. SAR-based THz imaging suffers from slow imaging efficiency for each scan, limiting its applications in high-speed security screening. In order to accelerate the efficiency, MIMO-based THz imaging can reduce the scanning time using parallel measurement with an entire array of transmitters and receivers, illustrating that MIMO-based THz imaging is preferred for high-speed security screening platform construction.

Here, in order to further accelerate the THz imaging speed, as well as to simplify the system, a line beam scanning-based ultra-fast THz imaging platform is proposed. Its optical system is rather simple, only composed of a source/sensor chip for THz wave emitting and receiving, a cylindrical reflector for wavefront reshaping, and a scanning mirror for line beam scanning. To assemble the optical system, as well as to control the mirror for line beam scanning, a lightweight mechanical system is designed and fabricated. The designed THz imaging platform realizes human body scanning within 0.17 s at a high resolution of 12 mm and easily distinguishes metallic objects. It is believed the proposed line beam scanning-based ultra-fast THz imaging platform is a potential tool for fast public security screening applications.

Platform Design and Construction
The optical system of the line beam scanning-based ultra-fast THz imaging platform shown in Figure 1A is rather simple, only composed of a source/sensor chip in Figure 1B, an elliptical cylindrical reflector in Figure 1C, and a scanning mirror in Figure 1A. The source/sensor chip not only emits the THz spherical wavefront generated by the emitting antenna, but also detects the THz signals reflected from the target with the receiving antenna: the THz signal, with a central frequency of 340 GHz, a bandwidth of 20 GHz, and a peak output power of 0.5 mW, is transmitted from the 4 Tx channels and generates a narrow beam line based on the optical system, and the reflected THz signal from the detected sample is collected by all 16 Rx channels. In our case, the THz wave is generated from microwaves through frequency multiplication, in which the efficiencies are rather low in both emitting and receiving, much lower than the efficiencies of THz generation based on photoconductive antennas [23][24][25], but a rather narrow wavelength band can be maintained, which can hardly be achieved using a photoconductive antenna. Moreover, an elliptical cylindrical reflector was used to reshape the spherical wavefront from the source into a cylindrical wavefront to generate the line beam, as well as to collect the reflected THz signals to the THz sensors. For the elliptical cylindrical reflector, the two elliptical foci are conjugated: as the source/sensor chip was located at one focus, the line beam was well generated at the other elliptical focus. Moreover, the scanning mirror can realize line beam scanning with a rather large range and high speed. Considering the practical applications of the THz imaging platform, the distance between the working plane and the scanning mirror was set as 3 m, and the scanning range should reach 2 m × 0.8 m at the working plane. In order to optimize the optical system, a ray tracing algorithm [32] was implemented: after inputting the parameters of the scanning mirror and the elliptical cylindrical reflector, as well as their positions, the THz intensity distribution can be estimated using multiple rays, and according to the numerically computed THz intensity distribution, the configuration of the elliptical cylindrical reflector was finally obtained. The optimized elliptical cylindrical reflector has a radius of curvature of 554.1 mm and a conic factor of −0.724. According to the optimized optical system, the intensity distribution at the working plane was then computed as shown in Figure 1D. It is shown that the length of the line beam can reach 0.8 m as provided by the central intensity distribution along the x-axis; besides, according to the central intensity distribution along the y-axis, the full width at half maximum (FWHM) reached 12 mm, indicating the resolution of the THz imaging platform. Via numerical simulations, it is proved that the designed optical system can achieve fast security screening.

Next, the mechanical system of the line beam scanning-based ultra-fast THz imaging platform shown in Figure 2A was designed and constructed not only to assemble the optical system, but also to realize line beam scanning by controlling the scanning mirror. In order to realize a lightweight design, super-hard aluminum alloy 7075 was used as the material of both the mechanical system and the mirrors (including the elliptical cylindrical reflector and the scanning mirror); moreover, a honeycomb hole array was drilled at the back side of the scanning mirror as shown in Figure 2B to further decrease its weight to 17 kg, which is 34% lighter than without hole drilling. Both the source/sensor chip and the elliptical cylindrical reflector, with a size of 380 mm × 600 mm as shown in Figure 2C, were fixed at the bottom of the mechanical system, and the scanning mirror, with a size of 403 mm × 792 mm, combined with the rotation motor (Kollmorgen KBMS 43H03, Radford, VA, USA) and the encoder (Heidenhain RCN 2380, Traunreut, Germany), was designed as the scanning section. The chosen encoder had a 26-bit circular grating with a precision of ±5″; the selected rotation motor was frameless and can achieve a rotation range larger than ±9°. Considering the weight of the scanning mirror was less than the 250 N required by the rotation motor, the scanning section could realize line beam scanning with high precision over a wide range. Moreover, from quantitative analysis of the vibration modes of the whole mechanical system, its inherent frequency is around 47.7 Hz, much higher than that of the scanning mirror at 2.5 Hz; therefore, no resonance occurred during line beam scanning. In addition, the whole THz imaging platform, including the source/sensor part and the mechanical part, uses a power supply of 220 V AC, and the total power of the whole system is ~2500 W.
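As a quick consistency check on these numbers, the following Python sketch estimates the width swept at the working plane from the mirror rotation, assuming the reflected beam deflects by twice the mirror angle (plane-mirror geometry); the ±10.5° scan reported later comfortably covers the required 2 m at the 3 m working distance.

```python
import math

def scan_width(mirror_angle_deg, distance_m=3.0):
    """Width covered at the working plane by a mirror rotation of
    +/- mirror_angle_deg, assuming the beam deflects by twice the
    mirror rotation."""
    beam_angle = math.radians(2.0 * mirror_angle_deg)
    return 2.0 * distance_m * math.tan(beam_angle)

print(scan_width(10.5))  # ~2.3 m, consistent with the required 2 m coverage
```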
With the platform design and construction in both optical and mechanical systems, the main optimization of the THz imaging platform includes (1) optimizing the optical system composed of the elliptical cylindrical reflector and the scanning mirror to pursue high imaging resolution; (2) lightening the mechanical structure; and (3) precisely controlling the scanning using the rotation motor and the encoder to improve the mechanical system. After the design and construction of both the optical and mechanical systems, the performance of the line beam scanning-based ultra-fast THz imaging platform was tested, and it was then adopted for practical applications in security screening of the human body, both of which are described in the following section.

Experiments
After designing and constructing the line beam scanning-based ultra-fast THz imaging platform, its performance was tested. The scanning mirror was adjusted to keep the optical axis parallel with the ground coordinate, and a Brunei tube working as a THz single point detector was scanned along the vertical direction to determine the optical axis position. The Brunei tube was then scanned along the optical axis; since the intensity at the working plane should have the highest value, this indicated that the distance between the working plane and the scanning mirror was 3.02 m. After the determination of the working plane, the Brunei tube was scanned in the working plane to measure the intensity distribution. The intensity distributions along the x and y axes (marked in Figure 1D) are shown in Figure 3A,B, respectively. There is a small difference between the numerically computed and practically measured results. This is because the numerically calculated intensity distributions were obtained under ideal conditions which did not consider the aberration in the THz system caused by errors in system fabrication and integration; these errors inevitably occurred in the designed THz imaging system, thus broadening the THz intensity distribution. Moreover, the noise of the THz detector also introduced background errors in the THz intensity measurements, which further broadened the THz intensity distribution along the y-axis. Though the intensity distribution broadened in the measured results, the numerically computed and practically measured results are still close. Via Gaussian fitting, the FWHM of the measured line beam was 12.04 mm, close to the numerically evaluated FWHM of ~12 mm, proving the proposed THz imaging platform can reach a resolution of ~12 mm. Besides, the numerically computed and practically measured intensity distributions along the y-axis at 200 mm, 300 mm, and 400 mm away from the central axis are also shown in Figure 3C-E, respectively. The coincidence between the numerically calculated and the practically measured results in Figure 3A-E proved the high-quality construction of the THz imaging platform. Since the line beam scanning-based THz imaging platform requires high-speed scanning for two-dimensional imaging, both the detecting region and the scanning speed were then measured. Figure 3F shows the scanning trajectory, indicating that the scanning speed could reach 170 ms/frame, the scanning range of the mirror was within ±10.5°, and the scanning range at the working plane could reach 2 m × 0.8 m, all satisfying the scanning requirements.
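The FWHM quoted above is obtained by Gaussian fitting of a measured cross-section; a minimal Python sketch of that step is shown below. The function and variable names are illustrative, and the 5 mm initial sigma is an arbitrary starting guess.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amp, y0, sigma):
    """1D Gaussian profile of the line beam cross-section."""
    return amp * np.exp(-0.5 * ((y - y0) / sigma) ** 2)

def fwhm_mm(positions_mm, intensity):
    """Estimate the line-beam FWHM by Gaussian fitting of a measured
    intensity profile, as done for the reported 12.04 mm value."""
    p0 = [intensity.max(), positions_mm[np.argmax(intensity)], 5.0]
    (amp, y0, sigma), _ = curve_fit(gaussian, positions_mm, intensity, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)  # FWHM = 2.355 * sigma
```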
After the certification of the line beam scanning-based THz imaging platform, it was finally adopted for security screening of the human body with THz imaging as shown in Figure 4. The images captured by the direct optical device and the proposed THz imaging platform provide information from different perspectives. In the reconstructed THz image, metal fabricated objects such as the metallic gun can be easily distinguished according to the high intensity of the collected THz signals as shown in Figure 4. However, besides the distinguished contraband, a few spots of high intensity still occurred in the THz image; some of them were generated by the THz signals reflected from metallic parts such as the metallic zipper, and some of them were generated from the THz signals reflected from bones. Nevertheless, with improved image processing methods [33,34], the contraband can be accurately distinguished according to its particular configuration and reflectivity. Moreover, the line beam scanning-based ultra-fast THz imaging platform can realize complete human body imaging within 0.2 s, indicating high-speed scanning and imaging capability.

It is worth noting that high relative humidity decreases the contrast in THz imaging due to THz signal loss in water vapor; therefore, high relative humidity affects the performance of the device. In our experiments, the THz imaging was implemented in an open environment at room temperature, and the image was captured at a relative humidity of ~60%; the contrast is still satisfactory since the metallic gun can be clearly distinguished, indicating that the proposed THz imaging platform can be adopted in practical applications.

Moreover, compared to devices such as PNNL [35,36], JPL [37,38], and TeraScreen [28,29], the proposed line beam scanning-based ultra-fast THz imaging platform still has high-quality performance in THz imaging. The scanning range of the proposed platform reaches 2 m × 0.8 m, which is comparable with TeraScreen, but much larger than the 0.5 m × 0.5 m of JPL. The scanning speed of the proposed platform is 0.17 s/frame, a little lower than the 0.13 s/frame of TeraScreen, but still much faster than the 1 s/frame of JPL. According to its high-quality performance and practical application in security screening of the human body, as well as the comparison with existing systems, it is believed the proposed line beam scanning-based ultra-fast THz imaging platform can be adopted for THz imaging and detecting in various applications.
Conclusions
In order to realize ultra-fast THz detecting, a line beam scanning-based THz imaging platform has been designed and constructed in this paper. The optical system is rather simple, only composed of a source/sensor chip for THz wave emitting and receiving, an elliptical cylindrical reflector for cylindrical wavefront reshaping, and a scanning mirror for line beam scanning. Moreover, the mechanical system was fabricated not only to assemble the optical system, but also to realize line beam scanning rapidly and precisely by controlling the scanning mirror with the frameless rotation motor and the circular grating-based encoder. From the quantitative characterization of the line beam scanning-based ultra-fast THz imaging platform, it has a resolution of ~12 mm, a scanning speed of 0.17 s/frame, a scanning angle range of ±10.5°, and a scanning range of 2 m × 0.8 m, proving its high-quality performance in THz imaging. Moreover, it was finally adopted for fast security screening, indicating that the platform not only realizes rapid human body THz imaging within 0.2 s, but also accurately distinguishes metal fabricated objects. Considering its high-quality performance in THz imaging and successful adoption in fast security screening, it is believed the proposed line beam scanning-based ultra-fast THz imaging platform can be used in various scenarios such as contraband inspection and human body THz imaging.

Figure 1. The optical system of the line beam scanning-based ultra-fast THz imaging platform. (A) Framework of the optical system; (B) the source/sensor chip; (C) the elliptical cylindrical reflector; (D) the calculated intensity distribution at the working plane; (E,F) the calculated sectional intensity distributions along the x and y axes.

Figure 2. The mechanical system of the line beam scanning-based ultra-fast THz imaging platform. (A) Framework of the mechanical system; (B) the scanning mirror and the honeycomb hole array at its back side; (C) the elliptical cylindrical reflector.

Figure 3. Certification of the line beam scanning-based THz imaging platform. (A,B) Practically measured and numerically computed sectional intensity distributions along the x and y axes; (C-E) practically measured and numerically computed sectional intensity distributions along the y-axis at 100 mm, 200 mm, and 300 mm away from the central axis; (F) the scanning trajectory of the line beam scanning-based THz imaging platform.

Figure 4. (A) Human body optical image. (B) Human body THz image obtained using the line beam scanning-based THz imaging platform. The gun is marked with a red circle.
5,716.2
2019-01-07T00:00:00.000
[ "Engineering", "Physics" ]
Study on the Performance and Mechanism of Cold-Recycled Asphalt Based on Permeable Recycling Agent
In order to investigate the influence of recycling agent composition on the recycling effect of aged asphalt in the cold recycling process, the design and optimization of the cold recycling agent composition were performed through the central composite design-response surface method combined with the dynamic shear rheometer (DSR) test and the bending beam rheometer (BBR) test. The molecular weight distribution and component changes in aged asphalt before and after the addition of a cold recycling agent were also analyzed by gel permeation chromatography (GPC) and a hydrogen-flame ionization test. The results showed that the permeable cold recycling agent has a recycling effect on aged asphalt, but its effectiveness is greatly affected by the recycling agent composition. The best recycling effect was achieved when the ratio of aromatic oil to penetrant in the cold recycling agent was 61.2:38.8. In terms of the functional groups of the recycling agent and the aromatic oil, the aromatics in the recycling agent are derived from the aromatic oils, and the penetrant is only fused and permeated with the aromatic oils. After the admixture of the cold recycling agent, the penetrant in the recycling agent allows the aromatic oil to enter the aged asphalt at room temperature. The light components volatilized by aging are replenished, allowing the aged asphalt to recover some of its properties.

Introduction
With more emphasis on the concept of sustainable development in road engineering, pavement materials [1][2][3] have gradually become a research hotspot. Currently, the commonly used asphalt pavement recycling technology can be divided into hot recycling and cold recycling according to the construction temperature. Although hot recycling is more effective in improving the road performance of reclaimed asphalt pavement (RAP) than cold recycling, the higher construction temperature also affects the environment with increased amounts of harmful gases and carbon emissions. Due to its relatively low construction temperature, cold recycling can effectively decrease the harmful gas and heat emissions generated by RAP during the paving process and reduce energy consumption. However, in the process of cold recycling, RAP is often regarded as a "black stone" [4]. As a result, the old asphalt on the surface of RAP cannot be fully utilized, decreasing the utilization rate of the old asphalt on the surface of RAP and increasing the usage of new asphalt. Mao [5] argued that the surface of RAP is covered with old asphalt; if its performance can be restored, its utilization in the recycling process can be improved. According to Ashimova et al. [6], a recycling agent can effectively restore the properties of old asphalt on the surface of RAP, making it useful in recycled asphalt mixtures. Abraham et al. [7] believed that the light components of the recycling agent could be added to regenerate the old asphalt. All these findings indicate that research on recycling agents is necessary for the recycling of old asphalt. Currently, recycling agents [8][9][10] are mostly used in hot recycling. Due to the temperature limitation of cold recycling, the oil and lightweight components required for recycling cannot penetrate and replenish the old asphalt at room temperature. Therefore, the development of cold recycling agents is of great significance.
Asphalt consists of four kinds of components [11]: saturates, aromatics, resins, and asphaltenes. Xiao et al. [12] concluded that the aging of asphalt is mainly due to the decrease in light components (such as saturates and aromatics) and the increase in heavy components (such as resins and asphaltenes) after high temperatures and oxidation [13][14][15]. As a result, the high-temperature performance of asphalt rises, and the low-temperature performance decreases. The main components of the recycling agent for cold-recycled asphalt are aromatic oils [16] and penetrants. Aromatic oils can replenish the saturates and aromatics in aged asphalt [17], i.e., the light components [18]. The main component of the penetrant is methylene chloride, an organic solvent that can penetrate the old asphalt to restore some of its properties when fused with aromatic oils. Recycling agents are generally used in hot recycling to replenish the components and enhance the low-temperature crack resistance of the old asphalt [19][20][21]. Nevertheless, these agents are rarely used in cold recycling.

Based on the discussion above, a recycling agent for cold-recycled asphalt is proposed in this study. It can penetrate and activate the old asphalt on the surface of RAP at room temperature, which is effective in the synthesis process of cold-recycled asphalt, thus achieving the purpose of resource saving. The optimal ratio for preparing recycling agents for cold-recycled asphalt is investigated using the central composite design-response surface method. The proportional relationship among the recycling agent components is obtained by analyzing the high- and low-temperature performance and fatigue life of aged asphalt after adding the recycling agent, as well as by comparing the functional groups of the recycling agent with those of aromatic oils. Furthermore, the recycling mechanism of the recycling agent is analyzed according to molecular weight and component changes, and the technical route is shown in Figure 1.

Materials and Methods
This section introduces the experimental materials, the optimal mixing ratio for preparing the cold-recycled permeable recycling agent, the experimental methods for verifying its recycling performance, and the preparation method of cold-recycled asphalt.

Test Material
The category of neat asphalt in the test is PG 64-22, and the aged asphalt was prepared by heating the neat asphalt in a rotary film oven [22] at 163 °C for 5 h. The main technical indicators of neat asphalt and aged asphalt are shown in Table 1. The cold recycling agent used in the test was produced by blending aromatic oils and penetrants in a certain proportion. The aromatic oils are bought directly, and their technical specifications are shown in Table 2. The penetrant is a composite penetrant with methylene chloride as the main ingredient, and its technical specifications are shown in Table 3.

Test Method
This section introduces the methods for preparing the optimal mixing ratio of the recycling agent components and verifying their recycling performance in this study.
Central Composite Design-Response Surface Method
In this study, a 2-factor, 5-level experiment was carried out to optimize the composition of the cold recycling agent using the central composite design-response surface method [23,24]. The factor code levels and the experimental design are shown in Tables 4 and 5, respectively. In order to optimize the composition of the cold recycling agent for the best overall performance, the recycling effect was evaluated by calculating the OD value according to Equations (1) and (2). OD is an abbreviation of "overall desirability" [25]. When many indicators exist in the experimental results, the optimal conditions for each indicator may be contradictory; the OD value is used to integrate all indicators into a single value reflecting the overall result. The changes in asphalt performance indexes before and after recycling were introduced as the evaluation basis to analyze the effect of the recycling agent on aged asphalt. A smaller difference between recycled asphalt and neat asphalt indicates closer properties and a better recycling effect. The difference in fatigue life between aged asphalt and neat asphalt after adding the recycling agent is taken as an absolute value, and the test design and results are further analyzed.

$$d_{\max} = \frac{y_i - y_{\min}}{y_{\max} - y_{\min}} \quad (1)$$

$$d_{\min} = \frac{y_{\max} - y_i}{y_{\max} - y_{\min}} \quad (2)$$

where y is the value of the indicator, i is the test number, d_max denotes the normalization for a factor for which a larger value is better, d_min represents the normalization for a factor for which a smaller value is better, y_max is the maximum value of the indicator in each column, and y_min is the minimum value of the indicator in each column. The geometric mean of each indicator after normalization was calculated through Equation (3) to obtain the overall normalized value:

$$OD = \left( d_1 d_2 \cdots d_n \right)^{1/n} \quad (3)$$

where n is the number of indicators and d is the normalized value.
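A small Python sketch of the normalization and geometric-mean computation in Equations (1)-(3); the column layout and names are illustrative.

```python
import numpy as np

def normalise(y, larger_is_better):
    """Equations (1) and (2): linear normalization of one indicator column."""
    y = np.asarray(y, dtype=float)
    span = y.max() - y.min()
    return (y - y.min()) / span if larger_is_better else (y.max() - y) / span

def overall_desirability(columns, directions):
    """Equation (3): geometric mean of the normalised indicators.
    columns is a list of indicator columns; directions flags which
    indicators are larger-is-better."""
    ds = np.array([normalise(c, up) for c, up in zip(columns, directions)])
    return ds.prod(axis=0) ** (1.0 / len(columns))
```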
Specimen Preparation for the Dynamic Shear Rheometer (DSR) Test and Bending Beam Rheometer (BBR) Test
The asphalt aged for 5 h was poured into the molds for the DSR and BBR tests. Demolding was performed after cooling, and different dosages of the recycling agent were evenly sprayed onto the specimen surface, as shown in Figure 2. After 20 min of static infiltration, the excess recycling agent was wiped off the surface, and the specimen was subjected to the DSR test or placed in a water bath at −12 °C for 1 h before the BBR test. The recycling agent dosage was 8% of the mass of the asphalt specimen.

Temperature Sweep Test
In order to investigate the effect of recycling agent addition on the high-temperature performance of aged asphalt, the high-temperature performance of neat asphalt, aged asphalt, and aged asphalt with different dosages of the recycling agent was analyzed through the temperature sweep test. The high-temperature performance and fatigue performance of the asphalt were evaluated using the phase angle (δ), complex shear modulus (G*) [26], storage modulus (G′(ω)), rutting factor (|G*|/sin δ) [27], and fatigue factor (|G*| sin δ). The load frequency of the test was 10 rad/s, the test temperature was 52~76 °C (6 °C intervals), the diameter of the parallel plate was 25 mm, and the gap of the parallel plate was (2 ± 0.05) mm.
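For reference, a minimal Python sketch of the two DSR-derived factors used above; the inputs are the measured |G*| and phase angle at a given temperature, and the unit of |G*| is arbitrary as long as it is consistent.

```python
import math

def rutting_factor(g_star, delta_deg):
    """Rutting parameter |G*|/sin(delta); higher values indicate better
    high-temperature rutting resistance."""
    return g_star / math.sin(math.radians(delta_deg))

def fatigue_factor(g_star, delta_deg):
    """Fatigue parameter |G*|*sin(delta); lower values indicate better
    fatigue resistance."""
    return g_star * math.sin(math.radians(delta_deg))
```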
Through the LAS, the development of continuum damage in asphalt under repetitive loading can be analyzed. The VECD model constructed with the obtained fatigue equations can effectively predict and evaluate the fatigue performance of asphalt [28][29][30]. The specific test procedure for the LAS is divided into two parts: a frequency sweep and an amplitude sweep. First, the damage analysis parameter α was obtained by performing a frequency sweep on the asphalt at a strain level of 0.1% and frequencies of 0.2 to 30 Hz. Then, the amplitude sweep was performed at a loading frequency of 10 Hz with a linear increase in loading amplitude from 0.1% to 30%. The diameter of the parallel plate was 8 mm, and the distance between the parallel plates was (2 ± 0.05) mm.

BBR Test
The flexural creep modulus S and the creep rate m can be obtained from the BBR test of asphalt, and these two indicators are used to evaluate the low-temperature cracking resistance of the aged asphalt after the addition of recycling agents. In general, a larger S value indicates that the asphalt is harder and more prone to cracking at low temperatures; a larger m value indicates better stress relaxation of the asphalt (i.e., a slower accumulation of stress leads to better low-temperature performance) [31,32]. Therefore, recycled asphalt with a smaller S and a larger m has better flexibility and relaxation ability. In addition, the low-temperature coefficient (k = S/m) [33] was introduced to comprehensively evaluate the low-temperature performance of aged asphalt after adding the recycling agent. Before the start of each test, the instrument was calibrated using a stainless steel beam with a thickness of 1.0-1.6 mm to ensure that the water bath temperature reached the test temperature. After maintaining the test temperature for 65 min, the specimen was placed on the support, and the temperature of the thermostatic bath was controlled within ±0.1 °C of the experimental temperature. A contact load of 40 mN was applied manually to the specimen, and the time of load application was kept below 10 s to ensure that the specimen and the load tip were in contact with each other.

Fourier Transform Infrared Spectrometer (FTIR) Tests
Attenuated total reflection (ATR) measurements [34] were conducted to analyze the chemical structure of the aromatic oils and recycling agents using a Nicolet iS50 FTIR from Thermo Fisher (Waltham, MA, USA). The functional groups of the light components in the aromatic oils and the recycling agent, and hence the action of the recycling agents on aged asphalt, were determined. Aromatic oil and recycling agent samples were tested separately in a specimen cell. Asphalt samples can be scanned directly without pre-processing, and the reflected signals from the surface can be captured to analyze the organic components on the surface as well as the structural information of the inorganic materials. Before the test, the infrared spectrometer was pre-heated for 30 min. Then, background scanning was performed under the same conditions, with a resolution of 4 cm−1, 32 scans, and a scanning range of 4000~500 cm−1. After collecting the IR spectral data for each sample, the surface of the crystal plate was wiped and cleaned using a carbon disulfide solution. Two parallel specimens were prepared for each group, and each was scanned twice to ensure repeatability and to overcome errors due to an unstable measurement environment and uneven aging of the asphalt.
Gel Chromatography Test
Gel permeation chromatography (GPC) is a type of liquid chromatography that can be used to determine the molecular weight and molecular weight distribution of asphalt for the investigation of its aging properties. The indicators used to evaluate the molecular weight distribution are the weight-average molecular weight (Mw) and the number-average molecular weight (Mn), of which Mn is often used to reflect the trend of the small molecular weights while Mw reflects the change in the large molecular weights, as expressed by Equations (4) and (5), respectively [35][36][37]:

Mn = Σ(Ni·Mi)/Σ(Ni)  (4)

Mw = Σ(Ni·Mi²)/Σ(Ni·Mi)  (5)

where Mi is the molecular weight of the i-th species and Ni is the number of molecules with that molecular weight. The test apparatus was an Agilent 1100 with a refractive index detector. The test temperature was 30 °C, the eluent was tetrahydrofuran, the flow rate of the eluent was 1.0 mL/min, the concentration of the specimen was 1.0 g/L, and the injection volume of each test was 0.05 mL.

Thin-Layer Chromatography-Flame Ionization Detection (TLC-FID) Test
Compared with the standard four-component test for asphalt, TLC-FID has the advantages of a smaller sample dosage, high precision, good repeatability, a short test time, and less pollution [38,39]. This test can reflect the changes in each component of the aged asphalt after adding the recycling agent. The recycling effect on aged asphalt can be assessed by comparison with the composition of the neat asphalt. Approximately 50 µg of the sample was extracted with the sample syringe and dispensed at 2 cm from the top of the rod. The solvent was removed by evaporation at room temperature, and the rod was placed in an unfolding bath filled with n-heptane for development. When the n-heptane carrying the saturates reached 11~12 cm of the rod, the rod was removed and dried for 10 min and then put into the unfolding tank containing toluene. When the toluene carrying the aromatic component had moved to 8 cm or 9 cm, the rod was taken out and dried for 10 min and then put into the unfolding tank with ethanol and toluene. Finally, when the ethanol and toluene had brought the resins to 4~5 cm, the specimen was taken out and placed in an 80 °C constant temperature box to dry for 10 min.

Test Results of the Central Composite Design for Recycling Agents
The results of the central composite design-response surface method test and the fitted values are shown in Table 6. A nonlinear regression analysis was performed using the Design Expert 11 software (https://www.statease.com/software/design-expert/), and the fitted quadratic model, Equation (6), relating OD to the two factors was obtained (multi-correlation coefficient R² = 0.9184), where x1 is the aromatic oil content and x2 is the penetrant content. OD is the response surface described by one dependent variable and two independent variables, and the three-dimensional plot is shown in Figure 3.
It can be seen from Figure 3 that the composition of the recycling agent has a great impact on the OD value. The figure shows a trend in which the OD value first increases and then decreases with the increase in penetrant content in the recycling agent. Likewise, the OD value first increases and then decreases with the decreasing content of aromatic oils. This result indicates that the two components of the cold recycling agent both affect the recycling performance, as reflected by the OD value. The highest OD value was achieved when the proportions of aromatic oil and penetrant in the recycling agent were 61.2% and 38.8%, respectively, indicating the best recycling effect. The optimal ratio of the recycling agent was experimentally verified, and the OD value obtained from the model was 0.839. According to the optimal dosage of the two components (i.e., 61.2% aromatic oil and 38.8% penetrant), the recycling agent was produced and added to the aged asphalt, with a deviation of 3.2% (Table 7). The small deviation of the predicted value from the actual value indicates that the developed mathematical model can predict the optimal dosages of the two components in the recycling agent for cold-recycled asphalt.
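Although the coefficients of Equation (6) are not reproduced here, the general workflow (fit a full quadratic response surface in the two coded factors, then search it for the maximum OD) can be sketched as follows. The design points and OD values below are placeholders, not the data of Table 6.

```python
import numpy as np

# Illustrative coded design and responses -- NOT the data from Table 6.
x1 = np.array([-1, -1,  1,  1, -1.414, 1.414,  0,     0,     0, 0, 0, 0, 0])  # aromatic oil (coded)
x2 = np.array([-1,  1, -1,  1,  0,     0,     -1.414, 1.414, 0, 0, 0, 0, 0])  # penetrant (coded)
od = np.array([0.42, 0.55, 0.61, 0.50, 0.47, 0.58, 0.44, 0.52, 0.80, 0.82, 0.79, 0.81, 0.83])

def quad_design(x1, x2):
    """Design matrix of the full quadratic model b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Ordinary least-squares fit of the response surface and its coefficient of determination.
X = quad_design(x1, x2)
beta, *_ = np.linalg.lstsq(X, od, rcond=None)
r2 = 1 - np.sum((od - X @ beta) ** 2) / np.sum((od - od.mean()) ** 2)

# Locate the maximum of the fitted OD surface on a grid of coded factor levels.
g1, g2 = np.meshgrid(np.linspace(-1.414, 1.414, 201), np.linspace(-1.414, 1.414, 201))
od_hat = quad_design(g1.ravel(), g2.ravel()) @ beta
best = np.argmax(od_hat)
print(f"R^2 = {r2:.3f}, optimum (coded): x1 = {g1.ravel()[best]:.2f}, x2 = {g2.ravel()[best]:.2f}")
```

The coded optimum would then be converted back to actual aromatic oil and penetrant contents using the factor coding of Table 4.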
Results Analysis of the Infrared Spectroscopy Test
The infrared spectra of the aromatic oil and the cold recycling agent were scanned, as shown in Figure 4. The results show strong vibration peaks of aromatic C-H and C=C stretching at 2924 cm−1, 2835 cm−1, and 1705 cm−1, indicating that the aromatic oil is rich in aromatic light components. The recycling agents are generated after mixing aromatic oils with penetrants. The infrared spectra of the recycling agent and the aromatic oil are basically the same, reflecting that the substances in the recycling agent that can replenish the light components of the aged asphalt come from the aromatic oil. It also demonstrates that the penetrant in the recycling agent fused with the aromatic oil only serves to penetrate the aged asphalt.

Results Analysis of the Temperature Sweep Test
The results of the temperature sweep test are shown in Figure 5. As the temperature increases, the complex shear modulus of the asphalt gradually decreases while the phase angle gradually increases. Compared to unaged asphalt, the aged asphalt shows a significant increase in complex shear modulus and a decrease in phase angle after 5 h of aging. This result indicates that aging changes the viscoelastic ratio of the asphalt, and aged asphalt has an increased elastic component and hardness. The addition of a recycling agent has a certain softening effect on the aged asphalt. After adding the recycling agent, the complex shear modulus of the asphalt decreases and the phase angle increases, but this effect is strongly influenced by the composition of the recycling agent. The high-temperature performance of the asphalt specimens in Group IV is most similar to that of the unaged neat asphalt, suggesting that the recycling agent in Group IV has the optimal effect compared to the other recycling agents. The superior recycling performance is due to the different ratios of penetrant and aromatic oil in the recycling agent. A reasonable ratio of penetrant and aromatic oil contributes to a more complete penetration of the aromatic oil into the aged asphalt at room temperature, thus replenishing the light components volatilized from the asphalt due to aging. On the other hand, an excessive penetrant content means less aromatic oil in the recycling agent, which cannot effectively replenish the volatilized light components, leading to a poorer recycling effect and a higher modulus.
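The rutting and fatigue factors referred to above are simple point-wise combinations of |G*| and δ from the temperature sweep. A minimal sketch of that calculation is given below; the temperatures, moduli, and phase angles are illustrative values, not the measured data of Figure 5.

```python
import numpy as np

# Illustrative DSR temperature-sweep output: temperature (°C), |G*| (kPa), phase angle delta (deg).
temperature = np.array([52, 58, 64, 70, 76])
G_star_kPa  = np.array([11.2, 5.6, 2.9, 1.5, 0.8])
delta_deg   = np.array([80.0, 82.5, 84.5, 86.0, 87.0])

sin_d = np.sin(np.radians(delta_deg))
rutting_factor = G_star_kPa / sin_d   # |G*|/sin(delta): larger values indicate better rutting resistance
fatigue_factor = G_star_kPa * sin_d   # |G*|*sin(delta): smaller values indicate better fatigue resistance

for T, rf, ff in zip(temperature, rutting_factor, fatigue_factor):
    print(f"{T} °C  |G*|/sin(delta) = {rf:.2f} kPa   |G*|*sin(delta) = {ff:.2f} kPa")
```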
Results Analysis of the BBR Test
The results of the BBR test are shown in Figures 6 and 7. It can be seen that the S value of neat asphalt increases after aging while the m value decreases, indicating a decrease in the cracking resistance and the low-temperature performance of the asphalt. After adding the cold recycling agent, the S value of the aged asphalt decreases and the m value increases, suggesting that the addition of the recycling agent can improve the low-temperature performance of aged asphalt. A previous study argued that the low-temperature performance of asphalt cannot be comprehensively evaluated by the S value or the m value alone [40]. Therefore, the low-temperature coefficient k, obtained by calculating the ratio of S to m, is a better way to evaluate the low-temperature performance of asphalt, and the calculation results are shown in Figure 8. It can be seen that the k of the asphalt specimens in Group V is closest to that of the unaged asphalt, indicating that this group of specimens has the best low-temperature performance after recycling compared to the other groups. A reasonable mixing ratio of the recycling agent allows the aromatic oil and the penetrant to be fully integrated. As a result, the penetrant carrying the lightweight components can penetrate the aged asphalt, enabling the recycling of the aged asphalt. When the proportion of penetrant in the recycling agent is low, the penetrant cannot mix with the excess aromatic oil and cannot fully penetrate the old asphalt. On the contrary, although more penetrant can be fully integrated with the light components of the aromatic oil, insufficient aromatic oil cannot replenish the aged asphalt with enough light components to achieve a good recycling effect.
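Since k is simply the ratio of the two BBR outputs, its computation is trivial; the sketch below uses made-up S and m values purely for illustration.

```python
def low_temperature_coefficient(S_MPa, m_value):
    """Low-temperature coefficient k = S/m [33]; a k closer to that of the
    neat asphalt indicates better recovery of the low-temperature performance."""
    return S_MPa / m_value

# Illustrative BBR results (creep stiffness S in MPa, creep rate m at 60 s), not measured data.
k_neat     = low_temperature_coefficient(165.0, 0.345)
k_aged     = low_temperature_coefficient(310.0, 0.250)
k_recycled = low_temperature_coefficient(190.0, 0.320)
print(round(k_neat, 1), round(k_aged, 1), round(k_recycled, 1))
```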
Result Analysis of the LAS Test
According to the results of the LAS test, the fitted fatigue life (Nf) of the asphalt is obtained, as shown in Figure 9. It can be seen that after 5 h of aging, the Nf of the neat asphalt increases from 25,761 cycles to 143,772 cycles. The fatigue life of the aged asphalt decreases after the addition of the recycling agent, indicating that the recycling agent has a recycling effect on aged asphalt. The fatigue life of the aged asphalt after adding the recycling agent at the dosage of Group VII is 68,684 cycles, which is closest to that of the aged asphalt. After adding the recycling agent at the dosage of Group III to the aged asphalt, the fatigue life is 4320 cycles, while the recycling agent at the dosage of Group IX leads to a fatigue life of 27,155 cycles, which is closest to that of the neat asphalt. The reason for this is that the proportion of penetrant in the recycling agent of Group VII is low, resulting in incomplete penetration of the oil into the aged asphalt and a poor recycling effect. The fatigue life decreases as the dosage of penetrant in the recycling agent increases. However, when the proportion of penetrant is too high, the fatigue life drops below that of the neat asphalt. The main component of the penetrant is methylene chloride, which softens the aged asphalt when in contact with it. An excess dosage of penetrant in the recycling agent increases the softening degree of the asphalt, leading to a decreased fatigue life.
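The Nf values discussed above are obtained from the VECD analysis of the LAS data; a commonly used form of the resulting fatigue law is the power relation Nf = A·(γmax)^(−B). A minimal sketch is given below, assuming the coefficients A and B have already been derived from the damage characteristic curve; the numbers are illustrative, not the fitted values of this study.

```python
def las_fatigue_life(A, B, gamma_max_percent):
    """Fatigue life from the LAS/VECD power law: Nf = A * gamma_max**(-B)."""
    return A * gamma_max_percent ** (-B)

# Illustrative model coefficients (not the fitted values of this study).
A, B = 3.9e5, 3.1
for strain in (2.5, 5.0):   # typical strain levels (%) at which Nf is reported
    print(f"gamma_max = {strain}%  ->  Nf = {las_fatigue_life(A, B, strain):,.0f} cycles")
```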
Result Analysis of the GPC Test
Typically, the macroscopic properties of asphalt can be reflected by its microscopic molecular structure as well as its molecular weight distribution. The recycling agents at the dosages of Group II, Group III, Group IV, and Group VI were selected and added to the aged asphalt, and the corresponding samples were subjected to the GPC test. The results of the test are shown in Figure 10. The spectra are divided into large, medium, and small molecule regions, which are summarized in Table 8. It can be seen that the number of large molecules increases and the numbers of medium and small molecules decrease in aged asphalt compared to unaged asphalt. The addition of recycling agents replenishes small molecules in the aged asphalt, thus affecting the molecular weight distribution of the recycled asphalt. After adding the recycling agent, the large molecules in the asphalt decrease and the small molecules increase, with the component ratio of the asphalt being restored to a certain extent. The properties of the asphalt are also recovered, but the recovery effect is significantly influenced by the ratio of aromatic oil to penetrant in the recycling agent. Based on the results of the GPC test, the high-temperature performance of the aged asphalt decreases after the addition of a recycling agent, and the low-temperature performance increases. In addition, the numbers of large and small molecules in the recycled asphalt are close to those of the neat asphalt, indicating that the recycling agent for cold-recycled asphalt can restore the performance of aged asphalt. According to the molecular weight distribution of aged asphalt with different dosages of recycling agents, the large molecule content of the recycled asphalt at the dosage of Group II is 15.74%, which is higher than that of the other three groups. Its proportion of large molecules is also closer to that of the aged asphalt, indicating that the dosage of Group II leads to better high-temperature performance, further validating the best high-temperature performance brought by the dosage of Group II in the temperature sweep test. Compared with the other three groups, the dosage of Group III results in a higher proportion of small molecules (37.22%), indicating a better low-temperature performance of the asphalt than in the other three groups. This result is consistent with the BBR test.
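The number- and weight-average molecular weights reported by the GPC analysis follow directly from Equations (4) and (5). A minimal sketch of that calculation is shown below; the molecular weights and abundances are illustrative slice data, not the chromatograms of Figure 10.

```python
import numpy as np

def average_molecular_weights(M_i, N_i):
    """Number- and weight-average molecular weights (Equations (4) and (5)):
    Mn = sum(Ni*Mi)/sum(Ni), Mw = sum(Ni*Mi^2)/sum(Ni*Mi)."""
    M_i, N_i = np.asarray(M_i, float), np.asarray(N_i, float)
    Mn = (N_i * M_i).sum() / N_i.sum()
    Mw = (N_i * M_i**2).sum() / (N_i * M_i).sum()
    return Mn, Mw

# Illustrative GPC slice data: molecular weights (g/mol) and relative abundances.
Mn, Mw = average_molecular_weights([450, 1200, 4500, 18000], [0.42, 0.36, 0.17, 0.05])
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, polydispersity = {Mw/Mn:.2f}")
```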
Result Analysis of the TLC-FID Test
As shown in Figure 11, the content of saturates and aromatics in neat asphalt decreases after aging, while the content of resins and asphaltenes increases. After adding different proportions of recycling agents to the aged asphalt, the degree of recovery of the asphalt components varies. In order to evaluate the component replenishment after adding a recycling agent to aged asphalt, the colloidal stability index (Ic) of asphalt [41] was introduced to assess the performance of the recycling agent. The asphalt colloidal structure is determined by the relative content of the components. The Ic was calculated according to Equation (7). A smaller Ic indicates that the colloidal structure is closer to the gel structure, and a larger Ic indicates that the colloidal structure is closer to the sol structure. Systems with an Ic close to that of the original asphalt are more stable. In Equation (7), Ic is the colloidal stability index of the asphalt, R is the mass fraction of resins in the asphalt (%), A is the mass fraction of aromatics (%), ASP is the mass fraction of asphaltenes (%), and S is the mass fraction of saturates (%).

As shown in Table 9, R′ is the ratio of the Ic of each group to the Ic of the neat asphalt. After 5 h of aging, the Ic value of the asphalt was reduced to 87.21% of that of the neat asphalt. When the aged asphalt was incorporated with the recycling agent at the dosage of Group II, its Ic value (1.223) was 87.81% of that of the neat asphalt, which is closest to the aged asphalt. After adding the recycling agent to the aged asphalt at the dosage of Group IV, its Ic value (1.266) was 90.93% of that of the neat asphalt, which is closest to the neat asphalt and therefore represents the best recycling effect.

Conclusions
A permeable cold recycled-asphalt regeneration agent was synthesized and added to aged asphalt, and the high- and low-temperature performance and fatigue life of the aged asphalt after regeneration were analyzed using high- and low-temperature rheological and fatigue tests. The regeneration mechanism of the permeable cold recycling agent on aged asphalt was explored through the infrared spectroscopy test, the gel chromatography test, and the component analysis test. The following conclusions are drawn:
(1) The permeable cold recycling agent has a certain recycling effect on the aged asphalt.
As the proportion of penetrant in the recycling agent increases, the high-temperature performance of the recycled asphalt tends to decrease and then increase, while its low-temperature performance tends to increase and then decrease.
(2) The recovery performance of recycled asphalt is strongly influenced by the mixing ratio of the cold recycling agent. The best recycling effect is achieved when the proportions of aromatic oil and penetrant in the cold recycling agent are 61.2% and 38.8%, respectively.
(3) The penetrant contained in the cold recycling agent enables the aromatic oil to penetrate the aged asphalt at room temperature, replenishing the light components volatilized due to aging, improving the colloidal structure, and restoring the performance of the aged asphalt.
(4) From the aromatic functional groups of the cold recycling agent, it can be seen that all lightweight components come from the aromatic oil. The penetrant serves only as an organic solvent that blends with the aromatic oil and penetrates the asphalt.

This study introduced a synthesized permeable cold regeneration agent and analyzed its regeneration effect, finding that it can restore certain properties of aged asphalt under room-temperature conditions. However, the optimal permeation regeneration time and the permeation depth mechanism of this regenerant are not yet clear. Further research could address the optimal permeation time and the permeation mechanism, with a view to application in practical engineering production.

Figure 2. The preparation method of the cold recycled asphalt regenerant sample.
Figure 3. The effect surface of the ratio (code) of penetrant and aromatic oil to regenerant on the OD value.
Figure 5. Temperature scanning test results. (a) Complex shear modulus and time relationship diagram. (b) The phase angle and time diagram.
Figure 9. The fatigue life value of each group.
Figure 11. Asphalt four-component quality analysis diagram.
Table 1. Matrix asphalt and aging asphalt technical indicators.
Table 2. Aromatic oil technical index table.
Table 4. Factor code level and value.
Table 5. Factor test design and component mixing scale. Note: Group 9~Group 13 were repeated tests.
Table 7. Verification test results table.
Table 8. Molecular weight calculation results. Note: LMS is large molecules, MMS is middle molecules, and SMS is small molecules.
Table 9. Asphalt colloid index Ic and R′ calculation results table.
9,668.4
2023-09-28T00:00:00.000
[ "Materials Science" ]
Albumin Submicron Particles with Entrapped Riboflavin—Fabrication and Characterization Although riboflavin (RF) belongs to the water-soluble vitamins of group B, its solubility is low. Therefore, the application of micro-formulations may help to overcome this limiting factor for the delivery of RF. In this study we immobilized RF in newly developed albumin submicron particles prepared using the Co-precipitation Crosslinking Dissolution technique (CCD-technique) of manganese chloride and sodium carbonate in the presence of human serum albumin (HSA) and RF. The resulting RF containing HSA particles (RF-HSA-MPs) showed a narrow size distribution in the range of 0.9 to 1 μm, uniform peanut-like morphology, and a zeta-potential of −15 mV. In vitro release studies represented biphasic release profiles of RF in a phosphate buffered saline (PBS) pH 7.4 and a cell culture medium (RPMI) 1640 medium over a prolonged period. Hemolysis, platelet activation, and phagocytosis assays revealed a good hemocompatibility of RF-HSA-MPs. Introduction Riboflavin (RF), also known as vitamin B2, is a partially water-soluble vitamin that belongs to the group of flavoenzymes which catalyze oxidation-reduction reactions [1]. It is intrinsically fluorescent and has been used as modern drug [2]. It has been reported that RF has in vivo anti-metastatic properties in melanoma [3]. Several studies have shown that RF may also have antioxidant and anti-inflammatory effects [4,5]. Protective properties against cancer were shown in connection with co-enzyme Q10, RF, and niacin in tamoxifen-treated postmenopausal breast cancer patients [6]. RF has also been useful in photodynamic therapy (PDT). Because of its photosensitizing characteristics, it has a wide range of biological actions, such as inducing apoptosis in leukemia [7] and reducing the progression of prostate cancer cells [8], renal cancer cells [4], and melanoma [3]. Moreover, irradiated RF has been used to inactivate pathogens in blood transfusions [9] and it has the stabilized the corneal collagen crosslinking in keratoconus treatment [10]. RF is required in many oxidation-reduction reactions, and therefore RF deficiency may affect many systems [1]. RF is considered to be one of the most common vitamins with a deficiency in people of developing countries, particularly the countries where rice is their staple food. Consequently, a long-term use of RF supplement is required. Although it belongs to the water-soluble vitamins of group B, its solubility is about 2.65 × 10 −5 mol/L −1 [11]. Therefore, micro-formulations based on hydrophobic interactions between RF and human serum albumin (HSA) may apply to overcome this limiting factor and to increase the therapeutic efficiency of the RF photosensitizer in cancer therapy [12,13]. The immobilization of compounds is a promising strategy for the improvement of stability, solubility, and biological activity through compound capture by carbonate microspheres in the process of their formation (Co-precipitation). The Co-precipitation Crosslinking Dissolution technique (CCD-technique) resulted in the fabrication of biopolymer particles using the precipitation of MnCl 2 and Na 2 CO 3 in the presence of a biopolymer solutions [14,15]. In the case of proteins, we used glutaraldehyde to crosslink the proteins in the MnCO 3 template. The concentration was very low (<0.1%) and the final particles did not contain free aldehyde groups. Therefore, no toxic effects could be found [15]. 
The uniform peanut-like submicron particles were produced with a relatively high protein entrapment efficiency and a narrow distribution of around 700 nm. These carbonate particles could be easily loaded with bioactive compounds (e.g., enzyme) during their preparation [16][17][18][19]. The particle size and shape could be altered by adjusting the experimental conditions such as pH, choice of salt and/or salt concentration, temperature, and rate of mixing the solutions. This technique becomes increasingly interesting due to the high drug-loading capacity of the carbonate particles, the ease of preparation by simply mixing two starting solutions under mild conditions, and the complete dissolution of the carbonate template using EDTA at a neutral pH. Micro-and nanoparticles made of human serum albumin (HSA) are an attractive alternative to synthetic polymers for use in the field of medicine and drug delivery due to their high binding capacity to both hydrophobic and hydrophilic drugs. Albumin nanoparticles showed the benefits of biocompatibility, biodegradability, non-toxic and non-immunogenic properties, thus avoiding inflammatory responses [20]. The HSA-based nanoparticles have been employed to deliver a variety of drugs such as brucine [21] and paclitaxel [22]. Various methods have been developed for the preparation of albumin particles such as desolvation/coacervation [23], emulsification [24], thermal gelation [25], nano spray drying [26], and self-assembly techniques [27] as well as co-precipitation which is used in our studies presented here. For our investigations, RF served as model substance to demonstrate that more or less hydrophobic small molecules can be loaded into protein submicron particles using the CCD-technique. Additionally, the release of RF in vitro was studied in a phosphate buffered saline (PBS) and a cell culture medium (RPMI). Finally, we investigated the hemocompatibility of the RF containing HSA particles (RF-HSA-MPs), which is important for their application as intra-venously administered drug carriers. Fabrication of RF-HSA-MPs Particles As RF is slightly soluble in water, a stock solution of 50 mM RF was prepared by dissolving it in 100% DMSO. The RF stock solution was protected from light by aluminum foil to prevent photo-degradation. The RF-HSA-MPs were fabricated using a modified protocol based on the previously described CCD-technique [17,18]. Briefly, 20 mL of MnCl 2 solution containing 10 mM RF and 10 mg/mL HSA were mixed in a 100 mL beaker for 1 min. Then 20 mL of Na 2 CO 3 were added rapidly under vigorous stirring (Bibby Scientific CB161 Magnetic Stirrer, level 3) for 30 s at room temperature (final concentrations of RF and HSA were 5 mM (≈1.9 mg/mL) and 80 µM (5 mg/mL), respectively). The final concentration of MnCl 2 /Na 2 CO 3 varied from 0.0625 to 0.25 M with a constant RF solution and HSA concentration. The hybrid particles obtained were separated by centrifugation at 3000× g for 3 min and washed twice with a 0.9% NaCl solution. The particles were suspended in a GA solution (final concentration 0.1%) and incubated at room temperature for 1 h, followed by centrifugation at 3000× g for 3 min. The remaining unbound aldehyde groups of GA in the particles were quenched using 0.08 M glycine and 0.625 mg/mL NaBH 4 , and the MnCO 3 template was subsequently removed by treatment with EDTA solution (0.25 M, pH 7.4) at room temperature for 30 min. 
Finally, the resulting particles were centrifuged, washed until the washing solution became colorless, and resuspended in Ampuwa® for further use. The fabrication scheme of the submicron particles is shown in Figure 1. HSA particles with 4 mL DMSO without RF (HSA-MPs) were prepared following the same procedures and used as a control. The amount of RF or HSA entrapped in the particles was determined as the difference between the total RF (RFt) or HSA (Pt) amount added and the free non-entrapped RF (RFf) or HSA (Pf) amount in the supernatants after co-precipitation and after each washing step. The RF concentration was determined spectroscopically by measuring the absorbance of the supernatants at 445 nm with a microplate reader (PowerWave 340, BioTek Instruments GmbH, Bad Friedrichshall, Germany). The protein concentration was determined using a Coomassie Plus (Bradford) Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA) with an absorbance measurement at 595 nm.

Size, Zeta-Potential and Morphology of the HSA-MPs and RF-HSA-MPs
The size, polydispersity index, and zeta potential of the obtained particles were measured using a Zetasizer Nano ZS instrument (Malvern Instruments Ltd., Malvern, UK) at 25 °C. The particles were dispersed in PBS pH 7.4 and placed in a clear disposable zeta cell for the zeta-potential measurement and in a plastic disposable cuvette for the particle size measurement.
Additionally, the particles were imaged using a confocal microscope (CLSM ZeissLSM 510 meta, Zeiss MicroImaging GmbH, Jena, Germany) and the size was assessed from the obtained images using the ImageJ-1 software (NIH, Bethesda, MD, USA). The morphology of HSA-MPs and RF-HSA-MPs was investigated using an atomic force microscopy (AFM) in taping mode and a Nanoscope III Multimode AFM (Digital Instrument Inc., Santa Barbara, CA, USA). The samples were prepared on a freshly cleaved mica substrate pretreated with polyethylene imine (Mw 25 kDa, 1 mM for 20 min) by applying a drop of diluted particle suspension. The substrate was then rinsed with deionized water and dried under a gentle stream of nitrogen. The scans of the particles were first performed in the dry state, followed by the addition of a drop of deionized water and a scan in the wet state. For the scans in air (dry state) micro-lithographed tips on silicon nitride (Si 3 N 4 ) cantilevers with a spring constant of 42 N/m and a resonance frequency of 300 kHz (Olympus Corporation, Tokyo, Japan) were used. Cantilevers with a spring constant of 3 N/m and a resonance frequency of 75 kHz (Budget Sensors, Innovative Solutions Bulgaria Ltd., Sofia, Bulgaria) were used for the measurements in the wet state. The Nanoscope software was used to record and analyze the obtained images. Intrinsic Fluorescence of the HSA-MPs and RF-HSA-MPs The HSA-MPs and the RF-HSA-MPs were observed using a confocal laser scanning microscope (CLSM; ZeissLSM 510 meta, Zeiss MicroImaging GmbH, Jena, Germany) equipped with a 100× oil immersion objective (a numerical aperture of 1.3). Images of the samples were prepared in transmission and fluorescence mode with fluorescence excitation at 488 nm and a 505 nm long pass emission filter. The same settings were used for the imaging of the particles prepared with and without RF. Additionally, the particles were mounted on a glass slide using DakoCytomation fluorescent mounting medium and visualized using an Axio Observer (Zeiss, Göttingen, Germany). The fluorescence intensity was recorded at an excitation wavelength of 480 nm and an emission wavelength of 535 nm. A Zeiss filter cube no. 9 was used for fluorescence microscopy (EX 450-490, BS 510, EM LP 515). The distribution of the fluorescence intensity inside the populations of the HSA-MPs and RF-HSA-MPs was analyzed using a flow cytometry (FACS-Canto II, Becton and Dickinson, Franklin Lakes City, NJ, USA) after diluting the samples with PBS at a ratio of 1:40 [28]. The performance of the flow cytometer was checked regularly using Cytometer Setup and Tracking Beads (BD Biosciences, Franklin Lakes, NJ, USA) to ensure the accuracy and precision of the measurements. A total of 10,000 events of particles were recorded from each sample. Subsequently, the fluorescence of the particles was determined in the PE-A channel as the relative median fluorescence intensity (RFI). The data were analyzed using the FlowJo v10 software (Tree Star, Ashland, OR, USA). In Vitro Release of RF from the RF-HSA-MPs For the release studies, 2.5 mL of 16% (v/v) RF-HSA-MPs suspension were transferred into a dialysis membrane sleeve (Cellu Sep T3, MWCO 12,000-14,000, Creative BioMart, Shirly, NY, USA) and sealed at both ends after adding 1 mL release media (0.1 M PBS pH 7.4 or RPMI 1640 medium supplemented with 10% fetal bovine serum (FBS) and 1% PenStrep to mimic the biological environment). 
The dialyzer was then introduced into a 25 mL glass cylinder containing 9 mL of release media (0.1 M PBS pH 7.4 or RPMI 1640 medium), which was stirred continuously at 100 rpm using a magnetic stir bar at room temperature. The samples were protected from light because of the light sensitivity of RF. The RF release was assessed intermittently by sampling (400 µL) the contents of the outer media and replacing this with an equal volume of fresh PBS pH 7.4 or RPMI 1640 medium, correspondingly, immediately after sampling. The amount of released RF was measured at a wavelength of 445 nm using a UV-vis spectrophotometer (Hitachi U2800, Hitachi High-Technologies Corporation, Krefeld, Germany). The release profiles of RF in PBS pH 7.4 and in RPMI 1640 medium were displayed as the time dependency of the remaining RF concentration in the RF-HSA-MPs and fitted with the release model of Peppas [29], Equation (1):

m(t)/m(∞) = K1·t^n + K2·t^(2n)  (1)

where m(t)/m(∞) is the cumulative drug release, t is the release time (in hours), the first term on the right side is the Fickian contribution (F), the second term is the Case-II relaxational contribution (R) that reflects the structural and geometric characteristics of the MPs, and n is a diffusional exponent that is characteristic of the controlled release of the loaded drug [30,31].

Hemocompatibility of HSA-MPs and RF-HSA-MPs
Freshly withdrawn venous blood was collected from healthy volunteers and anticoagulated using lithium heparin (368494, BD Vacutainer) or sodium citrate (366575, BD Vacutainer). The blood samples were collected at the Charité-Universitätsmedizin Berlin (# EA1/137/14) and all donors provided written informed consent. The blood samples were mixed gently (but thoroughly) to ensure adequate mixing with the anticoagulant immediately after blood collection. All samples were processed within 2 h of blood collection.

Hemolysis Test
The hemolytic activity was determined based on the release of hemoglobin from damaged erythrocytes in vitro. Human heparinized blood was washed with PBS by centrifugation at 3000× g for 5 min to isolate the red blood cells (RBCs). The RBCs were further washed until a colorless pellet was obtained and then diluted to achieve a cell suspension with a volume concentration of 2% in PBS. Then, 0.5 mL of the 0.5%, 1%, and 2% diluted RBC suspensions was mixed with 0.5 mL of 2% HSA-MPs, RF-HSA-MPs, double distilled water as the positive control (PC), or PBS as the negative control (NC). After incubation at 37 °C for 3 h and centrifugation at 3000× g for 5 min, the supernatants were transferred carefully to a 96-well plate and the absorbance was measured using a microplate reader at 545 nm. The degree of hemolysis was calculated as the hemolytic ratio (HR) using Equation (2):

HR (%) = (A_test − A_NC)/(A_PC − A_NC) × 100  (2)

where A_test is the absorbance of the tested sample, A_NC is the absorbance of the negative control in PBS, and A_PC is the absorbance of the positive control in distilled water.

Phagocytosis Test
The interaction of the HSA-MPs and RF-HSA-MPs with the blood leukocytes was evaluated in vitro in human whole blood using a commercial Phagotest kit (Glycotope-Biotechnology GmbH, Heidelberg, Germany). The manufacturer's instructions were partially modified: all reactions were performed with half of the volume (50 µL instead of 100 µL), the lysing solution was changed to an ammonium chloride lysing solution (155 mM NH4Cl, 12 mM NaHCO3, 0.1 mM EDTA), and the DNA was not stained.
To put it briefly, 10 µL of 2 × 10 11 per mL, RF-HSA-MPs, HSA-MPs, were added into 50 µL heparinized whole blood and carefully mixed. For the negative control 10 µL of PBS were added to 50 µL blood and for the positive control (functional test of the granulocytes and monocytes in the blood) 10 µL of 2 × 10 11 FITC-labeled opsonized E. coli (positive control) were applied. The samples were incubated at 37 • C for 30 min (PBS and FITC-labeled opsonized E. coli were incubated for 10 min). The control samples remained on ice. At the end of the incubation period, all samples were placed in the ice-bath. A quenching solution was added and washed with ice-cold PBS. The erythrocytes were lysed with ammonium chloride solution for 15 min. The cells were washed twice and re-suspended in ice-cold PBS. The percentage of granulocytes and monocytes exhibiting phagocytosis was determined using a flow cytometer (BD FACS Canto II, Franklin Lakes, NJ, USA). Platelet Activation Test The effect of the HSA-MPs and RF-HSA-MPs on the function of the blood platelets was tested in a platelet-rich plasma (PRP). The PRP was isolated from human whole blood anticoagulated with sodium citrate by centrifugation at 150× g for 15 min and used immediately. Then the HSA-MPs and RF-HSA-MPs were added to the PRP at a final ratio of 5 particles per 1 platelet, carefully mixed and incubated in a water bath at 37 • C for 30 min. A negative control was prepared adding the same volume of PBS instead of particle suspension. To induce activation and aggregation of the platelets, the pre-incubated PRP samples were treated with 0.5 mg/mL of arachidonic acid or 0.018 mg/mL of epinephrine (Mölab, Langenfeld, Germany) at 37 • C for 30 min. An ABX Micros 60 hematology analyzer (Horiba Europe GmbH) was used to detect the platelet number in the samples before and after incubation. Finally, the platelets were stained with APC-mouse anti-human CD41a and Alexa Fluor ® 488-mouse anti-human CD62p (p-selectin), kept in the dark for 20 min, and fixed with 500 µL of a fixative solution (0.5% paraformaldehyde in PBS) to each test tube to stop the reactions. The expression of the platelet activation marker CD62P and the constitutively present platelet marker CD42b were analyzed using a flow cytometry (BD FACS Canto II). HSA and RF Content, Size, Zeta-Potential and Morphology In previous studies by our group, it has been shown that the co-precipitation technique is much more effective for the protein entrapment than the absorption onto the carbonate particles [32,33]. Moreover, it was found that the entrapment of proteins using the MnCO 3 template was higher than that of the CaCO 3 template [14,15]. The encapsulation efficiency was attributed to the electrostatic attraction between negatively charged proteins and more positively charged MnCO 3 as well as to the stronger affinity of Mn 2+ to proteins and in particular to HSA [34]. However, the addition of low molecular weight compounds into polymeric particles and capsules still remains a challenge. In our study we used RF as a model to investigate the potential of the CCD-technique to deliver carrier systems for low molecular weight drugs with poor water solubility. The weak water-soluble RF was added together with HSA via the CCD-technique as shown in Figure 1. To achieve this RF was already added during the first step of the particle preparation, the co-precipitation, together with HSA. 
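The entrapped RF and HSA amounts reported below follow from the mass balance described in the Methods (total amount added minus the free amount recovered in the supernatant after co-precipitation and after each washing step). A minimal bookkeeping sketch is given here; all masses are illustrative, not the measured data of this study.

```python
def entrapped_amount(total_added_mg, free_in_supernatants_mg):
    """Entrapped amount = total added - free (non-entrapped) amount found in the
    supernatants after co-precipitation and after each washing step."""
    return total_added_mg - sum(free_in_supernatants_mg)

# Illustrative values (mg per batch), not the measured data of this study.
rf_entrapped  = entrapped_amount(75.0,  [58.0, 6.5, 1.2])
hsa_entrapped = entrapped_amount(200.0, [80.0, 9.0, 2.5])
rf_entrapment_efficiency = 100.0 * rf_entrapped / 75.0   # percent of the RF initially added
print(rf_entrapped, hsa_entrapped, round(rf_entrapment_efficiency, 1))
```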
It had been previously shown that RF interacts with albumin through adsorption on the tryptophan residues via hydrophobic interactions [12,13], which was expected to support the RF entrapment into the HSA-MPs. The co-precipitation was performed at the previously optimized concentration of MnCl 2 and Na 2 CO 3 for the entrapment of HSA (0.125 M). The average amounts of HSA and RF entrapped in the particles under these preparation conditions were 2.9 ± 0.8 mg and 2.5 ± 0.5 mg per mL, respectively. This means that in a particle suspension with a volume concentration of 10%, the RF concentration will be roughly 290 µg/mL, which is over four times higher than the solubility of RF in water at 20 °C (70 µg/mL, GESTIS materials database). On CLSM images the HSA-MPs and RF-HSA-MPs exhibited a size between 0.9 and 1.1 µm with an average long diameter of 1 µm. These values correlated well with the measurements using dynamic light scattering (Zetasizer Nano ZS, Malvern Instruments Ltd., Malvern, UK), which delivered values of 1.04 ± 0.15 µm. There were no significant differences found between HSA-MPs and RF-HSA-MPs. Under the conditions chosen for this study, which were a rapid mixing of all compounds at room temperature, the size of the particles was highly reproducible. The main factors that determine the size of the MnCO 3 particles were the concentrations of manganese and carbonate ions, the flow rate of the solutions during mixing, and the temperature. Particular variations of these parameters are needed for controlling the size and shape of the particles in co-precipitation reactions [35]. The mixing of MnSO 4 and NH 4 HCO 3 has been widely employed to prepare manganese carbonate particles used as sacrificial templates for the assembly of polyelectrolyte multilayers via the layer-by-layer (LBL) self-assembly technique. Micron-sized MnCO 3 crystals with different size distributions varying from 1 to 10 µm have been synthesized at low concentration ratios of MnSO 4 /NH 4 HCO 3 with long precipitation times and additional solvents at high temperature [36][37][38][39][40]. Subsequently, the manganese carbonate core was dissolved in HCl at low pH. In this study, MnCO 3 was synthesized by a co-precipitation method using MnCl 2 and Na 2 CO 3 as the manganese and carbonate source, respectively. The precipitation was completed very fast, at room temperature and with a high salt concentration, and the dissolution was completed with EDTA at neutral pH. These conditions are suitable for the preparation of protein particles, avoiding denaturation and preserving the function of the proteins. The morphology of HSA-MPs and RF-HSA-MPs was analyzed using an AFM, as shown in Figure 2. The shape of both kinds of particles was peanut-like. The long diameter measured for both kinds of particles varied between 780 and 890 nm, without significant differences between them. The thickness of the particles, determined from the height profiles, was 400 ± 45 nm, which corresponds to half of the long diameter. The addition of RF did not seem to interfere with the geometry of the particles. The RF-HSA-MPs and HSA-MPs were further investigated with respect to their electrokinetic potential (zeta-potential). This parameter is important for the stability of a particle suspension, in particular for the behavior of the particles in biological fluids. Therefore, three measurements of the zeta-potential were conducted in PBS pH 7.4 (conductivity 17 mS/cm).
Both HSA-MPs and RF-HSA-MPs exhibited a zeta-potential of approximately −15 mV, which is a relatively high value at the high ionic strength of PBS. In water (conductivity 14 µS/cm) the zeta-potential was approximately −39 mV, which contributed to the high colloidal stability and absence of aggregation of the particles in biologically relevant media.

Intrinsic Fluorescence of the HSA-MPs and RF-HSA-MPs
Both HSA-MPs and RF-HSA-MPs could be detected in the fluorescence channels of the confocal microscope. A weak autofluorescence due to the GA crosslinking was observed in the HSA-MPs, as seen in Figure 3(A1,A2), whereas the RF-HSA-MPs showed significantly stronger fluorescence due to the entrapped RF, as seen in Figure 3(B1,B2). The difference in fluorescence emission is demonstrated more clearly in the 3D color surface maps representing a single HSA-MP and RF-HSA-MP in Figure 3(A3,B3), respectively. The higher value of the intrinsic fluorescence of the RF-HSA-MP confirms the successful entrapment of the drug into the particles. Additionally, the intrinsic fluorescence is very useful for tracking these particles when they interact with cells, without the need for additional labeling.

In Vitro Release of RF from the RF-HSA-MPs
The investigation of the drug release was performed using a dialysis-bag diffusion method against PBS pH 7.4 as well as RPMI 1640 medium. The cut-off of the dialysis bag allowed the free diffusion of released RF through the semi-permeable membrane from the solution inside the dialysis bag to the outside, following the concentration gradient. The results of the in vitro release of RF from RF-HSA-MPs are shown in Figure 4a.
The decrease of the RF concentration remaining in the particle suspensions was followed for 80 h. It can be seen that the drug release profiles in both investigated media are bi-phasic, with an initial burst release of approximately 7% in PBS and 12% in RPMI of the initial RF concentration during the first 2 to 3 h. Thereafter, the release rate decreased and a sustained release was observed until the end of the experiments. After 80 h, the drug release amounted to 30% and 45% of the initial loading in PBS and in RPMI, respectively. The release in the RPMI 1640 medium, which contained varying amino acids and 10% calf serum albumin, was significantly faster, probably due to the adsorption of the released RF by these compounds, which resulted in a clearance of the free RF from the solution. Consequently, the concentration of the freely dissolved RF remained lower in the RPMI as compared with the concentration of the freely dissolved RF in the PBS, which led to a faster release. A similar release behavior was shown for a hydrophobic anti-cancer drug released from a micelle system: the release was accelerated in buffers containing albumin [41] due to the binding of the drug to the hydrophobic regions of albumin. The release profiles were fitted using the model of Peppas, Equation (1), which is suitable to describe the bi-phasic controlled release of entrapped drugs from particles. The values calculated for K1 are larger than those calculated for K2 by more than one order of magnitude for both PBS and RPMI. This indicates that the release is dominated by the diffusion mechanism. In RPMI the domination by the Fickian diffusion is much stronger, due to the greater RF concentration gradient between the RF in the RF-HSA-MPs and the bulk RPMI phase.

Hemolysis Test
Hemolysis tests were performed to assess the impact of HSA-MPs and RF-HSA-MPs on the membrane stability of human erythrocytes. The HSA-MPs and RF-HSA-MPs showed low hemolytic activity, with the percentage of hemolysis in the range of 4-7% and in a dose-dependent manner, as shown in Figure 5. Therefore, the HSA-MPs and RF-HSA-MPs did not cause strong hemolytic effects. However, according to the criterion listed in the ASTM E2524-08 standard, more than 5% hemolysis indicates damage to RBCs caused by the test materials. This critical value was reached at a particle concentration of 1% for both HSA-MPs and RF-HSA-MPs.
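The hemolytic ratio from Equation (2) is a simple normalization of the measured absorbances against the negative and positive controls. A minimal sketch is given below; the absorbance readings are illustrative, not the values behind Figure 5.

```python
def hemolysis_ratio(A_test, A_neg, A_pos):
    """Hemolytic ratio (Equation (2)): HR (%) = (A_test - A_NC) / (A_PC - A_NC) * 100."""
    return 100.0 * (A_test - A_neg) / (A_pos - A_neg)

# Illustrative absorbance readings at 545 nm (not measured data).
hr_hsa_mp    = hemolysis_ratio(A_test=0.110, A_neg=0.060, A_pos=1.050)
hr_rf_hsa_mp = hemolysis_ratio(A_test=0.125, A_neg=0.060, A_pos=1.050)
print(f"HSA-MPs: {hr_hsa_mp:.1f} %   RF-HSA-MPs: {hr_rf_hsa_mp:.1f} %")
```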
Hemolysis Test Hemolysis tests were performed to assess the impact of HSA-MPs and RF-HSA-MPs on the membrane stability of human erythrocytes. The HSA-MPs and RF-HSA-MPs showed low hemolytic activity, with the percentage of hemolysis in the range of 4-7% and in a dose-dependent manner, as shown in Figure 5. Therefore, the HSA-MPs and RF-HSA-MPs did not cause strong hemolytic effects. However, according to the criterion listed in the ASTM E2524-08 standard, more than 5% hemolysis indicates damage to RBCs caused by the test materials. This critical value was reached at a particle concentration of 1% for both HSA-MPs and RF-HSA-MPs. In general, size, surface charge, and surface area are key parameters that affect the hemolytic potential of particles. Negatively charged particles interact less with the negatively charged cell surface than positively charged particles. Micron-sized particles are more likely to produce a lower level of hemolysis than smaller particles [42][43][44]. The increase in surface-to-volume ratio with decreasing particle size results in an enlarged surface contact area and provides more opportunity for damage to a cell membrane to take place. This could explain the dose-dependent increase of hemolysis observed with the HSA-MPs and RF-HSA-MPs. Phagocytosis Test The ability of the HSA-MPs and RF-HSA-MPs to induce phagocytic activity of granulocytes and monocytes in whole blood was analyzed using a standard phagocytosis kit. Representative results of these tests are shown in Figure 6.
The fluorescence signal from HSA-MPs and RF-HSA-MPs was detected in the PE-A channel of the flow cytometer, and the FITC-labelled E. coli (used as a standard positive control for phagocytosis) was detected in the FITC-A channel. The three main populations of white blood cells were identified based on their forward scatter (FSC) and side scatter (SSC): granulocytes, monocytes, and lymphocytes (dot-plot, Figure 6A). The histograms in Figure 6B,C represent the distribution of the fluorescence intensity within the populations of HSA-MPs and RF-HSA-MPs in the PE-A channel. The higher fluorescence emission of the RF-HSA-MPs is clearly visible in the shift of their histogram by one order of magnitude to higher fluorescence intensities. The analysis of the fluorescence distribution in the granulocyte and monocyte populations of the samples incubated at 37 °C with FITC-E. coli (Figure 6D,G) shows a strong right shift in the FITC channel due to the engulfment of the fluorescent bacteria. This was not the case in the samples incubated with HSA-MPs and RF-HSA-MPs (Figure 6E,I). The particles did not induce phagocytosis, and therefore their immunogenicity is low. Avoiding clearance by phagocytosis is very important in drug delivery systems using micro-particles and in many cases requires complicated and expensive surface modification of the drug carriers [45]. Therefore, our HSA-MPs and RF-HSA-MPs are very promising for use in drug delivery applications. Platelet Activation Test Further, platelet activation was determined by evaluating the expression of the CD62p (P-selectin) and CD42b platelet surface markers. Non-treated PRP (negative control) showed nearly 10% platelet activation (expression of CD62p) caused by sample handling and preparation. Incubation with agonists (arachidonic acid, epinephrine, and collagen) caused an increased expression of CD62p in the platelets, confirming their functionality. The measurement of the CD42b/CD62p co-expression in platelet samples treated with the HSA-MPs or RF-HSA-MPs revealed that there was no effect on the CD62p expression in CD42b-positive cells. This is comparable to the control sample. Together with the agonists, HSA-MPs or RF-HSA-MPs did not induce a different behavior in the activation of the platelets in comparison with the samples treated with agonists only.
Therefore, both HSA-MPs and RF-HSA-MPs did not activate the platelets and did not augment the platelet response to the agonists. Representative dot plots and summarized results of the platelet activation test are shown in Figure 7. Conclusions In conclusion, we demonstrated that the encapsulation of RF, a drug with a low molecular weight and low water solubility, can be performed by capturing it together with HSA within the growing MnCO3 particles. The negatively charged particles can be produced with a narrow size distribution and diameters of less than 1 µm. The release of RF from the particles exhibits a bi-phasic profile with a dominating Fickian diffusion mechanism. These findings suggest that RF-HSA-MPs represent a compelling strategy for a long-term drug delivery system, and that the CCD technique of incorporation is applicable to various biomolecules with different molecular weights. Taken together with the investigation of the release of RF and the hemocompatibility, this work provides basic information for the production and application of HSA-based micro-particles as a drug carrier system.
8,636.6
2019-03-01T00:00:00.000
[ "Biology" ]
“First” abyssal record of Stenosemus exaratus (G.O. Sars, 1878) (Mollusca, Polyplacophora) in the North-Atlantic Ocean Abstract The first proven abyssal record of Stenosemus exaratus (G.O. Sars, 1878) is presented on the basis of an ROV study off Ireland in the Northeast Atlantic. For the first time, in situ images of the species and data on the environmental parameters are provided. Introduction Polyplacophoran molluscs are a group of exclusively benthic organisms distributed worldwide that are found from the splash zone down to hadal depths (Schwabe 2008). According to Schwabe (2008) the maximum depth at which Stenosemus exaratus (G.O. Sars, 1878) has been collected is 2580 m. Schwabe (2008) repeated depth ranges for this species given by Kaas and Van Belle (1990), but confirmed localities where the species was collected at abyssal depths could not be traced. Thus proof of the occurrence of S. exaratus below the continental rise sensu Gage and Tyler (1991) is still lacking. The only abyssal records of chitons in the North Atlantic (excluding the Caribbean Sea) are restricted to a handful of records from off Galicia and the Bay of Biscay, and all refer to Leptochiton alveolus (M. Sars MS, Lovén, 1846). Kaas and Van Belle (1994) were apparently also aware of abyssal records for Placiphorella atlantica (Verrill & S. I. Smith in Verrill, 1882), but subsequent research again failed to trace these (Schwabe 2008). Thus Leptochiton alveolus is so far the only "true" abyssal Northern Atlantic species for which precise occurrence records are available. During an expedition exploring canyon systems to the southeast of the Rockall Trough on the shelf edge of Ireland, one of us (LA) was able to collect three specimens of S. exaratus by means of an ROV (remotely operated vehicle). Still and high-definition video camera systems provide for the first time an insight into the species' habitat. In addition, data are presented on relevant environmental parameters. Material and methods The chitons recorded here were collected during survey CE10004 of RV Celtic Explorer. This cruise, entitled 'Species at the Margins', sampled an unnamed canyon system at the edge of the continental margin, north of the Porcupine Bank, using the Irish deepwater ROV Holland I. ROV Holland I is a Quasar work class ROV rated to 3000 m. It is equipped with several video camera systems, including a Kongsberg OE14-502a high definition colour zoom camera and a Kongsberg OE14-208 digital stills camera, and has two robotic arms and a slurp sampler. Laser sights are positioned 10 cm apart to facilitate size estimates. Samples from the slurp sampler are maintained in an enclosed system for the duration of the dive. Fauna collected with the robotic arms are stored in extendable storage boxes. Once samples arrived on deck, they were hand-picked from the ROV boxes and sediment was sieved through a 500 μm mesh. The chitons were deposited at the Bavarian State Collection of Zoology (ZSM Mol 20110215) and identified there (by ES). Environmental parameters were obtained using a 24-rosette conductivity-temperature-depth (CTD) data logger from the nearest locality and by visual inspection of the sediment. According to the available video sequences, the species was collected at 2:06 pm. The position of the ROV was determined using a global acoustic positioning system, which incorporates inertial navigation systems and global positioning using ultrashort baseline beacons. At these depths, position data can be intermittent.
We obtained position data approximately 10 minutes after the chiton was collected. As the ROV was climbing a vertical wall during this period, only the depth value is slightly inaccurate, the actual collection depth being slightly (approximately 20 meters, as estimated from video footage) deeper than the nearest datum point. Data resources The data underpinning the analysis reported in this paper are deposited in the Dryad Data Repository at doi: 10.5061/dryad.h261h. Results During station 96 of cruise CE10004, an ROV dive to 3000 m depth, three full-grown specimens of Stenosemus exaratus (G.O. Sars, 1878) were collected on a steep wall of an unnamed canyon southeast of the Rockall Trough (Fig. 1) at 54.2172°N, 12.6598°W. One specimen was sighted and taken just below the nearest recorded depth of 2733 m. Two additional specimens were taken blind by the slurp sampler, from wall sediment during the course of the dive. The wall extends vertically from approximately 2800 to 2650 m and consists of chalk, but is covered all over by a very fine greenish-grey silt layer. Despite a remarkably high number of scars and micro-cavities, the only other obvious macrobenthic fauna close to the sighted chiton was a glass sponge approximately 30 cm in length. No feeding tracks or "home" marks were visible around the chiton. Data from the CTD at station 93 (54.217°N, 12.661°W, depth 2733 m) reveal the following abiotic parameters: salinity 34.925, temperature 2.85°C, pressure 2775.87 dbar and oxygen 235 μmol/kg (this corresponds to a saturation of about 72-73%). These data indicate that the area is influenced by cold, oxygen-rich Labrador Sea Water (e.g., McGrath et al. 2012). Discussion Abyssal records of chitons are scarce and few species are known to inhabit depths below the continental slope (see Schwabe 2008). Schwabe (2008) also showed that eurybathy occurs very rarely in polyplacophorans. Among the few species exhibiting eurybathy is Stenosemus exaratus, reported herein. The present finding represents its deepest record (Fig. 2, circle), but it also occurs rather shallowly in fjord systems, including the Chilean Fjord region, where Schwabe and Sellanes (2010) recorded the shallowest occurrence at 23 m. A similar situation was revealed for the other North Atlantic abyssal species, Leptochiton alveolus (Fig. 2, triangles). While its abyssal records to date are restricted to the canyon regions of the Bay of Biscay, we found it at 1380 m during cruise CE11006 of RV Celtic Explorer during a dive of ROV Holland I under a Lophelia pertusa bank in the Whittard Canyon. This coral species was also recorded at 1350 m in the Whittard Canyon system by Huvenne et al. (2011). Mortensen and Fossa (2006, as Lepidochitona [sic] alveolus) reported L. alveolus from living cold-water coral Lophelia pertusa reefs in the Midfjord (Norway) at depths between 150-160 m. The previous deep-water findings of Leptochiton alveolus in the Bay of Biscay region (Fig. 2) lack accompanying data and it remains unclear if the species is somehow related to the occurrence of Lophelia pertusa. However, hypothetically this would be possible, as Davies and Guinotte (2011: figs 4, 5) demonstrated that a co-occurrence of the two species is possible. Jensen and Frederiksen (1992), however, did not record a single chiton from Lophelia-associated communities. Acknowledgements Bathymetric data were provided by the GEBCO_08 Grid, version 20100927. We thank the captain and crew of RV Celtic Explorer and the ROV team led by Jim MacDonald.
This research survey was carried out under the Sea Change strategy with the support of the Marine Institute and the Marine Research Sub-programme of the National Development Plan 2007-2013. Boris Sirenko (Russia), Bruno Dell'Angelo (Italy) and an anonymous reviewer kindly provided helpful comments on an earlier version of the manuscript.
1,626.8
2013-03-04T00:00:00.000
[ "Biology", "Environmental Science" ]
Reanalysis-based contextualization of real-time snow cover monitoring from space Satellite remote sensing provides real-time information on the extent of the snow cover. However, the period of record is generally too short to build a reference climatology from these data alone, preventing their use as climatic indicators. Here we show that reanalysis data can be used to reconstruct a 30 year snow cover time series that fits well with the satellite observations. This climatology can then be used to put the current state of the snow cover into perspective. We implemented this approach to provide real-time information on the snow cover area in the Alps through a web application. Introduction Various stakeholders, including citizens, seek real-time information on the current state of the environment (Hewitt et al 2012, Vaughan and Dessai 2014). The availability of real-time meteorological data, accompanied by their climatic context, contributes to increasing environmental awareness and provides relevant information for the real-time management of challenging situations (Overpeck et al 2011). Social media now allows a swift dissemination of such information, reducing barriers between scientific organizations and society (Pearce et al 2019). In situ meteorological observations are critically relevant but they often provide sparse sampling of hydrometeorological variables, especially in mountain regions (Hik and Williamson 2019). Satellites provide real-time, spatially continuous data on some essential climate variables (Bojinski et al 2014). However, satellite observation time periods are often too short to characterize climatological references, especially at the local scale. On the other hand, meteorological reanalyses provide information over longer time scales but are often not available in real time. For example, the global Modern-Era Retrospective analysis for Research and Applications (MERRA-2) is published with a latency of a few weeks (Reichle et al 2017). ERA5 is updated with a shorter latency of 5 d. However, the fully quality-checked final product is released two months later (Hersbach et al 2020). In addition, the MERRA-2 and ERA5 spatial resolutions are approximately 50 km and 30 km, respectively, which is too coarse for some applications. More advanced reanalyses are updated with longer latency. For example, ERA5-Land (approximately 9 km resolution) is available with a 2-3 month delay (Muñoz-Sabater et al 2021), which makes it unsuitable for real-time applications. Hence, there is often a gap between real-time products and long-term records at the climatic time scale, because the latter are not immediately available. This situation is unfortunate, as many environmental variables are relevant at both time scales; however, the combination of various sources of information can be used to bridge this gap and build upon their respective benefits across temporal and spatial scales (AghaKouchak and Nakhjiri 2012, Notarnicola 2022). Here, we exemplify such an approach, focusing on the development of real-time monitoring of the snow cover area based on the combination of satellite and reanalysis data. The current well-established methods used to retrieve the snow cover extent from spaceborne sensors have been developed since the 1980s. In particular, the Moderate-Resolution Imaging Spectroradiometer (MODIS) multispectral optical sensor onboard Terra has enabled the daily measurement of the snow cover area at 500 m resolution since 2000 (clouds permitting).
MODIS snow products are distributed by the National Snow and Ice Data Center (NSIDC) to alleviate the processing of the MODIS data by end users. These products indicate the presence of snow along with a cloud mask (Hall and Riggs 2015). Based on MODIS data only, the first author has developed a web-based tool to analyze in real time the snow cover area in the Alps (Alps Snow Monitor n.d.). This tool has attracted some attention, especially in the context of the severe drought that affected the Alps during the winter of 2022 (European Commission, Joint Research Centre 2022). For example, this tool allowed us to reveal that on 2 March 2022, the snow cover area in the Alps had reached its lowest value since 2001. This finding was published on 5 March 2022. Later, we showed that the 2022 snow cover area in the Alps reached its minimum earlier than any other year since 2001 (figure 1). Other similar tools use the MODIS record to provide information to the general public, e.g. 'SnowCloudMetrics' (Crumley et al 2020) or 'Snow Today' (Snow Today Article | NSIDC Reports n.d.). However, although the MODIS data record already spans 22 years, it does not reach the 30 year standards that are used by meteorological agencies to define climatological references. Hence, it could lead to non-robust assessments of the true extent of a given situation. Given the interest in having real-time information regarding snow cover in the Alps, we sought to provide a more robust contextualization of the MODIS observations. To this end, we used longer reanalysis data spanning the entire European Alps domain at a sufficiently fine spatial resolution for this mountain environment, namely, the Uncertainties in Ensembles of Regional ReAnalysis (UERRA) reanalysis at 5.5 km resolution (Soci et al 2016, Morin et al 2021). A 30 year-long snow cover area climatology was generated from this dataset using machine learning, de facto generating MODIS-like data from the reanalysis for the time period before the onset of the MODIS observations. Here, we describe the tool and the method for generating this contextualization dataset. We discuss the implications of bringing together the best of the two worlds of space-borne observations (real-time, short time periods) and reanalysis data (longer-term). Remote sensing data Our application was designed to map and plot the evolution of the snow-covered area over a large region (>10⁴ km²), such as the entire Alps range or one of its main river catchments (figure 2). At this scale, the Terra/MODIS MOD10A1 snow products offer the best compromise in terms of accuracy, spatial resolution, revisit and duration (Dietz et al 2012, Dumont and Gascoin 2016). From the MOD10A1 snow product, we extracted the NDSI_Snow_Cover field. This field indicates the normalized difference snow index (NDSI) for snow-covered pixels. The NDSI is computed using the green and shortwave infrared bands and is used to identify snow, based on its higher reflectance in the visible portion of the spectrum compared to the shortwave infrared (Hall et al 2002). Approximately 50% of the pixels were flagged as clouds; therefore, we implemented a gap-filling algorithm using linear interpolation in the time dimension and on a pixel basis. This algorithm was limited to fill a maximum of 10 d, which is usually sufficient to fill approximately 90% of the cloud pixels in temperate regions (Gascoin et al 2015).
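A minimal sketch of this per-pixel gap-filling and thresholding step is given below. It assumes the NDSI_Snow_Cover values are stored in a (time, y, x) NumPy array with cloud pixels as NaN; the array and function names are hypothetical, and the authors' actual implementation runs in Google Earth Engine (see Data and code availability).

```python
# Minimal sketch, assuming `ndsi` is a (time, y, x) array of NDSI values on
# the MOD10A1 0-100 scale with cloud pixels stored as NaN. An NDSI > 0.2
# threshold on the 0-1 scale corresponds to > 20 here. Variable names are
# hypothetical placeholders, not the authors' code.
import numpy as np
import pandas as pd

def gap_fill_and_binarize(ndsi, dates, max_gap_days=10, threshold=20):
    """Linearly interpolate cloud gaps in time, then threshold to snow/no-snow."""
    t, ny, nx = ndsi.shape
    idx = pd.DatetimeIndex(dates)
    # One column per pixel; interpolate along the (daily) time axis, so the
    # `limit` of consecutive filled values equals a number of days.
    flat = pd.DataFrame(ndsi.reshape(t, -1), index=idx)
    filled = flat.interpolate(method="time", limit=max_gap_days,
                              limit_direction="both").to_numpy()
    return filled.reshape(t, ny, nx) > threshold  # boolean snow maps

# Tiny demo on a 4-day, single-pixel series with one cloud gap
dates = pd.date_range("2021-01-01", periods=4, freq="D")
series = np.array([40.0, np.nan, 20.0, 10.0]).reshape(4, 1, 1)
print(gap_fill_and_binarize(series, dates)[:, 0, 0])  # [ True  True False False]
```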
Although more sophisticated algorithms exist to fill the gaps in MODIS snow cover products, linear interpolation in the time dimension is an acceptable trade-off between efficiency and accuracy and is sufficient for our scale of analysis (Parajka and Blöschl 2008, Gascoin et al 2015). The linear interpolation was bounded to a time window of 5 d to limit the computation time. The resulting series of daily gap-free NDSI was converted to a series of binary snow cover maps (snow/no-snow) using the threshold NDSI > 0.2. This threshold corresponds to a snow cover fraction of approximately 30%, following the relationship that was used in the previous MOD10A1 products (Salomonson and Appel 2004). This threshold is arbitrary but necessary, as a binarization of the snow cover fraction must be done to allow for the computation of a snow cover duration map. Then, daily gap-filled snow cover maps over 2000-2021 were aggregated to a 5 km resolution by majority resampling. The snow-covered fraction of a given region was computed from this time series of 5 km resolution snow maps and subsequently used as training data for the machine learning algorithm described below. Reanalysis data We used the UERRA 5.5 km reanalysis data, which covers all of Europe and is available from 1961 to 2015 (Soci et al 2016, Lopez 2019, Morin et al 2021). This reanalysis uses the ERA-40 (from 1961 to 2002) and ERA-Interim (from 2002) global reanalyses as input to the regional numerical weather prediction system HARMONIE (Bengtsson et al 2017) at 11 km resolution, downscaled to 5.5 km resolution. These predictions were combined with an analysis system enabling correction of the raw predictions of HARMONIE using in situ observations of temperature, relative humidity and precipitation (Bazile et al 2017). During a second step, the reanalysis of near-surface atmospheric variables at 5.5 km resolution was used to drive the snow cover model Interactions between Soil, Biosphere and Atmosphere-Explicit Snow (ISBA-ES) as part of the SURFEX modeling platform (Masson et al 2013). The ISBA-ES snow cover model is an intermediate-complexity 12-layer snow scheme (Boone and Etchevers 2001, Decharme et al 2016). This reanalysis provides estimates of daily snow depth values at 5.5 km resolution (Bazile et al 2017). The full documentation is available along with the UERRA dataset in the Copernicus Climate Data Store (Lopez 2019). The UERRA snow depth data, available from 1961 to 2015, were used as predictors for the machine learning approach described below. Model We trained a statistical model to predict the snow cover area of a given region of interest at the daily time step. For a given day, the input samples (predictors) are the snow depth values on that day extracted over the region of interest from the UERRA reanalysis. We tried linear regression and gradient boosting, two machine learning algorithms commonly used in geosciences and ecology. Gradient boosting is a flexible and efficient nonparametric statistical learning technique for classification and regression (Friedman 2001). We used the implementation of the linear regression and gradient boosting regression functions in the Scikit-learn Python module (Pedregosa et al 2011). The model was optimized by minimizing the squared error between the training and predicted data. The data were randomly split into training (75%) and test (25%) subsets.
For the gradient boosting, we set the learning rate to 0.1 (default value), the number of boosting stages to 100 (default value) and did not activate stochastic subsampling. The function to measure the quality of a split was kept at the default, the Friedman mean squared error. We fit a different model for each region of interest using the same workflow. We considered five regions of interest: the entire Alps range and the intersections of the Alps range polygon with the river basins of the Rhine, Rhône, Danube and Po. The Alps and the river basin polygons were sourced from the European Environmental Agency. Indeed, we chose to divide the Alps domain into river basins to characterize the spatial variability of the snow cover within the Alps range and to strengthen the link with external water resource monitoring tools in a more meaningful way than an aggregation at the scale of the entire Alps domain. We used the region-calibrated model to predict the snow-covered fraction of each region from 1991 to 2018. This series was completed with the MODIS data to generate a 30 year climatology until 2021. We considered that a snow season starts on 1 November and ends on 1 July. Although the seasonal snow cover can last beyond July in the Alps, we restricted our analysis to the core of the snow season to reduce computational cost. In addition, our method is not adapted to detect patchy snow cover, which is predominant during late summer. The predicted snow cover time series were then aggregated by day of year in the form of percentile values corresponding to 0 (minimum), 25 (first quartile), 50 (median), 75 (third quartile) and 100 (maximum). These daily statistics formed the climatology that was used below. The climatology was uploaded as a public asset to the Earth Engine server. Real-time implementation The real-time application was implemented in Google Earth Engine (Gorelick et al 2017). MOD10A1 products are quickly ingested in Earth Engine, so that the latency between sensing time and the availability of the images is usually below 4 d (see appendix). The user must select a region of interest (Alps, Rhine, Po, Rhône or Danube). Then, the application computes the snow cover area from the beginning of the snow season until the latest available MODIS snow product using the same method as described above, i.e. after linear interpolation of the cloud pixels. The resulting series is concatenated to the climatology and transferred from the server to the client, i.e. any web browser that calls the application. The application returns the data as an interactive chart, which displays the plotted values if the user hovers the mouse over the lines. If the user enlarges the chart, the plotted data can be exported as comma separated values for further analyses. The computation was split into two periods, from 1 November to 31 December and from 1 January to 1 July, to avoid hitting current Earth Engine usage limits for noncommercial users. The charts were embedded in a webpage of the Séries Temporelles blog (Alps Snow Monitor n.d.) but can be run in a separate window of a web browser. Results Figure 3 shows the performance of the gradient boosting regressor that was used to predict the snow-covered fraction of the Alps from UERRA data. The predicted samples were not used in the model optimization. The model performance is high, with R² = 0.98 and a root mean squared deviation (RMSD) value of 4%. A linear regression model was also tested but performed poorly, with an R² value of −1.41 and an RMSD value of 39%.
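The training workflow described above (one model per region, 75/25 split, learning rate 0.1, 100 boosting stages, no subsampling, squared-error objective) might look like the following sketch. The input arrays are synthetic placeholders: in the real pipeline X would hold the UERRA snow depth values over the region of interest for each day and y the corresponding MODIS-derived snow cover fractions.

```python
# Sketch of the per-region training workflow, with synthetic placeholder data
# standing in for the UERRA snow depths (X) and MODIS snow cover fractions (y).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.gamma(shape=1.0, scale=0.3, size=(2000, 500))  # placeholder snow depths (m), one row per day
y = np.clip(X.mean(axis=1) * 3, 0, 1)                  # placeholder snow cover fraction per day

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(
    loss="squared_error",      # squared-error objective, as in the paper
    learning_rate=0.1,         # default value, as stated
    n_estimators=100,          # 100 boosting stages (default)
    subsample=1.0,             # no stochastic subsampling
    criterion="friedman_mse",  # default split quality function
)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)  # held-out samples, not used in optimization
print("R2  :", r2_score(y_test, y_pred))
print("RMSD:", np.sqrt(mean_squared_error(y_test, y_pred)))
```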
We also obtained an accurate model for every river basin subregion (figure 5). The performances were slightly lower, with R² values ranging from 0.96 to 0.97 and RMSD values ranging from 4% to 6%. Figure 6 shows an example of the predicted snow cover fraction during the two snow seasons 2004-2006 for the Rhône subregion. Figure 7 shows a screenshot of the application as of 31 July 2022, when the Po subregion was selected. Figure 8 compares the output of the application before and after the introduction of the 30 year climatology. In the former version, 20 years of MODIS data were used to provide a range of variability. In the new version, we plot the current year over the percentiles from the 30 year climatology. This new version reveals the exceptional trajectory of the snow cover area in the Alps during the hydrological year 2021-2022, as it remained well below the median during almost the entire melt season. (Figure 8 caption: Alps snow monitor application before (left) and after (right) the upgrade including the 30 year climatology. Note that the new application was upgraded to start on 1 October, whereas the previous version only started on 1 November, which explains why the red curves do not look similar; they are identical over the common period, 1 October-1 July.) Discussion and conclusion Cloud computing platforms enable the development of services based on remote sensing data streams without having to develop infrastructure (data storage, updating, cataloging, processing, publishing). We developed our application in Google Earth Engine, but it could be implemented on another server, since it uses only public data and standard remote sensing image processing algorithms that are implemented in open-source software. We find that the error of our statistical model slightly increased from the entire Alps domain to the river basin subdomains. These minimal performance losses were expected due to the reduction of the domain size, which tends to increase the uncertainty in MODIS data. It also reflects the effect of the uncertainties in the UERRA reanalysis at the local scale. This suggests that further reductions in the domain size could lead to increased errors, limiting the relevance of this approach for local-scale real-time snow cover monitoring. Our model was trained after applying an NDSI threshold of 0.2 to capture lower snow fractions than the standard 0.4 threshold; however, it may fail to retrieve areas with less than 30% snow cover (Salomonson and Appel 2004). The gradient boosting should be trained again if this threshold is changed to capture lower snow fractions. We provide the source code of the entire pipeline to do this (see Data and code availability below). The integration of higher resolution remote sensing products is a solution to focus on more local scales (e.g. ski resorts, national parks, etc.). Methods exist that allow downscaling MODIS to 20 m resolution in real time using Sentinel-2 products (Revuelto et al 2021). Predictions into the future are the next step for this approach, which could be feasible using numerical weather prediction output instead of reanalysis data (Andersson et al 2021). Another key limitation of this application is that it does not provide information on the snow water equivalent or streamflow. Both variables would be more directly useful for water management. In snow-dominated catchments, streamflow can be simulated in real time using MODIS data and the Snowmelt Runoff Model (Rango and Martinec 1979, Sproles et al 2016).
This could be done with the same platform if stream gauge data are available to calibrate the model parameters. Mapping the distribution of the snow water equivalent might be more challenging, especially in mountain catchments, and requires a more advanced combination of model and remote sensing data (Dozier et al 2016). Despite these limitations, the current application is relevant to characterize the snow cover, and indirectly the status of the Alps water resources, in real time. This is especially relevant during periods of extreme weather, such as the 2022 snow drought. It allows everyone to gauge how extreme the current conditions are, thereby contributing to a better understanding of the climate system and its evolution. Given the increasing availability of remote sensing products and land surface reanalyses, a similar approach could be implemented to characterize the evolution of other key variables such as soil moisture, surface water area, evapotranspiration, vegetation phenology, etc. Data and code availability A pre-release of the application and its previous version are available at https://labo.obs-mip.fr/multitemp/apps/alps-snow-monitor/. The data and Python code to train the model and infer the snow cover fraction climatology are publicly available on GitHub (https://github.com/sgascoin/ModisExtension/releases/tag/v1). Earth Engine JavaScript code to compute the snow cover fraction from MODIS products and the code of the web application are publicly available in this git repository (https://earthengine.googlesource.com/users/sgascoin/apps). We acknowledge Dongdong Kong (China University of Geosciences) for sharing the temporal interpolation code (https://github.com/gee-hydro/gee_docs). Data availability statement The data that support the findings of this study are openly available at the following URL: https://github.com/sgascoin/ModisExtension. Appendix We computed latency times for all MOD10A1 images acquired in 2021 that are available in Earth Engine (figure A1). We found that the images were available with a median time of 2.34 d, and 90% of the images were available with a latency of 4.36 d. For the current set of images acquired in 2022 (as of 3 July 2022), the median latency was 3.39 d and the 90th percentile was 9.07 d. In 2017 and 2018, Earth Engine engineers reported authentication problems with NSIDC downloads which significantly delayed the availability of MOD10A1 products, but this did not happen again in recent years.
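The latency statistics reported in the appendix reduce to simple percentiles of the ingestion-minus-sensing time differences. The sketch below illustrates the computation with invented timestamps; it is not the authors' code, which queries the image collection in Earth Engine.

```python
# Illustration of the appendix latency statistics: the delay between MODIS
# sensing time and image availability in Earth Engine. Timestamps are invented.
import pandas as pd

sensed = pd.Series(pd.to_datetime(["2021-01-01", "2021-01-02",
                                   "2021-01-03", "2021-01-04"]))
ingested = pd.Series(pd.to_datetime(["2021-01-03 06:00", "2021-01-04 12:00",
                                     "2021-01-07 00:00", "2021-01-06 09:00"]))

latency_days = (ingested - sensed) / pd.Timedelta(days=1)
print("median latency (d):", latency_days.median())
print("90th percentile (d):", latency_days.quantile(0.9))
```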
4,394.6
2022-10-28T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Prognostic Value of GDF-15 in Predicting Prolonged Intensive Care Stay following Cardiac Surgery: A Pilot Study Introduction Predicting intensive care unit length of stay and outcome following cardiac surgery is currently based on clinical parameters. Novel biomarkers could be employed to improve the prediction models. Materials and Methods We performed a qualitative cytokine screening array to identify highly expressed biomarkers in preoperative blood samples of cardiac surgery patients. After identification of one highly expressed biomarker, growth differentiation factor 15 (GDF-15), a quantitative ELISA was undertaken. Preoperative levels of GDF-15 were compared in regard to duration of intensive care stay, cardiopulmonary bypass time, and indicators of organ dysfunction. Results Preoperatively, GDF-15 was highly expressed, in addition to several other, less highly expressed biomarkers. After quantitative analysis, we could show that preoperatively raised levels of GDF-15 were positively associated with a prolonged ICU stay exceeding 48 h (median 713 versus 1041 pg/ml, p = 0.003). They were also associated with prolonged mechanical ventilation and rates of severe sepsis, but not with dialysis rates or cardiopulmonary bypass time. In univariate regression, raised GDF-15 levels were predictive of a prolonged ICU stay (OR 1.01, 95% confidence interval 1-1.02, p = 0.029). On ROC curves, GDF-15 was found to predict prolonged ICU stay (AUC = 0.86, 95% confidence interval 0.71-0.99, p = 0.003). Conclusion GDF-15 showed potential as a predictor of prolonged intensive care stay following cardiac surgery, which might be valuable for risk stratification models. Introduction Advances in surgical and medical techniques as well as innovation in intensive care treatment have reduced mortality during and after cardiac surgery [1]. Conversely, morbidity has increased, mainly due to the increased utilization of cardiac surgery in elderly and more vulnerable patients with a growing burden of preexisting disease, leading to more complex intensive care treatment [2]. Prolonged stay in the intensive care unit (ICU) following cardiac surgery represents a significant burden of disease. Up to 26% of patients will spend more than 3 days in the ICU after cardiac surgery, which is in turn associated with organ dysfunction, prolonged mechanical ventilation, and thus impaired outcomes [3]. To overcome these circumstances, prediction models of prolonged ICU stay can be helpful and should be implemented for efficient use of ICU resources [4]. However, the predictive ability of current risk stratification models has not improved despite the further inclusion of patient and disease characteristics [5]. A novel approach to improving these models could be the inclusion of biomarkers for preinterventional risk stratification. The use of established and emerging biomarkers, such as CRP and GDF-15, has shown significant promise for predicting outcome in myocardial infarction and heart failure [6,7]. Additionally, the measurement of biomarkers would be a reliable variable, i.e., not prone to be influenced by inaccurate medical history or clinical judgement. Whilst several biomarkers have been investigated for use as predictors of mortality and morbidity in cardiac surgery patients, no studies have considered their value in predicting length of stay on the ICU [8,9].
They could be an additional tool to provide information for preoperative optimization and accurate prediction of postoperative outcomes in this group of vulnerable patients. Biomarkers, especially cytokines, can be used to reveal underlying physiological and pathophysiological processes. For instance, biomarkers are already widely used in nephrology to predict kidney failure [10]. The primary objective of this study was an exploration of novel cytokines for prediction of prolonged ICU length of stay (PICULOS) in preoperative blood samples of cardiac surgery patients. Subsequently, a further analysis of highly expressed cytokines and their relationship to PICULOS was undertaken. The secondary objective included determining the usefulness of highly expressed cytokines for predicting severe sepsis, length of mechanical ventilation, renal replacement therapy, delirium, and mortality. Study Design and Patient Selection. This prospective observational study used an existing biobank of blood samples collected from cardiac surgery patients (Ethics Committee of the University Hospital Aachen, RWTH University, Aachen, Germany, reference number EK 151/09). The principal enrolment criterion was cardiac surgery including coronary artery bypass grafting (CABG), aortic valve or combined CABG/aortic valve operations (AVR) performed during cardiopulmonary bypass at the University Hospital Aachen between January 2017 and July 2017. Exclusion criteria were other types of cardiac surgery, incomplete medical records, and missing blood samples. All patients provided written informed consent, and their identifying information was removed prior to analysis. Blood samples were collected 1 day preoperatively, directly upon ICU admission, and 24 and 48 hours postoperatively. After centrifugation at 4°C for 10 minutes, plasma samples were frozen at -80°C. We defined prolonged intensive care length of stay (PICULOS) as a time period greater than 48 hours, as other studies in cardiac surgery demonstrated recovery within 48 hours and showed development of complications thereafter [11]. Patient characteristics and clinical parameters were retrieved from an electronic patient data recording system (medico//s, Siemens, Germany) and from a patient data management system (IntelliSpace Critical Care and Anesthesia, ICCA Rev. F.01.01.001, Philips Electronics, The Netherlands). The definition of severe sepsis as outlined in the Third International Consensus Definitions for Sepsis and Septic Shock was used [12]. Postoperative delirium was defined by CAM-ICU [13]. Acute kidney failure was defined as stage 3 kidney injury following KDIGO guidelines [14]. EuroSCORE II was calculated using the online tool [15]. We randomly selected 4 patients with a normal ICU stay, i.e., shorter than 48 hours, as a control group (non-PICULOS), and another group of 4 patients who stayed longer than 48 hours on the ICU (PICULOS). Cytokine Screening. A cytokine and chemokine detection array (Proteome Profiler™ Human XL Cytokine Array Kit, R&D Systems, Minneapolis, MN, USA) covering 105 cytokines was used to screen 4 randomly selected preoperative blood samples from patients with PICULOS. These were matched by 4 randomly selected non-PICULOS patients. The plasma samples were not pooled. After dilution and overnight incubation, the detection membrane was washed and a detection antibody was added. Streptavidin-HRP and chemiluminescent detection agents were applied, and the signal produced was captured.
The mean spot density was measured using the ImageQuant TL software (Version 8.1, GE Healthcare). These values were normalized against a calibrated measurement described in the test kit instructions. After this measurement, we averaged the mean spot density of each cytokine within the groups to allow a comparison between PICULOS and non-PICULOS blood samples. GDF-15 Measurements and Patient Selection for Further Quantitative Analysis. After identifying GDF-15 as the cytokine showing the most distinctive differences between groups, we performed a quantitative measurement. A further patient selection (n = 89) was performed for both PICULOS and non-PICULOS after expanding our exclusion criteria to patients with a glomerular filtration rate (GFR) below 50 ml/min or any signs of inflammation, as both of these conditions can also cause GDF-15 elevation [16,17]. We randomly selected 12 patients from each group, resulting in 24 patients in total. The stored plasma was thawed and analyzed using a commercially available enzyme-linked immunosorbent assay (Duoset® ELISA development system, human GDF-15, catalogue number DY957, R&D Systems, Minneapolis, MN, USA) following the manufacturer's protocol. Due to the expected levels of GDF-15 and the sensitivity of the test kit, the samples were diluted according to the manufacturer's instructions up to 1:50. 2.4. Statistical Analysis. Discrete variables are given as absolute numbers and percentages. Continuous variables are presented as median and interquartile range (IQR) due to the skewed distribution of most of the parameters and to facilitate comparison. Differences between groups were assessed using the Mann-Whitney U test and chi-squared test where appropriate. Receiver operating characteristic (ROC) curve analysis was performed in order to assess the cut-off value of GDF-15 for PICULOS (i.e., the value with the maximum sum of sensitivity and specificity). The area under the curve (AUC) was also derived. The prognostic value of GDF-15 for predicting PICULOS was assessed by performing a univariate logistic analysis. A probability value of <0.05 was considered significant. Statistical analysis was performed in GraphPad Prism 7 (GraphPad Software, San Diego, CA, USA) and SPSS 25 (SPSS, Chicago, IL, USA). Cytokine Screening. Of the 248 cardiac surgery patients for whom samples were stored in the biobank, 129 had to be excluded due to surgery procedures different from CABG or AVR or limited sample availability. In total, 119 patients could be included for cytokine screening (Figure 1). Amongst those screened, 42 patients spent less than 48 hours on the ICU, representing the non-PICULOS group, and 77 spent more than 48 hours on the ICU (PICULOS group). After randomly selecting 4 preoperative samples from the PICULOS group and another 4 control samples from the non-PICULOS group, we performed the cytokine screening with the Human XL Cytokine Array Kit. We identified GDF-15 as a novel cytokine with higher preoperative expression in PICULOS patients after undergoing cardiac surgery. As depicted in Figure 2, in the PICULOS group mean GDF-15 expression was more than twice as high as in non-PICULOS patients. Other cytokines also showed a higher expression in the PICULOS group, especially Chitinase-3-like-1, IGFBP-2, IL-18 BPa, and TIM-3, yet clearly less distinctively than GDF-15. Interestingly, Serpin-E1 and Vitamin D BP exhibited decreased expression. The other 98 cytokines were expressed at similar levels or were not detectable.
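The ROC cut-off criterion described in the statistical analysis (maximum sum of sensitivity and specificity) is equivalent to maximizing the Youden index, and the univariate logistic regression yields an odds ratio per pg/ml as the exponential of the coefficient. The sketch below illustrates both steps with invented GDF-15 values; the study itself used GraphPad Prism and SPSS, so this Python translation is only an assumption-laden illustration, not the authors' analysis.

```python
# Minimal sketch of the described analysis: ROC cut-off maximizing
# sensitivity + specificity (Youden index) and univariate logistic regression
# of PICULOS on GDF-15. All values are invented placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.linear_model import LogisticRegression

gdf15 = np.array([520, 640, 700, 715, 730, 810, 850, 900,
                  950, 1000, 1041, 1100, 1250, 1400, 1600, 1800], dtype=float)
piculos = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(piculos, gdf15)
youden = tpr - fpr                       # sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden)]
print("AUC:", roc_auc_score(piculos, gdf15))
print("Optimal cut-off (pg/ml):", cutoff)

# Univariate logistic regression; large C approximates an unpenalized fit.
logit = LogisticRegression(C=1e6, max_iter=10000).fit(gdf15.reshape(-1, 1), piculos)
print("Odds ratio per pg/ml:", np.exp(logit.coef_[0][0]))
```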
An example of both a PICULOS and a non-PICULOS cytokine array with subsequent analysis is shown in Figure 3. GDF-15 Measurements 3.2.1. Patient Characteristics. For further quantification of the preoperative GDF-15 levels, GDF-15 serum levels from 12 patients with PICULOS and 12 non-PICULOS patients were analyzed. The median age was 67 for the non-PICULOS group and 79 for the PICULOS group, which was a statistically significant difference (p = 0.032). Additionally, EuroSCORE II was raised significantly in the PICULOS group, with 3.85 percent versus 1.34 percent for the non-PICULOS cohort (p = 0.006). All other preoperative baseline characteristics showed no differences between the groups and are shown in Table 1. The postoperative (during ICU stay) characteristics of the patients showed many significant differences, which are described in Table 2. Patients in the PICULOS group had significantly higher risk stratification scores in SAPS II, APACHE II, and SOFA. Also, the duration of mechanical ventilation was longer (8 vs. 200 hours, p = 0.001), as was the duration of vasopressor use (12 vs. 200 hours, p = 0.001). Severe sepsis was seen more frequently in the PICULOS group, as were the need for dialysis and the occurrence of delirium. Interestingly, the duration of cardiopulmonary bypass did not affect the duration of ICU stay in the groups compared. The most commonly performed surgery was coronary artery bypass grafting. Also, 9 patients underwent a combined operation, whereby both a coronary bypass and an aortic valve replacement were performed. The median time of cardiopulmonary bypass was 106 minutes for non-PICULOS patients versus 125 minutes for PICULOS patients. Preoperatively raised GDF-15 levels were not associated with prolonged cardiopulmonary bypass duration (Figure 5). Patients with preoperatively raised GDF-15 levels spent a longer time undergoing mechanical ventilation. Regarding further clinical outcomes, patients with raised levels of GDF-15 required longer vasopressor therapy and were subject to severe sepsis more frequently, as depicted in Figure 6. Rates of renal replacement therapy in the context of acute kidney failure were not increased with raised GDF-15 levels. Finally, rates of delirium were significantly associated with raised GDF-15 levels (median 718 versus 1491 pg/ml, p = 0.0006). 3.3. GDF-15 Prediction. As described in the methods, we performed a logistic regression analysis of GDF-15 for prediction of prolonged ICU stay and other outcomes. Univariate analysis showed GDF-15 levels (odds ratio 1.01, 95% confidence interval 1-1.02, p = 0.029) to be predictive of a prolonged ICU stay. Additionally, age, EuroSCORE II, SAPS II, and SOFA scores were also prognostic for a prolonged ICU stay. However, when a multivariate analysis was performed, no further predictive value was found (Table 3). Discussion Our study aimed to investigate potential novel cytokines predictive of prolonged ICU stay following cardiac surgery. We could demonstrate that cytokines are expressed differently in patients who spend longer than 48 hours on the ICU when compared to patients whose stay is shorter than 48 hours. Especially GDF-15 showed a significantly raised expression preoperatively in PICULOS versus non-PICULOS patients in both quantitative and qualitative analyses. Furthermore, severe sepsis rates, vasopressor support, and the duration of mechanical ventilation (MV) were significantly increased in the PICULOS group.
Moreover, raised levels of GDF-15 were predictive of a prolonged ICU stay in univariate logistic regression. The increasing availability of cardiac surgery is counterbalanced by increasing preexisting illness and frailty of patients. Moreover, ICU resources are scarce in most hospitals. To resolve this dilemma, better predictive models are required for sufficient risk stratification, especially in cardiac surgery patients. However, traditional preoperative risk stratification models such as the EuroSCORE II do not include any biomarkers. Other structural weaknesses are concerns about interobserver variability due to encoding mismatches or the definition of risk factors [18] and methodological concerns regarding clinical validation [19]. Biomarkers, as indicators of biological stress such as inflammation, can be used to predict clinical outcomes. They can be used to predict organ dysfunction, frailty, and biological aging. Therefore, the inclusion of one or more biomarkers in risk stratification models could increase their accuracy and ease of use. Generally, risk stratification using biomarkers has been sparingly evaluated in cardiac surgery. Prior research by Brown et al. [20] showed that inclusion of 4 additional biomarkers (cardiac troponin T, NT-ProBNP, and CRP) did not improve the predictive capability of a risk stratification model. Another study could show that brain natriuretic peptide (BNP) can be used to predict postoperative mortality after cardiac surgery [21]. Raised preoperative levels of ST2, Galectin-3, and NT-ProBNP were predictive of in-hospital mortality in a paper by Polineni et al. [8]. Our study included CRP and ST2 during the cytokine screening process. CRP was highly expressed in all our patients, probably due to the high sensitivity of the cytokine array profiler and the multitude of organic causes of heightened expression, both pathological and nonpathological. The other cytokine, ST2, was only marginally raised in the PICULOS group and therefore did not warrant further quantification. In sum, there exists a broad spectrum of biomarkers which have been evaluated regarding different clinical concerns. However, no specific biomarkers have been described in terms of prolonged ICU stay after cardiac surgery. Intensive care units provide high levels of complex and expensive care, especially after cardiac surgery. Many factors are associated with a prolonged postoperative ICU stay. Subsequently, the Acute Physiological and Chronic Health Evaluation (APACHE) scoring system was revised to its latest version, APACHE IV. The APACHE IV uses 129 variables but no single biomarker to predict mortality rates and to estimate length of stay [22]. Additionally, the APACHE IV is designed to evaluate cardiac surgery patients. Other ICU prediction models such as SAPS II [23] and SOFA [24] are used solely to predict ICU mortality and are therefore less useful for predicting length of ICU stay. Both of these models are calculated within 24 to 48 hours after admission to the ICU. An analysis of risk stratification models for prolonged ICU stay used a time frame of 6 to 48 hours as a "normal" ICU stay [25]. This study also showed that the various models of intensive care risk stratification (APACHE, SOFA, and SAPS) were inaccurate, with poor predictive ability due to lacking validation and inadequate benchmarking. Our study demonstrates that raised preoperative GDF-15 levels are associated with prolonged ICU stay following cardiac surgery.
GDF-15 has been analyzed extensively in medical practice as a marker of cardiac dysfunction [26]. In coronary artery disease patients, the GDF-15 serum level was found to be significantly elevated compared to healthy controls [27]. It shows promise as a biomarker following ST-elevation acute myocardial infarction, predicting both short- and long-term outcomes [6]. Another recent study by Kuster et al. demonstrated that GDF-15 is useful in predicting medium-term events in stable heart failure [7]. One previous study by Heringlake et al. could demonstrate that preoperatively raised levels of GDF-15 were an independent predictor of outcome following cardiac bypass surgery [9]. They showed that including preoperatively raised GDF-15 levels of over 1.8 ng/ml in the risk stratification model (EuroSCORE II) improved the predictive value, especially when compared to NT-ProBNP, which did not result in reclassification. Further investigations by Guenancia et al. and Heringlake et al. could show that preoperatively raised GDF-15 levels were associated with acute kidney injury following CABG [28,29]. Specifically, both studies showed prolonged ICU stay as a secondary outcome. These findings are in line with our results and underline the usefulness of GDF-15 in cardiac surgery risk stratification. We could show that, regardless of the duration of cardiopulmonary bypass, GDF-15 levels were raised similarly. This suggests that the actual complexity of the cardiac surgery has no influence on preoperative GDF-15 levels. The association of patients' outcomes and raised levels of GDF-15 is not limited to the cardiac surgery and ICU setting. GDF-15, also known as MIC-1, is a stress-induced cytokine belonging to the superfamily of transforming growth factor-β (TGF-β) [30]. It is weakly expressed under physiological conditions [31]. The normal range of GDF-15 has been reported as 150-1150 pg/ml [32] and 733-999 pg/ml [33]. Raised levels of GDF-15 are also measured in kidney failure [16] and various types of cancer, such as colon cancer [34], prostate cancer [35], or melanoma [36]. Interestingly, we could demonstrate that GDF-15 showed significant differences in terms of length of ICU stay, whereby higher levels of GDF-15 were predictive of more days on the ICU. It was also positively associated with a significantly longer duration of mechanical ventilation. Furthermore, the rates of severe sepsis and vasopressor use were significantly higher in the patients with preoperatively raised GDF-15 levels. To our knowledge, this is the first study to show this association. However, we could not demonstrate that dialysis rates were increased, which is thought-provoking because GDF-15 is a biomarker for the prediction of kidney failure [37]. It was shown that preoperative GDF-15 is a biomarker of both renal dysfunction and muscle wasting in preoperative cardiac surgery patients, which could in turn contribute to prolonged ICU stay [38]. An increased preoperative GDF-15 level might be indicative of an already existing cellular response to advanced inflammation [17]. In mouse models, GDF-15 secreted by the myocardium was found to have protective and antihypertrophic effects [39]. Furthermore, after myocardial infarction, GDF-15 induction permitted infarct healing by limiting polymorphonuclear leucocyte (PMN) recruitment. Mechanistically, the anti-inflammatory effect of GDF-15 was caused by an interference with chemokine signaling [40]. Using univariate analysis, we demonstrated that GDF-15 levels are predictive of prolonged ICU stay.
We also showed that EuroSCORE II, SOFA, and SAPS II scores at ICU admission and age predicted prolonged ICU stay. Raised GDF-15 is associated with increasing age [41,42]. We could confirm this finding in our observations. When we performed multivariate analysis, we could not demonstrate further predictive value, possibly due to the small sample size. Generally, as stated by Wiklund et al., GDF-15 is a marker of all-cause mortality [43]. It is associated with age and many pathophysiological processes, making it a rather unspecific marker of biological age and stress in humans. Limitations Our study has limitations that need to be addressed. The raised levels of GDF-15 in our patients could also be caused by other comorbidities, despite our patient selection. The influence of inflammation, kidney function, cardiovascular disease, and malignancy on the expression of GDF-15 remains unknown. There was also a difference in age between the groups, which is a confounding factor, especially given that GDF-15 rises with age. This influence is demonstrated in the lack of predictive ability following multivariate analysis. Generally, the low predictive ability in univariate logistic regression and the lack of predictive value in multivariate analysis are likely attributable to the limited sample size. A further limitation is the prospective observational study character, yet with a retrospective analysis of data. It was performed in a single center without randomization. Also, we did not explore the long-term outcome of our patients. Finally, the data might not be directly transferable to other patient groups. However, we particularly focused on cardiac surgery, as this group is known to have a particularly pronounced perioperative risk. Conclusion We performed a broad explorative analysis of novel cytokines. This allowed us to exclude cytokines that showed no predictive value, but also to identify cytokines which showed promise as novel biomarkers. We evaluated GDF-15 both qualitatively and quantitatively in regard to prolonged ICU stay and confirmed its predictive value in cardiac surgery patients. Our study is the first to demonstrate an association between preoperatively raised GDF-15 levels and prolonged ICU stay. Future research could include a further, prospective validation of GDF-15 as a predictor of prolonged ICU stay, both in regard to specific groups such as cardiac surgery patients and the general population. Evaluating other clinical predictors and biomarkers such as BNP alongside GDF-15 would be a useful further study. The clinical utility of GDF-15 regarding positive and negative predictive values needs to be established for predefined lengths of stay. Also, further exploration of other raised or decreased cytokines could be performed in terms of risk stratification models. Finally, an evaluation in the form of a randomized, prospective clinical trial to further assess GDF-15 as a predictive biomarker should be undertaken. Data Availability The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
4,942
2021-06-15T00:00:00.000
[ "Medicine", "Biology" ]
Air Traffic Flow Management Delay Prediction Based on Feature Extraction and an Optimization Algorithm: Air Traffic Flow Management (ATFM) delay can quantitatively reflect the congestion caused by the imbalance between capacity and demand in an airspace network. Furthermore, it is an important parameter for the ex-post analysis of airspace congestion and the effectiveness of ATFM strategy implementation. If ATFM delays can be predicted in advance, the predictability and effectiveness of ATFM strategies can be improved. In this paper, a short-term ATFM delay regression prediction method is proposed to address the multi-source, high-dimensional, and complex characteristics of ATFM delay prediction data. The method first constructs an ATFM delay prediction network model and specifies the prediction object.
Introduction
It is difficult to match the continuous growth in air transportation demand with the improvement in airport and airspace networks' support abilities and management levels. ATFM is facing unprecedented challenges; moreover, China's ATFM system is in its initial stage and its implementation effect has not achieved its expectation. As a key evaluation index of the implementation effect of the ATFM strategy, ATFM delays are very important in reducing the number of flight delays and improving the efficiency of an ATFM system. Therefore, this paper focuses on ATFM delay prediction research and carries out ATFM delay prediction from a network perspective, so as to grasp delay trends and bottleneck points in the system more accurately and enable the ATFM department to predict possible delay situations in advance and adjust the ATFM strategy to reduce potential delay losses. An ATFM delay is the time difference between the Target Take-Off Time (TTOT) requested by the aircraft operator and the Calculated Take-Off Time (CTOT) first assigned by the ATFM function [1]. The capacity of the nodes (airports or waypoints) changes dynamically due to various reasons such as weather, other airspace users, etc. When there is a mismatch between node capacity and demand, the ATFM sector performs a traffic management strategy that takes the TTOT as the input and assigns a CTOT to each flight subject to traffic control. ATFM delays reflect flight delays due to airspace cell capacity constraints that result from the adoption of ATFM strategies. The International Civil Aviation Organization (ICAO) in its ATFM manual suggests an average ATFM delay of 1 min for each flight en route [2]. The European Organization for the Safety of Air Navigation (EUROCONTROL) suggests an average ATFM delay of 0.5 min [3]. Therefore, predictions of ATFM delay durations and the number of delayed flights can provide a decision-making reference for the selection and application of ATFM strategies. ATFM delay prediction research mainly includes two major aspects: ATFM delay causes and applications, and ATFM delay prediction models. In the research on ATFM delay causes and applications, Delgado et al.
first divided ATFM delay into ground delay and airborne delay, and they proposed a deceleration strategy so that airborne ATFM delays could replace a small portion of ground ATFM delays [4]. At present, many studies have been carried out at home and abroad in the field of airport surface operation optimization, mainly focusing on airport surface traffic operation modeling, airport surface performance index analysis, and airport surface resource optimization scheduling. In order to validate the scientific validity of existing ATFM regulations, Delgado et al. concluded, by integrating historical data from the past five years, that ATFM delays caused by airspace capacity account for 50-60% of the total number of ATFM delays; they also concluded that airspace capacity constraints are mainly due to air traffic control capacity and staffing issues. In addition, the currently available airspace capacity is lower than the expected traffic demand [5]. Bolic et al. optimized flight plans by shifting the times of the flights causing ATFM delays [6]. Post et al. evaluated the operating conditions leading to an increased probability of an airport ATFM delay through Bayesian networks, and the results showed that the predicted arrival congestion index and the actual arrival congestion index were the indicators with the highest impact on airport ATFM delays [7]. Ramon et al. proposed three indicators to predict the trend of ATFM delays from the perspective of ATFM delay evolution, namely the expectation of the actual ATFM delay, the probability distribution of the ATFM delay, and the trend of ATFM delays [8]. Sergi et al. categorized the causes of ATFM delays into those of airport traffic, airport capacity, network capacity, and operating slots. Meanwhile, they proposed two classification models to realize the prediction of ATFM delay occurrence probability and delay causes [9]. In addition, space weather events are also an important factor affecting ATFM delays. Space weather refers to the process of the sun interfering with the space environment, the geomagnetic field, and the Earth's ionosphere and thermosphere. Williams [10] and James [11] et al. have carried out a number of studies on coronal mass ejections (CMEs), showing that CMEs are the largest rapid ejection phenomenon in the solar atmosphere and the main source of disturbance for space weather. These disturbances may affect the high-frequency radio wave communications used by the aviation industry [12], affect the normal operation of global navigation satellite systems [13,14], and even cause increased radiation that endangers the health of flight crews and passengers [15,16]. When encountering unusual space weather, airlines respond to these threats with measures such as cancelling flight plans, lowering flight altitudes, and changing flight routes, thereby incurring additional fuel consumption [15,16]. In addition, when space weather affects the normal operation of satellite navigation, aircraft must use ground navigation instead, which leads to higher standards for aircraft separation and lower airspace capacity, resulting in increased flight delays, increased costs, and other problems [12][13][14]. To deal with the effects of unusual space weather, Robyn et al.
examine the moderate and severe thresholds adopted to identify events where space weather is likely to affect high-frequency radio communication and evaluate the frequency and duration of such events [12]. Xue et al. simulated a satellite navigation failure scenario and evaluated the potential economic impact of upcoming space weather on flight operations from the ATFM perspective [13]. Xue et al. created a hypothetical scenario by simulating the predicted flight data of Hong Kong International Airport during a geomagnetic storm to explore the impact of GNSS positioning error on ATFM [14]. Xue et al. proposed a multi-objective optimization model to assign flight altitude and speed [15]. Hands developed a new model to predict the effects of the airborne radiation environment and provide real-time information about atmospheric radiation [16]. In the research on ATFM delay prediction models, traditional machine learning can no longer meet the complex and large-volume task of high-precision ATFM delay regression prediction, and scholars prefer to use deep learning and its variant algorithms. The concept of deep learning originates from research on artificial neural networks. Multi-layer perceptrons with multiple hidden layers are a kind of deep learning structure. Aiming at the problems of low computational efficiency and numerous parameters of deep learning algorithms, Jingyi Qu et al. have successively proposed an airport delay prediction model based on regional residuals and an LSTM network [17], a flight delay prediction model based on the spatiotemporal sequence of Conv-LSTM [18], and a flight delay prediction model based on MobileNetV2 [19]. In addition, Jingyi Qu et al. proposed a flight delay prediction model based on NR-DenseNet, which simultaneously realizes delay class classification prediction and regression prediction by establishing a shared layer of multi-task learning feature extraction and a loss weighting method [20]. Yu et al. applied a deep belief network to mine the internal and deep patterns of flight delay and proposed the DBN-SVR flight delay prediction model; the results showed that air traffic regional control centers are one of the main influencing factors [21]. Chen et al. developed a deep residual neural network (ResNet) for nonlinear functional regression, replacing convolutional and pooling layers with a fully connected layer to ensure that the deep residuals can achieve high-precision prediction of complex problems in nonlinear regression [22]. Qu et al. proposed two flight delay prediction models based on meteorological data, namely the DCNN model and the SE-DenseNet model. In the DCNN model, both a linear channel and a convolutional channel are designed to enhance the patency of the deep network. In the SE-DenseNet model, an SE module is added after the convolution layer of each DenseNet block to realize feature recalibration in the feature extraction process [23]. Chen et al. extended the traditional idea of the FC-LSTM network to the Conv-LSTM network and used the Conv-LSTM network to extract spatial and temporal features to achieve short-term prediction of delay in the network structure [24]. Micha et al. combined a hybrid density network and the random forest algorithm to realize probabilistic prediction of flight delay, and integrated these probabilistic prediction results into the flight gate allocation problem, improving the robustness of gate allocation [25]. Hu et al. proposed a traffic flow prediction model based on a multi-attention-mechanism spatiotemporal graph convolution network to realize dynamic adjustment of spatiotemporal features [26]. Ma et al. proposed a traffic flow prediction method based on a multi-head self-attention spatiotemporal graph convolutional network [27]. Aiming at the problem that the DenseNet model loses the basic information obtained from independent input features, Jiang et al.
proposed an improved DenseNet regression model, in which the convolution layer and pooling layer are replaced by fully connected layers and the original shortcut connections are maintained to reuse features [28]. Sergi et al. proposed an RNN-CNN cascade architecture to realize capacity prediction of en-route traffic [29]. Jiang Yu et al. regularized the airport network graph structure by means of spectral convolution, used GCN and GLU to capture the spatiotemporal correlation in the network to form spatiotemporal convolution blocks, and proposed a flight delay prediction model based on a spatiotemporal graph convolution neural network [30]. Wu Chen et al. obtained the dynamic characteristics of the airport ground support process by using a Petri Net model, and integrated CNN, LSTM, and ATT algorithms to propose a CNN-LSTM-ATT flight delay prediction model [31]. Deep learning is widely used in the field of transportation and performs excellently in delay prediction, but it suffers from problems such as complex and numerous parameters and high dependence on raw data. At the same time, according to the classification of ATFM delay causes by EUROCONTROL [32], ATFM delay data are characterized by multiple data sources, multiple variables, and unbalanced data, which causes some difficulties for the prediction work. A summary of ATFM delay application and prediction methods is shown in Tables 1 and 2. The ATFM system in China did not start operation until May 2021. Compared to developed aviation regions such as Europe and the United States, there is still a certain research gap in the study of ATFM delays in China, particularly in the areas of prediction and post-analysis, where further research is needed. In the face of increasingly saturated airspace resources, in-depth research on ATFM delay indicators is crucial to reduce delays caused by current capacity shortages and to provide references and preparations for effectively managing available capacity. Additionally, existing ATFM delay prediction algorithms primarily rely on traditional machine learning and deep learning. Traditional machine learning methods have low prediction accuracy, while deep learning methods perform well in delay prediction but suffer from complex and numerous parameters and high dependency on raw data. Moreover, ATFM delay prediction datasets exhibit characteristics such as multiple data sources, multiple variables, and imbalanced data, which pose certain difficulties for prediction work, so algorithm optimization or the use of joint algorithms is necessary to achieve accurate and high-precision ATFM delay prediction. Therefore, in order to achieve reliable and high-precision ATFM delay prediction results, this paper combines feature extraction algorithms, a deep learning prediction model, and a parameter optimization algorithm, and proposes two ATFM delay prediction models with higher robustness, which can achieve short-term prediction of ATFM delay duration and the number of delayed flights from the tactical stage.
ATFM Delay Prediction Method Design
2.1. ATFM Delay Prediction Network Model
This paper determines whether an ATFM delay occurs on a flight according to the difference between the CTOT and the TTOT of the flight. The calculation method is shown in Equation (1): the ATFM delay is the difference between the CTOT and the TTOT, and the delay coefficient D of the flight is set to D = 0 when the flight does not experience an ATFM delay and D = 1 when the flight experiences an ATFM delay.
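As a concrete illustration, the following Python sketch (pandas assumed, column names hypothetical) derives the per-flight ATFM delay and the delay coefficient D of Equation (1), and aggregates the simple per-window average described next.

```python
import pandas as pd

# Hypothetical flight records: one row per flight with TTOT and CTOT timestamps.
flights = pd.DataFrame({
    "flight_id": ["CES1001", "CSN2002", "CCA3003"],
    "ttot": pd.to_datetime(["2023-05-01 08:00", "2023-05-01 08:10", "2023-05-01 08:20"]),
    "ctot": pd.to_datetime(["2023-05-01 08:25", "2023-05-01 08:10", "2023-05-01 09:00"]),
})

# Equation (1): the ATFM delay is the (non-negative) difference between CTOT and TTOT;
# the delay coefficient D is 1 when an ATFM delay occurs and 0 otherwise.
flights["atfm_delay_min"] = (flights["ctot"] - flights["ttot"]).dt.total_seconds() / 60.0
flights["atfm_delay_min"] = flights["atfm_delay_min"].clip(lower=0)
flights["D"] = (flights["atfm_delay_min"] > 0).astype(int)

# Average ATFM delay over a time window: total ATFM delay of the N flights in the
# window divided by the total flight volume N (the whole toy table is one window here).
n_flights = len(flights)
avg_delay = (flights["D"] * flights["atfm_delay_min"]).sum() / n_flights
print(flights)
print(f"Average ATFM delay: {avg_delay:.1f} min over {n_flights} flights")
```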
The mean value calculation method of ATFM delay is as follows; according to it, the ATFM delay of departures, the ATFM delay of arrivals, the ATFM delay of an airport, and other dimensions can be calculated:
Average ATFM delay = Total ATFM delay / Total flight volume
The average ATFM delay per unit time is obtained in the same way over the flights in that unit of time, where N indicates the total number of flights per unit of time and D_i indicates the delay coefficient of the i-th flight. Flights affected by congestion nodes generate ATFM delays due to flow control. The location where ATFM delays occur can be the departure airport, the destination airport, a waypoint on the route, etc. In this paper, we do not take the ATFM delay of an individual flight or airport as the prediction object, but realize the prediction of ATFM delay duration and delayed flight volume from a systematic point of view. According to the basic concept of a network graph and the ATFM delay generation process, the dynamic ATFM delay prediction network graph is constructed by integrating time information, with airports and waypoints as nodes and routes as edges. The ATFM delay prediction network graph G can be expressed as G = (V, E, T), where V denotes the set of nodes, E is the set of edges, and T is the set of time, an ordered time sequence which represents the time points in the dynamic network graph. (V1, V2) denotes a directed edge from node V1 to node V2. In order to simplify the network graph, two key waypoints are selected as nodes in the network graph for each route. According to the running direction of the route, the departure airport node, the two key waypoints, and the destination airport node are connected in turn, and the connecting line constitutes a complete directed edge. A brief schematic of the ATFM delay prediction network is shown in Figure 1, in which ATFM delays are predicted for the AC-edge, BD-edge, CA-edge, and DB-edge.
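As a minimal sketch of how the dynamic prediction network G = (V, E, T) could be represented, the following assumes the networkx library; the nodes A-D, their roles, and the delay attribute only loosely mirror the schematic in Figure 1 and are illustrative assumptions.

```python
import networkx as nx

# Directed graph: airports and key waypoints as nodes, routes as directed edges.
G = nx.DiGraph(name="ATFM delay prediction network")

# Hypothetical node roles, for illustration only.
G.add_nodes_from([("A", {"kind": "airport"}), ("B", {"kind": "airport"}),
                  ("C", {"kind": "waypoint"}), ("D", {"kind": "waypoint"})])

# Each route contributes a directed edge; the edge itself is the prediction object,
# with a time-indexed ATFM delay series attached (the ordered time sequence T).
for u, v in [("A", "C"), ("B", "D"), ("C", "A"), ("D", "B")]:
    G.add_edge(u, v, atfm_delay_by_hour={})  # filled in per time step

# Record one observation for one time step on the A->C edge.
G.edges["A", "C"]["atfm_delay_by_hour"]["2023-05-01 08:00"] = 12.5  # minutes

print(G)
print(G.edges(data=True))
```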
ATFM Delay Prediction Process
There are two major steps in ATFM delay prediction research, as shown in Figure 2.
Step 1: Constructing ATFM Delay Prediction Networks and Datasets. The first step is to build the ATFM delay prediction network and dataset, including the following three steps. (1) Data collection and preprocessing: weather forecast data, flow control release data, flight schedule data, and route data are collected and matched, and the corresponding ATFM delays are calculated. Meanwhile, the variables with many outliers and missing values are eliminated to form the original temporal ATFM delay prediction dataset. (2) Establish the ATFM delay prediction network model: select the key elements in the network and construct the network model. According to the original prediction dataset, define the scope of the network and construct the ATFM delay prediction network diagram. (3) Construct the ATFM delay prediction index system: in the case of limited data acquisition and unclear delay causes, mine the factors affecting ATFM delay from the perspectives of the departure airport, destination airport, airspace network, etc., to form a high-quality and diversified ATFM delay prediction dataset. This includes the mining of common flow control information, key node identification and flow statistics methods, and the dynamic weighted PageRank value calculation for nodes.
Step 2 is the construction of the ATFM delay prediction model and example validation, including the following two steps. (1) Constructing the ATFM delay prediction model: the feature extraction module, prediction module, and parameter optimization module are jointly used to construct different combinations of ATFM delay prediction network models, including CNN-LSTM-ATT, TCN-LSTM-ATT, and CNN-LSTM-ATT based on SSA optimization. (2) Instance validation: four typical busy airports and their main route points in East China are selected as nodes of the ATFM delay prediction network for instance validation. The combinations of different models are tested for effectiveness, and the importance of the prediction features and the prediction results are analyzed in depth.
ATFM Delay Prediction Index System
EUROCONTROL classifies ATFM delay causes into route-related and airport-related categories, covering route disturbance events, airport disturbance events, waypoint capacity, airport capacity, airport weather, and control staffing. Among them, route and airport disturbance events are the main reasons affecting ATFM delay. The occurrence of disturbing events is usually random and inevitable. Therefore, when constructing the factors affecting ATFM delay, some innovative indicators should be put forward according to the situation of prediction network construction and data collection.
Common Flow Control Information Mining
In China, the reasons for flow control are categorized into six main groups: public safety, flight schedules, airports, ATC, traffic, and other airspace users. There is a close relationship between flow control information and ATFM delays, and flow control information can convey flow control measures and adjustment aspects, which provide the flow control context for flights that experience ATFM delays. By analyzing the occurrence patterns and trends of historical flow control, some of the more constant flow control information can be mined and used as ATFM delay predictors to improve ATFM delay predictability.
Taking the flow control information received by Shanghai Approach from 1 January 2023 to 20 June 2023 as an example, there are 9841 flow control messages in total. The statistics of the top ten historical flow control messages received are shown in Table 3. Among them, the flow control named Message-MIT-OVTAN was published 1256 times, its publication frequency accounted for 12.76% of the total number of releases, and the average duration of its flow control measures was 637 min. The top ten flow control measures all last more than 280 min, showing a pattern in flow control reasons and time distribution. Therefore, flow control content with frequent publication and long duration is selected as the key categorical index of ATFM delay prediction. The frequency statistics of controlled waypoints are shown in Figure 3, in which the OVTAN waypoint was controlled 4009 times, much more than other waypoints, accounting for 7.05% of the total controlled waypoint frequency. In addition, there are 115 waypoints with more than 100 instances (749 waypoints were controlled in total), and the frequency of these high-frequency controlled waypoints accounted for 85% of the total controlled waypoint frequency. Therefore, the waypoints controlled more than 100 times are regarded as common controlled waypoints and are used as ATFM delay predictors. Routes containing common controlled waypoints have a higher probability of generating ATFM delay.
Key Node Identification and Flow Counting Method
There may be one or more routes between city pairs. In order to simplify the experiment, this paper selects the most frequently used route between each city pair. Multiple waypoints exist on a route, and the higher the flow of a waypoint, the higher the possibility of the waypoint becoming a capacity bottleneck and thus generating ATFM delay. Based on historical data statistics, the top two waypoints account for approximately 20% of the total traffic. Therefore, these two waypoints are considered the key waypoints on the route. Usually, the key waypoints are the ones carrying large flow pressure or lying at the intersection of multiple routes. The flow statistics for the key waypoints are shown in Figure 4.
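The frequency screening described above can be reproduced with a few lines of pandas; the message log and its column names below are hypothetical stand-ins for the Shanghai Approach flow control data.

```python
import pandas as pd

# Hypothetical flow control message log: one row per published flow control message.
msgs = pd.DataFrame({
    "message_name": ["Message-MIT-OVTAN", "Message-MIT-OVTAN", "Message-MIT-PIKAS"],
    "controlled_waypoint": ["OVTAN", "OVTAN", "PIKAS"],
    "duration_min": [640, 630, 300],
})

# Rank flow control contents by publication frequency and average duration (cf. Table 3);
# frequent, long-lasting contents become key categorical predictors.
content_stats = (msgs.groupby("message_name")
                     .agg(times_published=("message_name", "size"),
                          mean_duration_min=("duration_min", "mean"))
                     .sort_values("times_published", ascending=False))

# Waypoints controlled more than 100 times are treated as common controlled waypoints.
waypoint_counts = msgs["controlled_waypoint"].value_counts()
common_waypoints = waypoint_counts[waypoint_counts > 100].index.tolist()

print(content_stats)
print("Common controlled waypoints:", common_waypoints)
```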
Dynamic Weighted PageRank Calculation Method
The PageRank algorithm can be defined on any directed network graph and describes the behavior of a random walker visiting each node along the directed graph. Under certain conditions, the probability of visiting each node converges in the limit to a stationary distribution, and the value of this probability is the PageRank value, which can indicate the importance of the node [33]. The higher the importance of a node (airport or key waypoint) in the network, the higher the probability of ATFM delay, so the PageRank value of the nodes in the ATFM delay prediction network can be used as an indicator for ATFM delay prediction. In this paper, to address the irrational uniform distribution assumption of the traditional PageRank algorithm, the dynamic weighted PageRank algorithm is used to calculate the importance of the nodes in the network; the dynamic weighted PageRank value can more accurately reflect the influence of edge weights and time factors on the importance of nodes.
(1) Dynamic weighted PageRank value calculation for airport nodes. The higher the waypoint flow passed by a route departing from an airport, the higher the likelihood that the route will be subject to flow control, and the more important the airport node is in the ATFM delay prediction network. According to the statistical process for key waypoints, two key waypoints are filtered out for the routes passing between two airport nodes. These two key waypoints reflect the extent to which a route passes through busy nodes, so the sum of the two key waypoints' flow can be used as the weight in the weighted PageRank value calculation for airport nodes. The dynamic weighted PageRank value of an airport node in the airport network is calculated from the quantities defined in Table 4, where N is the total number of airport nodes in the airport network.
Table 4. Formula symbol definitions for dynamically weighted PageRank values:
PR_t+1(V_i) — PageRank value of node V_i at moment t + 1.
∂ — damping coefficient, a parameter that controls the probability of randomly visiting a node.
β — attenuation factor that controls the effect of time; β ranges from 0 to 1 and indicates the degree of decline in the importance of a node over time.
PR_t(V_i), PR_t(V_j) — PageRank values of node V_i and node V_j at moment t.
Count(V_j) — the number of outgoing links of node V_j.
W(V_j, V_i) — the weight of the edge between node V_j and node V_i.
µ — weight coefficient ranging from 0 to 1; controls the influence of the input weight on the PageRank value.
Flow at key waypoints — at moment t, the flow at the two key waypoints R_1 and R_2 on a route with node V_j as the departure airport and node V_i as the destination airport.
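The exact update formula used by the authors is not reproduced in the text above, so the sketch below shows only one plausible dynamic weighted PageRank update that combines the Table 4 quantities (damping coefficient ∂, time-decay factor β, weight coefficient µ, edge weights from the summed key-waypoint flows, and out-link counts).

```python
import numpy as np

def dynamic_weighted_pagerank(weights, pr_prev, damping=0.85, beta=0.9, mu=0.5):
    """One plausible dynamic weighted PageRank step (not the paper's exact formula).
    weights[j][i] is the edge weight from node j to node i (summed key-waypoint flow
    on the j->i route); pr_prev is the PageRank vector at moment t."""
    n = weights.shape[0]
    out_counts = (weights > 0).sum(axis=1)   # Count(Vj): number of outgoing links
    out_weight = weights.sum(axis=1)         # total outgoing weight of Vj
    pr_next = np.zeros(n)
    for i in range(n):
        inflow = 0.0
        for j in range(n):
            if weights[j, i] > 0:
                # Mix weighted and unweighted contributions with the coefficient mu.
                share = mu * weights[j, i] / out_weight[j] + (1 - mu) / out_counts[j]
                inflow += pr_prev[j] * share
        # Random-jump term plus time-decayed propagated importance.
        pr_next[i] = (1 - damping) / n + damping * beta * inflow
    return pr_next

# Toy 4-node network; weights stand in for summed key-waypoint flows per route.
W = np.array([[0, 30, 0, 10],
              [20, 0, 15, 0],
              [0, 25, 0, 5],
              [10, 0, 20, 0]], dtype=float)
pr = np.full(4, 0.25)
for _ in range(20):           # iterate until the values are roughly stable
    pr = dynamic_weighted_pagerank(W, pr)
print(pr)
```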
(2) Dynamic PageRank value calculation method for waypoints. Because it is difficult to obtain the relevant data for waypoints, the unweighted dynamic PageRank value calculation method is used. The dynamic PageRank value of a waypoint in the airspace network is calculated analogously, where N is the total number of selected waypoints in the airspace network and the other symbols are defined as shown in Table 4. In summary, the ATFM delay prediction index system is constructed as shown in Table 5; the ATFM delay prediction indexes are categorized into four major categories: departure airport, arrival airport, airspace network, and others.
ATFM Delay Prediction Model
The ATFM delay prediction task belongs to time series regression prediction and involves multiple factors and complex relationships between variables in the prediction data. In this paper, based on a feature extraction module and a heuristic parameter optimization algorithm, we improve the feature extraction ability, long-term dependence modeling ability, and computational efficiency of the prediction model, so as to obtain better performance.
Feature Extraction Module
In this paper, CNN, TCN, and an attention mechanism are used to extract features from the ATFM delay prediction dataset from temporal and spatial perspectives. By extracting the most representative features from the prediction data and mining the hidden information, the operating performance of the model is improved. In ATFM delay prediction, the CNN performs multilayer convolution and pooling operations on the received multidimensional ATFM delay prediction data to extract spatial features with local perceptual ability. These features can capture structures and patterns in the input data, such as the airspace distribution structure in air traffic, flight density, etc. The TCN can effectively capture long-term dependencies and temporal correlations in time series data, as well as model complex nonlinear relationships, to improve the accuracy of the prediction model. The attention mechanism is a technique used to enhance the performance of neural network models by dynamically assigning weights so that the model can pay more attention to the useful information in the input, improving the performance and expressiveness of the model. In an LSTM, the attention mechanism can be applied to the input, hidden state, and output parts. Since the CNN and TCN pre-process the input data of the LSTM, the attention mechanism is applied to the output part of the LSTM. The infrastructure of using the attention mechanism for the output part is shown in Figure 5. First, the attention score is obtained by computing the similarity between the query and the key; then, the attention score is normalized to obtain the attention weights; finally, the attention weights are multiplied by the corresponding values, and all the weighted values are summed to obtain the final weighted representation. The result of the weighted summation can be used directly as the prediction output or passed to subsequent layers for further processing.
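The score-normalize-weighted-sum pipeline just described can be written in a few lines of PyTorch (the framework used later in the paper); the tensor shapes and the use of the last hidden state as the query are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, steps, hidden = 2, 12, 64

# Hypothetical LSTM outputs for every time step (the keys/values),
# with the last hidden state used as the query.
lstm_out = torch.randn(batch, steps, hidden)
query = lstm_out[:, -1, :]                                      # (batch, hidden)

# 1) similarity between the query and each time-step output -> attention scores
scores = torch.bmm(lstm_out, query.unsqueeze(2)).squeeze(2)     # (batch, steps)
# 2) normalize the scores into attention weights
weights = F.softmax(scores, dim=1)                              # (batch, steps)
# 3) weighted sum of the time-step outputs -> aggregated feature vector
context = torch.bmm(weights.unsqueeze(1), lstm_out).squeeze(1)  # (batch, hidden)

print(context.shape)  # torch.Size([2, 64]) -- fed to the final fully connected layer
```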
LSTM Model
Long Short-Term Memory (LSTM) solves the problems of gradient vanishing and gradient explosion in traditional RNNs by introducing a gating mechanism. The structure of the LSTM is shown in Figure 6, where x, h, and C represent the input, hidden state, and memory state, respectively. The LSTM selectively updates, saves, and passes information through the interaction of x and C and the interaction of h and C. The LSTM contains three key gating mechanisms: the forget gate, the input gate, and the output gate. Through gating operations and state updates, the sigmoid and tanh functions help the LSTM model better handle long-term dependencies, memorized information, and the hidden state of the output. Here, the sigmoid function maps the input value to a range between 0 and 1, and the tanh function maps the input value to a range between −1 and 1.
LSTM Model Based on Feature Extraction Optimization
Referring to the CNN-LSTM [17,18] and TCN-LSTM [34] models, and combining them with the attention mechanism, this paper proposes two improved LSTM models, the CNN-LSTM-ATT model and the TCN-LSTM-ATT model. As shown in Figure 7, the steps of ATFM delay prediction are as follows: (1) Input the ATFM delay time series and prediction index data into the feature extraction module. Among them, the CNN mainly extracts the spatial characteristics of the data, and the TCN mainly extracts the temporal characteristics of the data. The input data are convolved and pooled in the feature extraction module to obtain the feature-mapped data, which are then passed to the LSTM layer through the fully connected layer. (2) At each time step, the LSTM receives an input vector from the feature extraction module and gradually updates its internal state and memory, calculating the value of the hidden state or memory cell for the current time step. The value of this hidden state or memory cell is regarded as the result of the LSTM's processing of the feature data and is passed to the attention module. (3) The attention module accepts the output and attention weight vector of the LSTM. By calculating the similarity relationship between each time-step output and the attention weights, the attention module obtains a weighted output vector that measures the importance of each time-step output and produces a weighted aggregated feature vector. (4) The output of the attention module is fed into the fully connected layer, where it is further nonlinearly transformed and mapped by the activation function, and the final output is produced.
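The following sketch wires steps (1)-(4) together in PyTorch as one plausible reading of the CNN-LSTM-ATT pipeline; the layer sizes, kernel size, and dot-product form of the attention are illustrative assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNLSTMATT(nn.Module):
    """Illustrative CNN -> LSTM -> attention -> FC pipeline (sizes are assumptions)."""
    def __init__(self, n_features, conv_channels=32, hidden=64, n_layers=2):
        super().__init__()
        # (1) feature extraction: 1-D convolution + pooling over the time axis
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=2)
        # (2) LSTM over the extracted feature sequence
        self.lstm = nn.LSTM(conv_channels, hidden, num_layers=n_layers, batch_first=True)
        # (4) fully connected output head (regression of the ATFM delay)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                           # x: (batch, steps, n_features)
        z = self.conv(x.transpose(1, 2))            # (batch, channels, steps)
        z = self.pool(F.relu(z)).transpose(1, 2)    # back to (batch, steps', channels)
        out, _ = self.lstm(z)                       # (batch, steps', hidden)
        # (3) attention: last hidden state as query over all time-step outputs
        query = out[:, -1, :]
        weights = F.softmax(torch.bmm(out, query.unsqueeze(2)).squeeze(2), dim=1)
        context = torch.bmm(weights.unsqueeze(1), out).squeeze(1)
        return self.fc(context).squeeze(1)          # predicted ATFM delay (minutes)

model = CNNLSTMATT(n_features=20)
demo = torch.randn(8, 24, 20)                       # 8 samples, 24 time steps, 20 indicators
print(model(demo).shape)                            # torch.Size([8])
```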
ATFM Delay Prediction Model Based on the Sparrow Search Algorithm
The sparrow search algorithm (SSA) is a heuristic optimization algorithm based on the foraging and migratory behavior of bird flocks. SSA finds the optimal solution by simulating the interaction, cooperation, and competition behaviors of sparrows during the foraging process. During the optimization process, each sparrow represents a solution, and its quality is evaluated based on its fitness value. By simulating the searching, following, and competing among individual sparrows, the algorithm gradually adjusts their positions to approximate the optimal solution. The SSA algorithm has the advantages of fast convergence, excellent global search capability, and high adaptivity, and can be applied to a wide range of optimization problems. The CNN-LSTM-ATT model and TCN-LSTM-ATT model have complex structures, and their performance depends largely on the selection of parameters. In recent years, in order to improve performance and prediction accuracy, many scholars have used the SSA algorithm to optimize the parameters of the LSTM model and improved LSTM models [35][36][37]. Therefore, in this paper, the SSA algorithm is used to automatically search for parameter combinations in the ATFM delay prediction model. Each parameter combination is regarded as a sparrow individual, and the model performance is determined according to the location of the sparrow individual in the search space. After several rounds of testing, we determined the important parameters that affect the prediction performance of the CNN-LSTM-ATT model and the TCN-LSTM-ATT model. For the CNN-LSTM-ATT model, the number of hidden layers, the number of neurons, and the learning rate in the LSTM model are the parameters to be optimized. For the TCN-LSTM-ATT model, the number of filters of the convolutional layer in the TCN module, the number of neurons in the hidden layer, and the learning rate are the parameters to be optimized. The parameter definitions are shown in Table 6.
Table 6. Parameter definitions:
The number of hidden layers (n_hidden) — in an LSTM network, the more hidden layers, the more complex the model, the stronger the learning ability, and the easier it is to overfit.
The number of neurons (n_neuron) — n_neuron determines the capacity and expressive power of the model. A higher number of neurons increases the complexity of the model, allowing it to better capture long-term dependencies and complex patterns in the input sequence.
Learning rate — the learning rate controls the network learning speed. If it is set too small, the model converges slowly; if it is set too large, oscillations may occur and the network cannot converge.
The number of filters in the convolutional layer (n_filter) — n_filter determines the expressiveness and learning ability of the model. A larger number of filters can capture more local features and increase the receptive field of the model, which may lead to overfitting.
The optimization process of the SSA algorithm for the ATFM delay prediction model is shown in Figure 8. (1) Determine the parameters to be optimized and set the ranges of the parameters. According to the constraint ranges, randomly generate the positions and speeds of the initial individuals to construct the sparrow population; at the same time, initialize parameters such as the population size, dimension, and initial positions. (2) According to the current position of each sparrow individual, pass the corresponding parameters to the ATFM delay prediction model; then train the ATFM delay prediction model on the training set and evaluate the model performance on the validation set. (3) Calculate the fitness function value based on the performance metrics (accuracy, loss function) to measure the performance of the sparrow individual. (4) Based on the fitness function value, update the speed and position of each sparrow individual so that it moves to a better position; the sparrow individual with the highest fitness is selected as the globally optimal position in the population. (5) Repeat steps 2-4 until a predetermined number of iterations is reached. (6) At the end of the iterations, select the sparrow individual with the best fitness based on the fitness function value; its corresponding ATFM delay prediction model parameter combination is the best parameter combination.
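The sketch below illustrates the search loop of steps (1)-(6) in a heavily simplified form: it uses a generic population-based search in place of the full SSA producer/scrounger/alarm update rules, and train_and_validate is a placeholder for building and training the prediction model with a candidate parameter set.

```python
import random

# Search ranges for the Table 6 parameters (the candidate values are assumptions).
SPACE = {"n_hidden": [1, 2, 3], "n_neuron": [32, 64, 128], "lr": [1e-3, 5e-3, 1e-2]}

def train_and_validate(params):
    """Placeholder: build the prediction model with `params`, train it on the
    training set, and return the validation MAE (lower is better)."""
    return random.random()          # stand-in fitness; replace with real training

def sparrow_like_search(pop_size=10, iterations=20, seed=221):
    random.seed(seed)
    # (1) initialize the population of candidate parameter combinations
    population = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(pop_size)]
    best, best_fit = None, float("inf")
    for _ in range(iterations):                                 # (5) fixed iteration budget
        fitness = [train_and_validate(p) for p in population]   # (2)+(3) evaluate fitness
        for p, f in zip(population, fitness):
            if f < best_fit:
                best, best_fit = dict(p), f
        # (4) move individuals towards the current best while keeping some exploration
        for p in population:
            for k in SPACE:
                if random.random() < 0.5:
                    p[k] = best[k]
                elif random.random() < 0.2:
                    p[k] = random.choice(SPACE[k])
    return best, best_fit                                       # (6) best parameter combination

print(sparrow_like_search())
```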
Experimental Environment
According to the authors' previous study on congestion discrimination and prediction in air traffic networks [38], a region with a high congestion level in Chinese airspace (East China) is selected for the ATFM delay prediction network. In addition, the flow control during the data collection period mainly occurred on domestic routes, so only domestic routes are selected for the example validation. Four airports in East China are selected as the departure airports to construct the ATFM delay prediction network, as shown in Figure 9. The four East China airports are denoted by their ICAO codes: ZSSS (Shanghai Hongqiao International Airport), ZSPD (Shanghai Pudong International Airport), ZSHC (Hangzhou Xiaoshan International Airport), and ZSNJ (Nanjing Lukou International Airport). From 1 May to 31 May 2023, we select the ATFM delay prediction data of the four departure airports in East China, with a total of 43,964 valid data points, 20,654 data points with CTOT moments assigned, and a total of 15,994 ATFM delay data points actually generated. The ATFM delay prediction data from 1 May to 21 May are used as the training set; the data from 22 May to 24 May are used as the validation set; and the data from 25 May to 31 May are used as the test set. ATFM delays are generated due to a variety of complex reasons. In order to improve the accuracy of ATFM delay prediction and make the prediction indicators as close as possible to the real situation, the time windows of the relevant indicators are selected as shown in Table 7. In this paper, from a tactical point of view, we make short-term predictions of ATFM delay duration and delayed flight volume from one day to several hours in the future.
Comparison of Prediction Effect
The experiments are conducted in the PyTorch framework to build and train the models, and after several rounds of testing, the random seed is set to 221. In this paper, we propose the CNN-LSTM-ATT model and TCN-LSTM-ATT model based on SSA optimization (denoted as SSA-LSTM-1 and SSA-LSTM-2, respectively), and use the CNN-LSTM, TCN-LSTM, CNN-LSTM-ATT (denoted as LSTM-1), and TCN-LSTM-ATT (denoted as LSTM-2) models as comparison experiments.
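A small sketch of how the May 2023 data could be split by date into the training, validation, and test sets described above, with the reported random seed of 221 fixed; the sample table and its columns are hypothetical.

```python
import random
import numpy as np
import pandas as pd
import torch

def set_seed(seed=221):
    # Fix the random seeds for reproducibility (the paper reports a random seed of 221).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Hypothetical sample table: one prediction sample per row with its observation date.
samples = pd.DataFrame({
    "date": pd.to_datetime(["2023-05-03", "2023-05-23", "2023-05-28"]),
    "departure_airport": ["ZSSS", "ZSHC", "ZSPD"],
    "atfm_delay_min": [12.0, 0.0, 35.5],
})

set_seed()
train = samples[samples["date"] <= "2023-05-21"]
val = samples[(samples["date"] >= "2023-05-22") & (samples["date"] <= "2023-05-24")]
test = samples[samples["date"] >= "2023-05-25"]
print(len(train), len(val), len(test))
```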
The parameter optimization results of the SSA-LSTM-1 and SSA-LSTM-2 models for the ATFM delay prediction data in East China are shown in Tables 8 and 9, respectively. For the SSA-LSTM-1 model, the Mean Absolute Error (MAE) and R2 results of combination 10 are optimal. Therefore, combination 10 is set as the optimal combination of SSA-LSTM-1, with two hidden layers, 64 neurons, and a learning rate of 0.001. For the SSA-LSTM-2 model, the evaluation parameters of combination 10 are optimal, with an MAE of about 4.4 min and an R2 of about 0.87. Combination 10 is set as the optimal combination of SSA-LSTM-2, with 32 filters, 64 neurons, and a learning rate of 0.01. The ATFM delay prediction data are input into the six prediction models, and the prediction performances of the models are evaluated using the loss function, MAE, and R2. As shown in Figure 10, the SSA-LSTM-1 and SSA-LSTM-2 models outperform the other four models in terms of convergence speed and loss values. The SSA-LSTM-1 and SSA-LSTM-2 models reach the converged state after only 22 iterations, and the SSA-LSTM-1 loss value is lower than that of the SSA-LSTM-2 model. As shown in Table 10, the CNN-LSTM and TCN-LSTM models perform poorly, with low R2. The LSTM-1 and LSTM-2 models have higher R2 and fit the data better, but their MAE values are high. The SSA-LSTM-1 and SSA-LSTM-2 models have the best performance in terms of the MAE and R2 metrics. This indicates that the optimization of LSTM-1 and LSTM-2 by the SSA algorithm can improve the accuracy and reliability of prediction. In summary, SSA-LSTM-1 and SSA-LSTM-2 outperform the other four models in prediction performance, and SSA-LSTM-1 is slightly better than SSA-LSTM-2 in prediction accuracy.
Analysis of Prediction Results
We output the optimal prediction results of SSA-LSTM-1 for ATFM delay prediction and compare them with the actual ATFM delay values, as shown in Figure 11. As a whole, the predicted values of ATFM delay are lower than the actual values. When the actual ATFM delay value is low, the ATFM delay prediction accuracy is high; when the actual ATFM delay value is high, the prediction results show some deviation. In addition, the ATFM prediction results of SSA-LSTM-1 for ZSNJ and ZSPD are better than those for ZSHC and ZSSS. Among the four airports, ATFM delays of more than 60 min accounted for less than 10% of the data, but the MAE for this part of the data is much higher than that for ATFM delays of less than 60 min. Therefore, in order to further compare the prediction effect under different values, we set 60 min as the ATFM delay threshold and divide the data into two groups of prediction data, as shown in Figure 12. The most obvious difference is at ZSHC, where the MAE for ATFM delays over 60 min is 22.6 min, while the MAE for ATFM delays under 60 min is only 3 min. There are fewer high-delay samples in the prediction data, and more complex factors in practice lead to high ATFM delays, which limits the ability of the model to predict high ATFM delay.
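The evaluation described here can be reproduced with standard tooling; the sketch below computes MAE and R2 on toy values and repeats the comparison above and below the 60 min ATFM delay threshold used in Figure 12.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Toy true and predicted ATFM delays (minutes) for one airport's test samples.
y_true = np.array([5.0, 12.0, 0.0, 75.0, 30.0, 90.0, 8.0])
y_pred = np.array([6.0, 10.0, 2.0, 50.0, 28.0, 65.0, 7.0])

print("MAE:", mean_absolute_error(y_true, y_pred))
print("R2 :", r2_score(y_true, y_pred))

# Split the evaluation at the 60 min ATFM delay threshold, as in Figure 12.
high = y_true > 60
print("MAE (> 60 min) :", mean_absolute_error(y_true[high], y_pred[high]))
print("MAE (<= 60 min):", mean_absolute_error(y_true[~high], y_pred[~high]))
```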
From 27 May 2023 to 31 May 2023, the prediction results of SSA-LSTM-1 for ATFM delayed flight volume are output and compared with the MAE. As shown in Figure 13, there is a certain pattern in the time distribution, characterized by low values at both ends and high values in the middle. In addition, when the ATFM delayed flight volume is low, the ATFM delay prediction accuracy is high; when the ATFM delayed flight volume increases, the MAE of the ATFM delay prediction is high.
In SSA-LSTM-1 and SSA-LSTM-2, we calculate the absolute mean of the gradient and normalize it to obtain the importance of the prediction features, selecting the features with importance greater than 0.01 for comparison. As shown in Figure 14, the normalized flow control content contributes the most to SSA-LSTM-1 and SSA-LSTM-2, with feature importances of 0.19 and 0.21, respectively. This is followed by the common controlled waypoints, with feature importances of 0.13 and 0.16, respectively. In addition, the weather type at the departure airport, the estimated flow-to-capacity ratio of the departure airport, and the estimated flow at key waypoints are also important features affecting ATFM delay prediction, each with a contribution rate of more than 5%. In summary, the common flow control information has a greater impact on ATFM delay prediction.
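A minimal sketch of the gradient-based feature importance described above, assuming PyTorch; the stand-in model, batch, and dimensions are hypothetical, and any trained differentiable delay predictor could take its place.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained delay prediction model: it maps a flattened
# (steps x features) input to a single predicted delay.  Any differentiable model works.
n_steps, n_features = 24, 20
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(n_steps * n_features, 64),
                      nn.ReLU(),
                      nn.Linear(64, 1))
model.eval()

x = torch.randn(256, n_steps, n_features, requires_grad=True)  # evaluation batch
model(x).sum().backward()            # gradients of the predictions w.r.t. the inputs

# Absolute mean gradient per input feature, normalized so the importances sum to 1.
importance = x.grad.abs().mean(dim=(0, 1))
importance = importance / importance.sum()

# Keep features whose normalized importance exceeds 0.01, as done for Figure 14.
selected = (importance > 0.01).nonzero(as_tuple=True)[0]
print(importance)
print("Selected feature indices:", selected.tolist())
```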
Discussions and Implications
In order to further improve the predictability of ATFM delays, this paper adds normalized flow control content and normalized controlled waypoint indicators to construct a more comprehensive ATFM delay regression prediction indicator system. Meanwhile, this paper proposes two ATFM delay regression prediction models based on the improved LSTM model, which realize the short-term prediction of ATFM delay duration and delayed flight volume. The occurrence of delays often leads to the waste and loss of resources due to untimely and unreasonable remedial measures and lagging information communication, thus hindering the development of the economy. Therefore, accurately grasping ATFM delays and their development trends during congested hours can give airlines some room to take measures to solve the problem and reduce economic losses such as additional fuel costs, wasted human resources, and loss of passengers caused by delays. At the same time, ATFM can better plan and manage air traffic flow to improve overall air operation efficiency, thus attracting more passengers and increasing the economic contribution of air transportation. In addition, the purpose of demand and capacity management in air traffic is not only to control demand in order to ensure and improve flight quality and passenger satisfaction, but more importantly, to identify the key factors affecting the quality of aviation networks and airports. This paper explores the key factors affecting ATFM delays by calculating the contribution rate of the ATFM delay prediction indicators to the model and provides a scientific basis for improving, at the root, the current situation of delays caused by traffic management.
Conclusions
In order to solve the problems of multi-source, high-dimensional, and unbalanced ATFM delay prediction data, this paper proposes two ATFM delay prediction models based on improved deep learning algorithms to realize the short-term prediction of ATFM delay. The main results are as follows: (1) Construct the ATFM delay prediction network model. Taking the points of imbalance between capacity and demand (airports and waypoints) that flights may pass through on routes as nodes in the ATFM delay prediction network, and routes as edges, the dynamic ATFM delay prediction network model is constructed in terms of days. In order to avoid the inconsistency between ATFM delay generation and occurrence locations, the edges in the ATFM delay prediction network are used as the prediction objects. (2) Construct the ATFM delay prediction index system and propose innovative indicators by mining historical flow control data, combing the common flow control information, and selecting common flow control contents and common controlled waypoints as the key prediction indicators. In addition, this system of predictive metrics includes estimated traffic and dynamically weighted PageRank values for key nodes.
(3) Construct the ATFM delay prediction model. Combining the feature extraction module, the prediction model, and the parameter optimization algorithm, we construct the SSA-LSTM-1 and SSA-LSTM-2 prediction models. The model prediction results show that the MAE of SSA-LSTM-1 and SSA-LSTM-2 for ATFM delay duration prediction is 4.25 min and 4.38 min, respectively. Among them, the prediction MAE of the SSA-LSTM-1 model is reduced by 2.71 min, 3.68 min, 1.28 min, and 1.05 min compared to CNN-LSTM, TCN-LSTM, CNN-LSTM-ATT, and TCN-LSTM-ATT, respectively. To exclude the effect of higher delay values, 60 min was set as the ATFM delay threshold; the predicted MAE of SSA-LSTM-1 for ZSHC with ATFM delays of more than 60 min is 22.6 min, while the predicted MAE for ATFM delays of less than 60 min is only 2.9 min. In addition, through the calculation of the contribution ratio of the prediction metrics, the normalized flow control content and normalized controlled waypoints contribute the most to the prediction results of SSA-LSTM-1 and SSA-LSTM-2, with a significance of more than 0.03. In this paper, we focus on the mining of factors influencing ATFM delay and on ATFM delay regression prediction; the accuracy of the model prediction decreases in the case of more delayed flights and higher ATFM delay values. In order to further improve the predictability of ATFM delays and provide support for deploying ATFM strategies in advance, the next phase of research will consider adding more reliable influencing factors and introducing a data imbalance algorithm to optimize the model. In addition, this paper needs to integrate flight information, flow control data, weather forecast data, etc., during data collection, and a large amount of data is lost due to the data matching problem, leading to a reduction in data samples, which is also a problem to be considered in the next phase.
First, the attention score is obtained by computing the similarity between the query and the keys; the scores are then normalized to obtain the attention weights; finally, the attention weights are multiplied by the corresponding values, and all the weighted values are summed to obtain the final weighted representation. The result of the weighted summation can be used directly as the prediction output or passed to subsequent layers for further processing.

Figure 5. Flow statistics chart of key waypoints.

(1) The ATFM delay time series and the prediction index data are input into the feature extraction module: the CNN mainly extracts the spatial characteristics of the data, and the TCN mainly extracts the temporal characteristics. The input data are convolved and pooled in the feature extraction module to obtain feature-mapped data, which are passed to the LSTM layer through the fully connected layer. (2) At each time step, the LSTM receives an input vector from the feature extraction module and gradually updates its internal state and memory, calculating the value of the hidden state or memory cell for the current time step. This value is regarded as the LSTM's processing result for the feature data and is passed to the attention module. (3) The attention module accepts the LSTM outputs and the attention weight vector. By calculating the similarity between each time step's output and the attention weights, it obtains a weighted output vector that measures the importance of each time step and produces a weighted, aggregated feature vector. (4) The output of the attention module is fed into the fully connected layer, where it is further nonlinearly transformed and mapped by the activation function to produce the final output. (A minimal attention-over-LSTM sketch is given after the figure and table notes below.)

Figure 8. Process of the ATFM delay prediction model based on SSA optimization (iteration).
Figure 9. ATFM delay prediction network diagram of four airports in East China. From 1 May to 31 May 2023, we select the ATFM delay prediction data of four departure airports in East China, with a total of 43,964 valid data points, 20,654 of which carry a CTOT.
Figure 10. Comparison diagram of the loss function curves.
Figure 12. Comparison of ATFM delay prediction results under different values.
From 27 May 2023 to 31 May 2023, the prediction results of SSA-LSTM-1 for ATFM delayed flight volume are output and compared using MAE, as shown in Figure 13.
Figure 13. Comparison of ATFM delay prediction flight volume and MAE.
Figure 14. Weight comparison of the ATFM delay prediction indices.
Table 1. Summary of previous research on ATFM delay applications.
Table 2. Summary of previous research on ATFM delay prediction methods.
Table 3. Historical statistics of flow-control measures accepted by Shanghai Approach.
Table 5. ATFM delay prediction index system.
Table 7. Time window of the ATFM delay prediction indices.
Table 10. Comparison of evaluation parameters of the ATFM delay prediction models.
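The query/key scoring, softmax weighting, and weighted-sum steps described above can be sketched as a generic dot-product attention layer over LSTM outputs; the layer sizes, the learned query vector, and the regression head below are illustrative assumptions, not the authors' exact SSA-LSTM implementation.

```python
# Minimal sketch (assumptions): dot-product attention over LSTM outputs, mirroring
# the scoring, softmax normalization, and weighted-sum steps described in the text.
import torch
import torch.nn as nn

class LSTMWithAttention(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.query = nn.Parameter(torch.randn(hidden))   # learned query vector
        self.head = nn.Linear(hidden, 1)                  # delay-duration regressor

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)              # out: (batch, time, hidden) -- keys/values
        scores = out @ self.query          # (batch, time): similarity with each step
        weights = torch.softmax(scores, dim=1)               # attention weights
        context = (weights.unsqueeze(-1) * out).sum(dim=1)   # weighted sum over time
        return self.head(context).squeeze(-1)                # predicted delay (min)

model = LSTMWithAttention(n_features=8)
dummy = torch.randn(4, 24, 8)              # 4 samples, 24 time steps, 8 indicators
print(model(dummy).shape)                   # torch.Size([4])
```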
13,165.6
2024-02-19T00:00:00.000
[ "Engineering", "Computer Science" ]
An evaluation of the antibacterial properties and shear bond strength of copper nanoparticles as a nanofiller in orthodontic adhesive 42 © Australian Society of Orthodontists Inc. 2015 Objectives: To evaluate the antibacterial properties and effects of an orthodontic adhesive containing copper nanoparticles (NPs) on the material’s shear bond strength. Methods: Antimicrobial activity was analysed by a disk diffusion test against S. aureus, E. coli and S. mutans. The NPs were added to the orthodontic adhesive at 0.0100 wt%, 0.0075 wt%, and 0.0050 wt%. Sixty extracted bicuspids were divided into two groups and the enamel of all teeth was conditioned with phosphoric acid. A coat of moisture insensitive primer (MIP) was applied prior to the bonding of brackets with composite resin. Group I served as a control and the bonding procedure was performed according to the manufacturer’s instructions. Group II comprised the test teeth, into which 0.0100 wt% copper NPs were included in the MIP. Samples were tested and statistically analysed (p ≤ 0.05). The adhesive remnant index (ARI) was also assessed microscopically. Results: The adhesive with copper NPs showed a bactericidal effect against the bacteria under study. A significantly higher bond strength was obtained with the orthodontic adhesive that included 0.0100 wt% of copper NPs (15.23 ± 6.8 MPa) in comparison with the control group (9.59 ± 4.3 MPa). The ARI scores indicated that the groups were significantly different and strengthened by the incorporation of NPs (p = 0.004). Conclusion: The results of the present study suggested that an orthodontic adhesive, which included copper NPs, significantly increased material shear bond strength without adverse side effects on colour and appearance. The adhesive interface was strengthened by homogeneously dispersed copper NPs added as a nanofiller. (Aust Orthod J 2015; 31: 42–48) Introduction Despite the great scientific advances in adhesive materials used in orthodontics, further improvements are needed in order to prevent the undesirable formation of white spot lesions. 1 The decalcification of enamel is common during fixed orthodontic treatment and is associated with the accumulation of dental plaque retained around appliances and the bonding composite. Previous studies have shown that there is a significant increase in caries-causing bacteria when fixed orthodontic appliances are placed. 2 Usually, acid production by bacteria causes demineralisation of the enamel surface, which may lead to dental caries. 3 A desirable property of contemporary orthodontic adhesives is an antibacterial effect. However, past evidence has demonstrated that the addition of antibacterial components such as chlorhexidine, varnish and gel significantly decreased shear bond strength of the bonding material. 4,5 Currently, a major application of nanotechnology is an antibacterial effect produced by metal nanoparticles (NPs) of either gold, 6 silver, 7 zinc 8 or copper. 9 Copper is significantly more affordable than silver or gold and so it is economically attractive. The antibacterial properties of copper NPs have been widely studied. [9][10][11][12][13][14][15][16][17] In earlier trials, 18 copper NPs were prepared by a simple chemical method and their antibacterial activity was tested against Staphylococcus aureus, Escherichia coli and Streptococcus mutans, with promising results that indicated a potential use in dental materials science. 
Therefore, the purpose of the present study was to evaluate the antibacterial properties and effects on material shear bond strength of an orthodontic adhesive that contained copper NPs. Materials and methods A conventional light cured orthodontic adhesive (Transbond MIP, 3M Unitek, CA, USA) was selected for use in this study due to its hydrophilic properties. Copper NPs, suspended in isopropyl alcohol, were added to the adhesive resin with a micropipette in concentrations of 0.0100 wt%, 0.0075 wt%, and 0.0050 wt%. The copper NPs were synthesised according to a previously published report. 18 Antibacterial test The adhesive's antibacterial activity was determined by a disk diffusion technique which con formed to the recommended standards of the National Committee for Clinical Laboratory Standards. 19 Mueller-Hinton agar (MHA) plates were prepared and inoculated with 200 μl of bacterial culture. The culture was adjusted with sterile saline to achieve a turbidity equivalent to a 0.5 McFarland standard. Disks made of filter paper were impregnated with either 20 μl of chlorhexidine or bonding adhesive containing one of three different concentrations of copper NPs. The disks were firmly placed on the agar plates. Antibacterial testing was performed against three culture strains: S. aureus, E. coli and S. mutans. The antibacterial activity of the adhesive was determined in two batches for each strain using (a) unpolymerised adhesive and (b) polymerised adhesive that had been cured for 20 seconds with an LED (Ortholux, 3M Unitek, CA, USA). Two positive controls comprising chlorhexidine at 2% (Dentsply) and specific drugs -Cefotaxime (30 μg) against S. mutans and S. aureus and trimethoprim-sulfamethoxazole (1.25/3.75 μg) against E. coli -were used. Adhesive without chlorhexidine or NPs was tested against the same bacteria and served as a control. The inhibition of the antibacterial halos generated on the agar plates was measured by using reflected light over the agar plate. The measured distances were rounded to the nearest millimeter with the use of the ImageJ 1.47e software program (National Institutes of Health, MD, USA). The program was calibrated using a known distance and each determination was repeated three times. Shear bond strength (SBS) Teeth Sixty freshly extracted, healthy (without caries and restoration-free) human premolars were cleaned with a rotary brush and stored in a 0.2% solution of distilled water and thymol at 4°C until required. Samples were cleaned with fluoride-free pumice paste using rubber prophylactic cups, and washed with water and airdried. Each premolar was individually embedded in an acrylic mould with its labial surface parallel to the mould base. This ensured that the labial surface would be parallel to the applied force during the shear bond test. The teeth were randomly divided in two equal groups (N = 30). Brackets Stainless steel bicuspid brackets (0.018 inch, Alexander Discipline, Ormco Corp., CA, USA) were used. The average surface area of the bracket base was determined to be 14.21 mm 2 , which was obtained by averaging 10 randomly-measured bracket bases. Bonding procedure Group I (control): The bonding surface was etched with 37% phosphoric acid gel for 15 seconds, rinsed with water for 30 seconds, and dried with oil-and moisture-free air until the enamel had a faintly frosty appearance. A thin coat of Transbond Moisture Insensitive Primer (MIP, 3M Unitek) was applied to the etched surface. 
The orthodontic brackets were bonded with Transbond Plus CC Adhesive (3M Unitek) and light cured (Ortholux, 3M Unitek) for 12 seconds. Group II (experimental): The bonding procedure was performed following the procedures applied to group I; however, the MIP (3M Unitek) was combined with 0.0100 wt% copper NPs. The copper NPs were stored in solution, which required and justified the use of a moisture insensitive primer. Since the experimental adhesive incorporating 0.0100 wt% copper NPs was the concentration previously determined to show antibacterial effects, it was used to test the SBS. Storage A 0.017 × 0.025 inch stainless steel wire was ligated into each bracket slot to reduce deformation of the bracket during the debonding process. The teeth were stored in distilled water at 37°C for 24 hours. SBS test A universal testing machine (Autograph AGS-X, Shimadsu Corp., Tokyo, Japan) with a crosshead speed of 0.5 mm/min delivered an occluso-gingival shear load to the bracket using a chisel-edge plunger. The maximum load was recorded in megapascals (MPa). Descriptive statistics and the Student t-test were applied (SPSS 19, IBM Corp., IL, USA) to analyse the data (p < 0.05). Adhesive remnant index (ARI) After shear debonding, the enamel surfaces were microscopically examined under X10 magnification to determine the amount of residual adhesive. The quantity of adhesive was scored for each tooth using the adhesive remnant index (ARI) 20 scale, which ranges from 0 to 3. Zero indicates no adhesive remaining on the tooth; 1 indicates less than half of the enamel bonding site is covered with adhesive; 2 indicates more than half of the enamel bonding site is covered with adhesive; and 3 indicates the enamel site is covered entirely with adhesive. The chi-square test was applied to evaluate the ARI. Distribution of nanoparticles in the adhesive An ultrastructural TEM and qualitative analysis was performed to determine the distribution and size of the copper NPs dispersed within the polymerised orthodontic adhesive. Antibacterial test The antibacterial effects observed on the agar plates are shown in Figure 1 and Table I. The experimental adhesive containing 0.0100 wt% copper NPs was the only concentration that showed antibacterial properties and those results were comparable with the effects of chlorhexidine. The adhesives containing 0.0075 wt%, and 0.0050 wt% copper NPs displayed no antibacterial activity. SBS The SBS values (expressed in MPa) and descriptive statistics are shown in Table II. The mean value of shear bond strength of the experimental group was significantly higher (15.2 ± 6.8 MPa) than the control group 9.5 ± 4.3 MPa (Student t-test p = 0.00001). The addition of copper NPs to the moisture insensitive primer significantly increased the SBS. In addition, the copper NPs offered antibacterial effects around the orthodontic bracket, as well as in the interface between the enamel and adhesive. ARI The remnant scores indicating the amount of adhesive remaining after the SBS test are shown in Table III. There was a significant difference in the debonding pattern between the groups. ARI scores in the control group were mainly distributed in the 0 and 1 range. The remnant scores in group II were mainly distributed in the 1 and 2 range. Distribution of nanoparticles in adhesive The distribution of copper NPs within the polymer was homogeneous as no aggregation of nanoparticles was observed. The distribution of nanoparticles in the adhesive is shown in Figure 2. 
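The statistical comparisons reported above (a Student t-test on shear bond strength and a chi-square test on the ARI scores) can be reproduced in outline with standard tools. The arrays below are illustrative placeholders, not the study's raw measurements; the MPa conversion simply divides the debond force by the mean bracket base area of 14.21 mm², since 1 N/mm² equals 1 MPa.

```python
# Illustrative sketch (synthetic data, not the study's measurements): shear bond
# strength compared with a Student t-test and ARI scores with a chi-square test.
import numpy as np
from scipy import stats

bracket_area_mm2 = 14.21                       # mean bracket base area from the paper

# Hypothetical debond forces in newtons; SBS (MPa) = force (N) / area (mm^2).
force_control_N = np.array([120.0, 145.0, 160.0, 110.0, 138.0])
force_test_N = np.array([210.0, 195.0, 240.0, 180.0, 225.0])
sbs_control = force_control_N / bracket_area_mm2
sbs_test = force_test_N / bracket_area_mm2

t_stat, p_val = stats.ttest_ind(sbs_test, sbs_control)
print(f"control {sbs_control.mean():.1f} MPa, test {sbs_test.mean():.1f} MPa, p = {p_val:.4f}")

# Hypothetical ARI counts per score (0-3) for the two groups; chi-square on the table.
ari_counts = np.array([[14, 12, 3, 1],        # control group
                       [4, 13, 11, 2]])       # copper-NP group
chi2, p_ari, dof, _ = stats.chi2_contingency(ari_counts)
print(f"ARI chi-square p = {p_ari:.4f}")
```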
The size of the NPs was less than 20 nm and their shape was almost spherical, which produced a greater surface area to volume ratio and rendered the NPs more reactive. The formation of white spot lesions (WSL) is a common side effect related to orthodontic fixed appliances. 21 Due to its rough surface, excess composite around a bracket base is a common site of plaque accumulation. [22][23][24] Oral pH is an additional critical factor that promotes the development of enamel demineralisation. 22,23 In order to prevent WSL, an adhesive with antibacterial properties would be desirable. In the present study, antibacterial testing showed that adhesive containing nanoparticles was effective against bacteria in a similar manner to the bactericidal effects of chlorhexidine. In comparison, the antibacterial activity of unpolymerised adhesive was higher than polymerised adhesive, possibly due to the diffusion capability of the adhesive on the agar plates. Zones of inhibition (mm) Recently, composite resins with nanofillers have been developed and introduced in order to reduce shrinkage during polymerisation. 24 Nanoparticles can provide higher dimensional stability and decrease surface roughness, which is an important factor related to bacterial adhesion. 24,25 It was therefore considered that an orthodontic adhesive that incorporated copper NPs would deliver a reduction in WSL by controlling bacterial activity. Camphorquinone is the initiator responsible for resin polymerisation. 25 As polymerisation occurred, the incorporated NPs appeared to have little influence on the process. The time required to fully cure the adhesive with NPs was the same required for the control adhesive. The adhesive containing the NPs was light cured together with the unfilled composite resin for 12 seconds, as determined by the timer of the Ortholux (3M Unitek) curing device. However, experiments with higher concentrations of nanoparticles (>3%) have indicated that the colour (dark reddish brown) of the copper NPs affects the activation of the visible light photo-initiator. While the adhesive was reinforced by homogeneouslyincorporated copper NPs as a nanofiller, a previous study reported that varying the percentage weight of the nanofiller favorably influenced resin debonding characteristics. 24 The earlier results also indicated that increasing the level of the nanofillers beyond a certain adhesive weight fraction reduced the interface strength. 24 In addition, nanoparticles improved the coefficient of thermal expansion of the resin, and provided more dimensional stability. 25 In the present research, the SBS was increased significantly by the addition of 0.0100 wt% copper NPs into the orthodontic adhesive. The values obtained were much higher than the requirements reported for clinical practice. 26 According to the ARI scores in the experimental group, the adhesive remnants were higher than those for the control group, which indicated that the adhesion between enamel and resin likely increased with the addition of copper NPs. The tendency for NPs to coalesce into macro-size aggregates has been shown to affect the material properties. 27 The addition of a nanofiller to the adhesive matrix may confer improved properties to the composites but an even dispersion of the nanoparticles is required. TEM images showed an even dispersion of copper NPs; however, further studies are necessary to determine the toxicity and biocompatibility of the NPs for intra-oral application. 
Conclusions Under the conditions of the present study, the following conclusions may be drawn: • The distribution of copper NPs in the adhesive resin was homogeneous and without aggregation. • The orthodontic adhesive containing 0.0100 wt% copper NPs as nanofiller expressed antibacterial activity. • The SBS of orthodontic brackets significantly increased following the use of orthodontic adhesive containing 0.0100 wt% copper NPs. • The ARI was significantly higher in the experimental group, which indicated that the bond strength between enamel and adhesive was higher. • The colour appearance of the tooth was not affected by the addition of 0.0100 wt% copper NPs.
3,119.2
2015-01-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Technical Solutions to Mitigate Reliability Challenges due to Technology Scaling of Charge Storage NVM Charge storage nonvolatile memory (NVM) is one of the main driving forces in the evolution of IT handheld devices. Technology scaling of charge storage NVM has always been the strategy to achieve higher density NVMwith lower cost per bit in order to meet the persistent consumer demand for larger storage space. However, conventional technology scaling of charge storage NVM has run into many critical reliability challenges related to fundamental device characteristics.Therefore, further technology scaling has to be supplemented with novel approaches in order to surmount these reliability issues to achieve desired reliability performance. This paper is focused on reviewing critical research findings on major reliability challenges and technical solutions to mitigate technology scaling challenges of charge storage NVM. Most of these technical solutions are still in research phase while a few of them are more mature and ready for production phase. Three of the mature technical solutions will be reviewed in detail, that is, tunnel oxide top/bottom nitridation, nanocrystal, and phase change memory (PCM). Key advantages and reported reliability challenges of these approaches are thoroughly reviewed in this paper. This paper will serve as a good reference to understand the future trend of innovative technical solutions to overcome the reliability challenges of charge storage NVM due to technology scaling. Introduction Charge storage nonvolatile memory (NVM), that is, standard floating gate (FG) and nitride-based charge trap flash (CTF) memory, has always been at the heart of the evolution of IT mobile devices, for example, tablet, cell phone, digital camera, and so forth.The future outlook of charge storage NVM is getting brighter as the world's technological percapita capacity to store information has roughly doubled every year and projected to create up to 2.5 quintillion bytes of data everyday as of 2012 [1,2].This indicates the insatiable demand for bigger storage space, and lower cost per bit continues to rise for flash memory in the future.The persistent effort to achieve bigger memory space at lower cost was driven by Moore's law of cost reduction through technological scaling [3][4][5][6].Conventional technology scaling that typically scales down the physical dimensions of charge storage NVM came into fruition through the advancement in lithography techniques as the main driving force [3]. As shown in Figure 1, technology trend of charge storage NVM predicted by International Technology Roadmap for Semiconductors (ITRS) 2011 revealed that the floor space of charge storage NVM continues to shrink, and by 2015, flash memory is predicted to be scaled to 16 nm [2].Beyond 30 nm, this continuous aggressive scaling of charge storage NVM is fast approaching NVM device's fundamental limit or its practical limit in considering the balance between economic gain and investment required to resolve issues that arise from scaling [5].Technological scaling of charge storage NVM has unveiled many critical reliability issues related to device characteristics, for example, charge loss (CL), charge gain (CG), and random telegraph noise (RTN) exhibited through threshold voltage ( ) shift and broadening of memory cell, neighboring bit interference (i.e., disturb phenomenon), cell-to-cell coupling interferences, and severe short channel effects. 
Increase in cell-to-cell interference and decrease in gate coupling ratio have been highlighted as the two main technology barriers to develop memory technology of sub-40 nm and less [4,5,[7][8][9][10][11][12][13][14].Furthermore, technological scaling of physical dimension for memory cell alone will not be able to completely overcome these reliability issues.Kinam et al. have proposed and emphasized that further technology scaling should be complemented with novel mitigation approaches to extend the dominance of charge storage NVM in semiconductor market [4][5][6][7][8][9][10][11][12][13][14].Many researchers have dedicated their research work on these novel approaches to extend the longevity of charge storage NVM devices beyond 30 nm.These novel approaches include (1) novel flash cell structures [4,[15][16][17][18][19], for example, Hemi-Cylindrical FET (HCFET) and FinFET; (2) new lithography process technologies, for example, improvement in patterning techniques to realize 20 nm structures [20]; (3) novel materials in charge storage layer, for example, phase change memory (PCM) [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37], magnetic random access memory (MRAM) [6], and nanocrystal ; (4) tunnel barrier engineering, for example, VARIable Oxide Thickness (VARIOT) [60][61][62][63], and implementation of high-k dielectric [64,65], for example, HfO 2 ; (5) enhancement of flash memory system by integrating complex compensation schemes through implementing embedded flash controllers [7]; (6) improvement made on error correction code (ECC) algorithm [7]; and (7) innovative way to stack flash cell, for example, high density 3D stack NAND flash and cross point memories.As transistor based charge storage NVM approaching fundamental limits of NVM characteristics soon, these technical solutions or combination of them could be the key in future development of charge storage NVM.Thus, thorough studies and understanding of these technical solutions are required.This paper is focused on providing comprehensive review on research findings on reliability challenges and technical solutions to mitigate device characteristics issues that stem from technology scaling of charge storage NVM.In Section 2, reliability challenges resulted from technology scaling of charge storage NVM are reviewed.In Section 3, an overview of viable technical solutions is reviewed.These technical solutions consist of tunnel barrier engineering, novel flash cell structures, and emerging NVM technologies that quickly evolve from research phase to production, for example, PCM.Among the technical mitigation methods, three of the mature technical solutions are discussed in detail, that is, tunnel oxide nitridation, nanocrystal memory, and PCM.Based on comprehensive work done by research groups on these three mitigation methods, key advantages and critical reliability challenges are reviewed.Section 4 reviews tunnel oxide nitridation in detail and assesses the intricate changes of implementing top/bottom nitridation to endurance/retention performance of charge storage NVM.In Sections 5 and 6, intricate research findings of emerging NVM technologies, that is, nanocrystal and phase change memory (PCM), are reviewed, respectively.These novel NVM technologies may eclipse standard FG flash memory as the main driving force to sustain the growth of semiconductor market due to its superior scalability.Section 7 wraps up our review.Our thorough review in this paper can be served as good reference to understand recent 
research findings on the reliability challenges of charge storage NVM that stems from technology scaling and technical solutions (in research phase or in production) to mitigate device characteristic issues of charge storage NVM. Reliability Challenges of Technology Scaling for Charge Storage NVM To achieve larger memory space with lower cost per bit, relentless cost saving effort was aggressively pursued through reduction in physical dimension of charge storage NVM as shown in Figure 1.Recent publications have reported that further scaling beyond 30 nm has resulted in critical reliability challenges.These challenges on device characteristics of charge storage NVM include cell level instability issues, array level instability issues due to cell-to-cell interference, RTN, and so forth.These mechanisms yielded read failures through broadening and shifting of distribution of memory cells that impact the memory window (MW).Table 1 summarizes crucial reliability challenges faced by charge storage NVM and critical findings based on comprehensive work done by many researchers. Overview of Viable Technical Solutions Throughout the technology scaling trend of charge storage NVM, there are two main obstacles to breakthrough to reach the next technology nodes, that is, limitation on lithography process and device characteristics (as reviewed in As shown in Table 2, the viable technical mitigation solutions can be categorized to tunnel barrier engineering (TBE), novel flash cell structure, and emerging NVM technologies.To further enhance the endurance and retention performance of tunnel dielectric of charge storage NVM, TBE was implemented by modulating the electrical properties of tunnel dielectric through several major techniques, that is, (1) nitridation at top/bottom interfaces of tunnel oxide layer [67][68][69][70][71][72][73][74][75][76][77][78][79][80][81][82]; (2) implementation of novel VARIOT concept [60][61][62][63]; and (3) replacement of conventional oxide layer with high-k dielectric material [64,65].Tunnel oxide nitridation is performed by incorporating optimized nitrogen concentration at top/bottom tunnel oxide layer of charge storage layer through thermal nitridation and chemical/physical nitridation process techniques to enhance reliability performance of charge storage NVM devices [67].Implementation of novel VARIOT concept involves two-layer dielectric stack with a combination of low-k/high-k or three-layer dielectric stack with a combination of low-k/high-k/low-k to regulate tunnel barrier height and achieve enhancement in endurance/retention characteristics of charge storage NVM devices [60][61][62][63].Combinational approaches of tunnel oxide nitridation and bandgap engineering have exhibited excellent endurance performance for NAND flash memory [63]. In order to enhance the reliability performance and scalability of flash memory, high-k materials are employed to substitute oxide layer of interpoly dielectric stack of FG flash memory or tunnel oxide layer of nitride based CTF and FG flash memory [64,65].Novel flash cell structures were proposed by researchers to address the severe short channel effects and scalability of standard FG flash memory.In order to surmount critical reliability challenges in device issues that stem from technology scaling, several exploratory and interesting flash cell structures such as FinFET [16][17][18][19]110] and HCFET [15] are studied.Kwak et al. 
have reported that HCFET exhibited excellent enhancement in subthreshold swing and off-current when compared to a planar-type NVM cell structure [15]. This shows that HCFET provides superior short-channel characteristics over the planar-type NVM cell structure [15]. On the other hand, another type of cell structure, namely the FinFET, is also actively under study for its superior reliability performance [16-19]. Figure 4 shows the TEM cross section of sub-40 nm Bandgap-Engineered (BE) SONOS NVM devices with (a) near-planar and (b) FinFET structure [19]. Implementation of the FinFET structure in charge storage NVM devices has shown excellent short-channel control and scalability compared to the planar-type cell structure [16-19]. Compared with the near-planar structure, the FinFET structure shows superior resistance to the body effect as well as the drain-induced barrier lowering (DIBL) effect as a result of its gate-control capability [17,19]. FinFET-based NVM exhibited a superior memory window (MW) based on post-P/E-cycling distribution data and good feasibility for multilevel cell (MLC) implementation, which requires stringent control of the distribution width [17,19]. As shown in Figure 5, Lue et al. have reported that a novel buried-channel FinFET BE-SONOS NVM exhibited excellent P/E endurance characteristics with no significant MW closure or roll-up [18]. However, implementation of the complex cell geometry of the FinFET structure and program/erase optimization face several hurdles to overcome, which require greater study before FinFET-based NVM technology could roll out from the research lab to be ready for production [17-19].

Table 1. Overview of major reliability challenges due to technology scaling of charge storage NVM (challenge and references):
- V_T distribution broadening and shifting due to cell-level V_T instability mechanisms, such as charge loss, charge gain, and RTN; these mechanisms are exacerbated by technology scaling [83-96].
- Neighboring-bit interference (disturb phenomenon) that inadvertently alters the V_T of a neighboring cell while another memory cell is erased/programmed/read [97-99].
- FG interference to adjacent memory cells of standard FG flash memory [100].
- Decrease in the tolerable loss of electrons in the storage layer due to shrinkage of the cell dimensions, as shown in Figure 2 [9].
- Program interference caused by cell-to-cell interference from the adjacent word line [101].
- Adjacent bit-line cell interference due to RTN in 32 nm NAND flash [102].
- Limitation on tunnel-oxide thickness (> 8 nm) for FG flash memory to prevent severe defect-assisted charge leakage (Flash-SILC) [103-107].
- Limitation on the gate coupling ratio (GCR > 0.6) for the control gate to properly regulate the channel in FG flash memory [106].
- Edge word-line disturb exhibited by FG NAND memories [108].
- Variability of the V_T distribution of nanoscale NAND memories [109].
- Cell-to-cell coupling interference ratio found to be inversely proportional to the design rule of 2D memory structures (FG and charge trap flash, CTF); as shown in Figure 3, 2D memory structures hit the design limit of a coupling interference ratio of 5 at approximately 16 nm.
On the other hand, extensive research effort has been put into understanding and developing exploratory NVM technologies such as PCM, MRAM, nanocrystal, and RRAM into mature technologies that are ready for production. Among these, PCM, MRAM, and RRAM have been thoroughly researched to enable the transition from charge-based to non-charge-based NVM technology; resistive RAM (RRAM), for example, relies on the ability to switch between different resistance states when a sufficient voltage is applied across the structure and consists of simple-oxide, complex-oxide, or transition-metal-oxide stacks [6,66,110]. Research on non-charge-based NVM technologies has concentrated heavily on the search for new materials to serve as the charge storage layer. Although each of these technical mitigation methods is elucidated separately, combinations of mitigation methods are also studied carefully for potential improvements in reliability performance [17-19,47]. In Sections 4, 5, and 6, each of the three mature technical solutions is reviewed in detail, that is, top/bottom nitridation of the tunnel oxide, nanocrystal quantum-dot memory, and PCM.

Tunnel Oxide Nitridation

Tunnel oxide nitridation has long been a topic of great interest as one of the mitigation methods to enhance the reliability of the tunnel oxide. This is of great significance for charge storage NVM, in which many reliability issues arise from charge traps generated under the high electric field of the applied FN-tunneling mechanism, as summarized in Table 1. The incorporation of nitrogen into the tunnel oxide of charge storage NVM through the various nitridation schemes (as shown in Figure 6) can enhance the reliability performance of the tunnel oxide [67-80]. Major advantages of tunnel oxide nitridation include (1) increased immunity towards FN stress, which translates to a larger memory window (MW); (2) increased resistance towards instability induced by irradiation with high-energy particles such as gamma rays [81]; and (3) an effective barrier preventing the penetration of boron or other impurities from the polysilicon (FG) into the tunnel oxide layer [70,77]. Nonetheless, based on the recently published literature, tunnel oxide nitridation induces critical instabilities, for example quick electron detrapping (QED) and random telegraph signal (RTS) [77,79].

Based on the published literature, there are two main nitridation methods, namely bottom nitridation [67-69, 71-76, 78-80] and top nitridation [70,77], which are discussed in this section. Bottom nitridation of the tunnel oxide is done by incorporating nitrogen in the oxide located near the channel of the charge storage NVM cell [68,69,71-76,78-80]. This method enhances the endurance characteristics through the selective substitution of Si-N bonds onto the dangling Si-O bonds near the SiO2/Si interface [68]. However, the incorporation of excess Si-N bonds in the bulk oxide increases the probability of defect-related breakdown [68]. As shown in Figure 7(a), the V_T shift of various tunnel oxynitrides was plotted as a function of program/erase (P/E) cycle count [68]. It shows that tunnel oxynitrides exhibit a wider memory window (MW) than conventional dry oxide after extensive P/E cycling [68,76], as shown in Figure 7(b). For this improvement in the endurance characteristics of tunnel oxynitrides, Kim et al.
attributed it to a reduction in electron trapping in high-nitrogen-concentration oxide [68]. Kim et al. also reported that increasing the nitrogen content improves the endurance characteristics and reduces MW closure, but increases the probability of defect-related breakdown, which may cause retention issues [68,77,79,80]. Lee et al. [80] reported that tunnel oxide nitridation yields large quick electron detrapping (QED) and random telegraph noise (RTN) in fresh devices due to the increased defect density introduced by incorporating nitrogen into the tunnel oxide.

On the other hand, top nitridation of the tunnel oxide is typically done by forming a silicon oxynitride (SiON) layer between the floating gate (FG) and the tunnel oxide [70,77] through rapid thermal nitridation with an ammonia (NH3) anneal or decoupled plasma nitridation [70,77]. A similar tradeoff between the endurance and retention performance of nitrided charge storage NVM was reported in a recent study [77]. Figure 8(a) shows the normalized ΔV_T of the various top nitridation (TN) profiles plotted as a function of P/E cycle count [77]. Evidently, a higher nitrogen concentration in the TN profile yields a larger ΔV_T, meaning more charges are trapped. Figure 8(b) shows curves of the normalized ΔV_T of the different TN profiles after P/E cycling and after a 32-hour bake at 85 °C [77]. After the bake, the normalized ΔV_T decreases slightly for the TN-A and TN-B profiles but worsens for the maximum-concentration TN-C profile. Based on this intriguing behavior, Kim et al. suggested that the TN layer may introduce deep energy traps, so that an appropriate nitrogen content (the TN-A and TN-B profiles) causes fewer charges to detrap; a further increase in nitrogen concentration, however, causes more charges to be trapped due to the increased defect density, and the defect generation may overwhelm the deep-energy-trap effect and cause a larger ΔV_T shift [77]. Kim et al. have attributed the improvement in the endurance and retention characteristics of the nitrided oxide layer to the substitution of distorted yet stable Si-O bonds in the tunnel oxide with relatively stronger Si-N bonds, relieving the interface strain [77,79,82].

In summary, the reliability performance of nitrided tunnel oxide depends heavily on the specific nitridation scheme and the distribution of the nitrogen concentration, with consideration of the tradeoff between endurance and retention behavior [68-82]. Optimal bottom nitridation at the SiO2/Si interface enhances endurance performance, while fine-tuned top nitridation at the FG/SiO2 interface can improve the retention performance of charge storage NVM. An optimal nitridation process is needed to balance these tradeoffs and obtain the best reliability performance of nitrided NVM devices. The primary advantage of tunnel oxide nitridation is that the enhancement in the endurance and retention behavior of the tunnel oxide can be achieved by leveraging existing Si materials in a CMOS-compatible fabrication process, which makes the approach cost-effective. The concern is that meticulous and precise control of the nitridation scheme is required to achieve the desired reliability performance of charge storage NVM.

Nanocrystal Memory

Discrete silicon nanocrystal NVM was proposed by Tiwari et al. in 1995 [38,39] as a potential alternative to standard FG flash memory and as a remedy to the conflicting requirements on the tunnel oxide that stem from incessant technology scaling.
To improve program/erase speed and reduce operating voltage, thinner tunnel oxide of FG flash memory is desirable to allow fast and efficient transfer of charges in and out of FG.At the same time, the tunnel oxide isolation between FG and silicon substrate has to be sufficient to meet data retention criterion of 10 years typical for industrial applications.Thus with discrete nanocrystal NVM as alternative, scaling of tunnel dielectric is feasible to achieve lower operating voltage, faster program/erase/read speed, and desirable charge retention time.As comprehensively reported by Chang et al. in [40], the most common techniques to form nanocrystals as quantum storage dots are self-assembly, precipitation, and chemical reaction.Among these three techniques, precipitation and chemical reaction are found to be more robust in controlling the size and density of nanocrystals [40]. As shown in Figure 9, silicon nanocrystal NVM replaces conductive polysilicon floating gate charge storage layer of standard flash memory with discrete and mutually isolated charge storage nodes in silicon nanocrystals distributed in control oxide layer [38][39][40][41].Each nanocrystal or "dot" stores few electrons in the control oxide layer, and collectively, these charges will modulate the channel conduction of each memory cell.Due to the nature of distributed discrete charge storage, nanocrystal NVM exhibits excellent inherent immunity towards defects assisted charge leakage through defects in tunnel oxide that critically limits the scaling of tunnel oxide below 8 nm for standard FG flash memory [103][104][105][106][107]. Thus, tunnel oxide of silicon nanocrystal can be further scaled down below 8 nm with consideration for the tradeoff between operating voltage, speed, and charge retention time. 
The recently published literature shows a popular trend of implementing combinations of technical mitigation methods, as illustrated in Table 2, to achieve better program/erase characteristics and enhanced retention performance in the form of a larger memory window (MW). As reported by Qian et al., nitridation was performed on silicon nanocrystals, as shown in Figure 10 [47]. This approach yielded a larger MW, faster programming speed, and improved retention performance [47]. Typical quantum dots used in nanocrystal NVM are based on silicon, but metal and metal-oxide nanocrystals, that is, germanium, Au, Gd2O3, and other refractory metals, have also been proposed and researched [40,47,56-58]. For better endurance and retention performance compared with conventional silicon nanocrystal NVM, Kim et al. reported that SiGe dots with HfO2 as the tunnel dielectric exhibited a desirable balance of low-voltage operation and good endurance/retention performance compared with SiGe dots using conventional SiO2 as the tunnel dielectric [47]. The authors of [58] reported Au and Gd2O3 nanocrystal-based NVM that exhibited ultrafast program/erase characteristics, disturb-free behavior, and multilevel-cell capability [58]; for typical multilevel cells, disturb-free behavior is critical to clearly distinguish the levels of all bits in the same NVM cell. As shown in Figure 11, Lin and Chien reported that HfO2-based nanocrystal NVM exhibited better P/E characteristics and retention performance than conventional FG-based NVM [56]. However, due to the process complexity of implementing these proposed combinational approaches, the manufacturability of such nanocrystal NVM devices in a production environment is still a huge challenge to overcome.

Uniform charge-injection mechanisms, such as direct tunneling, are used to transport charges into and out of the nanocrystals [38-41]. Band diagrams during charge injection, retention, and removal are shown in Figures 9(b), 9(c), and 9(d), respectively [38,39]. The dynamics of charge transport and retention depend mainly on the quantum confinement effect and the Coulomb blockade effect [38,39,43]. When an electron is injected into and retained in a nanocrystal, the nanocrystal is charged up by e^2/(2C_tt), with C_tt representing the nanocrystal capacitance, which depends on its size, the tunnel oxide thickness, and the control oxide thickness [38,39,43]. The charged-up nanocrystal hence reduces the electric field across the tunnel oxide, which in turn reduces the tunneling current density during the program operation [43]. She and King reported that this Coulomb blockade effect has its pros and cons [43]. Its salient advantage is its effectiveness in impeding electrons from tunneling through at low electric field (low gate voltage), which enhances the immunity of nanocrystal NVM towards flash memory disturb [43]. However, the Coulomb blockade effect negatively impacts programming speed and retention time [43]. To improve programming speed, larger nanocrystals are desirable to achieve a fast, high tunneling current during the program operation. Since the nanocrystals are charged up after the program operation, there is a significant tendency for the electrons in the nanocrystals to tunnel back to the channel. She and King also reported that the quantum confinement energy becomes significant because the nanocrystal dimensions are in the nanometer range [43]; this causes the conduction band of the nanocrystal to shift upwards while the conduction-band offset between the nanocrystal and the surrounding control oxide is reduced [43]. Careful consideration of the Coulomb blockade effect, the quantum confinement effect, and the typical 10-year data retention requirement is needed to determine the size and density of the nanocrystals and the thickness of the tunnel oxide.
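For a rough sense of scale of the single-electron charging energy e^2/(2C_tt) mentioned above, the following sketch evaluates it for an assumed nanocrystal capacitance of about 1 aF and compares it with the room-temperature thermal energy; the capacitance value is an illustrative assumption, not a figure from the cited work.

```python
# Minimal sketch (assumption: C_tt ~ 1 aF): single-electron charging energy
# e^2 / (2*C_tt) associated with the Coulomb blockade effect described above.
from scipy.constants import e, k

C_tt = 1e-18                      # assumed nanocrystal capacitance, 1 aF
E_charge_J = e**2 / (2 * C_tt)    # charging energy in joules
E_charge_eV = E_charge_J / e      # convert to electron-volts
kT_300K_eV = k * 300 / e          # thermal energy at room temperature

print(f"charging energy ~ {E_charge_eV*1000:.0f} meV")    # about 80 meV for 1 aF
print(f"thermal energy at 300 K ~ {kT_300K_eV*1000:.0f} meV")
```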
Furthermore, based on TCAD simulations of nanocrystals, Gasperin et al. reported that the width, number, size, and positions of the nanocrystals can impact the charge localization of nanocrystal memory cells, which in turn impacts the program window in the subthreshold as well as the linear region [54].

As compared to conventional charge storage NVM, for example standard FG flash memory and nitride-based CTF NVM, there are two potential leakage paths, that is, a vertical leakage path (through intrinsic direct tunneling (DT) and extrinsic defect-assisted tunneling, SILC) and lateral tunneling (LT), as shown in Figure 12. Based on comprehensive modeling work by Monzio Compagnoni et al. [53], the retention time was modeled and calculated as a function of the nanocrystal spacing. As shown in Figure 13, direct tunneling (DT) becomes the dominant discharge mechanism at large nanocrystal spacing, while LT dominates at smaller spacings. Monzio Compagnoni et al. reported that a minimum spacing of 3.7 nm is sufficient to fulfill the 10-year data retention requirement [53]. Therefore, a larger spacing between nanocrystals effectively suppresses lateral tunneling and improves the data retention performance [42,49,53]. For nanocrystals with a typical diameter of 6 nm and a density of 3 × 10^11 cm^-2, Monzio Compagnoni et al. reported that a minimum tunnel oxide thickness of 4.2 nm is required to fulfill the 10-year data retention requirement [45,53]. As a summary, Tables 3(a) and 3(b) list the salient advantages and critical challenges of typical nanocrystal NVM as compared to standard FG flash memory, with the corresponding literature references.

Phase Change Memory (PCM)

Phase change memory (PCM), also known as ovonic unified memory, is one of the promising emerging NVMs; it has been developed over the past 10 to 15 years and has made it into production. PCM has emerged as one of the mature mitigation alternatives to conventional charge storage NVM. PCM primarily depends on the ability of a chalcogenide material to switch between amorphous and crystalline phases through heat, controlled by the amplitude and timing of electric pulses in a typical PCM memory array [23]. The most common chalcogenide material used in PCM is Ge2Sb2Te5 (GST). In PCM terminology, if the GST switches to the amorphous state, which yields a high resistance, the PCM cell is in the reset state; if the GST switches to the crystalline state, which yields a low resistance, the PCM cell is in the set state. The difference between the set and reset states of the chalcogenide material lies in the atomic order and electron trap density, which yields several orders of magnitude difference in the low-field resistance [21,22].
Figure 14(a) shows typical schematic cross section of PCM cell [21].PCM consists of top/bottom contact, GST layer with "mushroom" shape active region and resistor that acts as "heater" to heat up the active region of GST layer.Figure 14(b) illustrates the electrical current pulse shapes issued during set/reset/read operations [56].During reset operation, a huge electrical current pulse for a short period of time is issued to melt the active region and convert it from crystalline phase to amorphous phase.On the other hand, during set operation, a moderate electrical current pulse was issued for a sufficient time period to heat up the active region to a distinct temperature between melting and crystalline temperature.This distinct temperature is used to convert the active region of GST material from amorphous phase to crystalline phase.The read operation is done by issuing a small electrical current pulse to measure out the resistance of the PCM cell.The voltage drop across the cell should be lower than the threshold voltage of PCM cell to inhibit destructive read operation that may alter the data content [21,22]. Figure 15 shows the current-voltage (-) measurement of PCM cell in amorphous and crystalline states during read/set/reset operations [21].Threshold voltage ( ) represents the condition in which the conductivity in amorphous phase changes from high resistance state (or off state) to low resistance state (or dynamic on state) [21,25].Below indicated in Figure 15, the resistance of amorphous state is much higher as compared to crystalline state.The amorphous state exhibits electronic threshold voltage switch effect that reduces the resistance of amorphous state to be comparable to crystalline state.This enables set operation to be carried out successfully.Figure 15 also indicates that reset operation of typical PCM cell consumes the most power to melt the active region of GST material.Set operation is the key limit for operating speed of PCM as shown in Figure 14(b). PCM is one of the production-ready emerging NVM technologies with potential capability of multilevel cell operation as shown in the recent literature.The main attractiveness of PCM is its scalability to sub-20 nm, and recent study has shown that no significant intrinsic retention issue was found on 10 nm technology node [37].With direct write technology, PCM does not require any erase operation prior to writing data into the memory cell which is similar to DRAM.Fast read/write at low write/read operating voltage coupled with good data retention and superior endurance performance as compared to standard FG flash memory make PCM very attractive in semiconductor industry.Since PCM is chalcogenide based, studies have reported that PCM is immune to charge based radiation effects which is a genuine reliability concern for charge storage NVM.Table 4 summarizes the key attributes of PCM.Table 5 summarizes recent research findings on reliability issues of PCM.Based on comprehensive work done by Bae et al., physical origins of endurance failures were investigated.There are three types of endurance failures as reported by Bae et al., that is, stuck reset, stuck set, and tail bits with low resistance originating from reset distribution due to composition changes of GST film in active region [29]. 
Another key challenge of PCM is to ensure sufficient thermal isolation of adjacent cells during reset operations to inhibit the thermal disturbance effect [27,28]. An increase in temperature due to thermal disturbance causes a reduction in the V_T drift and the reset resistance [27,28]. Similar to discrete charge storage NVM, current fluctuations in PCM are affected by the RTN effect [24]. Fugazza et al. have reported that the RTN effect in PCM originates from fluctuating traps located within the amorphous GST material [24]. Since PCM relies on the resistance contrast between the amorphous and crystalline phases of the chalcogenide material, the data stability of PCM was reported to depend on the structural relaxation process, which yields a temperature-accelerated time evolution of the electrical properties of the active region of the chalcogenide material. Based on extensive work done by Ielmini et al. and Lavizzari et al., the reliability of PCM is mainly attributed to the metastable nature of the amorphous phase, which can be impacted by the structural relaxation process [25,28]. As a summary, PCM is an excellent mitigation alternative to charge storage NVM, which faces imminent, steep reliability challenges under further technology scaling per Moore's law. Key advantages of PCM are its superior endurance/retention performance, better scalability without significant reliability issues, and immunity towards extrinsic irradiation effects.

Table 4. Key advantages of PCM (attribute and references):
1. Scalable to sub-20 nm; a new study shows no significant intrinsic retention issue for PCM at 10 nm [31,37].
2. Low random-access read latency (~50 ns), fast write performance (~100 ns), good data retention (>10 years), low write and read operating voltages, direct-write technology that requires no erase prior to a write operation, and good endurance performance at 10^9 cycles [21,28-30].
3. As compared to charge storage NVM, chalcogenide-based NVM is immune to charge-based radiation effects [35,36].
4. Multilevel cell operation capability [34].

Table 5. Key reliability challenges of PCM (challenge and references):
1. Reported endurance failures of PCM are the stuck reset of the set state (open circuit due to a void generated at the interface between the GST and the bottom electrode contact), the stuck set of the reset state (small voids spread over the active region that block heat from the bottom electrode contact), and tail bits with low resistance originating from the reset distribution [29].
2. Thermal disturbance effect on V_T and resistance during the reset operation; [21] reported that an increase in temperature due to thermal disturbance decreases the V_T drift and the reset resistance [27,28].
3. RTN effect found in PCM; the dependency of the current fluctuation on the programmed resistance was confirmed through experimental work and a numerical model [24].
4. Structural relaxation (SR) effect inducing a V_T shift as a function of annealing time, indicating that data stability depends on the SR effect in the amorphous phase of PCM [25,26].
Conclusion

In order to quench the insatiable demand for bigger storage space at lower cost per bit, the persistent effort of technology scaling of the memory cell dimension has been driven by Moore's law. However, technology scaling of the memory cell dimension alone is not able to surmount the challenges faced by charge storage NVM, especially the device characteristic issues; further technology scaling is recommended to be complemented with innovative mitigation techniques. In this paper, critical reliability challenges of charge storage NVM, with emphasis on device characteristic issues, have been reviewed, and the overall technical mitigation approaches to overcome the fundamental device characteristic issues of charge storage NVM have been discussed. Key advantages and reliability challenges of tunnel oxide nitridation, nanocrystal-based NVM, and PCM have been carefully reviewed. These three mitigation approaches are topics of great interest among researchers seeking to extend the dominance of flash memory in the semiconductor NVM industry.

Figure 2: Number of tolerable electron losses as a function of technology node [9].
Figure 5: Evolution of the program and erase V_T with increasing program/erase (P/E) cycling for both FinFET and planar NVM devices. The buried-channel FinFET-based NVM device showed superior endurance characteristics as compared to the planar structure up to 500 K P/E cycles [18].
Figure 7: (a) V_T shift of various tunnel oxynitrides as a function of P/E cycle count [68]; (b) memory window (MW) after 10^6 P/E cycles as a function of tunnel oxide thickness [68].
Figure 8: (a) Normalized ΔV_T plotted as a function of P/E cycle for each TN profile. (b) Normalized ΔV_T after bake at each P/E cycle for each top nitridation profile. ΔV_T is normalized by the maximum value of TN-C, and N% is normalized by the maximum concentration of TN-C. TN-0 indicates no TN [77].
Figure 9: (a) Schematic cross section of silicon nanocrystal NVM; (b) band diagram during charge injection through the program operation; (c) band diagram during charge retention; (d) band diagram during charge removal through the erase operation [38,39].
Figure 11: Novel HfO2-based nanocrystal NVM proposed to achieve enhanced P/E characteristics and retention performance as compared to FG-based NVM devices [56].
Figure 12: Schematic diagram of the vertical leakage path (through direct tunneling (DT) and SILC) and lateral tunneling (LT); the lateral spacing between nanocrystals and the tunnel oxide thickness are labelled in the figure [45].
Table 1: Overview of major reliability challenges due to technology scaling of charge storage NVM.
Table 2: Summary of viable technical solutions to mitigate device characteristic issues of charge storage NVM.
Table 3: (a) Key advantages of nanocrystal NVM as compared to standard FG flash memory. (b) Critical challenges of nanocrystal NVM, including the need to form nanocrystals of optimal size and density and to preserve them during subsequent processing steps [40,41,43-45,50,52,54], and the impact of nanocrystal charge localization on the operating window [54].
7,806.4
2013-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Observations on the Bethe ansatz solutions of the spin-1/2 isotropic anti-ferromagnetic Heisenberg chain

Evidence is presented that the solutions of the Bethe ansatz equations for spin-1/2 isotropic Heisenberg chains in fixed total spin and momentum sectors are the roots of single variable polynomials with integer (or integer based) coefficients. Such solutions are used as a starting point for investigation of long chain (critical region) properties. In the total spin S = 0 sector I conjecture explicit formulae for the Bethe string configuration labelling of all left and right tower excitations in the k = 1, SU(2) Wess-Zumino-Witten model.

Introduction

This paper presents empirical observations about the states of (even) length L periodic chains of s = 1/2 spins anti-ferromagnetically coupled, as defined by the Hamiltonian (1). Symmetry dictates that eigenstates of (1) can be labelled by total spin S, S_z and (quasi) momentum K. Each stretched state (S_z = S) is constructed from N = L/2 - S overturned spins from the totally aligned spin configuration. Any S_z < S state can be generated by angular momentum lowering operators but will not be discussed here.

Bethe [1] (for an English translation see [2]) showed (1) is soluble by associating with each overturned spin a (quasi) momentum eigenvalue k_n, -pi < k_n <= pi. These eigenvalues satisfy the Bethe ansatz equations (BAE) (2). The (scaled) sum of the k_n is the total momentum and is clearly unaffected by k_n sign reversal.

While some progress has been made in the numerical solution of the BAE (2) (cf Hao et al [3]), it remains a difficult challenge. Here I present evidence that there exist important relations satisfied by BAE solutions that can be used as easily implemented checks on existing numerical solutions and/or provide alternative methods of solution. The evidence is most apparent when, instead of momenta k or rapidities lambda, one uses x = 2cos(k). Consider the case that K = L/6, L/4, or L/3 (or their negatives) and let D_K = D_K(L, S) be the total number of eigenstates of (1) at the given K, L and S. I find the polynomial formed from the BAE solutions has real, rational coefficients r_i and can be rationalized to form an integer coefficient polynomial I_K(x).

One can consider the process in reverse. For any given I_K(x), existing commercial software such as Maple will efficiently find all roots x_i, and a finite search algorithm can find the D_K combinations of N momenta k_i = +/-arccos(x_i/2) that satisfy the BAE. In principle I_K(x) can be found from a single solution x = x_1, say, obtained to some minimum accuracy, by an integer relation algorithm such as PSLQ [4] implemented in Maple. More practically, one can combine all the solutions of the BAE that are most easily found with a less accuracy-demanding PSLQ to determine I_K(x).

Similar considerations apply for K = 0 and L/2. Here symmetry allows solutions of the BAE to be determined from integer coefficient polynomials I_K(x) of reduced degree whose roots are only the non-trivial x_i. For all other K the BAE solutions are the roots of polynomials whose coefficients are 'integer based'. What this means is that the K in the interval 0 < K < L/2 group into blocks K_d with M_d members, consisting of those K whose greatest common divisor with L is d. The number of members is M_d = phi(L/d)/2, where phi(n) is Euler's function (cf Hardy and Wright [5] section 5.5); a small enumeration of these blocks is sketched below.
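As a concrete illustration of this grouping, the blocks K_d and their sizes M_d can be enumerated directly for a small chain; the following minimal Python sketch (the function names are mine, not from the paper) checks M_d = phi(L/d)/2 for L = 12.

# Collect the K in 0 < K < L/2 into blocks K_d by d = gcd(K, L) and verify
# that each block has M_d = phi(L/d)/2 members (phi is Euler's totient).
from math import gcd
from collections import defaultdict

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

L = 12
blocks = defaultdict(list)
for K in range(1, L // 2):
    blocks[gcd(K, L)].append(K)

for d, members in sorted(blocks.items()):
    assert len(members) == phi(L // d) // 2
    print(f"K_{d} = {members}  (M_d = {len(members)})")

For L = 12 this prints K_1 = [1, 5], K_2 = [2], K_3 = [3], K_4 = [4], the blocks used in the worked example of section 2.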
If M d =1 the situation is that described by (5); otherwise the members of K d have the same D K and the terms cos(2πmK/L), m=0, 1, K, M d −1, are integrally independent. The root polynomial analog of (5) for any member is / from (6). If p i K ( ) is known to sufficient accuracy, the PSLQ algorithm will determine the rationals r 0,i and r m,i . In other words, if all BAE solutions for one member K are known, the polynomials P x K ¢ ( ) for all other M d members follow trivially without reference to the BAE. All of the polynomial generated BAE solutions have been plausibly identified with Bethe string configurations and, by continuity in L, define the Bethe string content at large L. This is important for discussion of the critical behaviour of (1) which Affleck [6] showed is the k=1, SU(2) Wess-Zumino-Witten (WZW) model. Subsequently Affleck et al [7] provided additional analytic and numerical confirmation. An apparent discrepancy in the asymptotic behaviour of the ground state energy has recently been resolved [8], justifying a systematic study of other states in long chains to identify the Bethe string content of the left and right tower excitations in the WZW model in the critical region. In the S=0 sector, the asymptotic L→∞ energy eigenstates of (1) are expected [7], based on WZW and conformal field theory arguments, to have the form / / This brief synopsis of the main results of the paper is expanded in the following sections together with numerous illustrative examples of BAE solutions. Section 2 is a summary of Bethe's solution for N=2 overturned spins but is here recast in a form that leads directly to (5) and (6). Since the most efficient implementation of the PSLQ algorithm requires the number of unknown constants to be available, section 3 is devoted to deriving formulas for D K (L, S). At the symmetry points K=0 and L/2, states are either non-degenerate or 2-fold degenerate coming from the inversion symmetry k n →−k n that leaves K and E unchanged. Explicit formulas for these symmetry distinct state counts are also derived. Section 4 provides example BAE solutions at K=0 and L/2; some of these are directly derivable algebraically from (2) and provide justification for (5) that extends beyond the N=2 overturned spin case. Section 5 reports some general K results for N=3. Here confirmation of (5) and (6) is based entirely on numerical inference but is important because it shows the conjectured structure is not an accidental feature that arises because the BAE have an analytic solution when N=2. Section 6 is devoted to the example L=16, S=0. Results from sections 4, 5 and 6 of the more extensive polynomials and associated state lists are provided as text files L20_nondegen.txt, 3_overturned_spins.txt and L16_singlet.txt respectively in supplementary data. Section 7 describes the basis for the multiplicity generator (8) and the general formulas for the B x . Two overturned spins The BAE (2) for two overturned spins are and the equality between first and last term implies the roots of unity condition with the scaling convention (3) for the total momentum. We can also use the first equation in (11) for a second relation, , t a n , 1 2 a result that applies equally to . 
A useful equivalent to (13) for either lambda, obtained by cross multiplying and rearranging, is (14). For K = 0, the solutions to (14) are given by the roots of unity condition (lambda + i)/(lambda - i) = exp(2 pi i n/(L - 1)), i.e. k_n = 2 n pi/(L - 1), n = -L/2 + 1, -L/2 + 2, ..., L/2 - 1, from which we must exclude k_n = 0 as the solution for the S = L/2 uniform state. The S = L/2 - 2 solutions are the distinct pair combinations satisfying k_1 + k_2 = 0. There are L/2 - 1 such (non-degenerate) pairs and these exhaust the k_n list for pairs. In summary we have the solution lists, adopting the convention of using hatted variables for the non-degenerate states at K = 0 and L/2.

For K = L/2, there is one non-degenerate singular solution identified with lambda^2 + 1 = 0. We write the state formally, with finite quantities such as the energy contribution Delta E_Sing = -2 to (4) understood to be the result of a careful limiting procedure. The solutions arising as roots of unity are k_n = 2 n pi/(L - 2), n = -L/2 + 2, -L/2 + 3, ..., L/2 - 1, from which we exclude k_n = pi as the solution for the S = L/2 - 1 spin-wave. The S = L/2 - 2 solutions are the distinct k_n pairs which sum to pi (modulo 2 pi). One such set is k_1 = 2 n pi/(L - 2), k_2 = (L - 2 - 2n) pi/(L - 2), 0 < n < floor(L/4). The negatives -k_1, -k_2 are also solutions and exhaust the possibilities. Since reversing the signs of all k_n leaves the energy unchanged, as well as the sum k_1 + k_2 = pi (modulo 2 pi), each state is doubly degenerate. In summary, for K = L/2 it is understood that we list only the positive half of the degenerate states.

For 0 < K < L/2 we first express (14) in alternative forms. By dividing through by (lambda^2 + 1)^{L/2} we get the equivalent (18), which is useful for contributing to the discussion by Bethe [1] and Essler et al [9] of a possible complex pair solution k = pi K/L +/- i y_K for K > 1. On substituting either k into (18) we find after some algebra that y_K must satisfy

cos(pi K/L) = sinh((L - 2) y_K / 2) / sinh(L y_K / 2),  K odd,
cos(pi K/L) = cosh((L - 2) y_K / 2) / cosh(L y_K / 2),  K even.   (19)

The left hand side of (19) differs from unity by O(1/L^2) for large L, whereas the right hand side for odd K never exceeds 1 - 2/L. Thus we recover the known result that for fixed odd K > 1 there is always some critical length L_c satisfying cos(pi K/L_c) = 1 - 2/L_c, beyond which the complex solution transforms via y_K -> i y_K to two real solutions. Equation (19) for even K always has a solution, but is interesting in that the associated lambda pair has imaginary parts I(lambda) ~ +/- 2 L^{1/2}/(pi K) for fixed K and L -> infinity that do not approach the ideal 2-string values +/-1 [10].

A second alternative forms the basis for the polynomials (5) and (6). Squaring both sides of (14) yields an equation explicitly dependent on lambda^2 only, which we write in terms of lambda^2 = (2 + x)/(2 - x), x = 2cos(k). After rearranging, multiplying through by the denominator factor (2 - x)^{L-1} and a convenient normalization, we arrive at an equation (20) for k whose roots x_n are the roots of a polynomial C_K(x). In the process of squaring (14) we have lost k_n sign information, but this can be recovered by a finite m, n and sign search in which we demand the correct signs in (20) are those for which k_m + k_n = 2 pi K/L. The A and B in (20) are polynomials in x = 2cos(k) of degree L - 3 and L - 1 respectively. To reduce C_K(x) to the polynomial P_K(x) whose only roots are those for S = L/2 - 2, we must divide out the factor (x - 2cos(2 pi K/L)) for the S = L/2 - 1 spin-wave.
If K is odd we must also divide out two spurious root factors (x - 2cos(pi K/L))(x + 2cos(pi K/L)); the first (k = pi K/L) is easily shown to be a solution of (18), but has no pair partner for a BAE solution because the second (k = pi - pi K/L) leads to left and right hand sides of (18) having opposite sign. In summary we arrive at (22), where the roots x_n of the reduced polynomial of degree L/2 - 2 combine into L/2 - 2 (L/2 - 1) BAE solution pairs for K odd (even). For every BAE solution of (22) one automatically has also a BAE solution for -K, obtained by simply reversing all k_n signs. A summary list (23) gives the number of solutions: the non-degenerate part of D_K(L, S) for K = 0 and L/2; nu_K, which is one-half of the remaining degenerate part; and nu_K = D_K(L, S) for 0 < K < L/2. In (23) the ellipsis indicates a repetition of the alternating sequence L/2 - i, i = 2, 1, 2, ..., to a total of L/2 - 1 terms. The total number of states from (23) is given by (24).

For K = L/6, L/4 or L/3 all cosine terms in (22) are rational, so that (22) simplifies by elementary division to a polynomial with rational coefficients. As an example, for L = 12, S = 4 and K = 2, 3 and 4 we find the rationalized polynomials given in (25)-(27).

For general K in 0 < K < L/2, excluding K = L/6, L/4 and L/3 treated above, elementary algebraic division in (22) will lead to products of cosines that can always be eliminated by use of 2cos(a)cos(b) = cos(a + b) + cos(a - b). The resulting polynomial of degree L/2 - 2 has coefficients that are sums of (possibly many redundant) c_{nK} = cos(2 pi n K/L). By using various trigonometric identities it is possible to reduce the number of c_{nK} in the coefficient of any x^i to a minimum number of integrally independent terms. As a first step in this reduction, inversion and shifts allow replacement of any c_{nK} by c_{mK} with 0 <= m <= floor(L/4), provided we treat even and odd K separately so that the replacement rule (28), with its (-1)^K factors, is the same for all K in either category. Such separation, with distinct rules for different groups but the same rules for every K within a group, dictates that the general grouping is defined by blocks K_d, where d is the greatest common divisor of K and L. The number of members M_d in block K_d is phi(L/d)/2, where phi(n) is Euler's function and the division by 2 arises from our restriction 0 < K < L/2. Any K_d with one element will be one of L/6, L/4 or L/3, which were considered in the preceding paragraph.

Before dealing with the general c_{mK} reduction to an integrally independent set, consider the L = 12, S = 4 example again. The distinct blocks are K_1 = {1, 5}, K_2 = {2}, K_3 = {3} and K_4 = {4}, so that only K_1 remains to be treated. The c_{mK} left after reduction by (28) are 1, c_{1K}, c_{2K} and c_{3K}, but for K = K_1 = 1 or 5, c_{2K_1} = 1/2 and c_{3K_1} = 0, leaving only the integrally independent 1 and c_{1K_1} in which to express the result of the division (22). The explicit result for the rationalized polynomial is given in (29).

The L = 4M case rests on the trivial identity cos(pi n/2) = 0 for n odd (e.g. c_{3K_1} = 0 in the L = 12 example above). The result in (32) for L = 4M + 2 follows from the roots of unity condition together with (28). No identities beyond (28) and (32) are needed if L/2 = p, p prime > 2, or L/2 = 2^l. If L has odd divisors > 1, some of the odd K in the interval 0 < K < L/2 will be excluded in the construction of K_1. We will then need as many new identities as there have been exclusions. One set of identities follows trivially from (32): whenever L is a multiple of some 4M + 2, M > 0, then (33) with f > 1 supplements (32).
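All of the identities above are integer relations among cosines, so they can also be discovered numerically rather than derived; a short sketch using mpmath's pslq (assuming mpmath is available; the returned coefficient vectors may differ by an overall sign) illustrates this at L = 24.

# Two of the cosine identities at L = 24, found as integer relations.
from mpmath import mp, cos, pi, pslq

mp.dps = 50                               # working precision for PSLQ
L = 24
c = [cos(2 * pi * m / L) for m in range(7)]

print(pslq([c[4], 1]))                    # cos(pi/3) = 1/2: relation [2, -1]
print(pslq([c[1], c[3], c[5]]))           # c_1 - c_3 - c_5 = 0: relation [1, -1, -1]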
An example is f=2, M=1 giving c K 2 1 =1/2 used in the L=12 discussion leading to (29). Other identities follow from (33) which we get by first rewriting (33) as On multiplying this by c iK 1 and again using the identity 2cos(a)cos(b)=cos (a+b)+cos(a−b) we obtain An example replacement using (34) is at L=24 (f=4, M=1) where K 1 =1, 5, 7, 11 and with i=1, c c c . = - Together with c K 4 1 =1/2 from (33) and c K 6 1 =0 from (32) we are left with the required four integrally independent 1, c , replaces (32) and there are corresponding replacements for (33) and (34). For any given L and divisor d, at most two c mK d relations are needed to complete the division (22) provided these are used in replacements at each step of the division process so as to always limit the maximum m in c mK d to a fixed number. Furthermore, the effort to derive the required relations from formulas such as (32)-(34) can be avoided by using a PSLQ determination instead. Specifically, for any L and d we know the number of K d elements is M d = j(L/d)/2 and the empirical evidence, based on PSLQ analysis, is that c mK d =cos(2πmK d /L), m=0, 1, K, M d −1, are integrally independent and can be used as a basis in which to express any c , mK d m M d , as a sum with rational coefficients. The PSLQ algorithm, with any K d as numerical input, will provide an analytical expression for c M K d d that suffices for d even and in addition c M ) that is required for d odd. This procedure has been confirmed for all even L to 100. This completes the N=2 overturned spin analysis that forms the basis for (5) and (6). Many examples have shown the structure of (5) and (6), as defined by the blocks K d with M d members, remains unchanged for any NL/2 overturned spins. The N dependence lies entirely in the degree of the integer polynomials which relates directly to the number of states D K (L, S) determined in the next section. State counting To determine the number D(N, L, K) of states of total momentum K for N overturned spins in a length L periodic chain start with the observation that the binomial L N ( ) is the total number of configurations ψ for fixed N and L and these can be separated into exclusive classes ψ d where d is a common divisor of L and N. The distinguishing feature of class ψ d is that for configurations T n ψ d (translations by n=1, 2, K from ψ d ) the first occurrence of T n ψ d =ψ d is at n=L/d. Such configurations are formed from d repetitions of N/d overturned spins on segments of length L/d. The configurations in ψ d can be grouped into D d blocks, each block containing L/d translation related configurations T n−1 ψ, n=1, K, L/d, which provide a basis for forming, by superposition, D d states for each total K which is necessarily restricted to multiples of d. Adding together the state counts (L/d) D d of every class ψ d gives the total L N ; ( ) this is the sum rule The notation used in (36) and the following is that (M, M′) is the greatest common divisor of a pair M, M′ while m|M denotes m is a divisor, including 1 and M, of M and Σ m|M means sum over m subject to the constraint m|M. More generally, the total number of periodic configurations of period L/d is and contributing to this total are the classes ψ (N,L)/d′ for d′|(N, L)/d. The corresponding sum rule is being the special case d=1. The number of equations (37) are the number σ 0 of divisors d of (N, L) and these uniquely determine the σ 0 unknown D d . 
The number of states D(N, L, K) then follows as the sum of D_d over those divisors d of (N, L) that also divide K, and so we confirm (39). An explicit formula for D_d is obtained as follows. Define f(d') as the expression in the sums (37) and replace that equation list by the equivalent (40), where mu is the Moebius function. An equivalent of (40) is obtained by the substitution d -> (N, L)/d again, and when the resulting D_d is substituted into (38) we get the explicit state count (41). The number of states D_K(L, S) at fixed S = S_z is given by the well known subtraction (42).

As an example consider L = 12. We get from D(N, L, K) in (41) that D(1, 12, K) = 1, D(2, 12, K) = 5 + Delta_{2,K}, D(3, 12, K) = 18 + Delta_{3,K}, D(4, 12, K) = 40 + 2 Delta_{2,K} + Delta_{4,K}, D(5, 12, K) = 66, D(6, 12, K) = 75 + 3 Delta_{2,K} + Delta_{3,K} + Delta_{6,K}, while the subtraction (42) gives D_K(12, 0) = 9 + 3 Delta_{2,K} + Delta_{3,K} + Delta_{6,K}. The values D_0(12, 0) = 14 and those for other L using (41) and (42) agree with the sums D(SP01) + D(SP02) given by Fabricius et al [11] in their Table II. On the other hand, D(6, 12, 0) = 80 calculated here differs from their D(S_z = 0, K = 0) = 44. More detailed comparison shows that D(S_z = 0, K = 0) in [11] incorrectly includes only even S contributions. That the state counts (41) are correct has been confirmed by many additional checks, including comparison to a generalization of Bethe's [1] state counting, to which I now turn.

A string configuration for a state of total spin S and S_z = S on a chain of even length L with periodic boundary conditions is specified by the list (p_1, p_2, p_3, ...), where the p_n are the number of n-strings in the configuration. Each n-string is associated with n overturned spins, and this yields the constraint N = sum of n p_n = L/2 - S on the total number of overturned spins N. The Bethe formula (43) for the number of states with this configuration has each binomial factor counting the number of ways p_n 'particles' (i.e. strings) and h_n 'holes' can be arranged in p_n + h_n integer slots. An important observation from (43) is that h_n depends only on the p_m, m > n; in particular h_1 = 2S + 2 sum of (n - 1) p_n is fixed by the n-string content for n >= 2. Since the constraint N = sum of n p_n = L/2 - S also fixes p_1 = N - sum over n > 1 of n p_n, any configuration can equally be specified by just the list (p_2, p_3, ...). Bethe also introduced P = sum of p_n for the total number of 'particles', which yields the alternative expressions h_1 = 2S + 2 sum of (n - 1) p_n = 2S + 2N - 2P = L - 2P, results that will be of use later (cf (54)).

Bethe shows that D(L, S, {p_n}), summed over all {p_n} that are the unrestricted partitions of N, gives the correct total number of states for N overturned spins, but does not explicitly remark on the number of states at fixed total (scaled) momentum K = (L/2 pi) sum of k_i. However, implicit in (43) is the observation that a shift of any 'particle' or 'hole' to an adjacent slot leads to the same change |Delta K| = 1. Consequently it is possible to define a generator Z(L, S, {p_n}; q), a polynomial invariant under the interchange q <-> 1/q, with the coefficient of q^kappa being the number of states at K = kappa relative to a central value K = K_c. This generator has the form of (43) with each binomial replaced by the Gaussian binomial (44), modified by a prefactor q^{-p h/2} for q <-> 1/q invariance.
The justification for this prescription relies first on Polya's [12] observation that the coefficient of q^A in the expansion of the Gaussian binomial is the number of p + h step walks between (0, 0) and (h, p) that enclose area A between the walk, the x-axis and the line x = h. Second, there is a one to one correspondence between Polya walks and configurations of p particles and h holes and, to within an additive constant, K = A. To show this, adopt the reference configuration corresponding to the zero area Polya walk to be that of all particles to the left of all holes, with the holes labelled 1, 2, ..., h in sequence starting with hole 1 as the rightmost hole. A general configuration will have n_i particles to the right of hole i, with 0 <= n_1 <= n_2 <= ... <= n_h <= p. If this configuration is represented as a histogram of n_i versus i inscribed in an h x p rectangle, it will be seen to be one of Polya's walks with A = sum of the n_i. Furthermore, every particle to the right of a hole is the result of an adjacent particle-hole interchange and a unit increase in momentum, implying that the sum of the n_i is K and hence A = K relative to the reference configuration momentum.

It is observed empirically that the central (symmetric) K_c is either 0 or L/2 (mod L), depending on whether P = sum of p_n is even or odd respectively. On incorporating this result we get, as our generalization of the Bethe formula (43), the q-generator (45), with the Gaussian binomial for each n given by (44). The coefficient of q^K in (45) is the contribution of the particle configuration {p_n} to the number of states at momentum K. Shifts of K by multiples of L are understood to bring K into the first Brillouin zone -L/2 < K <= L/2. The relation to the total number of states (42) is (46), where the left hand side sum is understood to be over all partitions of N = L/2 - S.

Consider as an example L = 12, S = 0. Separating the partitions p(6) into even and odd P, and writing {p_n} in a truncated notation as a product of n-string factors, the generator coefficient lists of (47) sum, after mapping to the first Brillouin zone, to (48), which agrees with D_K(12, 0) = 9 + 3 Delta_{2,K} + Delta_{3,K} + Delta_{6,K} noted in the paragraph following (42).
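Polya's area interpretation invoked above is easy to verify directly. The sketch below compares the coefficients of the Gaussian binomial [p+h, p]_q, computed from the standard recurrence, with a brute-force count of the monotone particle-hole histograms.

# Coefficient of q^A in [p+h, p]_q versus counting histograms with area A.
from itertools import combinations_with_replacement

def gauss_binomial(n, k):
    """Coefficient list of [n, k]_q via [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k == 0 or k == n:
        return [1]
    a = gauss_binomial(n - 1, k - 1)
    b = gauss_binomial(n - 1, k)
    out = [0] * (k * (n - k) + 1)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + k] += c            # shift by q^k
    return out

p, h = 3, 4
poly = gauss_binomial(p + h, p)

# brute force: histograms 0 <= n_1 <= ... <= n_h <= p, area A = sum of n_i
areas = [0] * (p * h + 1)
for ns in combinations_with_replacement(range(p + 1), h):
    areas[sum(ns)] += 1

assert poly == areas
print(poly)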
Neither method of calculation distinguishes between degenerate and non-degenerate states at the symmetry points K = 0 and K = L/2. For that I turn to another generalization of Bethe's method. Some of the states at K = K_c arise from terms in which, in every binomial factor in (43), the particles and holes are symmetrically distributed. If the number of overturned spins is odd, one of the associated Bethe wave-vectors will be pi, but except for this isolated case the Bethe wave-vectors k_i will occur in symmetric pairs (k_i, -k_i) and describe the non-degenerate states at K = 0 or K = L/2. To obtain the number of these states, note that the number of holes h_n is always even in each binomial distribution and exactly half of the holes, h_n/2, must occupy, say, the right half of the available slots, floor(p_n/2) + h_n/2. The occupancy of the left half is fixed by the required symmetry, so that the symmetric (non-degenerate) state count is just the new binomial product (49).

Bethe proved the analogous formula D(L, S, P) for the constrained total number of states by induction, after first showing it satisfies the recursion (52). The corresponding recursion here follows by replacing the binomial in the second equality in (52), which is the n = 1 factor in (43), by the n = 1 factor in (49), thus giving (53). To show that (51) satisfies (53), the four cases in which N and P are separately even or odd must be considered. For the even-even case set N = 2R, R = 1, 2, ..., and P = 2Q, Q = 1, 2, ..., R; then (53) reduces to (54), where the two terms in braces arise from the even p_1 = 2q and odd p_1 = 2q + 1 terms in the original p_1 sum in (53). These can be combined into a single binomial and, if we define R - Q = A, Q - q - 1 = k, the right hand side of (54) can be written as a hypergeometric sum, with the equality verified by direct comparison of terms in the sum with terms in the hypergeometric function. The latter is Saalschuetzian (cf Erdelyi et al [13]), which, since R - A = Q, confirms that (54) is correct. A similar analysis for the remaining N and P even/odd cases verifies that (51) satisfies the recursion (53) in general. Furthermore, the special values Z(L, S, P = 1) of (51), which are easily shown to agree with the definition (50), serve as the initial conditions to complete the inductive proof of (51) for P > 2.

Now only a sum over P in (51) remains to obtain the total number of symmetric (non-degenerate) states. In compliance with the discussion on whether the center of symmetry K_c is 0 or L/2, we have (57). Explicit calculation of Bethe states has confirmed (57) in many cases, including all S <= L/2 and even L <= 12. All other states at K = 0 and L/2, necessarily including those translated from outside the first Brillouin zone, are doubly-degenerate states related by the reflection symmetry k_i -> -k_i.

The state counts (57) take a particularly simple form when related directly to the wave-vector lists that occur. These are of the form [*, (k_1, -k_1), (k_2, -k_2), ..., (k_n, -k_n)], where * are special values comprising four cases of N* = 0 to 3 wave-vectors (null; pi; pi/2 + i infinity, pi/2 - i infinity; pi/2 + i infinity, pi/2 - i infinity, pi), with associated K* = 0, L/2, L/2, 0 respectively. The state counts (57) now take the form (58). The results in (58) confirm those for N* = 3 for L = 2 mod 4 and N* = 2 for even L in [14] (their equations (29) and (30)). These authors do not give general results for the remaining case N* = 3 for L = 0 mod 4, but their specific count of 4 for L = 12 with 5 overturned spins is in error, disagreeing with the count of 5 from (58), the explicit (61) arrived at by an independent calculation below, and the results reported in [15].

Non-degenerate states at K = 0 and L/2

The simplest extension of BAE solutions to more than 2 overturned spins is for the states of symmetrically distributed particles and holes discussed in the preceding section. These are the non-degenerate states at K = 0 and L/2, and I begin with a few examples of states contributing to the counts (58). The result of fixing the N* special wave-vectors is a reduced set of BAE for the remaining n independent rapidities lambda_j = cot(k_j/2). These equations are (62) and (63), in which the L/2 roots of the middle factor in (63) are the lambda_1 = lambda_2 solutions of (62) and are to be discarded. Also to be discarded are the roots of the first factor in (63), which are the singular solutions already accounted for in the N* wave-vector list.
The (L/2-1)(L/2-2) roots of the R polynomial in (63) are to be paired using (62) and so exhaust the state counts (58) for n=2. For L=10, S=0, K=5 one has N * =1, n=2 and the R polynomial root equation, expressed in x, is If the roots of (64) are ordered from 1 to 12 by non-decreasing real part, the Bethe solutions are the pairs [1,9], [3,10], [5,11], [2,4], [6,7], [8,12] obtained using (62). In explicit terms and including the N * =1 root k=π (λ=0), the solutions in this sequence are, respectively corresponding to the odd partitions of 5 overturned spins, namely 1 2 3 1 (3 cases) and 1 5 , 1 1 2 2 , 5 1 (one case each) again in agreement with (49). I am unaware of any simple algebraic process that will find the analogs of polynomials (61) or (64) for n>2. On the other hand, the existence of these polynomials has been confirmed in a number of cases either by direct construction from solutions of the BAE or more simply by use of the integer relation algorithm PSLQ [4]. For any given L, N * and n one need only find one BAE solution from which to pick a wave-vector k 1 and determine x 1 =2cos(k 1 ) with a certain minimum accuracy. This x 1 is used to construct the list [1, x 1 , x 1 2 , K, x 1 nD ] where D is the state counts from (58). This list serves as input to the PSLQ algorithm and provided the accuracy is adequate, the output will be the integer coefficient list [a 0 , a 1 , a 2 , K, a nD ] in the polynomial ∑a i x i . Software packages such as Maple can efficiently find polynomial roots and what remains is then just a finite search process for D groups of n roots that satisfy the BAE. The needed accuracy in x 1 for a successful PSLQ return is roughly nD times the number of digits in the coefficient a i of largest magnitude. This can be a severe limitation but one can always reduce the PSLQ complexity by increasing the number of BAE solutions used for input. Instead of the single root power list one constructs the array, and by linear algebra, its triangular reduction * * * * * * * which leaves, in the final row, nD−m+2 non-zero elements that become the new input list into PSLQ. Back substitution of a successful PSLQ coefficient list return into the triangular reduction array yields successive polynomial coefficients. To within the floating-point accuracy used these are either integer or rational and in the latter case the entire (tentative) list can be converted to integer by an appropriate multiplication. It is advantageous to supply the PSLQ algorithm with real coefficients; complex x i need not be discarded and instead the complex power list [1, x i , x i 2 , K, x i nD ] should be input as the two lists which are its real and imaginary parts. The largest L treated by this method has been L=20 (D=126) with PSLQ input reduced to less than 50 elements. The final 631(505) polynomial coefficients for K=0(10) appear in the supplementary data file L20_nondegen.txt as lists Ih0S0L20 I 0 0 (ˆ) and Ih10S0L20 I . and is identified as the P=4 partition 1 1 2 1 3 1 4 1 . Here it is important to recognize that symmetry dictates that the nominal Bethe strings have the same (vanishing) real part which implies 2-fold λ root degeneracy at both ±i and 0. This degeneracy must be lifted and (67) shows it is lifted by a spitting of the roots in the imaginary direction 5 . The splitting in (67) can be emphasized by writing the λ roots as Figure 1. 
Non-degenerate L=20 state energies at K=0 and10, separated into partitions of 10 and grouped by particle number P=∑p n . The left-most column in each P group is the partition 1 P-1 (11-P) 1 ; partitions for the remaining columns are in the dictionary order given in Table 24.2 of AS [18]. The horizontal lines are the ferromagnetic (L/2) and limiting anti-ferromagnetic (L/2-2Lln (2) that even for L=20 are in qualitative agreement with (68). One notes that e Q becomes doubly exponentially small but does not in any way prevent the ±i and 0 root splittings from becoming exponentially small. A more interesting situation arises at L>20 when the (2 1 3 1 4 1 )-string combination with L/2-9 remaining 1-strings is kept as an S=0 excitation. The number of such excitations based on (49) is m m 6 + ( ) for L=20+4 m but I consider for each L only the one state in which the 1-string λ i are in magnitude as small as possible and sandwiched between a symmetric set of 1-string large |λ| holes 6 . The 1-strings interfere significantly with the (2 1 3 1 4 1 )-string combination and lead to an increase in both e 0 and e 1 in (68) until a complex λ i collision occurs and changes the qualitative character of the solution. This is illustrated in figure 2. That the states for L 40 are indeed the continuation of those for L<40 is confirmed by noting that the squares of the splitting between the colliding roots form a smooth sequence with a sign change at L≈39. The configuration at L=48 when viewed in isolation would almost certainly be identified as an 'apparent' 1 15 3 3 partition rather than the Bethe 1 15 2 1 3 1 4 1 . While this is just a more elaborate example of a complex root collision discussed following (19) and already observed by Bethe and others, it does illustrate that quartet configurations are typically unstable intermediate forms that facilitate transitions between states of different character. BAE solutions for L>20 such as those shown in figure 2 have been found by Newton-Raphson (NR) iteration. The L=20 results are invaluable as a template for NR initialization for L=24. For larger L, polynomial extrapolation in L (with allowance for root collision) is usually adequate for the complex root initialization. For real root initialization it is preferable to start with numerical approximations to the density ρ=dn/dλ and extrapolate these in L. One then obtains λ n by the integration n= d Let the left hand side be numerical BAE solutions and the explicit terms with integer ε and 2 s on the right hand side be possible asymptotic WZW solutions. A sample of such paired graphs, including the (2 1 3 1 4 1 )-string combination featured in figure 2, is shown in figure 3. It is apparent that in most cases a length L=1280 is more than adequate to unambiguously establish the Bethe-WZW correspondence. The results of the correspondence from figure 3 and many similar calculations are given in and can be understood to be the rules for all states with at most two (n>1)strings. A more comprehensive set of rules and combinatorial relations will be given in section 7 after Bethe string configurations at general K have been discussed in sections 5 and 6. I close this section with a discussion of a very different but intriguing state. It is the single particle P=1, L/2string state which appears in figure 1 as the lowest lying S=0 excitation on the ground state of a ferromagnetic chain. 
I close this section with a discussion of a very different but intriguing state. It is the single particle P = 1, L/2-string state which appears in figure 1 as the lowest lying S = 0 excitation on the ground state of a ferromagnetic chain. This is the state [323, 341, 379, 468], which in lambda representation is (72); (73) is the analogous single particle L/2-string for L/2 odd. The explicit (72) and (73) serve as useful templates for initial guesses at larger L and can easily be improved by NR iteration. Oscillations due to odd/even L/2 rapidly decay with increasing L, and I find from an analysis of states to L = 60 that the energy is given by the series (74), where the pi^2 has been inferred from numerical values but is not in doubt. Corresponding inference for the other numerical values in the series (74) has not been successful.

The excitation energy proportional to 1/L implies this state is not two ferromagnetic domains separated by finite width domain walls. Another guess for a classical analog of this state is one in which the chain is cut and the ferromagnetic ground state twisted by 2 pi before reconnection. This state is not topologically distinct from the ground state, but it is a highly degenerate stationary energy state, since the vector defining the 2 pi rotation can have any orientation. In all such states neighbouring spins deviate by a fixed small angle. A quantum analog is constructed from basis states |psi_n>, where the e_n and a_n are chosen to guarantee the orthogonality of the basis. The characteristic (eigenvalue) polynomial of this matrix agrees with that found by Bethe ansatz for all cases considered. The highest energy eigenvalue is that of the single particle L/2-string state, and the spin-spin correlations found from the associated eigenvectors for even L from 4 to 20 are shown in figure 4. The j = 1 correlation C_1 is related to the energy (74) by C_1 = 2 E_L/(3L), while the factor 3 enhancement of the j = 0 correlation C_0 over that of the asymptotic C_1 in figure 4 is the s = 1/2 distinction between the spin length squared s(s + 1) = 3/4 for a single quantum spin and the maximum <s_1 . s_2> = s^2 = 1/4 for distinct (parallel) spins. This obvious quantum effect has no classical analog. The data in figure 4 for L = 20 admits a simple Fourier decomposition.

Table 1. WZW asymptotic parameters epsilon and s, together with the Bethe 1-string hole count h_1, for the lowest energy cusp state picked from every column in figure 1. Each main configuration entry is the L = 20 Bethe (n > 1)-string list; this is followed by a label in parentheses that is the 'apparent' large L string content if there are changes as a result of root interactions. For states labelled by F_nm see the text; for a state designated with +n there are additional cusp states with energies epsilon greater by 4m, m = 1, ..., n.

States of 3 overturned spins for general K

This section confirms the structures (5) and (6) for the case of 3 overturned spins and Bethe string configurations 1^3, 1^1 2^1 and 3^1. The results are coefficient lists reported in 3_overturned_spins.txt, based on the following notation and conventions. The data for a given L starts with the state count list in the form (23).

Figure 4. Spin-spin correlations C_j versus site separation j in the single particle P = 1, L/2-string state for chains of length 4, 6, ..., 20. Lines connecting C_j, 0 <= j <= L/2, at the same L are a guide to the eye. The remaining lines are polynomial in 1/L fits to C_{L/2} for the largest L, and these extrapolate to C_infinity = -0.33(1) as shown. The inset shows the convergence of C_j for L = 20 when only contributions from bases |psi_1> through |psi_n> are kept. The extremes are C_j = delta_{j,0} - delta_{j,L/2} for n = 1 and the exact C_j for n = 126. Intermediate curves are n = 2, 3, 4, 5 and 7.
For K=0, L/6, L/4, L/3 or L/2 the polynomial P x is defined by the single list 'IKS$' but for K an element of block K d with M d =j(L/d)/2 2 elements as described following (28) replaces the single list IKS$   used to define P x K ( ) as in the (29) example. The 3ν K roots of P K (x) define the 3 Bethe wave-vectors for all of the ν K states. These states are represented as the list k$S$ := [[n 1 , n 2 , n 3 ], [n 4 , n 5 , n 6 ], K, [K, n 3 K n ]] where the $ are numerical K and S as before while the |n i | are position pointers to the root list. It is to be understood that the roots x i are arranged in non-decreasing R(x i ) order with the Bethe momentum k n i associated with n i then uniquely given by (26). The list k S $ $  is energy ordered with the energy of each state given by (4). A check is provided by energy polynomial coefficient lists IeKS$   that are the analog of IKS$   but define polynomials P K (E) whose ν K roots are the energies of the states in the kKS$   lists. The analogy between IeKS$   and IKS$   extends to the combining rule (80) that is applicable to both lists. One final list Ihe S $ $   provides the energies for the states generated from Ih S $ $  at K=L/2. The results presented in 3_overturned_spins.txt include all even L, 8L26. The L=8 data, part of which is ) are the roots of the associated polynomials P K (x) of either coefficient list. Only the result following the first equality conforms to that in (6) but the second form with irrational multipliers is more typically found when obtaining the polynomials by PSLQ. Since such different forms give identical roots, supplementary data that is equivalent to (6), as in L=8 above, has been left unchanged. The remaining data in (81) can be used to verify that the roots of P K (E) determined from the list sum Ie S K Ie S 1 1 2cos are the energies calculated from lists kKS1   for K=1 and 3 using (4). The maximum L=26 exceeds the L≈21.86 critical value where (19) shows the first complex root collision for 2 overturned spins and this allows us to explore more fully string interactions. As a specific example, figure 5 shows how strings in 1 1 2 1 interact and modify bare 2 1 behaviour. The main indicators of interaction are the approximately linear drifts from the marker values and a pronounced level repulsion around the line 2k 1 =k 2 mod 2π. 2-string root collisions are first observed at L=24, a marginal shift from L=22 expected based on (19), for 22 distinct k 1 values. In contrast for the 23rd k 1 , the collision seen in the L=74 inset is suppressed until L=56. For states with root collision expected at L≈61.35 based on (19), the 2-string remains complex in two states with k 1 above the line 2k 1 =k 2 mod 2π at L=74. L=16 singlet states for general K This section provides further confirmation of (5) and (6) but more importantly provides the BAE solutions that serve as templates for the calculations of much longer chains. The notation and conventions follow those in section 5 and start with the S=0, L=16 state count   for K a member of the block K 1 =1, 3, 5, 7 or K 2 =2, 6 the superposition rules of (80) apply. Similarly for the energy polynomial lists IeK .   Solutions are plausibly identified with Bethe string configurations that are partitions of 8 and confirm the counts (45). Energy versus momentum of all states, separated by partition, is shown in figures 6-8. 
L = 16 singlet states for general K

This section provides further confirmation of (5) and (6) but, more importantly, provides the BAE solutions that serve as templates for the calculations of much longer chains. The notation and conventions follow those in section 5 and start with the S = 0, L = 16 state count list; for K a member of the block K_1 = {1, 3, 5, 7} or K_2 = {2, 6} the superposition rules of (80) apply, and similarly for the energy polynomial lists IeK. Solutions are plausibly identified with Bethe string configurations that are partitions of 8 and confirm the counts (45). Energy versus momentum of all states, separated by partition, is shown in figures 6-8.

Of particular note are cusp states, defined as those for which all 1-strings occupy adjacent positions with no intervening holes. In the limit of large L these are local energy minima with respect to 1-string excitation and a particularly important set of low energy states, called current excitations by Bortz et al [20]. Many such large L (~1000) solutions have been found by NR and analyzed similarly to that described in the text leading to (70). The results for all cusp states, supplementing the odd epsilon cases from table 1, are shown in figure 9 for epsilon <= 41. The state labelling conforms to that used in figures 6-8. Combinatorial rules that predict the location of states in the L -> infinity limit shown in figure 9 are found to be a simple modification of the standard Bethe rules and are described in the next section. Here I only note that while the Bethe string labelling is an essential component of these rules, a different 'apparent' string labelling is often a much better indicator of the solution rapidities in the complex lambda plane.

Very clear patterns are seen in figure 9, of which the most striking is that all state counts are consistent with products of the (left and right moving) +/-kappa excitation counts appearing in the single row diagonals epsilon = (n - 1)^2 + 2|kappa| terminating at the single n-string values at kappa = 0. This is as expected for the WZW model and is also explored in more detail in the next section, which concludes with a conjecture for the string content of all left and right tower states in the total S = 0 sector. Another observation is that any cusp state associated with WZW spin s contains at least one n-string with n > 2s; this is shown to follow from the string content conjecture.

Low energy S = 0 state counting for L -> infinity

The cusp state examples described in sections 4 and 6 lead naturally to conjectures for the multiplicity of all low lying singlet states as L -> infinity. An important parameter in the cusp state classification is the number of 1-string holes h_1 = 2 sum of (n - 1) p_n (cf (43) and the subsequent discussion), which is necessarily even and fixed by the n-string content for n > 1. Thus h_1 is decoupled from L, and our analysis does not require any specific value of L beyond L even and L >> h_1. The complete list of possible {p_n}_{n>1} = (p_2, p_3, ...) for any h_1 is the list of the partitions of h_1/2 with every integer in a partition incremented by one. For example, for h_1 = 10, the partitions of 5 (1^5, 1^3 2^1, 1^2 3^1, 1^1 2^2, 1^1 4^1, 2^1 3^1, 5^1) after incrementing are the cusp state configurations {p_{n>1}}_10 (2^5, 2^3 3^1, 2^2 4^1, 2^1 3^2, 2^1 5^1, 3^1 4^1, 6^1). Define P~ = sum over n > 1 of p_n and N~ = sum over n > 1 of n p_n, the total number of 'particles' and overturned spins respectively in the (n > 1)-strings.

Figure 6. Energy E versus (quasi) momentum K for Bethe 1^{8-n} n^1 (P = 9 - n) configurations. Only the (n > 1)-string component is used as the plot label. Lines connect upper and lower boundary states for each n as a guide to the eye. For clarity, the K (mod L) for each state has been chosen such that only after including reflection about K = 0 (L/2) for states of even (odd) particle number P will the display be in explicit agreement with the counts (45). Diamonds replace crosses for the ground state (gs) and the cusp states described in the text. Configurations with no 1-strings are also potential cusp states and are marked as squares. The horizontal line of length L (one periodic cell) marks the ferromagnetic energy L/2.
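The transcription of partitions into cusp state configurations is mechanical; the sketch below generates {p_{n>1}} for a given h_1 by incrementing every part of each partition of h_1/2, reproducing the h_1 = 10 list above.

# Cusp state configurations: each part m of a partition of h_1/2 becomes
# one (m+1)-string.
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def cusp_configs(h1):
    return [tuple(m + 1 for m in p) for p in partitions(h1 // 2)]

for cfg in cusp_configs(10):
    print(cfg)   # 7 configurations matching the h_1 = 10 list above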
For the example list, P =5, 4, 3, 3, 2, 2, 1 and in general Ñ = P +h 1 /2 with 1P h 1 /2. Any particular {p n>1 } appears as a distinct cusp state for each division of h 1 into exclusively left h L and right h R holes. In the symmetric case, h L =h R =h 1 /2, there is a trivial generalization of the generator (45) to the cusp state generator 7 . Bethe configuration labelled cusp state asymptotic energy ε from (7) versus momentum κ=K−K c , K c =0(L/2) for L/2 odd(even). States are distinguished by WZW model s=s L =s R =1/2(red), 3/2(green) and 5/2(blue). States at κ>0 are obtained by reflection about κ=0. There are hidden s=3/2 states near κ=0, ε=37; for these see the lower left corner insert which hides the s=1/2 states instead. The data for the lowest excited state shown has been carried to L=16384 in [21]. To efficiently evaluate configuration sums of expressions such as (86) it is useful to first determine amplitudes A i,j , i=1, 2, K, 1ji, which are sums of the products Π [K] q appearing in (85) and (86) subject to the constraints h 1 =2i, P =j (or Ñ =i+j). Endpoint values are A i,1 =A i,i =1 arising from configurations (i+1) 1 and 2 i respectively. Intermediate cases for the partition of 5 list above are is easily verified by noting the Gaussian binomials at q=1 are ordinary binomials after which follows a one to one correspondence between the sums here and Bethe's constrained sums D L S P , , ( )in (52). What are L and P with S=0 in (52) are here h 1 and P respectively as a consequence of our transcription of partitions into the cusp state configurations {p n>1 }. The symmetry A i,j+1 =A i,i-j allows us to restrict our explicit amplitude calculation to A i,i-j with 1j(i-1)/2 in which case every Gaussian binomial product contains some f 2 and with the replacement q→1/q we see all the terms in (93) can be identified with the terms on a single (lowest) diagonal of energy versus momentum in figure 9. / , we meet only the products 2 1 from B 2 0 ( ) times (p 2 , p 3 , K) from B . 2,2 2n The process for getting the independent left and right cusp count generators illustrated by the examples above extends to the general case. Equations (90) and (91) rewritten as } h 1 =4 s+2ν, contributing to the sum (100) are as described in the first paragraph of this section while the product is over those n for which p n is non-vanishing. The Gaussian binomial a b q ⎡ ⎣ ⎢ ⎤ ⎦ ⎥ is to be understood to vanish when a<b which happens when n is its maximum n max (making h n =0, cf (85)) and n max 2 s; this is the basis for the rule that there is no contribution to B q s v s 2 2 2 + ( ) ( ) from a configuration that does not contain at least one string with length greater than 2 s. Conversely, every configuration with n max >2 s contributes; for a proof it suffices to show h n −2s+n−1 0 for every n2 s in (100). Now h n =2∑ m>n (m−n)p m 2(n max −n) n max −n+1>2s−n+1 which is the required result. Agreement between (98) and (100) has been confirmed numerically to high order. Many regularities are observed in the solutions (98) and (100). Of particular note, the terms c κ q κ in B q s s 2 2 2 n + ( ) The formulas for C (1) and C (0) =1+xC (1) can be derived using the A i,j sum rule in the discussion following which corresponds to ordinary multiplication and division (p′ 2 , p′ 3 , K)(p 2 , p 3 , K)/2 1 =(p 2 +p′ 2 -1, p 3 +p′ 3 , K). 
Two possibilities for the 3^1 product are required to correctly generate all terms in (A.1), with case c1 arising from the replacement 3^1 -> 2^2 that maintains h_1 = 4, as noted in the remarks following (95). Consider first the generation of the corresponding B(q) generator.
12,530
2019-02-11T00:00:00.000
[ "Physics" ]
A Componentizable Server-Side Framework for Building Remote and Virtual Laboratories

Currently, virtual/remote laboratories are often built to improve learning and research capabilities in some areas of knowledge. Generally these virtual/remote laboratories are built from scratch again and again, instead of reusing software and hardware infrastructures. This paper presents a new framework, RVLab, to help developers build flexible and robust server-side virtual and remote laboratories quickly. RVLab provides support for the basic requirements of these systems, such as user management and the reservation of resources (instruments and devices). Unlike other lab systems, RVLab adapts to the devices and instruments of any real laboratory through a secure and robust mechanism that allows the remote execution of lab programs. Moreover, it improves user interaction with real labs by providing real-time visualization of experiments and lab instruments, by means of the control of video cameras placed in the lab and the transmission of video streaming at different quality levels to users.

I. INTRODUCTION AND BACKGROUND

In the university environment, especially in technical and scientific degrees, it is usually necessary to manage several sophisticated lab instruments for student training, in many cases with restricted use due to the high costs associated with the deployment, startup and maintenance of these systems. A frequent strategy to improve the utilization of these expensive labs, reducing the overall costs at the same time, is the inclusion of (some kind of) remote support or simulation capability. Such inclusions do not significantly affect the effectiveness and educational objectives achieved by a physical laboratory [1]. However, the integration of hardware and software technologies into a remote or virtual laboratory requires a very specific design, in general not reusable for other labs.

Many confusing terms, such as on-line, web-based, remote and distance, are employed to describe and identify a remote laboratory in contrast with the traditional one. Eick [2], for example, takes the term web lab to highlight the web nature of the tools and technologies used, while Garcia-Zubia et al. [3] emphasize the distance between the laboratory and the computer used to manage the lab. However, the system architecture of both proposals is very similar. Couteur et al. [4] carried out a study analyzing the terms used to characterize a remote lab in a wide set of papers from the bibliography, and found that the differences basically depend on the available features, the technologies used, the purpose or scientific interest of the lab and, finally, the functionality or activities that can be performed with the lab.

In order to avoid confusion and misunderstanding [5], we define a laboratory computing system (LCS) as the set of hardware-software components and equipment necessary to perform research, experiments, or scientific and technical work. An LCS can be considered a Virtual Lab when the lab carries out simulations or a partial emulation of the equipment available in a real lab. Instead, a Remote Lab (RL) refers to the set of hardware-software components which allows the access and control of real instruments from any location in the world through a network (e.g. the Internet). Therefore a virtual/remote lab (VRL) denotes a combined LCS which can operate with both simulation and real instruments. Similar definitions are found in other papers [6].
Some examples of LCS are the following: RLs for measuring instruments [7][8], VRLs for programming microcontrollers [1], RLs for networking [9], and general RLs [10][11]. An inspection of the features and capabilities of VRLs reveals that there are some basic services and applications shared by all LCS, such as the management of users and lab resources, lab reservation, and the scheduling, control and monitoring of experiments or experimental sessions with the laboratory system. Most of these applications are developed specifically for each LCS, instead of developing generic components that can be integrated into the laboratory system, which makes the reusability of an LCS difficult [12].

Despite the advances in software and hardware platforms (or infrastructures), there are some aspects of LCS still to be improved. A rich user experience with the lab in learning environments requires continuous feedback from students in order to reinforce the concepts learned in experimental sessions. An active way to achieve that reinforcement is by enabling interaction with the instructor and other students during lab sessions [13], for example with the use of collaborative tools such as discussion boards, an online conferencing system or a concurrent live chat. Another option is adding video-camera management in order to improve the perception of user interaction with the lab. The laboratory must organize the data generated during an experimental session (experimental data, measurements, video streaming, images, etc.) and then transmit it to the users in order to give them the perception of a real interaction with the lab. Couteur [4] pointed out that "Seeing is believing - using a camera to see the experiment is important to the student", to emphasize that real-time visualization of the laboratory with cameras is an important data source to be managed by the LCS. Thus, effective coordination and synchronization should be performed over the heterogeneous data sources provided to the user, rather than through individual unrelated components such as a system with a virtual desktop [14] or a webcam used only with VNC [5].

The framework RVLab is proposed to implement the server side of LCS systems, controlling real and virtual labs remotely in an easy and free fashion. RVLab provides a set of components adaptable to any LCS in two ways. Firstly, it delivers a set of common basic pluggable modules for the general management (users, lab resources, reservation time) of any LCS, deployed on a server not necessarily coupled with the LCS. The server hides the location of lab resources from users, improving the security of the system, and provides a common endpoint for the startup of client applications. Secondly, RVLab includes an extensible component, the Instrument Management Subsystem, for handling specific data sources (experimental data or images) from physical instruments and injecting commands from clients to instruments. RVLab supplies a set of communication channels adaptable to several communication protocols between the instruments and the clients. Furthermore, RVLab acts as a bridge between instruments and clients, monitoring and supervising the data and commands between counterparts. The mechanism is very flexible and non-intrusive, in the sense that it leaves to developers the choice of how client applications handle the instruments of a VRL lab system.
Therefore, developers must program the client applications within the context of the physical lab, and RVLab extends their remote execution by routing the data and commands through secure communication channels.

The rest of the paper is organized as follows. Section 2 presents the architecture and components of RVLab. Section 3 describes, step by step, how to add new instruments to RVLab and how an instrument communicates with clients. Section 4 details the proposed method to add new cameras and how the system can be adapted to store user data. Section 5 explains a case study of how RVLab is applied in a practical domain. Section 6 presents the performance evaluation of RVLab achieved in the case study. Section 7 describes some related works. Finally, Section 8 presents the conclusions of the paper.

II. ARCHITECTURE

RVLab is a componentizable framework for building online lab systems, remote and virtual, adaptable to any instruments of the lab, reducing the implementation tasks to only those concerning the instrument to be virtualized or made remotely accessible. RVLab produces lab systems with a three-tier architecture, as shown in figure 1. Each tier represents a logical piece of the lab system, placed on a computer or device in the network, with a specific responsibility with respect to the overall system. For instance, instrument-side applications are in charge of virtualizing the real instruments of the lab and giving server-side applications an interface to access and control the instruments. Server-side applications arrange data from instruments and hide the management of instrument resources from client applications. Finally, client-side applications are responsible for showing to a user (or several users) the state of the controlled environment where the experiment takes place, based on the data obtained from the instruments, together with the corresponding user interface panels that allow the supervision and control of the lab system. The insertion of a server-side tier improves the security of the lab system, hiding the physical address of all instrument resources. In contrast, exposing any instrument resource directly to the user (e.g. a webcam [5]) can be dangerous with respect to security and should be avoided [10][11].

RVLab facilitates the design of lab systems by providing a set of server-side pluggable components which simplify the management of the lab system. These components (figure 2) can be selected and parameterized at runtime using program configuration. By default RVLab supports the following modules:

a) User Management Subsystem. It checks the user's identity and controls his or her permission levels for any lab resource. This subsystem can manage an individual user or multiple users organized in groups. The user list can be stored in XML files or in a database, depending on the number of users to be controlled. RVLab can execute scripts for the creation, modification or deletion of multiple users in a secure way. The scripts can be stored on a special secure ftp and are executed by the admin user.

b) Camera Management Subsystem. This is a basic parameterized component of RVLab to remotely install, control and configure the cameras placed in a physical laboratory. It is also responsible for capturing a video signal of the environment and the further transmission of video streaming at different quality levels to users. RVLab natively includes support for commercial IP cameras from Axis and Vivotek, both fixed and PTZ.
In contrast to direct access to an IP camera (not recommended in a secure lab), RVLab relays the video stream transparently, removing even the user limit usually imposed by some IP cameras to control the video-streaming bandwidth. c) Time Reservation Subsystem. It provides a common flexible procedure to reserve and assign equitable access time to the laboratory (and lab resources). A user can reserve an instrument for the time necessary for his or her experiment, and he or she can know the time limits to carry out the experiment and the current reservations. RVLab then schedules the lab time in a FIFO queue. d) Instrument Management Subsystem. RVLab implements a flexible and adaptable module to directly administer the instruments or instrument-based devices. The instruments can be implemented by developers using their preferred programming language, distributed paradigm or network protocol. However, some restrictions and rules must be satisfied in order for them to be controlled by the lab system. e) Lab Resources Management Subsystem. This module is responsible for administering any other lab resources, such as manuals, tutorials, and so on, in general those necessary for the training of users. Lab resources in some cases can also be user logs, reservation lists or stored partial results. RVLab includes a common way to deal with lab resources, enabling commands to upload them to the server, to download them, or to assign usage time and user groups. Each lab resource uploaded by a user is stored on a secure FTP server after checking the nature of the uploaded files. f) Chat Subsystem. RVLab includes a concurrent live chat system to provide active interaction between users (e.g., student-student and instructor-student in an education domain). The subsystem opens a socket for each user once they have registered, in order to broadcast all messages to the users' open sessions. Figure 2 shows the deployment diagram of the system architecture for an online lab system, indicating the three main blocks, client-side, server-side and instrument-side, and how they are connected. RVLab adds pluggable server-side components into the server, which decouples the instruments from the clients, using the same or different communication protocols, and provides secure access between clients and instrument resources. RVLab does not provide code for the instrument-side and client-side blocks, only for the server-side block, leaving to developers the task of implementing the remaining blocks. Furthermore, RVLab components impose a model to manage instruments with some slight constraints (examined in the next section) and provide an interface with a well-defined API to help the coding of client-side applications. Client-side applications can access the server by using XML-RPC [15], a distributed paradigm based on remote procedure calls that uses XML as data format and HTTP as transport protocol. XML-RPC was selected because it defines a simple RPC mechanism over the usual HTTP transport, which is language and platform independent; there are multiple libraries in many programming languages such as C, C++, Java or PHP, covering the requirements of any developer. RVLab provides an API to server-side components in terms of commands to be invoked by client-side applications through XML-RPC; Table I shows a list of these commands, but developers may add more commands for their instrument components (see next section). In designing RVLab we focused on three main aspects that became design goals: versatility/reusability, security and instrument ubiquity.
One of the major shortcomings of a VRL or an online lab system is reusability [12]. Most VRLs are developed for a specific type of laboratory instrument, requiring a new development from scratch for every new lab system. Instead of reinventing the wheel in the development of a lab system, reusing common components gives developers a reusable base that reduces software effort and cost, delivering a new lab system in a shorter time. Besides reusability, versatility is a strongly desirable feature for any real laboratory, and also a key factor in the design of the RVLab framework. Sommerville recommends the use of several techniques to ensure the reusability of a designed program [16], listed in Table II. RVLab makes use of some of them to improve the code reusability of its server-side components. A set of design patterns [17], such as the factory pattern, singleton and observer, are applied in RVLab in order to exploit the capability of adapting the framework to any instruments and cameras and to extend the capabilities of its block components. In addition, RVLab is implemented in Qt [18], a cross-platform application framework widely used on multiple platforms, augmenting in this way the portability of the code. RVLab is completely configurable with configuration files and parameterized to change its behavior. Security is another important factor in an RVL lab system in order to keep safe the devices, instruments, lab resources and users of the laboratory. RVLab applies security policies at two levels. First, it manages and routes all data and command connections using SSL, checking the user and permissions on every request. Second, the RVLab architecture benefits security by exposing only one access endpoint for lab clients and hiding all the devices, instruments and cameras, which could be connected to the same LAN or to several LANs.

Table II. Recommendations for reusability [16] (approach: description):
- Design patterns: generic abstractions that occur across applications are represented as design patterns that show abstract and concrete objects and interactions.
- Component-based development: systems are developed by integrating components (collections of objects) that conform to component-model standards.
- Application frameworks: collections of abstract and concrete classes that can be adapted and extended to create application systems.
- Legacy system wrapping: legacy systems can be "wrapped" by defining a set of interfaces and providing access to these legacy systems through these interfaces.
- Service-oriented systems: systems are developed by linking shared services that may be externally provided.
- Application product lines: an application type is generalized around a common architecture so that it can be adapted in different ways for different customers.
- COTS integration: systems are developed by integrating existing application systems (COTS: commercial off-the-shelf).
- Configurable vertical applications: a generic system is designed so that it can be configured to the needs of specific system customers.
- Program libraries: class and function libraries implementing commonly used abstractions are available for reuse.
- Program generators: a generator system embeds knowledge of a particular type of application and can generate systems or system fragments in that domain.
- Aspect-oriented software development: shared components are woven into an application at different places when the program is compiled.
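As a small illustration of one of the design patterns mentioned above, the following self-contained C++ sketch shows how the observer pattern can decouple an instrument data source from the client sessions that consume its updates, which is the kind of extensibility the paper attributes to its block components. The class names InstrumentChannel and ClientSession are our own illustrative choices, not part of RVLab's API:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Observer interface: anything that wants updates from an instrument.
class Observer {
public:
    virtual ~Observer() = default;
    virtual void notify(const std::string& data) = 0;
};

// Subject: an instrument channel that broadcasts new samples to observers.
class InstrumentChannel {
public:
    void attach(Observer* o) { observers_.push_back(o); }
    void publish(const std::string& sample) {
        for (Observer* o : observers_) o->notify(sample);
    }
private:
    std::vector<Observer*> observers_;
};

// A client session that simply prints what it receives.
class ClientSession : public Observer {
public:
    explicit ClientSession(std::string name) : name_(std::move(name)) {}
    void notify(const std::string& data) override {
        std::cout << name_ << " received: " << data << '\n';
    }
private:
    std::string name_;
};

int main() {
    InstrumentChannel channel;
    ClientSession a("session-A"), b("session-B");
    channel.attach(&a);
    channel.attach(&b);
    channel.publish("temperature=21.3");
    return 0;
}
```

New consumers (a logger, a chat broadcast, a video overlay) can then be attached without modifying the instrument-side code, which matches the stated goal of extending the block components.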
A common weakness found in many laboratories [5][9], in our opinion, is the separation of the lab management applications across different technologies, e.g., an IP camera for the control and visualization of the environment (sometimes accessed directly) and a web tool based on Moodle for user and content management. This technological disaggregation complicates the lab configuration and the protection against security attacks. RVLab, however, provides an effective and unified mechanism to cope with the technological diversity required for developing lab tools and applications. Although communication connections outside RVLab's control are possible, they are not recommended; RVLab exposes a set of secure communication channels in order to apply security policies with the XML-RPC protocol. In addition, the model favors the reconfiguration of laboratory applications without requiring any notification to the user client application. Instruments and cameras can be anywhere, and they can be connected to the server using several internet protocols or any other communication protocol. The server-side components hide the instruments and cameras and route the commands invoked by client-side applications.

Insertion of an instrument into a lab system

An RVL or online lab system developed with the support of RVLab must contain components for the management of users, cameras, time reservations, lab resources and instruments. The last one, the instruments, is the system-dependent part of any lab system. RVLab gives complete freedom in its design and coding with respect to the programming language, distributed paradigm and communication protocol, but some rules and restrictions are imposed that must be satisfied by the component to be added to the lab system. RVLab applies some design patterns to simplify the way a developer builds a new instrument into the lab system. For instance, the abstract factory pattern facilitates the adaptation of any instrument to the RVLab model in order to manage it. To design a component for an instrument with RVLab we must distinguish several stages: a) implement the instrument component; b) register the instrument component in the server; c) prepare a configuration file so that an instrument manager can dynamically load the instrument component; d) design an API for the new instrument component. First we need to implement the instrument component as a derived class which inherits from the InstrumentResource abstract class and can be managed by the Instrument Management Subsystem. The derived class must implement the instrument, which involves communicating with the instrument device or with the software that controls the instrument, using either a specific communication protocol specified by the instrument or any other protocol defined by the developer. In our case studies we have used TCP-based communication protocols. Figure 4 shows the logical representation of any instrument as a subclass of InstrumentResource. In order to simplify the management of InstrumentResource-derived classes, a dynamic subclass registering mechanism has been implemented. This mechanism allows the concrete subclass of the abstract factory to be selected dynamically at runtime from a set of registered subclasses. The dynamic subclass registering mechanism stores in a map an id of the subclass and a pointer to a constructor method for objects of that subclass.
The static method InstrumentResource::registerSubClass can be called from anywhere and allows registering a new subclass. For example: InstrumentResource::registerSubClass("injector", Injector::createInstance); This call stores in the map the id "injector" together with the function pointer Injector::createInstance, a method that returns a new instance of the Injector class. Using this mechanism, developers can register the potential instruments to be managed by the framework. The InstrumentManager class is responsible for storing all the controlled instrument objects in the system, which in general are read from a configuration file. By using the same ids in the configuration files as in the registered classes, the instrument management subsystem can create new instrument objects. It is therefore important that the ids included in the configuration file match the ids of the classes already registered; otherwise the instrument management subsystem throws an error and the instrument is not instantiated. An example of a section of the configuration file is shown in the following code: <instrument> <id>injector</id> <name>Injector PSD/3</name> <number>2</number> <groupInstrumentID>1</groupInstrumentID> </instrument> In the above code there are several parameters. InstrumentManager is responsible for reading all the parameters in the XML file and instantiates the new class with these data. The constructor of the new class must accept an array of QObjects. Another advantage of our registering mechanism is that the constructor of new classes can accept on-the-fly a variable array of parameters (i.e., using the QObject definition of Qt) instead of a fixed list of parameters. This allows a flexible instantiation of instrument classes, with the requirement of parsing each parameter of the array. InstrumentManager thus reads all the parameters in the XML file, creates an array with the data and calls the constructor of the new class with the array as parameter. Finally, it is necessary to communicate clients with instruments. The instrument objects can create a new API for the instruments through the definition of specific commands or services, which can be invoked via the XML-RPC protocol from client-side applications. In order to register a new service or command, developers only have to register the operation name, the receiver object and the method of the receiver object. An example is: LocalLab::addMethod("executeExperiment", instrument1, "loadAndExecute"); This operation registers a new XML-RPC method with the name "executeExperiment". When a client launches a request with the executeExperiment id, RVLab will call the method loadAndExecute on the object instrument1. The invocation of executeExperiment by a client must supply the same number of parameters with the same types; otherwise the command is not accepted. The above procedure uses a request-response mechanism based on the RPC paradigm to route the commands from the client-side applications, blocking the receiver object until a response is transmitted to the client. However, sometimes an asynchronous mechanism is required to notify the client of data or any type of event. In these cases, RVLab allows secure sockets to be opened to the client to send a data stream or to communicate the client and the instrument directly. It is also possible to use the sockets to route the commands directly to the instrument.
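To make the registration mechanism concrete, the following minimal C++ sketch shows how a map from string ids to constructor function pointers can implement the dynamic subclass registration described above. It is an illustrative reconstruction rather than RVLab's actual source: the names InstrumentResource, registerSubClass, Injector and createInstance follow the paper's own examples, while the signatures, the std::map-based registry and the error handling are our assumptions (the real framework builds on Qt and passes an array of QObjects to the constructors):

```cpp
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Abstract base class for every instrument managed by the framework.
class InstrumentResource {
public:
    virtual ~InstrumentResource() = default;
    virtual void execute(const std::string& command) = 0;

    // Factory function type: builds an instance of a concrete subclass.
    using Creator = std::unique_ptr<InstrumentResource> (*)();

    // Register a subclass under an id; called once per concrete class.
    static void registerSubClass(const std::string& id, Creator creator) {
        registry()[id] = creator;
    }

    // Instantiate a registered subclass by id (as read from the XML file).
    static std::unique_ptr<InstrumentResource> create(const std::string& id) {
        auto it = registry().find(id);
        if (it == registry().end()) {
            // Mirrors the behavior described above: unknown ids fail.
            std::cerr << "unknown instrument id: " << id << '\n';
            return nullptr;
        }
        return it->second();
    }

private:
    // Map from subclass id to constructor function pointer
    // (a function-local static, i.e., a singleton registry).
    static std::map<std::string, Creator>& registry() {
        static std::map<std::string, Creator> r;
        return r;
    }
};

// Example concrete instrument, following the paper's "injector" example.
class Injector : public InstrumentResource {
public:
    static std::unique_ptr<InstrumentResource> createInstance() {
        return std::make_unique<Injector>();
    }
    void execute(const std::string& command) override {
        std::cout << "Injector received: " << command << '\n';
    }
};

int main() {
    InstrumentResource::registerSubClass("injector", Injector::createInstance);
    auto instrument = InstrumentResource::create("injector");
    if (instrument) instrument->execute("loadAndExecute");
    return 0;
}
```

The function-local static registry is an instance of the singleton pattern mentioned earlier, and keeping all ids in one place makes it straightforward to validate the <id> entries read from the XML configuration file against the registered classes. The same id-to-callable map idea extends naturally to the XML-RPC method table built with addMethod.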
In this case, the proxy component of RVLab can additionally be used to adapt these commands between different communication protocols. RVLab can synchronize several video sources when they are sent using RTSP or RTP, by using the timestamps included in the protocol. RVLab also supports other protocols such as HTTP; however, synchronization is then not available, because these protocols do not carry the metadata needed to synchronize. III. STUDY CASE: DOMOLAB RVLab has been used for the development of an online lab system in order to verify the possibilities of this framework. The support of RVLab provides the base for the implementation of the server-side application, but, in addition, it has been necessary to implement the instrument-side and client-side applications. Domolab is an online lab system for learning the principles of concurrent, embedded and real-time programming using a house-scaled model as a didactic tool [19]. The house-scaled model is equipped with a microcontroller of 8, 16 or 32 bits and plenty of sensors and actuators to simulate home-automation systems in a home environment. Initially, the house-scaled model was placed in a laboratory and was used directly by computer science students to test their programs. In order to implement their programs, the students must use the framework JavaES (Java Embedded Systems) [20]. JavaES is a Java-based middleware that abstracts the hardware interaction with sensors and actuators. It thus provides a simplified way to implement Java programs that access hardware devices independently of the microcontroller used. Domolab gives the students the opportunity to test their programs directly from their homes, without being in the lab. Domolab has a robust self-reset system loaded onto the microcontroller that admits the upload of user programs using a remote Domolab client-side program. This is achieved with the support of the RVLab framework. An operational server-side program is executed using the default pluggable components, giving control of users, lab resources, time reservations, and the microcontroller onto which the user program will be uploaded. Once the user program is loaded into the microcontroller through the server, its execution is monitored, providing data directly to the user. The user can see what happens in the remote house model through the visualization of the laboratory with three IP cameras, which can also be controlled by the user. Two static cameras and one PTZ camera are placed in the physical lab, allowing users to see the consequences of their programs in the remote house model (e.g., switching on a light) through the video stream transmitted to the user. For the uploading of a user program into the microcontroller, a script on the server temporarily stores the user program on a secure FTP server (vsftpd) for that user, before the Instrument Management Subsystem sends it to the microcontroller of the house-scaled model. Figure 5 shows the user interface of the remote Domolab client-side application through which the user operates the lab system. It is implemented completely using commands from the API of the server-side components, invoked through XML-RPC. Therefore, both the instrument-side and client-side applications had to be developed. IV. PERFORMANCE EVALUATION A preliminary performance evaluation was performed on the server-side components supported by RVLab for the Domolab lab system. The system has been tested with forty users connected at a time. In the testing session, a server with two Intel Xeon Quad Core E5405 @ 2 GHz, 4 GB RAM and a 100 Mbit/s connection was used.
To control the experiment a virtual machine with four cores and 512 MB RAM was used. The experiment consists of analyzing the registration of one hundred and fifty users, belonging to four different groups, at the same time on the lab server. In this case there were twenty reservations available, fourteen personal ones and five group ones. During the testing phase there were two valid reservations, a personal one from 11:00 to 12:00 and a group one from 12:00 to 13:00. All the measurements were made from 11:39 to 12:19. At 12:19 we began to disconnect the computers in the last stage. When there were forty clients connected, we sent numerous turn requests to the server, which is the most expensive task, since each request needs to check all the reservations. First we carried out a measurement of the CPU rate and its variation with respect to time and connected clients. Figure 7 shows that the CPU rate has little variance over time, because most of the computation is the re-encoding of the three video streams taken by the cameras. Figure 8 measures the memory consumption in the same time frame. As can be seen, the memory consumption is very stable, since the memory load is mainly due to the reading of XML files and, especially, to the re-encoding of the video streams. In our case, the encoding of three simultaneous streams requires approximately 95% of the 512 MB of RAM; this measure includes the rest of the OS processes. Therefore, the burden of the connected clients is minimal, because their number has little impact on the use of main memory. Another critical parameter for simultaneous user connections is the bandwidth, as shown in Figure 9, which compares data sending and data reception on the server. The received data (in blue) show little variation around 1.37 Mb/s, because the data received from the cameras arrive constantly with a similar bitrate and do not depend on the number of connected users. In contrast, the red graph shows the data sent to users. As shown in Figure 9, the mean value of the sent data is 36.89 Mb/s, 27 times greater than the received data. The value increases when the number of connected users grows and decreases at the end of the experiment when the users disconnect. The reason is that when the number of users grows, the video stream must be sent to more users. V. RELATED WORK Ranaldo et al. [21] presented a similar flexible system that manages both the real-time visualization of the measurement instrumentation and the data flows concerning experiments on real measurement instrumentation. However, RVLab manages any instrument, not only electronic devices, in a unified way, without requiring a VPN connection or a Windows environment. Moreover, RVLab can be adapted to LabVIEW, but it is not dependent on any software or hardware platform. Harward et al. have designed the iLab Shared Architecture, which is used in several universities around the world [10][22][23]. This system provides a scalable, uniform platform to access lab systems. iLab and Ranaldo et al. [21] are focused on working with LabVIEW, which provides good interfaces for working with several instruments in the engineering and electronics areas. However, there are several areas where LabVIEW is not used, and it does not offer support for video streaming. VI.
CONCLUSION In conclusion, the high costs associated with specific instruments require the adoption of software and hardware infrastructures that simplify the development of laboratories and maximize their use at the same time. RVLab is implemented with the premise that any laboratory infrastructure (including hardware and software) should be sufficiently flexible and versatile to adapt to any specific instrument without losing performance. The Instrument Management Subsystem of RVLab offers developers a robust and secure mechanism that is adaptable and extensible to any instrument of a laboratory. In addition, RVLab allows the remote control of a set of cameras (IP cameras and USB cameras) and the synchronized transmission of video streams to users using efficient lightweight codecs (included in RVLab). The difference from other proposals is that RVLab natively includes support for camera and video management based on open network tools, instead of working with independent devices. RVLab has been applied to the development of two RV laboratories with good performance with respect to the number of connected users and a moderate consumption of resources. The implementation was relatively quick, requiring the coding of only a few classes.
6,990.2
2012-12-05T00:00:00.000
[ "Computer Science", "Engineering" ]
MALAT1/miR-1-3p-mediated BRF2 expression promotes HCC progression via inhibiting the LKB1/AMPK signaling pathway Background The long non-coding RNA metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) has been reported to play a vital role in the occurrence and development of various tumors. However, the underlying mechanism of MALAT1 in hepatocellular carcinoma (HCC) has not been thoroughly elucidated. Methods The expression levels of MALAT1 in HCC tissues and different cell lines were detected by qRT-PCR. Antisense oligonucleotide (ASO)-MALAT1-transfected cells were used to explore the biological effects of MALAT1 in HCC cells by cell counting kit-8 (CCK-8), colony formation, transwell, wound healing, and flow cytometry analyses. Western blotting was performed to measure AMPK and apoptosis-related protein levels. Dual-luciferase reporter assays were performed to verify the relationship between MALAT1 and its specific targets. Results We found that MALAT1 was upregulated in HCC, and MALAT1 knockdown in HCC cells inhibited cell proliferation, migration, and invasion and promoted apoptosis in vitro. Further studies demonstrated that MALAT1 positively regulated the expression of transcription factor II B-related factor 2 (BRF2), which was associated with tumor recurrence, large tumor size, and poor prognosis in HCC. Mechanistically, MALAT1 was found to act as a competitive endogenous RNA to sponge hsa-miR-1-3p, which upregulated BRF2 expression. Knockdown of BRF2 inhibited the progression of HCC by activating the LKB1/AMPK signaling pathway. Overexpression of BRF2 reversed the inhibitory effect of MALAT1 knockdown on HCC cell viability. Moreover, ASO targeting MALAT1 inhibited the growth of xenograft tumors. Conclusions Our results demonstrate a novel MALAT1/miR-1-3p/BRF2/LKB1/AMPK regulatory axis in HCC, which may provide new molecular therapeutic targets for HCC in the future. Supplementary Information The online version contains supplementary material available at 10.1186/s12935-023-03034-1. Introduction Hepatocellular carcinoma (HCC) is one of the most prevalent cancers and one of the main causes of cancer-related death worldwide [1,2]. The treatment of HCC remains challenging and is largely predicated on early diagnosis. Therefore, exploring the pathogenesis of HCC and identifying new targets for HCC are urgently needed for its clinical treatment.
Long noncoding RNAs (lncRNAs) are a class of single-strand RNAs with a minimum length of 200 bases that generally do not encode proteins [3,4]. LncRNAs have diverse biological functions, including the regulation of gene expression at the level of transcription, RNA stabilization, and translation [5,6]. Accumulating evidence has indicated that lncRNAs contribute to the pathogenesis and development of human malignant tumors, with roles in cell proliferation, migration, metastasis, invasion, and differentiation, by functioning as oncogenes or tumor suppressors [7][8][9]. The lncRNA metastasis-associated in lung adenocarcinoma transcript 1 (MALAT1), also known as nuclear enriched abundant transcript 2 (NEAT2), was originally identified as one of the most prominently overexpressed transcripts in metastatic non-small cell lung cancer tissues [10]. Previous studies have shown that MALAT1 plays an important role as an oncogenic molecule in cancers. For example, up-regulation of MALAT1 has been shown to be associated with tumor invasion and metastasis in colorectal cancer, prostate cancer, and lung cancer [11][12][13]. Abnormal expression of MALAT1 in ovarian cancer is associated with tumor invasion and poor prognosis [14,15]. The biological mechanism of MALAT1 has not been fully elucidated in HCC. As an RNA polymerase III core transcription factor, transcription factor II B (TFIIB)-related factor 2 (BRF2) is located on TFIIB and involved in RNA polymerase III recruitment and transcription initiation [16]. RNA polymerase III expression contributes to the regulation of biosynthetic functions for cell survival, and dysregulation of RNA polymerase III-mediated transcription caused by the up-regulation of BRF2 expression may lead to uncontrolled cell growth, which is directly linked to cancer cell proliferation [17,18]. Recent studies have shown that BRF2 is overexpressed in various solid cancers, including lung cancer, breast cancer, and esophageal squamous cell cancer [19][20][21][22]. However, the function and exact mechanism of BRF2 in HCC progression are still unclear. Antisense oligonucleotides (ASO), which are 20 to 30 nucleotides in length, block the functions of RNAs (including miRNAs) by highly complementary sequence matching [23]. Compared with RNA interference technologies that function in the cytoplasm, such as siRNA, ASO has certain advantages in exerting effects in the nucleus [24]. Exciting advances have been made with ASO-based therapies in genetically related diseases including cancer [25,26], highlighting the potential for ASO therapies to provide benefits to patients.
In this study, we investigated other potential mechanisms of MALAT1 in HCC. We conducted an examination of MALAT1 expression in cancer tissues obtained from HCC patients. Additionally, we investigated the effects of MALAT1 on the proliferation and apoptosis of HCC cells. We found that MALAT1 upregulated the expression of BRF2, which was an independent predictor of prognosis in HCC patients. BRF2 knockdown inhibited cell proliferation and promoted cell apoptosis of HCC cells. We also analyzed the binding sites of hsa-miR-1-3p with MALAT1 and BRF2 using bioinformatics methods. We hypothesized that MALAT1 functioned as a competitive endogenous RNA (ceRNA) to sponge hsa-miR-1-3p, which upregulates BRF2 expression. Knockdown of BRF2 inhibited the progression of HCC by activating the LKB1/AMPK signaling pathway. Overexpression of BRF2 reversed the inhibitory effect of MALAT1 knockdown on HCC cell viability and LKB1/AMPK activation. Importantly, ASO targeting MALAT1 was effective in inhibiting HCC tumor growth in vivo. Here we report the MALAT1/hsa-miR-1-3p/BRF2/LKB1/AMPK axis in HCC, and these findings can provide new therapeutic targets for HCC patients. Cell lines and cell culture PLC/PRF/5, Huh7, and MHCC97H cells were maintained in high-glucose DMEM (BasalMedia, Shanghai, China) with 10% fetal bovine serum (LONSERA, Shanghai Shuangru Biology Science and Technology Co., Ltd). Hep3B cells were maintained in MEM-α supplemented with 10% fetal bovine serum. All cells were cultured in a humidified incubator at 37 ℃ with 5% CO2. Media were replaced every other day. Tissue collection, follow-up and tissue microarray We obtained 41 HCC samples and matched normal tissues from Qilu Hospital of Shandong University (Jinan, China). Our study was approved by the Ethics Committee of Qilu Hospital of Shandong University. A tissue microarray (TMA) was constructed by Shanghai Outdo Biotech Company using archival specimens from 200 anonymous HCC patients. The follow-up process and the analysis of clinicopathological information were performed following a previous study [27]. Quantitative reverse-transcription polymerase chain reaction (qRT-PCR) Total RNA was isolated from HCC cell lines and tumor cells with TRIzol (Invitrogen, USA). RNA was reverse transcribed into cDNA using a reverse transcription kit (Vazyme, China). Quantitative real-time PCR was performed on the Bio-Rad CFX Connect (Bio-Rad Laboratories, USA) using SYBR Premix Ex Taq (Takara, China). qRT-PCR analysis was performed as described in a previous study [28]. Primer sequences are listed in Supplementary Table 1. Cell Counting Kit-8 (CCK-8) assay HCC cells were cultured in 96-well plates at a density of 2,000 cells/well, and 10 µL of CCK-8 (Solarbio, China) solution was added to each well after the plates had been incubated for an appropriate time. The culture plates were then incubated for 1 h, and the absorbance at 450 nm was measured with a microplate reader. Transwell migration and invasion assays Transwell chambers (24-well, JET BIOFIL, China) were used for transwell assays. In brief, 3 × 10^4 transfected HCC cells were seeded into the upper chambers with 100 µl serum-free DMEM or MEM-α. Medium containing 10% fetal bovine serum (600 µl) was added into the lower chamber. After 36 h, the cells that had migrated or invaded were fixed with formaldehyde for 20 min and stained with 0.1% crystal violet for 2 h. The cells were photographed under a microscope and counted.
Western blot, immunohistochemistry, and immunofluorescence The western blot, immunohistochemistry, and immunofluorescence analyses were performed following a previous study [29]. The primary antibodies used in these analyses are shown in Supplementary Table 2. Flow cytometry analysis The Annexin V-FITC/PI apoptosis detection kit (Vazyme, China) was used to analyze cell apoptosis. Cells were collected 48 h after transfection, washed twice with phosphate-buffered saline (PBS) and suspended in 500 µl binding buffer. Next, 5 µL Annexin V-FITC and 5 µL propidium iodide (PI) were added to the cell suspension. The cells were incubated in the dark at room temperature for 15 min, and the late and early apoptotic rates were measured by flow cytometry (BD Calibur, USA). Dual-luciferase reporter assay Wild-type and mutant MALAT1 and BRF2 sequences were cloned into the firefly luciferase reporter vector pmirGLO (GenePharma, China). The pmirGLO-MALAT1-WT, BRF2-WT, MALAT1-MUT or BRF2-MUT construct was co-transfected with hsa-miR-1-3p or control mimics into HCC cells. At 48 h after transfection, the cells were lysed, and the luciferase activity was determined following the instructions of the dual-luciferase reporter assay kit (Promega, China). RNA sequencing (RNA-seq) and RNA-seq data analysis RNA-seq and RNA-seq data analysis were performed following a previous study [27]. The accession number of the RNA-seq raw data in the GEO database is GSE239394. Xenograft mouse models All animal experiments were performed with the approval of the Ethics Committee of our hospital. Five-week-old male athymic BALB/c nude mice (Charles River, China) were maintained in a specific pathogen-free environment. Huh7 or Hep3B cells (8.0 × 10^6 cells/mouse) were injected into the right flanks of the mice. When the tumor size reached 100 mm³, the mice were randomly divided into two groups with six mice in each group. Intratumor injections of MALAT1 or control ASO were given every three days for a total of six times (5 nmol ASO in 100 µl sterile PBS). After five weeks, tumor specimens were collected for further analysis. Statistical analysis Statistical analysis was performed using GraphPad Prism 8.0 (GraphPad Software, USA) and IBM SPSS Statistics 25 (SPSS, Inc., Chicago, IL, USA). Data are expressed as mean ± SD. Student's t-test was used to analyze differences between two groups, and the chi-square test was used to analyze categorical variables. Survival curves for overall survival (OS) and recurrence-free survival (RFS) were analyzed by the Kaplan-Meier method and evaluated using the log-rank test. Independent factors affecting prognosis were analyzed by multivariate Cox proportional hazards regression. p values < 0.05 indicated statistical significance. MALAT1 expression is increased in HCC Analysis of The Cancer Genome Atlas (TCGA) database revealed that MALAT1 was upregulated in many types of cancer tissues, including HCC, compared with normal tissues (Fig. 1a). We collected 41 HCC tissues and paired normal tissues and confirmed that MALAT1 levels were higher in tumor tissue than in normal tissue in most HCC patients (Fig. 1b). We also detected the mRNA expression level of MALAT1 in various human HCC cell lines and found that the expression level of MALAT1 in Huh7 and Hep3B cells was higher than that in the MHCC97H and PLC/PRF/5 cell lines (Fig. 1c).
MALAT1 promotes proliferation, migration, and invasion and reduces apoptosis of HCC cells To explore the biological role of MALAT1 in HCC, we conducted a series of loss-of-function experiments. We designed an ASO targeting MALAT1 (ASO-MALAT1) and a negative control (ASO-NC) and transfected them into Huh7 and Hep3B cells for cell proliferation, colony formation, wound healing, transwell, and apoptosis assays. Knockdown efficiency was evaluated by qRT-PCR. Compared with the ASO-NC group, the ASO-MALAT1 group showed markedly reduced expression of MALAT1 in Huh7 and Hep3B cells (Fig. 2a, b). CCK-8 assays were performed to assess the proliferative ability of HCC cells. We found that knockdown of MALAT1 inhibited the proliferation ability of HCC cells (Fig. 2c, d). Transwell assays showed that MALAT1 knockdown inhibited the migration and invasion abilities of HCC cells (Fig. 2e, f). Flow cytometric analysis indicated that MALAT1 knockdown promoted apoptosis of HCC cells (Fig. 2g). To further confirm the effect of MALAT1 on apoptosis, western blotting was used to detect the expression of apoptosis-related proteins after MALAT1 knockdown. MALAT1 knockdown in both Huh7 and Hep3B cells increased the protein expression of Bax, cleaved caspase-3, and cleaved caspase-9, while the protein expression of BCL-2 was decreased (Fig. 2h). Taken together, these results indicate that MALAT1 plays a key role in regulating the proliferation, migration, invasion, and apoptosis of HCC cells. MALAT1 regulates the expression of BRF2, which is an independent predictor of HCC prognosis To explore the molecular mechanism of MALAT1 in HCC, we screened target genes associated with MALAT1 (i.e., highly expressed in HCC tumor tissues and positively associated with poor prognosis in HCC patients) from the TCGA database and identified the BRF2 gene. We found that BRF2 was significantly correlated with MALAT1 expression (Fig. 3a). Both the TCGA database and qRT-PCR results showed that BRF2 expression was up-regulated in HCC (Fig. 3b, c). qRT-PCR results also confirmed a significant positive correlation between MALAT1 and BRF2 expression in HCC tissues (Fig. 3d). Furthermore, MALAT1 knockdown resulted in downregulation of BRF2 expression in Huh7 and Hep3B cells (Fig. 3e, f), indicating a regulatory relationship between MALAT1 and BRF2. These results were verified by IF staining for BRF2 (Fig. 3g). Kaplan-Meier curve analysis showed that the mRNA level of BRF2 negatively correlated with OS and RFS in HCC tissues from TCGA (Fig. 3h, i). Moreover, IHC of the TMA indicated that the expression level of BRF2 in HCC tissues was also higher than that in adjacent normal liver tissues (Fig. 3j, k). Kaplan-Meier survival analysis revealed that patients with high BRF2 expression had worse OS and RFS than those with low BRF2 expression (Fig. 3l, m). Analysis of BRF2 expression and the clinical characteristics of the TMA indicated a statistical correlation between the BRF2 expression level and two clinicopathological features (tumor size and recurrence) (Table 1). Furthermore, the Cox proportional hazards model confirmed that BRF2 expression was an independent predictor of OS and RFS in HCC patients (Supplementary Tables 3 and 4). Overall, these findings suggested that MALAT1 regulates the expression of BRF2 and that BRF2 might be a valuable prognostic predictor in HCC.
BRF2 promotes the proliferation, migration, and invasion and prevents apoptosis of HCC cells To determine the biological function of BRF2 in HCC, we knocked down BRF2 expression by transfecting Huh7 and Hep3B cells with two independent siRNAs. qRT-PCR results showed that the two siRNAs effectively reduced BRF2 expression in both cell lines by at least 60% (Fig. 4a, b). Using CCK-8 and clonogenic assays, we found that BRF2 knockdown significantly inhibited the proliferation and clonogenic abilities of Huh7 and Hep3B cells (Fig. 4c-e). Transwell and wound healing assays demonstrated that BRF2 knockdown inhibited the migration and invasion abilities of HCC cells (Fig. 4f-h). Flow cytometry analysis and western blot showed that knockdown of BRF2 also promoted the apoptosis of HCC cell lines (Fig. 4i, j). We continued to investigate the regulatory mechanism among hsa-miR-1-3p, BRF2 and MALAT1. While the mRNA and protein expression of BRF2 were decreased after MALAT1 knockdown in Huh7 and Hep3B cells, the hsa-miR-1-3p inhibitor significantly reversed the reduction of BRF2 (Fig. 5j). Collectively, our data suggested that MALAT1 functioned as a competitive endogenous RNA (ceRNA) to sponge hsa-miR-1-3p, which upregulates BRF2 expression. The oncogenic effect of BRF2 in HCC may involve inhibition of the LKB1/AMPK signaling pathway To explore the potential molecular mechanisms underlying the effect of BRF2 on HCC cells, we performed RNA-seq on Huh7 cells after BRF2 knockdown. We analyzed the RNA-seq results between Huh7-NC and Huh7-siBRF2 cells and identified 2192 differentially expressed genes (DEGs) (fold change ≥ 2 and p < 0.05), including 1991 upregulated and 201 down-regulated DEGs (Fig. 6a). Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis showed that the upregulated DEGs in Huh7 cells were enriched in signaling pathways such as human papillomavirus infection and the AMPK signaling pathway (Fig. 6b). Gene Ontology (GO) functional classifications showed that the enriched terms included transcription regulator activity and ATP binding (Fig. 6c). The KEGG and GO enrichment analysis results for the downregulated DEGs in Huh7 cells are shown in Fig. 6d, e. The RNA-seq results indicated a potential link between BRF2 and the AMPK signaling pathway in HCC cells. AMPK is regulated by various upstream factors, including liver kinase B1 (LKB1), a serine/threonine kinase that phosphorylates and activates AMPK [31]. The LKB1-AMPK signaling pathway has been shown to play roles in metabolism, protein synthesis, mitochondrial homeostasis, control of cell growth, autophagy, and apoptosis [32,33]. Recent studies showed that activation of the LKB1-AMPK-mTOR signaling pathway inhibits the malignant behavior of tumor cells [34,35]. Therefore, we hypothesized that BRF2 might play an oncogenic role by regulating the LKB1-AMPK signaling pathway. We examined the levels of key factors in the LKB1-AMPK signaling pathway after knockdown of BRF2 using qRT-PCR and western blotting. Our results showed that knockdown of BRF2 led to increased expression of LKB1 and p-AMPK proteins and decreased expression of p-mTOR (Fig. 6f, g). These results suggest that BRF2 inhibits the LKB1-AMPK-mTOR pathway in HCC cells. Overexpression of BRF2 abrogated the effects of MALAT1 knockdown We next performed rescue assays by overexpressing BRF2 in MALAT1-knockdown cells. qRT-PCR confirmed the BRF2 mRNA overexpression efficiency (Fig.
7a). Through CCK-8, transwell assays and flow cytometry analysis, we found that BRF2 overexpression significantly rescued the proliferation, migration, invasion and apoptosis of MALAT1-knockdown cells (Fig. 7c-i). Overexpression of BRF2 also rescued the activation of the LKB1-AMPK pathway and the changes in apoptosis-related proteins after MALAT1 knockdown (Fig. 7g). Overall, MALAT1 regulated the LKB1-AMPK pathway by upregulating BRF2, thereby promoting proliferation and inhibiting apoptosis of HCC cells. Unadjusted Western blot images are available in the Supplementary Material. Knockdown of MALAT1 impedes xenograft tumor growth To evaluate the effect of MALAT1 on HCC in vivo, mice were injected with Huh7 or Hep3B cells and then randomly divided into two groups; intratumor injection with ASO-NC or ASO-MALAT1 was performed every four days (Fig. 8a). After six injections, the xenograft tumors injected with ASO-MALAT1 were significantly smaller than those injected with ASO-NC, in both volume and weight (Fig. 8b-d). We also found that the expression of BRF2 was significantly decreased in tumors injected with ASO-MALAT1 (Fig. 8e). Discussion Increasing numbers of studies have shown that the lncRNA MALAT1 plays an important regulatory role in tumor proliferation, invasion, metastasis and drug resistance [36]. Studies in HCC showed that up-regulation of MALAT1 is associated with poor prognosis of patients and may be a biomarker of poor clinical outcome [37,38]. In this study, we demonstrated that MALAT1 was upregulated in HCC tissues. Furthermore, MALAT1 knockdown significantly inhibited the proliferation, migration, invasion and anti-apoptotic ability of HCC cells by down-regulating the expression of BRF2. These results are consistent with previous findings suggesting that MALAT1 functions as an oncogene in HCC. LncRNAs act as endogenous molecular sponges (ceRNAs), competing for the binding of miRNAs and regulating the expression levels of mRNAs. Studies have shown that the ceRNA network plays a major regulatory role in liver cancer and affects the development of HCC [39]. Our results identified a link between MALAT1 and BRF2, and we hypothesized that MALAT1 might act as a ceRNA to regulate BRF2 expression in HCC. Our results provide the first evidence that MALAT1 sponges miR-1-3p to regulate BRF2 expression.
BRF2 has vital functions in the transcriptional regulation of small untranslated RNAs in eukaryotes [40]. BRF2 upregulation has been observed in many types of cancers and is critical in the development and progression of various cancers, including lung cancer, breast cancer, and esophageal squamous cell cancer [19][20][21][22][41]. However, the functions and clinical relevance of BRF2 in HCC have been largely unknown. In our study, bioinformatic analysis showed that BRF2 expression was positively correlated with MALAT1 expression. BRF2 was highly expressed in HCC tumor samples, and a positive relationship was identified between MALAT1 and BRF2 in HCC tumor samples. MALAT1 knockdown reduced BRF2 expression at the mRNA and protein levels. These results indicated that BRF2 is a downstream target of MALAT1 in HCC. We also found that BRF2 expression was positively associated with tumor size, neoplasm recurrence, and poor prognosis in HCC. Down-regulation of BRF2 significantly inhibited the proliferation, colony formation, migration, and invasion abilities of HCC cells and promoted apoptosis. Functional rescue assays demonstrated that overexpression of BRF2 reversed the anticancer effects induced by MALAT1 knockdown, which further confirmed that BRF2 is regulated by MALAT1. We further explored the mechanism by which MALAT1 acts as a ceRNA to regulate BRF2 and, using bioinformatics and dual-luciferase assays, found that hsa-miR-1-3p targets both MALAT1 and BRF2. Hsa-miR-1-3p has been identified as a tumor suppressor in multiple types of cancers, such as HCC, bladder cancer, and prostate cancer [42][43][44]. In HCC cells, hsa-miR-1-3p targets ORC6 and SOX9 to promote apoptosis [42,45]. We found that the expression of hsa-miR-1-3p was reduced in HCC tissue samples, and knockdown of MALAT1 up-regulated the expression of hsa-miR-1-3p in HCC cell lines. Furthermore, hsa-miR-1-3p interacts with and regulates MALAT1 and BRF2, and inhibition of hsa-miR-1-3p partially reversed the reduction of BRF2 expression caused by MALAT1 knockdown. Li et al. reported that MALAT1 sponges hsa-miR-1-3p in esophageal cancer [46]. Similarly, an increase of hsa-miR-1-3p decreased the expression of BRF2. Hence, we concluded that hsa-miR-1-3p is required for the HCC cell proliferation, metastasis and anti-apoptosis induced by the MALAT1-BRF2 axis. Together these results indicate that MALAT1 is involved in the development of HCC by regulating hsa-miR-1-3p and BRF2. LKB1-AMPK signaling regulates a variety of cellular functions including cell metabolism, apoptosis, and autophagy [47]. Previous studies demonstrated that LKB1-AMPK signaling plays a major role in tumor suppression by negatively regulating cancer cell metabolism and proliferation [48][49][50]. For instance, Li et al. reported that tankyrase inhibitors regulated metabolic homeostasis and inhibited tumorigenesis by activating the LKB1-AMPK signaling pathway [34]. In this study, we found that both MALAT1 and BRF2 knockdown activated the LKB1-AMPK pathway and promoted cell apoptosis in HCC. Furthermore, BRF2 overexpression reversed the effect of MALAT1 knockdown on the LKB1-AMPK pathway and cell apoptosis. Collectively, our results suggest that MALAT1 knockdown inhibited the proliferation and induced apoptosis of HCC cells via activation of the LKB1-AMPK pathway through downregulation of BRF2.
ASO can bind complementary RNA sequences and recruit ribonuclease H for RNA degradation in vitro and in vivo [51][52][53]. Research on targeted therapy using ASO-based technology has developed rapidly, indicating that ASO is a very promising therapeutic strategy in clinical practice [51,52,54]. Here, ASO-MALAT1 greatly reduced the proliferation of HCC tumors in our xenograft model, and BRF2 expression was decreased in the tumors. Our results support a role for the MALAT1-BRF2 regulatory axis in HCC and suggest that targeted inhibition of MALAT1 by ASO technology may be an effective therapeutic approach to delay HCC progression. Conclusions Our study demonstrated that MALAT1 and BRF2 promoted cell proliferation in HCC, and BRF2 was an independent predictor of prognosis in patients with HCC. The MALAT1/hsa-miR-1-3p/BRF2/LKB1/AMPK regulatory axis plays a crucial role in HCC progression and represents a set of potential therapeutic targets for HCC (Fig. 8f).

Fig. 1 The expression of MALAT1 is increased in hepatocellular carcinoma. (a) According to the TCGA database, MALAT1 is upregulated in many types of cancer tissues, including hepatocellular carcinoma, compared with normal tissues. (b) MALAT1 expression in 41 pairs of HCC tissues and normal tissues was examined by qRT-PCR. (c) The expression of MALAT1 in hepatocellular carcinoma cells was analyzed by qRT-PCR.
Fig. 2 MALAT1 promotes the proliferation, migration, invasion and anti-apoptosis of HCC cells. (a, b) The transfection efficiency of ASO-MALAT1 in Huh7 and Hep3B cells was detected by qRT-PCR. (c, d) After silencing MALAT1, cell viability of Huh7 and Hep3B cells was detected by CCK-8 assay on days 0, 1, 2, 3, 4 and 5. (e, f) Transwell assay was used to detect migration and invasion of hepatocellular carcinoma cells after MALAT1 down-regulation. (g) The apoptosis rate of HCC cells after MALAT1 knockdown was detected by flow cytometry. (h) Western blot analysis was performed to detect the expression levels of apoptosis-related proteins in HCC cells after MALAT1 knockdown. ** P < 0.01, *** P < 0.001.
Fig. 3 MALAT1 regulates the expression of BRF2, which is associated with poor prognosis in HCC patients. (a) The correlation between MALAT1 and BRF2 expression was analyzed from the TCGA database. (b) The TCGA database suggested that the BRF2 mRNA level was higher in tumour tissues than in normal tissues. (c) BRF2 expression in 41 pairs of HCC tissues and normal tissues was examined by qRT-PCR. (d) The correlation between MALAT1 and BRF2 expression was analyzed by qRT-PCR in 41 tissue pairs. (e, f) The expression level of BRF2 was detected by qRT-PCR and western blot after MALAT1 knockdown. (g) IF staining for BRF2 showed the effect of MALAT1 knockdown on BRF2 expression. Scale, 10 μm. (h, i) Kaplan-Meier curve analysis showed that the mRNA level of BRF2 in HCC tissues was negatively correlated with OS and RFS. (j, k) The expression of BRF2 in human HCC specimens was detected by IHC. Scale, 50 μm. (l, m) Kaplan-Meier curve analysis of TMA patients showed that HCC patients with high BRF2 expression had lower OS and RFS.
Fig. 4 Knockdown of BRF2 inhibits HCC progression. (a, b) The transfection efficiency of si1/2-BRF2 in Huh7 and Hep3B cells was detected by qRT-PCR and western blot. (c, d) After silencing BRF2, cell viability of Huh7 and Hep3B cells was detected by CCK-8 assay on days 0, 1, 2, 3, 4 and 5.
(e) Colony formation assays were used to determine the role of BRF2 in HCC cell colony formation. (f-h) Transwell assay and cell scratch assay were used to detect migration and invasion of hepatocellular carcinoma cells after BRF2 down-regulation. (i) The apoptosis rate of HCC cells after BRF2 knockdown was detected by flow cytometry. (j) Western blot analysis was performed to detect the expression levels of apoptosis-related proteins in HCC cells after BRF2 knockdown. ** P < 0.01, *** P < 0.001.
Fig. 6 The carcinogenic effect of BRF2 may mainly depend on the inhibition of the LKB1/AMPK signaling pathway. (a) The numbers of up-regulated and down-regulated genes in Huh7 cells obtained by comparing the si-BRF2 group and the si-NC group. (b, c) Functional enrichment analysis, including the GO and KEGG pathways, performed on the upregulated DEGs in Huh7-siBRF2. (d, e) Functional enrichment analysis, including the GO and KEGG pathways, performed on the downregulated DEGs in Huh7-siBRF2. (f) The changes of AMPKα-2 mRNA levels after knockdown of BRF2. (g) The changes of the LKB1-AMPK signaling pathway after knockdown of BRF2, detected by Western blotting. ** P < 0.01, *** P < 0.001.
Fig. 7 Overexpression of BRF2 abrogated the effects of MALAT1 knockdown. (a, b) The transfection efficiency of OE-BRF2 in Huh7 and Hep3B cells was detected by qRT-PCR. (c, d) After silencing MALAT1 and overexpressing BRF2, cell viability of Huh7 and Hep3B cells was detected by CCK-8 assay on days 0, 1, 2, 3, 4 and 5. (e, f) Transwell assay was used to detect migration and invasion of HCC cells after MALAT1 down-regulation and overexpression of BRF2. (g) The apoptosis rate of HCC cells after MALAT1 down-regulation and overexpression of BRF2 was detected by flow cytometry. (h) Western blot analysis was performed to detect the expression levels of apoptosis-related proteins and LKB1/AMPK in HCC cells after MALAT1 down-regulation and overexpression of BRF2. ** P < 0.01, *** P < 0.001.
Table 1 Statistics for BRF2 and clinicopathologic features in HCC patients
6,014.4
2023-08-31T00:00:00.000
[ "Medicine", "Biology" ]
Topological structures, spontaneous symmetry breaking and energy spectra in dipole hexagonal lattices The interplay between the special triangular/hexagonal two-dimensional lattice and the long-range dipole-dipole interaction gives rise to topological defects, specifically the vortex, formed by a particular arrangement of the interacting classic dipoles. The nature of such vortices has traditionally been explained on the basis of numerical evidence. Here we propose the emerging formation of vortices as the natural minimum energy configuration of interacting (in-plane) two-dimensional dipoles, based on the mechanism of spontaneous symmetry breaking. As opposed to the quantal case, where spin textures such as skyrmions or bimerons occur due to non-linearities in their Hamiltonian, it is still possible to witness classic topological structures due only to the nature of the dipole-dipole force. We shall present other (new) topological structures for the in-plane honeycomb lattice, as well as for two-dimensional out-of-plane dipoles. These structures will prove to be essential in the minimum energy configurations of three-dimensional simple hexagonal and hexagonal close-packed structures, whose energies in the bulk are obtained for the first time. Motivation Two classic identical dipoles of magnitude $\mu$, $\mathbf{m}_u$ and $\mathbf{m}_v$, localized, respectively, at positions $\mathbf{r}_u$ and $\mathbf{r}_v$, with dipole moments given as

$$\mathbf{m}_u = \mu\,\hat{\mathbf{m}}_u, \qquad \mathbf{m}_v = \mu\,\hat{\mathbf{m}}_v, \tag{1}$$

interact with each other, giving rise to the interaction energy

$$E_{uv} = \frac{C_E\,\mu^{2}}{r_{uv}^{3}}\left[\hat{\mathbf{m}}_u\cdot\hat{\mathbf{m}}_v - 3\,(\hat{\mathbf{m}}_u\cdot\hat{\mathbf{r}}_{uv})(\hat{\mathbf{m}}_v\cdot\hat{\mathbf{r}}_{uv})\right], \tag{2}$$

where $\mathbf{r}_{uv}$ is the vector between the two dipoles $u$ and $v$, $r_{uv} = \|\mathbf{r}_{uv}\|$ is their separation distance, and the constant $C_E$ is either $\mu_0/4\pi$ or $1/4\pi\epsilon_0$ (magnetic or electric, respectively). Energy (2) will be given in units of $C_E\mu^2/a^3$ throughout the present work. The terms in (2) possess a well-defined physical meaning. On the one hand, the first term is essentially the classical counterpart of the Heisenberg exchange interaction, which plays a paramount role in magnetism. On the other hand, the second term favors the alignment between the two dipoles. In a particular configuration of dipoles in equilibrium, there is a non-trivial interplay between the two contributions which, depending on the particular lattice, does not resemble the usual Heisenberg interaction. Furthermore, the Heisenberg Hamiltonian is computed for nearest neighbors. What makes our study especially challenging is that in the case of interaction (2) one must consider all pairs, as opposed to just neighbors as in the usual Heisenberg case, and the coupling constant is not a constant at all, but possesses a long-range nature (going as $1/r^3$). Emerging fields of research such as chiral magnetism and topological spintronics are developed through the study of topologically nontrivial spin textures. These structures are found in a variety of magnetic materials [25][26][27][28][29][30][31][32][33][34], and are potentially interesting for information processing and data storage. As opposed to the classic interaction energy functional form (2), the Hamiltonian for quantum spins, running over nearest-neighbor, next-nearest-neighbor, and next-next-nearest-neighbor sites, contains additional non-linear terms that are responsible for the appearance of magnetic skyrmions in 3D or magnetic bimerons in 2D [35]. Also, these topological magnetized spin textures carry integer topological charges Q. In essence, it is the existence of non-linearities in the quantum Hamiltonian that leads to well-defined topological structures.
In spite of the stark contrast existing between the nature and form of the interactions in quantum and classical systems, one may wonder whether it is still possible to observe special topological structures in classic systems of interacting dipoles. The answer is affirmative in systems with hexagonal symmetry, either in two or three dimensions. To be more specific, we shall focus on the minimum energy configuration of systems of dipoles in the thermodynamic limit, as opposed to the quantum case. The classical extremal energy configuration of a magnetic dipole system (either minimum or maximum) is one of equilibrium, in which no torque should act on any given dipole. Now, the total energy of a system of dipoles in terms of (2) can also be cast in the form of the Hamiltonian

$$H = \frac{1}{2}\sum_{u\neq v} \mathbf{m}_u^{T}\,\hat{A}_{uv}\,\mathbf{m}_v, \qquad \hat{A}_{uv} = \frac{C_E}{r_{uv}^{3}}\left(\mathbb{1} - 3\,\hat{\mathbf{r}}_{uv}\,\hat{\mathbf{r}}_{uv}^{T}\right). \tag{3}$$

The quadratic form (3) will be particularly suitable in the next section.

Luttinger-Tisza and energy decomposition methods

We can obtain the spectrum of a system of identical dipoles using the Luttinger-Tisza method [17] under the assumption that the minimum energy configuration exhibits translational symmetry. If $T(i)$ denotes the points generated from $i$ by the discrete translations belonging to the symmetry group $T$, the mentioned symmetry corresponds to $\mathbf{m}_i = \mathbf{m}_{i'}$ for all $i' \in T(i)$. The system can then be split into identical cells, thus limiting the summation to one single cell, and the energy per dipole can be expressed as the quadratic form (Eqs. (4)-(6))

$$E = \frac{1}{2n}\,\mathbf{m}^{T}\hat{A}\,\mathbf{m},$$

where $\mathbf{m}$ collects the $n$ dipole moments of one cell. The method involves the solution of the eigenvalue problem of the $nd$-dimensional matrix $\hat{A}$ ($d$ is the dimension of the dipole), with $\lambda_k$ being the eigenvalues and $\mathbf{x}_k$ the corresponding (orthogonal) eigenvectors of the system, normalized as $\|\mathbf{x}_k\| = \sqrt{n}$. The final expression for the energy reads

$$E = \frac{1}{2}\sum_k \lambda_k\,a_k^{2},$$

where $a_k$ denotes the components of $\mathbf{m}$ in the basis $\{\mathbf{x}_k\}$. Two conditions must be satisfied for $i = 1\ldots n$, namely $\|\mathbf{m}_i\| = \mu$ and $\sum_{i=1}^{n}\|\mathbf{m}_i\|^{2} = n\mu^{2}$ (the latter being equivalent to $\sum_k a_k^{2} = \mu^{2}$). Within the framework of the Luttinger-Tisza method, these two constraints are known as the strong and the weak conditions, respectively. We can thus obtain the minimum energy per dipole as $E_{\min} = \frac{1}{2}\,\lambda_{\min}\,\mu^{2}$, where $\lambda_{\min}$ denotes the smallest eigenvalue of $\hat{A}$. There exists an alternative to the previous Luttinger-Tisza method, derived by us [36][37][38]. For a certain set of positions $\{\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_M\}$ corresponding to $M$ identical dipoles (all of them with the same magnitude $\mu$), one resorts to a minimization procedure for the total dipole-dipole interaction energy. We chose the well-known simulated annealing [39] (SA) method. This Monte Carlo method is able to escape from local minima in the space of $M$ ($2M$) variables corresponding to 2D (3D). We have checked that even for high values of the total dimensionality of the problem, it returns precise values for the optimal energy. For a finite sample of $N$ dipoles it is then easy to infer the minimum energy configuration. Once it is known, one needs to "grow" the plaquette in the most symmetric way, that is, equally in all possible directions. In the same spirit as the liquid drop model [40], which considers the total nuclear energy as a sum of different contributions, such as the volume or the surface term, we decompose each inferred total energy $E_N$ into a series of functionally different terms depending on $N$, with fixed coefficients to be obtained later on. Therefore, the total energy is decomposed as

$$E_N(N) = V_E\,N + S_E\,N^{1/2} + E_{\partial\Omega}(N) + C. \tag{9}$$

The term $\propto N$ is the volume contribution, in other words, the asymptotic energy per dipole in the thermodynamic limit. That is, $V_E \equiv \lim_{N\to\infty} E_N(N)/N$. The second term in (9) corresponds to the surface contribution.
Similarly, the third term in (9) takes into account those dipoles on the boundary ∂Ω. Finally, there is an N-independent constant contribution. By fitting the numerically obtained total dipole-dipole plaquette energies to the functional form (9), using the so-called Levenberg-Marquardt 41 non-linear regression, we obtain remarkable results. In this fashion, we are able to infer the value of lim N→∞ E_N(N)/N with great precision. The previous overall-optimal-configuration plus sample-growing scheme constitutes an easy-to-implement alternative to Luttinger's because (1) no periodic boundary conditions are imposed, and (2) no computation of Ewald sums (given in terms of involved lattice sums 42) is required for obtaining the minimum energy per dipole in the limit N → ∞. Also, as a general rule in any two-dimensional periodic configuration of dipoles (not necessarily optimal ones), we have discovered that every global rotation of all dipoles by a phase θ is followed by a linear dependency of the total interaction energy on the cosine of twice the phase. That is, E(θ) = a + b cos(2θ). Heuristically, this relation can be obtained as a generalization of the 1D case. Suppose that all dipoles along an infinite straight line are rotated clockwise by an angle θ. The total (minimum) energy per dipole in the thermodynamic limit will be given by E(θ) = (1 − 3cos²θ) ζ(3), where ζ(3) is the so-called Apéry constant. By using the relation between the cosine of an angle and half that angle, we obtain E(θ) = −(1/2) ζ(3) − (3/2) ζ(3) cos(2θ). Two-dimensional structures Hexagonal lattice: out-of-plane dipoles. Let us first consider an infinite hexagonal layer where dipoles can freely rotate out of plane, yet always remain fixed in space. The first state that one could think of is the maximum energy state, which corresponds to all dipoles pointing upwards (or all downwards). This instance was first considered by Danilov et al. 43, where a direct calculation of the total energy per particle in terms of lattice sums was mentioned to be divergent, which it is not. In point of fact, using our energy decomposition method we do obtain a finite value for this energy. Interestingly, the minimum energy configuration when we grow the crystal of dipoles in hexagonal plaquettes corresponds to a special arrangement of dipoles as shown in Fig. 1. Crosses represent dipoles pointing inwards, whereas dots correspond to dipoles coming out of the plane. We call this configuration the "flag" state, due to its resemblance to the South African flag. Notice the special arrangement of the dipoles, which gives this state a point symmetry of 120°. This state is certainly unique from the topological point of view. Also, we notice that it is highly sensitive to the boundary ∂Ω. The transition from the flag state (Fig. 1a) to a new state whose boundary is square-like (Fig. 1b) is also shown. This configuration possesses a higher energy, namely −0.883517 ± 1.852 × 10⁻¹⁰. When the lattice is further distorted to form a parallelogram, we recover 1/6th of the original state in Fig. 1c. It is obvious that the shape of the boundary affects the total energy per dipole all the way to the thermodynamic limit, a feature which is not blurred by finite-size effects. We can currently offer only numerical energy arguments to explain the existence of the flag state, but no physical ones. Notice that the high sensitivity of the flag state to the boundary conditions is a typical feature of topological states in the quantum sense.
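The quoted 1D relation is easy to check numerically: summing the pair energies (2) along a long, uniformly rotated chain should reproduce (1 − 3cos²θ)ζ(3) per dipole. The truncation length in the sketch below is our own choice.

```python
import numpy as np
from scipy.special import zeta

def chain_energy_per_dipole(theta, n_max=2000):
    """Energy per dipole of an infinite chain of identical in-plane dipoles rotated by theta.
    Each dipole interacts with neighbours at distances 1..n_max on both sides (chain along x)."""
    m = np.array([np.cos(theta), np.sin(theta)])
    n = np.arange(1, n_max + 1)
    pair = (m @ m) / n**3 - 3.0 * m[0] * m[0] / n**3   # (1 - 3 cos^2 theta) / n^3
    return pair.sum()   # the 1/2 of the pair sum and the two sides of the chain cancel

theta = 0.7
print(chain_energy_per_dipole(theta), (1 - 3 * np.cos(theta)**2) * zeta(3.0))
```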
In other words, in the quantum realm, it is very difficult to rigorously prove how the boundary ∂Ω affects the robustness of the corresponding enclosed state, and that is no different classically. All that is known to happen is that the flag state is not topologically equivalent to the other two configurations. There is a clear symmetry reduction in going from the flag state to the others, an effect that has energetic consequences only in the square-like confined one, the one in Fig. 1b. It is likely that the first and second configurations (a) and (b) in Fig. 1 tend to the last one (c) in the bulk. Hexagonal lattice: in-plane dipoles. The Luttinger-Tisza method was applied to a system of two-dimensional dipole moments with identical scalar strength located at the sites of an infinite rhombic lattice with an arbitrary rhombicity angle by Brankov and Danchev 44. This two-dimensional cell choice allowed the simple square lattice and the hexagonal one to be studied in a very compact way. Their study was an extension of two previous works by Belobrov et al. 45,46 that considered finite clusters of dipoles, highlighting the existence in the hexagonal case of a "macro-vortex". Brankov and Danchev 44 extended their original work and improved the numerical accuracy of the computation of configuration energies. Although further work on dipole-dipole interacting systems followed 47-55, inspired by the seminal work of Luttinger and Tisza 17, Refs. [44][45][46] constitute the first consistent treatment of the simple square lattice of dipoles and, specifically, of the hexagonal lattice. We shall provide an extended description of the hexagonal case in the present work, increasing the precision of the energy per dipole in the thermodynamic limit by several orders of magnitude. Figure 2 depicts the set of possible states obtained either by employing the Luttinger-Tisza method or by using numerical computations based on symmetry arguments. Each configuration is shown along with its energy per dipole, given with exact decimal digits. Incidentally, the antiferromagnetic states of energies − 2.04746 and 2.96697 are connected by a global rotation in the sense of E(θ) = a + b cos(2θ). Let us consider the case of the minimum possible energy configuration, given by the vortex. The Luttinger-Tisza method provides an energy given by the lattice sum (10), where L_2 stands for the points in the hexagonal lattice. There are means to speed up the computation of lattice sums such as (10) by using Ewald sums. What we would like to stress is that we shall compute the very same value for (10) without the burden of even considering the Luttinger-Tisza method at all, with all the corresponding formalism. We shall, then, show how our energy decomposition method works. Incidentally, we consider a slight deviation of (9) of the form N V_E + N^(1/2) S_E + C, where the boundary term is not accounted for. Recall that we are only interested in the bulk value V_E, and this omission shall not imply any variation in the final result whatsoever. Figure 1. Depiction of the minimum energy configuration for perpendicular dipoles in a single layer, as we distort the lattice on which we grow the dipole crystal. From a hexagonal shape (a), whose ground state is the flag state with energy −0.919515, we obtain another minimum energy configuration where the shape is square-like, as shown in (b).
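A minimal version of the energy decomposition fit, assuming the reduced form E_N ≈ V_E N + S_E √N + C quoted above, could look as follows; scipy's curve_fit (whose "lm" solver is Levenberg–Marquardt) recovers the bulk value V_E. The input energies below are synthetic placeholders; in practice they would be the annealed plaquette energies.

```python
import numpy as np
from scipy.optimize import curve_fit

def decomposition(N, V_E, S_E, C):
    """Reduced energy decomposition: volume + surface + constant terms."""
    return V_E * N + S_E * np.sqrt(N) + C

# Placeholder data: replace with the total energies E_N of the grown hexagonal plaquettes.
N = np.array([7, 19, 37, 61, 91, 127, 169], dtype=float)   # hexagonal centered numbers
E_N = -2.7585 * N + 1.22 * np.sqrt(N) + 3.3                # synthetic example only

(V_E, S_E, C), cov = curve_fit(decomposition, N, E_N, method="lm")
print("bulk energy per dipole V_E =", V_E)   # estimate of lim E_N / N
```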
Once we have obtained several hexagonal clusters of increasing size N their corresponding energies (according to Fig. 2a or 2b), the concomitant non-linear fit returns the values V E = −2.758545 ± 8.223 · 10 −9 , S = 1.22402 ± 7.662 × 10 −6 , C = 3.30666 ± 0.001497 . Extraordinary precise values for E 0 (10) as well as for the other energies are thus obtained in the same fashion without employing the tools required for the Luttinger-Tisza method. Let us focus on the description of the vortex state. First of all, it is not expected to have a peculiar configuration of in-plane dipoles as the minimum energy state, having a considerable energy of − 2.758545. As opposed to the flag state, the vortex state is very much robust and, as far as our numerical computations are concerned, does not depend on the boundary ∂� of the system. The vortex state is depicted in Fig. 3b in great detail. Let us analyze the vortex state from the symmetry point of view. The union of three spirals (red, green and blue) has a third-order axis (rotation on 120 • ) perpendicular to the plane of the lattice. This is a point group of symmetry 3. The symmetry group of the hexagonal lattice contains identical transformation, 2-, 3-and 6th order rotations and six reflections. The identical transformation leads to one spiral, shown in the inset of Fig. 3, and rotations lead to the spiral configuration that forms the vortex state. What is remarkable about the vortex state from the symmetry perspective alone, is that the partition of the hexagonal lattice into spirals is not unique. One can construct the partition on two spirals with the second-order axis (the point group of symmetry 2, rotation of 180 • as in Fig. 3b. Furthermore, one can also obtain the partition of the hexagonal lattice on the central point with 6 spirals with the six-order axis (rotation on 60 • , the point group 6), which constitutes the vortex in Fig. 3a. Of course, all these groups are subgroups of the symmetry group of the hexagonal lattice. The vortex structure and the onset of spontaneous symmetry breaking. Despite the previous description, the question still remains: why a (macro)vortex? The works 45,46 managed to answer the question for finite clusters of dipoles, based on surface magnetic charge arguments. However, when dealing with the corresponding N → ∞ , a more intuitive reason arises: "The structure of a dipole system should be sensitive to the symmetry of the nearest-neighbor environment". However, this claim is rather vague. One possible explanation could be the following. Let us assume that the minimum energy configuration for low temperatures is the (in-plane) ferromagnetic one in the bulk. The original Hamiltonian (3) has an O(2) dipole rotation symmetry, whilst the direction of the ferromagnetic order is not fixed for N → ∞ . Things, however, change near the edges in finite systems. There there is a clear ferromagnetic direction, and different regions or domains merge from the boundary to the interior, in a sort of inwards dipole "freezing". The previous argument somehow implicitly assumes the six-fold symmetry of the vortex state as inferred from the lattice. There is no reason why ferromagnetic domains should not merge forming a triangular shape, for instance. Again, it is not quite convincing. In essence, the existence of vortices formed by magnetic dipoles have been suggested mainly by numerical simulations. 
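For readers who wish to reproduce the vortex energy, a crude numerical route is to orient every in-plane dipole tangentially to its position vector (our own simplification of the three-spiral macro-vortex described above) on a finite hexagonal cluster and evaluate the total energy (2); growing the cluster and extrapolating as before should approach the quoted bulk value. The central dipole's direction is an arbitrary choice in this sketch.

```python
import numpy as np

def hex_cluster(shells):
    """Sites of a triangular (hexagonal) lattice within a given number of shells."""
    a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
    pts = [i * a1 + j * a2 for i in range(-shells, shells + 1)
           for j in range(-shells, shells + 1) if abs(i + j) <= shells]
    return np.array(pts)

def vortex_energy_per_dipole(pos):
    """Energy per dipole for unit dipoles oriented tangentially (macro-vortex ansatz)."""
    m = np.column_stack((-pos[:, 1], pos[:, 0]))      # 90-degree rotation of the radius vector
    norms = np.linalg.norm(m, axis=1)
    m[norms == 0] = [1.0, 0.0]                        # arbitrary direction for the central dipole
    norms[norms == 0] = 1.0
    m = m / norms[:, None]
    E = 0.0
    for u in range(len(pos)):
        for v in range(u + 1, len(pos)):
            r = pos[v] - pos[u]
            d = np.linalg.norm(r)
            E += m[u] @ m[v] / d**3 - 3.0 * (m[u] @ r) * (m[v] @ r) / d**5
    return E / len(pos)

print(vortex_energy_per_dipole(hex_cluster(10)))
```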
Analytic Taylor expansion of the dipole-dipole interaction has shown 56 that it is able to obtain a local representation in the plane such that the lattice symmetry induces a magnetic anisotropy. This approximation, although purely mathematical in essence, is somehow capable of predicting the existence of vortices in the ground state magnetization. Based on the previous facts, we shall pursue an explanation solely based on physical and symmetry arguments. To such an end, we shall first define the chirality of a set of N dipoles {m i } with respect to an arbitrary, perpendicular axis o as where r i denotes the position of the dipole m i with respect to an axis o. The chirality is obviously a quantity independent of the position of the axis. In the case of the vortex state, we obtain the following result with V 1 = 1.73205 ± 1.833 × 10 −7 , V 2 = 7.79359 ± 2.491 × 10 −4 , and V 3 = 11.4368 ± 0.08078 . Therefore, we shall define the chirality of the system of dipoles in the thermodynamic limit as ξ ≡ lim N→∞ ξ N /N 3 , which is equal to ±1.73205 , depending on the vortex pointing clockwise or counter-clockwise. The magnetization of the vortex state is per se rather intriguing. The three embedded spirals conform an overall state with a nonzero total dipole moment M. Since the number of dipoles N goes as 1 + 3n(n − 1) (formula for hexagonal centered numbers) for each added layer n, and M is easily found to be (2n − 1)µ , we thus have the explicit dependence Therefore the magnetization per dipole of this state goes as 1/ √ N , as opposed to the expected 1/N behavior in the thermodynamic limit. Notice that in the rest of states depicted in Fig. 2, M is either 0 or o(1/N). The (11) www.nature.com/scientificreports/ magnetization per dipole could be a potential physical magnitude to be used in order to link the minimum energy configuration of the hexagonal lattice, that is, the vortex, with the rest of the spectrum. However, it does not change continuously and remains practically zero for all practical purposes. That is why we shall consider the aforementioned chirality ξ . In the language of phase transitions, this is going to be our order parameter in the Landau sense, although at zero temperature. In order to explain the appearance of a vortex structure as the minimum energy configuration of interacting dipoles in the hexagonal lattice we shall invoke here a mechanism of spontaneous symmetry breaking. Phases of matter, such as crystals, magnets, and conventional superconductors, as well as simple phase transitions can be described by spontaneous symmetry breaking (See 58,59 and references therein). In the case of crystals as periodic arrays of dipoles, for instance, they are not invariant under all translations (only under a small subset of translations by a lattice vector). The Hamiltonian of our system (3), for in-plane dipoles, has a finite rotational symmetry defined by the symmetry group of the hexagonal lattice plus Z 2 , as well as all the corresponding states depicted in Fig. 2, a symmetry that is further reduced for the ground states, the vortex states. The vortex clearly breaks this rotational symmetry. For out-of-plane dipoles, the Hamiltonian is invariant under O(2) rotations around any perpendicular axis. Although the flag state has 120 • degrees symmetry, it is indeed invariant under O (2), as well as the other out-of-plane states. Plus, the vortex configuration possesses a Z 2 symmetry, that connects configurations with opposite chiralities. 
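The 1/√N scaling of the magnetization per dipole follows directly from the two closed-form expressions quoted above (N = 1 + 3n(n − 1) hexagonal centered numbers and M = (2n − 1)µ); the short check below makes this explicit.

```python
import numpy as np

n = np.arange(2, 2000)           # number of hexagonal layers
N = 1 + 3 * n * (n - 1)          # hexagonal centered numbers (dipoles in the cluster)
M = 2 * n - 1                    # total dipole moment of the vortex state, in units of mu

# If M/N ~ 1/sqrt(N), then (M/N)*sqrt(N) tends to a constant (here 2/sqrt(3) ~ 1.1547):
print((M / N * np.sqrt(N))[[0, 10, 100, -1]])
```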
Notice that Z 2 commutes with the Hamiltonian (3). Numerically, the vortex structure is rather robust, and already occurs for finite instances. It is extremely difficult to prove that the vortex state does not depend on the boundary conditions as we go to the thermodynamic limit, as opposed to the flag state. As previously mentioned, they easily appear in all sort of simulations. This fact does not prove anything by itself. We can only rely on numerical calculations to assert the robustness of the vortex state (recall that this procedure is also the case in the quantum realm). All the previous states, either out-of-or in-plane dipole configurations, has exactly zero chirality except the vortex structure. Thus, we can consider different routes to the spontaneous symmetry breaking that lead to the formation of vortex states. They are shown in detail in Fig. 4. Path III connects the maximum and minimum energies, by a continuously development of the chirality. The same situation happens for path I, linking two states energetically close in relative terms. Path II is rather special for it brings together the flag state and the The spontaneous symmetry breaking mechanism that we propose here use the chirality as the order parameter, which induces the appearance of a special configuration of dipoles having the minimum possible energy. It does not necessarily imply that the shape has to be that of a (macro)vortex, but at least a different one from the set of possible states possessing a clear, differentiated symmetry. The honeycomb lattice: topology of the ground state. In the honeycomb lattice, and following our procedure, a sufficiently big sample of dipoles unveils the disposition of the dipoles for the minimum energy configuration. This structure is shown in Fig. 5, along with the lattice. The minimum energy per dipole in the thermodynamic limit is found to be −2.22691 ± 6.322 × 10 −6 , higher than the one corresponding to the hexagonal lattice. We do not observe a single dipole structure, as in the case of the vortex in the hexagonal lattice, with ferromagnetic domains. Antiferromagnetism is not present either. Instead, the system is formed by an array of local sinks and vortices. For every sink, we have six local vortices that surrounds it. As in all classical ground states with no geometric frustration 57 , it is not highly degenerate (only two) and the total magnetization is zero. Certainly, we cannot possibly have the topology present in skyrmion/bimeron spin textures, yet we can obtain a periodic arrangement of interesting structures such as sinks and vortices. Three-dimensional structures Hexagonal simple lattice. The problem of obtaining the minimum energy per particle in the bulk of a system of interacting dipoles is interesting per se, regardless of the peculiarities of the corresponding dipole topology. Also, to the best of our recollection, it has not been studied previously in the literature. Again, in order to tackle the problem, we use our methodology. After placing N z ∈ [1, 100] layers of hexagonal lattices in parallel, each one of which with side N x = N y = 20 , we conclude that no dipole projection onto the x-y plane occurs. Of course, there are finite-size effects that account for dipoles than can be found not being perfectly perpendicular, but these effects are blurred in the thermodynamic limit. 
In point of fact, if the www.nature.com/scientificreports/ 2D layer retains its hexagonal shape, the final state of the entire column corresponds to layers of flag states one on top of the other. On the contrary, if the 2D layers possess some other shape, such as a parallelogram, the crystal will look like a 1/6-prism of the latter. As in the 2D case, we know that the two vertical arrangements will possess the same energy, the same zero magnetization, but the flag hexagonal lattice dipole crystal will display a particular topology, not present in the 1/6-prism. In both cases, either flag layers or a 1/6-prism, we shall have multi-ferromagnetic-domain structures of dipoles pointing up or down along the z-axis. This is the case for any d value in the rhombic cell. Thus, d shall correspond to the interlayer distance, measured in units of the hexagonal lattice constant a. Of course, there is no preferred value d. Therefore, it is mandatory to perform an exhaustive numerical exploration in order to obtain the minimum energy per dipole in the bulk and for every different d value. The results of the computations are depicted in Fig. 6. The small spheres in Fig. 6 and 7 denote the position of each dipole in the crystal, which could easily be regarded as the position of the nuclear magnetic moment corresponding to the nucleus of a certain species. The representation using spheres also helps to better understand the later comparison between different three-dimensional structures in terms of hard spheres. The total interaction energy monotonically decreases as d decreases, which implies that the system is bounded. The first computed point is for d = √ 2/3a , which corresponds (for later comparison) to the minimum interlayer distance in the case of the hexagonal closed packed lattice, for a = b = 1 . As the interlayer separation increases, the interaction between layers becomes negligible, and we recover asymptotically the expected value for one single layer, namely, − 0.919515. Here we list the computed values for several distances d. Figure 6. Plot of the evolution of the energy per particle in the thermodynamic limit as a function of the vertical cell distance d, for both the flag-state crystal and the 1/6-prism. The horizontal line corresponds to the asymptotic d → ∞ energy per particle value of the flag state. See text for details. Figure 7. Plot of the evolution of the energy per particle in the thermodynamic limit as a function of the vertical cell distance d, for the hcp lattice. The horizontal line corresponds to the asymptotic d → ∞ energy per particle value of the vortex state. See text for details. to a), the hcp lattice is energetically more bounded, a feature which is somehow intuitively expected. Our results for the hcp lattice, with vortex layers, are not to be confused with helical spin configurations in hexagonal crystals 60 . On the one hand, the former occur in simple hexagonal lattices, and on the other, their explanation is due to the a positive interaction between nearest neighbors and a negative interaction with secondnearest neighbors. Therefore, rotations in planar layers occur only if spins are treated classically as dipoles in the Heisenberg exchange Hamiltonian −|J 1 | cos α + |J 2 | cos(2α) , where α is the angle of rotation of the dipoles in the layers forming an helix. Discussion We have described the emergence of the robust vortex state as the minimum energy configuration possible mediated by a mechanism of spontaneous symmetry breaking. 
In a way, since T = 0 , we are describing a classic topological transition driven by the continuous chirality ξ parameter. The real arena where the symmetry breaking should be considered is at finite temperature, and not to employ the internal energy E or U, but the (Helmholtz) free energy F = E − T · S instead (a collection of particles will always seek to minimize its free energy). There, one should take into account that the entropy S clearly scales differently as opposed to the total energy E. Scaling here is referred to the size of the system. In other words, depending on the density of states of the system, the entropy may have a scaling with system size different than the internal energy. Therefore, the ensuing study should become more involved. However, all the present contribution is described at zero temperature. It the temperature was finite, as it was lowered, the entropy would provide a decreasing contribution to the free energy, and the system would at some point fall into another state. This usual temperature-driven analysis has not been implemented here, for again we are treating the system at zero temperature. However, the Landau approach for finite T may not to be much different from what has been presented here. We are confident that, in a finite temperature analysis, our conclusions about the feasibility of the spontaneous symmetry breaking explanation for the existence of vortex would still hold. Energies and configurations have been computed either by recourse to the Luttinger-Tisza method, or by employing our energy decomposition approach, which has been proved successful in other scenarios involving the computation of energies based on the dipole-dipole energy interaction of periodic systems in the thermodynamic limit. The topology of the equilibrium configurations for out-of-plane dipoles has provided a new state, the flag state. We have noticed that it is very sensitive to the boundary conditions (as opposed to the vortex case), much like topological phases in the quantum case. Besides the usual states in the in-plane case for the hexagonal lattices, we have computed the exact minimum energies per dipole for the three-dimensional cases as a function of the interlayer distance. Remarkably, in the simple hexagonal case the layers are formed by flag states, whereas in the hexagonal closed packed configuration, the layers are the vortices themselves. Also, an energy analysis shows that the hexagonal closed packed system is more bound. Summing up, we have presented that, within the limits of the classic dipole-dipole interaction, one can observe non-trivial topological structures that have special interest, in particular the minimum energy vortex configuration. Thus, we showed that particular configurations of dipoles are not exclusive to quantum spin textures such as skyrmions or bimerons, but also occur in the special case of triangular or hexagonal lattices. Data availability The data are available upon reasonable request at jbv276@uib.es.
7,152.6
2021-02-18T00:00:00.000
[ "Physics" ]
Study on electron stochastic motions in the magnetosonic wave field: Test particle simulations Using the test particle simulation method, we investigate the stochastic motion of electrons with an energy of 300 keV in a monochromatic magnetosonic (MS) wave field. This study is motivated by the violation of the quasi-linear theory assumption when strong MS waves (amplitudes up to ~1 nT) are present in the Earth's magnetosphere. First, electron motion can become stochastic when the wave amplitude exceeds a certain threshold. If an electron initially resonates with the MS wave via bounce resonance, the amplitude threshold of electron stochastic motion increases as the bounce resonance order increases, until it peaks at about the 11th order in our study; the amplitude threshold then slowly declines. Further, we find that the coexistence of bounce and Landau resonances between electrons and MS waves significantly reduces the amplitude threshold. In some cases, the electron motion can become stochastic in the field of an MS wave with amplitudes below 1 nT. However, if neither the bounce nor the Landau resonance condition is satisfied initially, the amplitude threshold of stochastic motion shows an increasing trend for lower frequencies and a decreasing trend for higher frequencies, even though the amplitude threshold is always very large (> 5 nT). Our study suggests that electron stochastic motion should also be considered when modeling electron dynamics regulated by intense MS waves in the Earth's magnetosphere. Introduction Magnetosonic (MS) waves, also referred to as equatorial noise or the ion Bernstein mode, are electromagnetic emissions at frequencies between the proton gyrofrequency (fcp) and the lower hybrid resonance frequency (fLHR) in the Earth's magnetosphere (Russell et al., 1970; Gurnett, 1976; Santolík et al., 2002, 2004; Gary et al., 2011; Boardsen et al., 2016). They are usually detected at several harmonics of the local proton gyrofrequency in the time-frequency spectrogram (Balikhin et al., 2015; Boardsen et al., 2018), while intense MS waves can also have a continuous spectrum (Tsurutani et al., 2014). The discrete and continuous nature of MS waves has been well explained by Chen LJ et al. (2016) and Sun JC et al. (2016a, b). Satellite observations indicate that MS waves are confined within a few degrees of the magnetic equator and located at 2 ≤ L-shell ≤ 8 (Perraut et al., 1982; Laakso et al., 1990; Ma QL et al., 2013); they are commonly believed to be excited by ring distributions of ring-current protons (Boardsen et al., 1992; Meredith et al., 2008; Chen LJ et al., 2010; Ma QL et al., 2014; Yu J et al., 2019b). MS waves typically have a very large wave normal angle (WNA), close to 90°, and are nearly linearly polarized (Kasahara et al., 1994; Chen LJ and Thorne, 2012; Min K et al., 2019). The dominant magnetic component of MS waves is along the background magnetic field (i.e., large magnetic compressibility), while the wave electric field is nearly aligned with the wave vector (Boardsen et al., 2016, 2018; Gao XL et al., 2018; Sun JC et al., 2020). Statistical results show that MS waves have average amplitudes of ~50 pT in the inner magnetosphere (Ma QL et al., 2013), whereas very intense MS waves have also been reported with amplitudes up to ~1 nT during substorm activity (Tsurutani et al., 2014; Teng SC et al., 2019).
Energetic electrons trapped in Earth's Van Allen radiation belts typically experience three periodic motions: gyration, bounce, and drift. These three periodic motions are decoupled in the magnetosphere because of their well-separated periods, and are associated with the first, second, and third adiabatic invariants, respectively (Schulz and Lanzerotti, 1974). The violation of these adiabatic invariants can be caused by several plasma waves through resonances with electrons, leading to irreversible, dynamic changes of the electron distribution (Thorne, 2010). In general, these resonant wave-particle interactions can be described in terms of pitch angle and energy diffusion coefficients based on quasi-linear theory, which has been widely employed to study interactions between MS waves and electrons in the magnetosphere (Horne et al., 2007; Li XX et al., 2015; Maldonado and Chen LJ, 2018; Fu S et al., 2019a). MS waves are capable of accelerating electrons with energies between ~10 keV and ~100 keV to relativistic energies via Landau resonance, indicating that they might play an important role in radiation belt dynamics (Horne et al., 2007). Further, MS wave-driven resonant interaction is believed to be a candidate mechanism responsible for the formation of electron butterfly distributions, whose local flux has a minimum value at 90° pitch angle, in the magnetosphere (Xiao FL et al., 2015; Li JX et al., 2016; Ma QL et al., 2016; Maldonado et al., 2016; Ni BB et al., 2018; Yu J et al., 2019a). The motion of a charged particle in a wave field can become stochastic (i.e., chaotic) if the wave amplitude exceeds a certain threshold, resulting in the nonlinear diffusion of particles in pitch angle and energy (Smith and Kaufman, 1978; Lichtenberg and Lieberman, 1992; Chen L et al., 2001; Gates et al., 2001). Theoretical and simulation studies demonstrate that the stochastic motion of charged particles occurs when neighboring resonances overlap in phase space, and the width of the resonant island is mainly controlled by the wave amplitude (Villalón and Burke, 1987; Guo ZH et al., 2008; Lu QM and Chen L, 2009). A Poincare plot, the intersection of an orbit in the state space of a continuous dynamical system with a certain lower-dimensional subspace, is a useful tool to analyze a dynamical system and to approximately determine the amplitude threshold of stochastic motion. Another way to derive the stochastic threshold is Fast Fourier Transform (FFT) analysis of the particle velocity time series, in which the transition between discrete and continuous power spectra appears. Both methods have been widely used to study ion dynamics in the Alfven/ion cyclotron or MHD MS wave field (Chen L et al., 2001; White et al., 2002; Guo ZH et al., 2008; Lu QM and Chen L, 2009; Gao XL et al., 2012). In this paper, we thoroughly study the electron stochastic motion induced by a monochromatic MS wave, which may violate the assumption of quasi-linear theory (QLT), using the fully relativistic test particle simulation method. We give the amplitude thresholds of electron stochastic motion in the MS wave field for various cases and also investigate their dependences on wave and particle parameters. Importantly, the results reveal that the amplitude threshold of stochastic motion for ~100 keV electrons can be less than 1 nT in some cases. This suggests that electron stochastic motion may be induced by MS waves in the Earth's magnetosphere, and therefore the validity of QLT may be questionable when intense MS waves are involved.
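The FFT criterion described above can be applied to any simulated velocity time series: a quasi-periodic orbit yields a few discrete spectral peaks, while a stochastic one yields a broadband spectrum. The sketch below is a generic illustration on synthetic signals, not on the actual simulation output.

```python
import numpy as np

def velocity_spectrum(v, dt):
    """One-sided power spectrum of a velocity time series sampled at step dt."""
    v = np.asarray(v, float) - np.mean(v)
    power = np.abs(np.fft.rfft(v))**2
    freqs = np.fft.rfftfreq(len(v), d=dt)
    return freqs, power

dt, t = 0.01, np.arange(0.0, 200.0, 0.01)
quasi_periodic = np.sin(2 * np.pi * 0.7 * t) + 0.3 * np.sin(2 * np.pi * 1.9 * t)  # discrete peaks
broadband = np.random.default_rng(1).normal(size=t.size)                          # white-noise stand-in

for label, v in [("quasi-periodic", quasi_periodic), ("broadband", broadband)]:
    f, p = velocity_spectrum(v, dt)
    # Fraction of total power in the 20 strongest bins: near 1 for a discrete spectrum,
    # small for a continuous one.
    print(label, round(np.sort(p)[-20:].sum() / p.sum(), 3))
```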
In Section 2, the test particle model is described followed by simulation results in Section 3, concluding with a summary and discussion in Section 4. Simulation Model We adopt a test particle simulation model to study electron mo- tion in the field of a monochromatic MS wave. Here, the background magnetic field is a simplified dipole field without considering the curvature, given as in the Cartesian coordinate system, where z is the distance along the field line from the equatorial plane. is given as a function of magnetic latitude as , and in order to satisfy , we simply require and . The relation between z and is described by in our model, where L is the L-shell value and is the radius of Earth. In this simplified dipole field model, we have neglected the drift motion of electrons by assuming that MS waves are distributed uniformly over magnetic local time, which has also been used in previous works (Tao X et al., 2012;Fu S et al., 2019a). In this model, the chosen simulation domain is located at the magnetic field line of L = 6.8, meaning the trajectory of the test particle is fixed at this field line (i.e., the z-axis). Here the choice of a relatively larger L allows that the bounce resonances between MS waves and electrons can occur for the first few orders (Shprits, 2009). Satellite observations indicate that MS waves are typically detected near the magnetic equator, so MS waves in our model are confined within and have a constant amplitude in this region. The WNA of MS waves is fixed as , and its wave vector lies in the x-z plane. To reduce the transit time effect, the ratio between plasma frequency and electron gyrofrequency ( ) is set relatively higher, i.e., 32. The electrons' motion in our simulation model can be described by the following equations: where is the Lorentz factor, and and q are the rest mass and charge of the electron, respectively. The wave field and at a specific position and time is given by: where is the wave phase, and the wave normal vector k is given by . is the wave frequency, and is the initial wave phase at the equator of L = 6.8. Here, the wave number k, as well as each component of the wave amplitude, i.e., , , , , , and can be determined by combining the cold plasma dispersion relation of MS waves and the Maxwell equation (e.g., Tao X et al., 2012;Tao X and Li X, 2016;Fu S et al., 2016Fu S et al., , 2019b Here, is the WNA of the MS wave, P, S, and D are the Stix parameters, is the refractive index, and c is the light speed. The Boris method (Boris, 1970) is utilized to solve the fully relativistic Lorentz Equation (1), which has been widely used in previous works (Lu QM and Chen L, 2009;Gao XL et al., 2012;Fu S et al., 2019a, b;Cai B et al., 2020;Fu S and Ge YS, 2021). Note that we only use Equation (1) to trace the charged particle motion in the analytical wave field, where wave-particle interactions are nonself-consistent. In each simulation run, we initially set the selected electron at the equator of L = 6.8, and the initial gyro-phase angle (i.e., the angle between and the x-axis) of the electron is fixed as zero (i.e., and ). The time step is set as , and the total time for each simulation case is , where is the unperturbed bounce period of the selected electron. The electron's velocity is normalized by v th , which is the thermal velocity of electrons with a temperature of 1 eV. In this study, we consider both the bounce resonance and Landau resonance between electrons and MS waves. 
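For reference, a compact version of the standard relativistic Boris update used to advance the Lorentz equation is sketched below in normalized units; the field values, normalization and time step in the example are placeholders rather than those of the actual simulation.

```python
import numpy as np

def boris_push(x, u, q_over_m, E, B, dt):
    """One relativistic Boris step.
    x : position, u : gamma*v (proper velocity), E/B : field vectors at (x, t)."""
    c = 1.0                                    # normalized speed of light
    u_minus = u + 0.5 * q_over_m * E * dt      # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
    t_vec = 0.5 * q_over_m * B * dt / gamma    # magnetic rotation vector
    s_vec = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
    u_prime = u_minus + np.cross(u_minus, t_vec)
    u_plus = u_minus + np.cross(u_prime, s_vec)
    u_new = u_plus + 0.5 * q_over_m * E * dt   # second half electric kick
    gamma_new = np.sqrt(1.0 + np.dot(u_new, u_new) / c**2)
    return x + u_new / gamma_new * dt, u_new   # advance position with the updated velocity

# Example: electron-like particle gyrating in a uniform field B = z_hat.
x, u = np.zeros(3), np.array([0.1, 0.0, 0.0])
for _ in range(1000):
    x, u = boris_push(x, u, q_over_m=-1.0, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]), dt=0.05)
print(x, u)
```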
The bounce resonance condition is , where n is the bounce resonance order, and is the electron's bounce frequency that is estimated by (Lyons and Williams, 1984, p. 24). Note that, since the bounce resonance occurs over a finite range of near (n = 1, 2, 3 …), i.e., not necessarily at exact (Chen LJ et al., 2015), the bounce resonance condition can be well-satisfied initially. The Landau resonance condition is given by , where is the parallel wave number and is the parallel velocity of the electron. There exist two methods to exhibit the stochastic motion of the electron in the literature. The first is that the trajectory of the electron in the Poincare map is irregular, while the second is that the frequency spectrum of electron velocity is continuous. Simulation Results We first investigate electron motion in the wave field when the bounce resonance satisfies at the first order. Figure 1 displays the spectrum of the z component of electron velocity as a function of frequency, which is obtained by the Fast Fourier Transform (FFT) time series of from t = 0 to . The parameters of the monochromatic MS wave are listed here: , initial phase at the equatorial plane , and amplitude (a) B w = 0.1 nT, (b) B w = 0.7 nT, and (c) B w = 3.0 nT. The initial equatorial pitch angle and kinetic energy of the selected electrons are and 300 keV, i.e., with the corresponding bounce period and bounce angular frequency . When the wave amplitude is very small (0.1 nT), the spectrum shows several finite power peaks (Figure 1a) where each peak corresponds to one periodic bounce motion of an electron. Here, the change of energy and pitch angle of the electrons caused by MS wave interaction leads to the change of bounce period of the electrons, and the existence of finite power peaks in the spectrum means the electron motion is quasi-periodic. As the amplitude increases, the spectrum of tends to have more and more power peaks (Figure 1b) until it becomes continuous ( Figure 1c). In other words, the motion of the electrons becomes stochastic in the field of MS waves with a sufficiently large amplitude. This method of presenting the stochastic motion of charged particles in a wave field was developed by Lu QM and Chen L (2009). In this case, the amplitude threshold of electron stochastic motion is estimated as 0.7 nT. It is worth mentioning that, although the threshold estimated in this study is not very accurate, the dependences of the amplitude threshold on wave and particle parameters (i.e., the main conclusions in this paper) remain unchanged. The electron stochastic motion is also illustrated by the Poincare map in Figure 2. Figures 2a, 2d, and 2g and Figures 2b, 2e, and 2h give the overview and partially enlarged view, respectively, of the scatterplots of the electron's trajectory in (z, ) space,. The black line in Figure 2a, 2d, or 2g represents the Landau resonance velocity as a function of z. Further, Figure 2c, 2f, and 2i show the Poincare map obtained by recording points when the electron crosses the equator from south to north (i.e., z = 0, and > 0); the para- meters of waves and electrons are the same as those in Figure 1. As expected, when the wave amplitude is small (0.1 nT), the Poincare maps in Figures 2b and 2c only show simple lines, supporting the periodic motion of the electron in this wave field. If the wave amplitude increases to 3 nT, then the trajectory of electrons in the Poincare map becomes irregular and the electron motion is chaotic (Figures 2h and 2i). 
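Constructing the Poincare map described above amounts to recording the particle state at each south-to-north crossing of the equatorial plane. A generic recording helper is sketched below on a toy, slowly modulated bounce-like signal; in practice z(t) and vz(t) would come from the test particle integration (e.g. the Boris push above).

```python
import numpy as np

def equator_crossings(z, vz):
    """Indices where the trajectory crosses z = 0 from south to north (vz > 0)."""
    z, vz = np.asarray(z), np.asarray(vz)
    crossing = (z[:-1] < 0.0) & (z[1:] >= 0.0) & (vz[1:] > 0.0)
    return np.flatnonzero(crossing) + 1

# Toy bounce-like trajectory; the sampled states would be plotted as the Poincare map.
t = np.arange(0.0, 500.0, 0.01)
z = np.sin(0.8 * t) * (1.0 + 0.05 * np.sin(0.07 * t))   # slowly modulated bounce motion
vz = np.gradient(z, t)
idx = equator_crossings(z, vz)
print(len(idx), "south-to-north equator crossings recorded")
```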
Since the bounce resonant scattering effect of MS waves on electrons is sensitive to the initial wave phase (Chen LJ and Bortnik, 2020), the amplitude threshold of electron stochastic motion is also affected by the initial wave phase . Figure 3a shows the scatterplots of by recording points when electrons cross the equator from south to north (i.e., z = 0, and > 0) at different initial wave phase where the MS wave amplitude is fixed as B w = 0.4 nT, while in 3b the amplitude threshold of electron stochastic motion is shown as a function of . The initial pitch angle and en- and 300 keV. The horizontal dashed line represents the initial value of for electrons at the equator of L = 6.8. The maximum variation in caused by the first-order bounce resonance roughly reflects the width of the resonant island. Furthermore, the variation of is found to be dependent on the initial wave phase (Figure 3a), which is also equal to the initial phase difference between the electrons and MS waves in our model (Chen LJ and Bortnik, 2020). According to previous studies (Guo ZH et al., 2008), the stochastic motion of the charged particles can be caused by the overlapping of adjacent resonant islands, suggesting that the larger the resonant islands, the more easily the electron stochastic motion can occur. As shown in Figure 3b, the amplitude threshold of electron stochastic motion has a minimum of ~0.4 nT at . Hereafter, the amplitude threshold of electron stochastic motion in the field of one MS wave is given by the minimum value over different initial phases. Figure 2. (a, b, d, e, g, and h) The scatterplots of electrons' trajectory in (z, ) space. The black line represents Landau resonance velocity as a function of z. (c, f, and i) Poincare map of ( ), which is obtained by recording points when the electron crosses the equator from south to north (i.e., z = 0 and > 0). Here the parameters of wave and electron are the same as those in Figure 1. Earth and Planetary The amplitude threshold also depends on the particle parameters; the dependence of the amplitude threshold on the initial pitch angle of the electron is given in Figure 4. Different colors represent the bounce resonance between MS waves and electrons with different orders of n. In each simulation run, the frequency of MS waves will be chosen to satisfy the bounce resonance condition , and we fix the initial kinetic energy of electrons as 300 keV. The shaded region ( ) where the left and right boundaries of are determined by the Landau resonance at and , is named the Landau resonance region for convenience. If the equatorial pitch angle of the electron falls in this region, then the electrons should experience Landau resonance with the MS waves during their bounce motion. As shown in Figure 4, there is a clear trend that the amplitude threshold of electron stochastic motion is decreasing as the equatorial pitch angle of electrons approaches the Landau resonance region for each bounce resonance order. Interestingly, the amplitude thresholds reach the minimum in the Landau resonance region. These results imply the coexistence of bounce and Landau resonances will facilitate the development of electron stochastic motion. Besides, we find that the amplitude threshold increases proportionally with the bounce resonance order n (n = 1, 2, and 3). The correlation between bounce resonance order n 0 1 2 ϕ 0 (π) ϕ 0 (π) igure 4. 
The amplitude threshold of the electron stochastic motion as a function of initial equatorial pitch angle under bounce resonance with different order n. The shaded region ( ), also named Landau resonance region, is determined by the Landau resonance condition at and . and the amplitude threshold has been thoroughly investigated in Figure 5. α eq0 Figure 5 shows the amplitude threshold of electron stochastic motion at different bounce resonance order n and electron . α eq0 n ≤ 11 Note that the selected three are far away from the Landau resonance region. When MS wave-driven bounce resonance occurs at the low order ( ), the amplitude threshold tends to become larger with the increase of n. However, if the bounce reson- igure 6. (a, c, and e) The scatterplots of in the same format as Figure 3a for bounce resonance order n = 3, 11, and 14, respectively. The wave amplitude is fixed as B w = 5.0 nT. (b, d, and f) The bounce and Landau resonance conditions in the ( , ) plane for the monochromatic MS wave with frequency , , and , respectively. The lines of coded color denote the bounce resonance conditions, while the black solid and dotted lines indicate the Landau resonance conditions at and , respectively. The black plus marks the initial position of the selected electron, whose and are and 300 keV. Earth and Planetary ance order n is over , then an increase of n will reduce the amplitude threshold. To better understand the dependence of the stochastic threshold on bounce resonance order, further detail is given in Figure 6. Figures 6a, 6c, and 6e show the scatterplot of in the same format as Figure 3a for bounce resonance order n = 3, 11, and 14, respectively. The wave amplitude is fixed as B w = 5.0 nT. Figures 6b, 6d, and 6f present the bounce and Landau resonance conditions in the ( , ) plane for MS waves with frequency , , and , respectively. Here, is the unperturbed bounce frequency of the selected electron, whose and are and 300 keV. In Figure 6b, 6d, and 6f, the colored lines denote the bounce resonance conditions, while the black solid and dotted lines indicate the Landau resonance conditions at and , respectively. The black plus marks the initial position of the selected electron in the ( , ) plane. From- Figure 6a to 6c, we find the maximum variation caused by the bounce resonance becomes weaker with the increase of n, but remains almost the same level if n is larger than 11 (Figure 6e). This indicates that the width of the bounce resonance island is significantly reduced with increasing resonance order for lower orders (< 11), but remains nearly unchanged for higher orders (> 11). From Figure 6b, 6d, to 6f, it is clear that the spacing between adjacent bounce resonances in energy is declining with the increase of resonance order, suggesting that the bounce resonance islands are much closer to each other (i.e., the electron energy change caused by the bounce resonance will make the adjacent resonant islands overlap more easily) at higher resonance orders. Since the stochastic motion of electrons should be controlled by the two factors, such as the width of resonance island and the spacing between them, we can speculate that at lower bounce resonance orders, i.e., < 11 for this electron, the decrease in the width of resonance island with the increase of resonance order plays a major role in increasing the amplitude threshold. 
While at higher orders > 11, the width of resonance islands remains nearly unchanged, however the adjacent islands become closer as n increases, which may lead to reduced amplitude threshold. To compare with amplitude threshold under resonance, we have also studied the electron stochastic motion when neither the bounce nor Landau resonance condition is satisfied at the beginning. Figure 7 shows the amplitude threshold as a function of wave frequency for different initial equatorial pitch angles; the initial kinetic energy of the electron is also fixed as 300 keV. First off, the amplitude threshold increases with the wave frequency below , but gradually decreases above , attributed to competition between the width and spacing of resonance islands as discussed above. Moreover, the amplitude thresholds shown in Figure 7 are much larger than those in Figure 5, thus electron stochastic motion is not stimulated easily without the resonant interactions between electrons and MS waves. It is worth noting that, since the high background plasma density condition ( ) is adopted in our model (Li JX et al., 2014), the transit time effect on electron stochastic motion is quite weak. Conclusions and Discussions Using a test particle simulation method, we studied the stochastic motion of 300 keV electrons in the field of a monochromatic MS wave. If the electrons initially satisfy the bounce resonance with the MS wave, the amplitude threshold of electron stochastic motion initially increases as the bounce resonance order increases, until it peaks at the 11th order in our study. The amplitude threshold then begins a slow decline. It is remarkable that the coexistence of bounce and Landau resonances between electrons and MS waves will significantly reduce the amplitude threshold. In some cases, the electrons' motion can become stochastic in the field of MS waves with amplitudes below 1 nT, suggesting that electron stochastic motion can be caused by intense MS waves in the Earth's magnetosphere, violating the assumption of quasi-linear theory. If neither the bounce nor Landau resonance condition is satisfied initially, the amplitude threshold also exhibits an increasing trend for lower frequencies and a decreasing trend for higher frequencies, however the amplitude threshold is always very large (> 5 nT). Previous studies revealed that the motion of ions in the Alfvén/ion cyclotron or MHD MS wave field can become stochastic when the wave amplitude exceeds a certain threshold, such that the trapping widths associated with neighboring resonances overlap. In this study, we confirm that electron stochastic motion can also be caused by the MS wave, i.e., ion Bernstein mode. Especially with the coexistence of bounce and Landau resonance, electron motion can become stochastic in the field of MS waves with amplitudes below 1 nT in the Earth's magnetosphere. Quasi-linear theory (QLT) is a widely used tool to study the interaction between electron and plasma waves in the Earth's radiation belt; we caution that the validity of QLT may be questionable when intense MS waves are involved. Previous studies have shown that hundreds of keV electrons can be sufficiently scattered by MS waves via bounce or Landau resonance (Horne et al., 2007;Li JX et al., 2014;Fu S et al., 2019a). Without any preference, we chose 300 keV electrons to study the motion in an MS wave field. 
For electrons with other energies (e.g., from tens of keV to ~1 MeV), the amplitude threshold of electrons' stochastic motion will change, but their dependences on ave and particle parameters are quite similar. Here, for the hundreds of keV electrons with a large pitch angle (60°-90°), i.e., with a corresponding bounce period of ~1 s, motion in the low-frequency (~10 Hz) MS wave field with large amplitude (~several hundreds of pT) may more likely become stochastic in the Earth's magnetosphere. Previous studies showed that the bounce and Landau resonant scattering effects of MS waves on electrons are also sensitive to WNA (Chen LJ et al., 2015;Lei MD et al., 2017). The WNA of MS waves is usually described as a Gaussian distribution with a central peak at 89° or 89.5°. In this study, the WNA of a monochromatic MS wave is fixed as 89°. If we change the WNA of this wave, the pitch angle range of Landau resonance and amplitude threshold of electron stochastic motion will change, but the amplitude threshold reaches the minimum in the new Landan resonance region. In addition, we focus on equatorial MS waves within , though off-equatorial MS waves have also been reported with amplitudes up to several hundreds of pT (Ni BB et al., 2018). The electron motion in the strong off-equatorial MS wave field can also become stochastic but may have smaller equatorial pitch angles. For convenience and clearance, we only consider the monochromatic MS wave in our model, but the MS waves in the magnetosphere usually exhibit a discrete or continuous spectrum. The electron stochastic motion in various MS spectra is also interesting and may tend to occur more easily, but that is beyond this paper and requires further investigation.
6,084.4
2021-08-02T00:00:00.000
[ "Physics" ]
The possibility of simplified modelling of radiation heat transfer within a steel porous charge The article refers to the problem of calculating the effective thermal conductivity kef of a steel porous charge. In proposed approach for each heat transfer mechanism, which occurs in the considered medium, the corresponding thermal resistance is assigned. The model values of the kef coefficient were determined twice for the same input data (geometric dimensions, emissivity, temperature, thermal conductivity of steel and gas). The difference between successive calculations depended on the use of two different methods to determine the thermal radiation resistance Rrd. In the first approach the resistance Rrd was calculated using the exact method and in the second approach the simplified, approximate method was used. The discrepancy between the obtained results of kef in both approaches provides the evidence to use the approximate method to determine the resistance Rrd. A bundle of square steel sections were used to demonstrate the challenge. In the exact method, data regarding the temperature distribution within a single profile was used. These data were obtained based on experimental research using a guarded hot plate apparatus. The calculations were performed using a temperature range between 200C and 800C for two sections: 6060mm and 8080mm and three emissivities: 0.5, 0.7 and 0.9. Introduction Heat treatment operations of steel products have significant bearing on all key performance metrics of the plant, i.e.: productivity, energy consumption, product quality and emission of pollutants. This causes that heat treatment parameters should be selected very carefully. In the metallurgical industry for control and design of heat treatment parameters special numerical models have been used for more than three decades [1][2][3]. Such models predict spatial and temporal changes in the temperature of the charge. One of the challenges of this prediction is having required knowledge about the thermal properties of the heated charge. This issue becomes particularly complicated when the charge with a porous structure (e.g bundles of long elements or wire coils) is treated. Such kind of charge is a granular medium with the gas phase filling the gaps in between the steel elements. Therefore, during the heat treatment the following complex processes take place simultaneously: conduction in steel, conduction in gas, contact conduction, free convection and thermal radiation. As a result, the effective thermal conductivity kef becomes the key thermal property of the porous charge. This parameter is commonly used in the theory of porous media [4,5]. The effective thermal conductivity of the porous material can be calculated by applying the model which is based on the analysis of thermal resistance for individual modes of heat transfer [6,7]. If the hightemperature process is applied to the above scenario, one of the heat transfer modes to be considered in the kef model is thermal radiation. The thermal resistance of radiation Rrd in this model can be calculated in two ways: exact and approximate. In the exact approach, resistance Rrd is determined using the radiosity method. The main challenge comes down to solving the system of equations. Obtaining a solution by this method requires knowledge on the temperature values of all the surfaces that close the space of radiative heat transfer. 
To obtain information on changes in the temperature distribution within the heated porous charge, experimental investigations are necessary. Therefore, this is a significant drawback of this method. This problem is eliminated in the approximate method as this utilises one simple equation in which only the average temperature is applied. This paper compares the two methods of calculating the effective thermal conductivity of a porous charge. The first, thermal radiation resistance is determined using the exact method, the second the approximated method. The analysis was carried out for the porous charge of a steel square section bundle which is illustrated in Fig. 1. Due to the high porosity of this charge, the fraction of thermal radiation in the total heat transfer is particularly high in this case. Analysis and modelling When analyzing the thermal radiation in the area of the bundle, the process within each profile needs to be considered. This is radiation heat transfer in a threesurface enclosure as illustrated in Fig 2. It is assumed that the temperature of the bottom surface A1 is T1, the temperature of the top surface A2 is T2 and the temperature of the lateral surface A3 is T3 and T1 > T3 > T2. Since the enclosure is square the following relationship takes place: A1 = A2 = 0.5 A3. It is also assumed that the surfaces of the enclosure are opaque, diffuse, grey and its radiative properties are the same and expressed by the emissivity  (1 = 2 = 3). The net heat flux of radiation qrd in this system is [8]: were J1 and J2 represents radiosity of the surfaces A1 and A2 respectively. The radiation resistance Rrd for this system is expressed by the following relationship: Therefore, the challenge in calculating the resistance Rrd in the exact method is to determine the radiosities Ji of all surfaces in the considered enclosure, which are described by the following equations: is surface reflectivity. The emissive power per unit area of each surface Ei is described by the equation [8]: The view factors Fi-j used in equations (3) for the surface layout under consideration have the following values: In order to determine each radiosity, the system of equations (3) comes down to the matrix form: where Xrd is a dimensionless coefficient with the value depending on the emissivity and the shape, as well as the relative position of the surfaces that represent the boundaries for the space [9]. For square enclosure Xrd = 1/. The effective thermal conductivity of the analyzed charge is calculated using the definition of conduction resistance for the flat layer with dimension l [10]: where Rto is the total thermal resistance of the considered medium. For the section bundle the l represents the sum of section dimension in the direction of heat flow and the width of the gap between the individual sections. While resistance Rto is the sum of the section thermal resistance Rst and thermal resistance of the gap Rgp: The methodology for calculating the section thermal resistance Rst is described in the publication [11]. While calculating this parameter, the Rrd resistance needs to be taken into account. Whereas the gap thermal resistance Rgp is calculated by the polynomial [12]: The values of coefficients Bi from this polynomial depend on the surface physical state of the adjacent sections. Results and discussion The calculations presented below were performed for two square sections: 60 mm and 80 mm with the wall thickness of 3 mm and three emissivities: 0.5, 0.7 and 0.9. 
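To illustrate the two routes numerically, the sketch below solves a standard grey-diffuse radiosity system for the three-surface square enclosure and compares the resulting "exact" radiation resistance with a simple linearized mean-temperature estimate, which we use only as a generic stand-in for the paper's approximate formula. The view factors come from the crossed-strings result for a long square cavity, and the temperatures, emissivity and resistance definition are illustrative placeholders, not the values or exact expressions of the paper.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiosities(T, eps, F):
    """Solve J_i = eps_i*sigma*T_i^4 + (1 - eps_i) * sum_j F_ij * J_j for a grey enclosure."""
    Eb = SIGMA * np.asarray(T, float) ** 4
    A = np.eye(len(T)) - (1.0 - np.asarray(eps))[:, None] * F
    return np.linalg.solve(A, eps * Eb)

# Square enclosure: bottom (1), top (2), lateral walls lumped as (3), with A3 = 2*A1.
# View factors from the crossed-strings result for a long square cavity (illustrative only).
F12 = np.sqrt(2.0) - 1.0
F = np.array([[0.0, F12, 1.0 - F12],
              [F12, 0.0, 1.0 - F12],
              [(1.0 - F12) / 2.0, (1.0 - F12) / 2.0, F12]])   # each row sums to 1

eps = np.full(3, 0.7)
T = np.array([773.0, 753.0, 763.0])          # T1 > T3 > T2, placeholder values in K

J = radiosities(T, eps, F)
q_exact = F12 * (J[0] - J[1])                # net 1 -> 2 exchange per unit area (generic form)
R_exact = (T[0] - T[1]) / q_exact            # generic resistance definition, (m^2 K)/W

# Linearized mean-temperature estimate standing in for the approximate method:
Tm = T.mean()
R_approx = 1.0 / (4.0 * SIGMA * Tm**3 * eps.mean())
print(R_exact, R_approx)
```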
The temperatures of the individual section surfaces required for the calculation of the resistance Rrd in the exact method were obtained from experimental research. These measurements were taken while determining the effective thermal conductivity of the section bundles using the guarded hot plate apparatus [13]. During these experiments, temperatures t1 and t2 were captured using thin jacked thermocouples within the selected sections. Measurements points were located on opposite surfaces of the section, perpendicular to the direction of the heat flow. The results of the measurements were used to calculate the temperature differences t = t1 -t2, which corresponds to the mean temperature tm = 0.5(t1 + t2). The values of the parameter t in the function of the temperature tm obtained for the sections 60 and 80mm are shown in Fig. 3. The data presented in Fig. 3 was then approximated by linear regression equations. The following relationships were obtained for each section: The equations (11) were used to determine the temperatures for each surfaces of the section (T1, T2 and T3) which are necessary for calculating the Rrd resistance in the exact method: Fig. 4 presents the results of radiation resistance calculations for the considered scenarios, which were obtained by both methods (exact and approximate). It was observed that the results for both sections using the same method (Figs. 4a and 4b) were very similar with only a very minor difference of 0.1% However, there are significant differences between the exact and the approximate methods for the same sections, with the exact method producing higher values. These differences in the results of both methods depend on the emissivity. When the emissivity decreases greater differences are observed. In order to analyze the deviations in detail, the percentage excess of radiation resistance Rrd were calculated: where Rrd-ex is the resistance obtained using the exact method, whereas Rrd-ap is the resistance obtained in the approximate method. The results of the Rrd parameter obtained for the 80mm section are presented in Fig. 5. It can be noted that this parameter is almost constant for each emissivity in a temperature function. Very similar results were obtained for the 60 mm section. This implies that for a given emissivity the average value of this parameter can be applied. Thus, the difference in Rrd resistance between the exact and the approximate method is reduced linearly with the increase in emissivity, reaching zero for the black body surface. However, for a square enclosure with an emissivity of 0.5, the Rrd-ex value is about 20% greater than the Rrd-ap value. In line with the purpose of this article, it was decided to evaluate how the observed difference in Rrd-ex and Rrd-ap values affects the value of the effective thermal conductivity of the square section bundle. In these calculations it was assumed that the thermal conductivity of steel ks and the thermal conductivity of gas phase kg change with temperature in the following relationships: (15) Equation (14) describes changes in thermal conductivity of low-alloy steel with carbon content of 0.2%, while equation (15) describes changes in thermal conductivity of air. These equations were determined through approximation of the literature data [14,15]. The calculations of the coefficient kef were performed for the same scenarios as for the radiation resistance. 
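The linear regressions (11) and the reconstruction of the surface temperatures t1 and t2 from the mean temperature follow directly from the definitions Δt = t1 − t2 and tm = 0.5(t1 + t2); the numpy sketch below shows the procedure on placeholder measurements, not on the experimental data (the lateral-surface temperature t3 used in the exact method is not covered here).

```python
import numpy as np

# Placeholder measurements: mean section temperature t_m [C] and temperature drop dt [C].
t_m = np.array([200.0, 350.0, 500.0, 650.0, 800.0])
dt = np.array([4.0, 3.4, 2.9, 2.3, 1.8])

slope, intercept = np.polyfit(t_m, dt, 1)        # linear regression dt = slope*t_m + intercept
print(f"dt = {slope:.4f} * t_m + {intercept:.3f}")

def surface_temperatures(tm):
    """Surface temperatures [K] for a given mean temperature tm [C], using the fitted dt(tm)
    together with t1 = tm + dt/2 and t2 = tm - dt/2 (from the definitions of dt and tm)."""
    d = slope * tm + intercept
    return tm + 0.5 * d + 273.15, tm - 0.5 * d + 273.15

print(surface_temperatures(500.0))
```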
To determine the gap thermal resistance Rgp in equation (10), the following values of the Bi coefficients were used: B1 = 2.31·10⁻⁵, B2 = −0.0534, B3 = 58.16.

The calculation results for the effective thermal conductivity of the analyzed bundles are presented in Fig. 7. The obtained kef values for the 60 mm sections range from 3.9 to 8.8 W/(m·K), whereas for the 80 mm sections the kef values range from 3.5 to 11.3 W/(m·K). This coefficient increases with temperature, and the rate of this increase grows with the emissivity. At the same time, higher values were obtained when the radiation resistance Rrd-ap was taken into account in the calculations. In order to demonstrate the influence of the radiation resistance value on the effective thermal conductivity, the percentage difference between the kef values obtained for the two scenarios was calculated. This parameter is defined as follows:

δkef = 100% · (kef-ap − kef-ex)/kef-ex, (16)

where kef-ap denotes the kef value calculated taking into account the resistance Rrd-ap, and kef-ex denotes the kef value calculated taking into account the resistance Rrd-ex. The results of the δkef calculations for each bundle are presented in Fig. 8. The value of δkef for the 60 mm section bundle at a temperature of 200°C was not greater than 1.5%. However, when the temperature increased to 800°C, the obtained values were between 1.7% and 8.7%, with the highest values observed for the emissivity of 0.5; for the emissivity of 0.7 the maximum value was 5.3%. A similar trend was observed for the 80 mm sections: at 200°C the maximum value obtained was 2.6%, whilst at 800°C the values obtained for the successive emissivities were 10.6%, 6.1% and 1.9%, respectively.

The values of the δkef parameter averaged over the whole temperature range for all the analyzed scenarios are summarized in Table 1. As can be seen, using the approximate method to calculate the radiation resistance in the model of the effective thermal conductivity overestimates the kef coefficient, over the entire temperature range, by at most approximately 7%. However, this applies to the emissivity of 0.5, while the surface of steel profiles subjected to heat treatment is usually characterized by an emissivity above 0.7. Therefore, the overestimation of the value of kef resulting from the approximate method will be less than 5%. It can be stated that for industrial needs, the accuracy of calculations at this level is sufficient. In a situation where more accurate calculations of the kef coefficient are required, the radiation resistance can still be calculated using the approximate method, provided that an appropriate correction term is applied which takes into account the linear impact of the emissivity:

Rrd = [Xrd/(4 σ Tm^3)] · [1 + (√2 − 1)(1 − ε)]. (17)

Due to the use of equation (17), the mathematical model of the effective thermal conductivity avoids solving the system of equations (6a), while the results of the calculations are no longer overstated, as was observed with equation (7).
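A minimal sketch of the corrected approximate formula follows. The explicit form of the correction term is our reconstruction from the behaviour reported above (linear in emissivity, zero for a black body, about 20% at ε = 0.5), so the function should be read as an illustration of equation (17) rather than a definitive implementation.

    def radiation_resistance_corrected(Tm, eps):
        """Equation (17) as reconstructed here: the approximate resistance (7)
        multiplied by a correction term linear in emissivity, chosen so that
        the excess vanishes for a black body and is about 20 % at eps = 0.5."""
        correction = 1.0 + (np.sqrt(2.0) - 1.0) * (1.0 - eps)
        return correction * (1.0 / eps) / (4.0 * SIGMA * Tm ** 3)

    # For eps = 0.5, 0.7 and 0.9 the corrected values agree with
    # radiation_resistance_exact to within a fraction of a percent when the
    # temperature difference across the section is small.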
Conclusions

The problem presented in this article is related to the optimization of the heat treatment of square steel profiles heated in the form of bundles. Due to the porous structure of this charge, its basic thermal property is the effective thermal conductivity. This parameter quantifies the ability of the bundle to transmit heat as a result of the complex processes of conduction in steel and air, contact conduction, thermal radiation and free convection.

The calculations of the kef coefficient were performed for two scenarios, which differ in the way the radiation resistance is modelled (exact and approximate). It has been shown that with the approximate method of determining the resistance Rrd, the results of the kef calculations are overstated, on average, by about 5%. In order to eliminate this discrepancy, an updated version of the approximate equation was proposed, in which a corrective term was added. This term takes into account the linear impact of the emissivity on the value of the radiation resistance Rrd.